id: 229481613, source: pes2o/s2orc, version: v3-fos-license
Application of Vector Measure Construction Methods to Estimate Quality of Institutions: Nations in Transition

Purpose: The aim of the article is a multi-criteria assessment of the quality of institutions in the countries of Central and Eastern Europe (CEE) and Central Asia which introduced a market system in their economies in the early 1990s. The analysis was carried out taking into account the division of the analysed countries into two groups: current EU members and countries that do not belong to this group.

Design/Methodology/Approach: The economies under study were analysed along eight selected dimensions of institutional quality. Data from 1995, 2004 and 2019 were collected and compared, distinguishing between the EU members and the other countries. The economies were compared and classified using the Vector Measure Construction Method (VMCM).

Findings: The effects of introducing the major transformation reforms turned out to differ across the analysed countries; the initial conditions of the transformation and the timing of reforms appear to have influenced them. Accession to the EU also had a significant impact on the improvement of the quality of institutions in the member states, leading to a large stratification between the two groups of countries, the EU members and the other analysed countries.

Practical Implications: The VMCM method is dedicated to the study of complex economic processes. This approach allows for ranking and classifying objects and for analysing the dynamics of change. The assessment of the quality of institutions performed here may be helpful for the governments of the surveyed countries, providing a justification for changes in institutional policy, and may support the decisions of investors looking for an appropriate location country.

Originality/Value: Owing to the advantages of VMCM, applying it to the analysis and comparison of the quality of institutions in the surveyed countries made it possible to account for the multidimensionality of the problem and to assess it in greater depth.

Introduction

Economic and social phenomena are usually characterised by multidimensionality. Their essence can be described by many variables, each carrying a certain amount of information, which makes a reasonably comprehensive assessment difficult (Thalassinos and Thalassinos, 2006). This also applies to the quality of institutions, which is determined with the use of many variables characterising the features of institutions or assessing the effects of their performance (Aron, 2000).

The concept of an institution was introduced into the mainstream of economics by new institutional economics. North (1991), the most eminent representative of this school and a Nobel prize winner, defines institutions as the rules of the game in a society or, more formally, the humanly devised constraints that shape and structure political, economic and social human interactions. They shape the subjective mental constructs that individuals use to interpret the world around them and make choices. Moreover, by structuring human interaction in certain ways, formal institutions affect the price we pay for our actions (North, 1991). Institutions, considered as the legal, administrative and customary relationships of repetitive human interactions, form a system of formal rules (defined as to their form and set down in writing by human beings, e.g. legal norms and property rights) and informal rules (not defined as to their form, i.e. customary patterns of behaviour, e.g.
traditions, customs, conventions and standards) (North, 1997; North, Acemoglu, Fukuyama and Rodrik, 2008). Formal institutions are created and set down in writing, and often complement and increase the effectiveness of informal institutions. Informal institutions are embedded in traditional social practices and culture, and can be equally binding. According to Rodrik (2003), institutions refer to the quality of formal and informal socio-political arrangements, from the legal system to broader political institutions, that play an important role in promoting or obstructing economic activity. Relevant institutions, such as secure property rights, legislation, an independent judiciary, appropriate regulatory structures, bureaucratic capacity and others, are needed to structure and enforce agreements and to reduce their uncertainty.

Good institutions, i.e. those creating a structure of incentives that reduces uncertainty and supports effective markets for goods and production factors, contribute to the improvement of economic results. They play a vital role in reducing transaction costs and shaping the appropriate incentives that drive long-run economic growth and development; they create stability within firms and economies, should encourage effort and eliminate errors, facilitate collective action, create order, and facilitate exchange and the management of conflicts (North, 1991; North, 1997; Rodrik, 2002; Bardhan, 2006; Easterly, 2001; Ostrom, 2010; Ostrom, 2014). Differences in institutions can explain differences in economic performance across time and space (North, 1991; Olson, 1996; Hall and Jones, 1999; Rodrik et al., 2004; Acemoglu and Robinson, 2012). In this context, according to Acemoglu et al. (2005; 2012), the distinction between "extractive" and "inclusive" institutions is very important. The former refers to non-democratic political institutions on the one hand and a weak rule of law and the absence of private property rights on the other, while "inclusive" institutions are a web of democratic political institutions, a strong rule of law and the protection of private property for a broad cross-section of society. "Inclusive" institutions are considered one of the fundamental growth factors (the deep roots of growth) that directly or indirectly influence two other determinants, geography and the openness of the economy (Acemoglu et al., 2001; 2002; Rodrik, 2003; Bloch and Tang, 2004; Rodrik et al., 2004; Owen and Weatherston, 2007; Economides and Egger, 2009; Besley and Persson, 2011; Acemoglu and Robinson, 2012). As stated in the more recent literature, formal and informal institutions are also considered an important determinant of growth, quality of life and subjective well-being (Bjørnskov, Dreher, and Fischer, 2010; Helliwell, Huang, and Wang, 2014; Nikolova, 2015; Graafland and Lous, 2018; Graafland, 2020; Roka, 2020).

According to Rodrik (2003), the presence of good institutions cannot be taken for granted in many countries: these institutions would not emerge endogenously and effortlessly as a by-product of economic growth; rather, they are basic preconditions and determinants of growth. Roland (2004), referring to institutions' ability to change, proposed a classification into "slow-moving" and "fast-moving" institutions. Prime examples of slow-moving institutions are values, beliefs and social norms. Fast-moving institutions, such as political and other formal institutions, do not necessarily change often but can change more quickly.
As Acemoglu (2009) emphasises, institutions are social choices and can potentially be reformed so as to achieve better outcomes. While laws and regulations are not directly chosen by individuals and some institutional arrangements may be historically persistent, in the end the laws, policies and regulations under which a society lives are the choices of the members of that society. If the members of the society collectively decide to change them, they are capable of changing them. Such reforms may not be easy, they may encounter a lot of opposition, and often we may not know exactly which reforms will work. Thus, economists of new institutional economics argue that institutions, especially formal ones, can be created and changed quite quickly depending on the needs of a dynamically changing economy. However, the slow pace of change in informal institutions may undermine the effectiveness of changing formal rules, since only formal rules can be introduced top-down, while informal ones change bottom-up and in an evolutionary way. It should also be remembered that identical institutions will not function equally well in every economy, because their functioning is determined by the historical, geographic, cultural and social conditions of those economies.

The systemic transformation of the former socialist countries is perceived as a long process of institutional change (Morawski, 1998; Lisowska, 2004; Godłów-Legiędź, 2005; Nikolova, 2015). It was a sudden, radical change introduced top-down, breaking the continuity of institutions and implementing a new institutional order. These fledgling democracies had to create the legal and institutional fundamentals that underpin democratic and capitalist states, and the importance of institutions for economic performance in the countries in transition was large and increased over time (Grogan and Moers, 2001; Havrylyshyn, 2001; Guriev and Zhuravskaya, 2009; Havrylyshyn et al., 2016). Some of these countries joined the EU, which required adaptation to the institutional conditions in force in the EU. The eastward enlargement of the European Union represents one of the greatest social and economic transformations of modern times. Candidate countries adopt, implement and enforce the EU acquis. This adaptation to the EU acquis is a slow-moving process, and political conditionality creates major obstacles on the road to accession. The institutional change mandated by the EU is very wide, extending both to general areas of the state, such as the judiciary and the state bureaucracy, and to several dozen regulatory fields, as well as to changes in informal institutions (Schimmelfennig and Sedelmeier, 2004; Sedelmeier, 2008; Vachudova, 2009; Bruszt and Lundstedt, 2016). It is also argued that the new member states integrated into the EU system rather smoothly (de Vitte, 2019).

Both the systemic transformation and accession to the EU should contribute to the improvement of the quality of institutions. However, institutions cannot be directly observed or measured. Instead, proxies are used to estimate their quality, and various measures assessing the quality of institutions operating in different areas are applied; there is no clear consensus on which indicators are best. This study uses two such measures, the Freedom in the World (FIW) index and the Worldwide Governance Indicators (WGI), which together allow the assessment of eight dimensions of institutional quality. Trends in the selected indicators, showing the direction of change in the quality of institutions in the surveyed countries, have been analysed.
Data and Methods

In this paper the authors used the Vector Measure Construction Method (VMCM) as the methodological apparatus for assessing the quality of institutions (Hanias et al., 2007; Ugurlu et al., 2014). A time series analysis of variables reflecting the quality of institutions in the selected 24 countries of CEE and Central Asia was performed. First, to assess the quality of institutions in 1995, 2004 and 2019 (based on the adopted variables), a ranking of the 24 countries of CEE and Central Asia was constructed against a so-called artificial pattern derived from the 1995 data, taken as the base year. The purpose of this research design was to enable a general ranking of the selected countries with regard to the development of the quality of their institutions, and to allow the ranking results for each country in 2004 and 2019 to be compared against 1995 as the reference year.

The PR and CL variables (X1-X2) are indicators of the quality of institutions assessed within the FIW, while the other variables (X3-X8) form the evaluation of the quality of institutions within the WGI. The FIW dimension scores (PR and CL) are measured on a one-to-seven scale, with 1 representing the highest degree of freedom and 7 the lowest. The WGI indicators range from -2.5 to +2.5; the higher the value, the better the evaluation of the institutions of a given economy in the six key dimensions of the WGI, a measure designed to embrace various aspects of the institutional structure of economies and to serve as a basis for rating countries' achievements in the quality of institutions.

The indicator values were collected for three years:
- 1995, when all of the analysed countries had accomplished the implementation of the set of major transformation reforms (it is not possible to fully assess changes in the quality of institutions from the very beginning of transformation, because studies were not started for all countries in 1990; it is known, however, that in terms of the quality of institutions the post-socialist countries formed a fairly homogeneous group at the beginning of transformation, with a very large distance from democratic market economies (Kitschelt, 2003, p. 49; Piątek, 2011));
- 2004, when eight of the countries under study joined the EU, with two others admitted to this group in 2007;
- 2019, the last year for which comparable data are available.

Data for these three years were obtained from the Freedom House database (for the two FIW dimensions) and the World Bank database (for the six WGI dimensions).

The quality of institutions is a complex socio-economic phenomenon described by many indicators, and therefore the VMCM, a Multidimensional Comparative Analysis method, was selected for its evaluation. VMCM allows for ranking socio-economic objects (countries) described by many indicators and for studying the dynamics of their change (Nermend, 2009).
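To make this variable set-up concrete, the short Python sketch below assembles the eight diagnostic variables and records the character of each one; the column labels, the file names and the use of pandas are illustrative assumptions rather than anything specified in the article.

```python
import pandas as pd

# Hypothetical layout: one row per country, eight diagnostic variables in columns.
# X1 (PR) and X2 (CL) come from Freedom in the World: 1 = most free, 7 = least free,
# so they are destimulants (lower values are better).  X3-X8 come from the WGI and
# range from -2.5 to +2.5 with higher values better, so they are stimulants.
VARIABLES = ["X1_PR", "X2_CL", "X3", "X4", "X5", "X6", "X7", "X8"]
CHARACTER = {v: ("destimulant" if v in ("X1_PR", "X2_CL") else "stimulant")
             for v in VARIABLES}

def load_year(path: str) -> pd.DataFrame:
    """Read one year's indicator table (countries in rows, VARIABLES in columns)."""
    return pd.read_csv(path, index_col="country")[VARIABLES]

# Illustrative file names only:
# data_1995 = load_year("institutions_1995.csv")
# data_2004 = load_year("institutions_2004.csv")
# data_2019 = load_year("institutions_2019.csv")
```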
The VMCM procedure comprises eight stages (Nermend, 2017). The initial stages concern the selection of the objects and diagnostic variables and the determination of each variable's character as a stimulant, a destimulant or a nominant, where nominants are variables whose desired values lie within a specific range. The remaining stages are as follows:

4. Defining the weights of the diagnostic variables - in our case no weighting system was used, because all the selected diagnostic variables were considered equally important.

5. Normalisation of variables - the next stage in the construction of the aggregate measure of the quality of institutions eliminates the different units of measure of the diagnostic variables, which would otherwise hinder arithmetic operations on them.

6. Determination of the pattern and anti-pattern - in the paper the pattern and anti-pattern were determined automatically, on the basis of the 1995 data as the base year. For variables with a stimulant character the pattern was constructed on the basis of the first quartile, and on the third quartile for destimulants; for the anti-pattern it was the third quartile for stimulants and the first quartile for destimulants.

7-8. Construction of the aggregate measure and classification of objects - the last two stages of the VMCM. The value of the aggregate measure typically lies within the range from zero to one: it equals zero for the anti-pattern and one for the pattern. Objects with a measure greater than one are better than the pattern, while objects worse than the anti-pattern have a negative value of the measure. To better visualise the results of the calculations, objects can be divided into classes with similar measure values. In this research four classes are used (Class 1 for the best objects, Class 2 for good objects, Class 3 for objects with a mean value of the aggregate measure, and Class 4 for objects with the lowest values of the aggregate measure).

A detailed mathematical description of the VMCM method is provided by the authors in two earlier works (Piwowarski et al., 2018; …).
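To make stages 5-8 concrete, the sketch below computes an aggregate measure of this kind. It is only a minimal illustration under stated assumptions: z-score normalisation, a quartile-based pattern and anti-pattern in which the pattern takes the more favourable quartile of each variable (the usual VMCM convention), a vector-projection measure equal to 0 at the anti-pattern and 1 at the pattern, and class boundaries placed at the mean plus or minus one standard deviation of the measure. None of these specific choices, nor the function and variable names, are taken from the article itself; the character dictionary is the one from the previous sketch.

```python
import numpy as np
import pandas as pd
from typing import Optional

def vmcm_measure(df: pd.DataFrame, character: dict,
                 base: Optional[pd.DataFrame] = None) -> pd.Series:
    """VMCM-style aggregate measure: 0 at the anti-pattern, 1 at the pattern."""
    base = df if base is None else base          # base year fixes the pattern and anti-pattern
    # Stage 5: normalisation (assumed here to be z-score standardisation on the base year).
    mean, std = base.mean(), base.std(ddof=0)
    z, zb = (df - mean) / std, (base - mean) / std
    # Stage 6: quartile-based pattern and anti-pattern in normalised space.
    q1, q3 = zb.quantile(0.25), zb.quantile(0.75)
    pattern, anti = q3.copy(), q1.copy()         # stimulants: favourable quartile assumed to be Q3
    for v, kind in character.items():
        if kind == "destimulant":                # destimulants: favourable quartile assumed to be Q1
            pattern[v], anti[v] = q1[v], q3[v]
    # Stages 7-8: project each country onto the anti-pattern -> pattern direction.
    direction = (pattern - anti).to_numpy()
    values = ((z - anti).to_numpy() @ direction) / (direction @ direction)
    return pd.Series(values, index=df.index, name="aggregate_measure")

def classify(measure: pd.Series) -> pd.Series:
    """Four classes, assuming boundaries at the mean +/- one standard deviation."""
    m, s = measure.mean(), measure.std(ddof=0)
    bins = [-np.inf, m - s, m, m + s, np.inf]
    return pd.cut(measure, bins=bins, labels=[4, 3, 2, 1])   # Class 1 = best objects
```

With the yearly data frames loaded as above, the pattern and anti-pattern would be fixed on the base year, e.g. measure_2019 = vmcm_measure(data_2019, CHARACTER, base=data_1995), after which classify(measure_2019) assigns each country to one of the four classes.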
Empirical Results

The results of the analysis allow us to answer the questions of whether the quality of institutions (the aggregate measure), estimated on the basis of the eight variables, is a stable feature of the surveyed countries, which underwent systemic transformation in the 1990s, or whether it changes over time, and whether EU membership improves the quality of institutions. The ranking of the quality of institutions (aggregate measure) in the surveyed countries for the three analysed years is presented in Table 1.

In 1995 the highest aggregate measure values (Class 1) are observed in five countries: Slovenia, Hungary, the Czech Republic, Poland and Estonia. These are the countries providing potentially the best quality of institutions among the selected 24 countries of CEE and Central Asia. At the other end, the following CEE and Central Asia countries have the lowest aggregate measure values for the quality of institutions (Class 4): Georgia, Kazakhstan, Azerbaijan, Uzbekistan and Tajikistan. In 2004, all of the countries belonging to the EU today are in Classes 1 and 2. However, in 2004 the quality of institutions in Poland deteriorated and the country moved from Class 1 to Class 2, while Croatia moved up from Class 3 to Class 2. In 2019, four countries in total were classified in Class 1 (Estonia, Slovenia, the Czech Republic and Latvia). In 2019 the quality of institutions in Hungary deteriorated and the country moved from Class 1 to Class 2; Estonia, in turn, is the leading country in the ranking. Changes are also visible at the bottom of the ranking: the Russian Federation moved down from Class 3 to Class 4, while Kazakhstan moved up from Class 4 to 3 and Georgia from Class 4 to 2.

The mean value of the measure calculated for each country over the three analysed years shows how large the differences in the assessment of the quality of institutions are across the analysed countries. In this respect, both the EU and the non-EU group are diverse (Figure 1). In the group of EU countries, the lowest quality of institutions is found in the countries that joined the EU most recently: Croatia (HR), Bulgaria (BG) and Romania (RO) (see Figure 1A). The diversity of the non-EU countries is much greater, and the further south-east a country is located, the lower the quality of its institutions: Azerbaijan (AZ), Uzbekistan (UZ) and Tajikistan (TJ) (see Figure 1B).

Figure 1. The average value of the institution quality measure (over the three analysed years) for each country in the EU (A) and non-EU (B) groups. Source: Authors' elaboration.

Figure 2. The average value of the institution quality measure in individual years for the two groups of countries (EU and non-EU). Source: Authors' elaboration.

Looking at the average of the aggregate measures in individual years for these two groups of countries, an upward trend is visible in both groups (Figure 2). The pace of change in the quality of institutions in the non-EU countries was initially small (the average value of the aggregate measure increased by half over the period 1995-2004). The quality of institutions in those countries then rose in the period 2004-2019 - the average value of the aggregate measure increased 11 times - but compared with the EU countries it remained very low. The average for the EU countries grew much faster in the initial period of the analysis (by approximately 13% in 1995-2004), and its growth rate afterwards, although still positive, was slightly lower (approximately 10% in 2004-2019).
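The group averages and growth rates reported above follow mechanically once the per-country measures are available; the short Python sketch below shows one way of computing them. The eu_members set, the function name and the dictionary layout are illustrative assumptions only.

```python
import pandas as pd

def group_growth(measures: dict, eu_members: set) -> pd.DataFrame:
    """Average aggregate measure per group and year, plus growth between consecutive years.

    `measures` maps a year (e.g. 1995, 2004, 2019) to a Series of aggregate
    measures indexed by country; `eu_members` is a hypothetical set of country names.
    """
    rows = {}
    for year, m in measures.items():
        is_eu = m.index.isin(list(eu_members))
        rows[year] = {"EU": m[is_eu].mean(), "non-EU": m[~is_eu].mean()}
    avg = pd.DataFrame(rows).T.sort_index()      # years in rows, groups in columns
    growth = avg.pct_change()                    # e.g. 0.13 ~ "grew by approximately 13%"
    return avg.join(growth.add_suffix("_growth"))
```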
Discussion and Conclusions

The VMCM is a method dedicated to the analysis of complex socio-economic issues described by many indicators. It has features of both branches of multidimensional analysis - Multidimensional Comparative Analysis and Multi-Criteria Decision Making methods - both of which are used to solve multifactorial problems. Thanks to appropriate processing and automatic ordering of the data, VMCM allows for an 'objectification' of the obtained results (Nermend, 2017; Piwowarski et al., 2018; …). Its application to the research conducted for the purposes of this article allowed long-term trends in the quality of institutions in the post-socialist countries to be studied, conclusions to be drawn and directions for further research to be indicated.

Although at the beginning of the transformation all the post-socialist countries formed a fairly homogeneous group, in which the institutions inherited from the period of socialism were inadequate for a market economy (Piątek, 2011), a clear differentiation of the post-socialist countries emerged during the transformation period; this is supported by the research conducted here, and the division of these countries into two groups - the EU members and the other analysed post-socialist countries - is therefore justified. The first group of countries is characterised by a much higher quality of institutions; most of these countries declared their willingness to join the EU already in the initial period of transformation and introduced the sets of major reforms faster and with a view to their membership. This is one of the main reasons for the huge difference in the average level of the aggregate measures for these two groups of countries in 1995 and 2004.

However, the decline in the rate of increase of the aggregate measure for the first group of countries after 2004 confirms the observation described in the literature that, for EU member states, EU membership was a stronger motive for reforms improving the quality of institutions in the pre-accession phase than in the post-accession period (Bruszt and Lundstedt, 2016; Bruszt and Compos, 2018). And although the second group of countries managed to improve the quality of their institutions, and the pace of this improvement over the last 15 years was faster than in the group of EU countries, the majority of these countries are still characterised by a low level of democratisation, economic and political freedom and civil liberties, as well as by high corruption and authoritarianism. This may be the result of insufficient transformation of informal institutions and of the acceptance of corrupt behaviour. Therefore, the distance between this group of countries and the group of EU members remains large (EBRD, 2020). Future research on this group of countries could therefore focus on finding differences in the dimensions of the quality of institutions in the individual countries that constitute the EU.

Both groups of countries are also internally differentiated. As the results of our research show, this internal differentiation is much greater precisely in the group of non-EU countries (a higher value of the standard deviation in each of the analysed years). In recent years, however, attention has been drawn to some post-socialist countries that are EU members, such as Hungary and Poland (characterised by a reduction in the aggregate measure of the quality of institutions over the analysed period), where democracy and civil liberties are becoming limited and corruption is growing, while the ruling populist groups follow a pattern of hasty legislating and of restricting the participation of the opposition, appropriating political institutions and taking over the media, trying to transform them into a susceptible tool (Miłaszewicz, 2019; Csaky, 2020). These changes indicate a potential direction for future research, which could focus on an in-depth analysis of those aspects of institutional quality that deteriorate and of the impact of this deterioration on the socio-economic results achieved.

The indicators adopted in this article to assess the quality of institutions take into account, first of all, the more measurable formal institutions, although some elements of informal institutions are captured when assessing corruption or the rule of law. It can therefore be concluded that the variables collected for the research constitute a fairly comprehensive picture of the quality of institutions in the analysed countries. This picture may, however, be supplemented in future research, in order to obtain the fullest possible assessment, by taking into account additional dimensions of institutional quality, i.e. by including in the analysis components of other measures - the EFW, IEF, FP, CPRI or DB.
added: 2020-11-26T09:06:58.704Z, created: 2020-11-01T00:00:00.000
metadata: { "year": 2020, "sha1": "297c1bbdaffcf0d234b6eae0a6a3bdddeef5ab13", "oa_license": null, "oa_url": "https://www.ersj.eu/journal/1805/download", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "6b686f9dd1bc02e674341b01ca6cb436cf0b7956", "s2fieldsofstudy": [ "Economics", "Political Science" ], "extfieldsofstudy": [ "Computer Science" ] }
id: 237386660, source: pes2o/s2orc, version: v3-fos-license
TERROR THROUGH SCREEN IMAGES AS A POWER DISCOURSE

The purpose of the research is to analyze terror through screen images as a power discourse and to establish the role of its impact in the field of television art. Research methodology. The following methods were used: analysis and synthesis (the interdependence of screen images, which act as a kind of amplifier of intellectual potential on the screen, was analyzed); generalization (a summary was made on the basis of the analyzed links); and systematization (all information collected during the research was systematized). The scientific novelty lies in the detailed consideration of the components of terror in the screen arts and of attempts to influence the modern viewer with "terrorist" images. Conclusions. During the research, the scientific achievements of domestic researchers on the coverage of power discourse in the media and cinema were analyzed. The role of the impact of power discourse in the field of television art has been established. The peculiarities of screen images, their role and their influence on society have been revealed. The peculiarities of terror through screen images in the modern media space have been generalized. The impact of social networks and TV channels on people's consciousness through manipulation has been determined.

Problem statement

In recent years modern television has become a major source of manipulation. Political and analytical programs broadcast live have the greatest influence of all. Most of their images have a so-called terrorist character, which helps them find their audience. Nowadays the number of similar projects keeps growing in the pursuit of ratings. Influence creates supply and demonstrates the trust of viewers. When each program contains information that is needed not by the viewer but by the TV channel, the terrorist act becomes the main element in all psychological states of a person, and trust in false and manipulative information becomes greater than trust in real, verified information.

Recent research and publications analysis. The topic of manipulation on television has been addressed by a small number of researchers, but some scholars have investigated terror through screen images. S. Datsiuk, V. Ivanov, N. Lihachev and S. Chernenko (2003) described manipulation on TV. H. Pocheptsov (2020) investigated power as communication and how the world is governed through communication. H. Chmil (2013) analyzed powerful role-playing and cinema as a disciplinary discourse of power.

The purpose of the research is to analyze terror through screen images as a powerful discourse and to establish the role of influence in the field of television art.

Main research material

In the 21st century there is an "information war", which is waged most intensively on television and is quickly transmitted from one viewer to another. This is most often seen during election campaigns and events that change history. A prime example is COVID-19, which turned the world upside down. Some people are intimidated, while others continue to live their lives because they want to believe only in the best. H. Pocheptsov (2011a), in his article "Power as communication", described the current situation in the world quite accurately: "The government has no other means of influence than communication. As people became freer over the course of history, the state was left with fewer and fewer opportunities for influence (other than communication). Incidentally, totalitarian power is also, first of all, communicative power.
Its only and most significant feature is total control over the information and communication spaces. The democratic government also controls them, no matter how much it denies it. But it does so by generating and maintaining a better information product".

In general, screen art is a kind of staging that has its own features, devices and even roles. Thanks to this, a certain integrity emerges between the screen and the person in front of it. The viewer becomes an integral part of this world and an active user of the information provided to him (Chmil, 2013). It is generally accepted that the key component of manipulation is convincing people of information that has no accurate confirmation or consistency. This is called suggestion. It is a process that has a huge impact on the psychological state of the viewer and is associated with a certain uncertainty, poor self-criticism and a lack of fulfilment in a favourite area. The easiest way to control the feelings of a person who does not have his or her own expressed "I" is through contemporary art. Such a person perceives the content of the information as an important attitude that affects her/him and her/his environment (Datsiuk, 2003, p. 43).

News, political and analytical programs use the process of suggestion most clearly. This terrorist device helps, first of all, politicians in the electoral process to gain the required number of votes and to feel secure in their place. Most often people cannot even recognize that they are being manipulated deliberately and with a specific purpose, although they may not feel the terror at all. The spectator becomes part of a game that benefits only one side. Nowadays Ukrainian TV channels and their space belong to politicians. They set the trends for their screen programs: who can be invited to broadcasts, what can be shown and how to talk about it, which facial expressions, gestures, images, commercials and words can completely rebuild the viewer before he or she leaves the screen to analyze the situation. And this is what the target audience is based on: what it is interested in, how to keep it on a given TV channel and make it trust every second of airtime.

The easiest way to enable manipulation is through the TV presenter. He is the lever of the entire program, the moderator and the crucial organizer of the live broadcast. The images created within the TV channel's format help to set up interesting work and gather viewers in front of the screen. Projects that involve viewers in the discussion perform best of all. After all, a person becomes part of the topic and feels that his opinion can solve something or push for certain changes in a given situation. Simply writing comments on social networks or calling in helps to gather around certain "imaginary" people: like-minded people on the same wavelength, or people against everything that is said. As practice shows, those who think the same way are far fewer. They are afraid of being like others, but they are easier to manipulate. Held on this psychological thread, a screen image appears in them. On the one hand, it is a lever of electoral integrity, and on the other, a terrorist discourse.

The main share of power terror is carried out with the help of modern television. Today, TV screens act as a kind of pendulum that hypnotizes the viewer and conveys information that will be beneficial only to a certain caste of society, as a result of which a person, like a puppet, follows certain imposed instructions, believing his or her actions to be correct. The scholar and analyst H.
Pocheptsov (2011a) believes that "today the state's communications with its citizens are based on a more objective and detailed basis than before. But our government still allows itself to experiment with society and teach it". In one of his articles, H. Pocheptsov analyzed the current terrorist television situation in the world and stated that "there is such a 'spy' truth: 'Who owns the information owns the world'. But today a new managerial truth is more important: 'By managing communication, we rule the world'. By giving the mass consciousness certain facts and interpretations, selecting some and rejecting others, we create a concrete model of the world for it. Having received and accepted it, the mass consciousness can easily do without propaganda and external censorship, since now it will be able to determine for itself what is truth and what is false, although in accordance with the model introduced by someone".

The communicative component became one of the mechanisms of manipulation in 2020. All sources speak only of the deadly coronavirus. At the same time, people began to forget that some situations and events are more important than character. From every screen it sounds that "we care about your life and health". The screen has moved as far as possible from the themes of war, violence and murder, which are directly related to peace. "Words are power, but they become even more powerful when they sound from someone who is an authority for society. Ukraine has gradually lost its authorities, but new ones do not appear. Today we have only 'chair authorities', or authorities from show business or sports who can speak on all topics. And this is wrong, since they are neither experts nor participants in the events," - this opinion was stated by H. Pocheptsov (2011b) in his article "Communications between the government and society: new ways".

Scared people have a great tendency to stress, which means they are trusting and can themselves deliver manipulative information to their environment, regardless of whether anyone needs their opinion and its implementation or not. Ukrainian politics has long lacked logical ideas; the authorities use the thoughts and options of the West, and this is detrimental to work and communication in various spheres of life (Pocheptsov, 2011a). Owing to a well-developed imagination in a certain layer of the Ukrainian population, television helps them to fantasize and enter a completely different world of events. Since many do not have enough positive emotions, special manipulation technologies are used to form their thinking in the paradigm of screen art and space. This is a definite manifestation of terror, situated between psychological, mental and imaginary fantasies (Chmil, 2013).

The modern viewer has lost the ability to recognize fakes in the era of post-Soviet truth. This has allowed him to become an object of influence. A person is no longer a problem for the state or for those who have money; they control the behaviour and consumption of information in simple ways (Pocheptsov, 2019b). Not every viewer can cope with the modern flow of information. For some people it is just communication, while for others it is an important part of life. Pocheptsov (2019c) noted that "information and communication are the base for structuring the world in nature, since there is a direct correlation between them. There are deviations, of course, when censorship or propaganda demands to say one thing, but in reality, something else is or should be".
Now the world is at a crossroads of new principles of influence, for which some viewers are not yet ready. Countries express this through hybrid warfare. Each time, a person becomes an object of terror and may suffer an attack on his or her psychological state (Pocheptsov, 2019b). The need for manipulation in art and on the screen existed before, but it has become more pronounced in the last six years, after the Revolution of Dignity in Kyiv. It was during this time that communication changed significantly. It seemed that everything had become transparent and truthful. And the emergence of social networks, along with the work of television channels, began to influence people's consciousness jointly.

Ukrainian rating TV channels use all kinds of terror technologies. There is a huge difference precisely between the behaviour of the presenter in the evening and in the night news broadcasts. S. Datsiuk (2003) notes that "in the nightly broadcasts, journalists 'cut through' their voices, their assessments of events are quite bold, and they make the stories much more professional, adhering more closely to the principles of 'good' journalism". The number of political and analytical TV channels in Ukraine is increasing almost monthly. Viewers are ready to watch the broadcasts several times and begin to trust even the words of those whom they did not believe before.

Conclusions

It can be concluded that manipulation plays an integral role in the modern media space. Nearly every program on television wants the viewer to stay with it and to help keep its ratings. For their sake, political projects do not spare informational occasions and even create new ones. Influencing the viewer from the screen has already become a certain kind of art: the simpler the program's information that holds a person, the harder it terrorizes his brain into accepting the most simplified version. The question is whether in the future viewers will filter information rather than trusting every word spoken from the TV screen. But the fact remains that this cannot last forever. Historically, there are times of a revolution of consciousness, and they recur with every century. Now in Ukraine and in the world there is an "information war" about which everyone knows. And yet, manipulative factors and screen images still manage to hold the bar.
added: 2021-09-01T15:03:17.448Z, created: 2021-01-01T00:00:00.000
metadata: { "year": 2021, "sha1": "53d99bec05f329e4d1e86eda1617e4fd8982ac4b", "oa_license": "CCBY", "oa_url": "http://audiovisual-art.knukim.edu.ua/article/download/235066/234438", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "1260ac07ef00956880af300fa0cfb2ecde16ad37", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [] }
id: 265117807, source: pes2o/s2orc, version: v3-fos-license
Women on the frontline: exploring the gendered experience for Pacific healthcare workers during the COVID-19 pandemic

Summary

Background: Women comprise 90% of patient-facing global healthcare workers (HCWs), yet remain underpaid, undervalued, and under-represented in leadership and decision-making positions, particularly across the Pacific region. The COVID-19 pandemic has exacerbated these health workplace inequalities. We sought to understand Pacific women HCWs' experience from the COVID-19 frontline to contribute to policies aimed at addressing gendered gaps in regional health systems.

Methods: Our interpretative phenomenological study used critical feminist and social theory, and a gendered health systems analytical framework. Data were collected using online focus groups and in-depth interviews with 36 Pacific regional participants between March 2020 and July 2021. Gender-specific content and women's voices were privileged for inductive analysis by Pacific and Australian women researchers with COVID-19 frontline lived experience.

Findings: Pacific women HCWs have authority and responsibility resulting from their familial, biological, and cultural status, but are often subordinate to men. They were emancipatory leaders during COVID-19, and as HCWs demonstrated compassion, situational awareness, and concern for staff welfare. Pacific women HCWs also faced ethical challenges to prioritise family or work responsibilities, safely negotiate childbearing, and maintain economic security.

Interpretation: Despite enhanced gendered power differentials during COVID-19, Pacific women HCWs used their symbolic capital to positively influence health system performance. Gender-transformative policies are urgently required to address disproportionate clinical and community care burdens and to protect and support the Pacific female health workforce.

Funding: Epidemic Ethics/World Health Organization (WHO), Foreign, Commonwealth and Development Office/Wellcome Grant 214711/Z/18/Z. Co-funding: Australasian College for Emergency Medicine Foundation, International Development Fund Grant.

Introduction

Since March 2020, the COVID-19 pandemic has disrupted health, socio-economic, and political systems globally, and exacerbated known inequalities across gender, race, and disability in every resource context.1 Sex and gender differences in direct health effects have been well researched,2 but in broader measurements of well-being that include health access, socio-economic, and safety indicators, it is clear that women have been substantially more negatively impacted by the COVID-19 pandemic than men.3 Women and girls have been more likely to lose employment, drop out of education, become a carer, or experience gender-based violence; effects that are also influenced by regional and cultural contexts.3 Understanding how socio-cultural, environmental, political, and resource dynamics influence the sex and gendered impacts of COVID-19 can lead to nuanced and context-specific policies and practices that are more likely to successfully address gender disparities and inequity.4

Women healthcare workers (HCWs) comprise more than 70% of the global health workforce and 90% of health workers in patient-facing roles.5,6 Despite their critical role in upholding health systems, women HCWs remain underpaid, undervalued, and under-represented in leadership and decision-making positions.7,8,11,12
Women also comprise the majority of the health workforce across the Pacific region, again in lower-paid and less senior leadership positions.13 Although we can assume they have faced similar gendered challenges and inequalities documented in other global health contexts, we know little about the specific experiences of Pacific women HCWs and emergency care (EC) stakeholders during COVID-19. Despite disparate geographies and populations, the 22 Pacific Island Countries and Territories (PICTs) that comprise the Pacific region share common historical, colonising, language, and sociocultural experiences,14 including the lowest level of women's political representation globally.15 Connected by the Blue Pacific Continent, PICT peoples place high value on natural resource stewardship, community inclusion and consensus, cultural practice, and traditional knowledge.16 Although most PICTs are low- and middle-income countries (LMICs) with significant health resource challenges, regional strengths characterised by strong relationships, past surge event experience, and adaptability17 may have influenced how Pacific HCWs have experienced the COVID-19 pandemic across all genders. In this context, we aim to explore the gendered experience of the COVID-19 pandemic for frontline HCWs and EC stakeholders across the Pacific region, using a critical feminist theoretical framework.

Research in context

Evidence before this study
Gendered inequality within health systems has been exacerbated by the COVID-19 pandemic, whereby women healthcare workers (HCWs) have disproportionally experienced occupational health and safety risks, increased mental ill-health, higher workloads, and reduced access to leadership and decision-making opportunities. Furthermore, women HCWs have faced a double burden both professionally and in the personal domain, where they have shouldered most caregiving responsibilities for children, families, and communities. We searched Medline (OVID and PubMed), Embase (OVID), Google Scholar, the Cochrane Library, WHO resources, Pacific and grey literature using search terms 'COVID-19/pandemic/surge events', 'emergency care', 'emergency medicine', 'gender', 'gender analysis', 'women healthcare workers', 'critical feminist methodology', 'women', 'Pacific Islands/region' and related terms. We found that Pacific regional healthcare systems also comprised a high-majority female workforce, consistent with global patterns. During the COVID-19 pandemic, Pacific emergency care clinicians demonstrated leadership and resilience, strengthened by cultural relationships and innovation. COVID-19 created increased barriers for Pacific women's sexual and reproductive healthcare, but little is known about the specific gendered experience of women HCWs at the frontline of the pandemic response across the Pacific region. Momentum is increasing globally for improved health system policies that understand and address gendered inequity, particularly to sustain the feminised workforce.

Added value of this study
This is the first study to deeply explore Pacific women HCWs' experience at the coalface of the COVID-19 response across the region. Although Pacific women HCWs faced similar ethical challenges in negotiating professional and personal caregiving responsibilities to their global female colleagues, we also found unique cultural strengths and threats. The COVID-19 pandemic reinforced power differentials between Pacific men and women HCWs, particularly enhancing occupational segregation in emotional labour and lack of access to high-stakes decision-making. Our feminist and social theoretical approach enabled an understanding of how Pacific women can use their symbolic capital to positively influence health systems by demonstrating emancipatory leadership, situational awareness, and loving care.

Implications of all the available evidence
Effective gender-transformative health system policies are urgently required to protect and uphold the Pacific female workforce, who face disproportionate risks and responsibilities enhanced by the COVID-19 experience. Unique to the Pacific context, policies and practices that recognise Pacific women HCWs' cultural strengths and symbolic capital will enable participatory leadership and enhance team performance. Elevating and sustaining Pacific women in decision-making, leadership, and advocacy roles will likely increase stakeholder trust in Pacific health systems and positively contribute to regional aspirations for Healthy Islands and Universal Health Coverage.

Study design, setting, participants, and data collection

We conducted an interpretive phenomenological study, seeking to explore and understand the COVID-19 experience of participants from their lived experience using in-depth interviews supplemented by focus group discussions.18,19 We understand the ubiquitous impact of gender as a social construct on all aspects of health systems,20 but also that individuals experience sex and gender differences uniquely. Our hermeneutic phenomenological approach enables us to appreciate the 'lifeworld' context of our participants, but also to incorporate our own understanding and gendered experience of COVID-19 into the interpretive process.21

This study is a subset of a larger prospective, qualitative research collaboration which has been described at length previously.22,23 Data were collected from 116 consenting HCWs and other stakeholders from at least 15 different PICTs using online platforms between March 2020 and July 2021. All data were recorded, transcribed, and de-identified. Key informant interview and focus group discussion guides incorporated gender and feminist theoretical framing, which enabled us to code data for gender-specific content. For this study, we inductively analysed data from 36 participants: the complete transcripts of interviews with all seven women key informants (2 doctors, 4 nurses, 1 regional health program manager), transcripts of two focus group discussions (a total of 23 participants of both genders) and gender-specific content extracted from transcripts of interviews with six male key informants. PICTs represented through the seven women's in-depth interviews included Fiji, Kiribati, Palau, Papua New Guinea, Samoa, Solomon Islands and Tonga, while all recognised geographical Pacific regions (Melanesia, Micronesia, and Polynesia24) were represented in focus group discussions and the remaining key informant interviews.

Theoretical framing

We applied critical feminist theory to expose and resist conventional health system research approaches that maintain knowledge structures and practices rooted in patriarchal systems of power.25 Our study adopted deliberate women-centred approaches reflecting the five suggested methodological considerations for critical feminist research: how/why questions are asked, attention to language, reflexivity, representation, and research for social transformation.26
We believe a critical feminist approach is required when exploring issues of gender in healthcare systems, particularly in the Pacific region, where women's voices are under-represented and health outcomes inequitable.27

Consistent with our critical feminist approach, we specifically privileged and elevated women's voices by focusing on women key informants. However, recognising that gendered experiences in health systems research are about differential power relations, gender-specific data from all participants were included for analysis. To assist with interpretation, we used the gender framework developed by Morgan et al.,20 which focuses on power as a driver of inequity within health systems and uses four key questions to explore the attainment and use of power: who has what, who does what, how values are defined, and who decides. Examining gender as a power relation also enables an intersectional lens that may incorporate other social stratifiers such as education, class, ethnicity, race, age, sexuality, and geographical location, all of which may interact to influence how power and vulnerability are experienced by Pacific women HCWs.20

Finally, we use the theory and concept of 'capital' from French social philosopher Pierre Bourdieu to guide data interpretation. Bourdieu introduces symbolic capital to describe power derived from all forms of assets, beyond mere mercantile economics.28 People accrue, transform, and exchange forms of capital that encompass cultural, social, political, educational, scientific, and other fields in which they operate. When analysing how gender influences and informs power relations within health systems, Bourdieu's concept of symbolic capital helps to reveal less obvious forms of women's power and can expose how women may exchange their accrued capital to attain influence. In Pacific contexts, the dynamic use of symbolic capital has been used to illuminate how women have accessed political power in masculinised fields.29

Data analysis, researcher characteristics and reflexivity

The principal researcher (GP) has been trained in qualitative research methods and led the interpretative, reflexive thematic analysis as outlined by Braun and Clarke,30,31 assisted primarily by co-researcher MK, and in collaboration with women research team members. Data were open-coded, subsequently collated into subthemes, and finally analysed to identify key themes through an iterative, robust process of deep reading, thinking, and discussion. Results and interpretation were presented to the entire research team for feedback on clarity and veracity.

Our research team comprised clinicians and EC stakeholders from Australia and the Pacific region, prioritising women co-researchers for data interpretation. Principal co-authors (GP and MK) have their own lived experience of working as EC clinicians on the frontline, including through sex-specific and gendered events during the pandemic, ranging from pregnancy and childbirth through to menopause. The study is reported in accordance with the … and Standards for Reporting Qualitative Research (SRQR)33 guidelines.

Role of the funding source

There was no specific funding for this study. Funders of the original research22 had no role in study design, data collection, data analysis, interpretation, nor writing of the manuscript.

Results

We identified four core themes encapsulating the diverse experience of Pacific women HCWs at the frontline of the COVID-19 response: 1. Women's emancipatory leadership; 2. Women's bodies and responsibilities; 3. Women as workers; and 4.
Women in Pacific culture.

Although presented independently, these themes intersect and complement each other through the fundamental identity of Pacific women, who hold authority, responsibility, and prowess by virtue of their cultural, biological, and familial status. These attributes inform women HCWs' leadership and shape how they work, bringing great strength but also tension as they individually and collectively negotiated ethical care obligations in the early phases of the COVID-19 pandemic. Women's testimonies and observations are elevated in this analysis rather than presented as if in opposition to male HCWs, who themselves may have both unique and shared pandemic frontline experiences. Each theme is explored and illuminated with participant quotes in the following subsections (using women's voices, unless otherwise specified). However, the thematic findings are to be read and contextualised in the patriarchal milieu that Pacific women navigate and work within, where even women of very high power and status endure gendered inequality and face discriminatory practices:

"When I talk about women, and work, I feel a lot of pain, because we see it every day in our workplaces. I see it on TV, played out with our female parliamentarians, by their male counterparts, I just find it so disgusting. And that trickles right down. So if people up in leadership position face that, how much more a person towards the bottom of the ladder? How much more do they face, and they don't have a voice." [Key Informant (KI)9]

Theme 1: Women's emancipatory leadership

Participants observed women health clinicians in leadership positions demonstrating a strong, inclusive leadership style, which was also described by participating women leaders themselves. They empowered and united their colleagues to feel confident and work together in navigating the workplace unease and pressures ushered in by the COVID-19 pandemic:

"I think for this pandemic, all of us females here are managing the situation. So I think with our director, also a female - we have a strong leader as a female - so it also helps us that we work together as a team and we [are] able to manage our situation here in [our country]" [Focus Group (FG)1, Participant (P)13]

"For the good of your staff. And staff meaning from the doctors right down to the cleaners, they all come under you. And they have families whom you have to consider, so their safety is of paramount importance. Even the clerks too. So yeah, that's how I see it, you kind of take care of everybody who's working in your department, with regards to COVID-19…" [KI10]

By taking on leadership, women became role models for other female staff. Participants observed women leaders building the capacity of their colleagues by sharing information, exemplifying expertise, and engaging them in complex problem solving:

"… She has all the other colleagues, other medical staff and the nurses, the doctors, the lab technicians, even the ambulance drivers, from the very menial staff to the top ranking ones, they all get together, they plan and they try and develop and implement the results that she has found out."
[FG2, P16]

As adept communicators in the healthcare field, women HCWs allayed fears and enabled their co-workers to feel positive and efficacious, thereby demonstrating a distributive and emancipatory style of leadership:

"…myself, in my chief operation, as much as you want to also take time off you kind of really have to be there for everybody else … you have to do a lot of mental distressing, and talking to people." [KI11]

"So I try during the times that we're on shift, and we have some free time, I try to talk to them, to brief them. And even run through a scenario or two, just to, you know, just to get them to open up… You talk to them until they feel so good, and so confident, that they could probably do this and that. And then when they see their colleagues perform and do well, they want to be a part of that as well." [KI12]

Talking skills became a form of altruistic and powerful advocacy from mature women leaders, who demonstrated courage, audacity, and perseverance to ensure their staff were protected and patient care was maintained:

"So, you just have to talk and talk. If one door is shut there's some other option you can go to, you cannot just say 'Okay, just leave it like that', no. You have to just keep on talking, talking, talking until you find a way around to get what you want, for the good of everybody…. I had to approach the Director of Medical Services who is a male, even the CEO himself, I had to [go]"

Formidable female leadership in the workplace derived from some women's authority status within families and the community. This style and recognition of leadership intersects with Pacific women's cultural strengths (Theme 4), and was particularly effective at the height of COVID-19 pandemic preparations:

"A lot of our work as well is affected by how we work in our families, there's like this invisible hierarchy of importance. Some people come with their crowns from home, so. It's recognised in the workplace, and people really use it. I like it when they use it for the good things in the emergency department." [KI12]

For younger women attempting to assert their expertise and step into leadership roles within strongly patriarchal contexts, persistence and endurance were critical to overcoming seemingly hostile barriers and garnering respect:

"So in [my town], where I practice, it is a very male-dominant society and I am not from there, and I'm a woman, and I'm a young consultant. So I think one of the challenges for me as a woman was getting people to sit up and listen. And it really just got to a point where they didn't really have a choice because all the senior management was in quarantine or in isolation so there was no leadership on the ground, and there were only [X] consultants on the ground and I was one of them. So people didn't really have a choice but to listen. But it was really difficult, just pushing and pushing until something had to give."
This consistent demonstration of female professionalism, reliable presence, and courage was supported and validated by the presence of other women colleagues, and access to trustworthy and reputable resources:

"I didn't find that too much of a challenge because we reported to the emergency operation centre, and we were familiar with the people in [there] and there's a lot of females as well in the emergency operation centre… If I disagreed with something I was able to say it. And what also helped was because I was able to get evidence from the ACEM group and also from the network of emergency physicians, I was able to ask them what they thought as well, to ensure that whatever I said in the meeting had some background basis and it wasn't just my opinion." [KI5]

Theme 2: Women's bodies and responsibilities

Pacific women's bodies - their fertility, their nutritional source (breastmilk), their physical size and capabilities - were inextricably entwined with women's personal and professional experience as HCWs at the COVID-19 frontline. Commonly, male staff were simply referred to as men or doctors (and infrequently but notably 'male nurses'). Women at work, by contrast, frequently became 'mothers', 'breastfeeding mothers', or 'expecting mothers'. Although we can assume that male staff were equally likely to be 'fathers', this parental role was never used as a critical identity signifier for men in the workplace:

"Gender inequality. So we've got - one, two, three, four - I think four or five boys, five male nurses, and then the rest are female. And most of them are mothers." [KI13]

Women's fertility exposed and restricted them in the workplace. Breastfeeding female staff with newborns were fearful of COVID-19 exposure. Women's fecundity became public knowledge, and ensured that decisions were made about them and on their behalf. Such decisions were often made with good intentions to 'protect' pregnant women, but simultaneously denied them autonomy.

"…like for the pregnant staff, we made it clear that if any staff did find out they were pregnant they need to let us know early, so that appropriate protection would be given to them… when the outbreak started we have a few of our staff who are pregnant, so they were put in the non-COVID side. And as soon as our hospital started having increasing cases we've just asked them to stay home and look after themselves, rather than coming and getting exposed." [KI8, male]

Maternity was also an additional cause of stress in the workplace - as a contributor to reduced staffing and a cause of future recruitment difficulties, exacerbated by the pandemic:

"Well even now, it's been a constant issue with staffing, especially nurses. It's like we're always short in staff… And now with COVID, we will all be stretched, with our limited staff that we have now and yes I think the conflict, the internal conflict, that nurses will face, whether to stay at home or to come to work, will also affect our staffing… And we're already stretched, and if someone calls in sick or someone goes on maternity leave, the remaining staff who is on day off will be called back…" [KI13]

"…we will be burning out very fast if we don't have enough staff on the ground to do the work. So yes, we do have issues about female staff coming to work for the COVID team."
As mothers, daughters, and wives, women HCWs were bound by their family as the determinant of all decisions. The conflict between duty to work and responsibilities at home provoked ethical challenges, borne almost exclusively by women frontline clinicians:

"Yes, I think the female staff were impacted greatly, especially those who are married with kids, with children, those who were expecting mothers. They had concerns over their obligation or their duty to their family versus their duty to work, to patients. So that was an ethical decision that they had to make, a really hard one. I've had a few talks with a few of my fellow nurses who say to me, if COVID does come she feels like she's deciding not to come to work, because she feels more obligated to take care of her family and her role as a Mum, as a wife, as a daughter as well." [KI13]

Women HCWs in our sample were prepared to expose themselves to the risks of COVID-19, despite concerns about adequate PPE, but also adapted their practice according to physical and gendered experiences of comfort and safety:

"I was a female with the COVID team and there was more males, … And [we] went down to the ships. I had to climb the mothership to do the screening. I went down two times and then after that I discussed with the [female] director; I think it is much better if the male doctors do the screening on the ships … I went into quarantine, and our director also asks us who they would like to partner with …, so I chose another female doctor to go into the quarantine, because I felt comfortable you know." [FG2, P19]

Theme 3: Women as workers

Women clinicians brought compassion, empathy, and love to the workplace - for their patients and for each other. When describing their roles, women participants commonly used words of care and support:

"So as an emergency care nurse, I deliver emergency assessment, stabilise, providing ongoing management. Also as a caregiver, I get to provide palliative tender loving care to terminal elderly that comes over to ED… And as a counsellor, I also provide emotional and spiritual support. Lastly, I am an advocate as well. I act as a medium between doctors and patients and friends, voicing their needs and requests, etc." [KI13]

"We need to support our staff as well. So when the plane comes with our repatriates we have to be present at the airport to give them support as well, that we are there for them." [KI7]

In contrast to their male colleagues, some women HCWs identified that they paid attention to detail and demonstrated a wide awareness of multiple perspectives influencing patient care and workplace function. Professionally, women perceived the many stressors facing colleagues and were mindful to support staff wellbeing and prevent burnout. Women worked their way strategically around issues to prioritise patients:

"About a female perspective and male perspective? Well, we have to strategise and compromise with a male prior to doing something, to take care of patients. So it's a challenge. Sometimes males are straightforward. They don't think about some little, little things that will be a distraction to how we do the care or how we facilitate everything."
[FG1, P12]

Women were at the frontline of the COVID-19 response - in multiple roles and greater numbers than men, and often at the forefront of exposure and risk:

"But they do most of the work! They're out there, doing all the work in the background. Who goes to the, does contact tracing? I see here, in [my country], on the news all the time - who's in the front? A woman carrying a bag, all donned up and walking around the neighbourhood to do contact tracing. Then they come back and they're in the forefront giving vaccination. And then they're back into the hospital. So, yes, I think women do a lot more out there." [KI9]

"And then there was another added responsibility where, like, I didn't have my senior medical officers on the ground, so I had to play that role of senior person to liaise between the staff and the management with regards to all kinds of things happening in the ED. Deaths of patients, moving patients, transporting things here and there, pushing for enough PPE to be on the ground for staff to use and work, so that was like an added thing on to what I'd already been doing,…" [KI10]

Often women HCWs were the first to put themselves forward, although not necessarily gaining attention or recognition for their work. This preparedness to step up intersects with Theme 1, demonstrating how some Pacific women HCWs empowered colleagues through role-modelling and inclusive, positive leadership:

"In other words, it's a bit different; male and female. The males, even though they know a lot and they put all their support into things, it's the females that do most of the work… the females actually volunteered and then over time we have the males stepping in." [FG2, P16]

"Initially when the first suspected case came through, I remember at that time there was a lot of apprehension as to 'Who is going to do it?'. I remember I came to two of the senior [female] nurses and I said 'We have to do the first case, we have to set the example - we need to go in and swab that patient because if we are scared to do it then everyone else is going to be scared in our team. So we need to show them that it's okay'. I had to make sure that I understood, that I was clear in my mind, about how to don and doff, and what was the risk." [KI5]

Economic imperatives to provide for the family were a strong motivator for women HCWs, who faced the ongoing dilemma of duty to patient care and responsibilities at home, as identified in Theme 2. Financial incentives and expected rewards for overtime shift work and pandemic risk allowances were specifically removed in many PICTs, which disproportionately targeted the largely female nursing workforce:

"COVID has, well see, for example - there's no more overtime. And who does overtime? Nurses. Nurses do overtime to, a lot of them find doing overtime is more attractive, because they get additional money to support their low salaries, to be able to support their family." [KI9]

Female staff were concerned about the potential risks and impact of COVID-19 on health and wellbeing not only personally, but also on their families. Serious threats to their occupational health and safety had equally serious implications for others who relied on women HCWs in their personal roles as carers.

"Another of my colleagues has got elderly parents who [have] other non-communicable diseases as well. She's like, 'If I die or if I contract this virus, there's no one else to take care of my parents 'cause all my brothers are married and moved away, so I'm the only one responsible for my parents'."
[KI13]

Theme 4: Women in Pacific culture

Women are the centre of families and community life in the Pacific. For women frontline clinicians, COVID-19 healthcare duties seriously threatened their cultural expectations and economic responsibilities:

"[In my country], and I heard nurses in [another PICT] do it - when they go into, when there's a new lot they care for these people for 14 days and then they quarantine for 14 days! That's one whole month away from your children, from your family, from your husband. Women in the Pacific, they're the ones that hold their families together. Most of them are, quite a lot of them are, breadwinners for the family. So they just have no option. So they still are disadvantaged." [KI9]

Despite their critical professional and personal roles, women often had to seek permission from superiors and garner male approval before they could speak or act publicly. Male relatives sometimes refused to allow women their professional autonomy:

"…it is cultural … as we have the three Fs - so family, faith and food. So family is, 'cause we exist in a community, we exist in families. So we can't make decisions without consulting our elders or, we don't think for ourselves, we put our family first before ourselves…" [KI13]

"So our female staff, we have this cultural thing - so some of the female staff are unable to come to work because the husbands will not allow them to come to work due to fear and so many things. So that's one challenge because not every female staff are allowed to come to work by their spouses..." [FG1, P9]

As within the home, women HCWs relied on men's power to recognise their workplace leadership - which was not always visible to male counterparts. In some contexts where men could not see women's strengths, women's voices were silenced:

"Not providing that platform that women can come out and speak freely, a platform that looks at nurses as leaders in their units, as family members, as children with children." [KI9]

"No, I have not been offered or had the opportunity to talk to, yeah, those guys with authorities. [Laughs] I'm just like looking…" [KI13]

The patriarchal structure of Pacific societies ensured male voices were heard and male leadership was magnified, even if women HCWs had more experience or provided most of the healthcare service:

"Pacific society yeah. Male-dominated. You can see that even with sessions that we have out there, you have all the male counterparts speaking out, in the forefront, giving information that, talking on behalf of everybody… And females, women, don't speak out. In the Pacific 70% of the workforce is females. Over 60% are nurses and nurses are predominantly female. But how many leaders out there are women? And in the forefront? Just a handful. So that speaks volumes." [KI9]

In some PICTs, the pandemic provided an opportunity to elevate and embed women's cultural attributes in a way that enhanced health workplace and team function:

"It shows that culture is also been instilled in the workplace, where normally we have, I'm not sure how I can say it in English, but in [my country] the females are mostly "(local word)" …. it means to be nosey and investigative. We are very fortunate because this COVID-19 has pushed our culture as females to actually wanting to know more. We are thankful that we have our male colleagues in the back supporting us in whatever we find, we actually pushed back to them." [FG2, P16]

Pacific women bring many cultural strengths to the healthcare workforce. Their centrality within families and communities, and demonstrations of love and empathy, motivated and inspired their colleagues during the COVID-19 experience:

"… a lot of our culture comes through in our work. Like, working together, the cultural concepts like patriotism. It's not really patriotism, it's like love of country, love of people."
[KI12]

Discussion

This is the first study to illuminate the COVID-19 experience of Pacific women frontline HCWs using a critical feminist approach and an analytical gender framework. We found that Pacific women's multiple roles across home and work are both a burden and a source of strength and skill. As pivotal carers and breadwinners within Pacific families, women possess particular work attributes of care, empathy, compassion, multi-tasking, attention to detail, and emotional intelligence for effective leadership and pastoral care. Authority conferred by Pacific cultural, church, and community status - the 'crowns from home' - enables some women to step into leadership roles and demonstrate empowering, distributive management styles. However, this doesn't work for all women or all the time. There is tension between women having authority and women being subordinate to men, senior family, or community members. Furthermore, Pacific women's identity and status are sources of additional stress and introduce complex ethical challenges and moral distress 34 to prioritise work over family (or vice versa), meet financial demands, and balance occupational health and safety risks. 7,8,35

Our participants' lived experience of shouldering the burden of the pandemic response across the population and community level, as well as at the forefront of clinical service, reinforces the urgent need to develop and implement gender-responsive global health security policies. 36 Future initiatives should incorporate greater female representation in decision-making leadership, better protection for women HCWs, recognition of women's unpaid work, gender-sensitive data handling, and more expansive support for civil society women's organisations during ongoing and future health emergencies. 36

Pacific women HCWs are likely to be in their peak reproductive years. During COVID-19, pregnancy and maternal activities were segregated and pathologised as high-risk (perhaps by both men and women), becoming signifiers for how staff were identified (women as mothers, men as men) and confirming expectations around motherhood found in other contexts. 11 This had serious implications for rostering and recruitment of women, and a potential impact on morale and emotional fortitude for Pacific women HCWs who did experience pregnancy, childbirth, and/or breastfeeding during the pandemic. Evidence before COVID-19 highlighted the serious risks of systemic structural gender discrimination and inequality to global strategies towards building and maintaining Human Resources for Health. 37 Our findings reinforce global evidence that the pandemic has exacerbated these risks. 38 Without addressing women's recruitment, support, embodied needs, and pay equity with gender-transformative policies and practice, the Pacific region may face critical future workforce shortages that threaten Healthy Islands 39 and Universal Health Coverage aspirations. 40 Although current WHO Western Pacific Region strategic planning priorities lack sufficient attention to this issue, 41 recognition of the need to protect and invest in the global healthcare workforce is gaining political and intersectoral traction. 42
Pacific women HCWs' caring attributes and situational awareness skills induced their additional role as custodians of workplace wellbeing. In the pandemic context, this sensitivity, perception, and responsiveness to the mental health landscape at work elevated the importance of women HCWs in the Pacific region. However, it also reinforced social gender ideologies of 'women's emotional work' 43 that contribute to occupational segregation. 44 Women were also highly effective communicators who were not afraid to speak out to those in authority for the benefit of their colleagues and patients, but who were denied platforms to speak or make decisions in higher-stakes contexts. This further illustrated occupational segregation in Pacific health workplaces, 6 but it also demonstrated mechanisms some women HCWs used to resist conditions that contributed to moral distress at the frontline. 34 Pandemic contexts where Pacific women HCWs were able to model highly efficacious leadership and participatory decision-making were enabled through the supportive presence of other women. 45

Using the social theory of Bourdieu, our findings can be understood as transactions of symbolic capital. When Pacific women's social and cultural capital (their 'crowns from home') are recognised and valued in society, they can convert this symbolic capital to gain access to professional or political capital in the workplace. This transcends previous understandings of social relationships as sources of support and/or stress during pandemic events, 46 to a unique Pacific understanding of some social connections (for example, 'family and faith') as sources of strength and power. Using their legitimised symbolic capital to 'play the game' in the professional sphere enabled Pacific women HCWs to assert power and influence decisions and practice. 29 In addition to symbolic capital conferred by their centrality within families, Pacific women accrue substantial emotional capital through their caring duties within the community and (for some) maternal roles. 47 Unfortunately, during COVID-19 they were also required to expend disproportionate amounts of emotional capital on both patients and highly stressed staff in the workplace, without commensurate professional or political capital gain. Recognising and valuing the different forms of Pacific women's capital, and how it may be amassed and exchanged to allow for increased access to power and decision-making, may assist in future policy interventions aimed at improving women's health leadership.

Applying a 'health services research' gender framework to our findings enhances an understanding of how power relations influence Pacific women HCWs' COVID-19 experience. Viewing gendered inequalities through the lens of power differentials can also lead to focused policy interventions aimed at restructuring how power is distributed in health systems. We therefore briefly re-order our findings in Table 1, according to the following key domains of gendered power relations: who has what; who does what; how values are defined; and who decides. 20
In everyday health systems, these gendered power domains are dynamically maintained through shifting policies, practices, and attitudes. Our findings emphasise how the COVID-19 pandemic reinforced power differentials between men and women HCWs (particularly around occupational segregation), but also demonstrated the potential for Pacific women HCWs to reshape power inequalities through their symbolic capital and to positively influence health system performance. This is particularly salient for transformational leadership in health, whereby Pacific women HCWs' qualities of emotional intelligence, situational awareness, and loving care could substantially enhance team function if recognised as essential for formal health leadership roles during surge events. 8,48,49 Furthermore, placing a higher value on Pacific women's effective advocacy for patient and staff safety may contribute to altruistic health governance that engenders trust among all stakeholders, 50 and meets post-pandemic calls to elevate nursing and midwifery leadership to achieve Universal Health Coverage across the Pacific. 51

Power was disproportionately used against women HCWs by the almost exclusively male Parliaments of some PICTs through decisions about financial benefits and overtime pandemic pay. Although our data did not demonstrate the additional advocacy efforts that economically disenfranchised Pacific women HCWs undertook to realise their rights, 52 we assume this magnified their ethical challenges and emotional labour at the height of the pandemic and may have contributed to increased stress and burnout. 12 Similarly, none of our participants specifically discussed gendered experiences of PPE, despite the known impact of ill-fitting and poorly designed PPE for frontline women clinicians. 9 Gender-based violence (GBV) was mentioned briefly in reference to barriers created by pandemic restrictions for women victims seeking refuge. Although physical and/or sexual violence against women is known to be highly prevalent in many PICTs, 53 and statistically very likely to affect Pacific women HCWs, our data did not link this issue with the COVID-19 frontline clinical experience. We are unlikely to learn more without targeted and appropriately conducted research focused on women's pandemic experience of GBV, and instead should proactively focus on rights-based responses and recommendations. 54

Limitations of the broader project have been described at length elsewhere, including measures taken to address potential gender, role, and cultural barriers to open participation by Pacific stakeholders. 22 In terms of this paper's focus on gender, our findings are limited by our small sample size and the constraint of few gender-specific questions embedded within interview and focus group discussions that covered a large topic field. These factors may have restricted data depth and nuance, and potentially contributed to the gaps identified earlier. However, participatory engagement of Pacific women as co-researchers in data analysis and interpretation provides additional acuity and validity to our findings.
This study has highlighted the substantial contribution of Pacific women HCWs during the COVID-19 pandemic as frontline clinicians, emancipatory leaders, role models, advocates, staff welfare champions, and insightful care providers in both the public and personal realm. To perform all these roles, Pacific women ethically negotiate their moral responsibilities to their families, communities, patients, and colleagues, and strive to overcome cultural and structural gendered inequalities that privilege men's power. By making this women's work visible, we hope to positively encourage our Pacific colleagues who uphold their health systems. Further, our findings endorse the urgent need to protect and support the female healthcare workforce through gender-transformative policies across the Pacific region and beyond.

Table 1. Findings re-ordered according to key domains of gendered power relations.

Who does what?
- Women: most of the patient-facing, clinical work (service provision, frontline, professional roles), including working overtime; look out for staff, care for and protect staff; family and community caring (personal roles); pregnancy, breastfeeding, maternal care; role modelling for other female staff; effective, 'on-the-ground' advocacy; take risks and put themselves forward.
- Men: higher-level, management work; provide support for some women in some PICTs in leadership roles; may or may not recognise women's work and women's leadership; speak on behalf of women.

How values are defined
- Women shape and confirm values using their social and emotional capital: women HCWs perceive themselves as well (possibly better) suited to patient-facing roles by virtue of their attention to detail and loving care provision; some PICTs elevated women's cultural capital ('crowns from home') and accepted social norms of women's assertiveness ('being nosey and investigative'), which enabled courageous advocacy and frank speaking from women; women's solidarity assisted with leadership, safety in the workplace, and pandemic planning.
- Common Pacific cultural values (from our data): patriarchy (men as head of the family and community); respect for elders/hierarchies; women's domestic centrality; communitarianism. Assumption of male leadership and of male speaking opportunities and platforms; women's leadership qualities of inclusivity, persuasion, and empowerment not necessarily valued or recognised; high value on maternity/motherhood resulting in identification and protection of pregnant HCWs.

Who decides?
- Women: group decision-making when in formal leadership roles; informally, women HCWs withdraw their labour if they feel unsafe or the risk/benefit ratio falls towards the family/community rather than the workplace; can enter decision-making roles by default through persistence and advocacy.
- Men: men make decisions and speak out on behalf of women, or may actively exclude the female voice by nature of male-only leadership; political (almost exclusively male) decisions about pay, overtime allowances, and pandemic benefits; institutional decision-making for pregnant HCWs (potentially denying women's autonomy).

Contributors

GP was responsible for this study design. GP and MK performed data coding, analysis, and interpretation, with LMH and SK performing initial transcription and gender-specific content extraction. All authors contributed to final data interpretation, with MK and SM providing essential regional input and contextual advice. GP developed the first draft of this manuscript. The final version was reviewed and approved by all authors.

For the original project design, MC, GP, CEB, and SK (along with Rob Mitchell and Gerard O'Reilly) were primarily responsible. MC and SK coordinated funding acquisition and project administration. MK (along with Deepak Sharma, Berlin Kafoa and Penisimani Poloniati) provided regional perspectives and contextual advice. Study materials were developed by LMH, MC, GP, SK and CEB (along with Rob Mitchell and Gerard O'Reilly). All original authors engaged in data collection through online support forums, interviews or focus group discussions.
Data sharing statement

De-identified and coded interview transcript data used to support the results in this article may be made available to interested stakeholders after careful consideration by the research project team and upon receipt of a written request to the corresponding author.

Declaration of interests

GP and MC declare they are recipients of International Development Fund Grants from the Australasian College for Emergency Medicine (ACEM) Foundation and are members of ACEM. SK declares past employment at ACEM; CEB declares former ACEM committee membership; and LMH declares past contract payment from ACEM. GP reports visiting Faculty status at the University of Papua New Guinea and Fiji National University.

Ethics

The study was reviewed by the World Health Organization's Ad Hoc COVID-19 Research Ethics Review Committee (Protocol ID CERC.0077) and declared exempt. Reporting of study data adheres to Enhancing the Quality and Transparency of Health Research (EQUATOR) guidelines.
2023-11-11T16:06:08.311Z
2023-11-09T00:00:00.000
{ "year": 2023, "sha1": "13b0e421989b399fc24d48b97aae2a5085dff3d4", "oa_license": "CCBYNCND", "oa_url": null, "oa_status": "CLOSED", "pdf_src": "PubMedCentral", "pdf_hash": "2b94fc2160e771a8e4a1e1e6f7e8c06d3c3118e1", "s2fieldsofstudy": [ "Sociology", "Medicine" ], "extfieldsofstudy": [] }
99943545
pes2o/s2orc
v3-fos-license
Perfectly Aligned Shallow Ensemble Nitrogen-Vacancy Centers in (111) Diamond

We report the formation of perfectly aligned, high-density, shallow nitrogen vacancy (NV) centers on the ($111$) surface of a diamond. The study involved step-flow growth with a high flux of nitrogen during chemical vapor deposition (CVD) growth, which resulted in the formation of a highly concentrated (>$10^{19}$ cm$^{-3}$) nitrogen layer approximately $10$ nm away from the substrate surface. Photon counts obtained from the NV centers indicated the presence of $6.1$x$10^{15}$-$3.1$x$10^{16}$ cm$^{-3}$ NV centers, which suggested the formation of an ensemble of NV centers. The optically detected magnetic resonance (ODMR) spectrum confirmed perfect alignment (more than $99$ %) for all the samples fabricated by step-flow growth via CVD. Perfectly aligned shallow ensemble NV centers indicated a high Rabi contrast of approximately $30$ %, which is comparable to the values reported for a single NV center. Nanoscale NMR demonstrated surface-sensitive nuclear spin detection and provided a confirmation of the NV centers' depth. Single NV center approximation indicated that the depth of the NV centers was approximately $9$-$10.7$ nm from the surface with an error of less than $\pm$$0.8$ nm. Thus, a route for material control of shallow NV centers has been developed by step-flow growth using a CVD system. Our finding pioneers the atomic-level control of NV center alignment for large-area quantum magnetometry.

Nitrogen-vacancy (NV) color centers in diamond have emerged as a breakthrough material for realizing quantum sensing and quantum information processing. These centers possess unique spin-dependent fluorescence combined with microwave coherent manipulation and constitute a material platform for a quantum magnetometer 1,2. Furthermore, an NV center located in close proximity to an external spin of interest allows for statistical nuclear polarization detection at the nanoscale for magnetic sensing applications 3,4,5. Nanoscale nuclear magnetic resonance (NMR) allows for a small detection volume on the order of approximately (5 nm)^3 and can be utilized for the determination of single protein structure, in contrast to standard magnetic detection techniques such as NMR and magnetic force microscopy (MFM) 6,7. A fundamental limitation of an NV-center-based magnetometer is the material control required to confine the NV center in the vicinity (<10 nm) of the substrate surface with a high magnetic sensitivity. Previous studies that examined shallow NV centers focused on either a high-density ensemble for two-dimensional large-area imaging or a single NV center with high contrast and a long coherence time to obtain a minimal detection volume using nanoscale NMR. However, it was found necessary to combine spatial localization of the NV centers with alignment, high density, and a long spin coherence time (T2) to obtain high magnetic sensitivity. The alignment of NV centers in an ensemble is the key to accomplishing high contrast while maintaining a high signal-to-noise ratio for high magnetic sensitivity with low accumulation time. In this regard, low-energy ion implantation is the most common technique utilized for the production of NV centers in the vicinity of a surface 8. However, this methodology suffers from large depth dispersion (>10 nm) of the NV centers due to ion straggling and channeling effects 9,10.
Additionally, high-density surface defects formed during implantation affect the spin coherence time, and the ensembles show a random orientation with this technique 12,13,14. Existing studies include reports of CVD growth that demonstrated a narrow distribution in the confinement of NV centers in the vicinity of a surface 11 and their atomic alignment on (100), (110), (113), and (111) substrates for the formation of thick diamond films 14,15,16,17,18,19,20. Nearly all previous studies have focused on either low-density NV centers (<10^13 cm^-3) in the vicinity of a surface with no alignment 21,22,23 or the formation of NV ensembles with alignment in bulk. In this paper, the formation of a perfectly aligned, high-density, shallow NV center film for surface-sensitive detection of nuclear spin has been demonstrated. Results obtained from SIMS measurement, combined with an effective depth obtained from nanoscale NMR measurement, confirm the presence of shallow NV centers approximately 9-10.7 nm from the surface with an error of less than ±0.8 nm. The results of this study offer a path toward controlling the alignment of shallow NV center ensembles.

In this study, NV-containing diamond films were grown on diamond IIa (111) substrates using a microwave plasma chemical vapor deposition (MPCVD) system with CH4 and H2 as source gases. During the growth, N2 gas was introduced as a nitrogen source to form NV centers in the diamond films. Shallow NV centers were formed by increasing the growth time from 30 to 120 s at a growth rate of 0.3 nm s^-1, as confirmed by secondary ion mass spectrometry (SIMS). The growth conditions included 75 Torr pressure, 620 W power, 900 °C temperature, and a total gas flow of 1000 sccm with CH4: 0.5 sccm, N2: 3.2-4 sccm, and H2 as a carrier gas. An intrinsic diamond was grown for 7 h prior to the formation of the NV centers. NV centers were formed on the surface without an intrinsic diamond cap layer. The off-angle of the substrates corresponded to 2-3° along the <1̄1̄2> direction, with an off-direction less than 5° from the <1̄1̄2> direction. The morphology of the samples was investigated by atomic force microscopy (AFM). The fluorescence intensity of the NV centers was measured using a home-built confocal microscope system equipped with a 532 nm laser, avalanche photodiode detectors, and a spectrometer. Optically detected magnetic resonance (ODMR) was performed to analyze the alignment ratio of the NV axis. Surface magnetic sensitivity was determined by performing nanoscale NMR using an XY8 pulse sequence.

A typical AFM image of the sample is shown in Figure 1(a). Step-flow is observed towards the <1̄1̄2> direction, as indicated by a black arrow in Figure 1(a), and the cross-sectional scan of the AFM image at this location is shown in Figure 1(b). The average step height lies in the range of 1-4 nm. The step heights of 1 nm and 4 nm are indicated by arrows in Figure 1(b). Small steps of 1 nm and a considerably smaller terrace size are packed tightly between the larger step-height terraces. The step height distribution corresponds to a distribution in nanoscale bunching that occurs during intrinsic layer deposition. Figure 1(c) shows a confocal XY scan of the sample, which illustrates the high emission counts observed throughout the sample. A confocal spot size estimated by a 300 nm diameter and the thickness obtained from SIMS measurements were used for calculation of the confocal spot volume.
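As a rough numerical illustration, the sketch below (Python) works through this volume estimate and the ensemble-to-single-NV count-rate comparison described in the next paragraph. This is a minimal sketch rather than the authors' analysis code: the 300 nm spot diameter and the ~10 nm layer thickness follow the text, while the two count rates are placeholder values chosen only to land inside the reported density range.

```python
import math

# Confocal probe volume: approximated as a cylinder with the ~300 nm
# spot diameter and the nitrogen layer thickness from SIMS (~10 nm assumed).
spot_diameter_nm = 300.0
layer_thickness_nm = 10.0
volume_cm3 = math.pi * (spot_diameter_nm / 2.0) ** 2 * layer_thickness_nm * 1e-21
# (1 nm^3 = 1e-21 cm^3)

# Placeholder count rates (counts/s); the single-NV rate would be
# measured on an isolated reference center.
ensemble_counts_per_s = 200e3
single_nv_counts_per_s = 50e3

nv_per_spot = ensemble_counts_per_s / single_nv_counts_per_s
density_cm3 = nv_per_spot / volume_cm3

print(f"probe volume : {volume_cm3:.2e} cm^3")
print(f"NV per spot  : {nv_per_spot:.1f}")
print(f"NV density   : {density_cm3:.2e} cm^-3")
```

With these placeholder numbers the probe volume is about 7x10^-16 cm^3, and four NV centers per spot give roughly 6x10^15 cm^-3, the lower end of the density range reported below.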
Emission counts were compared with photon counts from a single NV center for the calculation of the ensemble NV density. The photon counts observed in the sample correspond to an NV center density of 6.1×10^15-3.1×10^16 cm^-3. The observed NV center distribution can be correlated to the distribution of step heights in the AFM image, which is caused by differences in the propagation speed of the steps. Higher steps are caused by bunching of the step-flow, wherein higher steps propagate at slower speed in the lateral direction and the NV centers are more localized, when compared to a smaller step size with higher propagation speed. Ensembles of NV centers are observed throughout the sample and have a density that exceeds 6.1×10^15 cm^-3.

The T2 obtained in this study is comparable to the value reported previously 4. Further study to investigate the depth distribution of T2 and the depletion of NV centers within 10 nm of the surface will be our future work for improving the sensitivity of nanoscale NMR.

In conclusion, this study demonstrates that a highly aligned, high-density, shallow NV center ensemble is formed by step-flow growth using MPCVD on (111) substrates. An NV center density of 6.1×10^15-3.1×10^16 cm^-3 is detected from the confocal scan. The results demonstrate the highest NV density in the vicinity of the surface, with perfect alignment of more than 99 %. Surface-sensitive magnetic field measurement was performed by observing a thin layer of protons and fluorine contained in Fomblin oil by nanoscale NMR using an XY8-80 pulse sequence. The single NV center approximation indicates that the depth of the NV centers is approximately 9-10.7 nm from the surface, with an error of less than ±0.8 nm. Our finding offers a route for material engineering for the future of quantum magnetometry using NV centers, which requires atomic-level control of NV center alignment for precise alignment of the magnetic field and surface-sensitive magnetic field detection at the nanoscale for wide-field imaging.
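To make the nanoscale-NMR depth determination quoted above concrete, here is a minimal sketch (not the authors' analysis code) of the two numbers that matter in an XY8 measurement: the interpulse spacing tau that locks onto the nuclear Larmor frequency, and the NV depth d recovered from the measured RMS field of a semi-infinite nuclear spin layer, using the commonly quoted expression B_rms^2 = rho*(5*pi/96)*(mu0*hbar*gamma_n/(4*pi))^2 / d^3. The bias field, the proton density rho of the oil, and the B_rms value are assumed illustrative numbers, not values taken from this work.

```python
import math

MU0_OVER_4PI = 1e-7            # T*m/A
HBAR = 1.054571817e-34         # J*s
GAMMA_H = 2.6752218744e8       # 1H gyromagnetic ratio, rad s^-1 T^-1

# XY8 timing: pi-pulse spacing tau = 1/(2*f_Larmor) at an assumed bias field.
B0 = 30e-3                                     # assumed 30 mT along the NV axis
f_larmor = GAMMA_H * B0 / (2.0 * math.pi)      # Hz
tau = 1.0 / (2.0 * f_larmor)                   # s
print(f"1H Larmor frequency: {f_larmor/1e6:.3f} MHz -> tau = {tau*1e9:.0f} ns")

# Depth from the RMS field of a semi-infinite proton layer above the NV:
# B_rms^2 = rho * (5*pi/96) * (mu0*hbar*gamma_n/(4*pi))^2 / d^3
rho = 5e28        # assumed proton density of the oil, m^-3
b_rms = 250e-9    # assumed measured RMS field, T

depth_m = (rho * (5.0 * math.pi / 96.0)
           * (MU0_OVER_4PI * HBAR * GAMMA_H) ** 2 / b_rms ** 2) ** (1.0 / 3.0)
print(f"estimated NV depth: {depth_m*1e9:.1f} nm")
```

With these assumed inputs the sketch returns a proton Larmor frequency of about 1.28 MHz (tau ~ 391 ns) and a depth near 10 nm, i.e. the same order as the 9-10.7 nm quoted above; the real analysis would fit the XY8 signal contrast and propagate the measurement uncertainties.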
2019-04-08T13:13:23.279Z
2017-04-12T00:00:00.000
{ "year": 2017, "sha1": "e35c2c3237fc8caf2685ce81e5a21c928fb36a9b", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1704.03642", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "e35c2c3237fc8caf2685ce81e5a21c928fb36a9b", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Chemistry" ] }
5137476
pes2o/s2orc
v3-fos-license
Transcription Factor Functional Protein-Protein Interactions in Plant Defense Responses

Responses to biotic stress in plants lead to dramatic reprogramming of gene expression, favoring stress responses at the expense of normal cellular functions. Transcription factors are master regulators of gene expression at the transcriptional level, and controlling the activity of these factors alters the transcriptome of the plant, leading to metabolic and phenotypic changes in response to stress. The functional analysis of interactions between transcription factors and other proteins is very important for elucidating the role of these transcriptional regulators in different signaling cascades. In this review, we present an overview of protein-protein interactions for the six major families of transcription factors involved in plant defense: basic leucine zipper containing domain proteins (bZIP), amino-acid sequence WRKYGQK (WRKY), myelocytomatosis related proteins (MYC), myeloblastosis related proteins (MYB), APETALA2/ETHYLENE-RESPONSIVE ELEMENT BINDING FACTORS (AP2/EREBP) and no apical meristem (NAM), Arabidopsis transcription activation factor (ATAF), and cup-shaped cotyledon (CUC) (NAC). We describe the interaction partners of these transcription factors as molecular responses during pathogen attack and the key components of signal transduction pathways that take place during plant defense responses. These interactions determine the activation or repression of response pathways and are crucial to understanding the regulatory networks that modulate plant defense responses.

Introduction

The growth and development of plants are constantly affected by various environmental stresses, and among the most important biotic stresses are those caused by viruses, bacteria, fungi and nematodes [1]. Plants withstand pathogenic attacks by activating a large variety of defense mechanisms, including the hypersensitive response (HR), the induction of genes that encode pathogenesis-related proteins (PR), the production of antimicrobial compounds called phytoalexins, the generation of reactive oxygen species (ROS), and enhancement of the cell wall [1]. These complex response mechanisms are finely regulated by a large number of genes that encode regulatory proteins. A typical example of a regulatory protein is a transcription factor [2]. Transcription factors are pivotal proteins that respond to stress by altering the expression of a cascade of defense genes [2]. Many of these transcription factors are co-induced in response to different stressors, suggesting the existence of complex interactions [2]. Transcription factors are defined as transcriptional regulators that function by binding to specific cis-regulatory elements present in the promoters of target genes [3]. Transcriptional regulation plays a central role in the control of gene expression in plants, with approximately 2,000 genes predicted to be transcription factors in Arabidopsis thaliana [4].
In plants, the main families of transcription factors responsible for the regulation of genes responsive to pathogens are categorized as follows: a family of proteins that contain either one or two 60-amino-acid regions containing the amino-acid sequence WRKYGQK (WRKY); the APETALA2/ETHYLENE-RESPONSIVE ELEMENT BINDING FACTORS family (AP2/ERF); basic leucine zipper containing domain proteins (bZIP); myelocytomatosis related proteins (MYC); myeloblastosis related proteins (MYB); and, more recently, the no apical meristem (NAM), Arabidopsis transcription activation factor (ATAF), and cup-shaped cotyledon (CUC) family, also termed the NAC family [1,5]. Each transcription factor family has a specific binding domain, such as bZIP, zinc finger, or helix-turn-helix. These domains bind to DNA cis-elements associated with the response to a specific set of environmental stresses, and the differences between these domains are key features that distinguish one family from another [1,5]. Modulating the function of transcription factors through interactions with regulatory proteins is a crucial process in the activation or repression of signal transduction pathways [1,5]. Processes such as effector-triggered immunity (ETI), which results in a rapid process of programmed cell death known as the hypersensitive response (HR), and pathogen-associated molecular pattern (PAMP)-triggered immunity (PTI), which results in the prevention of infection by the pathogen, are finely regulated by the interactions of different proteins with transcription factors [6][7][8]. Several proteins have been reported to modulate the function of various plant transcription factors, such as the NON-EXPRESSER OF PATHOGENESIS-RELATED (PR) GENES (NPR1) protein, which binds to TGACGTCA cis-element-binding protein (TGA) factors of the basic leucine zipper domain (bZIP) family during the activation of salicylic acid (SA) signaling [6][7][8], and the MITOGEN-ACTIVATED PROTEIN (MAP) kinases, which also have a proven role in regulating WRKY family trans-acting factors [9]. In this paper, we discuss the current understanding of the interactions between transcription factors and several regulatory proteins that modulate the activities of these trans-acting factors by various mechanisms, such as inactivation, subcellular localization, degradation and post-translational modification, and the manner in which these interactions affect signal transduction pathways in plant defenses against environmental challenges.

bZIP Family

The family of transcription factors containing the bZIP domain is one of the largest families of transcription factors in eukaryotes. In plants, these factors regulate genes in response to abiotic stress, seed maturation, floral development and defense against pathogens [10]. Jakoby and collaborators classified bZIP proteins from Arabidopsis (AtbZIPs) into 10 distinct groups: A, B, C, D, E, F, G, H, I and S. Specific interactions have been reported between bZIP proteins and other proteins that regulate bZIP activity, subcellular localization and function during defense processes against pathogens [10,11]. Acting as key regulators of signaling mediated by SA, the TGA proteins, members of Group D of the Arabidopsis bZIP proteins, comprise a class of bZIP proteins that are linked with responses to biotic stress [10].
A major development in the study of the functional interactions of TGA members during pathogen responses has been the discovery of interactions with members of the ankyrin repeat protein family, specifically NON-EXPRESSER OF PATHOGENESIS-RELATED (PR) GENES (NPR1), a key component in the defense signaling pathway mediated by SA [6][7][8]. Under normal conditions, most NPR1 is retained in the cytoplasm as an oligomer via intermolecular disulfide bonds (Figure 1) [6,12]. Under pathogen attack, SA is synthesized and induces changes in the cellular redox state [6][7][8][12], promoting the monomerization of NPR1 through the activity of the THIOREDOXINS H3 and H5 (TRX-H3/H5). In SA-induced cells, monomeric NPR1 translocates into the nucleus via the nuclear pore complex (NPC) [6][7][8][12], and the NPR1 monomers interact with members of the TGA family (bZIP) and bind to SA-responsive gene promoters (Figure 1). During this process, NPR1 is phosphorylated and then ubiquitinated by an E3 ubiquitin ligase that has a high affinity for phosphorylated NPR1, thus targeting NPR1 for degradation by the proteasome complex. This process starts in the nucleus and ends in the cytosol (Figure 1) [6][7][8][12]. NPR3 and NPR4, protein homologs of NPR1, act as receptors of SA in this process, binding to this molecule with different affinities. NPR3 and NPR4 serve as adapters of the Cullin 3 E3 ubiquitin ligase, mediating the ubiquitination (Ub) and degradation of NPR1 in a manner regulated by SA (Figure 1) [6][7][8][12]. Arabidopsis npr3 npr4 double mutants accumulate high levels of NPR1 and are insensitive to the induction of systemic acquired resistance [6]. Studies have also demonstrated that 17 CC-type glutaredoxins interact with TGA2 [13]. It has been proposed that this interaction between CC-type glutaredoxins and TGA proteins plays a role not only in defense against pathogens but also in processes involved in plant development [13]. WRKY proteins also interact with TGA proteins [14]. In tobacco, the NtWRKY12 protein interacts in vitro and in vivo with TGA proteins [14].

In addition to the TGA proteins, it has been demonstrated that AtbZIP10 interacts in vivo with LESIONS SIMULATING DISEASE RESISTANCE 1 (LSD1), a protein with a zinc finger domain (Figure 1) [15,16]. LSD1 is a negative regulator of cell death and protects plant cells from oxidative stress [16]. The interaction between LSD1 and AtbZIP10 occurs in the cytoplasm, resulting in the partial retention of AtbZIP10 (Figure 1) [16]. AtbZIP10 positively regulates basal defense responses and cell death induced by reactive oxygen species (ROS), and these activities are antagonized by LSD1 [16]. Studies have also shown that a protein related to NPR1, an ANKYRIN-REPEAT PROTEIN (ANK1), interacts with a bZIP protein known as BZI1 (Figure 1) [17]. BZI1 has a DNA-binding domain and a D1 domain that is apparently essential for auxin signaling and defense against pathogens [17]. The molecular characterization of ANK1 has demonstrated that this protein is unable to bind to DNA and modulate gene transcription [17]. ANK1 is preferentially localized in the cytosol, and its transcription is negatively regulated under pathogen attack [17]. These features have led to the conclusion that ANK1 is involved in the modulation of auxin signaling and defense against pathogens in a manner dependent on its interaction with members of the bZIP family, such as BZI1 [17].
AP2/ERF Family

APETALA2/ETHYLENE-RESPONSIVE ELEMENT BINDING FACTORS (AP2/ERF) proteins belong to a family of plant transcription factors that exhibit the AP2/ERF domain necessary for specific binding to DNA and that can be subdivided into four subfamilies defined by Sakuma et al. [18]: AP2, DEHYDRATION-RESPONSIVE ELEMENT-BINDING (DREB), ERF and RELATED TO ABI3/VPI (RAV). The AP2 subfamily contains two AP2/ERF domains separated by a linker of 25 amino acids. While members of the RAV subfamily have, in addition to the AP2/ERF domain, another DNA-binding domain known as B3, members of the DREB and ERF subfamilies contain only one AP2/ERF domain. AP2/ERF transcription factors and other factors frequently act synergistically, increasing the expression of genes related to plant defense, as reported by Singh and Buttner [19]. During activation of the defense pathway mediated by ethylene, the AtEBP protein (Arabidopsis ethylene binding protein) recognizes the cis-element GCC-box and interacts with a bZIP family protein, OCTOPINE SYNTHASE (ocs) ELEMENTS BINDING FACTOR (OBF), that is able to recognize the G-box (CACGTG) (Figure 2). This interaction increases the expression of PR genes that contain both cis-elements. Similarly, in tobacco, the protein TOBACCO STRESS-INDUCED 1 (Tsi1) recruits the zinc-finger-containing Tsi1-INTERACTING PROTEIN1 (TSIP), an interaction demonstrated by two-hybrid assays, Western blotting and co-immunoprecipitation. This interaction results in increased tolerance to Pseudomonas tabaci, a hemibiotrophic plant pathogen, and increased transcription of the stress-related genes PATHOGENESIS RELATED PROTEIN 4 (PR4), SYSTEMIC ACQUIRED RESISTANCE PROTEIN 8.2 (SAR8.2) and LIPID TRANSFER PROTEIN (LTP) [19].

Other interactions can result in the phosphorylation of AP2/ERF proteins. When the ethylene signaling pathway is induced, phosphorylation can occur via MAPK kinases, such as the pair OsEREBP1/BWMK1 in rice [20] and TaERF1/TaMAPK1 in wheat [21], or by Ser/Thr kinases, such as Pseudomonas tomato resistance-interacting4 (Pti4) and the Pseudomonas tomato resistance (Pto) kinase of tomato [22]. In tobacco, the transcription factor octadecanoid-responsive-Catharanthus-APETALA2-domain protein (ORC1) can be phosphorylated by MAP kinases or other kinases [23]. In all the examples mentioned, phosphorylation results in increased activity of the respective transcription factor. Another example of an interaction that regulates the activity of AP2/ERFs is that of EREBP2 with the NITRILASE-LIKE PROTEIN (NLP), proposed by Xu et al. [24], in which NLP proteins associate with EREBP proteins and retain these factors in the cytoplasm. Contact with elicitors results in a dissociation process, and the factor EREBP is translocated into the nucleus, where it promotes the expression of PR genes (Figure 2C) [24].

MYB Family

The myeloblastosis related (MYB) family of transcription factors is diverse and present in all eukaryotes. This family has a variable number of MYB domains, which influence the capacity to bind to DNA [25]. The N-terminal region of the protein contains the DNA-binding domain and is highly conserved. The C-terminal region may contain a domain necessary for activation or transcriptional repression. Based on this structure, these proteins are divided into four classes: 1R, R2R3, 3R and 4R [26], and the R2R3-MYB class is divided into 22 subgroups [27].
The proteins of the R2R3-MYB class are plant-specific and are involved in the following processes: primary and secondary metabolism, cell fate and identity, development, and responses to abiotic and biotic stress [26]. Previous studies have verified that over-expression of Arabidopsis AtMYB30 accelerates and intensifies the hypersensitive response (HR) after attack by avirulent strains of Pseudomonas syringae, suggesting that it acts as a positive regulator of cell death in response to the attack of pathogenic bacteria [27]. MYB30 targets very long chain fatty acid (VLCFA) biosynthesis genes during pathogen infection (Figure 3). VLCFAs and their derivatives are likely involved in the establishment or control of HR [28]. To control the concentration of MYB30, the E3 ubiquitin ligase MYB30-INTERACTING E3 LIGASE1 (MIEL1) interacts specifically with MYB30 in the plant cell nucleus (Figure 3). MIEL1 ubiquitinates MYB30, targeting it for degradation in the 26S proteasome. The Arabidopsis miel1 mutant presents increased HR and resistance to avirulent bacteria. The expression of MIEL1 is inhibited during infiltration of avirulent P. syringae, enabling the accumulation of the MYB30 required to promote HR and, consequently, restricting the propagation of the bacteria to other regions of the tissue [29].

In one known mechanism of suppression of plant defense responses, XopDXcv, one of the Type III effectors of Xanthomonas campestris pv. vesicatoria, specifically interacts with the HLH domain of MYB30 and promotes its localization to nuclear bodies (Figure 3). The localization of MYB30 into the nuclear bodies prevents the activation of genes related to the synthesis of VLCFAs, preventing the appropriate activation of plant defense pathways [30]. The reprogramming of the host's transcription by XopD represents a virulence strategy that allows for the establishment of infections by the Xanthomonas species [30].

In plants, the PHOSPHOLIPASE A2S (AtsPLAα) is related to growth, development, stress responses and defense signaling. AtsPLAα binds to MYB30, and they translocate from the cytoplasmic vesicles into the nucleus, but the interaction of MYB30 with its target DNA is prevented. AtsPLAα is thus a negative regulator of HR and defense responses in Arabidopsis, acting specifically through AtMYB30 localized in cytoplasmic vesicles and preventing the transcription of genes normally mediated by AtMYB30 (Figure 3) [31].

BOTRYTIS SUSCEPTIBLE 1 (BOS1), a transcription factor of the R2R3-MYB subgroup, termed AtMYB108/BOS1, is necessary for responses to biotic and abiotic stresses in Arabidopsis. Mutants present a higher susceptibility to necrotic lesions and also have less tolerance to water deficits, salinity and oxidative stress when compared with the wild type [32]. BOS1 physically interacts with BOTRYTIS SUSCEPTIBLE1 INTERACTOR (BOI) in plant cell nuclei through a conserved central domain termed WRD, a region that is important in forming the coiled-coil structure that is often important for protein-protein interactions [32] (Figure 4). BOI is a RING E3 ligase able to ubiquitinate the R2R3-MYB protein in vitro, and possibly in vivo, leading to its subsequent degradation by the proteasome. Plants with BOI silenced by RNAi are much more susceptible to Botrytis cinerea and less tolerant to salinity [33], similar to observations made of the bos1 mutant [32]. Curiously, RNAi-BOI plants expressing 35S:BOS1-GUS are more resistant to the fungus than wild-type plants, suggesting that BOS1 is a direct target of BOI.
Expression of BOI is induced by SA and by 1-aminocyclopropane-1-carboxylic acid (ACC), a precursor compound of the ethylene biosynthesis pathway, but is inhibited by methyl jasmonate (MeJA) and gibberellins (GAs), presenting evidence for the complex regulation that is responsible for maintaining a normal level of BOI in wild-type plants. However, the severity of B. cinerea infections is known to be increased by the accumulation of SA, ET, MeJA and abscisic acid in wild-type plants [33].

MYC Family

The myelocytomatosis related (MYC) family represents a subfamily of transcription factors that contain a basic Helix-Loop-Helix (bHLH) domain, is present in all eukaryotes, and is characterized by having a basic DNA-binding region in the N-terminal region and, in the C-terminal region, hydrophobic residues that form two alpha helices separated by a loop, which determine the protein's dimerization capacity. The bHLH domain is characteristic of a large family of bHLH transcription factors to which MYC belongs [34]. MYC transcription factors are key transcriptional regulators in the expression of jasmonate (JA)-responsive genes, positively regulating wound resistance genes and acting as negative regulators during the expression of pathogen defense genes [1,35].

Under pathogen attack and herbivory, plants produce JA conjugated with isoleucine (JA-Ileu, a bioactive form of JA), which is recognized and bound by its receptor CORONATINE INSENSITIVE-1 (COI1). The COI1 protein is an F-box protein that associates with the cullin, SKP1 and RBX1 proteins, together forming the SCF COI1 complex. In the presence of JA-Ileu, the JASMONATE-ZIM-DOMAIN (JAZ) proteins (ZIM: zinc-finger protein expressed in inflorescence meristem) interact, by means of the Jas domain, with the SCF COI1 complex; JAZ is then ubiquitinated by the complex and subsequently degraded by the 26S proteasome, leading to JAZ unbinding from MYC [35][36][37][38][39][40]. Thus, in the presence of JA-Ileu, JAZ quickly undergoes proteolysis, promoting the release and activation of MYC. MYC activation also results in the expression of other transcription factors, such as MYBs and WRKYs, which are important in stress defense [40]. In addition, MYC activates the transcription of the JAZ proteins, leading to a restoration of basal JA signaling levels [37].

JAZ proteins comprise a family of 12 proteins that contain a centrally located ZIM domain, a JASMONATE-ASSOCIATED (Jas) domain on the C-terminal side, and an N-terminal region. JAZ proteins act as suppressors of the JA response, and the majority of JAZ proteins (such as JAZ3 and JAZ10.1), in the absence of JA-Ileu, have the ability to interact with MYC and negatively regulate its activity (Figure 5) [36]. JAZ proteins interact with MYC2 through their N-terminal portion, and when the Jas domain is truncated, the JAZ protein is not degraded, remaining irreversibly bound to MYC2 and acting as a dominant-negative repressor. This effect indicates that JAZ proteins do not require a Jas domain to interact with MYC2 and that repression occurs through an interaction of the JAZ N-terminal domain with MYC2 (Figure 5) [37]. This interaction and regulation model of MYC is not applicable to all JAZ proteins, because the interaction of the JAZ3 protein with MYC2 has been described as occurring via a different mechanism.
A deletion of the Jas domain in JAZ3 renders the protein unable to interact with MYC2, and it has been demonstrated that the Jas domain itself is sufficient for the interaction of JAZ3 with MYC2 [38]. Thus, it is proposed that JAZ3 binds as a dimer, through the Jas domain, to MYC2, suppressing its action (Figure 5). An interesting observation is that MYC2 is irreversibly inactivated by the truncated protein derived from a deletion in the C-terminal region of JAZ3. It has been proposed that this occurs through heterodimerization with another JAZ protein via its N-terminal domain, which in turn binds irreversibly to MYC2, thus acting as a dominant-negative repressor [37]. In Arabidopsis, MYC2 is able to interact with all 12 JAZ proteins, whereas MYC3 shows a strong interaction with only eight of them (JAZ1, JAZ2, JAZ5, JAZ6, JAZ8, JAZ9, JAZ10 and JAZ11) [39] and MYC4 interacts with only JAZ1, JAZ3 and JAZ9 [1]. All of these mechanisms of interaction are similar to that described for MYC2 [1,39]. WRKY Family The defining feature of the WRKY transcription factors is their DNA-binding domain, a highly conserved region of about 60 amino acids. This region contains a nearly invariant sequence, WRKYGQK, in the N-terminal portion of the domain, followed by a zinc-finger motif of the C2H2 (CX4-5CX22-23HXH) or C2HC (CX7CX23HXC) type [41]. WRKY factors are divided into three groups based on the number of WRKY domains in the protein and the structure of their zinc fingers [42]. Group II genes have been subdivided into IIa, IIb, IIc, IId and IIe on the basis of their amino acid sequence. Another division uses phylogenetic data and suggests that the WRKY family in higher plants should be divided into groups I, IIa + IIb, IIc, IId + IIe, and III [43,44]. WRKY transcription factors generally bind to a conserved DNA sequence known as the W-box, (T)(T)TGAC(C/T) [42]; a minimal computational illustration of this consensus appears after the SIB1/SIB2 example below. WRKY proteins are implicated in various molecular events in plants, such as seed development, senescence, dormancy and germination, and abiotic and biotic stresses, among others [41]. A large number of members of the WRKY family are related to pathogen infection and are thus important factors for plant immunity. Some WRKY protein partners have already been identified, and the interactions between WRKY proteins and their binding partners may play roles in signaling, transcription, chromatin remodeling and other cellular processes [45]. The AtWRKY33 protein of Arabidopsis plays an important role during infection by necrotrophic pathogens and belongs to group I of the WRKY family [46]. AtWRKY33 interacts with the proteins SIGMA FACTOR-INTERACTING PROTEIN 1 and 2 (SIB1 and SIB2) (Figure 6) [47]. The SIB1 and SIB2 proteins are classified as VQ proteins because they contain the conserved FXXXVQXLTG (VQ) motif [48][49][50]. AtWRKY33, SIB1 and SIB2 are all induced by the necrotrophic fungus Botrytis cinerea and are coordinately regulated during infection with this pathogen. BiFC assays have shown that the interactions of SIB1 and SIB2 with AtWRKY33 occur in the nucleus of the plant cell (Figure 6). Loss-of-function mutants sib1 and sib2 show decreased resistance to B. cinerea, whereas plants over-expressing SIB1 show increased resistance to the fungus. These experiments indicate a positive role for these two proteins as AtWRKY33 activators, although they are not essential for AtWRKY33-mediated defense in plants [47].
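As a side note for readers who want to locate W-box elements computationally, the consensus quoted above, (T)(T)TGAC(C/T), translates directly into a pattern search. The following minimal Python sketch is an illustration only: the promoter sequence and function names are invented, and only the TGAC(C/T) core and the optional TT extension come from the consensus reported in the text.

```python
import re

# W-box consensus reported for WRKY binding: (T)(T)TGAC(C/T).
# The core is TGAC(C/T); the leading Ts are frequent but not strictly
# required, so we scan for the core and report the T-rich extension.
WBOX_CORE = re.compile(r"TGAC[CT]")

def find_wbox(promoter: str):
    """Return (position, match, has_TT_extension) for each W-box core hit."""
    promoter = promoter.upper()
    hits = []
    for m in WBOX_CORE.finditer(promoter):
        start = m.start()
        extended = promoter[max(0, start - 2):start] == "TT"
        hits.append((start, m.group(), extended))
    return hits

if __name__ == "__main__":
    # Hypothetical promoter fragment, for illustration only.
    seq = "AAGTTTGACCATGCATTGACTTGGA"
    for pos, motif, ext in find_wbox(seq):
        print(f"W-box core {motif} at position {pos} (TT extension: {ext})")
```

In genome-scale work, a position weight matrix would typically replace such a strict pattern, since binding sites tolerate some flanking variation.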
Other interaction partners have been described for the AtWRKY33 protein, including one MAPK (MITOGEN-ACTIVATED PROTEIN KINASE), MPK4, and its substrate, a VQ protein called MAP KINASE SUBSTRATE 1 (MKS1) (Figure 6). In addition to AtWRKY33, the AtWRKY25 protein is also capable of interacting with MPK4 and MKS1 [48,50]. It has been proposed that, in the absence of the pathogen, MPK4, MKS1 and AtWRKY33 exist as a nuclear-localized complex. After induction by either Pseudomonas syringae or flagellin (a protein found in bacterial flagella), the MPK4 protein is activated and phosphorylates its substrate, MKS1. MKS1 phosphorylation releases AtWRKY33 from the complex, allowing AtWRKY33 to bind to the promoter regions of several genes, including that of PHYTOALEXIN DEFICIENT3 (PAD3), which encodes an enzyme that participates in the synthesis of the antimicrobial compound camalexin, a phytoalexin that plays an important role in plant defense (Figure 6) [50]. In addition to MPK4, the AtWRKY33 protein can also interact with MPK3 and MPK6 (Figure 6) [51]. In Arabidopsis, activation of the MPK3/MPK6 cascade results in the increased expression of genes related to camalexin biosynthesis and also increases the expression of AtWRKY33. In atwrky33 mutant plants, the expression of camalexin biosynthesis genes through the MPK3/MPK6 cascade and the induction of camalexin itself are compromised [51]. AtWRKY33 is phosphorylated by MPK3/MPK6 both in vivo and in vitro, and versions of AtWRKY33 mutated at the MPK3/MPK6 phosphorylation target sites are unable to complement the deficiency in camalexin production of the loss-of-function mutant atwrky33. Phosphorylation by MPK3/MPK6 may thus allow AtWRKY33 to increase its own expression, creating a positive feedback loop that amplifies the plant's response to pathogens, including the production of camalexin [51]. In tobacco, the protein NtWRKY1 (a representative of group I of the WRKY family) binds to a MAPK known as SALICYLIC ACID-INDUCED PROTEIN KINASE (SIPK) [52]. SIPK is activated after infection with Tobacco mosaic virus (TMV) [53] and is also related to HR cell death after induction by an elicitor [54]. SIPK phosphorylates WRKY1, resulting in an increase in the binding activity of this transcription factor to its target DNA sequence, the W-box, which also exists in the tobacco chitinase gene CHN50. In assays co-expressing SIPK and WRKY1 in Nicotiana benthamiana, cell death by HR was faster than in plants expressing only SIPK, suggesting the involvement of WRKY1 in the induction of HR-derived cell death, possibly as a component of the pathway located downstream of SIPK [52]. In N. benthamiana, another representative of group I of the WRKY family, NtWRKY8, is also phosphorylated by SIPK and other MAPKs, specifically the WOUND-INDUCED PROTEIN KINASE (WIPK) and NTF4 (a tobacco mitogen-activated protein kinase related to the plant defense response). WRKY8 contains seven potential MAPK phosphorylation sites, five of which are concentrated in the N-terminal region. The N-terminal region of WRKY8 is characterized by groups of proline-directed serine residues (SP clusters), which serve as phosphorylation sites for MAPKs in vitro and in vivo. WRKY8 also contains a D domain adjacent to the N-terminus of the SP cluster, which is essential for the effective phosphorylation of WRKY8 in plants.
NtWRKY8 phosphorylation increases its binding to W-box sites and also its transactivation ability. Silencing of WRKY8 decreases the expression of defense-related genes and increases the plant's susceptibility to pathogens such as Phytophthora infestans and Colletotrichum orbiculare, demonstrating the importance of this protein in plant defense [55]. WRKY proteins can also interact with proteins involved in autophagy [56,57]. In the nucleus, WRKY33 interacts with ATG18a, an important protein in the autophagy pathway of Arabidopsis. The fungus B. cinerea induces autophagic gene expression and the formation of autophagosomes. In plants with a wrky33 loss-of-function mutation, ATG18a induction and the formation of autophagosomes are compromised. Mutants defective in autophagy show a higher susceptibility to B. cinerea and to the necrotrophic fungus Alternaria brassicicola. The interaction between ATG18a and WRKY33, and consequently with the autophagy pathway, is therefore important for signaling the plant defense response against necrotrophic pathogens [58]. Interactions between two or more WRKY proteins have also been reported to be induced by pathogens. The Arabidopsis proteins WRKY18, WRKY40 and WRKY60 can form homo- and heterocomplexes; however, the binding activities of these transcription factors vary with the composition of the complex. Experiments with single loss-of-function mutants for each WRKY protein demonstrated little change in phenotype upon infection by P. syringae or B. cinerea compared to the wild type. It is now known that the double mutants wrky18 wrky40 and wrky18 wrky60, and the triple mutant wrky18 wrky40 wrky60, are more resistant to P. syringae and more susceptible to B. cinerea than the wild type [59]. atwrky18 atwrky40 mutant plants are highly resistant to the fungus Golovinomyces orontii, and WRKY18 and WRKY40 have been shown to act as negative regulators of defense against this fungus [60]. The protein CALMODULIN (CaM) is a modulator of Ca2+ signaling in eukaryotic cells [61]. Calmodulin interacts with several proteins, including WRKYs. In a screen of an Arabidopsis library using CaM as bait, the protein AtWRKY17 was identified as an interaction partner of CaM. AtWRKY17 belongs to group IId of the WRKY family, and its CaM-binding region is a conserved structural motif (C-motif) that is also found in other representatives of this group [62]. Representatives of group IId of the WRKY family are induced by pathogen infection and also by salicylic acid [63]. The site where AtWRKY17 binds CaM is commonly found in proteins known to interact with CaM [62]. Ten other group IId WRKY proteins also bind CaM, and all of their binding domains are similar to the C-motif present in AtWRKY17. Thus, this WRKY/CaM interaction is likely common to all representatives of this group. More studies are needed to establish the role of members of group IId of the WRKY transcription factor family in CaM/Ca2+-mediated signaling [62]. Transcription factors belonging to the WRKY family may also interact with chromatin-remodeling proteins, such as histone deacetylases, which catalyze the removal of acetyl groups from histones. This modification makes the DNA less accessible, thereby repressing the expression of genes present in the region [64]. Arabidopsis AtWRKY38 and AtWRKY62 are part of group III of the WRKY family.
AtWRKY38 and AtWRKY62 appear to have partially redundant functions as negative regulators of basal plant resistance to P. syringae and of the PR1 gene expression induced by the pathogen [65]. Yeast two-hybrid experiments identified HISTONE DEACETYLASE 19 (HDA19) as an interactor of AtWRKY38 and AtWRKY62, and BiFC assays and co-immunoprecipitations demonstrated that the interaction occurs in the nucleus and is highly specific. HDA19 expression is also induced by P. syringae. Over-expression of HDA19 in plants results in repression of the transcription-activation activities of AtWRKY38 and AtWRKY62 [65]. NAC Family In addition to the most studied families of transcription factors involved in plant defense signaling pathways, such as WRKY, MYB and AP2/ERF, factors from other families also participate in modulating responses to biotic stresses. One example is the family of transcription factors containing the NAC domain [66]. The NAC superfamily can be divided into at least seven subfamilies, and the functions of NAC genes are related to their subfamily [66]. Recent studies have shown that proteins produced by pathogens interfere with the function of NAC transcription factors. An example is the RxLR effector Pi03192 produced by Phytophthora infestans, which interacts with two transcription factors belonging to the NAC family, termed NAC TARGETED BY PHYTOPHTHORA 1 and 2 (NTP1 and NTP2). This interaction occurs in the endoplasmic reticulum and prevents NTP1 localization to the nucleus (Figure 7) [67]. Virus-induced gene silencing (VIGS) of the genes encoding these two NAC factors results in increased susceptibility to infection by P. infestans, suggesting that these transcription factors play an important role in plant defense [67]. Viral proteins also interact with transcription factors belonging to the NAC family. A NAC protein from Arabidopsis, designated TCV-INTERACTING PROTEIN (TIP), interacts specifically with the capsid protein (CP) of Turnip crinkle virus (TCV) (Figure 7) [68]. TIP functions through transcriptional activation to promote a basal level of resistance in the plant [68]. The viral CP, produced in infected cells, functions as a virulence factor by binding TIP to reduce basal resistance and to promote rapid systemic infection (Figure 7). Resistant plants expressing a HYPERSENSITIVE RESPONSE PROTEIN (HRT) may guard the TIP protein by detecting the change in TIP caused by the TIP-CP interaction, resulting in a stronger, HR-mediated resistance response [68]. Similarly, an interaction between the helicase domain of the TMV 126-/183-kDa replicase protein(s) and the Arabidopsis NAC domain transcription factor ATAF2 has been identified [69]. Through this interaction with ATAF2, TMV suppresses basal defense pathways during the compatible virus-host interaction (Figure 7) [69]. This hypothesis is supported by the reduced ability of SA to transcriptionally activate defense-related genes within tissues systemically infected by TMV [69]. NAC proteins also interact with protein suppressors of plant defense. In non-induced conditions (without pathogen attack), the protein SUPPRESSOR OF NONEXPRESSOR OF PR GENES INDUCIBLE 1 (SNI1) binds to CBNAC, a calmodulin-regulated NAC transcriptional repressor in Arabidopsis [70]. CBNAC binds to the E0-1-1 element of the PR1 promoter, and SNI1 enhances the DNA-binding activity of CBNAC, consequently enhancing the repression of the PR1 gene by SNI1 [70].
In the presence of an inducer (during pathogen attack), PR1 gene expression is induced by the translocation of a large amount of active NPR1 to the nucleus and its interaction with TGA transcription factors. The SNI1/CBNAC protein complex can be disassembled by NPR1, calmodulin or other, as yet unknown, mechanisms [70]. Conclusions The evolution of the plant immune response has resulted in a highly effective defense system that is able to resist potential attacks by several types of pathogens. Within this complex defense system are regulatory proteins, such as transcription factors. Over the past few years, a substantial number of proteins that interact with transcription factors involved in plant defense against pathogens have been identified. In this review, we describe some of the key protein-protein interactions involved in regulating the function of transcription factors important in the defense against biotic stress in plants, including members of the bZIP, AP2/ERF, MYB, MYC, WRKY and, more recently, NAC families. The diversified modular domains present in transcription factors and involved in direct interactions with different proteins indicate the diversity of possible interactions modulating the function of these factors in the process of plant defense. Various processes of plant defense against pathogen attack are known today, each having a multitude of refined regulatory mechanisms. In this context, we present examples of interactions that can modulate the functions of important transcription factors, either by activation or by repression of defense signaling pathways, through protein-protein interactions (Figures 1 to 6). A broader view of the remarkable diversity of the regulatory mechanisms displayed during plant defense reveals the functional redundancy of several transcription factor-interaction partners, such as the ANK1 and LSD1 proteins (Figure 1), both genetically unrelated, which interact with transcription factors of the bZIP family and prevent the translocation of these factors to the nucleus. On the other hand, diverse molecular modes of repression of plant defense pathways are produced by pathogens such as fungi, oomycetes, bacteria and viruses, which suppress the plant response to biotic stress (Figures 3 and 7). We also discuss the key role of the ubiquitin/26S proteasome system (UPS) in protein turnover during the regulation of transcription factor activity in different molecular pathways of plant defense, including the modulation of the concentration of these factors in different subcellular compartments (Figures 1, 3, 4 and 5). A major question left unanswered about these interaction networks is whether the interactions are conserved across plant species or evolved to fine-tune particular responses to specific plant pathogens. The study of Pseudomonas syringae (Pst) DC3000 pathogenesis has not only provided several conceptual advances in understanding how a bacterial pathogen employs Type III effectors to suppress plant immune responses and promote disease susceptibility but has also facilitated the discovery of the immune function of stomata and of key components of JA signaling in plants [12,27]. The concepts derived from the study of Pst DC3000 have furthered the understanding of the pathogenesis mechanisms of other plant pathogens [12].
Similar virulence mechanisms and infection strategies are generally shared by viruses, bacteria, fungi and oomycetes, despite their differences in biochemistry, physiology and genetics [12] (Figure 7). In the coming years, it is expected that interacting proteins will continue to be identified by traditional procedures, such as yeast two-hybrid assays, and by more recently developed methods, such as high-density protein microarrays. A particularly important effort will be the integration of knowledge of these complex protein-protein and protein-DNA interactions in the context of the transcription of target genes, which is important for developing a thorough understanding of the regulatory network of responses to stress caused by pathogens. These studies may lead to a better understanding not only of the interactions that regulate these transcription factors but also of the important biological processes that these factors modulate.
2015-09-18T23:22:04.000Z
2014-03-01T00:00:00.000
{ "year": 2014, "sha1": "5e0c5013929ea614877df9ab3de9ec797dcab9b1", "oa_license": "CCBY", "oa_url": "http://www.mdpi.com/2227-7382/2/1/85/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "5e0c5013929ea614877df9ab3de9ec797dcab9b1", "s2fieldsofstudy": [ "Biology", "Environmental Science" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
265034858
pes2o/s2orc
v3-fos-license
Transcatheter closure in preterm infants with patent ductus arteriosus: feasibility, results, hemodynamic monitoring and future prospectives Ductal patency in preterm infants is potentially associated with long-term morbidities related to either pulmonary overflow or systemic steal. When an interventional closure is needed, it can be achieved with either surgical ligation or a catheter-based approach. Transcatheter PDA closure is among the safest of interventional cardiac procedures and is the first choice for ductal closure in adults, children, and infants weighing more than 6 kg. In preterm and very low birth weight infants, it is increasingly becoming a valid and safe alternative to ligation, especially for its high success rate and minor invasiveness and side effects. Nevertheless, as it is performed at increasingly lower weights and gestational ages, hemodynamic complications are possible events to be foreseen. Procedural steps, timing, results, possible complications and available monitoring systems, as well as future outlooks, are discussed here. Supplementary Information The online version contains supplementary material available at 10.1186/s13052-023-01552-2. Background The ductus arteriosus of preterm infants remains patent for ten or more days after birth in more than 50% of all infants born before 30 weeks' gestation [1, 2]. Ductal patency is potentially associated with long-term morbidities related to either pulmonary overflow or systemic steal. Nevertheless, no causal relationship has been proven between patent ductus arteriosus (PDA) and increased mortality or specific morbidities, except in retrospective studies [3, 4]. After more than 40 years of clinical research, including many randomized controlled trials (RCTs), many questions remain unanswered. Strategies for its management, which include a medical pharmacological approach, an interventional approach and a conservative approach, remain a subject of great controversy because of the paucity of evidence that interventions reduce adverse outcomes [5][6][7]. Medical and management approaches In a meta-analysis of 58 RCTs including 6028 subjects, medical prophylaxis or treatment of the PDA was not associated with any significant reduction in neonatal mortality or in measured morbidities [8]. Nevertheless, since these RCTs included infants over a wide range of gestational ages, used widely varying PDA definitions (including PDA diameter alone), and provided open-label treatment, it is difficult to draw inferences on clinical outcomes from their results. Medical therapy aimed at closure of a hemodynamically significant PDA (HsPDA) is based on the administration of either non-steroidal anti-inflammatory drugs (NSAIDs), such as indomethacin or ibuprofen, or paracetamol. However, which dose or which drug is recommended for each infant is far from established. The modern conservative approach has gained interest since the early 2000s. It is driven by concerns over unnecessary and potentially harmful interventions without demonstrated benefits other than ductal closure itself [7]. This approach includes a variety of actions, including positive pressure for respiratory support, mild fluid restriction [9], selective diuretic use, avoiding anemia and providing adequate nutrition until the duct is no longer hemodynamically significant.
Non-medical approaches Even if not considered a first-line option, when medical treatments fail and the patient is still suffering from the hemodynamic impact of a large PDA, an interventional closure is considered. This may be achieved by a surgical or a catheter-based approach. Surgical ligation of the PDA has historically been the first alternative to failed pharmacological treatment and is usually performed through a left thoracotomy. However, it carries an increased risk of mortality and significant morbidities in this vulnerable group of infants. A thirty-day mortality rate of around 5-8% has been reported. Surgical ligation has been reported to be associated with bleeding, infection and vocal cord paresis, hence with gastroesophageal reflux disease (GERD) and the need for prolonged intubation and mechanical ventilation [10]. The post-operative course of preterm infants undergoing surgical ligation of the PDA is often complicated by post-ligation cardiac syndrome (PLCS), with decreased cardiac output and hemodynamic instability in 28%-45% of infants despite targeted milrinone prophylaxis. It has also been associated with an increased incidence of bronchopulmonary dysplasia (BPD), retinopathy of prematurity (ROP) and neurodevelopmental impairment in comparison with delayed ligation in a selected population. Nevertheless, controversy remains as to whether these outcomes are related to surgical ligation, to the prolonged exposure of preterm infants to the PDA itself, or to possible associated co-morbidities [11, 12]. Transcatheter PDA closure is among the safest of interventional cardiac procedures and is the first choice for ductal closure in adults, children, and infants ≥ 6 kg. A device is deployed by a transcatheter approach to seal the opening between the aorta and the pulmonary artery, thereby restoring normal blood flow. In a recent study, Wilson et al. evaluated the success and complication rates of transcatheter PDA closure in 141 adult patients. They reported a 100% success rate and no major complications. Six percent of treated patients had a small residual shunt, and only 2 patients had a residual leak on echocardiography at follow-up. The authors concluded that transcatheter PDA closure is very effective in adults across all duct morphologies and is associated with a very low complication rate [13]. In another study, Sudhakar et al. provided comprehensive data on the safety and efficacy of transcatheter PDA closure in an adult and adolescent population, confirming the feasibility of this technique in a younger population. Of 70 PDA device-closure cases, 64 were carried out using occluders (ADO-I and II, Lifetech, Cardi-O-Fix). Device success was achieved in all cases, including patients with very large PDAs, and no major complications occurred. At follow-up, complete closure was observed in all patients [14]. This success in adults and adolescents therefore paved the way for transcatheter closure in preterm infants. Transcatheter PDA occlusion in preterm infants In recent years, transcatheter PDA closure has gained wide attention as a less invasive alternative to surgical ligation and a more effective treatment option than medical therapy for extremely low birth weight (ELBW) infants [15]. In two studies, Zahn et al.
demonstrated that transcatheter PDA closure can be successfully performed in preterm neonates using currently available technology, with a high success rate and a low incidence of complications. In addition, the authors introduced a new transvenous method that uses echocardiography together with careful employment of fluoroscopy to circumvent arterial access in this vulnerable group of patients [16, 17]. Since 2019, the FDA and the EU have approved the Amplatzer Piccolo™ Occluder (APO, Abbott Structural Heart, Plymouth, MN, USA) for the treatment of preterm patients [18, 46]. It has a design specific to the fetal ductus morphology, an elongated-tubular PDA with a narrowing on the pulmonary side ('hockey stick' morphology) [19, 20] (Fig. 1). In the United States, a single-arm, prospective, multicenter, non-randomized study was conducted to assess its efficacy in patients weighing 700 g or more. It showed an implant success rate of 95.5% overall and 99% in patients weighing 2 kg or less [18]. To proceed with transcatheter closure, the duct must be longer than 3-5 mm with a maximal diameter of 4 mm [21]. Transcatheter procedure The catheterization laboratory setting is extremely important when treating preterm infants; this procedure needs a multidisciplinary team that includes neonatologists, anesthesiologists, pediatric cardiologists, and specialized nurses from the catheterization laboratory and the neonatal intensive care unit. Before the procedure is performed, a checklist is shared with the neonatologists and anesthesiologists to reduce potential risks: recent blood exams are verified, one bag of red blood cells should be available, and the ventilator in the catheterization laboratory should be specific for preterm infants. Temperature control of the preterm infant is mandatory, as is the availability of a neonatal ultrasound probe. If possible, the temperature of the catheterization laboratory should be raised to avoid cooling of the preterm infant. When the team, and especially the interventional cardiologist, is more than confident with the procedure, the transcatheter closure can be performed in the neonatal intensive care unit with a portable fluoroscopic unit. Before starting the procedure, an accurate echocardiography is performed to confirm the anatomy and measurements of the PDA [22, 23]. To reduce potential complications, the procedure should be concluded within 60-90 min. Therefore, in normal settings, right heart catheterization with measurement of pressures, PVR and CI is avoided. When transcatheter closure is completed, babies must return to the intensive care unit as soon as possible. A surgical back-up should be available. The procedure is performed under general anaesthesia, and 4-French femoral vein access is required. Arterial access is contraindicated because of the high risk of vascular complications [24]. Vascular accesses are inserted under echo guidance. The first critical step is crossing the tricuspid valve with a guidewire and catheter. To do so, a 3.3 Fr right coronary catheter (JR Mongoose) is advanced up to the annulus of the tricuspid valve. A 0.014″ J-tip coronary guidewire is then advanced through the tricuspid valve into the right ventricle, main pulmonary artery, ductus arteriosus and descending aorta. A telescopic system, which includes the LP TorqVue delivery system and a microcatheter, is then advanced over the coronary guidewire. By using this system, the risk of entrapping the tricuspid valve is extremely low.
A single hand injection of contrast is performed across the PDA; 1 ml/kg is enough for a complete view of the PDA in the latero-lateral projection. Echocardiographic and angiographic measurements are obtained to finally choose the device (Figs. 2, 3, 4). The device is chosen using two parameters: the PDA measurements and the weight of the preterm infant. In preterm infants weighing less than 2 kg, the Piccolo Occluder must be placed completely inside the ductus. Conversely, in infants weighing more than 2 kg the device can be placed with its external disks in the aorta and in the pulmonary artery [18, 19, 22]. The Amplatzer Piccolo Occluder™ (APO) is advanced and deployed in the correct position across the PDA under echocardiographic monitoring [25, 23] and fluoroscopic control (LL projection) (clip 1, clip 2). The trachea and oesophagus are good markers for correct positioning of the device. A properly oriented device at fluoroscopy appears coaxially aligned with the long axis of the ductus, pointing toward 10 o'clock in a 90° lateral fluoroscopic view [18, 19]. Residual shunt, aortic coarctation and protrusion into the left pulmonary artery are excluded with echocardiography. The device is then released when in the correct position (Clip 3, Fig. 5). Results of the transcatheter procedure Transcatheter closure of the PDA in preterm infants is therefore a feasible and safe technique, with a reported success rate of 98% [26, 27] and a very low rate of major adverse events, as reported in meta-analysis studies [16, 17, 25, 28]. Sathanandam et al. summarized the current consensus guidelines for the prevention and management of periprocedural complications of transcatheter PDA closure with the Amplatzer Piccolo Occluder in ELBW infants [19]. Despite the low frequency of periprocedural complications, severe reported complications are dissection of the inferior vena cava and cardiac perforation (rare, ≃ 0.8%); these risks are minimized using a 0.014″ guide wire [29]. Less severe and more frequent complications are protrusion of the proximal disk at the pulmonary end causing left pulmonary artery (LPA) stenosis (1.2%), protrusion of the distal disk into the aorta causing aortic coarctation (1.2%), device embolization (more frequent in patients with a large PDA) (2.8%), and tricuspid regurgitation (mild-trivial, ≃ 2%). There is also a risk of residual shunt or recurrence of the PDA. Surgical versus transcatheter PDA closure in preterm infants A recent meta-analysis [34], which screened 97 studies, 8 of which met the eligibility criteria, with a total of 756 preterm infants below 2000 g birthweight, aimed at assessing the safety and efficacy of transcatheter closure (TC) compared to surgical ligation (SL) in preterm infants with PDA. Compared to TC, SL had higher mortality rates. No difference was seen in post-procedural complication rate, mean duration of post-procedural mechanical ventilation, hospital stay length or neonatal intensive care unit stay length. As to renal function, a single-center retrospective study observed a significant improvement in renal function after transcatheter closure, even with the use of contrast, comparable to that of patients who underwent surgical closure [35]. Table 1 summarizes the pros and cons of the surgical and transcatheter procedures. When is the right time to implant a transcatheter occluder? There is still debate on how to evaluate a hemodynamically significant duct. Consistent PDA scores [36] should be developed in order to ensure that the infants at greatest risk of adverse ductal consequences are included.
Ideal timing of transcatheter closure is yet to be determined. As previously stated, both transcatheter and surgical procedures are mostly (but not only) performed after medical treatment failure. Besides anecdotal findings and single-center experiences, which could suggest that the length of time a preterm infant is exposed to the effects of significant ductal shunting is directly related to the risk of developing morbidities such as BPD [3] and acute renal failure [37], no clear evidence supports a specific recommendation in terms of timing. Nevertheless, Regan et al.'s subgroup analysis of their cohort [27] demonstrated a shorter hospitalization in babies younger than 4 weeks of life at the time of transcatheter closure. It is important to note that not all cases of PDA can be treated with transcatheter closure, and it is crucial to consider individual patient factors when determining the appropriate treatment strategy. Therefore, a multidisciplinary team consisting of neonatologists, cardiologists and pediatric cardiac surgeons is necessary to make informed decisions about treating a preterm infant with PDA. Surgical closure remains a viable option for infants with complex anatomy or significant comorbidities. Hemodynamic monitoring of patients with PDA before, during and after the procedure Infant hemodynamic balance depends on cardiac output (CO) and systemic vascular resistance (SVR). McNamara et al. [38] described a population of preterm infants weighing between 995 and 1318 g whose PDA was closed with a percutaneous device. They showed that one hour after PDA closure there was a significant decrease in stroke volume (SV), consequent to reduced left ventricular pulmonary venous return, and an increase in arterial elastance, due to the loss of the low-resistance pulmonary vascular bed circuit, with maintained diastolic blood pressure (BP). After PDA closure, the significant increase in arterial elastance would be expected to generate a significant increase in BP, but the pronounced drop in preload determines a low cardiac output and, consequently, an apparently stable diastolic pressure, so that clinicians may fail to recognize significant changes in left ventricular function (a simple numerical sketch of these quantities is given at the end of this subsection). For this reason, monitoring CO and SVR in preterm infants undergoing percutaneous PDA closure is very important, and multiple tools to identify short-term myocardial dysfunction are needed to allow early treatment. The use of targeted neonatal echocardiography is useful for early detection of infants at risk of PLCS: a CO < 200 ml/kg/min within 1 h of PDA ligation may predict subsequent cardiorespiratory compromise and the need for inotropic agents, and administration of i.v. milrinone is associated with improved postoperative stability [39]. In the same way, after percutaneous PDA closure, early functional echocardiography allows detection of cases in which the myocardium is unable to adapt to sudden changes in loading conditions. After PDA closure, interstitial pulmonary oedema, sustained by exposure to a high-volume left-to-right shunt, is reduced, and a reduction in lung ultrasound score (LUS) has been demonstrated 1 h after surgical intervention [40]. Moreover, the drop in LUS is correlated with a lowering of CO, suggesting that lung ultrasound may be a useful tool to guide monitoring of the pulmonary disease and of cardiac function also after PDA device positioning.
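To make the quantities discussed above concrete, the following Python sketch computes cardiac output, systemic vascular resistance and an arterial-elastance approximation from textbook formulas (SVR = (MAP − CVP)/CO, converted to dyn·s·cm⁻⁵ by the standard factor of 80; Ea ≈ 0.9 × systolic BP / SV). The numerical values are invented for illustration and are not patient data; they merely show how a fall in stroke volume at unchanged pressures pushes CO below the 200 ml/kg/min threshold mentioned above while raising SVR and Ea:

```python
def cardiac_output_l_min(stroke_volume_ml: float, heart_rate_bpm: float) -> float:
    """Cardiac output in L/min from stroke volume (ml) and heart rate (bpm)."""
    return stroke_volume_ml * heart_rate_bpm / 1000.0

def svr_dyn_s_cm5(map_mmhg: float, cvp_mmhg: float, co_l_min: float) -> float:
    """Systemic vascular resistance: (MAP - CVP) / CO, times 80 to convert
    from Wood units to dyn*s*cm^-5."""
    return (map_mmhg - cvp_mmhg) / co_l_min * 80.0

def arterial_elastance(systolic_bp_mmhg: float, stroke_volume_ml: float) -> float:
    """Effective arterial elastance Ea (mmHg/ml), using the common
    approximation end-systolic pressure ~= 0.9 * systolic pressure."""
    return 0.9 * systolic_bp_mmhg / stroke_volume_ml

WEIGHT_KG = 1.2      # illustrative preterm weight, not patient data
HEART_RATE = 150.0   # beats per minute, illustrative

for label, sv_ml in (("pre-closure ", 1.8), ("post-closure", 1.2)):
    co = cardiac_output_l_min(sv_ml, HEART_RATE)
    print(label,
          f"CO = {co * 1000 / WEIGHT_KG:5.0f} ml/kg/min,",
          f"SVR = {svr_dyn_s_cm5(45, 4, co):6.0f} dyn*s/cm^5,",
          f"Ea = {arterial_elastance(60, sv_ml):4.1f} mmHg/ml")
```

With these made-up numbers, CO falls from about 225 to 150 ml/kg/min while SVR and Ea rise, which is the pattern the text describes as masking ventricular dysfunction behind a stable diastolic pressure.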
Electrical cardiometry (EC) is a non-invasive method that measures thoracic electrical bioimpedance and derives hemodynamic parameters such as CO, SVR and a contractility index; its application during PDA ligation has demonstrated that the abrupt diversion of ductal shunting contributes to hemodynamic aberrations in VLBW infants, and that increased SVR, decreased preload and impaired left ventricular performance may be their principal causes [41]. In preterm infants undergoing percutaneous PDA closure, EC is useful to record hemodynamic changes and to recognize the acute increase in SVR and its trend: a recent paper suggests that a long persistence of high SVR could be correlated with circulatory impairment and a drop in CO, resulting in the development of PLCS, while a rapid normalization of SVR to its preoperative value may be a good indicator of cardiorespiratory stability [42]. Near-infrared spectroscopy (NIRS), which measures the difference in the absorption spectra of oxygenated and deoxygenated hemoglobin to indirectly assess flow, provides a valid continuous assessment of regional tissue oxygenation (rSO2) and has become available and gained evidence-based application in neonatal intensive care [43]. In pediatric and neonatal cardiac surgery, it can be applied perioperatively to monitor regional cerebral tissue oxygenation and perfusion. Cerebral and renal oxygen saturation and extraction do not seem to be affected by an HsPDA or by retrograde diastolic blood flow in the descending aorta [44]. After PDA ligation and transcatheter closure in preterm infants, an initial short-term decrease followed by an increase in cerebral rSO2 has been observed [45], due to the perturbation of cerebral blood flow; future research is needed to understand the effects on cerebral oxygenation during transcatheter closure of the ductus arteriosus. Little is known about the hemodynamic complications of transcatheter closure. The lower incidence of hemodynamic imbalance and of the need for inotropes and ventilatory support might be due to demographic parameters: infants who undergo surgical ligation are generally smaller and younger in gestational age (GA) and in days of life (DOL). Future outlook Many steps have been taken in recent years towards a less invasive procedure with a higher rate of success. However, there are still several critical steps related to the transfer of patients to the catheterization laboratory. Efforts are being made to perform the procedure at the bedside in the neonatal intensive care unit under echocardiographic and fluoroscopic monitoring. Several units in the world are also trying to take the next step, a totally echocardiography-guided procedure at the bedside in the neonatal intensive care unit [46]. This will reduce the logistic burden and the impact of a transfer on the preterm infant's wellbeing. Conclusions In conclusion, transcatheter PDA closure is increasingly becoming a valid and safe alternative to ligation, especially for its minor invasiveness and side effects. Nevertheless, with the reduction in the weight and gestational age of the newborns in whom it is performed, hemodynamic complications are possible events to be foreseen. Despite the usefulness of this method in managing preterm neonates, there are still limitations to the procedure, and surgical closure may still be a viable option depending on the individual case. It is therefore important to continue researching and enhancing the device and delivery system to maximize its potential benefits for this vulnerable population.
Furthermore, hemodynamic monitoring should include the integration of multiple systems (functional echocardiography, lung ultrasound scan, EC, NIRS) to promptly recognize those infants with ventricular dysfunction who may benefit from early treatment. Fig. 2 Echocardiographic measurement of the patent duct's diameter in a short-axis scan. Fig. 4 Angiographic latero-lateral view and measurement of the patent duct after injection of contrast. Table 1 Pros and cons of surgical and transcatheter procedures
2023-11-07T14:12:08.280Z
2023-11-06T00:00:00.000
{ "year": 2023, "sha1": "e8e6a581f6a5102c8d9004a29c37bace9bf0df01", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Springer", "pdf_hash": "b3b77f7f1c84e392925a24fa5cf29123dd187b59", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
267028840
pes2o/s2orc
v3-fos-license
Effect of different surgical techniques on postoperative wound infection in patients with uterine prolapse: A meta-analysis Abstract A number of controlled trials have been conducted to assess the impact of uterus-preserving surgery or hysterectomy on wound complications and haemorrhage, but there is no indication as to which method is more beneficial for wound healing. This research is intended to provide a comprehensive overview of the available evidence on wound healing for both operative methods. Four databases were reviewed from inception to October 2023. The odds ratio (OR) and the mean difference (MD) for the two groups were computed with a random-effects model, together with the corresponding 95% confidence intervals. A total of five studies were included in the overall design, enrolling 16 972 patients. No statistically significant difference was found in the rate of postoperative wound infection between the two treatments (OR 1.46; 95% CI 0.66-3.22; p = 0.35); the rates of bleeding after surgery did not differ significantly between procedures (OR 1.41; 95% CI 0.91-2.17; p = 0.12); and two studies demonstrated no statistically significant difference in the rate of postoperative incisional hernia (OR 2.58; 95% CI 0.37-18.05; p = 0.34). Our findings indicate that uterine preservation and hysterectomy carry a similar risk of wound infection, haemorrhage and incisional hernia. Over the past few years, hysterectomy has been the default indication, regardless of whether there is a problem with the uterus and regardless of the woman's wishes. Hysterectomy is still regarded as the standard procedure for the treatment of uterovaginal prolapse, even though it is the consequence, not the cause, of the prolapse. But the role of hysterectomy in prolapse reconstruction has been called into question. In recent years, there has been a profound shift in the way women live, believe and think about sex and pregnancy, with a number of those undergoing surgery wanting to keep their uterus [1]. There is some doubt about the effect of hysterectomy on uterine prolapse [3][4]. Today, uterine preservation is still a new concept in prolapse surgery [5][8][9][10][11]. It is very important that the surgeon knows the state of the evidence for this kind of surgery in terms of prolapse. Very little research has been done on preserving the uterus, and it remains unclear whether the uterus should be preserved or removed [13][14][15][16][17]. While most of the research is concerned with the safety of the operation itself, there has been no concrete investigation into the impact of the various operative options on postoperative wound infection in women with uterine prolapse. Thus, we performed a meta-analysis to compare the efficacy of uterus preservation and hysterectomy in the treatment of uterine prolapse. | Eligibility criteria To be selected, a report had to involve women with symptomatic uterine prolapse, compare the efficacy of uterus preservation and hysterectomy in treating uterine prolapse, and be a randomised controlled trial or prospective cohort study. Studies published as abstracts, letters to the editor, reviews or meta-analyses were not considered.
| Search strategy An exhaustive electronic search of four databases, including PubMed and Embase, was conducted from inception to October 2023. The references of identified articles were also searched. The search was restricted to articles published in English and was carried out using combinations of related descriptor terms. The detailed search pattern is shown in Table 1. | Study selection The papers were selected and grouped by two authors, and the data were then extracted with standardised forms. All discrepancies in study selection or data collection were settled by mutual agreement of the two authors. The authors evaluated the titles and abstracts of all records retrieved by the search strategy. Where the title or abstract of a report contained insufficient information, the full text was considered. The screening procedure is illustrated in Figure 1. | Data collection Only trials that fulfilled the eligibility criteria were included. An inventory of the trials included in the systematic assessment was prepared. The reference lists of reviews and of excluded papers were checked to identify studies that might not have been captured by the main search strategy. The meta-analyses included only those studies that reported a standard deviation (SD) for each of the parameters discussed. The standard tables contained the following data: the name of the study, the author, the date of publication, the sample size and duration, the eligibility and exclusion criteria, and the outcome measures and their results. | Analysis and assessment of risk of bias We evaluated the methodology of each trial on the basis of scientific merit, probability of bias and integrity of reporting, applying a pre-defined three-level classification of the studies as low, medium or severe risk. The rating was determined by the reviewer's perception of how likely the trial was to be biased in light of issues associated with the ROBINS-I risk-of-bias instrument. In every trial, individual results were scored individually based on their degree of performance, repeatability and reliability, as well as the significance of the results from a patient's point of view. | Data analysis The odds ratio (OR) and the corresponding 95% CI for the occurrence of an outcome event were derived from the analysis of the data. The results of the trials were combined using fixed-effect and random-effects models. p < 0.05 was regarded as statistically significant. The I² statistic was used to characterise the variability between trials due to heterogeneity rather than sampling error and to quantify the variability of the data. When the heterogeneity exceeded 50%, a random-effects model was applied. Meta-analyses were carried out with RevMan 5.3.
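Although the analyses were run in RevMan, the random-effects pooling described above can be reproduced with the standard DerSimonian-Laird method. The sketch below is a minimal Python implementation; the 2×2 counts at the bottom are hypothetical and do not come from the included trials. It returns the pooled OR, its 95% CI and the I² statistic:

```python
import math

def pool_or_random_effects(tables):
    """DerSimonian-Laird random-effects pooling of odds ratios.

    `tables` is a list of 2x2 counts (a, b, c, d):
        a = events in group 1, b = non-events in group 1,
        c = events in group 2, d = non-events in group 2.
    A continuity correction of 0.5 is applied to tables containing zeros.
    Returns (pooled OR, 95% CI low, 95% CI high, I^2 in percent).
    """
    y, v = [], []
    for a, b, c, d in tables:
        if 0 in (a, b, c, d):
            a, b, c, d = (x + 0.5 for x in (a, b, c, d))
        y.append(math.log((a * d) / (b * c)))        # log odds ratio
        v.append(1 / a + 1 / b + 1 / c + 1 / d)      # its variance
    w = [1 / vi for vi in v]                          # fixed-effect weights
    y_fe = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - y_fe) ** 2 for wi, yi in zip(w, y))
    df = len(y) - 1
    tau2 = max(0.0, (q - df) / (sum(w) - sum(wi ** 2 for wi in w) / sum(w)))
    w_re = [1 / (vi + tau2) for vi in v]              # random-effects weights
    y_re = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return (math.exp(y_re),
            math.exp(y_re - 1.96 * se),
            math.exp(y_re + 1.96 * se),
            i2)

# Hypothetical counts for illustration only (not data from the included trials).
example = [(5, 95, 40, 960), (3, 47, 30, 470), (8, 192, 60, 1440)]
or_, lo, hi, i2 = pool_or_random_effects(example)
print(f"pooled OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f}), I^2 = {i2:.0f}%")
```

RevMan's random-effects analyses rest on the same DerSimonian-Laird heterogeneity estimate; the inverse-variance form shown here is simply the easiest version to reproduce by hand.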
| Study characteristics For our final analysis, we chose five publications out of 508. The five publications were released from 2005 to 2022. There were 16 972 cases of uterine prolapse, of whom 2506 were treated with hysteropreservation and 14 466 with hysterectomy. The total sample size ranged from 34 to 14 192. The features of the uterine prolapse patients are presented in Table 2. | Wound infection Among the five trials, cases of postoperative wound infection were reported in women with uterine prolapse: 2506 cases underwent uterus-preserving surgery and 14 466 underwent hysterectomy. No statistically significant difference was found in the rate of postoperative wound infection between uterus-preserving surgery and hysterectomy (OR, 1.46; 95% CI, 0.66, 3.22; p = 0.35), Figure 4. | Wound haematoma In four clinical trials, cases of postoperative wound haematoma were reported in women with uterine prolapse: 2472 in the uterus-preservation group and 14 428 in the hysterectomy group. No statistically significant difference was found in the incidence of postoperative wound haematoma between uterus-preserving surgery and hysterectomy (OR, 1.41; 95% CI, 0.91, 2.17; p = 0.12), Figure 5. FIGURE 3 Summary of risk of bias. | Incisional hernia In two trials, the incidence of postoperative incisional hernia was observed in 75 cases of hysteropreservation and 79 cases of hysterectomy. There was no difference between hysteropreservation and hysterectomy in the occurrence of postoperative incisional hernia (OR, 2.58; 95% CI, 0.37, 18.05; p = 0.34), Figure 6. | Publication bias Funnel plots of the analyses of publication bias for wound infection and haematoma after uterine prolapse surgery are shown in Figures 7 and 8. | DISCUSSION The classical therapy for serious uterine prolapse is to remove the uterus and perform reconstruction simultaneously, but advances in anatomy and surgery have cast doubt on the necessity of hysterectomy. In addition, it is easier to recommend that a woman retain her uterus if there is no need for removal. This meta-analysis is intended to give physicians some suggestions as to which kind of surgery should be preferred in terms of postoperative wound healing. This systematic assessment and meta-analysis was based on controlled studies that compared the effects of uterus preservation and hysterectomy on postoperative wound healing. The study enrolled 16 972 patients in five controlled studies, among them 2506 with uterus-preserving surgery and 14 466 with hysterectomy. The total sample size ranged from 34 to 14 192. Our findings did not indicate a statistically significant difference in the risk of postoperative wound infection, wound haematoma or incisional hernia. While a number of studies have tried to address this issue, their results have been restricted by trial design or patient features. Moreover, the stratified analysis did not distinguish among trial designs. The clinical features of the uterus-preserved and hysterectomised women were not well balanced, and the combination of these findings might be biased. Thus, the present meta-analysis compares the impact of uterus conservation versus hysterectomy on the wound healing of women using the limited number of published controlled studies, in order to enhance the existing data for analysis.
This study has a number of limitations. First, while the results were derived from published controlled trials, their quality varied. Second, the heterogeneity of the trials, according to the severity of the condition, the setting and the recovery therapy, should be taken into consideration. Third, we mainly compared uterus preservation and hysterectomy, but no definite operative route was specified, which might influence the postoperative healing of uterine prolapse. Fourth, publication bias is inevitable, as the analyses were based on published articles. | CONCLUSION This study showed that there was no difference between the two groups in the risk of postoperative infection, bleeding in the wound or occurrence of incisional hernia after operation. Therefore, the hysterectomised women did not receive any extra benefit. It is important to assess the impact of hysteropreservation and hysterectomy on the postoperative recovery of the wound in a large, randomised, controlled study. FIGURE 1 Flow chart of the study. FIGURE 4 Forest plot of the effect of surgery with hysteropreservation and hysterectomy on postoperative wound infection in patients with uterine prolapse. FIGURE 5 Forest plot of the effect of hysteropreservation and hysterectomy on postoperative wound haematoma in patients with uterine prolapse. FIGURE 6 Forest plot of the effect of surgery with hysteropreservation and hysterectomy on the occurrence of postoperative incisional hernia in patients with uterine prolapse. FIGURE 7 Funnel plot of the results of hysteropreservation and hysterectomy on the occurrence of postoperative wound infection in patients with uterine prolapse. FIGURE 8 Funnel plot of the results of hysteropreservation and hysterectomy on postoperative wound haematoma in patients with uterine prolapse. TABLE 2 Characteristics of the selected studies. A qualitative evaluation of the five trials is presented in Figures 2 and 3.
2024-01-19T05:09:56.120Z
2024-01-01T00:00:00.000
{ "year": 2024, "sha1": "40422ad61c68a8e7857d550e470c7c2494bf829a", "oa_license": "CCBYNCND", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/iwj.14588", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "40422ad61c68a8e7857d550e470c7c2494bf829a", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
246891913
pes2o/s2orc
v3-fos-license
Modelling of Bioremediation of Oil-Contaminated Soil Using Chicken Droppings as Biostimulant The suitability of poultry droppings as a biostimulant for the remediation of crude oil-contaminated soil has been investigated. Four equal-sized containers, each containing 400 g of soil, were each contaminated with 20 ml of crude oil, after which they were thoroughly mixed. To the first three containers, A, B and C, were respectively added 20 g, 60 g and 100 g of poultry droppings which had previously been dried and pulverised. To the fourth container, which served as the control, no poultry droppings were added. The degradation of oil in the samples was monitored for 7 weeks by observing the residual hydrocarbon content (RHC) and pH of the samples. The RHC values of the samples over the seven-week period were then modelled using an artificial neural network (ANN) in the MATLAB Neural Network Toolbox. The RHC values decreased for all samples, with the highest reduction of 100 mg/kg obtained in sample C and the least reduction of 1000 mg/kg in sample D. The pH values were observed to increase slightly from the acidic region of 5.5 to a range of 7.8 to 8.3. The best and most suitable training algorithm for the RHC of the samples was TRAINSCG, since it had the least mean square error (MSE) value of 0.00183 as well as the highest R-squared value of 0.99808. DOI: https://dx.doi.org/10.4314/jasem.v25i11.7 Copyright: Copyright © 2021 Uwadiae and Obasi. This is an open access article distributed under the Creative Commons Attribution License (CCL), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Dates: Received: 22 August 2021; Revised: 17 September 2021; Accepted: 06 October 2021 Due to the high demand for petroleum products in society today, petroleum industry activities have increased enormously, sometimes resulting in the undesirable occurrence of oil spills, especially in oil-producing countries such as Nigeria (Adams et al., 2015; Albert et al., 2018). Oil spills in soils are unwanted as they adversely affect the functioning of natural processes, disturb agricultural activities, set back economic growth and produce undesirable environmental and health effects (Prasad and Anuprakash, 2016). On account of the highlighted adverse effects of oil spills on soil, there is therefore a need to decontaminate it. The methods used for the remediation of polluted soils, depending on the type and level of contaminants present, include chemical oxidation, soil stabilization, physical methods and bioremediation. Of all these methods, bioremediation has received much attention from researchers because it is not only generally safe, but also an economic option in the treatment of a contaminated site (Erdogan and Karaca, 2011). Bioremediation is described as the use of microorganisms to destroy or immobilize waste materials (Shanahan, 2004). This process of detoxification targets the harmful chemicals by mineralization, transformation or alteration (Shannon and Unterman, 1993). Bioremediation has been applied to treat crude oil-contaminated soils, wastewater, oil-contaminated water and refinery sludge (Mohammadi-Sichani et al., 2017; Benyahia et al., 2005; Siles and Margesin, 2018; Paladino et al., 2016; Varjani and Upasani, 2017). Communities of microbes which are exposed to hydrocarbons become adapted, exhibiting selective enrichment and genetic changes (Leahy and Colwell, 1990; Atlas and Bartha, 1998).
If the adapted microbial communities are biostimulated, they can respond to the presence of hydrocarbon pollutants within hours (Chikere et al., 2011). Hence, in this study, chicken droppings were employed as biostimulants of indigenous microorganisms for the purpose of treating crude oil-contaminated soil. MATERIALS AND METHODS Collection of Poultry Droppings and Soil Sample: Loamy soil samples collected were air-dried, sieved using a 2 mm mesh sieve and stored in polythene bags. The poultry droppings were obtained from the University of Benin agricultural farm. They were sun-dried for three days and thereafter pulverized before mixing with the soil. Bioremediation Process: 400 g of the prepared soil was put into each of four equal-sized containers, contaminated with 20 ml of crude oil and thoroughly mixed to ensure homogeneity. Thereafter, 20 g, 60 g and 100 g of the prepared poultry droppings were respectively mixed with the oil-contaminated soil samples in containers A, B and C. The fourth container, D, which was not mixed with poultry droppings, served as the control. Table 1 shows the design of the remediation experiments, where PD stands for poultry droppings. Residual hydrocarbon content (RHC) and pH of the samples were used as indicators of bioremediation. These parameters were monitored for a period of seven weeks. Determination of pH and RHC: The pH values of the soil samples were measured in a soil-water suspension (1:5, w/v extraction ratio) according to the method described by Van Lierop and MacKenzie (1977). The residual hydrocarbon content (RHC) of the samples was determined using a method described by Osuji and Nwoye (2007). A mixture of 5 g of contaminated soil and 50 ml of xylene was vigorously shaken for twenty minutes. The mixture was then left to stand for 20 minutes, after which it was filtered with Whatman filter paper (no. 2). The RHC was calculated after reading the absorbance of the extract on the spectrophotometer at a wavelength of 425 nm. Modelling Studies with ANN: In this study, ANN analyses were performed using the MATLAB Neural Network Toolbox. The percentage degradation of the samples' THC was predicted using a feed-forward backpropagation network. The network topology consisted of an input layer with 7 neurons, one hidden layer with 6 neurons, and an output layer with 7 neurons. Several training algorithms were compared, including TRAINSCG (scaled conjugate gradient) and TRAINRP (resilient backpropagation). Each ANN was trained using a stopping criterion of 1000 iterations. All data were divided into two parts, for training and testing periods. The first 20 sets of data were used for training the ANN models, and the last 8 sets of data were employed for testing. The model performance indicators used in this study were the coefficient of determination (R²) and the mean square error (MSE). A high R² and a low MSE value indicate a good model. RESULTS AND DISCUSSION The variations of soil pH and RHC during the period of bioremediation are shown in Figures 1 and 2. From Figure 1, it can be seen that samples A, B, C and D show no significant increase in pH value in the first four weeks. Their pH values were, however, observed to increase significantly from the fifth week to the seventh week. The results also clearly indicate that the pH values of samples A to D were fairly acidic for the first five weeks, but increased to become basic from the sixth week.
The trend observed during the first four weeks may be due to the initial adaptation phase of the microorganisms to the environment, during which their activities would not be high enough to cause a significant change in the pH value; the deviation from the fifth week can be attributed to the bio-degradation of the hydrocarbon content by the microbes through the utilization of nutrients, water and oxygen. The result obtained is in line with the findings of Cunningham and Philp (2000), who obtained an optimum pH value in the range of 6 to 8 while comparing bio-augmentation and biostimulation in the ex-situ treatment of diesel-contaminated soil. As shown elsewhere (Madhavi et al., 2012), both very low and very high pH values inhibit microbial activities, and as such a moderate or close-to-neutral pH value is required for optimum function. Figure 2 shows the variation of the residual hydrocarbon content (RHC) of the samples with time over the period of seven weeks. It was observed that an increase in the amount of poultry droppings in the samples generally led to a reduction in the RHC. It was also observed that the samples generally gave a decrease in RHC as the time of remediation increased. It was, however, noticed that sample D (control) showed the least decrease in RHC from week one to week seven in comparison with the other samples. It was also observed that sample C gave the highest reduction in RHC, followed by sample B and then sample A. This reduction in the RHC of the samples may be influenced directly by the amount of microorganisms available for the bio-degradation. Hence the observation that the sample with the highest amount of poultry manure, which also led to increased microorganism availability, gave the highest reduction in RHC. This observation is in agreement with the trend observed in a previous related study (Onuoha et al., 2014). The models for RHC were simulated with MATLAB 2018a, using the feed-forward propagation architecture of the ANN. The goodness of the models at predicting the experimental results obtained was assessed with the mean square error (MSE) and the coefficient of determination (R²). The smaller the value of MSE and the closer the R² value is to unity, the better the model is able to predict the results. The MSE and R² values obtained for each training algorithm for the percentage degradation of THC of the samples are shown in Table 2. In Table 2, the values of MSE and R² range from 5.91×10⁶ to 0.00183 and from -0.70381 to 0.99809, respectively, for the training algorithms. A low MSE value is an indication of a good model. It can be observed that the best and most suitable training algorithm is TRAINSCG, since it has the least MSE value of 0.00183. This training algorithm also had the highest R-squared value of 0.99808. This decision was based on the performance value of MSE, which gave the minimum deviation of the predicted data from the experimental data obtained. Conclusion: This study has shown that chicken droppings, which are safe, biodegradable and generally environmentally friendly, are effective as a biostimulant for the remediation of crude oil-contaminated soil. The soil amended with 100 g of poultry droppings gave the highest degradation of crude oil over a seven-week period. The biodegradation increased with an increase in the amount of poultry droppings used to amend the crude oil-contaminated soil.
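The algorithm-selection criterion used above, choosing the training algorithm with the lowest MSE and the highest R² on the held-out test weeks, is straightforward to reproduce. The following Python sketch is purely illustrative and is not the authors' MATLAB workflow; the RHC values and the predictions attributed to each algorithm in the example block are hypothetical placeholders.

```python
import numpy as np

def mse(observed, predicted):
    """Mean square error between observed and predicted values."""
    observed, predicted = np.asarray(observed, float), np.asarray(predicted, float)
    return np.mean((observed - predicted) ** 2)

def r_squared(observed, predicted):
    """Coefficient of determination (R^2); 1.0 means a perfect fit."""
    observed, predicted = np.asarray(observed, float), np.asarray(predicted, float)
    ss_res = np.sum((observed - predicted) ** 2)
    ss_tot = np.sum((observed - observed.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def rank_algorithms(observed, predictions_by_algorithm):
    """Rank candidate training algorithms by MSE (ascending), reporting R^2 as well."""
    scores = {name: (mse(observed, pred), r_squared(observed, pred))
              for name, pred in predictions_by_algorithm.items()}
    return sorted(scores.items(), key=lambda kv: kv[1][0])

if __name__ == "__main__":
    # Hypothetical test-set RHC values (mg/kg) and model predictions.
    observed = [9500, 8700, 8100, 7600, 7200, 6900, 6700, 6600]
    predictions = {
        "TRAINSCG": [9480, 8720, 8090, 7610, 7190, 6910, 6705, 6595],
        "TRAINRP":  [9100, 8900, 8400, 7300, 7500, 6600, 6900, 6400],
    }
    for name, (err, r2) in rank_algorithms(observed, predictions):
        print(f"{name}: MSE = {err:.3f}, R^2 = {r2:.5f}")
```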
2022-02-17T16:14:53.792Z
2022-02-10T00:00:00.000
{ "year": 2022, "sha1": "c7ece06f46daad7c487ffe22c687da251b91920e", "oa_license": "CCBY", "oa_url": "https://www.ajol.info/index.php/jasem/article/download/221231/208766", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "c30f91077e8d8855d2e78ecf03639d8c7aaed0f1", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [] }
114851048
pes2o/s2orc
v3-fos-license
Automatic humidification system to support the assessment of food drying processes This work shows the main features of an automatic humidification system that provides drying air matching the environmental conditions of different climate zones. This conditioned air is then used to assess the drying process of different agro-industrial products at the Automation and Control for Agro-industrial Processes Laboratory of the Pontifical Bolivarian University of Bucaramanga, Colombia. The automatic system allows the creation and improvement of control strategies to supply drying air under specified conditions of temperature and humidity. The development of automatic routines to control and acquire real-time data was made possible by the use of robust control systems and suitable instrumentation. The signals are read and directed to a controller memory where they are scaled and transferred to a memory unit. Using the IP address, it is possible to access the data to perform supervision tasks. One important characteristic of this automatic system is the Dynamic Data Exchange (DDE) server that allows direct communication between the control unit and the computer used to build experimental curves. Introduction Food drying is an industrial process used to reduce the water activity in products, allowing their conservation, storage stability and transportation [1]. However, the development of new drying techniques or the evaluation of new dried products requires a very careful approach. All the processes involved, even in simpler dryers, are highly non-linear and their scale-up is generally very difficult, requiring experimentation at laboratory and pilot scales coupled with the experience of the researcher [2]. Hence, a research approach involves basic models that are validated using experimental outcomes. Most models require an understanding of the typical product behavior during drying, and most of this information can be drawn from the so-called drying curves that present the mass decrement versus time at stable drying conditions. Every food has a representative drying curve, related to the drying air velocity, temperature and pressure, that reflects how the removal of water is affected during the process; using the slope of these curves, it is possible to obtain the drying rate curve, which is in turn very important for making assumptions about the mechanics of dried materials [3,4]. Foods in particular present, after a period of constant-rate drying, a period characterized by a deceleration of water removal. The rate-controlling factors in this period are complex, depending upon diffusion through the food and upon the changing energy-binding pattern of the water molecules. There is little theoretical information available for the drying of foods in this particular region, and experimental drying curves are the only adequate approach. The drying curves are based on the mass reduction with time when the product is dehydrated in a laboratory dryer. Most of the experimental devices used are composed of a propeller fan, an electric heater, a scale and a humidifier. The temperature and the moisture content of the drying air are kept stable through heating and a supply of fresh air or steam, or by using saturated air. One of the most important parameters in drying is humidity, but unlike the temperature and velocity of the air, it is quite difficult to measure and control. In earlier experiments the air used had a low humidity or was taken from the environment. 
This is a very simple way to set up experiments, but it has poor reproducibility because the humidity depends on ambient conditions [5]. Nonetheless, it is possible to use air with a known humidity. On one hand, air can be equilibrated with a saturated salt solution at a constant temperature, providing air of known humidity [6]. Its drawbacks are the long time required to reach equilibrium and the difficulty of producing large amounts of conditioned air. Another procedure is known as the two-flow method [7]. In this instance, a stream of gas is divided into two parts: one is saturated with water at a certain temperature and the other is a dry gas; both streams are then mixed, producing the desired conditioned air. Its most important drawback is the flow rate measurement, and it requires an efficient saturation process along with good temperature control of the air coming out of the saturator. Similarly, in the two-temperature method a stream of air is saturated with water at a given temperature and the temperature is then raised. Here, accuracy depends on the temperature measurement and the saturator efficiency. Finally, in the two-pressure method a stream of air at a high pressure is saturated with water at a given temperature and then reduced to a lower pressure. This method requires accurate measurement of the pressure and temperature. As a common characteristic, all the preceding methods require the measurement of at least one physical variable, and their accuracy depends on how well that specific variable is measured. Besides, they can only produce limited amounts of treated air [8]. Consequently, most of the experimental units used for the investigation of drying processes have a reduced size, and only small samples of the product of interest can be evaluated. The experimental apparatus normally consists of a drying device in which the product sample is exposed to a controlled air flow (temperature, humidity and speed), and both the test cross-section and the sample zone are quite small [9,10]. Description of the automatic humidification system The automatic humidification system is used to control the humidity and temperature of the air in an experimental facility. This facility has been developed at the Automation and Control for Agro-industrial Processes Laboratory of the Pontifical Bolivarian University, with the aim of emulating the ambient air of different climatic zones and obtaining the drying curves of a wide range of agro-industrial products. Figure 1 illustrates the main elements of the test unit where the automatic humidification system was set up. It consists of a convective drying ring where a dryer or a drying box can be placed to evaluate different products. The unit has been instrumented with several temperature, humidity and velocity sensors for air monitoring, and with strain gages to measure the weight of the product being dried. The experimental apparatus has to produce a substantial quantity of conditioned air because its major purpose is to evaluate the performance of prototype dryers. In this unit, the temperature and humidity of the air are controlled in real time. First, an air-conditioning unit reduces the temperature and humidity of the ambient air depending on the conditions of the convective stream to be used in a particular experiment. Then, the amount of air required in the dryer is extracted by a blower and passed through the Heating Humidification Unit (H2U), which finally delivers the air to the dryer. Figure 2 illustrates the main features of the H2U. 
It has temperature and humidity sensors, a set of high-pressure water nozzles and a set of electric heaters. Sensors and actuators are linked to a control unit that is responsible for maintaining the required conditions of the air flowing to the dryer. A Programmable Logic Controller (PLC) was used along with four variable frequency drives and two slave modules for input/output (I/O) signals. Figure 3 shows an outline of the control network put in place and its components. The development of automatic routines to control and acquire real-time data was made possible by the use of robust control systems and suitable instrumentation. The signals are read and directed to the controller memory, where they are scaled and transferred to a memory unit. Using the IP address, it is possible to access the data to perform supervision tasks. One important characteristic of this automatic system is the Dynamic Data Exchange (DDE) server that allows direct communication between the control unit and the computer used to build the experimental curves. Industrial devices like PLCs have limited local memory, making it necessary to manage and save large amounts of data for off-line analysis. Although industrial systems commonly use OPC (Object Linking and Embedding for Process Control) as the communication protocol to link devices from different brands, it is necessary to acquire specific commercial software applications, which represents an additional cost when implementing supervisory control and data acquisition systems. In contrast, DDE is a standard feature of computers running Microsoft Windows, which allows easy access to data from multiple applications. Additionally, the experimental facility has actively incorporated the concept of Totally Integrated Automation (TIA) to combine different automation architectures, with the derived advantages outlined in the specialized literature [11]. Results When the experimental facility is used to emulate a specific climate zone, the results show great agreement with the real air conditions. The equipment is able to produce air with the same average temperature and humidity as Colombian cities with extreme environments. Figure 4 shows a humidity and temperature diagram when the automatic system is controlling the variables of interest using an on/off strategy. In this instance, the air reaches a relative humidity of 80%, while the temperature of the air leaving the H2U unit remains at about 35°C. After this point, the unit stops humidification and the air inlet temperature is no longer controlled in the conditioning ring. Figure 5 shows another example of the system operating strategy, to reach air at 50°C and 43% relative humidity when the air temperature is high and the humidity is low at the initial condition. In this instance, a step on/off fuzzy control strategy was implemented. When a prototype rotary dryer was used to build the drying curves of coffee beans and cassava, the automatic system allowed the weight of the dryer load to be obtained in real time using precision strain gages. Figure 6 shows the variation of the dryer load weight with time. For these particular tests, the environmental conditions of the air going to the dryer were set to emulate a production area of Santander in Colombia (T=28°C, RH=57%), and the results showed great agreement with other experimental studies [12,13]. 
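As a rough illustration of the on/off strategy described above, the following Python sketch implements a hysteresis (on/off) loop for temperature and relative humidity. It is a minimal sketch, not the PLC program used in the facility; the setpoints, deadbands and sample readings are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class OnOffController:
    """Simple on/off (hysteresis) controller for one process variable."""
    setpoint: float
    deadband: float       # half-width of the hysteresis band
    output: bool = False  # True = actuator on (heater or humidifier nozzles)

    def update(self, measurement: float) -> bool:
        # Turn on below the band, off above it; hold the last state inside the band.
        if measurement < self.setpoint - self.deadband:
            self.output = True
        elif measurement > self.setpoint + self.deadband:
            self.output = False
        return self.output

# Hypothetical targets emulating a climate zone (e.g. T = 35 C, RH = 80 %).
temperature_loop = OnOffController(setpoint=35.0, deadband=0.5)
humidity_loop = OnOffController(setpoint=80.0, deadband=2.0)

for temp, rh in [(33.8, 74.0), (34.7, 79.1), (35.6, 83.2)]:  # sample readings
    heater_on = temperature_loop.update(temp)
    nozzles_on = humidity_loop.update(rh)
    print(f"T={temp:.1f} C -> heater {'ON' if heater_on else 'OFF'}; "
          f"RH={rh:.1f}% -> nozzles {'ON' if nozzles_on else 'OFF'}")
```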
Conclusions The automatic humidification system integrated into the experimental facility makes it possible to configure a high-capacity air-conditioning unit to support drying processes, especially for the construction of drying curves of agro-industrial products. The use of a robust system to control the temperature and humidity of the air for drying applications is the most important outcome. In this experimental device it is possible to use different control strategies in order to obtain the particular air conditions required in the dryer under test; a simple on/off control strategy combined with fuzzy control techniques showed good results and a low computational cost. Finally, the use of a DDE server to link the PLC to a computer has provided an easy and inexpensive way to process experimental data.
2019-04-15T13:05:03.392Z
2016-07-01T00:00:00.000
{ "year": 2016, "sha1": "5ca3c0642b7b6cca7fa270749d351e5abdfe5d81", "oa_license": null, "oa_url": "https://doi.org/10.1088/1757-899x/138/1/012019", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "29b83260c8378b3c834c3b09f0f16117badc1702", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Engineering" ] }
244541250
pes2o/s2orc
v3-fos-license
Antileukemic properties of the kinase inhibitor OTSSP167 in T-cell acute lymphoblastic leukemia Key Points • OTSSP167 has antileukemic properties in T-ALL by inducing cell cycle arrest and apoptosis.• OTSSP167 controls leukemia burden in xenografts from patients with T-ALL and exhibits a synergistic effect with standard drug therapy. Introduction T-cell acute lymphoblastic leukemia (T-ALL) is an aggressive hematological malignancy, representing about 15% of pediatric leukemia cases. 1,2 Although the 5-year event-free survival of childhood ALL has improved to more than 85% in most centers, 3 the prognosis of patients with refractory or relapsed T-ALL is dismal. Because relapsed leukemia remains the leading cause of cancer-related mortality in children, [4][5][6][7][8] it is necessary to develop alternative drugs with antileukemic capacity and low toxicity for these patients who are at high-risk. The development of alternative therapies requires the identification of novel actionable targets. Genomic analysis of 675 pediatric patients with cancer revealed that the MAPK pathway was one of the most affected and potentially druggable events. 9 Within MAPK signaling, MAP2Ks activate the effector kinases (extracellular signal-regulated kinase 1/2 [ERK1/2]), c-JUN NH2-terminal protein kinase (JNK), ERK5, and p38 at the end of the cascade and regulate cell proliferation, differentiation, and survival. 10 Our group described aberrant activation of the kinase MAP2K7, a component of a 3-tier signaling cascade associated with epigenetic silencing of the transcription factor KLF4, in pediatric patients with T-ALL. 11 Because JNK is the sole substrate of MAP2K7, we initially studied the antileukemic properties of JNK inhibition using the JNK-IN-8 compound and 2 adenosine triphosphate (ATP)-competitor JNK inhibitors tested in phase I/II clinical trials. 12,13 Although JNK inhibition could control leukemia burden in a mouse model of T-ALL, their low specificity and potency prevented significant improvements in survival by reaching sustained therapeutic concentrations with minimal toxicity. 11,14 We recently studied the compound 5Z-7-oxozeaenol in T-ALL because this chemical compound inhibited MAP2K7 through a covalent reaction with cysteine 218. 15 Although more potent than JNK inhibitors, 5Z-7-oxozeaenol toxicity limited the capacity of this compound to control leukemia efficiently in preclinical mouse models. 16 Although OTSSP167 was described as an inhibitor of the maternal embryonic leucine zipper kinase (MELK), 17,18 the analysis of kinome studies deposited in the Library of Integrated Networkbased Cellular Signatures (LINCS) shows a broad spectrum of kinase inhibition. OTSSP167 has been described as anticarcinogenic in solid tumors, such as adrenocortical carcinoma, breast cancer, glioma, cervical cancer, teratoid/rhabdoid tumors, adenocarcinoma, and lung squamous cell carcinoma. [19][20][21][22][23] OTSSP167 has also been studied in blood cancers, such as chronic myeloid leukemia, B cell lymphoma, and chronic lymphocytic leukemia. 18,24 OTSSP167 has been tested in clinical trials, including a phase 1 study of solid metastatic tumors, a safety study of breast cancer, and 2 open trials to evaluate the bioavailability of oral OTSSP167 and intravenous administration in leukemia (acute myeloid leukemia, ALL, advanced myelodysplastic syndrome, myeloproliferative neoplasm, and chronic myeloid leukemia). 
We decided to study OTSSP167 in T-ALL because it was identified as a potential MAP2K7 inhibitor in the LINCS program. And most importantly, a small screen using a thermal shift assay revealed OTSSP167 among 9 compounds with strong binding and inhibition of MAP2K7 and potency within a submicromolar range. 25 Here, we describe the antileukemic properties of OTSSP167 in pediatric T-ALL using cell lines and patient samples. We show that OTSSP167 is cytotoxic in T-ALL cells via the induction of G2/M and G1/S cell cycle arrest and apoptosis associated with the inhibition of MAP2K7 kinase activity. In vivo studies establish high tolerance in mice and a significant capacity to control leukemia burden in cell-based and patient-derived xenograft studies. Drug combination studies revealed synergistic effects of OTSSP167 combined with drugs used in standard therapy. Our study warrants further testing of OTSSP167 as an adjuvant agent in combination with standard chemotherapy in frontline therapy or as a salvage agent in refractory and relapsed leukemia. Cytotoxicity assays Cell lines were plated in triplicates at a cell density of 2 × 10 4 cells per well (96-well plate) and cultured for 48 hours in the presence of OTSSP167 (MedChemExpress) or vehicle control (dimethyl sulfoxide [DMSO]). Cell viability was measured using CellTiter-Glo Luminescent Cell Viability Assay. The half-maximal inhibitory concentration (IC 50 ) was calculated using nonlinear regression analysis via GraphPad software. Apoptosis was measured using the FITC Annexin V apoptosis detection kit (Becton-Dickinson #559763). DNA content was determined by nuclei staining with propidium iodine. Flow cytometry analysis was conducted using FACS Canto (Becton-Dickinson Bioscience) and FlowJo software (TriStar). For drug combination, cells were plated with OTSSP167 and drugs used in remission induction (vincristine [VCN], L-asparaginase [ASNase], and dexamethasone [Dex]). Cytotoxicity was measured as described above, and the data were analyzed using Combenefit [32]. In vitro kinase assay Purified human MAP2K7 (Origene, 320 nM) was preincubated with a dead-JNK2 fragment (350 nM) in the presence of different concentrations of OTSSP167 or vehicle for 30 minutes. Afterward, ATP (100 μM final concentration) was added to initiate kinase activity for 30 minutes. Kinase activity was measured as the generation of adenosine diphosphate (ADP) using the ADP-GLO kit (Promega #V6930), and luminescence was determined using a 96-well plate Luminoskan ascent reader. Bone marrow samples from patients with T-ALL were collected during diagnosis at the Texas Children's Cancer and Hematology Center. Samples were collected after written informed consent was obtained from all patients under a research protocol approved by the institutional review board. Leukemic blasts were transplanted into 10-week-old female NSG mice (0.5-1.0 × 10 6 cells per mouse). Peripheral blood sampled from the tail vein was routinely monitored for human CD45 + cells via flow cytometry. Human leukemic cells were collected from the femur, tibia, and spleen, examined for human CD45 surface antigen expression, and viably frozen. Finally, NSG mice were injected with T-ALL PDX cells (0.5 × 10 6 ) and randomized into 2 groups (administration of vehicle or 10 mg/kg OTSSP167) when leukemic cells reached 1% to 5% in the blood. Mice were monitored at the end of each week for expansion of human CD45 + cells in the peripheral blood via flow cytometry. 
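As an aside, the nonlinear-regression estimate of the IC50 described in the cytotoxicity assays can be illustrated with a four-parameter dose-response fit. The sketch below uses Python and SciPy rather than the GraphPad software used in the study, and the concentration-viability values are hypothetical placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_param_logistic(conc, top, bottom, ic50, hill):
    """Four-parameter dose-response curve for % viability versus drug concentration."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

def fit_ic50(concentrations_nM, viability_pct):
    """Fit the dose-response curve and return the estimated IC50 (nM) and all parameters."""
    p0 = [100.0, 0.0, float(np.median(concentrations_nM)), 1.0]  # initial guesses
    params, _ = curve_fit(four_param_logistic,
                          np.asarray(concentrations_nM, float),
                          np.asarray(viability_pct, float),
                          p0=p0, maxfev=10000)
    top, bottom, ic50, hill = params
    return ic50, params

# Hypothetical triplicate-averaged viability data (% of DMSO control).
conc = [1, 3, 10, 30, 100, 300]        # nM
viability = [98, 90, 55, 20, 8, 4]     # illustrative values only
ic50, params = fit_ic50(conc, viability)
print(f"Estimated IC50 ~ {ic50:.1f} nM")
```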
To evaluate drug toxicity, OTSSP167 (10 mg/kg) was prepared in 10% DMSO and 90% of a 20% solution of sulfobutyletherβ-cyclodextrin (SBE-β-CD) and administered intraperitoneally every day from Monday to Friday for 2 weeks, and mice were monitored for body weight and complete blood counts. To study the efficacy of OTSSP167 in inhibiting leukemic growth, NSG mice were transplanted with KOPTK-1 cells labeled with firefly luciferase (2.5 × 10 5 ) and treated with vehicle (DMSO/SBE-β-CD) or OTSSP167 (10 mg/kg). Leukemia progression was evaluated by measuring the bioluminescence at the end of each week using the IVIS Imaging System (Xenogen). Images were acquired in anesthetized mice 10 minutes after intraperitoneal injection with 50 mg/kg D-luciferin. Reverse phase protein array (RPPA) Cell lysates, serial dilutions of standards, and positive and negative controls were arrayed on nitrocellulose-coated slides (Grace Bio-Labs) using the Quanteriz 2470 Arrayer. Each slide was probed with a validated primary antibody plus a biotin-conjugated secondary antibody. Signal detection was amplified using an Agilent GenPoint staining platform and visualized by DAP colorimetric reaction. The slides were scanned, analyzed, and quantified to generate spot intensity using customized software (Array-Pro Analyzer, Media Cybernetics). Each dilution curve was fitted with a logistic model (RPPA SPACE developed at MD Anderson). The protein concentrations were then normalized for protein loading. The corrections factor was calculated and normalized across sets via replicates-based normalization using an invariant set of control samples to adjust for batch differences between identical controls. 26 Statistical analysis All sample sizes (n values) indicated in each figure legend correspond to independent biological replicates. Unpaired two-tailed Student t test was used for statistical analysis. P values were determined using GraphPad software. Results with a P value <.05 were considered statistically significant. OTSSP167 inhibits cell viability in human T-ALL cells by inducing apoptosis and G2/M cell cycle arrest We investigated the role of OTSSP167 in T-ALL based on our report that pediatric patients showed an aberrant activation of the MAP2K7-JNK pathway and the information that the MELK inhibitor OTSSP167 inhibits MAP2K7. 11,25 Firstly, immunoblot analysis showed that MELK is expressed in T-ALL cell lines, with elevated expression in KOPT-K1, MOLT-3, and RPMI-8402 ( Figure 1A). MAP2K7 is expressed in all T-ALL cell lines, as previously described by our group ( Figure 1A). 11 Cell viability assays showed dose-dependent cytotoxicity of OTSSP167 in a panel of T-ALL cell lines, with IC 50 s ranging from 10 nM (KOPT-K1) to 57 nM (DND-41) ( Figure 1B-C). The IC 50 s for T-ALL cell lines are summarized in Figure 1C. The specificity of OTSSP167 to leukemic cells was evaluated by comparing cell viability in KOPT-K1 cells with a nonleukemic lymphoblastoid cell line (LCL) ( Figure 1D). Comparative analysis of MAP2K7-JNK inhibitors against the KOPT-K1 cell line demonstrated that OTSSP167 (IC 50 12 nM) is more potent than 5Z-7-Oxozeaenol (IC 50 0.81 μM) and JNK-IN8 (IC 50 8.55 μM) ( Figure 1E). 11,16 This difference in potency was also evident in other cell lines, such as ALL-SIL and RPMI-8402 (supplemental Figure 1). Collectively, the compound OTSSP167 emerges as a powerful antileukemic agent in T-ALL. 
To further investigate the cause of OTSSP167-induced cytotoxicity, we evaluated the induction of apoptosis through flow cytometric detection of annexin V. Treatment of T-ALL cells with OTSSP167 (15 nM, 48 hours) induced significant apoptosis, particularly in KOPT-K1 and ALL-SIL cells (Figure 2A-B). Cell lines with an IC 50 more than 50 nM (eg, JURKAT and DND-41) do not show a significant increase in apoptosis because the dose of OTSSP167 was lower than their IC 50 ; however, higher OTSSP167 concentrations (50 and 100 nM) induced apoptosis in these cell lines (supplemental Figure 2). Immunoblot analysis revealed that OTSSP167 induces the cleavage of both PARP and caspase 3, especially in the cell lines showing a significant induction of annexin V in response to OTSSP167 treatment ( Figure 2C). Similarly, proteomic analysis by RPPA of KOPT-K1, MOLT-3, and P12-Ichikawa cell lines treated with vehicle or OTSSP167 (15 nM for 48 hours) revealed increased cleavage of caspases of the extrinsic pathway of apoptosis and annexin V in response to OTSSP167 treatment ( Figure 2D). The RPPA also shows deregulation of other cellular pathways (supplemental Figure 3). OTSSP167 has been described to alter cell cycle progression in bladder cancer cells through G1/S arrest via the p53 pathway. 27 We used propidium iodide nuclear staining to determine the effect of OTSSP167 on the cell cycle of T-ALL cells. Incubation of T-ALL cell lines with 15 nM OTSSP167 for 48 hours increased the percentage of cells in the G2/M phase of the cell cycle in the cell lines with lower IC 50 ( Figure 3A-B; supplemental Figure 4). The cell cycle arrest is more significant at 50 nM OTSSP167 in most cell lines (supplemental Figure 5). Some T-ALL cell lines also show a concomitant G1 arrest associated with an increase in G1 and a reduction in cyclin E (supplemental Figures 5 and 6). Although immunoblot analysis showed OTSSP167 increased the phosphorylation of Cdc2 and cyclin B1 ( Figure 3C), higher OTSSP167 concentrations inhibited the phosphorylation of cyclin B1 (not shown). The RPPA analysis revealed an increase in H2AX with a reduction in several regulators of the G2/M checkpoint. A dosedependent decrease of CHK1 and polo-like kinase 1 with an increase in phosphorylated H2AX was confirmed by immunoblots in the 3 cell lines ( Figure 3E). Altogether, OTSSP167 induces DNA damage, cell cycle arrest, and apoptosis in T-ALL cell lines. Inhibition of MAP2K7 by OTSSP167 in T-ALL OTSSP167 is a MELK inhibitor that can also inhibit MAP2K7. 25 In addition to reducing MELK protein, OTSSP167 (50 nM) treatment substantially inhibited the phosphorylation of JNK and downstream ATF2 in all T-ALL cell lines ( Figure 4A). Next, we investigated direct MAP2K7 inhibition in a biochemical assay using purified human MAP2K7 protein and dead-JNK2 as a substrate. The measurement of ADP production shows dose-dependent inhibition of MAP2K7 kinase activity with a 160 nM IC 50 , within a low nanomolar range as previously reported ( Figure 4B). 25 These data suggest that the observed cytotoxic effect may be mediated at least in part through MAP2K7 inhibition in T-ALL cells. To further support this model, we tested the capacity of OTSSP167 to inhibit MAP2K7 acutely activated by metabolic stress. Treatment of T-ALL cells with 400 mM sorbitol increases MAP2K7-mediated phosphorylation of JNK, which OTSSP167 inhibits in a dose-dependent manner ( Figure 4C). 
Retroviral expression of the constitutively activated fusion protein MAP2K7-JNK2 in JURKAT and P12-Ichikawa cell lines (supplemental Figure 7) is inhibited by OTSSP167, further supporting that OTSSP167 can inhibit the MAP2K7 pathway in T-ALL cells (Figure 4D). The cytotoxicity of KOPT-K1 cells to OTSSP167 (IC50: 11 nM) and to the MELK inhibitor MELK-8a (IC50: 10 μM), which has higher specificity for MELK than OTSSP167, 28 suggests that low concentrations of OTSSP167 likely induce cell death in T-ALL cells independently of MELK inhibition (Figure 4E). Because OTSSP167 is a broad-spectrum kinase inhibitor, we performed an unbiased proteomic analysis of T-ALL cell lines treated with OTSSP167 to assess plasticity. RPPA analysis of T-ALL cells treated with 15 nM OTSSP167 revealed the inhibition of other cellular pathways with a critical role in T-ALL cells, such as mTOR and NOTCH1 (Figure 4F). [29][30][31] Immunoblot analysis of phosphorylated S6 and HES1, downstream targets of mTOR and NOTCH1, confirmed that OTSSP167 inhibits phosphorylation of the ribosomal protein S6 and the levels of HES1 in T-ALL cells (Figure 4G). Interestingly, a low kinase specificity of OTSSP167 could have a therapeutic benefit in T-ALL by potentially targeting other pathways, in addition to MAP2K7-JNK, involved in the proliferation and survival of T-ALL cells. In vivo antileukemic properties of OTSSP167 in human T-ALL OTSSP167 is currently being tested in clinical trials for safety, bioavailability, and efficacy in solid tumors and hematological malignancies. Because of its broad inhibitory spectrum, we studied the toxicity of OTSSP167 before evaluating its effectiveness in T-ALL preclinical mouse models. C57BL/6 mice were administered OTSSP167 Monday to Friday for 2 weeks and monitored for body weight, as an indicator of general animal well-being, and for complete blood counts. Interestingly, OTSSP167 was well tolerated at a dose of 10 mg/kg without causing gross alterations in body weight (Figure 5A) or blood counts (Figure 5B). Next, we evaluated the efficacy of OTSSP167 in a cell-based xenograft model based on the injection of KOPT-K1 cells labeled with firefly luciferase into NSG mice that were randomized into 2 groups for treatment with vehicle (10% DMSO and 90% SBE-β-CD) or OTSSP167 (10 mg/kg). Mice were monitored by whole-body bioluminescence imaging at the end of each week. The group treated with OTSSP167 showed a significant delay in the spread of leukemic cells (Figure 5C) and a considerable reduction of leukemia burden based on luminescence on days 14 and 21 of treatment (Figure 5D). Most importantly, mice treated with OTSSP167 showed significantly prolonged survival (n=5, P=.0031), with a median survival of 36 days compared with 23 days in the control group (Figure 5E). Collectively, these data demonstrate the efficacy of OTSSP167 in controlling the expansion of leukemic cells in vivo with minimal toxicity. The patient-derived xenograft (PDX) model is a preclinical model that closely correlates with clinical success. Hence, we tested OTSSP167 with a panel of T-ALL PDXs generated in our laboratory using lymphoblasts collected from children diagnosed with T-ALL at the Texas Children's Hospital and who entered remission or relapse (supplemental Figure 8). T-ALL PDX cells were injected into NSG mice and randomized into 2 groups when the human leukemic blasts were over 1% to 2% in peripheral blood. 
Treatment of PDX01 mice with OTSSP167 (10 mg/kg, Monday-Friday) for 3 weeks prevented the expansion of human leukemic blasts in blood compared with the vehicle control ( Figure 6A-C). Despite treated mice showing a regrowth of T-ALL cells after discontinuing treatment, the overall survival improved significantly with a 3-week treatment regimen ( Figure 6C-D). To evaluate the clearance of leukemic T cells in different tissues at the end of treatment, NSG mice carrying the PDX02 cells (relapsed T-ALL) were monitored during OTSSP167 treatment in blood and post mortem in the bone marrow and spleen. OTSSP167 controlled leukemia burden, and, in contrast to OTSSP167-treated mice, all mice administered with vehicle died during treatment, suggesting a similar survival as PDX01 ( Figure 6E; supplemental Figure 9). At the end of drug administration, analysis of bone marrow showed an~50% reduction of human CD45 + cells in the bone marrow ( Figure 6E) and a smaller spleen (supplemental Figure 9). Analysis of another PDX04 shows a similar control of leukemia burden during treatment with a significant reduction of human CD45 + cells in the bone marrow ( Figure 6F) and reduced splenomegaly (supplemental Figure 10). Figure 6G summarizes the leukemia burden at the end of treatment in the PDX model. Finally, the immunoblot analysis of phosphorylated JNK and HES1 in PDX01 and PDX04 cells treated in vitro with OTSSP167 shows inhibition of the MAP2K7 and NOTCH1 pathways ( Figure 6H). Altogether, OTSSP167 can efficiently inhibit the expansion of patient T-ALL cells in vivo. Higher concentrations or longer treatments may be required to eliminate leukemic T cells in the bone marrow efficiently. We evaluated the effect of combining OTSSP167 with drugs commonly used to treat pediatric T-ALL, such as VCN, ASNase, Dex, and etoposide. 5 This is critical because a new drug will be administered as an adjuvant rather than as a single agent. In P12-Ichikawa cells, we detected synergism between OTSSP167 and dexamethasone, analyzed using CompuSyn 32 and Combenefit visualization of drug interactions (supplemental Figure 11). 33 A synergistic effect of combining OTSSP167 with Dex, ASNase, or VCN was observed in KOPTK-1 cells ( Figure 7A-C). Strikingly, a combination of OTSSP167 with a mixture of VCN, ASNase, and Dex displayed a strong synergism in KOPTK-1 cells, suggesting that OTSSP167 could be used in multidrug therapy ( Figure 7D). The combination with etoposide was synergistic in MOLT-3 cell lines ( Figure 7E). Finally, the specificity of OTSSP167 for leukemic cells was evaluated by comparing its cytotoxicity with that in normal bone marrow cells, which showed lower toxicity than normal blood cells ( Figure 7F). Collectively, these data indicate that although the use of OTSSP167 is promising, further clinical studies are warranted. Discussion Identification of novel targets is necessary for developing targeted therapies for T-ALL. The prognosis has substantially improved for most patients with T-ALL through advances in risk assessment and intensified multidrug chemotherapy. However, the poor outcome of patients with refractory or relapsed disease supports the development of antileukemic drugs with high potency and low toxicity to withstand aggressive multidrug treatment regimens. The genomic analysis of a large cohort of children with cancer identified MAPK signaling and cell cycle control as potentially druggable events. 
9 Thus, the activation of kinase-driven signaling pathways in patients with leukemia warrants studies of pharmacological inhibition to control leukemia. For example, the tyrosine kinase inhibitor ponatinib has been investigated in relapsed/refractory Philadelphia chromosome-positive ALL. 34 PI3K/AKT is one of the most activated T-ALL pathways, caused by PTEN mutations. 35 The finding that mutations in IL7R, JAK1, JAK3, or STAT5B activate the JAK-STAT pathway led to the clinical evaluation of the JAK inhibitor ruxolitinib. 36,37 More recently, it was shown that the MAPK-ERK pathway is activated in IL7R-mediated steroid-resistant T-ALL, and therefore MEK inhibition with selumetinib enhances the response to steroids. 38 Similarly, we described the activation of the kinase MAP2K7 via epigenetic silencing of KLF4 in pediatric T-ALL. 11 Thus, studies combining expression with genomic and epigenetic landscapes will reveal actionable pathways for therapeutic targeting that are not regulated through gene mutations. MAP2K7 (also known as MKK7) is a dual-specificity mitogen-activated protein kinase that associates with its only downstream target, JNK. 10,39 An unidentified upstream MAP3K7 binds to MAP2K7-JNK, and this complex is held together by the scaffold JNK-interacting protein. This pathway is activated by stress-associated signals, such as UV radiation, inflammation, metabolism, and the DNA damage response, which mediates the oncogenic stress stimuli to p53. 40 MAP2K7 is activated through phosphorylation of serine and threonine residues in the SKAKT motif in the kinase domain, whereas autoinhibition is controlled by the N-terminal regulatory helix. 41,42 The regulatory N-terminal domain of MAP2K7 contains 3 docking sites that recognize and bind JNK. Activated MAP2K7 phosphorylates the 3 isoforms, JNK1, JNK2 (both ubiquitously expressed), and JNK3 (expression limited to the brain, heart, and testis). JNK, in turn, activates cellular processes such as apoptosis and transcriptional regulation. 43,44 Early work shows that JNK inhibition causes cell cycle arrest and apoptosis in JURKAT cells and, conversely, that ectopic expression of the fusion protein MAP2K7-JNK1 promotes cell cycle progression. 45 Our group later reported that genetic and epigenetic loss of the transcription factor KLF4 was associated with the aberrant activation of MAP2K7 in pediatric T-ALL and expansion of bulk leukemia and leukemia-initiating cells. 11 Consequently, pharmacological inhibition of the MAP2K7-JNK pathway would have antileukemic properties in T-ALL and potentially target leukemia-initiating cells. This study evaluated the antileukemic properties of the MELK inhibitor OTSSP167 in T-ALL because of its capacity to inhibit MAP2K7. 25 Low nanomolar concentrations of OTSSP167 are cytotoxic in most T-ALL cell lines by deregulating the G2/M and G1/S checkpoints and inducing apoptosis. This alteration in the cell cycle is consistent with the arrest observed in Map2k7 −/− mouse embryonic fibroblasts. 46 The inhibition of MAP2K7 kinase activity by OTSSP167, evaluated in a biochemical assay using full-length human MAP2K7 protein, shows an IC50 of 160 nM. This is consistent with a previous report showing OTSSP167 inhibits the 
phosphorylation mimetic mutant MAP2K7 S287D/T291D with an IC 50 of 105 nM determined in an isothermal calorimetry and 60 nM in a kinase assay. 25 In this work, OTSSP167 was classified as a type-I inhibitor that binds to the highly flexible ATP-binding site. 25 The capacity to inhibit MAP2K7 was further supported by the inhibition of JNK phosphorylation in T-ALL treated with sorbitol to activate MAP2K7. In addition to MAP2K7 inhibition, OTSSP167 lowered the expression of MELK protein in the cell lines KOPT-K1, ALL-SIL, and RPMI-8402. This finding suggests a low specificity of OTSSP167 for MAP2K7. Analysis of KINOMEscan in the LINCS indicates a broad spectrum of kinase inhibition for a relatively high concentration of OTSSP167 (10 μM). These findings suggest that OTSSP167 is not a specific inhibitor but offers high potency and low toxicity, 2 highly desired features for clinical translation. Daily administration of OTSSP167 (10 mg/kg) showed good tolerability and efficient inhibition of leukemia expansion in cell-based and patient-derived xenograft models. In addition to T-ALL, identifying drugs that inhibit MAP2K7 will have broader applications because this pathway is activated in several solid tumors, such as breast, prostate, and glioma cancers. [47][48][49][50] Analysis of adult T-ALL gene expression shows higher MAP2K7 expression in patients with early immature leukemia. 51 Our results demonstrate that kinase inhibition with OTSSP167 represents a potential therapeutic strategy for patients with T-ALL because of its high potency and low toxicity. Further studies are needed to evaluate OTSSP167 combined with current intensified chemotherapy in pediatric patients with T-ALL.
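For readers wanting to reproduce the flavour of the drug-combination analysis (performed here with CompuSyn and Combenefit), the Bliss-independence excess, one of the reference models such tools evaluate, can be computed as in the sketch below. This is an illustration only, not the authors' pipeline, and the viability values are hypothetical.

```python
import numpy as np

def fractional_inhibition(viability_pct):
    """Convert % viability (relative to vehicle control) to fractional inhibition."""
    return 1.0 - np.asarray(viability_pct, float) / 100.0

def bliss_excess(via_a, via_b, via_combo):
    """Bliss-independence excess: observed minus expected combined inhibition.

    Positive values suggest synergy, negative values antagonism.
    """
    ea, eb = fractional_inhibition(via_a), fractional_inhibition(via_b)
    expected = ea + eb - ea * eb       # Bliss expectation for independent drugs
    observed = fractional_inhibition(via_combo)
    return observed - expected

# Hypothetical single-agent and combination viabilities (% of control).
print(bliss_excess(via_a=70.0, via_b=60.0, via_combo=30.0))  # positive -> synergy
```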
2021-11-25T16:07:14.657Z
2021-11-05T00:00:00.000
{ "year": 2022, "sha1": "19a0991e4876e26bf4f08646e74f88895ad47bb7", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1182/bloodadvances.2022008548", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "7e7a38ad04d4a0667d6df613d656c5e041664070", "s2fieldsofstudy": [ "Medicine", "Biology", "Chemistry" ], "extfieldsofstudy": [ "Medicine" ] }
3450011
pes2o/s2orc
v3-fos-license
Fundamental solutions for micropolar fluids New fundamental solutions for micropolar fluids are derived in explicit form for two- and three-dimensional steady unbounded Stokes and Oseen flows due to a point force and a point couple, including the two-dimensional micropolar Stokeslet, the two- and three-dimensional micropolar Stokes couplet, the three-dimensional micropolar Oseenlet, and the three-dimensional micropolar Oseen couplet. These fundamental solutions do not exist in Newtonian flow due to the absence of microrotation velocity field. The flow due to these singularities is useful for understanding and studying microscale flows. As an application, the drag coefficients for a solid sphere or a circular cylinder that translates in a low-Reynolds-number micropolar flow are determined and compared with those corresponding to Newtonian flow. The drag coefficients in a micropolar fluid are greater than those in a Newtonian fluid. Introduction The physical mechanisms of heat, mass, and momentum transport in small-scale units may differ significantly from those in macroscale equipment [1,2]. Fundamental and applied investigations of microscale phenomena in fluid mechanics are motivated by developments in the areas of biological molecular machinery, atherogenesis, microcirculation, and microfluidics. At scales larger than a micron, the fluid can be treated as a continuum, and the flow is governed by the Navier-Stokes equation. The continuum model assumes that the properties of the material vary continuously throughout the flow domain. In Newtonian continuum mechanics, the fluid is modeled as a dense aggregate of particles, possessing mass, and translational velocity. However, the field equation, such as the Navier-Stokes equation, does not account for the rotational effects of the fluid micro-constituents. * Author to whom correspondence should be addressed. 2 In the theory of micropolar fluids [3], rigid particles contained in a small volume element can rotate about the centroid of the volume element. The rotation is described by an independent micro-rotation vector. Micropolar fluids can support body couples and exhibit microrotational effects. The theory of micropolar fluids has shown promise for predicting fluid behaviour at microscale. Papautsky, et al. [1] found that a numerical model for water flow in microchannels based on theory of micropolar fluids gave better predictions of experimental results than those obtained using the Navier-Stokes equation. Micropolar fluids can model anisotropic fluids, liquid crystals with rigid molecules, magnetic fluids, clouds with dust, muddy fluids, and some biological fluids [3]. In view of their potential application in microscale fluid mechanics and non-Newtonian fluid mechanics, it is worth exploring new fundamental solutions. The fundamental solutions for Stokes flow [4] and Oseen flow [5] due to a point force are commonly named as the Stokeslet and the Oseenlet. The fundamental solution due to a point force in a steady Stokes In micropolar fluids, the microrotation fundamental solutions due to a point force are the micropolar Stokeslet and micropolar Oseenlet, and those due to a point couple are the micropolar Stokes couplet and micropolar Oseen couplet. Such fundamental solutions do not exist in Newtonian flow due to the absence of microrotation velocity field. Ramkissoon & Majumdar [11] linearized the governing equations of micropolar fluids and applied Fourier transforms to obtain the three-dimensional micropolar Stokeslet. 
Olmstead & Majumdar [12] derived the two-dimensional micropolar Oseenlet and micropolar Oseen couplet. In this paper, we derive fundamental Stokes and Oseen solutions of micropolar flows in three dimensions, so that the point force and point couple can be prescribed in any direction. Corresponding results for two-dimensional flows are also presented. Stokes and Oseen flows of a micropolar fluid due to a point force Consider a point force in an unbounded, quiescent, incompressible micropolar fluid. Without loss of generality, the point force is placed at the origin, and the free-stream velocity  U is taken to be   The resultant fluid flow is assumed steady. Based on the Oseen approximation, the governing equations [13] reduce to where  is the fluid density, r I is the microinertia,  , r  , r c and m c are the Newtonian, microrotational, and two angular viscosities, respectively, and F is a constant vector. In the equation (3), the divergence of v is assumed zero, which is verified in the latter part of this section. The pressure, p , translational velocity, u , and microrotation velocity, v , are required to decay as   x in an unbounded flow, Suppose that f is an absolutely integrable function that decays at infinity of n  . The n -dimensional complex Fourier transform of the function f , is defined by (4) which states that p is harmonic everywhere except at the pole. To solve (4) for p , we take the Fourier transform, finding     . Stokes and Oseen flows due to a point force have the same pressure field, regardless of whether the fluid is Newtonian or micropolar. Taking the curl of (2) gives Substituting (6) in (7), we derive a partial differential vector equation containing only one unknown, v , To solve partial differential equations of high order, such as (8), we may factorize the high order partial differential operator into products of lower order [14]. This method was used by Olmstead & Majumdar [12]. Formally, it is proposed that , 2 1 where L is a fourth order partial differential operator, and 1 A , 2 A , 1 B and 2 B are constants. While the method of factorization is attractive, a certain relationship between the parameters must exist for L to admit the desired factorization. To factorize the differential operator in (8), the following must be true: Consequently, it is required that We have five equations and only four unknowns. To expedite the solution, the value of 1 B is taken to be zero since Then, from Hence, the partial differential operator in (10) This allows (8) to be rewritten as The above factorization is valid under the physical constraint of the parameters given by (11). To solve (12) for v , it is convenient to take the Fourier transform. (13) The inverse Fourier transform gives (15) Equation (15) gives the microrotation velocity in the presence of a point force. The curl of (3) gives the curl of the curl of u as  , using vector identities, (1) and the assumption that the divergence of v is zero. Taking the Fourier transform, we find   . (11) and (13) in the above equation leads to Finally, we find the translational velocity u is given by the micropolar Oseenlet of u (see Appendix) (17) We see that the Newtonian Oseenlet is recovered. The solution of u for a micropolar fluid is much more complicated than that for a Newtonian fluid. Stokes and Oseen flows of a micropolar fluid due to a point couple Consider a point couple in an unbounded quiescent, incompressible micropolar fluid. 
Based on the Oseen approximation, the governing equations [13] can be linearized as T as a constant vector. Without loss of generality, the point couple is assumed to be positioned at the origin. We begin by taking the divergence of (20), which states . 0 This reduces the gradient of p in (20) to zero. To obtain the translational velocity field u , we take the curl of (21), where v a    . Vector identities and (19) were used to express the curl of (21) in the above form. To express (22) in terms of u alone, we make use of (20), which can be rewritten as Because (24) and (8) are identical in form, we can factorize the partial differential operator, as in (12), under the physical constraint given by (11). Then, we can write    , The inverse Fourier transform of û yields the micropolar Oseen couplet of u (see Appendix) It is not surprising that the micropolar Oseen couplet of u in (26) is similar to the micropolar Oseenlet of v in (14), except that the former is caused by a point couple while the latter is due to a point force. 9 In the limit 0   U , the micropolar Oseen couplet of u in (26) becomes the micropolar Stokes couplet of u , We take the divergence of (21) to evaluate the divergence of v and find    , To determine the curl of v , we take the Fourier transform of (23) and substitute (25) to find To relate f to a , we make use of the vector identity: The inverse Fourier transform of v is the micropolar Oseen couplet of v (see Appendix) The three-dimensional micropolar Stokes couplet of v almost agrees with that of Eringen [3], except for wrong sign in the first term. Drag on a translating solid sphere in a micropolar viscous flow The drag comes exclusively from the point force. The dimensionless drag coefficient is (18) and (15), the two-and three-dimensional micropolar Stokes couplet, given by (27) and (32), the three-dimensional micropolar Oseenlet, given by (16) and (14), and the three-dimensional micropolar Oseen couplet, given by (26) and (31). These fundamental solutions are possible due to the existence of microrotation velocity fields in micropolar fluids. The fundamental solutions can generate further fundamental solutions by successive differentiation with respect to the singular point [15,16]. A summary of available fundamental solutions is given in Table 1.
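For orientation, the classical Newtonian three-dimensional Stokeslet that the micropolar solutions generalize, together with the corresponding Stokes drag on a translating sphere, can be written as follows. These are the standard textbook results, quoted here only as a reference point; they are not reproduced from the equations of this paper, whose explicit micropolar forms are given in the original.

```latex
% Classical Newtonian Stokeslet (point force F at the origin, Stokes flow):
u_i(\mathbf{x}) = \frac{F_j}{8\pi\mu}\left(\frac{\delta_{ij}}{r}
                 + \frac{x_i x_j}{r^{3}}\right),
\qquad
p(\mathbf{x}) = \frac{F_j x_j}{4\pi r^{3}},
\qquad r = |\mathbf{x}| .

% Stokes drag on a sphere of radius a translating with speed U,
% and the corresponding drag coefficient:
F_{\text{drag}} = 6\pi\mu a U,
\qquad
C_D = \frac{F_{\text{drag}}}{\tfrac{1}{2}\rho U^{2}\pi a^{2}}
    = \frac{24}{Re},
\qquad Re = \frac{2 a U \rho}{\mu}.
```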
2018-01-27T21:49:32.916Z
2008-05-01T00:00:00.000
{ "year": 2014, "sha1": "fd04e1df68fed77eb382883dea55d109c7e3addf", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1402.5023", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "d07e4e5d200104e1e4c9b45e13ca671ef86f1a81", "s2fieldsofstudy": [ "Engineering", "Physics" ], "extfieldsofstudy": [ "Mathematics", "Physics" ] }
263939319
pes2o/s2orc
v3-fos-license
Reversing motor adaptation deficits in the ageing brain using non-invasive stimulation Healthy ageing is characterised by deterioration of motor performance. In normal circumstances motor adaptation corrects for movements’ inaccuracies and as such, it is critical in maintaining optimal motor control. However, motor adaptation performance is also known to decline with age. Anodal transcranial direct current stimulation (TDCS) of the cerebellum and the primary motor cortex (M1) have been found to improve visuomotor adaptation in healthy young and older adults. However, no study has directly compared the effect of TDCS on motor adaptation between the two age populations. The aim of our study was to investigate whether the application of anodal TDCS over the lateral cerebellum and M1 affected motor adaptation in young and older adults similarly. Young and older participants performed a visuomotor rotation task and concurrently received TDCS over the left M1, the right cerebellum or received sham stimulation. Our results replicated the finding that older adults are impaired compared to the young adults in visuomotor adaptation. At the end of the adaptation session, older adults displayed a larger error (−17 deg) than the young adults (−10 deg). The stimulation of the lateral cerebellum did not change the adaptation in both age groups. In contrast, anodal TDCS over M1 improved initial adaptation in both age groups by around 30% compared to sham and this improvement lasted up to 40 min after the end of the stimulation. These results demonstrate that TDCS of M1 can enhance visuomotor adaptation, via mechanisms that remain available in the ageing population. Introduction When tested with sensitive laboratory-based behavioural tests, healthy ageing has been shown to be associated with a general reduction in motor performance, characterised by decreases in accuracy, coordination and movement speed and increases in movement duration and variability (Seidler et al. 2002Krampe, 2002;Sarlegna, 2006;Heuninckx et al. 2008;Dutta et al. 2013). These impairments are linked to change in brain structure such as regional cortical thinning, decreases in the volume of subcortical structures, concordant ventricular enlargement and changes in white matter integrity (Salat et al. 2004;Walhovd et al. 2005Walhovd et al. , 2011Fjell et al. 2009;Hafkemeijer et al. 2014;Sexton et al. 2014). Brain physiology is also affected in healthy aging, for example decreased neural plasticity is seen (Rogasch et al. 2009;Freitas et al. 2011;Pascual-Leone et al. 2011). These changes to brain structure and function may explain the decrease in motor performance that can be seen using lab-based behavioural testing. Moving successfully in a dynamic and ever-changing world relies on continuous calibration of the motor system. Motor adaptation is a form of motor learning that restores accuracy when systematic motor errors are encountered. As such, it is thought to play a critical role in maintaining motor accuracy in the face of changing factors such as muscle fatigue or weakening. Several studies have reported that motor adaptation is impaired in older adults and as such may play a role in the general decline of motor performance in the ageing population (Buch et al. 2003;Bock & Girgenrath, 2006;Seidler, 2006;Anguera et al. 2010;Heuer & Hegele, 2011;Langan & Seidler, 2011;Huang & Ahmed, 2014). 
The last 15 years have seen a seemingly exponential rise in the use of transcranial direct current stimulation (TDCS) in both the experimental and clinical settings. TDCS has the attractive property of being capable of modulating neural excitability whilst being painless, non-invasive and well tolerated. It has been repeatedly demonstrated that TDCS can improve some motor behaviour in both healthy subjects (Nitsche et al. 2003;Boggio et al. 2006;Reis et al. 2009) and a number of chronic and acute movement disorders such as stroke (Hummel et al. 2005), spinal cord injury (Fregni et al. 2006a) and Parkinson's disease (Fregni et al. 2006b). In general, it has been shown that the modulation of excitability is polarity dependent with anodal TDCS being excitatory and cathodal TDCS being inhibitory (Nitsche & Paulus, 2000). A recent study took advantage of anodal TDCS ability to enhance behaviour to explore the respective role of the primary motor cortex (M1) and the lateral cerebellum in the adaptation of upper limb movements in young healthy adults (Galea et al. 2011). The authors found that anodal TDCS over the lateral cerebellum increased the rate of adaptation, while anodal TDCS over M1 increased the amount of retention of the adaptation. Their conclusion was that the lateral cerebellum was involved in the development of the adaptation itself, while M1 was responsible for retention of the adapted state. Following this first study, other studies found that anodal TDCS over the lateral cerebellum improved other forms of adaptation such as force-field adaptation, locomotor adaptation and eyeblink conditioning (Jayaram et al. 2012;Zuchowski et al. 2014;Herzfeld et al. 2014), suggesting that anodal TDCS could be a useful tool to enhance the adaptive process in healthy young adults. More recently, a study demonstrated that the application of anodal TDCS over the lateral cerebellum during the adaptation of reaching movements could compensate for the deficit in adaptation normally seen in older adults when compared to young controls (Hardwick & Celnik, 2014). However, to date, no single study has directly compared the effect of TDCS over M1 and the cerebellum between young and older adults. Therefore, the aim of this study was to investigate the effect of anodal TDCS over M1 and over the lateral cerebellum on the motor adaptation of both young and older adults. Specifically, we intended to replicate earlier findings that visuomotor adaptation is indeed impaired in the older adults, and then identify whether the beneficial effect of anodal TDCS over M1 and the cerebellum was comparable between the two age groups. Because of the changes in brain structure and function underlying healthy aging, we could expect that the effect of anodal TDCS would quantitatively differ between young and older adults. Moreover, we tested the online effect of anodal TDCS on the adaptation process, but also its off-line short-term effect 50 min later. We used a version of the classic visuomotor rotation task in which participants must adapt to a rotation imposed between a joystick, controlled using small movements of the fingers and wrist, and the cursor presented on a computer screen (Cunningham, 1989;Krakauer et al. 1999;Miall et al. 2004). Participants received anodal TDCS over left M1, right cerebellum or received sham stimulation while adapting to this visuomotor rotation. 
Participants were subsequently re-tested after a 50 min break on the same adaptation protocol to evaluate short-term retention, before de-adapting back to their natural state. Participants Eighty-four healthy participants took part in the study, but the data of four older adults was excluded from this paper, as they did not follow the instructions to perform the task (i.e. they did not return to the starting position before the beginning of each trial). In total, we report the results for 80 healthy participants: 38 older participants (20 females, mean age: 63.2 ± 7.5 years old, 4 left-handed participants) and 42 young adults (20 females, mean age: 22.5 ± 3.1 years old, 8 left-handed participants). Handedness was self-reported as the dominant hand and participants received monetary compensation for their time (£10 per hour). Participants were assigned randomly to one of three groups as follows. There was no age difference in the three sub-groups of young participants (F (2,39) < 1, P = 0.81) or the three sub-groups of older participants (F (2,35) < 1, P = 0.73). Young and older participants were screened for personal or familial history of epilepsy, neurological condition, neurosurgery, strokes and depression. Experimental procedures conformed to the Code of Ethics of the World Medical Association (Declaration of Helsinki) and were approved by the National Research Ethics Service (NRES) Committee South Central -Oxford B and C. Written informed consents were obtained from all participants who took part in the study. TDCS TDCS was applied via two saline-soaked electrodes (5 cm × 7 cm) using a DC-stimulator Plus (NeuroConn, Ilmenau, Germany). For M1 stimulation, the anodal electrode was placed over the hand area of the left primary motor cortex, identified in each subject with single-pulse transcranial magnetic stimulation (TMS: Magstim 200, Dyfed, UK), with the cathodal electrode positioned on the contralateral supraorbital area (Nitsche & Paulus, 2000). For cerebellar stimulation, the anodal electrode was centred on the right cerebellar cortex, 3 cm lateral to the inion (Galea et al. 2009;Jayaram et al. 2012;Hardwick & Celnik, 2014), while the cathodal electrode was placed on the left superior aspect of the trapezius muscle. For cerebellar TDCS, we used an extra-cephalic reference electrode to avoid the confound arising from placing the cathodal electrode on the participant's head which would influence the activity of the brain beneath. The cathodal electrode was placed over the trapezius muscle as this montage was successfully used to stimulate the cerebellum and cerebral cortex in several published studies (Joundi et al. 2012;Brittain et al. 2013;Mehta et al. 2014Mehta et al. , 2015Panouillères et al. 2015). In both stimulation conditions, anodal stimulation was delivered at 2 mA (Iyer et al. 2005;Ferrucci et al. 2008;Galea et al. 2011) with the stimulation intensity gradually ramped on and off over a 10 s period. TDCS started at the beginning of the baseline phase, continued during the first adaptation phase (7 min) and continued for approximately 10 min into the break (a total of 17 min stimulation: Fig. 1A). The stimulation was continued into the break as it has been shown that stimulating M1 for at least 13 min leads to changes in M1 excitability in young adults lasting up to 60 min after stimulation termination (Nitsche & Paulus, 2001). 
For the sham-stimulation sessions, the electrodes were placed as for M1 stimulation, but stimulation only lasted for 30 s, with 10 s ramping on and off. In this way, all participants thought they were being stimulated. To maintain the blinding regarding the stimulation condition as much as possible, subjects were told that the results of the stimulated group would be compared to a non-stimulated group, but it was not mentioned that the non-stimulated group was in fact a sham group.

Figure 1. A, TDCS started at the beginning of the baseline phase and was turned off approximately 10 min into the break (stimulation duration: 17 min). After the 50 min break, participants performed the second visuomotor adaptation phase (VM Adapt2) followed by the de-adaptation (De-Adapt). B, in the baseline and de-adaptation trials, the movements of the green cursor followed the exact path of the joystick movement. For the visuomotor adaptation trials (VM Adapt1 and VM Adapt2), the movement of the green cursor was rotated by 60 deg counterclockwise relative to the joystick movement. Note that the red target was presented randomly at one of 8 equidistant positions located on the dashed-line circle.

Experimental design and procedure

Participants sat in an armless chair about 80 cm away from a computer screen (size: 26.5 cm × 16.5 cm) placed vertically in front of them and manipulated a joystick with their right hand, regardless of handedness, that was fixed to a table at a comfortable height on their right side. The joystick was 6.5 cm in height and 2 cm in width, with a maximal centre-out excursion of 17 deg (low-profile contactless joystick, APEM 9000 Series, RS Components). Subjects controlled the joystick by moving their fingers and/or wrist. The joystick (sampling rate: 60 Hz) moved a green cursor (diameter: 0.3 cm) on the computer screen. A shield was used to prevent the participants from seeing their hand or the joystick while performing the task. During the experiment, participants had to follow a red target (diameter: 0.3 cm) that jumped from the centre of the screen to one of eight equidistant positions, separated by 45 deg, located at the perimeter of a visible circle (radius: 4.6 cm, Fig. 1B). The red target was presented in the centre of the screen for 750 ms and then jumped to a randomly selected peripheral position and stayed in this location for a further 750 ms. Participants started each trial with the green cursor in the centre of the screen (resting position of the joystick) and were instructed to make fast, accurate and ballistic movements with the joystick in order to 'shoot' the red target with the green cursor. Participants had 750 ms to perform this movement; they were asked not to stop on the target but to pass through it and then to release the joystick so that the green cursor could come back to the starting position for the next trial. On average, outward movements of the older participants lasted about 220 ms and those of the young adults lasted about 160 ms, suggesting that both groups were able to elicit fast, ballistic movements. Behavioural testing was divided into five phases: baseline, first adaptation (VM Adapt1), consolidation period (break), second adaptation (VM Adapt2) and de-adaptation (De-Adapt, Fig. 1A). During the baseline, participants performed 50 trials in which the direction of movement of the green cursor matched the movement of the joystick (Fig. 1B). After a break of 1 min, the adaptation phase (VM Adapt1) started and lasted for 150 trials.
In this phase, the movement of the green cursor was rotated counter-clockwise by 60 deg relative to the joystick movement (Fig. 1B). Participants were told the nature of the cursor rotation before the start of the adaptation phase, to give young and older participants similar explicit knowledge about the perturbation. Participants were instructed to keep moving as fast, accurately and straight as in the baseline phase and to avoid making corrective secondary movements despite the large errors initially incurred as a consequence of the rotation. Participants were also told not to use any explicit strategy to overcome the error and that the learning will occur implicitly. The consolidation period consisted of a 50 min break where participants were at rest. This period was followed by the second adaptation phase (VM Adapt2) that was identical to the initial adaptation phase. Finally, participants performed the de-adaptation phase (150 trials) in which the movement of the green cursor once again matched the movement of the joystick (Fig. 1B). Data analysis Joystick movements were analysed on a trial-by-trial basis using in-house software written in MATLAB (Mathworks Inc., Natick, MA, USA). Our main measure was the angular error between the initial outward movement of the cursor and the target. We calculated the movement error as the angular difference between a straight line from the start position to the target and the position of the cursor at peak velocity. The automated calculation of movement error based on maximal velocity was checked by the operator trial-by-trial. Trials with premeditated or otherwise poorly defined movements were rejected from further analysis (mean ± standard deviation: 0.86 ± 0.97% of trials were rejected per subject). Baseline performance was measured by averaging the last 10 trials of the baseline phase. Adaptation in the different phases (VM Adapt1, VM Adapt2 and De-Adapt) was measured by averaging the movement error across blocks of 10 trials. The late adapted level reached at the end of VM Adapt1, VM Adapt2 and De-Adapt was taken as the average of the last 30 trials of each phase, where adaptation reaches an asymptote. For all individuals in the M1 groups, the percentage improvement in VM Adapt1 relative to sham was calculated as the difference between the late adapted level of each M1 subject and the mean value of the late adapted level in their respective sham group divided by the mean value of the late adapted level in their respective sham group. Statistical analyses were performed with the SPSS Statistics software package (IBM, Armonk, NY, USA). We ran ANOVAs in the general linear model framework, but for simplicity we will refer to them as ANOVAs. To test for differences in adaptation, blocked error of every 10 trials for the different phases was compared with a three-way ANOVA with the between-subject factors Group (Young and Older) and Stimulation (Sham, M1 and Cerebellum) and the within-subject factor Blocks (1,2, . . . 15). The effect of stimulation on movement error in baseline and on the late adapted levels was evaluated using two-way ANOVAs with the between-subject factors Group (Young and Older) and Stimulation (Sham, M1 and Cerebellum). ANOVAs were performed separately for the different phases. Greenhouse-Geisser corrections to the degrees of freedom were applied if Mauchly's sphericity test revealed a violation of the assumption of sphericity for any of the factors in the ANOVAs. 
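The error measures described above are straightforward to express in code. The following is a minimal illustrative sketch in Python/NumPy of the angular-error, blocking and improvement-versus-sham calculations; the original analysis was performed with in-house MATLAB software, and all function and variable names here are assumptions made for the example rather than the authors' implementation.

import numpy as np

def angular_error(trajectory, target_angle_deg):
    """Angular error (deg) between the target direction and the cursor direction
    measured at peak tangential velocity. `trajectory` is an (n_samples, 2) array
    of cursor x/y positions for one trial, starting at the centre (0, 0)."""
    velocity = np.diff(trajectory, axis=0)      # sample-to-sample displacement
    speed = np.linalg.norm(velocity, axis=1)
    i_peak = np.argmax(speed)                   # index of peak velocity
    x, y = trajectory[i_peak + 1]               # cursor position at peak velocity
    movement_angle = np.degrees(np.arctan2(y, x))
    # signed difference, wrapped to (-180, 180]
    return (movement_angle - target_angle_deg + 180.0) % 360.0 - 180.0

def blocked_errors(trial_errors, block_size=10):
    """Average the per-trial errors in consecutive blocks of `block_size` trials."""
    errors = np.asarray(trial_errors, dtype=float)
    n_blocks = len(errors) // block_size
    return errors[:n_blocks * block_size].reshape(n_blocks, block_size).mean(axis=1)

def late_adapted_level(trial_errors, n_last=30):
    """Mean error over the last `n_last` trials of a phase (the asymptote)."""
    return float(np.mean(trial_errors[-n_last:]))

def improvement_vs_sham(m1_subject_level, sham_group_levels):
    """Percentage improvement of one M1 subject relative to the mean late adapted
    level of the corresponding sham group, as described in the text."""
    sham_mean = float(np.mean(sham_group_levels))
    return 100.0 * (m1_subject_level - sham_mean) / sham_mean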
Significant main effects or interactions in the ANOVAs were followed by Bonferroni post hoc tests.

Baseline performance

Baseline performance, as measured by the mean error across the last 10 baseline trials, did not significantly differ between groups or stimulation conditions (two-way ANOVA: Group effect: F(1,74) < 1, P = 0.12; Stimulation effect: F(2,74) = 1.66, P = 0.20). There was a trend for an interaction between group and stimulation conditions (F(2,74) = 3.02, P = 0.06), which is explained by the fact that older participants in the M1 condition had slightly larger positive errors than the other older groups and that young participants in the cerebellar condition also had slightly larger errors compared to young participants in the sham condition (Table 1). However, these contrasts were far from significant (Bonferroni post hoc tests for older M1 vs. older sham: P = 0.26; for older M1 vs. older cerebellum: P = 0.23; for young cerebellum vs. young sham: P = 0.11; for young M1 vs. young sham: P = 1). This result suggests that TDCS over M1 or the cerebellum did not modify the initial motor performance in this joystick task.

Older participants are impaired in motor adaptation

Separate ANOVAs for the VM Adapt1, VM Adapt2 and De-Adapt phases were performed on the mean error averaged across blocks of 10 trials with the between-subject factors Group (Young and Older) and Stimulation (Sham, M1 and Cerebellum). To evaluate the level of adaptation and de-adaptation reached at the end of each phase, the last 30 trials of VM Adapt1, VM Adapt2 and De-Adapt were averaged for each subject. Two-way ANOVAs with the between-subject factors Group (young and older adults) and Stimulation (Sham, M1 and Cerebellum) were conducted on these late adapted levels, separately for the three adaptation phases (VM Adapt1, VM Adapt2 and De-Adapt). The reduced adaptation in the older adults is reflected in the late adapted levels, as measured by the mean error of the last three blocks of each phase (Fig. 3). Indeed, in all phases, the young participants consistently made errors of significantly smaller magnitude than the older adults (two-way ANOVAs with main effect of Group: VM Adapt1: F(1,74) = 13.95, P < 0.001; VM Adapt2: F(1,74) = 16.54, P < 0.001; De-Adapt: F(1,74) = 28.60, P < 0.001).

M1 stimulation facilitates adaptation in both age groups and aligns the initial adaptation performance of the older participants to that of young participants

Adaptation was facilitated in both the young and older participants by M1 stimulation (Fig. 2, M1 condition). With M1 stimulation, error reduction was significantly larger during VM Adapt1 than for the groups that received sham or cerebellar stimulation (main Stimulation effect: F(2,74) = 9.37, P < 0.001; Bonferroni post hoc tests for M1 vs. sham and M1 vs. cerebellum: P < 0.01). Strikingly, adaptation through VM Adapt1 in the older participants receiving M1 stimulation did not differ from the young sham stimulation group (two-way ANOVA comparing the young sham and older with M1 TDCS: main Group effect: F(1,25) < 1, P = 0.85, Block × Group interaction: F(6,151) = 1.21, P = 0.30). The movement errors reached at the end of VM Adapt1 (Fig. 3) were also significantly lower in the young and older participants who received TDCS over M1 than in those who received sham or cerebellar stimulation.

Lasting effects of M1 stimulation after the 50 min break

Stimulation of M1 during VM Adapt1 led to a significantly improved performance in VM Adapt2 compared to the sham and cerebellar conditions (Fig.
2; three-way ANOVA with Blocks, Group and Stimulation factors: Stimulation effect: F (1,50) = 6.24, P < 0.05; Bonferroni post hoc tests for M1 vs. sham and M1 vs. cerebellum: P < 0.05). The effect was mainly due to the larger decrease in error in the older participants with M1 stimulation for all VM Adapt2 while young participants were mainly improved at the beginning of this phase (Block × Group × Stimulation interaction: F (18,660) = 1.88, P < 0.05). Indeed, the profile of the error reduction during the second adaptation phase for the older participants who had received M1 TDCS was again very similar to that of the young participants in the sham condition (two-way ANOVA: main Group effect: F (1,25) < 1, P = 0.67, Block × Group interaction: F (7,718) < 1, P = 0.46). The adaptation levels reached at the end of VM Adapt2 were significantly better following M1 TDCS than after sham TDCS in both age groups, as subjects made smaller errors after M1 TDCS than after sham TDCS ( Fig. 3; two-way ANOVA with Group and Stimulation factors: Stimulation effect: F (2,74) = 4.64, P < 0.05; Bonferroni post hoc tests: P < 0.05). M1 stimulation speeds up the de-adaptation of young participants M1 stimulation influenced the performance of young during de-adaptation and this was not seen following cerebellar or sham stimulation ( Fig. 2; three-way ANOVA with Blocks, Group and Stimulation factors: Group × Stimulation interaction: F (2,74) = 4.43, P < 0.05). Indeed the young group with M1 TDCS de-adapted more quickly than the young group who had received sham (Bonferroni post hoc test: P < 0.05), while the older groups de-adapted similarly in all the stimulation conditions (Bonferroni post hoc tests: P > 0.30). The measure of late adapted levels (Fig. 3) shows that all the young participants reached the same level of de-adaptation and that the same was true for older participants (two-way ANOVA with Group and Stimulation factors: main Stimulation effect: F (2,74) < 1, P = 0.42; Group × Stimulation interaction: F (2,74) = 3.07, P = 0.052). Note that there was a small trend for older adults in the M1 condition to de-adapt less than sham and cerebellar conditions, but this was far from being significant (Bonferroni post hoc test: P > 0.25). Online corrections were not affected by stimulation It is possible that the increased accuracy described above is due to more rapid online corrections following TDCS, i.e. corrections before peak velocity is attained. To test this, we re-analysed the data to find the initial movement error calculated as the difference between the target angle and the angle of direction described by the initial straight line of the joystick movement (as opposed to that calculated using peak velocity, see Methods). For VM Adapt1, VM Adapt2 and De-Adapt, ANOVAs with the within-subject factor Block and the between-subject factors Group and Stimulation were conducted on initial movement errors. We again found that VM Adapt1 was facilitated with M1 stimulation relative to sham and cerebellar stimulation (main Stimulation effect: F (2,74) = 8.87, P < 0.001; Bonferroni post hoc tests: M1 vs. sham: P < 0.01; M1 vs. cerebellum: P < 0.001) and that adaptation in VM Adapt1 in the older participants receiving M1 stimulation did not differ from the young sham stimulation group (two-way ANOVA: main Group effect: F (1,25) = 0.34, P = 0.57, Block × Group interaction: F (7,184) = 1.39, p = 0.21). 
In VM Adapt2, the older participants who had received M1 stimulation adapted more than the ones who had received sham (Block × Group × Stimulation interaction: F (21,763) = 1.93, P < 0.01). Finally, M1 stimulation influenced the de-adaptation only in young participants relative to sham stimulation (Group × Stimulation interaction: F (2,74) = 5.57, P < 0.001). The lack of difference between these analyses and those using data where the initial movements' errors are measured at the peak velocity suggest that TDCS did not influence online corrections, but really affected the adaptation process. This analysis also reveals that the use of peak velocity is an accurate method to calculate initial direction. No effect of handedness on the results All participants used their right hand to perform the task regardless of handedness as we wanted to stimulate the same brain sites in all the participants: the left primary motor cortex and the right lateral cerebellum. The left-handed participants were relatively well distributed across the different groups and stimulation conditions (see Methods). However, to be sure that the presence of left-handed participants did not alter the results, we performed ANOVAs on the movements' error separately for VM Adapt1, VM Adapt2 and De-Adapt after excluding the data of all the 12 left-handers. These ANOVAs showed that older adults adapted at a significantly slower rate than young adults (Group effect: F (1,62) > 11.48, P < 0.01; Group × Block interaction: F (5,290) > 2.50, P < 0.05). Moreover, we also found that anodal TDCS over M1 facilitated the adaptation during VM Adapt1 compared to sham and cerebellar stimulation (Stimulation effect: F (2,62) = 6.25, P < 0.01; Bonferroni post hoc tests: P < 0.05). M1 stimulation was also improving the error reduction during VM Adapt2, mostly for older adults (Stimulation effect: F (2,62) = 2.90, P = 0.06; Stimulation × Block × Group interaction: F (17,533) = 2.02, P < 0.01). Finally, M1 stimulation also affected the performance of young during the de-adaptation phase (Group × Stimulation interaction: F (2,62) = 5.32, P < 0.01). Because these results are qualitatively and statistically similar to the ones presented above, we conclude that the handedness of our subjects did not impact the effect of the stimulation. Discussion The main finding of this study is that anodal TDCS over M1 similarly improved the acquisition of motor adaptation in both young and older adults. This facilitation of motor adaptation in older adults made their performance similar to that of young participants who did not receive any stimulation and the effect of the stimulation continued beyond the 50 min break. Surprisingly, we did not find any effect of the stimulation of the lateral cerebellum on the adaptive process. J Physiol 593.16 Comparison of the two age groups who received sham stimulation replicates findings from previous studies that demonstrate that ageing is associated with a decrement in motor adaptation (Buch et al. 2003;Bock, 2005;Bock & Girgenrath, 2006;Seidler, 2006;Anguera et al. 2010;Fernandez-Ruiz et al. 2011;Heuer & Hegele, 2011;Langan & Seidler, 2011;Huang & Ahmed, 2014;Hardwick & Celnik, 2014). For visuomotor rotation, the deficits have been reported for sudden perturbations of both small (30 deg, Hardwick & Celnik, 2014, but see Heuer & Hegele, 2008 and large amplitude (60 deg, Bock, 2005;Heuer & Hegele, 2008, but not for gradual perturbations (Buch et al. 2003;Cressman et al. 2010). 
Several mechanisms have been posited for this deficit in adaptation in older people. The impairment could be cognitive and underpinned by a deficit in spatial working memory (Anguera et al. 2010) or by the inability to use explicit strategies to compensate for the rotation (Heuer & Hegele, 2008). More recently, it has been suggested that the decline in adaptation may also be due to a deficit in reinforcement learning in the older adults (Heuer & Hegele, 2014) and/or a deficit of the slow process of motor adaptation (Trewartha et al. 2014). Although our study aimed at replicating the findings of the studies above, further research will be needed to disentangle the mechanisms behind the deficit. Previous studies have found that anodal TDCS over the lateral cerebellum enhances the acquisition of different forms of motor adaptation in young adults (Galea et al. 2011;Jayaram et al. 2012;Block & Celnik, 2013;Zuchowski et al. 2014;Herzfeld et al. 2014) and in older adults (Hardwick & Celnik, 2014). Our results are directly at odds with these reports as we do not find any effect of the anodal TDCS over the lateral cerebellum. While the studies above placed the cathode over the buccinator muscle (i.e. cheek), we placed it on the shoulder to avoid the confound arising from placing the cathodal electrode on the participant's head. Current density modelling suggests that this montage with the reference on the shoulder provides maximal current flow within the cerebellar hemispheres (Parazzini et al. 2014;Rahman et al. 2014). Moreover, electrode montages with reference on the shoulder have been used successfully in a few stimulation studies (Joundi et al. 2012;Brittain et al. 2013;Mehta et al. 2014Mehta et al. , 2015. Finally, in a recent study using a similar montage, we found that TDCS over the cerebellum affected saccadic adaptation in a polarity-dependent manner (Panouillères et al. 2015). All these reasons make it unlikely that the different electrode montage for cerebellar stimulation is the reason for our lack of effect. Our main finding that TDCS of M1 improves adaptation is also at odds with the study by Galea et al. (2011) who found that anodal stimulation of M1 increased the retention of adaptation, but not its initial acquisition. However, at least one other study has found M1 TDCS effective in improving adaptation (Hunter et al. 2009), while three other studies did not find any effect of M1 stimulation on force-field and visuomotor adaptation (Baraduc et al. 2004;Block & Celnik, 2013;Herzfeld et al. 2014). Our results are then consistent with growing evidence that the behavioural response to TDCS is sensitive to small variations in protocol. Several differences in protocol can be highlighted between our study and others already in the literature. For example, compared to Galea et al.'s study (2011), differences include the number of trials in baseline (50 vs. 196), the explicit knowledge of the perturbation, the type of movement and the effectors used (wrist/finger vs. arm) exist. However, it might be that the most important difference is the size of the visuomotor rotation (60 deg vs. 30 deg). It has been suggested that adapting to larger rotations involves more explicit learning strategies compared with adapting to smaller rotations and it has been hypothesised that these explicit processes are specifically prone to age-related changes (Heuer & Hegele, 2008;Hegele & Heuer, 2013). 
Furthermore, the processes that underpin the explicit components of visuomotor adaptation are thought to be cortical whereas implicit visuomotor adaptation is thought to be cerebellar. Therefore, it might be that the efficacy of M1 stimulation that we see in this study -and that is not seen in other studies using smaller rotationsis due to the cortical locus of the explicit processes engaged during adaptation to larger visuomotor rotations. It should also be noted that the level of control exerted by M1 over fine finger and wrist movements is far higher than that over reaching movements of the whole arm (see Lawrence & Kuypers, 1968). Therefore, it may be that M1 is more involved in the processes that adapt movements of the hand (as used in the current study) than those of movements of the arm. In agreement with our current result, facilitation of implicit and explicit motor learning of finger movements with M1 TDCS have been demonstrated in serial-reaction time tasks (Nitsche et al. 2003;Kantak et al. 2012) and motor skill learning tasks (Reis et al. 2009;Stagg et al. 2011;Schambra et al. 2011). Our study shows that online M1 stimulation is beneficial to adaptation performance on our motor adaptation task irrespective of age. This effect could not be attributed to a placebo effect, as it was only present for M1 but not cerebellar TDCS. In relative terms, M1 stimulation improves the final reduction of error compared to sham stimulation (see Methods) by around 30% in both age groups. Our findings are certainly in line with previous studies showing that TDCS over M1 in the older adults could increase M1 plasticity and facilitate skill acquisitions in hand tasks, e.g. the Jebsen-Taylor hand function test and a finger-tapping task (Hummel et al. 2010;Goodwill et al. 2013;Zimerman et al. 2013). Moreover, M1 TDCS has a lasting effect in both age groups as about 40 min after the end of M1 stimulation young and older participants performed better in VM Adapt2 than in the sham condition. These data highlight that in both young and older adults, TDCS can have a similar behavioural impact lasting up to 40 min after stimulation termination. This is in agreement with a previous study that has shown that initial improvements in motor performance brought about using TDCS lead to improved retention 24 h later in older participants (Zimerman et al. 2013). Thus, our results show that, despite the functional and structural brain changes associated with healthy ageing, the mechanisms 'activated' by TDCS that result in improved performance in visuomotor adaptation in young adults remain available in older participants. In conclusion, we confirmed that ageing is associated with a decline in visuomotor adaptation. Anodal TDCS over the motor cortex similarly enhanced the adaptation of both young and older adults and the improvement lasted in both age groups up to 40 min after the stimulation termination. This effect of the stimulation restored the performance of older adults to the one of young adults (without stimulation). These results demonstrate that TDCS of M1 can enhance visuomotor adaptation via mechanisms that remain available in the ageing population. Our findings indicate that TDCS may be a useful tool to help combat the normal decline in motor performance seen in normal healthy ageing.
2018-04-03T04:23:26.820Z
2015-04-30T00:00:00.000
{ "year": 2015, "sha1": "3e9fc16d2c3e9487867faef15dde479481011178", "oa_license": "CCBY", "oa_url": "https://physoc.onlinelibrary.wiley.com/doi/pdfdirect/10.1113/JP270484", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "b2caf5159d1b32a63c50155548085959d188131d", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Psychology", "Medicine" ] }
237774908
pes2o/s2orc
v3-fos-license
Evaluation of New PCM/PV Configurations for Electrical Energy Efficiency Improvement through Thermal Management of PV Systems Photovoltaic modules during sunny days can reach temperatures 35 °C above the ambient temperature, which strongly influences their performance and electrical efficiency as power losses can be up to −0.65%/°C. To minimize and control the PV panel temperature, the scientific community has proposed different strategies and innovative approaches, one of them through passive cooling with phase change materials (PCM). However, further investigation, including the effects of geometric shape, insulation, phase change temperature, ambient temperature, and solar radiation on the PV module power output and efficiency, needs further optimization and research. Therefore, the current work aims to investigate several system configurations and different PCMs (RT42, RT31, and RT25) and compare the system with and without insulation through computational fluid dynamic (CFD) tools. The final goal is to optimise and control the temperature of PV modules and evaluate their system efficiency and energy generation. The results showed that compared with a rectangular shape of the PCM container, the trapezoid-one exhibits a considerably better cooling performance with a negligible variation of the PV temperature, even when the melting temperature of the PCM was lower than the average ambient temperature. Moreover, the study showed that having insulation in the PCM container increases the amount of PCM needed, compared with no insulation case, and the increased amount depends on the PCM type. The newly proposed PV/PCM system configuration shows an efficiency and power generation enhancement of 17% and 14.6%, respectively, at peak times. Introduction Solar PV, together with wind energy, is fast becoming a mainstream and competitive source of power production. Although accounting for only 4.5% of total electricity generation in 2015, they are expected to represent 58% of total electricity production by 2050 [1]. Electricity generation from solar radiation is achieved through photovoltaic (PV) cells or concentrated solar power plants (CSP). This solar radiation can be used for electricity generation or heat production (space heating, hot water supply). PV cells absorb 80% of the incident solar radiation and depending on the PV module material, a small part of this solar radiation (only 15 to 20%) is converted into electrical energy while the remaining part is converted into heat [2]. Manufacturers claim that the available photovoltaic modules have an efficiency from 6 to 16% [3]. However, this claimed efficiency is measured at 25 • C, and they have not considered the PV module temperature rise during their working conditions. The overheating temperature of the module is due mainly to high solar radiation and high ambient temperatures [4]. PV modules during sunny days can reach temperatures of 35 • C above ambient temperature. This temperature increment strongly influences the performance and electrical efficiency of the PV system, which can lead to power losses from 0.40%/ • C at standard test conditions [5] to 0.65%/ • C [6], and increase the ageing of the module. Typical efficiencies for different PV module materials can be found in Table 1. Table 1. Efficiencies of PV modules vs. PV material [7,8]. Amorphous/microcrystalline Si 11.7-9.9 Dye-sensitized 12.3-8.5 Organic 11.3-9.2 1 At 25 • C and spectrum (1000 W/m 2 ). 
To inhibit the temperature rise in PV modules, several authors have proposed different cooling techniques using air (natural or forced circulation), water (water cooling system or heat pipes), thermoelectric systems, or Phase Change Materials; some of these methods are passive while some others are active. PV modules can also be combined with solar thermal (PV/T) to deliver heat and electricity into a single module. Some studies have shown an increase in electrical efficiency by 5% [9]. PV/T technology is mainly used in domestic and industrial applications for heating air or water as well as electricity generation [10]. In those cases, water or air are mainly used as the heat transfer fluid. Air type PV/T collectors are used for drying, space heating, and ventilation, whereas water types are used to removing the heat from the PV module. Water types are more effective than air types because the fluid temperature variation is narrower. Already some researchers have pointed out higher thermal efficiencies, 50 to 70% for water heating and 17-51% for air heating [11]. These types of PV/T collectors are mainly used in thermal/heat pump systems, water desalination, solar cooling, or solar greenhouse [2]. However, since 2010 the study of PCMs and nanofluid to increase PV module's efficiency has increased [12]. PCMs are materials that store thermal energy through a phase change, the solid/liquid phase change being the most used. These materials are used for thermal energy storage and also thermal management applications as they can charge/discharge at an almost constant temperature and have high energy density (small footprint) [13] despite suffering poor thermal conductivity. In recent years, innovative passive cooling methods have been presented, compared to PV/T, as they do not require additional power consumption, work at a higher operating temperature to supply useful heat, and are a more complex system with a higher initial investment [14]. Abd-Elhady et al. [15] proposed drilling through holes in the PV module to allow the hot layer of air under the module to rise, creating natural flows that cool down the module. The temperature of the PV module decreased with the increased number of through-holes until an optimum number of holes was reached. The increase of the through-holes diameter reached a maximum cooling effect on the PV modules, above which less cooling occurred. Also, PCM has been proposed as a potential solution, although further cost-effective studies need to be conducted [16][17][18]. Several researchers have proposed the use of thin layers attached to the PV modules, similar to the research carried out by Stropnik et al. [17] which achieved an increase of the electrical power by 9.2% under experimental conditions. Su et al. [19] introduced a PCM layer to an air-cooled system, improving its efficiency by 10.7% compared with the PV module with no PCM. Also, other researchers proposed the use of microencapsulated phase change material (MEPCM) [20]. A MEPCM layer attached to a water-surface PV module resulted in a 2.1% relative efficiency improvement compared with the one without MEPCM [21]. Hasan et al. [19] used different melting temperature PCMs to evaluate the performance of each PCM in four different systems. They found that the salt hydrate PCM (CaCl 2 ) achieved the highest temperature reduction in most of the insulations. The results showed that the thermal conductivity of the PCM container had a strong impact on low thermal conductivity PCMs performance. 
Most PCMs have low thermal conductivity, which strongly affects their heat transfer rate during the charging/discharging process and limits their application, as several researchers have stated in their work. Different strategies to overcome this challenge are currently under study. Huang et al. [22] studied the thermal behavior of PV modules with and without PCM experimentally and by simulation. The system consists of a vertical southeast-oriented PV/PCM system using real ambient temperature and insolation conditions in South East England. The improvement in the thermal performance achieved using metal fins in the PCM container was significant as they enabled a more uniform temperature distribution within the PV/PCM module. The PCM and fins delayed the temperature increment maintaining the operating temperature of the PV cell at a much lower level for extended hours. It was observed that after the PCM melting process, the rate of PV heat extraction decreased, which produced a rapid increase in the module temperature. Khanna et al. [16] focused on optimizing a finned PV/PCM module to achieve the required cooling under different solar radiation; different lengths, thicknesses, and spacing between fins were used. An alternative to increase the thermal conductivity is the use of metallic foams, which was evaluated by Klemm et al. [23]. According to the simulation results, a storage unit consisting of a PCM-filled metallic fibre structure represents an adequate mean for passive thermal management of PV modules in given ambient conditions. The system was able to decrease 20 K of the PCM storage module. However, the configuration has to be validated experimentally under real conditions, and the volume reduction has to be considered. Other researchers used a PV/PCM system with form-stable paraffin/Expanded Graphite (EG) to improve the uniformity of the temperature distribution of the PV modules and thus improve their power output [18]. The PCM/EG helped to control the temperature and the temperature distribution of the PV modules. The output power achieved was above that of the conventional PV module for 230 min, with a maximum increment of 11.50% and an average increment of 7.28% under the experimental conditions. Others, such as Kumar et al. [24], used nanoPCMs to increase the efficiency. The authors achieve a PV panel electrical performance enhancement of up to 4.3%. The prototype studied consisted of a combined PCM mixture of calcium carbonate, copper nanoparticles, and SiC in a ratio of 7:2:1. Among the techniques for cooling systems mentioned above, PCMs are the most promising and effective cooling technique for photovoltaic due to their higher energy density per unit volume [25,26]. The use of PCM for PV modules cooling shows higher heat transfer rates than both forced air circulation and forced water circulation, a higher heat absorption due to the latent heat, and an isothermal heat removal [27]. Moreover, there is no electricity consumption, no noise, and no maintenance cost. However, the PCM has a higher cost than natural and forced air circulation; some PCMs are toxic, have fire safety issues, are strongly corrosive, and are considered disposable after their life cycle is complete. The research regarding this technology needs to move forward, offer solutions to unresolved problems, and understand the potential barriers to practical application. 
Additionally, the geographic location of the PV modules, no matter the system, has a direct impact on the intensity of solar radiation and wind speed, together with humidity conditions, dust in the air and/or pollution, factors that determine the PV module performance and output fluctuation [12]. Although the reported studies showed a considerable enhancement of the PV module's performance, the experimental results were mainly conducted in lab conditions, where the solar radiation and ambient temperature were fixed at values of 1000 W/m 2 and 25 • C, respectively. These tests make it difficult to predict the actual amount of PCM needed for real applications. Therefore, systems must be investigated at a designated location [28]. Studies have shown that common assumptions about the UK, such as not receiving enough sunshine and not being viable to install PV, were wrong; some findings have shown that a significant proportion of a house's electrical needs could be obtained more than 40% on average [29]. Another aspect that is sometimes overseen is the container dimensions. Typically, a rectangular-shaped PCM container is considered both in modeling and experimental systems. Novel PCM container shapes, different from the usual rectangular solid container filled with the phase change material at the backside of the PV panel, should be considered. Nizetic et al. [30] proposed a new configuration, where several small containers filled with the PCM material were attached to the PV panel. The number of PCM materials was approximately 47% less and the container material, aluminium, was 36% less when compared with a full PCM container. Both configurations performed better than the PV panel without PCM. Although there were periods where the full PCM configuration had the highest power output, the overall performance considering long periods of time for the small container configuration, was better. The authors relate to that outcome due to the more effective thermal management of the small containers owing to less effective heat transfer from the full PCM container strategy. In this study, passive cooling of PV systems using PCMs was investigated where three different PCM candidates were selected (RT42, RT31, and RT25) based on the average ambient temperature, and a polycrystalline PV module was used. The optimization of the PV module considered different parameters such as ambient temperature, daily solar radiation, PCM type, and its melting temperature, and PCM container shape and size. The parameters were assessed and compared with the system without PCM. This work aims not only to assess the performance of the novel PV/PCM system but also to determine the optimum PCM container parameters (shape/geometry, depth, length, and insulation), the PCM type, and the combined effect on the PV module surface temperature, efficiency and power output using real solar radiation and ambient temperature data. Computational Fluid Dynamic CFD was implemented using Ansys Fluent V18.2 [31], and the dynamic heat transfer, fluid flow, melting/solidification, and other PV/PCM system parameters were studied. System Design A passive cooling PV module system was considered in this study, consisting of a PCM container attached to the bottom of a Polycrystalline PV module, as shown in Figure 1. The PV module assembly is structured in five layers, and the physical properties of each layer are presented in Table 2. [32]. 
The PCM container material was made of 4 mm thickness aluminium, with dimensions 1000 mm in length and a variable depth from 20 to 120 mm. A convective heat boundary was applied on the top surface of the PV module, whereas two different boundary conditions were applied on the bottom and the side walls (adiabatic wall and convective heat) to determine the effect of insulation on the dynamic of the system. The PCM density change during the melting process leads to accumulated heat at the topmost part of the PCM container, which causes nonuniform distribution of the PV module temperature. This difference in temperature across different rows of cells leads to mismatch losses in the PV module. Each cell produces different power based on its temperature and since cells in the PV module are connected in series, the cell subjected to the highest temperature will produce the lowest power. According to Equation (10), the cell current increases with the increase in temperature, so the cooler cells will produce a lesser current. As in series connection, the lowest current producing cell governs the current of the whole string of cells in the module, and the higher current generated by the other cells will get dissipated as heat across the diode, which is parallel to the light source in the single diode model of the solar cell [33]. To address this issue and achieve a uniform temperature distribution, four different PCM container geometries were considered, as shown in Figure 2. To validate the model, measured solar radiation and ambient temperature data from a study conducted by Savvakis et al. [34] was used as input, and the predicted outputs power was compared to the measured value. The PV module orientation was considered as follows: 30 • from horizontal with an azimuth angle of 0 • , which were the experimental work conditions; the average ambient temperature for the selected day was approximately 30 • C. Therefore, three different PCMs with higher, similar, and lower melting temperatures (RT42, RT31, and RT25) were selected to determine the relationship between the location average ambient temperature and the PCM phase change temperature. The physical properties of these PCMs are shown in Table 3. The PCM density in the model was set as a function in the PCM temperature while the specific heat capacity was assumed to be constant. The data from the PCM supplier shows that approximately 90% of the phase change occurs within a temperature range of 5 • C, as shown in Figure 3 [35]. Thus, a narrow temperature range has been used for the PCM simulation; this was partially done to reduce the computational time [32,34,36,37]. Table 3. Properties of the studied PCMs (RT42, RT31, and RT25). PV/PCM System Model The fraction of the incident solar radiation that passes through the top glass layer and absorbed by the PV cells can be found in Equation (1) which considers the reflectivity of the PV module and the solar radiation losses [34]. where (τα) eff is the effective glass layer transmissivity and absorptivity of the PV cell. A small portion of the absorbed solar radiation can be converted into electricity, and the other major part will be converted into heat; this heat is expressed in Equation (2). where η c is the cell conversion efficiency. A computational fluid dynamic (CFD) tool was used to predict the operating temperature of the PV module considering the experimental ambient temperature and solar radiation of the selected day. The main objective is to assess its performance. 
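As a minimal sketch of how the temperature-dependent PCM properties described above can be represented in a melting/solidification model, the Python functions below interpolate the density between its solid and liquid values and define the liquid fraction over a narrow solidus-liquidus window, mirroring the roughly 5 °C range in which about 90% of the phase change occurs. The numerical values are placeholders chosen for illustration only; they are not the RT42, RT31 or RT25 datasheet values used in the study.

def liquid_fraction(T, T_solidus=40.0, T_liquidus=44.0):
    """Liquid fraction beta over a narrow solidus-liquidus window (deg C)."""
    if T <= T_solidus:
        return 0.0
    if T >= T_liquidus:
        return 1.0
    return (T - T_solidus) / (T_liquidus - T_solidus)

def pcm_density(T, rho_solid=880.0, rho_liquid=760.0,
                T_solidus=40.0, T_liquidus=44.0):
    """Density (kg/m^3) interpolated linearly between the solid and liquid values
    across the melting range, as a simple stand-in for the density-vs-temperature
    function used in the CFD model (specific heat is held constant, as in the text)."""
    beta = liquid_fraction(T, T_solidus, T_liquidus)
    return rho_solid + beta * (rho_liquid - rho_solid)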
The assumptions made to reduce the complexity of the problem and the computational time are the following:
1. The thermal resistance between the PV layers is negligible.
2. There is a uniform heat flux distribution on the PV surface.
3. Heat leaks/gains through the insulation are negligible.
Ansys Fluent V18.2 software [31] was used in the current study, and a melting and solidification model was chosen to simulate the melting/solidification processes of the different PCMs [31,38,39]. The model can solve thermal and fluid flow problems involving melting/solidification at a specific temperature, such as pure substances, or over a wide temperature range, such as mixtures or alloys. The enthalpy-porosity formulation was used in Ansys Fluent to track the liquid/solid front explicitly. The liquid/solid interface is denoted by a mushy zone and treated as a porous zone with a porosity equal to the fluid liquid fraction, which changes from 0 to 1 during the melting process [31,38,39]. To solve the energy equation, the model uses Equation (3):

∂(ρH)/∂t + ∇·(ρvH) = ∇·(k∇T) + S (3)

where ρ is the fluid density, v is the fluid velocity vector, k is the thermal conductivity, S is the source term, and H is the material's enthalpy, which is the summation of the sensible enthalpy (h) and the latent heat (∆H). Enthalpies are written in the manner of Equation (4):

H = h + ∆H (4)

with the sensible enthalpy given by h = h_ref + ∫_{T_ref}^{T} C_p dT, where h_ref is the reference enthalpy, T_ref is the reference temperature, and C_p is the specific heat capacity at constant pressure. ∆H represents the latent heat content, which varies from zero at the initial, fully solid state of the material to the latent heat of fusion L at the end of the phase change (fully liquid). Therefore, ∆H of a material with latent heat L during the melting process (mushy zone) can be written as ∆H = βL, where the liquid fraction β changes from 0 (solid) to 1 (liquid) across the melting range. More details on the melting and solidification model can be found in [31]. Regarding the boundary conditions, a convective heat transfer coefficient of 10 W/m²·K and radiation heat were applied on the top wall surface of the PV module. The same boundary conditions, with a convective heat transfer coefficient of 7 W/m²·K, were applied on the side and bottom walls in the cases without insulation, while adiabatic walls were used in the cases with insulation. The model used the measured ambient temperature to predict the heat transfer rate through convection and radiation. In practice, convective heat transfer coefficients are not constant; they depend on many parameters such as ambient temperature, wind speed, and even the cleanliness of the module surfaces. However, these values were selected because they demonstrated good agreement with the experiment. Radiation heat is mainly driven by the temperature difference; therefore, using the measured ambient temperature leads to an acceptable prediction of the radiation heat. The performance of the PV module was assessed in terms of conversion efficiency (η_c), short-circuit current (I_sc), open-circuit voltage (V_oc), and power output (P) based on the predicted operating temperature, as defined in Equations (8)-(13) [40]. The conversion efficiency at the operating cell temperature T_c is

η_c = η_Tref [1 − β_ref (T_c − T_ref)] (8)

where η_Tref is the cell/module electrical efficiency at standard operating conditions (SOC) and β_ref is the temperature coefficient (TC), which is defined in Equation (9) [40]:

β_ref = 1/(T_0 − T_ref) (9)

where T_0 is the PV temperature at which the module electrical efficiency drops to zero; this temperature is equal to 270 °C for crystalline silicon cells [40]. The PV module current and voltage at the operating temperature were calculated using Equations (10) and (11), respectively [40],
where ∝, β c and δ are the current, voltage, and solar radiation correction coefficients for the operating temperature. G T is the solar irradiance on the PV surface (W/m 2 ). G T SRC is the solar irradiance at standard reporting conditions. The maximum power was calculated using Equation (12). The module power output at the operating temperature is defined by the following equation: where A is the module surface area. The power losses generated due to the nonuniformity of the temperature distribution (mismatch loss fraction, L MLF ) on the PV module was calculated using Equation (1) [33]: where P mp is the PV module output power, P i is the cell power generation, n is the number of cells in the module, nr is the number of cells in one row, P row is the power output of one row, V row is the open voltage of one row, and I lowest is the lowest current in the module. Model Validation Nikolaos and Theocharis [34] experimentally tested the PV/PCM system and compared its performance with a conventional PV system. Their measured PV surface temperature with and without PCM cooling system was used to validate the developed CFD modelling. The measured solar radiation and ambient temperature were used in the CFD modelling. The measured PV temperature was compared against the predicted values, and the results are shown in Figure 4. For both cases (with and without PCM attached), the predicted PV temperatures demonstrated good agreement with the experimental work, with a value of R-square of 0.78 and 0.94, respectively, and a maximum temperature difference of only ±2 • C. Result and Discussion The current work investigates the potential of using PCMs to enhance the performance of PV modules where a PCM container is attached to the bottom surface of the PV module. The study aims to optimise the system variables: PCM container (shape, height/depth and length), insulation and PCM type, which contains a sufficient amount of PCM to meet the cooling load, by studying their effect on the operating temperature of the PV module. The PV module temperature was predicted using CFD modelling with Ansys Fluent V18.2 software to assess its performance. A published experimental work was used to validate the developed model. The dynamic power output from the PV module and its conversion efficiency with respect to the operating temperature was calculated using well-known empirical equations. Three different PCMs (RT42, RT31 and RT25) were chosen based on the average ambient temperature (~30 • C) of the studied case. The melting temperature of these PCMs was approximately 5 • C less, equal to and 10 • C higher than the average ambient temperature, to determine the effect of the selected phase change temperature on the container size, insulation, PV temperature, conversion efficiency, and power output. Figure 5a,b show the PV temperature of the RT42 PV/PCM system with different container heights, with and without insulation, versus the daytime. In both cases, the temperature of the PV-only system is included for comparison purposes. The results showed that the insufficient container heights (20,40,50, and 60 mm) led to an increase in the PV/PCM module temperature even higher than the conventional system (PV-only) at certain times. The increase was higher with insulation, as shown in Figure 5b; this is due to the increasing PCM temperature when it completely melts and releases its heat, becoming higher than the ambient temperature. 
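The temperature-to-power chain described above can be illustrated with a short numerical sketch. The Python functions below implement the efficiency-temperature relation of Equations (8) and (9) in the reconstructed form given earlier, the corresponding power output of a 1 m² module (P = η_c G_T A), and a simplified version of the mismatch-loss book-keeping in which the lowest-current cell sets the string current. The reference efficiency, operating temperatures and per-cell values are invented for illustration, and the mismatch function is a simplification rather than the single-diode model of reference [33].

def temperature_coefficient(T0=270.0, T_ref=25.0):
    # beta_ref = 1 / (T0 - T_ref): fractional efficiency loss per deg C
    return 1.0 / (T0 - T_ref)

def module_efficiency(T_cell, eta_ref=0.15, T_ref=25.0, T0=270.0):
    # eta_c = eta_ref * (1 - beta_ref * (T_cell - T_ref))
    return eta_ref * (1.0 - temperature_coefficient(T0, T_ref) * (T_cell - T_ref))

def module_power(T_cell, G=800.0, area=1.0, **kwargs):
    # P = eta_c * G * A, in watts, for irradiance G (W/m^2) and module area A (m^2)
    return module_efficiency(T_cell, **kwargs) * G * area

def mismatch_loss_fraction(cell_currents, cell_voltages):
    # Series string: the lowest-current cell governs the module current, so the
    # delivered power is compared with the sum of the powers each cell could
    # produce at its own operating point.
    ideal_power = sum(i * v for i, v in zip(cell_currents, cell_voltages))
    delivered_power = min(cell_currents) * sum(cell_voltages)
    return (ideal_power - delivered_power) / ideal_power

# Example: passive cooling that lowers the module temperature from 60 to 40 deg C
# under 800 W/m^2, and a module whose 12 hottest cells run at a slightly higher
# current and lower voltage than the remaining 60 cells.
p_hot, p_cool = module_power(60.0), module_power(40.0)
print(f"power: {p_hot:.1f} W -> {p_cool:.1f} W ({100 * (p_cool / p_hot - 1):+.1f}%)")
currents = [8.02] * 60 + [8.10] * 12
voltages = [0.520] * 60 + [0.505] * 12
print(f"mismatch loss fraction: {100 * mismatch_loss_fraction(currents, voltages):.2f}%")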
Thus, the conventional PV system showed a lower temperature when the heat could be easily separated from the back of the PV module. Without insulation, the optimum tank height was 70 mm while it was 80 mm with insulation; this means that having insulation in the PCM container when RT42 is used increases the required PCM amount by 14%, in addition to its cost. RT31 and RT25 showed a similar trend to RT42, with and without insulation, as shown in Figures 6 and 7. However, the average PV module temperature using RT31 and RT25 with the optimal PCM height was around 37 • C and 32 • C, respectively, which were lower than that of using RT42 (43 • C). Thus, RT31 and RT25 provide a significant reduction in the PV temperature at peak times by 23 • C and 28 • C, respectively, compared with the PV-only system when it was 17 • C after RT42 was used. The optimum tank height for RT31 was 110 mm when no insulation was used and 120 mm using insulation. When RT25 is used, the optimal heights were 120 and 125 mm, respectively. Figure 8 shows the comparison of RT42, RT31, and RT25 in terms of the PCM container size. The figure demonstrates that when RT31 and RT25 were used, the required amount of PCM was 56% and 72% higher than that of RT42. Regarding the effect of the tank shape on the PV module temperature, four different tank geometries (cases) were considered, as shown in Figure 2. In all these cases, no insulation was used, and RT42 was selected as the PCM material. When the PV module temperature becomes higher than the melting temperature of the PCM, the melting process starts, and the density change occurs. This density change forces the liquid phase to move to the top side of the PCM container, leading to a nonuniform temperature distribution in the PV module. As mentioned above, this temperature gradient is highly dependent on the tank depth, shape, and length. In the first case (Case 1) of the four configurations, the container had a rectangularshaped cross-section with a height value of 70 mm, while the second, third, and fourth cases (Case 2, Case 3 and Case 4) had container cross-sections shaped like trapezoids, with a variation in height from bottom to top. The bottom heights of Case 2, 3, and 4 were 50, 40 and 30 mm, whereas 90, 100 and 110 mm were the heights of the top side, respectively. In all cases, the tank length was fixed at 1000 mm. For the first configuration (Case 1), Figure 9 shows the PV surface temperature gradient along its length at different day times. After 14:50, most of the PCM melted, and the top part's temperature started increasing and reached its peak at 15:40. with a difference higher than 4 • C. Figure 9b,c show the temperature and the mass fraction contours of the PV/PCM system at 15:40. Figures 10a, 11a and 12a show the temperature gradients of PV temperature along with the trapezoid shape containers of Cases 2, 3, and 4, respectively. The PV surface temperature is almost constant in Cases 2 and 3, with a variation of less than 0.5 • C. However, Case 4 shows a considerable reduction in the surface temperature at the bottom side of the container. This container part was affected by the ambient temperature due to the thinness of the PCM layer. Figures 10b, 11b and 12b show the mass fraction contours of the PV/PCM system at 15:40. It can be seen that the solid part of the PCM in Cases 3 and 4 did not move to the bottom side of the container due to their high viscosity and the low container slope. 
Reducing the tank length leads to a lower temperature variation and vice versa. Considering both the PV module temperature and the movement of the PCM inside the container, Case 2 was the best configuration; however, this result is subjective to the PV tilt angle and the ambient temperature. The PV module efficiency at the operating temperature was calculated using Equation (8). Figure 13 shows the variation of the PV module efficiency at the operating temperature during the studied daytime for both the PV/PCM and the PV-only system. By comparing the two systems, unlike the conventional system, the PV/PCM system showed no significant variation in the PV efficiency during the daytime. The lowest melting temperature PCM (RT25) showed the highest PV module efficiency. The PV/PCM systems reached an efficiency increase of 10%, 13% and 17% at 13:00 when RT42, RT31, and RT25 were used, respectively, as shown in Figure 14. This considerable enhancement of the PV/PCM system efficiency resulted in a great increase in the hourly power output, as shown in Figure 15, where the power output of the 1 m 2 module's system is presented. Figure 16 shows the percentage enhancement of the power output of the PV/PCM system compared to the PV-only system. This enhancement reached around 9%, 11.5% and 14.6% at the maximum solar radiation when RT42, RT31, and RT25, respectively, were used. RT42 showed the lowest PV efficiency and power output enhancement compared with RT31 and RT25. However, the output power when using RT31 and RT25 as PCM showed a maximum increase of only 3% and 5.5%, respectively, compared with RT42, as shown in Figure 17. These results indicate that using PCM with a melting temperature higher than the average ambient temperature significantly reduces the PCM amount without a significant reduction in the total power output. The rectangular PCM container shows the most inhomogeneous temperature distribution, as seen in Figure 9a. This inhomogeneous temperature distribution leads to a mismatch loss, an outcome previously mentioned in Section 2. A PV module with 72 cells was used to estimate the mismatch losses for Case 1 at the daytime hour of 15:40. The module specification is shown in Table 4 with a landscape orientation. The cells array consists of 10 cells in each row and 6 in each column. The CFD simulation results were used to feed the mathematical model to calculate each row's output voltage and power and the whole module. The solar radiation was assumed to be 800 W/m 2 , and the results are shown in Table 5. The results show a mismatch loss fraction of 0.42%, which seems insignificant, but considering a large PV plant consisting of several modules, these losses will significantly contribute to reducing the power generation. Conclusions Researchers have already reported passive cooling systems for PV modules using phase change materials as a promising and effective cooling technique due to their higher energy density per unit volume and high heat transfer rates compared with air circulation. These systems does not require electricity consumption or moving parts and have a low maintenance cost. This work investigated the effects of different design parameters of PV/PCM systems, including PCM container shape, depth, length, insulation, and PCM type on the PV module surface temperature, efficiency, and power output. Three different PCMs were selected (RT42, RT31, and RT25), and experimental hourly solar radiation and ambient temperature data available in the literature were used. 
A CFD model using Ansys Fluent was developed to simulate the melting/solidification processes of the PCM and to predict the temperature variation and dynamics of the PV/PCM system during the daytime. The results showed that:

1. The CFD model demonstrated good agreement with the experimental work found in the literature, with a maximum temperature difference of less than 2 °C.
2. Insulation of the PCM container increases the required amount of PCM, regardless of the melting temperature of the PCM.
3. For the rectangular shape (Case 1), the optimum depth/height of the PCM container providing a sufficient amount of PCM to meet the cooling load during the daytime was 70 mm, 110 mm, and 120 mm when RT42, RT31, and RT25, respectively, were used without insulation. With insulation, the optimum depths/heights were 80 mm, 120 mm, and 125 mm, respectively.
4. PCMs with a lower melting temperature require a larger amount of PCM when there is no significant difference in the latent heat. Compared to RT42, RT31 and RT25 showed an increase in the required PCM amount of 56% and 72%, respectively.
5. Regarding the PCM container geometry, the trapezoid container configurations (Cases 2, 3, and 4) showed considerably better cooling performance due to their lower variation of PV temperature. This enhances the performance of PV systems by reducing mismatch losses.
6. For all investigated PCMs, the PV/PCM system showed a considerable enhancement of the PV module efficiency and maintained it at an almost constant level over the daytime. Compared with the PV-only system, the efficiency enhancement at peak times reached 10%, 13% and 17% when RT42, RT31, and RT25 were used, respectively.
7. PV/PCM systems showed a considerable power output enhancement; at the solar peak time, the power output increased by 9%, 11.5% and 14.6% when RT42, RT31, and RT25 were used, respectively, compared with the PV-only system.
8. Although RT42 showed the lowest efficiency and power enhancement, it showed a significant reduction in the amount of PCM by 36% and 14.6% compared with RT31 and RT25, respectively. Moreover, the power output from the RT31 and RT25 cases showed a maximum increase of 3% and 5.5%, respectively, compared with RT42, indicating that using a PCM with a melting temperature higher than the average ambient temperature will lead to a cost-effective system without a significant reduction in the power output.
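The efficiency and power figures in points 6 and 7 follow from the temperature dependence of module efficiency referred to as Equation (8) in the results. The exact correlation is not restated in this excerpt, so the sketch below assumes the commonly used linear form with a typical crystalline-silicon temperature coefficient; the reference efficiency, coefficient, and temperatures are illustrative values only, chosen to show that a 25-30 °C reduction in cell temperature yields a relative power gain of the same order as the enhancements reported above.

```python
# Hedged illustration only: a linear efficiency-temperature model of the kind
# usually denoted "Equation (8)" in PV/PCM studies. The reference efficiency,
# temperature coefficient, and temperatures below are assumed typical values,
# not the coefficients used in the paper.
ETA_REF = 0.17   # module efficiency at the reference temperature (assumed)
BETA = 0.0045    # relative efficiency loss per kelvin above T_REF (assumed)
T_REF = 25.0     # reference cell temperature in deg C

def pv_efficiency(t_cell: float) -> float:
    """eta(T) = eta_ref * (1 - beta * (T - T_ref))."""
    return ETA_REF * (1.0 - BETA * (t_cell - T_REF))

def power_output(t_cell: float, irradiance: float = 800.0, area_m2: float = 1.0) -> float:
    """Electrical output (W) of a module of the given area at the given irradiance (W/m^2)."""
    return pv_efficiency(t_cell) * irradiance * area_m2

t_pv_only, t_pv_pcm = 60.0, 32.0  # illustrative peak cell temperatures (PV-only vs PV/PCM)
gain = power_output(t_pv_pcm) / power_output(t_pv_only) - 1.0
print(f"Cooling the module by {t_pv_only - t_pv_pcm:.0f} K raises output by about {gain:.1%}")
# With these assumed values the gain is roughly 15%, the same order of magnitude
# as the 9-14.6% peak-time enhancements reported for RT42, RT31, and RT25.
```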
Early experiences of integrating an artificial intelligence-based diagnostic decision support system into radiology settings: a qualitative study

Objectives: Artificial intelligence (AI)-based clinical decision support systems to aid diagnosis are increasingly being developed and implemented, but with limited understanding of how such systems integrate with existing clinical work and organizational practices. We explored the early experiences of stakeholders using an AI-based imaging software tool, Veye Lung Nodules (VLN), aiding the detection, classification, and measurement of pulmonary nodules in computed tomography scans of the chest.

Materials and methods: We performed semistructured interviews and observations across early adopter deployment sites with clinicians, strategic decision-makers, suppliers, patients with long-term chest conditions, and academics with expertise in the use of diagnostic AI in radiology settings. We coded the data using the Technology, People, Organizations, and Macroenvironmental factors framework.

Results: We conducted 39 interviews. Clinicians reported VLN to be easy to use, with little disruption to the workflow. There were differences in patterns of use between expert and novice users, with experts critically evaluating system recommendations and actively compensating for system limitations to achieve more reliable performance. Patients also viewed the tool positively. There were contextual variations in tool performance and use between different hospital sites and different use cases. Implementation challenges included integration with existing information systems, data protection, and perceived issues surrounding wider and sustained adoption, including procurement costs.

Discussion: Tool performance was variable, affected by integration into workflows and divisions of labor and knowledge, as well as technical configuration and infrastructure.

Conclusion: The socio-organizational factors affecting performance of diagnostic AI are under-researched and require attention and further research.

Introduction

Artificial intelligence (AI) in healthcare involves applying machine learning techniques to identify and uncover patterns in multidimensional data to improve health outcomes and patient experience. 1 Empirical evidence suggests promise in helping improve early disease and adverse event detection. [2][5][6] Computerized clinical decision support systems (CDSSs) are software applications that provide healthcare professionals with real-time, evidence-based information and recommendations to aid in clinical decision-making. 7 CDSSs can be based on explicitly defined rules and medical knowledge or based on machine learning, leveraging advanced computational techniques to learn patterns from data and make decisions based on statistical inference. 8 CDSSs have shown significant potential in improving practitioner performance and patient outcomes, 9 but they have also been associated with issues surrounding alert fatigue and adverse impacts on work practices (eg, when hard stops cannot be overridden). 10,11 CDSS can be knowledge based or data driven. Knowledge-based CDSS relies on predefined rules, guidelines, and medical knowledge to provide recommendations or alerts to healthcare professionals. Data-driven CDSS uses machine learning algorithms to analyze large volumes of data and learn patterns to provide recommendations and predictions. [13][14][15][16][17]
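To make the distinction drawn in the previous paragraph concrete, the sketch below contrasts the two CDSS styles on a toy nodule-triage decision. It is purely illustrative: the rule threshold, feature weights, and feature names are invented for this example and are not taken from any guideline, from VLN, or from any other product.

```python
import math

# Toy contrast between the two CDSS styles described above. All thresholds,
# weights, and feature names are invented for illustration; they do not come
# from any clinical guideline or commercial product.

def knowledge_based_alert(nodule_diameter_mm: float) -> bool:
    """Knowledge-based CDSS: an explicit, human-authored rule (hypothetical cut-off)."""
    return nodule_diameter_mm >= 8.0

def data_driven_alert(features: dict) -> bool:
    """Data-driven CDSS: a score whose weights would be learned from labelled data."""
    weights = {"diameter_mm": 0.35, "volume_mm3": 0.004, "growth_rate": 1.2}  # illustrative
    bias = -4.0
    score = bias + sum(weights[name] * features.get(name, 0.0) for name in weights)
    probability = 1.0 / (1.0 + math.exp(-score))  # logistic link
    return probability >= 0.5

case = {"diameter_mm": 9.0, "volume_mm3": 310.0, "growth_rate": 0.4}
print("rule fires:", knowledge_based_alert(case["diameter_mm"]))
print("model fires:", data_driven_alert(case))
```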
AI in radiology is now the top specialty of approved medical AI applications according to the United States (US) Food and Drug Administration (FDA), with 70.3% of all AI devices developed in this area. 18 Radiology requires diagnosis and decision-making under uncertainty, and AI may help automate some of the labor-intensive tasks such as radiograph interpretation and reporting. 19,20 [22][23][24] However, the processes involved in integrating AI-based imaging systems within existing professional workflows and patient care pathways are still unclear. 8,25,26 [28][29] For example, radiologists and radiographers hold different views about the prospects of AI in their practice. A recent study has shown that radiologists were better informed about the emerging AI in their field than radiographers and had more positive attitudes toward the technology, whereas radiographers were concerned that AI might jeopardize their roles in the future. 27 Similarly, a large international survey about AI technology with over 1000 radiologists and radiology residents revealed that clinicians with limited experience with AI associated it with fear, in contrast to intermediate or advanced AI users, whose attitudes toward the technology were positive, including holding a belief that AI skills should be part of radiology training. 28 Studies with patients highlight their limited knowledge surrounding the use of AI in radiology and in their healthcare. 29 There are currently no studies on how AI used for lung imaging is implemented, adopted, and integrated into existing workflows. 30 Work in other areas highlights potential issues, such as low specificity resulting in a high volume of false positives and consequent requests for additional investigations (creating anxiety and in some cases resulting in unnecessary invasive procedures, such as biopsies). 31 We therefore aimed to explore early experiences of implementing and using an AI-based diagnostic decision support system in chest radiology settings from a variety of stakeholder perspectives, to understand how the technology was integrated within real-world socio-organizational contexts.

We studied an AI-based diagnostic decision support tool, Veye Lung Nodules (VLN) (Aidence/RadNet), which was implemented in United Kingdom (UK) hospital-based chest radiology settings (Box 1). The tool is considered AI-based because it utilizes machine learning algorithms. VLN is currently used in 40 National Health Service (NHS) hospitals to support lung cancer screening. VLN runs in the background of the Picture Archiving and Communications System (PACS), automatically processing all eligible studies, including the most recent prior, if available. Its results are delivered directly to the PACS, as part of the original diagnostic series, without the need for additional clicks. The results are available to anyone with PACS access, on or off-site.

Methods

Our work was part of a mixed-methods study exploring the implementation of VLN in multiple hospitals. Aidence (the developer) was awarded funds under the NHS AI Award program in 2020 to undertake a real-world evaluation of its software to generate evidence supporting a full health technology appraisal by the National Institute for Health and Care Excellence (NICE). This qualitative study was part of this evaluation. Other aspects included an assessment of clinical impact and health economic modeling.
We conducted a qualitative semistructured interview study with clinicians who used the software, organizational implementers, strategic decision-makers, suppliers, patients with long-term chest conditions, and experts in the field, to obtain a holistic view on how the system was perceived and integrated within healthcare settings. Respondents were sampled because of their knowledge of VLN and their experience in using or evaluating the system.

Ethics

We obtained ethical approval from the School of Social and Political Science at the University of Edinburgh. Participants were provided with a consent form and an information sheet describing the study aims, procedures, and data management practices before participating in the study. Participants were allowed at least 48 hours to consider whether they agreed to participate and provided written informed consent. The participants were informed that they were free to withdraw at any time and that their responses were not identifiable (ie, all personally identifiable information was removed from interview recordings and transcripts were anonymized).

Sampling and recruitment

This study was conducted between February and December 2022 in 5 hospitals implementing VLN for screening incidental CT scans for lung nodules. We also interviewed patients with respiratory conditions and implementers, including respondents from outside these hospital settings who could provide us with insights surrounding the implementation and adoption of AI in radiology. We liaised throughout the study with the project manager of the software company, who provided contact details of the local hospital implementation leads. We began recruitment with these individuals and identified further participants by asking for recommendations of others who had an interest in or experience of AI-based radiology imaging. Our aim was to sample for maximum variation in terms of demographics, experience, and expertise.

Box 1: Description of the tool

Veye Lung Nodules (VLN) is an integrated machine-learning-based imaging software tool that aids in the detection, classification, and (size and volume) measurement of pulmonary nodules in CT scans of the chest. VLN is a Communauté Européenne (CE) certified second or concurrent reader, suitable for routine clinical practice and lung cancer screening and fully compliant with the new EU Medical Device Regulation (classified as a class IIb device). Results are automatically available during reporting for all eligible scans. It is currently used in NHS and EU centers to support lung cancer screening. Use cases include the detection and monitoring of pulmonary nodules in lung cancer screening (where the primary aim is to identify lung nodules in an at-risk population) and routine hospital practice (where nodule detection is often incidental to the primary purpose for which the CT chest scan was ordered). In the former, the scan is conducted for patients previously identified with nodules, whilst in the latter, the scan is conducted for other chest conditions (eg, heart, trauma) and patients are asymptomatic for lung cancer but may have multiple other symptoms. The integration with workflows is the same across use cases. Results are transmitted to the PACS and can be seen when examining the scan on the reader. The target user group includes radiologists and radiographers with varying levels of expertise.
Patients were recruited via gatekeepers at charities for patients with respiratory conditions, including Asthma and Lung UK and the Roy Castle Foundation in the United Kingdom. We included a wide range of patients from various demographics, although none had experience or knowledge of VLN. Wider stakeholders were sampled via our professional networks and included academics who had researched AI radiology imaging systems, as well as system developers of other AI systems in radiology. To attract radiologists, we offered to reimburse them for taking part in interviews.

Data collection

The research team (KC, RW, NF, SH) had a background in qualitative research in healthcare and developed the interview guides in collaboration with a consultant radiologist from the study team (RR). The topic guides for clinicians, experts, implementers, and patients (Table 1) were amended in discussion with the research team as interviews progressed to include themes that emerged during interviews. Key lines of inquiry for all groups included their level of knowledge, understanding, and experience of AI tools in healthcare. For clinicians, strategic decision-makers, suppliers, and academics, we were also interested in implementation and adoption experiences, perceived impact on care provision and organizational functioning, and any concerns and challenges experienced. Interviews were conducted by two researchers (NF and SH) via Microsoft Teams or in person. Interviews were audio-recorded and transcribed verbatim.

Data analysis

Transcripts were anonymized, numbered, and coded by two researchers (SH and NF) using NVivo (QSR International, v.12). We used thematic analysis to identify common patterns (themes) across transcripts. 32 Codes were applied both inductively and deductively (the latter conceptualized prior to analysis), drawing on the Technology, People, Organizations, and Macroenvironmental factors framework. 33 We also held two analysis workshops, where we explored tensions and trade-offs in the data by presenting emerging findings to the wider research group. This resulted in minor modifications to the narrative, mainly relating to provision of additional detail in relation to clinical workflows and software functionality. Themes were checked by two researchers (KC and NF) in an iterative process to reach an agreement on the Results narrative.

Results

We interviewed 39 people (see Table S1), with each interview lasting between 15 and 76 minutes. Twenty-two interviews were conducted with clinicians. These included 11 consultant radiologists, five project managers or specialists in clinical imaging information systems, three trainee radiologists, two radiographers, and one chief clinical information officer. Twelve interviews involved patients with long-term chest conditions, such as lung cancers, asthma, and chronic obstructive pulmonary disease. All patients had undergone chest CT scans and received care for their conditions, some for over a decade. Five interviews included other experts in the field, such as academic researchers with a background in radiology AI. We contacted all eight implementing sites, but 27 clinicians did not reply to the invitation for an interview. They were followed up once.
Detailed participant characteristics are provided in Table S1. All except two clinicians had been using VLN regularly in their work as part of screening programs. During the time of data collection, sites had used VLN for varying lengths of time, ranging from six months to over five years. Accordingly, users reported varying levels of experience. In some clinics, all radiologists and radiographers used VLN, in some clinics both radiographers and radiologists used the tool, and in some, the interviewed clinician was the only user of the tool. Twenty-two interviewees were male and 17 were female. Clinical users came from 10 different sites. Wider stakeholders were located in The Netherlands and in Belgium.

The findings are summarized in Table 2. The following paragraphs describe each theme and subtheme in detail and provide supportive quotes from the data.

Theme 1: Perceived drivers and benefits

Anticipated and early experienced benefits

Overall, VLN was seen to be usable, as it integrated directly with the PACS system and required little additional effort by users to view results. It was therefore readily adopted.

"The good thing about VLN is that it's incorporated into our PACS system or any program that we would have used anyway for reporting. In terms of the workflow, it either wasn't disturbed, or it was very minimally disturbed for the radiologist, it's not like you had to go to a different room or a different computer, you know, you might just have to change to a different screen." (Consultant radiologist, site B)

Participants reported that VLN made the process of interpreting images faster and provided details that were not easily perceived with the human eye (eg, nodule volume). This in turn impacted on clinicians' confidence in making a diagnosis.

"I thought that it would make us quicker and more efficient and it has. Without a doubt, it's very good at picking up little nodules that would be difficult to pick up with the naked eye, and, therefore, it does make the process much easier and quicker [...]"
(Consultant radiologist, site F)

Radiologists described using VLN as what they referred to as a "second reader," increasing confidence in their clinical decisions over time and reducing anxiety that they may have missed a nodule. This confidence in the technology led to increased perceived efficiency, as radiologists could now focus their time and attention on complex cases:

"It's like having a second eye for the radiologist. We all miss things, we're human beings, but having sort of a second pair of eyes, a computer program scanning the scan and picking up a nodule that you may potentially have missed is definitely an extra reassurance for us, but obviously better for patients as well." (Consultant cardio-thoracic radiologist, site D)

Time was a factor in the uptake and integration of the technology in everyday workflows. Clinicians with longer experience with the tool were more confident and familiar with its use, indicating that it took time to learn about the performance of the tool and establish how it might be reliably used and integrated into their everyday workflows and practices. The initial period of familiarization was very short (no more than three months). VLN was designed for easy adoption by aligning with existing workflows. System maintenance was provided by the developer for each site. This also included a service for queries, and local configuration, including adjustment of sensitivity and specificity, which clinicians found valuable because it allowed them to set parameters for their specific patient population.

"[...] anecdotally, people have said that it has improved efficiency, so as they've become more confident using the technology and realized that it is functioning very well. They will not necessarily do their usual extremely in-depth checks because they'll know the system will do it [...] there is a lot more detail provided by the software on the types of nodules, the size and volumes and changes over time, than we would have been able to do previously." (Consultant interventional radiologist, site C)

Radiologists found VLN to be superior to the earlier computer-aided detection (CAD) systems (eg, embedded in scanners), which were perceived to be cumbersome and consequently often not used. Importantly, VLN automatically calculated some information, such as nodule volume, that previously had to be manually measured and laboriously estimated. This resulted in faster and more accurate assessment of the size and growth of nodules. VLN also included tools for generating reports, which was seen as advantageous and time-saving. Implementers felt it would improve accuracy of detection, and some suggested that the technology could help to reduce compensation claims for missed lung nodules (though there was no direct evidence for this).
"I think from our [hospital], we are very much of the opinion that given the number of serious incidents that have occurred because of missed lung nodules and stuff, they would happily invest in the technology as a way to reduce that risk because paying out half a million pounds because of a missed nodule and the harm done to a patient eventually, by missing a nodule, having cancer and things, it's considerably cheaper and more sensible to just pay for a product like this, that can help, even if it's not 100% accurate." (Consultant radiologist, site F)

Patients we interviewed also anticipated benefits from AI technology. Their general attitudes were very positive, with most patients stating that they had understood the purpose of AI, and more so when the researchers explained it to them. Patients were generally positive about AI in the hands of radiologists because they trusted that radiologists were using the technology appropriately. They also trusted healthcare organizations to procure systems appropriately. Several patients mentioned that they would choose a hospital where AI was used to inform clinical decisions if they had this knowledge and choice.

"I, to a certain extent, yes, I do trust the consultant. And I'm sure he wouldn't suggest it unless he thought it was something helpful for me. I mean, they wouldn't waste the resource as the NHS is stretched to a breaking point. I don't think they would be using that kind of diagnostic tools unless they felt it was something that would benefit the patient or contribute to research." (Female, 80-90, asthma and bronchiectasis, England; did not have an AI scan)

Benefits vary with differing usage, skills, and workflows

The benefits of VLN were seen to depend to some extent on the workflows and division of labor through which the tool was adopted. Having VLN functionality was perceived to help in getting all the information required and potentially saved time and reduced unnecessary follow-ups.

"Some radiologists don't do volumes so, of course, they'll just say, follow-up, when, actually, they probably don't need a follow-up and then we end up discussing at an MDT [multi-disciplinary team] and then we'll do a volume, then discharge them. There's another extra step that is probably not needed. From that point of view [...] it depends on the clinical confidence of the reporting body radiologist, really, but it could, potentially, save a lot of patients being referred into our service." (Radiographer, site D)

Radiologists within a cardio-thoracic specialty reported using VLN differently than general radiologists, collecting more detailed information, such as the dimensions of the nodules, instead of just detecting the presence of nodules.

"Someone who's not necessarily a cardiothoracic radiologist, may just say there is a nodule, follow the guidelines. Whereas a cardiothoracic radiologist is more likely to say, the nodule is there, and it's this volume, the British Thoracic Society (BTS) guidelines advise you to do X, Y and Z, and allow you to give a bespoke follow-up suggestion." (Consultant cardio-thoracic radiologist, site D)

We also found differences in use varying with experience. For example, more experienced radiologists were more confident in making clinical decisions without VLN. They also felt better able to discount probably erroneous instances picked up by VLN and were concerned that less experienced users would rely on the machine's judgments (which may in turn lead to unnecessary follow-up).
And I was thinking that for people who are less familiar with BTS guidelines and nodules, if it picks up multiple nodules that don't necessarily need to be followed-up, but they're less familiar with it, they might then put the patient into a follow-up program.And the patient is going to be recalled for further CT scans that they might not definitely need.(Consultant chest radiologist, site B) Theme 2: Design of the tool and integration With use and experience, radiologists' confidence in VLN grew.However, there was also an awareness that radiologists should not be entirely reliant on or overconfident in the results of AI.They reported getting used to double-checking the results of the system.This took up some time but much less time than scanning the image without the use of VLN. So I'm confident that the system will pick up basically everything that looks like a nodule, that smells like a nodule, even if it's not and where I think, yes I'm not really convinced about that, then I'll look at the blind images but yes, it's reduced that time, in that respect and I no longer do all the really in-depth checks that I would have done previously with manipulating the images to make nodules look more obvious on the system because I know they will pick them up, so I will just look at that and just go, "yes fine".(Consultant interventional radiologist, site C) Understanding and compensating for system limitations Radiologists followed specific guidelines and internal audits of quality assessments using standard datasets on which they periodically assessed themselves.They were able to use the same procedures to assess the performance of the AI tool. Although the internal operation of VLN was not necessarily understood, its outputs were scrutinized in forensic detail.This in turn allowed users to rely selectively and appropriately on tool alerts.Some radiologists had specifically recognized parts of the chest and parts of the image where the tool may not produce accurate readings.One example was an area in the lungs (in the central midline portion of the thoracic cavity) where, due to the presence of blood vessels, the AI may have "blind spots" and not produce precise results. It doesn't cope very well with identifying masses that are in the area of the lungs where the blood vessels interface with the head and the mediastinum.That's an area where it can be a bit of a blind spot even for big lesions for the software. Then the other area that it struggles with sometimes is lesions that are in the airway itself, so central airways.I think knowing that means that a human will specifically review those areas very carefully to make sure there's nothing in those areas because we know that's an area of potential blind spot or weakness for the AI.(Consultant radiologist, site F) Over time, users learned how to work around the limitations of the system, identifying which areas produced erroneous results and which parts were reliable.Where these factors were likely to produce false negatives or false positives, they were therefore equipped to dismiss these. [. ..] 
sometimes the AI software would draw round a nodule, but it might also draw round something that wasn't a nodule, like a vessel, or like a benign pleural plaque or something like that.And we would sort of call those false positives.But we would just ignore that, it didn't sort of take up lots of our time.(Consultant radiologist, visiting professor, Belgium) Although experienced clinicians felt able to make a critical assessment of the output of VLN, this did not extend to patients.However, patients with good rapport with their clinicians stated that they would trust a report if it was produced by AI.A few patients mentioned that being shown the report from VLN would be useful. The first high-resolution CT scan I had; I saw the consultant not long after that.In fact, she was really lovely.I was with her for quite a few years.But she's now moved on.So, I'm now with different consultants.And she was very good.She showed me the scan on the screen, and she explained what had been going on.(Female, 70-80, lung cancer, England; did not have an AI scan) A few clinicians also reported that for patients who had many scans, VLN could only compare the current scan to the last (prior) scan.This in turn limited the ability to trace volume changes over an extended period. [. ..] that ability to volume track, historically, over a range of scans, rather than just one scan, I think is something which would really lift the software to another level, and actually make it really useful.(Specialty registrar interventional radiologist, site J) Integration with existing health information infrastructures delayed VLN rollout VLN was designed to integrate with PACS to not interrupt the existing workflows of radiologists.We did, however, observe some implementation challenges relating to Information Governance and integration with local information systems (including PACS systems and electronic health records).This created teething problems such as delays with the planned rollout where the software developer was dependent on complementary product suppliers (eg, PACS, medical imaging cloud solutions, available IT engineers on-site) to resolve these issues. There's some communication issue between the [.Technical and imaging teams also noted that connection issues often delayed the implementation and full integration of the system into care provision.Theme 3: Appropriation of the tool by expert labor The evolving role of radiologists There was an overwhelming sense amongst interviewees that AI was changing care provision in radiology for the better. Although models for how VLN should be incorporated into the division of labor and workflows were still emerging, there was a general view that the role of radiologists would evolve positively with AI. 34 AI won't replace radiologists, but radiologists who use AIenabled tools will replace radiologists who don't.And that's probably the way I see it from my standpoint, I see that AI is an incredibly valuable tool for radiologists to use.And that's why I think we should be embracing these tools in our day-to-day practice.I think it's anything that makes you safer, and secondarily faster, should be welcomed.(Consultant radiologist, site F) However, some radiologists expressed concern about their role becoming undervalued with the evolving use of AI or changing public opinions perceiving them as "barely doing anything" (the notion also made famously by G.E. 
Hinton 35 ).Nevertheless, there was also an insistence that radiology is a profession that requires a great deal of experience and skill. In the past, when I was applying for radiology, like, five years ago now, the consultant that was helping me with my application said, "Oh, you definitely need to do interventional radiology, because that gives you practical skills and the AI can't take that over".But he says that, "otherwise, your job's going to not be there".I think I do worry that [. ..] it would maybe degrade the opinion of the public or the people that pay us.(Specialty trainee registrar, site I) AI may help to upskill some staff The impact of the tool varied according to the skill of the user and their role in the division of labor.Although there was no sense that AI would replace radiologists, some mentioned that it may help to upskill some staff.There was a recognition from implementers, experts, and experienced clinicians that the tool output would make interpretation by a less experienced clinician easier and more precise.Here, the use of AI was seen to "democratize" imaging knowledge. One of the key principles of using radiology AI is that it democratizes knowledge.So, you go from needing a highly pressurized expert in a very special part of radiology interpreting scans.You essentially have an AI assist, which means that people with less experience and less specialist knowledge can derive the same answers.So, my hope and expectation, I would say, is that anybody who [has good knowledge of] using PACS and a basic IT system can use, interact with, and gain benefit from using AI.(Consultant oncology radiologist, site G) Theme 4: Clinical governance, quality assurance, maintenance, and post-market surveillance Governance, surveillance, quality assurance, and maintenance had a significant influence on adoption and procurement decisions.Participants were aware that actual performance in the field might vary as the tool moved from lab to field and from site to site.Radiologists were also aware of the lack of empirical evidence for AI-based applications in healthcare settings and had initial reservations about the system's reliability. I know there are shortcomings surrounding, obviously clinical utility, based on the lack of evidence and actually. ..when these algorithms work and they're trained on the machine learning platforms with perfect 25-yearold chest x-rays and CTs in people that are completely normal but actually when you put it into the real world and you're scanning 87-year-olds who are full to the brim of fluid and breathless, and does it still work, but actually do you see efficacy completely tail off and things?(Consultant radiologist, site B) Some clinicians wondered how regulatory systems, such as the Medicines and Healthcare Products Regulatory Agency (MHRA), Care Quality Commission, NICE, the European Union Legal Framework, or the FDA would respond to the evolving nature of AI in healthcare settings in the future.Careful institutional and professional clinical governance by hospital organizations and staff with clinical responsibility for interpreting CTs complemented national regulation.Clinicians consistently emphasized that the responsibility for the final clinical decision lay with them. 
I mean, you know it can only do relatively binary tasks at the moment, and those tasks are generally tasks that help radiologists, so I think there will be a way to go before it could report a whole CT scan, bespoke to the clinical information and the clinical referrer, and go through that sort of multi-faceted thought process.(Consultant cardiothoracic radiologist, site D) We further observed that in some instances organizations struggled to establish a business case for VLN.VLN was funded on a fee-per-scan basis, although sites were not being charged for scans during the trial.Organizations were unsure whether they would be able to develop a business case to justify continued use of VLN when free access to the technology supported by the trial ended.Senior managers noted that VLN competed with ongoing costs of other existing digital projects.They were aware of the high costs of procuring, validating, implementing, and optimizing stand-alone AI solutions focusing on one specific diagnostic application and looked for broader applications of AI in relation to lung cancers or lung diseases in general. So, this is a really complex question to answer and, essentially, if we were to follow NICE guidelines, we basically have to show a health economic benefit within 12 months.And my feeling is that we may not demonstrate NICE's gold standard of health economic benefits in 12 months, particularly given how expensive it is to get everything into one place.I suspect that maybe if we were to use longitudinal studies and observe this data for a bit longer, I suspect there will be financial benefits.(Consultant radiologist, site F) Sustainability and scale-up Failure to attend to environmental and organizational factors may impede acceptance and threaten the longer-term sustainability and scale-up of systems.These need to be considered during development and implementation.Clinicians, researchers, and implementers were aware of the potential technical and organizational challenges of scale-up across different sites.Software developers and local teams invested significant efforts in implementation.Technical teams (both within the hospital and outside) commented on their role as an intermediary between the software developer and hospital managers.Each site required bespoke configuration (eg, in relation to workflows).In some instances, there were also compatibility issues with installed PACS solutions, as described earlier.Many of these challenges became visible only post-implementation and implementation teams (and third-party suppliers) had not always made sufficient resources available for these activities.In one site, a configuration problem led to impaired performance triggered by a system upgrade.This raised issues about the ongoing management of AI tool configuration.However, there was at that stage no sharing of information between hospitals about tool performance statistics (eg, number of false positives or false negatives) or implementation issues.This was partly due to information governance restrictions in implementing sites. 
Summary of findings Our work showed that VLN was perceived as usable and useful by clinical users as a decision-support tool and as a "second reader."There were some differences in use between expert and novice clinicians in that experienced radiologists rapidly became confident in using the tool in an efficient and reliable way, discounting probably erroneous instances, though noting that less experienced users might lack the skills and confidence to make these judgments.We also found a general view that the role of radiologists would evolve positively with AI and might facilitate re-skilling. Based on the trust they had in their clinicians, patients also viewed VLN positively.The tool was designed to integrate within existing workflows and was readily adopted.Users became proficient over time as they learned the strengths and limitations of system performance.Detailed knowledge of the performance of the tool allowed them to rely selectively and appropriately on tool alerts, enabling responsible and dependable use. Our work further highlighted contextual variations in tool performance and use between different hospital sites and different use cases and workflows depending on specialty and experience.We also showed how AI tools need to be integrated within complex existing infrastructures.This was not always easy (and integration with PACS systems was one of the key perceived issues associated with system usability).Providers highlighted the need to attend to ongoing quality assurance and maintenance. Organizations were concerned that the initial and ongoing costs surrounding tool procurement, implementation, maintenance, and information governance might present challenges for establishing a business model of adoption and sustained use of these systems unless effective systems for handling these issues were established. Strengths and limitations We explored the views of a wide range of stakeholders including specialist chest clinicians, patients, and other implementers working in radiology settings across the UK to gain highlevel insights into the adoption and implementation of diagnostic AI in healthcare settings. 
However, there are also some limitations.Firstly, some interview data were obtained from chest radiology specialists most of whom had experience of using VLN.A broader range of different types of users with various levels of experience with VLN may have provided more nuanced insights into different use cases (eg, between screening or routine care; general radiology or specialist lung cancer radiology centers).We also struggled to secure access to a large number and a wide range of organizational stakeholders as these were managing challenging workloads.Secondly, technical deployment issues in several hospitals participating in this study affected the progress of the trial and impacted on recruitment of participants.Nevertheless, we have provided an overview of issues that need to be considered when implementing, adopting, scaling, and maintaining diagnostic medical AI.Thirdly, the evaluation of VLN was largely focused on routine hospital practice where nodule detection is often incidental, but our respondents frequently drew on their experiences from lung cancer screening, where the focus is to identify lung nodules in an atrisk population.More detailed work is needed to characterize how the tool is integrated into different care pathways and shaped by different practices, workflow, divisions of labor and skills.Fourthly, it was difficult to gather the views of patients about VLN, as they had no direct experience with the technology.As a result, their views were relatively generic.The quantitative study of VLN implementation is still ongoing and this qualitative evaluation did not collect quantitative data about the performance of the tool.We also did not obtain any cost-effectiveness data, which would help to inform organizational procurement decisions.We did not know at the time of write-up if sites would keep using the system after the free trial had ended.These areas are the subject of ongoing work. The sites in the study also received high levels of support from the software developer during the implementation of VLN, including the provision of training, integration with existing systems, and governance processes.Sites had extensive contact with a dedicated project manager, who logged and fed back their concerns.This extent of assistance offered by the software developer is unlikely to be sustainable in future implementations. Lastly, collaboration with a software developer may be viewed as a potential conflict of interest.However, the research team remained independent throughout the study as an external evaluator.The software developer did not influence the views of the research team or the study findings. Integration of the findings with the current literature Building on the literature surrounding complex health information infrastructures, there is no agreed method for successfully implementing diagnostic AI in radiology across different settings (ie, what may work in one setting may not work in another). 36,37Some of the emerging issues echo the relatively well-established evidence base in knowledge-based CDSS.For example, previous work has highlighted the importance of effective integration with workflows in order to minimize risks associated with alert fatigue.Mitigating factors have been found to include nonintrusive alert presentation and interface design. 
38,39 This is echoed in our work, where the integration with the PACS system meant that the interface was perceived to be nonintrusive and usable. Similarly, understanding and compensating for system limitations, as well as effective integration with existing health information infrastructures, have been found to be crucial factors in the implementation and adoption of CDSS. 9,40 However, our work has shown that there are several distinct issues with AI-CDSS sustainability: (1) costs of standalone procurement and implementation of specific solutions; (2) scale-up and variations in performance across different sites with different demographic, technological, and organizational features; and (3) extension of the scope of AI solutions.

Almost all current advances in the field of AI fall under a narrow AI category, where AI is trained for one task only (eg, specific image recognition tasks, such as nodule detection on chest CT or hemorrhage on brain magnetic resonance imaging 5). However, our work has shown that the contingencies surrounding point solutions may not fit within organizational business cases and procurement strategies, both in relation to implementation and ongoing maintenance.

We have shown how AI is currently being used responsibly and selectively by highly expert users, able to assess machine strengths and weaknesses. [42][43] AI performance also needs to be subjected to ongoing scrutiny, and there is a risk of degradation over time. As a result, even if an AI system works well in one organizational setting, this performance cannot be presumed to continue when use is extended to other organizations with different characteristics. Implementation of AI in a hospital setting is likely to involve changing workflow and clinical practices. 44 Although these technologies may have become "domesticated" in some settings and workflows, this does not mean that they will easily be assimilated in others. 45 Previous studies on diagnostic AI have not taken these contextual factors into account and have therefore not been able to consider an extension of the scale and scope of existing functionality. 46 Our work suggests that this may, for example, involve exploring different use cases for more- and less-expert users of these systems (eg, as decision aids). Usage of these tools is liable to evolve. There is also ongoing discussion around the circumstances when AI is a decision-support tool, when it becomes a decision-making tool, and to what degree a human being needs to be kept "in the loop". 1,47,48 This will accentuate ongoing accountability concerns around who takes ultimate responsibility for patient safety issues: the clinician or the AI provider. We believe there is therefore a pressing need for more detailed studies of human-AI interaction.

Implications for policy and practice

There are several recommendations emerging from our work. Most importantly, clinicians felt that they were ultimately responsible for clinical decision-making and used VLN as an assistive tool. We also learned that clinicians quickly came to understand the performance and shortcomings of the device and how to compensate for these. This reinforces work suggesting that we need to conceptualize AI-based systems in healthcare as assistive tools rather than autonomous decision-making entities. 49 It also highlights the need to address (and educate users on) the strengths and limitations of systems in order for them to be able to develop ways to compensate for these.
In addition, we have shown that contextual factors impact the implementation and use of diagnostic AI-based tools. These, therefore, need to be considered throughout the design, procurement, implementation, and adoption process. There is, for example, a need to understand how AI-based tools may be included in existing care pathways (and related research on human-AI interaction and how this varies across different workflows and divisions of labor), how AI may be used to upskill a variety of stakeholders, and what unintended consequences such tools may have that may threaten their acceptance and sustainability.

The design principles and regulatory aspects of computer-based tools used in healthcare, including AI, are changing fast. 36,50,51 Our study highlights the need for continued scrutiny of tool performance, which may call for new post-market surveillance approaches. At this stage, however, we have little understanding of how this may be achieved or who might sustainably deliver it.

Finally, we identified three types of governance processes in this study: (1) risk governance by regulatory bodies such as the MHRA; (2) clinical governance by adopting hospitals; and (3) professional governance by the clinical experts involved. At this point, VLN is being deployed subject to detailed professional scrutiny, so the clinical user takes ultimate responsibility. 52 The implementation and use of the tool are currently being conducted in a reflective, thoughtful, and responsible manner, but it is not clear that this level of scrutiny will be sustained as the technology scales and extends in scope across medical fields and into different health service settings.

Although regulatory aspects of the work may only be transferable to a certain degree to other countries, as regulatory frameworks vary, the regulatory challenges posed by this technology are likely to be similar. The majority of our findings are therefore likely to be transferable to contexts outside the UK.

Conclusion

Our findings highlight that VLN use is coevolving, as the tool is cautiously and responsibly exploited by skilled professionals learning how they may appropriately utilize AI strengths and compensate for its weaknesses. There is a need to develop clear models for how VLN should be incorporated into the division of labor and workflows in the future. In addition, our work has shown that despite high levels of clinical acceptability and usability, failure to attend to environmental and organizational requirements (including procurement costs) may threaten the longer-term sustainability and scale-up of the system.

Table 1. Topic guide for clinicians and implementers, and patients.

- Do you understand the reports your physicians give you about your results, including AI results?
- Do you ever ask any questions about AI tools your physician may be using to diagnose you?

Section 2: Patient knowledge, understanding and attitudes toward AI
1. What are your general attitudes toward the use of AI in healthcare?
2. What is your understanding of the use of different AI tools and how they aid the diagnosis? (Including any benefits or drawbacks)
3. In your experience, to what extent are you as a patient informed about the way AI helps physicians/clinicians make a diagnosis?
4. (Explain the tool if needed) Now that you know more about AI tools, would you specifically request to have a chest scan that would include AI tools?
5. Any other comments regarding AI and your chest condition?

Table 2. Summary of main themes.

1. Perceived drivers and benefits: anticipated and early experienced benefits; benefits vary with differing usage, skills, and workflows.
2. Design of the tool and integration: understanding and compensating for system limitations; integration with existing health information infrastructures delayed VLN rollout.
3. Appropriation of the tool by expert labor: the evolving role of radiologists; AI may help to upskill some staff.
4. Clinical governance, quality assurance, maintenance, and post-market surveillance: governance and professional scrutiny; post-market surveillance, ongoing quality assurance, and maintenance; sustainability and scale-up.

Illustrative quotes:

"So previously [...], you would have had to put up a different icon on your desktop, type in your patient's name or hospital number, find their study, open their study in the specific workflow package, and then click, run CAD [computer-aided detection], and then review the output. Obviously, what VLN does is - it mostly generates a labeled additional DICOM [Digital Imaging and Communications in Medicine] image at the point that you open up the study, right, and then it's all there. So, when VLN nodule analysis is done, it massively cuts down the time [...] And then when it is there, the volume is available." (Consultant radiologist, site A)

"Yeah, so I've reviewed the lungs... in my normal way, so I'd usually review both... and then go through the VLN tool on both as well. And then I would check the nodule software on the PACS system, to see if I've missed anything essentially, and then re-review the imaging and correlate those findings if it came up with something I hadn't seen." (Consultant radiologist, site B)

"[...] this is a bit of a weakness in this process." (Head of clinical imaging systems, site A)

There were also some concerns about cybersecurity and data storage requirements, including associated costs.

"It works automatically for 95% of the occasion... provided the worklists are up and running, [...]. If we get an outage in which these things are done by 4G, so as with your mobile phone, dependent upon which site they go to, sometimes the signals are not as strong as in other sites. So, [...] working on that now really." (IT Project Lead, former PACS manager, site G)

"Yeah, there's funding for the first 12 months from [name of funder]. So, I'm unsure [...] I think there was hope that it might continue to be funded. But I don't know what the ongoing costs would be after, I think it's September time. But that would be up to the clinical team to do their investment appraisal and everything else. So, they should be
Accumulation of Lipid Droplets in a Novel Bietti Crystalline Dystrophy Zebrafish Model With Impaired PPARα Pathway Purpose Bietti crystalline dystrophy (BCD) is a progressive retinal degenerative disease primarily characterized by numerous crystal-like deposits and degeneration of retinal pigment epithelium (RPE) and photoreceptor cells. CYP4V2 (cytochrome P450 family 4 subfamily V member 2) is currently the only disease-causing gene for BCD. We aimed to generate a zebrafish model to explore the functional role of CYP4V2 in the development of BCD and identify potential therapeutic targets for future studies. Methods The cyp4v7 and cyp4v8 (homologous genes of CYP4V2) knockout zebrafish lines were generated by CRISPR/Cas9 technology. The morphology of photoreceptor and RPE cells and the accumulation of lipid droplets in RPE cells were investigated at a series of different developmental stages through histological analysis, immunofluorescence, and lipid staining. Transcriptome analysis was performed to investigate the changes in gene expression of RPE cells during the progression of BCD. Results Progressive retinal degeneration including RPE atrophy and photoreceptor loss was observed in the mutant zebrafish as early as seven months after fertilization. We also observed the excessive accumulation of lipid droplets in RPE cells from three months after fertilization, which preceded the retinal degeneration by several months. Transcriptome analysis suggested that multiple metabolism pathways, especially the lipid metabolism pathways, were significantly changed in RPE cells. The down-regulation of the peroxisome proliferator-activated receptor α (PPARα) pathway was further confirmed in the mutant zebrafish and CYP4V2-knockdown human RPE-1 cells. Conclusions Our work established an animal model that recapitulates the symptoms of BCD patients and revealed that abnormal lipid metabolism in RPE cells, probably caused by dysregulation of the PPARα pathway, might be the main and direct consequence of CYP4V2 deficiency. These findings will deepen our understanding of the pathogenesis of BCD and provide potential therapeutic approaches. B ietti crystalline dystrophy (BCD) is an inherited progressive retinal degeneration disease that was first described in 1937 by Bietti. 1 BCD appears to be more common in East Asia, especially in Chinese and Japanese people. [2][3][4][5][6] The frequency of pathogenic alleles of BCD has been estimated to be 1:67000. 7,8 BCD patients often show clinical symptoms similar to those of retinitis pigmentosa (RP), including night blindness, progressive loss of visual field, vision decline, and eventually total blindness. 8 The numerous small crystal-like deposits in the fundus are the most obvious feature of BCD patients. Optical coherence tomography imaging studies of BCD patients showed that the crystals were predominantly located in the retinal pigment epithelium (RPE) layer. [9][10][11][12] The above clinical findings indicate that RPE may be the cell type that is predominantly damaged in BCD. Currently, there is no effective treatment for BCD. Until now, mutations of CYP4V2 (cytochrome P450 family 4 subfamily V member 2) are the only known genetic cause of BCD. Three mutations of CYP4V2, including c.802_810del17insGC, c.992A>C, and c.1091-2A>G, account for more than 80% of the mutant alleles identified in BCD. 6,[13][14][15] CYP4V2 is a member of the cytochrome P450 (CYP) 4 family of enzymes, which catalyzed the ωhydroxylation of fatty acid. 
In vitro experiments have shown that recombinant CYP4V2 can selectively hydroxylate long-chain and medium-long-chain saturated and unsaturated fatty acids in the presence of NADPH. 18 Lai et al. 19 have reported a higher concentration of octadecanoic acid [18:0] and lower concentrations of octadecenoic acid [18:1n-9] and overall monounsaturated fatty acids in serum samples of 16 Chinese BCD patients. In the Cyp4v3 knockout mouse model, changes in serum fatty acid composition were also observed, but in the opposite direction. 20 More importantly, the exclusively ocular phenotypes found in BCD patients and the BCD mouse models suggest that the CYP4V2 gene may play an essential role in these types of cells, especially the RPE cells. Recently, Hata et al. 21 have successfully generated BCD patient-specific RPE cells by induced pluripotent stem cell (iPSC) technology and revealed the accumulation of free cholesterol and the impairment of autophagy flux in these BCD-affected RPE cells. However, because of the difficulties in accessing RPE cells from BCD patients or appropriate animal models of BCD, their findings have not been verified in vivo. Therefore, it is particularly important to establish appropriate animal models that could recapitulate the lipid metabolism defects and retinal degeneration phenotypes and so facilitate molecular mechanism studies of BCD.

In the present study, we generated a BCD animal model by knocking out the homologous genes of CYP4V2 in zebrafish through CRISPR/Cas9 technology. The accumulation of lipid droplets in RPE cells and the progressive degeneration of RPE and photoreceptor cells were observed. Furthermore, we performed transcriptome analysis on the isolated RPE cells to investigate the potential pathways mediated by CYP4V2 and to identify the molecular mechanism underlying the abnormal lipid metabolism and the onset and progression of BCD.

Zebrafish Husbandry
Zebrafish were cultured in a circulated water system at 28.5°C and in a daily cycle of 14-hour light and 10-hour dark. The study was approved by the Ethics Committee of Huazhong University of Science and Technology.

Cryo-sectioning and Hematoxylin and Eosin (H&E) Staining
Whole zebrafish eyes were dissected and fixed in 4% paraformaldehyde. Then, the eyes were dehydrated in 30% sucrose and embedded in optimal cutting temperature (OCT) compound. Retinal sections with a thickness of 10 μm were cut with a cryostat (Leica CM1950; Leica, Wetzlar, Germany). Sections were stained with H&E for analysis. The H&E-stained sections were observed and photographed under a BX53 optical microscope.

Nile Red Staining and Filipin Staining
Cryosections were used for Nile Red (cat. N-1142; Invitrogen) staining. The slides were washed with phosphate-buffered saline three times and incubated with Nile Red (2 μg/ml in DMSO) for 10 minutes in the dark. The nuclei were labeled with DAPI (5 μg/ml) for 5 minutes. The staining method for cultured cells is the same as above. RPE flat mounts were prepared as described and stained using the same protocol. 22 Filipin staining was performed using a cell-based cholesterol detection kit (Sigma, SAE0087). The cryosections were equilibrated at room temperature for 20 minutes and washed three times with phosphate-buffered saline, followed by incubation for two hours with Filipin III. The nuclei were stained with PI for five minutes. The sections were mounted under glass coverslips.
Fluorescence images were captured using a confocal laser-scanning microscope (FluoView FV1000 confocal microscope; Olympus Imaging, Tokyo, Japan). Triglyceride and free cholesterol concentrations were measured by commercial kits (Beijing Solarbio Science & Technology Co. Ltd, Beijing, China) according to the manufacturer's instructions.

Transmission Electron Microscopy (TEM)
TEM was performed according to a previous report. 23 Ultrathin sections of 100 nm thickness were prepared using an ultramicrotome and stained for TEM.

Immunofluorescence and Western Blotting
The preparation of RPE flat mounts and immunofluorescence analysis were performed as described previously. 22,24 Fluorescent images were captured using a confocal laser-scanning microscope (FluoView FV1000 confocal microscope; Olympus Imaging). Fresh cells and zebrafish eyes were isolated and lysed in RIPA buffer. Lysates were mixed with loading buffer and boiled for 10 minutes. Protein samples were separated by SDS-PAGE and transferred to nitrocellulose membranes. The membranes were blocked for one hour in 5% skim milk and incubated with primary antibodies overnight at 4°C. The membranes were then washed three times in TBST for five minutes each and incubated with either a goat anti-rabbit or a goat anti-mouse HRP-conjugated secondary antibody (1:20,000; Thermo Fisher Scientific) for two hours at room temperature. The protein bands were detected using a ChemiDoc XRS+ system (Bio-Rad Life Science, Hercules, CA, USA) with the SuperSignal Sensitivity Substrate (Thermo Fisher Scientific) and quantified with the Quantity One software (Bio-Rad Life Science).

In Situ Hybridization
In situ hybridization for retinal sections was performed as previously described. 25 Probes were synthesized and labeled with digoxigenin using the MAXIscript SP6/T7 Transcription Kit (Invitrogen).

RNA Isolation and Quantitative PCR
Total RNA samples were extracted with Trizol reagent (Takara Biotechnology Co., Kyoto, Japan) according to the manufacturer's instructions. First-strand cDNA was synthesized by M-MLV reverse transcriptase (Invitrogen). Quantitative PCR (qPCR) was performed using AceQ qPCR SYBR Green Master Mix (Vazyme Biotech, Nanjing, China) on the StepOnePlus quantitative PCR system (Life Technologies, Carlsbad, CA, USA) according to the manufacturer's instructions. Relative gene expression was quantified using the StepOne software v2.3. Gene primers are listed in Supplementary Table S2.

Transcriptional Profiling
Eyeballs from 7-month-old wild-type (WT) and cyp4v7/cyp4v8 DKO zebrafish were dissected, and the RPE layer was collected. Total RNA samples were extracted with Trizol reagent. RNA sequencing was performed on an Illumina HiSeq2000 platform (Gene Denovo Biotechnology, Co., Ltd., Guangzhou, China). The trimmed mean of M values was used to normalize the raw counts of samples. Differentially expressed genes were identified by edgeR and DESeq2 using the following cut-off values: FC > 2 and adjusted P value < 0.05. Kyoto Encyclopedia of Genes and Genomes (KEGG) enrichment analysis was performed with DAVID.

Cell Culture and RNA Interference
Human RPE-1 cells (American Type Culture Collection, CRL-4000) were cultured in DMEM (Gibco 11330057; Thermo Fisher Scientific) supplemented with 10% FBS. Human CYP4V2 small interfering RNA (siRNA) was synthesized and purified by RiboBio (Guangzhou RiboBio, Guangzhou, China). The target sequence of the CYP4V2 siRNA (siCYP4V2) was ACAGAGATCCGAGATACTT.
The siRNA duplexes targeting nonspecific sequences were used as negative control (siNC). Cells were transfected with siNC or siCYP4V2 by Lipofectamine 3000 (L3000015, Invitrogen) for 72 hours and then collected according to the requirements of the subsequent experiments.

Statistical Analysis
All data are presented as mean ± SEM. Data groups were compared by Student's t-tests (Prism 6.0 software; GraphPad Software, Inc., La Jolla, CA, USA). Differences between groups were considered statistically significant if P < 0.05.

Generation of the cyp4v7/cyp4v8 Double-Knockout Zebrafish
In zebrafish, the two proteins encoded by the cyp4v7 and cyp4v8 genes both show a high degree of homology to CYP4V2: about 65% of amino acid residues are identical and 60% are positives between zebrafish Cyp4v7/Cyp4v8 and human CYP4V2 (Supplementary Fig. S1). In addition, we constructed a phylogenetic tree based on the amino acid sequences of all CYP4 (cytochrome P450 family 4) proteins in the human and zebrafish genomes, using the maximum likelihood method with 500 bootstrap replicates in MEGA 7.0 (Molecular Evolutionary Genetics Analysis). 26 As shown in Figure 1A, there are only four CYP4 family genes in zebrafish, and the cyp4v7 and cyp4v8 genes show the closest evolutionary distance to the human CYP4V2 gene. Next, we examined the expression distribution of cyp4v7 and cyp4v8 in zebrafish retinas by in situ hybridization (Fig. 1B). Noticeably, cyp4v7 and cyp4v8 were both widely expressed throughout the retina, and cyp4v7 showed a relatively enriched expression in RPE cells. These expression patterns are similar to that of CYP4V2 in human retinas reported previously. 18 The expression levels of cyp4v7 and cyp4v8 were also examined by semi-quantitative reverse-transcription PCR; the mRNA level of cyp4v7 was much higher than that of cyp4v8 in zebrafish retinas (Fig. 1C). These results suggested that, compared with other members of the zebrafish CYP4 family, cyp4v7 and cyp4v8 are the most likely homologous genes of human CYP4V2 in zebrafish. To further confirm the functional conservation between zebrafish Cyp4v7/Cyp4v8 and human CYP4V2 and establish an appropriate BCD disease model, we generated the cyp4v7 and cyp4v8 double-knockout (DKO) zebrafish line by CRISPR/Cas9 technology. Two sgRNA target sites, in exon 9 of cyp4v7 and exon 6 of cyp4v8 (Fig. 1D), which are located in the regions corresponding to the mutational hotspots of human CYP4V2, were chosen for the knockout experiments. 27 Through three rounds of screening, we obtained the cyp4v7/cyp4v8 double-knockout (named cyp4v7/cyp4v8 DKO) zebrafish line (Fig. 1E). The mRNA levels of cyp4v7 were reduced by about 50% in the mutant group compared with the WT group, although there was no significant decrease of cyp4v8 mRNA levels (Fig. 1F). Meanwhile, we amplified and sequenced the cDNA fragments spanning the two mutations in cyp4v7 and cyp4v8, respectively (Supplementary Fig. S2). There were no alternative splicing events that could skip the mutant exons, and the mutations indeed existed in the mature mRNAs of cyp4v7 and cyp4v8. These results suggested that the functions of cyp4v7 and cyp4v8 are likely destroyed in the cyp4v7/cyp4v8 DKO zebrafish.

Progressive Degeneration of Photoreceptor and RPE Cells in cyp4v7/cyp4v8 DKO Zebrafish
To investigate the retinal phenotypes of the cyp4v7/cyp4v8 DKO zebrafish, histological analysis was performed at 7, 10, 12, and 20 months after fertilization (Figs. 2A, 2B).
At 7 and 10 months post-fertilization (mpf), there were no significant differences in the retinal structure or the thickness of the retinal layers between WT and mutant zebrafish. However, at 12 mpf, we observed obvious attenuation of the photoreceptor layer, especially the outer segment layer, in cyp4v7/cyp4v8 DKO zebrafish. This became much more severe at 20 mpf. The expression of rod-specific (Gnat1) and cone-specific (Gnat2) phototransduction cascade proteins was examined by Western blotting at 10 and 12 mpf (Fig. 2C). The protein levels of Gnat1 were decreased mildly at 10 mpf and severely at 12 mpf. Meanwhile, the protein levels of Gnat2 were unchanged at 10 mpf but significantly decreased at 12 mpf (Figs. 2C, 2D). These results further supported the existence of progressive photoreceptor degeneration with a rod-first, cone-later pattern. RPE cells are considered the main affected cells in BCD patients. Consequently, we checked the morphology of RPE cells in WT and cyp4v7/cyp4v8 DKO zebrafish by immunostaining with the anti-ZO-1 antibody on RPE flat mounts from 7 to 20 months after fertilization (Fig. 3A). In cyp4v7/cyp4v8 DKO zebrafish, the loss of hexagonal cellular architecture and junctional integrity of RPE cells could be observed as early as seven months after fertilization, suggesting the dysfunction and atrophy of RPE cells. The number of degenerative RPE cells increased dramatically with age in the mutant zebrafish (Fig. 3B). These results demonstrated that progressive degeneration of RPE cells also occurred, before photoreceptor degeneration, in the cyp4v7/cyp4v8 DKO zebrafish.

Excessive Accumulation of Lipid Droplets in RPE Cells of cyp4v7/cyp4v8 DKO Zebrafish
CYP4V2 is supposed to play a role in lipid metabolism. 28 We therefore wondered whether there was abnormal lipid accumulation in the retina of cyp4v7/cyp4v8 DKO zebrafish. We found that accumulation of lipid droplets (LDs) could be observed in the RPE layer by Nile Red staining in the cyp4v7/cyp4v8 DKO zebrafish as early as 3 mpf (Fig. 4A). RPE flat mounts of WT and cyp4v7/cyp4v8 DKO zebrafish were also prepared and stained with Nile Red. LDs could be observed in nearly all RPE cells in the mutant zebrafish at 10 months after fertilization (Fig. 4B). From 3 to 10 months after fertilization, the number and size of the LDs located in the RPE cells increased significantly with age (Fig. 4B). The existence of LDs in RPE cells of cyp4v7/cyp4v8 DKO zebrafish was further validated by TEM (Fig. 4C). These results suggested that the excessive accumulation of lipid droplets in RPE cells may be a major pathological change at the cellular level before photoreceptor degeneration in the progression of BCD. Accumulation of free cholesterol has been reported in BCD iPSC-RPE cells. 21 We also examined the content of cholesterol in the retinas of WT and cyp4v7/cyp4v8 DKO zebrafish by Filipin staining (Supplementary Fig. S3). No significant accumulation of free cholesterol was observed in 3-month-old cyp4v7/cyp4v8 DKO zebrafish; at 12 months after fertilization, however, free cholesterol had specifically accumulated in the RPE cells of the mutant zebrafish.

Downregulation of the PPARα Pathway is Involved in the Accumulation of Lipids Caused by CYP4V2 Deficiency
To investigate the molecular mechanism underlying the onset and development of BCD, we performed transcriptional profiling of the RPE tissues from WT and cyp4v7/cyp4v8 DKO zebrafish at seven months after fertilization by RNA sequencing (RNA-seq).
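For orientation, the differential-expression cut-offs quoted in the Methods (FC > 2, adjusted P value < 0.05) amount to a simple filter over a gene-level results table. A minimal pandas sketch follows; the file name and column names are hypothetical stand-ins for the kind of table edgeR/DESeq2 exports typically produce:

```python
import pandas as pd

# Hypothetical export of the WT vs. DKO comparison: one row per gene,
# with a log2 fold change and a multiple-testing-adjusted P value.
de = pd.read_csv("rpe_wt_vs_dko_results.csv")  # columns: gene, log2FC, padj

# FC > 2 in either direction corresponds to |log2FC| > 1.
sig = de[(de["log2FC"].abs() > 1) & (de["padj"] < 0.05)]

up = (sig["log2FC"] > 0).sum()      # count of upregulated genes
down = (sig["log2FC"] < 0).sum()    # count of downregulated genes
print(f"{len(sig)} differentially expressed genes: {up} up, {down} down")
```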
A total of 3170 differentially expressed genes were identified, of which 1341 were upregulated and 1829 were downregulated (Fig. 5A). Among the top 20 KEGG-enriched pathways (Fig. 5B), those involved in metabolic processes accounted for 30% of all pathways. The lipid metabolism pathways could be further divided into bile acid biosynthesis, arachidonic acid metabolism, steroid biosynthesis, linoleic acid metabolism, and fatty acid elongation (Fig. 5C). Interestingly, some of the lipid metabolism-related genes are regulated by peroxisome proliferator-activated receptor α (PPARα), and its pathway (ko03320) was also significantly enriched among the top 20 KEGG pathways. Furthermore, we checked the expression of genes involved in the PPARα pathway in a heatmap based on our RNA-seq data (Fig. 5D). The fabp (fatty acid binding protein) family genes (fabp1a, fabp7a, fabp7b, fabp11a, fabp11b) and other PPARα-regulated genes such as apo-AI (apolipoprotein A-I) and cyp27a1.2 (cytochrome P450 family 27 subfamily A member 1, duplicate 2) were significantly downregulated, which was also verified by qPCR (Fig. 5E). We also observed decreased protein levels of PPARα in 7-month-old cyp4v7/cyp4v8 DKO zebrafish (Fig. 5F). Finally, we knocked down CYP4V2 in cultured human RPE-1 cells and again observed downregulation of the PPARα pathway (Fig. 6A). Moreover, the accumulation of LDs and increased levels of triglycerides and free cholesterol were also observed in CYP4V2-depleted human RPE-1 cells (Figs. 6B-E). These results suggested a conserved and probably direct role of CYP4V2 in the regulation of the PPARα pathway and lipid metabolism.

DISCUSSION
In this study, we generated a novel BCD animal model by knocking out the homologous genes of CYP4V2 in zebrafish. The progressive degeneration of RPE and photoreceptor cells and the accumulation of lipid droplets in RPE cells suggest that the cyp4v7/cyp4v8 double-knockout zebrafish is an appropriate model for BCD, in addition to the Cyp4v3-knockout mice reported in 2014. 20 Compared with the studies of the Cyp4v3 KO mice, we have made several important advances. First of all, we described a more detailed cellular phenotype during the progression of retinal degeneration in the cyp4v7/cyp4v8 double-knockout zebrafish. Our results clearly showed that RPE cells were affected first, followed by the rod and then the cone photoreceptors. This degeneration pattern is highly consistent with the clinical findings from BCD patients. Second, we paid close attention to the changes in lipids in RPE cells. By a variety of means, we demonstrated that lipid droplets accumulate mainly inside RPE cells, but not in other retinal cell types, in an age-related manner. Third, we have tried to reveal the molecular mechanisms of BCD through transcriptome analysis using the RPE tissues isolated from WT and cyp4v7/cyp4v8 double-knockout zebrafish. We found that downregulation of the PPARα pathway may be involved in the lipid accumulation caused by CYP4V2 deficiency in both zebrafish RPE cells and cultured human RPE cells. Furthermore, as a popular laboratory animal model, the zebrafish offers many advantages, such as a short growth cycle, low breeding cost, strong reproductive ability, small body size, and convenience for genetic manipulation and drug screening. Establishment of the BCD zebrafish model will provide sufficient experimental material and lay a solid foundation for further studies in the related fields. BCD has long been considered a lipid metabolic disorder.
However, because of the limited availability of material, this assumption had not been experimentally confirmed until recently. Two previous studies have analyzed serum lipid constituents in BCD patients and in the BCD mouse model, respectively, although inconsistent results were reported. 19,20 The major shortcoming is that data obtained from serum samples may not accurately reflect the conditions in the mainly damaged tissues in BCD, such as the retina. Moreover, only a few types of lipids were tested in the two previous studies. In 2018, Hata et al. 21 performed untargeted lipidomics in BCD iPSC-RPE cells and found the accumulation of various glucosylceramides and free cholesterol. However, lipid analysis in another BCD iPSC-RPE model reported in 2020 revealed the accumulation of lipid droplets caused by poly-unsaturated fatty acid accumulation. 29 In our BCD zebrafish model, we also observed the accumulation of lipid droplets and free cholesterol in RPE cells in an age-related manner. However, there are still many differences among the three models in detail, probably for the following reasons: (1) the lack of interactions between RPE cells and photoreceptors and the choroid may cause unpredictable consequences in in vitro cultured RPE cells; (2) unlike what can be achieved with animal models, it is difficult to match in vitro cultured RPE cells to the different stages of BCD progression. Therefore, a time-course and comprehensive lipidomic analysis of RPE cells in the BCD zebrafish model will help to clarify the above questions and provide cues for further studies. The mechanisms by which CYP4V2 deficiency causes abnormal lipid accumulation and retinal degeneration remain inconclusive. Dysfunctions of lysosomes and mitochondria have been reported in the two BCD iPSC-RPE models, respectively. 21,29 In this study, we revealed that inhibition of the PPARα pathway may be responsible for the lipid accumulation caused by CYP4V2 deficiency. It remains to be demonstrated whether the three mechanisms complement each other or function at different stages of BCD development. PPARα is a ligand-inducible transcription factor controlling multiple processes in lipid metabolism, such as microsomal, peroxisomal, and mitochondrial fatty acid oxidation; synthesis and breakdown of triglycerides; fatty acid binding and activation; and fatty acid elongation and desaturation. [30][31][32][33][34][35] A variety of endogenous fatty acids and their metabolites can bind to PPARα as ligands and regulate lipid metabolism. 36 A reasonable assumption is that dysfunction of CYP4V2 may cause the reduction of certain bioactive metabolites that could activate the PPARα pathway as ligands. This will also be investigated in our future studies. In summary, we generated a BCD zebrafish model and investigated the retinal phenotypes and pathogenic mechanisms by a combination of multiple technologies. The accumulation of lipid droplets in RPE cells was observed and became more substantial with increasing age. Down-regulation of the PPARα pathway was discovered in both the RPE cells of the BCD zebrafish and the CYP4V2-knockdown human RPE cell line, suggesting a conserved and causative role of CYP4V2 in regulating the PPARα pathway. These findings will deepen our understanding of the pathogenesis of BCD and provide potential therapeutic targets for future studies.
Investigation of Intercellular Salicylic Acid Accumulation during Compatible and Incompatible Arabidopsis-Pseudomonas syringae Interactions Using a Fast Neutron-Generated Mutant Allele of EDS5 Identified by Genetic Mapping and Whole-Genome Sequencing

A whole-genome sequencing technique developed to identify fast neutron-induced deletion mutations revealed that iap1-1 is a new allele of EDS5 (eds5-5). RPS2-AvrRpt2-initiated effector-triggered immunity (ETI) was compromised in iap1-1/eds5-5 with respect to in planta bacterial levels and the hypersensitive response, while intra- and intercellular free salicylic acid (SA) accumulation was greatly reduced, suggesting that SA contributes as both an intracellular signaling molecule and an antimicrobial agent in the intercellular space during ETI. During the compatible interaction between wild-type Col-0 and virulent Pseudomonas syringae pv. tomato (Pst), little intercellular free SA accumulated, which led to the hypothesis that Pst suppresses intercellular SA accumulation. When Col-0 was inoculated with a coronatine-deficient strain of Pst, high levels of intercellular SA accumulation were observed, suggesting that Pst suppresses intercellular SA accumulation using its phytotoxin coronatine. This work suggests that accumulation of SA in the intercellular space is an important component of basal/PAMP-triggered immunity as well as of ETI to pathogens that colonize the intercellular space.

Introduction
In response to pathogens, Arabidopsis relies on various induced defenses. Basal resistance or PTI (PAMP-Triggered Immunity) is a defense response elicited by the recognition of conserved pathogen- or microbe-associated molecular patterns (PAMPs or MAMPs) by pattern recognition receptors [1]. Some bacteria, for example certain pathovars of Pseudomonas syringae, are able to suppress PTI by delivering effector proteins into plant cells using a type-three secretion system [1]. Several of these effectors interfere with various stages of PTI, contributing to the pathogen's ability to cause disease on host plants [2][3][4][5][6]. In response, many plants possess resistance genes (R genes) that recognize effectors either directly or indirectly, leading to R gene-mediated resistance or Effector-Triggered Immunity (ETI) [7]. Some of the defense mechanisms associated with ETI are thought to overlap with those of PTI, although they seem to occur more rapidly and with greater strength during ETI [1]. These defenses include extensive transcriptional reprogramming resulting in cellular changes such as expression of defense genes (e.g., PR [PATHOGENESIS-RELATED] genes), production of phytoalexins, salicylic acid (SA) biosynthesis, and cell-wall modifications [8][9][10]. In addition, ETI is often associated with the hypersensitive response (HR), a form of programmed cell death thought to contribute to inhibition of pathogen spread [11]. Another form of induced disease resistance is the developmentally regulated Age-Related Resistance (ARR) response (reviewed in [17,18]). In Arabidopsis, ARR results in enhanced resistance to certain pathogens with increasing plant age [19]. Specifically, 6-week-old Col-0 plants grown in short days limit the growth of P. syringae pv. tomato (Pst) to levels that are 10- to 100-fold lower than in 3-week-old plants. Unlike PTI and ETI, the molecular mechanisms underpinning ARR in Arabidopsis are only beginning to be understood. A common player in many disease resistance pathways is the phytohormone SA (reviewed in [20,21]).
Wild-type Arabidopsis accumulates SA in response to inoculation with both virulent and avirulent Pst [22]. The importance of SA accumulation for different disease resistance responses is typically tested using transgenic and mutant plants with a reduced ability to accumulate SA. NahG plants expressing a bacterial salicylate hydroxylase gene convert SA to catechol and consequently accumulate very little SA [23]. ICS1/SID2 (ISOCHORISMATE SYNTHASE1/SALICYLIC ACID INDUCTION DEFICIENT2) encodes a key enzyme in the biosynthetic pathway responsible for most pathogen-responsive SA production in Arabidopsis [24,25]. EDS5/SID1 (ENHANCED DISEASE SUSCEPTIBILITY5) encodes a multidrug and toxin extrusion (MATE) family protein that localizes to the chloroplast envelope and transports SA from its site of synthesis into the cytoplasm [26][27][28][29]. Both sid2 and eds5/sid1 mutants accumulate little SA in response to pathogens [22]. NahG, sid2, and eds5 support higher growth of virulent strains of P. syringae compared to wild-type plants, suggesting that SA accumulation is important in limiting pathogen growth even in a compatible (susceptible) interaction [22,23,30,31]. ETI is compromised in NahG [8,23,[32][33][34], sid2, and eds5 [22,32,35] when initiated by several R genes (RPS2, RPS4, RPM1) interacting with their corresponding effectors. In experiments using type-three secretion system mutants or PAMPs (flg22) to initiate PTI in wild-type Col-0 or sid2, Tsuda et al. [36] demonstrated that SA accumulation is required for a successful PTI response. Thus, in Arabidopsis, SA accumulation is important in numerous ETI and PTI pathways. SA accumulation is also required for the Arabidopsis ARR response to Pst, as demonstrated by the ARR-defective phenotypes of NahG, sid2-1, and sid1/eds5-3 [19,37]. Examination of mature plants responding to Pst revealed 6-fold higher SA levels in intercellular washing fluids (IWFs) relative to young plants [38]. Antimicrobial activity was often observed in the IWFs of mature plants inoculated with Pst, as demonstrated by inhibition of in vitro Pst growth [38]. Preventing SA accumulation in the intercellular space by pressure-infiltrating ARR-competent plants with salicylate hydroxylase disrupted their ability to undergo ARR. Conversely, adding exogenous SA to the intercellular space rescued ARR-defective mutants and enhanced ARR in wild-type Col-0 [38]. Taken together, these data led to the hypothesis that SA may act as an antimicrobial agent in the intercellular space during ARR. A classical mutant screen for mature plants with defects in ARR was used to identify genes involved in the ARR response, including iap1-1 (important for the ARR pathway1-1). Along with their ARR-defective phenotype, mature iap1-1 plants accumulate little SA [37]. In this work, a combination of genetic mapping, whole-genome sequencing, and complementation analysis was used to identify iap1-1 as a mutant allele of EDS5. While mapping the iap1-1 mutation, we investigated the role of IAP1 in RPS2- and RPS4-mediated ETI by measuring bacterial levels and monitoring HR cell death using trypan blue staining and electrolyte leakage. Intercellular SA accumulation is important during ARR and requires functional IAP1; therefore, we investigated both inter- and intracellular SA accumulation during ETI (incompatible interaction) and during a compatible interaction with virulent Pst in young Col-0 and iap1-1.
Our results suggest that inter- and intracellular SA accumulation is important during both compatible and incompatible interactions.

Results
The iap1-1 mutant is partially compromised in resistance to Pst(avrRpt2) and Pst(avrRps4)
The iap1-1 mutant is defective for ARR to virulent Pst [37]. To determine whether IAP1 is also required during NDR1- or EDS1-dependent ETI/incompatible interactions, young plants at 3 weeks post-germination (wpg) were inoculated with 10^6 cfu (colony-forming units) ml^-1 Pst, Pst(avrRpt2), or Pst(avrRps4), and in planta bacterial density was measured 3 days post-inoculation (dpi). Pst(avrRpt2) grew to significantly lower levels than Pst in both Col-0 and iap1-1, indicating that an ETI response occurred (Figure 1A). However, Pst(avrRpt2) levels were significantly higher in iap1-1 compared to Col-0, indicating that NDR1-dependent ETI to Pst(avrRpt2) is partially compromised by the iap1-1 mutation. Similar results were obtained when Col-0 and iap1-1 were inoculated with Pst(avrRps4), which indicates that EDS1-dependent ETI to Pst(avrRps4) is also partially compromised by the iap1-1 mutation (Figure 1B). Therefore, IAP1 is required for a full and robust ETI response to Pst carrying effectors recognized by two distinct classes of resistance proteins.

The hypersensitive response is partially compromised in iap1-1
Since the hypersensitive response (HR) is a common component of ETI elicited by both AvrRpt2 and AvrRps4 effectors [13,14,16], we investigated whether the iap1-1 ETI defect was accompanied by a reduced HR. Four-week-old Col-0 and iap1-1 were inoculated with 10^7 cfu ml^-1 Pst(avrRpt2) or 10 mM MgCl2 (mock-inoculated), and leaves were collected at 24 hours post-inoculation (hpi) and stained with trypan blue. Trypan blue does not pass through the intact cell membranes of live cells; it therefore selectively stains dying or dead cells and can be used to measure HR-associated cell death [47]. Visual analysis revealed little staining in mock-inoculated leaves, whereas intense staining was observed in leaves inoculated with Pst(avrRpt2) (Figure S1). There was no obvious difference in the intensity of staining between Col-0 and iap1-1 leaves, suggesting that iap1-1 undergoes a wild-type HR. By floating treated leaf tissue in a solution and measuring conductance, electrolyte leakage from dead or damaged cells can be quantified and used to track the progression of HR cell death over time [48]. Electrolyte leakage was monitored in tissue collected from 4-week-old Col-0 and iap1-1 that were either mock-inoculated or inoculated with 10^7 cfu ml^-1 Pst(avrRpt2) (Figure 1C). Mock-inoculated leaves showed little change in electrolyte leakage over time. Electrolyte leakage from leaves inoculated with Pst(avrRpt2) increased substantially between 7 and 13 hpi. By 13 hpi, electrolyte leakage from Col-0 leaves inoculated with Pst(avrRpt2) was approximately 4-fold higher than from mock-inoculated tissue, while the increase in electrolyte leakage from iap1-1 leaves was more modest (a 2.7-fold increase compared to mock-inoculated tissue). Electrolyte leakage from iap1-1 leaves was significantly less than from wild-type leaves between 10 and 13 hpi with Pst(avrRpt2) (T-test, P < 0.01), suggesting that HR-mediated cell death is compromised in iap1-1.

Intracellular SA accumulation in response to Pst and Pst(avrRpt2)
SA is required for RPS2- and RPS4-mediated immunity [8,22,23,[32][33][34].
If young iap1-1 plants accumulate little SA, like mature iap1-1 [37], this could explain the compromised ETI response observed in young iap1-1. To test this hypothesis, the ADPWH_lux SA biosensor [42] was used to measure SA levels in both intercellular (IWFs) and intracellular (leaves minus IWFs) compartments of young (4 wpg) Col-0 and iap1-1 plants that were untreated, mock-inoculated, or inoculated with 10^6 cfu ml^-1 Pst or Pst(avrRpt2) (Figure 2). Intra- and intercellular SA results are discussed in this section and the next section, respectively. Col-0 and iap1-1 had similar levels of intracellular free SA in both untreated and mock-inoculated tissues at 12, 24, and 48 hpi (<100 ng gfw^-1) (Figure 2A). Col-0 inoculated with Pst accumulated 199 ng gfw^-1 of intracellular free SA by 48 hpi, significantly more than mock-inoculated controls (T-test, P < 0.01), whereas intracellular free SA levels in iap1-1 were similar to mock-inoculated controls at all time points. In response to Pst(avrRpt2), Col-0 accumulated 546-682 ng gfw^-1 of intracellular free SA at 12, 24, and 48 hpi, whereas iap1-1 accumulated very little intracellular free SA (109-164 ng gfw^-1). This suggests that iap1-1 accumulates little intracellular free SA in response to Pst or Pst(avrRpt2). In addition, Col-0 accumulated higher levels of intracellular free SA in response to Pst(avrRpt2) compared to Pst (T-test, P < 0.01). Similar results were observed for intracellular total SA (free SA + SA-glucosides), with the exception that iap1-1 accumulated high levels of total SA at 48 hpi with both Pst and Pst(avrRpt2) (Figure 2B).

Intercellular SA accumulation in response to Pst and Pst(avrRpt2)
Intercellular SA accumulation is essential for ARR to Pst, and intercellular SA addition and subtraction experiments suggest that SA acts as an antimicrobial agent in the intercellular space during ARR [38]. To determine whether SA accumulates in a similar manner in young plants undergoing ETI, SA levels were measured in IWFs collected from leaf tissue of young (4 wpg) Col-0 and iap1-1 that were untreated, mock-inoculated, or inoculated with 10^6 cfu ml^-1 Pst or Pst(avrRpt2) (Figure 2C, D). IWFs from untreated and mock-inoculated Col-0 and iap1-1 had similar levels of intercellular free SA at 12, 24, and 48 hpi (<200 ng ml^-1 IWF). Intercellular free SA levels in IWFs collected from Col-0 inoculated with Pst were similar to mock-inoculated controls. This is consistent with previous findings that young Col-0 accumulates little intercellular SA in response to Pst [38]. IWFs collected from iap1-1 plants inoculated with Pst contained similar free SA levels relative to mock-inoculated controls. IWFs from Col-0 inoculated with Pst(avrRpt2) contained high levels of free SA at 12, 24, and 48 hpi (522-1226 ng ml^-1 IWF), whereas IWFs from iap1-1 inoculated with Pst(avrRpt2) had levels similar to mock-inoculated controls (<200 ng ml^-1 IWF). Similar results were observed for total SA levels in IWFs, with the exception that Col-0 accumulated a modest level of intercellular total SA in response to Pst at 48 hpi (262 ng ml^-1 IWF compared to 60 ng ml^-1 IWF in mock-treated plants; T-test, P < 0.01). Therefore, iap1-1 accumulated little intercellular SA in response to Pst or Pst(avrRpt2). In addition, wild-type Col-0 accumulated high levels of intercellular SA in response to Pst(avrRpt2), and modest levels in response to virulent Pst.
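As an aside, converting the ADPWH_lux luminescence readout into an SA concentration is a standard-curve step; the exact curve-fitting used in the cited protocol is not reproduced here, but a minimal interpolation sketch conveys the idea. All calibration numbers below are purely illustrative, not values from this study:

```python
import numpy as np

# Illustrative calibration: luminescence of the biosensor incubated with
# known SA standards (both arrays are hypothetical example values).
std_sa = np.array([0, 50, 100, 250, 500, 1000])          # ng ml^-1 SA
std_lum = np.array([120, 900, 1700, 4000, 7600, 14000])  # luminescence units

def sa_from_luminescence(lum):
    """Estimate SA concentration by linear interpolation on the standard curve."""
    return np.interp(lum, std_lum, std_sa)

# Example readings from three hypothetical IWF samples.
print(sa_from_luminescence(np.array([850.0, 5200.0, 13500.0])))
```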
Intercellular SA accumulated concurrently with cell death during ETI
As indicated above, elevated intercellular SA levels were observed by 12 hpi with Pst(avrRpt2) in wild-type Col-0 (Figure 2C, D). How SA reaches the intercellular space is not known; however, it is possible that SA leaks from dead or damaged cells during the HR that accompanies ETI. To test this hypothesis, electrolyte leakage was measured in Col-0 and rps2-201 controls. To make it possible to associate SA accumulation with cell death, plants were inoculated with the same concentration of Pst(avrRpt2) that was used for the SA accumulation experiments (10^6 cfu ml^-1; Figure 4A). Use of a smaller and more sensitive conductivity meter made it possible to use small, standardized units of leaf tissue (leaf discs), which increased the accuracy of the electrolyte leakage measurements. Since a lower bacterial inoculum concentration was used, fewer cells would undergo the HR; therefore, cell death was monitored over 32 hours. If cell death occurs before and/or concurrently with SA accumulation, then SA may access the intercellular space from dying and dead cells. Bacterial density measured at 3 dpi demonstrated that Pst(avrRpt2) grew to significantly higher levels in rps2-201 mutants than in Col-0, indicating that an ETI response occurred in Col-0 and was defective in the rps2-201 mutant (Figure 4B). Electrolyte leakage from Col-0 inoculated with Pst(avrRpt2) was significantly greater than in mock-inoculated controls by 10 hpi (T-test, P < 0.05), suggesting that cell death was occurring by this time (Figure 4A). Thus, the possibility that SA accesses the intercellular space by leaking out of dead cells during ETI could not be ruled out. Electrolyte leakage was also measured in iap1-1 to confirm the results of our first set of electrolyte leakage experiments, which were carried out with higher inoculum concentrations and a different technique (see Materials and Methods). Pst(avrRpt2) levels supported by iap1-1 at 3 dpi were significantly higher than in Col-0, consistent with an ETI defect in iap1-1 (Figure 4B). In this set of experiments, electrolyte leakage from iap1-1 inoculated with Pst(avrRpt2) was lower than from Col-0 between 14 and 18 hpi, supporting previous evidence for a less robust HR in iap1-1 relative to Col-0 (Figure 4A). It is interesting to note that electrolyte leakage was greater in iap1-1 compared to Col-0 from 26 to 32 hpi with Pst(avrRpt2), perhaps because iap1-1 is more susceptible to Pst than Col-0 and Pst switches to necrotrophy at the end of the infection cycle [50].

Coronatine suppresses SA accumulation during the compatible Arabidopsis-Pst interaction
Since young Col-0 accumulates little intercellular SA during the compatible interaction with Pst ([38]; Figure 2C, D), we hypothesized that intercellular SA accumulation could be an important part of PTI/basal defense that is suppressed by virulent Pst. Work done by other groups demonstrated that the Pseudomonas phytotoxin coronatine suppresses SA accumulation in whole leaves [51,52]. To test whether coronatine suppresses intra- and/or intercellular SA accumulation, young (4 wpg) wild-type Col-0 plants were inoculated with Pst (strain DC3000) or coronatine-deficient Pst cor− (strain DC3661), followed by intra- and intercellular SA quantification at 12, 24, and 48 hpi (Figure 5A, B). Pst cor− grew to lower levels than Pst by 3 dpi (Figure 5C). As seen previously, Col-0 accumulated modest levels of intracellular free SA by 48 hpi with Pst (410 ng gfw^-1) relative to untreated controls (144 ng gfw^-1; T-test, P < 0.01). In contrast, higher levels of intracellular free SA accumulated in response to Pst cor− at 24 and 48 hpi. A similar trend was observed for intracellular total SA levels. Moreover, Col-0 also accumulated higher levels of intercellular SA in response to Pst cor− relative to Pst at 48 hpi (~1400 ng ml^-1 IWF compared to ~200 ng ml^-1 IWF, respectively). These results suggest that Pst suppresses both intra- and intercellular SA accumulation in a coronatine-dependent manner during the compatible interaction in young plants.

Figure 2 (caption, continued). ...replicate samples is shown. T-tests were performed to test for significant differences between means (see Results section). This experiment was repeated twice with similar results. doi:10.1371/journal.pone.0088608.g002

Figure 3. Salicylic acid accumulation in response to Pst(avrRpt2) is RPS2-dependent. Intracellular total salicylic acid (SA) levels (A) and intercellular total SA levels (B) were measured at 24 hpi in leaves collected from 4-week-old plants that were untreated, mock-inoculated, or inoculated with 10^6 cfu ml^-1 Pseudomonas syringae pv. tomato carrying avrRpt2 [Pst(avrRpt2)]. The mean ± standard deviation of three replicate samples is shown. Asterisks indicate significant differences between means (T-test, * P < 0.05, ** P < 0.01). This experiment was repeated twice with similar results. doi:10.1371/journal.pone.0088608.g003

Map-based cloning and whole-genome sequencing to identify iap1-1
The iap1-1 mutant was isolated from a screen for ARR-defective plants performed on a population of fast neutron mutants [37]. As demonstrated in this work, iap1-1 is partially compromised in ETI. To identify the causal mutation in iap1-1, we began with a typical map-based cloning approach. A mapping population was generated and approximately 160 putative homozygous mutant F2s were used to map the iap1-1 mutation close to marker 461250 (Monsanto Arabidopsis Polymorphism Collection) at 18,087,180 base pairs (bp) on the long arm of chromosome four. The necessity of screening individual mature plants for an ARR-defective phenotype made the reliable identification of homozygous individuals difficult and time-consuming for fine mapping; we therefore sequenced the iap1-1 genome in order to locate the mutation. DNA isolated from a homozygous iap1-1 individual was used to create a DNA library that was sequenced on an Illumina HiSeq 1500. This generated roughly 68 million paired-end reads, which were then mapped to the Col-0 reference genome (TAIR 10) using BWA software [46], resulting in approximately 50-fold coverage of the genome. Since the majority of fast neutron-generated mutations are deletions [53][54][55], we reasoned that iap1-1 was likely a deletion mutant. To identify deletions in the iap1-1 genome, we compiled a list of positions in the reference genome that had zero coverage by iap1-1 reads (a minimal sketch of this scan is given below). A region of zero coverage might indicate that the corresponding sequence is deleted in iap1-1. Genome-wide, 814 regions of zero coverage were identified, 244 of which were located on chromosome four. Based on the mapping data, zero-coverage regions located near marker 461250 on chromosome four were investigated. Two zero-coverage regions were found within 500 kb of the marker (Table S1), one of which was located in an intergenic region and was therefore unlikely to represent the causal mutation. Intergenic regions are defined here as regions between genes, not including recognizable promoters or untranslated regions.
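The zero-coverage scan itself is straightforward to reproduce. A minimal sketch follows, assuming a per-position depth table (chromosome, position, depth) such as `samtools depth -a` produces; the input file name is hypothetical:

```python
# Merge consecutive zero-coverage positions into candidate deletion regions.
# Input: tab-separated lines "chrom  pos  depth" for every reference position.

def zero_coverage_regions(depth_file):
    regions = []   # completed regions as (chrom, start, end), 1-based inclusive
    cur = None     # region currently being extended
    with open(depth_file) as fh:
        for line in fh:
            chrom, pos, depth = line.split()
            pos, depth = int(pos), int(depth)
            if depth == 0:
                if cur and cur[0] == chrom and pos == cur[2] + 1:
                    cur = (chrom, cur[1], pos)   # extend the current run
                else:
                    if cur:
                        regions.append(cur)
                    cur = (chrom, pos, pos)      # start a new run
            elif cur:
                regions.append(cur)
                cur = None
    if cur:
        regions.append(cur)
    return regions

regions = zero_coverage_regions("iap1-1_vs_TAIR10.depth")  # hypothetical file
print(len(regions), "zero-coverage regions found")
```

Runs of adjacent zero-depth positions on the same chromosome are merged into candidate deletion intervals, which can then be filtered by position against the genetically mapped interval.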
The other zero-coverage region, 65 bp in length, was located within the first exon of EDS5 (Figure 6A), a gene known to be required for SA accumulation [22] and ARR [19]. Gel electrophoresis of EDS5 RT-PCR products from iap1-1 and Col-0 showed a size difference that was consistent with a deletion in the iap1-1 product (Figure 6B). Sequencing these products confirmed the 65 bp deletion in iap1-1 and also revealed a 6 bp insertion between the nucleotides flanking the deleted region (Figure 6C). When the Col-0 reference genome was modified to contain this insertion/deletion mutation in EDS5, the Illumina reads generated from iap1-1 mapped continuously across the modified EDS5 locus, confirming that iap1-1 harbours the insertion/deletion mutation depicted in Figure 6C. This mutation results in the net loss of 59 nucleotides from the first exon of EDS5 and causes a frame-shift that produces a premature stop codon after the 52nd amino acid (Figure 6D).

The iap1-1 and eds5-3 mutations are allelic
Both iap1-1 and eds5-3 mutants are defective in ARR and pathogen-induced SA accumulation [19,22]. To confirm that iap1-1 is an allele of EDS5, homozygous iap1-1 mutants carrying the gl1 marker were crossed with eds5-3 and the F1 progeny were tested for ARR competence at 6 wpg. If the two mutations are not allelic, then ARR should be restored in the F1 generation (complementation), but if iap1-1 and eds5-3 are allelic, then the F1 generation should remain ARR-defective. Wild-type Col-0 supported low Pst levels (5×10^4 cfu ld^-1), characteristic of ARR, whereas iap1-1, eds5-3, and iap1-1 × eds5-3 F1 plants supported high Pst levels (at least 50-fold higher than Col-0), indicating that they are compromised for the ARR response (Figure 6E). Therefore, iap1-1 and eds5-3 failed to complement each other, indicating that these two mutations are allelic and that iap1-1 is a new mutant allele of EDS5.

Discussion
During ARR, IAP1 is required for SA accumulation in the intercellular space, where SA is thought to act as an antimicrobial agent [37]. This led us to investigate the role of IAP1 and intercellular SA accumulation during ETI. At the same time, map-based cloning, whole-genome sequencing, and complementation analysis identified iap1-1 as a mutant allele of EDS5. Four eds5 alleles already exist in the literature; therefore, we propose that iap1-1 be referred to as eds5-5 in subsequent publications. To avoid the time-consuming process of fine mapping, several approaches have been developed that combine the principles of genetic mapping with next-generation sequencing to identify EMS-generated mutations in Arabidopsis [56][57][58][59][60][61]. Most of these approaches involve sequencing pools of homozygous mutants selected from an F2 mapping population (reviewed in [62]). This step can be problematic if the mutant phenotype is difficult to score and homozygous individuals cannot be selected reliably. For example, heterozygous iap1-1/eds5-5 plants display an ARR phenotype that is intermediate between those of homozygous mutants and wild-type plants, and can occasionally be misclassified as homozygous [37]. An alternative approach to avoid this issue is to directly sequence the mutant genome. Unfortunately, EMS mutants usually possess thousands of mutations, making it difficult or impossible to identify the causal mutation by direct sequencing alone [62].
This problem can be solved by roughly mapping the mutation (to exclude most irrelevant mutations by determining the approximate position of the causal mutation) or by backcrossing (to physically remove irrelevant mutations). Ashelford et al. [63] used both techniques, followed by whole-genome sequencing, to identify the early bird (ebi-1) EMS mutant. While successful, this effort was complicated by more than 30 non-synonymous mutations that were linked to the causal ebi-1 mutation and were not removed by backcrossing [63]. We used a similar approach combining genetic mapping and whole-genome sequencing to identify the fast neutron mutant iap1-1/eds5-5. Ashelford et al. [63] performed four backcrosses to eliminate irrelevant mutations from the ebi-1 mutant, whereas iap1-1/eds5-5 was backcrossed only twice, since the mutation load in fast neutron mutants is typically lower than in EMS mutants [64][65][66]. Therefore, fewer irrelevant mutations will be linked to the causal mutation, making it easier to identify. Based on this information, we suggest that identifying fast neutron mutants using a combination of rough mapping and direct sequencing can be done rapidly and efficiently in comparison to EMS mutant identification. Once the genome sequence data for iap1-1/eds5-5 were obtained and the reads were aligned to the Col-0 reference genome, identification of potential deletion mutations was straightforward. Instead of generating a list of SNPs, as would be done for an EMS-generated mutant, we compiled and assessed regions of the reference genome that were not covered by reads generated from the iap1-1/eds5-5 mutant genome. A region of zero coverage could indicate that the corresponding sequence is deleted or highly polymorphic in iap1-1/eds5-5, or could represent an error in the reference genome [67]. Regions of low coverage also occur for other reasons, including the difficulty of sequencing some genomic regions (e.g., low-GC areas). Since we did not sequence the wild-type parent of iap1-1/eds5-5, we cannot differentiate between these possibilities. Therefore, while 814 regions of zero coverage were identified, this is probably an overestimate of the fast neutron-induced mutations in iap1-1/eds5-5. Fast neutron mutant genomes are estimated to harbour deletions in approximately 10 genes [66]. Our bioinformatics strategy did not detect the co-localized insertion mutation present in iap1-1/eds5-5. The insertion was discovered later by Sanger sequencing, demonstrating the necessity of confirming the molecular basis of fast neutron-generated mutations. Although not as common as deletions, other instances of co-localized insertion-deletion mutations resulting from fast neutron mutagenesis have been described [68,69]. To identify IAP1, we began by genetically mapping it to chromosome four and turned to whole-genome sequencing when it became available and additional mapping became unfeasible. Based on this experience, we propose a more rapid approach for gene identification in which rough mapping and sequencing are performed at the same time. As soon as the causal mutation is mapped to one chromosomal region, the sequencing data can be used to obtain a list of zero-coverage regions in this area. Zero-coverage regions in introns or intergenic sequences can be excluded, as they are unlikely to affect gene function (sketched below). The list of candidate zero-coverage regions in exons, promoters, and untranslated regions would then be assessed to determine whether mutations in any of these genes make sense based on the mutant phenotype.
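The exclusion step just described, keeping only regions that overlap exons, promoters, or untranslated regions, is a simple interval intersection. A sketch follows under the assumption that both the candidate regions and the "consequential" annotated features are available as (chromosome, start, end) tuples; all coordinates and feature sets here are illustrative only:

```python
# Keep only zero-coverage regions that overlap an annotated feature
# likely to affect gene function (exon, promoter, or UTR).

def overlaps(a, b):
    """True if two (chrom, start, end) intervals share any positions."""
    return a[0] == b[0] and a[1] <= b[2] and b[1] <= a[2]

def candidate_regions(zero_cov, features):
    return [r for r in zero_cov if any(overlaps(r, f) for f in features)]

# Illustrative inputs (coordinates are made up for the example).
features = [("Chr4", 18000000, 18002500)]   # e.g., an exon of a candidate gene
zero_cov = [("Chr4", 18001000, 18001064),   # overlaps the exon: kept
            ("Chr4", 17500000, 17500080)]   # intergenic: dropped
print(candidate_regions(zero_cov, features))
```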
If there are too many zero-coverage regions (>10), then further mapping may be required to reduce the number of candidate genes. If, as expected for fast neutron-induced mutants, there are only a few zero-coverage regions in candidate genes, complementation analysis can be performed to confirm the identity of the gene. If whole-genome/next-generation sequencing had been available when we started to map iap1-1/eds5-5, we would have mapped it near one or two markers on chromosome four, then realized that there was just one zero-coverage region in one gene near these markers, making us confident that this was the mutation responsible for the iap1-1/eds5-5 phenotype. Moreover, this gene encodes EDS5, an SA transporter important for defense, and therefore made sense with respect to the ARR-defective phenotype of iap1-1/eds5-5. Many years of challenging mapping would have been avoided if this method (simultaneous mapping and whole-genome sequencing) had been available. Plants that are heterozygous for the iap1-1/eds5-5 mutation display an ARR phenotype that is intermediate between those of homozygous mutants and wild-type plants, suggesting that iap1-1/eds5-5 is a semidominant mutation [37]. Other mutant alleles of eds5 have been classified as recessive (eds5-1 [30] and eds5-3 [22]). We believe this difference reflects the nature of assessing dominance in disease resistance mutants. For example, small changes in humidity may affect the growth of Pst in plants [70], such that heterozygotes may exhibit phenotypes that are similar to homozygous mutants or wild-type plants. Therefore, during heterozygote analysis, if enough heterozygotes are classified with a wild-type phenotype, a semidominant mutation will be classified as recessive. It is also possible that iap1-1/eds5-5 is a unique semidominant allele. Since EDS5 transcripts are detectable in iap1-1/eds5-5 mutants, a truncated EDS5 protein may be produced in iap1-1/eds5-5, as 52 amino acids precede the premature stop codon in the iap1-1/eds5-5 mutant. If the mutant peptide has a dominant-negative effect on the wild-type protein, this could explain the semidominant phenotype observed in heterozygous iap1-1/eds5-5. Other eds5 mutant alleles that have been studied have been classified as recessive and also have premature stop codons and may produce truncated proteins [27]. However, the stop codons in eds5-1, eds5-3, and iap1-1/eds5-5 occur at different positions, suggesting that distinct peptides could be produced. Future studies to investigate whether these mutant alleles produce different peptides may shed light on the importance of the functional domains of the EDS5 MATE transporter. It is also possible that the semidominant phenotype of iap1-1/eds5-5 is the result of an additional, unknown mutation. We do not usually observe a difference in disease susceptibility between young Col-0 and iap1-1/eds5-5 during the compatible interaction with virulent Pst [37, this study]. In some of our experiments young iap1-1 is slightly more susceptible than Col-0; however, the difference is not always statistically significant. This observation conflicts with other studies of eds5 mutants [22,30,31]. We propose that this difference may result from our use of a 10-fold higher inoculum concentration (10^6 cfu ml^-1) during bacterial growth assays. Indeed, Glazebrook et al. [30] indicate that the enhanced disease susceptibility phenotype of eds mutants is more easily observed when lower inoculum concentrations are used.

Figure 6. iap1-1 is an eds5 mutant. (A) Integrated Genome Viewer screenshot showing iap1-1 reads aligned to the Col-0 reference genome. The region of zero coverage is located in the first exon of EDS5. (B) RT-PCR products generated from Col-0 and iap1-1 using primers that flank the iap1-1 mutation. (C) EDS5 gene model including untranslated regions (thick grey lines), exons (thick black lines), and introns (thin black lines). The inset shows the nucleotides that are deleted (marked by Δ and underlined) and inserted (+ATATTA) in iap1-1 relative to Col-0. (D) Predicted amino acid sequences of Col-0 and iap1-1 EDS5. A premature stop in the iap1-1 sequence is indicated by an asterisk. The full Col-0 EDS5 sequence is not shown. (E) Six-week-old Col-0, iap1-1, eds5-3, and iap1-1 × eds5-3 F1s were inoculated with 10^6 cfu ml^-1 Pseudomonas syringae pv. tomato and bacterial density (colony-forming units per leaf disc [cfu ld^-1]) was determined at 3 days post-inoculation. Values represent the mean ± standard deviation of three sample replicates. Different letters indicate significant differences (ANOVA, Tukey's HSD, P < 0.05). This experiment was repeated twice with similar results. doi:10.1371/journal.pone.0088608.g006

Although IAP1 is EDS5, a gene already known to be required for SA-mediated defense responses, both confirmatory and novel results were obtained during our investigation of compatible and incompatible (ETI) responses to P. syringae in iap1-1/eds5-5. Bacterial levels were modestly enhanced in iap1-1/eds5-5 compared to Col-0 in response to both Pst(avrRpt2) and Pst(avrRps4), indicating that the ETI response was reduced, but not abolished, in iap1-1/eds5-5 and suggesting that IAP1/EDS5 contributes to ETI. Similar results were also observed in eds5-3/sid1 in response to Pst(avrRpt2) [22]. In addition, Venugopal and colleagues [35] found that SA-deficient sid2 mutants are also partially compromised for ETI. ETI was fully compromised in sid2 eds1 double mutants, suggesting that SA and EDS1 function redundantly during ETI. A macroscopic HR was observed in NahG, other eds5 alleles [8,32], and iap1-1/eds5-5, and trypan blue staining detected similar levels of cell death in iap1-1/eds5-5 and Col-0 (this study). Neither of these techniques is sensitive or quantitative; therefore, electrolyte leakage assays were used to carefully examine the HR in iap1-1/eds5-5. Electrolyte leakage was modestly reduced in iap1-1/eds5-5 compared to Col-0 inoculated with 10^6 or 10^7 cfu ml^-1 Pst(avrRpt2) at several time points. Like the trypan blue assays, the electrolyte leakage assays confirm that HR cell death occurs in iap1-1/eds5-5; however, this sensitive assay indicates that the HR response was modestly reduced. We confirm that IAP1/EDS5 contributes to bacterial growth restriction during ETI and demonstrate that the HR is also partially dependent on functional IAP1/EDS5. In other words, it appears that IAP1/EDS5-dependent SA accumulation is required for a full HR and for the bacterial growth restriction that takes place during RPS2-AvrRpt2-mediated ETI. In addition, electrolyte leakage assays revealed that cell death and intercellular SA accumulation occurred concurrently, suggesting that SA may gain access to the intercellular space from dead and dying cells during the HR. Consistent with this idea, the highest levels of intercellular SA accumulation were typically observed at 24 or 48 hpi with Pst(avrRpt2), once extensive cell death had occurred.
The timing of maximal intercellular SA accumulation varied between experiments, potentially because the timing and strength of the HR can be affected by variations in humidity that occur even within growth chambers. SA accumulates in the intercellular space during ARR, where it is thought to function as an antimicrobial agent. The idea that SA might act as an antimicrobial agent in the intercellular space in other defense responses has not been examined; therefore, we measured both inter- and intracellular SA accumulation during ETI initiated by Pst(avrRpt2) and in response to virulent Pst (compatible interaction). Wild-type Col-0 accumulated intercellular SA in response to Pst(avrRpt2) in the same range as observed during ARR (153 to 400 ng ml^-1 IWF) [37,38], providing evidence for the importance of intercellular SA accumulation during ETI. Additionally, SA accumulation was reduced in rps2-201 mutants, indicating that the intercellular SA produced was specific to the RPS2-AvrRpt2 ETI pathway. In comparison, intercellular SA levels in iap1-1/eds5-5 inoculated with Pst(avrRpt2) were similar to the background levels observed in untreated and mock-inoculated controls, demonstrating that iap1-1/eds5-5 accumulates little SA during ETI, as would be expected of an eds5 mutant [22]. Although intercellular SA accumulation was reduced to background levels in iap1-1/eds5-5, RPS2-AvrRpt2-initiated ETI modestly reduced bacterial levels, suggesting that inter- and intracellular SA accumulation, as well as SA-independent constituents, contribute to ETI. During the response to virulent Pst over 48 hpi, iap1-1/eds5-5 accumulated low levels of inter- (free and total) and intracellular (free) SA, similar to untreated or mock-inoculated plants, demonstrating that both intra- and intercellular SA accumulation are reduced by the iap1-1/eds5-5 mutation. We also observed that total intracellular SA accumulation in Col-0 and iap1-1 was low at early times after inoculation with virulent Pst (12 and 24 hpi); however, by 48 hpi, total intracellular SA levels had increased to ~1000 ng gfw^-1 in iap1-1/eds5-5 and to ~1800 ng gfw^-1 in Col-0, similar to the SA levels induced by Pst(avrRpt2). In Col-0 responding to virulent Pst, both intracellular and intercellular free SA levels were similar to background levels (untreated or mock-inoculated plants), with the exception of a modest increase in intracellular free SA levels at 48 hpi. One explanation for these observations is that PTI-associated SA accumulation is suppressed by virulent Pst. Support for this hypothesis comes from two studies in which the Pseudomonas phytotoxin coronatine was shown to inhibit whole-leaf SA accumulation in Arabidopsis in response to virulent Pst and P. syringae pv. maculicola [51,52]. In addition, SA accumulates during PTI/basal resistance in response to the PAMP flg22 [36]. These investigations suggest that coronatine is important in suppressing SA accumulation during the PTI response, and that this produces a compatible environment for Pseudomonas infection. In this study we demonstrated that coronatine-producing Pst suppresses both intracellular and intercellular SA accumulation. These data support the idea that intercellular SA accumulation is an important component of the PTI response. Serrano et al. [28] speculate that eds5 mutants do not accumulate SA because the EDS5 MATE transporter is not functional, such that SA remains trapped in the chloroplast, leading to high SA levels and feedback inhibition of SA biosynthesis.
It is interesting to note that at the later 48 hpi time point, iap1-1/eds5-5 accumulated elevated levels of total intracellular SA (free + conjugated) in response to both virulent and avirulent Pst. Since little intracellular free SA accumulated at 48 hpi, the conjugated form of SA must be accumulating. We speculate that free SA is quickly converted to its conjugated form at 48 hpi with Pst, and therefore little free SA is present for feedback inhibition of SA biosynthesis, allowing intracellular conjugated SA levels to rise. Also of note, both iap1-1/eds5-5 and Col-0 were similarly susceptible to Pst even though iap1-1/eds5-5 accumulated ~500 ng gfw⁻¹ less intracellular total SA at 48 hpi compared to Col-0. Although free SA is generally regarded as the active form, studies on mutants in which enhanced disease susceptibility corresponds primarily with a reduced capacity to accumulate conjugated SA suggest that conjugated SA could be an important part of a successful defense response [71-74]. However, we speculate that by 48 hpi, suppression of PTI defense by Pst is waning, such that intracellular conjugated SA accumulates in wild-type Col-0, but it is too late to mount a successful defense, as bacteria have multiplied to high levels. It is also possible that the absence of accumulation of free SA in the intercellular space is responsible for the unsuccessful defense response of Col-0 and iap1-1/eds5-5 to virulent Pst.
Conclusions
By developing a whole-genome/next-generation sequencing technique to identify deletion mutations, we identified a new mutant allele of EDS5. This technique will be useful for other researchers, as it allows rapid identification of a deletion mutant by whole-genome sequencing once the mutation is roughly mapped. Our studies of iap1-1/eds5-5 revealed that SA accumulates in both the inter- and intracellular spaces during the RPS2-AvrRpt2-initiated ETI response. This suggests that SA contributes as both an intracellular signaling molecule and an antimicrobial agent in the intercellular space. We also demonstrated that intercellular SA accumulation is suppressed in a coronatine-dependent manner by virulent Pst. Therefore, SA may act as an antimicrobial agent in the intercellular space during PTI/basal resistance in Arabidopsis.
Trypan staining and electrolyte leakage assays
Plants were inoculated with Pst(avrRpt2) (10⁷ or 10⁶ cfu ml⁻¹) or mock-inoculated with 10 mM MgCl₂. For trypan staining, leaves were harvested at 24 hours post-inoculation (hpi) and immediately submersed in staining solution (0.02 g trypan blue, 8% phenol, 8% glycerol, 8% lactic acid, 8% water, 67% 95% ethanol). Leaves were boiled for 1 minute in the staining solution and left overnight, followed by destaining in 70% ethanol. Images were captured using a digital camera (Nikon DXM1200F) mounted on a Nikon Eclipse TE2000-S microscope at 10× magnification. For the first set of electrolyte leakage assays (Figure 1C), leaves were collected at 2 hpi, weighed, and rinsed in distilled water. 48 leaves per treatment were added to 80 ml of distilled water in triplicate and conductance was measured using a YSI Environmental model 556 conductivity meter. For the second set of electrolyte leakage assays (Figure 4A), leaf discs were collected at 1 hpi, rinsed in nanopure water for 1 hour, then transferred to vials containing 10 ml nanopure water (9 plants per treatment, 10 leaf discs per vial, 3 vials per treatment).
Conductance was measured periodically using a Jenway model 4070 portable conductivity meter.
IWF collection and SA measurement
IWF (intercellular washing fluid) collections were performed as described previously [19,38]. Briefly, Col-0 and iap1-1 plants (4 weeks old) were untreated, mock-inoculated (10 mM MgCl₂) or inoculated with 10⁶ cfu ml⁻¹ virulent Pst, avirulent Pst(avrRpt2), or coronatine-deficient Pst. Leaves were harvested at 12, 24, and 48 hpi, then surface-sterilized (50% ethanol and 0.05% bleach) and vacuum-infiltrated with sterile water. Leaves were then blotted dry and centrifuged at 1000 × g for 30 minutes in 50 ml syringes fitted into microcentrifuge tubes. The IWFs were filter-sterilized to remove Pst. ADPWH_lux is a non-pathogenic soil bacterium that has been modified to produce luciferase in proportion to the amount of SA present [42] and was used to measure free SA and total SA (free SA plus SA-glucosides) as described previously [43]. Briefly, ADPWH_lux was grown overnight in LB medium with shaking at 37°C and then diluted to an OD₆₀₀ of 0.4 with fresh LB, followed by incubation with IWFs or leaf tissue minus IWFs for 1 hour at 37°C in an opaque 96-well plate (Corning no. 3915). Luminescence was measured on a BioTek plate reader at 490 nm and used to calculate SA concentrations.
Whole-genome sequencing
DNA was isolated from leaf tissue of an iap1-1 plant using a modified phenol-chloroform extraction method. Briefly, tissue was ground in liquid nitrogen, incubated with DNA extraction buffer (200 mM Tris-HCl [pH 7.5], 250 mM NaCl, 25 mM EDTA, 0.5% SDS [w/v]), extracted with phenol-chloroform, and precipitated in isopropanol. The pellet was re-suspended and treated with RNase Cocktail (Life Technologies), followed by extraction with phenol-chloroform and again with chloroform alone. The DNA was then precipitated in ethanol. Library preparation was carried out according to the manufacturer's instructions (Nextera) and sequencing was performed on an Illumina HiSeq 1500 (high output mode) using approximately one third of a lane. Reads were aligned to the Arabidopsis Col-0 reference genome (TAIR 10) using BWA software [46].
RT-PCR
Leaf tissue was harvested, flash-frozen in liquid nitrogen, and stored at −80°C. RNA was isolated using Sigma TRI Reagent according to the manufacturer's instructions. Residual DNA was degraded using TURBO DNA-free (Life Technologies) prior to RNA quantification. First-strand cDNA synthesis was carried out using M-MLV Reverse Transcriptase (Life Technologies). PCR primers used to amplify the iap1-1 polymorphism were: 5′-ATGCTAATCAAATCCCAAAGATTGAC (F) and 5′-CAGCGAGTTCGATGGAGC (R), run for 29 cycles. Products were visualized on a 2% agarose gel stained with ethidium bromide.
Figure S1. Trypan blue staining for hypersensitive response cell death. Four-week-old Col-0 and iap1-1 were inoculated with 10 mM MgCl₂ (mock) or 10⁷ cfu ml⁻¹ Pseudomonas syringae pv. tomato (Pst) carrying avrRpt2. Leaves were stained with trypan blue at 24 hours post-inoculation. (TIF)
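To make the biosensor quantification described above concrete: since ADPWH_lux luminescence is proportional to SA, concentrations are read off a standard curve. The following R sketch illustrates one plausible way to do this; the standard and sample values, and the assumption of a linear working range, are ours and purely illustrative, not the authors' procedure.

# Hypothetical SA standard curve for the ADPWH_lux assay
std_sa     <- c(0, 50, 100, 200, 400)           # SA standards, ng ml^-1 (illustrative)
std_lum    <- c(120, 1550, 3020, 6100, 11900)   # luminescence readings (illustrative)
curve      <- lm(std_sa ~ std_lum)              # invert the curve: predict SA from luminescence
sample_lum <- data.frame(std_lum = c(2400, 5300, 9800))  # readings from IWF wells
predict(curve, newdata = sample_lum)            # estimated SA concentrations, ng ml^-1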
2017-04-02T14:12:05.482Z
2014-03-04T00:00:00.000
{ "year": 2014, "sha1": "7fe76fb99f296945ba3e8e426eba8ee922f715a8", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0088608&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "7fe76fb99f296945ba3e8e426eba8ee922f715a8", "s2fieldsofstudy": [ "Biology", "Environmental Science" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
73600234
pes2o/s2orc
v3-fos-license
Impact of climate and hydrochemistry on shape variation – a case study on Neotropical cytheroidean Ostracoda
Claudia Wrozyna, Thomas A. Neubauer, Juliane Meyer, Maria Ines F. Ramos, Werner E. Piller
1 Institute of Earth Sciences, NAWI Graz Geocenter, University of Graz, Graz, 8010, Austria
2 Department of Animal Ecology & Systematics, Justus Liebig University, Giessen, 35392, Germany
3 Naturalis Biodiversity Center, Leiden, 2300 RA, The Netherlands
4 Coordenação de Ciências da Terra e Ecologia, Museu Paraense Emílio Goeldi, 66077-830, Brazil
Introduction
Understanding how species respond to environmental change is crucial for their application as proxies for past climate fluctuations as well as for forecasting future dynamics and distributions of species. Morphological diversity represents a key character for the interpretation of faunal changes (Wagner and Erwin, 2006) and ecological shifts (Mahler et al., 2010), and stimulates discussions about speciation and extinction processes through time (e.g., Ciampaglio, 2004). Differences in shape and size among species have been shown to correlate with changes in environmental parameters, in particular with differences in temperature, across various clades (e.g., Loehr et al., 2010; Maan and Seehausen, 2011; Danner and Greenberg, 2015). Within freshwater invertebrates, ecophenotypic response has been documented for a variety of species, both recent and fossil (e.g., Hellberg et al., 2001; Zieritz and Aldridge, 2009; Inoue et al., 2013; Neubauer et al., 2013; Clewing et al., 2015).
Ostracods represent a popular proxy group for climate and ecosystem changes due to their occurrence in various habitats, ranging from most inland waters to marine and interstitial and even (semi-)terrestrial environments (e.g., Horne, 2004). Their distribution is controlled by ecological factors such as salinity, temperature, and ion composition of the ambient water (e.g., Ruiz et al., 2013). The study of ecophenotypic variation in response to environmental change (Anadón et al., 2002; Frenzel et al., 2012; Fürstenberg et al., 2015; van der Meeren et al., 2010) demonstrates another approach to using them for palaeoenvironmental studies. Due to their calcitic valves, they have an excellent fossil record and are utilized as palaeoenvironmental and biostratigraphic indicators (Anadón et al., 2002). A number of studies have shown that ornamentation, noding, sieve pore shape, and carapace size are linked to environmental factors, e.g., salinity, temperature, water depth and nutrient availability (van Harten, 1975; Yin et al., 1999; Majoran et al., 2000; van Harten, 2000; Anadón et al., 2002; Frenzel and Boomer, 2005; Medley et al., 2007; Marco-Barba et al., 2013; Meyer et al., 2016; Boomer et al., 2017). Especially with the rise of morphometric techniques, investigations also dealing with carapace shape variation in relation to environmental variables have increased (Yin et al., 1999; Baltanas et al., 2002; Baltanas et al., 2003; van der Meeren et al., 2010; Ramos et al., 2017; Grossi et al., 2017). Yet, the use of morphological data, even those based on morphometric analyses (Baltanas et al., 2002; Baltanas et al., 2003; van der Meeren et al., 2010; Grossi et al., 2017), has been restricted to either landmark-based or outline-based studies, which have rarely used a combination of both (e.g., Ramos et al., 2017). Few studies integrate geographic gradients and corresponding climate variables into their statistical analyses, or they
use only a reduced number of predictor variables. Moreover, shape–environment relationships are commonly identified based on simple linear regressions or qualitative observations on multivariate ordination methods. Here, we apply a thorough approach integrating data from carapace outline and surface details, as well as several climatic and hydrochemical variables, in order to investigate a potential link between morphology and environmental conditions. The subjects of this study are valves of the Neotropical cytheroidean ostracod species Cytheridella ilosvayi Daday, 1905. Wrozyna et al. (2016) and Wrozyna et al. (under review) recently demonstrated considerable biogeographical variation in valve morphology among Floridian, Mexican and Brazilian populations of that species. Morphological differences in populations of C. ilosvayi are discernible for both valves and appendages, for adult and juvenile (A-1 to A-3) stages and across sexes, suggesting that morphological divergence is a result of long-term biogeographic isolation (Wrozyna et al., submitted). While the morphological aspects of the biogeographic variability in C. ilosvayi are well understood, the causes for the regional differences have not been investigated. We hypothesize that the climatic differences between the regions inhabited by Cytheridella ilosvayi, and associated differences in hydrochemical regimes, have influenced valve morphology and finally led to biogeographically distinctive groups. We apply two-block partial least squares analyses and multiple regression analyses in order to test for covariation between the two sets of parameters (morphology, environment) and to identify the morphological characteristics and environmental variables that contribute most to the relationship.
A detailed list of the sampled localities is available in Supplementary Table 1. Only adult valves were utilized in this study, providing a sufficient number of left and right valves for both sexes. Right and left valves were investigated separately due to dimorphism in size and shape (Wrozyna et al., 2014). Beyond that, females and males were analyzed separately because a large part of within-valve variation has been shown to depend on sexual differences (such as the presence of brood pouches in females; Wrozyna et al., 2016).
Predictor variables
Altogether, 15 variables were included in the analyses. Simultaneously with water sampling, field variables (electrical conductivity, water temperature and pH) were measured in situ at all sample sites using a WTW multi-sensor probe (Multi 3420 Set C). Water samples were taken with plastic bottles, promptly filtrated using a syringe filter with a filter pore size of 0.45 µm, and stored in a freezer until analysis. Major ions were measured at the laboratory center of Joanneum Research in Graz by ion chromatography (Dionex ICS-3000). As the variables measured per sampling station only provide a snapshot of the local ecological conditions, the set of variables was supplemented with bioclimatic data from the WorldClim database (WorldClim, 2017), providing data on monthly to yearly scales. From the many variables available we included annual mean temperature [°C] (BIO1), mean diurnal range [°C] (BIO2; mean of monthly maximum–minimum temperature), temperature seasonality [°C] (BIO4; standard deviation × 100), annual precipitation [mm] (BIO12) and precipitation seasonality (BIO15; coefficient of variation), each with a spatial resolution of 30″. We chose not to include all bioclimatic variables because many of them are highly correlated, causing issues for the regressions. Bioclimatic variables and occurrence data were linked in ESRI ArcGIS v. 10.4 with the tool "Extract Multi Values to Points". Environmental variables are provided in Appendix S2.
Generalized Procrustes Analysis
Valve morphology was captured using a combination of landmarks and semilandmarks. Eight landmarks (LM) were chosen to characterize anterior pore tubuli (LM 1-5, type-I) and the dorsal dip point of the posterior curvature (LM 6, type-II), as well as to delimit the maximum anterior and posterior curvatures (LM 7-8, type-III). The carapace outline was defined by two curves between LM 7 and 8, each comprising 30 equidistantly spaced semilandmarks (see also Wrozyna et al., 2016). All points were set on digitized SEM images using the program TpsDig v. 2.17 (Rohlf, 2013). The sliders file determining the sliding direction of the semilandmarks during the Procrustes alignment was created in TpsUtil v. 1.58 (Rohlf, 2015). A generalized least-squares Procrustes analysis, computing the consensus configuration, partial warps and relative warps (RW), was performed in TpsRelw v. 1.65 (Rohlf, 2016). Thin-plate spline deformation grids were used to visualize deviations of selected configurations from the mean and to identify morphological characteristics that account for differences among geographic regions. For details on the method see Rohlf and Slice (1990) and Bookstein (1996). We ran preliminary analyses for each dataset to identify major outliers that may bias the morphometric analyses by overemphasizing particular directions in the morphospace (and associated morphological characteristics). Such distortion may severely impede sound interpretation of follow-up statistical analyses.
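The alignment itself was run in TpsRelw; for readers working in R, an equivalent generalized Procrustes analysis with sliding semilandmarks could be sketched with the 'geomorph' package as below. This is not the authors' pipeline; the file name and the semilandmark indices are hypothetical and would have to match the actual landmark configuration.

# Minimal geomorph sketch of GPA with sliding semilandmarks (assumptions noted above)
library(geomorph)
lands   <- readland.tps("valves.tps", specID = "ID")  # p x 2 x n landmark array
sliders <- define.sliders(9:68, write.file = FALSE)   # assume semilandmarks 9-68 lie in order along the outline
gpa     <- gpagen(lands, curves = sliders)            # generalized Procrustes alignment
pca     <- gm.prcomp(gpa$coords)                      # PCA of aligned shapes, analogous to relative warps
summary(pca)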
Statistics
In order to study the covariance between shape variation and environmental variables, two-block partial least-squares analyses (PLS) were performed using the software PAST 3.18 (Hammer et al., 2001). As a great advantage over other ordination methods such as principal components analysis, this method disregards within-block variation that may mask between-block covariance (Mitteroecker and Bookstein, 2008, 2011). Using all RWs in the PLS might severely bias the pattern because – contrary to their descending significance in terms of explaining shape variation – they would be treated equally by the analysis. Therefore, we restricted the morphological block to RW 1-20, which account for at least 98.6% of the total shape variation in all four datasets. The environmental variables were log10-transformed to constrain the orders of magnitude involved. PLS analyses were computed based on correlation matrices.
The PLS analysis provides an idea of the overall strength of the relationships between shape and environment. To identify the parameter(s) that affect specific morphological traits or combinations of traits, multiple regression analyses were conducted on selected RWs in the statistical environment R v. 3.3.2 (R Core Team, 2016). Only warps 1) along which biogeographic differentiation was observed, 2) with an amount of shape variation higher than 10% of the total variation, and 3) with PLS loading values higher than the mean loading value (based on absolute values) were considered. These selection criteria were chosen in order to prevent misinterpretation of seemingly strong relationships between shape and environmental variables. Since the environmental parameters are likely to be highly correlated, regression models including all variables might be strongly skewed and susceptible to misinterpretation. Therefore, we employed a stepwise selection of variables based on the variance inflation factor (VIF), which is an estimator of multicollinearity among variables (Quinn and Keough, 2002). As a rule of thumb, VIF values greater than ten indicate the presence of multicollinearity (Quinn and Keough, 2002); some authors even consider values above five evidence of collinearity (Heiberger and Holland, 2004). The applied function iteratively removes collinear variables by calculating the VIF of variables against each other (for the script, see Ijaz, 2013); the R package 'fmsb' v. 0.5.2 (Nakazawa, 2015) is required for this procedure. VIF values were calculated with the package 'HH' v. 3.1-32 (Heiberger, 2016). To enhance the models further, multiple regressions using backward stepwise selection by evaluation of the Akaike Information Criterion (AIC) were performed with the remaining set of factors, as sketched below. Normality of model residuals was tested with Shapiro-Wilk tests. In case normality was not achieved, residual distributions were assessed qualitatively using Q-Q plots; only if the majority of cases matched the expected distribution was a model considered significant. Finally, we used the R package 'hier.part' v. 1.0-4 (Walsh and Mac Nally, 2013) to evaluate the independent contribution of each predictor to the (reduced) models.
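A minimal R sketch of the variable-selection workflow just described (VIF-based elimination followed by backward AIC selection and a residual normality check) is given below. The objects 'rw1' (scores on a relative warp) and 'env' (a data frame of log10-transformed predictors) are hypothetical stand-ins, and vif() from the 'HH' package is assumed to accept a fitted lm, as used in the study.

# Iteratively drop the most collinear predictor until all VIF < 10 (rule of thumb in the text)
library(HH)
dat <- data.frame(rw1 = rw1, env)
repeat {
  fit <- lm(rw1 ~ ., data = dat)
  v   <- vif(fit)
  if (max(v) < 10) break
  dat[[names(which.max(v))]] <- NULL   # remove the worst offender and refit
}
reduced <- step(fit, direction = "backward")  # backward selection by AIC
shapiro.test(residuals(reduced))              # normality of model residuals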
Results
The relative warps analysis (RWA) yielded different results for males and females, while patterns were largely consistent within sexes (Fig. 2, 3). Along the first three relative warps, Mexican females have little overlap with Brazilian/Floridian ones. Only some of the specimens from Punta Laguna in northern Yucatan seem to be morphologically closer to the Floridian group and cluster apart in the analyses of both valves. Brazilian and Floridian individuals have a distinctly higher overlap and differentiate only little along RW 2. A clear differentiation within both clusters, like in the Mexican group, is lacking. Group differentiation in male valves shows the opposite pattern: Floridian specimens have little overlap with Brazilian ones in both valves along RW 1, while Mexican specimens are hardly separable from either group along any of the first three RWs. However, the differentiation between some Punta Laguna valves and the remaining Mexican carapaces along RW 1 is comparable to the patterns observed for females. Mexican and Brazilian males show slight biogeographic differentiation along RW 2 (left valves) and RW 3 (right valves), respectively. No clustering is observed for higher warps in either sex or valve.
Similar to the patterns posed by the scatter plots, the thin-plate splines indicate that shape variation along RWs is largely consistent within valves but differs slightly between sexes. (Here we discuss only axes along which biogeographic discrimination is observed. See Wrozyna et al., 2016 for within-group variation.) The most important morphological characteristic representing shape differences along RW 1, in both females and males and in right and left valves, is relative carapace length (Fig. 2, 3). However, the exact expression differs between sexes: valve outline in males varies between elongate-elliptical and short asymmetrical with a slightly inflated anterior part, and in females between elongate-elliptical and short asymmetrical with a distinctly inflated posterior region (i.e., brood pouch). In addition to outline differences, the position of the anteriormost pore conulus (LM 2) shifts in dorso-ventral direction consistently in both valves and sexes. In females, the position of the dorsal dip point of the posterior curvature (LM 6) also varies in dorso-ventral direction. Shape variation along RW 2 in females is similar to that along RW 1 but with a different combination of traits: negative scores correspond to elongate valves with an inflated posterior and slightly shifted LM 2 and LM 6 in dorso-ventral direction. In male Cytheridella, only left valves show weak biogeographic differentiation along RW 2, representing shape differences from elongate-elliptical to slightly asymmetrical with a higher dorsal margin, with the dorsal dip point of the posterior curvature (LM 6) shifted towards posterior. The only differentiation along RW 3 is for male right valves, corresponding mostly to shell elongation and a little to the relative positions of pore conuli. The relative warp scores of all datasets are provided in Supplementary Table 2.
The PLS analyses indicate relationships between morphological and environmental variables, yet with different results for males and females. The first PLS axis explains between 68.7% and 77.9% of the total variation, with values consistently higher for females (LV: 77.5%; RV: 77.9%) than for males (LV: 68.7%; RV: 71.5%). In all four analyses, Brazilian specimens are widely separated from Floridian/Mexican ones along PLS axis 1, corresponding to a clear differentiation along both environmental and morphological scores. Left valves of females and left and right valves of males of Brazilian specimens exhibit negative scores on both PLS axes corresponding to shape and environmental variables. Females display inverse distributions for Brazilian and Mexican specimens. Floridian and Mexican groups overlap little but consistently in all analyses, while the specimens from Florida tend to have smaller variation ranges than the Mexican groups (Fig. 4). Permutation tests indicate, however, that the PLS analyses are not significant for male valves (LV: P = 0.126, RV: P = 0.135). Yet, the low significance levels do not necessarily imply lacking relationships between shape and environment; rather, they reflect the difficulties in finding clear relationships in multifactorial analyses. In fact, the overall picture provided by the PLS might mask individual relationships between selected shape traits and environmental parameters, which is why a closer look using multiple regressions is required. Nonetheless, the PLS analyses are useful to examine the overall strength of the relationships, which seem to be stronger and clearer in females than in males. For PLS axis 2, only weak relationships between shape and environment are obtained for all four datasets, and none of them are significant (see Supplementary Table 3).
The loadings for morphological variables in the PLS analyses yield consistently high values for RW 1; RW 2 shows loading values higher than the mean (based on absolute values) in all analyses except male right valves; RW 3, in turn, contributes above average to variation in all cases but female right valves. Other warps were not considered because of their minor influence on shape variation (low loading values) or the lack of biogeographic separation. See Table 1 for a summary of the results. The following warps fulfil the selection criteria defined in the Methods section for consideration in the multiple regressions: RW 1 for all four datasets; RW 2 for female right and left valves; RW 3 for female left valves and male right and left valves. Hence, nine regression analyses were carried out. Shapiro-Wilk tests of model residuals indicate normality for four of the nine analyses (Table 1). Inspection of Q-Q plots showed, however, that in all models the majority of cases match the expected distributions, which is why the remaining models are still considered significant (see Supplementary Figure 1). Eight out of nine models are significant (P < 0.05); the model for RW 3 for male left valves is not (P = 0.074). Only a limited set of predictor variables is retained out of the original 15 variables in each model after elimination of collinear parameters and backward stepwise selection (see Supplementary Table 4). Seven parameters do not contribute to any model: Na⁺, Ca²⁺, Mg²⁺, HCO₃⁻, conductivity, mean annual temperature, and precipitation seasonality. Of the remaining factors, temperature seasonality is one of the most important predictors in almost all models, accounting for at least 28.7% in all models with RW 1.
Temperature seasonality is highest in Florida, closely followed by Brazil, and considerably lower in Mexico, reflecting the distinction between Mexican and Floridian/Brazilian populations along RW 1. Similarly, annual precipitation and the anions Cl⁻ and SO₄²⁻ contribute significantly to many models, corresponding to differences in the hydrological regimes. Less explanatory power is provided by pH, K⁺, water temperature, and mean diurnal temperature range. It is noteworthy that anions, represented by Cl⁻ and SO₄²⁻, are obviously much more important than cations.
Discussion
Variation in temperature seasonality, annual precipitation and anions (Cl⁻, SO₄²⁻) explains a large portion of shape variation in Cytheridella, which is mostly related to relative carapace length and outline shape. Narrow elongate shapes, such as those occurring in Mexico, correspond to relatively low seasonality and precipitation but high anion concentrations. Opposite conditions seem to favor the formation of the short, asymmetrical valves typical of specimens from Florida and Brazil. Secondary shape variations differentiating between elongated valves with a slightly wider posterior and short, symmetrical valves (i.e., RW 2) are attributed to higher and lower annual precipitation, respectively. The link between shape variation and environmental conditions is a well-studied branch of ostracodology, but studies have yielded quite contrasting results. Frequently identified ecological factors are salinity (Yin et al., 1999; Yin et al., 2001; Grossi et al., 2017) and hydrochemical regime, mirrored by Mg²⁺, Ca²⁺ and K⁺ contents (Ramos et al., 2017) or alkalinity and sulphate, respectively (van der Meeren et al., 2010). Morphological response to the same environmental factor may even differ between environments (e.g., Yin et al., 1999), complicating straightforward explanation models.
Potential environmental drivers of valve shape variation
The mechanisms that control the relationship between valve shape and environmental gradients in Cytheridella are not understood at present. Like many non-marine ostracods, Cytheridella is characterized by a benthic life style. Growth and proliferation of an individual and a population, respectively, might benefit from changes in carapace shape with respect to different habitats. For instance, a more elongated shape could be advantageous in more densely vegetated environments because of increased motility. For the present study, we sampled various habitats within each region, differing in vegetation cover and composition (Supplementary Table 1). If shape differences were indeed functional adaptations to varying habitat conditions, we would expect much higher morphological variability within each region and smaller differences between specimens from similar habitat types than shown by the analyses. We rather suppose that shape difference in Cytheridella ilosvayi has a physiological origin that mirrors the varying environmental conditions.
The geographical range of Cytheridella coincides with the Neotropical region, which spans a wide latitudinal range from ~30°N to ~30°S. This range involves a latitudinal decline in mean annual temperature, which mainly corresponds to differences in annual minimum temperature (Lewis, 1996). Florida and southern Brazil are characterized by higher annual temperature gradients compared to Mexico. Annual minimum and maximum temperatures range between 16°C and 30°C in Florida and between 10°C and 30°C in southern Brazil, respectively. In Mexico, minimum and maximum temperatures vary between 19°C and 33°C (Climate-Data.org, 2017). Temperature has a direct effect on other environmental parameters such as salinity and oxygenation of the water. Water temperature is one of the most important variables affecting metabolism, oxygen consumption, growth, molting and survival of crustaceans (Le Moullac and Haffner, 2000 and references therein). Increases in temperature can result in significantly shortened intermolt periods, higher molting rates (Roca and Wansard, 1997; Mezquita et al., 1999; Brylawski and Miller, 2006), increased growth increments (Martens, 1985; Iguchi and Ikeda, 2004) and reduction in maturation time (Pöckl, 1992). We expect that higher temperature seasonality induced prolonged molt cycles in populations of C. ilosvayi by extending intermolt periods during colder seasons. We hypothesize that the changed molt cycles affected calcification patterns and led to the observed differences in shape. Since the physiological processes involved in the secretion of ostracod valves are poorly known (and not at all for Cytheridella), this hypothesis cannot be tested at present. Precipitation causes declines in nutrients and promotes physical disturbance of the water column (Figueredo and Giani, 2009).
Moreover, changes in precipitation directly influence hydrochemical composition, input of sediments, organic components and contaminants, and lake level (Mortsch and Quinn, 1996; Whitehead et al., 2009). Indirect influences include, e.g., the control of aquatic plants, which represent important (micro)habitats and/or food sources (Lacoul and Freedman, 2006). The annual cycle of precipitation over most of South America is monsoon-like, with great contrasts between winter and summer (Grimm et al., 2007). The peak rainy season in the Brazilian sample region is the austral winter. The rainfall is caused by frontal penetration associated with migratory extratropical cyclones (Grimm et al., 1998). The amount of rainfall in Yucatan is associated with the seasonal migration of the Intertropical Convergence Zone and less with spatially oriented tropical convective activity (e.g., Hodell et al., 2008). Florida, in particular Southern Florida, where most of our samples derive from, receives maximum precipitation during the northern hemisphere summer from convectional and tropical storms (Schmidt et al., 2001). Annual precipitation is on average higher in the sampled areas in Brazil, with 1396-1492 mm per year, than in Florida and Yucatan, with 1185-1430 mm and 1125-1359 mm, respectively. Since the annual amounts of the regions are very similar, it might be more plausible that precipitation seasonality influences carapace shape of Cytheridella through seasonally restricted nutrient inputs or changes in hydrochemistry. Annual precipitation should therefore be considered with caution, since it is difficult to deduce a causal relationship with carapace shape. The ionic composition of the host water is vital for calcification and growth rates of ostracods (Mezquita et al., 1999). The relationship between hydrochemistry and phenotypic variability is poorly understood, however. A study by Kim et al. (2015) shows that increased levels of pH account for decreased carapace growth rates, i.e., prolonged intermolt periods, and smaller carapaces. Carapace shape differences have moreover been associated with changes in Ca²⁺, Mg²⁺ and pH (Ramos et al., 2017). Our analyses, however, revealed correlations neither with ions related to the formation of carbonate, such as HCO₃⁻, Ca²⁺, and Mg²⁺, nor with pH. Only chloride and sulfate contents significantly correlate with carapace shape variation.
Natural sources of Cl⁻ in freshwater are marine sprays, which transfer NaCl into the atmosphere, from where it is either transported as aerosol by wind or washed out by precipitation, and the weathering of rocks. Additionally, large amounts of chloride derive anthropogenically from farming and waste water production (Müller and Gächter, 2012). Sulfate can derive from runoff from mining and agricultural areas, mobilization from pyrite deposits by oxygen intrusion during desiccation, and weathering of rocks containing sulphur (Holmer and Storkholm, 2001; Lamers et al., 2002). Sulfate contents in groundwaters and surface waters result from dissolution of gypsum and anhydrite occurrences (Perry et al., 2002) and from mixing with seawater (Sacks et al., 1995). In Yucatan, a gypsum-rich stratigraphic unit occurs, providing a solution-enhanced subsurface drainage pathway for a broad region extending along the eastern coast and from east to west in the southern part (Perry et al., 2009). The chloride content of groundwater there is the result of mixing with seawater (Mondal et al., 2010). Additionally, a Cl⁻ gradient extends from southeast to northwest, providing generally higher chloride contents (Perry et al., 2009). Concentration gradients of SO₄²⁻ and Cl⁻ in Florida occur from inland to coastal areas as well as with depth (Sacks et al., 1995), explaining the relatively higher amounts of chloride and sulfate. The comparably low values for the south Brazilian sampling locations are not surprising, given that such coastal water bodies are often fed by groundwater (Santos et al., 2008) that is dominated by bicarbonate waters and low chloride and sulfate contents (Gianesella-Galvão and Arcifa, 1988; Viero et al., 2009). The detected relationships between morphotypes and chloride and sulfate contents, respectively, could thus mirror the hydrochemical compositions resulting from the different hydrogeological conditions of the regions. Van der Meeren et al. (2010) found ostracod valve shape variability to be significantly correlated with the ratio between alkalinity and sulfate. As the ratio was inversely related to solute concentration, the authors hypothesized that carapace shape may be linked to changes in the lake water balance or relative climatic moisture, or changes in the sources of solutes delivered to the environment. Varying anionic composition has also been considered to affect osmoregulation and calcification (Mezquita et al., 1999). As hyperosmotic organisms, freshwater ostracods are obliged to pump ions inwards (mainly Na⁺ and Cl⁻) and water outwards to maintain a stable internal ionic concentration higher than that of the ambient water (Weihrauch et al., 2004). Chloride is obtained from the environment through an HCO₃⁻/Cl⁻ antiport pump. The organism needs to precipitate calcite but also to pump HCO₃⁻ outwards to maintain the internal Cl⁻ concentration (Mezquita et al., 1999). These authors assumed that even small genetic differences lead to varied ecophysiological responses to temperature and water chemistry, which may be a key factor for the explanation of different biogeographical patterns of non-marine ostracods. Especially the trade-off between ionic regulation and calcification is considered to play a key role in ostracod speciation (Mezquita et al., 1999).
One of the best-studied phenomena in ostracods is variable noding (hollow outward flexions on the lateral surfaces of the valves) in Cyprideis (e.g., Vesper, 1975; van Harten, 2000). A connection between node formation and salinity was noted early, but the reported salinity limits are partly contradictory (Keyser and Aladin, 2004). Frenzel et al. (2012) deduced from a combination of mesocosm cultures and field studies that noding in Cyprideis torosa valves is pathological, caused by osmotic problems under lower salinities and lacking Ca²⁺ during molting. From the same species it is known that increasing salinity corresponds with a decreasing proportion of rounded sieve pores on the valves (Frenzel et al., 2017). Considering that the variability of discrete valve traits such as noding or sieve pore shape is related to complex physiological processes, we hypothesize that the relationship between carapace shape and ionic composition detected by our analyses could be the result of a complex interplay of different physiological processes affecting valve calcification. Understanding the physiological processes involved requires more detailed studies.
Genetic diversity or ecophenotypic plasticity?
Phenotypic variation in ostracods is considered to reflect either genotypic or ecophenotypic variability or a combination of both (Martens et al., 1998; Yin et al., 1999; Anadón et al., 2002; Frenzel and Boomer, 2005; Boomer et al., 2017; Grossi et al., 2017). A recent study on valve outline variability of a non-marine ostracod demonstrated that differences in carapace shape do not correspond to genetic clades (Koenders et al., 2016). However, caution is advised when comparing patterns among species, since different species react differently and have different potentials for ecophenotypic variation (Anadón et al., 2002; Frenzel and Boomer, 2005). The relationship between genotype and environment might differ among species, geographical regions and through time (see, e.g., Sanchez-Gonzalez et al., 2004; Koenders et al., 2016). Our results clearly imply that morphological disparity in Cytheridella is controlled by environmental factors. However, the distribution and the variation ranges of the regional clusters reveal some opposing implications. For instance, a part of the Mexican populations comprises specimens with similarly shortened valves as are found in the Floridian group. Both shortened and elongated morphotypes co-occur in one lake (i.e., Punta Laguna) (Wrozyna et al., under review). On the one hand, this co-occurrence could suggest the presence of microhabitats with specific environmental conditions, posing differential impact on valve calcification in the very same ecosystem. On the other hand, this discrepancy might be considered evidence for genetic differentiation. An integrated study combining genetic and morphometric data is required to further explore this case.
Conclusion
The comparison of our results with a large number of previous studies illustrates the intricate nature of ecophenotypic response to varying climatic and ecological conditions in freshwater ostracods. Shape variation in Cytheridella, mostly related to relative carapace length and outline shape, is mainly explained by temperature seasonality, annual precipitation and chloride and sulfate compositions. Increased temperature seasonality, characteristic of Florida and south Brazil, is considered to account for slower growth rates during colder months and may have triggered the development of shortened valves with well-developed brood pouches. We propose that differences in chloride and sulfate concentrations, which are related to fluctuations in precipitation, might have affected valve development via controlling osmoregulation and carapace calcification. These explanation models are, however, tentative, as physiological studies on the influence of changing ecological conditions on non-marine ostracods are still scarce. A more detailed picture will require mesocosm experiments and field observations. Temperature per se, salinity (expressed as electrical conductivity) and pH have surprisingly little or no effect on shape variation in C. ilosvayi, although these factors have been discussed as important drivers of ostracod ecophenotypy, variably affecting size, ornamentation and shape. The discrepancies in explanation models suggest that environmental predictors for valve shape are not consistent across non-marine ostracods. The nature of the phenotype–environment relationship likely depends on the choice of the model taxon and ecosystem. On a larger scale, this lack of a general pattern complicates the reconstruction of paleoenvironments based on ecophenotypic variation.
Figure 2: Relative Warps Analyses of left and right valves of females for the first three warps and the associated thin-plate splines at minimum and maximum scores. Colors refer to the different regions: blue - Florida, green - Mexico, pink - Brazil.
Figure 3: Relative Warps Analyses of left and right valves of males for the first three warps and the associated thin-plate splines at minimum and maximum scores. Colors refer to the different regions: blue - Florida, green - Mexico, pink - Brazil.
Figure 4: First axis of the PLS analysis of carapace shape and environmental variables. Colors refer to the different regions: blue - Florida, green - Mexico, pink - Brazil.
2018-12-21T20:57:15.227Z
2017-01-01T00:00:00.000
{ "year": 2017, "sha1": "cdf5d46ce6321962e41980d8d1cbbec3564d5d40", "oa_license": "CCBY", "oa_url": "https://bg.copernicus.org/articles/15/5489/2018/bg-15-5489-2018.pdf", "oa_status": "GREEN", "pdf_src": "ScienceParseMerged", "pdf_hash": "a04a85e4024289807b0a7caa30e1559b984d63f6", "s2fieldsofstudy": [ "Environmental Science", "Biology", "Geology" ], "extfieldsofstudy": [] }
1324108
pes2o/s2orc
v3-fos-license
A Network View on Parkinson's Disease
Network-based systems biology tools, including Pathway Studio 9.0, were used to identify Parkinson's disease (PD) critical molecular players, drug targets, and underlying biological processes. Utilizing several microarray gene expression datasets, biomolecular networks such as direct interaction, shortest path, and microRNA regulatory networks were constructed and analyzed for the disease conditions. Network topology analysis of node connectivity and centrality, in combination with the guilt-by-association rule, revealed 17 novel genes of potential interest for PD. Seven new microRNAs (miR-132, miR-133a1, miR-181-1, miR-182, miR-218-1, miR-29a, and miR-330) related to Parkinson's disease were identified, along with more microRNA-targeted genes of interest such as RIMS3, SEMA6D and SYNJ1. DAVID and IPA enrichment analysis of KEGG and canonical pathways provided valuable mechanistic information, emphasizing among others the role of the chemokine signaling, adherens junction, and regulation of actin cytoskeleton pathways. Several routes for possible disease initiation and neuroprotection mechanisms triggered via extracellular ligands such as CX3CL1, SEMA6D and IL12B were thus uncovered, and a dual regulatory system of integrated transcription factor and microRNA mechanisms was detected.
Introduction
James Parkinson was the first to describe this disease in adults, in the year 1817. In his essay entitled "An Essay on the Shaking Palsy" he described the disease as beginning with slow, progressive involuntary tremors, followed by difficulty in walking, swallowing and speech [1]. Apart from motor symptoms, Parkinson's disease patients experience significant non-motor symptoms including mood and cognition decline, sleep disturbances, and other autonomic dysfunctions [2]. With the help of modern-day molecular and cellular research advancements, progressive degeneration of the dopaminergic (DA) neurons of the Substantia nigra (SN) brain region was found in Parkinson's disease brains [3], in addition to the accumulation of misfolded protein aggregates. Both environmental factors and genetic mutations are suspected to cause PD [4,5]. One of the distinctive features of Parkinson's disease is severe damage to the nigrostriatal dopaminergic system. Neurotoxic agents such as manganese and 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP) are suspected to cause this type of neuronal damage. MPTP-induced Parkinson's disease animal models have been extensively used to study the neurodegeneration process as well as to identify potential therapeutic drug targets [6]. The soluble fractalkine (CX3CL1, chemokine ligand 1) isoform was shown to reduce impairment of motor coordination, decrease dopaminergic neuron loss, and ameliorate microglial (brain macrophage) activation and proinflammatory cytokine release resulting from MPTP exposure [7]. It was long believed that Parkinson's disease etiology is sporadic (not genetically inherited) in nature. However, a small percentage of PD patients are now known to inherit gene mutations. Genes including ATP13A2, DJ-1, GIGYF2, HTRA2, LRRK2, PARK2 (parkin), PINK1, SNCA and UCHL1 have been associated with either autosomal dominant or recessive forms of Parkinson's disease [5]. Of the listed genes, SNCA (α-synuclein or α-syn) is critical to the pathogenesis of the early-onset, rare familial form of PD.
Insoluble α-syn fibrils were discovered in the protein aggregates called Lewy bodies (LBs), the hallmark pathological characteristic of Parkinson's disease. The aggregation and accumulation of abnormal α-syn in dopaminergic neurons have been postulated to be responsible for the neurodegeneration that ultimately leads to cell death [8,9]. Synucleins have also been found in the amyloid plaques of Alzheimer's disease brains. In general, alpha-synuclein is highly expressed in the brain at presynaptic terminals, particularly in the neocortex, hippocampus, striatum, thalamus, and cerebellum. Synucleins function as molecular chaperones and interact with many proteins, thereby modifying their cellular activity. Due to this versatile interaction behavior, mutant alpha-synuclein has been implicated in the deregulation of many biological processes including oxidation, neuroinflammation, mitochondrial function, ubiquitination, etc. [3,10-12]. Figure 1 depicts the various genes already implicated in Parkinson's disease along with the different deregulated biological processes caused by the several abnormal protein activities. To date, many genetic modifiers of PD and their roles in PD pathogenesis have been described [13-17]. Some of these genes relate to neuronal growth and neuroprotective mechanisms in Parkinson's disease. FGFs (fibroblast growth factors) have potent neurotrophic properties for dopaminergic neurons [18]. They promote DA neuron development and neurite outgrowth, rescue damaged DA neurons after different toxic insults, and prevent apoptosis. Overexpression of L1CAM (L1 cell adhesion molecule) enhances the survival of imperiled endogenous dopaminergic neurons in the Substantia nigra [19]. RAB3A (a member of the RAS oncogene family) has been shown to suppress α-syn toxicity in neuronal models of PD [20]. Fractalkines produced by neurons suppress the activation of microglia and play a neuroprotective role in dopaminergic lesions induced by 6-OHDA (a synthetic neurotoxic compound) [21]. In general, metallothioneins (cysteine-rich, heavy metal-binding protein molecules) have been considered 'defensive proteins' with a role in neuroprotection. Metallothioneins 1 and 2 (MT1F, MT2A) have been shown to scavenge reactive oxygen species and free radicals in the central nervous system [22]. Other genes have been implicated in PD pathogenesis. Neuroinflammation is suspected to play a major role in Parkinson's disease progression. MAPK signaling pathways contribute to neuroinflammatory responses and neuronal death triggered by synuclein-alpha aggregates or by functional deficiencies in the parkin or DJ-1 genes in the pathogenesis of PD [23]. RNF11 (ring finger protein 11) has been suggested to play a major role in Parkinson's disease pathology, since it was found highly enriched in SN dopaminergic neurons and co-localizes with Lewy bodies (abnormal protein aggregates) in PD brains [24]. An earlier study by Galvin et al. (1999) showed that β- and γ-synuclein are associated with hippocampal axon pathology in Parkinson's disease and dementia with Lewy bodies [25]. Recent genome-wide studies have found that mutations in at least 13 PARK loci and related genes increase both early- and late-onset PD susceptibility [15,26,27]. Genome-wide approaches have also been used to identify microRNA–target mRNA interactions in the PD domain. MicroRNAs (miRNAs) are a class of small RNAs (~22 nucleotides) that act as post-transcriptional regulators of gene expression by binding to complementary sequences in target mRNAs.
In recent years, miRNAs have emerged as potential drug targets in a variety of diseases including infections, metabolic disorders and inflammation [28]. A recent genome-wide miRNA profiling study for Parkinson's disease reported several miRNAs to be differentially expressed in PD blood samples. The hundreds of genes predicted to be targeted by these miRNAs belong to various biological pathways, including synaptic long-term potentiation, semaphorin signaling in neurons and the protein ubiquitination pathway, many of which were previously found deregulated in the Parkinson's disease mechanism [29]. Even though some new treatment options are available to PD patients, oral administration of levodopa (a precursor of dopamine) has been the gold-standard medication for Parkinson's disease. However, prolonged use of levodopa increases the risk of developing levodopa-induced dyskinesias (involuntary movements) [30,31]. Recently, deep brain stimulation (DBS) has been offered as a secondary treatment option in Parkinson's disease where the benefits of medication have failed or diminished. DBS therapy has been shown to increase the neuron firing rate and blood flow, and to promote neurotransmitter release as well as to stimulate neurogenesis. Although deep brain stimulation improves the motor symptoms of Parkinson's disease, it is a serious surgical intervention with major side effects of infection and intracranial hemorrhage, including the risk of death [32]. In our study we construct a variety of biomolecular networks proceeding from several gene expression datasets covering different areas of the brain affected by Parkinson's disease. Two such sets were reported by Moran et al. in 2006, who provided a whole-genome analysis of the Substantia nigra (SN), found considerable differences in gene expression compared to controls, reported several new genes that map to PARK loci, and identified 570 "priority genes" after using the Benjamini-Hochberg FDR correction [33]. Two years later, the same group published a network-based analysis based on Pathway Studio's ResNet database version 5.0. Several direct interaction networks were constructed for the interactions between priority and known PD genes. Cancer, diabetes and inflammation disease conditions were associated with the top up-regulated priority genes. Another set was published by Zhang et al. in 2005 [34], highlighting some of the deregulated genes responsible for either disease aggravation (MKNK2) or neuroprotection (HSBP1, SMA5, and FGF13). Deregulation was noticed in various genes belonging to the metallothionein group and the heat shock protein group. These patterns of multiple molecular process deregulations were found across the different brain regions studied. Another expression pattern discovered supports the hypothesis of ubiquitin/proteasome system (UPS) dysfunction in Parkinson's disease. A decrease in Complex I activity has also been found, reinforcing the suspected mitochondrial deregulation in PD. With the current advancement of different "omics" technologies along with effective in-silico testing options, finding successful molecular therapeutic targets for Parkinson's disease seems much closer than before. Along this avenue, the current paper presents a comprehensive network-based analysis of Parkinson's disease (PD) related microarray datasets.
Helped by the latest accumulated knowledge of gene/protein interactions and sophisticated software for network analysis, we were able to expand upon the previous analyses of this disease paradigm, its underlying cellular mechanisms and critical molecular players, as well as to identify novel drug targets. This research work on Parkinson's disease is part of a broader network-based data analysis of three neurodegenerative disorders (NDDs) including Alzheimer's (AD) and Huntington's disease (HD), with the final goal of identifying unified underlying molecular mechanisms of these three devastating NDDs. Manuscripts outlining our research findings on AD and HD, including the unified molecular mechanisms of NDDs, are in preparation and will be submitted for publication subsequently.
Methods and Data
The workflow followed in this study is illustrated in Figure 2. DNA microarray is a powerful technology that provides a high-throughput and detailed view of the entire genome and transcriptome of an organism by measuring relative mRNA abundance intensity. Due to their ready availability, high volume capacity and parallel testing, microarrays have dramatically accelerated many types of molecular biology investigation. A known limitation of using microarrays is that mRNA level does not necessarily correlate with the functional protein level in the cell. Also, post-translational modifications essential for determining protein function are not captured on a DNA microarray. However, these limitations can be partially overcome by careful handling of arrays, probe selection and repeat experiments. Moreover, microarray assays are inexpensive.
Microarray gene expression data of post-mortem brain tissue samples from diseased and control conditions were used. The three Affymetrix GeneChip sets used were the GSE8397 U133A and U133B (http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE8397) and GSE20295 U133A (http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE20295) arrays. This specific selection was influenced by our extended research plan to search for the unified underlying mechanism of neurodegenerative diseases. Only Affymetrix post-mortem datasets were found to cover the three most characteristic neurodegenerative diseases – Parkinson's, Alzheimer's and Huntington's. The samples were initially selected after careful review of the cases' neuropsychological and/or neuropathological data, and matched by age and sex. The control subjects had no known neurodegenerative disease history. The GSE8397 arrays included 15 cases and 8 controls, with male to female ratios of 9:6 and 6:2, respectively. The mean age of the cases was 80±5.7, that of the controls 70.6±12.5. The brain tissues/regions involved were Superior Frontal Gyrus (SFG), Medial Substantia Nigra (MSN) and Lateral Substantia Nigra (LSN). The GSE20295 array has an equal number of 15 cases and controls. The male to female ratios for the two groups were 9:6 and 10:5, while the mean ages were 76.7±6.2 and 71.2±11.1, respectively. Brodmann Area 9 (BA9), Putamen (PT) and Substantia Nigra (SN) were the brain tissues/regions involved. The microarray data analysis was focused on genes differentially expressed across different tissues. For consistency between the selected datasets, the latter were subjected to the same techniques for preprocessing, normalizing and post-normalizing.
Bioconductor, software for the analysis and comprehension of genomic data based on the R programming language [36] (http://www.bioconductor.org/), was used through in-house R scripts. The raw microarray CEL files were downloaded from the GEO/ArrayExpress databases, and microarray chip quality was assessed using arrayQualityMetrics [37]. More specifically, GeneChip reproducibility was assessed, the signal-to-noise ratio was determined, and no extreme outliers were detected. Relevant quality assessment figures/plots were obtained. All microarray expression datasets were normalized to correct for systematic differences, due to sample preparation, batch processing, etc., between genes or arrays. A robust multi-array average (RMA) expression measure was used [38], which consists of three steps: background correction, quantile normalization (each performed at the individual probe level), and a robust linear model fit using median polish (log-transformed intensities at the probeset level). The standard RMA approach was utilized to enable more direct comparisons with other similar research results. The differential gene expression changes were statistically evaluated by the empirical Bayes (eBayes) method [39] from the limma Bioconductor package. Probe-sets with p-values < 0.05 were considered significantly differentially expressed genes (SDEGs). R source code for the statistical analysis of such microarray gene expression datasets, including graphical output of the differentially expressed genes, can be obtained from the authors by request. The microarray datasets were subjected to the same statistical procedures given above, and lists of significantly differentially expressed genes, called "seed genes", were generated for each dataset. The lists generated from the GSE8397 dataset were denoted SFG, MSN and LSN, for the three types of brain tissue samples: superior frontal gyrus, medial and lateral Substantia nigra, respectively. In addition, differential gene expression changes found between control and PD cases irrespective of tissue type were denoted "Diagnosis". An overlap of 414 seed genes was found between the four sets of significantly differentially expressed genes (SDEGs), as shown in Figure 3a. In a similar way, an overlap of 225 seed genes was found in the GSE8397 HG-U133B microarray gene expression dataset (Figure 3b). Altogether, 631 seed genes were found after removing duplicates in the GSE8397 U133A and U133B datasets. Correspondingly, using the GSE20295 HG-U133A microarray dataset (Figure 3c), another four sets of seed genes, namely diagnosis, BA9, PT and SN (for the tissue samples used), were generated, and an overlap of 110 genes was considered SDEGs (p-values < 0.05). Finally, combining the three Parkinson's microarray datasets (GSE8397 HG-U133A and B, and GSE20295), we found a total of 719 genes (p-values < 0.05) to be significantly differentially expressed. The p-values shown above were obtained with a paired t-test without correction for multiple comparisons. Due to the specificity of the post-mortem expression datasets, no statistically significantly expressed genes were found after Bonferroni correction, while with the less stringent Benjamini-Hochberg correction the number of SDEGs was not large enough to allow for meaningful analysis. However, to compensate partially for not taking into account the probe correlations, we chose to consider a more stringent cut-off of 0.01 for the paired t-test p-value.
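To make the pipeline concrete, a minimal R/Bioconductor sketch of the RMA and eBayes steps described above is given below. The directory name and group sizes are filled in for the GSE8397 example and should be treated as assumptions, not the authors' exact in-house script.

# RMA normalization and eBayes moderated testing (sketch, not the original code)
library(affy)
library(limma)
raw    <- ReadAffy(celfile.path = "GSE8397_CEL")   # raw CEL files downloaded from GEO
eset   <- rma(raw)                                 # background correction, quantile normalization,
                                                   # median-polish summarization (log2 scale)
groups <- factor(c(rep("control", 8), rep("PD", 15)))
design <- model.matrix(~ groups)
fit    <- eBayes(lmFit(exprs(eset), design))       # empirical Bayes moderated statistics
top    <- topTable(fit, coef = 2, number = Inf)
seed   <- rownames(top[top$P.Value < 0.01, ])      # "seed genes" at the stricter cut-off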
With the new cut-off, the total number of "seed genes" for Parkinson's disease was reduced from 719 to 267 (see Supplementary Table S1 for details). For constructing and analyzing networks relevant to neurodegenerative diseases, we selected the Pathway Studio 9.0 software package [40] (http://www.elsevier.com/online-tools/pathwaystudio). It offers options to construct various kinds of networks, such as direct interaction, shortest-path, and common targets and regulators of pairs or multiple genes, among others. The molecular interaction data used in the study were supplied by the ResNet 9.0 database (released October 15, 2011), provided jointly with the software. It covers human, mouse and rat proteins. The database is compiled using MedScan technology from over 20 million NCBI PubMed abstracts and over 880,000 full-text articles as of May 27, 2011. Currently the database covers 125,342 entities, such as cell processes, complexes, diseases, functional classes, treatments and small molecules, including over 110,000 genes/proteins. It offers over a million interactions, such as binding, chemical reaction, direct regulation, expression, miRNA regulation, molecular synthesis, molecular transport, promoter binding, protein modification and regulation, as well as information about almost 5,600 custom-built cell-process, metabolic and signaling pathways. In this study, we constructed direct interaction (DI) and shortest-path (SP) networks to analyze interactions between the SDEGs and with connecting genes/proteins that could be of interest in the neurodegeneration process. Only direct regulatory interactions of five different types were used, including, among others, promoter binding, protein modification and miRNA regulation. By applying the shortest-path (SP) network strategy to the list of SDEGs, we were able to identify connecting genes/proteins that might contribute to the neurodegenerative process but have not so far been related to it. This approach is based on the inference that genes/proteins with well-defined biological functions, when interacting with other genes/proteins known to be of importance for a given disease such as Parkinson's, have a higher probability of sharing that function than those selected at random (guilt-by-association). One limitation of the shortest-path network approach is that it can sometimes bring in a large number of intermediary nodes in order to obtain a unified network. Such a huge network is not only impractical for further analysis, but it also diminishes the importance of the seed genes in the selected scenario. Thus, care was taken to reduce the number of connecting nodes in the shortest-path network, producing compact shortest-path networks. This was accomplished by setting up a cut-off rule to include only seed genes with a large number (≥ 25) of neighbors in the Pathway Studio ResNet 9.0 database, thus focusing on genes having a better chance of being connected to known Parkinson's disease genes. The 267 seed genes were thus reduced to 105 genes, and the ratio of connecting to seed genes/proteins ranged from 1.5:1 to 2:1 for all the datasets. The construction of the compacted SP network was finalized by adding a few generic genes without which some of the genes of interest would have remained unconnected. Special attention in the network analysis was paid to identifying the key players: nodes with high network topology scores of node degree (local connectivity), closeness centrality (network monitoring) and betweenness centrality (traffic influence) [41].
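The ResNet interaction data behind Pathway Studio are proprietary, but the pruning-plus-shortest-path construction described above can be sketched generically with networkx; the toy interaction graph, the ≥ 25-neighbor cut-off argument and the helper names below are illustrative assumptions, not the authors' code.

```python
import itertools
import networkx as nx

def compact_shortest_path_network(interactions, seeds, min_neighbors=25):
    """Build a compact SP network in the spirit described in the text.

    interactions: iterable of (gene_a, gene_b) direct regulatory edges.
    seeds: list of significantly differentially expressed genes.
    Keeps only seeds with >= min_neighbors interactors, then unions the
    shortest paths between every retained seed pair, pulling in the
    connecting (non-seed) genes along the way.
    """
    g = nx.Graph(interactions)
    kept = [s for s in seeds if s in g and g.degree(s) >= min_neighbors]
    sp_net = nx.Graph()
    for a, b in itertools.combinations(kept, 2):
        try:
            path = nx.shortest_path(g, a, b)
        except nx.NetworkXNoPath:
            continue  # unconnected pairs; generic genes can be added later
        nx.add_path(sp_net, path)
    connecting = set(sp_net.nodes) - set(kept)
    return sp_net, kept, connecting
```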
The calculation of these topological descriptors was executed with the Pajek software package [42]. Nodes with such favorable topological characteristics, along with biological/molecular functions relevant to the neurodegenerative process, were considered in two categories: "already known PD genes" and "genes of interest for PD". The distinction was made by using sources such as the Online Mendelian Inheritance in Man (OMIM) database (http://omim.org/), NCBI's PubMed database (http://www.ncbi.nlm.nih.gov/pubmed.com), the MalaCards database (http://malacards.org/), and Google searches for the latest publications (http://www.google.com). Each of these two categories was further divided into two subcategories: genes found among the significantly differentially expressed genes (SDEGs) and genes emerging from the connecting proteins in the shortest-path and common-regulator networks. Differential gene expression was analyzed through complex regulatory networks that are controlled by two types of regulators: transcription factors (TFs) and microRNAs (miRNAs). In order to identify the microRNAs that target our seed genes, we constructed a shortest-path network with only the miRNA-regulation type of interactions, using Pathway Studio's ResNet 9.0 database. Then, in order to construct a miRNA regulatory network, we used the direct interaction network option in Pathway Studio, utilizing the seed genes and the corresponding miRNAs identified in the earlier step. We identified many microRNA regulations of our seed genes, which will be discussed in detail in the following sections. The microRNA regulatory network also revealed an integrated regulation of the neurodegeneration process by both transcription factors and microRNAs. However, the miRNA regulatory analysis should be interpreted with some caution, because currently a high percentage of the miRNA-mRNA interactions in the Pathway Studio ResNet 9.0 database are based on predictions rather than experimental validation. Gene Ontology (GO), an expert-curated database, assigns a list of genes to various biologically meaningful categories such as biological process, molecular function, and cellular component. p-values are used to rank the significantly modulated genes within GO categories. We used the Database for Annotation, Visualization and Integrated Discovery (DAVID) [43][44][45], which provides biological functional interpretation of large lists of genes derived from genomic studies such as microarray and proteomics experiments. Core analysis in Ingenuity's IPA (Ingenuity Systems, www.ingenuity.com) and Pathway Enrichment Analysis in Pathway Studio were then applied to identify canonical pathways enriched in Parkinson's disease, and the genes from the lists of SDEGs and network-generated lists that take part in the enriched pathways. The results from the DAVID analysis were examined in an attempt to characterize the integrated molecular mechanisms involved in the neurodegeneration process. The output includes those GO categories and KEGG pathways that are enriched in a given list of genes. The Kyoto Encyclopedia of Genes and Genomes (KEGG) is a basic database resource for understanding high-level functions of biological systems from molecular-level information, especially large-scale molecular datasets generated by genome sequencing and other high-throughput experimental technologies (http://www.genome.jp/kegg/) [46].
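The three topology scores used throughout the analysis (node degree, closeness and betweenness centrality) were computed with Pajek; an equivalent computation can be sketched with networkx as below, where the example graph and the top-k ranking are illustrative assumptions.

```python
import networkx as nx

def rank_key_players(g, top_k=10):
    """Rank nodes by the three topology scores used in the text:
    degree (local connectivity), closeness (network monitoring)
    and betweenness (traffic influence)."""
    scores = {
        "degree": dict(g.degree()),
        "closeness": nx.closeness_centrality(g),
        "betweenness": nx.betweenness_centrality(g),
    }
    return {
        name: sorted(vals, key=vals.get, reverse=True)[:top_k]
        for name, vals in scores.items()
    }

# Toy usage on a small stand-in for a compact shortest-path network:
g = nx.karate_club_graph()
for measure, top in rank_key_players(g, top_k=5).items():
    print(measure, top)
```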
The KEGG pathways that were significantly enriched (p-value ≤ 0.05 after Benjamini-Hochberg FDR adjustment) and previously known in the neurodegenerative disorders under study were identified and further investigated. Google and NCBI's PubMed databases were used to search for such previously known biological pathways in neurodegenerative disorders. After that, all the genes from the enriched KEGG pathways were combined into a list of "mechanism genes". Based on their molecular functions, we further classified these "mechanism genes" as either disease-causing (leading to neuronal loss/death) or disease-alleviating (helping neuronal survival) agents. Once again, Google and NCBI's PubMed databases were used to identify such previous implications. For easy understanding, the loss versus survival classification is represented in the figures of the next sections by highlighting the "mechanism genes" in purple or yellow, respectively. Using the "mechanism genes", a direct interaction network was constructed and investigated for an integrated disease mechanism. As will be shown in Section 3, we outline three possible mechanisms for initiating Parkinson's disease from extracellular signaling.

Results

We initiated our Parkinson's disease network analysis using the 267 "seed genes" selected as explained in Methods and Data. Out of the 267 significantly differentially expressed genes (SDEGs), 67 genes were directly connected to each other by interactions such as regulation, promoter binding, direct regulation, protein modification and miRNA regulation. This interaction network (Figure 4) has a relatively low average node degree of 2.84. Genes such as MAPK8, RAB3A, STXBP1, SYN1 and VAMP2 are the top five most highly connected nodes, with node degree ≥ 7. One of the well-known Parkinson's genes, SNCA (α-synuclein), was among the top five most influential (betweenness centrality) and most accessible (closeness centrality) nodes in the network. 15 of the 67 genes/proteins (ACHE, ATR, CX3CL1, FGFR1, GRIA1, L1CAM, MAPK8, MT1F, MT2A, PRDX2, RAB3A, RNF11, SNCA, SNCG and SPTAN1) have already been implicated in the Parkinson's disease paradigm, either as neuroprotective and therapeutic agents or as disease-aggravating ones. In Figure 4, these previously known PD genes are highlighted in green. Based on their characteristic physiological roles, 12 genes (BSN, DCLK1, KCNQ2, NCAM1, NEDD4L, PAK1, PCDH8, STXBP1, SYN1, UBE2N, UNC13A and VAMP2), colored in blue in Figure 4, were classified as potentially involved in Parkinson's disease. The molecular functions of some of these candidate genes are summarized here. NCAM1 (neural cell adhesion molecule 1) is important in cognitive processes such as learning and memory. It plays a major role in the brain's immune surveillance system [47]. NCAM1 also facilitates the release, repositioning, and/or expansion of the synaptic complex. BSN (bassoon presynaptic cytomatrix protein) is a scaffolding protein involved in organizing the presynaptic cytoskeleton at the specialized sites where neurotransmitters are released from the synaptic vesicles (retrieved on 25-Feb-2013 from http://www.ncbi.nlm.nih.gov/gene/8927). Campbell et al. (2012) [48] have shown that STXBP1 (syntaxin binding protein 1) is a vital part of the process of calcium ion-dependent exocytosis in neurons, as well as in neuroendocrine cells. It facilitates membrane fusion and neurotransmitter release. SYN1 (synapsin I) is known to be a key player in synapse formation and plasticity [49].
During an action potential (an important part of the neuron firing process), synapsins are phosphorylated by PKA (cAMP-dependent protein kinase), releasing the synaptic vesicles and allowing them to move to the membrane and release their neurotransmitter. VAMP2 (vesicle-associated membrane protein 2) is thought to participate in neurotransmitter release at a step between docking and fusion. A recent study has shown that single nucleotide polymorphisms in the UNC13A (unc-13 homolog A) gene may be associated with sporadic amyotrophic lateral sclerosis (ALS) [50]. It regulates neurotransmitter release at synapses, including at neuromuscular junctions. α-synuclein was shown to promote disruption of the ubiquitin-proteasome system [51]. UBE2N (ubiquitin-conjugating enzyme E2N) targets proteins for degradation via the proteasome. In recent years, synaptic vesicle trafficking defects have been increasingly implicated as an important factor in many PD models, either via direct interactions with the synaptic vesicle (SV) cycling machinery or via indirect effects caused by mitochondrial dysfunction [52]. Even though the genes BSN, NCAM1, STXBP1, SYN1, VAMP2 and UNC13A have not been shown to be directly related to PD, they all seem to play an important role in the regulation and release of neurotransmitters and synaptic vesicles during the SV cycle. Additional arguments for considering the above-mentioned genes as associated with Parkinson's disease are provided from a network perspective. Figure 4 reveals that BSN, STXBP1, SYN1, VAMP2, and UNC13A directly interact with RAB3A, a gene well known in PD, where RAB3A is able to provide substantial rescue against α-syn-induced degeneration of dopaminergic neurons. Besides RAB3A, SYN1 is also directly connected to GRIA1 and SNCA, two known PD genes. Studies have suggested glutamate receptor (GRIA1) antagonists as potential treatment agents for Parkinson's disease [53]. In the direct interaction network, potential candidate genes such as PAK1 and UBE2N are among the top five nodes with high closeness (visibility) centrality scores. Another of the proposed candidate genes, SYN1, was among the top five hub nodes as well as among the top five nodes with the highest betweenness (traffic-influence) centrality scores. Being first-level direct interacting neighbors of known genes (guilt-by-association) also makes BSN, NCAM1, PAK1, PCDH8, STXBP1, SYN1, UBE2N, UNC13A and VAMP2 genes of potential interest in Parkinson's disease. The physiological roles these genes play in synaptic vesicle trafficking, neurotransmitter release, and ubiquitination, as well as their other network attributes, such as being hubs, traffic-influential and/or monitoring nodes, increase the chance of these genes being involved in PD pathology, which reinforces the arguments in favor of their experimental validation. A shortest-path network (SP) was built by selecting 105 out of the 267 significantly differentially expressed genes (SDEGs) that have a higher chance of being connected to some of the known PD genes (see Methods). Interaction types included promoter binding, protein modification and direct regulation. 193 genes were added by the Pathway Studio 9.0 software to connect the 105 seed genes along the shortest paths between any pair of genes. The connecting genes were examined in sources such as OMIM and PubMed, along with Google searches, to verify whether or not they had already been implicated in PD.
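The guilt-by-association argument above amounts to counting, for each candidate, how many of its first-level neighbors are already known PD genes; a minimal sketch follows (the toy edge list and gene sets are assumptions mirroring the examples in the text).

```python
import networkx as nx

def guilt_by_association(g, candidates, known_pd):
    """Score each candidate by its number of known-PD direct neighbors."""
    known = set(known_pd)
    scores = {
        c: sum(1 for nb in g.neighbors(c) if nb in known)
        for c in candidates if c in g
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Toy usage mirroring the text: SYN1 touches three known PD genes.
edges = [("SYN1", "RAB3A"), ("SYN1", "GRIA1"), ("SYN1", "SNCA"),
         ("BSN", "RAB3A"), ("VAMP2", "RAB3A")]
g = nx.Graph(edges)
print(guilt_by_association(g, ["SYN1", "BSN", "VAMP2"],
                           ["RAB3A", "GRIA1", "SNCA"]))
```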
In the second case, whether they could be of potential interest in PD was decided based on the gene's physiological/molecular characteristics and network location (guilt-by-association). A more compact version of this 298-gene SP network was constructed using only the genes from the four categories of Table 1 and adding a few generic genes without which some of the genes of interest would remain unconnected. The compact SP network (see Figure 5) is considerably better connected (average node degree 6.79) than the one based on direct interactions. Many of the known PD genes, such as AKT1, CASP3, CDK5, MAPK1, MAPT and SNCA, are highly connected in this network. Of those, CDK5 and MAPK1 are among the 10 hub genes (AKT1, CASP3, CDK5, CREB1, CTNNB1, EGFR, MAPK1, SP1, SRC and TP53) with node degree > 15. In biomolecular networks, highly connected nodes tend to be part of critical functions or pathways, some of the found hubs, such as TP53, MAPK1, AKT1 and CASP3, being typical examples.

Figure 4. Parkinson's disease direct interaction network. The 15 genes/proteins implicated previously in PD pathology are highlighted in green and the 12 genes/proteins of potential interest for that disease are highlighted in blue. Different interactions are represented as follows: regulation - dashed grey, molecular transport - dotted red, co-expression - solid blue, protein modification - solid green, and protein-protein binding - solid purple.

The nodes included in the network were then subjected to enrichment analysis using the DAVID software tool, which systematically maps a given gene list to associated biological annotation terms. The statistically significant enriched Gene Ontology categories and pathways related to the brain and nervous system, assessed with the Benjamini-Hochberg multiple-testing correction, are presented in Table 2. Several clusters of genes were thus identified as being involved in neuron development, differentiation, projection and apoptosis, synaptic transmission, and vesicle transport and regulation, as biological processes affected by Parkinson's disease. Indeed, many of the enriched genes, such as CDK5, FGFR1, L1CAM, NR4A2, PRKCA, RAB3A, RAC1 and SNCA, have already been studied as mediators, suppressors or regulators of neurodegeneration. Pathways such as ErbB signaling and Neurotrophin signaling are enriched in this PD-related gene list. Both of these pathways have been considered major avenues for promoting the survival of dopaminergic neurons [54]. Synaptosomes, axons, and membrane-bounded vesicles are some of the cellular components found to be affected by PD. Table 3 lists the genes identified in our study as possibly related to Parkinson's disease, based on their moderate-to-considerably-high connectivity to known PD genes. CTNNB1 (catenin, beta 1) has a record of ten nearest neighbors in the compact shortest-path network (CSPNW, Figure 5), all of which are known to be involved in Parkinson's disease (AKT1, CASP3, CASP6, CDK5, CREB1, MAPK8, NR4A2, PTEN, RAC1 and SMAD3). This makes CTNNB1 the number one candidate gene of interest.

Figure 5. Parkinson's disease compact shortest-path network. The genes/proteins implicated in PD pathology are highlighted in green and red. The genes/proteins of potential interest are highlighted in blue and orange (green and blue refer to SDEGs, while red and orange refer to SP-network connecting genes, respectively). Different interactions are represented as follows: protein modification - solid green, promoter binding - dotted green and direct regulation - solid grey.
This gene, along with Wnt1 and Fzd-1, critically contributes to the survival and protection of adult midbrain DA neurons [55]. In addition, it has a high betweenness centrality, which increases its global influence in the network. The next strongest candidate for implication in Parkinson's disease is the EGFR (epidermal growth factor receptor) gene, having six PD-related neighbors, including CASP3, CDK5, PRKCA, RNF11 and TP53. It is one of the top ten nodes with the highest node degree, closeness and betweenness centrality scores. This places EGFR in one of the critical positions in the compact shortest-path network, with greater visibility and traffic control. Many studies have shown that EGFR signaling plays a major role in neurogenesis, neuron survival and maintenance [56][57][58][59]. In a recent study, EGFR has been suggested as a preferred target for treating amyloid-beta-induced memory loss in Alzheimer's disease [60]. The third interesting PD candidate is the PAK1 (p21 protein (Cdc42/Rac)-activated kinase 1) gene, having five PD-related neighbors (AKT1, CASP3, CDK5, RAC1, and TP53). PAK1 regulates neuronal polarity, morphology, migration and synaptic function [61]. The gallery of genes potentially related to Parkinson's disease from Table 3 also includes CEBPA (CCAAT/enhancer binding protein (C/EBP), alpha), which interacts with four known PD genes (GATA2, IL12B, MT2A, and TP53). CEBPA has been shown to bind to the promoter of, and modulate the expression of, leptin, a hormone with easy accessibility to the brain. It is important to note that leptin receptors are expressed in neurons and other brain regions and are known to regulate neural development. Thus, leptin could be a potential drug candidate for neurodegeneration [62]. The compact shortest-path network included many noteworthy connecting proteins, such as APP, CREB1, HSP90AA1, MAPT and PTEN, which have previously been implicated as playing critical roles in the pathogenesis of many neurodegenerative diseases, and a couple of which have been indicated to have neuroprotective mechanisms. APP (amyloid beta (A4) precursor protein) is the major component of the filamentous inclusions found in the Lewy bodies and Lewy neurites, the characteristic hallmark features of many neurodegenerative diseases, including Parkinson's, Alzheimer's, dementia with Lewy bodies and multiple system atrophy (MSA). Neurodegenerative diseases caused by abnormal aggregations of alpha-synuclein proteins are classified as alpha-synucleinopathies [63][64][65][66]. Similarly, tauopathies are a class of neurodegenerative diseases associated with the aberrant accumulation of tau proteins (MAPT) in the brain (see Figure 5). Among these five genes, CREB1 appears to have a major network advantage, being one of the top ten nodes with the highest local connectivity, visibility and traffic influence in the compact shortest-path network. In addition, genes such as APP, MAPT and HSP90AA1 are among the top 25 nodes with the highest connectivity and higher accessibility to all other nodes, as measured by their node degree and closeness centrality scores. Other genes from Table 3 might also be investigated for possible relations to Parkinson's disease, including the generic genes MAPK1 and EGFR, which also interact with many known PD genes.
The genes used to construct the compact shortest-path network were subjected to Ingenuity's IPA and DAVID pathway enrichment analysis, the latter software utilizing the KEGG pathway classification (Kyoto Encyclopedia of Genes and Genomes, http://www.genome.jp/kegg/) [46]. IPA produced 25 enriched pathways vs. 34 for DAVID, and after elimination of the cancer- and infectious disease-related pathways the ratio was reduced to 18:21. After reviewing the Parkinson's disease literature, we selected sixteen of the DAVID-enriched pathways (Table 4), belonging to the categories of signal transduction, cell motility, cell communication, immune system, nervous system and neurodegenerative diseases. Directly shared between IPA and DAVID were the pathways for p53, axonal guidance, gap junction and adherens junction signaling. Many signaling pathways (see Figure 6), including 14-3-3-mediated, neuregulin, semaphorin, ephrin, gap-junction and axonal guidance signaling, as well as different growth factor signaling pathways such as EGF/EGFR, FGF, and NGF, were found to be enriched in Parkinson's disease pathology. This finding extends the recent report [77]. Neuregulins, along with epidermal growth factors, play a diverse role in neuronal development and differentiation. Systemic administration of neuregulin-1β1 protects dopaminergic neurons in a mouse model of Parkinson's disease [78]. Semaphorins and ephrins are prominent families of axon guidance cues during normal nerve growth and also after injury. Binding interactions have been reported between 14-3-3 proteins and Synuclein-alpha and LRRK2 (leucine-rich repeat protein kinase 2), genes linked to the sporadic and familial forms of PD [79]. It was telling to find that the signaling pathways of major neurodegenerative conditions such as Alzheimer's disease, amyotrophic lateral sclerosis (ALS) and Huntington's disease are enriched under Parkinson's disease conditions as well. Discovering these overlapping pathways will help us to better understand the complex mechanisms of neurodegenerative diseases and to search for therapeutic agents common to the entire family of these diseases. The analysis of genes involved in the selected DAVID/IPA pathways revealed more genes related to the manifestation of Parkinson's disease, such as FYN (a protein-tyrosine kinase oncogene belonging to the focal adhesion pathway) and VEGF (from the VEGF signaling pathway). FYN-mediated signaling [80] activates phosphorylation of alpha-synuclein, and the accumulation of this phosphorylated protein in the brainstems of patients with Parkinson's disease is a signature mark of this disease. VEGF (vascular endothelial growth factor) is known to promote microglial proliferation, neurogenesis and angiogenesis, thus providing neuroprotective effects via both direct and indirect mechanisms together with other players of the VEGF signaling pathway [81]. This was one more argument for using the 46 genes/proteins found in common in all the 16 KEGG pathways from Table 4 as an essential part of the integrated Parkinson's disease mechanism. The network built on this basis is shown in Figure 7. The genes in Figure 7 are classified into four categories: already implicated in Parkinson's disease, of potential interest to PD, disease-causing (leading to neuronal loss/death) or disease-alleviating (helping in neuronal survival). Due to the high network interconnectedness, no separation between the loss and survival genes could be detected; the genes appear as part of a single integrated system.
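Pathway enrichment of the kind DAVID and IPA report is, at its core, an over-representation test; a minimal sketch using a hypergeometric test is given below (the gene universe size, pathway definitions and cut-offs are assumptions for illustration, not the actual DAVID/IPA statistics).

```python
from scipy.stats import hypergeom

def pathway_enrichment(gene_list, pathways, universe_size):
    """One-sided hypergeometric over-representation test per pathway.

    gene_list: genes of interest (e.g., compact SP network members).
    pathways: dict mapping pathway name -> set of member genes.
    universe_size: number of genes on the platform/background.
    Returns (pathway, overlap, p-value) sorted by p-value.
    """
    genes = set(gene_list)
    results = []
    for name, members in pathways.items():
        k = len(genes & members)  # observed overlap
        # P(X >= k) with M = universe, n = pathway size, N = list size
        p = hypergeom.sf(k - 1, universe_size, len(members), len(genes))
        results.append((name, k, p))
    return sorted(results, key=lambda r: r[2])

# Toy usage:
pw = {"ErbB signaling": {"EGFR", "MAPK1", "AKT1", "SRC"},
      "Neurotrophin signaling": {"AKT1", "MAPK1", "TP53", "RAC1"}}
print(pathway_enrichment(["EGFR", "AKT1", "MAPK1", "CTNNB1"], pw, 20000))
```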
Visual inspection of the pathways in the KEGG database also revealed that there is no definite compartmentalization of processes within a biological cell. One process/pathway feeds into another or into multiple pathways; e.g., the Wnt signaling pathway includes players from the MAPK, focal adhesion, adherens junction, and Alzheimer's disease pathways. In examining the integrated mechanism network, three routes emerged for triggering the Parkinson's disease mechanism via one of the extracellular ligands CX3CL1, IL12B and SEMA6D. In the first route, CX3CL1 (fractalkine), together with DRD1 (dopamine receptor D1), suppresses the expression of the ionotropic glutamate receptor GRIA1. There is also an interaction between CX3CL1, ADAM17 (metallopeptidase domain 17) and L1CAM, which then follows a downstream path into the cytoplasm and to the nucleus for subsequent regulation of gene expression. ADAM17 and TP53 activate the expression of the upstream-positioned CX3CL1. The suppression of microglial activation by fractalkine contributes to neuronal survival; ADAM17-mediated fractalkine cleavage would ultimately limit the activation of microglia and support neuronal survival [82]. There is a two-way gene expression modulation between CX3CL1 and SRC. Inside the cytoplasm, the AKT1, CASP3, MAPK1 and MAPK8 genes/proteins are direct downstream targets of CX3CL1. Except for GRIA1 and CASP3, all other downstream target genes of CX3CL1 are positively activated by it. Some of the players in the outlined route, such as ADAM17, CX3CL1, DRD1, GRIA1, and L1CAM, have been claimed in animal model studies to be therapeutic targets for Parkinson's disease [19,21,83,84]. The second route is initiated via SEMA6D and its receptor PLXNA1 (plexin A1), which in turn regulates RHOA and AKT1 gene expression inside the cytoplasm. The downstream activity of MAPK1 in the cytoplasm is also negatively modulated by SEMA6D. SEMA6D, in its turn, can be negatively modulated as an upstream target of PLXNA1. Apart from SEMA6D, CAPN1 (calpain 1, (mu/I) large subunit) negatively regulates the expression of both PLXNA1 and SNCA and can thus modulate their downstream actions inside the cytoplasm. Semaphorins, secreted proteins involved in the guidance of neuronal and non-neuronal cells, interact with receptor complexes formed by plexins and neuropilins. There is literature evidence that semaphorins and their receptors promote or guide neuronal axon projection, suggesting therapeutic approaches for the treatment of Parkinson's disease [85,86]. Studies in rodent and cell culture models of PD suggest that treatment with calpain inhibitors can prevent neuronal death and restore function, thus suggesting that calpain inhibition could be a therapeutic strategy in PD [87]. The third route of the proposed integrated Parkinson's disease mechanism takes place via another extracellular ligand, IL12B (interleukin 12B), which lies upstream of the MAP kinases, RAC1 and AKT1, and all these genes negatively regulate the gene expression of IL12B. Many studies have suggested that neuroinflammation and activated microglia contribute to neurodegenerative processes. Interleukins alleviate these harmful effects and help in the differentiation and survival of neuronal cells stressed by activated microglial actions [88,89].

Figure 7. The 46 genes/proteins found in common in all 16 enriched KEGG pathways.
Genes/proteins implicated in PD pathology are highlighted in green/red and those of potential interest are highlighted in blue/orange, where blue- and green-colored genes belong to the set of significantly modulated genes, while those colored in red and orange are from the set of connecting proteins in the shortest-path network. Different interactions: regulation - dashed grey, molecular transport - dotted red, co-expression - solid blue, protein modification - solid green, protein-protein binding - solid purple, promoter binding - dotted green and direct regulation - solid grey.

Thus, from the integrated disease mechanism network we present a preliminary outline of three possible routes to enhance the survival of dopaminergic neurons, which could be a source of potential therapeutic targets in Parkinson's disease. A more detailed study will be needed to elucidate this very complex overall mechanism. A shortest-path network (SPNW) was constructed using all 267 seed genes and accounting only for their direct microRNA-mRNA target interactions, as given in the ResNet 9.0 database of the Pathway Studio software. Seventy-one regulatory miRNAs were thus identified (Figure 8). Table 5 shows the genes of interest in the microRNA regulatory network (MRN) and how many miRNAs target each gene's mRNA. miR-218-1 was found to be the top player, regulating the expression of 16 genes, of which three (PCDH8, RIMS3 and STXBP1) are of potential interest to Parkinson's disease. In an animal model study, it was shown that miR-218-1 is expressed in the hippocampus [90], where a volumetric MRI imaging study has found progressive volume loss in human PD subjects [91]. Other microRNAs, such as miR-29a, miR-132, miR-133a1, miR-182, and miR-330, were found to regulate the expression of the known Parkinson's-related genes ACHE, CX3CL1, FGFR1, L1CAM, and SPTAN1. Being direct interacting partners of known PD-related genes, some of these miRNAs could be considered potential regulatory targets in the Parkinson's disease mechanism. On further examination, the microRNA regulatory network revealed that the expression of candidate genes such as RIMS3, SEMA6D and SYNJ1 is tightly regulated by multiple miRNAs. RIMS3 (regulating synaptic membrane exocytosis 3) and other RIM family members are generally believed to be RAB3 isoform (RAB3A/B/C/D)-specific effectors that regulate synaptic vesicle exocytosis in neurons and in some endocrine cells [95]. The release and re-uptake of neurotransmitters in the synaptic junction is a highly coordinated process, and RIMS3 and RAB3A, along with other proteins, play an important role during neurotransmitter release. The gene expression of the extracellular ligand SEMA6D, proposed as one of the three initiators of the integrated Parkinson's disease mechanism (Figure 7), was found in our miRNA regulatory network to be regulated by seven miRNAs (miR-124-1, miR-128-1, miR-16-1, miR-19a, miR-23b, miR-30a and miR-9). Some of those, such as miR-124-1, miR-128-1 and miR-9, have previously been shown to be of importance for Alzheimer's disease neuropathology, being abundantly expressed in the Alzheimer hippocampus [96]. This may be considered one more sign of the possible existence of common regulatory mechanisms in neurodegenerative diseases. Another highly microRNA-regulated gene is SYNJ1 (synaptojanin 1) (see Table 5), a polyphosphoinositide phosphatase found enriched in the brain and located at nerve terminals, as well as associated with synaptic vesicles and coated endocytic intermediates.
Synaptojanins have been suggested to accelerate the synaptic vesicle recovery/trafficking process at the synapse [97]. Dysfunction of synaptic transmission and membrane trafficking is implicated in PD. Based on its molecular function, SYNJ1 could play a role in the Parkinson's disease molecular mechanism. Finally, in addition to miRNA-mediated regulation, the network also included four genes (AFF1, ATF7IP, ATOH8 and TBC1D2B) that encode transcription factors (TFs). These significantly differentially expressed TFs indicate a possible integrated TF/miRNA regulation of the transcription of Parkinson's-related genes.

Summary

The microarray expression data used in our study were a combination of data produced and interpreted by different authors [33,34] and referring to different regions of the brain. With the long-term aim of searching for a common molecular mechanism of neurodegenerative diseases, we renormalized the data for better comparability. Then, a number of specific biomolecular networks were built and analyzed in a variety of ways. As a result, while confirming some of the previous findings, including part of the previously predicted novel Parkinson's genes, more such PD-related genes were proposed in this work based on guilt-by-association analysis and on accounting for the importance of certain nodes in the network topology. As is well known, the guilt-by-association approach is based on analysis of the nearest network neighborhood of genes with proven function in the search of interest. Many Parkinson's disease genes are listed in the OMIM database. However, our list of SDEGs in all three Parkinson's disease datasets used in this study did not include all of the OMIM PD-related genes, missing genes such as LRRK2, PARK2, PARK7, PLA2G6, PINK1 and UCHL1; PINK1 and UCHL1 were still both significantly expressed in the medial Substantia nigra, and UCHL1 also in the lateral Substantia nigra, but not in all three brain tissue types. We found that the log fold-change of PARK2, PARK7 and PLA2G6 was only around 0.03, which was not significant enough to detect changes in gene expression. The Affymetrix HG-U133A GeneChip did not contain a probe for the LRRK2 gene, but instead included a LRRK1 gene probe. Again, LRRK1 did not meet the criteria for the "seed genes" list, since it did not show strong differential gene expression and its log fold-change was also only around 0.03. While the lack of statistically significant presence of the above-mentioned PD-related genes could possibly be attributed to the loss of expression intensity in post-mortem brain samples compared to a functioning brain, in this study we focused our attention mainly on the genes showing considerable change in all three selected Parkinson's disease brain tissue samples. Despite the reduced base of 15 known PD genes available for the guilt-by-association predictions, we were able to identify novel Parkinson's disease candidate genes from our direct interaction network: SYN1, neighboring three known PD genes, followed by UBE2N and NCAM1 with two, and BSN, PAK1, PCDH8, STXBP1, UNC13A and VAMP2 with one such neighbor each. Second-level interacting partners generally have a much smaller chance of being included in the list of candidate genes. However, this chance may increase for some genes known to exhibit certain functions that may be related to the disease of interest. Such is the case with the DCLK1 gene, via its role in synaptic plasticity and neurodevelopment and as a first neighbor of SYN1.
Another group of novel PD gene candidates was found from a similar analysis of the shortest-path network. Such is the case with NEDD4L, SYNJ1 and TUBB3 as direct partners, and ACACB, CACNA1G, KCNQ2, and SEMA6D as second-level partners, to already known PD genes. All 17 genes listed here are significantly differentially expressed in PD. Our network analysis indicated that, apart from the strongly differentially expressed genes, some connecting genes/proteins from the shortest-path networks could be of similar importance in the deregulation of the disease mechanisms. Considering such connecting genes/proteins via their guilt-by-association with already known PD genes, we concluded that CTNNB1, EGFR, ADAM17, CEBPA, CTNND1, CDKN1B, KLF1, ROCK1 and TIAM1 could also be genes of potential interest in the Parkinson's disease realm. Some of the genes on this list were found to play an important role in the network topology. Thus, CTNNB1 and EGFR are among the top ten most highly connected nodes (with degree > 15), among the top ten nodes with higher accessibility to all other nodes as assessed by closeness centrality, and among the top ten traffic-influential nodes in the network as judged by their betweenness centrality. Genes such as ADAM17, CEBPA and CTNND1 are among the top 25 high-connectivity nodes (with degree ≥ 8) and also among the top 25 traffic-influential nodes in the network. Besides helping to identify novel PD-related genes, the same line of network analysis has shown that APP, MAPT and PTEN, well-known contributors to many other neurodegenerative diseases, including Alzheimer's, MSA, Pick's disease, PSP, etc., are important connecting genes/proteins in the Parkinson's shortest-path network. Finding such genes with a common role in the neurodegeneration process reinforces our study goal. We have also added another seven to the numerous miRNAs already known to affect the expression of PD-relevant genes [92][93][94]. With caution, because some of their regulatory interactions have not yet been validated, we predict that miR-132, miR-133a1, miR-181-1, miR-182, miR-218-1, miR-29a, and miR-330 could be of interest as potential regulators in Parkinson's disease mechanisms, due to their direct interaction with known PD-related genes. Further investigation of the above-mentioned miRNA-related regulatory interactions of candidate and known PD genes would deepen our understanding of the molecular mechanisms of complex diseases like Parkinson's. Examining the microRNA regulatory network, one may conclude that the disease pathogenesis is complex enough to require regulatory mechanisms mediated via both protein-coding genes and the small non-coding microRNAs. All the genes listed in this summary were shown, through gene set enrichment analysis, to be key players in various cellular pathways and mechanisms, such as neuron development and differentiation, synaptic transmission, vesicle transport and endocytosis, apoptosis, and memory/learning, which are altered in the underlying Parkinson's pathophysiology and the potential compensatory responses. Moreover, enrichment of the Alzheimer's, ALS and Huntington's disease signaling pathways was found to take place in PD brains as well. This supports the view that there is an underlying common mechanism for all neurodegenerative diseases.
In the final stage of our systems biology approach to Parkinson's disease, we used the KEGG pathways found enriched by the DAVID analysis, along with the enriched canonical pathways from the IPA analysis, to build an integrated mechanistic Parkinson's disease network containing 46 genes. Three routes for triggering PD molecular mechanisms were identified on this basis, proceeding from signaling initiated via the extracellular ligands CX3CL1, SEMA6D and IL12B. Further analysis of these routes could reveal novel therapeutic targets for Parkinson's disease. Yet, the above findings should be considered only the tip of the iceberg in understanding the intertwined nature of these highly complex neurodegenerative diseases.
An analytic hydrodynamical model of rotating 3D expansion in heavy-ion collisions

A new exact and analytic solution of non-relativistic fireball hydrodynamics is presented. It describes an expanding triaxial ellipsoid that rotates around one of its principal axes. The observables are calculated using simple analytic formulas. Azimuthal oscillation of the off-diagonal Bertsch-Pratt radii of Bose-Einstein correlations, as well as rapidity-dependent directed and third flow measurements, provide means to determine the magnitude of the rotation of the fireball. Observing this rotation and its dependence on collision energy may lead to new information on the equation of state of the strongly interacting quark-gluon plasma produced in high energy heavy-ion collisions.

Introduction

The application of hydrodynamics (both numerical and exact analytic solutions) to the description of "soft" particle production in heavy-ion collisions has a good tradition. Here we deal with an analytic model. (For a recent work on numerical hydrodynamics, see e.g. Ref. [1]; the many references therein provide a good list of recent developments in numerical hydrodynamical modelling.) Exact solutions require some "luck" to find; however, they also have a good line of success: the early days of high energy physics saw the development of the Landau-Khalatnikov solution [2] and the Hwa-Bjorken solution [3], both invaluably useful for high energy collision phenomenology. Since then, many physically realistic solutions have been found, relativistic as well as non-relativistic ones. The rotation of the expanding hot and dense matter produced in non-central heavy-ion collisions has recently drawn much theoretical attention. Many numerical models try to grasp the effects of rotation on the observables (see e.g. Refs. [4,5]). In this work we present an exact solution of the non-relativistic perfect fluid hydrodynamical equations that describes the desired rotating expansion of a triaxial ellipsoid-like fireball, just as one imagines the space-time picture of a non-central heavy-ion collision. We also calculate the observables using simple analytic formulas and point out those features that would reveal the magnitude of rotation. We mention some of our works of which this step is a natural continuation: the discovery of accelerating relativistic solutions [6,7] as well as the first rotating relativistic [8] and non-relativistic [9] exact solutions. There are also exact solutions available for an arbitrary equation of state in the relativistic domain [10] (even ones containing multipole asymmetries [11]), as well as in the non-relativistic domain [12] with realistic, ellipsoid-like symmetry. The solution and the calculation of the observables presented below are thus further examples of the usefulness of exact solutions in heavy-ion phenomenology. The current work is a straightforward, although rather non-trivial, generalization of our recent work on the evaluation of the observables for a spheroidally symmetric, rotating and expanding family of exact solutions of fireball hydrodynamics [13] to more general expansions with triaxial ellipsoidal symmetry. We refer to this work also for a more detailed overview and introduction to the status of the field of exact solutions of fireball hydrodynamics [13].

A new rotating solution of hydrodynamics

The equations of non-relativistic hydrodynamics are the Euler equation, the energy conservation equation and the particle number conservation equation.
They are recited here in a form suitable for heavy-ion physics phenomenology:

$$\partial_t n + \nabla\cdot(n\mathbf{v}) = 0, \qquad (1)$$
$$m_0 n\,(\partial_t + \mathbf{v}\cdot\nabla)\,\mathbf{v} = -\nabla p, \qquad (2)$$
$$\partial_t \varepsilon + \nabla\cdot(\varepsilon\mathbf{v}) = -p\,\nabla\cdot\mathbf{v}. \qquad (3)$$

Here n stands for the particle number density (thus n m_0 is the mass density), T for the temperature, p for the pressure, v for the velocity field, and ε for the energy density. The equations above need to be supplemented with a suitable Equation of State (EoS), which we customarily choose as

$$p = nT, \qquad \varepsilon = \kappa\, p.$$

In non-central heavy-ion collisions, an almond-shaped region of hot and dense matter forms which has non-zero angular momentum. The almond shape is approximated with a triaxial ellipsoid, and the following solution of hydrodynamics takes rotation and expansion into account. Let the x axis point in the direction of the impact parameter and the z axis in the direction of the colliding beams; thus the x-z plane is the event plane, and the rotation is around the y axis. We call the lab frame the K frame, to be distinguished from the K′ frame, which co-rotates with the principal axes of the expanding ellipsoid. The solution is described in terms of the time-dependent angle of rotation ϑ(t) and the three principal axes of the rotated ellipsoid, X(t), Y(t), Z(t). The scaling variable, whose level surfaces correspond to self-similar ellipsoids, is

$$s = \frac{r_x'^2}{X^2} + \frac{r_y'^2}{Y^2} + \frac{r_z'^2}{Z^2},$$

with the co-rotating coordinates given by

$$r_x' = r_x \cos\vartheta - r_z \sin\vartheta, \qquad r_y' = r_y, \qquad r_z' = r_x \sin\vartheta + r_z \cos\vartheta. \qquad (7)$$

The velocity field is taken to be a linear, rotating Hubble-like one, for which the above s variable is a proper "scaling variable", i.e. its co-moving derivative vanishes. We introduce the notation V = XYZ. One can directly check that for constant κ the above and the following formulas provide the desired rotating solution to Eqs. (1)-(3). For non-constant κ, the above formulas can be modified to form a valid solution. A solution is found with spatially homogeneous T and Gaussian-like n, of the form

$$n(\mathbf{r},t) = n_0\,\frac{V_0}{V}\,\nu(s), \qquad \nu(s) = e^{-s/2}.$$

Furthermore, the time development of the X, Y, Z principal axes (and thus that of ϑ) is governed by a Lagrangian for a point-like particle of mass m_0, which we write up only in the κ = const, T ≡ T(t) case, where we introduced another simplifying notation: the "average" radius of the ellipsoid R and the "average" angular velocity ω(t). Note that the velocity profile is written up in the co-rotating K′ frame. The free parameter ω_0 is related to the total angular momentum M_y of the flow (which points in the y direction), for example in the ν(s) = e^{-s/2} Gaussian case. The solution thus specified is a natural generalization of the earlier results for three-dimensionally expanding, non-rotating triaxial ellipsoids [12,14] (indeed, our formulation resembles those very much), and in the spheroidal limit (X = Z) it reproduces the exact solutions presented in Refs. [9,13] for rotating and exploding fireballs with spheroidal symmetry.

Observables from the new solution

One can calculate the hadronic observables from the above solution by specifying the emission function (source function) and a suitable freeze-out condition. Following Ref. [12], for simplicity (and because the solution for arbitrary κ(T) is then available), we take a Gaussian ansatz for the density profile (thus the temperature is spatially homogeneous), and take the freeze-out to happen at a constant time t_f (which in this case corresponds to a given T_f freeze-out temperature). We assume that at the freeze-out, particles with mass m are created.
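As a quick consistency check of the co-rotating coordinates in Eq. (7) (reconstructed above with cos ϑ in the last component, as required for an orthogonal rotation), a short symbolic verification might look as follows; this is an illustrative check, not part of the original paper.

```python
import sympy as sp

rx, rz, theta = sp.symbols('r_x r_z vartheta', real=True)

# Co-rotating (primed) coordinates from Eq. (7): a rotation about the y axis
rx_p = rx * sp.cos(theta) - rz * sp.sin(theta)
rz_p = rx * sp.sin(theta) + rz * sp.cos(theta)

# An orthogonal rotation must preserve r_x^2 + r_z^2
assert sp.simplify(rx_p**2 + rz_p**2 - (rx**2 + rz**2)) == 0

# Level surfaces of s = r_x'^2/X^2 + r_y'^2/Y^2 + r_z'^2/Z^2 are then
# self-similar ellipsoids whose principal axes are tilted by vartheta.
X, Y, Z, ry = sp.symbols('X Y Z r_y', positive=True)
s = rx_p**2 / X**2 + ry**2 / Y**2 + rz_p**2 / Z**2
print(sp.expand(s))
```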
The source function is then taken to be the non-relativistic Boltzmann distribution for a particle with mass m. In the following, the f index denotes the value taken at the freeze-out time t_f, but it is mostly dropped: all quantities are to be understood as their values at t_f. The single-particle spectrum and the two-particle Bose-Einstein correlation function are calculated from this source function, where K and q are the average pair momentum and the relative momentum, respectively, and λ is the effective intercept parameter of the correlation function. For the mentioned Gaussian density case, all the integrals can be performed analytically, yielding simple results. We introduce a set of auxiliary quantities for convenience. Note that these definitions can be straightforwardly specialized to the case when the two principal axes X and Z are equal (X = Z = R), and thus one deals with the expansion of a rotating spheroid. The single-particle spectrum can then be written in a form where T_x, T_y, T_z and β_xz, as well as T′_x, T′_y, T′_z and β′_xz, characterize the slope parameters (and the cross-terms) in the K (laboratory) frame and the K′ frame (the eigenframe of the rotated ellipsoid), respectively. We see that the formulas relating the parameters in the K and K′ frames are simply those describing a rotation by a fixed ϑ angle (the value of the tilt angle at freeze-out). The fact that a cross-term (i.e. a non-zero β_xz) appears, which is a new feature compared to Ref. [12], signals that the single-particle spectrum is not diagonal in the K frame: the tilt of the coordinate-space ellipsoid is not the same as that of the ellipsoid determined from the single-particle spectrum. The expressions for the azimuthal-angle-averaged p_T spectrum and the anisotropy (v_n) parameters follow the footsteps of Ref. [12]: one can introduce the scaling variables v and w and an effective temperature. Using the modified Bessel functions

$$I_n \equiv I_n(w) = \frac{1}{\pi}\int_0^{\pi} \cos(n\varphi)\, e^{w\cos\varphi}\, \mathrm{d}\varphi,$$

one obtains, up to second order in v (and thus in the y rapidity), simple expressions for the anisotropy coefficients. Here ϕ denotes the azimuthal angle, and Ψ_n is the nth-order event plane (in our model, all the event planes coincide, i.e. fluctuation effects are not taken into account). Fig. 1 presents the characteristic rapidity dependence of v_1, v_2 and v_3. The two-particle Bose-Einstein correlation function also turns out to be a Gaussian, and can be expressed either in terms of the relative momentum q or q′ (in the K and K′ frames, respectively), or using the Bertsch-Pratt parametrization. A simple calculation yields the Gaussian radii in terms of the already calculated T′_x, T′_z and β′_xz slope parameters. These radii are valid in the K′ frame; the transformation into the K (lab) frame is straightforward. The y direction, as before, is much simpler. Customarily introducing the o, s, l (out, side, long) Bertsch-Pratt components of q in the longitudinally co-moving system (LCMS), one obtains the radii as functions of the pair momentum azimuthal angle ϕ. The oscillation of R_s, R_o and R_os with a π period is characteristic of an ellipsoid-shaped source. The 2π-period oscillation of R_ol and R_sl is characteristic of a tilted (and rotating) source. Fig. 2 shows the Bertsch-Pratt radii as functions of the pair momentum azimuthal angle. The second-order oscillating BP radii (those containing cos 2ϕ and sin 2ϕ terms), including the cross-terms, were recently measured extensively by the STAR collaboration [15]; however, as we see, it is not these oscillations that signal rotational expansion.
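For readers wanting to reproduce the w-dependence numerically, the I_n(w) integrals defined above coincide with the standard modified Bessel functions of the first kind; a small sketch follows. The v_2 ≈ I_1(w)/I_0(w) ratio printed at the end is the typical leading-order Buda-Lund-type form and is included as an illustrative assumption, not a quotation of this paper's formulas.

```python
import numpy as np
from scipy import integrate, special

def I_n(n, w):
    """I_n(w) = (1/pi) * integral_0^pi cos(n*phi) * exp(w*cos(phi)) dphi."""
    val, _ = integrate.quad(
        lambda phi: np.cos(n * phi) * np.exp(w * np.cos(phi)), 0.0, np.pi)
    return val / np.pi

# The integral representation matches scipy's modified Bessel function iv(n, w):
for n in range(4):
    assert np.isclose(I_n(n, 1.3), special.iv(n, 1.3))

# Illustrative leading-order elliptic-flow-like ratio as a function of w:
for w in (0.1, 0.5, 1.0, 2.0):
    print(f"w = {w:.1f}:  I1/I0 = {I_n(1, w) / I_n(0, w):.4f}")
```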
Discussion and outlook

Most of the formulae for the observables derived here show a very close similarity (even identity) to those calculated in Ref. [12], where a stationary but tilted ellipsoidal source was assumed. The universal scaling of the elliptic flow and other anisotropies (a feature of experimental data that was successfully explained in terms of the Buda-Lund model [16]) is preserved here; just the scaling variables are expressed in a somewhat more involved way, see Eq. (34). The oscillation of the HBT radii characteristic of rotation also shows up in other models of tilted ellipsoidal sources (just as in Ref. [12]; a simpler model was introduced earlier in Ref. [17]). The new result of the present work is two-fold: first, an actual (and in some sense unique) rotating and expanding hydrodynamical solution is found which naturally leads to tilted sources from a non-tilted initial condition (and it can be applied to follow the time evolution of the rotation, for any given κ(T) EoS). Second, the results for the observables are refined in a way that takes not only the tilt but also the rotational flow into account. The most striking consequences of tilted (and also rotating) expansion are seen in the rapidity dependence of the anisotropy parameters (Fig. 1), as well as in the directed-flow-like (i.e. azimuthally 2π-periodic) oscillations of the R_sl and R_ol HBT radius parameters (the appearance of these cross-terms is purely due to rotation, see Eq. (49)). A natural proposal is thus to measure these observables to infer the rotation angle ϑ_f and possibly the angular velocity. It is worthwhile to mention that, to do these measurements, one needs experimental information not only on the usual (2nd-order) reaction plane but also on the first-order reaction plane; this may be done precisely by utilizing the y-dependence of the v_1 anisotropy parameter. It is also worth mentioning that in the case of models with spheroidal expansion (as discussed e.g. in Refs. [4,9]), the "angle of rotation" ϑ_f becomes ill-defined, so the experimental signatures of rotation become much harder to identify. The detection of rotation has some far-reaching promise. A softer equation of state (caused e.g. by the presence of a critical endpoint on the QCD phase diagram) would mean that the matter expands less violently, and thus the angle of rotation will be greater. In this manner, measuring the rotation angle as a function of collision energy can be of great importance in mapping out the critical endpoint and the location of the quark-hadron phase transition. So we look forward to seeing whether our presented model is applicable to new collision-energy-dependent measurements of HBT radius oscillations as well as rapidity-dependent anisotropy parameters, to infer the rotation of the expanding hot and dense matter. It would lead to new knowledge of the strongly interacting quark-gluon plasma produced in heavy-ion collisions. This work was supported by the Hungarian OTKA grant NK101438. M. N. was supported by the TÁMOP 4.2.4. A/1-11-1-2012-0001 "National Excellence Program", financed by the European Union and the State of Hungary, co-financed by the European Social Fund.
Image Forgery Detection Using Deep Learning by Recompressing Images

Capturing images has become increasingly popular in recent years, owing to the widespread availability of cameras. Images are essential in our daily lives because they contain a wealth of information, and it is often required to enhance images to obtain additional information. A variety of tools are available to improve image quality; nevertheless, they are also frequently used to falsify images, resulting in the spread of misinformation. This increases the severity and frequency of image forgeries, which are now a major source of concern. Numerous traditional techniques have been developed over time to detect image forgeries. In recent years, convolutional neural networks (CNNs) have received much attention, and CNNs have also influenced the field of image forgery detection. However, most CNN-based image forgery detection techniques in the literature are limited to detecting a specific type of forgery (either image splicing or copy-move). As a result, a technique capable of efficiently and accurately detecting the presence of unseen forgeries in an image is required. In this paper, we introduce a robust deep learning based system for identifying image forgeries in the context of double image compression. The difference between an image's original and recompressed versions is used to train our model. The proposed model is lightweight, and its performance demonstrates that it is faster than state-of-the-art approaches. The experimental results are encouraging, with an overall validation accuracy of 92.23%.

Introduction

Due to technological advancements and globalization, electronic equipment is now widely and inexpensively available. As a result, digital cameras have grown in popularity. There are many camera sensors all around us, and we use them to collect a large number of images. Images are required in the form of a soft copy for various documents that must be filed online, and a large number of images are shared on social media every day. The remarkable thing about images is that even illiterate people can look at them and extract information from them. As a result, images are an integral component of the digital world, and they play an essential role in storing and distributing data. Numerous tools are accessible for quickly editing images [1,2]. These tools were created with the intention of enhancing and improving images. However, rather than enhancing an image, some people exploit these capabilities to falsify images and propagate falsehoods [3,4]. This is a significant threat, as the damage caused by faked images is not only severe but also frequently irreversible. There are two basic types of image forgery, image splicing and copy-move, which are discussed below:

• Image Splicing: A portion of a donor image is copied into a source image. A sequence of donor images can likewise be used to build the final forged image.

• Copy-Move: This scenario involves a single image. Within the image, a portion of the image is copied and pasted elsewhere. This is frequently used to conceal other objects. The final forged image contains no components from other images.

The primary purpose in both cases of image forgery is to spread misinformation by replacing the original content of an image with something else [5,6]. Earlier, images were an extremely credible source for information exchange; however, due to image forgery, they are now also used to spread misinformation.
This is affecting public trust in images, as the forging of an image may or may not be visible or recognizable to the naked eye. As a result, it is essential to detect image forgeries in order to prevent the spread of misinformation, as well as to restore public trust in images. This can be done by exploring the various artifacts left behind when an image forgery is performed; they can be identified using various image processing techniques. Researchers have proposed a variety of methods for detecting the presence of image forgeries [7][8][9]. Conventional image forgery detection techniques detect forgeries by concentrating on the multiple artifacts present in a forged image, such as changes in illumination, contrast, compression, sensor noise, and shadow. CNNs have gained popularity in recent years for various computer vision tasks, including image object recognition, semantic segmentation, and image classification. Two major features contribute to CNNs' success in computer vision. First, a CNN takes advantage of the significant correlation between adjacent pixels. As a result, a CNN prefers locally grouped connections over one-to-one connections between all pixels. Second, each output feature map is produced through a convolution operation with shared weights. Moreover, compared to traditional methods that depend on engineered features to detect a specific forgery, a CNN uses features learned from training images and can generalize to detect unseen forgeries. These advantages make CNNs a promising tool for detecting the presence of forgery in an image. It is possible to train a CNN-based model to learn the many artifacts found in a forged image [10][11][12][13]. Thus, we propose a very light CNN-based network, with the primary goal of learning the artifacts that occur in a tampered image as a result of differences in the features of the original image and the tampered region. The major contributions of the proposed technique are as follows:

• A lightweight CNN-based architecture is designed to detect image forgery efficiently. The proposed technique explores the numerous artifacts left behind by the image tampering process, and it takes advantage of differences between image sources through image recompression (a minimal sketch of this idea is given below).

• While most existing algorithms are designed to detect only one type of forgery, our technique can detect both image splicing and copy-move forgeries, and it has achieved high accuracy in image forgery detection.

• Compared to existing techniques, the proposed technique is fast and can detect the presence of image forgery in significantly less time. Its accuracy and speed make it suitable for real-world application, as it can function well even on slower devices.

The rest of the paper is organized as follows. Section 2 provides a literature review of image forgery detection methodologies. Section 3 introduces the proposed framework for detecting the presence of forgeries in an image. Section 4 contains a discussion of the experimentation and the results achieved. Finally, in Section 5, we summarize the conclusions.
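As a rough illustration of the recompression idea referenced in the contributions above, the following sketch computes the difference between an image and a JPEG-recompressed copy of itself, which can then serve as model input; the quality factor, library choice and function names are assumptions for demonstration, not the paper's exact pipeline.

```python
import io
import numpy as np
from PIL import Image

def recompression_difference(path, quality=90):
    """Return the absolute difference between an image and its
    JPEG-recompressed version; tampered regions often recompress
    differently from the authentic background."""
    original = Image.open(path).convert("RGB")
    buf = io.BytesIO()
    original.save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    recompressed = Image.open(buf).convert("RGB")
    diff = np.abs(np.asarray(original, dtype=np.int16)
                  - np.asarray(recompressed, dtype=np.int16))
    return diff.astype(np.uint8)  # e.g., feed this to the CNN

# Usage (hypothetical file name):
# features = recompression_difference("suspect.jpg", quality=90)
```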
In [14], the authors' proposed error level analysis (ELA) for the detection of forgery in an image. In [15], based on the lighting conditions of objects, forgery in an image is detected. It tries to find the forgery based on the difference in the lighting direction of the forged part and the genuine part of an image. In [16], various traditional image forgery detection techniques have been evaluated. In [17], Habibi et al., use the contourlet transform to retrieve the edge pixels for forgery detection. In [18], Dua et al., presented a JPEG compression-based method. The discrete DCT coefficients are assessed independently for each block of an image partitioned into non-overlapping blocks of size 8 × 8 pixels. The statistical features of AC components of block DCT coefficients alter when a JPEG compressed image tampers. The SVM is used to classify authentic and forged images using the retrieved feature vector. Ehret et al. in [19] introduced a technique that relies on SIFT, which provides sparse keypoints with scale, rotation, and illumination invariant descriptors for forgery detection. A method for fingerprint faking detection utilizing deep Boltzmann machines (DBM) for image analysis of high-level characteristics is proposed in [20]. Balsa et al. in [21] compared the DCT, Walsh-Hadamard transform (WHT), Haar wavelet transform (DWT), and discrete Fourier transform (DFT) for analog image transmission, changing compression and comparing quality. These can be used for image forgery detection by exploring the image from different domains. Thanh et al. proposed a hybrid approach for image splicing in [22], in which they try to retrieve the original images that were utilized to construct the spliced image if a given image is proven to be the spliced image. They present a hybrid image retrieval approach that uses Zernike moment and SIFT features. Bunk et al. established a method for detecting image forgeries based on resampling features and deep learning in [23]. Bondi et al. in [24] suggested a method for detecting image tampering by the clustering of camera-based CNN features. Myung-Joon in [2] introduced CAT-Net, to acquire forensic aspects of compression artifact on DCT and RGB domains simultaneously. Their primary network is HR-Net (high resolution). They used the technique proposed in [25], which tells us that how we can use the DCT coefficient to train a CNN, as directly giving DCT coefficients to CNN will not train it efficiently. Ashraful et al. in [26] proposed DOA-GAN, to detect and localize copy-move forgeries in an image, authors used a GAN with dual attention. The first-order attention in the generator is designed to collect copy-move location information, while the second-order attention for patch co-occurrence exploits more discriminative properties. The affinity matrix is utilized to extract both attention maps, which are then used to combine location-aware and co-occurrence features for the network's ultimate detection and localization branches. Yue et al. in [27] proposed BusterNet for copy-move image forgery detection. It has a two-branch architecture with a fusion module in the middle. Both branches use visual artifacts to locate potential manipulation locations and visual similarities to locate copymove regions. Yue et al. in [28] employed a CNN to extract block-like characteristics from an image, compute self-correlations between various blocks, locate matching points using a point-wise feature extractor, and reconstruct a forgery mask using a deconvolutional network. 
Yue et al. in [3] designed ManTra-Net that is s a fully convolutional network that can handle any size image and a variety of forgery types, including copy-move, enhancement, splicing, removal, and even unknown forgery forms. Liu et al. in [29] proposed PSCC-Net, which analyses the image in a two-path methodology: a top-down route that retrieves global and local features and a bottom-up route that senses if the image is tampered and predicts its masks at four levels, each mask being constrained on the preceding one. In [30] Yang et al., proposed a technique based on two concatenated CNNs: the coarse CNN and the refined CNN, which extracts the differences between the image itself and splicing regions from patch descriptors of different scales. They enhanced their work in [1] and proposed a patch-based coarse-to-refined network (C2RNet). The coarse network is based on VVG16, and the refined network is based on VVG19. In [31] Xiuli et al., proposed a ringed residual U-Net to detect the splicing type image forgery in the images. Younis et al. in [32] utilized the reliability fusion map for the detection of the forgery. By utilizing the CNNs, Younis et al. in [33] classify an image as the original one, or it contains copy-move image forgery. In [34] Vladimir et al., train four models at the same time: a generative annotation model GA, a generative retouching model GR, and two discriminators DA and DR that checks the output of GA and GR. Mayer et al. in [35] introduced a system that maps sets of image regions to a value that indicates if they include the same or different forensic traces. In [36] Minyoung et al., designed an algorithm that leverages the automatically recorded image EXIF metadata for training a model to identify whether an image has self-consistency or if its content may have been generated from a single image. In [37] Rongyu et al., proposed a UNet that consists of a dense convolutional and deconvolutional networks. The first is a down-sampling method for retrieving features, while the second is an up-sampling approach for recovering feature map size. In [38] Lui et al., introduced the CNN segmentation-based approach to find manipulated regions in digital photos. First, a uniform CNN architecture is built to deal with various scales' color input sliding windows. Then, using sampling training regions, they meticulously build CNN training processes. In [39], an unfixed encoder and a fixed encoder are used to build a Dual-encoder U-Net (D-Unet). The unfixed encoder learns the image fingerprints that distinguish between genuine and tampered regions on its own. In contrast, the fixed encoder offers direction data to facilitate the network's learning and detection. In [40] Francesco et al., tested the efficiency of several image forgery detectors over image-to-image translation, including both ideal settings and even in the existence of compression, which is commonly performed when uploading to social media sites. Kadam et al. in [41] Proposed a method based on multiple image splicing using MobileNet V1.Jaiswal et al. in [42] proposed a framework in which images are fed into a CNN and then processed through several layers to extract features, which are then utilized as a training vector for the detection model. For feature extraction, they employed a pre-trained deep learning resnet-50. Hao et al. in [43] proposed using an attention method to analyze and refine feature maps for the detection task. 
The learned attention maps emphasize informative areas to enhance binary classification (false face vs. genuine face) and illustrate the altered regions. In [44], Nguyen et al., developed a CNN that employs a multi-task learning strategy to detect altered images and videos while also locating the forged areas. The information received from one work is shared with the second task, improving both activities' performance. To boost the network's generability, a semi-supervised learning strategy is adopted. An encoder and a Y-shaped decoder are included in the network. Li et al. introduced a deepfake detection method in [45]. The DeepFake techniques can only create fixed-size images of the face, which must be affinely warped to match the source's face arrangement. Due to the resolution disparity between the warped face area and the surrounding context, this warping produces different artifacts. As a result, DeepFake Videos can be identified using these artifacts. Komodakis et al. in [46] suggested a method for learning image features by training CNNs to recognize the two-dimensional rotation that is applied to the picture that it receives as input. The method proposed in [47] is composed of three parts: single image super-resolution, semantic segmentation superresolution, and feature affinity module for semantic segmentation. In [48] Yu et al., used dual attention upon pyramid visual feature maps to fully examine the visual-semantic relationships and enhance the level of produced sentences. For more details about image forgery and media, forensics readers may refer to [5][6][7][8][9][10][11][12][13]. The state-of-the-art techniques available for detecting the presence of tampering in the images generally take a very long time to process the images. Most of them can detect either image splicing forgery or copy-move type of forgery, not both. Another major issue with them is that they detect the forgery with low accuracy. Hence, there is a need for a better framework that is fast and more accurate. To address this, we presented a novel image recompression-based system. Apart from achieving better image forgery detection accuracy, our proposed framework has also achieved faster response time. This makes it suitable for real-life applications, as it is more accurate and can be utilized even by slower machines. The proposed framework is detailed in the next section. Proposed Technique CNNs, which are inspired by the human visual system, are designed to be non-linear interconnected neurons. They have already demonstrated extraordinary potential in a variety of computer vision applications, including image segmentation and object detection. They may be beneficial for a variety of additional purposes, including image forensics. With the various tools available today, image forgery is fairly simple to do, and because it is extremely dangerous, detecting it is crucial. When a fragment of an image is moved from one to another, a variety of artifacts occur due to the images' disparate origins. While these artifacts may be undetectable to the naked eye, CNNs may detect their presence in faked images. Due to the fact that the source of the forged region and the background images are distinct, when we recompress such images, the forged is enhanced differently due to the compression difference. We use this concept in the proposed approach by training a CNN-based model to determine if an image is genuine or a fake. 
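As an illustration, the recompression-and-difference step just described can be sketched in a few lines of Python. This is an illustrative sketch rather than the released implementation; the use of PIL, the default quality factor, and the resize target are assumptions based on the text.

```python
import io
import numpy as np
from PIL import Image

def feature_image(path, quality=98, size=(128, 128)):
    """Difference between an image and its JPEG-recompressed version (the featured image)."""
    original = Image.open(path).convert("RGB")
    buf = io.BytesIO()
    original.save(buf, format="JPEG", quality=quality)   # recompression step
    buf.seek(0)
    recompressed = Image.open(buf).convert("RGB")
    diff = np.abs(np.asarray(original, dtype=np.int16)
                  - np.asarray(recompressed, dtype=np.int16))
    # The forged region, having a different compression history, stands out in diff.
    resized = Image.fromarray(diff.astype(np.uint8)).resize(size)
    return np.asarray(resized, dtype=np.float32) / 255.0
```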
A region spliced onto another image will most likely have a statistically different distribution of DCT coefficients than the original region. The authentic region is compressed twice: first in the camera, and then again in the fake, resulting in periodic patterns in the histogram [2]. The spliced section behaves similarly to a singly compressed region when the secondary quantization table is used. As previously stated, when an image is recompressed, if it contains a forgery, the forged portion of the image compresses differently from the remainder of the image due to the difference between the source of the original image and the source of the forged portion. When the difference between the original image and its recompressed version is analyzed, this considerably emphasizes the forgery component. As a result, we use it to train our CNN-based model for detecting image forgery. Algorithm 1 shows the working of the proposed technique, which has been explained here. We take the forged image A (images shown in Figure 1b tamper images), and then recompress it; let us call the recompressed image as A recompressed (images shown in Figure 1c are recompressed forged images). Now we take the difference of the original image and the recompressed image, let us call it A di f f (images shown in Figure 1e are the difference of Figure 1b,c, respectively). Now due to the difference in the source of the forged part and the original part of the image, the forged part gets highlighted in A di f f (as we can observe in Figure 1d,e, respectively). We train a CNN-based network to categorize an image as a forged image or a genuine one using A di f f as our input features (we label it as a featured image). Figure 2 gives the pictorial view of the overall working of the proposed method. To generate A recompressed from A, we use JPEG compression. Image A undergoes JPEG compression and produces A recompressed as described in Figure 3. When there is a single compression, then the histogram of the dequantized coefficients exhibits the pattern as shown in Figure 4, this type of pattern is shown by the forged part of the image. Moreover, when there is a sort of double compression then, as described in Figure 5, there is a gaping between the dequantized coefficients as shown in Figure 6, this type of pattern is shown by the genuine part of the image. We constructed a very light CNN model with minimal parameters in our proposed model (line number 5 to 13 of Algorithm 1). We constructed a model consisting of 3 convolutional layers after which there is a dense fully connected layer, as described below: • The first convolutional layer consists of 32 filters of size 3-by-3, stride size one, and "relu" activation function. • The second convolutional layer consists of 32 filters of size 3-by-3, stride size one, and "relu" activation function. • The third convolutional layer consists of 32 filters of size 7-by-7, stride size one, and "relu" activation function, followed by max-pooling of size 2-by-2. • Then we have the dense layer that has 256 neurons with "relu" activation function, finally which is connected to two neurons (output neurons) with "sigmoid" activation. The feature image (A di f f ) is resized to 128 × 128 (A reshaped_di f f ) and then fed to the network. The network learns the presence of any tampering present through the feature images (images shown in Figure 1e). During training, the proposed model learns the existence of the forgery in an image through the numerous artifacts left behind during image forgery. 
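The layer stack enumerated above translates directly into code; the following Keras sketch is one possible rendering. The framework choice, the three input channels, and the loss function are assumptions not stated in the text, while the layer sizes follow the description above and the optimizer settings follow the experimental setup described later.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_detector():
    model = models.Sequential([
        layers.Input(shape=(128, 128, 3)),                         # resized feature image A_diff
        layers.Conv2D(32, (3, 3), strides=1, activation="relu"),   # first convolutional layer
        layers.Conv2D(32, (3, 3), strides=1, activation="relu"),   # second convolutional layer
        layers.Conv2D(32, (7, 7), strides=1, activation="relu"),   # third convolutional layer
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dense(2, activation="sigmoid"),                     # tampered vs. genuine
    ])
    # Loss choice is an assumption; the learning rate follows the experimental setup.
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
                  loss="binary_crossentropy", metrics=["accuracy"])
    return model
```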
The trained model can identify tampering with high accuracy, as discussed in the next section.

Algorithm 1 (fragment, model definition and training loop):
First convo. layer: 32 filters (size 3 × 3, stride size one, activation: "relu")
8: Second convo. layer: 32 filters (size 3 × 3, stride size one, activation: "relu")
9: Third convo. layer: 32 filters (size 3 × 3, stride size one, activation: "relu")
10: Max-pooling of size 2 × 2
11: Dense layer of 256 neurons with "relu" activation function
12: Two neurons (output neurons) with "sigmoid" activation
13: }
14: for epochs = 1 to total_epochs do
15: training_error = 0
16: for i = 1 to n do
17: ...

Experimental Results and Discussion
This section describes the training and testing environment for the proposed approach. We also examine its performance and contrast it with that of other techniques.

Experimental Setup
We evaluated the proposed technique on the popular CASIA 2.0 image forgery database [22,49] to assess how efficient it is. There are a total of 12,614 images (in BMP, JPG, and TIF format), of which 7491 are genuine images and 5123 are tampered images. CASIA 2.0 includes images from various categories, including animals, architecture, articles, characters, plants, nature, scenes, textures, and indoor images. The images in the database come in varying sizes; the resolution ranges from 800 × 600 pixels down to 384 × 256 pixels. Details about the CASIA 2.0 database are given in Table 1. A processor (Intel(R) Core(TM) i5-2400 CPU @ 3.1 GHz) with 16 GB RAM was used for the experimentation. The following terms are initially calculated for the evaluation: • Total_Images: the total number of images that were tested. We calculate the accuracy, precision, recall, and F measure [1] for the evaluation and the comparison of the proposed method with others; they are computed from the counts of correctly and incorrectly classified images.

Model Training and Testing
To evaluate the proposed technique, we randomly divided the CASIA 2.0 database in an 80%/20% ratio (Table 1); we used 80% of the images (5993 authentic images, 4099 tampered images, 10,092 images in total) for training the model. We used the Adam optimizer with an initial learning rate of 1 × 10−5 and a batch size of 64. The remaining 20% of the images (1498 genuine images, 1024 tampered images, 2522 images in total) were used for testing the proposed model and comparing it with the other existing frameworks. Figure 7 illustrates the training and testing accuracy of the proposed model when trained on the CASIA 2.0 database with the settings mentioned above.

Comparison with Other Techniques
We compare the proposed technique with the other techniques in terms of accuracy and the time required for image forgery detection. Table 2 shows the image forgery detection accuracy of the various techniques. These techniques and the proposed method were evaluated on the same set of images from the CASIA 2.0 database. For the techniques mentioned in [2,3,27], when they process an input image, we categorize the input image as tampered if the mask they generate reports forgery; otherwise it is considered a genuine image. It must be noted that Buster-Net [27] is designed for copy-move image forgeries, so we used copy-move forged images to report its results, whereas CAT-Net [2] is designed for splicing forgeries, so we used spliced images to report its results. Mantra-Net [3] can handle both image splicing and copy-move forgeries.
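Before turning to the comparison, note that the explicit formulas for the four metrics named above are not reproduced in this excerpt; they follow the standard definitions in terms of true/false positives and negatives, summarized in the sketch below.

```python
def evaluation_metrics(tp, tn, fp, fn):
    """Standard accuracy, precision, recall, and F measure from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f_measure = (2 * precision * recall / (precision + recall)
                 if (precision + recall) else 0.0)
    return accuracy, precision, recall, f_measure
```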
It can be observed that we chose techniques that can handle either image splicing or copy-move, but also techniques that can handle both image splicing and copy-move type of image forgeries. All the techniques have CASIA 2.0 database as a common database for the evaluation. We have used these techniques' publicly available trained model for evaluation. Apart from this, we retrained their models on the same CASIA 2.0 database images, on which the proposed model has been trained. The results obtained by retraining these models are also given in Table 2, along with their original models. After retraining, these techniques' accuracy has improved; however, the proposed technique still outperforms them. CAT-Net [2], Buster-Net [27], and Mantra-Net [3] concentrate more on where the forgery is present (localization, where the output is pixel-level forgery detection) in the given image rather than focusing on that is the image is tampered or genuine (detection, where the output is a binary classification). However, the proposed technique focuses on whether the given image is tampered with or genuine. Accuracy Comparison The proposed technique achieved better forgery detection accuracy due to the fact that instead of directly using the original pixel image, it uses the feature image, which is the difference of the image with its recompressed image. This helps to detect image forgery better because it can be observed that in the feature image the forged part gets highlighted. Hence, it has resulted in achieving high accuracy. On the other hand, [2,3,27] show poor accuracy in image forgery detection as these techniques try to find image forgery at the pixel level, and due to this there are false positive pixels reported which reduces their overall forgery detection accuracy at the image level. As mentioned in the previous section, we used JPEG compression to recompress the image; now, various quality factors are available while recompressing the image. So we have evaluated the proposed model for different JPEG quality factors and reported them as well. It is observed that the accuracy is better if the quality factor is kept at more than 90. The proposed technique achieved better accuracy as it utilizes better input features rather than directly using the original image as input features. To verify this we have trained our model by directly using the original images (instead of the better processed features), and its results are also reported in Table 2. It can be observed that in such a scenario the accuracy of the model drops from 92.23% to 72.37%, this shows the effectiveness of the processed input features (the difference of the original image with its recompressed version). Figure 8 show the comparison of the accuracy and the F measure for the proposed method and the other techniques. Figure 9 shows a few visual examples demonstrating the proposed approach's performance. The codes and the test images used by the proposed technique are available at codes (https://github.com/sadaf-ali/Image-forgery-detection-using-deeplearning-byrecompressing-the-images, accessed on 23 January 2022). We have acquired 92.23% accuracy on the test images when the recompressed images' quality factor (JPEG compression) is 98. Compared to the other techniques, our technique achieved much better performance in detecting the presence of the image forgery in the image. It must also be noted that our method can handle both image splicing and copymove types of image forgeries. 
In contrast, many of the state-of-the-art techniques can handle only one type of image forgery, and very few can handle both types. Note: "*" means that the model has been retrained on the CASIA 2.0 database.

Table 3 shows the average time taken by the various techniques to process an image and to predict whether it is genuine or tampered. In terms of predicting the presence of forgery in an image, the proposed technique is faster and more efficient than the other state-of-the-art techniques. Figure 10 pictorially compares the average time taken by the proposed method and the other techniques for forgery detection in an image. This is because we provide an efficient feature image to our model, and the proposed CNN-based model is relatively light compared to the other techniques. As a result, it can provide predictions in a much shorter amount of time, which makes our model advantageous in real-world scenarios. Table 4 shows the comparison of the proposed technique with the other techniques.

Table 3. Average time taken by Mantra-Net [3], Buster-Net [27], CAT-Net [2], and the proposed technique to process an image for forgery detection.
Technique — Time taken
Mantra-Net [3] — 10,927
Buster-Net [27] — 1160
CAT-Net [2] — 2506
Proposed technique — 34

Table 4. Comparison of the proposed technique (left) with the other techniques (right).
1. It focuses on whether the given image as a whole is tampered or genuine. — They focus more on where the forgery is present in the given image (pixel-level forgery detection).
2. It uses the feature image (the difference of the original image and its recompressed version), in which the forged part gets highlighted, which helps to detect image forgery better. — They directly use the image in the original pixel format, which makes image forgery detection more difficult.
3. It utilizes enhanced features, due to which it is able to handle both image splicing and copy-move image forgeries. — Most of the techniques do not utilize enhanced features, due to which they are usually able to handle only one type of image forgery.
4. It attains a high accuracy in classifying images as tampered or not. — They perform pixel-level forgery detection and therefore suffer from false-positive pixels, which degrades their image-level classification.
5. It is much faster and hence suitable for slow machines as well. — These techniques are much slower and hence not suitable for slower machines.

Hence, from the experimental results, the following observations can be made: • Unlike other techniques, the proposed technique works well for both image splicing and copy-move types of image forgeries. • It is highly efficient for image forgery detection and has exhibited significantly better performance than the other techniques. • The difference in the compression of the forged part and the genuine part of the image is a good feature that can be learned efficiently by our CNN-based model, which makes the proposed technique more robust in comparison to the other techniques. • The proposed model is much faster than the other techniques, making it ideal and suitable for real-world usage, as it can be implemented even on slower machines.

Conclusions and Future Work
The increased availability of cameras has made photography popular in recent years. Images play a crucial role in our lives and have evolved into an essential means of conveying information, since the general public quickly understands them.
There are various tools available to edit images; these tools are primarily intended to enhance images, but they are frequently exploited instead to forge images and spread misinformation. As a result, image forgery has become a significant problem and a matter of concern. In this paper, we provide a unique image forgery detection system based on neural networks and deep learning, emphasizing a CNN architecture approach. To achieve satisfactory results, the suggested method uses a CNN architecture that incorporates variations in image compression. We use the difference between the original and recompressed images to train the model. The proposed technique can efficiently detect image splicing and copy-move types of image forgeries. The experimental results are highly encouraging, showing an overall validation accuracy of 92.23% within a defined iteration limit. We plan to extend our technique to image forgery localization in the future. We will also combine the proposed technique with other known image localization techniques to improve their performance in terms of accuracy and to reduce their time complexity. We will enhance the proposed technique to handle spoofing [50] as well. The present technique requires the image resolution to be at least 128 × 128, so we will enhance it to work well for tiny images. We will also develop a challenging, extensive image forgery database to train deep learning networks for image forgery detection.
Identification of circular RNAs as novel biomarkers and potentially functional competing endogenous RNA network for myelodysplastic syndrome patients Abstract Circular RNAs (circRNAs) have been identified to exert vital biological functions and can be used as new biomarkers in a number of tumors. However, little is known about the functions of circRNAs in myelodysplastic syndrome (MDS). Here, we aimed to investigate circRNA expression profiles and to investigate the functional and clinical value of circRNAs in MDS. Differential expression of circRNAs between MDS and control subjects was analyzed using circRNA arrays, in which we identified 145 upregulated circRNAs and 224 downregulated circRNAs. Validated by real‐time quantitative PCR between 100 MDS patients and 20 controls, three upregulated (hsa_circRNA_100352, hsa_circRNA_104056, and hsa_circRNA_104634) and three downregulated (hsa_circRNA_103846, hsa_circRNA_102817, and hsa_circRNA_102526) circRNAs matched the arrays. The receiver operating characteristic curve analysis of these circRNAs showed that the area under the curve was 0.7266, 0.8676, 0.7349, 0.7091, 0.8806, and 0.7472, respectively. Kaplan‐Meier survival analysis showed that only hsa_circRNA_100352, hsa_circRNA_104056, and hsa_circRNA_102817 were significantly associated with overall survival. Furthermore, we generated a competing endogenous RNA network focused on hsa_circRNA_100352, hsa_circRNA_104056, and hsa_circRNA_102817. Analyses using Gene Ontology and Kyoto Encyclopedia of Genes and Genomes showed that the three circRNAs were linked with some important cancer‐related functions and pathways. Circular RNAs are a new type of endogenous RNA, whose 5′ and 3′ ends are joined together by splicing to form a covalent closed continuous loop. This structure is typically nonpolyadenylate and resistant to exonucleases, hence making them more stable than their linear counterparts. Circular RNAs are abundant, conserved, and stable in the cytoplasm. They have tissue-specific and developmental stage-specific expression patterns and act as miRNA sponges and RNA-binding proteins to regulate gene expression at the transcriptional and posttranscriptional level. 2 Circular RNAs have also been reported to be abundant and stable in human serum, especially in exosomes. Their expression patterns are significantly different between tumor samples and normal tissues, suggesting that circRNAs could be used as a biomarker for tumor diagnosis and prognosis. 3,4 Researchers have identified the clinical value of circRNAs in a number of cancers, including, lung cancer, 5 hepatocellular carcinoma, 6,7 and colorectal cancer. 8,9 However, little is known about the roles of circRNAs in MDS. In this study, we aim to discover novel functional circRNAs and potential diagnostic and prognostic values of MDS by identification of differentially expressed circRNAs in BM of MDS patients. The circRNA expression profile was analyzed by circRNA array and then validated by qPCR. Receiver operating characteristic curve analysis and the Kaplan-Meier method with the log-rank test were used to assess the diagnostic and prognostic value of circRNAs. Bioinformatic analysis was also used to set a circRNA-miRNA-mRNA interaction network and predict possible functions of circRNAs. 
| Patient recruitment and sample description One hundred MDS patients (refractory anemia, n = 2; refractory anemia with ringed sideroblasts, n = 2; RCMD, n = 83; RAEB-1, n = 4; RAEB-2, n = 6; and MDS, unclassifiable, n = 3) and 20 control subjects were recruited from Shanghai Huashan hospital between 2007 and 2016. Patients with RCMD were newly diagnosed based on the 2008 WHO criteria and had not yet undergone treatment. 10 Controls refer to samples from patients with benign blood diseases. Bone marrow aspiration is a traumatic procedure, and it is not widely accepted by patients. Together with ethical considerations, it is difficult to collect BM samples from healthy people. Therefore, samples of patients with benign blood diseases were enrolled as controls. Due to hematological count anomalies or hypersplenism, these patients had undergone BM tests and no abnormalities were found. After 2 years of follow-up, the BM of these patients remained normal. Bone marrow samples (5 mL) were collected from all 100 MDS patients and 20 control subjects. Blood samples (3 mL) were collected using an EDTA anticoagulated vacutainer from the 100 MDS patients. Bone marrow and PBMCs were isolated from all subjects using a Ficoll solution within 12 hours after sample collection. The study was approved by the institutional review board at Huashan Hospital of Fudan University and informed consent was obtained from each study participant. | RNA isolation and reverse transcription Total RNA was extracted from each sample of BM and PBMCs using the QIAamp RNA Blood Mini Kit (Qiagen) following the manufacturer's instructions. RNA quality was assessed using the Bio-Rad Experion electrophoretogram instrument (Bio-Rad). The purity of RNA was assessed by measuring the optical density of 260/280 ratio using a NanoDrop spectrophotometer. Samples were used in reverse transcription reactions when the A260/A280 was between 1.8 and 2.0. RNA samples were reverse transcribed into cDNA using Takara PrimeScript RT Master Mix (Takara) according to the manufacturer's protocol and stored at −80°C until further use. | Quantitative real-time PCR assay Real-time qPCRs were carried out using the CBX 1000 Sequence Detection System (Applied Biosystems). SYBR Premix Ex Taq PCR reagents (Takara) were used for amplification and detection following the manufacturer's instructions. Specifically, 10 μL reaction mixture containing 5 μL SYBR Green premix, 0.2 μL of each 10 mol/L primer, 1 μL cDNA, and 3.6 μL diethylpyrocarbonate-treated water was heated at 95°C for 30 s and subsequently subjected to 40 cycles of 95°C for 5 s and 60°C for 30 s. The C T value was the fractional cycle number at which the fluorescence exceeded the given threshold. We used GAPDH to normalize the qPCR. The relative expression levels of circRNAs were determined using the 2 −∆CT method. Divergent primers were selected for circRNAs and primers for GAPDH were synthesized by Biosune. All qRT-PCR primer sequences are illustrated in Table 1. | Sanger sequencing Products of qRT-PCR were sent for Sanger sequencing to determine the full-length sequence and confirm the splice junctions of the selected circRNAs. Sanger sequencing was carried out by Biosune. The characteristics of the nine patients were shown in Table 2. Data were quality controlled and normalized. P value and fold change were calculated using the Student's t test. 
Differentially expressed circRNAs were selected based on threshold values of fold change greater than or equal to 2 and P value less than or equal to .05. | Bioinformatics analysis Circular RNA-miRNA interaction was predicted by TargetScan and miRanda. Target mRNAs of miRNAs were analyzed by TargetScan, miRDB, and miRWalk. Cytoscape 3.8.0 software 11 was used to visualize the circRNA-miRNA-mRNA network. Gene Ontology annotation and KEGG analysis were undertaken using an R package named clusterProfiler. 12 | Statistical analysis All data were verified as nonnormal distributed using Shapiro-Wilk tests and presented as median (25%, 75%). The Mann-Whitney U test was used to analyze the data, with a significant value cut-off of P < .05. The clinical diagnostic value of any given circRNA was verified by ROC curve analysis, in which AUC > 0.5 and P < .05 indicated diagnostic value. The cut-off value and corresponding sensitivity and specificity were identified through ROC curve analysis. Kaplan-Meier survival analysis was carried out to analyze the prognostic value and the statistical significance was obtained using the log-rank test. The χ 2 -test or Fisher's exact test was used to compare the difference of categorical variables between patients with lower and higher circRNA expression. The correlation between circRNA expression levels in BM and PB samples was evaluated using Spearman's correlation test. Statistical analyses were undertaken using GraphPad Prism 6. | Bone marrow circRNA expression profile in MDS patients To identify differentially expressed circRNAs between MDS patients and control subjects, we extracted BM total RNA from five MDS patients and four control subjects for circRNA array assay analysis. Three hundred and sixty-nine circRNAs were differently expressed out of the 3459 circRNAs in the array, of which 145 circRNAs were upregulated and 224 were downregulated in MDS patient samples. the amplified product of circRNA was consistent with the CircBase sequence; (b) results of the Sanger sequencing confirmed the backsplice junctions of the circRNAs selected; and (c) the specificity of the amplified circRNA product was confirmed by electrophoresis. | Validating the results of circRNA array These inclusion criteria for selection proved that the selected circR-NAs naturally existed as a loop in BM and could be amplified by qRT-PCR. To identify the most clinically applicable biomarker, we chose six circRNAs that showed the highest fold differences (P < .05) between MDS and control subjects. Information of circRNAs is listed in Table 3. Then we verified the six circRNAs by qRT-PCR in a cohort consisting of 20 control subjects and 100 MDS patients. The results were consistent with the circRNA array ( Figure 1). | Receiver operating characteristic analysis of six circRNAs in MDS patients To compare the diagnostic value of the previously selected six circRNAs as candidate biomarkers of MDS, we undertook ROC curve analysis for each circRNA with the same cohort of 20 control subjects and 100 MDS patients. As shown in Figure 2, the AUC was larger than 0.500 (P < .05) for six circRNAs, suggesting their potential diagnostic value. Notably, the AUC of hsa_circRNA_104056 was 0.89 (P < .001), which was the largest among the selected circRNAs. The sensitivity and specificity of each circRNA were determined based on the cut-off value (shown in Table 4). Hsa_circRNA_104056 had the highest sensitivity and specificity (0.8 and 0.8421, respectively). 
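The relative quantification and ROC summaries described above can be sketched as follows; this is an illustrative Python rendering (the statistical analyses in the text were carried out with GraphPad Prism 6), and the binary labels, input layout, and Youden-index cutoff rule are assumptions. For downregulated circRNAs the expression values would be negated, or the labels inverted, so that larger scores point toward MDS.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

def relative_quantity(ct_target, ct_gapdh):
    """RQ = 2^-(Ct_target - Ct_GAPDH), i.e., the 2^-dCT method normalized to GAPDH."""
    delta_ct = np.asarray(ct_target, dtype=float) - np.asarray(ct_gapdh, dtype=float)
    return np.power(2.0, -delta_ct)

def roc_summary(expression, is_mds):
    """AUC plus the cutoff (Youden index) with its sensitivity and specificity."""
    fpr, tpr, thresholds = roc_curve(is_mds, expression)
    best = np.argmax(tpr - fpr)          # Youden's J = sensitivity + specificity - 1
    return {"AUC": roc_auc_score(is_mds, expression),
            "cutoff": thresholds[best],
            "sensitivity": tpr[best],
            "specificity": 1.0 - fpr[best]}
```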
| Kaplan-Meier survival analysis of six circRNAs in MDS patients
We then examined whether the alteration of each circRNA expression could predict the prognosis of MDS patients. The survival curves for patients with MDS according to different circRNA expression levels are shown in Figure 3. We found that the elevated

| Association between circRNA expression level and clinicopathological characteristics of MDS patients
To determine the clinical significance of the expression level of circRNAs, we divided the MDS patients into two groups based on the median RQ value of each circRNA. As shown in Table 5, we found that higher hsa_circRNA_100352 expression was more frequently seen in patients younger than 60 years of age (P = .020), or in patients with lower Revised International Prognostic Scoring System (IPSS-R) scores (P = .019). Upregulation of hsa_circRNA_104056 was significantly associated with cytogenetics (P = .033). The cytogenetics of patients with higher hsa_circRNA_100352 expression were more likely to be good. However, the expression level of hsa_circRNA_102817 had no significant correlation with any of these clinicopathologic characteristics. Abbreviations: AUC, area under the receiver operating characteristic curve; CI, confidence interval.

| Spearman's correlation test showed that there was no correlation between expression levels of circRNAs in BM and PB samples
As BM aspiration is invasive and not easily accepted by patients,

| Circular RNA-miRNA-mRNA network
Arraystar's homemade miRNA target prediction software, based on the TargetScan and miRanda databases, was used to predict miRNAs that could bind to hsa_circRNA_100352, hsa_circRNA_104056, or hsa_circRNA_102817. The results are listed in Table 6 and Figure 5. Target mRNAs of the miRNAs were analyzed using three databases, TargetScan, miRDB, and miRWalk. Targets that had been annotated by all three databases were filtered out. The circRNA-miRNA-mRNA network was developed to predict potential target mRNAs of circRNAs based on the mechanism that circRNAs can bind to miRNAs, which can further silence certain gene expression by degrading the mRNA transcripts. The circRNA-miRNA-mRNA network diagram was drawn using Cytoscape 3.8.0 (Figure 6).

| Gene Ontology and KEGG analyses
The lists of all the predicted genes were analyzed by the GO and KEGG approaches in R. Specifically, these genes were most enriched in the biological processes of response to transforming growth factor-β and cell cycle arrest (P < .01), the cellular components of the histone deacetylase complex and transcription regulator complex (P < .01), and the molecular functions of SMAD binding and cadherin binding (P < .05) (Figure 7A). The KEGG analysis revealed that these genes were enriched in viral carcinogenesis, transcriptional misregulation in cancer, and the p53 signaling pathway (Figure 7B). factor-κB, 18 and the miR-143-5p/hypoxia-inducible factor-1α pathway. 19 These miRNAs can influence cell proliferation, differentiation, invasion, and apoptosis, and play important roles in several tumors.

ACKNOWLEDGMENTS
This work was supported in part by grants from the National Natural Science Foundation of China (Grant No. 81270583).

DISCLOSURE
The authors have no conflict of interest.
Stochastic Theory of Accelerated Detectors in a Quantum Field We analyze the statistical mechanical properties of n-detectors in arbitrary states of motion interacting with each other via a quantum field. We use the open system concept and the influence functional method to calculate the influence of quantum fields on detectors in motion, and the mutual influence of detectors via fields. We discuss the difference between self and mutual impedance and advanced and retarded noise. The mutual effects of detectors on each other can be studied from the Langevin equations derived from the influence functional, as it contains the backreaction of the field on the system self-consistently. We show the existence of general fluctuation- dissipation relations, and for trajectories without event horizons, correlation-propagation relations, which succinctly encapsulate these quantum statistical phenomena. These findings serve to clarify some existing confusions in the accelerated detector problem. The general methodology presented here could also serve as a platform to explore the quantum statistical properties of particles and fields, with practical applications in atomic and optical physics problems. Introduction The physics of accelerated detectors became an interesting subject of investigation when Unruh showed that a detector moving with uniform acceleration sees the vacuum state of some quantum field in Minkowski space as a thermal bath with temperature T U =ha/2πck B [1]. This seminal work which uses the structure of quantum field theory of Rindler space explored by Fulling [2], and the Bogolubov transformation ideas invented by Parker for cosmological particle creation [3] earlier, draws a clear parallel with the fundamental discovery of Hawking radiation in black holes [4]. The discovery of Unruh effect (see also Davies [5]) sets off the first wave of activities on this subject. The state-of-the-art understanding of the physics of this problem in this first stage of work is represented by the paper of Unruh and Wald [6]. We refer the readers to the reviews of Sciama, Candelas and Deutsch [7], Tagaki [8] and Ginzburg and Frolov [9]. The second stage of investigation on this problem was initiated by the inquiry of Grove [10], who challenged the prevailing view and asked the question whether the detector actually radiates. This was answered in the negative by an inspiring paper of Raine, Sciama and Grove [11] (RSG henceforth) who considered an exactly solvable harmonic oscillator detector model and analyzed what an inertial observer sees in the forward light cone of the accelerating detector via a Langevin equation. Unruh [12] performed an independent calculation and concurred with the findings of RSG to the extent that the energy-momentum tensor of the field as modified by the presence of the accelerating detector vanishes over most of the spacetime (except on the horizons). However, he also showed the existence of extra terms in the two-point function of the field beyond its value in the absence of the accelerating detector, and argued that these terms would contribute to the excitation of a detector placed in the forward light cone. These terms were missed out in RSG. 
Following these exchanges, there was a recent renewed interest in this problem, notably the series of papers by Massar, Parantani and Brout [13] (MPB), who gave a detailed analysis via Hamiltonian quantum mechanics of the two-point function and pointed out that the missing terms contribute to a polarization cloud around the accelerating detector; Hinterleitner [14], who independently discussed the backreaction of the detector on the field using a slightly different yet exactly solvable model and arrived at similar conclusions to MPB; and Audretsch and Müller [15], who explored nonlocal pair correlations in accelerated detectors. However, the physical significance of the polarization cloud, its connection to the noise experienced by another detector, and to the inherent correlations in the free Minkowski vacuum, remain largely unexplored. Beginning with this work we would like to add a new dimension to this problem and open up the third stage of investigation. The new emphasis is in exploring the statistical mechanics of particles and fields, and in particular, moving detectors on arbitrary trajectories. We analyze the stochastic properties of quantum fields and discuss this problem in terms of quantum noise, correlation and dissipation. We use the open system concept and the influence functional method to treat a system of n detectors interacting with a scalar field. This method enables one to examine the influence of detectors in motion on quantum fields, the mutual influences of detectors via fields, as well as the backreaction of fields on detectors in a self-consistent manner. As explained earlier [16], the influence functional method is a generalization of the powerful effective action method in quantum field theory for treating backreaction problems, which also incorporates statistical mechanics notions such as noise, fluctuations, decoherence and dissipation. Indeed, one of us has long held the viewpoint that [17,18], to get a more profound understanding of the meaning of Unruh and Hawking effects and the black hole information and backreaction problems one cannot be satisfied with the equilibrium thermodynamics description. It is necessary to probe deeper into the statistical properties of quantum fields, their correlations and dynamics, coherence and decoherence of the particlefield system, the relation of quantum noise and thermal radiance, fluctuation-dissipation relations, etc. Earlier investigation of correlation and dissipation in the Boltzmann-BBGKY scheme [19,20,21] and the properties of noise and fluctuations in the Langevin framework [22,23,24] are essential preparations for tackling such problems at a deeper level. 1 With this theoretical perspective in mind, we have recently begun a systematic study of the accelerated detector problem [25,23]. We show that thermal radiance can be understood as originating from quantum noise under different kinematical (moving detector) and dynamical (cosmology) excitations. 
The aim of this paper is to 1) show on both the conceptual and technical levels the power and versatility of this new method, 2) settle some open questions and clarify some existing confusions, such as the existence of radiation and polarization, solely from an analysis of detector response, 3) introduce new concepts such as self and mutual impedance, advanced and retarded noise, fluctuation-dissipation and correlation-propagation relations using the accelerated detectors problem as example, and finally 4) suggest new avenues of investigations into the statistical mechanics of particle and fields, including black hole physics. Employing a set of coupled stochastic equations for the detector dynamics, we analyze the influence of an accelerated detector on a probe which is not allowed to causally influence the accelerated detector itself. We find, as did [11,12,13] that most of the terms in the correlations of the stochastic force acting on the probe cancel each other. This cancellation is understood in the light of a correlation-propagation relation, derived as a simple construction from the fluctuation-dissipation relation for the accelerated detector. Such a relation can be equivalently viewed as a construction of the free field two-point function for each point on either trajectory from the two-point function along the uniformly accelerated trajectory alone. The remaining terms, which contribute to the excitation of the probe, are shown to represent correlations of the free field across the future horizon of the accelerating detector. In this problem, the dissipative properties of either detector remain unchanged by the presence of the other. This happens because the probe cannot influence the accelerated detector. However, the stochastic force acting on the probe plays a non-trivial role. We also consider the problem of two inertial detectors which can backreact on each other. This mutual backreaction changes the self-impedance functions of these detectors, and introduces mutual impedances as well. The dissipative properties of each detector are thus altered due to the presence of the other one. This physical effect is in a sense complementary to the effects manifested in the accelerated detector problem, where the probe does not backreact on the accelerated detector. The paper is organized as follows. In Section 2 we develop the influence functional formalism describing the influence of a massless scalar field on a system of an arbitrary number of detectors moving on arbitrary trajectories. The field modes are integrated out in this formalism, and effective stochastic equations of motion for the various detectors are obtained. In Section 3 we consider some applications of this formalism to three simple cases, the primary one being the analysis of two inertial detectors coupled to the same quantum field. In Section 4 we treat the RSG excitation of a probe in the presence of a uniformly accelerating detector. In Section 5 we show the existence of fluctuation-dissipation relations governing the detector system. These relations are used as a starting point for obtaining more general relations between the correlations of various detectors and the radiation mediated by them. Such relations are also discussed in the specific context of the RSG model. Finally, in the Appendix, we point out problems associated with the uncorrelated detector-field initial state in a minimally coupled model, and argue that these problems are removed in a derivative coupling model. 
We present a simple prescription for switching from one model to the other. Scalar Electrodynamics or Minimal Coupling Model The paper by Raine, Sciama and Grove uses the scalar electrodynamic or "minimal" coupling of oscillators to a scalar field in 1+1 dimensions. This coupling provides a positive definite Hamiltonian, and is of interest because it resembles the actual coupling of charged particles to an electromagnetic field. In this section, we derive the influence functional describing the effect of a scalar field on the dynamics of an arbitrary number of detectors modelled as minimally coupled oscillators. The detectors move along arbitrary trajectories. We assume that the field and the system of detectors are initially decoupled from each other, and that the field is initially in the Minkowski vacuum state. The formalism can be simply extended to higher dimensions, and to different choices of initial state for the field. We also obtain coupled Langevin equations for the detector system. Influence functional for N arbitrarily moving detectors Consider N detectors i = 1, ..N in 1+1 dimensions with internal oscillator coordinates Q i (τ i ), and trajectories (x i (τ i ), t i (τ i )) , τ i being a parameter along the trajectory of detector i. In the following analysis, we do not need to assume that τ i is the proper time, although this is, in most cases, a convenient choice. However, we will assume hereafter that the trajectories (t i (τ i ), x i (τ i )) are smooth and that the parameters τ i are chosen such that t i (τ i ) is a strictly increasing function of τ i . The detectors are coupled to a massless scalar field φ(x, t) via the interaction action Here, T is a global Minkowski time coordinate which defines a spacelike hypersurface, e i denotes the coupling constant of detector i to the field, s i (τ i ) is the switching function for detector i (typically a step function), and t −1 i is the inverse function of t i . t −1 i (T ) is therefore the value of τ i at the point of intersection of the spacelike hypersurface defined by T with the trajectory of detector i. Note that the strictly increasing property of t i (τ i ) implies that the inverse, if it exists, is unique. The action of the system of detectors is The scalar field action is given by and the complete action Expanding the field in normal modes, where ′ k denotes that the summation is restricted to the upper half k space, k > 0. Then the action for the scalar field is given by (σ = +, −) and the interaction action is We have t i (τ i ) < T , which follows from τ i < t −1 i (T ) and the property that t i (τ i ) is a strictly increasing function. Hence we may replace the upper limit of the dt integration by T . This replacement leads to the expression: The action S f ield +S int therefore describes a system of decoupled harmonic oscillators each driven by separate source terms. The zero temperature influence functional (corresponding to the initial state of the field being the Minkowski vacuum) for this system has the form [22]: If the field is initially in a thermal state, the influence functional has the same form as above, and the quantity ζ k becomes (2.13) β being the inverse temperature. We shall restrict our attention to the zero temperature case. Substituting for the J σ k 's in the influence functional, and carrying out the δ-function integrations, one obtains where In the above, the continuum limit in the mode sum is recovered through the replacement ′ k → L 2π ∞ 0 dk. 
We then obtain, after substituting for u σ k and ζ k , ). (2.16) In this form, Z ij is proportional to the two point function of the free scalar field in the Minkowski vacuum, evaluated for the two points lying on trajectories i and j of the detector system. It obeys the symmetry relation Corresponding to (2.12), we may also split Z ij into its real and imaginary parts. Thus we define ν andμ are proportional to the anticommutator and the commutator of the field in the Minkowski vacuum, respectively. The quantities Z ij are also conveniently expressed in terms of advanced and retarded null and the superscripts a and r denote advanced and retarded respectively. 2 Similar decompositions forν ij andμ ij thus follow. The influence functional, along with the action for the detector system, can be employed to obtain the propogator for the density matrix of the system of detectors. This propogator will contain complete information about the dynamics of the detectors. However, we shall take the alternative approach of deriving Langevin equations for the detector system in order to describe its dynamics. Langevin equations In this subsection, we wish to derive the effective stochastic equations of motion for the Ndetector system. In the previous subsection, we integrated out the field degrees of freedom. The effect of this is to introduce long-range interactions between the various detectors. Going back to the form (2.11) for the influence functional, we define the centre of mass and relative variables Correspondingly, we also find it convenient to define | F | is the absolute value of F , containing the kernel ν k . The phase of F contains the kernel µ k . In the second equality, we have used a functional gaussian integral identity, P [ξ σ k ] being the positive definite measure normalized to unity. It can therefore be interpreted as a probability distribution over the function space ξ σ k . The influence functional can thus be expressed as where < > denotes expectation value with respect to the joint distribution Π ′ k,σ P [ξ σ k ]. S inf will be called the stochastic influence action. We find where { , } denotes the anticommutator. Substituting for J −σ k and J +σ k in terms of the detector degrees of freedom {Q i }, the stochastic influence action S inf is obtained as From Equation (2.29) we see that the quantitiesμ ij , i = j mediate long-range interactions between the various detectors and the quantitiesμ ii describe self-interaction of each detector due to its interaction with the field. This self-interaction typically manifests itself as a dissipative (or radiation reaction) force in the dynamics of the detectors. We will, therefore, refer toμ ij , i = j as a "propagation kernel", andμ ii as a "dissipation kernel". We now turn to the interpretation of the quantities η i . They appear as source terms in the effective action of the detector system. Also, being linear combinations of the quantities ξ σ k , they are stochastic in nature. Indeed, from Equations (2.28) and (2.30) we can obtain Thusν ij appears as a correlator of the stochastic forces η i and η j . Along a fixed trajectory, this correlation manifests as noise in the detector dynamics. Hence we callν ii a "noise kernel" andν ij , i = j, a "correlation kernel". 3 The full stochastic effective action for the N-detector system is given by We may now express this in terms of the variables Q + i and Q − i defined earlier. 
Thus we obtain Extremizing the effective action with respect to Q − i and setting Q i = Q ′ i at the end [22], we obtain a set of coupled equations of motion, the Langevin equations, for the system of detectors: Due to the back-reaction of each detector on the field, and consequently on other detectors, the effective dynamics of the detector system is highly non-trivial and, as such, can be solved in closed form only for simple trajectories or under simplifying assumptions such as ignoring the back-reaction of certain detectors on the field. For instance, if we choose to ignore the back-reaction of detector i on the field, this can be effected by settingμ ji = 0, for all j, including j = i,while at the same time keepingμ ij = 0 for j = i. The particular casẽ µ ii = 0 amounts to ignoring the radiation reaction of detector i. This is necessary because the radiation reaction effect arises due to a modification of the field in the vicinity of the detector as a consequence of the back-reaction of the detector on the field. Of course, it is in general inconsistent to ignore the back-reaction of a detector, as it leads to a direct violation of the symmetry (2.17). As is well-known, it also leads to unphysical predictions. For example, in the treatment of an atom on an inertial trajectory, coupled to a quantum field, balance of vacuum fluctuations and radiation reaction is necessary to ensure the stability of the ground state. As explained above, ignoring back-reaction implies ignoring the radiation reaction force. Such a treatment would render the ground state unstable. However, in certain cases, the quantitiesμ ji may not contribute to the dynamics of detector j, as in Section 4 below, where the trajectory of one detector is always outside the causal future of the other one. Hence there is no retarded effect of one of the detectors on the other. Our formal treatment of the detector-field system is exact in that it includes the full backreaction of the detectors on the field, which is manifested in the coupled Langevin equations of the various detectors. The coupled equations of motion give rise to a sort of "dynamical correlation" between the various detectors. Non-dynamical correlations also occur because of the intrinsic correlations in the state of the field (Minkowski vacuum). These correlations are purely quantum-mechanical in origin, and they are reflected in the correlators of the stochastic forces,ν ij . Correlations between stochastic forces on different detectors induce correlations between the coordinates Q i of different detectors. As we shall show in a later section, our exact treatment makes it possible to demonstrate the existence of generalized fluctuation-dissipation and correlation-propagation relations governing the detector system. Examples In this and the following section, we consider some applications of the Langevin equations derived in the previous section to the cases of a single detector in the Minkowski vacuum moving on an inertial trajectory, a single detector on a uniformly accelerated trajectory, two detectors on inertial trajectories, and the case of one detector on a uniformly accelerated trajectory and another one on an arbitrary trajectory, functioning as a probe. The first two examples serve to illustrate the formalism, and describe the well-known physical effects of the dressing of a particle by the field and the thermal Unruh noise experienced by a uniformly accelerated particle. 
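Before turning to the examples, note that when the coupled Langevin equations cannot be solved in closed form they can still be integrated numerically for given realizations of the stochastic forces. The Python sketch below integrates a single damped oscillator of the generic form Q'' + 2γQ' + Ω₀²Q = η(τ); the white-noise stand-in for η and the parameter values are purely illustrative and do not reproduce the colored noise kernels derived above.

import numpy as np

def integrate_langevin(omega0=1.0, gamma=0.1, noise_amp=0.5, dt=1e-3, n_steps=50_000, seed=0):
    """Euler-Maruyama integration of Q'' + 2*gamma*Q' + omega0**2 * Q = eta(t).

    eta is modelled here as Gaussian white noise of strength noise_amp, an
    illustrative stand-in for the colored noise kernel of the text.
    """
    rng = np.random.default_rng(seed)
    q = np.zeros(n_steps)
    v = np.zeros(n_steps)
    for n in range(n_steps - 1):
        eta = noise_amp * rng.standard_normal() / np.sqrt(dt)  # white-noise increment scaling
        a = -2.0 * gamma * v[n] - omega0**2 * q[n] + eta
        v[n + 1] = v[n] + a * dt
        q[n + 1] = q[n] + v[n + 1] * dt  # semi-implicit update for better stability
    return q, v

q, v = integrate_langevin()
print(q.mean(), q.var())  # the late-time variance settles to a steady value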
In the example of two inertial detectors, we introduce the notions of "self" and "mutual" impedance which govern the response of either detector. The effect of the back-reaction of each detector on the field and consequently on the other detector is to introduce the so-called mutual impedance in the detector response as well as to modify the self-impedance of each detector from its value in the absence of the other one. In the next section we shall consider the example of one detector on a uniformly accelerated trajectory and a probe, which moves along an unspecified trajectory. We switch on the probe after it intersects the future horizon of the uniformly accelerated detector, so that it cannot causally influence the uniformly accelerated one. Thus the uniformly accelerated detector in this case is effectively in an unperturbed Unruh heat bath, and this situation mimics most closely the RSG model. The missing terms in the RSG analysis, which contribute to a polarization cloud around the accelerated oscillator, but not to the energy momentum tensor, lead to a modified noise kernel in the Langevin equation for the probe. In all cases, we can solve exactly for the detector coordinates, at least in the late time limit (this limit is actually realized at any finite time t ≫ −∞ when the two detectors have been switched on forever, and corresponds to the neglect of transients in the solutions for the detector coordinates). One inertial detector Consider the case of one detector moving on an inertial trajectory x(τ ) = 0, t(τ ) = τ , and switched on forever (s(τ ) = 1). The noise and dissipation kernels take the form The Langevin equation becomes It will be convenient to define the dissipation constant γ = e 2 4 . We will restrict our attention to the underdamped case (γ ≤ Ω 0 ). Introducing the Fourier transform and similarly for η(τ ), we obtainQ with the impedance function χ ω defined as In the above solution for the detector coordinate in frequency space, it should be noted that transients have already been neglected. Transient terms correspond to delta functions in frequency space, the coefficients of these delta functions being determined by the initial conditions. For the complete solution these terms should be added to the right hand side of Equation (3.6). We may thus obtain We can therefore obtain the correlator of Q(τ ) and Q(τ ′ ), as One accelerated detector: Unruh effect In the case of an accelerated detector moving on the trajectory x(τ ) = a −1 cosh aτ , t(τ ) = a −1 sinh aτ , and s(τ ) = 1 (τ being the proper time along the accelerated trajectory), the noise and dissipation kernels take the form: These kernels can be decomposed into advanced and retarded parts, by writing, for examplẽ We can then use the changes of variables k → k 2 e ± a 2 (τ −τ ′ ) to obtaiñ showing that the noise felt by the accelerating detector is isotropic. One can also make a similar simplification for the kernelμ. These expressions can then be further simplified [25,23] by means of the integral transform [27] where K iα (a) is a Bessel function of imaginary argument, to yield The noise experienced by the detector is thus stationary and the factor coth( πk a ) in the noise kernel shows that it is also thermal, at the Unruh temperature k B T =h a 2π (we have chosen units such that c = 1). The dissipation kernel remains identical to that of the inertial detector. 
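To give a sense of scale for the Unruh temperature k_B T = ħa/(2π) quoted above (c = 1 there), the short Python check below restores SI units, T = ħa/(2π c k_B), and evaluates it for a few illustrative accelerations.

import math

hbar = 1.054_571_817e-34   # J*s
c = 2.997_924_58e8         # m/s
k_B = 1.380_649e-23        # J/K

def unruh_temperature(a):
    """Unruh temperature in kelvin for a proper acceleration a in m/s^2."""
    return hbar * a / (2.0 * math.pi * c * k_B)

for a in (9.81, 1e20, 1e26):
    print(f"a = {a:.3g} m/s^2  ->  T = {unruh_temperature(a):.3g} K")
# a = 9.81 m/s^2 gives T ~ 4e-20 K; reaching T ~ 1 K requires a ~ 2.5e20 m/s^2.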
Based on the property that the two-point function of a free field on an accelerated trajectory evaluated in the Minkowski vacuum state is identical to the two-point function on an inertial trajectory evaluated in a thermal state at the Unruh temperature, this fact can be explained as follows: The dissipation kernel is proportional to the commutator of the free quantum field evaluated in whatever state the field is in. However, the commutator of a free field for any two points is just a c-number, hence its expectation value is independent of the state of the field. In particular, it does not distinguish between a zero temperature and a thermal state. So the dissipation kernel is identical to that in the inertial case. The anticommutator is, however, an operator whose expectation value depends on the state of the field, and therefore shows the familiar departure from the inertial case. The Langevin equation for the detector coordinate is Similar to the inertial detector case, we find Combining the two equations, with the impedance function χ k as defined in the inertial case. Two inertial detectors: self and mutual impedance We now consider the case of two detectors moving on the inertial trajectories x 1 (τ 1 ) = −x 0 /2, x 2 (τ 2 ) = x 0 /2 and t 1 (τ 1 ) = t 2 (τ 2 ) = τ , coupled to a scalar field initially in the Minkowski vacuum state, with coupling constants e 1,2 . They are separated by a fixed coordinate distance x 0 . As before, we will assume that both detectors have been forever switched on, i.e. s i (τ ) = 1, i = 1, 2. It will be convenient to express the noise, dissipation, correlation and propagation kernels as the real and imaginary parts of the functions Z ij defined earlier. Then, for the two-detector system, we obtain The coupled Langevin equations for the system are where τ − x 0 is the retarded time between the two trajectories, and As before, we define γ 1,2 = e 2 1,2 4 , and introduce Fourier transforms to obtain the corresponding equations in frequency space. Then we obtaiñ where The functions χ (1), (2) ω are, of course, what the impedance of each detector would be in the absence of the other one. However, the effect of introducing a second detector is, as we shall see, to modify the "self -impedance" of each detector as well as introduce a "mutual impedance" which describes, for instance, the response of detector 1 to the forceη 2 . Indeed, plugging the equation forQ 1 in the equation forQ 2 , we havẽ where L 22 is the modified self -impedance of detector 2 due to the presence of detector 1, and L 21 is the mutual impedance: The impedances L 11 and L 12 and the corresponding equation forQ 1 are obtained by an interchange of indices 1 and 2 in the above equations. We note the symmetry L 21 = L 12 . (3.35) The correlator < {Q i (ω), Q j (ω ′ )} >, i, j = 1, 2 is therefore obtained from equation (3.32) and its counterpart, as The above equation is to be viewed as a generalization of (3.8) to the two-detector case. Suppose we now wish to solve for the correlator of Q 2 . Then, taking Fourier transforms as before and simplifying, The second term in the square brackets vanishes as a consequence of the identity χ which is a form of the fluctuation dissipation relation for detector 1. The remaining terms simplify to yield As before, the correlator of Q 1 is obtained by interchanging the indices 1 and 2 in the above equation. 
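Numerically, once the self- and mutual impedances are specified at a given frequency, the coupled response is a 2x2 linear solve in frequency space. In the Python sketch below, the bare impedances are taken to be simple damped-oscillator forms and the mutual coupling is a retarded placeholder proportional to e^{iωx₀}; these explicit expressions are illustrative assumptions, not the exact kernels of the model.

import numpy as np

def coupled_response(omega, eta, omega0=(1.0, 1.2), gamma=(0.05, 0.08), x0=2.0):
    """Solve the 2x2 frequency-space system M(omega) @ Q = eta for two coupled detectors.

    Assumed (illustrative) forms:
      bare inverse impedance  1/chi_i = omega0_i**2 - omega**2 - 2j*gamma_i*omega
      mutual coupling         c_12    = -2j*sqrt(gamma_1*gamma_2)*omega*exp(1j*omega*x0)
    The coupling matrix is symmetric, mirroring the symmetry L_21 = L_12 noted in the text.
    """
    chi_inv = [omega0[i]**2 - omega**2 - 2j * gamma[i] * omega for i in range(2)]
    coupling = -2j * np.sqrt(gamma[0] * gamma[1]) * omega * np.exp(1j * omega * x0)
    M = np.array([[chi_inv[0], coupling],
                  [coupling, chi_inv[1]]], dtype=complex)
    return np.linalg.solve(M, np.asarray(eta, dtype=complex))

Q = coupled_response(omega=0.9, eta=[1.0, 0.0])
print(Q)  # response of both detectors to a force acting on detector 1 only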
Because of constraint c), the second detector cannot causally influence the first one, and thus it functions as a probe in the field modified by the first detector. Later in the analysis, we shall specify the trajectory of the detector with horizons as being a uniformly accelerated one. We shall continue to assume that the probe cannot causally influence the uniformly accelerated detector by means of the switching condition. If it were allowed to do so, this would lead to a deviation of the noise experienced by the uniformly accelerated detector from the precise thermal form. We will label the detector with horizons as detector 1 and the probe as detector 2. The switching condition s 2 (τ 2 ) = θ(u 2 (τ 2 )) for the probe leads to a closed Langevin equation for detector 1: This is just a consequence of the fact that the trajectory of detector 1 lies outside the causal future of the probe. The arguments which lead to the above local form of dissipation or radiation reaction for a general timelike trajectory are outlined in the next section. Introducing Fourier transforms and the impedance functions χ (1),(2) ω as defined earlier, we have,Q Consider now the Langevin equation for detector 2: We find where γ 1,2 are defined as in the two inertial detector case. The second term in (4.5) vanishes identically because u 2 (τ 2 ) > 0 and u 1 (τ 1 ) < 0 (u = 0 is a future horizon for detector 1). Since v = 0 is a past horizon for detector 1, we have v 1 (τ ′ 1 ) > 0 and the first term simplifies to yield where we have defined the retarded time ). This is well-defined since it occurs only in expressions in which v 2 (τ 2 ) > 0. Thus we obtain the dynamical equation for the probe, which depends, as expected, on Q 1 : Consider the quantity which is a source term in the equation of motion for Q 2 . The first part of F is the usual stochastic force arising out of the fluctuations of the field in the vicinity of detector 2, while the second part is the retarded force due to detector 1. RSG correctly point out that these two forces are correlated. In the context of our formalism, these correlations are embodied in the correlation kernelsν 21 andν 12 . Using the relation (4.2), we obtain Consider the correlator of F with the correlator of η 2 subtracted out. We have: The kernelsν 21 andν 12 separate into advanced and retarded parts. For the advanced parts, The advanced parts of the correlations can therefore be constructed from the advanced part of the noise along the trajectory of detector 1. With this simplification, we obtain where (r.p.) denotes the retarded part: (r.p.) = (4.14) At this point, we specialize to the case when detector 1 is uniformly accelerated. Then we have v 1 (τ 1 ) = a −1 e aτ 1 ; u 1 (τ 1 ) = −a −1 e −aτ 1 . (4.15) As shown in subsection 3.2, the noiseν 11 is thermal and isotropic. The retarded time τ R = a −1 ln(av 2 (τ 2 )). We may substitute forν 11 in equation (4.13) and carry out the integrations over s and s ′ to obtain The first term in the above expression vanishes as a consequence of the identity χ The only contribution to the excitation of the probe is therefore from the retarded parts of the correlationsν 12 andν 21 . This asymmetry between retarded and advanced parts is really a consequence of the choice of retarded boundary conditions in the formulation of the problem (the states of detector and field are assumed to be uncorrelated at past infinity) and the switching process at u 2 = 0. 
The vanishing of the first term in the above expression is a generalization of the cancellation obtained by RSG for a probe moving along an inertial trajectory. In order to study the retarded contribution in greater detail, it is desirable to simplify the correlationsν r 12 andν r 21 . The functions Z r 12 and Z r 21 take the form a (e −aτ ′ 1 +au 2 (τ 2 )) . Differentiating the above expressions with respect to τ ′ 2 and τ 2 , and substituting in the expression for (r.p.), one obtains, after carrying out the integration over s, The coincidence limit of the above expression yields the fluctuations of the random force acting on the probe: defining δF (τ ) = F (τ ) − d dτ η 2 (τ ), we obtain (4.24) The fluctuations are thus suppressed in the limit of large u 2 v 2 = t 2 2 − x 2 2 . For a probe trajectory without horizons, this is the limit in which the probe trajectory approaches future timelike infinity, which verifies that the effect of the accelerated oscillator on the field is ascribed to polarization rather than radiation (see also [13]). A radiation field is expected to persist at future infinity. Let us now turn to the question of the response of the probe. To obtain this, we will need to specify a particular form of trajectory for the probe as well. We will consider the simple inertial trajectory x 2 (τ 2 ) = 0, t 2 (τ 2 ) = τ 2 , switched on at τ 2 = 0. Then equation (4.23) gives Owing to the switching process at τ 2 = 0, the relation betweenQ 2 andF (the Fourier transforms of Q 2 and F ) is a non-local one in frequency space, because of transient effects. However, if we restrict our attention to the late time behavior of detector 2, we obtain from equation (4.7) a local relation of the form In the above expression, the lower limit of the τ integration is zero, corresponding to the step function θ(τ ) multiplying F which enforces the switching condition. The correlator ofQ 2 is therefore given by We have already obtained the difference of the correlator of F from its value in the absence of the accelerating detector, 1. Thus we have where the superscript (0) on Q 2 refers to its value in the absence of the accelerating detector. Performing the integrations over τ and τ ′ , we obtain The step functions which distinguish positive and negative frequencies in the above expression are an artefact of the switching process. Fluctuation-dissipation and correlation-propagation relations In this section we construct the fluctuation-dissipation relations for the detector system and extend this construction to obtain a new set of relations, which we call the correlationpropagation relations for trajectories without event horizons. These relations are a simple consequence of the analytic properties of the massless free field two-point function. We also discuss these relations in the context of the model of a uniformly accelerated detector and probe. Consider first the fluctuation dissipation relation for a quantum Brownian particle in a heat bath [22]. This can be expressed as a linear, non-local relation between the noiseν(s) and dissipationμ(s) kernels. Definingγ bỹ the finite temperature fluctuation dissipation relation is is a universal kernel, independent of the spectral density of the bath. In particular, the kernel K is independent of the coupling constant e. Such a fluctuation-dissipation relation holds for the uniformly accelerated detector (with temperature given by the Unruh temperature) and the inertial detector (with zero temperature). 
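The explicit form of the finite-temperature fluctuation-dissipation relation and of the universal kernel K is not reproduced above. In the quantum Brownian motion literature it is usually written as (we give this as the assumed standard form, not as a quotation from the original)

\tilde{\nu}(s) = \int_{-\infty}^{\infty} ds'\, K(s-s')\, \tilde{\gamma}(s') , \qquad K(s) = \int_{0}^{\infty} \frac{d\omega}{\pi}\, \omega\, \coth\!\Big(\frac{\beta\hbar\omega}{2}\Big)\cos(\omega s) ,

which indeed involves neither the spectral density of the bath nor the coupling constant; at zero temperature \coth \to 1 and K reduces to a purely kinematic kernel.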
It was derived in [22] in the context of a quantum Brownian model with bilinear coupling between bath (field) and particle (detector). In such a modelγ is indeed the quantity which characterizes dissipation in the effective Langevin equation for the particle. In the context of the minimally coupled model, however, we find it suitable to defineγ asγ (s) = − d dsμ (s) (5.4) as this is the quantity which directly appears in the dissipative term of the Langevin equation An important aspect of either form of the fluctuation-dissipation relation is that the noise and dissipation kernels, and consequently K, are stationary, i.e. they are functions of s − s ′ alone. We wish to investigate whether a suitable generalization of the above relation holds for the full N-detector system. To this end, we assume that the detector trajectories are everywhere timelike and consider first only the kernels Z ii as they characterize noise and dissipation in the dynamics of the detectors i. We also assume that the detectors are switched on forever, thus excluding transient effects due to the switching process. Using advanced and retarded null coordinates introduced earlier, we definẽ denoting the advanced and retarded parts of the kernelγ. The timelike property of the trajectories implies that | dx i dt i |< 1. Together with the fact that t i (τ i ) are increasing functions of τ i , this implies that du i dτ i and dv i dτ i are necessarily positive. It also implies that the functions u i (τ i ) and v i (τ i ) have unique inverses, if they exist. This can be proved by way of contradiction: assume that , which means that the points τ i and τ ′ i have lightlike separation. This contradicts the fact that the trajectory is everywhere timelike. The uniqueness of v −1 i is shown in the same way. These two properties lead to the following simplification in the expression forγ ii : Thus we see that, for an arbitrary trajectory, the dissipation or radiation reaction kernel has the same form and is always local. This fact has been used in obtaining the dissipative term in the equations of motion for the accelerated detector and probe (4.1 and 4.7). The fluctuation-dissipation relation now follows in a straightforward manner: We now ask whether a similar relation holds between the real and imaginary parts of Z ij , i = j. This would not be a fluctuation dissipation relation in the usual sense, as the real part of Z ij describes correlations of the field between points on different trajectories rather than fluctuations, and its imaginary part describes the propagation of radiation between one detector and the other, rather than dissipation. We will call such relations "correlationpropagation" relations. If points on different trajectories have space like separations, the relevantγ ij (defined as − dμ ij dτ i ) will vanish as a consequence of the vanishing of the commutator of a free field for points at spacelike separations. This is simply an expression of causality in the detector dynamics. However, the corresponding correlationν ij need not vanish, and hence there cannot be a general relation between these two kernels. Such a situation is realized most clearly, for example, in the case of two uniformly accelerating detectors, one in the right and the other in the left Rindler wedge. The trajectories, although individually timelike, are spacelike separated everywhere. The correspondingγ 12 andγ 21 will therefore vanish identically. 
However,ν 12 andν 21 will remain non-zero, reflecting the highly correlated nature of the Minkowski vacuum state. If, however, none of the detector trajectories possess past or future horizons (in Minkowski space this is true in particular for geodesic trajectories, but not only for geodesic trajectories), then each of them will lie completely within the causal future of the others. In that case, we can obtain correlation-propagation relations relating separately the advanced and retarded correlations to their "propagating" counterparts. These relations follow from the fluctuation dissipation relations along single trajectories derived above, essentially by a method of geometric construction : definingγ a ij = − dμ a ij dτ i and similarlyγ r ij , we havẽ The correlationsν ij may be constructed from the noisesν ii in an identical manner: where we have inserted the identity function v i v −1 i in the first step. Also, ). (5.14) These two sets of constructions for the propagation and correlation kernels in terms of the dissipation and noise kernels enables us to write down the correlation-propagation relations simply by invoking the fluctuation-dissipation relations (5.8) as they separately apply to the advanced and retarded parts of the noise and dissipation along single trajectories: K a i and K r i being defined earlier (5.9). Since the quantitiesγ ij are really just δ-functions and the quantities K a,r i are proportional toν a,r ii , these relations can be equivalently viewed as constructions of the correlationsν ij from the noisesν ii . The above relations hold for trajectories without event horizons. In the example of the uniformly accelerated detector and probe, the uniformly accelerated detector trajectory does possess event horizons. This manifests in the property that the range of u 1 is restricted to (−∞, 0) and the range of v 1 to (0, ∞). The probe trajectory, on the other hand, will be chosen to be free of horizons. We will also now assume that the probe is switched on forever. Then we can construct the correlationsν 21 and the quantitiesγ 21 fromν 22 andγ 22 exactly as described above, and obtain the corresponding correlation-propagation relations: This simply follows by invoking the fluctuation-dissipation relation along the probe trajectory, as described above. However, it is of greater interest to know whether such relations would follow from the fluctuation-dissipation relation along the uniformly accelerated trajectory. As explained, this will not be completely possible because the accelerated trajectory possesses horizons. This difficulty shows up when one tries to write down a relation of the form (5.16) for the quantitiesν 12 andγ 12 . To do this, we first express the functions Z ij in a different form. This was done in Section 4 (see the steps leading from 3.54 to 3.58) for the restricted case u 2 (τ 2 ) > 0, v 2 (τ 2 ) > 0. If we remove this restriction, we find Z 21 can be expressed in a similar way. From the above, we see that the advanced (retarded) correlation for v 2 > 0 (u 2 < 0) has a thermal form, because these correlations can be constructed simply from the noise along the accelerated trajectory. We are therefore able to write down a correlation-propagation relation for this part of the correlations alone. 
This takes the form and The single relation (5.19), as opposed to separate relations between the advanced and retarded parts, is a consequence of the fact that the thermal noise is isotropic and therefore contains equal contributions from advanced and retarded parts. In the context of the analysis of section 4, where the probe is switched on at u 2 = 0, Viewed as a construction ofν a 12 fromν a 11 , this relation lies at the heart of the RSG cancellation in (4.16). Viewed alternatively as an extension of the thermal fluctuation-dissipation relation on the uniformly accelerated trajectory, it thus places the role of thermal equilibrium in the RSG cancellation on firmer ground. We now turn to the part of the correlations which do not partake in the correlationpropagation relation above. These are the advanced (retarded) correlations for v 2 < 0 (u 2 > 0), containing the sinh −1 factors, and are not expressible in terms of the noise along the accelerated trajectory. Rather, they represent true correlations across the future (past) horizon. If we specialize to the case u 2 > 0 as in Section 4, then these are exactly the correlations which contribute to the excitation of the probe in the guise of (r.p.), equation (4.23). The probe may therefore be said to be excited by free field correlations across the future horizon. If we specialize to the simple probe trajectory x 2 (τ 2 ) = 0, t 2 (τ 2 ) = τ 2 , then we have u 2 (τ 2 ) = v 2 (τ 2 ) = τ 2 and the expressions (5.18) for Z 12 acquire a symmetric form. In this special case, we can write down a correlation-propagation relation for the entire kernelν 12 , by relating the advanced part of the correlations across the horizon to the retarded part of the propagation kernel, and vice-versa: we then havẽ (5.24) we obtainν as a correlation-propagation relation in this special case. The above relation cannot be geometrically constructed from the fluctuation-dissipation relation along the single accelerated trajectory. So far, we have not been able to show the existence of such relations in more general cases. The extra piece in the interpolating kernel K ′ 1 comes from correlations across the horizon, as explained earlier. Summary and discussion To summarize our work and findings, we have presented a general formalism to treat an arbitrary number of detectors modelled as oscillators in arbitrary kinematic states, and minimally coupled to a massless scalar field in 1 + 1 dimensions. In this approach, the scalar field has been integrated out and the detector dynamics is described by a reduced set of effective semiclassical stochastic equations. These equations nonetheless contain the full quantum dynamics of the field. Our treatment can be extended to massive fields and higher dimensions by making appropriate changes in the two-point functions Z ij . We studied four examples, starting with a single inertial and uniformly accelerated detector, mainly to illustrate the new description, and culminating in the treatment of a uniformly accelerated oscillator and a second oscillator which functions as a probe. We show that there exist fluctuation-dissipation relations relating the fluctuations of the stochastic forces on the detectors to the dissipative forces. We discover a related set of correlation-propagation relations between the correlations of stochastic forces on different detectors and the retarded and advanced parts of the radiation mediated by them. 
In the analysis of two inertial detectors, we find that the change in the state of the field due to the coupling with either detector modifies the impedance functions of both detectors, and hence their dissipative properties. Also, this coupling introduces a mutual impedance which describes the change in the response of one detector due to the fluctuations of the field in the vicinity of the other one. The field fluctuations (noise) in this case are relatively trivial, and non-trivial effects can be ascribed mainly to the impedance functions. In the case of the accelerated detector and the probe, on the other hand, the noise due to field fluctuations and the field correlations between the two trajectories play a dominant role. Since the probe cannot causally influence the accelerated detector, the dissipative features of this problem are relatively trivial. Here, we find that most of the terms contributing to the response of the probe cancel out, leaving behind a contribution that arises purely from field correlations across the horizon. This cancellation was earlier pointed out [11,13] to be a consequence of the identity χ k + χ * k = 4γ | χ k | 2 or variations thereof, which is a form of fluctuation-dissipation relation. Although we utilize this identity in our calculation, we observe, however, that this really follows from the dissipative properties of the accelerated detector and its free uncoupled dynamics. It therefore does not explicitly involve the fluctuations of the field. We point out that this cancellation can instead be understood to follow because the correlations between the accelerated detector and probe trajectories can be expressed partly in terms of the noise or field fluctuations along the accelerated trajectory alone, and also because of the isotropy of this noise. The expression of correlation in terms of noise can be equivalently viewed as a consequence of the correlation-propagation relations we obtain in Section 4, which are appropriate extensions of a generalized fluctuation-dissipation relation directly relating field fluctuations to dissipative properties. A distinct feature of the influence functional formalism as used in this paper is the assumption of an uncorrelated field-oscillator initial state. As argued in the appendix, an uncorrelated initial state is more readily realizable in the derivative coupling model. However, since the minimal and derivative coupling models are dynamically equivalent, we expect our final results to be essentially unchanged, in particular the results of detector response in the various cases studied. The discussion of fluctuation-dissipation relations can be reformulated as well in a way suitable to the derivative coupling model. We would like to mention possible extensions of this work to other problems. In discussions of the quantum equivalence principle [29,12], one compares the response of a detector moving on a geodesic trajectory in Minkowski space, and coupled to a quantum field, to its response along a geodesic of a spacetime with a homogenous gravitational field. The idea is to derive a suitable transformation on the state of the quantum field which yields the same detector response in both cases. If one can find such a transformation, the equality of the detector response in both cases constitutes a test of the validity of the quantum equivalence principle for local physical processes. 
However, a homogenous gravitational field defines a global inertial frame, and so one is inclined to believe that the equivalence principle would hold for non-local processes as well, such as the effective dynamics of two spatially separated detectors coupled to the same quantum field. We plan to investigate this and related issues, especially the implications of our findings on black hole backreaction and information problems in later works. A.1 Infrared Problems with the Minimal Coupling Model The infra-red effects of the minimal coupling model are not trivial. In fact, as we will show, the proper treatment of this simple model, including an ultra-violet but not an infra-red cut-off, leads us to identify a super-selection rule which prefers a particular class of bases in the model's Hilbert space. Using the minimal coupling model and this preferred class of bases is equivalent to a derivative coupling model used by Unruh and Zurek [UZ] and a basis of direct products of unperturbed field and oscillator states. The MC Hamiltonian is It is straightforward to show that the expectation value of H M C for soft photon states (i.e., low energy eigenstates of the free field Hamiltonian) has a contribution proportional to the inverse of their unperturbed energy. This suggests that the true low energy states of this model must have strong correlations between the field and the oscillator. If we reject an infra-red cut-off as unphysical, then we must conclude that states that are direct products of field and oscillator states will actually have energies very much higher than that of the true ground state of the model. The poor behaviour of the basis of unperturbed (i.e. ǫ → 0) energy eigenstates reflects the fact that, since there are field modes at all frequencies, expanding around ǫ = 0 correctly requires degenerate perturbation theory. Since we are trying to set up an open quantum system, however, our choice of basis is not merely a matter of convention. Different bases can imply different partitions of the complete Hilbert space into 'system' and 'environment' subspaces. In the influence functional formalism, one traces over the final states of the environment, and assumes an initial state which is often a direct product of system and environment. If we change what we mean by 'system' and 'environment', the final trace becomes a different operation, and the initial state becomes a different state. It is not the model itself, of course, but only the naive basis that is badly behaved: the full Hamiltonian is quadratic, and hence equivalent to a set of decoupled harmonic oscillators. To understand the problems with the basis of unperturbed energy eigenstates, and to identify a better basis, we should diagonalize the full Hamiltonian. The MC Hamiltonian may be diagonalized by defining new creation and annihilation operators. If the original field and conjugate momentum operators are φ(x) and π(x), the diagonalizing annihilation operators are , for a massless field. Note that the set of field-like operators {a k , a † k } diagonalizes the entire Hamiltonian. There is no normal mode of the coupled system which corresponds even weakly to the unperturbed oscillator, and all of the normal modes contain very non-local excitations of the field. If we wish to consider the oscillator as an open system coupled to the field as an unobserved environment, then, the basis of exact energy eigenstates of the combined system will not be particularly convenient. 
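The explicit MC Hamiltonian referred to at the beginning of this appendix is not reproduced in this extraction. For a single oscillator of frequency \Omega and unit mass coupled at x = 0, the minimally coupled Hamiltonian of this type is usually taken to be (we give this only as the assumed intended form)

H_{MC} = \frac{1}{2}\big(P - \epsilon\,\phi(0)\big)^{2} + \frac{1}{2}\Omega^{2}Q^{2} + \frac{1}{2}\int dx\,\big[\pi^{2} + (\partial_x\phi)^{2}\big] ,

which is manifestly positive definite; the \frac{1}{2}\epsilon^{2}\phi(0)^{2} piece is what gives soft field modes an energy contribution proportional to the inverse of their frequency, in line with the infrared behaviour described above.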
The fact that this model is easily solved exactly does not make the system-environment problem trivial. We can now, however, determine the effect on the true ground state of the unperturbed oscillator raising and lowering operators. We find that either of these operators maps the ground state onto a highly excited state, whose expected energy is infra-red divergent. Consequently, observations that are restricted to the oscillator sector alone (as it is defined from Equation (A.1) may be said to require infinite amounts of energy. Considering the field that appears in Equation (A.1) to be an unoberved environment is therefore unphysical. We can, however, change our basis so that the Hamiltonian appears more benign in terms of the transformed operators. We may effect this transformation using the unitary operator U = exp − ī h ǫQφ(0). This transformation mixes the field and oscillator sectors, and so changes what is meant by an observation of the oscillator alone. We can check that the new oscillator raising and lowering operators, acting on the ground state, now produce states whose energy is ultraviolet divergent, instead of infra-red. A physically plausible UV cut-off then renders this energy finite, and it becomes reasonable to consider the new field sector as unobserved. Furthermore, a direct product of the unperturbed ground state of the transformed field and any finite-energy oscillator state now has finite energy, and so is not unreasonable as an initial state for the coupled system. An alternative way of expressing the advantage of the transformed variables is to say that in order for a degree of freedom in the theory to be observable in isolation, it must require finite energy to excite or de-excite it without affecting other degrees of freedom, and it must also be spatially local (except possibly at small scales). Such an observed degree of freedom will be some linear combination of the exact normal modes. If we require the co-efficients in this linear combination to vanish at high energies and remain finite everwhere else, we can ensure finite energy observability. If we require that the co-efficients are constant at low energies, we also ensure local observability. We would then like to have a basis in which this observed linear combination "looks like" a harmonic oscillator coupled to a scalar field. Given our conditions of local and finite energy observability, our original basis does not provide this feature, but our second basis does. The transformed Hamiltonian H M C becomes, in the new operators, precisely the Hamiltonian of the Unruh-Zurek model. We have therefore found that, even if we begin by analysing the minimally-coupled model, we may be compelled in the end to study the Unruh-Zurek model (with a UV cut-off) instead. More convenient for our subsequent calculations than the Hamiltonian for this model is its Lagrangian: Here Φ k is the time-dependent, spatial Fourier transform of the field φ. Note that, even in the more benign Unruh-Zurek basis, distinguishing the oscillator as a system observable independently of the field directly implies that there must be a UV cut-off. One often argues that a cut-off is appropriate because one is not interested in accurately describing physics at inaccessible energy scales; but in the case of the oscillator coupled to a field, there must really be a cut-off in the coupling in order for there to be any accessible energy scales! 
A.2 Correspondence between MC and derivative coupling models

In the minimal coupling model, the derivative of the oscillator coordinate couples to the field, whereas in the derivative coupling model the oscillator coordinate couples to the derivative of the field. These two models thus differ by a total derivative term in the Lagrangian. In particular, they have the same Heisenberg operator dynamics. The above subsection describes the issue of the initial state, and argues that an uncorrelated initial state is physically more realistic in the derivative coupling model. However, since the two models have the same dynamics, this should translate into a simple prescription for switching from one model to the other in the context of the influence functional treatment. In the previous sections, we have derived all results from the minimal coupling model. One can obtain the corresponding quantities in the influence functional of the derivative coupling model via the prescription below: The stochastic effective action in the derivative coupling model is then given by Note that the quantities μ̃_ij in the above equation refer to the newly defined quantities in the derivative coupling model. They are obtained by differentiating the corresponding quantities in the MC model twice. The Langevin equations are: The noise kernel, as the correlator of η_i and η_j, is likewise obtained from the corresponding noise kernel in the MC model by differentiating twice, according to the correspondence established above. The infrared divergent energy of the initially uncorrelated state does have an effect: the propagation kernel in the MC model contains an initial "shock wave" term, as well as the expected dissipation and propagation terms, and this term is not present in the DC model. Since this shock wave is a transient, it has no significance in our late-time analysis.

Institute for Mathematical Sciences at the University of Cambridge during the Geometry and Gravity program in Spring 1994. JA thanks Salman Habib for discussions, and the Canadian Natural Sciences and Engineering Research Council for support.
2014-10-01T00:00:00.000Z
1995-10-03T00:00:00.000
{ "year": 1995, "sha1": "09fc259606f78b6b075f6d3ebf5fe2661d7a3b08", "oa_license": null, "oa_url": "http://repository.ust.hk/ir/bitstream/1783.1-48181/1/PhysRevD.53.7003.pdf", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "698d88cab2eefe41f09b5f0935ce1be465d46e88", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Medicine" ] }
119450180
pes2o/s2orc
v3-fos-license
epsilon'/eps in the Standard Model

We overview the detailed analysis of epsilon'/epsilon within the Standard Model, presented in ref. 1. When all sources of large logarithms are considered, both at short and long distances, it is possible to perform a reliable Standard Model estimate of epsilon'/epsilon. The strong S-wave rescattering of the final pions has an important impact on this observable. The Standard Model prediction is found to be Re(epsilon'/epsilon) = (1.7 +- 0.9)*10^{-3}, in good agreement with the most recent experimental measurements. A better estimate of the strange quark mass would reduce the uncertainty to about 30%.

Introduction

In recent times the determination of Re(ε′/ε) has stimulated a lot of work on both the theoretical and experimental sides. The latter has recently been clarified by the new NA48 [3], Re(ε′/ε) = (15.3 ± 2.6) · 10^{-4}, and KTeV [4], Re(ε′/ε) = (20.7 ± 2.8) · 10^{-4}, results. The present experimental world average is [3]-[6] Re(ε′/ε) = (17.2 ± 1.8) · 10^{-4}. (1.1) The theoretical prediction has been the subject of many debates, since different groups, using different methods or approximations, obtained different results [7]-[12]. Recently, however, it has been observed [1] that once all essential ingredients are taken into account, including final state interactions (FSI) [2], one can give a reliable estimate of Re(ε′/ε): Re(ε′/ε) = (17 ± 9) · 10^{-4}. (1.2) The subject of this talk is a review of the main ingredients in the calculation of ε′/ε. The physical origin of ε′/ε is at the electroweak scale, where the flavor-changing processes can be described in terms of quarks, leptons and gauge bosons with the usual gauge-coupling perturbative expansion. At the scale M_Z the heavy gauge bosons W± and Z and the top quark are integrated out of the theory. The dynamics is then described in terms of Wilson coefficients C_i(µ) and operators Q_i(µ), via a Lagrangian of the form (1.3). The values of the coefficients C_i are matched with the underlying theory at the electroweak scale ∼ M_Z. Then, using the Operator Product Expansion (OPE) [13] and renormalization group equations [14], one can evaluate the Wilson coefficients at any scale µ, summing up the short-distance logarithms. The overall renormalization scale µ separates the short- (M > µ) and long- (m < µ) distance contributions, which are contained in C_i(µ) and Q_i, respectively. The physical amplitudes are independent of µ; thus, the explicit scale (and scheme) dependence of the Wilson coefficients should cancel exactly with the corresponding dependence of the Q_i matrix elements between on-shell states. Our knowledge of ∆S = 1 transitions has improved qualitatively in recent years, thanks to the completion of the next-to-leading logarithmic order calculation of the Wilson coefficients [15,16]. All gluonic corrections of O(α_s^n t^n) and O(α_s^{n+1} t^n), where t ≡ log M/m and M and m are any scales appearing in the evolution, are already known. Moreover, the full m_t/M_W dependence (to first order in α_s and α) has been taken into account at the electroweak scale. We will fully use this information up to scales µ ∼ O(1 GeV), without making any unnecessary expansion. At a scale µ < m_c one has a three-flavor theory described by a Lagrangian of the same general form as in eq. (1.3). The difficult and still unsolved problem resides in the calculation of the hadronic matrix elements.
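As a quick numerical aside on the experimental numbers quoted above, the Python sketch below combines the NA48 and KTeV measurements with an inverse-variance weighted average; note that the quoted world average of (17.2 ± 1.8) · 10^{-4} also folds in the older measurements cited as refs. [5] and [6], so this two-measurement combination only approximates it.

import math

def weighted_average(measurements):
    """Inverse-variance weighted average of (value, uncertainty) pairs."""
    weights = [1.0 / sigma**2 for _, sigma in measurements]
    mean = sum(w * x for w, (x, _) in zip(weights, measurements)) / sum(weights)
    sigma = 1.0 / math.sqrt(sum(weights))
    return mean, sigma

# Re(eps'/eps) in units of 1e-4, as quoted in the text.
na48 = (15.3, 2.6)
ktev = (20.7, 2.8)
mean, sigma = weighted_average([na48, ktev])
print(f"({mean:.1f} +- {sigma:.1f}) x 10^-4")  # ~ (17.8 +- 1.9) x 10^-4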
As we will see in the following the large-N c expansion and Chiral Perturbation Theory (χPT) allow to estimate those matrix elements with sufficient accuracy for the determination of ε ′ /ε. In the following we adopt the usual isospin decomposition: The complete amplitudes A I ≡ A I exp iδ I 0 include the strong phase shifts δ I 0 . The Swave π-π scattering generates a large phase-shift difference between the I = 0 and I = 2 partial waves [17]: There is a corresponding dispersive FSI effect in the moduli of the isospin amplitudes, because the real and imaginary parts are related by analyticity and unitarity. The presence of such a large phase-shift difference PrHEP hep2001 International Europhysics Conference on HEP Ignazio Scimemi clearly signals an important FSI contribution to A I . In terms of the K → ππ isospin amplitudes, . (1.5) Due to the famous "∆I = 1/2 rule", ε ′ /ε is suppressed by the ratio ω = Re(A 2 )/Re(A 0 ) ≈ 1/22 . The phases of ε ′ and ε turn out to be nearly equal: Φ ≈ δ 2 0 − δ 0 0 + π 4 ≈ 0 . The CPconserving amplitudes Re(A I ), their ratio ω and |ε| are usually set to their experimentally determined values. A theoretical calculation is then only needed for Im(A I ). Using the short-distance Lagrangian (1.3), the CP-violating ratio ε ′ /ε can be written as [7] where the quantities P (I) = i y i (µ) (ππ) I |Q i |K contain the contributions from hadronic matrix elements with isospin I, Ω IB = (1/ω) Im(A 2 ) IB /Im(A 0 ) parameterizes isospin breaking corrections and y i (µ) are the CP-violating parts of the Wilson coefficients: The factor 1/ω enhances the relative weight of the I = 2 contributions. In the Standard Model, P (0) and P (2) turn out to be dominated respectively by the contributions from the QCD penguin operator Q 6 and the electroweak penguin operator Q 8 [9], A recent improved calculation of Ω π 0 η IB at O(p 4 ) in χPT has found the result [18] Ω π 0 η IB = 0.16 ± 0.03 . (1.8) Chiral Perturbation Theory Below the resonance region and using global symmetry considerations one can define an effective field theory in terms of the QCD Goldstone bosons (π, K, η). The χPT formulation of the SM [19,20,21] describes the meson-octet dynamics through a perturbative expansion in powers of the ratio of momenta and quark masses over the chiral symmetry breaking scale (Λ χ ∼ 1GeV). The operator content of the theory is fixed by chiral symmetry. At lowest order, the most general effective bosonic weak Lagrangian, with the same SU (3) L ⊗SU (3) R transformation properties and quantum numbers as the short-distance Lagrangian (1.3), contains three terms transforming as (8 L , 1 R ), (27 L , 1 R ) and (8 L , 8 R ) whose corresponding couplings are denoted by g 8 , g 27 and g ew . The isospin amplitudes A I have been computed up to next-to-leading order in the chiral expansion [22]- [27]. Decomposing the isospin amplitudes according to their representation components A I = Σ R A (R) I , the results of those calculations can be written in the form (the expressions for A can be found in ref. [1]): (2.1) These formulae contain the chiral one-loop corrections ∆ L A (R) It is convenient to rewrite these amplitudes in the form A is the contribution at leading order in the large-N c expansion while the factors C (R) I represent the next-to-leading order (NLO) correction in the same expansion. The chiral loop contributions are NLO corrections in 1/N c . In order to determine A (R)∞ I one needs only to match properly χP T with the effective short distance Lagrangian in eq. 
(1.3) and so determine the χP T couplings. As an example we have (a more complete list can be found in ref. [1]): 02. These results are equivalent to the standard large-N C evaluation of the usual bag parameters B i . In particular, for ε ′ /ε, where only the imaginary part of the g i couplings matter [i.e. Im(C i )], the leading order large-N c estimate amounts to B (3/2) 8 ≈ B (1/2) 6 = 1. Therefore, up to minor variations of some input parameters, the corresponding ε ′ /ε prediction, obtained at lowest order in both the 1/N C and χPT expansions, reproduces the published results of the Munich [7] and Rome [8] groups. Thus at this order there is a large numerical cancellation between the I = 0 and I = 2 contributions, leading to an accidentally small value of ε ′ /ε. Notice that the strong phase shifts are induced by chiral loops and, thus, they are exactly zero at this leading order approximation. The large-N C limit has been only applied to the matching between the 3-flavor quark theory and χPT. The evolution from the electroweak scale down to µ < m c has to be done without any unnecessary expansion in powers of 1/N C ; otherwise, one would miss large corrections of the form 1 N C ln (M/m), with M ≫ m two widely separated scales [28]. Thus, the Wilson coefficients contain the full µ dependence. At large-N c the operators Q i (i = 6, 8) factorize into products of left-and right-handed vector currents, which are renormalization-invariant quantities. The matrix element of each single current represents a physical observable which can be directly measured; its χPT realization just provides a low-energy expansion in powers of masses and momenta. Thus, the large-N C factorization of these operators does not generate any scale dependence. Since the anomalous dimensions of Q i (i = 6,8) vanish when N C → ∞ [28], a very important PrHEP hep2001 International Europhysics Conference on HEP Ignazio Scimemi ingredient is lost in this limit [29]. To achieve a reliable expansion in powers of 1/N C , one needs to go to the next order where this physics is captured [29,30]. This is the reason why the study of the ∆I = 1/2 rule has proved to be so difficult. Fortunately, these operators are numerically suppressed in the ε ′ /ε prediction. The only anomalous dimension components which survive when N C → ∞ are the ones corresponding to Q 6 and Q 8 [28,31]. One can then expect that the matrix elements of these two operators are well approximated by this limit [29,30,32]. These operators factorize into color-singlet scalar and pseudoscalar currents, which are µ dependent. This generates the factors qq (2) which exactly cancel the µ dependence of C 6,8 (µ) at large-N C [28,29,30,31,32,33]. It remains a dependence at next-to-leading order. While the real part of g 8 gets its main contribution from C 2 , Im(g 8 ) and Im(g 8 g ew ) are governed by C 6 and C 8 , respectively. Thus, the analyses of the CP-conserving and CP-violating amplitudes are very different. There are large 1/N C corrections to Re(g i ) [29,30,32], which are needed to understand the observed enhancement of the (8 L , 1 R ) coupling. On the contrary, the large-N C limit can be expected to give a good estimate of Im(g i ). Chiral loop corrections The large-N c amplitudes in eq. (2.3) do not contain any strong phases δ I 0 . Those phases originate in the final rescattering of the two pions and, therefore, are generated by chiral loops which are of higher order in the 1/N C expansion. 
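The subsection that follows resums these rescattering corrections with an Omnès exponential. For reference, the standard once-subtracted Omnès factor built from the S-wave ππ phase shift \delta_0^{I} (given here as the assumed underlying form, not quoted from the original) is

\Omega_I(s, s_0) = \exp\left\{ \frac{s-s_0}{\pi} \int_{4M_\pi^{2}}^{\infty} \frac{dz}{z-s_0}\, \frac{\delta_0^{I}(z)}{z-s-i\epsilon} \right\} ,

so that, schematically and up to the polynomial ambiguity discussed below, the correction factors evolve as C_I^{(R)}(M_K^{2}) = \Omega_I(M_K^{2}, s_0)\, C_I^{(R)}(s_0).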
Since the strong phases are quite large, specially in the isospin-zero case, one should expect large higher-order unitarity corrections. The multiplicatively correction factors C The numerical corrections to the 27-plet amplitudes do not have much phenomenological interest for CP-violating observables, because Im(g 27 ) = 0. Remember that the CPconserving amplitudes Re(A I ) are set to their experimentally determined values. What is relevant for the ε ′ /ε prediction is the 35% enhancement of the isoscalar octet amplitude Im[A (8) 0 ] and the 46% reduction of Im[A (ew) 2 ]. These destroy the accidental lowest-order cancellation between the I = 0 and I = 2 contributions, generating a sizeable enhancement of ε ′ /ε. FSI at higher orders Given the large size of the one-loop contributions, one should worry about higher-order chiral corrections. The large one-loop FSI correction to the isoscalar amplitudes is generated by large infrared chiral logarithms involving the light pion mass [2]. These logarithms are universal, i.e. their contribution depends exclusively on the quantum numbers of the two pions in the final state [2]. As a result, they give the same correction to all isoscalar amplitudes. Identical logarithmic contributions appear in the scalar pion form factor [20], where they completely dominate the O(p 4 ) χPT correction. Using analyticity and unitarity constraints [38], these logarithms can be exponentiated to all orders in the chiral expansion [2]. The result can be written as: C I (s 0 ) . The Omnès [38,39,40] exponential provides an evolution of C (R) I (s) from an arbitrary low-energy point s 0 to s ≡ (p π 1 + p π 2 ) 2 = M 2 K . The physical amplitudes are of course independent of the subtraction point s 0 . Intuitively, what the Omnès solution does is to correct a local weak K → ππ transition with an infinite chain of pion-loop bubbles, incorporating the strong ππ → ππ rescattering to all orders in χPT. The Omnès exponential only sums a particular type of higher-order Feynman diagrams, related to FSI. Nevertheless, it allows us to perform a reliable estimate of higher-order effects because it does sum the most important corrections. Moreover, the Omnès exponential enforces the decay amplitudes to have the right physical phases. The Omnès resummation of chiral logarithms is uniquely determined up to a polynomial (in s) ambiguity [2,38,41], which has been solved with the large-N C amplitude A (R)∞ I . The exponential only sums the elastic rescattering of the final two pions, which is responsible for the phase shift. Since the kaon mass is smaller than the inelastic threshold, the virtual loop corrections from other intermediate states (K → Kπ, Kη, ηη, KK → ππ) can be safely estimated at the one loop level; they are included in C PrHEP hep2001 International Europhysics Conference on HEP Ignazio Scimemi chiral expansion. It remains a local ambiguity at higher orders [2,38,41]. To estimate the remaining sensitivity to those higher order corrections, we have changed the subtraction point between s 0 = 0 and s 0 = 3M 2 π and have included the resulting fluctuations in the final uncertainties. At ν = M ρ , we get the following values for the resummed loop corrections These results agree within errors with the one-loop chiral calculation of the moduli of the isospin amplitudes, indicating a good convergence of the chiral expansion. Final results The infrared effect of chiral loops generates an important enhancement of the isoscalar K → ππ amplitude. 
This effect gets amplified in the prediction of ε ′ /ε, because at lowest order (in both 1/N C and the chiral expansion) there is an accidental numerical cancellation between the I = 0 and I = 2 contributions. Since the chiral loop corrections destroy this cancellation, the final result for ε ′ /ε is dominated by the isoscalar amplitude. Thus, the Standard Model prediction for ε ′ /ε is finally governed by the matrix element of the gluonic penguin operator Q 6 . A detailed numerical analysis has been provided in ref. [1]. The short-distance Wilson coefficients have been evaluated at the scale µ = 1 GeV. Their associated uncertainties have been estimated through the sensitivity to changes of µ in the range M ρ < µ < m c and to the choice of γ 5 scheme. Since the most important α s corrections appear at the low-energy scale µ, the strong coupling has been fixed at the τ mass, where it is known [42] with about a few percent level of accuracy: α s (m τ ) = 0.345±0.020. The values of α s at the other needed scales can be deduced through the standard renormalization group evolution. Taking the experimental value of ε, the CP-violating ratio ε ′ /ε is proportional to the CKM factor Im(V * ts V td ) = (1.2±0.2)·10 −4 [43]. This number is sensitive to the input values of several non-perturbative hadronic parameters adopted in the usual unitarity triangle analysis; thus, it is subject to large theoretical uncertainties which are difficult to quantify [44]. Using instead the theoretical prediction of ε, this CKM factor drops out from the ratio ε ′ /ε; the sensitivity to hadronic inputs is then reduced to the explicit remaining dependence on the ∆S = 2 scale-invariant bag parameterB K . In the large-N C limit,B K = 3/4. We have performed the two types of numerical analysis, obtaining consistent results. This allows us to better estimate the theoretical uncertainties, since the two analyses have different sensitivity to hadronic inputs. PrHEP hep2001 International Europhysics Conference on HEP Ignazio Scimemi by the second error. The most critical step is the matching between the short-and longdistance descriptions. We have performed this matching at leading order in the 1/N C expansion, where the result is known to O(p 4 ) and O(e 2 p 2 ) in χPT. This can be expected to provide a good approximation to the matrix elements of the leading Q 6 and Q 8 operators. Since all ultraviolet and infrared logarithms have been resummed, our educated guess for the theoretical uncertainty associated with 1/N C corrections is ∼ 30% (third error). A better determination of the strange quark mass would allow to reduce the uncertainty to the 30% level. In order to get a more accurate prediction, it would be necessary to have a good analysis of next-to-leading 1/N C corrections. This is a very difficult task, but progress in this direction can be expected in the next few years [9,11,30,46,47,48]. To summarize, using a well defined computational scheme, it has been possible to pin down the value of ε ′ /ε with an acceptable accuracy. Within the present uncertainties, the resulting Standard Model theoretical prediction (1.2) is in good agreement with the measured experimental value (1.1). I.S. wishes to thank the organizers of EPS2001 for the nice meeting. This work has been partially supported by the TMR Network "EURODAPHNE" (Contr.No. ERBFMX-CT98-0169) and by DGESIC, Spain (Grant No. PB97-1261).
2019-04-18T13:02:09.018Z
2001-11-21T00:00:00.000
{ "year": 2001, "sha1": "cf4b1d290ca884bd22da54f4aa06b95375cfdceb", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "6153544ac50cfd2809579eab934c504e88bfc928", "s2fieldsofstudy": [], "extfieldsofstudy": [ "Physics" ] }
244985723
pes2o/s2orc
v3-fos-license
H-Index Anxiety Among Health Researchers: A Commentary Appropriate methods are needed to evaluate the scientific excellence of individuals and research teams, especially in the field of health, to foster rich opportunities and distribute research grants with higher efficiency.[1] However, researchers and higher education authorities sometimes encounter problems due to the wrong choice of evaluation methods. For example, the pressure to publish articles in high-impact journals and international citation databases can affect citation counts, increasing article visibility and self-citation. All of these factors point to an inappropriate choice of researcher evaluation methods, the consequences of which become evident at the individual, organizational, and national levels over time. One of these inappropriate choices could be the use of the H-index in researcher evaluations. Efforts to increase the H-index of researchers and higher education institutions can have both positive and negative effects on the personal and professional life of researchers, including their mental health. In effect, comparing researchers' academic and professional performance according to this index provides the basis for increasing their anxiety level. Thus, today we face a new concept called H-index anxiety, just as researchers may experience library anxiety, information-seeking behavior anxiety, and research anxiety. Accordingly, researchers' feelings of fear and uncertainty about how to increase the H-index and achieve the necessary points are referred to as H-index anxiety. This anxiety can affect researchers' mental health; more specifically, it can disrupt the effective performance of faculty members and health researchers. To understand how H-index anxiety arises and increases among researchers, and to pave the way for appropriate strategies to reduce this type of anxiety, it is important to recognize the contributing factors. Introduction Appropriate methods are needed to evaluate the scientific excellence of individuals and research teams, especially in the field of health, to foster rich opportunities and distribute research grants with higher efficiency. [1] However, researchers and higher education authorities sometimes encounter problems due to the wrong choice of evaluation methods. For example, the pressure to publish articles in high-impact journals and international citation databases can affect citation counts, increasing article visibility and self-citation. All of these factors point to an inappropriate choice of researcher evaluation methods, the consequences of which become evident at the individual, organizational, and national levels over time. One of these inappropriate choices could be the use of the H-index in researcher evaluations. Efforts to increase the H-index of researchers and higher education institutions can have both positive and negative effects on the personal and professional life of researchers, including their mental health. In effect, comparing researchers' academic and professional performance according to this index provides the basis for increasing their anxiety level. Thus, today we face a new concept called H-index anxiety, just as researchers may experience library anxiety, information-seeking behavior anxiety, and research anxiety. Accordingly, researchers' feelings of fear and uncertainty about how to increase the H-index and achieve the necessary points are referred to as H-index anxiety.
This anxiety can affect researchers' mental health; more specifically, it can disrupt the effective performance of faculty members and health researchers. To understand how H-index anxiety arises and increases among researchers, and to pave the way for appropriate strategies to reduce this type of anxiety, it is important to recognize the contributing factors. The nature of the H-index This index allows comparison of the research performance of researchers across different disciplines; however, disciplines differ in the number of journals, the number of citations, and the types of articles. [2] In particular, researchers' efforts to place the full text of their articles in digital libraries [3] and on social networks demonstrate the perceived importance of the H-index. Thus, comparisons of the H-index create anxiety in some researchers who have a lower H-index. Research policies and regulations Research policies and regulations, such as faculty promotion regulations, faculty recruitment regulations and rules, admission requirements for research doctoral students, and requirements for postdoctoral researchers, have given rise to excessive attention to this index and have therefore been a factor in spreading (increasing) the level of anxiety among faculty members and health researchers. Insufficient literacy associated with publishing research results Insufficient familiarity with the capabilities of research-oriented social networks for increasing visibility and attracting article citations, as well as insufficient ability to use citation databases, can predispose researchers to H-index anxiety. Individual factors Limited research experience, younger age, lower academic rank (such as lecturer or assistant professor), employment type (temporary versus permanent, contractual), and the experience of other anxieties, such as information-seeking behavior anxiety and research anxiety, appear to be the individual factors that increase the level of H-index anxiety. Conclusions Nowadays, the evaluation of research performance cannot be considered one-dimensional. Research policies and regulations emphasize the H-index, and this emphasis has become a factor in creating and spreading H-index anxiety among researchers. Receiving citations and increasing the H-index constitute only one assessment of the quality of researchers' scientific work, not the whole of it. Consequently, treating the H-index as the only indicator of research performance quality, or overemphasizing it, can divert the country's path of scientific development, mislead researchers, and increase their anxiety level, given that researchers sometimes feel compelled to violate ethical standards, especially research ethics. Along these lines, to reduce H-index anxiety, in addition to reforming research rules and regulations and highlighting multidimensional criteria for evaluating health researchers, scientometrics-related workshops should be purposefully held for health researchers. Meanwhile, multidimensional scientometrics indicators should be designed and localized. Additionally, anxiety management skills training should be included in special training programs for researchers. Financial support and sponsorship Nil. Conflicts of interest There are no conflicts of interest.
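For reference, the index at issue is simple to state: a researcher has index h if h of their papers have at least h citations each. A minimal sketch of the computation (function name and sample values are illustrative):

def h_index(citations):
    """Largest h such that at least h papers have h or more citations each."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank   # this paper still supports an index of `rank`
        else:
            break      # citations are sorted descending, so no larger h is possible
    return h

print(h_index([10, 8, 5, 4, 3]))  # -> 4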
2021-12-09T17:55:15.983Z
2021-10-26T00:00:00.000
{ "year": 2021, "sha1": "6f1843e3566502a68644d209c96ea71b217c0977", "oa_license": "CCBYNCSA", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "df78bb6312892b4a9a24caad9f277d6d31b7fe54", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
259882836
pes2o/s2orc
v3-fos-license
Attentional bias for high-calorie food cues by the level of hunger and satiety in individuals with binge eating behaviors Introduction The abnormal hyperreactivity to food cues in individuals with binge eating behaviors could be regulated by a hedonic or reward-based system, overriding the homeostatic system. The aim of the present study was to investigate whether attentional bias for food cues is affected by the level of hunger, maintaining the normal homeostatic system, in individuals with binge eating behaviors. Methods A total of 116 female participants were recruited and divided into four groups: hungry binge eating group (BE) (n = 29), satiated BE (n = 29), hungry control (n = 29), and satiated control (n = 29). While participants completed a free-viewing task on high- or low-calorie food cues, visual attentional processes were recorded using an eye tracker. Results The results revealed that the BE group showed longer initial fixation duration toward high-calorie food cues in both the hunger and satiety conditions in the early stage, whereas the control group showed longer initial fixation duration toward high-calorie food cues only in the hunger condition. Moreover, in the late stage, the BE group stared more at the high-calorie food cues than the control group did, regardless of hunger and satiety. Discussion The findings suggest that the automatic attentional bias for food cues in individuals with binge eating behaviors, which occurs without purpose or awareness, is not affected by the homeostatic system, while strategic attention is focused on high-calorie food. Therefore, the attentional processing of food cues in the binge eating group is regulated by the hedonic system rather than the homeostatic system, leading to vulnerability to binge eating. Introduction In clinical settings, the consumption of a relatively large amount of food within a short period compared to usual, along with a subjective difficulty in controlling eating behavior during this time, is defined as binge eating behavior (American Psychiatric Association, 2013). This binge eating behavior is a core symptom observed in eating disorders such as binge eating disorder (BED), the binge-purge subtype of anorexia nervosa (AN), and bulimia nervosa (BN). Binge eating behavior often leads individuals to consume more energy than the actual amount of calories needed, resulting in imbalances related to the body and weight, causing excessive weight gain and associated psychological distress (Tanofsky-Kraff and Yanovski, 2004; Wonderlich et al., 2009). Problematic binge eating behavior can be explained by one of the important theories in addiction, called the incentive-sensitization theory (Robinson and Berridge, 1993; Berridge and Robinson, 2016). The incentive-sensitization theory was proposed to explain substance abuse, such as drug addiction, and suggests that when individuals are repeatedly exposed to stimuli, such as specific substances, that provide rewarding experiences, pleasure is induced and dopamine is activated, stimulating the brain and strengthening the connection between the stimulating substance and the rewarding response of pleasure. With the reinforcement of this connection over time, individuals can become conditioned to engage in behaviors that continuously seek out and consume the rewarding stimuli.
In other words, the behavior of individuals who excessively seek and consume food in binge eating and that of individuals who exhibit problematic substance addiction, characterized by excessive preoccupation with and approach toward addictive substances, share the same dopamine neural pathway, indicating that cravings for specific substances or food, and the triggering of such cravings, may have similarities (Schulte et al., 2016; Novelle and Diéguez, 2018). In essence, similar to substance addiction, repetitive and persistent binge eating behavior is suggested to induce sensitization in the mesocorticolimbic dopamine system (Berridge, 2009). Therefore, for individuals engaging in binge eating behavior, the importance of food stimuli or cues that trigger rewarding experiences after consumption gradually increases in their daily lives or environment. In fact, human food consumption appears to rely on both a pathway that maintains physiological homeostasis and, in contrast, a pathway that pursues psychological satisfaction (Lutter and Nestler, 2009). For instance, the pathway aimed at maintaining physiological homeostasis is essentially a system that regulates the body's energy balance, and when energy stores are depleted, it increases appetite and motivation to seek food. On the other hand, the activation of the non-homeostatic pathway, triggered by food stimuli or cues, disregards the regulation of the system for maintaining bodily homeostasis, leading to excessive food intake and triggering binge eating behavior, which can manifest as symptoms of overweight or obesity-related eating disorders (Berthoud, 2012; Dileone et al., 2012; Witt and Lowe, 2014; Yu et al., 2015). In particular, individuals who engage in binge eating behavior, influenced by the hedonic system, have been found to prefer high-calorie foods and to consume them in greater quantities compared to individuals without binge eating behaviors (Raymond et al., 2003). This is believed to be because high-calorie foods elicit stronger reward-related responses in the brain, leading individuals to experience a heightened sense of reward or pleasure when consuming these foods. As mentioned earlier regarding the incentive-sensitization theory, food cues and rewarding experiences can be conditioned through associative and reinforcement learning processes (Berridge and Robinson, 2003). In other words, specific food cues that are consistently paired with rewarding experiences after consumption can become attractive and desired stimuli that more easily and quickly capture an individual's attention, triggering cravings. The tendency of individuals who have become sensitive to these incentive stimuli is often measured by behavioral responsiveness, such as their reaction time to specific stimuli. In particular, the most fundamental characteristic of attention, which is the underlying process guiding individual behavior, can be measured more accurately and sensitively through attentional bias (Schag et al., 2013; Popien et al., 2015). Methods such as the go/no-go paradigm (Veling et al., 2017) or the dot-probe paradigm (Fenske and Raymond, 2006; Chen et al., 2016) are useful for measuring behavioral responsiveness by observing which stimuli among various stimuli presented in the environment receive more attention or focus. However, these methods have limitations when it comes to assessing more immediate and automatic responsiveness to specific stimuli, as participants may learn during the task about certain stimuli or processes presented to them.
The free-viewing paradigm using eye tracking is an appropriate method for investigating attentional responses to food in individuals exhibiting binge eating behavior (Cisler and Koster, 2010). Enhanced attention refers to the rapid detection of salient stimuli through automatic processing in the early stages, while disengagement involves strategic processing during the maintenance of attention. Therefore, difficulty in disengagement represents sustained attention to food-related cues. Results in adults with binge eating disorder reflect longer attentional dwell time on food stimuli, indicating extended gaze duration on food cues, and eye-tracking studies have yielded mixed results regarding initial direction biases (Schag et al., 2013; Popien et al., 2015; Schmidt et al., 2016; Sperling et al., 2017). For instance, one study using the free-viewing paradigm and antisaccade tasks found that individuals with binge eating disorder who were obese or overweight displayed longer gaze durations on food stimuli compared to individuals with obesity without binge eating disorder and normal-weight participants; however, all participants exhibited initial fixations occurring more frequently on food cues (Schag et al., 2013). Another study found that adults without binge eating disorder showed longer fixations and dwell times on both high-calorie and low-calorie food items (Popien et al., 2015). In adolescents with binge eating disorder, gaze durations were longer, but no directional biases were observed (Schmidt et al., 2016). Finally, while the BED and control groups did not differ in initial fixation locations, the BED group showed greater interest in food; there were no differences in detection times between groups in the visual search task, but a detection bias toward food cues was found only in the BED group overall (Sperling et al., 2017). In addition, as emphasized in the incentive-sensitization theory, two key concepts are highlighted (Pool et al., 2015, 2016). First, the subjective value of incentives can vary depending on individuals' circumstances. For example, a stimulus that is rewarding to one person may be perceived as aversive or costly by another person. Second, individual circumstances and relational states are important factors that modulate sensitivity to incentives. For instance, individuals may become more sensitive to stimuli that can induce certain states they require; this is exemplified by individuals who need food being more sensitive to food cues or stimuli (Zhang et al., 2009; Robinson and Berridge, 2013). Evidence regarding the modulation of attentional patterns to food cues by hunger has been obtained through studies involving normal-weight individuals and individuals with obesity (Nijs et al., 2010; Loeber et al., 2013). Normal-weight individuals showed biased attention toward food cues when hungry but not when satiated, indicating that attentional processing in healthy individuals is modulated by the homeostatic system (Piech et al., 2010; Loeber et al., 2013). However, in obese and overweight groups, no differences in attentional patterns were observed between hungry and satiated states, and in some cases, results were contrary to those of the normal-weight group (Nijs et al., 2010). The evidence considering hunger and satiety factors in individuals with binge eating is limited.
Given the mixed results regarding whether hunger can trigger binge eating, it is necessary to consider both hunger and satiety factors in individuals with binge eating disorder (Stice et al., 2008). Due to a lack of control over hunger levels, there may be mixed results in the early stages of attention, as previous studies have shown (Schag et al., 2013; Schmidt et al., 2016; Sperling et al., 2017). For instance, one study found that individuals with binge eating disorder (BED) reported significantly higher levels of hunger, more depressive symptoms, and less positive emotional responses to food cues compared to a control group (Sperling et al., 2017). Another study focusing on adolescents with BED showed that attentional biases toward food cues were associated with increased hunger only in the BED group. These results differ from previous evidence of general biases toward food stimuli in control groups, suggesting that hunger levels may have influenced the attention patterns of the control group (Schag et al., 2013; Schmidt et al., 2016). The existing evidence regarding orientation biases is not conclusive, because it did not employ a competitive paradigm involving three types of stimuli in complex naturalistic scenes. Additionally, the findings regarding the early stage of attentional processes are not certain, due to the relatively long duration of stimulus presentation (8 s), which may not adequately measure early covert attention (Popien et al., 2015). In the context of the incentive-sensitization theory, incentive salience refers to the implicit motivation to obtain a reward (i.e., wanting). Therefore, it is necessary to investigate clear results on early attentional processes, which reflect relatively automatic attention patterns (Fox et al., 2001). The brain circuitry underlying the psychological processes of the reward system consists of two components: "wanting" and "liking" (Berridge and Robinson, 2016). "Wanting" represents the motivation to obtain a reward, while "liking" refers to the pleasure experienced during consumption (Berridge, 2009). The theory suggests that "wanting" and "liking" can be independent in psychopathological conditions such as addiction or binge eating (Finlayson et al., 2007; Pool et al., 2016). Unlike "liking", explicit and implicit "wanting" are proposed to rely on different psychological mechanisms (Berridge and Robinson, 2003; Anselme and Robinson, 2015). Implicit "wanting" is expected to be associated with the early stage of attention, while explicit "wanting" is more closely related to overt attention, which is measured in the later stages of attention (Fox et al., 2001; Pool et al., 2016). In this study, attentional bias indicating implicit "wanting", as well as self-reported explicit "wanting" and "liking", was measured. The objective of this study is to investigate the influence of hunger and satiety on visual attentional bias toward food cue images in individuals with binge eating disorder. The research hypotheses are as follows: (1) In the hunger condition, both individuals with binge eating disorder and weight-matched controls will exhibit attentional bias toward high-calorie food cues compared to both low-calorie food cues and non-food cues. (2) In the satiety condition, individuals with binge eating disorder will continue to display attentional bias, whereas the control group will show reduced attention toward high-calorie food cues. Materials and methods
Participants Prior to the experiment, candidate participants were recruited through an internet bulletin board of universities in Seoul, Korea. As an initial screening for the binge eating (BE) problem group and control group, a total of 435 female undergraduates completed the Eating Disorder Diagnostic Scale (EDDS; Stice et al., 2000) and the Eating Disorder Examination Questionnaire (EDE-Q; Fairburn and Beglin, 1994). All members of the BE group reported, on average, at least one BE episode per week for the past 3 months, without compensatory behavior following BE episodes. These individuals, who have not received an official diagnosis of BED but demonstrate relatively high scores on measures assessing symptoms of binge eating, are referred to as individuals with a propensity for binge eating behaviors. By contrast, none of the control group members reported BE episodes during the past 3 months or a history of other eating disorder symptoms. Exclusion criteria in this study were as follows: (1) diagnosis of other eating disorders, (2) recurrent use of inappropriate compensatory behavior, and (3) the presence of any illness, or the use of any pharmacological treatment, that might influence eating behavior or body weight, or that would not allow a 12-h fast. Eventually, 116 eligible females agreed to participate: 58 participants were in the BE group and 58 in the control group, and the BE group was matched with the control group by weight. Each group was assigned randomly to the hunger or satiety condition. Finally, there were four groups: hungry BE (N = 29), satiated BE (N = 29), hungry control (N = 29), and satiated control (N = 29) (Table 1). The study protocol was approved by an Institutional Review Board of Chung-Ang University, Seoul, Republic of Korea (no. 1041078-201910-HRSB-320-01). Measurement Self-report questionnaires The Eating Disorder Diagnostic Scale (EDDS) is a 22-item self-report scale based on DSM criteria for anorexia nervosa, bulimia nervosa, and binge eating disorder (Stice et al., 2000). The Korean version of the EDDS (K-EDDS) was used (Bang et al., 2018b). It was used to identify BE participants and rule out an eating disorder among those in the control group. An overall symptom score is calculated from the sum of scores for the first 18 EDDS items. In this study, Cronbach's α was 0.807. The Eating Disorder Examination Questionnaire (EDE-Q) is a 36-item self-report measure that assesses the presence and severity of eating disorder psychopathology (Fairburn and Beglin, 1994). The Korean version of the EDE-Q, version 6.0, was used (Bang et al., 2018a). It consists of a global score and four subscales: eating concern, restraint, shape concern, and weight concern. In this study, Cronbach's α was 0.938. The Beck Depression Inventory (BDI) is a 21-item questionnaire that was originally developed for use with clinical populations, assessing the presence and severity of depression symptoms (Beck et al., 1988). The validated Korean version of the BDI was used (Lee et al., 1995). The scale is used to assess the cognitive, emotional, and somatic symptoms of depression. Each item has four choices describing the severity of the respective symptom, and participants choose the option they think is closest to their state during the past week. In this study, Cronbach's α was 0.821. The State-Trait Anxiety Inventory (STAI; Spielberger et al., 1970) is used to measure trait anxiety and state anxiety.
The trait version (STAI-T) measures the trait of anxiety, while the state version (STAI-S) measures the state of anxiety. The Korean version of the STAI was used (Hahn et al., 1996). Each subscale includes 20 items, and total scores range from 20 to 80, with greater scores indicating more severe anxiety. In this study, Cronbach's α was 0.834 for the STAI-T and 0.750 for the STAI-S. To measure the levels of hunger and satiety, a visual analog scale (VAS) ranging from 0 to 100 mm was used. Each VAS item consists of a question anchored from "not at all" to "very much". Participants responded with their own levels of hunger and satiety. Furthermore, to measure the levels of wanting and liking, applying the incentive salience model to the rewarding value of food, a VAS ranging from 0 to 100 mm was used, with items likewise anchored from "not at all" to "very much". The question to determine wanting was "How much do you want to eat this item right now?". Liking was determined by the question "How much do you like this item, not considering if you want to eat it right now?" (Stevenson et al., 2017). The body mass index (BMI) was used to record participants' physical information. BMI, an index that reflects the total amount of body fat, was calculated by dividing weight in kilograms by the square of height in meters (kg/m²). Weight was measured in kilograms, and height was measured in meters, using measuring tools available in the laboratory. Free-viewing task Eye-movement data were collected using an eye tracker (Tobii TX300, Tobii Technology AB, Danderyd, Sweden). There were three types of stimuli: high-calorie food, low-calorie food, and non-food cues. Each stimulus type consisted of nine images. The high-calorie food cues were items that contained large amounts of fat and sugar, such as hamburgers, ice creams, and chocolates. The low-calorie food cues contained various types of vegetables and fruits. The neutral stimuli included stationery and household objects. High- and low-calorie cues were determined based on the actual and perceived calories of specific foods, as rated and standardized in the FATIS (Seo et al., 2020). The FATIS is a database of pictures with normed ratings of addiction-related images, including food, alcohol, and nicotine, as well as non-addictive neutral items. Each pair of stimuli was matched by inspection with respect to complexity, shape, color, brightness, and viewing distance. In total, 27 pairs were made (high-calorie food vs. non-food, low-calorie food vs. non-food, high-calorie food vs. low-calorie food). Each pair was presented in a counterbalanced order, and cues were presented twice, once on each of the left and right sides of the monitor, yielding 54 trials (Kim et al., 2016). Each pair of cues was presented at a size of 80 × 100 mm with their centers 200 mm apart. Each trial began with a fixation for 1,000 ms, followed by a pair of pictures for 4,000 ms. The eye movements of participants were recorded by the eye-tracking system during the free-viewing task, with data sampled at 120 Hz. All participants performed the free-viewing task in a lighted room, on a 23-inch monitor at a distance of 60–75 cm between the eyes and the monitor. The eye-tracking equipment was calibrated for each participant by presenting five moving dots on the screen, and then the pairs of cues were presented.
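A schematic of how such a counterbalanced trial list can be constructed (the stimulus labels and shuffling scheme are illustrative assumptions, not the authors' code):

import random

# Illustrative stimulus labels; nine images per category in the actual design.
HIGH = [f"high_{i}" for i in range(9)]
LOW = [f"low_{i}" for i in range(9)]
NEUTRAL = [f"neutral_{i}" for i in range(9)]

def build_trials(seed=0):
    """27 matched pairs, each shown twice with left/right positions swapped -> 54 trials."""
    pairs = list(zip(HIGH, NEUTRAL)) + list(zip(LOW, NEUTRAL)) + list(zip(HIGH, LOW))
    trials = [(a, b) for a, b in pairs] + [(b, a) for a, b in pairs]  # side counterbalance
    random.Random(seed).shuffle(trials)
    return trials  # each trial: 1,000 ms fixation, then the pair for 4,000 ms

trials = build_trials()
print(len(trials))  # 54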
The software (Tobii TX300, Tobii Technology AB, Danderyd, Sweden) provided a variety of gaze information, including the initial fixation latency score, initial fixation duration score, and gaze duration score. Procedures Participants were asked not to consume any food, except water, for approximately 12 h prior to the start of the experiment. Upon arrival, participants were provided with information regarding their rights and the procedure. The experiment was scheduled between 8:00 and 10:00 a.m. to align participants' fasting and satiety states as closely as possible. Ultimately, all participants visited the laboratory before 10:00 a.m., and the manipulation checks for the fasting state were based on participants' self-reported responses. When participants arrived at the laboratory, they received instructions on the consent form approved by the Institutional Review Board and voluntarily signed the consent form. Then, participants were randomly assigned to either the hunger or satiety condition, matched for age and body mass index. For the satiety condition, a standard meal was provided at the laboratory to standardize satiety levels. The standard meal consisted of "gimbap", approximately 350 kcal, a dish consisting of rice, radish, carrots, spinach, and other vegetables wrapped in seaweed. This was done to control participants' satiety levels. All participants in the satiety condition completed hunger and satiety visual analog scales (VASs) before and after the meal to assess their hunger and satiety levels; participants in the hunger condition completed the hunger VAS only once. Afterward, participants were asked to complete the free-viewing task (Figure 1). All participants were instructed to view the computer monitor freely while minimizing movement during the task. The task consisted of a total of 54 trials. Following the task, participants completed the self-report questionnaires. Finally, all participants were given a debriefing regarding the experiment. The experimental procedure took approximately 40 min, and all participants received a monetary reward of 10,000 Korean won (approximately 10 USD). Data analyses The required sample size for this study was calculated using G*Power 3.1.9.4 (University of Dusseldorf, Dusseldorf, Germany), with an alpha error probability of 0.05 and a power of 0.95. A large effect size of 0.40 was expected with the current sample size. For data analysis, a one-way analysis of variance (ANOVA) was conducted to analyze the differences in characteristics among the hungry BE, satiated BE, hungry control, and satiated control groups. To examine differences in attentional bias patterns, three dependent measures were derived from the eye-movement data: initial fixation latency, initial fixation duration, and gaze duration. Each eye-movement score was calculated as the difference between the attentional bias scores for high- and low-calorie food cues, and between high-calorie food cues and neutral cues (a schematic of this computation is sketched below). In addition, based on the analysis of the basic characteristics between groups, significant differences were found in the levels of depression and anxiety among the groups. To account for these differences, depression and anxiety levels were set as covariates, and subsequent analyses were conducted. Hypothesis-driven analyses of attentional bias scores were conducted using 2 (Group: BE, control) × 2 (Condition: hunger, satiety) two-way analysis of covariance (ANCOVA).
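A minimal sketch of the bias-score computation described above, assuming gaze measures have already been aggregated per participant by cue category (the data structure and column names are illustrative):

import pandas as pd

# Illustrative per-participant mean gaze durations by cue category (ms).
df = pd.DataFrame({
    "participant": [1, 2],
    "gaze_high": [1450.0, 1600.0],     # mean gaze duration on high-calorie cues
    "gaze_low": [1200.0, 1550.0],      # mean gaze duration on low-calorie cues
    "gaze_neutral": [1100.0, 1400.0],  # mean gaze duration on neutral cues
})

# Bias scores: positive values indicate a bias toward high-calorie cues.
df["bias_high_vs_low"] = df["gaze_high"] - df["gaze_low"]
df["bias_high_vs_neutral"] = df["gaze_high"] - df["gaze_neutral"]
print(df[["participant", "bias_high_vs_low", "bias_high_vs_neutral"]])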
Moreover, a 2 (Group: BE, control) × 2 (Condition: hunger, satiety) × 2 (Cue type: high calorie, low calorie) three-way ANCOVA was conducted on the self-reported wanting and liking VAS ratings. All statistical analyses were conducted using IBM SPSS version 25.0 for Windows. Sample characteristics A total of 116 participants took part in this study: 29 in the hungry BE group, 29 in the satiated BE group, 29 in the hungry control group, and 29 in the satiated control group. Table 1 shows the group characteristics of the participants analyzed in this study. According to the matching criteria, there were no significant differences in mean age among the groups [F(3,112)]. The two BE groups had significantly higher eating disorder symptoms, depression, trait anxiety, and state anxiety than the other two groups. As expected, the hungry BE and control groups reported higher hunger than the satiated BE and control groups [F(3,112) = 85.09, p = 0.0001, η² = 0.695], indicating that the manipulation was appropriate. Table 2 shows the subjective hunger ratings before and after consuming a standardized meal. There was no statistically significant interaction between the group and the meal [F(1,56) = 0.59, p = 0.446, η² = 0.01], nor a main effect of group [F(1,56) = 2.37, p = 0.129, η² = 0.04], suggesting that there was no difference in hunger level between the BE and control groups. The subjective hunger rating using the VAS showed a statistically significant main effect of the meal [F(1,56) = 156.35, p = 0.0001, η² = 0.736], indicating that both the BE and control groups reported a higher level of hunger before consuming the standardized meal than after it. Free-viewing task To examine attentional bias toward food cues, three eye-movement scores, comprising the initial fixation latency score, initial fixation duration score, and gaze duration score, were analyzed for two pairs of cues. The analysis accounted for the potential influence of depressive and anxiety levels (BDI, STAI-T, and STAI-S) by controlling for them as covariates, considering that these are emotional states that can affect attention processes (Smith et al., 2020). Each eye-tracking score was calculated as the difference between the scores for high- and low-calorie food cues, and between high-calorie food cues and neutral cues. A Group (BE, control) × Condition (hunger, satiety) two-way ANCOVA was conducted. Furthermore, where there were significant interaction effects, post hoc analyses were conducted, and degrees of freedom were adjusted using the Greenhouse-Geisser epsilon to correct for violations of the assumption of sphericity. Attentional bias toward high-calorie food cues vs. low-calorie food cues among the groups To examine whether each group exhibits an attentional bias toward high-calorie food cues compared to low-calorie food cues under the hunger condition, the initial fixation latency and initial fixation duration of each group were analyzed. Table 3 shows the mean and standard deviation values of attentional bias toward high-calorie vs. low-calorie cues among the groups. First, for the initial fixation latency score, there was no significant interaction between the group and condition [F(1,109)]. The results indicated that neither the BE nor the control group detected high-calorie food cues more quickly than low-calorie food cues, regardless of hunger and satiety.
Second, for initial fixation duration, there was a significant interaction between the group and the condition [F(1,109) = 5.27, p = 0.024, η² = 0.046]. To determine the source of the interaction, a simple main effects analysis was performed. As a result, there was no difference between the hunger and satiety conditions in the BE group [F(1,53) = 0.004, p = 0.947, n.s.]. In contrast, the control group showed a higher initial fixation duration for high-calorie food cues compared to low-calorie food cues in the hungry condition, but was more likely to look initially at low-calorie food in the satiated condition [F(1,53) = 9.22, p = 0.004, η² = 0.048]. Moreover, there was no difference between the BE group and the control group in the hunger condition [F(1,53) = 1.79, p = 0.186, n.s.], but the BE group showed a higher initial fixation duration for high-calorie vs. low-calorie food cues than the control group in the satiated condition [F(1,53) = 8.15, p = 0.006, η² = 0.033]. This indicates that the BE group showed a persistent initial attentional bias toward high-calorie food cues in both the hunger and satiety conditions, whereas the control group showed this attentional bias only in the hunger condition. On the other hand, there were no significant effects of the condition [F(1,109) = 3.54, p = 0.063, η² = 0.031] or the group [F(1,109) = 0.90, p = 0.337, n.s.] (Figure 2). To investigate whether participants demonstrating problematic binge eating behaviors under the satiety condition exhibit a longer gaze duration toward high-calorie food cues compared to low-calorie food cues, the gaze duration toward food stimuli of each group was analyzed. As a result, there was no significant interaction between the group and the condition [F(1,109) = 0.739, p = 0.392, n.s.] and no significant main effect of the condition [F(1,109) = 2.61, p = 0.109, n.s.]. However, there was a significant main effect of the group, indicating that the BE group looked at high-calorie food cues longer than low-calorie food cues compared to the control group [F(1,109) = 4.37, p = 0.039, η² = 0.039] (Figure 3). Attentional bias toward high-calorie food cues vs. neutral cues among the groups To examine whether each group exhibits attentional bias toward high-calorie food cues compared to non-food cues under the hunger condition, the initial fixation latency and initial fixation duration of each group were analyzed. Table 4 shows the mean and standard deviation values of attentional bias toward high-calorie vs. neutral cues. First, for the initial fixation latency score, there was no significant interaction between the group and the condition [F(1,109) = 0.90, p = 0.344, n.s.] and no main effect of the group [F(1,109) = 1.81, p = 0.182, n.s.]. However, there was a significant difference for the condition [F(1,109) = 7.01, p = 0.009, η² = 0.060], showing that all participants engaged attention with high-calorie food cues faster when hungry than when satiated (Figure 4).
Second, for initial fixation duration, there was no significant interaction between the group and the condition [F(1,109) = 0.03, p = 0.855, n.s.] and no main effect of the group [F(1,109) = 1.62, p = 0.206, n.s.]. Although not statistically significant, there was a tendency for the condition [F(1,109) = 3.19, p = 0.077, η² = 0.028], cautiously suggesting that participants may be more likely to view high-calorie food cues for a longer duration in the hunger condition than in the satiety condition. To investigate whether participants with problematic binge eating behaviors under the satiety condition exhibit a longer gaze duration toward high-calorie food cues compared to non-food cues, the gaze duration toward food stimuli of each group was analyzed. As a result, there was no significant interaction between the group and the condition [F(1,109) = 1.91, p = 0.170, n.s.]. However, there was a significant main effect of the group [F(1,109) = 4.75, p = 0.031, η² = 0.042], indicating that the BE group showed attentional bias toward high-calorie food cues vs. neutral cues compared to the control group, regardless of hunger and satiety. Moreover, there was a significant main effect of the condition [F(1,109) = 7.06, p = 0.009, η² = 0.061], as all hungry participants looked at the high-calorie food cues for a longer time than satiated participants (Figure 5). Explicit wanting and liking To assess participants' explicit wanting and liking for high-calorie and low-calorie food cues, their self-reported values on the visual analog scale (VAS) were analyzed. Table 5 shows the mean and standard deviation values for explicit wanting and liking toward high-calorie and low-calorie food cues. First, in terms of wanting level, the analysis revealed no significant interaction between the group and the condition in wanting for high-calorie food cues [F(1,109) = 0.21, p = 0.647, n.s.]. There was a main effect of the condition for high-calorie food cues [F(1,109) = 17.01, p = 0.0001, η² = 0.135], indicating that all participants reported higher explicit wanting for high-calorie food when hungry than when satiated. However, there was no significant effect of the group on explicit wanting for high-calorie food cues [F(1,109) = 3.54, p = 0.063, n.s.]. Furthermore, there was no significant interaction between the group and the condition in wanting for low-calorie food cues [F(1,109) = 0.79, p = 0.377, n.s.]. There was a main effect of the condition for low-calorie food cues [F(1,109) = 12.03, p = 0.0008, η² = 0.099], indicating that both the BE group and the control group reported higher explicit wanting for low-calorie food cues when hungry than when satiated. There was no significant main effect of the group [F(1,109) = 0.44, p = 0.510, n.s.]. Second, for liking level, there was no significant interaction between the group and the condition in liking for either high-calorie or low-calorie food cues.
It is suggested that there was no difference in liking for high-calorie and low-calorie food cues between the BE group and the control group, or between the hunger and satiety conditions. Discussion This study aimed to examine whether attentional bias for food cues is affected by hunger and satiety, which maintain homeostasis, in individuals with BE. The results of this study showed that the BE group displayed attentional bias toward high-calorie food cues over low-calorie food cues in both the hunger and satiety conditions in the early stage of attentional processing. However, the control group showed attentional bias toward high-calorie food cues when hungry, whereas, when satiated, they were more likely to look at the low-calorie food cues. In the late stage of attentional processing, the BE group looked at the high-calorie food cues for longer than at the low-calorie food cues, compared to the control group. Moreover, the BE group reported higher explicit wanting for high-calorie food than the control group did, and all participants reported higher explicit wanting for high-calorie food when they were hungry than when satiated. Finally, there was no difference in explicit liking by group or condition. The main result of this study is that both the BE and control groups showed an early attentional bias toward high-calorie food cues over low-calorie food cues in the hunger condition. In the satiety condition, BE participants showed a persistent orientation bias toward high-calorie food images, whereas the control group did not. As hypothesized, the effect of the hedonic pathway overriding the homeostatic pathway contributes to the development and maintenance of BE (Novelle and Diéguez, 2018). The normal-weight group showed incentive salience for high-calorie food cues only when hungry, in accordance with the homeostatic pathway (Lutter and Nestler, 2009). While it is adaptive to quickly detect and allocate attention toward high-calorie food during energy depletion, the attentional bias toward high-calorie food cues shown by the BE group regardless of condition is maladaptive. In addition, the continuous hyperreactivity to high-calorie food cues suggests why the majority of people with BED are overweight or obese (Field et al., 2013). This result supports the main hypothesis that reward system activity is abnormally enhanced upon exposure to palatable food cues in individuals with binge eating behaviors (Pool et al., 2016). In line with the incentive-sensitization theory, high-calorie food seems to be more salient than low-calorie food in the BE group because it is a reward-related cue. The attentional bias in the early stage of attentional processing reflected automatic engagement with high-calorie food cues, indicating the implicit motivation to obtain a reward (Fox et al., 2001). As cue-triggered reactivity to high-calorie food cues may be due to conditioning, an attentional bias limited to high-calorie food cues may be caused by a personal history of binge eating (Berridge and Robinson, 2003). A study examining the food selection and intake of overweight women with BED showed that participants with BED consumed a greater percentage of energy as fat and a lesser percentage as protein than participants without BED during a binge meal (Yanovski et al., 1992). Moreover, it has been suggested that palatable foods containing sugar and fat, most of which are high-calorie foods, have addictive properties (Gearhardt et al., 2011; Smith and Robbins, 2013).
The results may provide evidence of addiction-like consumption of palatable food in individuals with binge eating behaviors. The study also shows that an initial orientation bias toward high-calorie food cues vs. neutral cues appeared in all participants when hungry. The absence of group differences in participants' orientation bias toward high-calorie food cues vs. non-food cues is consistent with other eye-tracking studies in adults and adolescents who binge eat (Schag et al., 2013; Sperling et al., 2017). However, a preferential orientation bias toward food stimuli was found in adults with BE episodes viewing real scenes (Popien et al., 2015) and in studies using reaction time-based measures (Schmitz et al., 2014, 2015; Sperling et al., 2017). The difference in results might be explained by the use of different experimental procedures and stimulus types. Moreover, another reason why this study did not show any difference between high-calorie food cues and neutral cues may be a ceiling effect, which may have occurred because paying attention to high-calorie food cues is the most important issue for survival when hungry. As expected, the longer gaze duration for high-calorie food cues compared to low-calorie or neutral food cues in individuals with BE was replicated in our study; there have been relatively consistent results showing that people with BE exhibit slower disengagement from food cues (Schag et al., 2013; Popien et al., 2015; Schmidt et al., 2016; Sperling et al., 2017). However, like the control group, the BE group in this study also showed a longer gaze duration for high-calorie food cues vs. neutral food cues when hungry than when satiated. This result differs from findings in the group with obesity, which may be related to reward system dysregulation (Nijs et al., 2010). As the maintained stage of attention measures more strategic attention (Fox et al., 2001), the BE group can also be affected by hunger and satiety in explicit desire (i.e., explicit wanting). This is in line with the results for self-reported explicit wanting, applying the classification of explicit and implicit wanting in the incentive-sensitization theory. The BE group reported higher explicit wanting for high-calorie food than the control group did, and all participants reported higher explicit wanting for high-calorie food when hungry than when satiated. These results may suggest that computations of wanting incorporate the current physiological state in the BE group (Zhang et al., 2009). Another possible explanation is that there may be a greater risk of binge eating or loss of control when hungry than when satiated, consistent with the finding that dietary restraint predicts binge eating episodes (Freeman and Gil, 2004). When comparing high-calorie cues with low-calorie cues, the late-stage attentional bias toward high-calorie food cues is still evident in the BE group. This supports, to some extent, the approach-avoidance bias presented in previous studies (Schmidt et al., 2016). The results differ from those for other eating disorders, such as AN and BN (Brooks et al., 2011). For example, eye-tracking studies showed that individuals with AN and BN attended to food cues for a shorter time than the control group did (Blechert et al., 2011). Moreover, an eye-tracking study demonstrated that a group with bulimic tendencies initially detected high-calorie food cues faster than neutral cues and avoided attentional maintenance (Kim et al., 2016).
People with BE are more similar to people with obesity, who have shown consistent attentional bias in both the early and late stages. However, while people with obesity tend to allocate their attention to both high-calorie and low-calorie food cues, the BE group showed attentional bias only for high-calorie food cues (Nijs et al., 2010). This suggests that, unlike for people with obesity, interventions focusing on high-calorie food cues may be required for individuals with binge eating behaviors. There are several limitations to this study. First, the participants had a BMI of approximately 21, which is within the average range for individuals in Korea and indicates that they did not exhibit issues of obesity or overweight. Therefore, we assumed that consuming "gimbap", which is commonly regarded as a typical Korean meal, would induce a basic level of satiety. Although both the BE group and the control group reported reduced hunger after the meal, individual differences in the degree of satiety may exist, suggesting the possibility that participants did not fully experience satiety. Therefore, in future studies, it would be beneficial to use various physiological and psychological measures (e.g., blood samples), rather than relying solely on self-report evaluations, to assess participants' levels of hunger more objectively. Second, the food images formed a homogeneous category, whereas the non-food images depicted items from various categories. Thus, it is possible that more attention was paid to the food stimuli because they were of the same category. However, an overriding consideration in selecting the stimuli was that, within each picture pair, the food and neutral cues were matched as closely as possible for complexity, color, and brightness. In the future, it would seem desirable to select non-food images from a single category. Third, this study did not measure food intake to examine BE after an attentional bias toward high-calorie food cues; in a future study, a bogus taste test could provide additional evidence of BE. To conclude, the current study suggests that high-calorie food perception is biased in individuals with binge eating behaviors compared with weight-matched female controls. This is the first study to examine the differences in attentional patterns between BE and weight-matched control groups while taking into consideration the demands of the internal milieu, based on the incentive-sensitization theory. As visual food cues are particularly prominent in society, understanding the cognitive processing of exposure to visual food cues in BE is of great importance for developing potential behavioral therapies, environmental alterations, and public health measures. Moreover, based on the results, attention bias modification could be implemented to modify the specific attentional bias toward high-calorie food cues. Data availability statement The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author. Ethics statement The studies involving human participants were reviewed and approved by the Chung-Ang University IRB. The patients/participants provided their written informed consent to participate in this study.
2023-07-15T15:12:15.037Z
2023-07-13T00:00:00.000
{ "year": 2023, "sha1": "599c22c58053b7298acb1f699c3d8efd89e71266", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fnins.2023.1149864/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "07f80bde7e22d8be68006ea65b36869b8351714e", "s2fieldsofstudy": [ "Psychology", "Biology" ], "extfieldsofstudy": [] }
12108488
pes2o/s2orc
v3-fos-license
Decisional role in seriously ill hospitalized patients near the end of life: The patient's and provider's perspective Decisions about whether or not to implement life-sustaining therapies are complex and are becoming more so as the ability to prolong life with advanced technologies and care increases. The objectives of this study were: (1) to determine seriously ill hospitalized patients' preferences for decisional role with respect to decisions about life-sustaining treatments, and (2) to determine if providers were aware of patients' preferences. This prospective, descriptive pilot study was conducted at an Ontario teaching hospital. One hundred and seventeen seriously ill adult patients admitted with cancer and non-cancerous conditions participated in a structured interview. Fifty-three nurses and 63 physicians responsible for the care of the participating patients also participated. Patients and providers were asked similar questions about end-of-life discussions and preference for decisional responsibility for life-sustaining treatments. Most patients (n=89, 77%) had thought about end-of-life issues and were willing to discuss these with their physicians and nurses, but few (n=37, 37%) reported such discussions. Preferences for decisional role varied; most indicated a preference for a shared role (n=80, 80%), and there were no differences between patients with and without cancer. Generally, both physicians and nurses were not aware of, or did not accurately determine, patient preferences for decisional role. The findings from this study show that seriously ill hospitalized patients have thought about and are willing to share in discussions about end-of-life care with their providers, yet many have not. Statement of issue In Canada, over 70% of deaths occur in the hospital. Patients with a primary diagnosis of cancer account for approximately 30% of these deaths. With the ability to prolong life with advanced technology and care, patients with primary and secondary diagnoses of cancer (and their family members) are, increasingly, confronted with decisions about whether or not to implement life-sustaining therapies. These are difficult, value-laden treatment decisions. Preferences for treatment are often unknown or not sought. Many studies examining end-of-life issues describe responsibility for these decisions (i.e., the decision to treat or to withhold or withdraw treatment) from the physician's perspective. Although several position papers have been written, very little research has been conducted investigating the role of nurses in end-of-life care and end-of-life decision-making. Recent studies of end-of-life care suggest that improvements in communication and the decision-making process may lead to improvements in quality end-of-life care. A large five-centre study conducted in the United States, the Study to Understand Prognoses and Preferences for Outcomes and Risks of Treatments (SUPPORT), enrolled over 9,000 patients. The objective of this two-phase study was to improve end-of-life decision-making and reduce the frequency of mechanically supported, painful, and prolonged processes of dying. In phase one, the process of decision-making and patient outcomes were described and observed in 4,301 patients.
There were shortcomings in communication and evidence of aggressive life-sustaining treatments: only 47% of physicians knew patients' preferences, 46% of do-not-resuscitate (DNR) orders were written within two days of death, 38% of patients had a 10-day ICU stay, and many experienced moderate to severe pain in their last days of life. The findings from phase one suggested that management was most impacted by poor physician-patient communication. In phase two, a nurse-based intervention was designed. A "skilled" nurse made many contacts with the family, patient, and physician to elicit preferences, improve the patient's and family's understanding of outcomes, encourage attention to pain control, and, overall, facilitate advance care planning and communication. This study relied solely on the nurse as a communicator, facilitator, and advocate to improve the decision-making process. There were no significant differences in the measured clinical or economic outcomes. The apparent failure of this intervention strongly suggests that there are other more powerful determinants of the decision-making process that are not completely understood. Therefore, the purpose of this research was twofold: (1) to determine seriously ill hospitalized cancer and non-cancer patients' preference for decisional role with respect to end-of-life decisions, and (2) to determine if their providers (nurses and physicians) were aware of their preferred role. We hypothesized that seriously ill hospitalized patients would prefer to defer to, or share with, their health care providers the responsibility for decisions about end-of-life treatments, and that cancer and non-cancer patients might have different experiences, as the illness trajectory for cancer is more predictable and, historically, cancer patients have received more formal palliative and advance care planning. Secondly, we hypothesized that most providers would be unaware of patients' preferences. Joan Tranmer, RN, PhD, is the director of nursing research at Kingston General Hospital and is the co-principal investigator, along with Dr. D. Heyland, for the End of Life Research Working Group. Dr. Heyland, MD, FRCPC, is a medical intensivist and a career scientist with the Ontario Ministry of Health. This research was generously supported by the Oncology Nursing Society and the Clare Nelson Bequest Fund of Kingston General Hospital. Helene Hudson, 1945-1993. The 2000 Helene Hudson Memorial Lecture, 12th Annual CANO Conference, October 2000, sponsored by Amgen Canada.
We have attempted to integrate many of the concepts into an organizing framework to guide this study (and others) conducted by the End-of-Life Research Working Group (Heyland, Tranmer, & Feldman-Stewart, 2000). The framework consists of four "units of study": (a) provider, (b) patient, (c) decision-making process, and (d) outcome. It should be emphasized that this is an organizing framework, and it cannot be overstated that, conceptually, decision-making, especially near the end of life, is complex. The interactions are continuous, involve multiple providers and family members, occur within a complex social environment of often conflicting priorities and values, and the decisions are value-laden and final. While we conceptualize the patient-provider interactions as one of three models (passive, collaborative, active), the models should not be construed to represent rigid processes or events. Patients and providers may move from one model to another from one interaction to the next, or even within one interaction. For example, as a physician operating in an active role senses that his patient requires more information and participation in the decision-making, he may move to a more shared decision-making model. The optimal outcome(s) of the process will be unique for each individual and, consequently, are difficult to define and measure.

The nursing role in end-of-life decision-making
If health care and end-of-life decision-making are thought of as a continuum, anchored by the patient at one end desiring full responsibility and control and the physician (provider) at the other end dictating clinical decisions about patient care, we hypothesize that the nurse functions in several roles, as a facilitator, interpreter, and clarifier, and is often a filter through which communication occurs between the physician and patient. The nurse as mediator interprets for the physician and advocates for the patient and family. Thus, if the nurse is participating by mediating patient preferences, then his/her perception of the degree of responsibility for patient decision-making should be congruent with that of the patient.
Very few research studies have addressed the role of the nurse in end-of-life decisions, although some studies have addressed nursing attitudes toward end-of-life issues. Gaps in the decision-making process are evident, and those that may involve nurses have not been adequately addressed. A recent study by Wilson and colleagues (1997), comparing interns' and attending physicians' abilities to predict end-of-life treatment choices of seriously ill hospitalized patients, found that physicians often learned of the patients' CPR preferences from sources other than the patients. No specific data were given that quantified "often," and no indication was given as to who these sources were, although it is possible that one of the main sources was the patient's nurse. A study by Baggs (1993) found that the amount of collaboration between nurses and ICU house staff in the decision to transfer a patient out of the ICU, as reported by nurses, was a statistically significant predictor of the risk of a negative patient outcome. As collaboration increased, as reported by nurses, the incidence of negative patient outcomes decreased. Collaboration, as reported by house staff, was not statistically associated with patient outcomes. Although this study did not look specifically at end-of-life decisions, the results indicate that the role nurses play in clinical decision-making can affect patient outcomes.

The aim of phase two of the SUPPORT study was to improve end-of-life decision-making and reduce the frequency of a mechanically supported, painful, and prolonged process of death. In this randomized controlled study, physician groupings were randomized to receive the intervention or not. The intervention consisted of nurses: (1) providing prognostic information to physicians; (2) eliciting patient preferences; (3) encouraging physician attention to pain control; (4) facilitating advance care planning; and (5) facilitating physician-patient communication. Although the SUPPORT trial failed to achieve statistical significance on any of the five primary outcomes (physician understanding of patient preferences; incidence and timing of documentation of do-not-resuscitate orders; the amount of pain experienced by patients; time spent in an intensive care unit, comatose, or receiving mechanical ventilation before death; and hospital resource use), one cannot conclude that there is no role for nurses in end-of-life decisions. Oddi and Cassidy (1998), in a critical commentary on the SUPPORT trial, suggested that the poor outcomes might have been related to the investigators' inadequate understanding of, and consequently inadequate incorporation of, nursing skill and knowledge into the project design and intervention. Nurses were to independently develop their role, similar to "nurse specialists." The nurse selection criteria, background education, preparation, and responsibilities varied between sites. Nurses may have failed as communicators because their information was not valued or perceived as credible. Nurses may have failed in their role as patient advocates because of a lack of assertiveness and support by the health care team. Nurses were caught in the middle between families and patients and the physicians; there was little evidence of collaboration. However, Oliverio and Fraulo (1998) commented favourably on their role as nurse clinicians. They stated that they came to understand the complexities and fears of patients and families in this process, and perhaps it was these complexities that explain why the communication efforts seemingly
demonstrated no benefit. They felt strongly that it was the nursing role to advocate for appropriate care in accordance with patients' and families' preferences, in conjunction with the clinical judgment of the health care team. The nursing role was to make sense of the complex factors, such as high technology, hope, futility, and the burden of the decisions. They also suggested that outcomes related to process, comfort, and caring may be more appropriate to measure.

Summary
End-of-life decision-making for seriously ill hospitalized cancer and non-cancer patients is complex. Recent research suggests that our efforts to improve care near the end of life have not been successful. Specifically, the nursing role remains underdeveloped and underutilized. Therefore, the purpose of this study was to explore important aspects of end-of-life decision-making in seriously ill hospitalized cancer and non-cancer patients from both a patient and a provider perspective.

Research questions
In this study we posed three research questions:
1. What role do seriously ill hospitalized patients wish to assume in decisions about life-sustaining treatments?
2. Is there a difference in preferences for decisional role between patients diagnosed with cancer and those with non-cancerous conditions?
3. Are health care providers (nurses and physicians) aware of patients' preferences for decisional role and, if so, what is the congruency?

Research method
The study design was a case-specific, cross-sectional survey administered in face-to-face interviews. The study was conducted in an acute care, university-affiliated hospital in southeastern Ontario. The study population consisted of patients admitted to the Kingston General Hospital who met the patient inclusion/exclusion criteria, the patients' assigned nurses, and the patients' attending and resident physicians.

Patient inclusion criteria required that patients were aged 18 years or more; were admitted to hospital for medical reasons; had one or more of the following co-morbidities: (a) chronic obstructive pulmonary disease (COPD), determined by the presence of two or more of a baseline pCO2 of ≥ 45 torr, cor pulmonale, respiratory failure within the last year, or forced expiratory volume of ≤ 25%; (b) congestive heart failure (CHF), determined by New York Heart Association Class IV symptoms or ventricular function ≤ 25%; (c) cirrhosis, determined by diagnostic imaging or esophageal varices and hepatic coma, or Class B or C liver disease; or (d) metastatic cancer (admitted with a complication); were expected to stay in hospital 72 hours or more; and could speak English. Patients with psychiatric illness, those who were expected to have difficulty in communication (language, cultural, or cognitive barriers), and those who were facing imminent death were excluded from the study. The patient inclusion criteria for the study sample were chosen to reflect the inclusion criteria used in the SUPPORT study. Patients whose condition might deteriorate to the point where they could be at risk of facing end-of-life decisions, and whose probability of survival at six months was 50%, were included in the sample.
Each study subject's assigned nurse, responsible resident, and attending physician were approached to participate in the study. The assigned nurse was the nurse assigned to the patient on the day of the interview. The attending physician was the staff physician responsible for the patient's in-hospital medical care at the time the survey was administered to the patient. The most responsible resident was the senior resident assigned to the care of the patient.

Measures
We obtained information from patients using a structured questionnaire administered by a research assistant. The questionnaire consisted of a preamble explaining the study objectives; questions to determine the patient's preferred role in making decisions; questions determining with whom the patient feels comfortable discussing end-of-life issues; and a section to collect demographic data. We did not use the card sort approach as originally designed by Degner and Sloan (1992), as we were concerned about the time required to sort responses, and we also wanted to use the same methodology with the physicians and nurses. The measurement tool for physicians and nurses consisted of a subset of the questions given to the patient. The health care provider questionnaire assessed the physicians' and nurses' perception of the role they thought the patient would desire with respect to end-of-life decisions. We also provided an opportunity for both patients and providers to comment on their responses.

Data collection
All attending physicians were informed about the study, and endorsement was sought for the involvement of patients assigned to their care. Most attending physicians agreed to participate. A small number of physicians raised concern about the focus of the study on end-of-life issues, especially with "their patients," with whom they may not have discussed these issues. We attempted to reassure physicians that we were exploring the process of decision-making in an attempt to describe strengths and gaps, and that we were focusing only on preferences for decisional role and not actual preferences for care.

Patients were approached for participation if they met the inclusion criteria and had been in hospital for at least three days. After patient consent was obtained, the research assistant conducted the interview. The nurse assigned to the patient on the day of the interview, the most responsible senior resident physician, and the attending physician were given a questionnaire to complete for each patient enrolled in the study. The research protocol was reviewed and approved by the Kingston Health Sciences Research Ethics Board.

Sample
Patient recruitment for this study began in July 1999 and provider recruitment in February 2000, and recruitment will continue for another six months. As of July 2000, the time of this report, the patient participation rate was 57% (122/215). The most common reason for non-participation was the patient's desire not to be in a study. The response/participation rates for nurses, residents, and attending physicians were 86% (46/53), 64% (19/29), and 77% (26/34), respectively. For the purpose of this report, the attending and resident responses are combined into physician responses.
Patient characteristics
Results are reported on the first 117 patients enrolled in the study (see Table One). Patients enrolled in the study were elderly. Of those patients who were enrolled in the first six months of the study, 74% of patients with cancer and 54% of patients with non-cancerous conditions have expired. More of the patients with COPD and congestive heart failure had ICU admissions in comparison to the cancer patients. More cancer patients had received palliative care consults. However, only one-third of patients in both groups had recorded discussions about end-of-life (EOL) care or an EOL order on their patient record. Most patients lived either on their own or with another family member. In this sample, 58% (62/107) were married, 26% (28/107) divorced, and the remaining 15% (17/107) were either single or widowed. Most were retired (81%, 87/107).

Nurse characteristics
Nurses (n=42) were employed on the medical-surgical units. Twenty-nine per cent were in part-time positions, 60% in full-time positions, and the remaining 11% in temporary part-time or full-time positions. Nurses in this sample had worked for an average of 11 years, with a range of work experience from one month to 33 years. In this hospital, patient assignment is done on a shift-to-shift basis; there is no primary nurse assignment. A single clinical nurse specialist in palliative care provides important support to patients and families.

End-of-life discussions
In the first part of the interview, patients were asked the questions listed in Table Two. Most patients (77%, 89/116) had thought about the treatments they would wish to receive if they developed a life-threatening complication. However, similar to what is recorded in the patient record, only 37% of patients (43/116) reported having had these discussions. Most are willing to discuss these issues with their physician. Those patients who wished not to discuss end-of-life care stated that they would discuss these issues with others (i.e., the family physician) or that they did not feel there was a need to discuss them now. Very few discussions about end-of-life care with the nurse or other health care providers were reported (n=18, 16%). However, many patients expressed a willingness to talk with nurses. Over one-half of the sample reported that they had some form of advance directive, usually located outside the hospital (n=66, 57%).

Preference for decisional role
In this sample, there was no difference with respect to desired decisional role between patients with cancer and those without (see Table Three). The preferences for role varied. The majority of patients expressed a desire for a shared or a more active role in making decisions about life-sustaining treatments. Patients provided some very clear comments about their views. A patient who expressed the desire for a shared role reported: "It makes more sense - I need to have the discussion between the doctor and myself as he would know the best treatments for me. He is a professional and could tell me what option was best and I would respect his/her opinion."
A patient who expressed the desire for a more active role reported: "This is my body and my decision. I want control - living and dying is up to the individual." Fewer patients, but still a substantial number, wished the physician to take more of a role. They stated: "I am not a doctor - I am unable to make that decision - he must know what he is doing." We also asked what role patients would wish their family member to assume if they were not able to participate. The same trend of responses was noted (see Table Three).

Table Three: Patient preferences for decisional role

Patient preferences for own role | Cancer | Non-cancer
To leave decisions to their doctor | 5 (10%) | 7 (12%)
Have the doctor make the final decisions but seriously consider their opinion | 4 (8%) | 6 (11%)
Have the doctor share responsibility for decisions | 16 (32%) | 21 (37%)
To make the final decisions after seriously considering their doctor's opinion | 17 (34%) | 13 (23%)
To make the decisions | 8 (16%) | 10 (18%)

Patient preferences for family member's role | Cancer | Non-cancer
Leave decisions to their doctor | 5 (10%) | 9 (16%)
Have the doctor make the final decisions but seriously consider their opinion | 1 (2%) | 7 (12%)
Have the doctor share responsibility for decisions | 19 (38%) | 17 (29%)
Make the final decisions after seriously considering their doctor's opinion | 19 (38%) | 15 (26%)
To make the decisions | 6 (12%) | 9 (16%)

Providers' awareness of preferences
The provider responses followed a pattern similar to that of the patients (see Table Four). However, we provided an opportunity for the providers to indicate that they could not determine patients' preferences. Forty-six per cent of the nurses in the sample indicated that they did not know the patient well enough to determine preferences for decisional role with respect to decisions about end-of-life care. Fewer physicians reported this "unawareness"; however, fewer physicians responded, which may indicate that unresponsiveness is similar to unawareness. Nurses commented that "they were only just assigned the patient," "they did not think they knew the patient well enough to discuss these issues," or "the patient was stable now and there was no need to talk about these issues." Physicians also stated that the patient was currently stable and there was no need to talk about end-of-life care. At times they did not know the patient well enough, i.e., they were the "covering" oncologists for inpatients.

The degree to which each patient and nurse agreed upon the preferred role was analyzed. An active role was coded if patients or providers indicated that the patient wished to decide, with or without physician input; the collaborative role included the shared category; and the passive role included the categories in which the patient indicated that they wished the physician to decide, either on his/her own or after consideration of the patient's opinion. This categorical breakdown is similar to the one used by Degner and Sloan (1992) in their categorization after the unfolding of preferences using the card sort technique. Nurses agreed with patients 19% of the time; however, when nurses assessed patient preferences, the agreement was 38% (8/21). Patients reported more of a preference for an active role in comparison to a passive role. However, the actual discrepancy was small (i.e., a difference of one level).
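For readers who wish to see this coding made concrete, the following sketch (Python, with entirely hypothetical response data; the response wordings are paraphrased from the five options above) shows one way the five options can be collapsed into the three roles and simple percent agreement computed between patient and nurse ratings.

```python
# Illustrative sketch (hypothetical data): collapsing the five response
# options into active / collaborative / passive roles and computing
# simple percent agreement between patient and nurse ratings.

ROLE_MAP = {
    "doctor decides": "passive",
    "doctor decides, considers my opinion": "passive",
    "shared responsibility": "collaborative",
    "I decide, considering doctor's opinion": "active",
    "I decide": "active",
}

def percent_agreement(patient_responses, nurse_responses):
    # Pair each patient rating with the nurse's rating for the same patient,
    # skipping cases where the nurse answered "cannot determine" (None).
    pairs = [
        (ROLE_MAP[p], ROLE_MAP[n])
        for p, n in zip(patient_responses, nurse_responses)
        if n is not None
    ]
    agree = sum(p == n for p, n in pairs)
    return agree / len(pairs)

# Hypothetical example with 5 patient-nurse pairs.
patients = ["shared responsibility", "I decide", "doctor decides",
            "shared responsibility", "I decide, considering doctor's opinion"]
nurses = ["shared responsibility", "doctor decides", None,
          "shared responsibility", "shared responsibility"]
print(f"agreement: {percent_agreement(patients, nurses):.0%}")  # 50% on this toy data
```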
Most nurses (95%, n=44) and 36% of the physicians (n=16) reported that they had had no discussions with patients about life-sustaining treatments. Twenty-three per cent of the physicians (n=10) reported that they had discussed life-sustaining treatments with the patient, and the patient agreed that they had done so. A substantial proportion of the physician population (30%, n=13) reported either that they had talked with the patient and the patient said they had not or, conversely, that the physician had had no discussions and the patient said they had. Overall, in this sample of patients there was a paucity of communication around end-of-life treatments.

Summary of findings
Patients near the end of life differ with respect to role preference; however, most (80%, n=80) prefer a shared process and active involvement in the decision-making. There were no differences in role preference between patients whose primary diagnosis was cancer and patients whose primary diagnosis was non-cancer related. Nurses (and physicians) in this acute care setting were not aware of, or misinterpreted, patients' preferences for decisional role. Common themes emerged: the patient was not critical enough, the provider was only just assigned, role ambiguity, and a lack of communication processes.

Study strengths and limitations
The major strength of this study is that our sample accurately reflects patients who are near the end of life, as over 50% of patients enrolled in the first six months have expired. The second strength of this study is the use of comparison groups. In many hospitals, patients with cancer are often admitted with complications related either directly or indirectly to their cancer or to other underlying conditions. Therefore, it was important to determine the similarities and differences in seriously ill hospitalized patients. Finally, this study reflects the real-life world of the providers and the contextual influences of a tertiary teaching hospital, including multiple caregivers and patient assignments.

The major limitation of this study is the use of a cross-sectional survey to measure a complex process such as decision-making. We focused on certain aspects (i.e., decisional role) of decision-making at one point in time. While this produced some important findings, further longitudinal research could explore the influence of important determinants of effective decision-making during the end-of-life phase. Indeed, during the interviews the research assistants often commented on the "richness" of some of the interviews. Finally, the provider sample size is small. Data will continue to be collected until there is a large enough sample to generalize the findings.
Discussion
The findings of this study show that, in this sample of seriously ill hospitalized patients, most have thought about and are willing to discuss end-of-life treatments with both physicians and nurses, yet many have not. Nurses were not comfortable discussing these issues, as they perceived this to be the physician's role and they were only just assigned the patients. The physicians often stated that "someone else" should do this or that the patient was not critical enough at this point. The research literature reports concerns about the late and inappropriate timing of end-of-life discussions, in particular referrals to palliative care or the institution of EOL orders. The findings from this study support this concern. Unfortunately, most patients with the diagnostic conditions and criteria used in this study died within six months. Providers do not know if this is the sentinel admission that may be the patient's last; thus, we should engage in end-of-life discussions before the critical end points of uncontrollable pain or symptoms or inevitable death. We have prognostic criteria and willing patients (and families) but, consistent with the acute care culture, we wait until there is a crisis. Thus, there is a need to focus end-of-life care beyond the "very end of life."

Patient preferences for role varied, but many patients expressed a desire to share in some way in the information exchange and deliberation and in assuming decisional responsibility. This was not what we expected, as we hypothesized that patients, because of their serious illnesses, would defer responsibility to the care provider. There are two possible explanations for this finding. Firstly, the decision to end life is "high stake," and thus patients (and families) more than likely feel strongly about how they wish this stage of life to unfold. They want to be involved and heard. Why do some patients willingly choose a passive role? Is this their desire, or a reaction to their feelings of vulnerability and loss of control, or, conversely, could it be related to a sound trust in the decisions made by the physicians and others? It does not seem to be related to the severity of their illness or their inevitable death. Secondly, patients may perceive that there are no real options - either life or death. This is not the case with other medical or health care decisions. Furthermore, many of these patients had chronic conditions and were knowledgeable about their own condition and their experience. They could make an informed decision.
The majority of nurses in this sample either were unaware of or misinterpreted the patients' preferences. Based on position papers and policy statements, we assumed that a nurse would function as a clarifier, advocate, and mediator for patients with respect to decisions about end-of-life care; however, in this study, in this acute care setting, this was not the reality. In this hospital, nurses are assigned to patients on a shift-to-shift basis, and communication about patient needs and care often focuses on immediate needs; there is little emphasis, and perhaps little opportunity, in a shift assignment to proactively discuss care issues that are not directly related to immediate care needs. However, many hospitals employ a number of strategies to address some of these gaps: discharge rounds, palliative care specialists, and advance care planning. Unfortunately, as evident in the results of this study, these are administered to a few (i.e., few palliative care consults for cancer patients and none for non-cancer patients) or very close to the end of life. It is concerning that some nurses are abdicating all of the responsibility for discussions about end-of-life care to the physician. Nurses do have a professional role and mandate in this regard, and hospital (and other) professional administrators need to provide the necessary supports for nurses to engage in this care. Oliverio and Fraulo (1998) offered some suggestions based on their experience in the SUPPORT trial. They recommended that (a) death needs to be understood as natural and inevitable; (b) discussions about end-of-life care issues need to occur early, in all settings, and be communicated thoroughly; (c) nurses need to be aware of the burden that family members experience when participating in end-of-life decisions and intervene to minimize the burden; (d) there is a need to consider the creation of cultures (and perhaps units) that support care near the end of life; and (e) patients need to be reassured that they will receive quality care regardless of decisional preferences. They also recommended that a role similar to the SUPPORT nurse be implemented in hospitals. We would recommend the development and evaluation of multi-faceted strategies to improve care near the end of life. This could include heightening awareness about end-of-life issues; increasing nursing knowledge and skill with respect to quality care issues near the end of life; supporting and mentoring nurses in patient advocacy roles; establishing methods of communication that are reliable and feasible; and establishing strategies through which both providers and patients can engage, as they desire, in important decisions about care.

Historically, nurses have provided compassionate care to dying patients and their families. We need to extend this care to patients as they approach the end of life. Patients are willing to be involved. Nurses in the acute care setting need to incorporate end-of-life care processes into the repertoire of knowledge and skilled care that they normally provide to seriously ill hospitalized patients. The challenge for nurses (and physicians) is to provide this care in an acute care environment that is ever-changing, complex, and treatment-oriented.

Joan Tranmer, RN, PhD, is the director of nursing research at Kingston General Hospital and is the co-principal investigator, along with Dr. D. Heyland, of the End of Life Research Working Group. Dr.
Heyland, MD, FRCPC, is a medical intensivist and a career scientist with the Ontario Ministry of Health. This research was generously supported by the Oncology Nursing Society and the Clare Nelson Bequest Fund of Kingston General Hospital.

Table One: Patient characteristics of sample. *Data only available on patients enrolled in the first six months or those who expired before July 2000.
Weighted residual network for SAR automatic target recognition with data augmentation

Introduction: Decades of research have been dedicated to overcoming the obstacles inherent in synthetic aperture radar (SAR) automatic target recognition (ATR). The rise of deep learning technologies has brought a wave of new possibilities, demonstrating significant progress in the field. However, challenges such as the susceptibility of SAR images to noise, the requirement for large-scale training datasets, and the often protracted duration of model training still persist.

Methods: This paper introduces a novel data augmentation strategy to address these issues. Our method involves the intentional addition and subsequent removal of speckle noise to artificially enlarge the training data through noise perturbation. Furthermore, we propose a modified network architecture named weighted ResNet, which incorporates residual strain controls for enhanced performance. This network is designed to be computationally efficient and to minimize the amount of training data required.

Results: Through rigorous experimental analysis, our research confirms that the proposed data augmentation method, when used in conjunction with the weighted ResNet model, significantly reduces the time needed for training. It also improves SAR ATR capabilities.

Discussion: Compared to the existing models and methods tested, the combination of our data augmentation scheme and the weighted ResNet framework achieves higher computational efficiency and better recognition accuracy in SAR ATR applications. This suggests that our approach could be a valuable advancement in the field of SAR image analysis.

Introduction
Due to its ability to operate independently of atmospheric and sunlight conditions, synthetic aperture radar (SAR) offers advantages over optical remote sensing systems. Automatic target recognition (ATR) is a crucial application of SAR systems. Traditional techniques relied on handcrafted features such as the shape, size, and intensity of objects in the images (Oliver and Quegan, 2004). However, these techniques faced limitations, as they required manual feature extraction and were susceptible to variations in conditions, object orientations, and configurations (Wu et al., 2023a; Yuan et al., 2023). In recent years, numerous approaches have emerged with the advancement of learning algorithms such as generative neural networks, multilayer autoencoders (Wu et al., 2022), long short-term memory (LSTM), and highway unit networks (Deng et al., 2017; Lin et al., 2017; Song and Xu, 2017; Zhang et al., 2017). However, it is important to note that even state-of-the-art machine learning algorithms may encounter challenges when applied to SAR ATR, such as the limited availability of training samples and the issue of model overfitting.

To address these challenges, Chen et al. (2016) introduced all-convolutional networks (A-ConvNets) as a solution, reducing the number of free parameters in deep convolutional networks and thus mitigating the overfitting problem caused by limited training images. Furthermore, several SAR image data augmentation methods have been proposed in recent years, such as the works by Zha (1999), Ding et al. (2016), Wagner (2016), Xu et al. (2017), and Pei et al. (2018a), aiming to tackle the issue of limited training data.
In order to enhance the training data for SAR target recognition, several methods have been proposed. Zha (1999) suggested generating artificial negative examples by permutating known real SAR images to increase the dataset size. Wagner (2016) utilized positive examples to improve robustness against imaging errors. Pei et al. (2018a) developed a multi-view deep learning framework that generates a large amount of multi-view SAR data for training. This approach expands the training dataset by incorporating the spatial relationships between target images, resulting in improved recognition accuracy. Additionally, techniques such as suppressing speckle noise through fusion filters (Xu et al., 2017) and adding simulated speckle noise with varying parameters to training samples (Ding et al., 2016) have been employed to enhance SAR image data.

Among deep learning networks, convolutional neural networks (CNNs) appear to be the most popular choice for SAR target recognition (Chen et al., 2016). However, severe model overfitting related to deep CNNs in SAR ATR was observed, leading the authors to propose an alternative solution, all-convolutional networks (A-ConvNets), to reduce the number of free parameters. A-ConvNets consist of sparsely connected layers instead of fully connected layers, providing a means of adjusting the model training process by improving the network architecture.

There have been additional studies combining CNNs with assistant approaches, particularly in the context of data augmentation (Zhang et al., 2022; Wu et al., 2023b). The data augmentation methods used in SAR ATR can be broadly categorized into spatial information-related methods (Wagner, 2016; Pei et al., 2018a) and speckle noise-related methods (Xu et al., 2017). For the spatial information-related approaches, Pei et al. (2018a) proposed a multiview deep learning framework that generates a large amount of multiview SAR data, including combinations of neighboring images with different azimuth angles but the same depression angle. By expanding the training dataset through this multiview SAR generation system, the spatial relations among target images are taken into account, resulting in higher model accuracy. Another typical method involves generating artificial images through distortion and affine transformation (Wagner, 2016).

Regarding the approaches related to speckle noise, Xu et al. (2017) proposed a data augmentation technique utilizing a fusion filter-based noise suppression approach. This approach aims to address the low recognition rate and low robustness of traditional classification methods toward speckle noise. Other works have also focused on incorporating speckle noise characteristics into data augmentation techniques (Chierchia et al., 2017) and CNN models (Ma et al., 2019). Researchers are also seeking to modify traditional CNN structures to better cater to SAR ATR requirements. These efforts include altering the learning parameters (Pei et al., 2018b), optimizing the network structure, and integrating speckle noise-related factors during model training (Kwak et al., 2019). In the latter work, the speckle noise was first suppressed using the fusion filter, and the noise-suppressed images were then used for network training to enhance model accuracy.
In SAR ATR tasks, CNNs have been extensively applied due to their effectiveness. Neural network structures such as convolutional highway units have been employed to train deeper networks with limited SAR data (Lin et al., 2017). However, it is important to consider the special characteristics of SAR images and adjust network models accordingly.

Although existing SAR ATR works have primarily utilized machine learning frameworks, particularly neural networks, and made significant efforts in adapting SAR images to network models, SAR images require special attention due to their uniqueness as remote sensing data. For instance, while the application of deep convolutional highway units demonstrated promising results in training deeper networks with limited SAR data, the introduction of extra parameters and the potential invalidation of layers due to shortcut connections need to be considered (Lin et al., 2017).

The literature has shown that data augmentation, particularly noise-related methods, can improve model accuracy (Ding et al., 2016). Some work has been done to simulate and incorporate speckle noise with different parameters into the training samples (Ding et al., 2016). However, evaluating handcrafted images against ground-truth data and predicting real-world recognition processes presents challenges. It is also important to consider image samples with noise cancellation in addition to noise addition, as both can contribute to the network training process.

Furthermore, to address the limitations of the CNN structure, other improvements can be considered in terms of the training process. CNNs are known for their strong feature extraction capability, resulting in success in image processing-related areas. However, when applying CNNs to SAR ATR, it is crucial to address the limited quantity of ground-truth images, which are more difficult to acquire than optical RGB-format images (Hochreiter and Schmidhuber, 1997; He et al., 2016). Overfitting can become a problem when training CNN models on SAR data.

Motivated by these considerations, this paper proposes a modified version of the residual network (ResNet) for SAR ATR, incorporating data augmentation to enhance recognition accuracy. Specifically, a residual strain control is introduced to modify the ResNet structure proposed by He et al. (2016), which has demonstrated superior training depth and accuracy compared to other CNNs. The proposed modification reduces training time, and the SAR image dataset is enlarged by both canceling and adding speckle noise, leading to improved recognition accuracy. Experimental results show that the proposed weighted ResNet, combined with data augmentation, enhances computational efficiency and recognition accuracy.

The main contributions of this paper can be summarized as follows:
1) This paper proposes a data augmentation method related to speckle noise in SAR images, which enhances the size and quality of the SAR image dataset. This augmentation, which involves both the addition and removal of noise, results in a more robust and accurate CNN model for SAR ATR.
2) A weighted ResNet is proposed, which incorporates a unique residual strain control factor in its framework. By adjusting the residual strain of each weight layer, the weighted ResNet enhances the model's computational efficiency, accuracy, and convergence speed, offering a major step in model optimization.
3) This paper presents comprehensive experiments to validate the effectiveness of the proposed algorithm. It further compares the weighted ResNet with other prominent CNNs, verifying its superiority in terms of training depth, model accuracy, and accelerated convergence.

The rest of the paper is organized as follows: Section 2 presents the proposed data augmentation method based on noise removal and addition. Section 3 provides details on the design of the modified residual network. Section 4 presents experimental results, and Section 5 presents the conclusions.

The weighted ResNet structure includes a residual strain control factor added to the last layer of each shortcut unit. Compared with other CNNs, the improved network structure has advantages in terms of training depth and model accuracy, as well as accelerated convergence compared to the original ResNet. For data augmentation, an approach incorporating speckle noise addition and cancellation is proposed, resulting in an expanded dataset encompassing both ground-truth and noisy samples. By rearranging the training and test datasets, efficient data augmentation and improved network model accuracy in SAR ATR are achieved compared to other methods.

Data augmentation methodology
In this section, we present a data augmentation method based on noise perturbation. More precisely, we augment the dataset by both canceling and adding noise.

Speckle noise in SAR images
It is known that SAR imaging suffers from speckle noise. Assuming that the radar works in single-look mode, the observed scene can be modeled with multiplicative noise as

I = s · n,    (1)

where I represents the observed intensity, s is the radar cross section (RCS), and n denotes the speckle noise. The amplitude of the RCS obeys an exponential distribution with unit mean, and the speckle noise is a kind of multiplicative noise. Hence, to generate a SAR image without speckle noise, we first obtain the speckle noise estimate by dividing the ground-truth image by the RCS estimate as

n̂ = I / ŝ,    (2)

where ŝ represents the RCS estimate obtained by applying the median filter.

Noise-based data augmentation
Unlike existing data augmentation approaches, we propose to expand the dataset via noise suppression as well as noise addition. Following (1) and (2), it is not difficult to imagine that we can utilize the estimated speckle noise n̂ to enlarge the training dataset by adding speckle noise through multiplication and suppressing it through division. By doing so, it is possible to obtain lower signal-to-noise ratio (SNR) images and higher SNR images, which can be expressed as

I_low = I · n̂,  I_high = I / n̂.    (3)

For data augmentation, both the lower SNR images and higher SNR images are taken as effective support.
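To make the augmentation procedure concrete, the sketch below gives a minimal implementation of equations (1)-(3). It assumes single-look intensity images, uses a median filter for the RCS estimate as described above, and draws fresh exponentially distributed speckle for the noise-added variant (as done in the experiments later in the paper); the function and variable names are our own.

```python
import numpy as np
from scipy.ndimage import median_filter

def augment_with_speckle(intensity, filter_size=5, noise_mean=1.0, rng=None):
    """Minimal sketch of the noise-perturbation augmentation.

    intensity : 2-D array of SAR intensities (single-look model I = s * n).
    Returns (higher-SNR image, lower-SNR image).
    """
    rng = np.random.default_rng() if rng is None else rng
    eps = 1e-8  # guard against division by zero

    # RCS estimate via median filtering, then speckle estimate n_hat = I / s_hat.
    s_hat = median_filter(intensity, size=filter_size)
    n_hat = intensity / (s_hat + eps)

    # Higher-SNR variant: cancel the estimated speckle (division).
    i_high = intensity / (n_hat + eps)

    # Lower-SNR variant: multiply by exponentially distributed speckle.
    speckle = rng.exponential(scale=noise_mean, size=intensity.shape)
    i_low = intensity * speckle

    return i_high, i_low

# Example on a random 100 x 100 "image" (targets are center-cropped to this size).
img = np.random.default_rng(0).exponential(1.0, size=(100, 100))
high, low = augment_with_speckle(img, filter_size=5, noise_mean=1.0)
```

Varying the median-filter window (e.g., 3 x 3, 5 x 5, 7 x 7) and the noise mean (e.g., 0.5, 1.0, 1.5) yields the family of variants used to expand the dataset.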
Deep residual network design
In this section, we present the weighted ResNet structure, in which the shortcut block units are modified by introducing a residual strain control parameter in the second convolutional layer. The weighted ResNet requires less training time than its original counterpart.

Network structure unit
As evaluated in the ILSVRC 2015 classification task, ResNet achieved a 3.57% error rate on the ImageNet test set, winning 1st place (He et al., 2016). Equipped with shortcut connections, ResNet excels in both learning depth and recognition accuracy compared to plain convolutional neural networks. The essential idea of ResNet is that it learns a residual function instead of the underlying mapping. The residual function, defined as the difference between the underlying function and the original intensity function (the input), automatically includes a reference to the input. In common CNN networks, by contrast, the mapping function is learned anew in the stacked layers. In other words, the layers are reformulated as residual functions with reference to the layer inputs, rather than learning unreferenced functions.

ResNet may have overwhelming advantages, but problems also clearly exist. While conducting experiments with popular networks, we found that ResNets are less likely to converge even after other networks are well trained. This computational shortcoming drove us to explore the reason behind it and left room for improvement. Consequently, we introduce a weighted ResNet variant in our MSTAR data implementation. For a clearer explanation, the supporting theory and analysis follow the introduction of the network structure.

Figure 2 shows a single shortcut connection of the weighted ResNet, where the fourth and later layers are skipped for the sake of simplicity. The underlying mapping function H(x) is defined as

H(x) = F(x, {W_i}) + W_s x,    (4)

where x denotes the input intensity and W_s is a linear projection that matches the dimensions of x, with the modified residual function F(·) given by

F(x, {W_i}) = c_r · σ(W_2 σ(W_1 x)),    (5)

where σ(·) stands for the rectified linear unit (ReLU) function, the biases are omitted for simplicity, and c_r ∈ [−0.5, 0.5] denotes the residual strain control parameter. As can be seen from Figure 2, the residual unit is modified by adding a residual strain control after the ReLU process. During model training, the control parameter c_r is constrained by

c_r ← max(−0.5, min(0.5, c_r − η ∇c_r)),    (6)

where η is the learning rate and ∇c_r is the gradient with respect to c_r. Figure 3 shows a single shortcut connection of the proposed improved ResNet. Again, compared to the basic ResNet, the main difference is the added residual strain control unit. In this figure, the two block types are termed the identity block (IB) and the transformational block (TB), respectively.

Weighted ResNet structure
In brief, the weighted ResNet involves 20 convolutional layers, with an average pooling layer and a dense layer as the final two layers. The main architecture and flow chart of the weighted ResNet are given in Table 2 and Figure 4, respectively.

In the weighted ResNet, a weight factor, denoted c_r, is introduced into the residual connections of the traditional ResNet architecture. This mechanism can assign different weights to different layers or features depending on their contribution to the final output. This allows important features to have more impact on the output and less significant features to have less impact.

The intention behind introducing a weighting mechanism varies depending on the specific application or task at hand. For example, in some contexts, introducing weights can help deal with class imbalance in the dataset. In other cases, it may be used to increase model robustness against noise or other irregularities within the data. The weights may be learned during training, using backpropagation and gradient descent, or may be assigned based on preset criteria defined by the researchers. The methodologies vary across different incarnations of weighted ResNet models.
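The following PyTorch sketch illustrates one shortcut unit with the residual strain control described above. The use of batch normalization and the exact layer ordering are assumptions on our part (the description specifies only the two convolutions, the ReLU, c_r, and the projection W_s); clamping c_r inside the forward pass is one simple way to keep it within [−0.5, 0.5].

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightedResidualBlock(nn.Module):
    """Sketch of one shortcut unit with a residual strain control c_r.

    The residual branch ends in a ReLU, is scaled by a learnable c_r
    constrained to [-0.5, 0.5], and is merged with the (optionally
    projected) shortcut, following equations (4)-(6).
    """

    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_ch)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        self.c_r = nn.Parameter(torch.tensor(0.5))  # residual strain control

        # W_s: 1x1 projection when dimensions change (transformational block),
        # identity mapping otherwise (identity block).
        if stride != 1 or in_ch != out_ch:
            self.shortcut = nn.Conv2d(in_ch, out_ch, 1, stride=stride, bias=False)
        else:
            self.shortcut = nn.Identity()

    def forward(self, x):
        # Residual branch: conv -> ReLU -> conv -> ReLU, then scale by c_r.
        residual = F.relu(self.bn2(self.conv2(F.relu(self.bn1(self.conv1(x))))))
        # Clamping keeps c_r inside [-0.5, 0.5] at every forward pass.
        return self.c_r.clamp(-0.5, 0.5) * residual + self.shortcut(x)
```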
Residual strain control for ResNet modification
Although greater network depth and higher model accuracy are well established, ResNets suffer from slow convergence. We may at first find the outstanding learning ability surprising, but it prompts further thinking and exploration after implementation. The pain point arises when the residual information and the underlying information are merged. As observed in the Basic and Basic Inc architectures, ReLU is applied to the residual information channel before the merger, which eventually hampers the seamless integration of the two channels. For the underlying channel, the value is in the range (−∞, +∞), whereas the value set of the residual channel is limited to positive values after the ReLU operation. The raw merger operation in original ResNets therefore leads to a bias away from the underlying channel, which suppresses recognition. This not only limits the representation ability of the network but also slows down the overall training process. Consequently, ResNets inevitably fall behind other CNNs in convergence.

To keep these benefits while speeding up training, the residual strain control parameter plays its role. Taking values in the range [−0.5, 0.5], the residual control parameter c_r shifts the residual channel to both negative and positive values, which in turn results in a better fusion of the two channels. Significant improvements in convergence are achieved in the modified ResNets after the multiplication by c_r.

It is worth noting that our optimization method does not add any extra structures or computational operations, thus maintaining the computational complexity, measured in FLOPS, at the same level as the base ResNet model.

Network training
Given an image dataset with S training samples and the corresponding ground-truth labels (x_i, y_i), i ∈ S, we adopt a training cost function that combines the cross-entropy loss −Σ_{i∈S} log p_{y_i} with two L2 regularization terms weighted by λ_1 and λ_2, where p_{y_i} represents the predicted probability for the true class of sample i and θ denotes the trainable parameters of the network.

On the basis of the cross-entropy loss, the cost function is thus equipped with two L2 regularization terms: one corresponding to the model parameters θ, and the other applied in the gradient computation, which has been discussed in depth in previous work. In this work, we employ one of the most popular gradient update techniques, momentum stochastic gradient descent (SGD) (Ruder, 2017; Tian et al., 2023), to optimize the modified residual network; it is discussed briefly in this subsection. It is also important to note that the residual strain control parameter c_r is updated during the training process using error back-propagation.
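A minimal sketch of one training step is given below, combining the cross-entropy loss with an L2 penalty on the parameters and the momentum SGD update described in the next paragraph. The second regularization term of the cost function is omitted for brevity, all hyperparameter values are illustrative, and the projection of c_r back into [−0.5, 0.5] follows equation (6).

```python
import torch

def training_step(model, images, labels, optimizer, lam1=1e-4):
    """Sketch of one optimization step for the weighted ResNet.

    Cross-entropy loss plus an L2 penalty on the model parameters (lam1);
    momentum SGD is assumed to be configured in `optimizer`. After each
    step, every residual strain control parameter c_r is projected back
    into [-0.5, 0.5].
    """
    logits = model(images)
    loss = torch.nn.functional.cross_entropy(logits, labels)
    loss = loss + lam1 * sum((p ** 2).sum() for p in model.parameters())

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()  # theta <- theta + (mu * previous_update - alpha * grad)

    # Projected gradient step on c_r: enforce c_r in [-0.5, 0.5].
    with torch.no_grad():
        for name, p in model.named_parameters():
            if name.endswith("c_r"):
                p.clamp_(-0.5, 0.5)
    return loss.item()

# Example optimizer matching the experimental setup described later
# (momentum coefficient 0.9; the learning-rate schedule is handled elsewhere):
# optimizer = torch.optim.SGD(model.parameters(), lr=1.0, momentum=0.9)
```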
SGD with momentum is rooted in the physical law of motion, helping the optimizer pass through local optima. By linearly combining the gradient and the previous update, momentum maintains the update at each iteration, keeping the update steps stable and avoiding chaotic jumps. The following formulas show how SGD with momentum works:

Δθ_{i+1} = μ Δθ_i − α ∇L(θ_i),
θ_{i+1} = θ_i + Δθ_{i+1},

where θ_i denotes the model parameters to be estimated, Δθ_i is the i-th gradient update, μ is the momentum coefficient, α is the learning rate, and ∇L(θ_i) represents the gradient of the cost function. Compared with plain SGD, because of the accumulated velocity, the momentum SGD step will be larger than the constant SGD step. Thus, this trick not only helps to reach a global minimum but also increases robustness.

Experiments

Dataset
We evaluate our proposed method using a benchmark dataset from the Moving and Stationary Target Acquisition and Recognition (MSTAR) program (Zhao and Principe, 2001), published by the US Defense Advanced Research Projects Agency and the US Air Force Research Laboratory. The dataset consists of X-band SAR images of different types of military vehicles (e.g., the APC BTR-60, Main Tank T72, and Bulldozer D7) with elevation angles of 15° and 17°. The image resolution is 0.3 m × 0.3 m; some example images of the different classes are shown in Figure 5.

To train the weighted ResNet, all the images used in our experiments are cropped to 100 × 100 pixels, with the target located at the center. We primarily use eight types of target images, and the numbers of images used for training and testing are listed in Table 1. The cropped image dataset contains 8 types of military ground targets, namely T62, BRDM2, BTR-60, 2S1, D7, ZIL131, ZSU-234, and T72. Images of each target are collected at depression angles of 15° and 17°, with the target rotated through a full 360°. We note that one typically uses images with a depression angle of 15° for training and images with 17° for testing. However, this may limit the recognition ability of the trained deep learning network because of the spatial information that would otherwise be missed. We therefore conduct training experiments with images at both 15° and 17° depression angles.

In order to expand the capacity of the original dataset by removing and adding noise (with different filtering or noise distribution parameters), we use the cropped images of the 8 targets to generate image variants, with 400 images randomly selected for each target.

For illustration purposes, we take one of the T62 SAR images as an example to demonstrate the noise removal and addition behaviors. Figures 6A, B show the original optical image and the SAR image. Figures 6C-E show the noise-removed images generated through median filtering with templates of 3 × 3, 5 × 5, and 7 × 7, respectively. Figures 6F-H depict the noise-added images with multiplied exponentially distributed speckle noise with means (denoted M) of 0.5, 1.0, and 1.5, respectively. Finally, the complete set of noise-canceled and noise-added images generated from the cropped images is listed in Table 1. According to our design, the SSIMs for the filters of both noise removal and noise addition are set to 90%, 82.5%, and 75%, respectively.

Figure 6: The original optical image (A) and SAR image (B), and noise-perturbed SAR images (C-H).
Classification results
We first conducted experiments to validate our proposed speckle noise-based method. The confusion matrices of our weighted ResNet with and without data augmentation are given in Tables 2 and 3 for comparison. The classification accuracy of the weighted ResNet using non-augmented training data is 94.56% (7,269/7,680); Table 2 shows the corresponding confusion matrix, in which each row represents the actual target class and each column denotes the class predicted by the weighted ResNet. The classification accuracy of the weighted ResNet using augmented training data is 99.65% (7,653/7,680); Table 3 shows the corresponding confusion matrix, with rows and columns defined in the same way.

The classification accuracy of the weighted ResNet with data augmentation thus reaches 99.65%, an increase of almost 5.1%. Without augmentation, the weighted ResNet has relatively lower classification performance on ZIL131 (92.71%) and BTR-60 (92.81%), followed by T62 (93.23%). After the dataset extension, the classification accuracy on ZIL131 rises to 98.96%. A similar improvement is seen for BTR-60 and T62, each with nearly a 5% increase. This indicates that the speckle noise perturbation-based data augmentation method is valid. Moreover, the recognition rate of armored personnel carriers is relatively low, which suggests that the distributions of those targets lie close together in the feature space. The above results are consistent with the trends reported in Kang et al. (2017), a contribution to SAR ATR feature extraction. Further, Figure 7 shows some instances of misclassification, with one example selected from each category; A→B denotes a case where a sample with label A is incorrectly classified as B by the model.

Network performance comparison
In our experiments on the weighted ResNet and ResNet, the following setup is applied: the mini-batch size is 128, the number of epochs is 160, the dynamic learning rate is 1.0 for the first 80 epochs, 0.1 for the next 40 epochs, and 0.01 for the remaining epochs, and the momentum coefficient starts from 0.9. To illustrate its advantages, the weighted ResNet is compared to its original counterpart (He et al., 2016), SVM (Zhao and Principe, 2001), A-ConvNets (Chen et al., 2016), Ensemble CNN (Lin et al., 2017), other CNNs (Morgan, 2015; Ding et al., 2016; Furukawa, 2017), as well as two other deep neural networks, AlexNet (Krizhevsky et al., 2012) and VGG16 (Simonyan and Zisserman, 2014), for SAR image classification. As shown in Table 4, there is a 0.81% accuracy rise for CNN-3, nearly 3.57% for AlexNet, and an increase of over 4% for VGG16, ResNet, and the weighted ResNet. Table 4 clearly shows that ResNet has a higher recognition accuracy than the other networks. Other modified networks without data augmentation can achieve accuracy over 99% (Chen et al., 2016; Lin et al., 2017).
Discussion and conclusion

In this paper, we presented a weighted ResNet model for ATR. Our method tackled problems usually associated with conventional CNN models, such as overfitting due to the constrained quantity of ground-truth images and the unique complexities presented by speckle noise in SAR images. We incorporated data augmentation and introduced a distinctive residual strain control method, which together contributed to a weighted ResNet with increased computational efficiency, boosted recognition accuracy, and faster convergence.

The data augmentation method proposed in this paper, which adds and cancels speckle noise, successfully expanded the quality and size of the SAR image dataset and made the model more resilient. This step was critical, as it provided a practical solution to the issue of scarce ground-truth images.

Our novel introduction of residual strain control to adapt the ResNet model contributed to significant improvements in model efficiency and recognition accuracy and reduced training time. It efficiently manages the residual strain of each weight layer, leading to faster convergence and improved optimization.

Experimental results demonstrated the superiority of our proposed weighted ResNet model when compared with other prominent CNNs. The accelerated convergence, remarkable training depth, and improved model accuracy showcase our model's effectiveness and robust capabilities in SAR ATR.

While our research and results are promising, the continuous advancement of AI and deep learning applications will consistently present avenues for growth. Future work can focus on further enhancement of the weighted ResNet model for improved model stability and generalization capabilities. Additionally, exploring more sophisticated data augmentation techniques can help to produce even more robust models capable of handling different SAR ATR scenarios. Applying the developed model to other, similar imaging techniques is also an interesting aspect to look into.

FIGURE 1. Data augmentation and network training process. Figure 1 describes the overall system of the proposed method; the whole process can in general be divided into three parts: the data augmentation process, model training, and the classification accuracy test. Following (1) and (2), the estimated speckle noise n can be used to enlarge the training dataset by adding speckle noise through multiplication and suppressing it through division, which yields lower signal-to-noise-ratio (SNR) and higher SNR image variants.

FIGURE. Comparison of the identity block (IB, left) and the transformational block (TB, right) between the basic ResNet and our proposed weighted ResNet.

FIGURE. Network architecture of the weighted ResNet.

FIGURE 7. Examples of misclassified samples in each category, with only one example selected per category. The text below each image, A→B, signifies that the expected category is (A), but the model mistakenly classified it as (B).

FIGURE. Comparison of the accuracies vs. training time.
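The augmentation summarized in the Figure 1 caption (speckle added by multiplication, suppressed by filtering) can be sketched as follows; the exponential speckle model, the median-filter sizes, and the parameter values mirror the description around Figure 6 but should be read as an illustrative reconstruction rather than the exact pipeline.

import numpy as np
from scipy.ndimage import median_filter

def speckle_augment(img, speckle_means=(0.5, 1.0, 1.5), kernels=(3, 5, 7), rng=None):
    """Generate noise-added (lower SNR) and noise-removed (higher SNR) variants of a SAR image."""
    rng = np.random.default_rng() if rng is None else rng
    variants = []
    for m in speckle_means:                        # add speckle by multiplication
        noise = rng.exponential(scale=m, size=img.shape)
        variants.append(img * noise)
    for k in kernels:                              # suppress speckle with median filtering
        variants.append(median_filter(img, size=k))
    return variants

# Hypothetical 100 x 100 amplitude crop (random values, for illustration only):
img = np.abs(np.random.default_rng(0).normal(size=(100, 100)))
augmented = speckle_augment(img)
print(len(augmented), augmented[0].shape)          # 6 variants per input image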
TABLE 1. List of noise-perturbed SAR images (data augmentation).

TABLE 2. Confusion matrix of the weighted ResNet (without data augmentation).

TABLE 3. Confusion matrix of the weighted ResNet (with data augmentation).

TABLE 4. Accuracy comparison with other methods.

Thus, no unusual signs were observed during the training process. Another reason may be the experience gained while conducting network training experiments on different network structures with large volumes of other datasets. Here we train the ResNet and the weighted ResNet without loading pre-trained models. The method is robust against noise, and momentum SGD training skips local optimal solutions.
2023-12-21T16:17:43.779Z
2023-12-19T00:00:00.000
{ "year": 2023, "sha1": "599f1e7ea88c48854dd5b8d71c9a49419f4acda7", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "b93ff5d202b785a0269c97302c71decc5ff6ebee", "s2fieldsofstudy": [ "Engineering", "Environmental Science", "Computer Science" ], "extfieldsofstudy": [ "Medicine" ] }
140924134
pes2o/s2orc
v3-fos-license
The manufacture and properties of nonwoven composites from fibres mix The application of nonwoven composites expands each year. The heat-bonded nonwovens (NW) are used for geotextile production due to the simple production cycle and low processing cost. To create the geotextile with desirable properties the mix of fibres are used. The aim of this study is to use recycled hemp, polyethylene terephthalate and polypropylene fibres for the development of new materials and technologies for nonwoven material composites (NWC) production. The series of NWC from mix of natural and synthetic fibres and series with one or two sides thermoplastic net reinforcing are manufactured. The NWC characterization parameters, water penetration, air permeability and results of physico-mechanical properties are presented. Introduction The nonwoven (NW) textile industry has grown as a broad array of engineered fibers and polymerbased products by high-speed, low-cost, innovative, value-added, fully automated processes and the ability to recycle textile products into useful products [1]. The common thermoplastic composites market is projected to grow up for 8.34% between 2017 and 2022. Increased use of thermoplastic composites in the transportation and aerospace & defence applications due to various properties offered by them, which include ease of recyclability, low curing time, high strength, and increased rigidity, among others are the factors driving the growth of the thermoplastic composites market across the globe [2]. The nonwoven composite (NWC) geotextiles are fibrous three-dimensional structures, used for different applications such as reinforcement, separation, filtration and drainage instead of coarse-grained soil due to their easy installation and gain [3]. Geotextile are generally made from a limited number of polymers (polypropylene, polyethylene and polyester). The three main properties which are required and specified for geotextile are its mechanical responses, filtration ability and chemical resistance [4]. The Rural Support Service of Latvia has planned financial support for the rebuilding and restoration of melioration systems, as well as for the construction or conversion of areas at production facilities [5,6]. The 147 km long single water supply system in Riga City also requires continuous maintenance and construction of new facilities, according to the rules for the use and maintenance of the Riga City Hydrographic Network [7]. In view of the above facts, the development of new materials for geotextile is up to date. In the previous studies the optimal composition of fibres mix and the production condition of nonwovens' composites was identified [8,9]. The aim of this research is to investigate the properties and the factors affecting NWC from fibers mix without and with reinforcing. The NWC series of the local hemp fibres variety "Bialobrzeskie" of Kraslava district, recycled polyethylene terephthalate and polypropylene fibres mix as well as series of composites with one or two side thermoplastic net reinforcing is manufactured and properties tested. Methods of Production The untreated hemp fibres are stiff and on the used laboratory carding machine it is not possible to produce a satisfactory NW web [6]. Therefore, after tests, alkali treatment method of hemp (100 g/L at 20±1 0 C temperature for 2 min) is selected [8,9]. A neutralization of pre-treated fibres with acetic acid (2 g/L) at 19 ± 1 0 C temperature for 15 min is done. 
The NW web (Table 1) is made from fibrous blend (HF-59 %; PET-23 %; PP-18 %) using laboratory carding machine MESDAN 337A (delivery velocity 10-15 m/min). After NW web formation the samples (160 x 160 mm; mass 2.6 g (designation of sample A) and 3.8 g (designation of sample B) are cut out and prepared for NWCs production in a frame without and with PP net reinforcement from one side (designation of samples A1, B1,) and both sides (designation of samples A2, B2). For thermal bonding of NWC the laboratory press Labtech Engineering ASTM LP-S-50/S at the required pressure 26 ± 1 MPa and temperature 160 ± 2 0 C is used. [15]. Ten samples of NWC (50 x 150 mm) are tested, based on the experience gained in previous research [8,9]. The resistance to water penetration of NWC is verified by hydrostatic pressure test according LVS EN ISO 9073-16:2009 [16]. For determination of the air permeability LVS EN ISO 9073-15:2008 [17] is applied. Ten NWC samples are tested from both sides and the average values of water penetration and air permeability are calculated. Results and Discussion Series of composites without netting and with one or both sides net reinforcing were manufactured and tested. The results of NWC parameters are summarised in the Table 1. Before explaining the obtained results, it must be mention, that NW samples have free arrangement of fibers and they are made manually, therefore irregularity in fibers location of further NWC is observed which can an influence on the obtained results of examined properties. NWC characteristic and properties The NWC thickness and mass without netting (Table1; samples A, B). depends on mass of the NW. The use of reinforcement decreases the thickness of one side reinforced NWCabout 16 % (A1) and 10 % (B1) due to additional influence of thermoplastic PP net on NW web bonding. Both sides reinforcement in comparison with one side is not significant. As expected, the use of reinforcing net increases the mass per area of NWC and depends on reinforcing net amount (one side, both sides). Water penetration and air permeability The water penetration -significant property for geotextiles (Table 1) for examined NWC without reinforcing is higher for samples with higher initial mass of web (A, B samples). With one layer netting, the water penetration decreases for both series samples. It can be explained with NWC thickness decrease and fibers crowding in the NW web after reinforcing. The double side netting (samples A2; B2) causes increase of water penetration in comparison with one-sided reinforcement (A1, B1). The wetting processes influence on water penetration is possible. The air permeability (Table 1) is higher (59 %) for samples with lower web initial mass and mass per area (A series). The reinforcement of NWC increases the air permeability in all cases. The expected correlation between the water penetration and air permeability is not confirmed. The physico-mechanical properties of NWC are important for starting an application of NWC. The influence of thickness and mass per area of NWC on tensile strength (Figure 1; samples A, B) without reinforcing is not significant. The significant increases of tensile strength, after reinforcement is observe for B series -33 % for one-sided (sample B1), 70 % for double-sided (sample B2). Mechanical properties The elongation characteristics grow remarkably (60 -66 % for A series and 81 % for B series) in all cases due to use of reinforcement ( Figure 2). 
The influence of the reinforcing method (one- or both-sides netting) on the elongation at break is smaller: 15 % (A series) and 2 % (B series).

Conclusions

• The thickness and mass per area of NWC without reinforcing depend on the initial mass of the NW web, by 5 % and 30 % respectively.
• The use of reinforcement decreases the NWC thickness: one-side netting by about 16 % (A1) to 10 % (B1); both-sides netting by 11 % (A2) to 13 % (B2).
• The use of netting increases the NWC mass per area: one-side netting by about 6 % (A1) to 7 % (B1), double-sided netting by 12 % (A2) to 13 % (B2).
• An increase of water penetration for NWC with higher web initial mass is observed: 23 % (without netting), 16 % (with one-ply netting) and 5 % (with both-sides netting).
• The air permeability is higher (by 59 %) for samples with lower web initial mass and mass per area. The reinforcement of NWC increases the air permeability in all cases.
• The physico-mechanical properties depend on the NWC characterizing properties and the reinforcing method; an increase of tensile strength of 70 % is achieved for the B series double-sided NWC.
• The use of reinforcement causes a significant increase of elongation at break: 66 % (A series) and 81 % (B series).

Acknowledgements

The financial support of the NRP of Latvia, project IMIS2, is gratefully acknowledged.
2019-05-01T13:03:55.313Z
2019-04-05T00:00:00.000
{ "year": 2019, "sha1": "36f920a13789b0579a178e23b7f2058fbf207fa4", "oa_license": null, "oa_url": "https://doi.org/10.1088/1757-899x/500/1/012027", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "5d46cb9a4cc21f73af1df0f2ff8b60f00bbecb19", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Materials Science" ] }
237780163
pes2o/s2orc
v3-fos-license
INTEGRATIVE ARABIC LANGUAGE TEACHING OF INTEGRATED ISLAMIC ELEMENTARY SCHOOLS IN SOLO RAYA Integrated Islamic Schools, which is very influential in the education system in Indonesia, stand behind this research. This research explored Arabic language teaching at the Integrated Islamic Elementary Schools in Solo Raya with different integrative system backgrounds. It aimed to answer how the language was taught and learned in Integrated Islamic Elementary Schools in Solo Raya and the learning process from Hector Hammerly's perspective. This research was a field research model with a case study approach under the constructivist research paradigm. The results show that the Arabic language is taught depending on the need in Integrated Islamic Elementary Schools in Solo Raya, and it follows the curriculum of each school. The Hector Hammerly perspective in Arabic language teaching applies cognitive, collaborative, natural, and communicative approaches. The teaching strategies were, among others, presentation, L1 usage, visual aids, practices, error correction, technological aids, evaluation (the students’ performance or teacher-andstudent quizzes to evaluate daily progress), and CA-OB (cognitive audio-oral bilingual) method. Introduction Since 1973, Arabic has become one of the International language which has been officially used as one of working language and official language of the United Nations. 1 Arabic is the holy book language of Muslims, namely the Qur'an and the Sunnah, as well as no less than 22 countries in the Middle East and Africa regions have made Arabic as national language. In addition, Arabic becomes the languages of Arabiyât education, science, diplomacy, social and economic transactions, and cultures for the majority of people in these 20th countries. 2 The position of Arabic in Indonesia is a foreign language as it is not a mother tongue and is not used in daily lives. Arabic is not completely foreign to Indonesian society, especially Muslims who at least use this language in daily worship such as in the obligatory prayer five times a day. 3 Arabic language teaching has made significant progress in the Islamic education environment in Indonesia. Arabic language teaching at Integrated Islamic Schools is a way (washilah) to raise the importance of Arabic awareness to help understand the Quran and provides provisions for exploring the sources of knowledge which are developed in the golden age of Islam, as well as fostering pride in speaking Arabic. 4 Arabic language learning is essentially aimed at forming and building students who are competent with the knowledge of Arabic which is a means of understanding the content of Qur'an and Hadith. 5 Mahmud Yunus said in his book "Metode Khusus Bahasa Arab" that the purposes of Arabic language teaching are to understand the readings in prayer and recitation of the Quran and enable Muslims to take an instruction and lesson as well as improve the ability to learn religious science. Islam comes from an authentic source in Arabic and can speak Arabic for direct communication with Muslims abroad. Arabic is a language that has become a scientific language today. 6 Hence, it is important to learn Arabic especially since young age as younger learner can learn faster and more effective than older learners. 7 The Integrated Islamic School (SIT) is basically a school that applies the concept of Islamic education based on the Quran and the Sunnah. 
The definition of Integrated Islamic School according to the concept of an Integrated Islamic School Network (JSIT) standard policy is an Islamic school organized by combining Islamic values and teachings in the curriculum structure with effective teaching and optimal and cooperative involvement between teachers and parents, as well as society to need students' characters. From its implementation, an Integrated Islamic School is defined as a school that adopts integrative approach. The implementation is done by combining general education and religious education into the curriculum. In addition, it also emphasizes integration in learning 2 Azhar Arsyad, Bahasa Arab dan Metode Pengajarannya: Beberapa Pokok Pikiran, 2 nd Ed (Yogyakarta: Pustaka Pelajar, 2008), 1. 3 Arabiyât methods to optimize the cognitive, affective, and psychomotor domains. The Integrated Islamic School also combines mind, spiritual, and body. In its implementation, it combines the involvement and active participation of the learning environment, namely schools, homes, and communities. In the development of Integrated Islamic Schools, emerge foundations or educational institutions that established Integrated Islamic School with their respective backgrounds, visions and missions, curricula and developments. Likewise, Arabic language teaching at the Integrated Islamic Schools are developed and implemented with various approaches in Arabic language teaching. However, Arabic language teaching implementation in accordance with the theoretical has not been successful. So, this research tries to answer the questions about the implementation of Arabic language teaching at Integrated Islamic Elementary Schools and the causes of unsuccessful learning implementation under the theories and answer the gaps between the ideal and reality theoretically regarding the result of analysis and its problems. This research is important to answer the questions and the results of field analysis in real and theoretical ways. In this research, the researcher took five Integrated Islamic Elementary Schools as sample of the diversity of Integrated Islamic Schools that have emerged, especially in Solo Raya. The researcher took Integrated Islamic Elementary Schools because Arabic is a typical subject of Integrated Islamic School and Elementary School so that it can be continued up to the high school level with a long study period. Integrated Islamic Elementary Schools as a place of study related to Arabic language teaching carried out adjustments and analysis of Arabic language teaching in accordance based on theories in Integrated Islamic Elementary School with the elements and criteria of the Arabic language teaching theory according to Hector Hammerly to answer the reasons for unsuccessfulness or failure and compare the gap between ideals and reality on the field. This research covers three regencies or cities in Solo Raya. The term Solo Raya was indeed coined to replace the term Subosukawonosraten which stands for combined regions, namely Surakarta, Boyolali, Sukoharjo, Karanganyar, Wonogiri, Sragen, Klaten. However, as explained above, the focus of this research is at three cities or districts in Solo Raya, namely Surakarta City, Sukoharjo Regency, and Sragen regency which are expected to represent a school with an Integrated Islamic system that are affiliated or developed by Institutions which are established by religious organizations, community organizations, and foundations in Solo Raya. 
Arabiyât The five Integrated Islamic Elementary School used the Integrated Islamic School system and integrated learning, but Arabic subjects have not used integrated learning with various obstacles, problems, shortcomings, and advantages. Integrated Islamic Schools focus on general subjects, Islamic Education (PAI) and Memorizing Qur'an (Tahfidz), and not Arabic. Because Arabic is a local content lesson and a characteristic of Islamic schools with less time, a model is needed so that learning can be affective. This research focused on Arabic language teaching in five Integrated Islamic Elementary Schools (SDIT) that are different from many developing Integrated Islamic Elementary Schools. This focus is related to its emergence, an integrated concept in all the sample aspects and its learning. With the formulation of the problem of how is the language taught and learnt in Integrated Islamic Elementary Schools in Solo Raya and how is the learning process in Hector Hammerly perspective and how to analyze. This research will look at and analyze Integrative Arabic language teaching at the Integrated Islamic Elementary Schools in Solo Raya. Method This research is categorized as filed research that is research conducted directly in the field to obtain the required data. Field research is a research conducted to detect and analyze phenomena, events, social activities, attitudes, beliefs, perceptions and thought individually and in groups. 8 According to Creswell, in qualitative research a researcher must create a complex picture, examine words, report in detail from the respondents views, and conduct studies on natural situations. 9 It is said to be qualitative because this research emphasizes Arabic language teaching processes in the Integrated Islamic Elementary Schools. This research model was a case study, which is an in depth study of particular case resulting a complete and structured description of the case. Case studies cover the entire life cycle or only certain sectors of case factors. 10 In qualitative research, Creswell classifies various approaches in research including knowledge claims (post-positivist, constructivist, emancipatory, and pragmatic), inquiry strategies (experimental, ethnographic, narrative, and mixed). 11 Philosophically, the approach to seeing social reality can be existentialist, instrumentation, phenomenological, and behavioristic. Methodology in a harmonious configuration between data search methods, knowledge claims, inquiry strategies, and Arabiyât understanding of reality. This study uses a naturalistic approach, 12 knowledge claims using a constructivist approach, 13 and the inquiry strategy uses a case study approach. 14 In conducting research and development, there are many methods used, namely descriptive, evaluative, and experimental. Descriptive research methods are used to collect data about the literature and describe current conditions, as well as efforts to describe and interpret existing data or relationships. 15 Meanwhile, the evaluation method is used to develop educational material in several stages of evaluation and review. This research also uses experimental methods, namely methods to study something by changing circumstances and paying attention to its effects on other things. 16 Apart from that, the experimental method also serves to set the situation so that the effects of the variables can be examined. 17 Meanwhile, the data collection technique is a researcher's effort to provide or collect sufficient data. 
18 Data collection techniques in this research used several methods, namely the observation, interview, and questionnaires methods. While the data analysis was carried out by following the steps, namely data reduction, data presentation, and data conclusion. 19 Integrative Arabic language teaching in Hector Hammerly's Perspective Integrative in popular scientific dictionary means unification. 20 In education world, integrative is usually associated with democratic education movements that focus on actual problems as an approach. According to Beane (1997), the integrative learning center regulates important issues in the curriculum with the wider world. Integrative will connect one problem to another so that a unit of knowledge will be developed. Knowledge provides a part with the whole (part whole relationship). 21 Hammerly (1985) argues that the lack of an integrative theory in language teaching context for various implications, and lack of adequate theory affect all activities. Hammerly assesses that the emptiness of theory results in confusion in language teaching field so that it requires theories that are adequate, explicit (firm and Arabiyât straightforward), and comprehensive. 22 Content and Language Integrated Learning (CLIL) has been said to increase not only foreign language proficiency. 23 Hector Hammerly's book entitled An Integrated Theory of Language Teaching and its Practical Consequences (1985), which was later translated into Arabic entitled An-Nazariyah at-Takamuliyah fi Tadris al-Lughoh wa Nataijuha Al-Amaliyah shows the increasing interest of countries in the world in foreign language education and learning. It is attracting attention for foreign language education and teachers for teaching it to non-speakers, consolidating its position and confirming its universality in countries that plan to establish an institution of language education center for nonspeakers and train their teachers. This book presents most of problems with the right suggestions and solutions by presenting theory with its practical support to find out its strengths and weaknesses. 24 In learning activities, there are several objectives, including general goals, specific objectives, teaching facilitates, selections, gradations, guidance, presentations, understandings, practices, integrations, variations, evaluations, reintroductions, uses, and masteries. 25 Existing goals are mutually sustainable to achieve goals in learning. In addition, teaching language also teaches the culture of language studied by paying attention to the sociolinguistic principle, evaluation principle, technology assistance principle, self-instruction principle, cultural competence principle, and observation. 26 Language teaching in the book Sintetis (1982), integrative intended by Hector Hammerly, is everything related to the process of learning foreign languages for native and second languages. Learning processes require language teaching or learning model conducted in the classroom as opposes to a variety of models which represent processes emerging in a natural language environment. The Two Cone Model in many ways represents the integrative theory proposed by Hector Hammerly. In the Two Cone Model, there is a centrifugal nature is the meaning of the word centrifugal is moving away from the center or axis, the meaning is moving away from the center or axis but the use of the word is to describe the position of the Two Cone Model. 
Combining the basic principles of teaching, generalization, structural and communicative, as well as the relationship between the Two Cone Model with learning theory, linguistic, and language teaching methodology. In Arabiyât second language, while the symbol P is for pronunciation, G for Grammar, and V for Vocabulary. T-CM distinguishes the process of acquiring or learning languages, namely native and second languages. The second language makes the center and moves away from the center of rotation (centrifugal) which describes language movement (linguistics) to the communicative around it. In contrast to acquiring the original language, movement in the opposite direction moves in a circular (centripetal) manner. It is in line that in teaching Arabic focused or emphasized on anashir al-lughah (linguistic elements) and istikhdam al-lughah (language uses). 27 In anashir al-lughah (linguistic elements), it includes four science branches to be studied, including ilm alashwat (teaching about the place where the letters carry out and the letter characteristics), qawaid (such as nahwu, sharaf, and imla'), mufradat (Arabic vocabulary), and ma'ani (translation). Arabic language teaching at Integrated Islamic Elementary Schools in Solo Raya The Integrated Islamic Elementary School (SDIT) Nur Hidayah Surakarta Teaching Arabic is taught at SDIT Nur Hidayah aims to be the religion and science languages as well as a means of communication. Thus, the Arabic subject in this school becomes an inseparable part of the religious education subject and is a unity. Arabic language teaching Teaching Arabic at the Nur Hidayah Integrates Islamic Elementary School has a target i.e. students can actively master Arabic vocabularies and expressions in the form of basic sentence patterns so that students are expected to be able to make simple communication in Arabic and be able to understand simple reading in a text. To achieve the ability to use Arabic as mentioned above, an appropriate language curriculum is needed, namely the language component and language use activities which fits each level. The language components are word form, sentence structure and vocabulary which communicatively 300 words and idioms as well as components of language use activities, namely speaking, listening, reading, and writing. 27 Maimun, "Strategi Pengembangan Evaluasi Hasil Pembelajaran Bahasa Arab", Journal of OKARA STAIN Pemekasan, Vol. 2, No. 6, 2011, 244. 28 Muh Nahidh Islami, Luasnya Bahasa Arab, at https://www.kompasiana.com/muh60847/ 5bf3a0afab 12ae7b4a65b0b5/luasnya-bahasa-arab, retrieved: November 8, 2019 at 20.15 Arabiyât To implement Arabic language learning program Arabic language teaching at SDIT well, learning activities should pay attention that Arabic language teaching is study of language use with the aim of communicating orally and in writing not only exploring the rules of the language. In the focus of discussions, there are five related to language namely speaking, vocabulary, sentence structure, reading, and writing. In language there is Arabic pronunciation which is adjusted to the intonation. Learning points are the minimum limit that needs to be taught because learning part that is not included in the learning program may need to be added as long as it is accordance with the students' thinking and language skills. If deemed necessary, changing the order of the subjects is still possible as long as it does not disturb the logical gradation of sentence structure. 
The learning time that is held can be arranged according to the breadth and depth of the material. The method chosen is a multi-method based on an active communicative approach. Some of learning recourses are books, complement and supporting materials, learning media as explanations for words in the form of visual objects, examples, models, dramatizations, other demonstrations to avoid the use of Indonesian translation in teaching Arabic. The number of lesson hours provided is an estimate of the time required to complete the objectives of the course. Teaching and learning assessment process includes knowledge and language skills, especially through interview or written test. Interview or oral tests can be objective or interpretative. Finally, it is important to realize that students in Arabic language teaching will be influenced by their native language background. The mother tongue aspects are the same as the Arabic aspects (both sound, word structure, sentence structure, and writing), while different aspects will create difficulties. Therefore, teacher must pay more attention and repeat the parts of the language which might be difficult for students. SDIT Nur Hidayah uses the 2013 Curriculum (K13) and it is integrated with the JSIT curriculum. This school is as an example of the implementation of the 2013 Curriculum and the JSIT curriculum. The JSIT curriculum is the curriculum used for the Integrated Islamic School built by JSIT, as well as the SDIT Nur Hidayah Surakarta. For the school curriculum, it still refers to the Ministry of Religion and National Education but the school makes some developments by instilling Islamic values which combine general and religious education. 29 After carrying out evaluations, the lesson hours are added to meet the needs and the results of the evaluation. In the curriculum, Arabic lessons were originally only two hours and now becoming three hours of lessons, of which one hour is used for practice. The curriculum in Integrated Islamic School develop Islamic values more where teacher accompany students in every lessons as a special feature of an Integrated Islamic School under JSIT Indonesia with integration and Islamic values. Arabiyât Since this school also develops the JSIT Curriculum, Arabic lessons use selfmade teaching materials and are only used for personal use and the books developed by JSIT. Until now, the school uses Arabic books from JSIT published by Nur Hidayah Foundation printed by EnHa Press. In Arabic classroom, teachers at this school create fun learning in the lower classes by using songs to make it easier to memorize vocabularies because the material presented is still basic level. Teacher always provides motivations, explains lesson plans, review previous lessons, conveyed objectives to students using materials that are suitable for students, and the teacher evaluates and allows students to ask questions. That is the descriptions of Arabic language teaching at SDIT Nur Hidayah Surakarta. 30 The Integrated Islamic Elementary School (SDIT) Muhammadiyah Al-Kautsar Kartasura Since 2013, this school has used the 2013 Curriculum (K.13), through approaches with others integratively in accordance with the 2013 Curriculum. Arabic lessons in this school include local content and local contents including English, Javanese, and Aranese. 
The Arabic Curriculum at this school is to adopt and develop the curriculum from the Ministry of Religion and the typical curriculum of the Muhammadiyah School, namely the Muhammadiyah Islamic and Arabic Curriculum (ISMUBA). 31 According to the teacher, Arabic language teaching at this SDIT refers to the 2013 Curriculum and the special typical curriculum of Muhammadiyah schools, namely ISMUBA which is integrative by providing guidance to students such as reading, interpreting, writing, reciting, listening, and observing with Arabic language teaching which is active learning, fun learning, GEMBROT PAIKEM and ISLAMIC. 32 The combination result is applied on a week with two hours of lessons, the most important thing is that students know and understand the material presented in this school does not have its own model, while evaluation is always carried out every year to improve and develop Arabic language teaching in this school. The Integrated Islamic Elementary School (SDIT) Al-Anis Kartasura SDIT Al-Anis uses a curriculum by carrying out and developing a combination of the National curriculums namely The Ministry of Education and Culture, The Ministry of Religion and the distinctive Integrated Islamic School, namely a system Arabiyât that is oriented toward Islamic boarding school developed by the Al-Anis Kartasura Foundation. 33 This Integrated Islamic School curriculum based Islamic boarding school tries to implement character education and follows the boarding school model to guide students to behave and have good character even though there are things which are different. Students are expected to not only have characters like santri (students of Islamic boarding school) but also they are ready to continue to the Islamic boarding school. From this understanding, it is implemented in religious lesson, tahfidz (memorizing Quran) and Arabic as a provision for students to go to the Islamic boarding school after completing their education at this school. Learning at SDIT Al-Anis is like in an Islamic boarding school which is accumulated with the current government curriculum using KTSP. Arabic language teaching adopts the curriculum from the Ministry of Religion, but this school also compiles its own teaching materials with Islamic boarding school-oriented principles. It is hoped that graduates will be ready to enter Islamic boarding schools based on religious education and Arabic which is highly highlighted in this school. 34 The Integrated Islamic Elementary School (SDIT) Ar-Risalah Surakarta This school uses combination curricula, namely the National Education (DIKNAS), Ministry of Religious Affairs (Kemenag), and curriculum from foundations such as tahfidz (memorizing Quran) and Arabic which is called integration. Teaching materials are developed by the schools themselves by developing curriculum from the Ministry of Religion and the organization. This school uses the KTSP and DEPAG curricula. Curriculum in other schools may only be KTSP or KURTILAS, but this school emphasizes morals without ruling out general lessons and there is no training outside of class hours. 35 According to Arabic teacher, Arabic language teaching at SDIT occurs interactively between teachers and students during learning. 36 The Integrated Islamic Elementary School (SDIT) MTA Gemolong SDIT MTA Gemolong is currently using and implementing the 2013 and the Integrated Islamic Elementary School curriculum. 
MTA there are eight lessons, local and development lessons as well as special programs developed by the Foundation and the Central MTA. Learning at this school uses a thematic approach adjusted to 33 Interview with Ahmad Muhammad, the Headmaster of SDIT Al-Anis, on April 13, 2019, at the Headmaster's office of SDIT Al-Anis. 34 Interview with Hikmah, Arabic teacher of SDIT Al-Anis, on May 7, 2019, at reception room of SDIT Al-Anis. 35 Interview with Sudrajat, the Headmaster of SDIT AR-Risalah, on April 15, 2019, at the Headmaster's office of SDIT AR-Risalah. 36 Interview with Setyo, Arabic teacher of SDIT AR-Risalah, on April 30, 2019, at the Headmaster's office of SDIT AR-Risalah. the curriculum of the Integrated Islamic Elementary School MTA Gemolong, Gemolong District. 37 While the unit division in each lesson is 35 minutes, there are 34 to 38 weeks that can be called effective for each year. The materials are made based on the books and refers to the 2013 Curriculum. 38 So far, the curriculum at this school only tends to achieve students' cognitive and psychomotor values. Meanwhile, the Integrated Islamic Curriculum is a curriculum that integrates religious education with general education to form a noble character. The curriculum structure that must be followed by the Integrated Islamic Elementary School MTA Gemolong includes substantially six years of basic education from first grade to sixth grade. The curriculum structure of the Integrated Islamic Elementary School MTA Gemolong is completed with graduation standards and subject competences. In the Arabic teaching and learning process the teacher assists students whenever they have difficulties in understanding. In this school there are also extracurricular activities to support and increase students' abilities in developing and providing a forum for students. 39 Integrative Arabic language teaching at Integrated Islamic Elementary School in Solo Raya An Integrative learning from Hector Hammerly which is intended for teaching a foreign language or a second language is appropriate and there are similarities with Arabic language teaching in this case study research area. An Integrated Islamic Elementary School can develop this model with the goals and targets of Arabic language teaching and each its advantages and disadvantages. From the discussion regarding Integrative Arabic language teaching, it can be analyzed that Arabic language teaching at SDIT is not all and does not fully use the integrative learning model even though SDIT uses an integrated system and everything related to integration. The criteria of integrative learning have been described in accordance with the theory with the element of integrative Arabic language teaching. There are several results that are appropriate with the field conditions and theory. Integrative Arabic leaning at SDIT can be seen from the perspective of integrative Arabic language teaching theory which has been described from the related elements, it can be concluded as follows. Arabiyât Arabic language teaching at SDIT Al Anis Kartasura based on the perspective of integrative learning theory has fulfilled several elements related to Integrative language learning which is supported by content standards, quality standards, and curriculum standards that have been determined by JSIT. 
Arabic language teaching at SDIT Muhammadiyah Al-Kautsar Kartasura based on the perspective of integrative learning theory has fulfilled several elements related to integrative language learning with its uniqueness, namely using the ISMUBA curriculum and it is always evaluated and developed regularly. Arabic language teaching at SDIT Al Anis Kartasura based on the perspective of integrative learning theory has fulfilled several elements related to integrative language learning with its advantages, namely curriculum and learning that adapts and is based on Islamic boarding school with the aim that students can continue at the Islamic boarding school. Arabic language teaching at SDIT Ar-Risalah based on perspective of integrative learning theory has fulfilled several elements that can be related to integrative language learning which is supported by curriculum development and learning that relatively adjusts students, even for the male and female class learning process is separated, as well as teachers also adjusted to students. Meanwhile, Arabic language teaching at SDIT MTA Gemolong, based on the perspective of integrative learning theory, has fulfilled several elements that can be related to integrative language learning determined by institutions and teachers with the adjustment of students specifically according to MTA. Therefore, the Integrated Islamic Elementary School chooses and uses integrative learning. Because SDIT uses integrated system, so try to use an integrated education model and integrated or integrative learning although the practice is very different from one school to another. Integrative Arabic language teaching based on the perspective of the integrative learning model, namely the model that appropriate with the elements of integrative learning model, which are include methods and strategic techniques that appropriate with the Arabic language teaching model. The results of Arabic language teaching analysis in Integrated Islamic Elementary Schools are adjusted to the perspective of Hector Hammerly's integrative theory. Implementation of integrated Arabic language teaching based on perspective of a theoretical and the analysis results conducted that Integrative Arabic language teaching can be applied in Integrated Islamic Elementary Schools, but not all because of the differences that exist in each school. In contrast, after a long discussion with teachers and policy leaders at the Integrated Islamic Elementary School, the use of textbooks was adjusted to schools because various teaching materials sources could be used to support integrative Arabic language teaching by covering the appropriate aspects. Arabiyât Then, Arabic language teaching at the Integrated Islamic Elementary School can use an integrative learning model but by adjusting it because each school has its own system and references such as curriculum, vision, and mission, and so on. However, it is hoped that this integrative learning model can be studied and implemented. In addition, integrative Arabic language teaching can be developed for new educational institutions so that teachers can conceptualize earlier either with ideal language programs or ideal language classes by adjusting related elements. On the other hand, it can also be used in Arabic language teaching so that it can be studied systematically according to age, needs, and the environment so that it can optimize the results of Arabic teaching and learning process in various institution in Indonesia. 
Systematic books and sustainable material from class level and even education level will be more integrative and appropriate with the ideal Arabic language program. Therefore, they can produce appropriate results by emphasizing language use, language elements, language skills, communication skills, and cultural skills. So, the integrative model in Arabic language teaching is expected to be able to improve students' skills and abilities. Learning by using integrative learning model changes the teachers' view of Arabic language teaching which they have carried out with the existing model. Likewise with students, this model becomes an alternative model along with the existing of other models. Thus, integrative Arabic language teaching results from integrative theory and integrative learning and its adjustments to Arabic language teaching, especially in Integrated Islamic School which can be developed by adjusting educational institutions and targets of each institution. Arabic language teaching can be adjusted based on the criteria and elements of the description above. Even though, the integrative implementation which has not been successful in accordance with the theory because so far SDIT has used an Integrative or integrated concept only as systems, views, and objectives of SDIT. According to this, there are many Integrated Islamic Elementary Schools that claim in using an integrated approach but it is not appropriate with the implementation because the suitability of the theory and field results is very different. Conclusion It can be concluded that from the five Integrated Islamic Elementary Schools based on the perspective of integrative learning theory, there are at least some elements but there are adjustments by using and developing integrative models according to theory. Integrated Islamic Elementary Schools are able to develop in accordance with each specifics by including an integrative learning model in order suitable with the integrated concepts and systems so that they synergize with each other. In addition, those who have elements and criteria can be adjusted and developed. If it is not suitable, then it can add and strengthen other aspects. Arabiyât Hector Hammerly's integrative language learning theory cannot always be applied to conditions in Indonesia with its characteristics even though the integrative language learning theory has been use in several countries. However, this theory can be applied by adjusting the elements and criteria so that it can use an integrative learning model. If it will be implemented properly, there must be a revision or some kinds of modification so that it is suitable for Indonesia. So, it can be concluded that integrated Arabic language teaching according to Hector Hammerly's theory can actually be applied with prerequisites and adjustments to elements and criteria of integrative Arabic language teaching and still maintain the characteristics and developments of each the respective educational foundations or institutions.[]
2021-09-01T15:03:25.758Z
2021-06-30T00:00:00.000
{ "year": 2021, "sha1": "fe4af2479a8ff13abe58b6914aed2b1058a7c4bf", "oa_license": "CCBYSA", "oa_url": "http://journal.uinjkt.ac.id/index.php/arabiyat/article/download/20095/pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "6e709a659dad703cb699a8426be489aa8a76de4f", "s2fieldsofstudy": [ "Education", "Linguistics" ], "extfieldsofstudy": [ "Sociology" ] }
229433633
pes2o/s2orc
v3-fos-license
Modeling and Optimization of the Drinking Water Supply Network—A System Case Study from the Czech Republic : In this study, we investigated the modelling and optimization of drinking water supply system reliability in the village of Zaben, Czech Republic. An in depth overview of the water supply network in the municipality, passport processing and accident and malfunction recording is provided based on data provided by the owner and operator of the water mains as well as the data collected by our own field survey. Using the data processed from accident and failure reports in addition to water main documentation, the water supply network in Zaben was evaluated according to the failure modes and e ff ects analysis methods. Subsequently, individual water supply lines were classified based on their structural condition. In addition, a proposed plan for financing the reconstruction of the water supply mains in Zaben was created. As such, this study provides an overall assessment of the water supply network in Zaben alongside a proposed plan for the structural restoration of the water supply system, which accounts for the theoretical service life of the system and the financial resources of the owner. Introduction Drinking water supply systems have significant impacts on the quality of human life, health, and hygiene. As such, a high quality, well managed and well maintained drinking water supply system is a foundational component of infrastructure for urbanized area. However, in order to effectively operate this infrastructure, it is necessary to understand its structural condition and perform maintenance and repairs accordingly to extend the functional life of system. It has been shown that careful coordination of planned, comprehensive maintenance and renewal of the system can prevent operational emergencies and reduce the need for unplanned interventions. This is important as the water supply system is essential for modern life and accordingly malfunctions, outages or deviations from the normal operating state can have significant negative effects. While some of these effects can be predicted and planned for through substitution or crisis solutions, many of these effects require an immediate emergency response when they occur, which cannot be effectively planned in advance [1,2]. In general, the whole drinking water supply system, together with the supply of electricity, is one of the most important elements of critical infrastructure, as a number of sectors, services and the population itself depend on their functionality and operability. Even the slightest disruption of these systems operation can then result in the deterioration of the quality of human life. A properly functioning supply of drinking water currently represents an important role for the functionality of of the supply of drinking water therefore means not only a reduction of human life living standard, but also a reduction in the functionality of other state parts on which society is directly dependent. It is possible to mainly include emergency services in the form of an integrated rescue system, especially the fire safety of the area and health care, as well as food, chemical, industrial or agricultural production. This issue is a severe issue in other countries around the world, which corresponds to the subject of many publications interest [3][4][5][6][7][8][9][10]. In some countries, the situation is addressed in the context of water supply systems failure, often with a link to the legislation of the country. 
For example, the author of [11,12] stated, using the example of a pipe route failure, that even a small-scale failure can cause the collapse of the entire system and affect the performance of the whole water network. Other approaches to the evaluation of water supply networks are presented by the authors of [13,14], who evaluate individual components of the water supply network on the basis of graph theory and deal in detail with the risk analysis of urban water supply networks, as do other publications [15][16][17][18][19].

The village of Zaben is situated in the Moravian-Silesian Region, in the district of Frydek-Mistek; in particular, Zaben can be considered a suburban part of the city of Frydek-Mistek. According to the Czech Statistical Office (CZSO), Zaben had 827 inhabitants as of 1 January 2017, and individual housing is dominant. The village lies on a flat plain at an altitude of 266 to 270 m above sea level and its total cadastral area is 3.35 km². The construction of the water supply system began in 1981 and was completed in 1984. A schema of the water mains in Zaben is presented in Figure 1.

Figure 1 shows the overall scheme of the water supply network in the village. The left part shows the main distribution lines (yellow) and the minor water supply lines (blue). The main water distribution lines have dimensions DN 150 or DN 200; the minor water supply lines have smaller dimensions, DN 100, DN 80 and DN 50. The right part shows the division of the water supply system into individual functional units, which are the subject of this study and are treated separately. This division was created for the purpose of the present study; a similar division is now also used by the administrator of the water supply system.

The whole water supply network is owned by the municipality of Zaben. The administrator of the water supply network (and of the sewerage network), and the entity responsible for the operation and functionality of the system, is the joint-stock company Severomoravské vodovody a kanalizace Ostrava Plc (SmVaK Plc). SmVaK Plc is also entitled to collect payments for water and sewage and to regulate their price. The source of drinking water is the Ostrava Regional Water Supply, from which drinking water is supplied to the tower reservoir of Biocel Paskov. This reservoir has a total volume of 200 m³; its minimum level is 306.50 m above sea level and the maximum water level in the reservoir is 311.80 m above sea level. Drinking water is supplied to the consumption area by gravity owing to the sufficient elevation difference (the terrain level of the consumption area is 266 to 270 m above sea level). The elevation difference between the lower level of the reservoir and the highest point of the terrain in the village is 36.50 m.
The reservoir of Biocel Paskov, by the village of Zaben, also supplies the Biocel industrial complex itself, including the adjacent housing estate in the village of Paskov. Drinking water from the Biocel reservoir is supplied by the DN200 feeder to the north-eastern part of the village. From there, the water is distributed to the final consumers through the distribution network. Approximately 97% of village inhabitants are connected to the water supply network [20]. The water supply system is a combined system; accordingly, most of the village's supply system (central and north-western regions) has a circular layout, whereas the south and west components of the supply system were designed as a branch system. The total length of the water pipelines in the municipality is 9028.60 m and PVC pipes were used as the construction material with dimensions of DN 50, DN 80, DN 100 or DN 150. The water supply network is divided into seven lines labelled A to G. Lines A, B, C and D are the water lines located in the northern part of the village, and the E, F and G lines are located in the southern part of the village. Notably, lines E, F and G have a low operational reliability [20]. Water Supply Documentation We conducted a field survey of Zaben to collect information and additionally obtained documents and data from the water main operator-SmVaK Ostrava Plc, the Village of Zaben, the CZSO, and the Czech Land Surveyor and Cadastral Office. Based on this information, a summary document of the water supply network-including a simplified record of failures-was created. The water supply system summary document was made from different data files in different formats which were collated following preliminary investigation. The existence of elements of the system where there were discrepancies between various records were verified by our field survey. For improved organization and additional insight, the water supply network was divided into sections which were each assessed individually and the individual water lines were color-coded (these colors are used throughout this study). It should be noted, however, that the failure reports provided by the system operator contained limited information [2,21]. By verifying the actual situation by means of a legwork, the authors supplemented, corrected and expanded the submitted database provided by the waterworks administrator and other institutions (e.g., the database shows the repair of a water supply hydrant that is not in the real environment, etc.). One of the key works was also the unification of documentation (mostly in tabular form) for further investigation (note: documents provided by the waterworks administrator and other institutions were processed in a free and very confused form-e.g., handwritten notes and sketches only in paper form, pasportization of network elements was processed in MS Word, etc.). The evaluation of the structural condition of the system is a qualitative analysis that identifies weaknesses in the system along with the causes and consequences of these weaknesses [22]. Similarly to the general Failure Methods and Effects Analysis (FMEA) method according to CSN IEC 812 (010675), structural condition evaluation considers the system as a whole as well as its individual elements [1]. Monitored elements are evaluated with regards to various risk areas, known as Technical Indicator (TI), and are then classified into the categories K1 to K5 based on condition or efficiency. 
This classification scheme has been defined previously and relies on criteria and input data specific to each TI [22]. The definition of these categories is provided in Table 1. This classification scheme was used in conjunction with the summary document (compiled as described in Section 2.1) to evaluate the drinking water supply system [23]. These technical indicators were partially adopted from [22] and suitably supplemented for application within the FMEA method. Overall, this method can be applied to a wide range of other water supply networks. If the input parameters (TIs) are modified or extended, the method can also be applied to other types of urban networks (sewerage, gas, etc.). TI1-Age of the Pipeline The age of the pipeline provides information about the wear the water network has been subjected to. Accordingly, categories K1 to K5 (Table 1) are assigned based on the age and material of the pipe, as shown in Table 2. TI2-Failure Rate The failure rate indicator is one of the most important indicators. Based on the data available and the size of the region of interest, we broke down the disruptions and failures into groups including valve failures (gate valves, hydrants, etc.), faults in pipe material and connections, supply line failures and other categories. Sections of the water supply system were categorised with regard to each failure type and, subsequently, a weighted average calculation was performed to determine the overall TI2 categorisation. Note that for basic cases, it was sufficient to weight all failure modes equally. Categorisation was performed in accordance with Table 3. Table 3. Classification of TI2 (failure rates by failure mode), adapted from [22]. In addition, all failures were divided into four categories based on their type (cracked piping, pipe excavation accident, supply line failure and valve failure) for additional failure analysis. TI3-Water Loss in the Network The TI for water loss was calculated from the total Water Not Invoiced (WNI), which is most often expressed as a percentage or as unit water leakage expressed in units such as m³/km/year. To ensure accuracy, the system's own water needs, such as water pipe flushing or withdrawals from the mains conducted by the operator, had to be deducted from the WNI. Categorisation was performed by unit leakage or by WNI, depending on the available data. The criteria for categorisation are summarized in Table 4. Table 4. Classification of TI3 (water losses in the network), adapted from [22]. TI4-Pressure Ratios The value of TI4 was calculated based on the maximum hydrostatic pressure in the water network. Given the needs of our case system, our assessment only included hydrostatic pressure; however, for enhanced calculation accuracy, the hydrodynamic pressure should also be included. Hydrostatic pressure was determined from the minimum and maximum water levels in the reservoir and the minimum building height in the consumption area. For the classification of the complete network, we used the values typical for the vast majority of nodes in the network (e.g., >80% of nodes). Categorisation by hydrostatic pressure was performed according to Table 5. TI5-Impact on Water Quality For an accurate evaluation of TI5, it is best to monitor water quality in the consumption areas. However, in this case, we used the monitoring data from the nearby regional water mains in Ostrava, as Zaben is supplied with this water.
We considered that water should not remain in the network for longer than 24 h, due to the partially circular design of the network. The classification criteria are summarised in Table 6. Table 6. Classification of TI5 (water quality), adapted from [22]. Overall Technical Condition The overall technical condition was evaluated as a weighted sum of all technical indicators TI1 to TI5. The overall technical condition was used to reflect both the entire water supply system and its individual components. Determination of the overall technical condition is given by Equation (1): CTS = Σ_{i=1}^{n} W_i × TI_i (1), where CTS is the overall technical condition, n is the total number of TIs, TI_i is the value of the technical indicator (TI1 to TI5) in the range K1 to K5 (coded K1 = 1, K2 = 2, etc.), and W_i is the weight of the respective TI_i, indicating its significance. The weighting was performed by dividing the total weight among the individual technical indicators TI_i. Based on previous studies, we defined W_i such that it was equal for each of TI1 to TI5, i.e., W_i = 0.2 [22]. The CTS of the system was classified according to Table 7. Table 7. Classification of CTS (overall technical condition), adapted from [22]. Plan of Funding for Water Main Renewal We developed a plan for water main renewal based on the system wear, structural and technical condition, service life, total property value, total water supply value and total associated service value. This plan was prepared on the basis of Annex No. 18 to Decree No. 428/2001 Coll. "Plan for financing the renewal of water supply and sewerage" [CCC], which is part of the valid legislation of the Czech Republic. The equations used in this chapter were taken from this decree. Determining Theoretical Asset Service Life To determine the time required to accumulate the necessary funding for repairs, we began by assessing the theoretical life span of the water system based on previous studies. A general estimation of service life (shown in Table 8) was used as a starting point. Following this, information about the properties of various pipe materials was used to refine the lifespan estimate, as summarized in Table 9 [24]. Table 8 gives, for example, a service life of 80 years for water lines of the feeding and supply network and 45 years for a water treatment plant or raw water source. Calculation of the Basic Wear Percentage of Total Assets The basic wear percentage was calculated for the water mains of interest. For this calculation, the age of the infrastructure assets is essential. As such, we determined the age of the system as a weighted average based on the length and material of the various pipe segments to ensure the accurate calculation of the required funding. The calculation of the percentage of wear was based on the theoretical service life of the pipe material in the water supply network, which was a PVC pipeline with a 60-year life expectancy. The actual water supply in the village was built between 1981 and 1984, and it is therefore between 35 and 38 years old. Approximately 8% of the water main length has been repaired or replaced since construction. As such, the calculation of wear percentage considered the average overall age as 36.5 years [24]. The Percentage of Wear (PO) was calculated as PO = AS / TZ × 100 [%] (2), where PO is the percentage of wear, AS is the average age of the system and TZ is the theoretical service life.
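As an illustration, a minimal sketch of Equations (1) and (2) as reconstructed above: the overall technical condition as a weighted sum of TI categories, and the percentage of wear from the average age and theoretical service life. The TI categories used in the example are those derived later in the study (K2, K3, K5, K1, K3); the equal weighting W_i = 0.2 follows the text, and the function names are our own.

```python
# Sketch of Equation (1): CTS as a weighted sum of TI categories (K1=1 ... K5=5),
# and Equation (2): percentage of wear PO = AS / TZ * 100.
def overall_technical_condition(ti_categories, weights):
    assert len(ti_categories) == len(weights)
    return sum(w * ti for w, ti in zip(weights, ti_categories))

def percentage_of_wear(average_age_years, theoretical_service_life_years):
    return average_age_years / theoretical_service_life_years * 100.0

ti = [2, 3, 5, 1, 3]        # TI1..TI5 categories found later in the study
w = [0.2] * 5               # equal weights, as assumed in the study
print(overall_technical_condition(ti, w))   # -> 2.8, i.e. overall category K3
print(percentage_of_wear(36.5, 60.0))       # -> ~60.8 % wear for the PVC mains
```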
Theoretical Time for Accumulation of Funds (TDAP) The theoretical time for accumulation of funds, in years, is the calculated average time remaining before implementation of the renewal process is required, as renewal must take place before the end of the service life. The restoration time is the time remaining until completion of the pipeline renewal. TDAP is calculated as TDAP = TZ × (1 − PO/100) (3), where TZ is the theoretical service life and PO is the percentage of wear. Annual Funds Needed for Reconstruction (RPPO) The annual funds needed for reconstruction represent the amount of funding that should be spent on the regular renewal of infrastructure assets to ensure their reliability, functionality and sustainability. RPPO incorporates the total value of the infrastructure, TDAP and the deadline by which the system must be reconstructed. RPPO is calculated as RPPO = CHM / TDAP (4), where CHM is the total value of the assets and TDAP is calculated as in Equation (3). Water Supply Documentation We created a water supply system summary document as described in Section 2.1. As previously mentioned, the system was segmented and each element of the system was recorded in this document. A sample excerpt of the summary document is provided in Table 10. The water lines in the municipality are made of PVC pipes in different dimensions and their total length is 9028.6 m. Figure 2 shows the length share of the individual water lines A to G in the context of the total system length. Notably, the water supply network in Zaben includes a total of 250 water supply connections, 44 sectional closures, 10 mud pans, 15 air valves and 35 hydrant closures, along with other objects, valves and fittings. Furthermore, it should be noted that neither the owner nor the operator have accurate information about their assets. For example, according to the operator's database, there is supposed to be a hydrant at a specific location in the village, while according to the data of the municipal authority, this element was supposed to be a valve. However, our field survey could not identify either of these elements at the specified location. Nevertheless, all the available data were summarised in a single, uniform record of all elements of the water supply system [26]. Assessment of Structural Conditions of Water System In this study, the individual TIs were addressed comprehensively for the entire drinking water supply system of the municipality. The only exception is TI2 (failure rate), where the system was evaluated according to the place of failure and thus in relation to the individual water lines A to G. The other TIs were evaluated for the village as a whole, because their input data are similar across the system or differ only marginally. Age of Pipelines-TI1 The water mains in Zaben were predominantly built between 1981 and 1984 using PVC pipes. Some pipeline sections have been replaced since the initial construction, either with new PVC or with PE or high-density PE, due to PVC defects. However, in relation to the total length of the network, the replaced segments form a very small part of the water supply system (approximately 8%). As such, the replaced segments did not have a significant impact on the evaluation of TI1.
As the PVC water supply network in the municipality is predominantly 35 to 38 years old, TI1 was classified as K2 (for ages 20 to 40) according to Table 2. Since the expected theoretical service life of the PVC pipe material is 60 years, the water mains are approximately in the middle of their service life and should not require replacement in the near future. As water mains are often large-scale systems that were not built all at once, but rather gradually with the growth of settlements, large water supply systems often comprise different materials and segments of different ages. Our assessment is simplified in this case, as the water system is predominantly of one material and one age. However, in the case of large water pipelines with a variety of different materials and ages, it would be advisable to divide the network into segments and deal with each segment individually.
This approach is only possible if sufficient data are available. Failure Rate-TI2 We summarised the failure rates and the accident record for the different water lines from 2013 to 2015 in Table 11. This summary includes a description of the pipeline malfunction, the location and date of the problem and the method of its repair. We also investigated the frequency of system failures by year from 2005 to 2015. These data are presented in Figure 3. There was a maximum of seven failures in 2012, which was due in part to the extensive reconstruction and completion of the village gas pipeline; three defects in the water pipeline were caused by mechanical damage during gas line construction. We also evaluated the trend in the frequency of failures and accidents on the water mains over time and observed a small but steady increase in frequency. This trend line is also shown in Figure 3. From Figure 5, it is evident that the most common cause of a malfunction or emergency was a cracked pipe (n = 22 cases). The cracked pipe category includes any disturbance of the line elements; such failures are a significant source of water loss in the network. It can further be assumed that there are also other minor water leaks across the network. From the list of failures in the monitored period, it was noted that many faults were repaired by replacing the pipe material over long (>10 m) sections of the pipeline. However, these changes were not reflected in the project documentation provided by the operator and thus were not reflected in the data on the age of the pipe material or in the list of components of the water supply network. It follows that these structural changes are addressed by a reactive maintenance approach, so a problem is eliminated only after a failure occurs. According to the Civil Engineering Act No. 183/2006 Coll., as amended, this does not require any administrative construction proceedings, so the operator can improvise and carry out such repairs even without project documentation [27,28]. In Table 12, we present the summary of the number of failures in each line over time and for the system overall. In total, there were 42 failures and accidents in the water supply system of the village between 2005 and 2015. The total length of the water lines is 9.0286 km, and the number of failures per kilometre of water line per year was 0.4229 pp/km/year.
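A minimal sketch reproducing the overall failure-rate figure just quoted; the helper name is ours and the inputs are the totals stated in the text (42 failures, 9.0286 km, the 11-year period 2005-2015).

```python
# Failures per kilometre of water line per year (pp/km/yr).
def failure_rate(n_failures, length_km, n_years):
    return n_failures / (length_km * n_years)

print(round(failure_rate(42, 9.0286, 11), 4))   # -> 0.4229, matching the network-wide value
```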
The table also shows that the most problematic section of the water system was line F, which was categorised as K4 with 0.6413 pp/km/yr, whereas the failure rate was lowest for line A, which was categorised as K1 with 0.2039 pp/km/yr. Thus, the most critical component of the drinking water supply system is line F. Finally, Table 12 shows the categorisation of each line based on its individual failure rate, while Table 13 shows a ranking of the lines by performance. Overall, the water supply system was categorised as K3 with regard to TI2, according to Table 3. TI3-Water Losses Data on water losses in Zaben were not provided by the owner or the operator of the water system. The volume of water losses, or the balance of WNI, was therefore obtained from the development plans of water supply and sewage systems in the region (PRVKUK). These plans state that 'the proportion of water not invoiced is consistently higher, reaching up to 31% of the quantity of water supplied to the system' [20]. Due to these enormous losses, the whole water system was categorised as K5 with regard to TI3 (according to Table 4). This categorisation indicates that water loss is an emergency situation that requires immediate solutions. TI4-Pressure Ratios Pressure conditions in the water network in the village were broadly comparable across the whole system because the village lies on a relatively flat plain where the altitude differences are minimal. Due to the altitude of the water tower reservoir which supplies drinking water to Zaben, among other villages, the network has compliant pressure conditions. The water level in the reservoir was between 306.5 and 311.8 m above sea level. The maximum hydrostatic pressure in the water supply system was thus 45.8 m H₂O, while the minimum was 36.5 m H₂O. These pressure conditions in the village water network can be considered favourable, given Table 5. In particular, 45.8 m H₂O is on the border between categories K1 and K2. However, in normal operating mode the hydrostatic pressure will be lower, and thus the resulting category for TI4 (pressure conditions of the water supply network) was K1 (according to Table 5). Note that in certain cases, increased pressure can have an adverse impact on network performance, especially in the case of a leak. Thus, water losses tend to increase in the system when pressure is high, interrelating TI3 and TI4. TI5-Water Quality Water quality is affected by many factors, including the age of the network, clogging, pipe material, quality of the drinking water supplied, number of purification steps, and water residence time in the pipeline.
Water quality categorisation was determined as described in Table 6. Given that the village of Zaben is supplied with drinking water from the regional water mains of Ostrava, which carry predominantly surface water, the water in the mains has adequate quality. Furthermore, the residence time of water in the pipes should be less than 24 h, due in part to the circular design of the network. The weakest point of the system is where the water losses indicate that leakage may be occurring. Leakage may result in a deterioration of the quality of drinking water in the pipeline. Thus, the resulting category for TI5 was K3 (according to Table 6). CTS-Overall Technical Condition It was appropriate to determine the weights W_i as estimates based on the significance of the indicators for the CTS of the water supply network. Usually, the significance of each TI_i stems mainly from its classification. In this case, we used equal weighting because the presented study was implemented as an illustrative and idealized model. However, in a more detailed solution, it would clearly be appropriate to adjust the values of the weights for the individual TIs according to their severity. From this point of view, it would be most effective to increase the weights for TI2, TI3 and TI5, which, in this context, have the most significant effect on water quality and system functionality; TI1 and TI4 then have a negligible effect. However, the weights need to be assessed separately for each area of interest, and it is therefore possible that in a different environment the distribution of the W_i weights could be different. Substituting into Equation (1) gives CTS = 0.2 × (2 + 3 + 5 + 1 + 3) = 2.8. Table 14 provides a summary of the categorisation for each TI and the overall condition (CTS). Based on this evaluation, the entire water supply system was categorised as K3 (according to Table 7). A categorisation of K3 indicates an average evaluation and, according to Table 1, is not expected to require immediate rectification, though repairs and maintenance should be expected in the near future. However, it should also be noted that some of the technical indicators were categorised as being in very good condition (K1), while some were classified at emergency level (K5). From this perspective, the overall condition of the water network is likely unsatisfactory, especially due to the enormously high water losses in the network. Thus, urgent action is likely required. Funding and Planning for Water Main Renewal To calculate the TDAP, the service life of the system is required along with the value of the percentage of wear. It is thus recommended that a financial plan be determined by considering the service life estimates presented in Tables 8 and 9. The obligation to save for, and subsequently restore, water systems was prescribed for owners of water supply systems by the Amendment to the Water Supply and Sewage Systems Act (i.e., Act No. 76/2006 Coll.) as of February 2006. Based on this act, Zaben Municipality, as the owner of the water supply network in its region, should theoretically have been setting aside annual funding for the restoration of the water mains (RPPO) for at least the past 10 years. Since no major reconstruction projects have been undertaken nor has other investment been made in the water system of the village so far, it can be assumed that the village has been saving this money to date.
Using Equation (3), the total theoretical time required for the accumulation of funds is 23.5 years, which means that the water supply system should be reconstructed in its entirety by 2042. The corresponding annual funds needed for reconstruction (Equation (4)) amount to approximately CZK 1.05 million, which is equivalent to CZK 10.52 million (€404,615) over the 10 years since the establishment of the legal obligation to prepare a plan for the financial renovation of water supply systems. These funds should therefore either be invested annually into water supply system renewal, or the owner and operator should plan extensive reconstruction on the timescale of a few years, in which case the funds required are proportionally increased. If the municipality, as the owner of the water mains, has obeyed the requirements of the law [28], we can assume it has sufficient funding to commence the first phase of the water supply network reconstruction now. Based on the evaluation of the structural condition described in Section 3.2, we prepared a plan for the renovation of the water mains in the village. As mentioned above, the significant water losses necessitate urgent solutions even though the overall system was generally evaluated as K3 (compliant). The draft plans for the reconstruction particularly considered the failure rate of the individual water lines, since this factor has a significant influence on the reliability of the village water system. According to the failure rates of the individual pipelines, we proposed a schedule of reconstruction for the water supply network. When preparing the schedule, we distinguished between the preparatory and implementation phases of the water supply reconstruction. The implementation phase involves the actual execution of the infrastructure renewal, whereas the preparatory (pre-realisation) phase includes the preparation of the restoration project, its approval processes, building permit procedures, transport engineering measures during the reconstruction work, and so on. The target end date of 2042 follows from the calculated TDAP. A summary of the proposed schedule is provided in Table 15. The first phase of restoration will take place in the years 2019-2025 and covers the restoration of the water supply system in the southwestern part of the village (lines F and G) as well as the construction of a second water supply conduit from Paskov City. Water line F will be renewed first, since it has the highest failure rate of the entire water supply system and is likely a significant source of water losses in the system. Following the renovation of line F, a second drinking water supply conduit from Paskov, approximately 870 m in length, is expected to be constructed in 2021 and 2022, as envisaged by PRVKUK [20]. This conduit will connect to line G, which will also be renewed during this process, specifically by replacing the current water mains with wider pipes (DN200 instead of DN50). Water supply lines F and G are currently designed as branch-type, but during the reconstruction it would be advisable to interconnect them in a circular layout so as to improve the operational reliability of both water lines. The second phase will take place between 2035 and 2042, when the remaining parts of the water supply in the north-eastern area of the municipality (lines A through E) will be reconstructed. Specifically, water lines D and E will be reconstructed first, followed by lines B and C, and finally line A. Thus, restoring the water supply system in the village is a large investment, for which the village must be well prepared.
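A minimal sketch of the renewal-financing arithmetic (Equations (2)-(4)) as reconstructed above. The asset value CHM used below is a hypothetical placeholder chosen only so that the annual figure is of the order quoted in the text; the study itself does not state CHM explicitly.

```python
# Sketch of the financing-plan quantities; CHM below is hypothetical.
def tdap(theoretical_service_life_years, percent_wear):
    """Theoretical time for accumulation of funds: TZ * (1 - PO / 100)."""
    return theoretical_service_life_years * (1.0 - percent_wear / 100.0)

def rppo(total_asset_value, accumulation_time_years):
    """Annual funds needed for reconstruction: CHM / TDAP."""
    return total_asset_value / accumulation_time_years

po = 36.5 / 60.0 * 100.0                      # ~60.8 % wear (Equation (2))
t = tdap(60.0, po)                            # -> 23.5 years, i.e. full renewal by ~2042
annual = rppo(24_700_000, t)                  # CHM in CZK, hypothetical placeholder
print(round(t, 1), round(annual))             # ~23.5 years, roughly 1.05 million CZK per year
```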
Given the fees associated with lost water and the likelihood that part of this cost could be recovered by repairing the pipeline, system owners and operators may be incentivised to participate in the revitalisation process. Consider that the present annual water consumption is approximately 36,223 m³/yr and that WNI may account for up to 11,229 m³/yr. Furthermore, consider that the price of water is CZK 34.40 (€1.32) per m³. As such, the financial loss is approximately CZK 386,278 (€14,856.85) per year, which represents a significant annual financial loss to the operator of the water supply network and may encourage operators to financially support and maintain the system [29]. In short, the result of our proposed financing plan is the annual amount of financial assets that needs to be set aside for the renovation of the water supply system. This is necessary to ensure the required technical condition and stable operation of the water supply system, and the financing could be made more attractive to operators by accounting for the cost of water lost in systems needing repair [28]. Final Recommendations and Discussions As described above, the subject of the practical part of this work was the modeling and subsequent optimization of the water supply network in the village of Zaben. Only the distribution network in the village was addressed in this work. In the case of further development of the issue, not only should the distribution network itself be addressed (as it is the most common place of failures and accidents), but so should other parts of the drinking water supply system, which form a significant part of the entire system. These parts include, in particular, raw water sources, drinking water treatment plants, water feeders and reservoirs. In this work, the basic risks of these and other structures are only outlined; the evaluation of their structural and economic condition is not included here due to the scope of the topic. For further development of the issue, it would be appropriate to apply the methods used in this work, or other methods of risk analysis, to those parts of the water supply system that have a significant impact on the functionality and reliability of the system. For the further development of the modeling and optimization of the reliability of structures for drinking water supply, it also seems important to take into account other areas that were revealed during the elaboration of this very comprehensive topic. Areas suitable for further work include, in particular: • the effects of other infrastructures (transport, energy, sewerage, hot water, etc.)
and other elements of public space (greenery, street furniture, division of space) on the operability and failure rate of water supply networks, • the effects of the method of laying engineering networks or water mains on their service life, reliability and failure rate (simple laying in the ground, trenchless technologies, pipeline material, common and combined routes, etc.), • evaluation of other operational parts of the drinking water supply system, such as raw water sources, drinking water treatment plants, water feeder lines, water reservoirs, pumping stations, etc., • economic evaluation and balancing of individual water supply lines in connection with the financing of the renewal of water supply systems for public use, • design of optimization methods (e.g., the possibility of using suitable trenchless technologies, simple laying in the ground, material design, coordination with other elements of public space) based on the structural condition of the water supply network determined using the FMEA method, • prediction of failures and accidents in the drinking water supply system using a suitable statistical method based on the evaluation of the failure rate of individual elements of the system and the determination of the resulting time to failure (e.g., fault tree analysis, FTA), • solution and design of a maintenance plan for inspections and revisions using modern technologies for monitoring water supply networks and related operating records (especially the detection of water leakage from pipes, monitoring of hydrodynamic pressures in the water supply network, etc.). Conclusions Water supply networks in the Czech Republic form an extensive grid of 77,146 km of water mains that supply water to 9.93 million people (approximately 94% of the population). The water losses in the network have been slightly reduced in recent years, but may still reach an average of 16.8% of the water intended for delivery. This amount corresponds to approximately 99.1 million m³ of water per year, which is a huge volume of water that is costly to produce and is subsequently lost through leaks. This study presents the water supply network of a model territory in the village of Zaben in the Czech Republic. We applied a methodology for evaluating the technical condition of the water supply system according to the general FMEA approach. Based on this methodology, the evaluation of the individual sections of the water mains was conducted according to their structural condition. We also proposed a financial and logistical plan for the reconstruction of the water supply system in the village on the basis of our analysis. Notably, the proposed time schedule was designed such that the sequence in which the individual water lines are renewed reflects their structural condition. The result is a schedule for water main reconstruction whereby the priority is the renewal of those lines that are in the worst technical condition. This schedule can serve as a basis for both the owner and the operator of the water supply network when preparing for the reconstruction or development of the water supply and sewer systems in the region. The analysis shows that the most critical point of the drinking water supply system in the municipality is clearly water supply line F, where there is an average of 0.64 failures per kilometer of water supply per year. The second most problematic section is line G.
By contrast, the smallest number of failures and accidents in the monitored period was recorded for line A, with 0.2 failures per kilometer of water pipeline per year. According to the evaluation methodology, the technical condition of the water supply network in the village was generally assessed as belonging to category K3 (satisfactory). However, due to the enormous losses of water in the network and the relatively high failure rate, especially of some water supply lines in the village, it is appropriate to start the renewal of the water supply system in the village now. The renewal proposal is mainly based on the failure rate of the individual lines, as this factor has a significant effect on the reliability of the whole drinking water supply system in the village. The procedure and data processing proposed in this work, demonstrated on the water supply network in the model area of the village of Zaben, could be applied to water supply systems in other municipalities and cities to create realistic conditions for the efficient operation of water supply networks and to significantly support the sustainable development of the area. As part of the management of water supply networks in the Czech Republic (in the case where the operator is a private company, as in this case SmVaK Plc), the overall operation is managed with the aim of maximising the operator's profit. Private entities are not entitled to any state or other support, and therefore all interventions in the system are carried out at the lowest possible cost. In contrast, there are systems that are operated by the public sector. Here, the main goal is to ensure the supply of water of the required quality and with maximum efficiency. However, there are few such systems in the Czech Republic, mainly due to historical political practices (especially in the 1990s). Fortunately, this situation has recently improved and an increasing number of drinking water supply systems are being taken over by the public sector. Author Contributions: M.T. and N.S. provided the core idea, collected the data, wrote the manuscript and analyzed the data statistically. D.K. and S.E. revised and constructively commented on the paper and checked its formal correctness. All authors have read and agreed to the published version of the manuscript. Funding: The work was supported by funds for the Conceptual Development of Science, Research and Innovation for 2020 allocated to VŠB-Technical University of Ostrava by the Ministry of Education, Youth and Sports of the Czech Republic.
Preceding Vehicle Detection Based on Optimized Faster R-CNN Algorithm Preceding vehicle detection is still a challenge for unmanned driving technology. Deep learning has achieved great success in target detection, and the Faster R-CNN algorithm is a classic representative. However, the algorithm still has room for improvement in detection accuracy. By analyzing the problems of Faster R-CNN in the detection of occluded vehicles and taking the target detection post-processing algorithm Soft-NMS as the research object, two new penalty coefficients, an inverse proportional penalty coefficient and an exponential penalty coefficient, are proposed. These further improve the algorithm's detection accuracy for occluded preceding vehicles. Introduction Compared with human driving, unmanned vehicles have outstanding environmental awareness and path planning capabilities, which can greatly reduce the traffic accident rate and ease the pressure of traffic congestion [1]. In an unmanned driving system, perception is an extremely important part. Real-time detection of the preceding vehicle can provide sufficient prior conditions for subsequent decision-making and planning, thereby avoiding traffic accidents [2]. In recent years, preceding vehicle detection has become a research hotspot due to its important role in autonomous vehicles. At present, the methods for detecting vehicles in front are mainly divided into traditional machine learning methods and deep learning methods based on neural networks. Traditional machine learning methods mainly extract vehicle features through feature extraction operators such as HOG (histogram of oriented gradients) [3] and Haar-like features [4], and then input these features into classifiers such as SVM (support vector machine) [5] and AdaBoost [6] to complete vehicle detection. These methods have the disadvantages of high time complexity, insufficient model learning, and an inability to adapt to the requirements of feature diversity. At the same time, they cannot be applied to large samples and cannot adequately cover the complex road traffic environments faced by unmanned driving [2]. Moreover, traditional machine learning methods struggle to solve the problems of occlusion and false detection. In this context, it is more effective to use deep learning methods to detect vehicles in front. With the continuous development of deep learning, the convolutional neural network can effectively extract high-dimensional features of images thanks to its parameter sharing, local connections, and down-sampling structure, which mimic visual processing and greatly improve detection accuracy. Therefore, it is widely used in vehicle target detection tasks, and the effect far exceeds traditional algorithms. Faster R-CNN (faster region-based convolutional neural network) [7] is a target detection framework based on region proposals and convolutional neural networks, proposed by Microsoft Research in 2015, that can perform end-to-end learning. Zhang [8] built a fast and accurate road target detection algorithm (FAROD) based on Faster R-CNN and improved the detection performance of small targets by introducing a deconvolution structure. Frameworks such as SSD (single shot multibox detector) [9] and YOLOv2 (you only look once) [10], which are known for fast detection, also draw on many ideas from Faster R-CNN. As a classic algorithm for target detection, it is used by many scholars in the field of vehicle detection.
In the process of vehicle detection, detecting an occluded preceding vehicle is a difficult problem that urgently needs to be solved. When the preceding vehicle is blocked by other vehicles or obstacles, missed detections or low detection accuracy easily occur, so that the target vehicle cannot be accurately identified. The Faster R-CNN algorithm mainly uses the NMS (non-maximum suppression) algorithm to remove redundant target detection frames, and an optimized NMS algorithm can improve the detection accuracy for occluded preceding vehicles. Bodla [11] proposed a Soft-NMS algorithm with a penalty coefficient. This method does not require retraining the original model and can easily be integrated into any target detection algorithm using NMS, reducing the rate of missed detections. Zhao [12] applied the Soft-NMS algorithm to the task of detecting targets such as vehicles. Compared with the NMS algorithm, the Soft-NMS algorithm can improve the accuracy on the PASCAL VOC 2007 data set by 1% to 2%, so using the Soft-NMS algorithm can improve the detection accuracy of the vehicle ahead. The Soft-NMS algorithm provides linearly weighted and Gaussian weighted penalty coefficients. However, no previous work has considered applying other types of penalty coefficients to the Soft-NMS algorithm to explore their impact on detection accuracy. Therefore, aiming at the detection accuracy of occluded preceding vehicles, this paper introduces the Soft-NMS algorithm into the Faster R-CNN algorithm. The Soft-NMS algorithm is optimized and two new penalty coefficients are introduced to improve the detection accuracy for occluded vehicles. First, the paper introduces the overall structure and design principles of Faster R-CNN, the RPN (region proposal network) and NMS. Then, the working principle of the Soft-NMS algorithm is described and two new penalty coefficients are introduced to optimize the Soft-NMS algorithm. Finally, the new penalty coefficients are verified by experiments, and the detection accuracy of the Faster R-CNN algorithm is further improved. How Faster R-CNN works As shown in Fig. 1, the work of Faster R-CNN is divided into four parts. The first part is image feature extraction: the image is processed by a convolutional neural network (VGG16 as an example) to obtain a feature map. The second part is the RPN: based on the feature map produced by the preceding layers, this network determines the candidate regions and the approximate location of the target in the image, including the detection frame and whether it belongs to the foreground or background. The third part is RoI pooling: using the proposed regions and the feature map information, RoI pooling maps each proposed region onto the feature map. The fourth part is classification and regression: based on the feature map of each proposed region, the cls layer and the reg layer determine the category and the position of the target in the image, respectively. The RPN is a fully convolutional network, and its workflow is shown in Fig. 2. The input of this network is the feature map generated by the preceding convolutional layers. First, a sliding window of n×n (n = 3 in this article) is used to traverse the feature map, generating a new 512-dimensional feature at each position and k anchors. Then, the 512-dimensional feature is mapped to low-dimensional vectors through a 1×1 convolution operation. These vectors are used in the cls layer and the reg layer. The image is resized to a fixed size before entering the network.
Through a series of convolution operations (taking the VGG16 network as an example), a feature map with a size of 50×38 is finally generated. Because each feature point corresponds to k anchors, a total of 50×38×k anchors are mapped onto the original image. Through the processing of the cls layer and the reg layer, each anchor obtains two scores indicating whether it contains a target as well as the parameters of the corresponding bounding box position. Based on these parameters, after a post-processing step, about 300 proposed regions are finally generated. The post-processing procedure is shown in Fig. 3. Faster R-CNN generates detection bounding boxes and scores for specific categories of targets. Adjacent detection bounding boxes often have correlated scores, which increases the number of false positives in the detection results. In order to avoid this situation, the Faster R-CNN algorithm applies the NMS algorithm to remove redundant detection frames. The working principle of the NMS algorithm is as follows. First, the algorithm generates a series of detection frames Bi and a series of confidence scores Si (i = 1, 2, ..., j, ...). It then selects the detection frame Bj with the highest confidence score and its confidence score Sj, and determines the IoU value between this detection frame and each of the other detection frames Bi (i ≠ j). Subsequently, the confidence score of each Bi (i ≠ j) is updated according to Equation (1): Si = Si if IoU(Bi, Bj) < Threshold, and Si = 0 if IoU(Bi, Bj) ≥ Threshold (1). If the confidence score of a frame Bi (i ≠ j) becomes 0, the detection frame is removed. The remaining detection frames except Bj are then considered and the above operation is repeated until all target detection frames have been processed. In the design of the Faster R-CNN algorithm, Threshold = 0.3. Soft-NMS algorithm Although the NMS algorithm can effectively reduce the false positives in the detection results, the method is a greedy algorithm: it forcibly removes the detection bounding boxes adjacent to the box with the highest confidence score. If a real target appears in the overlapped area and the overlap is too large, the target will be deleted by mistake, causing the target to be unrecognizable and reducing detection accuracy. The Soft-NMS algorithm came into being for the above reasons. The core idea of the Soft-NMS algorithm is to no longer forcibly delete all target detection frames with an IoU greater than the threshold, but to reduce the confidence scores of those detection frames. Compared with Equation (1), Soft-NMS smoothes the rescoring and proposes a penalty coefficient λ based on linear weighting or Gaussian weighting. The two penalty coefficients are determined by Equation (2) and Equation (3), respectively: λ = 1 if IoU(Bi, Bj) < Threshold and λ = 1 − IoU(Bi, Bj) if IoU(Bi, Bj) ≥ Threshold (2); λ = exp(−IoU(Bi, Bj)² / δ) (3). The corresponding confidence score is determined by Equation (4): Si = λ · Si (4). This Soft-NMS algorithm based on penalty coefficients effectively improves the detection of occluded targets by the Faster R-CNN algorithm, by reducing the confidence scores instead of directly deleting the target detection frames. In the equations, IoU(Bi, Bj) represents the intersection over union between the detection frame Bi to be processed and the detection frame Bj with the highest confidence in the same set of detection frames, Si is the confidence score of the detection frame Bi, Threshold = 0.3 and δ = 0.3. 3. Faster R-CNN optimization of the detection accuracy for occluded preceding vehicles Based on the above research, this paper proposes a further optimization of the Soft-NMS algorithm. First, the influence of the threshold on the penalty coefficient is discussed.
Then, two other types of penalty coefficient are introduced, based on the curve of the penalty intensity of the penalty coefficient as a function of IoU, so as to improve the detection effect of the Faster R-CNN algorithm on occluded preceding vehicles. The impact of the threshold on the original penalty coefficients The threshold, also called the critical value, refers to the lowest or highest value at which an effect is produced. Setting the threshold in the Soft-NMS algorithm is not easy: if the threshold is too small, targets will be missed, and if the threshold is too large, targets will be falsely detected. Therefore, by adjusting the threshold, the impact on preceding-vehicle detection accuracy of maintaining the penalty intensity of the penalty coefficient within the threshold is explored. Regarding the linear penalty coefficient, this section explores the detection results of the Faster R-CNN algorithm when the penalty coefficient value is kept at 1 within the threshold, for thresholds of 0.3, 0.2, 0.1 and 0, respectively. The test results are shown in TABLE I. It can be seen from TABLE I that, as the threshold decreases, the detection accuracy of the algorithm for occluded preceding vehicles also gradually decreases. The following conclusions can be drawn from these results: (1) A smaller threshold means that the penalty coefficient maintains the full penalty intensity over a smaller range. The decrease in detection accuracy therefore indicates that, as the threshold decreases, the weakened ability to maintain the penalty intensity affects the detection effect negatively. (2) When the threshold is 0, the detection accuracy of the algorithm for occluded preceding vehicles is nearly 1% lower than with a non-zero threshold. This means that maintaining a penalty intensity of 1 within a certain threshold has a positive effect on the detection performance. Because the original Gaussian penalty coefficient does not use a threshold, a threshold was introduced into the Gaussian penalty coefficient, with the penalty coefficient kept at 1 within the threshold range, to verify the above point. Regarding the Gaussian penalty coefficient, this paper likewise explores the detection results when the penalty coefficient value is kept at 1 for thresholds of 0.3, 0.2, 0.1 and 0, respectively. The test results are shown in TABLE II. Through the analysis of the experimental results in TABLE II, the following conclusions can be drawn: (1) Compared with the original Gaussian penalty coefficient, introducing a threshold within which the penalty intensity is maintained has a positive effect on the improvement of detection accuracy. (2) Similar to the linear penalty coefficient, reducing the threshold of the Gaussian penalty coefficient weakens the ability to maintain the penalty intensity, which has a certain negative impact on the detection effect. Through the analysis of the Gaussian penalty coefficient and the linear penalty coefficient, it can be found that keeping the penalty coefficient value at 1 within a certain threshold range has a positive impact on detection. This not only supports the view that the Gaussian penalty coefficient is not as effective as the linear penalty coefficient, but also provides ideas for the design of new penalty coefficients.
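To make the rescoring rules of Section 2 concrete, the sketch below implements classical NMS (Equation (1)) and the linear and Gaussian Soft-NMS penalties (Equations (2)-(4)), including the threshold handling discussed in this subsection. The box format, helper names and the score cut-off used to discard near-zero detections are our own choices, not part of the original algorithm description.

```python
import numpy as np

def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def soft_nms(boxes, scores, mode="gaussian", threshold=0.3, delta=0.3, score_min=0.001):
    """Return rescored detections; mode is 'hard' (classical NMS), 'linear' or 'gaussian'."""
    boxes, scores = list(boxes), list(scores)
    keep = []
    while boxes:
        j = int(np.argmax(scores))                       # box with the highest confidence
        best_box, best_score = boxes.pop(j), scores.pop(j)
        keep.append((best_box, best_score))
        for i, box in enumerate(boxes):
            ov = iou(best_box, box)
            if mode == "hard":                           # Equation (1): zero out overlapping boxes
                lam = 0.0 if ov >= threshold else 1.0
            elif mode == "linear":                       # Equation (2): linear penalty above the threshold
                lam = 1.0 - ov if ov >= threshold else 1.0
            else:                                        # Equation (3): Gaussian penalty, no threshold
                lam = float(np.exp(-(ov ** 2) / delta))
            scores[i] *= lam                             # Equation (4): S_i <- lambda * S_i
        boxes = [b for b, s in zip(boxes, scores) if s > score_min]
        scores = [s for s in scores if s > score_min]
    return keep

# Example: soft_nms(boxes, scores, mode="linear", threshold=0.3) corresponds to the
# linear-weighted variant with the penalty intensity kept at 1 below the 0.3 threshold.
```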
Analysis and optimization of the penalty coefficient Cui [13] proposed an optimized Soft-NMS algorithm, which multiplies the linear penalty coefficient and the Gaussian penalty coefficient by themselves many times. The new expression for calculating the penalty coefficient λ_Q is given in Equations (5) and (6). Under this repeated multiplication, the variation of the penalty intensity of the penalty coefficient with IoU is shown in Fig. 4 (Fig. 4: the relationship between penalty intensity, IoU and Q). In this optimized algorithm, multiplying the penalty coefficient Q times enables the algorithm to improve the detection accuracy for occluded preceding vehicles by 1%-2%. At the same time, it is found that, compared with the linear penalty coefficient, the Gaussian penalty coefficient has a weaker ability to maintain its own penalty intensity, which is the reason why the detection accuracy of the Gaussian penalty coefficient is lower than that of the linear penalty coefficient. (Figure panels: (a) optimized linear penalty coefficient; (b) optimized Gaussian penalty coefficient.) Based on the above research, this paper proposes an inverse proportional penalty coefficient and an exponential penalty coefficient, designed according to the curve shape of the optimized penalty coefficients. 1) Inverse proportional penalty coefficient It is known that multiplying the linear and Gaussian penalty coefficients multiple times achieves a better detection effect for occluded vehicles. From Fig. 4, it can be seen that the curves of the two penalty coefficients become gradually concave when multiplied multiple times, which is similar to the curve of the inverse proportional function y = k/x (x > 0). Therefore, the curve of the new penalty coefficient λ is designed as an inverse proportional function on the interval (Nt, 1). Similar to the linear penalty coefficient, the penalty intensity is maintained within a certain threshold range, with the penalty coefficient value kept at 1; that is, the penalty coefficient should have the same penalty intensity as the linear penalty coefficient when IoU(Bi, Bj) is between 0 and Nt. The design of the inverse proportional penalty coefficient λ is shown in Equation (7). The previous analysis showed that the detection effect of the optimized linear weighted penalty coefficient is better than that of the Gaussian weighted penalty coefficient. Therefore, the designed inverse proportional penalty coefficient λ is compared with the optimized linear weighted penalty coefficient. When the threshold Nt is 0.3, the function curve of the inverse proportional penalty coefficient is shown in Fig. 5(a), and the comparison with the optimized linear weighted penalty coefficient is shown in Fig. 5(b). The comparison shows that the penalty coefficient λ retains some characteristics of the optimized linear weighted penalty coefficient. 2) Exponential penalty coefficient Similar to the Gaussian penalty coefficient, the exponential function also has the characteristic that the larger the IoU value, the stronger the penalty intensity. The exponential penalty coefficient λ based on the exponential function is given in Equation (8). Fig. 6 (schematic diagram of the exponential penalty coefficient) shows (a) the exponential penalty coefficient for different values of a and (b) the exponential penalty coefficient with a threshold.
It can be seen from Fig. 6(a) that, compared with the optimized Gaussian penalty coefficient, the exponential penalty coefficient has a similar penalty curve, and the penalty intensity differs for different values of a. It is known that maintaining a certain penalty intensity within the threshold range has a positive effect on detection, so the exponential penalty coefficient with a threshold is given in Equation (9). In this case, the curve of the exponential penalty coefficient is shown in Fig. 6(b). According to the above two designs, the optimized confidence score is given by Equation (10). Experimental verification The designs of the inverse proportional penalty coefficient and the exponential penalty coefficient are verified experimentally. First, the evaluation indicators and training environment are introduced. Then, the effects of the two newly introduced penalty coefficients on the preceding-vehicle detection accuracy of Faster R-CNN are evaluated. Finally, the effect on detection of their ability to maintain the penalty intensity within the threshold range is evaluated. Evaluation indicators and environmental configuration The most commonly used model evaluation indicators are P (precision), R (recall) and mAP (mean average precision). P represents the model's ability to identify only relevant targets and is the percentage of predictions that are correct. R represents the model's ability to find all relevant targets and is the percentage of ground-truth targets that are correctly predicted. For binary classification problems, AP (average precision) represents the performance of the classifier, i.e., the area under the P-R curve, reflecting the balance between precision and recall. P and R are determined by Equation (11) and Equation (12): P = TP / (TP + FP) (11) and R = TP / (TP + FN) (12), where TP, FP and FN denote the numbers of true positives, false positives and false negatives, respectively. Analysis of experimental results The dataset used in this paper is the KITTI dataset. The models are trained on this dataset and verified by experiments. First, for the inverse proportional penalty coefficient, this paper selects the threshold by jointly tuning the parameters. While the threshold is varied, the other network structures are kept unchanged, and the final detected AP value is taken as the evaluation criterion. Because of the shape of the inverse proportional function curve, this paper tests the detection effect of the inverse proportional penalty coefficient when the threshold Nt is 0.3, 0.2 and 0.1, respectively. The test results are shown in TABLE 5. At the same time, the detection effect of using the inverse proportional penalty coefficient is further illustrated: Fig. 7(a) and (b) show detection examples without and with the inverse proportional penalty coefficient, respectively. From these results, the following conclusions can be drawn: (1) Compared with the original linear penalty coefficient (threshold = 0.3) and the original Gaussian penalty coefficient (threshold = 0) used by the Soft-NMS algorithm, the detection accuracy of the inverse proportional penalty coefficient is higher than that of both, regardless of the threshold used for the inverse proportional penalty coefficient. This proves that the designed inverse proportional penalty coefficient is more effective than the two original penalty coefficients and gives a better detection effect. (2) As the threshold decreases, the detection effect of the algorithm for occluded preceding vehicles also gradually weakens. This also verifies that, as the threshold decreases, the ability of the penalty coefficient to maintain the penalty intensity within the threshold range weakens, which negatively affects the detection effect.
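The exact Equations (7)-(9) are not reproduced in the text above, so the sketch below only illustrates plausible forms consistent with their verbal descriptions: both coefficients keep λ = 1 below the threshold Nt, the inverse proportional coefficient then decays as Nt/IoU, and the exponential coefficient decays as a^IoU. These expressions are our assumptions, not the authors' published formulas.

```python
# Assumed forms of the two new penalty coefficients (see the caveat above).
def inverse_proportional_penalty(iou_value, nt=0.3):
    """Assumed Equation (7): 1 below the threshold, then an inverse-proportional decay."""
    return 1.0 if iou_value < nt else nt / iou_value

def exponential_penalty(iou_value, a=0.05, nt=0.3, use_threshold=True):
    """Assumed Equations (8)/(9): a ** IoU, optionally kept at 1 below the threshold."""
    if use_threshold and iou_value < nt:
        return 1.0
    return a ** iou_value
```

Either function can be substituted for the linear/Gaussian branch in the soft_nms() sketch given earlier to obtain the corresponding rescoring behaviour.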
Experimental verification

The designs of the inverse proportional penalty coefficient and the exponential penalty coefficient are verified experimentally. First, the evaluation indicators and the training environment are introduced. Then, the effects of the two newly introduced penalty coefficients on the accuracy of preceding-vehicle detection with Faster R-CNN are evaluated. Finally, the effect of their ability to maintain the penalty intensity within the threshold range on detection performance is evaluated.

Evaluation indicators and environmental configuration

The most commonly used model evaluation indicators are P (precision), R (recall), and mAP (mean average precision). Precision reflects the model's ability to identify only relevant targets and is the percentage of predictions that are correct; recall reflects the model's ability to find all relevant targets and is the percentage of ground-truth targets that are correctly detected. For a two-class problem, AP (average precision) summarizes classifier performance as the area under the P-R curve, balancing precision against recall. P and R are computed by Equation (11) and Equation (12), respectively. An illustrative computation of these metrics is sketched below.
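Equations (11) and (12) are not reproduced above, but the standard definitions P = TP/(TP+FP) and R = TP/(TP+FN) are well established; the short sketch below computes them, together with AP as the area under the precision-recall curve obtained by sweeping the detection-score threshold. The input arrays are placeholders, not the paper's data.

```python
import numpy as np

def precision_recall(tp, fp, fn):
    # Standard definitions: P = TP / (TP + FP), R = TP / (TP + FN).
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

def average_precision(scores, is_true_positive, n_ground_truth):
    """AP as the area under the P-R curve, built by sweeping the score threshold."""
    order = np.argsort(-np.asarray(scores))
    tp_flags = np.asarray(is_true_positive, dtype=float)[order]
    tp_cum = np.cumsum(tp_flags)
    fp_cum = np.cumsum(1.0 - tp_flags)
    precision = tp_cum / (tp_cum + fp_cum)
    recall = tp_cum / max(n_ground_truth, 1)
    ap, prev_recall = 0.0, 0.0
    for p, r in zip(precision, recall):       # step-wise integration over recall
        ap += p * (r - prev_recall)
        prev_recall = r
    return ap
```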
Analysis of experimental results

The dataset used in this paper is the KITTI dataset; the models are trained on it and verified experimentally. First, for the inverse proportional penalty coefficient, the threshold is selected by jointly adjusting the parameters: the threshold is varied while all other network structures are kept unchanged, and the final detected AP value is taken as the evaluation criterion. Because of the shape of the inverse proportional curve, the detection effect of the inverse proportional penalty coefficient is tested with thresholds Nt of 0.3, 0.2 and 0.1. The test results are shown in TABLE 5. The detection effect of the inverse proportional penalty coefficient is also examined visually: Fig. 7(a) and (b) show the detection diagrams without and with the inverse proportional penalty coefficient, respectively.

(1) Compared with the original linear penalty coefficient (threshold = 0.3) and the original Gaussian penalty coefficient (threshold = 0) used by the Soft-NMS algorithm, the detection accuracy of the inverse proportional penalty coefficient is higher regardless of the threshold used. This shows that the designed inverse proportional penalty coefficient is more effective than the two original penalty coefficients and yields a better detection effect. (2) As the threshold decreases, the detection of the occluded preceding vehicle gradually weakens. This also verifies that as the ability of the penalty coefficient to maintain the penalty intensity within the threshold range weakens, its effect on detection becomes negative.

As for the exponential penalty coefficient, it is verified for a = 0.1, a = 0.05 and a = 0.01. First, the detection of the occluded preceding vehicle is tested when the exponential penalty coefficient is applied without considering the threshold, as defined in Equation (8). Then, the detection of the occluded preceding vehicle is tested when the exponential penalty coefficient is applied with the threshold (threshold value 0.3) taken into account, as defined in Equation (9); these results are shown in TABLE 7, and the corresponding detection diagram is shown in Fig. 8.

Tab. 7 Test results with the exponential penalty coefficient applied (threshold considered)
Fig. 8 Detection of occluded preceding vehicles with the exponential penalty coefficient (threshold considered)

From these experimental results, the following conclusions can be drawn: (1) When the threshold is not considered and the parameter a is 0.1 or 0.05, the detection effect of the exponential penalty coefficient is better than that of the linear and Gaussian penalty coefficients; although the effect is worse when a is 0.01, it is still better than the original NMS algorithm, which demonstrates the effectiveness of the exponential penalty coefficient. (2) When the threshold is 0.3, applying the exponential penalty coefficient outperforms the original linear and Gaussian penalty coefficients for all tested values of a, with the best effect at a = 0.01. This shows that a new penalty coefficient designed to imitate the curve trends of the optimized linear and Gaussian penalty coefficients has a positive effect on detection. (3) When the threshold is not considered, the detection effect worsens as a decreases; when the threshold is considered, the detection effect improves as a decreases. Comparison with Fig. 6 shows that, without the threshold, the exponential penalty coefficient's ability to maintain its own penalty intensity deteriorates as a decreases, whereas with the threshold the penalty intensity is maintained within the threshold range regardless of a. This explains the change above and again verifies the positive effect of maintaining the penalty intensity within the threshold.

To sum up, under certain conditions the inverse proportional and exponential penalty coefficients proposed in this paper give better detection results than the linear and Gaussian penalty coefficients.

Conclusion

In order to improve the detection accuracy of the Faster R-CNN algorithm for the preceding vehicle, this paper optimizes the detection of occluded vehicles. For the Soft-NMS algorithm, two new penalty coefficients are proposed: the inverse proportional penalty coefficient and the exponential penalty coefficient. Experiments verify that the two new penalty coefficients achieve better detection than the linear and Gaussian penalty coefficients. Moreover, when the threshold factor is additionally taken into account, detection performance improves further under certain conditions, which verifies the effectiveness of the algorithm.
Cardiotonic effects of Terminalia arjuna extracts on guinea pig heart in vitro

We investigated the effects of fruit and bark extracts of Terminalia arjuna on the rate and contraction of guinea pig heart in vitro. Immediately after killing, the heart was connected to a perfusion apparatus containing oxygenated double-dextrose McWins solution with fresh arjun fruit and bark experimental extracts. The apex of the heart was then connected to a recording drum through a metallic hook, and the rate and height (force) of contractions were recorded on a smoked drum. The heights of contraction for fruit extract and bark extract were significantly higher than that for normal saline (p<0.001), while they were significantly lower than that for adrenaline (p<0.01). Addition of verapamil caused a significant blockade of the heights of contraction produced by extracts of T. arjuna (p<0.01), while heart rate was not affected. The cardiotonic effect of T. arjuna was probably mediated through the high concentration of Ca++ present in the plant.

Introduction

Terminalia arjuna (neer maruthu in Tamil and Malayalam, commonly called arjuna or the arjun tree in English) is about 20-25 meters tall, usually has a buttressed trunk, and forms a wide canopy at the crown from which branches drop downwards. The arjun is usually found growing on river banks or near dry river beds in West Bengal, South and Central India, and in Bangladesh (Ali et al., 1966; Ali et al., 1979; Basu and Kirtikar, 1987; Biswas et al., 2011; Dwivedi, 2007; Ramchandran, 1992). Following studies in mice showing its leaves to have analgesic and anti-inflammatory properties, T. arjuna has found its way into traditional/alternative medicine for the treatment of a wide range of human ailments. The arjuna was introduced into Ayurveda as a treatment for heart disease, wounds, hemorrhages, fractures, ulcers, tuberculosis, cough, chronic fever (particularly in tuberculosis), hemoptysis, urinary tract infections, renal stones, acne, bleeding piles, and diarrhea (Ali et al., 1966; Dwivedi, 2007; Chopra et al., 1958; Dastur, 1982). Of these uses, our research interest related to the reported anti-ischemic, cardioprotective and cardiotonic effects of T. arjuna on the cardiovascular system (Ali et al., 1979; Ramchandran, 1992). In an attempt to elucidate the cardiotonic properties, we investigated the effects of fruit as well as bark extracts of T. arjuna on the rate and contraction of guinea pig heart in vitro.

Animal

The study was conducted on guinea pigs of both sexes weighing 500-600 g. Immediately after killing by a blow on the head, the heart along with the great vessels was taken out and kept in a petri dish containing the perfusion solution.

Preparation of T. arjuna samples by extraction

Fresh arjun bark and fruit were collected from trees (approximately 25 years old) within the premises of BCSIR at Dhaka. Various procedures were followed to prepare samples for pharmacological studies, e.g., i) crude water extraction of fruit; ii) crude powdered bark; iii) water extraction of the fresh bark; and iv) extraction of the bark and fruit with various organic solvents such as petroleum ether (bp 40-80°C), benzene, rectified spirit, etc.

Crude powder preparation of bark and fruit

Fresh barks and fruits were collected, chopped into small pieces and air dried. They were powdered in an iron mortar and pestle and sieved through different types of standard sieves. The well-dried powdered products were made ready for pharmacological experiments.
Water extraction of the bark and fruit

Chips of freshly collected bark and fruit were crushed and extracted with water at low temperature (50-60°C). The total extractive was concentrated on a water bath and finally dried in desiccators containing CaCl2/silica gel. It was then powdered and sieved.

Extraction of the bark and fruit with organic solvents

Chips of bark and fruit (4 kg each, separately) were extracted exhaustively with rectified spirit. The total extractive was concentrated to about 1.5 L and kept standing in a conical flask. After three days, heavy precipitates settled at the bottom; these were filtered and dried over CaCl2 in a desiccator. After separation of the solid from the solution, the extracts were concentrated, dried and powdered. They were soluble in water. All chemicals were from BDH Chemicals, London, and Sigma Chemicals, London.

Grouping of animals and treatments

The animals were divided into groups receiving the following treatments:
Group I (four guinea pigs): the heart received normal saline (0.2 mL/mL of bath fluid); this served as the control group.
Group II (four guinea pigs): the heart received crude fruit extract of T. arjuna (0.2 mL/mL of bath fluid).
Group III (three guinea pigs): the heart received fruit extract of T. arjuna (0.8 mL/mL of bath fluid).
Group IV (three guinea pigs): the heart received fruit extract of T. arjuna (1.2 mL/mL of bath fluid).
Group V (three guinea pigs): the heart received bark extract of T. arjuna (0.4 mL increasing to 0.8 mL/mL of bath fluid).
Group VI (five guinea pigs): the heart received calcium ions.
Group VII (five guinea pigs): the heart received verapamil (10 mg/mL of bath fluid) and alcoholic extract of T. arjuna.
Group VIII (two guinea pigs): the heart received calcium ions and verapamil.
These experiments were carried out in the Department of Pharmacology of the then IPGMR, Shahbag, Dhaka 1000, in collaboration with BCSIR, Dhaka, and the Department of Medicine, IPGMR, Dhaka, during the period 1983-85.

Results

The effects of normal saline and the fruit extract of T. arjuna on the isolated guinea pig heart are stated in Table I. The mean height of contraction produced by crude fruit extract was significantly higher than for the normal saline control (p<0.01). The mean ± SE heart rate was 110 ± 5.2/min and 112.0 ± 6.4/min for normal saline and crude fruit extract respectively, which were not significantly different (p>0.2). The comparative effects of normal saline and bark extract of T. arjuna on guinea pig heart contraction in vitro are presented in Table II. Compared with the fruit extract, the significant effect of the bark extract on contraction was more pronounced (p<0.01 and p<0.01). However, the mean heart rate was 114.0 ± 7.3/min, which was not significantly different from control (p>0.2). The comparative effects of normal saline, adrenaline, T. arjuna (fruit extract) and T. arjuna (bark extract) are shown in Table III. The height of contraction (mean ± SE) was highest for adrenaline (p<0.001), and the heart rate was 115.0 ± 8.6/min, which was significantly higher (p<0.01). The heights of contraction for T. arjuna (fruit extract) and T. arjuna (bark extract) were significantly lower than for adrenaline (p<0.05), while they were significantly higher than that for normal saline (p<0.01).
The comparative effects of normal saline, CaCl2 and verapamil on the guinea pig heart in vitro are shown in Table IV and Figure 1. The mean ± SE height of contraction for CaCl2 alone was highest (p<0.001), and the heart rate of 134 ± 6.7/min was also significantly higher compared with control (p<0.01). Addition of verapamil (1 mg/mL of bath fluid) produced a significant (p<0.001) blockade of the height of contraction produced by calcium ions. The effects of T. arjuna and verapamil on the isolated guinea pig heart are shown in Table V and Figure 1.

Discussion

In the present study, a cardiotonic property was observed which was more pronounced with the water-soluble part of the alcoholic extract of T. arjuna. The force of contraction of the cardiac muscle was increased without any appreciable change in the rate. An earlier study of the bark of T. arjuna on the frog heart found an increase in the amplitude of contraction without an increase in the rate, which closely agrees with our observations. In the present experiment it was observed that the increased force of contraction was blocked by verapamil, a calcium antagonist. This suggests that the cardiotonic effect of T. arjuna is likely to be mediated through the release of intracellular Ca++ ions. It has been found that arjun bark contains high calcium along with other organic acids (arjunic acid, etc.) and glycosidal substances (arjun extract) (Basu and Kirtikar, 1987). The bark is prescribed in the traditional system of medicine for heart diseases. The cardiotonic property observed in the present study, as well as by others, may therefore be due to this high calcium content and the presence of glucosidal and other organic substances in T. arjuna. Koman in 1920, on the other hand, failed to demonstrate the cardiotonic effect of arjuna when a decoction was used in the treatment of valvular heart disease. This may be due to a completely different method of preparation from T. arjuna and also to the experimental situation. Caius et al., 1930 observed a cardiotonic property of T. arjuna along with diuretic properties similar to cardiac glycosides (Biswas et al., 2011). The cardiotonic properties observed by these investigations are also compatible with our observations. This cardiotonic property of T. arjuna may be useful in the treatment of congestive heart failure (Basu and Kirtikar, 1987; Biswas et al., 2011). In addition, compounds present in T. arjuna have antioxidant and hypocholesterolemic effects. In a randomized trial, it was shown that T. arjuna bark powder has a significant antioxidant action comparable with vitamin E and a significant cholesterol-lowering effect (Gupta et al., 2001). Another study clearly demonstrated that bark extract of T. arjuna decreases platelet activation and may therefore possess antithrombotic properties (Malik et al., 2009). Many other studies have demonstrated a cardioprotective effect in myocardial necrosis, protection against cancer, benefits in angina pectoris, reversal of impaired endothelial function in chronic smokers, etc. (Barani et al., 2004; Dwivedi et al., 2005; Gauthamana et al., 2005; Karthikeyan et al., 2003; Sivalokanathan et al., 2006). No significant side effects of T. arjuna have been reported in medical journals (Sahelian, 2011).

1. Addition of verapamil (1 mg/mL of bath fluid) caused a significant (p<0.01) blockade of the height of contraction produced by bark extract of T. arjuna.

Figure 1: Tracing showing contractions of guinea pig heart after exposure to calcium, verapamil and different concentrations of T.
arjuna extracts (A1: Water soluble part from the alcoholic extract of powdered bark dried earlier; A2: Water soluble part from the alcoholic extract of chopped bark without drying)
On the Stueckelberg Like Generalization of General Relativity

We first consider the Klein-Gordon equation in the 6-dimensional space $M_{2,4}$ with signature $+ - - - - +$ and show how it reduces to the Stueckelberg equation in the 4-dimensional spacetime $M_{1,3}$. A field that satisfies the Stueckelberg equation depends not only on the four spacetime coordinates $x^\mu$, but also on an extra parameter $\tau$, the so-called evolution time. In our setup, $\tau$ comes from the extra two dimensions. We point out that the space $M_{2,4}$ can be identified with a subspace of the 16-dimensional Clifford space, a manifold whose tangent space at any point is the Clifford algebra Cl(1,3). Clifford space is the space of oriented $r$-volumes, $r=0,1,2,3$, associated with the extended objects living in $M_{1,3}$. We consider the Einstein equations that describe a generic curved space $M_{2,4}$. The metric tensor depends on six coordinates. In the presence of an isometry given by a suitable Killing vector field, the metric tensor depends on five coordinates only, which include $\tau$. Following the formalism of canonical classical and quantum gravity, we perform the 4 + 1 decomposition of the 5-dimensional general relativity and arrive, after quantization, at a generalized Wheeler-DeWitt equation for a wave functional that depends on the 4-metric of spacetime, the matter coordinates, and $\tau$. Such a generalized theory resolves some well known problems of quantum gravity, including "the problem of time".

The problem of time in quantum gravity

Despite being a very successful theory at the classical level, general relativity has turned out to be problematic when one attempts to quantize it consistently. Amongst other difficulties there is the so-called 'problem of time' (for a recent review see [1]). This can be seen if we perform the canonical quantization. If we start from the Einstein-Hilbert action and perform the 1 + 3 ADM decomposition of spacetime, M_{1,3} = R × Σ, then the action of general relativity can be cast into the 'phase space' form [2,3]

I[q_ij, p^ij, N, N^i] = ∫ dt d³x ( p^ij q̇_ij − N H − N^i H_i ).   (1)

Here q_ij, i, j = 1, 2, 3, is a 3-metric on a space hypersurface Σ, and p^ij is the corresponding canonically conjugate momentum, whilst N and N^i are, respectively, the lapse and shift functions, which play the role of Lagrange multipliers and lead to the constraints

H(q_ij, p^ij) ≈ 0  and  H_i(q_ij, p^ij) ≈ 0,   (2)

which are associated with the diffeomorphism invariance of the original Einstein-Hilbert action. The Hamiltonian is a linear combination of constraints and the evolution is pure gauge. There is no physical evolution time in such a theory. Upon quantization, the above constraints become wave functional equations. For instance, the first constraint becomes the Wheeler-DeWitt equation

H(q_ij, −i δ/δq_ij) Ψ[q_ij] = 0,   (3)

whilst the second set of constraints becomes

H_i(q_ij, −i δ/δq_ij) Ψ[q_ij] = 0.   (4)

We see that in quantum theory there is no spacetime, but only the space Σ, because the wave function(al) depends only on the 3-geometry, represented by q_ij. Thus, in addition to the absence of an external time, we also have the problem of the disappearance of spacetime.

A possible remedy: the Stueckelberg theory

In the Stueckelberg theory [4], besides the four spacetime coordinates x^µ, there is an extra parameter τ. The coordinate x^0 ≡ t is not the 'evolution parameter'; the evolution parameter is τ. In the quantum theory of a 'point particle', the wave function is ψ(τ, x^µ), (5) and it is normalized according to ∫ d⁴x ψ*ψ = 1.
We will show how τ arises from two extra dimensions, one space-like and one time-like. Then we will show that these 'extra dimensions' need not be true extra dimensions, i.e., dimensions in addition to the four spacetime dimensions, but can be associated with the space of matter configurations. A particular case of such a configuration space is Clifford space, a manifold whose tangent space at any point is the Clifford algebra Cl(1,3). Clifford space is the space of oriented r-volumes, r = 0, 1, 2, 3, associated with the extended objects living in M_{1,3}. In this paper we focus our attention on a 6-dimensional subspace, M_{2,4}, of Clifford space. We consider the Einstein equations that describe a generic curved space M_{2,4}. Then we perform the ADM-like 1+4 decomposition of a 5-dimensional subspace M_{2,3} of M_{2,4}, our argument being that the additional dimension can be neglected in the presence of an isometry given by a suitable Killing vector field, because then the metric tensor depends on five coordinates only. We will show how the problems of time and of spacetime disappear in the quantized version of such a generalized theory. The latter problem does not occur, because the wave functional now depends on the spacetime 4-geometry, represented by the metric g_µν. The problem of time we resolve by adding a suitable matter part to the action.

Klein-Gordon equation in 6D

Let us consider the action (6) for the Klein-Gordon field in 6 dimensions, where φ = φ(x^M), M = 0, 1, 2, 3, 5, 6. Let us split the index M into a 4-dimensional part and the part due to the extra two dimensions according to M = (µ, M̄), µ = 0, 1, 2, 3, M̄ = 5, 6, and let us assume that the metric has the form (7). The latter metric can be transformed into the pseudo-Euclidean metric with signature (+ − − − − +). By inserting the metric (7) into the action (6), we obtain the action (9); with the ansatz (10), where Λ is a constant, this becomes the well-known Stueckelberg action (11). We have omitted the integration over λ, because it gives a constant which can be absorbed into the expression. Considering the corresponding equations of motion, we find that from the massless Klein-Gordon equation in 6D,

∂_M ∂^M φ = 0,   (12)

we obtain the Stueckelberg equation (13). The constant Λ comes from the 6th dimension x^6 ≡ λ; more precisely, Λ is an eigenvalue of the canonical momentum conjugate to λ. By the ansatz (10), the coordinate λ is eliminated from the action, whilst the eigenvalue Λ remains. To sum up, if the signature of the two extra dimensions is (−+), and if we work in light cone coordinates, then we obtain the Stueckelberg equation for a wave function that depends on τ and x^µ. Notice that, because τ is a 'light cone' coordinate, the equation contains the first derivative of ψ with respect to τ. A sketch of this reduction, with the conventions spelled out, is given below.
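Because the intermediate formulas (7)-(11) and the Stueckelberg equation (13) are not reproduced above, the following LaTeX fragment sketches the reduction under one common set of conventions. The specific light-cone choice for the extra block, and the resulting signs and the factor 1/(2Λ), are assumptions made for illustration and need not coincide with the paper's exact definitions.

```latex
% A minimal sketch of the 6D -> 4D reduction, under assumed conventions:
% the extra-dimensional block is taken in light-cone form, ds^2 \supset 2\,d\tau\,d\lambda,
% so the 6D d'Alembertian splits as \partial_M\partial^M = \partial_\mu\partial^\mu + 2\,\partial_\tau\partial_\lambda.
\begin{align}
  \partial_M \partial^M \phi &= \partial_\mu \partial^\mu \phi
      + 2\,\partial_\tau \partial_\lambda \phi = 0, \\
  \phi(x^\mu,\tau,\lambda) &= e^{\,i\Lambda\lambda}\,\psi(\tau,x^\mu)
      \quad\text{(the analogue of the ansatz (10))}, \\
  i\,\partial_\tau \psi &= -\frac{1}{2\Lambda}\,\partial_\mu\partial^\mu\psi
      \quad\text{(a Stueckelberg-type equation, first order in }\tau\text{)}.
\end{align}
```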
More formal considerations: Point particle in 6D and its quantization

Let us consider a classical action for a point particle in 6-dimensional space, with M = 0, 1, 2, 3, 5, 6. Here σ is a parameter denoting a point on the worldline, and Ẋ^µ = dX^µ/dσ. An equivalent action (15) is a functional of the coordinates X^M, the canonically conjugate momenta P_M, and a Lagrange multiplier α. Varying the latter action with respect to P_M, we obtain the relation between velocities and momenta, Ẋ^M = αP^M. If we split the coordinates into their 4-dimensional and extra-dimensional parts and express the four-momenta in terms of the velocities, P^µ = Ẋ^µ/α, then the action (15) becomes the action (17). The second term in the latter action can be omitted, because by partial integration it can be transformed so that, upon using the equations of motion Ṗ_M̄ = 0, only a total derivative remains. The third term in eq. (17) can be rewritten in terms of the 4D mass, m. Namely, by varying (15) we obtain the mass shell constraint (19) in 6D, which can be decomposed into its 4D and extra-dimensional parts. From M² = P^M P_M = P^µ P_µ + P^M̄ P_M̄ we obtain eq. (21), where m² ≡ P^µ P_µ. If the 6D mass M is equal to zero, then the 4D mass is due to the 5th and 6th components of the momentum only. So, from eq. (17), using (18) and (21), we obtain the well-known Howe-Tucker action for a massive particle in 4-dimensional spacetime.

Upon quantization, the classical constraint (19) becomes the Klein-Gordon equation (24), where P̂_M = −i ∂/∂X^M are the momentum operators. We use units in which ħ = c = 1 and write ∂_M ≡ ∂/∂X^M. We can decompose eq. (24) into a 4D and a 2D part, which gives eq. (26). By the corresponding ansatz, and denoting x^5 ≡ τ, P_6 ≡ Λ, eq. (26) gives an equation for ψ(τ, x^µ); if, in particular, the 6D mass M is zero, we recover the usual Stueckelberg equation (13). We have thus seen that the Stueckelberg equation, in which the wave function depends not only on the four spacetime coordinates x^µ but also on an extra (evolution) parameter τ, is embedded in the 6D theory with one time-like and one space-like extra dimension. At this point it is interesting to observe that an extra time-like and an extra space-like dimension are also necessary in the "two time" physics [5], based on the extended phase space action that is invariant under local Sp(2) transformations. Our action (15), with M = 0, is a particular case of the more general actions considered in refs. [5]. A question arises as to the physical meaning of the extra two dimensions. This is discussed in the next section.

The space M_{2,4} as a subspace of Clifford space

Clifford space, C, is the space of oriented r-volumes, r = 0, 1, 2, 3, associated with extended objects living in spacetime M_{1,3}. 'Extra dimensions' are due to the fundamentally extended nature of physical objects. The concept of Clifford space has been discussed in refs. [6]-[15]. Here we will exploit the fact that the space M_{2,4}, used in the previous section, can be identified with a subspace of C.

Clifford space: a quenched configuration space of extended objects-branes

Strings and branes have infinitely many degrees of freedom, but as a first approximation we can consider just the center of mass, X^µ, µ = 0, 1, 2, 3. The next approximation consists in considering the holographic coordinates, X^µν, of the oriented area enclosed by the string. We may go even further and ask about the eventual thickness of the object: if the string has finite thickness, i.e., if it is actually not a string but a 2-brane, then there exist the corresponding volume degrees of freedom, X^µνρ. In general, for an extended object in M_{1,3}, we have 16 coordinates X^{µ1µ2...µr}; they are the projections of r-dimensional volumes (areas) onto the coordinate planes. Oriented r-volumes can be elegantly described by Clifford algebra [14,15]. Although branes have infinitely many degrees of freedom, we can sample them by a finite set of coordinates X^M that denote position in a 16D manifold, called Clifford space C, whose tangent space at any point X^M is the Clifford algebra Cl(1,3).
Instead of the usual relativity formulated in spacetime, in which the line element is the ordinary quadratic form in the differentials dx^µ, let us consider the theory in which the interval is extended to Clifford space. The coordinates of Clifford space can be used to model extended objects, whatever they are. These coordinates, the so-called 'polyvector coordinates', are a generalization of the concept of center of mass [14]. Instead of describing extended objects in "full detail", we can describe them in terms of the center of mass, area and volume coordinates. Therefore, Clifford space is a quenched configuration space for extended objects [16]. In particular, the extended objects can be fundamental branes. With the definition (34) of the metric, the signature of C is (8,8). Therefore, M_{2,4} is a subspace of C.

Dynamics

The action (35) generalizes the action for a point particle of ordinary special relativity; here σ is an arbitrary continuous parameter. From this action we obtain the equations of motion

Ẍ^M = 0,   (36)

where η_MN is the analogue of the Minkowski metric, with signature (8,8). Since the X^M are interpreted as r-volume coordinates, the equations of motion (36) imply that the volume (in particular the area) changes linearly with σ. If the X^M sample a brane, then the above dynamics can hold only for a tensionless brane. For a brane with tension one has to introduce curved Clifford space and generalize eqs. (35),(36) to an arbitrary metric with non-vanishing curvature [9,10,11].

A worldline X^M(σ) in C represents the evolution of a 'thick particle' in spacetime M_{1,3}. In C we have a line, a worldline X^M(σ), whilst in spacetime M_{1,3} we have a thick line whose centroid line is X^µ(σ). It describes a thick particle, i.e., an extended object, in spacetime. The thick particle can be an aggregate of p-branes for various p = 0, 1, 2, ..., but such an interpretation is not obligatory: a thick particle may be a conglomerate of whatever extended objects can be sampled by the 'polyvector' coordinates X^M ≡ X^{µ1µ2...µr}. If we vary the point-like source action (37) with respect to X^M(σ), we obtain the geodesic equation, and if we vary it with respect to G_MN(x^M), we obtain the Einstein equations (39).

At this point let us mention that, according to a general consensus, the Einstein equations with a point-like source have no solution, because the vacuum solution around a source is the black hole solution with a horizon. A penetrating discussion of this issue can, however, be found in ref. [17]. The fact that the point-like source action (37) is problematic could mean that the fundamental objects are not point particles but branes: instead of a worldline X^M(σ), one should take a brane X^M(σ^a) as a source. From the point of view of the 16D Clifford space C, or its 6D subspace M_{2,4}, the latter object is infinitesimally thin in the directions transverse to the brane, but from the point of view of the 4D spacetime M_{1,3} ⊂ M_{2,4} ⊂ C we have a thick brane. Leaving such intricacies aside, we can nevertheless use eqs. (37),(39) as an approximation to a realistic physical situation in which, instead of the δ-distribution, we have a distribution due to an extended source.

The 6D Ricci scalar can be decomposed into a lower-dimensional Ricci scalar plus extrinsic curvature terms, where the subscript 5,6 indicates that the extrinsic curvature is due to the presence of the 5th and 6th dimensions. Instead of performing such an ADM-like 2+4 decomposition, however, we will follow an easier procedure.
We will consider a 1 + 5 decomposition, in which case we have

R^(6) = R^(5) + extrinsic curvature terms due to the 6th dimension.   (41)

If there exist suitable isometries in the 6D space M_{2,4}, and if we choose a suitable 5D subspace M_{2,3}, then the extrinsic curvature terms in eq. (41) can be neglected. The 5D Ricci scalar, in turn, can be decomposed in an analogous way. In particular, let us consider the ADM-like 1+4 decomposition M_{2,3} = R × M_{1,3}, where M_{1,3} is spacetime. Then the 5D metric can be decomposed as in eq. (43), where the indices M, N now assume five values only and N = 1/√G_55; the inverse metric is given in eq. (44). Of course, the meaning of N as an index is different from its meaning as the quantity N = 1/√G_55. The extrinsic curvature is expressed in terms of D_ν, the 5D covariant derivative, D_ν, the 4D covariant derivative, and n^M, the normal to M_{1,3}. We see that the 4D metric g_µν depends not only on the four spacetime coordinates x^µ, but also on the extra parameter τ.

Introducing the canonical momenta p^µν, where g ≡ det g_µν and K ≡ g^µν K_µν, we can write the 5D action in the 'phase space' form (47). The terms with extrinsic curvature in eq. (48) can be expressed in terms of p^µν, where D = g^µν g_µν = 4 and p ≡ g_µν p^µν = √−g (D − 1)K. Here p^µν are the canonical momenta conjugate to the 4D metric g_µν, whilst N and N^µ are Lagrange multipliers for the constraints. Upon quantization, g_µν and p^µν become operators that can be represented as ĝ_µν Ψ = g_µν Ψ and p̂^µν Ψ = −i δΨ/δg_µν. The 'Hamiltonian' constraint, H ≈ 0, then becomes the Wheeler-DeWitt equation (54).

Now the wave function(al) depends on the 4-geometry, represented by a spacetime metric g_µν(x^µ). In this theory we have no problem of spacetime. We also have no problem of time, if by 'time' we understand the coordinate time t ≡ x^0. However, the evolution parameter τ has disappeared from the quantized theory: there is no τ in the wave functional equation (54). Now we have the problem of τ. One possibility is to take the position that this is not a problem: it is important that we do not have the problem of t ≡ x^0, whereas a missing τ is not a problem at all. Another possibility is to bring τ into the game by considering matter degrees of freedom. In our approach the latter degrees of freedom are described by the coordinates of Clifford space, one of them being interpreted as τ.

To describe matter configurations, we have to consider also the matter part of the action. As a model we consider the action (37) in which R^(6) is replaced with R^(5), and d⁶x with d⁵x, the indices being now M, N = 0, 1, 2, 3, 5. The gravitational part is then replaced by the equivalent phase space action (47), and the matter part of the action is likewise written in the phase space form (55). Splitting the metric according to (43), we obtain eq. (56). To cast the matter part into a form comparable to the gravitational part of the action, we insert the integration over δ⁵(x − X(σ)) d⁵x, which equals unity; in both parts of the action, I_m and I_G, there now stands an integration over d⁵x. Recall that we identified x^5 ≡ τ. Varying the total action with respect to α, N and N^µ, we obtain the constraints (58)-(60), where H and H_µ are given in eqs. (48),(49), and κ ≡ 16πG. We can write H compactly as H = G_{µναβ} p^µν p^αβ + √−g R^(4), with a suitable metric G_{µναβ} on the space of 4-metrics. In the quantized theory, the constraints (58)-(60) become operator equations acting on a state vector. The constraint (58) can be put straightforwardly into its quantum version by replacing P_µ → P̂_µ = −i∂_µ, P_5 → P̂_5 = −i∂_5.
So we obtain the corresponding quantum constraint equation. The constraints (59),(60), however, are not suitable for a direct translation into their quantum equivalents because of the δ-distribution. Usually, for a quantum description of gravity in the presence of matter, one does not take the matter action in the form (56); instead, one takes for I_m an action for, e.g., a scalar or spinor field, and then attempts to quantize the total action (57) following the established procedure of quantum field theory. Here I would like to point out that one can nevertheless start from the action (56) and use all the constraints (58)-(60).

Let us consider the Fourier transform of the constraint (59), the zero mode being given by the corresponding integral over the 5D manifold. Writing d⁵x = d⁴x dx⁵ and introducing H = ∫ d⁴x H, we arrive at eq. (67). Here we have replaced the coordinate x⁵, denoting a point in the 5D manifold, with the coordinate X⁵, denoting a point on the worldline. Using the equation of motion obtained by varying the action (55) with respect to P_M, where Ẋ^M ≡ dX^M/dσ, we find that P^5 = Ẋ^5/α. Using the latter expression in eq. (67), we obtain a further relation; similarly, from the constraint (60) we obtain another, where H_µ = ∫ d⁴x H_µ. Let us now use the relations P_M = G_MN P^N and P^M = G^MN P_N with the metrics (43),(44), and rewrite eqs. (69),(70) in a form involving the covariant components of the momenta, P_µ and P_5. The above result is nothing but a manifestation of the fact that the integration of a stress-energy tensor over a hypersurface gives momentum; here the momentum is P_M = (P_µ, P_5). Using (49), eq. (72) can be rewritten as the relation (73), where B is the boundary of a region Ω in the 4-space and dΣ_ν is an element of the boundary surface.

It is now straightforward to consider the quantum versions of the constraints (74),(72) together with the constraint (75); we then have eqs. (76)-(78). The latter equations impose operator constraints on a quantum state represented by Ψ[τ, X^µ, g_µν(x^µ)], which depends on the particle's coordinates X^µ, the fifth coordinate X⁵ ≡ τ, and the spacetime metric g_µν(x^µ). In other words, Ψ is a function of τ and X^µ, and a functional of g_µν(x^µ). Eq. (77) is just like the Schrödinger equation, with τ as the evolution parameter. Therefore, the "problem of τ" does not exist in this quantum model of a point particle coupled to a gravitational field. Had we performed a split from six to four dimensions (and not from five to four, as in this section), then in eq. (76), instead of ∂²_τ, we would have ∂_λ ∂_τ ∼ Λ∂_τ (see sec. 2), so that eq. (76) would become the Stueckelberg equation. The system (76)-(78) describes at once a Klein-Gordon wave function for a relativistic particle and the wave functional for a gravitational field. It is only an incomplete description of the physical system; a complete description would require taking into account the infinite set of constraints due to all Fourier modes of the constraints (58)-(60).

Discussion and conclusion

We have shown how the Stueckelberg equation for a relativistic point particle arises from a 6-dimensional space, M_{2,4}, with signature (2,4), that is (+ − − − − +). Two extra dimensions, one time-like and one space-like, are necessary, because then the equation contains the first derivative of the wave function with respect to a Lorentz (SO(1,3)) invariant parameter τ, which is identified with the fifth coordinate X⁵. An argument in favor of such a 6D space comes from the work on two-time (2T) physics by Bars et al. [5].
The phase space action (16) for a point particle is a particular case of a more general action that is invariant under local Sp(2) transformations between coordinates and momenta. In such a theory there are three Lagrange multipliers associated with three constraints, which cannot be satisfied in the 4D spacetime M_{1,3}; they can be satisfied in the 6D space M_{2,4}. Since the theory by Bars et al. [5] rests on very strong foundations, we can conclude that such a 6D space is a reasonable substitute for 4D spacetime. It makes it possible to formulate 2T physics on the one hand, and the Stueckelberg theory on the other. A possible deeper relationship between the two theories has yet to be explored. There exists another direction of research, based on the concept of configuration space, i.e., the space of possible matter configurations. An example of such a space is the 16D space of oriented r-volumes, associated with extended objects, e.g., branes. We call it Clifford space, C, because it is a manifold whose tangent space at any point is the Clifford algebra Cl(1,3). If we define the metric according to eq. (34), then the signature of C is (8,8). A subspace of C is M_{2,4}. Therefore, if we adopt the concept of Clifford space, C, we do not need to postulate extra dimensions of spacetime in order to have a 6D-space formulation of the Stueckelberg theory and of 2T physics. Four dimensions of C can be identified with the four dimensions of spacetime, whilst the remaining 12 dimensions of C are associated with the intrinsic configurations of matter living in the 4-dimensional spacetime. We have considered general relativity in Clifford space, more precisely in the 6D subspace with signature (2,4). The action contains the Einstein-Hilbert term, which is a functional of the metric only, and a matter term, which is a functional of matter degrees of freedom coupled to the metric. As a model we have considered a point-like source. We have performed the ADM decomposition of a 5D subspace into spacetime M_{1,3} and a part due to the fifth dimension, x^5 ≡ τ. The action gives the mass shell constraint in 5 dimensions, and constraints that generalize the Hamiltonian and momentum constraints of canonical gravity, with extra terms due to the presence of the point particle source. After quantization those constraints become operator constraints acting on a state, which can be represented as a functional of the spacetime metric g_µν, µ, ν = 0, 1, 2, 3, a function of the particle coordinates X^µ, and the fifth coordinate τ, which has the role of the Stueckelberg evolution parameter. In the Stueckelberg theory the 'true' time is the Lorentz (SO(1,3)) invariant evolution parameter τ, and not the coordinate x^0 ≡ t. Since this parameter occurs in the wave function(al) for the gravitational field, we conclude that there is no 'problem of time' in this theory.
The changing modes of human immunodeficiency virus transmission and spatial variations among women in a minority prefecture in southwest China

Supplemental Digital Content is available in the text.

Introduction

Liangshan Yi Autonomous Prefecture (hereafter Liangshan), located on the border of Sichuan and Yunnan provinces in Southwest China, is one of the most poverty-stricken regions in China. [1] One of the major drug trafficking routes from the "Golden Triangle" to China passes through this Prefecture. [2] As a result, the rate of human immunodeficiency virus (HIV) infection in the Prefecture has increased dramatically since the first HIV case was reported in 1995. [1] By September 2017, the Prefecture exhibited the highest HIV prevalence rate in Sichuan province. [3] It is widely accepted that this high prevalence of HIV infection is due primarily to intravenous drug use (IDU). [4,5] However, in recent years, the number of new HIV infections among intravenous drug users has steadily declined. [3] According to the Chinese HIV sentinel surveillance system (HSS), there was an uptrend in new HIV infections among women from 2011 to 2014, accounting for nearly 42% of all new HIV infections in the Prefecture. [3] Given this shifting pattern, it is vitally important to examine the changing mode of HIV transmission from IDU to heterosexual contact, its socioeconomic and demographic characteristics, and its spatial variations, in order to develop, structure, and implement better informed, better targeted, and more effective prevention strategies and programs in the Prefecture.

Spatial heterogeneity in the changing mode of HIV transmission, and in the factors associated with it, is noteworthy for several distinct reasons. First, as alluded to previously, the southeastern townships in the Prefecture are located along the drug trafficking routes from the Golden Triangle region, such that IDU is common in this area. [6] It is well acknowledged that the sharing of equipment for IDU is both a substantial cause of HIV infection and a contributing factor to blood-borne transmission around the world. [7][8][9][10][11][12] It has been estimated that, as of 2017, about 17.8% of people aged 15 to 64 years who injected drugs (PWID) were living with HIV/AIDS worldwide. [12] While the prevalence rate is generally low, China has witnessed a slight decline in the percentage of PWID living with HIV over the past several years; for example, the percentage dropped from 6.33% in 2013 to 6.00% in 2014. [13] Second, the level of economic development varies considerably across the townships in the Prefecture. In effect, some townships in the Prefecture remain the poorest in contemporary China because of the rugged mountainous terrain and a vulnerable ecological environment. [14] Past research demonstrated that poverty was significantly associated with sexual risk behaviors, including inconsistent condom use and multiple sexual partners. [15][16][17][18][19] Public health scholars have linked poverty with condom use decisions and identified poverty as one of the most significant barriers to the negotiation of condom use. [20][21][22] Third, the influence of cultural norms can have profound implications for HIV transmission and control. [23][24][25][26] Liangshan is an ethnic minority region with more than 50% of its 4.873 million residents being of ethnic Yi origin, residing in 618 townships, 16 counties, and 1 county-level city. [2]
Dominant social norms and traditional values pertinent to HIV transmission, such as unsupportive attitudes toward condom use, condoned casual sex behavior, and arranged marriage within the same social status groups, remain strong among the Yi minority population, [27] thus potentially exacerbating HIV infections spatially across the Prefecture. It is important to note that the distribution of the ethnic Yi population is not uniform in the Prefecture. In fact, the Yi population is primarily concentrated in the northeastern part of the region, such that the influence of the Yi culture might be more prominent in the northeastern townships than in other geographic locations. [14] In light of these geosocial and geocultural diversities, there is good reason to anticipate township-level heterogeneity in the mode of HIV transmission and in the factors associated with it in the Prefecture. Unfortunately, to the best of our knowledge, no study to date has systematically investigated such spatial heterogeneity. To fill this research void, the present study was designed to accomplish 2 specific research goals. First, to properly visualize the aforementioned township-level spatial heterogeneity in the changing mode of HIV transmission in the Prefecture, a database comprising women with new HIV infections was constructed from multiple sources to minimize the township-level small sample size problem; such small samples may result in unstable and unreliable estimates as well as unreliable maps. [28,29] To help attenuate this potential problem, a Bayesian hierarchical model was employed to estimate and map the proportion of heterosexual transmission among women with new HIV infections. Second, a geographically weighted regression (GWR) model was used to link sociodemographic characteristics to the changing mode of HIV transmission among women with new HIV infections at the township level. [30] Achieving these research goals is contextually important and relevant because this minority region has been devastated by the HIV/AIDS epidemic and extreme poverty since the 1990s.

Data collection

To compile a comprehensive and accurate HIV/AIDS database of women who were newly infected with HIV from 2011 to 2014 in Liangshan, multiple data sources for detecting HIV in Liangshan were considered and triangulated (i.e., cross-checked and validated). These data sources included (1) the provider-initiated HIV testing and counseling (PITC) service and (2) the voluntary counseling and testing (VCT) service, among others. [31,32] Finally, recent research projects conducted in the prefecture were also consulted to corroborate the new female HIV infection cases collected from the above data sources. To avoid duplicates, every new infection case was coded with a unique ID number. It must be noted that women who were hospitalized and diagnosed with AIDS were not included in this study unless they received services provided by PITC. For the present study, the BED capture enzyme immunoassay was utilized, which identified and confirmed a total of 1074 women with new HIV infections residing in 618 townships, 16 counties, and 1 county-level city from 2011 to 2014.
Out of these women with new HIV infections, 927 cases were retained to constitute the analytical sample for the present study, as these cases had complete information on age, ethnicity, occupation, educational level, marital status, and, more importantly, the mode of HIV transmission. All the information was obtained at the time of blood sample collection. The data collection was approved by the Ethics Committee of the Center for Disease Control and Prevention of the Liangshan Yi Autonomous Prefecture.

Variables

The dependent variable for this study was the mode of HIV/AIDS transmission among women with new HIV infections in the Prefecture from 2011 to 2014. This variable was dummy-coded with 1 indicating heterosexual transmission and 0 representing all other modes of transmission. The independent variables included age, ethnicity, marital status, educational level, and occupation. Age was measured in years as a continuous variable; ethnicity was dummy-coded as 1 = other ethnicities and 0 = Yi; marital status was dummy-coded as 1 = single or married and 0 = divorced or widowed; educational level was dummy-coded as 1 = primary school or above and 0 = no formal education; and occupation was dummy-coded as 1 = other occupations and 0 = peasant and/or herdsman. The zero category was consistently used as the reference group. Furthermore, the residential location of each case was represented by the latitude and longitude of the centroid of the township to which it belongs. The sample characteristics and bivariate associations with the dependent variable are reported in Table 1.

Statistical modeling

As indicated above, the mode of transmission among women with new HIV infections was identified by cross-validating multiple data sources, including a variety of monitoring records, medical check-up reports, IDU history records, and sexual-behavior surveys. Since these newly identified cases came from different townships that might be affected by varying socioeconomic conditions and by spatial and cultural factors such as proximity to Yunnan province and the Golden Triangle region, a Bayesian hierarchical model was utilized to estimate the proportion of heterosexual transmission from 2011 to 2014. The implementation of the Bayesian hierarchical model is described in detail below. Because this study focuses on heterosexual transmission among women with new HIV infections, the mode of transmission was classified as heterosexual versus other modes of HIV transmission. As such, the underlying proportion for the mode of HIV transmission was estimated using a beta distribution. In the present study, the hyperparameters were expressed as (a, b), which determine the beta distribution for the proportion of women with new HIV infections through heterosexual contact; this proportion was represented by u. To obtain the posterior distribution of u, a vector of hyperparameters, (a, b), was drawn from its marginal posterior distribution, p(a, b | y), and the parameter vector u was drawn from its conditional posterior distribution, p(u | a, b, y), given the values of (a, b). [33]
The marginal posterior distribution of (a, b) was computed algebraically using the conditional probability formula, which can be written as

p(a, b | y) = p(u, a, b | y) / p(u | a, b, y).   (1)

Here, the joint posterior distribution of all parameters, p(u, a, b | y), and the conditional posterior distribution of u, p(u | a, b, y), are given by formulas (2) and (3), respectively:

p(u, a, b | y) ∝ p(a, b) ∏_j [Γ(a + b) / (Γ(a)Γ(b))] u_j^(a−1) (1 − u_j)^(b−1) ∏_j u_j^(y_j) (1 − u_j)^(n_j − y_j),   (2)

p(u | a, b, y) = ∏_j [Γ(a + b + n_j) / (Γ(a + y_j) Γ(b + n_j − y_j))] u_j^(a + y_j − 1) (1 − u_j)^(b + n_j − y_j − 1).   (3)

In these formulas, y_j, n_j, and u_j represent the number of women with new HIV infections through heterosexual transmission, the sample size, and the proportion of heterosexual transmission among women with new HIV infections in the jth township, respectively. By substituting (2) and (3) into formula (1), the marginal posterior distribution of (a, b) is obtained as

p(a, b | y) ∝ p(a, b) ∏_j [Γ(a + b) Γ(a + y_j) Γ(b + n_j − y_j)] / [Γ(a) Γ(b) Γ(a + b + n_j)].   (4)

To obtain a proper posterior distribution of (a, b), a diffuse hyperprior density was set as

p(a, b) ∝ (a + b)^(−5/2).   (5)

With the data collected from the Prefecture, we computed the density function in (4) with the hyperprior density in (5). Next, 10,000 hyperparameter (a, b) draws were simulated from their normalized marginal posterior distribution. For each township j (j = 1, . . . , J), u_j was sampled from its conditional posterior distribution, p(u_j | a, b, y) ~ Beta(a + y_j, b + n_j − y_j), as proposed by Gelman. [33] All these analytical procedures were implemented using R version 3.3.3.

Additionally, a GWR model was used to explore spatial variations in the effects of the independent variables on the mode of new female HIV transmission. The model was specified as

Logit(p_i) = β_0(u_i, v_i) + Σ_k β_k(u_i, v_i) x_{k,i},

where p_i is the estimated probability that the dependent variable equals 1, that is, heterosexual HIV transmission among women with new HIV infections; β_{k,i} = β_k(u_i, v_i) signifies the estimated effect of independent variable k for individual i; (u_i, v_i) represents the x-y coordinates of individual i; and x_{k,i} indicates the set of independent variables (k = 1, . . . , K) for individual i. GWR version 4.08 was used to implement the GWR model. An illustrative simulation of the two-stage Bayesian sampling scheme is sketched below.
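The two-stage scheme described above (draw (a, b) from the marginal posterior (4), then each u_j from the Beta conditional (3)) can be made concrete with a short simulation. The authors implemented their analysis in R 3.3.3; the Python sketch below is only an illustration, the per-township counts y and n are made-up placeholders rather than the study's data, and the grid ranges chosen for (a, b) are likewise arbitrary.

```python
import numpy as np
from scipy.special import gammaln

# Hypothetical per-township counts: y_j = heterosexual-transmission cases, n_j = all new cases.
y = np.array([3, 0, 7, 2, 5])
n = np.array([10, 4, 12, 6, 9])

def log_marginal_posterior(a, b):
    # log p(a, b | y) up to a constant, combining Eq. (4) with the hyperprior (5).
    log_prior = -2.5 * np.log(a + b)
    log_lik = np.sum(
        gammaln(a + b) - gammaln(a) - gammaln(b)
        + gammaln(a + y) + gammaln(b + n - y) - gammaln(a + b + n)
    )
    return log_prior + log_lik

# Evaluate p(a, b | y) on a grid and draw 10,000 hyperparameter pairs from it.
a_grid = np.linspace(0.1, 20, 200)
b_grid = np.linspace(0.1, 20, 200)
log_post = np.array([[log_marginal_posterior(a, b) for b in b_grid] for a in a_grid])
post = np.exp(log_post - log_post.max())
post /= post.sum()

rng = np.random.default_rng(0)
idx = rng.choice(post.size, size=10_000, p=post.ravel())
a_draws = a_grid[idx // len(b_grid)]
b_draws = b_grid[idx % len(b_grid)]

# For each township j, sample u_j from its conditional posterior Beta(a + y_j, b + n_j - y_j).
u_draws = rng.beta(a_draws[:, None] + y[None, :],
                   b_draws[:, None] + (n - y)[None, :])
print(u_draws.mean(axis=0))  # posterior mean proportion of heterosexual transmission per township
```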
Sociodemographic characteristics

As expected, Table 1 revealed that the vast majority (90.4%) of women with new HIV infections were of Yi ethnic origin. They were primarily engaged in agricultural or animal husbandry activities as farmers or herdsmen (83.1%). About three quarters (74.4%) of these women were married, and more than half (51.8%) had no formal education at the time of diagnosis. These sociodemographic characteristics were significantly associated with the mode of HIV transmission (P < .05). The spatial distributions of cases and variables are shown in the figures included in the appendix, http://links.lww.com/MD/D641.

Bayesian hierarchical analyses

Using Bayesian hierarchical models to estimate the proportion of heterosexual transmission among women with new HIV infections in the Prefecture, the marginal posterior distributions for all 4 years were computed (Figure 1).

GWR analyses

Results of the GWR analyses are shown in Table 2 and Figure 4. A careful examination of Table 2 suggests that Yi women were less likely to be infected through heterosexual transmission than women with new HIV infections in other ethnic groups. However, this ethnic difference was salient only in about 30% of the townships, especially in the northeastern and/or southern parts of the Prefecture (see Fig. 4). Stated differently, in the remaining 70% of the townships, it was Yi women with new HIV infections who accounted for much of the shift in the mode of HIV transmission from IDU to heterosexual contact. Moreover, as Table 2 indicates, women with new HIV infections engaged in other occupations were less likely, in all townships (100%), to be infected through heterosexual transmission than their agricultural and herdsman counterparts, with odds ratios (ORs) ranging from 0.52 to 0.55 (P < 0.05); this was particularly true for women residing in the western part of the Prefecture (see Fig. 4). It is also observed that, net of other sociodemographic characteristics, being single or married (vs divorced or widowed) decreased the odds of heterosexual transmission for all women with new HIV infections, with adjusted ORs ranging from 0.31 to 0.38 (P < 0.05) for unmarried women and from 0.30 to 0.35 (P < 0.05) for married women, respectively (see Table 2). As shown in Table 2 and Figure 4, these marital status effects were present throughout the prefecture (100%) but were stronger in the northern townships. Furthermore, in more than half of the townships (55.56%), having primary school education (vs no formal education) was negatively associated with the odds of heterosexual transmission, which was particularly pronounced in the northeastern part of the prefecture (see Fig. 4). However, there was no significant difference between those who had a junior high school education or above and those who had no formal education (see Table 2). Finally, age was statistically insignificant and was therefore removed from Table 2. Spatial variations in the significant sociodemographic predictors are mapped and highlighted in Figure 4.

Discussion

Our study revealed a rapid shift in the mode of HIV transmission among women with new HIV infections living in Liangshan from 2011 to 2014. The results from the Bayesian hierarchical model showed that the proportion of heterosexual transmission among women with new HIV infections increased from about 20% in 2011 to approximately 80% in 2014, suggesting that the mode of female HIV infection in this minority region had undergone a dramatic transformation. Taken together, these findings demonstrate that the predominant mode of transmission among women with new HIV infections in Liangshan had shifted to heterosexual contact, and this shift could partially explain why HIV incidence among women in the Prefecture soared from 2011 to 2014, accounting for almost 42% of all new HIV infections during this period. [3] As indicated by previous studies, the prevalence of people living with HIV in the Yi minority regions in China ranged from 2.88% to 9.46% in the 2000s. [2] If heterosexual contact is the predominant mode of female HIV transmission, then women in these high-prevalence minority regions, especially those who are poorly educated, could face serious challenges in protecting themselves from being infected. [17,20] Although HIV/AIDS prevention work should continue to focus on such high-risk populations as drug abusers and female sex workers (FSW), earnest attention must be given to heterosexual women, who are increasingly at risk of HIV infection through their sexual activities. In line with previous studies that explored spatial variations in the risk factors associated with the mode of HIV infection, [6] this study also examined the spatial heterogeneity in the socioeconomic and demographic factors associated with heterosexual HIV transmission among women with new HIV infections.
One surprising finding was that, compared with women in other ethnic groups, Yi women were less likely to be infected with HIV through heterosexual transmission. However, this unanticipated ethnic difference was observed in only 30% of the townships, especially in several eastern counties where heroin use and addiction were common among Yi women. Although public health scholars in China continue to observe differences between Yi women and women in other ethnic groups, [34][35][36] our study showed that heterosexual activity had become a common mode of HIV transmission for women with new HIV infections living in most townships in Liangshan. We also suggest that the findings regarding ethnic differences in this study should not be generalized without caution to other minority regions in China, where the Yi minority population is smaller than in Liangshan. As to marital status, we found that divorced or widowed women were more likely to be infected through heterosexual transmission than those who were married or single. This is not surprising given the traditional custom for a widow to marry her husband's brother, even when he is known to be HIV positive. Moreover, because women's social status remains low, with little or no right to negotiate condom use, [37] casual sex and sex without condoms persist in this minority region. [27,38,39] This widespread casual sex behavior without consistent condom use might be the central reason for the growing proportion of heterosexual transmission among women, especially among divorced or widowed Yi women. In addition to marital status, an association between educational attainment and the mode of HIV transmission among women with new HIV infections was found in several northeastern townships; thus, the effects of educational attainment on HIV transmission varied spatially in Liangshan. As discussed previously, the northeastern part of Liangshan is one of the areas with the highest HIV prevalence. As such, education may have a disproportionate effect on HIV transmission there, since improved education is known to lead to more condom use and less frequent casual sex. [40,41] This finding confirms that the government's HIV prevention efforts should be harmonized with its targeted poverty alleviation efforts as well as its drive to improve education in this minority region.

Study limitations

There were several limitations in this study. First, even though we made concerted efforts to collect every identified and confirmed case of a woman with a new HIV infection in the prefecture by triangulating multiple data sources, it is possible that some female IDUs were left out of the present study, given the difficulty of tracking them over the 4-year period from 2011 to 2014. However, the amount of bias due to this omission is arguably small, as IDU has steadily declined in the prefecture and nationwide. [3,42] In addition, patients who were hospitalized and diagnosed with AIDS but did not utilize services provided by PITC were not included in the present study. By the same token, women who were stigmatized and were not monitored by the HSS would be omitted from this study as well. Second, when the proportion of heterosexual transmission among women with new HIV infections was estimated, the sample size for several townships was relatively small. This could be because some newly infected cases could not be accurately classified as heterosexual transmission.
Although the Bayesian hierarchical model partially guarded against extreme estimates arising from small-sample bias, a slight amount of such bias might still exist. Last but not least, because the number of variables in the constructed database was limited for the present study, future research should include and investigate more risk or protective factors that are potentially associated with the changing mode of HIV transmission among women in Liangshan or other Yi minority areas.
Conclusion
Over the past few years, heterosexual transmission has become the predominant mode of HIV transmission among women in Liangshan. This shift in the mode of HIV transmission over a period of 4 years is characterized by a noteworthy spatial diffusion pattern. That is, over time the rates of heterosexual transmission have expanded beyond the northeastern townships to the western and southern parts of the prefecture. In addition, sociodemographic factors that are associated with this changing mode of HIV transmission also exhibit spatial variations. These findings suggest that future intervention strategies and programs should be spatially structured and culturally competent to better serve targeted populations in this minority region.
Gene expression and promoter methylation of angiogenic and lymphangiogenic factors as prognostic markers in melanoma
The high mortality rate of melanoma is broadly associated with its metastatic potential. Tumor cell dissemination is strictly dependent on vascularization; therefore, angiogenesis and lymphangiogenesis play an essential role in metastasis. Hence, a better understanding of the players of tumor vascularization and establishing them as new molecular biomarkers might help to overcome the poor prognosis of melanoma patients. Here, we further characterized a linear murine model of melanoma progression and showed that the aggressiveness of melanoma cells is closely associated with high expression of angiogenic factors, such as Vegfc, Angpt2, and Six1, and that blockade of the vascular endothelial growth factor (VEGF) pathway by the inhibitor axitinib abrogates their tumorigenic potential in vitro and in the in vivo chicken chorioallantoic membrane assay. Furthermore, analysis of The Cancer Genome Atlas data revealed that the expression of the angiogenic factor ANGPT2 (P-value = 0.044) and the lymphangiogenic receptor VEGFR-3 (P-value = 0.002) were independent prognostic factors of overall survival in melanoma patients. Enhanced reduced representation bisulfite sequencing-based methylome profiling revealed for the first time a link between abnormal VEGFC, ANGPT2, and SIX1 gene expression and promoter hypomethylation in melanoma cells. In patients, VEGFC (P-value = 0.031), ANGPT2 (P-value < 0.001), and SIX1 (P-value = 0.009) promoter hypomethylation were independent prognostic factors of shorter overall survival.
Hence, our data suggest that these angio-and lymphangiogenesis factors are potential biomarkers of melanoma prognosis. Moreover, these findings strongly support the applicability of our melanoma progression model to unravel new biomarkers for this aggressive human disease. Introduction Melanoma mortality rate, one of the highest among all human cancers, is broadly associated with its metastatic potential (T ım ar et al., 2016). Although the complete resection of localized melanomas is curative in nearly all cases (Shain and Bastian, 2016), the survival rate for patients identified with metastatic melanoma is only 6-11 months (Clark et al., 2018;Fruehauf et al., 2011). Metastasis is a complex process, in which a sufficient blood supply is critical for dissemination and subsequent tumor growth. Melanoma cells can spread through hematogenous and lymphatic routes; therefore, the formation of new blood and lymphatic vessels via angiogenesis and lymphangiogenesis, respectively, is crucial (Adler et al., 2017). Indeed, high vascularization has been associated with melanoma progression (Chung and Mahalingam, 2014;T ım ar et al., 2016). As mentioned, diverse molecules are involved in vessel formation. VEGFs are associated with the early steps of blood vessels formation, while ANGPTs are critical regulators of vascular and lymphatic maturation and remodeling (Rigamonti et al., 2014;Thurston, 2003). ANGPT1 and ANGPT2 bind to the tyrosine kinase receptor TIE2; the former is considered an agonist and the latter primarily an antagonist of this receptor. ANGPT2 has been shown to be highly expressed in tumors, promoting endothelial disruption and facilitating tumor cell extravasation (Li et al., 2015). Angiogenic factors have been shown to be useful in predicting cancer progression and aggressiveness in different malignancies and were proposed as tumor biomarkers (Cao et al., 2014;Martinelli et al., 2013). Studies with melanoma patients show conflicting results; while some have shown that VEGFA and VEGFC can predict shorter overall and disease-free survival (Boone et al., 2008;Spiric et al., 2015;Tas et al., 2006), other studies failed to prove such a correlation (Bolander et al., 2007;Vihinen et al., 2007). Recently, it has been reported that the promoter regions of the major angiogenesis players contain extended CpG islands (Pirola et al., 2018). Thus, DNA methylation might be a potent regulatory mechanism of angiogenesis in melanoma. Here, we show that the aggressiveness of murine melanoma cells is closely associated with high expression of angiogenic factors and that blockade of the VEGF pathway abrogates the tumorigenic potential of metastatic melanoma cells. Furthermore, the expression of the angiogenic factor ANGPT2 and the receptor VEGFR-3 were significantly associated with overall survival of melanoma patients. The methylation status of VEGFC, ANGPT2, and SIX homeobox 1 (SIX1) promoters was also found to correlate with overall survival in melanoma patients. In the studied murine model, DNA methylation was identified as the mechanism regulating the abnormal expression of these genes. Cell lines and drug treatment Murine melanoma cell lines 4C11À (nonmetastatic) and 4C11+ (metastatic) were cultured in RPMI 1640 medium supplemented with 5% FBS and 1% penicillin (100UÁmL À1 ) and streptomycin (100 lgÁmL À1 ) at 37°C in 5% CO 2 humidified atmosphere. Cell culture reagents were purchased from PAN Biotech (Aidenbach, Germany). Axitinib (PZ0193; Sigma-Aldrich, St. 
Louis, MO, USA), a selective inhibitor of VEGF receptors, and 5-Aza-2 0 -deoxycytidine (5-Aza-CdR; Calbiochem, Merck, Darmstadt, Germany) were dissolved in DMSO (PAN Biotech) and stored at a final concentration of 10 mM at À20°C. 4C11+ cells were treated with different concentrations (40 nM-10 lM) of axitinib for MTT assay and with 1 lM for all other assays. All treatments were performed for 48 h. 4C11À cells were treated with 10 lM of 5-Aza-CdR for 72 h. As a control, cells were treated with the respective volume of DMSO. Final DMSO volume in the cell culture was lower than 0.01%. In vivo chicken chorioallantoic membrane assay The chicken chorioallantoic membrane (CAM) assay was performed as previously described (Muenzner et al., 2018). Fertilized specific pathogen-free chicken eggs were obtained from Valo BioMedia (Osterholz-Scharmbeck, Germany) and incubated at 37°C with 80% relative humidity. The first day of incubation was considered as embryonic day (EDD) 1. On EDD 8, the eggshell was opened at the more rounded pole of the egg and the exposed membrane residing below the air sac was removed with fine forceps revealing the CAM, and the window was re-sealed with adhesive silk tape. On EDD 9, 4C11À and 4C11+ cells or pretreated 4C11+ cells (axitinib or vehicle) (1 9 10 6 cells/ egg) were applied on the CAM. Cells were prepared in a mixture of 50% RPMI medium and 50% Matrigel (CorningÒ MatrigelÒ Basement Membrane Matrix,356237;Corning,Bedford,MA,USA), and the formed pellets were incubated for 1 h at 37°C before being applied onto the CAM. Tumors and the adjacent CAM were dissected on EDD 12 or EDD 15. Tumor volume was measured (l 9 w 9 h 9 0.526, where l indicates length, w indicates width, and h indicates height), and the tissue was fixed in 4% phosphate-buffered formaldehyde before being embedded in paraffin for histopathologic observation. After the tumor and CAM had been removed, the embryo was immediately euthanized by decapitation. Immunostaining Immunohistochemistry (IHC) was performed to detect pH3 [phospho-histone H3 (BC37), 1 : 200; Biocare Medical, Pacheco, CA, USA] in formalin-fixed paraffin-embedded (FFPE) tissue obtained from the CAM assay (n = 7); and VEGFR-3 [(D6), 1 : 200; Santa Cruz, Dallas, TX, USA] and ANGPT2 [(F-1), 1 : 100; Santa Cruz] in primary and metastatic FFPE human melanoma specimens (n = 5). The study methodologies conformed to the standards set by the Declaration of Helsinki. Briefly, sections (2-4 lm) were deparaffinized at 72°C for 30 min, incubated in xylene, and rehydrated in EtOH. Antigen was retrieved by heating in a Tris/EDTA buffer at 120°C for 5 min, and endogenous peroxidases and nonspecific binding sites were blocked with specific blocking solution. The slices were incubated with specific primary antibody and next with secondary horseradish peroxidase-linked antibody. Positive immunoreactivity was detected using diaminobenzidine or AEC/H 2 O 2 , and nuclei were counterstained with hematoxylin and eosin (HE). Hematoxylin and eosin-stained tumor slices obtained from the CAM assay were scanned with Panoramic MIDI system (Camera type: CIS VCC-FC60FR19CL, Objective name: Plan-Apochromat, Objective magnification: 409, Camera adapter magnification: 19) (3DHISTECH, Budapest, Hungary), which generates a digital image with high quality, and evaluated with the CASEVIEWER software (Version 2.0; 3DHISTECH). 
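As a side note to the CAM assay above, the tumor volume estimate (length × width × height × 0.526) can be written out as a one-line helper; the caliper measurements in the example are invented for illustration only.

```python
def cam_tumor_volume(length_mm: float, width_mm: float, height_mm: float) -> float:
    """Estimate CAM tumor volume (mm^3) as l x w x h x 0.526."""
    return length_mm * width_mm * height_mm * 0.526

# Hypothetical caliper measurements of a tumor grown on the CAM.
print(cam_tumor_volume(6.0, 4.5, 3.0))  # ~42.6 mm^3
```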
Immunofluorescence images of cryosections of primary (n = 3) and metastatic (n = 3) melanoma were generated using the multi-epitope ligand cartography technique (MELC). Sample preparations from tissue, data generation, and analysis were performed as described previously (Ostalecki et al., 2017). Total RNA isolation Total RNA from 4C11À and 4C11+ melanoma cells was prepared using the miRNeasy Mini kit (Qiagen, Hilden, Germany), and DNA digestion was performed with RNase-Free DNase Set (Qiagen) according to the manufacturer's protocol. 2.5. NanoString gene expression analysis mRNA signature of 4C11À and 4C11+ cancer cells was profiled with nCounter PanCancer Mouse Pathways Panel (NanoString Technologies, Seattle, WA, USA). As recommended by the manufacturer, 100 ng of total RNA was used as input for sample preparation. Samples and specific probes were hybridized overnight at 65°C, automatically processed in the Prep Station, and transferred to the Digital Analyzer for data collection with a high-density scan (600 fields of view). NanoString nSolver software was used for normalization and pairwise comparisons. mRNA data were normalized with a set of 20 predetermined housekeeping genes. Based on background detection, the minimal threshold for detection was considered as 50 counts. RT-qPCR Following the manufacturer's instructions, 1 lg of purified RNA was reverse transcribed to cDNA using miScriptÒ II RT kit (Qiagen) with HiFlex Buffer. Real-time PCR was performed with QuantiTect SYBRÒ Green PCR Kit (Qiagen) using 40 ng of cDNA and specific primers designed with NCBI Primer-Blast software (Table S1). The PCR amplification conditions were as follows: 95°C for 15 min, 40 cycles of 94°C for 15 s, 60°C for 30 s, and 70°C for 30 s. Beta-actin was used as a housekeeping control, and data were analyzed with the comparative 2 ÀDDCT method. PCR reactions were performed using the Bio-Rad CFX96 Real-Time PCR Detection System. Invasion assay Membranes of TranswellÒ inserts (Corning) were precoated with Matrigel Growth Factor Reduced (356231; Corning) diluted in serum-free medium in a 1 : 4 proportion. After Matrigel had jellified, 2 9 10 5 previously serum-starved (24 h) cells were seeded in the upper chamber in serum-free medium. RPMI containing 10% FBS was used as a chemoattractant in the lower chamber. Cells were allowed to invade for 48 h at 37°C and 5% CO 2 . After this period, membranes were fixed with 4% formaldehyde and stained with crystal violet. Images of invading cells were obtained in a bright-field microscope (Leica DMi1; Leica Microsystems, Wetzlar, Germany). Tube formation assay A 96-well plate was precooled, and 40 lL of Matri-gelÒ (356237; Corning) was added in each well. The plate was incubated at 37°C for at least 30 min to allow the MatrigelÒ to form a gel-like structure before 1-6 9 10 4 cells were seeded on top of the MatrigelÒ in 100 lL of complete medium. Tube formation was assessed after 16-18 h under a bright-field microscopy (Leica DMi1). MTT assay Four thousand 4C11+ cells were seeded in the wells of a 96-well plate and allowed to attach for 24 h. Following this, cells were treated with different concentrations of axitinib (Sigma-Aldrich, Darmstadt, Germany) for 48 h and cell viability was analyzed by 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) reduction. 
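The comparative 2^−ΔΔCt calculation used for the RT-qPCR data above can be made explicit as follows; the Ct values are illustrative placeholders, with Actb assumed as the housekeeping reference and 4C11− as the calibrator.

```python
def fold_change_ddct(ct_target_sample, ct_ref_sample, ct_target_calib, ct_ref_calib):
    """Relative expression by the comparative 2^-DDCt method."""
    dct_sample = ct_target_sample - ct_ref_sample   # normalize to housekeeping gene
    dct_calib = ct_target_calib - ct_ref_calib
    ddct = dct_sample - dct_calib                   # relative to calibrator (e.g. 4C11-)
    return 2 ** (-ddct)

# Hypothetical Ct values: Vegfc in 4C11+ vs 4C11-, normalized to Actb.
print(fold_change_ddct(ct_target_sample=22.1, ct_ref_sample=16.0,
                       ct_target_calib=27.4, ct_ref_calib=16.2))  # ~34-fold up
```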
For this, cells were incubated with MTT (Sigma-Aldrich, Germany; 0.5 mgÁmL À1 ) for 2 h at 37°C, and the produced purple formazan was solubilized with DMSO and quantified at 595 nm with a reference filter of 620 nm in a multilabel plate reader (VictorTM X3 2030 Multilabel Plate Reader; Perkin Elmer, Waltham, MA, USA). Wound healing assay Cells were seeded in a 12-well plate, were led to adhere for 24 h, and then treated with axitinib or vehicle alone for 48 h. At this point, the monolayer confluence was reached and a straight wound was created with a sterile 100 lL pipette tip in the center of each well and cell debris were eliminated by washing with PBS. Cells were incubated with FBS-free medium containing 2 ngÁlL À1 of mitomycin C (Sigma-Aldrich), a mitosis inhibitor, and cell migration was monitored for 48 h by bright-field microscopy (Nikon, Tokyo, Japan). The cell-free area was measured with IMAGEJ software (National Institutes of Health, Bethesda, MD, USA), and relative cell migration was quantified by the equation: Migration % = [1 À (cell-free area at t 24 /cell-free area at t 0 ) 9 100]. Cell cycle analysis For cell cycle analysis, after the appointed treatment, adherent and occasional floating cells in the supernatant were collected, washed in PBS, and fixed overnight with ice-cold 70% EtOH. Next, cells were incubated with staining solution containing 50 lgÁmL À1 propidium iodide and 0.5 mgÁmL À1 RNase for 30 min in the dark. DNA content was determined in FACS CantoÒ II flow cytometer (Becton-Dickinson, San Diego, CA, USA), and cell cycle distribution was assessed with FLOWJO cell cycle platform v7.6.5 (Flowjo LCC, Ashland, OR, USA). Survival analysis Data on gene expression and methylation of 470 individuals were downloaded from The Cancer Genome Atlas (TCGA) Skin Cutaneous Melanoma project. Of these, 79 individuals bearing primary tumors and 199 bearing secondary tumors (totalizing 278) had information on all covariates considered in this study and were used for the survival analyses. A multivariate Cox regression model was used to test the impact of gene expression and promoter methylation on patient overall survival. Age, tumor primary site, presence of metastasis in lymph nodes, ulceration, and Breslow depth value were used as covariates. Hazard ratios (HR) and corresponding 95% confidence intervals (CI) are shown. Statistical significance was set at P < 0.05. Kaplan-Meier survival curves were generated for genes with significant association with overall survival. Statistical analysis Data analysis was performed using GRAPHPAD PRISM 6 software (GraphPad Software, San Diego, CA, USA). Two and multiple group comparisons were analyzed by t-test and ANOVA, respectively. For non-normally distributed data, the Mann-Whitney test was used for two-group comparison. Melanoma cell line 4C11+ exhibits highly aggressive and angiogenic phenotype in vitro and in vivo 4C11À and 4C11+ cells were engrafted onto the CAM and incubated for 3-5 days. 4C11-tumors grew at a slow rate, with only a small tumor mass being visible at day 3. However, 4C11+ cells at day 3 had already given rise to a tumor mass with an extensive network of blood vessels. At day 5, most of the embryos bearing 4C11+ tumors had died, probably due to the tumor aggressiveness. The 4C11+ tumors of chicken embryos that survived until day 5 were also removed and analyzed by HE staining; however, these samples harbored large hemorrhagic areas leaving hardly any tumor tissue (Fig. S1), which prevented further investigations. 
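For reference, the wound-healing quantification described in the Methods above reduces to a percent wound closure; the bracketing of the printed formula is read here as (1 − cell-free area at t / cell-free area at t0) × 100, and the ImageJ area values below are hypothetical.

```python
def migration_percent(area_t0: float, area_t: float) -> float:
    """Percent wound closure: 100 * (1 - cell-free area at time t / area at t0)."""
    return (1.0 - area_t / area_t0) * 100.0

# Hypothetical ImageJ cell-free areas (arbitrary units) at 0 h and 24 h.
print(migration_percent(area_t0=1.00, area_t=0.35))  # 65% closure
```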
Thus, we decided to exploit 4C11À tumors on day 5 and 4C11+ tumors on day 3. In average, 4C11+ tumor volume was 11-fold bigger than 4C11À tumors at the evaluated time points (Fig. 1A). For objective evaluation, HE-stained tumor slices were scanned, which allowed the measurement of the actual tumor area by considering only tumor cell regions and excluding residual MatrigelÒ, CAM tissue, and large hemorrhagic areas. In average, the 4C11+ tumor area was 5-fold bigger than the area of 4C11À tumors (Fig. 1B). Hematoxylin and eosin histological analysis showed that in 4C11À tumors, neoplastic cells were separated by profuse eosinophilic material, such as MatrigelÒ. In these tumors, we found isolated intervening vessels and no evidence of CAM invasion (Fig. 1C, left panels). In contrast, 4C11+ tumors presented hyperchromatic nuclei, pleomorphic cells, extensive necrosis, and a hypervascularized and invasive tumor growth pattern ( Fig. 1C, right panels). There was also a variable pigment formation in 4C11+ samples, which was clearly recognizable in the macroscopic tumor and in the HE staining. Abundant positive immunohistochemical staining for pH3, an important mitosis marker, was observed in 4C11+ cells (Fig. 1C, right panel). Staining quantification showed that 4C11+ tumors had in average 3 times more pH3-positive cells [31/high-power field (HPF); range 20-45] than 4C11À tumors (9/HPF; range 3-15; Fig. 1D), confirming the high proliferative index of 4C11+ cells (Fig. 1C). In addition, 4C11+ cells were shown to be highly invasive in vitro (Fig. 1E). Scanned slices were also employed to better analyze tumor vascularization. All 4C11À tumor samples contained few and well-defined intratumoral vessels; however, 4C11+ tumor samples presented large and irregular vessels and several hemorrhagic areas within the tumor masses (Fig. 1F). From seven analyzed tumors established from 4C11+ cells, four had large areas of hemorrhage and one was completely hemorrhagic (Fig. 1F, bottom right panel). In a tube formation assay in vitro, we could observe that 4C11+ cells formed an extensive network of capillary-like structures, while 4C11À cells remained as single cells or formed irregularly shaped clusters of cells, and were, therefore, incapable to generate defined tubular vasculogenic mimicry (Fig. 1G). Transcriptional analysis reveals alteration in angiogenesis-related genes during melanoma progression To explore the molecular mechanisms responsible for the highly aggressive phenotype of 4C11+ cells, we profiled the expression pattern of 770 murine cancerrelated genes in 4C11À and 4C11+ cells with the NanoString nCounter technology. Using a twofold difference and P < 0.05 cutoff, we found 254 differentially expressed mRNA, of which 59 were significantly upregulated and 195 downregulated in 4C11+ in comparison with 4C11À cells ( Fig. 2A and Tables S2 and S3). Unsupervised hierarchical clustering analysis demonstrated that 4C11À and 4C11+ cells could be distinguished accordingly to their mRNA expression profile (not shown). The 10 most up-and 10 most downregulated genes were hierarchically clustered and displayed in a heatmap (Fig. 2B). To gain further insight into the function of the dysregulated genes in 4C11+ cells, we performed a pathway enrichment analysis of all dysregulated genes using the Panther database. Among the most significant enriched pathways were FGF, PDGF, and VEGF signaling pathways, all directly associated with tumor progression and, most importantly, tumor metastasis and angiogenesis (Fig. 
2C). KEGG pathway analysis also showed enrichment of tumor-associated pathways (not shown). Interestingly, a String analysis showed that among the 10 most upregulated genes in 4C11+ compared with 4C11À cells, there is an association among the proteins encoded by Vegfc (the most upregulated gene), Angpt2, Shc4, and Met. The same analysis with the top 10 downregulated genes depicted an interaction network among Tnc, Fgfr2, Cola1a, Bmp4, Bmp7, Lef1, and Gpc4 (Fig. 2D). These data, in association with our functional assays, prompted us to study the angiogenesis process in these cells. Moreover, as Vegfc was the most dysregulated gene in the NanoString analysis, we decided to evaluate its family in more detail. We confirmed the differential expression of Vegfa, Vegfb, Vegfc, and their receptors, Vegfr-1, Vegfr-2, Vegfr-3, and Nrp2 by RT-qPCR (Fig. 2E,F). We validated that Vegfa is downregulated in 4C11+ cells in comparison with 4C11À cells, while Vegfb and Vegfc are upregulated, the latter to a much higher extent. Considering the receptors, Vegfr-1 and Nrp2 were downregulated in 4C11+ cells compared with 4C11À cells; Vegfr-2 could not be detected in either one of the cell lines and Vegfr-3 was highly upregulated in 4C11+ cells. Additionally, we confirmed the higher expression of Angpt2, Met, and Six1 in 4C11+ cells (Fig. 2G). Inhibition of VEGFC function decreases aggressiveness phenotype of metastatic 4C11+ cell line To better understand the impact of VEGFC in the aggressiveness of 4C11+ cells, we blocked the VEGFC pathway using axitinib, a potent and selective inhibitor of VEGF receptors. First, we evaluated the effects of axitinib on 4C11+ cells viability by a MTT dose-response analysis. Cells were treated with 40 nM-10 lM of axitinib for 48 h. Growth inhibition obtained using 1-5 lM of the drug was similar,~40%, while 10 lM of the drug reduced cell viability to nearly 50%. For this, we decided to use the lowest effective dose of axitinib in 4C11+ cells, that is, 1 lM, for the subsequent experiments (Fig. 3A). Cell cycle analysis revealed that axitinib treatment induced a significant decrease of cells in the G1 phase, while massively increasing the G2/M population (13.4% in control cells to 52.2% in treated cells). There was no alteration in the sub-G1 population (Fig. 3B). In wound healing assays, vehicle-treated 4C11+ cells were able to close the wound after 24 h; however, axitinib-treated cells had a 30% lower migration rate at that time point (Fig. 3C). After 48 h, there was no significant change in migration capability (data not shown). In the in vivo CAM assay, we observed that vehicle-treated 4C11+ cells were able to develop massive tumors in 5 days, which were extremely invasive and presented extensive inflammatory infiltrates (Fig. 3D, left panels). On the other hand, axitinib-treated 4C11+ cells generated only small tumor masses, in some cases failing to develop a tumor mass at all. HE staining depicts tumor cells surrounded by MatrigelÒ and clearly separated from the CAM, which was not infiltrated by the tumor cells (Fig. 3D, right panels). From 11 samples grown from axitinib-pretreated 4C11+ cells, two were able to develop tumors, although much smaller than tumors generated by untreated 4C11+ cells. In one graft, cells were largely surrounded by MatrigelÒ, which was not observed in the control group, while in the other graft, the tumor growth was very similar to the DMSO-treated group with cells invading the CAM and presenting high immune cell infiltration (Fig. S2). 
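Returning to the NanoString screen described above, the twofold / P < 0.05 cutoff amounts to a simple filter on fold change and p-value. The sketch below assumes counts have already been normalized in nSolver; the gene table is fabricated purely for illustration.

```python
import pandas as pd

# Hypothetical normalized results: log2 fold change (4C11+ vs 4C11-) and p-value per gene.
res = pd.DataFrame(
    {"gene": ["Vegfc", "Angpt2", "Six1", "Tnc", "Fgfr2", "Gapdh"],
     "log2fc": [5.2, 3.1, 2.4, -3.8, -2.9, 0.1],
     "pval": [0.001, 0.004, 0.010, 0.002, 0.003, 0.800]}
)

# A twofold difference corresponds to |log2FC| >= 1.
deg = res[(res["log2fc"].abs() >= 1) & (res["pval"] < 0.05)]
up = deg[deg["log2fc"] > 0]
down = deg[deg["log2fc"] < 0]
print(len(up), "upregulated;", len(down), "downregulated")
```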
High expression of VEGFR-3 and ANGPT2 is associated with poor overall survival in melanoma patients Cox multivariate analysis was performed to evaluate the association of gene expression and overall survival in melanoma patients. Expression of the following genes detected as upregulated in 4C11+ cells in comparison with 4C11À cells was analyzed individually: VEGFC, VEGFR-3, ANGPT2, MET, and SIX1. After adjustment for age, tumor primary site, presence of metastasis in lymph nodes, ulceration, and Breslow depth value, high expression of VEGFR-3 (HR = 1.199; P-value = 0.044) and ANGPT2 (HR = 1.189; P-value = 0.002) was shown to be predictors of shorter overall survival (Fig. 4A). Kaplan-Meier curves of VEGFR-3 and ANGPT2 are shown (Fig. 4B,C). The expression of VEGFR-3 and ANGPT2 was also evaluated by IHC and MELC staining in primary and metastatic human melanoma tissue. In the IHC, only the staining of melanocytes and melanoma cells were evaluated. Skin and colon tissue were used as negative controls (Fig. S3). The average positive staining intensity of the antigens was weak in primary samples (IRS: VEGFR-3 = 3.6; ANGPT2 = 3.1) and moderate in the metastatic specimens (IRS: VEGFR-3 = 5.3; ANGPT2 = 5.6; Fig. 4D). In the MELC technique, we also stained blood vessels with Collagen type IV. This assay confirmed the high expression of ANGPT2 and VEGFR-3 in metastatic melanomas and showed these antigens are expressed by both the tumor and endothelial cells. While VEGFR-3 was mainly detected in the tumor vasculature, ANGPT2 was highly expressed by the tumor cells (Fig. 4E). Expression of Vegfc, Angpt2, and Six1 is epigenetically regulated in murine melanoma cell lines The DNA methylation status of Vegfc, Vegfr-3, Angp-t2, Met, and Six1 was verified in a methylome sequencing data from enhanced reduced representation bisulfite sequencing (Rius et al., in preparation). CpGs distant up to 1500 nucleotides upstream and 250 nucleotides downstream from the transcription start site (TSS) were analyzed, and a CpG site was considered to be differentially methylated if it presented a minimum of 25% difference in methylation with a Pvalue ≤ 0.01. The number of CpGs differentially methylated in Vegfc, Angpt2, and Six1 promoters was four (distant from 147-222 nucleotides downstream of the TSS), 11 (located from 1129 to 1370 nucleotides upstream of the TSS), and 72 (À1499 to +203 distant from the TSS), respectively. These CpGs sites had in average 33%, 87%, and 76% lower methylation in 4C11+ cells than in 4C11À cells, respectively ( Fig. 5A and Table S4). In the analyzed region, Vegfr-3 and Met did not show any differential methylation in 4C11+ in comparison with 4C11À cells. We then treated 4C11À cells, which express low levels of Vegfc, Angpt2, and Six1, with the DNA methyltransferase inhibitor 5-Aza-CdR and verified their expression by RT-qPCR. Demethylation caused by the treatment enhanced the expression of the three genes analyzed, most prominently of Angpt2 and Six1 (Fig. 5B). 3.6. Promoter methylation of VEGFC, ANGPT2, and SIX1 is associated with poor prognosis in melanoma patients As we observed an epigenetic regulation of Vegfc, Angpt2, and Six1 in our mouse melanoma model, we analyzed whether the promoter methylation of these genes could predict overall survival in melanoma patients by a Cox multivariate analysis. 
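A minimal sketch of this kind of multivariate Cox model and Kaplan-Meier split is shown below using the lifelines package; the per-patient table, column names, and reduced covariate set are placeholders, not the actual TCGA extract or the software used by the authors.

```python
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter

# Hypothetical per-patient table: follow-up time (months), death event,
# promoter methylation (beta value), and age as an example covariate.
df = pd.DataFrame({
    "time":      [12, 30, 45,  8, 60, 24, 50, 15, 36,  5],
    "event":     [ 1,  0,  1,  1,  0,  1,  0,  1,  0,  1],
    "meth_beta": [0.2, 0.7, 0.6, 0.1, 0.8, 0.3, 0.9, 0.2, 0.25, 0.85],
    "age":       [55, 62, 47, 70, 58, 66, 51, 73, 60, 68],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
cph.print_summary()  # hazard ratios = exp(coef), with 95% CIs and p-values

# Kaplan-Meier curves for low vs high methylation (median split).
kmf = KaplanMeierFitter()
high = df["meth_beta"] >= df["meth_beta"].median()
for label, grp in [("high methylation", df[high]), ("low methylation", df[~high])]:
    kmf.fit(grp["time"], event_observed=grp["event"], label=label)
```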
Promoter CpG island and shore methylation of VEGFC (HR island + shore = 0.035; P-value = 0.031) and SIX1 (HR island = 0.591; P-value = 0.009) were analyzed and found to be significantly associated with survival. ANGPT2 promoter does not contain a CpG island; therefore, we analyzed single CpGs shown to be hypomethylated in 4C11+ cells and that had also been previously evaluated in chronic lymphocytic leukemia (Martinelli et al., 2013). The average DNA methylation of these CpGs was associated with overall survival (HR = 0.1677; P-value < 0.001). In all cases, decreased methylation was associated with shorter survival of melanoma patients (Fig. 5C). Kaplan-Meier curves are shown (Fig. 5D-F). Discussion Metastasis is closely associated with high mortality rate in melanoma patients. Therefore, melanoma cell dissemination and the related molecular mechanisms still need to be elucidated in more detail. New diagnostic and prognostic markers are also important to reach a better clinical outcome and mortality reduction. To study melanoma progression, we used a linear murine progression model in which metastatic 4C11+ cells arose from nonmetastatic 4C11À cells following P53 expression loss (Souza et al., 2012). To better analyze the cancer properties of 4C11À and 4C11+ cells, we performed the CAM assay, which allows the study of several hallmarks of cancer, as proliferation, invasion, metastasis, and angiogenesis (Lokman et al., 2012;Muenzner et al., 2018). Here, we could observe a clear distinction between the two cell lines. In vivo, 4C11+ cells gave rise to larger tumors, which were highly proliferative and vascularized. On the other hand, 4C11À tumors displayed small well-defined tumor masses with localized tumor vessel infiltration. We also observed 4C11+ cells were highly invasive in vitro. These data are consistent and enrich our previous results showing different growth rate and metastasis capability of 4C11À and 4C11+ cells in a Heatmap of unsupervised hierarchical clustering of 4C11À and 4C11+ cells with the 10 most upregulated and 10 most downregulated mRNA. Green and red represent, respectively, low and high mRNA expression level. (C) Ten most significantly enriched pathways of all dysregulated genes determined by the Panther database. (D) String analysis of the 10 most upregulated and downregulated genes in 4C11+ cells. (E-G) Vegfa, Vegfb, Vegfc (E); Vegfr-1, Vegfr-2, Vegfr-3, Npr2 (F); and Angpt2, Met, and Six1 (G) mRNA expression was determined by RT-qPCR in 4C11À and 4C11+ cells. Data are expressed as fold change normalized to Actb (n = 3). nd: not detected. Data represent mean AE SD, and statistical significance was evaluated by one-way ANOVA followed by the Dunnett's post hoc test (**P ≤ 0.01; *** P ≤ 0.001). mouse model (Souza et al., 2012). Interestingly, 4C11+ cells also developed an extensive network of capillarylike structures in an in vitro tube formation assay, indicating a high vascular mimicry (VM) capacity. These 3D tube-like structures consist of tumor cells and extracellular matrix and are endothelial cell-free. VM can function as an alternative supplier of blood to tumor masses, independently of angiogenesis, and can contribute to metastasis (Chung and Mahalingam, 2014). The high VM capability of 4C11+ cells is consistent with their excessive bleeding observed in the CAM assay and can contribute to it. Investigation of the expression pattern of cancer-related genes in 4C11À and 4C11+ cells revealed that Vegfc and Angpt2 were among the top 10 upregulated genes. 
Both are important regulators of the vascular phenotype in tumors (Kim et al., 2009). A pathwaybased analysis of all dysregulated genes revealed enrichment of several angiogenic pathways, including FGF, PDGF, and VEGF signaling. In String analysis, we found that VEGFC, ANGPT2, MET, and SHC4 proteins interact, which is a further support of their possible role in the 4C11+ cells aggressive phenotype. Several downregulated genes were shown to have a strong protein interaction, but most of these genes, such as Fgfr2, Tnc, and Lef1, have been positively associated with tumor progression (Katoh and Nakagama, 2013;Murakami et al., 2001;Shao et al., 2015), suggesting their downregulation is not involved in 4C11+ cells aggressive phenotype. Conversely, Bmp4 is a known inhibitor of angiogenesis (Tsuchida et al., 2014); thus, its downregulation probably contributes to 4C11+ tumor vascularization. Compared with 4C11À cells, 4C11+ cells presented upregulation of Vegfb, Vegfc, and Vegfr-3. Importantly, the expression of VEGF receptors by tumor cells is a known indicator of high aggressiveness (Mouawad et al., 2009). As VEGFC has a high affinity for VEGFR-3, our data indicate that the VEGF pathway effector in 4C11+ cells must be VEGFC by binding and activating VEGFR-3 in an autocrine signaling, which is classically recognized to be involved in lymphangiogenesis (Mouawad et al., 2009). As oxygen and nutrient supply are mandatory for tumor growth, the VEGF pathway, angiogenesis, and lymphangiogenesis are not only essential for the metastasis process, but also for the progression of solid tumors beyond a critical size (Alitalo et al., 2013). Indeed, recently VEGFC and VEGFD have also been reported to regulate the inflammatory tumor microenvironment, which regulates early stages of tumor growth (Alitalo et al., 2013). Nonetheless, high levels of VEGFC correlate with melanoma metastasis to lymph nodes, which is one of the most important markers of poor prognosis for melanoma patients (T ım ar et al., 2016). It has also been demonstrated that VEGFC and lymphangiogenesis can contribute to the development of distant metastasis (Ma et al., 2018). We also confirmed the expression of Angpt2, Met, and Six1; genes involved in metastasis and angiogenesis. ANGPT2 is a secreted growth factor that sensitizes endothelial cells to different proangiogenic factors, such as VEGFs, and it has been shown to promote tumor metastasis, angiogenesis, and lymphangiogenesis curves represent, respectively, low (bottom 50%) and high (top 50%) DNA methylation. Significance was determined by the log-rank test. (Holopainen et al., 2012). SIX1 was recently shown to have pro-oncogenic and metastatic properties in different tumors (Coletta et al., 2008;Wang et al., 2012). Interestingly, to the best of our knowledge, there are no studies available in melanoma. In addition, SIX1 is capable of inducing lymphangiogenesis by increasing VEGFC expression (Liu et al., 2014;Wang et al., 2012). The high expression of Angpt2, Six1, and Vegfc in 4C11+ cells suggests that they might contribute together to the vascular phenotype of these cells. Then, we blocked the VEGF pathway in 4C11+ cells using axitinib, a potent and selective inhibitor of VEGFRs . This drug competitively binds to the intracellular ATP site domain of the receptors, stabilizing them in an inactive conformation and therefore inhibiting downstream signal transduction (Gross-Goupil et al., 2013). 
The treatment induced a significant G2/M arrest, reduced 4C11+ cells ability to migrate in vitro and to develop tumors in vivo. As axitinib is a classical inhibitor of angiogenesis, most studies analyzed the effects of the drug in an already developed tumor (He et al., 2014;Zhang et al., 2014), but we aimed to examine the ability of cells previously treated with the drug to develop a tumor. While 4C11+ control cells developed big tumor masses, axitinib-pretreated 4C11+ cells only gave rise to small, noninvasive tumors. This is presumably due to the inhibition of VEGFC signaling-the only VEGF ligand expressed in these cells-which leads to an impaired tumor development. Indeed, axitinib has been reported to have antitumor activity in highly angiogenic tumors (Fruehauf et al., 2011). Axitinib has previously been shown to induce senescence in gastric cancer (He et al., 2014) and glioma cells (Morelli et al., 2016), a phenotype suggested by the G2/M arrest observed in 4C11+ -treated cells. Although senescence can influence tumor growth (Rodier and Campisi, 2011), former studies have shown that major axitinib effects are due to the VEGF pathway blockade. Nonetheless, the profound abrogation of tumor growth caused by the VEGF receptors blockage, and consequent inhibition of VEGFC function, suggests the VEGFC pathway has an important role in the aggressive phenotype of 4C11+ cells. Further studies of our group should elucidate whether lymphangiogenesis is involved in 4C11+ tumor cells dissemination. To evaluate the translational relevance of our findings, we performed a Cox multivariate analysis for the expression of VEGFC, ANGPT2, MET, SIX1, and VEGFR-3. After adjustment for the covariates, high expression of VEGFR-3 and ANGPT2 were both independent predictors of poor prognosis. Kaplan-Meier analysis illustrated that patients with low and high expression of these genes have distinct survival curves, although log-rank tests were not significant. Reinforcing these data, IHC and MELC staining demonstrated that tissues obtained from metastatic melanoma patients, which are known to have a worse prognosis, have higher expression levels of VEGFR-3 and ANGPT2 compared with tissues obtained from patients with primary melanoma. In the samples analyzed by the MELC technique, VEGFR-3 was mainly detected in the tumor vasculature, while ANGPT2 was highly expressed by the tumor cells. Although these molecules are predominantly expressed by endothelial cells, recent studies have reported their expression by tumor cells (Streit and Detmar, 2003;Su et al., 2008aSu et al., , 2008b. Notably, VEGFR-3 and ANGPT2 were previously observed to be expressed in melanoma tumor cells (Helfrich et al., 2009;Mouawad et al., 2009). Vascular endothelial growth factor receptor-3 expression was already shown to be a prognostic marker of disease-free survival in gastric adenocarcinoma (J€ uttner et al., 2006), but there are only a few studies in melanoma. Soluble VEGFR-3 has been reported to be associated with disease-free survival but not overall survival in melanoma patients (Mouawad et al., 2009). Moreover, ANGPT2 expression was correlated with overall patient survival in colorectal and breast cancer (Hong et al., 2017;Sfiligoi et al., 2003). In melanoma, one study demonstrated high levels of circulating ANGPT2 to be associated with poor patient overall survival (Helfrich et al., 2009). 
To our knowledge, we are the first to show that ANGPT2 (HR = 1.189; P = 0.002) and VEGFR-3 (HR = 1.999; P = 0.044) are independent predictors of prognosis in melanoma patients. Furthermore, we detected that Vegfc, Angpt2, and Six1 promoters were hypomethylated in 4C11+ cells compared with 4C11À cells. Importantly, the methylation status of these genes was inversely correlated to their mRNA expression levels. Indeed, treatment of 4C11À cells with 5-Aza-CdR increased the expression of the three genes, suggesting that DNA methylation regulates their transcription. Interestingly, we found low levels of VEGFC, ANGPT2, and SIX1 promoter methylation are independent prognostic factors of poor patient survival. VEGFC expression has previously been shown to predict melanoma patient survival (Boone et al., 2008;Liu et al., 2008), and its expression has been shown to be regulated by DNA methylation in gastric cancer (Matsumura et al., 2007). Consistently, VEGFC promoter methylation was reported to be associated with progression-free survival in ovarian cancer (Dai et al., 2013). Concerning ANGPT2, the methylation status of 6 CpGs near the gene transcription site (four of which were analyzed in this study) has already been shown to predict overall survival of chronic lymphocytic leukemia patients (Martinelli et al., 2013). Several studies have reported that SIX1 overexpression is frequently associated with poor patient prognosis in various malignancies, as colorectal cancer (Kahlert et al., 2015) and glioma (Zhang and Xu, 2017), however not in melanoma. Besides, the mechanisms responsible for the high expression of SIX1 have been poorly investigated. Methylation of SIX1 promoter has previously been reported as a transcription regulatory mechanism in the porcine and bovine muscle (Wei et al., 2018;Wu et al., 2013). To our knowledge, this is the first report to show VEGFC (HR = 0.035; P = 0.031), ANGPT2 (HR = 0.168; P < 0.001), and SIX1 (HR = 0.591; P = 0.009) promoter methylation as prognostic markers for melanoma patient survival. Conclusion In summary, we found that the VEGFC pathway is highly correlated with tumor aggressiveness in our murine model. Moreover, we identified VEGFR-3 and ANGPT2 expression, as well as VEGFC, ANGPT2, and SIX1 promoter methylation, as independent prognostic factors for overall survival in melanoma patients, showing the high translational relevance of our findings obtained in a murine model. Supporting information Additional supporting information may be found online in the Supporting Information section at the end of the article. Fig. S1. Representative macroscopic images and HE stained tissue sections (49 magnification) of 4C11+ tumors grown on the CAM for 5 days. Fig. S2. Representative macroscopic image and HE staining (49 and 409 magnification) of tumors grown on the CAM. 4C11+ cells were pretreated in vitro with 1 lM of Axitinib for 48 h and applied onto the CAM. Tumors grown were removed after 5 days. Fig. S3. Representative images of VEGFR-3 and ANGPT2 staining in normal skin and colon, which were used as negative controls of the IHC staining. Table S1. Primers sequences for RT-qPCR reactions. Table S2. mRNA upregulated in 4C11+ cells in comparison to 4C11À cells as assessed by the NanoString Panel. Table S3. mRNA downregulated in 4C11+ cells in comparison to 4C11À cells as assessed by the Nano-String Panel. Table S4. CpGs differentially methylated of Vegfc, Angpt2 and Six1 promoters regions in 4C11+ cells compared to 4C11À cells evaluated by ERRBS.
An Observational Pilot Study of a Tailored Environmental Monitoring and Alert System for Improved Management of Chronic Respiratory Diseases Objective Chronic lung-related diseases, with asthma being the most prominent example, characterized by diverse symptoms and triggers, present significant challenges in disease management and prediction of exacerbations across patients. This research aimed to devise a practical solution by introducing a personalized alert system tailored to individual lung function and environmental conditions, offering a holistic approach for the management of a range of chronic respiratory conditions. Methods In response to these challenges, we developed a personalized alert system based on individual lung function tests conducted in diverse environmental conditions, as determined by air-quality sensors. Our research was substantiated through an observational pilot study involving twelve healthy participants. These participants were exposed to varying air quality, temperature, and humidity conditions, and their lung function, as indicated by peak expiratory flow (PEF) values, was monitored. Results The study revealed pronounced variability in pulmonary responses across different environments. Leveraging these findings, we proposed a design of a personalized alarm system that monitors air quality in real-time and issues alerts under potentially unfavorable environmental conditions. Additionally, we investigated the use of basic machine learning techniques to predict PEF values in these varied environmental settings. Discussion The proposed system offers a proactive approach for individuals, particularly those with asthma, to actively manage their respiratory health. By providing real-time monitoring and personalized alerts, it aims to minimize exposure to potential asthma triggers. Ultimately, our system seeks to empower individuals with the tools for timely intervention, potentially reducing discomfort and enhancing management of asthma symptoms. Introduction Asthma, a chronic pulmonary disorder, afflicts approximately 374 million people globally with annual 461,000 deaths. 1 It is concerning to acknowledge that despite the highest incidence of asthma being reported in countries with high sociodemographic indices (SDIs), the maximum mortality rate due to this condition is observed in low and middle SDI countries. 2ccording to the most recent report from the Global Initiative for Asthma (GINA), asthma remains insufficiently diagnosed, with its prevalence being inadequately documented in numerous middle and low-income countries. 3A closer look at the United Arab Emirates (UAE) reveals that the disease prevalence among school-aged children is approximately between 9.8% and 11.9%, with some variability attributable to age. 4 In neighboring Saudi Arabia, the situation is similar, if not more concerning.The number of children with asthma is much higher than adults, with rates varying from 9% to 33.7% depending on the area. 5In the broader Middle East region, asthma prevalence varies between 4.4% and 7.6%. 6Asthma is defined by periodic episodes of wheezing, shortness of breath, and coughing, caused by the inflammation and narrowing of the respiratory tract. 
7This chronic condition stems from the complex interaction of genetic and environmental elements, such as air pollution.Asthma places an enormous strain on patients' physical, social, emotional, and professional lives, with a significant impact on their quality of life and a marked burden on healthcare systems worldwide. Asthma management is challenging due to the vast variability in individual symptoms and triggers, such as exercise, weather, cold air, certain foods, tobacco smoke, temperature changes, humidity, and strong odors. 8,9Recognizing and avoiding these triggers is essential, but identification and active avoidance are often difficult. 10The effects of these triggers vary among individuals, 3 emphasizing the importance of monitoring and predicting them to prevent acute episodes.Current literature mostly focuses on bio-signal factors for asthma prediction, 11 with fewer studies examining environmental factors. 12Crucially, only a handful have combined both elements in research. 13sthma management has traditionally centered on medication use, patient education, and triggers avoidance, with recent trends shifting towards integrated care approaches aimed at improving asthma control and enhancing patients' quality of life. 14Despite these strides, the variability of symptoms and triggers necessitates personalized management. 15odern healthcare emphasizes strategies tailored to individual asthma phenotypes and endotypes. 16Advanced understanding of asthma's pathophysiology has led to targeted biologic therapies and precision medicine. 17The rise of digital health interventions, like smart inhalers, offers continuous monitoring and personalized feedback. 18While these developments are promising, personalized asthma management remains nascent, requiring further research, especially in realtime monitoring and predictive analytics. 19ersonalized alert systems are revolutionizing chronic disease management with real-time monitoring and response mechanisms.In diabetes, continuous glucose monitors with alert features enhance glycemic control and reduce hypoglycemic episodes. 20,21For heart disease, remote monitoring systems with personalized alerts aid in early cardiac anomaly detection, reducing hospitalizations and improving patient quality of life. 22,23In respiratory diseases like asthma or COPD, smart inhalers and spirometers provide real-time feedback and personalized medication and lung function alerts, potentially transforming management. 24,25This trend highlights a shift towards proactive, individualized chronic disease management emphasizing real-time monitoring and patient engagement. Environmental factors, alongside biological ones, are crucial in understanding asthma triggers.7][28] Recognizing the role of these factors is pivotal in shaping individualized asthma management strategies.This may involve alert systems warning patients of adverse air-quality or weather conditions exacerbating their symptoms.Given asthma's variability and unpredictability, there's a call for tailored monitoring approaches.Although standard methods work for most, they sometimes neglect those with unique symptom profiles and specific triggers. 29This underscores the need for a shift towards personalized strategies, which recognize individual symptom patterns and triggers, integrating both biological and environmental aspects, and suggesting efficient monitoring methods. 
Our research is geared towards creating an affordable, practical, and personalized alert system to revolutionize chronic lung-related diseases, including asthma.Based on individual pulmonary function tests and real-time environmental data, our proposed system aims to provide timely, personalized alerts according to the individual's respiratory health status.We propose that this tailored approach will gradually empower individuals to proactively monitor their respiratory health, foresee potential exacerbations, and initiate timely interventions, consequently avoiding asthma attacks and enhancing their overall quality of life.While our current study focuses on healthy individuals, we believe that the concept and scenario are applicable and beneficial for asthmatic patients. 17 System Overview Our objective in designing the personalized alert system is to safeguard individuals' respiratory wellbeing by issuing timely alerts prior to their exposure to potentially discomforting environmental conditions.This is achieved by monitoring and correlating their lung function with specific environmental data.Given the considerable variability of lung function among individuals, our goal is to create a system capable of accommodating this diversity and delivering tailored alerts that align with each individual's unique respiratory characteristics.The alert system operates by integrating data from two primary sources: individual lung function tests and real-time environmental monitoring. For the lung function tests, we used spirometers to capture Peak Expiratory Flow (PEF) values. 30PEF values are a reliable measure of lung function, and fluctuations in these values can indicate a potential asthma attack.For environmental monitoring, we deployed two types of portable air-quality sensors, one custom-built and the other commercially available, both adept at identifying a variety of common asthma triggers, such as particulate matter, humidity, and temperature fluctuations, Figures 1-3.The custom-built sensor in this experiment has neither direct contact nor giving prescriptive actions to the study participants.It was incorporated parallelly in this pilot study for validation purposes, and to facilitate future use.When the system predicts a potential risk, it generates an alert.The alerts are designed to be unobtrusive yet attention-grabbing, delivered through the device and the associated smartphone app.The app not only notifies the individual of a potential risk but also provides initial recommendations on how to avoid it, such as staying indoors or refreshing the room air, Figure 4). Air-Quality Sensors In this study, we employed two types of sensors to gauge the quality of air within a room: a customized lab made sensor constructed using an ESP32 microcontroller, Figures 1 and 2, and a commercial air-quality sensor to authenticate the readings obtained from the customized device, Figure 3.We are utilizing the customized sensor to facilitate the programming of varying alarm thresholds, which were based on data derived from the PEF reader, details are in Figure 4.Such a feature is not accessible in the closed-source commercial sensors. 
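To illustrate how such a node can expose programmable alarm thresholds, a minimal MicroPython-style sketch for an ESP32 reading a DHT-11 sensor is shown below. The pin assignment, threshold values, and reporting mechanism are assumptions for the sake of the example, not the firmware actually used in the study.

```python
# MicroPython sketch (ESP32): read a DHT-11 and compare against programmable limits.
import time
import dht
from machine import Pin

sensor = dht.DHT11(Pin(4))                       # data pin is an assumption
LIMITS = {"temperature": 30, "humidity": 60}     # placeholder alert thresholds

while True:
    sensor.measure()
    reading = {"temperature": sensor.temperature(),  # degC
               "humidity": sensor.humidity()}        # %RH
    breaches = [k for k, v in reading.items() if v > LIMITS[k]]
    if breaches:
        print("ALERT:", breaches, reading)  # in practice, pushed to the phone app
    time.sleep(60)
```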
Both sensors were designed to monitor several parameters, including humidity, temperature, fine particulate matter with a diameter of 2.5/10 micrometers or smaller (PM2.5/PM10), total volatile organic compounds (TVOC), and carbon dioxide (CO2). In the customized device, we employed the DHT-11 sensor module to record temperature and humidity readings, while the TVOC sensor module was utilized to assess air quality and carbon dioxide (CO2) levels. The PM2.5 sensor module was used to measure dust particles. Table 1 outlines the recorded parameters, the corresponding sensor module used, and the range of sensed data. The architecture of the sensor node, in terms of the required sensor array and the employed microcontroller, is illustrated in Figure 2; the ESP32 microcontroller facilitates the processing and communication between the sensors and the user application.
Figure 1: The customized Air-Quality Sensor (AQ-S). The sensor, encased in a 3D-printed cover, has been engineered for portability and scalability. The fan in the air-quality sensor is used to draw in ambient air for sampling, ensuring a continuous and representative analysis of the surrounding air quality. Height: 2.5 cm, W/L: 5 cm.
Lung Function Tester
The selection of participants for our pilot study was centered on healthy young individuals who did not have a known diagnosis of asthma or any other chronic respiratory condition. We chose healthy volunteers to control for any potential confounding effects that existing respiratory conditions might have on lung function or response to environmental conditions. A total of 12 volunteers, aged between 20 and 25 years, were recruited as participants for the study, as shown in Table 2. Prior to their involvement, all participants provided informed consent for their participation in the research. The study was approved by the Ethics Committee of the University of Tabuk, which complies with the Declaration of Helsinki. For lung function tests, we used a standard spirometer to measure Peak Expiratory Flow (PEF) values. These values provide a measure of the maximum speed at which participants can exhale, offering a reliable indicator of lung function. PEF is a measure of how fast a person can exhale air after taking a deep breath, and it is influenced by various factors, including environmental conditions. Participants were instructed on the proper use of the spirometer, and measurements were taken three times for each participant in each environment, after being in the environment for 20 minutes, to ensure accuracy. The best of these three values was recorded as the participant's PEF for that specific environmental condition. Upon completion of the data collection process, participants were requested to fill out a brief survey. The survey was used to streamline data collection and minimize external factors. The survey consists of three concise questions: A) Did the participant experience breathing discomfort during the experiment? (yes/no). B) Were there any factors that may have affected the data collection? (yes/no). C) If yes, could the participant specify the factor and provide additional comments? This step was crucial to provide subjective insights that may complement the objective data collected through the PEF.
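The best-of-three PEF protocol and the percent-of-predicted classification used later in the paper can be sketched as below. The predicted value and readings are invented, and the yellow/red cut-offs follow the conventional asthma action-plan zones as an assumption, since only the green zone (>80% of predicted) is stated in the study.

```python
def pef_zone(readings_l_min, predicted_l_min):
    """Take the best of three PEF readings and classify it against predicted PEF."""
    best = max(readings_l_min)
    pct = 100.0 * best / predicted_l_min
    if pct > 80:        # "green zone" cut-off used in the study (Hankinson model)
        zone = "green"
    elif pct >= 50:     # conventional yellow zone (assumption, not from the study)
        zone = "yellow"
    else:
        zone = "red"
    return best, pct, zone

# Hypothetical session: three blows after 20 minutes in the test environment.
print(pef_zone([430, 455, 448], predicted_l_min=520))  # (455, ~87.5, 'green')
```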
Collection of Environmental Data and PEF

In our research, we employed air-quality sensors (AQS) to measure an array of environmental parameters, encompassing levels of particulate matter, relative humidity, and ambient temperature. PEF was monitored in parallel with these air-quality metrics under four distinct environmental conditions: typical daily conditions, conditions with moderate dust/smoke enrichment, moderate temperature conditions, and moderate humidity conditions. Given the observational nature of our study, we instructed participants to measure their PEF and document the AQS readings in their primary living and sleeping spaces during standard conditions. An average AQS value was computed from these recordings for analysis. Under the conditions of moderate dust/smoke enrichment, participants recorded PEF and AQS measurements during their routine home incense-burning activities. For scenarios with moderate temperature, participants conducted the PEF and AQS measurements in the afternoon within a room where the air conditioning had been turned off for the entire day. Finally, under moderate humidity, participants recorded PEF and AQS measurements during the afternoon in a room where the air conditioner was operating in its dehumidifying, or "dry", mode without its cooling function activated.

The collected environmental data was analyzed in tandem with the participants' lung function data. We applied simple statistics to discern patterns and correlations between specific environmental conditions and changes in PEF values (details are in the results section). This approach allows us to establish a baseline of how each individual's lung function, as measured by PEF, responds to different environmental conditions. This analysis formed the basis of our proposed personalized alert system, enabling us to identify potential environmental triggers for each individual and thereby tailor the alert system to each participant's unique profile.

Figure 4: System flowchart. The variable "X" is a time-dependent measure, established through the evaluation of PEF responses to environmental alterations. Based on the findings of this pilot study, the value of "X" is initially set at 0.4; it is anticipated that this value will be progressively adjusted to accurately reflect the lung capacity of each individual.

PEF in Standard Environment Condition

The data collected from the air-quality device, together with the corresponding best-of-three PEF readings taken after 20 minutes spent in a standard indoor environment, are presented in Table 3.
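As an illustration of the "simple statistics" step described above, the following Python sketch derives each participant's baseline, the percent PEF change per condition, and a basic correlation. It is not the authors' analysis code; the file and column names (pef_environment_readings.csv, participant, condition, pef, pm25) are hypothetical.

```python
# Illustrative analysis sketch (assumed file and column names).
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("pef_environment_readings.csv")   # hypothetical file

# Best-of-three PEF per participant and condition (as in the protocol)
best = df.groupby(["participant", "condition"])["pef"].max().unstack()

# Percent change of each condition relative to the standard environment
baseline = best["standard"]
pct_change = best.sub(baseline, axis=0).div(baseline, axis=0) * 100
print(pct_change.round(1))

# Simple association between one environmental variable and PEF
r, p = pearsonr(df["pm25"], df["pef"])
print(f"PEF vs PM2.5: r = {r:.2f} (p = {p:.3f})")
```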
From the table we can conclude: First, we conducted a comparative analysis between our custom-built sensor and the commercially available counterpart.By comparing these air-quality measurements, we identified a high mean correlation coefficient of the recorded values of 95%.The noteworthy correlation coefficient signifies the reliability and accuracy of our custom device when compared against the commercial standard.This demonstrates the potential of our customized device as an affordable, yet precise open-sourced tool for monitoring air-quality and potential asthma triggers in realtime, and its potential integration into a personalized alert system for asthma management.Second, we juxtaposed the air quality values derived from these sensors with the recorded PEF values.In other words, we evaluated the Pearson correlation coefficient for each environmental variable in relation to the PEF values.Results are illustrated in Table 4.This assessment allowed us to discern which environmental factors had the greatest effect with PEF values.We found that the PM2.5 value had the strongest correlation with the PEF value.Other parameters, however, did not demonstrate a significant correlation (<0.5).These preliminary results suggest that PM2.5 levels are important to monitor, as they potentially exert a significant impact on PEF values. Third, Table 3 confirms the healthy status of all study participants.Their PEF values were categorized following the criteria established by the Hankinson model. 32According to this model, PEF values exceeding 80% of the predicted optimum are classified as "normal" -a category often referred to as the "green zone".All participants in our study fell within this "green zone", underscoring their healthy condition. PEF in Different Environment Conditions Table 5 showcases the average and the standard deviation (SD) values registered from the air-quality sensor across varying environmental conditions to which the study participants were experienced.For ease of understanding, we have categorized these environmental conditions into 1) Air-quality room (Good, Medium), 2) Temperature (Normal, Slightly Hot), and 3) Humidity (Normal, Slightly-High).In the interest of safeguarding our participants, we consciously avoided subjecting them to drastic changes in environmental conditions, which could have possibly resulted in a more substantial correlation with PEF values.Despite this approach, which might be viewed as a limiting factor, our intent was to maintain the study as an initial exploration primarily aimed at understanding the overall effect of diverse environmental parameters on PEF values.The driving principle behind this decision was our objective to validate our study's concept, while ensuring that participant safety remains paramount.The table indicates that in a room with medium air-quality, there was a noticeable rise in PM2.5 levels by 67%.Similarly, in a room with a slightly elevated temperature, there was a 45% increase in the temperature recorded.Moreover, in a room with slightly high humidity, the humidity levels saw a surge of 66%.Table 6 presents the PEF values under the different environmental conditions.This table reveals that PEF responds differently to different environmental conditions among participants.For instance, a noticeable decrease in PEF values was observed in some individuals when exposed to high levels of particulate matter PM2.5, suggestive of a potential trigger.Contrarily, variations in humidity and temperature did not evoke significant 
changes in PEF measurements for certain participants. Despite all participants experiencing the same shifts in environmental settings, their PEF values varied, suggesting that their respiratory systems responded uniquely to these stimuli. Participants 3 and 10, for instance, demonstrated heightened sensitivity to minor changes in room air quality in comparison to others. Consequently, it might be beneficial to establish lower thresholds for their air-quality alarms. By doing so, these individuals would receive early warnings, enabling them to vacate the room ahead of others and thereby safeguarding their well-being. The numbers underlined in the table represent participants who reported experiencing respiratory discomfort during the experiment, as per the first question of the administered survey. In response to the second and third questions of the survey, which probe for potential factors influencing data collection or any noteworthy comments, no significant points were reported. These underlined values highlight some notable observations. Participant 10 reported respiratory discomfort, which corresponded with a significant drop in her PEF value (>0.4). Conversely, Participant 9 reported respiratory discomfort without a corresponding dip below the predetermined PEF threshold (<0.4). This is consistent with the variability of normal human physiology and suggests that the threshold value for air-quality alerts should be personalized and dynamically adjusted to each individual's unique respiratory comfort level.

Personalized Threshold Identification

To determine potential triggers for each participant, we evaluated the correlation between environmental conditions and changes in PEF values. The identification of personalized triggers was based on observing substantial decreases in PEF values when individuals were exposed to specific environmental conditions (Table 6). From Tables 5 and 6, we can conclude that when PM2.5 increased by 67%, 2 of the 12 participants showed changes of more than 4% in their PEF values, and one of those two reported feeling uncomfortable. For such participants, an alert could be triggered, for instance, whenever PM2.5 rises 30-40% above the standard room value, given their sensitivity to PM2.5. Although another participant reported discomfort (the underlined value of Participant 9), we did not observe significant changes in that participant's PEF values. This observation was not the focus of this study and has been set aside; however, it highlights the potential influence of factors beyond lung function that could be considered in future research.

While our initial expectation was to establish an individual threshold for each participant, limitations related to sample size and the slight variations in environmental conditions led us to divide the participants into two groups. One group demonstrated higher sensitivity to air-quality changes, reflected in a PEF change of more than 4% (arbitrarily set), while the other group showed lesser sensitivity. Consequently, for the first, more sensitive group, we suggest setting the air-quality alert threshold to a 30-40% change in PM2.5, as sketched below. This would mean that if the PM2.5 value shifts by this percentage, an alert is triggered, advising the user to either leave the room or ventilate the space to improve air quality.
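A minimal sketch of that two-group rule, with the 4% PEF-drop cut-off and the 30-40% PM2.5 alert band taken from the text; the function name and the exact return values are illustrative assumptions.

```python
# Sketch of the grouping rule: participants whose PEF drops by more than 4%
# in the medium-air-quality room get a tighter PM2.5 alert threshold.
SENSITIVE_PEF_DROP = 0.04      # 4% drop, arbitrarily set in this pilot

def pm25_alert_ratio(pef_standard, pef_medium_aq):
    """Return the fractional PM2.5 rise above baseline that should trigger
    an alert for this participant (values are illustrative)."""
    drop = (pef_standard - pef_medium_aq) / pef_standard
    if drop > SENSITIVE_PEF_DROP:
        return 0.35            # sensitive group: alert at ~30-40% PM2.5 rise
    return 0.67                # less sensitive: alert only near the tested rise

print(pm25_alert_ratio(480, 455))  # e.g. a ~5% PEF drop -> 0.35
```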
Figure 4 illustrate the overall model of the proposed personalized alerting system, considering the insights gained from Tables 3, 5 and 6.The model consists of two stages: the initial stage and the personalized stage.In the initial stage, the system prompts the user to measure PEF in various environmental conditions.The model then calculates the changes in PEF values in relation to the changes in air-quality observed in these environments.Based on this information, the model sets the alert threshold "X" on the customized air-quality device.Once the threshold is set, the system enters the personalized stage.During this stage, the air-quality device periodically alerts the user to potential exposure to uncomfortable environmental conditions.The alerts serve as reminders for the user to take necessary precautions or adjust to ensure their comfort in response to the detected changes in air-quality.The threshold "X" is adaptable to ensure user comfort and can be fine-tuned accordingly.Additionally, the method for predicting the value of "X" will be detailed in the following section. Predicting PEF Values in Different Environmental Conditions In this study, we also explored the application of machine learning models to predict PEF values in different environmental conditions, good AQ (G_AQ) and medium AQ (M_AQ) conditions.The goal was to investigate the feasibility and effectiveness of using machine learning algorithms to predict PEF values in these specific environmental contexts.To achieve this, we employed feature selection methods, namely CfsSubsetEval 33 and the Best_First search method, to identify the most relevant features that have an impact on PEF values in each condition.We then tested several classifiers, including linear regression, Multilayer Perceptron, and SMOreg (support vector machine for regression), 34 to predict PEF values, Table 7.Although the root mean squared error (RMSE) values were relatively high, likely due to the limited size of our datasets, we observed interesting patterns: 1) Predicting PEF values in G_AQ conditions proved to be challenging using the available parameters. 2) We noted the potential in predicting PEF values for M_AQ conditions using a subset of the existing features (column 3, Table 7).These findings suggest that it is possible to predict PEF values, particularly for individuals who are sensitive to increases in PM2.5, and CO2 levels.However, we did not investigate the prediction of PEF in high temperature or high humidity conditions due to the lack of significant changes in PEF values under these conditions, Table 6.Although promising, the overall finding at this stage highlights the need for further exploration and improvement in predicting PEF values under different conditions.It's pertinent also to note that, as we are developing our own air-quality sensors, we are considering the addition of a carbon monoxide (CO) sensor in future design.Given the known impact of CO on prevalent obstructive lung diseases such as asthma and COPD, we anticipate that this inclusion could offer significant value.Specifically, we expect a high correlation between changes in CO levels and alterations in PEF. 
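The study used Weka (CfsSubsetEval, Best_First, SMOreg); the sketch below recreates an analogous workflow in scikit-learn, substituting SelectKBest for the feature selector and SVR for SMOreg, with hypothetical input files. It is meant only to show the shape of the modelling step, not to reproduce the reported results.

```python
# Illustrative re-creation of the PEF-prediction step in scikit-learn.
# Inputs are hypothetical NumPy arrays of environmental features and PEF.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X = np.load("env_features_medium_aq.npy")   # hypothetical: PM2.5, CO2, ...
y = np.load("pef_medium_aq.npy")            # hypothetical PEF targets

for name, model in [("linear", LinearRegression()),
                    ("mlp", MLPRegressor(max_iter=2000, random_state=0)),
                    ("svr", SVR())]:
    # Univariate feature selection stands in for CfsSubsetEval/Best_First
    pipe = make_pipeline(StandardScaler(),
                         SelectKBest(f_regression, k=3),
                         model)
    rmse = -cross_val_score(pipe, X, y, cv=3,
                            scoring="neg_root_mean_squared_error")
    print(name, "RMSE:", rmse.mean().round(2))
```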
Table 7 provides information on the performance of the number of the selected attributes and the tested models for predicting PEF in different conditions.The table displays the selected attributes in order of relevance, along with the correlation coefficient (CC) and Root Mean Square Error (RMSE) values obtained from linear regression, multilayer perceptron, and SMOreg model. Insights and Limitation From literature, we acknowledge the potential influence of factors such as air-quality, temperature, and humidity on individual PEF values, however, the main objective of this pilot study was to demonstrate the potential of developing personalized air-quality sensor alerts.These alerts are customized based on an individual's lung function and readings obtained from different environmental conditions.By considering these factors, the proposed model aims to provide tailored alerts to individuals, enabling them to take proactive measures in response to their unique respiratory health needs.Another aspect we explored in this research was the use of machine learning.Our preliminary experiment suggests that there is potential to predict PEF values in different environmental scenarios by increasing the number of participants and environmental conditions.Although our study had limitations due to the small sample size and limited environmental variations, the results indicate promising avenues for further research and improvement.By incorporating machine learning techniques and expanding the dataset in real-world studies, we can enhance the accuracy and applicability of PEF predictions in diverse environmental conditions. There are several limitations in this study that we acknowledge, including the limited number of participants.Furthermore, in our efforts to ensure participant safety by not exposing them to hazardous environments, we only introduced minor environmental variations.This precautionary measure, coupled with the restricted time each participant spent in each setting, may have influenced the efficacy of detecting significance PEF changes.These constraints also precluded us from conducting comprehensive statistical significance tests.While acknowledging the limitations at this stage of our study, we contend that the novelty of our research is rooted in highlighting the variability of individual PEF reactions to environmental shifts.As such, it's clear that the prevailing "one-size-fits-all" model adopted by existing airquality sensors in the market is not practically viable.Thus, our study highlights a methodology for creating a customized array of alert systems.These systems are specifically designed to adapt to each individual's unique lung functions and possess the capability to evolve and learn over time. In the next stages of this study, we aim to address the limitations of this work by expanding our participant sample size for more diverse data and conducting a longitudinal study to better understand the real-world impacts of environmental changes on lung function over time.We will also include a broader range of environmental factors, such as pollution levels and allergens, and continue to refine our personalized alerting system, which will be tuned by Machine learning models.Ultimately, we envision testing our system through clinical trials to assess its effectiveness and make necessary adjustments for real-world implementation.Our goal remains to develop a responsive and individualized alerting system for people with varying lung functions. 
While our pilot study is focused on healthy individuals, this selection is pivotal to the validation of the overall concept. From an ethical perspective, it would be inappropriate to expose high-risk asthmatic individuals to known respiratory irritants within the context of this study. Instead, our approach necessitates conducting an introductory examination on healthy participants and gradually adapting the system for higher-risk groups in real-world contexts once all requisite safety protocols are in place. Furthermore, demonstrating lung function variability among low-risk individuals strongly suggests that the alert system can be adaptable and efficacious for higher-risk individuals, including asthma patients. This indication is not only plausible but offers a compelling case for the adaptability of our system, and we regard this methodological approach as a strength of our study. Given that well-controlled asthma patients generally exhibit low PEF variability, akin to healthy individuals, they will be an appropriate group for further study. Conversely, poorly controlled, high-risk asthmatics usually present high PEF variability, which our proposed alert system should be more likely to detect.14,17,35

The insights from this study could be further enhanced by including measurements of Forced Expiratory Volume in the first second (FEV1) and Forced Vital Capacity (FVC) alongside Peak Expiratory Flow (PEF). Nonetheless, adhering to the Global Initiative for Asthma (GINA) guidelines, which are authoritative for asthma monitoring outside of clinical environments, the use of either PEF or FEV1 is acceptable. For the sake of simplicity and the practicality of home monitoring, we focused solely on PEF. PEF serves as an effective measure for controlling and tracking asthma, forecasting flare-ups, and identifying potential triggers. Its ease of use, affordability, and portability render it particularly appropriate for self-management in domestic settings. While PEF is primarily indicative of large airway patency, comprehensive spirometry tests that include the FEV1/FVC ratio are critical for diagnosing various lung disorders, including both obstructive and restrictive diseases. Acknowledging this, PEF maintains its crucial role as a fundamental element in managing asthma in day-to-day settings.
Conclusion

This pilot study highlights the potential of personalized approaches in the management of chronic lung-related diseases, including asthma. We have demonstrated the feasibility of an alert system that associates individual lung function with changes in environmental data, providing a personalized alarm system to aid proactive asthma management. Although preliminary, this result highlights its potential as a robust tool for individuals to actively monitor their respiratory health, anticipate potential respiratory discomfort, and initiate timely interventions. While we acknowledge that our study was conducted on healthy individuals, the trends and patterns observed provide a firm basis for predicting how such a system might benefit those living with chronic respiratory conditions like asthma. Given the pathophysiology of asthma, the airways, and therefore PEF, are highly sensitive to environmental triggers, and we speculate that such a personalized alarm system would be a valuable tool to augment asthma management. As we advance into an era marked by a growing emphasis on personalized healthcare and digital health technologies, it is vital to continue to explore and innovate ways to make the management of conditions like asthma more tailored, proactive, and efficient. The presented alert system structure is one step towards this vision, illustrating a compelling intersection of personalized healthcare and technology.

Figure 2: Node architecture. Details about the sensor array used are in Table 1. The ESP32 microcontroller facilitates the processing and communication between the sensors and the user application.

Table 2: Participant demographics.

Table 1: Specification of the custom-built (Figure 2) and commercially available (Figure 3) air-quality sensors. The classification of good/normal indoor air quality adheres to the global air-quality guidelines established by the World Health Organization (WHO).31

Table 3: PEF and air-quality measurements in an indoor "normal" daily environment. The table shows two air-quality values; the left/right values are from the commercial/customized device, respectively.

Table 4: Statistical evaluation of R-squared values for variables relative to PEF. Note: *Significant correlation (≥0.3) with PEF value.

Table 5: Air-quality measurements in the different indoor environments. AQ: air quality; Temp: temperature; HM: humidity; S: slightly.

Table 6: PEF values in different environmental conditions.

Table 7: Predicting PEF in different environmental conditions.
Diagnostic Implications of Antigen-Induced Gamma Interferon, Nitric Oxide, and Tumor Necrosis Factor Alpha Production by Peripheral Blood Mononuclear Cells from Mycobacterium bovis-Infected Cattle ABSTRACT Bovine tuberculosis in the United States has proven costly to cattle producers as well as to government regulatory agencies. While in vivo responsiveness to mycobacterial antigens is the current standard for the diagnosis of tuberculosis, in vitro assays are gaining acceptance, especially as ancillary or complementary tests. To evaluate in vitro indices of cellular sensitization, antigen-induced gamma interferon (IFN-γ), nitric oxide (NO), and tumor necrosis factor alpha (TNF-α) responses by blood mononuclear cells from Mycobacterium bovis-infected cattle were quantified and compared. Using an aerosol model of infection, two doses of each of two strains of M. bovis (95-1315 and HC-2045T) were used to induce a range of IFN-γ, NO, and TNF-α responses. Infection-specific increases in NO, but not in IFN-γ or TNF-α, were detected in nonstimulated cultures at 48 h, a finding that is indicative of nonspecific activation and spontaneous release of NO. The infective dose of M. bovis organisms also influenced responses. At 34 days postinfection, IFN-γ, NO, and TNF-α responses in antigen-stimulated cells from cattle receiving 105 CFU of M. bovis organisms were greater than responses of cells from cattle infected with 103 CFU of M. bovis organisms. The NO response, but not the IFN-γ and TNF-α responses, was influenced by infective strains of M. bovis. The TNF-α, NO, and IFN-γ responses followed similar kinetics, with strong positive associations among the three readouts. Overall, these findings indicate that NO and TNF-α, like IFN-γ, may prove useful as indices for the diagnosis of bovine tuberculosis. First described by Robert Koch in 1891, the tuberculin skin reaction has been the principal means of tuberculosis diagnosis for both humans and domestic animals (23). For cattle, the caudal fold skin test (CFT) is the primary approved test for tuberculosis within the United States. The CFT relies on in vivo reactivity to Mycobacterium bovis purified protein derivative (PPDb) injected intradermally into a fold of skin at the base of the tail. Cattle classified as reactors or suspect with this test are often retested by using the comparative cervical skin test, in which PPDb is injected at one site and M. avium PPD (PPDa) is injected at a separate site. The comparative cervical skin test, while technically more challenging than the CFT, provides an added ability to distinguish M. avium (including M. avium subsp. paratuberculosis) responders from M. bovis responders. An in vitro method of tuberculosis diagnosis has also been developed (41) and approved for use in the United States as a complementary test (i.e., in conjunction with the skin test) (22). The in vitro assay detects gamma interferon (IFN-␥) produced differentially by peripheral blood mononuclear cells (PBMC) exposed to no antigen (i.e., background response), PPDa, PPDb, or mitogen (e.g., pokeweed mitogen, PWM) (40). The assay is particularly suitable for diagnostic laboratories, as whole blood cultures are used, thus circumventing the need for cumbersome cell separation techniques. More recently, recombinant antigens specific for virulent tubercle ba-cilli (e.g., ESAT-6, CFP-10, MPB-59, MPB-64, and MPB-70) have been evaluated for use in tests that discriminate among M. avium-exposed, M. bovis BCG-vaccinated, and tuberculous cattle (3,4,35). 
These antigens have demonstrated utility for both in vitro (i.e., IFN-␥ test) and in vivo (i.e., skin test) use (12,20,27,28,34,39). Despite these advances, there is still a need for convenient and inexpensive tests for bovine tuberculosis. The proven, practical application of an IFN-␥-based assay for tuberculosis diagnosis is not surprising considering the robust cell-mediated response generated by tuberculosis complex mycobacteria. Indeed, IFN-␥ is crucial for effective host defense during tuberculosis (8,15,18,25,30). Other readouts of mycobacterial immunity, especially cellular reactivity, may also have diagnostic application. Two essential components of tubercular host defense include nitric oxide (NO) and tumor necrosis factor alpha (TNF-␣) (1,13,14,16,17,21). Stimulation of inducible nitric oxide synthase in macrophages and subsequent generation of reactive nitrogen intermediates are potent mechanisms for mycobacterial killing (6,7,9,13,21). Mycobacterium-induced TNF-␣ and IFN-␥ secretion by T cells and/or macrophages from infected individuals is responsible for an antimycobacterial defense mediated by reactive nitrogen intermediates (17,33). TNF-␣ is also necessary for containment of the infection (i.e., granuloma formation). Mice deficient in TNF-␣ or TNF-␣ receptor are highly susceptible to fatal mycobacterial infections and fail to develop organized granulomas (11,17,19). NO and TNF-␣, like IFN-␥, are readily produced by mycobacterium-induced PBMC from M. bovis-infected cattle (24,38), thus demonstrating their potential as diagnostic readouts for M. bovis infection. The objective of the present study was to quantify and compare mycobacterium-specific IFN-␥, NO, and TNF-␣ production by PBMC from M. bovis-infected cattle. An aerosol model of M. bovis infection using two dosages of each of two strains of M. bovis was used to initiate variable responses for comparisons. Isolated PBMC were used for recall stimulation studies because this population produces vigorous IFN-␥, NO, and TNF-␣ responses when stimulated with mycobacterial antigens. Responses were evaluated for effects of challenge dose and strain, kinetics, and associations. Twenty crossbred cattle of approximately 9 months of age and obtained from herds with no history of tuberculosis were housed at the National Animal Disease Center, United States Department of Agriculture, Animal Research Service, Ames, Iowa, according to the Association for Assessment and Accreditation of Laboratory Animal Care International and institutional guidelines. At the initiation of the study, all animals were tested and confirmed negative for M. bovis and M. avium exposure by using a commercially available assay (Bovigam; CSL Limited, Parkville, Victoria, Australia) for detection of IFN-␥ responses to in vitro mycobacterial antigen stimulation. The animals were housed in temperature-and humidity-controlled rooms (1 to 2 animals per room) within a biosafety level 3 confinement facility. Negative airflow exited the building through HEPA (high efficiency particulate air) filters, ensuring that air from the animal pens was pulled towards a central corridor and through HEPA filters before exiting the building. Airflow velocity was 10.4 air changes per h. The strains of M. 
bovis used for the challenge inoculum were strain 95-1315 (United States Department of Agriculture Animal and Plant Health Inspection Service designation), originally isolated from a white-tailed deer in Michigan (31), and strain HC-2045T, originally isolated from a Holstein cow in Texas. Inoculum consisted of mid-log-phase M. bovis cells grown in Middlebrook's 7H9 medium supplemented with 10% oleic acid-albumin-dextrose complex (Difco, Detroit, Mich.) plus 0.05% Tween 80 (Sigma Chemical Co., St. Louis, Mo.) as previously described (2). The challenge inoculum consisted of either ~10^5 (n = 5 for each of the two strains) or ~10^3 (n = 5 for each of the two strains) CFU in 2 ml of phosphate-buffered saline (PBS). The cattle were restrained, and the challenge inoculum was delivered by nebulization into a mask covering the animals' nostrils and mouths (26).

Nineteen of the twenty cattle challenged with M. bovis had typical tuberculous lesions, with M. bovis organisms cultured from affected tissues. Restriction fragment length polymorphism patterns of M. bovis organisms isolated from tissues matched the challenge inoculum strain. Tracheobronchial and mediastinal lymph nodes and lungs were the most commonly affected tissues. Lung lesions were distributed diffusely among all lobes, consistent with aerosol exposure to droplet nuclei of <5 μm. Lesions were more severe and disseminated in cattle receiving the higher challenge dosage (i.e., 10^5 CFU), regardless of the challenge strain. Although it is difficult to determine the actual number of tuberculous lesions per animal, lesions were detected in more sites (i.e., organs, lymph nodes, lung lobes, etc.) in cattle receiving 10^5 CFU of strain HC-2045T than in those receiving 10^5 CFU of strain 95-1315. The numbers of lesion sites did not differ between challenge strains for cattle receiving 10^3 CFU. Detailed descriptions of gross, histologic, and bacteriologic findings are presented elsewhere (26).

PBMC were isolated from buffy coat fractions of peripheral blood collected in 2× acid citrate dextrose (5). The wells of 96-well round-bottom microtiter plates (Falcon; Becton Dickinson, Lincoln Park, N.J.) were seeded with 2 × 10^5 PBMC in a total volume of 200 μl per well. The medium was RPMI 1640 supplemented with 2 mM L-glutamine, 25 mM HEPES buffer, 100 units of penicillin per ml, 0.1 mg of streptomycin per ml, 1% nonessential amino acids (Sigma), 2% essential amino acids (Sigma), 1% sodium pyruvate (Sigma), 50 μM 2-mercaptoethanol (Sigma), and 10% (vol/vol) fetal bovine serum. The wells contained medium plus 5 μg of PPDb (CSL Limited) per ml, 5 μg of PPDa (CSL Limited) per ml, 10 μg of M. bovis strain 95-1315 whole-cell sonicate (WCS) per ml, 10 μg of M. bovis strain HC-2045T WCS per ml, 1 μg of PWM per ml, or medium alone (no stimulation). The WCS antigens were prepared from 4-week M. bovis cultures grown in Middlebrook's 7H9 medium supplemented with 10% oleic acid-albumin-dextrose complex. Bacilli were pelleted, sonicated in PBS, further disrupted with 0.1- to 0.15-mm glass beads (Biospec Products, Bartlesville, Okla.) in a bead beater (Biospec Products), and then placed on ice. The preparation was centrifuged, and the supernatant was harvested and filtered (0.22 μm). After incubation of the PBMC cultures for 48 h at 37°C in 5% CO2, the supernatants were harvested and stored at −80°C until thawed for analysis.
Nitrite is the stable oxidation product of NO, and the amount of nitrite within culture supernatants is indicative of the amount of NO produced by cells in culture. Nitrite was measured by using the Griess reaction (29) performed in 96-well microtiter plates (Immunolon 2; Dynatech Laboratories, Inc., Chantilly, Va.). Nitrite concentrations in the supernatants were also determined by high-performance ion chromatography (HPIC). Briefly, macromolecules were separated from the aqueous portion of the sample by centrifugation through a 30,000-Da molecular mass cutoff filter. The microfiltrate was collected and injected directly into an ion-exchange high-pressure liquid chromatography system. A 4.1- by 250-mm strong anion-exchange column with a 10-μm inside diameter was used. The mobile phase consisted of an aqueous solution containing 5.28 g of NaH2PO4·H2O, 43.46 g of Na2HPO4·7H2O, 2.40 g of NaCl, and 100 ml of acetonitrile per liter. Nitrate was detected by absorbance at 214 nm; nitrite was detected by absorbance at 530 nm after a postanalytical column diazo-coupling reaction with an aqueous solution containing 100 ml of 85% o-phosphoric acid, 40.00 g of sulfanilamide, and 2.00 g of N-(1-naphthyl)ethylenediamine dihydrochloride per liter. Ions were quantitated against their respective standard curves. A commercial enzyme-linked immunosorbent assay (ELISA)-based kit (Bovigam; CSL Limited) was used for determination of IFN-γ concentrations in culture supernatants. Duplicate samples for each individual treatment were analyzed. Each treatment represented three pooled replicate samples. TNF-α was measured by using a TNF-α capture ELISA (protocol and reagents were provided by L. Babiuk, Veterinary Infectious Disease Organization).

The data were assessed for normality prior to statistical analysis. Arithmetic and log10-transformed data were analyzed as a split plot with repeated-measure analysis of variance (ANOVA) using Statview software (version 5.0; SAS Institute, Inc., Cary, N.C.). The statistical model included the effects of treatment (i.e., challenge strain and challenge dose), time (days relative to challenge), and the interaction of treatment and time on nitrite, IFN-γ, and TNF-α concentrations in supernatants from PBMC cultures. Fisher's protected least significant difference test was applied when significant effects (P of <0.05) were detected by the model. Pearson's product-moment correlations were computed between nitrite production measured using the Griess and HPIC assays, as well as between concentrations of nitrite, IFN-γ, and TNF-α in culture supernatants.

To validate the Griess reaction assay in our culture system, supernatants (49 samples) were evaluated for nitrite by HPIC at an accredited toxicology laboratory (i.e., University of Nebraska Veterinary Diagnostic Center) and by the Griess reaction (i.e., at the National Animal Disease Center). Samples included supernatants from PBMC cultures stimulated with medium alone, PPDa, PPDb, and PWM. Blood samples were collected at prechallenge (day 0) and at 34 days, 68 days, and 124 days postchallenge. For the analysis, supernatants representing a predicted wide range of responses were included. As demonstrated in Fig. 1, results from the two assays had a strong positive linear association (r = 0.94, P < 0.0001; y = 0.57x + 70.87), suggesting that both the Griess and HPIC assays generated similar results.
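For illustration, the agreement check between the two nitrite assays could be reproduced with a simple linear regression, as sketched below; the paired data files are hypothetical stand-ins for the 49 supernatants described above, and the script is not the analysis actually run in Statview.

```python
# Sketch of the cross-validation of the two nitrite assays
# (the text reports r = 0.94 and y = 0.57x + 70.87).
import numpy as np
from scipy.stats import linregress

griess = np.loadtxt("griess_nitrite_uM.txt")   # hypothetical paired readings
hpic   = np.loadtxt("hpic_nitrite_uM.txt")

fit = linregress(hpic, griess)
print(f"r = {fit.rvalue:.2f}, slope = {fit.slope:.2f}, "
      f"intercept = {fit.intercept:.2f}, p = {fit.pvalue:.2g}")
```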
The strongest associations between the Griess and HPIC assays were observed for samples from stimulated cultures.

Table 1 footnotes: (a) Blood mononuclear cells were isolated immediately prior to challenge (day 0) and at 34 days, 68 days, and 124 days after challenge and were cultured for 48 h with medium alone or with 5 μg of PPDb per ml. IFN-γ and TNF-α concentrations were quantified by ELISA, and nitrite concentrations were quantified by the Griess reaction. Data were analyzed by repeated-measure ANOVA with P values of 0.001, 0.001, and <0.0001 for IFN-γ, nitrite, and TNF-α, respectively. Values followed by different letters are significantly different (P < 0.05). (b) Values represent mean responses (± SEM, n = 20) to PPDb stimulation minus responses to medium alone.

… the groups (i.e., split by dose and strain; n = 5) when they were analyzed separately (data not shown). In general, spontaneous NO production increased upon infection, whereas minimal to no increases in spontaneous release of IFN-γ or TNF-α were observed. Similarly, NO within exhaled air (36) and NO produced spontaneously by PBMC (37) (Table 1). PPDb-induced IFN-γ and nitrite levels were also increased (P < 0.05) at 124 days postchallenge in comparison to prechallenge responses (Table 1). IFN-γ, nitrite, and TNF-α responses to either PPDb or HC-2045T WCS exceeded (P < 0.05) the respective responses to either PPDa or 95-1315 WCS (Table 2), regardless of the challenge strain. In general, challenge dosage, duration of infection, and type of antigen used for stimulation affected IFN-γ, nitrite, and TNF-α responses similarly. The antigen-specific IFN-γ and TNF-α responses of cattle infected with strain HC-2045T did not differ (P > 0.05, repeated-measure ANOVA; 0 to 124 days postchallenge; n = 10) from those of cattle infected with strain 95-1315. In contrast, nitrite responses to PPDb, PPDa, HC-2045T WCS, and 95-1315 WCS of PBMC from HC-2045T-infected cattle exceeded (P < 0.05) those of 95-1315-infected cattle (Fig. 4). While clear differences in disease severity among animals receiving equivalent doses of the two strains were difficult to determine, it did appear that cattle receiving the HC-2045T strain had slightly more severe disease than cattle receiving the 95-1315 strain. Lesions were detected at more sites in cattle receiving 10^5 CFU of HC-2045T than in cattle receiving 10^5 CFU of 95-1315, likely impacting the nitrite response.

In vitro-based cellular immune assays are gaining wide acceptance for use in tuberculosis diagnosis. Of relevance to cattle, an IFN-γ assay (in conjunction with skin testing) was recently approved for use in tuberculosis diagnosis. Other readouts of bovine cellular immune responsiveness (e.g., TNF-α and NO), however, have not been critically analyzed or compared to the IFN-γ response. In the present study, TNF-α and NO responses upon M. bovis infection followed similar kinetics, as did IFN-γ responses (Fig. 3 and 4; Table 1). The relative magnitude of each of these responses to variable antigens was consistent (Tables 1 and 2), and recall IFN-γ, TNF-α, and NO responses to crude M. bovis soluble antigens were clearly associated (Table 3). Thus, evaluation of TNF-α and NO responses, like that of IFN-γ responses, may prove useful for diagnosis of bovine tuberculosis. The nonspecific production of nitrite in PBMC cultures from infected cattle (Fig. 2) may be problematic for development of a useful NO-based diagnostic assay of infection. However, the NO response to M.
bovis antigens generally exceeded the response to medium alone, thereby providing antigen specificity (Tables 1 and 2). As with the IFN-γ assay, adaptation of the NO and TNF-α assays to a whole-blood format and the use of recombinant antigens will be necessary to enhance the practicality and specificity of these assays.
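As a small worked example of the readouts discussed here, the sketch below computes antigen-specific responses (stimulated minus medium-alone, as in the table footnotes above) and their pairwise associations; all numbers are hypothetical and are not the study's data.

```python
# Illustrative computation of antigen-specific responses and of the
# associations among the three readouts (hypothetical values for 6 animals).
import numpy as np
from scipy.stats import pearsonr

ifng = np.array([2.1, 1.8, 0.9, 2.5, 1.2, 0.4]) - np.array([0.1, 0.2, 0.1, 0.3, 0.1, 0.1])
no   = np.array([38., 30., 15., 44., 22., 9.])  - np.array([6., 5., 4., 7., 5., 3.])
tnfa = np.array([1.4, 1.1, 0.5, 1.7, 0.8, 0.2]) - np.array([0.1, 0.1, 0.0, 0.2, 0.1, 0.0])

for name, readout in (("NO", no), ("TNF-alpha", tnfa)):
    r, p = pearsonr(ifng, readout)
    print(f"IFN-gamma vs {name}: r = {r:.2f} (p = {p:.3f})")
```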
Sulodexide pretreatment attenuates renal ischemia-reperfusion injury in rats

Sulodexide is a potent antithrombin agent; however, whether it has beneficial effects on renal ischemia-reperfusion injury (IRI) remains unknown. In the present study, we assessed the therapeutic effects of sulodexide in renal IRI and investigated the potential mechanism. One dose of sulodexide was injected intravenously into Sprague-Dawley rats 30 min before bilateral kidney ischemia lasting 45 min. The animals were sacrificed at 3 h and 24 h, respectively. Our results showed that sulodexide pretreatment improved renal dysfunction and alleviated tubular pathological injury at 24 h after reperfusion, which was accompanied by inhibition of oxidative stress, inflammation and cell apoptosis. Moreover, we noticed that antithrombin III (ATIII) was activated at 3 h after reperfusion, which preceded the alleviation of renal injury. For the in vitro study, a hypoxia/reoxygenation (H/R) injury model for HK2 cells was carried out, and apoptosis and reactive oxygen species (ROS) levels were evaluated after sulodexide pretreatment. Consistently, sulodexide pretreatment reduced apoptosis and ROS levels in HK2 cells under H/R injury. Taken together, sulodexide pretreatment might attenuate renal IRI through inhibition of inflammation, oxidative stress and apoptosis, and activation of ATIII.

INTRODUCTION

Renal ischemia-reperfusion injury (IRI) is commonly seen in various clinical settings such as kidney transplantation, hemorrhagic shock or cardiovascular surgery [1]. Although numerous efforts have been made to avoid or alleviate renal IRI, the morbidity and mortality of ischemic acute kidney injury (AKI) remain high [2]. Therefore, it is urgent to identify novel preventive strategies to decrease AKI incidence and improve clinical outcomes. Currently, the exact pathophysiological mechanism of AKI remains elusive. However, it has been established that the pathophysiology of AKI predominantly involves continued hypoperfusion, inflammation, oxidative stress, and tubular epithelial apoptosis [3][4][5]. Recently, several studies have shown that microvascular thrombosis also plays a pivotal role in the pathophysiology of AKI, and many antithrombin agents, such as heparin [6] and antithrombin III (ATIII) [7][8][9], can mitigate renal IRI. Therefore, pharmacological agents with multiple functions, such as anti-coagulation, anti-oxidative and anti-inflammatory properties, may be promising preventative strategies for AKI. Among various candidates, sulodexide, a purified glycosaminoglycan mixture composed of low molecular weight heparin and dermatan sulfate [10], has been reported to exert reno-protective effects in many renal diseases [11][12][13][14]. In addition to its anti-coagulant function, sulodexide has been reported to have anti-oxidative [15,16], anti-inflammatory [17,18], and anti-ischemic effects [19]. Previous studies have demonstrated that sulodexide attenuated diabetic nephropathy through inhibiting cell proliferation and decreasing matrix accumulation [20]. Given these effects, it is reasonable to speculate that sulodexide administration may mitigate renal IRI. In the present study, we examined the therapeutic effect of sulodexide administration on renal IRI in rats and in tubular epithelial cells. Furthermore, the potential mechanisms were investigated.
Sulodexide administration mitigated renal ischemia-reperfusion injury As shown in Figure 1A-1B, rats in IRI groups displayed significant exacerbation of renal function 24h after reperfusion, as indicated by remarkably increased levels of Scr and BUN compared with that in sham rats. However, the levels of Scr and BUN were significantly decreased in IRI rats pre-treated with sulodexide (Scr, IRI vs. IRI+Sul, P<0.05; BUN, IRI+sulodexide vs. IRI+Sul, P<0.05). In agreement with the alteration of Scr and BUN, the concentrations of sNGAL and uKIM-1, as the more precise and sensitive marker for diagnosing AKI, were also lower in sulodexide-administered IRI rats than that in un-treated IRI rats. Collectively, these data suggested that sulodexide was able to protect against ischemiareperfusion kidney injury. Sulodexide attenuated morphological change after ischemia-reperfusion injury To evaluate the extent of kidney pathological injury, kidney sections were stained with PAS. As expected, ischemia-reperfusion led to typical tubular injury characterized by pronounced renal tubular detachment, luminal congestion with loss of brush border, tubular cell necrosis and intratubular cast formation whereas the aforementioned pathological changes were remarkably alleviated in sulodexide-administered IRI rats, as shown in Figure 2. Meanwhile, the pathological score of histological lesions was significantly lower in sulodexide-treated IRI rats (2.60±0.58) than un-treated IRI rats (3.95±0.43) at 24h after reperfusion. Effects of sulodexide on oxidative stress and inflammation in kidney tissues To explore whether the reno-protection conferred by sulodexide in IRI was associated with oxidative stress and inflammation, the related markers of oxidative stress and inflammation were examined. As shown in Figure 3A-3B, renal SOD activity was decreased while MDA levels were increased in IRI rats compared with the sham-operated group. Pretreatment with sulodexide restored renal SOD levels and decreased renal MDA levels, suggesting that sulodexide could attenuate oxidative stress from two directions in rats with IRI. Moreover, we found that the renal mRNA expression levels of tumor necrosis factor α (TNFα), monocyte chemotactic protein 1 (MCP-1) and intercellular cell adhesion molecule-1 (ICAM-1) in IRI rats were substantially increased compared with sham rats, which was blunted by sulodexide pretreatment (Figure 4). In summary, sulodexide might exert its reno-protective effects against IRI via inhibiting oxidative stress and inflammatory response. Cell apoptosis were alleviated in IRI rats with sulodexide pretreatment To determine whether sulodexide's beneficial effects on IRI were associated with apoptosis inhibition, we performed TUNEL assays on the kidney sections. In comparison to sham-operated rats, ischemia-reperfusion resulted in elevated apoptosis, and sulodexide could significantly decrease tubular cells apoptosis in IRI rats ( Figure 5). Furthermore, the expression of anti-apoptosis protein Bcl-2 and the activity of caspase-3 was also examined. Similarly, it was observed that ischemia-reperfusion led to a substantial decrease in the expression of Bcl-2 and increase in caspase-3 activity as indicated in Figure 6. Our data proved that Bcl-2 expression was restored nearly to normal levels by sulodexide preconditioning while caspase-3 activity was dramatically suppressed. Taken together, sulodexide administration mitigated renal cell apoptosis in IRI rats. 
Antithrombin III was activated before alleviation of renal IRI by sulodexide

To determine whether the beneficial effect of sulodexide depended on its anti-coagulation function, we measured plasma ATIII activity at 3 h after renal reperfusion. Our data revealed that plasma ATIII activity declined in IRI animals compared with sham, but was restored to approximately normal levels in sulodexide-treated IRI animals at 3 h after reperfusion, as shown in Figure 7A. In contrast, there was no significant improvement in renal pathological injury score between the sulodexide-treated IRI group (1.38±0.21) and the vehicle-treated IRI group (1.61±0.21) at 3 h after renal reperfusion (Figure 7B). In addition, we examined plasma levels of fibrinogen degradation products (FDPs); the FDP levels in IRI rats were significantly higher than those in sham rats, suggesting that fibrinolytic function was activated in the IRI models (Figure 7C). Sulodexide pretreatment significantly reduced serum FDP levels in IRI rats. These results indicated that the beneficial effects of sulodexide on IRI depended at least in part on its anti-coagulation properties via activation of ATIII.

Figure 2: Sulodexide pretreatment mitigated renal histological injury in IRI rats. Rats were challenged with sham operation or 45 minutes of bilateral renal ischemia, respectively. Kidney tissues were harvested at 24 h after reperfusion. Periodic acid-Schiff (PAS) staining and a semi-quantitative scoring system were used to evaluate the severity of tubular injury. A. Representative images of renal PAS staining (magnification, 200×). B. Semi-quantitative assessment of tubular injury. All data are presented as means ± SD (n=6). * P<0.05 versus Sham, *** P<0.001 versus Sham; # P<0.05 versus IRI.

Sulodexide reduced oxidative stress and inhibited apoptosis of HK2 cells under hypoxia/reoxygenation injury

In vitro, the hypoxia/reoxygenation (H/R) injury model for HK2 cells was set up. We found that H/R resulted in a significant increase in intracellular ROS production, which was reduced by sulodexide pretreatment (Figure 8). In addition, sulodexide pretreatment attenuated the H/R-induced activation of caspase-3.

DISCUSSION

In the present study, we sought to assess the therapeutic effects of sulodexide on renal IRI and investigate the potential mechanisms. Our results demonstrated that sulodexide pre-administration attenuated the functional and histologic alterations of the kidney in IRI rats, which was accompanied by elevated plasma ATIII activity and blunted oxidative stress, inflammation and apoptosis. Furthermore, sulodexide directly suppressed H/R-induced ROS formation and activation of caspase-3 in vitro. Thus, we believe that the reno-protective effects of sulodexide on IRI may be mediated via its anti-oxidation, anti-inflammation, anti-apoptosis and anti-coagulation mechanisms, and sulodexide may represent a promising therapeutic candidate. It has been accepted that disturbance of the coagulation system is involved in the pathologic processes of AKI, and some antithrombin agents have been shown to accelerate the recovery of renal function after transient kidney ischemia-reperfusion. In our study, we confirmed that sulodexide, as a potent anticoagulant, also had therapeutic effects on renal IRI in a rat model. In concert with our findings, a prior study also demonstrated that sulodexide could protect against myocardial IRI.
It was observed that plasma ATIII activity was elevated in sulodexide-treated IRI rats compared with un-treated IRI rats, indicating that the reno-protection by sulodexide might be at least in part dependent on activation of plasma ATIII. This hypothesis is supported by the following observations: 1) sulodexide is a potent activator of ATIII [21]; 2) previous studies proved that exogenous ATIII could protect against renal IRI, and, consistently, our previous study demonstrated that ATIII insufficiency increased the susceptibility to or severity of AKI, suggesting that ATIII is indispensable in the endogenous defense mechanisms against renal IRI; 3) circulating ATIII activity in the sulodexide-pretreated IRI group was increased compared with the vehicle-treated IRI group at 3 h after reperfusion, which preceded the functional and structural changes of the kidney between sulodexide-treated and PBS-treated IRI rats. Nonetheless, previous studies have demonstrated that ROS can decrease ATIII activity [22,23], so we cannot exclude the possibility that sulodexide inhibits ROS generation and thereby prevents the ischemia-reperfusion-induced decrease in ATIII activity. The elevation of ATIII activity conferred by sulodexide could thus be attributed to direct activation and to the subsequent inhibition of ROS production.

In addition to its anti-coagulation property, sulodexide also possesses anti-oxidative effects [15,16] and anti-inflammatory effects [18,24,25]. It has been reported that during renal IRI, oxidative stress is one of the most critical mechanisms involved in tubular cellular damage and apoptosis [26,27]. Our previous studies found that MDA was increased while SOD was decreased in rat AKI models [28][29][30]. Consistently, the present study demonstrated that the improvement of renal function in IRI rats by sulodexide was accompanied by decreased oxidative stress levels. In agreement with our findings, previous studies also demonstrated that the beneficial effect of sulodexide on diabetic nephropathy was associated with reduction of MDA levels and enhancement of SOD and catalase activities [31]. Moreover, our in vitro studies showed that sulodexide was able to repress intracellular ROS production induced by H/R in HK-2 cells. Thus, on the one hand, the inhibition of oxidative stress by sulodexide in vivo could be explained by its direct anti-oxidative effects; on the other hand, sulodexide pretreatment increased ATIII activity and subsequently improved kidney perfusion, which might indirectly inhibit the generation of ROS. Additionally, we observed that sulodexide reduced the expression of pro-inflammatory cytokines in IRI rats. It has been reported that sulodexide can inhibit the secretion of inflammatory mediators from lipopolysaccharide-stimulated macrophages in vitro [17], and our recent study proved that sulodexide significantly decreased macrophage infiltration in contrast-induced nephropathy [32].

Figure 8: Sulodexide pretreatment protected against hypoxia/reoxygenation injury in HK-2 cells. HK2 cells were cultured to subconfluence and pretreated with sulodexide (10, 50 μg/ml) 30 min prior to exposure to hypoxia for 60 min; the cells then underwent 30 min of reoxygenation. A. ROS production. B. Caspase-3 activity. Data are presented as means ± SD and are representative of three independent experiments. * P<0.05 versus CTL, ** P<0.01 versus CTL, *** P<0.01 versus CTL; # P<0.05 versus H/R.
In summary, we believed that the anti-inflammatory and anti-oxidative effects of sulodexide contributed to the reno-protection against IRI. Cumulative evidence has suggested that apoptosis is critically involved in the pathological process in renal IRI [3,33]. Our study demonstrated that sulodexide treatment significantly alleviated cell apoptosis, which was evidenced by decreased caspase-3 activity and increased Bcl-2 expression. Besides, sulodexide also inhibited the activation of caspase-3 in HK-2 cells under H/R induced injury. In summary, sulodexide might exert its inhibitory effect on apoptosis via a direct inhibition of caspase-3. The presented data showed that sulodexide protected against renal IRI via activation of ATIII and its anti-oxidative, anti-inflammatory and anti-apoptosis mechanisms. Nonetheless, there was no direct evidence linking the renal protection of sulodexide and activation of ATIII, which was the main limitation of our study. ATIIIknockout rats should be used to investigate the underlying mechanisms of sulodexide in future. In conclusion, sulodexide can alleviate renal IRI through its anti-oxidative stress and anti-apoptosis, its reno-protective role might be due to its activation for ATIII, indicating that sulodexide may be a potential agent for AKI prevention and treatment. However, whether prophylactic and therapeutic administration of sulodexide can effectively prevent AKI incidence and improve clinical outcome in patient remains to be determined in the future. Reagents Sulodexide was purchased from Vessel Due F (Alfa Wassermann, Italy). The primary antibodies, rabbit anti-Bcl-2 and mouse anti-GAPDH were both provided from Cell Signaling Technology (Danvers, MA, USA). Animal experimental protocols This animal experiment in this study was approved by the Animal Care and Ethics Committee of Shanghai Jiao Tong University Affiliated Sixth People's Hospital. Male Sprague-Dawley rats (weighing 250-300g) were purchased from Shanghai Science Academy Animal Center (Shanghai, China). Animals were randomly divided into 4 groups: sham-operated group treated with tail vein injection of PBS (Sham, n=6); sham-operated group treated with tail vein injection of sulodexide (Sham+Sul, n=6); ischemiareperfusion group treated with tail vein injection of PBS (IRI, n=6), ischemia-reperfusion group treated with tail vein injection of sulodexide (IRI+Sul, n=6). The model of bilaterally renal ischemia-reperfusion injury was set up as previously described [34]. Sulodexide (2mg/kg, dissolved in 0.1ml PBS), or the same volume of PBS was injected intravenously into the tail vein 30 min before the surgery. Briefly, animals were anesthetized with sodium pentobarbital (50mg/kg). Renal ischemia was induced by clamping both renal pedicles for 45min using a non-traumatic vascular clamp. Animal body temperature was maintained using an animal heating pad. The clamp was removed to restore kidney blood flow. Sham rats underwent the same surgery but without renal pedicle clamping. All the animals were sacrificed 3h or 24h after the surgery, respectively and kidney, blood and urine were collected for further analysis. Blood samples collected from abdominal aorta were moved into one BD Vacutainer® SST™ Serum Separation Tube (Becton-Dickinson, Franklin Lakes, NJ, USA) to obtain serum and one BD Vacutainer® Citrate Tube containing 3.2% buffered sodium citrate (final concentration 0.105mol/L) to obtain plasma, separately. The tubes were centrifuged at 2,000 g for 10min for serum and plasma collection. 
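As a worked example of the dosing described above (2 mg/kg sulodexide delivered in a fixed 0.1 ml tail-vein injection), the helper below computes the stock concentration required; the body weight used is an assumed value within the stated 250-300 g range.

```python
# Dose-preparation arithmetic for a fixed-volume tail-vein injection.
def stock_concentration(dose_mg_per_kg, body_weight_kg, injection_volume_ml):
    """Concentration (mg/ml) the stock must have for the fixed injection volume."""
    total_dose_mg = dose_mg_per_kg * body_weight_kg
    return total_dose_mg / injection_volume_ml

print(stock_concentration(2.0, 0.275, 0.1))  # a 275 g rat -> 5.5 mg/ml stock
```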
Cell culture and hypoxia/reoxygenation model Human proximal tubular epithelial cells (HK-2 cells, ATCC, Manassas, VA, USA) were cultured in K-SFM at 37°C and 5% CO2, supplemented with 5 ng/ml human recombinant EGF and 0.05 mg/ml bovine pituitary extract. Two hours before the study, the medium was replaced with glucose-free medium without added growth factors or serum. Cell plates were placed in a glass chamber gassed with 95% N2/5% CO2 for 60 min, followed by reoxygenation (95% O2, 5% CO2) for 30 min, as previously described [35]. Sulodexide (10 or 50 µg/ml) or vehicle was added into the medium 30 min before exposure to hypoxia. Cell apoptosis was measured with a Cell Death Detection ELISA kit (Roche Diagnostics, Mannheim, Germany), and intracellular reactive oxygen species (ROS) were detected with a commercial kit (Cell Biolabs, San Diego, CA, USA) as previously described [36]. Assessments of biochemical parameters An automatic biochemical analyzer (Hitachi 7600, Tokyo, Japan) was used to measure serum creatinine (Scr) and blood urea nitrogen (BUN) to determine the changes in renal function. ATIII activities in plasma were measured using the commercial kit Accucolor ATIII (SIGMA Diagnostics, Livonia, MI, USA) on an automatic coagulation analyzer (Sysmex CA7000, SIEMENS, Munich, Germany) as previously described [29]. Fibrinogen degradation products (FDPs) were measured using a rat fibrinogen degradation product ELISA kit (Cusabio, Wuhan, China) following the instructions provided by the manufacturer. Histological analyses The right kidney was fixed and embedded. Paraffin-embedded kidney was cut into 3 μm sections and subjected to Periodic Acid Schiff (PAS) staining. Histological scoring was performed by grading the percentage of affected tubules per 10 randomly chosen, non-overlapping fields (magnification, ×400) in the corticomedullary region according to the following criteria: tubular dilation, loss of brush border, tubular necrosis, and cast formation. The renal injury score was estimated on a scale from 0 to 5: 0, none; 1, ≤10%; 2, 11-25%; 3, 26-45%; 4, 46-75%; and 5, 76-100%, as described previously [29]. The assessment was performed by an observer who was blind to the study groups. Terminal transferase-mediated dUTP nick-end labeling (TUNEL) staining (Roche Diagnostics, Mannheim, Germany) was employed to assess the extent of renal apoptosis in the different groups, as described previously [37]. Measurements of oxidative stress markers The concentrations of malondialdehyde (MDA) and superoxide dismutase (SOD) in renal tissue were measured using commercial kits according to the manufacturer's instructions (Beyotime, Jiangsu, China), and the final levels of MDA and SOD were normalized to the protein concentration of the kidney tissue homogenate, as previously described [28]. Measurements of kidney injury markers in blood and urine Two novel markers of early-stage kidney injury, urinary kidney injury molecule-1 (uKIM-1) and serum neutrophil gelatinase-associated lipocalin (sNGAL), were measured using ELISA kits (R&D Systems, Minneapolis, MN, USA). Western blot Protein concentrations of kidney tissue homogenate were measured using a BCA assay (Beyotime, Suzhou, Jiangsu, China), and protein samples were separated on 12% sodium dodecyl sulfate-polyacrylamide gels, then transferred to polyvinylidene difluoride (PVDF) membranes and blocked with 5% non-fat dried milk.
The PVDF membranes were then incubated with primary antibody overnight at 4°C and with HRP-conjugated secondary antibodies (Beyotime) for 2 h at room temperature. The blotting signals were visualized with the ImageQuant LAS 4000 Mini system (GE Healthcare, Pittsburgh, PA, USA). The bands were analyzed using ImageJ software, with GAPDH used as the loading control. Statistical analysis The statistical software SPSS (version 18.0) was used for data analysis. One-way ANOVA with the Sidak post hoc test, or the Kruskal-Wallis test with Dunn's post hoc test, was employed to determine differences between groups. A value of P < 0.05 was considered significant.
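As a rough illustration of this testing scheme (a minimal sketch rather than the authors' SPSS workflow; the group values are placeholders, the Sidak adjustment is written out by hand, and Dunn's post test is not reproduced here):

```python
from itertools import combinations
from scipy import stats

# Placeholder measurements for the four groups (e.g., caspase-3 activity)
data = {
    "Sham":     [1.00, 1.10, 0.90, 1.00, 1.20, 0.95],
    "Sham+Sul": [1.00, 0.90, 1.10, 1.05, 0.98, 1.02],
    "IRI":      [2.40, 2.80, 2.60, 2.90, 2.50, 2.70],
    "IRI+Sul":  [1.60, 1.80, 1.70, 1.90, 1.50, 1.75],
}

# Global test: one-way ANOVA across all groups
f_stat, p_anova = stats.f_oneway(*data.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_anova:.4g}")

# Pairwise comparisons with a Sidak correction: p_adj = 1 - (1 - p)^m
pairs = list(combinations(data, 2))
m = len(pairs)
for a, b in pairs:
    _, p = stats.ttest_ind(data[a], data[b])
    p_sidak = 1 - (1 - p) ** m
    print(f"{a} vs {b}: Sidak-adjusted p = {p_sidak:.4g}")

# Non-parametric alternative when normality is doubtful
h_stat, p_kw = stats.kruskal(*data.values())
print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {p_kw:.4g}")
```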
Air–sea carbon flux from high-temporal-resolution data of in situ CO2 measurements in the southern North Sea An important element in keeping track of global change is the atmosphere–water exchange of carbon dioxide (CO2) in the ocean, as it provides insight into how much CO2 is incorporated in the ocean (i.e. the ocean as a sink for CO2) or emitted to the atmosphere (i.e. the ocean as a source). To date, only a few high-resolution observation sets are available to quantify the spatiotemporal variability of air–sea CO2 fluxes. In this study, we used observations of pCO2 collected daily at the ICOS station Thornton Buoy in the southern North Sea from February until December 2018 to calculate air–sea CO2 fluxes. Our results show a seasonal variability of the air–sea carbon flux, with the sea being a carbon sink from February until June, switching to a carbon source in July and August, before switching back to a sink until December. We calculated that the sink was largest in April (−0.95 ± 0.90 mmol C m⁻² d⁻¹), while in August the source was at its maximum (0.08 ± 0.13 mmol C m⁻² d⁻¹). On an annual basis, we found a sink for atmospheric CO2 of 130.19 ± 149.93 mmol C m⁻² y⁻¹. Apart from region- and basin-scale estimates of the air–sea CO2 flux, local measurements are also important to grasp the local dynamics of the flux and its interactions with biogeochemical processes. Introduction Increased anthropogenic emissions of greenhouse gases (GHGs) lead to global warming (IPCC, 2019), and observing their balance is an important way to keep track of global change (Steinhoff et al., 2019). A key element in this balance is the air–sea exchange of CO2 in the ocean, as the oceans are responsible for the uptake of 25% of anthropogenic CO2 emissions (Friedlingstein et al., 2019). The air–sea CO2 flux provides insight into how much CO2 is added to the marine environment from the atmosphere (i.e. the sea being a sink for atmospheric CO2) or emitted by the marine environment to the atmosphere (i.e. the sea being a source). The North Atlantic Ocean is one of the major sinks, with an uptake of 680 mmol C m⁻² y⁻¹ in 2005 (Watson et al., 2009) and between 800 and 4000 mmol C m⁻² y⁻¹ in 2010 (Woolf et al., 2019). Continental shelves are regarded as sinks of carbon, with an average air–sea CO2 uptake rate of 1900 mmol C m⁻² y⁻¹ for the European continent. However, the Southern Bight of the North Sea (SBNS), i.e. a European shelf sea, is shown as a source area in Thomas et al. (2004). The latter is in contrast with other studies that suggest that the SBNS and the whole North Sea can be regarded as a sink for CO2 (Borges and Frankignoulle, 2002; Laruelle et al., 2018; Schiettecatte et al., 2007). The southern part of the North Sea includes the Belgian Continental Shelf (BCS), which is a well-studied area in terms of air–sea surface dynamics and carbon biogeochemical cycling (e.g. Borges et al., 2019; Gypens et al., 2004, 2011). In terms of air–sea carbon fluxes, the BCS shifted from being a carbon source in the 1950s to a carbon sink in the 1980s (Gypens et al., 2009), with more recent source-sink turnovers. Changing seawater physical and biogeochemical characteristics in the BCS result in seasonal patterns of the air–sea CO2 flux (Gypens et al., 2004).
The dynamic nature of the BCS in terms of annual CO2 fluxes, which were often based on short-term measurements and simulated values in past studies, highlights the necessity of high-resolution, robust CO2 observations. Therefore, in the present study, we monitored the local dynamics of the CO2 flux using high-temporal-resolution data of both the partial pressure of CO2 in the sea (pCO2, sea) and in the air (pCO2, air). Our aims were to quantify the air–sea carbon flux, to identify what drives the seasonality of the flux in a specific year, and to identify the annual source-sink dynamics at a specific location in the BCS. Materials and methods The North Sea has a surface of 670,000 km² (EEA, 2015), of which the Belgian Continental Shelf (BCS) occupies about 0.5% (or 3,454 km²; Belpaeme et al., 2011). The BCS is relatively shallow, with water depths gradually increasing to 45 m from the Southeast towards the Northwest (Van Lancker et al., 2015). Apart from extreme observations, surface water temperatures vary seasonally between 5°C and 20°C. The salinity is strongly influenced by the river plumes of the Scheldt, Rhine, Seine and Meuse (Lacroix et al., 2004) and varies between 29 and 35 PSU. The Flanders Marine Institute operates the Fixed Ocean Station "BE-FOS-VLIZ Thornton Buoy" in this area as part of the European research infrastructure "Integrated Carbon Observation System" (ICOS-ERIC, https://www.icos-cp.eu/). The station is equipped with commercial sensors to measure in situ the sea surface xCO2, atmospheric xCO2, sea surface temperature (SST), sea surface salinity (SSS) and the total gas pressure of CO2. The BE-FOS-VLIZ Thornton Buoy is located approximately 30 km from Zeebrugge in the area of the Thornton Bank wind turbine farm (51.579N, 2.993E; Fig. 1). In this study, we used observations from the year 2018. A schematic of the mooring and the position of the sensors is depicted in Figure A1. The equipment details and data collection information are listed in Table 1. The buoy system offers robust two-way interactive communication with the individual sensors, providing the means to adapt the sampling strategies of the sensors and to identify issues very effectively. The sensors used for this study (SBE37-SMP-ODO and CO2-PRO ATM) are calibrated by the manufacturers once per year. Additionally, the pCO2 measurements of the buoy were validated monthly against pCO2 values calculated from measurements of total dissolved inorganic carbon (CT), total alkalinity (TA) and pH of manually collected samples. Water sampling followed SOP1 described in Dickson et al. (2007). TA, CT and pH were determined using the methodologies described in Dickson et al. (2007). For TA, the method follows SOP3b of Dickson et al. (2007; commercially available system VINDTA 3s). The pH analysis and setup follow SOP6a (Dickson et al., 2007) using a Thermo Scientific Orion pH meter (STAR A211) and a ROSS Sure-Flow glass-body pH electrode; we report pH on the total scale at 25 °C, as measurements are performed in a thermostatic environment (Grant water bath). Total dissolved inorganic carbon (CT) was determined using the commercially available Automated Infra-Red Carbon Analyzer (AIRICA). For all methods, we used CRMs from the Scripps Institution of Oceanography (UCSD). The uncertainties for each method are mentioned in Table 2.
For the calculation of pCO2, sea, we have used the R package 'seacarb' (Gattuso et al., 2020). The calculated pCO2, sea values were used to calibrate the sensor data using a linear regression method (Fig. A2). The SST and SSS data of the buoy were validated against data obtained from RV Simon Stevin's CTD system (SBE3 and SBE4, respectively, for SST and SSS; Sea-Bird Scientific) and underway thermosalinograph sensor (SBE21; Sea-Bird Scientific) when visiting the station and collecting samples. The pCO2, air measurements were evaluated against xCO2 data from nearby ICOS atmospheric stations on land. For this comparison, we used the non-parametric Kruskal-Wallis rank sum test and the pairwise Wilcoxon rank sum test in the R package 'stats' (R Core Team, 2019). The air–sea CO2 flux (F) is calculated according to the wind-driven turbulence diffusivity model of Nightingale et al. (2000), expressed in partial pressure (Eq. 1): F = k_Nightingale · K0 · (pCO2, sea − pCO2, air), (1) where k_Nightingale is the gas transfer velocity (length · time⁻¹; Eq. 2), K0 is the solubility of CO2 in seawater (mass · volume⁻¹ · pressure⁻¹), and pCO2, sea and pCO2, air are the partial pressures of CO2. We calculated pCO2 by multiplying the xCO2 measurements with the total gas pressure of CO2 in seawater or in the atmosphere, respectively. The solubility of CO2 in seawater depends on the sea surface temperature (SST) and the sea surface salinity (SSS; Wanninkhof, 2014). Wind speed data (10 m above sea level) were acquired from Meetnet Vlaamse Banken (MVB) for the Westhinder, Wandelaar and Scheur Wielingen platforms, which are located approximately 20 km to 40 km to the South and Southwest. Wind speed was measured every ten minutes. In the SST and SSS records, there are no data for September 2018 due to a malfunction in the buoy's SBE37-SMP-ODO sensor. To account for the lacking SSS data, we completed our time series with salinity data from the RV Simon Stevin's CTD system in the same period (Flanders Marine Institute, 2019). The SST data gaps were filled with data from a second water temperature sensor installed on an Aanderaa SeaGuard multiparametric platform (Fig. A1b). Timestamps were used in order to combine data sets from the various sensors and systems. All data were assessed for potential outliers. As in Salgado et al. (2016), outliers were defined as values lying outside the borders of the lower quartile minus three times the interquartile range (Q25 − 3·IQR) and the upper quartile plus three times the interquartile range (Q75 + 3·IQR). The daily mean air–sea CO2 flux was calculated from 1891 time points. We took the day-night cycle of the CO2 flux into account by using daily means. Besides, we calculated monthly means and standard deviations. Our data cover the period from February 2018 until December 2018. To quantify the annual CO2 flux based on eleven months of data, we calculated a weighted mean for the winter months, i.e. February and December, and the remaining nine months, using weights 0.25 and 0.75, respectively. We then extrapolated the weighted mean to a year. A summary of the input data is provided in Table 3. We investigated whether the CO2 flux calculated with xCO2, air from the Thornton Buoy differed from the CO2 fluxes based on xCO2, air measurements at nearby atmospheric stations (Sect. 3.1).
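As an illustration of Eq. (1), the sketch below computes a daily flux in Python (the paper's actual workflow is in R). The Nightingale et al. (2000) transfer-velocity form, the Wanninkhof (2014) Schmidt-number polynomial and the Weiss (1974) solubility constants are quoted from the standard literature and should be checked against the originals; the input values are merely plausible examples, not our observations:

```python
import math

def schmidt_number(sst_c):
    """Schmidt number of CO2 in seawater (Wanninkhof, 2014 polynomial)."""
    t = sst_c
    return 2116.8 - 136.25*t + 4.7353*t**2 - 0.092307*t**3 + 0.0007555*t**4

def k_nightingale(u10, sst_c):
    """Gas transfer velocity after Nightingale et al. (2000), in m d^-1.
    k600 is in cm h^-1 with u10 in m s^-1, rescaled by (Sc/600)^-0.5."""
    k600 = 0.222*u10**2 + 0.333*u10
    k = k600 * (schmidt_number(sst_c) / 600.0) ** -0.5
    return k * 24.0 / 100.0  # cm h^-1 -> m d^-1

def k0_weiss(sst_c, sss):
    """CO2 solubility K0 in mol m^-3 atm^-1 (Weiss, 1974)."""
    tk = (sst_c + 273.15) / 100.0
    ln_k0 = (-58.0931 + 90.5069/tk + 22.2940*math.log(tk)
             + sss*(0.027766 - 0.025888*tk + 0.0050578*tk**2))
    return math.exp(ln_k0) * 1000.0  # mol L^-1 atm^-1 -> mol m^-3 atm^-1

def co2_flux(u10, sst_c, sss, pco2_sea_uatm, pco2_air_uatm):
    """Air-sea CO2 flux, Eq. (1), in mmol C m^-2 d^-1 (negative = sink)."""
    dp_atm = (pco2_sea_uatm - pco2_air_uatm) * 1e-6
    return k_nightingale(u10, sst_c) * k0_weiss(sst_c, sss) * dp_atm * 1000.0

# Example with inputs in the observed ranges (moderate wind, undersaturation)
print(co2_flux(u10=3.0, sst_c=9.0, sss=33.4,
               pco2_sea_uatm=380.0, pco2_air_uatm=410.0))
```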
We compared these fluxes using the non-parametric Kruskal-Wallis rank sum test and the pairwise Wilcoxon rank sum test in the R package 'stats' (R Core Team, 2019). We adopted the method developed by Takahashi et al. (2002) to separate and assess the seasonal effects of biological processes and temperature on pCO2 and CO2 flux dynamics over an annual cycle. We applied Eqs. (3) and (4), where Tmean is the mean annual temperature (13.4 °C) and Tobs is the in situ temperature: pCO2, therm = (mean annual pCO2, sea) · exp(0.0423 · (Tobs − Tmean)) (3) and pCO2, bio = (observed pCO2, sea) · exp(0.0423 · (Tmean − Tobs)) (4). The relative importance of the component effects is expressed by the thermal-biological ratio (T/B) or the difference (T − B), where T is the seasonal amplitude of pCO2, therm and B that of pCO2, bio. A T/B ratio between zero and one implies the dominance of biological processes over thermal effects (T − B < 0), whereas a T/B ratio larger than one implies that temperature effects are dominant (T − B > 0). Environmental conditions In 2018, sea surface salinity (SSS) varied between 32.1 PSU and 34.7 PSU (Fig. 2a), with a mean value of 33.4 ± 0.58 PSU. The water temperature followed a seasonal pattern, with high temperatures in summer (max. 22.2 °C) and low water temperatures during the winter (min. 3.3 °C; Fig. 2b). No seasonal pattern was observed for the wind speed, i.e. the wind speed was highly variable throughout the year. The lowest wind speed measured was 0.3 m s⁻¹ and the highest was 17.9 m s⁻¹ (Fig. 2c). The pCO2, air fluctuated between 389.4 µatm and 464.7 µatm (Fig. 2d). The pCO2, sea data were validated against values of pCO2, sea calculated from DIC and pH of manually collected samples. After that, the pCO2, sea data were corrected with a linear regression method and validated against the manually collected (spot) samples. The pCO2, sea had a large range (126.9 µatm – 525.6 µatm), and reached its lowest value in May and its highest value in August (Fig. 2d). These observed pCO2, sea concentrations corroborate the data found by Gypens et al. (2011) and Borges et al. (2006) for the English Channel (ECH) and the Southern Bight of the North Sea (SBNS). Borges et al. (2006) found that the spring bloom in early spring was followed by an increase in pCO2, sea in late spring-summer. Schiettecatte et al. (2007) observed that the SBNS was oversaturated in CO2 during winter and strongly undersaturated in April-May. Schiettecatte et al. (2007) reported a minimum pCO2, sea value of 192.35 ± 35 µatm in the SBNS in April and a maximum of 455 ± 36 µatm in August. They observed higher pCO2, sea values for the BCS, up to 900 µatm, but these high values were measured close to the Scheldt plume (Schiettecatte et al., 2007). In the present research, we observed a seasonal trend of pCO2, sea, which increased in the summer-early autumn and decreased in the winter-spring. We did not observe a strong seasonality in the pCO2, air record. To evaluate our atmospheric xCO2 data, we compared data from the BE-FOS-VLIZ Thornton Buoy with 2018 data from nearby (i.e. < 900 km) atmospheric stations (Fig. 3): two ICOS atmospheric stations, Cabauw (207 m above sea level; Frumau et al., 2020) and Tacolneston (185 m; O'Doherty et al., 2020), and one atmospheric station at Mace Head (24 m; Delmotte et al., 2020).
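The Kruskal-Wallis and pairwise Wilcoxon comparison described above can be sketched as follows (placeholder series, not our data; scipy's mannwhitneyu is the rank sum test, and the hand-written Holm step mirrors the default adjustment of R's pairwise.wilcox.test):

```python
from itertools import combinations
from scipy import stats

# Placeholder daily flux series (mmol C m^-2 d^-1), one per pCO2,air source
fluxes = {
    "Thornton":    [-0.9, -1.1, -0.5, 0.1, -0.3, -0.8, -0.2],
    "Cabauw":      [-1.4, -1.6, -1.0, -0.2, -0.7, -1.3, -0.6],
    "Tacolneston": [-0.8, -1.0, -0.4, 0.2, -0.2, -0.7, -0.1],
    "Mace Head":   [-0.9, -1.0, -0.5, 0.1, -0.3, -0.8, -0.2],
}

# Global test across all sources
h, p_kw = stats.kruskal(*fluxes.values())
print(f"Kruskal-Wallis: H = {h:.2f}, p = {p_kw:.3f}")

# Pairwise Wilcoxon rank sum (= Mann-Whitney U) tests with Holm adjustment
raw = sorted(
    ((a, b, stats.mannwhitneyu(fluxes[a], fluxes[b]).pvalue)
     for a, b in combinations(fluxes, 2)),
    key=lambda t: t[2],
)
m, running = len(raw), 0.0
for rank, (a, b, p) in enumerate(raw):
    running = max(running, min(1.0, (m - rank) * p))  # step-down Holm
    print(f"{a} vs {b}: adjusted p = {running:.3f}")
```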
Usually, the CO2 mole fraction data and products of these land-based atmospheric stations are used to calculate air–sea CO2 fluxes (e.g. Borges and Gypens, 2010). A basic comparison between the different data sets highlights the following. The minimum and maximum CO2 mole fractions registered at the Thornton Buoy in 2018 were 389.4 ppm and 464.7 ppm CO2. The atmospheric CO2 mole fraction from the sampling station Cabauw fluctuated between 394.0 ppm and 473.5 ppm CO2, Tacolneston between 386.5 ppm and 455.1 ppm CO2, and Mace Head between 394.2 ppm and 451.7 ppm CO2. A similar trend was observed in the xCO2, air data of the Thornton Buoy as in the xCO2, air data of the other stations (Fig. 3). Our xCO2, air data are in range with the xCO2, air data of the land-based atmospheric stations (Fig. 3), which supports our use of local field observations. The use of local field observations of xCO2, air at sea provides useful information that complements the use of land-based stations because: 1) the sampling happens close to the water surface where the air–sea carbon exchange occurs, and 2) the xCO2, air observations are more specific to the sampling location than those of land-based stations. Air–sea CO2 flux The air–sea CO2 flux was estimated based on the salinity (Fig. 2a), temperature (Fig. 2b), wind speed (Fig. 2c), and pCO2 of seawater and atmosphere (Fig. 2d) time series at the Thornton Buoy in the BCS. We found that the wind speed had a large impact on the magnitude of the CO2 flux, i.e. higher wind speed increased the air–sea exchange of CO2 in either direction. The daily means of the CO2 flux varied between −2.99 mmol m⁻² d⁻¹ and 0.37 mmol m⁻² d⁻¹ (Fig. 4). We calculated monthly means (−0.95 ± 0.90 mmol m⁻² d⁻¹ to 0.08 ± 0.13 mmol m⁻² d⁻¹) and distinguished a clear seasonal pattern (Fig. 4). We compared these air–sea CO2 fluxes with the air–sea CO2 fluxes calculated with pCO2, air of the atmospheric stations. Only the carbon flux using the atmospheric CO2 data of Cabauw differed from the carbon flux using pCO2, air of the Thornton Buoy (p = 0.031; Fig. 3), showing the importance of local atmospheric pCO2 measurements. Overall, the air–sea CO2 fluxes calculated with the different pCO2, air sources, i.e. the Thornton Buoy and the atmospheric stations, followed a very similar seasonal trend (Fig. 3). Coinciding with other studies in the SBNS, we noted a seasonal effect in the air–sea carbon flux (Borges and Gypens, 2010; Gypens et al., 2011; Kitidis et al., 2019; Schiettecatte et al., 2007). The BCS at our location acted as a carbon sink from February until June (−0.95 ± 0.90 to −0.34 ± 0.23 mmol C m⁻² d⁻¹; Fig. 4). The sink was largest in April, with a monthly mean of −0.95 ± 0.90 mmol C m⁻² d⁻¹. The flux direction switched to a weak carbon source from mid-July until August, with a monthly mean of 0.08 ± 0.13 mmol C m⁻² d⁻¹. However, our findings contradict the other studies from August onwards. We found that the BCS at our measuring station switched back to a small sink from September until December (−0.15 ± 0.15 to −0.04 ± 0.10 mmol C m⁻² d⁻¹; Fig. 4). We believe that the frequency and quality of our local observations allowed us to identify the weak source in July and August, whereas it may have gone unnoticed with a different
observational capacity, e.g. sporadic sampling cruises. The Thornton Buoy ICOS setup allows for the collection of robust and high-frequency time series observations, whereas sampling cruises can provide excellent spatial coverage but sporadic time resolution (Borges and Frankignoulle, 2002; Schiettecatte et al., 2007). In that respect, it is possible that if samples and observations were obtained during a cruise in autumn over a relatively short period (days or weeks) when CO2 was emitted, then the extrapolation of those observations could have led to the BCS being described as a source in autumn instead of a sink. We also need to acknowledge that environmental factors, e.g. temperature and biological activity, can have a significant effect on carbon fluxes (Thomas et al., 2005, 2007; Wimart-Rousseau et al., 2020). Extreme events, such as the heat wave in the summer of 2018, may also have contributed to some of the differences (e.g. increase in CO2 concentrations) that we present in this study (Borges et al., 2019). Additionally, the solubility of CO2 is lower in warmer water (Wiebe and Gaddy, 1940), reducing the uptake of atmospheric CO2 (Yamamoto et al., 2018). Gypens et al. (2011) also simulated that the North Sea would change to a source of atmospheric CO2 under warmer conditions (biological processes excluded). Other factors, such as wind and the input of river plumes, are known to affect the air–sea carbon flux (Arndt et al., 2011; Gypens et al., 2011; Laruelle et al., 2018; Nightingale et al., 2000; Thomas et al., 2005). High wind speed during the winter can amplify the CO2 uptake in this season and so influence the yearly carbon exchange between the atmosphere and the sea (Kitidis et al., 2019). It is known that either temperature-driven or biological processes are the dominant driving factor of pCO2, sea (Schiettecatte et al., 2007; Thomas et al., 2005). In order to determine the main driver of the pCO2, sea dynamics, and as such to quantify the influence of temperature-driven and biological processes on the observed CO2 flux, we applied the computational method of Takahashi et al. (2002). We found that on an annual scale the biological activities dominated the pCO2, sea (T/B ratio = 0.69 and T − B = −113.32), and thus the CO2 flux, in the BCS. We also observed that the dominant factor changed by season. For the winter, i.e. February to March and October to December, we found that the thermal effect was dominant (T/B ratio = 1.24 and T − B = 42.28). However, in spring and summer biological processes were dominant over the thermal effect (T/B ratio = 0.84 and T − B = −34.74). Our results correspond with the results of Schiettecatte et al. (2007), who found a T/B ratio of 0.74 (T − B = −70). This is, however, in contrast with the results reported by Thomas et al. (2005), who suggested that temperature rather than biological activity controlled the pCO2, sea dynamics seasonally. The data analysed in Thomas et al. (2005) were collected in four short-term cruises, and one cruise (i.e. in May) did not capture a CO2 undersaturation (Schiettecatte et al., 2007). This CO2 undersaturation occurs in the declining phase of the phytoplankton bloom and is typically observed mid-April, when the bloom is at its peak in the SBNS (Borges, 2003; Borges and Frankignoulle, 2002; Gypens et al., 2004). Based on our high-temporal-resolution measurements, we found that biological activities in the BCS controlled the pCO2, sea and consequently the CO2 flux (T/B ratio = 0.69).
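For concreteness, the Takahashi et al. (2002) decomposition applied above can be sketched as follows (the coefficient 0.0423 °C⁻¹ is the standard Takahashi value; the monthly series are placeholders rather than our observations):

```python
import math

# Placeholder monthly means: in situ temperature (deg C) and pCO2,sea (uatm)
t_obs = [5.0, 6.0, 8.0, 10.0, 13.0, 16.0, 19.0, 20.0, 18.0, 14.0, 10.0, 7.0]
pco2  = [420, 400, 300, 250, 270, 330, 430, 450, 420, 400, 410, 415]

t_mean = sum(t_obs) / len(t_obs)
p_mean = sum(pco2) / len(pco2)

# Eq. (3): thermal component -- mean pCO2 perturbed by the observed temperature
p_therm = [p_mean * math.exp(0.0423 * (t - t_mean)) for t in t_obs]
# Eq. (4): biological component -- observed pCO2 normalized to mean temperature
p_bio = [p * math.exp(0.0423 * (t_mean - t)) for p, t in zip(pco2, t_obs)]

T = max(p_therm) - min(p_therm)  # seasonal amplitude of the thermal effect
B = max(p_bio) - min(p_bio)      # seasonal amplitude of the biological effect
print(f"T/B = {T/B:.2f}, T - B = {T - B:.1f} uatm")
# T/B < 1 (T - B < 0): biology dominates; T/B > 1: temperature dominates
```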
The high temporal resolution is important to determine the seasonal variations in the pCO2, sea, the CO2 flux and their underlying mechanisms. Linking high-temporal-resolution phytoplankton dynamics with our pCO2 and CO2 flux data (Hilligsøe et al., 2011) may provide new insights into the CO2 flux variation and its underlying drivers. According to our data at our location, the air–sea CO2 flux in 2018 was found to be mainly driven by biological processes, and we found that the BCS at our measuring station acted as a sink for atmospheric carbon on an annual scale (−130.19 ± 149.93 mmol C m⁻² y⁻¹). Our result is in line with other studies, which identified the SBNS as a CO2 sink on an annual scale (Borges and Frankignoulle, 2002; Gypens et al., 2004; Kitidis et al., 2019; Schiettecatte et al., 2007; Fig. 5). Figure 5: The annual air–sea CO2 flux from different studies (letters) with data from 1994 to 2018. Studies a, e, f, g, h and i provide an annual air–sea CO2 flux for the SBNS, whereas studies b, c, d and j do so for the BCS. Please note that the high values (> 1000 mmol C m⁻² y⁻¹) of studies b and d were located close (< 5 km) to the coast near Zeebrugge. Where possible, the standard deviation (or the standard error in the case of study g) is shown by error bars. The horizontal line around a letter indicates that a mean was taken over the indicated period during that study. Gypens et al. (2004, 2011) simulated annual CO2 fluxes in the range of our findings, e.g. −170 mmol C m⁻² y⁻¹ in 1996-1999 and −103 mmol C m⁻² y⁻¹ in 2002. However, the annual carbon sinks observed in other studies were twice (e.g. −300 mmol C m⁻² y⁻¹; Borges and Frankignoulle, 2002), four times (e.g. −700 mmol C m⁻² y⁻¹; Schiettecatte et al., 2007), or even 20 times as large (e.g. −2000 mmol C m⁻² y⁻¹; Kitidis et al., 2019) as our quantifications. Indeed, previous studies show a high inter-annual variability in the CO2 flux within the SBNS. Other studies (Borges et al., 2008; Thomas et al., 2004, 2005; Fig. 5) have observed that, in contrast to our study, the southern North Sea was a source of atmospheric CO2 on an annual scale, e.g. 220 mmol C m⁻² y⁻¹ (Thomas et al., 2005). It should be noted that many of the CO2 flux data of the southern North Sea are several years old, dating back to 2001 (Thomas et al., 2005), 2003 (Schiettecatte et al., 2007) and 2015 (Kitidis et al., 2019). These previous studies, as well as our study, show the high inter-annual variability in the BCS. This high inter-annual variability stresses the need to keep track of the air–sea CO2 flux in a highly dynamic area such as the BCS. Having access to recent and high-temporal-resolution in situ data is important for robust coastal and ocean research but is also useful for policy makers, as it could refine policy decisions. The carbon fluxes play a major role in the development of the ocean, i.e. ocean acidification (IPCC, 2019), and in the global carbon cycle, through the absorption of anthropogenic carbon emissions (Friedlingstein et al., 2019). Our findings are both in line (i.e. an annual sink for atmospheric carbon) and in contrast (i.e. an annual source for atmospheric carbon) with the findings of other studies, demonstrating the high inter-annual variability.
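As a side note on the annual estimate, the weighting scheme described in the Methods (0.25 for the winter months February and December, 0.75 for the remaining nine months, extrapolated to a year) amounts to the following sketch, with placeholder monthly means:

```python
# Placeholder monthly mean fluxes (mmol C m^-2 d^-1), February..December
monthly = {
    "Feb": -0.60, "Mar": -0.80, "Apr": -0.95, "May": -0.70, "Jun": -0.34,
    "Jul":  0.05, "Aug":  0.08, "Sep": -0.15, "Oct": -0.10, "Nov": -0.06,
    "Dec": -0.04,
}

winter = ["Feb", "Dec"]
rest = [m for m in monthly if m not in winter]

mean_winter = sum(monthly[m] for m in winter) / len(winter)
mean_rest = sum(monthly[m] for m in rest) / len(rest)

# Weighted daily mean (weights 0.25 / 0.75), extrapolated to a full year
daily = 0.25 * mean_winter + 0.75 * mean_rest
annual = daily * 365.0
print(f"annual flux ~ {annual:.1f} mmol C m^-2 y^-1")
```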
The air-sea CO2 flux does not only vary in time. It also varies in space. Having data on the spatial variability on a local scale, e.g. Thornton Buoy (this study) and Zeebrugge (Borges et al., 2008;Borges and Frankignoulle, 2002;Fig. 5), could be used to assess the spatial variability within a larger area, such as continental shelf seas. Continental shelf seas showed an increase 250 in absorbing atmospheric CO2 and variability within the shelf, but also across different shelf systems (Landschützer et al., 2016;Laruelle et al., 2018). Though, it remains uncertain if the increase in atmospheric CO2 absorption will continue (Legge et al., 2020). As global warming endures, seawater temperature will rise, consequently decreasing the solubility of CO2 (Wiebe and Gaddy, 1940), reducing the uptake of atmospheric CO2 (Yamamoto et al., 2018). In addition, global warming can affect the CO2 uptake indirectly by decreasing or stopping ocean circulation. Less ocean circulation will decrease the nutrient supply, 255 weakening the biological processes and the CO2 export (Yamamoto et al., 2018). The variability (Laruelle et al., 2018) and insufficient quantification (Legge et al., 2020) of air-sea CO2 flux stresses the need for more and extensive in situ observations on local, such as in this study, and global scale (Bozec et al., 2006;Wimart-Rousseau et al., 2020;Woolf et al., 2019). Hightemporal resolution of CO2 flux monitoring is key to gain more knowledge about the inter-annual variability, its drivers and the evolution of CO2 flux. We also suggest extending the observations to investigate the spatial variability in the BCS. 260 Conclusion We calculated monthly mean air-sea carbon flux at a station in the BCS using high-temporal-resolution data, i.e. daily measured values of pCO2, sea and pCO2, air. By doing so, we revealed a large range of the variability in the air-sea carbon flux (-2.99 mmol m -2 d -1 and 0.37 mmol m -2 d -1 ). The air-sea carbon flux displayed a seasonal pattern, with a sink in the winterspring months, a source in the summer and a small sink in autumn. We measured a carbon sink for atmospheric CO2 in 2018 265 with an estimated uptake of 130.19 ± 149.93 mmol C m -2 y -1 . We advocate for long-term sustained observations, that will allow to improve the quantification of coastal air-sea CO2 flux and constrain the associated variations and drivers.
Innovative Issues and Approaches in Social Sciences Rational-irrational Electoral Preferences, Altruism and Expressive Behavior IIASS is a double blind peer review academic journal published 3 times yearly (January, May, September) covering different social sciences. IIASS started as a journal of SIdip (the Slovenian Association for Innovative Political Science) and is now published in the name of CEOs d.o.o. by Zalozba Vega (publishing house). Abstract Caplan (2000, 2001, 2006) proposed the rational-irrationality model, arguing that irrationality is a good like any other, whose consumption is maximized in relation to its costs and benefits. Applying this model to the problem of electoral behavior, Caplan implies that voters 'afford' many irrational beliefs, because the lack of individual decisiveness renders the vote a consequenceless act. This paper contributes to the development of knowledge by analyzing the compatibility of rational irrationality with active electoral behavior. Two important arguments are proposed: First, Wittman's (2008) intuition that rational irrationality is incompatible with voting could be supported only for a particular type of altruism, which Caplan actually seems to reject. Second, rational irrationality seems to be compatible with expressive motivations, reinforcing the conclusion that rational-irrational individuals are active voters in mass elections. [Author's note: The author receives a scholarship within the project "Doctoral and Postdoctoral Fellowships for young researchers in the fields of Political, Administrative and Communication Sciences and Sociology" POSDRU/159/1.5/S/134650, financed through the Sectorial Operational Programme for Human Resources Development 2007-2013, co-financed by the European Social Fund.] Introduction The way people get informed or vote in the electoral events specific to contemporary democracies has been the subject of numerous studies in the social and behavioral sciences. Since Downs (1957), voting behavior has become important in economists' concerns. Starting from the standard methodological principles of neoclassical economics - methodological individualism, expected utility maximization, homo economicus (i.e. instrumental rationality and selfishness) - Downs argued that individuals have few rational reasons (rationality being defined as above) to be informed or to vote. Based on their indecisiveness, individuals choose to remain ignorant about the quality of electoral alternatives - they are rationally ignorant. Moreover, electoral participation (voting) would also be underprovided. In other words, two of the core issues on which the health of democracy rests - information and participation - will be underprovided. These two results were differently received by the academic community. If in the case of rational ignorance the degree of adoption was higher, due to its compatibility with observed behavior - i.e. citizens are often politically ignorant - in the case of the abstention prediction the acceptance was of course difficult: electoral participation is indeed much higher than anticipated by Downs' (1957) and later by Tullock's (1967) model. Starting from this obvious failure, public choice researchers have formulated numerous alternatives to the classical model.
The most important of them, the expressive voting model (Brennan and Buchanan, 1984; Brennan and Lomasky, 1985, 1987, 1997; Brennan and Hamlin, 1998) and the altruistic voting model (Jankowski, 2002, 2007; Fowler, 2006; Edlin, Gelman and Kaplan, 2007), solved the problem of incompatibility with observable facts. In these models the problem of information has remained marginal, the rational ignorance hypothesis being most likely tacitly accepted. In a series of works (1999, 2000, 2006), Caplan explicitly attacks this norm of Public Choice Theory. According to Caplan, voters' undeniable ignorance is not really that rational. Being rationally ignorant implies unsystematic errors, but what Caplan argues is that voters display systematic bias rather than random errors. Ignorance of this kind is therefore irrational. The reason for these systematic biases is not, in turn, irrational. In Caplan's terms: "When there are weak incentives to reach correct answers, an otherwise intelligent person may opt to turn off his critical faculties and believe whatever makes him feel best." (Caplan, 2004: p.471). In other words, irrationality is rationally chosen. Caplan discusses the implications of this way of conceiving rationality on electoral behavior. His analysis focused, though, on the issue of the quality and formation of electoral preferences, ignoring the issue of electoral participation. Voting is implicitly assumed in Caplan's work, but it is never treated as a problem in need of an explanation. On this problem, Wittman (2008) mentioned the possibility that rational irrationality may be inconsistent with voting, being therefore affected by the same problem as the Downs-Tullock model of electoral behavior. From this, I analyze the consistency of the rational irrationality model, trying to learn to what extent Wittman's critique can be sustained. Thus, I discuss pure altruism, non-instrumental warm-glow altruism, instrumental warm-glow altruism and expressive motivations in connection to rational irrationality. I will address these issues in the following sequence: First I shortly present the public choice models of ignorance and voting, then I present the rational irrationality model, and finally I develop the analysis briefly presented above. The problem of information and electoral behavior in public choice theory As mentioned in the previous section, the public choice study of electoral behavior begins with Downs (1957, 1957b) and connects the issue of electoral participation with that of the quality of information voters have about electoral alternatives. In terms of rational ignorance, Downs assumes that information is instrumentally valuable and, given "the insignificance of any one voter in a large electorate" (Downs, 1957b: p.146), the returns of voting "correctly are infinitesimal" (Downs, 1957b: p.146). In other words, "it is irrational for most citizens to acquire political information for purposes of voting" (Downs, 1957b: p.147). In addition, the quality of democracy (which depends on the information that people have about politics) is a non-exclusive good - once produced, it is indivisible and will be open to consumption both for those who participated in providing it and for those who did not. For this reason everyone has incentives to avoid paying information costs, thus becoming free riders. The fundamental assumption of rational ignorance, namely individual indecisiveness in mass elections, is a critical assumption also for implying voting abstention.
Based on the results published by Downs (1957, 1957b), Tullock (1967) proposed a formula of the following form: R = B·A·P − Cv − Ci, where R is the reward (payoff) received for voting, B is the (differential) benefit expected to be derived from the success of your party/candidate, P is the probability of your vote being decisive (with P ≈ 0) in bringing about B, A stands for the voter's estimate of the accuracy of his judgment, Cv is the cost of voting and Ci is the cost of obtaining information. In the public choice literature, however, a simplified version of this formula is often used: R = P·B − C. The structure of the calculus of voting model is as follows: First, voters are primarily conceived as instrumental and selfish utility maximizers (homo economicus); secondly, all voters are able to correctly estimate the costs (C) and benefits (B) of the act of voting; third, all voters know the value of P, being aware of the unlikelihood of their decisiveness in mass elections. The implication of these assumptions is that most citizens will abstain from voting - a conclusion in obvious conflict with the observables of democratic elections. For this reason alternative models have been formulated, retaining the fundamental methodological principles of public choice theory and giving up the least important ones: expressive voting and altruistic voting. Both retained expected utility maximization and gave up the homo economicus assumption - i.e. instrumental behavior (the case of expressive voting) or selfishness (the case of altruistic voting). In the case of expressive voting, individuals express either their partisan support (Fiorina, 1976; Brennan and Buchanan, 1984; Brennan and Hamlin, 1998; Kan and Yang, 2001) or their moral feelings (Buchanan, 1954; Tullock, 1971; Brennan and Lomasky, 1985, 1987). In both cases, however, the model structure is the same: voters are non-instrumental utility maximizers. They are all capable of correctly estimating the costs (C) and benefits (B) of voting, and they are all aware of the low value of P - the improbability of individual decisiveness. But with the non-instrumental component (usually written as a D term added to the calculus: R = P·B − C + D), the effect of P is counterbalanced and the model has an implication consistent with the facts: rational-expressive individuals are active voters. Regarding the altruistic voting model (Jankowski, 2002, 2007; Edlin, Gelman and Kaplan, 2007), individuals are conceived as having utility functions that include considerations about the welfare of other people - e.g. they vote for the country or for the common good. The model is primarily based on the assumption of instrumental altruistic maximization of expected utility. In this model voters are able to correctly estimate the costs (C) and benefits (B) of voting and they have a fair representation of the value of P. The conclusion of this model is also compatible with reality: since B has a component that includes the welfare of others, its value increases with the number of 'others' and cancels the effect of the low value of P. In the next section I present a more recent model of electoral behavior that focuses on the issue of preference formation and ignorance rather than voting, but which has implications for the latter - the rational irrationality model.
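Before moving on, a small numerical sketch may make the calculus just described concrete (all figures are illustrative assumptions, not values from the cited studies):

```python
def reward(P, B, C, D=0.0):
    """Simplified calculus of voting: R = P*B - C + D."""
    return P * B - C + D

P = 1e-7      # probability of being decisive in a mass election
B = 5_000.0   # selfish benefit if the preferred candidate wins
C = 10.0      # cost of voting

print(reward(P, B, C))           # ~ -10: abstention is predicted
print(reward(P, B, C, D=15.0))   # an expressive/duty term D flips the sign
# Altruistic repair: B scales with the number of beneficiaries,
# e.g. a $50 per-capita benefit over 10 million people
print(reward(P, B=50.0 * 10_000_000, C=C))  # 1e-7 * 5e8 - 10 = +40
```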
The rational irrationality model One implication of the neoclassical methodological framework for analyzing electoral behavior (presented in the previous section) is that individuals evaluate electoral alternatives (they don't act randomly) and have unbiased preferences. In his (2000) paper Caplan argued, however, that this idea should be only partially accepted, and that, in fact, individuals may have a rational demand for irrationality - they are rationally irrational in a "near-neoclassical" way (Caplan, 2000: p.196). The underlying idea of this new way of conceiving rationality is that individuals can formulate preferences over their beliefs based on the costs and benefits that these beliefs carry. In this view, beliefs are equivalent to any other good whose consumption is maximized by individuals. Moreover, each individual has a bliss belief (i.e. a belief that makes him feel good), and the individual demand for irrationality is determined by its cost. The idea is illustrated in Figure 1 (adapted after Caplan, 2000: p.195; its axes are the quantity of material wealth and the quantity of irrationality, with the bliss belief at the far end of the irrationality axis). The wealth/irrationality budget line shows which combinations of welfare and irrationality are feasible (Caplan, 2000: p.194). Its intersection with the wealth axis indicates a pure neoclassical preference - the consumption of irrationality is zero. Its intersection with the irrationality axis illustrates the consumption of the bliss belief; in Caplan's words, "When the price of irrationality is zero people adhere to their bliss belief, consuming irrationality until they are 'satiated'" (Caplan, 2001a: p.314). Depending on the cost of irrationality, individual preferences deviate from standard neoclassical rationality, moving closer to the bliss belief. Fundamental to this model is the assumption that the exchange between welfare and irrationality units is based on an unbiased judgment about the tradeoff. Therefore rational-irrational agents have rational expectations about the slope of their wealth/irrationality budget line - they "perceive the impact of their irrationality on their wealth without bias" (Caplan, 2000: p.195). In other words, individuals are aware that an increase in their psychological welfare (being closer to their bliss belief) can result in a loss of material welfare. Caplan (2004) provided the following example to illustrate this: "a doctor may want to believe that he can perform surgery while drunk without additional risk, but this belief would have high expected material costs from law suits and loss of business" (Caplan, 2004: p.471). Given its high price, irrationality will not occur - the doctor will not consume his bliss belief in this case. This model has as its main application the problem of information and of citizens' electoral preferences. Regarding information, Caplan (2006) noted: "What voters don't know would fill an university library" (Caplan, 2006: p.5). This ignorance, however, is not explained by the rational ignorance hypothesis. The errors of judgment and choice that voters make are not caused only by the lack of information. In fact, Caplan (2006: p.100) argues that emotional attachment seems to be a better candidate to explain them. According to Caplan (2001b, 2004, 2006), the beliefs that voters, and people in general, have are not 'impartial' as implied by the rational ignorance hypothesis. Actually, these beliefs are biased, and are better explained by the rational irrationality model. A key factor here is the fact that in mass elections the private cost of irrationality is insignificant. Returning to the example of the doctor, although he cannot afford to operate while being drunk, he "could however vote on the basis of lame economic sophisms without fear of negative consequences.
Since his vote is almost certain to have no effect on the outcome anyway, he could safely indulge irrational political beliefs at the ballot box even though he refrains from such cognitive excesses on the operating table" (Caplan, 2004: p.471). Individual indecisiveness in mass elections therefore explains why voters are rationally irrational. Based on these considerations Caplan identifies four systematic biases that voters have. These are not, however, important for my analysis, and therefore they are not presented here. Rational irrationality, abstention, altruism and non-instrumental behavior Caplan's theory was rather critically received. Tullock (2008) labeled it an "attack on democracy" (Tullock, 2008: p.485), and Bennett and Friedman (2008) argued that the very concept of rational irrationality is inconsistent and that there is no solid evidence to support Caplan's conclusion that emotions or ideology could explain public errors regarding economic issues. This latter criticism was also formulated by Wittman (2008), who noted that the way Caplan interpreted the data is less than convincing and that he was unable to demonstrate that rational irrationality could replace rational ignorance (Wittman, 2008: p.369). Another criticism, made by Elster and Landemore (2008), was that Caplan's theory is deeply ideological and conceptually confused. Most of these criticisms focused on the four biases Caplan identified and on his conclusions about democracy. Some of them, however, have focused on the methodological difficulties of the rational irrationality model. My analysis falls into this latter category, discussing the problem of the consistency of Caplan's model. The significance of voting as a consequenceless act and the abstention prediction As mentioned in the introductory section, Caplan does not formulate an explicit argument about electoral participation. Such an argument is though implied - rational-irrational individuals seem to be active voters, and this fact remains unquestioned in Caplan's work. Regarding this issue, Wittman (2008) expressed an intuition (without developing it into a solid critique) about a possible problem: "voters behave as if their votes were important. First, they vote, which is costly; if they thought their vote did not count, then they probably would not vote." (Wittman, 2008: p.372). In what follows I will develop the analysis shortly indicated by Wittman, focusing on the issue of the internal consistency of Caplan's model. A first step is to clarify its logical structure. Caplan repeats in several papers (2000, 2001, 2004, 2006) the idea that voters are aware that their vote is consequenceless. This idea is consistent with the calculus of voting model as well as with the expressive and the altruistic voting models. In all these models voters know that their vote is without consequences. This idea, however, is unclear and should be further studied. In the terms proposed in section 2 of this article (R, B, P, C), the sentence "voters are aware that their vote is consequenceless" certainly implies knowledge of the value of the term P, i.e. the probability of bringing about the benefit B. Some of Caplan's statements seem to indicate that "being consequenceless" exclusively means knowledge of the P term: "Since his vote is almost certain to have no effect on the outcome […]" (Caplan, 2004: p.471) or "Democracy asks voters to make choices, but gives each only an infinitesimal influence.
From the standpoint of the lone voter, what happens is independent of her choice" (Caplan, 2006: p.140). On the other hand, turning to the calculus of voting formula, the lack of consequences could comprise more than just knowing the value of P. If we accept this idea, then my voting is inconsequential for me not only because I know that the probability of being decisive is very small, but also because I can correctly estimate the values of B and C. Suppose, for the sake of the argument, that things were the opposite: voters know the value of P but not those of B and C. If this were the case, the principle of utility maximization would become unusable (I cannot maximize without knowing these values). But Caplan accepts the importance of this principle. From here, apparently we should accept as 'Caplanian' (in Caplan's spirit) the assumption that rational-irrational voters also know the values of B and C: in this respect, my choices are inconsequential if the value of C is 'sufficiently large' and the value of B is 'small enough' to 'activate' the value of P, which is constantly very small. If, however, the value of B is 'large enough' and the value of C is 'small enough', P is counterbalanced and voting becomes an act that has consequences for me. If rational-irrational voters should know the values of B and C, then their behavior would be consistent only if they abstained from voting. Such a conclusion could be implied by the following argument: One of the fundamental premises of Caplan's model is that, at some level, individuals know the exact costs of the irrational beliefs that they may hold when they vote. Taking one of Caplan's (2006) examples, I may believe, despite all the information available, that voting for the Communist Party is a good idea if I understand, at the level of the choice over beliefs, the values of P, B and C (in the interpretation that these are all necessary to imply the lack of consequences of unilateral voting). But if this is so, then Caplan's model could only explain the emergence of communist beliefs, but not voting according to them. If the knowledge of P, B and C is required in order to maximize utility in choosing beliefs, being an active voter would involve a contrary belief, namely that my vote counts (Wittman's intuition). Hence, in this interpretation, Caplan's model would explain why people would hold certain beliefs but not why they would vote according to them. If my vote is without consequences, then whatever beliefs I may have, I would not have any reason to vote according to them. Moreover, if the lack of consequences of voting did not involve knowing the values of B and C but only of P, then even if I am already in the voting booth (say I work there), I would have no reason to vote for the party I prefer. So it is possible that I could believe that the communist alternative is the best, but at the same time vote for the Nazi party (since there is no way my vote could break a tie). This conclusion could be strengthened by some details that Caplan gave in the second part of 'The Myth of the Rational Voter' (2006): "Irrationality makes the individual better off" under the following condition: the expected material cost of irrationality, weighted by the probability p of one's belief mattering for the outcome, must not exceed its psychological benefit; and if p is (nearly) zero, irrationality is utility-maximizing as long as there are any psychological benefits at all (Caplan, 2006: p.146). From these formulas several implications can be derived: First, choosing electoral beliefs is connected to the likelihood of being decisive (this has the role of strengthening the idea that knowing P alone implies that voting is a consequenceless act). Secondly, the material cost of irrationality should not be confused with C from the calculus of voting formula, and the psychological benefit should not be confused with B from the same formula. What emerges from the above is that we should distinguish between two levels of choice, and that at the level of the choice over beliefs, only the factor P appears to be required. Also, the passage reinforces the impression left by reading several of Caplan's works, namely that, in general, he assumes that P ≈ 0 and that all voters know this. Based on these considerations and moving to the level of the decision to vote, the voters' knowledge of the value of P should be kept constant: if at the upper level (the level of choosing beliefs) it was assumed that P ≈ 0, then an intuitive inter-domain invariance condition is that P ≈ 0 also holds at the lower level (the decision to vote) - one cannot believe at one level that his/her vote does not bear any consequences, and at the other level that it does. If the invariance of P is a condition which Caplan would accept, then the exact values of B and C would be irrelevant. If P ≈ 0, then P·B ≈ 0 at any value of B, and R = −C < 0 for any positive cost of voting. In this case Caplan's model would explain, as noted above, only the reason a person would think that communism is the best alternative, but would not also imply voting for the communist party - even being in the voting booth, he/she would have no selfish
Secondly, the material cost of irrationality should not be confused with from the calculus of voting formula, and the psychological benefit should not be confused with from the same formula. What emerges from the above quote is that we should distinguish between two levels of choice, and that at the level of choice over beliefs, only the factor appear to be required. Also, the quote reinforces the impression left by reading several of Caplan's works, namely that, in general, he assumes that and that all voters know this. Based on these considerations and moving to the level of the decision to vote, the voters' knowledge of the value of should be kept constant: if at the upper level (the level of choosing beliefs) it was assumed that , then an intuitive inter-domain invariance condition is that also at the lower level (the decision to vote)one cannot believe at one level that his/her vote does not bear any consequences, and at the other level that it does. If the invariance of is a condition which Caplan would accept, then the exact value of and would be irrelevant. If then and at any value of , In this case Caplan's model would explain, as noted above, only the reason a person would think that communism is the best alternative, but would not also imply voting for the communist partyeven being in the voting booth, he/she would have no selfish | 17 instrumental reason to vote according to this belief. On the other hand, if , deducing whether a rational-irrational individual would vote, two conditions seem necessary: the first is rather obvious: maximizing over both levels (inter-domain maximizing) implies either that voter knew the value of all factors, either that some of these factors are irrelevant. Since this latter case was fairly discussed above, we are left with the conclusion that rational-irrational individuals should also know the values of and from the calculus of voting formula. But if this is the case and if we accept that have rarely a big enough value to counterbalance the small value of and almost any positive value of , then the conclusion previously stated should be maintained: rationalirrational individuals may have communist, Nazi, Christian etc. beliefs, but they would not have any reasons for voting according to these beliefs (or voting at all). In this case Wittman's intuition would be correct. Rational Irrationality, altruism and expressive voting The above criticisms seem to seriously affect Caplan's model, but they cannot be stated with complete confidence unless certain issues about the nature of rational irrationality are clarified: Does it concern only selfish individuals, or is it compatible with altruism? Is it only instrumental or it is compatible with non-instrumental interpretations? These questions are relevant because we have already seen that the alternative models presented in section 2 were able to predict electoral participation building on altruism or non-instrumental considerations. So it should be determined the extent to which rational irrationality can be operationalized as altruistic or non-instrumental, and it should be clarified under which terms any such compatibility can save Caplan's model from the charges of internal inconsistency and incompatibility with the observables of democratic elections. 4.2a. The issue of imperfect altruism Regarding altruism, Caplan shows that rational irrational individuals are actually altruists: "voters are not selfishly motivated. The self-interested voter hypothesis -SIVH -is false. 
In the political arena, voters focus primarily on national well-being, not personal well-being" (Caplan, 2006: pp.148-149) and "Good intentions are ubiquitous in politics; what is scarce is accurate beliefs" (Caplan, 2006: p.157). Apparently, from the above-mentioned coexistence of altruism and rational irrationality, but also from the conclusion of the altruistic voting model (presented in the second section of this paper), we could infer that Caplan's model makes the prediction that people vote according to their selfless irrational beliefs. This conclusion would be supported (under certain conditions which I discuss below) both in Jankowski's (2002, 2007) and in Edlin, Gelman and Kaplan's (2007) interpretations of altruism. Therefore, whether we decompose the benefit term as in Edlin, Gelman and Kaplan (2007) or as in Jankowski (2002), apparently Caplan's model of rational-irrational selfless voting leads to a conclusion which is consistent with observable facts: individuals vote according to their rational-irrational beliefs. Caplan's altruism has, however, some particular features, and it should be analyzed whether they can lead to a conclusion contrary to that of the previous paragraph. As in the case of irrationality, Caplan shapes altruism as a consumption good: "first, altruism and morality generally are consumption goods like any other, so we should expect people to buy more altruism when the price is low. Second, due to the low probability of decisiveness, the price of altruism is drastically cheaper in politics than in the markets. Voting to raise your taxes by a thousand dollars when your probability of decisiveness is 1 in a 100,000 has an expected cost of a penny" (Caplan, 2006: p.150). This idea is illustrated in Figure 2, below. The price of altruism in what concerns political decisions is therefore zero (the intersection of line D with the quantity axis), while the price of altruism in what concerns economic decisions is much higher. Based on this price, the amount of altruism acquired on the market is expected to be low, while in political choices it is expected to be much higher (an idea that is not new, being also presented by Tullock (1971) and Brennan and Lomasky (1985)). The altruism Caplan described was later labeled by Elster and Landemore (2008) as equivalent to Andreoni's (1989, 1990) warm-glow altruism or selfish altruism. Starting from this observation (and without taking into account Elster and Landemore's (2008) criticism), the next step in analyzing Caplan's model is to examine rational irrationality in relation to the types of altruism identified by Andreoni (1989, 1990). Subsequently, starting from the analysis developed by Jankowski (2002, 2007), we should be able to determine whether rational irrationality is compatible with (altruistic) voting (i.e. which of the types of altruism imply that individuals vote). Andreoni (1990) has proposed a simple model (whose details I do not insist on here, since they are rather irrelevant for my argument) in order to distinguish between two important types of altruism, based on the following formula of impure altruism: Ui = Ui(xi, G, gi), where the utility of individual i (Ui) depends on his consumption of a private good (xi), the total quantity of a public good (G) and his private contribution to the public good (gi), with G = Σj gj (the sum of all individual contributions that constitute the public good). Based on this formula, Andreoni differentiates between pure altruism, Ui = Ui(xi, G), and pure egoism, Ui = Ui(xi, gi). To move further, some clarifications are needed: First, Andreoni (1989) differentiates between pure altruism and warm-glow altruism/selfish altruism. The difference between these two types of altruism resides in the invariance, to the donor's identity, of the wealth created by the act of donation.
To move further, some clarifications are needed. First, Andreoni (1989) differentiates between pure altruism and warm-glow altruism (selfish altruism). The difference between these two types of altruism resides in whether the value created by the act of donation is invariant to the donor's identity. In other words, a pure altruist is concerned only with the amount of goods that the receiver gets and not with the identity of the donor, while a warm-glow altruist is not concerned with the total amount of goods received, but with the identity of the donor: I can feel good about myself because my donation proves that I am a good man, and I cannot get this feeling from the fact that others donate; this feeling depends exclusively on my donation. Starting only from Caplan's description of rational-irrational altruism, it seems difficult to assess whether the latter could be labeled as warm-glow altruism, as Elster and Landemore claimed. However, Caplan offered at least two explanations that could shed some light on the type of altruism he had in mind. The first is about the altruistic motivation of millionaire actors from Hollywood, which is designed to "enhance their self-image" (Caplan, 2006: p.151). An additional argument for labeling rational-irrational voters as warm-glow altruists is given by the way the rational irrationality concept is internally built: voters "are not selfish in the conventional sense of trying to maximize their wealth or income. […] they choose their political beliefs based on psychological benefits to themselves, ignoring the costs to society." (Caplan, 2006: p.229). This position seems to indicate that although people can be altruists when voting, altruism would be selected for selfish reasons: it would produce psychological benefits for those who 'donate' by voting. In other words, the invariance to the identity of the donor is not satisfied. Assuming that we have thereby determined that Caplan's voters are rational-irrational warm-glow altruists, it only remains to be determined whether this is sufficient to generate a possibility result when it comes to turnout. There are two cases that can be studied starting from Jankowski (2002): the first case, where warm-glow altruism is independent of pure altruism, with the formula R = p(B_s + B_a) + B_w + D - C; and the second case, where warm-glow altruism is dependent on pure altruism, with the formula R = p(B_s + B_a + B_w) + D - C. In these two formulas B_s is the purely selfish benefit, B_a is the purely altruistic benefit, B_w is the warm-glow altruism, and D is the factor introduced by Riker and Ordeshook (1968) to capture mainly civic duty [11]. In the first case, both B_w and D are independent of p's effect. Temporarily ignoring the D factor, which has not until now been the object of my analysis, it could be said that pB_a and B_w are sufficient (together but also separately) to generate a possibility result (people should vote). The argument is quite simple: in respect to pB_a, "if the net benefit to others from candidate A's program is $1 billion in extra welfare expenditure, then even if p = 1/200,000,000, the expected benefit ($5) will exceed the costs of voting" (Jankowski, 2002: p.64). Regarding the B_w factor, its effect is obvious: almost any factor that is not under the influence of p (i.e. not multiplied by p) has the nature of counterbalancing p, because the value of this latter factor is usually very small in all democracies.

[11] This is a deliberate simplification of the D term. Additionally, in a later section of this paper I will explore the expressive meaning of D. For other meanings of D, Riker and Ordeshook (1968: p.28) should be consulted.
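The two formulas above (whose symbols, here and in the surrounding text, are Riker-Ordeshook-style reconstructions, since the originals were lost in extraction; the exact placement of C and D is an assumption) and Jankowski's numerical example can be checked directly; the decisiveness probability below is inferred from his own figures ($1 billion benefit, $5 expected value).

```latex
% Case 1 (warm-glow outside p):  R = p(B_s + B_a) + B_w + D - C
% Case 2 (warm-glow inside p):   R = p(B_s + B_a + B_w) + D - C
% Jankowski's example under Case 1:
p \cdot B_a = \frac{1}{200{,}000{,}000} \times \$1{,}000{,}000{,}000 = \$5
% which exceeds any plausible cost of voting C.
```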
In the second case (i.e. the second formula), B_w has instrumental value (it depends on the outcome of the donation and not on the act of donation in itself) and becomes dependent on p (it is multiplied by p), which means that its effect is severely muted. Since the value of B_w is, by definition, much smaller than the value of B_a, in this case "it is pure altruism rather than warm-glow altruism that has the dominant impact on the voting decision" (Jankowski, 2002: p.65). So, in this second case, if rational irrationality is compatible only with warm-glow altruism, then Caplan's model is inconsistent, i.e. such individuals would have little reason to vote. This conclusion, however, requires further clarification. A first observation is that the instrumental interpretation of warm-glow altruism could be considered to deviate from Andreoni's (1989) definition of this class of altruism: "the warm-glow is an increasing function of what is given" (Andreoni, 1989: p.1449), and not of what is received! In other words, this benefit would be invariant to the decisiveness of the donation: I feel good about myself not because X has received something from me, but because I donated. I don't really care if X really received something, as long as I have proved to myself, by donating (i.e. voting for a transfer), that I am a good, generous, admirable man. In Andreoni's view, B_w appears to be a non-instrumental factor; therefore only the first of Jankowski's (2002) formulas would comprise Andreoni's warm-glow altruism. That being the case, four things are left to be clarified to have a complete analysis of the matter: a) Is the second interpretation of B_w (Jankowski's interpretation, denoted here by B_w^J) a legitimate category? It is clearly analytically distinct from Andreoni's interpretation of B_w (denoted here by B_w^A), but this does not by itself disqualify this new notion of warm-glow altruism from a thorough discussion about its relation with rational irrationality. b) Does Caplan's altruism fit into B_w^A or B_w^J? c) Is rational irrationality consistent with pure altruism? d) Are expressive motivations a way that could help rational irrationality cancel the effect of p? I will address all these problems in the next section.

4.2b Instrumental warm-glow altruism and expressive motivations

First, B_w^A and B_w^J should be given a natural-language expression. B_w^A could be translated into: "I care about the donor's identity, but not about the donation's decisiveness", while B_w^J could be translated as: "I care about the donor's identity and about the donation's decisiveness". Whether B_w^J is or is not an intuitive category is arguable to the same extent as in the case of B_w^A: the individuals falling into class B_w^J seem as credible as those falling into B_w^A, and since there is no analytical reason to reject B_w^J's possibility, it could be accepted. Once B_w^J's legitimacy is accepted (at least on analytic grounds, if not also for ontological reasons), the possibility of inconsistency reopens because, at first glance, Caplan does not provide sufficient detail to allow us to be completely sure whether rational irrationality fits into B_w^A or B_w^J. Luckily, this is just an appearance. Returning to Caplan's view of p being zero or near zero, it could be argued that connecting decisive altruism with rational irrationality is not in the spirit of Caplan's theory: if I know that I could not be decisive (p ≈ 0) at the first level (choosing beliefs), then (provided inter-level invariance) I will keep this knowledge of p also at the lower level, which means that the only possible form of altruism is B_w^A.
B_w^J would not be possible because I could not extract utility from this kind of altruism, as long as it depends on p, which can be zero, and as long as pure altruism does not seem consistent with rational irrationality. This being the case, Caplan's model seems to be consistent. But besides B_w^A, there is another term that could save Caplan's model from the charge of inconsistency. In the second section of this paper I discussed two alternatives deemed viable to solve the paradox of the calculus of voting model (without giving up the principle of utility maximization): altruistic and expressive voting. Since the altruism solution in relation to rational irrationality was already explored, it remains debatable whether expressive voting would be consistent with the fundamentals of Caplan's model. I mentioned above that in its B_w^A form, warm-glow altruism is analytically indistinct from the term D, and I noted one of the main meanings this latter term has been given, i.e. civic duty. Another meaning that Riker and Ordeshook (1968: p.28) mentioned is the expression of partisan preferences. In his (2006) book, Caplan shows that his model is closely related to that of expressive voting. This relationship can be interpreted in two ways. First, rational-irrational individuals could choose beliefs of some kind but also choose not to express them, because they don't get satisfaction from expressing beliefs but just from having them. In this case expressive voting and rational irrationality seem distinct, and we could not legitimately add an expressive component to rational irrationality in order to counterbalance p. Second, individuals may have an irrational belief that they may wish to express. In this case the rational-irrational and the expressive considerations are analytically indistinct: 'I think the Tooth Fairy would be a good President (for me) and I express this belief by voting for her even if she is not on the agenda.' In this example I would choose an irrational belief because its material costs are zero, and I would choose to express it by voting. This case could not fall in the class of B_w^A because the reasons are selfish (I think about my own benefits in voting for the Tooth Fairy). If this were compatible with the rational irrationality model, then in addition to B_w^A there would be an expressive rational-irrational term (let's label it E) which could nullify the effect of p. This sense of a connection between rational irrationality and expressive motivations seems to be in Caplan's spirit: "expressive voters do not embrace dubious or absurd beliefs about the world. They simply care more about how policies sound than how they work. [...] In contrast, rationally irrational voters believe that feel-good policies work" (Caplan, 2006: p.139). This statement concerns a non-analytical difference between expressiveness and rational irrationality. In this case the difference would be psychological, not behavioral. If this interpretation is correct, then it seems legitimate to add E into a 'caplanian' (i.e. one that Caplan would accept) equation of voting. In this interpretation, be it about B_w^A or E, Caplan's model seems to be consistent with voting.

Conclusion

Regarding the rational irrationality model, Wittman (2008) noted that "voters behave as if their votes were important. First, they vote, which is costly; if they thought their vote did not count, then they probably would not vote." [16] (Wittman, 2008: p.372).
In this paper I explored this intuition by studying several ways in which rational irrationality could be connected with the problem of electoral participation. The first of these, instrumental egoism, is explicitly rejected by Caplan. The second, pure altruism, seems inconsistent with how Caplan defines choice over beliefs (and over altruism). The third, non-instrumental warm-glow altruism (B_w^A), seems to be consistent with voting, while the fourth, instrumental warm-glow altruism (B_w^J), although rather inconsistent with voting, does not seem to be the kind of altruism Caplan had in mind. Based on these observations, a caplanian equation of voting should probably contain the B_w^A term and eventually the E term (both terms are independent of p). Separately but also together, these terms are intended to offset the effects of p and C. Therefore, in a particular interpretation, Caplan's rational irrationality is compatible with voting [17]. This paper concludes that although Caplan's model has many open doors through which significant criticism could enter, it also has some exits through which rational irrationality could be evacuated from the path of inconsistency allegations.

[16] Elster and Landemore (2008) expressed a similar intuition.
[17] The criticism intuited by Wittman (2008) considered only instrumental egoism, which Caplan keeps only at the level of choice of beliefs and not at the level of voting. At this latter level Caplan uses either non-instrumental warm-glow altruism or a modified version of expressive voting.
Study the Effects of Vernakalant on Ischemic-Reperfusion Dysrhythmias in Experimental Animals in Comparison with Amiodarone

Article by Aljoufi FA, Hendawy OM, Della Grace Thomas Parambi, Fadwa A. Elroby
Assistant Professor of Pharmacology, Faculty of Pharmacy, Jouf University, Saudi Arabia
Lecturer of Clinical Pharmacology, Faculty of Medicine, Beni-Suef University, Egypt
Assistant Professor of Pharmaceutical Chemistry, Faculty of Pharmacy, Jouf University, Saudi Arabia
Lecturer of Forensic Medicine & Toxicology, Faculty of Medicine, Beni-Suef University, Egypt
E-mail: faaljoufi@ju.edu.sa, omnia_mmh@yahoo.com, dellajesto@gmail.com, dr_fido_311@yahoo.com

Introduction

Dysrhythmia is an abnormality of the rate, rhythm, or site of origin of cardiac impulses, or a disturbance in the electrical conduction system of the heart that alters the activation sequence of the myocardium. The impulses originate from the primary pacemaker, the sinus node, which spontaneously sends a depolarization wave through the atrium, depolarizing the atrioventricular node; the wave is then propagated to the Purkinje fibres, depolarizing the ventricle in a systematic manner. There are more than a hundred classes of cardiac dysrhythmias. The normal cardiac rhythm, sinus rhythm, can be disrupted by failure of automaticity, as in sick sinus syndrome, or by overactivity, as in inappropriate sinus tachycardia. Ectopic foci cause premature excitation of the myocardium on a single or continuous basis, leading to premature atrial contractions (PACs) and premature ventricular contractions (PVCs). Other classes of dysrhythmia, such as atrial fibrillation, paroxysmal atrial tachycardia (PAT) and supraventricular tachycardia (SVT), are caused by micro- or macro-re-entry. In general, the seriousness of cardiac dysrhythmias depends on the presence or absence of structural heart disease. [Fu, 2015] The most common relatively benign dysrhythmias are atrial fibrillation, PACs and PVCs; these are benign only in the absence of a structural heart lesion. In contrast, the presence of non-sustained ventricular tachycardia (VT) or syncope in a coronary heart disease patient may be a harbinger of subsequent cardiac death and must not be ignored. [Rudy, 2008]

The resting membrane potential is produced by the different ionic concentrations and conductances across the cell membrane during phase 4 of the action potential. The normal resting membrane potential in ventricular myocardium is about -85 to -95 mV. This potential is determined by the selective permeability of the cell membrane to different types of electrolytes. The cell membrane is most permeable to K+ ions and relatively impermeable to the other ions; the resting membrane potential is therefore dominated by the K+ equilibrium potential, according to the K+ gradient across the cell membrane. The membrane potential can be calculated using the Goldman-Hodgkin-Katz voltage equation. [Grunnet, 2010] The stability of the electrical gradient is maintained by different ion pumps and exchange mechanisms, involving the Na+/K+ exchange pump, the Na+/Ca2+ exchanger current and the inwardly rectifying K+ current (IK1). [Sipido, et al 2007]

Phase 0 is a rapid depolarization phase. The slope of phase 0 represents the maximum rate of depolarization of the cell and is known as dV/dt max. This phase is caused by fast opening of Na+ channels, leading to rapid Na+ influx (INa) into the cell. The ability of the cardiac cells to open fast Na+ channels during phase 0 is related to the membrane potential at the moment of excitation.
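To make the Goldman-Hodgkin-Katz voltage equation mentioned above concrete, the minimal sketch below computes a resting potential from assumed, textbook-style permeability ratios and ion concentrations; none of these numbers come from this study.

```python
import math

R, T, F = 8.314, 310.0, 96485.0  # gas constant, body temperature (K), Faraday

def ghk_voltage(PK, PNa, PCl, Ko, Ki, Nao, Nai, Clo, Cli):
    """GHK membrane potential in mV. Cl- is an anion, so its inside
    concentration enters the numerator and its outside the denominator."""
    num = PK * Ko + PNa * Nao + PCl * Cli
    den = PK * Ki + PNa * Nai + PCl * Clo
    return 1000.0 * (R * T / F) * math.log(num / den)

# K+-dominated resting membrane: PK much larger than PNa and PCl
vm = ghk_voltage(PK=1.0, PNa=0.01, PCl=0.02,
                 Ko=4.0, Ki=140.0, Nao=145.0, Nai=10.0, Clo=110.0, Cli=30.0)
print(f"Resting Vm ~ {vm:.1f} mV")  # roughly -84 mV, near the quoted range
```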
If the membrane potential is at its baseline (about -85 mV), all the fast Na+ channels are closed, and excitation will open them all, causing a large influx of Na+ ions. If the membrane potential is less negative, some of the fast Na+ channels will be inactivated and insensitive to opening, leading to a lesser response to excitation of the cell membrane and a lower Vmax; thus, when the resting membrane potential becomes too positive, excitation and conduction are delayed, increasing the risk of dysrhythmias. [Santana, et al 2010a]

The fast Na+ channels are controlled by a number of gates; each gate can attain a value between 1 (fully open) and 0 (fully closed). The product of all gates denotes the percentage of channels available to conduct Na+. According to the Hodgkin and Huxley model, the Na+ channel contains 3 gates: m, h and j. In the resting state, the m gate is closed (zero) and the h and j gates are open (one). Upon electrical stimulation of cardiac myocytes, m gates open quickly while, simultaneously, h and j gates close more slowly. For a brief period of time, all gates are open (non-zero) and Na+ enters the cell following its electrochemical gradient; thus, if the resting membrane potential is too positive, the h or j gates may be considerably less than one, such that the product of m, h and j becomes too small upon depolarization. [Grunnet, 2010]

Phase 1 of the action potential is due to inactivation of Na+ channels. The transient net outward current causing a small downward deflection of the action potential is due to movement of K+ and Cl- ions, carried by the Ito1 and Ito2 currents respectively. In particular, Ito1 contributes to the notch of ventricular myocyte action potentials. [Santana, et al 2010b] The plateau phase of the cardiac action potential is maintained by a balance between inward movement of Ca2+ (ICa) through L-type calcium channels and outward movement of K+ ions through the slow delayed rectifier K+ channels (IKs). [Grunnet, 2010] During phase 3 (rapid repolarization), the L-type calcium channels are closed while the slow delayed rectifier K+ channels are still open. This ensures a net outward current, corresponding to a negative change in membrane potential, thus allowing more types of K+ channels to open. These are primarily the rapid delayed rectifier K+ channels (IKr) and the inwardly rectifying K+ current (IK1). This net outward, positive current (equal to a loss of positive charge from the cell) causes the cell to repolarize. The delayed rectifier K+ channels close when the membrane potential is restored to about -80 to -85 mV, while IK1 remains conducting throughout phase 4, contributing to setting the resting membrane potential. [Kubo, et al 2005]

Pathophysiology of dysrhythmia

The pathogenesis of dysrhythmia involves three main mechanisms: enhanced or suppressed automaticity, triggered activity, or re-entry. Automaticity is a natural property of cardiac myocytes; suppression of the automaticity of the sinoatrial node (SAN) can lead to sinus node dysfunction and sick sinus syndrome (SSS), which is the most common indication for permanent pacemaker implantation. In contrast, enhanced automaticity can result in multiple dysrhythmias, both atrial and ventricular. Triggered activity occurs when early afterdepolarizations and delayed afterdepolarizations initiate spontaneous multiple depolarizations, precipitating ventricular dysrhythmias such as torsades de pointes and digitalis-induced ventricular dysrhythmia.
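Returning to the gating description above, the fraction of Na+ channels available to conduct is the product of the gate variables; the sketch below uses the common m^3*h*j form of the fast sodium current (as in Hodgkin-Huxley-style cardiac models), with illustrative gate values and an assumed maximal conductance.

```python
g_na_max = 23.0  # maximal Na+ conductance (mS/cm^2), assumed value
e_na = 55.0      # Na+ reversal potential (mV), typical value

def i_na(v, m, h, j):
    """Fast sodium current: I_Na = g_Na * m^3 * h * j * (V - E_Na)."""
    return g_na_max * (m ** 3) * h * j * (v - e_na)

# Fully recovered h and j gates: strong inward (negative) current on excitation
print(i_na(v=-40.0, m=0.9, h=1.0, j=1.0))
# Partially depolarized rest: h and j partly inactivated, much weaker response,
# i.e. the slowed excitation/conduction scenario that favors dysrhythmia
print(i_na(v=-40.0, m=0.9, h=0.3, j=0.4))
```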
Probably the most common mechanism of arrhythmogenesis results from re-entry, which involves bidirectional conduction and unidirectional block. "Micro"-level re-entry results in ventricular tachycardia from conduction around the scar of a myocardial infarction, and "macro"-level re-entry results from conduction through the concealed accessory pathway of Wolff-Parkinson-White (WPW) syndrome. [Nakagawa, et al 2001] Atrial fibrillation is the most common type of supraventricular dysrhythmia (SVD), associated with significant morbidity and mortality and affecting quality of life. [Camm, et al 2012] Amiodarone is the most effective medication for rhythm control, but it is often discontinued due to numerous systemic side effects such as thyroid and lung dysfunction. [Lafuente, et al 2006] [Camm, 2012] Therefore, there is a clear need for safer and more effective pharmacological strategies for rhythm control. [Dobrev & Nattel, 2010] In light of these unmet needs, the following research has focused on novel pharmacological targets aiming to treat the most common type of SVD (atrial fibrillation) with higher efficacy and less risk. The introduction of agents with selective affinity for ion channels specifically or predominantly involved in the atrium has been considered an attractive prospect for AF therapy. Indeed, this research is currently focused on the development of agents targeting the modification of those pathways and molecular mediators which are involved in the propagation and maintenance of supraventricular dysrhythmias.

Vernakalant

Vernakalant has highly selective atrial ion-channel blocking properties and has recently been introduced for the management of acute atrial fibrillation (AF). [Dobrev & Nattel, 2010] Intravenous vernakalant has been approved for the conversion of recent-onset AF in Europe and other parts of the world, but not in the USA. Vernakalant inhibits atrial-selective K+ currents, including the ultra-rapidly activating delayed rectifier K+ current (IKur) and the acetylcholine-activated inward rectifier K+ current (IK,ACh), and causes rate-dependent atrial-preferential Na+ channel block, with only a small inhibitory effect on the rapidly activating delayed rectifier K+ current (IKr) in the ventricle. [Fedida, et al 2005] Due to its atrial-selective properties, vernakalant prolongs the effective refractory period (ERP) of the atria with moderate effects on the ventricles, [Dorian, et al 2007] which explains the low pro-dysrhythmic risk for torsades de pointes (TdP) dysrhythmias. [Dobrev & Nattel, 2010] Vernakalant is an antidysrhythmic agent that acts predominantly on supraventricular electrophysiology. A human electrophysiological study demonstrated that vernakalant infusion dose-dependently prolongs the atrial ERP. [Dorian, et al 2007] Atrial selectivity, thereby avoiding ventricular pro-dysrhythmia, can be achieved by aiming at atrial-selective channels, such as IKur and IK,ACh, by atrial-preferential inhibition of excitability through exploiting state-selective Na+ channel blocking properties, or by high selectivity for rapid rhythms like AF. [Dobrev & Nattel, 2010] Vernakalant blocks several K+ channels. It inhibits IKur in the open state, with preserved efficacy at high stimulation frequencies. [Fedida, et al 2005] The atrial-selective IK,ACh current is potently blocked by vernakalant.
[Fedida, 2007] [Wettwer, et al 2013] Vernakalant also targets Kv4.3 and human ERG (hERG) channels, which correspond to the transient outward current (Ito) and IKr, respectively, although the contribution of Ito to repolarization is lower in ventricles than in atria. [Fedida, et al 2005] In contrast, IKr is an important repolarizing current in ventricular cells. Its blockade causes QT interval and action potential duration prolongation, predisposing to TdP arrhythmias through the development of dysrhythmia-triggering early afterdepolarizations and/or an increased dispersion of repolarization. [Dobrev & Nattel, 2010] However, the potency of vernakalant in blocking hERG channels is up to 100-fold lower than that of class IC antiarrhythmic drugs (flecainide or propafenone). [Fedida, et al 2005] Late Na+ current (INa,late) inhibition by vernakalant is protective against the proarrhythmia resulting from IKr blockade. [Orth, et al 2006] Vernakalant causes an open-channel block of the Na+ channel Nav1.5 α-subunits that underlie the atrial INa. [16][18] At physiological heart rates, the block of Nav1.5 channels by vernakalant is weak because of its rapid unbinding kinetics from the channel, [Fedida, et al 2005] [Fedida, 2007] which is consistent with the small increase in QRS interval (a marker of ventricular conduction velocity) observed in clinical trials. [Roy, et al 2008] [Pratt, et al 2010] [Carmeliet & Mubagwa, 1998] In addition, the effects of vernakalant on Na+ channels are voltage and rate dependent, resulting in an enhanced inhibitory potency at depolarized potentials and rapid rates, as in fibrillating atria. [Fedida, et al 2005] The resting membrane potential of normal atrial myocytes is 10 mV more depolarized than that of normal ventricular myocytes. When atrial myocytes fail to repolarize fully, as can happen during AF, the atrioventricular difference in resting membrane potential is further accentuated and a large fraction of atrial Na+ channels is inactivated. This reduces the Na+ channel reserve predominantly in the atria and allows vernakalant to preferentially inhibit atrial Na+ channels. [Fedida, et al 2005] Although such voltage and rate dependency is also typical of flecainide and propafenone, they do not show atrial selectivity. [Fedida, et al 2005]

Amiodarone

Amiodarone is a broad-spectrum anti-dysrhythmic drug effective against numerous types of irregular heartbeats, including ventricular tachycardia, ventricular fibrillation, atrial fibrillation and paroxysmal supraventricular tachycardia. [Porid, 1995] The antiarrhythmic effect of amiodarone is due to non-competitive alpha- and beta-adrenergic inhibition (class II activity); in addition, amiodarone is a very effective blocker of sodium channels (class I activity); moreover, it has a weak calcium channel blocking effect (class IV activity). [Du, et al 1995] Amiodarone increases the cardiac refractory period without influencing the resting membrane potential, except in automatic cells, where the slope of the pre-potential is reduced, generally reducing automaticity. [Varro & Robloczky, 1986] Amiodarone relaxes vascular smooth muscle, reduces peripheral vascular resistance (afterload) and slightly increases the cardiac index. [Singh, 1970] After oral dosing, however, amiodarone produces no significant changes in left ventricular ejection fraction (LVEF), even in patients with depressed LVEF. [Twidale, et al 1993] After acute intravenous dosing in man, amiodarone may have a mild negative inotropic effect.
[Gangol, et al 1985] Amiodarone does not alter vagal reflexes or the responsiveness of cardiac cholinergic receptors, but it causes some non-competitive alpha- and beta-adrenergic blockade. [Biggera & Hoffman, 1992] Amiodarone also selectively inhibits the effect of T3 on the myocardium, which may contribute to prolongation of the action potential duration and refractoriness. [Melmed, et al 1981] The pharmacokinetics of numerous drugs, including many that are commonly administered to individuals with heart disease, are affected by amiodarone. In particular, doses of digoxin should be halved in individuals taking amiodarone, since amiodarone decreases renal and non-renal clearance of the digitalis glycosides and increases their bioavailability. These effects appear related to the dose of amiodarone, with higher doses being associated with the greatest increase in digoxin concentration. [Achilli & Serra, 1981] Amiodarone potentiates the action of warfarin. Individuals taking both of these medications should have their warfarin dose halved and their anticoagulation status, measured as prothrombin time and international normalized ratio, checked more frequently. Amiodarone decreased the total body clearance of warfarin in normal subjects but did not change volumes of distribution; amiodarone is a general inhibitor of the cytochrome P450-catalyzed oxidation of warfarin. [Larry, et al 1991] The FDA revised the labels of amiodarone and simvastatin in 2002 to warn of an increased risk of rhabdomyolysis, the most severe form of myopathy, when the two drugs are taken concomitantly in doses greater than 20 mg per day of simvastatin. [Karimi, et al 2010] There are many other drugs that should not be taken with amiodarone: cimetidine, clopidogrel, cyclosporine, dextromethorphan, diclofenac, loratadine, beta-blockers and Ca2+ channel blockers (risk of potentiation). [Singh, et al 1989] Amiodarone has numerous side effects; most individuals administered amiodarone on a chronic basis will experience at least one [Vanerven & Schalij, 2010]: decreased heart rate and increased incidence of heart block; interstitial lung disease (some individuals have developed pulmonary fibrosis after a week of treatment); thyroid dysfunction, since amiodarone is structurally similar to thyroxine, so both under- and over-activity of the thyroid may occur on amiodarone treatment [Batcher, et al 1989]; corneal micro-deposits (corneal verticillata); and abnormal liver enzyme results, which are common in patients on amiodarone. [Flaharty, et al 1989] Given the numerous drug interactions and adverse effects caused by amiodarone, this research investigates the effect of a novel antidysrhythmic drug, vernakalant, on reperfusion dysrhythmia in rats in comparison with amiodarone, a standard broad-spectrum antidysrhythmic drug.

Material and method

The animals used in the experiments were 40 adult male albino rats weighing 170-200 g. The animals were handled according to the guidelines of the local ethical committee, which comply with the international laws for the use and care of laboratory animals. The animals were divided into four groups, each containing 10 rats.
- (Control Group I) normal (did not receive any medications)
- (Control Group II) diseased group (reperfusion dysrhythmia): adult male rats were anaesthetized by intramuscular injection of a 25% solution of urethane in a dose of 0.7 ml/100 g body weight. The trachea was exposed and a tracheotomy was done, through which a Y-shaped glass tube was cannulated.
The animal was artificially ventilated at a respiratory rate of 40/minute and a tidal volume of 6 ml/kg [Harkness & Wagner, 1989] throughout the experiment to avoid any respiratory disturbances. The left jugular vein was cannulated with a pediatric cannula (size 24G). The chest was opened by midline thoracotomy at the xiphisternal junction. After opening the pericardium, the heart was exteriorized by gentle pressure on the chest wall, then a snare was placed with a Prolene 5-0 thread around the left anterior descending branch of the left main coronary artery, and the two ends of the thread were put into a plastic tube closed by a clamp for 15 min, with subsequent reperfusion for 30 min. [Abraham, et al 1989]
- (Group III) The standard lead II was adjusted by the PowerLab and the heart rate was recorded by the PowerLab device (Model no. 866, MLA1215 Animal Bio AMP lead wires, set of three 2 mm pins to micro hook lead wires). Animals were anesthetized and prepared as in the previous groups. After recording of a normal ECG, reperfusion dysrhythmia was induced in the same manner as above, with ECG recording every 5 minutes from T0 to T23. For each animal, the heart rate, the time of appearance of cardiac dysrhythmias and ECG disturbances were recorded.

Statistical methods

Data were statistically described in terms of range, mean ± standard deviation (±SD), frequencies (number of cases) and percentages when appropriate. Comparison of quantitative variables between the study groups was done using the Kruskal-Wallis analysis of variance (ANOVA) test. For comparing categorical data, the Chi-square (χ²) test was performed; the exact test was used instead when the expected frequency was less than 5. A probability value (p value) less than 0.05 was considered statistically significant. All statistical calculations were done using Microsoft Excel 2003 (Microsoft Corporation, NY, USA) and SPSS (Statistical Package for the Social Sciences; SPSS Inc., Chicago, IL, USA) version 15 for Microsoft Windows.

Control group I

Heart rate: The normal heart rate ranged between 226 and 312 beats/minute, with a mean ± SD of 286.20 ± 26.377, from T0 to T23.
S-T segment: There were no significant changes in the S-T segment, with normal ECG (Figure 2).

Control group II

Heart rate: After induction of dysrhythmia, there were no statistically significant changes in heart rate up to T4 (closure of the coronary arteries). From T5 (reperfusion), there was a statistically significant reduction in heart rate down to T23 (177.5 ± 14.849), a reduction of 38%. (Figure 3)
S-T segment: There were no significant changes in the S-T segment before induction of dysrhythmia (T0-T4). Reperfusion produced an equal percentage of S-T segment depression (Figure 4) and elevation (Figure 5), about 10% each, at T5; there was then a statistically significant increase in the percentage of S-T segment depression, reaching 60% at T9, which decreased gradually from T14 (50%) to reach 0% at T23. The percentage of elevated S-T segment increased gradually to reach 66% at T22.
Dysrhythmia: Coronary reperfusion resulted in induction of dysrhythmia starting at T5 in 2 animals (20%), in the form of SVEs and SVT in one animal and VEs in the second animal. The incidence of dysrhythmias increased gradually to affect all animals (100%) by T9. Death of the animals started to occur after T12 in one animal (10%) and increased gradually to reach 40% after T20.
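To make the analysis plan concrete, the sketch below runs the two tests named in the statistical-methods paragraph, with scipy standing in for SPSS; the heart-rate samples are hypothetical, while the dysrhythmia counts follow the group outcomes reported in this study (10/10 in group II, 1/10 in group III, 0/10 in group IV).

```python
from scipy import stats

# Hypothetical heart rates (beats/min) at one time point after reperfusion
group_ii = [180, 175, 190, 168, 172, 183, 177, 171, 186, 174]   # untreated
group_iii = [235, 240, 228, 245, 238, 232, 241, 229, 236, 243]  # vernakalant
group_iv = [238, 242, 230, 247, 236, 239, 233, 244, 231, 240]   # amiodarone

# Kruskal-Wallis ANOVA across the three groups
h, p = stats.kruskal(group_ii, group_iii, group_iv)
print(f"Kruskal-Wallis H = {h:.2f}, p = {p:.4f}")

# Chi-square on dysrhythmia counts: [with dysrhythmia, without]
table = [[10, 0],  # group II
         [1, 9],   # group III
         [0, 10]]  # group IV
chi2, p, dof, _ = stats.chi2_contingency(table)
print(f"Chi-square = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```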
As regards the types of reperfusion dysrhythmia in the non-treated rats (control group II), the following four types of cardiac dysrhythmias developed: ventricular tachycardia (VT) (Figure 6), multiple ventricular extrasystoles (VEs) (Figure 7), multiple supraventricular extrasystoles (SVEs) (Figure 8) and supraventricular tachycardia (SVT) (Figure 9).

[Table: types of dysrhythmia (SVEs, SVT, VEs, VT) recorded in the surviving animals at each time point from T13 to T23]

Groups III and IV

Heart rate: There were no statistically significant differences in the resting heart rate (T0) between control group II and groups III and IV. After induction of dysrhythmia, in group III vernakalant reduced the bradycardia gradually from 38% to 18.5% (Figure 10), while in group IV amiodarone decreased the bradycardia from 38% to 17.5% (Figure 11).
S-T segment: There were no significant changes in the S-T segment before induction of dysrhythmia (T0-T4) in control group II or in groups III and IV. There was a slight increase in the percentage of elevated S-T segment from T5 to T10 on comparison between control group II and group III; this percentage then returned to within the control group's range, with no statistically significant changes up to T23. In group IV, the percentage of elevated S-T segment markedly decreased to reach 0% during the whole experiment, and the percentage of normal S-T segment stayed at a high level, ranging from 100% at T5 to 80% at T23, with statistically significant changes from T7 to T21 (Table 2).
Dysrhythmias: There were regularly paced complexes of normal shape, without any statistically significant differences between groups II, III and IV, up to T4. In groups III and IV, the heart rate stayed regular up to T23. There were statistically significant changes in the regularity of the heart beats between control group II and both groups III and IV from T7 until T23. VT appeared in only one animal at T22 in group III, while in group IV no type of dysrhythmia occurred throughout the experiment. (Figure 12)

Discussion

Cardiac dysrhythmias occur when the electrical signals governing the heartbeat are not working properly. For instance, some people experience irregular heartbeats, which may feel like a racing heart or fluttering. Many types of cardiac dysrhythmias are harmless; however, abnormalities arising from a weak or damaged heart can cause serious and even potentially fatal symptoms. Dysrhythmias are life-threatening medical emergencies that may cause cardiac arrest and sudden death. Up to 65% of patients had sudden cardiac death as the first manifestation of cardiac dysrhythmia. In the United States, more than 850,000 people are hospitalized for a dysrhythmia each year. [John, et al 2010] Supraventricular dysrhythmia is a complicated type of dysrhythmia that is hard to treat with conventional antidysrhythmic medications. Novel pharmacological approaches are increasingly concentrated on the development of agents with selective affinity for ion channels predominantly involved in the atrium. In parallel, research efforts have been focused on the development of agents targeting the modification of those pathways which are associated with the propagation and maintenance of atrial fibrillation (AF).
[Ferrari, et al 2015] Novel ion-channel inhibiting agents developed to treat AF are broadly separated into two categories: "atrial-selective" compounds and "multi-channel blockers". Vernakalant is a predominantly "atrial-selective" blocker, while amiodarone is considered a "multi-channel blocker". Vernakalant is an antiarrhythmic atrial-selective compound acting by blockade of IKur, which is exclusively expressed in the atria. Furthermore, it is a multichannel blocker that affects the sodium channel and IK,ACh, both expressed predominantly in the atria. [Burashnihov, et al 2010] This study aimed to investigate the possible antidysrhythmic effect of vernakalant on coronary reperfusion cardiac dysrhythmias, in comparison with amiodarone. In this work, adult male albino rats were used; their heart structure is relatively close to that of humans, and their size made it easy to induce ischemia-induced dysrhythmias. Amiodarone was chosen as a comparator because it is a standard antidysrhythmic drug with broad-spectrum properties against different types of cardiac dysrhythmia, acting through different mechanisms. In this work, adult male albino rats weighing 170-200 g were kept in a normal environment without any procedures or medications, as the standard (control) group I, with normal heart rate and normal ECG recording. In group II (diseased group), it was noticed that there were no changes in heart rate during closure of the coronary arteries (up to T4); after reperfusion (T5) there was a statistically significant reduction in the heart rate with irregular cardiac rhythm. It was explained by [Jurkovicova and Cagan, 1998] that abnormal cardiac rhythm originates as a consequence of the complex of cellular and humoral reactions accompanying the opening of the coronary artery, leading to the release of chemical substances such as calcium, thrombin, platelet activating factor, inositol triphosphate and angiotensin II, which operate as modulators of cellular electrophysiology, causing complex changes at the level of ion channels. In the vernakalant-treated group III, the resting heart rate did not differ significantly from that of the control group I; vernakalant reduced the coronary-reperfusion-induced bradycardia from 38% to 18.5%. It was stated by [Bechard et al, 2011] that vernakalant is an antiarrhythmic atrial-selective compound acting by blockade of IKur, which is exclusively expressed in the atria. Furthermore, it is a multichannel blocker that affects the sodium channel and IK,ACh, both expressed predominantly in the atria. By inhibiting potassium currents, vernakalant causes prolongation of atrial refractoriness, which contributes to the efficacy of the drug. Besides, it exerts frequency- and voltage-dependent sodium channel block, including the INaL, causing a significant effect on intra-atrial conduction, particularly at fast rates. In the amiodarone-pretreated group IV, the resting heart rate did not differ significantly from those of the control group I and the vernakalant-treated group III. The effects of amiodarone on the resting heart have been evaluated in different studies: [Mason, 1987] stated that amiodarone decreases the sinus rate by about 15 to 20% and attributed this effect to its ability to inhibit the intracellular conversion of thyroxine (T4) to T3. [Djandjighian et al, 2000] also observed that amiodarone significantly and dose-dependently lowered the resting heart rate in animals and reduced exercise-induced tachycardia, which is probably due to its calcium channel and β-adrenoceptor blocking effects.
These changes should not require discontinuation of amiodarone, as they are evidence of its pharmacological action. [Arrendono, et al 1986] In group IV, the mean heart rate decreased from T4 up to T23 with a percentage reduction of 17.5%, compared with 38% in control group II. The difference in heart rate reduction between the two groups was statistically significant. The antagonistic effect of amiodarone on bradycardia induced by reperfusion may be explained by its vasodilator effect, [Zipese, et al 1984] and it was explained by [Patel et al, 2009] that amiodarone acts as a multichannel blocker by inhibiting a wide range of ion channels including IKs, IKr, IKur, IK,ACh, ICaL and INa. The antagonistic effects of both vernakalant and amiodarone on reperfusion-induced bradycardia were comparable, with no statistically significant difference. Coronary reperfusion (group II) resulted in changes in the ST segment and T-wave inversion. There was an elevation of the S-T segment in one animal (10%) and a depression in another animal (10%) after T5; the percentages of animals showing elevation and those showing depression then increased significantly to reach 40% each after T10. From T12 up to T22 there were also comparable ST segment elevations and depressions, with more tendency to elevation. This is contrary to what was stated by [Heper et al, 2008], namely that successful reperfusion causes normalization, or more than 50% regression, of S-T segment elevation, T-wave inversion or other electrocardiographic disturbances; the S-T segment return is explained by rapid normalization of myocardial cell membrane potentials in the ischemic area, as myocardial cells are capable of normalizing their membrane potential immediately once oxygen becomes available. Treatment of the animals in group III with vernakalant insignificantly increased the percentage of animals showing elevated ST-segment compared with control group II. Vernakalant significantly decreased the percentage of animals showing ST-segment depression. In accordance with the results observed in this work, vernakalant could produce a dose-dependent reduction in exercise-induced ST-segment depression in experimental animals, suggesting that vernakalant's beneficial mechanism of action is due to an improvement in regional coronary blood flow in areas of myocardial ischemia, mainly for non-transmural, subendocardial ischemia. It was stated by [Roy et al, 2004] that an in vivo human electrophysiology study and the CRAFT trial did not find a significant change in QRS or heart rate-corrected QT interval (QTc) with the infusion of vernakalant. In contrast, pivotal trials (ACT I, II, and III) showed that vernakalant increases the QRS and QTc intervals between 5 minutes and 2 hours after the start of infusion. [Pratt, et al 2010] Treatment of the experimental animals in group IV with amiodarone abolished S-T segment elevation completely, with a minor percentage of depressed S-T segment remaining in only 10%-20%. Similar results were observed in experimental animals by [Lindenmeyer et al, 1984]. Based on the aforementioned results, it could be concluded that amiodarone is slightly more effective than vernakalant in correcting both elevated S-T segment (transmural ischemia) and depressed S-T segment (non-transmural ischemia). There were no significant changes in the regularity of the heart rate in control group II up to T4.
The dysrhythmias began at T5 in 20% of the animals and increased gradually, to 30% after T6, 90% after T8 and 100% at T9. These results are compatible with what was stated by [Murdock et al, 1980]: the incidence of reperfusion-induced ventricular fibrillation increased when occlusion periods were lengthened from 5 minutes to 20 or 30 minutes, and decreased when reperfusion was delayed beyond 30 to 60 minutes. Also, reperfusion-induced fibrillation tended to occur more often when severe arrhythmias developed during occlusion. It was also stated by [Casio et al, 2001] that extracellular potassium (K+) has been shown to fluctuate with coronary occlusion and reperfusion, and that this is also related to the alterations in conduction that cause arrhythmias. In the vernakalant-treated group III, the regularity of the heart beats was maintained up to T23. Cardiac dysrhythmia did not develop in 90% of the rats; only 10% developed ventricular tachycardia with high doses, at T22 and T23, for 10 minutes. The anti-dysrhythmic action of vernakalant could be attributed to its atrial-selective ion-channel blocking properties, which have recently been introduced for the acute management of cardiac dysrhythmias. [Dobrev & Nattel, 2010] Vernakalant inhibits atrial-selective K+ currents, including the ultra-rapidly activating delayed rectifier K+ current (IKur) and the acetylcholine-activated inward rectifier K+ current (IK,ACh), and causes rate-dependent atrial-preferential Na+ channel block, with only a small inhibitory effect on the rapidly activating delayed rectifier K+ current (IKr) in the ventricle. [Fedida, et al 2005] Treatment of animals with amiodarone (group IV) resulted in prevention of the development of all types of dysrhythmias (100%) up to T23. The antidysrhythmic effect of amiodarone could be attributed to its non-competitive alpha- and beta-adrenergic inhibition (class II activity); in addition, amiodarone blocks sodium channels (class I activity); moreover, it has a weak calcium channel blocking effect (class IV activity). [Gill, et al 1992]

Conclusion

Vernakalant showed a powerful antidysrhythmic action against ischemic-reperfusion cardiac dysrhythmias in experimental animals, comparable to that exerted by amiodarone, with fewer recorded adverse drug effects, suggesting beneficial properties of vernakalant against different types of cardiac dysrhythmias.
Deep learning empowered breast cancer diagnosis: Advancements in detection and classification

Recent advancements in AI, driven by big data technologies, have reshaped various industries, with a strong focus on data-driven approaches. This has resulted in remarkable progress in fields like computer vision, e-commerce, cybersecurity, and healthcare, primarily fueled by the integration of machine learning and deep learning models. Notably, the intersection of oncology and computer science has given rise to Computer-Aided Diagnosis (CAD) systems, offering vital tools to aid medical professionals in tumor detection, classification, recurrence tracking, and prognosis prediction. Breast cancer, a significant global health concern, is particularly prevalent in Asia due to diverse factors like lifestyle, genetics, environmental exposures, and healthcare accessibility. Early detection through mammography screening is critical, but the accuracy of mammograms can vary due to factors like breast composition and tumor characteristics, leading to potential misdiagnoses. To address this, an innovative CAD system leveraging deep learning and computer vision techniques was introduced. This system enhances breast cancer diagnosis by independently identifying and categorizing breast lesions, segmenting mass lesions, and classifying them based on pathology. Thorough validation using the Curated Breast Imaging Subset of Digital Database for Screening Mammography (CBIS-DDSM) demonstrated the CAD system's exceptional performance, with a 99% overall success rate in detecting and classifying breast masses. While the detection accuracy was 98.5%, the segmentation of breast masses into separate groups for examination achieved approximately 95.39%. Upon completing the analysis, the system's classification phase yielded an overall accuracy of 99.16%. The potential for this integrated framework to outperform current deep learning techniques is proposed, despite potential challenges related to the high number of trainable parameters. Ultimately, this recommended framework offers valuable support to researchers and physicians in breast cancer diagnosis by harnessing cutting-edge AI and image processing technologies, extending recent advances in deep learning to the medical domain.
Introduction

Breast cancer is a significant health concern, especially in Asia, where diverse factors contribute to its prevalence. It imposes emotional, physical, and financial burdens on individuals and communities, necessitating global collaboration for early detection, improved healthcare, and tailored treatments. In 2023, an estimated 300,590 new cases and 43,170 deaths from breast cancer are projected in the United States [1]. Early detection and de-stigmatization are crucial for reducing mortality and promoting mental well-being [2]. The age and presentation differences between Asian and Western women with breast cancer raise questions about disease characteristics, emphasizing the need for research and awareness. The complex nature of breast cancer, with diverse subtypes and treatment responses, emphasizes the need for personalized treatment strategies. Public education on self-exams, mammograms, and a healthy lifestyle is crucial for prevention and early detection [3]. The evolution of medical imaging, from X-rays to modern modalities like mammography, ultrasonography, CT scans, MRI, and digital radiography, has significantly influenced cancer diagnosis and research [4]. These technologies have enabled the acquisition of crucial medical images, which are then analyzed by radiologists, playing a pivotal role in the diagnostic process. Mammography remains the preferred and reliable method for breast cancer screening, especially in the early stages. Modern mammography equipment utilizes digital technology, reducing radiation exposure and ensuring safety [5]. Its effectiveness in early detection emphasizes the importance of promoting breast cancer awareness and regular mammogram screenings for women, particularly those at higher risk. Mammography requires precise positioning of the nipple in alignment with the lower edge of the pectoralis major muscle. Two key views, MLO (mediolateral oblique) and CC (craniocaudal), are utilized to capture comprehensive breast tissue images. The CC view focuses on inner breast tissue without the axillary tail and centers on the pectoralis major muscle, ensuring accurate breast examination [6]. Radiologists are skilled in identifying potential cancer risks by analyzing mammograms for abnormal areas with increased brightness, location, breast size, and fatty tissue density [7]. They emphasize concern when dense, white tumor masses are evident, as malignant tumors can change in shape. Benign tumors pose minimal risk, but vigilance is necessary for anomalies like calcifications, asymmetries, and structural deformations, which are often caused by artifacts. Various imaging techniques, including digital mammographic screening and full-field digital mammography (FFDM), are represented in the CBIS-DDSM dataset used in this work. Mammography can identify one or more lesions inside the breast that vary in size and location. Radiologists routinely compare screening images over time to detect changes or confirm breast cancer-related symptoms. Breast mass lesions, which can be benign or malignant, are often identified through various methods, including mammography, biopsy, or MRI. Figs 1 and 2 showcase benign breast abnormalities, including calcifications and architectural distortion. Calcifications, visible as white patches and dots in mammograms, can sometimes be associated with ductal carcinoma in situ, though they are typically benign. Macro-calcifications appear as clear specks, while micro-calcifications, despite their small size, warrant closer attention due to their significance.
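As a concrete illustration of working with the CBIS-DDSM cases discussed above, the sketch below groups the mass-case metadata by view and pathology; the CSV file name and column labels follow the dataset's published description files but are assumptions here and should be verified against your local copy.

```python
import pandas as pd

# Load the mass-case metadata shipped with CBIS-DDSM (file name assumed)
df = pd.read_csv("mass_case_description_train_set.csv")

# Keep the fields relevant to the discussion above (column names assumed)
df = df[["patient_id", "image view", "abnormality type", "pathology"]]

# Split by the two standard screening views
cc = df[df["image view"] == "CC"]
mlo = df[df["image view"] == "MLO"]
print(len(cc), "CC rows;", len(mlo), "MLO rows")

# Benign vs malignant case counts
print(df["pathology"].value_counts())
```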
Architectural distortion in the breast, characterized by deformations without a visible tumor, is a common benign condition but can be a precursor to breast cancer. Detecting architectural distortion can be challenging, especially in 2D mammography, due to its shifting appearance, size, and position. Mammograms are valuable for categorizing breast abnormalities by isolating tumors from the background, offering cost-effective diagnoses. While these procedures often rely on manual interpretation by radiologists, computerized mammography analysis has the potential to enhance accuracy and effectiveness, aiding in the distinction between benign and malignant tumors and ultimately improving diagnosis and treatment decisions. The study highlights the value of routine mammography exams in lowering mortality rates by identifying breast cancers early on, before they have a chance to spread to other body parts or healthy tissues. As a result, radiology specialists review mammography every day to spot problematic lesions and evaluate any questionable breast tissue based on its location, traits, and shape [8]. This process continues to be costly and error-prone despite its importance and the increasing number of mammograms checked daily, underscoring the need for increased accuracy and dependability [9]. It is the job of radiologists to recognize worrisome lesions on breast mammograms during screening and to differentiate between types of lesions, such as masses, calcifications, and other typical abnormalities. Physicians must then decide how to treat the tumor and determine its pathology diagnosis, i.e. whether it is benign or malignant. As a result, computer-aided diagnostic (CAD) systems can offer a second opinion, assisting professionals in assessing the likelihood of breast cancer [10]. Recent developments in AI for computer vision have produced algorithms that have proven incredibly helpful to medical professionals. In particular, these systems have demonstrated their capacity to precisely detect, outline, and classify malignant lesions in a variety of medical imaging tasks, including mammography [11]. Conventional methods relied on straightforward image processing and machine learning techniques to extract hand-crafted and fundamental attributes with the aim of locating and identifying probable lesion locations [12][13][14]. Cutting-edge deep-learning algorithms are emerging as substitutes for traditional tumor segmentation methods, whose accuracy deteriorates and whose false positive rate is high. These new algorithms offer more sophisticated capabilities and address the limitations of conventional approaches by incorporating background tissue information and automating feature extraction for tumor delineation and classification in computer-aided diagnosis systems [15]. Advanced machine learning methods, notably Convolutional Neural Networks (CNNs), are garnering attention in automated CAD systems and medical imaging for their proficiency in feature extraction and recognition, particularly in detecting subtle patterns associated with conditions like breast cancer. As computer processing power has grown, deep learning models have gained prominence for their ability to automatically extract comprehensive features from medical images, eliminating the reliance on prior knowledge or human feature engineering [16].
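As one example of the kind of computerized preprocessing such systems apply before feature extraction, the sketch below enhances a mammogram with contrast-limited adaptive histogram equalization (CLAHE) using OpenCV; the file path is hypothetical, and CLAHE here stands in for whichever enhancement a given CAD pipeline actually uses.

```python
import cv2
import numpy as np

def preprocess_mammogram(path, size=(512, 512)):
    """Load a grayscale mammogram, enhance contrast, normalize to [0, 1].

    CLAHE makes subtle masses and micro-calcifications more conspicuous
    without over-amplifying noise the way global equalization can.
    """
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if img is None:
        raise FileNotFoundError(path)
    img = cv2.resize(img, size)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    img = clahe.apply(img)
    return img.astype(np.float32) / 255.0  # CNN-friendly scaling

# x = preprocess_mammogram("cbis_ddsm/mass_case_0001.png")  # hypothetical path
```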
This development has helped to improve automated system results while striking a crucial balance between the ability to recognize many lesions in a single mammogram and the accuracy of detecting these lesions [17,18]. Cutting-edge CAD systems leverage deep learning algorithms to provide real-time assistance to radiologists, enhancing early diagnosis and personalized treatment planning for breast cancer. While automated techniques improve detection accuracy, radiologists' clinical expertise remains essential for prognosis and therapy decisions, underscoring the collaborative role of technology and human judgment in patient care [19,20]. To reduce false positive and false negative instances in breast cancer screening, CAD system performance must be generally improved. These features have led to widespread support for deep learning's use in biomedical settings, particularly in CAD systems created for mammography [21,22]. Finding tiny breast cancers is fundamentally more difficult than finding larger, more advanced tumors. The use of sophisticated algorithms may aid in early detection, improve the chances of effective therapy, and ultimately result in better patient outcomes. Over the past 20 years, deep learning has demonstrated its ability to address complicated challenges in the field of medical imaging by excelling in a range of computer vision tasks. As a result, we focus in particular on mammography tasks such as tumor identification, breast lesion segmentation, and classification. Mammography, introduced in 1913, has proven invaluable in early breast lesion detection, significantly reducing mortality rates through screening. Research emphasizes the role of Computer-Aided Diagnosis (CAD) systems, leveraging computer vision and AI, in automatically detecting anomalies in mammograms, aiding healthcare practitioners in medical imaging analysis [23]. A technique for identifying cellular alterations in breast tissues that may differentiate between diseased and healthy conditions was developed by Tavakoli et al. [24]. Preprocessing procedures, a special block-based convolutional neural network (CNN) architecture, and the inclusion of a decision-making mechanism are all part of this method. After the CNN has been trained, this process creates a binary map that classifies pixels inside the defined region as either having anomalies or being within the normal range. It is noteworthy that this method, when used on the MIAS database, achieved an impressive accuracy rate of 95%. Moon et al. [25] developed a computer-aided detection (CAD) system specifically for the purpose of identifying malignancies. This system uses CNN architectures, multiple representations of the image content, and an image fusion technique. Combining these methods led to diagnostic performance metrics for the ensemble method of 91.10%, 85.14%, 95.77%, and 0.9697, respectively. To improve information propagation, the study used skip connections, with ResNet and DenseNet connections, to solve issues such as gradient vanishing and interlayer transmission loss; a sketch of such a residual block is given below. It is crucial to remember that, in this investigation, B-mode ultrasonography (US) images were used, with tumors and surrounding tissue delineated manually. It is important to note, too, that depending on the operator, tumor shapes and regions of interest (ROIs) can vary.
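The skip connections mentioned in the Moon et al. discussion can be sketched minimally as below (Keras, with hypothetical filter counts); the identity path lets gradients bypass the convolutions, which is what mitigates gradient vanishing and interlayer transmission loss.

```python
import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x, filters):
    """Basic ResNet-style block: conv-BN-ReLU-conv-BN plus a skip path."""
    shortcut = x
    y = layers.Conv2D(filters, 3, padding="same")(x)
    y = layers.BatchNormalization()(y)
    y = layers.ReLU()(y)
    y = layers.Conv2D(filters, 3, padding="same")(y)
    y = layers.BatchNormalization()(y)
    if shortcut.shape[-1] != filters:
        # 1x1 projection so the channel counts match for the addition
        shortcut = layers.Conv2D(filters, 1, padding="same")(shortcut)
    return layers.ReLU()(layers.Add()([y, shortcut]))
```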
[26] developed a method based on deep learning. To improve classification accuracy, they used three CNN architectures: ResNet, VGGNet, and GoogleNet. Their solution, which also employed data augmentation techniques, outperformed rival approaches with an accuracy of 97.525%. The study further investigated combining hand-crafted features with CNN-extracted features to refine the classification. Modern advances in artificial intelligence and image processing have efficiently handled the difficulties of image categorization, particularly for complex histological images of breast cancer. The classification of histological breast cancer images has benefited significantly from the shift from traditional hand-crafted features to features obtained from CNNs trained on image patches. Importantly, CNNs produce objective findings across different datasets because they classify data without relying heavily on domain-specific expertise, and similar networks may likewise produce positive results in this area. Peng et al. [27] published a novel approach for automated mass detection that combined a multiscale feature pyramid network with the Faster R-CNN model; on the CBIS-DDSM and INbreast datasets, the technique achieved true positive rates of 0.94 and 0.96, respectively. Masni et al. [28] developed a computer-aided diagnosis system based on YOLO that achieved an accuracy of 85.52% on the DDSM dataset. Haq et al. [29] proposed a CNN model, specifically the DnCNN model, for deep learning applications in medical image processing, with an emphasis on breast imaging data for breast cancer diagnosis; within a 30-minute processing window, their DnCNN model achieved an accuracy of 79%. Vedalankar et al. [30] used three publicly available databases (CBIS-DDSM, DDSM, and mini-MIAS) to address class imbalance in mammography datasets. Their approach classified architectural distortion in mammograms using AlexNet and support vector machines, and the results demonstrated superiority over conventional methods, with a peak accuracy of 92%, a sensitivity of 81.5%, and a specificity of 90.83%. The study is constrained, however, by its reliance on a rather small set of three databases, which highlights the need for validation on larger and more varied datasets to ensure the robustness and generalizability of the strategy. Alruwailia and Gouda et al.
[31] focused on using deep learning models to improve diagnostic mammography for detecting breast cancer. To discriminate between benign and malignant cases, they applied transfer learning with pre-trained models, notably ResNet50 and Nasnet-Mobile, and used augmentation to increase the number of mammographic images, improving stability, avoiding overfitting, and broadening the dataset. Their deep learning system achieved 89.5% accuracy on the MIAS dataset with ResNet50 and 70% with Nasnet-Mobile. Importantly, with MOD-RES plus oversampling (for ResNet50) and Nasnet-Mobile, their deep learning strategy surpassed professional radiologists across a range of metrics, including overall accuracy, precision, recall, and F1-score. Comparative studies showed the strategy outperformed existing models in medical imaging, especially with small training datasets, highlighting the potential of deep learning to improve the precision and effectiveness of early breast cancer diagnosis from mammography. According to Das et al. [32], early breast cancer identification is crucial for increasing women's survival rates. To help radiologists diagnose breast cancer correctly, their research relies on computer-aided diagnosis (CAD) systems. Using several criteria, they compare deep CNN architectures trained on various datasets against a newly proposed shallow CNN architecture. The work uses shallow CNNs that exploit discriminative features and preprocessed mammography images, and it also investigates transfer learning by fine-tuning well-known CNN models including VGG19, ResNet50, MobileNet-v2, Inception-v3, Xception, and Inception-ResNet-v2. The shallow CNNs reach accuracies of 80.4% and 89.2% on the DDSM and INbreast datasets, respectively, while the pre-trained CNNs are more accurate, at 87.8% and 95.1% on the same datasets. These findings demonstrate the potential of both the shallow CNN architecture and pre-trained CNN models for efficient breast anomaly detection and precise cancer diagnosis. Differences in image dimensions and quality may explain the observed performance gaps between the CBIS-DDSM and INbreast datasets: deep-learning features can overfit, while the INbreast dataset offers higher mammography quality, and fine-tuning improves model performance on the smaller INbreast dataset. Although the research includes cross-dataset evaluations, these produce less promising outcomes than within-dataset testing. These results offer useful directions for future work. Notably, the study does not account for other clinical characteristics, such as medical history or regional variation, which could further improve computer-aided approaches for early cancer diagnosis and individualized care. The study acknowledges the limits of transfer learning, particularly when natural image features fail to capture the subtleties of medical imaging; to overcome this, the authors suggest that transfer learning from datasets in the specific medical domain may yield more accurate breast cancer diagnosis algorithms. The proposed CNN models show superior information extraction from individual images compared with training
each CNN from scratch, as confirmed by thorough evaluation against contemporary methods. Notably, the highest feature extraction efficiency is attained by the Xception model, which incorporates depth-wise separable convolutions for recovering obscured structures and builds on ResNet principles. The comparative analysis demonstrates the superior performance of the updated Xception classifier: with scores ranging from 0.87 to 0.91 on the CBIS-DDSM dataset and 0.91 to 1.00 on the INbreast dataset, it consistently outperforms previous techniques in the identification of breast cancer. Trang et al. [33] developed an advanced framework based on deep learning and machine learning techniques with the main goal of detecting breast cancer by merging clinical data and mammography images. The dataset comprised 731 images from 357 women and was used to train a model that distinguishes benign from malignant tumors. The researchers built support vector machine, random forest, gradient boosting machine, and artificial neural network (ANN) models on the clinical data, and used deep convolutional neural networks (CNNs) such as Xception, VGG16, ResNet-v2, ResNet50, and CNN3 to assess the mammograms. The combined model achieved an area under the curve (AUC) of 0.88, a sensitivity of 89.7%, a specificity of 78.1%, and an overall accuracy of 84.5%. Notably, the combined model outperformed mammography images alone, raising accuracy from 72.5% to 84.5%. This study highlighted the benefit of combining clinical information with mammography images to improve the precision of breast cancer detection, and it concluded that such a combination could strengthen machine learning and deep learning models, opening new paths for future clinical applications. Addressing women's health issues related to breast cancer, the paper notes the limitations of mammogram-based diagnosis and admits that, despite the outstanding segmentation and classification abilities of deep neural network CAD systems, these systems frequently lack explainability and interpretability, which can erode the trust of both patients and medical professionals. To close this gap, the proposed methodology blends CBR with deep learning to produce accurate and understandable classifications, improving both the accuracy and the comprehensibility of breast cancer detection. Hai et al. [34] pioneered a network incorporating multiscale image features to automate the segmentation of breast cancers; on an independent dataset it scored 60.41% Intersection over Union (IoU) and 76.97% Dice. Soulami et al. [35] used a comprehensive UNet model to simultaneously recognize, segment, and categorize breast masses, attaining a Dice score of 90.50% on the INbreast and DDSM datasets. Shams et al.
[36] created an end-to-end model that seamlessly integrated convolutional neural networks (CNNs) with generative adversarial networks (GANs), and they provided a diagram of this integrated strategy. Their main objective was to categorize mammograms as benign or malignant; they achieved an accuracy of 89% on the DDSM dataset and 93.5% on the INbreast dataset.

A deep learning-based computer-aided diagnosis (CAD) method for early breast cancer diagnosis was created by Hekal et al. [37]. Using CNN models with adjustable Otsu thresholding, they improved the extraction of TLR (texture and location relationship) characteristics and increased the effectiveness of the training process. The CAD system used a support vector machine (SVM) classifier to divide mammographic nodule images into four groups: benign calcification, malignant calcification, benign mass, and malignant mass. Evaluated on the ROI CBIS-DDSM dataset, the CAD system classified ROIs into these four classes with noteworthy accuracy, reaching 0.91 with the AlexNet model and 0.84 with the ResNet-50 model.

To enhance classification results on the MIAS dataset, Saber et al. [38] proposed a deep learning architecture focused on identifying and diagnosing breast cancer. The dataset underwent several preprocessing steps, including detection of cancerous areas, noise reduction, and contrast enhancement, and data augmentation was used to enlarge it. Notably, they improved mass-lesion classification with freezing and fine-tuning techniques. In direct comparison with alternative models, the VGG16 model showed remarkably high diagnostic accuracy for breast cancer: using the 80-20 split, it earned 98.96% overall accuracy, 97.83% sensitivity, 99.13% specificity, 97.35% precision, 97.66% F-score, and 0.995 AUC, and in a further configuration it performed admirably with scores of 98.87%, 97.27%, 98.2%, 98.84%, 98.04%, and 0.993.

The literature review above suggests that current CNN-based approaches to breast cancer detection may not be sufficiently accurate or efficient at feature extraction, detection, and classification, and that reaching the required precision demands considerable time and resources. This research therefore places strong emphasis on raising detection accuracy. Despite the complex models used in prior studies, the data used in this study were unevenly distributed across classes. Our goal is to provide a fast and effective breast cancer diagnosis tool.

Proposed methods

This section gives an overview of the architectural models and techniques used in a CAD system for the early detection of breast cancer. The system uses state-of-the-art deep learning and computer vision methods to extract features, find anomalies, and segment and classify tumors. For thorough diagnostic support, it fuses YOLO-based detection, segmentation using Associated-ResUNets, and classification through a modified AlexNet (BreastNet-SVM), as sketched below.
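To make the three-stage flow concrete before the detailed subsections, here is a minimal sketch of how such a pipeline could be wired together. All class, function, and helper names (Finding, run_cad_pipeline, crop_with_margin, and the stage objects) are illustrative placeholders under our reading of the text, not the authors' published code.

# A minimal sketch of the paper's three-stage flow: detection -> segmentation
# -> classification. Names and structures are hypothetical placeholders.
from dataclasses import dataclass
from typing import List, Optional, Tuple
import numpy as np

@dataclass
class Finding:
    box: Tuple[int, int, int, int]        # (x1, y1, x2, y2) from the YOLO stage
    lesion_type: str                      # "mass", "calcification", "distortion"
    mask: Optional[np.ndarray] = None     # per-pixel mask (mass lesions only)
    pathology: str = ""                   # "benign" / "malignant"

def crop_with_margin(img: np.ndarray, box, margin: int = 16) -> np.ndarray:
    # Expand the box, as the paper does for small masses, and clip to the image.
    x1, y1, x2, y2 = box
    h, w = img.shape[:2]
    return img[max(0, y1 - margin):min(h, y2 + margin),
               max(0, x1 - margin):min(w, x2 + margin)]

def run_cad_pipeline(mammogram, detector, segmenter, classifier) -> List[Finding]:
    findings = []
    for box, lesion_type in detector.detect(mammogram):   # stage 1: fused YOLO
        f = Finding(box=box, lesion_type=lesion_type)
        if lesion_type == "mass":                         # stage 2: masses only
            roi = crop_with_margin(mammogram, box)
            f.mask = segmenter.segment(roi)               # Associated-ResUNets
            f.pathology = classifier.predict(roi * f.mask)  # stage 3: BreastNet-SVM
        findings.append(f)
    return findings

Only mass ROIs reach the segmentation stage in this sketch, mirroring the paper's note (later in the text) that calcification lesions lack precise mask annotations.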
A. Detection and identification

The YOLO network departs from the conventional sliding-window approach by predicting both bounding box locations and class probabilities for the entire image with a single CNN, which significantly reduces computational overhead. At the heart of YOLO's architecture lies a fully convolutional neural network (FCNN), illustrated in Fig 4, which divides the image into grid cells and generates bounding boxes, class probabilities, and confidence scores for each cell.

We used YOLO-V7, the seventh iteration of the improved YOLO network, designed specifically to improve object detection at multiple scales. YOLO-V7 uses a multi-scale feature extraction and detection method. As shown in Fig 3 [39], it first uses skip connections to address gradient vanishing in deeper network layers. The detection segment consists of three fully connected layers that handle features extracted at different scales. Inspired by the Faster R-CNN model, the system establishes anchor boxes and fine-tunes them with a K-means method over whole images. The output matrices of multi-scale features are then arranged into grid cells and used together with these anchor boxes. This design simplifies both the selection of boxes whose scores exceed a predetermined threshold and the computation of Intersection over Union (IoU) between ground-truth and anchor boxes. To ensure precise identification when both scores exceed the threshold, the model predicts confidence levels, probability distributions, and four offset values for each anchor box.

As discussed in the preceding section, our algorithm detects probable breast lesions within bounding boxes and assigns confidence scores, consistent with the YOLO-based detection paradigm. The confidence score depends on the model settings, the input data, and YOLO's classification of the lesion type (mass or calcification), and it lays the groundwork for improving prediction results. In this paper, we propose ranking the Intersection over Union (IoU) scores of various augmented images, including rotated and morphed variants, in order to prioritize the selection of precise predicted bounding boxes. This helps select sample mammograms for accurate localization and classification of lesions. Additionally, to reduce errors and improve overall performance, we merge predictions from several model implementations. These models are set up and trained separately: Model-1 is trained for mass and for calcification independently, while Model-2 is trained for multiple classes. After extensive testing, we build customized fused models, using Model-1 (calcification) for calcification and Model-1 (mass) for mass; the mass and calcification components of Model-2 considerably improve the general usability of the Model-1 models. Starting from the initial mass predictions of Model-1 (mass), our fusion technique keeps predictions with an IoU score above threshold1. After separating images with mass lesions using threshold2, we apply Model-2 (calcification and mass) to produce predictions; images not covered by Mass Prediction 1 are labeled Mass Predictions 2. Combining these two prediction groups yields the final mass predictions shown in Fig 4. Calcification predictions follow a similar process. Throughout this fusion procedure we consistently use threshold1 = 0.44 and threshold2 = 0.38, since these values have a history of producing promising outcomes; a simplified sketch of this fusion logic follows.
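The sketch below is a simplified reading of the fusion rule just described, assuming axis-aligned (x1, y1, x2, y2) boxes. The prediction structures and the exact way Model-2 fills in uncovered regions are our illustrative interpretation rather than released code; only the two threshold values (0.44 and 0.38) come from the text.

# Simplified sketch of the two-model fusion for mass predictions, under the
# interpretation described above. Each prediction is (box, score).
THRESHOLD1, THRESHOLD2 = 0.44, 0.38   # values reported in the text

def iou(a, b):
    # Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def fuse_mass_predictions(preds_m1, preds_m2, gt_boxes):
    # Keep Model-1 mass predictions whose IoU against ground truth clears
    # threshold1 (Mass Predictions 1).
    kept = [(box, s) for box, s in preds_m1
            if any(iou(box, gt) > THRESHOLD1 for gt in gt_boxes)]
    # Let Model-2 contribute predictions for regions Model-1 did not cover,
    # coverage being judged with threshold2 (Mass Predictions 2).
    extra = [(box, s) for box, s in preds_m2
             if all(iou(box, k) < THRESHOLD2 for k, _ in kept)]
    return kept + extra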
To implement our strategy, we used a YOLO-based architecture. The core model was initially trained in several configurations, each focused on a different class label: mass, calcification, or architectural distortion. To determine the predicted bounding boxes with the best confidence scores at each iteration, we gathered a variety of augmented images, including the originals and rotated versions. This technique was developed to identify the best images for classifying particular mammograms and for precise diagnosis of breast abnormalities. We then used a fused YOLO-model strategy to improve the final predictions: by merging several predictions, we aimed to lower overall error rates and increase the adaptability of models with different configurations. Model-1 denotes the YOLO base model trained for a particular class, while Model-2, also built on YOLO, was configured for multi-class training over all three classes. The Fused Model was created by analyzing both M-1 and M-2 to enhance overall detection performance. A new class label named "Normal" was also included to account for mammograms that came back normal during follow-up screening: using the YOLO-based architecture trained on abnormal mammograms, we verify the absence of predicted bounding boxes and can then reliably categorize an image as "Normal". The most recent screening mammograms, including examples of lesions with architectural distortion, calcification, or both, were used to build and test the models. This comprehensive strategy reflects our commitment to improving and broadening the applicability of the YOLO-based paradigm.

B. Segmentation

UNet, a prominent model in medical image segmentation, adopts an encoder-decoder structure inspired by FCN and omits fully connected layers. Its symmetrical architecture comprises down-sampling and up-sampling paths that form a "U" shape. UNet's key innovation is its skip connections, which preserve the spatial information otherwise lost during down-sampling. Inspired by this, the "Associated-ResUNets" architecture joins two UNets with additional skip connections to enhance information flow, as shown in Fig 5. Each encoder block includes two convolution units, each followed by BN and ReLU layers, and its output undergoes max pooling before passing to the next encoder block. Customized skip connections between the first decoder and second encoder blocks recover decoded information, improving overall segmentation performance.
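As a concrete reading of the encoder block just described (two convolution units, each followed by batch normalization and ReLU, then max pooling before the next block), a minimal PyTorch sketch might look as follows. Channel counts and kernel sizes are our assumptions for illustration; the text does not specify them for this block.

import torch
import torch.nn as nn

class EncoderBlock(nn.Module):
    """One Associated-ResUNets-style encoder block, as read from the text:
    two (conv -> BN -> ReLU) units, then 2x2 max pooling.
    Channel counts and kernel sizes are illustrative assumptions."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )
        self.pool = nn.MaxPool2d(2)

    def forward(self, x):
        skip = self.body(x)          # feature map kept for the skip connection
        return self.pool(skip), skip

# Example: chaining two blocks and keeping the skips for the decoder path.
x = torch.randn(1, 1, 256, 256)
enc1, enc2 = EncoderBlock(1, 64), EncoderBlock(64, 128)
x, s1 = enc1(x)
x, s2 = enc2(x)

The returned skip tensors are what the customized connections between the first decoder and second encoder blocks would consume.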
To facilitate smooth transitions between the down-sampling and up-sampling pathways, the model employs an Atrous Spatial Pyramid Pooling (ASPP) block. This technique uses "atrous" (dilated) convolution to widen the receptive field while maintaining resolution. The ASPP block integrates batch normalization layers and four 3x3 convolution layers with varying dilation rates, whose outputs are combined into multi-scale features and fed into a 1x1 convolutional layer. Following the initial UNet, a second UNet with additional skip connections draws on insights from the first up-sampling path. After ReLU activation and normalization with a BN layer, the output of the preceding decoding block is merged with itself and used as input to the second UNet's initial encoder block. Subsequently, the outputs of three encoder blocks' max pooling operations are merged with the outputs of the preceding decoding blocks before down-sampling. The terminal encoding block of the second UNet feeds the ASPP block, followed by a 1x1 convolutional layer and a sigmoid activation layer that produces the final output mask. Additionally, the A-ResUNet model incorporates an attention block that fuses attention mechanisms with the skip connections in the encoder and decoder blocks. This attention block accepts low-level input and passes it through a transposed convolutional layer followed by ReLU activation, sigmoid activation, and further transposed convolutional layers to produce an attention map, which is multiplied with the skip-connection information to enhance segmentation accuracy. Finally, the decoder block receives this output, improving UNet's segmentation across varied medical image sizes, with one standard convolution block replaced for optimization.

C. Classification using a BreastNet-SVM

In this phase, we introduce a customized technique inspired by the architecture of AlexNet and its modified variants, forming the fundamental model termed BreastNet-SVM. Illustrated in Fig 6, this model encompasses training, validation, and testing phases, using as the initial data source the CBIS-DDSM dataset, which comprises mammograms from individuals diagnosed with breast cancer. Data preparation enhances data quality through preprocessing, including image transformations, noise removal, and outlier filtering. The processed data is then divided into training, validation, and testing sets, with approximately 70% allocated for training and the remaining 30% for validation and testing. Notably, input patches can vary in size: 16x16, 32x32, or 48x48. The training pipeline comprises two main layers: the application layer and the performance layer. In the application layer, features are extracted using the modified convolutional neural network BreastNet-SVM, capturing significant information from input images for further processing.

In convolutional neural networks (CNNs), the convolutional layer is the essential component responsible for identifying significant features in the input data. These layers carry out convolution operations: a filter is applied to the incoming image, and the result is commonly referred to as an activation map or feature map. Eq (1) depicts this convolution:
A(i, j) = (X * Y)(i, j) = Σ_a Σ_b X(a, b) · Y(i + a, j + b)   (1)

Here X denotes a filter of dimensions a x b, Y is the input matrix (typically an image), and A is the feature map produced when the filter X is convolved with the input Y; the convolution operation is written X * Y. The output of the convolutional layer is then passed through a non-linear activation function (AF), which makes the network non-linear. A variety of non-linear activation functions can process the feature map, introduce non-linearity, and normalize network data, including the sigmoid, the hyperbolic tangent (tanh), SoftMax, and the rectified linear unit (ReLU). This study uses the ReLU activation function, which outputs zero when the input is zero or less:

ReLU(x) = max(0, x)   (2)

In CNNs, a pooling layer is commonly employed after the convolutional layer to reduce the dimensionality of the feature map while preserving essential features, a step often referred to as "down-sampling" in the literature. Techniques such as average pooling, max pooling, sum pooling, and min pooling reduce the dimensions of the activation map while retaining critical information. Before being forwarded to the fully connected layer, the feature map is flattened into a long vector, as shown in Fig 8. In this application, 70% of the preprocessed mammograms undergo convolution in the convolutional layers. The proposed BreastNet-SVM comprises thirteen layers in total: seven convolutional layers, three pooling layers, and three fully connected layers, tailored for breast cancer identification and accommodating grayscale images of size 32x32, as shown in Fig 7. The first two convolutional layers apply 32 filters with a 3x3 kernel and same padding, using the ReLU activation function to introduce non-linearity; the resulting 32x32x32 feature map is then down-sampled by a max-pooling layer with a 2x2 filter and stride 2. Two further convolutional layers follow, each with 64 filters, a 3x3 kernel, same padding, and ReLU activation; a second 2x2, stride-2 max-pooling layer then down-samples the 16x16x64 feature map to 8x8x64. The last three convolutional layers use 128 filters each, with 3x3 kernels and ReLU activation, and a third 2x2 max-pooling layer reduces the 8x8x128 output to 4x4x128, which is flattened into a single vector of size 2048x1.
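Assembling the layer-by-layer description above, a plausible PyTorch rendering of the convolutional trunk is sketched below. The filter counts, kernel sizes, and pooling strides follow the text; the fully connected head widths are not specified in the paper, so the sketch stops at the flattened 2048-dimensional feature vector, which in BreastNet-SVM is ultimately classified by an SVM rather than a SoftMax layer.

import torch
import torch.nn as nn

def conv(in_ch, out_ch):
    # 3x3 convolution with same padding followed by ReLU, as described above.
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1),
                         nn.ReLU(inplace=True))

# Seven conv + three pooling layers from the description; the FC head is
# omitted because its widths are not given. Input: 1x32x32 grayscale patch
# (shapes in comments are channels x height x width).
breastnet_trunk = nn.Sequential(
    conv(1, 32), conv(32, 32),                       # -> 32 x 32 x 32
    nn.MaxPool2d(2, stride=2),                       # -> 32 x 16 x 16
    conv(32, 64), conv(64, 64),                      # -> 64 x 16 x 16
    nn.MaxPool2d(2, stride=2),                       # -> 64 x 8 x 8
    conv(64, 128), conv(128, 128), conv(128, 128),   # -> 128 x 8 x 8
    nn.MaxPool2d(2, stride=2),                       # -> 128 x 4 x 4
    nn.Flatten(),                                    # -> 2048
)

features = breastnet_trunk(torch.randn(1, 1, 32, 32))
assert features.shape == (1, 2048)   # 4 * 4 * 128 = 2048, matching the text

The shape check confirms that the stated layer sequence does produce the 2048x1 vector the text describes.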
In the classification stage of a convolutional neural network (CNN), the fully connected (FC) layer plays a crucial role once the relevant features have been extracted. Serving as a bridge connecting the neurons of adjacent layers, the FC layer's output is passed through an activation function (AF) to generate class scores for classification. Common techniques for classification tasks include support vector machines (SVM) and SoftMax. In the BreastNet-SVM model, a support vector machine classifier is used to achieve optimal accuracy in distinguishing between benign and malignant breast cancer forms, with results evaluated at the performance layer. Deep learning tasks demand significant computational resources and training time, which is addressed through optimization algorithms such as stochastic gradient descent (SGD), adaptive moment estimation (Adam), and root mean squared propagation (RMSprop): Adam uses resources efficiently, RMSprop dynamically adjusts learning rates, and SGD uses model parameters and momentum to identify optimal parameters. Key metrics such as accuracy and classification rate are evaluated at the performance layer to determine whether the model meets the learning criteria or requires retraining. Upon training completion, the model and results are stored in the cloud for future use. During validation, the cloud-stored BreastNet-SVM model is retrieved and compared with the trained model to assess performance; using a subset of the validation dataset, the trained model categorizes cases as "benign" or "malignant" based on cancer cell detection.

Our complete framework operates in a two-step sequence: it first detects and categorizes breast masses, then segments them. Before the intensive segmentation task, we apply a state-of-the-art data augmentation technique that increases the number of low-resolution mammograms while also enhancing their quality. To be clear, our novel design is applied specifically to the regions of interest (ROIs) containing the breast masses identified in the preliminary stage. In the earlier stage of our framework we used the YOLO model to identify breast abnormalities and distinguish between calcification and mass lesions; the model produced bounding boxes around relevant areas across the whole collection of mammograms. In the present phase, however, the design is applied only to the ROIs associated with the previously detected breast masses. Because calcification lesions lack precise reference annotations, this study focuses only on segmenting mass lesions. The integrated YOLO model in the first stage identifies suspicious breast lesions and distinguishes calcifications from mass lesions, and our newly developed architecture (shown in Fig 8) then transfers the ROIs containing the identified masses seamlessly to the segmentation stage. To account for the varying sizes of breast masses, we expand selected bounding box coordinates to cover more surrounding space around smaller tumors, generating a series of ROI images that are then passed to the segmentation network. The following steps helped us increase the system's performance:

1. We began with an unaltered mammogram with exact mass annotations highlighted in red; these annotations identified the mass's region of interest (ROI).
2. We created a binary mask that segmented the indicated mass, helping to separate the mass from the surrounding tissue.

3. We obtained the segmented output of the mass, now excluding the surrounding tissue; this segmented mass was used in the final classification phase.

4. Using the segmented ROI masses as input, we trained a customized AlexNet-based model (BreastNet-SVM) separately for each classification objective, allowing us to predict the pathology and categorize it as benign or malignant.

5. This completes our framework, shown in Fig 9, which covers all automated procedures used in the evaluation and diagnosis of breast cancer.

A. Detection and identification

For our YOLO model, we chose to tune a few key hyperparameters in order to streamline the process and highlight the most important factors. Our experiments used mammography data randomly divided, per class, into 70% for training, 20% for testing, and 10% for validation, as shown in Table 1. We changed only the hyperparameters and kept the total number of trainable parameters constant throughout. To train the system for the recognition and categorization of breast lesions, we used the CBIS-DDSM mammography dataset. We altered the model's input data and adjusted Model-2's classification settings to enable multiple classes. Our findings clearly highlight the benefits of applying data augmentation and scaling to the original mammography dataset, which exhibited especially notable performance gains; notably, our model achieved a higher detection accuracy rate. Since M-2 was trained for both tasks on the enriched and scaled dataset, we ran tests in which M-1 was trained independently for calcification and for mass detection. The results of these experiments are summarized in Table 2.

In this study, a second assessment phase evaluated the model for simultaneous detection and classification. This evaluation procedure, described in the preceding chapter, was extensive and integrated models developed under numerous conditions. To give a thorough picture, we first present the results from the independent models, M-1 and M-2, using the top-selected mammograms from the enriched dataset. Each mammogram was evaluated together with six augmented versions, such as rotated or transformed variants of the original image; after carefully examining each set, we chose the image with the best Intersection over Union (IoU) rating. We then integrated multiple models into a new Fusion model and calculated the detection accuracy rate for each prediction class, as shown in Table 3.

The fused model achieved remarkable accuracy rates, notably 98.5% when identifying mass lesions. This fusion approach significantly enhanced the identification and classification of breast lesions: by reaching a detection accuracy of 98.5%, it effectively combined multiple models, delivering speed and precision that surpassed current state-of-the-art methods. Architectural distortion in particular exhibited strong diagnostic capability, with a sensitivity of 95% for cancer patients and 93.09% for non-malignant cases, as shown in Table 4.
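To illustrate the selection rule used in the second assessment phase above (each original mammogram plus six augmented variants, keeping the prediction with the best IoU against ground truth), here is a minimal sketch. The candidate structure and augmentation tags are illustrative assumptions; the IoU values would come from a box-overlap computation like the one in the earlier fusion sketch.

# Pick, for each mammogram, the augmented variant whose predicted box best
# overlaps the ground truth. Each candidate is (tag, predicted_box, iou),
# where iou is precomputed against the ground-truth box.
def best_variant(candidates):
    return max(candidates, key=lambda c: c[2])

candidates = [("original", (10, 10, 50, 50), 0.81),
              ("rot90",    (12, 11, 52, 49), 0.86),
              ("flip",     (15, 14, 55, 52), 0.74)]
print(best_variant(candidates))   # -> the "rot90" entry, the highest-IoU variant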
Fig 10 shows the trade-off between the false positive rate (FPR) and the true positive rate (TPR) under various conditions using ROC curves. Particularly noteworthy were the AUC scores of 0.95 for the architectural distortion and mass cases and 0.96 for the normal cases. The difficulty with calcification lesions stems from their variety of shapes and locations: they frequently appear as minor, irregular imperfections, which makes automated identification less efficient.

B. Mass segmentation

Table 5 displays the evaluation findings for the various testing sets, concentrating on the per-pixel assessment of the segmented maps; two evaluation indicators were computed for these results. The Associated-ResUNets architecture consistently outperforms the classic UNet, traditional AUNet, and ResUNet models, with considerable gains in Dice and IoU scores. Associated-ResUNets and its variants exhibit strong segmentation efficiency, with an average IoU score of 92.28% and an average Dice score of 95.89%.

C. Classification using a BreastNet-SVM

The study developed and assessed the BreastNet-SVM model using the publicly available CBIS-DDSM dataset. Performance was evaluated with multiple statistical criteria, including sensitivity, misclassification rate, specificity, and accuracy; these metrics established parameters for assessing the model's overall performance, measured its capacity to produce accurate predictions, and helped identify instances of incorrect prediction (see the metric sketch below). To evaluate the BreastNet-SVM model for breast cancer diagnosis, this study tests three distinct optimizers (RMSprop, Adam, and SGD) on the CBIS-DDSM dataset, and the model's efficiency is then compared against more recent approaches. Table 6 compares the training phase across different input image sizes and optimizer selections.

The study assessed the effectiveness of the BreastNet-SVM model for detecting breast cancer using three distinct optimizers (RMSprop, Adam, and SGD) and three input image sizes (16x16, 32x32, and 48x48). Both the input image size and the optimizer choice were found to have a substantial impact on model performance. Adam and RMSprop both performed well, but the SGD optimizer consistently produced the highest accuracy across all input sizes. Table 7 summarizes these findings in a comparative analysis from the study's validation phase.

The efficiency of the BreastNet-SVM model for breast cancer detection varies with the optimizer and the input image size. The SGD optimizer consistently produced the highest accuracy, reaching 99.16% with a 32x32 input; the Adam optimizer also worked admirably across sizes, whereas RMSprop showed high specificity but occasionally lower sensitivity. These performance indicators can guide the optimization of the model's configuration for breast cancer identification. The model was trained on a dataset of 6,165 samples divided into two categories, malignant and benign, and a confusion matrix was produced during training to evaluate its effectiveness, as shown in Table 8.
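For reference, the evaluation indicators named above follow directly from standard definitions. The sketch below states the conventional formulas for the per-pixel segmentation metrics (Dice, IoU) and the confusion-matrix classification metrics; the variable names are ours, and nothing here is specific to the paper's implementation.

import numpy as np

def dice_and_iou(pred_mask: np.ndarray, gt_mask: np.ndarray):
    # Per-pixel overlap metrics for binary masks, as used for Table 5.
    inter = np.logical_and(pred_mask, gt_mask).sum()
    union = np.logical_or(pred_mask, gt_mask).sum()
    dice = 2 * inter / (pred_mask.sum() + gt_mask.sum())
    return dice, inter / union

# Confusion-matrix metrics: tp/tn/fp/fn are counts of true positives,
# true negatives, false positives, and false negatives.
def accuracy(tp, tn, fp, fn):
    return (tp + tn) / (tp + tn + fp + fn)

def misclassification_rate(tp, tn, fp, fn):
    return (fp + fn) / (tp + tn + fp + fn)   # equals 1 - accuracy

def sensitivity(tp, fn):                      # true positive rate (recall)
    return tp / (tp + fn)

def specificity(tn, fp):                      # true negative rate
    return tn / (tn + fp)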
The BreastNet-SVM model was trained on 2,990 samples from the benign category, and its performance was assessed by the accuracy of sample classification: the model correctly predicted 2,971 of these samples and misclassified 19. For the malignant category, the model was trained on 3,175 samples, correctly categorizing 3,128 and misclassifying 47. Table 9 shows the confusion matrix for the validation phase of the BreastNet-SVM model, corresponding to the SGD optimizer that produced the best accuracy. In the validation phase, a total of 882 samples, divided into malignant and benign groups, were used to test the proposed model.

The proposed BreastNet-SVM model showed a high level of accuracy when predicting benign instances during validation, correctly categorizing 406 of 411 benign samples and misclassifying only 5. Of the 471 malignant samples used for validation, the model correctly predicted 461 and misclassified 10. With a remarkable accuracy of 99.16%, the BreastNet-SVM model performed exceptionally well; notably, it obtained a misclassification rate of just 0.84%, the lowest among comparable studies, and it demonstrated the highest sensitivity (97.13%) and specificity (99.30%) in the experimental analysis performed on the publicly available CBIS-DDSM dataset. Fig 11 shows the findings of the improved AlexNet (BreastNet-SVM) model for detecting breast cancer, including both benign and malignant outcomes: the correct prediction of the first three images, classified as true negatives, shows how accurately the model classified benign instances; three images of cancerous tissue were wrongly labeled benign (false negatives); another three images of benign tissue were incorrectly labeled malignant (false positives); and the last three images were correctly classified as positive instances, accurately reflecting their malignancy status.

Discussion

We conducted a comprehensive comparison of our proposed methodology with recent studies and similar methods. To ensure a thorough and equitable evaluation, we considered only research focused on the detection of mass lesions; these results are presented and contrasted in Table 10. When comparing detection accuracy rates with other studies that used the CBIS-DDSM dataset, our fused YOLO models consistently delivered better overall performance.

The BreastNet-SVM model delivered outstanding results, achieving an impressive accuracy of 99.16%, as shown in Table 11. Notably, it demonstrated the lowest misclassification rate observed among similar studies in the field, at a mere 0.84%, along with the highest sensitivity (97.13%) and the highest specificity (99.30%) in the experimental analysis conducted on the publicly available CBIS-DDSM dataset.

Conclusion and future work

This research introduces an integrated deep learning-based CAD system aimed at assisting medical professionals in breast cancer diagnosis. The system comprises three key phases: detection, segmentation, and classification of breast abnormalities. The study demonstrates the effectiveness of several models and techniques: fused YOLO models for simultaneously locating lesions and determining their nature, improved segmentation using attention mechanisms and residual blocks (Associated-ResUNets), and BreastNet-SVM for accurate classification. The results highlight improved accuracy, reduced false positives and negatives, and the potential for broader medical imaging applications. Future research could expand this framework to incorporate more abnormalities and 3D medical images such as CT scans and MRIs.

Table 5. Segmentation performance on test set.
2024-07-13T05:10:09.810Z
2024-07-11T00:00:00.000
{ "year": 2024, "sha1": "accbe6f3d59096b633b97707b0dfcccbe45de795", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "accbe6f3d59096b633b97707b0dfcccbe45de795", "s2fieldsofstudy": [ "Medicine", "Computer Science" ], "extfieldsofstudy": [ "Medicine" ] }
118449912
pes2o/s2orc
v3-fos-license
Brans-Dicke Gravity from Entropic Viewpoint

We interpret Brans-Dicke gravity from an entropic viewpoint. We first apply Verlinde's entropic formalism in the Einstein frame, then perform the conformal transformation that connects the Einstein frame to the Jordan frame. The transformed result yields the equation of motion of the Brans-Dicke theory in the Jordan frame.

I. INTRODUCTION

It is well known that a black hole has an entropy [1] and radiates as if it had a temperature proportional to the surface gravity on its event horizon [2]. These facts suggest that there may exist a deep connection between gravity and thermodynamics. Jacobson [3] showed that the Einstein equation can be obtained from the first law of thermodynamics together with the Bekenstein-Hawking entropy-area relation [1,2] applied on a local Rindler horizon [4]. Recently, Verlinde [5] proposed that gravity is not a fundamental interaction but can be explained as a macroscopic thermodynamic phenomenon; based on the holographic principle and the equipartition rule, he showed that gravity emerges as an entropic force. A related idea was considered also by Padmanabhan [6,7]. Since these works, many subsequent investigations have appeared in cosmology [8], black hole physics [9-12], loop quantum gravity [13], and other fields.

In this paper we investigate whether Brans-Dicke gravity can be viewed as an entropic phenomenon à la Verlinde. Brans-Dicke gravity is characterized by the fact that the gravitational coupling is not presumed to be a constant but is proportional to the inverse of a scalar field that couples non-minimally to the curvature scalar [14]. It is also well known that Brans-Dicke theory can be transformed to the Einstein frame via a conformal transformation; in the Einstein frame the Brans-Dicke action is equivalent to Einstein gravity plus a scalar field action, except that, unlike in the Jordan frame, ordinary matter does not follow the geodesics of the Einstein frame [14]. We use the holographic principle and the equipartition rule in the Einstein frame first, and then, via a conformal transformation, we recover the field equations of the metric in the Jordan frame. In Ref. [15] it was shown that black hole entropy in Brans-Dicke gravity depends on the value of the scalar field at the horizon as well as on the area of the horizon; we will show, however, that a naive application of Verlinde's formalism with this entropy expression of the Jordan frame cannot yield the correct field equations.

The paper is organized as follows. We first review Verlinde's conjecture on emergent gravity in the case of Einstein gravity, then give an entropic derivation of the Brans-Dicke field equation based on the holographic principle and the equipartition rule. We work with c = ℏ = k_B = 1 in this paper.

II. EINSTEIN EQUATIONS À LA VERLINDE

According to Verlinde, gravity is an entropic force emerging from a coarse-graining of information for a given energy distribution; in this process, information is stored on the holographic screen.
In a static background with a global timelike Killing vector ξ^a, a generalized Newtonian potential is defined by

Φ = log √(−ξ^a ξ_a).   (1)

With this potential we can foliate the spacetime, and we choose the holographic screen as an equipotential surface of Φ. In this background, a test particle approaching the holographic screen experiences the 4-acceleration

a^b = u^a ∇_a u^b = −∇^b Φ,   (2)

where u^a is the 4-velocity of the particle. The temperature of the holographic screen measured by an observer at infinity is given by the so-called Unruh-Verlinde temperature, obtained by multiplying the redshift factor e^Φ = √(−ξ^a ξ_a) by the Unruh temperature for the acceleration (2):

T = (1/2π) e^Φ N^b ∇_b Φ,   (3)

where N^a is the outward unit normal vector to the holographic screen. With the holographic principle, which states that the information in the bulk can be encoded in the information on the boundary, and with the equipartition rule, which states that each bit of information contributes an energy of (1/2)T, the quasi-local energy inside the holographic screen ∂Σ, the boundary of a spacelike hypersurface Σ, can be written as

E = (1/2) ∮_{∂Σ} T dN = (1/4πG) ∮_{∂Σ} e^Φ N^b ∇_b Φ dA,   (4)

where the number of bits on an area dA is assumed to be dN = dA/G. With (1) and the properties of the Killing vector ξ^a, one can show that the right-hand side of (4) is nothing but the Komar expression

E = (1/4πG) ∮_{∂Σ} dA_{ab} ∇^a ξ^b,   (5)

where dA_{ab} = (1/2!) ε_{abcd} dx^c ∧ dx^d. Expressing the Komar mass inside the holographic screen with the energy-momentum tensor T_{ab} on the left-hand side and applying the Stokes theorem to the right-hand side of Eq. (5), together with the identity ∇^a ∇_a ξ^b = −R^b{}_a ξ^a, we have [16]

2 ∫_Σ (T_{ab} − (1/2) T g_{ab}) ξ^b dΣ^a = (1/4πG) ∫_Σ R_{ab} ξ^b dΣ^a,   (6)

where dΣ^a is defined with an outward-pointing normal. Since Eq. (6) holds for an arbitrary holographic screen, we obtain the Einstein equation

R_{ab} − (1/2) R g_{ab} = 8πG T_{ab}.   (7)

III. BRANS-DICKE EQUATIONS FROM ENTROPIC VIEWPOINT

In the previous section we saw that Einstein gravity can be viewed as an entropic phenomenon; in this section we extend this entropic viewpoint of gravity to the Brans-Dicke theory. In the Brans-Dicke theory there are two frames, called the Jordan frame and the Einstein frame, whose metrics we denote by g̃_{ab} and g_{ab}, respectively. These two frames are related by the conformal transformation

g_{ab} = ψ g̃_{ab}.   (8)

Unlike the minimal coupling of the Einstein frame, the scalar field ψ couples non-minimally to the curvature scalar in the Jordan frame, so that it plays the role of the effective gravitational coupling of the theory, i.e., G_eff = G/ψ. Because this extra scalar field comes into play, gravity in the Jordan frame is also known as the scalar-tensor theory of gravity. From now on, all quantities with a tilde should be understood as expressions in the Jordan frame. In the following we show that the Brans-Dicke field equation for the metric in the Jordan frame can be obtained with Verlinde's entropic formulation. Since the entropy of a stationary black hole in the Jordan frame is known to be the same as that in the Einstein frame [17], we first set up the entropic formulation in the Einstein frame, where the usual Verlinde idea works nicely for Einstein gravity, and then, with the conformal transformation (8), we derive the field equations of g̃_{ab} in the Jordan frame. As usual, we assume that there exists a timelike Killing vector ξ̃^a in the Jordan frame satisfying the Killing equation ∇̃_a ξ̃_b + ∇̃_b ξ̃_a = 0, where ∇̃_a is the covariant derivative associated with the Jordan metric g̃_{ab}. Notice that ξ_a = g_{ab} ξ̃^b satisfies the Killing condition in the Einstein frame.
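For reference, the conformal dictionary implied by the transformation (8) is collected below in LaTeX form. These are the standard relations that follow from g_{ab} = ψ g̃_{ab} and that are used in the derivation that follows; only the presentation is ours.

% Conformal dictionary following from g_{ab} = \psi\,\tilde{g}_{ab}
% (standard relations; presentation ours).
\begin{align}
  \xi_a &= g_{ab}\,\tilde{\xi}^b = \psi\,\tilde{\xi}_a, &
  -\xi^a \xi_a &= \psi\,\bigl(-\tilde{\xi}^a \tilde{\xi}_a\bigr), \\
  \Phi &= \tilde{\Phi} + \tfrac{1}{2}\log\psi, &
  N_a &= \sqrt{\psi}\,\tilde{N}_a, \\
  dA &= \psi\, d\tilde{A}, &
  dN &= \frac{dA}{G} = \frac{\psi\, d\tilde{A}}{G}.
\end{align}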
The gravitational potential Φ = log √(−ξ^a ξ_a) associated with ξ^a in the Einstein frame is related to the gravitational potential Φ̃ in the Jordan frame by

Φ = Φ̃ + (1/2) log ψ,   (9)

where Φ̃ = log √(−ξ̃^a ξ̃_a). Therefore the acceleration of a test particle near the holographic screen seen in the Einstein frame is given by

a_a = −∇_a Φ = ã_a − (1/2) ∇_a log ψ,   (10)

where the acceleration measured in the Jordan frame is given by ã_a = −∇_a Φ̃. Notice that, unlike in the Jordan-frame description of Brans-Dicke gravity, a_a = 0 does not mean geodesic motion of a test particle in the Einstein frame; Eq. (10) shows that free fall in the Einstein frame corresponds to ã_a = (1/2)∇_a log ψ, so that the Jordan-frame acceleration compensates for the additional dragging by the scalar field. With this acceleration a_a, we define the Unruh-Verlinde temperature T in the Einstein frame as

T = (1/2π) e^Φ N^b ∇_b Φ = (1/2π) e^Φ̃ Ñ^b ∇̃_b (Φ̃ + (1/2) log ψ),   (11)

where N^a and Ñ^a are the unit outward normal vectors to the holographic screen seen in the Einstein frame and in the Jordan frame, respectively, related by N_a = √ψ Ñ_a. Therefore the quasi-local energy (4) inside the holographic screen ∂Σ seen in the Einstein frame has the following expression in terms of Jordan-frame variables:

E = (1/4πG) ∮_{∂Σ̃} ψ e^Φ̃ Ñ^b ∇̃_b (Φ̃ + (1/2) log ψ) dÃ,   (12)

where the surface element transforms as dA = ψ dÃ under the conformal transformation (8). Here ∂Σ̃ is the same closed hypersurface as ∂Σ but seen in the Jordan frame; the same applies to Σ̃ and Σ below.

Now, applying the Stokes theorem to each term on the right-hand side of Eq. (12) converts the surface integral into a volume integral over Σ̃, where dΣ̃^a is the volume measure in the Jordan frame and □̃ = g̃^{ab} ∇̃_a ∇̃_b appears acting on ψ. As we did in Eq. (6), we also express E in terms of the energy-momentum tensor, using the fact that under the conformal transformation (8) the energy-momentum tensor in the Einstein frame is related to that in the Jordan frame by T_{ab} = T̃_{ab}/ψ. We emphasize that this energy-momentum tensor contains the contribution of the scalar field ψ as well as that of ordinary matter. Comparing the two resulting expressions for the holographic energy, we obtain the field equation for the metric g̃_{ab},

ψ (R̃_{ab} − (1/2) R̃ g̃_{ab}) = 8πG T̃_{ab} + ∇̃_a ∇̃_b ψ − g̃_{ab} □̃ ψ,

which is the same equation obtained by varying the Brans-Dicke action

I_BD = (1/16πG) ∫ d⁴x √(−g̃) ψ R̃ + I[g̃_{ab}, ψ] + I_matter[g̃_{ab}, φ^(m)]

with respect to the Jordan metric g̃_{ab}, where I[g̃_{ab}, ψ] is the action for the scalar field ψ, with kinetic term proportional to (ω/ψ) ∇̃_a ψ ∇̃^a ψ, ω being the Brans-Dicke parameter, and I_matter[g̃_{ab}, φ^(m)] is an ordinary matter action minimally coupled to g̃_{ab}. Variation of the last two terms of the action with respect to g̃_{ab} contributes the energy-momentum tensor T̃_{ab} of the field equation. With the holographic principle and the equipartition rule we have thus derived the field equations for the metric of Brans-Dicke gravity without resort to the action principle, as Verlinde did for Einstein gravity.

Finally, we comment that a naive application of the holographic principle and the equipartition rule directly in the Jordan frame does not yield the correct field equations for the Jordan metric. Since in Brans-Dicke gravity the entropy of a black hole with horizon area Ã is given by [15]

S̃ = ψ Ã / 4G,

we expect an infinitesimal area dÃ on the holographic screen measured in the Jordan frame to contain the amount of information

dÑ = ψ dÃ / G.

One may then write the quasi-local energy inside the holographic screen ∂Σ̃ of the Jordan frame, compatible with the holographic principle and the equipartition rule, as

Ẽ = (1/2) ∮_{∂Σ̃} T̃ dÑ.

It is obvious from Eq. (12) that, in order to obtain the Brans-Dicke field equations, the temperature T̃ in this expression must equal the Unruh-Verlinde temperature not of the Jordan frame but of the Einstein frame, Eq. (11).
2013-01-10T07:32:57.000Z
2011-05-30T00:00:00.000
{ "year": 2011, "sha1": "6bdc2d23a058e41f0a54dcfb27bfe33f5c90b124", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1105.5905", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "6bdc2d23a058e41f0a54dcfb27bfe33f5c90b124", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
248643140
pes2o/s2orc
v3-fos-license
Legal Protection for Cinematographic Copyright Holders Related to Alleged Violations of Economic Rights Through the Telegram Social Media Application (in a View of Law Number 28 of 2014 Concerning Copyright)

Uploading a creative work to a social media application is evidence that the presence of the internet shapes a modern era that keeps growing more sophisticated. It cannot be denied, however, that this can cause several problems, including suspected violations of economic rights, especially those of cinematographic copyright holders. This research discusses legal protection for cinematographic copyright holders, the reasons behind the illegal activity, and the legal remedies that cinematographic copyright holders can pursue against pirates who use the Telegram application. The research method used is normative legal research, employing the statutory and conceptual approaches. The research concludes that cinematographic copyright protection takes the form of preventive measures, alternative dispute resolution such as arbitration, or repressive dispute resolution through the courts. Pirates use the Telegram application because it is convenient and free, and because Telegram does not enforce strict rules against misused channels. A cinematographic copyright holder whose work has been pirated on the Telegram application can seek redress by submitting a complaint about the copying and piracy to the relevant authorities.

INTRODUCTION

Indonesia is a state based on law, where every legal subject is governed by rules of law that confer the right to do something. This right is inherent in humans, for example the right of a person whose name appears on a copyrighted work realized in tangible form, as long as it does not conflict with the public interest, with the guarantee that the right is backed by legal certainty. Rights are an authority possessed by legal subjects; the authority held by a legal subject is an exclusive authority that includes economic rights and moral rights. [1] However, allegations of violations of exclusive rights, such as the piracy of cinematographic works, especially through the social media application called Telegram, show one form of exploitation of economic rights by actors who clearly understand the current situation: most of the Indonesian population uses Telegram. Users of the Telegram application find that this social media application offers large storage and transfer capacity, so that people can do all kinds of things, such as sending and receiving files, exchanging messages (chat) with family near and far, holding group discussions, and even watching the films or series they want for free simply by searching for the title in the "Search" column. Many people do not understand that watching cinematographic works in the form of famous films or series through this application is wrong, because the films available on Telegram channels originate from the actions of individuals who have pirated other people's cinematographic works, downloading and uploading them illegally without the permission of the copyright holder of the cinematographic work. Another reason people prefer Telegram as their means of watching movies is the limited ability of the public to pay for the official applications of film and series service providers. Cinematographic works are addressed by Law No. 28 of
2014 concerning Copyright, Article 40 paragraph (1) letter m, which describes cinematographic works: works in the form of moving images, including documentary films, promotional films, reports, or narrative films made with scripts, as well as animated films. Cinematographic works can be produced on celluloid, film tape, film discs, optical discs, or other media that allow them to be shown in cinemas, on wide screens, on television, or on other devices. Cinematography is a form of audiovisual work. From the explanation above, the author is interested in examining the piracy of cinematographic works through the Telegram social media application under Law No. 28 of 2014 concerning Copyright, so that the topic matches the title and the problem formulation that has been determined. This research is also supported by other interrelated regulations.

Related Work

Based on the description above, the research is titled: "Legal Protection for Cinematographic Copyright Holders Related to Alleged Violations of Economic Rights Through the Telegram Social Media Application (in view of Law Number 28 of 2014 concerning Copyright)".

The factors that cause someone to violate economic rights in the form of cinematographic works through the Telegram social media application

Advances in information technology are developing so rapidly that they cause changes in every aspect of human life, directly giving rise to new legal acts. Besides serving as a medium of information and communication and a profitable business tool, the internet can also be very fertile ground for criminal acts. [2] Contributing factors include technological advances in the reproduction industry, the difficulty of supervising production activities, the very significant price difference between legal and illegal products, and ineffective law enforcement. The same applies to sharing pirated films through social media: the actors involved are the social media users who upload pirated films as providers, the recipients or viewers who enjoy the pirated works, and of course the owners of the social media platforms themselves. With regard to information technology, the piracy discussed in this paper occurs in a social media application, Telegram, which can be downloaded from the PlayStore and the AppStore onto each individual's communication device and can also be accessed via the Telegram web client. As explained above, the Telegram application is a social media platform with an appearance similar to WhatsApp, and one of the methods used in carrying out this piracy is the Channel, a feature of the Telegram application itself. Pirated movies on the Telegram application are mostly stolen from Netflix and Spotify. The Outline, as quoted by Business Insider Singapore, reported that a number of groups and channels on Telegram were deliberately created with the aim of distributing pirated content. The parties involved clearly violate several laws and regulations, namely Law Number 28 of 2014 concerning Copyright; film piracy constitutes the violation of copying and/or piracy of cinematography as stated in Article 1 number 12 of the Copyright Law (UUHC).
[3] This gives rise to legal consequences arising from the legal actions that occur on Channels in the Telegram application. A person's intellectual property rights must of course be protected, and Copyright protects a person's copyrighted work. Article 4 of the UUHC explains that Copyright is an exclusive right of a person consisting of moral rights and economic rights; if these basic rights are violated, the Creator and Copyright Holder can sue for the losses caused by the act of piracy. Based on interview data gathered by the author, the factor that leads someone to commit piracy is that it is easy to do: from the point of view of a Telegram Channel admin, obtaining famous films can be done independently, by obtaining illegal links containing a film previously pirated by an anonymous sender, and profit can then be made from viewers who subscribe to the Channel. A viewer of a famous movie pirated via Telegram does not have to spend money to watch the movie (unless the person wants to become a VVIP subscriber), and becoming a VVIP subscriber costs less than the monthly price charged by the official applications of famous movie providers such as Netflix. This is why some people prefer watching on Telegram rather than through official applications such as Netflix or in cinemas. An additional contributing factor is the Covid-19 pandemic, which has prevented many cinemas from operating normally.

Legal Protection for Cinematographic Copyright Holders Related to Alleged Violations of Economic Rights Through the Telegram Social Media Application (in view of Law Number 28 of 2014 concerning Copyright)

It should be noted that legal protection efforts can only be implemented if the Author or Copyright Holder files a complaint, as provided in Article 120 of the UUHC, which reads: "Criminal acts as referred to in this Law constitute a complaint offense." [4] This means that the article will only be applied against the perpetrators of piracy if the Creator or Copyright Holder files a complaint. In principle, when a criminal incident occurs, the government, represented by the police, the prosecutor's office, and the judiciary, acts immediately to conduct an investigation without any request from the person affected by the incident. However, among the many kinds of criminal events there are several types, almost all of them crimes, which are prosecuted only upon a complaint (request) from the person affected. [5] Such a criminal event is called a complaint offense. A complaint offense (klacht delict) is an offense that is tried only if the aggrieved person complains to the Police/Investigator; if there is no complaint, the investigator will not open an investigation or draw up Minutes of Examination. Complaint offenses are divided into two types: absolute complaint offenses, which can only be prosecuted if there is a complaint; and relative complaint offenses, which are not ordinarily complaint offenses but become complaint offenses when committed by a specified relative. The method of filing a complaint is set out in Article 45 H.I.R.: by a signed letter, or orally.
An oral complaint must be put in writing by the official who receives it and signed by that official and by the complainant. [6] The time limit is six months if the person entitled to complain is in Indonesia, and nine months if the complainant is abroad. For an oral complaint, the relevant date is the date on which the verbal notification is submitted; for a written complaint, the relevant date is the date on which the complaint was sent, not the date the letter was received. From the explanation of the complaint offense above, it can be seen that the Creator or Copyright Holder has the right to make a complaint, submitted orally or in writing to the official concerned (Police/Investigator).

Our Contribution

Based on the background and problem formulation described above, the objectives of this research are to identify the factors that cause someone to violate economic rights in cinematographic works through the Telegram social media application, and to determine how the law operates with respect to a cinematographic work in relation to alleged violations of economic rights, viewed from Law Number 28 of 2014 concerning Copyright.

Paper Structure

This paper uses a research method to collect data, manage data, and draw conclusions from data according to the problem studied by the author. Legal research studies the phenomena of a particular area of law, whether one or several of its aspects, and is carried out through a series of scientific activities based on a method, a systematic structure, and a particular line of thought. The research method used by the author is as follows. Type of Research: the type of research in this legal study is normative research, that is, research that provides a systematic explanation of the rules governing a certain legal category, analyzes the relationships between regulations, describes areas of difficulty, and may predict future developments. Legal Resources and Materials: in this paper, the author uses legal materials obtained from a review of court decisions and of literature or library materials related to the problem, which are commonly called legal materials.

Legal Protection

Protection means to give shelter, or to cause something to take refuge. In general, protection means guarding something, whether interests, objects, or goods, from danger. Protection also carries the meaning of shelter given by someone to a person who is helpless. In the Indonesian Dictionary, law means a regulation or custom that is officially considered binding and is confirmed by the ruler or government. The law contains provisions that become the rules of life of a society and that control, prevent, bind, and coerce. Law is also defined as the provisions that stipulate what may be done, must be done, and is forbidden to be done, as well as the provisions identifying prohibited acts together with their legal consequences or sanctions. According to Achmad Ali, law is a set of rules or measures arranged in a system that determines what humans, as members of society, may and may not do in their social life; it derives from the community itself as well as from other sources, and is recognized by
the highest authority in society, is actually implemented by citizens as a whole, and, if violated, entitles that highest authority to impose external sanctions. [7] Therefore, legal protection can be said to be protection given to a legal subject in accordance with the rule of law, whether preventive in nature or repressive (coercive), and whether written or unwritten, within the framework of legal regulation. Legal protection describes the working of the legal function to realize the goals of law, namely justice, expediency, and legal certainty. According to Fitzgerald, explaining Salmond's theory of legal protection, the law aims to integrate and coordinate the various interests in society: where interests cross, protection of certain interests can only be achieved by limiting the interests of other parties. Legal interests concern human rights and interests, so the law has the highest authority to determine which human interests need to be regulated and protected. The descriptions of these experts convey the understanding that legal protection describes the functioning of the law to realize its objectives, namely justice, benefit, and legal certainty. [8] Legal protection is protection given to legal subjects in accordance with the rules of law, both preventive and repressive in nature, and both written and unwritten, within the legal framework.

Law on Copyright and Protection

The purpose of copyright protection is to encourage individuals in society who have intellectual ability and creativity to be more enthusiastic about creating as many copyrighted works as possible that are useful for the progress of the nation. Protection is also directed at related rights, namely the exclusive rights of performers to reproduce or broadcast their sound recordings and of broadcasting institutions to create, reproduce, or broadcast their broadcast works. Legal protection of Copyright in Indonesia is currently regulated in Law Number 28 of 2014 concerning Copyright. The protection of copyright thus aims to safeguard all the rights inherent in the creator so that these rights are not taken away by others. With the Copyright Law, creators no longer need to worry about the binding status of their creations, because the Copyright Law guarantees protection of a creation from the moment it is first created.

Responsibility

Responsibility is a quality attached to each individual legal subject, whereby an individual is responsible for his own violations. In this theory there are two terms that refer to responsibility, namely liability (the state of being liable) and responsibility (the state or fact of being responsible). Liability is a broad legal term which, among other things, refers to the most comprehensive meaning, covering almost every character of risk or responsibility, whether certain, contingent, or possible. Liability designates all the characteristics of rights and obligations. In addition, liability is also a condition of being subject to actual or potential obligations:
the condition of being responsible for actual or potential matters such as losses, threats, crimes, costs, or burdens; a condition that creates a duty to implement the law immediately or in the future. [9] Responsibility means something for which one can be held to account through an obligation, and it encompasses decisions, skills, and abilities. Responsibility also means the obligation to answer for the laws that are implemented and to repair or otherwise compensate for any damage that has been caused. There are also other views on the principle of responsibility in law, which divide it into three kinds: accountability, responsibility, and liability. In understanding legal responsibility, there are thus three kinds: legal responsibility in the sense of accountability, of responsibility, and of liability. Responsibility in the sense of accountability is legal responsibility in relation to finance; for example, accountants must answer for the results of their bookkeeping. Responsibility in the sense of responsibility is the responsibility to bear a burden, also defined as a moral attitude of carrying out one's obligations, while responsibility in the sense of liability is the obligation to bear the losses suffered, a legal attitude of accounting for violations of one's obligations or of the rights of other parties.

Copyright

Copyright is the part of intellectual property with the widest scope of protected objects, because copyright covers science, art, and literature and also includes computer programs. The development of the creative economy, which is one of the mainstays of Indonesia and various other countries, and the rapid development of information and communication technology require an update of the Copyright Law, given that Copyright is central to the national creative economy. With a Copyright Law that fulfills the elements of protecting and developing the creative economy, it is expected that the contribution of the Copyright and Related Rights sector to the country's economy will run more optimally. From a historical perspective, the concept of copyright protection began to grow rapidly after the invention of the printing press by J. Gutenberg in the mid-fifteenth century in Europe. The need for protection arose because, with the printing press, copyrighted works, especially written works, could easily be reproduced mechanically; it was this development that first gave rise to copyright. In Indonesia, the term copyright was first proposed by St. Moh. Shah, S.H. at the Cultural Congress in Bandung in 1951 (and was later accepted by the Congress) as a substitute for the term author's rights, which was considered too narrow in meaning. The term author's rights is itself a translation of the Dutch term Auteurs Rechts. It was declared "less broad" because the term author's rights gives the impression of narrowing the meaning, as if what is covered are only the rights of authors connected with writing, whereas the term copyright is broader and also covers works beyond written composition.
Indonesia adheres to the notion of the creator as an individual person, so under Article 1 of the Copyright Law the creator is a person, or several people who jointly and under a shared inspiration, give birth to a creation based on the ability of the mind, imagination, dexterity, skill, or expertise, expressed in a unique and personal form. Law Number 28 of 2014 concerning Copyright provides an overview of the definition of Copyright in Article 1 point 1, namely: "Copyright is the exclusive right of the Creator which arises automatically based on the declarative principle after a Work is realized in a tangible form, without reducing the restrictions in accordance with the provisions of the legislation." Observing the definitions of Copyright above, it can be concluded that they convey the same meaning, namely that Copyright is a special or exclusive right owned by the Creator. The Author, under Article 1 point 2 of Law Number 28 of 2014 concerning Copyright, is a person who individually or jointly produces a work that is unique and personal. As for the definition of a Creation, Article 1 point 3 of Law Number 28 of 2014 concerning Copyright states that a Creation is any copyrighted work in the fields of science, art, and literature that is produced through inspiration, ability, thought, imagination, dexterity, skill, or expertise and expressed in a tangible form. In the definition of Copyright above, the phrase "exclusive right" appears. This is a right intended solely for the holder, so that no other party may use the right without the permission of the right holder. The definition makes clear that anyone other than the creator or copyright holder is prohibited from announcing, reproducing, or distributing a work without the permission of the Copyright Holder, for whatever reason. The article also strongly emphasizes the rights of the creator to have the work reproduced, to obtain the economic value of the creation, and to enjoy the moral value of the copyright. Exclusive rights are closely attached to the owner, who holds proprietary power over the creation in question. Therefore, no other party may take advantage of the Copyright except with the permission of the Author. As an exclusive right, Copyright has two essential components, namely economic rights and moral rights. Economic rights are rights to obtain economic benefits from creations and related-rights products; their content includes the right to announce (performing rights) and the right to reproduce (mechanical rights). [10] The moral rights of the creator include the author's right to have his name included in the creation and the author's right to alter his creation, including its title. Moral rights are rights inherent in the creator or performer that cannot be removed or deleted for any reason, even though the copyright or related rights have been transferred. To obtain the benefits of the economic rights in a work,
the creator or Copyright Holder has the economic rights to: publish the creation; reproduce the creation in all its forms; translate the creation; adapt, arrange, or transform the creation; distribute the creation or copies thereof; perform the creation; announce the creation; communicate the creation; and lease the creation. Anyone other than the Author or Copyright Holder is prohibited from reproducing and/or commercially using the creation.

Electronic Information

This technology is a development of computer technology combined with telecommunications technology. The definition of the word "information" itself is internationally agreed upon as "the result of data processing," which in principle has more value than raw data; computers were the first information technology able to process data into information. Under Article 1 number 2 of Law Number 11 of 2008 concerning Information and Electronic Transactions, Electronic Transactions are legal actions carried out using computers, computer networks, and/or other electronic media. The legal actions of the organizer of electronic transactions can be carried out in the public or private sphere. The parties conducting electronic transactions must act in good faith when interacting and/or exchanging electronic information and/or electronic documents during the transaction; the implementation of electronic transactions is regulated by government regulation. In today's world of electronic information and transactions there is social media, also known as social networking, which is part of the new media, and the interactivity of content in new media is very high. Social media is defined as an online medium whose users can easily participate, share, and create content, including blogs, social networks, wikis, forums, and virtual worlds. Blogs, social networks, and wikis are the forms of social media most commonly used by people around the world, including Telegram. Social media has social power that greatly influences the public opinion that develops in society: support and mass movements can be raised through the power of online media, because what appears on social media has proven able to shape public opinion, attitudes, and behavior.

CONCLUSION

1. The form of legal protection is divided into two parts, namely preventive and repressive measures. The preventive effort carried out by the government is to provide legal protection by seeking to shut down content that violates copyright, based on the provisions of Article 15 of the Joint Regulation of the Minister of Law and Human Rights Number 14 of 2015 and the Minister of Communication and Information Technology Number 26 of 2015 concerning the Implementation of Closing Content and/or User Access Rights That Violate Copyright and/or Related Rights in Electronic Systems. Repressive efforts are a form of legal protection intended for dispute resolution, which can be pursued through alternative dispute resolution, arbitration, or the courts, based on the provisions of Article 95 of the UUHC. 2. The cause of the alleged violations of the economic rights of the holders of rights in cinematographic works through the Telegram application is the very rapid development of information technology, which makes it very easy for people to use the internet for various activities on social media, one of which is the violation of economic rights through the piracy of cinematographic copyright.
In addition, the factors that lead piracy actors to use the Telegram application are that Telegram is considered very easy to use and free, and that Telegram is regarded as not very strict toward users who abuse Channels. 3. The form of legal action that can be taken by the cinematographic copyright holder with respect to the reproduction and/or piracy that occurs on a channel in the Telegram application is to file a complaint with the relevant authorities.
Annual Research Review: Transdiagnostic neuroscience of child and adolescent mental disorders – differentiating decision making in attention-deficit/hyperactivity disorder, conduct disorder, depression, and anxiety

Background: Ineffective decision making is a major source of everyday functional impairment and reduced quality of life for young people with mental disorders. However, very little is known about what distinguishes decision making by individuals with different disorders or the neuropsychological processes or brain systems underlying these. This is the focus of the current review.

Scope and methodology: We first propose a neuroeconomic model of the decision-making process with separate stages for the prechoice evaluation of expected utility of future options; choice execution and postchoice management; the appraisal of outcome against expectation; and the updating of value estimates to guide future decisions. According to the proposed model, decision making is mediated by neuropsychological processes operating within three domains: (a) self-referential processes involved in autobiographical reflection on past, and prospection about future, experiences; (b) executive functions, such as working memory, inhibition, and planning, that regulate the implementation of decisions; and (c) processes involved in value estimation and outcome appraisal and learning. These processes are underpinned by the interplay of multiple brain networks, especially medial and lateralized cortical components of the default mode network, dorsal corticostriatal circuits underpinning higher order cognitive and behavioral control, and ventral frontostriatal circuits, connecting to brain regions implicated in emotion processing, that control valuation and learning processes.

Findings and conclusion: Based on clinical insights and considering each of the decision-making stages in turn, we outline disorder-specific hypotheses about impaired decision making in four childhood disorders: attention-deficit/hyperactivity disorder (ADHD), conduct disorder (CD), depression, and anxiety. We hypothesize that decision making in ADHD is deficient (i.e. inefficient, insufficiently reflective, and inconsistent) and impulsive (biased toward immediate over delayed alternatives). In CD, it is reckless and insensitive to negative consequences. In depression, it is disengaged, perseverative, and pessimistic, while in anxiety, it is hesitant, risk-averse, and self-deprecating. A survey of current empirical indications related to these disorder-specific hypotheses highlights the limited and fragmentary nature of the evidence base and illustrates the need for a major research initiative in decision making in childhood disorders. The final section highlights a number of important additional general themes that need to be considered in future research.

Introduction

Success or failure in life is partly determined by the decisions one makes. In this review, we argue that ineffective decision making contributes to impaired functioning and reduced life satisfaction in children and adolescents with mental health conditions. These individuals' propensity to make poor decisions, while strikingly apparent to clinicians and other professionals, is underresearched. Little is known about the neuropsychological mechanisms that underpin decision-making deficits in child and adolescent mental disorders, and crucially, there has been no systematic attempt to understand how these processes and mechanisms might differ between disorders.
In this review, we adopt a neuroeconomic perspective on decision making to address these issues. This approach provides an alternative framework to traditional psychiatric models (Hasler, 2012; Kishida, King-Casas, & Montague, 2010) and potentially offers new insights into the ways in which complex behavioral processes are compromised in those with mental disorders. From our perspective, the decision-making process can be broken down into a number of stages. For instance, whether a teenage patient has the motivation to attend her school-mate's party depends on her evaluation of whether she will derive pleasure from doing so ('evaluation of the subjective utility of a future event', in neuroeconomic language). This is distinct from implementing the decision to act: how she will go about organizing herself and her environment so she is able to attend the party. These two processes are also distinct from her appraisal of the outcome: whether or not her experiences of the party were positive, neutral, or negative, and how that altered her views of herself and the value she attributes to such social encounters. The different stages of decision making will be associated with different expressions of psychopathology: a depressed teenager may find it hard to motivate herself to go to the party, whereas someone with ADHD may find it hard to generate and follow through a plan to get there. In contrast, a person with anxiety might attend the party but spend most of it scrutinizing their own actions and worrying about how they are perceived by others. It is important to note that such a neuroeconomic approach to decision-making rests on the core assumption that each action an individual performs is in some sense a choice, whether or not it is recognized as such by the individual (Sonuga-Barke & Fairchild, 2012). In the party example, even an unmotivated and disengaged or an anxious-avoidant response to the invitation to the party reflects a choice. Another assumption is that each individual, including those with mental disorders, sets out to maximize subjective value or utility through their actions, whether or not that goal is actually achieved. It is important to understand here that maximizing subjective utility does not necessarily imply maximizing the actual benefits available to a person. In fact, using the example above, a decision not to attend a party can be seen as rational from an anxious adolescent's perspective given the negative utility they attach to incurring social embarrassment, but when viewed more objectively, this decision may be damaging at a number of levels (i.e. reduced social interaction and ability to develop coping strategies, and exacerbation of anxiety due to avoidance). Leaving aside the issue of mental health-related differences in economic goals, there are also barriers to effective decision making that are associated with impairments at different stages of the decision-making process: even if an individual has the same goals, they may differ from other individuals in their ability to make and carry through decisions to achieve those goals. In this sense, effective choice depends on the individual's ability to compare the subjective utility that may be derived in the future from the different choice options available (Oppenheimer & Kelso, 2015). These options may differ in terms of their valence (gains or losses), timing (immediate or delayed outcomes), and risk/probability (likely or unlikely).
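To make this prechoice evaluation step concrete, the comparison of options that differ in magnitude, probability, and delay can be sketched as a simple computation. The following Python fragment is a minimal illustrative sketch, not a model taken from the literature reviewed here; the hyperbolic discount form V = p * m / (1 + k * d) and every parameter value (including the discount rate k) are assumptions chosen purely for illustration.

```python
# Illustrative sketch: assign each option a subjective utility that integrates
# magnitude, probability, and delay, then pick the option with the higher value.
# The functional form and all parameter values are assumptions for illustration.

def subjective_utility(magnitude: float, probability: float,
                       delay_days: float, k: float = 0.05) -> float:
    """Probability-weighted, hyperbolically delay-discounted value."""
    return probability * magnitude / (1.0 + k * delay_days)

# Option A: a small reward that is certain and immediate.
option_a = subjective_utility(magnitude=10, probability=1.0, delay_days=0)

# Option B: a larger reward that is risky and delayed.
option_b = subjective_utility(magnitude=100, probability=0.8, delay_days=30)

print(f"option A: {option_a:.1f}  option B: {option_b:.1f}")
print("chosen:", "A" if option_a > option_b else "B")
```

On these assumed parameters the larger, riskier, more delayed option still carries the higher subjective utility; steepening the discount rate k or lowering the probability reverses the choice, which is one simple way of thinking about the stable individual differences in preference discussed next.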
Furthermore, decision making is informed by both state- and trait-like characteristics. Instances of state-level differences are (a) intrinsic intraindividual variations in motivational states linked to physiological drives and energetic factors (e.g. hunger, thirst, need for sleep; de Ridder, Kroese, Adriaanse, and Evers 2014); and (b) extrinsic variations in elements such as the quality and availability of information about alternative actions and their consequences (Newell & Shanks, 2014) or external pressure (Byrne, Silasi-Mansat, & Worthy, 2015; Stringaris, 2015). Returning to the example given above, fatigue would be an intrinsic state factor, while knowledge about who might be at the party would be an extrinsic one. At the trait level, individuals vary from one another in (a) their hierarchy of tastes and preferences with regard to different choice outcomes (a factor related to variations in subjective value assignment; Plassmann, O'Doherty, & Rangel, 2010); and (b) the efficiency with which they can process choice-related information and implement their decisions. In the current review, the complexity and the multifaceted nature of putative decision-making deficits in mental disorders will be explored by contrasting aspects of the neuropsychology and pathophysiology of four mental disorders affecting children and adolescents: depression, anxiety, attention-deficit/hyperactivity disorder (ADHD), and conduct disorder (CD). Such transdiagnostic comparisons are timely given the growing emphasis on identifying core dimensions of pathophysiological impairment that are relevant across different clinical presentations. This perspective, although not new, is currently being promoted by the National Institute of Mental Health (NIMH) through their Research Domain Criteria (RDoC) initiative (Insel et al., 2010). The aim of RDoC is to shift the focus of research, and eventually clinical practice, away from existing diagnostic categories, as recently updated in the DSM-5 (American Psychiatric Association, 2013), toward 'new ways of classifying psychopathology based on dimensions of observable behavior and neurobiological measures.' The objective is to 'define basic dimensions of functioning … cutting across disorders as traditionally defined' (NIMH, http://www.nimh.nih.gov/research-priorities/rdoc/index.shtml; Cuthbert & Insel, 2013). The RDoC emphasis on transdiagnostic approaches represents a positive move to refocus a scientific field increasingly fragmented into diagnostic specialisms. However, there is considerable debate about the merits of this approach (Peterson, 2015). In light of this imperative, we are particularly interested to see whether potentially diverse patterns of decision-making impairment across mental disorders implicate similar neuropsychological and neurobiological systems. We chose to focus on ADHD, CD, anxiety, and depression because (a) each is relatively common in childhood and adolescence (Polanczyk, Salum, Sugaya, Caye, & Rohde, 2015); (b) they frequently co-occur (Kessler et al., 2005); and (c) they encompass a broad range of psychopathological dimensions, both internalizing and externalizing; in this latter sense, they provide a strong test of the value of transdiagnostic approaches. It is also important to note that clinical observation and laboratory-based experimental research combine to suggest that problems with decision making are present in each disorder. Clinically, core features of the four disorders implicate decision-making impairments.
Individuals with ADHD are often described as disinhibited and impulsive, choosing without sufficient reflection and favoring immediacy over delayed outcomes. CD is linked with risk-taking and failure to learn from negative consequences. In contrast, anxious individuals tend to be oversensitive to the risk of negative outcomes, while individuals with depression may be characterized as generally unmotivated and indecisive. It is important to note that despite the accepted complexity and heterogeneity of each of the disorders and the related existence of different types and subtypes within current diagnostic manuals, we feel that the purposes of our analysis are, at this stage, best served by adopting generic (e.g. depression and anxiety) rather than diagnostic-system-specific terms and considering the archetypal features of each condition in a general way. The remainder of the article will be divided up into three sections. In the first, we will introduce a unified model of economic decision making we have developed to help both organize the empirical evidence relating to the putative sources of impairment in different mental disorders and provide a framework for the development of hypotheses regarding the underlying neuropsychological and neurobiological mechanisms. In the second part, we will apply the model to the four disorders: first setting up differential hypotheses about the role of different stages in decision-making, neuropsychological processes, and neural systems in each disorder, and then selectively surveying the extant empirical evidence in light of these hypotheses. We conclude by identifying issues that merit further investigation.

An integrated neuroeconomic model of decision making

Figure 1 illustrates the core features of our integrated decision-making model and highlights the complex interplay of its underpinning neuropsychological systems (see also Kalueff, Stewart, Song, and Gottesman, 2015). The model has several general characteristics.

Neuropsychological decision-making stages

The decision-making process itself is divided into three distinct stages: (a) evaluation, (b) decision and management, and (c) appraisal and accommodation. Evaluation. This stage involves processes whereby subjective utility estimates are generated for each potential outcome. This involves the integration of information related to parameters such as valence (win or lose), magnitude (large or small), timing (now or later), and probability (likely or unlikely) for each possible option. This provides a subjective estimate of the cost/benefit and timing of each outcome. We assume that this is influenced by both the implicit value system of an individual and explicit thought processes. The implicit value system, which we refer to as the utility matrix, involves personal preference about the content of outcomes (if an individual prefers apples to oranges, apples will be assigned a higher utility than oranges in the matrix) but also about their timing (whether a person dislikes risk or delay). The utility matrix is not considered a fixed element, but is automatically updated in the light of the evaluation of the consequences of decisions. The explicit value system involves self-referential autobiographical processes allowing reflection on the experience of prior choices and envisaging potential outcomes.
Although the specific weighting of the influences of implicit and explicit processes is not specified in the model, it is assumed that abnormalities in either set of processes could disrupt outcome appraisal or influence evaluation processes in those with mental disorders. Decision and management. This stage involves comparisons of the subjective utility estimates of the available alternatives, the choice of one option over the other, and the implementation of a plan to ensure that the chosen option is enacted effectively. We assume higher order self-regulatory functions of executive control to be especially important during this phase. In particular, choice between options will involve working memory and inhibition, while goal attainment will involve effective planning, inhibition, and self-organization. Appraisal and accommodation. This stage is underpinned by reinforcement learning processes whereby a prediction error is generated through a comparison of expected and derived utility, which then feeds back to both the subjective expected utility matrix and autobiographical memory. The evaluation of the discrepancy between predicted versus derived utility is influenced by the current utility matrix, the self-regulation processes required to hold intertemporal information in mind, and autobiographical memory concerning prior choices.

These stages are proposed to be underpinned by three interacting brain networks: (a) the network implicated in self-referential cognition, the so-called default mode network; (b) the network involved in top-down executive control, including lateralized regions of the prefrontal and parietal cortex and the dorsal striatum; and (c) the cortical-subcortical circuits implicated in reinforcement learning, encoding of value, and emotion processing, including ventral regions of the prefrontal cortex (such as the orbitofrontal cortex and the anterior cingulate cortex), ventral striatum, insula, and amygdala. These networks interact with each other, providing an additional level of complexity. In the next section, we examine the role of the three core neuropsychological domains implicated in the model.

Self-referential processes. Effective decision making is facilitated if one can disengage from the immediate environment, stand back unencumbered by the influence of imminent and tangible incentives, and consider priorities across an extended timeframe, integrating one's current personal values and past experiences into a coherent picture while envisioning future possibilities. In recent years, there has been a renewed interest in task-independent self-referential cognition of this sort (Smallwood & Schooler, 2015) and the ways in which such cognitive processes may be disturbed in mental disorders (Andrews-Hanna, Smallwood, & Spreng, 2014).

Figure 1. A schematic representation of an integrated neuroeconomic model highlighting the complex interplay between multiple psychological systems and neural circuits in the control of the decision-making process. The decision-making process itself is divided into three distinct stages: evaluation, where an estimate of the subjective utility of available choice options is made, taking into account memory and learning from prior experience as well as prospection about future value, mediated by implicit reinforcement learning mechanisms (encoded in a utility matrix) and explicit self-referential processes (autobiographical memory); decision and management, during which the subjective utility assigned to competing alternatives is compared in terms of overall costs and benefits and a decision plan is implemented, processes heavily influenced by higher order executive functions; and appraisal and accommodation, in which the utility actually derived from the decision is estimated (again on the basis of explicit and implicit value systems) and compared with prior expectations to generate a prediction error signal, which drives learning and updates implicit and explicit value estimates for particular experiences and choices, as represented by the feedback loops in the figure. The model proposes that these decision-making stages are primarily controlled by three distinct brain systems: the default mode network (MPFC, medial prefrontal cortex; PCC, posterior cingulate cortex; LPC, lateral parietal cortex; MTG, medial temporal gyrus), primarily implicated in self-referential cognition but also in some aspects of self-regulation; the executive control system (DLPFC, dorsolateral prefrontal cortex; ACC, anterior cingulate cortex; PAR, parietal cortex), which mediates top-down control during self-regulation and planning; and reinforcement evaluation and learning circuits (OFC, orbitofrontal cortex; AMYG, amygdala; ACC, anterior cingulate cortex; REINF, reinforcement).

The role of such processes in decision making has recently been discussed (Sonuga-Barke & Fairchild, 2012). A crucial feature is the idea that individuals construct a coherent autobiographical script about the personal meaning and subjective utility of past choice outcomes and future choice options based on a well-integrated concept of themselves as effective economic agents (D'Argembeau et al., 2014). This arises out of the ability to reflect on past experiences to provide a basis for future choices (Addis, Wong, & Schacter, 2007). The final self-referential element in economic decision-making involves self-projection to compare future outcome scenarios (Lin & Epstein, 2014) to estimate the subjective value of each choice. In this way, self-referential processes play a positive role in decision making. However, there is also a potential downside. This arises from the fact that when not properly regulated (for instance, occurring in the wrong setting or at the wrong time, or when an individual engages in excessive negative rumination), self-referential processes can lead to mind-wandering or daydreaming which disrupts performance (Christoff, Gordon, Smallwood, Smith, & Schooler, 2009) because of lapses of attention (Sonuga-Barke & Castellanos, 2007). The neural substrates of self-referential cognition appear to be located along a cortical midline axis with two major hubs: medial prefrontal cortex (extending ventrally to include dorsal anterior cingulate cortex) and medial parietal cortex, in particular the posterior cingulate cortex and precuneus (Snyder & Raichle, 2012). These regions, together with circuits involving more lateral elements (i.e. temporal-parietal junction and medial temporal gyrus), form an interconnected set of regions known as the default mode network.
Because of its size, location, and extensive range of connections to cortical and subcortical structures, the cortical midline axis operates as a coordinating hub, bringing together internally generated thoughts and externally available information about oneself and others into a coherent narrative that brings past experiences to bear on future actions (Moran, Kelley, & Heatherton, 2013). The regions of the default mode network interact with brain regions responsible for cognitive control (Smallwood, Brown, Baird, & Schooler, 2012), reinforcement processing (Cauda et al., 2011), memory (James, Tripathi, Ojemann, Gross, & Drane, 2013), and emotion processing and regulation (Chase, Moses-Kolko, Zevallos, Wisner, & Phillips, 2014). Functional magnetic resonance imaging (fMRI) studies have shown that default mode nodes form a functional network at rest with temporal coherence between constituent regions which partly overlaps with patterns of white-matter connectivity (Greicius, Supekar, Menon, & Dougherty, 2009; Vertes & Bullmore, 2015). The activity of the default mode network is attenuated following transitions to goal-directed tasks requiring effortful, focused attention and cognitive engagement (Snyder & Raichle, 2012). The centrality of the default mode network in economic decision making is supported by findings from a recent meta-analysis of brain activations during value computation (Clithero & Rangel, 2014). More specifically, with regard to economic decision-making, fMRI studies have implicated a medial frontotemporal axis as a putative neural mechanism underpinning the bridge between self-referential retrospection and prospection (Buckner, Andrews-Hanna, & Schacter, 2008), with medial prefrontal cortex and medial temporal gyrus interacting to facilitate prospection (Lavallee & Persinger, 2010; Spreng & Grady, 2010). This is consistent with data showing that the latter is involved in autobiographical memory retrieval, which provides the foundation for internal mentation (Lavallee & Persinger, 2010; Tulving, 2002), while medial prefrontal cortex is implicated in self-related future simulations central to the consideration of future choice outcomes (Kim, 2012) and complex perspective-taking processes (Van Hoeck et al., 2013). Medial prefrontal-posterior cingulate cortex circuits regulate self-initiated goal formation and planning in conjunction with dorsal attention networks (Spreng, Stevens, Chamberlain, Gilmore, & Schacter, 2010). Furthermore, these brain regions appear to play a role, in concert with the orbitofrontal cortex, in sustaining long-term goal states. In addition, the frontopolar cortex plays a complementary role in protecting the execution of long-term economic plans to allow implementation of decisions (Koechlin & Hyafil, 2007). Evidence for disruptions of task-independent and self-referential thought and the associated default mode network in mental disorders has been available for some time (Broyd et al., 2009). Two general models have been proposed. First, mental disorders often impair self-referential processes (reducing the ability to form autobiographical memories or envision the future), on the one hand, or distort the content of such processes on the other (leading to a focus on negative experiences or personal failures; Andrews-Hanna et al., 2014). These may, in turn, impair particular stages of the decision-making process, disrupt the transition between decision stages, or introduce pathological biases into others.
Altered self-referential processes are reflected in patterns of altered connectivity during rest and introspection either within or between default mode hubs and other systems involved in cognitive control or emotion processing. Second, there is evidence that some individuals with mental disorders show impaired modulation of default mode activity during task performance (Sonuga-Barke & Castellanos, 2007). This allows intrusive self-referential thoughts (e.g. mind wandering) to undermine task performance. Such intrusions could have multiple origins, and these are likely to differ between disorders. Intrusions of self-referential thought during task performance may also be content-driven; for example, they could be associated with a compulsion to dwell on past events or worry about the future (Servaas, Riese, Ormel, & Aleman, 2014), in both cases leading to excessive default mode activity.

Executive control. Executive function is an umbrella term that refers to a heterogeneous grouping of top-down processes that allow individuals to regulate their thoughts and behavior to successfully engage in purposeful, goal-directed, and future-oriented actions (Suchy, 2009). There is general consensus among researchers that EFs comprise three core types: inhibition, working memory, and cognitive flexibility (Lehto & Elorinne, 2003; Miyake et al., 2000). Inhibition encompasses the suppression of prepotent responses (i.e. response inhibition) and control of interference from extraneous stimuli and distracting internal mental representations (Diamond, 2013). Working memory has multiple interrelated components. In the most well-regarded model, visuospatial and verbal working memory work together with a central executive to allow the simultaneous holding in mind and manipulation of multiple units of information (Baddeley & Hitch, 1974). Finally, cognitive flexibility is the process involved in the changing of perspective and response sets, adjusting how one thinks about something, and being sufficiently flexible to change in response to demands and/or priorities (Diamond, 2013). Additional higher order executive functions, such as reasoning, problem solving, and planning, are conceptualized as processes that build on the aforementioned more basic processes (Collins & Koechlin, 2012; Lunt et al., 2012). Despite continued debate, it is acknowledged that these different executive processes represent separable but moderately correlated constructs (Miyake et al., 2000). From the perspective of our model, executive functions are related to decision making in several ways. First, in general terms, they provide the basis for deliberative processes and development of decision plans (Bickel, Pitcock, Yi, & Angtuaco, 2009). Second, cognitive flexibility allows one to consider alternative options simultaneously (Diamond, 2013). Third, inhibitory control provides the time to reflect effectively on choice alternatives, while working memory allows multiple units of information to be assimilated and compared (Amso, Haas, McShane, & Badre, 2014). Fourth, executive control is also required for planning the implementation of the selected option (choice), together with the prospective information derived from task-independent processes (Suchy, 2009). Additionally, executive functions, in particular inhibitory control, are involved in resisting interference from competing options during the implementation of plans (Sylvester et al., 2003).
Executive functions are most engaged when the novelty or complexity of a situation makes it impossible to rely only on automatic responses or when control is required in the face of motivationally salient events (so-called 'hot' executive function settings; Kerr and Zelazo, 2004). This latter situation is especially common in the context of economic decision making (Krain, Wilson, Arbuckle, Castellanos, & Milham, 2006). Executive functions are also implicated in timing, which may be important for decision-making processes, especially perceptual timing and temporal foresight (Noreika, Falter, & Rubia, 2013). Inhibitory control and working memory are pivotal in tasks involving duration discrimination, time reproduction (Noreika et al., 2013), and temporal foresight (Lin & Epstein, 2014). Traditionally, different executive functions have been localized to specific divisions of the prefrontal cortex: dorsolateral prefrontal cortex for working memory (Fuster, 1999); medial prefrontal cortex (including the anterior cingulate) for flexibility (Bush et al., 1999); ventral prefrontal cortex (including orbitofrontal and ventromedial) for inhibition (Tremblay & Schultz, 2000); and frontal pole (including anterior portions of the dorsolateral prefrontal cortex) for the higher order integration of executive functions (Koechlin, Basso, Pietrini, Panzer, & Grafman, 1999). However, it is now clear that the neural substrate of executive functions is better understood in terms of brain networks implicating basal ganglia, thalamus, cerebellum, and cortical regions outside the prefrontal cortex. For instance, working memory is controlled by a frontoparietal network (Darki & Klingberg, 2015), with the response inhibition network incorporating projections to the basal ganglia (in particular the caudate) and the thalamus (Suchy, 2009). In addition, both the right and left dorsolateral prefrontal areas and the superior medial frontal lobe have been implicated in tasks that involve cognitive switching (Jurado & Rosselli, 2007). Shifting processes also activate the parietal lobe and left middle and inferior prefrontal gyri (Jurado & Rosselli, 2007). A meta-analysis (Houde, Rossi, Lubin, & Joliot, 2010) of task-based fMRI studies exploring executive functions in youth found that bilateral prefrontal areas, including the dorsolateral prefrontal and inferior prefrontal cortices, extending to the insular cortex, as well as related posterior parietal and occipital areas, were consistently activated in youth, mirroring findings in adults (Niendam et al., 2012). Executive functions are essential for cognitive, social, and psychological development (Diamond, 2013), as well as positive academic (Borella, Carretti, & Pelegrina, 2010) and professional outcomes (Bailey, 2007). Given this, it comes as no surprise that a growing body of evidence points to executive function impairment in mental disorders, including, among others, substance use disorders (Baler & Volkow, 2006), schizophrenia (Barch, 2005), and obsessive compulsive disorder (Penades et al., 2007). In general terms, there are a number of models of executive function deficits in psychopathology. First, there are models that postulate core executive function deficits, with disruptions in top-down control contributing to the main deficits that characterize the disorder (Barkley, 1997; Oosterlaan, Logan, & Sergeant, 1998).
Second, there are models that implicate altered energetic processes as a factor undermining the supply of effortful cognitive control (Sergeant, 2000), concluding that the degree of inhibitory control is dependent on the individual's state and the allocation of energy to the tasks. Third, models propose specific deficits in the regulation of emotionally and motivationally charged responses (Moon & Jeong, 2015). Finally, there are models that argue that executive function deficits arise out of other aspects of a disorder; for instance, in Eysenck et al.'s attentional control theory (Eysenck, Derakshan, Santos, & Calvo, 2007), the worry and rumination that characterize affective disorders limit the resources available for effective executive control.

Reinforcement processes: valuation and learning. A choice between two or more options is predicated on the ability to make a judgment about the likely subjective utility to be derived from the respective options. Such processes are complex and likely to involve, on the one hand, conscious and explicit autobiographical memory processes, and on the other hand, implicit reinforcement-related processes. With regard to the latter, it is now clear that specific neural circuits are implicated in (a) the prospective valuation of possible events, (b) the appraisal of outcomes, and (c) the updating of what we have termed the utility matrix that informs these through the process of reinforcement learning. Through this neurobiologically based set of processes, individuals are able to adapt their behavior in complex environments, learn from experience, and predict the likely consequences of their actions (Dayan & Niv, 2008; Stringaris, 2015). At the core of these reinforcement processes is a mechanism through which the discrepancy between the subjective utility actually derived from a choice and that initially predicted is computed. This discrepancy is encoded by a prediction error signal, which is positive if the outcome is better and negative if the outcome is worse than expected (Schultz, Dayan, & Montague, 1997; Stringaris, 2015), leading to the updating of the individual's estimation of the value of choices (Rushworth, Noonan, Boorman, Walton, & Behrens, 2011; Stringaris, 2015). These processes can be modeled formally using reinforcement learning algorithms, such as the temporal difference learning rule (Sutton & Barto, 1998), and the resulting models can be fitted to fMRI data to investigate neural signals related to expected value or prediction error computations in humans (O'Doherty, Dayan, Friston, Critchley, & Dolan, 2003). The ability to generate value signals for different options is fundamental to the evaluation phase of decision making. This process involves converting all the anticipated benefits and costs of different choice alternatives into a common 'currency' so that they can be compared with each other (Chib, Rangel, Shimojo, & O'Doherty, 2009; Padoa-Schioppa, 2011; Stringaris, 2015). The decision maker also needs to be able to integrate across a range of different parameters (e.g. potential gains or losses, probability of outcomes, and delay until outcome receipt) when selecting between options. In addition, reinforcement processes are involved in the evaluation stage of decision making in at least two ways. First, they mediate the hedonic experience associated with rewarding outcomes, that is, they provide a neural substrate for the subjective experience of reward.
Second, they enable the individual to learn from experience and update the expected value representations that influence future decisions (as described above). A number of brain regions and networks have been implicated in valuation and learning. There is extensive evidence that the ventromedial prefrontal cortex, orbitofrontal cortex, and ventral striatum are involved in coding the value of options, particularly when a choice is required (Kim, Shimojo, & O'Doherty, 2006; Lebreton, Jorge, Michel, Thirion, & Pessiglione, 2009; Plassmann et al., 2010). Interestingly, such value signals appear to be modulated by parameters which influence choice behavior, such as valence, probability/risk, delay, and the individual's motivational state (e.g. Gottfried, O'Doherty, and Dolan 2003). The ventromedial prefrontal/orbitofrontal cortex is also activated during the receipt of rewarding outcomes, regardless of modality and type of reward (i.e. it is responsive to primary and secondary reinforcement; Kim et al., 2006; Liu, Hairston, Schrier, and Fan 2011). Prediction error signals appear to be encoded by dopaminergic neurons in the striatum (and particularly ventral striatum), the ventral tegmental area, amygdala, and orbitofrontal cortex (D'Ardenne, McClure, Nystrom, & Cohen, 2008; Niv, Edlund, Dayan, & O'Doherty, 2012; O'Doherty et al., 2003). Negative outcomes (both anticipated and received) are processed by specific brain regions such as the amygdala and insula that are known to play a broader role in emotion processing and regulation (Barrett, Mesquita, Ochsner, & Gross, 2007; Ochsner & Gross, 2005). The amygdala, in particular, is heavily connected to other key elements of the reinforcement system (Kim et al., 2011). Neuroimaging studies investigating the processing of rewards or negative outcomes have demonstrated that the brain is sensitive to the valence of outcomes, with ventromedial prefrontal/orbitofrontal cortex activation increasing when rewards are received and decreasing in response to loss outcomes (Kim et al., 2006; Tom, Fox, Trepel, & Poldrack, 2007). A meta-analysis of fMRI studies of decision making found that rewarding outcomes (encompassing monetary rewards) activated the striatum, anterior insula, medial orbitofrontal, and rostral anterior cingulate cortex (Liu et al., 2011). Negative outcomes also triggered striatal and anterior insular activity, but additional activations were observed in lateral orbitofrontal cortex, inferior frontal gyrus, dorsal anterior cingulate cortex, and amygdala (Liu et al., 2011). This meta-analysis suggests that brain regions involved in processing positively and negatively valenced information are highly overlapping, but the direction of the change in activity may vary within the same regions (e.g. medial orbitofrontal cortex or ventral striatum) according to valence (Kim et al., 2006). Interestingly, the direct contrast of rewarding versus negative outcomes revealed multiple regions (e.g. medial orbitofrontal cortex and ventral striatum) that were more sensitive to the former than the latter, whereas only lateral orbitofrontal cortex and caudal regions of the anterior cingulate were more sensitive to negative outcomes. Functional MRI studies attempting to disaggregate value and risk processing in healthy adults have demonstrated that these parameters are encoded in partially distinct brain networks.
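The prediction-error mechanism described above can be made concrete in a few lines of code. The following Python fragment is a minimal sketch in the spirit of the temporal-difference rule cited earlier (Sutton & Barto, 1998), reduced to its simplest single-option, Rescorla-Wagner-like form; the learning rate, payoff distribution, and trial count are illustrative assumptions, not values drawn from any study reviewed here.

```python
import random

# Illustrative assumptions: a single choice option with a noisy payoff.
ALPHA = 0.1             # learning rate: how strongly each prediction error updates the estimate
TRUE_MEAN_PAYOFF = 1.0  # average utility the option actually delivers (unknown to the learner)

value_estimate = 0.0    # the learner's current "utility matrix" entry for this option

for trial in range(200):
    outcome = random.gauss(TRUE_MEAN_PAYOFF, 0.5)  # derived utility on this trial
    prediction_error = outcome - value_estimate    # positive if better than expected, negative if worse
    value_estimate += ALPHA * prediction_error     # accommodation: expectation shifts toward experience

print(f"learned value estimate: {value_estimate:.2f}")
```

Over repeated choices the estimate converges toward the option's true average payoff, mirroring the updating of expected value representations described above; in the model-based fMRI work cited earlier, trial-by-trial prediction error terms of exactly this kind are regressed against neural signals.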
While value signals are primarily encoded in orbitofrontal cortex and ventral striatum as noted above, risk or outcome uncertainty is correlated with lateral orbitofrontal cortex and dorsal anterior cingulate activity (Christopoulos, Tobler, Bossaerts, Dolan, & Schultz, 2009; Tobler, O'Doherty, Dolan, & Schultz, 2007). In addition, a region in medial frontal cortex appears to integrate these signals according to the individual's risk attitudes (Tobler et al., 2007). Such findings implicating the lateral orbitofrontal cortex in risk processing are consistent with neuropsychological studies showing heightened risk-taking following orbitofrontal cortex lesions (Hsu, Bhatt, Adolphs, Tranel, & Camerer, 2005; Sanfey, Hastie, Colvin, & Grafman, 2003) or at least lesions that disrupt adjacent fibers (Rudebeck, Saunders, Prescott, Chau, & Murray, 2013). Finally, a number of studies have investigated neural activity when the individual is selecting between immediate and delayed rewards (Kable & Glimcher, 2007; McClure, Laibson, Loewenstein, & Cohen, 2004). Using an intertemporal choice task, Kable and Glimcher (2007) showed that ventral striatal, medial orbitofrontal cortex, and posterior cingulate cortex activation were inversely related to the length of the delay before reward delivery. Consequently, these regions appear to play an important role in temporal discounting of delayed rewards. However, it should be noted that there are substantial individual differences in rates of temporal discounting, even among healthy adults and children (Olson et al., 2009), and such differences appear to map onto neural activity (e.g. individuals showing the shallowest discounting functions in their choice behavior also exhibited the weakest effects of delay imposition on neural activity; Kable & Glimcher, 2007). In related work, McClure et al. (2004) observed increased medial orbitofrontal cortex, ventral striatum, and posterior cingulate cortex activity when subjects selected immediate monetary rewards, whereas lateral orbitofrontal cortex and dorsolateral prefrontal cortex were activated during intertemporal choice regardless of delay. Similar results were obtained using primary reinforcement (i.e. immediate or delayed juice delivery; McClure, Ericson, Laibson, Loewenstein, & Cohen, 2007). These results were interpreted as evidence that distinct neural systems (impulsive/automatic vs. deliberative) were in competition during intertemporal choice. Learning and evaluation processes have been implicated in mental disorders in a number of ways (Luman, Tripp, & Scheres, 2010; Sonuga-Barke, 2011). First, individuals with mental disorders may show a general insensitivity to reinforcement (both positive and negative), which influences both the encoding of cues and processing of outcomes (e.g. depression; Pizzagalli, 2014). Second, certain forms of psychopathology may be underpinned by deficits in reward or punishment learning, due to impairments in generating stimulus-response-outcome representations (e.g. schizophrenia; Waltz, Frank, Robinson, & Gold, 2007). This could be due to insensitivity to either positive or negative feedback and reduced ability to adjust behavior according to environmental contingencies. Third, individuals with mental disorders may display a specific insensitivity to either rewarding or punishing outcomes. Fourth, cue and outcome processing could be essentially intact, but the process of comparing different outcomes may be disrupted (i.e.
affecting the decision/implementation phase of decision making). Fifth, individuals with mental disorders might display normal sensitivity to external reinforcement but deficits in intrinsic reinforcement or vice versa. Related to this concept, individuals with psychopathology may show domain-specific impairments in reinforcement, for example, reduced sensitivity to social reinforcement (praise) but normal sensitivity to monetary reinforcement (Demurie, Roeyers, Baeyens, & Sonuga-Barke, 2011; Scott-Van Zeeland, Dapretto, Ghahremani, Poldrack, & Bookheimer, 2010). The final set of models suggests that the preference structures that guide the evaluation process are altered in individuals with psychopathology, perhaps due to early adversity or living in unpredictable environments (e.g. conduct disorder; Sonuga-Barke, 2014). According to this view, such individuals are capable of evaluating options, implementing decisions, and appraising outcomes, but the weighting of different parameters, such as risk or delay, in the evaluation process is altered in a relatively stable manner. Disorder-specific hypotheses In this section, we first present individual hypotheses regarding the different behavioral expressions, and associated neurobiological and neuropsychological processes, of impaired decision making in ADHD, CD, anxiety, and depression based on our neuroeconomic model (Figure 1). We acknowledge that additional systems (e.g. autonomic nervous system) are almost certainly affected in these disorders and implicated in decision making, but due to space limitations, these systems are only considered briefly here. Our primary aim is to explore potential differences between disorders in terms of decision making and motivational styles. Our hypotheses, therefore, emphasize differences rather than similarities between disorders, a point that is particularly relevant when contrasting decision making in the highly overlapping conditions of anxiety and depression. Finally, although a systematic review of evidence is beyond the scope of this article, we briefly consider indicative evidence, focusing on data from children and adolescents where available, although it must be noted that we have frequently had to rely on adult data, with all of the caveats this implies. Figure 2 provides a summary of the hypotheses for the four disorders as these relate to different decision-making stages and neurocognitive systems. Attention-deficit/hyperactivity disorder Background Attention-deficit/hyperactivity disorder is a prevalent, debilitating life-span condition marked by developmentally inappropriate levels of hyperactivity, impulsivity, and/or inattention (Faraone et al., 2015). Clinically, it is highly comorbid with both externalizing (e.g. CD) and internalizing (e.g. depression) problems. Pathophysiologically, it is heterogeneous, with different ADHD individuals displaying marked variation in the profile of contributing factors and deficits across multiple brain networks (Sonuga-Barke, Bitsakou, & Thompson, 2010). Multimodal treatment is supported empirically, with medication targeting the core and associated symptoms (Banaschewski et al., 2006) and behavioral approaches used to treat co-occurring problems, such as antisocial behaviors and social skills deficits (Daley et al., 2014).
Core hypotheses regarding impaired decision making in ADHD Alterations in multiple brain systems interact to disrupt self-referential, executive, and reinforcement processes that act across processing stages to produce decision making that is both deficient (i.e. inefficient, insufficiently reflective, and inconsistent) and impulsive (biased toward immediate over delayed alternatives). Evaluation. (a) Disturbed prospection of future events and states due to disrupted connectivity between core midline and lateralized nodes of the default mode network combines with deficient reinforcement signaling within ventral frontostriatal circuits to impair the ability to estimate the subjective utility of choice alternatives. (b) A bias toward choosing immediate over delayed rewards arises from a combination of reduced signaling of future reinforcement in the ventral striatum and heightened aversion to delay linked to amygdala hyperactivation. Decision and management. (a) A generalized deficit in top-down executive control mediated by disruptions in frontostriatal and frontoparietal circuits reduces the speed and efficiency of decision making; these effects are compounded by spontaneous lapses in attention linked to interference from default mode-related task-independent thoughts. (b) Impairments in the ability to generate and implement plans consistently without changing or reversing preferences are linked to dysconnectivity between default and executive networks (medial prefrontal and dorsolateral prefrontal cortex) and failures to resist the lure of competing choice alternatives and distracting influences due to executive dysfunction. Appraisal and accommodation. The ability to compare predicted and derived utility (i.e. prediction error signal) and thus learn from experience is degraded by disruptions in anterior cingulate cortex-orbitofrontal cortex connectivity. Empirical indications Dysregulation of self-referential processes. To date, no direct study of self-referential processes in decision making in ADHD has been undertaken. However, evidence linking ADHD to dysfunction within the default mode network, hypothesized to subserve effective self-referential cognition, has accumulated in recent years. Resting state fMRI studies have demonstrated disrupted connectivity between midline hubs of the default mode network in ADHD. A recent study found a general developmental lag in intrinsic default mode network structure in ADHD and disrupted connections to executive and attentional networks (Sripada, Kessler, & Angstadt, 2014). Other studies found that ADHD individuals have difficulties in regulating default mode activity appropriately to respond to external demands, leading to excessive activity within this network during task performance (Helps et al., 2010; Liddle et al., 2011). In terms of evidence relating to the dysregulation of putative self-referential processes underpinned by default mode dysfunction in ADHD, there is both direct and circumstantial evidence.
A distinction can be drawn between the deliberate, well-regulated, and functional self-referential thought required for effective prospection and planning and the sort of spontaneous, dysfunctional, and uncontrolled mind-wandering associated with disorganized introspection and lapses of attention during external tasks. Direct evidence linking this form of maladaptive mind-wandering and attention in ADHD is not yet available, although mind-wandering has been directly linked to attentional lapses (Smallwood, McSpadden, Luus, & Schooler, 2008), and low-frequency signatures in reaction time data, characteristic of such lapses, have been observed repeatedly in ADHD (Karalunas, Geurts, Konrad, Bender, & Nigg, 2014). More circumstantially, deficits in autobiographical and prospective memory have been observed. Fuermaier et al. (2013) demonstrated ADHD-related deficits in both self-rated and objectively assessed prospective memory. Furthermore, the same group narrowed this effect down specifically to deficits in long-term planning rather than the recall of planned action, plan integrity, or self-initiation. Support for difficulties in long-term planning functions also comes from a number of sources, including in relation to goal setting (Nyman et al., 2010), 'if-then' plans (Gawrilow, Merkt, Goossens-Merkt, Bodenburg, & Wendt, 2011), and planning scripts (Desjardins, Scherzer, Braun, Godbout, & Poissant, 2010). Prospective memory, especially time-based, as opposed to event-based, memory also appears to be impaired in ADHD (Talbot & Kerns, 2014). Fabio and Capri (2015) found that deficits in autobiographical memory inhibited the ability of ADHD children to access personal events in the past, while Klein, Gangi, and Lax (2011) demonstrated an association between ADHD and disorganized personal narratives when asked to access episodic self-referential terms. Scholtens, Rydell, and Yang-Wallentin (2013) found reduced future orientation regarding academic matters in adolescent ADHD. Impaired executive control. There is now considerable evidence relating to executive functions in ADHD. Meta-analyses provide evidence of ADHD-related deficits in inhibition (Lipszyc & Schachar, 2010), interference control (Lansbergen, Kenemans, & van Engeland, 2007), and working memory (Alderson, Kasper, Hudec, & Patros, 2013), which are likely to impact on decision making.
Deficits in higher order executive processes such as attentional flexibility and short-term planning are also apparent (Willcutt, Doyle, Nigg, Faraone, & Pennington, 2005). This evidence from neuropsychological tests converges with neuroimaging evidence, which highlights ADHD-related alterations in lateralized frontoparietal and frontostriatal structure (Pironti et al., 2014) and function (Cortese et al., 2012). The relationship between executive control and decision making in ADHD has been examined in two ways. First, there is a large and growing literature on decision making about rewards under conditions of uncertainty and risk, so-called 'hot' executive settings. Results to date are mixed and open to interpretation (Groen, Gaastra, Lewis-Evans, & Tucha, 2013). Around 50% of these studies found that ADHD individuals make poorer and riskier decisions than controls. However, because of the complex and cognitively demanding nature of the tasks and their reliance on multiple processes, these positive results are challenging to map onto specific cognitive processes (Brand, Franke-Sievert, Jacoby, Markowitsch, & Tuschen-Caffier, 2007). Probably most informative is the Cambridge Gambling Task, which allows different decision-making elements to be disentangled (Manes et al., 2002). The studies using this task found that ADHD individuals make suboptimal choices, but this relates to problems in processing and adjusting information about risk or to delay aversion (i.e. choosing the earliest available option; see below) rather than risk proneness per se (Coghill, Seth, & Matthews, 2014; DeVito et al., 2008). Second, a number of studies have examined the links between decision making and executive functions in ADHD by including paradigms measuring both processes. For instance, Duarte, Woods, Rooney, Atkinson, and Grant (2012) found that suboptimal decision making on the Iowa Gambling Task was related to working memory deficits in ADHD. Drechsler, Rizzo, and Steinhausen (2008) found that poor decision making on a gambling task was related to inhibitory control deficits, but not working memory problems, in ADHD. However, no studies have specifically explored the neural basis of the cognitive impairments leading to poor decision making in ADHD. Impaired reinforcement processes. Imaging studies have implicated structural alterations in the neural circuits and regions that mediate reinforcement-related processes, including the orbitofrontal cortex (Hesslinger et al., 2002) and the ventral striatum (Carmona et al., 2009). In terms of difficulties with processing signals of future reinforcement, a recent meta-analysis confirmed ventral striatal hyporesponsiveness in individuals with ADHD (Plichta & Scheres, 2014), which appears independent from dysfunction in executive control networks (Carmona et al., 2012). Wilbertz et al. (2012) also found reduced orbitofrontal cortex sensitivity to reward magnitude changes in adult ADHD. There is evidence of functional hyperconnectivity between core hubs of the reward circuit (Tomasi & Volkow, 2012). Evidence from behavioral tasks gives a rather mixed picture (Luman, Sergeant, Knol, & Oosterlaan, 2010), with some studies suggesting oversensitivity to rewards (Fosco, Hawk, Rosch, & Bubnik, 2015) and others showing hyposensitivity (van Meel, Heslenfeld, Oosterlaan, Luman, & Sergeant, 2011) related to impaired reward-related prediction error signals (Thoma, Edel, Suchan, & Bellebaum, 2015).
In line with our predictions, the most consistent finding relates to an accentuated sensitivity to delay prior to the delivery of reinforcement, which holds regardless of the paradigm used (Yu, Sonuga-Barke, & Liu, 2015) and seems to reflect a combination of a drive toward immediate reinforcement (Marco et al., 2009), heightened discounting of delayed reinforcement (Scheres, Tontsch, & Thoeny, 2013), and aversion to delay, a desire to escape the negative affect induced by delay (Lemiere et al., 2012). The limited neuroimaging evidence available is consistent with this picture, with increased discounting in ADHD associated with atypical connectivity between the ventral striatum and executive control regions (Costa Dias et al., 2013), while cues of impending delay lead to enhanced activation of limbic regions known to encode aversive stimuli (i.e. amygdala and anterior insula) in ADHD individuals (Wilbertz et al., 2013). Surprisingly, few studies have investigated reinforcement learning per se in ADHD, given the centrality of this process to two highly influential models of ADHD (Sagvolden, Johansen, Aase, & Russell, 2005; Tripp & Wickens, 2008). Two recent studies are particularly relevant here. Luman, Goos, and Oosterlaan (2015) found that children with ADHD learned at the same rate as controls on an instrumental learning task. In contrast, Hauser et al. (2014) observed reduced reward prediction error signals in medial prefrontal cortex in adolescents with ADHD during reversal learning task performance, supporting the hypothesis that reinforcement learning is disrupted in ADHD. Summary of ADHD-related research priorities Perhaps in contrast to other disorders reviewed here, there is already a substantial and growing body of evidence either directly examining key aspects of decision making in ADHD or at least exploring systems and processes hypothesized to be involved in decision making. Much of this work was done in children and adolescents. However, the extant literature lacks integration and the field remains fragmented. Furthermore, key questions remain unaddressed. We feel that three key research priorities are (a) to study the way that explicit self-referential cognitive and implicit reinforcement-related processes interact during the evaluation stage of decision making and whether this contributes to the way value is assigned by individuals with ADHD; (b) to explicitly examine the role played by prospection and its neural substrates in decision making about future rewards in ADHD; and (c) to better understand how aberrant reinforcement processing in ADHD influences learning and how this in turn feeds back to affect stimulus evaluation. Core hypotheses regarding impaired decision making in CD Disturbances in reinforcement mechanisms, and related brain circuits, impact evaluation and appraisal/accommodation stages of decision making with specific effects on the processing of negative stimuli, producing reckless choices and insensitivity to negative consequences. Evaluation. (a) Altered structure and function within, and disrupted connectivity between, amygdala/insula and orbitofrontal cortex generally impair the subjective estimation of negative future events. This reduces the impact of signals of future punishment, risk/uncertainty, and delay on decision making, which is especially pronounced for options combining multiple negative elements (e.g. delayed negative outcomes/distal punishments). Decision and management.
We predict that these processes are largely unaffected in CD when not comorbid with ADHD. Appraisal and accommodation. Individuals with CD display normal or enhanced sensitivity to positive or rewarding outcomes but reduced sensitivity to aversive outcomes, blunting their response to, and reducing their ability to learn from, negative feedback. These effects are mediated by hypoactivation of the brain's punishment centers, the amygdala and anterior insula, and associated striatal regions, and deficient prediction error signals for aversive events. Empirical indications Evaluation of aversive or risk-related cues. The decision making of children and adolescents with CD or oppositional defiant disorder (ODD) has been studied using four types of tasks involving (a) decision making under risk (outcome probabilities are explicitly presented); (b) decision making under uncertainty (key information is unavailable or where learning is required); (c) reversal learning (where contingencies change); and (d) passive avoidance learning. An early study found no group differences in Iowa Gambling Task (IGT) performance at baseline (Ernst et al., 2003), although, unlike controls, CD individuals failed to show performance improvements a week later. More recently, Schutter, van Bokhoven, Vanderschuren, Lochman, and Matthys (2011) found that adolescents with CD/ODD and substance use disorders failed to learn to avoid risky decks associated with large penalties. A study using a modified gambling task obtained similar findings in children with ODD. In contrast, Fairchild et al. (2009) found that CD was associated with increased risky decision making under conditions of risk, suggesting heightened sensitivity to gains or reduced sensitivity to losses during the evaluation phase of decision making. Crowley, Raymond, Mikulich-Gilbertson, Thompson, and Lejuez (2006) found that adolescents with CD and substance use disorders made more risky choices than control subjects using the Balloon Analogue Risk Task (BART). Using the same task, Humphreys and Lee (2011) found that children with comorbid ODD+ADHD made riskier choices than controls, whereas the ODD-only group was less sensitive to punishment than controls. Collectively, these findings suggest that individuals with CD or ODD have difficulties in adjusting their behavior following negative reinforcement or punishment, whereas studies assessing decision making under risk indicate that CD is associated with altered sensitivity to gains and/or losses during choice evaluation. A recent study investigating intertemporal choice showed heightened temporal discounting in adolescents with CD relative to controls (White, Clanton et al., 2014). Interestingly, these findings remained significant when excluding participants with comorbid ADHD. This suggests a more present-orientated motivational style in CD or, alternatively, that CD is independently associated with delay aversion. An fMRI study observed hypoactivation in multiple brain regions in individuals with both CD and substance use disorders (SUDs) during the evaluation phase of decision making (Crowley et al., 2010). The regions implicated included multiple areas in prefrontal cortex, anterior cingulate cortex, insula, and amygdala, as well as temporal and parietal cortices. The CD+SUDs group also showed reduced activation when receiving rewarding feedback in anterior cingulate and temporal and visual cortices, but increased responses to losses in several frontal and temporal regions relative to controls. Impaired reinforcement processes.
Learning from aversive events is impaired in those with CD, and such effects are correlated with variations in the severity or persistence of CD. Adolescents with CD, like adults with antisocial personality disorder (Flor et al., 2002), show deficient autonomic conditioning (Fairchild, Stobbe, van Goozen, Calder, & Goodyer, 2010; Fairchild, van Goozen, Stollery, & Goodyer, 2008). Reduced autonomic conditioning at age 3 predicted increased criminal behavior in adulthood (Gao, Raine, Venables, Dawson, & Mednick, 2010), whereas intact conditioning in midadolescence was associated with better outcomes in a high-risk group (Brennan et al., 1997). Deficient acquisition of conditioning was associated with higher rates of offending within a group of young offenders (Syngelaki, Fairchild, Moore, Savage, & van Goozen, 2013b). Importantly, most studies have not found CD-related effects on general autonomic reactivity to aversive unconditioned stimuli. While these findings appear to challenge the idea of a general impairment in the processing of negative stimuli, other studies have observed CD-related reductions in eye-blink startle or skin conductance responses to aversive stimuli (Fairchild et al., 2008; van Goozen, Snoek, Matthys, van Rossum, & van Engeland, 2004; Syngelaki, Fairchild, Moore, Savage, & van Goozen, 2013a). Consequently, it is currently unclear whether there is a primary deficit in responsiveness to aversive stimuli or a disproportionate impairment in learning from punishment. Indeed, it is plausible that both processes are impaired and reduced sensitivity to aversive stimuli contributes to associative learning difficulties. Structural MRI studies have observed reduced anterior insula, orbitofrontal cortex, and striatal gray-matter volume in CD (Fairchild et al., 2011, 2013; Sterzer, Stadler, Poustka, & Kleinschmidt, 2007), suggesting that CD is associated with structural, as well as functional, abnormalities in key regions of the valuation network. Rubia et al. (2009) observed reduced orbitofrontal cortex responses to rewarding outcomes in boys with childhood-onset CD. A recent study found no group differences in neural activity during reward or loss anticipation when comparing adolescents with persisting and desisting conduct problems and controls (Cohn et al., 2014). However, the persistent disruptive behavior disorder (DBD) group demonstrated reduced ventral striatal activity during reward receipt and increased amygdala responses to receipt of losses. In a passive avoidance task with monetary rewards and punishments, adolescents with DBDs showed weaker expected value signals in ventromedial prefrontal cortex when choosing to respond to stimuli and weaker expected value signals in insula when choosing not to respond (White et al., 2013). They also displayed reduced positive and increased negative prediction error signals in the caudate when receiving feedback. In a follow-up study using environmental reinforcers, adolescents with DBDs showed reduced expected value signals in caudate nucleus, thalamus, and posterior cingulate cortex when making suboptimal decisions (White, Fowler et al., 2014). These studies support the hypothesis that individuals with CD show deficits in expected value signals for aversive outcomes and altered prediction error signals, although it is currently unclear whether both reward-related and aversive expected value signals are disrupted.
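As a purely illustrative sketch (not a reimplementation of any cited task or model), the following toy simulation shows how down-weighting negative prediction errors, one hypothesized mechanism in CD, leaves the learned value of a frequently punished option too high, mirroring the impaired learning from punishment described above. The learning rates and outcome probabilities are arbitrary.

```python
import random


def learn_value(n_trials, lr_pos, lr_neg, seed=0):
    """Track the learned value of an option that delivers -1 on 80% of trials and +1 otherwise."""
    rng = random.Random(seed)
    value = 0.0
    for _ in range(n_trials):
        outcome = -1.0 if rng.random() < 0.8 else 1.0
        prediction_error = outcome - value
        # Asymmetric learning rates: negative prediction errors can be down-weighted.
        rate = lr_pos if prediction_error > 0 else lr_neg
        value += rate * prediction_error
    return value


typical = learn_value(100, lr_pos=0.2, lr_neg=0.2)    # weights reward and punishment equally
blunted = learn_value(100, lr_pos=0.2, lr_neg=0.05)   # under-weights punishment
print(f"typical learner: {typical:.2f}   blunted-punishment learner: {blunted:.2f}")
```

Under symmetric learning rates the option's value settles near its true (negative) expected value, whereas the blunted-punishment learner ends up valuing the same option as roughly neutral and so has little reason to avoid it.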
Functional magnetic resonance imaging studies of emotion processing have demonstrated that adolescents with CD show reduced activity in the dorsal anterior cingulate cortex, amygdala, insula, dorsolateral prefrontal cortex, and caudate nucleus (Fairchild et al., 2014; Lockwood et al., 2013; Passamonti et al., 2010; Sterzer, Stadler, Krebs, Kleinschmidt, & Poustka, 2005). However, in contrast with the findings described above, a study combining psychophysiological and fMRI methods found no significant differences in autonomic fear conditioning between persistent and desisting DBD groups and controls, but increased anterior cingulate cortex/insula responses to the conditioned stimulus in both DBD groups relative to controls (Cohn et al., 2013). These divergent findings may be explained by elevated anxiety in both DBD groups. Summary of CD-related research priorities Three key priorities for future neuroeconomic studies of CD are (a) to investigate systematically decision making using tasks that allow disaggregation of the decision-making stages identified in this review; even though we hypothesize that decision- and management-related processes are essentially intact in CD, very few studies have examined the intervening processes between evaluation and outcome appraisal. It will also be critical to study the impact of alterations in reinforcement processes (e.g. aversive prediction error signals) on the subsequent evaluation of options and reinforcement learning with both appetitive and aversive stimuli; (b) to examine decision making in social contexts, in order to understand how the decision making of individuals with CD is affected by the presence of peers, and whether their antisocial behavior is related to stable changes in social preferences (e.g. reduced inequity aversion when it concerns others, but heightened sensitivity to (perceived) unfair treatment by others); (c) to investigate the impact of environmental adversity (e.g. being raised in poverty, effects of socioeconomic status gradients, and biological embedding of early-life stress) on the preference structures that guide the evaluation process, given the strong association between childhood maltreatment, low socioeconomic status, and CD (Piotrowska, Stride, Croft, & Rowe, 2015). Core hypotheses relating to impaired decision making in depression Alterations in self-referential, executive, and reinforcement processes, and their underlying brain networks, interact to produce disengaged, perseverative, and pessimistic decision making. Evaluation. A dysfunctional attributional style due to negative perceptual biases of past events is compounded by default mode network-related excessive self-focusing, which manifests as reluctance to engage in choice behavior. Anticipation of reward is diminished (reflected in decreased ventral striatal activity), which, combined with blunted affective forecasting, exaggerates negative and underestimates positive characteristics of future choice options, contributing to disengaged decisions. Decision and management. Excessive negative rumination on past events reduces the individual's willingness not only to initiate but also to execute future decisions. Failure to suppress default mode network-mediated negative intrusive thoughts during decision management increases choice instability and the tendency to ineffectively reevaluate ongoing decisions. Appraisal and accommodation.
Hypersensitivity to negative outcomes, reflected by increased ventral striatal responses to punishment, coupled with negative appraisal of decisions due to excessive rumination, further contributes to a pessimistic decision-making style. Empirical indications Models about the influence of emotions on decision making stretch back a long time (Loewenstein, Weber, Hsee, & Welch, 2001). Surprisingly, they have received very little empirical testing in the context of psychiatric disorders and, specifically, of depression. Dysfunctional attributional style. Attributional style, the way in which a person explains the causes of positive and/or negative events in their lives, is often altered in depression. As in adulthood (Sweeney, Anderson, & Bailey, 1986), childhood depression is associated with an internalized (i.e. self-blaming), stable (i.e. trait-like), and global (i.e. generalizing across situations) attributional style. Positive events are attributed to external, unstable, and specific causes (Gladstone & Kaslow, 1995). A task-based fMRI study in adults (Seidel et al., 2012) assessing attributions to social events found that the left temporal pole, the left dorsomedial, and the right ventrolateral prefrontal cortex were significantly more activated in controls versus depressed patients for 'non-self-serving' attributions and in patients versus controls for 'self-serving' attributions. Since, in controls, 'non-self-serving' and, in depressed patients, 'self-serving' attributions are in conflict with the prevailing expected style, the study suggests that a higher degree of cognitive control is required to inhibit the prepotent tendency toward either self-serving or non-self-serving responses. Excessive rumination and obsessive self-focus. Rumination can be a maladaptive thinking style as a response to negative mood states, including depression, irritability, and anxiety. It is common in patients with both anxiety and depressive disorders and may contribute to impairment over and above the presence of other psychopathology (McLaughlin & Nolen-Hoeksema, 2011). Meta-analytic studies provide compelling evidence for negative rumination in adolescent depression (Rood, Roelofs, Bogels, Nolen-Hoeksema, & Schouten, 2009). Ruminative thinking can lead to interpretation bias so that ambiguous information is interpreted consistently with the content of ruminations (Mor, Hertel, Ngo, Shachar, & Redak, 2014). In dysphoric individuals, rumination mediates difficulties with making decisions; moreover, it reduces confidence in decisions (van Randenborgh, de Jong-Meyer, & Huffmeier, 2010b). Rumination can, therefore, bias the evaluation of prospective situations but also the execution of the decision and the interpretation of previous events (appraisal of decisions; see further below). Increased obsessive self-focused cognition, typically associated with negatively distorted autobiographical memories, is common in both adults (for a review, see Ingram, 1990) and youth with depression. For example, Black and Possel (2013) found that maladaptive self-referential processing and excessive rumination at baseline predicted increased depressive symptoms 6 months later. Additionally, negative memory biases on the self-referent encoding task at age 6 predicted increased depressive symptoms at age 9 (Goldstein, Hayden, & Klein, 2014).
Indeed, compared with healthy controls, individuals with dysphoria experience their decisions as more difficult and have less confidence in their choices, and this difficulty is mediated via excessive self-focused thinking (van Randenborgh et al., 2010b). The same is true for depressed individuals (van Randenborgh, de Jong-Meyer, & Huffmeier, 2010a). Furthermore, excessive rumination predicts indecision in adults, and this effect is independent of the severity of depression (Di Schiena, Luminet, Chang, & Philippot, 2013). A number of studies have found alterations in functional connectivity in depression as well as in the interplay between the default mode network and other systems, relating such alterations to excessive self-referential and rumination processes (Marchetti, Koster, Sonuga-Barke, & De Raedt, 2012; Pannekoek et al., 2014). Rumination has been shown to be correlated with decreased fractional anisotropy (a proxy of white-matter structural connectivity) in the superior longitudinal fasciculus, the major tract connecting frontal/parietal circuits with the limbic system (Zuo et al., 2012). A seminal study (Sheline et al., 2009) reported increased self-referential processing related to a failure to deactivate core default mode regions (including ventromedial prefrontal cortex, anterior cingulate, lateral parietal cortex, and lateral temporal cortex) while participants were examining and reappraising pictures. A recent meta-analysis found increased connectivity between the default mode network and subgenual prefrontal cortex, suggesting that coactivation of these regions is related to behavioral withdrawal and a self-focused, negatively valenced and withdrawn ruminative state (Hamilton, Farmer, Fogelman, & Gotlib, 2015). When making decisions, the individual not only anticipates future events but also forms projections of how those events will feel; this is termed affective forecasting. Relative to controls, individuals with dysphoria present with blunted affective forecasting, expecting future positive events to feel less positive even if they occur (Marroquin & Nolen-Hoeksema, 2015). This might further strengthen the tendency to avoid future decisions. Reward hyposensitivity. The reward network is altered in adolescents with depressive disorder (Forbes & Dahl, 2012; Kerestes et al., 2014b; Romens et al., 2015) and also in unaffected first-degree relatives of patients with depression (Olino et al., 2014). One of the most prominent findings is that of reduced anticipation of reward. Recent neuroimaging results demonstrate that activity in the ventral striatum is reduced in adolescent participants with subthreshold and clinical depression relative to healthy comparison subjects during anticipation of monetary rewards. Moreover, diminished ventral striatal response to reward anticipation is linked with anhedonia, rather than low mood, and predicts new onset of depression 2 years later. The reduced response to reward anticipation may underlie a variety of motivational deficits in depression that clinicians traditionally subsume under the construct of anhedonia. Importantly, no brain alterations have been found during positive monetary outcomes in depressed subjects compared with controls, strengthening the notion that anticipatory rather than consummatory processes are aberrant in depression (Treadway & Zald, 2011). As such, this process may be particularly relevant in the evaluation stage. Hypersensitivity to negative outcomes.
Complementing the hyporesponsivity to positive reward, depressed patients also show hypersensitivity to punishment (Kessel, Kujawa, Hajcak, & Klein, 2015) or negative feedback (Eshel & Roiser, 2010). In a recent fMRI study, it was shown that adolescents with anhedonia, but not those with low mood, showed increased activation in the ventral striatum during negative outcomes (Stringaris et al., 2015b). This is in accordance with previous results from adults with anhedonia (Padrao, Mallorqui, Cucurell, Marco-Pallares, & Rodriguez-Fornells, 2013). Along with the effect of rumination, hypersensitivity to negative outcomes may bias the appraisal stage, leading to pessimistic assessment of previous choices. Summary of depression-related research priorities The following lines of investigation are particularly relevant in relation to the hypotheses discussed above: (a) there is a specific need to investigate the brain correlates of decision-making processes in depression from a developmental perspective, to address the lack of studies in childhood and adolescence. While the neurobiological underpinnings of the relationship between depression and decision making are being elucidated in adults, there are several unexplored key areas in relation to young people's decision making. For instance, the neuronal correlates of attributional processes have not been specifically and systematically studied in depressed youth. (b) The association between laboratory measures of decision making and more ecologically valid indices, especially in relation to the evaluation stage, needs to be established. (c) Finally, we need more experimental studies comparing the effects of cognitive versus behavioral interventions on reward-related decision processes and their brain correlates. Anxiety Background Anxiety disorders are common in youth with a prevalence ranging between 5% and 10% (Pine & Klein, 2010) and are probably best understood as extreme expressions of continuously distributed traits (Plomin, Haworth, & Davis, 2009). Anxiety disorders include phenomena that occur in early childhood, such as separation anxiety disorder, and conditions that mainly emerge from adolescence onwards, such as social phobia and panic disorder. There is evidence both for the common etiological underpinnings of these disorders (Rutter, 2011) and for the value of distinguishing between them (Pine, 2011). The pathophysiology of anxiety involves alterations in a conserved 'threat network' involving subcortical structures, such as the amygdala, that are critical for the acquisition of fear responses (LeDoux, 2000), and frontal areas involved in emotion regulation (Stringaris, 2015). Cognitively, anxious people are more likely to show increased vigilance to threat, characterized by a negativity bias (Bar-Haim, Lamy, Pergamin, Bakermans-Kranenburg, & van IJzendoorn, 2007), which is related to amygdala hyperresponsivity to threat (Monk et al., 2008). Medication treatment with selective serotonin reuptake inhibitors is effective in reducing anxiety symptoms, and cognitive behavioral therapy is also effective (James, James, Cowdrey, Soler, & Choke, 2013). Core hypotheses relating to impaired decision making in anxiety Stress induced by heightened levels of performance anxiety arising from self-doubt, combined with hypervigilance for threat, creates a hesitant, risk-averse, and self-deprecating decision-making style. Evaluation stage. (a) Amygdala overactivation, combined with diminished top-down control in executive circuits (e.g.
ventrolateral prefrontal cortex), underpins automatic attentional bias toward threat and leads to the overestimation of negative characteristics of neutral outcomes, especially where those outcomes are ambiguous or uncertain; this in turn leads to excessive risk aversion. (b) Reduced reward valuation during episodes of stress-induced performance anxiety diminishes expected subjective value estimates and decision-making confidence and is reflected in lower activity in the ventral striatum and ventromedial prefrontal cortex. Decision and management. Dissociation between ventral and dorsal frontal regions gives rise to conflicting positive evaluations alongside anxiety when subjects are faced with ambiguous choices (i.e. when competing outcomes are close in subjective value). Appraisal and accommodation. Anxious self-referential thoughts about performance, reflected in increased default mode activity during task execution, lead to the devaluation of achieved outcomes, shifting attention away from the present situation toward past or future negative events (compulsive prospection). This leads to impairments in the individual's ability to predict outcomes. Empirical indications Attention to threat. Information processing in people with anxiety is biased toward the negative. Anxious children selectively attend to negative information, are distracted by it, and find it difficult to disengage from it (Daleiden & Vasey, 1997). They are more likely to interpret ambiguous information as threatening, in ways that cut across different anxiety disorders and are similar in adults and children (Bar-Haim et al., 2007), although a reversal of this effect (with bias away from the negative) has been found when individuals are under significant threat (Bar-Haim et al., 2010). Biases exist for both consciously processed and subliminally presented stimuli. Two stages of biased information processing in anxiety have been identified: an early, fast, and automatic or amygdala-based primary pathway and a secondary, slower system that incorporates contextual information relying on prefrontal processing (Beck & Clark, 1997; Cisler & Koster, 2010; Mogg & Bradley, 1998). Young people with generalized anxiety disorder show excess amygdala activity when briefly presented with angry faces (Monk et al., 2008). Executive control. Anxiety is associated with a reduced ability to recruit executive processes to moderate emotional responses through mechanisms such as attention reallocation or reinterpretation (Pine, 2007; Posner & Rothbart, 2000). From a neural perspective, such regulation happens through crosstalk between the amygdala and parts of the prefrontal and orbitofrontal cortex; these regions are active during emotion regulation, such as when appraising emotionally laden situations (Ochsner, Silvers, & Buhle, 2012). It has been demonstrated that increased activity in prefrontal and orbitofrontal areas correlates with reductions in amygdala activity (Banks, Eddy, Angstadt, Nathan, & Phan, 2007; Goldin, McRae, Ramel, & Gross, 2008). In this regard, there is fMRI evidence to support the theory of Eysenck et al. (2007) that anxiety disrupts executive processes, such as inhibition, thus impairing an individual's attentional control over the processing of emotionally salient stimuli. In a study by Monk et al. (2008), amygdala hyperactivation was accompanied by negative connectivity between this region and the ventrolateral prefrontal cortex, suggesting decreased top-down control of automatic processes.
Moreover, reduced DLPFC activation is present in patients with anxiety, but not healthy volunteers, during error processing (Fitzgerald et al., 2013). Reward processing and learning. Neuropsychological experiments suggest that anxiety-related threat perception may bias individuals toward overestimating potential losses (Clark et al., 2012). The presence of anxiety also appears to diminish the expected positive value of outcomes. In healthy volunteers, anxiety is associated with reduced activity in the medial prefrontal cortex following feedback related to both monetary gains and monetary losses (Treadway, Buckholtz, & Zald, 2013). Indeed, anticipatory anxiety diminishes activity in the ventral striatum and medial prefrontal cortex when individuals are asked to estimate the subjective value of events and predict outcomes (Engelmann, Meyer, Fehr, & Ruff, 2015). Consistent with animal models of stress, activity in the anterior insula, an area known to preferentially encode negative value, was increased, and connectivity between medial prefrontal cortex and striatum was diminished, during anticipatory stress (Dias-Ferreira et al., 2009). How these findings about incidental anxiety apply to those with persistent anxiety, typical of most anxiety disorders, remains understudied. Recent results suggest that reward valuation responses to social reward may be reduced in patients with social anxiety disorder, as reflected by diminished activity in the putamen and reduced ventral striatal-anterior cingulate cortex connectivity (Cremers, Veer, Spinhoven, Rombouts, & Roelofs, 2014). However, these effects may be specific to social reward cues and not apply to monetary rewards (Maresh, Allen, & Coan, 2014). Contingency learning in anxiety has received little attention from researchers. There is some evidence that high-trait anxious individuals show deficits in the ability to adapt their decisions in the face of aversive stimuli, especially when environments become volatile (Browning, Behrens, Jocham, O'Reilly, & Bishop, 2015). This deficit could result in anxious people perceiving aversive events to be less predictable and thus harder to avoid. Future experiments need to test whether other emotional disorders, such as depression, are associated with similar problems. Avoidance. While procrastination is not coterminous with anxiety, both appear related to the avoidance of stressful situations. Hence, anxious behavior is characterized, and for some disorders defined, by the avoidance of certain decisions or tasks (American Psychiatric Association, 2013). It appears that the degree of approach or withdrawal from a situation depends on activity in the prefrontal-striatal-insular network. Increased prefrontal activity seems to be associated with less approach behavior (Aupperle, Melrose, Francisco, Paulus, & Stein, 2015). Avoidance of decision making may even be present for anxious individuals in so-called win-win situations. It appears that dorsal prefrontal activity is associated with anxiety about choice conflict between positive outcomes and predicts the reversal of a previously made choice when the chance of reevaluation is given. The extent to which there are interindividual differences in such processing of conflictual decisions and whether these correlate with trait anxiety remains to be established. Related to avoidance behaviors are the increased levels of intolerance of uncertainty found in anxious individuals (Beesdo-Baum et al., 2012), which lead to negatively biased interpretations of events.
Intolerance of uncertainty is linked with anger expression in individuals with generalized anxiety disorder and may explain why they may disengage from tasks or avoid decision making (Fracalanza, Koerner, Deschenes, & Dugas, 2014). It appears that intolerance of uncertainty is positively correlated with activity in frontolimbic areas, particularly in subgroups of people with social or generalized anxiety (Krain et al., 2008). Summary of anxiety-related research priorities Recent advances in anxiety neuroscience and treatment open up a number of exciting avenues for further research in decision making as discussed above: (a) How person-environment interactions influence decision-making habits. Anxious withdrawal and irritability (Krebs et al., 2013; Mikita et al., 2015; Stoddard et al., 2014) are potent modifiers of parental or peer responses to a child and will serve to reinforce existing decision making. The extent to which environmental modifications (as is done, for example, with standard behavioral treatment) will have an effect on the entrenched and more general decision-making style of an anxious person has been surprisingly underexplored. (b) Attention Bias Modification Treatment (ABMT) targets a decision-making process in anxiety; what are its underlying neural correlates, and can we use these as a guide on how to target other decision-making processes in anxiety (such as reward processing and learning)? Conclusions and issues for further consideration Children with mental health problems continually have to make decisions, yet we lack a comprehensive account of the factors and processes that may underlie decision-making impairments in mental disorders. Important functional outcomes, including whether they can return to school, or even whether they choose to stay alive, rely on their ability to make decisions effectively. Similarly, psychological disturbances may constrain a person's capacity for decision making and impair their volition or sense of agency. While this is widely recognized in legal systems around the world, a differentiated understanding of the pathways leading to these impairments in different disorders is still lacking. In this article, we have attempted to provide an integrative, transdiagnostic neuroeconomic framework for the study of impaired decision making in psychopathology and apply it to highlight putative differences in decision making between ADHD, CD, depression, and anxiety. We argue that describing how these disorders map onto difficulties in the evaluation, execution, and/or appraisal of decisions is a key first step toward understanding why psychopathology frequently leads to negative outcomes. In this review, we have identified impairments that we predict are unique to each of the psychiatric disorders. While the focus has been on the distinct features of disorders, we acknowledge that there will also be problems cutting across current diagnostic boundaries. We have also considered the neuropsychological mechanisms that may underlie such difficulties in decision making. In general terms, there is at present only limited or indirect evidence to support these disorder-specific hypotheses, and relevant evidence from children and adolescents is particularly scarce. Most evidence relating to decision making in childhood psychopathology comes from studies of ADHD and CD, but even here we currently lack the necessary integration across levels of analysis to establish the role of specific neurocognitive systems in driving decision-making deficits.
One consequence of this is that we have had to resort to adult studies when highlighting relevant evidence. We hope that this review will spur the field on to perform more neuroeconomically inspired studies of decision making in children with mental disorders. Although we have addressed a wide range of issues, a number of additional issues require consideration in any discussion of decision making in child and adolescent mental disorders. Causality What is the role of impaired decision making in the causal pathways from etiological factors to mental disorders? Decision making might be viewed as an expression of a disordered state, an extension of the clinical profile and a manifestation of its presentation, perhaps mediating the pathway from disorder to functional impairment and reduced quality of life. At the same time, the neuroeconomic model provides an alternative perspective on the pathophysiological pathways to disorder expression, where dysfunctional decision making associated with mental disorders appears to be a downstream effect of neural and cognitive mechanisms. For example, an overactive limbic system may bias attention toward negative stimuli, and this may in turn adversely affect the evaluation stage of decision making. Treatments such as attentional bias modification aim to reduce attentional biases toward negative stimuli and may thus improve decision making and reduce impairment. However, it is also possible that dysfunctional decision making is a causal mechanism in its own right, contributing to a vicious cycle and compounding the effects of the disorder itself. This could happen in a number of ways. The decisions one makes constrain one's experiences. If one chooses immediacy over delayed rewards, or safety over risk, then one reduces exposure to delay and risk and diminishes one's opportunities to learn how to manage delay and/or risk in the future. The same applies to escape from threat in anxious individuals. Decisions can also negatively impact a person's mood; that is, negative predictions about the future, made through the process of affective forecasting, can induce further hopelessness and despair in already depressed individuals. Conversely, the negative appraisal of previous decisions can not only impact future decision making but also exacerbate negative moods and low self-esteem. To the extent that this is true, targeting the underlying decision-making processes may offer potential in alleviating the primary symptoms of disorders as well as associated impairment. Complexity, comorbidity, and heterogeneity As mentioned in the introduction, the RDoC initiative is promoting a (reductionist) model of clinical scientific enquiry, which attempts to break mental disorders into their basic neurobiological constituent components or core impairment dimensions to provide an empirically driven transdiagnostic alternative to current clinically informed diagnostic models (Insel et al., 2010). The hope is that such an approach will progress translational science by aligning diagnostic approaches more directly to neurobiological treatment targets. In describing the complex and dynamic nature of the underlying pathophysiology of decision making in mental disorders and highlighting the ways in which basic alterations within brain systems can manifest in very different ways in different disorders, the current review highlights both of the challenges faced by researchers working within the RDoC framework.
More specifically, it may only be when the dynamic interactions between brain systems or core neurocognitive processes are fully considered that the particular features of the psychopathological disorder become apparent; if this were the case, then the optimism surrounding the RDoC initiative may be misplaced. In fact, we acknowledge that the current neuroeconomic framework underestimates the degree of complexity of the determinants of decision making in mental disorders. In particular, given limitations on space, two key elements contributing to such complexity have been deliberately omitted from this review. First, there is the critical issue of overlap between disorders. As a first step, we have contrasted the disorders in their archetypal and generic forms. However, we acknowledge that there is substantial overlap between the four disorders and comorbidity with other disorders (e.g. ASD) is substantial. This highlights several key questions for further consideration. For instance, if two disorders co-occur, do their decision-making attributes combine in an additive way and compound the level of impairment, or does the presence of a second disorder transform the decision-making style associated with the first? For instance, what would the combination of reckless and hesitant decision making look like in the case of the child with CD and anxiety (a far from uncommon presentation)? We have highlighted how little research directly addresses the interaction between different brain systems in decision making. However, there is even less evidence relating to comorbid presentations of disorders, although some components of our model have been investigated (i.e. executive functions in comorbid internalizing and externalizing conditions; Woltering, Lishak, Hodgson, Granic, & Zelazo, 2015). Important targets for future research are the differential characterization of introspective rumination in anxiety versus depression and how these combine in children with both conditions. Further complexity stems from heterogeneity within disorders. Although earlier models have tried to map specific mental disorders onto their underlying neural substrates, it is becoming increasingly clear that psychiatric disorders are pathophysiologically heterogeneous, with different individuals with the same disorder (or at least meeting the same diagnostic criteria) showing markedly different neuropsychological profiles. This has been perhaps most fully explored in relation to ADHD (Sjowall, Roth, Lindqvist, & Thorell, 2013), where there has been a proposal for neuropsychological subtypes (Faraone et al., 2015; Nigg, Willcutt, Doyle, & Sonuga-Barke, 2005). Heterogeneity is also considered an important issue in CD, anxiety, and depression. Accordingly, it is possible that certain subgroups within each disorder could be defined on the basis of decision-making profiles, with some patients displaying certain impairments and others showing distinct profiles. Development We have not had the space to adequately consider developmental issues. Furthermore, the limitations inherent in the present review should be acknowledged in this regard, especially in relation to (a) the scarcity of child and adolescent data relating to a substantial proportion of our core hypotheses, especially for anxiety and depression; and (b) the almost complete absence of longitudinal data, which would allow consideration of the developmental progression over time of decision-making phenotypes and the underlying neurodevelopmental processes that drive changes.
In thinking through developmental considerations in the future, we need to reflect on the differences between the four conditions considered here in terms of clinical and developmental profiles and timings (prodromal states, initial onset and progression, and transitions to adult life). The potential roles of neurodevelopmental immaturity and maturational delay will need to be carefully considered as contributing factors, at least in the cases of ADHD and CD (Shaw et al., 2007). Developmental models not only highlight the importance of characterizing developmental phenotypes of decision-making across childhood and adolescence (i.e. how decision-making profiles change with age) but also the role of the neurodevelopmental processes that shape those phenotypes. In particular, this article has concentrated on the neurobiological substrates of suboptimal decision making in mental disorders. A developmental psychopathology perspective forces us to consider the potential role of the social environment in shaping decision-making biases or deficits. Social aspects of decision making We have also not discussed other social considerations, such as the influence of social and peer processes on decision making (Smith, Chein, & Steinberg, 2014), decisions about how to treat others (e.g. whether to behave fairly or unfairly toward them), and the computations required to make sense of others' behavior (e.g. Behrens, Hunt, & Rushworth, 2009). These issues are undoubtedly important in understanding psychopathology. For example, adolescents with anxiety may be overly sensitive to social evaluation and may make suboptimal choices in an effort to conform to perceived social norms, whereas those with CD may be relatively insensitive to negative social evaluation. Initial studies suggest that adults with depression show atypical behavior in interpersonal economic exchange paradigms (Pulcu et al., 2015; Shao, Zhang, & Lee, 2015), and such tasks have shed light on abnormal social behavior in personality disorders (King-Casas et al., 2008; Koenigs, Kruepke, & Newman, 2010). However, for reasons of space, and because there is comparatively little evidence available from studies of developmental populations, we felt that this topic should be reserved for a future review. Clinical implications While it would be premature to speculate about detailed disorder-specific clinical implications of the current review, in more general terms, a consideration of decision making raises the following clinical questions: First, would measuring decision-making problems and related neuroeconomic parameters enhance clinicians' ability to predict a patient's overall impairment? Second, would targeting decision-making pathology make sense in developing future treatments? As mentioned already, ABMT can be construed as such an attempt and has had variable success (Cristea, Mogoase, David, & Cuijpers, 2015). This question may be particularly relevant for treatments that are thought to affect reward processes, for example, behavioral activation in depression. Could these be usefully modified so that disturbances in decision making, rather than clinical symptoms, become the prime target? Finally, decision-making pathology will be of particular interest to those clinicians who regularly assess their patients' capacity, a particularly underdeveloped field in child and adolescent psychiatry. Could decision-making science provide a firmer ground for such assessments or at least be used as an additional resource? 
• Success or failure in life is partly determined by the decisions one makes. • Clinically, it is apparent that impaired decision-making impacts daily functioning in young people with mental health conditions. • To understand relationships between decision making and specific conditions, we first need to acknowledge the neuropsychological complexity of the decision-making process, involving as it does multiple stages and neurocognitive systems. • We propose that decision making is impaired in distinct ways in different psychopathological conditions, each reflecting a specific neurocognitive profile. • We hypothesize that decision making is inefficient, impulsive, and inconsistent in ADHD; reckless and insensitive to negative outcomes in CD; disengaged/perseverative/pessimistic in depression; hesitant/risk-aversive/self-deprecating in anxiety. • Evidence from research within a developmental psychopathology framework, across multiple levels of analysis, is required to test these hypotheses.
2016-05-12T22:15:10.714Z
2015-12-26T00:00:00.000
{ "year": 2015, "sha1": "a2389a99f55307558f89c075dc8e7221e344a681", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/jcpp.12496", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "6114fc8f5172cfd865eb8e106ef6564c23ea3bad", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Medicine", "Psychology" ] }
222186301
pes2o/s2orc
v3-fos-license
Strategic determinants of the support absorption process in the SMEs sector companies Aim/purpose – The aim of the paper is to identify and assess the strategic factors that determine the absorption process of support instruments by SMEs sector companies. Design/methodology/approach – Strategic factors were identified on the basis of the literature review and opinions of ten experts representing management sciences. Further empirical verification of the proposed assumptions was carried out on a random sample of 1,741 micro-, small- and medium-sized enterprises from 22 European Union countries. Findings – The obtained results indicate a significant positive influence of five identified strategic factors, i.e. social, support environment, resource, management system and organisational, on the support absorption process. This impact is slightly stronger in the acquisition phase than in the use of support instruments by the SMEs. Research implications/limitations – The results provide a basis for improving efforts to acquire and use external support for micro-, small- and medium-sized enterprises. The limitations of the study include respondents' high subjectivity of opinions and the complex character of the considered theoretical constructs. Originality/value/contribution – The contribution of the research to the development of management sciences primarily includes the formulation and empirical verification of a set of strategic factors determining the support absorption process in SMEs. 1 Names are provided in alphabetical order. The contribution of each author to the preparation of the paper is 50%. Introduction The issue of creating appropriate conditions for the development of micro-, small- and medium-sized enterprises (SMEs) has been in the focus of the European Union countries, including Poland, for many years. For this purpose various support instruments have been offered by specific institutions as part of regional, national or international initiatives and assistance programmes. These activities are the subject of many scientific studies (Peszko, 2013; Woźniak, 2012a, 2012b, 2014a, 2014b, 2014c). An important direction of this research is the issue of absorption, which means acquiring and using the support directly by companies from the SMEs sector. The course of the absorption processes is of key importance for both entrepreneurs and small business support institutions. It also allows understanding and explanation of the reasons for the low effectiveness of some of the programmes and aid initiatives being implemented. However, most of the recent research has focused on the specific nature of different support programmes and instruments (e.g. Moritz, Block, & Heinz, 2016; Ozekicioglu & Yetiz, 2019), on the outputs of support initiatives (e.g. Beņkovskis, Tkačevs, & Yashiro, 2019; Ciszewska-Mlinarič, 2018; Radicic & Pugh, 2017) or on the barriers to support absorption by SMEs (Beizitere & Brence, 2020; Szara, 2012; Wach, 2008). Nevertheless, the key (strategic) factors determining the acquisition and effective use of support in SMEs development processes were analysed to a much smaller extent. This indicates the existence of a specific research gap which justifies addressing the research problem presented in this paper. Taking this into account, the aim of the paper is to identify and assess the strategic factors that determine the absorption processes of support instruments by SMEs. 
The aim of the work was accomplished by carrying out the empirical research by the authors, using the survey method on a random sample of 1,741 micro-, small- and medium-sized enterprises from the area of 22 European Union countries. The strategic factors were identified on the basis of the literature review and the Delphi survey, which involved 10 experts representing the management science community. The results show a significant beneficial effect of five strategic areas: social, support environment, resource, management system and organisational on the course of absorption of the support. However, the effect is slightly stronger in the phase of acquisition than in the use of the support instruments by SMEs. The structure of the paper is as follows. First, the literature is reviewed and the research gap is identified. Then, the methodology is described. The research findings are delivered in section 4. The paper ends with the discussion and conclusions. Literature review As noted by Jelonek (2016, p. 58), there are a number of interesting research problems related to the absorption process, which, in general, means the ability to permanently introduce resources occurring in the business environment to the organisational system with the intention of using them to achieve specific goals. Research in this field may concern, among others, development of new products, creation and effectiveness of innovation, absorption of knowledge or technology. Absorption is also a term widely used in economic science to describe the challenges associated with the acquisition and use of support in the economy. It may concern the activity within: • the macroeconomic situation, considered as the ability of the recipient country to use support effectively (Kopiński, 2011, pp. 186-189), • the mesoeconomic situation, most often implemented by local government units in order to use support in stimulating development at the regional or sectoral level (Biczkowski & Jezierska-Thole, 2010, pp. 13-24), • the microeconomic situation, related to the acquisition and use of specific support instruments by specific enterprises and other types of organisation. A special group of enterprises interested in absorption activity are micro-, small- and medium-sized enterprises, for which various assistance initiatives are undertaken and specific instruments for support development are offered. In general, they are divided into financial and non-financial (Lisowska & Stanisławski, 2011, p. 292), while a broader review of the literature allows for distinguishing such support instruments for SMEs (Woźniak, 2012a, pp. 73-75) as: • non-returnable financing, e.g. grants from EU funds, • external financing, e.g. loans, leasing or sureties, • equity financing, e.g. the involvement of private equity investors, • administrative and legal, e.g. tax exemptions or investment allowances, • consulting, training and information for SMEs, • technological and pro-innovation, e.g. technology audit or transfer, • organisational or business, e.g. an offer of entrepreneurship incubators or technology parks. Factors determining the possibilities of obtaining and using these support instruments by SMEs can be divided into internal, having their source in the enterprise itself, and external, related to the entire support system as a part of the business environment. In the case of financial support provided in the form of grants, the following factors need to be accounted for (Brodzińska & Brodziński, 2010, p. 12; Godek, 2008, p. 27; Kołodziej-Hajdo & Surowiec, 2001, p. 
542): the need to ensure equity, requirements of the application procedure, documentation of the development activities undertaken and their implementation, the waiting period for fund reimbursement or preparation of the application for payment and settlement of subsidies. In the case of non-financial support, the key factors include (Vedovello & Godinho, 2003, p. 17; Woźniak, 2014b, pp. 44-47, 2014c): infrastructure, the purpose and scope of activity of the institution providing support (for-profit, non-profit), adaptation of the offer to the needs of entrepreneurs or cooperation between organisations providing support. Lisowska (2014, p. 20), while examining the opinions of entrepreneurs on the conditions of obtaining and using financial and non-financial support by SMEs in business practice, additionally draws attention to the factors affecting absorption, such as information about business environment institutions, knowledge about potential benefits and threats of using the support, quality and adjustment of support to the needs of a company, as well as the possibility of using support in the development processes of a company. The complexity of these factors means that entrepreneurs assess support initiatives in various ways. Woźniak (2012b, pp. 177-178), based on the results of his research, states that subsidies for investments and innovations, tax reliefs and exemptions, as well as preferential loans and credits have the highest rank. These factors also determine the course of the support absorption process, which includes specific activities necessary to be undertaken and implemented by the SMEs sector companies in order to effectively obtain and use support in development processes. This process can be divided into two general phases (support acquisition and use) and seven detailed stages covering (Matejun, 2015, p. 79): 1. In the phase of acquiring the support: • identification and assessment of support instruments existing in the external environment, • adaptation to the requirements of business environment institutions and conditions for the absorption of support, • application for support, • assimilation, including the introduction of support to the organisational system of a company. 2. In the phase of using the support: • exploitation, that is, using support aimed at achieving the set goals, • evaluation, including assessment of the effectiveness of acquiring and using support, • accumulation of knowledge and experience from absorption activity for future use. The literature indicates that many difficulties arise in the absorption processes of SMEs. Szara (2012, pp. 175, 179-182) analyses the taxonomy of the most common barriers encountered by beneficiaries of public aid in Poland. In the phase of gaining support, for example, there may be problems related to employees who are involved in the preparation of an application or to external consulting companies that offer such a service. In the implementation phase, she distinguishes 12 barriers that most often arise, including, among others, interpersonal conflicts, management style or organisational structure. An unquestionable advantage of the approach presented by her is an extensive analysis of many areas affecting the acquisition of support instruments and their connection with various stages of this process. In this case, however, there is no wider empirical verification based on appropriate quantitative research. Marshall et al. (2020, pp. 
1, 3-6), based on research in three sectors (manufacturing, high-tech and services) in the UK, make a proposal for the dimensions of knowledge absorptive capacity in SMEs. They distinguish: acquisition, assimilation, transformation and exploitation. The conclusion is that one must understand SMEs that are 'innovation followers' and those that have sustainability orientations. Moreover, some companies recognise innovation as necessary for business sustainability. The study is restricted to only four locations and three sectors. Wach (2008, pp. 138-142), using statistical methods, points to barriers that determine the low assessment of support instruments by entrepreneurs in Poland. His results indicate that the majority of the SMEs surveyed assessed the instruments as bad or very bad. This is mainly due to the lack of developmental needs and information about the availability of assistance initiatives. Other reasons, such as too high costs and an unmatched support offer, were not so important. He also indicates the existence of a relationship between the age of a company, its size and the entrepreneurial attitude of the owner, and the use of the offer of small business environment institutions. The results of these studies were narrowed, however, to the area of two provinces (Małopolskie and Śląskie). Prokop & Stejskal (2019, pp. 134-145) focus on SMEs absorption of innovation in Germany. They state that it is mainly connected with the entrepreneurial environment, globalisation and fast-changing technological issues. They also pose two important questions: whether support policies focus on the right beneficiaries from the SMEs sector and their innovation activity, and whether companies consider public schemes to be beneficial and intend to use them despite the bureaucracy. The study, however, does not give any recommendations for the present situation. De Jesus Pacheco, ten Caten, Jung, Ribeiro, Navas, & Cruz-Machado (2017, pp. 2277-2287) find key determinants of eco-innovation in manufacturing SMEs. The research is based on a systematic review covering 24 years. The external critical determinants are support and neutrality of regulatory policies regarding SMEs and large enterprises. However, the scale of support for innovative strategies and the availability of resources such as people or technology are crucial in the internal context. The research is restricted by the search criteria adopted; therefore, some databases or dissertations are missing. The review of the literature and the results of existing research indicate the existence of many proposals of factors determining the support absorption process by SMEs. However, some of these approaches are fragmentary and focus on analyses narrowed to specific macroeconomic conditions or specific aid instruments, or cover only selected locations, sectors or even companies. Therefore, there is a research gap in an approach aimed at identifying and assessing strategic factors determining the absorption processes of various support instruments by SMEs. This issue was the subject of research, the results of which are presented in the further part of the paper. Research methodology The research process aimed at achieving the goal of the paper consisted of two stages: (1) expert research using the Delphi method and (2) survey research. In the first part of the research, the strategic factors determining the absorption activity of SMEs were identified. 
The opinions of 10 experts representing management sciences in the area of SMEs management and/or strategic management were used for this purpose; the experts were selected on the basis of scientific, substantive and impact criteria. These opinions were formulated and evaluated in the framework of a 3-round Delphi survey, the aim of which was to obtain a high level of consensus among the experts. The second part of the research was devoted to a quantitative survey conducted on a random sample of 1,741 SMEs. A Computerized Self-Administered Questionnaire was used as a research technique (Bryman & Bell, 2015). The research questionnaire was available to respondents at www.questionpro.com. Due to the fundamental importance of the SMEs for the socio-economic development of the European Union (Muller, Devnani, Julius, Gagliardi, & Marzocchi, 2016) and due to the many support initiatives undertaken for these entities (Florio, Vallino, & Silvia, 2017; McCann & Ortega-Argilés, 2016), 22 EU countries were designated for the research: Austria, Belgium, Bulgaria, Croatia, the Czech Republic, Denmark, Finland, France, Greece, Spain, the Netherlands, Lithuania, Germany, Poland, Portugal, Romania, Slovakia, Slovenia, Sweden, Hungary, Great Britain, Italy. Statistical data (Eurostat data, 2017; The SME Performance Review data, 2016) indicate that over 21 million enterprises operate in this area, of which over 98% are SMEs. The research covered 1,183 micro enterprises (68%), 399 small companies (23%) and 159 medium-sized enterprises (9%). The size of entities was determined on the basis of a uniform formal definition of SMEs applicable in the European Union (Wach, 2004). The obtained sample size ensures statistical representativeness of the results (Keller, 2012, pp. 354-356) in relation to the adopted number of SMEs in the EU (for 2015) of 22,959,600 entities (Muller et al., 2016, p. 77), taking into account the maximum allowable estimation error d = +/−0.0235 at the assumed level of significance p = 0.05. Most of the surveyed enterprises operate as sole traders (45%) or limited liability companies (LLCs) (35%). These are mainly service enterprises (60%), less often operating in the production sector (21%) or trade (19%). The majority of surveyed companies (73%) are active on the market at least on a national scale, of which 35% internationalise their activities. The sample included both mature entities with a market activity period of over 20 years (36%) as well as companies with a business period of 5 to 10 years (21%). The empirical material from the surveyed enterprises was collected based on the judgments of the respondents. They were primarily owners (74%), less often senior managers (19%) or employees authorised by management to participate in the research (7%). The questions were answered mainly by men (70%), people aged from 31 to 40 (30%) or over 50 (36%), with higher education (81%) in a technical (40%) or economic/managerial (26%) field. Research findings In the first part of the research, specific groups of SMEs absorption factors were designated on the basis of expert opinions. Based on their statements formulated in the 1st round of the study, 36 properties describing the strategic approach of SMEs to support absorption were determined and presented for evaluation in the next stages. The results obtained in stages 2 and 3 provided foundations for an appropriate level of consensus among the experts and made it possible to identify the following five groups of strategic factors: 1. 
Social factors related to the role of the personnel function aimed at involving the staff in the support absorption (e.g. in the area of motivation, development of staff competence, or the system of evaluation and control), leadership of entrepreneurs and managers in the support absorption process, as well as conducting the absorption activities in an ethical and socially responsible manner. 2. Factors related to the support environment, including recognition and knowledge about the conditions for the acquisition and effective use of various support instruments and the ability to objectively assess the costs, benefits and risks of using support in the company development processes. 3. Resource factors related to the ability to mobilise and concentrate a company's tangible and intangible resources on absorption activities and the involvement of managers and employees in the absorption process of various support instruments. Furthermore, resource surpluses are of importance as they allow for a company's quick response to support opportunities. 4. Factors in the area of the management system, including targeting support at the implementation of strategic directions of enterprise development, adapting them to the pace, objectives and situation of the company, as well as the ability to acquire and effectively use support more quickly than competitors. 5. Organisational factors, including the ability to introduce formal organisational solutions necessary to acquire and use support (e.g. the designation of an organisational unit or a manager responsible for absorption activity) as well as the occurrence of informal conditions for a creative and inspiring approach to the acquisition and use of various support instruments in company development processes (e.g. sharing knowledge, experience and ideas related to absorption activities). In order to assess the impact of these factors on the absorption activities of companies from the SMEs sector, further quantitative research was carried out on a wide sample of micro-, small- and medium-sized enterprises. First, the scope of using support by the analysed entities in the last 2 years was assessed. A 4-point ordinal scale was used, ranging from 0 (no use of support) to 3 (use of support to a very large extent in relation to the needs of a company). Absorption activity was assessed in relation to particular types of support instruments, including: administrative and legal; advisory, training and information; technological; organisational and business; as well as non-repayable financing, external financing and equity financing. In order to obtain more precise answers, each type of support was provided with an appropriate comment together with examples of aid instruments. On the basis of individual responses, a synthetic index was calculated for each entity, expressing the general attitude of the surveyed enterprises toward using support in development processes (SD). This indicator was the arithmetic mean of the scope of using particular forms of support. The Cronbach alpha coefficient (Cronbach & Shavelson, 2004) was used to assess the reliability of this index; the coefficient amounted to 0.730, which falls within the statistically acceptable range of > 0.7 (Sarstedt & Mooi, 2014). The results obtained indicate that the surveyed companies use support to a very small extent in relation to their needs, because the average level of SD = 0.76, which is only 19% of the range of scale variation. 
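To make the quantitative steps just described more concrete, the short sketch below illustrates, on hypothetical data rather than the study's dataset, how the sampling error quoted in the methodology, the synthetic SD index (the arithmetic mean of the eight support-type scores) and Cronbach's alpha can be computed. The variable names and the randomly generated responses are illustrative assumptions only.

```python
import numpy as np
from math import sqrt

# Margin of error for n = 1,741 at the 0.05 significance level (worst case p = 0.5);
# this reproduces the d = +/-0.0235 quoted in the methodology section.
print(round(1.96 * sqrt(0.25 / 1741), 4))

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: 8 support types rated 0-3 by each of 1,741 firms
# (random stand-in data; the study itself used the actual survey answers).
rng = np.random.default_rng(0)
scores = rng.integers(0, 4, size=(1741, 8)).astype(float)

sd_index = scores.mean(axis=1)            # synthetic SD index per firm (arithmetic mean)
print(round(sd_index.mean(), 2))          # sample-wide average scope of support use
print(round(cronbach_alpha(scores), 3))   # scale reliability (the study reports 0.730 for its real data)
```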
External financing support is used to a relatively greater extent (average 1.21), followed by advisory, training and information support (1.16) and non-repayable financing (0.93). However, organisational and business support (0.46) and equity financing (0.22) were used to the relatively smallest extent, which is connected with the fact that the owners want to maintain full control over the company. In the second part of the research, the support absorption process in the surveyed companies was assessed. The difficulties arising at each stage were rated on a visual analogue scale (VAS) (Reips & Funke, 2008) in the range from 0 (no difficulty) to 100 (very serious difficulties). The obtained results indicate that respondents identify moderate difficulties (45 on average), with relatively higher levels in the support acquisition phase (50 on average), including the identification stage (53), adaptation (51) and application for support (54). The difficulties arising in the phase of using the support (38 on average), including the exploitation stage (39), evaluation (39) and accumulation (37), were assessed as much lower. Difficulties arising at the assimilation stage (included in the support acquisition phase) also reached a relatively lower value of 41. Taking into account these results, three synthetic measures expressing the level of difficulty were constructed: (1) difficulties in the process of absorbing support in total (DT), (2) difficulties in the phase of support acquisition (DA) and (3) difficulties in the phase of using the support (DU). These indicators constitute the arithmetic average of the difficulty assessments at individual stages. The assessment of the reliability of the scales with the help of Cronbach's alpha coefficient obtained acceptable levels: alpha Cr. (DT) = 0.921, alpha Cr. (DA) = 0.890 and alpha Cr. (DU) = 0.894. The results also indicate that the level of difficulty DT decreases significantly with an increase in the extent of absorption: rxy (n = 1,741) = −0.25, p < 0.01, which is associated with the increasing experience of companies in obtaining and using support. Next, the extent of occurrence of the strategic determinants of the absorption activity course, identified in the expert study, was assessed. On the basis of expert opinions, every factor was translated into 2 or 3 dimensions (11 dimensions in total), assessed by respondents in the quantitative study on a scale (VAS) in the range from 0 (definitely disagree) to 100 (definitely agree). All the factors obtained acceptable levels of scale reliability, measured by the alpha Cr. coefficient, between 0.743 and 0.853. Their values ranged from 43 to 62; the highest value was obtained by social factors, and the lowest by resource and organisational factors. As support absorption by SMEs is strongly related to the potential of the business environment, the considerations additionally include external factors related to the assessment of the conditions for the development of entrepreneurship in which the surveyed companies operate. They were divided into subjective and objectified factors. The subjective factors are related to the respondents' assessment of the support activity of organisations and the availability of various support instruments. 
They have been operationalised using a synthetic measure consisting of six indicators expressing, among others, the conditions offered by the local environment to support absorption activities, the stability and attitude of the national support system toward SMEs, and the diversity and activity of small business environment institutions. The analysis of scale reliability revealed an acceptable level of alpha Cr. = 0.877, and the average rating on the scale from 0 to 100 was only 27. An objective evaluation of the support segment was made on the basis of a synthetic index consisting of seven dimensions concerning, i.a., access to finance for SMEs, public support programmes for small businesses, R&D and technology transfer as well as commercial support institutions for SMEs. Their level was determined based on the results of the panel of national entrepreneurial experts (NES) as part of the Global Entrepreneurship Monitor (Amorós & Bosma, 2014) and rescaled to the range from 0 (worst conditions) to 100 (best conditions for the development of entrepreneurship). The average result was 38. Based on the set of adopted variables, a multiple regression analysis was carried out to assess the impact of strategic absorption factors and the conditions for entrepreneurship development on the process of support acquisition and use by the surveyed SMEs. The results are shown in Table 1. Three models were chosen for the analysis, in which the following difficulties were assumed as dependent variables: (1) in the overall absorption process, (2) in the support acquisition phase, and (3) in the support use phase. All models turned out to be statistically significant, and the analysis showed a significant impact of the identified strategic factors on reducing difficulties in the process of absorbing support by the surveyed companies. The analysis of the coefficient of determination indicates that the impact is stronger in the acquisition phase than in the phase of using support. Discussion The results achieved are consistent with specific findings from many existing research studies. They confirm that the SMEs surveyed use support to a very small extent in relation to their development needs. Low support absorption activity by small business has been observed for many years both in Polish (e.g. Borowiecki & Siuta-Tokarska, 2008, pp. 264-268) and European research (European Central Bank, 2014). This problem has been stressed in recent years by, among others, Lisowska (2017), who indicated that only 28.5% of surveyed SMEs (out of 353 companies) received support from business environment institutions. Interestingly, the small scope of support use also concerns innovative SMEs, which constitute a group to which a significant part of aid programmes and the institutional offer is addressed (Cyran, 2016). One of the factors behind the low absorption activity of SMEs is the existence of numerous barriers to support use, which has also been the subject of many previous studies (e.g. Szara, 2012; Wach, 2008). These barriers include, among others, the selectivity of public support (North, Smallbone, & Vickers, 2001), unequal access of SMEs to aid programmes, insufficient information on forms of financial support (Borowiecki, Siuta-Tokarska, Thier, & Żmija, 2018), as well as many formal and systemic barriers (Sawicki, 2019). 
Such barriers were also identified in the research presented in this paper, with an innovative approach to their analysis including the division into two groups related to: (1) acquisition and (2) use of the support. Moreover, it was observed that the level of difficulty decreases significantly with an increase in the extent of absorption. This is consistent with the findings of Braidford & Stone (2016), who indicated that the experience gained through a long and diverse history of business-support usage increases interest and efficiency in the use of support in small business development processes. However, the novel contribution of the achieved research results to the development of management sciences includes the identification and assessment of strategic factors determining the absorption of support in SMEs development processes. The key determinants turned out to be factors related to the support environment, including the development of knowledge about the support availability and the costs, benefits and threats resulting from the use of various support instruments. In the phase of support acquisition, companies should pay particular attention to the mobilisation and concentration of resources and ensure the involvement of managers and employees in absorption activities. In the phase of using the support, the social aspect was very important, mainly due to the leadership of owners in the absorption activities as well as the acquisition and utilisation of support in an ethical and socially responsible manner, which has a beneficial effect on building long-term and partner relationships with the support segment. The factors related to the management and organisational system play a smaller role in the process of support absorption, especially in the phase of its use. This is due to the qualitative features of companies from the SMEs sector, which generally apply simplified management methods and simplified organisational solutions; this particularly concerns the smallest companies. However, the internal strategic factors are complemented by appropriate conditions for the development of entrepreneurship and the applied infrastructure and institutional solutions in the support system. The results indicate that objectified factors play an important role here, being a consequence of strategic assumptions adopted and implemented at the national and regional level in the field of small business support policy. The proposed approach develops and significantly improves the results of existing studies, which were fragmentary in their scope. Prior research by Han & Benson (2009) concentrated on such factors determining the support absorption of SMEs as company characteristics (e.g. size, resources) and owner characteristics (e.g. gender, education, business experience). Johnson, Webber, & Thomas (2007) analysed the impact of the company's location, engagement in R&D activities and the firm's orientation toward growth on the scope of external business advice services usage. Similar factors (e.g. size of the company, sector of activity, entrepreneur gender, management orientation toward growth) were also analysed by Mole, North, & Baldock (2016). However, this study took into account the SMEs' demand for support and their desire to solve development problems with the use of external support as factors influencing absorption activity in small business practice. Prior studies were also more concentrated on the operational factors of support usage, e.g. 
communication of entrepreneurs with advisors and support institutions (Mole, Hart, & Roper, 2014). Therefore, the weakness of the previous research is the lack of a more comprehensive approach to the identification and assessment of key (strategic) factors determining the course of support absorption in SMEs. This gap is filled by the set of strategic factors determining the support absorption in SMEs development processes identified on the basis of the research results presented in this paper. This integrated approach provides a basis for improving efforts to acquire and use external support for micro-, small- and medium-sized enterprises. It also makes it possible to obtain an absorption rent (cf. Czakon, 2015; Niemczyk, 2013) related to the advantage over other enterprises in the area of the ability to effectively identify and use opportunities emerging in the support segment in the long-term perspective. Conclusions The study showed a beneficial effect of the identified strategic factors, which facilitate and improve the course of the acquisition and use of support instruments by firms from the SMEs sector. These determinants include five groups of strategic factors: social, support environment, resource, management system and organisational. However, business environment factors also play an important role, including shaping a suitable climate for the development of entrepreneurship in a given area. The obtained results indicate that the proposed approach explains about 30% of the variability in terms of difficulties arising in the acquisition phase and about 20% of difficulties arising in the support use phase. This indicates the existence of a number of additional determinants of the process of support absorption, not considered in the framework of this research. Further analyses should therefore be aimed at searching for specific factors determining the course of the absorption process, taking into account the diversity of the acquired support instruments. However, this requires a transition from the strategic level to the operational level and perhaps carrying out additional ethnographic research based on a qualitative research approach. When implementing the presented solutions, the limitations of the conducted research should be taken into account (Geletkanycz & Tepper, 2012). They mainly result from the use of an inductive study approach (Popper, 2014) and of survey research as the research method (Nardi, 2018). First of all, they include the high subjectivity of the respondents' opinions and the multidimensional character of the theoretical constructs under consideration, which may be understood in various ways by the participants of the study. A further problem is the declarativeness of the answers provided by respondents, with no guarantee that the barriers and factors analysed are actually found in the surveyed companies. These analyses certainly need to be continued, which will allow for further, more detailed results and conclusions. Further promising directions of research include, in particular: identification and assessment of mediators and moderators of the impact of the strategic determinants recognised in this study on the process of support absorption in SMEs sector companies, as well as identification and assessment of specific operational factors determining the absorption of various forms of support by SMEs sector companies. 
It is also worth considering the extension of the quantitative analysis with case studies of SMEs with successes and failures (good and bad practices) in absorption activity. A practical implication of this research may be the preparation of diagnostic tests which will allow the assessment of strategic readiness of SMEs to obtain and effectively use support in development processes.
2020-10-08T11:45:02.554Z
2020-09-01T00:00:00.000
{ "year": 2020, "sha1": "b89282239dca8dcab8122d7b179a78e1f3929aba", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.22367/jem.2020.41.01", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "b89282239dca8dcab8122d7b179a78e1f3929aba", "s2fieldsofstudy": [ "Business", "Economics" ], "extfieldsofstudy": [ "Business" ] }
245103065
pes2o/s2orc
v3-fos-license
Even Electric Trains Use Coal: Fixed and Relative Costs, Hidden Factors and Income Inequality in HSR Projects with Reference to Vietnam’s North–South Express Railway : High-Speed Rail is often advertised as a sustainable alternative to air travel, and accordingly numerous initiatives for the construction of new HSR infrastructure are currently being pursued across Southeast Asia and the globe. However, beneath promises of “zero-emissions travel” frequently lie numerous hidden factors—how much steel is needed to build the railway? What energy sources are being used to generate the electricity which drives the train? Moreover, how many passengers are required for the train to be efficient relative to other forms of transport? This paper seeks to examine these questions to uncover what “hidden factors” may be present in HSR, using Vietnam’s proposed North–South Express Railway (NSER) as an example. This study calculates the CO 2 emissions likely to be produced by the NSER from the construction steel and the power consumed in operation using publicly available data on the technical standards of the railway and existing data on emissions per energy source, combining this data with market size analyses of the central provinces of the proposed line based on official population and income statistics across a range of scenarios to estimate what level of ridership will be required to outperform an equivalent-length air journey. The research finds that under current projections, the HSR may emit more CO 2 per end-to-end journey than a plane, that even in per-capita terms the emissions may be worse depending on the seat fill rate, and that the market size of Vietnam’s central provinces will present significant challenges in ensuring that the railway is efficient enough to outperform the plane in ridership terms. This demonstrates both the outstanding impacts of coal and other fossil fuel use in the energy mix and the potential link between environmental performance and regional inequality which constitute the hidden costs in HSR projects, and the exacerbated risks to the environment posed by inequality. Introduction From Britain's HS2 line to Japan's Chuo Shinkansen maglev project, high-speed rail infrastructure is currently a hot topic among policymakers around the world, with Southeast Asian leaders among the forefront of this push. Vietnam's proposed North-South Express Railway (NSER), initially proposed in the early 2000s and recently revived by the Vietnamese government [1], is no exception. High-speed rail projects are often pitched as sustainable and environmentally friendly modes of long-distance transport, but the realities can be much more complex (and often inconvenient) than the promises. This project was initially based on three central hypotheses-each of these now forms one section of this article. These were: • That the steel used in the railway's construction would generate significant quantities of CO 2 which would take years to "pay off" next to air travel when accounting for the coal used in production; • That the coal in Vietnam's projected energy mix would make the theoretically allelectric NSER a major source of CO 2 emissions despite the lack of direct fossil fuel use; • That the per-capita CO 2 cost of taking the NSER may, depending on passenger usage and ridership, actually end up higher than taking an equivalent flight between Hanoi and Ho Chi Minh City due to the prevalence of coal in the energy mix. 
These hypotheses stemmed from the author's main research project on Vietnam, which is one of the world's fastest-growing users of coal-having doubled the amount it imported between 2018 and 2019 [2], and continuing to invest in new coal power generation, which is expected to comprise more than half of Vietnam's energy mix in output terms by 2030 [3]. Coal is used heavily in both the energy sector and also in the booming steel industry-Vietnam experienced the highest compound annual growth rate in steel consumption in the ASEAN-6 between 2000 and 2018, and in 2018 consumed 20 per cent of all the steel in this group [4]. Moreover, in steel production capacity Vietnam is already the second-highest among the ASEAN-6, and it is predicted by the South East Asia Iron and Steel Institute that it will eventually grow to have the highest production capacity in the region by some margin [4]. Given that steel is still being used widely, the author began to question the environmental credentials of such an enormous mega-project as the NSER, which would require huge amounts of steel for construction and then ultimately run on electricity generated to a large degree by coal power stations. The JICA feasibility study is limited in answering how under these conditions the NSER project could be sustainable. To date, the academic literature provides limited analysis on the issue, with much of the analysis focused primarily on the economic perspective and with the environmental perspective still underdeveloped. This research seeks to carry out a supplementary analysis of the viability of the railway in relation to the "hidden factors" from the environmental perspective by finding answers to the following four research questions: 1. Will the fixed cost of the construction steel for the rails in terms of CO2 emissions lead to a situation wherein the train cannot be competitive against air travel in the short term? 2. What will the impact of coal and natural gas in Vietnam's national energy mix be on the fixed, end-to-end CO2 emissions of the NSER? 3. What level of ridership will be required to make the NSER more efficient than air travel in terms of the relative costs in per capita emissions? 4. Are the socioeconomic and market conditions in Vietnam likely to meet the threshold projected in relation to question three? These research questions seek to address existing gaps in the literature (as elucidated in the literature review section) to examine factors in HSR projects which are discussed less and to provide insight into the linkages between environmental performance in relative terms and regional inequality. The study provides a specific and unique analysis of the problems faced by Vietnam's NSER, but it also presents findings which can be more widely generalized and can better inform policy on HSR projects around the world-specifically on relative performance within a national power grid and in relation to regional inequality, particularly in developing countries where income levels have not always reached a level at which they can provide consistent disposable income for residents. Providing answers to these questions will allow for the creation of more informed policy on HSR systems based on market conditions in countries in which they are proposed-perhaps encouraging the creation of policy from a holistic perspective which incorporates a stronger analysis of secondary factors beyond direct carbon emissions in environmental performance. 
The aim of this article is not to pronounce on whether the Vietnamese NSER will ultimately be environmentally beneficial or whether it will be fiscally viable. Indeed, there are compelling reasons to build the railway-for the purposes of enhancing national prestige, for the symbolic connection of two halves of a country which suffered deeply during the Cold War, for the purposes of economic stimulus and job creation, et cetera, regardless of the environmental and fiscal factors at play. For instance, there is evidence to suggest that HSR is popular among tourists and people wishing to visit nearby or intermediate cities [5,6], and it is likely that for cities along Vietnam's coastline which have a tourism focus such as Hue, the NSER would be extremely beneficial in that trips from Hanoi and Ho Chi Minh City would become "day-trippable" by reducing the projected travel time to only between two and three hours or less. Likewise, there is evidence to suggest that the availability of HSR infrastructure contributes to economic development in other fields, such as real estate [7]-in this sense, this paper does not seek to go against the HSR literature on potential long-term economic impacts. Rather, the aim of this article is to encourage conversation about "hidden factors" in construction and operation which can encourage the long-term sustainability of the project and minimize fixed and relative environmental costs, with a narrow focus on the consumption of coal, the generation of CO2, and how this will compare to aviation under a range of scenarios. If the line is completed, it simply will generate a fixed amount of CO2 per journey, and it is important to encourage the consideration of ways to make this as efficient as possible in engineering, operational and ridership terms. This paper is structured as follows. It begins with a brief literature review, assessing some of the existing gaps in knowledge which this paper seeks to address. This is followed by a comprehensive outline of the methodology and materials used in the paper, beginning with the baseline emissions for a comparable air journey between Hanoi and Ho Chi Minh City, and followed by the calculation methodology used to find the CO2 emissions from the construction steel, the end-to-end journey by HSR on Vietnam's national energy grid, and the seat fill rate necessary to ensure sustainability in relative terms. The results of these findings are each analyzed in turn, and are then discussed at length in relation to the social, economic and market conditions in Vietnam. Finally, policy implications are given for the findings to each of the research questions. Literature Review The NSER itself has thus far attracted little direct interest in the academic literature, and what little analysis of the NSER exists largely focuses on the economics of the project. Kikuchi and Nakamura's [8] paper employs an economic analysis which attempts to determine the potential profitability of the rail line under a range of potential scenarios, including the possibility of maintaining non-rail business ventures on the line and the possibility of adding cross-border sections to link to the NSER. This study is interesting particularly in relation to the third section of this article which will examine potential ridership rates; however, it is limited in that the comparative analysis it conducts is based on Shinkansen use in Japan. 
It does not consider regional issues in Vietnam, which is a significant limitation considering the difference in economic development and income levels between the two countries-this is an area which this study seeks to address by conducting a projected market analysis of the different provinces in the central portion of the line, supplementing the existing analysis in the paper with more data specific to the Vietnamese NSER. As the focus is on profitability, the analysis also excludes an examination of the environmental impact of the railway-a consideration which must ultimately be addressed considering that the JICA feasibility study [3] (pp. 11-12) considers reduced greenhouse gas emissions to be a direct benefit of the project, and considering the broader international context of decarbonization. Beyond this, the NSER is mentioned in passing in several papers relating to infrastructure development, such as Yoon and Doan's [9] paper on Hai Phong Port, but there is little focus on the NSER project specifically in the wider literature-this is another gap which this study seeks to address, considering both the economic scale of the project and the national market and energy mix context which it will face. There is somewhat more discussion in the literature of Vietnam's wider infrastructure needs, but the linkages to environmental performance in existing studies are weak since they tend to focus on purely economic factors. For instance, Tran [10] discusses the impact of different infrastructure levels on FDI across the different regions of Vietnam, noting that FDI was unevenly distributed in part due to poorer infrastructure in rural areas of Vietnam. This is, perhaps, one of the issues which the Vietnamese government hopes the new rail link will help to alleviate. However, while this is indicative of one of the causes of regional inequality, it does not address a potential link to demand-rather, the analysis is focused around the needs of investors rather than individuals, and it again does not account for the issue of environmental sustainability-a relationship which this study seeks to establish the extent of in the context of relative environmental performance. Likewise, Nguyen et al. [11] discuss the success factors in public-private partnerships for infrastructure in Vietnam-a category which the NSER will fit under-and conclude that financial feasibility is an important factor in ultimate project success. However, it is beyond the scope of their study to assess the factors which influence project success on an individual level-they do not consider the income disparities or demographic differences present in Vietnam or discuss in detail other factors which are likely to underpin financial feasibility. This is again an issue that this study addresses-this study engages in a market size analysis specific to the NSER which permits a discussion of the feasibility of the project in terms of environmental performance, which, while not the main objective of the paper, provides insight into economic market size in the central regions of Vietnam as well. While there is little discussion of environmental factors in Vietnam's transport infrastructure needs, there is considerably more criticism in the wider literature of the energy sector in Vietnam-indeed, in recent years, Vietnam has been sharply criticized for its investments in coal power generation and there is wide academic and political consensus urging Vietnam to move to cleaner forms of energy. Dorband et al. 
[12] focus on the political economy of coal use in Vietnam, concluding that vested interests drive its use-but they also urge international financiers to move toward renewable energy investments. Likewise, Tran [13], in predicting an increase in greenhouse gas emissions under a business-as-usual scenario, notes the critical importance of increasing renewable energy in the energy mix. This discourse, being related to this study, is an important one to consider in relation to HSR-HSR, by its very nature, uses considerable amounts of energy, and so even if the trainsets themselves do not directly emit CO 2 , they are straining an energy supply which does. Considering the energy mix specific to Vietnam in assessing the environmental performance of the NSER is of vital importance. In assessing the CO 2 emissions and CO 2 savings made by HSR, comparable analyses have been conducted in other national contexts. Of particular relevance from the recent scholarship on the matter, considering that this study focuses on air travel as a baseline comparison, is the paper by Avogadro et al. [14] which considers both travel time and costs when substituting air travel with HSR in the European market and ultimately concludes that emissions savings are likely if certain short to medium distance air routes which can realistically be substituted for rail are discontinued, albeit with significant caveats for regional accessibility and with varying impacts per EU member state. While ostensibly this bodes well for the Vietnamese NSER, in reality the comparison cannot be made directly-European energy mixes are extremely different to those seen in Vietnam, and in any case it is unlikely that the Vietnamese government would stop flights outright between Hanoi and Ho Chi Minh City considering both the fact that the government has an 86.16% share in the Vietnam Airlines [15] and that the route is the sixth-busiest in the world [16]. Likewise, on the costs side, as with the paper by Kikuchi and Nakamura [8], it is not realistic to translate the cost variable to Vietnam where the base level of income is likely to be considerably lower, even if some European countries actually perform worse on equality metrics such as GINI (most notably Italy) [17]. Perhaps more comparable to Vietnam are China and Turkey, and both of these have seen studies on the CO 2 performance of their respective HSRs. A study by Chang et al. [18] examines "cradle-to-grave" emissions including the construction and operating costs of the Beijing-Shijiazhuang HSR, and their study concludes that while the HSR is ultimately preferable to air or road travel, the environmental performance can be improved by increasing ridership. Their methodology for calculating construction costs goes beyond what is possible in the scope of this study, incorporating materials other than steel and accounting for energy costs in the construction process, but their findings are interesting in that, prima facie, they confirm some of the hypotheses in this study. However, their chosen case study route is considerably shorter-only 281 km in distance compared to the 1541 km of the Vietnamese NSER [18,19]. As such, different considerations may be at play in relation to consumer behavior. The shorter distance, the journey time and the relative costs of the options available will be vastly different, and so a specific and tailored analysis of the NSER's conditions is necessary. Perhaps more comparable is the Beijing to Shanghai HSR-a study of which, also cited by Chang et al. 
[18], was carried out by Yue et al. [20]. This study conducts a life-cycle assessment, including an examination of the role of the energy mix in the HSR's performance and a recommendation that China move away from fossil fuel-based energy [20], but it is again placed in a very different social and economic context from the NSER, with considerably different conditions along the route given China's economic geography and the privileged position of the coastal areas relative to the inland areas. As such, this paper seeks to conduct an analysis which is specific to Vietnam-while the Chinese HSR is perhaps the closest, it is still not a perfect comparison, and a tailored, project-specific approach is necessary. Outside of China, the most directly comparable HSR system is likely to be Turkey's, on which an analysis of two HSR lines was conducted by Dalkic et al. [21]. This again seems ostensibly to support some of the hypotheses laid out in this paper, for instance in concluding that cost and travel time are likely to be barriers which reduce HSR use and that energy mix is a vital factor in estimating greenhouse gas emissions. However, this particular paper focuses largely on capturing road passengers rather than air passengers [21], and it is again specific to a single national context. If anything, this literature review has made clear the importance of conducting a specific, tailored analysis of HSR projects on a case-by-case basis, since the economic, social and demographic factors are vastly different in each proposed location and project. This paper therefore addresses this literature gap in relation to the Vietnamese NSER-a proposed line on which limited policy research has been conducted beyond the feasibility studies carried out by JICA-and enriches the literature on the relationship between energy mix, inequality, market size and environmental performance in relation to HSR more broadly.

Materials and Methods

This paper used the statistics and estimates contained within the October 2019 Japan International Cooperation Agency (JICA) Data Collection Survey on Vietnam's proposed NSER [19] as the basis for most of the projections and modeled scenarios. The reason for this is that it is the most recent and most detailed proposal for the NSER, and that it contains detailed information on proposed technical standards on which estimates can be formulated. All estimates were based on the so-called "two-step scenario" laid out in the study, under which the entire line would be expected to be open by 2040 [19]. This was selected over the alternative "five-step scenario" because the latter would only open fully in 2070, and predictions and projections for such a distance into the future would be impossible to make with any reasonable degree of rigor in the context of this study. Where appropriate, additional data provided by other ministries and agencies in Vietnam and elsewhere, such as the Ministry of Natural Resources and Environment (which permitted further comparison of Shinkansen emissions factors and policy), and by various NGOs and industry bodies, including the International Energy Agency and the World Steel Association, were used. This study was broadly split into three areas: the carbon emitted in producing the rails, the absolute carbon cost of each one-way journey along the entire length of the proposed line, and the "per capita" CO2 costs under several projected passenger load scenarios.
The numerous data points were synthesized to project the potential carbon emissions of the NSER-both in construction and in operation-in comparison to an air journey between Hanoi and Ho Chi Minh City, which formed a comparative baseline. First, the emissions for the air journey were based on the data provided in the UK government's greenhouse gas conversion factors for company reporting, using the figures specified for short-haul flights [22], with the assumption that 184 passengers (the capacity of a Vietnam Airlines Airbus A321 [23]) generate 19,416.6 kg of CO2 per journey among them (105.525 kg/capita) on a flight between Hanoi and Ho Chi Minh City. This is naturally limited to some degree: there are numerous airlines and plane types serving this route, with different passenger capacities and carbon emissions; this figure only covers the A321, and not every seat will be full on every flight. There were several reasons why it was selected over other plane types. First, of the 16 flights which Vietnam Airlines operates on a single day on this route, 6 are A321s, which is tied with the Airbus A320 as the most-used plane type on this route [24]. This was based on 16 July 2021, a randomly selected date. As the route is so busy, there is unlikely to be significant variance between different working days over the course of a given year, although any date suffers from being selected during the period of reduced pandemic air passenger demand. Moreover, Vietnam Airlines does not permit access to historical flight schedules, so data could only be collected for a date in the future relative to the data collection period, which took place in July 2021. Second, both the A321 and the A320 have similar specifications, with only slight differences in fuel consumption and seat capacity [22]. Both planes are also marketed by Airbus as having "unbeatable fuel efficiency", and so they represent planes which are positioned as "environmentally friendly" by air transport standards [25,26]. Beyond this, the A321 was selected over the A320 because it is the aircraft which Vietnam Airlines chooses to market more heavily-it features prominently on the website, while the other plane type does not [23]-and so it is likely the aircraft Vietnam Airlines would wish to be represented by on a domestic route. This route was also, in 2019, the sixth-busiest domestic air route in the world [16], so seats would be likely to be mostly or fully occupied much of the time. This approach was favored over the ICAO emissions calculator because it permits specificity of plane type, whereas the ICAO calculator aggregates across all plane types used on the route, making the figure less precise, since per-seat emissions cannot be calculated without selecting a specific model of plane. To summarize, the emissions for the plane were based on an Airbus A321 and the greenhouse gas conversion factors used by the UK government for company reporting, based on short-haul flights and assuming a full passenger load of 184 passengers. The key data necessary for the calculation of the carbon cost of the steel production for the rails were the length of the proposed railway (1541 km) and the rail weight and type (60 kg per meter rails conforming to Japanese Industrial Standards), along with the carbon emissions per kilogram of steel produced. The former data are included within the JICA survey [19] (pp. 1-21), while the latter are based on the estimates provided by the World Steel Association, an industry body, which estimates that, averaged across the globe, 1.9 tonnes of CO2 are produced per tonne of steel [27]. This is a rather crude estimate: no details on methodology are provided in the source, and the figure does not specifically target the Vietnamese steel industry or account for any specific pitfalls or inherent advantages within it. However, collecting these figures to a higher degree of accuracy would be beyond the scope of this article, and the figure provided a useful baseline from which to reasonably estimate the carbon emissions from the steel production needed for the railway. In terms of construction costs, additional CO2 will be created in the production of the concrete needed for slab sections of the track; however, it was not possible to calculate the CO2 emissions from this with any degree of precision based on the available data. To summarize, this is a simple calculation of the CO2 emitted, per the World Steel Association's [27] estimate of 1.9 metric tonnes of CO2 per tonne of steel, calculated across the length of the entire railway and assuming that it is double-tracked (four rails in total) at 60 kg/m in weight.
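For transparency, the rail-steel arithmetic just described can be reproduced as a minimal sketch in Python; the inputs are only the figures already cited above (1541 km of double track, four rails at 60 kg/m, and the World Steel Association's 1.9 t of CO2 per tonne of steel), and the script is purely illustrative rather than part of the study's own tooling.

```python
# Sketch reproducing the rail-steel CO2 estimate described above; inputs
# are the cited figures, and the script is illustrative only.

LINE_LENGTH_KM = 1541          # JICA survey route length
RAILS = 4                      # double track: two tracks x two rails
RAIL_KG_PER_M = 60             # JIS 60 kg/m rail
CO2_T_PER_T_STEEL = 1.9        # World Steel Association global average

steel_t = LINE_LENGTH_KM * 1000 * RAILS * RAIL_KG_PER_M / 1000
steel_co2_t = steel_t * CO2_T_PER_T_STEEL

print(f"Steel required: {steel_t:,.0f} t")       # 369,840 t
print(f"CO2 from steel: {steel_co2_t:,.0f} t")   # 702,696 t
```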
Key data necessary to calculate figures on the potential CO2 emissions from the Shinkansen were combined from several datasets. The energy consumption figure for the 10-car E5-series (31.7 kilowatt-hours per kilometer) is contained within the JICA survey [19] (Appendix pp. 14-15). This permitted a calculation of the energy consumption across an entire end-to-end journey, which could then be combined with data on Vietnam's projected energy mix in 2030 (again found in the JICA survey) to determine how much of the provided electricity was likely to come from fossil fuel-based sources. Coal and gas were used here, since other fossil fuels are not expected to form a significant portion of Vietnam's future energy mix. By 2030, Vietnam's projected energy mix in terms of actual power generated in billions of kilowatt-hours will include 53.2 per cent coal and 16.8 per cent gas, which will comprise the two largest sources in the energy mix [3]. The survey also provides the figures in terms of projected generation capacity, but it is more accurate to use the actual output, since the capacity includes a high proportion of hydroelectric dams which fluctuate in performance over time. In essence, this calculation took the energy used across the entire journey, based on the estimate for a 10-car E5-series Shinkansen, and then divided the journey into "shares" based on the proportion of each fuel type in the energy mix as it is predicted to be in 2030. For coal-based CO2 emissions, the above figures were combined with data provided by Finenko and Thomson [28], who provide estimates of CO2 emissions per megawatt-hour of generated electricity in Vietnam, comparing across a range of scenarios with different fuel mixes and technology types. For the purposes of this paper, the estimate used was based on the 2010 figure specified by Finenko and Thomson of 1056 kg of CO2 per megawatt-hour. This figure was used as the "worst-case scenario" as it was the only confirmed, "real" figure in the data and is not a projection.
This was supplemented by a "best-case scenario" estimate of approximately 730 kg of CO2 per megawatt-hour if Vietnam switched entirely to the best current technology (ultra-supercritical plants) and fuel mix (bituminous coal) by 2030 [28]. A range of other potential scenarios beyond these two figures are projected in Finenko and Thomson's paper [28], including a business-as-usual scenario and an expansion of subcritical plants using anthracite coal, both of which predict a small rise in CO2 emissions by 2030 versus the 2010 figure. However, since there is only a small difference between these high 2030 estimates and the "worst-case scenario" 2010 figure, it is preferable to use the "real" 2010 figure where possible. The "best-case scenario" is useful because it is the most emphatic in demonstrating the savings in CO2 emissions which can be gained by switching to better technologies-even if the scenario itself is improbable, it most clearly demonstrates the benefits of moving to cleaner technologies. In essence, this means that two estimates are provided, based on the figures in Finenko and Thomson's [28] paper: one representing a continuation of the 2010 energy mix and one representing a hypothetical scenario in which all coal plants were upgraded to the cleanest possible coal technology. The calculation was again relatively simple: the megawatt-hours of electricity generated by coal were multiplied by the requisite CO2 emissions per megawatt-hour to provide the overall level of CO2 emissions. Natural gas emissions per megawatt-hour are based on Vietnamese government data: in 2018, Vietnam generated 39,772,700.73 megawatt-hours of electricity from gas turbines, generating 17,272,563.05 t of CO2 [29]. A simple division of the generated CO2 by the megawatt-hours of electricity showed that the gas power in the energy mix produces roughly 0.4342818751 t of CO2 per megawatt-hour, which was rounded down to 0.43 t to provide the working figure of 430 kg of CO2 per megawatt-hour used in the calculations made within this paper. These figures are based on official data, and the slight rounding down gives the train the "best chance" of outperforming the plane on CO2 emissions. The methodology here is the same as for coal, except that the coal figures were replaced with natural gas figures: the natural gas-based megawatt-hours of electricity were multiplied by the 430 kg of CO2 per megawatt-hour calculated above. An additional, hypothetical "gas-only" scenario is also provided to demonstrate the CO2 savings which could be achieved if coal were replaced with natural gas; here, the megawatt-hour share of coal was simply applied to natural gas instead. With these figures, it was possible to calculate how much of the Shinkansen's journey across Vietnam will theoretically be powered by each energy source, and by extension how much CO2 will be emitted per end-to-end journey on the NSER. If the E5-series uses 31.7 kilowatt-hours per kilometer, then it will use approximately 48.88 megawatt-hours across the entire journey. It is then a simple matter of applying the percentages of coal and natural gas in the energy mix to this figure to determine how many megawatt-hours come from each energy source, and then multiplying the number of megawatt-hours by the CO2 emission factors collected across the different data sources.
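The per-journey electricity calculation described above can likewise be sketched in a few lines of Python, using the paper's working figures (48.88 MWh for a 10-car journey, the 2030 energy mix shares, and the coal and gas emission factors discussed above); this is an illustration only, and small differences from the figures reported in the results section may arise from the rounding of intermediate values.

```python
# Sketch of the per-journey electricity CO2 calculation described above.
# Inputs are the paper's working figures; differences from the reported
# results can arise from rounding of intermediate values.

ENERGY_MWH = 48.88                      # 10-car E5-series, full end-to-end journey
COAL_SHARE, GAS_SHARE = 0.532, 0.168    # projected 2030 energy mix shares
GAS_KG_PER_MWH = 430                    # derived from Vietnamese government data
COAL_FACTORS = {
    "high (2010 coal fleet)": 1056,     # Finenko and Thomson, "real" 2010 figure
    "low (ultra-supercritical)": 730,   # best-case coal technology
}
PLANE_CO2_KG = 19_416.6                 # A321 baseline, full one-way flight

for label, coal_kg_per_mwh in COAL_FACTORS.items():
    coal = ENERGY_MWH * COAL_SHARE * coal_kg_per_mwh
    gas = ENERGY_MWH * GAS_SHARE * GAS_KG_PER_MWH
    total = coal + gas
    print(f"{label}: {total:,.0f} kg CO2 ({total / PLANE_CO2_KG - 1:+.1%} vs. the plane)")

# Hypothetical "gas-only" scenario: coal's share of generation served by gas.
gas_only = ENERGY_MWH * (COAL_SHARE + GAS_SHARE) * GAS_KG_PER_MWH
print(f"gas-only: {gas_only:,.0f} kg CO2 ({gas_only / PLANE_CO2_KG:.1%} of the plane's emissions)")
```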
This provides base estimates of the indirect CO2 emissions caused by the electricity consumed by the train across several scenarios-emissions which will be absolute and independent of seat occupancy. The results of these figures are presented in Table 1 in the results section. The final component of this paper focuses on per-capita emissions on the plane versus the train. The plane was assumed to be at full capacity-in the case of the Vietnam Airlines A321, this is 184 seats [23]-and considering that the route was the sixth-busiest air route in the world pre-pandemic [16], this is a fair assumption when working out per-capita emissions for the route. For the Shinkansen, a number of scenarios are projected, with seat occupancy rates set at 10 per cent intervals. This is because average seat occupancy across the whole journey is difficult to predict accurately, with varying predictions based on journey time and differing calculation methodologies. This approach is similar to the one used in the Chang et al. [18] study, although their study divides the emissions of the entire life-cycle of the HSR across the passenger fill-rate rather than just the emissions of a single journey. The JICA survey predicts that passenger demand will stay relatively consistent across all the sections of the railway by 2050, with a variance between 117,000 passengers per day in the least-used Nam Dinh-Ninh Binh section (three sections of the railway outside Hanoi, connecting two smaller cities) and 150,000 in the most-used section between Long Thanh and the Thu Thiem terminus station in Ho Chi Minh City [19] (pp. 2-19). This averages out to a 70 per cent seat occupancy rate across all sections [3] (pp. 5-7). However, this is questionable: the prediction model used in the study is based on a JICA study from 2013 [19], and the modeling in that study is based on traffic demand data from between 2008 and 2010 [30]. The intervening period has seen considerable shifts in Vietnam's passenger transport market, which has seen extremely high passenger demand and the expansion of low-cost airlines in what is now a highly competitive space with three competing low-cost carriers (LCCs) [31]. This is only predicted to expand further, with ten airport construction and expansion projects planned before 2030, including in the central regions of Vietnam, such as a new airport in Quang Tri and a new terminal at Da Nang Airport [32]. To summarize, seat fill-rates were projected at 10% intervals, and the CO2 emissions per one-way journey were then divided between the passengers at these intervals to estimate per-capita CO2 emissions at each interval. Furthermore, the exact point at which the train becomes more efficient in per-capita terms was calculated by taking the total emissions of the train journey and dividing them by the per-capita CO2 emissions of the plane journey, which provided the precise number of filled seats needed to exceed the environmental performance of the plane. This was then converted into a percentage across both the high-emissions and low-emissions scenarios and on both the 10 and 16-car trains.
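A minimal sketch of the seat fill-rate interval calculation just summarized is shown below, assuming the 10-car high-emissions per-journey total of roughly 30,991 kg of CO2 produced by the previous sketch; the intervals and the comparison threshold follow the methodology described in the text.

```python
# Sketch of the 10 per cent seat fill-rate intervals described above, using
# the 10-car high-emissions per-journey total from the earlier sketch
# (~30,991 kg of CO2). Purely illustrative.

TRAIN_TOTAL_KG = 30_991
SEATS = 740                     # 10-car E5-series
PLANE_PER_CAPITA_KG = 105.525   # A321, 184 seats

for fill_pct in range(10, 101, 10):
    passengers = SEATS * fill_pct // 100
    per_capita = TRAIN_TOTAL_KG / passengers
    flag = " <- beats the plane" if per_capita < PLANE_PER_CAPITA_KG else ""
    print(f"{fill_pct:3d}% ({passengers:4d} passengers): {per_capita:6.1f} kg/capita{flag}")
```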
There is also the significant question of regional inequality: between the two economic centers of Hanoi and Ho Chi Minh City lie a number of significantly poorer regions. Another factor is that evidence from other countries, including Japan, suggests that demand for rail diminishes with distance [8]-and even with HSR, the journey time between the termini would be over five hours, more than double that of an equivalent flight. Vietnamese statistical data, sourced from Vietnam's General Statistics Office, provincial government websites and state-owned regional and national newspapers, were combined with average income data from Open Development Mekong [33], and three estimates of provincial inequality in Vietnam are provided. The first uses a flat 7.076% compound annual growth calculation over the twelve years between 2018 (when the data were collected) and 2030, which is limited because growth rates differ across the country. The second uses actual provincial gross regional domestic product (GRDP) growth rates from 2018, but since annual rates are prone to fluctuation (indeed, Ha Tinh's data deliberately exclude the Formosa Ha Tinh Steel plant, because the plant on its own accounts for more than half of the GRDP growth in that year and is therefore unrepresentative), these figures are also likely to be flawed. The third set is a mean average of the two, and while still imperfect, this allows the two sets to balance each other to some degree. However, it should be stressed that these figures are by no means definitive: predicting future growth is difficult, and since the figures pre-date COVID-19 and do not adjust for its long-term impact (reliable figures which could account for it are unavailable), this dataset is only intended to demonstrate the point that regional inequality is likely to persist. These estimates are combined with provincial population and population growth statistics [34] to create a rough estimate of market size based on the total personal income per province-this is similar to Gross Regional Household Income, but is based on individual rather than household income. Figures are adjusted so that only those of legal working age (15+) are counted, based on United Nations Department of Economic and Social Affairs population statistics [35]. Again, however, these figures are somewhat limited in that they do not account for the impact of COVID-19, the final effects of which are ultimately impossible to predict. These predictions also do not account for overseas tourists, but as the most likely points of entry into Vietnam are the major international airports in Ho Chi Minh City and Hanoi, this is not expected to make a considerable difference: potential tourist HSR users would likely board at or near the terminus stations, thereby only adding to the market size at the termini. The base data for the calculations in this section can be seen in Appendix A. To summarize, three estimates of average incomes in 2030 are provided-one based on a compounded flat national growth rate applied to the 2018 figures, one based on compounded provincial growth rates, and one which uses the mean average of the two to adjust for potential outlying factors in regional growth figures. Provincial population growth rates are applied to existing population figures and compounded to provide an estimate of population size in 2030, and this is then adjusted to count only the working-age population.
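As an illustration of the market-size procedure just summarized, the following sketch applies the income estimates and the working-age adjustment to a single hypothetical province; every province-level input here (population, income, growth rates) is a placeholder for illustration only, not a value from Appendix A.

```python
# Illustration of the market-size procedure above for one hypothetical
# province; all province-level inputs are placeholders, not values from
# Appendix A.

NATIONAL_CAGR = 0.07076        # flat national growth rate (estimate 1)
WORKING_AGE_SHARE = 0.717      # UN DESA working-age (15+) share
YEARS = 12                     # 2018 -> 2030

# Hypothetical example province (placeholder inputs)
income_2018_usd = 2_000        # average annual personal income
provincial_growth = 0.06       # 2018 provincial GRDP growth rate (estimate 2)
population_2018 = 3_000_000
population_growth = 0.01       # annual population growth rate

income_national = income_2018_usd * (1 + NATIONAL_CAGR) ** YEARS
income_provincial = income_2018_usd * (1 + provincial_growth) ** YEARS
income_mean = (income_national + income_provincial) / 2      # estimate 3

working_age_2030 = population_2018 * (1 + population_growth) ** YEARS * WORKING_AGE_SHARE

# Gross Regional Personal Income: the province's hypothetical "pot of money"
market_size_usd = working_age_2030 * income_mean
print(f"Estimated 2030 market size: USD {market_size_usd:,.0f}")
```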
The estimated 2030 population size is then multiplied by the estimated average income per province to provide a market size estimate based on Gross Regional Personal Income, which is calculated in a similar manner to Gross Regional Household Income except that it is calculated on an individual rather than a household level. While these issues are considered in the discussion section, the various complicating factors above make projecting passenger demand with any degree of accuracy impossible, and so it is better to use an approach which captures numerous potential scenarios via average seat fill-rates. While it is true that the train will allow passenger embarkation and disembarkation along the route, the fact remains that the trains will be making end-to-end journeys just as the planes will. This makes assessments of both the end-to-end journey emissions and the per-capita emissions based on average occupancy rate important. Since an E5-series Shinkansen with ten cars has 740 seats [3] (pp. 4-87), the per-capita emissions can be calculated by taking the aforementioned total energy used across the journey and dividing it by different seat fill-rates at 10 per cent intervals based on the total seat numbers. The JICA feasibility study [3] also discusses the possibility of moving to a 16-car model after 2040 with a predicted increase in passenger demand, with this model having 1220 seats. The energy consumption of this model is not disclosed, but a crude estimate is provided by taking the figure for the 10-car model and multiplying it by 1.6. This provides a working estimate of 78.08 megawatt-hours per one-way journey, but it is limited in that it does not account for specific factors such as aerodynamic drag or different carriage weights, such as first-class or dining cars. As such, the figures for the 10-car layout should be considered more accurate.

Limitations of This Study

The data used in this study were naturally limited by several factors. First is data availability: it is not possible within the context of this study to provide a full life-cycle analysis on the level of the studies by Chang et al. and Yue et al. [18,20], because most of the data needed to do so are not available or have not yet been assessed in rigorous detail by the project planners, with specific route details still to be finalized. Much of the data are also commercially sensitive-several parts of the publicly accessible version of the JICA study are redacted, and finding specific data on Shinkansen trainsets is difficult. This is the reason for the limited estimate given for the 16-car E5-series Shinkansen (which does not yet exist even in Japan): with only the figures for the 10-car model available, the only way to provide a workable estimate was a crude calculation which assumed that being 1.6 times larger would mean 1.6 times the energy consumption. It does not, and cannot, compensate for issues such as additional aerodynamic drag, and of course it is not possible to tell in advance how many carriages will be assigned as different types, whether luggage storage, first or business-class accommodation, or dining cars. Because of this, none of the figures provided should be taken as in any way definitive-they are realistic estimates based on the data available, but they are designed to model the points made in the study and should not be treated as technically exact figures.
The second factor is the simple fact that the NSER only exists on paper, and this means that several factors must be estimated or assumed. On energy mix, for example, it is possible that Vietnam will change policy significantly by 2030 and that coal power will have been significantly reduced-indeed, at COP26 Vietnam promised to cease constructing new coal plants and to transition completely away from coal in the 2030s [55], though it remains to be seen whether this will be fully implemented. It is, of course, also possible that the opposite could happen. The use of the feasibility study as a base, while it is the best source of data available on the NSER project, is itself demonstrative of this: the original 2008 passenger demand data, for instance, predate the expansion of LCCs in Vietnam, and so the data collected are of questionable value now. The same may be true of this study in the future-any use or citation of this study should ensure that the points being made are still relevant at the time they are used, because while the estimated scenarios are all as realistic as possible, they are also purely hypothetical in nature. The third factor-which makes the limitation outlined above even more pressing-is the fact that this study was conducted during the COVID-19 pandemic. This will have significant ramifications for regional economic growth in Vietnam which are not accounted for, since the author felt that the pandemic recovery timescale could not be predicted and that attempting to do so would bias the economic growth estimates provided. This means that, in all likelihood, the actual growth in regional incomes will be somewhat lower than the estimates provided in this study. Again, they should by no means be taken as definitive because of this. Similarly, the flight data provided were collected during the height of the pandemic, at a time of reduced air passenger demand-but since Vietnam Airlines does not provide historical flight schedules, this was the only option open to this study. Finally, the comparison with the plane is significantly caveated by the fact that it only covers one type of aircraft on a single airline with a single configuration. The route is operated with numerous aircraft types-some larger and some smaller than the A321-and each of these varies in both CO2 emissions and in passenger load (and therefore per-capita CO2 emissions). The A321 was deemed the best option for this study: it is a modern aircraft which is likely to remain in use and which has relatively good emissions performance among planes, and it therefore sets a high benchmark for the NSER to compete against, since emissions from new aircraft types are likely to continue to decline. However, the NSER's performance may vary in relative terms against other types of aircraft depending on their size, emissions and passenger load, and so this limitation should be considered by readers of this study.

Results

This section is split into three components, each based on the hypotheses laid out in the introduction. The first part concerns the CO2 involved in the production of the steel, the second part discusses the CO2 emissions of each end-to-end journey along the NSER in relation to the energy mix, and the third part discusses per-capita CO2 emissions under a range of scenarios.

Steel CO2

Contrary to the hypothesis, the steel used in construction will only form a minor part of the carbon footprint of the railway.
In absolute terms, it was calculated using the above methodology that the railway steel-assuming that it is all virgin and not recycled steel-will generate 702,696 metric tonnes (t) of CO2 in production, with an indeterminable further amount emitted in shipping. This assumes that the entire line is double-tracked (four rails in total) using the JIS 60 kg rails specified in the JICA study [3,19], and it only considers the tracks themselves. This constitutes a reasonable "worst-case scenario" when testing the hypothesis and simplifies the necessary calculation, since there is no indication of which sections of the line would, in reality, be double- or single-tracked. If 19,416.6 kg (19.41 t) of CO2 are indeed emitted per flight, this would mean that the railway steel is "equivalent" to around 36,202 flights. Vietnam Airlines alone operates some 40 flights per day on this route [24], which means that, in effect, the steel is equivalent to around 905 days of operations by Vietnam Airlines alone-in practice, considering the presence of other airlines and larger plane types also in operation, the steel CO2 would be the equivalent of far fewer "operational days" of equivalent flights. Nonetheless, this is still a large, fixed cost, and means to reduce it will be considered in the discussion section. That being said, considering the results of the studies on the Chinese HSR system by Chang et al. and Yue et al. [18,20], it is clear that rail steel is only a small component of the overall CO2 emissions cost of construction. While the data necessary to conduct a thorough life-cycle assessment of the Vietnamese NSER were beyond the reach and scope of this study, the results of the above studies clearly demonstrate the need for such an assessment to take place.
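The flight-equivalence arithmetic above can be verified with a short, illustrative sketch, using the paper's rounded per-flight figure of 19.41 t of CO2 and the 40 daily Vietnam Airlines flights cited above.

```python
# Sketch of the flight-equivalence comparison above, using the paper's
# rounded per-flight figure of 19.41 t of CO2. Illustrative only.

STEEL_CO2_T = 702_696          # rail steel, from the calculation above
FLIGHT_CO2_T = 19.41           # one full A321 flight
FLIGHTS_PER_DAY = 40           # Vietnam Airlines alone, per the text

equivalent_flights = STEEL_CO2_T / FLIGHT_CO2_T
print(f"{int(equivalent_flights):,} equivalent flights")                 # ~36,202
print(f"{int(equivalent_flights / FLIGHTS_PER_DAY):,} days of flights")  # ~905
```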
In the "best case" low-emissions scenario with the best and most efficient form of coal technology available on the 10-car train, the emissions from coal alone are almost a perfect match for the entirety of the plane journey as expressed in Figure 1, and this is before adding the extra CO 2 from the natural gas in the energy mix. This "hidden factor" in coal usage is a significant finding which calls into question the environmental credentials of the NSER in terms of the absolute and fixed costs of running the railway. Of course, the train has far more seats than a plane and therefore the potential of being more efficient on a per-capita basis-this is discussed in Section 3.3. It is also worth noting that on the 16-car train (expressed in Figure 2), even in an extremely unlikely hypothetical scenario of replacing all coal in the energy mix with gas power, the one-way train journey would still emit more CO 2 than the plane in terms of fixed costs-of all the scenarios calculated in this paper, only the 10-car train in the gas-only scenario exceeds the performance of the A321 in terms of fixed costs per one-way journey. This is consistent with the results seen in the existing academic literature on the subject, most of which consider energy mix to be a vital part of assessing the performance of HSR infrastructure. The results seen here match closely the results in Yue et al. [20] wherein the coal in the Chinese energy mix is determined to be the single most significant contributor to the overall environmental impact of the railway, albeit with the caveat that coal use is slightly higher in the Chinese energy mix than in the Vietnamese one [56]. They are also in accord with the conclusions of Dalkic et al. [21] who note that in their study of the Turkish HSR that 65% of the country's electricity is supplied by fossil fuel-based power plants, including coal and natural gas, and that switching to alternative energy sources could create significant savings in greenhouse gas emissions on the HSR. The results differ somewhat than the findings of Avogadro et al. [14], who conclude that emissions savings are likely if HSR replaces air travel-but if anything, this only proves the point on energy mix being important-across the EU, only 12.6% of energy is provided by solid fossil fuels such as coal, and while petroleum and natural gas use is still present [57], these fuel sources are not as emissions-intensive as coal. Moreover, in the countries with the greatest prevalence of HSR infrastructure such as France, Germany, Italy and Spain, coal use has declined significantly and forms only a small part of these countries' respective energy mixes [58][59][60][61]. Of these, Germany has the highest coal use, but even Germany only had 19.6% of its energy met by coal in 2019 and 15.53% in 2020 [57,59], both far below the projected figure for Vietnam in the JICA study of 53.2%, per Table 2. This in itself is a strong demonstration of the impact of energy mix on environmental performance. Per-Capita CO 2 Emissions on the E5-Series versus the A321 At full capacity, with every seat filled, the 10-car E5-series' CO 2 emissions performance would be 41.89 kg/capita in the high-emissions scenario and 30.51 kg/capita in the low-emissions scenario. The 16-car E5-series performs marginally better on a per-capita basis, with 40.58 kg/capita in the high-emissions scenario and 29.48 kg/capita in the lowemissions scenario. 
Either case is, ostensibly, very favorable for the train: in both cases it far surpasses the performance of the 184-seat A321, which would generate 105.525 kg/capita on the same journey. However, the main variable-passenger numbers-severely impacts this calculation. Taking the JICA estimate of 70% occupancy across all sections [3] (pp. 5-6), on the 10-car train the per-capita CO2 emissions would be 59.84 kg in the high-emissions scenario and 43.58 kg in the low-emissions scenario, while on the 16-car train they would be 57.96 kg in the high-emissions scenario and 42.1 kg in the low-emissions scenario. In all cases, this is still better than the performance of the A321, but as the seat fill-rate declines, so does the per-capita environmental performance. Figures 3 and 4 show the per-capita performance of the E5-series 10 and 16-car trains compared to the A321 at different passenger capacity intervals. They provide a stark demonstration of the impact of seat occupancy, with the cut-off point at 39.7% occupancy (approximately 294 seats) in the 10-car high-emissions scenario and 28.91% (approximately 214 seats) in the 10-car low-emissions scenario. On the 16-car train, the cut-off points are 38.45% (469 seats) and 27.93% (341 seats) in the high- and low-emissions scenarios, respectively. This also provides a further demonstration of the impact of the fuel mix: by moving to ultra-supercritical coal technology, the train, regardless of size, becomes significantly more efficient in per-capita terms-the 10-car train can take 80 fewer passengers and the 16-car train 128 fewer passengers before reaching their respective cut-off points against the A321. These represent efficiency boosts on a per-capita basis of 10.81% (10-car) and 10.49% (16-car), respectively, and are indicative of the hidden factors relating to both passenger load and energy type. This is again consistent with the findings in the existing literature. Dalkic et al.'s [21] study, in one scenario estimate of passenger demand for the train, predicts a variance in emissions reduction against road use for trains between Ankara and Istanbul of between 112.2 kt of CO2 and 151.8 kt of CO2, based on a 30% variance in passenger numbers, with the higher figure representing 95% occupancy and the lower figure 70% occupancy. While in their study emissions reductions are achieved in either case, it is clear that the seat fill-rate has a significant impact on relative performance. While this is based on modal capture-i.e., travelers using the HSR over other modes of transport such as cars-it nonetheless accords with this study in finding a strong link between relative performance and seat fill-rate. This also further confirms the Yue et al. [20] and Chang et al. [18] studies. The Yue et al. study [20] notes that with low market penetration (since, again, the analysis for the HSR includes modal capture), emissions will increase, since in essence the train will still be running without capturing passengers from other modes of transport, and that with low occupancy, emissions of numerous greenhouse gases might increase by 8-15%. The Chang et al. study [18] likewise considers passenger occupancy rate, and calculates that greenhouse gas emission intensity would be more than three times worse on a train with 30% occupancy (130 g CO2/km traveled) than on one with 100% occupancy (40 g CO2/km traveled).
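The cut-off points reported above can be approximated with a short sketch which recomputes each scenario's per-journey total from the energy figures given in the methods section and divides it by the plane's per-capita emissions; the resulting seat counts and occupancy percentages agree with those in the text to within rounding.

```python
# Sketch of the break-even seat-occupancy calculation: each scenario's
# per-journey total divided by the plane's per-capita emissions gives the
# number of filled seats at which the train matches the plane. Results
# agree with the text to within rounding.

import math

PLANE_PER_CAPITA_KG = 105.525          # A321, 184 seats
COAL_SHARE, GAS_SHARE = 0.532, 0.168
GAS_KG_PER_MWH = 430

scenarios = {
    # label: (journey energy in MWh, coal factor in kg/MWh, seats)
    "10-car, high": (48.88, 1056, 740),
    "10-car, low":  (48.88, 730, 740),
    "16-car, high": (78.08, 1056, 1220),
    "16-car, low":  (78.08, 730, 1220),
}

for label, (mwh, coal_kg_per_mwh, seats) in scenarios.items():
    total_kg = mwh * (COAL_SHARE * coal_kg_per_mwh + GAS_SHARE * GAS_KG_PER_MWH)
    breakeven = total_kg / PLANE_PER_CAPITA_KG
    print(f"{label}: ~{math.ceil(breakeven)} of {seats} seats "
          f"({breakeven / seats:.1%} occupancy)")
```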
These results are entirely consistent with the findings for the NSER and underscore the importance of passenger occupancy in relation to relative performance.

Regional Inequality and Potential Market Size in Vietnam

The regional income inequality in Vietnam is evidently significant: the (albeit limited) modelling in Table 2 demonstrates that of the 23 proposed stations along the line, at least 10 and potentially up to 13 stations will be in provinces (marked in blue)-all in the middle portions of the line-with less than half the per-capita incomes of those living in Ho Chi Minh City, even by 2030. This is exacerbated by the significantly smaller populations of the intermediate areas: no province in Vietnam reaches even half the populations of Hanoi (8,093,900) or Ho Chi Minh City (9,038,600), with the closest three (Thanh Hoa, Nghe An and Dong Nai) all being in the 3,000,000 range [34]. Dong Nai also directly borders Ho Chi Minh City, and its largest city in administrative terms (Bien Hoa) is part of the Ho Chi Minh City conurbation. These combined factors create a situation wherein the potential market size of the middle provinces is much smaller than the GRDP, population size or personal income figures alone would suggest-a significant "hidden factor". Table 3, building on the results of Table 2, combines the estimated population figures for 2030 (based on 2018 population growth rates [34] compounded over 12 years and applied to the 71.7% of the population which is of working age [35]) with the mean income estimate from Table 2 to produce an estimate of the total personal income earned by the entire population of each province. The table clearly demonstrates the compounding impact of income inequality, when combined with population size, on the potential market size for a given product or service. Essentially, this is the gross sum of personal incomes, and it represents the hypothetical "pot of money" available to the people of any given province to spend on the goods and services they need, although in practice the figure will be lower due to necessities such as household bill payments and taxation. Nonetheless, by assessing market size via total income rather than by GRDP, we gain a more accurate image of the potential market size, since the figures only account for the money available to potential customers and exclude the parts of the GRDP calculation with no bearing on market size. The results are quite stark. By this metric of market size, Hanoi and Ho Chi Minh City are by far the largest potential markets-at USD 105.72 bn and USD 128.29 bn, respectively-and no other province along the route reaches even half of these, with seventeen stations (marked in orange) having less than one-tenth of the market size of even the smaller terminus of Hanoi. Only four stations are in provinces which exceed one-tenth of that market size, and one of these, Long Thanh, is in a province which directly borders Ho Chi Minh City. When placed in the context of the wider HSR environmental performance literature and the general unanimity around the need for occupancy rates to be high, these issues represent significant barriers to achieving a high seat fill-rate, and they will prevent the NSER from achieving its potential in environmental and sustainability terms, since the markets in the middle portion of the line will have fewer people, each with less money to spend.
Discussion

This section focuses on three topics: reducing the fixed costs of the steel needed for construction, the energy mix in Vietnam, and finally the means by which seat occupancy rates can be maximized. The results outlined in the previous section raise numerous interesting questions about the long-term environmental and commercial sustainability of the NSER project, the issues of equity and equality in Vietnam, and the means by which these issues can be mitigated.

Construction Steel

Steel is an unavoidable fixed cost, and there is only so much that can be done to avoid the use of coking coal in the production process. Vietnam itself is currently a major producer of steel-the largest producer among the ASEAN-6 countries (Indonesia, Malaysia, the Philippines, Singapore and Thailand, in addition to Vietnam itself) by a significant margin, having produced some 14.5 million tonnes of steel in 2018 and with a rapid upward growth trajectory [4]. Of the two countries most likely to provide the steel, Vietnam has a relatively high share of scrap-derived, recycled steel from electric arc furnace and interstitial-free processes (in 2019, these accounted for 32% and 10% of production, respectively), and the country is a growing importer of scrap steel for the purpose of recycling [62]; however, the proportion of recycled steel has declined in recent years as more traditional blast furnace mills have been built. With rising electricity prices, recycled steel was expected to become less competitive over time [63], but this has not yet come to pass, with the tariffs of the largest supplier, Vietnam Electricity (EVN), remaining constant since 2019 in terms of both retail and wholesale prices [64-67]. Nonetheless, with the growth of blast furnace steel relative to other forms of production, a further decline in the relative amount of recycled steel can be expected. It is also worth briefly considering other potential sources of steel for the railway. Japan, as the proposed financier of the NSER, actually performs even worse than Vietnam in this regard-only 24.5% of its steel is produced in electric furnaces (with the rest produced in basic oxygen furnaces), which makes Japan the second-worst performer among the major steel producers after only China (the largest single exporter of steel products to Vietnam), where only 10.4% of steel is produced in electric furnaces [62,68]. Regardless, blast furnace steel is by some margin the most likely source for the railway despite being the most environmentally damaging; mitigating this to the greatest degree possible by sourcing Electric Arc Furnace (EAF) and interstitial-free steel should be considered a policy priority in planning. This is true of all HSR projects-steel production is one of the leading sources of carbon emissions, with the World Steel Association [69] estimating that between 7 and 9% of all CO2 emissions globally are due to steel production. This also ties into the later point about energy mix: even EAF and interstitial-free steel cannot be said to be carbon-neutral when the energy needed to power the processes behind them still comes, to a large degree, from coal plants, and so moves to increase EAF steel and to reduce coal in the overall energy mix would be mutually reinforcing initiatives in any country which adopted such policies.
While the hypothesis, when tested, did indeed show that a lower amount of carbon than initially expected would be generated, the fact remains that the railway steel alone would generate some 702,696 t of CO2 based on the crude estimate in this paper-still constituting a "hidden factor" in coal-generated CO2 which should not be ignored. The incorporation of as much recycled steel as possible into the project should be considered as a means of promoting its sustainability. The JICA study [19] (pp. 1-36) also mentions the possibilities of having lower-demand sections of the line single-tracked or, at the opposite end, dual-gauged (to allow for the passage of regular-speed freight trains); further feasibility studies may consider these proposals in more detail so that, when the project commences construction in earnest, steel is not added unnecessarily, perhaps bringing down the fixed costs necessary for the railway to commence operation. Even though the steel used for the tracks will emit less CO2 than hypothesized, a more thorough analysis of the construction costs of the railway and the trains operating on it in environmental terms would be beneficial as future research, and this should, if possible, take the form of a full life-cycle analysis. This is beyond the scope of this particular project, but a thorough analysis along the lines of the work on the Chinese HSRs completed by Chang et al. and Yue et al. [18,20] would be of great benefit in assessing the overall environmental impact of the NSER. Considering the data volumes needed to do justice to such an enormous project in the Vietnamese context, this should ideally involve an open, transparent and collaborative process of research between relevant actors in Vietnam, JICA and the private sector (including academia) to pool resources and expertise-the incorporation of all relevant stakeholders is especially important considering that capacity, or the lack thereof, in the general environmental impact assessment system is considered a weakness in Vietnam [70]. In development aid terms, this could also be addressed via capacity building and technical assistance programs-JICA has carried out capacity building projects dealing with EIA issues and environmental management in several countries, such as Mauritius, Cambodia and even, historically, Vietnam itself [71-73], in addition to projects carried out by NGOs and international organizations in Vietnam [73]. Addressing this weakness, not just in Vietnam but more generally, will allow more informed decisions to be made on major infrastructure projects from an environmental perspective and may in the long term lead to carbon emission reductions through the promulgation of best environmental practices in the construction and operation of HSR infrastructure. While the issues discussed in this section are not equity issues in a direct sense, the findings do have equity-related impacts. The contribution to climate change is, of course, the main issue, but as the succeeding sections will discuss, relative performance will hinge on income equality across different regions.
Power Consumption and Vietnam's National Grid

The projection that, even if Vietnam switched entirely to the best and cleanest available coal power generation technology, the train, regardless of size, would still emit more CO2 than the A321 per one-way journey between Hanoi and Ho Chi Minh City underscores the unsustainable and environmentally destructive nature of coal as a power source, and this paper agrees with the wide consensus in the literature in calling for Vietnam to phase out coal in its energy mix to the greatest degree possible. The projection that in the more realistic "high-emissions" scenario the train will actually emit 59.64% more CO2 than an equivalent flight on the same route, even on the less power-consuming 10-car train, is of particular concern, and the fact that the 16-car train more than doubles the per-journey CO2 emissions of the plane (and reaches almost double even in the low-emissions scenario) further underscores this issue. These findings drive home the immediate and unequivocal need for a reduction of coal in Vietnam's energy mix, with even a shift to natural gas-while still emitting CO2-producing significantly cleaner results. Indeed, if Vietnam were to hypothetically replace the entirety of the coal in the energy mix with natural gas, then the 10-car train would comfortably exceed the plane's per-journey environmental performance in terms of fixed costs, generating only 14,714.6 kg of CO2, or 75.78% of the plane's emissions. The 16-car train's performance would also significantly improve, generating 23,503.8 kg of CO2-21.05% more than the plane, but less than half of what it would emit in the high-emissions scenario with coal. In any scenario, the coal used to create the electricity to drive the train constitutes a considerable "hidden factor", in line with the results seen by Yue et al. and Chang et al. [18,20] and with the criticism of Vietnam's energy mix seen across the literature. This study is a good example of the impact that energy mix has: significant savings can be achieved via industrial upgrading, whether to "better" coal technology or by replacement with natural gas or, ideally, renewable energy sources. Again, this should be considered a policy priority if the NSER is to achieve its potential in environmental terms-if savings along the lines of those seen in Avogadro et al.'s study [14] against the emissions of air travel are to be achieved, there is simply no alternative to excising coal and fossil fuels from the energy mix, and so phasing them out should take priority not just in Vietnam but wherever possible globally. It is worth emphasizing again that this will be a fixed cost: even if every train were running full, and therefore at maximum per-capita efficiency, the project would still benefit from a greener energy mix to reduce the environmental costs of running it. The failure to account for the fixed costs of the project is a significant limitation of the existing feasibility study, especially considering the projected carbon emissions resulting from Vietnam's energy mix. The results confirmed and indeed exceeded the original hypothesis that the train would have a "hidden factor", in that it would generate significant amounts of CO2 due to coal use and, to a lesser degree, natural gas use.
It is true that Vietnam is-albeit slowly-moving away from coal and is planning to increase the use of renewable energy in its power grid [74], but a key goal of development assistance to the country (and indeed more widely), including by Japan as the main external advocate of the project, should be to assist in making this transition as quickly and affordably as possible if the NSER project is to be environmentally sustainable in the long term. Indeed, Japan has been heavily criticized in recent years for continuing coal financing, both academically, with one study finding that more than 80% of Japanese energy capacity projects were fossil fuel-based and fewer than 10% were based on renewable energy sources [75], and by environmental NGOs and civil society groups [76]. It is encouraging that the Suga administration pledged to end overseas coal financing in 2021 [77], and Japan's aid programs can be a major contributor to energy transition financing in Southeast Asia and Vietnam-a policy which will ultimately have significant benefits for the NSER's environmental performance as well as for climate change mitigation more broadly. While this is far from a unique policy suggestion, considering that across the globe more than 80% of energy production is still fossil fuel-based and that roughly one-third of this comes from coal [78], financing transitions either completely away from fossil fuels where possible or at the very least towards cleaner sources such as natural gas is an urgent and critical requirement of development assistance policy going forward, regardless of which countries are the recipients and which are the donors. This is a hidden factor and an equity issue with very real consequences for Vietnam if left unaddressed-the country is among the most vulnerable to climate change, facing numerous challenges such as coastal flooding and increasing incidences of extreme weather [79]. Mitigation of these consequences could be achieved by reducing carbon emissions to the greatest degree possible, and the hidden factors involved should be considered for all "green" technologies which rely on national power grids that include coal, HSR projects among them. This again underscores the importance of conducting a thorough and comprehensive life-cycle impact assessment in line with those carried out by Yue et al. and Chang et al. [18,20]: there are likely to be carbon impacts left unaddressed within the limited scope of this study, but these must be accounted for if the NSER project is to achieve its full environmental potential, in line with the suggestions laid out in Section 4.1 on both conducting a thorough environmental analysis using overseas expertise where necessary and on capacity building projects from donor countries to improve the quality of future environmental impact assessments. This applies not just to the Japan-Vietnam relationship and the NSER, but more broadly to mitigating the impacts of climate change when assessing the potential pitfalls of major infrastructure projects.

Passenger Load, Per-Seat Emissions and Income Inequality

The results of this study, while confirming the efficiency advantages of the train in per-seat terms, also confirm the need to fill those seats for the performance of the train to be favorable to the plane in relative emissions terms.
While having slightly under 40% of the seats occupied at any given time is ostensibly not a particularly high threshold compared to the JICA estimate of an average 70% load rate, there are several reasons to doubt this figure which are insufficiently addressed in the feasibility study or by the political actors involved. The first issue-and perhaps the most important "hidden factor" in the NSER project-is Vietnam's entrenched inequalities, which see Hanoi and Ho Chi Minh City having by some margin the most economic activity and the highest incomes: 180 of Vietnam's 335 industrial zones are in these two regions, and in extreme cases incomes are double those of Vietnam's rural provinces [34]. While growth can, of course, be expected to continue in rural Vietnam, it is questionable whether the middle portions of the line will reach this 70 per cent threshold. Indeed, Kikuchi and Nakamura's study predicts that the majority of passenger traffic will be on the Hanoi-Vinh and Ho Chi Minh City-Nha Trang sections [8], which are at the peripheries of the proposed line, leaving a large stretch in the middle portion underused. This appears to be largely supported by the gathered data: the market size in the middle provinces along the line, per Tables 2 and 3, is simply much smaller, and so it can be expected that ridership rates will be considerably lower. In this sense, the inequality in Vietnam is likely to have a direct negative impact on the relative environmental performance of the NSER. While it is impossible to predict consumer behavior or future trends with perfect accuracy, the simple fact of the matter is that most of the stations on the railway line will be in provinces with fewer people, who each have less money, than the larger, wealthier populations of Hanoi and Ho Chi Minh City. This is, of course, true to some degree of many HSR projects or operating lines, but Vietnam is a standout case. Vietnam does not have an especially high GINI score: of the 34 countries listed by the International Union of Railways as having operational, under-construction or approved HSR projects [80], 11 have higher GINI scores (although the World Bank GINI Index excludes Saudi Arabia) [17]. However, of these 11, only one-Morocco-also has a lower Human Development Index score (India is tied with Vietnam on GINI score but also has a lower HDI score), and only two, Morocco and India, have lower GNI per capita scores [81]. This places Vietnam's HSR in a difficult position: Vietnam not only has pervasive inequality, but it also has a relatively low level of baseline personal income, and this combination of factors is relatively unusual when assessing the viability of a major HSR project. This severely calls into question the JICA assumption that the passenger load rate will stay constant through the entire length of the line: Vietnam's regional inequality is high among HSR-possessing countries, and the gaps in both population and income, combined with the low average across the country, mean that the market may be limited for the stations in the middle portions of the line-an equity issue with consequences for the project's sustainability.
Another potential complication is Vietnam's growing LCC market: round-trip plane tickets between Hanoi and Ho Chi Minh City can fall as low as USD 78 at certain points in the year, making flying potentially cheaper than even a one-way HSR journey, and even in peak travel seasons such as Tet the price difference between HSR and air does not reach the level seen in Japan, with journeys available from around USD 241 that are still faster than the NSER [82]. While the Vietnamese government has indicated that NSER tickets will cost USD 50-90 per one-way journey, targeting an average of half the cost of a plane journey [1], this target does not seem to account for the LCC market and so it may create an over-estimate of passenger demand. For end-to-end Hanoi-Ho Chi Minh City journeys specifically, Kikuchi and Nakamura [8] predict a less than 10 per cent market share for the railway based on data from the European and Japanese HSR markets. Indeed, data provided by JR ostensibly confirm the estimates in that study: the Tokyo-Hakata journey, slightly shorter in distance at 1174.9 km and with a travel time of roughly five hours similar to the projected NSER, has a 10 per cent rail market share compared to the 90 per cent market share of airlines [83,84]. The study does not account for the potential impact of ticket prices, but for the sake of comparison, a one-way Tokyo-Hakata journey costs approximately USD 213 (23,390 Japanese yen at an exchange rate of 1 yen = USD 0.00909634 on 9 July 2021), while Japan's highly developed low-cost carrier (LCC) airline market (via Google Flights data) means that a plane journey averages between USD 100 and 135 for a return journey, making a return journey by plane approximately one quarter of the price of the Shinkansen even when booked only one day in advance [84,85]. This calculation is somewhat limited: the relative costs of taking a journey in a developed country are quite different to those in a developing country, and the study itself largely focuses on journey length rather than costs. The issue here is the expected growth in the LCC and wider aviation market, and this will apply not only to end-to-end journeys but also to journeys to the intermediate stations on the route. The aviation market in Vietnam is already highly competitive and the competition is only growing [31]. These market factors do not bode well for the relative price competitiveness of the train and, by extension, the number of travelers who will choose to use it, especially considering the Japanese experience, the smaller potential market size, and the reduced personal incomes available in the middle sections of the line. In the present literature, the Avogadro et al. [14] study presents an interesting point of comparison. In the European context, they consider the potential emissions savings of HSR as an alternative to flying, and as noted they ultimately conclude that significant emissions savings can be achieved where HSR alternatives are available, while also noting the impacts on regional accessibility where flying is removed as an option, the uneven impacts across member states, and the need for policymakers to balance environmental and passenger needs [14]. While the first issue is arguably not applicable to the NSER, which is in fact likely to boost connectivity to Vietnam's central provinces, the cost issue will arguably be much more acute.
Within western Europe, countries with HSR have a considerably higher baseline income level. For instance, Italy, which has a slightly higher GINI than Vietnam [17], also has an average personal income of EUR 30,804 in its poorest NUTS 1 statistical region of Sud (Southern Italy, or ITF in the Eurostat database) [86]. This is, of course, considerably higher than anywhere in Vietnam, but especially Vietnam's middle provinces. Because of this, cost is considerably more likely to be a factor in passenger choice, and if the LCC market continues to be cheaper and faster for most journeys then the seat fill rate required to achieve better results than the plane will be difficult to reach. As noted in the results section, the literature was unanimous in noting the need to reach as high a seat fill rate as possible to ensure that the maximum potential of the NSER is reached in environmental terms. Indeed, more widely, with the growth of LCCs worldwide it will be an increasing problem for HSR to capture passenger share, especially in markets with low base levels of income where cost will be a more significant factor. The factors outlined above cast significant doubt on the ability of the train to attract a 70% passenger load rate across all sections of the line. Indeed, if, per Kikuchi and Nakamura's [8] prediction, fewer than 10% of all one-way, full-length journeys end up being made by train, the economic cores and largest markets of the line are at the termini in Hanoi and Ho Chi Minh City, and low-cost flights are becoming increasingly prevalent across the country, then whether even a 40% load rate can be averaged is questionable. This is perhaps due to the JICA study's estimate being based on data which pre-date the spread of low-cost flights in Vietnam. This has the effect of dramatically increasing the relative costs of running the NSER. With the fixed costs remaining the same, the per-capita emissions rise as the number of passengers decreases, and Vietnam's energy mix means that the threshold for per-capita efficiency over an equivalent plane journey may be relatively high. Even without the comparison to the plane, the NSER project would have considerably more environmental viability if coal were to be reduced in the energy mix. Future research which focuses on how Vietnam can incentivize rail travel over air travel, and more detailed research on market demand between different sections of the line so that operators could make informed decisions about service provision and train size, would be of benefit here. As things stand, the project's environmental benefits remain questionable given the various factors which will curb passenger demand and, by extension, relative environmental performance. This is very clear evidence of how inequality and inequity can impact relative environmental performance, acting in conjunction with the high fixed costs of running the line on a largely coal-driven power grid, and these factors should be considered when planning HSR infrastructure more widely. While Vietnam has a particularly high level of inequality among HSR-possessing countries and a particularly competitive LCC market, these factors are likely to be applicable in any kind of HSR planning, and so should be taken into account in policymaking more broadly. This leads to several policy implications.
First, in the immediate term, a problem specific to the NSER is the lack of an up-to-date, forward-looking and comprehensive demand and market analysis, and this should be carried out with the inequality issue in mind. The market size analysis carried out in this paper is based solely on statistics rather than qualitative data, and will naturally be limited because the figures utilized pre-date the pandemic. A future analysis should consider both the LCC market and potential pandemic-induced changes to the general business environment (for instance, the potential for reduced business demand with the prevalence of online meetings), which would present a much more accurate view of passenger demand along the NSER route. Consideration of these factors should be included in analysis of future HSR projects more broadly: policymakers often consider HSR to be an alternative to air travel and a more sustainable mode of transport, as stated in the Avogadro et al. and Chang et al. studies [14,18], but if the minimum threshold of seat fill rate cannot be achieved, then all the HSR does is create an additional energy and environmental burden by 'running empty' while plane demand continues unabated. Second, to combat low passenger demand in off-peak times and areas, consideration should be given to the introduction of data-driven floating fares and airline-style ticketing (instead of fixed-price ticketing), which is also recommended by the Chang et al. study [18]. This may allow for the subsidization of lower-demand sections and travel times and encourage the use of trains in off-peak hours and regions with less demand, enabling higher fill rates across the line more generally. Several studies confirm the efficacy of this method: a study by Jiang et al. [87] modeled a 13.48% revenue increase as a result of a floating fare mechanism on the Beijing-Shanghai HSR, and another study by Qin et al. [88] predicted a 7.98% (peak time) to 10.41% (off-peak) revenue increase and a sustained increase in the passenger load rate in off-peak times. A floating fare system would therefore boost both the environmental and the economic prospects of the railway by more accurately reflecting passenger demand. This demand data will also benefit operational planning, and based on it consideration should be given to operating shorter trainsets, reducing service numbers, and only running on certain sections of the line where demand is lower, to minimize the energy needed to operate the line. Third, when planning operational mechanisms for the HSR such as ticket sales and interior and station design, significant attention should be given to ease of use and passenger comfort in order to increase the market share captured from the wealthier, larger markets in Ho Chi Minh City and Hanoi. There is a general view in the literature that HSR is more comfortable and generally has higher service quality than air travel, with some studies citing convenience as an additional advantage depending on the context [5,89-92], and this is backed by several passenger surveys in both Europe and Asia: the study by Pagliara et al. [5] of the Spanish HSR found that comfort was considered the most important factor by 8.5% of respondents, the second most important by 23.1% of respondents, and the third most important by 32.2% of respondents.
Additionally, convenience-related factors such as speed and accessibility to other cities ranked highly in their study [5], and they make the interesting point in a different paper focusing specifically on the Madrid-Barcelona HSR that HSR offers the additional advantage of more easily facilitating en-route work among business travelers due to greater availability of seat space, internet access, and the ability to use a cellphone [90]. The study by Zhen et al. [92] likewise considers the possibility of mobile working as a potential means to give HSR a competitive edge against air travel. Accordingly, the design of the Vietnamese HSR should lean into these perceived advantages as laid out by Pagliara et al. and Zhen et al. [90,92]. The travel time issue would be significantly mitigated among business travelers if the travel time itself could be used efficiently, for instance with free internet access and with adequate availability of power sockets for laptops and cellphones. These are factors on which air travel, and especially services offered by LCCs, will not be able to compete effectively, and so it is likely that, by offering a significantly differentiated business model focusing on passenger convenience and service, passenger numbers would increase in both the leisure and business markets. This would allow for the transformation of a transaction cost into a potential unique selling point, and if this could be used as a means to increase ridership then it would have a beneficial effect on the per capita CO2 emissions of the NSER. This is linked to the broader issue of sludge and transaction costs. Since convenience appears to be such an important factor in HSR success, per the Pagliara et al., Zhen et al., and Jeng and Su studies [89,90,92], consideration should be given to reducing the impacts of sludging (such as inconvenient form-filling requirements, hidden fees and inconvenient refund conditions, as defined in Shahab and Lades' study [93]) on the HSR to the greatest degree possible. Numerous studies again confirm the value of ease of use in booking systems (especially e-booking systems), and this is again an area where HSR has significant potential to outperform aviation. For instance, a study by Jeng [94] finds that perceived ease of use is the single highest influence on whether or not users choose e-tourism services, while a study by Li et al. [95] finds that complementarity with other sales channels, another convenience-based factor, is important in influencing choices in the economy hotels sector in China. These studies effectively demonstrate the power of reducing transaction costs and sludge: customers gravitate to easy-to-use options. Designing all operational systems related to the NSER to be as frictionless as possible, for instance with a modern, secure and well-designed smartphone app and website, automatic refund systems in the event of delays (especially in a more financially sensitive market such as Vietnam), easy and secure logins for repeat users, and so on, would foster a reputation of convenience and ease of use for the NSER, and is likely to increase ridership and capture market share by differentiating the HSR as the "convenient option" over air travel; this may go some way toward mitigating the ostensible speed advantage of the plane.
Further consideration of how to reduce the transaction costs on the railway would be a useful area for future research, but more broadly, de-sludging and reducing transaction costs for HSR users is likely to increase ridership and bring down the per-capita CO2 emissions of HSR systems, especially if people can be tempted away from aviation in the process. This is perhaps another opportunity for development assistance donor countries to introduce capacity building and technical cooperation programs to assist in website and application design as well as customer service provision.
Conclusions and Policy Recommendations
The environmental performance of HSR is contingent on both fixed and relative costs. The fixed costs of the NSER, the construction steel and the emissions per journey, have a direct bearing on the relative costs, with significant thresholds needing to be met in terms of passenger load for the project to be environmentally sustainable because of the continuing prevalence of coal in Vietnam's energy mix. While the fixed cost of the steel was not projected to be as high as expected in the hypothesis, there are still three key policy recommendations relating to construction-related CO2 emissions. The first is to make as much use as possible of EAF and interstitial-free steel; these are much less carbon-intensive than traditional blast furnace steel, and sourcing from these processes will greatly reduce the carbon footprint of the railway. Second, there is a pressing need for a full life-cycle environmental impact assessment of the NSER, in line with the analyses carried out by Chang et al. and Yue et al. [18,20] on the Chinese HSR systems. Such an assessment, backed by expertise from development assistance donor countries such as Japan as well as relevant private sector partners, would allow for significantly more informed policy through the planning and construction stages and would be likely to identify means to reduce the carbon footprint where possible. The third suggestion is for development assistance donor countries to provide capacity building programs and technical cooperation to boost environmental impact assessment capabilities in recipient countries; again, this will permit the formulation of policy based on more informative and comprehensive data. Significant hidden costs emerge from the contribution of coal to the overall energy mix: even if the trains are modern, efficient electric trains, which they will be if E5-series Shinkansen are ultimately used, they will still be indirectly powered by coal from thermal power plants in the Vietnamese context, and this issue must be mitigated to the greatest degree possible if the chances of the train being environmentally beneficial are to be maximized. This leads to two primary policy recommendations. The first is that Vietnam must urgently reduce fossil fuel use, and especially coal use, in its energy mix, and the second is that development assistance donor countries must urgently support this transition with effective financing and technology transfer programs. The fact that in most circumstances the train would emit more CO2 via indirect energy use than an A321 aircraft is of great concern, and this severely calls into question the environmental benefits of the NSER. This again underscores the importance of a full life-cycle environmental impact assessment which takes these issues into account in order to allow for informed policy decisions to be made.
On the issue of ridership rates, this paper has severely called into question the possibility of achieving either the 70% JICA-predicted average passenger load rate or the roughly 40% passenger load rate required to outperform the A321 in per capita CO2 emissions. The fact that, when considering both incomes and population size, the markets around most of the stations in the middle portions of the line will be less than one tenth of the size of Hanoi and Ho Chi Minh City is of grave concern, and this, combined with the prevalence of low-cost airlines and the likely fares and travel times, means that this passenger load rate will be difficult to achieve. As noted above, if the NSER does not capture sufficient market share to hit the required passenger load rate, then all it will do is create an additional environmental burden on top of that created by the aviation industry. Accordingly, four policy recommendations are given. First, an up-to-date and comprehensive market and demand analysis should be carried out to accurately determine the likelihood of reaching the required rate of ridership. This will have the beneficial secondary effect of allowing informed policy decisions to be made on train sizes and service frequency. Second, in line with the policy recommendations of Jiang et al., Qin et al. and Chang et al. [18,87,88], the railway should implement a floating fare mechanism to incentivize travel in off-peak hours and in low-demand areas, which would have the beneficial side-effect of increasing revenue by more accurately meeting market demands [87,88]. Third, significant attention should be given to passenger comfort and service quality as a unique selling point for the NSER; differentiating based on these factors would make the NSER a genuine alternative to aviation by permitting work and other activities during the journey itself [90,92], potentially allowing it to capture both more business and more leisure market share. Fourth, attention should be given to de-sludging and reducing transaction costs by ensuring that the NSER is as easy to use and as convenient as possible through the optimal design of ticketing systems, refund systems and websites. The latter two are linked: essentially, they suggest that the NSER should lean into the perceived strengths of HSR over aviation to create a genuinely differentiated market contender, increasing end-to-end users in Hanoi and Ho Chi Minh City. This could be supported by development assistance donor countries through technical cooperation programs focusing on website and app design as well as service provision. The hidden factors discussed in this paper, the energy mix and the level of income inequality, severely curtail the train's potential either to succeed in its own right or to capture aviation market share as things stand currently. The prevalence of coal in the energy mix means that the required passenger load rate is relatively high. In essence, the solutions outlined above to address this issue are to bring down the levels of CO2-generating power sources in the energy mix (reducing the number of passengers required for the train to be better than an equivalent flight), to take measures to ensure that the train operates as efficiently as possible, and to incentivize train use over flying. This should be additional to efforts to grow the economies of the middle provinces in Vietnam; ultimately their economic growth will be key to the relative environmental performance of the NSER.
Ultimately, what this means is that holistic, "big-picture" solutions are necessary when planning large-scale infrastructure projects. This paper has provided clear evidence that there is a link between environmental performance and equality: the train will have an incrementally better relative emissions performance with every additional passenger on board, but the key to this is encouraging equitable economic growth to ensure that those in the middle portion of the line can maximize their use of the railway, because while the policy suggestions given above will help the NSER in its own right, they will ultimately not address this underlying issue. The greater the economic growth in the middle portions of the line, the greater the number of potential passengers that can be attracted to it over potentially cheaper LCCs and, by extension, the lower the relative costs of travel by HSR. This is not meant to contradict the results of the JICA feasibility study; rather, the intention of this paper is to be complementary to that study by drawing attention to hidden environmental factors and thereby increase the robustness of the results. Indeed, the lessons from this paper have significant wider implications for HSR planning, and future research would benefit HSR development by drawing attention to further hidden factors which can be taken into consideration for future projects.
Funding: This research received no external funding. The APC was funded by the Asian Development Bank Institute (ADBI) as part of this Sustainability special issue on High Speed Rail, Equity and Inclusion.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: All data generated for this study is contained within the article, and all source data is referenced as applicable.
Synthesis of N-Phenylpyrrole Carboximides
Several N-phenylpyrrole carboximides were synthesised using acyl isocyanates as intermediates.
Introduction
During a certain stage of our synthetic approach towards potential protein tyrosine kinase inhibitors (see e.g. [1], [2]), compounds of the general structure 1 were required (Figure 1). Acyl isocyanates were used as intermediates for the synthesis of the imides of type 1. One route to this type of compound consisted in the addition of a C-nucleophile to the isocyanate of a pyrrolecarboxylic acid, whereas in a different approach, an acyl isocyanate was used as the electrophile in a Friedel-Crafts type substitution of a suitable pyrrole derivative.
Results and Discussion
Reaction of N-phenylpyrrole-3-carboxylic acid (2) with thionyl chloride (Scheme 1) gave the acid chloride 3, which was treated with tetrabutylammonium isocyanate in tetrahydrofuran to yield the corresponding acyl isocyanate 4. This sensitive compound could not be isolated, but its formation was easily demonstrated by quenching the reaction with ethanol, whereupon the acylated carbamate 5 was obtained. Reaction of 4 with lithium phenylacetylide gave the imide 6. Using the second approach mentioned, the isocyanate 7, which had been obtained from phenylpropiolamide and oxalyl chloride in dichloromethane, was reacted (Scheme 2) with the suitably protected N-phenylpyrrole 8. Imide 9 was obtained in good yield; deprotection to 6, however, was difficult and could be achieved in only 23% yield. When unprotected N-phenylpyrrole was treated with the isocyanate 7, substitution took place predominantly in the 2-position of the pyrrole ring, yielding 10. This latter compound cyclised to the oxazinone 11 upon heating. The structure of 11 was obtained from X-ray diffraction [3].
Scheme 2. Alternate synthesis of imide 6 and formation of oxazinone 11.
General
Chemicals were purchased from Fluka AG, Aldrich Chemical Company, Inc., Merck GmbH, or Lancaster Synthesis Ltd. Solvents used in reactions were distilled and dried or purchased in absolute quality. Tetrahydrofuran was freshly distilled from Na/K. TLC: Merck silica gel 60 F254 precoated glass plates. Column chromatography: flash-chromatography procedure of Still et al. [4]; columns with water cooling; Merck Kieselgel 60, 40-63 µm.
Butyllithium (100 mL, 1.6 M in hexane, 160.0 mmol) was added under Ar to 1-phenyl-1H-pyrrole (5.09 g, 36.0 mmol) and N,N,N',N'-tetramethylethylenediamine (22.5 mL, 160.0 mmol). The mixture was refluxed for 23 h and then cooled to -78 °C. Trimethylchlorosilane (20.0 mL, 160.0 mmol) was added and the mixture stirred for 6 h at 0 °C and for 90 min at room temperature. After washing twice with sat. NH4Cl solution and then with water, the organic layer was dried (Na2SO4), filtered, and the solvent evaporated. The crude product (12.4 g of a yellowish oil) was purified in ten portions by chromatography on SiO2 (150 g, pentane/dichloromethane 12:1) to give 4.85 g (37%) of 8 as a yellowish oil. An analytically pure sample was obtained by Kugelrohr distillation (175 °C/0.13 mbar).
Figure 1. General structure of target compounds.
Ultraviolet luminescence enhancement of ZnO two-dimensional periodic nanostructures fabricated by the interference of three femtosecond laser beams
We have developed a simple and rapid method for fabricating ZnO periodic nanostructures. By changing the laser polarization combinations, we fabricated different types of two-dimensional (2D) nanostructures on ZnO crystal surfaces by the interference of three femtosecond laser beams. The 2D nanostructures became more regular and uniform when increasing the cross angles between any two laser beams. Compared with the case of the plane surfaces of ZnO crystals, the 2D nanostructures revealed an ultraviolet (UV) luminescence enhancement excited by an 800 nm femtosecond laser beam. We studied the photoluminescence properties of the 2D nanostructures and the mechanisms of the UV luminescence enhancement. Our results indicated that the enhancement was caused by an increase in optical absorption with respect to that of the unaltered ZnO plane surface and by the formation of surface defect states.
Laser-induced periodic ripples in semiconductors and dielectrics have been studied intensively in the last four decades. The periods were usually close to the laser wavelengths, and these long periodic ripples (LP-ripples) were attributed to the interference between the incident laser and the surface scattered light field [8,9]. Recently, nanoripples with periods much shorter than the laser wavelengths were reported in semiconductors and dielectrics after irradiation with linearly polarized femtosecond laser pulses [10-13]. These short periodic ripples (SP-ripples) were perpendicular to the laser polarization. If the laser beam was circularly polarized, nanoparticles would be induced on the sample surfaces [14]. To understand the mechanisms of the formation of SP-ripples, several explanations have been proposed, such as interference, self-organization and the enhancement of the local electric field [10-14]. Femtosecond laser-induced nanostructures on ZnO have also been studied intensively [15-19]. The authors of [15-17] have reported on the formation of SP-ripples and their Raman properties. Huang et al. [18,19] studied the fabrication of uniform ZnO nanosquares ablated alternately by two femtosecond laser beams with polarizations orthogonal to each other. Dufft et al. [13] reported the formation of LP- and SP-ripples. They found that the periodic ripples depended on the laser fluences and pulse numbers, and attributed the formation of SP-ripples to surface second harmonic generation (SHG) [13]. Due to its high efficiency and low cost, holographic lithography (HL technology) has become an important technology for fabricating nanostructures [20-22]. Combining the fabrication of short-periodic nanostructures induced by femtosecond laser irradiation with HL technology, we have fabricated two-dimensional (2D) complex micro-/nano-structures on the surfaces of ZnO crystals [23,24]. In this paper, we change the laser polarization combinations and fabricate several types of 2D nanostructures on the surfaces of ZnO crystals by the interference of three femtosecond laser beams. The 2D nanostructures become more regular and uniform with increasing cross angles between any two laser beams. We then make further investigations into the photoluminescence (PL) properties of the 2D nanostructures induced by multi-photon absorption. Figure 1 shows the experimental set-up for the interference of three femtosecond laser beams [24].
A linearly polarized laser at a wavelength of 800 nm was delivered from a commercial Ti:sapphire regenerative amplifier (Hurricane, Spectra-Physics). The pulse duration was 50 fs, and its repetition rate could be changed in the range of 1-1000 Hz. The laser beam was passed through a half-wave plate (HF1) and a Glan polarizer (GZ) to adjust the pulse energy and polarization. Then the laser beam was split into three beams using two beam splitters. Three half-wave plates (HF2, HF3 and HF4) were used to rotate the polarization of the three laser beams, respectively. The inset in figure 1 shows that the spatial positions of the three beams formed a regular triangle. A, B and C represent the positions of the three beams on the P-plane, and O the superposition point on the sample. Laser pulses from the three beams arrived at the sample surface simultaneously. The zero temporal delay point was determined by measuring the sum-frequency signal via a BBO crystal. We changed the laser polarizations and the cross angles 2θ between any two laser beams and fabricated several kinds of 2D nanostructures. Commercially available c-cut ZnO single crystal with a size of 10 mm × 10 mm × 1 mm was used in the experiments. The two surfaces of the ZnO crystal were both optically polished.
Experimental set-up
PL micrographs of the 2D micro-patterns were acquired using a Nikon microscope (Eclipse 80i) following excitation with 800 nm laser pulses. For the 2D microstructures with a period of 3.36 µm, the 800 nm laser beam was focused on the patterns at an incident angle of 45°, with a focus diameter of 3 mm. However, to acquire clear PL micrographs of the 2D patterns with a period of 1 µm, the 800 nm laser beam was normally focused on the sample surface using a 100× objective microscope. Here, no color filter was used in the experiment. To investigate the PL properties of the 2D nanostructures, we fabricated a large area with a size of 1.7 × 1.3 mm² by ablating the sample spot by spot. The ablation area was excited by an 800 nm laser beam at an incident angle of 45°. The PL spectra were collected in the normal direction of the sample surface and measured using a spectrometer with a charge-coupled device. We compared the PL spectra of the 2D nanostructures with those of the plane surfaces of the ZnO crystals. To comprehend the mechanisms of ultraviolet (UV) luminescence enhancement, we measured the optical reflectivity, transmissivity and absorptivity of the 2D nanostructures and the plane surfaces of the ZnO crystal. The 800 nm laser beam was focused on the ablation area and the plane surfaces of the ZnO crystal at an incident angle of 45°. The reflected light was collected using a silica lens with a diameter of 78 mm. The lens was set at a position 56 mm away from the sample, and the collection angle was calculated to be 70°. Considering the scattering effect of the 2D nanostructures, we measured the scattering spectra using a fiber optic spectrometer (USB2000, Ocean Optics). The measurements were carried out on the circumference of a semicircle with a radius of 100 mm. Figure 2 shows the SEM images of the 2D nanostructures. Three types of polarization combinations are shown in the insets in figures 2(a), (c) and (e), respectively. For the cross angle of 2θ = 13.6°, the 2D spots were formed in a hexagonal arrangement with a period of 3.36 µm. Meanwhile, short periodic nanostructures were embedded in the hexagonal microstructures.
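As a cross-check on the pattern periods quoted here, the fringe period produced by any two beams crossing at an angle 2θ follows the standard two-beam interference relation d = λ/(2 sin θ). The short sketch below evaluates this relation for the two cross angles used in the experiments and returns roughly 3.4 µm and 1.0 µm, consistent with the measured 3.36 µm and 1 µm periods.

```python
# Minimal sketch: fringe period of two beams crossing at an angle 2*theta,
# d = lambda / (2 * sin(theta)), evaluated for the 800 nm laser and the two
# cross angles reported in the text.
import math

wavelength_nm = 800.0
for cross_angle_deg in (13.6, 47.0):
    theta_rad = math.radians(cross_angle_deg / 2.0)
    period_um = wavelength_nm / (2.0 * math.sin(theta_rad)) / 1000.0
    print(f"2*theta = {cross_angle_deg:4.1f} deg  ->  period = {period_um:.2f} um")
```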
In figure 2(a), the nanoripples with a period of 200 nm formed on each ablation spot, and they were perpendicular to the laser polarization. In figures 2(c) and (e), radially orientated and circularly distributed nanoripples appeared around each bulgy microspot, respectively. Also, there were six deeply ablated nanopits, which are shown by the white circles in figures 2(c) and (e). We performed theoretical calculations of the intensity pattern and the polarization pattern for the three types of laser polarization combinations. The results accord well with the 2D periodic nanostructures [24].
Periodic micro-/nanostructures
As the cross angles 2θ increased to 47°, the periods of the hexagonal patterns decreased to 1 µm. The 2D periodic structures became more regular and uniform, as shown clearly in figures 2(b), (d) and (f). In addition, we observed that the fabricated patterns were obviously different from those at 2θ = 13.6°, especially for the two laser polarization combinations shown in figures 2(c) and (e). In figures 2(d) and (f), the six deeply ablated nanopits around each bulgy spot disappeared. Meanwhile, for the laser polarization combination shown in figure 2(c), the patterns evolved into hexagonal nanoflowers composed of radial nanoripples (see figure 2(d)). For the laser polarization combination shown in figure 2(e), the patterns evolved into regular nanoring arrays (see figure 2(f)). Using this method, we could obtain several kinds of regular and uniform nanostructures, which would periodically modify the photoelectric properties of the sample. For the polarization combination with a cross angle of 120°, the luminescence from the six deep nanopits around each bulgy spot was very strong (see figure 3(c)). But for the 2D nanostructures with a period of 1 µm, figure 3(d) shows that the six individual emission nanopits evolved into nanorings. The result accorded well with the SEM images shown in figure 2(f).
Photoluminescence of the two-dimensional (2D) nanostructures
The PL micrograph in figure 4(a) shows clearly that strong blue light is emitted from the interference pattern. In particular, in the points with deeper ablation depths, the PL intensity was much stronger, which was proved by the PL intensity measurements shown in figure 4(b). The pixel numbers are the data on the inset line in figure 4(a). Figure 4(b) shows that the PL intensities on these ablation dots were 10-16 times higher than those on the plane surface surrounding the whole ablation spots. The diameter of the excitation laser was about 3 mm, which was much larger than that of the ablation area. Therefore, the weak PL intensities around the ablation area in figure 4(a) were not caused by lower intensities of the local excitation laser field. The energy structures of the ZnO crystal have been studied in detail [25-27]. For a better understanding of the PL spectra in the following, we will now describe them briefly. The band gap of the ZnO crystal is 3.37 eV, and it red-shifts to 3.25 eV at room temperature. The excitonic energy is lower than the conduction band edge by 50-100 meV. The energy of the oxygen defect states is peaked at 2.35 eV and the Zn interstitial energy state is very close to the conduction band edge. Here, the valence band edge is taken as 0 eV. We studied the PL spectra of the plane surface of the ZnO crystal and the 2D nanostructures excited by 800 nm laser pulses with a repetition rate of 1 kHz.
For the plane surface, figure 5(a) shows that the PL spectra consist of a broad visible band peaking at 2.35 eV and an ultraviolet (UV) band. The green light emission was mainly attributed to the defect states of oxygen vacancies. The UV emission band of the plane surface is shown in detail in figure 6(a). The peak position is at 3.11 eV for an excitation laser intensity of 16.2 GW cm⁻². It shifts gradually to 3.15 eV as the laser intensity increases to 90.0 GW cm⁻². As pointed out in [28], this UV band is a result of the superposition of the band-gap emission and the SHG emission. The band-gap emission is usually attributed to excitons bound to neutral donors (D0X) and the excited states [26,29,30]. For the 2D nanostructures, figure 5(b) shows that the green emission band was well suppressed. Many previous works have demonstrated that the density of oxygen vacancies decreased greatly after an annealing process in an oxygen-enriched environment, and the green light emission band was greatly depressed, too [29,31,32]. The 2D nanostructures were induced by femtosecond laser ablation in air, which also had a laser annealing effect on the ablation area. So the suppression of the green emission band of the 2D nanostructures was caused by the reduction of the density of oxygen vacancies. In contrast to the green emission band, figure 5(b) shows that the UV emission band of the 2D nanostructures was greatly enhanced. The dependence of the UV band on the excitation laser intensities is depicted in figure 6(b). The UV emission band peaks at 3.17 eV for an excitation laser intensity of 10.5 GW cm⁻², and it does not shift as the intensity increases to 55.1 GW cm⁻². The papers [1,6] compared the UV luminescence and the SHG signal of ZnO nanowires excited by an 800 nm femtosecond laser beam. The UV luminescence dominates the spectra when the excitation laser intensity is higher than 9.6 GW cm⁻² [6]. The two peaks are hard to resolve as both the wavelength and the bandwidth are very close [28]. Excited by femtosecond laser pulses at wavelengths of 650 and 330 nm, the UV emission bands of the 2D nanostructures were at 3.20 and 3.25 eV, respectively. The peaks changed slightly with the excitation laser wavelengths, which was similar to the results reported in [33,34]. Excited by a laser with photon energy less than half of the ZnO band gap, the peak of the UV emission was 3.15 eV, which could be attributed to D0X and the excited states. If the photon energy of the excitation laser is larger than half of the band gap, the peak of the UV emission band is at 3.30 eV, which could be attributed to the free exciton emission. In general, the UV emissions of the plane surfaces and the 2D nanostructures were slightly broader and red-shifted compared with the results reported in [35], which was mainly caused by the higher temperature in our experiments (laboratory temperature). The authors of [30] have also reported that the width of the UV emission band increases and the peak red-shifts with increasing temperature. In addition, the red-shift of the UV emission band was also dependent on the excitation laser wavelengths and intensities. Figure 7 plots the PL intensities of the UV luminescence and the SHG signals as a function of the 800 nm laser intensity I. For the 2D nanostructures, the slope of the fitting line of the UV luminescence at a wavelength of 391 nm was 2.4, whereas for the SHG signals it was 2.03, which was similar to the results reported in [1].
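For readers unfamiliar with how such exponents are obtained, the sketch below fits a power law to intensity-dependent PL data by linear regression in log-log space; the data points are synthetic placeholders generated to follow an exponent of about 2.4, not measurements from this work.

```python
# Minimal sketch of extracting a power-law exponent n in PL ~ I^n:
# fit log(PL intensity) against log(excitation intensity) and read off
# the slope. The data points below are synthetic placeholders.
import numpy as np

excitation_gw_cm2 = np.array([10.5, 20.0, 35.0, 55.1])  # assumed intensity grid
pl_counts = np.array([1.0, 4.7, 18.0, 53.0])             # synthetic PL signal

slope, intercept = np.polyfit(np.log(excitation_gw_cm2), np.log(pl_counts), 1)
print(f"fitted exponent n in PL ~ I^n: {slope:.2f}")  # ~2.4 for this toy data
```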
At room temperature, the ZnO crystal can linearly absorb light with photon energies in the range of 2.95-5 eV. For photon energies in the range of 2.95-3.25 eV, this is band-tail absorption. In our experiments, the photon energies of the excitation laser were in the range of 1.48-1.61 eV. The sample can absorb the excitation laser via two- and three-photon processes. Therefore, the dependence of I^2.4 indicated that the UV luminescence was caused by two- and three-photon absorption. The authors of [6] have reported on the UV luminescence of ZnO nanowires and found a dependence of I^2.13 on the excitation laser intensities, which revealed a two-photon absorption process as the excitation mechanism responsible. In addition to the slight difference between the samples, the main reason for the different slopes is the excitation laser spectra. In [6], the laser photon energies were in the range of 1.38-1.75 eV. For the plane surface of the ZnO crystal, the slope obtained by fitting the intensity dependence at a wavelength of 400 nm is 2.22. Figure 6(a) shows clearly that the UV band peaks shift gradually from 400 nm (3.11 eV) to 393 nm (3.15 eV) as the excitation laser intensities increase from 16 to 90 GW cm⁻². Therefore, the deviation of the slope of the 400 nm signal from 2.0 is mainly caused by the superposition of the UV emission band. Many previous works have reported various methods for fabricating ZnO nanostructures. The enhancement factors of the UV emissions of doped nanostructures were reported to be up to 10²-10⁴ [2,4,5,36,37]. These results were very interesting and exciting. However, the authors of [5,36,37] also reported that, before being doped, the UV emissions of the nanorods and nanowires were very weak. As a typical direct wide-band-gap semiconductor, the ZnO crystal itself has a strong blue emission band. In our experiments, we chose the ZnO crystal as the comparison object, and found that the UV emission was enhanced by 7-9 times after the 2D nanostructures were fabricated (see figure 7). The result was similar to the ZnO nanorods reported in [3]. The chemical methods for fabricating ZnO nanostructures are usually complicated. Compared with these methods, the interference of multiple femtosecond laser beams has some advantages. These luminescence dots have a regular arrangement, with the patterns and periods adjustable by changing the arrangement of the spatial positions and polarizations of the laser beams. The whole fabrication process was conducted in a laboratory environment and lasted for only several seconds. Figure 8 shows the scattering light intensities of the 2D nanostructures. The results indicated that the main scattering light was in the direction of the reflection angle of 45°. Summing the scattering light intensities in the collection angle of 15°-75° and in the semicircle (5°-175°), a collection efficiency of 65% was obtained. So we made a correction factor of 0.65² for the hemispherical scattering light. Figure 9(a) shows that the relative reflectivities and transmissivities of the 2D nanostructures decreased, respectively, to 18-25% and 10-15%, where the reflection and transmission of the plane surface of the ZnO crystal were normalized to 1. For each power of the incident laser, the absorptivity of the plane surface of the ZnO crystal was 18%, and for the 2D nanostructures it was 90% (see figure 9(b)). Namely, the optical absorption was enhanced by five times after the 2D nanostructures were formed.
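The absorptivity figures quoted above follow from a simple energy balance, A = 1 − R − T, once the collected light has been scaled by the measured collection efficiency. The sketch below illustrates this bookkeeping; it is a simplification that applies the same collection efficiency to the reflected and transmitted channels, and the input powers are placeholders chosen only to reproduce absorptivities of roughly 18% and 90%, not the measured values.

```python
# Minimal sketch of the energy-balance estimate of absorptivity. Input powers
# are illustrative placeholders, and a single collection efficiency is assumed
# for both channels, which is a simplification of the actual measurement.

def absorptivity(p_incident: float, p_reflected: float, p_transmitted: float,
                 collection_efficiency: float = 0.65) -> float:
    # Scale the collected light up to the full hemisphere, then apply A = 1 - R - T.
    reflected_total = p_reflected / collection_efficiency
    transmitted_total = p_transmitted / collection_efficiency
    return 1.0 - (reflected_total + transmitted_total) / p_incident

print(f"plane surface:     {absorptivity(100.0, 35.0, 18.0):.0%}")
print(f"2D nanostructures: {absorptivity(100.0, 4.5, 2.0):.0%}")
```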
Discussions on the enhancement of the UV emission
The formation of the 2D nanostructures was one reason for the enhancement of absorptivity. The increase in surface area enhanced the light absorption, and the multiple reflections of the incident light among these nanostructures reduced the reflection. In addition, the nonlinear absorption could be improved by the light localization effect of nanostructures [38-40]. A theoretical study indicated that the light intensity can be enhanced by four times at the surface of nanostructures [41]. Another reason for the enhancement of absorptivity was attributed to the change in crystalline structure on the sample surface [42,43]. Micro-Raman experiments were conducted with a Raman spectrometer (Jobin Yvon T64000) excited by an argon ion laser at a wavelength of 514 nm and measured in backscattering geometry. The results are shown in figure 10. For the 2D nanostructures, the main Raman shift peaking at 437 cm⁻¹, namely the E2 mode, decreased obviously compared with the plane surface. Meanwhile, the peak at 572 cm⁻¹, which is attributed to the A1(LO) phonon, was also observed [25]. The results indicated that the crystalline structure in the sample surface was greatly damaged during the formation of the 2D nanostructures. The authors of [44] also studied the changes in crystalline structures during the formation of nanoripples. Cross-sectional transmission electron microscopy showed that the depths of the nanoripples were in the range of 200-400 nm. The electron diffraction patterns revealed that the irradiated surfaces were capped by a layer (50-100 nm thick) of amorphous material, while the inner region was still of crystalline structure. Compared with the crystalline structures, the energy state densities of the band tail in the amorphous materials increased and the tail moved deeper into the band gap. This increased the two-photon absorption of the 2D nanostructures excited by the 800 nm femtosecond laser pulses. Therefore, the slope of the UV luminescence of the 2D nanostructures shown in figure 7 was less than that of the plane surface of the ZnO crystal. During the change from crystalline to amorphous structures, different kinds of defect states formed on the sample surface. As we discussed above, the laser ablation had an annealing effect on the 2D nanostructures. The density of oxygen vacancies could not increase obviously because the experiments were conducted in air. The main surface defect states were zinc interstitials and zinc vacancies. In addition, the defect states of zinc interstitials could enhance the UV emission, which was caused by the coupling between excitons and the donor zinc interstitials [26,27]. Therefore, the enhancement of the UV emission of the 2D nanostructures resulted from the enhanced optical absorption and the surface defect states of zinc interstitials.
Conclusions
In summary, we fabricated 2D nanostructures on the surfaces of the ZnO crystal by the interference of three femtosecond laser beams, and obtained regular and uniform nanostructures by increasing the cross angles between any two laser beams. Compared with the plane surface of the ZnO crystal, the 2D nanostructures revealed an enhancement of the UV emission excited by 800 nm femtosecond laser pulses. The intensity of the UV emissions was comparable to that of nanorods fabricated by the chemical vapor deposition method.
We studied the mechanisms of the enhancement of UV emissions and found that they were mainly caused by the increase in optical absorption and the formation of surface defect states of zinc interstitials. Recently, Shimotsuma et al [45] and Beresna and Kazansky [46] reported the birefringence of self-assembled nanostructures in glass induced by femtosecond laser pulses, and they further demonstrated the potential applications of the nanostructures in the optical storage and the polarization-sensitive diffractive elements. We have fabricated different types of complex nanostructures by changing the laser polarization. In addition to the enhancement of optical absorption and the UV luminescence, these structures could also be used as uniaxial, biaxial and azimuthal micro-crystals, and have potential applications in the optical storage and complex birefringence elements in 2D and 3D geometries.
Business Process Management in Linking Enterprise Information Technology in Companies of Agricultural Sector
Business Process Management is one of the most important components of a process-driven organization, which we perceive as a sum of processes that more or less follow on from one another. By adjusting and managing these processes, we can greatly influence the organization's performance, efficiency, flexibility and competitiveness. Business Process Management is important to support the technical infrastructure of modern information systems and communication technologies. These systems are part of the overall enterprise information system. The following article focuses on the information and communication technologies needed for the proper and efficient functioning of process management in agro-sector companies. This article presents a summary of theoretical knowledge and practical recommendations for creating and maintaining a process management system in enterprises with the support of information and communication technologies. For a more detailed analysis of this issue, statistical research, the partial results of which are subjected to statistical testing, is presented in the following article.
Introduction
Nowadays, information and communication technologies (ICT) are an increasingly important factor in supporting the achievement of corporate goals (Hosťovecký et al., 2015). The development of information and communication technologies leads the company to process electronic documents more and more (Tvrdíková, 2008). ICT becomes an element that enables the growth and development of an organization. This is why ICT demands are rising. Gradual developments have shown that ICT helps create value where it enables business processes to be supported, with profits being made in both the technology and business parts of the enterprise (Hallová et al., 2017). In general, the value of ICT for an enterprise can be viewed from an analytical or pragmatic point of view. It is a problem for many companies to link ICT with their strategic interests and daily routines (Vaněk et al., 2011). In order for an organization to function effectively, it must identify and manage a number of related activities. The application of a process system within the organization, along with process identification and interaction as well as their management, can be seen as a process approach (McAdam, 2003). According to Ringim et al. (2012), the ability to innovate and change has become a necessary competitive weapon, closely linked to the knowledge potential of the organization and its ability to learn. The basic principle is the collective effort to achieve the desired, shared goal by increasing the team's action potential in the process of personal and team learning. The essence of enhancing the action potential is being able to respond based on acquired knowledge and to develop solutions (Carda et al., 2001). We can see each enterprise as a system of processes and activities carried out to achieve its goals. Enterprises differ from each other, in particular, in how and how effectively the individual processes are performed and managed (Řepa, 2012). Process management is defined as a methodology for evaluating, analyzing and improving key business processes, based on customer needs and requirements.
An interesting basis for understanding the process approach is systems theory, which emphasizes the necessity of a comprehensive understanding of the partial management processes and their coordinated alignment in the target behavior of an integrated functioning unit. The advantage of these approaches is that they are based on the abstraction of reality into a set of elements and their linkages. They also try to identify inputs to the system that are essential to the behavior of the system so that the system achieves a goal (Ahmad et al., 2007). Process Management and Business Process Management (BPM) are among the most common terms in today's business and ICT world. Process management is a natural and comprehensive management approach to running a business, creating the prerequisites for a highly efficient, agile, innovative and adaptable organization that far exceeds what is reachable by traditional management approaches (Smith, 2003). The aim of the article is to analyze the use of information systems infrastructure and technologies in agribusinesses in connection with process management, and to describe their dependencies on the basis of a questionnaire survey of a sample of agricultural companies. We encounter processes at almost every step today. The term is most popular in companies and corporations operating in the area of information systems and technologies. It sometimes appears that processes and information technologies are the same thing, which of course is not the case; nevertheless, process management strongly supports the real substance of IT governance. The word process can be used almost anytime, anywhere and in almost any meaning. According to Luftman (2003), process management is an activity leading to the transformation of a functionally oriented organization into a process-based organization. The process of implementing a process organization requires personnel and project support. Solving tasks with such a high degree of abstraction and complexity in an organization's heterogeneous ICT infrastructure environment must be supported by appropriate ICT technologies. A technology cluster created within the organization from a variety of non-interconnected software products, which aims to cover the whole area of process design, implementation and tracking, need not effectively meet the expectations of the organization. Zairi et al. (1995) say that process management means a substantial change in the notion of the information system compared with the traditional view. The arrival of process management put an end to the concept of the information system as a monolith with a fixed structure, determined by a properly designed database and its associated functionality. Process management requires that the information system be flexibly adapted to the business process, because its only purpose is to support it. Business processes, however, naturally change. This has a significant impact, for example, on the concept of information system standardization, as known from ERP, whose concept has traditionally led to a single, universal, definitive solution, generally applicable to each organization. This concept is being overcome today, precisely in the context of the knowledge that process management ideas have brought, as evidenced by the approach of SAP, the market leader in systems in this area. It has since come up with the SAP NetWeaver product, which represents a significant conceptual turn in that company's approach to ERP (Crowe, 2002).
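As a purely illustrative aside, not a description of any product mentioned here, the following minimal sketch shows what it means in practice for an information system to be flexibly adapted to the business process: the process is held as an editable definition, and changing the business process means changing that definition rather than re-coding the system's functions. All names are hypothetical.

```python
# Minimal sketch (no vendor's product): process definitions are held as data,
# while the information system exposes a pool of reusable functions. Changing
# the business process means editing the definition, not re-coding functions.

def register_order(ctx):
    ctx["order_id"] = 42
    return ctx

def check_stock(ctx):
    ctx["in_stock"] = True
    return ctx

def invoice_customer(ctx):
    ctx["invoiced"] = True
    return ctx

FUNCTIONS = {  # the relatively stable functionality of the information system
    "register_order": register_order,
    "check_stock": check_stock,
    "invoice_customer": invoice_customer,
}

# The process model is plain data and can change without touching the code above.
SALES_PROCESS = ["register_order", "check_stock", "invoice_customer"]

def run_process(step_names, context=None):
    """A toy workflow engine: executes the functions named in the process model."""
    context = context or {}
    for name in step_names:
        context = FUNCTIONS[name](context)
    return context

print(run_process(SALES_PROCESS))
```

In a real BPM suite the process model would of course be a BPMN or similar artifact executed by a workflow engine, but the division of responsibilities between stable functions and a changeable process definition is the same.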
The requirements of the organization's process system for its information system are as follows (Ozcelik, 2010):
• The information system must, as far as possible, support all activities of the business processes, that is to say they have to be covered by its functionality,
• The information system must support the management of business processes in the form of a workflow, so it must enable the processes to be monitored throughout their course,
• The information system must, as far as possible, support all the rules of the business in question, so its functionality must help the process respect, as far as possible, all the restrictions and rules of the business,
• The information system must enable the natural transformation of business processes, so it cannot, through its structure, prevent conceptual or other changes to the processes.
While the data stored in the information system database represents its information potential (about the facts of the corporate system and its contexts), its functionality represents an action potential (the ability to process data). The workflow management system then has the role of a link between the data and the functions of the system, which also allows the enterprise process to use the system's data through its functions (Dennis et al., 2003). The information system of a process-managed organization differs from the traditional one by separating the functions of the system from the way they are used. For such an organization, the functionality of the system represents the pure potential of information support for activities, which is maximally universal and therefore independent of any particular procedure of use. While the functions of the system, the support of routine activities within processes, are basically relatively unchanging and standard, their combination is always specific, given each time by a particular situation. The procedure for using system features is always given by the current business processes. Therefore, a process-managed organization cannot make do with a traditional information system, because such a system preserves particular procedures by encoding them into its functionality and thus effectively immobilizes business processes (Mouzas, 2006). If an organization is to become dynamic and naturally variable through its diverse business processes, its infrastructure must also support this variability. Instead of frozen procedures for the use of its functions, the infrastructure must support their flexible combination according to the ever-changing needs of processes (Vaněk, 2009; Landa, 2010). The main trend in the further development of the BPM segment is SOA (Service Oriented Architecture). BPM solutions are a key component of service-oriented architecture. They represent the top layer of the architecture, which organizes the execution of various services so that the individual steps of the business process are carried out progressively. BPM solutions must be closely interconnected with other software components of SOA implementations such as the Enterprise Service Bus (ESB), SOA lifecycle management solutions, the service registry, and so on (Ozcelik, 2010). The reported BPM solution development confirms the trend of BPM connectivity with SOA. Business Process Management solutions help design, deploy, implement and track business processes.
They allow the organization of individual partial automatic or manual operations according to the requirements of a complex business process. They manage the execution of processes within rules that are derived from valid legislation, standards, and organization guidelines. They support the integration of applications and services into the organization's information system. They customize process execution to business requirements and track the current state of process execution. According to Ahmad et al. (2007), the benefit of implementing BPM technology in the corporate IS environment is to improve the organization's ability to cope with business-related requirements and opportunities. Organizations must flexibly respond to environmental changes to maintain their market position. Flexible customization of business processes, strategies, needs, services or products varies with customer, partner, and regulator requirements. Most changes require changes in the IT structure of the organization. The ability to flexibly change the information system is therefore a major constraint in implementing process changes resulting from the need to cope with organizational requirements and opportunities (Porter et al., 1993). BPM solutions enable fast change of the IT infrastructure related to business process modeling. BPM solutions enable one group of tools to capture processes from a business perspective and then link them to the IT applications that are necessary to implement them. BPM software packages include a set of tools for the entire process development cycle, from design to monitoring of real-world processes. Their approach eliminates the standard deficiencies of conventional process analysis: process and application analysis are linked to the maximum extent, and the related processes and roles are defined and documented in a structured manner. There is no difference between what has been proposed and what is applied in practice. BPM has a very wide range of applications due to the provided process modeling capabilities and its coverage of the entire development cycle. It provides efficient tools for process modeling by providing the benefits of EAI (Enterprise Application Integration) and EAM (Enterprise Architecture Modeling). BPM technology is a general platform in terms of independence from the vertical line of business or the type of organization (Rěpa, 2012; Ahmad et al., 2007). Introducing BPM is the most important prerequisite for the successful operation and advancement of any type of organization. BPM allows not only analyzing and monitoring processes but, above all, purposefully, organizationally and efficiently revealing new opportunities for organizational improvement (Mouzas, 2006; Tvrdíková, 2008). Material and methods In 2017, the Department of Computer Science conducted a survey focused on the issue of process management in agro-business enterprises. The paper focuses on the analysis of the use of process management in connection with the infrastructure of enterprise information technologies and systems. Based on the main goal of the paper, we have formulated the following hypotheses, which seek to clarify the main patterns and links in contemporary businesses. H1: The use of process management in enterprises depends more on the amount of annual turnover than on the size of the enterprise by number of employees.
H2: Companies with implemented process management use a significantly different set of software than enterprises that do not use process management. The survey sample was made up of 51 agro-sector enterprises focused on agriculture and food. Several scientific methods were used in the survey. The main method was analysis and comparison, which, based on the questionnaire survey, identified the current situation and the current state of use of BPM and information infrastructure in practice. We used the synthesis method to process the knowledge from the literature. We applied the induction method to formulate conclusions based on the evaluation of the survey. Using the deduction method, we applied the lessons learned from the literature to draw conclusions. To collect the data for our analysis, we chose a questionnaire survey, which was distributed to the individual entities. Statistical calculations were made in SPSS Statistics. The questionnaire results were tested for the reliability of the construct. Analysis of selected parts of the survey was performed in the IBM SPSS statistical software. Reliability was tested by Cronbach's alpha, one of the most frequently used indexes for investigating the reliability of a measuring tool (questionnaire). Based on the structure of the questionnaire and the results obtained, it was computed as α = (k / (k − 1)) · (1 − Σ s²_i / s²_sum), where k is the number of items (number of questions, quality criteria), s²_i is the variance of item i, and s²_sum is the variance of the sum of the items. Several statistical methods were used for the statistical evaluation. Verification of dependencies between the examined characteristics was performed using the chi-squared (χ²) test of independence in contingency tables. This test compares the empirical frequencies with the theoretical frequencies, that is, the frequencies that would be expected if the characteristics were independent. Where the chi-squared test of independence could not be used because the expected cell count assumption in the contingency table was not met, Fisher's exact test was used. Fisher's exact test is based on a contingency table and verifies the null hypothesis of the equality of two proportions, i.e., the independence of the two binary variables. This test is based on the assumption that all marginal frequencies (row and column totals) in the contingency table are fixed. This assumption is rarely met; typically, only the row totals or only the grand total are fixed. Results and discussion The benchmark sample consisted of 8 small enterprises, 33 medium-sized enterprises and 10 large enterprises. In the survey, we focused mainly on enterprises that implement modern information systems and management methods, so we concentrate mainly on large and medium-sized businesses, which usually have to deal with this issue first, have more extensive experience, and have generally progressed further in implementation. In the following part, we focus on the analysis of the use of modern management methods and IS/IT infrastructure in enterprises, the existence or absence of elaborated strategies and concepts, the use of different management tools and methods, and also information technology as a supporting mechanism in management. We also explore the use of IT and software in companies. In the survey, we analyzed the existence of plans and strategies for business areas and the use of different management methods. The main strategies, strategic goals, visions and missions are found in enterprises with the greatest percentage of occurrence and elaboration.
Secondly, there are established performance measurement systems and enterprise benchmarking, mostly in large companies. For small businesses, the percentages are lower than in the other groups, as shown in Table 1. When deciding on implementing a new enterprise information system, or upgrading or completing the current functionality, businesses consider a number of criteria that guide the selection of the IS/IT facility. In conducting our survey in companies and organizations of various sizes, we included this assessment in the survey as well. We offered the managers a choice and asked them to evaluate which criteria are the most common in their organizations. The report is created based on the size of the business, and therefore the individual percentages are calculated as the average value of the weights that the respondents assigned within one enterprise size group (Table 2). In the next section, we present the results of the questions on the most important areas of business processes in IS/IT implementation or innovation. These are not just the areas that IS/IT implementation affects most, but also areas that affect the implementation itself, supervise it, or even control it. The question was a closed selection of options. The interviewed managers could first see all areas or processes, then consider whether these were involved in IS/IT implementation in their company, and most of them indicated that they were. Therefore, this report is only informative, and we have used it to sort business processes according to the significance attributed to them by the managers in practice (Table 3). The survey focuses on analyzing the use of process management in connection with the existing enterprise information technology and systems infrastructure. Hence, the hypotheses set by us are intended to clarify the main patterns and relationships in today's businesses. Among the businesses we analyzed, we selected only those that reported using process management at least in part, using its tools, or having it fully implemented. The breakdown of enterprises into groups by number of employees is similar to the overall distribution of enterprises across the sample. However, when we look at the breakdown by turnover, there is a significant disproportion, with 46% of process-managed enterprises being enterprises with a turnover of more than EUR 3.5 million. It follows that companies apply process management mainly because of workload, that is, to better manage the volume of activity, which we can observe in the dependence on the amount of turnover rather than on the number of employees. We can also conclude that a process-managed enterprise is at a higher level of management, that is, it has the necessary skills needed for successful growth, as well as the strategy and strategic goals of the company, an established performance measurement system and, in particular, targeted IS/IT infrastructure management. So we can say that hypothesis 1 has been confirmed. Businesses can target their IS/IT investments effectively, aiming to maximize the resulting effect and the benefits of automating and modernizing their processes, or they can postpone investments, economize, invest intuitively, sometimes inadvertently or impulsively, and then have to compensate for the inadequate result.
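As an aside on the statistical machinery used in this survey (Cronbach's alpha for questionnaire reliability, and chi-squared or Fisher's exact tests of independence, e.g., between process management use and turnover category), a minimal Python sketch with purely illustrative data follows; it is not the authors' actual computation.

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency, fisher_exact

# Hypothetical questionnaire items (one column per item, numeric answer codes).
items = pd.DataFrame({
    "q1": [4, 5, 3, 4, 2, 5],
    "q2": [3, 5, 2, 4, 3, 4],
    "q3": [4, 4, 3, 5, 2, 5],
})

def cronbach_alpha(df: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the item sum)."""
    k = df.shape[1]
    item_var = df.var(axis=0, ddof=1).sum()
    total_var = df.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

print("Cronbach's alpha:", round(cronbach_alpha(items), 3))

# Chi-squared test of independence on a 2x2 contingency table
# (rows: process management yes/no, columns: turnover category); counts are invented.
table = np.array([[12, 5],
                  [8, 26]])
chi2, p, dof, expected = chi2_contingency(table)
print("chi2 =", round(chi2, 2), "p =", round(p, 4))

# Fisher's exact test when expected cell counts are too small for the chi-squared test.
odds_ratio, p_exact = fisher_exact(table)
print("Fisher exact p =", round(p_exact, 4))
```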
We believe that if a company is process-driven, that is, at a higher level of management, it should know how technology can help it and should invest its resources purposefully. Under hypothesis 2, the spending of small and medium-sized enterprises that use process management is remarkable, reaching up to 45.6%. This means that small and medium enterprises using process management spend more than twice as much funding on IS/IT as enterprises that are not process-driven. Thus, in the case of small and medium-sized enterprises, we assess that this hypothesis is confirmed. Conclusion At the time of the arrival of BPM from theory into practice, there were also certain exaggerated expectations. Particularly in smaller organizations, people welcomed the abandonment of traditional organizational structures and functional systems, and expected process management to bring some relief in management and an automatic ability to quickly adapt to changing conditions and to improve. Ciccio (2015) argues that adapting to new conditions takes some time before integrated systems deliver the expected results. As is also the case with any new technology, the human dimension is the critical factor: because of it, even what is well thought out and planned can go wrong. While we can use engineering approaches, we must often count on the illogical resistance of workers to trying new things, even if these later save them work and time and improve performance. Therefore, the human dimension becomes a decisive factor, mainly human motivation and education. The process view requires finding a higher-level reason for the activities we perform, in such a way that it is independent of the operating rules of individual business units. It is argued that for every business process there must be some reason in the form of a purpose or goal, and possibly also an external impetus. Then we obtain the insights that are important for modeling the enterprise information system. In business processes, it is essential that the outcome of the process is a product or service that is directed to a customer, predominantly external, but also within the enterprise. The result of introducing process management is the executive team's proposal to organize the company so that the key processes that are important for competitiveness are effectively implemented and meet the expectations of both internal and external customers. After the company accepts the proposal, the processes are introduced into the routine running of the company. The benefits of process management depend on the objectives of the project. As we have already said, the way we design processes, and therefore the benefits, can be influenced by the goals. However, process management gives a fresh insight into the importance of the activities carried out and helps to better define the concrete responsibilities for their implementation and quality. It disrupts the traditional line structure and places emphasis on results and the whole, not on parts. We thus pay attention to the process across the organizational structure, which contributes to better teamwork and corporate culture (Aalst et al., 2016).
Expecting Mother Nature: Uncertain Hurricane Forecasts Disrupt Prenatal Care and Impair Birth Outcomes Early forecasts give people in a storm’s path time to prepare. Less is known about the cost to society when forecasts are incorrect. We examine over 700,000 births in the path of Hurricane Irene and find exposure led to a decrease in birth weights and an increase in preterm and low birth weight outcomes. Additional warning time decreased preterm birth rates for women who experienced intense storm exposures, documenting a benefit of avoiding a type II forecasting error. A larger share of this at-risk population experienced a type I forecasting error where severe physical storm impacts were anticipated but not experienced. Disaster anticipation disrupted healthcare services by delaying and canceling prenatal care, leading to impaired birth outcomes. Recognizing that storm damages depend on human responses to predicted storm paths is critical to supporting the next generation’s developmental potential with judicious forecasts that ensure public warning systems mitigate rather than exacerbate climate damages. Introduction The best available climate science predicts an increase in high-intensity (1) and less predictable (2,3) tropical storms. Despite the physical damages from these storms totaling over 500 billion dollars since 2004, National Oceanic and Atmospheric Administration (NOAA) National Hurricane Center (NHC) funding allocated to forecasting tropical storm threats continues to diminish (4). Media coverage supports advanced warning systems by forecasting potential threats to the masses. The goal of disaster forecasts is to avert damages to infrastructure, human health and well-being while recognizing that the broadcast itself will increase psychological stress in viewing populations (5). Yet, no large-scale study exists documenting the relationship between forecast accuracy and human health impacts in hurricane-threatened populations. The release of highly uncertain disaster forecasts may represent a public health threat for several reasons. Disaster-related media coverage has long been shown to contribute to posttraumatic stress disorder symptoms in viewers (6,7). New evidence reveals that forecasted posttraumatic stress symptoms, leading up to a hurricane event, influence the public's mental health before and after a hurricane event (5). Taken together, NHC storm forecasts, such as the "Cone of Uncertainty," which are often misinterpreted by the public (8) but widely disseminated by the media, may cause substantial distress to viewing populations. Such a public health threat is most preventable in communities that expect and prepare for a hurricane exposure that does not end up generating physical impacts, i.e., a type I forecasting error. Experiencing a disaster during pregnancy can impair birth outcomes (9)(10)(11)(12) and disrupt access to healthcare services (13)(14)(15)(16), which may have long-run implications for the unborn child's livelihood (11,17,18). In utero exposure to stress (12), environmental toxins (19,20) and disrupted access to health services (21) are leading explanations for observed reductions in birth weight and gestation lengths. Causal linkages between these birth outcomes and later life disease prevalence (22,23), mental health (12), aptitude, educational attainment and future wages (17) have been established.
No study to date has isolated these underlying mechanisms empirically and measured the extent to which institutions influence birth outcomes by disseminating uncertain disaster forecasts to the public. Results We find that in utero exposure to Hurricane Irene created widespread and detrimental impacts to birth outcomes. Herein we report birth impacts as an average across all rainfall intensity bands with associated 95% confidence intervals (CI) in parentheses. Importantly, we find that birth impacts do not vary meaningfully across storm exposure intensity, which is reinforced by overlaying a cumulative distribution of wind exposures and estimated treatment effects (Fig. 1). The average in utero exposure to Hurricane Irene reduced birth weights by 12.7g (5.4g to 20.0g), which represents a 0.17% to 0.61% reduction on the birth weight sample mean (x̄_bw = 3,263.7g). The largest treatment effect of 14.4g (-3.9g to -25.0g) is estimated for populations receiving hurricane-force winds and a one-day maximum rainfall in excess of 10 inches, and the smallest treatment effect of 10.1g (-3.2g to -17.1g) is estimated for populations receiving less than 1 inch of rainfall and only mild winds (Fig. 1.a). Gestation lengths were also shortened following in utero exposure by an average of 0.10 weeks (0.07 weeks to 0.14 weeks), which represents a 0.18% to 0.36% reduction on the gestation length sample mean (x̄_gest = 38.5 weeks). The largest treatment effect of 0.11 weeks (0.07 weeks to 0.16 weeks) is estimated for populations receiving hurricane-force winds and a one-day maximum rainfall in excess of 10 inches and the smallest treatment effect of 0.09 weeks (0.05 weeks to 0.12 weeks) is estimated for populations receiving less than 1 inch of rainfall and only mild winds (Fig. 1.b). A similar pattern is revealed for increased likelihood of experiencing low birth weight, very low birth weight, preterm and extremely preterm birth outcomes following in utero exposure to Hurricane Irene. Each of these birth impacts is significant but statistically indistinguishable in magnitude across our wind and rainfall indicators for exposure intensity (Fig. 1). The incidence of low birth weight outcomes increased by 0.56 percentage points (0.22 percentage points to 0.90 percentage points), which represents a 2.52% to 10.34% increase in the likelihood of a low birth weight outcome on the sample mean (x̄_lbw = 0.087) (Fig. 1.c). The incidence of very low birth weight outcomes increased by 0.38 percentage points (0.23 percentage points to 0.52 percentage points), which represents a 15.33% to 34.67% increase in the likelihood of a very low birth weight outcome on the sample mean (x̄_vlbw = 0.015) (Fig. 1.d). The incidence of preterm births increased by 0.96 percentage points (0.53 percentage points to 1.38 percentage points), which represents a 5.20% to 13.53% increase in the likelihood of a preterm birth on the sample mean (x̄_pre = 0.102) (Fig. 1.e). The incidence of extremely preterm births increased by 0.56 percentage points (0.35 percentage points to 0.78 percentage points), which represents a 12.07% to 26.90% increase in the likelihood of an extremely preterm birth on the sample mean (x̄_expre = 0.029) (Fig. 1.f). We would expect that a higher intensity of rainfall and wind would be associated with more drastic birth impacts.
The consistency in our measured birth impacts across storm exposure intensity is unexpected and may suggest that birth impacts are being driven by a mechanism other than physical storm exposures. We further investigate the nature of potential physical exposures by focusing on statewide groundwater contamination. During Hurricane Irene, over 2 million individuals that represent over 20% of North Carolina's population (24) relied on private wells that are federally unregulated and particularly vulnerable to contamination from severe weather and flooding events (25)(26)(27)(28). We examine over 17,000 private well water samples that were collected by county health offices statewide and processed through the North Carolina State Laboratory of Public Health. We focus on nitrate, manganese, lead, chromium, cadmium and arsenic results, which are all known to disrupt in utero development and have exposure pathways related to major storm events (29)(30)(31)(32)(33)(34). While these samples are not taken from the same residences as our pregnant women sample, both data sets have statewide and residence-level coverage and use the same selection into treatment window around Hurricane Irene. This approach provides an indicator of potential environmental exposures that may explain the observed statewide hurricane-linked birth impacts. Reinforcing our suspicion that observed birth impacts are driven by a mechanism other than the physical impacts of Hurricane Irene, we find no meaningful relationship between storm exposure intensity and private well water contamination rates (Fig. A.1). We also examine a suite of medical risk factors reported for each pregnant woman within our data set (Fig. A.2). Systematic geographic sorting along socioeconomic lines, which may have occurred during our sample window, could reasonably explain observed birth impacts. In such a case, we would expect prediagnosed medical risk factors to vary systematically between our treatment and control groups. We focus on the incidence of prepregnancy hypertension and having previously had a poor pregnancy outcome for each individual within our sample. Previously poor pregnancies include those that resulted in perinatal death, small-for-gestational age or intrauterine growth restricted births. We find no relationship between selection into treatment (or intensity of treatment) and the presence of a prepregnancy hypertension or previous poor pregnancy diagnosis (Fig. A.2). We then focus on the incidence of gestational hypertension and eclampsia that may have developed during pregnancy. To the extent that maternal stress from experiencing a severe storm event is driving observed birth impacts, we might expect gestational hypertension and eclampsia rates to be elevated in our treatment group. We find no evidence of increased incidence of gestational hypertension or eclampsia relative to our baseline group (Fig. A.2). We turn our attention to the disruption of healthcare services from hurricane exposure that might impact birth outcomes. For each birth in our analysis, we examine the impact of hurricane exposure intensity on the month that prenatal care began following the clinically-determined conception date (N = 582,407) and the total number of prenatal care visits that occurred throughout the pregnancy (N = 702,336). Both prenatal care indicators suggest that hurricane exposure creates a disruption of access to healthcare services (Fig. 2).
The average in utero exposure to Hurricane Irene delayed the first prenatal care appointment by 0.24 months (0.18 months to 0.30 months), which represents an approximate 1 week or 6.92% to 11.54% delay on the sample mean of when prenatal care was initiated (x̄_beg = 2.60). The total number of prenatal care appointments is reduced on average by 0.63 appointments (0.37 appointments to 0.89 appointments) following in utero exposure to Hurricane Irene, which represents a 3.03% to 7.29% reduction on the sample mean of total prenatal care appointments (x̄_app = 12.21). Similar to the observed impacts of Hurricane Irene on birth outcomes, we observe that prenatal care disruptions are significant but vary little across the intensity of storm exposures. Such a finding suggests that the anticipation of hurricane exposures and associated institutional responses to that anticipation, rather than the physical impacts from the storm itself, may be the driving force that disrupts healthcare services. To better understand the connection between hurricane anticipation and observed birth impacts, we overlay the National Hurricane Center's "Cone of Uncertainty" forecasts for Hurricane Irene with the residential addresses of all pregnant women within our sample. The spatial extent of the Cone of Uncertainty represents a zone that will contain the "eye" of an impending hurricane with approximately 66% confidence. Variation in hurricane anticipation is given by the total hours that each residential address within our sample spends within this cone (Fig. 3). The cone first overlapped with North Carolina at 9:00am on August 22, 2011 and scraped across the state until the hurricane's eye was over the northeastern corner of the state at 9:00pm on August 27th (Fig. 3). We stratify our sample into three categories that experienced light rainfall (<1 inch of rain within the most intensive 24 hour period), moderate rainfall (1-2 inches of rain within the most intensive 24 hour period) and heavy rainfall (>2 inches of rain within the most intensive 24 hour period) during Hurricane Irene. Following the same empirical approach as previous analyses, our baseline group of comparison represents births from residential addresses that would have experienced the same light, moderate and heavy rainfall conditions if their pregnancies had overlapped with Hurricane Irene, i.e., births that occurred in the same location but at a slightly later time. Separating the sample into categories of physical exposure allows us to measure the additional benefit (or harm) that results from advanced warning that signals potential impact ahead of a storm event that ends up bringing either mild, moderate or severe weather. In other words, our approach isolates both (i) the reproductive health benefits of an advanced warning system that avoids a type II forecasting error, i.e., correctly provides additional warning time to vulnerable populations that receive intense physical exposures, and (ii) the reproductive health harm of an advanced warning system that commits a type I forecasting error, i.e., incorrectly provides additional warning to vulnerable populations that only receive mild physical exposures. In the latter case, additional exposure anticipation may lead to the (ex post unnecessary) cancellation of prenatal care appointments, which may inadvertently cause harm to birth outcomes. The average individual that received heavy rainfall experienced an average of 15.9 6-hour periods within Hurricane Irene's cone of uncertainty (approximately 95 hours).
For these individuals, the prediction of direct hurricane exposure was ex post correct and additional time spent within the cone of uncertainty served as an accurate risk signal to prepare for imminent exposure. We find that the marginal effect of an additional six hour window of preparation for these heavily-exposed populations had no meaningful impact on birth outcomes, gestation length or the incidence of low birth weight, very low birth weight or extreme preterm birth outcomes. We find that additional advisories for this group of women led to a statistically significant decrease in the likelihood of having a preterm birth, which represents a 1.2% reduction on the heavily-exposed sample mean (x̄_pret = 0.103). The average individual that received light rainfall experienced an average of 6.5 6-hour periods within Hurricane Irene's cone of uncertainty (approximately 39 hours). For these individuals, the prediction of direct hurricane exposure was ex post incorrect and additional time spent within the cone of uncertainty served as an inaccurate risk signal that may have unnecessarily disrupted planned healthcare services. We find that the marginal effect of residing within the cone for an additional six hour window decreased birth weights by 4.1 grams for this lightly-exposed population, which represents a 0.13% reduction in birth weight on the lightly-exposed sample mean (x̄_bw = 3,264.1). For this group, we also find that an extended time of anticipating direct impact leads to a marginally significant increase in the incidence of low birth weight, very low birth weight, preterm and extreme preterm births. The marginal impact on low birth weight incidence is 0.0021, which represents a 2.4% increase in the likelihood of a low birth weight outcome on the lightly-exposed sample mean (x̄_lbw = 0.0869). The impact on very low birth weight outcomes is relatively larger, 0.0008, which represents a 5.4% increase in the likelihood of a very low birth weight outcome on the lightly-exposed sample mean (x̄_vlbw = 0.0147). We observe a similar trend for preterm and extreme preterm births. The marginal impact on preterm incidence is 0.0011, which represents a 1.1% increase in the likelihood of a preterm birth on the lightly-exposed sample mean (x̄_pret = 0.1016). The impact on extreme preterm births is relatively larger, 0.0014, which represents a 4.9% increase in the likelihood of an extremely preterm birth outcome on the lightly-exposed sample mean (x̄_expret = 0.0287). Discussion The findings represent the first evidence that uncertain hurricane forecasts lead to individual-level disruptions in healthcare services. These impacts on birth outcomes are similar in magnitude to those found in response to other traumatic events experienced during pregnancy, such as nearby terrorist attacks (35), bereavement (12) and financial hardship (36). A key distinction is that the driving mechanism of exposure is a public warning system that is designed to mitigate rather than exacerbate the impacts of storm events on threatened populations. Studies such as ours are a first step to timing the optimal dissemination of disaster forecasts. Findings highlight the importance of understanding risk preferences of disaster-threatened populations and institutions. In the case of Hurricane Irene, the early release of the storm track forecast triggered a precautionary response by patients and healthcare providers. The decision to cancel healthcare appointments was driven by risk-averse preferences among these groups.
However, the spatial extent of those cancellations was determined by the amount of forecast uncertainty. As such, we discover that this combination of risk-averse preferences and forecast uncertainty during Hurricane Irene disproportionately harmed the unborn. On the margin, delaying the release of Hurricane Irene's storm forecast would have improved birth outcomes (birth weight, low birth weight and preterm outcomes) for 2.5 women relative to each woman for whom the delay would have impaired birth outcomes (increased preterm births). Evaluating the impact of storm forecast uncertainty in this way has the potential to guide cost-benefit analyses for the research and development of improved storm prediction models. The findings presented herein provide direction for several areas of future research. Empirically, heterogeneity in the estimated treatment effects should be explored to support future policy implications. Hurricane "experience" may mediate observed healthcare disruptions, which could be investigated by linking residential addresses with historical storm events and real estate records. Trimester of exposure should also be investigated to identify populations that are most vulnerable to disruptions in health care. Behaviorally, our analysis is unable to predict the psychological impacts of a delayed storm forecast. The ambiguity (rather than uncertainty) surrounding a low-information scenario may trigger similar precautionary responses by individuals and institutions during the anticipation phase of delayed official storm forecasts. Here, future research should examine how public risk responses are likely to differ in disaster scenarios characterized by extreme ambiguity. Data and Methods Our analysis is based on the North Carolina Department of Health and Human Services (NCDHHS) vital statistics data set for all North Carolina live and still births from August 26th, 2006 to June 14th, 2012. We construct a data set of 710,186 North Carolina birth outcomes with associated prenatal care and medical risk factor information that are georeferenced at the residential address level. Birth outcomes include birth weight (g) and gestation length (weeks) variables that are used to create binary indicators for low birth weight (<2,500 g), very low birth weight (<1,500 g), preterm (<37 weeks) and extreme preterm (<34 weeks) outcomes. Associated prenatal care indicators include the number of prenatal care visits and the gestational month in which prenatal care began. Medical risk factors include indicators for prepregnancy and gestational hypertension. We focus on the North Carolina impacts of Hurricane Irene, which made landfall August 27, 2011. Births that occurred in the five years prior to the hurricane's impact serve as a baseline of comparison against births experiencing in utero exposure to Hurricane Irene. Incorporating zip code and monthly fixed effects ensures that our estimation procedure isolates the impact of hurricane exposure on birth outcomes rather than local social and institutional factors (37)(38)(39)(40) and seasonal trends (41)(42)(43)(44). An annual time trend is also included to control for background trends in birth outcomes from 2006 to 2012. In all analyses, standard errors are clustered at the county level, which is the level of public health services and data collection throughout North Carolina. Clustering at the county level allows for arbitrary serial correlation across births within the same county over time.
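A minimal sketch of how the binary outcome indicators described above could be constructed from the vital statistics variables; the DataFrame and column names are hypothetical placeholders, not the NCDHHS schema.

```python
import pandas as pd

# Hypothetical birth records with birth weight in grams and gestation length in weeks.
births = pd.DataFrame({
    "birth_weight_g": [3410, 2310, 1480, 2950],
    "gestation_weeks": [39, 36, 31, 38],
})

# Binary indicators using the cutoffs given in the text.
births["low_bw"] = (births["birth_weight_g"] < 2500).astype(int)          # < 2,500 g
births["very_low_bw"] = (births["birth_weight_g"] < 1500).astype(int)     # < 1,500 g
births["preterm"] = (births["gestation_weeks"] < 37).astype(int)          # < 37 weeks
births["extreme_preterm"] = (births["gestation_weeks"] < 34).astype(int)  # < 34 weeks

print(births)
```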
Pre-hurricane births serve as a control group for post-hurricane births. Constructing the data set in this way exploits the fact that pre-natal, but not post-natal, exposure to a disaster may influence birth outcomes. Our empirical strategy hinges on the assumption that pregnant women did not select into treatment. We present layers of evidence that this assumption is reasonable and that our results glean insight into the causal nature of in utero exposure to hurricanes on birth outcomes. Precise birth dates and residential addresses allow us to control for local neighborhood and seasonality effects that are known to otherwise influence birth outcomes. We then use variation in a woman's residential location relative to the NHC's ex ante "Cone of Uncertainty" forecasts and the hurricane's ex post storm track and associated rainfall intensities. We focus on the effects of in utero exposure to disaster stress by identifying women who anticipated direct hurricane impact but were not necessarily exposed to severe weather because of the storm's changing trajectory. The selection of births includes each woman's expected delivery date, which is defined as 280 days after the clinically-estimated date of conception. An expected delivery date within five years leading up to the Hurricane Irene disaster declaration date, August 25th, 2011, is placed into the control group and an expected birthdate within the 280 days following Hurricane Irene is placed into the treatment group. Constructing the sample in this way includes all births that experience pre-natal or post-natal exposure to Hurricane Irene within the relevant time window. Construction of the data set follows convention in the literature (12,45) and helps overcome two empirical challenges. First, opting to define the treatment window based on actual birth dates creates a mechanical correlation between gestation length and the likelihood that a pregnant woman experiences a hurricane, i.e., longer gestation lengths lead to larger birth weights and an increased likelihood that hurricane exposure occurred during the pregnancy. Second, a large literature and our findings suggest exposure to a disaster influences gestation length. Defining the treatment window based on expected birth dates, rather than actual birth dates, ensures that the treatment window is predetermined at the time of Hurricane Irene's arrival, i.e., there is no selection of women into treatment from exposure (12,45). Formally, the sample selection contains a treatment group and a control group. The treatment group is all pregnant women residing in North Carolina during Hurricane Irene's disaster declaration date and within the first 40 weeks following their approximate date of conception (c). We define the child's expected birth date as e_b = c + 280 days. The control group contains all women whose births were within x days of Hurricane Irene. We include a full five years, x = 1,825, of control group expected birth dates to ensure that we are able to account fully for seasonality effects in birth outcomes. The sample selection of Hurricane Irene births follows (11) and (12). Our data set includes the residential address and birth date for each observation. We geocode these residential addresses into (x, y) decimal degree coordinates using the ArcGIS geocoder application programming interface (API) for Python (1.5.2).
The coordinates were then converted into georeferenced points and used to calculate the distance of each residential location to the nearest point along a dissolved version of NOAA's preliminary best track from the National Hurricane Center's Geographic Information System (GIS) Archive, "Tropical Cyclone Best Track". Distance calculations were conducted using the UTM zone 17N projection. NOAA's NHC GIS sources were also used to overlay Hurricane Irene's "Cone of Uncertainty" predictions from advisory #7, which occurred on August 22, 2011 at 9:00am and represented the first time that the Hurricane Irene 5-day cone approached the border of North Carolina, to advisory #30A, which occurred on August 27, 2011 at 11:00pm and represented the final intersection of North Carolina and Hurricane Irene's 5-day "Cone of Uncertainty" (Figure 3). We also merge the North Carolina State Laboratory of Public Health's statewide private drinking water well samples that were collected and processed during our Hurricane Irene treatment and control time periods. Comprehensive samples for arsenic, cadmium, chromium, lead, manganese and nitrate are all collected because they are known to cause adverse effects on birth outcomes when ingested during pregnancy. Further, each contaminant may be plausibly related to hurricane-exposure conditions or indicative of geographic sorting among pregnant women in response to risk. Together, these water samples help us determine the underlying cause of observed birth outcomes and rule out the selection of women into treatment by geographic sorting along socioeconomic lines. Each inorganic analyte is coded as a binary outcome based on whether the sample exceeded the U.S. EPA's safe drinking water standards. Our NCDHHS data set also includes the number of prenatal visits that occurred during each woman's gestation period, the month that prenatal care began and information on whether prepregnancy hypertension, gestational hypertension, eclampsia and having previously had a poor pregnancy (including perinatal death and small-for-gestational age/intrauterine growth restricted births) were diagnosed as a medical risk factor. Consistent with prior work (46), we hypothesize that hurricane exposures lead to reduced birth weights and gestation lengths and an increased likelihood of preterm and low birth weight outcomes. Rainfall at each residential address is used as an indicator of exposure intensity represented by the one-day maximum rainfall from August 14, 2011 to September 4, 2011, which encompasses the hurricane event's impact on North Carolina. Rainfall data are from the PRISM climate group Time Series Values for Individual Locations. For the control group, rainfall data from Hurricane Irene are similarly overlaid with residential addresses as if these residences experienced physical exposures. However, our selection into treatment criteria ensure that only those women within our treatment group experienced uterine exposure whereas postnatal exposure in our control group cannot impact uterine conditions or birth outcomes of infants that were born before the hurricane's arrival. Conditioning on rainfall intensity in this way enables a comparison of women in the treatment and control groups who presumably share similar socioeconomic and demographic characteristics because they reside in the regions that would have been similarly exposed to Hurricane Irene's physical impacts.
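A minimal sketch of the residence-to-track distance step described above, using pyproj and shapely in place of the ArcGIS tooling named in the text, purely as an illustration; the coordinates are hypothetical, and the projection is UTM zone 17N (EPSG:32617) as stated above.

```python
from pyproj import Transformer
from shapely.geometry import LineString, Point

# Project lon/lat (WGS84) into UTM zone 17N so distances come out in meters.
to_utm = Transformer.from_crs("EPSG:4326", "EPSG:32617", always_xy=True)

# Hypothetical best-track vertices (lon, lat) and one residence location.
track_lonlat = [(-77.5, 33.8), (-77.0, 34.9), (-76.5, 35.9), (-76.0, 36.5)]
residence_lonlat = (-78.6, 35.8)

track_utm = LineString([to_utm.transform(lon, lat) for lon, lat in track_lonlat])
residence_utm = Point(to_utm.transform(*residence_lonlat))

# Shortest distance from the residence to the storm track, in kilometers.
distance_km = residence_utm.distance(track_utm) / 1000.0
print(f"Distance to best track: {distance_km:.1f} km")
```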
Our estimating equation, given in Equation (1), is specified for a woman i who resided in zip code z whose birth took place in month m of year y. The variable E_iymz is a binary variable that takes the value of 1 if her birth is in the treatment group and 0 otherwise; that is, E_iymz = 1[c ≤ August 25, 2011 ≤ e_b]_iymz. The variable ln R_iymz is the natural logarithm of the 24-hour maximum rainfall that the woman experienced during the hurricane week. The variables μ_m and ζ_z are month and zip code fixed effects. The variable Year is a linear year trend. The dependent variable, y_iymz, is the birth outcome of each woman. Our estimating equation resembles (12) but includes an exposure intensity variable that is interacted with the treatment group binary variable. Mediating the intensity of exposure in this way allows us to estimate non-linear impacts of hurricane exposure, across rainfall intensity, on North Carolina's birth outcomes. Standard errors for all regressions are clustered across the State of North Carolina's 100 counties, which is the level of public health services and data collection throughout North Carolina. Clustering at the county level allows for arbitrary serial correlation across births within the same county over time. After Equation (1) is estimated, we calculate the treatment effect of hurricane exposure on the birth outcome, ψ, as a function of R_iymz, which is the basis for Figures 1, 2, A.1, A.2 and Table 1, with expanded summary statistics available in Table A.1. To examine the effect of hurricane anticipation on birth outcomes, we augment the estimating equation with an advisory-count term, where ln R_iymz is the natural logarithm of the 24-hour maximum rainfall that the woman experienced (or would have experienced) at her residential address, E_iymz is the treatment binary variable and y_iymz is the birth outcome. The variable C_iymz is the number of six-hour advisories for which a woman's residence was within Hurricane Irene's cone of uncertainty. Such variation in advisories reveals the intensity of type I errors for those locations that experienced only mild weather exposures. The coefficient β_4 is the marginal effect of an additional advisory at the residence of an individual in the treatment group. While no advisories occurred for women in the control group, we calculate these advisories using the same approach to ensure that geographic factors are fully controlled. This approach, and interacting our advisory variable with our exposure indicator, ensures that our estimated marginal effects are unique to exposed women, not driven by other geographic factors local to where advisories were issued.
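A minimal sketch of how a specification in the spirit of Equation (1) could be estimated, with zip-code and month fixed effects, a linear year trend, and county-clustered standard errors; the data below are synthetic and the column names are hypothetical placeholders, so this is an illustration of the estimation mechanics rather than a replication of the authors' model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the birth records (hypothetical column names, not the NCDHHS schema).
rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "treat": rng.integers(0, 2, n),      # E: expected-birth window overlaps the storm (c <= Aug 25, 2011 <= e_b)
    "rain": rng.uniform(0.1, 12.0, n),   # one-day maximum rainfall (inches) at the residence
    "month": rng.integers(1, 13, n),
    "zip_code": rng.integers(1, 30, n),  # coarse stand-in for zip-code fixed effects
    "year": rng.integers(2006, 2013, n),
    "county": rng.integers(1, 101, n),   # 100 NC counties, the clustering level
})
df["ln_rain"] = np.log(df["rain"])
df["bw"] = 3300 - 12 * df["treat"] + rng.normal(0, 450, n)  # outcome y: birth weight in grams

# Treatment dummy plus its interaction with rainfall intensity, month and zip fixed effects,
# and a linear year trend.
model = smf.ols("bw ~ treat + treat:ln_rain + C(month) + C(zip_code) + year", data=df)

# Standard errors clustered on the counties, as described above.
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["county"]})
print(result.params[["treat", "treat:ln_rain"]])
```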
QoS and Energy-aware Resource Allocation in Cloud Computing Data Centers using Particle Swarm Optimization Algorithm and Fuzzy Logic System —Cloud computing has become a viable option for many organizations due to its flexibility and scalability in providing virtualized resources via the Internet. It offers the possibility of hosting pervasive applications in the consumer, scientific, and business domains utilizing a pay-as-you-go model. This makes cloud computing a cost-effective solution for businesses as it eliminates the need for large investments in hardware and software infrastructure. Furthermore, cloud computing enables organizations to quickly and easily scale their services to meet the demands of their customers. Resource allocation is a major challenge in cloud computing. It is known to be an NP-hard problem and can be solved using meta-heuristic algorithms. This study optimizes resource allocation using the Particle Swarm Optimization (PSO) algorithm and a fuzzy logic system developed under the proposed time and cost models in the cloud computing environment. Receiving, processing, and waiting time are included in the time model. The cost model incorporates processing and receiving costs. Two experiments demonstrate the performance of the proposed algorithm. The simulation results demonstrate the potential of our mechanism, showing improved performance over previous approaches in aspects such as providers' total income, users' total revenue, resource utilization, and energy consumption. INTRODUCTION Cloud computing offers on-demand access to various computing resources, including software, platforms, and storage [1]. Coupled with IoT and big data applications, it has revolutionized information technology operations, becoming the enabling technology for next-generation communications [2]. Energy consumption is a significant challenge for cloud data centers, given the substantial and constantly evolving size of cloud computing infrastructures and the rapidly growing number of users [3]. From 2005 to 2010, there was an average annual increase of 12% in energy consumption, which has intensified over the past few years [4]. Excessive energy consumption produces excessive heat emissions and increases costs, resulting in a degradation of system reliability and performance [5]. As energy costs rise and availability diminishes, data center resource management should be optimized for energy efficiency while ensuring high service levels [6]. Consequently, cloud service providers must guarantee that rising energy costs do not adversely affect their profit margins. Rising energy costs seriously threaten cloud infrastructures as they increase the Total Cost of Ownership (TCO) and reduce the Return on Investment (ROI) [7].
Energy efficiency in data centers is a complex problem since computing applications and data grow rapidly, and larger servers and disks are required to meet the processing times [8]. Green cloud computing aims to optimize the processing and management of computing infrastructure while reducing energy consumption [9]. The success of cloud computing depends on the sustainability of its future growth. The advent of cloud computing with increasingly pervasive frontend client devices interacting with backend data centers could cause energy consumption to skyrocket [10]. To promote green cloud computing, data centers should be operated efficiently. Cloud resources should be allocated according to user-specified Quality of Service (QoS) criteria via Service Level Agreements (SLAs) and minimize energy consumption. Multiple subscribers are served by combining the resources in the cloud [11]. Using a multi-tenancy model, the provider dynamically multiplexes the resources (physical and virtual) according to the requirements of each tenant [12]. Based on the lease and SLA agreement, the number of virtual resources will be assigned based on the needs of each client. As a result, as cloud service demand has grown, providers have had to scale up the number of resources and capabilities of cloud-based services to handle the increasing resource demands [13]. Fig. 1 shows a process for allocating resources in a cloud environment. Integrating Internet of Things (IoT), machine learning, deep learning, and neural networks within cloud resource allocation marks a transformative shift in addressing the complexities of modern computing landscapes [14]. The IoT introduces a vast network of interconnected devices and sensors, generating copious data streams requiring efficient processing and resource allocation [15,16]. Machine learning, especially when combined with deep learning and neural networks, enables cloud systems to learn, adapt, and make data-driven decisions, facilitating predictive analytics for demand forecasting and user behavior analysis [17,18]. These technologies empower cloud resource allocation mechanisms by automating decision-making processes, optimizing resource distribution, and enhancing the scalability of computing systems [19]. Neural networks, a subset of deep learning, allow for pattern recognition, predictive modeling, and intelligent decision-making, ensuring more accurate and adaptive resource allocation strategies [20,21]. Leveraging IoT data and machine learning capabilities within cloud resource allocation not only enhances system efficiency but also allows for dynamic adjustments, adaptive resource scaling, and predictive provisioning, ultimately leading to improved QoS and streamlined cloud operations in a rapidly evolving technological landscape [22].
The significance of meta-heuristic algorithms in cloud resource allocation lies in their capacity to efficiently navigate cloud environments' complex, dynamic, and constantly evolving landscape, offering optimized solutions amidst varying user demands and operational challenges. This paper introduces a hybrid optimization algorithm to address the issues in multi-cloud resource allocation. The Particle Swarm Optimization (PSO) algorithm and a fuzzy logic system are combined as a hybrid approach to reduce the problems in multi-cloud resource allocation. The selected optimization algorithms are known for their optimal global solutions and rapid convergence characteristics. While PSO exhibits robust optimization capabilities, the integration of fuzzy logic manages the uncertainties and imprecisions inherent in the dynamic nature of cloud environments. Fuzzy logic enhances the adaptability and robustness of decision-making, particularly in scenarios involving vague or uncertain data, thereby augmenting resource allocation's overall accuracy and effectiveness. This combined approach prioritizes QoS criteria and energy efficiency in cloud data center resource allocation. The principal contributions of this paper can be summarized as follows:
• Combining the PSO algorithm and fuzzy logic system for solving the resource allocation problem in cloud computing.
• Enhancing resource utilization and reducing the execution time of the resource allocation problem.
• Increasing user and provider utility and reducing the generational distance of the resource allocation problem.
This paper presents an efficient resource allocation model that takes advantage of the benefits of optimization algorithms. The remainder of the paper is organized in the following manner. Section II reviews the previous cloud resource allocation approaches. Section III describes the proposed cloud resource allocation mechanism. Experimental results are reported in Section IV. Section V concludes the paper. II. RELATED WORK Wang and Su [23] developed an algorithm for dynamically allocating resources among numerous cloud nodes operating in a big data context. Based on computing power and storage factors, this algorithm uses fuzzy pattern recognition to divide nodes and tasks into distinct levels. Therefore, a dynamic mapping between tasks and nodes is generated. Upon the arrival of a new task, only the nodes corresponding to the task level will join the bid. The algorithm uses a hierarchical approach to minimize communication traffic during resource allocation. Based on the results of experiments, the presented algorithm is more efficient regarding makespan and communication traffic than the Min-Min algorithm. The cloud-based disassembly proposed by Jiang, et al. [24] abstracts the disassembly factory as a disassembly resource, allowing it to be allocated to disassembly tasks. Based on this model, a cloud-based disassembly solution is developed that offers users a disassembly service tailored to their needs. Disassembly services are execution plans for tasks derived from scheduling and allocating disassembly tasks. The paper uses a mathematical model to describe the disassembly service formally by taking into account the uncertainty associated with disassembly processes and the precedence relationships between the tasks involved. Mousavi, et al.
[25] presented a hybrid approach to load balancing that integrates Grey Wolves Optimization (GWO) and Teaching-Learning-Based Optimization (TLBO) algorithms, aiming to maximize throughput by balancing virtual machine loads and avoiding a local optimum trap. The algorithm is evaluated on eleven benchmark functions, and comparisons are made with particle swarm optimization (PSO), biogeography-based optimization (BBO), and GWO. Cloud computing is characterized by elasticity, distinguishing it from other paradigms, such as cluster and grid computing. Based on the bio-inspired coral-reef optimization paradigm, Ficco, et al. [26] developed a meta-heuristic approach to cloud resource allocation. The resource reallocation schema was optimized using classic Game Theory based on cloud provider optimization objectives and customer requirements expressed through fuzzy linguistic SLAs. Chen, et al. [27] presented a self-adapting resource allocation methodology that consists of several feedback loops, each involving a PSO-based runtime decision algorithm and an iterative QoS prediction model. Each iteration of the algorithm improves QoS values. Future resource allocation operations are determined based on the predicted QoS value and the PSO-based runtime decision algorithm. The PSO-based algorithm iterates until no further improvements over the current resource allocations are suggested. The proposed method is evaluated on the RUBiS benchmark, highlighting a 20% improvement in QoS prediction accuracy compared to the current state of the art based on the same historical data. Singhal and Singhal [28] developed a Feedback-based Combinatorial Fair Economical Double Auction Resource Allocation Model (FCFEDARA) to determine provider genuineness based on the prices offered and feedback from customers. The proposed framework enables customers to access resources from different providers at the best prices and prioritizes genuine providers with good feedback over non-genuine providers with bad reviews. Providers and customers submit bundle bids and resource lists in the combinatorial double auction model. By assessing provider truthfulness, penalizing market spoilers, and giving preference to providers with positive feedback from customers, the proposed model takes care of the truthfulness of providers. Thakur and Goraya [29] introduced a novel metaheuristic-based resource allocation approach for load balancing in cloud environments. The goal is to effectively reduce the uneven distribution of workloads between physical machines in relation to their resource capabilities. Consequently, the over- or under-loading of active physical machines is prevented. To develop a suitable resource allocation strategy for load balancing, the dragonfly and PSO algorithms are combined. The proposed algorithm is superior to PSO, the dragonfly algorithm, and comprehensive learning PSO in determining optimal resource allocation. III. PROPOSED METHOD A new PSO algorithm is used in this paper to select the best member of the population. This algorithm outperforms existing multi-objective optimization techniques regarding calculation time, a reasonable distribution of non-dominated solutions, and Pareto front convergence. Moreover, fuzzy set theory is used in this paper to select the best adaptive solution. A.
A. Mathematical Formulation of the Problem

Each user requests a combination of resources with different attributes, the required number of resources, and a proposed cost to buy all the resources as a bundle. Each provider offers a combination of resources with different attributes, the number of offered resources, and a proposed cost to sell all the resources as a bundle. A bundle is thus a buy request covering all requested products or a sell offer covering all offered products. An attribute of a requested item is a number describing, for example, processor power, accumulator (storage) capacity, or bandwidth. The total number of requested resources should be equal to or less than the total number of offered resources, and every attribute requested by a user should be equal to or less than the corresponding attribute offered by the cloud provider for the resources to be assigned. After determining which provider can meet a user's requests, the cost the user should pay to the provider is determined by a costing model, which should be fair and beneficial for both the provider and the user.

Eq. (1) and Eq. (2) give the total number of items requested by a user and the total number of items offered by a provider, respectively. Eq. (3) determines the average cost per user by dividing the user's proposed cost by the total number of requested products. The average cost for each provider is the provider's proposed cost divided by the total number of provider items, as shown in Eq. (4). The average business cost of provider p and user u is determined from the user's and the provider's average costs using Eq. (5). The cost paid by user u to provider p is estimated from the number of assigned resources using Eq. (6). The earnings of each provider are given in Eq. (7) as the paid cost from Eq. (6) minus the provider's proposed cost: receiving more for its resources than the expected cost yields higher earnings for the provider. Likewise, each user's income in Eq. (8) equals the user's proposed cost minus the cost paid in Eq. (6): paying less than the proposed cost to rent the requested resources yields higher earnings for the user. The resource utilization rate in Eq. (9) is the ratio of the total number of items requested by a user to the total items offered by the provider. The objective functions in Eq. (10), Eq. (11), and Eq. (12) give the total earnings of the providers, the total income of the users, and total resource utilization, respectively. The objective space of the proposed algorithm is therefore three-dimensional, as shown in Eq. (13). Moreover, the requested resources should not exceed the provided resources, a constraint enforced by Eq. (14).
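The following sketch illustrates the costing model described above. It is not the authors' implementation; the function names are hypothetical, and the exact forms of Eq. (5) and Eq. (6), which the text only describes qualitatively, are assumed here to be the arithmetic mean of the two average costs and a cost proportional to the number of assigned resources, respectively.

# Illustrative sketch of the costing model (Eqs. (3)-(9)); names are
# hypothetical and the forms of Eq. (5) and Eq. (6) are assumptions.

def average_user_cost(user_proposed_cost, n_requested):
    # Eq. (3): user's proposed cost divided by the number of requested items
    return user_proposed_cost / n_requested

def average_provider_cost(provider_proposed_cost, n_offered):
    # Eq. (4): provider's proposed cost divided by the number of offered items
    return provider_proposed_cost / n_offered

def business_cost(user_avg, provider_avg):
    # Eq. (5): average business cost of provider p and user u, assumed here
    # to be the mean of the two average costs
    return (user_avg + provider_avg) / 2.0

def paid_cost(business, n_assigned):
    # Eq. (6): cost paid by user u to provider p, assumed proportional to
    # the number of assigned resources
    return business * n_assigned

def provider_earning(paid, provider_proposed_cost):
    # Eq. (7): earnings rise when the provider receives more than it asked for
    return paid - provider_proposed_cost

def user_income(user_proposed_cost, paid):
    # Eq. (8): income rises when the user pays less than it offered
    return user_proposed_cost - paid

def utilization(n_requested, n_offered):
    # Eq. (9): ratio of requested items to offered items
    return n_requested / n_offered

Summing provider_earning, user_income and utilization over all matched user-provider pairs then yields the three objectives of Eqs. (10)-(12).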
B. The Proposed Method for Cloud Resource Assignment

The proposed multi-objective method is based on the PSO algorithm. Here, the algorithm and the way it is made multi-objective are explained; the steps of the proposed method are then presented.

1) Particle swarm optimization algorithm: The PSO algorithm employs a population-based stochastic process. Particles move through the search space of an optimization problem, and their positions represent potential solutions. The particles search for better positions by modifying their velocities under rules derived from behavioural models of flocking birds. The PSO algorithm is known for its adeptness at approaching near-optimal solutions, a characteristic pivotal in addressing resource allocation concerns within cloud computing environments. The time complexity of the PSO algorithm is typically O(n*N), where n is the number of iterations and N is the population size (the number of particles in the swarm). Its space complexity is generally O(N*D), with D the number of dimensions of the problem space.

2) Multi-objective particle swarm optimization algorithm: In MOPSO, a concept called the hall of fame, or repository, is used: among the best solutions found so far, the non-dominated ones are stored in a repository whose members approximate the Pareto front. When a particle in MOPSO moves, it selects one repository member as its leader, while its personal best remains the best position it has itself experienced. Because the solutions are distributed over a multi-dimensional objective space, the space is first partitioned into a grid of cells, and the cells containing repository members are identified. To maintain diversity, priority is given to the less populated cells: one of the less congested cells is selected using the Roulette Wheel Selection method, and then one of that cell's members is chosen as leader at random. Each particle moves using Eq. (15) and Eq. (16).

The new position is then compared with the particle's best memory: 1) if the new position dominates the best memory, the new position replaces it; 2) if the best memory dominates the new position, nothing is done; 3) if neither dominates the other, one of the two positions is chosen as the best memory at random. The non-dominated members of the current population are the elites, and the quality of the repository is then controlled. Memory constraints limit the repository size, so if the number of repository members exceeds the calculated capacity, members must be removed. Removal is again performed with the Roulette Wheel Selection method, but this time priority is given to the more crowded cells, since removing members from dense regions harms the diversity of the solutions least. Finally, because a multi-objective problem requires the search to cover the entire objective space while PSO converges quickly, a mutation operator is added to slow down convergence and ensure the whole space is thoroughly examined [03].
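A minimal sketch of the leader-selection and movement steps described above is given below. It is illustrative rather than the authors' code: get_cell, the inverse-power cell weighting, and the standard PSO update assumed for Eq. (15) and Eq. (16) are all hypothetical choices.

# Illustrative sketch of MOPSO leader selection and particle movement;
# get_cell, the weighting scheme and the standard PSO update assumed for
# Eq. (15) and Eq. (16) are hypothetical choices, not the authors' code.
import random
from collections import defaultdict

def select_leader(repository, get_cell, beta=2.0):
    # Group repository members by the grid cell of their objective vector
    cells = defaultdict(list)
    for sol in repository:
        cells[get_cell(sol)].append(sol)
    cell_ids = list(cells)
    # Less crowded cells receive larger weights to preserve diversity
    weights = [1.0 / len(cells[c]) ** beta for c in cell_ids]
    chosen = random.choices(cell_ids, weights=weights, k=1)[0]
    # The leader is then drawn at random from the chosen cell
    return random.choice(cells[chosen])

def move_particle(x, v, pbest, leader, w=0.4, c1=1.0, c2=1.0):
    # Velocity (Eq. (15)) and position (Eq. (16)) updates, assuming the
    # standard PSO form with the paper's coefficients w = 0.4, c1 = c2 = 1
    r1, r2 = random.random(), random.random()
    v = [w * vi + c1 * r1 * (pi - xi) + c2 * r2 * (li - xi)
         for vi, xi, pi, li in zip(v, x, pbest, leader)]
    x = [xi + vi for xi, vi in zip(x, v)]
    return x, v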
3) The proposed multi-objective particle swarm optimization algorithm through crowding distance: A new particle swarm optimization algorithm, MOPSO-CD, is proposed in [03]. A mutation operator is used in this algorithm, as in MOPSO. Moreover, the crowding distance approach used in this algorithm was first proposed for, and used in, the NSGA-II algorithm; it is used here to find the optimal solutions. The crowding distance of each solution approximates the density of the solutions around it. The studied problem has three objective functions. First, the objective function values in each dimension are sorted in decreasing order. Then, for each of the problem's objective functions, the previous and next neighbouring points of each solution are identified. The fractions of each dimension covered by the ith member of the population in the first, second, and third objective functions are obtained by Eqs. (17), (18), and (19), and the crowding distance by Eq. (20).

It is beneficial if the ith member of the population covers a larger area; hence, non-dominated solutions with a higher crowding distance receive higher priority in the PSO algorithm. The non-dominated solutions in the external repository are sorted by decreasing crowding distance. In each step, one of the top 10% of solutions is then selected at random as a guide. If the repository is full, one of the bottom 10% of solutions is selected at random and replaced by the new non-dominated solution found in the last iteration.

Algorithm 1 shows the different steps of the proposed algorithm. Before the start of its main loop, the users' requests and the providers' offerings, including the attributes, numbers, and proposed costs, are received. Moreover, in the proposed algorithm, the particles' positions encode candidate assignments of the providers' offered resources to the users' requests.

Algorithm 1. The proposed resource allocation algorithm
1. Initialize the population; for each particle Xi:
   1.1. Initialize the position of particle Xi
   1.2. Set the particle velocity vi to zero (vi = 0)
   1.3. Evaluate the fitness value of particle Xi
   1.4. Set the best position of each particle as Pbesti = Xi
   1.5. Update the global best position gbest with the best particle Xi
2. Initialize the number of iterations: it = 0
3. Save the non-dominated solutions among the Xi in rep
4. Begin iteration:
   4.1. Calculate the crowding distance for each non-dominated solution in rep
   4.2. Sort the non-dominated solutions in rep by crowding distance in decreasing order
   4.3. For each particle Xi from 1 to nPop:
        4.3.1. Randomly select a guide for particle Xi from the top 10% of the sorted rep and update gbest
        4.3.2. Compute the new velocity of each particle using Eq. (15) with c1 = 1 and c2 = 1
        4.3.3. Calculate the new position of each particle using Eq. (16)
        4.3.4. Keep the variable values of Xi within the determined limits; if Xi exceeds the limits, reverse its velocity by multiplying it by -1
        4.3.5. Apply a mutation operation to Xi
        4.3.6. Evaluate the objective function of Xi
   4.4. Update the non-dominated solutions in rep:
        4.4.1. Calculate the crowding distance for each non-dominated solution in rep
        4.4.2. Sort the non-dominated solutions in rep by crowding distance in decreasing order
        4.4.3. Randomly replace one of the bottom 10% of the sorted rep with the new solution Xi
   4.5. Update the best position of each particle if the new position dominates the position stored in memory
   4.6. Increment the iteration count it
5. Repeat steps 3 to 4 until the maximum number of iterations is reached

4) The fuzzy-based approach for the adaptive solution: Multi-objective optimization algorithms do not return a single answer; a set of non-dominated solutions approximating the Pareto front is obtained. If a final answer is required, one of the solutions must be selected as the resource allocation objective. Different methods exist for this purpose, such as fuzzy set theory: a membership value is computed for each objective of every non-dominated solution (Eq. (23)), and a normalized membership function is obtained for each non-dominated solution using Eq. (24). The best adaptive solution is the answer with the largest normalized membership value.
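The following is a sketch of the crowding-distance and fuzzy selection steps above. It is illustrative only: the linear membership assumed for Eq. (23) and the normalization assumed for Eq. (24) are common forms rather than the paper's exact equations, and the objectives of Eqs. (10)-(12) are treated as maximized.

# Illustrative sketch of crowding distance (Eqs. (17)-(20)) and fuzzy
# selection (Eqs. (23)-(24)); the membership and normalization forms
# are assumptions, not the paper's exact equations.
import numpy as np

def crowding_distance(F):
    # F: (n, 3) array of objective values of the non-dominated solutions.
    # Per objective, each interior member covers the span between its
    # neighbours after sorting; boundary members are kept unconditionally.
    n, m = F.shape
    d = np.zeros(n)
    for j in range(m):
        order = np.argsort(F[:, j])
        span = float(F[order[-1], j] - F[order[0], j]) or 1.0
        d[order[0]] = d[order[-1]] = np.inf
        for k in range(1, n - 1):
            d[order[k]] += (F[order[k + 1], j] - F[order[k - 1], j]) / span
    return d

def fuzzy_best(F):
    # Linear membership per objective (objectives assumed maximized),
    # normalized over the archive; the best adaptive solution maximizes it.
    f_min, f_max = F.min(axis=0), F.max(axis=0)
    mu = (F - f_min) / np.where(f_max > f_min, f_max - f_min, 1.0)
    norm = mu.sum(axis=1) / mu.sum()
    return int(np.argmax(norm))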
IV. SIMULATION

The proposed algorithm is simulated on a Microsoft Windows system using Matlab. Experiments are divided into three categories: Small-Scale (SS), Middle-Scale (MS), and Large-Scale (LS). The total numbers of users and providers for the three experiments are (20, 5), (15, 50), and (30, 100), respectively. The population size is 75, the maximum number of iterations is 100, and the personal and collective learning coefficients are both 1. The inertia weight is W = 0.4, and the mutation rate is mu = 0.5. The resource attributes are as follows:

- Processor power and speed, measured in Million Instructions per Second (MIPS), in the range [220, 1000].
- Memory, the amount of memory in MB, from the set [256, 512, 1024, 2048].
- Accumulator (storage), the amount of storage in MB, in the range [1500, 40000].
- Bandwidth, in bits per second, in the range [120, 1000].
- Proposed cost, expressed as cost units per million instructions, in the range [0.012, 0.1046].

A. Performance Measures

The performance measures of this work are as follows:

- Total earnings of the providers: following Eq. (10).
- Total incomes of the users: following Eq. (11).
- Generation distance: following the model presented in [26].
- Distance: following the model in [27].

B. The Experimental Results

Two distinct experiments are conducted in this study. Each experiment compares the performance of the suggested method with that of other methods.

1) First experiment: In this experiment, the performance of the proposed method is compared with the NSGA-II [03] and MOPSO [03] algorithms in terms of the quality of the solutions, generation distance, distance, and execution time. Tables II to IV compare the three algorithms on the previously explained measures for the three types of experiments. The results for the providers' total earnings, the users' total income, the total resource utilization, and the generation distance of each algorithm across the three types of experiments are shown in Fig. 2 to Fig. 5; the performance of the proposed algorithm is superior to that of both baseline algorithms. Fig. 6 and Fig. 7 illustrate the distance and execution time results for the mentioned algorithms. The proposed algorithm achieved an average improvement of 51% in total resource utilization, 50% in generation distance, and 16.5% in execution time compared with MOPSO.

TABLE III. STATISTICAL COMPARISON FOR MIDDLE-SCALE

2) Second experiment: Here, the proposed method's performance is compared with the Artificial Fish Swarm Optimization (AFSO) algorithm [38]. This experiment examines the proposed method's time, cost, and energy efficiency. Fig. 8 illustrates the performance of the proposed method: as the number of tasks increases, the execution time also increases. Fig. 9 analyses performance in terms of cost; system performance is affected by the maximum cost. Fig. 10 illustrates performance in terms of energy consumption; an increase in energy also means an increase in cost. The proposed algorithm improved the total execution time by 22%, cost by 9%, and energy consumption by 21% compared with AFSO.

The performance measures, expressed through the mathematical models above, capture key indicators such as total provider earnings, user incomes, resource utilization, generation distance, and execution time. In the first experiment, the PSO-fuzzy algorithm's performance is benchmarked against the NSGA-II and MOPSO algorithms, showing superior results across these metrics: an average improvement of 51% in resource utilization, a 50% enhancement in generation distance, and a substantial 16.5% reduction in execution time compared to MOPSO. In the second experiment, the PSO-fuzzy methodology is pitted against the Artificial Fish Swarm Optimization (AFSO) algorithm; the results highlight a marked improvement in execution time of 22%, in cost efficiency of 9%, and a notable 21% reduction in energy consumption compared to AFSO. These findings underscore the robustness and efficiency of the PSO-fuzzy algorithm in optimizing resource allocation and cost management across varying scales and scenarios within cloud computing environments.
V. CONCLUSIONS

This paper proposed an optimal resource allocation method combining the PSO algorithm and a fuzzy logic system, based on the presented time and cost models, in the cloud computing environment. The time model includes receiving, processing, and waiting times; costs associated with processing and receiving are included in the cost model. The PSO algorithm was applied to the cloud environment for optimal resource allocation, and the fuzzy logic system was used to evaluate the time and cost models. The proposed algorithm's efficacy was demonstrated through a series of carefully designed experiments. In the initial experiment, comparative analysis against established algorithms, namely NSGA-II and MOPSO, revealed the superiority of our method concerning the providers' total earnings, the users' total income, and resource utilization. Subsequently, the second experiment showcased the algorithm's superior performance in execution time, cost-effectiveness, and energy consumption when compared with the AFSO algorithm. These results establish the proposed algorithm's effectiveness in both performance and efficiency metrics and affirm its suitability for scheduling within cloud computing systems, surpassing existing methodologies. The success of this approach not only underscores its potential in addressing the resource allocation challenge but also marks a significant stride toward optimizing cloud computing operations. While these results are promising, future work should pursue further validation across a more extensive range of scenarios and consider real-world implementations to solidify the algorithm's robustness and applicability in diverse cloud environments.
Locating infrastructural agency: Computer protocols at the finance/security nexus

How can we make sense of tensions and contradictions in digitally mediated practices of anonymity and identification? This article calls for foregrounding computer protocols as key sites for locating how agency amongst increasingly complex sets of relations between human and non-human actors is impacting contemporary (in)security. We distinguish agency within and between contemporary finance/security infrastructures by tracing the development, application and updating of a particular set of computer protocols - blockchains. Locating agency at the site of these and other computer protocols, we argue, exposes security politics that have largely remained overlooked in the ongoing engagement of critical security studies with science and technology studies. Widening engagements with security devices, this article also broadens the interdisciplinary engagements of critical security studies with new media and software studies.

Introduction

In 2014, Islamic State of Iraq and the Levant (ISIS) announced it would accept donations in the leading 'cryptocurrency', Bitcoin. This was surprising given that Bitcoin's pseudonymous features give everyone, including security agencies charged with countering the financing of terrorism, an overview of all Bitcoin transactions. While granting some degree of anonymity to senders and receivers of this electronic token, Bitcoin's transparent public ledger also provides the data employed by security agencies around the world to tie cryptocurrency transactions to individual users with increasing precision. Such identification practices have unsurprisingly been decried by privacy advocates as 'a prime example of a tool where risks, harms and costs far outweigh the potential benefits resulting in a lack of proportionality or necessity, and where the pursuit of private profit ends up taking precedence over the public good' (Monero Policy Working Group, 2021). Yet, in spite of confiscations of cryptocurrency and criminal convictions of their users,1 Bitcoin and even its more privacy-orientated competitors have continually been mobilized to finance a wide range of activities with security implications, from the 'Freedom Convoy' that surrounded the Canadian capital of Ottawa in 2022, to Ukrainian resistance, to Afghan, Iranian, North Korean and Russian attempts to escape sanctions (Ashraf and Nelson, 2022; Campbell-Verduyn and Giumelli, 2022; Pessarlay, 2022).
How can critical security studies make sense of tensions and contradictions in increasingly digitally mediated practices of privacy and identification? This article advances understanding of contemporary security politics by locating what we call infrastructural agency in sociotechnical activities occurring at key, but overlooked, sites of contemporary activity: computer protocols. It argues that two forms of infrastructural agency shape tensions in privacy and identification practices in the case of cryptocurrencies like Bitcoin that build on blockchain protocols. First, locating what we call infrastructural agency points to continuities and changes in security practices emerging from human and non-human relations centred around computer protocols. Second, identifying interstructural agency helps us understand tensions in security practices stemming from relations between infrastructures. Together, locating inter- and infrastructural agency exposes security politics that have so far remained overlooked in critical security studies, which have instead usefully foregrounded algorithmic and information communication technologies.

In their 'encounter with the marvellous minions', critical security studies have very productively 'opened up a wide world of infrastructural politics to exploration, sensitizing security studies scholars to the wondrous worlds of codes, algorithms, protocols and other security devices' (Leander, 2019: 25). Critical security studies' scrutiny of security devices has focused largely on algorithms. Foregrounding computer protocols, we argue, deepens understanding of how relations between devices like algorithms and their human creators and users impact security practices in particular ways. Linking science and technology studies' conceptions of infrastructures as sociotechnical relations with insights from fields of study that have foregrounded protocols as sites of political analysis (Cramer and Fuller, 2008; Galloway, 2004), this article broadens critical security studies' encounters with the 'marvellous minions' that Leander (2019) identifies and widens interdisciplinary encounters beyond critical security studies' productive waves of recent engagements with science and technology studies (Bellanova et al., 2020).
We proceed in three steps. A first step introduces computer protocols as entry points for locating agency in infrastructures that are increasingly digitally mediated as the internet has expanded. We situate our analysis broadly, tracing how the (re)design of internet protocols impacted practices of privacy and identification in the earlier 1.0 and existing 2.0 versions of the World Wide Web. In a second step we turn to examining how the creation and updating of blockchain protocols, starting with Bitcoin in 2009, have continued to shape security practices as the so-called 'Web3' emerges as an ever-broadening space of technological experimentation beyond just cryptocurrencies. Zooming in on the sociotechnical relations shaping the development, maintenance and updating of blockchain protocols, we argue that critical security studies can make sense of evolving tensions in security practices emerging from what we call infrastructural agency. Zooming out, meanwhile, on relations between an emergent blockchain-based infrastructure and a longer-existing surveillance infrastructure elucidates how privacy and identification practices are being (re)shaped in what we call interstructural agency. In a third and final step, we reiterate how locating infrastructural agency at the site of computer protocols sheds light on evolving tensions in contemporary security practices. Widening critical security studies' engagement with 'marvellous minions' that include but go beyond algorithms, we conclude, broadens the productive interdisciplinary interactions of critical security studies to new media and software studies.

Computer protocols as sites of security politics

A protocol, generally, is a form of 'correct or proper behavior within a specific system of conventions' (Galloway, 2004: 7). There are cultural protocols for introducing and greeting one another, diplomatic protocols for how relations between states are to be 'properly' carried out, and organizational protocols for acceptable behaviours between staff and employees in institutions. All protocols set out particular paths for information to be communicated, transmitted and acted upon that have varying implications for (in)security.

Critical security studies are familiar with how Know-Your-Customer (KYC) protocols specify proper conduct for financial institutions, like banks, to identify and share information on their clients in supporting growing international Anti-Money Laundering and Countering the Financing of Terrorism (AML/CFT) efforts since the 1980s (Amicelle, 2011; Lagerwaard, 2020; Marlin-Bennett, 2016). Growing attention within critical security studies has been given to secrecy protocols specifying standards to maintain confined communication of information and/or the anonymity of those transmitting it (Bosma et al., 2020; Walters, 2020). Technologies, human practices and sociotechnical relations all play crucial roles in the (re)formation of these protocols. Yet, computer protocols as sites of agential activity in security politics have remained underappreciated.
Computer protocols are digitally encoded standards defining how information is transmitted in and across networks of computers and human users. They shape the sending and receiving of digitized information in ways that can determine possibilities for often-conflicting security practices, including those of privacy and surveillance in such a digitally mediated world. Analysing computer protocols as sites where digitally mediated practices of privacy and identification are (re)shaped brings together and advances two intersecting debates in critical security studies. First are debates over agency in the ongoing 'technologization' of security (Kaufmann, 2016; see Orlando, 2020). Second, and relatedly, are debates over the human and non-human relations 'mobilizing' (in)security in sociotechnical infrastructures (Hönke and Cuesta-Fernandez, 2018). Tracing the (re)formation of standards of behaviour amongst computers and their users brings together and advances critical security studies debates on techno-agency and infrastructural politics. Responding to calls to consider the agential qualities of infrastructures (De Goede, 2020) and to situate techno-agency in wider 'infrastructural geopolitics' (De Goede and Westermeier, 2022), we widen the ongoing interdisciplinary encounters of critical security studies to new media and software studies.

Techno-agency: What do computer protocols do?

Agential qualities - who and what impacts the world - have increasingly been associated in critical security studies with relations between technologies, their developers and their users. Moving from an overly simplified stress on agency as the 'capacity to act' defined a priori (Orlikowski, 2007: 1438), technological agency is being reconsidered as an iterative process, one in which humans construct technologies that impact their users and designers through unforeseen uses and effects. Yet, attempts to attribute agency to relational processes remain highly 'context dependent' (Hoijtink and Leese, 2019: 11). Their specificity, along with the growing complexity of multiple-use technologies like Bitcoin, limits more general claims as to where 'techno-agency' might be found, assessed and potentially countered. Locating agential qualities in activities surrounding the development of computer protocols, we contend, widens possibilities for linking specific sociotechnical relations to their wider 'emergent security' implications while avoiding 'the centrality of human intentionality as a basis for constructing enmity, and by acknowledging the role of codes/software unexpected paths' (Fouad, 2022: 768).
The politics of computer protocols were first examined in software studies, the interdisciplinary field that Fouad (2022) and other critical security studies scholars have engaged (Leese et al., 2022). Here, activities involved in the (re)development of computer protocols are agential in shaping 'operations that are allowed and assigned a priority or blocked' (Cramer and Fuller, 2008: 151). The 'qualitative discontinuity' resulting from the redevelopment of the Transmission Control Protocol/Internet Protocol (TCP/IP), for instance, long influenced possibilities for discriminating between types of internet traffic (Fuller and Goffey, 2009: 150-151). The (re)formation of such behavioural standards for transmitting packets of digital information,2 which also include the likes of the Hypertext Transfer Protocol (HTTP), has continually impacted the ability of internet users to access and communicate information on the World Wide Web.

Yet despite their 'diffuse and extensive' presence, computer protocols and their agential qualities are often relegated to what Fuller and Goffey (2012: 1) call grey media: 'technologies that are operative far from the more visible churn' and that are frequently invoked and recognizable 'yet rarely recognized or explored', including by scholars. What software studies call for, then, is to expose more widely the agential properties of such media, amongst others by tracing their facilitation of

activity that is produced when devices, practices, protocols and procedures, gadgets and applications, mesh and synchronize simultaneously [creating] vast black-boxed or obscurely grayed-out zones, taken for granted, more or less stabilizing and stabilized artifacts, that permit the abstract social relations characteristic of 'frictionless' communication to take root. (Fuller and Goffey, 2012: 4)

Building on software studies, critical security studies can draw out the security politics and implications of agential relations stemming from the development, maintenance and updating of computer protocols. A first set of protocol-level activities with agential effects on security involves the development of initial standards for how information is transmitted in and across networks of computers and human users. To relate to one another, computers 'must speak the same language' (Galloway, 2004: 12; emphasis in original). Without this shared language, computer networks remain 'noninteroperable'. Digital interoperability tends to coalesce around shared norms and expectations, including those of privacy and identification.

Another set of agential effects on security practices stemming from the (re)formation of computer protocols can be found in relations with the network and application 'layers' of what have been called 'stacks' (see generally Bratton, 2016). Figure 1 illustrates the schematic depiction of hierarchies among 'layers' of digital 'stacks'. Despite the oversimplification, the stack metaphor draws useful attention to two specific sets of relations shaped by protocol-level activities: network formation based on a computer protocol; and application development as the tip of digital stacks that get 'adopted, implemented, and ultimately used by people around the world' (Galloway, 2004: 7).
The key point here is that neither digital networks nor the applications based upon them can be built, run or reformed without the secure and consistent communication standards developed and maintained at the site of computer protocols. The security practices emerging in and across networks and surfacing with particular everyday applications - like cryptocurrencies - are all shaped by activities at the underlying protocol 'layer'. The 'protocol politics' (DeNardis, 2009) at stake here have been foregrounded in studies tracing how the maintenance and updating of internet protocols have shaped the practices of the networks and applications of the World Wide Web. To set the stage for our analysis of the internet-based blockchain protocols emerging since 2009, we proceed by briefly surveying how activities in (re)developing internet protocols have had agential impacts on privacy and identification practices.

Firstly, the (re)development of internet protocols has had agential effects on security politics in the networks of computers and users building on top of them. The original development of the Internet Protocol (IP) in the 1970s involved specific sociotechnical relations between mainframes and graduate student researchers at the United States Department of Defense. The Advanced Research Projects Agency (ARPA) is often linked to the national security origins of the internet as a network-of-networks intended for information transmission both within the USA and between its allies in the context of the Cold War. Updates to the IP in the late 1980s and 1990s were however demilitarized and undertaken via email lists of the Internet Engineering Task Force and civilian organizations composed largely of white, middle-aged, university-educated men. Nisha Shah (2009: 139) notes how these developers all possessed 'a certain level of technical proficiency and knowledge', along 'with the time and willingness to participate'. These concentrated communities of 'code rebels' (Levy, 2001) leveraged cryptographic technologies for securing communication in increasingly commercial activities across a network-of-networks that was privatized after the Cold War.3

Concentration in the communities designing and maintaining the internet's underlying protocols was later echoed in the growing monopolization of the internet following the millennial dot-com bubble burst. The eventual dominance of Microsoft and the GAFA (Google, Amazon, Facebook, Apple) Big Tech multinational firms renewed fears of vulnerabilities in informational security practices (Barwise and Watkins, 2018; Lovink, 2022). These were exemplified most prominently by reoccurring technical outages. In the case of Facebook, a major outage rendered what millions in South Asia and elsewhere consider to be the internet inaccessible for hours (Isaac and Frenkel, 2021). This specific episode highlights a first way in which activities at the site of computer protocols help to locate security politics in the (re)formation of digital networks, including that of the internet as the network-of-networks (DeNardis, 2012).
Secondly, sociotechnical activity at the site of computer protocols such as the IP has impacted the security politics of applications built on standards for relations between computers and their users. Internet protocols shaped the various iterations of the Web, the key application built on top of the network-of-networks. The Web 1.0 of the early 1990s allowed for relatively anonymous information communication practices across relatively decentralized networks-of-networks. By contrast, the Web 2.0 that emerged in the 2000s and 2010s became governed by concentrated 'walled gardens' (O'Reilly, 2009), the platforms maintained by those dominant USA-based technology firms. These Big Tech firms banded together in 2017 to develop the Content Incident Protocol (CIP), which sought to standardize the communication of user identification information in order to curb 'a real-world terrorism or violent extremist event' (GIFCT, n.d.). The CIP built on earlier standards introduced on top of the TCP/IP, such as the Domain Name System (DNS) protocol that translated website names into computer-identifiable addresses. What the development of these new protocols shows is how the openness and shared communication standards set by communities of TCP/IP developers in the 1980s progressively gave way to new standards for user identification that have underpinned the Web 2.0. These standards were increasingly challenged after 2009 as privacy-centric blockchain protocols emerged and became part of what, more than a decade later, underpins the Web3 applications traced in the next section.

In sum, the evolution of the internet's security politics can usefully be understood by foregrounding the sociotechnical development and maintenance of computer protocols like the IP. Privacy and identification practices in the network-of-networks that is the internet have all been shaped by the sociotechnical relations involved in creating, maintaining and updating standards of behaviour. Critical security studies' understanding of techno-agency is advanced by highlighting the relational activities evolving at the sites of computer protocols, especially when linked to wider efforts to conceive infrastructures relationally.
Locating agency in infrastructural relations

Countering common-sense notions of infrastructure as objects or static 'things' are science and technology studies-inspired accounts of infrastructures as evolving sets of relations between social actors and material objects, accounts that have gained currency in critical security studies (Bellanova and De Goede, 2022; Glouftsios, 2020; Nolte and Westermeier, 2020; O'Grady, 2021). An infrastructural turn has usefully built on earlier critical security studies critiques of instrumentalist and functionalist efforts to designate critical infrastructures as 'objects in need of protection' (Aradau, 2010; Brassett and Vaughan-Williams, 2015; Dunn and Kristensen, 2020). The agency of humans positioning some infrastructures as 'critical' is limited to states and corporations acting on or over infrastructures that these actors deem to need 'protection'. This constrained version of agency overlooks the wider impacts of seemingly apolitical relations both within communities deemed worthy of protection and between such communities and those deemed unworthy of protection. Designations of critical infrastructure thus disentangle from one another the relations marked as vulnerable and those passed over, leading to profound disconnections, for instance separating 'critical' sewage systems from wider ecological systems. Efforts to designate 'critical infrastructures', in other words, overlook sociotechnical entanglements both within and between infrastructures understood relationally.

Critical security studies can further the understanding of digitally mediated security practices by repurposing the critical in critical infrastructures to focus on key activities within and between sets of sociotechnical relations. Rather than simply stressing limits on how a government or corporation acts on infrastructure, critical security studies can tease out the impacts on security practices of relations between groups of human and non-human agents, including computer developers and engineers, as well as the material cables, wires and computers underpinning increasingly digital infrastructures. Within this widened focus, computer protocols form a productive entry point and site for locating more specifically which, amongst myriad sociotechnical relations within and between infrastructures, are those most relevant to the evolution of security practices. Probing computer protocols helps pinpoint more precisely those relations underpinning digitizing infrastructures that may 'have more currency than others' (Czarniawska, 2004: 783). Put differently, foregrounding computer protocols as sites of activity helps locate agential qualities within the many (sets of) relations whose formation and reformation potentially shape security practices, such as those of privacy and identification. It can provide insight into how, when and where key sociotechnical relations impact digitally mediated security practices, for instance by tracing who is ultimately responsible for how 'thinking infrastructures' achieve any or all of the following:

Configure entities (through tracing, tagging); organize knowledge (through search engines); sort things out (through rankings and ratings); govern markets (through calculative practices, including algorithms) and configure preferences (through valuations and recommender systems). (Kornberger et al., 2019: 2)

Locating which sociotechnical relations enable (in)security, without reifying and returning to more conventional and instrumental understandings of agency on or over infrastructures, can benefit
from new media and software studies of the breakdowns, stoppages or cessations that emerge from 'infrastructures [that] are plugged in other infrastructures' (Pipek and Wulf, 2009: 454). Repurposing the 'critical' from the notion of 'critical infrastructures' can also benefit from distinguishing between inter- and infrastructural agency.

What we designate as infrastructural agency involves the impacts of sociotechnical relations at the site of protocols on the security politics of the networks and applications layered on top of standards of proper behaviour. Relations at the site of computer protocols can be agential not only within an infrastructure but also in the networks and applications centred around different standards of behaviour. Interstructural agency, meanwhile, involves the impacts that relations at the sites of different computer protocols have on one another. The 'installed base' (Star, 1999: 381) of an infrastructure is not entirely 'hard-wired' (De Goede, 2020: 356). Sociotechnical relations shift - often subtly - as alternative bundles of sociotechnical relations arise and interact with those that already exist. In the case of activity at the site of computer protocols, Beuster et al. (2022) note how '[o]nce operationalised in infrastructures, protocols act immanently conservative and upgrades transcending its encoded rules, such as new functionalities, often must be invoked from the outside'. This understanding is useful for balancing the continuities invoked by the 'hard-wiring' of sociotechnical relations with the persistence of possibilities for change emanating from experimentation with other ways of wiring the world (Morozov, 2012).

The next section traces the emergence of the so-called 'Web3' infrastructure based on blockchain protocols, whose continuing entanglement with the internet protocols underlying the Web 2.0 has shaped security practices since 2009. We argue that locating infrastructural agency in the formation of, and ongoing interaction amongst, infrastructures marked by differences and contestations helps us to understand evolving tensions and contradictions between identification and privacy practices at the contemporary finance/security nexus.

Locating infrastructural agency within and between finance/security infrastructures

This section traces the sociotechnical relations that have had key impacts on digitally mediated privacy and identification practices since 2009. Locating infrastructural agency in the emergence of a novel privacy infrastructure, and interstructural agency in the ongoing relations between new attempts to enhance anonymity and a countervailing identification infrastructure based around KYC protocols, we argue, helps make sense of continuities and changes at the contemporary finance/security nexus.
The finance/security nexus consists of two 'realms' typically considered as separate, but which meet in 'profound historical and conceptual interrelations that makes one unthinkable without the other' (Westermeier, 2019: 111). Both mundane practices, such as the surveillance of digital transactions, as well as epochal events like the 2007-2008 global financial crisis are regarded in studies of this nexus as reflective of the 'cross-colonization of finance and security logics' (Amicelle, 2017: 119). The finance/security nexus has evolved over centuries as risk management practices underpinning sectors like insurance have involved and affected both state and human security (Lobo-Guerrero, 2012, 2016). The continuing motion of longer temporal threads is emphasized in the science and technology studies-inflected notion of finance/security infrastructure, where a stress on an 'installed base' (Star, 1999: 381) of sociotechnical relations evolving over time links colonial-era inequities (De Goede, 2020) with contemporary cross-border digital systems maintained by bodies like the Society for Worldwide Interbank Financial Telecommunication (SWIFT).4

In tracing the implications for security practices of activities that (re)develop a particular set of protocols - blockchain - we respond to prompts to further draw out the agentic qualities of infrastructural relations between technical objects and practices (De Goede, 2020), as well as related calls to probe how an 'ever-increasing appetite for financial data proceeds in the name of security' (Westermeier, 2019: 116). The financial data at stake stem from the generation, retention and circulation of identity information on users of applications based on blockchain protocols in what we call a 'privacy infrastructure'.5 Locating agency in sociotechnical relations at the site of this particular set of computer protocols highlights both continuities and changes at the contemporary finance/security nexus.

As the first subsection traces, infrastructural agency in the creation and maintenance of blockchain protocols has enabled varying degrees of anonymity, disintermediation and immutability. Locating this first form of infrastructural agency helps draw out the security politics stemming from sociotechnical relations occurring in the formation of this protocol, as well as consider the impacts of these relations on the networks and applications that together have formed a novel privacy-focused finance/security infrastructure since 2009.

A second subsection then locates interstructural agency in the relations between this blockchain-based, privacy-orientated infrastructure and the far longer-standing identification-focused infrastructure. Attempts to address the various insecurities prompted by the former, we show, impact and are impacted by the ongoing evolution of the latter. We draw out continuities and changes in identification and privacy practices at the finance/security nexus from official documents of international finance/security agencies, primary documents such as the white papers that spell out the standards set out in protocols like Bitcoin, as well as English-language reports from the technology industry news sites Coindesk and CoinTelegraph.
Agency within a blockchain-based privacy infrastructure

Infrastructural agency stems from both 'moments of creation' and subsequent updates to protocols like Bitcoin. Each moment involves sociotechnical relations that materialize specific security practices, as we show in the case of a novel finance/security infrastructure emerging following the global financial crisis of 2007-2008. Infrastructural agency can be located in two sets of infrastructural relations. First are the relations between actors and objects that form protocols such as Bitcoin and its competitor Ethereum. Second, infrastructural agency is also located in the relations between these protocols and the other 'layers' that together form an infrastructure: networks and applications (see Figure 1 above). Locating infrastructural agency within an emerging blockchain-based finance/security infrastructure, we argue, helps make more nuanced sense of continuity and change in the security politics of privacy and identification. This subsection shows how instabilities and illicit activities facilitated by sociotechnical relations within this new privacy-orientated infrastructure are neither entirely novel nor merely hard-wired. Tracing infrastructural agency, we instead highlight an evolving mix of continuity and change in key practices at the contemporary finance/security nexus.

We begin with Bitcoin. A set of very specific sociotechnical relations between computer programmers and algorithmic, cryptographic and time-stamping technologies underpinned the genesis of the Bitcoin protocol, carrying important implications for identification and privacy practices. In 2008, at the height of the most severe financial crisis since the Great Depression, a still unidentified person(s) using the pseudonym Satoshi Nakamoto circulated on a cryptography newsletter a white paper calling for the development of a payment protocol, or 'cryptocurrency' (Nakamoto, 2008).6 Nakamoto released a first version of the Bitcoin protocol to a geographically dispersed group of coders who experimented with the proposed design to launch the Bitcoin protocol in 2009. The 'Bitcoin standard' (Ammous, 2018) specifies harnessing cryptographic, time-stamping and other existing technologies to exchange digital tokens through privacy-orientated means. Greeted cautiously in cryptographic communities, the initial version of the Bitcoin protocol began to be improved as groups of pseudo-anonymous coders exchanged money-like tokens and collaborated further with one another online.

Two overlapping security implications immediately emanated from the specificities of the sociotechnical relations involved in the creation of the Bitcoin protocol. First was the emphasis on privacy embedded in the reliance on cryptography to ensure a high degree of difficulty in identifying exactly who exchanged digital tokens with whom. Second was a complicating of identification practices through a stress on decentralization or 'distribution' of authority. Not only has the author(s) of the Bitcoin white paper still not been identified, despite widespread efforts, but the decentralized standard for the exchange, verification and publication of digital transactions meant that no one particular person remained 'in charge'. This lack of identification at the protocol level has impacted the evolution of privacy and identification practices in the wider finance/security infrastructure emerging around Bitcoin.
The first key way in which sociotechnical relations at the 'birth' of the Bitcoin protocol were agential was in shaping the security practices emerging in the networks of this alternative finance/security infrastructure. Not unlike the evolution of the internet itself, the Bitcoin standard was formed by a largely homogeneous group of actors that, although geographically dispersed, consisted largely of white men based in the Global North possessing knowledge of cryptographic technologies and little regulatory expertise, beyond desires to escape all existing laws and regulations. The activities of the identified members of this small group experimenting with cryptographic technologies impacted the security politics of the Bitcoin network most clearly by attracting illicit or regulation-subverting behaviours. The attractiveness of Bitcoin's privacy-enhancing features was most famously exemplified by its exclusive use for payments on Silk Road, an online marketplace that, amongst other goods and services, facilitated purchases of drugs and weapons. Anonymity as set out in the Bitcoin protocol shaped the development of the wider user network. Though not solely pursuing nefarious activities, Bitcoin users were attracted by possibilities of maintaining privacy and avoiding the KYC checks that had expanded to banks and other financiers 'in the frontline' as part of the war on terror and war on drugs (De Goede, 2017). In some cases, the Bitcoin user network consisted of activists and whistleblowers. In other cases, including that of ISIS noted above, the privacy-enhancing features of the original blockchain protocol attracted fraudsters, money launderers, terrorism financiers and criminals. Even if the hard-to-measure cases of the former outweighed the latter, illicit uses of Bitcoin enabled by its underlying protocol instigated the very interactions with AML/CFT regulators that the developers and early users of Bitcoin had sought to avoid.

A second way in which the sociotechnical relations that informed the development of the first cryptocurrency impacted security practices is through the particular applications that emerged from the original Bitcoin protocol. Despite intentions and persistent pretensions to function as an alternative peer-to-peer payments system distinct from the (central) banks involved in the 2008 global financial crisis, the cryptocurrency created by the Bitcoin protocol generated highly volatile speculation and instabilities of its own. Applications of this computer protocol enabled speculation that had become increasingly difficult - though not impossible - to undertake after 2008. Bankers whose activities were restricted by post-crisis regulation turned to Bitcoin as digital casino chips as the 'casino capitalism' popularized by Susan Strange (1986) became increasingly extreme in the 2010s (Maurer, 2016). The quasi-anonymity of Bitcoin attracted difficult-to-trace risky applications that concealed the identities of users speculating on massive swings in value.
Another blockchain protocol developed in 2014-2015, Ethereum, further widened nefarious networks and speculative applications, solidifying and extending the central security practices that Bitcoin had established in this alternative finance/security infrastructure. The Ethereum protocol provides standards for the secure exchange, verification and publishing of transactions, but of a broader range of digital objects than simply money-like cryptocurrencies. The particular sociotechnical relations between technical objects (time-stamping technologies and hashing algorithms), human actors (hackers, programmers, developers) and practices (financial cryptography) that developed this second major blockchain protocol had agential impacts on its wider network of actors. Users attracted by the open-source standards facilitated by the Ethereum protocol included hackers testing the informational security of this second major blockchain and others seeking to develop an expanded set of applications facilitating privacy practices. Like the developers of Bitcoin, the main developer of Ethereum, the Russian-Canadian Vitalik Buterin (2013), advanced standards of behaviour to generate 'financial freedom'. A growing range of so-called 'decentralized finance' (DeFi) applications arose based on this protocol.7 These applications once again attracted financiers seeking possibilities to undertake speculative exchange beyond the increasingly regulated financial system. Most prominent here was the rise of 'non-fungible tokens' (NFTs) as digital representations of everything from art to worker contracts. While providing novel financing tools to artists and others shut out of traditional finance, NFTs mostly boomed as speculative assets whose prices crashed spectacularly in 2022. The informational security and privacy practices provided by Ethereum's standards for developing 'smart contracts' - which automatically execute preset contractual terms - made this expansion of speculative activity possible. As with Bitcoin, the privacy features of the Ethereum protocol and the lack of KYC checks in its applications turned DeFi into the 'wild west' of finance (Kruppa and Murphy, 2019), not only for speculation but equally for money laundering and other illicit activities (Attlee, 2022; Young, 2020).

The activities and relations between human actors and technologies informing the initial development of Bitcoin and Ethereum have had an impact on the security practices of a wider blockchain-based alternative infrastructure since 2009. Agency within this new privacy-orientated finance/security infrastructure cannot merely be located in these initial 'moments of creation' but also in seemingly routine protocol maintenance work. Updates to protocols occur daily and even hourly. Yet, not all changes to a protocol are equally impactful on security practices across infrastructural layers. The sociotechnical relations informing two protocol updates, to Ethereum in 2016 and to Bitcoin in 2017, were significant in their shaping of privacy practices in the networks and applications spawned by this infrastructure after 2009.
A first protocol update that can be considered agential in its impacts on security practices involved the introduction of a new standard of acceptability in Ethereum. In 2016, coders led by the protocol's co-founder Buterin initiated a 'technical fix' to the hack and theft of $150 million raised through an experiment with the automatic allocation of crowdsourced funds called The DAO. This acronym stands for decentralized autonomous organization: digital entities run through combinations of blockchain-based smart contracts (Dupont, 2017; Hütten, 2019). The insecurity sparked by the hack split the different coder groups involved with Ethereum. Protocol 'purists' defended the original yet clearly flawed Ethereum protocol, while key 'insiders' like Buterin sought to update and alter the standards. The former were outmanoeuvred by the latter 'interventionists', and the blockchain split into 'Ethereum Classic' and a new 'Ethereum' blockchain. These changes led to a widening of activities that grew, rather than reduced, insecurities by inducing an explosion of further attempts to automate finance/security through smart contracts.8 Instead of ending after the failure of the 2016 hack, efforts to develop DAOs were resurrected in a post-2019 boom in DeFi applications. The instability of yet another boom-and-bust cycle was once again marked by widespread frauds perpetrated by mostly - though not always - anonymous users.

A second agential moment impacting security practices across the blockchain-based infrastructure came a year later. The 2017 Bitcoin protocol update sought to attend to growing delays in processing transactions that were hampering the daily exchange of goods and services. Purists and interventionists once again debated splitting, or 'forking', the original Bitcoin blockchain protocol. As with Ethereum, the purists lost out to those seeking wider usage of the cryptocurrency, who established a new protocol called Bitcoin Cash (BCH) that maintained the original features and transaction history of what was renamed Bitcoin Core (BTC). Once again, the privacy features of the original protocol were not significantly altered. What changed, however, was the behavioural standard enabling further spin-offs, or 'forks', of blockchain protocols. The 2017 update solidified the notion that protocols could be spun off from earlier protocols: the new BCH protocol itself split in two as Bitcoin Cash Satoshi Version and Bitcoin Cash Adjustable Blocksize Cap were created in 2018. Like the earlier 2017 update, this second protocol change was a contentious affair. Users who sought to expand the usage of cryptocurrencies ultimately won, thereby widening the possibilities for speculation and illicit uses of the now various versions of the Bitcoin protocol. Attempts to encourage a more 'everyday' use of the token as a real 'currency' sparked a seemingly ever-widening competition for financial and security advantages between the hundreds of 'altcoins' that arose in the aftermath of the 2017 protocol update.9
An Initial Coin Offering (ICO) boom occurred in 2018 as speculative investment flowed into pre-sales of tokens in processes comparable to the Initial Public Offerings (IPOs) of company stocks. This boom was marked by an explosion in fraud that yet again enhanced insecurities (Tiwari et al., 2020). Moreover, the growth of blockchain protocols in turn marked an expansion of efforts to set standards for enabling communication not merely between users of a single cryptocurrency network but across them, in a wider, expanding blockchain-based alternative finance/security infrastructure.

Such 'meta-level' attempts to develop protocols for relations amongst protocols have led to an ongoing struggle for 'cross-chain interoperability' to connect different protocols built on the two main blockchains, Bitcoin and Ethereum. These efforts have parallels with standardization in the existing finance/security infrastructure, such as the European Union's 'regulatory push for interoperability' in digital markets (Westermeier, 2020: 2058). Attempts to develop one protocol to standardize the increasingly fractured blockchain-based finance/security infrastructure, however, diverted scarce resources from maintaining and updating existing blockchain protocols. This dispersion of both computing power and human attention rendered protocols like Ethereum more and more vulnerable to the kinds of cyberattacks exemplified most dramatically by The DAO (Voell, 2020).

In 2022, a long-delayed update of the Ethereum protocol was finally undertaken. The mechanism specified for verifying transactions in this blockchain was shifted from the proof-of-work consensus, which originated and has persisted in the Bitcoin protocol, to proof-of-stake. In the former, users harness growing amounts of costly computational power to solve complex computational puzzles, racing to verify transactions to win newly minted tokens along with transaction fees paid by users. In the latter consensus mechanism, transaction validators prove their commitment to the blockchain by 'staking' a certain amount of ETH, Ethereum's 'native' token, which stands to be lost if stakers act maliciously (a simplified sketch contrasting the two mechanisms appears at the end of this subsection). The protocol switch to a radically different method of processing transactions impacted the security practices surrounding Ethereum by moving it closer to US regulatory reach. Most stakers are estimated to reside 'directly or indirectly under U.S. jurisdiction' (Georgiades, 2022). However, unlike under the energy-intensive proof-of-work consensus mechanism, staking operations can no longer be located by tracking abnormally high electricity consumption.

Sociotechnical activities both at the birth of and at key updates to protocols impact security politics. The infrastructural agency traced here involved geographically dispersed communities harnessing cryptographic technologies to form and maintain two blockchain protocols in ways that shaped the networks formed and applications emerging from privacy practices after 2009. The new standards of behaviour introduced by blockchain protocols, however, not only spurred the formation of a new privacy-centric finance/security infrastructure; they have also instigated a flurry of developments in an existing identification-focused finance/security infrastructure.
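To make the proof-of-work/proof-of-stake contrast concrete, the following is a deliberately simplified Python sketch of the two consensus mechanisms. It is a toy illustration under our own assumptions (the function names, difficulty and stake values are invented), not the actual Bitcoin or Ethereum code:

```python
import hashlib
import random

def proof_of_work(block_data: str, difficulty: int = 3) -> int:
    """Race to find a nonce whose SHA-256 hash has `difficulty` leading
    zeros; the cost scales with raw computation (and hence electricity)."""
    nonce = 0
    target = "0" * difficulty
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce  # the winning miner earns new tokens plus fees
        nonce += 1

def proof_of_stake(stakes: dict[str, float], seed: int) -> str:
    """Select the next validator with probability proportional to the
    amount staked; misbehaving validators stand to lose their stake."""
    rng = random.Random(seed)
    validators = list(stakes)
    return rng.choices(validators, weights=[stakes[v] for v in validators], k=1)[0]

print(proof_of_work("block 42"))
print(proof_of_stake({"alice": 32.0, "bob": 64.0, "carol": 32.0}, seed=7))
```

The sketch also hints at why the 2022 switch mattered for locatability: the first function consumes observable computation, while the second consumes only capital locked as stake.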
Locating agency between privacy and identification infrastructures

The initial design of, and subsequent updates to, protocols can also impact other infrastructures. What we call interstructural agency refers to how sociotechnical relations at the site of protocols impact relations between other sets of sociotechnical relations, or infrastructures. This subsection highlights how sociotechnical relations at both the 'birth' of and subsequent updates to blockchain protocols have impacted, and in turn been impacted by, an existing finance/security infrastructure orientated less towards secrecy and privacy and more towards identification practices. We trace both the impacts of activities at the site of blockchain protocols on the security politics of the existing identification infrastructure and vice versa. Locating interstructural agency, we argue, helps to make nuanced sense of the push-and-pull of privacy and identification practices at digital frontiers of the contemporary finance/security nexus.

Agency between infrastructures is most clearly illustrated in efforts by actors in the existing identification finance/security infrastructure to designate relations with the blockchain-based privacy finance/security infrastructure as 'critical' (Financial Stability Board, 2022). A sole emphasis on the agency exercised by formal regulators, however, would leave new and updated blockchain protocols as grey media, backgrounding their ongoing agential impacts both in and between new and existing infrastructures. A whole set of new protocols has emerged from consultation between networks of actors central to both 'new' and 'old' finance/security infrastructures, including coders and regulators of leading intergovernmental organizations like the Financial Action Task Force (FATF). Decades-long discussions and technological experiments led to a controversial compromise in which existing identification practices that enforce AML/CFT standards were extended to certain, but not all, blockchain-based privacy-centric activities. In particular, the collection of personally identifiable information through KYC standards, along with protocols for storing and transmitting identity data to which banks and other institutions in the existing finance/security infrastructure had long been subjected, was extended to cryptocurrencies and other so-called 'virtual assets'.

Interstructural agency can be seen in how the formation of blockchain protocols triggered the extension of the Travel Rule, a key standard in the identification infrastructure, to the privacy infrastructure centred around blockchains. More a recommendation for proper behaviour amongst organizations than a binding rule, the Travel Rule is itself a protocol allowing banks sending and receiving transactions to obtain customer identity information and to let this identity information 'travel' amongst them. The extension of the Travel Rule exemplifies how protocol-level activities in one infrastructure instigated continuities and changes in security practices in another. Locating the interstructural agency of activities at the site of protocols helps avoid understanding digitally mediated practices as entirely 'hard-wired' and unchanging on the one hand, or as entirely novel on the other. Critical security studies gain more nuanced appreciations of the unexpected paths of 'emergent security' (Fouad, 2022) in the ongoing push-and-pull between identification and privacy politics by zooming in on activities at the site of protocols.
Interstructural agency can be located in both compliance and defiance. Following these twin processes illustrates how privacy and identification practices are shaped not just within but also between complex sociotechnical infrastructures. In a first instance, the process of compliance involves acquiescence with rules and regulations through the development of new protocols, which we call 'add-ons'. The most prominent 'add-on' protocols are those facilitating the compliance of blockchain-based transactions with the aforementioned Travel Rule. A flurry of 'add-on' protocols emerged after 2019 to provide a common language for communicating identity information between 'virtual asset service providers' (VASPs), the FATF's technical name for entities facilitating the buying, selling or trading of cryptocurrencies and other blockchain-based products (a schematic example of such a message is sketched at the end of this passage). The Switzerland-based OpenVASP network emerged from discussions and experiments between dozens of 'identity start-up' firms in novel relations with one another. The developments this network drew from a new Ethereum-based protocol were said to put 'privacy of transferred data at the center of its design' (Riegelnig, 2019: 1). This network attempts to 'square the circle' of enabling VASPs to collect and transfer data on customers while ensuring that user anonymity is protected through the use of the peer-to-peer messaging system Whisper, which employs 'so-called dark routing to obscure message content and sender and receiver details to observers, a bit like anonymous web browsing using Tor' (Allison, 2020). The attempt to balance privacy and identification practices with OpenVASP is one outcome of interstructural relations between actors and networks of new and established finance/security infrastructures. Similar 'add-on' standards include the open-source peer-to-peer VASP Address Confirmation Protocol, developed by California-based CipherTrace (2019), a firm with close ties to the US Department of Homeland Security that was bought by Mastercard in 2021.

Locating add-on protocols developed for obtaining and sharing user identity information between VASPs as resulting from interstructural agency helps to illustrate both continuities and changes in security practices. Such specific ways of combining privacy and identification practices, striving to collect yet also limit user information, as well as to retain yet share it, were unlikely to emerge solely from a privacy-centric blockchain infrastructure or from an identification infrastructure. They are rather the outcomes of unfolding relations between these antagonistic infrastructures. A large part of what blockchain protocols originally arose to counter, after all, was precisely such identification and exchange of information. The significant but far-from-wholesale shift towards implementing standards for identification in this new privacy infrastructure is enabled by the creation of 'add-on' protocols developed by the likes of Taiwan-based Sygna, which emphasizes the need to maintain privacy practices while enabling 'VASPs to share encrypted transmittal information with each other securely and privately' (Sygna, 2020). Foregrounding infrastructural agency at the site of computer protocols enables the nuanced evolution of privacy and identification practices at the finance/security nexus to be understood. Locating what we call interstructural agency specifically foregrounds not only overlapping relations between infrastructures but also the relations that push infrastructures apart.
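To give a sense of what such a common language for identity information might look like, here is a minimal, hypothetical Python sketch. The field names and the placeholder 'encryption' are ours for illustration only; they do not reproduce OpenVASP, IVMS101 or any actual Travel Rule schema:

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical message fields; real VASP-to-VASP schemas differ in detail.
@dataclass
class TravelRulePayload:
    originator_name: str
    originator_account: str   # e.g., a blockchain address
    beneficiary_name: str
    beneficiary_account: str
    amount: float
    asset: str

def package_transfer(payload: TravelRulePayload, encrypt) -> bytes:
    """Serialize identity data so it can 'travel' between the sending
    and receiving VASP alongside the on-chain transaction."""
    return encrypt(json.dumps(asdict(payload)).encode())

# Stand-in 'encryption': a real deployment would use an agreed encrypted
# peer-to-peer channel (e.g., Whisper-style dark routing), not this.
ciphertext = package_transfer(
    TravelRulePayload("Alice", "0xabc...", "Bob", "0xdef...", 1.5, "ETH"),
    encrypt=lambda b: b[::-1],  # placeholder, NOT real cryptography
)
```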
A second set of sociotechnical relations between infrastructures impacting the evolution of digitally mediated security practices at the site of computer protocols involves defiance. These processes were most prominently illustrated after 2019 as further 'add-on' protocols were developed to escape the expansion of (limited) information-gathering and sharing under the Travel Rule. So-called 'privacy protocols' like CashShuffle/CashFusion, Lelantus, MimbleWimble and OpenBazaar arose, promulgating standards for enhanced, though rarely complete, anonymity for blockchain-based digital transactions. Some of these enhanced privacy protocols built on top of existing blockchain protocols, as Enigma (n.d.) did with Ethereum. Other 'add-on' blockchain privacy protocols developed entirely new standards of behaviour between human actors and non-human objects. Most prominently, privacy coins set standards for minimal information-provision, whether through the CryptoNote protocol or, in the case of Zcash, through zero-knowledge cryptography. Wasabi Wallet, meanwhile, allows its users to exchange without observers 'seeing' who pays whom, through novel transaction-scrambling standards (a toy sketch of this scrambling idea appears at the end of this subsection). The aforementioned post-2019 explosion of DeFi applications based on the Ethereum protocol also saw the expansion of 'decentralized exchanges' (DEXs), forums where peer-to-peer agreements to directly exchange digital tokens are made without identity verification by formal intermediaries. Other Ethereum-based decentralized applications (DApps) that emerged in the post-2019 DeFi boom included Tornado Cash, which offered updated ways of ensuring private finance transactions. Foregrounding activities at the sites of protocols highlights how the process of defiance in relations between infrastructures has nuanced implications for security practices. While numerous blockchain protocols emerged post-2019 to comply with the Travel Rule and other longstanding KYC protocols, other blockchain protocols doubled down on privacy practices. This defiance in turn has led to further developments in identification practices, which led to the arrest of a Tornado Cash coder in 2022 (Jagati, 2022).

Locating interstructural agency at the site of computer protocols helps to make nuanced sense of the evolution of security practices. Sociotechnical relations between existing and new infrastructures shape security politics through the two processes we have traced here, compliance and defiance. Interstructural agency emerges out of collaboration amongst seemingly opposing standards for privacy and identification practices. Agency between infrastructures also emerges out of divergencies in the doubling-down on opposing privacy and identification practices. Locating the agential impacts of sociotechnical relations at the site of computer protocols enables critical security studies to make sense of 'emergent security' (Fouad, 2022) in ways that navigate between continuity and change.
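As a toy illustration of the transaction-scrambling idea behind such privacy protocols (our own simplification, not the design of Wasabi or any other project), consider how pooling many inputs into equal-value, shuffled outputs breaks the link between sender and receiver:

```python
import random
import secrets

def coinjoin_round(inputs: dict[str, float], denomination: float) -> list[tuple[str, float]]:
    """Toy CoinJoin-style mix: several users' coins enter one joint
    transaction and leave as shuffled, equal-value outputs, so an
    outside observer cannot tell which output belongs to which input."""
    outputs = []
    for _user, amount in inputs.items():
        for _ in range(int(amount // denomination)):
            # each participant supplies a fresh, unlinkable address
            outputs.append((secrets.token_hex(4), denomination))
    random.shuffle(outputs)  # destroys any input-to-output ordering
    return outputs

print(coinjoin_round({"alice": 0.3, "bob": 0.2, "carol": 0.1}, denomination=0.1))
```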
Conclusion

This article has argued for security studies to consider computer protocols as productive entry points for locating how what we call infrastructural agency shapes digitally mediated tensions between privacy and identification practices. Our analysis has distinguished intra- and interstructural agency in the development and updating of computer protocols. Impacts of sociotechnical activities at the site of blockchain protocols on the security practices of the digital networks and applications built upon them are forms of agency within infrastructures. Implications of sociotechnical relations at the site of protocols for other sets of sociotechnical relations are understood as agency between infrastructures. Locating these twin forms of infrastructural agency at the site of a particular set of computer protocols, blockchains, helps us make sense of the evolution of digitally mediated practices of privacy and identification since 2008. We have traced the formation and expansion of an alternative privacy-orientated finance/security infrastructure to activities at the sites of protocols like Bitcoin and Ethereum. Locating agency in updates and developments at the sites of these and other protocols helps us interpret the interactions of novel and longer-standing finance/security infrastructures over the decade following the 2007-2008 global financial crisis and beyond. Key sociotechnical relations underpinning both the initial and continual exercise of infrastructural agency within and between these finance/security infrastructures reveal overlaps and divergences in privacy and identification practices since 2008. Tracing the 'birth' of computer protocols, and their key updates and reformations, reveals security politics that are neither entirely dynamic nor wholly static. Activities at the site of blockchain protocols highlight a finance/security nexus in continual flux, where novel privacy practices both diverge from and reinforce identification practices.

By probing computer protocols, critical security studies can advance debates over infrastructures and techno-agency. Future research can further broaden the intersections of such discussions by asking, for instance: Who (re)develops the standards for digital communication between computers, and between computers and humans? What networks and applications do certain computer protocols support? Which do they exclude or marginalize? How do computer protocols facilitate or hamper particular sociotechnical relations both within and between infrastructures?
Locating infrastructural agency at the site of computer protocols can shed further light on the security politics of digital 'traceability' by scrutinizing the identification standards emerging as part of Central Bank Digital Currencies (CBDCs). The standards for interoperability between CBDCs, some but not all of which are based on blockchain protocols, as well as between the identification practices they enable and the privacy of 'analogue' cash- and coin-based monetary relations, are promising areas of future research at the finance/security nexus. Such investigations could foreground 'protocological' relations between coders and international bodies beyond the FATF, like the International Organization for Standardization (ISO), locating infrastructural agency in their shaping of the possibilities for privacy and identification practices (Campbell-Verduyn and Hütten, 2021). In locating infrastructural agency at the sites of computer protocols, critical security studies research can further widen engagements with science and technology studies, new media and software studies to include critical code studies (Marino, 2020) in order to tease out the security politics of evolving relations between human and non-human actors. Explicit attention to the ethics and moral tensions informing and emerging from these interactions, and from the activities they enable, could contribute to debates over what has been regarded by some as the more positive implications stemming from the development of a 'two-tier financial ecosystem' (Kapsis, 2020), one centred around privacy and the other around identification practices.

What our locating of infrastructural agency in activities at the sites of computer protocols ultimately hopes to stimulate is a politicization of the seemingly esoteric sites where contemporary security practices are shaped and can also be challenged. Locating infrastructural agency in the (re)formation of standards guiding human-non-human relations is an important step in widening critical security studies' encounter with the 'marvelous minions' whose (re)production increasingly shapes digitally mediated contemporary security politics.
Implementation of early warning system in the clinical teaching unit to reduce unexpected deaths

Background Early detection of clinical deterioration in patients admitted to the hospital is critical. The early warning system (EWS) was developed to identify early clinical deterioration. Using an individual patient's vital sign records, this bedside score can identify early clinical deterioration, triggering a communication algorithm between nurses and physicians and thereby facilitating early patient intervention. Although various models have been developed and implemented in emergency rooms and paediatric units, data remain sparse on the utility of the EWS in patients admitted to general internal medicine wards and on the processes and challenges encountered during implementation.

Local problem There is a lack of standardised tools to recognise early deterioration of patient condition.

Methods This was a quality improvement project piloted in the clinical teaching unit of a tertiary care hospital. Data were collected 24 weeks pre-EWS and 55 weeks post-EWS implementation. A series of Plan, Do, Study, Act cycles were conducted to identify the root cause, develop a driver diagram to understand the drivers of unexpected deaths, run a sham trial of the EWS, educate and obtain feedback from the clinical care teams involved, assess adherence to the EWS during the pilot project (6 weeks pre-EWS and 6 weeks post-EWS implementation), evaluate outcomes by extending the duration to 24 weeks pre-EWS and 55 weeks post-EWS implementation, and retrospectively review the uptake of the EWS.

Interventions Implementation of a standardised protocol to detect deterioration in patient condition.

Results During the pre-EWS implementation phase (24 weeks), there were 4.4 events per week (1.2 septic workups, 1.9 observation unit transfers, 0.7 critical care transfers, 0.13 cardiac arrests and 0.46 unexpected deaths per week). In the post-EWS implementation phase (55 weeks), there were 4.2 events per week (1.0 septic workups, 1.9 observation unit transfers, 0.82 critical care transfers, 0.25 cardiac arrests and 0.25 unexpected deaths).

Conclusion The EWS can improve patient care; however, more engagement of stakeholders and electronic vital sign documentation may improve the uptake of the system.

INTRODUCTION

Problem description
There was no standardised protocol for managing clinically deteriorating patients in our hospital, and the nursing staff relied solely on their clinical judgement in deciding when to communicate with physicians. This could cause delays and gaps in the care of already very sick patients.

Available knowledge
Approximately 14%-28% of intensive care unit (ICU) transfers are unplanned. 1 Evidence supports that patients show signs of early deterioration before they become unstable. 2 The deterioration of a patient's medical status is often preceded by abnormal vital or physiological signs. 3 If these changes are detected early, unexpected deaths, serious adverse events or cardiac arrests can be prevented. Delays in ICU consultations for critically ill patients in medical wards have been associated with increased mortality. 4 For the past two decades, the early warning system (EWS) communication tool has been employed in various medical institutions worldwide. 5 6 Using an individual patient's records of vital signs, this bedside score can indicate early clinical deterioration, triggering a communication algorithm between nurses and physicians, thereby facilitating early patient intervention.
WHAT IS ALREADY KNOWN ON THIS TOPIC
⇒ The early warning system (EWS) is being used in many jurisdictions. However, there is still a lack of reports on the processes used in implementing the EWS and measuring patient-oriented outcomes.
WHAT THIS STUDY ADDS
⇒ This study discusses the implementation of the EWS in the clinical teaching unit on the general internal medicine ward and discusses the process of implementation and challenges encountered.
HOW THIS STUDY MIGHT AFFECT RESEARCH, PRACTICE OR POLICY
⇒ The process of implementation and lessons learnt from this quality improvement project can be used to implement the EWS.

The EWS was first introduced in 1997 in the UK and has since been implemented in multiple centres across the world. 7-10 It was initially designed to detect and respond to unrecognised deterioration and reduce inpatient mortality. 3 It significantly outperformed other early detection scores, such as the Systemic Inflammatory Response Syndrome and quick Sequential (sepsis-related) Organ Failure Assessment scores, in predicting severe sepsis, septic shock, sepsis-related mortality and all-cause mortality, suggesting that the EWS may be a better prognostic tool. 11 Previous studies have examined the impact of the EWS in various settings. 12-15 De Meester et al assessed the EWS in patients recently discharged from the ICU and found a significant reduction in serious adverse events following ICU discharge. 12 Moon et al retrospectively examined the EWS combined with a critical care outreach service and found a significant decline in cardiopulmonary arrest and in-hospital mortality. 13 Conversely, Patel et al, in a retrospective study involving trauma patients, evaluated the EWS with a critical care outreach service and did not find a significant reduction in mortality. 14 A systematic review of 17 observational studies covering 11 unique models based on vital signs and clinical evaluation suggested that the EWS is useful in predicting cardiac arrest and death, but its impact on health outcomes and utilisation of resources remains unknown. 15 Although the results of studies examining the impact of the EWS on health outcomes and resource utilisation have been mixed, the widespread use of this system suggests that there are potential benefits to using the EWS. 15 The National Health Service has developed a national EWS in England and considers it a key component of patient safety for better patient outcomes. 16 Although efforts have been made to validate the model, there is still a lack of reports on the processes used in the implementation of the EWS and the measurement of patient-oriented outcomes. 3

Aim
Our aim was to reduce the number of unexpected deaths and cardiac arrests by 50% within 1 year of the implementation of the EWS tool in the clinical teaching unit (CTU).

Measures
Outcome: Our outcome measures were the number of unexpected deaths (patients not receiving end-of-life care) and the number of transfers to the critical care unit.
Process: Our process measure was the number of transfers to observation units (the highest acuity unit outside the ICU).
Balancing measure: Our balancing measure was the initiation of septic workups (blood culture, urine culture, complete blood count analysis, venous/arterial blood gas and lactate analysis, and chest radiograph).

Participants
Participants were identified through ward audits by a nursing manager and were included in the analysis if they were transferred to a higher level of care (ICU or observation unit) or had in-hospital cardiac arrest or death.
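These measures were tallied as raw event counts over each observation window and reported as events per week. A minimal sketch of that conversion, using hypothetical counts for illustration only:

```python
def weekly_rates(event_counts: dict[str, int], weeks: int) -> dict[str, float]:
    """Convert raw event tallies over an observation window into the
    events-per-week rates used to compare pre- and post-EWS phases."""
    return {event: count / weeks for event, count in event_counts.items()}

# Hypothetical tallies for a 6-week window, not the study's actual data.
print(weekly_rates(
    {"septic workups": 7, "observation unit transfers": 11,
     "critical care transfers": 4, "cardiac arrests": 1,
     "unexpected deaths": 3},
    weeks=6,
))
```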
Intervention: the EWS algorithm
The CTU EWS score ranges from 0 to 20 and is derived from measurements of seven physiological parameters: blood pressure, temperature, heart rate, respiratory rate, oxygen saturation, oxygen delivery and level of consciousness. A higher score is more likely to indicate a clinically deteriorating patient. Each EWS score corresponds to a colour (green, yellow, orange or red), which triggers a different escalation process according to the EWS algorithm, as described below. In our unit, vital signs are entered manually by the nursing staff. The vital sign sheet was replaced by the EWS scoring sheet, so that the nursing staff could enter vital signs directly into the EWS scoring sheet.

Plan, do, study, act cycle 1: root cause analysis and driver diagram
This project was implemented as a resident physician initiative after a discussion of a clinical case in quality improvement (QI) rounds. In our QI rounds, clinical cases are discussed to improve the system and patient safety. A driver diagram was initially created to understand the drivers and reduce the occurrence of unexpected deaths and cardiac arrests, which revealed a deficiency in our current system (figure 1). There was no specific protocol for managing clinically deteriorating patients, and the nursing staff relied solely on their clinical judgement to manage patients with signs of clinical instability. This highlighted the value and feasibility of implementing the EWS in the CTU. Our improvement team consisted of resident physicians, the unit manager, the clinical nurse coordinator and a staff physician.

Plan, do, study, act cycle 2: sham test trial run of the EWS
The EWS tool had already been developed in our health region; however, it had not been implemented. Our EWS system comprises two parts: (1) a clinical status score (EWS score), calculated from systolic blood pressure, temperature, heart rate, respiratory rate, oxygen saturation, oxygen delivery and level of consciousness, and (2) a standardised algorithm; based on the EWS score, the nurses followed the algorithm to escalate patient care. The escalation process is divided into four zones according to the clinical score of a patient: green (0-2), yellow (3-4), orange (5-6) and red (≥7) (online supplemental appendix 1). If a patient is in the green zone, the nurses reassess and rescore the patient in 12 hours; if a patient is in the yellow zone, the nurses screen for sepsis, notify the in-charge nurse, and reassess in 4 hours. If a patient's score is in the yellow zone for two consecutive assessments, the nurses verify the scores with the junior resident physician on call. If the score is in the orange zone, the nurses screen for sepsis and notify the in-charge nurse and the junior resident physician for a management plan. If the patient is in the red zone, the nurses notify the senior resident physician on call for immediate assessment. The nurses used standardised Situation-Background-Assessment-Recommendation communication sheets to communicate with the physicians. We initially performed a sham trial of the EWS that ran for 2 weeks in the CTU, recording the number of times the algorithm was triggered. We found 40 patients in the red zone, and according to the initial version of the EWS protocol, the most responsible physicians (MRPs) were contacted 40 times. Our current CTU model operates such that the MRPs are not present in-house during overnight hours, and as such, there is a chance of delayed patient care in these instances.
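A minimal Python rendering of the four-zone logic just described (our own sketch for illustration, not the health region's actual tool; the per-parameter point assignments are not reproduced in this report, so the sketch starts from a precomputed score):

```python
def ews_zone(score: int) -> str:
    """Map a CTU EWS score (0-20) to its colour zone."""
    if score <= 2:
        return "green"
    if score <= 4:
        return "yellow"
    if score <= 6:
        return "orange"
    return "red"

def escalation(score: int, consecutive_yellow: int = 0) -> str:
    """Return the escalation step for a given score; two consecutive
    yellow-zone assessments trigger verification with a physician."""
    zone = ews_zone(score)
    if zone == "green":
        return "Reassess and rescore in 12 hours"
    if zone == "yellow":
        if consecutive_yellow >= 2:
            return "Verify scores with the junior resident physician on call"
        return "Screen for sepsis, notify in-charge nurse, reassess in 4 hours"
    if zone == "orange":
        return "Screen for sepsis, notify in-charge nurse and junior resident for a plan"
    return "Notify senior resident physician on call for immediate assessment"

print(escalation(3, consecutive_yellow=2))
print(escalation(8))
```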
After the initial trial run and feedback from physicians and nursing staff, a change was made to inform the senior resident when a patient was in the red zone, as the senior resident remains in the hospital overnight and can facilitate more timely patient assessment. Further, the meaning of 'altered mental status' from baseline lacked clarity; therefore, 'altered mental status' was changed to 'new onset of confusion' in the EWS scoring system.

Plan, do, study, act cycle 3: education and feedback
Education was provided to nursing staff during their education rounds. We also discussed the implementation of the EWS in various department of medicine rounds to educate physicians working in the CTU.

Plan, do, study, act cycle 4: pilot project
The EWS intervention was launched in the CTU ward on 9 December 2019, and data were collected for 6 weeks pre-EWS and 6 weeks post-EWS implementation. Pre-EWS implementation, 15 patients experienced an outcome, with 28 events, compared with 24 patients, corresponding to 41 events, post-EWS implementation. The following changes were observed: a decreased number of unexpected deaths 6 weeks post-EWS implementation (6 pre-EWS vs 2 post-EWS), an increased number of code blues called (1 pre-EWS vs 4 post-EWS), an increased number of transfers to critical care (2 pre-EWS vs 8 post-EWS), and more septic workups ordered (5 pre-EWS vs 10 post-EWS), while there was no change in the number of transfers to the observation unit (15 pre-EWS and 15 post-EWS). The notable increase in septic workups performed following the intervention could indicate the effect of the EWS in the early detection of clinical deterioration.

Plan, do, study, act cycle 5: adherence to the EWS during the pilot project
Adherence to the EWS was also assessed during the pilot project. Six weeks post-EWS implementation, the EWS was adhered to at a rate of 41.4%. For patients with high EWS scores, compliance increased to 60.0%, whereas compliance was 46.2% for patients with low scores. The odds of a patient having a high score and nurses complying with the EWS algorithm were 1.75 (95% CI 0.08 to 3.42).

Plan, do, study, act cycle 6: 24 weeks pre-EWS implementation and 55 weeks post-EWS implementation
To understand the true effect of the EWS, we extended the project to evaluate the baseline status at 24 weeks pre-EWS implementation and 55 weeks post-EWS implementation. Results from this phase are discussed in the results section below.

Plan, do, study, act cycle 7: retrospective review of the uptake of the EWS
A retrospective chart review was conducted between February and June 2020 to examine the uptake of the EWS after the pilot project. Convenience sampling was performed by including patients admitted in the first week of each month. The following questions were used: (1) were all vitals correctly assigned in the patient's chart?; (2) were any vitals missing from the patient's EWS score assessment?; (3) were all EWS scores calculated on the EWS chart?; and (4) were there any errors in the final calculated EWS scores in the EWS chart? In total, 172 patients were included in this review.
Of the 172 patients, only 26 (15.11%) had all the vital signs assigned correctly on the score sheet; 139 (86%) patients had missing vital signs and 33 (14%) had complete vital signs; and of those who had their EWS scores calculated, only 38 (32%) had correctly calculated scores.

RESULTS
The QI Macros for Excel 2017 software was used to create charts. Baseline data were collected in real time for 6 weeks before the implementation of the EWS for the pilot project (as mentioned in plan, do, study, act (PDSA) cycle 4); subsequently, an additional 18 weeks of data were collected retrospectively, for a total of 24 weeks of baseline data. During the pre-EWS implementation phase (24 weeks), 42 patients experienced an outcome event.

DISCUSSION
Our EWS was designed with the intent of reducing unexpected deaths and cardiac arrests 1 year after its integration into the CTU. Although we did not achieve our aim, we observed an absolute reduction in the number of unexpected deaths. Nevertheless, our current QI assessment did not reveal meaningful reductions in code blue calls or unexpected deaths post-EWS implementation. Our study is limited by the small number of events and the effect of the COVID-19 pandemic. The number of septic workups was our balancing measure, as there was a concern that implementing the EWS might increase the number of septic workups. Although we did see an increase in septic workups in our pilot phase, we did not observe this after 1 year. We consider that implementing the EWS shortly before the COVID-19 pandemic might have affected its uptake, which may have resulted in suboptimal adherence. Future PDSA cycles will be aimed at re-educating the nursing staff and expanding our patient selection to include other CTUs. The educational outcomes of the PDSA cycles were not assessed but would have been useful. Another limitation might be that the calculation of the EWS was performed manually; therefore, heavy workload and changes in protocols during COVID-19 might have impeded the implementation. It was not within the scope of this project to assess whether this version of the EWS accurately detects early patient deterioration; however, this would be important to evaluate in the future. Lastly, we did not record the total number of admissions to the unit during the implementation phase of the project, so we could not calculate the proportion of events. However, our unit has a fixed number of beds, and there was no change in the number of beds before and after the implementation of the EWS. Furthermore, beds are always occupied in the CTU. Hence, we anticipate that there was no major difference in the number of patients before and after implementation. Accurate assessment and documentation of vital signs are key to the effective use of the EWS; electronic vital sign charting and automatic calculation could perhaps overcome this barrier in the future.

CONCLUSION
Implementing a protocol is more complex than developing one. Having evidence and designing a tool do not mean that it can be used effectively in clinical practice. Furthermore, even if it is used, that does not mean it is being used accurately. Hence, continuous PDSA cycles are important for evaluating whether the tool in place is used, and used correctly and consistently. We learnt some very important lessons through our PDSA cycles. First, this QI effort was limited by the COVID-19 pandemic, which began a few weeks after the implementation.
We initially had the engagement of stakeholders, but during the pandemic, shifts in the roles of staff from one area to another affected our implementation. It is quite possible that the number and acuity of patients differed between the pre-EWS and post-EWS implementation phases, as we observed fewer patients early in the pandemic. In our next steps to roll the system out to other units, we will need not only continuous stakeholder engagement and education, but also some assessment of that education. The EWS protocol provides an objective and simple screening method for clinically deteriorating patients. The EWS may detect deterioration earlier and reduce the number of unexpected deaths. Future work should also elucidate whether a change in the EWS score can predict poor clinical outcomes. More resources to implement electronic vital sign charting and automatic calculation of the EWS score might improve the uptake of the EWS.
Using a recently approved tumor mutational burden biomarker to stratify patients for immunotherapy may introduce a sex bias

The U.S. Food and Drug Administration (FDA) recently approved treatment with pembrolizumab, an immune checkpoint inhibitor (ICI) targeting PD1 (anti-PD1), for patients with advanced solid tumors with a high tumor mutational burden (TMB) (defined as TMB ≥10 mutations/Mb). However, following recent studies suggesting that TMB levels and response to ICI treatment may differ between male and female melanoma patients, we investigated whether using this high-TMB threshold for selecting patients for anti-PD1 treatment may induce a sex-dependent bias. We analyzed a large ICI cohort of 1,286 patients across nine cancer types treated with anti-PD1/PDL1. We find that using this threshold would indeed result in an unwarranted sex bias in melanoma, successfully stratifying female but not male patients. While this threshold is currently not a regulatory prerequisite for ICI treatment in melanoma, it is important to raise awareness of this bias. Notably, no significant sex-dependent differences were observed in the response of melanoma patients to anti-CTLA4 therapies, different chemotherapies or combination therapies. Beyond melanoma, the high-TMB threshold additionally introduces a sex bias of considerable magnitude in glioblastoma and in patients with cancers of unknown origin; however, these results are not statistically significant. A power analysis shows that these biases may become significant with larger sample sizes, warranting further careful testing in larger cohorts.

Main Text

Treatment with immune checkpoint inhibitors (ICIs) has shown remarkable clinical response in many cancers. This response is, however, limited to ~15-20% of patients, raising a need for reliable response biomarkers, especially biomarkers that apply to many tumor types, to achieve maximum clinical benefit [1]. A biomarker increasingly referenced in clinical use is the tumor mutational burden (TMB), a measure of the total number of mutations in the coding region of the genome [2,3].
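As a small illustration of this definition and the threshold examined below (the mutation count and panel size are hypothetical; real assays such as FoundationOne CDx define their own covered regions):

```python
def tumor_mutational_burden(n_coding_mutations: int, mb_covered: float) -> float:
    """TMB = somatic coding mutations per megabase of sequenced genome."""
    return n_coding_mutations / mb_covered

def is_high_tmb(tmb: float, threshold: float = 10.0) -> bool:
    """The cut-off referenced throughout: TMB >= 10 mut/Mb is 'high'."""
    return tmb >= threshold

# Illustrative numbers only (panel sizes vary by assay).
tmb = tumor_mutational_burden(n_coding_mutations=350, mb_covered=30.0)
print(f"{tmb:.1f} mut/Mb, high: {is_high_tmb(tmb)}")  # 11.7 mut/Mb, high: True
```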
A prospective biomarker analysis of the basket trial KEYNOTE-158, in which 1,066 solid tumor patients across 10 cancer types were treated with pembrolizumab, demonstrated that oncology patients with high TMB, defined as ≥10 mut/Mb on the FoundationOne CDx assay, showed a higher frequency of response to anti-PD1 treatment than non-high-TMB patients (<10 mut/Mb). The FDA subsequently approved TMB ≥10 mut/Mb as a biomarker for administering anti-PD1 therapy for advanced solid tumors that have progressed after prior treatment [4]. However, recent studies have suggested that TMB levels, the strength of immune selection, and response to ICI treatment differ between male and female melanoma patients [5-7]. These sex differences motivated us to examine whether usage of the 10 mut/Mb threshold for both sexes could introduce an unwarranted sex bias when selecting patients for anti-PD1 treatment.

To study this question, we mined the largest publicly available dataset of ICI-treated patients' responses with TMB and demographic information [3]. This dataset includes 1,286 patients across nine different cancer types treated with anti-PD1/PDL1, 99 patients treated with anti-CTLA4 and 255 treated with an anti-PD1 + anti-CTLA4 combination. Among the 130 melanoma patients available in this cohort, we first observe a higher median TMB in male vs female melanoma patients (median TMB = 11.81 vs 6.51, respectively; Wilcoxon rank sum test P<0.10, Figure 1A top group), in concordance with previous reports [5]. We next asked whether the difference in survival of patients with high vs non-high TMB is dependent on the sex of the patient. We find that using the high-TMB threshold successfully stratifies female but not male melanoma patients. Consistently, we observed a higher median TMB in male vs female melanoma patients in each of three additional validation cohorts (Figure 1A, bottom three groups) and found a lower HR in female than in male patients in two out of three cohorts (Figure 1B, bottom three groups). A combined meta-analysis (weighted z-test) of all four cohorts together shows a higher median TMB in male vs female patients (combined P=0.006) and a lower HR in female vs male patients (combined P=0.027). We note that these findings have limited immediate clinical implications, as high TMB is not currently an FDA prerequisite for treating metastatic melanoma patients with anti-PD1 [11]. However, as clinicians may still take this threshold into account while considering therapies for a patient, given the central role of TMB as a biomarker in general (and in ongoing clinical trials, e.g., NCT04187833, NCT02553642), we think it is important to take note of this potential bias.

We next tested whether the sex bias observed above extends to other ICI and non-ICI treatments in melanoma. To this end, we mined melanoma patients' survival and TMB information in three additional patient cohorts, the first treated with anti-CTLA4 (N=174 [12,13]), the second treated with an anti-PD1/PDL1 + anti-CTLA4 combination (N=115 [3]), and the third treated with different chemotherapies (N=322 [14]). We did not observe a significant difference in HR between male and female patients in any of these cohorts (P<0.14, P<0.8).

To test whether the small sizes of the glioblastoma and cancer of unknown origin datasets may impede the discovery of potentially significant sex-dependent effects, we down-sampled the melanoma anti-PD1/PDL1 treatment cohort to the sizes of the glioblastoma and cancer of unknown origin cohorts (N=114 and N=88, respectively [3]).
We repeated the down-sampling analysis 5,000 times, keeping the respective female-to-male ratio as in these cohorts. In these down-sampled melanoma cohorts, we find a large but statistically insignificant difference between the HR in male and female patients: mean HR = 0.20 and 0.95 for females and males, respectively (P=0.51), for a set size equal to that of the glioblastoma cohort, and mean HR = 0.20 and 1.04 for females and males, respectively (P=0.46), for a set size equal to that of the cancer of unknown origin cohort. These results suggest that the small sizes of the glioblastoma and cancer of unknown origin cohorts may hinder our ability to identify significant trends and call for further testing in larger cohorts. Interestingly, we note that even though the size of the NSCLC cohort is substantial (N=329), we do not observe any notable difference in HR between male and female NSCLC patients (female vs male HR = 0.70 vs 0.69, P-interaction <0.99, Figure 2B), which is further confirmed in another cohort (N=16, interaction P<0.24) [17]. In NSCLC, then, we did not observe any sex-bias difference with TMB despite the large size of the cohort. Further, our findings suggest that usage of this high-TMB biomarker may introduce a sex bias in glioblastoma and cancers of unknown origin, which needs to be carefully tested further in larger datasets, as has been suggested by others for a variety of clinical findings regarding immunotherapy and immunology that may have a sex bias [18].

Data and Code Availability Statement

Scripts and data used in the study are provided to reproduce each step of the results and plots in this GitHub repository.
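Since the repository itself is not reproduced here, the following is a minimal, hypothetical sketch of the two core analyses described above: per-sex Cox hazard ratios for the high-TMB threshold, and the ratio-preserving down-sampling. The column names and the lifelines-based estimation are our assumptions, not the authors' actual code:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Assumed columns: os_months, death (1 = event), tmb (mut/Mb), sex ('F'/'M').

def per_sex_hr(df: pd.DataFrame, threshold: float = 10.0) -> dict:
    """Hazard ratio of high- vs non-high-TMB, fit separately per sex."""
    df = df.assign(high_tmb=(df["tmb"] >= threshold).astype(int))
    hrs = {}
    for sex, sub in df.groupby("sex"):
        cph = CoxPHFitter().fit(
            sub[["os_months", "death", "high_tmb"]],
            duration_col="os_months", event_col="death")
        hrs[sex] = float(cph.hazard_ratios_["high_tmb"])
    return hrs

def downsample_hrs(df: pd.DataFrame, n_target: int, female_frac: float,
                   n_reps: int = 5000, seed: int = 0) -> pd.DataFrame:
    """Resample the melanoma cohort to a smaller cohort's size, keeping
    its female:male ratio, and re-estimate per-sex HRs each repetition.
    (In practice the Cox fit can fail on small resamples; a try/except
    around per_sex_hr would be needed in real use.)"""
    rng = np.random.default_rng(seed)
    n_f = round(n_target * female_frac)
    females, males = df[df["sex"] == "F"], df[df["sex"] == "M"]
    rows = []
    for _ in range(n_reps):
        sample = pd.concat([
            females.sample(n_f, random_state=int(rng.integers(2**31))),
            males.sample(n_target - n_f, random_state=int(rng.integers(2**31))),
        ])
        rows.append(per_sex_hr(sample))
    return pd.DataFrame(rows)  # .mean() gives the mean HR per sex
```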
Characterization of two infection‐induced transcription factors of Magnaporthe oryzae reveals their roles in regulating early infection and effector expression

Abstract The initial stage of infection by the rice blast fungus, Magnaporthe oryzae, before 36 h postinoculation, is a critical timespan for deploying pathogen effectors to overcome the host's defences and ultimately cause the disease. However, how this process is regulated at the transcriptional level remains largely unknown. This study functionally characterized two M. oryzae Early Infection-induced Transcription Factor genes (MOEITF1 and MOEITF2) and analysed their roles in this process. Target gene deletion and mutant phenotype analysis showed that the mutants Δmoeitf1 and Δmoeitf2 were defective only in infectious growth, not in vegetative growth, asexual/sexual sporulation, conidial germination, or appressorium formation. Gene expression analysis of 30 putative effectors revealed that most effector genes were down-regulated in the mutants, implying potential regulation by these transcription factors. Artificial overexpression of two severely down-regulated effectors, T1REP and T2REP, in the mutants partially restored the pathogenicity of Δmoeitf1 and Δmoeitf2, respectively, indicating that these effectors are directly regulated. Yeast one-hybrid and electrophoretic mobility shift assays indicated that Moeitf1 specifically bound the T1REP promoter and Moeitf2 specifically bound the T2REP promoter. Both T1REP and T2REP were predicted to be secreted during infection, and mutants of T2REP were severely reduced in pathogenicity. Our results indicate crucial roles for the fungal-specific Moeitf1 and Moeitf2 transcription factors in regulating an essential step in M. oryzae early establishment after penetrating rice epidermal cells, highlighting them as possible targets for disease control.

| INTRODUCTION

Rice blast, caused by Magnaporthe oryzae, is a severe disease attacking rice worldwide and causes yield losses of up to 30% (Fernandez & Orth, 2018; Valent & Chumley, 1991). The infection process begins when the pathogen's conidia contact the leaf surface of the host plant (Wilson & Talbot, 2009). Within the first 2-4 h postinoculation (hpi), under appropriate temperature and humidity conditions, conidia begin to germinate, forming germ tubes that develop appressoria at their tips by 6-8 hpi (Beckerman & Ebbole, 1996). Over time, the appressorium cell wall undergoes melanization and accumulates large amounts of glycerol intracellularly (Ryder et al., 2019). The glycerol in the cytoplasm, combined with the strongly melanized cell wall, causes the appressorium to osmotically take up surrounding water, and an immense turgor pressure develops (de Jong et al., 1997). The pressure finds an outlet through the formation of a penetration peg and then drives penetration of the leaf epidermis at 16-24 hpi (Ribot et al., 2008). Inside the host cell, hyphae develop from the penetration peg at the infection site. These hyphae enter neighbouring host cells within 36-48 hpi (Khang et al., 2010). As the infection spreads further, visible disease lesions appear on the host leaves at approximately 72-96 hpi (Sakulkoo et al., 2018). At this time, new conidia form on the lesion areas and spread by wind or rain splash to the surfaces of healthy leaves to start new infections (Wilson & Talbot, 2009). Due to its economic importance, genetic tractability, and genome sequence availability, M. oryzae has emerged as a model organism to study fungal pathogenesis and interaction with host plants (Ebbole, 2007).
The first 36 hpi of pathogen-host contact is named the early infection stage in this study. At this stage, the pathogen is still only in the first host cell and secretes many effectors to weaken the host immune responses (Kim et al., 2020). One of the immune responses by the host is a burst of reactive oxygen species (ROS) triggered by the innate immune system recognizing the pathogen (Jwa & Hwang, 2017; Smirnoff & Arnaud, 2019). The ROS at the penetration site can be detected by staining cells with 3,3′-diaminobenzidine (DAB) (Li et al., 2019). The pathogen-host struggle at this early biotrophic stage directly determines the outcome, that is, whether the subsequent infection hyphae can survive and disease occurs (Vargas et al., 2012). Therefore, the processes triggered during the early infection stage are essential for M. oryzae survival and spread to other plants, but how these processes are regulated, especially at the transcriptional level, is still poorly understood.

Transcription factors are essential for regulating gene expression and cell development. The rice blast fungus genome encodes 495 predicted putative transcription factors in the fungal transcription factor database (Park et al., 2013). According to the InterPro classification (Zdobnov & Apweiler, 2001), these transcription factors can be divided into 44 families, the six major ones being bZIP, C2H2, HMG, MADS-box, MYB, and Zn2Cys6 (Park et al., 2013). To date, dozens of transcription factors of M. oryzae have been functionally characterized, and the results suggest they play different roles in vegetative growth (Li et al., 2010), conidiation (Bhadauria et al., 2010; Kim et al., 2009; Matheis et al., 2017; Zhou et al., 2009), appressorium formation (Kim et al., 2009; Li et al., 2010; Odenbach et al., 2007; Tang et al., 2015), and host infection (Kim et al., 2009; Mehrabi et al., 2008; Nishimura et al., 2009; Zhou et al., 2011). The rice blast fungus can express more than 800 putative effector proteins during infection (Chen et al., 2013). Therefore, it would be overwhelming to analyse the regulatory relationships between the large number of transcription factors and putative effector proteins. Because these effectors are induced during infection (Chen et al., 2013; Liu et al., 2021), we examined the hypothesis that the transcription factors that regulate the expression of effector proteins are themselves also expressed during infection. We found that two transcription factors, MOEITF1 and MOEITF2, were specifically up-regulated during the early infection process, and that each specifically controls the expression of a gene for an effector protein.

| Selection of transcription factors

We used data from our previous study (Meng et al., 2014) to analyse the expression patterns of the 495 transcription factors of M. oryzae during all developmental stages. We found 30 transcription factors highly expressed during early infection of onion epidermis. To experimentally test their regulatory roles during rice infection, we selected the top 15 up-regulated genes for deletion and managed to delete nine of them (authors' unpublished data). Only two of the highly up-regulated transcription factors that we managed to delete, MOEITF1 and MOEITF2 (ranked third and sixth in expression, respectively), showed altered infection phenotypes for their deletion mutants. These two genes were selected for further analysis.
The remaining potential transcription factors are presently being investigated further in our laboratory.

| Sequence analysis of MOEITF1 and MOEITF2

MOEITF1 is located on chromosome 6 in the M. oryzae genome and encodes a 316 amino acid protein with a Zn2/Cys6 DNA-binding domain at the N-terminus (Figure 1a). A similarity search of amino acid sequences in the NCBI database showed that Moeitf1 has homologs only in ascomycete fungi (Figure S1), suggesting that Moeitf1 is conserved in ascomycetes. MOEITF2 is located on chromosome 7 in the M. oryzae genome and encodes a 441 amino acid protein with a bZIP domain at the C-terminus (Figure 1a). A similarity search of amino acid sequences in the NCBI database showed that Moeitf2 has homologs only in the genus Pyricularia (Figure S2), suggesting that Moeitf2 is conserved in these fungi. The gene expression patterns of MOEITF1 and MOEITF2 during the whole infection process were analysed by reverse transcription-quantitative PCR (RT-qPCR). The relative expression value for each infection stage was calculated as 2^(−ΔΔCt), using the expression at 0 h postinoculation (hpi) as the reference (Livak & Schmittgen, 2001); the 0-h sample was obtained by sampling immediately after inoculation. Here, ΔΔCt = (average Ct of the target gene − average Ct of β-tubulin) for the sample of interest − (average Ct of the target gene − average Ct of β-tubulin) for the 0-hpi sample, with the average Ct of each gene obtained from three RT-qPCR replicates.
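To make the arithmetic concrete, here is a minimal sketch of the Livak 2^(−ΔΔCt) calculation (the Ct values are illustrative only, not data from this study):

```python
def relative_expression(ct_target_sample: float, ct_tubulin_sample: float,
                        ct_target_ref: float, ct_tubulin_ref: float) -> float:
    """2^(-ddCt) fold change (Livak & Schmittgen, 2001): normalize the
    target gene to beta-tubulin in the sample of interest and in the
    0-hpi reference, then exponentiate the difference."""
    ddct = (ct_target_sample - ct_tubulin_sample) - (ct_target_ref - ct_tubulin_ref)
    return 2 ** (-ddct)

# Illustrative Ct values: the target amplifies 3 cycles earlier (relative
# to beta-tubulin) at 24 hpi than at 0 hpi, giving 8-fold up-regulation.
print(relative_expression(22.0, 20.0, 25.0, 20.0))  # 8.0
```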
| MOEITF1 and MOEITF2 are not involved in vegetative or reproductive growth

A gene deletion assay was performed to study the function of MOEITF1 and MOEITF2 in M. oryzae. For each gene, two mutants, named Δmoeitf1-1/-2 and Δmoeitf2-1/-2, were acquired. A Southern blot assay confirmed that the target genes had been successfully knocked out in the mutants (Figure 2d). Because the two replicate mutants of each gene were found to have the same phenotype, only one mutant of each, designated Δmoeitf1 and Δmoeitf2, was selected for further characterization in the following text.

FIGURE 2 (b, c) Moeitf1 and Moeitf2 are localized to the nucleus; fluorescence observation and imaging were performed using a laser confocal microscope (Myc, mycelium; Con, conidia; App, appressorium; all size bars 10 μm). (d) Southern blotting verifying the knockout of MOEITF1 and MOEITF2: for MOEITF2, a wild-type band of approximately 2700 bp versus mutant bands of approximately 3100 bp; for MOEITF1, wild-type and mutant bands of approximately 2100 bp and 1800 bp, respectively; * indicates target bands.

We first tested colony appearance and growth rate by growing the fungi on rice bran medium for 10 days. The results showed that Δmoeitf1 and Δmoeitf2 were no different from the wild-type strain 98-06 (Figure 3).

FIGURE 3 There is no alteration in the vegetative and reproductive growth of mutants Δmoeitf1 and Δmoeitf2. (a) Colony morphology of each strain grown on rice bran medium for 10 days. (b) Morphology of conidia and conidiophores on the medium surface, photographed by light microscopy; the hyphal layer was scraped off, the medium cut into small blocks, and sporulation induced under continuous light for 24 h at 25°C (size bar 50 μm). (c) Sexual reproduction-related morphology: black perithecia appear between the two fungal colonies after 30 days of interaction between the test strain and the TH3 strain; crushed perithecia reveal the asci and ascospores inside (size bar 30 μm).

| MOEITF1 and MOEITF2 are not necessary for conidial germination and appressoria formation

As conidial germination and appressoria formation are prerequisite steps for M. oryzae infection, we tested the performance of the mutants in these two respects. After incubating conidia in water for 4 h at 25°C, we analysed the germination rate of conidia and found no significant difference between the mutants Δmoeitf1 and Δmoeitf2 and the wild-type strain 98-06 (Figures 4a and S4a). After incubation for 8 h, the appressoria formation rate was analysed. We found that Δmoeitf1 and Δmoeitf2 showed a similar result to the wild-type strain 98-06 (Figures 4b and S4b). As normal functional appressoria of M. oryzae develop a high turgor pressure, we also performed an appressoria collapse assay to test whether the appressoria of the mutants show normal turgor pressure development. As shown in Figure 4c, when treated with 2, 3, and 4 M glycerol, Δmoeitf1, Δmoeitf2 and the wild-type strain 98-06 showed similar proportions of collapsed appressoria. These results indicate that MOEITF1 and MOEITF2 are not required for conidial germination, appressoria formation, or appressorial turgor generation in M. oryzae.

FIGURE 4 Conidial germination, appressoria formation, and turgor pressure of mutants Δmoeitf1 and Δmoeitf2 are normal. (a) Conidial germination induced for 4 h on hydrophobic slide surfaces (size bar 10 μm). (b) Appressoria after induction on hydrophobic surfaces for 8 h (size bar 10 μm). (c) Collapse assay testing appressorial turgor: collapse frequency after treatment with different glycerol concentrations; data from three biological replicates, each with three technical replicates (size bar 10 μm); the same lowercase letters on the error bars indicate no significant differences between samples (p > 0.05, t test).

| MOEITF1 and MOEITF2 regulate the infection process

The above phenotype tests showed that MOEITF1 and MOEITF2 are involved only in the infection stage. Therefore, we performed conidial spray inoculation of rice seedlings to determine whether these two genes contribute to M. oryzae infection. As shown in Figure 5a, the pathogenicity of Δmoeitf1 and Δmoeitf2 was significantly reduced, with fewer and smaller lesions for the two mutants than for the wild-type strain 98-06 and the complemented strains Δmoeitf1/MOEITF1 and Δmoeitf2/MOEITF2. The infection of rice sheath cells was studied to observe the mutants' infection capacities. By analysing the different infection hyphal types at 24 hpi, we observed that over 60% of mutant infection hyphae stopped developing as type 1, whereas fewer than 10% of type 1 hyphae were found for the wild-type strain and the complemented strains (Figure 5b). For the typical infection hyphae of types 2, 3, and 4, the mutants had lower percentages than the wild-type and complemented strains. These results confirm that MOEITF1 and MOEITF2 contribute to the M. oryzae rice infection process.

The above results also showed that the mutants could produce functional appressoria for infection, yet the pathogenicity of both mutants was significantly reduced. We speculated that the reduced pathogenicity possibly resulted from a failure to overcome the host defences. Because the ROS burst is a common defence reaction induced by the host on infection, we used a DAB staining assay to examine whether ROS accumulated during mutant infection. As shown in Figure 5c, the infection hyphae of the mutants were surrounded by intense DAB staining, while the infection hyphae of the wild-type and complemented strains showed only light DAB staining. This result suggests that Δmoeitf1 and Δmoeitf2 are defective in overcoming host defences.
| A set of effectors were down-regulated in Δmoeitf1 and Δmoeitf2 Because effector proteins of plant pathogens have essential roles in attenuating host defence reactions (Giraldo & Valent, 2013), we investigated whether the mutant inability to cope with host ROS bursts was caused by abnormal effector expression or secretion. Because the wild-type strain 98-06 has been reported to encode more than 100 predicted effectors (Dong et al., 2015), we selected 30 effectors highly expressed during 98-06 infection to test whether their expression was inhibited in the mutants Δmoeitf1 and Δmoeitf2. We performed RT-qPCR assays and calculated the relative expression of these genes in mutants and 98-06. We found that the expression of 21 and 19 predicted effectors was reduced at different levels in the Δmoeitf1 and Δmoeitf2 strains, respectively (Tables S2 and S3). Among those, the effector T1REP (transcription factor 1 regulated effector protein) in Δmoeitf1 and the effector T2REP (transcription factor 2 regulated effector protein) in Δmoeitf2 were over 10-fold significantly down-regulated (Tables S2 and S3, Figure S5). These observations were corroborated in that both genes are mainly up-regulated just after penetration between 8 and 24 hpi in downloaded secondary data (Dong et al., 2015; Figure S6). Bioinformatics analysis using SignalP showed that both T1REP and T2REP have a signal peptide ( Figure S7a). DeepLoc also predicted T1REP to be located in mitochondria or plastids, and SecretomeP gave a high score for alternative secretion ( Figure S7a,b). Therefore, both T1REP and T2REP are probably secreted during infection. To confirm this, we experimentally tested the T1REP and T2REP localization using red fluorescent protein (RFP) labelling. By observing the red fluorescence of 98-06 expressing T1REP-RFP or T2REP-RFP, we found that both T1REP and T2REP showed probable plant apoplast localization and a punctate accumulation at the infection hyphae forming the biotrophic interfacial complex (BIC) (Figure 6a,b), but no RFP signal was found in mycelia, conidia, or the appressorium cell ( Figure S8). T1REP-RFP and T2REP-RFP were also expressed in Δmoeitf1 and Δmoeitf2, respectively, and no red fluorescence was found (Figure 6a,b). F I G U R E 4 Conidial germination, appressoria formation, and turgor pressure of mutant Δmoeitf1 and Δmoeitf2 are normal. (a) Conidial germination was induced for 4 h on hydrophobic slide surfaces. Size bar 10 μm. (b) The development of mutant appressoria after induction on hydrophobic surfaces for 8 h. Size bar 10 μm. (c) Collapse assay tests the appressoria turgor pressure difference between the mutant and the wild-type strain. The collapse frequency of appressoria reflects this difference after treatment with different glycerol concentrations. The data come from three biological replicates, and each biological replicate was performed with three technical replicates. Size bar 10 μm. The same lowercase letters on the error bars indicate no significant differences between samples (p > 0.05, t test) 2.9 | The down-regulation of T1REP and T2REP could be responsible for the reduced pathogenicity of the transcription factor mutant We used the strong promoter TrpC to drive T1REP and T2REP in Δmoeitf1 and Δmoeitf2, and transformed it into both mutants to test if the down-regulation of the effectors in the respective mutants were responsible for the reduction of mutant pathogenicity. 
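Computationally, the effector screen described above reduces to filtering 2^(−ΔΔCt) fold changes against a threshold. A small sketch of that filtering step follows; the gene names other than T1REP and all fold-change values are hypothetical and are used only to illustrate the more-than-10-fold cut-off.

```python
# Sketch of screening effectors for strong down-regulation in a mutant.
# Fold changes are mutant/wild-type expression ratios (2^(-ΔΔCt));
# the values and the EFFECTOR_A/B names are hypothetical placeholders.

fold_change_in_mutant = {
    "T1REP": 0.06,       # ~16-fold down (hypothetical value)
    "EFFECTOR_A": 0.45,  # mildly down
    "EFFECTOR_B": 1.10,  # essentially unchanged
}

THRESHOLD = 0.1  # "more than 10-fold down-regulated"

strongly_down = {gene: fc for gene, fc in fold_change_in_mutant.items()
                 if fc < THRESHOLD}
print(strongly_down)  # {'T1REP': 0.06}
```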
We discovered that both Δmoeitf1/TrpC-T1REP and Δmoeitf2/TrpC-T2REP could cause more disease lesions than Δmoeitf1 and Δmoeitf2, respectively, although still less than found for the wild-type strain 98-06 (Figure 7a,b). Thus, overexpression of T1REP and T2REP could partially restore the pathogenicity of Δmoeitf1 and Δmoeitf2, respectively, suggesting that a specific down-regulation of the effectors was mainly responsible for the reduced pathogenicity of the mutants. To investigate whether T1REP and T2REP themselves contribute to M. oryzae infection, we performed a gene deletion assay and obtained two T2REP mutants, Δt2rep-1 and Δt2rep-2, which were verified by Southern blotting (Figure 7c). Phenotype analysis showed that Δt2rep-1 and Δt2rep-2 were significantly reduced in pathogenicity ( Figure 7d) but showed no alteration in vegetative growth, conidiation, conidial germination, and appressoria formation (Table S4) in comparison with the wild-type 98-06 strain. This suggested that T2REP is a virulence factor during infection. Intriguingly, we could not obtain the gene deletion mutant of T1REP even after many attempts and testing more than 400 genetic transformants, which suggests that T1REP is essential for M. oryzae to survive under some growth conditions and not only be active in the early infection stage. | Moeitf1 and Moeitf2 bind with the promoter region of T1REP and T2REP, respectively As T1REP and T2REP were significantly down-regulated in the two transcription factor mutants Δmoeitf1 and Δmoeitf2, respectively, we speculated that the down-regulation of the effectors was a direct result of the deletion of the transcription factors as it would be if Moeitf1 and Moeitf2 directly controlled the T1REP and T2REP expression. We used the yeast one-hybrid assay to test whether Moeitf1 and Moeitf2 have a physical binding activity to the 1.5 kb promoter regions of T1REP and T2REP, respectively. The results showed that the yeast transformed with Moeitf1 and the T1REP F I G U R E 5 The pathogenicity of mutant Δmoeitf1 and Δmoeitf2 is reduced. (a) Conidial spray inoculation assay showed that the pathogenicity of Δmoeitf1 and Δmoeitf2 was reduced. The photographs show that the size of the disease lesions caused by the mutant were generally smaller and the number of lesions were fewer. The bar chart shows that the mutant produced fewer lesions. The data in this figure were calculated from three independent replicates. The same lowercase letters on the error bars indicate no significant differences between samples. The different lowercase letters indicate significant differences (p < 0.05, t test). (b) Conidial injection inoculation to the rice sheath showed that the early infection process of Δmoeitf1 and Δmoeitf2 was affected by the mutations. Most of the mutant infection hyphae remained at the type 1 stage. The percentage of different types was calculated from three biological replicates, and each of these was performed with three technical replicates. (c) 3,3′-diaminobenzidine staining assay showed that more reactive oxygen species (ROS) formed in host cells infected by the mutants, as indicated by the dark brown staining caused by the ROS. 
Size bar 10 μm promoter region or transformed with Moeitf2 and the T2REP promoter region could both grow normally on binding activity testing medium, while yeast that was transformed with Moeitf1 and the T2REP promoter region or that transformed with Moeitf2 and the T1REP promoter region could not grow on binding activity testing medium (Figure 8a | Transcription factors Moeitf1 and Moeitf2 specifically contribute to the early infection stage of M. oryzae Our results showed that Moeitf1 and Moeitf2 are indeed typical transcription factors in that they accumulate in the nucleus (Figure 2), bind to the regulatory portions of genes (Figure 8), and regulate the genes they bind to (Figures 6 and 7). MOEITF1 and MOEITF2 are strongly up-regulated only during early infection, so they are not involved in appressorium formation like MoHOX7, MoLDB1, and Con7p (Kim et al., 2009;Li et al., 2010;Odenbach et al., 2007;Tang et al., 2015). They are not active during all stages of infectious growth (Mig1, Mstu1, MoHOX8, and MoMCM1) (Kim et al., 2009;Mehrabi et al., 2008;Nishimura et al., 2009;Zhou et al., 2011) | Effectors T1REP and T2REP are regulated explicitly by the early infection-stage transcription factors Moeitf1 and Moeitf2 During M. oryzae infection more than 6000 expressed genes can be detected, of which more than 800 are putative effectors (Chen et al., 2013). Given the vital role of effectors in attenuating host immunity (Jaswal et al., 2020), we speculated that the reduced infection ability of transcription factor mutants in this study might be due to the abnormal expression of pathogenicity-related effectors needed to hide the fungus from the plant innate immunity or turn off plant defences (Vargas et al., 2012). The former is likely during the early biotrophic infection phase. As we expected, the RT-qPCR results found that most of the 30 highly expressed effectors in the wild-type strain 98-06 were down-regulated in the mutants, and two of them, T1REP and T2REP, were down-regulated by more than 10-fold. The functions of T1REP and T2REP are unknown, but both are relatively small, secreted proteins, as would be expected for effectors in this infection phase. Neither of the two effectors have any known enzyme-like domains. T2REP is predicted to have a positive charge with an even number of cysteines at the C-terminus (https:// aps.unmc.edu/predi ction), like many antimicrobial peptides and peptide effectors (Ku et al., 2020;Lazzaro et al., 2020). Thus, T2REP could potentially interfere with the host membranes. T1REP is, on the other hand, predicted to be cationic (https://aps.unmc.edu/predi ction) but also predicted to localize to mitochondria and plastids as well as potentially to become alternatively secreted ( Figure S7b). Therefore, T1REP might be needed in the fungal mitochondria and also be secreted as an effector during host invasion. This would explain why we have not succeeded in deleting it. Additional evidence that both proteins are indeed effectors when regulated by Moeitf1 or Moeitf2 comes from overexpressing them in the corresponding transcription factor mutants, when the pathogenicity of the mutants was partially restored (Figure 6). Moeitf1 and Moeitf2 bound the F I G U R E 6 Fluorescent protein fusion and fluorescence signal observation showed that the expression of two effector proteins T1REP and T2REP is affected for mutant Δmoeitf1 and Δmoeitf2, respectively. 
(a) Compared with the wild type, no red fluorescent protein (RFP) signal was detected in the mutant Δmoeitf1, indicating that the expression of T1REP is affected. Size bar 10 μm. (b) Compared with the wild type, no RFP signal was detected in the mutant Δmoeitf2, indicating that the expression of T2REP is affected. Size bar 10 μm. The red signals indicated by the arrow show accumulation at the biotrophic interfacial complex (BIC), the structure involved in the translocation of effectors into rice cells (Khang et al., 2010) promoter regions of T2REP and T1REP, respectively, and the binding was specific for each transcription factor and effector (Figure 8).To our knowledge, this is the first discovery in M. oryzae that individual transcription factors specifically regulate the expression of proteins that act as effectors. | Effectors T1REP and T2REP appear to localize to the BIC structure During infection by rice blast fungus, multiple effectors are secreted and translocated into rice cells Mosquera et al., 2009;Wu et al., 2015;Yoshida et al., 2009;Zhang & Xu, 2014). Two different secretion systems have been identified in M. oryzae . One system uses the conserved endoplasmic reticulum to Golgi secretory pathway to secrete effectors into the extracellular space between the fungal cell wall and the extra-invasive hyphal membrane produced by the plant cells (Kankanala et al., 2007). As the effectors stay in the extracellular space, effectors secreted by this system are called apoplastic effectors ). The other system is an M. oryzae-specific plant-derived structure, called the BIC; these effectors accumulate for later delivery into the rice cells . The effectors secreted by this system mainly go inside host cells, so they have been named cytoplasmic F I G U R E 7 Overexpression of T1REP and T2REP partially restored the pathogenicity of Δmoeitf1 and Δmoeitf2, respectively. (a) Conidial spray inoculation assay showed that the strain Δmoeitf1/ TrpC-T1REP caused more lesions than Δmoeitf1, but still less than the wild type. (b) Conidial spray inoculation assay showed that the strain Δmoeitf2/ TrpC-T2REP caused more lesions than Δmoeitf2, but still less than the wild type. The average lesion number on 2 cm 2 rice leaf was calculated from three biological replicates, and three technical replicates were performed for each biological replicates. The same lowercase letters on the error bars indicate no significant differences between samples. Different lowercase letters indicate significant differences (p < 0.05, t test). (c) Gene deletion verification of the T2REP mutant Δt2rep-1 and Δt2rep-2 by Southern blotting. The 780 bp segment before the target gene coding region was amplified and labelled as the hybridization probe. The genomic DNA was digested using SmaI, and after blotting, two bands of approximately 3500 and 5700 bp were expected to appear in the wild-type 98-06 and mutants, respectively. * indicates the target bands. (d) Conidial spray inoculation assay showed that the pathogenicity of Δt2rep-1 and Δt2rep-2 was significantly reduced effectors . Imaging a fungus expressing the fluorescently labelled cytoplasmic effector Pwl2 showed that the BICs are located at concentrated regions of infection hyphae . In our study, T1REP and T2REP also showed a similar BIC accumulation in addition to what seems to be a general localization in the plant apoplast (Figure 6a,b), suggesting that they are possibly two new cytoplasmic effectors. 
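The sequence-level observations used in this discussion (a positive net charge and an even number of cysteines towards the C-terminus of T2REP) can be illustrated with very simple counting heuristics. The sketch below uses an invented peptide, counts cysteines, and approximates net charge at neutral pH as (Arg + Lys) − (Asp + Glu); this is a crude stand-in for the antimicrobial-peptide prediction server cited above, not a reimplementation of it.

```python
# Crude sequence heuristics: cysteine count and an approximate net charge at
# neutral pH, taken as (Arg + Lys) - (Asp + Glu), ignoring His and the termini.
# The peptide below is an invented placeholder, not the T2REP sequence.

def cysteine_count(seq):
    return seq.count("C")

def approx_net_charge(seq):
    return sum(seq.count(aa) for aa in "RK") - sum(seq.count(aa) for aa in "DE")

peptide = "MKLSRCAKTGGCRWDKACLNKC"
print("cysteines:", cysteine_count(peptide))              # 4 (an even number)
print("approx. net charge:", approx_net_charge(peptide))  # +5 (cationic)
```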
Because both effectors are regulated during early infection, the effect of these effectors is likely to pave the way for other effectors needed later in pathogenicity. This could explain why regulating either of these effectors by the two transcription factors Moeitf1 and Moeitf2 substantially affects overall pathogenicity. However, further experimental verification is needed to show if the proteins enter the plant cytoplasm. | Moeitf1 and Moeitf2 as possible targets for disease control The most economical and effective method for controlling the rice blast disease currently is to use disease-resistant rice varieties (Li et al., 2021). However, the pathogen mutates quickly under field conditions. Thus, new disease-resistant rice cultivars might lose their disease resistance within 3-5 years of planting (Zhou et al., 2007). | Conclusion We conclude that two early infection-induced transcription fac- The M. oryzae wild-type strain 98-06 (Dong et al., 2015) was used as the background for gene deletion. The susceptible indica rice cv. CO-39 was grown for 2 weeks for the spray inoculation assay. A rice bran medium, made from crushed rice seed coats and 15 g/L agar, was used to grow M. oryzae and induce conidial production. Oat medium (50 g/L oatmeal, 15 g/L agar) was used to perform a sexual reproduction assay . Vegetative growth was tested by measuring the colony diameter after 10 days of growth on rice bran medium in 9-cm Petri dishes incubated at 25°C under 12 h/12 h light/dark periods. Conidial production was evaluated by flooding the 12-day-old colony with double distilled water, filtering out the mycelia by gauze, and then counting the conidia using a haemocytometer. The primers used in this study are listed in Table S1. | Transcription activity tested by yeast two-hybrid assay Using EcoRΙ and PstΙ, the full-length of MOEITF1 and MOEITF2 without intron regions were cloned into pGBKT7. The resulting plasmids were transformed with empty pGADT7 into the yeast strain AH109. Growth of yeast transformants on the test medium (SD/−Trp/− Leu/−His/−Ade) for reporter gene activation indicated that Moeitf1 or Moeitf2 activated the transcription of the yeast reporter gene. Yeast transformed with the combination of pGADT7-T/pGBKT7-53 and pGADT7/pGBKT7 served as positive and negative controls, respectively. | Sexual reproduction assay Strains tested were crossed with the sexually compatible strain TH3 on oatmeal medium for at least 30 days . If the tested strains have sexual reproduction activity, black perithecia develop at the intersection of the two strains, visible to the naked eye on the agar surface. Crushing perithecia releases clavate asci and ascospores visible by microscopy (BX51; Olympus). | Molecular manipulation The target genes' 1 kb upstream and downstream fragments were amplified with a 15 bp adapter sequence of HPH (hygromycin phosphotransferase) gene to construct MOEITF1 and MOEITF2 gene deletion cassettes. Then, the fragments were fused with the N-terminus or C-terminus of the HPH gene by overlapping PCR. MOEITF1 and MOEITF2 gene complemented vectors were constructed using the full length of the target genes. The upstream 1.5 kb native promoter was cloned into pCB1532 between XbaΙ and BamHΙ sites using a seamless cloning method (ClonExpress II One Step Cloning Kit). Moeitf1 and Moeitf2 localization vectors were constructed as follows. 
The TrpC promoter and GFP sequences were fused with the target gene's N-terminus and C-terminus, and then inserted into the plasmid pCB1532 between the XbaΙ and the BamHΙ sites. | Fungal transformation The fungal transformation was performed using the polyethylene glycol-mediated protoplast transformation method (Li et al., 2016). The protoplast cells were prepared as described previously (Li et al., 2019), then the DNA was introduced to the protoplasts. For gene deletion assay, at least 2 μg of gene deletion cassette DNA was transformed into the wild-type strain 98-06, and the transformants were screened on TB3 medium (6 g/L casamino acids, 6 g/L yeast extract, 200 g/L sucrose, 15 g/L agar) with 250 μg/ml hygromycin. Southern blotting was conducted to verify which transformants had successfully replaced the target genes with the HPH deletion construct using a digoxigenin high prime DNA labelling and detection starter kit I (Roche). Southern blotting was used for verifying the knockout of MOEITF1 and MOEITF2. The 800 bp segment before the target gene coding region was amplified and labelled as the hybridization probe. To verify MOEITF2 knockout, NheI and SplI were used to digest the genomic DNA, and after blotting two bands of approximately 2700 bp and 3100 bp were expected to appear in the wild type and mutants, respectively. To verify the MOEITF1 knockout, PstI and DraI were used to digest the genomic DNA, and after blotting two bands of approximately 2100 bp and 1800 bp were expected to appear in the wild type and mutants, respectively. The gene complementation transformants were verified by showing phenotypes that resemble the wild-type strain 98-06. Using the same plasmid for constructing vectors, the method for effector overexpression transformation was similar to that of the gene complementation transformations. | Conidial germination, appressoria formation, and pathogenicity assay Conidial germination assay and appressoria formation assay were performed by incubating conidial suspensions of 5 × 10 4 spores/ml on a hydrophobic surface in a sealed humid environment at 25°C for 4 and 8 h, respectively . Conidial germination rate and appressoria formation rate were calculated by counting the percentage of germinated conidia and appressoria-forming conidia. A sprayer pump bottle was used for conidial inoculation of 10 2-week-old rice seedlings with 5 ml of conidial suspension adjusted to 5 × 10 4 spores/ml. The conidial suspension was evenly sprayed onto the seedlings. The inoculated plants were incubated at 25°C for 24 h in a controlled environment chamber with 90% humidity and then moved to a standard rice-growing environment for another 4-5 days until disease lesions appeared. The pathogenicity of different strains was evaluated by counting the number of lesions and comparing their sizes. Injection inoculation was performed by injecting the prepared conidial suspension into rice sheath cavum taken from 21-day-old plants. The injected sheaths were then incubated for 24 h at 80% humidity. After that, the inner sheath surfaces were peeled and made into slide samples to observe infection hyphal growth by microscopy. The infection hyphae were grouped into four types to evaluate the infection ability: type 1, a small infection peg formed; type 2, the small infection peg begins hyphae-like growth; type 3, the infection hyphae fill the first infected host cell; type 4, the infection hyphae spread to the neighbouring host cell. 
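Adjusting conidial suspensions to the stated 5 × 10^4 spores/ml is simple haemocytometer arithmetic. The sketch below assumes the standard Neubauer conversion of 10^4 spores/ml per spore counted in a large square; the counts and volumes are hypothetical placeholders, not values from this study.

```python
# Sketch of adjusting a conidial suspension to 5e4 spores/ml from haemocytometer
# counts. Assumes the standard Neubauer factor of 1e4 (spores/ml per spore per
# large square); all counts and volumes below are hypothetical.

counts_per_large_square = [38, 42, 40, 44]
mean_count = sum(counts_per_large_square) / len(counts_per_large_square)
stock_concentration = mean_count * 1e4       # spores per ml

target_concentration = 5e4                   # spores per ml
target_volume_ml = 5.0                       # e.g. the 5 ml used per spray inoculation

stock_volume_ml = target_concentration * target_volume_ml / stock_concentration
water_volume_ml = target_volume_ml - stock_volume_ml
print(f"stock: {stock_concentration:.2e} spores/ml")
print(f"use {stock_volume_ml:.2f} ml of stock + {water_volume_ml:.2f} ml of water")
```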
| Appressoria collapse assay The appressoria collapse assay was performed as described previously (Li et al., 2016) to test whether the appressoria turgor pressure was normal. As a high glycerol concentration generates the appressoria turgor pressure, they were treated with exogenous glycerol to observe if they collapsed. Conidial suspension drops, 10 μl each, were placed on hydrophobic slides and incubated, as described above, for 24 h at 25°C to allow appressoria maturation. Then the covering water was carefully removed and replaced with an equal volume of 2, 3, or 4 M glycerol solution. After incubation for another 15 min, the ratio of collapsed to normal-looking appressoria was determined using microscopy. A high ratio of collapsed appressoria at a low glycerol concentration indicates low appressorial turgor pressure. | DAB staining The DAB staining to indicate host ROS formed during M. oryzae infection was performed as described previously (Li et al., 2019). A conidial suspension of 5 × 10 4 spores/ml was sprayed onto 2-week-old barley and incubated for 24 h. The inoculated leaves were plucked and placed in 1 mg/ml DAB solution for 8 h at room temperature. Then the samples were soaked in a washing solution (ethanol:acetic acid 94:4, vol/vol) for 2-3 h. The ROS are detected as dark brown precipitates visible in the infected host cells when observed under a microscope. | Yeast one-hybrid assay The yeast one-hybrid assay (Zhang & Xu, 2014) was used to check whether the target transcription factor can bind to the promoter region of the tested effector genes. First, we amplified the full-length coding sequence of the target transcription factor and cloned it into the pGADT7 vector using EcoRΙ and BamHΙ restriction enzymes. Subsequently, an approximately 1.5 kb sequence of the promoter region of the effector was amplified and cloned into the pAbAi plasmid using the seamless ligation kit as mentioned above. Then, the obtained plasmids above were cotransformed into the yeast strain Y1H Gold. After obtaining the transformants, we checked whether the transformants could grow on a medium containing 100 ng/ml abscisic acid. The transformed yeast containing the combination of the two plasmids, p53-AbAi and pGADT7-Rec-53, served as a positive control. The crossover combination of two target transcription factors and two tested effectors was used as a negative control. | Electrophoretic mobility shift assay The MOEITF1 and MOEITF2 cDNA sequences were amplified and cloned into prokaryotic expression vector pGEX-KG, respectively, containing a C-terminal glutathione S-transferase (GST) tag. The resulting Moeitf1-GST and Moeitf2-GST proteins were expressed by Escherchia coli BL21 and purified using glutathione magarose beads (Smart Lifesciences). The 1.5 kb putative promoter region DNA (0.1 μg) of T1REP and T2REP was amplified and incubated with the purified Moeitf1-GST and Moeitf2-GST (0.1 μg), respectively, for 20 min at 25°C. Then 1% agarose gel electrophoresis was performed to test whether the promoter DNA could be retarded due to binding the corresponding protein. The addition of GST and proteinase K worked as negative controls.
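The phenotype comparisons above (germination and appressoria formation rates, collapse frequencies, lesion counts) are reported as replicate-level t tests, with shared lowercase letters marking p > 0.05. A minimal sketch of such a comparison is given below; the percentages stand in for three biological replicates per strain and are invented, not measured values.

```python
# Sketch of the replicate-level comparison behind statements such as
# "no significant differences between samples (p > 0.05, t test)".
# The collapse percentages below are hypothetical placeholders.
from scipy import stats

wild_type = [22.0, 25.5, 23.8]   # % collapsed appressoria at one glycerol concentration
mutant = [24.1, 22.9, 26.0]

t_stat, p_value = stats.ttest_ind(wild_type, mutant)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
if p_value > 0.05:
    print("No significant difference between the two strains.")
```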
2022-04-19T06:23:06.556Z
2022-04-17T00:00:00.000
{ "year": 2022, "sha1": "b9ebdd78c0cc8ebe2cef52535890e347ca02cef6", "oa_license": "CCBYNCND", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/mpp.13224", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "4308cb4bd66a30b96bd137d78a17414081edb27f", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
34896652
pes2o/s2orc
v3-fos-license
Distribution of dark and baryonic matter in clusters of galaxies We present the analysis of baryonic and non-baryonic matter distribution in a sample of ten nearby clusters (0.03<z<0.09) with temperatures between 4.7 and 9.4 keV. These galaxy clusters are studied in detail using X-ray data and global physical properties are determined. Correlations between these quantities are analysed and compared with the results for distant clusters. We find an interesting correlation between the extent of the intra-cluster gas relative to the dark matter distribution. The extent of the gas relative to the extent of the dark matter tends to be larger in less massive clusters. This correlation might give us some hints on non-gravitational processes in clusters. We do not see evolution in the gas mass fraction out to a redshift of unity. Within r_{500}, the mean gas mass fraction obtained is 0.16\pm0.02 h_{50}^{-3/2}. Introduction Clusters of galaxies can be regarded in many respects as representative for the universe as a whole. As clusters accumulate mass from a large volume the baryon fraction in clusters is representative for the baryon fraction in the universe and hence can be used to determine Ω m when comparing the cluster baryon fraction to the upper limit of Ω baryon from primordial nucleosynthesis. The total mass of a cluster can be determined in various independent ways: strong and weak lensing, X-ray observations, galaxy velocities and the Sunyaev-Zel'dovich effect. From these mass determinations an average baryon fraction of 15-20% was inferred (e.g Mohr et al. 1999;Ettori & Fabian 1999;Arnaud & Evrard 1999;Grego et al. 2001) which implies an upper limit of the mean matter density Ω m < ∼ 0.3 − 0.4. Furthermore, the distribution of the different cluster components can be studied when mass profiles are available. From X-ray observations the gas density and the gas temperature can be determined, which yield not only the gas mass profile, but with the assumptions of hydrostatic equilibrium and spherical symmetry also the total mass profile of the cluster. Numerical simulations showed that these masses are quite reliable for virialised clusters (Evrard et al. 1996;Schindler 1996;Dolag & Schindler 2000). A comparison of both radial profiles gives Send offprint requests to: A. Castillo-Morales, acm@ugr.es information on the relative distribution of dark and baryonic matter. This can be used to learn about the cluster formation process: whether all the energy comes only from the gravitational collapse, in which case both distributions baryonic and dark matter are expected to have the same distributions, or whether there are additional physical processes involved. Deviations of the L X − T relation compared to the relation expected for self-similar scaling (e.g. Arnaud & Evrard 1999) and entropy studies (Ponman et al. 1999) suggested that at least for small galaxy systems additional (pre-)heating processes (Tozzi 2001) play an important role. Numerical simulations were performed to test this suggestion (Bialek et al. 2001, Borgani et al. 2001 finding that an entropy floor around 50 -100 keV/cm 2 is required to fit the observational results. In this article we determine various cluster properties for massive systems and compare the distribution of the different components within these clusters. Moreover, we use the determined masses to derive relations between the masses and other cluster quantities, which give more information about cluster formation and evolution. 
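The step from a cluster baryon fraction to an upper limit on Ω_m can be made explicit with a short back-of-the-envelope calculation. The sketch below assumes a primordial-nucleosynthesis value of Ω_b h² ≈ 0.02 and h = 0.7; both numbers are external assumptions used only for illustration, and only the gas (not the stellar) baryons are counted, so the limit is conservative.

```python
# Back-of-the-envelope Ω_m limit from the cluster gas mass fraction,
# Ω_m <= Ω_b / f_baryon. Assumed inputs (not results of this paper):
# Ω_b h^2 ≈ 0.02 from primordial nucleosynthesis and h = 0.7.

h = 0.7                          # H0 / (100 km/s/Mpc), assumed
h50 = 100.0 * h / 50.0           # H0 in units of 50 km/s/Mpc
omega_b = 0.02 / h**2            # ≈ 0.041
f_gas = 0.16 * h50**-1.5         # ≈ 0.097, using the scaling quoted in the abstract

omega_m_limit = omega_b / f_gas
print(f"Omega_m <~ {omega_m_limit:.2f}")  # ≈ 0.42; adding baryons in galaxies
                                          # lowers this toward the 0.3-0.4 quoted above
```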
We use a sample of nearby clusters with the best ROSAT and ASCA data. This sample is complemented with four new observations of distant clusters performed by Chandra and XMM. Throughout this paper we use H 0 = 50 km/s/Mpc and q 0 = 0.5. The sample Since the aim of this paper is the analysis of the total and gas mass distribution in nearby galaxy clusters we selected clusters in which an accurate total mass determination is possible, i.e with relaxed and symmetric morphologies, good temperature measurements and well determined surface brightness profiles. Therefore we obtain a high quality sample that consists of ten best clusters in the redshift range 0.03 < z < 0.09 observed with the ROSAT PSPC. Obviously, the sample is not complete in any sense, therefore no analyses of distribution functions can be made with it. But the analyses of correlations between the different quantities are not affected by the incompleteness (see Finoguenov et al. 2001). Cluster temperatures measured by the ASCA satellite are taken from ), Markevitch et al. (1999, Sarazin et al. (1998) and Bauer & Sarazin (2000) resulting in a temperature range of 4.7-9.4 keV. In Table 1 the different properties for each cluster are listed. Of all the new observations only the best data are selected in order to obtain the most reliable X-ray mass estimate. In Table 2 the published data we use to derive masses are listed. Data analysis We use X-ray imaging data retrieved from the ROSAT archive 1 to determine the surface brightness profiles of the clusters. For each cluster a ROSAT PSPC image was reduced using the standard analysis with the EXSAS software. In order to maximize the signal-to-noise ratio, we choose the hard energy band (0.5-2.0 keV) corresponding to channel numbers 52-201. The images were corrected for exposure variations and telescope vignetting using exposure maps. The cluster cores are somewhat blurred in PSPC images by the point spread function (PSF) (≈ 20 − 30 arcsec FWHM for ROSAT/PSPC). This effect is more important for compact clusters. For example for the compact cluster A780, Neumann & Arnaud (1999) estimated the core radius to be overestimated by about 10%. For the other clusters in our sample, which have a larger core radius, we estimate that the effects of the PSF are much smaller than the statistical errors. Therefore no correction for the PSF is necessary. We generate radial surface brightness profiles in concentric annuli (centred on the emission maximum of the cluster) excluding obvious point sources manually. The observed profiles are fitted with a β-model, (Cavaliere & Fusco-Fermiano 1976) plus background. The parameters r c (core radius), β, S 0 (central surface brightness) and the background B are obtained from 1 http://www.xray.mpe.mpg.de/rosat/archive a least-squares fit to the X-ray profile. The slope β and the core radius r c are not independent parameters, with β increasing when r c increases. They are found to range between 0.6 and 0.8 and between 130 and 290 kpc, respectively. However, the overall β-model fit is a poor description of the central region of some clusters where excess emission is observed. Indeed we find in most of the cases very large χ 2 values when fitting the entire cluster emission. We reduce the χ 2 values by excluding the central bins from the fit. The best fit β -model was determined excluding the data within the cooling radius taken from Peres et al. (1998), Allen & Fabian (1997), White et al. (1997). 
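The β-model fit described above can be sketched in a few lines with a standard non-linear least-squares routine. The example below fits S(r) = S_0 [1 + (r/r_c)²]^(−3β+1/2) + B to synthetic data; the parameter values and the noise model are arbitrary choices for illustration, whereas the real fits are performed on the PSPC surface brightness profiles extracted in annuli.

```python
# Sketch of the β-model surface brightness fit,
#   S(r) = S0 * (1 + (r/rc)^2)**(-3*beta + 0.5) + B,
# applied here to synthetic data for illustration only.
import numpy as np
from scipy.optimize import curve_fit

def beta_model(r, S0, rc, beta, B):
    return S0 * (1.0 + (r / rc) ** 2) ** (-3.0 * beta + 0.5) + B

r = np.linspace(50.0, 2000.0, 60)          # radii in kpc (synthetic grid)
truth = dict(S0=5e-2, rc=200.0, beta=0.7, B=1e-4)
rng = np.random.default_rng(0)
s_obs = beta_model(r, **truth) * (1.0 + 0.05 * rng.standard_normal(r.size))

popt, pcov = curve_fit(beta_model, r, s_obs, p0=[1e-2, 150.0, 0.6, 1e-4])
S0_fit, rc_fit, beta_fit, B_fit = popt
print(f"beta = {beta_fit:.2f}, rc = {rc_fit:.0f} kpc")   # close to 0.7 and 200 kpc
```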
For the clusters in our sample with excess central emission, the exclusion of the central part of the profile yields larger β values and core radii values compared to the overall β-model fit. Excluding the central excess we underestimate the gas density at the centre (about 12% for the central 3 ′ in the case of the cluster A2199). However we estimate that the central gas mass contributes only with a few percent in the gas mass at larger radius (about 3% for cluster A2199 at the radius of r 500 ). Therefore for the gas mass determination, this underestimate at small radii is negligible when we integrate the gas density out to large radii. In Table 3 the resulting parameters are shown. The reported errors are 90% confidence level. Mass determination Once we deproject the surface brightness to threedimensional density with the β model, together with the assumptions of hydrostatic equilibrium and spherical symmetry the integrated mass within radius r can be determined as with ρ and T being the density and the temperature of the intra-cluster gas, respectively. k, µ, m p , and G are the Boltzmann constant, the molecular weight, the proton mass, and the gravitational constant. Isothermal analysis As well as the density profiles we need temperature profiles to determine the total cluster mass. We follow two different approaches for the temperature profile. In our first study we assume isothermality. In Sect. 4.2 we include the temperature gradients derived from Markevitch's ASCA analysis to see how the total cluster mass is affected. In the isothermal approach we neglect the term associated with the temperature gradient in Eq. (2). In this case the total mass profile is: where the mean atomic weight µ is assumed to be 0.61. The total mass thus depends linearly on both β and T gas . In the case of clusters with central X-ray excess, we use emission-weighted gas temperature obtained by excluding the central part of the cluster. Typical mass profiles are shown in Fig. 1 for the cluster Abell 2199. After having determined the gravitational mass profiles for the clusters, it is important to fix the radius within which the cluster masses can be calculated. As the mass of a cluster increases with radius, masses can only be compared when derived within equivalent volumes. Simulations by Evrard et al. (1996) showed that the assumption of hydrostatic equilibrium is generally valid within at least radius r 500 , where the mean gravitational mass density is equal to 500 times the critical density ρ c (z) = 3H 2 0 (1 + z) 3 /8πG. The resulting cluster masses and gas masses within r 500 and 0.5 × r 500 for all the clusters are listed in Table 4. The errors on the total mass can be estimated as follows. The total cluster mass is affected by the errors in the parameters of the β-model. This error is estimated to be about 5%. The uncertainties in the total mass estimate are much larger due to the uncertainty in temperature estimates and possible temperature gradients. For larger radii we assume an error of 10% in the total cluster mass, caused by the existence of a temperature gradient. There are also additional uncertainties coming from deviations from spherical symmetry (Piffaretti et al. 2003), deviations from hydrostatic equilibrium and projection effects (together about 15% − 30%, Evrard et al. 1996, Schindler 1996, but these are hard to quantify for each cluster individually. As only well relaxed clusters where chosen, these errors should be relatively small (< 15%) in the clusters of this sample. 
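Once β, r_c and an emission-weighted temperature are fixed, the isothermal mass profile of Eq. (3) and the radius r_500 follow from elementary numerics. The sketch below implements M(<r) = 3βkT r³ / [Gμm_p (r² + r_c²)] and finds r_500 by root finding; the temperature, β and r_c used are illustrative values within the ranges quoted above, not the published results for any particular cluster.

```python
# Sketch of the isothermal β-model mass, Eq. (3) above,
#   M(<r) = 3*beta*k*T/(G*mu*m_p) * r**3/(r**2 + rc**2),
# and of r_500, the radius enclosing a mean density of 500*rho_crit(z).
# Parameter values are illustrative only.
import numpy as np
from scipy.optimize import brentq

G = 6.674e-11        # m^3 kg^-1 s^-2
k_B = 1.381e-23      # J/K
m_p = 1.673e-27      # kg
M_sun = 1.989e30     # kg
Mpc = 3.086e22       # m
keV = 1.161e7        # K per keV
mu = 0.61

def mass_isothermal(r, T_keV, beta, rc):
    T = T_keV * keV
    return 3.0 * beta * k_B * T / (G * mu * m_p) * r**3 / (r**2 + rc**2)

def r500(T_keV, beta, rc, H0, z=0.0):
    rho_crit = 3.0 * H0**2 * (1.0 + z)**3 / (8.0 * np.pi * G)
    def excess(r):
        return mass_isothermal(r, T_keV, beta, rc) \
            - 500.0 * rho_crit * 4.0 / 3.0 * np.pi * r**3
    return brentq(excess, 0.1 * Mpc, 10.0 * Mpc)

H0 = 50.0e3 / Mpc                                   # 50 km/s/Mpc in s^-1
r5 = r500(T_keV=4.7, beta=0.65, rc=0.2 * Mpc, H0=H0)
print(f"r500 ≈ {r5 / Mpc:.2f} Mpc, "
      f"M(<r500) ≈ {mass_isothermal(r5, 4.7, 0.65, 0.2 * Mpc) / M_sun:.2e} Msun")
```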
We compute the error in the total cluster mass as the convolution of the error coming from the uncertainty in the fit parameters, the error in the temperature and the 10% error coming from the assumption of non-isothermality. Table 3. X-ray quantities as measured from ROSAT/PSPC and ASCA observations. The clusters are listed in Col. 1. Col. 2 shows the emission-weighted gas temperature. In Cols. 3, 4, 5 and 6 the fit parameters of the β-model are shown: the slope β, the core radius r_c, the central surface brightness S_0 and the background B in the energy band 0.5-2.0 keV. S_0 is in units of 10^-2 ROSAT/PSPC counts/s/arcmin^2. B is in units of 10^-4 ROSAT/PSPC counts/s/arcmin^2. In Col. 7 r_f denotes the radius range fitted. Table 4. Results of the isothermal analysis: total mass, gas mass and gas mass fraction for the cluster sample. The first column gives the cluster name. Col. 2 denotes the radius r_500 which comprises an overdensity of 500 over the critical density. Cols. 3, 4 and 5 list the total mass, the gas mass and the gas mass fraction within r_500, respectively. In Cols. 6, 7 and 8 the same quantities are listed for a radius 0.5 × r_500. For all the quantities the reported errors are 90% confidence level. In the energy range considered, the gas mass estimate is almost independent of the temperature measurement. We estimate the error in the gas mass to be about 5%, due to the uncertainty in the fit parameters. Temperature gradients The isothermal assumption may give poor estimates of the total mass if strong temperature gradients are present. Observations with ASCA suggest that the temperature does decrease with radius. To estimate the effect of these temperature gradients on cluster masses, we have calculated M_tot using the temperature profiles derived by Markevitch et al. (1998), Markevitch et al. (1999) and Bauer & Sarazin (2000). In some clusters, the temperature outside the cooling core can be well approximated by a polytropic function, T_gas ∝ ρ^(γ-1), where γ is the polytropic index. In this case the total mass enclosed in a sphere of radius r (Eq. (2)) is M_tot(r) = 3βγ k T_gas(r) r^3 / [G μ m_p (r^2 + r_c^2)]. Here T_gas is the true temperature, rather than a projection on to the plane of the sky. Markevitch et al. (1999) showed that as long as the temperature is proportional to a power of the density, and the density follows a β-model, the true temperature differs from the projected temperature only by a constant factor. Fig. 1. Total mass profile assuming isothermality (solid line) and gas mass profile (dotted line) for the cluster A2199. The dashed line represents the total mass profile derived with a temperature gradient. r_500 is the radius which comprises an overdensity of 500 over the critical density. r_T represents the maximum radius out to which the temperature profile is calculated. A typical error bar for the isothermal total cluster mass is shown at r_500. In our analysis, the projected temperature profiles are parametrized using a linear function of the form T(r) = T(0) + αr (Eq. (7)). The parameters T(0) and α are determined by fitting a straight line to the projected temperature profiles mentioned before, excluding the central cluster region in the case of clusters with central X-ray excess. It is not useful to fit a more complex function because the temperatures for consecutive annuli are determined with low accuracy. We do not deproject the temperatures for the mass analysis with the temperature gradient; a sketch of this calculation is given below.
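The sketch implements the general hydrostatic expression of Eq. (2), M(<r) = -kT(r)r/(Gμm_p) [dlnρ/dlnr + dlnT/dlnr], for a β-model density and a linear projected temperature profile T(r) = T(0) + αr. The β and r_c values are illustrative, while the temperature gradient is the A2199 one quoted in the following paragraph; this is a generic reimplementation for illustration, not the authors' code.

```python
# Sketch of the hydrostatic mass with a temperature gradient (Eq. (2) above),
#   M(<r) = -k*T(r)*r/(G*mu*m_p) * (dln(rho)/dln(r) + dln(T)/dln(r)),
# for a β-model gas density and a linear temperature profile T(r) = T0 + alpha*r.
# beta and rc are illustrative; T(r) = 5.6 - 2.1*r[Mpc] keV is the A2199 gradient
# quoted in the text below.
G, k_B, m_p = 6.674e-11, 1.381e-23, 1.673e-27   # SI units
M_sun, Mpc, keV, mu = 1.989e30, 3.086e22, 1.161e7, 0.61

def mass_with_gradient(r, beta, rc, T0_keV, alpha_keV_per_Mpc):
    r_Mpc = r / Mpc
    T_keV = T0_keV + alpha_keV_per_Mpc * r_Mpc
    dlnrho_dlnr = -3.0 * beta * r**2 / (r**2 + rc**2)
    dlnT_dlnr = alpha_keV_per_Mpc * r_Mpc / T_keV
    return -k_B * (T_keV * keV) * r / (G * mu * m_p) * (dlnrho_dlnr + dlnT_dlnr)

for r_Mpc in (0.2, 1.0, 1.3):
    M = mass_with_gradient(r_Mpc * Mpc, beta=0.64, rc=0.14 * Mpc,
                           T0_keV=5.6, alpha_keV_per_Mpc=-2.1)
    print(f"M(<{r_Mpc} Mpc) ≈ {M / M_sun:.2e} Msun")
# With these illustrative inputs the three values come out close to the A2199
# masses quoted in the following paragraph (~0.5, ~3 and ~4 x 10^14 Msun).
```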
Although this introduces some additional uncertainty in the total mass we show below that the results using true temperatures or projected temperatures are very similar. For the cluster A2199 we compare the total mass derived using the linear gradient Eq. (7) and the mass calculated with the polytrope Eq. (4) by Markevitch et Fig. 2. Gas mass fraction assuming isothermality (solid line) for cluster A2199. Dashed line represents the gas mass fraction derived with a gradient of temperature. A typical error bar for the isothermal gas mass fraction is shown at r 500 . al. (1999). The estimated values for the total cluster mass agree within the errors. There is a difference ≤ 10% at large radii (r > 1M pc) and ≤ 15% at the small radius (r = 0.2M pc). For example Markevitch et al. (1999) using a polytrope equation with γ = 1.17 find a total cluster mass of (0.65 ± 0.11) × 10 14 M ⊙ , (2.9 ± 0.3) × 10 14 M ⊙ and (3.6 ± 0.5) × 10 14 M ⊙ at a radius of 0.2 Mpc, 1 Mpc and r 500 = 1.3 Mpc respectively, for cluster A2199. We derive the total cluster mass for this cluster using the linear gradient T (r) = −2.1r + 5.6 where r is units of Mpc and temperature is in keV. The masses obtained in this case are quite similar, (0.56 ± 0.14)× 10 14 M ⊙ , (3.1 ± 0.8)× 10 14 M ⊙ and (3.9 ± 1.0) × 10 14 M ⊙ at a radius of 0.2 Mpc, 1 Mpc and 1.3 Mpc, respectively. Therefore we conclude that for our purpose of comparison, it is sufficient to use a linear gradient of projected temperatures. In the following we will apply only a linear temperature gradient. In this case we estimate the errors in the total cluster mass and gas mass as follows. Due to the errors in the temperature, the errors in the linear fit are large. We estimate the errors in the total cluster mass using the different possible temperature gradients given by the lower and upper values of T(0) and α (see Eq. (7)). When the different possible temperature gradients are used, the radii r r500 and r r500 /2 change significantly and hence the total and gas mass enclosed by them. For the gas mass an error of ∼ 10% is estimated. Fig. 3. Comparison of masses obtained with the isothermal and temperature profile analyses at radius r 500 . The smaller symbols represent the temperature profile analysis. In Fig. 1 the different mass profiles calculated using the isothermal model, and a linear gradient of temperature, for cluster A2199 can be seen. With the assumption of constant temperature (solid line) the total mass is overestimated (∼ 10%) at large radii and underestimated at small radii, compared to the temperature gradient analysis (dashed line). The overestimate of the total mass is only significant at radii larger than r T , with r T being the radius out to which the temperature was measured. This trend is observed for all the clusters in our sample. Fig. 1 also shows the gas mass profile (dotted line) in the radius range that has been used to fit the data with the β-model. Fig. 2 shows the gas mass fraction, defined as the ratio between the gas mass M gas and the total mass M tot . The difference in the gas mass fraction between the isothermal and non-isothermal cluster studies is shown as well (solid and dashed lines, respectively). At small radii both analyses provide approximately parallel profiles with smaller values obtained in the temperature gradient case. At larger radii the temperature profile analysis yields steeper profiles and higher values (∼ 10%) compared to the isothermal one. 
The gas mass fraction derived using a gradient of temperature is not reliable beyond the radius r T where the gradient is determined and thus it is not plotted. The masses calculated with both analyses (isothermal and temperature profile) are compared in Fig. 3 for each cluster in the sample at radius r 500 . The symbols corre- Fig. 4. Gas mass fraction profiles derived for the nearby cluster sample. Profiles are plotted from the minimum radius fitted in the β model. spond to the cluster name listed in Fig. 6. The smaller symbols represent the values for the temperature profile analysis. Although the gas mass profile is not influenced by the change in the cluster temperature, the gas mass at radius r 500 is different in the two analyses due to the difference in r 500 which depends on the temperature. Results and discussion In the following all the results presented are refered to the analysis of the cluster sample assuming isothermality. Gas mass fraction The gas mass fraction defined as f gas = M gas /M tot is calculated for each cluster in the sample. The errors associated with the gas mass fractions are calculated by the convolution of errors in the total cluster mass and the gas mass. We find that inside each cluster the gas mass fraction increases with radius (see Fig. 4) implying that the gas distribution is more extended than dark matter. A low Ω is required to reconcile the high gas mass fraction of < f gas > r500 = 0.16 ± 0.02, whith the baryon fraction predicted by primordial nucleosynthesis. To test whether there is any evolution of the gas mass fraction we plot this quantity versus redshift (Fig. 5) including the analysis from Schindler (1999) for distant clusters (0.3<z<1.0). In this figure, as well as in all other following figures, each cluster is plotted with a different symbol (see Fig. 6). We include in the comparison another distant clusters: RX−J0849+4452 (z=1.26), RBS797 (z=0.354), A1835 (z=0.25) and RX−J1120.1+4318 (z=0.6) where we calculate the total and gas mass using the published parameters from the Chandra and XMM data analysis (see Table 2). In the gas mass fraction we see no clear trend with redshift. The mean value in our sample is < f gas >= 0.16 ± 0.02 which is in agreement with other nearby samples, Mohr et al. (1999): f gas =0.21, Ettori & Fabian (1999) : Fig. 7. Gas mass versus total mass. In solid line is shown the best fit for the nearby sample. For comparison, in dotted line is plotted the best fit obtained for the distant sample by Schindler (1999) f gas =0.17 and also in agreement with the value for distant sample by Schindler (1999), f gas =0.18. Therefore we conclude that we do not see evolution in the gas mass fraction out to a redshift of unity. In contrast to this result Ettori & Fabian (1999) found indications for a decrease of f gas with increasing redshift in a nearby sample. Matsumoto et al. (2000) found no clear evidence of evolution of f gas for the clusters at z < 1.0. In Fig. 7 we compare the gas mass with the total cluster mass. A trend of an increasing gas mass with total mass is visible in our sample. This trend was also found for distant clusters by Schindler (1999) (dotted line in Fig. 7). A linear fit regression yields M gas (r 500 ) = 0.13M tot (r 500 ) (1.09±0.12) . M gas (r 500 ) and M tot (r 500 ) are in units of 10 14 M ⊙ . This trend is in agreement with the analysis by Arnaud & Evrard (1999). Because the exponent in Eq. 
(8) is close to unity, we find no clear dependence of the gas mass fraction on the total mass in our sample (see Fig. 8 where the nearby and distant sample are shown). Several authors have also related measured values of f gas and M tot or T gas . Up to now the results are discordant concerning the form of this relation. For example, David et al. (1995) found indications for an increase of f gas with increasing T gas . Allen & Fabian (1997) found in a sample of X-ray luminous clusters indications for a decrease of f gas with increasing T gas . Ettori & Fabian (1999) found no dependence of f gas on M tot for high luminosity clusters. As mentioned before the gas mass fraction is not constant with radius. In order to plot this increase against other cluster properties, we compare the gas mass fraction at radius r 500 with the gas mass fraction at 0.5 × r 500 in each cluster. The mean gas mass fraction at 0.5 × r 500 is 0.138 ± 0.016, i.e. smaller than the mean of 0.16 ± 0.02 at r 500 . The ratio of these fractions is a measure for the extent of the gas distribution relative to the dark matter extent: E = fgas(r500) fgas(r500/2) . The error in the relative gas extent E is not easy to determine. We estimate its uncertainty by testing how much this value changes when: • we consider the uncertainty in the gas temperature • the fit parameters change Since the relative gas extent is calculated as the ratio of gas mass fractions at different radii, the errors coming from the uncertainty in the temperature are cancelled out. The relative gas extent E changes significantly when the fit parameters (β and r c ) are varied. This error coming from the uncertainties in the fit parameters is included in the following graphs. For all the clusters in our sample the relative gas extent E is larger than 1 (see Fig. 9). This means that in general the gas distribution is more extended than the dark matter, which is in agreement with other results for nearby cluster samples by David et al. (1995), Jones & Fig. 9. Ratio of gas mass fraction at r 500 and r 500 /2 as a measure for the relative extent of the gas distribution versus total cluster mass at r 500 . For clarity, clusters with the largest error bars from Schindler (1999) are not shown. Forman (1999), Ettori & Fabian (1999) and for distant samples by Schindler (1999) and Tsuru et al. (1999). The distant sample by Schindler (1999) and the other Chandra and XMM observations are also shown in Fig. 9 for comparison. In the nearby sample (4.7 < T < 9.4 keV) this relative gas extent E shows a mild dependence on the total cluster mass (see Fig. 9) similar to the result by Schindler (1999). Clusters with larger masses tend to have smaller relative gas extents (similar dependence confirmed by Reiprich & Böhringer 1999 although they used different radii). The differences in the gas and dark matter distribution cannot be explained by purely gravitational energy coming from the collapse of the cluster. Additional energy input is necessary to explain it, e.g. from supernova-driven galactic winds. It might be that this additional energy affects low mass clusters more than massive clusters, so that a massive cluster can maintain a ratio E ≈ 1, while in the smaller clusters the gas is more extended. Entropy studies by Ponman et al. (1999) of cool clusters (T < 4keV) observed with ROSAT and GINGA suggested that for these systems, (pre-)heating processes play an important role in cluster formation. Equation (2) shows that for a given radius M tot ∝ T × β. 
If there is a temperature rise due to preheating processes, a cluster with a certain M tot should have shallower gas density profile or more extended gas distribution (β value smaller). Then if that heating is more effective Fig. 10. β-temperature relation. β is the parameter obtained when fitting with a β−model the X-ray surface brightness profile. There is a trend to find larger β values for larger temperatures. in cooler cluster, i.e. less massive clusters, one should expect an anticorrelation between total mass and gas extent and lower values of β in cooler systems. This seems to be consistent with observations (e.g. Mohr & Evrard 1997, Ponman et al. 1999, Nevalainen et al. 2000. Our sample also exhibits this behavior, see Fig. 10. Mass-temperature relation Accurate measurements of the cluster total mass are possible only for a limited number of clusters. For this reason there are currently not enough data available for a direct derivation of the mass function, which is crucial for the determinations of the cosmological parameters using the cluster abundances at different redshifts. A more practical way of determining the mass function is to observe the distribution of readily available average cluster gas temperatures and to convert these to masses, taking advantage of the tight correlation between mass and temperature predicted by hydrodynamic cluster formation simulations (e.g. Evrard et al. 1996). Therefore, a well-established M tot − T relation can be used as a powerful cluster mass estimator. Furthermore the M tot − T relation is also interesting in itself, because deviations from the predicted self-similar scaling of M ∝ T 3/2 would indicate that more physical processes are at play than gravity alone. Assuming self-similarity and a velocity dispersion proportional to the X-ray temperature, the virial theorem provides a relation between total mass, radius and X-ray temperature: M tot (r 500 )/r 500 ∝ T . Equivalently, r 500 can be expressed by the definition of the overdensity r 500 ∝ M tot (r 500 ) 1/3 /(1 + z) yielding the relation M tot (r 500 ) ∝ (T /(1 + z)) 3/2 . We see this correlation in our data (see Fig. 11a). A fit taking into account the errors in the temperature and the error in the mass yields the following relation M tot (r 500 ) = 0.36 T 1+z (1. 7±0.2) shown in Fig. 11a as solid line. M tot (r 500 ) is in units of 10 14 M ⊙ and T in units of keV. The slope is greater than the virial value of 3/2 but consistent within the errors. If we do the same fit using the masses calculated with the temperature profiles the slope found is 1.4 ± 0.2. Other observational analyses, with only hightemperature (kT > 4 − 5keV) clusters, have achieved results consistent with the theoretical prediction (Hjorth et al. 1998, Neumann & Arnaud 1999. However many studies (e.g. Ponman et al. 1999, Horner et al. 1999, Nevalainen et al. 2000, Finoguenov et al. 2001) have shown that the influence of energy feed-back into ICM in low-temperature clusters/groups can become more significant than that in high-temperature systems. Consequently, the self-similarity may break at the lowtemperature end. If supernovae release a similar amount of energy per unit gas mass in hot and cool clusters, the coolest clusters would be affected more significantly and exhibit a stronger shift to higher temperatures in the M-T diagram than the hotter clusters. This will steepen the M-T relation. Furthermore, we test the relation between the gas mass and gas mass fraction and the temperature. 
As expected from the gas mass−total mass relation, there is also a relation between gas mass and temperature (see Fig. 11b). A linear regression fit yields M gas (r 500 ) = 0.04T (1.80±0.16) with M gas (r 500 ) in units of 10 14 M ⊙ and T in keV. This correlation is in agreement with other results in nearby clusters (Reiprich & Böhringer 1999;. For comparison, Reiprich & Böhringer (1999) find an exponent of 2.08 and Schindler (1999) a larger exponent 4.1±1.5 for distant clusters. As expected from the non-correlation of the gas mass fraction with the total mass, we find also no correlation between the gas mass fraction and the temperature (see Fig. 11c). This result is in good agreement with Mohr et al. (1999). They find a mild dependence comparing low temperature clusters (T < 5keV) with high temperature clusters (T > 5keV). For the high temperature clusters alone, in which category all our clusters fall, they find no dependence. We find an interesting correlation between the relative gas extent and the temperature (see Fig. 11d), which is of course related to the dependence of the relative gas extent on the total mass, shown above. The relative gas extent tends to be slightly larger in lower temperature clusters. Summary We have analysed a sample of ten nearby clusters of galaxies using the X-ray data provided by the ROSAT and ASCA satellites. For this sample physical quantities like gas mass, total mass, gas mass fraction and relative gas extent have been derived. Correlations between the above quantities have been studied and our findings are • The gas mass fraction increases with radius for all our clusters implying that gas is more extended than dark matter, confirming previous results (David et al. 1995, Ettori & Fabian 1999. This behaviour is more pronounced when temperature profiles are taking into account in the mass analysis. • Within r 500 , the mean gas mass fraction obtained is (0.16 ± 0.02)h −3/2 50 . We see no trend in the gas mass fraction with redshift. • The gas extent relative to the dark matter distribution shows a mild dependence on the total mass and gas temperature. Clusters with larger masses have smaller relative gas extents, as we would expect if nongravitational processes are important in cluster forma-tion. Hints for this kind of behaviour have been found previously in distant clusters. These new results confirm this trend. • Studying the mass-temperature relation we find a slope slightly steeper compared with the theoretical value of 3/2 although consistent within the errors. The selfsimilar slope 3/2 is found when using temperature gradient analysis. Other observational analyses, with only high-temperature clusters, have achieved results consistent with the theoretical prediction (Hjorth et al. 1998, Neumann & Arnaud 1999, Nevalainen et al. 2000.
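The scaling relations reported above (e.g. M_gas ∝ M_tot^1.09 and the M_tot-T relation) are power-law fits, which in the simplest case reduce to straight-line fits in log-log space. The sketch below shows that unweighted version on synthetic data; the published fits additionally take the measurement errors into account, which is omitted here.

```python
# Sketch of a power-law fit, M_gas = A * M_tot**s, as a straight line in
# log-log space. The ten (M_tot, M_gas) pairs are synthetic stand-ins for the
# cluster sample; error weighting, used for the published fits, is omitted.
import numpy as np

rng = np.random.default_rng(1)
m_tot = np.linspace(3.0, 12.0, 10)                       # in 1e14 Msun, synthetic
m_gas = 0.13 * m_tot**1.09 * rng.lognormal(0.0, 0.05, 10)

slope, intercept = np.polyfit(np.log10(m_tot), np.log10(m_gas), 1)
print(f"M_gas ≈ {10**intercept:.2f} * M_tot^{slope:.2f}")  # recovers ~0.13 and ~1.09
```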
2017-09-08T18:17:19.363Z
2003-03-19T00:00:00.000
{ "year": 2003, "sha1": "9826685a600a1939a9913689b83fc9ccd3321111", "oa_license": null, "oa_url": "https://www.aanda.org/articles/aa/pdf/2003/20/aah4160.pdf", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "9826685a600a1939a9913689b83fc9ccd3321111", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
253244521
pes2o/s2orc
v3-fos-license
Moduli spaces of flat Riemannian metrics on 3- and 4-dimensional closed manifolds We describe the topology of the moduli spaces of flat metrics for all the 3-dimensional closed manifolds. We give an algebraic description of the moduli spaces for the 4-dimensional closed flat manifolds with a single generator in their holonomy and, in some cases, also study their topology. . We are using the notation for the flat manifolds given in [11], later in preliminaries we will list them explicitly. [15]. The organization of this paper is as follows. We start in Section 2 with some preliminaries, in Section 3 we explain the descriptions of the moduli spaces of flat metrics for dimension 3 and prove theorem 1.3, and finally in Section 4 we study their topology for some of the cases. Acknowledgements. The results in this paper are part of my Ph.D. thesis [6] developed under the supervision of Wilderich Tuschmann. I thank Prof. Tuschmann for presenting me this interesting research line and for his guidance. I thank Oscar Palmas and Ingrid Membrillo for comments on the first versions of the present manuscript, and for useful conversations. This project was supported by the DFG, Research Training Group 2229. Preliminaries Here we fix some notation. The group of affine transformations of R n , denoted by Aff(n), has the structure of a semidirect product Aff(n) = GL(n, R) R n . The group of isometries of R n denoted by Iso(n), also have the structure of a semidirect product Iso(n) = O(n) R n . We work with the following type of groups: Definition 2.1. A Bieberbach group π is a discrete subgroup of Iso(n) that is torsion-free and such that R n /π is compact. We have that (R n /π, σ), with σ the metric induced from the usual metric of R n , is a closed flat manifold. On the other hand, let M be a closed manifold with a flat metric g, then its universal cover with the metric induced from g is isometric to R n with the usual metric. In other words, R n with the usual metric is a Riemannian covering of (M, g) and we consider its group of deck transformations, denoted by π. Then (M, g) is isometric to (R n /π, σ), where π is a Bieberbach group. Therefore a closed flat manifold is represented by its Bieberbach group and the Bieberbach theorems describe important properties about them. One of these properties is that two closed flat manifolds with isomorphic fundamental groups, are affinely equivalent. See [3] or [17]. Consider the projection homomorphism Definition 2.2. Let π be a Bieberbach group. The holonomy of π is the subgroup of GL(n, R) given by H π := τ (π). The kernel of τ restricted to π is denoted by L π . It is the maximal normal abelian subgroup of π, which consists of all the translations (Id, v) of π. We have a short exact sequence (1) 1 → L π → π → H π → 1. We fix some notation in order to give the classification of the Bieberbach groups in the dimensions consider here. We shall denote by e 1 = (1, 0, 0), e 2 = (0, 1, 0) and e 3 = (0, 0, 1) the vectors of the standard basis of R 3 . The basic translations of R 3 are denoted by t i = (Id, e i ). Also, the rotation matrix by an angle θ ∈ [0, 2π] is denoted as , and the reflection as E 0 = 1 0 0 −1 . We now enumerate the Bieberbach groups, along with their holonomy and their generators. [10]). There are only 10 Bieberbach groups in dimension 3 up to affine change of coordinates. The first six of them give orientable manifolds and the last four give non-orientable manifolds. 1. We will use analogous notations for dimension 4. Theorem 2.4 ( [11]). 
Theorem 2.4 ([11]). There are 18 Bieberbach groups in dimension 4 up to affine change of coordinates which have only one generator in their holonomy. The first eight of them give orientable manifolds and the last ten give non-orientable manifolds.

Lemma 2.5. For some of the Bieberbach groups in Theorem 2.4, we conjugate them in order to get the representations used below.

Proof. We change the representation by conjugating with the affine transformation $(P, 0)$, where $P$ is the corresponding matrix of change of coordinates in each case.

The next result gives a description of the moduli space of flat metrics on a manifold depending only on its Bieberbach group $\pi$. We denote the normalizer of $\pi$ in $\mathrm{Aff}(n)$ by $N_{\mathrm{Aff}(n)}(\pi) = \{\gamma \in \mathrm{Aff}(n) \mid \gamma\pi\gamma^{-1} = \pi\}$.

Theorem 2.6. The double coset space $\mathrm{Iso}(n)\backslash \mathrm{Aff}(n)/N_{\mathrm{Aff}(n)}(\pi)$ is in bijective correspondence with the set of all isometry classes of Riemannian manifolds that are affinely equivalent to $M = \mathbb{R}^n/\pi$. The double coset $\mathrm{Iso}(n)\cdot\gamma\cdot N_{\mathrm{Aff}(n)}(\pi)$ corresponds to the isometry class of $\mathbb{R}^n/(\gamma\pi\gamma^{-1})$.

Actually, there is a homeomorphism between this double quotient and the moduli space given in Definition 1.1; details can be found in [6]. The translation part does not bring any additional information to the expression of $\mathcal{M}_{flat}(M)$. Therefore, the moduli space of flat metrics of $M = \mathbb{R}^n/\pi$ is

$\mathcal{M}_{flat}(M) = O(n)\backslash C_\pi / N_\pi$.

Notations. In the next sections we use the following notations. For two subgroups $H_1$ and $H_2$ of a given group, $N_{H_1}(H_2) = \{h \in H_1 \mid h H_2 h^{-1} = H_2\}$ denotes the normalizer of $H_2$ in $H_1$. The matrix part of the normalizer is $N_\pi = \tau(N_{\mathrm{Aff}(n)}(\pi))$. The lattice of $\pi$, $L_\pi$, consists of all the translations $(\mathrm{Id}, v)$ of $\pi$. The standard lattice is $\mathbb{Z}^n = \langle (\mathrm{Id}, e_i) \mid \{e_1, \ldots, e_n\}$ the standard basis of $\mathbb{R}^n\rangle$.

Algebraic description

In this section we give information on the moduli spaces $\mathcal{M}_{flat}(M)$ for the 3-dimensional closed manifolds and compute them for the family of 4-dimensional closed manifolds. First we need the representation of the Bieberbach group given in Theorems 2.3 and 2.4. Then we use Theorem 2.6 in order to describe the moduli space of flat metrics, which we expressed in the previous section as $O(n)\backslash C_\pi / N_\pi$. Let us analyse the structure of the cone space and the matrix part of the normalizer.

3.1. The cone space. The cone space $C_\pi$ is easy to analyze since it only depends on the holonomy. To describe the space $C_\pi$, one has to solve the equation

(2) $(XAX^{-1})^t (XAX^{-1}) = \mathrm{Id}$ for every $A \in H_\pi$,

that is, $XAX^{-1} \in O(n)$ for every element of the holonomy. For the descriptions of $C_\pi$ in dimension 3 we refer to [8] and [9]. We will give the description for the 4-dimensional closed flat manifolds with one generator in their holonomy.

Proposition 3.1. The possible spaces $C_\pi$ for the 4-dimensional closed flat manifolds with a single generator in their holonomy are the following: for trivial holonomy ($T^4$), the space is $C_\pi = \mathrm{GL}(4, \mathbb{R})$; for $H_\pi = \mathbb{Z}_2$ and for cyclic holonomy of order bigger than 2, the spaces are as derived case by case in the proof below.

Proof. We consider each case separately.

Case 1. When the holonomy is trivial, we have the result in the Corollary of Theorem 1 in [18].

Case 2. When $H_\pi$ is generated by $A = \begin{pmatrix} -\mathrm{Id} & 0 \\ 0 & \mathrm{Id} \end{pmatrix}$ or its negative $-A$, which is the case of $O^4_2$ and $O^4_3$, we get the corresponding description of $C_\pi$ from equation (2). When the generator of $H_\pi$ is $A = \begin{pmatrix} \mathrm{Id} & 0 \\ 0 & -1 \end{pmatrix}$ or its negative $-A$, which is the case of $N^4_1$, $N^4_2$, and $N^4_{14}$, we proceed in the same way.

Case 3. For cyclic holonomy with order bigger than 2 we use the property that if $A \in H_\pi$ and $X \in \mathrm{GL}(n, \mathbb{R})$ are such that $XAX^{-1} \in O(n)$, then the corresponding constraints on the columns of $X$ follow. There are several sub-cases, according to the form of the generator of $H_\pi$: generators built from rotation blocks, generators involving the block $-E_0$, and the remaining one. In the last sub-case the vectors $x_1$ and $x_2$ have the same length and the angle between them should be smaller than $\pi$. When the holonomy is generated by the corresponding matrix, which is the case of $N^4_{21}$, we find that the vectors $x_2$, $x_3$ and $x_4$ have the same length and form the same angle between them. For this situation we have that the angle is $\theta \in (0, \frac{2\pi}{3})$, since having angle $\frac{2\pi}{3}$ would mean that the vectors are coplanar (and no longer linearly independent). With this information we can conclude the description of $C_\pi$.
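Condition (2) is straightforward to test numerically for a given holonomy generator. The following sketch is illustrative only; the generator $A$ below is an assumed representative of the $\mathbb{Z}_2$ case, written in the block form used in the proof.

```python
import numpy as np

def in_cone(X, A, tol=1e-9):
    """Defining condition of C_pi from (2): X A X^{-1} must be orthogonal."""
    M = X @ A @ np.linalg.inv(X)
    return np.allclose(M.T @ M, np.eye(len(M)), atol=tol)

A = np.diag([-1.0, -1.0, 1.0, 1.0])   # assumed Z_2 generator, block form (-Id, Id)

rng = np.random.default_rng(1)
# A block-diagonal X commutes with A, so X A X^{-1} = A is orthogonal:
X_block = np.block([[rng.standard_normal((2, 2)), np.zeros((2, 2))],
                    [np.zeros((2, 2)), rng.standard_normal((2, 2))]])
print(in_cone(X_block, A))            # True (X is almost surely invertible)

X_generic = rng.standard_normal((4, 4))
print(in_cone(X_generic, A))          # typically False
```

Running such checks over parametrized families of $X$ is a quick way to cross-check the case-by-case descriptions of $C_\pi$.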
3.2. The matrix part of the normalizer. The description of the normalizer depends not only on the holonomy but also on the affine structure, i.e., on how the translations are acting. To get easier computations in some cases we change the representation by conjugating with a suitable affine transformation; these are the cases where the representation is changed as in Lemma 2.5. We observe that if $\pi' = \xi\pi\xi^{-1}$, where $\xi \in \mathrm{Aff}(n)$ and $\pi$ is a Bieberbach group, then the normalizer behaves as follows: $N_{\mathrm{Aff}(n)}(\pi') = \xi N_{\mathrm{Aff}(n)}(\pi)\xi^{-1}$ ([10, page 1069]).

By (1), we always have a lattice $L_\pi$ inside our Bieberbach group $\pi$, and $\tau(N_{\mathrm{Aff}(n)}(L_\pi))$ is $\mathrm{GL}(n, \mathbb{Z})$, or a conjugate of $\mathrm{GL}(n, \mathbb{Z})$ by the matrix of change of coordinates when the lattice $L_\pi$ is not $\mathbb{Z}^n$. To keep our notation simple, we will assume that the lattice is the standard one for the next explanation. Then we have $N_\pi \subseteq N_{\mathrm{GL}(n,\mathbb{Z})}(H_\pi)$. We may have the following two situations: $N_\pi$ is not always all of $N_{\mathrm{GL}(n,\mathbb{Z})}(H_\pi)$, and the normalizer is not always a semidirect product. Having the following property of $\pi$ will make $N_\pi$ easier to describe.

Definition 3.2. Let $\pi$ be a Bieberbach group with non-trivial holonomy. We say that the group has translation part not involved when, for every $X \in N_{\mathrm{GL}(n,\mathbb{Z})}(H_\pi)$ and each generator $\alpha = (A, v)$ of $\pi$ with $A \neq \mathrm{Id}$, there is $(A, u) \in \pi$ with $u_j = \pm v_j$ for $j$ in some subset $I \subseteq \{1, \ldots, n\}$ and $u_j = v_j$ for $j \notin I$. Otherwise, we say it has translation part involved.

For the Bieberbach groups we are studying, if the group has translation part not involved, standard lattice, and for each generator $\alpha = (A, v)$ we also have $(A, -v) \in \pi$, then $N_\pi = N_{\mathrm{GL}(n,\mathbb{Z})}(H_\pi)$. When we do not have the properties mentioned before, which is most of the cases, we can still see whether the normalizer has the structure of a semidirect product using the following lemma. Since $G$ has the product from $\mathrm{Aff}(n)$, the lemma is actually telling us when $G$ can be split into a product $M \times T$; then, for proving the lemma, one can use the splitting theorem.

In general, we have to look for matrices in $N_{\mathrm{GL}(n,\mathbb{Z})}(H_\pi)$ which preserve the translations of any generator $\alpha = (A, v) \in \pi$ with $A \neq \mathrm{Id}$, i.e., all the possible options for a vector $u \in \mathbb{R}^n$ such that $(A, u) \in \pi$. We proceed with the description of $N_\pi$ for the 3-dimensional Bieberbach groups.

Proposition 3.4 ([10]). Let $\pi$ be one of the Bieberbach groups for the 3-dimensional closed flat manifolds. Then the matrix part of the normalizer of $\pi$ in $\mathrm{Aff}(3)$, $N_\pi$, is as follows (case by case).

Although the above result was proved in [10, Lemma 3.3], we point out and correct a mistake in the cited reference in the calculation of $N_\pi$ for the group $B_1$. The group $B_1$ has standard lattice and its normalizer has the structure of a semidirect product. Now, we have to be careful with the translation part of the generator; this means we have to restrict to matrices in $N_{\mathrm{GL}(3,\mathbb{Z})}(H_\pi)$ that preserve the corresponding lattice of the generator:

$\{(A, v) \mid v = \frac{2n_1+1}{2} e_1 + n_2 e_2 + n_3 e_3, \text{ with } n_1, n_2, n_3 \in \mathbb{Z}\}$.

We continue computing the matrix part of the normalizer for the 4-dimensional closed flat manifolds with one generator in their holonomy. We analyze separately the orientable and the non-orientable manifolds.
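The lattice-preservation condition used in the correction for $B_1$ can also be checked by brute force. In the sketch below (illustrative only; the $\mathbb{Z}_2$ holonomy generator and the bound on the matrix entries are assumptions, since the explicit representation of $B_1$ is not reproduced above), we enumerate small integer matrices, keep those in $N_{\mathrm{GL}(3,\mathbb{Z})}(H_\pi)$, and then filter the ones preserving the coset $\frac{1}{2}e_1 + \mathbb{Z}^3$ coming from the translation part of the generator.

```python
import itertools
import numpy as np

A = np.diag([1, -1, -1])          # assumed Z_2 holonomy generator
H = {tuple(np.eye(3, dtype=int).ravel()), tuple(A.ravel())}

def normalizes(X):
    Xinv = np.round(np.linalg.inv(X)).astype(int)   # exact for unimodular X
    return tuple((X @ A @ Xinv).ravel()) in H

def preserves_generator_lattice(X):
    c = X[:, 0]                   # image of e_1: need X(e_1/2) = e_1/2 mod Z^3
    return c[0] % 2 == 1 and c[1] % 2 == 0 and c[2] % 2 == 0

count_norm = count_lat = 0
for entries in itertools.product([-1, 0, 1], repeat=9):
    X = np.array(entries).reshape(3, 3)
    if round(np.linalg.det(X)) in (1, -1) and normalizes(X):
        count_norm += 1
        if preserves_generator_lattice(X):
            count_lat += 1
print(count_norm, count_lat)      # normalizing matrices vs lattice-preserving ones
```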
Let us introduce the following notation, which we will use in the coming two propositions: again we consider the rotation matrices $R_\theta$ and the reflection $E_0$ introduced in the preliminaries.

Proposition 3.5. The matrix part of the normalizer of $\pi$ in $\mathrm{Aff}(4)$ for the 4-dimensional orientable closed flat manifolds with a single generator in their holonomy is as follows: for $T^4$, $N_\pi = \mathrm{GL}(4, \mathbb{Z})$; for the remaining groups, $N_\pi$ is as computed case by case in the proof below.

Proof. In the case of $T^4$, the result follows from the Corollary of Theorem 1 in [18]; therefore we exclude the torus from our analysis.

The lattices of the Bieberbach groups $O^4_3$, $O^4_5$, and $O^4_7$ are not the standard ones. Then the group of matrices that normalizes the lattice is conjugate to $\mathrm{GL}(4, \mathbb{Z})$ by a matrix $Q \in \mathrm{GL}(4, \mathbb{R})$. For these cases $Q$ is computed, but fortunately the $X \in Q\,\mathrm{GL}(4, \mathbb{Z})\,Q^{-1}$ that satisfy the condition $XA = AX$ for the generator $A$ of the holonomy reduce to matrices in $\mathrm{GL}(4, \mathbb{Z})$. Then in all cases we can consider matrices in $\mathrm{GL}(4, \mathbb{Z})$.

We first find all the matrices $X \in \mathrm{GL}(4, \mathbb{Z})$ that normalize the holonomy $H_\pi$; in all cases the matrix must have a block form built from $2 \times 2$ blocks $X_1$ and $X_2$. It turns out that the translation part is involved for all the cases. Then the lattices of the generators of the holonomy have to be computed, and we have to search for matrices $X \in N_{\mathrm{GL}(4,\mathbb{Z})}(H_\pi)$ that preserve or switch the lattices.

For $O^4_2$, the matrix $X_2 \in \mathrm{GL}(2, \mathbb{Z})$ must preserve vectors of the corresponding form, $X_2(n_3, \frac{2n_4+1}{2})^t = (k_3, \frac{2k_4+1}{2})^t$ with $n_i, k_i \in \mathbb{Z}$ for $i = 3, 4$, similar to the case of $B_1$.

For cyclic holonomy of order $k$ bigger than 2, we have the following cases. For the ones with standard lattice, $O^4_4$, $O^4_6$ and $O^4_8$, we look for the matrices $X_1$ such that $X_1(n_1, \frac{kn_2+1}{k})^t = (k_1, \frac{kk_2+1}{k})^t$ or $X_1(n_1, \frac{kn_2+1}{k})^t = (k_1, \frac{kk_2+r}{k})^t$, with $n_i, k_i \in \mathbb{Z}$ for $i = 1, 2$, depending on whether we are fixing the generator $A$ or switching it to the generator $A^r$. The matrices $X_2$ are the same as in the cases of dimension 3 with their respective holonomy. We have more cases for the ones with non-standard lattice. Let us see this more closely.

For $O^4_5$ the lattices of the two generators are of the same shape; in both, the $e_4$-components of the translations involve the factor $\sqrt{3}$, with $n_i \in \mathbb{Z}$, $i = 1, 2, 3, 4$. We have the next three options:
1. $-n_3 + n_4 \in 3\mathbb{Z}$,
2. $-n_3 + n_4 \in 3\mathbb{Z} + 1$,
3. $-n_3 + n_4 \in 3\mathbb{Z} + 2$.
Looking at all combinations for sending the lattices, it is concluded that not all of them are possible, leading us to the structure of a semidirect product in the normalizer.

$O^4_3$ and $O^4_7$ are the only ones whose normalizers do not admit a structure of semidirect product. We consider each case separately. For $O^4_3$, the lattice of the generator gives two cases: $n_4$ odd or $n_4$ even. Looking at all possibilities for the translations of the generators, we obtain the description of $N_\pi$ in terms of a suitable affine element $\xi$.

For $O^4_7$, the lattices of the generators are as follows:

$\alpha L_\pi = \{(A, v) \mid v = \frac{2n_1+n_3+n_4}{2} e_1 + \frac{4n_2+2(n_3+n_4)+1}{4} e_2 - n_4 e_3 + n_3 e_4, \; n_i \in \mathbb{Z}, \; i = 1, 2, 3, 4\}$.

We will have two cases: $n_3 + n_4$ even or $n_3 + n_4$ odd. Looking at all possibilities for the translations of the generators, we obtain the corresponding description of $N_\pi$.
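The coset conditions appearing in the cyclic cases (matrices $X_1$ sending the translation pattern $(n_1, \frac{kn_2+1}{k})$ to $(k_1, \frac{kk_2+r}{k})$) can likewise be verified mechanically. The sketch below is a bounded search with illustrative parameters $k = 3$ and $r = 2$; the sample-point test only enforces the necessary congruences, and exact rational arithmetic avoids rounding issues.

```python
import itertools
from fractions import Fraction

k, r = 3, 2   # holonomy order k and power r (illustrative values)

def maps_coset(X, k, r):
    """Does X send sample points of Z x (Z + 1/k) into Z x (Z + r/k)?"""
    for n1 in range(-2, 3):
        for n2 in range(-2, 3):
            v = (Fraction(n1), n2 + Fraction(1, k))
            w = (X[0][0]*v[0] + X[0][1]*v[1], X[1][0]*v[0] + X[1][1]*v[1])
            if w[0].denominator != 1 or (w[1] - Fraction(r, k)).denominator != 1:
                return False
    return True

sols = []
for a, b, c, d in itertools.product(range(-2, 3), repeat=4):
    if abs(a*d - b*c) == 1 and maps_coset(((a, b), (c, d)), k, r):
        sols.append(((a, b), (c, d)))
print(len(sols), sols[:4])   # e.g. forces b = 0 mod k and d = r mod k
```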
Proposition 3.6. The matrix part of the normalizer of $\pi$ in $\mathrm{Aff}(4)$ for the 4-dimensional non-orientable closed flat manifolds with a single generator in their holonomy is as computed case by case in the proof below.

Proof. First, we explain the case of $N^4_1$. The group $N^4_1$ has standard lattice, translation part involved, and its normalizer has the structure of a semidirect product. The matrix that normalizes the holonomy has to be of the block form described above. The case of $N^4_2$ is similar to $N^4_1$, but its group has non-standard lattice. Then the form of the matrix $X$ is the same, but the lattice of the generator $\alpha$ is different:

$\alpha L_\pi = \{(A, v) \mid v = \frac{2n_1+1}{2} e_1 + n_2 e_2 + \frac{2n_3+n_4}{2} e_3 - \frac{n_4}{2} e_4, \; n_i \in \mathbb{Z}, \; i = 1, 2, 3, 4\}$,

with two cases: $n_4$ even or $n_4$ odd. This leads to elements in the normalizer that need a translation part different from zero; the whole normalizer group is then the corresponding extension.

The case of $N^4_{14}$ is simple because the form of the matrix $X$ is also as in $N^4_1$, but now the group $N^4_{14}$ has standard lattice, translation part not involved, and for the generator $\alpha$ we have $(A, -\frac{1}{2}e_4) \in \pi$; then $N_\pi$ is the same as $N_{\mathrm{GL}(4,\mathbb{Z})}(H_\pi)$.

The remaining groups except $N^4_{18}$ are also simple to compute, since they have standard lattice, translation part not involved, and $N_{\mathrm{GL}(4,\mathbb{Z})}(H_\pi)$ is finite, which we computed using Mathematica. Then we just have to select the matrices that send the lattices of the generators correctly. The case of $N^4_{18}$ has the generator $(A, \frac{1}{4}(e_1+e_2) + \frac{1}{2}e_3)$; this means that the rotation part of the matrix affects the translation. Even though we change the representation to get standard lattice, the translations of the generators are a bit more complicated; that is why we have to check, for each $P \in N_{\mathrm{GL}(4,\mathbb{Z})}(H_\pi)$, whether there is an $x \in \mathbb{R}^4$ such that $(P, x)$ normalizes $\pi$.

Moduli spaces. Having the descriptions of $C_\pi$ and $N_\pi$, we can describe the moduli spaces of flat metrics, which we need in order to study their topology. We proceed to describe the moduli spaces of flat metrics for the 4-dimensional closed flat manifolds with one generator in their holonomy.

Proof of Theorem 1.3. As we have seen, $\mathcal{M}_{flat} = O(4)\backslash C_\pi/N_\pi$, and we have already described the spaces $C_\pi$ (Proposition 3.1) and $N_\pi$ (Propositions 3.5 and 3.6), so we just have to put all the information together. For the orientable manifolds of cyclic holonomy of order greater than 2, their double quotient has a product-like form in which $\Gamma_1, \Gamma_2 \subset \mathrm{GL}(2, \mathbb{Z})$ and $R, A \in O(2)$ are the respective matrices that appear in $N_\pi$ for each case. Observe that $\begin{pmatrix} C & 0 \\ 0 & \mathrm{Id}_2 \end{pmatrix} \notin N_\pi$ for $C \in \Gamma_2$; this means that $N_\pi$ cannot be separated as the product of the groups. But we can still factorize the double quotient: this is because the second part of the space $C_\pi$ is $\mathbb{R}_+ \times O(2)$, and the second factor of the group $N_\pi$, $\langle R, A\rangle$, is finite and generated by orthogonal matrices. Then we separate the double quotient into two factors and reduce the second factor as in Theorem 3.7.

For the non-orientable manifolds with cyclic holonomy of order greater than 2, we can reduce the double quotient because the normalizer is a subgroup of $O(4)$ and the cone space $C_\pi$ is equal to orthogonal matrices times the positive real numbers. Let us now turn to the topology.

Topological description

In this section we study the topology of the moduli space $\mathcal{M}_{flat}(M)$ for closed manifolds in dimension 3 and some cases in dimension 4. This is related to the study of the action of subgroups of $\mathrm{SL}(2, \mathbb{Z})$ on the hyperbolic plane. $\mathcal{M}_{flat}(M)$ can also be seen as a quotient of the Teichmüller space by the group $N_\pi$. In [2], Bettiol, Derdzinski and Piccione studied the Teichmüller space of flat manifolds, proving that it is always a Euclidean space. Since $N_{L_\pi}$ is a conjugate of $\mathrm{GL}(n, \mathbb{Z})$ inside $\mathrm{GL}(n, \mathbb{R})$, hence discrete, and $N_\pi \subset N_{L_\pi}$, the group $N_\pi$ is discrete. Thus the Teichmüller space and $\mathcal{M}_{flat}(M)$ are orbifolds of the same dimension. Even though $N_\pi$ is a discrete group acting on a Euclidean space, it turns out that $\mathcal{M}_{flat}(M)$ can have interesting topology.
The Teichmüller space of flat metrics and $\mathcal{M}_{flat}(M)$ of the 2-torus are very well understood; see [5]. In the cited reference a homeomorphism identifying the Teichmüller space of the 2-torus with the hyperbolic plane is exhibited. Since $\mathcal{M}_{flat}(T^2) = O(2)\backslash \mathrm{GL}(2, \mathbb{R})/\mathrm{GL}(2, \mathbb{Z})$, we still need to see what happens with the action of $\mathrm{GL}(2, \mathbb{Z})$. Observe that we are quotienting out the orientation-reversing matrices; therefore we just have to consider the group $\mathrm{SL}(2, \mathbb{Z})$ acting on the previous space. The action of $\mathrm{SL}(2, \mathbb{Z})$ on $\mathbb{H}^2$ is via Möbius transformations, and we can even compute the fundamental domain to obtain the resulting description of $\mathcal{M}_{flat}(T^2)$.

To study the topology of the double quotient of some of our flat manifolds we have to compute the fundamental domain of the action of a subgroup of $\mathrm{SL}(2, \mathbb{Z})$ on the hyperbolic plane. We use the fact that $\mathrm{SL}(2, \mathbb{Z})$ has two generators $S = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$ and $T = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$, where the map $S$ is an inversion together with a reflection and the map $T$ is just a translation. For general information about this see [1], [12] or [13].

The algorithm to compute the fundamental domain of a subgroup $\Gamma$ of $\mathrm{SL}(2, \mathbb{Z})$ on $\mathbb{H}^2$, deduced from Proposition 2.16 in [12], is:
(1) Compute the index of $\Gamma$ in $\mathrm{SL}(2, \mathbb{Z})$.
(2) Express a set of coset representatives as words in the generators $S$ and $T$.
(3) Apply these representatives to the standard fundamental domain of $\mathrm{SL}(2, \mathbb{Z})$ on $\mathbb{H}^2$.

For $\Gamma_0(2)$, it is enough to see what the two transformations $S$ and $T$ do to the fundamental domain of $\mathrm{SL}(2, \mathbb{Z})$ on $\mathbb{H}^2$; then we make the corresponding compositions to obtain the fundamental domain shown in Figure 1.

For $B_2$, the moduli space is a double quotient by $\Gamma(2)\cdot Y^+$, where $\Gamma(2)\cdot Y^+$ are the matrices in $\Gamma(2)\cdot Y$ with positive determinant. We compute the fundamental domain of $\Gamma(2)\cdot Y^+$ on $\mathbb{H}^2$:
1. With a similar procedure as in the case of $\Gamma_0(2)$, we obtain that $[\mathrm{SL}(2, \mathbb{Z}) : \Gamma(2)\cdot Y^+] = 3$.
2. We express the coset representatives in terms of the generators $T$ and $S$.
3. Since the representatives are expressed in terms of the generators $T$ and $S$, we can apply them easily to the fundamental domain of $\mathrm{SL}(2, \mathbb{Z})$. In this way we obtain the fundamental domain for $\Gamma(2)\cdot Y^+$ on $\mathbb{H}^2$, as shown in Figure 2.

We notice that the fundamental domain of $\Gamma(2)\cdot Y^+$ is quite similar to the one of $\Gamma_0(2)^+$, and it is also homeomorphic to a cylinder. Therefore the moduli space of flat metrics for $B_2$ is the quotient of this domain by the border identifications. The borders of the fundamental domain are identified by $T^2$, $\begin{pmatrix} 1 & 0 \\ -2 & 1 \end{pmatrix}$, and $\begin{pmatrix} -3 & 2 \\ -2 & 1 \end{pmatrix} \in \Gamma(2)^+$. Making the border identifications, as shown in Figure 4, we obtain an orbifold which is homeomorphic to a 3-punctured sphere.
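Step (1) of the algorithm, computing the index, can be automated by enumerating right cosets under multiplication by the generators $S$ and $T$. The sketch below does this for $\Gamma_0(2)$ (membership test: lower-left entry even); it is a generic illustration of the procedure, not the computation used for $\Gamma(2)\cdot Y^+$, whose membership test would have to be adapted.

```python
import numpy as np

S = np.array([[0, -1], [1, 0]])
T = np.array([[1, 1], [0, 1]])
Tinv = np.array([[1, -1], [0, 1]])

def in_gamma0_2(M):
    # Gamma_0(2): integer matrices of determinant 1 with even lower-left entry
    return M[1, 0] % 2 == 0

def same_coset(M, N):
    # Gamma_0(2) M = Gamma_0(2) N  iff  N M^{-1} lies in Gamma_0(2)
    Minv = np.round(np.linalg.inv(M)).astype(int)   # exact: det M = 1
    return in_gamma0_2(N @ Minv)

reps = [np.eye(2, dtype=int)]
frontier = list(reps)
while frontier:
    nxt = []
    for M in frontier:
        for g in (S, T, Tinv):
            N = M @ g
            if not any(same_coset(R, N) for R in reps):
                reps.append(N)
                nxt.append(N)
    frontier = nxt

print("index =", len(reps))   # 3, matching the well-known index of Gamma_0(2)
```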
The Interruptible Load Control Strategy of Distribution Network in Integrated Energy Service System

In the integrated energy service system, the distribution network must settle its own power-balance issue. Opening electricity retail services to society and encouraging interruptible load to participate in ancillary services has become a development trend of market competition for grid corporations. A corresponding control strategy is therefore required to guide the scheduling of interruptible loads. In this paper, a control strategy using the discrete Fourier transform is proposed in combination with the requirements of the electricity market. Under the constraint of the new evaluation criteria, different types of residential interruptible loads are dispatched on demand in different periods. Reducing the root mean square error (RMSE) of the feeder's daily forecast improves the power-load control capability of the distribution network as well as the quality of power supply.

Introduction

There are many interruptible loads (IL) [1] in the distribution network, such as air conditioners, refrigerators and electric vehicles. From the perspective of system operation, these IL are important power-balance resources that can smooth the load curve of the distribution network. In the traditional distribution network, planning and design neither considered how adjustable load could play a role nor provided the technical means for it, so the load only passively absorbed power from the grid according to its own needs. Now, users can voluntarily sign power-supply guarantee agreements, IL agreements and other contracts with distribution network managers based on their own load characteristics, stipulating their rights and obligations in ancillary services, bearing the necessary ancillary-service costs, or receiving corresponding economic compensation according to their contributions. At present, since the power reserve service market is not yet fully open in China, many issues such as IL scheduling control are still worth discussing.

In [2] it is argued that the peak load of the power grid can be restrained by aggregating the air-conditioning load alone; a double-layer optimal scheduling model for air conditioning is therefore proposed to control the start and stop of air conditioners so as to adjust the power while keeping the room temperature in a comfortable range. In [3], an air-conditioning cluster control scheme is established based on priority and cyclic scheduling algorithms. Based on random sampling theory, an instruction algorithm for group tracking of active power control of air-conditioning load is proposed in [4]. In [5], controlling multiple residential loads is considered an important measure for implementing demand response in the distribution network, and the control model is designed based on the actual power shortage and the physical characteristics of different IL. The fuzzy neural network designed in [6] can learn the electricity habits of smart home appliances in advance and then respond automatically to demand. In [7], the dynamic priority of household appliances is calculated online according to their real-time state, and load control decisions are carried out in priority order. In [8], the electrical appliances and key devices in the home are abstracted mathematically, and a demand-response control architecture model is constructed in the home-area network.
In [9], a household energy management system is established to obtain an IL optimization model. The above research analyzes IL control in the distribution network from multiple angles and provides constructive opinions, but the following questions still need consideration:
1) reasonable classification and time-sharing scheduling of IL in the distribution network can improve utilization efficiency and accurately track demand changes;
2) while meeting the requirements of distribution network operation, the number of IL control actions should be as small as possible to reduce the impact on users' electricity consumption;
3) the IL control strategy needs a set of standards that meet the actual needs of the power market to guide IL scheduling control;
4) the IL control strategy should be as simple and effective as possible for practical application.

This paper focuses on IL control under the premise that grid companies have signed interruption contracts with enough users. According to the requirements for the distribution network to guide IL in providing ancillary services in the power-market environment, an IL control strategy using the discrete Fourier transform (DFT) is proposed. Based on the regulation response time, the strategy classifies different IL by frequency band. According to the deviation between the load power and its planned value, it calculates the ancillary reserve regulation quantity in different frequency bands, dispatches the corresponding IL power to suppress fluctuations of the load power, reduces the prediction deviation of the feeder power in the distribution network at low regulation cost, and improves the service capability of the electricity-selling enterprises.

The demand for interruptible load control

The early implementation of interruptible load mainly targeted industrial users, but their production continuity is strong, machines are not easy to start and stop, and they are not well suited as distribution network reserve. With economic development and the improvement of residents' living standards, electricity consumption by the tertiary industry and residents has increased dramatically. Intelligent, controllable household appliances such as air conditioners are gradually becoming popular and can serve as the main regulation resources of distribution network IL. Classifying residents' IL clarifies the scheduling objects, integrates scattered resources, and provides a basis for future IL contract pricing. According to response time, electrical characteristics and other factors, this paper classifies residents' IL in Table 1. Among them, washing machines, dishwashers and disinfection cabinets are all classified as "other IL" because of the dispersion of their working times. Different IL have different scheduling mechanisms: the refrigerator is controlled by allowing its temperature to rise or by suspending refrigeration; the electric water heater is mainly controlled by slowing down heating; the air conditioner is controlled around a 26 °C setpoint, with 24-28 °C set as the temperature regulation range. In the integrated energy service system, the electricity-selling company can purchase electricity in many ways.
In the new era, the market expects electricity-selling companies to provide innovative services: in addition to the traditional electricity-selling business, they are encouraged to provide users with value-added services, including contract energy management, comprehensive energy conservation and energy-use consulting. From another point of view, both the seller and the user are participants in the market. The user is no longer simply a "buyer" or "consumer", but can form a sales-and-consumption alliance, jointly complete day-ahead market quotations, and provide ancillary services within the power-sales jurisdiction.

Due to weather changes and sudden faults, the actual power of the feeder load often deviates from the planned value, and excessive deviation reduces power quality and harms the operation of the distribution network. In the real-time market, in order to make up for deviations from the day-ahead market plan, the control center collects the status of contracted IL through a two-way information channel and determines the corresponding adjustable equipment. Different types of IL reserve resources provide the corresponding reserve ancillary services according to the needs of each time period. These behaviors cannot be separated from market-environment incentives. In the power market, encouraged and guided by policy, IL users sign relevant contracts with electricity-selling companies and establish mutual trust mechanisms; on this basis, effective control strategies are applied to increase or decrease controllable load or to shift consumption periods, which helps improve the comprehensive efficiency of reserve resource allocation on the generation and demand sides.

Discrete Fourier transform control strategy

The power flow of distribution network feeders changes rapidly and with large amplitude; it is a typical non-stationary, strongly stochastic process. The short-term power prediction accuracy of feeders is low, the power flow fluctuates strongly, and the power deviates from the original planned value. A suitable method is therefore needed to call IL with different response times so as to accurately keep the power within the allowed deviation range around the planned value. The DFT is a fast and effective method for this. The DFT transforms the fluctuating signal from the time domain to the frequency domain, where the frequency response and variation rules of the signal can be studied. The information represented by the frequency sequence is obtained by a finite computation, yielding the power deviation in the frequency domain. The power deviation over a specific period of time is taken as input, converted into the frequency domain using (1), and put in correspondence with the different IL capacities in Table 1, which gives the basis for each IL capacity demand in this period. Parseval's theorem states that the energy obtained in the time domain equals that obtained in the frequency domain. In Table 1, six kinds of IL are characterized by different frequency bands. By using the DFT to obtain the total deviation signal in the frequency domain over the period T and mapping it to the specific frequency band of each kind of IL, the energy of these bands can be obtained by Parseval's theorem.
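As an illustration of this band-energy bookkeeping, the Python sketch below applies a DFT to a simulated deviation record (15 minutes at 1 s sampling, the window used later in the deviation-compensation step) and uses Parseval's theorem to split the deviation energy across frequency bands. The band edges and class names are placeholders, since Table 1 is not reproduced here.

```python
import numpy as np

# Placeholder IL classes and frequency bands (Hz); Table 1 defines the real ones.
BANDS = {"air_conditioner": (0.0,   1/300),
         "water_heater":    (1/300, 1/60),
         "refrigerator":    (1/60,  1/10),
         "other_IL":        (1/10,  0.5)}

fs, T = 1.0, 900                      # 1 s sampling over a 15-minute window
t = np.arange(T)
rng = np.random.default_rng(0)
p_dev = 50*np.sin(2*np.pi*t/600) + 20*np.sin(2*np.pi*t/30) + 5*rng.standard_normal(T)

F = np.fft.fft(p_dev)
freqs = np.fft.fftfreq(T, d=1/fs)

# Parseval: sum of squares in time equals (1/N) * sum of |F|^2 in frequency
print(f"Parseval check: {np.sum(p_dev**2):.1f} ~ {np.sum(np.abs(F)**2)/T:.1f}")

for name, (lo, hi) in BANDS.items():
    mask = (np.abs(freqs) >= lo) & (np.abs(freqs) < hi)
    energy = np.sum(np.abs(F[mask])**2) / T
    print(f"{name:16s} band energy = {energy:10.1f}")
```

The per-band energies then play the role of the demand values obtained from (2), to be corrected by the three control thresholds before dispatch.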
Therefore, the demand value of signal energy in each IL frequency band can be obtained from (2).

Interruptible load control target

The deviation to be compensated on the distribution feeder in one planning period (one day) can be expressed as the difference P_DEV between the measured power and its planned value.

Evaluation standard

Appropriate performance evaluation standards can objectively and fairly evaluate the effect of each control capacity on the control object; they are an important link in establishing market order and promoting the application of reserve ancillary-service technology. In the power market, the degree of load-power change in the distribution network must be checked against supporting evaluation standards, which form the basis for fine-grained IL scheduling. At the same time, IL over-compensation must be prevented: on the basis of meeting the control accuracy requirements, scheduling should be minimized so as not to affect users' demand satisfaction. In the power market, the assessment of 10 kV feeder power in the distribution network generally requires that the root mean square error of the all-day prediction results of the daily feeder-power prediction curve be no more than 10%, and that the maximum prediction error be no more than 30%.

Based on the RMSE standard, this paper constructs an evaluation standard for 10 kV feeder power in the distribution network that guides the deviation towards the zero axis, but does not require the deviation between the actual power value and the planned value to be zero in real time, so as to reduce the frequency of IL regulation. The RMSE is therefore divided into: 1) a short-term index, SRMSE, which responds to the real-time power deviation; and 2) a long-term index, LRMSE, which suppresses large fluctuations of the feeder power. SRMSE and LRMSE are assessed every 15 minutes; SRMSE evaluates the short-time power correction effect.

Control threshold

This strategy focuses on the long-term control effect and does not require the feeder power deviation to cross zero frequently within the assessment time. In line with the above RMSE standard, three thresholds are set in the regulation area.

Dead zone. Small power fluctuations need not be compensated, so that customer satisfaction is affected as little as possible; the dead zone applies when the deviation stays below the lower threshold.

Emergency zone. At some times of the day the actual power greatly exceeds the planned value, requiring a large amount of IL to compensate the deviation urgently; emergency regulation starts when the deviation exceeds the upper threshold.

Normal zone. Between the two thresholds, the demand value of IL scheduling is modified so that the used capacity and the number of scheduling actions of the controlled IL meet the actual requirements.

Deviation compensation

According to the DFT control requirements, the power deviation over the period T must be supplied as input in order to calculate the IL capacity. The length of T is determined by the frequency-band classification of the IL and by the RMSE standard; after extensive experiments and simulations, taking the 15 minutes before the calculation point as the sampling interval proved most appropriate. The sampling frequency is determined by the level of distribution network automation; since distribution network automation in China is relatively weak and uneven, this paper collects data every 1 s.
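A minimal rendering of the three-threshold logic might look as follows. The names D and E for the dead-zone and emergency thresholds and the exact comparisons are my reading of the description above (the original inequality conditions were given as numbered equations not reproduced here), so this is a sketch rather than the published rule set.

```python
def dispatch_decision(p_dev, p_ilx_demand, D, E):
    """Zone logic sketched from the threshold description (D < E assumed):
    dead zone below D, emergency above E, normal zone in between."""
    if abs(p_dev) > E:
        # Emergency: compensate immediately, skipping RMSE/correction checks.
        return "emergency", p_ilx_demand
    if abs(p_dev) < D:
        # Dead zone: small fluctuations are left uncompensated.
        return "dead", 0.0
    # Normal zone: correct the DFT demand P_ILx to a dispatch value P_R.
    p_r = min(p_ilx_demand, abs(p_dev))
    return "normal", p_r

print(dispatch_decision(p_dev=120.0, p_ilx_demand=80.0, D=20.0, E=100.0))
print(dispatch_decision(p_dev=10.0,  p_ilx_demand=80.0, D=20.0, E=100.0))
print(dispatch_decision(p_dev=60.0,  p_ilx_demand=80.0, D=20.0, E=100.0))
```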
Control process

With the RMSE standard in place, the control process of the interruptible load is illustrated in Figure 1. Assume the control system takes time i as the starting point and the unadjusted deviation P_DEV over the preceding 15 minutes as the original data; it uses the DFT to convert these data into the frequency domain, establishes the correspondence with each frequency band in Table 1, obtains the scheduling-demand evaluation P_ILx of each kind of IL from Parseval's theorem, and then corrects it through the three control thresholds:

1. Determine the three control thresholds and use the DFT to calculate P_ILx on a rolling window.
2. If P_DEV > E, emergency regulation applies: the evaluation standard and the correction conditions are ignored, and IL are scheduled immediately to compensate the deviation.
3. If the demand falls below D, the IL is not scheduled to suppress the power fluctuation.
4. If the deviation lies between D and E, the goals are: make P_DEV as small as possible, meet the evaluation criteria, and schedule IL according to the modified conditions. According to the three control thresholds, correct P_ILx to P_R, compensate P_DEV, and bring P_DEV close to zero.
5. Roll forward, recording the amount and the number of times IL are used. Apart from emergency regulation, each IL dispatch value is restored to its economic operating value in batches every hour.
6. Verify the control effect for the whole day and set the control thresholds for the next day.

Conclusions

With the deepening of power reform, the distribution network needs to develop ancillary services that regulate load power within an integrated energy service system with multiple participants. Based on statistical analysis, aiming at the demands of the power market, and combining the characteristics of interruptible load, this paper proposes a strategy that uses the DFT to control distribution network interruptible load: it integrates the interruptible load resources and dispatches different interruptible loads to compensate power shortages or absorb excess power in specific periods of time, so as to reduce the power prediction deviation of the distribution network feeder and improve the service capability of the electricity-selling enterprises.
Revision surgery is overestimated in hip replacement

Objectives: The Kaplan-Meier estimator is widely used in orthopaedics to calculate the probability of revision surgery. Using data from a long-term follow-up study, we aimed to assess the amount of bias introduced by the Kaplan-Meier estimator in a competing risk setting.

Methods: We describe both the Kaplan-Meier estimator and the competing risk model, and explain why the competing risk model is a more appropriate approach for estimating the probability of revision surgery when patients die in a hip revision surgery cohort. In our study, a total of 62 acetabular revisions were performed. After a mean of 25 years, no patients were lost to follow-up, 13 patients had undergone revision surgery and 33 patients had died of causes unrelated to their hip.

Results: The Kaplan-Meier estimator overestimates the probability of revision surgery in our example by 3%, 11%, 28%, 32% and 60% at five, ten, 15, 20 and 25 years, respectively. As the cumulative incidence of the competing event increases over time, so does the amount of bias.

Conclusions: Ignoring competing risks leads to biased estimates of the probability of revision surgery. In order to guide the choice of the appropriate statistical analysis in future clinical studies, we propose a flowchart.

Introduction

One of the most important outcome measures in orthopaedic surgery is the time to a certain event. In joint replacement surgery, for instance, the time to revision surgery is seen as the most important determinant of the clinical success of any prosthesis. Techniques from the field of survival analysis, such as the Kaplan-Meier estimator [1], have been used to estimate time to revision surgery since the 1980s [2,3]. The time from implantation of a prosthesis until a specified event of interest is used in survival analyses. An important advantage of survival analyses is that these techniques allow analyses with "censored data", i.e. data concerning patients for whom revision surgery has not yet taken place within the study period [1]. If the endpoint of interest has not yet occurred at the end of the observation window, the event time is censored. The probability of revision surgery can be estimated with the Kaplan-Meier estimator at any specific point in time.

At first glance, the Kaplan-Meier estimator seems ideal for orthopaedics, since analyses can be performed before revision surgery has occurred in all patients. However, this method makes a number of assumptions [4,5]. The Kaplan-Meier estimator was developed specifically for studies with a single time-to-event endpoint, which may be censored. The assumption of independence between the time-to-event distribution and the censoring distribution is of critical importance: the probability of the event of interest is estimated by assuming that patients whose time is censored have the same probability of revision at any later time as those who remain under observation. When estimating the time to revision surgery, however, other types of events often play a role, and these may prevent the event of interest from occurring. For instance, revision of an implant may be unobservable because the patient dies. In this particular case, death is a competing event, which poses a competing risk; this risk may be high, especially in studies with long-term follow-up.
The Kaplan-Meier method of censoring patients who experience a competing event is not ideal when the goal is to estimate the probability of the event of interest, since it implicitly assumes that the event of interest could still occur after the time point at which censoring occurred [6-8]. If a patient experiences a competing event, the event of interest can no longer occur; therefore the potential contribution of this patient to the estimate should become zero. The probability of the event of interest must be estimated by taking into account the probability of the competing events; ignoring the competing risks leads to a biased estimate of the probability of the event of interest (see Appendix 1 of the Supplementary Material for more technical details) [5,9-11].

In this study we compare the Kaplan-Meier estimator with the cumulative incidence estimator in a competing risk setting and show the level of bias introduced by violating critical assumptions of the Kaplan-Meier estimator. We propose a simple algorithm to help select the appropriate data analysis technique for estimating the probability of revision surgery in future studies. In order to illustrate these statistical methods, developed by Kaplan and Meier [1] and Bernoulli [10,11], we used data from a previously published cohort of acetabular revision patients [12].

Materials and Methods

In our published cohort study, 62 acetabular revisions were performed in 58 patients between January 1979 and March 1986 at the Radboud University Medical Center in Nijmegen, The Netherlands [12]. There were 13 men and 45 women, with a mean age at revision of 59.2 years (23 to 82). Revision was undertaken using impacted morsellised bone grafts and a cemented acetabular component in all cases. Patients were followed prospectively with yearly clinical and radiological assessments.

Competing risks versus Kaplan-Meier. Competing risk methods apply to situations where more than one endpoint is possible; the endpoints are competing in the sense that the occurrence of one event precludes the occurrence of the other. In our situation there are two different endpoints: revision surgery and death. The occurrence of death prevents the occurrence of the event of interest, namely revision surgery. The competing risks model can be represented as an initial state (alive after initial revision surgery) and two different competing endpoints: revision surgery and death. We are interested in the probability of revision surgery (the event of interest) in the presence of the competing event of death, which clearly prevents the occurrence of revision. The Kaplan-Meier estimator is often used to estimate this probability; however, in this model the competing endpoints (i.e., death) are treated as censored observations. If a patient has died, he or she has zero probability of experiencing the event of interest, and this must be taken into account in the model. The cumulative incidence estimator is used to estimate the probability of each competing event. The cumulative incidence function of cause k is defined as the probability of failing from cause k before time t. Here we are interested in the cumulative incidence function of revision surgery in the presence of death.

Statistical analysis. All analyses concerning competing risk models were performed using the mstate library [13,14] in R [15]. For technical details concerning the software, see de Wreede et al [13,14].
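To make the contrast concrete, the following Python sketch re-implements both estimators from first principles (the authors used the mstate package in R; the data below are simulated, not the study cohort). Treating deaths as censoring reproduces the naive 1 - KM curve, while the Aalen-Johansen-type increment gives the cumulative incidence.

```python
import numpy as np

def km_and_cuminc(times, events):
    """times: event/censoring times; events: 0 = censored,
    1 = revision (event of interest), 2 = death (competing event).
    Returns event times, naive 1 - KM (deaths treated as censoring),
    and the cumulative incidence of revision (Aalen-Johansen)."""
    order = np.argsort(times)
    times, events = np.asarray(times)[order], np.asarray(events)[order]
    n = len(times)
    surv_all, surv_km, ci = 1.0, 1.0, 0.0
    at_risk = n
    grid, km_curve, ci_curve = [], [], []
    for i in range(n):
        if events[i] == 1:
            ci += surv_all / at_risk          # Aalen-Johansen increment
            surv_km *= 1 - 1 / at_risk        # KM step for a revision
        if events[i] in (1, 2):
            surv_all *= 1 - 1 / at_risk       # all-cause event-free survival
        # events[i] == 2 (death) leaves surv_km untouched: KM "censors" it
        at_risk -= 1
        grid.append(times[i]); km_curve.append(1 - surv_km); ci_curve.append(ci)
    return grid, km_curve, ci_curve

# Synthetic cohort with heavy competing mortality (NOT the study data)
rng = np.random.default_rng(0)
t_rev, t_die = rng.exponential(40, 60), rng.exponential(15, 60)
times = np.minimum(np.minimum(t_rev, t_die), 25.0)   # administrative censoring
events = np.where(times == 25.0, 0, np.where(t_rev < t_die, 1, 2))
g, km, ci = km_and_cuminc(times, events)
print(f"End of follow-up: 1 - KM = {km[-1]:.3f}, cumulative incidence = {ci[-1]:.3f}")
```

With heavy competing mortality the 1 - KM value visibly exceeds the cumulative incidence, which is exactly the bias described in the text.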
Results

At a mean of 23 years (20 to 25) after surgery, no patients were lost to follow-up. A total of 13 hips in 12 patients had undergone revision surgery, and 30 patients (33 hips) had died of causes unrelated to their hip surgery (Table I). The cumulative incidence estimators for both competing events, i.e. revision surgery and death, are shown in Figure 2. The cumulative incidence of revision surgery estimated by the competing risks method at five, ten, 15, 20 and 25 years is 2%, 6%, 15%, 18% and 21%, respectively. The cumulative incidence of death represents the probability of dying before revision surgery. If death occurs first, the observation is not considered censored in the competing risk approach (in contrast to the Kaplan-Meier approach), but contributes to the competing event of death.

Discussion

In the current orthopaedic literature, the Kaplan-Meier estimator is an accepted standard for estimating the probability of revision surgery in cohort studies of any type of joint replacement. In the absence of competing risks, this method is valid. However, in the presence of competing risks, the Kaplan-Meier estimator overestimates the probability of revision surgery. In our example, the probability of revision surgery is overestimated by 60% at a follow-up of 25 years. In the Kaplan-Meier approach, failures from the competing causes are treated as censored observations. Individuals who will never be revised because they have died are censored and thus treated as if they could still be revised. In other words, the Kaplan-Meier estimator allows patients to be revised after they have died. Clearly, this results in an incorrect or biased estimate of the actual probability of revision surgery at that specific time point.

When competing risks are absent (i.e., the competing event death has not occurred), the Kaplan-Meier estimator gives a valid estimate of the probability of revision surgery. However, in our example involving a long follow-up, competing events such as death do occur frequently. Also, it can be seen from our dataset that the first patient died as early as one year after surgery (Fig. 2). By five years after the initial surgery, a total of six patients had died, compared with only one patient who had undergone revision surgery, resulting in a 3% overestimation of the probability of revision surgery (Fig. 3). In other words, the hazard of the competing events is considerable, leading to an overestimation of the revision surgery probability even at mid-term follow-up.

Fig. 3: Comparison of the cumulative incidence of revision surgery estimated with the Kaplan-Meier estimator and the competing risks method. The discrepancy between the lines represents the bias introduced by erroneous use of the Kaplan-Meier estimator.

In this paper a competing risks model has been applied to a cohort where only two competing events are present. However, in other clinical situations more competing events can occur. Consider estimating the probability of revision surgery due to a specific cause, for instance the probability of revision surgery due to recurrent dislocation. In this situation there are three competing events: revision surgery for recurrent dislocation, revision surgery for any other reason, and death of the patient. The competing risk model can easily be extended to deal with another competing event. From a statistical point of view, competing risk analysis should be used whenever competing risks are present.
In order to aid in deciding which analysis should be used to estimate the probability of revision surgery in future clinical studies, we propose a simple algorithm (Fig. 4). Every clinical study that investigates the probability of revision surgery should address the occurrence of competing events. When no competing events have occurred, the Kaplan-Meier estimate of revision surgery will be valid. However, whenever any competing event occurs, the Kaplan-Meier estimator will introduce bias. The resulting bias is greater when the "competition" is heavier, i.e. when the hazard of the competing events is larger. See Appendix 2 of the Supplementary Material for a concise summary of the variables necessary to perform a competing risk analysis.

Recently, minimal clinically important differences (MCIDs) have gained attention in the literature [16-18]. Using MCIDs, patients can be classified as responders or non-responders to a particular therapy. Theoretically, one could investigate the time to an MCID after joint replacement, using MCIDs in health-related quality of life (HRQoL). However, contrary to the occurrence of revision surgery or the first occurrence of a complication [19], which can be assessed over a time period, whether or not a patient has attained an MCID in HRQoL is typically measured using a questionnaire at a specific point in time. Neither the Kaplan-Meier estimator nor a competing risk model is then an appropriate approach, unless the assessment of the occurrence of an MCID is repeated at small time intervals.

The competing risk analysis can be performed using the mstate library [13,14] in R [15]. R and the mstate package are both freely available at The R Project for Statistical Computing and The Comprehensive R Archive Network.

Supplementary material

Two appendices, giving 1) further details of the mathematical background of the Kaplan-Meier estimator and competing risk analysis and 2) a concise summary of the variables necessary to perform a competing risk analysis, are available with this article on our website www.bjr.boneandjoint.org.uk

Fig. 4: Algorithm detailing the appropriate data analysis technique to estimate the probability of revision surgery ("What is the probability of revision surgery?" - if any competing events occurred, use competing risk analysis; if not, use the Kaplan-Meier estimator). The possibility and actual occurrence of competing events should be assessed in order to determine the appropriate data analysis technique.
Riemann problems and dispersive shocks in self-focusing media

The dynamical behavior resulting from an initial discontinuity in focusing media is investigated using a combination of numerical simulations and Whitham modulation theory for the focusing nonlinear Schrödinger equation. Initial conditions with a jump in either or both the amplitude and the local wavenumber are considered. It is shown analytically and numerically that the space-time plane divides into expanding domains in which the solution is described by a slow modulation of genus-zero, genus-one or genus-two solutions, their precise arrangement depending on the specifics of the initial datum.

The NLS equation and its Whitham equations. We begin by reviewing some background material, to set up the relevant framework. We write the one-dimensional focusing NLS equation with small dispersion as

(1) $i\epsilon q_t + \epsilon^2 q_{xx} + 2|q|^2 q = 0$,

where subscripts $x$ and $t$ denote partial differentiation and $0 < \epsilon \ll 1$ is a small parameter that quantifies the relative strength of dispersion compared to nonlinearity. Recall that Eq. (1) possesses several invariances: phase rotations, spatial reflections, scaling and Galilean transformations. Specifically, if $q(x, t)$ is any solution of Eq. (1), so are $e^{i\alpha} q(x, t)$, $q(-x, t)$, $a\,q(ax, a^2 t)$ and $e^{i(Vx - V^2 t)/\epsilon}\, q(x - 2Vt, t)$, where all transformation parameters are real-valued. All of these invariances will be useful below. As is well known, the so-called Madelung transformation, namely $q(x, t) = \sqrt{\rho(x, t)}\, e^{iS(x,t)/\epsilon}$, where the real-valued quantities $\rho(x, t)$ and $S(x, t)$ represent respectively the local intensity and the local phase, transforms Eq. (1) into the hydrodynamic-type system (3) for $\rho$ and $v$, where $S_x(x, t) = v(x, t)$ is the local wavenumber. The truncation $\epsilon = 0$ of Eqs. (3) is the genus-0 NLS-Whitham system. Recall that Eq. (1) admits the background solution $q(x, t) = q_o\, e^{2iq_o^2 t/\epsilon}$, together with its generalizations via the above-mentioned invariances. The system (3) describes slow modulations of such a solution.

A system of modulation equations for the periodic (elliptic) solutions (4) of the focusing NLS Eq. (1) with $0 < \epsilon \ll 1$ can be obtained via Whitham averaging theory [20]. The result is the genus-1 NLS-Whitham system [33,34]. Explicitly, in Riemann invariant coordinates it takes the diagonal form (6), where $r_1, \ldots, r_4$ are the Riemann invariants; the characteristic velocities involve the quantity $V = r_1 + r_2 + r_3 + r_4$ together with combinations of the complete elliptic integrals of the first and second kind, $K(m)$ and $E(m)$, respectively [31]. Importantly, the Riemann invariants are exactly the branch points of the elliptic solution (4). That is, $r_1 = \alpha$, $r_2 = \alpha^*$, $r_3 = \gamma$ and $r_4 = \gamma^*$.

The NLS Eq. (1) also admits multiphase solutions [32]. In general, genus-g solutions are expressed as ratios of Jacobi theta functions [35], and their modulations are described by corresponding genus-g Whitham modulation systems [33,34]. We refer the reader to [19] for a review. We will not use such higher-genus solutions and the corresponding modulation systems here.

Riemann problems for the focusing NLS equation. The Whitham equations for Eq. (1) are elliptic, and the Riemann invariants and the characteristic velocities are in general complex-valued. Hence, these systems cannot be used to study IVPs in general, contrary to the defocusing case. Nonetheless, we next show that, notwithstanding this difficulty, the system (6) still yields useful information about the behavior of solutions of Eq. (1).
We consider the focusing NLS equation with the following class of initial conditions (IC):

(9) $q(x, 0) = A_-\, e^{i(\mu x + \phi)/\epsilon}$ for $x < 0$, and $q(x, 0) = A_+\, e^{-i(\mu x + \phi)/\epsilon}$ for $x > 0$,

with $A_\pm \geq 0$ and $\mu$ and $\phi$ real, and we classify the resulting dynamics depending on the values of $A_\pm$ and $\mu$. (It turns out that the value of $\phi$ has no effect on our results.) The results can be considered the analogue for the focusing case of those obtained in [28] for the defocusing case. Nonzero values of $\mu$ correspond to the presence of carriers with opposite wavenumbers for $x < 0$ and $x > 0$, which, due to the Galilean invariance of the NLS equation, induce counter-propagating flows. For $\mu > 0$, the two halves of the IC (9) propagate inward (i.e., towards each other), whereas for $\mu < 0$ they flow outward (i.e., away from each other). Note that one can always take the discontinuity at $x = 0$ and set the phases of the IC for $x < 0$ and $x > 0$ to be equal and opposite without loss of generality, thanks to the translation and phase invariance of the NLS equation. Similarly, one can always take the carrier wavenumbers for $x < 0$ and $x > 0$ to be equal and opposite thanks to the Galilean invariance of the NLS equation.

One-sided step. The simplest scenario for the IC (9) is that of a one-sided step, in which $A_- = 0$. Then we can always set $A_+ > 0$ and $\mu = \phi = 0$ without loss of generality thanks to the phase and Galilean invariances of Eq. (1). The long-time asymptotics of solutions generated by these IC was studied in [14] by IST. On the other hand, the dynamics can be effectively described via the genus-1 Whitham system (6) [19]. Even though the Riemann invariants and the characteristic velocities are in general complex, the genus-1 system (6) does possess some real-valued solutions. In particular, it admits the self-similar solution (10) of [22], which depends on the independent variables only through $\xi = x/t$. This solution describes a slow modulation of the elliptic solution (4) [now describing oscillations with characteristic spatial period $O(\epsilon)$, as can easily be seen by performing a simple rescaling of the spatial and temporal variables in Eq. (1) with $\epsilon = 1$]. As discussed in [19], Eqs. (10) correctly capture the behavior of the solution of the NLS Eq. (1) with IC (9) inside the oscillation wedge. For $x > V_+ t$ the solution of the Whitham system is simply given by $\gamma = iA_+$ and $\alpha = 0$, which matches the limit of the self-similar solution (10), and which yields the constant solution $q(x, t) = A_+$ of the NLS equation (up to a uniform phase). Finally, for $x < 0$ the solution of the NLS equation is described by the degenerate, constant genus-0 solution $\gamma = \alpha = 0$ of the Whitham system (6), which yields the trivial solution $q(x, t) = 0$ of the NLS equation. Summarizing, the solution (10) describes an oscillatory wedge $V_- t < x < V_+ t$, with $V_- = 0$ and $V_+ = 4\sqrt{2}\, A_+$, which connects the constant solution $q(x, t) = 0$ to its left to the constant solution $q(x, t) = A_+$ to its right. The actual behavior of the solutions of the NLS equation with the above IC [36] is shown in Fig. 1 together with the predictions from Whitham theory, demonstrating excellent agreement. Importantly, note that the velocity of the matching point between the two solutions is zero, i.e., the discontinuity is pinned at $x = 0$, unlike what happens in the defocusing case [27].
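The numerical experiments behind figures like Fig. 1 can be reproduced, at least qualitatively, with a standard split-step Fourier integrator. The sketch below uses the NLS normalization assumed in the reconstruction of Eq. (1) above and a smoothed one-sided step on a periodic box (the box must be wide enough that periodic images do not reach the wedge by the final time); the paper's actual numerical scheme [36] is not specified here, so this is an independent illustration.

```python
import numpy as np

# Focusing NLS  i*eps*q_t + eps^2*q_xx + 2|q|^2 q = 0  (normalization assumed
# above), integrated by a Strang split-step Fourier scheme on a periodic box.
eps, L, N = 1.0, 80.0, 2**12
x = (np.arange(N) - N // 2) * (L / N)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)

A_minus, A_plus, mu, w = 0.0, 1.0, 0.0, 0.2     # one-sided step, smoothed over w
step = 0.5 * (1 + np.tanh(x / w))
q = (A_minus + (A_plus - A_minus) * step) * np.exp(-1j * mu * np.abs(x) / eps)

dt, t_end = 5e-4, 2.0
half_lin = np.exp(-1j * eps * k**2 * dt / 2)    # linear flow: q_hat' = -i*eps*k^2*q_hat
for _ in range(int(t_end / dt)):
    q = np.fft.ifft(half_lin * np.fft.fft(q))   # half linear step
    q = q * np.exp(2j * np.abs(q)**2 * dt / eps)  # full nonlinear step (|q| preserved)
    q = np.fft.ifft(half_lin * np.fft.fft(q))   # half linear step

# Genus-1 prediction for the right wedge edge: V_+ = 4*sqrt(2)*A_+
print("predicted wedge edge x =", 4 * np.sqrt(2) * A_plus * t_end)
print("max |q| in field       =", np.abs(q).max())
```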
Symmetries. Importantly, the invariances of the NLS equation induce corresponding symmetries for the Whitham equations. (This is similar to what happens for other nonlinear evolution equations, even in two spatial dimensions; e.g., see [37].) Specifically, the Whitham modulation equations (and therefore the Riemann invariants) are insensitive to (i.e., invariant under) uniform phase rotations of the solutions of the NLS equation [i.e., under transformations $q'(x, t) = q(x, t)\, e^{i\phi}$, with $\phi$ an arbitrary real constant]. Rescalings of the solution of the NLS equation yield a corresponding scaling for the Riemann invariants [that is, $\alpha'(x, t) = a\,\alpha(ax, a^2 t)$ and $\gamma'(x, t) = a\,\gamma(ax, a^2 t)$]. Finally, Galilean boosts also translate into a corresponding transformation for the Riemann invariants, namely a shift of their real parts. (Interestingly, the induced transformation for the Riemann invariants is essentially the same as in the corresponding symmetries of the Whitham equations for the KdV equation, even though the transformation of the solutions of the two PDEs is very different [4].)

The combination of the above invariances and the pinning of the discontinuity at $x = 0$ for the self-similar solution (10) arising from the one-sided step IC (9) is the key that enables us to use the above solution to analyze the more complicated scenarios, as we discuss next. In particular, note that multiplying the IC (9) by $e^{iV_o x}$ results in a slanted oscillatory wedge whose boundaries are shifted by the velocity induced by the carrier $V_o$.

Symmetric two-sided step. Consider now a two-sided step given by the IC (9) with $A_- = A_+$ and $\mu = 0$, corresponding to an initial phase discontinuity at $x = 0$. Without loss of generality we can set $A_\pm = 1$ thanks to the scaling invariance of the NLS equation. One can view the above IC as a superposition of two one-sided steps. Of course the solution of the NLS equation is not simply given by a superposition of the corresponding one-sided solutions in general. Nonetheless, the property holds for the Whitham system in this case, thanks to the invariances of the one-sided solution, the fact that the discontinuity of the one-sided wedge is pinned at the origin, and that the solution to the left of the wedge is asymptotically zero. Thus, the solution of the genus-1 Whitham system generated by the above IC is simply the linear superposition of the evolution of the one-sided step discussed above and that of a reflected step. The resulting behavior is shown in Fig. 3, again demonstrating excellent agreement. Note that the Whitham system is insensitive to the phase of $q(x, t)$ and hence to the value of $\phi$. In particular, when $\phi = 0$ the IC has no discontinuity at $x = 0$; this is the case considered in [22]. But a similar behavior arises in the presence of a jump discontinuity in the phase of the IC.

It was suggested in [22] that the self-similar solution (10) describes the evolution of small perturbations of the constant background as a result of modulational instability. And indeed it was shown in [16,18] using IST that the long-time asymptotics of a very broad class of IC corresponding to localized perturbations of a constant background tends asymptotically to this behavior as $t \to \infty$. The properties of the self-similar solution were further studied in [17]. Next we show how suitable combinations of the above one-sided solutions allow one to study even more general scenarios.

FIG. 3. Same as Fig. 1, but for a two-sided step IC given by (9) with $A_- = A_+ = 1$, $\phi = \pi/2$ and $\mu = 0$.

FIG. 4. Same as Fig. 1, but for an asymmetric two-sided step IC (9) with $A_- = 0.7$, $A_+ = 1.2$ and $\mu = \phi = 0$.

Asymmetric two-sided step. We now consider the case $A_\pm \neq 0$ and $A_- \neq A_+$, which we refer to as an asymmetric step.
Without loss of generality we can take $A_- < A_+$, owing to the invariance of the NLS equation with respect to spatial reflections. As before, one can view these IC as a superposition of two one-sided steps, but now with different amplitudes. As a result, the solution of the corresponding IVP for the Whitham equations is again given by the superposition of two one-sided self-similar solutions. Note however that now the two halves have a different amplitude and a different propagation speed. An example of the resulting behavior is shown in Fig. 4 for $A_- = 0.7$ and $A_+ = 1.2$, demonstrating once more excellent agreement between the self-similar solutions of the Whitham equations and the numerical solution of the NLS equation.

Outward counter-propagating flows. We now consider a different generalization of a symmetric two-sided step by allowing for the presence of counter-propagating flows, which are obtained when $\mu \neq 0$. We first discuss the case of symmetric outward flows; that is, we consider the IC (9) with $A_- = A_+ = A$ and $\mu < 0$. Again, without loss of generality we can take $A = 1$ thanks to the scaling invariance of the NLS equation. As before, it is useful to look at the IC as a superposition of two one-sided steps. Because of the presence of a non-zero wavenumber, however, in this case the discontinuity for each solution half does not remain pinned at $x = 0$, but instead travels to the left or to the right with speed $2|\mu|$. More precisely, the discontinuity in the left half is now located at $x = -2|\mu|t$ and the one in the right half at $x = 2|\mu|t$. As a result, a wedge-shaped vacuum zone develops in the central portion of the $xt$-plane, i.e., for $|x| < 2|\mu|t$. An example of the behavior resulting from the above IC is shown in Fig. 5 for $\mu = -0.5$, demonstrating again excellent agreement with the corresponding solution of the Whitham equations.

Inward counter-propagating flows. A different outcome is obtained when inward counter-propagating flows are present, namely when $\mu > 0$. In this case, the non-zero carrier causes the two halves of the solution to travel toward each other. In particular, the two individual solutions overlap in the region $|x| < 2\mu t$. An example of the resulting behavior for $A_\pm = 1$ and $\mu = 0.5$ is shown in Fig. 7. The dot-dashed lines in Fig. 7(b) indicate the boundary of the overlap region. Inside this region, the interaction between the two genus-1 regions presumably results in the formation of a genus-2 region (similarly to what happens in the defocusing case [29,30]), so the asymptotic expression for the solution in this region will be described by a slow modulation of the genus-2 solutions of the NLS equation, and one cannot obtain useful information in this region using the genus-1 Whitham equations. Regions with more complicated oscillation patterns are indeed clearly visible in Fig. 7. (The appearance of genus-2 regions for this case had been predicted in [13] based on the calculation of the long-time asymptotics of solutions. Note, however, that [13] predicted a central genus-1 region surrounded by two genus-2 regions, each of which is directly adjacent to the outermost genus-0 regions, in contrast with the predictions from Whitham theory and the numerical results. Moreover, the boundary between the two genus-2 regions and the genus-0 regions was predicted to be at $x = \pm 2(\mu + A^2/\mu)\, t$, which is inconsistent with the limit $\mu \to 0$, since in that case the solution reduces to the symmetric two-sided step discussed earlier.)
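The geometry of the overlapping wedges can be tabulated directly from the superposition picture just described. The formulas in the sketch below are my reading of the construction (an unboosted one-sided wedge occupies $0 < x < 4\sqrt{2}At$, and a carrier of wavenumber $\mu$ shifts all speeds by $\pm 2\mu$); they are written out for illustration and are not quoted from the paper.

```python
import numpy as np

def wedge_boundaries(A_minus, A_plus, mu, t=1.0):
    """Oscillation-wedge edges at time t from superposing the two boosted
    one-sided Whitham solutions (assumed reading of the text)."""
    left  = ((-4*np.sqrt(2)*A_minus + 2*mu)*t, 2*mu*t)     # wedge from x < 0 data
    right = ((-2*mu)*t, (4*np.sqrt(2)*A_plus - 2*mu)*t)    # wedge from x > 0 data
    return left, right

for mu in (-0.5, 0.0, 0.5):
    (l0, l1), (r0, r1) = wedge_boundaries(1.0, 1.0, mu)
    overlap = max(0.0, min(l1, r1) - max(l0, r0))
    print(f"mu={mu:+.1f}: left wedge ({l0:+.2f},{l1:+.2f}), "
          f"right wedge ({r0:+.2f},{r1:+.2f}), overlap={overlap:.2f}")
```

For $A_- = A_+ = A$ these formulas reproduce the behavior described above: a vacuum zone $|x| < 2|\mu|t$ for $\mu < 0$, an overlap (genus-2 candidate) region $|x| < 2\mu t$ for $\mu > 0$, and the two wedges passing each other entirely precisely when $\mu > 2\sqrt{2}A$, consistent with the threshold discussed next.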
We also note that an even more complex scenario is obtained when $\mu > 2\sqrt{2}\,A$, since in that case the genus-1 Whitham equations predict that the DSW region generated by the left half of the IC would be completely to the right of the one generated by the right half, and vice versa. It is not possible to make any predictions for this case using only the genus-1 Whitham equations. The last scenario we consider is that of an asymmetric step with inward counter-propagating flows, which is obtained from Eq. (9) by taking $0 < A_- < A_+$ and $\mu > 0$. This case yields a similar outcome as that of a symmetric step, the only differences being once again the amplitude inside the oscillation region and its expansion speed. The corresponding solution is shown in Fig. 8. Interestingly, however, the difference between the genus-1 and genus-2 regions is more clearly identifiable in this case compared to that of a symmetric step. We do not have a simple explanation for why this should be the case. As before, more complicated outcomes can be obtained when larger values of $\mu$ are considered. In this case, however, we expect two different thresholds, one at $\mu = 2\sqrt{2}\,A_-$ and one at $\mu = 2\sqrt{2}\,A_+$.

Discussion. In summary, we proposed a classification of the dynamical scenarios generated by a single jump in the IC for the focusing NLS equation in the semiclassical limit. The first few cases had been studied before, either by Whitham theory or by IST. In particular, the self-similar solution of the genus-1 Whitham equations derived in [22] was used in [23] to present a qualitative description of the solutions produced by the inward and outward counter-propagating flow ICs discussed above. Note, however, that no actual solutions of the NLS equation (either exact or numerical) were reported in [23]. Thus, a quantitative comparison between the predictions of Whitham theory and the actual solutions of the NLS equation was still missing. This is surprising, since, in the defocusing case, similar problems, as well as much more complicated ones, have been well characterized [27][28][29][30]. While in this work the problem was formulated in the framework of semiclassical limits, there is a well-known correspondence between small-dispersion limits (described by Whitham theory) and long-time asymptotics, which applies whenever the ICs considered are scale-invariant, and the IC (9) studied in this work does possess this property. There are marked differences between the behavior resulting from an initial discontinuity in the focusing versus the defocusing case. In the defocusing case, a single discontinuity generates at most genus-1 regions [26], and one can only obtain genus-2 regions when two or more such discontinuities are considered [29,30]. Here, instead, we have seen that expanding genus-2 regions are generated when the IC contains inward counter-propagating flows (as in Figs. 7 and 8). Unlike what happens in other situations in both the focusing and defocusing cases [12,29,30,38,39], here the boundaries of the genus-2 regions are not curved, but straight lines instead. This is because of the scaling invariance of the IC considered here, which makes the semiclassical limit equivalent to the long-time asymptotics. Whitham theory appears to slightly overestimate the spatial extent of the oscillation regions in some cases. Recall that small deviations between the predictions of Whitham theory and the actual PDE behavior are known to arise in the linear limit [4] of the former.
Moreover, all numerical computations in this work were done with $\epsilon = 1$, while Whitham theory is designed to capture the behavior of solutions as $\epsilon \to 0$, so one would not necessarily expect the latter to be effective for such large values of $\epsilon$. Perhaps better agreement could be attained for smaller values of $\epsilon$. But, given the ellipticity of the Whitham equations in the focusing case, we find it quite remarkable that Whitham theory is even effective at all in the various situations considered here. In fact, while in the defocusing case one can prove that the solutions of the Whitham equations provide an asymptotic approximation for the time evolution of the corresponding IC for the NLS equation, the same is not possible in the focusing case, since in this case the Whitham equations are elliptic. The situation is the same as in the cases studied in [19,22]. Therefore, a rigorous description of the problems studied here can be obtained by computing the long-time asymptotics via the IST, which will also be necessary to quantitatively describe the solution in the various genus-2 regions, as well as in the cases when the genus-1 Whitham equations yield no predictions whatsoever (e.g., in the case of inward counter-propagating flows when $\mu > 2\sqrt{2}\,A$). On the other hand, we reiterate that since Whitham theory does not require integrability, the results of this work are expected to also be applicable to many other NLS-type evolution equations that are not integrable, such as the ones considered in [21], which means that they should be experimentally observable in one of the various physical settings in which these models arise. Indeed, experimental observations of some of the scenarios discussed in this work have already recently been reported in [40]. It is therefore hoped that the remaining ones will also be realized experimentally in the future.
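For readers wishing to reproduce such comparisons, a standard way to integrate the semiclassical focusing NLS equation numerically is a split-step Fourier scheme. The sketch below assumes the normalization $i\epsilon q_t + (\epsilon^2/2)q_{xx} + |q|^2 q = 0$ on a periodic domain (step ICs require a box large enough that boundary effects stay outside the region of interest); it is a minimal illustration, not the integrator used in the paper.

```python
import numpy as np

def nls_split_step(q0, L, eps=1.0, dt=1e-3, nt=1000):
    """Strang split-step Fourier integrator for the focusing NLS
    i*eps*q_t + (eps**2/2)*q_xx + |q|**2*q = 0 on a periodic box
    of length L (normalization assumed; see lead-in)."""
    n = q0.size
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
    half_linear = np.exp(-0.25j * eps * k**2 * dt)  # linear half-step
    q = q0.astype(complex)
    for _ in range(nt):
        q = np.fft.ifft(half_linear * np.fft.fft(q))
        q *= np.exp(1j * np.abs(q)**2 * dt / eps)   # exact nonlinear step
        q = np.fft.ifft(half_linear * np.fft.fft(q))
    return q
```

Since $|q|$ is conserved during the nonlinear substep, that substep is exact, which is what makes the split-step scheme attractive for this equation.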
The Navier-Stokes equations with the Neumann boundary condition in an infinite cylinder

We prove unique existence of local-in-time smooth solutions of the Navier-Stokes equations for initial data in $L^{p}$ and $p \in [3, \infty)$ in an infinite cylinder, subject to the Neumann boundary condition.

Introduction

We consider the three-dimensional Navier-Stokes equations subject to the Neumann boundary condition:

(1.1) $\partial_t u - \Delta u + u\cdot\nabla u + \nabla p = 0$, $\operatorname{div} u = 0$ in $\Pi \times (0,T)$; $\nabla \times u \times n = 0$, $u\cdot n = 0$ on $\partial\Pi \times (0,T)$; $u = u_0$ on $\Pi \times \{t = 0\}$,

for the infinite cylinder $\Pi$. Here, $n$ denotes the unit outward normal vector field on $\partial\Pi$. The local well-posedness of the Neumann problem (1.1) is established in [15], [28] for initial data in $L^p$, when $\Pi$ is smoothly bounded. See also [29], [22] for the Dirichlet problem. The purpose of this paper is to develop an $L^p$-theory of (1.1) for the infinite cylinder $\Pi$. Let $L^p_\sigma$ denote the $L^p$-closure of $C^\infty_{c,\sigma}$, the space of all smooth solenoidal vector fields with compact support in $\Pi$. The main result of this paper is Theorem 1.1 below.

The Neumann problem plays an important role in the theory of weak solutions to the Euler equations. When $\Pi$ is a two-dimensional bounded and simply-connected domain, global weak solutions to the Euler equations are constructed in [5], [39], [30] by taking a vanishing viscosity limit of (1.1). Since the vorticity satisfies the homogeneous Dirichlet boundary condition under the Neumann boundary condition (1.1), the $L^p$-norms of the vorticity are uniformly bounded independently of the viscosity. For the three-dimensional Cauchy problem, vanishing viscosity methods are applied in [36], [23] to construct unique local-in-time solutions to the Euler equations in $\mathbb{R}^3$. It is unknown whether a vanishing viscosity method is applicable for domains with boundary. See [13], [7], [38], [24] for local well-posedness results for the Euler equations. In [1], the author studied vanishing viscosity limits of (1.1) for axisymmetric data based on the main result of this paper.

We outline the proof of Theorem 1.1. We extend the approach for bounded domains [28]. We set $B$ to be the Laplace operator subject to the Neumann boundary condition. When $\Pi$ is smoothly bounded, it is known that the operator $-B$ generates a $C_0$-analytic semigroup on $L^p$ for $p \in (1,\infty)$ [28], [4]. We show analyticity of the semigroup for the infinite cylinder by using a solution formula for the resolvent problem. We then define a fractional power $B_0^{1/2}$ of the operator $B_0 = B + \lambda_0$ with $\lambda_0 > 0$. Since the operator $B_0$ admits bounded imaginary powers [32], [21], the domain of the fractional power $D(B_0^{1/2})$ is continuously embedded into the Sobolev space $W^{1,p}$. We then define the Stokes operator as a restriction of the Laplace operator. Since the Laplace operator $B$ commutes with the Helmholtz projection operator $P$, the Stokes operator acts as an operator on the solenoidal vector space $L^p_\sigma$. By using the analyticity of the semigroup and the boundedness of the Helmholtz projection operator on $L^p$ [34], we construct mild solutions $u \in C([0,T]; L^p)$ for $u_0 \in L^p_\sigma$ and $p \in [3,\infty)$ of the form

$u(t) = e^{-tA}u_0 - \int_0^t e^{-(t-s)A} P(u\cdot\nabla u)(s)\,ds$.

Higher regularity of mild solutions follows from elliptic estimates for the Helmholtz projection and the Stokes operator. We show that all derivatives of solutions belong to the Hölder space $C^\mu((0,T]; L^s)$ for $\mu \in (0, 1/2]$ and $s \in (3,\infty)$.
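One step quoted in the outline above — the embedding $D(B_0^{1/2}) \hookrightarrow W^{1,p}$ — follows a standard mechanism, sketched here under the assumption that $D(B_0) \hookrightarrow W^{2,p}$ on the cylinder; the paper's own proof may differ in detail.

```latex
% Bounded imaginary powers identify the fractional domain with a
% complex interpolation space, which is then compared with Sobolev
% (Bessel-potential) spaces:
\begin{equation*}
  D(B_0^{1/2}) = [L^p, D(B_0)]_{1/2}
  \hookrightarrow [L^p, W^{2,p}]_{1/2} = H^{1,p} = W^{1,p},
  \qquad 1 < p < \infty .
\end{equation*}
```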
This paper is organized as follows. In Section 2, we show that the Laplace operator generates an analytic semigroup on $L^p$ for the infinite cylinder. In Section 3, we define a fractional power of the Laplace operator and prove a continuous embedding of the domain of the fractional power. In Section 4, we define the Stokes operator. In Section 5, we construct mild solutions. In Section 6, we prove higher regularity of mild solutions. In Appendix A, we prove higher regularity estimates for the Laplace operator in the infinite cylinder, used in Section 6. In Appendix B, we estimate the resolvent of the Laplace operator by a multiplier theorem.

Resolvent estimates for the Laplace operator

We start with a resolvent estimate for the Laplace operator subject to the Neumann boundary condition in the infinite cylinder. We derive a solution formula for the Neumann problem by using the resolvents of two-dimensional problems.

We first prove the a priori estimate for solutions of (2.2) for $\mu = 0$ by a contradiction argument. Suppose on the contrary that (2.5) were false. Then there exists a sequence of functions $\{v_m\}$ satisfying (2.2) for $\mu = 0$ with data $g_m$ such that $\|v_m\|_{W^{1,p}} = 1$ and $\|g_m\|_{L^p} \to 0$. Since the estimate (2.4) holds for $\mu = 1$, applying (2.4) to $v_m - \Delta v_m = g_m + v_m$ implies that $\{v_m\}$ is uniformly bounded in $W^{2,p}$. Thus by the Rellich-Kondrachov theorem [14, 5.7 THEOREM 1], there exists a subsequence (still denoted by $\{v_m\}$) such that $v_m$ converges to a limit $v$ in $W^{1,p}$, and the limit $v$ satisfies $\Delta v = 0$ in $D$ together with the homogeneous boundary conditions. Since $\nabla^\perp \cdot v$ is harmonic and vanishes on $\partial D$, we have $\nabla^\perp \cdot v = 0$ in $D$. Moreover, by $-\Delta v = -\nabla^\perp(\nabla^\perp \cdot v) - \nabla\operatorname{div} v$ and $v\cdot n = 0$ on $\partial D$, we see that $\operatorname{div} v = 0$ in $D$. Since $D$ is simply-connected, there exists a stream function $\psi$ such that $v = \nabla^\perp \psi$. Since $\psi$ is harmonic and constant on $\partial D$, we have $v \equiv 0$. This contradicts $\|v\|_{W^{1,p}} = 1$. Hence (2.5) holds. By (2.5) and (2.4) for $\mu = 1$, we obtain

$\|v\|_{W^{2,p}(D)} \le C\|g\|_{L^p(D)}$ (2.6)

for solutions of (2.2) for $\mu = 0$. We next estimate the resolvent of the Neumann problem (2.3). For $p = 2$, integration by parts yields the desired estimate with some constant $C$ independent of $\mu$. We consider the Neumann problem

(2.9) $-\Delta p = f$ in $D$, $\partial_n p = 0$ on $\partial D$,

for average-zero functions $f \in L^p$, i.e., $\int_D f\,dx = 0$. Solutions of (2.9) exist uniquely up to an additive constant and satisfy the estimate (2.10) by [25]. Applying (2.10) to $-\Delta w = h - \mu w$ yields the estimate (2.7) for $p = 2$.

We now derive a solution formula for the problem (2.1). We use the cylindrical coordinates $x_1 = r\cos\theta$, $x_2 = r\sin\theta$, $x_3 = z$ and decompose a vector field $f = f_r e_r(\theta) + f_\theta e_\theta(\theta) + f_z e_z$ with respect to the basis $e_r(\theta) = {}^t(\cos\theta, \sin\theta, 0)$, $e_\theta(\theta) = {}^t(-\sin\theta, \cos\theta, 0)$, $e_z = {}^t(0,0,1)$. In the sequel, we write the horizontal component as $f_h = f_r e_r + f_\theta e_\theta$. We define the partial Fourier transform $\hat u = \mathcal{F}u$ in the $x_3$-variable for functions $u(\cdot, x_3)$ in the Schwartz class $S(\mathbb{R}; X)$ for a Banach space $X$. See [6, Chapter 6]. Solutions of (2.1) are represented by $u = u_h + u_z e_z$ via the formula (2.11), where $\mathcal{F}^{-1}$ denotes the inverse Fourier transform.

Proof. Let $u = u_h + u_z e_z$ be a solution of (2.1). The Neumann boundary condition in (2.1) implies corresponding two-dimensional boundary conditions on $D$, so $u_h$ and $u_z$ satisfy problems of the form (2.2) and (2.3). We consider the partial Fourier transforms of $u_h$ and $u_z$. Since $\hat u_h$ and $\hat u_z$ satisfy (2.2) and (2.3) for $\mu = \lambda + \xi^2$, $g = \hat f_h$, $h = \hat f_z$, we see that $\hat u_h = (\mu + B_1)^{-1}\hat f_h$ and $\hat u_z = (\mu + B_2)^{-1}\hat f_z$ by Propositions 2.1 and 2.2. By the inverse Fourier transform, we obtain (2.11).

Remark 2.6. By using a multiplier theorem on a UMD-space, we are able to obtain the $L^p$-estimate for solutions to (2.1) for $\lambda \in \Sigma_\theta$. We give a proof of (2.15) in Appendix B.
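The structure behind the solution formula (2.11) can be summarized as follows (a sketch of the reduction just described; $\Delta'$ denotes the Laplacian on the cross-section $D$):

```latex
% The partial Fourier transform in x_3 turns the three-dimensional
% resolvent problem into a family of two-dimensional ones:
\begin{equation*}
  (\lambda - \Delta)u = f \ \text{in } \Pi
  \quad\xrightarrow{\ \mathcal{F}\ }\quad
  (\lambda + \xi^2 - \Delta')\,\hat u(\cdot,\xi) = \hat f(\cdot,\xi)
  \ \text{in } D ,
\end{equation*}
% so that, with \mu = \lambda + \xi^2,
%   \hat u_h = (\mu + B_1)^{-1} \hat f_h  and
%   \hat u_z = (\mu + B_2)^{-1} \hat f_z .
```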
2.3. $L^p$-estimates. We next prove (2.12) for $p \in (1,\infty)$ and large $|\lambda| \ge \delta$ by a cut-off function argument. We apply $L^p$-estimates for the resolvent equation in a smoothly bounded domain $G \subset \mathbb{R}^3$: the estimate (2.17) holds for solutions of (2.16) for $|\lambda| \ge \delta$, $f \in L^p(G)$ and $g \in W^{1,p}(G)$ satisfying $g\cdot n = 0$ on $\partial G$.

Proof. A stronger estimate is proved in [4, Theorem 1.2], where $p'$ denotes the conjugate exponent to $p$. Since the trace of $h$ is estimated by [20, II.4, Theorem II.4.1], applying Young's inequality implies (2.17).

For a solution $u$ of (2.1), we see by the product rule (for cut-off functions $\varphi_j = \varphi_j(x_3)$) that $u_j = u\varphi_j$ satisfies

$\lambda u_j - \Delta u_j = f\varphi_j - 2\,\partial_{x_3}u\,\partial_{x_3}\varphi_j - u\,\partial^2_{x_3}\varphi_j$.

We take a smoothly bounded domain $\tilde G_j$ such that $G_j \subset \tilde G_j \subset \Pi$. Since the estimate (2.17) holds in $\tilde G_j$ for $|\lambda| \ge \delta$ with $\delta > 0$ by Proposition 2.7, we obtain a bound on each $u_j$ with some constant $C$ independent of $j$. By summing over $j$, we obtain the estimate for $|\lambda| \ge \delta$ and $\delta \ge 1$. We take $\delta \ge 1$ so that $C''\delta^{-1/2} \le 1/2$ and obtain (2.12).

Proof of Lemma 2.4. We apply the a priori estimate (2.12) to solutions given by the formula (2.11). For general $f \in L^p(\Pi)$, we construct solutions by approximation by elements of $C^\infty_c(\Pi)$ and the estimate (2.12). The uniqueness follows from a duality argument. The proof is now complete.

Fractional powers

In this section, we see that the domain of a square root of the Laplace operator $B_0 = B + \lambda_0$ for $\lambda_0 > 0$ is continuously embedded into the Sobolev space $W^{1,p}(\Pi)$. We first recall the notion of a bounded $H^\infty$-calculus for a sectorial operator in an abstract Banach space. In the subsequent section, we apply the abstract theory to the operator $B_0$ and deduce the continuous embeddings.

3.1. BIP and $H^\infty$. We recall the bounded $H^\infty$-calculus [27]. We follow the booklet [10]. We say that a closed linear operator $L$ in a Banach space $X$ is sectorial if the domain $D(L)$ and the range $R(L)$ are dense in $X$, $(-\infty, 0) \subset \rho(L)$, and there exists a constant $C > 0$ such that (3.1) holds. Here, $\rho(L)$ is the resolvent set of $L$ and $\|\cdot\|$ denotes the operator norm on $X$. The estimate (3.1) implies that the resolvent $(t + L)^{-1}$ has an analytic extension to a sector $\Sigma_\theta = \{\lambda \in \mathbb{C}\setminus\{0\} \mid |\arg\lambda| < \theta\}$ for some $\theta \in (0, \pi/2]$, on which $t(t + L)^{-1}$ remains uniformly bounded. Let $H(\Sigma_\phi)$ denote the space of all holomorphic functions in $\Sigma_\phi$. For simplicity, we abbreviate the domain $\Sigma_\phi$ in sentences. Let $H^\infty$ denote the space of all bounded and holomorphic functions in $\Sigma_\phi$. The space $H_0$ is smaller than $H^\infty$ and consists of functions vanishing at $\lambda = 0$ and as $|\lambda| \to \infty$. We define bounded linear operators $f(L)$ for holomorphic functions $f \in H_0$. Here, we take $\phi \in (\phi_L, \pi)$ so that the spectrum of $L$ is contained in the closure of the sector. We say that the operator $L$ admits a bounded $H^\infty$-calculus if there exist $\phi \in (\phi_L, \pi)$ and $K > 0$ such that (3.3) holds. The infimum of such $\phi$ is called the $H^\infty$-angle, denoted by $\phi^\infty_L$. If the operator $L$ admits a bounded $H^\infty$-calculus, we are able to define a bounded linear operator $f(L)$ for $f \in H^\infty$ by an approximation. In particular, we are able to define pure imaginary powers $L^{is}$, since $f(\lambda) = \lambda^{is}$ is bounded and holomorphic in $\Sigma_\pi$. Here, $\lambda^{is}$ is taken with the principal branch. We say that the operator $L$ admits bounded imaginary powers if there exists a constant $C$ such that (3.4) holds. Since $\{L^{is}\}$ forms a group, the estimate (3.4) implies that $L^{is}$ is quasi-bounded, i.e., $\|L^{is}\| \le Ce^{\theta|s|}$ for $s \in \mathbb{R}$ and some constants $\theta, C > 0$. The infimum of such $\theta$ is called the power angle of $L$, denoted by $\theta_L$. It follows from (3.3) that $0 \le \phi_L \le \theta_L \le \phi^\infty_L < \pi$. If a sectorial operator admits bounded imaginary powers, it follows that the domain of the fractional power $D(L^\alpha)$ agrees with the complex interpolation space $[X, D(L)]_\alpha$ for $\alpha \in [0,1]$. Here, $D(L^\alpha)$ is equipped with the graph norm. See [37], [10] for fractional powers of a sectorial operator.

3.2. A domain of a square root.
We now define fractional powers of the operator $B_0 = B + \lambda_0$ for $\lambda_0 > 0$. By Lemma 2.4, the operator $B_0$ is invertible and sectorial on $L^p$ with spectral angle zero. The boundedness of the pure imaginary powers of the operator $B_0$ is proved by R. Seeley in [32]. More strongly, the operator $B_0$ admits a bounded $H^\infty$-calculus. We define the fractional power $B_0^{-\alpha}$ by (3.2), taking $f(\lambda) = \lambda^{-\alpha}$ and the counter-clockwise integral path $\Gamma$ consisting of the two half lines $\{\lambda \in \mathbb{C} \mid \arg(\lambda - a) = \pm\psi\}$ for some $a > 0$ and $\psi \in (0, \pi/2)$. We deduce a continuous embedding of the domain of the square root $B_0^{1/2}$, which is used later in Section 6: $D(B_0^{1/2})$ is embedded into $W^{1,p}(\Pi)$ with continuous injection.

The Stokes operator

We define the Stokes operator as a restriction of the Laplace operator to a solenoidal vector space. Since the Helmholtz projection operator commutes with the Laplace operator subject to the Neumann boundary condition, a restriction of the semigroup $e^{-tB}$ forms a bounded $C_0$-analytic semigroup on $L^p_\sigma$.

Proof. We prove (4.1). The equality (4.2) follows from (4.1). We take $u \in D(B)$. Since the operator $P$ acts as a bounded operator on $W^{2,p}$ [34, Theorem 6] (see Lemma 6.2 in Section 6), the function $Pu$ belongs to $W^{2,p}$. By taking the rotation of $Pu$, we obtain (4.1).

We prove (4.3). The property (4.4) follows by duality. We introduce a potential $\Phi$. It is not difficult to see that $\nabla\Phi \equiv 0$, since $\Phi$ satisfies the Neumann problem in a weak sense. Here, $\operatorname{div}_{\partial\Pi}$ denotes the surface divergence on $\partial\Pi$. Indeed, integration by parts yields the weak formulation for $\varphi \in C^\infty_c(\Pi)$. Since $\nabla\varphi$ is orthogonal to solenoidal vector fields, the corresponding pairing vanishes. The above equality is extendable to all $\nabla\varphi \in G^{p'}(\Pi)$, since gradients of smooth functions are dense in $G^{p'}(\Pi)$ [Lemma 7], where $p'$ is the conjugate exponent to $p$. It follows that $\nabla\Phi$ vanishes. Here, $(f, g)$ denotes the integral of $f\cdot g$ in $\Pi$ for $f \in L^p$ and $g \in L^{p'}$. We proved $\nabla\Phi \equiv 0$.

We consider the Stokes operator for $\lambda \in \rho(-B)$. In particular, the Stokes operator $-A$ generates a bounded $C_0$-analytic semigroup on $L^p_\sigma$. We set the operator $A_0 = A + \lambda_0$ and define fractional powers of the operator in the same way as we did for $B_0$ in the previous section. Since the resolvent of $A_0$ agrees with that of $B_0$ on $L^p_\sigma$ by (4.7), we have (4.8) and (4.9).

Proof. The property (4.8) follows from (4.7). We show (4.9). For an arbitrary $u \in L^p_\sigma$, by multiplying $B_0^\alpha$ by $B_0^{-\alpha}u = B_0^{-\alpha}Pu$, we have $u = Pu \in L^p_\sigma$ and $f = B_0^{-\alpha}u = A_0^{-\alpha}u$ by (4.8). Hence $f \in R(A_0^{-\alpha})$. We proved (4.9).

We set the fractional power $A_0^\alpha u$ for $u \in D(A_0^\alpha) = R(A_0^{-\alpha})$ as we did for $B_0$. Proposition 4.3 and Lemma 3.2 imply that, in particular, $D(A_0^{1/2})$ is continuously embedded into $W^{1,p} \cap L^p_\sigma$. In order to construct mild solutions of (1.4), we prepare an estimate for the composition operator $A_0^{-1/2}P\operatorname{div}$.

Proposition 4.5. There exists a constant $C$ such that the estimate (4.10) holds for $F \in C^\infty_c(\Pi)$. The operator $A_0^{-1/2}P\operatorname{div}$ is uniquely extendable to a bounded operator on $L^p$.

Proof. We first observe that the operator $A_0 = A_{0,p}$ defined on $L^p_\sigma$ satisfies the duality relation (4.11). For simplicity, we abbreviate the subscript $p$. By (4.11), we see that the same property holds for the resolvent of $A_0$ and for the fractional powers. For $\varphi \in C^\infty_{c,\sigma}(\Pi)$, since $D(A_0^{1/2})$ is continuously embedded into $W^{1,p'}$ by Lemma 4.4, there exists a constant $C$ bounding the duality pairing. The estimate (4.10) follows from duality.

Existence of mild solutions

We construct solutions of the integral equation (1.4) by using the analyticity of the Stokes semigroup. We first prepare linear estimates for an iterative argument. By estimating $u_{j+1} - u_j$ in a similar way, we are able to show that $\lim_{j\to\infty} \sup_{0\le t\le T} t^\gamma \|u_{j+1}(t) - u_j(t)\|_q = 0$.
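The scheme being iterated here is of Kato type; a sketch is given below. The weighted norm is the one appearing in the text, while the exponent $\gamma$ and the precise form of the nonlinear term are standard choices assumed here rather than quoted from the paper.

```latex
% Successive approximations for the integral equation (1.4):
\begin{align*}
  u_1(t) &= e^{-tA}u_0, \\
  u_{j+1}(t) &= e^{-tA}u_0
    - \int_0^t e^{-(t-s)A}\, P\,\nabla\cdot\bigl(u_j \otimes u_j\bigr)(s)\,ds,
\end{align*}
% with convergence measured in the weighted norm
%   \sup_{0 \le t \le T} t^{\gamma} \| u_j(t) \|_{L^q},
% where the operator A_0^{-1/2} P div of Proposition 4.5 gives
% meaning to the nonlinear term for L^p data.
```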
Thus the limit $u$ satisfies the integral equation (1.4), with $t^\gamma u \in C([0,T]; L^q)$ and $t^\gamma u$ vanishing at time zero.

Higher regularity

We prove Theorem 1.1. It remains to show that the mild solutions constructed in Theorem 5.2 are smooth in $\Pi \times (0,T]$. We use the fractional power of $A_0 = A + \lambda_0$, defined in Section 4. By multiplying the mild solution $u$ by $e^{-\lambda_0 t}$, we see that $v = e^{-\lambda_0 t}u$ satisfies a corresponding integral equation. Our goal is to prove:

Theorem 6.1. All derivatives of $v$ belong to $C^\mu((0,T]; L^p)$ for $\mu \in (0, 1/2)$ and $p \in (3,\infty)$.

Proof. The property (6.6) follows from (6.5) by the continuous injection from $D(A_0^{1/2})$ into $W^{1,p}$ and the elliptic regularity estimate (6.3) for $A_0 v = \partial_t v + F$. We prove (6.5). We differentiate $v$ by the fractional power $A_0^{1/2}$ and apply Lemma 6.2 (i). We use the integral representation for $t \ge \delta > 0$. The first term is smooth for $t > \delta$. We multiply the second term by $A_0^{1/2}$ and observe that the first two resulting terms belong to $C^{1+\mu}((0,T]; L^p)$, since $A_0^{-1/2}F' \in C^\mu((0,T]; L^p)$ by $\partial_t v, v \in C^\mu((0,T]; L^p)$ and Proposition 4.5. The last term belongs to $C^\mu((\delta,T]; L^p)$ by applying Lemma 6.2 (i). We proved (6.5).

Appendix A. Higher regularity estimates for the Laplace operator

In Appendix A, we prove higher regularity estimates for the Neumann problem

(A.1) $-\Delta u = f$ in $\Pi$, $\nabla \times u \times n = g$, $u\cdot n = 0$ on $\partial\Pi$.

Let $W^{1,p}_{\tan}(\Pi)$ denote the space of all functions $g \in W^{1,p}(\Pi)$ such that the tangential components of $g$ vanish on the boundary $\partial\Pi$. We prove Lemma A.1 by a reduction to bounded domains.

Proof of Lemma A.1. We prove the lemma by a cut-off function argument, as we did in the proof of Lemma 2.4. We set $u_j = u\varphi_j$ and $G_j = D \times (j-1, j+1)$. Observe that $u_j$ satisfies $-\Delta u_j = f_j$ in $G_j$, $\nabla \times u_j \times n = g_j$, $u_j \cdot n = 0$ on $\partial G_j$, with $f_j = f\varphi_j - 2\,\partial_{x_3}u\,\partial_{x_3}\varphi_j - u\,\partial^2_{x_3}\varphi_j$ and $g_j = g\varphi_j + \nabla\varphi_j \times u \times n$. We take a smoothly bounded domain $\tilde G_j$ such that $G_j \subset \tilde G_j \subset \Pi$ and apply Proposition A.2 to estimate $u_j$. We obtain the desired estimate by induction on $m \ge 0$.

Appendix B. $L^p$-resolvent estimates near $\lambda = 0$

In Appendix B, we prove the resolvent estimate (2.15). We apply a multiplier theorem on a UMD-space due to L. Weis [40]. We prove Lemma B.1 by using the solution formula (2.11). We show that the resolvents of the Laplace operators $B_i$ ($i = 1, 2$) are R-bounded. We recall the notion of R-boundedness. See [11]. Let $X$ and $Y$ be Banach spaces. Let $B(X,Y)$ denote the space of bounded linear operators from $X$ to $Y$. We say that a family of bounded linear operators $\tau \subset B(X,Y)$ is R-bounded if there exists a constant $C$ such that for all $T_1, \dots, T_N \in \tau$, $x_1, \dots, x_N \in X$ and $N \ge 1$, the inequality

$\int_0^1 \bigl\|\sum_{j=1}^N r_j(t)\,T_j x_j\bigr\|_Y\,dt \le C \int_0^1 \bigl\|\sum_{j=1}^N r_j(t)\,x_j\bigr\|_X\,dt$

holds, where $\{r_j\}$ is a sequence of independent symmetric $\{-1,1\}$-valued random variables on $[0,1]$, e.g., the Rademacher functions $r_j(t) = \operatorname{sign}(\sin(2^j\pi t))$. The smallest constant $C$ such that the above inequality holds is denoted by $R(\tau)$. For two families of R-bounded operators $\tau, \kappa \subset B(X,Y)$, the sum and product $\tau + \kappa = \{T + K \mid T \in \tau, K \in \kappa\}$ and $\tau\kappa = \{TK \mid T \in \tau, K \in \kappa\}$ are also R-bounded and satisfy $R(\tau + \kappa) \le R(\tau) + R(\kappa)$ and $R(\tau\kappa) \le R(\tau)R(\kappa)$. Since R-boundedness is stronger than uniform boundedness, we are able to define an R-sectorial operator and the R-angle $\phi^R_L$ for a sectorial operator $L$ by replacing the uniform bound (3.1) by the R-bound. When $X$ is a UMD-space, it is known that a sectorial operator $L$ with bounded imaginary powers of power angle $\theta_L$ is R-sectorial with $\phi^R_L \le \theta_L$ [9]. When $X = Y = L^p(D)$ for $p \in (1,\infty)$, R-boundedness is equivalent to the square-function estimate

$\bigl\|\bigl(\sum_{j=1}^N |T_j x_j|^2\bigr)^{1/2}\bigr\|_{L^p(D)} \le C\,\bigl\|\bigl(\sum_{j=1}^N |x_j|^2\bigr)^{1/2}\bigr\|_{L^p(D)}.$ (B.2)
We say that a function $m : \mathbb{R}\setminus\{0\} \to B(X,Y)$ is a Fourier multiplier on $L^q(\mathbb{R}; X)$ if the operator $Kf = \mathcal{F}^{-1}m(\cdot)\mathcal{F}f$, for $f \in S(\mathbb{R}; X)$, extends to a bounded operator from $L^q(\mathbb{R}; X)$ to $L^q(\mathbb{R}; Y)$. It is known that for UMD-spaces $X$ and $Y$, a function $m \in C^1(\mathbb{R}\setminus\{0\}; B(X,Y))$ is a Fourier multiplier on $L^q(\mathbb{R}; X)$ for all $q \in (1,\infty)$ if the families $\{m(\xi)\}$ and $\{\xi m'(\xi)\}$ are R-bounded for $\xi \in \mathbb{R}\setminus\{0\}$ [40, 3.4 Theorem]. We apply the multiplier theorem on $L^q(\mathbb{R}; L^p(D))$ and estimate $u_h$ given by the formula (2.11). We use the boundedness of the pure imaginary powers of $B_1$.

Proposition B.2. The estimate (B.3) holds for $u_h$ given by the formula (2.11) for $f \in C^\infty_c(\Pi)$.

Proof. We define the multiplier $m_1(\xi)$ by the resolvent of $B_1$ as in (B.4). Since $|\lambda|/|\lambda + \xi^2| \le 1/\sin\theta$ for $\lambda \in \Sigma_\theta$, as in the proof of Proposition 2.5, it follows from (B.2) that the family $\{m_1(\xi)\}$ is R-bounded. Since the resolvent is holomorphic, we are able to estimate an R-bound of $\xi m_1'(\xi)$ by using (B.4) as we did in the proof of Proposition 2.5. Thus the function $m_1$ is a Fourier multiplier on $L^q(\mathbb{R}; L^p(D))$ for all $q \in (1,\infty)$.

We next estimate $u_z$. We set the domain of the operator $B_2$ by $D(B_2) = \{w \in W^{2,p}(D) \mid \partial_n w = 0 \text{ on } \partial D\}$. Since the kernel of the operator $B_2$ is nontrivial on $L^p(D)$ (it contains the constant functions), $B_2$ is not a sectorial operator in the sense of Section 3. We thus restrict the operator to the space of average-zero functions $L^p_0$ and denote the restriction by $\tilde B_2$. Since the average of $\tilde B_2 w$ in $D$ vanishes by the Neumann boundary condition, the operator $\tilde B_2$ is an invertible sectorial operator acting on $L^p_0$. Moreover, the operator $\tilde B_2$ admits bounded imaginary powers of power angle zero [34, Theorem 2]. We show the estimate (B.1) for $u_z$ by applying a multiplier theorem to the resolvent of $\tilde B_2$. We consider functions $f_1 \in C^\infty(\Pi)$ satisfying the average-zero condition (B.5). We see that $\mathcal{F}f_1$ is average-zero in $D$ and belongs to $L^p_0$ for $\xi \in \mathbb{R}$. Hence we use the resolvent of $\tilde B_2$ and set $u_1 = \mathcal{F}^{-1}(\lambda + \xi^2 + \tilde B_2)^{-1}\mathcal{F}f_1$.

Proposition B.3. The corresponding estimate holds for $\lambda \in \Sigma_\theta$ and $f_1 \in C^\infty(\Pi)$ satisfying (B.5).

Proof. The assertion follows from a multiplier theorem as in the proof of Proposition B.2.

We subtract from $f_z$ its average in $D$ and apply Proposition B.3; the resulting estimate (B.8) holds for $u_z$ given by the formula (2.11) for $f \in C^\infty_c(\Pi)$.

Proof of Lemma B.1. The estimate (B.1) holds for $u$ given by the formula (2.11) for $f \in C^\infty_c(\Pi)$, by (B.3) and (B.8). For general $f \in L^p(\Pi)$, we take a sequence $\{f_m\} \subset C^\infty_c(\Pi)$ such that $f_m \to f$ in $L^p(\Pi)$ and obtain the desired estimate.
The Rodent Models of Dyskinesia and Their Behavioral Assessment

Dyskinesia, a major motor complication resulting from dopamine replacement treatment, manifests as involuntary hyperkinetic or dystonic movements. This condition poses a challenge to the treatment of Parkinson's disease. So far, several behavioral models based on rodents with dyskinesia have been established. These models have provided an important platform for evaluating the curative effect of drugs at the preclinical research level over the past two decades. However, there are differences in the modeling and behavioral testing procedures among various laboratories that adversely affect the rat and mouse models as credible experimental tools in this field. This article systematically reviews the history, the pros and cons, and the controversies surrounding rodent models of dyskinesia, as well as their behavioral assessment protocols. A summary of factors that influence the behavioral assessment in the rodent dyskinesia models is also presented, including the degree of dopamine denervation, stereotaxic lesion sites, drug regimen, monitoring styles, priming effect, and individual and strain differences. In addition, recent breakthroughs such as the genetic mouse models and the bilateral intoxication models of dyskinesia are also discussed.

INTRODUCTION

Dopamine (DA) replacement therapy with levodopa (L-DOPA) is the most effective pharmacotherapy for the motor symptoms of Parkinson's disease (PD). However, prolonged L-DOPA use inevitably leads to complications such as motor fluctuations (i.e., on-off fluctuations and the wearing-off phenomenon) and dyskinesia. As the earliest and most common complication, L-DOPA-induced dyskinesia (LID) occurs in half of the patients undergoing treatment for 5 years (1,2). Dyskinesia is defined as abnormal involuntary movements characterized by hyperkinetic movements or dystonic features (3). It occurs mostly at the maximal L-DOPA plasma level (peak-dose dyskinesia) and less commonly at the initial or the late phase of drug action, or both (diphasic dyskinesia) (4). It should be noted that dyskinesia can also be induced by other DA agonists or by dopaminergic neuron transplantation (viz., graft-induced dyskinesia, GID) (5), but LID is the most classical type. Although much has been learnt about the risk factors of dyskinesia (e.g., the onset age of PD, disease severity, L-DOPA dosage, and pulsatile administration) (1), its exact molecular mechanisms remain unresolved and there is no effective treatment for it (6,7). Animal models play an important role in therapeutics research. The 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP)-lesioned non-human primate was the first model of LID. It provides an accurate replication of the motor features of human dyskinesia, but the use of this model is largely limited by adverse factors like non-unified methodological criteria and high cost (8,9). Given the advantages of time- and cost-effectiveness and the ease of genetic manipulation, several rodent dyskinesia models have been developed since the 1990s. However, only a few dyskinesia assessment protocols have been validated based on available clinical agents known to have efficacy on LID. In this paper, we discuss the recent advances in modeling dyskinesia in rodents (mainly in unilateral 6-OHDA-lesioned rodents) and analyze the pros and cons of different dyskinesia assessment protocols.
We further discuss the major disputes and factors influencing the assessment procedures, as well as recent advances in the methods used to establish rodent dyskinesia models. This review aims to provide an application framework for the past and present rodent models of dyskinesia, to help researchers use the available animal models and behavioral assessment protocols to obtain reliable findings.

6-OHDA Rat Model

The unilateral 6-OHDA-lesioned rat model is a classical model of PD motor symptoms that was first introduced in the 1960s, soon after which drug-induced involuntary rotation was described in this model (10,11). Due to the lack of the typical motor impairments observed in human parkinsonism, i.e., bradykinesia, rigidity, and rest tremor, rotation long remained the only behavioral output in unilaterally 6-OHDA-lesioned rats, used to test parkinsonian disability and to model the response to DA replacement therapy (12,13). In the 1970s, Creese and Iversen observed a series of stereotypy responses induced by amphetamine in DA-denervated rats and established a rough rating scale in which the numbers stand for the presence or absence of a certain response state, i.e., sniffing, licking, and gnawing (14). Since then, the term stereotypy has been used to refer to dopaminergic drug-induced abnormal responses in parkinsonian rats (15,16). However, stereotypy did not fit specific symptomatic analogs in human patients, and its dyskinetic predictive value has been doubted by some scholars (17), as stereotyped behavior may also be induced in normal rats by over-stimulation of the DA system (18). In the 1990s, scholars found that changes in rotation behavior over time may be a useful model of motor fluctuations (19). Elsewhere, Cenci and colleagues first observed and defined the abnormal involuntary movements (AIMs) in rats after a 2- to 3-week administration of L-DOPA. The features matched the dyskinetic manifestations of PD patients and non-human primate models (25). Rat AIMs were classified into four subtypes according to the body parts involved, i.e., rotational locomotion, axial torsion, limb movements, and orolingual stereotypies, and each subtype was scored 0-4 separately according to the proportion of the monitoring period during which the body part was affected (25). However, rotational locomotion differs from the other three subtypes (collectively called ALO AIMs or Body AIMs) due to its unique properties, which are discussed in detail under the subsection "Controversies around the assessment of rodent dyskinesia." Later on, some modifications were made to this scale, e.g., the addition of amplitude scores for limb and axial AIMs to increase scoring accuracy (26), but such amplitude scores are too complicated to be applied in the majority of studies. Cenci's rating scale has since been widely accepted in dyskinesia studies, as it was validated by high-quality pharmacological trials and tests of LID molecular markers (25, 27-29). Although detailed rating rules, e.g., the turns and intervals of rating, were constantly revised by subsequent groups (30-32), the framework of this scale remains unchanged. Similarly, Steece-Collier et al. developed an independent rating scale for rat dyskinesia (33,34). This scale has more subcategories of dyskinetic movements, i.e., neck postural dysfunction, trunk dystonia, forelimb dystonia, hindlimb dystonia, contralateral forepaw dyskinesia, orolingual stereotypy, and forelimb-facial stereotypy. During the rating session, scores for each item are graded separately based on intensity and frequency and then multiplied, yielding so-called severity scores; the total LID score is calculated from these severity scores.
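Because the two scales aggregate scores differently, it helps to make the arithmetic explicit. The sketch below implements both aggregation rules as described above; the score ranges and data layout are illustrative assumptions, not part of either published protocol.

```python
def cenci_total(sessions):
    """Cenci-style total: each AIM subtype (axial, limb, orolingual,
    and, where used, locomotive) is scored 0-4 at every monitoring
    period; the total is the sum over subtypes and periods.
    `sessions` is a list of dicts, one per monitoring period."""
    return sum(sum(scores.values()) for scores in sessions)

def steece_collier_total(items):
    """Steece-Collier-style total: each subcategory (trunk dystonia,
    forelimb dystonia, etc.) gets an intensity and a frequency grade;
    the per-item severity score is their product, and the total LID
    score is the sum of the severity scores.
    `items` maps subcategory name -> (intensity, frequency)."""
    return sum(intensity * frequency for intensity, frequency in items.values())

# Illustrative usage with hypothetical ratings:
sessions = [{"axial": 2, "limb": 3, "orolingual": 1},
            {"axial": 3, "limb": 2, "orolingual": 2}]
print(cenci_total(sessions))                                     # 13
print(steece_collier_total({"trunk dystonia": (2, 3),
                            "forelimb dystonia": (1, 2)}))       # 8
```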
Steece-Collier's scale is mainly used for evaluating LID in rats with mesencephalic dopaminergic neuron grafts (34-36) and also for GID rating (37). Thus far, Henry's, Cenci's, and Steece-Collier's protocols have been used, with modifications, for the behavioral assessment of dyskinesia in rats by other researchers (24,30,38,39). Being the first protocol to quantitatively record LID in rats, Henry's protocol differs from the other two in many aspects, and its efficiency is controversial, as it is not clear whether rotational behavior is a manifestation of rodent dyskinesia (40). Despite similarities in the rating rules, there are remarkable differences in the rating items and monitoring styles between Cenci's and Steece-Collier's protocols (Table 1; among the limitations tabulated there are relatively poor inter-rater reliability, lack of pharmacological validation, the controversial predictive value of rotational sensitization, and single-point sampling at the peak time, which cannot cover the whole time-action curve of an agent). Whereas Cenci's scale is designed to avoid measuring stereotyped behaviors (orolingual AIM excepted), Steece-Collier's scale is a hybrid of both dyskinetic and stereotypy-like movement scoring (41). The fact that more stereotypic movements and fewer dystonic features occur in post-grafting rats might account for this difference in design (34), as the latter scale was initially designed for dopaminergic neuron transplantation-related studies. In its attempt to distinguish between choreiform and dystonic movements, Steece-Collier's scale is similar to those used in the clinic and seems to be more accurate in recording rodent dyskinesia, but single-point scoring only at the peak-dose period is considered a limitation.

6-OHDA Mouse Model

MPTP injection is the classic approach used to induce PD in mice. However, MPTP mice exhibit a prominent capacity for motor function recovery instead of persistent dysfunction; thus, they are suitable for studies of molecular and cellular mechanisms rather than for studying the symptomatology of PD (42). Indeed, some scholars have attempted to induce LID in MPTP mice with a high dose of L-DOPA and found that such a dose causes few dyskinetic subtypes, unlike those observed in the non-human primate model or the 6-OHDA-lesioned rat model (43). The study of LID in mice lagged far behind that in rats. This was the case until unilateral 6-OHDA-lesioned mice were introduced by Cenci's group and a similar AIM rating scale for mice was developed in 2004 (44). The advantage of the 6-OHDA lesion procedure lies in its stable and reproducible damage to the nigrostriatal DA neurons, besides its high predictability during the time course of DA degeneration (8). Compared with rat AIMs, mouse AIMs are more rapid, have a more simplified repertoire, and present more prominent rotational locomotion with fewer dystonic features, i.e., axial AIMs (44,45). High postoperative mortality used to be a major obstacle to the development of mouse models, especially those lesioned in the medial forebrain bundle (MFB), but recent progress in surgical procedures and postoperative support has improved this situation (46,47).

VALIDATION OF RODENT MODELS

Animal models play an irreplaceable role in the search for new medications for LID, and it is important for a model to be validated.
There are two main aspects of validity referred to here: predictive validity and construct validity. Predictive validity reflects how well the animal model imitates humans in response to experimental manipulation; thus, it can naturally be probed by testing the model's response to pharmacological agents known to alleviate or aggravate LID in the clinic. Construct validity, by contrast, examines whether the test performance is based on the actual neurobiological mechanisms underlying the disease; accordingly, it is assessed by quantifying dyskinesia-associated molecular markers in the animal model (48). Henry's group first compared the rotational behavior in rats caused by L-DOPA and bromocriptine (a DA agonist known to present low dyskinesiogenic potential when clinically used for PD treatment). They reported that there was no significant sensitization in the bromocriptine group (39). However, this conclusion was met with strong criticism (28,49). Cenci's protocols have been tested with other agonists of low dyskinesiogenic potential, including quinpirole, pramipexole, ropinirole, and bromocriptine, in rats (13,28,50) and mice (45). These tests highlighted a good predictive validity of the rodent AIMs, in line with the measures of dyskinesia that are used in both the clinic and non-human primate models. Moreover, L-DOPA-induced ALO AIMs could be ameliorated by clinically confirmed antidyskinetic compounds, including amantadine, clozapine, and yohimbine, in rats (28,29), which further supports AIM scores (locomotive AIM excepted) as a reliable assessment tool for dyskinesia. Although few pharmacological validations of Steece-Collier's protocol have been reported, a good correlation has recently been demonstrated between this protocol and that developed by Cenci (41). It was previously demonstrated that striatal mRNA levels of opioid precursors and related proteins, i.e., preproenkephalin-A (PPE-A), preproenkephalin-B (PPE-B), and the 67-kDa isoform of glutamic acid decarboxylase (GAD67), correlate closely, if not specifically, with dyskinesia expression in primate PD models (7); these are also postsynaptic markers of the basal ganglia output pathways. To test the construct validity of Henry's protocol, several groups have reported upregulation of PPE-A, PPE-B (39), or the dynorphin precursor Pdyn (24) mRNA levels in response to rotational sensitization in rats treated repeatedly with L-DOPA; however, the lack of subgroup analyses in these studies made it difficult to confirm that the upregulation of these markers is a specific response to rotation behavior rather than a general effect of drug treatment. In Cenci's protocol, better experimental controls were designed, which highlighted a linear positive correlation between ALO AIM scores and the expression levels of these molecules in rats (25-27) and mice (44,51). Also using Cenci's protocol, a strong positive correlation was found between the expression level of the transcription factor FosB/ΔFosB and dyskinesia severity in rats (27) and mice (44), which was later confirmed to occur in PD patients (52,53) and non-human primate models (54,55). Overall, these studies demonstrate the validity of Cenci's protocol. A similar validation of Steece-Collier's protocol has also been performed, but only in a few reports (34,35).

CONTROVERSIES AROUND THE ASSESSMENT OF RODENT DYSKINESIA

The Predictive Value of the Rotation

Few behaviors are more impressive than drug-induced rotation in hemiparkinsonian rodents.
DA agonists induce contralateral rotation, while agents that increase DA levels, e.g., amphetamine and amantadine, cause ipsilateral rotation. The physiological mechanism of this specific behavior has not been completely elucidated (28,56). However, the dominant DA-dependent hypothesis holds that circling is the result of a bilateral dopaminergic imbalance in the striatum and that animals always tend to rotate toward the side with lower striatal dopaminergic activity (57). The interpretation of rotation has always been a focus of debate. For a long time, the intensity of rotation was regarded as a correlative index of the degree of neurotoxin-induced DA denervation for screening well-lesioned animals, as well as an indicator for evaluating the anti-akinetic efficacy of antiparkinsonian drugs (12,58,59). An inherent contradiction of this theory lies in the following reasoning: if the intensity of rotation is equivalent to the therapeutic effect, then the sensitization of rotational behavior should naturally be interpreted as an enhanced therapeutic action, which is plainly inconsistent with the clinical fact that the efficacy of L-DOPA gradually decreases as the medication persists (12). Moreover, L-DOPA-induced rotation is a purposeless movement that is poorly correlated with the improvement of motor function as determined by the cylinder test or rotarod test (28,60), which are sensitive measures of motor impairment in hemiparkinsonian rodents. Presently, the majority of studies have reached a consensus that the measure of rotation should not be included in the AIM score (29,49,61,62) and that a shortened duration of the rotation response should be viewed as a behavioral index of a "wearing-off"-like phenomenon (59,61), but the behavioral significance of rotation requires further clarification.

Environmental Interference

Environmental factors, mainly the test apparatus in which dyskinesia and rotation measures are performed, have generally not been considered as a source of interference. Various designs and sizes of test apparatus are employed in different laboratories, including hemispheric bowls (i.e., automated rotometry), rectangular boxes (i.e., rearing cages), and cylindrical containers. Pinna et al. first reported that a more prominent sensitization of rotational behavior was induced when rats were tested in the hemispheric apparatus compared with rectangular boxes (63). However, this idea was challenged by other scholars (29,41), as different counting procedures, i.e., automated rotometry and visual observation, were used in this case. Conversely, it was found that all AIM scores, Steece-Collier's scores, and the number of rotation turns in the Cenci model were not significantly different between round cylinders and square boxes (41). Nevertheless, it should be noted that a number of previous studies that confirmed the environment-dependent behavioral response to DA agonists were performed in non-parkinsonian rodents (64-66). According to our observations (unpublished), the motor response to L-DOPA is always disturbed or suspended when the rat moves to the corners of a rectangular box, and more dyskinetic features but less rotation behavior are presented in an enclosed, smaller container compared with the open field. A possible explanation for this difference might be that the container's walls facilitate a bipedal standing position in rats, which is associated with ALO AIMs (67).
Based on these clues, it is reasonable to speculate that the test environment influences the rodent dyskinesia assessment; however, further supporting evidence is required to confirm this concept.

DA Denervation

Numerous studies have demonstrated a positive but nonlinear correlation between AIM scores and the extent of DA denervation in rats (26,27,67) and mice (44,45), quantified by spared nigral DA neurons (determined by tyrosine hydroxylase immunohistochemistry) and/or striatal DA fibers (determined by DA transporter autoradiography). These studies found that residual DA innervation below a critical threshold value is a necessary prerequisite for animals to develop AIMs. In rats, it has been reported that <10% DA cell sparing is needed for overt ALO AIMs, while a stricter <5% DA cell sparing is required for locomotive AIMs (26). This is in line with the mainstream finding that DA denervation is a necessary but not sufficient condition for the development of dyskinesia in patients (68).

Animal Individual Differences

Similar to human dyskinesia, hemiparkinsonian rodents exhibit significant differences in the latency, severity, and subtypes of AIMs across individuals, although these features are relatively constant within one animal (27,67). This cannot be explained solely by the degree of DA denervation, as AIM scores show a dispersed distribution even among rats with the most severe denervation. Although the mechanisms are not fully understood, it has been proposed that the presence of non-dyskinetic animals and a high inter-individual variability in AIM severity might contribute to understanding the factors that promote human dyskinesia (8,69).

6-OHDA Lesion Sites

For nigrostriatal lesions, two main intracranial injection sites are used: the MFB and the striatum. The classical MFB model produces a fast (<3-4 days) and severe (usually <20% residual innervation) DA depletion that imitates the advanced stage of PD, while striatal lesioning leads to a protracted and moderate retrograde degeneration (1-3 weeks) more similar to that seen in parkinsonian patients; for a review, see (42). A study by Winkler et al. compared dyskinetic features between these two models and found an overall lower incidence, lesser severity, and different topographic distribution of AIMs in the latter model under the same L-DOPA dosage (26). Importantly, there was no locomotive AIM (or rotation) in intrastriatally lesioned rats. This may be ascribed to the fact that locomotion in rodents is mediated by DA fibers in the medial subregion of the striatum, and intrastriatal administration of 6-OHDA triggers a focal destruction only in the ventrolateral striatum, but not in the medial striatum; in contrast, the MFB lesion damages the DA fibers in the whole striatum (26,27). Correspondingly, intrastriatally lesioned mice were less likely to develop dyskinesia compared with MFB-lesioned ones, but there was no difference in the representation of AIM subtypes (44). It was proposed that different brain sizes might account for this inter-species variation. An unusual characteristic of striatum-lesioned mice is the decline of AIM severity after a few weeks of L-DOPA treatment (45). The mechanism for this observation is poorly understood, but a postsynaptic hypothesis, viz., the desensitization of the postsynaptic D1 receptor (45), and the shortening of the motor response duration to L-DOPA (19,45,70), similar to the "wearing-off" phenomenon, one of the late complications of drug treatment for PD, may account for it.
The Dosage Regimen of L-DOPA and Benserazide

A dose-dependent relationship has been demonstrated between L-DOPA and the intensity of rotation, rotational sensitization, and AIM scores (49). Given its higher water solubility, L-DOPA methyl ester hydrochloride (methyl L-DOPA) is commonly used in many laboratories instead of ordinary L-DOPA. However, the dose of methyl L-DOPA adopted in previous studies varies from 2.5 to 50 mg/kg, making it challenging to directly compare results between such studies. Cenci and Lundblad proposed 6-10 mg/kg per injection as a standard dosage for rats and MFB-lesioned mice in dyskinesia studies, but a three- to four-fold dosage is required for intrastriatally lesioned mice to yield similar dyskinetic responses (71). It was reported that a standard dose regimen produced pronounced inter-individual variation in dyskinesia severity, whereas a high-dose regimen (usually >25 mg/kg/day) evoked a more uniform response with rotation in all animals (69). Besides the dosage per injection, the administration frequency may also influence animal behaviors. Rodents receiving frequent injections maintain a hyperactive state for a longer time during the day, but applying the dose once or twice per day is considered sufficient for dyskinesia studies (71). Benserazide, a peripheral inhibitor of aromatic amino acid decarboxylase (AADC), is used concomitantly with L-DOPA to increase its effective concentration in the CNS while reducing its peripheral side effects. It has been found that benserazide prolongs the action time of L-DOPA dose-dependently in rodents (72). According to Cenci's report, when benserazide was applied at a dose of <8 mg/kg per day, the duration of L-DOPA action lasted for 100 min, which diminished the AIM scores. In contrast to the fixed L-DOPA:benserazide ratio of 4:1 in standard clinical preparations, the dosage of benserazide for rodents varies greatly among studies; thus, a dose of 12 to 15 mg/kg/day has been proposed as the appropriate regimen for inducing rodent dyskinesia (71).

The Length and Turns of Monitoring

As mentioned above, a sufficiently long monitoring period is important for evaluating the anti-dyskinetic potential of different drugs, especially those with unknown pharmacodynamic profiles. The original version of Cenci's protocol adopted 180-min monitoring with nine turns of scoring (25,27,44), but different lengths (such as 60, 120, 160, and 240 min) and turns (2, 4, 8, and 12, respectively) of monitoring have also been used (31,32,73,74). Whatever length is employed, it must be able to cover the whole time-action curve of the agents being investigated. Furthermore, multiple time-point evaluation has been found to be superior to the selected peak-time assessment adopted in Steece-Collier's protocol, as some agents, e.g., amantadine, may reduce dyskinesia severity at peak time but prolong the duration of motor dysfunction (41).

Priming Effect

Priming refers to the phenomenon in which a single or repeated exposure to specific DA agonists leads to a long-lasting enhancement of the locomotor and stereotypic behaviors elicited by a subsequent DA agonist challenge (75). A study showed that two administrations of apomorphine 1 month earlier remarkably promoted rotation behavior and AIMs during subsequent chronic DA replacement therapy, inducing not only a higher degree of turning and higher AIM scores but also an earlier emergence of their peak values, which occurred on the first rating day (13).
Thus, the priming effect should be seriously taken into consideration, as dopaminergic drugs, e.g., apomorphine, are commonly used for screening well-lesioned animals. Alternative drug-free tests such as the cylinder test have been found to be effective in avoiding such interference (71).

Strain Differences

Unlike the sensitivity to MPTP, the sensitivity to 6-OHDA among different strains of rats and mice is relatively stable (42); thus, strain differences are usually not considered in rodent dyskinesia studies. So far, almost all published studies have been carried out in Sprague-Dawley rats, Wistar rats, or C57BL6 mice. Thiele et al. compared the rotation behavior and AIMs of pure FVB and FVB/C57BL6 mice following chronic L-DOPA treatment. They observed that the latter were more prone to developing dyskinetic behaviors under the same dosing regimen (76). No other strain comparisons have been reported so far, but such latent inter-strain differences should be considered, as different strains of transgenic mice might be used for dyskinesia studies in the future.

CURRENT USAGE AND ADVANCES IN THE USE OF RODENTS IN DYSKINESIA STUDIES

Given the developments witnessed in the past two decades, the use of rodents as models for dyskinesia has gradually become mainstream in academia (8,69). Cenci's protocol has become the most widely accepted option for evaluating dyskinesia in 6-OHDA-lesioned rodents. On the basis of this protocol, numerous pharmacological trials have led to the discovery of several promising therapeutic targets in recent years, including the GABA system (77-79), the serotonin system (80-84), adenosine receptors (85-88), opioid receptors (89-92), neuronal nitric oxide synthase inhibition (93), β2 nicotinic receptors (94-98), and cation channels (99,100). Confidence in the application of rodent models has been further boosted by the finding that results on the GABA and serotonin systems translate quite well when the corresponding agents are applied in non-human primate models and clinical trials (8). Moreover, owing to the ease of genetic engineering (both knock-ins and knock-outs), the mouse has become the standard model for studying the molecular pathways of dyskinesia. This model has led to the discovery that the ERK1/2-DARPP32-mTOR and Ras-GRF1 signaling pathways play important roles in the development of LID (51,55,101-103). Previous studies have shown that changes in synaptic plasticity influence the pathogenesis of dyskinesia (104,105). It has been demonstrated that bidirectional synaptic plasticity, including long-term potentiation (LTP) and long-term depression (LTD), exists at corticostriatal synapses in normal rats; this bidirectional synaptic plasticity is essential in maintaining stable motor function (106). In LID rats, the indirect pathway is characterized only by LTD, while the direct pathway is characterized by LTP (107), which leads to the manifestation of dyskinesia. In addition, many therapeutic approaches targeting pathways such as the mTOR pathway (108,109), Ras-ERK signaling (110,111), M4 muscarinic receptor signaling (107), and metabotropic glutamate receptors (112,113) have shown high potential in alleviating LID in rats. In recent years, multichannel electrophysiological recording technology has enabled synchronous observation and recording of local field potentials (LFP), spike signals, and behavior in multiple brain regions, which paves the way for a deeper understanding of the mechanisms of abnormal neural loops in PD and dyskinesia.
The β-oscillations associated with motor inhibition in the cortico-basal ganglia circuit (114,115) and the gamma oscillations associated with dyskinesia (116,117) have been confirmed in various animal models, such as rats, mice, and non-human primates, and in human experiments. Although the mechanisms of these electrophysiological abnormalities are not fully known, the relationship between electrophysiological indicators and pathological status provides new avenues for evaluating the efficacy of drugs on dyskinesia. Over the past 10 years, a number of discoveries have been made that have expanded our understanding of the role of rodent models in LID studies. Paille et al. established the first bilateral 6-OHDA-lesioned rat model for dyskinesia study, although this model was characterized by unilateral AIMs (118). Interestingly, one study reported that asymmetric severity of AIMs was induced in rats receiving left-first, right-first, or inverse sequential bilateral lesions (119). This may have a profound effect on rodent LID models if it is proven to be correct in further studies. Although tremendous progress has been made in developing genetic animal models of PD in recent years, there are still many challenges affecting behavioral research. One of the key limitations is that these models cannot reproduce the degeneration of dopaminergic neurons observed in PD patients. Mouse models overexpressing genes involved in some familial PD forms, such as α-synuclein and leucine-rich repeat kinase 2, do not undergo dopaminergic cell loss, so LID cannot be induced in them with levodopa treatment (120), and there is currently no single PD animal model that perfectly replicates all the core features of PD (121). Nevertheless, these models still play an important role in exploring the molecular mechanisms of early pathological changes in PD. It has been reported that RGS9 knockout mice develop dyskinesia when D2-like dopamine receptors (DRs) are subsequently activated following inhibition of dopaminergic transmission (122). Pitx3 ak/ak mice, or aphakia mice, are established by deletion of the Pitx3 promoter region; they display a spontaneous recessive phenotype characterized by a lack of lens in their small eyes (123). During development, both cerebral hemispheres of these mice fail to form the nigrostriatal DA projection, and hence they are more likely to develop LID. In 6-OHDA and other drug-induced PD mouse models, it is not easy to achieve bilateral DA depletion without excessive mortality (124). Cao et al. (125) injected a virus vector carrying FosB cDNA into the lesioned striatum of rats, and Feyder et al. (126) developed a PD model of mitogen- and stress-activated kinase 1 knockout (MSK1-KO) mice and FosB- or cJun-overexpressing transgenic mice with 6-OHDA. In both studies, chronic L-DOPA treatment successfully induced LID.

CONCLUSIONS

The development of unilateral 6-OHDA-lesioned rodent models and the establishment of corresponding behavioral assessment protocols have greatly promoted research on dyskinesia over the past two decades. This has led to the discovery of several novel therapeutic agents to control this intractable complication of advanced PD. In spite of the controversies surrounding behavioral assessment in these models, rodent models are still powerful and cost-effective tools in dyskinesia studies.
In order to properly assess abnormal involuntary movements in PD rodents and to conduct studies of the related mechanisms and drug development accurately, these behavioral assessment methods and rating scales need to be used rationally.

AUTHOR CONTRIBUTIONS
QP, SZ, YT, WZ, JW, CC, YW, and XY reviewed the literature and drafted the manuscript. YX and XC revised and proofread the manuscript. All authors read and approved the final manuscript.
2019-10-11T14:35:44.595Z
2019-10-11T00:00:00.000
{ "year": 2019, "sha1": "13202a5c6b032d740f013fe0e8a6a7ad08347963", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fneur.2019.01016/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "13202a5c6b032d740f013fe0e8a6a7ad08347963", "s2fieldsofstudy": [ "Psychology", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
230961701
pes2o/s2orc
v3-fos-license
STUDY ON THE PLANT SPECIES, ACORUS CALAMUS FOR INSECTICIDAL PROPERTIES AGAINST THE FILARIAL VECTOR, CULEX QUINQUEFASCIATUS SAY. (DIPTERA: CULICIDAE).
Leaf extract of the species Acorus calamus was evaluated for effects on egg hatchability, larvicidal and pupicidal activity, and the protein content of the mosquito Culex quinquefasciatus at room temperature in the laboratory. Dosages, expressed in ppm, ranged from 10 to 140 for Culex quinquefasciatus. A relationship was observed between the plant extract doses and percentage mortality: the percentage of egg hatchability decreased, while larval and pupal mortality increased with increasing dosage. Based on the probit analysis, the LC50 values for eggs (99.15), I instar larvae (42.24), IV instar larvae (101.49) and pupae (121.57 ppm) were recorded. The ovary protein content of the treated group (0.083) was estimated to be very low compared with that of the control (0.172 mg/mL).

INTRODUCTION
Vector control is a serious concern in developing countries like India due to lack of general awareness, development of resistance and socioeconomic reasons. The role of mosquitoes has become increasingly important in recent years because of changes in ecology caused by human intervention. Mosquitoes constitute a major public health menace as vectors of serious human diseases (Logankumar et al., 2008). Of the various mosquito-borne diseases, filariasis, transmitted by Culex quinquefasciatus and caused by Wuchereria bancrofti, is dangerous and has taken an epidemic form, with cases reported from Tamil Nadu, West Bengal, Uttar Pradesh, Gujarat and Delhi (Kebra et al., 1992). In recent years, scientists have tried a variety of botanical derivatives to eradicate many harmful insect pests, including mosquitoes; the insecticidal activity of neem, for example, has been reported. Vector control is facing a threat due to the emergence of resistance to synthetic insecticides, and insecticides of botanical origin may serve as suitable alternative biocontrol techniques in the future (Nandita Chowdhury et al., 2008). Aedes mosquitoes are vectors for the pathogens of various diseases like dengue fever, dengue haemorrhagic fever and yellow fever (Rajmohan and Ramaswamy, 2007). Many authors worldwide have undertaken large-scale screening of extracts of medicinal and herbaceous plants to control mosquitoes (Halawa, 2001; Das et al., 2003; Choochote et al., 2004). The plant species Acorus calamus is a widely distributed neotropical shrub introduced to many parts of the tropics. For the present study, this species was screened for effects on egg hatchability and larval and pupal mortality of the mosquito Culex quinquefasciatus, so as to control the population of Culex quinquefasciatus by an eco-friendly approach.

MATERIALS AND METHODS
Fresh leaves of Acorus calamus were collected from plants growing in agricultural lands. The leaves were washed, shade-dried and ground in a mixer to form a fine powder. Then, 25 g of the powder was used for extraction in acetone in a Soxhlet apparatus. The extract was concentrated on a water bath to evaporate the acetone. The filtrate was considered pure material and redissolved in acetone to form a standard formulation. Different doses (in ppm) were prepared by further dilution with the required amount of water. Eggs of Culex quinquefasciatus were procured from the Research Laboratory of the National Institute of Communicable Diseases (NICD), Mettupalayam, Coimbatore, brought to the laboratory and cultured.
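As an illustration of the dilution step described above, the following is a minimal sketch based on the standard relation C1·V1 = C2·V2; the 1000 ppm stock strength and the 200 mL test volume are assumptions for illustration and are not reported in the paper:

```python
# A minimal sketch of preparing the test doses (C1*V1 = C2*V2); the stock
# concentration and test volume are assumed values, not the paper's.
stock_ppm = 1000.0      # assumed strength of the acetone stock formulation
test_volume_ml = 200.0  # assumed final volume of each test beaker

for dose_ppm in range(10, 150, 10):               # 10 ... 140 ppm
    extract_ml = dose_ppm * test_volume_ml / stock_ppm
    water_ml = test_volume_ml - extract_ml
    print(f"{dose_ppm:>3} ppm: {extract_ml:5.1f} mL stock"
          f" + {water_ml:5.1f} mL water")
```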
Eggs, first and fourth larval instars and pupae were harvested from the colony and placed in different concentrations of the extract as a biocide. Twenty individuals were used for each concentration. Eggs, larval instars and pupae were checked for mortality every 24 hours. In the case of the control, only the carrier solvent was added. Food was provided in all the test beakers. Each test was replicated five times. The effect of the leaf extract of Acorus calamus on egg hatchability, mortality of first and fourth larval instars and pupal mortality of Culex quinquefasciatus was studied. Mortalities observed after 24 hours were corrected for natural response by Abbott's formula (Abbott, 1925) as follows: corrected mortality (%) = ((T − C) / (100 − C)) × 100, where T is the observed mortality (%) in the treatment and C is the mortality (%) in the control. Busvin (1971) suggested that the critical doses of susceptibility can be estimated with sufficient accuracy from a probit/log-concentration graph. Based on the log concentration and the probit mortality percentage values, a regression equation was obtained, and using the regression, a straight line was fitted. The fit of the regression line and the homogeneity of the population were also tested employing the chi-square (χ²) test. By graphical interpolation, the LC50 values of the leaf extract for 24 hours of exposure of eggs, first and fourth instar larvae and pupae of Culex quinquefasciatus and their fiducial limits (95% upper and lower fiducial limits) were calculated. Blood-fed females that survived pupal treatment with any concentration of the plant extract were harvested at different hours. The ovaries were carefully dissected out and washed in physiological saline solution. Water adhering to the tissues was removed using filter paper. The ovaries were weighed and homogenized in phosphate buffer (pH 7.0; 0.01 M). The sample was then centrifuged at 5000 rpm for 10 minutes, and the supernatant was taken for the estimation of protein by adopting standard methods for protein determination.

RESULTS AND DISCUSSION
Mortality values of eggs, larvae and pupae treated with different concentrations (ranging from 10 ppm to 140 ppm) of the leaf extract of Acorus calamus at the end of 24 h are given in Tables 1-4 for eggs, I instar larvae, IV instar larvae and pupae of Culex quinquefasciatus. The LC50 values and their 95% upper and lower fiducial limits, and the chi-square values of the leaf extract of Acorus calamus for 24 h exposure of Culex quinquefasciatus are given in Table 5. Based on the probit analysis, the 24 h LC50 values of the leaf extract of Acorus calamus for eggs, I instar and IV instar larvae and pupae of Culex quinquefasciatus were found to be 99.15, 42.24, 101.49 and 121.57, respectively (Fig. 1). The ovary protein content of the treated group was estimated to be very low compared with that of the control (Table 6). An important observation of the present study was that enzyme activity declined invariably in all treatments of Culex quinquefasciatus with Acorus calamus leaf extract. A decline in ovary protein content will reduce the egg-laying capacity of Culex quinquefasciatus, whereas inhibition of enzyme activity will help in arresting the developmental stages. The inhibitory effect of Acorus calamus leaf extract was found to be higher than that of synthetic inhibitors. A visible morphological abnormality in the treated groups was that larvae were smaller than the controls.
Pupae that survived larval treatment frequently showed a variety of malformations, such as demelanized pupae with a straight abdomen, partially melanised pupae with an extended abdomen, dwarf pupae with a retarded abdomen, dechitinised pupae, and inability of the adult to shed its exuviae completely, which remained attached to its appendages. These results agree with earlier findings made by many workers with botanicals for various properties (oviposition avoidance, Tilak et al., 2005; larvicidal, Halawa, 2001; Khater, 2003; Saleh, 1995; adulticidal, Choochote et al., 2004; and repellent activities, Choochote et al., 2004; Prakash et al., 2000). As botanical insecticides, including the extract of C. odorata, are biodegradable and harmless to the environment, pest-specific and relatively harmless to non-target organisms (Su and Mulla, 1998; Sivagnaname and Kalyana Sundaram, 2004; Sun et al., 2006), they are more eco-friendly. Three active principal compounds reported in the study species Acorus calamus (Augustineolide, 3-β-6-hydroxy dihydroxy cartin and 6-acetoxihumininolide) have been determined to have mosquitocidal properties, which is perhaps a reason for the medicinal use of this species as a mosquito repellent (Logankumar, 2006). The results of the present study indicate that the leaf extract of the species Acorus calamus caused a low percentage of egg hatchability and a high percentage of larval and pupal mortality. Hence, the large biomass of Acorus calamus available in Southern India can be used in the pharmacological industries to obtain an effective repellent to control the mosquito population in an eco-friendly manner.

Larval mortality after 24 h exposure (20 larvae exposed per replicate; entries are alive/dead):
Concentration (ppm):   10      20      30      40      50
Replicate 1:          19/1    17/3    12/8    11/9    2/18
Replicate 2:          18/2    15/5    11/9    10/10   4/16
Replicate 3:          19/1    16/4    12/8    10/10   3/17
Replicate 4:          16/4    17/3    13/7    12/8    4/16
Replicate 5:          17/3    15/5    11/9    10/10   5/-
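To illustrate the Abbott correction and the probit/log-concentration procedure described in the Materials and Methods, the following is a minimal sketch using the alive/dead counts of the first replicate in the table above; the 0% control mortality is an assumption, and the GLM-based fit stands in for the graphical interpolation used in the paper:

```python
# A minimal sketch of Abbott's correction and a probit/log-concentration fit;
# dose/mortality counts come from the first replicate of the table above,
# while the 0% control mortality is assumed for illustration.
import numpy as np
import statsmodels.api as sm

def abbott(treated_pct, control_pct):
    """Abbott (1925): correct treated mortality for natural (control) response."""
    return (treated_pct - control_pct) / (100.0 - control_pct) * 100.0

doses = np.array([10.0, 20.0, 30.0, 40.0, 50.0])   # ppm
dead = np.array([1, 3, 8, 9, 18])                  # out of 20 exposed each
exposed = np.full(doses.shape, 20)
corrected = abbott(100.0 * dead / exposed, 0.0)    # identity when control = 0%

# Probit regression of mortality on log10(concentration); with 0% control
# mortality the raw counts can be used directly.
X = sm.add_constant(np.log10(doses))
fit = sm.GLM(np.column_stack([dead, exposed - dead]), X,
             family=sm.families.Binomial(link=sm.families.links.Probit())).fit()
b0, b1 = fit.params
lc50 = 10 ** (-b0 / b1)   # probit(0.5) = 0 implies log10(LC50) = -b0/b1
print(f"LC50 ~ {lc50:.1f} ppm")
```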
2020-01-30T09:03:24.045Z
2014-06-30T00:00:00.000
{ "year": 2014, "sha1": "d066aa67e0efe65b7dc7db43204d72c1a67b556d", "oa_license": "CCBY", "oa_url": "https://krjournal.com/index.php/krj/article/download/22/343", "oa_status": "HYBRID", "pdf_src": "Unpaywall", "pdf_hash": "d066aa67e0efe65b7dc7db43204d72c1a67b556d", "s2fieldsofstudy": [ "Environmental Science", "Agricultural and Food Sciences" ], "extfieldsofstudy": [ "Biology" ] }
258248926
pes2o/s2orc
v3-fos-license
Omega-3 Fatty Acids during Pregnancy in Indigenous Australian Women of the Gomeroi Gaaynggal Cohort
Higher dietary intakes of Omega-3 long-chain polyunsaturated fatty acids (n-3 LC-PUFAs) have been linked to lower rates of preterm birth and preeclampsia. The aim of this analysis was to describe dietary intake and fractions of red blood cell (RBC) membrane LC-PUFAs during pregnancy in a cohort of Indigenous Australian women. Maternal dietary intake was assessed using two validated dietary assessment tools and quantified using the AUSNUT (Australian Food and Nutrient) 2011–2013 database. Analysis from a 3-month food frequency questionnaire indicated that 83% of this cohort met national n-3 LC-PUFA recommendations, with 59% meeting alpha-linolenic acid (ALA) recommendations. No nutritional supplements used by the women contained n-3 LC-PUFAs. Over 90% of women had no detectable level of ALA in their RBC membranes, and the median Omega-3 Index was 5.5%. This analysis appears to illustrate a decline in concentrations of eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA) across gestation in women who had preterm birth. However, there was no visible trend in LC-PUFA fractions in women who experienced hypertension during pregnancy. Further research is needed to better understand the link between dietary intake of n-3 LC-PUFA-rich foods and the role of fatty acids in preterm birth and preeclampsia.

Introduction
Long-chain polyunsaturated fatty acids (LC-PUFAs) are a group of fatty acids (FAs) that are necessary for the normal growth and development of both mothers and babies during pregnancy. Much is known about the role of Omega-3 (n-3) LC-PUFAs in fetal development, as they support the development of the fetal nervous system and retina [1,2]. The n-3 LC-PUFAs also support the mother's overall health through the physically challenging time of pregnancy. They are known to play an important structural role within cell membranes, strengthen cognitive function, and lower cardiovascular disease risk factors via their anti-inflammatory properties [3][4][5]. There is also a growing body of evidence linking higher levels of n-3 LC-PUFA consumption with improved outcomes during pregnancy, including lower rates of preterm birth and preeclampsia. The aim of this analysis was to describe these measures in the cohort and map the results against the key pregnancy outcomes of hypertension during pregnancy (HDP) and preterm birth.

Study Design
The Gomeroi Gaaynggal study was a prospective longitudinal study of Indigenous and non-Indigenous women carrying Indigenous Australian babies throughout their pregnancy and into early childhood [20]. The study aimed to achieve meaningful and positive health outcomes for Indigenous mothers and their babies by examining a wide range of markers, including nutrition and chronic disease risk factors. The Gomeroi Gaaynggal cohort was established in close partnership with the local Indigenous communities involved [21]. Prior to recruitment commencing, an extensive 2-year community consultation process was undertaken to ensure that the objectives of the study were planned with communities to identify their interests [21]. A study plan was formalised as a result of these conversations. Recruitment of participants occurred between 2009-2019 and was facilitated by Indigenous members of the research team within antenatal clinics. In 2013, the Gomeroi Gaaynggal Advisory Committee was established to ensure a validated, Indigenous-led governance structure across all projects within the study.
The Advisory Committee was consulted for initial approval to proceed with the analysis outlined in this paper in May 2021. Final approval was gained from the committee prior to the submission of the paper for publishing, with further input and consultation scheduled throughout all stages of the analysis. Additional details of the cohort study protocol have been previously published [20].

Ethics
Ethics approval for the Gomeroi Gaaynggal study was received from the relevant human research ethics committees.

Setting
Recruitment of the cohort took place in two towns in NSW, Australia. Town 1 covers an area of 9900 km² and is located 400 km northwest of the nearest major city and 200 km inland from the east coast of Australia. It has a population of approximately 60,000 people, with a median weekly household income of $AUD 1180 [22]. Approximately 10% of residents identify as Indigenous, compared with the NSW average of 3%, and the Indigenous median weekly household income is $AUD 1106 [22]. Town 2 is located 550 km northwest of the nearest major city and 500 km from the east coast. It has approximately 2150 people, with 43% of those identifying as Indigenous [23]. The median weekly household income of the town is $AUD 1039, compared with $AUD 789 for its Indigenous residents [23].

Participants
Pregnant women who identified as Indigenous Australian, as well as non-Indigenous women carrying Indigenous babies, were eligible to enrol in the study at any stage of their pregnancy. Written informed consent was obtained from all study participants following consultation with Indigenous members of the research team to ensure each mother had a thorough understanding of the study.

Demographics and Other Health Factors
An online survey was administered to obtain additional demographic data from participants during study visits, with Indigenous members of the research team available for assistance if required. Questions were predominantly multiple-choice, with space to provide additional detail or explanation where necessary. Information was collected about age, Indigenous identity, gravidity and parity, income, education, employment, and any medical history related to obstetrics and reproductive health. Diabetes status, gestational age at delivery, and birth weight of the baby were obtained from hospital records or collected directly from participants where hospital records were not available. Smoking status was also established and considered positive if the participant reported smoking at any time during their pregnancy. Participants self-reported their pre-pregnancy weight, and a member of the research team measured their current height and weight. The pre-pregnancy body mass index (BMI) of participants was then calculated [weight (kg)/height (m)²]. These results should be interpreted with caution as BMI measures have not been validated to accurately estimate overweight and obesity in Indigenous Australian populations [24]. Individual sub-categories within pre-pregnancy BMI and diabetes status were grouped and presented accordingly to protect the anonymity of participants.

Assessment of Pregnancy Outcomes
Participants were categorised as having HDP as reported on their antenatal hospital records. This included a diagnosis of gestational hypertension, preeclampsia, or eclampsia, with the three conditions grouped to protect the anonymity of participants.
Where hospital records were unavailable, participants were defined as having preeclampsia based on blood pressure measurements at least 4 h apart after 20 weeks of gestation and before the onset of labour, in which systolic pressure was ≥140 mmHg and/or diastolic pressure was ≥90 mmHg with proteinuria (urinary protein ≥ 300 mg/24 h, spot urine protein:creatinine ratio ≥30 mg/mmol creatinine, or urine dipstick protein ≥ ++) [25]. Preterm birth was categorised as the birth of a live infant earlier than 37 weeks of gestation. This was determined by the gestational age reported on hospital antenatal records or calculated based on gestational age determined from ultrasound records and date of birth where hospital records were unavailable. Participants could be assigned to both the HDP and preterm birth outcome groups if they experienced both pathologies during their pregnancy.

Dietary and Supplemental Intake of Omega-3 Fatty Acids
Two validated dietary assessment tools were used for the nutritional analyses in this study, with all data collected between 2014-2019 [26,27]. Dietary assessments were not performed in the cohort prior to this time. Nutrient intakes from both dietary data sets have been quantified using the AUSNUT (Australian Food and Nutrient) 2011-2013 database, which was considered the most comprehensive source of nutrient information in Australia [28]. AI values were used to determine participants' nutritional adequacy of n-3 FA intake during pregnancy, with intake from each tool reported separately. A 24-h food recall was collected from participants during the earlier stages of their pregnancy. Prior to 2018, this data was obtained from participants via a structured interview with a qualified dietitian using the validated triple pass method [26]. The triple pass method ensures a comprehensive account of all foods consumed in the previous 24 h by asking about the participants' intake at three different stages in the interview. This method provides a short-term, detailed view of intake, including data on individual food and beverage items and their quantities [26]. A qualified dietitian then entered this data as food records into the Australian version of the Automated Self-Administered 24-Hour (ASA24) Dietary Assessment Tool [29]. From 2018, dietary intake data has been self-recorded by the participant via the National Cancer Institute's ASA24 (Australia), a validated multiple-pass method. Both a trained Indigenous research assistant and a qualified dietitian were with the participant to assist with any questions about the survey. Total energy (kJ), gram weight and nutrient content of individual food and supplement items were quantified. As supplements are often used during pregnancy to support increased nutritional requirements, the AUSNUT 2011-2013 Dietary Supplement Nutrient Database was used to categorise and account for supplementation in this analysis [9,28,30]. In addition, the Australian Eating Survey Food Frequency Questionnaire (AES FFQ) was used during the third trimester to assess dietary intake across the pregnancy. The AES FFQ is a self-administered tool that captures estimated dietary intake over the previous six months via 120 semi-quantitative questions [27]. The surveyed food list is thorough enough to allow estimation and ranking of usual macronutrient and micronutrient intakes, with n-3 LC-PUFA consumption measured from food items only [27]. A response for each food or food type is a frequency, with options ranging from 'never' to 'four or more times per day'.
The AES FFQ has been shown to provide a valid and reliable estimate of dietary intakes of Australian adults (median age: females 41.3 years, males 44.9 years) over the previous six months [27], with validity demonstrated for n-3 LC-PUFA dietary intakes compared to red blood cell membrane FAs in both children and adults [31]. Further information about this survey has been previously published [27].

Red Blood Cell Sample Collection and Analysis
Blood samples were collected from participants during each trimester of their pregnancy by an Indigenous research assistant trained in phlebotomy. Random (non-fasting) samples were collected in EDTA tubes and stored on ice until centrifugation at 3000×g at 4 °C for 10 min. RBCs were separated and stored at −70 °C before analysis. A sub-sample of participants from the cohort was chosen based on their preterm birth and HDP status. These women were then case-matched to those with an uncomplicated pregnancy based on their stage of the trimester, followed by age and pre-pregnancy BMI where possible.

Erythrocyte Membrane Fatty Acid Preparation
Using the method described by Tomoda et al. [32], the erythrocytes were lysed, and their membranes were solubilised and purified. A total of 500 µL of erythrocytes were vortexed with 12 mL of hypotonic tris buffer (10 mM tris(hydroxymethyl)aminomethane/5 mM ascorbate buffer, pH 7.4). After standing on ice for five minutes, 12 mL of 0.25 M glucose solution was added and vortexed again. Following another five minutes on ice, the sample was centrifuged at 10,000 RPM at 4 °C for ten minutes. After discarding the supernatant, the procedure was repeated twice more (resuspending the pellet by vortexing) with the same quantities of tris and glucose solutions as above, then centrifuged at 12,000 RPM at 4 °C for 10 min and then 15,000 RPM at 4 °C for 20 min. The pellet was then resuspended in 250 µL each of the tris and glucose solutions. The sample was stored at −80 °C prior to methylation.

Fatty Acid Determination
Total erythrocyte membrane fatty acids were determined via the method established by Lepage and Roy [33]. Two mL of a methanol/toluene mixture (4:1 v/v) containing C21:0 (0.02 g/L) as an internal standard and BHT (0.12 g/L) was added to 500 µL of erythrocyte membrane suspension. A total of 200 µL of acetyl chloride was added dropwise to methylate the fatty acids. The samples were heated to 100 °C for one hour. After cooling, 5 mL of 6% potassium carbonate solution was added to stop the reaction. To facilitate the separation of the layers, the sample was centrifuged at 3000 RPM at 4 °C for 10 min. The upper toluene layer was used for gas chromatography analysis of the fatty acid methyl esters, using a 30 m × 0.25 mm (DB-225) fused carbon-silica column coated with cyanopropylphenyl (J & W Scientific, Folsom, CA, USA). Both injector and detector port temperatures were set at 250 °C. The oven temperature was 170 °C for two minutes, increased 10 °C/min to 190 °C, held for one min, then increased 3 °C/min up to 220 °C and maintained to give a total run time of 30 min. A split ratio of 10:1 and an injection volume of 5 µL were used. The chromatograph was equipped with a flame ionisation detector, autosampler, and autodetector. Sample fatty acid methyl ester peaks were identified by comparing their retention times with those of a standard mixture of fatty acid methyl esters and quantified using a Hewlett Packard 6890 Series Gas Chromatograph with Chemstations Version A.04.02.
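As an illustration of how relative FA fractions and the Omega-3 Index (EPA plus DHA as a percentage of total identified RBC membrane fatty acids) can be derived from chromatogram peak areas, the following is a minimal sketch; the peak-area values are invented placeholders, not cohort data:

```python
# A minimal sketch of deriving FA fractions and the Omega-3 Index from GC
# peak areas; the areas below are illustrative placeholders, not cohort data.
peak_areas = {           # arbitrary-unit GC peak areas (assumed)
    "C16:0": 2200.0, "C18:0": 1500.0, "C18:1n-9": 1400.0,
    "C18:2n-6 (LA)": 1100.0, "C18:3n-3 (ALA)": 10.0,
    "C20:4n-6": 1300.0, "C20:5n-3 (EPA)": 60.0,
    "C22:5n-3 (DPA)": 210.0, "C22:6n-3 (DHA)": 380.0,
}

total = sum(peak_areas.values())
# Express each identified fatty acid as a percentage of the total.
fractions = {fa: 100.0 * area / total for fa, area in peak_areas.items()}

omega3_index = fractions["C20:5n-3 (EPA)"] + fractions["C22:6n-3 (DHA)"]
print(f"Omega-3 Index: {omega3_index:.1f}%")
```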
Statistical Analysis
A descriptive cross-sectional analysis of all dietary (24-h recall and FFQ) and maternal blood sample data obtained from the cohort was undertaken. Only data from participants carrying a single fetus and with either a dietary record or blood sample were used for the analysis. All data obtained were included, regardless of any diagnosed or suspected medical conditions. Where multiple 24-h recall entries were collected during a single pregnancy, the earliest record obtained was included. FFQ data were excluded if the overall energy intake was deemed implausible (<4500 kJ/day or >20,000 kJ/day) or if it was collected outside the period of 27 weeks of gestation to birth. All data were tested for normality using the Shapiro-Wilk test, deemed non-normal, and presented as the median and interquartile range (IQR) or as a number and percentage where appropriate. Separate descriptive statistics were performed on the 24-h recall data to account for n-3 LC-PUFA supplementation. Scatterplots were used to visually depict the fractions of key LC-PUFAs recorded in the RBC membranes (total EPA + DPA + DHA, LA, ALA, EPA, DPA, and DHA). No power calculations are presented, as this is an exploratory analysis based on available data. Pregnancy outcomes were overlaid on this data, with a line of best fit to highlight any trends associated with either n-3 LC-PUFA intake and/or maternal blood levels of n-3 LC-PUFAs. Additionally, figures were used to visually represent the fractions of total EPA + DPA + DHA, LA, and ALA in the RBC membranes relative to the nutritional adequacy of LC-PUFA intake compared with AI values. Participants were categorised as having met the AI level if they met the AI in either or both the 24-h recall or FFQ data. Not meeting the AI was defined as having inadequate intake in both the 24-h recall and FFQ, or inadequate intake assessed by one tool and missing data for the other. Data were deemed missing if participants did not complete either of the dietary assessment tools. All data manipulation, visual analyses, and statistical explanations were performed using STATA/IC, version 15.1 [35].

Participant Characteristics
Of the 434 women recruited to the study, 204 were eligible for inclusion in this sub-set analysis. The baseline sociodemographic characteristics of the pregnant women from the Gomeroi Gaaynggal cohort included in this analysis are summarised in Table 1. A summary of participants' health characteristics and pregnancy outcomes is described in Table 2. A total of 18 women experienced a preterm birth, and 16 had HDP, while six women had both a preterm birth and HDP. The majority of women reported a pre-pregnancy BMI over 25 kg/m² (66.7%), with a median of 29.7 kg/m² (IQR: 17.4-59.8). Participants who had a preterm delivery appeared to have a higher median BMI of 32.4 kg/m² (IQR: 21.3-59.8), although data was only available for half of the group. In total, 29.0% of women reported having smoked at some point during their pregnancy, compared with 47.1% of those with preterm birth and 20.0% of those with HDP. Women who had either preterm birth or HDP had a higher prevalence of diabetes (type 1, type 2, or gestational) at 27.8% and 37.5%, respectively, compared with 14.8% of the total.

Maternal Dietary Intakes
A summary of total nutrient intake from foods and supplements, as recorded by 24-h recall during the early stages of pregnancy, is provided in Table 3. This data was collected at a median gestational age of 21.7 weeks (IQR: 5.3-37.6).
The median daily energy intake of those included in this analysis was 7443 kJ (IQR: 710-16,992 kJ/day), with 48.3% of women meeting the recommended proportion of daily energy intake from total fat, known as the Acceptable Macronutrient Distribution Range (AMDR), and a median percentage of energy from fat of 34.5% (IQR: 6.8-65.3%). According to the 24-h recall analysis, only 55.2% and 28.7% of women met the AI for ALA and LA, respectively, while 37.1% of women met the AI for total EPA + DPA + DHA during pregnancy, with a median intake of 84.4 mg/day (IQR: 0-588.3). By contrast, 50% of women with HDP met the AI for total EPA + DPA + DHA, with a median intake of 170.3 mg/day (IQR: 32.9-588.3). The data show that none of the supplements taken during pregnancy by the women in this cohort contained any n-3 LC-PUFAs or other FAs (see Table S1 in Supplementary Materials). Nutrient intakes as assessed by FFQ, which did not assess supplementation during the third trimester of pregnancy, are summarised in Table 4, reporting intakes of the same key nutrients as in Table 3. The percentage of women who met the AI for ALA and LA was similar to that recorded by the 24-h recall during the earlier stages of pregnancy, at 59.8% and 32.0%, respectively. However, the total combined intake of EPA, DPA, and DHA, as assessed by FFQ, was higher, with a median intake of 186.7 mg/day (IQR: 20.6-643.9) and 81.4% of women meeting the AI.

Red Blood Cell Analysis
The FA profile of RBC membranes at each trimester of pregnancy is described in Table 5. Notably, 91.4% of total observations contained 0.0% ALA in the blood at any stage of gestation, with a median of 0.0% (IQR: 0.0-0.6%). Saturated FAs accounted for the largest overall percentage of FAs in these blood samples at 43.2% (IQR: 41.1-71.9), with n-6 LC-PUFAs the next largest fraction at 27.9% (IQR: 4.7-32.9). Figure 1 highlights the fraction of key LC-PUFAs in the maternal blood samples of women in the cohort who experienced preterm birth, and Figure 2 highlights those who had HDP. The majority of pregnancies in the current sample were uncomplicated and indicated little change in the relative concentration of each FA in RBC membranes over the pregnancy. The exception, in those with preterm birth, was for EPA and DHA and, therefore, total n-3 LC-PUFA concentrations. The trendlines appear to show an increase in the concentration of these n-3 LC-PUFAs across gestation in those with an uncomplicated pregnancy and a decrease in those who had a preterm birth. Those with HDP appear to have higher fractions of total n-3 LC-PUFAs and LA in their maternal RBC membranes than those who did not. Conversely, those with a preterm birth presented indications of lower concentrations of all the LC-PUFAs throughout the pregnancy compared with those who did not. The Omega-3 Index of women in the cohort is shown in Figure 3, highlighting those who had preterm birth and those who had HDP. The median Omega-3 Index for the whole cohort was 5.5% (IQR: 0.0-9.0). Figure S1 illustrates the fractions of total n-3 LC-PUFAs, LA, and ALA compared with those meeting the AI for each, which shows no clear correlation between dietary intakes and FA concentrations in RBC membranes.

Discussion
This descriptive analysis examined the dietary intake of FAs and the FA profile of RBC membranes in the women of the Gomeroi Gaaynggal cohort during pregnancy. To the best of our knowledge, this is the first study to describe these factors in an Indigenous Australian population.
The results highlight that most women from this cohort were meeting the national pregnancy NRV recommendations for intakes of total EPA, DPA, and DHA from the diet. While just over half of the women met the guidelines for ALA, over 90% of the women included in this analysis had no detectable ALA in their RBC membranes at any stage throughout their pregnancy. Additionally, none of the supplements used by the participants contained any n-3 LC-PUFAs, and they contained only negligible levels of the essential FAs ALA and LA. The median Omega-3 Index for women in the cohort was 5.5%, with values ranging from 0.0-9.0%. When the LC-PUFA fractions from RBC membranes were compared for those with and without pregnancy complications, there appeared to be a decline in the concentration of both EPA and DHA throughout gestation in those women who had preterm birth. In contrast, there was no visible trend when evaluating the same LC-PUFA fractions in those who experienced HDP. There is limited literature reporting the dietary intakes of n-3 LC-PUFAs in pregnant women in Australia, and none specifically examining Indigenous women. An analysis of n-3 LC-PUFA intakes in pregnant women participating in the Australian Longitudinal Study on Women's Health (ALSWH) reported a mean combined daily intake of EPA, DPA, and DHA of 336.2 mg/day (sd, 379.1) [12]. This is higher than the median intakes of women in the current Gomeroi Gaaynggal study, at 84.4 mg/day and 186.7 mg/day for intakes assessed by 24-h recall and FFQ, respectively. The reported mean ALA intake for pregnant women in the ALSWH, however, was equal to the median intake recorded by both the 24-h recall and FFQ in the Gomeroi Gaaynggal analysis, at 1.1 g/day [12]. It is worth noting that the data used in the ALSWH study were collected in 2003, and the population studied was considered to be a representative sample of Australian women aged 25-30 years at that time, with the total proportion of Indigenous women not reported. The sociodemographic and geographic differences between these two cohorts, along with the timing of data collection, could contribute to the inconsistencies in reported intakes. Moreover, while there has since been an increased awareness of and emphasis on the role of n-3 LC-PUFAs in pregnancy [6], particularly among health researchers, public health recommendations simultaneously encourage pregnant women to exercise caution when eating fish [36,37]. The evidence describing n-3 LC-PUFA intakes in pregnant populations internationally is also limited and indicates varied intakes. A similar-sized study of pregnant women in Norway published in 2020 found that only 29.1% of participants were meeting Norwegian national recommendations to eat seafood at dinner two to three times per week [38]. This result aligns with the broader intake of Norwegian women aged 30-39 as reported in their national dietary survey of 2010-2011 [38]. However, the majority of pregnant women from this study (approximately 77%) were taking an n-3 LC-PUFA supplement at some point during their pregnancy, highlighting an understanding of the importance of n-3 LC-PUFAs for maternal health in this population [38]. In contrast, a small study in Belgium found that in 29 pregnant women, the median daily n-3 FA intake was 1.72 g, with an n-6/n-3 ratio of 8.78 [39]. The median intakes of EPA and DHA in these women were 120.0 mg/day and 150.0 mg/day, respectively, with 24.6% reportedly consuming a supplement [39].
A German study analysing the FA distribution of maternal and fetal blood found that fish was consumed less than once a week, with only 20% of women using an n-3 LC-PUFA supplement during pregnancy [40]. The overall fat intake of this group was high at 45% of total energy intake; however, <1% (1.67 g/day) of this was from n-3 LC-PUFAs [40]. The dietary pattern of these women is described as omnivorous, with participants reporting that their dietary behaviours did not change while pregnant [40]. Interestingly, this study also found that the relative amount of n-3 LC-PUFAs in the maternal blood was no higher in those women who did use a supplement during pregnancy than in those who did not [40]. Most studies examining the relationship between dietary intake of FAs and corresponding levels in RBC membranes during pregnancy have been unable to draw a direct correlation between the two. The Norway study is the first to demonstrate any such relationship and used a principal component analysis to illustrate these findings [38]. Given the limited evidence to support an association between the two, it may be assumed that other factors also influence the FA profile of maternal blood. Additionally, despite over half of the women in the Gomeroi Gaaynggal cohort meeting the national recommendation for ALA intake from food, over 90% had no detectable ALA in their RBC membranes at any stage during their pregnancy. While previous studies analysing the FA profile of maternal blood have reported the overall percentage of ALA in their respective populations, they provide no further comment on the significance of these findings, focusing mainly on the roles of EPA and DHA. Of the other maternal studies located, all reported higher RBC membrane concentrations of ALA than what has been observed in the Gomeroi Gaaynggal cohort, with mean concentrations ranging from 0.10-0.29% [38][39][40][41][42][43]. One study in Japan reported a median ALA fraction of 0.2% with an IQR of 0.0-2.7%, confirming that concentrations of 0.0% are not an isolated finding [43]. This is not altogether surprising, as ALA is a known precursor for longer-chain PUFAs, is also utilised for β-oxidation, and is, therefore, not commonly stored [44,45]. However, the metabolism of ALA to EPA or DHA is considered to be poor [46]. This can be mediated by the amount consumed, with Goyens et al. [47] reporting that a low intake of LA increased ALA metabolism to EPA, while a high intake of ALA increased DHA production. Additionally, metabolism is improved in women compared to men, with oestrogen likely upregulating this metabolic pathway [45]. In fact, due to the high levels of DHA required for fetal development, metabolic adaptation to upregulate DHA production, including higher conversion of ALA, has been proposed during pregnancy, as increased intake likely does not cover the additional requirements [45,48]. The low consumption of LA in this cohort, in combination with increased circulating oestrogen during pregnancy, may explain the low levels of ALA, as it was utilised as a precursor for LC-PUFA conversion. However, research into ALA metabolism in pregnancy and fetal development is limited, and ALA may play a larger role than that of a metabolic precursor [49]. There has been growing evidence linking higher RBC membrane concentrations of EPA and DHA to reduced risk of some pregnancy complications, including preterm birth and preeclampsia.
The Omega-3 Index has been shown to be a reliable marker of EPA and DHA status, with an Omega-3 Index of 8-11% linked to a lower risk of cardiovascular disease [50]. The limited literature on the Omega-3 Index suggests that it may be an appropriate marker during pregnancy for reduced risk of complications such as preterm birth [50]. The median Omega-3 Index of 5.5% recorded for the women in the Gomeroi Gaaynggal study analysis is not dissimilar to that of other pregnant population samples from studies in Belgium, Germany, and Norway [38][39][40]. With rates of preterm birth on the rise in many parts of the world, there have been calls to screen for the Omega-3 Index, either pre-conception or during the very early stages of pregnancy, to identify those at the highest risk [50]. However, given the weak link between dietary intake and FA concentrations identified in blood, further research is needed to better understand the factors resulting in lower levels in those individuals. Notably, a German study published in 2019, claiming to have produced the largest available database of FA analyses, stated that an Omega-3 Index of <2.0% is not possible in humans [51]. However, 21 of the women included in this analysis recorded an Omega-3 Index of <2.0%, with nine of the women at 0.0%. This is not the only time an Omega-3 Index of <2.0% has been reported in a study of pregnant women: results from a 2020 study in Norway found that one participant had an Omega-3 Index of 1.93% [38]. These low values may arise because the analysis of erythrocyte membrane fatty acids requires measuring the peaks of the chromatograph, and the software will only allow the measurement of peaks that meet a minimum requirement (area under the peak). It is possible that these participants had peaks for EPA and DHA that were too small to be detected. This may equally apply to the ALA findings. Both findings would indicate that more research is required to better understand the implications of these levels and whether there are any discrepancies in methodology between studies. The results of the Gomeroi Gaaynggal study analysis highlight a trend towards lower concentrations of EPA and DHA in the later stages of pregnancy in those women who had preterm birth. By contrast, there is no obvious trend in the fractions of n-3 LC-PUFAs in women with HDP across gestation. Two high-quality systematic reviews found high-quality evidence that preterm birth rates are lower in those who consumed dietary and/or supplemental n-3 LC-PUFAs during their pregnancy compared with those who did not [6,52]. The 2018 Cochrane review, however, found only low-quality evidence indicating a potential decrease in the rate of preeclampsia in those who consumed n-3 LC-PUFAs, whilst the more recent 2023 systematic review and meta-analysis found them to be protective [6,52]. While the reviews don't compare EPA and DHA levels in the blood with these pregnancy complications, the findings are aligned with those observed in the Gomeroi Gaaynggal study analysis. It is also worth noting that for the six women included in this evaluation who had both a preterm birth and HDP, the preterm birth may have been related to HDP. This possibility could skew these findings and likely results in a more obvious decline of EPA and DHA levels in those with preterm birth. The limitations of the current study must be acknowledged. Dietary assessments are prone to measurement errors when relying on memory recall to estimate intakes.
FFQs looking at intake over the previous 3-6 months can lead to an overrepresentation of recently consumed foods and a general overestimation of intake [53], which was addressed in this analysis by removing any entries with an implausible energy intake. The FFQ used in this study has also been shown to be a poor measure of fat and oil consumption, as the food list is limited and does not include specific kinds of margarine and vegetable oils [31]. Additionally, it doesn't include n-3 LC-PUFA intakes from supplementation [31], which has only been captured in the current study using the 24-h recall data. Conversely, a single day of 24-h recall data may omit foods that are consumed less frequently, such as seafood, a common source of EPA and DHA, resulting in an underrepresentation of the usual intake of these nutrients. The gestational ages at which the data sets used for this study (24-h recall, FFQ and maternal blood samples) were collected were not aligned, and the data sets therefore cannot be triangulated. Additionally, a dietary analysis was unavailable for each blood sample and vice versa. All of these factors make it difficult to evaluate relationships between intake and the blood concentrations of n-3 LC-PUFAs associated with the pregnancy complications being assessed in this analysis. Due to the need to protect the anonymity of participants, it was not possible to report a specific diabetes diagnosis for each of the women (type 1, type 2, or gestational). This may have provided more insight into the results, as each group has differing levels of clinical risk during pregnancy and may also have an influence on gestational duration. In addition, the sample size is relatively small, especially for those who had either a preterm birth and/or HDP. As all women participating in the analysis resided in a single regional or rural town of inland NSW, the findings may not be generalisable to broader Indigenous Australian populations. Despite the limitations of the dietary assessment tools used in this study, both the AES FFQ and the ASA24 are validated tools that have been deemed reliable measures of population intakes. To the best of our knowledge, this is the first study to analyse the dietary intake and FA profile of RBC membranes in a pregnant Indigenous population. The role of n-3 LC-PUFAs in maternal health is widely under-researched, and even less so among Indigenous populations. These data will help fill the gap in available knowledge in this area and, in time, may contribute to the development of clearer EAR and RDI guidelines for pregnant women. A better understanding of the role of n-3 LC-PUFAs in pregnancy and the factors associated with RBC membrane concentrations can lead to improved health outcomes for mother and baby and more equitable outcomes for Indigenous Australian populations.

Practical Implications
Despite the limitations in being able to correlate intake of n-3 LC-PUFAs and RBC membrane concentrations, it is still advised that women from the Gomeroi Gaaynggal cohort aim to achieve the current Australian NRVs for these nutrients, as this is the best available evidence to support optimal health outcomes during pregnancy. The findings of the current study suggest that pregnant women from this population may benefit from increasing their intake of n-3 LC-PUFA-rich foods and/or choosing supplements to increase intake where appropriate.
Following the results of the 2018 Cochrane review [6] and subsequent randomised controlled trials, the International Society for the Study of Fatty Acids and Lipids (ISSFAL) published a statement in September 2022 advising that women with a low baseline level of n-3 LC-PUFAs at the start of their pregnancy may benefit most from supplementation of approximately 1000 mg of total DHA + EPA to reduce the risk of preterm birth [54]. Food sources of EPA, DPA, and DHA that are accessible to women of the Gomeroi Gaaynggal cohort include canned tuna, mackerel, sardines, and, to a lesser extent, frozen fillets of Atlantic salmon, which are more expensive and less available. Traditional Indigenous Australian foods, such as turtle and yabby, are also rich in n-3 LC-PUFAs [28]. The levels of mercury found in these marine foods are generally lower than in some other sources, and they have been deemed safe for consumption during pregnancy [36]. Sources of ALA-rich foods may also be accessible and include canola oils and spreads, along with various nuts and seeds. Small amounts of these foods could help women to meet the NRV recommendation for ALA intake during pregnancy.

Conclusions
This study adds to the body of literature regarding the role of n-3 LC-PUFAs during pregnancy, presenting data from a unique Indigenous cohort. While there is some evidence to support an association between higher n-3 LC-PUFA consumption during pregnancy and lower rates of preterm birth and preeclampsia, further research is needed to draw a definitive conclusion. Further research is needed regarding links between dietary intake, levels of FAs in the blood, metabolism of these FAs during pregnancy, and their relationship with pregnancy outcomes such as preterm birth and preeclampsia. Additionally, more research in other populations is needed to understand the independent role of ALA during fetal development and the significance of the ALA levels found in the maternal blood of women in the Gomeroi Gaaynggal cohort. A better understanding of the role and acceptability of traditional Indigenous n-3 PUFA food sources during pregnancy may also assist in guiding culturally safe recommendations for this cohort.

Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/nu15081943/s1, Table S1: Supplemental fatty acid intake during pregnancy as recorded by 24-h recall during the earlier stages of pregnancy, Figure S1
2023-04-21T15:16:51.010Z
2023-04-01T00:00:00.000
{ "year": 2023, "sha1": "720dc28c7d5dc4f52cf7e0cbd279ba07efaa03e2", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2072-6643/15/8/1943/pdf?version=1681816529", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "289c2d56a64ae2e8afd7de4b3b63ad3f76e8c393", "s2fieldsofstudy": [ "Medicine", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
9446409
pes2o/s2orc
v3-fos-license
An Improved Fatigue Detection System Based on Behavioral Characteristics of Driver
In recent years, road accidents have increased significantly. One of the major reported reasons for these accidents is driver fatigue. Due to continuous and long hours of driving, the driver gets exhausted and drowsy, which may lead to an accident. Therefore, there is a need for a system to measure the fatigue level of the driver and alert them when they feel drowsy, to avoid accidents. Thus, we propose a system which comprises a camera installed on the car dashboard. The camera detects the driver's face, observes alterations in its facial features, and uses these features to estimate the fatigue level. The facial features include the eyes and mouth. Principal Component Analysis is then applied to reduce the features while minimizing the amount of information lost. The parameters thus obtained are processed through a Support Vector Classifier for classifying the fatigue level. The classifier output is then sent to the alert unit.

INTRODUCTION
According to the WHO, about 1.25 million people die globally each year due to road accidents [1]. This figure is likely underestimated due to under-reporting. Fatigue driving is one of the key causes of these accidents [2], and India is one of the largest contributors to this number. Many technologies have been developed for fatigue monitoring [3]-[16]. Drowsy-state detection systems can be classified into three kinds. The first is based on vehicle attributes such as steering wheel movement, lane position, acceleration, and distance to nearby vehicles; however, this type of system is constrained by limitations such as road state, way of driving, the vehicle used, etc. In the second kind of system, physiological signals such as the electroencephalogram (EEG), electromyogram (EMG), electrooculogram (EOG), and electrocardiogram (ECG) are used to detect the fatigue level. Physiological-signal-based systems are the most promising fatigue detection systems, but they require sensors attached to the skin, which may affect the user by causing skin irritation and discomfort. The third kind of system uses characteristics like eye blinking, yawning, head pose, etc., to monitor the behavior of the driver and alert the driver if any drowsiness symptoms are detected. Based on these three kinds of systems and their fusions, several products are commercially available in the market. However, some of them raise an alarm only when the driver may already be entering microsleep; the alarm then wakes the driver abruptly and may itself cause an abrupt reaction that leads to an accident. Many other possible approaches are subject-dependent and require calibration for proper working [17]. In this paper, we propose a more practical, subject-independent, robust, calibration-free, behavior-based system.

II. PROPOSED APPROACH
Research has helped to identify some signs or symptoms which help in determining the drowsy state of the driver. Here we focus on features related to the eyes and mouth, like frequent blinking and yawning, for classifying the fatigue level of a driver. The fatigue detection system, as proposed by us, is divided into four parts. First, a video camera is used for live streaming of the driver while driving and sends the video feed to a computer vision system that can detect the driver's face in the video frame.
After the face image is obtained, it is sent to a Support Vector Machine (SVM) classifier, which classifies the facial image as fatigued or not fatigued. The output of the classifier is represented as +1 for fatigued and -1 for not fatigued, and this number is the input to a running sum that adds the consecutive output values. This running sum then serves as the input for the alert system unit, which acts upon it to produce a resultant. This resultant, tracked over time, is classified into different fatigue levels, i.e., no, low and high fatigue. The action taken by the alert unit differs for each fatigue level, as explained later.

III. DATA
Generally, the importance of data is underestimated, but data is no less important than the algorithm that is trained on it. A large dataset can outrun a better algorithm, and the more versatile the data, the better. In our case, the dataset should contain a wide range of facial images, viz. talking faces, smiling faces, and faces with both dark and transparent glasses. In the case of dark glasses, our system learns to work only on the mouth feature set. Also, images of people who put a hand over the mouth while yawning help our system to predict fatigue correctly. Images in both dim and bright light also need to be considered.

IV. PROPOSED SYSTEM
The proposed system is divided into the following four subsystems:

A. Video Capture Unit
The video capture unit records real-time video of the frame containing the driver's face through a camera placed on the car dashboard. The video is sampled at some frequency, and each sampled frame is sent to the face detection unit.

B. Face Detection Unit and Features Extraction
This unit receives the sampled video frame from the video capture unit. The images from the video capture unit are RGB images, and for very dim light conditions we perform low-light image enhancement and noise elimination [20]. For improving the accuracy of our system, we eliminate the noise of the image before amplifying it through contrast enhancement. This process is divided into two subtasks: first, for denoising the image, we apply superpixel-based adaptive denoising; second, for amplifying the image, luminance-adaptive contrast enhancement is used. We denoise the image before contrast enhancement so that the noise has been eliminated before it can be amplified. This method increases the accuracy of our system significantly, as it eliminates heavy noise, texture blurring and over-enhancement from the image, which is then processed accordingly. The image is converted to grayscale because face detection does not need color data. For face detection in the frame, we use the rapid object detection method of Viola-Jones, which uses a boosted cascade of classifiers working with Haar-like features [21]. The face detection method returns the abscissa, ordinate, length, and breadth of the rectangle bounding the face. We need only the eye and mouth parts of the face for the feature set, so we first scale the detected face image to 100x100 pixels and then use an 80x30-pixel rectangular window for the eyes and a 40x40-pixel rectangular window for the mouth. The x- and y-coordinates of the rectangular windows are (10, 20) and (30, 60) for the eyes and mouth, respectively. These parts are then transferred to the fatigue detection unit for further processing.
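A minimal OpenCV sketch of the detection and cropping steps just described is given below; the stock frontal-face Haar cascade and the detection parameters are illustrative assumptions, while the window sizes and coordinates follow the values stated above:

```python
# A minimal sketch of Haar-cascade face detection followed by the fixed eye
# and mouth crops described above; cascade file and detection parameters are
# assumptions, the window geometry follows the stated values.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def extract_eye_mouth(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)  # color not needed
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None                                 # no face in this frame
    x, y, w, h = faces[0]                           # first detected face box
    face = cv2.resize(gray[y:y + h, x:x + w], (100, 100))
    eyes = face[20:20 + 30, 10:10 + 80]             # 80x30 window at (10, 20)
    mouth = face[60:60 + 40, 30:30 + 40]            # 40x40 window at (30, 60)
    return eyes, mouth
```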
C. Fatigue Detection on Extracted Features
From the face detection unit, we get a sequence of eye and mouth images of the driver. From this extracted dataset, we can perform fatigue detection analysis on various facial features. These facial features include the eyes (fast blinking or heavy eyes) and the mouth (yawn detection). The combined result of fatigue detection on these facial features is used to give the final result as to whether the driver is in a fatigued or alert state. The images from the face detection unit, in pixels, can contain a lot of features; performing the extraction described in the previous unit yields a feature vector of size 4000. This size can be further reduced with the use of Principal Component Analysis (PCA). PCA can be used to avoid the problems caused by high dimensionality, as it compresses the data while minimizing the amount of information lost. It searches for patterns in the data and reduces the number of possibly correlated high-dimensional variables. The compressed data set can then be divided into a training set and a test set. To classify whether the driver is fatigued or not, we use a Support Vector Classifier (SVC), which is highly efficient in working with high-dimensional feature vectors. It is also very flexible in dealing with linearly as well as nonlinearly separable data sets. SVM is a supervised learning method used for classification and regression. SVM is also referred to as a Maximum Margin Classifier because it can maximize the geometric margin and minimize the empirical classification error simultaneously. During classification, SVM creates a maximal separating hyperplane, and two parallel hyperplanes are constructed on each side of the hyperplane that separates the data. Here, an assumption is made that the larger the distance between the two parallel planes, the better the generalization error of the classifier [22]. SVM was used for this problem because of the binary nature of the classification task, the efficiency of SVM in working with high-dimensional data, and its flexibility in working with both linearly and nonlinearly separable data sets [23]. Unlike other classification algorithms, SVM can be used in both linear and non-linear ways with the use of a kernel. In cases where we have a limited set of points in many dimensions, SVM tends to be very efficient because it can find a linear separation in the data. SVM also eliminates the drawbacks of outliers, as it uses only the relevant points, also called support vectors, to find a linear separation. We can now train the classifier on the training set and check its accuracy using the test set. The test set and training set are completely different, i.e., no two instances in both data sets are the same. We do this so that our SVC model is always tested on data which it has not seen before; this strategy provides a better picture of the generalized functioning of the SVC. After the SVC model is trained, we can calculate the prediction accuracy of our model using cross-validation. In cross-validation, we perform a number of iterations, where in each iteration a different data subset is used as the test set. Finally, the classifier returns +1 to the alert unit if the driver is found to be in a fatigued state and -1 if the driver is found to be in an alert state.
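The PCA-plus-SVC stage described above might be prototyped along the following lines with scikit-learn; the random placeholder data, component count, and kernel choice are illustrative assumptions:

```python
# A minimal sketch of the PCA + SVC stage; random data stands in for the
# 4000-dimensional eye/mouth feature vectors, and the component count and
# kernel are illustrative choices, not the paper's settings.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4000))       # placeholder feature vectors
y = rng.choice([-1, 1], size=200)      # +1 fatigued, -1 alert (labels)

# PCA compresses the features before the maximum-margin classifier.
clf = make_pipeline(PCA(n_components=50), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=5)   # cross-validated accuracy
print(f"mean CV accuracy: {scores.mean():.2f}")
```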
D. Alert Unit

The alert unit is modeled as a pair (r, t), with r >= 0 and t >= 0, where r is the resultant value and t is time. The whole model depends on the output value of the fatigue detection unit: the SVM classifies each image as fatigued or not fatigued, represented as +1 or -1, and the alert unit uses this number as follows. The classification output is the input of a running sum that adds the consecutive output values and is clamped at a minimum of zero. The unit employs two threshold levels: the first separates no fatigue from low fatigue, and the second marks the difference between low and high fatigue. Whenever the resultant value goes above a threshold, fatigue is detected and the alert system becomes active, taking action according to the detected fatigue level. For a low fatigue level, the alarm rings for 10 s, after which the system re-checks the resultant value and repeats the alert process accordingly. For a high fatigue level, more effective accident-preventive measures can be used, such as automatically reducing the speed and ultimately stopping the vehicle, and/or a water spray. This system can thus track the fatigue level of a driver and detect sleep onset with a safe margin.
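A minimal sketch of this running-sum logic; the two threshold values below are illustrative, as the paper does not specify them.

```python
T_LOW, T_HIGH = 5, 15  # assumed thresholds separating none/low and low/high fatigue

def update_resultant(resultant: int, svm_output: int) -> int:
    """Add the +1/-1 classifier output, clamping the sum at a minimum of zero."""
    return max(0, resultant + svm_output)

def fatigue_level(resultant: int) -> str:
    """Map the resultant value to the three alert levels described above."""
    if resultant >= T_HIGH:
        return "high"   # e.g. reduce speed, stop the vehicle, water spray
    if resultant >= T_LOW:
        return "low"    # e.g. ring the alarm for 10 s, then re-check
    return "none"
```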
V. DISCUSSION AND CONCLUSION

Our proposed system can be a highly efficient way to monitor the fatigue level of a driver, and it overcomes the disadvantages of previously developed methods by using both the eye and mouth feature sets together with a vast pool of data. The system requires only a camera to monitor the driver's face, keeping its hardware cost low. It works only on the required parts of the face image, i.e. the eyes and mouth, and rejects the rest, which removes unnecessary features from the feature set. Direct eye and mouth detection is less accurate than face detection, which is why we use face detection first and crop the eye and mouth regions from the detected face; this makes the system optimized in terms of both time and accuracy. The division of the alert response into three fatigue levels is an efficient way to alert the driver: it works in such a way that the driver is not subjected to a sudden startle, which might itself lead to an accident.
Analysis of the market value of public service companies in Brazil, Argentina and Chile: An application using panel data

The proposed study analyzes the pricing of the market value of companies in the public utility sector in different scenarios. From a theoretical point of view, the intention to invest in the business environment is facilitated when conditions in a country are favorable, as is the financial health of companies. In this sense, the study incorporates economic and financial variables as predictors in order to capture the effect and intensity of each in the pricing of market value. To achieve this objective, an econometric model was estimated on panel data, a choice justified by the nature of the data. The results show that economic policies are decisive, but the intensity and direction of their effects change with the scenario analyzed. Furthermore, financial information is relevant and must be incorporated by decision makers in the business environment.

Introduction

Market value is a topic that always triggers discussion in the academic environment, even more so when it comes to the determinants that can explain its intensity and variation depending on the scenario analyzed and the conditions under which a country's policies attract both national and foreign capital. In this context, the public utility sector can serve as a strategy for investors who aim to manage the risk-return relationship, considering that it is one of the least volatile sectors given its importance in economies. Macroeconomic variables therefore play an important role as a basis for analyzing the market value of companies, mainly Gross Domestic Product (GDP), the exchange rate and the interest rate. Changes in monetary and exchange rate policy are crucial to explaining market value, as they influence economic activity: given an expansive monetary policy, the tendency is towards increased consumption and credit, considering the expansion of the money supply (Blanchard, 2011).

In this context, studies such as those by Camara (2012) and Mokhova and Zinecker (2014) found a significant relationship and influence of macroeconomic variables, such as the interest rate and the economy's income, when analyzing the capital structure of companies. At the same time, Sánchez Arévalo & de Souza (2022) state that the capital structure of companies serves as a basis for discussing their solvency, which, a priori, helps to price the market value of companies. Still on capital structure, financial variables that have a strong relationship with cash flow and EBITDA are important in explaining the market value of companies. At this point, investors' expectations and investment intentions depend on the information published by the companies, as well as on the value they add. The incorporation of financial variables in the study is therefore highly relevant: it captures the performance of the company and, combined with macroeconomic variables, the performance of the economy, to explain the value that the market assigns to companies.
Regarding financial variables, Malta and Camargos (2016) found eight variables with explanatory power for stock returns: Return on Assets (ROA), Return on Investment (ROI), Return on Equity (ROE), Earnings per Share (LPA), Market-to-Book Ratio (MBR), Market Liquidity (INEG), Gross Margin (MB) and Third-Party Capital Participation (PCT). In addition, given the scenarios proposed in this study, Argentina, Chile and Brazil, countries with different realities, different effects can be expected from the macroeconomic variables in particular (Lucey & Zhang, 2011; Kayo & Kimura, 2011). Along these lines, given the observed reality of each country under discussion, the coefficients of macroeconomic variables such as the interest rate and the exchange rate may be larger where monetary and exchange rate policies show greater variance relative to Chile. A relevant fact: in Argentina the interest rate reached 80% in 2018, with inflation of 47.6% (BCRA, 2019). In addition, the economy's income (GDP) in countries such as Brazil and Argentina shows moments of growth, deceleration and recession, whereas in the case of Chile GDP performance is more stable and growing over the period studied.

Regarding the public utility sector, it is understood to be a defensive sector within the stock exchange, since its companies offer an essential service to the population. Even in periods of crisis or during any systemic event affecting the business environment, the population will continue to consume water and energy and to need basic sanitation. Consequently, the sector under study can be expected to offer good predictability in its revenues, implying the payment of dividends. Nevertheless, some risks must be considered: many of these companies are state-owned and therefore susceptible to the risk of government intervention.

Based on the above, the objective of the study is to analyze the effect that macroeconomic and financial variables have on the market value of companies in the public utility sector in Brazil, Argentina and Chile in the period from 2010 to 2019. The macroeconomic variables are GDP, the interest rate and the exchange rate; the financial variables are return on equity (ROE), gross profit and financial slack (Peixoto & Alves, 2015; Paredes & Oliveira, 2017; Assefa, Esqueda & Mollick, 2017; Kumar, 2017; Picolo et al., 2018). The choice of countries is due to their being considered the most representative in the South American scenario: Brazil and Argentina for having the highest GDPs in the region, and Chile for presenting one of the highest growth indicators in the period studied. In addition, the study aims to fill the existing gap around the importance of the public utility sector as an attractive, and less risky in crisis situations, sector in the business environment. Argentina and Chile are incorporated in order to portray the importance of the study in countries that have a growth pattern similar to that of Brazil. For this, it is considered that Brazil, Argentina and Chile all present a pattern of sustained growth in household consumption, which represents approximately 60% of the GDP of the three countries (IBGE, 2022). A detailed description of the study problem can be seen in figure 1, which shows the relationships between the determinant variables proposed to explain the market value of companies.
Studies are also cited that serve as a theoretical basis to support the incorporation of the variables into the discussion of the problem and the hypotheses described in the methodology.

The economy's income and macroeconomic variables

Among the several variables that can explain the behavior of the market value of publicly traded companies is the economy's income. A priori, the income of the economy is understood to have a close relationship with the stock exchange indicator, considering that investments and the flow of money in general are channeled through the financial market (Sánchez Arévalo & de Souza, 2022). In addition, as these are companies focused on the public utility sector, the service provided directly affects household consumption and, consequently, the income of the economy. The point of discussion thus derives from the relevance of household consumption within GDP: in the three objects of study, household consumption represents approximately 60% of GDP, that is, it is a major engine of the economy. One aspect to be observed is the cycles that the economy's income can present; depending on the movement of the series, structural breaks may appear, which can be decisive for the direction and the effect that the economy's income has on the market value of companies. For the period under study, this aspect is not observed (see figure 2), although in the case of Brazil there is a tendency for income to fall at dollar prices.

In the Brazilian scenario, Paredes and Oliveira (2017) found an influence on market value when considering the effect of GDP, the economy's basic interest rate, inflation and the exchange rate in the steel, financial, construction and electricity sectors. Thus, in the study proposed here, a positive relationship is expected between the income of a country's economy and the market value of companies, since income generation in the economy reflects the growth of productive sectors.

In the case of the exchange rate, transactions carried out with the foreign market influence the economic activities of companies, mainly in the export sector. When there is a devaluation of the local currency, exporting companies tend to obtain greater gains (Pires, 2019). In a study by Silva, Coronel and Vieira (2014) analyzing the exchange rate and the Ibovespa index, a negative, unidirectional relationship was found from the exchange rate to the index; that is, the exchange rate has an impact on the determination of the Ibovespa index, and it appears to have predictive potential for the index. At this point, attention is drawn to the great loss of purchasing power of the Argentine domestic currency (see figure 3); as a result, the observed trend should be treated with caution, since it should exert a great influence on the market value of companies. In the Chilean case, the exchange rate fluctuation is comparatively moderate.

On the interest rate side, Assefa, Esqueda and Mollick (2017) found a significant negative relationship with the share price. That study analyzed a universe of 21 developed and 19 developing countries, treated with panel data at a quarterly frequency from 1999 to 2013. Its findings are corroborated by the literature, since an increase in the interest rate raises the relative attractiveness of fixed income and thus the opportunity cost of holding equities, which leads to an expected reduction in demand for variable income.
Furthermore, the interest rate is understood to be a very important variable when it comes to controlling or expanding the money supply. Depending on the type of policy adopted by each country and on the economic scenario, its adoption has a direct effect on the value of the company, as financing across the financial system is anchored to the economy's basic interest rate. A decrease in the interest rate causes an increase in companies' investments; consequently, in the short and medium term, the company's value tends to be maximized (Paredes & Oliveira, 2017; Bernardelli & Bernardelli, 2016). Figure 4 shows the behavior of the interest rate on bank deposits for the three countries under study. The accentuated trend of the series in the Argentine case is clearly visible, this behavior being the result of the policy adopted by the Argentine central bank in an attempt to control inflation (considered one of the highest in the world) and stabilize the exchange market. In the Brazilian case, between 2010 and 2019 the interest rate dropped from approximately 8.87% to 5.43%, and in the case of Chile it increased from 1.75% to 2.54%. These last two countries present a totally different reality when comparing monetary policies, as well as exchange markets. In addition, emphasizing the importance of macroeconomic variables as determinants in the pricing of the market value of companies, Bernardelli and Bernardelli (2016) highlight in their study that the economy's income, the interest rate and the exchange rate have the greatest effect on the value of companies. Based on this, the proposed study needs to verify whether the findings of these authors can be corroborated when different economies are analyzed.

Relationship between market value and financial variables

Financial indicators play an important role in relation to the value of the company, as they are part of the management process of an entity. As relevant variables in this context, we consider profit and profitability, as well as the capital structure, which signals financial slack. The latter can be understood as liquidity, and both slack and profitability are factors that make companies stand out (Rezende & Macedo, 2021; Picolo et al., 2018). In the case of the sector under study, as these are companies that provide a public service and have a well-defined niche of activity, profitability should a priori be a factor already expected by investors; the question that remains is the degree of commitment of equity capital. It is worth reflecting that in many of these companies government participation is very large, which in some cases, depending on the decisions taken, exerts a positive or negative influence on the business environment and, consequently, on the attractiveness of the capital market as a source of financing for economic activities.

Still on profitability and profit, Silva, Tavares and Azevedo (2018) examined the relationship between the disclosure of accounting information and stock prices. Also, Hahn et al. (2010) found a causal relationship between the share price and the payment of dividends; it should be noted that the payment of dividends is linked to earnings. This behavior reinforces the investor's preference for dividends, which does not rule out the importance of capital gains through the appreciation of stock prices.
The conclusion brought by these authors corroborates the causal relationship between the disclosure of information and the pricing of stocks and, consequently, the market valuation of companies. Along the same lines, Abrokwa and Nkansah (2015) found that dividends per share have a predictive effect on stock returns, alongside other variables incorporated in the study such as earnings per share, company size and book value per share. Another study, by Forti, Peixoto and Alves (2015), found a significant positive relationship between return, company size, profit growth and dividend distribution, on the one hand, and the market value of companies on the other. These authors stress that, regardless of the dividend distribution policy, earnings growth is linked to greater earnings distribution; consequently, dividend distribution announcements help to price the value of companies positively. Finally, market value is also influenced by factors that measure the level of profitability, risk and opportunity for growth, which are directly related to the level of indebtedness, the latter determining the financial slack of companies (Pamplona, da Silva & Nakamura, 2021; Rodrigues dos Santos & Dos Santos, 2020). In addition, factors specific to the sector in which companies operate cannot be neglected, such as the importance of the sector and its sensitivity to economic cycles.

The panel data model

The panel method is the appropriate methodology when the data combine time series with a cross-section. The advantage of using this methodology, in addition to its efficient use of the information, is a greater number of degrees of freedom, which improves the fit of the model and the statistical significance of the parameters. Effects not observable in a cross-section or in a time series alone can therefore be detected using a panel (Gujarati & Porter, 2011; Greene, 2011).

In order to examine the relationships between the dependent variable, the Market Value of Companies (VME), and the independent variables, and to justify the expected sign of the coefficients in the estimation of equation 1, Table 1 details the hypotheses and expected signs. It is worth mentioning that, because different countries are being analyzed, some coefficients may show signs contrary to expectations, in which case the necessary justifications will have to be examined. Financial slack is a resource used to generate opportunities, and a high level of it can lead to more efficient management and, consequently, help to price the market value of companies (Picolo et al., 2018; Rezende & Macedo, 2020).

Source and treatment of data

Data on the value of the company, as well as the financial indicators of profitability, gross profit and financial slack, were collected from the Economática and Thomson Reuters platforms. The data cover 2010 to 2019, in quarterly series. To estimate the results, the company value collected at current prices was deflated using a broad price-correction indicator for each country under study. The same correction procedure (inflation adjustment) was performed for GDP, the exchange rate and the basic interest rate of the economy.
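A minimal sketch of this inflation adjustment, assuming a quarterly nominal series and a broad price index covering the same periods; the series and base-period names are illustrative.

```python
import pandas as pd

def deflate(nominal: pd.Series, price_index: pd.Series, base: str) -> pd.Series:
    """Convert a nominal series to constant prices of the chosen base period."""
    return nominal * (price_index.loc[base] / price_index)

# e.g., real market value at 2019Q4 prices (illustrative column names):
# df["vme_real"] = deflate(df["vme_nominal"], df["price_index"], base="2019Q4")
```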
The data for these variables were collected from information published by the central bank of each country under study: in the case of Brazil, BACEN (Central Bank of Brazil); in the case of Chile, BCENTRAL (Central Bank of Chile); and, in the case of Argentina, BCRA (Central Bank of the Republic of Argentina). In total, data from 12 companies in Chile, 10 in Argentina and 39 in Brazil, with all the information required for the study available, were analyzed.

Results for Brazil

Table 2 reports the estimation of equation 1 for Brazil; the Hausman test favoured the fixed-effects specification over the alternatives (*** significant at 1%, ** significant at 5%, * significant at 10%; GDP = deflated GDP, TXJ = deflated SELIC interest rate, TXC = deflated US dollar/Brazilian real exchange rate).

Regarding the sign of the coefficient on the GDP variable, variation in the economy's income implies variation in the same direction for the market value of the companies under study. The sector in question, being a public utility, corroborates the hypothesis that it is important for the economy and that, even in crisis situations, it will hardly be affected to a great extent, making robust structural breaks unlikely. The negative coefficient of the TXJ variable indicates that monetary policy is relevant in pricing the market value of companies in the sector studied. Monetary policy decisions made by the Central Bank first affect the business environment, influencing investors' short-term decisions. Here, the intensity of the effect is roughly three to one: given a change of 1% in the interest rate, the market value of companies tends to fall by approximately 3.6%. On the side of the TXC exchange rate, the effect is similar to that of the previous coefficient. Evidently, exchange rate policy, as well as the market conditions that price the purchasing power of the domestic currency, are determinants of the value of the company. The information to be incorporated by decision makers is that the effects of monetary and exchange rate policies have similar intensities for the public utility sector in particular. Considering the Brazilian economic environment, the country suffered successive increases in the exchange rate over the period under study and, given the loss of purchasing power of the domestic currency, a fall in the value of these companies would be expected.

In the case of gross profit, ROE and financial slack, these three variables reflect the importance of financial information as fundamental to investment intentions, as well as to the value added by the company, which is reflected in the pricing that the market assigns to it. Given the positive sign of the coefficients, market value tends to increase in the face of positive variations in these financial indicators. It is important to emphasize the role of financial slack and, therefore, of the formation of companies' capital structure, and to verify to what extent the debt commitment becomes relevant. For this specific sector, financial slack reflects, among other aspects, the company's ability to deal with short-term debts using available and realizable assets; in general, debt does not represent a higher share than equity in the companies' capital structure.
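To make the estimation procedure used for all three countries concrete, here is a minimal sketch with the `linearmodels` package, assuming `df` is a quarterly panel indexed by (company, quarter); all column names are illustrative, and the Hausman comparison shown is a simplified textbook version, not the exact test statistic reported by the authors.

```python
import numpy as np
from linearmodels.panel import PanelOLS, RandomEffects

# Equation 1: market value regressed on macroeconomic and financial variables.
formula = "vme ~ 1 + gdp + txj + txc + roe + gross_profit + fin_slack"

fe = PanelOLS.from_formula(formula + " + EntityEffects", data=df).fit()
re = RandomEffects.from_formula(formula, data=df).fit()

# Hausman-type statistic: a large value favours the fixed-effects model.
keep = [k for k in fe.params.index if k.lower() not in ("intercept", "const")]
diff = fe.params[keep] - re.params[keep]
cov_diff = fe.cov.loc[keep, keep] - re.cov.loc[keep, keep]
stat = float(diff @ np.linalg.pinv(cov_diff.values) @ diff)

print(fe.summary)
print("Hausman statistic:", stat)
```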
Results for Argentina

Table 3 shows the result of the estimation of equation 1 for Argentina. As in the Brazilian case, the Hausman test concludes that the fixed-effects specification is preferable to the other approaches; interpretation and discussion of the results therefore follow the fixed-effects coefficients (*** significant at 1%, ** significant at 5%, * significant at 10%; GDP = deflated GDP, TXJ = deflated basic interest rate of the Argentine economy (LELIQ), TXC = deflated US dollar/Argentine peso exchange rate).

With regard to GDP, positive variations are followed in the same direction by company value. Although the coefficient is not statistically significant, this result invites reflection. In general, as a strategic sector, the public utility sector is understood a priori to behave similarly to the performance of the economy. Here, it is worth mentioning the effect of the price-control policies carried out in the country in the face of economic instability and rising poverty levels, which weakens the market value of these companies. Another factor worth mentioning relates to the monetary policies adopted in Argentina. During the study period, interest rates in Argentina rose significantly, a measure adopted with the aim of controlling inflation. The high rate of inflation weakens the purchasing power of the domestic currency, which in turn affects the market value of the companies under study. Against this background, the sign of the TXC exchange rate coefficient stands out, as it does not match expectations: given the positive sign of TXC, company value increases as the domestic currency depreciates against the dollar. These last two results deserve reflection. On the one hand, a restrictive monetary policy tends to curb inflation through a lower expansion of the money supply; in the Argentine case, however, this did not materialize and the domestic currency continued to lose purchasing power. A much-discussed aspect in the business environment centers on the loss of confidence among international investors in the government's ability to meet all debt payments. Given this reality, the exchange rate coefficient was expected to follow the behavior of monetary policy, with a negative sign.

Regarding the coefficients of the variables gross profit, ROE and financial slack, the signs are as expected and denote the importance of companies' fundamentals in pricing their market value. As these are companies operating in an important sector of the economy, profitability, profit and the ability of companies to meet their short-term obligations were expected to be determining variables, although only gross profit is statistically significant. Still on fundamentals, and specifically ROE, a positive relationship was expected; although the coefficient is not statistically significant, important conclusions can be drawn, considering that profitability plays a key role in attracting investment.

Results for Chile

For the Chilean case, the results can be seen in Table 4. The interpretation of the results again follows the fixed-effects estimation, given the result of the Hausman test.
It is thus considered that the coefficients incorporate heterogeneous information within the universe of companies studied, which makes the interpretation of the results plausible. On the GDP side, there is a positive relationship between income and the market value of companies, although it is not statistically significant. This initial result invites reflection on how important the performance of public utility companies in the Chilean scenario can be in explaining the behavior of the economy's income. Since this is a strategic sector serving public services and the Chilean economy is one of the most stable over the period studied, a positive and statistically significant relationship was expected. The reading that can be made in light of the result concerns the loss of market value of some of the companies under study, whose behavior diverged from that of the economy; over the study period, Chilean GDP had one of the best performances in the region.

In relation to the TXJ coefficient, the result denotes a positive relationship, contrary to what was expected in light of theory. However, some aspects of the Chilean economic reality may help to explain this finding, related to economic stability and to the intensity of interest rate variation. A brief analysis of the interest rate trend in Chile indicates stable behavior over the analyzed period. In a country with a controlled level of inflation and an almost constant interest rate, the effect on the value of companies tends to be positive, and companies are encouraged to obtain financing without greater risks. Linked to this behavior is the positive coefficient of the TXC exchange rate, whereby devaluation of the domestic currency against the dollar would imply greater company value. A justification for this result may lie in the social and political uncertainty that took shape in 2019, through the protests of October of that year, which devalued the purchasing power of the domestic currency. Additionally, according to the Central Bank of Chile (2019), the country has a solvent financial system, a solid fiscal situation and inflation expectations around 3%, with monetary policy adapting to these circumstances. Thus, although the domestic currency loses value, there is a whole context of information that the business environment incorporates into the value of companies.

Looking at the financial information, the intensity of the gross profit and ROE coefficients draws attention. Although the results show no statistical significance, important conclusions can be drawn from this information. In the business environment, decision makers need to incorporate information into their decisions to help them choose the best path. Gross profit can represent a competitive advantage when a particular company posts a higher volume than other companies in the same sector. In addition, gross profit, like profitability, may indicate a competitive advantage in product costs due to efficient production techniques or economies of scale. In general, for the Chilean case, decision makers should treat company-level information, such as fundamentals, as the most relevant, followed by the country's main macroeconomic variables.
Final considerations

This study sought to analyze how macroeconomic and financial variables can explain the value of public utility companies in different scenarios, with Brazil, Argentina and Chile as the objects of study. The choice of countries is justified, on the one hand, by the weight that Brazil and Argentina carry in terms of GDP in South America and, in the case of Chile, by its having some of the best growth and development indicators in the region. Different results were observed for each country under analysis, and decision makers should incorporate them according to the priorities observed in this study.

In the Brazilian case, monetary and exchange rate policies are considered priority information; in short, they are seen as determinants of the value of companies in the sector under study. Over the period studied in Brazil, the economy's basic interest rate fluctuated widely and the domestic currency lost value against the dollar. Specifically, between the pre- and post-World Cup years, Brazil experienced one of the highest interest rates in its history; in the light of economic theory, a high interest rate keeps the economy at a low growth rate and a low investment rate, which in the Brazilian case resulted in a regression of the production matrix, reflected, among other things, in the economy's GDP between 2015 and 2016. Still in the Brazilian case, financial slack is very important information to be incorporated by decision makers; that is, it is worth reflecting on the capital structure and the solvency level expressed by companies' ability to pay short-term debts.

In the case of Argentina, monetary and exchange rate policies are also decisive in the context of the study. What stands out is the balance these two policies strike in shaping the market value of companies: if, on the one hand, a restrictive monetary policy would tend to affect market value negatively, on the other hand, the loss of purchasing power of the domestic currency against the dollar raises the market appreciation of companies in this sector. Market volatility, the policies adopted to curb the exchange rate, and the effect these measures have on the liquidity of the productive sector and on the entire business environment create situations that can lead to a paradox. Still on the Argentine market, on the GDP side, although a positive but not statistically significant relationship is observed, the economic context can justify this result. The Argentine economy is characterized by an underdeveloped financial market and a volatile macroeconomy, although public utilities are considered to play a fundamental role in promoting productive sectors that are strategic for the country's growth.

In the Chilean market, a different behavior is observed: on the macro side, the exchange rate stands out as the most relevant variable, in terms of intensity, for explaining the market value of companies, albeit with a sign contrary to expectations, a behavior similar to that observed in Argentina.
It remains an open question to what extent the policies adopted to raise the interest rate, which appreciates the exchange rate, do or do not favour the control of inflation, what the effects on the price index are, and whether there is a causal pass-through to basic services. An important piece of information to include in this discussion relates to the price control of basic services, signaled by the government in office in 2019, which tended to be one of the measures to stem the social protests that year, aimed at alleviating the cost of living and inequality. Regarding the financial information, gross profit proved statistically significant and positively affected market value. In terms of intensity, ROE incorporates relevant information, although its coefficient is not significant. This reinforces the argument that financial information, and especially profitability, is important; however, even though the study focuses on a strategic sector of the economy, the macroeconomic context appears to be relevant as well. Consequently, it is recommended that future studies include an indicator or sub-indicator of economic freedom among the variables explaining market value, considering that such an index incorporates social, economic and political information.

Finally, this study offers a reflection on how differences in environment can lead to different conclusions in the discussion of company value. Common to all three cases is the public utility sector, whose companies have a significant economic and social impact.
Age and sex differences in emergency department visits of nursing home residents: a systematic review

Background: Nursing home residents (NHRs) are often transferred to emergency departments (EDs). A great proportion of ED visits is considered inappropriate. There is evidence that male NHRs are more often hospitalised, but this is less clear for ED visits. It is unclear which influence age has on ED visits. We aimed to study the epidemiology of ED visits in NHRs focusing on age- and sex-specific differences.

Methods: A systematic review was carried out based on articles found in MEDLINE (via PubMed), CINAHL and Scopus. Articles published on or before Aug 31, 2017 were eligible. Two reviewers independently identified articles for inclusion. The quality of studies was assessed by the Joanna Briggs Institute critical appraisal tool for prevalence studies.

Results: Out of 1192 references, we found seven studies meeting our inclusion criteria. Six studies were conducted in the USA or Canada. Overall, 29-62% of NHRs had at least one ED visit over the course of 1 year. Most studies assessing the influence of sex found that male residents visited EDs more frequently. All but one of the five studies with multivariable analyses reported a statistically significant positive association (with odds or rate ratios of 1.05-1.38). All studies assessed the influence of age. There was no clear pattern, with some studies showing no association between ED visits and age and other studies reporting decreasing ED visits with increasing age or increasing proportions followed by a decrease in the highest age group. Studies used 85+ or 86+ years as the highest age category. The hospital admission rate ranged from 36.4 to 48.7%. There was no study reporting stratified analyses by age and sex. Only one study reported main diagnoses leading to ED visits stratified by sex.

Conclusion: Male NHRs visit EDs more often than females, but there is no evidence on reasons. The association with age is unclear. Any future study on acute care of NHRs should assess the influence of age and sex. These studies should include large sample sizes to provide a more differentiated age categorisation.

Trial registration: PROSPERO CRD42017074845.

Electronic supplementary material: The online version of this article (10.1186/s12877-018-0848-6) contains supplementary material, which is available to authorized users.

Background

Older people use emergency department (ED) services more often than persons of younger age [1]. In times of demographic change, the burden on ED systems may further increase. In 2014, just over 1.4 million residents were living in US nursing homes, corresponding to 2.6% of the over-65 population and 9.5% of the over-85 population [2]. Compared with community dwellers, nursing home residents (NHRs) have higher utilisation rates of EDs [3]. However, a large proportion of these ED presentations is considered inappropriate [4,5]. Furthermore, it is questionable whether benefits outweigh potential risks, as ED visits of NHRs often result in unintended consequences and adverse outcomes like greater cognitive and physical decline or hospital-acquired infections [6,7]. Approximately 50% of NHRs visiting EDs are discharged back to the nursing home without being hospitalised [8,9] and almost one fifth of presentations followed by ED discharge had no diagnostic testing at all [9].
Although NHRs are typically older than 65 years, they represent a wide range of age groups up to over 100 years, and a large proportion is female, with increasing tendency in older age groups [10,11]. Patterns of chronic diseases differ between sexes and across the age span in this population [10,12], but most studies present epidemiologic measures aggregated for both sexes, and potential differences between age groups are often not further examined. In their systematic review published in 2011, Gruneir et al. compared ED use by older adults to younger age groups, but they did not report on further age differences in NHRs [1]. Overall, the literature on age differences in hospitalisations of NHRs is inconclusive [13]. This seems also to be the case for ED visits, with studies showing different findings [14,15]. On the other hand, previous research showed that male NHRs are more often hospitalised than female NHRs [13,16,17], which might also apply to ED visits [15,18]. The aim of this systematic review is to estimate the incidence and prevalence of ED visits in NHRs, focusing on age-specific and sex-specific patterns. We also gathered information on age-specific and sex-specific differences in reasons for ED visits, revisits and hospital admissions.

Methods

A systematic literature search was carried out for articles published on or before Aug 31, 2017. In a first step, electronic databases including MEDLINE (via PubMed), CINAHL and Scopus were searched, combining an adapted version of the search strategy of Hoffmann and Allers for NHRs [13] and a filter to retrieve studies related to EDs from Kung and Campbell [19]. The search strategy can be found in Additional file 1. In a second step, the reference lists of all identified articles were scanned for additional studies. There was no limitation regarding the time period.

Inclusion and exclusion criteria

Studies were included if they assessed all-cause ED visits among NHRs and presented age-specific or sex-specific analyses on incidence or prevalence of ED visits, or included one of these variables in crude or multivariable regression models. Prevalence is defined as the proportion of NHRs admitted to EDs at a given point in time; the numerator is the total number of NHRs admitted to EDs and the denominator is the total number of NHRs. Incidence is defined as the measure of ED visits within a specified period of time and is usually expressed as a rate (e.g. per 100 or 1000 resident days, resident years, nursing home bed days); the numerator is NHRs' ED visits and the denominator is either the total number of NHRs at risk within a specified period of time or the accumulated time NHRs are at risk. An ED is defined as a hospital facility that provides unscheduled outpatient services to patients whose conditions require immediate care because of injury, illness or urgent medical conditions, and is staffed 24 h a day [20]. The current review considered prevalence studies, prospective and retrospective cohort studies and (randomised) controlled trials (provided that data from the comparison group were reported) for inclusion. There were no language restrictions and all articles published in languages other than English were translated. Studies were excluded if they were restricted to specific groups of NHRs (e.g. specific diagnoses, specific levels of care, only NHRs with previous ED visits) or specific ED visits (e.g. specific diagnoses, only ED visits leading to hospitalisation).
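To make the two measures defined above concrete, here is a minimal sketch; the figures in the usage comment are illustrative.

```python
def prevalence(n_residents_with_visit: int, n_residents: int) -> float:
    """Proportion of NHRs with at least one ED visit in the period."""
    return n_residents_with_visit / n_residents

def incidence_per_100_resident_years(n_visits: int, days_at_risk: float) -> float:
    """ED visits per 100 resident-years of accumulated time at risk."""
    return n_visits / (days_at_risk / 365.25) * 100

# e.g. 45 of 100 residents with at least one visit -> prevalence of 0.45;
# 150 visits over 36,525 resident-days (100 resident-years) -> an incidence
# of 150 per 100 resident-years.
```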
Study selection and data extraction

After removing duplicates, all titles and abstracts were screened independently by two reviewers (AB and KA) against the predefined inclusion criteria. In a next step, the full texts of all potentially relevant articles were assessed by the same reviewers. Any disagreement was resolved by discussion or by involving a third reviewer (FH). Data extraction was performed by one reviewer (AB) and verified by a second (KA). We performed a descriptive synthesis of the identified studies due to their heterogeneity.

Risk of bias/quality assessment

All included studies were assessed by two independent reviewers (AB and KA) for methodological validity using a version of the prevalence critical appraisal instrument from the Joanna Briggs Institute (JBI) [21]. Any disagreements that arose between the reviewers were resolved by discussion or by a third reviewer (FH). We considered every study that met the inclusion criteria, independent of its quality.

Search results

After the removal of duplicates, the electronic searches identified a total of 1192 records. Screening of titles and abstracts resulted in the exclusion of 1095 records. Ninety-seven of the remaining potentially relevant articles were obtained in full text, including four French, two Spanish, one Russian and one Hebrew article. Full-text screening resulted in the exclusion of 90 articles, and a total of seven articles were eligible for inclusion. No further articles were found in the reference lists of the identified articles (Fig. 1).

Study characteristics

The included studies were from the USA (n = 5) [15,22-25], Sweden (n = 1) [14] and Canada (n = 1) [18]. The years of data used ranged from 1995 to 2009. Articles were published from 1998 to 2016. The studies included data from 719 to 132,753 NHRs. Follow-up periods ranged from 1 month to 3 years. Data on ED visits were most commonly obtained from administrative data or the Minimum Data Set (n = 4) [22-25]. In the other three studies, data were collected by hospital staff [15,18] or registered nurses [14]. The two articles from Stephens et al. reported findings from the same study but used different designs, analyses and numbers of participants [24,25]. Therefore, both articles were included (Table 1).

Quality appraisal of included studies

The quality assessment of all included studies and the quality criteria are given in Table 2. The percentage of quality criteria answered with 'Yes' varied between 88 and 100%. The sample was representative of the target population in all studies. All study participants were recruited in an appropriate way and all studies used objective criteria for assessing ED visits. All studies except one used appropriate statistical analyses. Overall, questions were answered predominantly with 'Yes', because almost all studies used administrative data. The remaining two studies, including one survey, mentioned the response but gave no details regarding sufficient coverage of the identified sample. One study included only residents aged 75 and older [14]. Two studies investigated residents with varying stages of cognitive impairment and dementia [23,25]. The mean age of NHRs ranged from 79.6 to 85.8 years [14,18,23] and between 29.0 and 39.4% were aged 85 or 86 years and older [22,24,25]. In these six studies, between 66.2 and 71.0% were female. In one study [15], baseline data on age and sex were not reported.

Frequency of ED visits

All but one study [25] examined some measure of all-cause ED visits (Table 3).
The proportion of NHRs admitted to the ED ranged from 29 to 62% over a one-year period [14,23,24]. The incidence of ED visits ranged between 62.6 and 215.5 per 100 resident years [14,15,18,22,23], with three of the studies ranging between 110 and 150 ED visits per 100 resident years [14,15,22]. One study found a trend towards increasing ED visits over time, with a rate of 146.0 ED visits per 100 resident years in 2001 compared to 215.5 ED visits per 100 resident years in 2008 [23].

All studies assessed the influence of sex. Two studies stratified their results for males and females [14,15] and four studies conducted regression analyses including sex in the model [18,22,23,25]. One study reported both [24]. Most studies came to the conclusion that male NHRs visit EDs more often than females. One study reported that 65.3% of the male NHRs visited the ED over a one-year period compared to 60.5% of the female NHRs [24], while another study reported a prevalence of 33 and 28% for male and female NHRs [14], respectively. A further study found an incidence of 154.5 per 100 resident years for male and 111.6 per 100 resident years for female NHRs [15]. All but one of the multivariable analyses reported a statistically significant positive association between male sex and ED visits (odds or rate ratio: 1.05-1.38) [18,22,24,25]. The other study analysed factors predicting time to first ED visit in the year after study entry and found no association between male sex and ED visits [23].

All seven studies assessed the influence of age. Three of the studies stratified their results by age [14,15,24]. One study reported decreasing incidences of ED visits with rising age (65-74 years: 153.2, 75-84 years: 124.3, 85+ years: 113.0 ED visits per 100 resident years) [17]. Another study found a higher prevalence of ED visits (30%) at the age of 85 years and older compared to the age of 75-84 years (25%), but this finding was not statistically significant [14]. The third study showed a slightly increasing prevalence of ED visits from 65-75 years (62.7%) to 76-85 years (63.9%) of age and a slightly decreasing proportion in those aged 86 years and older (59.7%) [24]. Five studies included age in regression analyses [18,22-25]. Whereas Stephens et al. found significantly lower odds of any ED visit for the age of 65-75 years compared to the age of 76-85 years, this was not statistically significant for the age group of 86 years and older [24]. One study found that higher age was associated with lower rates of total ED visits [25], while two other studies did not show any association between age and overall ED use rate [18,22]. One study found that age influences the time to first ED visit [23]. However, of the five studies that conducted multivariable analyses, only two used the same age categories (65-75, 76-85 and 86+ years) [24,25], two incorporated age as a continuous variable [18,23], and the last did not clearly report how age was included in the model, but probably also continuously [22].

Only one study reported reasons leading to ED visits stratified by sex [14]. For women, falls were the most frequent reason (25.4%), followed by cardiovascular and cerebrovascular problems (15.4%); for men, falls as well as cardiovascular and cerebrovascular problems (17.7% each) were most common. No study stratified reasons for ED visits by age, or ACSC diagnoses by sex or age.

Revisits

Three studies reported on patterns of ED revisits.
One study reported that 60.5% had one ED visit, 22.3% had two visits and 17.2% had three or more visits over the course of 1 year [22]. The second study found that 2.4% of the study population had been seen in the ED less than 72 h before, while 87.3% were not seen again (for 10.3% the status was unknown) [15]. Only one study stratified the results by sex, showing that female NHRs had 1.4 revisits and male NHRs 1.7 revisits during the one-year study period [14]. There was no study that stratified revisits by age (Table 3).

Hospital admission

Four studies reported subsequent hospital admissions of NHRs following ED visits. The proportion of hospitalisation ranged from 36.4 to 48.7% [15,22,23,25] and between 0.5 and 1.3% of NHRs died in the ED [22,23]. Three studies [15,22,25] reported on differences by age and sex. While two studies found that patients admitted to hospital did not vary by age and sex [15,22], one other study reported that male sex and advanced age were associated with higher odds of hospitalisation [25] (Table 3).

Discussion

Summary of main findings

This systematic review analysed age-related and sex-related ED presentations in NHRs and found only very few studies assessing these patterns. Most studies examining sex differences in ED visits found that male NHRs visited EDs more often than females. The influence of age was less clear, with some studies showing no association and others reporting decreasing ED visits with increasing age or increasing proportions followed by a decrease in the highest age group. However, comparability is limited, as some of the included studies used age as a continuous variable. There was no study which reported stratified analyses by age and sex.

Comparison with the existing literature

We found a wide range of 29 to 62% of NHRs that had at least one ED presentation over a one-year period, and the proportion of NHRs being admitted to hospital ranged from 36.4 to 48.7%. These findings and their variability are comparable with the literature and might also reflect facility-level variations [1,4,6,9]. Furthermore, the existing literature is also heterogeneous with respect to methods, time periods and populations. This is important to keep in mind when comparing and interpreting findings between different studies. As in our recent review on hospitalisations of NHRs [13], we also found in this systematic review that male NHRs visited EDs more often than females. We only included studies assessing all NHRs in the denominator instead of only ED patients, because the latter might have led to the conclusion that women visit EDs more frequently [9,26]. However, this is explained by the fact that a large proportion of NHRs is female. Although not all included studies found statistically significant effects, which might also be due to small sample sizes, a clear trend was seen. The strongest influence of male sex, with a rate ratio of 1.38, was shown by McGregor et al. [18], but this result was not further discussed by the authors, as these differences were not the focus of their study. This was also the case in the other included studies. In their review on trends and appropriateness of ED use by older adults, Gruneir et al. [1] did not even mention sex as a potential factor. This was also the case in a more recent review by Trahan et al. on factors influencing decision-making on transitions of NHRs to EDs [27]. Although the authors identified resident and family factors as one of five categories, no sociodemographic factors were considered.
Because hospital as well as ED use is higher for males, decisions to transfer seem to be made in the nursing home. Only one of the included studies reported reasons leading to ED visits stratified by sex, and found that falls were more often the reason to transfer female NHRs (25.4% vs. 17.7%), with men having slightly higher proportions in several other categories [14]. But for one fifth of transfers no reason for referral was available. The proportion of potentially avoidable ED visits was high and ranged between 25 and 55% [6,28,29]. One of our included studies stratified the proportion of NHRs having at least one potentially avoidable ED visit by sex and found only marginal differences between males and females [24]. Furthermore, facility-level variation across nursing homes has been shown to influence health care, including ED transfers [4,30], but it is unclear whether sex differences also depend on the facility level. Future studies should assess which ED transfers vary between sexes.

On the other hand, the influence of age on ED visits was inconsistent in the included studies. There is some evidence of a decreasing influence of age above about 85 years, but this was not shown or assessed in all studies. Such heterogeneous findings were also found in the literature on ED use of elderly patients irrespective of nursing home stay [20,26,31,32]. In our systematic review on hospitalisations of NHRs, we also concluded that the influence of age was inconclusive due to methodological differences [13]. In a large cohort of German NHRs, we recently found that hospitalisation rates declined with increasing age even up to 95+ years, but this effect was much more pronounced before nursing home entry [33]. These inconsistent findings on the influence of age in the literature may be due, on the one hand, to different outcomes: two of the studies included in our review assessed prevalences [14,24] and one assessed incidences [15] of ED visits, and the included studies also used different statistical analyses (e.g., logistic, Poisson or Cox proportional hazards regression). On the other hand, age was mostly assessed as a continuous variable in regression models, although no linear effect might exist, or with only few categories. Three out of seven studies included in this review conducted multivariable analyses including age as a continuous variable [18,22,23] and the other four studies used 85+ or 86+ years as the highest age category [14,15,24,25]. As NHRs typically represent a much wider age span, ranging from under 65 up to over 100 years [10,11], more differentiated age-specific patterns have to be assessed. When further taking into account that women have longer life expectancies than men, resulting in a higher percentage of women at older ages [10], both sociodemographic variables have to be considered simultaneously. This is important because the individual effects of age and sex cannot be determined otherwise, and confounding or effect modification is possible. However, none of the included studies stratified their results on ED visits by age and sex. In our recent systematic review on age and sex differences in hospitalisations of NHRs, we encouraged further research on the influence of sociodemographic characteristics on ED visits of NHRs [13]. As ED visits are frequent events in NHRs and only about half of the visits result in hospital admission [1,9,23], acute care in EDs plays an important role. Interestingly, we only found seven studies (of which the two articles from Stephens et al.
even reported findings from the same study [24,25]) on age or sex differences in ED visits of NHRs, compared to 20 in our review on hospitalisations [13]. Moore et al. already pointed out in 2012 [10] that understanding age- and sex-dependent patterns in NHRs is the key to optimizing individual care. Therefore, we strongly encourage that any further research on health care of NHRs should include large sample sizes and consider differences between these sociodemographic characteristics. Only after exploring reasons for age- and sex-specific patterns of ED visits can conclusions for health administrators and clinicians be drawn.

Strengths and limitations

We conducted the first systematic review examining age and sex differences in the epidemiology of ED visits of NHRs, using a comprehensive search strategy. We did not restrict our search to specific languages. Furthermore, we screened the reference lists of all included articles. Nevertheless, there is still the possibility that we could have missed studies that comprised information about ED visits of NHRs by sex or age. However, we screened the full text of about 100 articles that might have reported such information but finally included only seven relevant studies in our systematic review. The interpretation of our findings is limited by the inclusion of very heterogeneous studies in terms of populations, time frames and estimates (e.g. crude or standardised frequencies and multivariable regression models), which might have accounted for some of the differences in the results. The included studies are also too few to assess time trends or differences between countries. Since there are no established and validated tools for studies on prevalence and incidence, quality assessment was carried out using the critical appraisal instrument of the JBI [21]. This tool rather gives an overview of the study characteristics than evaluating methodological quality, and its application to studies using administrative data is difficult because such studies generally have, for example, an adequate response or an appropriate sample size. Further research on tools for quality assessment of studies examining prevalences or incidences is needed.

Conclusion

Our knowledge on age and sex differences in acute care use of NHRs is still limited. We only found seven studies meeting our inclusion criteria. Male NHRs visit EDs more often than females, but reasons for that are not analysed or discussed in the corresponding studies. The influence of age is less clear, which might be due to very heterogeneous age categorisations. Taken together, any future studies on acute care of NHRs should assess the influence of sociodemographic characteristics like age and sex. These studies should include large sample sizes to provide a more differentiated age categorisation.

Availability of data and materials

All data generated or analysed during this study are included in this published article [and its supplementary information files].

Authors' contributions

AB, FH and KA developed the concept of this systematic review. A comprehensive search strategy was then generated by AB and tested by AB and KA. AB performed the literature search. AB, FH and KA participated in the selection of literature, in the data extraction and in the quality assessment. All authors participated in the analysis of the literature, wrote and reviewed the manuscript and approved the final version.

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.
Understanding the Mechanism of Action of NAI-112, a Lanthipeptide with Potent Antinociceptive Activity
NAI-112, a glycosylated, labionine-containing lanthipeptide with weak antibacterial activity, has demonstrated analgesic activity in relevant mouse models of nociceptive and neuropathic pain. However, the mechanism(s) through which NAI-112 exerts its analgesic and antibacterial activities is not known. In this study, we analyzed changes in the spinal cord lipidome resulting from treatment with NAI-112 of naive and in-pain mice. Notably, NAI-112 led to an increase in phosphatidic acid levels in both no-pain and pain models and to a decrease in lysophosphatidic acid levels in the pain model only. We also showed that NAI-112 can form complexes with dipalmitoyl-phosphatidic acid and that Staphylococcus aureus can become resistant to NAI-112 through serial passages at sub-inhibitory concentrations of the compound. The resulting resistant mutants were phenotypically and genotypically related to vancomycin-insensitive S. aureus strains, suggesting that NAI-112 binds to the peptidoglycan intermediate lipid II. Altogether, our results suggest that NAI-112 binds to phosphate-containing lipids and blocks pain sensation by decreasing levels of lysophosphatidic acid in the TRPV1 pathway.
Introduction
Ribosomally synthesized and post-translationally modified peptides, or RiPPs, form a diverse group of natural products characterized by a peptide skeleton that can undergo a number of post-translational modifications [1]. In the past 20 years, new members have been added to this class of secondary metabolites, mostly thanks to the development of genome-mining tools to detect RiPP biosynthetic gene clusters in the growing microbial genome databases. Over 20 different RiPP families are known, each carrying unique chemical features [2]. This high structural diversity is due to the various post-translational modifications that impart new chemical functionalities on the precursor peptides, leading to core peptides that carry multiple variable sites [3]. In some cases, the discovery of a new RiPP has been accompanied by establishing the corresponding bioactivity [1,3]. However, because most new RiPPs are being discovered through approaches based on structural novelty, they are mostly orphans of an associated biological function (e.g., [4,5]) unless they have measurable antimicrobial activity. NAI-112 is a labionine (Lab)-containing lanthipeptide with a glycosylated tryptophan residue and an unusual methyl-Lab bridge [6]. It was discovered in a phenotypic screening program for inhibitors of bacterial cell wall biosynthesis. It showed modest antibacterial activity, with a minimal inhibitory concentration (MIC) of 32 µg/mL against Staphylococcus aureus. At the time of its discovery, there was only one precedent of Lab-containing lanthipeptides, the labyrinthopeptins. In mammals, many different pathways can lead to pain sensation, and the inhibition of many of them has been exploited by marketed drugs [8]. Knowledge about the pathway can properly direct the development path of a new drug candidate and address potential side effects [9]. For these reasons, it is important to establish the mechanism(s) of the analgesic activity of NAI-112 to evaluate its potential as a drug candidate. As a step in that direction, we report here the results from lipidomic studies in mice and the characterization of resistant mutants of S. aureus. Overall, our results are consistent with NAI-112 binding to lipids and interfering with the TRPV1 pathway.
Lipidome Analysis in the Spinal Cord
To understand the effects of NAI-112 and learn its possible mechanism in nociception, we performed an untargeted lipidomics experiment on mouse spinal cords to identify differential lipids that might play a role in the antinociceptive activity of NAI-112. We initially compared two experimental groups (vehicle, n = 7, and NAI-112, n = 6), in which mice were treated with 30 mg/kg body weight of NAI-112 by IP injection, and spinal cord samples were collected after 2 h for lipid extraction. After total lipid extraction from the samples, we acquired high-resolution LC-MS/MS data. A representative chromatogram of the mass-spectrometry-based untargeted lipidomics analysis is shown in Figure 2A. After data acquisition, we used dedicated statistical tools to identify differential lipids between the experimental groups. As shown in Figure 2B, principal component analysis (PCA) showed a clear separation between the experimental groups, and we then selected differentially expressed lipids using orthogonal projection to latent structures-discriminant analysis (OPLS-DA), as shown in Figure 2C. Among other significantly altered lipids, we observed a significant, sharp increase in the levels of phosphatidic acid (18:1/20:4, 1-oleoyl-2-eicosatetraenoyl-PA), as shown in Figure 3A.
(Figure 3 caption: No significant effects were observed on LPA levels in no-pain mice (C), while LPA (18:0) was reduced after treatment with NAI-112 in in-pain mice (D). In all panels, values are expressed as means ± SEM. The two-tailed t-test was used to assess statistical significance: * p < 0.05; ** p < 0.01; n.s., not significant.)
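For readers unfamiliar with the statistical step just described, the sketch below illustrates the general idea on a synthetic lipid-intensity matrix. It uses PCA and, in place of OPLS-DA (which is not available in scikit-learn), the closely related PLS-DA; the data and names are illustrative only, not the study's actual workflow (the authors used MarkerView™).

```python
# Minimal sketch of the unsupervised/supervised steps on a synthetic
# lipid-intensity matrix (rows = spinal cord samples, columns = lipid features).
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression  # PLS-DA stand-in for OPLS-DA

rng = np.random.default_rng(0)
X = rng.normal(size=(13, 200))     # 7 vehicle + 6 NAI-112 samples, 200 lipid features
X[7:, :5] += 2.0                   # pretend a few lipids shift with treatment
y = np.array([0] * 7 + [1] * 6)    # 0 = vehicle, 1 = NAI-112

Xs = StandardScaler().fit_transform(X)
scores = PCA(n_components=2).fit_transform(Xs)   # group separation, as in Fig. 2B

pls = PLSRegression(n_components=2).fit(Xs, y)
# Features with the largest weights on the first component are the
# candidate differential lipids (here, the five that were shifted above).
top = np.argsort(np.abs(pls.x_weights_[:, 0]))[::-1][:5]
print("candidate differential lipid indices:", top)
```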
There are several reports showing a role of phosphatidic acids (PAs) and of their hydrolytic products, lysophosphatidic acids (LPAs), in pain sensation. In neuropathic pain, damage to the nerve along the pain pathway results in spontaneous generation of action potentials and a lowered nociceptive threshold, as seen in allodynia and hyperalgesia. This abnormal pain transmission has been linked to LPA production in the spinal cord [10][11][12]. Having observed an effect of NAI-112 on PA levels in naive mice, we wanted to see what happens when pain is induced before NAI-112 injection. We therefore repeated the untargeted lipidomics experiment using three experimental groups of seven mice each: vehicle, pain model (sciatic nerve ligation model [6]), and treated (pain model + NAI-112). The lipidomics experimental workflow and data analysis remained the same, but we focused mainly on PAs and LPAs and then overlapped the two experimental results (with and without pain), as shown in Figure 3. Overall, treatment with NAI-112 did not lead to profound alterations in the spinal cord lipidome but led to an increase in PA levels and a concomitant decrease in LPA levels. Interestingly, LPA has been reported as a chemical signature of neuropathic pain, since it activates the TRPV1 receptor, triggering pain sensation [10,11].
Effects on Enzymes in the TRPV1 Pathway
The lipidome data are consistent with an effect on the TRPV1 pathway and suggest that NAI-112 directly or indirectly interferes with the formation of LPA. A previous experiment carried out with NAI-112 and a selected panel of pain-related receptors (Ca2+ channel, norepinephrine transporter, TRPM8, and TRPV1) showed no antagonist effect of NAI-112 by direct binding to TRPV1 (see the Materials and Methods section for details). We thus investigated whether NAI-112 can affect any of the enzymes acting upstream of TRPV1 according to the model proposed by the Nobel Prize winner David Julius and coworkers [13]. Using commercially available kits, we unexpectedly observed that NAI-112 inhibits multiple enzymes of the pathway, such as protein kinase C and several phospholipases. Oddly, the activity of some of the enzymes was enhanced by low NAI-112 concentrations, and the inhibition curves did not show the expected sigmoid shape. Of note, all these enzymes require micelles for optimal activity and were inhibited by NAI-112 with similar IC50 values (Supplementary Figure S1). A possible explanation for these observations is that NAI-112 strongly interacts with micelles, enhancing their stimulatory effect at low concentrations and then disrupting their structure in a linear, concentration-dependent manner, thus interfering with the activity of micelle-requiring enzymes. These results are consistent with the possibility that NAI-112 directly binds PAs, sequestering them from processing by phospholipases and thus leading to decreased LPA levels in the spinal cord.
Binding Experiments
We thus tested whether NAI-112 is able to form a complex with either PAs or phosphatidylethanolamines (PEs).
When 0.4 mM 1,2-dipalmitoyl-phosphatidic acid (DPPA) or 1,2-dipalmitoyl-phosphatidylethanolamine (DPPE) was incubated with an equimolar amount of NAI-112 for 30 min at room temperature and analyzed by direct infusion in a mass spectrometer in negative and positive ionization modes, after 10-fold dilution with 50% acetonitrile, a signal at m/z 1506, consistent with a 1:1 complex of NAI-112 with DPPA, was detected. When performing these studies, we realized that the commercial DPPA sample contains a detectable amount of LPA. By zooming into the range between m/z 1300 and 1800, both in negative and in positive ionization mode, a signal at m/z 1387.7 [M − 2H]2− was detected in the DPPA-NAI-112 sample, consistent with a 1:1 complex of NAI-112 with LPA (Figure 4). Altogether, these results suggest that NAI-112 can form complexes with mono- or dipalmitoyl-glycerols carrying an unmodified phosphate group.
Isolation of NAI-112-Resistant Mutants
NAI-112 was discovered in the course of a screening program for peptidoglycan biosynthesis inhibitors and showed modest inhibitory activity against staphylococci and streptococci [6]. We reasoned that if the antinociceptive activity of NAI-112 is due to binding to lipid components, a similar mechanism might be responsible for its antibacterial activity. Thus, the isolation of bacterial strains resistant to NAI-112 might shed light on its molecular target. We therefore looked for NAI-112-resistant mutants by direct selection on media containing NAI-112 and by serial passages in the presence of sub-inhibitory concentrations. When S. aureus cells were plated on NAI-112-containing medium, no spontaneous resistant mutants were obtained in the presence of 160 or 640 µg/mL of NAI-112. Thus, at these concentrations (equivalent to 5× and 20× its MIC, as determined in liquid medium), spontaneous resistant mutants occur at a frequency of <10⁻⁹ CFU/mL. No attempts were made to select resistant mutants at lower NAI-112 concentrations. We were, however, able to generate resistant strains by serial passages in the presence of sub-inhibitory concentrations of NAI-112. In general, multiple passages at the same sub-inhibitory concentration were required before S. aureus was able to grow at the next higher concentration. After the eighth passage, the strain was able to grow at 64 µg/mL of NAI-112, and after the fifteenth passage, growth was observed in a culture containing 256 µg/mL of NAI-112 (Figure 5A).
From the population growing at the highest NAI-112 concentrations after passages 8 and 15 (64 and 256 µg/mL, respectively), single colonies were isolated after plating on antibiotic-free medium. Two colonies from the 8th passage (designated R8.1 and R8.2) and five from the 15th passage (designated R15.1 through R15.5) were analyzed for growth and antibiotic susceptibility. An example for colonies R8.1 and R15.5 is shown in Figure 5B. In the absence of NAI-112, the two mutant strains and the wild type grew at a similar rate. However, while growth of the wild type was retarded and fully inhibited at 16 and 32 µg/mL of NAI-112, respectively, 64 and 128 µg/mL of NAI-112 were required to observe a similar effect on mutant strain R8.1.
Mutant strain R15.5 was slightly more resistant, showing reduced growth at 128 µg/mL and no growth at 256 µg/mL. The growth curves of the other analyzed mutant strains are shown in Supplementary Figure S3. Mutant strain R8.2 showed reduced growth and no growth at 64 and 128 µg/mL, respectively, while mutant strains R15.1-R15.4 behaved similarly, with growth retardation and growth inhibition observed at 128 and 256 µg/mL of NAI-112, respectively. In summary, the analysis of individual colonies confirmed the results from the population studies (Figure 5A), indicating that the strain had become increasingly resistant to NAI-112 through serial passages. We then wondered whether any of the mutant strains had become resistant to other antibiotics in addition to NAI-112. The results for mutant strains R8.1 and R15.5 are reported in Table 1. Interestingly, there was a modest but consistent shift in the MICs of vancomycin, ramoplanin, and NAI-107 against R8.1 and R15.5. These antibiotics all bind to the essential peptidoglycan precursor lipid II and are known to be partially affected by the mutations arising in the so-called vancomycin-insensitive Staphylococcus aureus (VISA) strains [14,15]. In contrast, the MICs of erythromycin, ciprofloxacin, and rifampicin, which target other cellular processes, were not affected.
Genome Analysis of Resistant Strains
To identify the mutations responsible for NAI-112 resistance, we compared the genomes of the parental strain and of the mutants R8.1, R15.3, R15.4, and R15.5. This analysis led to the identification of 37 mutations, including 33 single-nucleotide polymorphisms (SNPs) and 4 insertions or deletions of bases (INDELs). Six SNPs were common to the four mutants. The remaining SNPs/INDELs were distributed as shown in Figure 6, with the majority of mutations shared by at least two mutants. All mutant strains carry different mutations, so they are not siblings. The SNPs observed in at least two mutants are listed in Table 2, while the remaining 20 unique mutations are reported in Supplementary Table S1. (Note that 16 of the 37 SNPs with respect to the parental strain matched the corresponding sequences in the deposited ATCC 6538P genome.) Of the six SNPs common to the four mutants, only one led to an amino acid change in the corresponding gene product, Cys598Tyr in WalK (Table 2), a two-component-system sensor kinase involved in cell wall metabolism and previously reported as a necessary but not sufficient mutation for conferring decreased susceptibility in some VISA lineages [18][19][20]. The other five common SNPs fall in intergenic regions or represent synonymous substitutions in transposase genes, suggesting that they affect gene expression.
Interestingly, four mutations fall within SAFDA_1386, which belongs to the DUF1672 family, a major component of the S. aureus lipoproteome [21]: two are synonymous substitutions and two result in changes of two distinct amino acids (Table 2). The other SNPs common to at least two mutants are also listed in Table 2. Altogether, these results indicate that strains selected for resistance to NAI-112 are phenotypically and genetically similar to VISA strains, which can arise through multiple mechanisms [22,23], one of which is represented by mutations in walK [18,19].
Discussion
The results from the lipidomic experiments in mice, from the binding assays analyzed by MS, and from the analysis of the NAI-112-resistant S. aureus mutants are consistent with the hypothesis that NAI-112 binds to one or more phosphate-containing lipids and, by doing so, interferes with the processing of these lipids by the relevant enzymes. In mice, this interference translates into lower LPA levels. It has been reported that, while LPA produces acute pain-like behaviors in control mice, those effects are substantially reduced in Trpv1-null animals. It was demonstrated that TRPV1 is a direct molecular target of the pain-producing molecule LPA, the first example of LPA binding directly to an ion channel to acutely regulate its function [24]. Since LPA has been reported to activate the TRPV1 receptor, triggering pain sensation, a decrease in LPA is expected to result in pain relief. That NAI-112 acts in the TRPV1 pathway is also consistent with previous experiments demonstrating that pain amelioration by NAI-112 is reverted by the TRPV1 antagonist AMG9810, while NAI-112 does not bind directly to TRPV1 [6]. A few studies have identified the molecular targets of lanthipeptides. Notably, class I lanthipeptides, including nisin, gallidermin, and NAI-107, have been shown to bind to lipid II, followed by pore formation in the membrane or membrane disruption [25][26][27]. Class II lantibiotics, such as mersacidin and actagardin, also bind to lipid II [28]. Interestingly, some structurally unrelated class II lantibiotics, namely cinnamycin and its variants duramycin and ancovenin, have been shown to bind to phosphatidylethanolamine-containing lipids [29][30][31][32]. Two-peptide lantibiotics, exemplified by lacticin 3147, also form a complex with lipid II through their mersacidin-like component [33]. Recently, Medeiros-Silva et al. [34] demonstrated binding of nisin to the pyrophosphate cage of lipid II in membranes of bacterial cells. While this work was in progress, Prochnow et al. [35] reported that labyrinthopeptins induce a virolytic effect through binding to phosphatidylethanolamine, a component of the viral membrane. Given all these precedents for lanthipeptides, the similar ring topology between NAI-112 and the labyrinthopeptins (Figure 1), and the results presented herein, it is tempting to speculate that lanthipeptides are generally able to bind to (pyro)phosphate-containing lipids, and that small differences in their structures can result in preferential affinity for various phospholipids. Since only a portion of a lanthipeptide is involved in phospholipid binding, its biological activity might ultimately depend on a combination of the preferred phospholipid and the interaction of the remaining lanthipeptide portion with membrane-associated cellular components.
NAI-112 Source
NAI-112 was obtained after cultivation of Actinoplanes sp.
DSM24059 in two 4 L bioreactors, followed by purification of the compound according to the procedures described by Iorio et al. [6]. A single batch of product was used for all the experiments described herein. For each study, NAI-112 was freshly dissolved in dimethyl sulfoxide at 10 mg/mL and diluted with the appropriate medium just before use.
Mice Study
Male CD1 mice weighing 25-30 g (Charles River) were used in accordance with the ethical guidelines of the International Association for the Study of Pain and in compliance with Italian and European Economic Community regulations (D.M. 116192; O.J. of E.C. L 358/1 12/18/1986). Mice were housed in groups of 4 or 5 in ventilated cages containing autoclaved cellulose paper as nesting material, with free access to food and water. They were maintained under a 12/12 h light/dark cycle (lights on at 08:00 a.m.) at controlled temperature (21 ± 1 °C) and relative humidity (55 ± 10%). Sciatic nerve ligations were performed as described by Iorio et al. [6]; the code number of the authorized animal protocol is Decreto ministeriale n. 41/2010-B. The vehicle or test compound was dissolved in 0.9% sterile saline/5% PEG-400/5% Tween-80 and injected subcutaneously. Spinal cord dissection was done according to Henriques et al. [36]. Mice were sacrificed by decapitation, and spinal cords were rapidly dissected, frozen in liquid nitrogen, and stored at −80 °C until further processing.
Untargeted Lipidomics Sample Preparation
For lipid extraction from spinal cord samples, the Bligh-Dyer protocol was followed. Briefly, equal amounts of sample were dissolved in 2 mL of 2:1 (v/v) methanol:chloroform, followed by homogenization, vortexing, and sequential addition of 0.6 mL of chloroform and water. Samples were centrifuged at 3500 rpm for 20 min, and the lower organic phase was transferred into clean 4 mL glass vials and dried under a nitrogen stream. Dried samples were reconstituted in 0.4 mL of 9:1 (v/v) methanol:chloroform.
Mass Spectrometer Data Acquisition
Four microliters of each sample was injected in a Waters UPLC Acquity system coupled to a Synapt G2 QToF high-resolution mass spectrometer operating in negative (ESI−) ion mode. After sample injection, lipids were separated on a reversed-phase C18 CSH column (2.1 × 100 mm) using mobile phase A (10 mM ammonium formate in 60:40 acetonitrile/water) and mobile phase B (10 mM ammonium formate in 90:10 isopropyl alcohol/acetonitrile). The total run time for each sample was 25 min, and the following gradient program was used: 30% mobile phase B for 1 min, brought up to 35% in 3 min, then to 50% in 1 min, and then to 100% in 13 min, followed by a 1 min isocratic step at 100% mobile phase B and reconditioning to initial conditions until the 25 min total run time. Throughout the lipid separation, the column temperature was maintained at 55 °C. For mass spectrometry data acquisition, the following parameters were used: capillary voltage was set at 2 kV and cone voltage at 35 V; the source temperature was 120 °C; desolvation gas and cone gas (N2) flows were 800 L/h and 20 L/h, respectively; and the desolvation temperature was set to 400 °C. Data were acquired in MSe mode, and MS/MS fragmentation was performed in the trap region. Low-energy scans were acquired at a fixed 4 eV potential, and high-energy scans were acquired with an energy ramp from 25 to 45 eV. The scan rate was set to 0.3 s per spectrum, and the scan range to m/z 50-1200.
Leucine enkephalin (2 ng/mL) was infused as a lock mass for spectra recalibration.
Multivariate Data Analysis and Lipid Identification
After data acquisition, raw data files were loaded into MarkerView™ software (Applied Biosystems/MDS Sciex, Toronto, Canada) to generate principal component analysis (PCA) and orthogonal projection to latent structures-discriminant analysis (OPLS-DA) plots for the identification of differential lipids across groups. Analytes were identified using the METLIN and HMDB databases with a mass tolerance of 5 ppm. The accurate mass list was searched against the METLIN [37] and HMDB [38] databases [39], in addition to indications on MS/MS fragmentation patterns already available in the literature [40]. GraphPad Prism software was used for the final visualization of significant lipids and the calculation of p-values.
Enzymatic Assays
To assess the effects of NAI-112 on enzymatic activities involved in the vanilloid-sensitive pathway, commercially available kits were used. Calibration curves to determine the enzyme concentration for each assay were performed, and all assays were run following the instructions provided with the commercial kits. All experiments were run in duplicate. Control reactions consisted of a negative control without enzyme and a positive control in which no inhibitor was added. For protein kinase C (PKC), we used the ab139437 PKC Kinase Activity Assay Kit (Abcam, UK), which is based on antibody recognition of a phosphorylated peptide produced in situ by PKC. For phospholipase C (PLC) activity, the Amplex® Red Phosphatidylcholine-Specific Phospholipase C Assay Kit (Molecular Probes) was used, which uses the phosphatidylcholine-specific PLC enzyme from Bacillus cereus and measures the product through a coupled enzyme assay. For phospholipase A1 (PLA1) activity, the EnzChek Phospholipase A1 Assay Kit (Thermo Fisher Scientific) was used, which measures product formation from a fluorescent substrate.
Binding Experiments
Binding experiments were performed by mixing 0.4 mM 1,2-dipalmitoyl-phosphatidylethanolamine (DPPE) or 1,2-dipalmitoyl-phosphatidic acid (DPPA) and 0.4 mM NAI-112 in 2 mM Tris-HCl at pH 7.5 with 8% DMSO. No additional pH modulators were added. After 30 min at room temperature, the binding mixtures were diluted 10 times with 50% acetonitrile and analyzed by direct infusion in low-resolution mass spectrometry in negative and positive ionization modes. The single molecules were studied separately as controls. The instrument was an LCQ-Fleet (Thermo Fisher Scientific, Waltham, MA, USA) equipped with an electrospray interface (ESI) and a three-dimensional ion trap. The m/z range was set at 200-2000, with ESI conditions as follows: spray voltage of 3500 V, capillary temperature of 275 °C, sheath gas rate at 15 units, and auxiliary gas rate at 0 units. The flow rate was set at 3 µL/min, and the normalized collision energy used to fragment the complex was 25%.
Selection of Resistant Strains by Direct Plating and Serial Passages
All experiments were performed with S. aureus ATCC 6538P. The strain was propagated at 37 °C in Mueller-Hinton Broth (BD Difco™) containing 20 mg/L of CaCl2 and 10 mg/L of MgCl2 (MHBC medium). For selection of resistant strains by direct plating, S. aureus cultures were grown to an OD600 of 0.7, corresponding to 5 × 10⁸ CFU/mL, centrifuged, and resuspended in fresh medium to 1 × 10¹⁰ CFU/mL. Then, 100 µL was spread on Mueller-Hinton Agar (BD Difco™) supplemented with 160 µg/mL or 640 µg/mL of NAI-112 and incubated at 37 °C.
Colonies were scored after 10 days. For selection of resistant mutants by serial passages, MHBC medium containing serial twofold dilutions of NAI-112, from 512 µg/mL down to 0.5 µg/mL, was inoculated with 1 × 10⁵ CFU/mL in a 96-well microtiter plate. Cultures were incubated overnight at 37 °C. The culture at the highest concentration that supported growth was diluted to 1 × 10⁵ CFU/mL with fresh medium and added to a fresh plate containing serial dilutions of NAI-112 as before, followed by overnight incubation. This process was continued for 15 passages. From the cultures at passage 8 (grown at 64 µg/mL) and passage 15 (grown at 256 µg/mL), single colonies were isolated and named R8.1 and R8.2 (from the eighth passage) and R15.1-R15.5 (from the fifteenth passage).
Antibacterial Assays
MICs were determined by the broth microdilution method in sterile 96-well microtiter plates according to Clinical and Laboratory Standards Institute (CLSI) guidelines, as described by Iorio et al. [6]. When indicated, bovine serum albumin was added to MHBC medium at 0.02% (w/v). Bacteria were inoculated at 1 × 10⁵ CFU/mL, and after 24 h of incubation at 37 °C, the MIC was defined as the lowest drug concentration causing complete suppression of visible growth. Growth curves were measured under the same conditions by recording the optical density at 595 nm (OD595) over 20 h using a Synergy 2.0 plate reader (BioTek, Winooski, VT, USA).
Genome Sequencing and Bioinformatic Analysis
Genomic DNA was extracted from the S. aureus strains using the GenElute bacterial genomic DNA kit (Sigma-Aldrich, St. Louis, MO, USA) according to the manufacturer's instructions. Genome sequences were determined by GenProbio Srl (www.genprobio.com, accessed on 23 July 2019) using the Illumina MiSeq platform. DNA libraries were prepared using the Nextera XT DNA sample preparation kit (Illumina, San Diego, CA, USA) according to the manufacturer's instructions. One ng of input DNA from each sample was used for library preparation. The isolated DNA underwent fragmentation, adapter ligation, and amplification. Samples were quantified using a Qubit fluorometer, followed by size evaluation using a TapeStation 2200 (Agilent Technologies, Santa Clara, CA, USA). The ready-to-go libraries were pooled equimolarly, denatured, and diluted to a sequencing concentration of 15 pM. Library samples were loaded into a Flow Cell V3 (600 cycles; Illumina) according to the technical support guide. Sequencing cycles resulted in an average read length of approximately 290 nucleotides for both paired-end sequences. Fastq files of the paired-end reads were used as input for genome assembly through the MEGAnnotator pipeline [41]. The SPAdes program, version 3.12.0, was used for de novo assembly of the genome sequences [42]. The assembled contigs were ordered using MAUVE software [43], with accession number AP014942.1 as the reference genome. For each sequenced strain, assembly yielded approximately 2.74 Mb of mappable data and resulted in 25-31 large contigs, with an average 159-210-fold coverage. SNPs and INDELs were predicted by mapping the sequenced reads of each genome against the AP014942.1 GenBank entry as the reference genome. Of note, our sequence of the parental strain ATCC 6538P presented 135 SNPs/INDELs versus the reference genome. These differences were ignored when looking for SNPs/INDELs in the mutant strains (Tables 2 and S1).
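As a minimal illustration of the parental-baseline filter described above, variants shared with the re-sequenced parental strain can be subtracted from each mutant's call set before interpretation. The variant records and values below are hypothetical stand-ins, not the authors' actual pipeline or data.

```python
# Sketch: remove parental-strain variants from mutant call sets, assuming
# variants are keyed by (position, reference allele, alternate allele).
from typing import NamedTuple, Set

class Variant(NamedTuple):
    pos: int   # position on the AP014942.1 reference
    ref: str   # reference allele
    alt: str   # alternate allele

def private_variants(mutant: Set[Variant], parental: Set[Variant]) -> Set[Variant]:
    """Variants present in the mutant but absent from the parental baseline."""
    return mutant - parental

# Hypothetical call sets: the first parental SNP is shared and gets filtered out.
parental = {Variant(10233, "C", "T"), Variant(884201, "G", "A")}
r8_1 = {Variant(10233, "C", "T"), Variant(1405877, "G", "A")}

print(private_variants(r8_1, parental))  # only the mutant-specific SNP remains
```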
Naira-Yuan Diplomacy: A Pathway for Unlocking Nigeria's Manufacturing Sub-Sector Potentials
The fountainhead for weighing one unit of a domestic currency in terms of another within an international framework is rooted in the famous Gold Standard proposed by the Bretton Woods Institutions (BWIs). This brand of thought has since been practiced and experienced in the numerous trade ties that Nigeria has had with China. Like other bilateral agreements, it sets out to redefine and deepen the two countries' economic space. Thus, this paper sheds light on Naira-Yuan diplomacy as a pathway for unlocking Nigeria's manufacturing sub-sector potentials. The manufacturing industries are engines of economic prosperity; job creation and poverty-reducing strategies are core values of manufacturing too. This paper concludes that the exchange rate pass-through mechanism can transmit price increases, macroeconomic instability and supply shocks from China to the Nigerian economy (especially through manufactured products) when adequate provisions are not taken domestically. Furthermore, Naira-Yuan diplomacy will increase imports from China and thereby increase China's foreign income, since Nigeria will be spending more on Chinese manufactured products, raising the national income of China. The policy implication of this finding is that the net exports of China will rise faster and add to the expansion of her domestic income instead of Nigeria's. The paper therefore calls upon Nigeria to be proactive in ensuring stable trade and exchange rate policies and to deepen technical innovation in local manufacturing tools to boost output, rather than depending more on China.
Introduction
One important feature of today's life is that productive resources are traded across national boundaries (Nenbee and Medee). According to Akinbobola and Nwosa (2015), productive resources (also called foreign capital) flow in various forms, such as workers' remittances, foreign aid, private capital inflows and more. Today, most countries engage in diverse forms of international trade agreements to better their citizens' lives. Often, such transactions take the form of regional blocs and bilateralism. Bilateralism exemplifies a form of political-cum-economic relationship between two trading countries. It offers the two countries an opportunity to optimize the comparative advantage inherent in trade arrangements. One undeniable fact noted by Helpman and Krugman (2009:1) is that more than half of world trade is in manufactured goods, where markets are often oligopolistic rather than competitive. Nigeria's manufacturing sector also exhibits this "oligopolistic rather than competitive" market structure. The reality, therefore, is the need to formulate healthy trade policies and agreements. Experiences recently gathered from events in other climes, according to Lindert and Pugel (1996:1), reveal that national borders came and went, especially during the 1990s. Other developments were the emergence of the trade-bloc revolution, the immigration fight over the "who is my neighbour" paradigm, and gyrating exchange rates between the early and mid-1990s. Irrespective of either the convergence or divergence of countries' trade policy models, the thin line is that Nigeria's economic strength can be measured by the development of its manufacturing industries. The manufacturing industries are engines of economic prosperity. Job creation and poverty-reducing strategies are core values of manufacturing.
Manufacturing, in Kenton's (2019) discourse, connotes the processing of raw materials or parts into finished goods through the use of tools, human labor, machinery and chemical processing. These finished products are traded in both domestic and foreign markets. External trade, therefore, is that which involves a country's commodities crossing into another country and involves the use of foreign currency (Gbosi, 2019). It describes the exchange of commodities among nations in which payments are made using an internationally recognised currency platform. This informs trade policies and agreements between nations, like the China-Nigeria bilateralism option. China's interest in developing nations is traceable to the famous Bandung Conference of 1955 and, later, to 1980, when she joined the Bretton Woods Institutions (BWIs) and the General Agreement on Tariffs and Trade (now the World Trade Organisation [WTO]). The quest to expand the market for China's manufactures perhaps informed the recently signed currency swap agreement with Nigeria. The idea was to foster beneficial trading activities between the two countries. As reported by the CBN (2018), it was intended to relax the possible influence and use of the US dollar (greenback). The Yuan, the basic unit of the Renminbi under Chinese law since 1948, was to be exchanged for Nigeria's Naira (CBN, 2018). One take-home from the above is that both countries should be better off in this formally entered trade agreement. Historically, extant economic literature since the era of Mercantilist thinking has laboured to show that the effective movement of goods and services across national boundaries is possible through exchange rates measured in the various domestic currencies. Like other bilateral agreements, it is hoped that the economic and social interaction between the two nations will blossom. Thus, this paper sheds light on Naira-Yuan diplomacy as a pathway for unlocking Nigeria's manufacturing sub-sector potentials. The paper is subdivided into six sections. Sections two and three present an overview of Naira-Yuan diplomacy and stylized facts about the manufacturing sector in China and Nigeria. Section four gives a synoptic view of the currency swap agreement. Section five looks at the Naira-Yuan arrangement as a mechanism for unlocking potentials in Nigeria's manufacturing sector, while section six presents the concluding remarks.
Naira-Yuan Diplomacy: A Historical Reflection
Both China and Nigeria set out to achieve macroeconomic goals like poverty reduction, price stability, balance of payments equilibrium, etc. Each of them needs to overcome a plethora of structural rigidities. These structural rigidities are by-products of the adopted type of economic system. China, for instance, started as a socialist economy and relied on the famous Marxian ideologies. The crux of the Marxian thesis was to rationalize direct government participation in productive activities. In China, public enterprises were mainly owned and controlled by the state, and they produced diverse outputs. The growth in Chinese public enterprise ownership can be traced to the opinion of Kirkpatrick, Lee and Mixson (1985:156), who noted that "the growth in the public-enterprise section can be analyzed from two alternative although not mutually exclusive perspectives. The first approach views the establishment of public enterprises largely as a result of certain economic factors."
"Government ownership of the production process is therefore seen as a non-ideological response to failures in the workings of the market mechanism" (Kirkpatrick, Lee and Mixson, 1985:156). A second approach considers the public enterprise sector from a broader socio-political perspective and sees its growth as being determined by the interplay of political and social forces within developing countries (Kirkpatrick, Lee and Mixson, 1985:156). History attests to the fact that China was a developing country during the 1980s, but, like other East and South-East Asian countries, certain causative factors enhanced her growth. These factors, in the eyes of Adedeji (1998), were a broad-based growth strategy with high human capital investment, a positive regional macroeconomic environment, political stability, and the softening of authoritarianism through social justice. Of particular interest from the above is that China has escaped the clutches of a developing economy. Danjuma and Luis (2017) aptly write that the world economic system has heralded the emergence of China as a major economic powerhouse, a fact that must be reckoned with by the US and other major actors alike in the global arena. Nenbee and Kalu (2013) reasoned that the quest for China's economic dominance in the world surfaced in the 1970s, sequel to diverse economic reforms. This informs why the contours of an exalted Chinese economic map are commonplace in international trade circles today. Dèes (2001) noted that China embarked on tremendous economic reforms in 1978. These reforms were visible in the form of an impressive annual real output growth of 8.3%. The progress of the Chinese economic reforms is undoubtedly the main reason for the Chinese take-off. The Chinese reform agenda primarily targeted two main goals: marketizing the internal economy and opening up to the rest of the world (Dèes, 2001). Qin et al. (2007) accept Dèes's (2001) view that the reforms progressed gradually from farming to commerce, to state-owned enterprises, and to government finance and banking. The net result of the reforms in China is increased socio-economic prosperity today. Chamberlin and Yueh (2006:447) aptly noted that China's recent economic performance has been truly impressive, growing at an annual average rate of 9.8 percent. Since 1978, China has become an important actor in global economic trade discourse. Unlike China, Nigeria started as an agriculture-driven economy during the 1960s but later slipped into crude oil fortunes. The accidental oil economy boomed, especially during the 1970s and beyond. Iwayemi (2013), while reflecting on the 1970s, noted that oil revenues rose sharply from 26.3 percent in 1970 to 82.1 percent in 1974. However, the sad story remains the re-investment of the accruable revenues into enclave sectors bathed in corruption rather than into the manufacturing sector (see NEEDS, 2004; Nenbee and Kalu, 2013; Ekpo, 2015; Oakhenan and Aigheyisi). The seedbed of China-Nigeria economic relations is the quest to boost trade and capital investment. Izuchukwu and Ofori (2014) contend that Nigeria is the largest recipient of FDI in Africa. Admittedly, UNCTAD (2009) had also reported that Nigeria is the nineteenth greatest recipient of FDI in the world. As China seeks to expand its trade relations with Africa, she is becoming one of Nigeria's most important sources of FDI. In 2013, the Chinese government invested $1.1 billion in Nigeria's infrastructure (UNCTAD, 2009).
This billion-dollar investment from China in Nigeria calls for questioning and concern. What type of investment is of interest to China? One needs to ask why China is investing so heavily in Nigeria. Izuchukwu and Ofori (2014) identified the oil and gas sector as having received 75 percent of China's FDI, while Lafargue (2005) pointed to raw material deposits, with China increasing her trading partnership with Nigeria in order to secure regular supplies. Table 1 shows a taxonomy of China's investments in Nigeria. The popular lesson from the above discussion is that great tact and discretion should be displayed in the Yuan-Naira diplomacy option. China is today unarguably one of Nigeria's closest business partners, and trade between the two countries is on the increase. For instance, trade between them hit US$384 million in 1998; by 2006 it had risen to US$3 billion, and in 2010 it was worth US$7.8 billion (Wiki; Investopedia, 2019). This trading relationship is skewed in favour of China's manufactured products and will likely pose a major economic challenge to Nigeria if not properly re-examined.
China's Manufacturing Sector Outlook
Over 40.5 percent of China's Gross Domestic Product (GDP) comes from industrial output, which also ranks among the highest shares of worldwide industrial output (IMF). This is the result of a painstaking industrial development approach. Machine-building and metallurgical industries were the key drivers, as they accounted for about 20-30 percent of the total gross value of industrial output (Florida). Though China's facts and figures show an impressive picture, there is a need to further analyse the implications for Nigeria. WDI (2019) clearly stated that imports from China account for more than 35% of total imports in Nigeria, but the NBS (2019) maintained that the figures changed in 2018, with Nigeria's imports from China declining to 26.4% while exports to China rose to 14.57%. What a paradox of trade outcomes.
The Yuan-Naira Swap Agreement: A Synoptic View
The trade paths of nations over time have shown mutualistic behaviour, and the case of China and Nigeria is a typical example. The government in Nigeria would like to increase domestic output, and vice versa with China. This reason perhaps underlies the CBN (2015) report that imports from China have provided a cushion for the output gap and domestic output shortfalls, as well as machinery and/or spare parts for production and vehicle assemblage in Nigeria. Nigeria, in turn, exports millions of barrels of crude oil and raw materials (e.g., cocoa beans, sesame seeds, zircon sand) to China (CBN, 2015). In our view, there seems to be little value addition to these products exported from Nigeria to China. Furthermore, China's concerted effort in strengthening trade relations with Nigeria is underlined by her increasing appetite for the natural resources domiciled in Nigeria. Again, China's exports to Africa, and indeed to the Nigerian market, have grown rapidly, with the attendant effect of creating more severe competition in domestic markets. This will most likely create less exchange value for other imported goods that are not of Chinese origin, as well as weaken the productive strength of domestic and/or infant industries due to the cut-throat competition they face (see Schott, 2004). An attempt to redress the above daunting challenges perhaps made the CBN issue, on June 6, 2018, the Regulations for Transactions with Authorized Dealers in Renminbi. The Regulations provide the framework for the implementation of the bilateral currency swap agreement.
The currency swap agreement covered trade financing and investment between the two countries (see Investopedia, 2015 and Emeka). Taking a glimpse of China's manufacturing outlook can set the tone for understanding where Nigeria stands in the currency swap agreement. Aside from the United States of America, India, Japan and a few others, China is obviously the leader in manufacturing and industrial output in the world today. These sectors account for over 40% of China's GDP (Sean, 2015). Prominent Chinese industrial sectors include manufacturing, agriculture and telecommunication services (Investopedia, 2015). Manufacturing in China is an important investment option with diverse product lines. Despite the increasing economic growth recorded in Nigeria from 2011 to 2013, as reported by the National Institute for Legislative Studies (NILS, 2015), social realities show increasing evidence of unemployment and poverty incidence. The industrial sector's contribution to economic progress is still relatively poor. Undoubtedly, industrial development is a route to achieving sustained economic growth (Todaro, 2000 and Orji, 2019), and the manufacturing sub-sector lies at the heart of the industrial sector. Manufacturing in most developing economies (including Nigeria) is still at a teething stage. The anatomy of the manufacturing sector in Nigeria, as reported by the NBS (2019), suggests a tale of woes in recent years. It exhibits a case of underperformance that can be traced to the discovery of crude oil in Nigeria. Manufacturing's contribution to Gross Domestic Product (GDP), as reported by the NBS (2013), stood at 7.2% in 1970 and 7.4% in 1975. It later nosedived to 5.4% in 1980 before rising to about 10.7% in 1985. In 1990, the manufacturing sector contributed only 8.1% to GDP, dropping thereafter to 7.9% in 1992 and 6.7% in 1995. The downward trend persisted till 2000, with an all-time low of 3.4%. However, the activities of the manufacturing sector began to increase in recent times due to some policy interventions of government and the ban on some imported products (see CBN, 2011; Fred-Young and Evans, 2018). Sequel to government's policy measures, the manufacturing sector in Nigeria witnessed significant growth compared to its performance in the 2000s (NBS, 2019). Figure 7 reiterates the fact that, though the manufacturing sector has been described as the modern catalyst for economic progress, Nigeria has not attained such a feat. Kayode (2000) is of the view that Nigeria's manufacturing sector is still beclouded by challenges in the form of low capacity utilization, an unstable exchange rate, infrastructural inadequacies, a persistent rise in production costs, multiple taxation, policy inconsistency of government, porous borders and smuggling activities, inadequate and erratic power supply, and inefficient energy utilization, among others.
Naira-Yuan as a Mechanism for Unlocking Potentials in Nigeria's Manufacturing Sector
Nigeria might benefit from the Naira-Yuan currency swap agreement if she understands that international trade is no tea party. Countries engage in international trade to attract increased economic progress, and at the centre of this economic progress are the contributions of the real sectors. The manufacturing sector is now the modern driver of growth. Among the expected channels through which Nigeria may benefit are the duo of the exchange rate pass-through effect and the repercussion channel effect.
(a) Exchange Rate Pass-through Channel Effect: The exchange rate represents the value of a domestic currency in terms of a foreign one. Here, our concern is to gauge the responsiveness of the domestic prices of goods denominated in Naira to the exchange rate (i.e., Yuan-Naira). Conceptually, it is the elasticity of Naira import prices with respect to the Naira-Yuan exchange rate. Extant economic literature reiterates the fact that changes in import prices affect both retail and consumer prices. This implies that an increase in exchange rate pass-through can trigger the transmission of inflation between China and Nigeria. According to Chamberlin and Yueh (2006:445), "any factor that changes import prices, whether it is the marginal cost of production, the mark-up or the nominal exchange rate, can have a bearing on the domestic price level. Out of these, the factor that has attracted the most interest is the nominal exchange rate". In sum, changes in the exchange rate have a direct impact on the domestic price level through their effect on import prices. Expectedly, in Naira-Yuan diplomacy, the percentage change in the cost of imports relative to the exchange rate is of concern. The rate of change in domestic prices in relation to a unit change in the exchange rate will either make or mar the performance of the manufacturing sector. Accordingly, an adequate supply of Chinese Yuan in the weekly interventions of the government, averaging about CNY 135 million per month, is well applauded in this direction (see AllAfrica.com). From the foregoing analysis in Figure 2, there is increased demand for China's manufactured products. Therefore, more Renminbi would be demanded for the exchange and settlement of goods and services. The growing demand for the Yuan would likely cause it to appreciate over time, and the Yuan-Naira exchange rate would rise above par. The pressure of the rise in the Yuan would fall on Naira-denominated products, thereby increasing the domestic prices of goods imported from China. The exchange rate pass-through mechanism will definitely transmit price increases, macroeconomic instability and supply shocks from China to the Nigerian economy; adequate provisions should be made domestically to checkmate the influence of such foreign shocks. Similarly, the difference in the exchange rate had the dollar been used would be significantly large. Chinese products, if dollar-denominated, would exert much more pressure on the Naira, as the relative scarcity of the greenback stalls the free flow of trade between Nigeria and China. Therefore, the pass-through effect on the Naira would have been much higher under the dollar than under the domestic-currency arrangement.
(b) Repercussion Channel Effect: Increased imports of manufactured products by Nigerians from China imply the export of job opportunities and worsening social conditions. They translate into a higher foreign income stream, because such spending on Chinese goods increases China's national income. Increased domestic income in China will stimulate spending on China's domestic goods; hence, China's net exports rise and, by extension, add to the expansion of her domestic income. However, any change in domestic policy resulting in the depreciation of the domestic currency will have a negative impact on the other country. An increase in exports in turn increases domestic income and employment opportunities, whereas falling imports result in a fall in foreign income and hence employment in Nigeria, resulting in a Beggar-Thy-Neighbour syndrome.
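To make the two channels precise, a stylized textbook formalization is sketched below. These are standard open-economy definitions, not estimates for the Naira-Yuan case; the symbols c, t and m are illustrative parameters, not values from this paper.

```latex
% Exchange rate pass-through (ERPT): elasticity of Naira import prices
% P_m with respect to the Naira/Yuan exchange rate E.
\[
  \mathrm{ERPT} = \frac{\%\Delta P_m}{\%\Delta E}
                = \frac{\Delta P_m / P_m}{\Delta E / E},
\]
% ERPT = 1 means a Yuan appreciation is passed through completely into
% Nigerian import prices; ERPT = 0 means no pass-through.

% Repercussion channel: in a simple open-economy income model, Nigerian
% spending on Chinese manufactures enters China's income Y_C via exports X_C:
\[
  Y_C = C_C + I_C + G_C + (X_C - M_C),
  \qquad
  \Delta Y_C = \frac{1}{\,1 - c(1 - t) + m\,}\,\Delta X_C,
\]
% where c is the marginal propensity to consume, t the tax rate and m the
% marginal propensity to import (second-round foreign repercussions are
% ignored in this simple multiplier).
```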
Concluding Remarks

One take-home so far is that there is a need for a deeper insight into how formally entered trade agreements between the governments of sovereign nations like China and Nigeria have fared. Historically, extant economic literature since the era of Mercantilist thinking has laboured to show that the effective movement of goods and services across national boundaries is made possible through exchange rates measured in the various domestic currencies. The fountainhead for exchanging one unit of a currency for another within an international framework is rooted in the BWIs framework. This brand of thought has since been practiced and experienced in the numerous trade ties Nigeria has had with the People's Republic of China. Like other bilateral agreements, it is assumed, amongst others, that the economic and social interaction between the two nations will blossom. Thus, this paper sheds light on Naira-Yuan Diplomacy as a Pathway for Unlocking Nigeria's Manufacturing Sub-Sector Potentials. The manufacturing sector is now the modern driver of this growth. Amongst the expected channels through which Nigeria will benefit are the duo of the Exchange Rate Pass-through Channel Effect and the Repercussion Channel Effect. This paper conclusively presumes that the exchange rate pass-through mechanism can transmit price increases, macroeconomic instability and supply shocks from China to the Nigerian economy (especially through manufactured products) when adequate provisions are not taken domestically. Furthermore, the Naira-Yuan diplomacy will increase imports from China, thereby increasing China's foreign income, since Nigeria will be spending on Chinese manufactured products and hence adding to China's national income; China's net exports will rise faster and add to the expansion of her domestic income. On the other hand, any change in domestic policy resulting in the depreciation of the domestic currency will have a negative impact on the other country. The depreciation of the domestic currency usually makes domestic goods more competitive relative to foreign goods in trade. This leads to an increase in exports and a decline in imports, improving the country's net exports. An increase in exports in turn increases domestic income and employment opportunities.
The landscape of alternative splicing in cervical squamous cell carcinoma

Background
Alternative splicing (AS) is a key regulatory mechanism in protein synthesis and proteome diversity. In this study, we identified alternative splicing events in four pairs of cervical squamous cell carcinoma (CSCC) and adjacent nontumor tissues using RNA sequencing.

Methods
The transcripts of the four paired samples were thoroughly analyzed by RNA sequencing. SpliceMap software was used to detect the splicing junctions. Kyoto Encyclopedia of Genes and Genomes pathway analysis was conducted to detect the alternative spliced genes-related signal pathways. The alternative spliced genes were validated by reverse transcription-polymerase chain reaction (RT-PCR).

Results
There were 35 common alternative spliced genes in the four CSCC samples; they were novel and CSCC specific. Sixteen pathways were significantly enriched (P<0.05). One novel 5′AS site in the KLHDC7B gene, encoding kelch domain-containing 7B, and an exon-skipping site in the SYCP2 gene, encoding synaptonemal complex 2, were validated by RT-PCR. The KLHDC7B gene with 5′AS was found in 67.5% (27/40) of CSCC samples and was significantly related with cellular differentiation and tumor size. The exon-skipping site of the SYCP2 gene was found in 35.0% (14/40) of CSCC samples and was significantly related with depth of cervical invasion.

Conclusion
The KLHDC7B and the SYCP2 genes with alternative spliced events might be involved in the development and progression of CSCC and could be used as biomarkers in the diagnosis and prognosis of CSCC.

Introduction
Alternative splicing (AS) is a biological process by which different exons are joined together to generate a series of mRNA isoforms from a single primary transcript. Nearly 90% of human multiple-exon genes are alternatively spliced, and AS is a common mechanism for generating both different transcription products and protein diversity in higher eukaryotic cells. 1 The roles of AS in human diseases, especially in cancer, have been widely studied. 2 Tumor formation might be due to the imbalanced expression of either the splicing variants or the incorrect isoforms. 3 Many oncogenes and tumor suppressor genes, such as BRCA1/2 4 and p53, 5 are alternatively spliced in cancer cells. The cancer-specific isoforms induce the phenotypic transformation of cancer cells. 6,7 Transcript sequencing has indicated that the gene mutations associated with cancer-specific AS events could potentially be used as valuable biomarkers in the diagnosis and therapy of cancer. 8 Cervical cancer is the third most commonly diagnosed cancer and the fourth leading cause of cancer deaths among women around the world. 9 Squamous cell carcinoma (CSCC) comprises about 80% of cervical cancers. 10 The etiology of cervical cancer is absolutely related to persistent infection by human papillomavirus (HPV). The carcinogenesis due to HPV depends on the activities of viral oncoproteins E5, E6, and E7, which inhibit various cellular targets, including the tumor suppressor proteins p53, pRb, p21, and p27, as well as disrupting critical cellular processes, including the cell cycle, apoptosis, and malignant transformation of cervical basal cells. 12 In high-risk HPV types, transcription is initiated at the early promoter located in the E6 open reading frame (ORF) and the late promoter in the E7 ORF of HPV. All viral genes are transcribed to many polycistronic RNAs with two or more ORFs, which then undergo further processing, including AS and polyadenylation. 13
For HPV16, at least 13 different mRNAs with the capacity to encode capsid proteins are produced by AS. 14 In this study, we detected the CSCC-specific AS events by comparing the global transcriptional changes of CSCC to those of the adjacent nontumor tissues (ATN) through RNA sequencing. This study aims to advance our understanding of CSCC.

Tissue specimens
Forty paired fresh-frozen tissue samples (CSCC and ATN) were collected from patients receiving radical hysterectomy for CSCC during the period of January 2012 to August 2013 (Peking Union Medical College Hospital, People's Republic of China). Diagnosis of all cases was histologically confirmed by two independent pathologists, and all tumor tissues were assessed by hematoxylin-eosin (HE) staining; only those tissues with a percentage of tumor cells of more than 90% were used. Four paired samples were randomly selected for RNA sequencing from among these cases. Informed consent was obtained from each patient. The procedures were approved by the ethics review committee of Peking Union Medical College Hospital and are in accordance with the Helsinki Declaration of 1975.

Raw read filtering
The complementary DNA (cDNA) library of the four paired samples was constructed and sequenced. The raw RNA-sequencing data were filtered according to the following criteria: 1) reads containing sequencing adaptors were removed; 2) nucleotides with a quality score <20 were removed; 3) reads with more than 8% ambiguous (N) bases were removed. All subsequent analyses were based on clean reads. 15

Detection of AS
SpliceMap was used to detect splicing junctions and different types of AS events, including exon skipping, mutually exclusive exons, intron retention, 5′AS, and 3′AS in CSCC and ATN tissues. 16 The read was separated into segments. Each segment was mapped to the human genome with Bowtie software. 17 Then all of the segments were pieced together to determine the locations of exons and possible junctions. We filtered the splice junctions originally detected according to two criteria: quality of the alignment and coverage of the splice junction. The AS events presented only in the CSCC or ATN samples were detected.

Kyoto Encyclopedia of Genes and Genomes pathway analysis
The unique lists of CSCC-specific AS genes were submitted to the Web-based functional annotation tool known as the Database for Annotation, Visualization and Integrated Discovery v6.7. 18 The false discovery rate (FDR) was set at 5%, and the Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway analysis was conducted for functional annotation categories. Correlation between AS genes and the clinicopathologic characteristics of CSCC patients was tested by the chi-square test. Fisher's exact test was used when the theoretical frequency was <5.0. Statistical significance was assumed at P<0.05. Statistical analysis was performed using the SPSS 13.0 statistical software.

Detection of CSCC-specific AS genes
SpliceMap was used to detect splice junctions. We compared the CSCC transcripts and ATN transcripts with the reference genome (Table 1). There were 17,462, 25,101, 9,034, and 25,925 newly detected AS events in the CSCC tissues and 17,161, 25,101, 19,901, and 13,279 newly detected AS events in the ATN tissues from the four paired samples. We screened out the AS events with more than one mapped read; thus, 307, 555, 86, and 603 specific AS events were present in the four CSCC tissues, respectively. There were 35 common AS genes among the four CSCC tissues.
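To make the read-filtering criteria listed above concrete, here is a minimal Python sketch, assuming reads arrive as (sequence, per-base Phred quality) pairs and that adaptor detection is a simple substring match; the adaptor sequence and all names are illustrative, not taken from the original pipeline:

```python
# Hypothetical adaptor fragment used for illustration only.
ADAPTORS = ("AGATCGGAAGAGC",)

def clean_read(seq, quals):
    """Apply the three stated filters; return the cleaned sequence or None.
    1) drop reads containing a sequencing adaptor;
    2) remove nucleotides with quality score < 20;
    3) drop reads with more than 8% ambiguous (N) bases."""
    if any(adaptor in seq for adaptor in ADAPTORS):
        return None
    trimmed = "".join(base for base, q in zip(seq, quals) if q >= 20)
    if not trimmed or trimmed.upper().count("N") / len(trimmed) > 0.08:
        return None
    return trimmed

# Example: the first read passes; the second is dropped for its N content.
reads = [("ACGTACGTAC", [30] * 10), ("ANNNACGTAC", [10] + [30] * 9)]
clean = [r for r in (clean_read(s, q) for s, q in reads) if r is not None]
```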
One novel junction with a 5′AS site in the KLHDC7B gene was supported by 1, 3, 18, and 22 reads in the four CSCC tissues, and an exon-skipping site in the SYCP2 gene was supported by 2, 3, 12, and 13 reads in the four CSCC tissues (Table S1).

KEGG pathway analysis
KEGG pathway analysis was used to identify AS gene-related significantly enriched pathways. In total, 16 pathways were significantly enriched (P<0.05), including metabolic pathways, endocytosis, and the Ras signaling pathway, and all of these pathways were specific for CSCC (Table 2).

Validation of AS genes
KLHDC7B and SYCP2 genes were chosen as the candidate genes according to the number of mapped reads in the four pairs of samples. One novel junction with a 5′AS site in the KLHDC7B gene (Figure 1) and an exon-skipping site in the SYCP2 gene were found (Figure 2). The AS events in the KLHDC7B (Figure 3A) and the SYCP2 genes (Figure 3B) were CSCC specific and were validated by RT-PCR. In total, the KLHDC7B gene with 5′AS was found in 67.5% (27/40) of CSCC samples and was positively related with cellular differentiation and tumor size (Table 3). The exon-skipping site of the SYCP2 gene was found in 35.0% (14/40) of CSCC samples and was positively related with the depth of cervical invasion (Table 3).

Discussion
AS is crucial in normal development programs, and its dysregulation is related to tumorigenesis. 19,20 AS events are involved in the cell cycle, metabolism, tumor suppression, and various cell signaling pathways. 21 The cancer-specific AS events can lead to the activation of oncogenes and cancer-related pathways, as well as the inactivation of tumor suppressors. 22 In this study, a total of 35 novel CSCC-specific AS genes and 16 significant pathways were identified. Metabolic pathways, endocytosis, and the Ras signaling pathway were the three most important pathways detected by KEGG pathway analysis. It has been reported that metabolic alterations contribute to the development of cancer. 23 The major metabolic pathways, such as glycolysis and oxidative phosphorylation, are altered in cancer cells to meet the bioenergetic and biosynthetic demands associated with tumor growth. 24,25 The process of endocytosis and endocytic proteins are involved in the regulation of the cell cycle, mitosis, and apoptosis in cancer cells. 26,27 The AS isoforms of Ras were able to activate the MAP kinase signaling pathway and to induce tumor formation in nude mice. 28 The Ras/Raf/MAPK cascade can be activated by the epidermal growth factor receptor (EGFR/ErbB1), a member of the ErbB receptor tyrosine kinase family, which is frequently mutated and overexpressed in different human cancers, including glioma, non-small cell lung carcinoma, ovarian carcinoma, and prostate carcinoma. 29 This research demonstrated that the AS genes in these signal pathways might participate in the progress of CSCC. On the basis of the RNA-sequencing analysis, we confirmed the AS events in the KLHDC7B and SYCP2 genes in CSCC tissues by RT-PCR for the first time. The KLHDC7B mRNA is increased under several biological conditions, mainly due to infection. KLHDC7B expression was increased during acute HCV infection and was induced by interferon gamma, TNF-α, and IL-4. The KLHDC7B gene regulates and facilitates HCV replication in hepatocytes. 30,31
In the study of Kim et al, the KLHDC7B gene containing a kelch domain was identified as a candidate novel epigenetic marker that was hypermethylated and upregulated in breast cancer; the methylation level of the 14 CpG sites at the promoter region of the gene was higher in cancer tissues and cultured breast cell lines. 32 KLHDC7B was upregulated in vulvar intraepithelial neoplasia due to HPV infection. 33 We speculated that the spliced KLHDC7B gene might bind to HPV oncoproteins to promote CSCC progression. SYCP2 encodes a major component of the synaptonemal complex, a proteinaceous structure that links homologous chromosomes during the prophase of meiosis; the encoded protein may bind DNA at scaffold attachment regions. It has been reported that SYCP2 is upregulated in Caski and SiHa cells and is associated with invasive cervical cancer. 34,35 The expression of the KLHDC7B gene with 5′AS was positively related with cellular differentiation and tumor size. The SYCP2 gene with exon skipping was positively related with the depth of cervical invasion. The AS events in the KLHDC7B and SYCP2 genes might generate new transcripts or regulatory proteins to promote the progress of CSCC, and the KLHDC7B and SYCP2 genes with the novel AS events could be used as potential biomarkers in the diagnosis and therapy of CSCC patients. However, the mechanisms of action of the two genes with AS events in the carcinogenesis of CSCC need further investigation.
Understanding the impact of mobility on COVID-19 spread: A hybrid gravity-metapopulation model of COVID-19

The outbreak of the severe acute respiratory syndrome coronavirus 2 started in Wuhan, China, towards the end of 2019 and spread worldwide. The rapid spread of the disease can be attributed to many factors including its high infectiousness and the high rate of human mobility around the world. Although travel/movement restrictions and other non-pharmaceutical interventions aimed at controlling the disease spread were put in place during the early stages of the pandemic, these interventions did not stop COVID-19 spread. To better understand the impact of human mobility on the spread of COVID-19 between regions, we propose a hybrid gravity-metapopulation model of COVID-19. Our modeling framework has the flexibility of determining mobility between regions based on the distances between the regions or using data from mobile devices. In addition, our model explicitly incorporates time-dependent human mobility into the disease transmission rate, and has the potential to incorporate other factors that affect disease transmission such as facemasks, physical distancing, contact rates, etc. An important feature of this modeling framework is its ability to independently assess the contribution of each factor to disease transmission. Using a Bayesian hierarchical modeling framework, we calibrate our model to the weekly reported cases of COVID-19 in thirteen local health areas in Metro Vancouver, British Columbia (BC), Canada, from July 2020 to January 2021. We consider two main scenarios in our model calibration: using a fixed distance matrix and time-dependent weekly mobility matrices. We found that the distance matrix provides a better fit to the data, whilst the mobility matrices have the ability to explain the variance in transmission between regions. This result shows that the mobility data provides more information in terms of disease transmission than the distances between the regions.
Introduction
The pandemic of the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), which started in the city of Wuhan, Hubei province, China [1], has since spread all over the world with over 585 million reported cases and 6.4 million reported deaths, as of August 2022 [2]. In human populations, the virus can be transmitted through the inhalation of infectious droplets in aerosols, exposure to infectious respiratory fluids, coughing, sneezing, and having physical contact with an infected individual. It can also be transmitted indirectly when a susceptible individual comes in contact with a contaminated surface, such as door handles or other commonly shared surfaces or objects [3][4][5]. SARS-CoV-2 is the causal agent for the coronavirus disease 2019 (COVID-19), and is estimated to be more infectious compared to other coronaviruses such as the severe acute respiratory syndrome (SARS) and the Middle East respiratory syndrome coronavirus (MERS) [6,7]. The COVID-19 disease was declared a public health emergency by the World Health Organization (WHO) on January 20, 2020 [8] and a pandemic on March 11, 2020 [9]. Due to the fast spread of COVID-19, during the early stages of the pandemic, governments around the world implemented non-pharmaceutical interventions (NPIs) such as movement/travel restrictions, wearing of facemasks, closure of schools and businesses, physical distancing, etc. [10][11][12][13][14], to limit the spread of the disease. Although the implementation of these NPIs helped in slowing down the spread of COVID-19, the disease still continued to spread under these restrictions. In addition, these NPIs have significant social and economic effects around the world [15][16][17], and could not be put in place for too long. The development of safe and effective COVID-19 vaccines brought some relief, and vaccines were introduced to replace stringent NPIs [18,19]. The first set of COVID-19 vaccines became available towards the end of 2020 [20]. These vaccines provide significant protection against the earlier strains of the SARS-CoV-2 virus [21][22][23]. However, the emergence of highly infectious mutant strains such as the Omicron variant led to the continuous spread of the disease. The first case of COVID-19 was reported in Wuhan, China, in December 2019 [24]. On the 12th of January 2020, the first case of the disease outside of China was confirmed in Thailand [25]. By January 30, 2020, COVID-19 had spread to 18 countries outside of China with a total of 7,818 confirmed cases worldwide [25]. The first confirmed case of COVID-19 in Africa was reported on February 14, 2020 [26,27], in North America, January 21, 2020 [28], and in Europe, January 24, 2020 [29]. COVID-19 has spread more rapidly and widely around the world than previous outbreaks of coronaviruses. This spread can be attributed to globalization, settlement and population characteristics, and high human mobility [30]. Several studies have looked at the effect of human mobility on the spread of COVID-19 [31][32][33].
In Kraemer et al. [31], real-time human mobility data was used to investigate the role of case importation in the spread of COVID-19 across cities in China. The impact of the human mobility network on the onset of COVID-19 in 203 countries was studied in [32]. They used exponential random graph models to analyze the country-to-country spread of the disease. Their study suggested that migration and tourism inflow contributed to COVID-19 case importation, and that a mixture of human mobility and geographical factors contributes to the global transmission of COVID-19 from one country to another. Human mobility data collected via mobile devices such as cell phones, smartwatches, e-readers, tablets, etc., have also been used to study the spread of COVID-19 [34][35][36][37]. In [34], county-level cell phone mobility data collected over a period of 1 year in the US was used to study the spatio-temporal variation in the relationship between COVID-19 infection and mobility. They found that in spring 2020, sharp drops in mobility often coincided with decreases in COVID-19 cases in many of the populous counties. Mathematical models have been used to study the relationship between the spatio-temporal spread of COVID-19 and human mobility [37][38][39][40][41][42][43][44][45]. A city-based epidemic and mobility model, together with multi-agent network technology and big data on population migration, was used to simulate the spatio-temporal spread of COVID-19 in China [45]. In [46], a stochastic, data-driven metapopulation model was used to study the initial wave of COVID-19 in Belgium, and also to study different re-opening strategies. Their model incorporates the mixing and mobility of different age groups in Belgium. Another stochastic metapopulation model was used to study the spread of COVID-19 in Brazil [43]. This model assumes that epidemics start in highly populated central regions and propagate to the countrysides. For many states, they found strong correlations between the delay in epidemic outbreaks in the countrysides and their distance from central cities. In [47], an SEIR country-wide metapopulation model was used to study the spread of COVID-19 in England and Wales. The model was used to predict the COVID-19 epidemic peak in England and Wales, and also to study the effect of different non-pharmaceutical intervention strategies on the predicted epidemic peaks. Similarly, in [48] a stochastic SIR model was applied to describe the spatio-temporal spread of COVID-19 across 33 provincial regions in China and to also evaluate the effectiveness of various local and national intervention strategies. Their model incorporates an outflow mobility index for all the regions and the proportion of travelers between regions. More discussions on human mobility and COVID-19 transmission can be found in the systematic review article [49]. The relative contribution of mobility data to the observed variance in the COVID-19 transmission rates between regions still remains an unexplored problem. We develop a hybrid gravity-metapopulation modeling framework for studying the spread of COVID-19 within and between different regions. An important feature of our framework is the ability to determine human mobility based on the distances between the regions or through empirical data such as those collected through mobile devices.
In addition, our framework allows for the explicit incorporation of factors that affect disease transmission, such as facemasks, physical distancing, contact rates, etc., into a time-dependent disease transmission rate, and the assessment of the contribution of each of these factors to actual disease transmission. As an illustration, we use a Bayesian hierarchical modeling framework to calibrate our model to the weekly reported cases of COVID-19 in the thirteen local health areas (LHAs) of Fraser health authority (Fraser Health), British Columbia (BC), Canada, from July 2020 to January 2021. The study area comprises a population of 1.9 million in the eastern sections of the Greater Vancouver area. We estimate region-specific scaling parameters for computing baseline disease transmission rates for each region, and a parameter for quantifying the contribution of mobility to disease transmission. In addition, we estimate a time-dependent piece-wise constant scaling parameter to account for the cumulative effect of the remaining factors that affect disease dynamics, which are not explicitly included in our model. We consider two main model structures in our example, which are determined by the mobility matrices used: one with a distance matrix (computed using the distances between the regions, based on the population-weighted centroid) and another with time-dependent mobility matrices computed from mobile device data. The results from these two scenarios are used to test the hypothesis of whether the time-dependent mobility matrices, computed from mobile device counts, provide more information about human mobility, with respect to disease transmission between the regions, than the distances between the regions.

Mathematical model
We develop a hybrid gravity-metapopulation model to study the dynamics of COVID-19 within and between regions. The model stratifies the population of each region into six compartments: susceptible (S), exposed (E), pre-symptomatic infectious (P), symptomatic infectious (I_1 and I_2), and recovered (R). Individuals in the pre-symptomatic infectious compartment are infectious (can transmit the disease) but do not show symptoms yet. Similar to [50,51], we divided the infectious compartment into two classes so that the recovery time follows a Gamma distribution rather than an exponential distribution. This way, a symptomatic infectious individual spends the first half of their infectious period in I_1 and the other half in I_2. We assume that there is no re-infection in our model due to the relatively low number of infections across the study period, as the size of the susceptible population is far greater than the size of the recovered population. In addition, we assume that all the individuals infected during our study period will not lose their COVID-19 immunity and be reinfected within this period [52]. Furthermore, asymptomatic cases were not considered, as testing guidelines during the study period were symptom-based. A schematic diagram of the model, illustrated for four (4) regions, is shown in Fig 1, where the gray circles on the left represent the regions, while the black arrows show the interactions and movements of individuals between the regions. On the right, we have an illustration of the population dynamics in each of the regions, where the subscript j represents the j-th region. The black arrows here show the transition of individuals through the different stages of COVID-19 at the rates indicated beside the arrows. The red dashed arrows indicate disease transmission.
Observe that there is a red dashed arrow extending from each of the remaining three regions into region j; these arrows account for the contributions of infectious individuals in the three regions to disease transmission in the j-th region. The ordinary differential equations (ODEs) for the model are given in (1) (see Fig 1), where β_j ≡ β_j(t) is the time-dependent disease transmission rate for region j. We aim to define the transmission rate (β_j) as a function of the different factors that affect disease transmission. This way, we would be able to evaluate the contribution of each of these factors to the overall disease transmission. Therefore, we define β_j(t) as

β_j(t) = exp(c_0j + c_1 m_j(t) + g(t)).    (2)

(Fig 1 caption: Model compartments are defined as follows: susceptible (S_j); exposed (E_j); pre-symptomatic infectious (P_j); symptomatic infectious (I_1j and I_2j); and recovered (R_j) for region j. Our model assumes that there are no re-infections. The black arrows show the movement of individuals from one region to another (left) and the transition of individuals through the different stages of COVID-19 at the rates indicated beside the arrows (right). The red dashed arrows indicate disease transmission; see (1) for more details.)

Here, c_0j is the scaling parameter for the baseline disease transmission rate for region j, c_1 is the scaling parameter used to remove biases from the time-series mobility data, and g(t) is a time-dependent piece-wise parameter used to account for factors that affect disease transmission other than human mobility (e.g. facemasks, social distancing, contact rates, etc.), which are not explicitly incorporated into the model. Movement within the j-th region is captured by time-series mobility data represented by m_j(t). These data are used as a proxy for the time-dependent contact rate in the region. We have defined our disease transmission rate, β_j(t), as an exponential function to ensure that its value remains positive, given the way the time-series mobility data and the function g(t) are incorporated into β_j(t). This definition also ensures that the estimated model parameters are identifiable (see the Bayesian inference section for more details). Based on the definition of β_j(t) in (2), exp(c_0j) is the baseline disease transmission rate for region j, while exp(c_1 m_j(t)) incorporates the effect of human mobility within the region into the transmission rate. Lastly, exp(g(t)) accounts for the effect on disease spread of other factors that affect disease transmission, which are not explicitly incorporated into the model. Although the formulation in (2) explicitly incorporates only human mobility into the disease transmission rate, it can be extended to include other factors that affect disease transmission such as facemasks, physical distancing, etc. See more details in the Discussion section. The parameter C_j ≡ C_j(t) in (1) is used to incorporate infectious interactions within the j-th region, and their contribution to disease transmission in the region. In terms of a homogeneous single-population model, this parameter would represent the probability of making an infectious contact in the population. Here, C_j is defined as

C_j = (1 − θ)(P_j + I_1j + I_2j)/N_j + (θ/N̂_j) Σ_{i=1}^{M} π_ji (P_i + I_1i + I_2i),    (3)

where 0 ≤ θ ≤ 1 is a parameter used to measure the effective contribution of human mobility to disease transmission in all the regions.
Here, θ = 0 implies that there are no infectious contacts due to mobility as defined by the intra-regional mobility matrix (π), and the regions are essentially uncoupled from each other. On the other hand, θ = 1 means that all the infectious contacts in the system are due to human mobility. The parameter N̂_j is the adjusted population size for region j, which incorporates the changes in the population size of the region due to movements in and out of the region. We define N̂_j by

N̂_j = (1 − θ) N_j + θ Σ_{i=1}^{M} π_ji N_i,    (4)

where M is the total number of regions under consideration and N_j is the baseline population size of the j-th region. The first term in (3), given by (1 − θ)(P_j + I_1j + I_2j)/N_j, accounts for all the infectious contacts made by the residents of region j who are not moving within the region, while the second term, (θ/N̂_j) Σ_{i=1}^{M} π_ji (P_i + I_1i + I_2i), accounts for all the infectious contacts made in region j by the residents of the region who are moving within the region and the visitors from other regions. In (3) and (4), π_ji is the probability that an individual who migrated into region j originated from region i, given that he/she is from one of the other regions under consideration. We compute this probability using two different approaches. The first approach uses the distances between the regions. In this case, π_ji is given by (5), where d_ij ≡ d_ji is the distance from region i to region j, k ∈ R+, and M is the total number of regions considered. The second approach used to compute the probability π_ji involves using mobile device data. The model parameters, their descriptions, and values are provided in Table 1. The estimated parameters are presented in the Results section. As an illustration of the concept, we consider the thirteen (13) local health areas (LHAs) of Fraser health authority, British Columbia (BC), Canada. These regions include the communities of Abbotsford, Agassiz/Harrison, Burnaby, Chilliwack, Delta, Hope, Langley, Mission, Maple Ridge/Pitt Meadows, New Westminster, South Surrey/White Rock, Surrey and Tri-Cities. Fraser health authority (Fraser Health) is the largest of the five regional health areas in BC, with 12 acute care hospitals, providing health care to over 1.9 million people [65]. It has a width of 150 km (see Table C in S1 Text). We use a Bayesian hierarchical modeling framework to calibrate our model to the weekly reported cases of COVID-19 in these 13 LHAs, from July 2020 to January 2021. From the model calibration, we estimate the parameters c_0j, c_1 and g(t), which are used to construct and study the time-dependent disease transmission rate for each region, and to study the dynamics of the time-dependent piece-wise parameter g(t). We also estimate the parameter θ, used to quantify the effect of mobility, both within and between the regions, on disease transmission in the regions.
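To make (2) and (3) concrete, the following is a minimal Python sketch of the transmission rate and the infectious-contact term; it assumes the compartments are held as NumPy arrays over the M regions and that pi is the row-stochastic matrix of probabilities π_ji, with parameter names mirroring the text and all values purely illustrative:

```python
import numpy as np

def beta(c0, c1, m_t, g_t):
    """Transmission rate (2): beta_j(t) = exp(c0_j + c1 * m_j(t) + g(t))."""
    return np.exp(c0 + c1 * m_t + g_t)

def contact_term(theta, pi, P, I1, I2, N, N_hat):
    """Infectious-contact term (3) for all regions at once.
    First term: residents of region j not moving within the region.
    Second term: movers and visitors, weighted by origin probabilities pi[j, i]."""
    infectious = P + I1 + I2
    local = (1.0 - theta) * infectious / N
    mixed = theta * (pi @ infectious) / N_hat
    return local + mixed

# Illustrative values for M = 3 regions (not from the paper).
theta = 0.5
pi = np.array([[0.8, 0.1, 0.1], [0.2, 0.7, 0.1], [0.1, 0.2, 0.7]])
N = np.array([100000.0, 50000.0, 20000.0])
N_hat = (1 - theta) * N + theta * (pi @ N)  # adjusted populations, as in (4)
C = contact_term(theta, pi, np.array([10.0, 5.0, 1.0]),
                 np.array([20.0, 8.0, 2.0]), np.array([20.0, 8.0, 2.0]), N, N_hat)
```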
We quantify mobility between the regions using two approaches. The first approach uses the physical distances between the regions, based on population weighted centroid (left panel of Fig 3) and the formula in (5) to calculate the probability that an individual moving in region j, who came from one of the 13 regions, originated from region i (π ji ). The premise of using physical distance between regions is based on the concept of geographic distance decay, where spatial and social interactions decrease as the distance between regions increases [59]. The distances between our regions of interest are given in the left panel of Fig 3, while probabilities π ji computed from these distances are presented in the right panel. The diagonal entries of the probability matrix (π) represents the probability that an individual moving in a region is a resident of that region. It is important to note that the probability matrix is not symmetric, even though the distance matrix (left panel of Fig 3) is symmetric. In addition, each row of the probability matrix sums to 1. The second approach used to construct PLOS COMPUTATIONAL BIOLOGY the probability matrix (π) is based on mobile device counts and uses Telus mobility (TELUS) data. TELUS is a Canadian national telecommunications company that has network coverage in 99% of the populated areas of Canada. TELUS Insights provides anonymized geo-intelligence data, which reflects population location and mass movement patterns based on information about locations and population movement of TELUS mobile device users [66]. These data have helped answer a range of questions around location and public mobility patterns within Canada, including in infrastructure planning, health services, roads, and transit routes. As TELUS subscribers use their mobile devices, they connect to various cellular towers for telecommunication services. These connections are used to determine the users' locations based on the nearest tower that relays signals to their devices. Every Telus user with their mobile network active would be included in the TELUS data, except the subscriber opt-out [67]. This network data provides insights into movement patterns and trends across Canada. To provide a layer of privacy, all the mobility data provided by TELUS are de-identified, aggregated into large data pools, rounded-up to the nearest 10 counts and all results are extrapolated to represent the entire population of a given region. This ensures that the data cannot be traced back to individual TELUS subscribers. The results of the TELUS application programming interface (API) implementation, which provide the numbers of mobile devices moving within and between geographical locations of interest and the neighbourhood that a mobile device resides in, depend on cellular tower locations at the time of the analysis. We generate the mobility data for each region using a one-day bucket size and 120-minute minimum dwell time. We filtered for "non-residents", "moving residents" and "residents", which represent, respectively, the daily number of mobile devices residing in an LHA and spending at least two hours in another LHA (movement between regions), the daily number of mobile devices residing in an LHA and spending over two hours outside their census track within the LHA (movement within a region), and the total number of mobile devices residing in an LHA. To construct the weekly mobility matrices, we consider the "non-residents" and "moving residents" data. 
For each region and for a specified time interval (weekly), we PLOS COMPUTATIONAL BIOLOGY compute the number of mobile devices from the other 12 regions that visited the region and stayed there for at least 2 hours during the visit. This gives us the mobile device count for movement into the region (off-diagonal entries). For movement within the regions (diagonal entries), we used the "moving residents" data, from which we computed the number of mobile devices registered to a region and moving within the region. These information are used to construct a mobility matrix of device counts within the specified time interval for each region. We normalize each row of the matrix with the total number of devices in the row. This way, the i th element of the j th row represents the fraction of mobile devices that came into the j th region (from the 13 regions) that originated from the i th region. These fractions can also be interpreted as the probability that an individual moving within the j th region (whom originated from one of the 13 regions) is from the i th region (π ji ). Using this approach we compute the probability/mobility matrices for each week from July 1, 2020 to January 27, 2021. The computed matrices for week 1 (July 1-7, 2020) and 30 (January 21-27, 2021) are shown in Fig 4, while the matrices for the remaining weeks are presented in Figs (B-F) in S1 Text. The distance matrix (right panel of Fig 3) and the constructed mobility matrices are used to describe the interaction between individuals from different regions. We considered two main scenarios in our Bayesian inference based on the distance and mobility matrices and investigated whether the mobility data is more informative, in terms of disease transmission than the distances between the regions. We also used the Telus mobility data to compute the weekly mobility rate for the movements within each region. To compute these rates, we sum the daily device count in each region for "non-residents" and "moving resident", and divide it by the sum of the "residents" and "non-residents" device count for our entire study period. This gives us the proportion of mobile devices moving in each region with respect to the total number of devices in the region during our entire study period. For each week in our study period, we sum the computed proportion of mobile devices and divide by 7 to get the weekly average proportion of mobile devices moving in each of the regions, as shown in Fig 5. These mobility rates are used as PLOS COMPUTATIONAL BIOLOGY proxy for the contact rates in the regions and are represented by m j (t) in the disease transmission rate (β j (t)) defined in (2). We observe from Fig 5 that there is a sharp decline in mobility rate around the first week of September 2020 in most of the regions. Similarly, there is another decline in mobility rate around the first week on November. This decline is associated with the implementation of public health measures in BC. We calibrate our model to the weekly reported cases of COVID-19 in the thirteen local health areas of Fraser Health, BC, obtained from the British Columbia Centre for Disease Control (BCCDC). We extracted these data from a line list generated by BCCDC Public Health Reporting Data Warehouse (PHRDW), based on symptom onset date or reported date where symptoms onset date is not available. 
The collected case data spans the period from July 2020 to January 2021, inclusive, and was incorporated into the model likelihood based on the computed disease incidence as shown in (6). The collected weekly reported cases of COVID-19 for the 13 regions are shown in Fig A in S1 Text. Similar to [50], our model incidence is computed as the number of individuals in the pre-symptomatic population (P), transitioning to the symptomatic infectious compartment (I 1 ). Bayesian inference Our hybrid gravity-metapopulation model (1) is fitted to the COVID-19 cases in all the thirteen regions using a Bayesian hierarchical modeling framework. Bayesian inference is a statistical technique for data analysis and parameter estimation, which is based on the Bayes' theorem. It has been applied to problem in many fields ranging from biology, physics, sport, epidemiology, ecology, and engineering, among others [68][69][70]. A Bayesian hierarchical modeling framework is one where the prior distribution of some of the model parameters depend on other parameters to be estimated. It allows the incorporation and estimation of model parameters at individual and population levels (see [71,72] for more information on Bayesian hierarchical models). We implement our Bayesian inference model with the RStan package in R version 3.6.3 [73]. Stan is a free and open-source probabilistic programming language for statistical inference implemented in C++. It performs Bayesian inference on arbitrary user-defined models through Markov Chain Monte Carlo (MCMC), and can be invoked through other programming languages such as Python, Matlab, Julia and R. [74]. RStan is the R interface to Stan, which provides full Bayesian inference via the No-U-Turn sampler (NUTS), a variant of Hamiltonian Monte Carlo (HMC), approximate Bayesian inference via automatic differentiation variational inference (ADVI), and penalized maximum likelihood estimation via L-BFGS optimization [73]. In our Bayesian inference model, we construct the likelihood for the j th region as cases j ðtÞ � NegBinðincidence j ðtÞÞ; cÞ; ð6Þ where NegBin(�) is the negative binomial distribution, cases j (t) and incidence j (t) are the weekly reported cases of COVID-19 and the incidence computed from the model (1), respectively, for region j. Here, ψ is the over-dispersion parameter. Using Bayesian inference framework implemented in Rstan gives us the flexibility to incorporate our prior knowledge into the model parameters and the ability to evaluate probabilistic statements of the data based on the model. In addition, this framework allows us to incorporate hierarchical structure into the model parameters, with the benefit of understanding variations in the parameters at individual and population levels. It enables us to construct the posterior distribution for the population mean and variance of the model parameters and those of the individual parameters for each region, which are conditioned on the population mean and variance. We have used a negativebinomial distribution to model the weekly reported cases of COVID-19 because of its effectiveness and convenience in modeling nonnegative over-dispersed data. Uninformative priors were implemented in the Bayesian inference framework. We incorporate the time-series mobility data (Fig 5) into our modeling framework using an exponential scaling approach for the disease transmission rate. 
The disease transmission rate is given by (2), where c_0j ∼ N+(0, 1) is the scaling parameter for the baseline transmission rate for the j-th region (exp(c_0j) is the baseline transmission rate) and c_1 ∼ N+(0, 1) is the scaling parameter used to remove biases from the time-series mobility data (m_j(t)). Here, exp(c_1 m_j(t)) models the time-varying effect of m_j(t) on the disease transmission rate (β_j) for region j. The time-dependent piece-wise constant parameter g(t) is used to account for other factors that affect disease transmission, which are not explicitly accounted for in the model. This parameter is estimated every four weeks (except for the last interval, which has 2 weeks). We also estimated the total prevalence of COVID-19 in all the 13 regions at the beginning of our study period. Similar to [50,54], when building our Bayesian inference modeling framework, we simulated the incidence for our model (1) using known parameter values and then tested the ability of our framework to recover those values. We inspect the resulting posterior distributions for biases and their coverage of the true parameters. Throughout this article, we used the Variational Bayes (VB) method with the mean-field algorithm implemented in RStan [75,76] for our inferences, from which we estimate the total initial prevalence in all the 13 regions and a parameter, θ, used to quantify the effective contribution of mobility to disease transmission in the regions (see the formulations in (3) and (4)). We estimate a fixed value of the parameter g(t) for every four weeks, starting from the beginning of our study period, and for the last two weeks, thereby making it a time-dependent, piece-wise parameter. To ensure that the estimated parameters are identifiable and that the estimated values of g(t) from the second interval onward are relative to that of the first interval, we set g(t) = 0 for the first four weeks (first sub-interval). In addition, we rescaled the time-series mobility data using the first week's mobility rate as a reference for the remaining rates. This was done by subtracting the mobility rate for the first week from those of the subsequent weeks. This way, the rescaled mobility rate for the first week is 0, while those for the remaining weeks are centered around 0. We used a Bayesian hierarchical modeling framework to estimate the parameters c_0j and g(t). We construct their population posterior distributions, which are used as priors for estimating the region-specific c_0j for j = 1, …, 13, and the interval-specific g(t), respectively. The remaining parameters of the model are fixed and are as presented in Table 1. We consider two main scenarios in our model calibration: one with a fixed distance matrix (computed from the distances between the regions, see Fig 3) and another with weekly mobility matrices (computed from TELUS mobility data, see Fig 4 and Figs (B-F) in S1 Text). These two matrices are used to quantify mobility between the 13 regions. Performing inference based on these two scenarios enabled us to understand the effect of mobility on the posterior predictive distributions of the model and to determine which of the two mobility quantifiers best recreates the observed case data. It would also help us to identify which of the two approaches provides more information on human mobility in terms of disease transmission.
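A hedged sketch of the observation model (6) and the half-normal priors follows, assuming the common mean/overdispersion negative-binomial parameterization with variance μ + μ²/ψ mapped to SciPy's (n, p) form; the paper's actual implementation is in Stan, so this is only an illustration:

```python
import numpy as np
from scipy.stats import nbinom, halfnorm

def negbin_loglik(cases, incidence, psi):
    """Log-likelihood of weekly cases given model incidence (mean) and
    over-dispersion psi, using mu = incidence and var = mu + mu**2 / psi."""
    mu = np.asarray(incidence, dtype=float)
    n, p = psi, psi / (psi + mu)
    return nbinom.logpmf(np.asarray(cases), n, p).sum()

# Half-normal N+(0, 1) priors on the scaling parameters, as in the text.
c0_prior = halfnorm(scale=1.0)   # c_0j ~ N+(0, 1)
c1_prior = halfnorm(scale=1.0)   # c_1  ~ N+(0, 1)

# Hypothetical weekly cases vs. model incidence for one region.
ll = negbin_loglik([12, 30, 25], [10.0, 28.0, 27.0], psi=5.0)
```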
The two scenarios were ranked by comparing their leave-one-out predictions and standard errors, computed using the leave-one-out cross-validation (LOO) method [77][78][79], and using the widely applicable information criterion (WAIC) method [80,81]. We compared the Variational Bayes (VB) method to the adaptive Hamiltonian Monte Carlo method with No-U-Turn sampling. The results from both methods were found to produce comparable estimates of the posterior distribution, with a significant reduction in total computation time when VB is used [82]. For the case of a fixed distance matrix, the mean and/or median ELBO usually converges in 5,000-6,000 iterations of the stochastic gradient ascent algorithm, while it converges in 11,000-12,000 iterations for the weekly mobility matrices case (see [76,83,84] for more information about the ELBO in the variational Bayes method).

Results
We considered two main scenarios when fitting our model to the weekly reported cases of COVID-19 (see Methods). Results for the two scenarios, for selected regions (Agassiz/Harrison, New Westminster, Maple Ridge/Pitt Meadows and Surrey), are presented in Fig 6. We selected these regions based on their population sizes and geographical locations, to show the diversity in reported cases and population sizes in the regions considered, and the model's ability to predict cases irrespective of these factors. The results for the remaining regions are presented in Figs G and H in S1 Text. For each model scenario, we present the posterior predictions of the weekly cases of COVID-19 in each region (columns 1 & 3 of Fig 6). We compute the time-dependent disease transmission rate, β_j(t), using the estimated parameters and the formula in (2). These rates are presented in blue for the fixed distance matrix scenario (column 2 of Fig 6) and in gold for the weekly mobility matrices scenario (column 4 of Fig 6), together with the contribution of mobility to the transmission rate, exp(c_1 m_j(t)) (green), with 50% credible interval (CrI) (darker bands) and 90% CrI (lighter bands). We observe from these results that our model is able to capture the trends and reported cases of COVID-19 in each of the regions with a high degree of accuracy for both model scenarios. In addition, we see that there are significant changes in the computed disease transmission rate over time, with a similar trend for all the regions. Even though there are not many changes in the time-series mobility data, its effect on the disease transmission rate for each region is still noticeable. The mean estimate for the initial total prevalence in the 13 regions is 47.61 (90% CrI: 44.82-50.31) for the distance matrix scenario and 50.19 (90% CrI: 47.37-53.04) for the weekly mobility matrix scenario. The mean estimate of the parameter used to quantify the effect of mobility on disease transmission in the regions (θ) is 0.53 (90% CrI: 0.44-0.60) for the distance matrix scenario and 0.90 (90% CrI: 0.72-0.98) for the scenario with weekly mobility matrices. This implies that movement between the regions contributes to a mean fraction of 0.53 and 0.90 of the total reported cases of COVID-19 in the regions for the distance and mobility matrix scenarios, respectively. The scaling parameter used to remove biases in the time-series mobility data (c_1) was estimated as 1.51 (90% CrI: 0.90-2.10) for the distance matrix and 2.11 (90% CrI: 1.52-2.69) for the mobility matrix scenario. We estimated the scaling parameters for the baseline disease transmission rate, c_0j for j = 1, …, 13,
using a Bayesian hierarchical modeling framework. These parameters are used to compute the baseline disease transmission rate for each region, defined by exp(c_0j) for j = 1, …, 13. The mean estimates for the population mean and variance are 0.45 (90% CrI: 0.35-0.54) and 0.18 (90% CrI: 0.12-0.26), respectively, for the distance matrix, and 0.21 (90% CrI: 0.05-0.36) and 0.35 (90% CrI: 0.24-0.47), respectively, for the mobility matrix scenario. The mean estimates for c_0j, for j = 1, …, 13, with 90% credible intervals (CrI) are presented in Tables A (distance matrix) and B (mobility matrix) in S1 Text. The estimated distributions for the baseline disease transmission rates for the regions are presented in Fig 7. We observe from the results in this figure that the predicted distributions for the larger and more urbanized regions with dense populations are similar for the distance and mobility matrix scenarios. These regions include Abbotsford, Burnaby, New Westminster, Surrey, and Tri-Cities. On the other hand, the predictions for the less densely populated smaller regions are relatively different for the two scenarios. In addition, the variances in the distributions for the smaller regions are larger than those of the bigger regions with larger populations. The time-dependent piece-wise parameter, g(t), was also estimated using a Bayesian hierarchical modeling framework, with population mean and variance estimates with 90% credible intervals given by -0.33 (-0.52, -0.14) and 0.30 (0.16, 0.47), respectively, for the distance matrix scenario, and -0.28 (-0.45, -0.10) and 0.32 (0.19, 0.50), respectively, for the weekly mobility matrix scenario. The mean estimates with 90% credible intervals for the interval-specific parameters (g_2-g_8) are given in Tables A (distance matrix scenario) and B (mobility matrix scenario) in S1 Text. It is important to emphasize that we have set g_1 = 0 (weeks 1-4) to ensure that the model parameters are identifiable, and to estimate g_2, …, g_8 relative to g_1. The distributions for the time-dependent effect of other factors that affect disease transmission, other than mobility, on the disease transmission rate are given in Fig 8. We observe that the constructed distributions for the two scenarios agree well. Lastly, we compare the estimated expected leave-one-out predictions and their standard errors, for the two model scenarios, computed using the leave-one-out cross-validation (LOO) method [77][78][79] and the widely applicable information criterion (WAIC) method [80,81]. The comparison is summarized in Table 2, where the distance matrix scenario is ranked better than the mobility matrix scenario in terms of their ability to capture the case data.

(Fig 7 caption: Scenarios: fixed distance matrix (blue) and weekly mobility matrices (gold); see Tables A and B in S1 Text for the estimates of c_0j.)

Even though the distance matrix scenario captures the case data better than the weekly mobility matrix scenario, the difference in the fits for the two approaches is not large.

Discussion
An important feature of our modeling framework is the ability to explicitly incorporate factors that affect disease transmission into the transmission rate. This formulation allows us to effectively assess the contributions of these factors to disease transmission in our model.
In the example presented in this article, due to a lack of adequate data, only the time-series mobility data was incorporated explicitly into the disease transmission rate; the effect of other factors that affect disease transmission was accounted for using a time-dependent piece-wise parameter.

(Fig 8 caption: The effect of other factors (exp(g(t))) on the transmission rate (β(t)), computed every four weeks and for the last two weeks: g_1 = 0 (weeks 1-4), g_2 (weeks 5-8), g_3 (weeks 9-12), g_4 (weeks 13-16), g_5 (weeks 17-20), g_6 (weeks 21-24), g_7 (weeks 25-28), g_8 (weeks 29 & 30). Scenarios: fixed distance matrix (blue) and weekly mobility matrices (gold). The estimated means with 90% credible intervals are presented in Tables A and B in S1 Text.)

We attempted to incorporate the effect of facemasks into the model but could not get adequate data for facemask usage in each of the regions we considered. In this case, the disease transmission rate was formulated as

β_j(t) = exp(c_0j + c_1 m_j(t) + c_2 f_j(t) + g(t)),

where c_0j for j = 1, …, 13 are region-specific scaling parameters used to compute the baseline disease transmission rate for each region (exp(c_0j) is the baseline transmission rate for region j), the parameters c_1 and c_2 are the scaling parameters for the mobility and facemask usage rates (m_j(t) and f_j(t)), respectively, and g(t) is a time-dependent piece-wise parameter that is used to incorporate the effect of other factors that affect disease transmission other than mobility and facemasks. This formulation can always be extended to explicitly account for other factors that affect disease transmission, based on data availability. Our model captures the trends and reported COVID-19 cases in each region (see Fig 6, and Figs G and H in S1 Text). In addition, the results of the two model scenarios agree well, although there is a slight difference in the estimated time-dependent disease transmission rates and the contribution of mobility to disease transmission (columns 2 & 4) for some regions. There are significant changes in the computed disease transmission rate over time, which may be attributed to the intervention strategies implemented by the government during this period. Even though there are not many changes in the time-series mobility data, its effect on the disease transmission rate is still apparent for each region. The estimated total initial prevalence of COVID-19 in all the regions for the two scenarios agrees well, as do the estimates for the time-dependent piece-wise parameter (g(t)), used to incorporate the effect of other factors that affect disease transmission into the transmission rate (see Fig 8). However, the estimated effect of mobility on disease transmission is significantly different for the two scenarios. The mean estimate of this parameter was 0.53 (90% CrI: 0.44-0.60) for the distance matrix scenario and 0.90 (90% CrI: 0.72-0.98) for the weekly mobility matrix scenario. This can be interpreted as mobility contributing to 53% and 90% of the cases in the regions for the distance and mobility matrix scenarios, respectively. These results show that the weekly mobility data provides more information, in terms of disease transmission, than the distances between the regions. Note that the mobility referred to here covers movement both within and between the regions. To confirm that the weekly mobility data indeed provides more information, we considered a third scenario, where we used a fixed mobility matrix computed using the mobility data for the entire study period from July 2020 to January 2021.
For this scenario, we estimated the effect of mobility on disease transmission as 0.60 (90% CrI: 0.52-0.70) (see Figs I-K and Table D in S1 Text). As expected, the fixed mobility matrix does not provide more information about disease transmission than the weekly mobility matrices, even though it does better than the distance matrix. Table 2. Model comparison using leave-one-out cross-validation (LOO) and the widely applicable or Watanabe-Akaike information criterion (WAIC). Model ranking (in descending order) is shown in the first column. The difference between the expected log pointwise predictive density (ELPD) for each scenario and that of the best scenario, with standard errors, is shown in the second column. In the third column, we have the Bayesian LOO estimate of the expected log pointwise predictive density (ELPD LOO) and its standard error. The LOO information criterion (LOOIC) and its standard error are given in the fourth column. Lastly, the computed Watanabe-Akaike information criterion (WAIC) for each model is shown in the fifth column. The constructed distributions for the baseline disease transmission rate for the two model scenarios are similar for some of the regions and significantly different for others. These distributions are similar for the larger and more urbanized regions with dense populations (Abbotsford, Burnaby, New Westminster, Surrey, and Tri-Cities) and significantly different for the less densely populated smaller regions (see Fig 7). The difference in the predicted distributions for these two groups of regions may be attributed to their population sizes and to mobility within the regions. Lastly, we compared the results obtained from the two scenarios using the leave-one-out cross-validation (LOO) and the widely applicable information criterion (WAIC) methods. This comparison ranks the distance matrix results better than those of the weekly mobility matrices, although the computed LOO and WAIC for the two scenarios are very similar (see Table 2). We considered these two model scenarios in order to test the hypothesis of whether the time-dependent mobility matrices, computed from the mobile device data, provide more information about human mobility between the regions, in terms of disease transmission, than the distances between the regions. Based on our results, we conclude that even though the distance matrix provides a better fit to the data, the weekly mobility matrices have the ability to explain the variance in transmission between regions over time. The model with the distance matrix is considered a gravity model, while the scenario with the weekly mobility matrices is referred to as a metapopulation model; hence, our hybrid gravity-metapopulation model. Unlike other models used to study the effect of human mobility on disease spread, where mobility is described based on either the distances between regions, mobile device data, or other forms of mobility data only [37,45,47], our hybrid gravity-metapopulation modeling framework provides the flexibility of switching between the two data types. In addition, our framework provides an approach for studying the spread of diseases between all the regions of interest, rather than from an epicenter or a large city to its neighboring smaller cities [43,45], with the ability to quantify the effective contribution of mobility to disease spread between the regions.
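A comparison like Table 2 can be reproduced with standard tooling; the sketch below assumes the two fitted models are available as ArviZ InferenceData objects containing a log_likelihood group, and the file names are hypothetical placeholders rather than artifacts from this study.

import arviz as az

# Hypothetical exports of the two fitted scenarios (e.g., from Stan or PyMC);
# each must contain a log_likelihood group for LOO/WAIC to be computed.
idata_distance = az.from_netcdf("fit_distance_matrix.nc")
idata_mobility = az.from_netcdf("fit_mobility_matrix.nc")

# PSIS-LOO and WAIC estimates for each scenario.
print(az.loo(idata_distance), az.loo(idata_mobility))
print(az.waic(idata_distance), az.waic(idata_mobility))

# Ranked comparison analogous to Table 2: ELPD differences with standard errors.
comparison = az.compare(
    {"distance_matrix": idata_distance, "mobility_matrix": idata_mobility},
    ic="loo",
)
print(comparison)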
The models used to study disease spread from an epicenter to neighboring regions/cities are only suitable for studying disease spread at the early stages of an outbreak, since there is a much higher probability of disease transmission from a person living in the epicenter to those living in the neighboring regions, as shown in [45]. Also, these models do not account for disease transmission between the smaller neighboring regions. Our modeling framework is suitable for studying disease dynamics at any stage of the epidemic and accounts for disease spread from each of the regions to all the remaining regions, irrespective of the number of reported cases in each region. Overall, our modeling framework provides the ability to explicitly incorporate real data on factors that affect disease transmission into the disease transmission rate, and also allows independent assessment of the contribution of these factors to disease transmission in an epidemic. Furthermore, this framework allows us to quantify the effect of mobility on disease transmission in the regions. However, this work is not without limitations. We quantified the effect of mobility on disease transmission in the 13 LHAs of Fraser Health, BC, based on movements between these thirteen regions only. However, there is movement in and out of these regions to other parts of BC. Another limitation of this work is that some regions in Fraser Health are closer to regions in other regional health areas in BC than they are to other regions in Fraser Health. For example, Burnaby is closer to Vancouver than it is to many of the LHAs in Fraser Health. As a result of this, the spread of COVID-19 in Burnaby may be influenced more by the number of cases in Vancouver than by those in other regions in Fraser Health, e.g., Hope, Chilliwack, and Agassiz/Harrison. In the example presented here, we explicitly incorporated only the time-series mobility data into the disease transmission rate and accounted for other factors that affect disease transmission through a piece-wise parameter. An interesting extension of this work would be to incorporate the data for other factors that affect disease transmission explicitly into the model. This way, the effect of each factor on disease spread can easily be assessed. Another extension of this model is to include vaccination and the variants of concern of COVID-19. Since mobility varies by age, an exciting extension of this work would be to stratify the population of each region by age. This way, in addition to assessing the impact of mobility on disease spread, it would also be possible to assess the contribution of each age group to disease spread. Supporting information S1 Text. Supplementary methods and results. This document contains more details of the methods and results. (PDF)
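For readers who want to experiment with the hybrid idea discussed above, the sketch below contrasts a gravity-style coupling matrix built from distances with an empirical mobility matrix in a simple force-of-infection calculation; the inverse-square kernel, the row normalization, and all numbers are illustrative assumptions, not the paper's exact specification.

import numpy as np

rng = np.random.default_rng(1)
n = 13  # number of Local Health Areas

# Hypothetical inputs: pairwise distances (km), populations, infectious counts.
d = rng.uniform(5, 120, size=(n, n))
d = (d + d.T) / 2
np.fill_diagonal(d, 1.0)  # keeps the within-region (self) coupling finite
N = rng.integers(20_000, 500_000, size=n).astype(float)
I = rng.integers(10, 500, size=n).astype(float)

# Gravity-style coupling (fixed): weight ~ N_j * N_k / d_jk^2, rows normalized.
G = np.outer(N, N) / d**2
G /= G.sum(axis=1, keepdims=True)

# Metapopulation-style coupling: an empirical weekly mobility matrix would be
# substituted here; a random row-stochastic stand-in is used for illustration.
M = rng.random((n, n))
M /= M.sum(axis=1, keepdims=True)

beta = 0.4  # illustrative transmission rate
lam_gravity = beta * G @ (I / N)   # per-region force of infection, gravity coupling
lam_mobility = beta * M @ (I / N)  # same with mobility-data coupling
print(lam_gravity.round(4))
print(lam_mobility.round(4))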
2022-12-19T08:02:18.615Z
2022-12-18T00:00:00.000
{ "year": 2023, "sha1": "ae45efcbdf569b61b6826cc9a53bb63d7b47d1f3", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1371/journal.pcbi.1011123", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "8aea78d7401d7187facc259315cd413938ccda9e", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Medicine" ] }
258881925
pes2o/s2orc
v3-fos-license
Molecular basis and dual ligand regulation of tetrameric estrogen receptor α/14-3-3ζ protein complex Therapeutic strategies targeting nuclear receptors (NRs) beyond their endogenous ligand binding pocket have gained significant scientific interest, driven by a need to circumvent problems associated with drug resistance and pharmacological profile. The hub protein 14-3-3 is an endogenous regulator of various NRs, providing a novel entry point for small molecule modulation of NR activity. As an example, 14-3-3 binding to the C-terminal F-domain of the estrogen receptor alpha (ERα), and small molecule stabilization of the ERα/14-3-3ζ protein complex by the natural product Fusicoccin A (FC-A), was demonstrated to downregulate ERα-mediated breast cancer proliferation. This presents a novel drug discovery approach to target ERα; however, structural and mechanistic insights into ERα/14-3-3 complex formation are lacking. Here, we provide an in-depth molecular understanding of the ERα/14-3-3ζ complex by isolating 14-3-3ζ in complex with an ERα protein construct comprising its ligand-binding domain (LBD) and phosphorylated F-domain. Bacterial co-expression and co-purification of the ERα/14-3-3ζ complex, followed by extensive biophysical and structural characterization, revealed a tetrameric complex between the ERα homodimer and the 14-3-3ζ homodimer. 14-3-3ζ binding to ERα, and ERα/14-3-3ζ complex stabilization by FC-A, appeared to be orthogonal to ERα endogenous agonist (E2) binding, E2-induced conformational changes, and cofactor recruitment. Similarly, the ERα antagonist 4-hydroxytamoxifen inhibited cofactor recruitment to the ERα LBD while ERα was bound to 14-3-3ζ. Furthermore, stabilization of the ERα/14-3-3ζ protein complex by FC-A was not influenced by the disease-associated and 4-hydroxytamoxifen-resistant ERα-Y537S mutant. Together, these molecular and mechanistic insights provide direction for targeting ERα via the ERα/14-3-3 complex as an alternative drug discovery approach. Initial studies have shown that NR/14-3-3 PPIs can be modulated using small molecules. Specifically, the natural product FC-A or covalently tethered small molecules have been shown to stabilize the ERα/14-3-3 and ERRγ/14-3-3 PPIs (Figs. 1C and S3) (38, 44-46). These small molecules, also called molecular glues, increase the binding affinity between the two protein partners (PPI stabilization) by binding in a composite pocket formed at the interface of 14-3-3 and the phosphorylated NR protein. Further, stabilization of the ERα/14-3-3 PPI by FC-A suppresses ERα chromatin binding and subsequent transcriptional activity (38). These results illustrate the potential of stabilization of the ERα/14-3-3 interaction as an alternative drug discovery entry. Despite the potential of therapeutic targeting of NR/14-3-3 complexes, little is known about NR/14-3-3 interactions on a molecular level. To date, biochemical and structural studies of NR/14-3-3 complexes have been exclusively performed using short phosphopeptide mimics of the NRs (Figs. 1C and S2) (45-47). Studies of NR/14-3-3 complexes using protein domains or full-length NRs, such as those performed for other 14-3-3 PPIs (48-51), are needed to gain an enhanced molecular and structural understanding of NR/14-3-3 complex formation. As an example, the use of the entire NR LBD allows studies on the interplay between NR/14-3-3 complex formation and aspects like NR dimerization, ligand binding, and cofactor recruitment.
Additionally, small molecule stabilization of full-length NR/14-3-3 complexes can be investigated in the context of clinically relevant NR point mutations, which is of high interest (52-55). Molecular insights into NR/14-3-3 PPIs beyond the phospho-binding groove would also provide the potential to identify novel composite binding pockets for small molecule targeting, thereby expanding the number of entry points for novel PPI modulator design and increasing the potential for selectivity (56). Within this work, we aim to enhance our understanding of 14-3-3 binding to ERα via in vitro characterization of the protein complex formed by 14-3-3ζ and an ERα construct comprising the LBD and phosphorylated F domain. To gain a robust understanding of protein complex formation and the stoichiometry of binding, several biophysical assays were performed, including analytical size exclusion chromatography (SEC) and analytical ultracentrifugation (AUC). In addition, differential scanning fluorimetry (DSF), fluorescence anisotropy (FA), and hydrogen-deuterium exchange (HDX) experiments determined the role of the ERα ligands E2 and 4-OHT in ERα/14-3-3 complex formation and identified 14-3-3 binding to the drug-resistant ERα-Y537S mutant. Finally, we show that the ERα/14-3-3 PPI can be stabilized by the natural product FC-A and that this PPI stabilization can be achieved independently from ERα ligand binding in both wild-type ERα and the ERα-Y537S mutant, thus presenting a potential orthogonal therapeutic strategy for targeting endocrine resistance in breast cancer. (Fig. 1 caption fragment: ERα domain organization NTD-DBD-LBD with the C-terminal phosphopeptide AEGFPA-pT-V-COOH; binding of the endogenous ligand E2 (blue sticks) induces the folding of helix 12 (h12) into an active conformation, allowing cofactor binding (yellow cartoon), whereas the antagonist 4-hydroxytamoxifen (4-OHT, yellow sticks) inhibits this conformational change and cofactor recruitment, PDB: 5WGD and 3ERT. C, crystal structure of the 14-3-3 dimer (gray/white surface) and co-crystal structure of 14-3-3σ (white surface) with the ERα-derived C-terminal phosphopeptide (green sticks) and the small molecule stabilizer Fusicoccin-A (FC-A, pink sticks), PDB: 4JDD.) Co-crystal structures of 14-3-3 bound to ERα-phosphopeptides revealed why the strep-tagged ERα peptide showed a reduced FC-A responsiveness (Fig. S10). The C-terminal strep-tag occupied the FC-A binding site within the 14-3-3 binding groove, forcing a structural rearrangement of the ERα peptide upon FC-A binding. This makes stabilization by FC-A less favorable compared to the WT ERα peptide. While the ERα(PKA)-strep construct could be used as a representative protein for ERα itself, the protein is less favorable for studying 14-3-3 binding and small molecule PPI stabilization. As such, we shifted our focus to an alternative recombinant protein expression approach to obtain the ERα/14-3-3 protein complex. Co-expression and co-purification of ERα/14-3-3ζ complex To circumvent proteolytic cleavage of ERα or the use of C-terminal purification tags, we used bacterial co-expression of ERα, 14-3-3ζ, and PKA to obtain the ERα/14-3-3ζ protein complex. Specifically, N-terminally His-SUMO-tagged ERα(PKA) was expressed together with PKA and strep-tagged 14-3-3ζ in E. coli (Fig. 2A). Co-expression of 14-3-3ζ and ERα with PKA allowed 14-3-3 to bind in situ to the phosphorylated ERα protein, shielding the disordered F domain from proteolytic degradation.
A similar co-expression approach was previously applied to successfully identify 14-3-3 binding to Ataxin-1 and to obtain a purified Tau/14-3-3 complex (64,65). Notably, the 14-3-3 protein comprises seven human isoforms which have proven to feature highly similar biochemical and structural features (66,67). Yeast two-hybrid studies have previously shown that ERα is able to interact with all seven isoforms of 14-3-3 (38). Therefore, we have selected here one 14-3-3 isoform, zeta, as a representative for the class of 14-3-3 proteins. 14-3-3ζ was specifically selected because it expresses well in E. coli, because of its relatively high abundance in the human body (68), and because it has been successfully used in similar biochemical studies of larger 14-3-3 protein complexes, such as the BRAF/14-3-3 complex (50,51,69). The ERα/14-3-3 protein complex was co-purified using three subsequent chromatography methods. First, a Ni-NTA column was used to select for His-tagged ERα protein (Fig. 2B). Elution fractions contained both His-tagged ERα and 14-3-3ζ protein, indicating strong binding of 14-3-3ζ to ERα, as 14-3-3 did not contain a His-tag itself. After hydrolysis of the N-terminal His-SUMO tag of ERα, the complex was purified with a streptavidin column, which again showed co-elution of 14-3-3ζ and ERα (Fig. 2C). The ERα/14-3-3ζ protein complex was finally purified by SEC, which resulted in the elution of a uniform peak that contained both proteins (Fig. S11). High-resolution mass spectrometry (LC-QToF-MS) analysis showed two protein peaks, of which the first corresponded to the mass of 14-3-3ζ and the second to that of the phosphorylated ERα LBD-F protein (Fig. 2D). Notably, truncation of ERα was not observed after co-expression and co-purification with 14-3-3ζ. Quantitative MS experiments showed the approximately equimolar presence of both 14-3-3ζ and ERα within the purified protein complex sample, indicating a 1:1 binding of the two proteins (Figs. S12 and S13). Furthermore, ERα binding to 14-3-3 proved to be phosphorylation-dependent, as co-expression in the absence of PKA did not result in co-elution of 14-3-3ζ with His-tagged ERα (Fig. 2E). ERα/14-3-3 bind in a 2:2 stoichiometry Native PAGE analysis of the purified ERα/14-3-3ζ complex showed a single-band protein complex indicating stable complex formation between ERα and 14-3-3ζ without the presence of any major excess of either protein (Fig. S14). The addition of a high-affinity competitive 14-3-3 binder (70) disrupted the ERα/14-3-3ζ complex, as apparent from two distinct bands that ran in the native PAGE gel at the same heights as 14-3-3ζ and ERα individually, testifying to the stable complex formation between ERα and 14-3-3ζ. Analytical SEC and sedimentation velocity AUC (SV-AUC) were used as orthogonal approaches to determine the ERα/14-3-3ζ protein complex size and the stoichiometry of ERα and 14-3-3ζ binding (Fig. 3, A and B). Analytical SEC results showed a single peak for the individual 14-3-3ζ (theoretical M_w 29 kDa) and ERα (theoretical M_w 33 kDa) proteins, which eluted similarly to monomeric BSA (theoretical M_w 66 kDa), indicating that both proteins themselves were present as stable homodimers in solution (~60 kDa), which is in line with reported observations (Fig. 3A) (59,67,71,72). The ERα/14-3-3ζ complex eluted as a single peak with a clear shift in molecular weight compared to ERα and 14-3-3ζ individually, approaching the dimeric BSA peak around 132 kDa.
Together with the quantitative MS studies, which showed the equimolar presence of both ERα and 14-3-3ζ in the purified mixture (Figs. S12 and S13), these results indicated the tetrameric complex formation of an ERα and a 14-3-3ζ homodimer. SV-AUC sedimentation coefficient distributions c(s) validated the results observed with SEC. ERα and 14-3-3ζ showed individual peaks with weight-averaged sedimentation coefficients (corrected to 20.0 °C and to the density of water), s_w(20,w), of 4.0 S and 3.8 S, corresponding to M_w of 55 kDa and 58 kDa, respectively (theoretical M_w of dimeric ERα is 66 kDa, and of dimeric 14-3-3ζ is 58 kDa) (Fig. 3B). The ERα/14-3-3ζ complex showed a main peak with a weight-averaged sedimentation coefficient of 6.3 S, corresponding to a M_w of 124 kDa, thus supporting the 2:2 stoichiometry of the ERα:14-3-3ζ complex (72,73). Notably, a minor peak was observed at the peak positions of 14-3-3ζ and ERα, indicating traces of the non-interacting proteins present in solution under these conditions. Dimer-to-dimer binding enhances the ERα/14-3-3 protein complex affinity Multivalent dimer-to-dimer complex formation was further assessed using a competitive SV-AUC experiment where 14-3-3ζ was added to the ERα/14-3-3ζ protein complex to obtain various molar ratios of ERα and 14-3-3ζ. The addition of up to six equivalents of 14-3-3ζ to ERα resulted only in species representing the ERα/14-3-3ζ tetramer and the 14-3-3ζ dimer but did not show any 2:1 14-3-3ζ:ERα complex formation (Fig. 3C). This result showed that ERα binds to 14-3-3ζ as a stable dimer. The dimeric state of ERα is hypothesized to cooperatively enhance ERα binding affinity to 14-3-3ζ, similar to the earlier described multivalent 14-3-3 binders CFTR and LRRK2 (74,75). The binding of one ERα monomer to the 14-3-3 dimer brings the second ERα monomer into proximity to 14-3-3, thereby increasing the effective molar concentration of ERα with respect to 14-3-3ζ. Furthermore, dissociation of the protein complex is hypothesized to be slower since two binding interfaces need to dissociate in close succession for 14-3-3ζ and ERα to dissociate. Therefore, dimer-to-dimer binding of ERα and 14-3-3ζ is expected to increase the affinity of ERα for 14-3-3ζ and the stability of the tetrameric protein complex. The dissociation constant K_D of dimeric 14-3-3ζ and dimeric ERα was estimated using SV-AUC results of the ERα/14-3-3 complex (Fig. S15). Based on the area under the curve, the amounts of ERα and 14-3-3ζ in complex and alone were quantified, from which the K_D was calculated using the steady-state equilibrium binding equation (K_D = [ERα]·[14-3-3ζ]/[ERα/14-3-3ζ]). This calculation provided a K_D of 32 ± 6 nM. This affinity is almost 10-fold higher than that of the ERα phosphopeptide binding to 14-3-3, indicating stronger binding of the phosphorylated ERα LBD-F domain protein due to the dimer-to-dimer binding mechanism. DSF analysis of the individual proteins yielded melting temperatures (T_M) for ERα and 14-3-3ζ of approximately 45 °C and 59.0 ± 0.2 °C, respectively (Fig. 3, E and F). ERα phosphorylation at T594 did not influence its melting temperature (Fig. S16). The ERα/14-3-3ζ protein complex showed two distinct melting peaks (Fig. 3E), with 2 °C increased thermal stability when compared to the individual protein partners (Fig. 3F). A similar increase in thermal stability for 14-3-3ζ was observed upon the addition of an ERα phosphopeptide (Fig. S16). These results thus show a mutual stabilizing effect of 14-3-3ζ and ERα upon tetramer formation.
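A minimal sketch of the K_D estimate described above follows, assuming hypothetical integrated c(s) peak areas and loading concentration; the steady-state relation K_D = [ERα][14-3-3ζ]/[complex] is the standard mass-action definition, not code from this study.

# Steady-state estimate of K_D from SV-AUC c(s) peak areas.
# Areas and total concentration are hypothetical illustrative values.
total_conc = 5e-6  # M, loading concentration of each dimer (assumed)

# Fractions of the signal in the free-ERα, free-14-3-3ζ, and complex peaks
# (hypothetical integrals of the c(s) distribution).
frac_free_er, frac_free_1433, frac_complex = 0.06, 0.06, 0.88

free_er = frac_free_er * total_conc
free_1433 = frac_free_1433 * total_conc
complex_conc = frac_complex * total_conc

# K_D = [ERα dimer][14-3-3ζ dimer] / [complex]
kd = free_er * free_1433 / complex_conc
print(f"K_D ~ {kd * 1e9:.0f} nM")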
Mapping of interactions between ERα and 14-3-3ζ using HDX-MS Hydrogen-deuterium exchange (HDX-MS) assays were performed on the individual proteins 14-3-3ζ and ERα, and on the ERα/14-3-3ζ protein complex, to identify which regions within the ERα and 14-3-3ζ proteins were involved in protein complex formation (Figs. 4 and S17-S20). The proteins were incubated in deuterium-containing buffers, and exchange was quenched after 20 s, 2 min, 20 min, and 2 h. Proteins were digested using pepsin, after which the amount of deuteration of individual peptides was determined using mass spectrometry. HD exchange kinetics of ERα were followed for 248 peptides, covering 98.6% of the protein sequence, and for 240 peptides of 14-3-3ζ, covering 100% of the sequence. An HD exchange profile was made to visualize the deuteration kinetics of each ERα residue for apo-ERα and for ERα within the ERα/14-3-3ζ complex (Figs. 4A and S17). Fast deuterium exchange kinetics (>30% deuteration after 20 s) were mainly observed in helix 1 to 2 (H1-2), the beta sheets (B), helix 7 (H7), residues between helix 9 and 10, helix 12 (H12), and the entire F-domain. Residues in H5-6, H8-9, H10, and H11, on the other hand, showed low amounts of deuterium exchange. These results corresponded nicely with the ERα crystal structures, where regions with fast deuterium exchange kinetics are typically present in flexible and/or solvent-exposed regions of ERα. Notably, the C-terminal F-domain of ERα has never been crystallized before or studied with alternative structural biology techniques. Within this study, fast HD exchange kinetics were observed within the entire F-domain, indicating that the ERα F-domain is most probably unstructured and solvent-exposed, as observed in the predicted AlphaFold structure (76,77). 14-3-3ζ binding to ERα decreased the deuteration kinetics of several regions within the ERα protein (Fig. 4, A-C; Figs. S17 and S18). Shielding effects were determined by calculating the difference in HDX between 14-3-3ζ-bound ERα and ERα by itself, after which this difference profile was visualized on the ERα AlphaFold structure (Figs. 4C and S17). The most pronounced shielding effects were observed in the N-terminal side of H3 (residues 328-354), the beta sheets through H7 (residues 397-420), the C-terminal side of H11 (residues 521-528), and the C-terminal end of the F-domain (residues 583-591) (Fig. 4, A-C; Figs. S17 and S18). Smaller effects were observed in H1-2 (residues 302-327), H8 (residues 421-444), H12, and the F-domain (residues 531-571). H4-6, H9-10, and the N-terminal part of H11 seemed unaffected, although it should be mentioned that these regions showed minor deuteration in the first place. Interestingly, although the 14-3-3 binding groove is known to primarily bind the C-terminal end of the ERα F-domain, multiple regions in addition to the F-domain seem to be affected by 14-3-3 binding. Shielding effects were observed for almost all flexible and solvent-exposed regions within ERα, indicating an overall stabilization of the ERα fold upon tetramer formation. The most pronounced effects were clustered on the "bottom" of the ERα LBD structure (H7, H3 N-terminus, H11 C-terminus), indicating the proximity of 14-3-3ζ to this side of ERα, albeit with the dynamic movement of the ERα LBD dimer, facilitated by the long and flexible F-domain, leading to mild shielding effects at all sides of the ERα LBD.
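The shielding analysis can be illustrated with a small difference-profile calculation; the uptake values below are simulated stand-ins for measured HDX data, and the 10% protection threshold is an arbitrary choice for the example.

import numpy as np

# Sketch of the shielding analysis described above: per-residue difference in
# deuterium uptake between ERα alone and ERα bound to 14-3-3ζ, here at a
# single (e.g., 2 h) time point. All uptake values are simulated.
residues = np.arange(302, 596)  # approximate ERα LBD-F construct range
rng = np.random.default_rng(2)
uptake_apo = rng.uniform(0.1, 0.9, size=residues.size)      # fractional D uptake
uptake_bound = uptake_apo - rng.uniform(0.0, 0.15, size=residues.size)

# Negative delta = protection (shielding) upon 14-3-3 binding.
delta = uptake_bound - uptake_apo
protected = residues[delta < -0.10]  # arbitrary 10% threshold for the example
print(f"{protected.size} residues protected by more than 10% D uptake")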
Notably, ERα/14-3-3ζ dimer-to-dimer binding might result in differential binding of one ERα monomer in comparison to the other ERα monomer, potentially further explaining why many regions within the ERα protein are mildly affected upon 14-3-3ζ binding. Deuteration kinetics for 14-3-3ζ in the absence and presence of ERα protein were similarly analyzed (Fig. 4, D-F; Figs. S19 and S20). The regions of high and low exchange rates of 14-3-3ζ alone corresponded well to known crystal structures and previously published HDX of 14-3-3 (78,79), with high deuteration in the loops between H2-3, H3-4, H4-5, and H8-9, and none to minor amounts of deuteration in H2, H3, H4, H5, H7, and H9. ERα binding to 14-3-3ζ led to shielding effects on various parts of the 14-3-3ζ protein, which were most pronounced after 2 h of incubation (Fig. 4, D-F; Figs. S19 and S20). The largest shielding effects were observed within peptides in the C-terminal end of H3 (residues 50-68), H6-7 (residues 154-174), H8 (residues 180-199), and the C-terminus in H9 (residues 217-230). These shielded regions strongly correlated with the 14-3-3 binding groove (H3, H7, and H9) where the ERα C-terminus binds. Interestingly, H8 on the "top" and H6 on the "back" of the 14-3-3 protein were also partially shielded, indicating that these 14-3-3 regions are potentially in close proximity to other parts of ERα. In contrast, the "base" of the 14-3-3 protein seems to be less affected by ERα binding, indicating the absence of direct contact with ERα (Fig. S19). Crystallography and HDX studies of other 14-3-3 PPIs typically show similar binding interfaces involving the "top" of 14-3-3 (H8-9), whereas the "base" of 14-3-3 is not involved in the PPI formation (48, 69, 79-81). 14-3-3ζ can bind the ERα Y537S drug-resistant mutant Point mutations in the ERα LBD are known to modulate ERα conformation and transcriptional activity. ERα Y537S is one of the most prevalent somatic mutations in patients with breast cancer, typically acquired after antiestrogen treatment (54). Structural and biophysical characterization has shown that the Y537S mutation places H12 in an agonistic, constitutively active conformation (Fig. S21), causing this ERα mutant to be resistant to antiestrogen treatment (54,82). Therefore, it is of high interest to study 14-3-3ζ binding to ERα-Y537S, as it may provide a new approach to modulate the transcriptional activity of the drug-resistant ERα-Y537S mutant. The ERα-Y537S/14-3-3ζ complex could successfully be co-expressed and co-purified using the aforementioned methodologies for WT ERα/14-3-3ζ complex purification, indicating that the Y537S point mutation did not impede 14-3-3 binding. SV-AUC confirmed this, as similar sedimentation distributions were obtained for the ERα-Y537S/14-3-3ζ complex and the WT ERα/14-3-3ζ protein complex (Fig. S22A). Furthermore, DSF studies showed a 2 °C enhancement of the thermal stability of both ERα-Y537S and 14-3-3ζ upon complex formation, similar to WT ERα (Fig. S22, B-D). Interestingly, ERα-Y537S in the absence of 14-3-3 showed a melting temperature of 47.1 ± 0.5 °C, which is 2 °C higher than wild-type ERα, indicating higher thermal stability of ERα when mutated. All data together confirmed ERα-Y537S/14-3-3ζ complex formation, providing an interesting entry point for targeting this mutant via its interaction with 14-3-3.
ERα ligand binding is independent of ERα/14-3-3 complex formation Small molecule ligands play an important role in the regulation of ERα transcriptional activity in both healthy and diseased states, making it highly valuable to study their effects on ERα/14-3-3 complex formation. Therefore, we set out to study the effect of ERα/14-3-3 complex formation on ligand binding to both WT and the Y537S mutant of ERα. Here, we specifically studied the endogenous ERα agonist E2 and the therapeutic partial antagonist 4-OHT (Fig. 5A) (83,84). In both SV-AUC and native PAGE, E2 and 4-OHT did not show any effect on the ERα/14-3-3 protein complex size, apparent from the similar c(s) distribution of the ERα/14-3-3ζ complex in the presence and absence of ligands (Figs. 5B and S23). Furthermore, ligand-dependent DSF studies were performed to determine the effect of ERα ligand binding on the thermal stability of ERα in complex with 14-3-3ζ (Fig. 5). Similar effects of E2 and 4-OHT were observed for experiments with the ERα-Y537S/14-3-3ζ complex (Fig. S25). Overall, the SV-AUC and DSF data indicated that ERα ligands did not disrupt ERα/14-3-3ζ complex formation and that ERα was fully ligand responsive when bound to 14-3-3ζ. Notably, the overall effects of E2 appeared to be smaller (max. shielding effect 30% instead of 40%) when ERα was bound to 14-3-3ζ, which can be explained by the partial shielding effects that 14-3-3ζ already has on ERα, making the effect of E2 less pronounced. The affected regions of ERα upon E2 binding correlated well with published ERα-E2 crystal structures (Fig. 6, B and D), where E2 binds in a pocket formed by H3, H6, H8, the beta sheets, and H11 to 12 (84). Interestingly, these results indicated that the ERα F-domain conformation was not significantly affected by E2 binding, which had not been studied on a structural level previously. The similar shielding effects upon E2 binding to ERα alone and to the ERα/14-3-3ζ complex clearly showed that 14-3-3ζ does not influence the E2-induced conformational changes in ERα. This suggests that the ERα F-domain is sufficiently long and flexible to accommodate ligand-induced conformational changes to helix 12 without affecting 14-3-3 binding to ERα. FA assays with the mutated ERα-Y537S construct showed improved cofactor recruitment of apo ERα-Y537S in comparison to WT ERα (Fig. S31). Apo ERα-Y537S provided a binding affinity for the LXXLL peptide of 1.7 ± 0.1 μM, which was comparable to the cofactor recruitment in the presence of the agonistic ligand E2 (K_D = 0.8 ± 0.1 μM). These results align with previously published data, where the Y537S mutation resulted in a constitutively active conformation of ERα in the absence of agonistic ligands (54). Furthermore, ERα-Y537S was found to be less sensitive to 4-OHT inhibition of cofactor recruitment (Fig. S31). Similar to WT ERα, 14-3-3 binding did not significantly influence SRC-1 recruitment to the ERα-Y537S protein (Fig. S31). ERα/14-3-3 PPI stabilization by FC-A is orthogonal to ERα ligand binding The natural product FC-A is a known stabilizer of the ERα/14-3-3ζ PPI. This small molecule binds at the interface of the ERα/14-3-3 protein complex (Fig. 7A) and thereby increases the affinity between the binding partners (38,46). DSF studies were used to determine the effect of FC-A on 14-3-3ζ, ERα, or the ERα/14-3-3ζ protein complex, for both WT ERα and ERα-Y537S (Fig. 7B; Figs. S24 and S25). As expected, FC-A had no effect on the T_M of 14-3-3ζ or ERα alone (Fig. S24)
but increased the T_M of the 14-3-3ζ protein in complex with WT ERα from 61.1 to 67.6 °C (+6.5 °C) (Fig. 7, B and C). Similarly, FC-A was found to stabilize the ERα-Y537S/14-3-3ζ complex, as apparent from the increase in the 14-3-3ζ melting temperature from 60.2 to 66.3 °C (+6.1 °C) (Fig. S25). FC-A did not affect the T_M of the ERα protein in the ERα/14-3-3ζ complex, indicating a local effect of FC-A confined to the composite binding pocket. This is in line with the previous observations, where the ERα F-domain acts as a long and flexible linker between the most C-terminal ERα residues binding in the 14-3-3 binding groove and the globular ERα LBD dimer. Interestingly, the FC-A-induced increase of 14-3-3ζ thermal stability was fully orthogonal to E2 or 4-OHT binding to both WT ERα and the Y537S mutant. In the presence of E2 and 4-OHT, FC-A still increased the melting temperature of 14-3-3ζ in the ERα/14-3-3ζ protein complex by +6.4 °C and +6.5 °C, respectively (Figs. 7C and S24). Conversely, the earlier described increase in the ERα melting temperature upon the addition of E2 and 4-OHT (Fig. 5, D and E) was not influenced by ERα/14-3-3ζ stabilization by FC-A (Figs. 7D and S24). E2 and 4-OHT increased the ERα T_M in the ERα/14-3-3ζ complex by +12.8 °C and +13.7 °C, respectively, which was even slightly enhanced in the presence of FC-A (+14.1 °C for E2; +15.4 °C for 4-OHT) (Figs. 7D and S24). FC-A thus clearly stabilized the ERα/14-3-3ζ complex, for both WT ERα and the Y537S mutant, and this stabilization was shown to be independent of ERα ligand binding. Discussion NR drug discovery approaches have mainly focused on targeting the NR endogenous ligand binding pocket present within the LBD. Despite great successes using this approach, significant interest has developed in alternative ways to modulate NRs. An orthogonal entry for NR modulation is offered by their PPIs with the 14-3-3 protein. However, the highly relevant molecular understanding of these NR/14-3-3 PPIs is often lacking, while it is necessary to identify new entry points for NR drug discovery. Here we studied the NR ERα and its interaction with 14-3-3 on a molecular level. Co-purification of the intact complex of 14-3-3ζ and the ERα LBD and F domains revealed high-affinity binding between ERα and 14-3-3ζ via the formation of a tetrameric complex between an ERα homodimer and a 14-3-3ζ homodimer. Furthermore, the binding of 14-3-3ζ to the disease-relevant Y537S-ERα was confirmed, highlighting the possibility of targeting the ERα/14-3-3ζ PPI as an alternative drug discovery approach for drug-resistant mutants of ERα. Both agonist (E2) and antagonist (4-OHT) binding to the ERα LBD did not disrupt 14-3-3 binding to ERα, as apparent from SV-AUC and native PAGE. Furthermore, the natural ligand E2 induced activating conformational changes of the ERα LBD and subsequent cofactor peptide recruitment in a similar fashion for ERα in isolation and for ERα in complex with 14-3-3ζ. Similarly, the synthetic ligand 4-OHT showed antagonistic behavior for ERα alone and for ERα in complex with 14-3-3ζ, as observed by reduced cofactor recruitment. Finally, the ERα/14-3-3ζ PPI stabilization by FC-A was shown to be functional and neither impeded by, nor dependent on, ERα ligand binding in both wild-type ERα and the Y537S mutated protein. Combined, these results indicate that 14-3-3ζ binding to ERα, and stabilization of this PPI by FC-A, function independently of the conformations, mutations, and liganded state of the ERα LBD.
This orthogonality is most likely facilitated by the long and flexible 42-residue F-domain of ERα, accommodating ERα conformations so as not to affect 14-3-3 binding. Molecular stabilization of the ERα/14-3-3 protein complex with molecular glues like FC-A would therefore be a potential entry point for targeting ERα and its drug-resistant variants. The orthogonality of the molecular events within ERα would even bode well for dual targeting of both the PPI interface and the classical ERα ligand binding pocket. Furthermore, whereas orthosteric drugs such as 4-OHT also show binding to ERβ (85) and ERRγ (86), next to ERα, 14-3-3ζ binding to ERα occurs at the ERα-unique C-terminus, providing the possibility to target ERα in a highly selective manner. The concept of therapeutic targeting of the ERα/14-3-3 PPI could be envisioned to be translatable to other NR/14-3-3 protein complexes. So far, 14-3-3 has been identified as the binding partner of eight NRs, with 14-3-3 binding to each NR in a unique manner. The NR/14-3-3 interactions form a potential entry point for targeting 'hard-to-drug' NRs due to, for example, drug resistance or the absence of an orthosteric pocket in the LBD. A potential example is the Androgen Receptor (AR), an established prostate cancer target. Although prostate cancer is initially often successfully targeted with androgen deprivation therapy or AR antagonists such as enzalutamide, drug- and castration-resistant AR mutants or splice variants often develop within patients with prostate cancer (25,87,88). The most prevalent AR splice variant, AR-V7, even lacks the entire LBD while remaining constitutively active, making it extremely challenging to target this drug-resistant variant of AR (87,88). The binding of 14-3-3 to the NTD of AR, which remains present in the AR splice variants, provides an alternative entry point for targeting AR in drug- and castration-resistant patients with prostate cancer. In all cases, mechanistic and structural insights into the formation of the NR/14-3-3 complex, such as those obtained in this study for the ERα/14-3-3 complex, are urgently needed. Analytical SEC Protein samples were diluted in 20 mM Tris pH 7.5, 150 mM NaCl, 10 mM MgCl2, and 0.5 mM TCEP to a final concentration of 5 to 10 μM. All analytical SEC experiments were performed on an Agilent 1260 bio-inert HPLC in combination with a Superdex200 increase 3.2/300 column at a flow rate of 0.075 ml/min, with 20 mM Tris pH 7.5, 150 mM NaCl, 10 mM MgCl2, and 0.5 mM TCEP as running buffer. Peak detection was performed by absorbance measurements at 280 nm. Sedimentation-velocity AUC Protein samples were dialyzed into 20 mM HEPES pH 7.5, 150 mM NaCl, 10 mM MgCl2, and 0.5 mM TCEP before all AUC measurements to obtain the best buffer match between the blank and the sample. Protein samples were diluted to their final concentrations in dialysis buffer, and ligands were added where described. Samples were placed into double sector titanium centerpieces with 12-mm optical path length. SV-AUC experiments were performed using a ProteomeLab XL-I analytical ultracentrifuge (Beckman Coulter) at 20 °C and at 43,000 to 45,000 rev/min rotor speed (An-50 Ti rotor, Beckman Coulter). All sedimentation profiles were collected by absorbance measurements at 280 nm. The calculated distributions were integrated to establish the weight-average sedimentation coefficients corrected to 20 °C and to the density of water (s_w(20,w)).
QToF-MS quantification Dilution series of 14-3-3ζ-strep and ERα-pT594-strep were prepared in MQ (0.1% FA) to final concentrations of 0.025, 0.020, 0.015, 0.010, and 0.005 mg/ml. Furthermore, 500×, 750×, and 1000× dilutions of the ERα/14-3-3ζ protein complex were prepared. The final samples (100 μl) were transferred to a 200 μl LC-MS vial. UPLC-QToF-MS analysis was performed on a Waters (Milford, MA, USA) Acquity I-Class UPLC system coupled to a Waters Xevo G2-XS quadrupole time-of-flight (QToF) mass spectrometer. The devices were controlled by MassLynx software (version 4.2, Waters). Full scan in positive electrospray ionization (ESI+) mode was used as the MS acquisition mode, with an acquisition range from 150 to 2000 m/z. A 3 μm, 100 × 2.0 mm Polaris 3 C8-A column (Agilent, Middelburg, the Netherlands) was placed inside a column oven at 40 °C and used for chromatographic separation. The flow rate was set at 0.3 ml/min, and a gradient of water containing 0.1% (v/v) formic acid (A) and acetonitrile containing 0.1% (v/v) formic acid (B) was applied. Data were analyzed using MassLynx software. Chromatograms were background subtracted (polynomial order 1, below curve 40%, tolerance 0.010, flatten edges). The area under the peak was determined using integration in the MassLynx software with a relative area threshold of 10. The obtained area under the curve was then plotted against the protein concentration, after which a linear regression was determined between the five data points. Using the equation of the linear regression, the concentrations of 14-3-3 and ERα were determined in each protein complex sample. To perform mass analysis of the individual peaks, deconvolution was performed on the m/z spectra of each individual peak. After visual inspection of the m/z spectrum, the spectrum was zoomed to the five most abundant peaks, from which the mass spectrum was determined using MaxEnt1 (mass ranges 27-30 kDa or 32-36 kDa; resolution 0.10 Da/channel, Simulated Isotope Pattern with Spectrometer Blur width 0.32-0.38 Da, minimum intensity ratios left 33%, right 33%, iterate to converge). Mass spectra were centered and the errors of the deconvolution process were determined. HDX peptide mapping 100 pmol of 14-3-3ζ-strep or ERα(PKA)-strep was mixed in a 1:1 (v/v) ratio with 1 M glycine at pH 2.3 and injected onto a mixed Pepsin/Nepenthesin-2 acidic protease column. Generated peptides were trapped and desalted on a micro trap column (Luna Omega 5 μm Polar C18 100 Å Micro Trap, 20 × 0.3 mm) for 3 min at a flow rate of 200 μl min-1, using an isocratic pump delivering 0.4% formic acid in water. Both the protease column and the trap column were placed in an icebox. After 3 min, peptides were separated on a C18 reversed-phase column (Luna Omega 1.6 μm Polar C18 100 Å, 100 × 1.0 mm) with a linear gradient of 5 to 35% B in 26 min, where solvent A was 2% acetonitrile/0.4% formic acid in water and solvent B was 95% acetonitrile/5% water/0.4% formic acid. The analytical column was placed in an icebox. A TimsToF Pro mass spectrometer (Bruker Daltonics) operating in positive MS/MS mode was used for the detection of peptides. Data were processed by DataAnalysis 5.3 software (Bruker Daltonics). The MASCOT search engine was used for the identification of peptides, using a database containing the sequence of 14-3-3ζ or ERα. HDX All proteins were dialyzed and diluted into 20 mM Hepes, 150 mM NaCl, 10 mM MgCl2, and 0.5 mM TCEP, pH 7.5, to a final concentration of 20 μM. E2 or DMSO was added to a final concentration of 150 μM.
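The quantification step amounts to a linear calibration followed by interpolation; the sketch below, with hypothetical areas and concentrations, mirrors that workflow.

import numpy as np

# Sketch of the quantification described above: linear calibration of
# integrated peak area versus protein concentration, then interpolation of the
# complex sample. All numbers are hypothetical.
conc = np.array([0.005, 0.010, 0.015, 0.020, 0.025])  # mg/ml standards
area = np.array([1.1e4, 2.2e4, 3.2e4, 4.3e4, 5.4e4])  # integrated peak areas

slope, intercept = np.polyfit(conc, area, 1)  # linear regression

sample_area = 2.8e4  # hypothetical area measured in the complex sample
sample_conc = (sample_area - intercept) / slope
print(f"estimated concentration: {sample_conc:.4f} mg/ml")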
Hydrogen-deuterium exchange was initiated by 10-fold dilution of the proteins under the different conditions into a deuterated buffer. Fifty-microliter aliquots (100 pmol) were taken after 20 s, 2 min, 20 min, and 2 h of incubation in deuterated buffer, quenched with 50 μl of 1 M glycine, pH 2.3, and snap-frozen in liquid nitrogen. Aliquots were quickly thawed and analyzed using the same system as described above. Peptides were separated by a linear gradient of 10 to 30% B in 18 min. The mass spectrometer was operated in positive MS mode. Spectra of partially deuterated peptides were processed by DataAnalysis 5.3 (Bruker Daltonics) and by the in-house program DeutEx. Native PAGE Samples were prepared in 20 mM Tris pH 7.5, 150 mM NaCl, 10 mM MgCl2, and 0.5 mM TCEP with protein at a final concentration of 2.5 to 5 μM. Ligands were added at a final concentration of 100 μM. All samples were diluted 1:1 into native PAGE loading dye (62.5 mM Tris pH 7.1, 75 mM NaCl, 5 mM MgCl2, 20% glycerol, 0.01% bromophenol blue), after which 12 μl of each sample was loaded on a 4 to 20% Mini-PROTEAN TGX Precast Protein Gel (Bio-Rad). Gels were run at 130 V for 2.5 h at 4 °C in running buffer (25 mM Tris, 192 mM glycine, pH 8.3). Gels were washed in MilliQ (20 min), stained with Coomassie Brilliant Blue G-250 (Bio-Rad), and destained in MilliQ until bands were clearly visible. Gels were imaged and analyzed with ImageJ. Differential scanning fluorimetry Proteins were diluted (in 20 mM Hepes pH 7.5, 150 mM NaCl, 10 mM MgCl2, 500 μM TCEP) to obtain 40 μl samples containing 5 μM 14-3-3ζ-strep, 5 μM ERα-strep, or 10 μM ERα/14-3-3ζ complex, with either 1% DMSO (negative control) or 100 μM ligand (E2, 4-OHT, FC-A). All samples additionally contained 10x ProteoOrange dye (Lumiprobe, 5000x stock in DMSO) and were heated from 35 to 79 °C at a rate of 0.3 °C per 15 s in a CFX96 Touch Real-Time PCR Detection System (Bio-Rad). Fluorescence intensity was determined using excitation 575/30 nm and emission 630/40 nm filters. Based on these melting curves, the (negative) first derivative melting curve was obtained, from which the melting temperature T_M could be determined. Reported T_M values were obtained from three independent experiments, from which the averages and standard deviations were determined using Excel. Fluorescence anisotropy All FA dilution series were prepared in polystyrene (non-binding) low-volume Corning Black Round Bottom 384-well plates (Corning 4514 or 4511). FA measurements were performed directly after plate preparation, using a Tecan Infinite F500 plate reader at room temperature (λ_ex: 485 ± 20 nm; λ_em: 535 ± 25 nm; mirror: Dichroic 510; flashes: 20; integration time: 50 ms; settle time: 0 ms; gain: 60; Z-position: calculated from well). Wells containing only fluorescein-labeled peptide were used to set the G-factor at 35 mP. All data were analyzed using GraphPad Prism (7.00) for Windows and fitted using a four-parameter logistic model (4PL) to determine apparent binding affinities (K_D^app). All results are based on two independent experiments, from which the averages and standard deviations were calculated to obtain the final values. Both soaked and non-soaked crystals were fished and flash-frozen in liquid nitrogen. X-ray diffraction data were collected at the P11 beamline of the PETRA III facility at DESY (Hamburg, Germany) with the following settings: 1440 images, 0.25°/image, 100% transmission, and 0.1 s exposure time.
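The T_M determination from the (negative) first derivative can be sketched as follows; the simulated sigmoidal melting curve and the 59 °C midpoint are illustrative, not measured data.

import numpy as np

# Sketch of the T_M determination described above: T_M is taken at the
# extremum of the (negative) first derivative of the melting curve.
temps = np.arange(35.0, 79.0, 0.3)  # degrees C, matching the protocol's ramp
tm_true = 59.0
fluorescence = 1.0 / (1.0 + np.exp(-(temps - tm_true) / 1.5))  # simulated unfolding

d_fluor = -np.gradient(fluorescence, temps)  # negative first derivative
tm_est = temps[np.argmin(d_fluor)]           # steepest rise = melting midpoint
print(f"estimated T_M = {tm_est:.1f} C")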
Initial data processing was performed at DESY using XDS, after which the preprocessed data were taken through further scaling steps, molecular replacement, and refinement. Data were processed using the CCP4i2 suite (version 7.1.18). XDS-preprocessed data were scaled using AIMLESS. The data were phased with MolRep, using protein data bank (PDB) entry 4JC3 as a template. A three-dimensional structure of 3'-dAc-FC-A was generated using AceDRG, which was thereafter built into the electron density based on visual inspection of the Fo-Fc and 2Fo-Fc electron density maps. Sequential model building (based on visual inspection of the Fo-Fc and 2Fo-Fc electron density maps) and refinement were performed with Coot and REFMAC, respectively. Finally, alternating cycles of model improvement (based on isotropic B-factors and the standard set of stereochemical restraints: covalent bonds, angles, dihedrals, planarities, chiralities, non-bonded) and refinement were performed using Coot and phenix.refine from the Phenix software suite (version 1.20.1-4487). PyMOL (version 2.2.3) was used to make the figures in the manuscript. All structures were deposited in the protein data bank (PDB) and obtained the IDs 8C40, 8C42, 8C3Z, and 8C43. See Table S1 for X-ray crystallography data statistics. Data availability Crystal structures described in this manuscript have been deposited to the PDB. They have the following PDB codes: 8C40, 8C42, 8C3Z, and 8C43. Supporting information-This article contains supporting information.
2023-05-25T15:05:35.861Z
2023-05-01T00:00:00.000
{ "year": 2023, "sha1": "9dcc7f695fa7b22183de40bfc2ce6d7c05106564", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/j.jbc.2023.104855", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "cdb1a29a403f30a5c007dd85bb89b85b432aa882", "s2fieldsofstudy": [ "Medicine", "Chemistry" ], "extfieldsofstudy": [ "Medicine" ] }
182289397
pes2o/s2orc
v3-fos-license
THE STUDY OF MULTICOMPONENT LOADING EFFECT ON THIN-WALLED STRUCTURES WITH BOLTED CONNECTIONS The influence of various factors on the stress-strain state of composite thin-walled structures with bolted connections of separate elements was studied using a test problem. As an example of such structures, a metallic granary (a silo) consisting of panels connected with bolts was taken. The test structure contained two overlapping narrow flat strips. A bolt is inserted into bolt holes bored in these strips and pre-tightened. Friction and slipping of the strips and the bolt, contact between the side surface of the bolt and the holes, as well as the mutual influence of bending and stretching were taken into consideration. Thus, the model took into consideration geometric, physical, and structural nonlinearities. The system was subjected to a transverse load applied to one side of the strip. Staged loading of the system was modeled. It was established that under load the studied system acquires a deflection which increases unevenly with the load increase. This is explained by the fact that the system is affected by both elastic deformation of the strips and mutual slip in the connection zone. When the gap between the bolt and the holes in the panels finally vanishes, mainly elastic deformation of the system takes place. A residual deflection was established in the system after the first unloading. It was also established that longitudinal forces act in the system; they can be much larger than the transverse forces from the load. The system featured a strong mutual influence of bending and stretching of the strip. As a result of the studies, the factors determining the stress-strain state of the studied system were identified: geometric nonlinearity, contact interaction, friction and slip, and the connection between deflection and stretching. Thus, a design model for such thin-walled structures will be inadequate without all these factors; the results of calculations with such a model will have significant errors and the resulting recommendations will be unreliable. The conducted studies have made it possible to develop more adequate models for analysis of the reaction of composite thin-walled structures to the effect of loading. Introduction Thin-walled structures have become widespread in practice. They include a wide variety of composite structures which in many cases consist of plain or corrugated panels connected with high-strength bolts. In particular, metal granaries (silos) are typical examples. Silos are subjected to the effect of a system of operating loads including the effects of wind, rain, snow, temperature fluctuations, etc. At the same time, internal pressure from grain, bulk material, or liquid is the main load. Accidents often occur as a result of failure of silo elements under the action of operating loads. In many cases, such failures occur at places of bolted connection of panels. These problems cannot be predicted with the use of traditional continuous strip shell models because the impact of bolts on the stress-strain state (SSS) of the studied structure is not taken into account. Existing models of such connections do not take into consideration all factors affecting the SSS of such composite structures. Accordingly, existing models require further development and improvement. Therefore, let us consider composite structures and methods for calculating them on the example of silos.
Literature review and problem statement Silos are widely used in present-day industry due to their advantages: ease of assembly, reliability, low-cost operation, easy maintenance, etc. However, as practice shows, a number of problem situations occur in operation that lead to structure failure [1]. The main failure causes include loss of stability of bearing elements and walls, emergence of unwanted deformations in difficult-to-predict places because of large load variation, problems with bolted connections, and metal corrosion. Such situations are caused by the fact that silos operate in hard conditions of constant influence of various multi-cycle, multicomponent loads from the stored material, rigs, adverse environmental conditions, and possible seismic influence. When designing new silos, it is necessary to carry out a comprehensive analysis of the SSS of the entire structure using various mathematical models that take into account the structure's features and the most adverse combinations of various factors. It has been shown in [1,2] that this is an essentially non-trivial problem requiring additional studies in many fields of science. It is necessary to create new algorithms and approaches to numerical simulation, experimental laboratory studies, and testing of actual silos. This task can be partially facilitated by the use of design codes and branch standards [3] which, however, do not completely cover all possible load variations arising in the structure. As a result, a too "soft" estimate of loads [1,2] is obtained. It follows that the development, substantiation, and implementation of a complex parametrized mathematical model of the SSS of silos and their individual elements should be considered the most important and primary task. Only on this basis is it possible to conduct adequate, accurate, and complete numerical simulation of the processes in and states of silo elements during construction and operation and, consequently, to prepare appropriate recommendations. As usual, the main structural elements of silos include corrugated panels which are connected with overlapping and corresponding stiffening ribs by means of preliminarily tightened bolted connections. Therefore, when constructing an adequate design model, a system of strips (or shells) reinforced by various structural elements is ultimately obtained. The bolt pre-tightening forces act between these elements. Also, forces of contact interaction of the bolt head and the nut with the strip, and of the cylindrical bolt section with the internal surface of the bolt holes bored in the connected strips, act in the bolted connections. Such a problem formulation takes into account the geometric, structural, and physical nonlinearity of the structure.
Thus, analysis of the structure and loading conditions shows that the potentially significant factors to be taken into consideration during development of the mathematical model and its numerical implementation by means of the finite element method (FEM) include the following: 1) correct application of boundary conditions; 2) detailed analysis of bolted connections in the structure; 3) taking into account friction in "nut-strip-washer-bolt head" combinations; 4) taking into consideration the "slider" effect, that is, modeling tangential displacements of panels under applied normal load; 5) taking into consideration the heterogeneity of gap distribution in the "internal strip-bolt-outer strip" system among individual locations of bolted connections; 6) taking into consideration unevenness (variation) of forces in bolted connections. These peculiarities in the formulation of the study tasks are in good agreement with the problematic moments reflected in [1]. Attention was paid in [4,5] to numerical and experimental studies of samples of panels connected by bolt fixtures with preliminary tightening. Sandwiches of strips of various thicknesses with various schemes of sandwich formation were studied. Various properties of the materials of the bolts and the connected strips were also taken into consideration. Stages of elastic deformation of the studied structure, slippage as a result of the tensile forces overcoming the static friction forces, as well as plastic deformation were considered. The obtained results formed the basis for conclusions on the characteristic features of behavior of the studied "plates-bolts-plates" nonlinear system. In the long run, it is possible to construct on this basis "phenomenological" models of special finite elements (SFE) embedded in traditional finite element models in the zones of connection of individual silo panels. Such an approach is quite productive and of considerable interest. However, it has certain disadvantages. First, tension with longitudinal forces is mainly considered in the study of the specimens described in [4,5], whereas in reality the silo elements work in conditions of longitudinal and transverse bending (Fig. 1). Under these conditions, the following factors come to the forefront: 1) the magnitude of the tensile force, k, in the bolt varies during loading (in contrast to the practically constant magnitude for the case of longitudinal tension [4,5]); 2) the magnitude of the tensile force in the strips, N, has a significant effect on the deflection, w; 3) large deflections, w, cause noticeable tangential deformation and therefore affect the force N; 4) there may be a gap between the bolt and the strips (surfaces S_b, S_p) with unilateral (non-penetration) contact conditions on these surfaces; 5) in addition, a plastic gasket having a non-linear "ε-σ" dependence may be placed between the bolt and the strip surface; 6) contact, friction, and slip between the bolt and the strip surfaces appear. The combination of factors 1) to 6) reduces the problem to a system of coupled nonlinear equations that connect physical, geometric, and structural nonlinearities. At the same time, it is impossible to identify dominant factors. Accordingly, the task is considerably more complicated than the cases described in [4,7].
For example, the reaction to loading of structures which consist of corrugated panels is studied in [4,7]. The study of thin-walled structures with bolted connections is described in [6-12]. At the same time, the loading patterns used in these works do not fully correspond to the actual working conditions of these structures. Elastic and elastoplastic behavior of thin-walled structures under seismic loads [13-16], in collapse analysis [12,17], and in response to wind loads [18] is not adequately modeled. Thus, it can be concluded that there is no complete solution to the problems of studying composite thin-walled structures. This necessitates improvement of the existing models of SSS and the methods for studying the behavior of such structures under loading. The aim and objectives of the study The study objective was to analyze the behavior of composite thin-walled structures on the example of two strips with bolted connections. To achieve this objective, the following tasks were solved: -to formulate the problem and construct design diagrams of the test composite system of strips with bolted connections; -to carry out a numerical study of the SSS of the composite test strip and analyze the obtained results from the point of view of the influence of design parameters on strip strength. Problem formulation and design diagrams To study the qualitative features of behavior of the "panels-bolts-panels" system (Fig. 2), simplified (test) specimens were studied. The specimens form a system of two strips connected by one or more bolts and loaded with transverse forces. All elements characteristic of silos were present in this test system (TS) and all of the above important factors were taken into consideration (besides corrugations). The TS dimensions are close to the dimensions of a strip taken from a metal granary. It is necessary to develop a mathematical model of the TS SSS using the finite element method. Next, it is necessary to study the TS SSS in bending, taking into consideration moderate deflections and the effect of longitudinal forces. The test system is modeled as a composite strip: two strips connected by bolt fixtures with a gap (Fig. 3). To be more specific, the following dimensions and material properties were taken in the numerical FEM formulation: modulus of elasticity of the material E = 2.1·10^11 N/m^2; Poisson ratio ν = 0.3; length l = 5·10^-1 m; width C = 5·10^-2 m; thickness h = 2·10^-3 m; total length of the connected strips L = 9.6·10^-2 m, which corresponds to the length of the span between stiffening ribs on the silo. Diameter of the bolt holes in the strips d_1 = 1.2·10^-2 m; bolt diameter D = 10^-2 m. The bolted connection arrangement was as follows: the bolt is placed in the strip holes with a gap and tightened with a nut to a tightening torque T_K. The tensile load that occurs at the points of strip connection is balanced at the initial stages of loading by frictional forces in the connections resulting from application of the bolt tightening force F_tight. If the tensile force along the x axis exceeds the frictional forces, the strips will slip until the gap vanishes. At this moment, the cylindrical surfaces of the bolt and of the holes in the contacting strips begin to work. Main attention is paid to variants of bolted connection arrangements which influence the behavior of gap vanishing during bending of the system. The following changes in connection arrangement will be considered in the presented task (Fig. 4-6). (Fig. 3: System of two strips with bolted connection.)
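The stick-slip logic described above can be summarized in a small sketch; the friction coefficient, tightening force, and load values are illustrative assumptions, not parameters of the FEM model.

# Sketch of the slip-onset check for one bolted joint, following the mechanism
# described above. All numeric values are illustrative.
mu = 0.2            # coefficient of friction between strips (assumed)
F_tight = 10_000.0  # bolt pre-tightening force, N (assumed)
n_interfaces = 1    # single friction interface between the two strips

friction_capacity = mu * F_tight * n_interfaces  # limiting friction force, N

def joint_state(N_axial: float, gap_closed: bool) -> str:
    """Classify the joint response to an axial force N transmitted by the strips."""
    if N_axial <= friction_capacity:
        return "stick: load carried by friction, no mutual slip"
    if not gap_closed:
        return "slip: strips slide until the bolt-hole gap vanishes"
    return "bearing: bolt shank bears against the hole surfaces"

for N, closed in ((1_500.0, False), (2_500.0, False), (2_500.0, True)):
    print(f"N = {N:.0f} N, gap closed = {closed}: {joint_state(N, closed)}")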
The design of the bolted connection belonging to the first group is a connection with a single bolt and the use of joint washers made of polypropylene having nonlinear elastic properties in the contact areas (Fig. 4). In this task formulation, the coefficients of friction and the bolt tightening force were varied and joint washers were used (Table 1). The transverse force distributed on the upper edge of the strips is given by the force F (cyclic loading-unloading). The tightening torque is modeled as the force of preliminary bolt tightening F_tight. Table 1 shows the variants of contact interaction for the various bolted connection arrangements. The bolted connection arrangement belonging to the second group (Fig. 5) features different hole diameters (Table 2). Also, the position and arrangement of the strips relative to the hole axis vary. Such misalignment can occur in thin-walled structures during their connection. In the presented task, it is proposed to consider three variants of misalignment (respectively, variants 2_1p-2_3p): displacement of strips with a selected gap between the hole and the bolt (1), an increased gap (2) and side displacement of the strips relative to the hole (3). In this formulation, a bolted connection with a 0.01 m diameter bolt is used. When the hole diameter is changed, the gap between the bolt and the inner surface of the hole becomes smaller or larger. Because of this, the gap change is accompanied by a decrease or increase in the bending of the composite strip during loading. This variation makes it possible to analyze the effect of the gap size on the SSS of the model of the composite strip under study.

Table 2. The studied list of bolted connection arrangements

Elements with an increased number of bolted connections and addition of strips to the studied system are considered in the third group (Fig. 6). Such alternation takes place in thin-walled machine-building structures. Thin-walled panels are interconnected with a different number of bolted connections in a row. Composite thin-walled panels are also used. They can be single-layered and multilayered. Single-layered panels are two butted panels, and multilayered ones are four or more panels in one connection. Multilayered panels are arranged in two ways: two or more panels in a sandwich (1) and successive panel alternation (2). Also, panels are arranged using various numbers of bolt fixtures. In particular, a connection with two bolt fixtures was considered in this task. Joint washers made of physically nonlinear materials are used in the described connections. A list of the studied arrangements of strips with various numbers of bolted connections is given in Table 3.

Table 3. The studied arrangements of bolted connections

The finite element model. The studied problem is reduced to analysis of the finite element model shown in Fig. 7, 8. The "sweep" method of finite-element meshing was used. The models contain 44 to 90 thousand elements of SOLID186 (ANSYS) type and 215 to 435 thousand nodes. Geometric, physical and structural nonlinearities are taken into consideration. The model loading diagram is shown in Fig. 8. The structure is considered in a section (symmetric with respect to the xOz plane). Pressure q with a total force F = 450 N (Fig. 1, 3) acts on the upper part of the strip. The strips are rigidly fixed at their ends. Movement along the y axis in the plane of symmetry xz (Fig. 3) is restricted.
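As described next, loading proceeds in stages: bolt preload first, then increments of the pressure q. A schematic driver for such a staged nonlinear solution is sketched below; solve_increment is a hypothetical placeholder for one converged nonlinear FE step (contact plus friction), not an actual ANSYS API, and the preload value is illustrative.

```python
# Schematic staged-loading driver for the nonlinear contact problem.
# 'solve_increment' is a hypothetical stand-in for one converged
# nonlinear FE solution step (e.g., a Newton-Raphson pass with
# contact and friction active).

F_TOTAL = 450.0   # N, total transverse force (pressure resultant)
N_STEPS = 50      # number of load increments after preload

def solve_increment(state, bolt_preload, transverse_force):
    """Placeholder: advance the contact/friction FE model by one
    converged increment and return the updated state."""
    ...

state = None
# Stage 1: apply bolt tightening only (no transverse load yet).
state = solve_increment(state, bolt_preload=1000.0, transverse_force=0.0)

# Stages 2..n: ramp the transverse pressure in small increments so
# that the stick-slip transitions and gap closure are resolved along
# the loading path (the solution is path-dependent because of friction).
for i in range(1, N_STEPS + 1):
    F = F_TOTAL * i / N_STEPS
    state = solve_increment(state, bolt_preload=1000.0, transverse_force=F)
```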
The transverse force distributed on the upper edge of the strips is set by the force F. The system is loaded in steps: preliminary tightening takes place at the first stage; at the further stages, the system is loaded by increments of the pressure q and, accordingly, of the force F.

Results obtained in numerical simulation of SSS of the test strip with bolted connections

Let us consider the results obtained for the first group (I), Fig. 4. They include pictures of system deflections, distribution of equivalent Mises stresses, reactions in the supports and the forces arising in the bolt under loading (Fig. 9, 10). Analysis of the presented dependences provides the basis for the following conclusions: 1. Deflection of the system of strips (Fig. 9, a) is a step function. "Plain" (smooth) sections correspond to deflection of the strip as a solid rod (in this case, the friction force is less than the limit value and mutual slip of the strips does not occur). When the friction force reaches the limit value, a sharp increase in deflections takes place. This increase is caused by "extension" of the strip because of mutual slipping of its halves. "Plain" and "sharp" stages alternate until the gap between the bolts and the panels vanishes. Further growth of the load leads to a slow growth of deflections without sharp jumps. In this case, the system of strips behaves as a continuous strip (CS), but one lengthened by the size of the gap between the bolt and the holes in the strips. 2. When the coefficient of friction decreases, the stepwise behavior changes to a smooth curvilinear one, and displacement of the strips relative to each other occurs at a smaller load. 3. The use of a joint washer that fills the gap leads to a smoother vanishing of the gap. The load required for displacement of the strips increases. 4. Under the action of a cyclic load, two zones are observed: the first zone is growth of deflections with increasing load, in which regions of "sharp" and "plain" change of deflections alternate. The second zone is a weak gradual growth of deflections with a further increase in load; a reverse change occurs when the load decreases to zero (a case without change of the load sign is considered). A significant residual deflection remains in the studied system after the "loading-unloading" cycle (in this case, at a level of 60 % of the maximum). Further "loading-unloading" cycles follow practically the same path as the initial cycle. Thus, it can be noted that when the load is taken off, the system does not return to its original state. Under further pulsating cyclic loading (from zero to maximum and then back to zero), the system behaves "pseudo-elastically" but with some residual deflection. 5. The levels and distributions of equivalent Mises stresses are shown in Fig. 9, b, 11, 12. Fig. 12 compares the maximum stresses emerging in all arrangements of bolt fixtures (Table 1). These stresses grow nonlinearly with the load increase. If there is sealing material in the gap of the bolted connection, it smooths out the stress concentration. 6. To analyze displacements of the contacting surfaces of the strips, consider the dependence of the displacements of the strips relative to each other along the x axis under load (Fig. 9, c). A stepwise behavior of strip displacements with growth of loading is observed. No displacement occurs at the initial load.
With an increase in load up to 50 N, a jump-like mutual displacement of the strips appears, accompanied by a partial vanishing of the gap. With further growth of the load (after complete vanishing of the gap), a slight displacement is observed only as a result of deformation of the contacting strips and bolts. 7. Let us also consider the reactions at the places of plate fixation (Fig. 10). It should be noted that the longitudinal reaction components, in contrast to the transverse ones, vary essentially nonlinearly. However, with growth of the load after the onset of contact between the bolt and the edges of the holes, the reaction components grow approximately linearly. It is important to note that the level of longitudinal reactions remains practically constant when the strips are displaced. Besides, the level of longitudinal loading is an order of magnitude higher than that of the transverse loading (and, accordingly, of the reactive force along the z axis, Fig. 3). It should also be noted that in the other variants (Tables 2, 3), the dependence of the reactions R_x and R_z on the transverse loading is similar (and they are therefore not discussed further). 8. A significant effect is possible: a sharp increase in the tensile force in the bolt. In some cases, this force may exceed the level of initial tightening by more than 4 times (Fig. 9, d). To reduce this effect, it is suggested to use joint washers. In particular, the studied arrangement of the bolted connection with contact between the strips and the bolt head through two yielding washers (variant 1_4) has made it possible to understand the causes of the force increase in the bolt. The reaction increases when the parts are rigidly connected. When yielding elements are introduced into the system, no marked increase in tightening forces occurs in comparison with the initial value. When the tightening force of the bolt fixture with the gap-filling washer is increased (variant 1_7), the reaction in the bolt remains virtually unchanged. Let us consider the results obtained for the second group (II, Fig. 5). Fig. 13 shows the results of the study with varying hole diameter (Table 2).

Fig. 13. Results of numerical simulation of the second group for all connection variants: deflection w, mm (a); maximum equivalent Mises stresses σ_i, MPa (b); displacement of the contacting surfaces of the strips relative to each other u, mm (mutual slipping) (c); reaction in the fixture along the x axis R_x, N (d)

When comparing the studied system of strips in a geometrically nonlinear formulation with a conventional formulation for a continuous strip (CS), one can state the following. There is a significant difference between the deflections, equivalent stresses and reaction components in the supports. The continuous strip exhibits smoother, almost linear behavior in the graphs, whereas the studied system of interconnected strips described above manifests substantially nonlinear behavior. When a continuous strip is loaded, its deflection is less than half the deflection of the system of strips with bolt fixturing, depending on the hole diameter. This is explained by the fact that the system of strips with bolt fixturing features structural nonlinearity, and there are significant gaps in the holes (depending on diameter) commensurate with the displacements of points in the strips resulting from elastic deformations.
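The reported factor of more than two between the deflections is also consistent with elementary bending-stiffness bounds for a two-strip stack: a perfectly bonded pair bends with the second moment of the doubled thickness, while a fully slipped (frictionless) pair bends with only the sum of the individual moments, a factor of 4 apart. The sketch below evaluates these bounds for the test-system dimensions using linear clamped-clamped beam theory under uniform load; the load idealization is an assumption, and the numbers are an order-of-magnitude check, not a substitute for the contact FE model.

```python
# Elementary stiffness bounds for the two-strip stack (linear beam
# theory, clamped-clamped, uniform line load) -- an order-of-magnitude
# check on why ignoring the joint misrepresents the deflection.

E = 2.1e11        # Pa, modulus of elasticity
l = 0.5           # m, strip length (clamped at both ends)
C = 5.0e-2        # m, width
h = 2.0e-3        # m, single-strip thickness
F = 450.0         # N, total transverse force
p = F / l         # N/m, equivalent uniform line load

def midspan_deflection(I: float) -> float:
    """Clamped-clamped beam under uniform line load p."""
    return p * l**4 / (384.0 * E * I)

I_single = C * h**3 / 12.0
I_stack  = 2.0 * I_single            # frictionless, fully slipped pair
I_bonded = C * (2.0 * h)**3 / 12.0   # perfectly bonded pair (= 8*I_single)

print(f"bonded pair : w = {midspan_deflection(I_bonded)*1e3:.1f} mm")
print(f"slipped pair: w = {midspan_deflection(I_stack)*1e3:.1f} mm (4x the bonded value)")
```

The actual bolted system lies between these bounds and, in addition, picks up the gap travel and moderate-deflection stretching, which is why the computed FE deflections exceed even the slipped-pair estimate.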
Thus, a conventional formulation of this class of problems with a design diagram in the form of a continuous strip, without taking into consideration bolt fixturing, gaps, tension and friction, results in significant inaccuracy of the obtained results. When the strips are laterally displaced relative to the holes, an additional influence on the deflection behavior takes place (Fig. 14, a). When strips with an increased gap are displaced, the deflection is about 0.04 m. When the gap is reduced, the deflection equals 0.018 m. It is equal to 0.025 m when the strips are displaced laterally. The deflection behavior becomes stepped and nonlinear with loading growth. Maximum stresses were observed in the first and third variants after gap vanishing (Fig. 14, b). In the second variant, stress growth occurs immediately at the beginning of loading. The behavior of slippage of the strips relative to each other corresponds to the change of deflection (Fig. 14, c). The force in the bolt increases during system loading. A nonlinear increase to 3,500 N is observed in the first variant and to about 4,000 N in the second variant. The force is maximal, about 6,000 N (Fig. 14, d), in the third variant, where lateral displacement takes place. Consider now the results of study of the third group (III, Fig. 6), namely the system of strips with two bolted connections (Fig. 15). It follows from the results that application of two bolted connections in the studied system of strips leads to an increase in the force required for gap vanishing (compared to the system with one bolted connection). It is about 400 N in the bolted connections without joint washers and about 250 N in the bolted connections with joint washers. Gap vanishing occurs gradually, along with nonlinear behavior of the deflection. The deflection was about 0.027 m in all variants (Fig. 15, a). Equivalent Mises stresses were 1,126 MPa in the first variant, 1,844 to 2,083 MPa in the second variant depending on the coefficient of friction, and 1,876 MPa in the third variant with two joint washers (Fig. 15, b, 16). Forces in the bolted connections begin to grow significantly when the gap vanishes. The reaction in the bolts grows from 1,000 N to 4,000 N in a connection without joint washers, up to 1,700 N in the variant with two washers and up to 1,300 N in the variant with four washers. It follows from these results that the use of joint washers significantly affects performance of the bolted connections. Let us consider the results obtained for multilayered panels with group arrangement (variant IV) (Fig. 6) and arrangement by the method of sequential alternation (variant V) (Fig. 16-18). The stress distributions are given in Fig. 17, b; displacement of the strips (Fig. 17, c) and bolt forces (Fig. 17, d) have the same nature of dependence on the load F as in the previous variants. Having analyzed the obtained results, the following conclusions can be drawn: the behavior of deflections of the system of strips depends on the arrangement of the thin-walled elements (group connection versus the alternation method). In the first variant of arrangement, the gap vanishes at a load level lower than in the second variant. This is explained by the fact that the number of contacting surfaces with friction is three in the first variant and five in the second. The Mises stresses are in the same range. The described study is a continuation and development of studies [19-22]. It should be noted that the models developed and described in this paper have significant advantages over conventional ones.
First, they take into consideration additional factors that were insufficiently accounted for in earlier studies (contact, friction, slip, variable force in the bolted connections). Second, all these factors act in interaction and interconnection. Third, the created model more adequately reflects the physical essence of the processes and states realized in the studied structures. It should also be noted that application of the developed models has established new patterns of behavior of thin-walled structures with bolted connections. In particular, the character of the response of such structures to loading was established. This response combines stages of elastic deformation and mutual slipping of the strips. Residual deformation is accumulated in the composite strip at the first loading; it is determined by gap vanishing. After the first loading, the system deforms mainly in the elastic region. The constructed models and the revealed features of behavior of thin-walled structures with bolted connections can be used in structural studies of silos of various sizes, shapes and purposes. The developed models give fundamentally more accurate results than conventional ones. For example, if deformation of the bolted connection is not considered, the computed deflections of the composite strip are 2-3 times smaller than when this factor is taken into account. However, it should be noted that the constructed models do not take into consideration some factors inherent to actual silo structures. First of all, this concerns the type of load, which can be multiple-cycle and alternating. Besides, possible loss of stability when compression forces occur in the strip is not taken into consideration. It is also worth investigating the effect of corrugations on the behavior of structures of this type. These open issues set the directions for further studies.

Conclusions

1. The developed model for design analysis of thin-walled structures with bolted connections has advantages over conventional models. Unlike the simpler models, it takes into consideration geometric, physical and structural nonlinearities. Taking friction into consideration makes it possible to determine the dependence of the state of the studied system on the loading history. These properties give an opportunity to simulate the stress-strain state of thin-walled structures with bolted connections more adequately. 2. Design diagrams, coefficients of friction, gaps and loads were varied in the numerical studies of the SSS of a composite strip with bolted connections. As a result, regularities of their influence on the SSS of the studied structure were established. 3. Analysis of the obtained behavioral characteristics of the studied structures has made it possible to state the following: -regardless of the variant of embodiment, the presence or absence of the joint washer, and the number of bolted connections, the deflection of the studied composite strip under transverse loading is similar in character for the different variants; however, it differs sharply from the behavior of continuous strips. In particular, there is a combination of plain (smooth) sections and sharp increments. The first correspond to bonding of the strips caused by friction due to bolt tightening. The second accompany slippage of the strips.
After full vanishing of the gap between the bolt side surface and the bolt holes in the strips, the system becomes comparable to a continuous strip but with residual deflections; -the deflection of the studied composite strip responds in different ways to single and cyclic application of transverse load. Two stages are clearly distinguished in a single loading. The first stage combines gradual and sharp changes of deflection. Then (after vanishing of the gap) comes the stage of only smooth growth of deflection. If the system is unloaded after this point, it does not return to its original state: residual deflection is formed. Further cyclic loading and unloading (without changing the load sign) occurs along the curve corresponding to the first unloading. Thus, the system acquires residual deformations mainly in the first loading cycle. Practically nonlinear-elastic deformation of the strip system occurs in subsequent cycles; -the stressed state of the studied system of strips is characterized by the fact that the Mises stresses are concentrated in the panels in the zone of the bolted connections. The maximum stress values behave nonlinearly with the load growth. In multiple-cycle loading, a certain amount of residual stresses accumulates at the first stage, followed by their nonlinear elastic change. It should be noted that a model problem was considered: it was assumed that the material of the strips works in the elastic region despite the high level of stress; -an effect of a possible sharp increase in tensile forces in the bolt was detected during loading of the studied system of strips. This is especially evident in the absence of a joint washer. Therefore, application of a model with a fixed force in the bolted connections is inadmissible in the general case; -monitoring the behavior of longitudinal forces in the studied system has made it possible to establish that when the load increases, they sharply increase from zero to values exceeding the transverse load several times. This indicates that the deflections cause significant extension of the studied strip, and the resulting longitudinal forces, in turn, affect the deflection. It turns out that mutual influence of stretching and bending takes place. Thus, as in the case of a continuous strip, it is necessary to determine the SSS from stretching and bending together. However, unlike a continuous strip, the composite strip demonstrates additional elongation and deflection not only due to elastic deformations but also due to possible mutual slip of the strips relative to each other. This results in a more complicated connection between stretching, sliding and bending. This feature must be taken into consideration in the design models of such structures; -an increase in the number of bolted connections leads to a noticeable "strengthening" of the structure. The structure is also "strengthened" by the use of multilayered strips stacked by superimposing strips to the left and to the right alternately; -introduction of a joint plastic washer between the bolt head and the strip, between the nut (metal washer) and the strip, and between the side of the bolt and the hole in the strip "smooths" but does not eliminate the revealed features of behavior of the studied composite strip.

Fig. 18. Distribution of equivalent Mises stresses (Table 3): variant 3_1p (a); variant 3_2p (b)
4. The established features and regularities of the SSS of a composite strip with bolted connections show that consideration of contact, friction and slip, bolt tightening forces and deformation of a yielding washer-spacer dramatically changes the behavior of the test system compared with the continuous strip. Accordingly, these factors need to be taken into consideration in the design models of such systems.
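The conclusion on the mutual influence of stretching and bending can be given a quantitative flavor with the moderate-deflection estimate of the membrane force induced by a deflected shape: for immovable ends, the mean axial strain is ε = (1/l)∫(w′)²/2 dx, which equals π²w0²/(4l²) for an assumed half-sine mode w(x) = w0·sin(πx/l). The sketch below is an upper bound under these assumptions; in the actual joint, slip at the connection and gap closure relieve most of this strain, which is why the computed forces far exceed the reported reactions, yet the estimate shows why longitudinal forces dominate the transverse load.

```python
import math

# Moderate-deflection (von Karman-type) estimate of the membrane
# force induced by bending, assuming immovable ends and a half-sine
# deflected shape. Upper bound only: slip and gap closure in the
# real bolted joint relieve most of the axial strain.

E = 2.1e11            # Pa
l = 0.5               # m
A = 5.0e-2 * 4.0e-3   # m^2, cross-section of the two-strip stack

def membrane_force(w0: float) -> float:
    """N = E*A*eps with eps = pi^2 * w0^2 / (4*l^2) for w = w0*sin(pi*x/l)."""
    eps = math.pi**2 * w0**2 / (4.0 * l**2)
    return E * A * eps

for w0_mm in (5.0, 10.0, 20.0):
    print(f"w0 = {w0_mm:4.1f} mm -> N = {membrane_force(w0_mm / 1e3) / 1e3:7.1f} kN")
```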
2019-06-07T22:45:44.237Z
2019-01-14T00:00:00.000
{ "year": 2019, "sha1": "4e8ea1f85d8809d0b2bf3c6a408089b2e8446912", "oa_license": "CCBY", "oa_url": "http://journals.uran.ua/eejet/article/download/154378/157151", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "99d626010cce7a3b8f0f848f60caeb8827bc9019", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Materials Science" ] }
245828688
pes2o/s2orc
v3-fos-license
Tissue Inhibitor of Metalloproteinase-3 Ameliorates Diabetes-Induced Retinal Inflammation

Purpose: Endogenous tissue inhibitor of matrix metalloproteinase-3 (TIMP-3) has powerful regulatory effects on inflammation and angiogenesis. In this study, we investigated the role of TIMP-3 in regulating inflammation in the diabetic retina. Methods: Vitreous samples from patients with proliferative diabetic retinopathy (PDR) and non-diabetic patients were subjected to Western blot analysis. Streptozotocin-treated rats were used as a preclinical diabetic retinopathy (DR) model. Blood-retinal barrier (BRB) breakdown was assessed with fluorescein isothiocyanate (FITC)-conjugated dextran. Rat retinas, human retinal microvascular endothelial cells (HRMECs) and human retinal Müller glial cells were studied by Western blot analysis and ELISA. Adherence of human monocytes to HRMECs was assessed and in vitro angiogenesis assays were performed. Results: Tissue inhibitor of matrix metalloproteinase-3 in vitreous samples was largely glycosylated. Intravitreal injection of TIMP-3 attenuated diabetes-induced BRB breakdown. This effect was associated with downregulation of diabetes-induced upregulation of the p65 subunit of NF-κB, intercellular adhesion molecule-1 (ICAM-1), and vascular endothelial growth factor (VEGF), whereas phospho-ERK1/2 levels were not altered. In Müller cell cultures, TIMP-3 significantly attenuated VEGF upregulation induced by high glucose (HG), the hypoxia mimetic agent cobalt chloride (CoCl2) and TNF-α, and attenuated MCP-1 upregulation induced by CoCl2 and TNF-α, but not by HG. TIMP-3 attenuated HG-induced upregulation of phospho-ERK1/2, caspase-3 and the mature form of ADAM17, but not the levels of the p65 subunit of NF-κB and the proform of ADAM17 in Müller cells. TIMP-3 significantly downregulated TNF-α-induced upregulation of ICAM-1 and VCAM-1 in HRMECs. Accordingly, TIMP-3 significantly decreased spontaneous and TNF-α- and VEGF-induced adherence of monocytes to HRMECs. Finally, TIMP-3 significantly attenuated VEGF-induced migration, chemotaxis and proliferation of HRMECs. Conclusion: In vitro and in vivo data point to anti-inflammatory and anti-angiogenic effects of TIMP-3 and support further studies for its applications in the treatment of DR.

INTRODUCTION

Diabetic retinopathy (DR) is the most frequent microvascular complication of diabetes mellitus and remains the principal cause of visual impairment among the working-age population. Evidence is accumulating that chronic low-grade subclinical inflammation is fundamental in the initiation and progression of DR (Joussen et al., 2004; Forrester et al., 2020). Enhanced adhesion of circulating leukocytes to the retinal microvascular endothelium actively contributes to the development of retinal endothelial cell damage, breakdown of the blood-retinal barrier (BRB) and capillary non-perfusion (Joussen et al., 2004). The breakdown of the BRB and the concomitant increase in vascular permeability result in diabetic macular edema, which affects vision in diabetic patients (Daruich et al., 2018). In the ocular microenvironment of patients with PDR, several inflammatory and angiogenic factors are upregulated, reinforcing the paradigm that inflammation and angiogenesis are critical mechanisms initiating and supporting progression of PDR (Abu El-Asrar et al., 2019, 2021a; Rezzola et al., 2020; Wu et al., 2021).
Among these factors, vascular endothelial growth factor (VEGF), released in response to hypoxia, is a key player in promoting retinal angiogenesis and vascular leakage (Peach et al., 2018). VEGF exerts this effect by activating its transmembrane tyrosine kinase-containing receptor VEGFR-2 on vascular endothelial cells (Peach et al., 2018). Despite advances in drug discovery and development, it is still necessary to gain insight into the etiology of DR to allow the discovery of novel biomarkers and therapeutic targets. Effective inhibition of diabetes-induced retinal injury might require multiple agents acting on different pathways to attain complete disruption of disease progression. Tissue inhibitors of metalloproteinases (TIMPs) constitute a family of four members in the human species (TIMP-1, TIMP-2, TIMP-3, and TIMP-4). TIMPs are endogenous inhibitors of matrix metalloproteinases (MMPs) and play critical roles in the maintenance of extracellular matrix (ECM) homeostasis. Although originally identified as inhibitors of MMPs, TIMPs have also been shown to act as multifunctional signaling molecules with cytokine-like activities that are independent of their MMP-inhibitory function (Ries, 2014; Jackson et al., 2017; Eckfeld et al., 2019). TIMP-3 is unique in that, in addition to inhibiting MMPs, it is also an efficient inhibitor of several members of the ADAM (a disintegrin and metalloproteinase) family, including ADAM17, and of the ADAMTS (ADAM with thrombospondin motifs) family. Of particular note, ADAM17, also named tumor necrosis factor-α (TNF-α) converting enzyme, converts pro-TNF-α into a key inflammatory mediator. Hence, TIMP-3 possesses important signaling functions. TIMP-3 is also distinct from other human TIMPs in that it is sequestered in the ECM (Fan and Kassiri, 2020). TIMP-3 has emerged as a key mediator limiting inflammation and fibrosis and promoting the resolution of inflammation following injury (Kassiri et al., 2009; Gill et al., 2010, 2013; Fiorentino et al., 2013). TIMP-3 is also a mediator of macrophage polarization and function (Menghini et al., 2012; Gill et al., 2013; Das et al., 2014; Stöhr et al., 2014). In addition to its role in regulating inflammation, several studies demonstrated that TIMP-3 is a potent inhibitor of angiogenesis and suppresses VEGF-mediated angiogenesis independently of its MMP-inhibitory properties. Its angiostatic function is mediated by blocking the binding of VEGF to its receptor VEGFR-2 and by inhibiting proliferation, migration and tube formation of endothelial cells, key steps in the angiogenesis cascade (Qi et al., 2003, 2013; Chen et al., 2014). Furthermore, several studies reported that TIMP-3 is a potent inhibitor of tumor angiogenesis, growth, inflammatory cell infiltration and metastasis (Spurbeck et al., 2002; Qi et al., 2003; Chen et al., 2014; Das et al., 2014, 2016a; Adissu et al., 2015). In a previous study, we systematically investigated all four human TIMPs in the vitreous fluid from patients with PDR, and we showed that TIMP-1 and TIMP-4 were significantly upregulated in PDR. In contrast, TIMP-2 and TIMP-3 levels were not enhanced in PDR patients compared to non-diabetic control patients (Abu El-Asrar et al., 2018).
We here assessed signaling functions of TIMP-3 in vitro and in vivo within the context of DR and, on the basis of these findings, we hypothesized that enhancing the expression of TIMP-3 could serve as a potential therapeutic strategy for the amelioration of diabetes-induced retinal injury.

Vitreous Samples

Undiluted vitreous fluid samples (0.3-0.6 ml) were obtained from 12 patients with PDR during pars plana vitrectomy for the treatment of tractional retinal detachment and/or non-clearing vitreous hemorrhage. We processed these samples as described previously (Abu El-Asrar et al., 2019, 2021a) and compared samples from diabetic patients with those of a clinical control cohort. The control group consisted of 12 patients who had undergone vitrectomy for the treatment of rhegmatogenous retinal detachment with no proliferative vitreoretinopathy (PVR). Control subjects were clinically checked to be free from diabetes or other systemic disease. The study was conducted according to the tenets of the Declaration of Helsinki. All the patients were candidates for vitrectomy as a surgical procedure. All patients signed a preoperative informed written consent and approved the use of the excised vitreous fluid for further analysis and clinical research. The study design and the protocol were approved by the Research Centre and Institutional Review Board of the College of Medicine, King Saud University.

Diabetic Retinopathy Animal Model

All procedures with animals were performed in accordance with the Association for Research in Vision and Ophthalmology (ARVO) statement for use of animals in ophthalmic and vision research and were approved by the institutional Animal Care and Use Committee of the College of Pharmacy, King Saud University. Streptozotocin-induced diabetes was induced in rats as follows: adult male Sprague Dawley rats of 8-9 weeks of age (around 200-220 g body weight) were fasted overnight, and a single bolus dose of streptozotocin (STZ) (55 mg/kg) in 10 mM sodium citrate buffer, pH 4.5 (Sigma, St. Louis, MO, United States) was injected intraperitoneally. Equal volumes of citrate buffer were injected in age-matched control rats. Seventy-two hours after STZ injection, rats were checked and considered diabetic if their blood glucose levels were in excess of 250 mg/dl. Only confirmed diabetic animals were used further. Under deep anesthesia, 350 µM of recombinant human TIMP-3 in 5 µl of sterilized solution was injected into the vitreous of the right eye. The left eye received an equal volume of sterile phosphate-buffered saline (PBS) as a control. The animals were euthanized 2 weeks after TIMP-3 injection and the retinas were processed for Western blot analysis (Abu El-Asrar et al., 2021b) to assess the effect of TIMP-3 on early inflammatory marker expression. The effect of TIMP-3 administration on diabetes-induced breakdown of the BRB was evaluated at a later time point. Ten weeks after induction of diabetes, 350 µM of TIMP-3 in 5 µl sterilized solution was injected into the vitreous of the right eye. The left eye received an equal volume of sterile PBS as a control. Retinas were analyzed for BRB breakdown 2 weeks after intravitreal injection of TIMP-3 using FITC-conjugated dextran, as previously described (Abu El-Asrar et al., 2019, 2021b). BRB breakdown was calculated using the following equation, with the results being expressed in µl/g/h.
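In the FITC-dextran technique cited above, retinal vascular permeability in µl/g/h is conventionally computed by normalizing the dextran accumulated in the retina to the retinal weight, the plasma dextran concentration and the circulation time; the exact normalization written below (for instance, whether wet or dry retinal weight is used) is an assumption consistent with the stated units:

```latex
\text{BRB breakdown} =
\frac{\text{retinal FITC-dextran } (\mu\text{g}) \,/\, \text{retinal weight (g)}}
     {\text{plasma FITC-dextran concentration } (\mu\text{g}/\mu\text{l})
      \times \text{circulation time (h)}}
```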
Human Retinal Müller Glial Cell and Human Retinal Microvascular Endothelial Cell Cultures

To corroborate the findings in vitro at the level of critical cell types, we used human retinal microvascular endothelial cells (HRMECs) and human retinal Müller glial cells, two major cell types which actively participate in diabetes-induced inflammatory reactions in the retina. Human retinal Müller glial cells (MIO-M1) (a generous gift from Prof. A. Limb, Institute of Ophthalmology, University College London, United Kingdom) and HRMECs (Cell Systems Corporation, Kirkland, WA, United States) were cultured as described previously (Abu El-Asrar et al., 2019, 2021a). Müller cell cultures were either left untreated or stimulated for 24 h. The following stimuli were used: 300 µM of the hypoxia mimetic agent cobalt chloride (CoCl2) (Cat No A1425-L, Avonchem Limited, United Kingdom), 25 mM glucose (Cat No GL0125100, Scharlau S.L, Gato Prez, Spain), or 50 ng/ml recombinant human TNF-α, in the absence or presence of a 1 h pretreatment with TIMP-3 (100 ng/ml). For high-glucose (HG) treatment, 25 mM mannitol (Cat No MA01490500, Scharlau S.L, Gato Prez, Spain) was used as a control. HRMECs were treated with 50 ng/ml recombinant human VEGF or 50 ng/ml recombinant human TNF-α for 24 h, in the absence or presence of a 1 h pretreatment with human TIMP-3 (100 ng/ml). Cell supernatants were collected for ELISA analysis. Cells were lysed in radioimmunoprecipitation assay (RIPA) lysis buffer (sc-24948, Santa Cruz Biotechnology, Inc., Santa Cruz, CA, United States) for Western blot analysis.

Western Blot Analysis

Rat retina, cell lysates and vitreous samples were analyzed. Incubation with primary and secondary antibodies was carried out as described previously (Abu El-Asrar et al., 2019, 2021a,b). To verify similar sample loading, membranes were stripped and reprobed with β-actin-specific antibody (1:3,000, sc-47778, Santa Cruz Biotechnology Inc.). Bands were visualized with the use of high-performance chemiluminescence (G:Box Chemi-XX8 from Syngene, Synoptic Ltd., Cambridge, United Kingdom) and the band intensities were quantified with the use of GeneTools software (Syngene by Synoptic Ltd.).

FIGURE 1 | Expression of TIMP-3 in vitreous fluid samples. Equal volumes (15 µl) of vitreous fluid samples from patients with proliferative diabetic retinopathy (PDR) (n = 12) and non-diabetic patients with rhegmatogenous retinal detachment (RD) (n = 12) were subjected to gel electrophoresis and the presence of TIMP-3 was detected by Western blot analysis. A representative set of samples is shown.

Enzyme-Linked Immunosorbent Assays

Enzyme-linked immunosorbent assay (ELISA) kits for human monocyte chemotactic protein (MCP-1)/CCL2 (Cat No DCP00) and human VEGF (Cat No DY293B) were purchased from R&D Systems. Levels of human MCP-1/CCL2 and VEGF in culture medium were determined with the aforementioned ELISA kits according to the manufacturer's instructions. The minimum detection limits for the MCP-1/CCL2 and VEGF ELISA kits were 10 and 31.2 pg/ml, respectively.

Monocyte-Endothelial Cell Adhesion Assay

Monocyte-endothelial cell adhesion was assessed using the CytoSelect Leukocyte-endothelium adhesion kit (Cat. No. CBA-210, Cell Biolabs, Inc., San Diego, CA, United States) following the assay protocol provided by the supplier. Briefly, 2 × 10^5 HRMECs were seeded on 0.2% (v/v) gelatin-coated 24-well plates (Abu El-Asrar et al., 2021a).
After reaching a confluent monolayer and overnight starvation, cells were stimulated with 25 ng/ml recombinant human TNF-α or 50 ng/ml recombinant human VEGF for 24 h, with or without a 1-h pretreatment with 100 ng/ml TIMP-3. To investigate the capacity of TIMP-3 to inhibit the basal binding of THP-1 monocytes (American Type Culture Collection, Manassas, VA, United States) to HRMECs, overnight starved THP-1 monocytes were treated with or without 100 ng/ml recombinant human TIMP-3 for 24 h. Next, 5 × 10^5 fluorescent LeukoTracker-labeled monocytic THP-1 cells were added to the HRMEC monolayer for 60 min. After washing, the remaining adherent THP-1 cells were lysed and fluorescence was measured using a SpectraMax Gemini-XPS (Molecular Devices, CA, United States) with excitation and emission wavelengths of 485 and 538 nm, respectively.

In vitro Angiogenesis Assays

HRMECs were seeded at 1 × 10^5 cells/well on 6-well culture plates and allowed to grow until 80-90% confluency. Quiescence was induced by incubating the cells overnight in minimal medium. Using sterile pipette tips, scratches were made in the monolayers, and detached cells were removed with PBS. Next, part of the wells were incubated with 100 ng/ml TIMP-3 for 1 h and subsequently stimulated with 50 ng/ml recombinant VEGF for 24 h. To the corresponding control wells, only minimal medium was added. Cell migration was monitored using an inverted microscope (Olympus IX81, Olympus Corporation, Tokyo, Japan). Analysis of migration was done using ImageJ software. Chemotaxis of HRMECs was evaluated using an xCELLigence apparatus [Real Time Cell Analyzer-Double Plate (RTCA-DP) system; ACEA Biosciences, Inc., San Diego, CA, United States]. First, the lower chamber of a Cell Invasion/Migration (CIM)-Plate (ACEA Biosciences, Inc.) was loaded with 10 ng/ml VEGF or dilution medium [MCDB131 medium (Gibco, Thermo Fisher Scientific, Merelbeke, Belgium) supplemented with 0.4% (v/v) fetal calf serum (FCS)]. Subsequently, the upper part of the chamber was mounted on top of the bottom plate, and 50 µl of serum-free MCDB131 medium (pure or containing 10 or 100 ng/ml of TIMP-3) was added to the top wells. After equilibration for 1 h at 37 °C, 4 × 10^4 HRMECs that had undergone a 30-min pre-incubation with serum-free MCDB131 medium or TIMP-3 (10 or 100 ng/ml) were added to the top wells (100 µl/well). Migration was monitored in the RTCA-DP system after an additional incubation (30 min, room temperature) allowing the cells to settle onto the membrane. The rate of chemotaxis, recorded as changes in electrical impedance, was monitored every minute for 15 h. In total, five experiments were performed and conditions were tested in duplicate or triplicate within one experiment. To assess the influence of TIMP-3 on the proliferative effect of VEGF, HRMECs were seeded in a 96-well plate (5 × 10^3 cells in 100 µl/well) in Endothelial Cell Basal Medium-2 (EBM-2) supplemented with the SingleQuots kit (both Lonza, Verviers, Belgium). The next day, cells were washed with serum-free MCDB131 medium and starved in serum-free MCDB131 medium supplemented with 2 mM GlutaMAX and 30 µg/ml Gentamicin (Gibco) for 4 h at 37 °C, 5% CO2. After starvation, cells were preincubated with 0, 10, or 100 ng/ml TIMP-3 in MCDB131 medium supplemented with 2 mM GlutaMAX, 30 µg/ml Gentamicin and 1% FCS (proliferation medium) for 30 min at 37 °C, 5% CO2. Finally, cells were stimulated with 10 ng/ml VEGF in proliferation medium or with proliferation medium only.
After 72 h, cell proliferation was measured using the ATPlite Luminescence Assay kit (Perkin Elmer, Waltham, MA, United States) according to the manufacturer's instructions.

FIGURE 2 | … Results are expressed as mean ± standard deviation of 12 rats in each group. One-way ANOVA and independent t-test were used for comparisons between the three and two groups, respectively, panels (A-E). *p < 0.05 compared with non-diabetic controls. #p < 0.05 compared with PBS-treated diabetic rats.

FIGURE 3 | … for 24 h or TIMP-3 (100 ng/ml) for 1 h followed by HG, CoCl2, or TNF-α. For HG treatment, cultures containing 25 mM mannitol were used as a control. Levels of vascular endothelial growth factor (VEGF) and monocyte chemotactic protein-1 (MCP-1) were quantified in the culture media by ELISA. Results are expressed as median (interquartile range) from three different experiments performed in triplicate. Kruskal-Wallis test and Mann-Whitney test were used for comparison between three groups and two groups, respectively. *p < 0.05 compared with values obtained from control cells. #p < 0.05 compared with values obtained from cells treated with HG, CoCl2, or TNF-α.

Statistical Analysis

Data were collected, stored and managed in a spreadsheet using Microsoft Excel 2010® software. Data were analyzed and figures prepared using SPSS® version 21.0 (IBM Inc., Chicago, IL, United States). Tests for normality were done using the Shapiro-Wilk test and Q-Q plots. Normally distributed data are presented using bar charts showing the standard deviations, while non-normally distributed data are presented using box-and-whisker plots showing the medians, upper and lower quartiles and range. Consequently, one-way ANOVA and independent t-test or Kruskal-Wallis and Mann-Whitney tests (applying Bonferroni correction where necessary) were used to test the differences between the groups for normally and non-normally distributed data, respectively. Any output with a p below 0.05 was interpreted as an indicator of statistical significance.

The Glycosylated Form of Tissue Inhibitor of Matrix Metalloproteinase-3 Is Upregulated in Vitreous Samples From Patients With Proliferative Diabetic Retinopathy

In a previous study, with the use of ELISA, we demonstrated that mean TIMP-3 levels did not differ significantly between PDR patients and non-diabetic control patients (Abu El-Asrar et al., 2018). In the present study, we added Western blot analysis to provide insights into the relative abundance of the various proteoforms and fragments of TIMP-3. With the use of Western blot analysis of equal volumes of vitreous fluid, we confirmed the presence of TIMP-3 in vitreous samples. TIMP-3 immunoreactivities appeared as two protein bands at approximately 24 and 30 kDa. These correspond by their molecular weights to the previously reported unglycosylated and glycosylated forms of TIMP-3, respectively (Langton et al., 1998; Spurbeck et al., 2002). Most of the TIMP-3 immunoreactivity appeared at the level of the 30 kDa form, indicating that TIMP-3 in vitreous samples is largely glycosylated (Figure 1).
Intravitreal Administration of Tissue Inhibitor of Matrix Metalloproteinase-3 Attenuates Diabetes-Induced Breakdown of the Blood-Retinal Barrier and Retinal Expression of the p65 Subunit of NF-κB, Intercellular Adhesion Molecule-1, and Vascular Endothelial Growth Factor

With the observed association of increased 30 kDa TIMP-3 and PDR, it was of importance to evaluate whether such increased levels of TIMP-3 are detrimental, beneficial or have no effect in vivo. FITC-conjugated dextran was used to investigate the extent of vascular permeability. In STZ-induced diabetic rats, retinal vascular permeability was significantly increased at 12 weeks after the induction of diabetes when compared with non-diabetic rats. Intravitreal treatment with recombinant human TIMP-3 significantly attenuated the diabetes-induced BRB breakdown compared to PBS-treated diabetic rats (Figure 2A). Western blot analysis of homogenized retinal tissue revealed that diabetes significantly increased the protein levels of phospho-ERK1/2 (Figure 2B), the p65 subunit of NF-κB (Figure 2C), ICAM-1 (Figure 2D), and VEGF (Figure 2E) at 2 weeks after the induction of diabetes when compared with the retinas of non-diabetic control rats. Treatment with intravitreal TIMP-3 significantly reduced the expression of the p65 subunit of NF-κB (Figure 2C), ICAM-1 (Figure 2D), and VEGF (Figure 2E) proteins in STZ-induced diabetic rats when compared with the values obtained from the PBS-treated contralateral eye. However, TIMP-3 did not affect the expression of phospho-ERK1/2 (Figure 2B).

Tissue Inhibitor of Matrix Metalloproteinase-3 Attenuates the Expression of Angiogenic and Inflammatory Molecules Induced by Diabetic Mimetic Conditions in Human Retinal Müller Glial Cells

To better understand the observed alterations in vivo, we investigated the molecular effects of TIMP-3 on leukocytes, Müller cells and HRMECs with the use of various assays applying conditions relevant in the context of DR. With the use of ELISA analysis, we demonstrated that treatment of Müller cells with the diabetic mimetic conditions HG (Figure 3A), the hypoxia mimetic agent CoCl2 (Figure 3B) and the proinflammatory cytokine TNF-α (Figure 3C) induced significant upregulation of the proangiogenic factor VEGF and the inflammatory chemokine MCP-1/CCL2 in the culture medium as compared to untreated controls. Pre-incubation of Müller cells with TIMP-3 significantly attenuated the levels of VEGF induced by HG, CoCl2, and TNF-α. TIMP-3 significantly attenuated upregulation of MCP-1/CCL2 induced by CoCl2 and TNF-α, but not by HG.

Tissue Inhibitor of Matrix Metalloproteinase-3 Counteracts High-Glucose-Induced Upregulation of Phospho-ERK1/2, the Apoptosis Executioner Enzyme Caspase-3 and ADAM17 in Human Retinal Müller Glial Cells

With the use of Western blot analysis, we demonstrated that treatment of Müller cells with HG induced significant upregulation of the protein levels of phospho-ERK1/2, caspase-3 and the mature form of ADAM17, and that TIMP-3 attenuated these increases without affecting the levels of the p65 subunit of NF-κB or the proform of ADAM17.

FIGURE 5 | … Results are expressed as median (interquartile range) from two independent experiments (each treatment condition: 6 wells) (*p < 0.05; Mann-Whitney test). Alternatively, HRMECs were pre-incubated with TIMP-3 (100 ng/ml) or dilution medium for 1 h before stimulation with dilution medium, tumor necrosis factor-α (TNF-α) (25 ng/ml) [panel (B)] or vascular endothelial growth factor (VEGF) (50 ng/ml) [panel (C)] for 24 h. Adhesion of fluorescently labeled monocytic cells to the HRMEC monolayer was assessed.
Results are expressed as median (interquartile range) from three independent experiments (each treatment condition: 6 wells). Kruskal-Wallis test and Mann-Whitney test were used for comparisons between three groups and two groups, respectively (RFU = relative fluorescence unit). HRMECs were left untreated or were stimulated with TNF-α (50 ng/ml) for 24 h with/without a 1-h pre-incubation with TIMP-3 (100 ng/ml). Protein expression of vascular cell adhesion molecule-1 (VCAM-1) [panel (D)] and intercellular adhesion molecule-1 (ICAM-1) [panel (E)] was determined by Western blot analysis. Results are expressed as mean ± standard deviation from three independent experiments (each treatment condition: 8 wells). One-way ANOVA and independent t-test were used for comparisons between three groups and two groups, respectively. *p < 0.05 compared with values obtained from untreated cells. #p < 0.05 compared with values obtained from cells treated with TNF-α or VEGF.

Tissue Inhibitor of Matrix Metalloproteinase-3 Reduces THP-1 Cell Adhesion to Human Retinal Microvascular Endothelial Cells

Increased expression of retinal ICAM-1 and enhanced adhesion of circulating leukocytes to the retinal vascular endothelium are hallmark features of DR (Joussen et al., 2004). We found that treatment of THP-1 cells with TIMP-3 significantly decreased the adherence of monocytes to HRMECs (Figure 5A). In addition, TIMP-3 pretreatment of HRMECs significantly decreased the upregulation of the adherence of monocytes to HRMECs induced by TNF-α (Figure 5B) and VEGF (Figure 5C). Furthermore, TIMP-3 significantly reduced TNF-α-induced upregulation of the adhesion molecules VCAM-1 (Figure 5D) and ICAM-1 (Figure 5E) in HRMECs. These findings suggest that TIMP-3 may protect against inflammatory stimulation in HRMECs during the progression of DR.

Tissue Inhibitor of Matrix Metalloproteinase-3 Inhibits Vascular Endothelial Growth Factor-Induced Migration, Chemotaxis and Proliferation of Human Retinal Microvascular Endothelial Cells

Migration, chemotaxis and proliferation of endothelial cells are critical components of angiogenesis. We tested TIMP-3 for its ability to block migration of HRMECs. TIMP-3 pretreatment significantly attenuated VEGF-induced migration of HRMECs in the scratch wound migration assay (Figure 6A). Similarly, when HRMECs were pretreated with 100 ng/ml TIMP-3, the chemotactic effect of VEGF was inhibited by about 40% (Figure 6B). In contrast, TIMP-3 at 10 ng/ml only marginally inhibited the VEGF-induced migration, and the 14% reduction was not statistically significant (Figure 6B). Finally, preincubation of HRMECs with 100 ng/ml TIMP-3 could partially (38%) but significantly inhibit the VEGF-induced proliferation of the endothelial cells (Figure 6C).

DISCUSSION

In the present study, we demonstrated that local treatment with intravitreal TIMP-3 attenuated the increase in retinal vascular leakage and BRB breakdown in STZ-induced diabetic rats. These findings are in line with previous studies documenting that TIMP-3 preserves blood-brain barrier function in a model of traumatic brain injury (Menge et al., 2012) and attenuates the increase in pulmonary microvascular endothelial cell permeability under septic conditions (Arpino et al., 2016). More recently, Dave et al. (2018) used the same recombinant TIMP-3 preparation as used here to demonstrate in vivo that TIMP-3 stabilizes the developing blood-brain barrier and attenuates germinal matrix brain hemorrhage in mice.
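For reference, the inhibition percentages quoted in the results above (about 40% for chemotaxis, 38% for proliferation) follow the usual normalization of a stimulated readout against its unstimulated baseline. A minimal sketch of that calculation; the function name and the readout values below are hypothetical, not measured data:

```python
# Percent inhibition of a stimulated response, as applied to the
# adhesion (RFU) and chemotaxis (impedance) readouts.
# All numeric values here are illustrative only.

def percent_inhibition(baseline: float, stimulated: float,
                       stimulated_plus_timp3: float) -> float:
    """100% = full return to baseline; 0% = no effect of TIMP-3."""
    span = stimulated - baseline
    if span == 0:
        raise ValueError("stimulus produced no response above baseline")
    return 100.0 * (stimulated - stimulated_plus_timp3) / span

# Hypothetical fluorescence readings (RFU) from an adhesion assay:
print(percent_inhibition(baseline=120.0, stimulated=480.0,
                         stimulated_plus_timp3=336.0))  # -> 40.0
```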
The mechanisms by which TIMP-3 attenuated diabetes-induced BRB disruption might be manifold. In this study, we confirmed that diabetes induced a clear upregulation of the retinal expression of VEGF, a key inducer of diabetes-induced breakdown of the BRB (Peach et al., 2018), and we demonstrated that intravitreal TIMP-3 administration normalized retinal VEGF expression. With the use of in vitro studies with two critical cell types, Müller cells and retinal endothelial cells, we tried to obtain mechanistic insight into the action of TIMP-3. Müller cells are known to be the major source of VEGF secretion in the retina (Bringmann et al., 2006). We demonstrated that TIMP-3 attenuates upregulation of VEGF in human retinal Müller glial cells induced by the hypoxia mimetic agent CoCl2, HG or the proinflammatory cytokine TNF-α. We also demonstrated the capability of HG to target Müller cells and to induce activation of the ERK1/2 signaling pathway, and that TIMP-3 significantly attenuated the HG-induced upregulation of phospho-ERK1/2. In line with our data, a previous study demonstrated that TIMP-3 inhibited VEGF-stimulated phosphorylation of ERK1/2 in endothelial cells (Qi et al., 2013). In addition, TIMP-3 deficiency increased the levels of phospho-ERK1/2 in the kidney of diabetic mice (Fiorentino et al., 2013). TIMP-3 administration could also attenuate diabetes-induced BRB breakdown through its anti-inflammatory activity. Intravitreal treatment with TIMP-3 attenuated diabetes-induced upregulation of the pro-inflammatory transcription factor NF-κB and the adhesion molecule ICAM-1. Our data also suggest that TIMP-3 has anti-inflammatory effects through the attenuation of VEGF- or TNF-α-stimulated binding of human monocytes to HRMECs. Increased expression of retinal ICAM-1 and enhanced adhesion of circulating leukocytes to the retinal microvascular endothelium are crucial in the development of diabetes-induced retinal endothelial cell damage and breakdown of the BRB (Joussen et al., 2004). We also demonstrated that TIMP-3 significantly attenuated TNF-α-induced upregulation of the adhesion molecules ICAM-1 and VCAM-1 in HRMECs. ICAM-1 and VCAM-1 play an important role in promoting leukocyte adhesion to the vascular endothelium.

FIGURE 6 | Tissue inhibitor of matrix metalloproteinase-3 (TIMP-3) inhibits vascular endothelial growth factor (VEGF)-mediated human retinal microvascular endothelial cell (HRMEC) migration, chemotaxis and proliferation. A scratch was made in confluent monolayers of overnight starved HRMECs with a micropipette tip. Subsequently, the cultures were pre-incubated with TIMP-3 (100 ng/ml) or dilution medium for 1 h before stimulation with dilution medium or VEGF (50 ng/ml) for 24 h. Cells were visualized using an inverted microscope. Three independent experiments were performed. Each experiment was done in triplicate and 6-8 independent field images were taken for the migration analysis, which was done using ImageJ software. One representative image is shown [panel (A)]. Results are expressed as median (interquartile range). Kruskal-Wallis test and Mann-Whitney test were used for comparisons between three groups and two groups, respectively. *p < 0.05 compared with untreated cells. #p < 0.05 compared with VEGF-treated cells. Chemotaxis and proliferation of HRMECs stimulated with 10 ng/ml VEGF was modulated by a 30-min pre-incubation with TIMP-3 (10 or 100 ng/ml). Cell migration was monitored using the xCELLigence RTCA-DP system.
The median (interquartile range) percentage of inhibition of VEGF-induced chemotaxis [panel (B)] or proliferation [panel (C)] is shown. In total, five chemotaxis experiments were performed and conditions were tested in duplicate or triplicate within one experiment. For proliferation, six experiments were performed and conditions were tested at least in triplicate within one experiment. *p < 0.05; Mann-Whitney test (compared with VEGF).

These findings are in agreement with previous studies that demonstrated that TIMP-3 is a powerful regulator of inflammation. In mouse models of acute lung injury, TIMP-3 deletion resulted in a markedly elevated and persistent inflammatory response due to a pronounced increase in the number of infiltrated neutrophils and macrophages (Gill et al., 2010, 2013). In a mouse model of unilateral ureteral obstruction, mice lacking TIMP-3 exhibited increased renal injury, increased activation of fibroblasts and greater interstitial fibrosis (Kassiri et al., 2009). Deficiency of TIMP-3 leads to increased macrophage infiltration in the kidney and exacerbates renal damage in response to the chronic hyperglycemic stress caused by diabetes (Fiorentino et al., 2013). TIMP-3-deficient tumors showed markedly increased inflammatory cell infiltration along with increased expression of MCP-1, TNF-α and interleukin-1β (Adissu et al., 2015). In relation to inflammation, TIMP-3 is also a mediator of macrophage polarization and function. In the absence of TIMP-3, macrophage differentiation was altered, resulting in macrophages that were skewed toward a more proinflammatory polarization (Gill et al., 2013). In a mouse model of atherosclerosis, lack of TIMP-3 increases inflammation and polarizes macrophages toward a more inflammatory phenotype, resulting in increased atherosclerosis (Stöhr et al., 2014). Complementing these data, overexpression of TIMP-3 in macrophages leads to smaller, more stable atherosclerotic plaques that contain fewer inflammatory cells. Similarly, in a mouse model, overexpression of TIMP-3 in macrophages protects from metabolic inflammation and related metabolic disorders such as insulin resistance, glucose intolerance and non-alcoholic steatohepatitis. Angiogenesis, the process by which new capillaries are formed by sprouting from existing vessels, is a fundamental requirement for PDR initiation and progression. VEGF plays a pivotal role in promoting retinal vascular leakage and angiogenesis in DR (Peach et al., 2018). Designing effective therapeutic strategies against PDR-associated angiogenesis requires further understanding of the dynamic balance between proangiogenic and antiangiogenic factors in the ocular microenvironment of patients with PDR. Restoration of this balance between the angiogenic stimulators and inhibitors by activating endogenous angiogenesis inhibitors can become a potential strategy for PDR therapy. Interestingly, in the present study, we demonstrated that the predominant proteoform of TIMP-3 in vitreous samples corresponds to that of glycosylated TIMP-3. Qi et al. (2009) reported that glycosylation led to a reduction in the MMP inhibitory activity of a TIMP-3 mutant, with a consequent increase of VEGF-dependent endothelial cell migration and tube formation. In the present study, we demonstrated that TIMP-3 attenuated VEGF-induced HRMEC migration, chemotaxis and proliferation, crucial steps in the angiogenesis cascade.
Similarly, several studies demonstrated that TIMP-3 is a potent inhibitor of tumor-associated angiogenesis (Spurbeck et al., 2002; Qi et al., 2003; Chen et al., 2014; Das et al., 2014, 2016a; Adissu et al., 2015). In addition, intravitreal injection of TIMP-3 also inhibits oxygen-induced retinal neovascularization (Hewing et al., 2013) and laser-induced choroidal neovascularization (Qi et al., 2013). Moreover, TIMP-3 protects against hemorrhage in the developing brain (Dave et al., 2018). In conclusion, our results demonstrate an important role for TIMP-3 in the pathogenesis of diabetes-induced retinal inflammation. In our study, we used both in vitro and in vivo models to investigate the anti-inflammatory and anti-angiogenic effects of TIMP-3 in the diabetic retina. Our findings suggest that pharmacological enhancement of local endogenous TIMP-3 levels or local administration of exogenous TIMP-3 proteoforms would be a potential therapeutic strategy which could exert biological effects in several ways. However, more investigations are needed to explore the use of slow-release formulations. Despite the progress provided by our study, we have not verified the mechanisms by which TIMP-3 interacts with VEGF and exerts its biological effects. Understanding the mechanisms through which TIMP-3 interferes with VEGF could pave the way for the rational design of drugs that disrupt the progression of diabetes-induced retinal injury.

DATA AVAILABILITY STATEMENT

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

ETHICS STATEMENT

The studies involving human participants were reviewed and approved by the Research Centre and Institutional Review Board of the College of Medicine, King Saud University. The patients/participants provided their written informed consent to participate in this study. All procedures with animals were performed in accordance with the Association for Research in Vision and Ophthalmology (ARVO) statement for use of animals in ophthalmic and vision research and were approved by the institutional Animal Care and Use Committee of the College of Pharmacy, King Saud University.

AUTHOR CONTRIBUTIONS

AMA designed the manuscript, supplied funding, interpreted the data, and wrote the manuscript. AA, MN, MS, AD, and LV performed experiments and interpreted the data. PG analyzed the data. GO provided funding, designed experiments, interpreted data, and edited the manuscript. SS provided funding, designed and supervised experiments, interpreted data, and edited the manuscript. All authors read and approved the final manuscript.

FUNDING

This work was supported by the King Saud University through the Vice Deanship of Research Chair, Nasser Al-Rashid Research Chair in Ophthalmology (AMA). Research in the Rega Institute at KU Leuven was supported by C1 funding (C16/17/010 KU Leuven) and the Research Foundation of Flanders (FWO-Vlaanderen G0A3820N, G0A5716N, G0D2517N, and G0A7516N). AD received a Ph.D. fellowship from FWO-Vlaanderen.
2022-01-10T14:22:35.715Z
2022-01-10T00:00:00.000
{ "year": 2021, "sha1": "1cfb9b646b2b1500c02944fa1570afb69005ff4f", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Frontier", "pdf_hash": "1cfb9b646b2b1500c02944fa1570afb69005ff4f", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
115876412
pes2o/s2orc
v3-fos-license
Research on Membrane Structure Performance Membrane materials and open-close roof buildings are new building materials and structural forms that have developed rapidly in recent years. This new form of close combination between architecture and nature has enriched the connotation of architecture and the image of the city, but has also brought great challenges to architectural design and construction. Through research and analysis of the performance of membrane materials, this study provides a reference for the design and construction of domestic membrane materials for open-close roof structures. Introduction In the 1950s and 1960s, along with the revolution in material technology, people's requirements for building functions gradually rose, and high-strength flexible films (hereinafter referred to as "membranes"), known as the "fifth generation of building materials" [1], came into view; membrane structures began to be used in temporary buildings. In the early 1970s, "permanent membranes" using glass fiber fabric as the substrate were developed mainly by DuPont in the United States together with several other companies and design units, so that the membrane structure came to be used officially in permanent buildings. Over time, with the thorough study of membrane materials in some western countries and Japan, people have gained a deep understanding of their mechanical properties (tensile strength and tear resistance), making the large-span spatial curved membrane structure synonymous with "lightweight, beautiful, modern" architecture; it has become one of the mainstream forms of open-close roof structures. Designers can design and build open-close roof membrane structures according to their wishes, not only achieving coordination and unity between architectural function and urban cultural heritage, but also expressing the deep-seated view of the "unity of Heaven and Man" between architecture and nature; this is not only a foundation of architectural art schools, but also a material embodiment of a harmonious society. Types and properties of membrane materials Membrane materials can be divided into two categories, fabric films and foil films, according to their structural composition. A fabric film (Fig. 1) is a composite of a fabric base material (hereinafter referred to as the "substrate") and coating materials (hereinafter referred to as the "coating") which has high strength and flexibility. Common substrates are polyester fiber, fiberglass, polypropylene fiber, and polyamide fiber; common coatings [2] include PVC (polyvinyl chloride) and PTFE (polytetrafluoroethylene, commonly known as "Teflon"). The mechanical properties of a fabric film, such as strength, are mainly provided by the substrate, while the coating provides fire resistance, anti-aging and other physical and chemical properties. Foil films are made of fluoroplastics; in recent years, the most commonly used material is the ETFE membrane. The study of material performance below focuses on PVC-coated polyester fiber film, PTFE-coated fiberglass membrane and ETFE, which are commonly used in current open-close roof buildings. Main properties of PVC The main properties of four kinds of PVC membranes with different surface treatments are compared in Table 1. Performance of PVC 1) At room temperature, the stress-strain behaviour of a PVC membrane under uniaxial tensile load (Fig.
2) is mainly divided into a first linear elastic stage, a yield stage and a second linear elastic stage. The first linear elastic stage is borne by the substrate and the coating together, and the slope of the stress-strain curve depends on the yield ratio of the material. When the stress of the membrane material in the yield stage reaches the yield strength, the upward trend of the stress-strain curve becomes nonlinear, that is, an enhanced-deformation stage forms; the coating begins to break and peel off, and the load-bearing capacity gradually decreases. The second linear elastic stage begins when the coating has broken down to complete failure and the upward trend of the stress-strain curve becomes linear again, that is, an enhanced-stress stage forms; the tension is borne entirely by the substrate until the substrate begins brittle fracture failure, and the material can no longer meet engineering requirements. 2) A comparison of the mechanical properties of commonly used PVC film materials is given in Table 2. Main properties of PTFE 1) High and low temperature resistance: the normal operating temperature range is generally −180 to 260 °C. 2) Chemical corrosion resistance: it withstands not only aqua regia, concentrated hydrochloric acid, fuming sulfuric acid and other strongly acidic substances as well as strongly alkaline substances, but also strong oxidizing agents, reducing agents and other organic solvents. 3) Excellent non-stickiness and smoothness: the friction coefficient of the surface is very small and the surface tension is only 0.019 N/m, so most common materials cannot adhere to the surface. In service, the membrane maintains a high level of cleanliness of its own structure, uncontaminated by dust and other pollutants. 4) Excellent weatherability: its optical performance is stable and unaffected by ultraviolet light, ozone and other photochemically active substances; in relatively humid air and surroundings it is unaffected by microorganisms, can be exposed to the atmosphere for a long time, and its physical properties will not change. It has a long service life among membrane materials. 5) Non-flammability: the limiting oxygen index is relatively high, so it is classed as a non-combustible material; the melting point is 275 °C. 6) Excellent dielectric properties and electrical insulation: its breakdown voltage lies in the range of 25-40 kV. Its dielectric properties do not change over a wide temperature range and its sensitivity to temperature is low, so it is often used in high-temperature-resistant insulating materials. Performance of PTFE 1) The elongation of a PTFE membrane under uniaxial tensile load (Fig. 3) is divided into three stages. The first stage is the linear elastic deformation stage: the yield strength is 20 MPa, the tensile stress of the material in this stage is between 0 and 20% of the fracture strength, the elastic modulus is 900 MPa, and the strain is about 1.7%. The second stage is the nonlinear deformation stage: the yield strength is 25 MPa, the tensile stress in this stage is between 20% and 60% of the fracture strength, and the strain is about 10.6%. The third stage is the linear enhanced-stress stage, in which the tensile stress of the membrane exceeds 60% of the fracture strength; the coating completely loses the ability to withstand the tensile force, the tension is borne directly by the substrate, and this stage cannot meet engineering requirements.
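As a reading aid, the three-stage PTFE behaviour described above can be idealised as a piecewise-linear stress-strain curve. This is a minimal sketch interpolating between the stress/strain anchor points quoted in the text, not a constitutive model from the paper; note that the quoted first-stage modulus of 900 MPa and the 20 MPa / 1.7% pair are not exactly consistent, so the anchor points are used, and the slope of the final stage is an assumed placeholder.

def ptfe_stress(strain_pct: float) -> float:
    """Idealised uniaxial stress (MPa) of a PTFE membrane at a given strain (%).

    Anchor points from the text: (1.7%, 20 MPa) ends the first elastic stage,
    (10.6%, 25 MPa) ends the nonlinear stage.
    """
    if strain_pct <= 1.7:                  # first linear elastic stage
        return 20.0 * strain_pct / 1.7
    if strain_pct <= 10.6:                 # nonlinear (yield) stage, linearised
        return 20.0 + (25.0 - 20.0) * (strain_pct - 1.7) / (10.6 - 1.7)
    k = 0.5                                # MPa per % strain in the final stage; assumption
    return 25.0 + k * (strain_pct - 10.6)  # linear enhanced-stress stage (substrate only)

# Example: stress at 5% strain, inside the nonlinear stage
print(f"{ptfe_stress(5.0):.1f} MPa")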
2) A comparison of the mechanical properties of commonly used PTFE membrane materials is given in Table 3. Main properties of ETFE 2) Breaking elongation: the elongation at break can exceed 300%. 3) Stress-strain relationship [4,5]: at room temperature, the behaviour of ETFE film is divided into a fully elastic stage, a yield stage and a plastic strengthening stage (Fig. 4). The film is in the fully elastic stage below a tensile stress of 20 MPa, where the tensile modulus is about 800-1000 MPa; a yield point appears as the tensile stress approaches 25 MPa. The film is in the yield stage between 20 MPa and 25 MPa, and the tensile stress enters the plastic strengthening stage beyond 25 MPa, until the film breaks. Unlike the tensile curves of other membrane materials (Fig. 5), the stress-strain curve of the ETFE membrane has a first yield point and a second yield point, whose corresponding strengths are the first yield strength and the second yield strength, respectively. For the first type of load, long-term loads such as prestress and snow load, the first yield strength is adopted; for the second type, short-term loads, mainly wind load, the second yield strength is adopted. Other properties of ETFE The ETFE membrane is the most suitable substitute material among daylighting roof membrane materials; its properties are compared with those of other membrane materials in Table 4. 1) Thickness and density: the thickness is generally 0.05-0.25 mm; with increasing thickness, the film becomes more brittle and hard and more difficult to process. The density is about 1.75 g/cm³. 2) Colour and transmittance: usually colourless and transparent, or white. In practical engineering applications, according to the desired architectural effect, the ETFE membrane can be mixed with additives for dyeing or printed with patterns, changing the transmittance of the membrane and blocking ultraviolet and other light. 3) Self-cleaning performance: it has highly anti-fouling, easy-to-clean characteristics; the surface can generally be descaled by natural rainwater scouring, and manual surface cleaning is generally needed only once every four years. 4) Fire resistance: it melts at about 200 °C, its fire rating reaches B1 according to the DIN 4102 standard, its ignition point is comparatively high, it does not generally undergo spontaneous combustion, and it does not drip when it burns. 5) Durability and weather resistance: it has excellent anti-aging performance and stability, and can adapt well to a variety of environmental and climatic conditions. It has a service life of more than 25 years, an operating temperature range of about −200 to 150 °C, and a melting point of about 275 °C. In engineering practice, after more than 15 years of exposure under harsh climatic conditions, the main performance of ETFE membranes has not changed. In the event of hail, even if a glass roof would be smashed, only a few tiny dents are produced on an ETFE membrane roof. Conclusion Although membrane structure building forms are developing rapidly in China, the application prospects of open-close roof structures are considerable. At present, film produced in China is basically stronger than film produced abroad, but its other properties (fire resistance, self-cleaning performance, service life) still lag far behind. Therefore, membrane material research in China urgently needs to learn from advanced foreign technology, continuously improve the performance of membrane materials, raise the overall level of membrane structure research in China, and better combine membranes with domestic open-close roof structure design.
2019-04-16T13:29:23.886Z
2019-01-01T00:00:00.000
{ "year": 2019, "sha1": "cc508b2f6c4a18c3ff68c147d0ef788ca0032da3", "oa_license": "CCBY", "oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2019/05/e3sconf_arfee2018_01014.pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "44aa210776f0692760b68a2fb0d420bbf6ece742", "s2fieldsofstudy": [ "Materials Science", "Engineering", "Environmental Science" ], "extfieldsofstudy": [ "Engineering" ] }
51139361
pes2o/s2orc
v3-fos-license
Keck Spectroscopy of Gravitationally Lensed z=4 Galaxies: Improved Constraints on the Escape Fraction of Ionizing Photons The fraction of ionizing photons that escape from young star-forming galaxies is one of the largest uncertainties in determining the role of galaxies in cosmic reionization. Yet traditional techniques for measuring this fraction are inapplicable at the redshifts of interest due to foreground screening by the Lyman alpha forest. In an earlier study, we demonstrated a reduction in the equivalent width of low-ionization absorption lines in composite spectra of Lyman break galaxies at z=4 compared to similar measures at z=3. This might imply a lower covering fraction of neutral gas and hence an increase with redshift in the escape fraction of ionizing photons. However, our spectral resolution was inadequate to differentiate between several alternative explanations, including changes with redshift in the outflow kinematics. Here we present higher quality spectra of 3 gravitationally lensed Lyman break galaxies at z=4 with a spectral resolution sufficient to break this degeneracy of interpretation. We present a method for deriving the covering fraction of low-ionization gas as a function of outflow velocity and compare the results with similar quality data taken for galaxies at lower redshift. We find a significant trend of lower covering fractions of low-ionization gas for galaxies with strong Lyα emission. In combination with the demographic trends of Lyα emission with redshift from our earlier work, our results provide new evidence for a reduction in the average H I covering fraction, and hence an increase in the escape fraction of ionizing radiation from Lyman break galaxies, with redshift. INTRODUCTION Star forming galaxies are the leading candidate for the source of ultraviolet photons required to reionize the universe. Several lines of evidence indicate that reionization was underway by z = 11 and ended a few hundred Myr later at z ≃ 6-7 (e.g., Schenker et al. 2012; Mortlock et al. 2011; Hinshaw et al. 2012). Deep near-IR imaging with the Hubble Space Telescope has provided good constraints on the UV luminosity density of star forming galaxies during the reionization epoch (e.g., Ellis et al. 2013; Oesch et al. 2013) which indicate that the ongoing star formation is likely capable of producing the required ionizing flux. However, it is unclear whether this radiation is able to escape from galaxies and actually ionize the intergalactic medium (IGM). Based on estimates of the total UV luminosity density and IGM clumping factor, the required escape fraction f_esc of hydrogen-ionizing photons is ≳ 0.2 (e.g., Robertson et al. 2013). The precise value of f_esc is a key uncertainty in determining the role of galaxies in reionization. Direct measurements of f_esc during the reionization epoch are essentially impossible, not only because of the faint apparent luminosity, but also because the foreground IGM attenuates the ionizing flux to undetectable levels even at z ≳ 4. Direct imaging and composite spectra of galaxies at lower redshift have established a modest average f_esc ≃ 0.05 for Lyman break galaxies (LBGs) at z = 3 (Bogosavljević 2010). If star forming galaxies dominated reionization, f_esc must have been higher at earlier times. This paper is concerned with improving the constraints on f_esc at higher redshifts for which direct measurements are not practical.
Our methodology is as follows: f_esc is set by the areal covering fraction f_c of hot stars by H i such that f_esc = 1 − f_c, and f_c can be inferred from intermediate dispersion spectroscopy of interstellar UV absorption lines. The difficulty, of course, is that LBGs at redshift z > 3 are very faint, so securing suitably high quality absorption line spectra of individual examples is a very challenging proposition. In an earlier paper (Jones et al. 2012) we therefore analyzed the average properties of LBGs at z ≃ 4-5 derived from composite spectra in a manner similar to that pioneered at z ≃ 3 by Shapley et al. (2003). A particular motivation for the present study was the discovery of a marked reduction with increasing redshift in the equivalent width of low-ionization absorption lines at fixed UV luminosity, suggestive of changes in either the kinematic profile or the covering fraction of neutral gas (Jones et al. 2012). Additionally, spectra of individual galaxies show a trend of increasing Lyα emission equivalent width with redshift, which we argued could reflect evolution in the H i covering fraction, especially given the strong correlation of Lyα and low-ionization absorption lines (Shapley et al. 2003; Jones et al. 2012). A lower covering fraction for the neutral gas within typical LBGs would be particularly important as it could imply a higher escape fraction of ionizing photons. The composite spectra discussed by Jones et al. (2012) did not have adequate resolution to distinguish the effects of reduced covering fraction and kinematics, and so the present paper takes this investigation one step further by attempting to resolve this important ambiguity. Here we present higher resolution spectra of 3 gravitationally lensed z ≃ 4 LBGs. Although their un-lensed luminosities are typical of the constituent galaxies comprising the Jones et al. (2012) composite, their individual lensed magnitudes are much brighter, enabling comparable signal to noise to the stack of LBGs discussed in our earlier paper. Throughout the paper we adopt a flat ΛCDM cosmology with Ω_Λ = 0.7, Ω_M = 0.3, and H_0 = 70 km s−1 Mpc−1. All magnitudes are in the AB system (Oke 1974). The spectra comprising the composite published by Jones et al. (2012) were taken with the 600 line mm−1 DEIMOS grating with a resolution of ≃ 3.5 Å, although uncertain systemic redshifts led to reduced resolution of the composite spectrum (corresponding to a velocity resolution of ∼ 450 km s−1 FWHM). The composite comprised galaxies with z′_AB = 24-26, with 90% completeness to z′_AB = 25 at z ≃ 4. The stacked spectrum reveals multiple low ionization lines such as Si ii λ1260, O i λ1302 + Si ii λ1304, and C ii λ1334 in the region where the signal/noise is optimal. As discussed by Jones et al. (2012), these lines are normally saturated, so the line depth at a given velocity provides a measure of the areal covering fraction f_c of O and B stars by neutral H i gas along the line of sight. Typical LBGs are too faint for detailed line profile studies even at z = 2, but strong gravitational lensing can boost the brightness of representative examples, making such studies of individual sources a practical proposition. Studies of several lensed z ≃ 2-3 LBGs have found that absorption velocities of low-ionization metal transitions range from ∼ −1000 to +500 km s−1 with typical line centroids v ∼ −200 km s−1 (Pettini et al. 2002; Quider et al. 2009, 2010; Dessauges-Zavadsky et al. 2010).
The mean low ionization absorption velocity in the Jones et al. (2012) composite is similar, v_LIS = −190 km s−1. In a similar fashion, for the present analysis we have located 3 gravitationally-lensed LBGs at z ≃ 4. Two are independent sources lensed by the well-studied cluster Abell 2390 and the third was located in the cluster J1621+0607 in the Sloan Digital Sky Survey. A2390 H3 and H5 represent two distinct highly-elongated pairs of lensed images that were spectroscopically confirmed to be at different redshifts by, respectively, Frye & Broadhurst (1998) and Pelló et al. (1999). The tangential arc system in J1621+0607 was spectroscopically confirmed by Bayliss et al. (2011). The gravitational magnification factor is ≃ 10 in each case (Pelló et al. 1999). We summarize the key properties in Table 1. The redshifts and absolute UV luminosities are representative of sources studied by Jones et al. (2012). In addition, for comparison purposes, we include an analysis of high quality spectra for 3 further lensed z = 2-3 sources (courtesy of M. Pettini) in Table 1. These include the 'Horseshoe' (z=2.38, Quider et al. 2010), cB58 (z=2.73, Pettini et al. 2002) and the 'Cosmic Eye' (z=3.07, Quider et al. 2009). Spectra were taken with the 1200 line mm−1 DEIMOS grating during two runs in October 2011 and June 2012. This provides a resolution of ≃ 1.7 Å corresponding to a velocity resolution of ≃ 70 km s−1, considerably better than for the composite discussed by Jones et al. (2012). Spectra of each galaxy covered wavelengths corresponding to at least 1175−1675 Å in the rest frame. The lensed sources in Abell 2390 were observed simultaneously with a multi-slit mask that sampled two images of each source. Seeing varied between 0.″4 and 1.″4 FWHM during the observations, and the bulk of the data used has seeing in the range 0.″7-0.″9. Some exposures (∼ 10%) were affected by cirrus and are not included in the final addition. Total observing times for the final spectra are given in Table 1. The DEIMOS spectra were reduced and calibrated using the Spec2D pipeline following the techniques discussed in detail by Stark et al. (2010). In the case of A2390 H3, care was taken to ensure that the extracted spectrum was not contaminated by light from a nearby cluster member. Data from the October 2011 and June 2012 runs were reduced separately and the resulting one-dimensional spectra were combined with an inverse-variance weighted mean. Spectra of J1621 are affected by poor sky subtraction residuals, while the Abell 2390 arc spectra are of excellent quality. Spectra of different images of the Abell 2390 arcs were scaled to the same flux level before combining to a common wavelength scale with 0.7 Å pixels, roughly Nyquist sampled. The final spectra, shown in Figure 1, reach an average continuum S/N per 70 km s−1 resolution element of 5 for J1621, 9 for A2390 H3, and 10 for A2390 H5 over the rest-frame wavelength range 1250-1650 Å. This is comparable to that in the composite spectrum in Jones et al. (2012), which has S/N equivalent to ∼ 10 at the improved resolution of 70 km s−1 of our new data. Systemic Redshift Accurate systemic redshifts are required in order to examine the kinematics of gas seen in absorption, and the techniques for estimating these are discussed in detail in Jones et al. (2012). This is straightforward when nebular emission lines are visible, as is the case in both J1621+0607 (O iii] λλ1661,6) and A2390 H5 (O iii] λλ1661,6, He ii λ1640, C iv λλ1548,51).
The strong emission from highly ionized species such as He ii and C iv seen in A2390 H5 is uncommon but has been observed in some high redshift starburst galaxies and signifies an extremely young, metal-poor, and hot stellar population (e.g. Fosbury et al. 2003; Erb et al. 2010). Alternatively, it may signify the presence of an active galactic nucleus, but the narrow line widths (50−185 km s−1 FWHM, corrected for instrumental resolution) suggest an origin in star-forming H ii regions. No appropriate features are detected in the spectrum of A2390 H3 and so we estimate the systemic redshift from low-ionization absorption lines using the method of Jones et al. (2012). This gives z = 4.043 ± 0.002 = z_IS + 190 km s−1, with the uncertainty dominated by an rms difference ∼ 125 km s−1 between redshifts derived from absorption lines and those obtained from nebular emission. The adopted systemic arc redshifts are listed in Table 1. Low-Ionization Covering Fraction Ideally we would measure the covering fraction of neutral hydrogen directly from spatially resolved H i absorption. However, the only available transition (Lyα) is dominated by strong emission with net equivalent width W_Lyα = 20−100 Å, and the observed line profile is complicated by resonant scattering in the extended circumgalactic medium (CGM; Steidel et al. 2011). These effects are apparent from the Lyα line profiles, which show redshifted emission as well as strong absorption arising from both the CGM and the Lyα forest (Figure 1). We therefore estimate the covering fraction of neutral hydrogen from absorption lines of heavier low-ionization species which arise in H i gas, i.e., those with ionization potentials less than 1 Rydberg. The covering fraction f_c of any ion is related to its absorption line optical depth τ and residual intensity I via I/I_0 = 1 − f_c (1 − e^(−τ)), (1) where I_0 is the continuum level. Optical depth is in turn related to column density as τ = f λ N / (3.768 × 10^14), (2) where f is the ion oscillator strength, λ is the transition wavelength expressed in Å, and N is the ion column density in cm−2 (km s−1)−1. Combining equations 1 and 2 yields an expression for f_c as a function of I and N. In cases where two or more transitions are measured for the same ion, from the same ground state, with different values of f λ, it is possible to solve these equations for N and f_c. In the following analysis we will treat all variables as functions of velocity, i.e., f_c(v). For the low ionization species of interest, our spectra cover three such transitions of Si ii at 1260, 1304, and 1526 Å, which we use to measure the covering fraction as a function of velocity, f_c(v), for each galaxy. Si ii λ1304 is only used in the velocity range v ≳ −200 km s−1, where it is not contaminated by O i λ1302. We resample the spectrum of each transition to a common velocity scale, and find the values of N and f_c which minimize the least-squares residual χ² = Σ (I_obs − I_{N,f_c})² / σ²_obs in each velocity bin. We additionally find the range of N and f_c for which χ² is within 1 of the minimum value, and adopt this as the 1σ uncertainty. The best-fit f_c and uncertainty calculated for each arc are shown as a function of velocity in Figure 2. Since Si ii is the dominant ion of silicon in H i gas, this is approximately equal to the covering fraction of H i (provided that it is enriched with Si) which impedes the escape of ionizing radiation.
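A minimal sketch of the per-velocity-bin fit described above: solve equations 1 and 2 for the column density N and covering fraction f_c that best reproduce the residual intensities of the Si ii transitions. The oscillator strengths below are illustrative values that should be checked against a current atomic line list, and the numerical constant is the one quoted in equation 2; this is not the authors' code.

import numpy as np

# (wavelength in Angstrom, oscillator strength f); illustrative atomic data
SIII_LINES = [(1260.42, 1.18), (1304.37, 0.0863), (1526.71, 0.133)]

def tau(N, lam, f):
    # Equation 2: optical depth for column density N in cm^-2 (km/s)^-1
    return f * lam * N / 3.768e14

def fit_bin(I_obs, sigma, lines=SIII_LINES):
    """Grid search for the (N, f_c) pair minimising
    chi^2 = sum((I_obs - I_model)^2 / sigma^2) in one velocity bin.

    I_obs, sigma: residual intensities (I/I_0) and their errors, one value
    per transition in `lines`."""
    I_obs, sigma = np.asarray(I_obs), np.asarray(sigma)
    logN_grid = np.linspace(10.0, 16.0, 301)   # log10 of column density
    fc_grid = np.linspace(0.0, 1.0, 101)
    best = (np.inf, None, None)
    for N in 10.0 ** logN_grid:
        for fc in fc_grid:
            # Equation 1 evaluated for every transition
            model = np.array([1.0 - fc * (1.0 - np.exp(-tau(N, lam, f)))
                              for lam, f in lines])
            chi2 = np.sum(((I_obs - model) / sigma) ** 2)
            if chi2 < best[0]:
                best = (chi2, N, fc)
    return best  # (chi2_min, N, f_c)

# Example: deep, nearly equal residual intensities imply a large f_c
print(fit_bin([0.25, 0.60, 0.35], [0.05, 0.05, 0.05]))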
Average Low-Ionization Absorption Profile A simple and complementary alternative to the method outlined in Section 3.2 is to estimate the covering fraction from saturated transitions. In cases where τ ≫ 1, Equation 1 simplifies to f_c(v) = 1 − I(v)/I_0. (3) Several of the strongest absorption lines covered by our spectra are typically saturated, including Si ii λ1260, O i λ1302, Si ii λ1304, C ii λ1334, and Si ii λ1526, which are all tracers of H i gas. In order to minimize the statistical uncertainty we calculate the average intensity of these transitions as a function of velocity using an inverse-variance weighted mean, taking care not to use the wavelength region where O i λ1302 and Si ii λ1304 are blended (roughly −300 ≲ v ≲ −200 km s−1, depending on the kinematics of each source). These profiles are shown in Figure 2 together with the covering fraction measured from Si ii. We note that the covering fraction derived from Equation 3 is a strict lower limit. RESULTS This work was motivated in large part by the need to disentangle kinematics and covering fractions of absorbing gas. In particular, we seek to explain the extent to which the decreased absorption line equivalent widths measured from composite spectra in our earlier work (Jones et al. 2012) result from changes in gas kinematics compared to covering fractions, and the implications of this result for the escape of ionizing radiation. We are limited in examining the redshift evolution of these properties by the small number of sources with suitable spectra, and we caution that this sample is not necessarily representative of the LBG population at these redshifts. Nonetheless we can examine general trends within the existing data from this work and others in the literature (Pettini et al. 2002; Quider et al. 2009, 2010; Dessauges-Zavadsky et al. 2010). Following the methods in Section 3, we show the average absorption profiles and Si ii covering fractions (derived from unblended transitions at 1260, 1526, and 1808 Å) of well-studied z = 2-3 galaxies for comparison in Figure 2. To quantify trends in the absorption line profiles, the velocity extent and maximum absorption depth for each galaxy are shown as a function of redshift in Figure 3. In the case of Abell 2390, we take the most conservative approach, noting that the interpretation of the absorbing gas is complicated by the physical proximity and similar redshift of the two arcs. Their projected separation is ∼ 70 kpc (Pelló et al. 1999) and it is unclear which source lies in the foreground. At lower redshifts, Steidel et al. (2010) have shown that low-ionization absorption seen in a background source at b = 70 kpc has a detectable average equivalent width ∼ 0.4 Å for the transitions of interest. Since we lack specific information about the 3-D geometry, the following analysis does not include any contribution from this effect. If anything, our results will overestimate the true covering fraction and therefore yield a more conservative constraint on the escape fraction. Kinematics The kinematics of foreground low-ionization gas are revealed in the absorption line profiles shown in Figure 2. In all cases we see significant blueshifted absorption indicating outflows, as expected given the high star formation surface densities (Heckman 2002). The maximum outflow velocity at which absorption is detected is −700 km s−1 in A2390 H3, with an uncertainty of ∼ 125 km s−1 since we do not directly measure the systemic redshift.
A2390 H5 reveals weak absorption extending to −600 km s−1, seen also in the higher-ionization Si iv and C iv lines, although it is only marginally detected at < −300 km s−1. The maximum outflow velocity in J1621 is −300 km s−1. The outflowing low-ionization gas attains a somewhat lower (∼ 30% on average) maximum velocity at higher redshift. This trend is not due to lower quality data, as it remains evident in Figure 2 if we consider alternative measures such as the FWHM or an absorption threshold at 25% of the continuum flux. However, all galaxies except J1621 have similar maximum velocities ranging from 600−800 km s−1. Likewise the extent of redshifted absorption is approximately +200 km s−1 for all sources, with the notable exception of the Cosmic Eye as discussed in detail by Quider et al. (2010); this indicates little difference in line broadening from rotation or other internal kinematic structure. Therefore, in this limited sample, the low-ionization gas kinematics are similar, with a somewhat lower average velocity extent at higher redshift. Covering Fraction The covering fraction of each galaxy as a function of gas velocity is estimated from the methods described in Section 3 and shown in Figure 2. Both methods are generally in good agreement, indicating that Equation 3 is a valid approximation. Si ii covering fractions derived for the Cosmic Eye are systematically higher than indicated by the average absorption profile; this is largely an artifact caused by additional absorption at the wavelength of Si ii λ1260 from intervening gas at z = 2.66 (Quider et al. 2010). It is also apparent from Figure 2 that the Si ii covering fraction is poorly constrained in regions of weak absorption due to the marginal significance of individual absorption lines. This is most problematic in the high-velocity wings. The strong anticorrelation between absorption line strength and covering fraction, as well as results at lower redshift (Martin & Bouché 2009), suggest that the most likely solution for such ambiguous cases is a low covering fraction of optically thick gas. We therefore opt to compare galaxies on the basis of their average absorption line profile, as this quantity is simpler to define and less susceptible to the uncertainties described above. Nonetheless the Si ii results are an important verification that the average profile accurately traces f_c. We can now compare the covering fractions measured at z = 4 with sources at lower redshift. Figure 3 shows the maximum absorption depth for each galaxy as a function of redshift. The z = 4 galaxies have maximum absorption depths corresponding to f_c = 0.3-0.9, in each case occurring at v ∼ −100 km s−1. There is a large scatter in Figure 3 with σ(f_c,max) = 0.26 and no strong redshift dependence. Galaxies at z = 4 do, however, have covering fractions which are lower on average by 25%, or Δf_c,max = 0.16, compared to z = 2-3. Trends with Lyα We now turn to trends with Lyα equivalent width. Previous sections focused on possible redshift evolution of low-ionization absorption lines in our quest to examine whether this may signify an increasing ionizing escape fraction. The connection with Lyα is a natural one to explore given there is a strong correlation between its equivalent width W_Lyα and low-ionization absorption (Jones et al. 2012; Shapley et al. 2003). Since our previous work has suggested that the distribution of W_Lyα for LBGs of a fixed luminosity increases with redshift (Stark et al. 2010, 2011; Schenker et al.
2012), we can hope to derive inferences about the low ionization absorption in sources for which Lyα measurements are now widely available. Each galaxy in Figure 3 is color-coded according to W_Lyα. This value refers only to the equivalent width of Lyα emission, differing from the conventional net sum of emission and absorption. The maximum outflow velocity is lower on average in galaxies with stronger Lyα emission, consistent with well-quantified results from composite spectra (Shapley et al. 2003). More interestingly, Figure 3 reveals a trend of lower absorption depth (implying lower f_c) with stronger Lyα emission at 3.5σ significance. We show this relation in Figure 4. In contrast, the trend of lower average f_c at higher redshift has limited significance (1.1σ) and is explained by Lyα demographics within the sample. Although limited by the small sample size, this is an important first quantitative result at these redshifts. Since the frequency and equivalent width of Lyα emission increases in LBGs at higher z = 3 → 6, these results imply that the average covering fraction of low-ionization gas should decrease with redshift. Ionizing Escape Fraction Direct measurements of the ionizing flux are impractical at the redshifts of interest in this paper, both because of the faint apparent magnitudes of LBGs and the high opacity of the Lyα forest. Nonetheless we can provide important constraints on f_esc using indirect tracers of H i. Before doing so, we consider the potential systematic uncertainties which may limit our ability to estimate the true value of f_esc from metal absorption lines. Results at z ≃ 3 have shown that f_esc is indeed dependent on low-ionization absorption strength (Bogosavljević 2010), although this relation is not one-to-one, likely due to the factors described below. While in general these preclude accurate estimates of the escape fraction, the maximum absorption depth (Figures 2, 3) is a valuable constraint on H i spatial homogeneity and sets a stringent upper limit on f_esc. 1. We measure covering fraction as a function of velocity, yet gas at different velocities may cover different spatial regions and we lack the spatial resolution needed to evaluate this effect. The maximum absorption depth at a given velocity is thus a lower limit on the total H i covering fraction and an upper limit on f_esc. 2. Metal-free H i is not detected. To date only two instances of metal-free gas have been found at these redshifts (Fumagalli et al. 2011) and so this is likely insignificant. Again, this possibility implies that covering fractions measured in Section 3 yield upper limits on f_esc. 3. Low column density gas will not be detected, although such gas will not affect f_esc unless it has very low metallicity ≲ 0.1 Z⊙. The optical depth of the metal transitions used here compared to the Lyman continuum at ∼ 900 Å is τ/τ_LyC = 1.7 for the weakest line (Si ii λ1304) and 10-20 for the strongest transitions (Si ii λ1260, C ii λ1334, O i λ1302) for solar abundance ratios (Asplund et al. 2009). Galaxies in the z = 2-3 sample with measured interstellar abundance ratios have Z ≳ 0.4 Z⊙ for the relevant elements, such that the attenuation of the weaker Si lines is roughly equal to that of ionizing radiation. 4. There may be narrow components with uniform covering fraction which are spectroscopically unresolved.
However, such smooth absorption and column density profiles would require a remarkably regular spacing of discrete narrow components, and so a partial covering fraction appears more likely (see Pettini et al. 2002; Quider et al. 2009). 5. Si ii and C ii are present in both H i and H ii regions. We have therefore confirmed that O i, with essentially the same ionization potential as H i, gives consistent results in all cases. 6. We measure a covering fraction at 1260-1526 Å, but the stars which are bright at these wavelengths do not necessarily emit at < 912 Å. Our measurements therefore correspond to a constraint on the relative escape fraction f_esc = L_LyC/L_1500 as commonly used in the literature. However, the spatial distribution of L_1500 is similar to ionizing emission as traced by Balmer lines in z ≃ 1−3 galaxies (e.g., Jones et al. 2010), indicating that this effect may be minimal. With these caveats in mind, we now summarize what can be learned about the ionizing escape fraction at z = 4. The depth of low-ionization absorption lines gives upper limits f_esc < 0.7 for A2390 H5, < 0.3 for A2390 H3, and < 0.1 for J1621. The true values are likely well below the upper limits, and in three lower-redshift galaxies we can verify that this is the case. Quider et al. (2010) show that the Cosmic Eye has an H i covering fraction of ≃ 95%, implying f_esc < 0.05 based on damped Lyα absorption, more stringent than the f_esc < 0.1 from our analysis. Secondly, no ionizing radiation is detected from the Horseshoe in deep UV imaging (B. Siana, private communication) nor from the spectrum of Q0000-D6 (f_esc < 0.16; Giallongo et al. 2002), despite low maximum covering fractions (Figures 2, 3). Non-uniform coverage of low-ionization metals is evidently a necessary, but not sufficient, condition for the escape of ionizing radiation. Results derived in this paper should therefore be strictly interpreted as upper limits. Nonetheless, our measurements at z = 4 are readily compatible with the value f_esc ≳ 0.2 required for galaxies to reionize the universe (Robertson et al. 2013). DISCUSSION In our previous work (Jones et al. 2012) we found a decrease in the low-ionization absorption equivalent width with redshift in composite LBG spectra, but were unable to distinguish whether gas kinematics and/or covering fraction were the cause. The new data presented in this paper now resolve this ambiguity. Although the sample of LBGs with high quality spectra at z > 2 is small and not necessarily representative, the present data show an approximately equal decrease of ∼ 25% in the gas velocity and covering fraction with redshift. The much stronger dependence of low-ionization absorption on Lyα emission appears to be predominantly due to the covering fraction of neutral hydrogen. The main limitation of the methods used in this work to constrain f_esc is that we do not directly measure the fraction of ionizing emission covered by H i. While the spectrally-resolved covering fraction of heavy elements provides an important measure of the "patchiness" of H i, gas at different velocities may cover different spatial regions and so we can derive only a lower (upper) limit on f_c (f_esc). Deep integral field spectroscopy with good spatial resolution may resolve this issue and provide better constraints on f_c and f_esc. We have successfully measured the kinematics and covering fraction of low-ionization (H i) gas in z = 4 LBGs from high quality rest-UV spectra.
Resulting upper limits on the ionizing escape fraction f_esc ≤ 1 − f_c are readily consistent with that required for star-forming galaxies to reionize the universe (e.g., Robertson et al. 2013). Importantly, the new data enable these measurements at a time < 1 Gyr from the reionization epoch, much earlier than was previously possible. We note that in order for galaxies to reionize the universe, the escape fraction must increase rapidly from f_esc = 0.05 measured at z = 3 (Bogosavljević 2010) to a value f_esc ≳ 0.2 at z = 7 (Robertson et al. 2013; Kuhlen & Faucher-Giguère 2012). While the trends with redshift are poorly constrained, the available data reveal a reduced covering fraction with increasing W_Lyα, indicating that galaxies with moderate or strong Lyα emission are likely to have larger f_esc. This is supported by direct evidence at z = 3, where galaxies with detectable ionizing flux have stronger Lyα emission and weaker low-ionization absorption than LBGs with lower f_esc (Bogosavljević 2010). Clearly the distribution of (metal-enriched) H i is patchier in galaxies with stronger Lyα emission, making it more likely that ionizing radiation can escape. Since the frequency and strength of Lyα emission in typical LBGs increases with redshift (Stark et al. 2010, 2011), our results provide new evidence that covering fractions decrease (and therefore f_esc increases) with redshift. We thank Max Pettini for providing ESI spectra of the z = 2-3 galaxies used as a comparison sample. T.A.J. acknowledges support from the Southern California Center for Galaxy Evolution through a CGE Fellowship. D.P.S. acknowledges support from NASA through Hubble Fellowship grant #HST-HF-51299.01 awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA under contract NAS5-26555. The analysis pipeline used to reduce the DEIMOS data was developed at UC Berkeley with support from NSF grant AST-0071048. This work relies on data obtained at the W.M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California and the National Aeronautics and Space Administration, and was made possible by the generous financial support of the W.M. Keck Foundation. We wish to recognize the significant cultural role that the summit of Mauna Kea has within the indigenous Hawaiian community; we are most fortunate to have the opportunity to conduct observations from this mountain. Fig. 2 (caption): Mean low-ionization absorption line profile and its associated neutral gas covering fraction, derived using the methods discussed in Section 3. The Si ii covering fraction of the z ≃ 4 arcs is measured from smoothed spectra with FWHM resolution ≃ 110 km s−1, while average line profiles are from unsmoothed data (FWHM ≃ 70 km s−1). There is no significant difference in results derived from the smoothed and unsmoothed spectra. Velocities are relative to adopted systemic redshifts (Section 3.1), derived from absorption line centroids for A2390 H3 and nebular emission in the other two cases. Equivalent measurements from ESI spectra of z = 2-3 arcs are shown for comparison. Details of these ESI spectra can be found in Pettini et al. (2002) and Quider et al. (2009, 2010). Fig. 3 (caption, partial): [...] Figure 2, which serves as a proxy for the covering fraction of H i at the corresponding velocity. We include estimated values for the 8 o'clock arc (Dessauges-Zavadsky et al.
2010) and a somewhat lower resolution (R=1300) spectrum of Q0000-D6, in addition to the sources shown in Figure 2. Galaxies are color-coded according to their W_Lyα, and relevant data are listed in Table 2. On average, the z = 4 galaxies show somewhat lower maximum outflow velocities and lower covering fractions (weaker absorption depth) compared to similarly studied sources at lower redshift, but the strongest trend is a decreasing covering fraction with W_Lyα. The z = 4 data enable us to examine trends of H i covering fraction at a time significantly closer to the epoch of reionization, thought to end at z ≃ 7 (e.g., Schenker et al. 2012). Fig. 4 (caption, partial): Here W_Lyα includes only the emission component, and the two points at (1,1) are slightly offset for clarity. As discussed in the text, these absorption line measurements correspond to a lower limit on f_c and an upper limit on f_esc. Since Lyα emission strength increases with redshift (Stark et al. 2010, 2011), this result likely indicates lower covering fractions (which would permit higher escape fractions) at earlier times.
2013-04-29T17:54:05.000Z
2013-04-25T00:00:00.000
{ "year": 2013, "sha1": "e935da4f395562e5626fd2c492ae2efaa5cf33bc", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1304.7015", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "e935da4f395562e5626fd2c492ae2efaa5cf33bc", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
271450077
pes2o/s2orc
v3-fos-license
A comparative analysis of preclinical computed tomography radiomics using cone-beam and micro-computed tomography scanners Background and purpose Radiomics analysis extracts quantitative data (features) from medical images. These features could potentially reflect biological characteristics and act as imaging biomarkers within precision medicine. However, there is a lack of cross-comparison and validation of radiomics outputs, which is paramount for clinical implementation. In this study, we compared radiomics outputs across two computed tomography (CT)-based preclinical scanners. Materials and methods Cone beam CT (CBCT) and µCT scans were acquired using different preclinical CT imaging platforms. The reproducibility of radiomics features on each scanner was assessed using a phantom across imaging energies (40 & 60 kVp) and segmentation volumes (44–238 mm³). Retrospective mouse scans were used to compare feature reliability across varying tissue densities (lung, heart, bone), scanners and after voxel size harmonisation. Reliable features had an intraclass correlation coefficient (ICC) > 0.8. Results First order and GLCM features were the most reliable on both scanners across different volumes. There was an inverse relationship between tissue density and feature reliability, with the highest number of features in lung (CBCT=580, µCT=734) and the lowest in bone (CBCT=110, µCT=560). Comparable features for lung and heart tissues increased when voxel sizes were harmonised. We have identified tissue-specific preclinical radiomics signatures in mice for the lung (133), heart (35), and bone (15). Conclusions Preclinical CBCT and µCT scans can be used for radiomics analysis to support the development of meaningful radiomics signatures. This study demonstrates the importance of standardisation and emphasises the need for multi-centre studies. Introduction Radiomics analysis translates medical images into quantitative data, termed "features", in an attempt to provide more information to improve patient diagnosis and treatment decision-making [1,2]. Within oncology, radiomics has been proposed as a virtual biopsy to non-invasively determine the biological features of tumours, including mutation status, immune response, and hypoxia [3]. Studies have shown that the implementation of radiomics features with clinical factors can improve patient risk stratification for overall survival and locoregional control [4,5]. Despite the advantages of radiomics image analysis, the question of feature reliability between scanners remains a major limitation in the transferability of radiomics outputs and their adoption into the clinic [6,7]. To date, most radiomics studies are limited to a single-centre analysis and lack multi-centre validation, slowing their integration into clinical workflows [8,9]. Multiple parameters feed into inter-scanner variability, including, but not limited to, imaging modality, acquisition parameters, image discretization, reconstruction techniques and filtering [10][11][12].
In the past decade, preclinical imaging modalities have evolved which are downscaled in size and imaging energy [13]. These include preclinical computed tomography (CT), magnetic resonance (MR), and positron emission tomography (PET)-based scanners, which have been routinely implemented for radiotherapy treatment planning and the detection of metastases or toxicities within translational research [14]. Recent studies have shown that these rich preclinical datasets could expand our current understanding of radiomics imaging signatures [15][16][17][18][19]. As with clinical studies, there has yet to be a comparison of preclinical radiomics outputs across imaging modalities and institutions. Our study aimed to compare radiomics outputs from two preclinical CT-based scanners. This represents the first cross-centre comparison of radiomics outputs in preclinical research, shedding light on the transferability of radiomics features derived from both CBCT and µCT scanners. CBCT scans Routine cone beam CT (CBCT) imaging was performed using the Small Animal Radiation Research Platform (SARRP, Xstrahl Life Sciences, Camberley, UK) at Queen's University Belfast (Supplementary Table 1). The SARRP acquires images with tube voltages between 40 and 80 kVp. For this study, scans were acquired at energies routinely used for preclinical imaging (40 and 60 kVp) with a current of 0.8 mA, a 60 s imaging time and a 48 mAs current-exposure time [20]. Image reconstruction was performed using 360° of projection images and filtered back-projection. A log(white/x) transform was applied to the input images using FDK with a Hamming filter window. CBCT scans had a slice thickness of 0.26 mm as the standard protocol for this scanner. Micro-CT scans Routine µCT imaging was performed using the Quantum GX2 CT system (Perkin Elmer, UK) at the Royal College of Surgeons in Ireland (RCSI) (Supplementary Table 1). Scans were acquired using a standardised imaging protocol at energies of 40 and 60 kVp with a current of 0.088 mA, an imaging time of 4 min and a 21.12 mAs current-exposure time. Image reconstruction was performed using 360° of projection images and filtered back-projection with post-filtering. µCT scans had a slice thickness of 0.09 mm as the standard protocol for this scanner. Phantom An anatomically correct mouse phantom developed by the National Physical Laboratory (NPL, UK) for preclinical dosimetry was used to assess intra-scanner reliability at different imaging energies and across segmentation volumes (Supplementary Fig. 1). This phantom has a unique design that is anatomically correct for a mouse, with tissue-equivalent inserts of varying densities for bone (1.39 g/cm³), lung (0.68 g/cm³), and soft tissue (1.01 g/cm³) [21,22]. The phantom was scanned twice on each scanner (scan-rescan) to assess the reproducibility of radiomics features between scans.
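As a quick arithmetic check of the acquisition settings above, the current-exposure product in mAs is the tube current in mA multiplied by the total exposure time in seconds:

sarrp_mas = 0.8 * 60            # CBCT: 0.8 mA x 60 s = 48.0 mAs
quantum_mas = 0.088 * 4 * 60    # uCT: 0.088 mA x 4 min = 21.12 mAs
assert round(sarrp_mas, 2) == 48.0 and round(quantum_mas, 2) == 21.12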
Mouse models Retrospective mouse scans from both scanners were analysed and compared. All previous in vivo experimental procedures were carried out in accordance with Home Office Guidance (Scientific Procedures Act 1986 (PPL2813)) and Directive 2010/63/EU of the European Parliament on the protection of animals used for scientific purposes. For CBCT, 13 scans of female immunocompromised SCID mice aged 8-10 weeks (18-24 g) were used for comparative analysis. Mice were anaesthetised for imaging using ketamine and xylazine (100 mg/kg and 10 mg/kg). For µCT, 13 scans of 7 male and 6 female NOD SCID Gamma (NOD.Cg-Prkdc scid Il2rgtm1Wj) mice aged 8 weeks old (20-25 g) were used for comparative analysis. Mice were anaesthetised using inhalant isoflurane for imaging. Segmentation All segmentations were created using ITK-SNAP software (version 3.8.0, http://www.itksnap.org) [23]. For phantom analysis, manual spherical contours were created using the 3-D round brush. To avoid variability, fixed brush sizes of 44, 92, and 238 mm³ were used. For all mouse scans, tissues were contoured by two independent, experienced observers (KHB and BNK). Lung and bone (ribcage and spine) tissues were contoured using semi-automated methods as previously reported in Brown et al. [24]. Mouse hearts were manually contoured using the brush tool. All segmentations were inspected and manually altered if required prior to analysis. These segmentations defined a volume of interest (VOI) for radiomics analysis. Radiomics analysis Radiomics features were extracted using PyRadiomics software (version 2.7.7, Harvard Medical School, USA), with a fixed bin width of 25 as previously reported [19]. A total of 842 features were extracted, including original and wavelet feature types. All shape features were removed as these features could potentially confound the results from standard spherical segmentations or from tissue segmentations with distinct differences in size, shape and anatomy. A total of 828 features were used for analysis, including: first order statistics, gray level co-occurrence matrix (GLCM), gray level run length matrix (GLRLM), gray level size zone matrix (GLSZM), gray level dependence matrix (GLDM), and neighbouring gray tone difference matrix (NGTDM) [25]. Harmonisation of voxel sizes was compared by altering the resampledPixelSpacing from 0.09 mm to 0.26 mm for µCT (Fig. 1).
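A minimal sketch of the extraction settings described above, using the PyRadiomics Python API; the image and mask file names are hypothetical, and dropping keys containing "shape" is one simple way to discard shape features as the paper does.

from radiomics import featureextractor

settings = {
    "binWidth": 25,                               # fixed bin width, as above
    "resampledPixelSpacing": [0.26, 0.26, 0.26],  # harmonised voxel size (mm); omit to keep native spacing
}
extractor = featureextractor.RadiomicsFeatureExtractor(**settings)
extractor.enableImageTypeByName("Wavelet")        # original + wavelet feature types

# Hypothetical NIfTI image/mask pair for one mouse lung VOI
features = extractor.execute("mouse_uCT.nii.gz", "lung_mask.nii.gz")
features = {k: v for k, v in features.items() if "shape" not in k.lower()}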
Statistical analysis The intraclass correlation coefficient (ICC) score was used to determine the reliability of radiomics outputs. Reliability is defined as the extent to which outputs can be replicated [26]. ICC scores were calculated such that an output of 0 indicated no reliability and 1 indicated perfect reliability [27]. ICC results are classified by Koo et al. as poor (<0.5), moderate (0.5-0.7), good (0.7-0.9), and excellent (>0.9) [27,28]. Reliable features were defined as those with an ICC score > 0.8 to better match previous thresholds reported in test-retest analysis. ICCs were calculated using the irr library (with the lpSolve package) in RStudio software (version 4.1.2), based on a single value with absolute agreement, and determined using 2-way mixed-effects models [27]. Scan-rescan analysis was completed on both scanners to assess reliability for each individual scanner at imaging energies of 40 and 60 kVp. Other parameters were also assessed for reliability, including varying segmentation size, slice thickness and mouse tissues [27]. Features that were identified as reliable for mouse tissues were then compared across scanners. Statistical analysis of the percentage of overlapping features was performed by two-way ANOVA using GraphPad Prism 7 (Version 7.01, San Diego, CA, USA) (www.graphpad.com), with significance reported as **p < 0.01. Assessment of scanner reproducibility using a phantom model The reliability of radiomics features on two CT-based preclinical scanners was determined through phantom scan-rescan analysis. Reliable features (ICC > 0.8) from CBCT and µCT phantom scans were derived at two imaging energies (40 and 60 kVp) and three segmentation volumes (44, 92, and 238 mm³) (Fig. 2, Supplementary File 1). Overall, more radiomics features were reliable from the CBCT scans in comparison to µCT. A higher imaging energy increased the number of reliable features for CBCT but not for µCT. Increasing the segmentation volume did not increase the number of overlapping reliable radiomics features between scanners. First order and GLCM features are the most stable feature types across scanners and segmentation volumes (Fig. 2B). The impact of tissue density on the reliability of radiomics features The reliability of radiomics features at different tissue densities was assessed using retrospective mouse scans (Fig. 3). Our results show that radiomics features extracted from µCT scans have higher ICC scores compared to CBCT (Fig. 4 & Supplementary Table 2). For both scanners, the number of reliable features was highest for the lung (CBCT=580, µCT=734, Fig. 4A) and lowest for bone (CBCT=110, µCT=560), potentially due to increased imaging artefacts associated with higher densities. CBCT scans are more greatly affected due to their reduced image quality in comparison to µCT scanners. The NGTDM feature class has the lowest reliability across all tissues for CBCT. GLCM, GLRLM, GLSZM and GLDM features are the most reliable on both scanners for lung tissue, and first order features are the most reliable on CBCT for heart tissue. Voxel resampling is a common normalisation step that can improve the transferability of radiomics results across scanners, as it is completed after scans are acquired. Voxel sizes were normalised across CBCT and µCT scanners to 0.26 mm (Fig. 4B). Similar to the results in Fig.
4A, an inverse relationship is observed between tissue density and feature reliability, with ICC scores highest in the lung and lowest in the bone. For µCT images, an adjusted voxel size of 0.26 mm resulted in fewer reliable features across lung, heart, and bone (732, 568 and 450 features, respectively) compared to the original voxel size (734, 664, and 560 features, respectively). Identification of overlapping reliable features between CBCT and µCT scanners Reliable features (ICC > 0.8) from both scanners were compared for the lung, heart, and bone, to identify features that could be stable for comparative analysis (Fig. 5A, Supplementary File 2). This was completed for both original and normalised voxel sizes. Overall, µCT produced more reliable features than CBCT, and overlapping features between both scanners decreased as tissue density increased. Normalisation improved the percentage overlap of features for the lung (73%) and heart (59%) (Fig. 5B). These data suggest that harmonisation of datasets may improve both the transferability and reliability of features for comparison of preclinical CBCT and µCT scanners. Identification of radiomics features specific to different tissue densities Overlapping features for each tissue from Fig. 5 were compared to identify radiomics features that are reliable, transferable across both scanners, and potentially tissue density-specific (Fig. 6). Those which overlapped across the original and normalised methods were determined to be specific to each tissue density (Fig. 6C), and radiomics signatures for the lung (133 features), heart (35 features) and bone (15 features) were determined (Supplementary File 3). These signatures may provide comparable reference features for normal or healthy tissue, and alterations to these features may be used to detect damage or disease. Discussion Currently, preclinical radiomics studies have been performed at single institutions and with limited external validation of results. Multi-centre comparisons are required to highlight potential scanner-dependent differences and thus improve knowledge transfer within the radiomics community [29,30]. This study is the first to compare radiomics outputs across two CT-based preclinical imaging modalities. The reliability of radiomics features was assessed with the rationale that features with low reliability are not stable and may subsequently lead to poor predictive models [31]. Normalisation was trialled through alteration of voxel size in pre-processing steps, and comparable tissue-density-specific features were identified for the lung, heart, and bone. The presented results show promise for the transferability of radiomics data across CT-based scanners, yet emphasise the need for thorough reliability comparisons. CBCT and CT scanners are both instrumental within oncology for patient diagnosis, treatment planning (CT), and pre-treatment positioning (CBCT) [32].
Fig. 1. Overview of the radiomics workflow for comparison of preclinical CBCT and µCT scanners using various analytical parameters. An anatomically correct mouse phantom was used to assess the reproducibility of features between scans (scan-rescan) and the impact of different segmentation volumes on feature reliability. Feature reliability in three tissues of different densities (lung, heart, and bone) was also assessed across two cohorts of mice scanned on either the CBCT or the µCT scanner. Created with BioRender.com.

Discussion

To date, preclinical radiomics studies have been performed at single institutions with limited external validation of results. Multicentre comparisons are required to highlight potential scanner-dependent differences and thus improve knowledge transfer within the radiomics community [29,30]. This study is the first to compare radiomics outputs across two CT-based preclinical imaging modalities. The reliability of radiomics features was assessed with the rationale that features with low reliability are not stable and may subsequently lead to poor predictive models [31]. Normalisation was trialled through alteration of voxel size in the pre-processing steps, and comparable tissue density-specific features were identified for the lung, heart, and bone. The presented results show promise for the transferability of radiomics data across CT-based scanners, yet emphasise the need for thorough reliability comparisons.

CBCT and CT scanners are both instrumental within oncology for patient diagnosis, treatment planning (CT), and pre-treatment positioning (CBCT) [32]. It is pertinent that both CT-based scanners can be used, and their results interchanged, for radiomics analysis [33-35]. To date, only a handful of clinical studies have directly compared radiomics outputs acquired on CBCT and CT scanners [36-38]. These studies have demonstrated the feasibility of correlating CBCT and CT radiomics outputs, yet further work is required to develop prognostic imaging biomarkers. In this study we used their preclinical counterparts, which are typically used for translational research [13,39], to assess the feasibility of comparing features across imaging modalities. Our results support the hypothesis that some radiomics features are reliable and interchangeable between CT-based scanners.

Reliability and robustness tests typically use "coffee break" style scan-rescan analysis, in which two scans are acquired on the same scanner after a short time period. Results from scan-rescan analysis are not generalizable and need to be obtained for individual scanners [40-42]. In this study, scan-rescan analysis was used to identify differences between scans on each scanner at different imaging energies and to identify and remove unreliable radiomics features [19]. As CBCT scans have reduced image quality [32,43], there is an increased level of scattering and beam-hardening artifacts and a reduced accuracy of Hounsfield units (HU), which may have implications for the extraction of stable radiomics features at lower energies. However, some features are robust to this noise when imaging and analysis protocols are standardised [5,44-46]. We have shown here, and in previous work, that preclinical radiomics is susceptible to changes in imaging energy, which should therefore be standardised whenever possible [19]. In the phantom analysis, more reliable features were extracted from CBCT scans than from µCT. This may reflect the lower image quality of CBCT when analysing a uniform phantom; importantly, it does not hold true for in vivo tissue analysis, where heterogeneity is greater.

Segmentation, and in particular differences in segmentation volume, is one of the main sources of variability within the radiomics workflow. This is particularly evident within oncology, given the irregularity of tumour sizes. As a result, features correlated with volume could be falsely detected as predictive. Features that have been shown to correlate with, or depend heavily on, volume may not provide any additional information and could be removed during feature reduction [5,8,19,47]. Fifty-four radiomics features have been shown to correlate directly with increasing volume on preclinical CBCT scans [19]; this has yet to be validated for µCT scanners. Our results show that using a range of small segmentation volumes (44-238 mm³) does not significantly influence the reliability of radiomics features, and that first order and GLCM feature types may be the most resistant to changes in segmentation volume (Fig. 2B).
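As a hedged illustration of the volume-dependence screen mentioned above, the sketch below correlates each feature with segmentation volume across ROIs and flags strongly correlated features as candidates for removal during feature reduction. The Spearman correlation, the |rho| > 0.8 cut-off and the data are assumptions made for the example, not criteria used in this study.

```python
import numpy as np
from scipy.stats import spearmanr

def volume_dependent_features(features: dict[str, np.ndarray],
                              volumes: np.ndarray,
                              rho_cutoff: float = 0.8) -> list[str]:
    """Flag features whose values track segmentation volume too closely.

    features: mapping of feature name -> values across ROIs
    volumes:  segmentation volume (e.g. mm^3) for the same ROIs
    """
    flagged = []
    for name, values in features.items():
        rho, _ = spearmanr(values, volumes)
        if abs(rho) > rho_cutoff:
            flagged.append(name)
    return flagged

# Illustrative use with made-up data for six ROI volumes
volumes = np.array([44.0, 92.0, 238.0, 60.0, 150.0, 200.0])
features = {
    "original_shape_VoxelVolume": volumes * 1.0,                       # trivially volume-driven
    "original_firstorder_Mean": np.array([10, 11, 9, 10, 12, 11], float),
}
print(volume_dependent_features(features, volumes))
```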
Harmonisation methods can overcome variability by transforming imaging data [6,7]. CT scans can be harmonised through adjustment of HU values to standardise tissue contrast; however, this was not feasible in our study, as CBCT data from the SARRP are given in proprietary CT numbers and an accurate conversion to mass density is not routinely performed [48]. Because radiomics textural outputs depend on voxel size and gray-level discretisation, voxel resampling may instead reduce variability across the two systems [11,41,49]. Our results showed that transforming the data by harmonising slice thicknesses was beneficial for the comparison of the lower-density tissue (lung), yet there were marginally fewer comparable reliable features for the higher-density tissues (heart and bone). This could be due to additional artefacts, or to increased homogeneity in higher-density tissue reducing the subtle variability in the radiomics features. Some radiomics features are more dependent on voxel size than on gray-level discretisation, and studies should take tissue density into consideration during normalisation steps [11].

The use of preclinical models to develop radiomics signatures is still a relatively new concept and requires continual optimisation. Preclinical models are pivotal in oncology research for recapitulating the underlying biology of human disease, yet mice are not mini humans [50,51]. A recent study has shown a strong correlation between radiomics features extracted from mouse and patient datasets [52], supporting the use of preclinical models to develop radiomics signatures in controlled and standardised settings. Our study identified 133, 35 and 15 reliable features associated with the lung, heart and bone tissues of mice across two CT-based scanners. These signatures could be clinically validated to feed into pipelines applying radiomics biomarkers to differentiate between normal and abnormal tissues (e.g. tumours).
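The voxel resampling used for harmonisation can be written as a short resampling routine. The sketch below uses SimpleITK to resample a volume to 0.26 mm isotropic spacing; the interpolator choice and the file names are assumptions for illustration and do not describe the study's actual pipeline.

```python
import SimpleITK as sitk

def resample_to_spacing(image: sitk.Image, new_spacing=(0.26, 0.26, 0.26)) -> sitk.Image:
    """Resample an image to a fixed voxel spacing (mm), keeping its physical extent."""
    old_spacing = image.GetSpacing()
    old_size = image.GetSize()
    new_size = [int(round(osz * ospc / nspc))
                for osz, ospc, nspc in zip(old_size, old_spacing, new_spacing)]
    return sitk.Resample(
        image,
        new_size,
        sitk.Transform(),            # identity transform
        sitk.sitkBSpline,            # interpolator; linear would also be reasonable
        image.GetOrigin(),
        new_spacing,
        image.GetDirection(),
        0,                           # default (background) pixel value
        image.GetPixelID(),
    )

# Illustrative use on a hypothetical µCT volume
img = sitk.ReadImage("mouse_uct.nii.gz")
sitk.WriteImage(resample_to_spacing(img), "mouse_uct_0p26mm.nii.gz")
```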
Despite compelling evidence that many reliable radiomics features can be compared across the two preclinical CT-based scanners, our study had a few limitations. These included the lack of movement of the phantom between scan and rescan and the imaging of the phantom on the same day. However, mouse scans were acquired over several days, which would account for movement and day-to-day scanner variation. Phantom analysis showed that CBCT images had more reliable radiomics features than µCT scans. These data are unexpected given the difference in image quality between the scanners, yet may be explained by the higher uniformity of the phantom materials compared with tissues in vivo. Some radiomics studies have increased the complexity of phantom analysis by using textural phantoms such as the Credence Cartridge Radiomics (CCR) phantom [29]. The CCR phantom represents tissues whose texture ranges from minimally to highly varied and has been used to provide insight into the impact of scanner protocols on radiomics features [29,46,53]. Another limitation of this study was that different cohorts of mice were imaged on each scanner. To minimise uncertainties, genetically comparable immunodeficient mouse strains were used, which importantly reflects the heterogeneity seen in patient populations. Additionally, motion artefacts could potentially affect the analysis; however, this effect is likely to be minimal given the small range of motion, estimated to be <5 mm even for mobile organs (e.g. lung) [54,55]. These limitations could be addressed by using 4D CT scanners or gated image acquisition; however, neither approach is routinely available for preclinical CT imaging systems.

We demonstrated variations in the reliability of radiomics features across two CT-based preclinical scanners. Our results emphasise that, to improve the predictive potential of radiomics features, it is paramount to use features that are reliable across scanners. Despite significant differences between the scanners and their imaging parameters (e.g. imaging protocols, geometry and intensity range), normalisation steps such as standardisation of imaging energy and pre-processing factors (voxel size) can be useful to improve the comparability of CBCT and µCT scans. Our results identified tissue density-specific radiomics signatures that are transferable and reliable for the lung, heart, and bone, suggesting that preclinical CBCT and µCT scanners can both be used, and their results interchanged, for the investigation and development of radiomics imaging biomarkers. Our data indicate the potential need to optimise CT imaging protocols not only for physical parameters (contrast and noise) but also for downstream quantitative radiomics analysis.

Declaration of Competing Interests

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Fig. 2. Comparative analysis of reliable in-phantom radiomics features (ICC > 0.8) using CBCT and µCT scanners. Scan-rescan tests were conducted on both scanners and the radiomics features were extracted from segmentation volumes of 44, 92, and 238 mm³ at 40 and 60 kVp. Venn diagrams in Panel A show the number of overlapping reliable features between CBCT and µCT scanners. Panel B summarises the overlapping feature classes across multiple segmentation volumes at 40 and 60 kVp.
Fig. 3. Representative scans from both the CBCT and µCT scanners. Axial imaging panels with segmentations for lung, heart and bone are shown in red. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)

Fig. 4. Boxplots comparing the ICC scores for all radiomics features derived from CBCT and µCT at acquisition slice thicknesses of 0.26 mm and 0.09 mm (Panel A) or at a resampled voxel size of 0.26 mm (Panel B). Features are categorised by feature class and feature type, unfiltered (left) and wavelet (right), for lung, heart, and bone.

Funding

KHB is supported by a Training Fellowship from the National Centre for the Replacement, Refinement and Reduction of Animals in Research (NC3Rs, NC/V002295/1). BNK, CC, AB and KTB are funded by the Higher Education Authority North-South Research Programme 2021 RadCOL. ATB would like to acknowledge previous support from Science Foundation Ireland (SFI) and the European Regional Development Fund (ERDF) (18/RI/5759 & 13/RC/2073).

Fig. 5. Comparative analysis of reliable radiomics features (ICC > 0.8) in the lung, heart, and bone using CBCT and µCT scanners. Scan-rescan tests were conducted on both scanners and the radiomics features were extracted using slice thicknesses of 0.09 mm on µCT and 0.26 mm on both scanners. Panel A shows the number of overlapping reliable features between the scanners for the different tissues and slice thicknesses. Panel B summarises the percentages of overlapping features for the different tissues.

Fig. 6. Identification of tissue-specific radiomics features for the lung, heart, and bone. Overlapping reliable radiomics features between scanners in each tissue were compared to identify tissue-specific features. Comparisons were performed at voxel sizes of 0.26 mm and 0.09 mm (Panel A) or at the equivalent voxel size of 0.26 mm (Panel B), and tissue-specific features were categorised by feature class for lung, heart, and bone. Shared tissue-specific features identified for each tissue at original and normalised voxel sizes were compared (Panel C), and the overlapping features were categorised by feature class (Panel D).
2024-07-26T15:22:44.222Z
2024-07-01T00:00:00.000
{ "year": 2024, "sha1": "0b110cb363527583fce67017d0c587c58a38017f", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/j.phro.2024.100615", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "801d48552fab5f94b3e4a40be09ccc1d5c655284", "s2fieldsofstudy": [ "Medicine", "Physics", "Engineering" ], "extfieldsofstudy": [] }
25675893
pes2o/s2orc
v3-fos-license
Interleukin-22 contributes to liver regeneration in mice with concanavalin A-induced hepatitis after hepatectomy

AIM: To investigate the therapeutic effects and mechanisms of interleukin (IL)-22 in liver regeneration in mice with concanavalin A (ConA)-induced liver injury following 70% hepatectomy.

METHODS: Mice were injected intravenously with ConA at 10 μg/g body weight 4 d before 70% hepatectomy to create a hepatitis model, and recombinant IL-22 was injected at 0.125 μg/g body weight 30 min prior to 70% hepatectomy to create a therapy model. Control animals received an intravenous injection of an identical volume of normal saline.

RESULTS: IL-22 treatment prior to 70% hepatectomy performed under general anesthesia resulted in reductions in the biochemical and histological evidence of liver injury, earlier proliferating cell nuclear antigen expression and accelerated recovery of liver mass. IL-22 pretreatment also significantly induced signal transducer and activator of transcription 3 (STAT3) activation and increased the expression of a variety of mitogenic proteins, such as Cyclin D1. Furthermore, alpha fetal protein mRNA expression was significantly elevated after IL-22 treatment.

CONCLUSION: In this study, we demonstrated that IL-22 is a survival factor for hepatocytes and prevents and repairs liver injury by enhancing pro-growth pathways via STAT3 activation. Treatment with IL-22 protein may represent a novel therapeutic strategy for preventing liver injury in patients with liver disease who have undergone hepatectomy.

INTRODUCTION

IL-22 is an inducible cytokine of the IL-10 superfamily that was identified by the Belgian Renauld team as early as 2000 and was previously known as the IL-10-related factor from T cells [1]. IL-22 is produced by activated T cells and natural killer (NK) cells and acts via a heterodimeric receptor complex consisting of IL-22 receptor α (IL-22Rα) and IL-10 receptor β (IL-10Rβ). IL-22 has been demonstrated to exhibit a variety of effects: it appears to play an important role in inflammation and has also been noted to exert proliferative effects in a hepatocyte cell line in vitro [2-5]. In 2004, the Bin Gao team demonstrated that IL-22 expression is significantly induced in T cell-mediated hepatitis and that IL-22 blockade markedly enhances liver injury in this model, while administration of recombinant IL-22 prevents concanavalin A (ConA)-induced liver injury [5]. These findings suggest that IL-22 acts as a protective cytokine that attenuates liver injury in T cell-mediated hepatitis. Furthermore, in vitro studies have also revealed that IL-22 has no obvious toxicity in liver cell lines or primary liver cells and has proliferative and survival effects on these cells. Additionally, a recent study supported a potential therapeutic role for IL-22 as a protective factor in hepatic resection: the authors observed significant increases in hepatic IL-22 receptor expression and serum IL-22 levels after 70% hepatectomy and a significant decrease in liver regeneration after IL-22 blockade [6]. However, the precise mechanism of IL-22-mediated liver protection remains unclear. Currently, all of the evidence supporting the role of IL-22 in liver protection comes from liver injury models, and the evidence for its liver proliferative effects comes almost entirely from the simple 2/3 liver resection model without additional injury. Clinical patients undergoing hepatectomy, however, nearly always have liver disease and thus significantly decreased liver regeneration abilities. In this article, we sought to investigate the therapeutic effects and mechanisms of IL-22-mediated liver regeneration in mice with ConA-induced liver injury following 70% hepatectomy.
MATERIALS AND METHODS

Materials
Recombinant IL-22 protein was purchased from PeproTech Inc (New Jersey, United States). Anti-STAT3, Cyclin D1 and proliferating cell nuclear antigen (PCNA) antibodies were obtained from Cell Signaling Technology Inc (CST, United States). Female C57/BL6 mice were purchased from HFK Bioscience Co., Ltd. (Beijing, China).

70% hepatectomy model
Female C57/BL6 mice (6-8 wk of age, 20-25 g) were maintained under specific pathogen-free conditions with free access to water and food before each experiment. The animals were anesthetized with chloral hydrate injections. After a midline incision was created under microscopic guidance and the middle and left hepatic lobes of the liver were fully freed, 7-0 vascular sutures were used to ligate the branches of the hepatic artery and portal vein of the median and left lateral lobes of the liver. Next, the bile duct was ligated with 7-0 vascular sutures, and the gallbladder was removed. Finally, the median and left lateral lobes of the liver were resected after a 4-0 silk suture ligation was secured around the base of each lobe.

ConA-induced liver injury
ConA was injected intravenously at 10 μg/g body weight 4 d before the operation, and the 70% hepatectomy model animals received intravenous injections of identical volumes of normal saline.

ConA-induced liver injury and 70% hepatectomy model
ConA was injected intravenously at 10 μg/g body weight, and 70% hepatectomies were performed 4 d later.

IL-22 treatment model
Four days after the intravenous injections of ConA at 10 μg/g body weight, recombinant murine IL-22 was injected intravenously at 0.125 μg/g body weight 30 min prior to 70% hepatectomy. Control animals received intravenous injections of identical volumes of normal saline.

Liver weight/body weight ratio
At 32 h, 40 h, 48 h, 1 wk, and 2 wk, the mice were humanely killed under general anesthesia once moribund. The liver and body weights of all mice in each group were measured, and the liver weight/body weight ratio was then calculated to observe the liver regeneration conditions.

Examination of liver injury
To assess the damage to the hepatic parenchyma, serum alanine aminotransferase (ALT) and aspartate aminotransferase (AST) levels were measured using a serum analyzer (Cobas-Mira Plus, Roche, Mannheim, Germany). The liver specimens were fixed in 10% buffered formalin and embedded in paraffin, and the paraffin-embedded liver tissue sections were then stained with hematoxylin and eosin (HE) for histological examinations.

Immunohistochemistry
Tissue specimens were fixed in neutral buffered formalin and then embedded in paraffin. The 4-μm paraffin sections were deparaffinized in xylene and rehydrated in a graded series of alcohol.
Endogenous peroxidase was inhibited with 0.3% H2O2 in methanol. The sections were heated in a microwave oven (in 10 mmol/L citrate buffer, pH 6.0) for 20 min for epitope retrieval, followed by incubation with the primary antibody against PCNA (1:4000; Abcam). The slides were then incubated with a biotinylated bridging antibody (dilution: 1/200, DAKO) for 60 min. The sections were counterstained with Mayer's hematoxylin. PCNA antigen expression levels were evaluated by counting the positively stained cells in the portal triads of five high-power fields (HPFs) per slide, and the results are expressed as the average number of positive cells/HPF. Quantitative real-time PCR analysis RNA was extracted from snap-frozen liver tissue samples using the TRIzol reagent. Five micrograms of RNA was reverse-transcribed into cDNA using oligo-dT primers with a Superscript Ⅲ First-Strand Synthesis System (Invitrogen). Quantitative real-time PCR was performed with iCycler IQ system (Bio-Rad, Hercules, CA). The primer sequences for alpha fetal protein (AFP) gene were 5'-CAA AGC ATT GCA CGA AAA TGA G-3' (forward) and 5'-AAC AAA CTG GGT AAA GGT GAT GGT-3' (reverse). β-actin was measured as a housekeeping gene. The cycling conditions comprised a 5 min polymerase activation at 95 ℃, 40 cycles of 95 ℃ for 5 s and 60 ℃ for 30 s and a single fluorescence measurement. Melting curve analysis based on increasing the temperature from 60 to 95 ℃ at a rate of 0.5 ℃/s with continuous fluorescence measurement revealed a single, narrow peak for the suspected fusion temperature. Western blot analysis The mice were euthanized at baseline and at 32 h, 40 h, 48 h, 1 wk and 2 wk after hepatectomy, and liver samples were obtained for Western blot analyses. The proteins were extracted from the liver tissues and quantified using a protein assay (Bio-Rad Laboratories, CA). The protein samples (30 μg) were fractionated by SDS-PAGE and transferred to a nitrocellulose membrane. Immunoblotting was conducted using antibodies against STAT3 and Cyclin D1 (Cell Signaling Technology Inc., United States). The results were visualized via an enhanced chemiluminescent detection system (Pierce ECL Substrate Western blot detection system, Thermo Scientific, IL) and exposure to autoradiography film (Kodak XAR film). Statistical analysis All parametric data are presented as the mean ± SD. The data were analyzed for significance using Student's t-tests. One-way analyses of variance with Fisher's protected least significant difference (PLSD) tests were used to compare the means. The log-rank test was applied to compare survival curves. Differences were considered statistically significant at P < 0.05. Effects of IL-22 on the liver weight/body weight ratio after partial hepatectomy At 32 h, 40 h, 48 h, 1 wk, or 2 wk, the mice were humanely killed under general anesthesia once moribund. The liver and body weights were measured, and the liver weight/body weight ratios were then calculated. As illustrated in Figure 1, increases in the liver weight/body weight ratios were observed in the PHX, ConA + PHX and ConA + PHX + IL-22 groups, and all groups returned to normal liver weights by 2 w. Compared with the ConA + PHX group, the ratio of the ConA + PHX + IL-22 group increased more rapidly, and significant differences between these two groups were observed at 40 h, 48 h, 1 wk and 2 wk. Similarly, compared to the ConA + PHX group, the ratios of the PHX group increased more rapidly, and the differences reached significance at 48 h or 1 wk. 
However, the increase in the PHX group was less than that in the ConA + PHX group at 32 h. These data correlated with the cellular swelling in the liver at 32 h in the ConA + PHX group. Effects of IL-22 on hepatocyte proliferation after partial hepatectomy Hepatocyte proliferation was determined by the expression of PCNA, which is a nuclear antigen that is associated with hepatocyte proliferation. As illustrated in Figure 4, the PCNA labeling indices in the four groups began to increase postoperatively and peaked at 48 h. Compared with the ConA + PHX group, the PCNA labeling indices were significantly increased in the ConA, PHX and ConA + PHX + IL-22 groups at all of the time points, particularly at 32, 40, and 48 h. Furthermore, the PCNA levels in the ConA + PHX + IL-22 group increased to greater extents than those of the ConA and PHX groups at 32 h, 40 h and 48 h. Effects of IL-22 on AFP mRNA expression after partial hepatectomy The mice underwent 70% hepatectomy or sham laparotomy, and quantitative analysis of the liver AFP mRNA expression was performed by real-time RT-PCR. At 32 h, 40 h, 48 h, 1 wk, and 2 wk, the hepatic AFP mRNA expression levels in the ConA + PHX group were 0.55 ± 0.06, 0.93 ± 0.08, 1.72 ± 0.11, 0.66 ± 0.05 and 0.43 ± 0.04, respectively. In the IL-22 pretreatment group, the corresponding values were 0.74 ± 0.08, 1.81 ± 0.14, 2.80 ± 0.26, 0.86 ± 0.05 and 0.51 ± 0.03. Figure 5 illustrates that the AFP mRNA expression began to increase at 32 h and had significantly increased by 48 h after hepatectomy in all four groups. Additionally, the AFP mRNA levels in the ConA + PHX + IL-22 group were significantly increased Effects of IL-22 on liver damage following partial hepatectomy At 32 h, 40 h, 48 h, 1 wk and 2 wk after 70% hepatectomy, serum samples were collected, and the ALT and AST levels were measured via biochemical analyses. As illustrated in Figure 2, compared with the ConA + PHX group, the ALT and AST serum levels in the ConA + PHX + IL-22 group were reduced, and these differences were significant at all of the time points. Furthermore, with the recombinant IL-22 pretreatment, the decreases in the ALT and AST levels of the ConA + PHX + IL-22 group were significantly greater than those of the ConA and PHX groups at 40 h, 48 h and 1 wk. We also investigated the histologic features of the liver by HE staining. As illustrated in Figure 3, the HE staining of the ConA + PHX group demonstrated severe sinusoidal narrowing that was noted as early as 32 h after hepatectomy. By 48 h, swelling, nuclear condensation and laminar necrosis of the hepatocytes were observed in addition to the near-total loss of the hepatic sinusoids. By 2 wk, the hepatic sinusoids and the complete structure of the hepatic lobule were not observed in the hepatocytes. In contrast, the HE staining of the ConA + PHX + IL-22 group revealed much less evidence of injury at 48 h; the cytoplasm was preserved, and some less severe swelling was present. At 2 wk, normal nuclear morphologies, hepatic sinusoids, complete structures of the hepatic lobules and significant regeneration in the hepatocytes were observed. Figure 1 Liver weight/body weight ratios following partial hepatectomy. A and B: Increases in the liver weight/body weight ratio were observed in the PHX, concanavalin A (ConA) + PHX and ConA + PHX + interleukin (IL)-22 groups, and all groups returned to normal liver weights by 2 wk. 
Compared with the ConA + PHX group, the ConA + PHX + IL-22 group exhibited greater increases that reached significance at 40 h, 48 h, 1 wk and 2 wk ( a P < 0.05); these differences were particularly notable at 48 h and 1 wk ( c P < 0.01). The increases in the PHX group were significantly different from those in the ConA + PHX group at 48 h and 1 wk ( a P < 0.05); however, the increase in the PHX group at 32 h was less than that in the ConA + PHX group ( c P < 0.01). These data were correlated with the cellular swelling in the liver at 32 h in the ConA + PHX group. Effects of IL-22 on STAT3 and Cyclin D1 activation after partial hepatectomy The activation of the STAT3 and Cyclin D1 was measured using Western blot analysis to assess the effects of IL-22 after partial hepatectomy. As illustrated in Figure 6, no significant activation of STAT3 or Cyclin D1 was observed at 32 h after partial hepatectomy in the ConA + PHX group, and the activation increased gradually at 48 h and 2 wk. Although STAT3 and Cyclin DISCUSSION IL-22 has previously been shown to have a variety of effects. IL-22 appears to play a protective role in inflammation [7][8][9][10] and has also been demonstrated to have proliferative effects in a hepatocyte cell line [11] . Hepatectomy in T cell-mediated hepatitis induced by ConA models clinical hepatectomy with liver disease well. We demonstrated that IL-22 acts as a protective cytokine that attenuates liver injury in this model. The present study revealed that pre-treatment with IL-22 prior to hepatectomy significantly decreases the serum ALT and AST levels and increases the serum ALB level following 70% hepatectomy. Our findings suggest that IL-22 plays a protective role against liver injury in ConA-induced hepatitis following 70% hepatectomy and that IL-22 is a survival factor for hepatocytes. With the administration of exogenous IL-22, the liver weight/body weight ratio increased significantly and returned to the normal level by 2 wk. Additionally, the nuclear morphologies, hepatic sinusoids, complete structures of the hepatic lobules returned to normal, and significant regeneration was observed in the hepatocytes. In contrast, the ConA + PHX group that was not administered exogenous IL-22 exhibited swelling, nuclear condensation and laminar necrosis of the hepatocytes and a near-total loss of the hepatic sinusoids. In the liver, IL-22 plays an important role in the acute-phase response and possibly also plays a role in the promotion of liver regeneration [2,6,12,13] . IL-22 acts via a heterodimeric receptor complex that consists of IL-22Rα and IL-10Rβ [2,5,14,15] . Examinations of the downstream signaling events following IL-22 administration in the context of partial hepatectomy demonstrated an increase in STAT3 activation. A substantial volume of published evidence supports the notion of STAT3-mediated cell survival and proliferation [16][17][18][19][20][21] . Our present investigation revealed that the injection of IL-22 rapidly induced STAT3 activation in the liver and that STAT3 induced the expression of genes that are important for cell cycle progression (e.g., Cyclin D1) and concurrently significantly increased PCNA staining to eventually promote cell survival and proliferation. These findings suggest that IL-22 was also partially responsible for hepatic STAT3 activation in this model. Thus, the activation of STAT3 by IL-22 was likely responsible for the protective role of IL-22 in the hepatocytes. 
AFP is a specific marker of liver cancer tumors and is closely related to individual development, tissue regeneration, apoptosis and tumorigenesis [22][23][24][25] . The main role of AFP in liver regeneration is the regulation of hepatocyte growth. It has also been demonstrated that in the context of the synergy of various growth factors, AFP mediates cell growth regulation via an interaction with a special cell membrane receptor that results in the uptake of arachidonic acid and AFP into the cell. The process provides the necessary substrate and signal transduction for the M phase of mitosis [26,27] . Our present study found that following pretreatment with recombinant IL-22, AFP mRNA begin to be expressed from 32 h, and this expression increased significantly by 48 h after hepatectomy. This increase in expression was probably due to the loss of the negative regulation of the transcription inhibitory factor. This expression trend reflects the promotion of cell proliferation in the liver by AFP mRNA. These findings suggest that IL-22 can decrease the expression of the transcription inhibitory factor to induce the expression of AFP mRNA and provide the necessary material basis for mitosis and thus eventually promote cell survival and proliferation. In summary, the model of hepatectomy in T cellmediated hepatitis induced by ConA simulates clinical hepatectomy with liver disease accurately, and our findings suggest that IL-22 played protective and survival roles against liver injury in this model. Thus, IL-22 treatment should be considered to be a novel therapeutic option for liver injury and regeneration. Background Interleukin (IL)-22 appears to play a protective role in inflammation and has also been demonstrated to exert proliferative effects in a hepatocyte cell line; however, it has rarely been reported that the protective and proliferative effects exist simultaneously. In this article, the authors sought to investigate the therapeutic effects and mechanisms of IL-22 in liver regeneration in mice with concanavalin A (ConA)-mediated liver injury following 70% hepatectomy. Research frontiers IL-22 has been demonstrated to play a protective role in inflammation and proliferative effects in a hepatocyte cell line, however, it has rarely been reported that the protective and proliferative effects exist simultaneously. Innovations and breakthroughs In this article, the authors investigated the therapeutic effects and mechanisms of IL-22 in liver regeneration in mice with ConA-mediated liver injury following 70% hepatectomy. IL-22 was demonstrated to play a protective role and have proliferative effects together. Applications IL-22 treatment should be considered to be a novel therapeutic option for liver injury and regeneration.
2018-04-03T05:00:56.058Z
2016-02-14T00:00:00.000
{ "year": 2016, "sha1": "72f6a5da3770994ec5fa120438afad0825609343", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.3748/wjg.v22.i6.2081", "oa_status": "HYBRID", "pdf_src": "Adhoc", "pdf_hash": "b6a56b457763b0a3c6fcce56518490e1a7fe6b21", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
246113412
pes2o/s2orc
v3-fos-license
State‐wide increase in prenatal diagnosis of Klinefelter syndrome on amniocentesis and chorionic villus sampling: Impact of non‐invasive prenatal testing for sex chromosome conditions

Abstract

Background: To analyze population‐based trends in the prenatal diagnosis of sex chromosome aneuploidy (SCA) since the availability of non‐invasive prenatal testing (NIPT).

Methods: Retrospective state‐wide data were obtained for all prenatal diagnoses performed at <25 weeks gestation from 2005 to 2020 in Victoria, Australia. Non‐invasive prenatal testing became locally available from 2012. The prenatal diagnosis rates of SCA were calculated as proportions of all prenatal diagnostic tests and of all births. Statistical significance was assessed with the χ2 test for trend, with p < 0.05 considered significant.

Results: 46,518 amniocentesis and chorionic villus sampling (CVS) procedures were performed during the study period, detecting 617 SCAs. There was a significant increase in the rate of prenatally diagnosed SCAs, from 5.8 per 10,000 births in 2005 to 8.7 per 10,000 births in 2020 (p < 0.0001). This increase was predominantly due to 47,XXY cases, 91% of which were ascertained via a positive NIPT result for this condition in 2020. The prenatal diagnosis rate of 47,XXY significantly increased from 0.8 per 10,000 births in 2005 to 4.3 per 10,000 births in 2020 (p < 0.0001).

Conclusion: Screening for SCAs using NIPT has directly led to an increase in their prenatal diagnosis on a population‐wide basis, especially of 47,XXY. This has implications for clinician education, genetic counselling, and pediatric services.

Open access publishing facilitated by The University of Melbourne, as part of the Wiley - The University of Melbourne agreement via the Council of Australian University Librarians.

What does this study add?
• These are the first population-based data demonstrating the significant rise in prenatal diagnosis of sex chromosome conditions on amniocentesis and chorionic villus sampling, predominantly driven by increases in the prenatal diagnosis of 47,XXY.
• The prenatal diagnosis rate of 47,XXY has increased from one in 12,500 births to one in 2,300.
• This five-fold increase in the prenatal diagnosis of 47,XXY has significant clinical implications for health professional education and postnatal management.

| INTRODUCTION

Until recently, the prenatal diagnosis of a sex chromosome condition was most commonly an incidental finding following a diagnostic procedure performed for another indication. [1] The introduction of NIPT over the past decade has not only reduced the total number of invasive procedures performed, [2] but has also created new opportunities for women to obtain specific prenatal screening information on SCAs. Non-invasive prenatal testing became clinically available in Australia on a self-funded basis in late 2012 [3] and is now utilized by at least 20% of women electing to have screening in Victoria. [4] The most common SCA, 45,X (Turner syndrome), is the only sex chromosome aneuploidy (SCA) with the potential for detection via specific fetal ultrasound findings (cystic hygroma, hydrops fetalis, increased nuchal translucency, or cardiac and renal anomalies). The other SCAs include 47,XXY (Klinefelter syndrome), 47,XYY (Jacobs syndrome) and 47,XXX (Triple X). Children with these conditions often have a normal phenotype at birth, typically presenting across the life course with issues including developmental delay, reproductive abnormalities, and reduced fertility. [5]
Their highly variable presentation and the less accurate performance of NIPT for these conditions have created debate over whether sex chromosomes should be analyzed in NIPT. [6] A joint statement by the Royal Australian and New Zealand College of Obstetricians and Gynaecologists and the Human Genetics Society of Australasia recommended that, while there is no precedent for screening for SCA at the population level, if it is to be offered it must be done in a voluntary manner, with informed consent, including counselling about possible unanticipated findings and the option to opt out. [7] It is notable that in some countries, such as India and China, assessment of fetal sex chromosomes is not legally permitted due to concerns about sex selection. [8] Furthermore, there is significant variation in the range of conditions for which screening by NIPT is done, not only between countries, but also between different providers within the same state or country. [9] This ranges from screening only for chromosomes 13, 18, and 21, to genome-wide analysis including rare autosomal trisomies with or without sex chromosomes, as seen in countries such as Belgium and the Netherlands. [9,10]

We used a population-based prenatal diagnosis data collection to analyze trends in the definitive prenatal diagnosis of SCA on amniocentesis and CVS before and after the introduction of NIPT in 2012. We hypothesized that the prenatal diagnosis of SCAs, and 47,XXY specifically, would rise significantly after the clinical availability of NIPT.

| MATERIALS AND METHODS

The Australian state of Victoria has approximately 79,000 births annually, with an average fertility rate of 1.7 births per woman and a median maternal age of 31.5 years. [11] Women in the most advantaged regions of Victoria have a higher rate of prenatal diagnosis, with 315 prenatal diagnoses performed per 10,000 births compared to 149 per 10,000 births for women in the most disadvantaged regions. [12] Prior work from our group has shown that advantaged regions are also five times more likely to have NIPT-indicated prenatal diagnosis compared with women from disadvantaged regions. [13]

The Victorian Prenatal Diagnosis Database (VPDD) is a centralized data collection for prenatal diagnosis that captures all such testing in our population. All CVS and amniocentesis samples collected at <25 weeks gestation from 2005 to 2020 were included in this study. Indications for testing were provided by the clinical referrer. Chromosomal analysis was performed by G-banded karyotyping, fluorescence in situ hybridization, and/or chromosomal microarray. Multiple pregnancies and repeat tests were merged into single 'per pregnancy' records. Data were analyzed using STATA version 16 (StataCorp, 2019) and EpiTools (2018). [14] Annual numbers of SCAs as a percentage of total prenatal diagnostic tests were analyzed with a χ2 test for trend, with a p value of <0.05 considered significant.
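For readers who want to reproduce the trend analysis outside Stata or EpiTools, a minimal sketch of a Cochran-Armitage-style χ2 test for trend is shown below. The yearly counts are made up for illustration and the equally spaced year scores are an assumption; this is not the authors' code.

```python
import numpy as np
from scipy.stats import chi2

def chi2_test_for_trend(successes, totals, scores=None):
    """Cochran-Armitage chi-square test for a linear trend in proportions.

    successes: SCA diagnoses per year; totals: diagnostic tests (or births) per year;
    scores: ordered group scores (defaults to 0, 1, 2, ...).
    """
    r = np.asarray(successes, dtype=float)
    n = np.asarray(totals, dtype=float)
    x = np.arange(len(r), dtype=float) if scores is None else np.asarray(scores, dtype=float)

    N, R = n.sum(), r.sum()
    p_bar = R / N
    t = np.sum(r * x) - R * np.sum(n * x) / N
    var_t = p_bar * (1 - p_bar) * (np.sum(n * x ** 2) - np.sum(n * x) ** 2 / N)
    stat = t ** 2 / var_t                      # chi-square statistic, 1 degree of freedom
    p_value = chi2.sf(stat, df=1)
    return stat, p_value

# Illustrative (made-up) yearly counts: SCA diagnoses out of all diagnostic tests
sca = [25, 24, 28, 35, 48, 60]
tests = [3200, 3100, 2900, 2400, 2100, 1900]
print(chi2_test_for_trend(sca, tests))
```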
The prenatal diagnosis results and the birth registrations were not linked, so the rate per 10,000 births is not able to capture true prevalence. However, this metric was used to control for variation in annual birth rates.

| RESULTS

During the 16-year study period there were 1,117,475 births and 46,518 prenatal diagnostic procedures, identifying 617 SCAs. The most common SCA detected during the study period was 45,X (n = 294), followed by 47,XXY (n = 174), 47,XXX (n = 91) and 47,XYY (n = 53). The annual number of SCAs remained relatively stable between 2005 and 2015, ranging between 20 and 41 per annum, after which an upward trend in annual cases to a peak of 65 in 2020 was observed (Figure 1A, Supplementary Material S1). The total number of confirmed SCAs as a percentage of prenatal diagnostic tests increased significantly over the study period, from 0.8% in 2005 to 3.3% in 2020 (χ2 trend = 279.6, p < 0.0001). When analyzed as a proportion of total births, the prenatal diagnosis rate of an SCA increased from 5.8 per 10,000 births in 2005 to 8.7 per 10,000 births in 2020 (χ2 trend = 45.9, p < 0.0001) (Figure 1B). This rise in SCAs was largely driven by a significant increase in the identification of 47,XXY cases. Since 2017, 47,XXY has exceeded 45,X as the most common SCA detected on prenatal diagnosis (Supplementary Material S1). The prenatal diagnosis rate of 47,XXY significantly increased from 0.8 per 10,000 births in 2005 to 4.3 per 10,000 births in 2020 (χ2 trend = 45.2, p < 0.0001). This translates to a prenatal diagnosis rate of 8.4 per 10,000 male births in 2020. Non-invasive prenatal testing is now the most common screening method leading to a prenatal diagnosis of SCA (Figure 2). More than 80% of all SCAs were ascertained via NIPT in 2020, including 29 of 32 cases of 47,XXY (91%) (Table 1).

FIGURE 1 Prenatal diagnoses of sex chromosome aneuploidy (SCA), 2005-2020. (A) Frequency of SCA. (B) Rate of SCA per 10,000 births in Victoria. SCA, sex chromosome aneuploidy.

| DISCUSSION

Using a population-based data collection with full capture of amniocentesis and CVS procedures, we show an unprecedented rise in the prenatal diagnosis of SCA since 2016, predominantly driven by detection of 47,XXY via NIPT. While NIPT became available on a self-funded basis in late 2012, there was no statistically significant increase in the prenatal detection of SCA prior to 2016. [4] The subsequent increase likely reflects rapid uptake of NIPT after this time, including screening for SCAs.

Traditionally, only children with SCAs on the more severe end of the spectrum present with clinical features at birth, [16] or indeed present for medical care at any time across the life course, with up to 75% of males with 47,XXY remaining undetected. [17] Although the clinical phenotype cannot be predicted on the basis of chromosomal findings alone, there is evidence that early detection can be beneficial. Prospective studies of infants with 47,XXY diagnosed by newborn screening identify speech-language delays in 75% and motor skill delays in 50%; early identification and intervention for these infants result in improved neurodevelopmental outcomes. [18,19] It is currently recommended that children with a diagnosis of 47,XXY undergo comprehensive developmental assessments at 9-15 months, 18-24 months and 30-36 months, [20] which can only be facilitated by antenatal or perinatal diagnosis. Antenatal detection also affords the opportunity to actively screen for, and address, somatic outcomes such as the evolution of hypogonadism (pre- and post-puberty) and osteoporosis. [21,22] While interventions to preserve fertility in prepubertal boys with 47,XXY remain controversial, [23,24] early diagnosis does present the opportunity for discussion around potential sperm banking and early intervention in adolescence and early adulthood. [25,26]

Prenatal screening for 47,XXY therefore opens up new possibilities for anticipatory care as well as genetic counselling challenges for individuals, families and health professionals. [27] Obstetricians have historically not provided pre-test counselling for SCA, as the focus of prenatal screening has been the common autosomal aneuploidies (trisomy 21, 13 and 18). Despite the normal or mild phenotype associated with most SCAs, many couples elect to …

Our dataset does not include NIPT results per se, but only cases where a positive NIPT result was an indication for prenatal diagnosis. It was not possible to calculate the uptake of NIPT by year, as multiple private providers are involved and these data are not collected centrally. We are also unable to assess the overall uptake of prenatal diagnostic testing after a positive NIPT result for an SCA. However, our prior individual patient record linkage study, which included women undergoing NIPT in 2015 in Victoria, found that 58.7% of women with a positive NIPT result for trisomy 21 had prenatal diagnostic confirmation. [3] Significantly fewer women with a positive NIPT result for suspected SCA underwent prenatal diagnostic confirmation (36.4%, p < 0.001).

The past decade has seen an unprecedented rise in the prenatal diagnoses of SCA on amniocentesis and CVS in our population, driven by NIPT-indicated testing. There is now one prenatal diagnosis of 47,XXY for every 2,300 births, though the benefits of screening for SCA remain controversial. It is time for obstetricians and other maternity care providers to be equipped to discuss a diagnosis of 47,XXY in particular, and to have access to appropriate genetics services for post-test counselling and decision support for couples. More prospective research is required to better understand the natural history of SCA in children with a prenatal diagnosis.

This study received Human Research Ethics Committee (HREC) approval from the Royal Children's Hospital HREC (Reference No. 31135A) and Monash Health HREC (Reference No. 12063B). A waiver of individual patient consent was granted in accordance with the National Health and Medical Research Council National Statement on Ethical Conduct in Human Research 2007, Section 2.3.10.

Indication for prenatal diagnosis in cases of confirmed sex chromosome aneuploidies from 2005 to 2020. Other testing indications included: single gene testing, repeat testing, first trimester serum screening only, high-risk screening result (test not specified), no clinical notes given, and 'other' (selective reduction of multifetal pregnancy, maternal anxiety, previous child or fetus with a structural abnormality, previous or recurrent miscarriage).

Indications for prenatal diagnosis in cases of confirmed sex chromosome aneuploidy (SCA) (2005-2020). #20 cases also had an ultrasound abnormality (12 structural, 8 increased nuchal translucency). *22 cases also had an ultrasound abnormality (21 structural, 1 increased nuchal translucency). **Advanced maternal age defined as ≥37 years at the time of delivery. Other testing indications combined constituted <15% of all indications annually throughout the study period and are not depicted in this figure.

a Other SCAs included 48,XXXX; 49,XXXXY; 48,XXXY; 48,XXYY. b that many of the SCAs are not liveborn.
2022-01-21T06:16:31.011Z
2022-01-19T00:00:00.000
{ "year": 2022, "sha1": "25e65df89ae0e2bb5c4009ba33296f43699380fd", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1002/pd.6103", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "e9106df1db007af3b328b5cacc341d342cd869c2", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
221216414
pes2o/s2orc
v3-fos-license
Evaluation of plantar fascia using high-resolution ultrasonography in clinically diagnosed cases of plantar fasciitis Purpose The aim of this study was to assess the efficacy of high-resolution ultrasonography in the assessment of plantar fascia in individuals with heel pain, before and after treatment. Material and methods This study was conducted from 2016 to 2019, during which time 44 clinically diagnosed patients of plantar fasciitis were compared to 50 normal volunteers. There were 25 males and 25 females in the control group and 42 females and two males in the study group. Thirty-eight patients had unilateral disease, and six patients had bilateral disease. The thickness of the plantar fascia was measured just anterior to its calcaneal attachment using ultrasonography. Body mass index (BMI) was also calculated in both groups. Results The plantar fascia was 2-4 mm thick in the control group whereas it was > 4 mm thick in 48 heels in the study group. With cut-off of > 4 mm as diagnostic of plantar fasciitis, this study had a sensitivity of 96%, specificity of 100%, and accuracy of 98%. BMI was increased in 60% of female patients. All patients were treated with local infiltration of corticosteroid. In 37/42 patients (43 heels) who had improved clinically, the thickness of plantar fascia was reduced to < 4 mm when assessed after six weeks of corticosteroid injection. Conclusions Diagnosis of plantar fasciitis can be easily verified by ultrasonography with plantar fascia thickness > 4 mm being suggestive of plantar fasciitis. Ultrasound can also be used to evaluate treatment response. Ultrasono-graphy helps the clinician in confirming the diagnosis of plantar fasciitis and also in assessing the response to treatment. Introduction Heel pain is a common problem among the adult population [1]. There can be many causes of heel pain such as plantar fasciitis (PF), heel spur, or gout, with PF being the most common cause. Planter fasciitis affects approximately 10% of the population, with approximately 1 million people being treated annually [2,3]. Females are more commonly affected compared to males. PF is considered to be a degenerative disease of plantar fascia probably due to overuse trauma that leads to micro tears. It significantly hinders routine activities as well as athletic endeavours [4]. Plantar fascia is a strong connective tissue that extends from the os calcis to the level of the metatarsal heads [5]. It comprises three bundles: medial, central, and lateral, with the central bundle being the most commonly affected. There is also an entity called distal plantar fasciitis, which is a cause of recalcitrant heel pain [6]. Risk factors for developing plantar fasciitis include bio-mechanical factors such as severe pronation and decreased dorsiflexion of ankle, obesity, prolonged standing, walking, running, and improper footwear [7][8][9][10][11]. Rheumatoid arthritis and seronegative spondyloarthropathies are also associated with this disease [12]. The diagnosis is made clinically with the help of history and physical examination with the major symptom being pain on the plantar aspect of the heel, which increases on weight bearing. On clinical examination, there is localised tenderness, predominantly on the inferomedial aspect of the calcaneal tuberosity [13,14]. Imaging is of immense help in arriving at an appropriate diagnosis, providing adequate treatment and in assessing response to treatment. 
It has been observed that in PF the thickness of the plantar fascia is increased compared to individuals without PF. The modalities used are plain radiograph, ultrasound, or magnetic resonance imaging (MRI). Because it is a disease of soft tissues, plain radiography has been unrewarding [15]. MRI is expensive, time consuming, and is unsuitable for claustrophobic patients. Hence, ultrasonography is now being increasingly used to assess plantar fascia in patients with clinical diagnosis of plantar fasciitis. It has the added advantages of being noninvasive, cost-effective, easily accessible, good with spatial resolution for the superficial structures and evaluation of the tissues with real-time dynamics. It is also useful in guiding treatment. Ultrasound-guided local injection has been shown to produce better pain relief as compared to injection by palpation alone [16,17]. It can also be used in guiding shockwave therapy and successive follow-up of the patients [18,19]. The aim of this study was to assess the efficacy of ultrasonography in the assessment of plantar fascia in individuals with plantar fasciitis. Material and methods The current study was a prospective study. Approval of the Institutional ethics committee was obtained. The study was conducted from January 2016 to May 2019. It consisted of two groups: a control group and a study group. The control group comprised 50 asymptomatic volunteers (100 heels), and the study group comprised 44 patients (50 heels). Written informed consent was obtained from all the individuals included in the control and study groups. In the control group there were 25 males and 25 females. Volunteers in these two sub-groups were age matched. It was designed to establish normal thickness of the plantar fascia at its attachment to the calcaneal tuberosity (within 1 cm) in the asymptomatic population in the region. The majority of these volunteers presented to the department of radiodiagnosis for ultrasonography of a region other than the heel. Patients who had any past history suggestive of heel pain, systemic disease such as rheumatoid arthritis, gout, or had sustained any injury to the heel were excluded. The study group consisted of 44 patients (50 heels); six patients had bilateral disease and 38 patients had unilateral disease. There were 42 females and two males, with ages ranging between 40 and 58 years. Inclusion criteria in the study group were chronic heel pain (> 3 months) and heel pad tenderness on clinical examination. Exclusion criteria were patients who had systemic inflammatory arthritis and neuromuscular disease. Diagnosis was made on the basis of history and clinical examination. Ultrasonography examinations of all patients in both groups was performed by two radiologists. Each radiologist took measurements twice. The mean of the two values was taken into consideration. It was performed with a linear 17-5 MHz probe (Philips iU22, Bothell, WA, USA). Patients were asked to lie prone with feet hanging from the edge of the table, and their ankles were placed in dorsiflexion. Care was taken to maintain the ultrasound beam perpendicular to the plantar fascia so that anisotropy could be avoided. Calcaneal attachment was better appreciated on sagittal images. Plantar fascia appears as a "hyperechoic band with linear fibres" on the background of a hypoechoic matrix [20]. The thickness of the plantar fascia was measured within 1 cm of the calcaneal attachment. 
Once the range of normal thickness of plantar fascia was established in the control group, the study on symptomatic patients was carried out. Body weight and body height were measured in both the groups, and the body mass index (BMI) was calculated. Statistical analysis After collecting the data, it was entered in a Microsoft Excel spreadsheet. Mean, standard deviation, and standard error were calculated for quantitative data. Frequency and percentages were calculated for qualitative data. Data was analysed by using "IBM SPSS STATISTICS" (version 16.0). Analysis was done by using Student's t-test and χ 2 test. All statistical tests were applied at a significance level of α = 0.05 (p value < 0.05). Results There were 25 males and 25 females, with ages varying from 40 to 65 years (mean age was 38.22 ± 8.38 years) in the control group. The ages of the patients in the study group ranged between 40 and 58 years (mean age was 36 ± 4.24 years). There were 42 females and two males ( Table 1). There was no statistical difference between both the groups with respect to age (p = 0.721). Ultrasound examinations of all patients were performed by two radiologists, with each radiologist taking measurements twice. The mean of the two values was taken into consideration. Analysis of collected data showed excellent intra-observer agreement with an intraclass correlation coefficient (ICC) value of 0.839 (95% confidence interval [CI]: 0.752-0.901) and an excellent inter-observer agreement with an ICC value of 0.842 (95% CI: 0.762-0.907). In the control group, the minimal and maximal thickness of plantar fascia (right or left) was 2.7 ± 0.4 mm and 3.1 ± 0.8 mm, respectively, with mean thickness of 2.9 ±0.7 mm (Figure 1). In the study group the mean thickness of the plantar fascia was 5.2 ± 1.13 mm on the right side and 5.3 ± 1.24 mm on the left side ( Figure 2). The maximal thickness of the plantar fascia (right or left) in these patients was 6.2 ± 1.09 mm, and the minimal thickness of the plantar fascia (right or left) was 4.7 ± 0.4 mm ( Table 2). In 48/50 heels of study (42/44 patients) group participants the plantar fascia thickness was found to be > 4 mm. The thickness of the plantar fascia on the affected side was increased in 36 patients with unilateral involvement as compared to the uninvolved side, whereas in two patients the thicknesses of the plantar fascia on the affected side was 3.82 mm and 3.9 mm as compared to 3.10 mm and 2.80 mm, respectively, on the normal side. The receiver operating characteristic (ROC) curve was analysed ( Figure 3). Area under the curve was 0.950 (95% CI: 0.881-1.020). A cut-off value of 4 mm of plantar fascia thickness provided sensitivity of 96% and specificity of 100%. The mean BMI for the study group was 28.76 ± 2.23 kg/ m 2 whereas the mean BMI in the control group was 25.67 ± 2.47 kg/m 2 . There was a significant difference in BMI between both groups (p = 0.03). The BMI was increased (> 25 kg/m 2 ) in 25/42 (60%) female patients. All symptomatic patients were given local injection containing a mixture of 4 ml of local anaesthetic bupivacaine and 1 ml (40 mg) of corticosteroid methylprednisolone without ultrasound guidance. These patients were re-evaluated with ultrasonography after six weeks of local steroid injection. In 37 patients (43 heels) where there was complete resolution or significant improvement in symptoms, the thickness of the plantar fascia was reduced to < 4 mm (Figure 4). 
In six patients (six heels) with no notable improvement in symptoms, there was no significant decrease in the plantar fascia thickness as compared to pre-injection thickness. In one patient (one heel) in whom there was no relief of symptoms, the plantar fascia thickness was found to be decreased, i.e. 3.8 mm compared to 4.9 mm before injection. The plantar fascia was hypoechoic in all the study subjects, and it was of normal echogenicity in the control subjects. The outline of the plantar fascia was sharp in control subjects and was indistinct in all the study subjects. Fluid collection, intratendinous calcifications, or rupture of the plantar fascia were not seen in any of the patients in this study. Discussion The plantar fascia is a tough connective tissue that helps to maintain the longitudinal arch of the foot. It is the "tendon aponeurosis" for the superficial layer of the intrinsic muscles of the foot. It absorbs and disperses the loading/weight-bearing forces across the mid-foot joints and helps during gait [21]. PF is the commonest cause of chronic heel pain. It is seen in individuals who do a lot of physical exertion and is common in middle-aged women. It is related also to repetitive micro-trauma. Tarsal tunnel syndrome, osteomyelitis, or stress fracture of calcaneum, gout, and subcalcaneal bursitis are the differential diagnoses [22]. Because the plantar fasciitis is mainly diagnosed clinically, the role of imaging modalities is debated and is generally used to rule out other alternative/rare diseases. Because it is a disease of soft tissues, MRI or ultrasonography is the preferred modality of imaging in these patients. Plain radiography can be used to supplement these modalities. MRI is the imaging modality of choice in confirming the diagnosis of plantar fasciitis. However, ultra-sonography has the advantages of easy accessibility, lower cost, it is relatively fast, and has very good spatial resolution for superficial structures; hence, it is used increasingly in the diagnosis of plantar fasciitis [23,24]. There are few studies on the role of ultrasound elastography, which can detect initial changes in plantar fascia stiffness before the detection of findings on routine ultrasound [25]. Softening of plantar fascia is seen with ageing and in individuals with plantar fasciitis [26]. A study conducted by Lee et al. revealed that plantar fascia softening did not differ significantly between controls and subjects with plantar fasciitis in older individuals, while it differed significantly in a younger group [27]. Some studies suggested that when sonoelastography was combined with routine B-mode ultrasonography in the diagnosis of plantar fasciitis there was a significant increase in diagnostic accuracy [25,28]. Sonoelastography may also be useful in monitoring the response to treatment as studied by Kim et al. because it can detect increased stiffness of the plantar fascia [29]. The aim of this study was to assess the role of diagnostic ultrasonography in establishing the diagnosis of plantar fasciitis. In this study, the majority of patients (42/44) were female, which is in concordance with the results published by Ozdemir et al., where plantar fasciitis was more common in females as compared to males [22]. The ages of the patients in the present study ranged from 40 to 58 years, which is similar to the findings reported by Khalifa et al. where PF was observed in the age group 40-60 years [30]. 
Increased body weight has been implicated as a causative factor in the evolution of PF. Our study showed that the mean BMI for symptomatic group (study group) was significantly higher, i.e. 28.76 ± 2.23 kg/m 2 , as compared to 25.67 ± 2.47 kg/m 2 in the control group. This difference was statistically significant. This is in agreement with the study conducted by Sabir et al. where the BMI (≥ 25 kg/m 2 ) was significantly higher in the patient group [31]. Increase in thickness and/or hypoechogenicity of plantar fascia, perifascial oedema, fluid collection, intratendinous calcifications, and rupture of the plantar fascia are findings associated with plantar fasciitis. Many studies have used combinations of these for the diagnosis of plantar fasciitis [16,32,33]. In our study, the thickness of the plantar fascia in the control group was in the range 2-4 mm (mean 2.9 ± 0.7 mm). In none of the individuals in this group, the thickness of plantar fascia was > 4 mm. Thickness of plantar fascia was increased in 48/50 heels in the study group (42/44 patients), where the maximal thickness was 6.2 ± 1.09 mm and minimal thickness was 4.7 ± 0.4 mm. With ≥ 4 mm of thickness of plantar fascia as the cut-off, ultrasonography was diagnostic of PF in 42/44 patients (48 heels), giving a sensitivity of 96%, specificity of 100%, and accuracy of 98%. Similar results were reported by Wearing et al. and Akfirat et al., where the mean thickness of the plantar fascia in symptomatic patients was 6.1 ± 1.43 mm and 4.8 ± 1.52 mm, respectively [34,35]. All patients in our study with plantar fasciitis showed hypoechogenicity of plantar fascia, as described by Tsai et al. in their study [36]. Another finding that was seen in patients of plantar fasciitis in our study was loss of sharp outline due to perifascial oedema, as described by Akfirat et al. and Gibbon et al. in patients with plantar fasciitis [32,33]. None of the patients in our study had fluid collection or intratendinous calcifications or rupture of the plantar fascia. Another finding that can be seen in patients with plantar fasciitis is increased vascularity of plantar fascia, which can be assessed by colour Doppler. Colour Doppler ultrasound can identify hyperaemia in the plantar fascia and perifascial tissues. However, it can be better visualised by power Doppler [12,37]. The study conducted by Walther et al. revealed that moderate or marked hyperaemia is seen in individuals with acute plantar fasciitis but not in chronic plantar fasciitis [38]. Thus, power Doppler can be used with routine B-mode ultrasound in the diagnosis of plantar fasciitis and also in determining whether the fasciitis is acute or chronic. However, further studies are needed to validate these findings. All patients included in the study group were treated with local steroid injection without ultrasound guidance. Follow-up ultrasounds performed at six weeks after the injection revealed that that there was a decrease in the thickness of the plantar fascia in almost all patients who showed significant improvement in symptoms. Thus, it can be used to diagnose as well as to assess the response to treatment. Conclusions Diagnosis of plantar fasciitis can be easily confirmed with ultrasonography. Thickness > 4 mm, indistinct margins, and hypoechogenicity are diagnostic. It is low cost, easily available, highly accurate, and has high patient acceptance due to its noninvasive nature. Ultrasonography is highly sensitive and specific in diagnosing plantar fasciitis. 
Ultrasound provides adequate detail and information to the practicing clinician to confirm the primary diagnosis of plantar fasciitis. It can also assess the response to treatment (post local steroid injection) and helps the clinician in deciding the management regimens and follow-up. However, the major limitation of ultrasonography is that it is operator dependent.
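As a worked illustration of the cut-off analysis reported in the Results above (a 4 mm thickness threshold yielding 96% sensitivity, 100% specificity, and 98% accuracy), the following is a minimal Python sketch showing how sensitivity, specificity, and accuracy follow from a fixed threshold. The thickness values and labels below are illustrative placeholders, not the study data.

```python
# Minimal sketch (not the study data): evaluating a fixed plantar fascia
# thickness cut-off for diagnosing plantar fasciitis.
import numpy as np

# Illustrative measurements in mm; label 1 = symptomatic heel, 0 = control heel.
thickness = np.array([5.1, 4.7, 6.0, 3.8, 5.4, 4.9, 2.9, 3.1, 2.7, 3.3, 2.8, 3.0])
label     = np.array([1,   1,   1,   1,   1,   1,   0,   0,   0,   0,   0,   0])

cutoff = 4.0                           # threshold used in the study (mm)
positive = thickness >= cutoff         # test is positive if fascia is at least 4 mm thick

tp = np.sum(positive & (label == 1))   # symptomatic heels correctly flagged
fn = np.sum(~positive & (label == 1))  # symptomatic heels missed by the cut-off
tn = np.sum(~positive & (label == 0))  # control heels correctly excluded
fp = np.sum(positive & (label == 0))   # control heels wrongly flagged

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
accuracy = (tp + tn) / label.size
print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} accuracy={accuracy:.2f}")
```

In practice the threshold itself would be chosen from an ROC analysis over the measured cohort, as was done in the study.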
Complete Chloroplast Genome of Japanese Larch ( Larix kaempferi ): Insights into Intraspecific Variation with an Isolated Northern Limit Population : Research Highlights: The complete chloroplast genome for eight individuals of Japanese larch, including from the isolated population at the northern limit of the range (Manokami larch), revealed that Japanese larch forms a monophyletic group, within which Manokami larch can be phylogenetically placed in Japanese larch. We detected intraspecific variation for possible candidate cpDNA markers in Japanese larch. Background and Objectives: The natural distribution of Japanese larch is limited to the mountainous range in the central part of Honshu Island, Japan, with an isolated northern limit population (Manokami larch). In this study, we determined the phylogenetic position of Manokami larch within Japanese larch, characterized the chloroplast genome of Japanese larch, detected intraspecific variation, and determined candidate cpDNA markers. Materials and Methods: The complete genome sequence was determined for eight individuals, including Manokami larch, in this study. The genetic position of the northern limit population was evaluated using phylogenetic analysis. The chloroplast genome of Japanese larch was characterized by comparison with eight individuals. Furthermore, intraspecific variations were extracted to find candidate cpDNA markers. Results: The phylogenetic tree showed that Japanese larch forms a monophyletic group, within which Manokami larch can be phylogenetically placed, based on the complete chloroplast genome, with a bootstrap value of 100%. The value of nucleotide diversity ( π ) was calculated at 0.00004, based on SNP sites for Japanese larch, suggesting that sequences had low variation. However, we found three hyper-polymorphic regions within the cpDNA. Finally, we detected 31 intraspecific variations, including 19 single nucleotide polymorphisms, 8 simple sequence repeats, and 4 insertions or deletions. Conclusions: Using a distant genotype in a northern limit population (Manokami larch), we detected su ffi cient intraspecific variation for the possible candidates of cpDNA markers in Japanese larch. Introduction The chloroplast genome is highly conserved and has a much lower mutation rate than the nuclear genome [1]. Chloroplast DNA (cpDNA) has been widely used to clarify interspecific relationships, and to evaluate the magnitude of intraspecific variation [2,3]. The cpDNA of gymnosperms, particularly of the conifers, is characterized by high levels of intraspecific variation [4,5] and paternal inheritance [6]. Uppsala, Sweden) gradient for chloroplast isolation and the DNeasy Plant Mini kit (QIAGEN, Hilden, Germany) for DNA extraction. The cpDNA sequence reads were obtained using the Illumina platform. CLC Genomics Workbench 9.5.3 software (CLC bio, Aarhus, Denmark) was used for genetic analysis. After trimming low-quality sequences from the reads, bulked reads for all eight individuals were used to determine the draft consensus sequence for L. kaempferi. Reference mapping to L. gmelinii var. japonica (LC228570; [11]) was performed with parameter settings of mismatch cost 3, In/Del cost 3, length fraction 0.9, and similarity fraction 0.9. The complete chloroplast genome for each sample was then determined by mapping reads for each sample to our consensus sequence, using the same parameter settings as described above. The initial annotation of the chloroplast genome was performed using DOGMA [21]. 
Prediction of tRNA genes was performed using tRNAscan (http://lowelab.ucsc.edu/tRNAscan-SE). The annotation was finalized, with reference to that of L. gmelinii var. japonica (LC228570). To estimate the pseudogenes of ndh (subunits of an NADH dehydrogenase), we referred to the Pinus thunbergii chloroplast genome (NC_001631; [9]). REPuter [22] was used to confirm repetitive sequences in the chloroplast genome, (i.e., tandem repeats, duplicated genes, and IR regions). Finally, the gene map of the circular chloroplast genome of Japanese larch was drawn using OrganellarGenomeDRAW [23]. A phylogenetic tree was constructed based on the chloroplast genome sequences of the eight individuals identified in this study, and of MF990369 in the NCBI database (https://www.ncbi.nlm. nih.gov/) for Japanese larch, as well as five reference sequences of related Larix species derived from the database: LC228570 (L. gmelinii var. japonica), MF990370 (L. gmelinii var. olgensis), NC_016058 (L. decidua), KX880508 (L. potaninii), and NC_036811 (L. sibirica). The alignment of these fourteen chloroplast genome sequences was performed in MAFFT [24], and the final alignment was checked using CLC Genomics Workbench 9.5.3. A phylogenetic tree was constructed by MEGA X [25], based on maximum likelihood (ML) methods. A total of 1000 bootstrap replicates were applied to evaluate the branch supports. The SNP data from the eight Japanese larch individuals was used for subsequent analyses. Haplotype networks have been demonstrated to show alternative genealogical relationships at the intraspecific population level, with low divergence [26]. We estimated haplotype networks for the chloroplast data using the software Network 10 (https://www.fluxus-engineering.com/sharenet.htm). Nucleotide diversity can be used as an inference parameter for evolutionary and demographic forces [12]; here, nucleotide diversity was calculated using DnaSP v6 software [27] and estimated as π. Divergent regions of the chloroplast genomes were identified according to the variation in π, by sliding window analysis, with a 500 bp step size and 10,000 bp window length. Phylogenetic Analysis The phylogenetic tree showed that Japanese larch forms a monophyletic group, within which Manokami larch can be phylogenetically placed based on the complete chloroplast genome, with a bootstrap value of 100% ( Figure 1). Japanese larch is genetically close to L. decidua and L. gmelinii, but distant from L. sibirica and L. potaninii ( Figure 1). The haplotype network among the eight sampled individuals revealed that Manokami larch was genetically distinct from other Japanese larches ( Figure 2). Characteristics of the Japanese Larch Chloroplast Genome Japanese larch circular chloroplast genomes were characterized in the range of 122,394-122,409 bp with accession numbers from the DNA Data Bank of Japan (DDBJ) from LC574969 to LC574976. The gene type, number, and order were identical among the Japanese larch chloroplast genomes used in this study. Lk_Ho1 (LC574969) was used as a representative of Japanese larch; Figure 3 illustrates the physical mapping of its chloroplast genome, which contained a pair of IRs (436 bp each) separated by large single copy (LSC), and small single copy (SSC) regions, of 65,398 bp and 56,136 bp, respectively. 
The trnI-CAU gene was duplicated within inverted repeats, the trnS-GCU and psbI genes were duplicated as another inverted repeat of 457 bp in the LSC region (Figure 3), and two trnT-GGU genes were dispersed in the LSC and SSC regions, respectively. A total of 119 genes were identified (Table S1), including 72 protein genes, 35 transfer RNA genes, 4 ribosomal RNA genes, and 8 pseudogenes. Thirteen genes contained an intron, including trnA-UGC, trnG-UCC, trnI-GAU, trnK-UUU, trnL-UAA, trnV-UAC, rpoC1, rps12, rpl2, rpl16, petB, petD, and atpF. In addition, ycf3 contained two introns. Furthermore, rps12 was a trans-splicing gene with 5′ end and 3′ end exons, located in the LSC region and the SSC region, respectively. 
The G+C content of the complete chloroplast genome of Japanese larch was 38.7%. Figure 3. Genes shown outside and inside the circle are transcribed clockwise and counterclockwise, respectively. Genes were colour-coded to distinguish different functional groups. The dark and light gray inner circle indicates the GC and AT content of the chloroplast genome, respectively. "†" represents the location of a longer inverted repeat. A, B and C represent hotspots of variation. Nucleotide Diversity Analysis The value of nucleotide diversity (π) was calculated at 0.00004, based on SNP sites for Japanese larch, suggesting that sequences had low variation. As shown in Figure 4, there were three divergent regions (A, B, and C) in Japanese larch. Two regions (A, C), which were roughly in the range of the rpl16 gene to psaB and the rbcL gene to psbA, respectively, were classified as moderately variable (π > 0.00004); these regions contained variant sites in psaB, rpl16, ψndhK, atpB, psbK, and matK, and three intergenic spacers (between the rpl23 and psbA-partial genes, between the trnS-GGA and ycf3 genes, and between the trnS-GCU and trnT-GGU genes). The B region (roughly from chlL to rpl32, π > 0.0001) was identified as a hypervariable region, in which mutations occurred twice in the ψndhD and the ycf1. 
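The sliding-window scan of π described above (a 10,000 bp window advanced in 500 bp steps) can be illustrated with a short Python sketch. The toy alignment below stands in for the eight aligned chloroplast sequences, and the estimator is the plain average pairwise difference per site, so it is only an approximation of the DnaSP calculation; the window and step parameters are taken from the Materials and Methods.

```python
# Minimal sketch (toy data, not the real alignment): sliding-window
# nucleotide diversity (pi) over a multiple sequence alignment.
from itertools import combinations

def window_pi(aligned_seqs, start, end):
    """Average pairwise difference per site within alignment columns [start, end)."""
    pair_values = []
    for a, b in combinations(aligned_seqs, 2):
        compared = 0
        diffs = 0
        for x, y in zip(a[start:end], b[start:end]):
            if x in "ACGT" and y in "ACGT":   # skip gaps and ambiguous bases
                compared += 1
                diffs += x != y
        if compared:
            pair_values.append(diffs / compared)
    return sum(pair_values) / len(pair_values) if pair_values else 0.0

def sliding_pi(aligned_seqs, window=10_000, step=500):
    """Per-window pi values across the alignment, as (window_start, pi) pairs."""
    length = len(aligned_seqs[0])
    return [(s, window_pi(aligned_seqs, s, min(s + window, length)))
            for s in range(0, length, step)]

# Toy alignment of three short sequences, just to exercise the code.
toy = ["ACGTACGTACGTACGT", "ACGTACGTACGAACGT", "ACGTACCTACGTACGT"]
print(sliding_pi(toy, window=8, step=4))
```

Windows whose π value exceeds a chosen level (for example π > 0.0001, as for the hypervariable B region) can then be flagged as candidate hotspots.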
Repeat Sequence Analysis of the Japanese Larch Chloroplast Genome Tandem repeats were detected in approximately 25 sites in the Japanese larch chloroplast genomes. Repeated lengths of tandem repeats varied from 12 to 117 bp, and 64% of all tandem repeats occurred in ycf1, which belongs to a protein-coding region containing 76% of all detected tandem repeats. Nineteen SSR motifs were detected in the Japanese larch chloroplast genome. The majority of the detected SSR motifs were mononucleotide motifs, of which the SSR motif of mononucleotide T was the most frequent, followed by mononucleotide A and mononucleotide G. With the exception of three SSR motifs of dinucleotide AT, no other multiple nucleotide motifs were detected in the Japanese larch chloroplast genome. Furthermore, most (77.8%) of the detected SSR motifs were found in the intergenic region, followed by introns (16.7%), and protein-coding genes (5.5%). Eight cpSSR variants out of nineteen cpSSR motifs were detected in the intergenic region of the chloroplast genome of Japanese larch. Genetic Variation among Japanese Larch Chloroplast Genomes Among the eight individuals sequenced in this study, 31 variants (including 19 SNPs, 8 SSRs, and 4 In/Dels) were detected. For SNP variants, six and thirteen SNPs were identified in the intergenic spacer (IGS) and coding sequence (CDS) regions, respectively. These were detected in the ψndhK (one SNP), ψndhD (two SNPs), and protein-coding ycf1 (two SNPs), all of which belong to the SSC region. Six SSR variants were identified in the IGS, whereas two SSR variants were detected in the CDS region. Four In/Del variants were identified in the ψndhK gene (one In/Del variant) and the ycf1 gene (three In/Del variants), belonging to the CDS region (Table 1). Discussion The phylogenetic position of Manokami larch has been discussed by several researchers [14,16,17]. This study clearly indicates that Manokami larch should be phylogenetically categorized into Japanese larch, with a bootstrap value of 100% (Figure 1). Our findings support the assertion by Shiraishi et al. [15] that Manokami larch must be a Japanese larch. Manokami larch is located far from other Japanese larches (Figure 2); genetically divergent genotypes, such as that of Manokami larch, could be used to efficiently detect intraspecific variation in Japanese larch. The chloroplast genomes of Japanese larch obtained from this study were similar in size and gene order to those of L. gmelinii [11], L. sibirica [19], L. decidua [30], and L. potaninii [18]. The chloroplast structure types were classified in Pinaceae according to their alignment order and the orientation of the F1 (fragment flanked by trnG-UCC and trnE-UUC), F2 (fragment flanked by clpP and trnT-GGU), T1 (type 1 Pinaceae-specific repeat containing trnS-GCU and psbI), and T2 (type 2 Pinaceae-specific repeats in intergenic spacers) fragments in the LSC region, which can produce eight different cpDNA forms, including A, B, C, D, E, F, and G forms [30]. The chloroplast DNA form used in this study was classified into the C form, the same form identified for L. gmelinii, L. decidua, L. 
griffithiana, and Pinus elliottii [11,30] based on the alignment order and orientation of T1, T2, −F1 (reverse strand), T2, +F2 (forward strand), and T1. Due to this T1 repeat, there were longer inverted repeats (457 bp) in the LSC region than two IRs (436 bp) in Japanese larch. Extremely shortened IRs, with another pair of inverted repeats, is regarded as a common feature in Pinaceae [30,31]. In this study, three hotspots of variation were detected throughout the entire chloroplast genome (Figures 3 and 4). The ycf1 and ψndhD were included in the hypervariable region (region B), and the ψndhK was included in the moderately variable region (region A). Three In/Del variants occurred in the ycf1, and previous research has reported insertions or deletions in the ycf1 of L. gmelinii [11]. Although it was considered a possibility that the ycf1 might be a nonfunctional pseudogene, another study [32] indicated that ycf1 is a functional gene, and encodes a product essential for cell survival. Dong et al. [33] revealed that the divergence of the ycf1 was obvious in gymnosperms. Additionally, Firetti et al. [34] indicated that the ycf1 was more divergent than the non-coding regions in the genus Anemopaegma. Regarding pseudogenes, eleven ndh genes (ndhA-K) have been identified in the cpDNA sequences of photosynthetic land plants [9,35,36]. In our study, five ψndh genes were found only in Japanese larch, of which ψndhD (two SNPs) and ψndhK (one SNP, one SSR, one In/Del variant) belonged to the region of frequent variation; these genes did not, however, exhibit a function consistent with other Pinus and Larix species [9,11]. Repeat sequences may play an important role in chloroplast genome arrangement and sequence divergence. In particular, tandem repeats may induce In/Dels [37,38]. In this study, tandem repeats were primarily identified in the ycf1. Tandem repeats were also located in the ycf1 of other conifers, such as Cryptomeria japonica [39] and L. gmelinii [11]. Among the eight Japanese larch individuals, we detected 31 variants (19 SNPs,8 SSRs, and 4 In/Dels) located in psaB, ψndhD, ψndhK, psbE, psbK, rpoC1, rpoC2, the intron of rpl16, matK, atpB, ycf1, six intergenic spacers (between the rpl23 and psbA-partial genes, between the trnS-GGA and ycf3 genes, between the trnS-GCU and trnT-GGU genes, between the clpP and trnE-UUC genes twice, between the psbE and petL genes) with SNPs, the intron of atpF, ψndhK, six intergenic spacers (between the trnC-GCA and rpoB genes, between the ycf1 and rps15 genes, between the trnL-CAA and ycf2 genes, between the ψycf2 and trnV-GAC genes, between the trnT-GGU and trnV-UAC genes twice) with SSRs, ψndhK and ycf1 with In/Dels that could prove useful for providing candidate cpDNA markers. Chloroplast simple sequence repeat (cpSSR) markers often contain highly polymorphic variations within a population of conifers (see [7]), although Zhang et al. [40] found only three polymorphic cpSSR markers among 11 candidate markers in Japanese larch. We identified 19 SSR motifs within the chloroplast genome of Japanese larch, preferentially within the intergenic space, and only 8 SSR motifs occurred among 19 SSR motifs in the intergenic region of the chloroplast genome. These results lay a foundation for the development of cpDNA markers for Japanese larch. Conclusions The complete chloroplast genome of Japanese larch (122,398-122,409 bp) was obtained using next-generation sequencing technology. 
The comparison of whole chloroplast genomes clearly indicated that the isolated population, forming the northern limit of the species' range (Manokami larch), should be placed phylogenetically within Japanese larch. The Manokami larch was found to be genetically different from other Japanese larches, indicating that sufficient genetic variation should be detected within the samples used in this study. Based on an analysis of intraspecific variation, 31 variants were detected, including 19 SNPs, 8 SSRs, and 4 In/Dels, all of which can be applied for the development of cpDNA markers. These variations should be useful for paternity analysis and population genetics analysis of Japanese larch in future studies.
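To make the cpSSR screening mentioned in the Results concrete, the following minimal Python sketch scans a sequence for mononucleotide and dinucleotide repeat motifs with regular expressions. The minimum repeat lengths and the toy sequence are illustrative choices, not the thresholds or data used in the study.

```python
# Minimal sketch: naive scan for mono- and dinucleotide SSR motifs in a
# chloroplast sequence. The thresholds below are illustrative only.
import re

def find_ssrs(seq, min_mono=10, min_di=6):
    """Return (start, unit, repeat_count) tuples for mono- and dinucleotide SSRs."""
    seq = seq.upper()
    hits = []
    # mononucleotide runs, e.g. TTTTTTTTTT
    mono = re.compile("|".join(f"{b}{{{min_mono},}}" for b in "ACGT"))
    for m in mono.finditer(seq):
        hits.append((m.start(), m.group(0)[0], len(m.group(0))))
    # dinucleotide repeats such as (AT)n; homopolymer units are handled above
    units = [a + b for a in "ACGT" for b in "ACGT" if a != b]
    di = re.compile("|".join(f"(?:{u}){{{min_di},}}" for u in units))
    for m in di.finditer(seq):
        hits.append((m.start(), m.group(0)[:2], len(m.group(0)) // 2))
    return sorted(hits)

toy = "G" * 3 + "T" * 12 + "CAG" + "AT" * 7 + "CCG"
print(find_ssrs(toy))  # expect a 12-bp T run and an (AT)7 repeat
```

Motifs recovered in this way, such as mononucleotide T runs or (AT)n repeats in intergenic spacers, are the kind of loci that could be taken forward as candidate cpSSR markers.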
Wind-Induced Vibration of an Irregular Pentagon Lamella At present, there are increasingly encountering the use of lamellar structures, for example on the roofs of buildings, which, in addition to their visual function, also fulfil the function of reducing the flow of wind into the roof space. These structures are often designed as long and subtle structures and therefore their very common problem is unwanted vibration. In this article, the main focus is to show the methodology of the determination of the effects of wind on the lamella of the shape of an irregular pentagon. A real-size model made of steel with a total length of 2 m and a weight of 7.4 kg was used. Its size and shape were influenced by several factors which are specified in more detail in the paper. In the wind tunnel experiment, it was very important to ensure the exact position of the model and also to secure both ends of the model against shifting (to replicate fixed ends). Dynamic response of the structure in two directions together with wind speed were measured simultaneously. To investigate the wind effects by numerical analysis, fluid-structure interaction software simulation (FSI) on a full-size model was used. The main pitfall of the software solution was to get as close as possible to the conditions of the wind tunnel. The actual wind speed measured under laboratory conditions was used as the input wind speed for FSI simulation. The material of the model and the shape of the model was set in software simulation to be as close as possible to the real structure. Subsequently, other boundary conditions were set and the solution process was executed. The biggest problem, especially in terms of comparing the results of both approaches which greatly affected the results, was the very high stiffness of the model. Due to the extent and interconnectedness of results, findings are presented in more detail in the conclusions of the paper. The methodology of setting up a relatively complex FSI simulation, its results, as well as new findings that we came up with if the measurement of the dynamic effects of wind is the matter of interest are presented in this paper. Introduction The main focus of this paper is to show the methodology of the determination of the effects of wind on the lamella in the shape of an irregular pentagon. Design, light-weight, subtle, irregularly shaped structures of roof commonly called lamella are now widely used. Structures of this type are directly exposed to wind and this fact should be also taken into account in their design. In contrast to flexible cable-type structures, which are still given a lot of attention [1,2], the effects of wind on the lamellas are often overlooked. Lamellar structures are not given sufficient attention and dominate studies that examine the effects of wind on the building as a whole. An example is [3]. Lamellar structures are stiffer compared to cables, therefore the fatigue failure is a very common type of damage. Empirically it is IOP Publishing doi: 10.1088/1757-899X/1203/3/032111 2 almost impossible to determine the range of the oscillation of the irregularly shaped lamella and thus greater emphasis should be placed on the other methods of analysis of wind-induced vibration. Wind-induced vibration can be investigated experimentally, by software simulations or by empirical relations. Each of these methods has its advantages and disadvantages. This paper builds on our research and our previous work presented in [4]. 
If we want to analyse wind-induced vibrations in a wind tunnel, it is important to maintain dynamic properties of the model and the real structure. It mostly leads to the need to examine the model in real scale with real material properties. Therefore, examined structure is limited by the size of the wind tunnel. As in the previous case, the research is very often limited to flexible cable-type structures whose shape is most often a circle [5]. FSI software analyses are slowly growing in popularity. Some examples are [6,7,8,9]. As mentioned in [10] "the solution of the aerodynamic task is a very complex issue." Two-way FSI software simulation, can evaluate the time-history of the structural response together with the time-history of wind flow. The analysis works on the recurring principle. First computational fluid dynamics (CFD) analysis calculates results of wind pressure acting on analysed structure. This result in the form of a wind load is taken over by structural analysis module where variables e.g., deformation and acceleration are calculated. The whole process repeats and the deformed structure is subjected to CFD analysis again. Empirical relationships have been used mostly in the past for vibration analysis. There are methodologies in [11,12,13] for analysis of wind-induced vibrations of basic geometric shapes such as rectangle, circle etc. These relations have been derived from many experiments, however as mentioned they are only applicable to specific geometric shapes for which they were derived and in this age of modern architecture, their use is declining. Experimental and numerical model shape The task was to choose a model of the lamella that could be subjected to both a wind tunnel test and numerical analysis. The idea was to create a closed profile with the minimum size and weight where the accelerometers could be fitted inside. As a model -an irregular pentagon shown in figure 1 was chosen. Total length of the model was 2 m and its weight was 7.4 kg. The structure was welded of sheet metal of thickness of 2 mm. An important role in choosing and designing the model played the size of the model -because it was limited due to the size of the wind tunnel space and the transport possibilities, weight was also kept as low as possible using steel as a material, the thickness of the sheet metal used for model walls was based on the requirements of the welder so that the sheet do not bent due to the high temperature during welding. Model shape for numerical analysis was created by computer software with exactly the same dimensions as shown in figure 1. Methodology of the study Prior to the installation of the model into the wind tunnel, the uniaxial accelerometers were mounted in its place inside the model. A total of 6 accelerometers were installed inside. Four of them were placed on both ends of the model as close to the tunnel walls as possible to capture vibrations in the connection between the model and wooden reductions (clamp). The remaining two were mounted in the centre of the model to record horizontal and vertical accelerations at the centre. The layout diagram of accelerometers with the numbering is shown in figure 2. Figure 2. Location of uniaxial piezoelectric accelerometers inside the model and its numbering Subsequently, the model was mounted into the front space of the wind tunnel where it was subjected to examination. An important task was to ensure that both sides of the model were connected properly to the tunnel walls (fixed connection was expected). 
The connection of the model to the walls of the tunnel was made by wood beams and steel threaded rods, as shown in figure 3. Due to the fixed connection of the model to the tunnel walls, the entire measurement was performed for only one direction of wind flow. The STU wind tunnel is not primarily intended for measuring the dynamic effects of wind, and such an experiment was the first of its kind to be carried out here. In view of the above, this tunnel is not equipped with a synchronizing device that would perfectly synchronize the measurement of the acceleration of the structure with the wind speed measurement. Therefore, the synchronization of the measurements was performed only approximately, by an instruction/call to start the measurement (two persons tried to start the measurement of the two quantities at the same time, as synchronously as possible). Additional manual resynchronization of both measurements had to be taken into account on the basis of a mutual comparison of results; the time axes were subsequently shifted so that the results correspond to each other at least approximately. During the experiment, it was found that the model was too stiff and greater excitation had to be applied. The wind speed was gradually increased, and the presented results are for one of the highest wind speeds that can be simulated in the STU wind tunnel. Figure 4 shows how the wind speed changes with time as a result of turbulence (It ≈ 8.3 %). It is clear from figure 5 that there are significant vibrations at the connection of the steel structure to the wooden blocks. These vibrations are associated with the overall vibration of the wind tunnel caused by the rotation of the wind turbines; therefore, it was not possible to reduce these vibrations by better fastening of the structure to the tunnel walls. The influence of the stiffness of the connection between the model and the tunnel walls on the size of vibration is negligible: vibrations of the whole tunnel were transmitted directly to the analysed structure. The negative effect of this vibration transfer on the results could not be removed within the available possibilities. Horizontal acceleration measured in the centre of the model is shown in figure 6, and vertical acceleration measured in the centre of the model is shown in figure 7. In both cases, the figures show not only the acceleration of the structure but also the wind speed as a function of time. In order to make the results clearer, they are presented as filtered (averaged within a time interval of 0.005 s, i.e. 200 Hz). In the observed time period of 2 seconds, the acceleration values were within ± 0.67 m/s² and ± 0.5 m/s², respectively. Greater acceleration (± 0.67 m/s²) was achieved in the vertical direction due to the lower stiffness of the structure in this direction. During the observed time period, the wind speed was in the range of 13.6 m/s to 16.5 m/s. During the measurement, no aerodynamic instabilities occurred that would lead to excessive vibration of the structure or to resonance vibration. Study of wind-induced vibration by FSI software simulation 4.1. Methodology of the study Two-way FSI analysis is used when it is required to determine the dynamic response of the structure to a time-varying wind load. Because the scope and complexity of the whole simulation far exceed the required scope of the contribution, only the basic assumptions are given. The solution is based on the exchange of solved variables between the CFD module and the structural (mechanical) module. CFD simulation was performed by the ANSYS CFX software. 
The discrete environment was divided into 3 million hexahedral finite volumes. The mesh layer closest to the object was created with a thickness of 0.1 mm, and another 60 mesh layers were created around the object until a thickness of 300 mm was reached, representing the largest edge of the finite volume. The wind speed at the inlet was defined according to the measured time history shown in figure 4. The pressure at the outlet was defined as a static pressure of 0 Pa. The walls around the discrete environment were set as "free slip" walls, and the model itself was set as a "no slip" wall with the roughness of brushed steel. The Shear Stress Transport (SST) mathematical model was used to solve wind pressure, wind velocity, and other flow-related variables. Due to the scope of the issue, references [13,14,15] are given, where it is possible to find all the information about the mathematical model of SST as well as other information about parameters and methodologies used in CFD simulations. The time step was set to 0.005 seconds and the simulation time was set to 0.03 seconds. The structural model of the structure was created with exactly the same dimensions as shown in figure 1. This structural model was divided into 4620 SOLID186 elements. The material of the structure was set as structural steel S235. On both ends, fixed supports were defined. Vibrations of the tunnel walls caused by the wind turbines were not taken into account because of the considerable complication of the whole task, which would make it unsolvable with the available hardware. For orientation, the computational demands for such a simulation are as follows: the solution itself (60 time steps) took 2 full days (using seven 3.6 GHz Intel i7-7700 processors and 32 GB of RAM). After the end of the solution and the convergence check, it was possible to proceed with the results and the comparison of both methods. Despite the fact that a relatively high wind speed was used for structure excitation, the structure did not respond with global vibration; as shown in figure 11, only local bulging occurred. Comparison of both methods First, the wind speed as a function of time obtained by both methods at a point 12 cm in front of the centre of the model can be compared. It is clearly visible in figure 12 that the values of wind speed are practically the same for both methods; therefore, the first very crucial criterion (to provide the same wind speed between methods) was fulfilled. In figures 13 and 14, a comparison of the horizontal and vertical acceleration obtained by both methods is shown. The results of the acceleration between the software and the experimental measurements do not match in this case. It is important to point out that the results of the wind tunnel experiment were highly influenced by unwanted excitation of both ends of the structure. Also, due to the lack of a synchronization device, it was not possible to precisely synchronize the wind speed and acceleration measurements. Conclusions The synchronization of the measurement of wind speed and acceleration in the wind tunnel was based on the reaction time of two persons. However, to accurately and simultaneously measure two or more variables with a high sampling frequency, it is necessary to resolve the synchronization procedure by special equipment. This was one of the things we could not do in the wind tunnel experiment. Unwanted model excitation due to turbine rotations and the subsequent vibrations which were transferred by the tunnel walls to the model ends significantly influenced the results. 
It is necessary to say that the analysed model was very stiff and did not show any significant vibration even at higher wind speeds. In the case of such a stiff model, even a weak excitation of the ends strongly influences the acceleration of the centre part. If the model were more flexible (less stiff), the vibration caused by the wind would be far more pronounced. If an experiment of a similar kind were to be carried out in the wind tunnel, based on our experience it is highly recommended to test structures whose bending stiffness allows flexible deformation in bending, so that visible vibration prevails over shaking or tremor of the whole model. FSI simulation confirmed that it is a very good tool for the complex analysis of time-dependent tasks and for monitoring the vibration of the structure. Although the wind tunnel proved to be a possible solution for investigating wind-induced vibrations, too many complications and limitations entered the measurement. Therefore, it is possible to say that for the analysis of wind-induced vibrations it is more appropriate to use FSI simulation, which, when the inputs are correctly set, can provide results of both the wind flow analysis and the structural analysis.
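Since the time axes of the wind-speed and acceleration records had to be aligned manually, one common way to automate such a shift is to resample both records to a common rate and maximise their cross-correlation. The sketch below only illustrates that idea with synthetic signals and an assumed common sampling frequency; it is not the procedure used in the experiment. The moving-average helper mirrors the 0.005 s (200 Hz) smoothing applied to the presented results.

```python
# Minimal sketch: estimating the time offset between two records sampled at a
# common rate by maximising their cross-correlation. Signals are synthetic.
import numpy as np

def estimate_lag(a, b, fs):
    """Return the time shift (s) by which b lags a, from the cross-correlation peak."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    corr = np.correlate(a, b, mode="full")        # correlation at all lags
    lag = np.argmax(corr) - (len(b) - 1)          # lag in samples
    return lag / fs

def moving_average(x, fs, window_s=0.005):
    """Simple smoothing over a window_s-second window (0.005 s = 200 Hz here)."""
    n = max(1, int(round(window_s * fs)))
    return np.convolve(x, np.ones(n) / n, mode="same")

fs = 1000.0                                        # assumed common sampling rate (Hz)
t = np.arange(0.0, 2.0, 1.0 / fs)
wind = 15.0 + np.sin(2 * np.pi * 1.3 * t)          # synthetic wind-speed record
accel = np.sin(2 * np.pi * 1.3 * (t - 0.15))       # same waveform delayed by 0.15 s

print("estimated offset [s]:", estimate_lag(wind, accel, fs))  # close to 0.15 s
print("smoothed samples:", moving_average(accel, fs)[:3])
```

The estimated offset can then be applied as a shift of the time axis before the experimental and FSI results are compared.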
COMPARATIVE STUDY OF THE CIVIL SOCIETY ORGANIZATIONS AT THE NATIONAL LEVEL AND IN CARPATHIAN REGION OF UKRAINE The study aims to compare CSOs at the national and regional (Ukrainian part of Carpathian Euroregion) levels for possible disparities. It is mainly based on the results of secondary analysis of available offi cial statistical and fi scal data, as well as those published in the study reports. In particular, there were analyzed the Bulletins «Activity of the Civic Associations in Ukraine» and «Indicators of United State Registry of the Companies and Organizations of Ukraine» published by the State Statistics Service of Ukraine and its regional departments during 2014-2018. Also the quantitative results of studies done by National Institute for Strategic Studies, Corestone Group and GfK Ukraine, CCC Creative Center, and other institutions were examined. Relevance to the problem of research and its regional boundaries has been employed as a main selection criterion for the secondary data used. Civil society organizations (CSOs) perform exceptional role in providing social services to the inhabitants of the Carpathian region of Western Ukraine. However, they operate in a certain environment signifi cantly infl uencing respectively their sustainability and capacity to off er high quality services. Obviously, the impact of environment diff ers regionally, creating unequal opportunities for organizations working in diverse parts of the country. This study aimed to compare CSOs at the national and regional (Ukrainian part of Carpathian Euroregion) levels for possible disparities. The author used secondary analysis of available offi cial statistical and fi scal data, as well as data published in the study reports. He argues that despite actually the same legal, fi scal, and political environment, there are remarkable disparities between CSOs at the national and regional levels. These discrepancies are less evident regarding number and type of operating organizations, their fi elds of activity but are more signifi cant in respect to available funding and its sources, budgets, staff involved, and environment for philanthropy. Introduction. The Carpathian region of Western Ukraine stands out by signifi cant numbers of rural residents, numerous remote settlements, ethnic and religious diversity, absence of large enterprises, high level of labor migration and, to a considerable extent, by more traditional way of life, etc. This particularity explains weaknesses of the offi cial system of social services, with the only exception of oblast centers. In many cases the system is not able to cover all those in need and does not reach out to distant localities. Additionally, socioeconomic conditions of the region explain the severity of such social problems as social orphans, unemployment, child neglect, lack of support of lonely senior age individuals, problems with addictions, etc. The Roma, one of Ukraine's most vulnerable ethnic and cultural minorities, requires special attention. Experts believe that the biggest Roma community of Ukraine, almost 100,000 people, lives in Zakarpatska oblast [1, c.8]. Standard social services provided by the state cannot satisfy the needs of region inhabitants; some servicesas it is the case with disabled individuals -simply do not reach the remote villages. Low eff ectiveness of state services prompted CSOs to become providers of new, innovative social services for the people in Western Ukraine. 
For example, signifi cant number of familytype orphanages in Zakarpatska oblast was created by CSOs and religious organizations. They provide almost all services to homeless people; they also started the process of de-institutionalization of social services to people with mental disabilities. CSOs created alternative innovative models of rehabilitation services to drug addicts and improved services to individuals with mental and physical disabilities. The network of approximately 300 CSOs provides social services in Carpathian region. Considering signifi cance of these organizations for the region, the issues of their sustainability, funding opportunities, and staff recruiting are becoming very important. Literature review. Scientifi c studies of the civil society organizations to certain extend refl ect the evolution of general perception of the civil society and its institutions in Ukraine. The works published in the early years of independence were mainly devoted to theoretical approaches to the issue [2; 3; 4; 5], development of the legal framework [6; 7], and relationships between the state and public intuitions with the CSOs [8; 9; 10]. This type of publications remains dominating even today. However, new studies have wider scope of objectives and examine the activities and role of CSOs in providing social services [11], fi ghting corruption [12], development of rural areas [13], sustainable development etc. [14,15]. It is worth noting that the works analyzing diff erent types of CSOs [16] and issues related to their funding [17,18] and sustainability [19] also appeared. SCOs themselves undertake regular comprehensive studies of the general state of civil society in Ukraine [20]. Nevertheless, despite of signifi cant volume of obtained data, some information is still missing. In our view, fi rst of all this is data on SCOs activity from a regional perspective. This is important because the vast majority of SCOs is not national, but is based, operate, and provide their services in particular regions, localities, communities. The study aims to compare CSOs at the national and regional (Ukrainian part of Carpathian Euroregion) levels for possible disparities. The study aims to meet the following objectives: 1) to compare CSOs at the national and regional levels in respect to their numbers and main fi elds of activities; 2) to analyse possible discrepancies in human resources of CSOs at the national and regional levels; 3) to explore the main sources of funding of CSOs at the national and regional levels; 4) To scrutinize the impact of legal framework and environment for philanthropy on CSOs operating at the national and regional levels. Research methods. The experts highlight extreme complications with the study of the Civil Society Organizations in Ukraine. This is connected to the mixture of the international, regional (the EU), and national standards, systems, and statistical approached used in the country. For example, statistical data can be structured based on the type of economic activity, on the offi cially defi ned sector of economy, on legal or fi scal status, on the fi eld of activity etc. In its turn, it leads to signifi cant diff erences in data presented in various sources. Moreover, according to the bulletin «Civil Organizations» only 40 per cent of this type of CSOs submits statistical reporting data regularly [20, p.26]. The access to statistical data of other types of organizations is limited. 
Generalizing both nation legislation and international standards the experts consider 19 groups of organizations, some of them consists of several types of institutions, as the Civil Society Organizations in Ukraine. Moreover there is a discussion whether political parties should be added on the list [20, p.27]. This study is actually contains analyses of data related to two types of CSOs in Ukraine, which are civil organizations (COs) and charitable foundations (CFs). It is mainly based on the results of secondary analysis of available offi cial statistical and fi scal data, as well as those published in the study reports. In particular, there were analyzed the Bulletins «Activity of the Civic Associations in Ukraine» and «Indicators of United State Registry of the Companies and Organizations of Ukraine» published by the State Statistics Service of Ukraine and its regional departments during 2014-2018. Also the quantitative results of studies done by National Institute for Strategic Studies, Corestone Group and GfK Ukraine, CCC Creative Center, and other institutions were examined. Relevance to the problem of research and its regional boundaries has been employed as a main selection criterion for the secondary data used. Results and Discussion. Number of registered COs in the country vs. in the region. Unfortunately, mentioned above diff erences in the methodology are refl ected even in the offi cial statistics. Data provided by the State Statistics Service of Ukraine in the bulletin «Activity of the Civic Associations in Ukraine» and in the «Indicators of United State Registry of the Companies and Organizations of Ukraine» signifi cantly diff er. For example, in the fi rst case 67911 civil organizations are shown in Ukraine [21] however in the second case this fi gure is 64 526 [22, p.15] as for 2015 (Table 1). Obviously, the fi rst fi gure shows number of legally registered organizations but the second one is tax related and represents active COs. Nevertheless, both data demonstrate the same trends, in particular: steady increase in number of civil organizations in the oblasts which belong to Carpathian Euroregion and a slump in general number of COs in Ukraine in 2015, which was overcome in 2018 only. Certainly this decrease was caused by annexation of Crimea and occupation of the parts of Donetsk and Lugansk regions. [25] n/a 2118 (April 1) 2262 2447 2578 CO of Chernivtsi oblast [26] n/a n/a 1286 (April 1) 1370 1443 CO of Carpathian region --10424 11341 12001 Considering population estimation, it is evident that number of civil organizations per capita is slightly higher for the country than for the region -approximately 1 per 450 people and 1 per 500 people correspondently. Division of civil organizations by activity. The data on division of COs in Ukraine by activity is available for 2014 and previous years only ( Table 2). Offi cial statistics uses rather old reporting form which does not represent the actual activity of organization. That is why most of active associations went to the category Other civil organizations. As it was mentioned above, old reporting form is not fl exible and does not refl ect the modern trends in civil society development. Analyses of the data for the years before 2014 shows pretty the same picture. While more fresh data is not available it can be assumed that only two changes might happen. The main increase happened in the category Associations of veterans and disabled. 
Since beginning of the war more than 40 all-Ukrainian veterans' organizations appeared. Hundreds (395 by 2017) have been established at the regional and local level. Approximately the same is true about the organizations founded by or working with Internally Displaced People. Many existing organizations changed their target group to the veterans or IDPs. Anyway, existing form of reporting does not allow demonstrating this. Thus, the rapid increase in number of NGOs within the last couple of years is connected to the new socially vulnerable groups -the veterans and the IDPs. On the other hand, only few (about 5 per cent) [20, c.24] out of hundreds initiatives which appeared during the Revolution of Dignity have been formalized into organization. Anyway, most of these new organizations would rather go under the category Other civil organizations. An analyses made by Creative Center Counterpart in 2014 provides some additional information on activities of COs. In particular, 70 per cent of organizations are involved in advocacy, 64 per cent -provide services, 38 per centcombine both mentioned activities, 83 per cent -provide trainings and educational activities, 67 per cent -provide informational services, 31 per cent -legal services, and 28 per cent -psychological services [22, c.29]. Unfortunately fresh data for the Carpathian region is absent. Nevertheless comparison of older regional data (Zakarpatska oblast in 2010) with the national data of the same period shows that they are almost identical (diff erence within 1 per cent per category), except a share of Organizations of ethnic and friendly relations (5 per cent higher in Transcarpathia), which can be easily explained by multi-national population of the region. This allows assuming with a great plausibility that the current division of the associations by activity in Carpathian Region represents the general trends of the national level, while preserves some regional features (bigger number of organizations of national minorities). Number of registered charitable foundations in the country vs. in the region. According to the data based on the Indicators of United State Registry of the Companies and Organizations of Ukraine (actually, tax number) the following number of foundations operated in Ukraine within 2014-2018 (Table 3). Offi cial statistics shows ongoing increase in number of charitable organizations. By July 1, 2018 the number of charities went up to 18095. Meanwhile, according to the estimations of the experts of Ukrainian Philanthropists Forum vast majority of the charitable organizations exists on the paper or perform one time activity only. They estimate the real number of active organizations as 500-1000 for the country (Table 4). There is available a regional split of top 100 biggest foundations of Ukraine. Most of them (38%) are based in Kyiv, 18% -in Central Ukraine, 17% -in Southern Ukraine, 12% -in Northern Ukraine, 10% -Western Ukraine (Carpathian region plus Ternopil, Rivne, and Volyn oblasts), and 8% -in Eastern Ukraine). In respect to division of foundations by activity, the recent studies show the following dynamic of charitable funds spent in Ukraine (Table 5): The regional data is absent. Nevertheless, like in the case of civil organizations, it is very plausible that the division by activity in the region is mainly identical to the national split. Number In our view, the current information is generally correct but does not refl ect the whole picture. 
Indeed organization can have 5 people working for it but they are not necessarily employees. In order to minimize administrative costs, the CSOs employ only minimum of staff (quite often only director) with the minimal, allowed by legislation, wage. Other people work for the organization based on service contract. Formally, from the legal point of view, these people are not employees. Even those employed with the minimal wage usually have a service contract in addition. Thus a fi gure of 5 employees represents rather actual average number of people working for organization but not a number of legal employees. The data for the Carpathian region is absent. In our view, the average fi gure of people working for CSO is a bit lower in the region and can be estimated as 2-4 persons. Average budget of NGOs in country vs. region. There is no available data on average budgets of NGOs in Ukraine as a whole or in the Carpathian region in particular. But basic manipulations with offi cial data allow defi ning approximate fi gures, which represent a ratio of NGOs' incomes at the national and regional levels. In 2016 the aggregated income (from all possible sources) of Ukrainian civil organizations was 7 271 566 800 UAH [30] (roughly EUR 242 385 560). By the end of 2016 there were 75988 civil organizations. Basic mathematic operation gives a fi gure of EUR 3190 per organization. For sure this fi gure does not represent all the shades of reality. It is an average annual income per organization but not a budget. It does not take into account the number of organizations which are registered but not operating, and fi nally it does not refl ect numerous small organizations which work completely on voluntary basis. Meanwhile it allows us to compare fi nancial situation at the national and regional level. The same manipulations for Transcarpathia shows a fi gure of average annual income of NGO at the level of EUR 1389 in 2016; for Lvivska oblast -EUR 1591; for Ivano-Frankivska oblast -EUR 818; for Chernivetska oblast -EUR 1462 ( Table 6). In our view, this diff erence does not demonstrate an inequality between the regions of the country but mainly disparity between the capital and the province. Regardless this type of information on civil organizations is absent, the data on charitable organizations (foundations) shows that top 100 biggest organizations of Ukraine, 38 per cent of them located in Kyiv, possessed two third of all funds in the fi eld. Moreover 5 biggest foundations operated with 1/3 of all funds [28]. Funding resources of NGOs in the country vs. region. Table 7 represents offi cial information on the funding sources of the COs in Ukraine as a whole and in four oblasts which comprise Ukrainian part of Carpathian Euroregion. The data is split by the categories defi ned by State Statistics Service of Ukraine and which are used by COs for quarterly reporting. Presented above statistics ( Fig. 1) suggest that funding from the national budget plays relatively insignifi cant role among income sources of Ukrainian COs. Nevertheless its share is higher in average through the country than in the region. On the other hand, the role of local budgets (local authorities) in funding of the COs in general is higher in the region (except Transcarpathia) than average in the country. Membership fee plays approximately the same role through the income sources both at the national and regional levels (except Transcarpathia, where its share is signifi cantly lower). 
Income from charitable activity plays a very significant role among sources of income at both the national and regional levels. At the national level, as well as for 2 (Transcarpathia and Chernivtsi) out of the 4 oblasts of the region, it is the main source of income; for the others it comprises more than one third of income. Social entrepreneurship plays an important role as a source of income for COs at the national level but is underdeveloped in the region. Except in Transcarpathia, other sources of income (endowment, money-generating activities not connected to social entrepreneurship, etc.) play a slightly bigger role for the regional COs than at the national level. In our view, the distinctive structure of the income sources of the COs of Transcarpathia (a heavy predominance of charity activity, mainly foreign grants and donations) can be explained by relatively easy access to foreign grants. Firstly, because of the big share of ethnic minorities in its population structure, the oblast is attractive to foreign governments, corporate and private donors (Hungarian, Slovak, and Romanian), as well as to organizations supporting the Roma. Secondly, its unique geographical location (the oblast borders 4 countries which are EU members) makes the region eligible for almost all ENP funding programs; for comparison, other oblasts of the Carpathian region are usually eligible for only 1 or 2 programs. Thirdly, in our view, the influx of foreign funds into Transcarpathia has somewhat outrun the institutional development of local COs. Many of them are established for one particular project of one particular foreign partner and are not willing or not able to diversify their sources of income. Certainly there are additional factors which influence the situation: distance from the capital; the fact that it is the only oblast of Ukraine with a predominantly rural population and consequently relatively poor municipalities; the absence of big business; a strong outflow of human resources, etc. It should be mentioned that most of these additional factors play the same role for other oblasts of the Carpathian region. Table 8 and Figure 2 suggest an important role of foreign donors in the structure of income from charitable activity at both the national and regional levels (except Ivano-Frankivsk oblast). In our view, the reason is that Ivano-Frankivska oblast is not eligible for most ENP programs (the main foreign donor). The relatively high share of donations from people and local companies in Lviv and Ivano-Frankivsk oblasts can, in our view, be explained by long traditions of civil society development (the first NGOs were founded at the end of the 19th century) and the extremely active civic position of the Ukrainian Greek-Catholic Church, which is predominant in these two provinces. Legal environment and its effect on NGOs in the region. The most critical period of their development in recent years Ukrainian NGOs went through during the last months of President Yanukovich's regime. On January 16, 2014 the parliament passed a package of legal acts (12 laws) which became known as the «Laws on Dictatorship» or «Dragon's Laws». Many of them directly or indirectly affected the NGOs. In particular, NGOs operating with foreign funds were declared to be «civil associations performing the function of a foreign agent», and the civil rights to free assembly, peaceful protest, political activity, and publishing activity were significantly limited. Severe sanctions (including criminal) for violation of the new laws were also introduced.
The NGOs fell under stricter control of law enforcement and tax authorities. Taking into account the funding structure of civil organizations in Ukraine mentioned above, it is clear that the law on «foreign agents» threatened almost two thirds of organizations in Ukraine and approximately 90 per cent of civil organizations in Transcarpathia. Naturally, this legislation caused mass protests and gave new strength to the Revolution of Dignity. After the revolution, on January 28, 2014, 9 out of the 12 legal acts were canceled by the parliament, including those affecting the NGOs. The Law on Associations of Citizens (first passed in 2013 and then amended several times) remains the main legal act regulating the activity of Ukrainian NGOs; this means that there have been no drastic changes in the legal environment since 2014. Meanwhile, a dozen amendments and legal acts which mainly influence NGOs positively have been passed (better protection of property rights, improvement of the registration and re-registration process, improvement of the procedure for changes in statutory documentation, participation in civil councils (advisory and control bodies within governmental structures and local authorities), legal regulation of the activity of branches of NGOs (they are recognized as legal entities, «separated departments»), free choice of the geographical territory of activity (regardless of the place (city, region, national) of registration), improvement of cooperation with authorities, etc.). Nevertheless, there are several ambiguous and debatable developments. The most scandalous is the introduction, in 2017, of income, property, and spending declarations for so-called anti-corruption NGOs. In fact, they (and their top management) were obliged to declare in the same way as public officials. Many experts consider this a vengeance and an attempt to put pressure on anti-corruption activity. Local and international organizations (including foreign governments) heavily criticized this legislative novelty, and it was finally canceled. Also during this period, the mechanism of electronic petitions, a pilot version of the Unified State Portal of Administrative Services, public access to open data, and public access to electronic declarations on the income, property, and spending of officials were introduced. Nevertheless, experts highlight some weaknesses in the implementation process, especially at the local level. The most typical are: local authorities trying to limit (hide) information on their activity, unclear local plans for the Strategy implementation, and a lack (in some regions, a complete absence) of funding for particular activities within the Strategy. In our view, the main changes in the legal environment of not-for-profit organizations in Ukraine were connected to the introduction of the new Tax Code in 2015. Firstly, not-for-profits received more opportunities to generate income while preserving their not-for-profit status (under the condition that this income is spent on the statutory goals of the organization). Secondly, if a not-for-profit organization declares in time that it spent income on purposes which do not lie within its statutory goals, the organization pays taxes (as a commercial company) only on the amount spent but preserves its not-for-profit status. Thirdly, based on the two previous changes, the not-for-profit organizations had to clarify and narrow down their statutory goals. Finally, the regulation of property issues in the case of a not-for-profit organization's closure was changed.
The property can be passed to an assignee (another not-for-profit organization), spent within the statutory goals, or given to the state; previously it could be shared among the members of the organization. Naturally, this led to a re-registration process in 2016-2017. Many organizations feared that the process would be complicated and would reduce the number of not-for-profit organizations. However, the relatively well organized and quite simple re-registration process went smoothly. Statistics show an ongoing increase in the number of civil and charitable organizations during 2016-2017 at both the national and regional levels. In our view, there are no significant differences in the impact of the legal environment at the national and regional levels. A certain otherness is rather connected to the lower awareness of both NGOs and authorities in the regions about the latest changes. For example, NGOs do not know how to fill out the new reporting form, and the officer in the tax office is not able to provide consultation because he or she has no experience dealing with it. Quite often they have to apply to the capital for clarifications and explanations. Taxation policy for philanthropy. Taxation policy for philanthropy is one of the most actively discussed issues in the third sector of Ukraine. On the one hand, national legislation (the Law on Charity and Charitable Activity, the Budgetary Code, and the Tax Code) as well as the National Strategy of Promotion of Civil Society Development in Ukraine in 2016-2020 is mainly favorable for the development of philanthropy in the country. The main tools for encouragement are as follows:
- Opportunities for charitable organizations to receive funding from the national and local budgets;
- Legal entities, payers of the income tax, can claim their donations to not-for-profit organizations as company spending and correspondingly do not pay income tax on the amount of the charitable donation (money, services, other works). This is valid under the condition that the charitable donation does not exceed 4 per cent of the previous year's taxable income;
- VAT payers do not pay VAT on the amount of the charitable donation, with the same 4 per cent limitation;
- Private persons, tax payers, have a right to a tax deduction. In order to define the amount of the deduction the following algorithm is used: the «pure» salary (the amount of salary minus state social and pension insurance) minus the amount of spending eligible for tax deduction, multiplied by the tax rate defined by the Tax Code. The amount of spending eligible for tax deduction is accounted with the same 4 per cent limit as for legal entities (a short computational sketch of this rule is given below).
On the other hand, experts consider the existing system rather complicated and, correspondingly, not working properly. Mainly bigger companies use this opportunity; smaller businesses and ordinary people prefer to donate in cash and do not declare it. The recent large-scale events in the country (revolution, war, social problems) have made the existing 4 per cent limitation irrelevant in many cases. For example, wounded soldiers require medication whose value exceeds 4 per cent of their income during the previous year, which means that either the wounded soldiers or the donors must pay taxes on what is actually a charitable donation. The same situation applies to scholarships for students (scholarships paid by philanthropists). The other debatable issue is a tax problem connected to modern electronic means of donation (e.g., SMS). The problem is that the service of the mobile provider (even if it is free) is subject to taxation.
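The personal tax-deduction rule described above can be written out as a small calculation. The sketch below implements the algorithm literally as the text states it; the salary, insurance, donation amounts, and the default 18 per cent tax rate are hypothetical illustrations, not a reading of the actual Tax Code.

```python
# Minimal sketch of the personal tax-deduction rule as described above.
# All numbers (salary, insurance contributions, tax rate, donation) are hypothetical;
# this illustrates the described algorithm only and is not tax advice.

def recalculated_tax(gross_salary: float,
                     social_and_pension_insurance: float,
                     charitable_donation: float,
                     prev_year_taxable_income: float,
                     tax_rate: float = 0.18) -> float:
    """Tax after deduction: (pure salary - eligible donation) * tax rate."""
    pure_salary = gross_salary - social_and_pension_insurance
    # Eligible spending is capped at 4 per cent of the previous year's taxable income.
    eligible = min(charitable_donation, 0.04 * prev_year_taxable_income)
    return max(pure_salary - eligible, 0.0) * tax_rate

# Hypothetical example: 120,000 UAH gross, 5,000 UAH insurance, 6,000 UAH donated.
tax_without = recalculated_tax(120_000, 5_000, 0.0, 100_000)
tax_with = recalculated_tax(120_000, 5_000, 6_000, 100_000)
print(f"refund from the deduction: {tax_without - tax_with:.2f} UAH")
```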
If one donates money by SMS, part of it goes to the taxes of the mobile provider. The other field of current discussion is connected to humanitarian aid from abroad. The recent events in the country require new types and volumes of supplies from abroad (military equipment, medication, etc.), which are often subject to duty at customs. Frankly, the authorities demonstrate a commitment to solving the existing problems; the process is slow but mainly productive. The following observations were made as a result of the study. Broadly speaking, a steady quantitative growth of civil society organizations, both civil organizations and foundations, is evident at the national level and in the Carpathian region. Meanwhile, the Western regions of the country avoided the temporary slump in the number of CSOs related to the foreign aggression of 2014. With respect to the COs, there has been a significant increase in the number of organizations which fall, according to the official definition, under the categories Other civil organizations and Associations of veterans and disabled. Advocacy, provision of services, trainings and educational activities, and informational, legal, and psychological services are among the main activities of COs at both the national and regional levels. The only difference is related to the higher number of organizations of national minorities in Zakarpatska oblast. With respect to the foundations, they are mainly active in the fields of health, social protection, and economic development (community development). Meanwhile, there is a trend of gradual decrease in the share of charities operating in the field of social protection and a growing number of organizations dealing with health. In this context, the main disparity is related to the fact that most of the big foundations of Ukraine are located in the capital and central regions of the country. Considering the employed staff of the CSOs, it must be pointed out that less than half of them have employed or contracted staff. In general, foundations have a bigger number of employees than COs. This is correct for both the national and regional levels, but in the Carpathians the CSOs usually have half as many staff. The disparities are even more evident with respect to the average budgets of NGOs at the national and regional levels: in the Carpathian region they are half the size, and capital-based organizations possess two thirds of all funds in the field. Analysis of the funding sources of CSOs suggests differences within the Carpathian region rather than between the national and regional levels. Income from charitable activity is the main source of funding at the national level as well as for the CSOs of Zakarpatska and Chernivetska oblasts. In Lvivska oblast it is still important but is well balanced by other sources. Income from organizations and companies of Ukraine is the main funding source for the CSOs in Ivano-Frankivsk oblast. The other feature of Zakarpatska and Chernivetska oblasts is a heavy dependency of the organizations on foreign funds. Obviously, since 2014 the legal environment for CSOs in Ukraine has been gradually improving. The repeal of the «Laws on Dictatorship», further improvement of the Law on Associations of Citizens, increased opportunities for income-generating activities, and wider access to public information are among the most evident changes. However, some weaknesses are evident in the region.
They are mainly related to the sometimes non-transparent activities of the local authorities as well as to the lower awareness of NGOs and authorities in the regions about the latest changes. Regardless of the encouragements for philanthropy introduced in national legislation, the environment for charity remains rather complicated and to a certain extent irrelevant to modern challenges. Existing incentives work mainly for big business and are not relevant for regions like the Carpathians, where small and medium-sized companies dominate. Obviously, this has an impact on the activities of local CSOs, making them significantly dependent on foreign grants and donor-driven programs. Despite what is in fact the same legal, fiscal, and political environment, there are remarkable disparities between CSOs at the national and regional levels. These discrepancies are less evident regarding the number and type of operating organizations and their fields of activity, but are more significant with respect to available funding and its sources, budgets, staff involved, and the environment for philanthropy. References
Transfer Processes at the Manufacture of Metal Matrix Composite Materials The main elements of composite materials technological design consist of choosing the appropriate matrix/reinforcement systems and the optimal thermodynamic conditions for ensuring compatibility during processing and operation. Composite material properties depend on the volumetric characteristics of the components as well as on the intensity of the links between these components. Initially, the mass transfer takes place under unstable conditions in the liquid matrix and at the solid/liquid interface, and finally, after casting or infiltration, it becomes stationary. Modelling of the transfer of reinforcement elements from gas to liquid is based on the total variation of the transfer energy or on the variation of the sum of the forces acting on the reinforcement. Once the basis of the investigated models and of the proposed model is analyzed, the influences of the densities, particle granulations, liquid viscosity, thermal conductivities, energies at the liquid/reinforcement interface, and critical speeds of reinforcement elements on the transfer from liquid to solid are considered. Generalities Composites are multiphase materials with distinct and well-defined interfaces between the constituent phases, which are strongly interrelated, and they present exceptional properties in well-defined directions and planes. An important direction of research in this area is to provide models that successfully predict whether interactions of a physical or chemical nature occur between matrix and reinforcement, as well as the thermodynamic evaluation of the interfaces during processing and exploitation. Physico-chemical processes at the matrix-reinforcement interface Composite processing and its characteristics depend on the density of the components and on the intensity of the connections formed between them. According to Dupré's relation, the adhesion energy Wa between two different phases can be evaluated in terms of the interfacial tensions and the contact angle: Wa = σSG + σLG − σSL = σLG (1 + cos θ) (1) where σSG is the solid-gas interfacial tension, σLG is the liquid-gas interfacial tension, σSL is the solid-liquid interfacial tension, and θ is the contact angle. Chidanibaran et al. (1992) [1] consider that the wettability of ceramics by metals is achieved only if chemical bonds are formed between the atoms of the two phases, totally ignoring the contribution of physical interactions. Chemical forces are much stronger and manifest themselves when the chemical reaction between the atoms of the liquid and of the non-metallic component shows a negative variation of the free enthalpy. The type of chemical bond between the metallic and non-metallic component structures is of particular importance because the transition layer's structure and its mechanism of formation determine the crystallisation and hence the structure of the composite material. The first model that tried to explain metal-ceramic wettability was proposed by McDonald and Eberhart in 1965. It claims that between the chemical component of the adhesion energy and the free energy variation of the metal-ceramic chemical reaction a linear relation can be established, of the form given in Equation (2) [2], where a is the contribution of the physical interactions, while b is a constant. Experimentally it has been shown [3] that the adhesion energy Wa does not depend on ΔG° when ΔG° > 300 kJ/mol; for ΔG° < 300 kJ/mol, the value of Wa increases with decreasing ΔG°.
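Equation (1) in its Young-Dupré form, Wa = σLG (1 + cos θ), is easy to evaluate numerically. The sketch below does so for a few contact angles; the surface tension value (about 850 mJ/m², in the region of liquid aluminium) and the angles are illustrative placeholders, not data from the cited studies.

```python
import math

def adhesion_energy(sigma_lg: float, theta_deg: float) -> float:
    """Young-Dupre form of Eq. (1): Wa = sigma_LG * (1 + cos(theta))."""
    return sigma_lg * (1.0 + math.cos(math.radians(theta_deg)))

# Illustrative (placeholder) values, not measured data:
# sigma_LG in mJ/m^2, contact angle in degrees.
for theta in (60.0, 90.0, 120.0, 160.0):
    wa = adhesion_energy(850.0, theta)  # ~850 mJ/m^2 is a rough figure for liquid Al
    wetting = "wetting (theta < 90 deg)" if theta < 90.0 else "non-wetting"
    print(f"theta = {theta:5.1f} deg -> Wa = {wa:7.1f} mJ/m^2  ({wetting})")
```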
Because the chemical reaction between some metals with high reactivity (such as Al) and ceramics is associated with a free energy variation smaller than 100 kJ/mol, we might conclude that they will wet any solid material. In reality, even under the most severe achievable vacuum an oxide layer forms which affects the wettability conditions. During the production of composite materials there are mechanisms of irreversible energy loss during the processes of interface creation. These losses are sometimes very important and dictate process parameters such as the applied pressure or the preheating temperature of the reinforcement elements. Under these circumstances, it is possible that a contact angle of less than π/2 is not a sufficient condition for spontaneous wetting of ceramics during composite processing. Physico-chemical processes at the metal-oxide interface In the case of non-metallic components consisting of oxides, adherence to the melt improves with increasing metal affinity to oxygen and oxygen concentration, respectively. If the matrix metal is denoted Me and the oxide-bound metal M, then the following reactions occur at the interface, together with dissolution and mass transfer between the two phases. If the free enthalpy variation is negative, the reactions proceed in the forward direction, and the adhesion energy increases with the increasing absolute value of this variation. The sign of the free enthalpy variation of the chemical Equation (3) is a criterion for assessing metal-ceramic reactivity or non-reactivity. A more rigorous criterion for assessing interface chemistry takes into account that the dissolution of M in Me will always take place, and ΔG1-5° is calculated with the corresponding equation. In metal-oxide systems frequently used for the production of composite materials, the maximum dissolved amount is of the order of a few parts per million. Such systems, with ΔG1-5° > 0, are known as non-reactive systems and are characterized by weak and limited wettability. In this case the adhesion energy Wa represents only 15-40% of the metal cohesion energy [4], which is considered to be Wc ≈ 2σLV. On the basis of thermodynamic considerations, in 1987 Chatain et al. showed that even for non-reactive metal-oxide systems chemical interactions occur at the interface, and they proposed an expression for the adhesion work [5], in which the product ΩMe = N0^(1/2) VMe^(2/3) represents the area occupied by a monolayer of one mole of matrix metal atoms. According to Equation (9), dissolution of oxygen in the matrix metal improves its adhesion to ceramics. It is considered that the dissolved oxygen combines with the metal atoms and MeO clusters form, clusters with a partially ionic character as a result of charge transfer from the metal to oxygen. Nowadays the model proposed by Naidich [6] in 1981 is widely accepted; it assumes that dissolved oxygen forms with a nearby metal atom a dipole Me2+O2−, which is adsorbed at the interface due to the electrostatic attraction forces exerted on the Me2+ cation by the anions existing on the ceramic surface. Improving adhesion under non-reactive conditions can be achieved by adding immiscible alloying elements to the base metal, which reduce its surface tension. Thus, in Al alloys, In, Bi, Cd, and Na can be added; these are adsorbed at the surface, diffuse together with Al atoms into the ceramic surface, and favour adhesion.
Physico-chemical processes at the metal-carbon or metal-carbide interfaces The wettability of carbon and carbides by liquid metals can be analyzed by considering three chemical contributions [2]: the affinity of the metal Me to carbon; the stability of the MexC carbide formed; and the mixing enthalpy of M in Me. Liquid metal adhesion to carbon is strongly dependent on the reactivity between the two phases and is only slightly influenced by the crystallographic form of the solid. An interdependence between the metals' position in Mendeleev's table and their reactivity with respect to carbon has been found. Metals from groups I-b, III-a, IV-a and V-a and periods 4, 5 and 6, such as Au, Ag, In, Ga, Ge, Sn, Pb, Sb, and Bi, are inert to carbon [7], have low adhesion energy, only 100 ÷ 300 mJ/m², and a contact angle greater than π/2. The elements Al, Si, and B react and form carbides that wet the carbon, the adhesion energy being 1000 ÷ 1500 mJ/m². Even in this case, wettability will practically occur only at temperatures above 900°C because oxide films prevent direct contact between the two components. The transition metals forming stable carbides are Ti, V, Cr, Mn, Fe, Co, and Ni from period 4; Zr, Nb, Mo, and Pd from period 5; and Ta, W, Re, and Pt from period 6. In their case the adhesion energy to carbon is 2000 ÷ 3000 mJ/m², increasing as the number of electrons in the d shell decreases. The adhesion energy between a metal matrix containing such elements and carbon is more than 90% due to chemical interaction. Alkali, alkaline-earth, and some rare-earth metals, although they diffuse into carbon components and form carbides, do not always wet such particles or fibres [15][16]. Experimentally it was proved that Li and Na wet graphite [17][18]. Through its influence on the nature and kinetics of metal-carbon interfacial reactions, temperature is a control parameter for wettability. Nayeb-Hashemi and Seyyedi [8], in 1989, reported that the formation of rhombohedrally crystallized aluminium carbide Al4C3 can occur over long times even at temperatures around 500°C. Wang et al. [9] in 2015, after determining the activities and the contact angle, calculated the free enthalpy of the reaction (10). Using Equation (11), they obtained a significant variation of the free enthalpy of aluminium carbide formation as a function of the Mg content of the matrix Al alloy. In Figure 1 it can be seen that up to approximately 10% Mg the free enthalpy of reaction (10) is negative, so Al4C3 can form, but at higher concentrations decomposition of the carbide occurs. To improve adhesion between graphite fibres or particles and Al-alloy-based metal matrices, Ti or B coatings on the solid carbon components have been made. In this case, special measures are required to prevent titanium or boron oxidation, as well as to limit the interaction, because strong reactions increase fragility and lead to degradation of the reinforcement. Another measure to improve the adhesion between aluminium metal matrix alloys and carbon or graphite reinforcement is microalloying with superficially active elements such as Mg, Li, Na or Sr. In Figure 2, SEM and EDS investigations conducted by Anna J. Dolata et al. [10] in 2016 on AlSi7Mg2Sr0.03/SiCp+Cgp composites highlighted a good bond between matrix and reinforcement, as well as an increase in the strontium and magnesium concentrations at the interface between the matrix and the glassy carbon. Figure 2. The AlSi7Mg2Sr0.03/SiCp+Cgp composite: linear distribution of elements at the interface between the Al matrix alloy and Cgp, EDS [10].
While Nayeb-Hashemi and Seyyedi [8] noted the formation of chemical compounds at the interface (Al4O4C, Al4C3, and others), Siemens [11], in 1989, highlighted the dependence existing between carbide concentration and temperature, as can be observed in Figure 3. Research has revealed that the interaction between the metal matrix and reinforcement elements containing carbides is much weaker because of the high energy of carbide formation. The wettability of solid carbides such as SiC, B4C, etc. is similar to the case of carbon because the metals generally behave similarly towards the elements which form these carbides. Estimates of the processes at the SiC-metal interface, or of the bond between the two phases, are not yet theoretically and experimentally grounded. First, the nature and composition of the interface are not precisely known. Thus, in the case of Al-SiC interfaces, three hypotheses exist:
- a SiO2 film forms at the interface [12];
- no interactions exist at the interface, with distinct areas of SiC and of Al [13];
- an Al4C3 film forms at the interface [4].
In the case of Al-Mg alloys an MgO film may form at the interface, as deduced from Figure 4. Conclusion Theoretical study of the interface at the atomic level is very complex due to the reduced symmetry, the high number of atoms in the elementary cell, and the complications that occur when trying to use interatomic potentials along the interface. Li and Arsenault [14] in 1988 claimed that, for a good understanding of the bond at the interface, it is necessary to take into consideration the following: a. the interface may consist of areas with different aluminium and silicon carbide crystallographic orientations; b. SiC has two different structures, α and β; whiskers have the β structure, while particles have the α structure, which may occur in six polymorphic forms; c. the interface can form between carbon and aluminium films or between Si and Al; d. at interfaces formed with different orientations between SiC and Al, the atoms cannot form a commensurate structure, i.e. there is a lattice mismatch at the interface; e. it is expected that at the interface the atoms move into positions of minimum energy, but there are no experimental or theoretical studies confirming this. Eustathopoulos and Mortensen [4] in 1993 supported the idea of slow decomposition of SiC, enrichment of the aluminium in Si and C at the interface, and formation of fine, flat particles of Al4C3. To verify this hypothesis, large additions of Si to Al were made, finding an increase of the thermodynamic stability of SiC and a reduction of Al4C3 formation. The wettability kinetics are limited by the dissociation velocity of SiC, and the stationary value of the contact angle is reached when the Al surface layers are saturated in Si and C. The initial angle of 160° is due to the Al2O3 film on the Al surface. Figure 5. Variation of contact angle θ with time t, for an Al-18%Si alloy on SiC, at 800°C under a vacuum of 10⁻⁵ Pa [7].
Determination of iron-overload in thalassemia by hepatic MRI and ferritin Ivan L. Angulo, Dimas T. Covas, Antonio A. Carneiro, Oswaldo Baffa, Jorge Elias Junior, Guilherme Vilela. Accumulation of iron in thalassemia causes organ damage and reduces patient survival due to heart lesions in the second decade of life. Iron deposits are monitored by direct (biopsy) and indirect methods (ferritin), with sequential data being better than isolated measurements. This paper compares two indirect measurements of iron overload: a single hepatic iron concentration (HIC) by magnetic resonance and mean ferritin levels over four years. A retrospective study of 25 patients from the Centro Regional de Hemoterapia in Ribeirão Preto, Brazil was carried out. High HIC (above 7 mg per gram of dry weight) was found in 20 patients and high mean serum ferritin (above 2500 μg/L) in 10 patients. Stratification into three levels (low, moderate and high) of iron overload gave similar results in both tests. Many other factors influence the degree of iron overload in thalassemia. No correlation was found between HIC and mean serum ferritin using a non-parametric statistical test. Both methods provide better planning of chelation therapy. Rev. Bras. Hematol. Hemoter. 2008;30(6):449-452. Introduction The accumulation of iron in thalassemia causes organic injuries and a reduction in survival and must be treated with iron chelators such as deferoxamine. The success of treatment depends essentially on patient adherence and can be evaluated by determining iron loading by direct or indirect methods. The accumulation of transfusional and absorbed iron in thalassemia is approximately 7 to 14 grams per year. The measurement of iron is important for the prognosis (risk of organic and associated injuries) and for monitoring chelation.1,2 Ferritin is the principal iron storage protein, found in the liver, spleen, bone marrow, and to a small extent in the blood (serum ferritin, SF).3 In the majority of clinical centers, the standard method of evaluating the total amount of body iron is measurement of the SF concentration in the blood.4 However, the correlation between SF and body iron is not sufficiently precise to be of high prognostic value, especially when associated with inflammation or tissue damage. Moreover, alterations in the relationship between blood serum ferritin concentration and body iron content caused by chelation and vitamin C treatment are complex. For example, the relationship between serum ferritin and body iron appears to be singular for different hematologic conditions.5 SF has been the primary clinical measure of iron stores in thalassemic patients undergoing transfusions. It is non-invasive, widely available, and inexpensive, but it has not been systematically compared to validated quantitative measurements of liver iron using techniques such as MRI.6
The liver is the main iron storage organ in the body, containing approximately 70% of the total iron content of the body. Liver iron can be assessed by needle biopsy or, more recently, by noninvasive magnetic resonance imaging (MRI). As liver iron correlates with total body iron, an alternative for evaluating body iron overload is the measurement of the liver iron concentration (LIC). Thus, the liver iron concentration (LIC) gives a measure of parenchymal iron and of macrophage iron stored in Kupffer cells. Direct methods, such as hepatic biopsy and susceptometry, are not influenced by other factors, but are difficult to perform due to their invasiveness and high cost and because the equipment is generally unavailable.6 Amongst the indirect methods, measurement of the amount of liver iron by MRI is the best, because of its advantage of not being invasive and also because it allows an anatomical view of iron overload in the liver. This method enables measurement of iron in milligrams per gram of tissue, and estimates of the risk of organic diseases.7 SF has been compared with liver iron in transfused thalassemia major patients and demonstrated a good correlation, but a wide prediction range reduces its clinical utility. Despite the limitations of isolated SF and LIC comparisons, SF followed over time, as a trend or as a mean, has been a reasonable predictor of clinical outcome.8 The clinical consequences of iron overload are varied and reflect the key sites of iron storage. In the liver, the formation of collagen and portal fibrosis have been shown to occur after about two years of transfusion therapy. Iron accumulation in the heart is the leading cause of death in patients with thalassemia major. Endocrine glands are also affected. Patient compliance with treatment regimens and effective chelation therapy are thought to be the main factors associated with improved survival.4,7 Patients with serum ferritin persistently above 2500 µg/L have a greater risk of cardiac injury, but interference from other biological factors exists, making this an inexact evaluation,4 with some authors preferring a value of 1500 µg/L.8 Measurements of LIC above 1.6 mg/g of dry weight (mg/gdw) are considered high; there is a small risk of complications when under 7 mg/gdw, values between 7 and 15 mg/gdw are intermediate, and patients above 15 mg/gdw have a risk of serious injury, including fibrosis and cirrhosis of the liver, and cardiac death.3,7 Infection by the Hepatitis C Virus (HCV) with inflammation does not affect the MRI measurements, but may affect ferritin.8 As the accumulation of iron in the myocardium seems to be associated with arrhythmias and organ insufficiency, the measurement of cardiac iron is also important, but it is calculated using a different technique.9 The main objective of this paper is to quantify the liver iron concentration (LIC) by magnetic resonance in multitransfused thalassemia patients chelated with deferoxamine and to compare this to mean ferritin values over a four-year period, as well as to classify patients by risk of illness and death.
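The LIC and ferritin thresholds quoted above translate directly into a three-level stratification. The sketch below encodes those cut-offs (1.6, 7, and 15 mg/gdw for LIC; 2500 µg/L for ferritin, with 1500 µg/L as the stricter alternative some authors prefer); the function names and example values are illustrative and not part of the original study.

```python
# Minimal sketch of the risk stratification implied by the thresholds above.
# Cut-offs follow the text: LIC in mg/g dry weight, ferritin in micrograms per litre.

def lic_risk(lic_mg_gdw: float) -> str:
    if lic_mg_gdw <= 1.6:
        return "normal"
    if lic_mg_gdw < 7.0:
        return "low risk of complications"
    if lic_mg_gdw <= 15.0:
        return "intermediate"
    return "high risk (fibrosis/cirrhosis, cardiac death)"

def ferritin_risk(mean_sf_ug_l: float, strict: bool = False) -> str:
    # Some authors prefer the stricter 1500 ug/L threshold instead of 2500 ug/L.
    threshold = 1500.0 if strict else 2500.0
    return "high risk of cardiac injury" if mean_sf_ug_l > threshold else "lower risk"

# Illustrative values only.
print(lic_risk(9.3))        # intermediate
print(ferritin_risk(2800))  # high risk of cardiac injury
```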
Patients and Method A retrospective study was performed in a group of thalassemia major and thalassemia intermedia patients followed at the Centro Regional de Hematologia e Hemoterapia (Regional Blood Center) of the Ribeirão Preto Hospital and Clinics (HCRP, Hospital das Clínicas of the Medicine School in Ribeirão Preto), in which 43 magnetic resonance investigations were carried out for the quantification of LIC. Evaluations were carried out in the Radiology Department of HCRP and in the Physics and Mathematics Department, University of Philosophy, Sciences and Languages of Ribeirão Preto, University of São Paulo, campus of Ribeirão Preto, Brazil. To evaluate liver iron overload, MR images were acquired using two SE and two GRE sequences on a 1.5-T whole body scanner (Siemens Magnetom Vision Plus). LIC values from relaxometry (R2) using SSE sequences were computed according to the protocol developed by Clark and St. Pierre,10,11 who validated their protocol for quantifying iron overload from R2 using more than 100 biopsies. Twenty-five patients, including 11 women, with an average age of 21 ± 7 years (range: 6 to 39) were evaluated. They had been multitransfused since infancy, were under chelation with deferoxamine with good adherence to treatment, and had a yearly blood consumption below 200 mL per kilogram. The LIC was quantified, as described, in milligrams per gram of dry weight (mg/gdw) of liver, with normal values being under 1.6 mg/gdw.10,11 The serum ferritin was assessed by chemiluminescence, and an average of 15 measurements per patient were performed over a four-year period. Normal values are 300 µg/L for men and 150 µg/L for women. The data was analyzed using the Spearman correlation in the Instat program version 3.01 (GraphPad Software, Inc.), with values under 0.05 being considered significant.
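The central statistical step, a non-parametric (Spearman) correlation between a single LIC value and the four-year mean serum ferritin, can be reproduced with standard tools. The sketch below uses scipy.stats.spearmanr on made-up paired values, not the study's actual data.

```python
from scipy.stats import spearmanr

# Hypothetical paired measurements (NOT the study's data):
# one LIC value (mg/g dry weight) and the 4-year mean serum ferritin (ug/L) per patient.
lic = [2.1, 5.4, 8.9, 12.3, 16.8, 7.7, 3.2, 19.5, 10.1, 6.0]
mean_ferritin = [800, 1400, 2600, 2100, 3900, 1700, 950, 4500, 2300, 1600]

rho, p_value = spearmanr(lic, mean_ferritin)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
# As in the paper, p < 0.05 would be taken as a significant correlation.
```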
Results Injuries from iron accumulation were found in 19/25 patients and were mainly heart related (arrhythmia or insufficiency) in three, diabetes mellitus in one, growth retardation in four, and hypogonadism in eleven patients. In 12/25 there were antibodies against the Hepatitis C Virus (anti-HCV). In only 11/25 patients was there concordance between SF and LIC for the risk of organ disease. Discussion The validation of a method of iron measurement in multitransfused patients is essential to optimize therapeutic chelation, controlling iron loading and avoiding chelator toxicity. Traditionally this is achieved using serum ferritin, with known limitations.12 Magnetic resonance imaging seems to be a better method: it is less invasive than liver biopsy and is not affected by interference from fibrosis.8 Cardiac injury was responsible for the death of 70% of the patients in the past, but thanks to aggressive chelation this rate has dropped to between 50 and 67% and, in particular, the survival of women has increased, even though clearance of myocardial iron is slower than that of hepatic deposits.[15][16] The results presented here are from a group of young patients, transfused since infancy and with good adherence to chelation using deferoxamine. The majority already have endocrine gland lesions due to iron accumulation. There was concordance of risk as estimated by LIC and mean ferritin levels in only 11 individuals, but the overall results were very similar. High risk was identified in 10/25 patients using serial ferritin measurements and in 9/25 patients by LIC. These patients need aggressive chelation therapy immediately. Thus the estimate of the accumulation of liver, and hence body, iron by magnetic resonance did not identify more at-risk patients than the mean SF levels, but only one measurement was required. LIC needs to be performed periodically, for example once a year, and at shorter intervals for intensively chelated patients. SF must be measured at least 3 times a year. Measurement of liver iron allows real quantification of iron accumulation and of the effectiveness of chelation, and enables changes in strategy17 at a glance; this requires much more time with SF. There are many other factors that influence iron loading and organ disease in thalassemia, and treatment must be individualized. Perhaps the best strategy would be to use both techniques, mean SF and LIC by MRI, frequently. But MRI is not as accessible as SF: machines are expensive and busy in public hospitals. We suggest that, for thalassemia patients undergoing blood transfusion, the mean SF should be kept under 1500 µg/L by chelation if possible, and organ function should be monitored closely. A LIC evaluation performed once per year is an additional tool to monitor iron accumulation; however, prognosis must not be based on an isolated measurement.
Astragaloside IV Improves High-Fat Diet–Induced Hepatic Steatosis in Nonalcoholic Fatty Liver Disease Rats by Regulating Inflammatory Factors Level via TLR4/NF-κB Signaling Pathway Objective: Astragaloside IV (AS-IV) is the primary bioactive component purified from Astragalus membranaceus which is one of the traditional Chinese medicines. Research studies found that AS-IV has significant pharmacological effects on focal cerebral ischemia/reperfusion, cardiovascular disease, pulmonary disease, liver cirrhosis, and diabetic nephropathy, but little is known about the effects of AS-IV on nonalcoholic fatty liver disease (NAFLD). In this study, we investigated whether AS-IV has beneficial effects on NAFLD in rats and its potential mechanisms. Methods: Male SD rats were fed with high-fat diet (HFD) for 12 weeks to establish NAFLD rat model, and then, the rats were divided into five groups. The control group rats were fed with normal diet for 12 weeks and then were given normal saline (1.0 ml kg−1 day−1) by intragastric administration for 4 weeks. The model group rats were fed with HFD for 12 weeks and then were given normal saline (1.0 ml kg−1 day−1) by intragastric administration for 4 weeks. The AS-IV-L, AS-IV-M, and AS-IV-H groups were treated with 20, 40, and 80 mg kg−1 day−1 of AS-IV by intragastric administration for 4 weeks and given HFD diet. Then, we detected serum transaminase (ALT, AST), blood lipid (TG, TC), inflammatory cytokines (IL-6, IL-8 and TNF-α), liver histology(NAFLD activity score), TLR4/MyD88 signaling pathway in liver tissue. Results: We found AS-IV significantly reduced serum levels of AST, ALT, TG, TNF-α, IL-6, and IL-8 in NAFLD rats and downregulate the expression of TLR4 mRNA, MyD88 mRNA, NF-κB mRNA, and proteins in liver tissue. Moreover, AS-IV could significantly reduce the NAFLD activity score of NAFLD rat liver. Conclusion: In this study, we demonstrated that AS-IV have a protective effect on NAFLD by inhibiting TNF-α, IL-6 and IL-8 levels and down-regulating TLR4, MyD88 and NF-κB expression in rat liver tissues. INTRODUCTION Nonalcoholic fatty liver disease (NAFLD) is the most common chronic liver disease worldwide. NAFLD includes three subtypes: nonalcoholic fatty liver (NAFL), nonalcoholic steatohepatitis (NASH), and related liver cirrhosis. NAFL can develop into NASH, while NASH can gradually develop into liver cirrhosis and liver cancer (Diehl and Day, 2017). The prevalence of NAFLD in ordinary adults is between 6.3% and 45%, and the average prevalence is as high as 25.24% (Younossi et al., 2016). An epidemiological survey from Shanghai, Beijing, and other regions in China showed that the prevalence of NAFLD in ordinary adults diagnosed by B-type ultrasonography had increased from 15% to more than 31% over a 10-year period (Zhu et al., 2015). The pathogenesis of NAFLD has not been elucidated completely, and the theory of "multiple strikes" is widely accepted by professionals at present (Tilg and Moschen, 2010). Recently, the mechanism of innate immunity in NAFLD pathogenesis has received more and more attention. Studies had indicated that TLR4 signaling pathway was one of the key factors in the pathogenesis of different chronic liver diseases including NAFLD (Soares et al., 2010;Roh and Seki, 2013) and was associated with the progression of NASH Kapil et al., 2016). Singh et al. (2011) revealed that the expression of TLR4 mRNA and its protein in normal liver tissues was lower than that in NASH patients. 
This suggested the importance of immunological inhibition and immune tolerance in the normal liver. Sharifnia et al. (2015) found that the expression of TLR4 mRNA and interferon regulatory factor-3 (IRF-3) mRNA in the liver of NASH patients was significantly increased compared with NAFLD patients. Furthermore, the expression of TLR4 and its downstream mediators was upregulated after treatment with palmitate and lipopolysaccharide (LPS). This indicated that TLR4 has a vital function in the pathogenesis of NASH and is one of the important factors related to LPS sensitivity and fatty acid damage. Present studies have found that innate immunity plays an important role in NAFLD pathogenesis. TLRs are a series of pattern recognition receptors and play a crucial role in the activation of the innate immune system by identifying pathogen-associated molecular patterns (PAMPs) (Leifer and Medvedev, 2016; Vidya et al., 2018). TLR4 is an important member of the TLR family. It is an endotoxin recognition receptor that mediates innate immunity and is a bridge between the innate and acquired immunity of the human body (Takeda and Akira, 2015). TLR4 is mainly located on the surface of the cell membrane and is the receptor of the intestinal-derived endotoxin lipopolysaccharide (LPS) of Gram-negative bacteria (Akira et al., 2006). TLR4 can initiate a series of injury-related immune responses (Erridge, 2010; Wakefield et al., 2010). TLR4 interacts with its downstream adaptor molecule myeloid differentiation factor 88 (MyD88) and then activates nuclear factor-κB (NF-κB) transcription factors to produce and release cytokines. As one of the important pathways associated with the inflammatory response, TLR4/NF-κB signal transduction pathway activation can lead to abundant expression of inflammatory factors including TNF-α, IL-1, IL-6, IL-8, and adhesion molecules and then induce a series of inflammatory responses (Snyder and Sundberg, 2014; Mitchell et al., 2016). Until now, there has been no commonly acknowledged therapeutic method for NAFLD, although NAFLD is very common (Diehl and Day, 2017). AS-IV is a monomer component purified from the traditional Chinese medicine Astragalus membranaceus. The molecule was first isolated from Astragalus membranaceus in 1983 by the Japanese scholars Kitagawa et al. (1983). The molecular formula of AS-IV is C41H68O14, and its molecular mass is 784.97 (Figure 1). The bioavailability of AS-IV after p.o. administration is only 3.66% in rats (Zhang et al., 2007), and the low absorption is mainly due to its poor intestinal permeability, high molecular weight, low lipophilicity, and its paracellular transport in Caco-2 cells (Huang et al., 2006). It has been revealed that AS-IV has multiple biological activities, such as anti-inflammatory, antioxidant, lipid-regulating, hypoglycemic, and immunomodulating activities (Ren et al., 2013; Li et al., 2017). It has been shown that AS-IV can play an anti-inflammatory role through multiple pathways. AS-IV can regulate cytokines, inflammatory factors, signaling pathways, and apoptosis-related genes which are associated with inflammatory injury (Li et al., 2017). Zhang and Frei (2015) reported that AS-IV could effectively inhibit LPS-induced acute inflammatory responses in different organs of rats by regulating TLR4/NF-κB and reducing TNF-α and IL-6 expression. Zhou et al. (2017b) demonstrated that AS-IV inhibited TLR4/NF-κB signaling pathways in a unilateral ureteral obstruction mouse model and in LPS-induced epithelial cells. Lv et al.
(2010) reported that AS-IV could reduce glycogen phosphatase and glucose-6-phosphate levels in the liver, reduce blood glucose and triglyceride levels, and improve insulin resistance in type 2 diabetic mice. However, the impacts of AS-IV on NAFLD have been rarely reported. In this study, our aim is to investigate whether AS-IV can improve HFD-induced hepatic steatosis by inhibiting the expression of TLR4, MyD88, and NF-κB in the liver tissue of NAFLD rats. Animals and Treatments A total of 53 male Sprague Dawley rats (age, 6 weeks; weight, 200 ± 20 g) were purchased from Beijing Vital River Laboratory Animal Technology Co. Ltd. (Beijing, China, NO.SCXK 2016-0011) and raised in the Experimental Animal Center of the Institute of Radiation Medicine, Chinese Academy of Medical Sciences (Tianjin, China). The rats were maintained under standard conditions of temperature (22 ± 2°C) and humidity (50 ± 5%) in a 12 h light/dark cycle. Animals were allowed free access to food and water throughout acclimatization and experimental periods. After 1 week of acclimatization, they were randomly divided into control group (CON, n 10), model group (MOD, n 13), and intervention group (n 30). The control group was fed with normal diet for 12 weeks, followed by 4 weeks of intragastric administration of saline (1.0 ml kg −1 day −1 ) and continuous normal diet feeding. The model group and the intervention group were fed high-fat diet (HFD, 88% of normal diet, plus 2% of cholesterol and 10% of lard) for 12 weeks for NAFLD modeling. After 12 weeks, three rats of the model group were randomly selected to confirm the successful modeling of NAFLD by liver histopathological examination and the other 10 rats were subsequently intragastrically administered with 1.0 ml kg −1 day −1 saline for 4 weeks alongside HFD feeding. The intervention group was then randomly divided into astragaloside IV (Dalian, China, Cat. No. MB 1955) low-dose group (AS-IV-L, n 10), middle-dose group (AS-IV-M, n 10), and high-dose group (AS-IV-H, n 10). Rats in the intervention groups were intragastrically administered with different concentrations of AS-IV (20 mg kg −1 day −1 for AS-IV-L, 40 mg kg −1 day −1 for AS-IV-M, or 80 mg kg −1 day −1 for AS-IV-H) for 4 weeks, respectively. The weight and food intake of rats in each group were measured weekly. All animal procedures conducted in this study were approved by experimental animal ethics committee, Institute of Radiology, Chinese Academy of Medical Sciences(Ethics number IRM-201905-072). Histopathology Analysis After 16 weeks, all rat livers were collected, part of the liver was fixed in 4% neutral formalin at 4°C overnight, and the liver tissues were prepared into 5 µm liver pathological sections on a microtome. Sections were stained with hematoxylin and eosin (H&E) according to standard techniques. The sections were observed by two blinded experienced pathologists. Twenty high-magnitude visual fields were observed randomly in every section, and the NAFLD activity score (NAS) were obtained under the microscope. NAS was shown in Table 1 (Chalasani et al., 2012). Quantitative Real-Time (RT) PCR Total RNA was extracted from rat liver tissues using TRIzol reagent kit (Applied Biosystems Inc., Carlsbad, USA, Cat. No. 15596-026). cDNA was synthesized from 1 mg of total RNA with a reverse transcriptase kit (Vazyme-biotec, Nanjing, China, Cat. No. R101-01/02). The primer sequences (TSINGKE Biological Technology, Beijing, China) used in the real-time PCR (RT-PCR) assay were shown in Table 2. 
GAPDH served as an internal reference. SYBR Green qPCR Master Mix (Vazyme-biotech, Nanjing, China, Cat. No. Q111-02) was used for RT-PCR amplification. The cycle conditions were denaturation at 95°C for 10 min followed by 40 repeated annealings at 95°C for 30 s and extension at 60°C for 30 s. The mRNA expression levels were assessed using the 2 −ΔΔCq method. Western Blotting Detection of liver TLR4, MyD88, and NF-κB p65 protein expression was performed according to the kit (Beyotime Biotechnology, Shanghai, China) manufacturer's instructions. 100mg liver tissue was lyzed in PMSF buffer for 30 min and then centrifuged for 5 min at 12,000 rpm and 4°C. The supernatant of liver tissue was used to quantitatively detect the protein levels. Protein samples (50 μg) from rat liver tissues were separated by 12% SDS-PAGE and electrotransferred to a nitrocellulose membrane, followed by 5% dried skimmed milk blocking at room temperature for 2 h and hybridization overnight at 4°C with Statistical Analysis SPSS 20.0 statistical software (SPSS, Inc., Chicago, IL, USA) was used for all statistical analyses. Measurement data were expressed as the mean ± standard error of the mean. Data comparisons between two groups used Student's t-tests and data comparison among multiple groups used one-way ANOVA analysis. A value of p < 0.05 was considered significant difference. The Weight of Rats and Daily Food Intake After 12 weeks of feeding, the body weight of rats fed with normal diet and high-fat diet all increased steadily, while the weight of rats fed with high-fat diet increased significantly faster than that of rats fed with normal diet. After 16 weeks, the weight gain rate of AS-IV-L, AS-IV-M, and AS-IV-H groups were significantly lower than the model group. However, there was no significant difference in the daily food intake among groups. This indicated that AS-IV reduces the weight gain of NAFLD rats, not by reducing their food intake ( Figure 2). Astragaloside IV Reduces the Serum TG, ATL, and AST Levels of NAFLD Rats To investigate the effect of astragaloside IV on NAFLD rats, we tested serum TG, TC, ATL, and AST levels of rats of each group at the end of 16 weeks. NAFLD rats showed significantly higher levels of serum TC and TG than the control group. This was consistent with the characteristics of NAFLD. After treated with high-dose AS-IV for 4 weeks, the serum TG level was significantly deceased in NAFLD rats (p < 0.01), but the low-dose and middle-dose AS-IV have no obvious effect on reducing the serum TG level. Meanwhile, all the low, middle, and high doses of AS-IV did not reduce the serum TC level. This shows that AS-IV has no obvious effect on the regulation of lipid metabolism. The serum AST and ALT levels in the NAFLD rats were significantly higher than those in the control group rats. This was consistent with the characteristics of NAFLD. All of the high-dose (80 mg kg −1 day −1 ), middle-dose (40 mg kg −1 day −1 ), and lowdose (20 mg kg −1 day −1 ) AS-IV treatments showed a significantly decreased level of serum AST in NAFLD rats (p < 0.05). Meanwhile, the middle-and high-dose treatments of AS-IV showed a significant decrease in the ALT levels of NAFLD rats (p < 0.05). The effect of reduced AST and ALT seemed to be dose-dependent. This suggested that AS-IV can significantly reduce the release of transaminase due to hepatocyte injury ( Figure 3). Astragaloside IV Improves Hepatic Steatosis in NAFLD Rats NAFLD is characterized by liver fat deposition. 
Long-term liver fat deposition can lead to liver inflammation. After 12 weeks of HFD, all the sacrificed rats showed liver fat deposition by liver H&E staining. This confirmed that the NAFLD rat model was successfully established. After 4 weeks of administration of AS-IV, the liver lipid deposition of NAFLD rats was reduced on microscopic observation (Figure 4E). The liver NAS score was markedly decreased in all AS-IV-treated NAFLD rats (Figure 4D). Hepatic steatosis and intralobular inflammation were significantly attenuated in middle- and high-dose AS-IV-treated NAFLD rats (p < 0.05) (Figures 4A,B), and balloon-like changes were significantly improved in high-dose AS-IV-treated NAFLD rats (p < 0.01) (Figure 4C). The effect of AS-IV on the NAS of the liver in NAFLD rats also seemed to be dose-dependent. Through this experiment, we directly confirmed the therapeutic effect of AS-IV on NAFLD rats. Astragaloside IV Inhibits Hepatic TLR4, MyD88, and NF-κB Expression in NAFLD Rats Previous discussions have shown that AS-IV may act on the TLR4 signaling pathway in some diseases and that TLR4 plays an important role in the development of NAFLD. To determine whether the liver-histopathology-improving effect of AS-IV was associated with the TLR4 signaling pathway, we examined the TLR4 mRNA, MyD88 mRNA, and NF-κB mRNA levels in the liver of NAFLD rats. We found that the levels of TLR4 mRNA, MyD88 mRNA, and NF-κB mRNA in the liver tissue of NAFLD rats were markedly upregulated compared to normal diet-fed rats (p < 0.01). This confirmed that the TLR4 signaling pathway plays a role in the pathogenesis of NAFLD. After treatment with AS-IV for 4 weeks, the levels of TLR4 mRNA, MyD88 mRNA, and NF-κB mRNA in the liver tissue of NAFLD rats were markedly restored at the low, middle, and high doses of AS-IV. Compared to normal diet-fed rats, Western blotting showed that the protein expression of TLR4, MyD88, and NF-κB in the liver of NAFLD rats was markedly upregulated and was restored after AS-IV administration at the middle and high doses (Figure 5). Astragaloside IV Reduces Serum TNF-α, IL-6, and IL-8 Levels in NAFLD Rats As we described in the introduction, TLR4 can initiate a series of injury-related immune responses (Erridge, 2010; Wakefield et al., 2010). Overexpression of the TLR4 signaling pathway will induce an immune inflammatory response. To detect the immune inflammatory response in NAFLD rats, we examined the serum levels of TNF-α, IL-6, and IL-8. We found that NAFLD rats showed significantly higher serum levels of TNF-α, IL-6, and IL-8 than the control group. Treatment with different doses of AS-IV significantly reduced the serum TNF-α levels of NAFLD rats (p < 0.05), and this effect seemed to be dose-dependent. The serum levels of IL-6 and IL-8 in NAFLD rats were significantly decreased in the middle- and high-dose AS-IV-treated groups (p < 0.05) (Figure 6). Figure 5. Representative protein expression bands of hepatic TLR4, MyD88, and NF-κB p65 analyzed by Western blotting. Data present the mean ± SD. Comparisons between two groups used t-tests, and data comparison among multiple groups used one-way ANOVA. *p < 0.05 and **p < 0.01 compared with the control group; #p < 0.05 and ##p < 0.01 compared with the model group. CON, control group; MOD, model group; AS-IV-L, astragaloside IV low-dose group; AS-IV-M, astragaloside IV middle-dose group; AS-IV-H, astragaloside IV high-dose group.
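The relative mRNA levels reported above were assessed with the 2^-ΔΔCq method against the GAPDH reference, as described in the methods. A minimal sketch of that calculation is given below; the Cq values are hypothetical and serve only to show how a fold change is derived.

```python
# Minimal sketch of the 2^-DeltaDeltaCq calculation used for the mRNA levels above.
# Cq values below are hypothetical; GAPDH is the internal reference, as in the text.

def fold_change(cq_target_sample: float, cq_ref_sample: float,
                cq_target_control: float, cq_ref_control: float) -> float:
    delta_cq_sample = cq_target_sample - cq_ref_sample
    delta_cq_control = cq_target_control - cq_ref_control
    delta_delta_cq = delta_cq_sample - delta_cq_control
    return 2.0 ** (-delta_delta_cq)

# Hypothetical example: TLR4 in a model-group liver vs. a control-group liver.
print(round(fold_change(24.1, 18.0, 26.3, 18.1), 2))  # >1 means upregulation vs. control
```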
DISCUSSION

Previous studies have shown that TLR4 is associated with hepatic steatosis and NAFLD (Miura and Ohnishi, 2014). Singh et al. (2011) found that TLR4 mRNA and protein levels in the liver tissue of NASH patients were higher than those in the normal population, and functionally impaired TLR4-mutant mice were resistant to diet-induced NAFLD (Csak et al., 2011). Enterogenic endotoxin LPS, the ligand of TLR4, increased significantly in NAFLD rodent models induced by different diets (Miura and Ohnishi, 2014). Furthermore, injection of LPS in NAFLD mice increased proinflammatory cytokines and aggravated hepatic steatosis (Kudo et al., 2009). Recently, Feng et al. (2019) reported NAFLD models established by feeding a high-fat diet to ApoE−/− mice, with an impaired intestinal mucosal barrier and increased serum LPS levels; they also found that the expressions of TNF-α mRNA, IL-1β mRNA, and TLR4, MyD88, and NF-κB protein in the liver tissue were upregulated. In our study, we found that TLR4 mRNA, MyD88 mRNA, NF-κB mRNA, and their proteins were significantly upregulated in the NAFLD rat model induced by high-fat diet. These findings suggest that the TLR4/NF-κB signaling pathway is involved in the pathogenesis of NAFLD.

The TLR4/NF-κB signal transduction pathway is one of the important pathways associated with the inflammatory response. Its activation can lead to large-scale expression of inflammatory cytokines and then induce a series of inflammatory responses. Cytokines are closely related to NASH. Multiple cytokines, including TNF-α, IL-6, and IL-8, are involved in the development of NASH (Mendez-Sanchez et al., 2020). Among them, TNF-α is the first proinflammatory cytokine released in the body's immune response, which further recruits a variety of inflammatory factors and initiates the development of NAFLD (Stojsavljevic et al., 2014). TNF-α has a strong inhibitory effect on lipoprotein lipase, which can reduce the decomposition of peripheral adipose tissue, promote the synthesis of TG in hepatocytes, and induce lipid accumulation in the liver (Giby and Ajith, 2014; Mendez-Sanchez et al., 2020). TNF-α can also hinder insulin signaling by inducing the expression of signal transduction inhibitor 3, leading to insulin resistance (IR) (Neuschwander-Tetri, 2010; Tilg and Moschen, 2010). IL-6 is mainly secreted by adipose tissue and is highly expressed in the plasma and liver tissues of NASH patients (Schleicher et al., 2015). Studies have shown that IL-6 can impede insulin receptor signaling, lead to IR, and aggravate NAFLD development by inhibiting the expression of IRS-1, GLUT4, and phosphatidylinositol 3-kinase (Park et al., 2010; Wided et al., 2014). In NASH patients, IL-8 levels are significantly elevated, which can induce intrahepatic neutrophil infiltration and lead to hepatocyte injury through neutrophil activation and chemotaxis (Joshi-Barve et al., 2007; Nassir and Ibdah, 2014). Furthermore, IL-8 can activate liver macrophages and promote liver fibrosis/cirrhosis in NASH patients (Zimmermann et al., 2011). In this study, we found that TLR4 mRNA, MyD88 mRNA, NF-κB mRNA, and their proteins were significantly upregulated in NAFLD model rats induced by high-fat diet, and serum TNF-α, IL-6, and IL-8 levels were significantly increased in these rats. These results suggest that the TLR4/NF-κB signaling pathway and its downstream inflammatory cytokines are involved in the pathogenesis of NAFLD.
Previous studies have revealed that AS-IV has multiple biological activities, such as anti-inflammatory, antioxidant, lipid-regulating, hypoglycemic, and immunomodulating activities (Ren et al., 2013; Li et al., 2017). Recent studies have shown that AS-IV can protect multiple organs by regulating the TLR4/NF-κB signaling pathway. Yang et al. (2013) reported that AS-IV could prevent isoproterenol-induced myocardial hypertrophy in rats by inhibiting the expression of the TLR4/NF-κB signaling pathway and its downstream inflammatory cytokines. Lu et al. (2015) reported that AS-IV could downregulate TLR4/NF-κB signaling and inhibit apoptosis, thus alleviating myocardial injury in a rat model of myocardial ischemia/reperfusion. Leng et al. (2018) reported that AS-IV could improve hyperglycemia-induced vascular endothelial dysfunction and reduce IL-6 and TNF-α levels by regulating the TLR4/NF-κB signaling pathway.

However, the effects of AS-IV on NAFLD have rarely been reported. Jiang et al. (2008) reported that AS-IV could alleviate IR and improve liver steatosis in a rat model of type 2 diabetes mellitus; meanwhile, AS-IV could reduce free fatty acid-induced lipid accumulation in rat hepatocytes. Wu et al. (2016) reported that AS-IV improved lipid metabolism in obese mice by attenuating leptin resistance and regulating the heat production network of the mice. Zhou et al. (2017a) reported that AS-IV attenuated free fatty acid-induced endoplasmic reticulum stress and lipid accumulation in hepatocytes through adenosine 5′-monophosphate-activated protein kinase (AMPK) activation. Recently, Wang et al. (2018) reported that AS-IV could inhibit IR and lipid accumulation in HepG2 cells by activating AMPK and reducing the phosphorylation of sterol-regulatory element binding protein (SREBP)-1c. All of these findings suggest that AS-IV is a promising drug for the treatment of NAFLD.

In our study, we confirmed that AS-IV administration could reduce dyslipidemia and improve hepatic steatosis in HFD-induced NAFLD rats, suggesting that AS-IV can improve HFD-induced hepatic steatosis and NAFLD. Furthermore, we found that AS-IV could downregulate the expressions of TLR4 mRNA, MyD88 mRNA, and NF-κB p65 mRNA and their proteins in the liver tissue of HFD-induced NAFLD rats and, meanwhile, reduce serum TNF-α, IL-6, and IL-8 levels. These results suggest that AS-IV may protect NAFLD rats by downregulating TLR4, MyD88, and NF-κB expression and inhibiting TNF-α, IL-6, and IL-8 levels. However, this study did not clearly define the detailed mechanisms of how AS-IV inhibits TLR4/NF-κB signaling pathway activation. Our findings are also restricted to an animal study and should be confirmed in vitro. As mentioned above, the bioavailability of orally administered AS-IV is very low, and most of it is excreted in feces. In the gastrointestinal tract, AS-IV will interact with the intestinal flora, so we hypothesize that AS-IV may play a role in the treatment of NAFLD by regulating the structure and function of the intestinal microflora and thereby decreasing the levels of the enterogenic endotoxin LPS. Therefore, further studies should be conducted in vivo and in vitro to elucidate the beneficial effects of AS-IV on hepatic steatosis.

CONCLUSION

In summary, we demonstrated in this study that AS-IV was an effective treatment for HFD-induced NAFLD rats, improving hepatic steatosis and hepatic lipid deposition.
This work revealed that AS-IV could inhibit serum TNF-α, IL-6, and IL-8 levels and downregulate the expressions of TLR4 mRNA, MyD88 mRNA, and NF-κB mRNA and their proteins. AS-IV may be a potential drug for the treatment of NAFLD through regulation of the TLR4/NF-κB signaling pathway.

DATA AVAILABILITY STATEMENT

The original contributions presented in the study are included within the article/Supplementary Material; further inquiries can be directed to the corresponding author.

ETHICS STATEMENT

The animal study was reviewed and approved by the Experimental Animal Ethics Committee of the Institute of Radiology, Chinese Academy of Medical Sciences (ethics number IRM-201905-072).
Chemical profiling of mycosporine-like amino acids in twenty-three red algal species

Rhodophyta produce a variety of chemically different mycosporine-like amino acids (MAAs), compounds known as some of the strongest ultraviolet (UV) absorbing molecules in nature. Accordingly, they primarily act as photoprotectants against harmful levels of solar ultraviolet radiation in the UV-A and UV-B range. In order to gain a deeper understanding of the chemical diversity of MAAs in red algae, pure standards of eleven mycosporine-like amino acids were isolated from three different species (Agarophyton chilense, Pyropia plicata and Champia novae-zelandiae) using various chromatographic methods. Their structures were confirmed by nuclear magnetic resonance and mass spectrometry. Four of the eleven MAAs are reported for the first time in algae. In addition, a new high-performance liquid chromatography method was developed for the separation of all isolated MAAs and successfully applied to the analysis of twenty-three red algal species of marine origin. All of them contained MAAs; the most abundant compounds were shinorine, palythine, asterina-330 and porphyra-334. For some samples, the direct assignment of MAAs based on their UV spectra was not possible; therefore, the target analytes were enriched by a simple concentration step, followed by liquid chromatography-mass spectrometry analysis of the extracts. This approach enabled a deeper insight into the MAA pattern of red algae, indicating that not only the four dominant ones are synthesized but also many others, which were often described as unknown compounds in previous studies.

Abbreviations: D2O, deuterated water; FCPC, fast centrifugal partition chromatography; LC-DAD-ESI-MS, liquid chromatography-diode array detector-electrospray ionization-mass spectrometry; LC-MS, liquid chromatography-mass spectrometry; MAAs, mycosporine-like amino acids; NOESY, nuclear Overhauser enhancement spectroscopy; TMS, tetramethylsilane

The effects of increased levels of ultraviolet radiation (UVR) reaching the earth's surface have become an important health and ecological concern, mainly as a consequence of ozone-depleting substances such as chlorofluorocarbons in the atmosphere (Cardozo et al. 2011, Montzka et al. 2018). These compounds were already banned by the Montreal Protocol in 1987; however, many uncontrolled substances like dichloromethane are still under suspicion of also causing ozone reduction (Hossaini et al. 2017). UVR strongly affects many benthic marine organisms in the shallow water zone, such as seaweeds (Karsten 2007). Algae have developed various adaptation mechanisms to withstand the harmful effects of UV-A (315-400 nm) and UV-B (280-315 nm; Karsten 2007 and references therein), like self-shading by mat formation (Blindow and Schütte 2007), dissipation of excess energy as heat (Ramanan et al. 2014), development of anti-oxidative systems involving enzymatic and non-enzymatic mechanisms (Lee and Shiu 2009, Ramanan et al. 2014), and the accumulation of primary (Hartmann et al. 2015a) or secondary metabolites such as mycosporine-like amino acids. MAAs are composed of a cyclohexenone or cyclohexenimine chromophore conjugated with the nitrogen substituent of an amino acid or its imino alcohol (Sinha et al. 2007) and are characterized by their small molecular weight, high solubility, and polarity (Wada et al. 2015).
They are among the strongest UV-absorbing natural products, and their absorption maxima lie between 268 and 362 nm depending on their chemical structure. Therefore, MAAs increase the UV-absorbing capacity of the respective organisms (Jansena et al. 1998) and are interesting for cosmetic and pharmaceutical use as active ingredients in cosmeceuticals (Rastogi et al. 2017). Rhodophyta are an ideal source of MAAs because they are known to accumulate the highest concentrations per dry weight and to contain the greatest variety among the different algal divisions, with shinorine, porphyra-334, palythine, asterina-330, mycosporine-glycine, usujirene, palythinol and palythene being the most abundant representatives (Karsten et al. 1998, 2006). Green algae (i.e., Chlorophyta and Streptophyta), on the other hand, comprise a generally lower number of species containing MAAs, but some terrestrial genera (e.g., Prasiola, Klebsormidium) exhibit comparable or even higher concentrations of specific MAAs than Rhodophyta (Kitzing et al. 2014, Hartmann et al. 2016), whereas Phaeophyceae in general do not contain MAAs or exhibit only trace amounts (which might derive from epiphytic algae); furthermore, cyanobacteria have been reported to produce a high variety of MAAs, many of which remain chemically unknown (Carreto and Carignan 2011).

Several studies have revealed geographic, seasonal and bathymetric trends, showing that red algae from tropical zones accumulate higher amounts of MAAs than those in temperate regions. In general, their concentrations increase during summer and decrease with water depth. All these facts indicate the important role of MAAs as photoprotective agents (Karsten et al. 2006, Ayoub et al. 2012, Tartarotti et al. 2017). Monitoring the MAA pattern is therefore an important task, since it can easily be related to ecological and chemosystematic questions. The preferred technique for MAA analysis is high-performance liquid chromatography (HPLC), and several methodological approaches have been reported already (Carignan et al. 2009, Rastogi and Incharoensakdi 2014, Hartmann et al. 2015b). This study, however, had two goals: first, to improve the current HPLC methodology to enable the assignment of a larger number of MAAs, and second, to screen different red algae for the occurrence of these, partly new, compounds for the first time.

MATERIALS AND METHODS

Biological material. All of the red algae investigated were collected and morphologically identified by the third author U. Karsten, Prof. J. A. West, University of Melbourne, Australia, or Prof. G. C. Zuccarello, Victoria University of Wellington, New Zealand, using their taxonomic expert knowledge in conjunction with standard identification keys (Hiscock 1986; http://www.algaebase.org/); details regarding species, collection date and place are summarized in Table S1 in the Supporting Information. Palythoa tuberculosa, a zoanthid, was collected in KB Channel (east side), Palau, at depths of 17-20 m in February 2018 and identified by Prof. J. D. Reimer, University of the Ryukyus, Japan. Voucher samples of all specimens are deposited at the Institute of Pharmacy, Pharmacognosy, University of Innsbruck, Austria.

Chemicals and reagents. All solvents required for extraction and isolation were purchased from VWR International (Vienna, Austria), and ethyl acetate was distilled before use. Solvents for analytical experiments had at least pro analysis (p.a.) quality and were obtained from Merck (Darmstadt, Germany).
Deuterated solvents were supplied by Euriso-Top (Saint-Aubin Cedex, France). Ultrapure water was produced by a Sartorius arium 611 UV (Göttingen, Germany) purification system. Silica gel 40-63 µm and prepacked cartridges for flash chromatography were purchased from Merck (Darmstadt, Germany) and Büchi (Flawil, Switzerland), respectively.

MAA isolation. Three red algae, namely Pyropia plicata, Champia novae-zelandiae, and Agarophyton chilense, were selected for the isolation of individual MAAs. Either the methanol-soluble part of the water extract (P. plicata) or the methanolic extract (A. chilense and C. novae-zelandiae) was used for further purification. By combining different techniques like silica gel column chromatography, flash chromatography using C-18 material, semi-preparative HPLC on diverse stationary phases and fast centrifugal partition chromatography (FCPC), eleven pure MAAs were obtained as standard compounds. The following compounds were isolated from the individual species: the fractionation of P. plicata led to the isolation of compounds 1, 2, 4, 5, 6, 7 and 9; from C. novae-zelandiae, compounds 3 and 8 were obtained, while fractionation of the extract of A. chilense led to compounds 5, 6, 10 and 11. Their MS data and nuclear magnetic resonance (NMR) shift values are described in the supplementary information (Appendix S1 in the Supporting Information). Original NMR spectra are available upon request; for compound 5 they can be found in the supplementary material.

The methanol-soluble part of the aqueous extract of Champia novae-zelandiae (16 g) was first fractionated on a silica gel column in gradient mode with EtOAc and methanol as solvents, to give 9 fractions. Fraction 6 (1.7 g) was further purified with flash chromatography, using a C-18 40 g cartridge (Büchi) and a water/methanol gradient. The sub-fraction 6f thus obtained (20 mg) was purified by semi-preparative HPLC on a Synergi 4u Polar-RP (250 mm × 10 mm, 4 µm; Phenomenex) column under isocratic conditions (2% MeOH in 0.25% formic acid) to obtain compound 3 (2 mg). Next, sub-fraction 6d (80 mg) was separated by semi-preparative HPLC with an Aqua C18 column (250 mm × 10 mm, 5 µm; Phenomenex). The mobile phase comprised 0.25% (v/v) formic acid in water (A) and methanol (B), applied with a gradient of 0 min: 2% B, 5 min: 10% B, 25 min: 20% B. The separation resulted in the isolation of compound 8 (2 mg).

Fast centrifugal partition chromatography was used for the fractionation of the methanol extract of Agarophyton chilense (9 g) on an instrument from Kromaton (Annonay, France), equipped with a 200 mL rotor. The system was operated in ascending or descending mode, and a 20 mL injection loop was installed. The experiment was repeated three times owing to the maximum amount of 3 g of extract that could be injected. The applied two-phase solvent systems were selected according to the respective partition coefficients (K-values), which were estimated based on TLC experiments.
The following solvent systems were utilized: Solvent System 1: heptane/ethyl acetate/methanol/water = 2/4/1/5 (v/v), Solvent System 2: heptane/ethyl acetate/butanol/methanol/water = 1/4/0.5/1/5 (v/v), Solvent System 3: heptane/ethyl acetate/butanol/methanol/water = 0.5/4/1.5/1/5 (v/v), Solvent System 4: heptane/ethyl acetate/butanol/methanol/water = 0.5/3/2.5/1/5 (v/v), Solvent System 5: heptane/ethyl acetate/butanol/methanol/water = 0.5/2/4.5/1/5 (v/v), and Solvent System 6: heptane/ethyl acetate/butanol/methanol/water. The resulting two phases of each solvent mixture were separated directly before use. The upper organic phase served as the mobile phase, whereas the lower phase was employed as the stationary phase; the consecutive FCPC experiments were conducted in ascending mode, at a flow rate of 5 mL·min⁻¹ and a rotation speed of 910 rpm. The extract was dissolved in a 1:1 (v/v) mixture of the biphasic system at a concentration of 0.16 g·mL⁻¹. Then, sequential pumping of the upper mobile phases of each solvent system, beginning with SS1, was performed in volumes of 200 mL (elution step); the collected fraction size was 10 mL. The extrusion step was initiated in ascending mode using the lower-phase solvent as the mobile phase to ensure that any residual extract was recovered from the apparatus.

MAA extraction and sample preparation for HPLC analysis. All species that were available in larger quantity (more than 20 g) were crushed to powder in a grinding mill and extracted thrice in an ultrasonic bath (Bandelin Sonorex 35 kHz, Berlin, Germany) for 15 min using dichloromethane. This extract was discarded. For MAA extraction, the remaining dry plant material was first extracted with pure methanol under the same conditions, followed by a 3-fold extraction with methanol/water = 1/1 (v/v). This extract was centrifuged at 1,000 g for 6 min and evaporated at 40°C. Subsequently, both extracts were combined and freeze-dried (Heto Powerdry 6000; Thermo). The residue was then re-dissolved in methanol (200 mg·mL⁻¹) in order to remove a precipitate that contained sugars and salts. Since a larger amount of these extracts was available, a more detailed MAA profiling was attempted after performing a single concentration step. Accordingly, 300 mg of the methanol-soluble part of each extract was purified on a 12 g HP silica 20 µm Reveleris cartridge using a mobile phase consisting of EtOAc (A) and methanol (B). The gradient was as follows: 0 min: 0% B, 6 min: 10% B, 24 min: 100% B and 54 min: 100% B. For each extract, four fractions were collected: fraction 1 (0.0-8.0 min), fraction 2 (8.1-17.0 min), fraction 3 (17.1-29.0 min), and fraction 4 (29.1-48.0 min); all fractions were evaporated and used for MAA screening at a final concentration of 1 mg·mL⁻¹ in water.

For species that were only available in small amounts (less than 20 g), a different extraction protocol had to be applied. Respective samples were placed into a shaking flask, which was cooled with liquid nitrogen, and ground in a Mikro-Dismembrator S 8531722 from Sartorius (Göttingen, Germany). The obtained powder was extracted by sonication thrice with 1 mL each of dichloromethane (this extract was discarded), methanol, and methanol/water = 1/1 (v/v). The solutions were centrifuged at 1,000 g for 6 min, and the supernatants were combined and dried under an air stream owing to the small volume.
Then, they were dissolved in water to yield a concentration of 1 mg·mL⁻¹, membrane filtered (0.45 µm, regenerated cellulose, Phenex, Phenomenex) and directly analyzed by LC-MS.

Analytical method for MAA separation. The separation of all 11 standard compounds was performed on a YMC-Pack ODS column (250 mm × 4.60 mm, 5 µm) from YMC, using a mobile phase comprising 20 mM ammonium formate and 0.6% (v/v) formic acid in water (A) and methanol (B). Liquid chromatography-diode array detector-electrospray ionization-mass spectrometry (LC-DAD-ESI-MS) experiments were performed on an Agilent 1260 HPLC system (Santa Clara, CA, USA) coupled to an amaZon ion trap mass spectrometer (Bruker, Bremen, Germany). The HPLC instrument was equipped with a binary pump, autosampler, column oven and diode array detector. Elution was performed by maintaining 2% B for the first 15 min, followed by an increase to 10% B in 8 min, to 15% B in 7 min, and to 98% B in a further 5 min; this composition was kept for an additional 5 min, resulting in a total runtime of 35 min. The column was re-equilibrated for 15 min prior to the next analysis. The DAD was set to 310, 330, and 350 nm; the flow rate, injection volume, and column temperature were adjusted to 0.65 mL·min⁻¹, 5 µL, and 20°C, respectively. MS spectra were recorded in positive ESI mode (capillary voltage 4.5 kV), with a drying gas temperature of 200°C, the nebulizer gas (nitrogen) set to 4.4 psi, and a nebulizer flow (nitrogen) of 6 L·min⁻¹. The scanned mass range was m/z 100 to 1,200.
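A gradient program like the one above can be represented compactly as a timetable of (time, %B) breakpoints with linear interpolation between them, which is how most chromatography data systems handle gradients internally. The sketch below encodes the stated elution steps; the breakpoint times follow the description, and the interpolation helper is a generic illustration rather than vendor instrument software.

```python
# Minimal sketch: the elution program as (time_min, percent_B) breakpoints,
# with linear interpolation between them (a generic illustration, not
# vendor-specific instrument code).
GRADIENT = [
    (0.0, 2.0),    # hold 2% B
    (15.0, 2.0),   # end of initial hold
    (23.0, 10.0),  # ramp to 10% B over 8 min
    (30.0, 15.0),  # ramp to 15% B over 7 min
    (35.0, 98.0),  # ramp to 98% B over 5 min
]

def percent_b(t: float) -> float:
    """Linearly interpolate %B at time t (minutes) from the breakpoint table."""
    if t <= GRADIENT[0][0]:
        return GRADIENT[0][1]
    for (t0, b0), (t1, b1) in zip(GRADIENT, GRADIENT[1:]):
        if t0 <= t <= t1:
            return b0 + (b1 - b0) * (t - t0) / (t1 - t0)
    return GRADIENT[-1][1]  # after the last breakpoint, hold the final %B

for t in (0, 15, 19, 27, 35):
    print(f"t = {t:>2} min -> {percent_b(t):.1f}% B")
```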
Aplysiapalythine A (5) was difficult to distinguish from palythinol (another known MAA) by 1D-NMR and LC-MS alone, because the two compounds differ only in the location of one methyl group, which is either in position 2′ (aplysiapalythine A) or 1′ (palythinol) of the side chain. Therefore, both compounds show identical molecular masses (m/z = 302.10) and absorption maxima, as well as highly similar NMR shift values. This similarity became clear when a sample reported to contain palythinol, the zoanthid Palythoa tuberculosa (Takano et al. 1978a), was analyzed by LC-MS: in the corresponding chromatogram, a signal identical in retention time and molecular mass to aplysiapalythine A was visible (Fig. S1 in the Supporting Information). Additionally, in this study the correct structural assignment of the isolated compound 5 was confirmed by 2D-NMR experiments, where the long-range correlation from the protons of the methylene group in position 1′ (dd, 3.42 ppm and dd, 3.50 ppm) to carbon 1 of the cyclohexenimine ring confirmed our assumption. The nuclear Overhauser enhancement spectroscopy (NOESY) spectrum was helpful as well, since there was a visible correlation from the protons of C-1′ to the methylene group in position C-6 (2H, s, 2.87 ppm). For the original NMR spectra of compound 5, see Figures S2-S8 in the Supporting Information.

HPLC method development. For the development of an improved HPLC method, all eleven isolated MAAs were used as standards. Four different stationary phases were selected for an initial screening, two from Phenomenex (Synergi MAX-RP 80A, 150 mm × 4.60 mm, 4 µm; Gemini C18 110A, 150 mm × 4.60 mm, 3 µm) and two from YMC (Triart C18, 150 mm × 3.00 mm, 3 µm; YMC-Pack ODS, 250 mm × 4.60 mm, 5 µm). The latter two yielded better results, the Triart C18 phase with respect to a slightly improved peak shape and the YMC-Pack ODS column in terms of overall better separation efficiency; the latter was therefore selected for further experiments. Acetonitrile and water as the mobile phase resulted in poor retention of the analytes and asymmetric peaks; thus, methanol was used instead of acetonitrile. The addition of ammonia did not improve the resolution, in contrast to acidic modifiers like acetic acid, formic acid or trifluoroacetic acid (TFA), although some of the analytes were still overlapping. Supplementing the mobile phase with ammonium formate proved advantageous, and with the finally selected concentration of 20 mM the best peak symmetry and resolution were achieved. Furthermore, a pH value higher than the selected one (pH 2.6, adjusted with formic acid; TFA was not considered because of ion suppression in MS) decreased selectivity and resolution again. In addition, a reduction of the flow rate to 0.65 mL·min⁻¹ was required to resolve compounds 8 and 9; for the same reason, the column temperature was set to 20°C.

Analysis of mycosporine-like amino acids in diverse red algae. All of the red algae examined contained MAAs. Table 1 lists the respective results divided into two groups, namely species available in small amounts (direct analysis of the methanol/water extract) and samples available in larger amounts (analysis of the extract after a purification step). Individual MAAs could either be assigned by UV/Vis (indicated in the table as UV) or only by MS (indicated as T, trace). Shinorine, palythine, and porphyra-334 were present in most of the samples, and except in a few cases they could be directly assigned by UV; they therefore clearly are the quantitatively dominant UV-absorbing molecules. Less abundant compounds were asterina-330 (in 78% of the samples), mycosporine-glycine (34%), usujirene (26%) and palythene (17%); nevertheless, in most species they were still directly detectable owing to matching UV spectra and retention times, and the correct assignment was additionally confirmed by LC-MS. Approximately half of the investigated algae produced aplysiapalythine A and mycosporine-alanine-glycine, but together with aplysiapalythine B and mycosporine-methylamine-threonine these compounds could mainly be assigned in the MAA-enriched extracts only. This indicates that they were present in minute concentrations, which is a possible explanation of why these four metabolites are reported in our study for the first time in algae. In three of the investigated species (i.e., Pyropia plicata, Agarophyton chilense, and Sarcothalia atropurpurea) all of the standard compounds were present; in P. plicata, nine out of eleven compounds were already assignable in the UV chromatogram (Fig. 3). Samples with the lowest number of identified MAAs were (naturally) those analyzed without sample clean-up. For example, in Pterocladia lucida only shinorine could be confirmed, and in Hymenena affinis palythine and porphyra-334 were detected.

DISCUSSION

Marine organisms are an excellent source of ecologically and pharmacologically relevant natural products. Among them are mycosporine-like amino acids, known photoprotectants, which are in the focus of the cosmetic industry owing to their possible use as sunscreens. Standardized algal extracts containing these compounds are already commercially available and used for their sun protection properties.
Mycosporine-like amino acids are also ecologically interesting. Thinning of the stratospheric ozone layer, particularly in the Southern Hemisphere as reflected in the Antarctic ozone hole, has resulted in an increase of biologically harmful solar UVR reaching the Earth's surface (Bai et al. 2016) and magnifies their importance. The multiple effects of UVR on Rhodophyta and other algae have been studied for decades, and different protective mechanisms against excessive solar radiation have been reported (Karsten 2007 and references therein). These include avoidance (e.g., living in the shade of canopy algae and/or at great water depth), numerous physiological and biochemical protective mechanisms (e.g., dynamic photoinhibition, antioxidants, UV-sunscreens) and repair or de novo synthesis of essential biomolecules (e.g., DNA repair; Karsten 2007). The biosynthesis and accumulation of MAAs is a highly efficient photoprotective mechanism. These compounds act as passive shielding solutes by dissipating the absorbed short-wavelength radiation energy in the harmless form of heat without generating photochemical reactions (Bandaranayake 1998). MAAs exhibit extremely high molar absorptivity for UV-A and UV-B (molar extinction coefficients between 28,000 and 50,000), and have been shown to be photochemically stable structures, both of which are prerequisites for their sunscreen function (Gleason 1993 and references therein, Conde et al. 2000).

The most frequently used protocols for MAA extraction include aqueous extraction in an ultrasonic bath followed by filtration to remove debris (Whitehead and Hedges 2002, Carreto et al. 2005), extraction with pure methanol after soaking the samples in water in the dark at 4°C overnight (Carreto et al. 2005), or extraction of lyophilized samples in 25% aqueous methanol at 45°C for 2 h (Tartarotti and Sommaruga 2002). In our study, a combination of the previously reported extraction methods was used, which included two further steps: first, extraction with dichloromethane prior to extraction with methanol in order to remove lipophilic components, and second, re-dissolution of the combined methanol and methanol/water extracts in methanol. This MAA enrichment step resulted in the precipitation and hence removal of sugars and salts, which would otherwise negatively affect the subsequent HPLC analysis.

Several chromatographic techniques for the isolation of MAAs have been used in the past, including gel permeation, ion exchange resins, preparative TLC and HPLC, all of which are well summarized in a review by Carreto and Carignan (2011). The main advantage of the protocol used in this study was the lower solvent consumption and the reduced duration of the isolation procedure, especially when FCPC and flash chromatography were combined (both experiments were completed within a few hours). Furthermore, FCPC is ideal for easy scale-up and requires no solid packing material, which can be quite expensive; in addition, irreversible adsorption or sample loss is avoided (Berthod 2007, Chollet et al. 2015). On the other hand, the main drawback of FCPC is usually the time-consuming optimization of the biphasic solvent system and the operating conditions. The isolated MAAs were used as purified standards to develop an improved HPLC method, allowing their qualitative and quantitative determination in a large number of red algal species.
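To put the quoted molar extinction coefficients (28,000-50,000 M⁻¹·cm⁻¹) into perspective, the Beer-Lambert law A = ε·c·l can be used to estimate how strongly a dilute MAA solution absorbs at its maximum. The sketch below runs this arithmetic for an assumed micromolar concentration and a 1 cm path length; the concentration is an illustrative value, not a measurement from this study.

```python
# Minimal Beer-Lambert sketch: absorbance and transmittance of a dilute MAA
# solution at its absorption maximum. The concentration is an assumed,
# purely illustrative value; epsilon spans the range quoted in the text.
def absorbance(epsilon, conc_molar, path_cm=1.0):
    """A = epsilon * c * l (Beer-Lambert law)."""
    return epsilon * conc_molar * path_cm

conc = 20e-6  # 20 uM, an assumed concentration for illustration
for eps in (28_000, 50_000):  # quoted molar extinction coefficients (M^-1 cm^-1)
    a = absorbance(eps, conc)
    transmitted = 10 ** (-a)  # fraction of UV light passing through 1 cm
    print(f"eps = {eps}: A = {a:.2f}, transmitted fraction = {transmitted:.1%}")
```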
All of the isolated compounds were already known from marine organisms; yet, four of them (mycosporine-methylamine-threonine, mycosporine-alanine-glycine, aplysiapalythine A and aplysiapalythine B) were isolated for the first time from an algal species. The compounds aplysiapalythine A and B were previously reported as constituents of the sea hare Aplysia californica (Kamio et al. 2011), and the authors suggested that these animals acquire MAAs from their algal diet. The dietary algae were proven to produce some other mycosporine-like amino acids, but not aplysiapalythine A or aplysiapalythine B in detectable amounts. Mycosporine-methylamine-threonine has been isolated from the reef-building corals Pocillopora damicornis and Stylophora pistillata (Won et al. 1995), while mycosporine-alanine-glycine has only been produced artificially by Actinomycetales through heterologous expression (Miyamoto et al. 2014). Since many marine animals contain MAAs but lack the biosynthetic capability to produce these compounds, a dietary origin from grazing on algae is the only plausible explanation. Indeed, algal diets can regulate MAA concentration and composition in marine invertebrates and fish (Karentz 2001, Shick and Dunlap 2002). The ingested MAAs are often specifically bioaccumulated in the most UV-susceptible tissues or reproductive structures (e.g., eggs; Adams and Shick 1996), and can be interconverted to animal-specific MAAs in the digestive tract by animal enzymes or by endosymbiotic bacteria.

The novel HPLC method allows an excellent separation of all 11 standard substances within 35 min and has several advantages compared with previously published methods. First, none of them permitted the separation of that many MAAs based on pure substances, except that of Carreto et al. (2005). Second, despite using a conventional reversed-phase column, the compounds were adequately retained and their baseline separation was possible. Additionally, owing to the use of a volatile mobile phase, an MS detector could be employed, which allowed the assignment of minor components as well. For those samples that were analyzed after an enrichment step, a more detailed chemical profiling of the MAA composition was possible. For example, species belonging to the families Bangiaceae, Gigartinaceae, Gracilariaceae, and Ceramiaceae contained almost all 11 MAAs identified in this study, while species from other families such as Schizymeniaceae, Pterocladiaceae, and Callithamniaceae exhibited fewer than five compounds. Both species from the genus Porphyra had exactly the same MAA pattern, while the two species belonging to the genus Gigartina differed in two compounds. In conclusion, it is obvious that conspicuous differences in MAA patterns can be observed between the different families. However, whether such detailed MAA patterns as shown in this study can be used for chemotaxonomic purposes in the red algae has to be addressed in follow-up investigations. At least for green algae within the Prasiola clade (Trebouxiophyceae), the presence of the MAA prasiolin represents a suitable chemotaxonomic marker (Hotter et al. 2018). Likewise, the possible confusion between aplysiapalythine A and palythinol is due to the fact that both molecules have the same molecular mass and UV spectra (and possibly identical retention times); therefore, their differentiation is not possible using HPLC-MS.
However, NMR data unambiguously confirmed that compound 5, which was isolated in this study from two species (Pyropia plicata and Agarophyton chilense), is aplysiapalythine A. None of the previous reports contained conclusive NMR data (including two-dimensional spectra), and therefore the correct assignment of aplysiapalythine A and/or palythinol in the past is questionable. This may also include the first report on the isolation of palythinol from Palythoa tuberculosa, an organism which was used for comparison in our study. Accordingly, all published data within this context have to be critically evaluated, and future studies need to be aware of this possible pitfall.

In conclusion, our data on MAAs clearly indicate that more of these UV sunscreens exist in Rhodophyta, and probably in other algal groups, than previously considered. In addition to their pronounced UV-protective effect, some MAAs such as mycosporine-glycine also have moderate antioxidant activity (Dunlap and Yamamoto 1995). The presumed biochemical precursor of MAAs, 4-deoxygadusol, exhibits strong antioxidant activity. Therefore, the photo-physicochemical properties of MAAs guarantee both high UV-protective effectiveness and antioxidant capabilities. Rhodophyta represent excellent model systems to study and understand the underlying mechanisms. With new developments in genomics, proteomics, metabolomics, and analytical chemistry, new types of MAAs will continue to be discovered and their biosynthetic and regulatory mechanisms elucidated.

Supporting Information

Additional Supporting Information may be found in the online version of this article at the publisher's web site:

Figure S1. Analysis of the methanolic extract of Palythoa tuberculosa, an anemone reported to contain the MAA palythinol, by HPLC and LC-MS in comparison to a standard mixture of eleven MAAs.

Table S1. Overview of the investigated species, their collection sites and dates.
The Effectiveness of Brain Injury Family Intervention in Improving the Psychological Well-Being of Caregivers of Patients With Traumatic Brain Injury: Protocol for a Randomized Controlled Trial

Background: Globally, traumatic brain injury (TBI) is recognized as one of the most significant contributors to mortality and disability. Most patients who have experienced TBI will be discharged home and reunited with their families or primary caregivers. The degree of their reliance on caregivers varies; therefore, the task of delivering essential care to the patients becomes demanding for the caregivers. A significant proportion of caregivers express considerable burden, distress, and discontentment with their lives. It is therefore critical to comprehend the dynamics between patients with TBI and their caregivers to optimize patient care, rehabilitation, and administration. The effectiveness of the Brain Injury Family Intervention (BIFI) program tailored for caregivers of patients with TBI has been widely proven in Western countries. However, its impact among caregivers of patients with TBI in Malaysia is less clear.

Objective: This study aims to assess the effectiveness of BIFI in reducing emotional distress and burden of care, fulfilling the needs, and increasing the life satisfaction of caregivers of patients with TBI at government hospitals in Malaysia.

Methods: This is a 2-arm, single-blinded, randomized controlled trial. It will be conducted at Hospital Rehabilitasi Cheras and Hospital Sungai Buloh. In total, 100 caregivers of patients with TBI attending the neurorehabilitation unit will be randomized equally to the intervention and control groups. The intervention group will undergo the BIFI program, whereas the control group will receive standard treatment. Caregivers aged ≥18 years, caring for patients who are >3 months post injury, are eligible to participate. The BIFI program will be scheduled for 5 sessions, as recommended by the developer of the module. Each session will take approximately 90 to 120 minutes, and participants are required to attend all 5 sessions. A total of 5 weeks is needed for each group to complete the program. Self-reported questionnaires (ie, Beck Depression Inventory, Positive and Negative Affect Schedule, Caregiver Strain Index, Satisfaction With Life Scale, and Family Needs Questionnaire) will be collected at baseline, immediately after the intervention program, at 3-month follow-up, and at 6-month follow-up. The primary end point is the caregivers' emotional distress.

Results: The participant recruitment process began in January 2019 and was completed in December 2020. In total, 100 participants were enrolled in this study, of whom 70 (70%) caregivers are women and 30 (30%) are men. We are currently at the final stage of data analysis. The results of this study are expected to be published in 2024. Ethics approval has been obtained.

Conclusions: It is expected that the psychological well-being of the intervention group will be better than that of the control group after the intervention, at 3-month follow-up, and at 6-month follow-up.
Trial Registration: Iranian Registry of Clinical Trials IRCT20180809040746N1; https://irct.behdasht.gov.ir/trial/33286

International Registered Report Identifier (IRRID): RR1-10.2196/53692

Background and Rationale

Traumatic brain injury (TBI) is defined as any injury sustained by the head as a result of blunt or penetrating trauma or acceleration/deceleration forces [1]. It remains a leading cause of mortality and morbidity worldwide [2]. In Malaysia, despite investment in various preventive efforts, the incidence of TBI continues to increase yearly [3]. In 2009, as many as 166,768 trauma cases were recorded in 8 hospitals in Malaysia, most of them (76.8%) road traffic accidents [4]. A recent study reported an extremely high cost of treatment for patients with TBI in Malaysia: the estimated annual cost of treatment for 49 patients with TBI was as high as MYR 1.5 million (US $313,840) [5]. In the long term, this would adversely affect the social and economic development of the country.

The effects of TBI depend largely on the severity and location of the injury and on the age and personality of the patient [1]. TBI affects patients' self-care ability, employment capacity, and social functioning. Upon discharge from hospitals, most patients with moderate to severe TBI need to move back in with their families to receive the care they need [6,7]. Patients with severe TBI are unlikely to return to their previous employment, as they require a significant amount of care [8]. As most patients with TBI are highly dependent, their caregivers are tasked with providing the necessary physical care. The caregivers are also constantly involved in any ongoing rehabilitation of the patients, for example, encouraging them to perform physiotherapy exercises and reminding them to take medication. Moreover, caregivers need to deal with the difficult behaviors and challenging emotional states of patients with TBI [9-12]. These extra demands can be unfavorable to the health and well-being of the caregivers.

Many studies have highlighted the importance of understanding the dynamics between TBI and caregiving. There is an extensive body of research on the effects of TBI on caregivers globally [13-22]. However, local data are scarce, as only a few studies have assessed the effects of TBI on Malaysian caregivers [23-25]. The available studies have found that most TBI caregivers in Malaysia report high burden and poor life satisfaction as a result of caregiving activities [25]. Similarly, there is a lack of information regarding the needs of caregivers of patients with TBI in developing countries. It has been suggested that caregivers of patients with TBI should be provided with a postdischarge rehabilitation program to reduce their burden [25]. A systematic review has suggested that caregivers of patients with TBI should be prioritized in TBI rehabilitation [26], as evidence abounds on the benefits of intervention programs tailored for caregivers of patients with TBI [27-30].
In Malaysia, there is a lack of specific or structured intervention programs focusing on the psychological well-being of caregivers of patients with TBI. Identifying caregivers at increased risk of burden is important for preventing emotional distress and caregiver burden and for improving care for both patients and caregivers. It is therefore important to design a proper intervention plan to improve the quality of life of caregivers of patients with TBI. Ultimately, this will result in better care and management of patients with TBI.

Conceptual Framework

This study is based on 2 major theories, namely, the Family System Theory (FST) and Cognitive Behavior Therapy (CBT) [31].

The first theory, FST, assumes that all members of a family are interconnected [32]. For instance, if a family member is affected by TBI, the whole family system will also be affected [31]. Patients with TBI would most likely depend on their family members for activities of daily living, routine follow-up, rehabilitation, and financial support. This would increase the burden on the caregivers. Family members would also need to look for coping resources to overcome their problems [13,33,34]. As a result of the sudden changes in the family's functioning, family members often report a lack of coping skills, in addition to high levels of burden, anxiety, and emotional distress [17,33,35-40], resulting in a decrease in psychological well-being [19,41-43]. Therefore, the Brain Injury Family Intervention (BIFI) incorporates family therapy techniques such as normalization and validation to assist patients with TBI and their families.

The second theory, CBT, is widely known for its use in treating various types of psychological disorders [44]. Several studies have applied CBT to provide psychological interventions to patients with TBI and their caregivers [28,45-50]. A systematic review revealed that CBT has a significant impact on improving the psychological well-being of TBI caregivers [32,45]. CBT equips caregivers of patients with TBI with strategies to deal with psychological problems such as depression and anxiety [44]. Specific components of CBT are also implemented in BIFI, covering important aspects such as psychoeducation, problem-solving skills, management of emotions, setting of realistic goals and expectations, and stress management.

Aims of This Study

This study aims to assess the effectiveness of BIFI in reducing emotional distress and burden of care, besides fulfilling the needs and increasing the life satisfaction of caregivers of patients with TBI in selected hospitals.

The hypotheses of this study are as follows:

1. There is a significant association between the sociodemographic and clinical characteristics of patients with TBI and the emotional distress, burden of care, needs, and life satisfaction of caregivers of patients with TBI.

2. There is a significant difference in the mean scores of caregivers' emotional distress, burden of care, needs, and life satisfaction between the intervention and control groups before the intervention, after the intervention, at 3-month follow-up, and at 6-month follow-up.

3. The sociodemographic and clinical characteristics of the patients are predictors of the intervention outcomes.

Study Design

This is a 2-arm, single-blinded, randomized controlled trial (RCT). All participants will be randomly assigned to the intervention group or control group. Only the investigators will be aware of the treatment allocation.
Study Setting

This study will be conducted at Hospital Rehabilitasi Cheras (HRC) and Hospital Sungai Buloh (HSB). These hospitals are the main referral hospitals for patients with TBI. A specific room at each site will be used for data collection.

Participants

A total of 100 participants were recruited for this study. Of the 100 participants, 50 (50%) were randomly assigned to the intervention group and 50 (50%) to the control group. All caregivers of patients with TBI who attend follow-up with patients with TBI at the neurorehabilitation outpatient clinics of HRC and HSB were screened for eligibility to participate in this study.

Inclusion and Exclusion Criteria

To be eligible, caregivers of patients with TBI must be citizens or permanent residents of Malaysia and be aged ≥18 years, regardless of their race, ethnicity, and sex. The participants must be able to read or write in Bahasa Malaysia or English. The caregivers can be the parents, spouses, siblings, sons or daughters, or relatives of the patients with TBI. Only 1 caregiver per patient with TBI can be recruited for the program. The time since injury for the patient with TBI must be >3 months, and the time spent caring for the patient must be at least 2 hours per day. All levels of injury severity were included. However, paid caregivers were excluded.

Withdrawal Criteria

The participants may choose to withdraw at any point in time without any penalty. Participants may be withdrawn if the investigator believes that it is harmful or risky for them to continue.

Sample Size Calculation for Prevalence Studies

According to the calculation, a total of 50 participants per arm will be needed for this study. The sample size was computed using the standard formula for prevalence studies:

n = Z²P(1 − P) / d²

In this formula, n = sample size, Z = z statistic for the chosen level of confidence, P = expected prevalence, and d = allowable error. The formula assumes that P and d are decimal values but would also hold if they were percentages, except that the term 1 − P in the numerator would become 100 − P [52]. The value used for the expected prevalence of caregiver burden was taken from the paper titled "Life satisfaction and strain among informal caregivers of patients with traumatic brain injury in Malaysia" [25]. With the percentage of caregiver burden = 57.4%, level of significance = 5%, power = 80%, and d = 0.05, this study aimed to assess eligibility for at least 376 caregivers of patients with TBI.
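As a check on the arithmetic, the sketch below reproduces the prevalence-based sample size calculation with the stated inputs (P = 0.574, 5% significance so Z ≈ 1.96, d = 0.05); rounding up gives the 376 screening target quoted above.

```python
# Minimal sketch: sample size for a prevalence study, n = Z^2 * P(1-P) / d^2.
# Inputs follow the values stated in the protocol text.
import math

def prevalence_sample_size(p, d, z=1.96):
    """Required n for estimating a prevalence p within +/- d at ~95% confidence."""
    return math.ceil(z**2 * p * (1 - p) / d**2)

n = prevalence_sample_size(p=0.574, d=0.05)  # caregiver burden prevalence 57.4%
print(n)  # 376
```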
Allocation Sequence Generation

Random allocation is used to ensure that all study participants have the same chance of being allocated to the treatment or control group. Caregivers of patients with TBI who are eligible and consent to participate will be number coded before randomization. Computer-generated simple random sampling will be used to assign participants to the intervention group or control group, following a 1:1 allocation format. Randomization will be performed by the principal investigator. All participants will have an equal chance of being assigned to the intervention or control group.

Contamination Bias

To minimize contamination bias, the intervention program will be conducted outside the participants' follow-up clinic time, in the seminar room of the designated hospitals. Participants are strongly encouraged not to disclose the program materials or discuss them with other caregivers of patients with TBI outside the program.

Blinding

Blinding in an RCT is used to ensure that there are no differences in the way each group is assessed or managed, thereby minimizing bias. In this study, only the principal investigator is aware of the treatment allocated to the participants.

Statistical Analysis

Data analysis will be performed using SPSS software (version 24.0; IBM Corp). Descriptive statistics will be used to describe the characteristics of participants. Continuous data will be reported as means and SDs or as medians and IQRs. Categorical data will be reported as percentages and frequencies. Comparison of means between the 2 groups will be performed using a paired samples 2-tailed t test. In addition, 1-way ANOVA and repeated measures ANOVA will be conducted to identify any significant differences in the mean scores of emotional distress (BDI), caregivers' needs (Family Needs Questionnaire [FNQ]), burden of care (Caregiver Strain Index [CSI]), and life satisfaction (Satisfaction With Life Scale [SWLS]) before the intervention, after the intervention, at 3-month follow-up, and at 6-month follow-up between the intervention and control groups. Multivariate regression analysis will be used to determine the predictors of intervention outcomes. The intention-to-treat analysis will be performed accordingly. Assumptions of normality and homogeneity of variance will be checked, with adjustments made where necessary.

Program Assessment and Translation

The BIFI manual has undergone several stages of thorough forward and backward translation, content review, and revision by a panel of local experts. The expert panel included 2 clinical psychologists, 1 certified translator (psychology content translator), and 2 rehabilitation physicians. All panel members had >5 years of experience working in the related fields.

On the basis of the experts' feedback and comments, a few changes were made to the module. The first change was the use of simpler Malay language. The second change was related to the pictures used in the material; the experts suggested changing to universal pictures so that they can be adapted locally. The final version of the manual was revised to match the local population and to ensure the intervention's fidelity.

Pilot Study

A pilot study was conducted to assess the feasibility of the BIFI program among caregivers of patients with TBI in Malaysia. In total, 10 caregivers of patients with TBI participated, and only 8 (80%) managed to complete all 5 sessions. One caregiver withdrew owing to work commitments and another owing to personal issues. The challenges included (1) punctuality of the participants, (2) duration of the sessions, and (3) homework material.

Overall, participants were satisfied with the content and delivery of the program. Some suggested increasing the duration of the sessions and reducing the amount of homework given. All feedback was taken into consideration, and slight alterations were made to accommodate the participants and the program.

Intervention

BIFI will be the main intervention tool. It is a structured intervention module developed specifically for patients with TBI and their caregivers by Professor Dr Jeffry Kreutzer and colleagues from Virginia Commonwealth University, United States. This module is based on CBT [44] and FST [53].
This module has several objectives: (1) providing patients and caregivers with fundamental information about brain injury, (2) helping caregivers to better understand the effects of brain injury, (3) teaching patients and caregivers problem-solving skills, (4) teaching coping strategies, (5) recognizing progress and personal strengths and helping them to access community and professional resources, and (6) teaching effective communication skills.

BIFI was designed to be implemented in 5 sessions of 90 to 120 minutes each, with 2 or 3 topics covered in each session. Using a standardized and family-focused intervention, BIFI was found to be beneficial for caregivers of patients with TBI both immediately and at 3 months after the intervention [31,54,55]. Furthermore, another study showed that patients with TBI and caregivers reported high ratings of helpfulness, goal attainment, and satisfaction regarding the BIFI program [55].

Several measures will be taken to ensure that all participants comply with the intervention program. The program will be conducted during weekends to accommodate the schedules of caregivers of patients with TBI, who are allowed to choose any day during the weekend at their convenience. However, they are required to complete all 5 sessions within the stipulated time frame.

Control Group

The control group, or treatment-as-usual group, will not receive any additional treatment during the study period. Participants in the control group will receive the usual treatment at their respective hospitals. According to the Malaysian Clinical Practice Guidelines for the early management of head injury, all patients who have been discharged are recommended to attend follow-up sessions at the hospital [56]. It is recommended that patients with moderate and severe head injuries be scheduled for routine clinic follow-up, whereas patients with a mild head injury can be followed up via clinic visits or telephone calls.

Apart from the routine follow-up, other programs tailored for patients with TBI and their caregivers are also considered treatment as usual in this RCT. For example, the Acquired Brain Injury Rehabilitation Unit in HRC offers a program known as "Return to Work" for suitable patients and caregivers. A similar program is also available in HSB.

Overview

The outcomes of this study will be assessed using self-report questionnaires at four time points: (1) baseline, (2) 5 weeks (after the treatment), (3) 3-month follow-up, and (4) 6-month follow-up. The schedule of enrollment, interventions, and assessments is presented in Table 1.

Primary Outcomes

The primary outcome of this study is the TBI caregiver's emotional distress, which will be measured using two scales: (1) the BDI and (2) the Positive and Negative Affect Schedule (PANAS). The measures are described in the Methods section.
Data Collection and Time Frame Recruitment for potential participants was conducted by the rehabilitation physicians at both hospitals, who are also the coinvestigators of the study.They screened for potential participants among the caregivers of patients with TBI who attend regular rehabilitation therapy and follow-up at their clinics.These caregivers were then invited to participate in this study.If the caregiver was interested in learning more about the study, they were led to another room, where the investigator explained the study in great detail and answered any questions that the caregivers had.At this stage, a patient information sheet about the nature of the study was provided to the potential participants. The investigator then left the room for 10 to 15 minutes to allow the caregiver to read the information and to think about whether they would like to participate in the study.If they expressed interest in participating, they were asked to sign the consent form before completing the questionnaire.The same questionnaire used at baseline will be distributed immediately after the intervention, at 3-month follow-up, and at 6-month follow-up.The questionnaire will take approximately 30 to 40 minutes to complete.Data collection was performed between December 2018 and December 2020. BDI Questionnaire The original version of BDI is a self-reporting questionnaire consisting of 21 items on a 4-point scale.It is assessed using a Likert scale ranging from 0 (symptom not present) to 3 (symptom very intense).The total score can range from 0 to 63. BDI is used to measure the main symptoms of depression such as mood, pessimism, sense of failure, self-dissatisfaction, guilt, punishment, self-dislike, self-accusation, suicidal ideas, crying, irritability, social withdrawal, indecisiveness, body image change, work difficulty, insomnia, fatigability, loss of appetite, weight loss, somatic preoccupation, and loss of libido [57].Respondents who scored between 0 and 9 are considered negative for depression.In contrast, those who score >9 are screened positive for depression, whereby a score between 10 XSL • FO RenderX and 18 indicates mild to moderate depression, score between 19 and 29 indicates moderate to severe depression, and score between 30 and 63 indicates severe depression (60).The BDI test is widely used globally, and its content, concurrent, and construct validity have been tested.High concurrent validity ratings are detected between BDI and other depression instruments such as the Minnesota Multiphasic Personality Inventory and the Hamilton Depression Scale.Correlation rating of 0.77 was obtained between the inventory and psychiatric ratings.BDI also showed high construct validity with the medical symptoms it measures.The study by Beck and Steer [58] reported a coefficient α rating of 0.92 for patients at outpatient clinics and 0.93 for college students.BDI has been translated into the Malay language and validated for use among the Malaysian population.Internal consistency (Cronbach α) ranged from 0.71 to 0.91, and the validity of BDI-Malay was deemed as satisfactory [59]. 
PANAS Questionnaire PANAS comprises 2 mood scales that measure the positive affect (PA) and negative affect (NA), respectively.PANAS is used to assess the relationship between positive and negative effects on personality traits.Each of the PA and NA scales consists of 10 items that define their meanings.Respondents need to answer 20 items on a 5-point scale that ranges from 1 (very slightly or not at all) to 5 (extremely).The total score generated will range between 10 and 50, with low scores indicating low (positive or negative) affect and high scores indicating high (positive or negative) affect.The reliability and validity of PANAS were moderately good [60].For the PA scale, the Cronbach α coefficient was between 0.86 and 0.90, and for the NA scale, it was between 0.84 and 0.87.Over 8 weeks, the test-retest correlations were between 0.47 and 0.68 for PA and between 0.39 and 0.71 for NA.PANAS also had strong validity with other measures of general distress and dysfunction, depression, and anxiety [61].The Malay-translated version of PANAS had Cronbach α coefficient of 0.73 [62]. CSI Questionnaire CSI is a self-rated, 13-item questionnaire that measures strain related to care provision.It consists of 5 major domains related to employment, financial, social, time, and physical aspects.The items can be answered as yes (score=1) or no (score=0).The maximum score for the questionnaire is 13.A score >7 is categorized as "having strain," whereas a score <7 is defined as "no strain."There is no age limit for the individuals who could be assessed with the tool.CSI has been translated into the Malay language, and the Cronbach α coefficient for the 13-item CSI-Malay was 0.79, indicating good internal consistency reliability of the scale [63]. SWLS Questionnaire SWLS evaluates the respondents' agreement with 5 statements on overall satisfaction with life (eg, in most ways, my life is close to my ideal).It uses a 7-point Likert scale ranging from 1 (strongly disagree) to 5 (strongly agree), giving rise to a range of scores between 5 and 35.A score of 20 is considered the neutral point of the scale.Scores between 5 and 9 indicate that the respondent is extremely dissatisfied with life.In contrast, scores between 31 and 35 show that the respondent is extremely satisfied.SWLS is reported as having excellent internal consistency (Cronbach α=0.88) and good test-retest reliability (r=0.68).SWLS has been translated into the Malay language and validated among the Malaysian population [64,65].The Malay SWLS has been found to have good internal consistency (Cronbach α=0.83). FNQ Questionnaire FNQ is a self-administered questionnaire consisting of 37 items.It was designed to determine the family needs of patients with TBI.The family members will rate the importance of each need on a scale ranging from 1 (not important) to 4 (very important).FNQ is divided into 6 areas, namely, health information, emotional support, instrumental support, professional support, community support network, and involvement with care.It has been proven to have good content and construct validity and good internal consistency Spearman-Brown split-half reliability at 0.75 [66]. 
Study Procedure All the screening procedures to identify the participants based on inclusion and exclusion criteria were performed before the intervention program by rehabilitation physicians in the respective hospitals.The recruitment started in November 2018 and was continued until the required sample size was achieved.The BIFI program will be conducted in the Malay or English language depending on the needs of the caregivers.The participants in the intervention group will be divided into smaller groups of 10 people each.Each group will receive a scheduled time and date to attend the program at the hospital.The program will be conducted by the principal investigator, who is also a clinical psychologist.The clinical psychologist must at least have 5 years of experience. At the baseline of the program (T1), all participants are required to complete the questionnaires (BDI, CSI, PANAS, FNQ, and SWLS).The same questionnaire will be distributed immediately after the intervention program (T2), at 3-month follow-up (T3), and at 6-month follow-up (T4).The average time needed to complete the questionnaire is 30 to 40 minutes.The follow-up sessions (T3 and T4) will be scheduled by the coinvestigators.The session will be conducted at the respective hospitals by the coinvestigators to prevent any bias during follow-up.The participants were also encouraged not to share the intervention materials with other caregivers until data collection is completed. The intervention group will undergo the BIFI program.The BIFI program will be scheduled for 5 sessions as recommended by the developer of the module.Each session will take approximately 90 to 120 minutes.The participants are required to attend all 5 sessions.A total of 5 weeks is needed for each group to complete the program, and 2 groups will be scheduled every week.All the sessions will start with an overview of the topic and end with the summary and homework assignments.Participants will be required to complete the homework given according to the module.This homework will then be reviewed by the principal investigator in the subsequent session. RenderX For the control group that receives the usual standard treatment, the caregivers will need to complete the questionnaires at similar time points as the intervention group. Patient Involvement Patients were involved during the early stage of cultural adaptation of the intervention program.The patients were invited to give their comments and feedback and review all the materials.Their valuable feedback was taken into account to ensure this program is suitable to the current culture and population. Compensation All the participants in the intervention group will be given incentives for attending the intervention program and completing the questionnaires.All participants are compensated with a travel token at the baseline (MYR 25), immediately after the program (MYR 25), at 3-month follow-up (MYR 25), and at 6-month follow-up (MYR 25). 
Consent The investigator will explain the potential risks and benefits of involvement to the participants using an information leaflet before they determine whether to participate in the study.They might ask the researcher any questions they may have regarding the study before deciding whether to participate.Once the investigator is confident that they have understood the potential risks and benefits of participating, the participant will be asked to sign a consent form.The consent forms are obtained in the written format during primary data collection and secondary data analysis is allowed to proceed without additional informed consent. Data Management Consent forms and paper copies of the questionnaire will be stored separately in a locked filing cabinet at University Teknologi MARA.After data collection is complete, they will be transferred to a locked filing cabinet.They will be maintained by the principal investigator for 10 years according to university regulations.Data will be accessible to the researchers and anyone authorized by Universiti Teknologi MARA to conduct a research audit.Electronic copies of the data will be maintained by the principal investigator.These electronic files will not contain any personal identifying information and will not contain the identifying code that links the paper copies of the questionnaires with the consent forms.They will be stored only on password-protected electronic devices.Once the thesis submission and other publications are completed, these files will be destroyed.The files will be accessible to the principal investigator, the research team, and anyone authorized by Universiti Teknologi MARA to conduct a research audit. No personal identifying details, such as names and contact details, will be recorded on the questionnaire; they will appear only on the consent form.The questionnaires will be linked to the consent forms by a unique code appearing on both documents.No digital record of the personal identifying details will be maintained, and these details will not be included in the data file.The report of the findings will also not include any such details. Dissemination Plan The findings of this study will be published in an academic or medical journal, and they will be presented at academic conferences.Only the research team has access to the data.As with any anonymously obtained data, the participants will not be named in any of the study's reports or publications.Permission from the Medical Research and Ethics Commission will be sought before publication. Results The participant recruitment process began in January 2019 and was completed in December 2020.A total of 100 participants were enrolled in this study, of whom 70 (70%) caregivers are women and 30 (30%) are men.We are currently at the final stage of data analysis.The result of this study is expected to be published in 2024. 
Anticipated Findings In this study, we will be evaluating the effectiveness of BIFI in reducing the emotional distress and burden of care, fulfilling the needs, and increasing the life satisfaction of caregivers of patients with TBI.A total of 100 caregivers were recruited in this study.Most of the caregivers are women (70/100, 70%) and the remaining are men (30/100, 30%).The age range of the caregivers was between 22 and 55 years, with a mean age of 39.85 (SD 8.184) years.Most caregivers were Malay (65/100, 65%), followed by Chinese (21/100, 21%), Indians (12/100, 12%), and other races (2/100, 2%).Initial analysis showed promising result where there was significant reduction in emotional distress among caregivers (intervention group) immediately after the program and at the 3-month follow-up as compared with the control group. Limitations This study also has limitations.It was observed that during the implementation of the program, most participants (90/100, 90%) requested the program to be conducted during the weekends, whereas others wanted it to be conducted during the weekdays.This was because some of them needed to arrange for the patient's care if they were to attend the program.Hence, the program was conducted during weekends and weekdays to accommodate their request.The participants were allowed to choose when they wanted to attend the program.Owing to this issue, the intervention program took a long period to complete. It is also important to address that data collection was conducted during the COVID-19 pandemic.The intervention program was halted for several months owing to Movement Control Order by the Malaysian Government.Therefore, participants were hesitant to visit the research site owing to fear of contracting COVID-19; the social distancing policy exacerbated this difficulty.It is hoped that future studies might to consider implementing intervention programs over the web to ease the participants.Web-based intervention program is another emerging approach and is more suited to current trends.However, web-based intervention versus physical intervention is still debated, and more studies are needed to answer this question. Conclusions To the best of our knowledge, this study would be among the first to use RCT methods to assess the effectiveness of BIFI in improving the psychological functioning of caregivers of patients with TBI in Malaysia.Perhaps, this module could be incorporated into the Return to Work program as standard clinical care and be made available to all.It is hoped that the results will provide more knowledge and scientific evidence to improve the rehabilitation services for patients with TBI and their caregivers. Figure 1 outlines the study flow according to CONSORT (Consolidated Standards of Reporting Trials) 2010. d PANAS: Positive and Negative Affect Schedule.e CSI: Caregiver Strain Index.f FNQ: Family Needs Questionnaire. Table 1 . Schedule of enrollment, interventions, and assessments. IIR) for the study to be conducted at the Ministry of Health settings.Please refer to Multimedia Appendices 1 and 2 for further details of Research Ethics comments and review.The approval letter was received on December 19, 2018.The recruitment period started in December 2018 and was completed on December 18, 2020.
2024-02-09T16:17:54.405Z
2023-10-16T00:00:00.000
{ "year": 2024, "sha1": "56ff87d7a6867e1d85a8701a2156b5448991b7cb", "oa_license": "CCBY", "oa_url": "https://doi.org/10.2196/53692", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "dc8aceba0340a0a67feceb7e6b7fd036f8572083", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
15877571
pes2o/s2orc
v3-fos-license
A class of Lorentzian Kac-Moody algebras We consider a natural generalisation of the class of hyperbolic Kac-Moody algebras. We describe in detail the conditions under which these algebras are Lorentzian. We also construct their fundamental weights, and analyse whether they possess a real principal so(1,2) subalgebra. Our class of algebras include the Lorentzian Kac-Moody algebras that have recently been proposed as symmetries of M-theory and the closed bosonic string. Introduction Since the realisation in the 1930's that the nuclear forces possessed an isotopic spin symmetry, finite dimensional Lie algebras have played an increasingly crucial role in our understanding of the fundamental laws of nature. In particular, we now believe that three of the four forces of nature are determined by local gauge symmetries with finite dimensional Lie algebras. The discovery of (infinite dimensional) Kac-Moody algebras in the late 1960s considerably enlarged the class of Lie algebras beyond that previously considered, and a subset of these, affine algebras, have played an important role in string theory and conformal field theory (for a review see for example [1]). However, until recently, no significant physical role has been found for more general Kac-Moody algebras, namely those still characterised by a symmetrisable Cartan matrix of finite size. During the last few years it has become apparent that type II superstring theory has a (non-perturbative) description in eleven dimensional space-time in terms of M-theory [2]. Very little is known about the latter, but it would seem reasonable to suppose, given previous developments, that M-theory possesses a very large symmetry algebra. More recent work has tried to identify what some of this symmetry could be, and it has been conjectured that it includes a rank eleven Lorentzian Kac-Moody symmetry denoted e 11 [3,4]. Indeed, substantial fragments of this symmetry have been found in all the maximal supergravity theories in ten and eleven dimensions [4,5]. The rank eleven nature of this symmetry can be seen to be a consequence of the bosonic field content of the maximal supergravity theories and is related to Nahm's theorem that supergravity theories only exist in space-times with up to eleven dimensions [6]. Simple Lie algebras of finite dimensional or affine type are well studied and fully classified, being recognisable in terms of finite, connected Dynkin diagrams (representing their Cartan matrices), said to be of finite or affine type respectively. Despite a considerable literature on the other Kac-Moody algebras [7], knowledge of their properties is much less complete. Indeed, apart from a few cases, even the multiplicities of the various root spaces are unknown. It is possible that for many purposes the class of all Kac-Moody algebras may be too large and that the study of a well-motivated subclass may be more rewarding. One extra class of Kac-Moody algebras that has been studied in some detail are those known as 'hyperbolic'. The Dynkin diagram of a hyperbolic Kac-Moody algebra is a connected diagram such that deletion of any one node leaves a (possibly disconnected) set of connected Dynkin diagrams each of which is of finite type except for at most one of affine type. More specifically, hyperbolic Kac-Moody algebras correspond to hyperbolic diagrams which are the diagrams of this type that are not of finite or affine type. 
The hyperbolic Kac-Moody algebras have been classified, possess no more than ten nodes and a Cartan matrix that is Lorentzian, that is, nonsingular and endowed with exactly one negative eigenvalue. Furthermore every hyperbolic Kac-Moody algebra has a real principal so (1,2) subalgebra [8]. Since the rank of the symmetry underlying M-theory appears to be eleven it cannot be described by a hyperbolic Kac-Moody algebra. In this paper we consider a larger class that does include the aforementioned e 11 , as well as the proposed symmetry for the bosonic string, k 27 [4]. The Dynkin diagrams we shall consider are connected diagrams possessing at least one node whose deletion leaves a (possibly disconnected) set of diagrams, each of which is of finite type except for at most one of affine type. As was noted by Ruuska, [9], such diagrams are automatically Lorentzian if not recognisably of finite or affine type and include the hyperbolic ones. As we shall see, the corresponding algebras may or may not possess a real principal so(1,2) subalgebra, for example, e 11 does, but k 27 does not. In section 2 we briefly recall the relations between Dynkin diagrams, Cartan matrices and Kac-Moody algebras, and describe more precisely the class of algebras that will be considered in this paper. The advantage of this class is that it is easy to determine the simple roots in terms of those for the reduced diagram, namely the diagram remaining when the central vertex has been deleted. The same is true of the fundamental weights (and hence the Weyl vector as it is their sum) and, to a lesser extent, the determinant of the Cartan matrix. But separate treatments must be made of the two cases that the reduced diagram possesses no affine component, or just one. Section 3 treats the case when the reduced diagram (i.e. the Dynkin diagram for which the central node has been deleted) contains only connected components of finite type. In section 4 it is shown that the overall Dynkin diagram is Lorentzian if and only if there is precisely one affine component given that the reduced diagram is a mixture of connected components of finite and affine type. Then the expression for the determinant of the Cartan matrix simplifies considerably and can be used to identify a large class of unimodular even Lorentzian Cartan matrices. We also study the conditions under which these Lorentzian algebras may possess a real principal so (1,2) subalgebra. In section 5 we describe in detail a special subclass of constructions that seems to be of particular relevance in string theory. We also discuss more specifically the algebras that are associated to even self-dual lattices, and study the question of whether the algebras possess a real principal so (1,2) subalgebra. Section 6 describes further constructions and section 7 contains some conclusions. We have added four appendices in which various more technical points that are needed for our discussion are outlined. A special class of Kac-Moody algebras First let us recall the definition of a Kac-Moody algebra [7] in terms of a generalised Cartan matrix. Suppose A ij is a generalised Cartan matrix with i, j = 1, . . . , r, where r is the rank of the Kac-Moody algebra. We shall only consider generalised Cartan matrices that are symmetric. 
They satisfy Because the off diagonal entries, (2.2) could possibly take values −2, −3 etc, and it is not appropriate to refer to the case in which A is symmetric as being simply-laced, unless the matrix is of finite or affine type and such values are disallowed. The entries of the matrix A ij can be encoded in terms of an unoriented graph with r nodes, whose adjacency matrix is given by 2δ ij − A ij . ⋆ This graph is called the Dynkin diagram, and it specifies the matrix A ij uniquely (up to simultaneous relabelling of the rows and columns). If the diagram is disconnected, the Cartan matrix has a block diagonal form (when labelling is ordered suitably) and the algebra consists of commuting simple factors. Given a generalised Cartan matrix, the Kac-Moody algebra can be formulated in terms of a set of Chevalley generators H i , E i and F i for each i = 1, . . . , r. These can be identified with the generators of the Cartan subalgebra, and the generators of the positive and negative simple roots, respectively. The Chevalley generators are taken to obey the Serre relations and In equation (2.7) the number of E i 's in the first equation, and the number of F i 's in the second is 1 − A ij . The remaining generators of the Kac-Moody algebra are obtained as multiple commutators of the E i 's and as multiple commutators of the F i 's, using the above Serre relations. The generalised Cartan matrix therefore determines the Kac-Moody ⋆ The (i, j) entry of the adjacency matrix of an unoriented graph describes the number of links between the nodes i and j. Our conventions here differ slightly from those of Kac [7]. algebra uniquely. Although this procedure is fairly simple in principle, explicit descriptions for all generators of a Kac-Moody algebra are only available for a few, rather special, algebras. A convenient basis for the Kac-Moody algebra consists of one for the Cartan subalgebra (which has dimension r) consisting of the Chevalley generators H i with i = 1, . . . , r, and the step operators for roots. The roots are eigenvectors of the Cartan generators (under the adjoint action), and we can therefore think of them as vectors in an r-dimensional vector space. There exists a scalar product ( , ) on this space such that the Cartan matrix is given in terms of the simple roots as Only if the Cartan matrix A is positive definite is the associated algebra of finite dimension. Then the diagram and Cartan matrix is said to be of finite type. If A is positive semi-definite it is said to be of affine type. We shall mainly be interested in those Kac-Moody algebras that are Lorentzian, namely those whose generalised Cartan matrix is non-singular and possesses precisely one negative eigenvalue. We have already mentioned the fully classified subset known as 'hyperbolic' Kac-Moody algebras. In this paper we will study a larger class of Kac-Moody algebras which includes the finite, affine and hyperbolic types and is automatically Lorentzian if neither of finite nor affine type. These correspond to a Dynkin diagram possessing at least one node whose deletion yields a diagram whose connected components are of finite type except for at most one of affine type. We shall call the overall Dynkin diagram C, and the selected node, whose deletion yields the reduced diagram C R , the "central" node. 
For many examples the central node is not uniquely determined by the property that C R has only connected components of finite and affine type, and what we shall do in the following will apply to every admissible choice for the central node. Unlike the overall C, the reduced diagram C R need not be connected. If it is disconnected denote the n connected components C 1 , C 2 . . . C n . The Cartan matrix of C R is obtained from that for C simply by deleting the row and column corresponding to the central node. Then the overall Dynkin diagram can be re-constructed from the reduced Dynkin diagram and the central node, denoted c, by adding those edges linking the latter to each node of C R . The number of links of the central node c to the i'th node is namely the entries in the row and column of the Cartan matrix of C whose deletion was just mentioned. Note that if the connected components of C R are all of finite or affine type, the offdiagonal elements in its Cartan matrix only take the values 0 or −1 (for convenience A (1) 1 in standard notation is excluded). This limitation does not apply to the values of η i which could be any integer 0, 1, 2 . . .. Schematically, the Dynkin diagram C has the structure: Notice that the diagram C need not be a tree diagram and indeed may possess any number of loops. If the Cartan matrix A is non-singular it is possible to define fundamental weights λ 1 , λ 2 , . . . λ r reciprocal to the simple roots (2.10) For hyperbolic algebras these all lie inside or on the same light-cone (so that the entries (A −1 ) ij = λ i .λ j are all negative or zero), but for Lorentzian algebras that are not hyperbolic some fundamental weights must be space-like. The integer span of the fundamental weights forms a lattice known as the weight lattice Λ W (A). It is reciprocal to the root lattice Λ R (A) yet contains it. Hence it is possible to consider the quotient which forms a finite abelian group Z(A) containing |Z(A)| elements Of course this group is encoded in the Cartan matrix whose determinant, up to a sign, equals the number of elements it contains If A is of finite type it is associated with a finite dimensional semi-simple Lie algebra g which can be exponentiated uniquely to a simply connected Lie group G whose centre is the finite abelian group Z(A). Given the fundamental weights, the Weyl vector ρ also exists A critical question for the existence of a principal so(1,2) subalgebra with desirable reality properties is whether the coefficients r i=1 (A −1 ) ij in (2.12a) are all of the same sign or not. More precisely, by a real principal so(1,2) subalgebra is meant one that has standard hermiticity properties given the hermiticity properties of the Kac-Moody algebra. This has desirable consequences for unitarity, and more details and explanations are given in appendix A. It leads to more stringent conditions than those discussed by Hughes [10]. We shall only consider real principal so(1,2) subalgebras in this paper, and we shall therefore drop from now on the qualifier 'real'. In the finite case the coefficients r i=1 (A −1 ) ij are all positive, while they are all negative in the hyperbolic case and in the Lorentzian case mixed signs are possible. We shall show that, within the class defined above, one of these signs at least is negative. 
When the reduced diagram is of finite type In this case we shall construct linearly independent simple roots for the overall Dynkin diagram C in terms of those for the reduced diagram C R as well as doing the same for the fundamental weights when that is possible. A formula for the determinant of the Cartan matrix of C will likewise be found. The r − 1 simple roots for C R , α 1 , α 2 , . . . α r−1 , are linearly independent and span a Euclidean space of dimension r − 1 since the reduced diagram C R is of finite type. They will suffice for the corresponding nodes of the overall Dynkin diagram C once they are augmented by the simple root for the central node Here λ i are the fundamental weights (2.10) for the reduced diagram (which is assumed to have a non-singular Cartan matrix) and lie in the space spanned by the simple roots, while x is a vector orthogonal to that space. Evidently this guarantees α i .α c = A ic leaving only the condition which determines the sign of x 2 . If C R is of finite type (whether disconnected or not), its simple roots span a Euclidean space. Thus the simple roots of C span a space that is Euclidean or Lorentzian according as x 2 is positive or negative. Likewise if x 2 vanishes the simple roots span a space with a positive semi-definite metric and so constitute an affine root system. Since Dynkin diagrams of finite or affine type are fully classified they are recognisable as such. Hence if C is of neither of these types it must be Lorentzian, as claimed earlier. It is at this stage that the connection (2.8) between the scalar product and the Cartan matrix is exploited. The r fundamental weights for the overall diagram C will be denoted ℓ c , ℓ 1 , ℓ 2 , . . . ℓ r−1 in order to distinguish them from the (r − 1) fundamental weights λ 1 , . . . , λ r−1 for the reduced diagram. They are related to each other by providing x 2 does not vanish, that is the overall diagram C is not affine. This accords with the fact that the definition (2.10) fails only in this case. The overall Weyl vector is then where ρ = r−1 j=1 λ j is the Weyl vector for the reduced diagram, (the same as the sum of the Weyl vectors for each connected component of C R if it is disconnected). Notice that and hence has the same sign as x 2 given that ν.ρ is positive when C R is of finite type (as all quantities λ i .λ j are). Thus at least one of the coefficients in the expansion of the Weyl vector R in terms of simple roots is negative when C is Lorentzian. An instructive example is provided by choosing the linking coefficients η i to equal each other, taking the valueη say, so that the central node is linked by preciselyη edges to each other node. Then ν in (3.1) equalsη times the Weyl vector ρ and, by (3.2), which are all negative if x 2 is, that is if C is Lorentzian. In particular, this therefore means that there are infinitely many Lorentzian algebras with a principal so(1,2) subalgebra that can be obtained by this construction since we can choose g to be any finite dimensional semi-simple Lie algebra. The simplest example is obtained by consideringη = 1, and taking g = su(m) for m ≥ 4. The Dynkin diagram for the case m = 9 is shown below. 
The determinant of the Cartan matrix A for the overall diagram C is related to the Cartan matrix B of the reduced diagram C R (obtained from A by deleting the row and column corresponding to the central node) by or, using (3.1) and the fact that 8) or remembering that the adjugate of a matrix (the matrix of cofactors) equals the inverse matrix times the determinant This final version makes it clear that the result is indeed an integer even though the sign is unclear. Equation (3.9) also has the virtue that it makes good sense even when B is singular. Indeed it simplifies considerably as the first term on the right hand side drops out. We shall return to this point in the next section. Equation (3.7) can be proven directly by using (2.8) to factorise det A into products of determinants of matrices made of the components of the simple roots of C. The crucial point is that the simple roots for nodes of C R have no component in the direction of x and this makes it trivial to evaluate the factored determinants. Alternatively (3.9) is just an application of Cauchy's expansion of bordered determinants [11]. Only now is account taken of the fact that the reduced diagram C R may be disconnected with connected components C 1 , C 2 , . . . C n as depicted in Fig 1. The consequence is that after a suitable reordering of rows and columns the Cartan matrix B is block diagonal where B β is the Cartan matrix of the Dynkin diagram for C β . It is convenient to denote ∆ β = det B β . Then (3.9) reads This identity provides an efficient tool for evaluating determinants of Cartan matrices iteratively as will be illustrated in the case where the central node is linked to each disjoint component C β by a single edge attached to a distinguished node of C β that is denoted by * . If B * β denotes the Cartan matrix obtained from B β by deleting the row and column corresponding to the node * (and is automatically of finite type if B β is), and ∆ Notice that when C n contains no nodes it can be deemed to be an empty diagram so that the reduced diagram C R contains only n − 1 connected components. The result (3.12) ought to reflect this fact and it does so if it is understood that ∆ n = 1 and ∆ * n = 0 for an empty diagram. Let us now use (3.12) to determine a few determinants explicitly. For the case of su(N ), det A(su(N )) ≡ ∆(su(N )) is evaluated by considering the a N−1 = su(N ) Dynkin diagram and selecting as the central node one of the two end nodes so that the reduced diagram is connected. Then ∆ 1 = ∆(su(N − 1)), ∆ * 1 = ∆(su(N − 2)) and (3.12) reduces to This is a simple recurrence relation whose general solution is ∆(su(N )) = AN + B. The constants A and B are determined as 1 and 0 respectively by the comments above concerning empty diagrams which imply ∆(su (1) Notice also that e 10 , which is hyperbolic, has a Cartan matrix with determinant −1 so that its root lattice Λ R (e 10 ) is an even, unimodular Lorentzian lattice, a somewhat rare object. In the next section we shall find many more Cartan matrices for such lattices. As explained below e 10 can be thought of as what is called an overextension of the finite dimensional Lie algebra e 8 . Likewise e 11 whose Cartan matrix has determinant −2 can be viewed as a very extended version of e 8 . When the connected components of the reduced diagram are either finite or affine type Suppose that p is the number of connected components of C R that are of affine type. 
Thus p factors ∆ β = det B β vanish, so that, taking account of cancellations it appears from (3.11) that det A has a (p − 1)-fold zero. In fact A does have corank (p − 1) so that Otherwise it is neither Lorentzian nor affine as its Cartan matrix A has one negative eigenvalue and a (p − 1)-fold zero eigenvalue. This is established by displaying a set of simple roots whose scalar products yield the Cartan matrix A whilst spanning a space of dimension (r − p + 1) equipped with a Lorentzian scalar product. First, simple roots are assigned to the reduced diagram, component by component. For each component C β assign the simple roots α i , i ∈ C β . If C β is of finite type these are linearly independent whilst if it is of affine type these are linearly dependent, where the positive integers n i are the Kac labels for the affine diagram C β . Then the simple roots assigned to the overall diagram C are, in terms of these, The vectors k andk can be thought to lie in the even self-dual Lorentzian lattice II 1,1 whose structure is described in appendix B. The scalar products of the a i realise the overall Cartan matrix A with r rows and columns yet the roots manifestly span a space of dimension r − p + 1. Since for each of the elimination of k yields p − 1 linear relations amongst these simple roots. Let us henceforth concentrate on the case that A is Lorentzian so that p = 1. Let the affine component of the reduced diagram be C 1 so that C 2 , C 3 , . . . C n are all of finite type. Then ∆ 1 = det B 1 vanishes and the right hand side of equation (3.11) simplifies as n of the (n + 1) terms vanish, leaving only the β = 1 term in the sum Now C 1 is a connected simply-laced affine diagram and so it has to have the form of an affine Dynkin diagram for a simple, simply-laced affine Kac-Moody algebra g (1) , say. (1) ). Then its adjugate matrix has the form where again the integers n i are the Kac labels for g (1) . They constitute the unique null vector of B 1 , (B 1 ) ij n j = 0. But, by definition, its adjugate matrix satisfies Hence each column of adj B 1 is proportional to the null vector. The structure above then follows from the fact that adj B 1 , like B 1 is symmetric. The normalisation follows by specialising the suffices i and j to the value 0, denoting the affine node, and remembering that n 0 = 1 while (adj B 1 ) 00 is the determinant of the Cartan matrix for g and hence equals |Z(A(g))| by (2.11b) and comments thereafter. Hence det A further simplifies and det A is explicitly negative, being expressed as minus a product of positive integers. Notice the remarkable fact that any dependence on the quantities The root lattice of any of the diagrams under consideration is an even, integral Lorentzian lattice and, by (2.11) it is self-dual, or self-reciprocal if and only if det A equals −1. By (4.6) this is so only if each factor on the right hand side, being an integer, actually equals unity. The only simply-laced connected diagram of finite type with unimodular Cartan matrix is, by the results of the preceding section, the e 8 Dynkin diagram. So C 2 , C 3 , . . . C n must each be of this type. So also must g be e 8 so that C 1 must be an affine e 8 diagram, or equivalently, an e 9 diagram. Finally the factor i∈C 1 n i η i must equal unity. Thus all η i here vanish, except for just one that equals unity and must correspond to the node of C 1 for which the Kac index equals unity. This is the affine node (or one related to it by a diagram symmetry). 
Thus the central node of C is linked to C 1 in effect only via the affine node. The dimension of the even, Lorentzian self-dual lattice has therefore to be 8n + 2, in accord with the fact that it is only in these dimensions that such lattices exist. They are denoted II 8n+1,1 and are unique. (For a brief description of these lattices see appendix B.) Nevertheless, notice that, because of the arbitrariness in the quantities η i , i ∈ C 1 , there are very many Cartan matrices (and therefore many inequivalent Kac-Moody algebras) that give rise to each of these when n > 1. If n = 1 this procedure yields only one Cartan matrix whose root lattice is II 9,1 and that is the e 10 Cartan matrix previously mentioned as an over-extension of the e 8 Cartan matrix. The fundamental weights for the overall diagram C are determined in terms of the fundamental weights associated with the reduced diagram as To simplify notation all components of C 2 of finite type are included in C 2 which is no longer taken to be connected, and λ β are the fundamental weights of C 2 . As already mentioned C 1 has to be the extended Dynkin diagram of a finite dimensional simply-laced simple Lie algebra, g, or equivalently the Dynkin diagram for the untwisted affine Kac-Moody algebra g (1) . λ 1 , λ 2 , . . . λ r 1 are the fundamental weights of g and λ 0 = 0. ν records the linkage of the central node to the nodes of C R and η is the quantity that already appeared in the determinant formula (4.6), namely Notice that η 0 = −A c0 contributes to η but not to ν. It is easy to check that the weights (4.7) do satisfy (2.10), given (4.1) and (4.2). So the Weyl vector R for the overall diagram C, being the sum of the fundamental weights, is where ρ(g) = r(g) j=1 λ j is the Weyl vector for g, h(g) = r(g) j=0 n j is the Coxeter number of g and ρ(C 2 ) is the Weyl vector for C 2 . Notice immediately that This is very similar to what happened in the previous section, equation (3.5), and means that if there is a principal three dimensional subalgebra it must be so(1,2) rather than so(3). Also The only negative term is the last and it depends on C 1 and its linkage to the central node and not at all on C 2 . Let us consider in turn two possibilities for the linkage between the central node and the affine component C 1 . First suppose that the only link is to the affine node so η i = δ i0 , i ∈ C 1 . Then η = 1 and ν(g) vanishes so that (4.12) reduces to This can be simplified by the Freudenthal-de Vries strange formula applied to g, (4.14) to yield This cannot be negative unless g has rank r(g) less than 24. Since this is another necessary condition for the presence of a principal so(1,2) subalgebra, it means that there is only a finite number of possibilities for g in this situation. There are also constraints on C 2 , as will be discussed below in section 4.1. A particularly interesting case is when C 2 is empty and the Lorentzian algebra with Dynkin diagram C is said to be an "overextension" of g. Then the condition R 2 < 0 reduces to r(g) < 24, as noted some time ago [1]. Thus The sum of the last two terms of (4.12) is This is negative for a finite number of choices for p, q all entailing n = p + q < 26. The conclusion is that whenever there is a single link between the central node and C 1 , the number of possibilities for the affine diagram C 1 and its linked node is finite when (4.12) is negative. It will be shown below that likewise the number of possibilities for C 2 is also finite. 
Now we look at the choice of the η j that seems most likely to produce a principal so(1,2) by minimising R 2 , (4.12), given g. First force the middle term on the right hand side of (4.12) to vanish by taking ηρ(g) = h(g)ν(g). The necessary and sufficient condition for this is that all η j , j ∈ C 1 be equal, to η 0 , say. The first term then vanishes if and only if η j , j ∈ C 2 all equal η 0 . In this case In that case the Lorentzian algebra corresponding to C always has a principal so(1,2) subalgebra, whatever g. Constraints on C 2 As we have seen above, if there is a single link between the central node and C 1 , the number of possibilities for the affine diagram C 1 and its linked node is finite if (4.12) is negative. We want to show now that for each of the finitely many choices for C 1 , there are only finitely many choices for C 2 that make (4.12) negative. In particular, this shows that within this class of algebras, the rank of the algebras that possess a principal so(1,2) subalgebra is bounded from above. Given g, the condition that (4.12) is negative is simply that where h 0 = h(g) η , and M 0 (g) = 2h(g)(h(g)+η) only depend on g. By considering the different possibilities for the algebra g it is easy to see that h 0 ≥ 2 for each simply-laced Lie algebra. If C 2 is not connected, we can split the right hand side of (4.18) into a sum over the simple components. For each simple component, the left hand side of (4.18) is strictly positive since h 0 ≥ 2. This suffices to show that we can only have finitely many simple components in C 2 . It therefore remains to show that the rank of each simple component must be bounded. This will be done separately for a r and d r . In the following we shall write ρ = ρ(C 2 ), ν = ν(C 2 ). The case of a r Let us write the vector ν in the orthogonal basis of appendix C, i.e. where l j depends on ν. Given the formula for the fundamental weights (C.7), it now follows that l j+1 − l j = −η j . Thus if we write (ρ − h 0 ν) in the same basis, then m j+1 − m j = η j h 0 − 1. Since h 0 ≥ 2 and η j ∈ N 0 , at least every other m j is in modulus bigger or equal to 1/2, and therefore Because of (4.18) it is then immediate that the rank of C 2 must be bounded. It is also obvious from the above argument that only finitely many choices for η β , β ∈ C 2 will respect the bound (4.18). The case of d r Let us first consider the case when ν is not a spinor weight. Then, given the formula for the fundamental weights (C.13) and the Weyl vector (C.14) it follows that where l i ∈ Z depends on ν. Each of the coefficients of e i for i = 1, . . . , r is integer, and since only every h th 0 number is divisible by h 0 , at most r h 0 + 1 of them vanish. Thus it follows that Since h 0 ≥ 2, it then follows that the rank of C 2 must be bounded. If ν is a spinor weight, then each l i is half-odd-integer. Then the same argument applies, except that h 0 is replaced by h 0 /2 if h 0 is even. For all simply-laced algebras other than g 0 =su(2) h 0 ≥ 3, and (4.23) is then still sufficient. In the case of g 0 =su(2), h 0 l i is then an odd integer, and therefore only every second coefficient in (4.22) can vanish. This is again sufficient to conclude that the rank of C 2 must be bounded. The above arguments are somewhat abstract, and it may therefore be instructive to get a better feeling for what the actual bounds are. 
For the case where the central node is only linked to the affine node of C 1 and to only one node of each connected component of C 2 , we have made a more detailed analysis (that is sketched in appendix D). Within this class of constructions, it is shown there that the rank of an algebra with a principal so(1,2) subalgebra is always less than 42. Actually, this bound is probably not attained, and it would be interesting to find the actual bound. The largest rank example (within this class of constructions) that we have managed to construct has rank 19 and is found by taking g = d 10 and g 2 = e 7 . Its Dynkin diagram is given below. Very extended Lie algebras The Lorentzian Kac-Moody algebras that actually appear in string theory are rather special examples of the algebras we discussed in section 4: they arise by joining an affine Kac-Moody algebra g (1) via the affine node to a central node that links in turn to the single finite dimensional Lie algebra g 2 =su(2). This special construction is a generalisation of the 'over-extension' construction that is explained in [1], and we shall therefore call the resulting Lie algebra 'very extended'. Since these are the examples of primary interest, it may be worthwhile to describe their structure in some detail. We shall also not assume in the following that g has a symmetric Cartan matrix. Let us begin by considering a finite dimensional semi-simple Lie algebra g of rank r whose simple roots α i , i = 1, . . . , r span the lattice Λ g . Let us denote the highest root of g by θ; we will always normalise the simple roots of g such that θ 2 = 2. This can always be done except for the Lie algebra g = g 2 which our analysis does not cover. We choose the convention that the Cartan matrix is defined as In a first step we enlarge the root lattice of Λ g to be part of Λ g ⊕ Π 1,1 (5.1) by adding to the simple roots of g the extended root where k ∈ Π 1,1 ⊂ Λ g ⊕Π 1,1 is described in appendix B. The corresponding Lie algebra now has r+1 simple roots, and it is just the affine Lie algebra of g which is often denoted by g (1) . However, in view of subsequent developments, we will denote it by g 0 . By construction we have (α 0 , α 0 ) = 2 (since k.k = 0). Let us denote the scalar products involving the new simple root as (α 0 , α i ) ≡ q ′ i and 2 (α i ,α 0 ) (α i ,α i ) ≡ q i . The corresponding Cartan matrix then has the form As has been mentioned before, the determinant of the Cartan matrix A g 0 vanishes, Clearly, the roots of the affine algebra do not span the whole lattice Λ g ⊕Π 1,1 . Rather, the roots of the affine algebra can be characterised as the vectors x in this lattice which are orthogonal to k, i.e. x.k = 0. We may further extend the affine Lie algebra by adding to the above simple roots yet another simple root namely [1] α −1 = −(k +k) ∈ Λ g ⊕ Π 1,1 , where we have again used the conventions of appendix B. We note that α 2 −1 = 2, as well as (α −1 , α 0 ) = −1 and (α −1 , α i ) = 0, i = 1 . . . , r. The Lie algebra so obtained is called the over-extended Lie algebra, and we shall denote it as g −1 . The Cartan matrix associated to the over-extended Lie algebra has the structure Examining the form of the Cartan matrix we conclude that Clearly, the root lattice of g −1 is Λ g −1 = Λ g ⊕Π 1,1 . The algebra g −1 is therefore Lorentzian. It is possible to enlarge the Lie algebra even further by considering the lattice We denote the analogue of k andk in the second Π 1,1 lattice by l andl, respectively. 
We now add the new simple root α −2 = k − (l +l) . (5.8) We then have that (α −2 , α −2 ) = 2, (α −2 , α −1 ) = −1, while all other scalar products involving α −2 vanish. Let us denote the resulting Kac-Moody algebra by g −2 . The corresponding Cartan matrix is then of the form This Cartan matrix is precisely the Cartan matrix that is obtained from the construction in section 4 with g 2 = su (2). Examining the form of this Cartan matrix we conclude that This is in agreement with (4.6) since the determinant of the Cartan matrix of su(2) equals 2 and η, defined in (4.9), equals η = 1. The root lattice of g −2 consists of all vectors x in Λ g ⊕ Π 1,1 ⊕ Π 1,1 which are orthogonal to the time-like vector where we have used the notation of appendix B. This implies, in particular, that g −2 is a Lorentzian algebra. As before, it is straightforward to calculate the fundamental weights of the overextended and very extended algebras. In the over-extended case the fundamental weights are given as where λ f i are the fundamental weights of g. On the other hand, the fundamental weights of the very extended algebra are It was shown in [1] that the Weyl vector of an over-extended algebra is given by where ρ f is the Weyl vector of the underlying finite dimensional Lie algebra g, and h is its Coxeter number. Similarly, the Weyl vector of the very extended Kac-Moody algebras is given by Weight lattices We now construct the weight lattices of the Kac-Moody algebras introduced above. For simplicity we shall only consider the simply-laced case for which the weight lattice Λ W is just the dual Λ W = Λ * R of the root lattice Λ R . These lattices can be easily found, using the fact that for any two lattices Λ 1 and Λ 2 we have Now the lattice Π 1,1 is self-dual, and thus the weight lattice of g −1 is simply given by In particular it follows that Given (2.11), this result is consistent with the relation between the determinants of equation (5.6). The weight lattice for g −2 is given as since l +l = (1, −1). The last lattice in (5.19) is generated by f = (1/2, −1/2) which is not in Π 1,1 , but for which 2f ∈ Π 1,1 . Thus we conclude that Here the Z 2 results from the fact that the dual lattice contains the vector f in the last factor of Π 1,1 . This is consistent with the factor of 2 between the two determinants of equation (5.10). The relation to self-dual lattices The above extensions were carried out for any finite dimensional semi-simple Lie algebra g of rank r, but we now consider in detail the resulting algebras when Λ g is an even self-dual lattice of dimension r, or a sublattice of such a lattice. Even self-dual Euclidean lattices only exist in dimensions D = 8n, n = 1, 2, . . .. The first non-trivial example of such a lattice occurs in eight dimensions where there is only one such lattice, the root lattice of e 8 . Let us denote the corresponding affine, over-extended and very extended algebras by e 9 , e 10 and e 11 , respectively. We can choose a basis for the root lattice of e 8 , Λ e 8 , to be −1, 0, 0, 0, 0, 0, 0) . (5.22) In order to describe the extension and over-extension of e 8 we consider the lattice Λ e 8 ⊕Π 1,1 . (5.24) Finally, the over-extended root that enhances this to e 10 can then be chosen to be It is easy to see (and in fact well known [1]) that this construction gives the root lattice of e 10 . The lattice Λ e 8 ⊕Π 1,1 is clearly self-dual by virtue of equation (5.16). It is of Lorentzian signature and even. Such lattices only occur in dimensions D = 8n + 2, n = 0, 1, 2 . . . 
, and the lattice in each dimension is unique and usually denoted by Π 8n+1,1 . It follows that the root lattice of e 10 is precisely this lattice for n = 1, i.e. Λ e 10 = Π 9,1 . Finally, we consider the lattice where the latter lattice is the unique even self-dual lattice of signature (10,2). The very extended root is given by The corresponding algebra, e 11 , has been argued to be a symmetry of M-theory in [4]. From equation ( The 24-dimensional case Next let us consider the extensions of a finite dimensional semi-simple Lie algebra of rank 24 whose root lattice is a sublattice of an even self-dual Euclidean lattice in dimension 24. In dimension 24, there are 24 such lattices, the so-called Niemeier lattices [12]. One of the Niemeier lattices contains the root lattice of d 24 , that can be taken to be spanned by the vectors in Z 24 of the form The root lattice of d 24 is not self-dual by itself since which is consistent with the fact that det A d 24 = 4. The corresponding self-dual lattice is given by It is obtained from the root lattice of d 24 , Λ d 24 , by adjoining a point of length squared six, It is easy to see that g ∈ Λ ⋆ d 24 , and that 2g ∈ Λ d 24 . Let us denote by k 26 the over-extension of d 24 that is obtained by adding to d 24 the affine and over-extended roots. The rank of k 26 is 26, and its root lattice is as discussed for the general case above. We shall denote the corresponding algebra by k 27 ; it has been argued to be a symmetry of the 26-dimensional closed bosonic string [4]. It also follows that Λ ⋆ k 27 The 16-dimensional case For completeness, let us conclude this section with a discussion of the very extended Lie algebra associated to the rank 16 algebra d 16 . As in the previous section, the corresponding root lattice is not self-dual since Λ ⋆ d 16 The corresponding self-dual lattice is given by It is obtained from the root lattice of d 16 , Λ d 16 , by adjoining a point of length squared four, Let us denote by m 18 the over-extension of d 16 that is obtained by adding to d 16 the affine and over-extended roots. The rank of m 18 is 18, and its root lattice is In particular, we therefore have as discussed for the general case above. We shall denote the corresponding algebra by m 19 . We observe that constructing the very extended Lorentzian Kac-Moody algebras based on Euclidean self-dual lattices leads to a very specific set of algebras, namely e 11 , m 19 and k 27 . Remakably, e 11 and k 27 are thought to be symmetries of M theory and the 26-dimensional bosonic string [4]. The above construction also makes it clear that the symmetries of these two theories are related to the unique even self-dual Lorentzian lattices in ten and 26 dimensions, respectively. These observations encourage the speculation that there should also exist a 18-dimensional string with a symmetry k 19 . These and other implications for string theory will be discussed elsewhere. Principal so(1,2) subalgebras It may also be interesting to analyse which of the over-extended and very extended Kac Moody algebras possess a principal so(1,2) subalgebra. As before we have to analyse the condition of equation (A.8). Using (A.9), as well as the explicit expressions for the fundamental weights given in (5.12) and (5.13), we find that for an over-extended Kac-Moody algebra the left hand side of equation (A.8) is Here h, n i , A −1 f ij are the Coxeter number, the Kac labels, and the inverse Cartan matrix of the finite dimensional Lie algebra g, respectively. 
Similarly, we find for the case of the very extended Kac-Moody algebra We observe that in both cases only the sums a A −1 aj do not automatically satisfy the required condition of equation (A.8). The relevant condition depends therefore on the Kac labels, Coexter numbers, and the corresponding sums in the finite dimensional Lie algebra. For the case of the classical Lie algebras, the relevant data have been collected in appendix C. As an example, let us consider the case of the very extended a n algebra in more detail. Using equation (C.9) of appendix C we find that This has its maximum when j = n+1 2 for n odd and j = n 2 for n even. In the first case the maximum of a A −1 aj is 1 8 (n 2 − 10n − 23) while in the latter case it is 1 8 (n 2 − 10n − 24). Hence a A −1 aj is non-positive if and only if n ≤ 12, and thus a principal so(1,2) subalgebra exists for the very extended a n algebra if n ≤ 12. We summarise the results for all overand very extended Lie algebras in the following table. over-extended very extended a n n ≤ 16 n ≤ 12 Table 1: The algebras with principal so(1,2) subalgebras. The algebras of particular interest to string theory are e 11 and k 27 . These algebras are the very extended algebras corresponding to e 8 and d 24 , respectively. The above table implies that while e 11 admits a principal so(1,2) subalgebra, k 27 does not. We also note that the other algebra that is related to self-dual lattices, m 19 (see section 5.4), also does not admit a principal so(1,2) subalgebra since it is the very extended algebra corresponding to d 16 . Other constructions Up to now we have discussed Lorentzian Kac-Moody algebras that arise by means of a certain simple construction. While these Kac-Moody algebras may be preferred in some way, it is clear that they do not account for all Lorentzian Kac-Moody algebras, and not even for all those with a principal so(1,2) subalgebra. In this section we want to describe some other classes of Lorentzian Kac-Moody algebras that can be obtained by similar types of constructions. In each case we shall also analyse for which examples the resulting algebra has a principal so(1,2) subalgebra. Adding a different node The simplest modification of the above construction leading to a very extended Lie algebra is to attach the very extended node at a different place in the Dynkin diagram of the over-extended algebra. As before, we shall take the roots to belong to the lattice We take the roots of our new algebra to be the roots of the over-extended algebra (see section 5), except that we replace α i 0 by α i 0 = α i 0 + l, where l is defined as in section 5 and i 0 is a chosen index on the Dynkin diagram; in addition we choose α −2 = −(l +l). The corresponding Dynkin diagram is then the diagram that is obtained from the Dynkin diagram of the over-extended algebra by adding a node that is attached to the i th 0 node. The roots of this new algebra are orthogonal to the vector The resulting algebra is therefore Lorentzian if s is time-like, i.e. if For example, if we take g = a n , then the algebra is Lorentzian if We have also analysed (using Maple) which of these algebras have a principal so(1,2) subalgebra. We have found that for n = 16, two of the algebras so obtained have a principal so(1,2) subalgebra; their Dynkin diagrams are shown below. Symmetric fusion The construction of section 4 is somewhat asymmetric in that one affine Kac-Moody algebra is singled out. 
In this section we want to describe a more symmetrical construction that also gives rise to a Lorentzian Kac-Moody algebra. As will become apparent, this construction can actually be regarded as a special case of the construction of section 4. However, it may nevertheless be interesting to discuss it in its own right. Suppose g 1 and g 2 are two finite dimensional simply-laced simple Lie algebras of rank r 1 and r 2 , respectively. We want to construct a Lorentzian algebra g 1 ⋄ g 2 whose rank is r 1 + r 2 + 2. The root lattice of this algebra will be given by We take the simple roots of the algebra g 1 ⋄ g 2 to be those of g 1 and g 2 , which we denote by α i , i = 1, . . . , r 1 and β j , j = 1, . . . , r 2 , respectively. We add to these two further simple roots, where k andk belong to Π 1,1 , as explained in appendix B, and θ 1 and θ 2 are the highest roots of g 1 and g 2 , respectively. Since g 1 and g 2 are simply-laced algebras, we have The corresponding Cartan matrix is then given by where q i = (α i , α 0 ), and p j = (β j , β 0 ). By construction, it is clear that the Dynkin diagram of g 1 ⋄ g 2 contains the Dynkin diagrams of g 1 and g 2 , respectively. In fact, it is obtained by joining the two affine diagrams with a single line between the two affine roots. If we think of the affine node of either g (1) 1 or g (1) 2 as the central node, the Dynkin diagram is then of the form described in section 4. By considering the column associated with the root α 0 we can calculate the determinant of the corresponding Cartan matrix, and we find as before that This is in agreement with (4.6). It also follows from the analysis of section 4 that the Kac-Moody algebra g 1 ⋄ g 2 is Lorentzian. Next we want to analyse for which cases this Lorentzian algebra has a principal so(1,2) subalgebra. As before, we can determine the weights, and we find that they are given by where λ (p) i are the fundamental weights of g p , p = 1, 2. It is straightforward to show that where j denotes a node of g 1 . The sums over the other columns may be obtained from the above by exchanging 1 ↔ 2. As in our discussion of section 5.5 we therefore conclude that a principal so(1,2) subalgebra exists if the second sum in equation (6.10) is also negative. Let us discuss a few examples in detail. (In deriving these inequalities we have assumed that both n 1 and n 2 are odd, but similar inequalities also hold if n 1 or n 2 are even.) By squaring the first condition and using the second we conclude that n 1 ≤ 7, n 2 ≤ 7. It is straightforward to find the solutions (taking, without loss of generality, n 2 ≥ n 1 ): apart from the case n 1 = n 2 = 1, . . . , 7 and n 2 = n 1 + 1 = 1, . . . , 5 there is the one additional solution n 2 = 3, n 1 = 1. Finally, we have checked that the algebras a n ⋄ e m only have a principal so(1,2) subalgebra provided that m = 6 and n = 7, 8, and that the algebras d n ⋄e m only have a principal so(1,2) subalgebra provided that m = 6, n = 5, 6, 7 or m = 7, n = 8, 9. Furthermore the algebras e n ⋄ e m have a principal so(1,2) subalgebra if and only if n = m with n = 6, 7, 8. It follows from (6.5) and (6.8) that the symmetric fusion of two finite dimensional simple Lie algebras gives rise to an even self-dual root lattices provided that the two finite dimensional Lie algebras are separately self-dual. The only example is therefore the rank 18 Lorentzian algebra e 8 ⋄ e 8 . Another interesting example is the algebra e 8 ⋄ d 16 , that actually equals the algebra k 26 discussed earlier. 
The root lattice of this algebra is not self-dual, but as explained in section 5, can be made self-dual by the addition of a spinor weight of d 16 . Conclusions In this paper we have described and analysed a certain subclass of (Lorentzian) Kac-Moody algebras that are in many respects rather amenable to a general analysis. These algebras are characterised by the property that their Dynkin diagram contains at least one node, upon whose deletion the diagram becomes that of a direct sum of affine and finite Lie algebras. We have described the conditions under which these algebras are actually Lorentzian, and we have given explicit descriptions for their simple roots and fundamental weights. We have also found simple formulae for the determinants of the corresponding Cartan matrices. Using similar techniques one can derive their characteristic polynomials, thus reproducing (for the case of the finite Lie algebras) known results in a rather elegant fashion. We have discussed the Lorentzian algebras whose root lattices are self-dual. In particular, we have shown how to construct, for a given even self-dual Lorentzian lattice, a large number of inequivalent algebras whose root lattice is the given Lorentzian lattice. Finally we have studied the question of whether our Lie algebras possess a principal so(1,2) subalgebra. A special subclass of the algebras we have considered are what we called very extended Lie algebras. These very extended algebras arise as symmetries of M-theory and the bosonic string [4], thus suggesting that the subclass of algebras described in this paper may play an important rôle in physics. The methods we have described in this paper will probably generalise to other classes of algebras. In particular, one can use for example our determinant formulae iteratively to analyse Dynkin diagrams that reduce to that of affine and finite Lie algebras upon deletion of two or mode nodes, etc. It would be interesting to explore these ideas further. where p i , q i will be determined shortly. Given the hermiticity property (E i ) † = F i , the generators J + and J − inherit the standard hermiticity property (J + ) † = J − provided that p i = q * i . On the other hand, demanding that [J + , J − ] = −J 3 , and using the Serre relations, one finds that The remaining relations of so (1,2) [J 3 , J ± ] = ±J ± are then automatically satisfied. Hence we conclude that a real principal so(1,2) subalgebra exists if and only if We also note that it follows from equation (A.3) that In the following we shall only consider principal so(1,2) subalgebras that satisfy the above reality property; we shall therefore drop the qualifier 'real'. In this paper we will use a description of the lattice II 1,1 in terms of vectors z = (z + , z − ) that are related to the vectors x given above by the change of basis In terms of these vectors the scalar product becomes x.y = −z + w − − z − w + , where w ± are defined in terms of y as in (B.5). In the basis described by (z + , z − ), the vectors of the lattice II 1,1 have the simple form The vector r is now simply r = (1, 0). The null vectors of II 1,1 are clearly of the form (n, 0) and (0, m) and so the primitive null vectors can be taken to be given by k ≡ (1, 0) andk ≡ (0, −1). We have chosen these vectors such that k.k = 1. Clearly, all vectors of the lattice II 1,1 are of the form pk + qk where p, q ∈ Z. There are only two points of length squared two in II 1,1 , namely ±(k +k). Appendix C. 
Roots and weights of the classical Lie algebras In this appendix we list the roots, weights, inverse Cartan matrices and some other The Kac labels can also be expressed as In our conventions the Cartan matrix is given by A ij = 2 (α i ,α j ) (α i ,α i ) , and the fundamental weights are defined by 2(λ j , α i ) (α i , α i ) = δ ij . (C.4) The inverse Cartan matrix can be expressed in terms of these by C.1. The algebras a r or su(r + 1) Let e i , i = 1, . . . , r + 1, be a set of pairwise orthogonal unit vectors in R r+1 . We can write the roots of su(r + 1) as α i = e i − e i+1 , i = 1, . . . , r . (C.6) The highest root is θ = e 1 − e r+1 = r i=1 α i . Hence, the Kac labels are given by n i = 1 and the Coxeter number is h = r + 1. The fundamental weights are given by (C.7) In particular, we therefore have that the Weyl vector is given by Using equation (C.5) we find that the inverse Cartan matrix is given by Summing on the first index we find that (C.13) The Weyl vector is therefore of the form (C.14) Using equation (C.5) we find that the inverse Cartan matrix is then (C.20) Using equation (C.5) we find that the inverse Cartan matrix is given by (C.21) Summing on the first index we find that Using equation (C.5) we find that the inverse Cartan matrix is then A −1 ij = i , for j ≥ i, i, j = 1, . . . , r − 1 , (C.26) Summing on the first index we find that (C.27) Appendix D. An explicit bound In section 4 we showed abstractly that only finitely many of the algebras for which there is a single link between the central node and the affine algebra C 1 admit a principal so(1,2) subalgebra. Here we want to give a more explicit bound for a certain subclass of such algebras. The subclass of algebras consists of those algebras for which the central node is linked by precisely one edge to each of the simple finite Lie algebras in C 2 , . . . , C n , as well as to the affine node of C 1 . All algebras are assumed to be simply-laced in this appendix. For each C p , let us denote the node that attaches to the central node by s p . Then ν = n p=2 λ (p) s p . Let us introduce the notation (D.1) Next we consider the inequality (A.8) for the case when j corresponds to one of the finite nodes of C 1 . In terms of the inverse Cartan matrix of g 1 , this inequality can be written as n p=2 X (p) s p ≤ 2h(g 1 ) + 1 −
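Several of the displayed formulas in appendix C above, including (C.7)-(C.9) and the later inverse-Cartan-matrix expressions, did not survive extraction. As a hedged reconstruction from standard Lie-algebra references (not copied from the paper, whose normalisations may differ in inessential ways), the a_r = su(r+1) data quoted in C.1 take the following form in the orthonormal basis e_1, ..., e_{r+1}:
\[
\lambda_j = \sum_{i=1}^{j} e_i - \frac{j}{r+1}\sum_{i=1}^{r+1} e_i , \qquad
\rho = \sum_{j=1}^{r}\lambda_j = \frac{1}{2}\sum_{i=1}^{r+1}(r+2-2i)\,e_i ,
\]
\[
(A^{-1})_{ij} = \min(i,j) - \frac{ij}{r+1} , \qquad
\sum_{i=1}^{r}(A^{-1})_{ij} = \frac{j\,(r+1-j)}{2} ,
\]
consistent with the simple roots α_i = e_i − e_{i+1}, the highest root θ = e_1 − e_{r+1}, Kac labels n_i = 1 and Coxeter number h = r+1 stated above. It is this column sum of the inverse Cartan matrix, together with h and the Kac labels, that enters the principal so(1,2) criterion used in section 5.5 and appendix D.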
2014-10-01T00:00:00.000Z
2002-05-07T00:00:00.000
{ "year": 2002, "sha1": "445a644fe457ffd8d8cdce1cb15e7dc53988dfeb", "oa_license": null, "oa_url": "http://arxiv.org/pdf/hep-th/0205068", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "445a644fe457ffd8d8cdce1cb15e7dc53988dfeb", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
7555829
pes2o/s2orc
v3-fos-license
Zero Energy States of Reduced Super Yang-Mills Theories in $d+1 = 4,6$ and 10 dimensions are necessarily $Spin(d)$ invariant We consider reduced Super Yang-Mills Theory in $d+1$ dimensions, where $d=2,3,5,9$. We present commutators to prove that for $d=3,5$ and 9 a possible ground state must be a $Spin(d)$ singlet. We also discuss the case $d=2$, where we give an upper bound on the total angular momentum and show that for odd dimensional gauge group no $Spin(d)$ invariant state exists in the Hilbert space. Introduction We consider models, which are obtained by dimensional reduction of Super Yang-Mills theory with gauge group SU(N) in d + 1 dimensions, where d = 2, 3, 5, 9. These models were used to formulate a quantum theory of supermembranes living in d + 2 dimensions, and for d = 9 they describe N interacting D0 branes. It is interesting to know, whether these models admit a possible zero energy state and what the properties of such a state are. The general belief, partially proven, is that for d = 2, 3, 5 no zero energy state exists and that for d = 9 there exists a unique ground state. Let us start with a very simple argument that zero energy states in d = 9 are Spin(d) invariant: it is well known [1,2,3] that the supercharges, Q β , of reduced Yang-Mills theory (for definitions and conventions, see the next section) and γ 123 = γ 1 γ 2 γ 3 , satisfy anti commutation relations of the form where J A , J ij , J µν are the generators of SU(N), Spin(3), Spin(6) respectively and H is an operator, whose form is not important here. As so that for SU(N) invariant zero energy states φ, ψ, i.e. states annihilated by the Q β and J A , (φ, J ij ψ) = 0, (φ, J µν ψ) = 0. In the next section we will treat d = 2, 3, 5, 9 on equal footing and, similar to [4], look for anti-commutators to prove that zero-energy states have to be invariant under Spin(d). We do find such anti-commutators for d = 3, 5 and 9. For d = 2, we give an upper bound on the total angular momentum and show that if SU(N) is odd dimensional, i.e. N even, no Spin(d) invariant state exists in the Hilbert space. The discussion below generalizes to other gauge groups. Model and Results Let d = 2, 3, 5, 9, and let (γ i ) αβ denote the real irreducible representation of smallest dimension, called s d , of the γ-matrices in d dimensions, i.e. the relations {γ s , γ t } = 2δ st 1I. We have s d = 2, 4, 8, 16. The model, which we are discussing, contains the self adjoint bosonic degrees of freedom q sA , p sA (s = 1, ..., d, A = 1, ..., N 2 − 1) and the self adjoint fermionic degrees of freedom Θ αA (α = 1, ..., More precisely, we consider the Schrödinger representation (p sA = −i∂ sA ) of (2) on the Hilbert space is the irreducible representation space of (3). The infinitesimal generators of the gauge group SU(N) read where f ABC are real, antisymmetric structure constants of SU(N). The physical Hilbert space H phys , given by the SU(N) invariant states in H, is the Hilbert space of the model. We have a representation of Spin(d) on H (H phys ), with infinitesimal generators The supercharges are given by and the Hamiltonian by The anti-commutation relations for the supercharges are We note that the Operators Q α and H are self adjoint on their maximal domain, i.e. where ( · ) dist is understood in the sense of distributions. The restrictions of H and Q α to H phys are also self adjoint. We are only interested in SU(N) invariant states, i.e. states in H phys . By definition ψ is a zero energy state iff ψ ∈ H phys ∩ KerH. 
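The displayed expressions for the SU(N) generators J_A, the Spin(d) generators, the supercharges Q_α, the Hamiltonian H and the supercharge anticommutator are lost in the extracted text above. What the subsequent argument actually relies on is only the schematic structure (a hedged sketch, with a convention-dependent normalisation c, not quoted verbatim from the paper):
\[
\{Q_\alpha, Q_\beta\} \;=\; 2\,\delta_{\alpha\beta}\, H \;+\; c\,(\gamma_s)_{\alpha\beta}\, q_{sA}\, J_A .
\]
Since the second term is proportional to the SU(N) generators, it vanishes on H_phys, so there Q_α² = H for every α and hence, for ψ in H_phys,
\[
(\psi, H\psi) \;=\; \|Q_\alpha \psi\|^2 \qquad\Longrightarrow\qquad H\psi = 0 \iff Q_\alpha\psi = 0 \ \text{for all } \alpha ,
\]
i.e. zero-energy states are precisely the SU(N)-invariant supersymmetric states, which is the starting point of Theorem 1 below.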
We want to prove the following Theorem 1. (a) For d = 3, 5, 9, a possible zero energy state is a Spin(d) singlet. (b) For d = 2, a possible zero energy state ψ satisfies We start with Lemma 2. We have the following (formal) anti-commutator relations. Proof. By a straight forward calculation we find for d = 2, 3, 5, 9 (a) We have, using (4), The last term in (5) vanishes since the trace over the γ-matrices equals zero. We find (b) We have, using (4), where the term in the second line is zero, as the trace over the five γ-matrices vanishes. (c) follows by a linear combination of (a) and (b). The second term in (6) vanishes since g n a uv α ψ ∈ C ∞ 0 is in the domain of Q α and Q α is self adjoint. By where | · | F stands for the the norm in F or the operator norm in L(F ), the first term in (6) vanishes using the following estimate. A real irreducible representation of the γ-matrices in 2 dimensions is given by γ 1 = σ 1 , γ 2 = −σ 3 . In this representation we have γ 12 = 1 2 [γ 1 , γ 2 ] = iσ 2 . It follows that J 12 ψ ≤ 6 M 12 ψ By linear combination the above equation holds for all states in KerH ∩ H phys . Hence Theorem 1 follows. The case d = 2 is special as the following theorem shows. Proof. By definition As above, we choose γ 1 = σ 1 , γ 2 = −σ 3 . We define the following annihilation and creation operators We find Assume ψ is Spin(d)-invariant, i.e. J 12 ψ = 0. Then If dim SU(N) is odd this contradicts that the spectrum of L 12 − 1 2 λ A ∂ ∂λ A only takes values in 1 2 Z. Hence the claim follows.
2014-10-01T00:00:00.000Z
2002-11-23T00:00:00.000
{ "year": 2002, "sha1": "0d9539f596a1f271fab9347d07d69ce103654c0b", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "0b7353f0be2503ab899157129057a4eac819c186", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Physics" ] }
15037908
pes2o/s2orc
v3-fos-license
Effect and Safety of Mycophenolate Mofetil in Idiopathic Pulmonary Fibrosis Background. Idiopathic pulmonary fibrosis (IPF) is a progressive fibrotic interstitial lung disease with ineffective treatment. Mycophenolate mofetil (MMF) is an immunomodulatory agent which inhibits lymphocyte proliferation. Objective. We sought to determine the safety and efficacy profile of MMF in IPF patients. Methods. We retrospectively identified ten patients, who met the ATS/ERS 2000 criteria for IPF and received MMF 2 gr/day for 12 months. All of them had routine laboratory, pulmonary function and radiological (high resolution computed tomography-HRCT) data available and were enrolled in the study. Forced vital capacity (FVC), total lung capacity (TLC), diffusion capacity of the lung for carbon monoxide (DLCO), 6-minute walking distance (6MWD), HRCT scans and routine laboratory data at treatment onset were compared with respective values 12 months after treatment onset. Results. There were no significant alterations in FVC, TLC, DLCO and 6MWD pre- and 6 and 12 months post-treatment. HRCT evaluation showed deterioration of the total extent of disease (P = 0.002) and extent of ground-glass opacity (P = 0.02). No cases of clinically significant infection, leucopenia, or elevated liver enzymes were recorded. Conclusions. MMF is a safe therapeutic modality which failed to show a beneficial effect both in functional and radiological parameters in a small cohort of IPF patients. Introduction Idiopathic pulmonary fibrosis (IPF) is an irreversible, devastating, progressive type of lung fibrosis that culminates in a fatal outcome irrespective of treatment [1]. Despite innumerable research studies and rapid expansion of scientific knowledge, IPF pathogenesis still remains elusive and controversial [2][3][4][5]. Recent data strongly suggest that the mechanisms driving IPF reflect abnormal deregulated wound healing in response to multiple sites of ongoing alveolar epithelial injury of unknown origin leading to fibroblast activation and exaggerated accumulation of extracellular matrix into the lung parenchyma [2][3][4][5][6]. Therefore, our present understanding of the molecular and cellular pathways has resulted in the testing of therapeutic approaches that modulate specific inflammatory and fibrotic mediators. With a gradually increasing worldwide incidence and no proven therapies other than lung transplantations, IPF treatment is a major challenge for chest physicians [7][8][9]. Mycophenolate mofetil (MMF), an inhibitor of lymphocytes proliferation through blockade of inosine monophosphate dehydrogenase and interference with purine biosynthesis, is commonly used to prevent rejection following solidorgan transplantation [10][11][12][13][14]. Its clinical utility has been expanded for the treatment of several autoimmune and renal disorders [15]. MMF languished in relative obscurity until the past 5 years when it emerged to function not only as an anti-inflammatory but also as an antiproliferative agent by downregulating the expression of several critical growth factors including transforming growth factor-(TGF-) β. This property makes it an attractive candidate drug for the treatment of fibrotic lung disease [16]. However, there is a serious lack of knowledge and clinical experience regarding its safety, tolerability, and efficacy in patients with IPF, a disease with ineffective treatment and a dismal prognosis. 
This retrospective study seeks to determine the safety profile and demonstrate the effectiveness of MMF treatment during the disease course in a small cohort of IPF patients. 2.1. Patients. This is a retrospective, single-center trial estimating the safety and efficacy of MMF for IPF treatment. After approval by the Local Ethics Committee and the Institutional Scientific Review Board (reference number 45/4 Scientific Committee-16/11/2009) patients (n = 10) were retrospectively identified who met the ATS/ERS 2000 criteria for IPF [1] and received, on an off-label basis, MMF 2 gr/day for >6 months, between September 2006 and October 2008. Mean time from diagnosis drug initiation was 9 ± 2 months. Patients who had no serial routine laboratory, functional, and radiological data available were excluded from the analysis (n = 0). Patients were evaluated on an outpatient basis at the Department of Pneumonology, University Hospital of Alexandroupolis, Democritus University of Thrace, Greece. All patients gave informed consent. Assessment of High-Resolution Computer Tomography (HRCT) Data. High-resolution CT sections (1 mm) were acquired supine, at full inspiration, at 10 mm intervals reconstructed with bone algorithm using a spiral CT scanner (GE Prospeed Series). The scans were scored by a thoracic radiologist with 9 years of experience (A. Oikonomou), blinded to clinical and lung function information [17]. HRCT images were scored at five predetermined levels: (1) origin of great vessels, (2) main carina, (3) pulmonary venous confluence, (4) halfway between the third and fifth section, and (5) immediately above the right hemidiaphragm. HRCT variables evaluated were total disease extent, the extent of reticular pattern, the extent of ground-glass, the proportion of ground-glass opacity, and the coarseness of reticular disease. Extent of Disease. The total extent of interstitial lung disease was estimated to the nearest five percent in each of the five sections, with global extent of disease on HRCT computed as the mean of the scores. Extents of Individual Patterns. HRCT patterns were subdivided into reticular disease (innumerable interlacing line shadows that were fine, intermediate, or coarse, with variable associated distortion of the lung architecture) and ground-glass attenuation (a hazy increase in lung parenchymal attenuation, with preservation of bronchial and vascular markings) [18]. The relative proportions of the two patterns, estimated in each section, were multiplied by the total extent of disease to provide separate extent scores for each pattern, with the global scores computed as mean values, as for overall disease extent. From these scores, the contribution made by ground glass to overall disease extent was calculated (proportion of ground glass). Coarseness of Reticulation. The most severe disease in each section was quantified as grade 0 = ground glass attenuation alone, grade 1 = fine intralobular fibrosis, grade 2 = microcystic honeycombing (air spaces less than or equal to 4 mm in diameter), and grade 3 = macrocystic honeycombing (air spaces greater than 4 mm in diameter). The total coarseness score was the summed score for all five levels (range 0 to 15). Statistical Analysis. Continuous data are presented as medians with ranges or mean + SD. The paired two-tailed Student's t-test was used to assess statistically significant differences in functional parameters at baseline and 12 months after treatment. 
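As an illustration only, a minimal sketch of this paired pre-/post-treatment comparison is given below; the FVC values are entirely hypothetical and the study itself performed its analyses in SPSS, not Python. The regression and nonparametric analyses described next would be set up analogously.

# Hypothetical paired comparison of % predicted FVC at baseline and at 12 months
# for the same ten patients (illustrative values, not study data).
from scipy import stats

fvc_baseline = [72, 65, 80, 58, 74, 69, 61, 77, 70, 66]
fvc_month12  = [70, 63, 79, 55, 73, 66, 60, 75, 68, 64]

# Paired two-tailed Student's t-test, as described for the functional parameters
t_stat, p_value = stats.ttest_rel(fvc_baseline, fvc_month12)
print(f"paired t = {t_stat:.2f}, two-tailed P = {p_value:.3f}")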
Linear regression analysis was used to determine whether there was any improvement in FVC, TLC, and DL CO 6 and 12 months after MMF treatment initiation. The paired Wilcoxon signed ranks test, nonparametric tests were employed to analyse radiological findings. Statistical analysis was performed with SPSS software, version 17.0. Baseline Characteristics. Baseline characteristics of patients enrolled in the study are shown in Table 1. As demonstrated, all patients were male, 9 out of 10 (90%) were exsmokers, at the time of treatment initiation. Six out of 10 patients (60%) had histopathological biopsy proven Data are presented as mean ± SD unless stated otherwise, P-value 1 : between baseline and 6 months, P-value 2 : between baseline and 12 months; 6MWD: 6-minute walking distance, FVC: forced vital capacity, NA: nonapplicable, P A-a O 2 : alveolar-arterial gradient of oxygen tension, TLC: total lung capacity. IPF/usual interstitial pneumonia (UIP) whereas in the remaining four diagnosis was based on the radiological UIP pattern. Seven out of 10 patients (70%) were previously untreated whereas the remaining three patients had used low doses of corticosteroids (two under 20 mgrs and one under 10 mgrs of methylprednisdone daily), at the time of treatment initiation. In addition, three patients (30%) had pulmonary hypertension at the time of MMF initiation (sPAP greater than 60 mmHg, with an overall mean sPAP = 37.2 + 19.6 mmHg) estimated by echocardiography and were started with endothelin-receptor antagonists (one with 250 mgrs of bosentan and the remaining two with 10 mgrs of ambrisentan). Underlying autoimmunity was excluded by the absence of signs of arthritis, morning stiffness, sclerodactyly, photosensitivity, and Raynaud's phenomenon coupled with negative immunologic profile (antinuclear antibodies-ANA, anti-ds DNA antibodies, and rheumatoid factor) in eight out of ten patients. Two patients had positive ANA antibodies, with a negative remaining immunologic profile and physical examination, in the remaining two patients, which could not verify the presence of an autoimmune disorder. MMF Treatment Failed to Show Disease Improvement Based on Pulmonary Function Parameters . As demonstrated in Table 2 and Figures 1 and 2, MMF treatment failed to show a beneficial effect as assessed by pulmonary function parameters. Linear regression analysis showed that FVC (P = 0.228, P = 0.081), TLC (P = 0.70, P = 0.081), and DL CO (P = 0.47, P = 0.053) did not change significantly both 6 and 12 months after MMF treatment initiation, respectively. In addition, MMF administration was associated in 6-minute walking distance (6MWD) at baseline and 12 month after treatment (P = 0.09). Finally, no alterations in alveolararterial gradient of oxygen tension (P A-a O 2 ) between preand 12 posttreatment levels (P = 0.67) were noted. MMF Treatment Was Associated with Disease Progression Based on High-Resolution Computed Tomography (HRCT) Data. Eight out of 10 IPF patients treated with MMF had HRCT evaluation before and after treatment with mean time interval between the two HRCT scans of 12 months. The remaining 2 patients had HRCT evaluation only before initiation of MMF treatment because they died due to acute exacerbation and therefore there was no data available. Among the eight patients who had HRCT evaluation both before and after initiation of MMF treatment the mean HRCT scores for the HRCT variables are shown in Table 3. 
Statistical analysis showed that there was disease progression based on the total extent of disease (P = 0.002) and extent of ground-glass opacity (P = 0.02) while there was no significant change concerning the extent of reticular pattern, the proportion of ground-glass opacity, and the coarseness of reticular disease (P > 0.05). Clinical and Laboratory Acceptable Safety Profile. Patients were followed for 12 months with routine laboratory tests, including liver enzymes and white blood cells count. No cases of liver toxicity, clinically significant infection, and leucopenia were recorded during MMF treatment. In addition, MMF was well tolerated by all patients with no development of abdominal pain, nausea, or vomiting episodes that could lead to treatment discontinuation or dosage reduction. The above data suggest that MMF has an acceptable safety and tolerability profile. Discussion This is the first report in the literature investigating the safety and efficacy profile of a novel immunomodulatory agent, MMF, given to a small cohort of IPF patients. We retrospectively collected laboratory, functional, and radiological data and demonstrated a readily acceptable safety profile with no important adverse events justifying drug discontinuation or dosage reduction. Regarding drug effectiveness, MMF treatment failed to show a beneficial effect as assessed by functional parameters (FVC, TLC, DL CO , 6MWD, and P A-a O 2 ) while disease progression based on HRCT data, as assessed by using a highly standardized scoring system, was seen. The pharmacological treatment that is currently available for IPF is clearly inadequate [8,[19][20][21][22][23][24][25]. The emergence of novel and powerful tools have provided scientists and physicians with numerous avenues of investigation with clinical applications to greatly improve our understanding of IPF pathogenesis. However, this fatal disease still remains without proven therapies other than lung transplantations given to a small minority of individuals [7,9]. In view of the current disappointing survival data arising from large prospective placebo-controlled clinical trials, many chest physicians worldwide apply other therapeutic regimens to attempt IPF treatment. MMF has been extensively used to downregulate hostimmune response following solid-organ transplantation and therefore to prevent rejection [10][11][12][13][14]26]. In addition, MMF has been also proven effective in the treatment of several autoimmune and renal disorders, including systemic lupus erythematosus [15]. Based on the versatile anti-inflammatory and immunomodulatory properties of its active metabolite, mycophenolic acid, MMF treatment has been recently applied with promising results in patients with systemic sclerosis (SSc) with interstitial lung involvement. In particular, Liossis et al. demonstrated a beneficial effect of MMF both in functional and radiological parameters in five patients with SSc-associated alveolitis [27]. Moreover, MMF administration was well tolerated and safe showing no serious adverse events. Further extending their results, Gerbino et al., retrospectively identified 13 patients with SScinterstitial lung disease who were treated with MMF and suggested that MMF improves vital capacity 12 months after Pulmonary Medicine 5 treatment [28]. Findings were also replicated by another group of investigators in a small cohort of SSc patients with interstitial lung disease, where authors reported a beneficial effect of MMF on the functional status of these patients [29]. 
Since T cells seem to play a vital role in the pathogenesis of scleroderma and mycophenolic acid inhibits, via blockage of inosine monophosphate, T-cell proliferation and downregulates their intracellular adhesion to endothelial cells, it is highly possible that a beneficial effect of this drug might be anticipated. Fueled by this prospect and based on the aforementioned promising results, US investigators have recently launched a large multicentre randomized clinical trial to compare the beneficial effect in lung function parameters of a 2-year course of MMF with those of a 1-year course of oral cyclophosphamide, in patients with symptomatic sclerodermarelated interstitial lung disease. This trial is still ongoing and its results are greatly anticipated (for more information go to http://clinicaltrials.gov/). In past years, the role of T cells in the pathogenesis of IPF was relatively overlooked mainly due to the disappointing results of corticosteroid treatment. However, interest in the role of autoimmunity in IPF pathophysiology was revived by a study showing that CD4+ cells in IPF patients are in a highly activated status and proliferate rigorously when stimulated with IPF lung extracts, suggesting the presence of an autoimmune process through recognition of self-antigens [30]. In line with this premise, our study group demonstrated a numerical and functional impairment of regulatory T cells (Tregs), a specific subset of T cells which is essential for the control of immunologic tolerance and the prevention of autoimmunity, in IPF patients [31]. Furthermore, this global defect was highly correlated with indicators of disease severity, such as functional parameters, implicating an involvement of Tregs in the fibrotic process. Despite relative enthusiasm arising from the above findings implicating autoimmunity in the pathogenesis of IPF and highlighting novel therapeutic targets with clinical applications, functional and radiological results from our current study would downplay the role of T cells during disease progression. It is therefore conceivable to speculate that the inability of the drug to be proven efficacious lies both in the previously suggested minor contribution of T cells in the pathogenesis of IPF [32] as well as in the inevitable progressive clinical course. Nevertheless it is important to clarify that there might be a minority of IPF patients that would benefit from immunosuppressive agents such as MMF, including those waiting for lung transplantation as it happens with patients waiting for renal transplants where MMF is used to prevent solidorgan rejection. Based on MMF's immunosuppressive and antiproliferative properties and since MMF is often part of the posttransplant immunosuppressive regimen in these patients MMF might be considered for use before subjecting the patient to major surgery [33]. Larger prospective studies in highly selective group of IPF patients are needed to extract efficacy outcomes. Our study has a number of limitations. First of all, it is retrospective in its nature and underpowered. Secondly, based on our data it is unknown whether stabilization of functional parameters could be attributed to therapeutic intervention or simply represents a bystander of disease clinical course. Alternatively, it is impossible to establish a clear relationship between drug effect and disease outcome mainly due to study design. 
Larger, prospective randomized studies are needed to extract outcomes of scientific rigidity and verify our results, as occurred with scleroderma associated interstitial lung involvement. Finally, it is important to underline that in our case series all the functional parameters showed a gradual decline, even though statistically insignificant, evidence that may be attributed to lack of study power. Collectively, MMF was well tolerated and safe, showing no clinically significant side effects while it failed to show a beneficial effect in disease progression as assessed by functional and radiological parameters. Our main findings underline the current disappointing status in the treatment field of this debilitating disease and highlight the necessity for future large, prospective, randomised clinical trials of novel therapeutic agents with versatile properties targeting multiple pathogenetic pathways. Abbreviations 6MWD: 6-minute walking distance DL CO : Diffusion capacity of the lung for carbon monoxide FVC: Forced vital capacity GGO: Ground-glass opacity HRCT: High-resolution computed tomography IPF: Idiopathic pulmonary fibrosis MMF: Mycophenolate mofetil NA: Non applicable P A-a O 2: Alveolar-arterial gradient of oxygen tension sPAP: Systolic pulmonary artery pressure SSc: Systemic sclerosis TGF: Transforming growth factor TLC: Total lung capacity Tregs: Regulatory T cells UIP: Usual interstitial pneumonia.
2014-10-01T00:00:00.000Z
2011-11-01T00:00:00.000
{ "year": 2011, "sha1": "f6c13367d786d83358f8a3b2fee653028ffece46", "oa_license": "CCBY", "oa_url": "http://downloads.hindawi.com/journals/pm/2011/849035.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "8d8b3f8bd6f2b221d8bde95c845c6428843fac82", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
269970904
pes2o/s2orc
v3-fos-license
Current status and research progress of minimally invasive treatment of glioma Glioma has a high malignant degree and poor prognosis, which seriously affects the prognosis of patients. Traditional treatment methods mainly include craniotomy tumor resection, postoperative radiotherapy and chemotherapy. Although above methods have achieved remarkable curative effect, they still have certain limitations and adverse reactions. With the introduction of the concept of minimally invasive surgery and its clinical application as well as the development and progress of imaging technology, minimally invasive treatment of glioma has become a research hotspot in the field of neuromedicine, including photothermal treatment, photodynamic therapy, laser-induced thermal theraphy and TT-Fields of tumor. These therapeutic methods possess the advantages of precision, minimally invasive, quick recovery and significant curative effect, and have been widely used in clinical practice. The purpose of this review is to introduce the progress of minimally invasive treatment of glioma in recent years and the achievements and prospects for the future. Introduction Among the primary malignant tumors of the central nervous system(CNS), neuroepithelial tumors of the brain are the most common.According to The Central Brain Tumor Registry of the United States (CBTRUS) Statistical Report: Primary Brain and Other Central Nervous System Tumors Diagnosed in the United States in 2016-2020, gliomas accounted for approximately 26.3% of all tumors.The most commonly occurring malignant brain and other CNS histopathology was glioblastoma(GBM)(14.2% of all tumors and 50.9% of all malignant tumors) (1).The treatment goal of glioma is to completely remove the tumor, effectively control the recurrence of the tumor, prolong the survival of the patient and enhance the living quality of patients Traditional therapeutic methods such as surgery, radiation and chemotherapy can reduce symptoms and prolong survival, but the therapeutic effect is limited by the malignant degree, growth site and molecular classification of glioma, and have certain limitations.In recent years, with the progress of science and technology, minimally invasive treatment has become the focus of glioma treatment and has been widely used in clinical practice.The new minimally invasive therapy has the advantages of precise targeting, reducing adverse reactions and complications.In addition, it has shown significant advantages in the treatment of tumors deep in the brain or in functional areas where surgery is difficult to reach. Up to now, the main methods of minimally invasive therapy include photothermal therapy, photothermal therapy, laser-induced thermal therapy and the latest treatment strategies nano drug delivery system(NDDS) therapy and so on.In this article, we describe the therapeutic mechanisms and limitations of above approaches.Meanwhile, we will introduce the latest nanomaterial-based approaches for the diagnosis and therapy of glioma and provide new insights and references for future the glioma treatment. 
Photothermal treatment of tumor Photothermal therapy (PTT) is a treatment that utilize a material with a high conversion rate to inject it into the human body and convert light energy into heat energy under the irradiation of an external light source (generally near-infrared light) to kill cancer cells.Compared with traditional technology, the therapeutic effect of PTT only occurs at the tumor site, effectively avoiding the risk of killing normal cells and damaging the immune system, and it is a non-invasive and selective tumor treatment (2). Mechanism and advantages of PTT PTT is an emerging method to treat tumors by thermal ablation of tumor cells (3).During PTT, the temperature evolution at the tumor site is caused by the conversion of light energy into heat energy by a medium called a photosensitizer.In addition, the increased temperature can kill tumor cells while avoiding severe side effects on normal cells.The approach is effective, because tumor cells are less thermostability than normal cells.Specifically, the photosensitizer is first concentrated at the lesion site.The lesion is then irradiated with near-infrared light.Subsequently, the photosensitizer generates a lot of heat to ablate the tumor cells (4).Near-infrared light has been widely used in PTT because of its excellent tissue penetration and remote control (5).Furthermore, it possesses high resolution time and space adjustability, allowing for precise control (6). In order to explain the high photothermal conversion efficiency of photothermal materials doped with nanomaterials, it is necessary to understand the working principle of photothermal materials.As we all know, nanomaterials are typical mesoscopic systems with surface, small size and macroscopic quantum tunneling effects (7).At the same time, the optical, thermal, electrical, magnetic, mechanical and chemical properties of nanomaterials are significantly different from those of bulk solids (8).Previous studies have shown that metal-based, carbon-based and semiconductor-based nanomaterials can be u sed as photosensitizers in PTT systems.The reason for inorganic photosensitizers is that these materials have an optical phenomenon called local surface resonance (LSPR).After absorbing near infrared light, the electrons in the photosensitizer have obvious plasmon resonance effect.As a result, they can produce a significant thermal effect, heating the surrounding medium and making the temperature rise rapidly (9), which indicates that these nanomaterials are highly absorbent under near-infrared light.For example, Manikandan et al. studied the photothermal effects of platinum nanoparticles (Pt NPs) with a size between 5-6 nm (10).It was found that Pt NPs could increase the temperature by 9°C and could be further used as an ablative for photothermal ablation of Neuro 2A cells.In another study, Elbialy et al. developed multifunctional magnetic gold nanoparticles (Au NPs) with a diameter of 29 ± 4 nm, and confirmed that the prepared nanoparticles were effective as PTT drugs through histopathological and immunohistochemical studies (11) (Figure 1). Research progress of PTT in glioma The development of new diagnostic imaging and precision treatment methods for GBM is of great significance for improving the living quality and prolonging the overall survival of patients.He et al. 
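The temperature evolution described above is usually modelled in the PTT literature with a simple lumped heat balance; the following is a hedged sketch of that generic model and is not taken from the cited nanoparticle studies:
\[
\sum_i m_i c_{p,i}\,\frac{dT}{dt} \;=\; Q_{\mathrm{abs}} \;-\; hS\,(T - T_{\mathrm{surr}}),
\qquad
\Delta T_{\max} \;=\; \frac{Q_{\mathrm{abs}}}{hS},
\]
where Q_abs is the laser power absorbed by the photosensitizer (commonly estimated as I(1 − 10^{−A_λ})η, with I the incident laser power, A_λ the absorbance at the laser wavelength and η the photothermal conversion efficiency), h is the heat-transfer coefficient and S the surface area of the heated volume. The steady-state rise ΔT_max is what measurements such as the 9 °C increase reported for Pt NPs probe, and maximising η is the practical goal of the nanomaterial engineering discussed in the next subsection.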
successfully constructed a novel IR-II photoabsorbent conjugated polymer (PDTP-TBZ) from strong electron donor dithienopyrrole (DTP) and strong electron acceptor thiadiazole and benzotriazole (TBZ).Subsequently, c(RGDfK) cyclopeptide was modified on the surface of PT NPs to obtain a multifunctional nanodiagnostic reagent (cRGD@PT NPs) that can effectively target GBM neovasculature and tumor cells.Both in vitro and in vivo experiments shown that cRGD@PT NPs has high photothermal conversion efficiency and practical photoacoustic imaging ability under 1064 nm laser irradiation.The results of this work indicated that cRGD@PT NPs has great potential in highly efficient IR-II PTT guided by precise photoacoustic imaging (PAI), providing a good prospect for the treatment and diagnosis of GBM.At the cellular level, it has been proved that PDA@CUR NPs has the potential to leap-over blood-brain barrier(BBB) and can be rapidly taken up by brain glioma cells, and "CUR+ photothermal therapy" can effectively inhibit the proliferation of human and mouse brain glioma cells (12).Similarly, Sun et al. have developed a biomimetic nanoplatform AMNP@CLP@CCM for GBM targeted PTT and ICB(Immune checkpoint blockade) synergistic therapy.By loading the immune checkpoint inhibitor CLP002 into isomelanin nanoparticles (AMNPs) and then coating the cancer cell membrane (CCM).Due to the homing effect of CCM, the resulting AMNP@CLP@CCM can successfully cross the BBB and deliver CLP002 to GBM tissues.AMNPs, as a natural photothermal converting agent, is used for tumor PTT.PTT increases the local temperature, not only enhances the penetration of the BBB, but also upregulates the PD-L1 level of GBM cells.Importantly, PTT can effectively stimulate immunogenic cell death, induce tumor-associated antigen exposure, and promote T lymphocyte infiltration, thereby further enhancing the anti-tumor immune response of GBM cells to CLP002-mediated ICB treatment, thus significantly inhibiting the growth of GBM in situ.Therefore, AMNP@CLP@CCM has great potential in the synergistic treatment of in situ GBM by PTT and ICB (13). At present, with the rapid development of nanotechnology, photothermal therapy has achieved fruitful achievements in the treatment of glioma, but it only stays in the cell or mouse test stage, and has not really entered the clinical trial stage.We hope that more in-depth research can be combined with clinical cases to improve the prognosis of patients and improve the survival of patients.There is still a long way to go, but we firmly believe that through the efforts of generations of scientists and we will ultimately be able to overcome this global problem. 
Limitations of PTT In the near future, PTT will continue to play an important role in clinical applications and will require a lot of effort in related scientific research.First of all, the physical and chemical modifications make the photosensitizer have high photothermal conversion efficiency and good biocompatibility.Second, lightdriven nanomaterials facilitate fast, remote control and tunable movement of NPs.Third, PTT combined with other tumor therapies effectively excised tumor cells without seriously damaging adjacent normal tissue.In addition, designing dualacting tumor therapies is critical for multiple functions such as drug delivery, real-time imaging, and chemical-PTT.Despite impressive progress in developing photothermal nanomaterials, many challenges remain in terms of clinical application.Biocompatibility, long-term toxicity, dose-dependent toxicity, targeting specificity, and biodegradation are still need to be solved.It is important to note that the potential threat of photosensitizers to patients and the environment cannot be ignored.In practice, this review provides valuable information for the preparation of novel photosensitizers and will motivate researchers to invest more effort in PTT methods. Photodynamics Therapy PDT is a modern, non-invasive therapy for the treatment of non-oncologic diseases as well as various types and sites of tumor.It is based on the topical or systematic application of photosensitive compounds-photosensitizers, which are accumulated in pathological tissues.The photosensitizer molecule absorbs the appropriate wavelength of light and initiates the activation process, resulting in the selective destruction of inappropriate cells.Phototoxic reactions occur only within the pathological tissue, in the region where the photosensitizer is distributed, making selective destruction is possible.Over the past decade, the development of nanotechnology has accelerated significantly.The combination of photosensitizers and nanomaterials can enhance the efficiency of PDT and eliminate its side effects.The use of nanoparticles enables a targeted approach that focuses on specific receptors, and therefore, increases the selectivity of photodynamic therapy.This section will briefly describe the anti-cancer application of PDT, its advantages and possible modifications to enhance its effects (14). Mechanism of PDT treatment Molecular mechanism of PDT is based on the three non-toxic components, which produce the desired effects within pathological tissues only by mutual interactions between: There are two main mechanisms of the photodynamic reaction.Both are closely dependent on oxygen molecules inside cells.The first stage of both mechanisms is similar.A photosensitizer, after entering the cell, is irradiated with a light wavelength coinciding with the PS absorption spectrum and is converted from the singlet basic energy state S°into the excited singlet state S1 because of the photon absorption.Part of the energy is radiated in the form of a quantum of fluorescence, and the remaining energy directs a photosensitizer molecule to the excited triplet state T1-the proper, therapeutic form of the compound (Figure 2) (15). 
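Since Figure 2 is not reproduced in this text, the standard photophysical scheme underlying the two reaction mechanisms mentioned above can be summarised as follows (a generic textbook scheme, given here as a hedged substitute for the paper's own figure):
\[
\mathrm{PS}(S_0) \;\xrightarrow{\;h\nu\;}\; \mathrm{PS}(S_1) \;\xrightarrow{\text{intersystem crossing}}\; \mathrm{PS}(T_1)
\]
Type I: the triplet photosensitizer PS(T_1) transfers an electron or hydrogen atom to a neighbouring substrate, and the resulting radicals react with molecular oxygen to yield reactive oxygen species such as superoxide and hydroxyl radicals.
Type II: PS(T_1) transfers its energy directly to ground-state triplet oxygen,
\[
\mathrm{PS}(T_1) + {}^{3}\mathrm{O}_2 \;\longrightarrow\; \mathrm{PS}(S_0) + {}^{1}\mathrm{O}_2 ,
\]
producing singlet oxygen, which is generally regarded as the dominant cytotoxic species in clinical PDT.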
Research progress of PDT in treatment of glioma Using photodynamic technology to treat tumor, light source is needed to activate high concentration of photosensitizers in tumor tissue to produce functional oxygen source-mono-linear oxygen.The power of light source is directly related to the killing effect of tumor and the damage of normal tissue (16).Photosensitizers have been shown to accumulate within tumor cells, and PDT targets malignant tumor cells to exert cytotoxic effects.As an auxiliary means of surgical treatment, PDT can effectively inactivate tumor cells and kill residual tumor cells around tumor lesions, which is a feasible treatment plan for brain tumors.In a recent single-center, non-randomized phase I/II clinical study, researchers evaluated the feasibility of PDT in the treatment of malignant brain tumors in children and adolescents.The main key points were the safety of PDT treatment (phase I) and overall survival after PDT (OS, phase II), and the secondary key point was PFS after PDT (17).The pathological findings of the included patients included intracranial stromal tumor, stromal astrocytoma, diffuse midline glioma carrying H3K27M mutation, glioblastoma, and pediatric highgrade glioma, with OS and PFS acquired at 21 months and 6 months, respectively.However, the clinical study included few cases and could not obtain a definitive conclusion on PDT.At the same time, another team used interstitial photodynamic iPDT technology to treat newly diagnosed glioma patients, and the results showed that PFS was 16.4 months and OS was 28.0 months, but the study limited the tumor volume of included patients and selected small tumor volume (diameter<4 cm), thereby reducing the risk of harm due to edema (18).Compared with the classical treatment of glioma, PDT significantly enhanced the overall survival and progression-free survival of patients.However, the data published by different teams did not contain the molecular typing of glioma, such as MGMT methylation information, so the effect of photodynamic therapy on different grades of glioma needs to be studied in multi-center and large samples.In the past few decades, PDT for aggressive tumors of the central nervous system has achieved good clinical results, but there is no consensus among different centers on standardized treatment.The selection of photosensitizers and light source parameters in the large center studies of central nervous system photodynamic therapy in the world are inconsistent.The third generation of photosensitizers developed at this stage can be applied to clinical applications, which will greatly improve the targeting of photodynamics.Combined with optical fiber devices with efficient transmission, it is believed that photodynamics can benefit more patients with aggressive brain tumors, especially GBM patients. 
Limitations of PDT Although the clinical treatment of PDT has achieved remarkable success, its wide clinical application is limited by the obvious phototoxicity of traditional phototherapy (19).The main cause of phototoxicity is the uncontrolled distribution of photosensitizers, which when exposed to natural light can lead to untargeted effects in normal tissues, including skin, blood vessels and liver, resulting in damage to normal cells.Moreover, the irregular distribution of photosensitizers may result in low accumulation in tumor cells, limiting the efficacy of PDT.From a technical point of view, selective irradiation of tumor cells is a challenge that requires the development of a method that can deliver a controlled and sustained delivery of photosensitizers directly to the tumor site.Furthermore, the PDT treatment area requires more oxygen to obtain oxygen free radicals to kill the tumor cell, but the tumor is in a state of high oxygen consumption, which may further impact the therapeutic effect. 4 Laser-induced thermal therapy of tumor Laser-induced thermal therapy (LITT) is a minimally invasive surgical approach based on thermal ablation provided by laser via flexible conductive fibers, acting by external or interstitial radiation (20, 21) (Figures 3, 4).During the last 30 years, LITT has gained attention in various clinical scenarios, such as liver cancer, lung cancer, brain tumors, and recurrent or advanced head and neck tumors, among others.Since its creation in 1983, there have been technical improvements to increase its safety and precision, especially with advances in magnetic resonance (MR)-guided therapy (22).The basic principles involved including the conversion of light laser energy into photothermal energy (heat) by the absorption of photons by the tissue, as well as thermal diffusion, distributing this photothermal energy progressively at lower levels towards the tissue margins, acting under three mechanisms, as shown in Figure 4: laser-induced coagulation (LIC: > 60°C), dynamic thermal reaction (TDR: 48-60°C) and laser-induced hyperthermia (LIHT: 42-47 °C).In the core of the irradiated area, there is virtually instantaneous irreversible cell destruction at temperatures > 60 °C,while the tissue margins may suffer reversible cell damage (42-60°C), and, in the case of tumors, it becomes a region with a high rate of relapses, acting better in conjunction with chemotherapy (23,24). LITT has been used for many years as a minimally invasive treatment for brain metastases, epilepsy, necrosis, and glioma.With the improvement of thermal monitoring and ablation accuracy, especially the application of MR thermal imaging technology in surgery, and now the emergence of two commercial laser systems, LITT is gradually being accepted by more neurosurgical centers.In recent years, several new concepts for glioma treatment have been proposed and are being investigated, such as adjuvant chemotherapy or radiotherapy after LITT, immunotherapy and LITT combination therapy.The purpose of this study was to summarize the development of LITT, especially brain gliomas and possible future prospects. One interesting possible indication is to use the disruption of the BBB after LITT to make adjuvant chemotherapy more effective.It has been reported that the effects of BBB disruption after LITT can be demonstrated radiologically by enhanced peripheral contrast (25,26).Recently, Leuthardtet et al. 
reported that by detecting serum specific enolase levels, LITt-induced destruction of the peritumoral blood-brain barrier reached its peak at around 3 weeks and lasted for about 4-6 weeks (27). Although there is no direct evidence (such as case-control studies) to support LITt-induced BBB disruption leading to better outcomes.Carpentier et al. speculated that LITT opening of BBB rather than local control improves survival in patients with recurrent GBM (28).LITT has become an alternative to surgical resection in the treatment of gliomas.However, treatment outcomes for isocitrate dehydrogenase 1 and 2 (IDH1/2) mutant gliomas have not been reported.Johnson's study described a single-institution cohort of patients with grade 2/3 glioma with IDH1/2 mutations receiving LITT.They collected data on patient presentation, radiological characteristics, tumor molecular profiles, complications, and outcomes.We calculated progression-free Graphic representation of photothermal mechanisms in laser-induced thermal therapy.LITT is complementary to surgical resection, radiation therapy, oncology treatment areas, and systemic therapy, and is especially suitable for patients at high risk of surgical resection due to tumors located in good areas or poor functional status.The increased incidence of cerebral edema after LITT compared to surgical resection must be balanced against these factors.LITT has also been shown to induce transient disruption of the BBB, particularly in the area surrounding the tumor, which allows enhanced central nervous system delivery of anti-tumor drugs, thus greatly expanding the Arsenal against brain tumors, including highly effective anti-tumor drugs with low BBB penetration.In addition, heat-induced immunogenic cell death is another secondary side effect of LITT, which makes immunotherapy an attractive adjunct treatment for brain tumors.Many large studies have demonstrated the safety and efficacy of LITT in the treatment of various CNS tumors, and as the literature on this new technology continues to grow, so will its indications (30). Mechanism and advantages of LITT One of LITT's features is its real-time thermal monitoring capability.Researchers McNichols et al. used LITT to treat lesions in the brains of dogs and pigs, and controlled the process of thermal energy and laser ablation through the feedback mechanism based on precise MRI positioning (31).The system effectively regulates heat, eliminates carbonization and evaporation, while protecting the laser's fiber optic attachment.MRI image results can also provide important information such as tumor blood supply and provide a more comprehensive reference for surgery.The compatibility of LITT with real-time MRI temperature measurement ensures the safety and homogeneous management of surgery, thus increasing the efficacy of this method in the treatment of intracranial lesions (32)(33)(34).Compared with traditional surgical resection, LITT is less traumatic, only needs to enter the deep brain through the small hole in the scalp for treatment, and can be used repeatedly without worrying about dose toxicity (such as radiation therapy) or drug resistance (such as chemotherapy).At the same time, LITT can destroy BBB.It can also increase the permeability of therapeutic drugs (27,29,35).Muir et al. 's research results showed that patients who received LITT multiple times for the treatment of recurrent GBM could also tolerate it well, effectively extending the survival time and enhancing living quality of patients (36). 
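The real-time MR thermometry described above is typically translated into an estimate of irreversible tissue damage through a thermal-dose model. A standard formulation, quoted here as the generic cumulative-equivalent-minutes-at-43 °C model from the hyperthermia literature and not as the proprietary algorithm of any particular LITT system, is
\[
\mathrm{CEM}_{43} \;=\; \sum_{i} t_i \, R^{\,43 - T_i},
\qquad
R =
\begin{cases}
0.25, & T_i < 43\,^{\circ}\mathrm{C},\\
0.5,  & T_i \ge 43\,^{\circ}\mathrm{C},
\end{cases}
\]
where t_i is the time (in minutes) spent at temperature T_i. Accumulated dose maps of this kind allow the ablation margin to be monitored against the coagulation (>60 °C) and hyperthermia (42-47 °C) zones discussed earlier; some commercial systems use related Arrhenius damage models instead.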
Research progress of LITT in glioma

The standard treatment for newly diagnosed high-grade glioma (HGG) patients is maximum safe resection followed by chemoradiotherapy. In some cases, this standard strategy cannot be employed because the tumor involves an eloquent or hard-to-access area and resection carries an unacceptable risk of morbidity. In these patients, the standard treatment consists of biopsy and chemoradiotherapy, which is unfavorable in terms of tumor cell reduction. Mohammadi et al. studied LITT in this population; their control group, drawn from other institutions that did not use LITT for these patients, received only biopsy followed by chemoradiotherapy. These authors also demonstrated that the degree of ablation is an independent predictor of disease-specific OS and PFS (37).

Bilateral/butterfly glioblastoma (bGBM) has a poor prognosis. Resection of these tumors is limited by the severe morbidity resulting from surgery. LITT offers a minimally invasive cytoreductive therapy for deep tumors such as bGBM. One study evaluated the safety of bilateral LITT in patients with bGBM. A total of 25 patients were included: fourteen underwent biopsy only, and 11 underwent biopsy + LITT (7 bilateral LITT, 4 unilateral LITT). No intraoperative or postoperative complications occurred in the treatment group (0%). Tumor volume was negatively correlated with treatment extent (r² = 0.44, P = 0.027). Median progression-free survival was 2.8 months in the biopsy-only group and 5.5 months in the biopsy + LITT group (P = 0.026); median overall survival was 4.3 months in the biopsy-only group and 10.3 months in the biopsy + LITT group (P = 0.035). Bilateral LITT for bGBM can therefore be performed safely and shows early improvements in progression-free and long-term survival outcomes in these patients (38).

A recent meta-analysis reported on the use of LITT in newly diagnosed and relapsed high-grade gliomas. The results are similar to those reported in the previous literature, demonstrating a benefit of LITT in OS and PFS as long as more than 95% of the tumor is ablated. LITT seems to be a reasonable option for patients with deep, hard-to-access tumors or tumors in vital functional areas; using this technique, such tumors can be debulked with minimal brain manipulation and a complication rate comparable to that of craniotomy. Similar to surgery, in order to obtain meaningful survival benefits, tumors should be ablated by at least 78% to 80% (39). Beaumont et al. reported a median survival of 7 months after LITT in patients with corpus callosum HGG; in this study, patients with larger tumors (≥15 cm³) were 6 times more likely to develop complications (40).

Limitations of LITT therapy

LITT plays an increasingly important role in the treatment of brain glioma, but it also has certain limitations and shortcomings. (1) Indications: LITT is usually suitable for small tumors (including butterfly GBM); for large tumors or tumors with obvious cystic degeneration, the therapeutic effect still needs to be improved.
(2) Imaging: MRI-guided LITT requires close attention to the shape and location of the tumor during surgery. However, the resolution and imaging depth of MRI are limited, so it may not clearly show the tumor edge or the boundary between tumor and surrounding tissue, nor can it directly visualize intraoperative bleeding, which may increase surgical risk. (3) Ablation scope: the size and energy limitations of the therapeutic equipment used in LITT may prevent ablation of the entire tumor, leading to an increased risk of recurrence. (4) Surgical operation: LITT demands a high level of skill and experience from the surgeon; otherwise it may lead to surgical failure or serious complications. During the operation, the device must be guided through the skull into the brain, which is a complicated process, and improper handling may prolong the operation and increase the risk of bleeding.

To sum up, LITT is a minimally invasive procedure with a lower complication rate than craniotomy. The most common complication of LITT is neurological dysfunction, with rates of temporary deficits ranging from 0% to 29.4% and permanent deficits from 0% to 10%. Permanent impairment is associated with direct heat-induced white matter damage, while temporary impairment is caused by white matter tract displacement or edema. Bleeding complications may also occur within the tumor area or at the treatment site. Intractable cerebral edema is also associated with laser ablation of larger tumors; recent observations suggest that such large lesions may require immediate surgical removal. Another rare complication is pseudoaneurysm formation and rupture, which appears to be related to heat damage to large and medium-sized brain arteries; careful preoperative planning using MR angiography or catheter angiography can increase the safety of surgery. Other minor complications, including infections or wound problems, are less common than with open surgery because of the smaller skin incision.

Indications: (1) Newly diagnosed HGG: small deep brain tumors (including butterfly-shaped gliomas), cases in which open surgery would carry a high risk of complications, and patient preference. (2) Recurrent HGG: small or nodular recurrences; for larger relapses, LITT may have advantages over craniotomy because the scalp incision over previously irradiated skin is minimally invasive and small. Complete or near-complete ablation via one or two trajectories should be feasible, whether for newly diagnosed or relapsed HGG, since the benefits of partial ablation on OS and PFS appear to be limited. For larger tumors, LITT may need to be combined with immediate surgical removal; although the resection is facilitated by the devascularized nature of the ablated tissue and can be performed through a minimal craniotomy, this approach does result in longer operating times for the combined procedure. (3) Radiosurgery-resistant metastases. (4) Medically refractory radionecrosis (RN), as a second-line treatment option; the lesion volume should usually be less than 40-50 ml.
Tumor treating fields

Tumor treating fields (TTFields) are a new treatment method. By generating a low-intensity, intermediate-frequency alternating electric field around the tumor, TTFields selectively interrupt cell division and thereby kill tumor cells; in glioma, the therapeutic effect is achieved at a frequency of 200 kHz. Studies have shown that TTFields can prolong the survival of GBM patients (41). TTFields were approved by the FDA in 2011 for the treatment of recurrent GBM and in 2015 for the treatment of newly diagnosed GBM. According to the 2017 United States guidelines for central nervous system tumors, TTFields can be used in GBM with KPS ≥ 60, with or without MGMT promoter methylation. After standard concurrent chemoradiotherapy, temozolomide combined with TTFields is recommended for patients aged ≤70 years. TTFields may also be used in patients older than 70 years who receive abbreviated (hypofractionated) radiotherapy with adjuvant temozolomide rather than standard concurrent chemoradiotherapy. TTFields may be considered for recurrent GBM, whether diffuse, multifocal, locally resectable or unresectable (42).

Treatment mechanism of TTFields

TTFields disrupt the normal mitotic process by acting on key charged macromolecules or organelles during mitosis, thereby destroying cells and achieving tumor suppression. Two basic physical principles are involved: dipole alignment and dielectrophoresis. Under a uniform alternating electric field, charged molecules oscillate continuously, and the positive and negative charges within each molecule separate so that the molecule aligns parallel to the direction of the applied field vector. For a cell to complete mitosis properly, key macromolecules and organelles involved in mitosis and cytokinesis must be highly polarized so that the charged structures at each stage of cell division are precisely aligned; their normal motion is therefore disturbed by externally applied local electric fields (43). Under TTFields, the normal movement of microtubule subunits in the cytoplasm during metaphase is disturbed, suspending normal microtubule assembly of the spindle and resulting in asymmetric chromatin separation. In normal division, septin 2/6/7 complexes are recruited to the spindle midline during anaphase and, guided by the parallel arrangement of the contractile fibers, the cleavage furrow develops and gradually narrows, with its axis parallel to the direction of the applied alternating electric field. TTFields interfere with this process by disrupting the ability of the individual polymers to bind to each other, inhibiting assembly of the cleavage machinery. Without normal cleavage-protein function, the contraction of the dividing cell cannot be confined to the midline of the cell equator, resulting in severe membrane contraction and abnormal mitotic exit at the beginning of anaphase, and eventually pronounced cytoplasmic blebbing and cell membrane rupture. Moreover, tubulin has a high electric dipole moment (1660 D), and the effect of TTFields on microtubules may be especially pronounced because of the fast dynamics of microtubule assembly. Therefore, under TTFields, dividing cells show asymmetric chromatin separation, mitotic inhibition or delayed division, leading to uneven distribution of chromosomes in daughter cells and eventually to cellular stress.
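The two physical principles just mentioned, dipole alignment and dielectrophoresis, can be written compactly. The following is standard electrostatics for a polarizable sphere, included for orientation rather than taken from the cited TTFields studies; the symbols p, r, ε and K(ω) are our own notation.

```latex
% Torque that aligns a dipole p with the field E (dipole alignment):
\boldsymbol{\tau} = \mathbf{p} \times \mathbf{E}
% Time-averaged dielectrophoretic force on a sphere of radius r in a
% non-uniform field (in a dividing cell, the field is focused near the
% cleavage furrow):
\langle \mathbf{F}_{\mathrm{DEP}} \rangle
  = 2\pi r^{3}\,\varepsilon_{m}\,
    \mathrm{Re}\!\left[K(\omega)\right]\,\nabla \lvert \mathbf{E} \rvert^{2},
\qquad
K(\omega) = \frac{\varepsilon_{p}^{*}-\varepsilon_{m}^{*}}
                 {\varepsilon_{p}^{*}+2\varepsilon_{m}^{*}}
```

Here ε_m and ε_p* denote the complex permittivities of the medium and particle, and K(ω) is the Clausius-Mossotti factor. Because the force scales with ∇|E|², polarizable organelles are pulled toward the furrow region, consistent with the disrupted cytokinesis described above.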
Stressed tumor cells then induce a host immune response under the influence of TTFields. In addition to disrupting cell division, TTFields can also interrupt DNA repair mechanisms (44-50) (Figure 5).

TTFields therefore have both direct and indirect anti-tumor mechanisms, and their anti-tumor effect is strongest when the host immune system is engaged at the same time. Under immunofluorescence microscopy, abnormal structures have been observed in various TTFields-treated tumor cells, including chromosome misalignment in prometaphase and metaphase, monopolar spindles, and asymmetric chromosome segregation in anaphase. These cellular phenomena depend on the frequency of the alternating electric field, with an effective range of 100-300 kHz and an optimal frequency of 200 kHz for glioma. The optimal frequency of TTFields is related to cell size, which may be why the optimal frequencies for mesothelioma cell lines, lung adenocarcinoma cells, and breast cancer cells differ from that of GBM.

TTFields in the treatment of recurrent GBM

In 2007, Kirson and colleagues used TTFields to treat 10 patients with relapsed glioma in a small pilot clinical trial; the median PFS reached 26.1 weeks, PFS at 6 months was 50%, and 2 patients remained progression-free at the end of the study. The median overall survival was 62.2 weeks, much higher than the historical median time to tumor progression of 9.5 weeks and median OS of 29.3 weeks, confirming the efficacy of TTFields in the treatment of recurrent glioma. These studies also confirmed that TTFields do not induce arrhythmia or seizures (53). Subsequently, Roger Stupp et al. conducted a phase III clinical trial in recurrent GBM, in which 237 patients with recurrent glioma were randomly assigned in a 1:1 ratio to TTFields monotherapy (n = 120) or to a control group (n = 117) receiving the physician's best-choice chemotherapy. The results showed a median overall survival of 6.6 months in the TTFields group versus 6.0 months in the chemotherapy group (HR = 0.86, P = 0.27). The 1-year survival rates were 20% in both groups; median PFS was 2.2 months versus 2.1 months (HR = 0.81, P = 0.16); 6-month progression-free survival rates were 21.4% and 15.1% (P = 0.13); and radiological response rates were 14% and 9.6%, respectively (P = 0.19). Quality-of-life assessment showed that constipation, nausea and vomiting were significantly reduced with TTFields compared with chemotherapy, and that TTFields helped preserve patients' cognitive function. Although this trial failed to show that TTFields improved survival relative to traditional chemotherapy, its efficacy was comparable, and the significant improvement in quality of life became a highlight of the treatment (54). Similarly, Mrugala analyzed data from 457 relapsed GBM patients treated with TTFields at 91 U.S. cancer centers and showed a significant benefit in patients treated with TTFields for relapsed GBM (55).

To sum up, in clinical practice the efficacy of TTFields has been affirmed; its good tolerability and safety make it worthy of wider clinical adoption.
When combined with radiotherapy, TTFields inhibit the phosphorylation of AKT, JUN, P38 and ERK, resulting in enhanced radiosensitivity, while inhibition of ciliogenesis by TTFields enhances the sensitivity of GBM to TMZ. In addition, TTFields combined with sorafenib or hyperthermia result in cell death through inhibition of STAT3 (56) (Figure 6).

Limitations of TTFields treatment

At present, the most common adverse reaction to TTFields treatment is mild or moderate dermatitis of the skin at the electrode sites, which is likely due to a variety of factors, including persistent moisture, poor heat loss from the skin, chemical irritation from the hydrogel and medical-tape components, and possible inhibition by TTFields of normal epithelial cell proliferation in the skin. Most of these are mild or moderate injuries that can usually be managed by relocating the array by 1-2 cm or applying topical corticosteroids, but severe infections and ulcers can cause permanent damage. The high cost of TTFields treatment is one of the major factors limiting the adoption of this technology for neurological tumors; TTFields treatment is considered cost-effective in the United States health care system, but national health care policies vary. Although researchers have explored the therapeutic mechanisms of TTFields, some clinicians remain skeptical of the technology.

In conclusion, TTFields are a new approach to non-invasive cancer treatment. Clinically, their efficacy and safety have been demonstrated in the treatment of newly diagnosed and relapsed glioblastoma. TTFields selectively kill rapidly dividing cells by interrupting cell division and can therefore be applied to a wide range of localized tumors, including GBM. Beyond further optimizing treatment options for GBM, TTFields have broad application prospects for treating other cancers.

Although glioma therapy remains a huge challenge for researchers and clinicians, the rapid development of nanotechnology (NT) provides potential approaches for prospective glioma treatment. However, the blood-brain barrier (BBB), the blood-brain tumor barrier (BTB), hypoxia, and the complex tumor immune environment (TIE) hinder the development of this new medical technology. To address these obstacles, researchers and clinicians have been dedicated to designing diverse nanoformulations and standardized nano drug delivery systems (NDDS) to enhance the therapeutic effect in glioma (Figure 8).

In spite of the limitations imposed by the BBB, some nanocarriers have been used to deliver chemotherapy drugs for brain tumor therapy. NDDS can provide many advantages, for instance: 1) enhancing the BBB penetration depth and the bioavailability of drugs in tumor tissues; 2) targeted and controlled drug release, or pH-responsive release; 3) co-modification of multiple different drugs on the surface of nanocarriers, enabling combination therapy; and 4) little or no toxicity (60).

At present, superparamagnetic iron oxide nanoparticles (SPIONs) represent the most widely used theranostic magnetic nanoparticles (MNPs) for various biomedical applications, such as high-contrast agents for MRI (60, 61), efficient drug delivery (62), and magnetism-based hyperthermia therapy (63). Various SPION-based formulations have been synthesized as functional nanoplatforms for imaging and therapy of brain tumors. Meanwhile, NIR fluorescent nanoprobes, gold nanomaterials, micro/nanobubbles, mesoporous silica NPs (MSNPs), mesoporous ruthenium NPs (MRNs), and titanium dioxide NPs have been used for the diagnosis and therapy of glioma.
Nanotechnology holds great potential for addressing GBM treatment. However, challenges remain, and continued effort is needed to translate glioma nanomedicine from fundamental research to the clinic.

Discussion

At present, PDT, LITT and TTFields have all been applied in the treatment of glioma; PDT has been combined with surgery, and certain clinical effects have been achieved. Some studies have shown that the overall survival rate of patients at 12 months can reach 95.5%, but large-sample studies are still needed for verification. LITT can achieve accurate positioning and real-time monitoring of gliomas and maximizes the protection of non-tumor tissue. For gliomas that are recurrent, deeply located, unsuitable for surgical treatment or unresponsive to standard treatment, LITT is considered a potential local treatment that can effectively improve patients' prognosis and quality of life. It avoids the risks associated with craniotomy while reducing hospital stay and hospitalization costs, but the long-term effectiveness of this treatment still needs to be evaluated through rigorous randomized clinical trials. The most successful application so far is TTFields, which has entered clinical translation. The clinical application of other minimally invasive treatments remains limited by natural barriers such as the skull and the blood-brain barrier. Nevertheless, these approaches can be used as adjuncts and gradually rolled out by selecting appropriate cases at the bedside, although their safety and effectiveness still need further verification.

Conclusion and future perspectives

In recent years, with the continuous development of medical technology, minimally invasive techniques for nervous system tumors have been improving steadily. Minimally invasive surgery using neuronavigation systems combined with robotic technology has become an important method for treating various neurosurgical diseases, significantly improving the precision of surgical positioning and the completeness of tumor resection. In addition, the emergence of new technologies such as NDDS, immunotherapy, gene therapy and cell therapy has provided more options for the minimally invasive treatment of glioma. Although some limitations and challenges remain, the therapeutic effects are worthy of recognition and promotion in clinical practice. Future research directions include further improving minimally invasive surgical and endovascular intervention techniques, exploring novel, effective and safe therapeutic approaches, optimizing multimodal treatment strategies, and exploring individualized treatment options to further extend patient survival and improve quality of life.

FIGURE 2 Mechanism of PDT treatment.
FIGURE 4 Photon absorption and scatter.
FIGURE 5 TTFields model for interfering with tumor cell mitosis (in the anaphase of tumor cell mitosis, TTFields can interfere with the formation and directional movement of microtubulin, ultimately leading to the apoptosis of tumor cells).
FIGURE 6 Molecular pathway changes caused by TTFields combined with radiotherapy or drugs in GBM.
5.4 Molecular pathway changes caused by TTFields on glioma and GBM

A: After TTFields treatment, Beclin1 increases the binding of Atg14L and Vps34 (positive regulators of autophagosome formation) and decreases Bcl-2 (a negative regulator of autophagosome formation), leading to autophagy in glioma cells and tumor stem cells; meanwhile, suppression of the AKT2/mTOR/p70S6K axis also leads to autophagy. B: TTFields up-regulate caspase-3 and caspase-7 or increase BAX and down-regulate BCL-2 expression, leading to apoptosis. C: TTFields damage the nuclear membrane, generating micronuclei and double-strand breaks, and activate the cGAS-STING signaling pathway to increase the expression of proinflammatory factors and type I interferon; through the AIM2-caspase-1 inflammasome, cleavage of GSDMD and release of LDH ultimately lead to pyroptosis and immune activation. D: TTFields inhibit IκBα phosphorylation, NF-κB p65 translocation, and the expression of MMP2 and MMP9, ultimately inhibiting cell invasion, metastasis, and EMT processes. E: TTFields promote phosphorylation of GEF-H1, which further activates RhoA, ultimately leading to focal adhesion formation and cytoskeletal reorganization. F: TTFields cause endoplasmic reticulum stress and ATP release, which activates AMPK and ULK, leading to resistance to TTFields. G: TTFields attenuate tube formation and angiogenesis by down-regulating the expression of HIF-1α and VEGF. H: Up-regulation of BRCA1 and GADD45 results in G2/M phase arrest (45, 56-59) (Figure 7).

FIGURE 7 Molecular pathway changes caused by TTFields on glioma and GBM.
FIGURE 8 Characteristics of standard nanomaterials.
2024-05-23T15:23:42.463Z
2024-05-21T00:00:00.000
{ "year": 2024, "sha1": "b49c226111288dd9de6a8775766711cbd4f669d4", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/journals/oncology/articles/10.3389/fonc.2024.1383958/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "8b40621dfde1e6799a6afaa3090dd01da4259131", "s2fieldsofstudy": [ "Medicine", "Engineering" ], "extfieldsofstudy": [ "Medicine" ] }
245561129
pes2o/s2orc
v3-fos-license
Etiology, prevalence and clinical signs of erythema toxicum neonatorum

Evidence shows that erythema toxicum neonatorum (ETN) has been described in the literature since the 15th century as a rash occurring primarily in pediatric patients. Clinical studies show that the lesion of ETN is mainly characterized by the presence of minute yellowish papules and pustules that are usually surrounded by an irregular reddish wheal. It should be noted that evidence has also demonstrated that these lesions are temporary and usually disappear within a few hours. In the present literature review, we discuss the etiology, prevalence, risk factors, and clinical signs of ETN based on findings from relevant research. The etiology of ETN is not clear among the different studies; however, some studies show involvement of immune and potentially allergic reactions. The prevalence of the condition among infants is also remarkably variable among the relevant studies worldwide, and there is inconsistency in reporting the significance of the risk factors related to the prevalence and severity of the condition. On the other hand, the clinical signs seem to be consistent across studies and easily detected, except when evaluating dark-skinned infants. Further studies are needed to better understand the etiology and epidemiology of the condition.

INTRODUCTION

Evidence shows that erythema toxicum neonatorum (ETN) has been described in the literature since the 15th century as a rash occurring primarily in pediatric patients. 1 It has been suggested that the condition develops secondary to an abnormal reaction between the affected baby's skin and meconium. Over time, the name of the condition has changed, from erythema papulatum to erythema dyspepsia and erythema neonatorum allergicum. 1,2 Finally, another investigation, published by Leiner, suggested that the condition should be termed erythema toxicum neonatorum, the name used in the literature since 1912. 3 Clinical studies show that the lesion of ETN is mainly characterized by the presence of minute yellowish papules and pustules that are usually surrounded by an irregular reddish wheal; evidence has also demonstrated that these lesions are temporary and usually disappear within a few hours. Moreover, evidence indicates that affected children usually recover within a week or two, and most children present within the first week after birth. However, it has been noted that similar lesions may subsequently develop in other areas of the body. 4 In addition, it has been shown that the lesions can affect any part of the patient's body except for the palms and soles. In the present literature review study, we aim to discuss the etiology, epidemiology, and clinical manifestations of ETN based on evidence from the studies available in the literature.

METHODS

This literature review is based on an extensive literature search of the Medline, Cochrane, and EMBASE databases performed on 27th November 2021 using Medical Subject Headings (MeSH) terms or a combination of all possible related terms, according to the database. To avoid missing potential studies, a further manual search was performed through Google Scholar, along with screening the reference lists of the initially included papers. Papers discussing the etiology, prevalence and clinical signs of erythema toxicum neonatorum were screened for useful information.
No limitations were imposed on date, language, age of participants, or publication type.

Etiology

Among the relevant studies in the literature, there is no clear evidence regarding the exact etiology of the condition. It has been proposed that the underlying pathophysiology is initiated and mediated by a graft-versus-host reaction against maternal lymphocytes; however, among more recent studies there is no clear evidence for this mechanism or for the presence of such maternal cells within the corresponding lesions. 2 On the other hand, it has been suggested that within the first hours after birth, an abnormal immunological reaction develops against the microbes within the hair follicles of affected children. Previous immunohistochemical studies of 1-day-old infants suffering from the condition indicate that relevant immune reactions occur in ETN lesions, with abnormal activation and accumulation of immune cells. 5 The immunohistochemical evidence further demonstrated that ETN lesions are usually associated with various immune mediators and cells, including high mobility group box protein 1 (HMGB1), nitric oxide synthases 1, 2, and 3, psoriasin, aquaporins 1 and 3, eotaxin, interleukin (IL)-8, IL-1β, and IL-1α. Moreover, it has been shown that tryptase-expressing mast cells are abundantly present in ETN lesions on immunohistochemical analysis, whereas the cathelicidin antimicrobial peptide LL-37 is not usually detected in these lesions. 6,7 The presence of an allergic reaction might also contribute to the pathogenesis of the condition in affected infants; in this context, it has been demonstrated that ETN lesions contain extensive numbers of eosinophils.

Prevalence and risk factors

Studies show that ETN is a condition that is usually self-limited, benign, evanescent, and transient. Epidemiological studies indicate that the condition is common among full-term infants, with an estimated prevalence in this population reported to range between 48% and 72%. 8 In 1986, a Japanese investigation followed 5387 infants over ten years to investigate the characteristics of skin lesions and the epidemiology of ETN. The authors reported that the prevalence of ETN among these infants was 40.8%, and that the most significant risk factor for developing the condition was being a preterm infant (birth weight <2500 g). 9,10 Further epidemiological data show that the condition is not confined to any particular race; however, evidence indicates that it is more common among males. Another Spanish investigation, which included 356 newborns, aimed to estimate the prevalence and other epidemiological parameters of ETN among these infants; the estimated prevalence was 25.3%, and the prevalence was significantly higher in male than in female patients (61.9% versus 38.1%, respectively). Some reports show that recurrence might occur in previously recovered children, although overall evidence indicates that such events are rare. Many other investigations have also been published reporting epidemiological data on ETN patients.
Budair et al. conducted a prospective cross-sectional study in Saudi Arabia to determine which cutaneous lesions are the most common among newborns. 11 All 313 newborns included in this study had skin lesions. ETN was present in 24.92% of these children, making it the third commonest skin lesion after Mongolian spot (63.07%) and milia (61.34%), followed by physiological scaling (18.01%). The authors also reported that the condition was more prevalent among female than male Saudi patients (51.2% versus 35.8%), whereas non-Saudi male patients with ETN were more frequent than non-Saudi female patients (7.6% versus 5.1%, respectively). Other similar investigations have demonstrated that the prevalence of ETN is hugely variable across populations, being 7% in some and 68% in others. 12,13 The study by Budair et al. suggested that gender is not a significant indicator of the prevalence and epidemiology of the condition. 11 However, other studies reported that gender and mode of delivery were both significant risk factors associated with the prevalence of ETN. 14,15

Another Saudi investigation was conducted by Alakloby et al. to assess the epidemiological data of acne neonatorum across the eastern region. 16 The authors identified 26 infants with these lesions, who were included in the study for further evaluation. It is worth mentioning that the differential diagnosis of acne neonatorum is wide and includes different conditions, including a variety of fungal, viral, and bacterial infections; ETN is one of these conditions. Other disorders include acneiform reactions to drugs such as phenytoin or lithium, acne venenata infantum, acne infantum, and neonatal sebaceous gland hyperplasia. [17][18][19][20][21][22] Accordingly, proper management of these patients requires an appropriate evaluation of their condition.

In China, Liu et al. reported that the incidence of ETN in their included infants was 43.68%. 23 The authors further aimed to identify the most significant risk factors associated with the development and severity of ETN in their population. They reported that several risk factors could predict the development of ETN, including vaginal delivery, being fed a mixed diet or milk-powder substitute, birth season, first-pregnancy birth, and gender; the severity of ETN could be predicted by the total length of labor among infants born by vaginal delivery. Another investigation by Monteagudo et al. was conducted in Spain to assess the epidemiological and clinical characteristics of ETN. 24 In that study, the prevalence of ETN was estimated at 16.7%, with higher prevalence rates reported among Caucasian newborns and those with fewer than two previous pregnancies, maternal age <30 years, vaginal delivery, increased gestational age, and higher birth weight (p = 0.01, 0.12, 0.28, <0.05, <0.05, and <0.05, respectively). Another cross-sectional investigation was conducted in Brazil by Reginatto et al. to assess the epidemiological and clinical findings in patients with ETN; 3 the authors reported a prevalence of 21.3% among the 2831 infants included in their multicenter investigation.
The prevalence of ETN was significantly correlated with birth season, birth weight, gestational age, never being admitted to the neonatal intensive care unit, absence of gestational risk factors, 1-minute Apgar scores of 8 to 10, male gender, and Caucasian ethnicity. Another similar study, also conducted in Brazil, assessed the cutaneous findings among neonates within the first three days of life and reported a prevalence of ETN of 23%. Several lesions were more prevalent than ETN in this study, including sebaceous hyperplasia, dermal melanocytosis, and skin desquamation (35%, 24.61%, and 23.3%, respectively), while many others were less common, including salmon patch, skin erythema, genital hyperpigmentation, eyelid edema, milia, genital hypertrophy, and skin xerosis (20.4%, 19%, 18.4%, 17.4%, 17.3%, 12%, and 10.9%, respectively). 25,26

Clinical signs

Two variants have been reported for ETN: a pustular variant and an erythematous papular variant. Evidence shows that the typical lesion of ETN is a firm, yellow-white pustule or papule, 1-3 mm in size, surrounded by an irregular erythematous base (Figure 1).

Figure 1: An infant with erythema toxicum neonatorum showing a truncal distribution of tiny, discrete, multiple pustules with an erythematous base. 32

It has been further reported that the lesion can be observed as a sea of erythema surrounding a papule, or as having a characteristic flea-bitten appearance. Evidence further indicates that these lesions can be found in clusters scattered over different body parts or confined to a single area; they may be single or multiple. Splotchy erythema has also been reported as the sole manifestation of the condition in some patients. 27,28 Macules are usually found outside the erythematous lesions during the first days of presentation. They are usually observed first on the cheeks and spread to different body parts later on; the forehead is usually involved together with other parts, including the extremities, trunk, and chest. However, further investigations have demonstrated that the skin of the scrotum can also be involved. Accordingly, a thorough full-body examination should be conducted in these patients to detect the lesions adequately. 29 It is worth mentioning that clinicians report difficulty in examining infants with dark skin for ETN, so identifying the lesions in these children might be challenging. Papules tend to appear when the erythematous macules persist; however, the latter lesions are usually evanescent, and the papules might be detected as sole lesions. Urticarial lesions on the trunk, which are confluent and give a blotchy appearance, should be differentiated from these macules. As previously noted, the soles and palms are not usually affected, which supports the theory that the pathology of ETN is correlated with the distribution of hair follicles. Papules are usually superficial, particularly when noticed over the skin of the abdomen and back. Secondary infection of pustules has also been reported, although this is not very common. The incidence of ETN is highest on the 2nd day of life, and the estimated incidence of recurrence is 11%.
It should further be noted that the presentation of ETN can be delayed in some infants, as evidence shows that some affected infants might present days to weeks after premature birth. In addition, eosinophilia might be observed in some patients, with estimates of up to 18%. 30 However, systemic manifestations do not usually develop, and the prognosis of the condition is generally good. 25,31

CONCLUSION

The etiology of ETN is not clear among the different studies. However, some studies show involvement of immune and potentially allergic reactions. The prevalence of the condition among infants is also remarkably variable among the relevant studies worldwide. There is also inconsistency in reporting the significance of the risk factors related to the prevalence and severity of the condition. On the other hand, the clinical signs among studies seem to be consistent and easily detected, except when evaluating dark-skinned infants. Further studies are needed to better understand the etiology and epidemiology of the condition.
2021-12-30T16:04:32.300Z
2021-12-27T00:00:00.000
{ "year": 2021, "sha1": "313efe67bc05222fcd5e0acd08036030f24dd10e", "oa_license": null, "oa_url": "https://www.ijcmph.com/index.php/ijcmph/article/download/9332/5659", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "1f04b7aa71090f0236a14ca14e435b5cd06335d5", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
26492761
pes2o/s2orc
v3-fos-license
Oral health comprehension in parents of Saudi cerebral palsy children

Objective: To determine oral health comprehension among parents of cerebral palsy (CP) children. Methods: A self-administered questionnaire was utilized to obtain the required information. The study was conducted in two main centers for disabled children in Riyadh, Saudi Arabia. Results: Parents of all 157 CP children registered in the two centers completed the questionnaire, which was mostly (86.6%) completed by mothers. The majority (98.7%) of the parents knew the importance of dental health for general health. More than two-thirds (70%) of the parents thought that teeth should be brushed three times daily or after each meal. About three in every ten (29.9%) parents were not aware of the beneficial effect of fluoride in preventing dental caries, and very few (9.6%) were aware of water as a source of fluoride. Almost all (98.7%) of the parents knew that sugary foods caused dental caries, but three-fourths (75.8%) were not aware of the possible harmful effects of bottled juices on teeth. There were no significant (p > 0.05) associations between parental age/gender and any of the dependent variables. Conclusion: Parents of CP children generally showed satisfactory oral health comprehension; however, they need further oral health education in several areas.

Introduction

Knowledge forms the basis for most human actions and behaviors, and those with a better level of knowledge are expected to make more appropriate decisions and engage in better practices (Heskett, 2017). Parents play an important role in providing knowledge to their children and in the formation of their health-related habits and behaviors (deCastilho et al., 2013). This is specifically important in the case of intellectually and physically challenged children, where parents make most of the decisions for them, including oral hygiene and dietary routines (He et al., 2014). Cerebral palsy (CP) is one of the most prevalent such conditions in the world; population-based studies from around the world report prevalence estimates of CP ranging from 1.5 to more than 4 per 1000 live births (CDC&P, 2017). CP describes a group of permanent disorders of the development of movement and posture, causing activity limitation, attributed to non-progressive disturbances that occurred in the developing fetal or infant brain. The motor disorders are often accompanied by disturbances of sensation, perception, cognition, communication and behavior, by epilepsy, and by secondary musculoskeletal problems (AACP&DM, 2005). Due to these handicapping characteristics, these children are dependent on their parents/caregivers for their daily care, including oral hygiene care and dietary intake (Grammatikopoulou et al., 2009). Parents with better and more appropriate comprehension in these areas are expected to take good care of their children (Al-Omiri et al., 2006). However, CP parents/caregivers have been reported to have low comprehension in these areas (Verrall et al., 2000). Therefore, it is important to collect information about the oral health comprehension of these parents, to monitor their knowledge level, and to provide them with oral health education in weak areas. There is a scarcity of information internationally about oral health comprehension in parents of CP children.
A study on this topic was conducted in Riyadh, Saudi Arabia about a decade ago; although it reported a satisfactory level of oral health knowledge among CP parents, it still pointed out several areas where the parents needed further oral health education (Wyne, 2007). No further studies have been published since then. In the wake of a rapidly changing socioeconomic environment, continuous monitoring and gauging of CP parents' oral health comprehension is necessary. The purpose of the present study was to collect up-to-date information on the oral health comprehension of CP children's parents.

Methods

The study, cross-sectional in design, was conducted in two main centers for disabled children in Riyadh, Saudi Arabia from December 2014 to May 2015. A self-administered questionnaire in Arabic was utilized, which was a modified version of a questionnaire used in a previous study by Wyne (2007). The questionnaire was pre-tested for validity and reliability in 30 parents of CP children not participating in the main study, with a two-week interval for test-retest reliability. Pertinent modifications were made to enhance its clarity for the participating parents. The information collected through the questionnaire was as follows:
- demographics: parent's age/gender and the CP child's age/gender;
- significance of dental health;
- significance of optimal dental health for better general health;
- reason for and frequency of dental visits;
- oral hygiene routine;
- various sources and the importance of fluoride;
- possible foods and drinks that cause tooth decay;
- action to be taken on finding a cavity in their mouth;
- possible reason(s) for bleeding gums and the action needed if there is bleeding from the gums after tooth-brushing.

The study was registered with the College of Dentistry Research Center (CDRC) of King Saud University, and ethical approval for the study, including the questionnaire, was obtained from the CDRC. The two centers selected for the study are the main centers for special children in Riyadh, where education and health care are provided to children with various conditions/disabilities. One of the researchers (NH) visited the selected centers, and the questionnaires were distributed among the parents for completion. The questionnaires included a consent form with a covering letter that explained the research objectives and assured the parents of the confidentiality of the collected information. The collected data were stored on computer utilizing the Statistical Package for Social Sciences (SPSS, version 19). Various frequencies were derived, and the Chi-square test was utilized to establish any significant (p < 0.05) associations between the various responses and the independent variables (such as parental age/gender), as illustrated in the sketch below.
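The paper reports only that the Chi-square test was used, without a worked example. As a hedged illustration of this analysis step, the snippet below runs a Chi-square test of independence on a hypothetical 2x2 table (parental gender versus awareness of fluoride); the counts are invented for demonstration and are not the study's data.

```python
# Minimal sketch of the Chi-square analysis described above.
# The contingency table below is HYPOTHETICAL, not the study's data.
from scipy.stats import chi2_contingency

#                 aware  not aware
# mothers          90        46
# fathers          13         8
table = [[90, 46],
         [13, 8]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, p = {p:.3f}, dof = {dof}")

# Mirroring the paper's decision rule: associations with p < 0.05
# would be reported as significant.
if p < 0.05:
    print("Significant association between gender and awareness")
else:
    print("No significant association (p > 0.05), as the study found")
```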
Results

The parents of all 157 CP children registered in the two centers completed the questionnaire, mostly (86.6%) the mothers. The mean parental age was 34.0 years (SD 7.3, range 20 to 58 years). The CP children's mean age was 6.7 years (SD 2.7, range 2 to 12 years) [males 57.7%, females 42.3%]. Responses to the questions on the importance of dental health and oral hygiene are listed in Table 1. Although most (94.3%) of the parents were aware of the importance of good dental health for mastication, more than one-third (35.7%) did not consider it important for speech. Almost all (98.7%) of the parents knew that good dental health is important for general health. About two-thirds (65.6%) of the parents thought that one must visit a dentist every six months; however, one-fifth (20.4%) were of the opinion that a dental visit is needed only for pain or a dental problem. More than two-thirds (70%) of the parents thought that teeth should be brushed three times daily or after each meal. A great majority (98.1%) of the parents were using a toothbrush, or both a toothbrush and miswak, for tooth cleaning. Table 2 presents the results on the questions regarding fluoride: about three in every ten (29.9%) parents were not aware of the beneficial effect of fluoride in preventing dental caries, and very few (9.6%) were aware of water as a source of fluoride. Table 3 lists the parents' responses about tooth decay. Almost all (98.7%) of the parents were aware that sugary foods cause dental caries, and 91.7% knew about the harmful effects of soft/carbonated drinks on teeth. However, fewer parents had similar comprehension about flavored fizzy drinks (35.7%), sweetened/flavored milks (32.5%) and bottled/canned juices (24.2%). A large majority (84.1%) of the parents would visit a dentist immediately if they found a cavity in their tooth/teeth. Most (84.7%) of the parents knew that regular bleeding on tooth brushing could be a sign of gum disease; however, only 52.2% would see a dentist for the problem (Table 4). For the purpose of further analyses, parents were divided into three age groups (20-30, 31-40 and ≥41 years). However, there were no significant associations (p > 0.05) between parental age/gender and any of the dependent variables.

Discussion

The study has yielded information about oral health comprehension in parents of cerebral palsy children. The results show some strong areas of oral health comprehension, while other areas appear to need further improvement. It is contemplated that parents with adequate oral health comprehension would play a better role in the oral health care of their CP children. Parents of all the CP children registered in the selected centers completed the questionnaire, which was in the parents' native language and was thoroughly pretested. It is nevertheless worth mentioning that results from a questionnaire study have to be interpreted with caution: knowing that the survey is being carried out by dentists/health care professionals may prompt favorable responses, and some parents may not have fully comprehended the questions. Although most of the parents were aware of the importance of good dental health for effective chewing, many did not consider it important for speech and esthetics. This is understandable, as these parents are usually so overwhelmed by the demands of caring for their CP child that the importance of these two functions might become secondary for them (Waldman et al., 2010). A previous study in parents of CP children showed similar results (Wyne, 2007). A positive aspect of the results was awareness of the importance of oral health for optimal general health; a complementary correlation between oral health and systemic health is now well documented (DHSV, 2017). This also forms the basis for the recommendation that a routine check-up visit be made at least once a year. However, in the present study, a considerable number of parents believed that a dental visit was only necessary for dental pain or a problem.
The present results are in contrast to a previous study in parents of Saudi CP children (Wyne, 2007), in which most of the parents were aware of the importance of yearly check-up dental visits. The results about the frequency of tooth brushing and the use of toothbrush and miswak were strongly positive. Previous studies in parents of CP children and in parents of healthy children have shown similarly strong comprehension of oral hygiene practices (Al-Tamimi and Petersen, 1998; Wyne, 2007). Miswak (Salvadora persica) is a wooden toothbrush/chewing stick traditionally used in various parts of the world, including Saudi Arabia; its efficacy in tooth cleaning and its anti-bacterial effects have been well summarized (Haque and Alsareii, 2015). Fluoride has been proven to have a clear anti-cariogenic effect (Re Weng et al., 2011), and most of the decline in caries is attributed to fluorides (Bratthall et al., 1996). However, some parents had not heard about fluoride or did not know that it protects teeth from caries, and very few knew about water as a fluoride source. This mirrors the results of the 2007 study (Wyne, 2007) in CP children's parents and stresses the need for further information about fluoride use among these parents. The parents of CP children showed strong knowledge about the harmful effects of sugar-containing foods and soft drinks on teeth; however, similar knowledge was lacking about flavored fizzy drinks, bottled/canned juices and sweetened/flavored milks. These results are also similar to those of the 2007 study (Wyne, 2007). As frequent consumption of sweetened drinks is a major cause of catastrophic carious destruction (Ghazal et al., 2015), this risk factor should be monitored in all chronically ill children. Regarding the action to be taken on finding a cavity in their tooth/teeth, a majority of the parents knew to visit a dentist immediately, but some preferred to wait until they felt pain. A great majority of parents knew that blood seen regularly on the toothbrush could mean gum disease; however, only half of them recognized the need to see a dentist immediately. A factor possibly responsible for not visiting a dentist could be dental anxiety/fear, which is highly prevalent among Saudi adults (Gaffar et al., 2014). In addition, these parents are usually so overwhelmed by the demands of caring for their CP child that their own health issues may become of secondary importance to them (Waldman et al., 2010). An association between parental comprehension of oral health and the oral health habits of their disabled children has been established (Klingberg and Hallberg, 2012; Limeres et al., 2014), and inadequate parental oral health comprehension could be a serious barrier to optimal dental care in children, especially CP children (deCastilho et al., 2013; He et al., 2014; Limeres et al., 2014). This study attempted to gauge the oral health comprehension of parents of CP children so that, if needed, assistance can be provided to the parents in this area. The present study has shown mixed results: some areas of strong oral health comprehension, others satisfactory, and some weak areas among parents of CP children. The results are not different from those obtained about a decade ago (Wyne, 2007) and strongly indicate a need for enhanced efforts towards improving the parents' oral health comprehension. Better oral health comprehension should result in better oral health in these parents (Brennan et al., 2010).
It is also expected to consequently benefit the oral health of their CP children (Klingberg and Hallberg, 2012; deCastilho et al., 2013).

Conclusions

The parental comprehension of the importance of oral health and its relation to general health was adequate, and awareness about tooth brushing was also positive. However, one-fifth of the parents would visit a dentist only for a dental problem or pain, and very few parents were aware of water as a source of fluoride. The parents were fully aware of the harmful effects of sugary foods and soft/carbonated drinks on teeth, but fewer parents had similar comprehension about flavored fizzy drinks, sweetened/flavored milks and bottled/canned juices. There were no significant associations (p > 0.05) between parental age/gender and any of the dependent variables.
2018-04-03T00:44:58.189Z
2017-08-02T00:00:00.000
{ "year": 2017, "sha1": "01158a0ec5b76bb4fedb5a85aed88aa33c818861", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.sdentj.2017.07.004", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b4cbb5c62372d12cfe952bda423b073019e70538", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
251655988
pes2o/s2orc
v3-fos-license
Absent Metabolic Transition from the Early to the Late Period in Non-Survivors Post Cardiac Surgery

After major surgery, longitudinal changes in resting energy expenditure (REE), as well as imbalances in oxygen delivery (DO2) and in its distribution and processing (VO2), may occur due to dynamic metabolic requirements, impaired macro- and microcirculatory flow and mitochondrial dysfunction. However, the longitudinal pattern of these parameters in critically ill patients who die during hospitalization remains unknown. Therefore, in 566 patients who received a pulmonary artery catheter (PAC), we analyzed REE, DO2, VO2 and the oxygen extraction ratio (O2ER) continuously in survivors and non-survivors over the first 7 days post cardiac surgery, calculated the percent increase of the measured over the calculated REE, and investigated the impact of a reduced REE on 30-day, 1-year and 6-year mortality in uni- and multivariate models. Only in survivors was there a statistically significant transition from a negative to a positive energy balance from day 0 until day 1 (day 0: −3% (−18, 14) to day 1: 5% (−9, 21); p < 0.001). Furthermore, non-survivors had significantly decreased DO2 during the first 4 days and reduced O2ER from day 2 until day 6. Additionally, a lower REE was significantly associated with worse survival at 30 days, 1 year and 6 years (p = 0.009, p < 0.0001 and p = 0.012, respectively). Non-survivors seemed to be unable to adapt metabolically from the early (previously called the 'ebb') phase to the later 'flow' phase. The DO2 reduction was more pronounced during the first three days, whereas O2ER was markedly lower during the following four days, suggesting a switch from a predominantly limited oxygen supply to prolonged mitochondrial dysfunction. The association between a reduced REE and mortality further emphasizes the importance of REE monitoring.

Introduction

Energy in the form of adenosine triphosphate [1] and other high-energy compounds is required for all cellular activities; it is most efficiently obtained via the oxidation of nutrients. Macro- and microcirculation as well as mitochondrial functioning are key components in maintaining normal cellular physiology [2]. Therefore, oxygen delivery (DO2), as a measure of macrocirculation, and oxygen consumption (VO2), as a combined measure of the microcirculatory distribution of blood flow and mitochondrial activity, are essential to evaluate the metabolic state [3]. A critical fall in VO2, and therefore in resting energy expenditure (REE), induces initially reversible impairments and finally irreversible alterations that result in cell death. Maintaining sufficient oxygen availability for the cell is therefore fundamental for cell survival [4]. In septic and cardiac-arrest patients, decreased VO2 was associated with a higher mortality rate [5,6]. This association was initially thought to be mainly related to inadequate DO2 and is now explained by an impaired oxygen extraction ratio (O2ER) due to altered mitochondrial function [7,8]. Metabolic, endocrine and immunological reactions are triggered by surgical trauma and extracorporeal circulation (ECC) [9]. The metabolic response to trauma over time was first described by Sir David Cuthbertson, who coined the terms 'ebb' and 'flow' phases, describing an initially reduced metabolic rate followed by a later increase after an injury [10]. A third, chronic phase has more recently been proposed [11].
In the latest update of the ESPEN guidelines in 2019, the metabolic response to trauma was redefined. The 'ebb' phase was renamed the early period and is defined by catabolism, a lower body temperature and reduced VO2, aiming to limit post-traumatic energy depletion. The former 'flow' phase is now described as the late period and is followed by the chronic phase, which evolves from a catabolic to an anabolic status. The transition from one state to the next depends on injury severity and is associated with stress, muscle atrophy, the administration of medication (catecholamines, sedatives, neuromuscular blocking agents), mechanical ventilation and renal replacement therapy [12]. During the early post-injury period, the REE is usually lower than before the injury; in contrast, during the later phase, the REE may even rise to values higher than before the injury [13,14]. During these transitions from one period to another, further factors affect the REE, including physiological derangements such as fever, hypothermia, changes in heart rate, shivering, agitation, infections and fasting status, as well as therapeutic interventions such as catecholamine support, sedatives, non-selective β-blockers and active cooling [15]. Therefore, it remains difficult to estimate the net balance of the REE in critically ill patients, and the metabolic patterns of patients with adverse outcomes remain largely unknown. Thus, in this cohort study, we evaluated whether hyper- or hypometabolic states predominate in survivors and non-survivors. We analyzed the relation between the measured and calculated REE over time. Furthermore, we investigated the longitudinal dynamics of DO2, VO2 and O2ER, measured via a pulmonary artery catheter (PAC), after different cardiac procedures over the first 7 days. Finally, we investigated, in uni- and multivariable models, the association of the REE with mortality at 30 days, 1 year and 6 years after cardiac surgery.

Ethical Approval

The study was conducted in accordance with the Declaration of Helsinki (as revised in 2013) and was approved by the Ethics Committee of the Medical University of Vienna (EK1099/2022). The data collection was performed in accordance with the approved ethical guidelines.

Study Design and Patients

This work was designed as a single-center cohort study. We enrolled 566 consecutive patients from 2012-2015 who underwent elective or emergency heart surgery with a PAC for hemodynamic monitoring. The data on survival time were determined in April 2022; the longest follow-up time, either observed or censored, was 6 years. The decision on PAC insertion was based on institutional practice and individual physician risk evaluation. We included all PAC measurements performed within the first 7 days after surgery and excluded all patients younger than 18 years as well as patients requiring extracorporeal membrane oxygenation or right heart assist devices. The PACs were inserted using the Seldinger technique, usually through a right internal jugular approach. Correct positioning, with the proximal port located in the superior vena cava (SVC) and the distal port in the pulmonary artery (PA), was confirmed via X-ray. The PAC measurements were recorded every 10 min. In-hospital mortality was used to divide the patients into survivors and non-survivors. The calculations are depicted in Table 1.
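Table 1 itself is not reproduced in this text. As a hedged sketch, the snippet below implements the standard Fick-derived formulas on which PAC-based DO2, VO2, O2ER and REE calculations are conventionally based; the exact equations and the caloric conversion factor used by the authors may differ, so treat this as an illustration rather than the paper's Table 1.

```python
# Standard Fick-based oxygen-transport formulas (an assumption about
# what Table 1 contains; the constants are textbook values).
HB_O2_CAPACITY = 1.34   # mL O2 per g hemoglobin
KCAL_PER_L_O2 = 4.86    # approximate caloric equivalent of oxygen

def o2_content(hb_g_dl: float, sat_frac: float) -> float:
    """O2 content in mL O2 / dL blood (dissolved O2 neglected)."""
    return HB_O2_CAPACITY * hb_g_dl * sat_frac

def oxygen_profile(co_l_min: float, hb: float, sao2: float, svo2: float):
    cao2 = o2_content(hb, sao2)          # arterial O2 content
    cvo2 = o2_content(hb, svo2)          # mixed-venous O2 content
    do2 = co_l_min * cao2 * 10           # delivery, mL O2/min
    vo2 = co_l_min * (cao2 - cvo2) * 10  # consumption (reverse Fick)
    o2er = vo2 / do2                     # extraction ratio
    # REE in kcal/day from VO2 in mL/min: (L O2/day) * (kcal/L O2)
    ree = vo2 * 1.44 * KCAL_PER_L_O2
    return do2, vo2, o2er, ree

# Example: CO 5 L/min, Hb 10 g/dL, SaO2 98%, SvO2 70%
do2, vo2, o2er, ree = oxygen_profile(5.0, 10.0, 0.98, 0.70)
print(f"DO2={do2:.0f} mL/min, VO2={vo2:.0f} mL/min, "
      f"O2ER={o2er:.2f}, REE={ree:.0f} kcal/day")
```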
The categorical variables were shown as a frequency (percentage). Variables such as CO and SvO2 were determined using the PAC, and VO2, REE, O2ER and DO2 were calculated as shown in Table 1. A Student's t-test was applied for the parametric data, the Mann-Whitney U test was used for unpaired non-normally distributed data and repeated measures ANOVA testing was performed for the multiple comparison analysis. All variables comprising CO, SvO2, VO2, REE, O2ER and DO2 were averaged for each of the first 7 days and presented as a median and interquartile range (25% percentile and 75% percentile, respectively) for in-hospital survivors and non-survivors. In addition, we calculated the percent increase in the measured REE via the PAC compared with the predicted REE calculated via the formula for survivors and non-survivors, respectively. Furthermore, the REE was averaged over the first 7 days. Patients were divided into two groups depending on their REE being either above or below the median REE of 1640 kcal/d. A survival analysis was performed using the Kaplan-Meier analysis and log-rank test. Uni- and multivariate Cox regression analyses were calculated for the 30-day, 1-year and 6-year mortality. The data were shown as a hazard ratio (HR) and 95% confidence interval. All tests were two-sided and p-values < 0.05 were considered to be statistically significant. The statistical analyses were performed using R 3.3.1 and SPSS (version 28.0; IBM SPSS Inc., Chicago, IL, USA). The figures were plotted using GraphPad Prism (version 8.0; GraphPad Software Inc., San Diego, CA, USA).
Data Availability
All data generated or analyzed during this study are included in the published article and its Supplementary Files.
Results
In this retrospective cohort study, we analyzed 566 ICU patients over the first 7 days post cardiac surgery. The demographic and clinical data are shown in Table 2. Of all patients, 27% underwent single-valve procedures, 12% received CABG surgery, 17% underwent CABG and a valve procedure, 3% obtained a vascular graft, 16% were LVAD and 19% were HTX patients; 5% received other procedures.
Absent Metabolic Transition in Non-Survivors from the Early 'Ebb' Phase to the Late 'Flow' Phase
In survivors, compared with non-survivors, the measured energy balance significantly increased from day 0 until day 4, but not on days 5, 6 and 7, as shown in Figure 1A. In survivors, we found a negative measured energy balance of −3% compared with the predicted REE on day 0, which subsequently rose significantly until day 4 (day 0: −3% (−18, 14) compared with day 1: 5% (−9, 21), day 2: 4% (−10, 21), day 3: 4% (−10, 22) and day 4: 5% (−10, 23); p < 0.001), as demonstrated in Figure 1B. In non-survivors, there was a negative measured energy balance compared with the predicted REE from day 0 until day 7, as shown in Figure 1C.
Figure 1. Missing transition from the early to the late period in non-survivors. In patients who survived, the REE significantly increased during the transition from the early to the late period. In contrast, the REE did not increase during the transition from the early to the late period in non-survivors, as depicted in (A). In survivors, there was a statistically significant percent increase in the REE from day 0 to day 1, day 2, day 3 and day 4, as depicted in (B). In contrast, in non-survivors, the REE did not increase during the transition from the early to the late period, as shown in (C). Percent increase = (REEmeas − REEpred)/REEpred × 100; REEpred = 20 kcal/kg/day; meas, measured; pred, predicted; REE, resting energy expenditure; ** p < 0.001, *** p < 0.0001, **** p < 0.00001; # p < 0.05.
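For orientation, the derived quantities above can be reproduced from the PAC variables with the standard reverse-Fick relations. Table 1 is not reproduced in this excerpt, so the exact equations used by the authors are an assumption here; the sketch below, with invented example inputs, is only a minimal illustration of how DO2, VO2, O2ER and the REE are typically obtained, together with the percent-increase formula from the Figure 1 caption.

    # Minimal sketch assuming the standard reverse-Fick relations; Table 1 is
    # not reproduced in this excerpt, so these may not be the authors' formulas.
    def fick_metabolics(co_l_min, hb_g_dl, sao2, svo2, pao2=90.0, pvo2=40.0):
        cao2 = 1.34 * hb_g_dl * sao2 + 0.0031 * pao2  # arterial O2 content, mL/dL
        cvo2 = 1.34 * hb_g_dl * svo2 + 0.0031 * pvo2  # mixed venous O2 content, mL/dL
        do2 = co_l_min * cao2 * 10.0                  # O2 delivery, mL/min
        vo2 = co_l_min * (cao2 - cvo2) * 10.0         # O2 consumption, mL/min
        o2er = vo2 / do2                              # O2 extraction ratio
        ree = vo2 * 1.44 * 4.86                       # kcal/day, ~4.86 kcal per L O2
        return do2, vo2, o2er, ree

    def ree_percent_increase(ree_measured_kcal_d, weight_kg):
        ree_pred = 20.0 * weight_kg                   # REE_pred = 20 kcal/kg/day
        return (ree_measured_kcal_d - ree_pred) / ree_pred * 100.0

    # Invented example: CO 5 L/min, Hb 10 g/dL, SaO2 98%, SvO2 70%, 75 kg patient
    do2, vo2, o2er, ree = fick_metabolics(5.0, 10.0, 0.98, 0.70)
    print(f"DO2 {do2:.0f} mL/min, VO2 {vo2:.0f} mL/min, O2ER {o2er:.2f}")
    print(f"REE {ree:.0f} kcal/d ({ree_percent_increase(ree, 75.0):+.0f}% vs predicted)")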
The REE was significantly reduced on days 0, 1, 2, 3 and 4, but not on days 5, 6 and 7 post cardiac surgery in non-survivors relative to survivors, as depicted in Figure 2A. Non-survivors had solely significantly lower O2ER values on post-operative days 2, 3, 4 and 6 compared with survivors after heart surgery, as demonstrated in Figure 2B. DO2 was significantly reduced in non-survivors compared with survivors during the first four post-operative days, as shown in Figure 2C. VO2 significantly decreased within the first four days after surgery in non-survivors compared with survivors, as pictured in Figure 2D. CCO was significantly lower in non-survivors in contrast to survivors within the first five post-operative days, as illustrated in Figure 2E. There was no difference in SvO2 on day 0 after surgery, but from day 1 until day 7 SvO2 significantly rose in non-survivors compared with survivors, as depicted in Figure 2F. Details on the statistically significant differences in the REE, O2ER, DO2, VO2, CCO and SvO2 between survivors and non-survivors are shown in Supplementary Table S1.
Increased 30-Day, 1-Year and 6-Year Mortality in Patients with a Reduced REE
In the Kaplan-Meier survival analysis for 30 days, 1 year and 6 years, we found a significantly reduced overall survival rate for patients with a reduced REE, which was below the median of 1640 kcal/day (p = 0.009, p < 0.0001 and p = 0.012, respectively; Figure 3). Furthermore, in the non-parametric group testing, a low REE was not associated with a low BMI, size and body weight (p = 0.190, p = 0.374 and p = 0.104, respectively).
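As a companion to the survival results above, the following sketch re-implements the described analysis chain (median REE split, log-rank comparisons at the three horizons and a multivariable Cox model) in Python using the lifelines package. The study itself used R, SPSS and GraphPad Prism, so this is only an illustrative re-implementation, and the data frame columns below are hypothetical.

    # Hypothetical per-patient table: survival_days, died (1 = death), ree_kcal_d
    # and the Table 3 covariates; lifelines stands in for the R/SPSS analyses.
    import pandas as pd
    from lifelines import CoxPHFitter
    from lifelines.statistics import logrank_test

    df = pd.read_csv("pac_cohort.csv")                   # hypothetical file
    low = df["ree_kcal_d"] <= df["ree_kcal_d"].median()  # 1640 kcal/d in the study

    for horizon in (30, 365, 6 * 365):                   # 30 days, 1 year, 6 years
        t = df["survival_days"].clip(upper=horizon)      # administrative censoring
        e = (df["died"] == 1) & (df["survival_days"] <= horizon)
        res = logrank_test(t[low], t[~low],
                           event_observed_A=e[low], event_observed_B=e[~low])
        print(f"log-rank p at {horizon} days: {res.p_value:.4f}")

    # Multivariable Cox model reporting hazard ratios with 95% CIs, as in Table 3
    df["low_ree"] = low.astype(int)
    cph = CoxPHFitter()
    cph.fit(df[["survival_days", "died", "low_ree", "age", "bmi", "ecc_gt_170min"]],
            duration_col="survival_days", event_col="died")
    print(cph.summary[["exp(coef)", "exp(coef) lower 95%",
                       "exp(coef) upper 95%", "p"]])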
Univariate and Multivariate Cox Regression Analyses for 30 Days, 1 Year and 6 Years after Cardiac Surgery
A REE ≤ 1640 kcal/d was associated with increased mortality in the univariate model and remained an independent factor in the multivariate analysis for 30 days, 1 year and 6 years after cardiac surgery. Age did not impact 30-day mortality, but was associated with increased mortality at 1 year and 6 years after surgery in the uni- and multivariate analyses. The BMI did not influence the outcome 30 days and 1 year post-surgery; however, after 6 years a BMI between 25 and 30 was associated with significantly decreased mortality compared with patients with a BMI < 25 in the univariate, but not in the multivariate, Cox regression analysis. An increased ECC time (>170 min) was associated with a higher mortality rate after 30 days, 1 year and 6 years post-surgery in the uni- and multivariate Cox regression analyses. One and six years following surgery, patients with a minimum Hb < 8 mg/dL had a significantly increased mortality in the univariate, but not in the multivariate, analysis. In the univariate analysis for 1-year mortality, the hazard ratio of maximum lactate levels > 3.6 mmol/L was significantly higher. Furthermore, patients receiving > 3 PRBCs had a significantly increased hazard ratio for 1-year and 6-year mortality in the uni- and multivariate analyses (Table 3).
Discussion
In this cohort study, we observed the following. (1) There was an inadequate metabolic response to stress in patients with early adverse outcomes. The metabolic response of non-survivors failed to transition from the early, previously called 'ebb' phase to the late, previously called 'flow' phase. (2) DO2 and O2ER levels were decreased in non-survivors. During the first three days post cardiac surgery, impaired DO2 was more pronounced, and in the following four days O2ER was markedly lower in non-survivors, suggesting a switch from a predominantly limited oxygen supply to permanent mitochondrial dysfunction over time. (3) Our findings indicated that a reduced REE was associated with increased 30-day, 1-year and 6-year mortality in patients after cardiac surgery. As already reported 50 years ago by Sir Cuthbertson, we could reproduce a short 'ebb' phase in survivors immediately after major cardiac surgery with a negative measured energy balance compared with the calculated REE, followed by a long-lasting 'flow' phase characterized by increased metabolic rates compared with the predicted REE. In contrast, in non-survivors the negative measured energy balance was maintained over the entire observational period. In the literature, a modest 7% increase in energy metabolism after uncomplicated abdominal surgery compared with the Harris-Benedict equation has been described [16]. In patients with acute pancreatitis, even a hypermetabolic state with a raised metabolic rate of up to 20-30% has been observed [17].
Moreover, a hyperdynamic cardiovascular response with an increased REE was reported in patients with uncomplicated sepsis, sepsis syndrome and septic shock. However, in line with our findings, the percent increase in the REE declined according to the sepsis severity (mean REE +55% for uncomplicated sepsis, +24% for sepsis syndrome and +2% for septic shock) [13]. We observed a hypermetabolic state only in survivors. Non-survivors remained in a hypometabolic status throughout the observation period. In survivors, we also found a progressive increase in O2ER and a concomitantly decreasing SvO2, which may have reflected a higher level of activity and a higher cellular energy demand, suggesting an improved metabolic state. In non-survivors, the significantly reduced DO2 compared with survivors early after ICU admission could have resulted from a decreased intravascular volume, loss of vasomotor tone and myocardial depression, as has already been reported in sepsis patients [18][19][20]. The absent O2ER increase over time may have been an indication of impaired VO2 secondary to microcirculatory defects or deteriorated cellular respiration [3]. In line with our findings, a study of sepsis patients found an association between mortality and decreased central venous oxygen saturation (ScvO2) during the very first hours after ICU admission [21], whilst another study reported that maximum ScvO2 values during the ICU stay were associated with a higher mortality [3]. In conjunction with the literature, our results suggested that, especially during the first hours of ICU admission, an impaired oxygen supply seems to be decisive for adverse outcomes, and maintaining mitochondrial function is key to avoiding them, taking into account that the Kaplan-Meier curves separated in the early phase after surgery and proceeded in parallel thereafter. Energy requirements vary greatly among critically ill patients, especially in those with adverse outcomes [22,23]. Generally, a measured REE defines the energy target for the prescription of nutrition [12]. However, during the early phase of an acute illness, endogenous energy production covers most of the energy needs [24]. In this phase, exogenous energy supplementation may easily exceed the energy requirements, especially when full clinical nutrition is provided [25]. In our study, the energy requirements were reduced in non-survivors and remained low over the total observation period of seven days. However, substrate deficits due to reduced endogenous energy production or an insufficient substrate administration seem unlikely, because O2ER was lower and, therefore, not limited by O2 availability; nearly all patients required insulin to maintain normoglycemia. Our findings, therefore, highlight the importance of measuring the REE to avoid inadequate nutritional therapy, especially in vulnerable patients. In this study, we demonstrated that a reduced REE was an independent factor for 30-day, 1-year and even 6-year mortality. Age became a more important factor for survival in the multivariate analysis only after 1 year and 6 years. A duration of ECC > 170 min was an independent factor 30 days after surgery, comprising patients with more complex surgeries such as patients receiving both a CABG and valve procedure, patients undergoing heart transplantation and patients receiving a vascular graft. Missing ECC durations were more common in patients receiving LVAD procedures.
According to our findings, the accurate determination of energy requirements in the early vulnerable phase seems to be highly relevant to optimize the metabolic demand and to avoid energy imbalances. Underfeeding increases the hospital length of stay, infection rates and organ failure, as well as prolonging mechanical ventilation and even increasing mortality. In contrast, overfeeding has been associated with hyperglycemia, hypertriglyceridemia, hepatic steatosis, azotemia, hypercapnia and, again, increased mortality [22,23]. As a consequence, it is important to monitor the metabolic response of patients over time, either via repeated IC measurements, via the PAC or at least via the ventilator [12,26]. Our study has several limitations due to the retrospective study design. Additionally, we assessed the REE via the PAC and did not measure the REE via indirect calorimetry (IC), which is more accurate [26]. However, assessing the REE via the PAC has the advantage of being able to measure VO2 continuously over a prolonged period of time. Another factor introducing selection bias was that the decision to insert a PAC was based on institutional practice and the individual risk assessments of the treating physicians. Thus, only patients who were more likely to be hemodynamically unstable with a worse prognosis received a PAC and were included in this study. Hence, the mortality rate of this study cohort was higher than the mortality rate of the corresponding ICU. Further, there is no medically determined cut-off value to define a low REE; therefore, we used the median as a cut-off to divide the patients into two groups. Even though the REE is generally determined without including the body weight, we were able to rule out that a low REE was simply associated with a low BMI, size and body weight in the non-parametric group testing. Furthermore, neither BMI nor gender was a confounding factor for adverse outcomes in the uni- and multivariate Cox regression models.
Conclusions
Non-survivors seemed to be unable to metabolically adapt from the early, previously called 'ebb' phase to the late period, the previously called 'flow' phase, subsequently remaining in a catabolic state. In those patients, our findings indicated an impaired oxygen supply early after ICU admission and persistent mitochondrial dysfunction over time. A lower REE was associated with adverse short- and long-term outcomes, emphasizing the importance of monitoring the REE in critically ill patients either via IC, the PAC or the ventilator.
Institutional Review Board Statement: The study was conducted in accordance with the Declaration of Helsinki (as revised in 2013) and was approved by the Ethics Committee of the Medical University of Vienna (EK1099/2022). The data collection was performed in accordance with the approved ethical guidelines.
Data Availability Statement: All data generated or analyzed during this study are included in this published article.
Conflicts of Interest: The authors declare no conflict of interest.
Chemical Composition, Fatty Acid and Mineral Content of Food-Grade White, Red and Black Sorghum Varieties Grown in the Mediterranean Environment
Grain sorghum (Sorghum bicolor) is a gluten-free cereal grown around the world and is a food staple in semi-arid and subtropical regions. Sorghum is a diverse crop with a range of pericarp colour including white, various shades of red, and black, all of which show health-promoting properties as they are rich sources of antioxidants such as polyphenols and carotenoids, as well as micro- and macro-nutrients. This work examined the grain composition of three sorghum varieties possessing a range of pericarp colours (white, red, and black) grown in the Mediterranean region. To determine the nutritional quality independent of the contributions of phenolics, mineral and fatty acid content and composition were measured. Minor differences in both protein and carbohydrate were observed among varieties, and a higher fibre content was found in both the red and black varieties. A higher amount of total saturated fats was found in the white variety, while the black variety had a lower amount of total unsaturated and polyunsaturated fats than either the white or red varieties. Oleic, linoleic, and palmitic were the primary fatty acids in all three analysed sorghum varieties. Significant differences in mineral content were found among the samples, with a greater amount of Mg, K, Al, Mn, Fe, Ni, Zn, Pb and U in both the red and black than in the white sorghum variety. The results show that sorghum whole grain flour made from grain with varying pericarp colours has unique nutritional properties.
Introduction
Sorghum (Sorghum bicolor (L.) Moench) is a widely consumed cereal staple in regions of Africa and Asia [1][2][3][4][5][6][7][8] and is the fifth leading cereal crop in the world, after wheat, maize, rice, and barley [9]. The United States is the number one producer and exporter of sorghum, generating roughly 20% of total production and nearly 80% of total sorghum exports from 2001-2003 [10]. Where sorghum has traditionally been a basic food staple, it has also been used in several food products and in some cases health food items [8,11,12]. Sorghum does not contain peptide sequences that are toxic to persons with celiac disease, as are found in wheat, barley, and rye, and is therefore a safe food for celiac patients [13][14][15]. With increasing interest related to the unique properties of sorghum, its value as a food in helping to improve human health and to prevent disease has generated increasing research [1,2,4,11,[15][16][17]. Specifically, increased research attention has focussed on the diverse content of phenolic compounds present in sorghum, which is a unique attribute among cereal grains [2]. These phenolic compounds have been shown to have various properties, e.g., inhibiting cancer cell growth [17], and while more research is needed on the health benefits of sorghum, consumption of whole grain sorghum may have the potential to help reduce health problems such as heart disease, diabetes, and obesity [16]. A current trend worldwide is a considerable preference for foods that have additional health benefits beyond basic nutrition. Research has continued to demonstrate that sorghum whole grains have numerous human health benefits, especially as related to the antioxidant activity of phenolic compounds present in the outer layers of the grain [18,19].
The free radical scavenging activity of sorghum phenolic compounds has been related to beneficial health attributes, including anti-microbial properties [20], reduced oxidative stress [18], anti-inflammatory properties [21] and anti-cancer activity [17,[22][23][24][25], thereby adding value to sorghum grains and its increasing human consumption [26]. The beneficial activities of sorghum for human health have been attributed mainly to the phenolic compounds found in sorghum grain, which are well known to vary with pericarp colour. While much of the research related to sorghum phenolic compounds and potential health benefits has been conducted using whole sorghum bran, or crude extracts from sorghum grain/bran, e.g., [6,17,18,20,21], numerous types of polyphenols have been identified in sorghum, with examples including flavonoids, hydroxybenzoic acids and hydroxycinnamic acids, with specific levels varying according to both genetics and environment [24,27]. Several varieties of sorghum exist with a wide range of pericarp colour, and these can be classified based on the pigmentation of the pericarp [28]. In particular, research has shown that the total phenolic content and antioxidant activity in sorghum are correlated with the pericarp thickness and colour, and sorghum with a darker and thicker pericarp had greater levels of phenolic compounds and increased antioxidant activity [28,29]. Highly pigmented sorghum may therefore be desirable for use in human foods with improved human health attributes. Substantial research has been conducted with the aim of developing the cultivation of sorghum lines in the Mediterranean area for use in the production of human food products [8,[30][31][32]. With that overall goal, the focus of this research was to compare the nutritional composition of sorghum varieties that differed in pericarp colour to (1) determine how nutritional properties other than phenolic content varied and (2) identify varieties with improved nutritional characteristics in addition to phenolic content and thus provide greater potential health value for consumers. Additionally, this research adds to the body of knowledge on sorghum grain nutrient composition, especially for sorghum grown outside the major sorghum producing regions.
Sorghum Cultivars
The sorghum varieties along with the seed sources used in this research are shown in Table 1. In 2019, sorghum production was conducted in San Bartolomeo in Galdo (BN) in the Fortore area, which is in the Campania Region of southern Italy (41°25′ N, 15°01′ E and 597 m a.s.l.). The soil in this region is predominantly clay loam, deep and with a good water holding capacity. The milling was carried out starting 1 month after the harvest of the sorghum grain, which was stored in a dry environment at 16 °C.
Flour Sample Preparation
Flour samples were produced from approximately 1 kg of grain samples that were milled using a two-roll mill (Chopin Moulin CD1; Chopin S.A., Villeneuve la Garenne, France) and subsequently sieved using a planetary sieve (Buhler AG, Uzwil, Switzerland) with a screen size of 120 µm.
Moisture
Moisture was determined according to the method described by Pontieri et al. [31]. Briefly, a ceramic capsule was accurately weighed after complete desiccation at 100 °C under vacuum (25 mm Hg) in an oven (ISCO mod. NSV9035) and chilled to room temperature in a silica gel dryer. Then, an accurately weighed aliquot of flour sample (about 2 g) was placed in the desiccated ceramic capsule.
The humidity was removed from the sample by keeping it under the same temperature and pressure conditions for about five hours, until a constant weight was achieved. The moisture content was estimated from the weight loss.
Ash
To measure total ash, sorghum grain samples (ca. 3 g each) were weighed into broad, shallow ashing dishes and incinerated at ~550 °C, after which the dishes were placed in a desiccator to cool and then weighed after coming to room temperature [33].
Protein Content
Nitrogen content was measured using the Kjeldahl method [34], with total protein content determined with a conversion factor of 6.25. Sorghum grain samples (2 g each) were analysed with a Mineral Six Digester and an Auto Disteam semi-automatic distilling unit (International PBI, Milan, Italy).
Total Lipid Content
Total lipid content was measured as described by Pontieri et al. [30]. Briefly, approximately 3 g of grain was ground with liquid nitrogen using a mortar and pestle and lyophilized with the FTS-System Flex-Dry™ instrument. The ground whole meal was then extracted using a Soxhlet apparatus with chloroform (CHCl3) for 4 h. Extracts were then dried with a rotary evaporator to obtain the crude extracts, which were subsequently weighed to determine the amount of extracted fat.
Gas Chromatography of Fatty Acids
Esterification of fatty acids from the crude extracts and the subsequent gas chromatographic analysis of the fatty acid methyl esters were carried out as described previously [30,31]. Briefly, solid sorghum fat was melted in an oven at 50 °C to determine its composition. A drop of fat was transferred into a 1.5 mL vial. One mL of hexane and 100 µL of 2 N methanolic KOH solution were added. The vial was vortexed for 5 min and then left under static conditions for 5 min to enable a complete stratification of the hexane portion, which contained the methyl esters of the fatty acids. Chromatographic separation was achieved using a GC-2010 (Shimadzu, Kyoto, Japan) equipped with a DB-Wax column (Phenomenex, Torrance, CA, USA), 30 m length, 0.25 mm internal diameter, 0.25 µm film thickness. The GC conditions were as follows: carrier gas, He; pressure, 75 kPa; injector temperature, 220 °C; FID temperature, 250 °C; and oven program, 170 °C for 8 min, 2 °C/min to 185 °C for 10 min, 1 °C/min to 190 °C for 12 min, 10 °C/min to 240 °C for 5 min.
Carbohydrates
Carbohydrate content was determined by subtraction, as the amount of material left after accounting for moisture, ash, protein, and fat content [35].
Fibre
Fibre was determined according to the AOAC [36] method. Briefly, fibre was determined as the loss, after incineration, of the sample digested in an acidic environment by H2SO4 (0.255 N), followed by an alkaline digestion with NaOH (0.223 N). Digestion was performed with an automatic digestor (Velp Scientific mod. FIWE3, Usmate Velate, Monza e Brianza, Italy).
Total Minerals Determination
The determination of the mineral elements of interest was performed according to Tenore et al. [37] as described by Pontieri et al. [38]. Briefly, for each sample, the ash content was solubilized using a 5% HNO3 (Ultrapure, Sigma Aldrich, St. Louis, MO, USA) solution prepared with ultrapure water (18 MΩ, produced using a Millipore Direct-Q UV3 water purifier). The solution was filtered using ash-free regenerated cellulose filters. All chemicals were of the highest commercially available purity grade. No glass (flasks, pipettes, etc.) was used for any operation.
Before use, all plastic containers were cleaned using 10% ultra-pure grade HNO3 for at least 24 h and then rinsed copiously with ultra-pure water. Element quantification was performed using quadrupole inductively coupled plasma mass spectrometry (ICP-QMS; replicates per sample: 5). High-purity He (99.9999% He, SALDOGAS Srl, Naples, Italy) and H2 (99.9999% H2, produced by the DBS H2 generator PGH2-300) were used in order to minimize the potential problems caused by unidentified reactive contaminant species in the cell. Calibration solutions were prepared from multi-elemental standard stock solutions of 20.00 mg/L. Calibration curves were obtained using 9 calibration solutions. Reagent blanks containing ultra-pure water were additionally analysed to control the purity of the reagents and laboratory equipment. Standards and blanks were subjected to the same treatment as the samples. A mixed internal standard solution (6Li, 45Sc, 72Ge, 89Y, 103Rh, 159Tb, 165Ho, 209Bi) of 10 µg/L was aspirated on-line through a T-union together with the sample and standard solutions.
ELISA Assay
The RIDASCREEN® Gliadin standard test kit (Art. No. R7001; R-Biopharm AG), a sandwich ELISA-based method, was used to determine the presence of protein sequences reactive to gliadins in the sorghum flour samples [39], following the manufacturer's instructions. Commercial gliadin standard 16-18% N (Sigma Aldrich, Milan, Italy) was used as a control.
Statistical Analysis
With the exception of the total lipids analysis, which was performed in triplicate, all analyses were performed in quintuplicate (n = 5 technical replicates), and the results are presented as the mean ± SD. Data distributions were evaluated by means of the Shapiro-Wilk test. As the data were not normally distributed, differences in means were investigated using the non-parametric Mann-Whitney U test. Analysis of variance (ANOVA) was used to assess whether the different values were statistically significant or not. The Tukey post-hoc test was used to identify which samples were different. False discovery rate (FDR)-corrected p-values were used to manage the multiple comparisons.
Nutrient Composition
The chemical composition of the white, red, and black sorghum varieties developed in the USA but produced in Southern Italy is shown in Table 2. The table also reports the recommended daily allowance (RDA) according to the European legislation [40]. Minor variations in both protein and carbohydrate were observed among the three coloured sorghum varieties analysed, while a higher fibre content was found in both the red and black varieties (p < 0.05).
Fatty Acid Composition of Total Lipids
The percentages of total fatty acids, also aggregated as saturated, mono-unsaturated and polyunsaturated fats, of the white, red and black sorghum varieties are shown in Table 3. Greater levels of total saturated fats (p < 0.05) were found in the white variety than in the red and black varieties, while the black variety had a lesser amount of total unsaturated and polyunsaturated fats (p < 0.05) than both the white and red varieties. Oleic, linoleic, and palmitic were the primary fatty acids in all three of the sorghum varieties analysed, which is in agreement with previously reported results [31,41,42]. The percentage of palmitic acid in the black sorghum variety was slightly lower than in both the white and red varieties, while the percentage of linoleic acid was slightly higher in the black variety than in the white and red varieties. Finally, the percentage of oleic acid was comparable between the three varieties of sorghum.
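The group comparisons reported above (e.g., the p < 0.05 differences in fibre and in the fat fractions) follow the pipeline described in the Statistical Analysis section. The sketch below runs that sequence on invented quintuplicate data; scipy and statsmodels (tukey_hsd requires SciPy >= 1.8) are used here as stand-ins for whatever software the authors employed.

    # Invented n = 5 replicate values per variety (e.g. fibre %); not study data.
    import numpy as np
    from scipy import stats
    from statsmodels.stats.multitest import multipletests

    white = np.array([6.1, 6.3, 6.0, 6.2, 6.1])
    red = np.array([7.4, 7.6, 7.3, 7.5, 7.4])
    black = np.array([7.8, 8.0, 7.7, 7.9, 7.8])
    pairs = {"white-red": (white, red), "white-black": (white, black),
             "red-black": (red, black)}

    # Shapiro-Wilk normality check decides between ANOVA and Mann-Whitney U
    if all(stats.shapiro(g).pvalue > 0.05 for g in (white, red, black)):
        print("ANOVA p:", stats.f_oneway(white, red, black).pvalue)
        posthoc = stats.tukey_hsd(white, red, black)      # Tukey post-hoc test
        pvals = posthoc.pvalue[np.triu_indices(3, k=1)]
    else:
        pvals = [stats.mannwhitneyu(a, b).pvalue for a, b in pairs.values()]

    # Benjamini-Hochberg FDR correction for the multiple pairwise comparisons
    reject, p_adj, *_ = multipletests(pvals, alpha=0.05, method="fdr_bh")
    for name, p in zip(pairs, p_adj):
        print(f"{name}: adjusted p = {p:.4f}")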
Mineral Content
Levels of minerals in the three sorghum varieties are reported in Table 4. Statistical analysis was not performed on the mineral content due to the number of minerals tested. However, the levels of macro-elements followed the sequence K > Mg > Ca > Na in all three varieties analysed. Micro-element content followed the sequence Fe > Zn > Al > Mn > Cr > Ni > Cu > Ba > Mo > Pb > Co > Sn > Ag > As > Se > V > Be > Tl in the white variety, while it followed the sequence Fe > Zn > Al > Mn > Ni > Cu > Cr > Ba > Mo > Pb > Co > Sn > Ag > As > Se > V > Be > Tl in both the red and black varieties analysed. The white variety had a lower element content than both the red and black varieties, with K, Fe and Sb being the most abundant macro-element, micro-element, and trace element in all analysed varieties, except that Hg was the most abundant trace element in the white variety. The potassium and sodium content of the samples varied from 26.89 to 35.66 g kg−1 and 0.42 to 0.54 g kg−1, respectively, with the potassium content of the samples being about 64-fold to 66-fold higher than that of sodium. Therefore, the K:Na ratio was higher than the ratio of 5.0 recommended for the human diet [43]. The fact that the sorghum hybrids all contained a high K:Na ratio suggests that sorghum could be used to modulate sodium-related health problems. In fact, diets with a higher K:Na ratio are recommended for certain health conditions such as hypertension and stroke [44].
Immunochemical Evidence for the Absence of Gluten in Coloured Sorghum Varieties
Immunochemical measurement of the gliadin concentration in the sorghum flour from all samples tested showed that gluten levels in all sorghum cultivars were less than 5 ppm (the detectable limit) (Table 5) and are at levels substantially below the 20 mg/kg (ppm) threshold recommended as safe for celiac patients [39].
Discussion
As it has been reported that the pericarp colour of sorghum grain may vary due to both genotype and environmental factors [42,45,46], in this work we compared the chemical composition, the fatty acid content and the mineral content of three coloured varieties of sorghum grown in the Mediterranean environment of Southern Italy. The search for varieties of sorghum developed in the USA that retain high functional and nutraceutical properties when grown in the Mediterranean area will stimulate the use of sorghum as a health food in European countries; it may also encourage European farmers to produce sorghum, as it is a drought-tolerant plant very well suited to environmental changes [8]. The composition profiles of the white, red, and black food-grade sorghum varieties developed in the USA, and grown in Southern Italy, were overall similar, with slight differences in both protein and carbohydrate percentages. The higher fibre content found in the red and black varieties suggests that these varieties may have health benefits in addition to those conferred by phenolic compounds alone. The black sorghum also had slightly higher total protein levels and less total fat, which could be minor benefits for the use of black sorghum flour in human food products.
The quantities of the total saturated and mono-unsaturated fats of both the white and red varieties were similar and higher than those of the black variety, while the red and black varieties had similar quantities of total polyunsaturated fats, but lower than that of the white variety. Thus, the black variety analysed in this research may have a slight nutritional advantage related to the consumption of saturated fat relative to the other two varieties. Oleic, linoleic, and palmitic were the primary fatty acids in all the sorghum varieties. Unsaturated fatty acids are important for human nutrition, as they are major components of biological membranes and play a role in modulating the fluidity of membranes. Additionally, unsaturated fatty acids do not have cholesterogenic properties (unlike saturated fatty acids) and reduce the risk of thrombosis due to the anti-aggregating activity of blood lipoprotein particles. Because of these features, unsaturated fatty acids are strongly recommended to lower the risk of atherosclerosis [4,11,16]. The sorghum samples analysed in this work all contained some levels of unsaturated fatty acids and could supplement other plant-based sources of unsaturated fats in human diets. The content of each macro-element followed the sequence K > Mg > Ca > Na in all three coloured sorghum varieties analysed, with the primary mineral being K, followed by Mg, which is consistent with the literature [38,[47][48][49]. Furthermore, the concentrations of the above four macro-elements were higher in the red and black sorghum varieties than in the white sorghum variety, confirming previous works whose results indicate that the mineral content of sorghum is affected by both genetic and environmental factors [38]. With regard to macro-element content, this research reported a K:Na ratio greater than what is recommended in the human diet for all sorghum varieties analysed [43]. An improved K:Na ratio may improve bone health, lessen muscle loss, and moderate other chronic diseases such as hypertension and stroke [44]. In addition to the above, the magnesium content in the sorghum varieties was greater than typically found in corn (on average, 0.47 g kg−1) and wheat flour (on average, 0.25 g kg−1) [50]. Because each of the three types of coloured sorghum varieties analysed has a high magnesium content, these sorghum varieties may be good sources of magnesium. Magnesium is an important macro-element because it is required for the function of many enzyme systems and therefore for human metabolism [50]. The content of micro-elements followed the sequence Fe > Zn > Al > Mn > Cr > Ni > Cu > Ba > Mo > Pb > Co > Sn > Ag > As > Se > V > Be > Tl in the white variety analysed, while it followed the sequence Fe > Zn > Al > Mn > Ni > Cu > Cr > Ba > Mo > Pb > Co > Sn > Ag > As > Se > V > Be > Tl in both the red and black varieties analysed. The differences in the concentrations of some micro-elements between the white sorghum variety and both the red and black sorghum varieties reported above could be affected by the sorghum variety, soil conditions and the state of plant maturity at harvest [38]. The most abundant micro-element was Fe in all three sorghum varieties analysed, confirming the data reported in the literature [38,46,49]. Fe is an essential micro-element in human nutrition, and Fe-deficiency is a major public health threat worldwide [6].
The expanding production of sorghum for human use in the US [11] and in Mediterranean countries [8] suggests the use of this cereal for healthy nutrition. Thus, identifying sorghum varieties with the highest levels of Fe is beneficial when selecting sorghum varieties for production in Europe. The concentrations of trace elements followed the sequence Hg > Sb > Cd > U in the white variety, while they followed the sequence Sb > Hg > Cd > U in both the red and black varieties. Importantly, with regard to the trace elements Sb, Hg, Cd and U, their concentrations in the three sorghum hybrids analysed in this study did not exceed the maximum permitted by Regulation (CE) n. 41/2009. Regarding the micro-element content, the results reported in the present study show a high content of both Fe and Zn in all sorghums. The latter two elements are essential micro-elements in human nutrition, and Fe and Zn deficiencies are worldwide public health issues [6]. Furthermore, in this work, the sorghum varieties developed in the USA and grown in the Mediterranean environment were also analysed immunochemically to measure the concentration of gliadin, to verify previous reports on the safety of sorghum for people with celiac disease. As shown in Table 5, the results indicated that the gluten levels in all sorghum cultivars were less than 5 mg/kg (below detectable limits), which is below the 20 mg/kg level proposed as a safe level for celiac patients [39] and agrees with previous results [13][14][15].
Conclusions
Consumers worldwide have increasingly expressed interest in both functional and nutraceutical foods due to the additional health benefits provided through their consumption. Substantial research has been focused on identifying the mechanisms associated with the disease prevention or therapeutic potential of such foods. One example of a functional and nutraceutical food that has received increased research attention is sorghum grain. It is well known that sorghum is a genetically diverse crop; that diversity extends to the presence of phenolic content and composition, and results in phenotypic expression in sorghum grain with a range of pericarp colours. Sorghum has been studied for several potential human health benefits, including the role of sorghum phenolic compounds present in types of sorghum that vary in pericarp colour. The present study supports the continued strategy of evaluating sorghum with a range of pericarp colour not only for the properties of their phenolic compounds, but also for additional nutritional properties such as protein and carbohydrate contents, levels of unsaturated fatty acids and minerals. Sorghum varieties developed for production in the USA and grown in the Mediterranean region demonstrate the feasibility of producing a range of different sorghums that vary in polyphenolic content; one example is the high antioxidant capacity of the compound eriodictyol-O-hexoside isolated from the red sorghum variety, a flavonoid very important for human health due to its ability to fight free radicals with high efficiency [51]. The current research provides valuable information on the nutrient composition of sorghum and supports the growing research on the unique health benefits of sorghum whole grain consumption. This research also shows that sorghum varying in pericarp colour and in associated phenolic compounds [50] can also vary in overall nutrient composition.
Cereals have long been consumed by humans and are staple foods providing a primary source of carbohydrates, proteins, B vitamins and minerals for a substantial portion of the world's population; this is especially so where sorghum is consumed as the primary food source. Sorghum also contains a variety of phytochemicals which may, in addition to basic nutrition, provide some of the health benefits seen in populations consuming primarily plant food-based diets [47]. The fact that the nutritional composition was similar between the same varieties of sorghum grown in the USA and in the Mediterranean area confirms that it is possible to utilize sorghum for human use in Europe.
Acknowledgments: The authors are grateful to Francesco Salamini for helpful discussion and critical reading of the manuscript. Thanks also to Matthew Malin for a generous gift of the food-grade white, red and black coloured sorghum varieties. The technical assistance of both Federico Gomez Paloma and Concetta Porzio is acknowledged. Names are necessary to report factually on available data; however, the U.S. Department of Agriculture neither guarantees nor warrants the standard of the product, and use of the name by the U.S. Department of Agriculture implies no approval of the product to the exclusion of others that may also be suitable.
Conflicts of Interest: The authors declare no conflict of interest. Mention of trade names or commercial products in this publication is solely for the purpose of providing specific information and does not imply recommendation or endorsement by the U.S. Department of Agriculture. USDA is an equal opportunity provider and employer.
GEMIMEG-II — How metrology can go digital ...
The GEMIMEG-II project is intended to pave the way for digitalization in metrology. The central element of this digitalization initiative is the digital calibration certificate (DCC). It contains all calibration information in full digital form. This means that it is machine readable and machine understandable without human interaction. This enables its utilization by being securely machine interpretable and machine actionable in the entire chain of truly digital workflows and information technology (IT) environments in Industry 4.0. Therefore, the DCC is created automatically in the calibration process in a standardized form based on a digital document schema. This systematic schema enables all data in the DCC to be safely transferred, processed, and interpreted automatically in all subsequent IT-based processes. This paper reflects the project status of GEMIMEG-II in its final phase and shares some insights on the concepts developed and solutions implemented, as the results will be demonstrated in five Realbeds. Furthermore, the concept of quality of sensing and quality of data will be introduced as it is implemented in the GEMIMEG-II project to convey supplementary information on the measurement, environmental and/or surrounding modalities, and data quality. Finally, a brief outlook will be given on next steps and actions planned in the project related to other digitalization initiatives for the fab of the future.
What does digitalization mean?
To digitalize given and existing processes often turns out to be much more than simply converting the output of a process into a digital form. In the calibration domain, sometimes the output documentation is already provided as a portable document format (pdf) dataset. Even though this pdf document is standardized through ISO 32000 [1] in different varieties and levels of detail, it is, simply speaking, a portable digital representation of a paper document. This is extremely valuable already, as it can be exchanged between different types of computing systems while the content and graphics of the document remain unchanged. Thus, the very broad aim and scope of the pdf specification and representation is to maintain the author's design and original content for any documentation and/or for document exchange, storage and archiving. The content of a pdf document consists of the textual body enriched with all associated formatting information necessary to replicate the document appearance on different computing systems or visual outputs in printouts or displays. There is no formal semantics or ontology structuring the content of the document to make it machine readable or machine executable, such as a formal schema for an unambiguous representation of the document content. Nevertheless, such a structured dataset can be included in a pdf document, and the content can also be protected by signing the pdf document to prove authorship and the document's originality and authenticity. Full digitalization requires a clear, systematic, and stringent semantics for the entire content of a document. Such a formal system is necessary to structure domain-specific information unambiguously to make the document's content machine readable and finally also machine executable.
As a prerequisite, it is necessary to define and specify all technical terms precisely without any ambiguity. One such semantic system for a given domain is called an ontology. The ontology provides relations and/or hierarchies between the semantic elements. The quality of the semantic system and its related ontology is essential to transfer and convey all information of a document between a data source and a data user in a secure way in order to avoid any misunderstanding or misinterpretation. This means that, apart from a digital representation of a document as a structured data file, like in the example of the pdf, there also needs to be a semantic description of all the formal and technical terms used in order to define and specify precisely the content of each element in the document. Therefore, the digital calibration certificate (DCC) [2,3] needs such a complete and efficient semantic description with a related ontology for all the terms and information contained in the digital document. From a globalization perspective, it is advantageous to have one common semantic system for the DCC which supports all needs and demands of metrologists around the world. If multiple semantic systems or ontologies existed, an additional translator software would be needed as a middleware to convert a DCC from one ontology with a given semantics into another ontology with its related semantics. To avoid any additional and error-prone effort related to translators, one common, internationally applicable DCC semantic system to which all stakeholders can contribute is beneficial for Industry 4.0 applications. The semantic structure underlying the digital calibration documents is essential to enable machines to read, understand, interpret and act on the data. It is a mandatory prerequisite for collaborative or autonomous interoperability on the system and data level. In consequence, all users of this semantic structure can and have to select the correct structural element to assign or read the respective values and/or units and/or data properly. A user or subscriber to the information of such a dataset must rely on the assumption that the expected and correct information was assigned to the respective structural element according to its definition. Hence, the quality of the vocabulary and the precision of the related definitions are the key elements for users and programmers to assign and retrieve information properly. Typically, when humans interact on data, they can clarify upcoming questions in direct exchange. Machines will take all values as given for the respective structural element they are assigned to and simply run their code with the values as read from the file. In that respect, it is also important that the digitalization of the calibration domain reflects local or regional aspects of how information is represented in a calibration certificate. Typical differences are related to metrological units, from the International System of Units (SI) [4] to various imperial units, the assignment and representation of measurement tolerances and uncertainties, and sometimes also the vocabulary or technical terms. Therefore, a truly digital calibration system has to be built upon a common semantics and ontology on the one hand, but on the other hand needs to be open or flexible enough to also support regional, local or even application-specific aspects. All terms related to special aspects also need to be added to the semantic structure to secure international applicability.
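As a toy illustration of the translator middleware argued against above, the snippet below maps records between two hypothetical DCC ontologies. All field names are invented; the point is that every term must be maintained in such a mapping, which is exactly the error-prone effort a single shared semantic system avoids.

    # Hypothetical field names from two competing DCC ontologies, A and B
    ONTOLOGY_A_TO_B = {
        "calibrationDate": "date_of_calibration",
        "measuredValue": "result_value",
        "expandedUncertainty": "uncertainty_k2",
    }

    def translate(record_a: dict) -> dict:
        # Any term missing from the mapping raises a KeyError, i.e. every schema
        # change on either side forces maintenance of the middleware
        return {ONTOLOGY_A_TO_B[key]: value for key, value in record_a.items()}

    print(translate({"calibrationDate": "2024-05-01", "measuredValue": 293.152}))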
Therefore, an international and open approach moderated by a group of trusted maintainers is preferred and suggested to create a common digital DCC ecosystem. It might be the most effective and resource-efficient way to create a versatile international digital metrology ecosystem. In the daily practice of mutual long-term and trusted collaboration between a tool owner and a calibration service provider, highly customized practices or 'internal' technical terms might evolve. Digitalization bears the chance to adopt or replace such individual practices and terms by transparent processes based on the good practice of the community or terms used by the community. In principle, even specific additions can be made to the DCC in a separate namespace. The closer a DCC sticks to a good and preferred practice, the easier automated and digital processing of data and content will be, and the easier it is to secure process resilience in case new calibration service providers are integrated. Customizations typically necessitate some subsequent software adaptations or specific middleware for the system owner and/or calibrator.
The GEMIMEG-II project
The GEMIMEG-II project is a German nationally funded project to pave the way for digitalization in metrology. It is intended to prepare a first foundation and proof of concept of the DCC application. Even though it is nationally funded, it is open to share and discuss the concepts and findings with the international community at conferences and in the peer group of 30+ associated international partners. The acronym reflects the project aspiration by combining GEMIni for the digital twin of the MEtrology equipment for Global application. This digitalization initiative focusses on the DCC and its fully automated application in modern industrial information technology (IT) infrastructures. The project consortium consists of the national metrology institute Physikalisch-Technische Bundesanstalt, Germany (PTB), different industry partners and multiple research institutes. The core of the user story of the project is based on an automated calibration workflow documented in a DCC which is then transferred safely and without human intervention to the customer of the calibration. At the customer site, the DCC is read, processed, and interpreted automatically by machines in the full chain of workflows in typical IT and operational technology (OT) environments in Industry 4.0 to update all relevant information in the plant management system (ERP, enterprise resource planning) and all calibration-related information in production. This paper reflects the recent project status of GEMIMEG-II in its final phase and shares some insights on the concepts developed and solutions implemented. Five Realbeds will be implemented in the project to showcase and prove the applicability of the DCC in different technical fields. Realbed 1 is the Digital Competence Center for wind power (d-CCW at PTB, Germany [6,7]), a new calibration system for large torques of up to 5 MNm, and 20 MNm at a later stage. Calibration results of this complex system will be reported directly in DCC format. Realbed 2 uses a highly digital and automated 'Factory of the Future' scenario to mimic a modern Industry 4.0 environment with IT, OT and internet of things (IoT) devices. Realbed 3 focusses on the 'Process and Pharma Industry'. A single company in this industry typically has a very high number of 10,000-100,000+ recurrent calibrations for process-related equipment every year.
This Realbed analyses the benefits of fully digital process chains in a highly controlled and regulated environment. Realbed 4, 'Autonomous Driving', focusses on future mobility aspects of numerous sensors with dynamic and recurrent (minutes-to-hours timescale) calibrations for safe autonomous functions. Realbed 5 is a 'Legal Simulation Study' to challenge the evidentiary value of the DCC in simulated court proceedings with real advocates and judges for representative cases related to the other Realbeds. This paper will further introduce the concepts of quality of sensing (QoS) and quality of data (QoD) as used and implemented in the GEMIMEG-II project to convey supplementary information on the measurement and data quality, as the metrologist would do in today's practice. Figure 1 shows the conceptual diagram of the GEMIMEG-II project for a generic calibration and data processing workflow in a typical quality infrastructure together with related communication technologies. The quality information 'quality of X' (QoX: sensing (S), data (D) and information (I)) is intended to convey relevant information on measurement circumstances and/or indicators for the trustworthiness of a measurement or datum which influence neither the measured value nor the measurement uncertainty. QoX can be application-specific or user-defined, in contrast to measurement uncertainties, which are specified by the Guide to the Expression of Uncertainty in Measurement (GUM) [8] and reported in the DCC as an integral part of the measurement result for each value measured, as in a calibration certificate today. The following chapters are arranged according to the different technical fields of the GEMIMEG-II research agenda, starting from more general topics and followed by more detailed information in later chapters. Finally, a brief outlook will be given on next steps and actions planned in the project with regard to other digitalization initiatives.
The digital document schema
The digitalization of the metrology domain requires multiple digital documents. It turned out to be advantageous to derive all these documents from the same basis, called the digital document schema (DX). This common DX concept is expected to bring significant benefits with respect to the interoperability of the different documents of the metrology domain. A generic DX concept view chart is shown in figure 2. Since digitalization is evolving at many different places and in many domains, interoperability is essential for a smooth combination and integration of different software modules. Therefore, the bottom part of the chart in figure 2 shows a selection of underlying norms and standards which can be used or should be followed while developing the DX document schema for the metrology domain. This list cannot be complete, since digitalization is a fast-developing field in multiple domains and applications, in a state of flux of different interrelating developments. Nevertheless, it is worthwhile to consult these standards to optimize compatibility with other digital solutions in order to reduce tedious rework on one side or the other. The DX schema, as sketched in the central part of figure 2, is a semantic schema containing all terms needed to digitalize the metrology domain. The schema is structured so that information can be retrieved more easily from the different clusters. From the DX schema, different document types are derived, like the digital calibration request (DCR), the DCC or the digital calibration answer (DCA).
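To illustrate what such a DX-derived document offers a machine, the sketch below parses a DCC-style measurement result whose value, unit and expanded uncertainty sit in dedicated semantic elements. The element names only follow the spirit of the published DCC and D-SI schemas and are not guaranteed to match the normative XSD; treat them as an assumption.

    # Schematic DCC-style fragment; a machine can act on the calibration value
    # and its uncertainty without any human interpretation of the layout.
    import xml.etree.ElementTree as ET

    FRAGMENT = """
    <result refType="temperature_calibration">
      <quantity>
        <name lang="en">Measured temperature</name>
        <real>
          <value>293.152</value>
          <unit>\\kelvin</unit>
          <expandedUnc>
            <uncertainty>0.004</uncertainty>
            <coverageFactor>2</coverageFactor>
          </expandedUnc>
        </real>
      </quantity>
    </result>
    """

    root = ET.fromstring(FRAGMENT)
    value = float(root.findtext("./quantity/real/value"))
    unit = root.findtext("./quantity/real/unit")
    unc = float(root.findtext("./quantity/real/expandedUnc/uncertainty"))
    k = float(root.findtext("./quantity/real/expandedUnc/coverageFactor"))
    print(f"{value} {unit} with expanded uncertainty U = {unc} (k = {k})")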
Further document types can be derived in the same way, as indicated on the right side of the view chart. This common structure ensures that all derived documents are built upon the same terms and semantics, which simplifies file handling, information input, and information retrieval. Furthermore, a common ontology and semantics for the data structure in the different documents allows for synergy and scaling effects when developing functionality for subsequent middleware and application software modules. Most of these software functions can be reused for multiple applications with different DX-based documents. This benefits both sides of the process chain between the calibration or certificate provider and the customer or user of this documentation. In addition, it might also help to pave the way towards more automated auditing processes. Deriving separate documents from a common underlying structure has the big advantage that no single document is overloaded with features not needed for its specific function, while all semantics remain consistent over the entire DX schema. A clearly defined structure of the DX schema immediately ensures that all documents derived from that schema are machine readable and machine understandable, and thus that the information contained can be machine executable. Another great benefit of the single semantic structure in the DX schema is that the schema can be developed in only one language, preferably as an English XSD schema, to make it fully international. These semantic words can also serve as reference keys in the XSD schema for the respective fields. Based on the English semantic terms, and thus the reference keys, auxiliary files with appropriate translations of all these terms can be prepared as a language pack for each language, e.g. by national or regional groups of native speakers. This common structure makes it possible to translate the full technical content of DX-based files into a specific language properly, unambiguously, easily, and automatically. These language packs also serve as the basis when a human readable output (HRO) is generated for a given DCC XML file. This will greatly ease international applicability and collaboration, as already achieved by the vocabulary in metrology [9]. In this approach, only individual or dedicated comments or additional information cannot be translated automatically, but all content that is relevant for machines and for auditing the respective processes can be. The top line of the chart shows the optional HRO generator for the respective document type. The style of the HRO might be document dependent and might also need to follow some formal requirements for the respective document type; for example, the calibration certificate must follow the regulations of ISO 17025 [10] and the certificate of conformance those of ISO 17065 [11].

The digital calibration documents and their application in a calibration workflow

Digitalizing the calibration domain in metrology involves much more than creating a DCC. In order to support the digitalization effort in industry, especially for Industry 4.0, it is of particular importance to take the whole calibration workflow into account. Therefore, the calibration request is needed to initiate the calibration, while the calibration certificate and answer contain its result.
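As a small illustration of the language-pack idea described above, the following Python sketch renders a human readable line from a semantic reference key; all keys and translations here are hypothetical examples, not the actual DX vocabulary:

# Hypothetical language packs: semantic reference key -> localized label.
LANG_PACKS = {
    "en": {"dcc:measurand": "Measurand", "dcc:calibrationDate": "Calibration date"},
    "de": {"dcc:measurand": "Messgroesse", "dcc:calibrationDate": "Kalibrierdatum"},
}

def render_hro_line(ref_key: str, value: str, lang: str = "en") -> str:
    """Translate a semantic reference key via the language pack and
    format one line of a human readable output (HRO)."""
    label = LANG_PACKS[lang].get(ref_key, ref_key)  # fall back to the key itself
    return f"{label}: {value}"

print(render_hro_line("dcc:calibrationDate", "2023-05-17", lang="de"))
# -> Kalibrierdatum: 2023-05-17

Because only the labels are localized while the reference keys stay fixed, the machine-relevant content remains language independent, exactly as argued above.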
For each measurement device, like a sensor, a measuring system, or a calibration artifact, the calibration requested by the owner of the equipment from the calibration service provider has to be specified precisely. This is of fundamental importance to enable fully automated calibration routines, either on the premises of the equipment owner or remotely at the location of the calibration service provider. The digital document to exchange the calibration requirement information is the DCR. It serves as the technical requirement specification of the calibration. The commercial part related to the calibration is left to the procurement systems of the companies, either via individual orders or via frame contracts. The DCR serves to convey a complete and detailed technical specification of the requested calibration in fully digital form to the calibration service provider. Figure 3 shows a schematic diagram of the information blocks contained in the DCR. The IT system of the calibration service provider extracts the information from the DCR to perform the calibration as requested by the customer. The administrative data as already entered by the tool owner will be used without any modification and enriched with supplementary administrative information about the calibration service provider and/or the equipment used in the calibration, to generate the full set of administrative data in the DCC. Then, the calibration measurements are performed and evaluated. The final calibration results are put into the data section of the DCC. Further information can be added to the DCC, such as QoS or QoD information. The DCA is a supplementary document. It is intended to convey additional information from the calibration service provider to the equipment owner which cannot be part of the DCC for legal or formal reasons, or which the equipment owner does not want to have in the DCC dataset, e.g. when the DCC is part of a product delivery from an equipment manufacturer to an end customer. The right side of figure 3 gives some supplementary information on the content blocks. Furthermore, it suggests that the DCC may contain information about its issuer, such as whether it stems from a national metrology institute, a calibration lab, a factory calibration, or an acceptance test by a service technician. This hierarchy level helps to distinguish different documents automatically and to identify the most relevant DCC in an easy fashion if there are multiple calibration protocols within the validity period as defined by the equipment owner.

Sensor calibration and related aspects

In technical applications, there are different kinds of sensors. Physical sensors are based on a physical sensing principle, e.g. for temperature, voltage, current, resistance, or capacitance. More advanced sensors are based on multiple sensor inputs from physical sensors, combined into a common measurement result. A new multi-modal sensor value is computed based on physics principles related to the sensor measurand and the input values of the contributing sensors. An example could be a humidity sensor measuring the (absolute/relative) humidity of air or of a specific gas. Model-based sensors are another group of sensors, employing a generic model to generate the sensor output value based on one or multiple input values from physical sensors. The sensor model is based on physics principles and/or a model of the measuring system.
The respective model describes the functional relation between one or multiple input variables of input sensors and the generic output value of this model-based sensor. It is not important for the sensor function whether the computation of the output values is performed on a sensor device or in a separate computing environment on an edge device or in the cloud. Since the model is characterized by a functional relation between input and output values, this sensor type is transparent to the user in its function through the explicit functional relation. In that respect, explicit sensor models can be considered 'white box' models. Preferably, this functional relation is based on physics principles in an explicit way. Model-based sensors are typically calibrated for a range of input values for each of the input sensors. Typically, this sensor type generates valid output values for any combination of input values from the different sensors, as long as the range of input values was covered in the calibration of the respective input sensor. Another quite modern type of model-based sensor relies on models generated by computer learning, machine learning, or artificial intelligence. These learning-based models are trained on a dataset with a given distribution of values of the respective input variables. Thus, it is of critical importance that the training data covers the expected range of all input variables in an appropriate way. The quality of the training dataset is decisive for the quality and accuracy of the learning-based model. The sensor model is typically not available in explicit or functional form and is thus not transparent to the user. It is contained implicitly in the functional structure of the computer code produced in the training process. In that sense, these learning-based sensors can be considered a kind of 'black box' model. This is a major difference with respect to the sensors described before, which can be described as white box sensors, since their sensor function can be described explicitly by formulas, making them fully and truly transparent in their function or functional principle. Typically, the functional principle of physical and model-based sensors is differentiable with respect to changes of the input variables, and thus these sensors might still produce quite reasonable output even when at least one of the sensor input values is (slightly) out of the calibrated range. In contrast, learning-based sensor models can behave very differently when input values occur in combinations which were not covered in the training phase. Due to their highly non-linear nature, the output of such learning-based sensors can change drastically and in an unpredicted fashion, even when only one of the input variables slightly leaves the range of values covered in the training. Typically, such learning-based models cannot extrapolate, sometimes not even for small differences in input values. Sensor signals typically contain some noise in addition to the value measured. Therefore, appropriate technologies for noise reduction or suppression are applied, either on the analogue signal or on the digital values. Typical functions are, e.g., averaging over a time span or over a number of consecutive measurement values, or sliding window approaches with weighting factors that reduce the impact of signals with increasing time distance to the actual measurement.
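As an illustration, here is a minimal Python sketch of such a weighted sliding-window filter; the window length and the exponential decay factor are arbitrary illustrative choices, not values prescribed by the project:

def weighted_sliding_mean(samples, window=5, decay=0.7):
    """Weighted average over the last `window` samples.
    The most recent sample has weight 1; each older sample's
    weight is multiplied by `decay`, so older signals contribute less."""
    recent = samples[-window:]
    # Oldest sample first, so the last weight (for the newest sample) is 1.
    weights = [decay ** (len(recent) - 1 - i) for i in range(len(recent))]
    return sum(w * s for w, s in zip(weights, recent)) / sum(weights)

readings = [20.1, 20.3, 19.9, 20.2, 25.0]  # last value is noisy
print(weighted_sliding_mean(readings))      # ~21.9 instead of the raw 25.0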
When these functions are specified with all their functional parameters, their output is fully deterministic and explainable, and in that sense completely transparent. Sometimes, sensor system suppliers do not explicitly disclose these functions to their customers but only specify minimum response times. With these differences between the sensor types in mind, the question is whether such learning-based sensors can really be calibrated in a classical sense. There is an ongoing discussion in the expert community related to formal and/or technical aspects. Calibration is defined as comparing a system with a standard and determining a measurement uncertainty. This is typically not possible with a learning-based system. Therefore, the term calibration cannot be used in all cases. Nevertheless, there is a clear need to qualify the output of learning-based sensors by an independent third party. A pragmatic approach to that problem could be that such sensors get qualified for a given range of the respective input parameters by a qualified third party. The result of such a qualification process is documented by this third party in a way comparable to a DCC, but not in a DCC. As a suggestion, the qualification result can be documented with a digital qualification certificate (DQC). Accordingly, the request for qualification by the owner of such a learning-based sensor can also be conveyed formally to the third party in a technical specification document. Hence, this document can be named digital qualification request (DQR). This DQR document would specify the input variable ranges for all variables, or the parameter space in multi-dimensional fashion, where each of the input variables represents one dimension of the parameter space. It also needs to be specified which data is used for the qualification, since the data quality and content may strongly influence the output of the qualification. The availability of standardized and qualified datasets for qualification might not always be feasible or practical. To outline the present status of this early conceptual idea, the view chart of figure 2 is refined for this aspect of learning-based sensors and/or black box sensor models and shown in figure 4 by adding the DQx document types. In essence, this conceptual idea of the DQx documents shows how flexibly the DX document schema can serve even new applications. At present, it might be too early to list a concise set of stringent criteria for when a sensor model is categorized as white or black box, or somewhere in between in a kind of 'grey' tone. Future developments might advance or modify this suggested concept to handle the topic appropriately, and learning methods might even lead to fairly transparent functional models of a sensor over time.

The transfer of DCx documents

In a chain of a fully digital workflow, the DCA, DCC, and DCR need to be transferred securely between the equipment owner and the calibration service provider. Different solutions are imaginable, from a point-to-point connection between the two institutions on the one side to a general exchange portal or platform on the other side. Depending on the number of calibrations or the security level required by the calibrator or system owner, one or the other solution might be selected, or any setting in between. As an open solution, as envisioned in the GEMIMEG-II project, a platform or multiple platforms consisting of shared file systems or repositories might be preferred.
In this approach, multiple physical systems can even be combined into a common virtual system. A platform can serve small or large calibration service providers, since they do not have to run the infrastructure, with all its security and resilience requirements, on their own. From a user perspective, a very limited number of platforms might be favored, to keep the effort for document exchange and handling small. A concept of a virtual platform can help to virtually merge different physical platforms into one logical platform as a single entry point to share or retrieve a given DCx document. [Figure 5 caption: typical status flow of the calibration test equipment inventory (TEI) and the related process, with a DCR plus a procurement document to order and initiate a calibration; the DCC produced as output of the calibration is returned to the system owner together with the sensor system.] The DCR is issued by the system owner and transferred via the platform, or directly, to the selected calibration service provider. The DCR might be accompanied by formal purchase order information. It also might specify how and where the final DCC, and optionally a DCA document, have to be transferred to the tool owner. A generic flow chart of the calibration lifecycle status of a calibrated piece of equipment is shown in figure 5 for a typical process implementation. DCCs and DCAs are created in the calibration process. They are stored in a repository or on a platform accessible to the equipment owner. If the DCx documents are stored on a platform accessible by multiple companies, there might be some interest in ensuring that a specific user can only access documents belonging to his inventory. This is a fundamental prerequisite to protect business-related information contained in a calibration document, business relationships, or even the business volume of a specific calibration service provider or manufacturer. Therefore, the system owner will need some information such as the file name, the exchange platform with credentials, and/or potentially an encryption key to download and read the DCC and DCA files. What exactly is needed will be governed by the respective exchange platform. If the system owner has a data section on an exchange platform, the calibration service provider might be entitled to put the respective files directly into this customer section to ease the exchange process. From a general viewpoint, if the DCC or the DCC platform itself is protected against unauthorized access, one needs to authenticate digitally with credentials to be eligible to access, download, open, and read a respective DCC. A high level of security is reached if a user has to prove ownership of the respective system and, in addition, has to hold an encryption secret to read the respective DCC content. Technologies like two-step or multi-factor authentication would help to ensure that only the actual system owner is eligible to open the latest calibration documents, which can be important when systems are sold over time.

Revocation of a DCC document

Typically, the exchange of the various DX-based documents as described in section 3.2 can be done in a request-response type of communication. Sometimes, the calibration service provider might be compelled to revoke a certificate document, like a DCC, that he has issued before. Such a revocation can be considered an event-driven push notification to inform the actual user of this certificate in a timely fashion, since the application of the information from this certificate can be safety critical.
In the case of a DCC, the calibrator does not necessarily know the actual user/owner of the device. Therefore, the DCC file name is appended to an open and public DCC revocation list or, more generally, a digital certificate revocation list. This base concept is already applied successfully for digital certificates, e.g. in public key infrastructures (PKIs). The DCC revocation list could either be a general list, where all revocations are contained, or a list per exchange platform or calibration service provider. For distributed lists, the user has to check on every platform relevant for his equipment whether one of his DCCs got revoked. In case of direct contact between calibration service provider and system owner, the revocation could also be communicated directly. Nevertheless, a public revocation list bears the opportunity for fully digital processes with more transparency, more security, and faster response times for calibration documents. The benefit is greatest in more complex cases, e.g. where the calibration of an item gets revoked but the item was sold in the meantime and thus changed owner. In effect, digital document revocation concepts help to make the calibration infrastructure more resilient in its operation. A revocation of a DCC, or any other DX-based document, can only be done by the issuer, or the issuer organization, of the respective document. The system owner is responsible for the utilization period of the respective system. If there was an incident with the calibrated item in its application phase, the system operator or user can set the system status to 'unclear', which means that at least a recheck of this item has to be done by a knowledgeable person, who can decide whether the item can still be used or needs to be repaired and/or recalibrated. The related process parts are also indicated in figure 5 on the left and right side, respectively.

Supplementing and supporting digitalization initiatives and technologies

Digitalizing the calibration domain in metrology requires many prerequisites in digitalization to complete this demanding task. The most relevant ones for the domain of calibration and the calibration certificates are listed in the following sub-sections.

Unique product identifier

The equipment to be calibrated needs to have a unique identifier. In today's practice, a simple sticker is sufficient to identify the equipment manually as part of the metrological equipment inventory at company level. In a digital environment, it is preferable for this unique identifier to be computer readable, ideally via standard network technology like (wireless) local area networks. Having unique identifiers for industrial inventory is a general requirement. Therefore, existing approaches can be used, such as the digital nameplate [12][13][14] or the identification link [15]. For the purpose of calibration and safe traceability of a measuring system or calibration artifact, it is mandatory that the digital equipment identifier is unique and machine readable. Nevertheless, for factory automation, and even for brownfield automation, there also needs to be a fall-back solution for equipment without a digital interface, e.g. a metal pressurized gas cylinder with calibration gas mixtures for a gas chromatograph. In this case, labels with unique identifiers can be attached to the mechanical hardware to enable proper machine readability.
These unique identifiers could contain the same information directly, or via a link to a network resource where all relevant information can be found, including a DCC for the respective configuration or gas mixture as in the example. In principle, the unique product identifiers as suggested in the various digitalization initiatives have the form: [unique ID manufacturer]_[material number of system from manufacturer]_[serial number of system]. The unique manufacturer ID can be a real unique ID from some kind of public register, or the internet domain of the manufacturer. Alternatively, the legal entity identifier (LEI) [16] could be a solution for a unique ID, since internet domains can change their owner over time. It is described in ISO standard 17442 [17,18]. For digitalization purposes, the verifiable LEI (vLEI) [19] was introduced. The vLEI is a digitally trustworthy version of the 20-digit LEI code which can be verified automatically, without the need for human intervention. Conceptually, the other two parts are unique, since every manufacturer has a system of unambiguous material and serial numbers for the systems it produces. In general, it is preferable if the DX document schema can handle different types of unique identifiers through a combination of a definition of the type of identifier used and the unique name of this identifier for the specific sensor or unit. This openness helps international applicability, since different systems are already implemented for different applications or in companies. In principle, one system could have several types of unique identifiers related to it, like the model number and serial number from the manufacturer, a digital ID of the communication interface, or a unique test equipment identifier from the system owner's test equipment inventory list.

Complete system configuration documentation

For any modern system consisting of hardware and/or software modules, it is very important to have precise documentation of all components employed. On the one side, there is the hardware configuration, which might become more flexible or context dependent in its operation in a modern IoT environment, when single sensors or sensor systems are combined to provide data to a more advanced sensor type or model-based sensor. On the other side, there is the software of a sensor or sensor system, which might consist of an operating system, firmware, and application software including the graphical user interface or human machine interface. When calibrating a sensor or sensor system, it is preferable to document all software modules used on the sensor or sensor system in the calibration, by the full version or revision number, together with any parameter settings or files used to initialize and operate each of the software modules. The reference to the parameters used can be explicit, by listing all parameters used, or by referencing a version-controlled standardized parameter file. This decision might be taken case by case. Preferably, the information on the respective software version and parameter files can be read out automatically from the sensor or sensor system and be documented in the DCC or DCx. Even software-based sensors and learning-based sensors need to be documented properly, by software release version, configuration file used, training version or training cycle number plus training dataset identifier, and the like, whatever is relevant and appropriate to identify the exact configuration.
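A minimal Python sketch of such a configuration record, combining the unique product identifier format given above with the software configuration items listed; all field names and example values are hypothetical illustrations, not the DCC schema:

import hashlib
from pathlib import Path

def product_id(manufacturer_id: str, material_no: str, serial_no: str) -> str:
    """Compose a unique product identifier of the suggested form
    [unique ID manufacturer]_[material number]_[serial number]."""
    return f"{manufacturer_id}_{material_no}_{serial_no}"

def config_record(equipment_id: str, firmware: str, param_file: Path, training_id=None):
    """Collect the configuration items to be documented with a calibration.
    The parameter file is referenced by a hash, pinning its exact version."""
    record = {
        "equipmentId": equipment_id,
        "firmwareVersion": firmware,
        "parameterFileSha256": hashlib.sha256(param_file.read_bytes()).hexdigest(),
    }
    if training_id is not None:  # only relevant for learning-based sensors
        record["trainingDatasetId"] = training_id
    return record

rec = config_record(product_id("example-sensors.example", "MAT-4711", "SN-0042"),
                    firmware="2.3.1", param_file=Path("sensor_params.xml"))

Hashing the parameter file is one way to make the reference to the exact parameter version unambiguous without listing every parameter explicitly.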
To avoid any ambiguity, it is important to document the full system configuration in the DCC. For hardware-based systems, there is a huge ongoing effort around the asset administration shell (AAS) [20] to provide a full digital twin of a hardware system. It also includes the precise configuration in the respective product lifecycle management or ERP system of the hardware system owner. In future, the AAS will provide appropriate sub-models for the respective sensor types, as they are already under development in different initiatives for various sensor types. In this context, there might also be solutions prepared to properly document purely software-based sensor systems. An AAS sub-model for the DCC is currently under development.

Digital unit representation

A measurement result consists of the measurement value and the metrical unit related to this value. The international system of metrical units is the Système International d'Unités [4], the SI system. There is a fully digital representation of the SI system called the D-SI [21][22][23]. It is based on the seven metrological base units. All other units can be derived from these base units. In this framework, it is also intended that further units can be represented, including imperial units. They can simply be defined as a derived unit with a unit name, optionally a short name of the unit, and a unit symbol. Those derived units are mapped to a combination of the base units raised to the respective powers, together with a scaling factor. The D-SI system is suitable to represent all metrical units used in calibrations worldwide, including all kinds of imperial units. The D-SI also includes a concept to append decimal multipliers to all units in a systematic fashion. For unit representation in the DCC, the D-SI was chosen, since it can serve all requirements of a versatile international system.

The digital trust chains

Documents in the digital world need to be easily verifiable as unchanged and authentic. Typically, this is achieved by a digital signature of the issuer over the respective (portion of the) document. The identity of the signer is bound to a public key (PK) by an X.509 PK certificate issued by a trusted third party called a certificate authority (CA). ISO 17025 [10] contains no hard requirement on a digital signature for a DCC document, on the necessary trust framework, or on the signature format that should be applied. In this regard, there is some imbalance, and in consequence we expect that calibration service providers on different levels of the calibration pyramid will use PK certificates issued under various policies from different trust anchors (root CAs). This can make the step of validating the integrity and authenticity of a document like a DCC even more complex. Therefore, it is beneficial to have a common technical format for digital signatures of a DCC which includes the information necessary for signature validation. Hence, we propose to use an enveloped XML advanced electronic signature (XAdES) [24], following the electronic IDentification, Authentication and trust Services (eIDAS) [25] nomenclature and standardized by the European Telecommunications Standards Institute [26,27], on a DCC document as a good practice. This would ease the trusted utilization of a DCC with standard methods for good international cooperation. Furthermore, it can avoid any imbalance between issuers on different levels of the calibration pyramid.
Since there is still a lot of development ongoing for electronic signatures and their mutual international acceptance, we might see evolving requirements and solutions coming up over time. In the GEMIMEG-II project, enveloped XAdES will be used in the Realbeds in order to show technical feasibility. The signatures will be based on a public key infrastructure (PKI) framework with a respective root certificate from a root CA. In the project, the CA is from Deutsche Telekom Security. Since it is very likely that there will not be a single root CA, or even a single trust framework (like eIDAS), accepted worldwide, XAdES allows multiple signatures to be applied to one document. In that case, the different signatures can be side by side (parallel signatures), or the first signature of the original issuer is the root and further signers apply countersignatures that confirm the first digital signature. Currently, this concept is being developed with those two options and will be detailed further in a good practice suggested in the future.

Secure device enrolment

Sensor system devices have to be enrolled into the domain of the operator of the respective production system network prior to their utilization. Typically, IT-based devices are assigned an initial identity by their manufacturer. During enrollment, a new identity from the operator's domain is assigned to the device. This new identity enables the device to authenticate itself to other devices/services from the operator's domain. There are several methods that can be used to perform the enrollment process automatically after the device has been integrated into the operator's network. The Enrollment over Secure Transport (EST) protocol, standardized by the internet engineering task force (IETF) in a request for comments (RFC) [28] and updated by RFC 8951 [29], is a comparatively simple procedure that can be used to assign a new identity to a device: the device establishes a secure transport layer security (TLS) connection with mutual authentication to the EST server, which is referred to as the registrar in the figure. The device then generates a new key pair and a certificate signing request, which is verified by the registrar and a connected CA. The CA finally issues a certificate for the device, which is forwarded to the device. EST is performed over HTTP within the TLS connection and specifies additional endpoints that can be used, for example, to query additional CA certificates. One disadvantage of EST is that both the EST server/registrar and the device must already 'know' each other in order to establish a mutual trust relationship. The Bootstrapping Remote Secure Key Infrastructure (BRSKI) standard (RFC 8995) [30] provides an extension that allows the device to establish a trust relationship with an unknown registrar. In the case of BRSKI, the device trusts only its vendor, from whom its initial identity was issued. The device initially does not trust the registrar but accepts the TLS connection anyway. Then the device sends a so-called voucher request to the registrar. This is a digitally signed JSON structure containing information about the client. The registrar then sends its own voucher request to the manufacturer. This request also contains the original voucher request of the device. The manufacturer verifies the voucher request and the registrar's identity. After the manufacturer has determined that the device is in the correct domain, it issues the voucher, which is digitally signed by the manufacturer and contains the registrar's certificate.
The client on the device can now verify the voucher with the manufacturer's PK certificate and is then able to verify the TLS connection with the registrar. Then, the EST protocol can be performed over the now trusted connection to obtain a new identity. The whole process chain of the BRSKI and EST enrolment process is shown in figure 6. BRSKI is an extension that allows devices to be shipped with a uniform initial configuration, regardless of the domain in which the devices are deployed later. This BRSKI method reduces the effort for the manufacturer of the devices, but it requires the operation of a corresponding service that later issues a voucher for a device in the enrollment process, for safe and robust enrollment.

GEMIMEG interface

When a sensor or sensor system is securely enrolled in the domain of the measuring system owner or registrar, a standardized communication interface helps to exchange the DX or DCC related information. This communication interface is a prerequisite for establishing a digital collaboration to exchange DCCs or other data between different institutions or legal entities. In the GEMIMEG-II project, we follow the concept of a lean, standardized interface which mediates between the internet on the one side and the respective IT system of an institution, or a respective IoT device, on the other side. When this interface is available at the two institutions or on their respective devices, the connection between the GEMIMEG-compatible systems can be established. Alternatively, the interface can also be used to exchange documents with the DCC server platform, to upload or download DCCs from a repository, as shown in figure 7. A generic view of the effect in a distributed system of different calibration service providers, sensor and sensor system manufacturers, and integrators is shown in figure 8. The exchange of DCCs and related documents is shown via a common DCC platform in the center. The concept and solution for connecting sensors to another IoT system as developed in the project should be considered a good practice recommended by the project consortium. This good practice is not mandatory, so that all users can also implement their own solutions according to their specific requirements, e.g. stemming from compatibility constraints, to make a solution retrofittable to an already installed base of products. Finally, the systems need to handle the GEMIMEG-II protocol as developed in the project. The GEMIMEG-II interface specification document is under development and will be tested and showcased in four Realbeds. Publication is planned for the end of the project in 2023, together with more detailed information on the good practices as suggested and tested.

PyDCC tool set

PyDCC is a toolset built in Python to use and extract the information contained in a DCC file. It is developed in an open-source software approach. The PyDCC software toolset [31] with its GitHub repository will be made publicly available at the latest by the end of the GEMIMEG-II project. PyDCC is aligned with the latest release version of the DCC schema and as such needs to be adapted to upcoming releases of the DCC schema. All GEMIMEG-II partners contribute to it, and Siemens has the role of maintainer. PyDCC will have a basic functionality which can be extended over time. In section 4, different tools and methods, or the actual state of concepts, were presented which are used in GEMIMEG-II to digitalize the calibration workflow.
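Since a DCC is an XML file, the kind of information extraction such a toolset performs can be illustrated with a few lines of generic Python; this sketch uses the standard-library ElementTree with hypothetical element names and namespace, and is not the actual PyDCC API:

import xml.etree.ElementTree as ET

# Hypothetical namespace and element names, chosen only for illustration.
NS = {"dcc": "https://example.org/dcc-schema"}

def read_calibration_values(path: str):
    """Parse a DCC-style XML file and return (name, value, unit) triples."""
    root = ET.parse(path).getroot()
    results = []
    for res in root.findall(".//dcc:result", NS):
        name = res.findtext("dcc:name", default="", namespaces=NS)
        value = res.findtext("dcc:value", default="", namespaces=NS)
        unit = res.findtext("dcc:unit", default="", namespaces=NS)
        results.append((name, value, unit))
    return results

Because every DX-based document shares the same semantic keys, such extraction code can be reused across DCR, DCC, and DCA files.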
This entire set of digital tools and functionalities enables machines to unambiguously identify a sensor system, to document its full software and firmware configuration, to represent all units of measurement in a versatile system, to exchange and understand calibration documents, to rely on a full digital trust chain, to safely discover and onboard new sensor systems with appropriate methods, to exchange calibration files via an interface, and to extract and utilize the calibration information with a toolset. All these functions are necessary prerequisites for implementing a fully digital calibration process chain in modern industrial IT and OT environments. The set of functionalities provides a solid basis for piloting the DCC functionality in GEMIMEG-II and for further enhancements.

Quality of sensing, of data or of information

In a digital world, data scientists have to rely on the data they get. Every single datum appears valid and carries the same trustworthiness as all the others; there is no awareness of the level of trust that can be assigned to a specific data value. This contrasts with the previous procedures in the analogue world, where the metrologist consciously checked whether everything around a measurement or measurement setting was under normal operating conditions, or somehow suspicious and worth checking out. The quality of a data value can be of critical importance in a fully digital chain of subsequent and dependent data operations. Therefore, a subproject of the GEMIMEG-II project is focused on data quality aspects, and first publications are available with general considerations and for different metrological applications [5,32,33]. The quality information for data can be differentiated according to the domains where new data is generated. During sensing, it is the quality of sensing (QoS). In the data-driven domain, for data operations like evaluation, fusion, model-based sensing, learning-based sensing, etc., it becomes the quality of data (QoD). In the information domain, when information is inferred from the data produced in the sensing or data domain, it becomes the quality of information (QoI). In the future, it might be advantageous to distinguish more domains where appropriate, but there should be no inflation of the number of domains. The domain-specific data quality indicators can be summarized as QoX, where X represents sensing, data, and information. The QoX are already shown in the conceptual diagram of figure 1 in the respective domain. As also indicated in this view chart, even the QoX should be fully traceable in parallel to the data measured. Separating and distinguishing the QoX based on the domains where they are determined might seem a bit cumbersome at first glance, but it bears the unique and important chance that a data user in a subsequent step of data processing directly knows where a QoX value was generated. Furthermore, QoS supports the concept of making sensor values as agnostic as possible to the individual physical sensor used. Typically, this can only be done by sensing domain experts, who are also in charge of providing the respective QoS. Hardware-agnostic data are very valuable for a robust data processing chain fed with the data. Thus, it allows for resilient operation of the whole system, even when a single sensor needs to be exchanged for some reason. Conceptually, the QoX indicators should be handled comparably to sensor systems as described in section 4.1.
A QoX needs to be defined with a unique name for the quality indicator and a definition of which sensor system parameter is characterized by the respective QoX, e.g. internal parameters concerning operational or functional aspects of the sensor system, or external parameters concerning the measurement circumstances. Preferably, a QoX is defined as a unitless scalar in the range [0, 1], with 1 as the optimum. This preference directly implies a scale with a positive direction, where improvement of a QoX corresponds to higher values. When all QoX values have the same range of allowed or expected values, it gets much easier to automatically use and process these QoX data further when the related data also gets processed. Table 1 lists some examples and application fields for potential QoX indicators. Finally, there can be a definition or recommendation for the interpretation of an indicator, e.g. in which parameter range the respective QoX is good, acceptable, bad, or for information only, e.g. at low trust levels. In some cases, e.g. in a trend analysis, it is much better for data evaluation to have a measurement value associated with a low QoX than no measurement value at all because of the low quality. When all the information for a QoX is defined, it can be a very versatile and powerful tool when the trustworthiness of data or of a specific measurement is under consideration. Since there can be multiple QoX defined for a single measurement, one single QoX gives further insight only for the respective context it was intended and defined for. The last line of the table gives an example where one measured value of a battery voltage at a known load current may be used to derive capacity information, based on a related battery model, as data, and a remaining battery life as information. Both the data and the information will have quality levels conveyed by respective QoD and QoI indicators.

How to compute and combine QoX values?

The data created in a measurement will be processed in subsequent processing steps. A parallel and adequate handling of the QoX along with the data is desirable with regard to how the related data quality propagates through the processing chain. When at least one of the data inputs to a processing step is critical in its quality, the QoX for the data output should indicate that the data could be erroneous. On the other hand, sometimes, and especially in processes and process control, it is preferable to have at least some data with a low or even very low QoX value rather than no data at all. Therefore, a system to compute an output QoX value based on one or multiple QoX values associated with the input data is desirable. Since the QoX are preferably unitless scalars, combining different QoX means assessing the quality of a given data value from different parameters or perspectives, just like a metrologist or data scientist would do when qualifying a result. Combining different QoX into one common or aggregating QoX is not to be confused with combining values of different units. In descriptive statistics, multiple methods are already established to aggregate numbers into one indicator. At least some of them can be adopted and refined to describe the quality or characteristics of data in a dataset. Even better, they can be employed according to our needs to propagate QoX values properly. The mean value is a very prominent example. There are different ways to define a mean value. These mean values average, to some extent, the input values to create a single output value representative of the dataset or ensemble of values.
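For illustration, here is a minimal Python sketch of such mean-based aggregation of QoX values, using the generalized (Hoelder) mean with optional weights and the threshold flooring discussed below; the threshold of 0.01 and the example weights are arbitrary illustrative choices:

import math

def combined_qox(qox, weights=None, p=0.0, floor=0.01):
    """Weighted Hoelder mean of QoX values in [0, 1].
    p = 1 gives the arithmetic mean, p -> 0 the geometric mean,
    p = -1 the harmonic mean. Values below `floor` are replaced by
    `floor` so a single zero cannot wipe out all other information."""
    q = [max(x, floor) for x in qox]
    w = weights or [1.0] * len(q)
    total = sum(w)
    if p == 0.0:  # geometric mean as the p -> 0 limit
        return math.exp(sum(wi * math.log(xi) for wi, xi in zip(w, q)) / total)
    return (sum(wi * xi ** p for wi, xi in zip(w, q)) / total) ** (1.0 / p)

qox_in = [0.9, 0.8, 0.0]                        # one input of very poor quality
print(combined_qox(qox_in))                     # about 0.19: low, but not zero
print(combined_qox(qox_in, weights=[2, 1, 1]))  # first input counts double

The geometric mean (p -> 0) reacts strongly to a single poor input, which matches the intent that questionable input quality should remain clearly visible in the output QoX.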
Based on the computation defined for a given mean value, some means are more sensitive than others to even single outliers with low QoX values. This would mean that even one input value of questionable quality results in a reduced QoX value for the output. This, in turn, notifies a user of the data to be careful when using the data if the resulting QoX is comparably low, e.g. when comparing this QoX with an application-specific threshold value. Some examples of different mean values are the so-called Pythagorean means: the arithmetic mean, the geometric mean, and the harmonic mean. A generalized parametric representation of these mean values is the Hoelder mean. For $n$ values $x_1, \dots, x_n$ they read, respectively:

$$\bar{x}_{\mathrm{arith}} = \frac{1}{n}\sum_{i=1}^{n} x_i, \qquad \bar{x}_{\mathrm{geom}} = \left(\prod_{i=1}^{n} x_i\right)^{1/n}, \qquad \bar{x}_{\mathrm{harm}} = \frac{n}{\sum_{i=1}^{n} 1/x_i}, \qquad \bar{x}_{p} = \left(\frac{1}{n}\sum_{i=1}^{n} x_i^{\,p}\right)^{1/p}.$$

Based on the generalized mean inequality, there is the relation: arithmetic mean ⩾ geometric mean ⩾ harmonic mean. When using some of these computation rules for means, like for the geometric mean, it might be important that none of the QoX has the exact value of zero. Otherwise, due to the product over all QoX, the respective mean becomes zero too, regardless of the values of all the other QoX. To stabilize the calculation, and to avoid that a single outlier or a less important QoX corrupts the information of all other QoX in this combined value, it is suggested, as a pragmatic approach, to replace an individual QoX below a given threshold value by the threshold value itself. The user or data scientist has to be cautious when applying this coarse data manipulation to a given mean calculation, or to selected QoX parameters only within a mean calculation. In our tests, suitable threshold values were selected as appropriate for the respective application. Practical values for a threshold could be 0.05, 0.01, or 0.001 for QoX in the range [0, 1]. It is important to note that the resulting combined QoX mean still has to reflect poor data quality sufficiently clearly for the respective application. For some applications, like process control, a low value might still be better than a zero value. When replacing the variable $x$ with the QoX, the formulas read:

$$\overline{QoX}_{p} = \left(\frac{1}{n}\sum_{i=1}^{n} QoX_i^{\,p}\right)^{1/p}.$$

These formulas imply that all QoX contribute to the output QoX in the same way. This need not be the case in a more generalized setting, since the data related to a QoX might enter the computing process with different powers and thus have a different impact on the resulting data. To reproduce this effect for the QoX, specific weight factors $w_i$ can be introduced into the formulas for each QoX value:

$$\overline{QoX}_{p,w} = \left(\frac{\sum_{i=1}^{n} w_i\, QoX_i^{\,p}}{\sum_{i=1}^{n} w_i}\right)^{1/p}.$$

These weight factors can be set individually for each QoX according to any specific requirement of a respective task or application. This concept is already known from statistics in principle, where the weight factors typically count the number of identical input values in the calculation and are thus integer values. In our case, the concept of weight factors can be expanded to (positive) real numbers in order to reflect the relative importance of a respective data value, or input source of values, relative to all the other data values or input sources used in the respective application. The suggestion to limit the range of potential QoX values to [0, 1] does not limit anything, since the weight factors can be used to balance the respective contributions.

Outlook and suggested next steps

The digitalization of the calibration domain in metrology depends on, and relates to, other digitalization initiatives, which in turn can cross-fertilize each other to gain speed and momentum.
The five Realbeds in the project serve as test environments for the concepts developed in the GEMIMEG-II project. They represent important industrial domains: the manufacturing and processing industries, autonomous driving, and a new piece of calibration equipment for large torques of up to 20 MNm. Since the project is already in its final year, the focus is shifting from conceptual work to piloting implementations. A final report will be given at the end of the project, including public showcases. To further broaden the basis of this significant digitalization effort of GEMIMEG-II, and in order to make this effort sustainable, there should be a maintainer of the software solutions created. To further promote internationality and international acceptance of this digitalization initiative, it would be favorable if a neutral metrological organization like the Bureau International des Poids et Mesures and/or the Comité International des Poids et Mesures (CIPM) could take over the role of promoter and facilitator of the entire digitalization initiative of the metrology and calibration domain. Thus, a group of national metrology institutes (NMIs) from member states of the CIPM could be mandated officially to jointly continue the development of the DCC or DX, together with the related software topics like the D-SI, the language packs for the semantic schemas, PyDCC, and potential future topics like the DQx documents. This mandate could be temporary and transferred over time between NMIs to secure international support and ownership. These efforts might also be open to industry, the Regional Metrology Organizations, and organizations in legal metrology as additional important drivers of digitalization in this ecosystem. Inviting and integrating all stakeholders of this ecosystem to contribute conceptually and technically could strengthen this digitalization initiative significantly in building a common digital document system. Differentiation between issuer types or roles in a document is possible via a content field 'issuer' or 'issuer type', as shown in the side notes of figure 3. On the other hand, it is also important to keep the DCC compatible with other digitalization initiatives like the unique product identifier, the digital product passport, the digital product nameplate, and the AAS. In 2023, an AAS sub-schema for calibration will be developed.

Data availability statement

No new data were created or analysed in this study.

Acknowledgments

The content presented was created by a great team effort of the entire GEMIMEG-II project team. I am deeply grateful to all team members from all the contributing organisations for their passion and dedication to digitalize the calibration domain in metrology and to make the DCC real and a good practice. It is a real pleasure to work with all of them and to coordinate this lighthouse project. The GEMIMEG-II project is funded by the German Federal Ministry for Economic Affairs and Climate Action based on a decision by the German Bundestag under Grant No. 01 MT20001A.
Efficacy of ranolazine in preventing atrial fibrillation following cardiac surgery: Results from a meta-analysis

Background: Atrial fibrillation (AF) is a common complication after cardiac surgery. Ranolazine is a Food and Drug Administration approved anti-ischemic drug which also has anti-arrhythmic properties. Recent studies have demonstrated the benefit of ranolazine in preventing post-operative AF (POAF) in patients undergoing cardiac surgery. Hence, we performed a meta-analysis of published studies comparing ranolazine plus standard therapy versus standard therapy alone for POAF prevention in patients undergoing cardiac surgery.

Methods: We performed a comprehensive search of Medline, Google Scholar, PubMed, abstracts from annual scientific sessions, and the Cochrane library database for studies that assessed the effectiveness of ranolazine plus standard therapy, compared with standard therapy alone, in preventing POAF in patients undergoing cardiac surgery. From all the studies, data on POAF events in each group were collected, and the random-effects (DerSimonian and Laird) method was used for the meta-analysis.

Results: Four studies with 663 patients were included in the final analysis, with 300 and 363 patients in the ranolazine plus standard therapy and standard therapy groups, respectively. The types of cardiac surgery were coronary artery bypass grafting (CABG), valve surgery, or a combination of CABG and valve surgery. In the pooled analysis, ranolazine plus standard therapy was associated with a significant reduction in POAF events compared to standard therapy alone (risk ratio = 0.44 [0.25, 0.78], p-value = 0.005). There was no difference in adverse events between the two therapies; however, in one study, more patients in the ranolazine group had transient symptomatic hypotension after the surgery.

Conclusions: Ranolazine may prove beneficial for POAF prevention following cardiac surgery. Although the pooled treatment effect is quite impressive, with a reduction of more than 50% in the risk of developing POAF, the small number of studies and the variation in ranolazine dose regimens across studies make our results inconclusive, though worthy of further investigation. These results should therefore be interpreted as hypothesis generating rather than conclusive.

Introduction

Atrial fibrillation (AF) is the most common cardiac arrhythmia, and it frequently occurs after cardiac surgery. AF is often associated with stroke, congestive heart failure, and myocardial infarction, all of which contribute to longer hospital stays, higher medical costs, and increased morbidity and mortality [1][2][3][4][5][6]. Approximately 20%-50% of patients experience post-operative atrial fibrillation (POAF) after cardiac surgery [1,6]. According to the American and European task forces, AF prevention is one of the essential goals after any cardiac procedure [4,7,8]. POAF prevention has been a therapeutic challenge so far, and a number of medications have been studied, such as beta-blockers, amiodarone, colchicine, and calcium channel blockers [4,5,9,10]. A recent meta-analysis showed that beta-blockers reduce the POAF incidence rate from 31% to 16.3% compared with controls, whereas amiodarone decreased the incidence of POAF to 19.4% compared with a 33.3% incidence rate in the control group [5,6].
Ranolazine is a Food and Drug Administration approved anti-anginal drug which also blocks the abnormal late sodium current and the rapidly activating delayed rectifier potassium current, leading to attenuation of sodium-calcium currents and of excessive electrical activity in atrial tissue. The reduced afterdepolarization reserve thus suppresses AF [6,11,12]. Moreover, the mechanism by which ranolazine increases the refractory period after repolarization [5,13] could decrease AF after cardiac surgery. Ranolazine has been studied for the prevention of POAF; however, ranolazine is not currently recommended for POAF prevention in formal guidelines. Recent studies demonstrated promising results for ranolazine plus standard therapy compared to standard therapy alone in preventing POAF in patients undergoing cardiac surgery. The effectiveness of ranolazine in preventing POAF has been studied in a few randomized controlled trials, and the data suggest that ranolazine may have a role in preventing POAF without causing a significant increase in post-operative complications or mortality.

Search strategy and study selection

We evaluated all the relevant studies published before December 2015. We included all studies in which ranolazine plus standard therapy was used and compared with standard therapy for the prevention of AF following cardiac surgery. The studies were searched in Medline, Google Scholar, PubMed, the Cochrane library database, and the annual scientific sessions of the American Heart Association, American College of Cardiology, Heart Rhythm Society, and European Society of Cardiology. Two independent reviewers performed the search electronically or manually. Disagreements were resolved through discussion to reach final decisions. All animal, editorial, and review studies were excluded. Data on the type of cardiac surgery, ranolazine dosage, duration of therapy, type of comparison group, and AF incidence rate following cardiac surgery were collected.

Selected published clinical studies

We reviewed 116 manuscript publications and 44 conference abstracts (Fig. 1). Of those, 19 studies assessed the effect of ranolazine on AF after cardiac surgery. We excluded 12 studies because they were review articles, and 1 article was excluded because ranolazine was used in combination with amiodarone in the study group instead of ranolazine alone. Two studies were excluded because they were abstract presentations at conferences of the same study, which was included in our final analysis. Finally, four studies (three manuscript publications and one abstract publication from the American Heart Association's Scientific Session) [14][15][16][17] were included in the final analysis.

Statistical analysis

We performed a meta-analysis including four clinical studies to provide an overall estimate of the effect of ranolazine therapy in preventing post-operative AF in patients undergoing cardiac surgery. The presence of heterogeneity among these studies was evaluated with Cochran's Q χ² test, and inconsistency was assessed with the I² statistic, which describes the percentage of the variability in effect estimates that is due to heterogeneity. Publication bias was assessed and displayed as a funnel plot of precision. Furthermore, we performed Egger's test and Begg and Mazumdar's rank correlation test to assess publication bias. The statistical significance of the summary treatment effect estimate was analyzed with the random-effects method [18].
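To make the random-effects computation concrete, the following Python sketch implements DerSimonian and Laird pooling of log risk ratios; it uses the per-study event counts of the four included studies as reported below and is an illustration, not the exact software used for the published analysis:

import math

# (events, n) for the ranolazine and control groups of the four included studies.
studies = [(3, 34, 21, 68),     # Tagarakis et al.
           (32, 182, 56, 211),  # Miles et al.
           (6, 57, 26, 57),     # Hammond et al. (propensity-matched)
           (5, 27, 8, 27)]      # Bekheit et al.

y, v = [], []                               # log risk ratios and their variances
for a, n1, c, n2 in studies:
    y.append(math.log((a / n1) / (c / n2)))
    v.append(1/a - 1/n1 + 1/c - 1/n2)

w = [1/vi for vi in v]                      # fixed-effect weights
fixed = sum(wi*yi for wi, yi in zip(w, y)) / sum(w)
q = sum(wi*(yi - fixed)**2 for wi, yi in zip(w, y))   # Cochran's Q
df = len(studies) - 1
c_factor = sum(w) - sum(wi**2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c_factor)        # between-study variance
i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0   # inconsistency I^2 in %

w_re = [1/(vi + tau2) for vi in v]          # random-effects weights
pooled = sum(wi*yi for wi, yi in zip(w_re, y)) / sum(w_re)
se = 1 / math.sqrt(sum(w_re))
print(f"RR = {math.exp(pooled):.2f} "
      f"[{math.exp(pooled - 1.96*se):.2f}, {math.exp(pooled + 1.96*se):.2f}], "
      f"I2 = {i2:.0f}%")

Run on these counts, the sketch reproduces the reported pooled risk ratio of 0.44 [0.25, 0.78] and an I² of about 54%.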
An overall p-value of less than 0.05 was considered statistically significant, except for heterogeneity and publication bias testing, where a two-tailed p-value of less than 0.1 was considered statistically significant. [Figure 1: study selection flow diagram. 160 potentially relevant citations/abstracts were screened; 16 manuscripts and 3 abstracts addressed ranolazine for AF prevention; 32 studies were excluded (basic science, animal models, and studies of ranolazine for AF treatment); 15 further studies were excluded (12 review papers, 1 study in which ranolazine was combined with amiodarone, and 2 duplicate conference abstracts of an included study); two randomized and two non-randomized studies evaluating ranolazine for AF prevention were included.]

Study characteristics

Tables 1 and 2 summarize the study and population characteristics, respectively. Tagarakis et al. assessed the association between ranolazine and POAF in a prospective, single-center, single-blinded, randomized study of 102 patients (34 patients in the ranolazine group, 68 patients in the standard therapy group) scheduled for elective on-pump coronary artery bypass grafting (CABG). The ranolazine group received 375 mg ranolazine orally twice daily, started 3 days before the planned surgery and continued until the day of discharge. The control group received standard care consisting of aspirin, atorvastatin, metoprolol, and perindopril. Although patients in the ranolazine group were older and required longer aortic cross-clamp times, only 3 (8.8%) patients in the ranolazine group developed AF, compared to 21 (30.8%) patients in the control group (p-value < 0.001). Miles et al. compared the effectiveness and safety of ranolazine with amiodarone for the prevention of POAF after CABG in a single-center, non-randomized, retrospective cohort study involving 393 patients. Of these 393 patients, 211 were administered amiodarone 400 mg/day for 7 days before elective CABG, and all of these patients were maintained on amiodarone 400 mg/day for 10 to 14 days post-operatively. In the ranolazine group, 182 patients received ranolazine 1500 mg one day before CABG, or on the day of CABG in an emergent situation. Ranolazine was continued at 1000 mg twice daily for 10-14 days after the surgery. No significant difference was found in the baseline characteristics of the two groups, except for a 3% lower ejection fraction and a slightly higher incidence of class IV heart failure in the amiodarone group. POAF occurred in 56 (26.5%) patients in the amiodarone group compared with 32 (17.5%) patients in the ranolazine group (p-value = 0.035). No significant difference in adverse events was found across the groups. Hammond et al. performed a single-center, retrospective cohort study to evaluate the incidence of POAF and the role of ranolazine in 205 patients who underwent CABG, valve, or combination surgeries. A total of 136 patients in the non-ranolazine group received standard beta-blocker therapy, and 69 patients were administered 1000 mg ranolazine before the surgical procedure and continued on the same dose twice daily for 7 days or until discharge in the post-operative period. Because of the non-randomized nature of the study, propensity score matching was adopted, and in the final analysis 57 pairs of patients were matched on propensity scores estimated using age, sex, ethnicity, comorbidities, type of surgery, urgency of surgery, preoperative medications, and type of insurance.
In the propensity score-matched analysis, POAF occurred in 6 (10.5%) patients in the ranolazine group and 26 (45.7%) patients in the control group (p-value < 0.001).

Bekheit et al. performed a single-center, prospective, double-blinded, randomized trial involving 54 patients to assess the role of ranolazine in the primary prevention of POAF in patients undergoing CABG and/or aortic valve replacement surgery. Twenty-seven patients were randomly assigned to receive ranolazine 1000 mg twice daily for 2 weeks, and the same number of patients received placebo for a similar duration. The incidence of POAF was 5 (19%) versus 8 (30%) in the ranolazine and control groups, respectively (p-value = 0.53).

[Table 1. Characteristics of the included studies in the meta-analysis.]

Efficacy outcome

Overall, 663 patients (300 ranolazine, 363 control) were included in the analysis. In the pooled analysis, ranolazine was significantly associated with a 56% reduction in AF events compared with the control group (risk ratio: 0.44, 95% confidence interval: 0.25-0.78, p-value = 0.005) (Fig. 2). There was a moderate amount of heterogeneity (I² = 54.0%); however, it was not statistically significant. There was no publication bias on visual estimation (Fig. 3), and no evidence of publication bias by Egger's test (p-value = 0.31) or Begg and Mazumdar's rank correlation test (p-value = 0.50).

Safety outcome

Because safety outcomes varied among the studies and the number of patients with adverse outcomes was small, a pooled estimate was not calculated. In general, there was no difference in adverse events between the two groups. Only in the study by Hammond et al. did more patients in the ranolazine group have transient symptomatic hypotension within 3 days after the surgery; ranolazine was discontinued for symptomatic hypotension in one patient, but the hypotension did not persist at 1 week after cardiac surgery. No difference was found in intensive care unit length of stay, 30-day readmission, or mortality between the two groups. In the study by Miles et al., a small number of patients in each group developed renal failure requiring dialysis, with no difference between the groups; likewise, no significant difference was found in 30-day readmission, mortality, or prolonged ventilation between the two groups. One patient in the ranolazine group had a thromboembolic complication. In the study by Tagarakis et al., no adverse outcomes were observed in either group; none of the patients required inotropic support or moderate blood transfusion. Two patients died in each group; however, the deaths resulted from non-cardiac conditions. In the study by Bekheit et al., QT duration was longer in the ranolazine group; beyond that, no significant difference was found in 30-day readmission or length of hospital stay between the two groups.

Discussion

A number of clinical studies have confirmed the beneficial effect of ranolazine in either the prevention or the treatment of AF. The first strong evidence was provided by the MERLIN-TIMI 36 trial (Metabolic Efficiency With Ranolazine for Less Ischemia in Non-ST-Elevation Acute Coronary Syndrome-Thrombolysis in Myocardial Infarction) [19], which showed that ranolazine may reduce the incidence rate of paroxysmal AF in patients with non-ST-elevation acute coronary syndrome, and that it also reduced overall AF burden.
A few more studies showed a benefit of ranolazine in pharmacological cardioversion. Fragakis et al. [20] concluded that a ranolazine-amiodarone combination achieved a higher rate of pharmacological cardioversion than amiodarone alone, suggesting a potential synergistic effect of ranolazine when added to amiodarone.

[Table 2. Baseline characteristics of study patients in Miles et al. [15], Tagarakis et al. [14], Hammond et al. [16] (propensity score-matched analysis), and Bekheit et al. [17]. Values are reported as mean ± SD or n (%). AF, atrial fibrillation; LVEF, left ventricular ejection fraction; CABG, coronary artery bypass grafting.]

In another study of pharmacological cardioversion, conducted by Murdock et al. [21], patients with paroxysmal AF converted to sinus rhythm within only 6 h of ranolazine administration. The HARMONY trial [22] evaluated the safety and efficacy of a ranolazine-dronedarone combination in the treatment of patients with paroxysmal AF; a significant reduction in AF was observed from the synergistic effect of ranolazine plus dronedarone, with a good safety profile. The RAFFAELLO clinical trial (Ranolazine in Atrial Fibrillation Following an ELectricaL CardiOversion) [23] assessed the safety and efficacy of ranolazine in the prevention of AF recurrence after successful electrical cardioversion and sought to ascertain the most appropriate dose of ranolazine. RAFFAELLO was a prospective, multicenter, randomized, double-blind, placebo-controlled, parallel-group phase II dose-ranging study, and it concluded that ranolazine at 500 mg and 750 mg significantly reduced recurrence after successful electrical cardioversion.

Although several studies have shown an effect of ranolazine in the prevention or treatment of arrhythmia, most of them were designed differently, except for the studies on the prevention of AF after cardiac surgery. We therefore performed a meta-analysis of the efficacy of ranolazine in preventing atrial fibrillation following cardiac surgery; to the best of our knowledge, this is the first meta-analysis to evaluate the effectiveness of ranolazine in preventing POAF after cardiac surgery. Our findings show that adding ranolazine to standard therapy reduces POAF by about 56% compared with standard therapy alone.

POAF is the most common tachyarrhythmia and a frequent complication following cardiac surgery. It can lead to severe thromboembolic complications, such as stroke, reduces quality of life, and prolongs hospitalization. Furthermore, early POAF is a predictor of late recurrence, so preventing POAF is important. AF after cardiac surgery remains a challenge, and the results from currently available treatment options are unsatisfactory. Amiodarone is the most potent antiarrhythmic drug and is often used along with standard therapy to prevent AF after cardiac surgery; however, it is frequently associated with hepatic, pulmonary, and thyroid adverse events. It is therefore imperative to find a treatment plan to prevent POAF. Ranolazine, an anti-ischemic medication with a novel inhibitory action on late inward sodium channels within cardiomyocytes, shows promising potential for AF prevention, and several recent studies have shown its benefit in POAF prevention in patients undergoing cardiac surgery. Moreover, a recently published review article by Saad et al.
[24] thoroughly discussed the potential of ranolazine in the prevention of not only atrial but also ventricular arrhythmias. Our findings should help in designing a randomized controlled trial to evaluate the efficacy, safety, dosing regimen, and cost-effectiveness of ranolazine in AF management.

Despite this promising finding, our study has several limitations. First, only four studies were included in the analysis, and the overall sample size was small. In addition, the study designs differed somewhat among the included studies: two of the four were non-randomized retrospective studies, the ranolazine dose differed in one study, and the duration of ranolazine therapy differed in each study. In the study by Miles et al. [15], a major limitation was the retrospective design, and comorbidities such as heart failure were more common in the amiodarone group, which could have influenced the result; moreover, it was the only study of the four in which ranolazine was compared with amiodarone, as amiodarone was the standard therapy at that hospital. The study by Hammond et al. evaluated patients by propensity matching, which could have reduced bias, but it was also retrospective in design. The study by Tagarakis et al. was the first randomized trial comparing ranolazine with standard therapy in the prevention of AF after cardiac surgery, but its sample size was small, and a lower dose of ranolazine was used than in the other studies. Last, the study by Bekheit et al. was a conference presentation; hence, we were unable to collect its data in detail. Despite these discrepancies, the role of ranolazine in the prevention of AF cannot be disregarded, and these findings will help in designing future randomized clinical trials.

Conclusions

Ranolazine may prove beneficial in the prevention of POAF following cardiac surgery. Because of the limitations described above, the results of this study should be interpreted as hypothesis-generating rather than conclusive. Although the pooled ranolazine effect is quite impressive, with a reduction of more than 50% in the risk of developing POAF, the variation in ranolazine dosage and duration across studies makes our results inconclusive, though worthy of further investigation.

Conflict of interest

All authors declare no conflict of interest related to this study.
Mask Adherence to Mask Mandate: College Campus Versus the Surrounding Community

Adherence to masking recommendations and requirements continues to have a wide variety of impacts on viral spread during the ongoing pandemic. As governments, schools, and private sector businesses formulate decisions around mask requirements, it is important to observe real-life adherence to policies and discern the subsequent implications. The CDC MASCUP! observational study tracked the mask-wearing habits of students on higher-education campuses across the country to collect stratified data about mask typologies, correct mask usage, and differences in behavior at locations on a college campus and in the surrounding community. Our findings from a single institution include a significant adherence difference between on-campus (86%) and off-campus sites (72%) over the course of the study, as well as a notable change in adherence at the on-campus sites with the expiration of a county-wide governmental mandate, despite the continuance of a university-wide mandate. This study, completed on and around the campus of East Tennessee State University in Washington County, TN, yielded pivotal information on the higher adherence on campus compared with the surrounding community, as well as on the changes seen when mask mandates were implemented and when they expired.

Introduction

Early in the COVID-19 pandemic there was uncertainty, among both the public and the scientific community, about how to curb community transmission rates. Mask-wearing is a disease prevention tool that has been deployed in many previous outbreaks [1]. Recommendations and mandates for universal mask wearing as infection prevention emerged early in the COVID-19 pandemic, and mask wearing has remained a leading mitigation method throughout. Masks are effective against transmission because COVID-19 is primarily spread through water droplets from the mouth that are expelled during coughing, sneezing, or speaking; masks block a considerable amount of these droplets from escaping into the air, minimizing the possibility of another individual coming into contact with potentially infectious bodily fluids. A variety of masks have been used throughout the pandemic to help mitigate infection rates, including cloth masks, surgical masks, and respirators such as N95 or KN95 masks [1]. Over the course of the pandemic, studies investigating the efficacy of mask wearing for reducing viral transmission have yielded controversial results [2]. Additionally, perceptions of mask efficacy, the cultural acceptance of mask wearing, and local policies appear to differ regionally and locally across the United States [3]. Across most states in the US, mask-wearing mandates were initially issued to reduce infection rates and enforce existing public health recommendations [4].

Between February and April of 2021, our team worked with members of the United States Public Health Service to investigate the utilization of masking and the different types of masks worn in and around East Tennessee State University in Johnson City, TN, as part of the Centers for Disease Control and Prevention (CDC) MASCUP! study. The project goals were two-fold. From a professional development standpoint, the project offered the opportunity to train a student team in community data science, study design and implementation, data collection and publication, and CDC passive observation techniques.
The observational study goal was to compare mask-wearing adherence on campus with adherence in the surrounding community. Additionally, a natural experiment became available when the mask mandate for Washington County, TN, was lifted during the study observation period.

Methods

Eight graduate-level public health students were selected and trained in passive observation skills through a specialized CDC protocol. They then conducted the observational study at five on-campus sites and five off-campus sites, with each student assigned a location where study staff had been given permission to observe individuals. Observations took place on weekdays from February 8, 2021, to April 30, 2021. When the observations began, mask mandates were in place both on campus and in Washington County, TN. On February 20, 2021, the county-wide mask mandate in Washington County, TN, was lifted, but the campus mask mandate remained in effect.

Student study staff made observations in one-hour periods and entered data using a standardized, campus-specific, IRB-approved REDCap form, completing observations at the same sites each week for the duration of the study. The team recorded data for every third individual who entered the designated site. The information recorded included the date, location, time, and point of observation (e.g., entrance or exit). For each observation, data were recorded on whether an individual (1) wore a mask, (2) did not wear a mask, or (3) had an unknown mask-wearing status. If an individual was recorded as wearing a mask, students specified whether the mask was worn correctly or incorrectly and then recorded the type of mask: surgical mask, N95, cloth mask, neck gaiter, or other. The total time was recorded at the end of each observation period. The REDCap data collection form ended with a section for explaining why individuals may have worn their masks incorrectly; students reported the most common errors in incorrect mask usage: (1) nose out, (2) mouth out, (3) only on the chin, (4) hanging from an ear, or (5) hanging from the neck. Any circumstance that may have caused incorrect mask usage was also reported, such as eating/drinking, exercise/playing a sport, being outdoors and not within six feet of others, or none. Lastly, an optional free-response field allowed observers to report additional notes. Each week, a new REDCap form was completed and submitted by each team member for their observed location.

Statistical Analysis

Descriptive statistics (frequencies and percentages) were calculated for the masking behavior questions, combined across all locations, for the two time periods of interest (while the county-wide mask mandate was in effect and after it was lifted). Responses on mask-wearing behaviors were compared between time periods via generalized linear mixed models (GLMMs) to account for any clustering effect of repeated observations within a specific location. Comparisons were then made separately for the campus-only and non-campus locations between the two time periods via GLMMs; the campus mask mandate remained in place throughout. Finally, regardless of whether a mandate was in place, on-campus masking behaviors were compared with off-campus masking behaviors via GLMMs. Significance for all models was determined a priori at the α = 0.05 level.
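As a sketch of this modeling step: statsmodels does not make binary-outcome GLMMs as convenient as R's lme4, so the example below substitutes a GEE with an exchangeable working correlation, which likewise accounts for clustering of observations within a location. The data frame is simulated to mirror the reported overall rates (86% masked during the mandate, 82% after); all column names and sites are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Simulated observation-level data (hypothetical): one row per observed
# individual, with the observation site as the clustering unit.
rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "location": rng.choice(["library", "gym", "cafe", "downtown", "park"], n),
    "period": rng.choice(["mandate", "post"], n),
})
# Masking probability mirrors the reported 86% (mandate) vs. 82% (post).
p_mask = np.where(df["period"] == "mandate", 0.86, 0.82)
df["masked"] = rng.binomial(1, p_mask)

# GEE with an exchangeable working correlation handles repeated
# observations clustered within each location (a stand-in for the GLMM).
model = smf.gee(
    "masked ~ C(period, Treatment(reference='post'))",
    groups="location",
    data=df,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
result = model.fit()
print(result.summary())
print("Odds ratio (mandate vs. post):",
      round(float(np.exp(result.params.iloc[1])), 2))
```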
Results

In total, the student research team observed N = 3262 individuals over the course of the study, with two observations excluded because of an unknown mask-wearing status. The period from the start of observations until the county-wide mask repeal included n = 587 observations, and the period after the mandate was lifted included n = 2673 observations. Observations took place both in Johnson City (5 locations) and on the main campus (6 locations) of East Tennessee State University (ETSU).

There was no significant difference in mask wearing between the period when the mask mandate was in place and the period when it was not (86% vs. 82%, p = 0.06). Among those who wore a mask, there was no difference in wearing it properly between the two periods (92% vs. 93%, p = 0.19). Mask types were similar between the periods, with cloth masks being the most common (70%), followed by surgical masks (25%) and neck gaiters (4%); very few people wore N95 (1%) or another kind of mask (< 1%). Overall, mask wearing was common across the study, ranging from 61 to 100% (Fig. 1).

Having the county-wide mandate in place versus lifted was not associated with a difference in overall mask wearing during this short time frame of the COVID-19 pandemic. However, when looking only at campus locations, the lifting of the mandate slightly reduced the number of mask-wearing individuals on campus, whereas this was not true for off-campus locations. Regardless of whether a mandate was in place, mask adherence at on-campus sites was consistently higher than at the surrounding community locations.

When comparing the time periods within the campus locations (n = 1404), there was a statistically significant difference in mask wearing between the mandate and post-mandate periods (95% vs. 92%, p = 0.01). The odds of wearing a mask on campus were 2.14 (95% CI 1.20-3.83) times higher when the mask mandate was in effect compared to after it had been lifted. For those wearing a mask on campus (n = 1294), there was no difference between the periods in whether it was worn properly, with 94% wearing it correctly in both periods (p = 0.74). At non-campus locations (n = 1856), there was no difference in mask wearing during the mandate compared with after (77% vs. 75%, p = 0.51); however, there was a statistically significant difference in wearing it correctly (p = 0.04): of those observed wearing a mask at an off-campus location (n = 1398), 88% wore it correctly during the mandate compared with 83% after the mandate was lifted.

When examining on-campus and off-campus mask wearing regardless of time period, individuals observed at on-campus sites were more likely to wear masks than those at off-campus sites (92% vs. 75%, p = 0.047). The odds of wearing a mask were 4.26 times higher (95% CI 1.02-17.78) on campus compared with off campus (Fig. 2). Most of the on-campus locations had at least 85% mask wearing across all observation days, with the exception of one outdoor location (fountain); all off-campus locations had < 85% mask wearing across all observation days.
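The odds ratios above come from cluster-adjusted GLMMs; as a quick plausibility check, an unadjusted 2×2 odds ratio with a Wald confidence interval can be computed directly. The counts below are hypothetical, chosen only to be consistent with the reported on-campus proportions (95% vs. 92% over n = 1404); the crude estimate differs from the paper's cluster-adjusted 2.14, which is expected once clustering is accounted for.

```python
import numpy as np
from scipy.stats import norm

def odds_ratio_wald(a, b, c, d):
    """Unadjusted odds ratio with a 95% Wald CI from a 2x2 table:
    a = masked during mandate, b = unmasked during mandate,
    c = masked post-mandate,   d = unmasked post-mandate."""
    or_ = (a * d) / (b * c)
    se_log = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo, hi = np.exp(np.log(or_) + np.array([-1, 1]) * norm.ppf(0.975) * se_log)
    return or_, lo, hi

# Hypothetical on-campus counts consistent with 95% vs. 92% masking
# over 1404 campus observations (260 during the mandate, 1144 after).
or_, lo, hi = odds_ratio_wald(a=247, b=13, c=1052, d=92)
print(f"Crude OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```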
Discussion

Since the start of the COVID-19 pandemic, local, regional, and national governments have grappled with concerns over the effectiveness of, and adherence to, mask mandates among different populations [5]. Many officials have also questioned differences in effectiveness and safety among common mask types. Likewise, correct usage of a mask is an important distinguishing factor when evaluating the overall effectiveness of such mandates. This observational study, part of the CDC MASCUP! study [6], addresses a critical need for more empirical evidence on how mask wearing changes with mandates versus without them. This evidence may be even more crucial considering that governments are increasingly taking a hands-off approach to these types of mandates, deferring instead to individual institutions, such as schools and private businesses, to make such decisions.

Taking place between February and April 2021, this observational study sought to investigate the utilization and intricacies of masking behaviors in a community environment (Washington County, Tennessee) compared with a campus setting, under masking restrictions that varied at the county and university levels. The study also encompassed a critical timeframe before and after the expiration of the mask mandate in Washington County, capturing how a county mandate affected masking behaviors in a variety of locations.

Limitations of this study include the imbalance between the number of observations occurring pre- and post-mandate, with post-mandate observations making up the majority of the total. Another limitation lies in the accuracy of the true adherence rates depending on observation location: individuals may deem it necessary to wear a mask in more crowded areas as opposed to a low-traffic campus setting, in indoor versus outdoor locations, or in locations where food is served (e.g., coffee shops).

Conclusion

The findings suggest several implications for both governmental bodies and university decision-makers. One is that institutional mandates are effective, producing significant adherence differences regardless of the presence of surrounding governmental mandates. Despite the high rates of mask adherence, campus adherence rates did decline after the expiration of the county-wide mandate, implying that while both county and institutional mandates are important, more localized ones may be more effective. The ability to wear a mask correctly, however, was not correlated with mask mandates and may instead be tied to education or willingness.

Further research could include tying observational mask data to concrete COVID-19 rates at the community or university levels. It may also be prudent to examine multiple time periods to compare pre-mandate, during-mandate, and post-mandate adherence rates, as these may differ significantly from the two time periods observed in this study. Controlling for demographic differences such as race, sex, age, education level, and even political affiliation may also yield important results that distinguish the true impact of university mandates on masking behaviors across demographic groups.