Fracture Behavior of the Cement Mantle of Reconstructed Acetabulum in the Presence of a Microcrack Emanating from a Microvoid In this work, the finite element method is used to analyze the behavior of a crack emanating from a microvoid in the acetabular cement mantle by computing the stress intensity factor. A simple 2D multilayer model developed by Benbarek et al. [1] to reproduce the stress distributions in the cement mantle has been used. To locate the likely site of crack initiation, the stress distribution around the microvoid is determined at several positions for three different loads. The effect of axial and radial displacement of the microvoid in the cement is highlighted. The results indicate that the stress distributions σxx, σyy and σxy induced in the cement around the microvoid are not homogeneous, whatever its position. In addition, there is a high risk of crack initiation in several radial directions depending on the position of the microvoid in the cement mantle. The crack can be triggered in several directions in mode I or mode II, while the mixed mode is dominant. The KI and KII stress intensity factors (SIFs) vary according to the position of the microcrack and of the microvoid in the cement, and they increase proportionally with the weight of the patient. It should be noted that the KI SIF is about two times higher than the KII SIF. The maxima of the KI SIF are obtained for the microvoid position α = 100° with the microcrack at θ = 45°, and the risk of propagation of the microcrack is very high for this orientation. Introduction Although polymethylmethacrylate (PMMA) has long been known as a fixative in orthopedic and dental prostheses, its first use in hip arthroplasty dates from 1962 [2]. Despite the various disadvantages of PMMA, improved techniques of cement preparation and implementation have contributed to the survival of cemented arthroplasties. In addition to fixing the implant, the bone cement is responsible for transferring the joint loads to the bone. Facing transmitted loads that can reach, in some circumstances, eight times the weight of the patient [3,4], the bio-competence of the cement must be good [5]. Thus, the mechanical and physical properties of the cement are determining factors in the service life of the implant [6,7]. These properties are strongly affected by the size and number of pores in the cement [8]. Indeed, porosity can cause crack initiation by fatigue, by creating irregular areas [9,10]. Thus, surgeons tend to reduce porosity to ensure greater fatigue resistance. However, this tendency is directly related to the mixing method chosen during the preparation of the cement [11]. For example, the conventional mixing method leads to a porosity ranging from 5 to 16% depending on the type of cement, while vacuum mixing generates a porosity of 0.1 to 1% [12,13]. Some authors assume that the latter method largely increases the mechanical properties owing to the decrease in micropores and macropores [14,15], thus improving the life of the cement [16,17]. The effect of the position and orientation of a crack in the cement under three loads has been studied with the finite element method by Serier et al. [18] and Bachir Bouiadjra et al. [19]. They indicate that, for the third load case, the risk of crack propagation is higher when the crack is in the horizontal position, for both failure modes. Achour et al.
[20] presented a study on the mechanical behavior of damage (failure) of the cement/bone and cement/stem interfaces in a total hip prosthesis. They conclude that an interfacial crack (cement/bone) in the distal region can propagate by opening and shear; it can cause a risk of sudden fracture if the crack length exceeds 0.6 mm. The risk of failure of the cement/bone or cement/stem interface in the proximal area is less important compared with the medial and distal areas. Flitti et al. [21] studied the effect of the position of a microcrack on the mechanical behavior of a total hip prosthesis under the effect of a 90 kg patient's weight. They concluded that a crack initiated in the distal femoral cement area grows in mixed mode, unlike one initiated in the proximal zone, which can propagate in mode II. Bouziane et al. [22] examined the behavior of microvoids located in the cement of a simplified three-dimensional model of the hip prosthesis. They show that when the microvoid is located at the proximal and distal areas, the static load causes a higher stress field than the dynamic load. Unlike the work of Benbarek et al. [1] and [18][19][20], which considered the effect of the position of a microcrack of constant length emanating from the microvoid, in this paper we show the variation of the KI and KII factors as a function of the length of the microcrack emanating from the microvoid and for a number of positions in the cement. These positions are chosen according to the critical von Mises stress amplification determined around the microvoid along the circumference and through the thickness of the cement (P1-P9). To complete this study, we evaluated the principal stresses at the two interfaces of the cement (upper and lower). The presence of two microcracks emanating from the microvoid is also highlighted. The objective of this study is to shed light on the influence of the presence of a microvoid, and of a crack emanating from it, on the fracture behavior of bone cement, using the finite element method. The effects of the position of the microvoid in the cement and of the size of the microcrack on the fracture behavior are highlighted. The stress intensity factor at the microcrack tip is used as a rupture criterion. The analysis of the distribution of the von Mises stresses in the various components of the acetabular part and the implant is made for a zero angle between the implant neck and the axis of the cup. We therefore develop a finite element model to analyze the effect of the presence of a microvoid on the behavior and strength of bone cement. Geometrical Model The geometrical model is generated from a roentgenogram of a 4 mm slice normal to the acetabulum through the pubis and ilium. The cup has an outer diameter of 54 mm and an inner diameter of 28 mm. It is sealed with a bone cement mantle of uniform 2 mm thickness [23]. The inside diameter of the UHMWPE cup is 54 mm. The cup-cement and cement-subchondral bone interfaces are assumed to be fully bonded. In this work two cases were analyzed: the first considers the presence of a microvoid in different positions in the cement, and the stress concentrations are determined.
In the second case we assume initially the propagation of a microcrack emanating from the microvoid at the position determined above and characterized by a high stress concentration gradient; then it is assumed that the microcrack emanates from the microvoid at different positions. The stress intensity factors are evaluated. The model was divided into seven different regions (Figure 1) according to the different elastic constants, with isotropic properties considered in each region. The main areas are: cortical bone, subchondral bone and spongious bone [24][25][26][27][28]. The femoral head was modeled as a spherical surface attached to the spherical acetabular cavity. The acetabular cavity is located on the outside of the hip bone at the junction of its three components (Figure 1): ilium, ischium and pubic bone. Table 1 summarizes the material properties of the cement mantle, the cup and all sub-regions of the acetabulum bone. Finite Element Modelling The acetabulum was modeled using the finite element code Abaqus 6.11.1 [30]. To simplify the study, a 2D model of the acetabulum was considered, representative of a section taken through the transverse plane of the acetabulum. Bergmann [25] found that the variation of the resultant forces acting on the acetabulum is largest in the transverse plane. A very fine discretization was used to represent the geometry as faithfully as possible, and special quarter-point elements were used near the microcrack tip. Figure 2 shows the mesh of the geometrical model. The model consists of 20611 elements in total: 13564 quadrilateral elements of type CPS4R and 7047 triangular elements of type CPS3. We opted for an orientation defined by an angle of 0° between the implant neck and the axis of the cup, which reflects a posture of the human body. To consider the worst case, we chose this zero inclination angle (see Fig. 3), which was used by Benbarek et al. [1], who indicate that it produces the highest stress concentration. The considered body weights are 70, 140 and 210 kg. The sacroiliac joint was completely fixed, while the pubic joint was free in the sagittal plane. The boundary conditions considered are shown in the configuration of Figure 3: the pubic nodes are blocked in all directions, the nodes on the wing of the ilium are blocked along the x axis, and a uniformly distributed load is applied on the implant. The contact between the bone and the cement and between the cement and the cup was taken as fully bonded, while the contact between the femoral head and the cup was assumed to be frictionless with small sliding. Variation of Von Mises Stress Before analyzing the stress intensity factor at the microcrack tip, it is necessary to analyze the stress distribution around the microvoid to predict microcrack initiation. It is clear that the stress distribution is not uniform around the microvoid; we note several peaks at each radial position of the microvoid. All these stresses are due to the compression effect produced by the weight of the patient. At the radial position α = 0°, the maximum stress at the cup-cement interface is of the order of 20 MPa, and at the cement-subchondral bone interface it is of the order of 35 MPa.
From the first interface to the second, the stress roughly doubles, which shows that when the microvoid is close to the cement-subchondral bone interface the interaction effect is much larger than when it is close to the cup-cement interface. The maximum stresses around a microvoid near the cement/subchondral bone interface at the position α = 0° are of the order of 35 MPa, 70 MPa and 140 MPa, respectively, for patient weights of 70 kg, 140 kg and 210 kg. This shows the effect of the interaction between the microvoid and the interface. In these three cases the maximum stress exceeds the tensile failure limit, which shows the severity of this defect position in the cement. In addition, the stresses become significant depending on the axial position of the microvoid. From position P1, where the cavity is close to the cup-cement interface, the maximum stress increases progressively as the microvoid approaches the cement-subchondral bone interface. This finding holds regardless of the radial position of the microvoid. The stress levels at this radial position of the microvoid are about four times lower than the compressive fracture limit, and about three times lower in traction, which shows that they are relatively low. By contrast, for a weight of 140 kg and a microvoid position of 100°, the stresses approach the tensile strength limit at the angles 30° and 210°. The stress σyy greatly exceeds the strength in tension and compression. In this case, the cement is almost fragmented in tension or compression depending on the position of the microvoid in the binder, and in the interval ranging from 90° to 120° at both interfaces. Stress Variation The first peak is obtained at 0° and the second at 100° for the two interfaces of the cement. In this case, the von Mises stresses are almost three times lower than the tensile strength. It should be noted that if a microvoid lies in these two areas of peak stress, the defect will quadruple the stress and therefore presents a high risk of microcrack initiation, and the likelihood of its propagation is high. The von Mises stresses are higher at the cup/cement interface than at the cement/subchondral bone interface, which shows that the cement acts as a stress absorber. If a cavity is close to an interface, the stresses in the interface and around the cavity will be increased as a result of their interaction, and therefore the risk of damage is greater. This behavior shows that the existence of the microvoid is a source of increased stress concentrations and consequently of risk of loosening of the prosthesis. Variation of SIF of Microcrack Emanating from the Microvoid In this section we study the evolution of the stress intensity factors KI and KII as a function of the length of the microcrack emanating from the microvoid located in the bone cement. The microvoid is taken in the most unfavorable radial positions established previously. Three patient weights are considered: 70 kg, 140 kg and 210 kg. According to Figures 9.1-9.6, the stress intensity factors KI and KII vary with the length of the microcrack emanating from the microvoid. This variation is more marked with increasing patient weight. The KI stress intensity factors are positive for the microvoid position of 40° and negative for the position of 0°, while the KII SIF is negative whatever the microvoid position.
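As context for how such KI and KII values are typically obtained from a quarter-point finite element mesh, the short sketch below illustrates the classical displacement-extrapolation estimate under plane-stress assumptions. It is a generic illustration with invented input values and hypothetical function names, not the exact post-processing procedure or data of this study.

import numpy as np

def sif_from_crack_face_displacements(delta_uy, delta_ux, r, E, nu):
    # Estimate mode I/II stress intensity factors from the relative opening
    # (delta_uy) and sliding (delta_ux) of the crack faces at a small distance
    # r behind the tip, using the standard near-tip (Williams) displacement
    # field for plane stress: K = (mu / (kappa + 1)) * sqrt(2*pi / r) * delta_u
    mu = E / (2.0 * (1.0 + nu))          # shear modulus
    kappa = (3.0 - nu) / (1.0 + nu)      # Kolosov constant, plane stress
    factor = (mu / (kappa + 1.0)) * np.sqrt(2.0 * np.pi / r)
    return factor * delta_uy, factor * delta_ux

# Illustrative values only (PMMA-like cement, E ~ 2.3 GPa, nu ~ 0.3);
# displacements and r are in metres, so K comes out in Pa*sqrt(m).
K_I, K_II = sif_from_crack_face_displacements(
    delta_uy=2.0e-7, delta_ux=5.0e-8, r=5.0e-6, E=2.3e9, nu=0.3)
print(f"K_I  = {K_I:.3e} Pa*sqrt(m)")
print(f"K_II = {K_II:.3e} Pa*sqrt(m)")

In practice the estimate is evaluated at several node pairs behind the tip and extrapolated to r → 0 (or correlated at the quarter-point node); Abaqus itself can also return KI and KII through its contour-integral output.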
We note that the KI and KII SIFs obtained for the microvoid position α = 0° are much larger in absolute value than for the other positions, showing that the birth of a microcrack emanating from a cavity at this angle constitutes a higher risk of rupture compared with the other positions. This is due to the edge effect. The KII SIF is almost ten times smaller than the KI, except for the 70 kg load case, where it is almost negligible for large microcracks. At the position α = 100°, the KI SIF shows significant positive values that can easily cause rupture of the cement. This microvoid position significantly affects the bone-cement fracture toughness, which controls the failure process at the interfaces. In Figure 10 we present the von Mises stress levels for four different orientations. The same behavior is observed when the microvoid is at the position α = 120°. If the microvoid is at the position α = 40°, the microcrack is likely to propagate in pure mode I at θ = 135° or θ = 335°, or in pure mode II at θ = 20° or θ = 170°. If it is at 0°, the KI SIF reaches its maximum negative value at 0°, and the KII SIF at 335°. Conclusions This study was conducted to analyze the fracture behavior of bone cement in the presence of a microvoid and of a microcrack emanating from the microvoid. The following findings emerge from the results: → The distribution of the induced stresses
Epistemological Importance of Philosophy for Understanding of Contemporary Civilizational Processes. Philosophy and Cosmology, Volume 27. Globalization processes in the contemporary world have been implemented in political, economic, and socio-cultural discourses. Unlike these, the philosophical discourse is neither anachronistic nor amethodical, yet it has been omitted in this respect. Thus, the article presents the importance of philosophy in the area of learning and analyzing globalization processes. The authors, referring to the example of reactivating the concept of the New Silk Road (One Belt One Road Initiative) and the involvement of the People's Republic of China in this project, show the importance of philosophy in analyzing the ideas and processes associated with it. They emphasize the role of the "spirit" and the values that have been created around this project. The Silk Road is currently one of the most significant globalization initiatives to ensure peace and sustainable development. Therefore, the implementation of philosophy into epistemological and analytical processes of the idea of the Silk Road strongly emphasizes its importance in the contemporary philosophical discourse. Introduction In the modern world, the term "globalization" has become so widespread that hardly anyone wonders what constitutes the existential basis of the processes that are described by this word. The phenomenon of globalization has been absorbed in its interpretative nature by political, economic, and social sciences. The consequence of this was the consolidation of a paradigm in which only the above-mentioned narrative directions matter. The legitimacy of performing analytical activities within the framework of political, economic, or social discourses boils down only to superficial, often anachronistic, and amethodical descriptions of a globalized reality. Anachronism in this context is associated with the dynamic nature of the changes taking place, because often emerging political, economic, or social theories aimed at explaining reality become outdated quickly. The amethodical nature, in turn, comes down to the lack of a reliable analysis based on an appropriate methodology. This does not mean that political, economic, and social sciences do not have adequate research tools. Still, it correlates with the claim that they are insufficient for in-depth analysis and explanation of globalization processes. In this context, it seems justified to emphasize the importance of philosophical reflection, which has significantly weakened over the last few years. Therefore, this article aims to indicate the pragmatic aspects of the philosophical approach that can be used in explaining the processes of globalization in the contemporary world. In their deliberations, the authors refer to the work of Peter Sloterdijk, In the World Interior of Capital: Towards a Philosophical Theory of Globalization, which defends philosophy as an effective theoretical tool with which a reliable and in-depth analysis of globalization processes can be made. It highlights the epistemological aspect of the processes taking place in the contemporary world. However, this is done through the prism of philosophical methodological thinking. Peter Sloterdijk accuses the representatives of political, economic, and social sciences of a lack of precision in analyzing globalization processes.
He draws attention to the semantic nature of the terms used by researchers. He claims that the modern scientific narrative has been weakened precisely by the "unscientific gibberish" of media stations and pseudo-scientific circles that do not meet the standards of a "real" constructive scientific debate (Sloterdijk, 2013: 12-15). The article's authors, referring to Sloterdijk's thoughts, emphasize the importance of a metaphorical turn in philosophy, which returned to favor thanks to Immanuel Kant. This direction of study also boils down to metaphorology, which means, according to the assumption of Hans Blumenberg, a critical perception of the world and rationalization in the selection of concepts by which this world is described (Blumenberg, 2010: 38-43). The significance of the metaphorical turn in the context of the philosophical approach to analyzing globalization processes is best reflected by the contemporary idea of the authorities of the People's Republic of China concerning the reactivation of the Silk Road, which is also referred to as the One Belt and One Road initiative. The epistemology of the idea of the Silk Road has a long tradition and history, and in short, it actually means building communication networks connecting the East and the West. Thus, an initiative that has a metaphorical or, in other words, a philosophical foundation in terms of meaning is implemented as a great globalizing project. To accomplish the task set by the authors of the article, several scientific methods have been used. Primarily, logical, theoretical, technical, and interpretative analyses were used. This made it possible to study the importance of philosophy in the processes of explaining globalization in the contemporary world. In the next part of the article, the stages of shaping philosophical meaning in the context of explaining globalization will be presented. In this respect, the authors refer to the semantic theory of history by Reinhart Koselleck, who derives his considerations from the thesis concerning the limitation of historical time (Koselleck, 2004: 23-25). This means that history can be broken down into specific stages, most of which end in a paradigm shift. Nevertheless, in the context of these considerations, this time limitation is interpreted as a division into parts within a given whole. This whole is the idea of universalism and the desire to control the world. Philosophy as a Tool for Explaining Globalization Processes The philosophical discourse of globalization is related to the concept of cosmopolitanism, which covers the entirety of political, economic, and socio-cultural processes taking place in the world. Currently, in cosmopolitanism, three main directions can be distinguished: egalitarian, libertarian, and mondialistic. These three lines relate to the ancient schools of philosophy - the Cynics, Cyrenaics, and Stoics. However, this does not diminish the importance of the cosmopolitan idea today. It is considered a universal philosophical doctrine in time and space (Shestova, 2021: 203-205). Looking at cosmopolitanism from a global-historical perspective as the basis of globalization processes, its unchanging and permanent essence is noticeable, not dependent on specific conditions. The idea remains the same; only the way of looking at reality changes, which depends on technological and information development (Chumakov, 2015: 154-156).
In turn, Peter Sloterdijk, already mentioned in this article, lists in his work three stages of shaping globalization processes that can undoubtedly be included in the trend of cosmopolitan thinking. This German philosopher points to the cosmic-uranic stage, identified with Hellenism; the land-sea stage, related to modernity; and the synchronous stage, i.e., the period of existence of the postmodern globalized world. In addition, Sloterdijk presented the world and the processes taking place in it as spherical, i.e., a model of the globe where all points are equally distant from the center (Sloterdijk, 2013: 108-110; 234-237). The sphere in a given context, i.e., the sphere's shell, is a metaphor because the globe, due to the movement around its axis and the resulting centrifugal forces, has the shape of an ellipsoid, which means that not all points are as far away from the center as they would be in the case of a globe. Nevertheless, it is worth looking at the various stages of globalization, which constitute a continuity of the cosmopolitan idea but with a paradigm-shifting perception of reality. The first stage, the period of Hellenism, is associated with the representation of being and the cosmos with the help of a sphere. The circular shape, the same everywhere, having no beginning or end, and therefore perfect, is the most important metaphor of being for the Greek philosophers. The ancient cosmic order was also used in Leo Strauss's deliberations on the ideal form of the processes taking place in the world (Meier, 2006: 124-128). Moreover, in the era of Hellenism, sphericity was equated with the protective shell of the Earth. It made it easier for the Ancient Greeks to observe or refer to the Platonic sense of this phenomenon - to look into the essence of the world (James & Steger, 2017: 24-25). The second stage, covering the period of modernity, is related primarily to the paradigm shift in perceiving the world. Peter Sloterdijk describes this process as the transfer of transcendence to horizontality (Sloterdijk, 2013: 179-181). The Copernican revolution emphasizes this change even more because, in order to exceed the limits of one's knowledge and of oneself, one did not have to look skyward - it was necessary to focus on the immense ocean, the exploration of which became a challenge for the next several decades. In the modern era, man also begins to perceive materiality differently. A great example in this context is the Great Geographic Discoveries. The expeditions of Christopher Columbus, Vasco da Gama, Marco Polo, or Hernán Cortés were primarily about finding faster and cheaper trade routes. This period also marks the beginning of the great movement of money, which became the nucleus of modern commercial transactions. Adding to this a pragmatic approach and materialism, it can be presumed that this was the period of the birth of modern globalist capitalism. The third stage is the stage of contemporary globalization processes. It is accompanied, as before, by a paradigm shift in the perception of reality caused by technological and information developments. Still, the idea of cosmopolitanism and universalism remains unchanged. The world is subject to the laws of unification because the same rules and values govern regardless of place (in most cases). The rule is to look for cheaper and faster ways of communication, which are then used as tools to increase material resources (Svetelj, 2018: 396-397). The postmodern world is a world of large corporations and electronic money.
In 1944, when the global financial system was based on the gold resources of individual players, a parallel process began with the great game of resources, which proved to be a move away from paper money as a means of payment. Today, what counts above all are the funds in the bank account, which can be operated easily, making various transactions quickly and efficiently (Baader, 2016: 207-210). The world of corporations and material security, which is the modern philosophy of human existence, creates a network of mutual structural connections. Nowadays, it is easy to see a tendency in every aspect of human life; let us call it "corporate materialism." This process is exemplified by, for example, the education process at the university. It is no longer an institution that is supposed to teach how a person should perceive reality and understand it. The times of such a university, the manifestation of which was the German and English models, have changed along with the paradigm of perceiving reality. Nowadays, the university as an institution is experiencing a crisis. Massification, empowerment, and a decline in educational standards are apt descriptions of today's universities (Możgin, 2019: 59-61). The main problem faced by a young person entering university is choosing the right course of study. How is this choice made? The answer is straightforward: demand in the labor market. The demand for qualified employees has become almost the most important criterion for choosing studies by modern youth. The student today is involved in a multifaceted conflict between the university and the labor market. Therefore, an institution such as the university, which should educate the elite of society - following the assumption of the idea of the university - is today often only the next level on the path of a young person's career. Therefore, modern man's existential goal is to eliminate the concept of space (the next stage will certainly be space exploration and the search for the possibility of extending the spheres of influence to the space of other planets). Interpretation and, above all, understanding of the logic of globalization processes require an in-depth analysis. It is impossible to comprehend globalization without knowing the particular phenomena and ideas that underlie specific initiatives (Svyrydenko & Fatkhutdinov, 2019: 87-89). Certainly, the idea that accompanies any initiative to create communication links between different countries or parts of the world, taking into account all connotations - political, economic, and socio-cultural - is to achieve peace and sustainable development. Despite the existence of different lines of tension between the various players on the international stage, global, cross-border initiatives are primarily aimed at ensuring effective international cooperation while respecting the diversity existing in the world. This aspect is present in political agendas and shapes the educational discourse in many universities around the world. It aims to develop social skills that will ensure appropriate conditions for cooperation at the level of large corporations and governments of individual countries. It is up to individual states to play a significant role in ensuring sustainable development and peace. Despite the subjectivity of large corporations and international organizations in the light of international law, the state is still the most important decision-maker.
This claim is quite controversial because looking at the processes taking place in the international arena, corporations and international organizations are equal to states in their activity. However, when decomposing all processes in the contemporary globalized world into factors, we will see that the states decide whether or not to establish a specific organization and that state law regulates the functioning of individual corporations. This way of reasoning indicates that various types of activity in the international arena should be considered from the perspective of the activities of individual countries. Understanding contemporary globalization processes is possible if we look at them from the ideological perspective that forms the conceptual basis of such initiatives. The application of a philosophical approach in this context, allowing to reach the core of the idea of implementing a specific project, will allow not only to understand, but also to indicate further vectors of its development. It is therefore worth taking a closer look at the Silk Road initiative implemented by the People's Republic of China, which, thanks to its economic and socio-cultural potential, allows us to presume that it will be one of the most important initiatives in the future, especially in the context of the weakening role of the European Union and the United States. Philosophical Platform of the Belt and Road Initiative In 2012, the People's Republic of China authorities announced to the world that they were starting to implement the plan to re-vitalize the idea of the Silk Road. During the "China-Eurasia Expo," which took place on September 3, 2012, in the Chinese city of Urumqi, the Prime Minister of that country, Jiabao Wen, gave a speech entitled Towards New Glory of the Silk Road, in which he emphasized the importance of rebuilding the Silk Road and the tradition of ties between equal cultures and continents (Jiabao, 2012). Another signal confirming China's commitment to reactivating the old tradition of cooperation with the West was the speech of the President of the People's Republic of China, Xi Jinping, at the Nazarbayev University of Kazakhstan in Astana regarding the Silk Road Economic Belt. During the speech, declarations were made about reconstructing economic cooperation between China and other countries (Xi, 2013). In 2015, a document was published in China. One refers to the tradition of the old Silk Road and describes and explains the emergence of its new concept. It is essential in the context of this article that the content of this document includes the idea of "the spirit of the Silk Road, which is the historical and cultural heritage of all countries around the world" (Belt and Road Forum, 2015). The scope of getting to know "spirituality" is significant in the process of analyzing the One Belt One Road Initiative. Thus, to do so, one must resort to a philosophical approach that will allow us to reach the existential nature of this phenomenon. In this context, it will be reasonable to apply the teaching of the spirit of Wilhelm Dilthey, who, referring to the Hegelian concept of spirit, created the concept of the objective spirit. By implementing these considerations in the field of the science of spirit, one can presume that the idea of reactivating the Silk Road is pragmatic. 
"Spirituality," which is referred to in the official documents of the People's Republic of China, primarily emphasizes the values of peace, sustainable development, cooperation, mutual learning and its benefits, progress, prosperity, prosperity, and friendly relations. It is also worth emphasizing the dichotomous nature of this idea and its global importance because, firstly, the old Silk Road is the heritage of all humankind. Secondly, an exemplification of its "spirituality," the new initiative promotes peace and development worldwide (Nobis, 2020: 82-83). The reference to these contents has become an integral part of the official Chinese narrative on the international stage. Emphasizing the importance of reactivating the Silk Road has become a priority task for Chinese officials. It is worth mentioning that the President of the People's Republic of China, Xi Jinping, referred to this content during his speech at the United Nations in 2017, delivering a crucial document entitled Work Together to Build a Community of Shared Future for Mankind (Jinping, 2017). Other players in the international arena also implement the idea of rebuilding the Silk Road. For example, in 2005, the Silk Road Foundation was established in Seoul. In 2008, the Asian Development Bank in Manila launched The New Road Silk, a program describing the development of Central Asia. In addition, the reconstruction of the Silk Road as a tool connecting the East and West was also mentioned in Antalya, Turkey, during the Promoting Trade Among Silk Road Countries Forum, which took place in 2008 (Nobis, 2020: 81). Nevertheless, the involvement of the People's Republic of China made the idea of reactivating the Silk Road gain momentum and became one of the greatest globalist projects. In addition, UNESCO is also implementing its own Silk Road Programme. According to the concept proposed by this organization, the Silk Road is an idea that has been connecting civilizations, cultures, and people of different parts of the world for thousands of years, enabling not only the exchange of goods but also the interaction of ideas and culture that has shaped our world today: "Since 1988, UNESCO has sought to better understand the rich history and shared legacy of the historic Silk Roads, and the ways in which cultures have mutually influenced each other. In light of the enduring legacy of the Silk Roads in connecting civilizations throughout history, the UNESCO Silk Roads Programme revives and extends these historical networks in a digital space, bringing people together in an ongoing dialogue and fostering a mutual understanding of the diverse and often inter-related cultures that have sprung up around these routes. As a part of UNESCO's commitment to creating a culture of peace, the Silk Roads Online Platform seeks to promote this unique history of mutual exchange and dialogue" (UNESCO, 2021). The idea of the new Silk Road also has its metaphorical sense (Bhoothalingam, 2016). Silk was and still remains one of the most valuable commodities. It is known from history that few countries could afford to buy silk. What's more, it was an extraordinary distinction and prestige. In this way, by emphasizing the importance of silk in the history of that time, the importance of the route that it traveled is also emphasized. Today, this metaphor emphasizes the importance of communication routes between the East and the West. 
Silk, as a commodity, has not played such a role for a long time (without diminishing its importance in contemporary trade relations, of course), but its meaning, or in other words its "spirituality," has remained and dictates the contemporary political, economic, and socio-cultural discourse. The Old Silk Road was formally introduced into the scientific discussion by the German geographer Ferdinand von Richthofen. After returning from an expedition to Asia, he presented a map marking the old communication routes as only an ideological model. Today, the Silk Road is more than just a communication route, as it is primarily international cooperation for sustainable development and peace. It is the realization of the cosmopolitan idea mentioned earlier in the article. The New Silk Road is a way to combine two significantly different parts into one whole (Ling & Perrigoue, 2018). Thus, the philosophical approach used by the authors made it possible to explain the multifaceted and ambiguous nature of the idea of the Silk Road. At the same time, the role of philosophy as a tool for making such an analysis was emphasized. The Silk Road initiative, reactivated primarily by the People's Republic of China, is one of the most significant globalist projects, which is why it was so important to get to the existential basis of this project. The aforementioned "spirituality" and the axiological aspect create a specific envelope around this initiative, emphasizing the uniqueness of the One Belt One Road Initiative, which will certainly play an important role in the development of all mankind. Conclusions Political, economic, and social sciences have dominated the study of globalization as a worldwide process. Explaining the phenomena of globalization has therefore become too superficial in recent years. What matters is presenting the fact of a specific process, not an in-depth analysis of it. In this context, the philosophical discourse, which, through its methodical approach, makes it possible to explain the existential nature and the idea of globalization processes, is ignored. To emphasize the importance of philosophy, the authors of this article referred to the example of reactivating the idea of the new Silk Road (One Belt One Road Initiative), which the People's Republic of China is implementing. The "spirituality" of the Silk Road and its axiological context emphasize the importance of this initiative in the contemporary world. The assumption is to restore communication routes connecting the East and the West and, above all, to ensure sustainable development and peace worldwide. China's involvement in the reconstruction of the Silk Road was a turning point, as it was this country that gave impetus and had a decisive influence on the implementation of this idea of civilizational development (Eom, 2017). The authors also point out that the omission of philosophy in deliberations on globalization processes causes their transcendent, ideological character to be forgotten. Only the pragmatic, materialistic aspect is emphasized, which is visible, for example, in contemporary educational models. Today's globalized world is the result of the activities of previous generations. In this context, "globalization" is a relatively new concept that defines specific processes. However, what remains unchanged is the philosophical concept of cosmopolitanism, or universalism, that sets the tone for social development.
And to implement the assumption of world domination, humankind has looked for ever faster ways of communication and ever better tools to facilitate this activity, thus creating the material world. However, the material world plays a secondary role in the process of understanding the world. Therefore, to know what we are dealing with, we must remember philosophy, which answers the fundamental question: "what is it?". Consequently, it is worth emphasizing, with Peter Sloterdijk, that restoring philosophy to its proper place in our considerations of contemporary globalization processes is essential if we are to analyze them reliably and, above all, understand them.
A Positive Regulatory Role for the mSin3A-HDAC Complex in Pluripotency through Nanog and Sox2 Large networks of proteins govern embryonic stem (ES) cell pluripotency. Recent analysis of the critical pluripotency factors Oct4 and Nanog has identified their interaction with multiple transcriptional repression complexes, including members of the mSin3A-HDAC complex, suggesting that these factors could be involved in the regulation of Oct4/Nanog function. mSin3A is critical for embryonic development, but the mechanism by which the mSin3A-HDAC complex is able to regulate ES cell pluripotency is undefined. Herein we show that the mSin3A-HDAC complex positively regulates Nanog expression in ES cells through Sox2, a critical ES cell transcription factor and regulator of Nanog. We have identified the mSin3A-HDAC complex to be present at the Nanog promoter only under proliferating conditions, concurrent with histone acetylation. We find that Sox2 associates with mSin3A-HDAC complex members both in vitro and in vivo, similar to the interactions found between Oct4/Nanog and the mSin3A-HDAC complex. Knockdown of mSin3A-HDAC complex members or HDAC inhibitor treatment reduces Nanog expression, and overexpression of mSin3A-HDAC complex subunits stimulates Nanog expression. Our data demonstrate that the mSin3A-HDAC complex can positively regulate Nanog expression under proliferating conditions and that this activity is complementary to mSin3A-mediated p53-dependent silencing of Nanog during differentiation. Precise modulation of transcriptional activation and repression in ES cells is crucial for proper development, lineage differentiation, and genomic stability. At the core of ES cell self-renewal are the transcription factors Oct4, Sox2, and Nanog. These factors are central to maintaining pluripotency, as well as directing appropriate lineage commitment (1,2). In addition to regulating the transcription of other genes, Oct4, Nanog, and Sox2 also activate one another's transcription.
Oct4 and Sox2 together recognize and bind to a highly conserved consensus sequence, which is essential for Nanog expression in both mouse and human ES cells (3,4). In addition to Oct4 and Sox2, many other transcription factors have been linked to regulation of Nanog expression and ES cell pluripotency. For example, the Sall4 (5), FoxD3 (6), and STAT3 and T (7) transcription factors positively regulate Nanog expression, whereas p53 (8) and GCNF (9) suppress Nanog expression. In addition to specific transcription factors, the active or inactive transcriptional state of genes is established through highly regulated modulation of the underlying chromatin structure (10). This is achieved by a large family of histone-modifying and chromatin-remodeling enzymes (11). Nanog is known to interact with a variety of chromatin-modifying complexes. Analysis of the Nanog interaction network has shown that it is linked to repressor proteins (12). A recent protein interaction study identified the interaction of Oct4 and Nanog with a novel repressor complex, NODE, which contains several chromatin-associated proteins, including mSin3A, HDAC1, and HDAC2 (13). ES cells that have lost Mbd3, a component of the nucleosome-remodeling complex NuRD, show LIF-independent growth with no effect on Nanog expression or on the expression of lineage-specific genes (14), illustrating how chromatin-modifying complexes can regulate Nanog expression. In contrast, knockdown of Mta1/2-containing repression complexes led to ES cell differentiation by specifically up-regulating endoderm lineage markers (13). Collectively, these studies show that Nanog and Oct4 interact with multiple repression complexes to regulate their target genes and hence to control the fate of ES cells. HDAC1 and HDAC2 are members of a number of deacetylase complexes, including the mSin3A-HDAC complex, the NuRD complex, the BCH10-containing complex, and the CoREST complex (15). Although they have broadly similar functions in regulating transcription, the mSin3A-HDAC complex can be distinguished from these complexes by the presence of the mSin3A protein. mSin3A has been well described as a core component of a multiprotein co-repressor complex known to silence gene expression by deacetylating histones (for review, see Ref. 16) and has been shown to play an essential role in early embryonic development (17). ES cells derived from mSin3A-/- blastocysts form significantly smaller colonies and eventually die in culture compared with wild-type or mSin3A+/- ES cells (18). Likewise, HDAC1-null ES cells also show growth defects compared with wild-type ES cells, and loss of HDAC1 contributes to embryonic lethality prior to E10.5 (19). HDAC2-/- mice display difficulty progressing through gestation (20). These observations allude to a function of the mSin3A-HDAC complex in ES cell survival and proliferation. However, the mechanism by which the mSin3A-HDAC complex regulates ES cells still remains unclear. Our goal was to investigate the mechanisms underlying Nanog transcription. We sought out co-activator or co-repressor complexes (21) that could regulate Oct4 or Sox2, and consequently Nanog itself.
Using chromatin immunoprecipitation (ChIP), we have identified components of the mSin3A-HDAC complex present at the Nanog promoter under proliferating conditions, despite also finding acetylated histones at this locus. This binding is lost upon differentiation, concurrent with histone deacetylation. Depletion of mSin3A-HDAC activity by siRNA or treatment with HDAC inhibitors negatively affects Nanog transcription but does not similarly affect Oct4. Overexpression of mSin3A-HDAC complex members stimulates expression from the Nanog promoter in a reporter system. In addition, mSin3A-HDAC complex members interact with Sox2 in vitro and in vivo, providing a link between a factor known to positively regulate Nanog expression and the mSin3A-HDAC complex. Our studies demonstrate that the mSin3A-HDAC complex is required for ES cell proliferation due to its role in positively regulating the expression of Nanog. Luciferase-reporter Assays-NIH3T3 cells were seeded at a density of 2.5 × 10^5 cells per well of a 6-well dish, cultured in Dulbecco's modified Eagle's medium containing 10% fetal bovine serum and 1× penicillin/streptomycin, and maintained at 37°C and 5% CO2. DNA transfections were carried out using Lipofectamine 2000 (Invitrogen, 11668-019) as per the manufacturer's instructions. Cells were harvested after 48 h, and the luciferase activity of the lysate was measured with the Dual-Luciferase reporter assay system (Promega, E1960) using the Envision luminometer (PerkinElmer Life Sciences). mSin3A-HDAC Complex Binds to the Nanog Promoter during ES Cell Proliferation-To gain insight into Nanog transcriptional regulation in mouse ES cells, we identified factors that occupy the Nanog promoter during ES cell proliferation and differentiation by ChIP. To accomplish this, we established a 5-day retinoic acid (RA) treatment in the absence of LIF as a benchmark for differentiated ES cells. This treatment completely differentiates the cells, as marked by loss of Oct4, Sox2, and Nanog expression (Fig. 1, A and B) and increased expression of the lineage markers Hand1, HoxA5, and Nestin (Fig. 1A). We chose this method of differentiation over LIF withdrawal because RA uniformly differentiates ES cells into the neural lineage (22), whereas LIF withdrawal results in heterogeneous expression of pluripotency and lineage markers. We compared histone modifications and binding of a panel of factors at the Nanog promoter between 5-day RA-differentiated and proliferating ES cells, using qPCR primers that span the Oct4-Sox2 binding site (Fig. 1C). As expected, we found active histone modifications (H3Ac, H3K4 di- and trimethyl, and H3K79 dimethyl) (Fig. 1D), transcription factors (Oct4, Nanog, and Sox2), and RNA polymerase II (Fig. 1E) present at the Nanog promoter only under proliferating conditions, and not after differentiation. Additionally, repressive histone modifications (H3K9 mono- and dimethyl and H3K27 trimethyl) (Fig. 1D) were present only after differentiation. Surprisingly, we observed HDAC1, HDAC2, and mSin3A, members of the mSin3A-HDAC complex, at the Oct4-Sox2 binding site on the Nanog promoter under proliferating, but not differentiated, conditions (Fig. 1E). Because the mSin3A-HDAC complex has been well characterized as a transcriptional repressor, we did not expect to find it at the promoter of an actively transcribed gene. We assayed the expression of mSin3A, HDAC1, and HDAC2 over a 5-day RA treatment of ES cells, and detected all components throughout the time course (Fig.
1B), indicating that the loss of the complex from the Nanog promoter is not due to a global loss of these proteins. As previously reported (8), we also observed increased mSin3A occupancy at the p53 binding sites (diagram of the Nanog promoter in Fig. 1C) on the Nanog promoter following RA-induced differentiation (Fig. 1F). These data clearly demonstrate that occupancy of mSin3A at the Oct4-Sox2 binding site is distinct from that at the p53 binding site, and it is possible that mSin3A may have varying functions under different cellular conditions. Knockdown of mSin3A Diminishes Nanog Expression-Because the mSin3A-HDAC complex plays a central role in transcriptional repression (15), it was intriguing to find it bound to the Nanog promoter during proliferation, especially because the histones at this site are heavily acetylated. To assay the possible role for this complex in Nanog regulation, we used siRNAs to inhibit the expression of mSin3A-HDAC complex subunits. If the mSin3A-HDAC complex is essential for positively regulating Nanog, loss of mSin3A-HDAC complex members would be expected to diminish Nanog expression. We knocked down HDAC1, HDAC2, and mSin3A (individually or in combination) with siRNAs for 96 h and then analyzed Nanog transcript and protein levels. We saw efficient knockdown of the target proteins (Fig. 2A) and found that levels of Nanog protein and mRNA dropped by ~80% in the presence of mSin3A siRNA (alone or in combination with HDAC1/2), with no significant change in Nanog levels with HDAC1 or HDAC2 siRNA alone (Fig. 2B). This suggests that mSin3A is the key member of the complex regulating Nanog expression. We did see a reduction in the level of Nanog by knocking down both HDAC1 and HDAC2 together, but this knockdown also resulted in a roughly 60% reduction of mSin3A protein levels (Fig. 2A), further suggesting a key role for mSin3A in Nanog regulation. We also examined earlier time points of mSin3A siRNA knockdown and observed that mSin3A levels were reduced by ~80% within 24 h of siRNA treatment, and Nanog levels dropped by ~50% (data not shown). To rule out the possible involvement of p53 in suppressing Nanog expression, we assayed expression of the direct p53 targets p21 and Mdm2. Activated p53 increases expression of both p21 and Mdm2, but under these conditions we saw very little change in their expression (data not shown), indicating that p53 has not been activated. This suggests that mSin3A siRNA-mediated down-regulation of Nanog is independent of the p53 pathway. We next wanted to examine the expression of differentiation markers in the mSin3A knockdown ES cells, to eliminate the possibility that changes in Nanog levels are being caused by differentiation due to the loss of mSin3A. As seen previously (Fig. 2B), mSin3A knockdown resulted in decreased expression not only of Nanog but also of Sox2 and Rex1 (Fig. 2C). However, mSin3A knockdown had no effect on expression of Oct4 or a variety of lineage markers (Bmp4, Hand1, Gata4, Sox17, Cdx2, Eomes, and Fgf5) (Fig. 2C). Taken together, these results suggest that reduction in mSin3A levels reduces Nanog expression, and this effect is not due to an induction of differentiation either through p53 activation or mSin3A itself.
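The knockdown effects quoted above (for example, the ~80% drop in Nanog mRNA) are derived from qPCR fold changes. As a reference for how such numbers are typically calculated, the following is a minimal sketch of the standard 2^(-ddCt) relative-quantification method; the Ct values are invented for illustration and are not data from this study.

def fold_change_ddct(ct_target_sample, ct_ref_sample, ct_target_control, ct_ref_control):
    # Relative expression by the 2^(-ddCt) method.
    # ct_target_*: cycle thresholds for the gene of interest (e.g. Nanog)
    # ct_ref_*:    cycle thresholds for the reference gene (e.g. GAPDH)
    # 'sample' is the treated condition (e.g. mSin3A siRNA), 'control' the untreated one.
    d_ct_sample = ct_target_sample - ct_ref_sample
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)

# Invented example: the target Ct rises by ~2.3 cycles relative to the reference
# after knockdown, corresponding to roughly a 5-fold (~80%) reduction in transcript.
fc = fold_change_ddct(ct_target_sample=24.3, ct_ref_sample=16.0,
                      ct_target_control=22.0, ct_ref_control=16.0)
print(f"relative expression: {fc:.2f} (reduction of {100 * (1 - fc):.0f}%)")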
Enzymatic Activity of the mSin3A-HDAC Complex Is Vital for Nanog Expression-Because knockdown of the mSin3A-HDAC complex suggests a positive role for this co-repressor complex in Nanog regulation, we asked whether the enzymatic activity of the complex is important for regulating Nanog. To test this, we blocked deacetylase activity in ES cells by treating them with the HDAC inhibitors valproic acid, sodium butyrate, or trichostatin A (TSA). We found that, within 6 h, ES cells treated with any of these inhibitors showed significantly lower Nanog transcript (Fig. 3A) and protein levels (Fig. 3B), as measured by qPCR and Western blot analysis, respectively. Global histone acetylation was robustly increased (Fig. 3B), demonstrating the potent activity of the deacetylase inhibitors. The effect of these inhibitors did not permanently impact pluripotency, because Nanog expression could be rescued by drug withdrawal (Fig. 3C), and Oct4 levels were largely unaffected (Fig. 3, A-C). In addition, alkaline phosphatase staining and morphology were similar to untreated ES cells (data not shown). To further validate that the reduction in Nanog is p53-independent, we assayed the expression of the p53 target genes p21 and Mdm2. Neither of these two targets showed any change in expression in the TSA-treated ES cells (Fig. 3D), highlighting that prevention of deacetylase activity directly leads to the reduction of Nanog, independent of p53 activity. We next examined histone modifications at the Nanog and Oct4 promoters after treatment with deacetylase inhibitors. The Oct4 promoter still displayed active histone modifications in the presence of deacetylase inhibitors (Fig. 4A), consistent with a transcriptionally active locus. Following inhibitor treatment, the Nanog promoter showed loss of active histone modifications (Fig. 4B), an increase in repressive chromatin modifications (Fig. 4C), and complete loss of Oct4, Nanog, Sox2, the mSin3A-HDAC complex, and RNA polymerase II (Fig. 4D). These results indicate that there is an HDAC-sensitive step in the regulation of Nanog and that the enzymatic activity of the mSin3A-HDAC complex is critical to Nanog expression. mSin3A-HDAC Complex Interacts with Sox2 to Positively Regulate Nanog Expression-Given that deacetylase activity is important for Nanog expression, we wanted to determine the target of the deacetylase activity of the mSin3A-HDAC complex. We reasoned that it might be a non-histone factor present at the Nanog promoter, because the histones at the Nanog promoter remain acetylated in the presence of the mSin3A-HDAC complex under proliferating conditions. We focused our analysis on Sox2, because previous studies have shown that the activity of SRY, a prototypical member of the Sox family, is regulated by HDAC3 during mammalian sex determination (23). A recent study by Liang and colleagues (13) has shown that both Nanog and Oct4 associate with multiple deacetylase-containing complexes, including NuRD, mSin3A, and Pml, but did not demonstrate the mechanism by which these complexes might regulate Nanog or Oct4 activity. Pull-down experiments using NusA-tagged Sox2 and ES cell nuclear extract showed that Sox2 interacts with the mSin3A-HDAC complex (Fig. 5A). To confirm these observations, we co-immunoprecipitated endogenous Sox2 from nuclear and cytoplasmic fractions of ES cells treated for 6 h with either RA or vehicle control.
We confirmed the purity of the nuclear and cytoplasmic fractions using Western blots for histone H3 and α-tubulin. Sox2 interacted strongly with mSin3A, HDAC1, and HDAC2 in the nuclear fraction of vehicle-treated ES cells (Fig. 5B), and to a lesser extent with mSin3A and HDAC2 in the cytoplasm of vehicle-treated ES cells. Following RA treatment, there was a marked reduction in the robustness of this interaction. These results demonstrate that Sox2 interacts with the mSin3A-HDAC complex in vitro and in vivo, and this interaction is more prominent under proliferating conditions. To further confirm the interaction of Sox2 with mSin3A, we performed size-exclusion chromatography on ES cell nuclear extract. We fractionated the nuclear extract into 150 fractions, and probed every 5th fraction for mSin3A and Sox2. Most of the fractions that contained Sox2 also contained mSin3A, confirming that they are part of the same large complex (Fig. 5C). We do find some fractions with only mSin3A or only Sox2, indicating that these two proteins likely have biological functions independent of one another as well. Because we find both the mSin3A-HDAC complex and Sox2 at the Nanog promoter during active transcription of Nanog, we wanted to determine if the mSin3A-HDAC complex cooperates with Sox2 to promote Nanog expression. To address this, we took advantage of a luciferase reporter system in NIH3T3 cells. These cells do not normally express Oct4, Nanog, or Sox2, which allows us to more finely control which factors affect Nanog expression. We engineered three tandem repeats of the Oct4/Sox2 binding sites into the pGL3-TK-Luc reporter vector (Fig. 5D), where the thymidine kinase promoter drives a low level of luciferase expression. This vector, pGL3-O/S-TK-Luc, or its parental vector was transfected into NIH3T3 cells, along with expression vectors for Oct4 and Sox2. Transfection of pGL3-O/S-TK-Luc together with Oct4/Sox2 expression vectors more than doubled the luciferase activity over the parental vector with Oct4/Sox2 (Fig. 5E), verifying that addition of the binding sites increased Oct4/Sox2-mediated transcription. We next transfected the mSin3A-HDAC complex members into the assay system either with or without Oct4/Sox2. Although the absolute increase in luciferase varied among HDAC1, HDAC2, and mSin3A, the addition of any of the complex members increased luciferase activity ~3-fold over the same complex member without Oct4/Sox2. The increased luciferase activity with HDAC1 or HDAC2 compared with mSin3A may be due to interaction of these factors with endogenous mSin3A protein in 3T3 cells. We next added all three mSin3A-HDAC complex members together (±Oct4/Sox2); however, we did not see any further increase in luciferase activity compared with that seen with each factor alone.
Taken together, these results suggest that the mSin3A-HDAC complex positively regulates Nanog expression, and this is enhanced by the presence of Oct4 and Sox2. DISCUSSION Broad cohorts of proteins are involved in the regulation of Nanog (10). We reveal a novel role for the mSin3A-HDAC complex in Nanog transcriptional regulation, whereby the mSin3A-HDAC complex binds to the Nanog promoter at the Oct4/Sox2 binding sites, and its occupancy at the promoter correlates with active Nanog transcription. Luciferase assays confirm that the mSin3A-HDAC complex increases the transcriptional activity from these binding sites upon co-expression with Oct4/Sox2. This transcriptional activator role of mSin3A is different from its p53-dependent role in Nanog silencing (8). When we examined the region of the p53 binding site on the Nanog promoter, we also observed increased mSin3A occupancy following RA-induced differentiation (Fig. 1F). It is not unexpected that the mSin3A-HDAC complex could operate in different capacities at these distinct sites to positively or negatively regulate Nanog expression, perhaps through interactions with specific transcription factors. However, the reduction in Nanog expression we see with mSin3A siRNA is independent of the p53 pathway. Although core histones are the primary substrate of the mSin3A-HDAC complex, increasing evidence points to the importance of regulation of non-histone proteins by this complex (for review, see Ref. 24). Interaction of the mSin3A-HDAC complex with various factors impairs their ability to activate transcription of target genes (25)(26)(27)(28). Recent protein interaction studies identified Nanog and Oct4 as novel interactors of transcriptional repressor complexes, including mSin3A-HDAC (12,13). Wang and colleagues describe both Nanog and Oct4 as interactors of HDAC2 in proliferating ES cells (12), and Liang and colleagues demonstrate that Nanog and Oct4 associate with additional repressor proteins, including mSin3A-containing complexes (13). Our studies focused primarily on the mSin3A-HDAC complex and found that it interacts with Sox2. We show that this interaction can be largely disrupted by inducing differentiation in ES cells. The presence of the mSin3A-HDAC complex at the Nanog promoter can also be disrupted by inducing differentiation or by deacetylase inhibitor treatment, suggesting that acetylation may be important in the regulation of these pluripotency factors. The activities of both histone deacetylases and acetyltransferases have been shown to play a role in ES cell differentiation. Blocking deacetylation of histones using small molecules such as TSA delays the formation of embryoid bodies, indicating that histone deacetylation is necessary for full progression through differentiation (29). However, analysis by McCool and colleagues (30) indicated that there is a global increase in acetylation over the course of differentiation of ES cells and that changes in histone acetylation during differentiation vary across different promoters. Additionally, the authors demonstrate that changes in gene expression can be seen within 2 h of TSA treatment. In our study, we examine early stages of deacetylase inhibition by assaying levels of pluripotency factors after 3 or 6 h of TSA treatment.
We find that one of the earliest events during deacetylase inhibition (3 h) is the down-regulation of Nanog expression without a concurrent change in Oct4 levels, and we believe that this change is tied to the activity of the deacetylase enzymes. We have also found that long term TSA treatment differentiates ES cells (data not shown). There are clearly diverse roles for deacetylases in ES cells, and it will be interesting to see how different genes are regulated by deacetylases during development. Histone deacetylase enzymes are most commonly associated with the suppression of eukaryotic gene transcription (for review, see Ref. 31). However, there are a few reports documenting a role for deacetylases in gene activation. For example, Sin3p, the yeast homologue of mSin3A, has been shown to regulate transcriptional activation of Hog1 target genes by deacetylating target promoters and conferring resistance to osmotic stress (32). YY1 is actively acetylated and deacetylated, and acetylation of its zinc finger domains decreases its ability to bind DNA (33). We observed that the mSin3A-HDAC complex stimulates Nanog expression, and we surmise that this effect is mediated by interaction with Sox2. It will be interesting to determine if the mSin3A-HDAC complex is involved in regulating the activity of Sox2. In addition to repressor complexes, recent reports using Chip-seq technology demonstrate that the co-activator CBP/p300 is recruited to genomic sequences bound by clusters of transcription factors that include Nanog, Oct4, and Sox2 (34). Depletion of Nanog, Oct4, and Sox2 by RNA interference reduced binding of CBP/p300 to these genomic clusters in ES cells. This finding suggests that there is an active balance of HAT and HDAC activity regulating Nanog and its targets. Further analysis will be needed to determine how different families of co-activator and co-repressor complexes regulate the pluripotency transcription factor network in response to the cellular environment.
5,397
2009-03-13T00:00:00.000
[ "Biology" ]
On Application Oriented Fuzzy Numbers for Imprecise Investment Recommendations: The subtraction of fuzzy numbers (FNs) is not an inverse operator to FNs addition. The family of all oriented FNs (OFNs) may be considered as the symmetrical closure of the family of all FNs, in which subtraction is an inverse operation to addition. An imprecise present value is modelled by a trapezoidal oriented FN (TrOFN). Then, the expected discount factor (EDF) is a TrOFN too. This factor may be applied as a premise for invest-making. The proposed decision strategies depend on a comparison of an oriented fuzzy profit index and the specific profitability threshold. In this way we obtain an investment recommendation described as a fuzzy subset on the fixed rating scale. The risk premium measure is a special case of the profit index. Further in the paper, the Sharpe ratio, Jensen's alpha, the Treynor ratio, the Sortino ratio, Roy's criterion and the Modiglianis' coefficient are generalised for the case when an EDF is given as a TrOFN. In this way, we get many different imprecise recommendations. For this reason, an imprecise recommendation management module is described. The obtained results show that the proposed theory can be used as a theoretical background for financial robo-advisers. All theoretical considerations are illustrated by means of a simple empirical case. Introduction Imprecision is a natural feature of financial market information. A widely accepted way of representing an imprecise number is a fuzzy number (FN). The notion of an ordered FN was intuitively introduced by Kosiński et al. [1]. It was defined as an FN supplemented by its orientation. A significant drawback of Kosiński's theory is that there exist ordered FNs that cannot be considered as FNs [2]. This caused Kosiński's original theory to be revised by Piasecki [3]. At present, the ordered FNs defined within Kosiński's original theory are called Kosiński's numbers [4][5][6][7]. If ordered FNs are linked to the revised theory, then they are called Oriented FNs (OFNs) [6,7]. The family of all OFNs has a symmetry axis that is equal to the family R of all real numbers. In Section 2, this axial symmetry is described in detail. The family of all OFNs may be defined equivalently, with the use of the discussed axial symmetry, as the symmetrical closure of the family of all FNs. Symmetry allows us to avoid problems related to the fact that FNs subtraction is not an inverse operator to FNs addition. A robo-adviser is an internet platform providing an automated, algorithm-driven financial planning service that requires no human involvement, which implies a minimal operating cost for any robo-adviser. Robo-advisers allow different finance models to be applied in the algorithms that edit financial advice. The implemented algorithms can inform investors of any change in the market within a short period of time. In this way, robo-advisers efficiently implement any investment strategy by using their built-in automated algorithms [8].
• The use of FNs in financial analysis only leads to averaging the imprecision risk;
• The application of OFNs in financial analysis may minimise the imprecision risk.
Therefore, the main aim of this paper is to extend the investment-making models described in [46] to the case of an imprecise PV estimated by trapezoidal OFNs (TrOFNs). A first attempt at this subject was presented in [70]. Here, we use the experience gathered during our work on the other criteria.
Therefore, here we present a revised approach to the considered extension. The paper is drafted as follows. Section 2 presents OFNs with their basic properties and describes the imprecision evaluation by an energy and entropy measure. In Section 3, PV is assessed by TrOFNs. The oriented fuzzy EDF is determined in Section 4. Investment recommendations dependent on the oriented fuzzy EDF are discussed in Section 5. Profitability criteria for investments are extended in Section 6. In Section 7 we explore the management of a set of investment recommendations. Section 8 concludes the article and proposes some future research directions. In Appendix A, the optimisation algorithm used is described in detail. Fuzzy Sets Fuzzy sets (FSs) [73] are a suitable tool that allows to describe and process imprecise values and information. In a given space X an FS A is distinguished by its membership function µ A ∈ [0, 1] X in the following manner A = (x, µ A (x)); x ∈ X (1) By F (X) we denote the family of all FSs in a space X. Multi-valued operations in the FSs family are defined by means of the following identities: FSs are widely used for modelling imprecise information. Following the work in [74], the imprecision is understood as a composition of ambiguity and indistinctness of. Ambiguity is defined a lack of a clear indication of one alternative among others. Indistinctness is defined as a lack of an explicit distinction between distinguished and not distinguished alternatives. More imprecise information is less useful. For this reason, it is sensible to assess the imprecision. For the finite space X = x 1 , x 2 , . . . , x f , the suitable tool for assessing the ambiguity of an FS A ∈ F (X) is the energy measure d : F (X) → R + 0 [75] given as follows: The proper tool for measuring the indistinctness is the entropy measure e : F (X) → R + 0 [76,77] determined by the identity Fuzzy Numbers The fuzzy number (FN) may be intuitively defined as FS in the real line R. The most general FN definition is proposed by Dubois and Prade [78]. Any FN may be defined in an equivalent way as follows [79]: Theorem 1. For any FN L there exists such a non-decreasing sequence (a, b, c, d) ⊂ R that L(a, b, c, d, L L , R L ) = L ∈ F (R) is determined by its membership function µ L (·|a, b, c, d, L L , R L ) ∈ [0, 1] R described by the identity where the left reference function L L ∈ [0, 1] [a,b] and the right reference function R L ∈ [0, 1] [c,d] are upper semi-continuous monotonic ones meeting the conditions: The family of all FNs is denoted by the symbol F. The symbol * denotes any arithmetic operation defined on R. By the symbol we denote such extension of operation * to F that it is coherent with Zadeh's Extension Principle [80]. It means that, for each pair (K, L) ∈ F 2 described by their membership functions µ K , µ L ∈ [0, 1] R , the FN is described by membership function µ M ∈ [0, 1] R determined by the identity: A special case of FNs is trapezoidal FNs (TrFNs). Due to their simplicity and ease of performing operations on them, they are often used in applications. A suitable definition of trapezoidal fuzzy numbers is given in [81]: Oriented Fuzzy Number Ordered FN was defined by Kosiński et al. [1] as an extension of the FN concept. An important disadvantage of Kosiński's theory is that there exists such ordered FNs that cannot be represented by a membership function [2]. On the other hand, ordered FNs' usefulness is a result of their interpretation as FN supplemented by its orientation. 
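Before turning to oriented FNs in detail, the imprecision measures introduced above can be made concrete. Because the identities for the energy measure (5) and the entropy measure (6) are not reproduced in the extracted text, the following minimal sketch uses commonly adopted forms (energy as the sum of membership grades, entropy as a Kosko-style ratio); the cited definitions [75-77] may differ in detail, so these functions should be read as illustrative stand-ins.

```python
# Minimal sketch of imprecision measures for a fuzzy set on a finite space.
# Assumed forms (the paper's exact Eqs. (5)-(6) are not reproduced in the text):
#   energy  d(A) = sum_i mu_A(x_i)                              -- ambiguity
#   entropy e(A) = sum_i min(mu, 1-mu) / sum_i max(mu, 1-mu)    -- indistinctness
from typing import Dict

def energy(mu: Dict[str, float]) -> float:
    return sum(mu.values())

def entropy(mu: Dict[str, float]) -> float:
    num = sum(min(m, 1.0 - m) for m in mu.values())
    den = sum(max(m, 1.0 - m) for m in mu.values())
    return 0.0 if den == 0.0 else num / den

# Example: a fuzzy subset of a five-element rating scale (degrees are invented).
mu_A = {"Sell": 0.0, "Reduce": 0.1, "Hold": 0.4, "Accumulate": 1.0, "Buy": 0.7}
print(energy(mu_A), round(entropy(mu_A), 3))
```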
The ordered FN orientation describes a forecast of the nearest future changes of FN. This caused Kosiński's theory to be revised by Piasecki [3]. An ordered FN linked to the revised theory is called Oriented FN (OFN) [6,7]. In a general case, OFNs are defined as follows: Definition 2 [3]. For any monotonic sequence (a, b, c, d) ⊂ R, the oriented fuzzy number OFN ↔ L(a, b, c, d, S L , E L ) = ↔ L is a pair of an orientation → a, d = (a, d) and a fuzzy set L ∈ F (R) described by a membership function µ L (·|a, b, c, d, S L , E L ) ∈ [0, 1] R given by the identity The symbol K denotes the space of all OFNs. If a < d, then any ↔ L(a, b, c, d, S L , E L ) is a positively oriented OFN. Any positively oriented OFN may be interpreted as such FN, which can increase in the near future. The symbol K + denotes the space of all positively oriented OFN. If a > d, then any ↔ L(a, b, c, d, S L , E L ) is a negatively oriented OFN, which may be interpreted as decreasing FN. The symbol K − denotes the space of all negatively oriented OFN. For a = d, ↔ L(a, b, c, d, S L , E L ) = a represents the unoriented crisp number a ∈ R. Summing up, we see that Let us consider the mapping U : K → K given by identity This mapping meets following conditions: It shows that the mapping (15) is axial symmetry on the space K of all OFNs. Then the symmetry axis is identical with family R of all real numbers. Moreover, Theorem 1 together with Definition 2 implies that the space F of FNs and the space K + of all positively oriented OFNs are isomorphic. Therefore, we can say that the space K may be determined as symmetry closure of the space F. In the studies planned here, we limit discussion to a special kind of OFNs defined as follows. The symbol K Tr denotes the space of all TrOFNs.On the space K Tr , a relation ↔ K . GE.L was defined as follows This is a fuzzy preorder GE ∈ F (K Tr × K Tr ) described by membership function ν GE ∈ [0, 1] K Tr ×K Tr described in detail in [6]. Due to these results, for any pair ( ↔ Tr(a, b, c, d), h) ∈ K Tr × R we have: Oriented Present Value The present value (PV) is defined as a current equivalent of a payment due at fixed point in time [41]. Therefore, we commonly accept that PV of future payments may be imprecise. This means that PV should be assessed with FNs. Such PV is called a fuzzy one. Buckley [52], Gutierrez [55], Kuchta [56] and Lesage [57] show the soundness of using TrFNs as an imprecise financial arithmetic tool. Moreover, PV estimation should be supplemented by a forecast of PV closest price changes. These price changes may be subjectively predicted. Moreover, closest price changes may be predicted with the help of the prediction tables presented in [82]. For these reasons, an imprecise PV should be evaluated by OFN [7,70]. Such PV is called an oriented PV (O-PV). Any O-PV is estimated by TrOFN where the monotonic sequence V s , V f ,P, V l , V e is defined as follows is the set of all values that do not noticeably differ from the quoted priceP. If we predict a rise in price then O-PV is described by a positively oriented TrOFN. If we predict a fall in price, then O-PV is described by a negatively oriented OFN. Example 1. We observe the portfolio π composed of company shares included in WIG20 quoted on the Warsaw Stock Exchange (WSE). Based on a session closing on the WSE on 28 January 2020, for each observed share we assess its O-PV equal to TrOFN describing its Japanese candle [83]. Shares' O-PVs, obtained in such a manner, are presented in Table 1. 
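The constructions above, together with the trapezoidal special case and the O-PV of Example 1, can be summarised in a small data structure. The sketch below assumes that a TrOFN is fully described by its vertex sequence (a, b, c, d), that its membership function is the usual trapezoid over that (possibly decreasing) sequence, and that the symmetry map U of (15), whose identity is not reproduced in the extracted text, simply reverses the sequence; the price values are invented.

```python
# Sketch of a trapezoidal oriented fuzzy number (TrOFN):
# Tr(a, b, c, d) with a monotonic sequence a, b, c, d; orientation = sign(d - a).
from dataclasses import dataclass

@dataclass(frozen=True)
class TrOFN:
    a: float
    b: float
    c: float
    d: float

    @property
    def orientation(self) -> int:
        """+1 positively oriented (a < d), -1 negatively oriented, 0 crisp."""
        return (self.d > self.a) - (self.d < self.a)

    def membership(self, x: float) -> float:
        """Trapezoidal membership over the support, independent of orientation."""
        a, b, c, d = self.a, self.b, self.c, self.d
        if self.orientation < 0:            # negatively oriented: reverse the sequence
            a, b, c, d = d, c, b, a
        if x < a or x > d:
            return 0.0
        if b <= x <= c:
            return 1.0
        if x < b:
            return (x - a) / (b - a) if b != a else 1.0
        return (d - x) / (d - c) if d != c else 1.0

def reverse(t: TrOFN) -> TrOFN:
    """Assumed form of the symmetry map U: reverse the defining sequence."""
    return TrOFN(t.d, t.c, t.b, t.a)

# A falling Japanese candle modelled as a negatively oriented TrOFN (invented prices).
pv = TrOFN(48.0, 47.2, 46.5, 45.8)
print(pv.orientation, round(pv.membership(46.0), 3), reverse(pv).orientation)
```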
For each portfolio componentŜ, we determine its quoted priceP s as an initial price on 29.01.2020. Oriented Expected Discount Factor We assume that duration t > 0 of an investment is fixed. Then, the considered security is determined by two values: a foreseen FV = V t and an estimated PV = V 0 . The benefits from owning this security are characterised by the simple return rate (RR) defined by the identity where the simple RR r t : Ω → R is determined for PV assessed as the quoted priceP. After Markowitz [9] we assume that the RR r rate is the gaussian probability distribution N(r, σ). In our case PV is determined as O-PV According to (10), the simple RR calculated for the O-PV is a fuzzy probabilistic set represented by membership function ρ ∈ [0; 1] R×Ω given as follows Then, the membership function ρ ∈ [0; 1] R of the expected RR is computed in the following manner In [48] it is shown that the fuzzy expected discount factor (EDF) is a better tool for appraising any securities than the expected fuzzy RR. Therefore, we determine EDF for the case of O-PV. In general, for a given RR r t , the discount factor v t is explicitly determined by the identity v t = 1 1 + r t (29) We consider the EDF v ∈ R defined by the identity: In line with (28), the membership function δ ∈ [0, 1] R of an oriented fuzzy EDF (O-EDF) ↔ V ∈ K is given by the identity: Then, O-EDF is given as follows: Example 2. All considerations in the paper are run for the quarterly period of the investment time t = 1 quarter. We research the components of the portfolio π presented in Table 1. Using the one-year time series of quotations, for each portfolio componentŜ we calculate the following parameters: With the application of (30) and (32), we calculated quarterly O-EDF for each component of the portfolio π. All evaluations obtained in this way are presented in Table 2. The O-EDF of a security described in this way is a TrOFN with the identical orientation as the O-PV used for estimation. It is worth stressing that the maximum criterion of the expected RR can be equivalently replaced by the minimum criterion of the EDF. Investment Recommendations We understand an investment recommendation as a counsel given by the advisors to the investor. After evaluating the stocks, the advisor compares the obtained assessment with the current market value of the stocks. The difference between those values determines the potential of the investment return rate. Advisors give various recommendations depending on the volume of the return rate potential and its direction. Experts also define the potential of the return rate in different ways. We will here consider the collection of standardised recommendations, which are applied in [46]. The rating scale is given as the A ++ denotes the advice Buy suggesting that the expected price is well above the current quoted price, • A + denotes the advice Accumulate suggesting that the expected price is above the current quoted price, • A 0 denotes the advice Hold suggesting that the expected price is similar to the current quoted price, • A − denotes the advice Reduce suggesting that the expected price is below the current quoted price, • A −− denotes the advice Sell suggesting that the expected price is well below the current quoted price. The investor attributes each recommendation with the appropriate way of entering the transaction and the value of its volume. The way of entering the transaction describes the investment strategy. Investors can differ among one another by the implemented strategies. 
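Before moving on to the recommendation machinery, the oriented EDF described above can be sketched in code. Since identity (32) is not reproduced in the extracted text, the sketch adopts one plausible vertex-wise reading (each O-PV vertex is divided by the quoted price and by 1 plus the expected RR), which at least preserves the orientation of the O-PV as stated in the text; the numbers are invented and the actual formula may differ.

```python
# Hedged sketch of an oriented expected discount factor (O-EDF).
# Assumed vertex-wise reading of Eq. (32) (not reproduced in the extracted text):
# each O-PV vertex is divided by the quoted price and discounted by (1 + r_bar),
# which keeps the orientation of the O-PV.
from typing import NamedTuple

class Tr(NamedTuple):
    a: float
    b: float
    c: float
    d: float

def oriented_edf(o_pv: Tr, quoted_price: float, r_bar: float) -> Tr:
    scale = 1.0 / (quoted_price * (1.0 + r_bar))
    return Tr(*(v * scale for v in o_pv))

# Invented numbers: a positively oriented O-PV around a quoted price of 100,
# expected quarterly return of 2%.
o_pv = Tr(99.0, 99.5, 100.5, 101.5)
print(oriented_edf(o_pv, quoted_price=100.0, r_bar=0.02))
```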
Let fixed securityŠ be represented by the pair (r s , s ), where r s is an expected RR onŠ and s is other parameter characterisingŠ. The symbol S denotes the set of all considered securities. Adviser's counsel depends on the expected RR. The criterion for a competent choice of advice can be presented as a comparison of the profit index g(r s s ) and the profitability threshold (PT)Ǧ, where g(·| s ) : R → R is an increasing function of the expected RR. The advice choice function Λ : S × R → 2 A was given in the following way [46] This way, the advice set Λ Š ,Ǧ ⊂ A was assigned. We interpret the advice set Λ Š ,Ǧ as the investment recommendation given for the securityŠ. The securityŠ may be equivalently represented by the ordered pair (v s , s ), where v s is the EDF determined by (30). Then the identity (30) implies The value H s is used as a specific profitability threshold (SPT) appointed for the securityŠ. Then, the advice choice function Λ : S × R → 2 A is equivalently described in the following way We consider the case when the securityŠ is characterised by the ordered pair ( (32). Then the advice choice function Λ Š ,Ǧ is FS described by (44) in the following way: where ν GE : K Tr × K Tr → [0, 1] is membership function of relation "less than or equal" (20). The required values of this function are computed with the use of (21) and (22). From the point of view of invest-making, the value λ A Š ,Ǧ is understood as a recommendation degree of the advice A ∈ A, i.e., a declared participation of the advisor's responsibility in the case of a final invest-made according to the advice A ∈ A. It implies that the investment recommendation Λ Š ,Ǧ is emphasised as a FS in the rating scale A. In turn, the final decision is taken by the investors. Their personal responsibility for taking this investment decision decreases along with the increase in the recommendation degree related to the decision taken. The increase in the ambiguity of the recommendation Λ Š ,Ǧ ∈ F (A) suggests a higher number of alternative recommendations to choose from. This is an increase in the risk of choosing an incorrect decision from recommended ones. This may result in obtaining a profit lower than maximal, that is with a loss of chance. Such risk is called an ambiguity risk. The ambiguity risk burdening the recommendation Λ Š ,Ǧ ∈ F (A) is assessed with an energy measure d Λ Š ,Ǧ computed with the use of (5). An increase in the indistinctness of the recommendation Λ Š ,Ǧ ∈ F (A) suggests that the explicit distinction between recommended and not recommended decisions is more difficult. This causes an increase in the indistinctness risk understood as risk of choosing a not recommended decision. The indistinctness risk burdening the recommendation Λ Š ,Ǧ ∈ F (A) is measured by the entropy measure e Λ Š ,Ǧ computed with the use of (6). An imprecision risk is always determined as a combination of indistinctness and ambiguity risks combined. The Profitability Criteria for Investments We evaluate chosen securities traded on a fixed capital market. We always assume that there exists a risk-free bond instrument represented by the pair (r 0 , 0). Moreover, we distinguish the market portfolio represented by the pair (r M , σ M ). Example 3. We focus on the WSE. We take into account a risk-free bound instrument determined as quarterly treasure bonds with a risk-free RR r 0 = 0.0075. The market portfolio is determined as the portfolio determining a stock exchange index WIG. 
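The screening step described above, rejecting recommendations burdened with elevated ambiguity or indistinctness risk, can be sketched as follows before the individual criteria are introduced. The recommendation degrees are taken as given, because the membership formulas (45)-(49) and the relation (20)-(22) are not reproduced in the extracted text, and the risk thresholds used here are illustrative assumptions rather than values from the paper.

```python
# Sketch of the imprecision-risk screening of recommendations. The recommendation
# degrees themselves come from Eqs. (45)-(49) (not reproduced here), so they are
# treated as inputs; thresholds for "increased" risk are illustrative assumptions.
from typing import Dict

ADVICE = ("Sell", "Reduce", "Hold", "Accumulate", "Buy")   # A--, A-, A0, A+, A++

def energy(rec: Dict[str, float]) -> float:                # ambiguity risk, Eq. (5)-style
    return sum(rec.values())

def entropy(rec: Dict[str, float]) -> float:               # indistinctness risk, Eq. (6)-style
    num = sum(min(m, 1 - m) for m in rec.values())
    den = sum(max(m, 1 - m) for m in rec.values())
    return 0.0 if den == 0 else num / den

def screen(recs: Dict[str, Dict[str, float]],
           max_energy: float = 2.0, max_entropy: float = 0.1) -> Dict[str, Dict[str, float]]:
    """Keep only recommendations whose imprecision risk is not elevated."""
    return {ticker: rec for ticker, rec in recs.items()
            if energy(rec) <= max_energy and entropy(rec) <= max_entropy}

# Invented recommendation degrees for two securities.
recs = {
    "AAA": {"Sell": 0.0, "Reduce": 0.0, "Hold": 0.0, "Accumulate": 1.0, "Buy": 1.0},
    "BBB": {"Sell": 0.3, "Reduce": 0.8, "Hold": 1.0, "Accumulate": 0.6, "Buy": 0.2},
}
print(list(screen(recs)))    # -> ['AAA']; BBB is rejected as too ambiguous/indistinct
```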
The RR from WIG has the normal distribution N r M , σ M 2 = N(0.0200, 0.000025). Sharpe Ratio The profit index is defined as Sharpe's ratio estimating the amount of the premium per overall risk unit. Then Sharpe's PT is equal to the unit premium of the market portfolio risk [84]. If the securityŠ is represented by the pair (r s , σ s ), then, in line to Sharpe, the profit index g(·|σ s ) : R → R and the PTǦ are defined as follows: We compute SPT H s with the use of (38) in the following manner: Example 4. Using Equation (52), we compute an SPT H s for all components of the portfolio π described in Examples 1 and 2. Obtained SPT values are compared with O-EDFs in Table 3. If we estimate PV by TrOFN presented in Table 1, then using the Sharpe criterion is simply comparing an imprecise O-EDF with the precise SPT. By means of Equations (45)-(49), we compute the values of the recommendation choice function presented in Table 4. Table 4 also presents information on the imprecision risk burdening individual recommendations. That information will be used to choose the recommendation. Investment recommendations for ALR, CCC, OPL and PKN are burdened with the increased ambiguity risk. Moreover, the recommendations for CCC, OPL and PKN carry the indistinctness risk. For that reason, those recommendations are rejected. Eventually, only the following stocks are attributed with "Buy" or "Accumulate" recommendation: CDR, CPS, DNP, JSW, KGH, LTS, LPP, MBK, PEO, PGE, PGN, PKO, PLY, PZU, SPL and TPE. Thus, the disclosure of imprecision of PV estimations allows rejecting riskier recommendations. Table 4. Imprecise recommendations determined with the use of the Sharpe ratio. Jensen's Alpha The profit index is defined as Jensen's alpha [85], estimating the amount of the premium for market risk. The securityŠ is represented by the pair (r s , β s ), where β s is the directional factor of the CAPM model assigned to this instrument. Then, the profit index g(·|σ s ) : R → R and the PTǦ are defined as follows: We calculate SPT H s with the use of (38) in the following manner Example 5. Using (55), we calculate a specific profitability threshold SPT H s for all components of the portfolio π described in Examples 1 and 2. The CAPM directional factors for each portfolio component are presented in Table 2. Evaluations obtained in this way are presented in Table 5. If now we estimate PV with the use of TrOFN presented in Table 1 then using the Jensen's alpha goes down to the comparison of an imprecise O-EDF with the precise SPT. By means of (45)-(49) we estimate the values of a recommendation choice function presented in Table 6. Investment recommendations for ALR, CCC, CPS, OPL, PGE, PKN and TPE are burdened with the increased ambiguity risk. Moreover, the recommendations for ALR, CCC, CPS and PGE carry an indistinctness risk. For that reason, those recommendations are rejected. Eventually, only the following stocks are attributed with "Buy" or "Accumulate" advice: CDR, DNP, JSW, KGH, LTS, LPP, PEO, PGN, PKO, PLY, PZU and SPL. Advice "Sell" or "Reduce" were associated with MBK. Thus, the disclosure of imprecision of PV estimations allows rejecting riskier recommendations. Treynor Ratio The profit index is defined as the Treynor ratio [86], which estimates the amount of premium for the market risk. The securityŠ is represented by the pair (r s , β s ), where β s is the directional factor of the CAPM model assigned to this instrument. 
Then the profit index g(·|σ s ) : R → R and the PTǦ are defined as follows: g(r s β s ) = r s − r 0 β s (56) We compute SPT H s with the use of (38) in the following manner Example 6. Using (58), we calculate SPT for all components of the portfolio π described in Examples 1 and 2. Evaluations obtained in this way are presented in Table 7. If we estimate PV with the use of TrOFN presented in Table 1 then using the Treynor ratio criterion goes down to the comparison of an imprecise O-EDF with the precise SPT. By means of (45)-(49) we estimate the values of the recommendation choice function presented in Table 8. Investment recommendation for CCC is burdened with an increased ambiguity risk and carries an indistinctness risk. For that reason, this recommendation is rejected. Eventually, only the following stocks are attributed with "Sell" or "Reduce" advice: ALR, CDR, CPS, DNP, KGH, LTS, LPP, MBK, OPL, PEO, PGE, PGN, PKN, PKO, PLY, PZU, SPL and TPE. Advice "Buy" or "Accumulate" were associated just with the stock of JSW. Thus, the disclosure of imprecision of PV estimations allows rejecting riskier recommendations. Sortino Ratio The Sortino ratio [87] is a tool for risk management under a financial equilibrium. In this model we compare the expected RR r s from considered security and the expected return rate r M from the distinguished market portfolio. We consider the advice choice function where the profit index and the limit value are determined by the Sortino ratio. Then, the profit index evaluates the amount of a specific unit premium for the loss risk. Moreover, the limit value evaluates an amount of the market unit premium for the loss risk. The benchmark of our assessment is a market portfolio represented by such an ordered pair r M , ς M 2 , where the downside semi variance ς M 2 evaluates the market loss risk. The reference point is a risk-free bond instrument represented by the ordered pair (r 0 , 0), where r 0 is a risk-free return rate. The considered securityŠ is represented by the ordered pair r s , ς 2 S , where downside semi variance ς S 2 evaluates the loss risk. Then, Sortino and Price (1997) define the profit index g(·|ς s ) : R → R and the limit value PTǦ as follows: We compute SPT H s Ǧ with the use of (38) in the following manner Example 7. The market portfolio is represented by the ordered pair r M , ς M 2 = (0.0200, 0.000015). Using (61), we calculate SPT for all securities belonging to the portfolio π described in Examples 1 and 2. Evaluations obtained in this way are presented in Table 9. For each considered security, by means of (45)-(49) we calculate membership functions of investment recommendations presented in Table 10. Table 10. Imprecise recommendations determined with the use of the Sortino ratio. Modiglianis' Coefficient In the crisp case, the Modiglianis' Coefficient Criterion is equivalent to Sharpe Ratio Criterion. In this model, the compared values are the expected RR on a security and the expected RR on the market portfolio. Modiglianis' profit coefficient estimates the bonus over market profits. Modiglianis' limit value equals zero. If the securityŠ is represented by the pair r s , σ 2 S , then Modigliani [88] defines the profit index g(·|σ s ) : R → R and the PTǦ as follows: We compute SPT H s with the use of (38) in the following manner We see that in a fuzzy case, the Modiglianis' Coefficient Criterion is also equivalent to the Sharpe Ratio Criterion. 
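Before turning to Roy's criterion, the crisp specific profitability thresholds implied by the criteria above can be sketched. Since identity (38) is not reproduced in the extracted text, the code assumes the natural reading H_s = 1/(1 + r*), where r* is the expected RR at which the profit index equals the profitability threshold, and it assumes positive risk parameters; under these assumptions the Jensen and Treynor thresholds coincide. The security-level inputs are invented; only the market figures come from Example 3.

```python
# Hedged sketch of specific profitability thresholds (SPT). Assumed reading of
# Eq. (38): H = 1 / (1 + r_star), where r_star solves g(r_star) = G_check.
# Positive risk parameters (sigma, beta, downside deviation) are assumed.

def spt_sharpe(sigma_s, r0, r_m, sigma_m):
    r_star = r0 + sigma_s * (r_m - r0) / sigma_m      # (r - r0)/sigma_s = (r_m - r0)/sigma_m
    return 1.0 / (1.0 + r_star)

def spt_jensen(beta_s, r0, r_m):
    r_star = r0 + beta_s * (r_m - r0)                 # r - r0 - beta*(r_m - r0) = 0
    return 1.0 / (1.0 + r_star)

def spt_treynor(beta_s, r0, r_m):
    # (r - r0)/beta_s = r_m - r0 (taking beta_M = 1); with beta_s > 0 this
    # coincides with the Jensen threshold under the present reading.
    r_star = r0 + beta_s * (r_m - r0)
    return 1.0 / (1.0 + r_star)

def spt_sortino(dsd_s, r0, r_m, dsd_m):
    r_star = r0 + dsd_s * (r_m - r0) / dsd_m          # downside semi-deviations
    return 1.0 / (1.0 + r_star)

# Market data from Example 3 (r0 = 0.0075, r_M = 0.02, sigma_M^2 = 0.000025);
# the security-level parameters below are invented for illustration.
r0, r_m, sigma_m = 0.0075, 0.0200, 0.000025 ** 0.5
print(round(spt_sharpe(sigma_s=0.01, r0=r0, r_m=r_m, sigma_m=sigma_m), 4))
print(round(spt_jensen(beta_s=1.2, r0=r0, r_m=r_m), 4))
```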
In this case, the recommendations obtained with the use of Modiglianis' Coefficient can be found in Table 4. Roy's Criterion Roy [89] has consider a fixed securityŠ, represented by the pair (r s , σ s ), where r s is an expected return onŠ and σ 2 S is the variance of a return rate of the considered financial instrument. After Markowitz [9] we assume that the considered securityŠ has a simple return rate with Gaussian distribution N(r s , σ s ). This distribution is described by its increasing and continuous cumulative distribution function F · r s , σ s : R → [0; 1] given by the identity where the function Φ : R → [0; 1] is the cumulative distribution function of the Gaussian distribution N(0, 1). The Safety Condition [89] is given as follows: where • L-a minimum acceptable RR, • ε-the probability of RR realisation below the minimum acceptable rate. The RR realisation below the minimum acceptable rate is identified with a loss. The Roy's criterion minimises the probability of a loss for a set minimum acceptable rate of return [46]. Additionally, the investor assumes the maximum level ε * of the loss probability. Then the Roy's criterion is described by the inequality In line with (38), SPT is given as follows Example 8. We study recommendations implied by Roy's criterion all components of portfolio π described in Example 1. The investor assumes the minimal acceptable RR L = 0.0075. Additionally, the investor assumes the maximum level of a loss probability ε * = 0.05. Then, we have Φ −1 (0.05) = −1.64. Table 2 lists the values of O-EDF. Using (69), we compute SPT for all components of the portfolio π described in Examples 1 and 2. Evaluations obtained in this way are presented in Table 11. If we estimate PV with the use of TrOFN presented in Table 1 then using the Roy's criterion goes down to the comparison of an imprecise OEF with the precise SPT [70]. By means of (45)-(49) we then estimate the values of a recommendation choice function presented in Table 12. Investment recommendations for ALR and CCC are burdened with an increased ambiguity risk. Moreover, the recommendations for CCC carry the indistinctness risk. For that reason, those recommendations are rejected. Eventually, only the following stocks are attributed with "Buy" or "Accumulate" advice: CDR, CPS, DNP, JSW, KGH, LTS, LPP, MBK, OPL, PEO, PGE, PGN, PKN, PKO, PLY, PZU, SPL and TPE. Thus, the disclosure of imprecision of PV estimations allows rejecting riskier recommendations. Discussions This chapter presented the recommendations obtained by means of ratios representing various criteria of assessment of the current financial efficiency of a considered asset. Here we have • Sharpe ratio and Sortino ratio used to maximise the premium per overall risk unit, • Jensen's alpha and Treynor ratio used to maximise the premium for market risk. • Roy's criterion used to minimise the probability of bearing the unacceptable loss. This opulence of the used criteria explains to some extent the variety of recommendations attributed by the mentioned criteria to the same financial instrument. However, this is not the only reason of the differentiation between those recommendations. We should pay attention to a big differentiation of the recommendations established by Jensen's alpha and Treynor ratio used to maximise the premium for risk. This phenomenon is difficult to explain substantively. Hence, we deduce that while managing the chosen financial instruments, we should take into account a fixed set of recommendations that attributed to them. 
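The Roy threshold used in Example 8 follows directly from the safety condition for a Gaussian RR: P(r ≤ L) ≤ ε* is equivalent to the expected RR being at least L − σΦ⁻¹(ε*). The sketch below applies the same assumed reading of (38) as above, H = 1/(1 + r*); the volatility value is invented.

```python
# Sketch of Roy's safety-first threshold, assuming the reading H = 1/(1 + r_star)
# with r_star = L - sigma * Phi^{-1}(eps_star), obtained from the safety condition
# P(r <= L) <= eps_star for a Gaussian return N(r_bar, sigma).
from statistics import NormalDist

def spt_roy(sigma_s: float, L: float = 0.0075, eps_star: float = 0.05) -> float:
    z = NormalDist().inv_cdf(eps_star)      # Phi^{-1}(0.05) ~ -1.64, as in Example 8
    r_star = L - sigma_s * z
    return 1.0 / (1.0 + r_star)

# Invented quarterly return volatility for one security.
print(round(spt_roy(sigma_s=0.08), 4))
```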
The next chapter will be dedicated to the issue of managing the fixed set of investment recommendations. Management of Investment Recommendation Set In Sections 5 and 6, the proposed procedure for recommendations was always considered in the case of one established criterion. Due to that we could mark all recommendations with a single symbol. In this chapter we will consider the relations between recommendations with various criteria attributed to them. For a bigger transparency of those considerations we will introduce a modified system of recommendation labels. Any FS Λ γ ∈ F (A) is called a recommendation. The subscript γ means any set of symbols identifying the kind of distinguished recommendation. Any recommendation Λ γ is represented by its membership function λ γ : A → [0, 1] . Also, each recommendation can be noted as In special cases we have Moreover, in notation (70) of recommendation Λ γ we can omit every advice A ∈ A satisfying the condition µ γ (A) = 0. Each securityŠ is assigned a recommendation Λ S,1 , Λ S,2 . . . , Λ S,5 ∈ F (A), where • Λ S,1 -recommendations obtained with the use of the Sharpe ratio, • Λ S,2 -recommendations obtained with the use of Jensen's alpha, • Λ S,3 -recommendations obtained with the use of the Treynor ratio, • Λ S,4 -recommendations obtained with the use of the Sortino ratio, • Λ S,5 -recommendations obtained with the use of Roy's criterion. Example 9. Table 12 presents the recommendation Let us note that various criteria assign different recommendations to the same security. Each recommendation can bear a different imprecision risk. We propose to limit the acceptable recommendations to those that are characterised by the minimal risk of imprecision. However, imprecision is evaluated by the means of two indices, which should be minimised. In this case, to minimise the risk, a multicriterial approach was implemented. Each recommendation Λ S,i is given a pair d Λ S,i , e Λ S,i where d Λ S,i and e Λ S,i respectively mean energy and entropy measures. On the recommendation set we define two preorders " Λ S,i is more acceptable than Λ S,j " : Those preorders are formal models of ambiguity and indistinctness of information minimisation criterion. A multicriterial comparison defined by the preorders Q 1 and Q 2 is a model of satisfying the postulate of minimisation of both factors. Using the multicriterial comparison (73) and (74) for each securityŠ we determine the Pareto optimum O s which includes all acceptable recommendations. To solve this optimisation task, we use an algorithm described in Appendix A. Stock Company Pareto Optimum The results obtained in Example 10 show that in the case of many securities there is a big variety in the sets of optimum recommendations. To unify the final recommendations for each securityŜ we determine: • A weakly justified recommendation (WJR) Λ S,WJR defined as the union of such Pareto optimal recommendations, which are linked to the securityŜ; • A strongly justified recommendation (SJR) Λ S,SJR defined as the intersection of such Pareto optimal recommendations, which are linked to the securityŜ. The WJR Λ S,WJR and the SJR Λ S,SJR are determined respectively by their membership functions given as follows Example 11. Separatelyfor each securityŜ described in Example 1, imprecise recommendations Λ S,1 , Λ S,2 , Λ S,3 , Λ S,4 , Λ S,5 are compared in more detail in Tables 14-33. In the two bottom rows of Tables 14-33 WJRs and SJRs are given along with their imprecision estimates. 
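The two bottom-row entries of Tables 14-33 can be computed directly from the Pareto-optimal recommendations attached to a security: with the standard max/min operations on membership functions, WJR is the pointwise maximum and SJR the pointwise minimum. A minimal sketch follows; the recommendation degrees are invented.

```python
# Sketch of weakly and strongly justified recommendations (WJR/SJR) as the
# union (pointwise max) and intersection (pointwise min) of the Pareto-optimal
# recommendations attached to one security. Input degrees are invented.
from typing import Dict, List

ADVICE = ("Sell", "Reduce", "Hold", "Accumulate", "Buy")

def wjr(optimal: List[Dict[str, float]]) -> Dict[str, float]:
    return {a: max(rec.get(a, 0.0) for rec in optimal) for a in ADVICE}

def sjr(optimal: List[Dict[str, float]]) -> Dict[str, float]:
    return {a: min(rec.get(a, 0.0) for rec in optimal) for a in ADVICE}

pareto_optimal = [
    {"Sell": 0.0, "Reduce": 0.1, "Hold": 0.3, "Accumulate": 1.0, "Buy": 0.6},
    {"Sell": 0.0, "Reduce": 0.0, "Hold": 0.2, "Accumulate": 1.0, "Buy": 1.0},
]
print("WJR:", wjr(pareto_optimal))
print("SJR:", sjr(pareto_optimal))
```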
All Tables 14-33 are linked to the comments by using the names of discussed stock companies. Table 14. Imprecise and Pareto optimal recommendations for ALR. For the ALR shares only the following recommendation is Pareto optimal which was obtained by means of the Treynor ratio. In such situation, for this recommendation, WJR and SJR are identical. The advice Sell and Reduce is recommended by the advisor with the degree that equals 1. It means that the advisor is prepared to take full responsibility for making the investment decisions resulting from the suggested advice. The other recommendations are rejected by the advisor. In such case it is the investor who takes full responsibility for making any other decision resulting from the rejected recommendations of Buy, Accumulate and Hold. From the distribution of a recommendation degree represented by WJR it shows that the advisor definitely rejects the Sell recommendation. WJR also tells us that the Accumulate and Buy recommendations can be taken into consideration. Additional information on the distribution of the responsibility for the decisions taken is reflected in SJR. It supplements the picture with the following information: • The investor bears almost all the responsibility for making an investment decision resulting from the Reduce and Hold recommendations, • The advisor bears full responsibility for making an investment decision resulting from the Accumulate recommendations, • The investor and the advisor share the responsibility among themselves for making the investment decision based on the Buy recommendation, however, the advisor bear approximately two-thirds of that responsibility. After analysing the information and interpretations, the investor takes the decision. We can suspect that the investor characterised by risk-aversion will choose the Accumulate recommendation while the investor who is a risk-taker will choose the Buy advice. Table 16. Imprecise and Pareto optimal recommendations for CDR. Criterion A For the CDR shares the following WJR and SJR were determined From the distribution of a recommendation degree represented by WJR it shows that the advisor definitely rejects the Hold recommendation. It means that the advisor recommends an investment activity without defining its kind. SJR shows that it is the investor who bears full responsibility for any decisions made. It is obvious that such a recommendation is not useful so in such a situation we state that there is no useful recommendation. For the CPS shares, WJR and SJR are determined by (80) and (81). In this situation we state that there is no useful recommendation. Table 18. Imprecise and Pareto optimal recommendations for DNP. For the DNP shares, WJR and SJR are determined by (80) and (81). In this situation we state that there is no useful recommendation. Table 19. Imprecise and Pareto optimal recommendations for JSW. Criterion A For the JSW shares, the following WJR and SJR were determined From the distribution of a recommendation degree represented by WJR it shows that the advisor definitely rejects the Sell, Reduce and Hold recommendations. The advisor strongly recommends Accumulate or Buy. SJR shows that it is the advisor who is willing to take full responsibility for taking the investment decisions resulting from the advised recommendations. Table 20. Imprecise and Pareto optimal recommendations for KGH. For the KGH shares, WJR and SJR are determined by (80) and (81). In this situation we state that there is no useful recommendation. Table 21. 
Imprecise and Pareto optimal recommendations for LTS. For the LTS shares, WJR and SJR are determined by (80) and (81). In this situation there is no useful recommendation. Table 22. Imprecise and Pareto optimal recommendations for LPP. Criterion A For the LPP shares, WJR and SJR are determined by (80) and (81). In this situation there is no useful recommendation. Table 23. Imprecise and Pareto optimal recommendations for MBK. For the MBK shares, WJR and SJR are determined by (80) and (81). In this situation there is no useful recommendation. (77) is only the one determined by the Treynor ratio. From the distribution of a recommendation degree represented by WJR it shows that the advisor definitely rejects the Hold, Accumulate and Buy recommendations. The advisor strongly recommends Sell or Reduce. SJR shows that the advisor is willing to take full responsibility for taking the investment decisions resulting from the advised recommendations. Table 25. Imprecise and Pareto optimal recommendations for PEO. For the PEO shares, WJR and SJR are determined by (80) and (81). In this situation there is no useful recommendation. For the PGE shares, WJR and SJR are determined by (80) and (81). In this situation there is no useful recommendation. Table 27. Imprecise and Pareto optimal recommendations for PGN. For the PGN shares, WJR and SJR are determined by (80) and (81). In this situation there is no useful recommendation. (80) and (81). In this situation there is no useful recommendation. Table 29. Imprecise and Pareto optimal recommendations for PKO. Criterion A For the PKO shares, WJR and SJR are determined by (80) and (81). In this situation there is no useful recommendation. Table 30. Imprecise and Pareto optimal recommendations for PLY. For the PLY shares, WJR and SJR are determined by (80) and (81). In this situation there is no useful recommendation. Table 31. Imprecise and Pareto optimal recommendations for PZU. For the PZU shares, WJR and SJR are determined by (80) and (81). In this situation there is no useful recommendation. Table 32. Imprecise and Pareto optimal recommendations for SPL. Criterion A For the SPL shares, WJR and SJR are determined by (80) and (81). In this situation there is no useful recommendation. and (81). WJR informs us that the advisor does not exclude any recommendation. SJR shows that full responsibility for taking any investment decision goes to the investor. Therefore, there is no useful recommendation. Summing up, for the public companies considered in the examples, in most cases there was no useful recommendation. Such a situation occurred in the case of CDR, CPS, DNP, KGH, LTS, LPP, MBK, PEO, PGE, PGN, PKN, PKO, PLY, PZU, SPL and TPE. Only for three following companies: ALR, CCC and OPL the recommendations could be considered useful. This situation does not differ from the real phenomena in financial markets. The number of useless recommendations can be decreased by limiting the number of assessment criteria. Also, another set of criteria can be implemented. The solution to those problems should be searched based on finance. An observation can be useful that each pair of WJR and SJR might be presented as an intuitionistic fuzzy set [90] representing a justified recommendation (JR). Then any JR is defined by its membership function equal to the SJR membership function and by its non-membership function equal to the membership function of WJR complement. 
Conclusions In the subject literature it is shown that OFNs are a more convenient tool for financial analysis than FNs. Therefore, the most important achievement of this work is the implementation of OFNs into the algorithmic system supporting investment decisions. In my best knowledge, the obtained algorithmic system is the only one that applies any set of profitability criteria evaluated with the use of OFNs. Until now, only an analogous system was known to be linked to the Sharpe's criterion. For any security, this simple system assigns exactly one imprecise recommendation. The algorithmic system described in Sections 5 and 6 assigns each security many different imprecise recommendations. For this reason, in Section 7, the proposed system is equipped with an imprecise recommendation management module. Obtained results may provide theoretical foundations for constructing a robo-advice system supporting investment decisions. Then, we can use determined recommendations as behavioural premises for investment decisions. The attempt to use chosen recommendations multiple times leads to establishing an investing strategy. In Example 11, the interpretation of determined recommendations was presented. The shown case study is the reflection of only a little share of the set of all possible recommendations. Therefore, taking up research on a wider spectrum of recommendations established by the described algorithms seems justified. It should be an empirical research leading to establishing the heuristic investment strategy. In financial practice, we can meet with the situation when part of PV securities is imprecisely evaluated without a subjective forecast of future quotation changes. Such PV should be evaluated by unoriented FNs. Against imprecisely evaluated PV, other securities may be equipped with a subjective forecast of rise in quotation. Such PV should be evaluated by positively oriented OFNs. In both of these cases the membership functions are identical. This results in the impossibility of a simultaneous comparison of oriented PV and unoriented PV. This is a significant disadvantage of the proposed algorithmic system supporting invest-making. The intention to deal with this inconvenience points to another direction of research into the OFNs theory. The obtained results may as well be a starting point for future research on the impact of the PV imprecision and orientation on the investment recommendation determined with the use of algorithms presented in this paper. The implementation of intuitionistic fuzzy sets should be preceded by a theoretical and empirical research of the expediency of such approach for a representation of the justified recommendations mentioned in Section 7. the subset of the most acceptable recommendation systems is distinguished as the Pareto's optimum, determined as a two-criteria comparison of minimisation recommendation ambiguity and minimisation recommendation indistinctness. The ambiguity of recommendation Λ S,γ is valued by energy measure d Λ S,γ calculated with the use of (5). The indistinctness of recommendation Λ S,γ is valued by energy measure d Λ S,γ determined by (6). Therefore, we represent each recommendation by the pair d Λ S,γ , e Λ S,γ = d S,γ , e S,γ On the recommendation set we define two preorders " Λ S,i is more than Λ S, j " : The set of all acceptable recommendations we appoint as Pareto's optimum O S determined by multi-criterial comparison Q 1 ∩ Q 2 . To solve this optimisation task, we adapt an analogous algorithm presented in [91]. 
In order to determine the Pareto optimum O S , we execute the following algorithm: In this way, we obtain the sequence O S of partial optima of Pareto.
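The extracted text announces the Appendix A algorithm but does not reproduce its steps, so the sketch below shows only a generic non-dominated filter for the two-criteria minimisation of (energy, entropy); it should not be read as the authors' exact sequence of partial Pareto optima.

```python
# Generic sketch of the Pareto optimum for the two-criteria minimisation of
# (energy, entropy). This is a standard non-dominated filter, not necessarily the
# exact sequence of partial optima described in Appendix A of the paper.
from typing import Dict, Tuple

def pareto_optimum(scores: Dict[str, Tuple[float, float]]) -> Dict[str, Tuple[float, float]]:
    """Keep recommendations not dominated by any other (both criteria minimised)."""
    def dominates(p, q):            # p dominates q: p <= q componentwise and p != q
        return p[0] <= q[0] and p[1] <= q[1] and p != q
    return {name: s for name, s in scores.items()
            if not any(dominates(other, s) for other in scores.values())}

# (energy, entropy) pairs of the five criterion-specific recommendations (invented).
scores = {"Sharpe": (2.6, 0.12), "Jensen": (2.1, 0.05), "Treynor": (2.0, 0.08),
          "Sortino": (2.4, 0.05), "Roy": (2.3, 0.20)}
print(sorted(pareto_optimum(scores)))    # -> ['Jensen', 'Treynor']
```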
10,159.8
2020-10-01T00:00:00.000
[ "Mathematics", "Business", "Computer Science" ]
Visualizing a viral genome with contrast variation small angle X-ray scattering Despite the threat to human health posed by some single-stranded RNA viruses, little is understood about their assembly. The goal of this work is to introduce a new tool for watching an RNA genome direct its own packaging and encapsidation by proteins. Contrast variation small-angle X-ray scattering (CV-SAXS) is a powerful tool with the potential to monitor the changing structure of a viral RNA through this assembly process. The proteins, though present, do not contribute to the measured signal. As a first step in assessing the feasibility of viral genome studies, the structure of encapsidated MS2 RNA was exclusively detected with CV-SAXS and compared with a structure derived from asymmetric cryo-EM reconstructions. Additional comparisons with free RNA highlight the significant structural rearrangements induced by capsid proteins and invite the application of time-resolved CV-SAXS to reveal interactions that result in efficient viral assembly. Viruses exploit many strategies to encapsidate the genome that codes for their container. The exact packaging process depends on the nature and form of the genetic material as well as the container or capsid. For some dsDNA viruses, there is clear structural separation between container and message; the genome is pumped into the preformed protein capsid by ATP-activated motor proteins (1). The packaged genome, under high pressure, uniformly fills the capsid (2). In contrast, viruses with single-stranded RNA genomes appear to exploit RNA-protein interactions to facilitate encapsidation. The few examples of spatially resolved, encapsidated ssRNA genomes suggest that the genetic material is not uniformly distributed; it is often asymmetric, with higher density at the protein interface (3-8). The distinct packaging strategies arise at least partially from the very different biophysics of DNA and RNA (9,10). Because ssRNA viruses are prevalent in human disease, there is significant interest in revealing their assembly mechanisms and packaged structures. Fortunately, a fair amount is known about RNA structure from work on other biological RNAs. These polymers contain self-complementary regions and fold on themselves to form a series of short, base-paired duplexes joined by a wide range of non-base-paired regions, including junctions, bulges, or loops. Many of these latter motifs serve as protein-binding sites.
In viral genomic RNA, recent work suggests that some RNA motifs that bind proteins, so-called packaging signals, are essential for viral assembly (11)(12)(13)(14)(15)(16)(17)(18). Their presence may explain why ssRNA viruses selectively pack their own genome relative to other cellular RNAs. Thus, the prevailing picture of how ssRNA genomes are encapsidated by proteins in viruses may be best viewed as a cooperative, specific protein-RNA folding process, as opposed to a sequential assembly of protein capsid followed by genome insertion (19). The self-assembly of ssRNA viruses can therefore be viewed as a process that exploits the delicate balance between RNA-RNA interactions, RNA-protein interactions and, of course, protein-protein interactions (20,21). From this vantage point, the packaging of some ssRNA genomes can be recast as an RNA folding problem. In the past 10-20 years, significant progress has been made toward defining the rules for RNA folding, largely in response to the growing appreciation of the role of RNA conformational dynamics in biology (22). Many experiments have provided details about RNA folding as it interacts with counterions (23)(24)(25)(26), small ligands (27)(28)(29)(30), and even proteins (31)(32)(33). Time-resolved experiments have been particularly useful at revealing the dynamics and transiently populated states of RNA. When compared with simulations or models, these studies elucidate the principles that direct folding of small to moderately sized (,1 kb) RNAs (24). Of the many biophysical tools applied to study the dynamic restructuring of RNAs, solution small-angle X-ray scattering (or SAXS) is particularly useful for revealing large-scale, timedependent conformational changes. SAXS has been recently applied to monitor the changing structures that accompany capsid self-assembly, providing new information about the underlying mechanisms for the simplest systems (34)(35)(36)(37). These works are informative but confounded by SAXS' sensitivity to all components present: both capsid components and nucleic acid. In fact, most of the information extracted was related to the number, and/or conformation of protein subunits. Thus, past kinetic studies of virus assembly by SAXS primarily reported the assembly of the capsid. Fortunately, contrast variation (CV-) SAXS can be used to extract the structures of individual components of complexes, when these components have different electron densities. The nucleic acid genome, for example, can be selectively detected even when encapsidated. The high X-ray flux available at synchrotron sources has already enabled time-resolved CV-SAXS studies of nucleic acid structuring by proteins in other systems (38). Applying this technique to study viral assembly could reveal valuable information about the role of RNA in the assembly process. The goal of this work is to assess the feasibility of timeresolved CV-SAXS studies of virus assembly. Here we report static CV-SAXS studies of bacteriophage MS2, a model system. Bacteriophage MS2 is an Escherichia coli phage from the leviviridae family with an ssRNA genome of 3569 nucleotides, and a T = 3 icosahedral capsid. This container is composed of 180 copies of the capsid protein and a maturation protein. MS2 is commonly used as model system to study 1ssRNA viral assembly. As such, it has been widely studied both biochemically (39)(40)(41)(42)(43)(44) and structurally, using SAXS (45)(46)(47)(48), as well as electron microscopy (4)(5)(6)(7)(8). 
Contrast variation studies of bacteriophage MS2 have been accomplished using small-angle neutron scattering (46) which more readily yields information about the protein conformation within the MS2 capsid. In these studies, structural information about the encapsidated RNA was inferred, not directly measured. Furthermore, the ability to perform these measurements with X-rays exploits the higher signals available from synchrotron sources relative to neutron sources. Recent asymmetric cryo-EM reconstructions have produced electron density models of the encapsidated RNA for comparison with our findings (5-7). The MS2 system is an excellent candidate for future time-resolved studies, as recent work suggests that, as a result of specific interactions with capsid proteins, its assembly pathways are orderly (49)(50)(51)(52). Thus, it is an ideal candidate to benchmark the performance of contrast variation SAXS in virus self-assembly experiments. Results and Discussion With the goal of directly measuring the encapsidated viral RNA, we performed CV-SAXS on bacteriophage MS2, as well as a recombinantly produced MS2 virus-like particle (VLP) devoid of viral RNA. Contrast variation SAXS, illustrated in Fig. 1 and described in full detail in Ref. 53, exclusively detects the nucleic acid component of a protein-RNA complex, enabling structural studies of the RNA genome contained within the fully assembled virus and potentially throughout the assembly or disassembly process. Time-resolved CV-SAXS has been successfully implemented in studies of DNA unwrapping from protein (histone) cores in nucleosome core particles (38). However, in nucleosome core particles, the nucleic acid surrounds the proteins, whereas the opposite is true for viruses. We first address the feasibility of CV-SAXS for particles where the nucleic acid is enclosed by protein. Previous work from Zipper et al. (54) suggests that contrast variation works equally well in either geometry. They performed a limited set of contrast variation SAXS on bacteriophages fr and R17. They extrapolated the radial electron density distribution for the phages but did not thoroughly explore the contrast matched condition. The use of high intensity X-ray sources and sophisticated SAXS data analysis tools make it worthwhile to revisit this approach. Contrast variation on VLPs To establish the feasibility of CV-SAXS in virus-like systems, we first performed control experiments on the empty capsid, the MS2 VLP. These studies directly address two primary concerns of CV-SAXS in these systems. The first is that the capsid may be impermeable to the contrast agent, resulting in different electron densities on its inside and outside. As described in Ref. 53, we use high concentrations of sucrose to effect the dramatic changes in solvent electron density required to match the protein density. If the capsid is impermeable to sucrose, the solvent inside the capsid would have an excess negative electron density relative to the solvent outside. Because scattering depends only on density differences, which can be positive or negative, a signal would result. This effect is illustrated using a simple model that treats the capsid as a spherical shell. An analytical solution (see "Experimental procedures") is used to obtain scattering profiles where the density inside the sphere is different from the density outside. Spheres are particularly amenable to characterization using SAXS; their dimensions are encoded in the positions of (multiple) extrema in the scattering profiles. 
In particular, the positions of the minima reflect the size of the particle. Fig. 2 shows the predicted behavior of the two cases, a permeable versus nonpermeable capsid, as the (outer) solvent electron density is raised. Here, the expected intensity of the scattered X-rays (I on the y axis) is plotted against the momentum transfer q, defined as q = 4π sin(θ)/λ, where λ is the X-ray wavelength and 2θ is the scattering angle. Noise was added to simulate realistic experimental conditions. In the former case, the overall signal decreases without significant change in the shape of the scattering profile, assessed through the positions of the extrema. In contrast, for the latter case of a nonpermeable capsid, the positions of the maxima and minima of the scattering profile change, and the signal intensity never drops substantially. A second concern is that sucrose could create osmotic pressures that swell, disrupt, or cause other structural modifications of the capsid. These changes are harder to predict; however, any significant alterations would manifest through a change in the positions of the extrema. The absence of a significant change in the scattering profile of the VLP through the contrast series would suggest that the structure of the capsid is not altered (at the resolution of our measurements) by osmotic effects. Experimentally determined SAXS scattering profiles of the MS2 VLP, acquired in solutions containing different amounts of sucrose, are shown in Fig. 3. As the protein contrast is reduced by increasing the sucrose concentration, the intensity of the scattering profile decreases, but no change in shape is detected: the positions of the minima remain constant despite the change in solvent electron density. Thus, the capsid appears permeable to sucrose and its structure remains roughly constant. With a further increase in sucrose concentration, the characteristic scattering features of the capsid disappear, and we conclude that it is rendered as transparent as possible.

Figure 1. Illustration of the principle of contrast variation SAXS. In CV-SAXS the electron density of the solvent is increased so that it matches that of one of the components of a multicomponent system. In this figure, electron density is represented as color. In a protein-nucleic acid complex, the electron density of the solvent can be increased (schematically shown as a color change from white to red to blue) by the addition of sucrose until it matches the electron density of the protein (red). Under this matched condition, any scattering signal that contains a contribution from the protein component blends into the background, i.e. disappears. Only the scattering from the denser nucleic acid is detected.

Figure 2. A and B, the predicted scattering profiles of an empty protein capsid (modeled as a spherical shell) at different contrast conditions in the case that it is permeable (A) or nonpermeable (B) to the contrast variation agent. The % blanked refers to the contrast match between solvent and solute. Perfect contrast matching corresponds to 100% blanked (that line would be difficult to display on a logarithmic axis; we show 99.9% as an alternative). In the inset, the color designates density as in Fig. 1. The curve colors correspond to the fractions quoted in the box. These models suggest that the permeability of the capsid can be readily evaluated from the shape changes in a contrast series. For the desired case of a permeable capsid (top plot), the signal drops dramatically as the contrast match is achieved (purple curve). An impermeable capsid (bottom plot) has a distinctly different signature.
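To make the shell-model comparison concrete, a minimal Python sketch of the core-shell calculation described under "Contrast variation shell modeling" is given below. It computes spherical-shell profiles for a permeable capsid (interior density tracks the outer solvent) and a nonpermeable one (interior solvent density fixed) across a contrast series; the radii, density values, and q range are illustrative assumptions, not the parameters used for the figures.

```python
import numpy as np

def sphere_amp(q, R):
    """Normalized sphere form-factor amplitude, 3(sin x - x cos x)/x^3."""
    x = q * R
    return 3.0 * (np.sin(x) - x * np.cos(x)) / x**3

def core_shell_intensity(q, R_core, R_shell, drho_core, drho_shell):
    """Scattered intensity of a spherical shell with excess densities drho_shell
    (protein shell) and drho_core (interior), both relative to the bulk solvent."""
    V_core = 4.0 / 3.0 * np.pi * R_core**3
    V_shell = 4.0 / 3.0 * np.pi * R_shell**3
    amp = (drho_shell * (V_shell * sphere_amp(q, R_shell) - V_core * sphere_amp(q, R_core))
           + drho_core * V_core * sphere_amp(q, R_core))
    return amp**2

q = np.linspace(0.005, 0.15, 400)          # 1/Angstrom
R_core, R_shell = 105.0, 135.0             # illustrative capsid radii, Angstrom

for blanked in (0.0, 0.5, 0.9, 0.999):     # fraction of protein contrast removed
    drho_shell = 1.0 - blanked             # shell contrast shrinks as sucrose is added
    # Permeable capsid: interior solvent matches the outer solvent (zero excess density).
    I_perm = core_shell_intensity(q, R_core, R_shell, 0.0, drho_shell)
    # Nonpermeable capsid: interior keeps its original density, so its excess
    # density becomes negative as the outer solvent density is raised.
    I_imperm = core_shell_intensity(q, R_core, R_shell, -blanked, drho_shell)
    print(f"{blanked:5.1%} blanked: I_perm(0)={I_perm[0]:.3e}  I_imperm(0)={I_imperm[0]:.3e}")
```

In the permeable case only the overall intensity drops, whereas in the nonpermeable case the negative interior contrast shifts the extrema, mirroring the behavior sketched in Fig. 2.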
These measurements suggest that an approximately 60% (w/w) sucrose solution represents the match condition. Some signal remains, likely because of impurities or the presence of a small amount of mRNA encapsidated during VLP expression. With the knowledge that the capsid can be (largely) blanked, we proceed to measure the scattering from the encapsidated RNA inside the WT bacteriophage MS2. Contrast variation on WT MS2 As a first step toward measuring the scattering from the genomic RNA encapsidated within WT MS2 phage particles, we acquired SAXS profiles of the full particle in solutions containing different sucrose concentrations. Measurements acquired at different solution contrast levels allow us to monitor both protein and nucleic acid components as their scattering strengths are varied. Fig. 4A shows SAXS profiles acquired at sucrose concentrations between 0 and 65% (w/w) in discrete steps. It is interesting to compare the shape of the profiles shown here with those of Fig. 2. In the WT MS2 the signal from the RNA becomes apparent; minima in the scattering profile shift to higher q values as the protein contrast is reduced. This trend is consistent with an increased signal from density within the capsid (smaller radius), relative to the capsid itself. The coincident decrease in the depth of the minima can be explained by a loss of spherical symmetry of the molecule (55). We note that the curves change qualitatively above 60% added sucrose. When coupled with the above measurements of the empty capsid, it appears that the "match point," where the sucrose density equals that of the protein, is around 60%. At higher concentrations, the protein signal reappears, but with lower electron density than the sucrose-rich solvent. Pair distance distribution functions (P(r)) can be computed from the scattering curves and used to interpret the effects of the changing contrast on the signal from the various components of the phage. This formalism displays information from SAXS profiles in real space; as opposed to momentum transfer (q), the x axis in these plots shows real-space distances in angstroms. The pair distance distribution functions shown in Fig. 4B were computed using GNOM (56) (ATSAS). All curve features remain robust against variations in the selection of parameters for the P(r) computation, the q range, and the maximum particle dimension, Dmax. With increasing sucrose concentration, the maximum of the P(r) curve shifts toward a smaller radius. When the sucrose concentration exceeds the match point of about 60%, a local minimum appears above 200 Å, which likely indicates that the solvent density exceeds that of the protein. Under this condition, the signal from the capsid re-emerges and creates a negative interference term with the RNA. In this case, the second peak above 225 Å could reflect capsid autocorrelation, and the minimum above 200 Å could indicate the interference of the signals from RNA (positive contrast) and protein (negative contrast). In any case, all the above measurements are consistent with a match point near 60% sucrose for both the empty and full capsid. Under this condition, the SAXS profile from the native phage most closely represents the scattering from just the encapsidated RNA.
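For readers who want to reproduce this kind of real-space analysis without ATSAS, the following is a rough Python sketch of a direct, regularization-free estimate of P(r) from a measured I(q); GNOM's indirect Fourier transform handles noise and truncation far more robustly, so this is only an illustration of the underlying relation. The input file name and column layout are assumptions.

```python
import numpy as np

def pair_distance_distribution(q, I, r_max=300.0, n_r=150):
    """Direct sine-transform estimate of P(r) from I(q):
    P(r) ~ (r / 2 pi^2) * integral of I(q) q sin(qr) dq.
    Suitable only for smooth, well-sampled, background-subtracted data."""
    r = np.linspace(1.0, r_max, n_r)
    integrand = I[None, :] * q[None, :] * np.sin(np.outer(r, q))
    p = (r / (2.0 * np.pi**2)) * np.trapz(integrand, q, axis=1)
    return r, p / p.max()          # normalized, as in Fig. 4B

# Assumed two-column text file: q (1/A) and I(q) for one sucrose concentration.
q, I = np.loadtxt("ms2_60pct_sucrose.dat", unpack=True)[:2]
r, p = pair_distance_distribution(q, I, r_max=350.0)
print("P(r) maximum at r = %.0f A" % r[np.argmax(p)])
```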
Although the contrast match is imperfect, we can nevertheless extract low-resolution structural features about this RNA, for comparison with other measurements on the RNA, and as a proof of principle of the method. Comparison to EM structure To assess the power and validity of CV-SAXS applied to viruses and virus-like particles, we compare our contrast-matched data (blue curve from Fig. 4) to the RNA density derived from a recent cryo-EM study of the MS2 phage by Koning et al. (7). To make this comparison, we computed the pair distance distribution from the electron density map EMD-3404 for voxels above the recommended contour level. Fig. 5 compares the pair distance distribution functions derived from the SAXS data in 60% sucrose (blue curve) with the one computed from this EM density map. The derived distributions are quite similar. Because the spatial resolution of SAXS is lower than that of EM, some of the features in the blue curve are understandably smeared out. Furthermore, the cryo-EM density map accounts for only 95% of the RNA. This loss is attributed to the flexibility near one of the ends of the RNA molecule. In contrast, SAXS retains sensitivity to all the RNA present. Finally, the slight variations at the largest distances may result from incomplete blanking of the capsid shell, e.g. the exact contrast variation point might be 59%, not 60% as measured. Although it is simplest to interpret data derived from a single molecular component, such as the RNA as described above, several computational tools are available to interpret SAXS data of multicomponent systems acquired under different contrast conditions. To explore this approach, we performed ab initio multiphase reconstruction of the SAXS data using the MONSA algorithm from ATSAS (57). With MONSA we exploit measurements at many different contrast values to obtain bead models for both the protein and the RNA phases of this complex. Results of these multiphase reconstructions are shown in Fig. 6, adjacent to the EM density map of bacteriophage MS2 (EMD-3403 and 3404) (7). Orthogonal cross-sections are shown to help visualize the structure. All molecules were rendered in UCSF Chimera (58). As with all SAXS reconstructions, the solution is neither unique nor high resolution, but it accurately captures several characteristic features present in the cryo-EM model. Specifically, both models suggest that the RNA is localized close to the capsid shell and that the genome displays an asymmetry reflecting the position of a maturation protein (top of model). This agreement provides additional confidence in the validity of the contrast variation method as applied to viruses.

Figure 3. SAXS profiles of the MS2 VLP acquired in solutions containing different amounts of sucrose. Compared with the predictions of Fig. 2, this result suggests that the capsid is permeable and therefore can be blanked. Near ~60% added sucrose there is a significant change in the scattering profile as the signal merges into the noise; this sucrose concentration appears to contrast match, or cancel, the protein scattering. The mismatch at low q can be attributed to impurities in the sample; however, we note that even small signals appear amplified when displayed on a logarithmic scale.

Figure 4. SAXS profiles (A) and pair distance distribution functions (B) of WT MS2 at increasing sucrose concentrations. The observed changes are consistent with an increased contribution from the RNA core (relative to the protein contribution) as the solution contrast increases. All curves are normalized to enable comparison, and curves in part (A) are offset to aid in visualization. Beyond the match point, near (but likely just below) 60% added sucrose, the contribution from the protein shell reappears as the second peak in the pair distance distribution.

Comparison with Mg2+-induced compaction The contrast variation method offers new opportunities for structural studies of viral genomes. Time-resolved approaches can be applied to follow the genome structure as it folds during assembly. Despite the lack of fully detailed structural information, even low-resolution studies can distinguish different models by measuring the extent and global structure(s) of the RNA; very large changes in conformation are required to compress a long RNA into a capsid. Distinct theoretical models, shown as cartoons in Ref. 59, could be readily distinguished because their global structural signatures are quite different. Four different models are discussed in that work, and each would present a unique signature. The first, a nucleation-elongation model, would be distinguished by a rapid compaction of RNA that precedes protein binding. The second, micellar condensation, is most likely consistent with a gradual condensation of RNA as it is slowly compacted by protein. The third, an RNA antenna model, would be distinguished by extended RNA structures as protein binds locally. Finally, a packaging signal model would likely display discrete folding steps because of the cooperative nature of the packaging. When coupled with modeling, the key distinguishing features (and kinetics) of these different models would be easily resolved. Finally, the global compaction and folding of viral RNAs differs from the typical structural behavior of catalytic RNAs. Many of the latter RNAs fold to compact structures following the addition of divalent Mg2+, which aids in screening the large negative backbone charge and directs catalytic RNAs toward structures that engage tertiary contacts to secure compact states. Fig. 7 shows SAXS profiles and dummy atom reconstructions of free MS2 RNA in a solution containing 150 mM NaCl (purple, top curve) and following the addition of Mg2+ (orange, middle curve) at concentrations that fold many catalytic RNAs, compared with the MS2 RNA in virio (blue, bottom curve). Here, scattering profiles are displayed as Kratky plots of Iq² versus q. This representation of the SAXS profiles emphasizes compaction and is useful in studies of RNA folding, where, in the absence of Mg2+, the molecule assumes more extended states. Dummy atom reconstructions of free MS2 RNA were performed with DAMMIF (60) (ATSAS). Multiple reconstructions are shown for each of the models. Although these reconstructions are not unique, they are useful in exemplifying the changes the RNA undergoes. SAXS studies of other functional RNAs reveal large changes upon the addition of Mg2+ (23,25). For this RNA, the addition of divalent ions does not lead to a large change in structure or Rg. In contrast, large differences in the SAXS signal between encapsidated (lower, blue curve) and free RNA are striking. This change is also accompanied by a collapse in the radius of gyration from 169 to 95 Å within the capsid, consistent with measurements of MS2 in solution by Zipper et al. (61), the spatial distribution of MS2 (46), and the change in genome size measured in other viruses (62). Folding of the viral RNA seems to require the protein. Perhaps this is by design; the architecture of this RNA has evolved to fold with capsid proteins so that it can assemble efficiently.
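The radii of gyration quoted above come from standard Guinier analysis, and the Kratky representation is simply Iq² plotted against q. A minimal Python sketch of both operations follows; the input file name, the initial Rg guess, and the qRg < 1.3 Guinier cutoff are assumptions of this example rather than values reported here.

```python
import numpy as np

def guinier_rg(q, I, q_rg_max=1.3):
    """Estimate Rg from the Guinier approximation ln I = ln I0 - (q Rg)^2 / 3,
    iterating so that only points with q*Rg below q_rg_max are used."""
    rg = 100.0                                    # initial guess, Angstrom
    for _ in range(10):
        mask = q * rg < q_rg_max
        slope, intercept = np.polyfit(q[mask]**2, np.log(I[mask]), 1)
        rg = np.sqrt(-3.0 * slope)
    return rg, np.exp(intercept)

# Assumed two-column profile (q in 1/A, I(q)) of free MS2 RNA in 150 mM NaCl.
q, I = np.loadtxt("ms2_rna_150mM_NaCl.dat", unpack=True)[:2]
rg, i0 = guinier_rg(q, I)
print(f"Guinier Rg = {rg:.0f} A, I(0) = {i0:.3g}")

kratky = I * q**2          # Kratky representation (Iq^2 vs q) emphasizes compaction
```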
Conclusion In conclusion, these studies not only underscore the important role of protein-RNA interactions in compacting/packaging RNA (as suggested by others) but present a unique strategy for watching this process as it occurs, using CV-SAXS. There is much to learn about the folding of viral RNAs by the proteins that they encode (59). The method, demonstrated here, should be readily transferrable to other nonenveloped viruses and may be useful in unraveling the mechanism of novel antiviral drugs that target assembly, a topic of great current importance. Sample preparation Escherichia coli bacteriophage MS2 (ATCC 15597-B1) was propagated in Escherichia coli (Migula) Castellani and Chalmers (ATCC 15597) in the recommended growth media. After overnight growth, E. coli were pelleted by centrifugation, and the supernatant was collected. PEG and NaCl were added to final concentrations of 10% w/v and 0.5 M, respectively. After incubation for an hour, the bacteriophages were precipitated by centrifugation. The precipitate was resuspended in buffer (50 mM Tris-HCl, pH 7.5, 150 mM NaCl), filtered, and further purified by gel filtration chromatography through a Superdex 200 column. Contrast variation samples were then dialyzed overnight into the appropriate sucrose concentration. This was done to ensure complete matching between buffer and sample. MS2 capsid protein (CP) was produced recombinantly in E. coli (BL21). A plasmid containing the MS2 CP sequence was produced de novo by ATUM. Transformed E. coli were grown in LB-Lennox until mid-log phase, after which expression was induced with 1 mg/ml of isopropyl 1-thio-β-D-galactopyranoside. Protein expression continued for 4 h. Afterward, E. coli were pelleted by centrifugation, and the pellet was stored at −20°C. The pellet was thawed, resuspended in buffer (50 mM Tris-HCl, pH 7.5, 150 mM NaCl), sonicated, and clarified by centrifugation. MS2 CP was then purified as self-assembled VLPs using the same protocol as WT MS2. UV absorbance ratios suggest that the VLPs contain a small amount of mRNA. Because of the exploratory nature of the blanking procedure, sucrose was added to samples with a positive displacement pipette. MS2 RNA was purchased from Sigma and frozen. After thawing, it was buffer exchanged into 50 mM Tris-HCl, pH 7.5, 150 mM NaCl by serial concentration. Lastly, it was annealed by heating to 95°C for 5 min and then cooled on ice. Magnesium was added immediately before measurement with a pipette.

Figure 6. Multiphase reconstructions from CV-SAXS data shown adjacent to the EM density map of bacteriophage MS2. Protein is shown in red and RNA is shown in blue. The full reconstruction and three orthogonal cross-sections are shown for each case. Although the spatial resolution obtained through SAXS is lower and the reconstruction is not unique, SAXS data are much simpler to acquire than asymmetric cryo-EM reconstructions. Similar structural features are captured by both methods, including a small, protruding piece of RNA that may reflect the position of the maturation protein.

SAXS data collection SAXS data were collected at the BioCAT sector of the Advanced Photon Source, in two separate studies. Protocols for acquiring contrast variation SAXS data are detailed in Ref. 53. During the first study, SAXS data were acquired on the MS2 VLP using the standard BioCAT equilibrium setup. Contrast variation data were acquired on WT MS2 in a second beamtime, using a custom-built setup that employed a coaxial-sheathed continuous flow. In both cases data were collected on a Pilatus 3M detector.
Samples were suspended in 50 mM Tris-HCl, pH 7.5, 150 mM NaCl, with added sucrose to enable CV-SAXS. Data reduction was performed using the RAW package (63). Further analysis was performed with the ATSAS suite (64). Contrast variation shell modeling The possible effects of permeability on an empty capsid were calculated using the standard core-shell (spherical shell) form factor, I(q) ∝ F(q)², with

F(q) = (4π/3) [Δρ_shell (R_shell³ Φ(qR_shell) − R_core³ Φ(qR_core)) + Δρ_core R_core³ Φ(qR_core)], where Φ(x) = 3(sin x − x cos x)/x³.

In these equations, R_shell and R_core are, respectively, the outermost and innermost radii of the shell, and Δρ_shell and Δρ_core are the excess electron densities above the solvent for the shell and core, respectively. In the case of a permeable capsid, Δρ_core was held at zero as Δρ_shell was reduced. In the case of the nonpermeable capsid, Δρ_core was reduced by the same amount as Δρ_shell. To further simulate experimental conditions, the theoretical SAXS intensity was convolved with a Gaussian function to simulate experimental broadening. Lastly, a random noise background was added to simulate the level of signal to noise seen as contrast is reduced. Data availability Data will be made available upon request to the corresponding author, Lois Pollack, lp26@cornell.edu.
Influence of the Business Revenue, Recommendation, and Provider Models on Mobile Health App Adoption: Three-Country Experimental Vignette Study Background: Despite the worldwide growth in mobile health (mHealth) tools and the possible benefits of mHealth for patients and health care providers, scientific research examining factors explaining the adoption level of mHealth tools remains scarce. Objective: We performed an experimental vignette study to investigate how four factors related to the business model of an mHealth app affect its adoption and users' willingness to pay: (1) the revenue model (ie, sharing data with third parties vs accepting advertisements); (2) the data protection model (General Data Protection Regulation [GDPR]-compliant data handling vs non-GDPR-compliant data handling); (3) the recommendation model (ie, doctor vs patient recommendation); and (4) the provider model (ie, pharmaceutical vs medical association provider). In addition, health consciousness, health information orientation, and electronic health literacy were explored as intrapersonal predictors of adoption. Methods: We conducted an experimental study in three countries, Spain (N=800), Germany (N=800), and the Netherlands (N=416), to assess the influence of multiple business models and intrapersonal characteristics on the willingness to pay and intention to download a health app. Results: The revenue model did not affect willingness to pay or intentions to download the app in all three countries. In the Netherlands, data protection increased willingness to pay for the health app (P<.001). Moreover, in all three countries, data protection increased the likelihood of downloading the app (P<.001). In Germany (P=.04) and the Netherlands (P=.007), a doctor recommendation increased both willingness to pay and intention to download the health app. For all three countries, apps manufactured in association with a medical organization were more likely to be downloaded (P<.001). Finally, in all three countries, men, younger individuals, those with higher levels of education, and people with a health information orientation were willing to pay more for adoption of the health app and had a higher intention to download the app. Conclusions: The finding that people want their data protected by legislation but are not willing to pay more for data protection suggests that in the context of mHealth, app privacy protection cannot be leveraged as a selling point. However, people do value a doctor recommendation and apps manufactured by a medical association, which particularly influence their intention to download an mHealth app. Background Over the last decade, the number of people worldwide who own a mobile phone or another mobile electronic communication device has grown exponentially, fueling the development of mobile health-related services and functions [1,2]. Mobile health (mHealth) [3] can be broadly defined as any medical or public health practice that is supported by mobile devices, ranging from the use of mobile phones to improve point-of-service data collection, care delivery, and patient communication, to the use of alternative wireless devices for real-time medication monitoring and adherence support (for an overview, see [4]). One of the main underlying goals of mHealth is to improve the quality of and access to health care while reducing its costs [5].
Given the potential of mHealth for supporting the health of users, it is important to assess the factors that may motivate or hinder the successful adoption of mHealth technologies and apps.After all, adopting a health technology or app is a first necessary step for ensuring effectiveness [4][5][6].However, there is currently insufficient programmatic evidence to inform the implementation and scale-up of mHealth because very little is known about the adoption and effectiveness of mHealth technologies on health [7]. To fill this gap, the aim of this study was to move the field forward by experimentally examining factors that have been suggested to play a role in the adoption of mHealth [8].We operationalized mHealth adoption in two ways: as having a higher intention to download an mHealth app and being willing to pay a higher price for it.We focus on four factors related to the business model of app development, namely the revenue model, the degree of data protection offered to users, the presence of a doctor recommendation, and whether the app is developed by the pharmaceutical industry or by a medical association.In addition, we explored three intrapersonal characteristics that have been identified as important predictors of electronic health (eHealth) adoption in previous research: health consciousness, health information orientation, and eHealth literacy [9]. Finally, we explored differences among three European countries with varying cultures and health care infrastructures.In Spain, the national health system is an agglomeration of public health services established by the general health law.The vast majority of final providers of care are part of the regional health service structure and are not autonomous legal entities.In Germany, there is a statutory health insurance system that allows people with high incomes to opt out in favor of private coverage.In the Netherlands, there is a statutory health insurance system with universally mandated private insurance (national exchange) that is regulated by the government along with subsidies for insurance.We assume that these differences in national health care infrastructure may impact how users value business models. Theoretical Framework mHealth can serve multiple purposes such as treatment adherence and disease management, smoking cessation, weight loss, diet, and physical activity [10], thereby providing ample opportunities for people to better monitor and manage their personal health with the aid of their smartphone and other wearable devices [8].In parallel with the rapid development of mHealth technologies, the focus of health care has shifted from health care providers' paternalistic approach to a more consumer-oriented approach [11].At the heart of this approach is the belief that allowing patients to actively access their personal health records and manage their own health will encourage them to be more involved in their own health care [12].This increased involvement can subsequently strengthen the patient-provider relationship and enhance the (cost-) effectiveness of health care management.Because of these individual and societal benefits associated with mHealth, it is important to gain greater insight into business-and person-level factors that may predict its adoption and use. 
mHealth Business Models A first factor related to the business model that may play a role in mHealth adoption is the revenue model.mHealth operates at the intersection of health, technology, and finance, making it a complex industry for the development of sustainable revenue models [5].Because consumers do not want to spend a large amount of money on the adoption of health apps [13], a great variety of apps have been developed that make revenue on the basis of advertising; however, personal data are also sold to third parties in some cases.Such apps embrace a revenue model that approaches the "privacy as a product" concept [14].However, it is likely that people experience having their personal health data sold to third parties as a greater "cost" than merely having to accept advertisements in return for "free" access to and use of an mHealth app, as the security of eHealth data is a major concern in the health care industry [5].Hence, we established the first hypothesis (H1): people are willing to pay more for a health app (H1a) and have a higher intention to download the app (H1b) when they can access and use the app in exchange for accepting advertisements than when having to accept either data sharing with third parties, or a combination of advertising and data sharing with third parties. In the arena of health care, previous misuses of patient data have affected public confidence in health care research [15].This was one of the motivating factors for the European Union to implement the General Data Protection Regulation (GDPR) [16].The GDPR aims to protect people's right to protection of their data by establishing rules that are related to the free movement of personal data.The GDPR has received widespread public attention in the public domain, and has led to real and significant changes in the ways in which organizations deal with user data.It is reasonable to assume that the GDPR has sharpened citizens' awareness of and concern for data protection, including when adopting mHealth apps [17].Hence, we may expect that adoption of a health app will be positively influenced by assurance of adequate protection of personal health data, leading to hypothesis 2 (H2): people are willing to pay more for a health app (H2a) and have a higher intention to download the app (H2b) when the health app ensures data protection in line with European legislation than when no information is given about data protection. An additional factor that may play a role in the adoption of mHealth apps is whether the app is recommended by medical professionals, who are considered the gatekeepers of health care delivery [18,19].As an example, in their analysis of factors affecting the adoption of electronic patient records, Raisinghani and Young [20] noted that doctor recommendations were a key factor in the adoption process.Similarly, Peng et al [21] found that patients with type 2 diabetes identified doctor recommendations as a significant factor motivating their adoption of a diabetes mHealth app [22]. 
There are at least two reasons to explain why a doctor recommendation for a health app can be a strong enforcer for patients to use digital health technologies.First, doctors are considered to be experts in their field of work, and therefore have more influence than nonexperts, particularly since they also know the patients and their interests quite well [19,23].Second, doctors' professionalism forces them to act upon the patients' interests first; most patients therefore trust a doctor more than other actors [24].Hence, we devised hypothesis 3 (H3): people are willing to pay more for a health app (H3a) and have a higher intention to download the app (H3b) when the app is recommended by doctors than when the app is recommended by a patient association. Finally, we examined whether a health app manufactured by a medical association is more likely to be adopted than an app manufactured by the pharmaceutical industry.Pharmaceutical companies need to negotiate the conflict between striving for optimal health care and striving for profit [25].However, in the eyes of the public, it is not always clear that the pharmaceutical industry has patients' interests at heart [26]. With the advent of mHealth, new concerns have arisen with regard to the quality of these apps, and whether their development and manufacturing should be regulated [27].With respect to the implementation of mHealth, there are concerns that when the pharmaceutical industry engages in efforts to disseminate health information via mobile devices, they may strategically use these efforts to promote their products and services [28].In short, given the for-profit nature of the pharmaceutical industry, we may assume that trust in pharmaceutical providers of mHealth apps is generally lower than trust in providers for whom generating profit is not the main goal, such as medical associations or other nonprofit medical associations.This difference in trust may explain a difference in users' adoption of mHealth apps, leading to hypothesis 4 (H4): people are willing to pay more for a health app (H4a) and have a higher intention to download the app (H4b) when the app is manufactured by a medical association than when the app is manufactured by a pharmaceutical company. Personal Factors Affecting mHealth Adoption: Health Consciousness, Health Information Orientation, and eHealth Literacy In addition to mHealth business models, we may also consider psychological antecedents that predict adoption [29] to obtain an adequate understanding of personal characteristics that influence the information-use strategies of the online health consumer [30][31][32].Studies have shown that the determination to adopt mHealth technologies is greater among people who evaluate their health as more vulnerable to diseases and are more concerned about their health [33], and among people who take more care of their own health [34,35]. 
According to Dutta-Bergman [34], health consciousness, health information orientation, and eHealth literacy are important factors related to the search for online health information and potentially also to the adoption of a health app. Health consciousness means that an individual takes care of their personal health and that those health concerns are blended into their daily lives [33,36-38]. Health information orientation, defined as the inclination to seek out health information, could be an important predictor to explain who is most willing to adopt a health app [39,40]. Finally, eHealth literacy is considered an important factor predicting health app adoption, since people with higher levels of eHealth literacy are better able to use health apps [41]. Considering the limited understanding of the general cognitive motivators that trigger people's usage of health apps, it is important to examine which factors can best explain the adoption of health apps. Therefore, a second aim of this study was to examine whether health consciousness, health information orientation, and eHealth literacy predict the adoption of and willingness to pay for a health app. Participants and Design We conducted an online vignette experiment in three countries: Spain, Germany, and the Netherlands. Every participant was exposed to four different vignettes, each describing one specific aspect of the business model of an mHealth app (ie, the first with a specific revenue model, the second with a data protection model, the third with a recommendation model, and the fourth with a provider model). Next, the likelihood of adopting the health app and the willingness to pay were assessed as outcome measures. For each vignette, a different version was randomly assigned to participants. Vignettes describe a hypothetical situation to which participants respond, thereby revealing their perceptions, values, attitudes, and intentions. The advantage of vignette studies is that they offer a pragmatic and internally valid method for assessing participants' responses to experimental conditions, thereby simulating actual situations as closely as possible. Nonetheless, considering that vignettes are a simulation, actual situations might lead to different outcomes. The revenue model was varied between subjects at three levels (advertising vs data sharing vs advertising and data sharing), data protection at two levels (data protection by European Union legislation vs no information), recommendation at two levels (recommended by doctors vs a patients association), and provider at two levels (medical association vs pharmaceutical company). Table 1 shows the distribution of participants over each vignette condition. The data in Spain (N=800) and Germany (N=800) were collected through an online survey administered by a Spanish professional research company. The sample was chosen through a proportionate stratified sampling method considering gender and age. The data in the Netherlands (N=416) were gathered by snowballing a link to the questionnaire via social media platforms. The participant information is shown in Table 2. Employment status was assessed by the question "Which of these descriptions best describes your situation or applies to what you have been doing for the last month?," with the answer possibilities ranging from "Employed/Self-employed" to "Another not in the labor force." We created a binary variable (employed vs nonemployed) based on this response.
Financial status was assessed with the question "During the last 12 months, would you say you had difficulties in paying your bills at the end of the month…?," with the answer possibilities ranging from "Most of the time" to "Never." All survey participants were informed of the overall study goals and procedures. Only those who agreed to participate in the online survey were given access to the survey. The approval of the Ethical Committee of the university leading the study (Universitat Oberta de Catalunya, Barcelona, Spain) to conduct the experiment was obtained in 2017. We informed participants beforehand that all of the data collected would remain confidential and that they could cease participation at any time. Procedure When individuals agreed to participate in the study, they answered a series of demographic questions (gender, age, education, employment status). The participants were then presented with four vignettes for the revenue, data protection, recommendation, and provider models. For example, the vignette for the revenue model stated: Imagine that an app is presented to you to support you in improving the healthiness of your lifestyle by recording your personal data (for example, nutritional intake, physical behavior, heart rate, glucose level, calories burnt, etc), providing prescriptions and consultations, and checking your health history. Based on your collected data, the app will provide tailored advice to improve your health. Revenues of this health app come from ads shown to you when using the app. We want you, on an as-honestly-as-possible basis, to evaluate how much you want to pay for the health app, if you were to buy it in an app store. The participants were then asked about their willingness to pay and their intention to download the app. Finally, the participants answered general questions relating to health app usage, health consciousness, health information orientation, health literacy, and health issues. Dependent Variables Willingness to pay was measured through responses to the open question "What is the highest price you are willing to pay?" (in Euros). Willingness to download the app was measured with the question "Please indicate on a scale of 1 to 10 how likely it is that you would download the app?" (1=definitely not download the app, 10=definitely download the app). Intrapersonal Factors Health app usage was measured by asking how often the participant used a health app, varying from 0 (never) to 6 (more than 5 times), and how much time the participant spent using a health app in the last week, varying from 0 (0 hours) to 6 (more than 1 hour). Health consciousness was measured using 5 statements that were each rated on a 5-point scale (1=strongly disagree to 5=strongly agree) [39]. Reliability of the scale was high (Cronbach α=.88). Health information orientation was measured using 8 statements, each rated on a 5-point scale (1=strongly disagree to 5=strongly agree) [39]. Reliability of the scale was high (Cronbach α=.93). eHealth literacy was measured using 8 statements, each rated on a 5-point scale (1=strongly disagree to 5=strongly agree) [42]. Reliability of the scale was high (Cronbach α=.95).
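As an illustration of how the scale reliabilities reported above can be computed, the following is a small Python sketch of Cronbach's alpha for an item-response matrix; the file name and item layout mirror the 5-item health consciousness scale but are assumptions of this example.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) matrix of Likert scores:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars / total_var)

# Assumed CSV with one column per health consciousness item (1-5 Likert scores).
scores = np.loadtxt("health_consciousness_items.csv", delimiter=",", skiprows=1)
print(f"Cronbach alpha = {cronbach_alpha(scores):.2f}")
```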
Statistical Analyses Multiple linear regression models were conducted for every country and for the two dependent variables (willingness to pay and intention to download) separately. Within each regression, in the first step we assessed the effect of the business model; in the second step we included age, gender, and education; and in the third step we included health consciousness, health information orientation, and eHealth literacy. In addition, effect sizes were calculated for each regression model. Following Cohen [43], an effect size of R² around 0.1 is interpreted as low, R² around 0.3 is considered medium, and R² > 0.5 is interpreted as a large effect. Revenue Models Linear regression analyses were first conducted to explore the role of the revenue model for Spain (see Multimedia Appendix 1). We first examined whether Spanish people were willing to pay more for a health app when they could access and use the app in exchange for accepting advertisements than when having to accept data sharing with third parties, either alone or in combination with advertising (H1a). No effects were found for data sharing (P=.20) or data sharing and advertising (P=.50) as revenue models (advertising was used as the reference category) on willingness to pay. Furthermore, men (P=.02) and people with a health information orientation (P=.002) were willing to pay more for the health app. The explained variance for the model including all predictors was 3.2%. Next, we tested the same model but with intention to download the app as the outcome measure (H1b). Again, no effects were found for data sharing (P=.95) or data sharing and advertising (P=.19) as business models on the intention to download in all three models. Men (P=.005), younger people (P<.001), those employed (P=.002), and people with a health information orientation (P<.001) reported greater intentions to download the health app. The explained variance for the model including all predictors was 22.4%. These findings do not support H1a and H1b. Similar results were obtained in the analyses for the German sample (see Multimedia Appendix 2). No effects were found for data sharing (P=.23) or data sharing and advertising (P=.07) as revenue models (advertising as the reference category) on willingness to pay (H1a) in all three models. Furthermore, younger people (P=.02), people who obtained a postgraduate degree (P=.01) compared to students, those who were employed (P=.02), and people with a health information orientation (P<.001) were willing to pay more for adopting the health app. The explained variance for the model including all predictors was 7.94%. Next, we tested the same model but with intention to download the app as the outcome measure (H1b). Again, no effects were found for data sharing (P=.26) or data sharing and advertising (P=.08) as revenue models on the intention to download in all three models. Men (P=.02), younger people (P<.001), people who finished high school (P=.01) or university (P=.003), those who were employed (P<.001), and people with a health information orientation (P<.001) reported greater intentions to download the health app. The explained variance for the model including all predictors was 29.7%. These findings do not support H1a and H1b.
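A minimal sketch of the three-step (hierarchical) regression described above under Statistical Analyses is shown below, using pandas and statsmodels in Python; the data file and column names are assumptions of this example, not the actual study variables.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Assumed per-country data file with one row per participant.
df = pd.read_csv("vignette_spain.csv")

steps = [
    "C(revenue_model)",                                            # step 1: business model
    "C(revenue_model) + age + C(gender) + C(education)",           # step 2: + demographics
    "C(revenue_model) + age + C(gender) + C(education)"
    " + health_consciousness + health_info_orientation + ehealth_literacy",  # step 3
]

for outcome in ("willingness_to_pay", "intention_to_download"):
    for i, rhs in enumerate(steps, start=1):
        model = smf.ols(f"{outcome} ~ {rhs}", data=df).fit()
        print(f"{outcome}, step {i}: R2 = {model.rsquared:.3f}")
```

Comparing R² across the three steps for each outcome reproduces the incremental explained-variance logic used throughout the Results.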
Similar results were also obtained for the Netherlands (see Multimedia Appendix 3).No effects were found for data sharing (P=.38) or data sharing and advertising (P=.17) as revenue models (advertising as the reference category) on willingness to pay in all models.Furthermore, men (P=.02), those who were employed (P=.01), and people with a health information orientation (P=.05) were willing to pay more for adopting the health app.The explained variance for the model including all predictors was 5.3%.For intentions to download the app, no effects were found for data sharing (P=.38) or data sharing and advertising (P=.43) as revenue models in all three models.People with a health information orientation had greater intentions to download the health app (P=.05).The explained variance for the model including all predictors was 3.5%.These findings do not support H1a and H1b. Data Protection Models Next, we explored the role of the data protection model.Linear regression analyses were first performed for Spain (see Multimedia Appendix 4) to examine if people were willing to pay more for a health app (H2a) and had a higher intention to download the app (H2b) when the health app ensured data protection in line with European legislation than when no information was given about data protection.No effects were found for the data protection model (no information about data protection was the reference category) on willingness to pay in all three models.Furthermore, men (P=.03) and people with a health information orientation (P=.006) were willing to pay more for adopting the health app.The explained variance for the model including all predictors was 3.0%.In contrast, participants in the condition whereby data were protected by European Union legislation had greater intentions to download the app (P<.001) in all three models.Furthermore, men (P=.01), younger people (P<.001), and people with a health information orientation (P<.001) had greater intention to download the health app.The explained variance for the model including all predictors was 23.3%.These results do not support H2a but do support H2b. Next, we conducted the linear regression analyses for Germany (see Multimedia Appendix 5).Again, no effects were found for the data collection model (P=.08) on willingness to pay in all three models.People with a health information orientation (P<.001) and with less eHealth literacy (P=.03) were willing to pay more for the health app.The explained variance for the model including all predictors was 4.8%.In contrast, participants in the condition whereby data were protected by European Union legislation had greater intentions to download the app (P<.001) in all three models.Furthermore, men (P=.006); younger people (P<.001); people who finished high school (P=.001), university (P<.001), or had a postgraduate degree (P=.03); and people with a health information orientation (P<.001) reported greater intentions to download the health app.The explained variance for the model including all predictors was 33.6%.These results do not support H2a but do support H2b. 
Finally, linear regression analyses were conducted for the Netherlands (see Multimedia Appendix 6).Participants in the condition whereby data protection by European Union legislation was explicitly stated were more willing to pay more for the health app than participants who received no information about data protection (P<.001).No significant effects were found for the other factors.The explained variance for the model including all predictors was 5.7%.In addition, participants in the condition where data were protected by European Union legislation reported greater intentions to download (P<.001) in all three models.Furthermore, people with a health information orientation had a greater intention to download the health app (P=.003).The explained variance for the model including all predictors was 11.2%.These results support both H2a and H2b. Recommendation Models Next, we explored the role of the recommendation model.The first linear regression analyses were conducted for Spain (see Multimedia Appendix 7) to examine if people were more willing to pay more for a health app (H3a) and had a higher intention to download the app (H3b) when the app was recommended by doctors than when the app was recommended by a patient association (reference category).No effects were found for the recommendation model on willingness to pay in all three models.Furthermore, men (P=.02) and people with a health information orientation (P=.01) were willing to pay more for adopting the health app.The explained variance for the model including all predictors was 3.4%.In contrast, participants reported greater intentions to download the health app when doctors recommended the health app (P=.04) compared to when the patients association recommended the health app.Furthermore, men (P=.02), younger people (P<.001), those who were employed (P=.01), and people with a health information orientation (P<.001) reported greater intentions to download the health app.The explained variance for the model including all predictors was 21.2 %.These results do not support H3a but do support H3b. Next, we conducted linear regression analyses for Germany (see Multimedia Appendix 8).Participants were more willing to pay for the app when it was recommended by doctors than when it was recommended by the patients association (P=.02), except in the model including all predictors.Furthermore, people who finished university (P=.007) compared to students, and people with a health information orientation (P<.001) were willing to pay more for adopting the health app.In contrast, people with more eHealth literacy were less willing to pay more for the health app (P=.007).The explained variance for the model including all predictors was 7.6%.Participants had greater XSL • FO RenderX intentions to download the health app when it was recommended by doctors compared to when it was recommended by the patients association (P=.01), but only when not controlling for sociodemographic and dispositional factors.Furthermore, men (P=.02); younger people (P<.001); people who finished high school (P=.001) and university (P<.001), or those with a postgraduate degree (P=.009) compared to students; those who were employed (P<.001); and people with a health information orientation had greater intention to download the health app (P<.001).The explained variance for the model including all predictors was 31.2%.These results support both H3a and H3b. 
Finally, linear regression analyses were conducted for the Netherlands (see Multimedia Appendix 9). Participants were willing to pay more for the health app when it was recommended by doctors compared to when it was recommended by the patients association (P=.01) in all three models. No significant effect was found for the other factors. The explained variance for the model including all predictors was 2.7%. In addition, participants reported greater intentions to download the health app when it was recommended by doctors compared to when it was recommended by the patients association (P=.01). Furthermore, people with a health information orientation had greater intentions to download the health app (P<.001). The explained variance for the model including all predictors was 8.6%. These results support both H3a and H3b. Provider Models Finally, we conducted linear regression analyses to explore the role of the provider model. The first set of analyses was conducted for Spain (see Multimedia Appendix 10). We examined whether people were willing to pay more for a health app (H4a) and had a higher intention to download the app (H4b) when the app was manufactured by a medical association than when it was manufactured by a pharmaceutical company (reference category). No effects were found for the provider model on willingness to pay in all three models. Furthermore, men (P=.03) and people with a health information orientation (P=.004) were willing to pay more for adopting the health app. The explained variance for the model including all predictors was 3.5%. In contrast, participants had greater intentions to download the health app when it was provided by a medical association compared to when it was provided by a pharmaceutical company (P<.001) in all three models. Furthermore, men (P=.006), younger people (P<.001), those who were employed (P=.009), and people with a health information orientation (P<.001) had greater intentions to download the health app. The explained variance for the model including all predictors was 23.1%. These results do not support H4a but do support H4b. The second set of linear regression analyses was conducted for Germany (see Multimedia Appendix 11). No significant effects were found for the provider model on willingness to pay in all three models. Furthermore, younger people (P=.005), people with a health information orientation (P<.001), and people with less eHealth literacy (P=.002) were willing to pay more for adopting the health app. The explained variance for the model including all predictors was 6.3%. In contrast, participants reported greater intentions to download the health app when it was provided by a medical association compared to when it was provided by a pharmaceutical company (P<.001) in all three models. Furthermore, men (P=.002); younger people (P<.001); people who finished high school (P=.005), university (P<.001), or a postgraduate degree (P=.03), with students as reference; people with a health information orientation (P<.001); and people with less eHealth literacy (P=.02) had greater intentions to download the health app. The explained variance for the model including all predictors was 29.9%. These results do not support H4a but do support H4b.
Finally, linear regression analyses were conducted for the Netherlands (see Multimedia Appendix 12). Participants were willing to pay more for the health app when it was provided by a medical association compared to when it was provided by a pharmaceutical company (P=.005) in all three models. No significant effects were found for the other factors. The explained variance for the model including all predictors was 5.0%. In addition, participants reported greater intentions to download the app when it was provided by a medical association than when it was provided by a pharmaceutical company (P<.001) in all three models. Furthermore, people with a health information orientation had greater intentions to download the health app (P=.008). The explained variance for the model including all predictors was 7.0%. These results support both H4a and H4b. Principal Findings Given the expected benefits associated with mHealth adoption, both for individual users and health care systems, it is important to gain a greater understanding of factors that contribute to or detract from adoption. Therefore, we conducted an online experiment to assess the effect of four variations in the business model of an mHealth app and three intrapersonal characteristics in three different countries (Spain, Germany, and the Netherlands) on individuals' willingness to pay for and their likelihood of adopting an mHealth app. The results showed that in all countries there was no effect of the different revenue models on either willingness to pay or intention to download the health app, thereby not supporting H1. People are not less willing to pay and do not have a reduced intention to download a health app when the revenue model is based on data sharing, or on advertising and data sharing, compared to one based on advertising only. This finding is surprising, as people in general report being concerned about sharing their personal information [42]. Hence, this concern could be expected to drive their intended and actual disclosure, and their subsequent decision making. Our study does not support this speculation, and instead suggests that people are less than selective and often cavalier in the protection of their own data profiles. To date, few studies have examined this discrepancy between individuals' intentions to protect their own privacy and how they actually behave in the marketplace, which is termed the "privacy paradox" (see [42]), in the context of mHealth. Our findings indicate that further research on this matter is warranted, given that the privacy paradox is an increasing concern when it comes to personal health data [11,44]. Interestingly, in Spain and Germany, we found no effects of the data protection model on willingness to pay, whereas in the Netherlands, participants in the data protection condition were willing to pay more for the health app compared to those receiving no information regarding how their health information would be used. In all three countries, participants in the data protection condition were more likely to download the health app, thereby partly supporting H2. Thus, in line with the findings for the revenue model, and supporting the notion of the privacy paradox, people were largely not willing to pay more for data protection. Given an industry in which mobile apps are continuously expanding and new health care apps and devices are rapidly being created, it is essential to be very cautious about the collection and treatment of users' personal health information, particularly by the consumers themselves [44].
In Spain, we found no effect of the recommendation model, whereas in Germany and the Netherlands, participants were willing to pay more for a health app recommended by a doctor compared to one recommended by a patient association. In addition, in all three countries, intentions to download the health app were greater when the app was recommended by a doctor.
In Spain and Germany, we found no effects of the provider model, whereas in the Netherlands, participants in the medical association provider condition were willing to pay more for the health app than participants in the pharmaceutical provider condition. In all three countries, participants in the medical association provider condition had greater intentions to download the app than participants in the pharmaceutical provider condition.
Overall, the findings of our study indicate that endorsement from the medical establishment, either via a doctor recommendation or a medical association provider model, helps to increase adoption of an mHealth app. However, the revenue and data protection models seem to have a less consistent and weaker effect, especially on the willingness to pay for an app. These findings suggest that future app developers can benefit most from a close collaboration with medical experts and organizations to increase adoption rates.
In general, the above findings show that certain aspects of the business model can influence the willingness to pay for or the intention to adopt an mHealth app, but that this influence appears conditional; that is, it varies according to the country of residence and seems to interact with dispositional characteristics such as a person's health information orientation. In summary, this suggests that mHealth adoption is a complex process that involves many different factors situated at least at the personal, economic, and cultural level. This implies that in order to increase adoption rates and decrease attrition, developers, organizations, and practitioners need to be wary of one-size-fits-all approaches, as these are likely less successful than an approach that tailors the business model to the population of interest. Given that we currently lack understanding of the precise mechanisms that explain why, under certain conditions, mHealth adoption can be more or less successful, future research is needed to explore these mechanisms in greater depth.
Finally, in all three countries, men, younger individuals, people with higher levels of education, and those with a health information orientation were willing to pay more for adoption of the health app and had a higher intention to download the app. In line with previous studies, health information orientation was found to be an important predictor that explains both the willingness to pay and the intention to download the health app [36,39,40]. A high level of health information orientation positively affected the amount a participant was willing to pay for the health app and the intention to download it.
Overall, the finding that young, highly educated males and people with a stronger health information orientation were more willing to pay for and download the mHealth app in this study suggests that traditional factors that demarcate access to and use of health services, such as gender and age, are also at play in mHealth. Owing to the ease of use and widespread diffusion of mobile phones, mHealth initiatives are often applauded for their emancipatory potential (eg, [45]); thus, our study supports earlier observations that future policy efforts aimed at closing "the digital health divide" also need to focus on disparities in mHealth adoption and use [46]. To inform these policy efforts, further research is needed to explore the specific barriers hindering participation in mHealth.
Strengths and Limitations
One of the strengths of the current study is that we collected data among a large group of participants in three different countries. Another strength is that we used a multifactorial experimental design, examining several factors in relation to the business model that are considered to be important in predicting and explaining the adoption of a health app. Third, we assessed the role of three intrapersonal predictors of the adoption of a health app.
This study also had some limitations. First, because the study was conducted online, the internal validity of the experiment cannot be guaranteed, since it is difficult to assess how truthfully the participants answered. Nonetheless, because the experiment was not focused on sensitive questions but rather on factors related to adoption of an online health app, using an online questionnaire to assess the different factors could be considered a valid and reliable measurement. Second, in both Spain and Germany, data were collected by a professional company and the participants were paid for their participation, whereas in the Netherlands a convenience sampling approach was used without paying the participants. Overall, the results are quite similar between the countries, although we also noticed some minor differences in the results that could be due to the different sampling methods.
Conclusion
Over the last decade, the number of people in the world who perform health-related functions on their smartphones has increased rapidly [1,2]. However, research into the adoption and effectiveness of mHealth remains scarce. This is unfortunate, given that adopting a health app is a necessary first step for such an app to be effective [6,44,47]. It is essential that patient safety (data protection), reducing costs, and creating sound business models are investigated to a larger extent to gain a better understanding of the major driving forces for the adoption of mHealth in the future. Next, it is important to create standards for mobile apps, whereby doctors and patient associations can have a leading role in informing potential consumers, serving as a heuristic. Governments, large funders, and industry associations should create and adhere to such standards so that mHealth apps can be adopted and used with confidence in their quality and the privacy of the data, and with prices that are proportional to the service provided.
Table 1. Number of participants per condition for Spain, Germany, and the Netherlands.
Table 2. Descriptive information about the participants per country. Based on the response to the question "During the last 12 months, would you say you had difficulties in paying your bills at the end of the month…?".
eHealth: electronic health.
8,967
2019-12-01T00:00:00.000
[ "Business", "Computer Science", "Medicine" ]
Development of simple HPLC/UV with a column-switching method for the determination of nicotine and cotinine in hair samples Nicotine and cotinine in hair are good biomarkers for assessing long-term exposure to smoking. However, analytical devices such as GC/MS are associated with high cost and are not widely used. HPLC/UV is used widely in laboratories, but is unsuitable for measurement of minor constituents, except when using the column-switching method. Thus, we aimed to establish a simple, inexpensive and sensitive method based on HPLC/UV with column switching for measuring nicotine and cotinine in hair. First, we compared the presence and absence of a column selection unit. We then measured amounts of nicotine and cotinine in hair samples collected from the general population, and compared both the corresponding levels and the detection limits with those in previous studies. Finally, initial and running costs of HPLC/UV were compared with other analytical methods. As one of the results, the areas of nicotine and cotinine measured by HPLC/UV with column-switching method were 12.9 and 16.9 times greater, respectively, than those without the column-switching method. The amount of nicotine and cotinine in hair was significantly correlated to number of cigarettes smoked per day (r = 0.228, p = 0.040). In addition, the HPLC/UV method showed similar sensitivity and detection limit (nicotine, 0.10 ng/mg; cotinine, 0.08 ng/mg) as reported in previous studies. The cost of the HPLC/UV method is lower than that of other analytical methods. We were able to establish a low-cost method with good sensitivity for measuring nicotine and cotinine in hair. The HPLC/UV with a column-switching method will be useful as a first step in screening surveys in order to better understand the effects of smoking exposure. INTRODUCTION The risks of smoking are widely recognized and taking action against smoking continues to be a priority issue for public health. Death by cancer or ischemic heart disease is reported to be a major risk of smoking [1,2]. Thus, environmental countermeasures need to be taken for both smokers and non-smokers. Biological monitoring is important as a means to evaluate exposure to smoking. In previous studies, levels nicotine and its metabolite, cotinine, were measured in the urine or saliva of smokers [3][4][5]. However, the levels of nicotine or cotinine in these samples may reflect acute exposure to smoking, but not the amount of habitual smoking. Because human hair grows about 1 cm/month, it is useful in biological monitoring in the medium or long term [6,7]. In addition, the amount of nicotine and its metabolites in the hair reportedly decreases slowly; a decrease of less than 10% was observed after being left to stand for one week at room temperature [6]. In previous studies, measurement of nicotine or co-tinine in hair has been performed by gas chromatography with mass spectrophotometry (GC/MS) [8,9], but this method has high initial and running costs. More recently, high-performance liquid chromatography with electrochemical detection (HPLC/ECD) has been used for the determination of nicotine and cotinine because of its high sensitivity. The initial and running costs of HPLC are lower than those of GC. However, ECD detectors are not commonly present in laboratories, while UV detectors are much more common. Unfortunately, UV detectors are unsuitable for measurement of small amounts of compounds in hair due to poor sensitivity. 
It has been reported that UV detectors can be installed on column-selection units [10,11], but there have been no reports on the measurement of nicotine in hair using this approach. The column-switching method is able to concentrate samples for analysis; thus, it may be used to increase the sensitivity of HPLC/UV. This study aims to establish a simple, cheap and sensitive method based on HPLC/UV with column-switching in order to measure nicotine and cotinine in hair. First, we compared the presence and absence of a column selection unit, and we examined the intra- and inter-assay reproducibility of HPLC/UV with column-switching. We then measured the amounts of nicotine and cotinine in hair samples collected from the general population, and compared the quality controls with previous studies. Finally, the initial and running costs of HPLC/UV were compared with other analytical methods.
Usefulness of HPLC/UV with Column-Switching Method
Sensitivity of HPLC/UV with Column-Switching Method
This study examined the sensitivity of the column-switching method in a preliminary experiment using nicotine and cotinine standard solutions (both 1000 ng/ml; Sigma-Aldrich, Tokyo, Japan). Chromatograms of nicotine and cotinine were compared by area. An internal standard of 100 ng/ml N-ethyl norcotinine (NENC) in methanol was used. Some differences between the preliminary experiment and the present study were found with respect to the analytical column, flow rate, injection volume, and introduction of column-switching. Inertsil ODS-3V (GL Sciences, Tokyo, Japan) was used as the analytical column, with a 1.0 ml/min flow rate and a 50 μl injection volume in the preliminary experiment. The present study also introduced a column selection unit, HV-2080-01 (JASCO, Tokyo, Japan), as the column-switching method, for greater sensitivity than in the preliminary experiment. The analytical column used was the Ascentis Express C18 Column (100 mm × 3.0 mm × 2.7 μm; Sigma-Aldrich) with the PU-2089 pump (JASCO). The mobile phase in the analytical column consisted of ammonium formate (50 mM, pH 4.3):acetonitrile = 96:4 at a flow rate of 0.4 ml/min. The concentrating column was the Develosil ODS-UG-5 Column (10 mm × 4.0 mm i.d.; Nomura Chemical, Aichi, Japan) with the DPmodel 203 pump (Eicom, Kyoto, Japan). The mobile phase of the concentrating column consisted of ammonium formate (50 mM, pH 9.0) at a flow rate of 0.5 ml/min. The injection volume was 200 μl. Other conditions were the same for both the preliminary experiment and the present study. We used the HPLC LC-2000 Plus Series, the AS-2055 auto sampler, the UV-2075 detector set at 260 nm, the ChromNAV data processing system (all from JASCO), and the Waters-CHM column oven (Nihon Waters, Tokyo, Japan) with a column oven temperature of 40˚C.
Intra- and Inter-Assay Reproducibility of HPLC/UV with Column-Switching Method
The HPLC/UV with column-switching method was examined for intra-assay and inter-assay reproducibility. With regard to pre-treatment of hair, similarly to previous studies on hair analysis [12,13], hair samples were placed in test tubes and washed three times using 3 ml of dichloromethane. After the hair sample was dried, it was weighed, and the following treatment was used for about 40 mg of hair. Samples were mixed with 1.6 ml of NaOH (2.5 M) and 60 μl of NENC (1000 ng/ml; Cosmo Bio, Tokyo, Japan) as an internal standard, followed by incubation at 40˚C until the hair was completely dissolved.
Next, 4 ml of solvent mixture (chloroform:isopropyl alcohol = 95:5 (v/v)) was added and the mixture was vortexed for 2 min. The mixture was then centrifuged for 5 min at 2000 rpm, and the supernatant was aspirated under a fume hood. Next, 2 ml of HCl (0.5 M) was added, followed by vortexing for 2 minutes. The mixture was centrifuged for 5 min at 2000 rpm, and the supernatant was transferred to another test tube. NaOH (0.4 ml; 2.5 M) was then added to the test tube. In addition, 1.6 ml of ammonium chloride (pH 9.5) and 4 ml of solvent mixture (chloroform:isopropyl alcohol = 95:5 (v/v)) were also added to the test tube, followed by vortexing for 2 minutes. The mixture was centrifuged for 5 min at 2000 rpm, the supernatant was discarded, and the remaining layer was dried under a nitrogen stream. The extract was dissolved in 600 μl of ammonium formate (0.5 M), centrifuged for 1 min at 2000 rpm, and filtered with a 0.45 μm filter. Solvent (200 μl) was then injected into the HPLC system and analyzed. This study was performed under the HPLC/UV conditions described previously. For intra-assay assessment, measurements were performed every hour for 5 hours, and for inter-assay assessment, measurement was performed once daily for 5 days.
Subjects and Time Period
Two thousand subjects were selected by two-stage stratified random sampling from the "Basic Resident Registries" of municipalities all over Japan. We performed both a questionnaire survey and hair-cutting by home visits. Questionnaires remained anonymous in order to protect private information, and informed consent was obtained from each subject. Questionnaires about smoking behavior were completed during home visit interviews.
Screening Survey
Smoking information was assessed from questionnaires associated with 287 samples. Participants were categorized into 2 groups: non-smokers and smokers. The mean values of nicotine and cotinine in the hair of each group were calculated. To examine the usefulness of the HPLC/UV method, the present methods and results were compared with major studies. In addition, the correlation between the sum of nicotine and cotinine in hair and the number of cigarettes smoked per day was examined for the smoker group. This statistical analysis was conducted using SPSS Statistics 17.0 (Nihon IBM, Tokyo, Japan). All probability values were two-tailed and all confidence intervals were estimated at the 95% level.
Hair Sample Collection, Preservation and Measurement Method
Nicotine and cotinine were measured in hair samples collected from 294 people in 2009 and 2010. Subjects were 294 people assessed for smoking status by questionnaire. Two hundred and eighty-seven samples were used for analysis, owing to a lack of complete data in 7 samples. We developed a hair-cutting kit and an explanatory leaflet for the hair extraction method, with the aim of safely collecting hair samples without affecting subject esthetics (Figure 1). The kit included a plastic bag and a rectangular sheet of construction paper (15 cm × 5 cm), with a red line 1 cm from the bottom and pressure-sensitive adhesive double-coated tape above the red line.
To obtain a hair sample, the top edge of the paper was first applied to the skin of the head, tape-side down. Hair was then affixed to the tape and, with hair-cutting scissors, the hair sample was cut at the red line on the paper. Hair used for measurement was that from the cut point to a length of 5 cm. Each hair sample and attached paper was then placed in a plastic bag and stored at −80˚C in a freezer. This study was performed under the pre-treatment and HPLC/UV method conditions described previously.
Comparison of Apparatus Costs
Initial and running costs were compared for each analytical apparatus and we evaluated the usefulness of HPLC/UV. For market-rate costs, we referred to a report by Benowitz [14] and analytical apparatus catalogs. We categorized initial costs under US$50,000 as "Low", US$50,000 - $100,000 as "Moderate", US$100,000 - $200,000 as "High", and over US$200,000 as "Extremely high", with US$1 = 100 yen. Running costs were categorized in a similar manner.
Usefulness and Accuracy of HPLC/UV with Column-Switching Method
For measurement of nicotine and cotinine in hair samples, we used the HPLC/UV with column-switching method. The HPLC/UV with column-switching method was shown to have better sensitivity when compared with the preliminary experiments (Figure 2). Nicotine and cotinine levels measured by the column-switching method were 12.9 times and 16.9 times greater, respectively, than those of the preliminary experiment. The area of the NENC peak measured by the column-switching method was shown to be 12.2 times that of the preliminary experiment. In addition, measurement time was shortened to around 8 min compared with the HPLC/UV without column-switching. Intra- and inter-assay reproducibility was stable (Table 1). Results for intra-assay assessment were 92.2 ± 2.7 ng/mg for nicotine and 10.3 ± 0.2 ng/mg for cotinine (Table 1(a)). Results for inter-assay assessment were 87.0 ± 2.8 ng/mg for nicotine and 10.4 ± 0.3 ng/mg for cotinine (Table 1(b)).
Screening Nicotine and Cotinine in Hair by HPLC/UV with Column-Switching Method
Among the 287 hair sample providers, 205 were non-smokers and 82 were smokers. The sum of nicotine and cotinine in hair and the number of cigarettes smoked per day were significantly correlated in smokers (Figure 3; r = 0.228, p = 0.040). The HPLC/UV method used in our study showed a sensitivity and detection limit similar to those of previous studies for both nicotine and cotinine (Table 2). In our 2 groups, the mean levels of nicotine and cotinine in hair samples were 1.60 ng/mg and 0.20 ng/mg among non-smokers, and 23.30 ng/mg and 1.70 ng/mg among smokers, respectively. Non-smokers in previous studies showed a range of 0.58 - 2.50 ng/mg nicotine and ND - 0.30 ng/mg cotinine in hair samples. Meanwhile, smokers in previous studies showed a range of 6.17 - 42.40 ng/mg nicotine and 0.33 - 6.30 ng/mg cotinine in hair samples. Using an S (signal)/N (noise) ratio of 3:1, the detection limit for nicotine was about 0.10 ng/mg and that for cotinine was about 0.08 ng/mg in hair samples using our method. The ranges for nicotine data obtained by other analytical methods were 0.05 - 0.50 ng/mg. Our results were within these ranges.
Cost Comparison by Analytical Method
The HPLC/UV method has a relatively low cost when compared to other analytical methods (Table 3). The HPLC/UV method was categorized as "Low" for both initial and running costs. The initial and running costs for the HPLC/UV used in the present study were about US$25,000 and about US$2500/year, respectively. The GC/MS method, used as a major analytical method, has both "High" initial and running costs.
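The quantities reported above (assay variability, the correlation with daily cigarette consumption, and the S/N-based detection limit) come from routine calculations. A minimal sketch follows; the numeric arrays are placeholders rather than the study's raw data, and the study's own analysis was run in SPSS.

```python
# Illustrative calculations behind the reported results; all input numbers are
# placeholders, not the actual raw measurements.
import numpy as np
from scipy.stats import pearsonr

# Coefficient of variation (CV%) from repeated measurements, e.g. intra-assay runs.
intra_assay_nicotine = np.array([92.0, 95.1, 88.9, 93.4, 91.6])  # ng/mg, hypothetical
cv_percent = intra_assay_nicotine.std(ddof=1) / intra_assay_nicotine.mean() * 100

# Correlation between hair nicotine + cotinine and cigarettes smoked per day
# (the study reports r = 0.228, p = 0.040 for the 82 smokers).
hair_total = np.array([12.1, 25.3, 40.2, 18.7, 33.5])   # ng/mg, hypothetical
cigs_per_day = np.array([5, 10, 20, 8, 15])             # hypothetical
r, p = pearsonr(hair_total, cigs_per_day)

# Detection limit at a signal-to-noise ratio of 3:1, scaled from a standard of
# known concentration and peak height (both hypothetical here).
noise_height = 0.4        # baseline noise, detector units
standard_height = 120.0   # peak height of a 1.0 ng/mg-equivalent standard
lod = 3 * noise_height / standard_height * 1.0   # ng/mg equivalent at S/N = 3

print(f"CV = {cv_percent:.1f}%, r = {r:.3f} (p = {p:.3f}), LOD ≈ {lod:.2f} ng/mg")
```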
DISCUSSION
In the present study, we established a new method using HPLC/UV with column-switching that has lower costs and sensitivity similar to other analytical methods for determining nicotine and cotinine levels in hair samples. Our results suggest that our method allows high sensitivity with good reproducibility to study the effects of long-term exposure to smoking. This study examined HPLC/UV with the column-switching method for higher-sensitivity measurement of nicotine and cotinine in hair. In the column-switching method, a large amount of sample solution is added to the concentration column, and trace components are first trapped. The flow path is then reversed, followed by processing with an appropriate amount of liquid solvent and measurement using an analytical column. Therefore, we were able to concentrate the samples and reduce analysis time. Using the column-switching method, it was possible to measure larger amounts and to increase the detection sensitivity for nicotine and cotinine more than 10-fold compared with the preliminary experiment. In addition, we investigated the utility of the column-switching method based on intra- and inter-assay reproducibility.
Table 1. Intra- and inter-assay reproducibility of HPLC/UV with column-switching method. (a) Intra-assay; (b) Inter-assay.
It was not difficult to obtain 200 μl of extraction liquid from samples, and we confirmed effective extraction with high sensitivity. This study showed higher sensitivity than previous reports measuring nicotine and cotinine in hair samples, where HPLC/UV was used and showed a detection sensitivity of 0.20 ng/mg hair for nicotine and about 0.10 ng/mg hair for cotinine [26]. Thus, the results demonstrate the utility of the column-switching method. There was a significant correlation between the number of cigarettes smoked per day and the total amount of nicotine and cotinine in hair samples. In order to avoid metabolic differences from nicotine to cotinine among individuals, we analyzed the total amounts of nicotine and cotinine. The total of nicotine and cotinine in hair may be used as a predictive biomarker of the number of cigarettes smoked per day. In this study, mean concentrations of nicotine in hair samples were 1.60 ng/mg for non-smokers and 23.30 ng/mg for smokers. Concentrations of cotinine in hair samples were 0.20 ng/mg for non-smokers and 1.70 ng/mg for smokers. The detection limit was 0.10 ng/mg nicotine and 0.08 ng/mg cotinine in hair samples. When compared with previous studies, our method was within the same range of accuracy as other measurement methods for nicotine and cotinine in hair. Therefore, the HPLC/UV with column-switching method appears to have a sensitivity similar to the GC/MS and HPLC/ECD methods. The MS method is known to be highly sensitive in identifying materials by mass, but the HPLC/UV method has been shown to be just as sensitive. We believe that the columns and detection devices in the HPLC/UV method have been thoroughly tested [26][27][28], and by introducing the simple improvement of the column-switching method to the HPLC/UV method, it is possible to markedly increase sensitivity. This study found that both the initial and running costs of the present method are "Low", suggesting that experiments can be performed with this method for less than US$53,000 per year. The GC/MS method, a major analytical method used for nicotine and cotinine measurement, was shown to cost over US$105,000 per year. Measurement with the HPLC/UV method is therefore able to reduce cost by half when compared with the GC/MS method. In addition, because HPLC and UV detectors are already widely used [10,11], it is possible to reduce the initial costs.
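The cost comparison follows directly from the category thresholds defined in the Methods. The small sketch below only encodes those thresholds; the GC/MS price passed in is an illustrative figure, not a quoted price.

```python
# Cost categories as defined above (US$, with US$1 = 100 yen).
def categorize_initial_cost(usd: float) -> str:
    if usd < 50_000:
        return "Low"
    if usd < 100_000:
        return "Moderate"
    if usd < 200_000:
        return "High"
    return "Extremely high"

print(categorize_initial_cost(25_000))   # HPLC/UV used in this study -> "Low"
print(categorize_initial_cost(150_000))  # e.g., a GC/MS system (illustrative price) -> "High"
```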
It is also possible to introduce a column-selection unit at the low cost of about US$3000, bringing the total cost of this study to around US$27,500. The present HPLC/UV with column-switching method had a sensitivity similar to that of the GC/MS method, as well as a low cost. Based on its accuracy and cost, we believe that measurement of nicotine and cotinine in hair samples using this HPLC/UV method would be useful in initial screening, such as in health check-ups. The HPLC/UV with column-switching method is able to perform such screening at low cost and with the same accuracy as other analytical methods, thereby facilitating the study of the effects of smoking exposure. This study had several limitations. Because smokers self-reported their cigarette intake, results may have been underestimated. However, because the questionnaire used in this study was answered during home visit interviews, we assume the subjects answered honestly. In the future, a comparison between the HPLC/UV with column-switching method and other analytical methods will be necessary using the same samples.
CONCLUSION
In this study, we established a simple HPLC/UV with column-switching method for the determination of nicotine and cotinine in hair samples. Using this method, nicotine and cotinine in hair were found to accurately reflect exposure to smoking. The HPLC/UV with column-switching method is able to detect nicotine and cotinine in hair with a sensitivity equivalent to that of GC/MS or HPLC/ECD, but at about half the cost of the GC/MS method. Therefore, the HPLC/UV with column-switching method could be more widely applied, particularly for use in screening surveys, in order to better understand the effects of smoking exposure.
ACKNOWLEDGEMENTS
The present study was supported by a Research Grant for Cardiovascular Disease from the Ministry of Health, Labor and Welfare. The present study was approved by the Fukushima Medical University Ethics Committee (approval number: 1166).
4,267.2
2013-04-11T00:00:00.000
[ "Chemistry" ]
Explainable Prediction of Text Complexity: The Missing Preliminaries for Text Simplification Text simplification reduces the language complexity of professional content for accessibility purposes. End-to-end neural network models have been widely adopted to directly generate the simplified version of input text, usually functioning as a blackbox. We show that text simplification can be decomposed into a compact pipeline of tasks to ensure the transparency and explainability of the process. The first two steps in this pipeline are often neglected: 1) to predict whether a given piece of text needs to be simplified, and 2) if yes, to identify complex parts of the text. The two tasks can be solved separately using either lexical or deep learning methods, or solved jointly. Simply applying explainable complexity prediction as a preliminary step, the out-of-sample text simplification performance of the state-of-the-art, black-box simplification models can be improved by a large margin. Introduction Text simplification aims to reduce the language complexity of highly specialized textual content so that it is accessible for readers who lack adequate literacy skills, such as children, people with low education, people who have reading disorders or dyslexia, and non-native speakers of the language. Mismatch between language complexity and literacy skills is identified as a critical source of bias and inequality in the consumers of systems built upon processing and analyzing professional text content. Research has found that it requires on average 18 years of education for a reader to properly understand the clinical trial descriptions on ClinicalTrials.gov, and this introduces a potential self-selection bias to those trials (Wu et al., 2016). Text simplification has considerable potential to improve the fairness and transparency of text information systems. Indeed, the Simple English Wikipedia (simple.wikipedia.org) has been constructed to disseminate Wikipedia articles to kids and English learners. In healthcare, consumer vocabulary are used to replace professional medical terms to better explain medical concepts to the public (Abrahamsson et al., 2014). In education, natural language processing and simplified text generation technologies are believed to have the potential to improve student outcomes and bring equal opportunities for learners of all levels in teaching, learning and assessment (Mayfield et al., 2019). Ironically, the definition of "text simplification" in literature has never been transparent. The term may refer to reducing the complexity of text at various linguistic levels, ranging all the way through replacing individual words in the text to generating a simplified document completely through a computer agent. In particular, lexical simplification (Devlin, 1999) is concerned with replacing complex words or phrases with simpler alternatives; syntactic simplification (Siddharthan, 2006) alters the syntactic structure of the sentence; semantic simplification (Kandula et al., 2010) paraphrases portions of the text into simpler and clearer variants. More recent approaches simplify texts in an end-toend fashion, employing machine translation models in a monolingual setting regardless of the type of simplifications (Zhang and Lapata, 2017;Guo et al., 2018;Van den Bercken et al., 2019). 
Nevertheless, these models are limited on the one hand due to the absence of large-scale parallel (complex → simple) monolingual training data, and on the other hand due to the lack of interpretibility of their black-box procedures (Alva-Manchego et al., 2017). Given the ambiguity in problem definition, there also lacks consensus on how to measure the goodness of text simplification systems, and automatic evaluation measures are perceived ineffective and sometimes detrimental to the specific procedure, in particular when they favor shorter but not necessar-ily simpler sentences (Napoles et al., 2011). While end-to-end simplification models demonstrate superior performance on benchmark datasets, their success is often compromised in out-of-sample, real-world scenarios (D'Amour et al., 2020). Our work is motivated by the aspiration that increasing the transparency and explainability of a machine learning procedure may help its generalization into unseen scenarios (Doshi-Velez and Kim, 2018). We show that the general problem of text simplification can be formally decomposed into a compact and transparent pipeline of modular tasks. We present a systematic analysis of the first two steps in this pipeline, which are commonly overlooked: 1) to predict whether a given piece of text needs to be simplified at all, and 2) to identify which part of the text needs to be simplified. The second task can also be interpreted as an explanation of the first task: why a piece of text is considered complex. These two tasks can be solved separately, using either lexical or deep learning methods, or they can be solved jointly through an end-to-end, explainable predictor. Based on the formal definitions, we propose general evaluation metrics for both tasks and empirically compare a diverse portfolio of methods using multiple datasets from different domains, including news, Wikipedia, and scientific papers. We demonstrate that by simply applying explainable complexity prediction as a preliminary step, the out-of-sample text simplification performance of the state-of-the-art, black-box models can be improved by a large margin. Our work presents a promising direction towards a transparent and explainable solution to text simplification in various domains. Text simplification at word level has been done through 1) lexicon based approaches, which match words to lexicons of complex/simple words (Deléger and Zweigenbaum, 2009;Elhadad and Sutaria, 2007), 2) threshold based approaches, which apply a threshold over word lengths or certain statistics (Leroy et al., 2013), 3) human driven approaches, which solicit the user's input on which words need simplification (Rello et al., 2013), and 4) classification methods, which train machine learning models to distinguish complex words from simple words (Shardlow, 2013). Com-plex word identification is also the main topic of SemEval 2016 Task 11 (Paetzold and Specia, 2016), aiming to determine whether a non-native English speaker can understand the meaning of a word in a given sentence. Significant differences exist between simple and complex words, and the latter on average are shorter, less ambiguous, less frequent, and more technical in nature. Interestingly, the frequency of a word is identified as a reliable indicator of its simplicity (Leroy et al., 2013). 
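As a toy illustration of the lexicon- and threshold-based techniques summarized above, a complex-word flagger can combine a lexicon lookup with a frequency cutoff. The lexicon, frequency table, and thresholds below are made-up stand-ins, not resources used in the cited work.

```python
# Toy lexicon/threshold-based complex word identification; all resources here are
# illustrative stand-ins (a real system might use AoA norms and corpus counts).
COMPLEX_LEXICON = {"myocardial", "hypertension", "pursuant"}
WORD_FREQUENCY = {"the": 1_000_000, "doctor": 52_000, "myocardial": 90}  # hypothetical counts
FREQ_THRESHOLD = 500          # words rarer than this are flagged
LENGTH_THRESHOLD = 12         # very long words are flagged as well

def is_complex(word: str) -> bool:
    w = word.lower()
    if w in COMPLEX_LEXICON:
        return True
    # Unknown words default to the threshold so the tiny toy table does not flag them.
    if WORD_FREQUENCY.get(w, FREQ_THRESHOLD) < FREQ_THRESHOLD:
        return True
    return len(w) >= LENGTH_THRESHOLD

sentence = "The doctor suspected myocardial infarction".split()
print([w for w in sentence if is_complex(w)])  # -> ['myocardial']
```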
While the above techniques have been widely employed for complex word identification, the results reported in the literature are rather controversial and it is not clear to what extent one technique outperforms the other in the absence of standardized high quality parallel corpora for text simplification (Paetzold, 2015). Pre-constructed lexicons are often limited and do not generalize to different domains. It is intriguing that classification methods reported in the literature are not any better than a "simplify-all" baseline (Shardlow, 2014). Readability assessment Traditionally, measuring the level of reading difficulty is done through lexicon and rule-based metrics such as the age of acquisition lexicon (AoA) (Kuperman et al., 2012) and the Flesch-Kincaid Grade Level (Kincaid et al., 1975). A machine learning based approach in (Schumacher et al., 2016) extracts lexical, syntactic, and discourse features and train logistic regression classifiers to predict the relative complexity of a single sentence in a pairwise setting. The most predictive features are simple representations based on AoA norms. The perceived difficulty of a sentence is highly influenced by properties of the surrounding passage. Similar methods are used for fine-grained classification of text readability (Aluisio et al., 2010) and complexity (Štajner and Hulpus , , 2020). Computer-assisted paraphrasing Simplification rules are learnt by finding words from a complex sentence that correspond to different words in a simple sentence (Alva-Manchego et al., 2017). Identifying simplification operations such as copies, deletions, and substitutions for words from parallel complex vs. simple corpora helps understand how human experts simplify text (Alva-Manchego et al., 2017). Machine translation has been employed to learn phrase-level alignments for sentence simplification (Wubben et al., 2012). Lexical and phrasal paraphrase rules are extracted in . These methods are often evaluated by comparing their output to gold-standard, human-generated simplifications, using standard metrics (e.g., token-level precision, recall, F1), machine translation metrics (e.g., BLEU (Papineni et al., 2002) ), text simplification metrics (e.g. SARI (Xu et al., 2016) which rewards copying words from the original sentence), and readability metrics (among which Flesch-Kincaid Grade Level (Kincaid et al., 1975) and Flesch Reading Ease (Kincaid et al., 1975) are most commonly used). It is desirable that the output of the computational models is ultimately validated by human judges (Shardlow, 2014). End-to-end simplification Neural encoder-decoder models are used to learn simplification rewrites from monolingual corpora of complex and simple sentences (Scarton and Specia, 2018;Van den Bercken et al., 2019;Zhang and Lapata, 2017;Guo et al., 2018). On one hand, these models often obtain superior performance on particular evaluation metrics, as the neural network directly optimizes these metrics in training. On the other hand, it is hard to interpret what exactly are learned in the hidden layers, and without this transparency it is difficult to adapt these models to new data, constraints, or domains. For example, these end-to-end simplification models tend not to distinguish whether the input text should or should not be simplified at all, making the whole process less transparent. When the input is already simple, the models tend to oversimplify it and deviate from its original meaning (see Section 5.3). 
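The Flesch-Kincaid Grade Level cited among the readability metrics above is a closed-form formula over sentence length and syllable counts. A rough sketch follows; the syllable counter is a crude vowel-group approximation rather than a dictionary-based count.

```python
import re

def count_syllables(word: str) -> int:
    # Crude approximation: count groups of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * len(words) / len(sentences) + 11.8 * syllables / len(words) - 15.59

print(flesch_kincaid_grade("The cat sat on the mat."))
print(flesch_kincaid_grade("Pharmacokinetic interactions necessitate meticulous dosage adjustments."))
```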
Explanatory Machine Learning Various approaches are proposed in the literature to address the explainability and interpretability of machine learning agents. The task of providing explanations for black-box models has been tackled either at a local level by explaining individual predictions of a classifier (Ribeiro et al., 2016), or at a global level by providing explanations for the model behavior as a whole (Letham et al., 2015). More recently, differential explanations are proposed to describe how the logic of a model varies across different subspaces of interest (Lakkaraju et al., 2019). Layer-wise relevance propagation (Arras et al., 2017) is used to trace backwards text classification decisions to individual words, which are assigned scores to reflect their separate contribution to the overall prediction. LIME (Ribeiro et al., 2016) is a model-agnostic explanation technique which can approximate any machine learning model locally with another sparse linear interpretable model. SHAP (Lundberg and Lee, 2017) evaluates Shapley values as the average marginal contribution of a feature value across all possible coalitions by considering all possible combinations of inputs and all possible predictions for an instance. Explainable classification can also be solved simultaneously through a neural network, using hard attentions to select individual words into the "rationale" behind a classification decision (Lei et al., 2016). Extractive adversarial networks employs a three-player adversarial game which addresses high recall of the rationale (Carton et al., 2018). The model consists of a generator which extracts an attention mask for each token in the input text, a predictor that cooperates with the generator and makes prediction from the rationale (words attended to), and an adversarial predictor that makes predictions from the remaining words in the inverse rationale. The minimax game between the two predictors and the generator is designed to ensure all predictive signals are included into the rationale. No prior work has addressed the explainability of text complexity prediction. We fill in this gap. An Explainable Pipeline for Text Simplification We propose a unified view of text simplification which is decomposed into several carefully designed sub-problems. These sub-problems generalize over many approaches, and they are logically dependent on and integratable with one another so that they can be organized into a compact pipeline. Figure 1: A text simplification pipeline. Explainable prediction of text complexity is the preliminary of any human-based, computer assisted, or automated system. The first conceptual block in the pipeline (Figure 1) is concerned with explainable prediction of the complexity of text. It consists of two sub-tasks: 1) prediction: classifying a given piece of text into two categories, needing simplification or not; and 2) explanation: highlighting the part of the text that needs to be simplified. The second conceptual block is concerned with simplification generation, the goal of which is to generate a new, simplified version of the text that needs to be simplified. This step could be achieved through completely manual effort, or a computer-assisted approach (e.g., by suggesting alternative words and expressions), or a completely automated method (e.g., by selftranslating into a simplified version). The second building block is piped into a step of human judgment, where the generated simplification is tested, approved, and evaluated by human practitioners. 
One could argue that for an automated simplification generation system the first block (complexity prediction) is not necessary. We show that this is not the case. Indeed, it is unlikely that every piece of text needs to be simplified in reality, and instead the system should first decide whether a sentence needs to be simplified or not. Unfortunately, such a step is often neglected by existing end-to-end simplifiers; thus their performance is often biased towards the complex sentences that are selected into their training datasets in the first place and does not generalize well to simple inputs. Empirically, when these models are applied to out-of-sample text which shouldn't be simplified at all, they tend to oversimplify the input and result in a deviation from its original meaning (see Section 5.3). One could also argue that an explanation component (1B) is not mandatory in certain text simplification practices, in particular in an end-to-end neural generative model that does not explicitly identify the complex parts of the input sentence. In reality, however, it is often necessary to highlight the differences between the original sentence and the simplified sentence (which is essentially a variation of 1B) to facilitate the validation and evaluation of these black-boxes. More generally, the explainability/interpretability of a machine learning model has been widely believed to be an indispensable factor in its fidelity and fairness when applied to the real world (Lakkaraju et al., 2019). Since the major motivation of text simplification is to improve the fairness and transparency of text information systems, it is critical to explain the rationale behind the simplification decisions, even if they are made through a black-box model.
Without loss of generality, we can formally define the sub-tasks 1A, 1B, and 2 in the pipeline:
Definition 3.1. (Complexity Prediction). Let text d ∈ D be a sequence of tokens w_1 w_2 ... w_n. The task of complexity prediction is to find a function f : D → {0, 1} such that f(d) = 1 if d needs to be simplified, and f(d) = 0 otherwise.
Definition 3.2. (Complexity Explanation). Let d be a sequence of tokens w_1 w_2 ... w_n and f(d) = 1. The task of complexity explanation/highlighting is to find a function h : D → {0, 1}^n such that h(d) = c_1 c_2 ... c_n, where c_i = 1 means w_i will be highlighted as a complex portion of d and c_i = 0 otherwise. We denote d|h(d) as the highlighted part of d and d|¬h(d) as the unhighlighted part of d.
Definition 3.3. (Simplification Generation). Let d be a sequence of tokens w_1 w_2 ... w_n and f(d) = 1. The task of simplification generation is to find a function g : D → D such that g(d) = d' = w'_1 w'_2 ... w'_m and f(d') = 0, subject to the constraint that d' preserves the meaning of d.
In this paper, we focus on an empirical analysis of the first two sub-tasks of explainable prediction of text complexity (1A and 1B), which are the preliminaries of any reasonable text simplification practice. We leave aside the detailed analysis of simplification generation (2) for now, as there are many viable designs of g(·) in practice, spanning the spectrum between completely manual and completely automated. Since this step is not the focus of this paper, we intend to leave the definition of simplification generation highly general.
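A minimal sketch of the interfaces implied by Definitions 3.1-3.3 is given below; the function bodies are placeholder rules standing in for trained models, not the models evaluated in this paper.

```python
# Skeleton of the pipeline implied by Definitions 3.1-3.3. The predict/explain/
# generate bodies are stubs; in practice they would wrap trained models.
from typing import List

def f_predict(tokens: List[str]) -> int:
    """Complexity prediction: 1 if the text needs simplification, else 0 (stub rule)."""
    return int(len(tokens) > 15)

def h_explain(tokens: List[str]) -> List[int]:
    """Complexity explanation: a 0/1 mask over tokens, c_i = 1 for complex parts (stub rule)."""
    return [int(len(t) > 9) for t in tokens]

def g_generate(tokens: List[str]) -> List[str]:
    """Simplification generation: return a simpler token sequence (stub: identity)."""
    return tokens

def simplify(tokens: List[str]) -> List[str]:
    if f_predict(tokens) == 0:
        return tokens            # already simple: leave the input untouched
    mask = h_explain(tokens)     # highlighted rationale for the decision
    _ = mask                     # e.g., surface the highlights to a human reviewer
    return g_generate(tokens)

print(simplify("This sentence is short .".split()))
```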
Note that the definitions of complexity prediction and complexity explanation can be naturally extended to a continuous output, where f(·) predicts the complexity level of d and h(·) predicts the complexity weight of w_i. The continuous output would align the problem more closely to readability measures (Kincaid et al., 1975). In this paper, we stick to the binary output because a binary action (to simplify or not) is almost always necessary in reality even if a numerical score is available. Note that the definition of complexity explanation is general enough for existing approaches. In lexical simplification, where certain words in a complex vocabulary V are identified to explain the complexity of a sentence, it is equivalent to highlighting every appearance of these words in d, i.e., ∀w_i ∈ V, c_i = 1. In automated simplification, where there is a self-translation function g(d) = d', h(d) can be simply instantiated as a function that returns a sequence alignment of d and d'. Such reformulation helps us define unified evaluation metrics for complexity explanation (see Section 4). It is also important to note that the dependency between the components, especially complexity prediction and explanation, does not restrict them to be done in isolation. These sub-tasks can be done either separately, or jointly with an end-to-end approach, as long as the outputs of f, h, g are all obtained (so that transparency and explainability are preserved). In Section 4, we include both separate models and end-to-end models for explanatory complexity prediction in one shot.
Empirical Analysis of Complexity Prediction and Explanation
With the pipeline formulation, we are able to compare a wide range of methods and metrics for the sub-tasks of text simplification. We aim to understand how difficult they are in real-world settings and which method performs the best for which task.
Candidate Models
We examine a wide portfolio of deep and shallow binary classifiers to distinguish complex sentences from simple ones. Among the shallow models we use Naive Bayes (NB), Logistic Regression (LR), Support Vector Machines (SVM) and Random Forests (RF) classifiers trained with unigrams, bigrams and trigrams as features. We also train the classifiers using the lexical and syntactic features proposed in (Schumacher et al., 2016) combined with the n-gram features (denoted as "enriched features"). We include neural network models such as word- and char-level Long Short-Term Memory networks (LSTM) and Convolutional Neural Networks (CNN). We also employ a set of state-of-the-art pre-trained neural language models, fine-tuned for complexity prediction; we introduce them below. ULMFiT (Howard and Ruder, 2018) pre-trains a language model on a large general corpus such as WikiText-103 and then fine-tunes it on the target task using slanted triangular learning rates and gradual unfreezing. We use the publicly available implementation of the model with two fine-tuning epochs for each dataset, and the model quickly adapts to a new task. BERT (Devlin et al., 2019) trains deep bidirectional language representations and has greatly advanced the state-of-the-art for many natural language processing tasks. The model is pre-trained on the English Wikipedia as well as the Google Book Corpus. Due to computational constraints, we use the 12-layer BERT base pre-trained model and fine-tune it on our three datasets. We select the best hyperparameters based on each validation set. XLNeT (Yang et al., 2019) overcomes the limitations of BERT (mainly the use of masks) with a permutation-based objective which considers bidirectional contextual information from all positions without data corruption.
We use the 12 layer XLNeT base pre-trained model on the English Wikipedia, the Books corpus (similar to BERT), Giga5, ClueWeb 2012-B, and Common Crawl. Evaluation Metric We evaluate the performance of complexity prediction models using classification accuracy on balanced training, validation, and testing datasets. Candidate Models We use LIME in combination with LR and LSTM classifiers, SHAP on top of LR, and the extractive adversarial networks which jointly conducts complexity prediction and explanation. We feed each test complex sentence as input to these explanatory models and compare their performance at identifying tokens (words and punctuation) that need to be removed or replaced from the input sentence. We compare these explanatory models with three baseline methods: 1) Random highlighting: randomly draw the size and the positions of tokens to highlight; 2) Lexicon based highlighting: highlight words that appear in the Age-of-Acquisition (AoA) lexicon (Kuperman et al., 2012), which contains ratings for 30,121 English content words (nouns, verbs, and adjectives) indicating the age at which a word is acquired; and 3) Feature highlighting: highlight the most important features of the best performing LR models for complexity prediction. Evaluation Metrics Evaluation of explanatory machine learning is an open problem. In the context of complexity explanation, when the ground truth of highlighted tokens (y c (d) = c 1 c 2 ...c n , c i ∈ {0, 1}) in each complex sentence d is available, we can compare the output of complexity explanation h(d) with y c (d). Such per-token annotations are usually not available in scale. To overcome this, given a complex sentence d and its simplified version d , we assume that all tokens w i in d which are absent in d are candidate words for deletion or substitution during the text simplification process and should therefore be highlighted in complexity explanation (i.e., c i = 1). In particular, we use the following evaluation metrics for complexity explanation: 1) Tokenwise Precision (P), which measures the proportion of highlighted tokens in d that are truly removed in d ; 2) Tokenwise Recall (R), which measures the proportion of tokens removed in d that are actually highlighted in d; 3) Tokenwise F1, the harmonic mean of P and R; 4) word-level Edit distance (ED) (Levenshtein, 1966): between the unhighlighted part of d and the simplified document d . Intuitively, a more successful complexity explanation would highlight most of the tokens that need to be simplified, thus the remaining parts in the complex sentences will be closer to the simplified version, achieving a lower edit distance (we also explore ED with a higher penalty cost for the substitution operation, namely values of 1, 1.5 and 2); and 5) Translation Edit Rate (TER) (Snover et al., 2006), which measures the minimum number of edits needed to change a hypothesis (the unhighlighted part of d) so that it exactly matches the closest references (the simplified document d ). Note these metrics are all proxies of the real editing process from d to d . When token-level edit history is available (e.g., through track changes), it is better to compare the highlighted evaluation with these true changes made. We compute all the metrics at sentence level and macro-average them. Datasets We use three different datasets (Table 1) which cover different domains and application scenarios of text simplification. 
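The token-wise metrics defined above reduce to a few lines of set arithmetic over 0/1 masks. The sketch below is illustrative and is not the paper's evaluation code; the example sentences and the predicted mask are invented.

```python
# Token-wise precision/recall/F1 for complexity explanation, computed from a
# predicted highlight mask and a reference mask derived from the aligned simple
# sentence (sketch only).
from typing import List, Tuple

def token_prf(pred_mask: List[int], gold_mask: List[int]) -> Tuple[float, float, float]:
    assert len(pred_mask) == len(gold_mask)
    tp = sum(p and g for p, g in zip(pred_mask, gold_mask))
    pred_pos, gold_pos = sum(pred_mask), sum(gold_mask)
    precision = tp / pred_pos if pred_pos else 0.0
    recall = tp / gold_pos if gold_pos else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

complex_tokens = "the physician administered an analgesic".split()
simple_tokens = set("the doctor gave a painkiller".split())
# Reference mask: tokens of the complex sentence that do not survive in the simple one.
gold = [int(t not in simple_tokens) for t in complex_tokens]
pred = [0, 1, 1, 0, 1]  # hypothetical model highlights
print(token_prf(pred, gold))
```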
Our first dataset is Newsela (Xu et al., 2015), a corpus of news articles simplified by professional news editors. In our experiments we use the parallel Newsela corpus with the training, validation, and test splits made available in (Zhang and Lapata, 2017). Second, we use the WikiLarge corpus introduced in (Zhang and Lapata, 2017). The training subset of WikiLarge is created by assembling datasets of parallel aligned Wikipedia -Simple Wikipedia sentence pairs available in the literature (Kauchak, 2013). While this training set is obtained through automatic alignment procedures which can be noisy, the validation and test subsets of WikiLarge contain complex sentences with simplifications provided by Amazon Mechanical Turk workers (Xu et al., 2016); we increase the size of validation and test on top of the splits made available in (Zhang and Lapata, 2017). Third, we use the dataset released by the Biendata competition 2 , which asks participants to match research papers from various scientific disciplines with press releases that describe them. Arguably, rewriting scientific papers into press releases has mixed objectives that are not simply text simplification. We include this task to test the generalizability of our explainable pipeline (over various definitions of simplification). We use alignments at title level. On average, a complex sentence in Newsela, WikiLarge, Biendata contains 23.07, 25.14, 13.43 tokens, and the corresponding simplified version is shorter, with 12.75, 18.56, 10.10 tokens. Ground Truth Labels The original datasets contain aligned complexsimple sentence pairs instead of classification labels for complexity prediction. We infer groundtruth complexity labels for each sentence such that: label 1 is assigned to every sentence for which there is an aligned simpler version not identical to itself (the sentence is complex and needs to be simplified); label 0 is assigned to all simple counterparts of complex sentences, as well as to those sentences that have corresponding "simple" versions identical to themselves (i.e., these sentences do not need to be simplified). For complex sentences that have label 1, we further identify which tokens are not present in corresponding simple versions. Model Training For all shallow and deep classifiers we find the best hyperparameters using random search on validation, with early stopping. We use grid search on validation to fine-tune hyperparameters of the pre-trained models, such as maximum sequence length, batch size, learning rate, and number of epochs. For ULMFit on Newsela, we set batch size to 128 and learning rate to 1e-3. For BERT on WikiLarge, batch size is 32, learning rate is 2e-5, and maximum sequence length is 128. For XLNeT on Biendata, batch size is 32, learning rate is 2e-5, and maximum sequence length is 32. We use grid search on validation to fine-tune the complexity explanation models, including the extractive adversarial network. For LR and LIME we determine the maximum number of words to highlight based on TER score on validation (please see Table 2); for SHAP we highlight all features with positive assigned weights, all based on TER. For extractive adversarial networks batch size is set to 256, learning rate is 1e-4, and adversarial weight loss equals 1; in addition, sparsity weight is 1 for Newsela and Biendata, and 0.6 for WikiLarge; lastly, coherence weight is 0.05 for Newsela, 0.012 for WikiLarge, and 0.0001 for Biendata. 
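A sketch of how such sentence labels and token-level targets could be derived from aligned complex-simple pairs is shown below; it follows the rule described above but is not the paper's actual preprocessing script.

```python
# Deriving complexity labels and token-level targets from aligned
# complex/simple sentence pairs (sketch only).
from typing import List, Tuple

def label_pair(complex_sent: str, simple_sent: str) -> Tuple[int, List[int]]:
    complex_tokens = complex_sent.split()
    simple_tokens = set(simple_sent.split())
    # Label 1 if the aligned simple version differs from the original sentence.
    needs_simplification = int(complex_sent.strip() != simple_sent.strip())
    if needs_simplification:
        # Tokens absent from the simple version are candidates for deletion/substitution.
        token_targets = [int(t not in simple_tokens) for t in complex_tokens]
    else:
        token_targets = [0] * len(complex_tokens)
    return needs_simplification, token_targets

pairs = [
    ("The physician administered an analgesic .", "The doctor gave a painkiller ."),
    ("The cat sat on the mat .", "The cat sat on the mat ."),
]
for c, s in pairs:
    print(label_pair(c, s))
```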
Complexity Prediction
In Table 3, we evaluate how well the representative shallow, deep, and pre-trained classification models can determine whether a sentence needs to be simplified at all. We test for statistical significance of the best classification results compared to all other models using a two-tailed z-test. In general, the best performing models can achieve around 80% accuracy on two datasets (Newsela and WikiLarge) and very high performance on Biendata (> 95%). This difference reflects the difficulty of complexity prediction in different domains; distinguishing highly specialized scientific content from public-facing press releases is relatively easy (Biendata). Deep classification models in general outperform shallow ones; however, with carefully designed handcrafted features and proper hyperparameter optimization, shallow models tend to approach the results of the deep classifiers. Overall, models pre-trained on large datasets and fine-tuned for text simplification yield superior classification performance. (Table 3 notes: shallow models perform similarly and some are omitted for space; the difference between the best performing model and other models is statistically significant at p < 0.05 (*) or p < 0.01 (**), except for †, where the difference from the best performing model is not statistically significant.) For Newsela, the best performing classification model is ULMFiT (accuracy = 80.83%, recall = 76.87%), which significantly (p < 0.01) surpasses all other classifiers except for XLNeT and CNN (char-level). On WikiLarge, BERT presents the highest accuracy (81.45%, p < 0.01), with recall = 83.30%. On Biendata, XLNeT yields the highest accuracy (95.48%, p < 0.01) with recall = 94.93%, although the numerical difference to the other pre-trained language models is small. This is consistent with recent findings in other natural language processing tasks (Cohan et al., 2019).
Complexity Explanation
We evaluate how well complexity classification can be explained, or how accurately the complex parts of a sentence can be highlighted. Results (Table 4) show that highlighting words in the AoA lexicon or LR features are rather strong baselines, indicating that most complexity of a sentence still comes from word usage. Highlighting more LR features leads to a slight drop in precision and better recall. Although LSTM and LR perform comparably on complexity classification, using LIME to explain LSTM presents better recall, F1, and TER (at similar precision) compared to using LIME to explain LR. The LIME & LSTM combination is reasonably strong on all datasets, as is SHAP & LR. TER is a reliable indicator of the difficulty of the remainder (unhighlighted part) of the complex sentence. ED with a substitution penalty of 1.5 efficiently captures the variations among the explanations. On Newsela and Biendata, the extractive adversarial networks yield solid performances (especially TER and ED 1.5), indicating that jointly making predictions and generating explanations reinforce each other. Table 5 provides examples of complex sentences highlighted by each explanatory model.
Benefit of Complexity Prediction
One may question whether explainable prediction of text complexity is still a necessary preliminary step in the pipeline if a strong, end-to-end simplification generator is used. We show that it is.
We consider the scenario where a pre-trained, end-to-end text simplification model is blindly applied to texts regardless of their complexity level, compared to only simplifying those considered complex by the best performing complexity predictor in Table 3. Such a comparison demonstrates whether adding complexity prediction as a preliminary step is beneficial to a text simplification process when a state-of-the-art, end-to-end simplifier is already in place. From the literature we select the current best text simplification models on WikiLarge and Newsela which have released pre-trained models:

• ACCESS (Martin et al., 2020), a controllable sequence-to-sequence simplification model that reported the highest performance (41.87 SARI) on WikiLarge.

• DMLMTL, the corresponding state-of-the-art simplification model on Newsela.

We apply the author-released, pre-trained ACCESS and DMLMTL on all sentences from the validation and testing sets of all three datasets. We do not use the training examples as the pre-trained models may have already seen them. Presumably, a smart model should not further simplify an input sentence if it is already simple enough. However, to our surprise, a majority of the out-of-sample simple sentences are still changed by both models (above 90% by DMLMTL and above 70% by ACCESS, please see Table 6). We further quantify the difference with vs. without complexity prediction as a preliminary step. Intuitively, without complexity prediction, an already simple sentence is likely to be overly simplified, which results in a loss in text simplification metrics. In contrast, an imperfect complexity predictor may mistake a complex sentence for a simple one, which misses the opportunity to simplify it and results in a loss as well. The empirical question is which loss is higher. From Table 7, we see that after directly adding a complexity prediction step before either of the state-of-the-art simplification models, there is a considerable drop in errors on three text simplification metrics: Edit Distance (ED), TER, and Fréchet Embedding Distance (FED), which measures the difference between a simplified text and the ground truth in a semantic space (de Masson d'Autume et al., 2019). For ED alone, the improvements are between 30% and 50%. This result is very encouraging: considering that the complexity predictors are only 80% accurate and that the complexity predictor and the simplification models do not depend on each other, there is considerable room to optimize this gain. Indeed, the benefit is higher on Biendata, where the complexity predictor is more accurate. Qualitatively, one can frequently observe syntactic, semantic, and logical mistakes in the model-simplified versions of simple sentences. We give a few examples below.

• Healthy diet linked to lower risk of chronic lung disease → Healthy diet linked to lung disease (DMLMTL)

• Dramatic changes needed in farming practices to keep pace with climate change → changes needed to cause climate change (DMLMTL)

• Social workers can help patients recover from mild traumatic brain injuries → Social workers can cause better problems . (DMLMTL)

All these qualitative and quantitative results suggest that the state-of-the-art black-box models tend to oversimplify and distort the meanings of out-of-sample input that is already simple. Evidently, the lack of transparency and explainability has limited the application of these end-to-end black-box models in reality, especially on out-of-sample data, contexts, and domains. The pitfall can be avoided with the proposed pipeline, and simply with explainable complexity prediction as a preliminary step.
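The comparison above amounts to a simple gate in front of the generator. Below is a minimal sketch of that pipeline; the predictor and simplifier are placeholder callables, not the actual ACCESS or DMLMTL models, and the toy stand-ins are purely illustrative.

```python
def simplify_with_gate(sentences, complexity_predictor, simplifier):
    """Apply an end-to-end simplifier only to sentences the complexity
    predictor labels as complex; pass already-simple sentences through."""
    outputs = []
    for sent in sentences:
        if complexity_predictor(sent) == 1:   # predicted complex
            outputs.append(simplifier(sent))
        else:                                 # predicted already simple
            outputs.append(sent)
    return outputs


# Illustrative stand-ins for the real predictor and simplification model.
toy_predictor = lambda s: 1 if len(s.split()) > 15 else 0
toy_simplifier = lambda s: " ".join(s.split()[:12])

if __name__ == "__main__":
    sents = [
        "Healthy diet linked to lower risk of chronic lung disease",
        "Researchers conducted a longitudinal study spanning two decades to quantify "
        "the association between dietary patterns and chronic obstructive pulmonary disease.",
    ]
    print(simplify_with_gate(sents, toy_predictor, toy_simplifier))
```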
Even though this explainable preliminary step does not necessarily reflect how a black-box simplification model "thinks", adding it in front of the model yields better out-of-sample performance.

Conclusions We formally decompose the ambiguous notion of text simplification into a compact, transparent, and logically dependent pipeline of sub-tasks, where explainable prediction of text complexity is identified as the preliminary step. We conduct a systematic analysis of its two sub-tasks, namely complexity prediction and complexity explanation, and show that they can be solved either separately or jointly through an extractive adversarial network. While pre-trained neural language models achieve significantly better performance on complexity prediction, an extractive adversarial network that solves the two tasks jointly presents a promising advantage in complexity explanation. Using complexity prediction as a preliminary step reduces the error of the state-of-the-art text simplification models by a large margin. Future work should integrate the rationale extractor into the pre-trained neural language models and extend it to simplification generation.
7,593.6
2020-07-31T00:00:00.000
[ "Computer Science" ]
Dependency-Aware Clustering of Time Series and Its Application on Energy Markets In this paper, we propose a novel approach for clustering time series, which combines three well-known aspects: a permutation-based coding of the time series, several distance measurements for discrete distributions and hierarchical clustering using different linkages. The proposed method classifies a set of time series into homogeneous groups, according to the degree of dependency among them. That is, time series with a high level of dependency will lie in the same cluster. Moreover, taking into account the nature of the codifying process, the method allows us to detect linear and nonlinear dependences. To illustrate the procedure, a set of fourteen electricity price series coming from different wholesale electricity markets worldwide was analyzed. We show that the classification results are consistent with the characteristics of the electricity markets in the study and with their degree of integration. Besides, we outline the necessity of removing the seasonal component of the price series before the analysis and the capability of the method to detect changes in the dependence level along time. Introduction There is a huge amount of literature dealing with the analysis of price series in energy markets, in particular focused on the study of dependencies among different electricity markets. For example, the European Union is developing the process of electricity market integration, which means for the Union the possibility to allocate new generation resources better, to allow the integration of more renewable sources in the power mix and to reduce the annual costs of the markets, mainly for the customer.These objectives need the development of several indicators based on prices, such as the ones presented in this work, and others, such as cross-border power flows or the integration of non-energy markets (balances, capacity) to analyze the degree of integration of present markets, physical constraints and their interest and potential for the integration in the future.Only from an economic point of view, it is worthy to evaluate the degree of coupling among several markets.According to the Agency for the Cooperation of Energy Regulators simulations, with this policy of integration, the Central West Europe (CWE) region has achieved gains of around 250 million euros with respect to previous isolated national markets.The European Parliament (2015) showed [1] that in a coupled market, less generation capacity is required, and the annual costs avoided were estimated at 1.2 billion euros (capital costs) and 448 million euro (fixed operational costs) for electricity and gas markets. 
The interest for the effective development of market integration in the EU has driven the European Commission and some authors to perform different theoretical studies on the quantitative analysis of market integration [2].In this kind of analysis, the authors give an indicator of energy markets' integration, mainly focused on markets, such as Nord Pool, CWE or the Spanish-Portugal case.The indicator used in several of these works is the correlation between peak-hours prices.However, the approach has some drawbacks: first, high prices between two areas can appear with or without market coupling (for example, in the Australian market due to cross-border congestion, see [3,4]).Second, low price periods are also of interest to know the interaction between two energy markets.In this context and based on a cointegration analysis, [5] studies whether the three electricity markets of Switzerland, Austria and Germany are integrated and converge towards one single price.The work in [6] investigates the dependencies among the spot prices of different European electricity markets through Kendall's tau and Spearman's rho coefficients and also using copulas.This work concludes the strongest dependency between the spot electricity prices of Austria and Germany and the weakest between Nord Pool and Spain.Moreover, it indicates that analyzed power exchanges exhibit a different degree of integration and have a higher level of dependency rather on a regional level.The work in [7] studies the interdependencies existing in wholesale electricity prices in six major European countries, whereas [8] analyze integration dynamics using multivariate cointegration techniques. There are many studies regarding the problem of detecting dependencies between two time series.For example, [9,10] propose statistical tests for independence between two stationary time series, based on the residual cross-correlation.Later, [11] introduced an alternative test using symbolic dynamics through permutations, which is able to detect linear and nonlinear dependencies.The permutation entropy, also known as the Shannon permutation entropy, was introduced by [12] to study the complexity of a time series, and it has been widely used to determine the complexity changes of biological time series; see [13,14], among others.In this context, [15] proposed to measure the volatility of price series in energy markets through the use of permutations.They highlight the utility of these new measures in identifying factors that can produce changes in the predictability of the price series, such as loads, weather or market regulations. The problem of time series clustering has been widely studied, and it has many applications across different fields, such as finance, biology or informatics.The goal is to classify a set of time series into homogeneous groups, that is similar time series should lie in the same cluster.Therefore, an essential part of the clustering process is the selection of appropriate similarity (or distance) measures, according to the classification objectives. 
The other two important parts of the process are the clustering approach and the clustering algorithm.The most popular clustering algorithms are the agglomerative hierarchical techniques, k-means, fuzzy c-means and the self-organizing maps (see [16] for more details).Regarding the clustering approach, three different types can be distinguished [16,17] depending on whether they work directly with raw data (raw data-based or shape-based approach), indirectly with a vector of features extracted from the raw data (feature-based approach) or indirectly with the model parameters obtained from the raw data (model-based approach). As we mentioned before, a key part in clustering is the similarity or distance measure used, which has to be properly selected depending on the classification purposes (see [17]).For example, if one wants to find similar time series in time, correlation-based distances or Euclidean distance are proper.In this context, [18] study the degree of market integration between Germany and eight neighboring countries by means of price correlations and price-difference stationarity.When finding similar time series in shape, it is assumed that the time occurrence of patterns is not important, and in this case, dynamic time warping (DTW) distance is suitable (see [19]).For example, in the field of energy markets, [20] analyze the effect of different similarity measures in time series clustering, and they outline the efficiency of DTW distance with some applications to discover buildings' energy patterns.Some other distances used in time series clustering are the short time series (STS) distance introduced in [21] or the Kullback-Leibler distance studied in [22].Finally, it is worth mentioning the symbolic representation of time series called SAX (symbolic aggregate approximation) introduced in [23], which is combined with the minimum distance to cluster time series. The aim of this paper is to propose an alternative approach to classify time series according to the strength of dependency among them.For that, we combine the next three aspects: firstly, the time series are codified by means of permutations (symbolic dynamic), which transform each time series into a discrete probability distribution; secondly, several similarity and distance measures for discrete distributions are chosen, with the objective of detecting dependencies among the time series; thirdly, different linkages (single, complete and average) are considered to apply the hierarchical algorithm.To illustrate the proposed method, we apply it to fourteen price series of different electricity markets worldwide.After applying the method, the clustering results are commented on, trying to show that the outcomes are reasonable with the degree of integration of these markets and the appearance of physical constraints in the internal or interconnection transmission networks. The paper is organized as follows: Section 2 is devoted to introducing the codifying process of the time series using permutations and to introducing the similarity and distance measurements; Section 3 deals with the applications of the proposed approach to different electricity markets; and Section 4 depicts the conclusions. 
Similarity and Distance Measures Based on Permutations Firstly, we summarize the codifying process of two time series. Let us consider $(x_n)_{n=1}^{T}$, $T \in \mathbb{N}$, a real time series. A natural way of codifying a single time series using permutations can be developed as follows. Let $S_m$ be the group of permutations of length $m$, with cardinality $\#S_m = m!$. The positive integer $m$ is called the embedding dimension. Let $x_m(r) = (x_r, x_{r+1}, \ldots, x_{r+m-1})$, $1 \le r \le T - m + 1$, be a sliding window taken from the sequence $(x_n)_{n=1}^{T}$. The window $x_m(r)$ is said to be $\pi$-type, $\pi \in S_m$, if and only if $\pi = (i_1, i_2, \ldots, i_m)$ (also called a codeword) is the unique element of $S_m$ satisfying the following two conditions: the values of the window, read in the order $x_{r+i_1}, x_{r+i_2}, \ldots, x_{r+i_m}$, are non-decreasing, and ties are resolved by taking the smaller index first. Therefore, any sliding window $x_m(r)$ is uniquely mapped onto a vector $(i_1, i_2, \ldots, i_m)$, which is one of the $m!$ permutations of the $m$ distinct symbols $(0, 1, \ldots, m-1)$.

Now, let us consider $(x_n)_{n=1}^{T}$ and $(y_n)_{n=1}^{T}$, $T \in \mathbb{N}$, two real time series, and $(z_n)_{n=1}^{T}$ the corresponding two-dimensional time series with $z_n = (x_n, y_n)$ for all $n = 1, \ldots, T$. Let $z_m(r) = (x_m(r), y_m(r))$, $1 \le r \le T - m + 1$, be a two-dimensional sliding window taken from the sequence $(z_n)_{n=1}^{T}$. The window $z_m(r)$ is said to be $\pi_i \times \pi_j$-type, $\pi_i, \pi_j \in S_m$, if and only if $x_m(r)$ is $\pi_i$-type and $y_m(r)$ is $\pi_j$-type. After the codifying process, all of the empirical information is collected in a contingency table (see Table 1), where $O_{i,j}$ denotes the observed frequency of the symbol $\pi_i \times \pi_j$ (also called a codeword). Hence, the relative frequency of each symbol is given by $p_{i,j} = O_{i,j}/n$, with $n = T - m + 1$ the number of sliding windows, and under the hypothesis of independence between the two time series it holds that $p_{i,j}$ equals the product of the corresponding marginal relative frequencies.

Some common statistics in the context of contingency tables are Pearson's chi-square, the likelihood ratio and the Cressie-Read statistics, which are used in [11] to test the independency between two time series. That paper also shows the efficiency of the method in detecting linear and nonlinear dependence. For example, Pearson's chi-square statistic for the contingency Table 1 is given by $\chi^2 = \sum_{i,j} (O_{i,j} - e_{i,j})^2 / e_{i,j}$, where $e_{i,j}$ denotes the expected frequencies under the independency hypothesis, that is, $e_{i,j} = \frac{1}{n} \big(\sum_{k} O_{i,k}\big)\big(\sum_{k} O_{k,j}\big)$. In general, Pearson's chi-square, the likelihood ratio and the Cressie-Read statistics measure the discrepancy between the observed frequencies and the expected frequencies when independency is assumed. Even though they allow us to test the independency in a contingency table, they cannot be used to quantify the strength of the association because they depend on the sample size. In our context (codified time series using permutations), values of Pearson's chi-square statistic depend on T (the length of the time series) and m (the embedding dimension). In order to eliminate the effect of sample size, we can consider an association measure defined from Pearson's chi-square statistic in a general contingency table, which ranges from zero to one and is called Cramer's V.
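As a minimal sketch of the codifying process described above (pure Python/NumPy; function and variable names are illustrative, not from the paper), the following maps each sliding window to its ordinal pattern and accumulates the joint counts O_{i,j} for two series of equal length.

```python
import itertools
import numpy as np


def ordinal_pattern(window):
    """Return the permutation (tuple of 0-based indices) that reads the
    window in non-decreasing order, breaking ties by index."""
    return tuple(int(i) for i in np.argsort(window, kind="stable"))


def codify_pair(x, y, m=3):
    """Build the m! x m! contingency table O of joint ordinal patterns for
    two series, using all T - m + 1 sliding windows."""
    patterns = list(itertools.permutations(range(m)))
    index = {p: k for k, p in enumerate(patterns)}
    O = np.zeros((len(patterns), len(patterns)), dtype=int)
    T = len(x)
    for r in range(T - m + 1):
        i = index[ordinal_pattern(x[r:r + m])]
        j = index[ordinal_pattern(y[r:r + m])]
        O[i, j] += 1
    return O


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.normal(size=2000)
    y = 0.7 * x + 0.3 * rng.normal(size=2000)          # dependent pair of series
    O = codify_pair(x, y, m=3)
    n = O.sum()
    e = np.outer(O.sum(axis=1), O.sum(axis=0)) / n     # expected counts under independence
    chi2 = ((O - e) ** 2 / np.where(e > 0, e, 1)).sum()
    print("windows n =", n, " chi-square =", round(chi2, 1))
```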
Let us consider X and Y as two random variables, and assume that we have a contingency table to test the independency of these two variables. Cramer's V is given by $V = \sqrt{\chi^2 / \big(n \min(I-1, J-1)\big)}$, where $n$ is the sample size, $\chi^2$ is Pearson's chi-square statistic and $I$ and $J$ are the number of rows and columns in the corresponding contingency table. Values of Cramer's V close to zero mean no association (independency) and values close to one mean strong association (dependency). An interesting interpretation can be found in [24], which says that this coefficient represents the information that flows from Y towards X. If the information about Y is irrelevant in determining X, the coefficient is zero.

In our context of codifying two time series with an embedding dimension m, we have that $I = m!$ is the number of rows in the contingency table, $J = m!$ is the number of columns in the contingency table and $n = T - m + 1$ is the number of sliding windows of size m. Therefore, given two time series $(x_n)_{n=1}^{T}$ and $(y_n)_{n=1}^{T}$, $T \in \mathbb{N}$, and an embedding dimension m, we can define the association measure Cramer's V between the two time series as $V = \sqrt{\chi^2 / \big((T-m+1)(m!-1)\big)}$, where $\chi^2 = \sum_{i,j}(O_{i,j}-e_{i,j})^2/e_{i,j}$ and $e_{i,j}$ is the expected frequency given in (8). Additionally, its corresponding distance measure is defined by $D_V = 1 - V$.

In the field of probability and information theory, the concept of mutual information measures the dependency between two variables X and Y, that is, it quantifies the reduction of one variable's uncertainty when the other variable is known. Given two discrete random variables X and Y, the mutual information coefficient is defined by $I(X,Y) = \sum_{i,j} p(x_i, y_j) \log \frac{p(x_i, y_j)}{p_1(x_i)\, p_2(y_j)}$, where $p(x_i, y_j)$ is the joint probability function of (X, Y) and $p_1(x_i)$ and $p_2(y_j)$ are the marginal probability functions of X and Y, respectively. The mutual information coefficient can be computed using the concept of entropy as $I(X,Y) = H(X) + H(Y) - H(X,Y)$, where $H(X) = -\sum_i p_1(x_i) \log p_1(x_i)$ is the entropy of X, $H(Y) = -\sum_j p_2(y_j) \log p_2(y_j)$ is the entropy of Y and $H(X,Y) = -\sum_{i,j} p(x_i, y_j) \log p(x_i, y_j)$ is the entropy of (X, Y).

The mutual information coefficient is a dependency measure because I(X, Y) = 0 if and only if X and Y are independent. Moreover, it is symmetric and non-negative, but there is not a fixed upper bound. There exist several normalized versions of the mutual information coefficient; see [25][26][27], among others. The first of these outlines the uncertainty coefficient, defined by $U(X,Y) = 2\, I(X,Y) / \big(H(X) + H(Y)\big)$. Note that the uncertainty coefficient is a symmetric association measure that reaches zero for independent variables and one for perfect dependency. Given two time series $(x_n)_{n=1}^{T}$ and $(y_n)_{n=1}^{T}$, $T \in \mathbb{N}$, and an embedding dimension m, we can define the corresponding association measure between the two codified time series, also called the uncertainty coefficient, by evaluating $U$ on the relative frequencies of the contingency table. Additionally, the corresponding distance measure is given by $D_U = 1 - U$.

Based on the concept of mutual information again, the following two universal distance measures can be considered [28]. They are true metrics because they satisfy the non-negativity, symmetry and triangular inequality properties. Additionally, they are universal in the sense that if any other distance measure states that X is near Y, then the universal distances state the same. In our context, after the codifying process of the time series, we define the corresponding distance measures between two time series, denoted $D_1$ (22) and $D_2$ (23), by evaluating these two universal metrics on the permutation-coded distributions. Note that, taking into account the nature of the time series codifying process through permutations, the distance measurements between two time series defined in (11), (19), (22) and (23) have the capability to detect linear and nonlinear dependencies (see [11] for more details).
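Continuing the sketch started after the codifying step, the dissimilarities can be computed directly from the contingency table. D_V and D_U below follow the definitions in the text; d1 and d2 are mutual-information-based stand-ins whose exact forms are an assumption, since the paper's Equations (22)-(23) are not reproduced here.

```python
import numpy as np


def entropies(O):
    """Marginal and joint Shannon entropies (natural log) from a table of counts O."""
    p = O / O.sum()
    px, py = p.sum(axis=1), p.sum(axis=0)
    h = lambda q: -np.sum(q[q > 0] * np.log(q[q > 0]))
    return h(px), h(py), h(p.ravel())


def dependency_distances(O):
    """Distances between two permutation-coded series from their contingency table O.

    D_V and D_U are derived from Cramer's V and the uncertainty coefficient;
    d1 and d2 are assumed mutual-information-based universal forms.
    """
    n = O.sum()
    e = np.outer(O.sum(axis=1), O.sum(axis=0)) / n
    chi2 = np.sum((O - e) ** 2 / np.where(e > 0, e, 1))
    k = min(O.shape) - 1                       # min(I - 1, J - 1)
    V = np.sqrt(chi2 / (n * k))                # Cramer's V
    hx, hy, hxy = entropies(O)
    mi = hx + hy - hxy                         # mutual information
    U = 2 * mi / (hx + hy)                     # symmetric uncertainty coefficient
    d1 = 1 - mi / hxy                          # assumed universal form
    d2 = 1 - mi / max(hx, hy)                  # assumed universal form
    return {"D_V": 1 - V, "D_U": 1 - U, "d1": d1, "d2": d2}
```

Applied to the table O produced by codify_pair above, all four values stay close to one for independent series and decrease as the dependency between the two series strengthens.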
Applications to Electricity Markets In this section, we study the dependencies among prices of different electricity markets, with or without geographical proximity and with or without the same system operator.We have considered the following electricity markets over the same time period, which ranges from 2004 to 2009: Ontario, Omel, Austria, four Australian markets and several Nord Pool markets (data available at [29][30][31][32][33]).This set of data contains, for the period under consideration (2004 to 2009), markets with different and similar characteristics in some sense: the market design (for example, Australia and Nord Pool, which are basically based on the energy-only market design); the liquidity of the market (7% of energy traded in the market in Austria in contrast to 70% in Omel and Nord Pool); the mix of generation (68% of hydro and renewable in Austria, 56% in Sweden or 20% in Australia); the size of the market (387 TWh per year in Nord Pool and 310 TWh per year in Omel); or the role of the region as a net importer (Finland) or exporter (Sweden and Queensland). With respect to the time period selected for the analysis, it is necessary to state that this period has interesting characteristics from the technical and economical points of view: some years had high peak prices, whereas others had flat price periods; the stability of bidding zones; the volatility of gas markets and its influence on generation costs; and finally, the great amount of available information with respect to network congestions and the limitation of inter-connectors' export capacity, which partially explains market splitting in this period in Australia and Nord Pool (for example, the limitation of electricity export in Sweden due to internal bottlenecks on several inter-connectors during a significant number of hours in the period from January 2002 to April 2008, events that have raised European Commission concerns [34] and that explain the division of the Swedish area into four regions in 2011). For a better understanding of the classification results, we include a brief description of some markets analyzed. 
Description of Some Electricity Markets Analyzed The four Australians markets selected in this study are New South Wales (NSW), Queensland (QLD), South Australia (SA) and Victoria (VIC).The Australian National Electricity Market (NEM) promotes efficient generation and demand use by a wholesale market, which allows electricity trade among five regions in the east of Australia (see Figure 1 Each region has different characteristics (generation mix and load) and interconnection capacities.For example, New South Wales is a net importer of electricity and has limited capacity to cover the highest peaks of demand, and for this reason, it needs generation support from QLD, Snowy Hydro and VIC.Victoria had in the period under study (2004 to 2009) a substantial low cost base-load capacity, making it a net exporter of electricity.Queensland is a net exporter too, mainly to NSW, due to their geographical and electrical proximity.South Australia is a net importer (a high percent of its demand was covered outside this region until 2005-2006 because a new investment in wind generation was developed in this area).Table 2 (adapted from [35]) shows the inter-regional trade of these regions.The NEM market works at unison when the electricity can flow freely among all areas, but this does not mean that the price is the same in the five areas during these periods.The "integrity" or price alignment of the NEM market as a percentage of trading hours ranges between 70% and 80% across the regions.Australia manages congestion periods by splitting its regions, allowing different and more independent marginal prices in each area.This separation occurs when a transmission inter-connector becomes congested and limits inter-regional power flows.In these cases, each area needs to reconsider offers from the generation in its own region, and in this way, a different behavior of the market occurs in each area (the generation mix is different for each region).This scenario may occur at times of peak demand or when an inter-connector experiences some outage or is under maintenance tasks.The inter-connectors in Australia are shown in Figure 1.Notice that Australia does not have a meshed link among regions (QLD, NSW, SA, VIC, TA), but a radial one. The Nord Pool markets are divided into several bidding areas.The available transmission capacity may vary and congest the flow of power between the bidding areas, and thereby, different area prices are established.For each Nordic country, the local transmission system operator (TSO) decides into which bidding areas the country is divided.The bidding areas has changed along time, and for the time period analyzed (the years 2004 to 2009), we have considered the following: Sweden (SE), Finland (FI), Western Denmark (DK1), Eastern Denmark (DK2), Oslo (NO1) and Trondheim (NO2).Nord Pool calculates a price for each bidding area for each hour of the following day.The Nord Pool System price (NPS) is calculated based on the sale and purchase orders disregarding the available transmission capacity between the bidding areas in the Nordic market. 
The Nordic area is a good example of a well-linked region.From the early 1990s, these countries made solid foundations for the development of a supra-national market, but despite this fact, the integrity of price areas is not the same (see Figure 2).The Nordic Transmission grid connects the four countries of this area, and the congestions between the countries are managed by implicit auctions through Nord Pool spot.The Nordic electricity grid has several AC and DC inter-connectors to link the different countries in the region and to interconnect adjacent areas.For example, in the period under study (2004 to 2009), the Denmark West-Germany corridor had 1500 MW and 950 MW in the opposite direction.Finland is strongly connected to Sweden (2050 MW Sweden-Finland and 1650 MW in the opposite direction), but weakly with North Norway (100 MW) and Estonia (in 2007 with a capacity of 350 MW).Finland forms its own bidding area.The weakest linked area is Western Denmark (DK1) because it was part of the Continental European synchronous power system, the former UCTE area (Union for the Coordination of the Transmission of Electricity) and now the Continental European Group of ENTSO-E (European Network of Transmission System Operators for Electricity), whereas Eastern Denmark (DK2) was part of the Nordic synchronous area (the former Nordel, now the Baltic Regional Group of ENTSO-E [36]).The second one, according to Figure 2, is the NO1 area (Oslo region) due the capacity problems of the west coast Swedish corridor.Moreover, the capacity usually available from SE to NO2 and NO3 is limited.The most coherent areas in the period analyzed were FI and SE due to the high transmission capacity between Finland and North Sweden. Classification Results For each electricity market, hourly price series from 2004 to 2009 are used in the analysis.The proposed measures allow us to determine which markets present strong relationships and which ones are not related.Furthermore, the strength of the relation can be measured along the year in order to detect periods with the most or the least price dependency. For that, the whole time series has been divided into non-overlapping blocks of size w (block size), and then, given an embedding dimension m, the distance measures proposed in this paper are computed for each block.The block size selected when computing distance measures usually corresponds to a year approximately (w = 8760 h) or to a season of the year (w = 2190 h), because the proposed measures do not depend on the block size w, and we are interested in studying whether the dependency level is homogeneous along time.However, a suitable combination of embedding m and block size w should be chosen when developing the independency test.A general rule to get a good performance is that the block size w ought to be roughly w = 5•5•m!•m!.For example, when the embedding dimension is m = 3, a block size of w = 5•5•3!•3!= 900 is recommended.See [14] for more details. 
Firstly, we highlight the necessity of removing the seasonal component before the analysis.Note that hourly electricity price series have daily and weekly seasonal components (period = 24 h and period = 168 h, respectively), and these seasonal parts are more relevant (higher values) than the stochastic part of the series.Taking into account this framework, we wondered if the dependence test was appropriate for series with a seasonal behavior.Let (x t ) t=T t=1 be the original price series of a specific electricity market.In this context, we consider three different ways to remove seasonality in the price series to extract the stochastic component: • Taking weekly seasonal differences: • First taking weekly seasonal differences and then daily differences: • Using the method proposed in [37]: where N + 1 = 5 is the number of weeks used for calibration.This approach is more popular among practitioners because it combines differencing at various lags with moving average smoothing. Note that the the length of the resulting stochastic component is less than the length of the original series in all cases, because the first part of the data cannot be used. Let us consider the hourly price series in the whole period 2004 to 2009 of two very different electricity markets, Ontario and Omel, which are far away and have different market regulations.It is clear that the prices of both markets are independent, but the presence of seasonality leads to the wrong conclusion if the seasonal component is not previously removed.Figure 3 shows the correlograms of the two price series, which reveals clear daily and weekly seasonal components (peaks in Lags 24, 168 and their multiples).Now, we compute Pearson's chi-squared, the likelihood-ratio and the Cressie-Read statistics in four different situations: using the original data (without removing the seasonal component) and using the stochastic component extracted in the three ways mentioned above.Figure 4 shows the results for Pearson's chi-squared statistics (the others statistics were nearly the same), and the dotted line represents the limit of the rejection region.An embedding dimension of m = 3 and a block size of w = 5•5•3!•3! = 900 were chosen for the test.When original data are considered (see Figure 4a), the statistic lays in the rejection region, so we would conclude that both price series are dependent.However, after removing the seasonal component with any method (see Figure 4b-d In the rest of the paper, we have applied Weron's method to all price series before each analysis, so the stochastic components of the price series have been used instead of the original data. 
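As a hedged sketch of the first two differencing schemes listed above (Weron's calibrated method is not reproduced here), the helpers below remove the weekly and daily seasonal components of an hourly price series by differencing; the names and the synthetic example are illustrative.

```python
import numpy as np


def weekly_difference(prices, week=168):
    """Weekly seasonal differencing of an hourly series: y_t = x_t - x_{t-168}."""
    prices = np.asarray(prices, dtype=float)
    return prices[week:] - prices[:-week]


def weekly_then_daily_difference(prices, week=168, day=24):
    """Weekly seasonal differences followed by daily differences."""
    weekly = weekly_difference(prices, week)
    return weekly[day:] - weekly[:-day]


if __name__ == "__main__":
    t = np.arange(24 * 7 * 8)                                   # eight weeks of hourly data
    seasonal = 10 * np.sin(2 * np.pi * t / 24) + 5 * np.sin(2 * np.pi * t / 168)
    prices = 40 + seasonal + np.random.default_rng(1).normal(scale=2, size=t.size)
    stochastic = weekly_then_daily_difference(prices)
    # The stochastic component is shorter than the original series, as noted above.
    print(len(prices), "->", len(stochastic), "observations after differencing")
```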
As we mentioned before, the proposed distance measurements can be used to study the strength of the dependency along time.To illustrate this task, let us consider the hourly price series of Finland and Sweden from 2004 to 2009, two electricity markets that are strongly related.First, we compute the dependency statistics with m = 3 and w = 900 to show a true price dependence between these two electricity markets; see Figure 5.Note that the resulting series are of a size of 51,768 h after applying Weron's method, so there are 57 windows of a size of w = 900 along the period analyzed.To explain, from a physical point of view, the results shown in Figure 6, it is interesting to consider two aspects.First, the fact that the share of electricity bought from the power exchange in relation to electricity consumption has increased considerably since Finland and Sweden joined the Nordic power market.For example in Finland, the share of electricity bought from the Nordic power exchange has increased from 5% to 60% of the Finnish consumption in 2012 [38].This means a higher dependence (potentially) among Finland and Sweden (and, obviously, with the Nord Pool area) and explains the slight increase in dependency level along the period shown in Figure 6.The second is the management of congestions.In the Nordic area, two mechanisms are used: counter trade and congestion rents.The first is used with market agents to relieve both national and inter-regional congestions during the daily network operation.The cost of this mechanism in Finland decreased from 0.86 million euros in 2004 to 0.085 million euros in 2009 [39].The second mechanism is the most important to evaluate cross-border congestions, the so-called congestion rents.Congestion rents come up in the situation where transmission capacity between bidding zones is not sufficient to fulfill the demand.The congestion splits the price bidding zones into separate price areas, and the power exchange and TSOs receive congestion income from the congested interconnection.The congestion rents are computed as the product of the commercial flow on the day ahead market and the difference of the area prices.In this way, high levels of congestion rents between two areas in some periods of time mean that these areas were more independent during those periods.Historical congestion rents between Finland and Sweden [39] have been analyzed (from summer of 2006 to autumn 2009), and they are shown in Figure 7.Note that the right part of Figure 6 Finally, we study the dependence structure among all of the electricity markets analyzed.First, we compute the corresponding distance matrix, and then, we obtain the hierarchical classification of the markets.The distance matrices are computed for each one of the proposed distance measures (D V , D U , D 1 and D 2 ), for each year of the analyzed period (2004, 2005, 2006, 2007, 2008 and 2009) and for the whole period 2004 to 2009.An embedding dimension of m = 3 is selected for individual years and m = 4 for the six-year period.As examples, Tables 3 and 4 show the distances between each pair of markets for the six-year period and Tables 5 and 6 for the individual year 2007. 
The hierarchical clustering of the electricity markets has been developed from the previous distance matrices and using different linkages (single, complete and average).For instance, Figure 8 shows the classification results for the whole six-year period, V-Cramer distance and single linkage.Dendrograms for all distance measurements and all linkages reveal the same hierarchical classification.Four clusters can be distinguished: two of them are isolated markets (Omel and Ontario, respectively); the third one consists of the four Australian regions (Victoria, New South Wales, South Australia and Queensland); and the forth cluster includes all Nord Pool regions (Finland, Sweden, Trondheim, Oslo, East Denmark, West Denmark and the system) together with Austria.Note that West Denmark is DK2 Note that the clustering approach proposed in this paper produces plausible, non-trivial results that can be intuitively explained in the given scenario.Obviously, the final classification results depend on several aspects jointly, such as the size of the regions, the system's regulation laws, demand daily patterns, costs for the spinning reserve or fees for cross-border energy transmission.Below, we try to highlight some aspects that partially justify the clustering results in spite of the fact that it is not the aim of the work.The isolation of the Ontario market in this analysis does not need any comment, and the one of the Spanish market is also well known.For instance, the capacity of cross-border connection from Spain to France in 2008 was only 1400 MW (3% of Spanish demand), and France did not join the European Power Exchange (EPEX) initiative until 2009 to 2010, as well.According to the European Association of Regulators (ACER), up to 2010, the percentage of hours for equal hourly day-ahead prices in the pair France-Germany was 0%.In this way, Spain had no possibility of economic or physical linkage with other European markets, such us Nord Pool or Austria, outside the limited possibility of exchange with France.Therefore, it is very unlikely that Omel and Nord Pool had been linked through EPEX (via France-Germany) during that period.On the other side, the dendrograms reveal that Austria exhibits a weak dependence with Denmark areas.This is due to the fact that Austria and Denmark areas (DK1 and DK2) are linked through Germany.Austria has a high capacity of cross-border lines with Germany (10020 MW and 3664 MW in 2009).However, from 2004 to 2008, the energy volume traded by the Energy Spot Market in Austria (EXAA), which covers German areas) did not get 7% with respect to Austrian overall demand [40].In September 2008, the EPEX (Germany-Austria) was founded, but in its first year, it traded less than 17% of the Austrian gross demand of electricity.Hence, the market integration was very weak in that period. 
The results obtained for the Nordic regions are in agreement with the integrity levels showed in Figure 2, where DK1 has the lowest integrity percentage with the rest of regions, whereas FI and SE have the highest one.To explain the hierarchical classification in the case of Australia, two aspect can be considered: first, inter-connectors' capacity and their constraints, and second, the annual power flows between Australian areas.With respect to annual power flows between areas, Figure 9 shows a snapshot of the NEM market for 2006/2007 (adapted from [41]).This figure and the above-mentioned conditions of transmission inter-connectors and physical energy exchanges among regions can explain the distance matrices and dendrograms.From these power flows, it can be seen that NSW needs support from QLD and VIC.On the other side, QLD has a sufficient amount of generation in its area (the area is more independent), and its dependency with VIC and SA is lower than the link with NSW.Finally, SA needs imports from VIC (a net exporter area), but not from NSW (a net importer from VIC and QLD). In general, dendrograms for each individual year lead to clustering results similar to that of the six-year period, but some differences are worth being outlined (see Figure 10).For instance, in 2005, there was a strong dependence between prices of Nord Pool's system and Oslo (even higher than the dependence level between Finland and Sweden).In 2008, the dependency strength of Oslo's region with the rest of the Nordic regions went down, and it became the weakest (even lower than the association of West Denmark with the rest of the regions).In that year, the hydropower production in Norway was higher to compensate lower Swedish production (because the availability of nuclear power plants in Sweden went down during 2008, reaching 65% during some months, especially in November and December) and also due to some problems with the imports from the Central-West European area Although we have focused on electricity prices, the proposed approach could be helpful to study the relationships among other kinds of time series like electricity loads.Below, we consider a set of twelve time series corresponding to the hourly electricity loads in four different regions along three different years (2007, 2008 and 2009).Specifically, we have analyzed the electricity load series of three regions in Australia: New South Wales (NSW), South Australia (SA) and Victoria (VIC); and the load time series of Ontario's market.The objective is to apply the proposed clustering procedure to this set of time series in order to obtain groups of series that present dependency among themselves. Recall that the steps of the procedure can be summarized as follows: • First, the seasonal component of the time series must be removed.We suggest using Weron's method given in (27), but other techniques can be applied. • Secondly, the resulting time series (after removing the seasonal component) are codified by means of permutations.For that, the researcher has to choose the embedding dimension. • Thirdly, the distance between each pair of time series (through their codes) is computed, and the corresponding distance matrix is obtained.In this step, we propose using four different dissimilarity measures (D V , D U , D 1 and D 2 ). • Finally, the dendrogram is computed obtaining the clustering results.For that, the researcher has to choose the distance measure and the linkage of the hierarchical method. 
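Putting the four steps above together, the following is a minimal end-to-end sketch using SciPy's hierarchical clustering. It assumes the codify_pair and dependency_distances helpers sketched in the earlier sections; names and defaults are illustrative rather than the paper's exact configuration.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import squareform


def cluster_series(series_dict, m=3, distance_key="D_V", method="single"):
    """Hierarchical clustering of deseasonalized series by dependency level.

    series_dict: {name: 1-D array of the stochastic component of a series}.
    Relies on codify_pair() and dependency_distances() from the earlier sketches.
    """
    names = list(series_dict)
    k = len(names)
    D = np.zeros((k, k))
    for a in range(k):
        for b in range(a + 1, k):
            O = codify_pair(series_dict[names[a]], series_dict[names[b]], m=m)
            D[a, b] = D[b, a] = dependency_distances(O)[distance_key]
    # Condensed distance vector -> linkage matrix for the chosen linkage.
    Z = linkage(squareform(D, checks=False), method=method)
    return names, D, Z

# dendrogram(Z, labels=names) then draws the kind of plot used for the market
# classification, with single, complete or average linkage as desired.
```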
Once we have removed the seasonal component of each time series and we have codified the resulting series, we compute the distance matrices.Figure 11 shows the distance matrices (Crammer's V distance and Universal Distance 2) of the twelve time series, using embedding dimension m = 3.Additionally, Figure 12 shows the corresponding classification results choosing different linkages.The electricity loads of New South Wales for 2007, 2008 and 2009 are denoted by NSW07, NSW08 and NSW09, respectively, and similar notation is used for South Australia (SA07, SA08 and SA09), Victoria (VIC07, VIC08 and VIC09) and Ontario (Ont07, Ont08 and Ont09).In Figure 12, two different clusters can be seen: the first one formed by the three load series of Ontario's market and the second one formed by the nine load series of the Australian market.Moreover, in the second cluster, there are three subgroups that are well separated, one for each year analyzed.Therefore, we can state that the strength of dependency is greater among the Australian regions (NSW, SA and VIC) for a specific year than among the years for a specific region. In each of the three subgroups of the Australian cluster, we can see that the strongest dependency corresponds to the load series of South Australia and Victoria, whereas New South Wales has the weakest dependency inside its subgroup.On the other hand, the three load series of Ontario present a weak dependency level among them, but high enough to create a different cluster from the Australian load series. Finally, we compare some of our results with those obtained using a classical clustering approach for time series: a raw data-based approach and the Euclidean distance.In this case, we work directly with the original data, that is the time series are neither transformed nor codified.Additionally, the Euclidean distance is used as a dissimilarity measure, which is combined with different linkages.Figure 13 shows the Euclidean distance matrix of the twelve time series also considered in Figure 11.Recall that the Euclidean distance is not upper bounded; it is very sensitive to transformations; and the proximity notion relies on the closeness of the values observed at corresponding points of time.Figure 14 shows the corresponding clustering results for the electricity loads of Ontario and Australia over different years. Once again, two clusters can be distinguished: one composed of Ontario's loads and the other one composed of the Australian loads.However, when we compare Figure 12 with Figure 14, an essential difference can be observed.This time, the cluster of the Australian loads is divided into three subgroups corresponding to each region analyzed.Therefore, if we classify this set of time series according to the information that they share (using the clustering approach proposed in the present paper), we get that the strength of dependency is greater among the regions (for each specific year), whereas if we classify them looking for similarities in time, we get that the similarity in time is greater among the years (for each specific region).This example illustrates the importance of choosing a suitable clustering approach and dissimilarity measure depending on the classification purpose. Conclusions The problem of time series clustering has great interest and applications in many disciplines.For instance, in the field of electricity markets, the study of relations among price time series becomes essential to give a first indicator of the degree of market integration. 
The present paper proposes a novel approach to time series clustering, where the aim is to classify the series into homogeneous groups according to the dependency level among them. That is, given a set of time series, the proposed clustering method creates groups of time series that are related. The new approach combines three aspects: a permutation-based coding of the time series, distance measures that quantify dependencies between two discrete distributions and different linkages for hierarchical clustering. It is able to detect linear and nonlinear relationships, due to the nature of the symbolic representation of the time series done in the codifying stage.

The method was applied to several electricity markets from Europe, North America and Australia to illustrate its performance, using electricity prices as well as electricity loads. We show that the proposed method produces plausible, non-trivial results that can be intuitively explained in the given scenario. Furthermore, some of our results were compared with those obtained using a raw data-based approach and the Euclidean distance, exhibiting the importance of choosing an appropriate approach depending on the clustering target. Therefore, the method developed in this paper allows the researcher to classify a set of time series according to the degree of information that they share, creating groups of time series that are linearly or non-linearly dependent. On the other hand, some practical examples show the necessity of removing the seasonal component of the series before the analysis and the utility of this approach to study the variation of the dependency level between two price series along time.

Figure 2. Integrity of price areas in Nord Pool in 2008 and 2009. In the top of each rectangle, the percentage of "integrating" time for the year 2008; in the bottom, the percentage for the year 2009. DK, Denmark; SE, Sweden; FI, Finland; NO, Norway. (a) Percentage of integrity for SE and FI (green areas); (b) percentage of integrity for SE, FI, DK2, NO2 and NO3; (c) percentage of integrity for SE, FI, DK2, NO2, NO3 and NO1; (d) percentage of integrity for SE, FI, DK2, NO2, NO3, NO1 and DK1.

Figure 4. Independency tests between the Omel and Ontario markets using Pearson's chi-squared statistic (y-axes) in four different situations: (a) using original price data; (b) removing the seasonal component using Equation (24); (c) removing the seasonal component using Equation (26); (d) removing the seasonal component using Equation (27). After removing the seasonal component with any of the three methods, the statistic states independency between the price series; the selection of m = 4 and w = 5·5·4!·4! = 14,400 leads to the same conclusions.

Figure 5. Independency test between the Finland and Sweden markets after removing the seasonal component through Equation (27). An embedding dimension of m = 3 and a block size of w = 2190 (a season of the year, approximately) are now selected to evaluate how the dependency level varies along time. Note that the resulting series are of a size of 50,724 h after removing the seasonality through Weron's procedure and starting on 21 March 2004 (spring). Therefore, there are 23 windows of size w = 2190 along the period analyzed, from spring 2004 to autumn 2009. Figure 6 reveals that the dependency level is not homogeneous along time. On the one hand, a slight increase of the dependency level can be appreciated along the years analyzed (the distance presents a decreasing trend). On the other hand, there are some dependency peaks (valleys in the distance graph) in autumn 2004, spring 2005, spring-summer 2006, spring-summer 2007, spring-summer 2008 and autumn 2009. Furthermore, note that the four distances provide a similar pattern, but the scales change, except for the uncertainty distance (D_U) and the Universal Distance 2 (D_2), which are roughly the same.

Figure 6. Distance measures between the Finland and Sweden markets for each season in 2004 to 2009. (a) D_V distance; (b) D_U distance; (c) D_1 distance; (d) D_2 distance.

Figure 7. Congestion rents from Finland to Sweden, in euros.

Figure 8. Dendrograms for the whole period 2004 to 2009. (a) D_1 distance and average linkage; (b) D_2 distance and complete linkage.

[42]. Both facts originated congestion problems with the transmission inter-connectors and a loss of price integrity in the NO1 area. Finally, the dependence scheme of the four Australian regions has been changing over the years: in 2005 and 2006, NSW and VIC were the most related; in 2007 and 2008, the highest dependency went to the pair SA and VIC; but in 2009, NSW and QLD reached the maximum dependence level.

Figure 12. Dendrograms for electricity load series. (a) D_V and single linkage; (b) D_V and average linkage; (c) D_2 and single linkage; (d) D_2 and average linkage.

Figure 13. Euclidean distance for electricity load series.

Figure 14. Dendrograms for electricity load series: a raw data-based approach. (a) Euclidean distance and single linkage; (b) Euclidean distance and average linkage.

Table 1. Contingency table of the codified time series.

Table 2. Inter-regional trade as a percentage of regional energy demand.
9,966.6
2016-10-11T00:00:00.000
[ "Computer Science", "Economics", "Environmental Science" ]
A spatio-temporal attention fusion model for students behaviour recognition Student behavior analysis can reflect students' learning situation in real time, which provides an important basis for optimizing classroom teaching strategies and improving teaching methods. It is an important task for smart classroom to explore how to use big data to detect and recognize students behavior. Traditional recognition methods have some defects, such as low efficiency, edge blur, time-consuming, etc. In this paper, we propose a new students behaviour recognition method based on spatio-temporal attention fusion model. It makes full use of key spatio-temporal information of video, the problem of spatio-temporal information redundancy is solved. Firstly, the channel attention mechanism is introduced into the spatio-temporal network, and the channel information is calibrated by modeling the dependency relationship between feature channels. It can improve the expression ability of features. Secondly, a time attention model based on convolutional neural network (CNN) is proposed, which uses fewer parameters to learn the attention score of each frame, focusing on the frames with obvious behaviour amplitude. Meanwhile, a multi-spatial attention model is presented to calculate the attention score of each position in each frame from different angles, extract several saliency areas of behaviour, and fuse the spatio-temporal features to further enhance the feature representation of video. Finally, the fused features are input into the classification network, and the behaviour recognition results are obtained by combining the two output streams according to different weights. Experiment results on HMDB51, UCF101 datasets and eight typical classroom behaviors of students show that the proposed method can effectively recognize the behaviours in videos. The accuracy of HMDB51 is higher than 90%, that of UCF101 and real data are higher than 90%. Introduction Artificial intelligence technology and big data technology have promoted the transformation of modern education system [1,2].Adaptive personalized learning driven by artificial intelligence technology is the most potential application scenario in the field of education.As the main place of classroom teaching in colleges and universities, multimedia classroom has been gradually upgraded to smart classroom.Classroom is also the main battlefield of "golden course" construction.Teachers play a decisive role in the construction of "golden course".How to do fusion innovation, how to effectively improve the quality of "golden course" construction, and how to effectively analyze and evaluate classroom dynamic generative teaching data have been widely concerned by education experts and front-line teachers.At present, the research focus is on the theoretical analysis, technical application and value discussion of the dynamically generated content.There are few researches on the teaching and learning data recording, data analysis and teaching application of the dynamically generated content.However, the key points and difficulty of these researches lie in the automatic detection and recognition of students' classroom behavior. Behaviour recognition [3] has been widely used in many fields, such as video surveillance, smart home, video retrieval, intelligent human-computer interaction, etc. 
Video has the characteristics of complex environment, large transformation range of visual angle and human behaviour, which makes the feature representation of video have a lot of redundant information in spatio-temporal.Therefore, it is very important for behaviour recognition to effectively utilize the information of key areas on the frames with obvious behaviour amplitude in the video. Behaviour recognition methods in the video can be divided into traditional methods [4,5] and deep learningbased methods [6,7].Traditional methods have made some progress in the field of behaviour recognition, but they rely heavily on artificial feature design, and the generalization ability of the algorithm is insufficient.Deep learning-based methods can automatically learn the features of videos for classification, especially, the dualstream method [8] can effectively combine the spatiotemporal information in videos and has relatively better performance.Dai et al. [9] proposed the dual-stream model for the first time, which input single-frame image and multi-frame density optical flow field image into spatial flow and temporal flow respectively.Then it fused and classified the features of the two streams.Wang et al. [10] proposed temporal piecewise network, using sparse sampling and video supervision strategies to further improve the recognition accuracy.However, the dualstream method can not effectively utilize the key spatiotemporal information of video, and it ignores the information difference of different channels when extracting video features.In order to obtain the information of saliency regions in the video, references [11,12] used object detection or posture estimation to extract multiple key regions or body parts in the video, and then input them into the network for behaviour recognition.However, object detection or posture estimation in advance will increase the overall calculation cost.Moreover, the results of detection and estimation can affect the performance of recognition. The behaviour recognition method based on attention mechanism [13] can automatically learn the key information in the video.Hu et al. [14] designed a channel attention network to model features from channels to highlight key channel information.Sharma et al. [15] proposed the spatial attention model to highlight the saliency areas in each frame.Du et al. [16] used the temporal attention model designed by recurrent neural network (RNN) to assign corresponding weights to different frames, which could effectively utilize the key frames of the video.Yang et al. [17] used bidirectional LSTM to design a spatio-temporal attention model.The above methods have the following deficiencies: a) The time attention model designed by RNN or LSTM has many parameters.RNN has a fixed serial structure, so video frames must be processed in accordance with the sequence of time, and the recognition efficiency is low. b) When extracting spatial saliency information, it will lead to the problem of inaccurate information of the extracted regions using only one spatial attention model to extract multiple behaviour regions of a frame. To solve the above problems, this paper proposes a new students behaviour recognition method based on spatiotemporal attention fusion model.The main contributions of this paper are as follows. 
1) The channel attention is integrated into the spatiotemporal network, and the channel information of the features is recalibrated while considering the spatiotemporal features, which enhances the expression ability of the features. 2) Attention model based on CNN is proposed to focus on the frame with a strong understanding on the temporal domain.Compared with the temporal attention of RNN model, this model calculates the attention score of each frame in the temporal dimension of the video.The model has fewer parameters and the calculation cost is small.It can realize the parallel operation of multiple frames and improve the overall operation efficiency. 3) A multi-spatial attention model is proposed to learn the weight of each frame from different angles by using multiple models to obtain multiple discriminant behaviour regions, which reduces the interference of background information. 4) The temporal and spatial features are fused to further enhance the feature representation of the video.Experiment results on UCF101, HMDB51 datasets and eight typical classroom behaviors of students show that the proposed model is an end-to-end and efficient behaviour recognition model. Spatio-temporal attention mechanism for behaviour recognition The video can be regarded as a combination of spatial and temporal.In spatial, RGB images contain the appearance information about the scenes and objects.In temporal, the optical flow image includes the behaviour information of the object.In this paper, the appearance flow with RGB image and the behaviour flow with optical flow image are used as the design basis.A new behaviour recognition model is proposed to enhance the feature representation, distinguish the features of different channels, and focus on the multiple saliency areas of behaviour in the frames with strong discriminant power, so as to realize the behaviour recognition.The overall structure of the proposed recognition model is shown in figure 1.In order to obtain appropriate input fragments, the new model performs sparse sampling on the video.The implementation method is as follows: dividing the video into N segments at equal intervals, sampling one frame randomly for each segment, and inputting the RGB image and optical flow image into the spatio-temporal network. 
SE-BN-Inception module

Multi-channel feature vectors are generated when the features of video frames are extracted with convolutional networks. Each channel of the vector describes the current frame in a specific way, and different channels carry information of differing importance. Previous deep learning-based feature extraction methods ignored these differences between channels, which limits the representational power of the features. A channel attention mechanism can learn the importance of each feature channel, strengthen the channels that are useful for the current recognition task according to that importance, and suppress the channels with weak discriminative power. This paper introduces the channel attention network SE-net (Squeeze-and-Excitation network) into BN-Inception [18]; the resulting SE-BN-Inception module recalibrates the information of the different channels and enhances the expressive power of the video features. SE-net is shown in figure 2.

Spatio-temporal attention module

The spatio-temporal attention module is composed of a CNN-based temporal attention model [19], a multi-spatial attention model and the fusion of spatio-temporal features. The temporal attention model and the multi-spatial attention model focus on key frames and on multiple salient behaviour regions along the temporal and spatial dimensions of the video, respectively. The fusion of spatio-temporal features effectively combines the extracted key spatio-temporal information, further enhances the feature representation of the video, and improves the accuracy of behaviour recognition.

CNN-based temporal attention model

Behaviour is a process of constant change. Different frames of a video contribute differently to behaviour recognition, so the frames with rich information and obvious behaviour changes should be selected for classification. A temporal attention model can give more attention to such key frames. However, previous temporal attention models are designed and implemented with RNNs, which have many network parameters and a complex structure and cannot be parallelized over time. To solve this problem, this paper proposes a temporal attention model based on a CNN, which uses the CNN to generate an attention score for each frame; the score determines the importance of each frame of the video for behaviour recognition, so the model selectively focuses on the key frames and the video feature representation is further enhanced in the temporal dimension. The temporal attention model designed in this paper not only has fewer parameters and a simpler structure, but can also compute the attention scores of all frames in parallel, making full use of GPU hardware. The CNN-based temporal attention model is shown in figure 3. In the notation used here, N represents the number of frames selected from the video, C the feature dimension, and H × W the number of grid cells of the feature map. The feature vector x_i of the i-th frame is first linearly mapped through a fully connected layer to give the mapped feature x̂_i; the linear mapping uses the same parameters for every frame of a video, as shown in equation (1), where conv represents the convolution operation and w_1 and b_1 are the learning parameters of the model. The feature dimension of the whole video is then reduced to 1×N through a convolution layer of size 1×1, and a softmax function along the temporal dimension of the video frames yields the temporal attention score α_i^t of each frame, which reflects the contribution of the i-th frame to the recognition. After the attention score α_i^t of the i-th frame is obtained, the temporal feature of that frame is obtained by multiplying it with the frame features, and the temporal features of all frames are summed to give the temporal feature f_t of the whole video.
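The following sketch illustrates the kind of CNN-based temporal attention just described: a 1×1 convolution produces a score per frame, a softmax along the time axis turns the scores into weights α_i^t, and the weighted sum of the frame features gives the video-level temporal feature f_t. The layer sizes and the omission of the initial fully connected mapping are simplifications, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TemporalAttention(nn.Module):
    """Minimal CNN-based temporal attention: score each frame with a 1x1
    convolution, normalize the scores over the time dimension with softmax,
    and return the attention-weighted sum of the frame features."""
    def __init__(self, feat_dim: int):
        super().__init__()
        self.score = nn.Conv1d(feat_dim, 1, kernel_size=1)  # one score per frame

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, N frames, C) frame-level features, e.g. pooled backbone output
        s = self.score(x.transpose(1, 2))      # (batch, 1, N) raw attention scores
        alpha = F.softmax(s, dim=-1)           # normalize along the time dimension
        f_t = torch.bmm(alpha, x).squeeze(1)   # (batch, C) temporal feature f_t
        return f_t

# Example with 6 sampled frames of 1024-d features.
f_t = TemporalAttention(feat_dim=1024)(torch.randn(2, 6, 1024))
```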
Multi-spatial attention model

A video consists of sequential images, and each frame can be divided spatially into regions with salient behaviour and other regions. For behaviour recognition, the salient behaviour regions are usually the moving parts of the human body and the positions of the moving objects; for example, the behaviour of drinking water can be recognized accurately from the features of the arm, the head area and the cup. The focus should therefore be placed on the areas of each frame in which the behaviour is significant. Object detection [20], pose estimation [21] and similar methods are commonly used to extract such key-region information for behaviour recognition, but they entail a large workload and a complex implementation. A spatial attention mechanism can address these problems; however, in references [22,23] only one spatial attention model is used to extract the information of the different salient regions, and some of the extracted regions are inaccurate. To extract the spatial information of the different regions of a frame that interact with the behaviour more accurately, this paper proposes a multi-spatial attention model, whose structure is shown in figure 4. In the model, w_2, w_3, b_2 and b_3 are the learning parameters of the network, the convolution kernel of the second convolution layer is 5×5 with a stride of 1, and l denotes the number of spatial attention models, as used in formula (5). Since l spatial attention models are used, l spatial features can be extracted per frame; the j-th spatial feature over the selected frames of a video is summed to obtain the j-th spatial feature f_s^j of the whole video.

Spatio-temporal feature fusion

Spatio-temporal feature fusion judges the category of human behaviour by combining the temporal and spatial features extracted from the video. The fused spatio-temporal features represent how the salient behaviour areas of the key frames change, which further strengthens the expressive power of the features and allows more accurate recognition. For example, when golf is being played, the frames with an obvious swing receive more attention from the temporal attention model, while the spatial attention models extract key areas such as the arm, the golf club and the ball; fusing the spatial and temporal features therefore focuses on several salient motion areas in the frames with obvious swing action, so the behaviour can be recognized more reliably. The fusion of the features is shown in figure 5. For each video, l spatial features f_s^j and one temporal feature f_t are obtained. First, each spatial feature is mapped onto the temporal feature: l features F_l are obtained by adding each spatial feature f_s^j of the video to the temporal feature f_t. These l features are then concatenated to give the spatio-temporal feature F of the video, where concate denotes the concatenation operation.
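A minimal sketch of the fusion rule just described (tensor shapes are illustrative, not the paper's): each of the l spatial features f_s^j is added to the temporal feature f_t, and the l sums are concatenated into the spatio-temporal feature F.

```python
import torch

def fuse_spatio_temporal(spatial_feats: list[torch.Tensor],
                         temporal_feat: torch.Tensor) -> torch.Tensor:
    """Add each of the l spatial features to the temporal feature (F_j = f_s^j + f_t)
    and concatenate the l sums into one spatio-temporal descriptor per video."""
    fused = [f_s + temporal_feat for f_s in spatial_feats]   # l features F_j
    return torch.cat(fused, dim=-1)                          # concatenation -> F

# Example with l = 4 spatial attention models and 1024-d features.
f_t = torch.randn(2, 1024)
F_video = fuse_spatio_temporal([torch.randn(2, 1024) for _ in range(4)], f_t)
print(F_video.shape)  # torch.Size([2, 4096])
```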
Experimental data sets and evaluation criteria

The data sets used in this paper are the two publicly available video data sets UCF101 and HMDB51 [24], together with recordings of real classroom behaviour. The UCF101 data set contains 101 behaviours and 13320 videos. It is highly diverse in terms of camera motion, object appearance and motion, pose change and background change, and its action categories fall into five groups: human-object interaction, body movement, person-to-person interaction, playing musical instruments and sports; the data set also suffers from large intra-class differences and small inter-class differences. The HMDB51 data set contains 6676 videos and 51 types of actions. Its samples come mainly from public sources such as movies, YouTube and Google video, and many of the videos are of poor quality. Behaviour recognition on these two data sets is therefore challenging. For both data sets this paper adopts the official division: each data set is divided into three splits, with 70% of the videos used for training and 30% for testing.

In this paper, 60 students majoring in software engineering (2020 cohort) at one university are selected as the research objects. The two courses involved are "Fundamentals of Programming" and "Data Structure", and two complete lectures are recorded for each course. The analysis algorithm takes this video data as its input; the cameras use the PAL television broadcast standard at 25 f/s (frames per second). There are four classroom teaching videos, each lasting 50 minutes; one classroom teaching video of each course is used for training and the other for testing. According to the various manifestations of students' classroom behaviour, we focus on the basic behaviour categories that reflect students' basic states and make up more complex learning activities. In this study, eight classroom behaviours are recognized and analyzed: concentration, interaction, bowing their heads, playing with mobile phones, sleeping, reading, writing and mind wandering. To evaluate the performance of the proposed algorithm, the training and testing sets must be annotated, and the coding of the four videos is completed manually.

In this paper, top-1 recognition accuracy is adopted as the evaluation standard. The recognition accuracy on each data set is obtained as the weighted average of the action recognition accuracy over its three splits.

Experimental Analysis

In this paper, the behaviour recognition performance for different numbers of video segments, different numbers of spatial attention models and different fusion weights is compared. The effect of the channel attention network on recognition performance is then analyzed experimentally. Finally, the effectiveness of the proposed method is assessed by comparing it with state-of-the-art methods.

Performance analysis of behaviour recognition with different numbers of video segments

In this paper, sparse sampling is used to select frames from each video as the input to the network. To analyze the influence of the number of video segments on recognition performance, a comparative experiment is carried out on the first split of the HMDB51 data set: 3, 4, 5 and 6 segments are sparsely sampled from each video, and the results obtained on the appearance flow are shown in figure 6. The results show that recognition accuracy increases with the number of video segments and is highest with 6 segments, because the network can learn more information from a larger number of samples. As can also be seen from figure 6, once the number of segments exceeds 5 the rise in accuracy slows down, and more segments could not be tested because of limited GPU memory. Each video is therefore divided into six segments in the subsequent experiments.
Performance analysis of behaviour recognition with different numbers of spatial attention models

The multi-spatial attention model proposed in this paper extracts multiple salient behaviour regions for recognition; as the number of spatial attention models increases, the number of extracted salient behaviour regions also increases. To analyze the impact of the number of spatial attention models on recognition performance, a comparative experiment is carried out on the first split of HMDB51, with the results shown in figure 7. When the number of spatial attention models is less than 4, the recognition accuracy gradually improves as the number increases; with 4 models the recognition performance is best, and with 5 models the recognition rate decreases. Because of limited GPU memory, the experiment cannot be run with more than 5 spatial attention models. Four spatial attention models are therefore adopted in the subsequent experiments.

Performance analysis of behaviour recognition with different fusion weights

The influence of different fusion weights for the appearance flow and the motion flow on recognition performance is analyzed experimentally, and the results are shown in table 1. The recognition accuracy of the motion flow alone is higher than that of the appearance flow alone, and the fused streams outperform either single stream. The best recognition results are obtained when the appearance flow and the motion flow are combined with weights of 1/4 and 3/4, so a fusion weight of 1:3 between the appearance flow and the motion flow is used in the subsequent experiments.
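For the comparisons that follow, the two streams are fused with the 1:3 appearance/motion weighting selected above. The sketch below applies that weighting; whether raw scores or softmax probabilities are averaged is not stated in the text, so probabilities are assumed here purely for illustration.

```python
import torch
import torch.nn.functional as F

def fuse_streams(appearance_logits: torch.Tensor,
                 motion_logits: torch.Tensor,
                 w_app: float = 0.25, w_mot: float = 0.75) -> torch.Tensor:
    """Late fusion of the appearance and motion streams with 1:3 weights,
    averaging class probabilities and returning the predicted class index."""
    p = w_app * F.softmax(appearance_logits, dim=-1) + \
        w_mot * F.softmax(motion_logits, dim=-1)
    return p.argmax(dim=-1)

# Example: logits for 2 videos over 101 classes from each stream.
pred = fuse_streams(torch.randn(2, 101), torch.randn(2, 101))
```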
Comparison with state-of-the-art behaviour recognition methods

To further verify the proposed method, we compare it with several classical behaviour recognition methods; the results are shown in table 4. Compared with the traditional method IDT [25], the proposed method achieves higher recognition accuracy, indicating that the proposed spatio-temporal attention model effectively extracts the key spatio-temporal information in the video and improves behaviour recognition, while its end-to-end structure keeps the computation simple. Compared with the dual-stream model [7] and the temporal segment network (TSN) [10], the proposed method improves the recognition accuracy by 3.2% and 0.8% on the UCF101 data set and by 6.5% and 2.5% on the HMDB51 data set, respectively, which shows that the spatio-temporal attention model extracts more behaviour features from the key frames and that this information allows the behaviours in the video to be recognized more accurately. Compared with TDD [26], the deeply trained C3D network [27], the spatio-temporal residual model ST-ResNet [28], the spatio-temporal pyramid model [29], ARTNet [30] and TSM [31], the proposed method also achieves a better recognition effect. The proposed method takes the dual-flow features and the recalibration of channel features into account, which highlights the key channel information; the proposed spatio-temporal attention model fully mines the key spatio-temporal information of the video, obtains video features with enhanced expressive power, and establishes a comprehensive description of the behaviour.

Comparison with behaviour recognition methods using attention mechanisms

To verify the validity of the spatio-temporal attention model proposed in this paper, the proposed algorithm without SE-net is compared with other attention-based behaviour recognition methods; the results are shown in table 5. The proposed method achieves higher accuracy. Compared with the temporal attention model [32] generated by the RNN method, the accuracy of the proposed algorithm without SE-net on the HMDB51 data set improves by 6.3%. This is because temporal attention alone only extracts the key frames, whereas the proposed method also attends to the salient motion areas in the spatial dimension; combining temporal and spatial information therefore effectively improves recognition accuracy. The recognition effect of the proposed algorithm without SE-net is also better than that of RSTAN [16] and ISTPAN [33], which indicates that, with the same backbone, the spatio-temporal attention model proposed in this paper is simple in structure yet effectively extracts the key spatio-temporal information of the video. Compared with attention cluster [34], Bi-LSTM attention [17] and R-STAN [35], the proposed algorithm without SE-net again performs better. References [34,17,35] all use ResNet as the backbone, and ResNet generally performs better than BN-Inception; this paper uses BN-Inception as the backbone and still obtains a good recognition effect, which shows that the proposed spatio-temporal attention model compensates for the weaker backbone, accurately extracts the key spatio-temporal information in the video, and improves the recognition accuracy. After SE-net is added, the recognition accuracy of the proposed method on the three data sets improves further, indicating that calibrating the feature-channel information with the channel attention network improves behaviour recognition performance.

Conclusion

Traditional behaviour recognition methods ignore the differences in channel information and cannot distinguish redundant frames, background and so on, which results in poor feature expression and low recognition rates. To help improve the efficiency of students in class, this paper proposes a new student behaviour recognition method based on a spatio-temporal attention fusion model. Channel attention is first integrated into the spatio-temporal structure, and the channel information is calibrated by modelling the channel features to improve the expressive power of the features in the videos. A CNN-based temporal attention model and a multi-spatial attention model are presented to focus on multiple salient behaviour areas in the key frames, further enhancing the feature representation of the video. Comparison experiments are carried out on the UCF101 and HMDB51 data sets and on real classroom behaviours; compared with the advanced methods, the proposed method achieves higher recognition accuracy. In the future, we will apply more advanced deep learning methods to student behaviour recognition.
Figure 1. Structure of the proposed spatio-temporal network.

Figure 2. Structure of SE-net and SE-BN-Inception. (a) SE-net: the input features are first pooled globally along the channel dimension; the dependencies between the channels are then modelled by two fully connected layers, the first of which reduces the input channel dimension to 1/16 to save computation and is followed by a ReLU activation for added nonlinearity, while the second restores the original channel dimension. Normalized weights are obtained with a sigmoid function and are finally applied to the features of each channel through a feature recalibration operation. (b) SE-BN-Inception consists of nine Inception blocks, with an SE-net added after each one; because the output of a fully connected layer is not sensitive enough to space and position, whereas the output of a convolution layer preserves the spatial structure of the image to a certain extent, BN-Inception is retained up to its last convolution layer. In the normalization used in the network, v_i and u_i are the input and output signals, α and β are trainable parameters, and m and var represent the mean and variance.

Figure 3. Temporal attention model based on CNN.
The experiments are performed on a GPU with PyTorch, and the backbone used in this article is BN-Inception. The BN-Inception model is an upgraded version of the GoogleNet model with a good balance between accuracy and efficiency. The network is initialized with model parameters pre-trained on the ImageNet data set. To keep the optical flow data consistent with the RGB data, the TV-L1 algorithm is first used to compute the optical flow, which is then quantized to [0,255] by a linear transformation.

a) Training stage. The input frames are first resized to 240×320, and crops of 224×224 are then obtained with fixed corner cropping and horizontal flipping. A dropout layer is added before the fully connected layer of the classification network, with dropout values of 0.8 and 0.7 for the appearance flow and the behaviour flow, respectively. The parameters are optimized with mini-batch stochastic gradient descent with a batch size of 32, a weight decay coefficient of 0.0005 and a momentum of 0.9. The appearance flow starts with a learning rate of 0.001, which is reduced to 1/10 of its value after 30 and 60 epochs, for a total of 80 training epochs. The behaviour flow also starts with a learning rate of 0.001, which is reduced to 1/10 after 190 and 300 epochs, for a total of 340 training epochs.

b) Test stage. 25 frames are selected from each sample by uniform sampling. Each frame is augmented by cropping and flipping to obtain 10 test samples, and the classification result is obtained by averaging the output category probabilities of the 10 samples.

Figure 6. Comparison of recognition accuracy with different numbers of video segments.

Figure 7. Comparison of recognition accuracy with different numbers of spatial attention models.

Table 2. Comparison of recognition accuracy between TSN and TSN+SE-net on UCF101 and HMDB51.

Table 3. Comparison of recognition accuracy between TSN and TSN+SE-net on real classroom behaviour.

Table 4. Comparison of average recognition accuracy with other methods/%.

Table 5. Comparison of average recognition accuracy with attention-based methods/%.
Infrared Lightwave Memory-Resident Manipulation and Absorption Based on Spatial Electromagnetic Wavefield Excitation and Resonant Accumulation by GdFe-Based Nanocavity-Shaped Metasurfaces

An arrayed nanocavity-shaped architecture consisting of the key GdFe film and a SiO2 dielectric layer is constructed, leading to an efficient infrared (IR) absorption metasurface. By carefully designing and optimizing the film-system configuration and the surface layout with the needed geometry, a desirable IR radiation absorption according to the spatial magnetic plasmon modes is realized experimentally. The simulations and measurements demonstrate that GdFe-based nanocavity-shaped metasurfaces can be used to achieve an average IR absorption of ~81% in a wide wavelength range of 3–14 μm. A type of patterned GdFe-based nanocavity-shaped metasurface is further proposed for exciting relatively strong spatial electromagnetic wavefields confined by a patterned nanocavity array, based on the joint action of the surface oscillated net charges over the charged metallic films and the surface conductive currents, including equivalent eddy currents, surrounding the layered GdFe and SiO2 materials. The intensive IR absorption can be attributed to spatial electromagnetic wavefield excitation and resonant accumulation, or memory residence, in the GdFe-based nanocavity-shaped array formed. Our research provides a potential route for efficiently responding to, manipulating and storing incident IR radiation, mainly based on the excitation and resonant accumulation of spatial magnetic plasmons.

Introduction

With the rapid development of current micro-nano-techniques, many types of metasurfaces have already been constructed from arrays integrating basic nano-architectured elements such as nanoholes, nanocylinders, nanostripes [1] or nanodisks [2], as well as from patterned nano-composites of common ferroelectric or ferromagnetic materials involving some noble metals. Generally, several functional metasurfaces still need to be configured from layered semiconductive or dielectric films based on surface or spatial plasmon excitation, oriented transportation and localized re-arrangement or resonant residency, according to the featured electromagnetic wavefield modes induced by incident lightbeams on the basis of the essential energy [3] and momentum conservation mechanisms [4]. As shown, the resonant generation and ordered distribution of plasmon polaritons can be selectively manipulated by particularly configuring or carefully modifying the micro-nano-layout with a suitable choice of parameters [5], such as the radius and depth of typical nanoholes or nanocylinders and their arrangement period. So, an arrayed micro-nano-architecture can be particularly configured with an intermediate film or film system, leading to a type of resonant cavity that holds more electromagnetic wavefields than can be attached to a single surface or interface [6]. According to traditional concepts, the magnetic field component of lightwaves can be efficiently compressed into a relatively large space or into a magnetic medium pipeline or cavity, whereas the electric field component should be inductively constrained to a two-dimensional facet based on surface "free electron" excitation, transportation and arrangement according to the surface plasmon modes [7].
As demonstrated, many ferromagnetic materials can exhibit a relatively powerful magnetic response induced by a weak, guided magnetic field such as the oscillating magnetic component [8] of incident lightwaves [9]. Considering that the magnetic energy state is commonly far lower than the electric field of lightwaves over a wide wavelength range, including visible light and infrared (IR) radiation, an accumulating enhancement of the magnetic fields in a sealed or semi-opened shallow cavity with a suitable geometry can be achieved more easily, or further efficiently manipulated, relative to the electric field component coupled closely with them. It should be noted that resonant magnetic field stockpiling can be realized more effectively by a typical micro-nano-cavity according to modern optoelectronic technologies [10]. So, a recent research hotspot has focused on how to construct novel nano-architectures that generate strong magnetic plasmon resonance so as to greatly enhance the collecting efficiency of incident lightwaves, which also means a remarkable improvement in the ability to capture incident radiation. As shown, incident lightwaves can be used to stimulate strong magnetic plasmon oscillations, similar to traditional waveguide modes [11], in a typical layered magnetic film system [12]. Due to the concurrent presence of the excited surface plasmons and a strong magnetic response leading to a tremendous enhancement of the magnetic induction intensity in a particular magnetic medium cavity [13], the coupled electric field strength in the same functional structures can also be significantly amplified [14], enabling incident light energy to be collected more efficiently [15,16].

Considering that the IR radiation absorption is basically based on the magnetic nanocavity of the metasurface developed by us, common GdFe, a typical soft magnetic material, has been selected as the main functional material for effectively responding to rapid variations in the incident lightwaves. In general, optical metasurfaces based on the specific construction of a metal-insulator-metal architecture can be used to achieve effective narrow- or broadband radiation absorption; a broadband-absorbing metasurface, however, may play an important role in energy harvesting applications, which leads to a multilayer stacking approach. So, the kind of metasurface based on the GdFe nanocavity presented in this paper is formed by vertically cascading two GdFe-SiO2 nanocavities. GdFe films of 100 nm and 50 nm as well as SiO2 films of 900 nm and 500 nm are layered and configured over a 10 nm Ag film in the nanocavities. As an IR memory-resident absorption metasurface (IMAM), it can potentially be employed to capture incident lightwaves over a wavelength range far wider than the visible, for instance realizing full-spectrum solar power generation according to the radiation characteristics of the sun, achieving full-time photovoltaic generation by receiving the IR radiation of the earth [17], which acts as another "sun" illuminating all objects on the earth at night, or even being applied in the IR stealth [18] field [19]. At first, the main characteristics of the IMAM developed by us are simulated for efficiently performing surface plasmon excitation, short-range transportation and localized resonance enhancement. A type of nanocavity-shaped metasurface is then designed and devised for a strong resonant accumulation of the spatial magnetic fields that are also closely
correlated with a strong spatial electric field distribution; this can be viewed as a memory-resident radiation absorption mechanism apparently different from conventional photothermal absorption. Finally, the key IR absorption characteristics of IMAMs with different facial configurations are evaluated experimentally. The current research indicates that the IR response of IMAMs can be improved remarkably by carefully tuning the morphology of the electromagnetic wavefields existing in a patterned micro-nano-architecture. At present, a strong spectral IR absorption of nearly 100% is already realized in a wide wavelength range of 3-14 µm, simply through the layered stacking of the GdFe film and the SiO2 layer, leading to an arrayed dual nanocavity-shaped metasurface that is fully filled, or memorably resided in, by the excited electromagnetic plasmons. The joint action of the surface oscillated net charges over the charged metallic films and the surface conductive currents, including the equivalent eddy currents surrounding the layered GdFe and SiO2 materials, is also available to interpret the IR absorption of the patterned GdFe-based nanocavity-shaped metasurface.

Materials and Methods

The GdFe-based nanocavity-shaped metasurfaces proposed by us are basically constructed by alternately depositing a GdFe film and a SiO2 dielectric layer, as shown in Figure 1. The transmission coefficient of the proposed metasurfaces can be expressed as S21 = [sin(nkd) − (i/2)(Z + 1/Z)cos(nkd)]e^{ikd}, where d is the thickness of the metasurface, n = n1 + in2 denotes the complex refractive index of the metasurface, Z its impedance, and k the wave-vector of the incident lightwave. When the substrate is thick enough, the transmission coefficient approaches 0. By finely adjusting the key parameters, including the thickness and the material composition, several factors such as the phase, the amplitude and the transmissivity of the incident IR light can be precisely manipulated. As is known, a periodic overlapping of the medium films stacked between two metallic films will result in a functional dual nanocavity-shaped film system that sufficiently suppresses or even completely eliminates [20] the reflection loss of incident electromagnetic wavefields [21] through phase cancellation [22], such as typical destructive beam interference.
As shown in Figure 1, a dual nanocavity-shaped metasurface is formed by vertically cascading two basic GdFe-SiO2 nanocavities. The upper nanocavity consists of a top and a bottom GdFe film with an intermediate SiO2 layer; the lower one consists of a top GdFe film (which is also the bottom GdFe film of the upper nanocavity), an intermediate SiO2 layer and a Ag film over an n-type Si wafer, all fabricated by a traditional film-system growth flow. A cross-sectional view of the fabricated IMAM sample is shown in the SEM photograph in Figure 1a. The film-system parameters are expressed by the set of film thicknesses {h1-h4, hs}, where the depth of the upper nanocavity is h2 + h3 + h4 and that of the lower one is h1 + h2. The yellow IR lightwaves, including typical perpendicular and inclined beams, are incident upon the surface of the constructed metasurface. Generally, the GdFe material can be intensively magnetized by a guiding magnetic field, similar to a common solenoid with a ferromagnetic core, so as to present a total magnetic induction intensity of B + BM, which is usually far greater than the guiding magnetic induction intensity B colored in red.

Based on this general mechanism, the incident magnetic induction lines slanted upon the surface of a magnetic material will almost completely enter the magnetic medium along its internal surface, because of the tremendous difference between the magnetic medium and its surroundings, as shown in Figure 1b. So, almost the entire magnetic field component of the incident beams can be guided into a magnetic surface or interface, generally presenting an enhancement of more than 3 orders of magnitude. This effect also implies a transient surface electric field response of the magnetic medium, because of the close coupling between the transient electric field and magnetic field components according to the traditional Maxwell electromagnetic relations, although a certain phase retardation, up to a maximum of π, i.e., an antiphase state, is usually exhibited between them.
According to the layout design of the IMAM, a relatively strong magnetic induction intensity will be generated in the top and intermediate GdFe films by the guiding magnetic field component of the incident lightwaves that has already crossed the incident surface and the lower interfaces, so as to effectively re-orient the micromagnetic domains in the GdFe films. It can be expected that surface plasmons will also be excited, mainly over the top surface of the first GdFe film, simultaneously resulting in surface "free electron" oscillation, transportation and redistribution according to the surface "free electron" density wave. This action is further transferred onto the next metallic film to continuously arouse the surface induction electric current fields, together with the inevitable Joule loss of the IR radiation within a specific wavelength range [23]. A detailed analysis and evaluation of the memory-resident, or resonant, IR absorption characteristics of the IMAM follows.

A symbiotic architecture for resonantly forming a set of layered spatial magnetic fields distributed in each functional film is shown in Figure 2. A basic configuration based on GdFe, SiO2 and Ag materials, leading to a nanocavity-shaped scheme for responding to the IR radiation under transverse electric field (TE) and transverse magnetic field (TM) incidence, is shown in Figure 2a. The orientations of both the TE and TM waves are parallel to the surface of the top magnetic film, while the wave-vector k is perpendicular to it. As shown in Figure 2b, the time-varying TM component, labeled by the black arrows exiting the paper, penetrates the film system. The other magnetic field components, labeled by the two types of arrows entering and exiting the paper, are stimulated simultaneously by the surface electric current fields: the black surface current J1 of the transient surface plasmons excited by the incident radiation, two similar yellow eddy currents J2 (J2U, J2B) and J3 (J3U, J3B) over the top and bottom endfaces of a single GdFe film, and a similar brown eddy current J4 (J4U, J4B) over the two endfaces of the bottom Ag film, where (J2U, J2B), (J3U, J3B) and (J4U, J4B) denote the currents distributed over the upper and lower endfaces of the respective films. An initially invariant H of the incident IR beams thus penetrates the film system in the nanocavity and guides the formation of the spatial magnetic field in each film. By overlapping the black penetrating magnetic fields with the layered magnetic field components, labeled by the two types of red arrows entering and exiting the paper and excited by the surface currents and the similar eddy currents, as the sequence {(J1)@B/BS/BG3/BA + (J2U,J3B∪J4U)@BG2/BS/BG3 + (J2B,J3U)@BS + (J2B,J4B)@BS/BG3/BA + (J4U,J4B)@BA}, a net spatial magnetic field can be constructed in the dual nanocavity according to a spatial magnetic plasmon resonance (MPR) mode. Since the IR loss based on the MPR is much smaller than that of the LSP formed mainly by the surface electric field resonance, a very strong magnetic field resonance can be induced mainly in the dual nanocavity, which also means a significant strengthening of the spatial electric fields closely associated with the magnetic fields, resulting in a spatial standing-wave enhancement, or a memory-resident absorption, of high-energy-state light fields.
Figure 2. A symbiotic architecture for generating the layered magnetic fields distributed in each functional film, labeled by two types of arrows entering and exiting the paper, together with the surface electric current fields: the black surface current J1 of the transient surface plasmons excited by the incident radiation, two similar yellow eddy currents J2 and J3 over the top and bottom endfaces of a single GdFe film, and the similar brown eddy current J4 over the two endfaces of the bottom Ag film. (a) A basic configuration based on GdFe, SiO2 and Ag films leading to a nanocavity-shaped architecture responding to the IR radiation under transverse electric field (TE) and transverse magnetic field (TM) incidence, where the time-varying TM component, labeled by the black arrows exiting the paper, penetrates the film system. (b) Overlapping of the black penetrating magnetic fields with the layered magnetic field components, labeled by two red arrows entering and exiting the paper, which are excited by the surface equivalent eddy currents above to form the net resonant magnetic fields according to the spatial magnetic plasmon modes.

Considering the relative magnetic permeability of the magnetic material utilized, the z = 0 plane is selected as the incident surface of the IR radiation. A basic wave equation, together with the interface relations in which σ and α denote the surface net charge density and the surface conductive current density, respectively, is used to describe the transport behaviour in the layered film system. In the situation without any surface conductive current over the GdFe and Ag films, continuity of Hx and Hy can be expected, and momentum conservation over the incident surface yields the standard relation between the perpendicular wave-vector components and the permittivities of the two media (see the expressions following this paragraph). To obtain guided wavefield modes confined near the interface, the wave-vectors perpendicular to the interface of the metallic and dielectric materials must have opposite orientations, for example Re[k1] > 0 and Re[k2] > 0. By substituting the Hy component into the wave equation above, an asymptotic impedance matching condition can be acquired; for z > 0 the corresponding relation holds in the dielectric region, where both the electric field and magnetic field components can be expressed accordingly, and in the metal region a similar relation can be derived for the electric and magnetic field components. For non-magnetic media, the surface plasmon wave-vector k takes the standard interface form given below. Due to the radiation absorption occurring between the Si substrate and the top magnetic film, the nanocavity-shaped architecture already exhibits a unique capability of efficiently capturing and storing incident radiation. Because the permeability µ1 of the GdFe material is far greater than 1, while the permeability µ2 of non-magnetic materials is close to 1, a ratio µ1/µ2 > 1 can be expected; the wave-vector k obtained with such special magnetic materials will therefore be far larger than that obtained with non-magnetic materials only, resulting in a shorter wavelength of the generated surface plasmon.
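The interface relations referenced above are not reproduced in the extracted text. For completeness, the textbook forms for a single metal-dielectric interface with non-magnetic media are restated below as a reconstruction (ε1 and ε2 are the permittivities of the dielectric and the metal, k1 and k2 the field decay constants normal to the interface, and k0 the free-space wave-vector); they are the standard relations the passage appears to rely on, not the authors' own derivation.

\[
\frac{k_1}{\varepsilon_1} + \frac{k_2}{\varepsilon_2} = 0,
\qquad
k_{\mathrm{spp}} = k_0 \sqrt{\frac{\varepsilon_1 \varepsilon_2}{\varepsilon_1 + \varepsilon_2}} .
\]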
It is well known that the electric field and magnetic field components are tightly interrelated, coupled into an entire electromagnetic wavefield that transports through, or even memorably stays in, a specified spatial region. Although the magnetic field energy is lower than that of the electric field component, the lightwave energy state can still be greatly enhanced mainly by increasing or resonantly accumulating the magnetic fields, based on the intrinsic mechanism dominated by the traditional Maxwell electromagnetic relations. In other words, the light fields can also be precisely modulated or remarkably enhanced only through efficiently manipulating the magnetic field component, which may lead to a better process for manipulating lightwaves while reducing the system burden. As shown in Figure 2a, a thin Ag film in direct contact with the GdFe material pre-deposited over an n-type Si wafer also acts as a mirror that reflects the IR lightwaves incident upon its surface back into the nanocavity, so as to further enhance the spatial interference in the nanocavity and thus remarkably decrease the radiation transmission of the developed metasurface. Many conventional metallic materials such as Au, Ag or Cu [24] are suitable for fabricating the bottom reflector; Ag, with its weak diamagnetism, is selected here according to our mature technology.

Layered Magnetic Response Architecture

Typical simulations of the IR absorption of both the GdFe/Ag composite film and the GdFe-SiO2-Ag nanocavity-shaped metasurface are shown in Figure 3. The simulated IR absorption data are obtained directly by subtracting the IR reflectance and transmittance from the incident IR power. An obvious contrast in the average IR absorption level can be observed: a very low value of ~13% for the GdFe/Ag composite film versus a relatively high value of ~65% for the GdFe-SiO2-Ag nanocavity-shaped metasurface. As shown in Figure 3a, the simulated IR absorption spectra of the GdFe/Ag composite film configured with an optimal Ag thickness of 10 nm and different GdFe thicknesses of 50 nm, 80 nm and 100 nm present a similar trend, beginning with an oscillating descent over the wavelength range of 3-9 µm and settling to a relatively stable level of ~4% over 9-14 µm. The three absorption curves start from different initial values, in the order {50 nm-black} < {80 nm-red} < {100 nm-yellow}, and show an almost identical interval of ~10% at a wavelength of 3 µm. It follows that a suitable GdFe film thickness is 100 nm, to remarkably reduce the surface reflectance.

Next, a SiO2 dielectric layer of the required thickness is added to the structure shown in Figure 3a, and the absorption curves obtained by changing the SiO2 thickness from 900 nm to 1000 nm, 1100 nm and finally 1500 nm are shown in Figure 3b. The IR absorption increases significantly after a SiO2 dielectric layer is added between the GdFe and Ag films. Specifically, the average IR absorptivity reaches a relatively high value of 65.45% for 900 nm thick SiO2, while the average absorptivities for SiO2 thicknesses of {1000 nm, 1100 nm, 1500 nm} are {59.49%, 55.79%, 56.68%}, respectively. In addition, the absorption spectra of the monolayer electric-magnetic composite films decrease as a whole with further increases in SiO2 thickness, and the frequency points corresponding to the peaks of the absorption spectra show an obvious red-shift, while the amplitude decreases rapidly when the film thickness is increased from 900 nm to 1000 nm. It is worth pointing out that, for a SiO2 film thickness of 900 nm, there are two absorption peaks at wavelengths of ~10.46 µm and ~7.36 µm, with absorptions as high as 97.49% and close to 99.49%, respectively.
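A minimal sketch of the bookkeeping stated above, A(λ) = 1 − R(λ) − T(λ) and its average over the 3-14 µm band (the spectra used here are placeholders, not the simulated data):

```python
import numpy as np

def absorption(reflectance: np.ndarray, transmittance: np.ndarray) -> np.ndarray:
    """A(lambda) = 1 - R(lambda) - T(lambda): absorption obtained by removing
    the reflected and transmitted fractions from the incident power."""
    return 1.0 - reflectance - transmittance

wavelengths = np.linspace(3.0, 14.0, 111)   # um, illustrative grid over the band
R = np.full_like(wavelengths, 0.30)          # placeholder reflectance spectrum
T = np.full_like(wavelengths, 0.05)          # placeholder transmittance spectrum
A = absorption(R, T)
print(A.mean())                              # band-averaged absorptivity
```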
As the IR absorptivity of the metasurfaces decreases dramatically at 9 µm after the SiO2 layer is added, part of the SiO2 material is replaced by Si3N4, and the calculated absorptivity of the resulting metasurface is presented in Figure 4. Because the IR absorptivity of the metasurface at a wavelength of 9 µm recovers as the thickness of Si3N4 is gradually increased, the remarkable decrease in absorptivity at 9 µm should be caused by the optical properties of the SiO2 material utilized.

In order to further improve the IR absorptivity of the metasurface, the number of functional film layers in the layered configuration is gradually increased, starting from the single-layer magnetic composite structure based on GdFe material on a Si wafer. As shown in Figure 5a, the addition of the second SiO2 layer already significantly improves the IR absorptivity in the 3-7 µm band. The average IR absorptivity is ~64.61% when the thickness of the top SiO2 layer, indicated by h3, is 300 nm, 71.24% when h3 is 500 nm, 69.32% when h3 is 700 nm, and 69.16% when h3 is 700 nm. With increasing thickness, the IR absorption spectra of the magnetic metasurfaces exhibit an overall trend of first increasing and then decreasing; the increase is most significant when the film thickness is raised from 300 nm to 500 nm, and most of the peaks of the absorption spectra gradually red-shift as the film thickness increases. When h3 = 500 nm, three absorption peaks of the red curve appear at wavelengths of λ1 = 7.4 µm, λ2 = 9.61 µm and λ3 = 12.64 µm, with absorptions as high as ~99.77%, ~65.76% and ~97.73%, respectively. Finally, a dual-nanocavity architecture is constructed by further depositing a GdFe film with a thickness of 30 nm, 50 nm, 70 nm or 100 nm over the top of the SiO2 layer, which significantly reduces the spectral IR absorption of the metasurfaces, as demonstrated in Figure 5b.
A comparison of the simulated IR absorption characteristics of the basic structures, including the GdFe/Ag film, the GdFe-SiO2-Ag nanocavity, the SiO2/GdFe-SiO2-Ag nanocavity and the cascaded GdFe-SiO2-GdFe/GdFe-SiO2-Ag nanocavity-shaped metasurface, is given in Figure 6. In the wavelength range of 3-14 µm, the overall IR absorption of the SiO2/GdFe-SiO2-GdFe composite structure, represented by the green curve, increases to about 85% compared with ~13% for the GdFe/Ag film represented by the black curve. This indicates that an optical nanocavity-shaped structure remarkably enhances the IR absorption efficiency through spatial magnetic field coherence. The nanocavity shaped by the top and bottom magnetic film configuration first stimulates a surface "free electron" displacement current in a common surface plasmon mode over the incident surface of each magnetic film and further magnetizes the GdFe material intensively, which also means that a surface equivalent eddy current over the two endfaces of each magnetic film is generated effectively. A spatial magnetic field resonance, mainly restricted to the nanocavity, can then be generated by coupling the bound eddy currents over the surfaces of the magnetic films in direct contact with the SiO2 material.
Figure 4. Simulated absorption spectra (absorption versus wavelength in µm) for the SiO2/Si3N4 combinations 680 nm SiO2 + 220 nm Si3N4, 780 nm SiO2 + 120 nm Si3N4, and 900 nm SiO2.

So, a confined resonant enhancement of the spatial electromagnetic wavefields, mainly according to the constructive interference based on the layered configuration, allows strong IR absorption in a wide wavelength range of 3-14 µm. According to the spectral variance trend of the dual nanocavity-shaped metasurface based on the composite architecture of {GdFe/SiO2/GdFe} + {GdFe/SiO2/Ag}, an ideal spectral IR absorption of almost 100% can be observed in three wavebands, roughly indicated by the three featured wavelength points of 3.19 µm, 8.13 µm and 13.04 µm. The electric field and magnetic field components of the incident beams are therefore further simulated, as shown in Figure 7. The separated electric field and magnetic field components exist in different regions and seemingly present a half-wavelength, or π, phase retardation. In the relatively long wavelength region, roughly exceeding ~11.5 µm, the instantaneous electric field mainly distributes in the top nanocavity and roughly presents a variance trend from the selected maximum value of 1 down to the minimum value of 0.02, while the magnetic fields are mainly in the bottom nanocavity and exhibit the opposite trend, from an initial maximum value of 1.1 × 10^-4 at the bottom to a small value of roughly 0.002. In the intermediate waveband, a similar wavefield distribution can also be observed.
The analysis of the color block variance, as shown in Figure 7, reveals that the spatial magnetic field distribution in the metasurfaces is consistent with the distribution characteristics of common plasmon excitation, i.e., the magnetic field strength is largest in the metal film, and the magnetic field away from the metal film decays gradually in an exponential form. The electric field strength in the upper SiO2 structure increases gradually with increasing wavelength, and at the wavelength of ~13.04 µm the electric field inside the upper SiO2 structure is extremely strong, indicating that the nanocavity formed between the upper and lower magnetic films excites a surface plasmon over the surface of the magnetic film and thus generates an induced current oscillation on the upper and lower magnetic film surfaces of the nanocavity, respectively. Due to the displacement currents generated in the nanocavity and at the junction corresponding to the magnetic film, a strong induced magnetic moment is generated, which confines the incident light field within the layered composite structure, leading to strong IR absorption at the ~13.04 µm wavelength. Since the magnetic and electric fields form a standing wave inside the nanocavity, the absorptivity of the metasurface can reach a peak when the wavelength of the incident light is five times the length of the nanocavity.
For the simulated dual-nanocavity metasurface with a height of 1560 nm, for example, an absorptivity peak will present at ~7.8 µm, whereas the height of the top nanocavity is 650 nm, so an absorptivity peak will present at about 3.19 µm. The absorption of the metasurface therefore presents several obvious peaks near the wavelengths of 13.04 µm, 7.8 µm and 3.19 µm, which is consistent with those shown in Figure 6. So, the spatial distribution morphology above can be viewed as a basic intensity evolution mode (IEM) shaped by merging the electric field and magnetic field subpatterns. But in the short waveband, the spatial electromagnetic wavefields existing in the metasurface obviously present a layered character, which can be viewed as cascaded IEMs with different amplitudes according to an intensity sequence of {E-Top Nanocavity} > {E-Bottom Nanocavity} and {H-Top Nanocavity} > {H-Bottom Nanocavity}. Generally, the total energies obtained by integrating the electric field and magnetic field components distributed in the composite architecture above are almost the same.
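As a quick consistency check on the standing-wave rule quoted above (peak wavelength of roughly five times the nanocavity height), the predicted peaks can be computed directly; the snippet below simply applies that stated factor to the two cavity heights mentioned in the text.

```python
# Peak-wavelength estimate under the stated rule: lambda_peak ≈ 5 × cavity height.
cavity_heights_nm = {"full dual nanocavity": 1560, "top nanocavity": 650}
for name, height in cavity_heights_nm.items():
    print(f"{name}: predicted absorption peak near {5 * height / 1000:.2f} µm")
# full dual nanocavity -> 7.80 µm; top nanocavity -> 3.25 µm (close to the ~3.19 µm quoted)
```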
A schematic diagram of a basic GdFe/SiO2/Ag film system leading to a Si-based nanocavity-shaped metasurface is shown in Figure 8. The main technological process for preparing the key GdFe film involves two steps: magnetron sputtering (PVD) [25] and plasma-enhanced chemical vapor deposition (PECVD) [26-28], as shown in Figure 8a. Generally, the adhesion between SiO2 and Ag is relatively weak. In order to enhance their adhesion, a 5 nm thick Gr film is first sputtered as an intermediate adhesion layer before the magnetron sputtering of the Ag film with a 10 nm thickness. Then, a SiO2 dielectric layer is grown by PECVD, and a GdFe film is subsequently deposited by similar magnetron sputtering using a GdFe alloy target of 99.9% purity (Gd:Fe = 26:74, Φ76.2 × 3 mm). This obviously enhances the performance of the composite films; PECVD is then applied to deposit the corresponding thickness of SiO2, and finally PVD is applied again to complete the preparation of the uppermost GdFe layer. The magnetron sputtering coating equipment is a Sputter-Lesker-Lab18 (USTC Center for Micro- and Nanoscale Research and Fabrication), as shown in Figure 8b. The plasma-enhanced chemical vapor deposition coating equipment is an ICPPECVD-SENTECH-SI500 (USTC Center for Micro- and Nanoscale Research and Fabrication), as indicated in Figure 8c. The created samples are presented in Figure 8d.

Both the simulated and measured IR absorption characteristics of the Si-based nanocavity-shaped metasurface are obtained by directly removing the reflection (reflectance, R) and transmission (transmittance, T) from the incident radiation. The IR absorption characteristics of the samples are analyzed by carefully evaluating the variance in transmitted and reflected radiation relative to the incident light using a Nicolet iN10 Fourier transform infrared spectrometer (Huazhong University of Science and Technology Analytical and Testing Center), as shown in Figure 9a. The test results are shown in Figure 9b, where the blue curve indicates reflectance and the orange curve indicates transmittance. So, the absorptance is calculated as 1 - R - T, as presented in Figure 9c. The graph exhibits three distinct absorption peaks at the wavelength points of {~3.52 µm, ~8.09 µm, ~12.19 µm}, corresponding to absorptions of {~89.1%, ~98.63%, ~98.23%}.

The IR absorption of the GdFe film as a layered composite utilized by us generally exhibits polarization insensitivity due to the structural symmetry in the x- or y-direction mentioned above. The IR absorption behaviors under the two polarized TM and TE modes for incident angle θ varied in a range of 0-70° are further simulated, as illustrated in Figure 10. The spectral absorption graphs can be divided by a dotted line at ~30°, selected roughly according to the incident angle of the TE and TM components. When θ is less than 30°, the spectral absorption demonstrates a fairly uniform distribution with a normalized absorption of 1, indicated by the attached color scale, in the wavelength range of 3.19-14 µm. After exceeding the 30° line, the spectral absorption presents an oscillation trend based on a featured incident angle indicating the IR absorption shutoff as the wavelength gradually increases, which is demonstrated in Figure 10a for the TE mode and Figure 10b for the TM mode. Actually, the spectral absorption oscillation according to the incident angle already happens from the wavelength point of ~3.19 µm, demonstrated by a relatively low average absorption of 0.61. Both the TE and TM components of the incident IR beams still seemingly present a half-wavelength or π phase retard.
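Since the measured absorptance is obtained as 1 - R - T, the band average and the peak positions quoted above can be extracted from the exported FTIR spectra in a few lines. The sketch below is illustrative only: the file names are placeholders, the data are assumed to be wavelength-ascending two-column exports, and the peak-prominence threshold is an assumption rather than a value from the original analysis.

```python
import numpy as np
from scipy.signal import find_peaks

# Hypothetical FTIR exports: two-column text files of (wavelength in µm, value).
wl, R = np.loadtxt("reflectance.txt", unpack=True)    # placeholder file name
_,  T = np.loadtxt("transmittance.txt", unpack=True)  # placeholder file name

A = 1.0 - R - T                                       # absorptance, as defined in the text
band = (wl >= 3.0) & (wl <= 14.0)
avg_A = np.trapz(A[band], wl[band]) / (wl[band].max() - wl[band].min())
peaks, _ = find_peaks(A[band], prominence=0.05)       # distinct absorption peaks
print(f"average absorption over 3-14 um: {avg_A:.1%}")
print("peak wavelengths (um):", wl[band][peaks])
```

The same routine applies equally to simulated spectra, so measured and simulated curves can be compared on the same footing.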
As shown, a type of bidirectional metasurface with broadband and narrowband radiation absorption on the top and bottom surfaces, based on alternately overlapping dielectric layers and metal films [29], has been proposed by Wang et al. When the lightbeams are incident upon the metal layer in the +z direction, the metasurface acts as a narrowband absorber and achieves 99.9% absorption at 771 nm; when incident upon the dielectric layer in the −z direction, it acts as a broadband absorber, achieving a stable absorption of more than 90% in a relatively wide wavelength range from 500 nm to 1450 nm. Compared with their work, the double-layer coupled magnetic nanocavity-shaped metasurfaces proposed in this article already present excellent absorption performance in a wider wavelength range of 3-14 µm.

Patterned GdFe-Based Nanocavity-Shaped Metasurface Considering that the spatial magnetic fields existing in the nanocavities mentioned above are generated mainly by the equivalent eddy currents surrounding each SiO2 layer between two adjacent GdFe films, as shown in Figure 2b, an arrayed GdFe micro-diamond cap, shaped by patterned segmenting of an entire GdFe film and leading to a new IMAM architecture, is further proposed. The spatial magnetic fields can be continuously enhanced by time-varying electric fields originating from the net charge couple induced and redistributed locally over the GdFe and Ag films, respectively. Generally, the magnetic induction intensity B can be tremendously increased around a single pointed top of a basic or element magnetic micro-nano-structure design, and a very high tip density of both the positive and negative net charges compressed towards two opposite tips can be expected. So, the above factors for remarkably enhancing the spatial electromagnetic wavefields point to an ideal prospect of further regulating the incident IR radiation through the contribution of the patterned GdFe-based nanocavity-shaped architecture by sufficiently generating and then strengthening spatial time-varying electric fields.
So, a thin Cu film of 10 nm thickness is attached over the backside of the top GdFe film with 100 nm thickness, which is fabricated by a conventional technological process. A GdFe/Cu micro-diamond array is shaped by maintaining an effective wire connection between adjacent GdFe/Cu micro-diamonds along the x-direction and further connecting each cluster of the GdFe/Cu micro-diamonds over two terminals along the y-direction. A new type of GdFe-based nanocavity-shaped metasurface based on an arrayed GdFe/Cu micro-diamond cap is shown in Figure 11. The layout of the GdFe/Cu micro-diamond cap array over a SiO2 dielectric layer is shown in Figure 11a, and a basic or element micro-diamond cap is also shown with key structural parameters, including the periods Px = 2.2 µm and Py = 1.4 µm and long and short diagonal lengths of 2 µm and 760 nm. The adjacent micro-diamond caps are connected by a rectangular strip with a width of 100 nm and a length of 640 nm. A cross-sectional view of a single GdFe nanocavity from a patterned GdFe-based nanocavity-shaped metasurface, shown by a SEM photograph in Figure 11c, is given in Figure 11d. A single GdFe/Cu cap is thus coupled with a bottom Ag film of 10 nm thickness, which also acts as a reflector, so as to form a semi-opened nanocavity filled fully by a SiO2 dielectric layer with a thickness of 900 nm.
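The unit-cell parameters above also fix the areal metal coverage of the patterned GdFe/Cu layer. The estimate below assumes one diamond cap and one x-direction connecting strip per unit cell, which is an assumption about the layout rather than a figure taken from the original work.

```python
# Rough areal fill factor of the GdFe/Cu micro-diamond array (one cap + one connector per cell assumed).
Px, Py = 2.2, 1.4              # array periods, µm
d_long, d_short = 2.0, 0.76    # diamond diagonals, µm
w_strip, l_strip = 0.10, 0.64  # connecting strip width and length, µm

diamond_area = 0.5 * d_long * d_short   # area of a rhombus from its diagonals
cell_area = Px * Py
fill = (diamond_area + w_strip * l_strip) / cell_area
print(f"metal fill factor ≈ {fill:.1%}")  # prints roughly 27%
```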
As shown in Figure 12a, two bright points with different intensities, including a maximum value of ~66 at the wavelength points of ~7.51 µm and ~12.27 µm, which are located at two opposite tips of each GdFe/Cu micro-diamond along the x-direction, also reveal the net charges as the sources of the spatial electric field mainly distributed over the charged Cu film. The relatively weak linear electric field over each apex of a GdFe/Cu composite mask should be generated by a couple of induced charges located at the tip of the Cu film and the upper apex of the GdFe film. As shown in Figure 12b, the spatial magnetic field should be composed of two identical parts with a similar appearance corresponding to a top GdFe/Cu micro-diamond, and further two similar patterned parts in the SiO2 medium cavities on both sides of a micro-diamond along the x-direction. So, the appearance of the top spatial magnetic fields can be attributed to a couple of conductive currents towards or away from the central region of a single micro-diamond, leading to the patterned net charge distribution above, and thus presents a spectral intensity sequence of {~12.27 µm} > {~7.51 µm} > {~3.69 µm} > {~9.767 µm}. In the y-direction, the spatial electric fields are already divided into two parts by the Cu film. The electric field distributed over the Cu film should be generated by the central net charges of a single micro-diamond, and those existing between the SiO2 medium cavities originate mainly from the induced net charges located at the apexes of the GdFe/Cu micro-diamond. According to the measurements and further evaluation, a similar spectral intensity sequence of {~9.767 µm} < {~7.51 µm} < {~12.27 µm} < {~3.69 µm} can be obtained. The spatial magnetic fields that originate from the inductive currents excited over the GdFe/Cu film and also the Ag film follow a similar spectral intensity sequence of {~9.767 µm} < {~3.69 µm} < {~7.51 µm} < {~12.27 µm}. By the conducted simulations above, the total memory-resident electromagnetic wavefield distribution attributed to a patterned GdFe-based nanocavity-shaped metasurface can be ranked in a spectral intensity sequence of {~9.767 µm} < {~7.51 µm} < {~3.69 µm} < {~12.27 µm}.

The typical distributing characteristics of both the surface net charge and conductive current, as the sources of the spatial electromagnetic wavefields corresponding to the patterned GdFe-based nanocavity-shaped metasurface, are shown in Figure 13. A transient charged fashion of a single GdFe/Cu micro-diamond cap and a bottom Ag film is illustrated in Figure 13a. As shown, a couple of the centrally orientated electric dipoles P1, stimulated by the surface plasmons excited by a beam of IR radiation incident upon the top surface of the GdFe film, will induce further electric dipoles P2 out from the same negative net charges located at the central region of the Cu mask and, continuously, a couple of relatively weak electric dipoles P3 having the same orientation as those over the top surface along the x-direction. And there is a relatively strong and weak alternate arrangement of the positive net charges according to the central charge distribution over a single micro-diamond, and further two induced linear arrangements over both the Cu and Ag films along the y-direction, respectively. A transient surface conductive current morphology of a single GdFe nanocavity is illustrated in Figure 13b. As shown, a couple of the conductive currents J1 with two opposite directions, colored black, are also stimulated by the surface plasmons excited by the incident IR beams; the yellow surface eddy current J2 is induced by the incident magnetic field component; the surface induced current J3, colored green, flows towards the two tips over the two endfaces of the Cu film; and the red conductive current J4, with a direction opposite to the green one, flows over the upper surface of the bottom Ag film only along the x-direction. So, the spatial electromagnetic wavefields can be resonantly accumulated and greatly enhanced in the nanocavity according to constructive interference leading to the spatial magnetic plasmon, which may imply a new IR radiation response and absorption manner different from conventional irreversible optothermal sensing and absorbing techniques. A typical fragment corresponding to a spatial plasmon mode with a periodic fashion, from the resonant spatial electromagnetic wavefields integrated by the electric field and magnetic field components, is shown in Figure 14. Both the basic spatial electric field and magnetic field appearances at a typical wavelength of 12.27 µm along the x- and y-direction, which are formed by assembling three basic fashions from Figure 12, are shown in Figure 14a,b, respectively. A transient surface net charge and current morphology of a partial GdFe-based nanocavity-shaped metasurface, involving a basic sealed nanocavity and also a basic semi-opened nanocavity directly exposing the SiO2 material to the incident radiation, which can be classified as two types of nanocavity architectures, is further illustrated in Figure 14e.
The technological flow for the preparation of the Si-based GdFe nanorhombic-array magnetic metasurfaces mainly includes the following: magnetron sputtering coating (PVD), plasma-enhanced chemical vapor deposition (PECVD), direct laser writing (DLW)/electron-beam lithography (EBL), magnetron sputtering coating (PVD) again, and removal of the photoresist film or masks, as shown in Figure 15a. The new IMAM architecture is fabricated by adding a crucial EBL step on an EBL-JEOL-6300 system, demonstrated in Figure 15b. After completing basic operations such as electron beam exposure, development and fixation for defining the structural pattern, the subsequent steps, involving the sputtering deposition of the GdFe magnetic film followed by common ultrasonic treatment, are performed. A methodical process ensures meticulous separation between the magnetic film and the photoresist, resulting in an effective creation of the GdFe micro-diamond array. The final sample is exhibited in Figure 15c.

The typical characteristics of the patterned IMAM sample acquired by us are shown in Figure 16. A basic GdFe/Cu micro-diamond cap and a SEM photograph of a partial sample are illustrated in Figure 16b,c, respectively. The IR absorption characteristics in the wavelength range of 3-14 µm are given in Figure 16a.
As shown, the overall average IR absorption is ~71.7%, which is much lower than that shown in Figure 9c because of the intrinsic SiO2 absorption, and is roughly consistent with that indicated at the wavelength of 9.76 µm in Figure 12. The overall transmittance is almost zero, with an overall reflectance of ~30%. It can be noted that the patterned IMAM already achieves a relatively strong IR absorption, because the joint action of the surface oscillating net charges distributed over the charged metallic films and the surface conductive currents, including eddy currents, generates stronger spatial electromagnetic wavefields in an arrayed nanocavity-shaped architecture under the condition of completely eliminating SiO2 absorption.

Conclusions A type of IMAM consisting of the key GdFe films and SiO2 dielectric layers is proposed for realizing ideal IR lightwave manipulation and absorption, mainly based on spatial electromagnetic wavefield excitation and resonant accumulation. The simulations and measurements demonstrate that the GdFe-based nanocavity-shaped metasurfaces already achieve an average IR absorption of ~81% in a wide wavelength range of 3-14 µm, experimentally. It can be expected that the joint action of the surface oscillating net charges distributed over the charged metallic films and the surface conductive currents, including equivalent eddy currents, will generate strong spatial electromagnetic wavefields by constructing a patterned surface metallic micro-diamond array and a film system leading to a nanocavity-shaped array. It should be noted that the IR lightwave manipulation and memory-resident absorption in an electromagnetic storage manner can be further improved by continuously optimizing the metallic and medium material configuration and also the patterned surface layout, based on the joint action of the spatial electric field and magnetic field components stimulated in the nanocavity-shaped architecture. This work highlights several potential applications such as highly efficient thermal radiation responding and sensing, IR radiation manipulating and re-arranging for detection, and miniaturized photonic devices.
Figure 1. A dual nanocavity-shaped metasurface shaped by vertically cascading two basic GdFe-SiO2 nanocavities. (a) A SEM photograph of a fabricated cross-sectional metasurface sample. (b) Layered configuration of GdFe films with h2 and h4 thickness and SiO2 layers with h1 and h3 thickness over an Ag film with hs thickness, respectively. As shown in Figure 1, the dual nanocavity-shaped metasurface is formed by vertically cascading two basic GdFe-SiO2 nanocavities. The upper nanocavity consists of a top and a bottom GdFe film and an intermediate SiO2 layer, and the lower one consists of a top GdFe film (also the bottom GdFe film of the upper nanocavity), an intermediate SiO2 layer and an Ag film over an n-type Si wafer, which are fabricated by a traditional film-system growing flow. A cross-sectional view of the fabricated IMAM sample is exhibited by the SEM photograph shown in Figure 1a. The film system parameters are expressed by the set of film thicknesses {h1-h4, hs}, where the depth of the upper nanocavity is h2 + h3 + h4 and that of the lower one is h1 + h2.

Figure 2. A symbiotic architecture for generating the layered magnetic fields distributed in each functional film, labeled by two arrows entering and exiting the paper; the surface electric current fields include the black surface current J1 of the transient surface plasmons excited by the incident radiation, two similar yellow eddy currents J2 and J3 over the top and bottom endfaces of a single GdFe film, and the similar brown eddy current J4 over the two endfaces of the bottom Ag film, respectively. (a) A basic configuration based on GdFe, SiO2 and Ag films leading to a nanocavity-shaped architecture for responding to the IR radiation according to transverse electric field (TE) and transverse magnetic field (TM) incidence, where the time-varying TM component labeled by the black arrows exiting the paper will penetrate the film system. (b) Overlapping of the black penetrating magnetic fields and the layered magnetic field components labeled by two red arrows entering and exiting the paper, which are excited by the surface equivalent eddy currents above, to form the net resonant magnetic fields according to the spatial magnetic plasmon modes.

Figure 3. Simulations of the IR absorption of the GdFe-Ag film system and the GdFe-SiO2-Ag nanocavity-shaped metasurface. (a,b) Spectral absorption characteristics of the GdFe films with the different thicknesses designed and of the GdFe-SiO2-Ag configuration based on different SiO2 thicknesses, respectively.

Figure 4. The IR absorptivity of the metasurfaces after replacing a portion of SiO2 with Si3N4 (680 nm SiO2 + 220 nm Si3N4, 780 nm SiO2 + 120 nm Si3N4, and 900 nm SiO2). With a gradually increasing thickness of Si3N4, the remarkable decrease in the IR absorptivity of the metasurface at 9 µm should be caused by the optical properties of the SiO2 material utilized.

Figure 5. Simulations of the IR absorption of the nanocavity-shaped metasurfaces designed. (a,b) Spectral absorption characteristics of the GdFe-SiO2-Ag nanocavity-shaped metasurface with a top SiO2 layer having different thicknesses and of a cascaded nanocavity-shaped metasurface with a top GdFe film having different thicknesses, respectively.

Figure 7. Typical simulations of the resonant electromagnetic wavefield distribution in the dual nanocavity-shaped metasurface at several featured wavelengths with almost 100% absorption. (a) Electric field distribution. (b) Magnetic field distribution.

Figure 8. A schematic diagram of a Si-based nanocavity formed by a basic GdFe/SiO2/Ag film system. (a) Typical manufacturing process. (b) Sputter-Lesker-Lab18 magnetron sputtering equipment. (c) ICPPECVD-SENTECH-SI500 plasma-enhanced chemical vapor deposition equipment. (d) The typical surface appearance of the metasurface sample fabricated.

Figure 9. (a) Nicolet iN10 FTIR spectrometer (Huazhong University of Science and Technology Analytical and Testing Center). (b) The measured reflectance and transmittance of the Si-based GdFe metasurface. (c) Both the simulated and measured IR absorption characteristics of the Si-based nanocavity metasurface.

Figure 10. Simulations of spectral IR absorption according to incident angle θ under the two polarized TM and TE modes: the spectral IR absorption of the TE mode (a) and the TM mode (b) when varying the incident angle θ in a range of 0-70°.

Figure 11. A new type of GdFe-based nanocavity-shaped metasurface. (a) The typical layout of the GdFe/Cu micro-diamond cap array over a SiO2 dielectric layer. (b) An element micro-diamond cap with several key structural parameters. (c) A SEM photograph of the top patterned appearance of the metasurface fabricated. (d) A cross-sectional view of a single GdFe nanocavity.

Figure 12. Typical simulations of the spatial electromagnetic wavefield distribution corresponding to a patterned GdFe-based nanocavity-shaped metasurface at several featured wavelength points of ~3.69 µm, ~7.51 µm, ~9.76 µm and ~12.27 µm, respectively. (a,c) Spatial electric field distribution along the x- and y-direction. (b,d) Spatial magnetic field distribution along the x- and y-direction.

Figure 13. Typical distributing characteristics of both the surface net charge and conductive current as the sources of the spatial electromagnetic wavefields constrained by a patterned GdFe-based nanocavity-shaped metasurface. (a) Charged GdFe/Cu and Ag films along the x- and y-direction, respectively. (b) Surface induced currents, including bounding eddy currents, for exciting spatial magnetic fields only along the x-direction in the GdFe/Cu nanocavity.

Figure 14. A schematic diagram of the basic spatial electric field and magnetic field appearances at a typical wavelength of 12.27 µm along the x- and y-direction, leading to an electromagnetic plasmon resonance in the z-plane. The basic fashion is formed by integrating three basic electric fields (a,c) and magnetic fields (b,d), with a transient surface net charge and current morphology of a partial GdFe nanocavity (e).

Figure 15. (a) Process preparation flow for the silicon-based GdFe nanorhombic-array magnetic metasurfaces. (b) EBL-JEOL-6300 electron beam lithography equipment (USTC Center for Micro- and Nanoscale Research and Fabrication). (c) The nanorhombic-array magnetic metasurface sample.

Figure 16. Typical characteristics of the patterned IMAM sample acquired by us. (a) The measured optical response characteristics of the sample. (b) A SEM photograph of the patterned IMAM sample fabricated. (c) A 3D view of a single GdFe/Cu micro-diamond cap.
16,745.4
2024-07-01T00:00:00.000
[ "Physics", "Materials Science", "Engineering" ]
Impact of polymorphisms in DNA repair genes XPD, hOGG1 and XRCC4 on colorectal cancer risk in a Chinese Han Population Background: This research aimed to study the associations between XPD (G751A, rs13181), hOGG1 (C326G, rs1052133) and XRCC4 (G1394T, rs6869366) gene polymorphisms and the risk of colorectal cancer (CRC) in a Chinese Han population. Method: A total of 225 Chinese Han patients with CRC were selected as the study group, and 200 healthy subjects were recruited as the control group. The polymorphisms of XPD G751A, hOGG1 C326G and XRCC4 G1394T loci were detected by the RFLP-PCR technique in the peripheral blood of all subjects. Results: Compared with individuals carrying the XPD751 GG allele, the A allele carriers (GA/AA) had a significantly increased risk of CRC (adjusted OR = 2.109, 95%CI = 1.352–3.287, P=0.003). Similarly, the G allele (CG/GG) of hOGG1 C326G locus conferred increased susceptibility to CRC (adjusted OR = 2.654, 95%CI = 1.915–3.685, P<0.001). In addition, the T allele carriers (GT/TT) of the XRCC4 G1394T locus have an increased risk of developing CRC (adjusted OR = 4.512, 95%CI = 2.785–7.402, P<0.001). The risk of CRC was significantly increased in individuals with both the XPD locus A allele and the hOGG1 locus G allele (adjusted OR = 1.543, 95%CI = 1.302–2.542, P=0.002). Furthermore, individuals with both the hOGG1 locus G allele and the XRCC4 locus T allele were predisposed to CRC development (adjusted OR = 3.854, 95%CI = 1.924–7.123, P<0.001). The risks of CRC in XPD gene A allele carriers (GA/AA) (adjusted OR = 1.570, 95%CI = 1.201–1.976, P=0.001), hOGG1 gene G allele carriers (CG/GG) (adjusted OR = 3.031, 95%CI = 2.184–4.225, P<0.001) and XRCC4 gene T allele carriers (GT/TT) (adjusted OR = 2.793, 95%CI = 2.235–3.222, P<0.001) were significantly higher in patients who smoked ≥16 packs/year. Conclusion: Our results suggest that XPD G751A, hOGG1 C326G and XRCC4 G1394T gene polymorphisms might play an important role in colorectal carcinogenesis and increase the risk of developing CRC in the Chinese Han population. The interaction between smoking and these gene polymorphisms would increase the risk of CRC. 
Introduction Colorectal cancer (CRC) is currently the third most common malignancy worldwide and ranks fourth in cancer-related mortalities [1]. Together with the economic boom, improvement in quality of life, changes in dietary patterns and environmental deterioration, the incidence of CRC is sharply increasing in developing countries including China [1]. According to the registration data collected from the National Central Registry of China, 3,763,000 new CRC cases (2,157,000 for males and 1,606,000 for females) and 1,910,000 cancer deaths (1,111,000 for males and 800,000 for females) were estimated from 2009 to 2011 [2]. The exact mechanisms underlying colorectal carcinogenesis remain unknown, despite epidemiological data indicating that numerous factors might contribute to the etiology of CRC, including high rates of red meat consumption, tobacco use, alcohol intake, lack of exercise and family history [3]. However, these conventional risk factors do not fully account for all cases, especially in young subjects, who often do not have any of these factors. In addition, a family history of tumors significantly increases susceptibility to CRC, which indicates that genetic factors might be related to CRC etiology, similar to the etiology of any other major malignancy [4,5].

A common type of genetic variation in the genome, known as single-nucleotide polymorphism (SNP), has been found to be associated with susceptibility to cancer [6]. Many of these polymorphisms are found in genes that regulate potentially oncogenic pathways [6]. The DNA repair pathways play critical roles in maintaining genome integrity, and a diminished capacity to repair DNA lesions predisposes individuals to an increased susceptibility to cancer [7,8]. Individuals with CRC have been shown to have a lower DNA repair capacity [9,10]. We hypothesize that genetic polymorphisms in DNA repair genes may affect DNA repair capacity and increase the risk of CRC in a specified population. There are multiple DNA repair genes, each dealing with specific DNA damage [7]. Xeroderma pigmentosum group D (XPD), 8-oxoguanine DNA-glycosylase 1 (hOGG1) and X-ray repair cross-complementing protein 4 (XRCC4) are among the key DNA repair genes: XPD is thought to be involved in nucleotide excision repair, hOGG1 is implicated in repairing oxidatively damaged DNA and DNA single-strand breaks, and XRCC4 primarily addresses DNA double-strand breaks, repaired by homologous and non-homologous end-joining (NHEJ) recombination [11-13].

To obtain a comprehensive estimate of the putative influence of the genetic polymorphisms of these genes on CRC risk, the XPD G751A, hOGG1 C326G and XRCC4 G1394T polymorphisms were detected in CRC patients in this case-control study in a sample of the Chinese population, with the aim of providing a theoretical basis for the treatment and prognosis of the disease.
Patient characteristics This case-control study was conducted from August 2014 until October 2017. A total of 225 consecutive CRC patients (140 males, 85 females; aged 53.1 ± 11.2 years) who underwent surgical resection at the Department of Gastrointestinal Surgery, the Affiliated Wenling Hospital of Wenzhou Medical University, were enrolled in the present study as the observation group, whereas 200 age- and sex-matched controls (118 males, 82 females; aged 49.7 ± 11.2 years) were individuals who received health screening at Tongde Hospital of Zhejiang. Subjects in the observation group were sporadic CRC patients without a family history of CRC. All individuals enrolled in the present study had no chronic diseases, including diabetes mellitus, hypertension, cardiovascular and cerebrovascular disease, chronic kidney disease or other systemic diseases. None of the patients with CRC received radiotherapy or chemotherapy before surgery. The data collected included sex, age, smoking and alcohol intake habits, tumor location, T stage and grade, lymph node status, distant metastases, and neoadjuvant chemotherapeutic treatment. Informed consent was obtained from all subjects. The study was approved by the Ethics Committee of the Affiliated Wenling Hospital of Wenzhou Medical University.

DNA extraction and gene polymorphism detection Peripheral venous blood (5 ml) was obtained from all subjects in the morning while they were in a fasting state. Genomic DNA was extracted using the DNeasy Kit (QIAamp DNA Blood Midi Kit, Qiagen, Cat#51104, Hilden, Germany) according to the manufacturer's instructions. The extracted DNA was stored at −80 °C for further use. The XPD G751A, hOGG1 C326G and XRCC4 G1394T polymorphisms were genotyped using a polymerase chain reaction-restriction fragment length polymorphism (PCR-RFLP) assay. The primers and restriction enzymes for determining the XPD G751A, hOGG1 C326G and XRCC4 G1394T genotypes are shown in Table 1. A 25-μl PCR reaction system was used, containing 2.5 μl 10× Taq polymerase buffer solution, 2 μl magnesium chloride (2 mM), 2 μl dNTP mix (0.2 mM), 1 μl forward primer (10 pmol), 1 μl reverse primer (10 pmol), 2 μl genomic DNA (100 ng/μl), 0.5 μl Taq DNA polymerase and 14 μl distilled water. The PCR amplification conditions were as follows: denaturation at 94 °C for 4 min; then 35 cycles of denaturation at 94 °C for 30 s, annealing for 30 s at 52 °C for XPD, 60 °C for hOGG1 and 59 °C for XRCC4, and extension at 72 °C for 40 s; followed by a final extension cycle at 72 °C for 10 min. The PCR products were treated with the appropriate restriction enzymes in a 37 °C water bath overnight, and the digested products were separated by electrophoresis in 2% agarose gels. Finally, an ultraviolet gel imager was used to visualize the electrophoretic results and determine sample genotypes. The PCR product information for the XPD G751A, hOGG1 C326G and XRCC4 G1394T variants is shown in Figure 1.
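In practice, PCR-RFLP genotype assignment reduces to checking which expected fragment sets appear on the gel: only the undigested band, only the digested bands, or both (heterozygote). The helper below is a hypothetical sketch; the fragment sizes are placeholders and do not come from Table 1, which is not reproduced here.

```python
# Minimal sketch of PCR-RFLP genotype calling; fragment sizes (bp) are placeholders only.
LOCI = {
    "XPD_G751A":    {"uncut": {436}, "cut": {290, 146}},
    "hOGG1_C326G":  {"uncut": {200}, "cut": {124, 76}},
    "XRCC4_G1394T": {"uncut": {299}, "cut": {181, 118}},
}

def call_genotype(locus: str, observed_bands: set[int]) -> str:
    """Classify a lane as homozygous uncut, homozygous cut, or heterozygous."""
    expected = LOCI[locus]
    has_uncut = expected["uncut"] <= observed_bands
    has_cut = expected["cut"] <= observed_bands
    if has_uncut and has_cut:
        return "heterozygous"
    if has_cut:
        return "homozygous (restriction site present)"
    if has_uncut:
        return "homozygous (restriction site absent)"
    return "undetermined"

print(call_genotype("XPD_G751A", {436, 290, 146}))  # -> heterozygous
```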
Statistical analysis Statistical analyses were conducted with the SPSS 18.0 software package (SPSS Inc., Chicago, IL). Hardy-Weinberg equilibrium was assessed by a chi-square goodness-of-fit test, and the differences in genotype frequencies of XPD G751A, hOGG1 C326G and XRCC4 G1394T between the CRC patient group and the control group were evaluated by the chi-square test. The linkage disequilibrium between gene polymorphisms was measured by D′, and the r² value was calculated via the Haploview program (http://www.broad.mit.edu/mpg/haploview/). The correlations between polymorphism alleles and clinicopathological parameters, demographic variables and environmental factors were analyzed by the chi-square test or the Fisher exact test. Odds ratios (OR) with 95% confidence intervals (CI) were calculated to analyze the strength of the association of polymorphism alleles with CRC risk. Bonferroni adjustments were made to the P-values for each SNP by multiplying by the number of SNPs tested for the gene. Receiver operating characteristic (ROC) curve analysis was performed to determine the optimal cut-off value for dividing smoking status into two groups. All P-values were two-sided, and P<0.05 was considered to indicate statistical significance.

Population characteristics At baseline, no statistically significant differences between the study and control groups were noted regarding demographics (age and sex), BMI, smoking or alcohol status (Supplementary Table S1).

The joint effect of XPD, hOGG1 and XRCC4 gene polymorphisms on CRC risk The gene loci of XPD G751A, hOGG1 C326G and XRCC4 G1394T were not in linkage disequilibrium (D′, 0.005; r², 0.002). According to the frequencies of the relevant genotypes for XPD G751A, hOGG1 C326G and XRCC4 G1394T, a simultaneous occurrence of the XPD locus A allele, the XRCC4 locus T allele and the hOGG1 locus C allele was deemed risk-free. Given that the variant alleles of all three polymorphisms, XPD G751A, hOGG1 C326G and XRCC4 G1394T, were not found together in any individual, we examined the pairwise joint effect of the variant alleles of XPD G751A, hOGG1 C326G and XRCC4 G1394T on CRC risk (Table 3). Interestingly, individuals carrying the XPD locus A allele and the hOGG1 locus G allele showed an increased CRC risk (OR = 1.85, 95%CI = 1.331-2.584, P<0.001; adjusted OR = 1.543, 95%CI = 1.302-2.542, P=0.002). Similarly, the combined effect of the hOGG1 locus G allele and the XRCC4 locus T allele was to increase susceptibility to CRC development (OR = 2.461, 95%CI = 1.826-3.317, P<0.001; adjusted OR = 3.854, 95%CI = 1.924-7.123, P<0.001).

Figure 2. ROC curve for smoking status. The area under the ROC curve was 0.72. The optimal cut-off value was 16 packs/year.

Stratification analysis by smoking for the three gene polymorphisms and CRC risk We further investigated the associations between the XPD, hOGG1 and XRCC4 gene polymorphisms and CRC risk in an analysis stratified by smoking status. ROC curve analysis was performed to discriminate smoking status. As shown in Figure 2, the area under the ROC curve was 0.72 and the optimal cut-off value was 16 packs/year. The results of the stratified analysis are presented in the corresponding table.

Stratification analysis by sexes for the three gene polymorphisms and CRC risk As there was a significant difference in the incidence of CRC between males and females, we studied the impact of sex on the relationship between the three gene polymorphisms and CRC risk [14]. As shown in Table 5, we observed no association between the XPD, hOGG1 and XRCC4 gene polymorphisms and the risk of CRC in patients stratified by sex.
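The unadjusted odds ratios and the Hardy-Weinberg check described above are straightforward to compute from the genotype counts; the adjusted ORs reported in the paper additionally require logistic regression with age, sex, alcohol and smoking as covariates, which is not shown here. The counts in the example below are hypothetical and serve only to illustrate the calculations.

```python
import math
from scipy.stats import chi2

def odds_ratio_ci(a, b, c, d):
    """Unadjusted OR with Woolf 95% CI for a 2x2 table:
    a = exposed cases, b = unexposed cases, c = exposed controls, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo, hi = math.exp(math.log(or_) - 1.96 * se), math.exp(math.log(or_) + 1.96 * se)
    return or_, (lo, hi)

def hardy_weinberg_chi2(n_AA, n_Aa, n_aa):
    """Chi-square goodness-of-fit test for Hardy-Weinberg equilibrium (1 df)."""
    n = n_AA + n_Aa + n_aa
    p = (2 * n_AA + n_Aa) / (2 * n)                 # major-allele frequency
    expected = [n * p**2, 2 * n * p * (1 - p), n * (1 - p)**2]
    stat = sum((o - e)**2 / e for o, e in zip([n_AA, n_Aa, n_aa], expected))
    return stat, chi2.sf(stat, df=1)

# Hypothetical counts for a dominant-model comparison (variant carriers vs wild-type homozygotes)
print(odds_ratio_ci(a=120, b=105, c=70, d=130))
print(hardy_weinberg_chi2(n_AA=90, n_Aa=85, n_aa=25))
# Bonferroni correction across the three SNPs amounts to multiplying each P-value by 3.
```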
Discussion CRC is one of the most commonly diagnosed malignancies in East Asia and many other parts of the world. With the development of modern gastrointestinal endoscopy technologies and the establishment of surveillance protocols for individuals at high risk, more cases are diagnosed at the early stages, providing more opportunities for curative surgical resection [15]. However, due to the high malignant potential of CRC, approximately 40% of surgically cured patients experience cancer recurrence within 5 years. CRC usually arises through a multistep carcinogenic process involving the accumulation of numerous genetic and epigenetic changes in oncogenes and suppressor genes, leading to dysregulation of multiple signaling pathways, which disrupts the cell cycle and the balance between cell proliferation and cell death [16,17]. In recent years, considerable interest has arisen in genetic factors that seem to modulate individual susceptibility to multifactorial diseases, characterized by SNPs that can be associated with a predisposition to, and a high risk for, the development of carcinogenesis upon exposure to similar environmental and lifestyle factors [18,19].

Thousands of DNA lesions occur in cells, resulting from exposure to a variety of endogenous and exogenous chemical and physical agents [7]. If not correctly repaired, the accumulation of DNA damage can lead to global genomic instability and DNA rearrangements, which are commonly found in the majority of cancer cells. Efficient repair of this damage helps to maintain DNA stability [8]. There is certain evidence that deficiencies in DNA repair capacity predispose individuals to an increased susceptibility to cancer [8]. Not surprisingly, individuals with CRC have been reported to have a lower DNA repair capacity. XPD, a prototypical 5'-3' translocating DNA helicase, is part of the transcription factor IIH (TFIIH) complex that is essential for signaling events triggering transcription, cell cycle checkpoints and DNA damage repair [20]. In the present study, we found that the XPD G751A gene polymorphism was associated with an increased risk of CRC, which is consistent with previous studies of other malignancies [21]. The XPD G751A polymorphism causes an amino acid substitution from Lys to Gln, which is closely associated with impaired DNA repair capacity and thus predisposes individuals to an increased susceptibility to cancer [22,23]. Nevertheless, no significant association was found between the XPD G751A polymorphism and CRC susceptibility in a study of a Polish population [24]. Several reasons may contribute to this inconsistent result. First, Caucasians and Asians have different genetic backgrounds; the minor allele frequency of the XPD G751A genotype in control subjects was significantly lower in Chinese studies than in Polish studies. Second, cancer is a complex disease affected by interactions between genetic, epigenetic and environmental factors, and these factors may modify CRC risk in distinct ways in different populations. In other words, the same SNP genotype might play an opposite biological role in tumor development in different ethnic groups. It is therefore extremely valuable to address this issue specifically in a Chinese population, and it would be of great help to examine the relation in larger populations.
Base excision repair (BER) is an important DNA repair pathway for base damage and single-strand breaks caused by X-rays, oxygen radicals or alkylating agents. One of the key enzymes in the BER pathway is hOGG1 [25]. Genetic variants of hOGG1 may affect the expression and function of the OGG1 protein, thus contributing to the risk of cancer [26]. Ser326Cys is the most extensively studied hOGG1 variant, and the Cys326 allele is increasingly reported to be associated with an increased risk of cancer [22,27,28]. Our results indicated that the hOGG1 C326G gene polymorphism and CRC susceptibility were significantly correlated: G allele carriers had a higher risk of developing CRC, which corresponds well with the previous study by Park et al. [29]. Sliwinski et al. [24] examined the XPD G751A and hOGG1 C326G gene polymorphisms in a Polish population and found no association between these polymorphisms and the risk of CRC. This discrepancy can also be explained by the different ethnicities studied, or by the complex underlying genetic architecture and multifactorial genetic factors of CRC, as indicated above. In addition, we noticed that that study had a very limited sample size, which may also have led to the controversial finding.

XRCC4, located on chromosome 5q14.2, is an important DNA repair gene involved in the NHEJ pathway. XRCC4 directly interacts with Ku70/Ku80 and plays a central role in the precise end-joining of blunt DSBs [30]. It has been reported that inactivation of the XRCC4 gene causes growth defects, premature senescence, inability to support V(D)J recombination, late embryonic lethality accompanied by defective lymphogenesis, and defective neurogenesis manifested by extensive apoptotic death of newly generated post-mitotic neuronal cells [31]. Mutations in the coding region of this gene might result in a more deficient NHEJ capacity and increased cancer risk. In the present study, we found that the XRCC4 G1394T gene polymorphism had a substantial association with increased risk of CRC, in which T allele carriers had a higher risk of CRC (adjusted OR = 4.512, 95%CI = 2.785-7.402, P<0.001). Consistent with this observation, T allele homozygotes for this SNP have been demonstrated to exhibit a defective DNA repair capacity and to correlate with a higher chromosome aberration frequency.
We also examined the joint effects of the XPD, hOGG1 and XRCC4 gene polymorphisms on the risk of CRC development and found that the AG/AA genotype of XPD together with the GG genotype of hOGG1 increased the rate of developing CRC. In addition, a combination of the G allele of hOGG1 and the T allele of XRCC4 conferred higher CRC susceptibility. Identifying patients at risk of developing CRC through genotyping may allow a more personalized approach to moderating the risk of CRC development. We further investigated whether tobacco smoking behavior affects the interactions between the polymorphisms and CRC risk. After stratifying the subjects by smoking degree, the A allele carriers of XPD G751A, the G allele carriers of hOGG1 C326G and the T allele carriers of XRCC4 G1394T were associated with a significantly higher risk of CRC. Tobacco has been recognized as a major risk factor for CRC. It contains a large amount of toxic and harmful substances and can produce free radicals, which can lead to DNA damage [32]. There is great potential that increased DNA damage and a reduced DNA repair capacity coordinately increase tumor susceptibility. Although the incidence of CRC was higher in males, we observed no association between the three gene polymorphisms and the risk of CRC in the stratified analysis based on sex. This finding is plausible because XPD G751A, hOGG1 C326G and XRCC4 G1394T are common autosomal variants, whose relationship with CRC risk should not differ by sex.

Our findings might be helpful for the early detection of CRC through identifying populations at risk, and the clinical monitoring value is greater among individuals with a heavy smoking level. However, these findings should be treated with caution because of the relatively modest sample size and heterogeneity. Well-designed studies with a larger scale and more ethnic groups are needed to validate the risk factors.

Conclusion
Our study provided compelling evidence that the XPD G751A, hOGG1 C326G and XRCC4 G1394T gene polymorphisms were associated with susceptibility to developing CRC in a Chinese Han population. Although there were some limitations in the study regarding the diversity or ethnicity of the samples, our findings provide further insights into the pathogenesis of CRC.

Table 3. Combined effects of XPD751, hOGG1, XRCC4 and CRC risk. *OR adjusted by age, sex, alcohol and smoking; CI, confidence interval.
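The 16 packages/year cut-off quoted above is the kind of value typically obtained by maximizing Youden's J along the ROC curve. The text does not state which criterion SPSS was set to use, so the sketch below is an assumption, with invented pack-year values purely for illustration.

```python
def youden_optimal_cutoff(values_cases, values_controls, candidate_cutoffs):
    """Pick the cutoff maximizing Youden's J = sensitivity + specificity - 1,
    one common way of choosing the 'optimal' ROC cut-off."""
    best_cut, best_j = None, -1.0
    for cut in candidate_cutoffs:
        sens = sum(v >= cut for v in values_cases) / len(values_cases)
        spec = sum(v < cut for v in values_controls) / len(values_controls)
        j = sens + spec - 1.0
        if j > best_j:
            best_cut, best_j = cut, j
    return best_cut, best_j

# Invented smoking exposures (packages/year), not the study data
cases    = [5, 12, 18, 22, 30, 35, 40, 16, 25, 28]
controls = [0, 2, 5, 8, 10, 12, 14, 6, 3, 20]
print(youden_optimal_cutoff(cases, controls, candidate_cutoffs=range(0, 45)))
```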
4,166.4
2018-11-14T00:00:00.000
[ "Biology" ]
An Empirical Evaluation of Different Electronic Payment Channels in Nigeria
The payments system plays a very crucial role in any economy, being the channel through which financial resources flow from one segment of the economy to the other. This study empirically evaluates the electronic-payment channels and their penetration level in Nigeria from 2012 to the first quarter of 2019. The main research objective was to determine, given the technology revolution, the level of e-payment penetration in Nigeria. The concepts of e-payment, e-payment in Nigeria and the different e-payment platforms were reviewed. The data were obtained from secondary sources such as the CBN, journals, and commercial banks' quarterly bulletins. The study employed descriptive statistics to ascertain the level of e-payment penetration in Nigeria, and the data were analyzed using percentages. From the study, it was found that ATM dominated the penetration of e-payment in terms of volume in Nigeria from 2011 to the first quarter of 2019. In terms of value, NEFT dominated in 2012 and 2013, while NIP dominated from 2014 to the first quarter of 2019. It is recommended that more electronic channels should be opened to deepen electronic transactions in the economy and fast-track transactions as the world moves into the tech revolution. Also, fraud emanating from electronic transactions should be checked and reduced to give consumers trust in such transactions.

Introduction
The payments system plays a very crucial role in any economy, being the channel through which financial resources flow from one segment of the economy to the other. It therefore represents the major foundation of the modern market economy. A payment system, according to Ojo [1], can be viewed as an arrangement consisting of institutions, instruments, organizations, operating procedures, and information and communication systems, usually within a nation's financial system, used to initiate and transmit payment information from payer to payee and to settle payments or discharge financial obligations among economic units. Payment systems may be physical or electronic, and each has its own procedures and protocols [2]. There are many payment systems or methods in modern-day business. They include, but are not limited to, cash, cheque, credit or debit cards, money order, bank transfer and online payment services such as PayPal, as well as EPOS, self-controlled, computerized equipment that performs all the tasks of a store checkout counter: it allows payment by bank or credit cards, verifies transactions, provides sales reports, coordinates inventory data, and performs several other services normally provided by employees. Kelvin [3] declared that the introduction of technology-based payment systems has done a lot to increase the convenience of banks' customers and staff as well as society at large. Nigeria is lagging far behind most of the world in the general quest to boost microeconomic activity by reducing the role played by physical cash in daily transactions and by encouraging the creation of a cashless society, although this can be averted [4]. In Nigeria, there are different payment systems, and the focus of this article is the analysis of the different methods or systems of e-payment with a view to exposing the penetration level of each of them.

Electronic Payment
Electronic payment, which is also called online payment, is a payment option that completely excludes the use of cash and cheques.
It is a non-cash system which uses electronic media such as credit cards, debit cards and the automated clearing house (ACH) network. It requires customer action, payment authentication and payment to accounts. It is considered to be fast or time-saving, to help control expenses, and to be convenient and user-friendly, but it faces challenges which include restrictions on the number of transactions per day in some cases (and generally a limited amount per day), the risk of hacking, lack of anonymity and the need for internet access, among others.

Electronic Payment in Nigeria
Electronic payment can be said to be relatively new in Nigeria compared to some other countries, especially those referred to as developed. The use of cash or cheques as a payment option is still popular, especially in petty transactions with petty traders and artisans. Poor awareness of e-payment solutions, ignorance, poor banking culture, lack of trust, illiteracy and love for the status quo have been fingered as responsible for the volume of cash transactions in Nigeria [5]. Ayodele [6] observed that constant power failure has led to deficiencies in infrastructure such as ATMs, computers etc., which slow down the rate of electronic transactions. Frequent failure of networks from communication service providers in Nigeria has also been a major challenge to e-banking and, by extension, e-payment. Oladayo, T. & Adeniyi [7] observed that unreconciled interbank transactions are compounding the use of ATMs. ATM users are often faced with being debited when the machine did not dispense cash, and reversal sometimes takes time, especially for interbank transactions. Possessing an ATM card and its password has proven not to be a sufficient proof of ownership because fraudsters have found a way around it. It is a known fact that for many years, the use of cheques has been the main alternative to cash in Nigeria. Cheques have not been generally acceptable to sellers, especially petty traders. Cases of bounced or dud cheques and mistrust have hindered their acceptance even by big merchants. This is not unconnected with the lack of electronic means of ascertaining the validity of cheques until they are presented at banks. Although the Central Bank of Nigeria (CBN) has introduced an inter-bank electronic cheque clearing system, the effort of the CBN has only reduced the time of verification and redemption of cheques thus far, leaving cash payment the most credible and generally accepted means. Okafor [5] averred that a payment system that can replace or compete with cash must win the trust of merchants in the economy. This, he said, can only happen if there is a way the merchants can verify the validity of purchases, and the payment solutions must also be easily convertible to cash or as good as cash, since most merchants in Nigeria are in business on a subsistence basis. E-payment or e-transaction solutions will play a role here; however, they have not gained much ground.

Automated Teller Machine (ATM)
Ayodele [8] described the ATM card as a chip device consisting of circuit elements on a single silicon chip, used by customers to perform balance inquiries, mini-statements and cash withdrawals as well as transfers, through the use of automated teller machines.
Okafor [9] perceives the ATM as an electronic device which allows a financial institution's customers to use a secure method of communication to access their accounts, make cash withdrawals or cash advances using credit cards and check their account balances without the need for a human teller or cashier. It is adjudged to be the most popular e-transaction means in Nigeria, especially with the introduction of Point of Sale (POS) terminals. It is convenient and easy to use for withdrawing cash, making payments, making transfers and even checking account balances. However, it has not done much to reduce the volume of cash in the Nigerian economy. These services are provided by Interswitch, VPay, QuickCash etc. in Nigeria.

Point of Sale (POS) Terminal
The use of POS terminals (machines) has greatly reduced the use of cash for transactions. Ayodele [6] also described it as a payment device that allows credit/debit cardholders to make payments at sales/purchase outlets. It allows customers to perform service inquiries, airtime vending, loyalty redemption and the printing of mini statements [10].

Credit or Debit Cards and E-Wallets
Credit or debit cards and e-wallets make shopping cashless. Unlike the ATM, credit cards, debit cards and e-wallets will help the Central Bank of Nigeria (CBN) to achieve the cashless policy. Credit and debit cards use Point of Sale (POS) terminals located at accredited retail outlets. However, most ATM cards in Nigeria also serve as both credit and debit cards [10].

Internet Fund Transfer
This is the use of the internet by a customer to send money from his/her account to another account and/or back. The customer requires an internet-enabled computer or phone to carry out the fund transfer. It is fast and convenient but can be hindered by network failure. Examples include the Central Bank of Nigeria Real Time Gross Settlement System (RTGS), Western Union Money Transfer etc.

NIBSS Electronic Funds Transfer (NEFT)
The Nigeria Interbank Settlement System (NIBSS) electronic funds transfer is an irrevocable funds transfer instruction. The user logs into his/her bank's internet banking platform using his/her ID and password, goes to the fund transfer tab, enters the necessary details of the receiver in terms of bank, account number and amount, and clicks send. NEFT can be used to transfer funds to many recipients at the same time, but the receiver sometimes gets value only after up to 24 hours.

NIBSS Instant Payment (NIP)
This is a transfer option which can be used to transfer money from one customer's account to any bank account in Nigeria. NIP can only be used to transfer money to one or two persons at the same time, but the transfer is instantaneous because the receiver gets value within minutes.

NIBSS Automated Payment Services (NAPS)
This is an integrated multibank e-payment, e-collection, payroll and bulk payment platform. By its design, it is suitable for instant processing of payroll, pension, personnel records and the execution of funds transfers, direct debits, collections, scheduled delivery and payment instructions, but it can be hindered by the availability of the network.

Mobile Payment
Mobile payment refers to payment services operated under financial regulations and performed using mobile devices. Its models include mobile wallets, card-based payments, carrier billing, contactless payments and direct transfers between payer and payee bank accounts. It is fast and convenient but depends on the availability of the network.
[10]

Remita
Remita is a solution that addresses the payment needs of individuals and organizations. It was developed by the FinTech firm SystemSpecs. It helps users receive and make payments easily without an activation fee. It is secure and has Human Resource (HR) and payroll functions. It is an innovative way to manage electronic payments, collections, employee payrolls and schedules. There is Remita Personal and Remita for corporate users. It protects users' data with multiple security protocols but requires an internet connection.

Others are:
NIBSS e-BillsPay
This is an account-based, online real-time product that facilitates the payment of bills from an account.

Automated Clearing House (ACH)
The Automated Clearing House (ACH) is an electronic payment option that allows participants to pay Customs duties, taxes and fees electronically. ACH allows importers to pay duties with one electronic transaction in a secure environment. ACH fulfills the need for swift, accurate payment transfers in today's competitive business environment and reduces administrative and cheque processing costs.

PMS
The Periodic Monthly Statement (PMS) allows importers to deposit duties on the 15th business day of the month following the month in which the goods are released. This means merchandise released from the first to the last day of the month can be scheduled for duty payment on the following month's Periodic Monthly Statement. This eliminates the need to process duty payments on a transaction-by-transaction basis. Customs does not assess any interest charges for payments made via PMS, so a user can have a once-a-month, interest-free duty payment.

Web Payment
A web payment service is an online service that manages the transfer of funds from a customer to the merchant of an e-commerce website. The money may come from a prepaid account or a credit card stored in a digital wallet on the user's device or stored in the service's data center [11].

NIBSS Central Payment
The NIBSS CentralPay gateway is an e-commerce application service solution. CentralPay is a payment gateway application developed by NIBSS. With CentralPay, web merchants can easily receive payments online from their customers in exchange for goods and services. The unstable network situation affects its usage.

M-Cash
This is an innovative solution designed to facilitate low-value retail payments, grow e-payments by providing accessible electronic channels to a wider range of users and further enhance financial inclusion in Nigeria by extending e-payment benefits to payers and merchants. The unreliable nature of the network hinders its usage too.

Methodology
To analyze the data obtained from secondary sources such as the CBN, journals and commercial banks' quarterly bulletins, the study employed descriptive statistics to ascertain the level of e-payment penetration in Nigeria. The data were analyzed using percentages.

Data Presentation and Analysis
Table 2 indicates that by volume, ATM has the highest penetration or usage while e-BillsPay has the least. A breakdown of the statistics shows that NEFT has 8.05%, ATM has 79.71%, POS has 2.54%, Internet has 0.78%, Mobile Money has 4.30% and NIP has 4.62%. E-BillsPay has less than 0.1%; Remita, M-Cash, Central Pay and NAPS were not in use. For value, NEFT has the highest while e-BillsPay has the least. A breakdown of the statistics shows that NEFT has 50.59%, ATM has 9.97%, POS has 0.57%, Internet has 0.17%, Mobile Money has 0.50% and NIP has 38.20%. E-BillsPay has less than 0.1%, and Remita, M-Cash, Central Pay and NAPS were not in use.
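The "analysis using percentages" described in the Methodology reduces to computing each channel's share of the total transaction volume (or value) for a given year. A minimal sketch follows; the volume figures are hypothetical placeholders, not the actual CBN/NIBSS data behind the tables.

```python
# Hypothetical transaction volumes (millions of transactions); the real
# CBN/NIBSS figures for a given year would be substituted here.
volumes = {
    "NEFT": 28.9, "ATM": 286.1, "POS": 9.1, "Internet": 2.8,
    "Mobile Money": 15.4, "NIP": 16.6, "e-BillsPay": 0.2,
}

total = sum(volumes.values())
shares = {channel: 100.0 * v / total for channel, v in volumes.items()}

# Print channels from highest to lowest percentage share
for channel, share in sorted(shares.items(), key=lambda kv: -kv[1]):
    print(f"{channel:>12}: {share:5.2f}%")
```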
Table 3 indicates that by volume, ATM has the highest penetration or usage while e-BillsPay has the least. A breakdown of the statistics shows that NEFT has 5.49%, ATM has 74.05%, POS has 3.85%, Internet has 1.03%, Mobile Money has 5.13% and NIP has 7.55%. E-BillsPay has 0.11%, Remita has 2.78%, and M-Cash, Central Pay and NAPS were not in use. For value, NIP has the highest while e-BillsPay has the least. A breakdown of the statistics shows that NEFT has 33.21%, ATM has 8.40%, POS has 0.71%, Internet has 0.17%, Mobile Money has 0.77% and NIP has 45.43%. E-BillsPay has 0.10%, Remita has 11.21%, and M-Cash, Central Pay and NAPS were not in use.

Table 4 indicates that by volume, ATM has the highest penetration or usage while Central Pay has the least. A breakdown of the statistics shows that NEFT has 4.51%, ATM has 67.65%, POS has 5.26%, Internet has 1.24%, Mobile Money has 6.85% and NIP has 11.11%. E-BillsPay has 0.19%, Remita has 3.03%, M-Cash was not in use, Central Pay has 0.01%, and NAPS has 0.15%. For value, NIP has the highest while Central Pay has the least. A breakdown of the statistics shows that NEFT has 26.11%, ATM has 7.92%, POS has 0.89%, Internet has 0.18%, Mobile Money has 0.88% and NIP has 50.96%. E-BillsPay has 0.43%, Remita has 12.42%, M-Cash was not in use, Central Pay has less than 0.1%, and NAPS has 0.20%. (Sources: [14] and National Bureau of Statistics [13].)

Table 5 indicates that by volume, ATM has the highest penetration or usage while Central Pay has the least. A breakdown of the statistics shows that NEFT has 3.16%, ATM has 62.67%, POS has 6.77%, Internet has 1.50%, Mobile Money has 5.00% and NIP has 16.31%. E-BillsPay has 0.11%, Remita has 4.06%, M-Cash was not in use, Central Pay has 0.01%, and NAPS has 0.42%. For value, NIP has the highest while Central Pay has the least. A breakdown of the statistics shows that NEFT has 20.52%, ATM has 7.02%, POS has 1.07%, Internet has 0.19%, Mobile Money has 1.06% and NIP has 53.62%. E-BillsPay has 0.48%, Remita has 14.99%, M-Cash was not in use, Central Pay has less than 0.1%, and NAPS has 1.06%. (Sources: [14] and National Bureau of Statistics [15].)

Table 6 indicates that by volume, ATM has the highest penetration or usage while M-Cash has the least. A breakdown of the statistics shows that NEFT has 2.10%, ATM has 54.15%, POS has 9.89%, Internet has 1.96%, Mobile Money has 3.23% and NIP has 25.08%. E-BillsPay has 0.06%, Remita has 2.69%, M-Cash has 0.01%, Central Pay has 0.03%, and NAPS has 0.80%. For value, NIP has the highest while M-Cash has the least. A breakdown of the statistics shows that NEFT has 15.05%, ATM has 6.48%, POS has 1.42%, Internet has 0.19%, Mobile Money has 1.11% and NIP has 56.57%. E-BillsPay has 0.55%, Remita has 13.63%, M-Cash has less than 0.01%, Central Pay has less than 0.1%, and NAPS has 5.00%. (Sources: [14], First Bank of Nigeria [16] and National Bureau of Statistics [15].)

Table 7 indicates that by volume, ATM has the highest penetration or usage while M-Cash has the least. A breakdown of the statistics shows that NEFT has 1.29%, ATM has 42.22%, POS has 14.27%, Internet has 2.45%, Mobile Money has 4.20% and NIP has 31.98%. E-BillsPay has 0.05%, Remita has 2.14%, M-Cash has 0.01%, Central Pay has 0.06%, and NAPS has 1.32%. For value, NIP has the highest while M-Cash has the least.

Findings and Conclusion
The study results show that in 2012, by volume, ATM had the highest penetration or usage while Internet and Mobile Money had the least.
By value, NEFT had the highest while Internet and Mobile Money had the least. In 2013, by volume, ATM had the highest penetration or usage while e-BillsPay had the least; for value, NEFT had the highest while e-BillsPay had the least. In 2014, by volume, ATM had the highest penetration or usage while e-BillsPay had the least; for value, NIP had the highest while e-BillsPay had the least. In 2015, by volume, ATM had the highest penetration or usage while Central Pay had the least; for value, NIP had the highest while Central Pay had the least. In 2016, by volume, ATM had the highest penetration or usage while Central Pay had the least; for value, NIP had the highest while Central Pay had the least. In 2017, by volume, ATM had the highest penetration or usage while M-Cash had the least; for value, NIP had the highest while M-Cash had the least. In 2018, by volume, ATM had the highest penetration or usage while M-Cash had the least; for value, NIP had the highest while M-Cash had the least. In the first quarter of 2019, by volume, NIP had the highest penetration or usage while M-Cash had the least; for value, NIP had the highest while M-Cash and Central Pay had the least. From the study, it is clear that ATM dominated the penetration of e-payment in terms of volume in Nigeria from 2011 to the first quarter of 2019: out of the eight years surveyed, ATM led by volume in seven, while NIP led in one, the first quarter of 2019. In terms of value, NEFT dominated in 2012 and 2013, while NIP dominated from 2014 to the first quarter of 2019. Although the e-payment system is challenged by security, infrastructure, legal and regulatory issues as well as socio-cultural issues [17], it is recommended that more electronic channels should be opened to deepen electronic transactions in the economy and fast-track transactions as the world moves into the tech revolution. Also, fraud emanating from electronic transactions should be checked and reduced to give consumers trust in such transactions. Lastly, banks must carry out more education and advertisement on electronic payments so that the Nigerian population will appreciate and use the electronic payment channels available.
4,445
2019-09-29T00:00:00.000
[ "Economics", "Business", "Computer Science" ]
THERMODYNAMIC PROPERTIES CALCULATION OF AIR – WATER VAPOR MIXTURES THERMAL PLASMAS
Knowledge of the thermodynamic properties of air-water vapor mixture thermal plasmas is important for estimating the performance of electrical arc interruption in this gas by a circuit breaker. In this paper, the thermodynamic properties of air-water vapor mixture thermal plasmas are calculated in a temperature range from 5000 K to 30000 K. The calculations are carried out by assuming local thermodynamic equilibrium at pressures of 1, 5 and 10 atm. The results show the influence of the initial water vapor proportion, and also of the pressure, on the thermodynamic properties of these plasmas. A bibliographical search shows that only the transport properties of water-argon mixture plasmas (Petr, 2008; Hrabovsky, et al., 2006) and those of pure water plasma (Aubreton, et al., 2008; Hrabovsky, et al., 1993) have already been studied. We therefore undertake in this study the calculation of the thermodynamic properties of air-water vapor mixture thermal plasmas. The mixture can form naturally because of air humidity, but the water vapor partial pressure in atmospheric air, even in zones with high humidity, leads to a maximum water vapor percentage of around 7% at a temperature of 40°C. As this value is not sufficient to bring enough hydrogen into the mixture, air-water vapor mixtures with high water vapor percentages are used in this study: 80% air - 20% water vapor, 50% air - 50% water vapor and 20% air - 80% water vapor. The increase of water vapor in the mixture could be achieved during the breaking of the electrical current by water injection into the circuit breaker. The proportions of air and water vapor in the chosen mixtures are given by volume. These three mixtures were chosen in order to determine the influence of water vapor on the plasma thermodynamic properties. The thermodynamic properties of pure air and pure water vapor plasmas are also calculated in order to make comparisons. This theoretical study complements our previous works (Kagoné, et al., 2012, pp. 211-221; Kagoné, et al., 2012, p. 012004; Kagoné, 2012; Kohio, et al., 2014, pp. 711-715; Kohio, et al., 2014, pp. 240-246) on the same plasmas. The objective of this study is to show the influence of water vapor on the thermodynamic properties of air plasma. It concerns the calculation of the mass density, enthalpy, specific heat capacity and sound velocity in the plasma. These parameters are necessary to determine the plasma enthalpy density and enthalpy flux. The values obtained also constitute data usable for the modeling of arc discharges in air-water vapor mixtures.

CALCULATION METHOD
The calculation of the plasma composition constitutes the first step in the determination of its thermodynamic properties and other characteristics.

PLASMA COMPOSITION
In the local thermodynamic equilibrium (LTE) regime, the composition of the studied plasmas (Table 1) is calculated from the mass action laws using the Newton-Raphson method (Kagoné, et al., 2012, pp. 211-221) in a temperature range from 5000 K to 30000 K at pressures of 1, 5 and 10 atm. We assume that air is composed of 80% nitrogen and 20% oxygen by volume. The three values of the pressure are considered in order to estimate the impact of pressure on the plasma thermodynamic properties.
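The full composition calculation couples the mass action (Saha and dissociation) equations with Dalton's law and quasi-neutrality and solves them by Newton-Raphson iteration. As a minimal, hedged illustration of that numerical scheme, and not the multi-species system used in the paper, the sketch below solves a single Saha ionization equilibrium for an atomic gas at 1 atm: the partition-function ratio is set to 1 and the only species datum is an assumed ionization energy of 14.53 eV (atomic nitrogen).

```python
import math

K_B = 1.380649e-23      # J/K
M_E = 9.1093837015e-31  # kg
H   = 6.62607015e-34    # J s
EV  = 1.602176634e-19   # J

def saha_constant(T, e_ion_ev, g_ratio=1.0):
    """S(T) = n_e * n_i / n_a for a single ionization stage (m^-3)."""
    therm = (2.0 * math.pi * M_E * K_B * T / H**2) ** 1.5
    return 2.0 * g_ratio * therm * math.exp(-e_ion_ev * EV / (K_B * T))

def electron_density(T, P, e_ion_ev=14.53):
    """Newton-Raphson solve of x^2 = S * (P/kT - 2x), with x = n_e = n_i
    (quasi-neutrality) and Dalton's law for the total particle density."""
    S = saha_constant(T, e_ion_ev)
    n_tot = P / (K_B * T)
    x = 0.5 * n_tot                       # initial guess
    for _ in range(50):
        f  = x * x - S * (n_tot - 2.0 * x)
        df = 2.0 * x + 2.0 * S
        step = f / df
        x -= step
        if abs(step) < 1e-6 * x:
            break
    return x

for T in (10000.0, 15000.0, 20000.0):
    print(f"T = {T:7.0f} K  n_e ~ {electron_density(T, 101325.0):.3e} m^-3")
```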
Given the temperature range considered here, the various chemical species taken into account in the plasma composition are electrons, diatomic molecules, neutral atoms and their corresponding ions, singly or doubly charged.

MASS DENSITY
The plasma mass density is determined from the following relation:
\[ \rho = \sum_i n_i m_i = \frac{P\,\bar{M}}{R\,T} \]
where \(n_i\) and \(m_i\) are respectively the numerical density and the mass of particle \(i\), \(\bar{M}\) is the plasma average molar mass, \(R\) is the perfect gas constant and \(P\) is the plasma pressure.

ENTHALPY
Knowing the internal partition functions of the various chemical species, their numerical densities and the plasma mass density, one can write:
\[ H = \frac{1}{\rho} \sum_i n_i H_i \]
The enthalpy \(H_i\) of each constituent is, in the standard LTE form,
\[ H_i = \frac{5}{2} k_B T + k_B T^2 \frac{\partial \ln Q_{\mathrm{int},i}}{\partial T} + E_i^0 \]
where \(E_i^0\) is the difference of energy between the fundamental level of each species \(i\) and a common reference level.

SPECIFIC HEAT CAPACITY
The plasma specific heat capacity at constant pressure \(C_P\) is calculated by numerical derivation of the enthalpy:
\[ C_P = \left( \frac{\partial H}{\partial T} \right)_P \]
Knowing the specific internal energy \(U\) of the system, its mass density and its specific enthalpy, one can determine its specific heat capacity at constant volume \(C_V\) with the following expression:
\[ C_V = \left( \frac{\partial U}{\partial T} \right)_V , \qquad U = H - \frac{P}{\rho} = H - Z_f \frac{R}{M_0} T \]
where \(Z_f\) is the compressibility factor, defined here with respect to the molar mass \(M_0\) of the cold gas by
\[ Z_f = \frac{P\,M_0}{\rho\,R\,T} , \]
\(R\) being the perfect gas constant and \(P\) the pressure.

SOUND VELOCITY
In a compressible fluid, the sound velocity \(a\) can be written:
\[ a = \sqrt{ \left( \frac{\partial P}{\partial \rho} \right)_S } \]
which involves the compressibility \(Z\) of the medium. Hansen (cited in Kagoné, 2012) shows that the second term on the right-hand side of equation (8) is generally close to 1, so that for a perfect gas:
\[ a = \sqrt{ \gamma\,\frac{P}{\rho} } = \sqrt{ \gamma\,\frac{R\,T}{\bar{M}} } , \qquad \gamma = \frac{C_P}{C_V} . \]
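Before turning to the results, the sketch below shows how the derived quantities follow numerically from tabulated \(H(T)\) and \(\rho(T)\) at fixed pressure, mirroring the relations above: \(C_P\) by a finite-difference derivative of the enthalpy, the perfect-gas estimate of the sound velocity, and the compressibility factor. The temperature profiles are invented placeholders standing in for a real LTE composition calculation, and taking the derivative of \(U\) along the constant-pressure tabulation is a simplification of the constant-volume derivative used in the paper.

```python
import numpy as np

R = 8.314462618  # J mol^-1 K^-1

def derived_properties(T, H, rho, M0, P):
    """Given tabulated specific enthalpy H(T) [J/kg] and mass density rho(T)
    [kg/m^3] at fixed pressure P [Pa], return Cp, Cv, gamma, the perfect-gas
    sound velocity and the compressibility factor versus the cold gas."""
    Cp = np.gradient(H, T)           # numerical derivative dH/dT at constant P
    U  = H - P / rho                 # specific internal energy
    Cv = np.gradient(U, T)           # simplified estimate of Cv
    gamma = Cp / Cv
    a  = np.sqrt(gamma * P / rho)    # a = sqrt(gamma * P / rho)
    Zf = P * M0 / (rho * R * T)      # compressibility factor
    return Cp, Cv, gamma, a, Zf

# Placeholder smooth profiles (not computed from a real composition)
T   = np.linspace(5000.0, 30000.0, 26)
rho = 3.5e2 / T                      # roughly 1/T behaviour at 1 atm
H   = 2.0e3 * T + 5.0e2 * T**1.1     # monotonically increasing enthalpy

Cp, Cv, gamma, a, Zf = derived_properties(T, H, rho, M0=28.97e-3, P=101325.0)
print(f"a(10000 K) ~ {np.interp(1.0e4, T, a):.0f} m/s")
```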
RESULTS
We present the evolution, versus temperature, of the following thermodynamic properties: mass density ρ, enthalpy H, specific heat at constant pressure C_P and sound velocity a in the plasma. Other parameters that are very significant for the analysis of arc extinction performance and its modeling are also presented: the enthalpy density (or enthalpy per unit volume) ρH and the energy flux (or enthalpy flux) ρHa. Some results of the plasma composition calculation are presented briefly. Representative figures of these different properties are figures 1 to 7 for a fixed pressure (1 atm) and figures 8 to 13 for variable pressure (1, 5, 10 atm). The calculations are applied to the plasmas listed in Table 1.

WATER VAPOR INFLUENCE
For a pressure P = 1 atm, the Mix 2 plasma composition appears in figure 1. These curves represent the evolution, versus temperature, of the numerical densities of the plasma particles. We limited the results to the Mix 2 case because the curves relating to the other mixtures (Air, Mix 1, Mix 3 and Water) are very similar in form to those of figure 1. One can note on this figure that: (i) the numerical density gradient of the diatomic neutral particles is very significant, and the N2 numerical density is the largest of all these diatomic particles at low temperature, which is linked to the high dissociation energy of this particle; (ii) the numerical density of the hydrogen atom H is higher than that of the other neutral atoms (O, N) over the whole temperature range; (iii) the electron numerical density is primarily due to the ionization of the hydrogen atom when the temperature is higher than 15000 K; (iv) the numerical densities of the doubly ionized atoms are very low over the whole temperature range considered; (v) in addition, the numerical density of the hydrogen atom in the plasma increases with the percentage of water vapor in the mixture. Figure 2 shows the variation of the mass density ρ versus temperature for the five studied plasmas.

The evolution of ρ(T) is identical for these plasmas: when the temperature increases, the mass density decreases. It should be noted that the gradients of ρ(T) are much more significant at low temperature (T < 9000 K). This is a direct consequence of the dissociations leading to the progressive disappearance of molecules as the temperature increases. In addition, at the same temperature, the plasmas of the mixtures characterized by a high percentage of water vapor (thus a high percentage of hydrogen) have the lowest mass densities.

ENTHALPY
The evolution of the enthalpy H versus temperature for the five chosen plasmas appears in figure 3. This characteristic increases strongly with temperature. The abrupt increases of H take place in the temperature ranges corresponding to the different chemical reactions: dissociation of N2 molecules around 7000 K; ionization of the atoms N, O and H around 15000 K; ionization of N+ around 27000 K. In addition, for a given temperature, the mixture containing the highest percentage of hydrogen leads to the highest enthalpy. This is related to the low value of the hydrogen mass density, which thus contributes to the increase of the enthalpy H.

SPECIFIC HEAT
The evolution of the specific heat at constant pressure C_P (figure 4) is strongly related to that of the enthalpy. The peaks of C_P correspond to the reactions of dissociation and ionization. As for the enthalpy, the quantity of hydrogen contained in the mixture determines the more or less high value of the C_P maximum. Fig. 4: Specific heat at constant pressure of the plasma versus temperature, at a pressure of 1 atm.

SOUND VELOCITY
The evolution of this characteristic versus temperature appears in figure 5 for the five plasmas. These curves, which are similar in form, reveal that the sound velocity is higher in the plasmas resulting from mixtures with a high proportion of water vapor. Knowing the plasma thermodynamic properties, one can calculate other characteristics essential for electrical arc modeling in circuit breakers. These characteristics are the enthalpy density (or enthalpy per unit volume) ρH and the energy flux (or enthalpy flux) ρHa.

Figure 6 shows the evolution of the enthalpy density versus temperature for the studied mixtures. The curves of this figure are very similar in form. To a first approximation, the enthalpy density of each mixture plasma can be considered to vary little with temperature. This is due to the variations of the mass density ρ and of the enthalpy H: when the temperature increases, ρ decreases whereas H increases. These variations are much slower at high temperature (T > 25000 K). It should especially be noted that for T > 6000 K, the plasma enthalpy density decreases with the percentage of water vapor in the mixture.

ENTHALPY FLUX
This parameter is one of the most significant characteristics when one approaches the modeling of circuit breaker arcs. The product ρHa is presented versus temperature for the studied mixtures (figure 7).

PRESSURE INFLUENCE
This study of the selected plasmas shows that, concerning the pressure, all these mixtures have the same behavior. We thus limit ourselves to the case of the plasma resulting from the 50% air - 50% water vapor mixture (Mix 2) in order to give the most significant results. The remarks are also valid for the plasmas of the other mixtures.

MASS DENSITY
The curves of figure 8 represent the variations of the ratio ρ/P of the mass density to the pressure for the Mix 2 plasma versus temperature, for various values of the pressure. They show that for a given temperature T, the mass density increases with the pressure P; this is explained by the expression of the state equation. This increase is of the order of the value of the pressure.
This phenomenon is due to the displacement of the dissociation and ionization equilibria towards higher temperatures. These reactions, which contribute to replacing the heavy species (molecules) by lighter species (atoms, ions and electrons), appear later when the pressure increases; for a given temperature, this results in a rise of ρ.

ENTHALPY
For a given temperature (figure 9), the enthalpy of the mixture plasmas decreases when the pressure increases. It should be noted that the reduction is more marked in the temperature zones where the dissociations and ionizations of the particles take place. When the pressure increases, the different reactions (dissociation and ionization) occur, as already indicated, at higher temperatures. In other words, for a given T, the rise of the pressure involves an increase of the plasma mass density ρ. This involves a reduction of the enthalpy, because it varies inversely with ρ. This phenomenon does not appear at very low temperature because the neutral molecular species are not very sensitive to the pressure modification.

SPECIFIC HEAT
The pressure influence on the specific heat C_P of the plasma of the 50% air - 50% water vapor mixture (Mix 2) is shown in figure 10. An increase of the pressure involves a translation of the maxima towards higher temperatures and a slight lowering of their values. This phenomenon is related to the evolution of the enthalpy versus pressure. Indeed, in the temperature range studied, the increase of the pressure leads to a relatively slower evolution of the numerical densities of the particles. This results in a slower variation of the enthalpy and, consequently, a weak reduction of the peak values of the specific heat at constant pressure.

SOUND VELOCITY
The pressure influence on the sound velocity a of the 50% air - 50% water vapor mixture plasma (Mix 2) is presented in figure 11. The sound velocity in the Mix 2 plasma varies very little with the pressure. For a given temperature, it decreases slightly when the pressure increases. The maximum variations, which take place in the ionization zone (12000 - 24000 K), do not exceed 12% when the pressure varies from 1 to 10 atm.

CONCLUSION
Thermodynamic properties serve as indispensable input for theoretical modeling and as important ingredients for experimental understanding. In this paper, the calculation of the thermodynamic properties of three air - water vapor mixture thermal plasmas is carried out. The calculation is made in the temperature range going from 5000 K to 30000 K and for three values of the pressure, based on the assumption of local thermodynamic equilibrium. All the equations necessary to determine accurately the thermodynamic properties of the studied plasmas have been given. The results show that the water vapor improves the thermodynamic properties of the mixture plasma over most of the temperature range (9000 K to 25000 K), which could have a positive influence on current interruption in this mixture. However, it is clear that these results are not sufficient to draw a final conclusion on this subject. Although some other parameters (electrode oxidation, dielectric rigidity, etc.) will have to be taken into account, these results can already be used in the modeling of electrical arc discharges in the air-water vapor mixture. The influence of the pressure is also studied. It results that the pressure increase involves a translation of the plasma characteristics towards high temperatures.
3,159.8
2018-06-25T00:00:00.000
[ "Engineering", "Physics", "Chemistry" ]
Oxytocin-Gly-Lys-Arg: A Novel Cardiomyogenic Peptide
Background: Oxytocin (OT), synthesized in the heart, has the ability to heal injured hearts and to promote cardiomyogenesis from stem cells. Recently, we reported that the OT-GKR molecule, a processing intermediate of OT, potently increased the spontaneous formation of cardiomyocytes (CM) in embryonic stem D3 cells and augmented glucose uptake in newborn rat CM above the level stimulated by OT. In the present experiments, we investigated whether OT-GKR exists in fetal and newborn rodent hearts, interacts with the OT receptor (OTR) and primes the generation of contracting cells expressing CM markers in P19 cells, a model for the study of early heart differentiation. Methodology/Principal Findings: High performance liquid chromatography of newborn rat heart extracts indicated that OT-GKR was a dominant form of OT. Immunocytochemistry of mouse embryos (embryonic day 15) showed cardiac OT-GKR accumulation and OTR expression. Computerized molecular modeling revealed OT-GKR docking to active OTR sites and to the V1a receptor of vasopressin. In embryonic P19 cells, OT-GKR induced contracting cell colonies and ventricular CM markers more potently than OT, an effect suppressed by OT antagonists and OTR-specific small interfering (si)RNA. The V1a receptor antagonist and specific siRNA also significantly reduced the number of OT-GKR-stimulated contracting P19 cells. In comparison to OT, OT-GKR induced less α-actinin, myogenin and MyoD mRNA, which are skeletal muscle markers, in P19 cells. Conclusions/Significance: These results raise the possibility that C-terminally extended OT molecules stimulate CM differentiation and contribute to heart growth during fetal life.

OT-G is converted by an α-amidating enzyme to the C-amidated nonapeptide, which is released into the circulation in this form. OT-X forms have been detected in the developing brain of animals and in fetal plasma. In rats, enzymatic OT-X conversion to OT is almost complete in adulthood, but not in fetuses, which accumulate OT-X in the brain [4,5]. Similarly, the plasma OT-X elevation reported during early fetal development in sheep [3] is reduced in late gestation, when OT begins to predominate in the circulation. OT acts on only one type of OT receptor (OTR), an integral membrane protein that is a member of the rhodopsin-type (class I) G protein-coupled receptor family, which includes the arginine vasopressin (AVP) receptor subtypes (V1aR, V1bR and V2). The peptide sequences of AVP and OT differ only in 2 amino acids, in positions 3 and 8, which enables these hormones to interact with their respective receptors [6]. OTR and OT biosynthesis is detected in the atria and ventricles of the heart, and OT is thought to be involved in atrial natriuretic peptide (ANP) release from the cardiomyocytes (CM) of newborn rats [7,8] and humans [9]. Indeed, OTR immunostaining of the heart is predominantly detected in CM [10]. Because radioimmunoassay (RIA) indicates OT elevation in fetal and newborn hearts at a stage of intense cardiac hyperplasia, we hypothesized a role for OT in CM differentiation [8]. Our initial experiments demonstrated that OT induces CM differentiation of the mouse embryonal carcinoma (EC) P19 cell line, a common cell model for studying early heart differentiation [11]. Several reports have confirmed OT-stimulated cardiomyogenesis in different lines of embryonic stem (ES) cells [12,13,14].
Some of these observations pointed to a Ca2+ mobilization mechanism in response to OT treatment in ES D3 cells differentiating into CM [15]. We established that the OTR-nitric oxide-cGMP pathway is essential for the OT-elicited differentiation of P19 stem cells into CM, in association with elevation of the transcription factors GATA-4 and myocyte-specific enhancer factor 2c (Mef2c) [12]. More recently, we obtained evidence that OT-GKR possesses biological activity, since treatment with OT-GKR stimulated glucose uptake in rat CM [16] and enhanced spontaneous cardiomyogenesis of ES D3 cells more potently than the OT nonapeptide [15]. In the present study, we reasoned that if OT-GKR plays a role in cardiomyogenesis, we should detect the molecule in fetal and newborn animal hearts. To investigate whether the biological actions of OT-GKR are mediated by the OTR, the interactions of these molecules were analyzed by computer-generated 3D models. Because EC P19 cells do not spontaneously differentiate into CM after their aggregation in the presence of fetal calf serum [11], as has been observed for ES D3 cells [15], we used these cells to analyze whether OT-GKR treatment generates contracting colonies expressing CM markers. The specificity of this reaction was examined through inhibition by OT antagonists (OTA) and OTR suppression by specific small interfering RNA (siRNA), and similar conditions were investigated for V1aR.

OT-GKR is the dominant OT form in developing hearts
Synthetic OT-X molecules and OT were used to identify elution profiles from the HPLC column. Specific wavelength intervals at 215 and 280 nm were calibrated by spectrophotometry. The HPLC retention factors for the OT standards were k' = 0.67 for OT, k' = 1.34 for OT-GKR, k' = 1.67 for OT-GK, and k' = 2.01 for OT-G. Similar retention factors for these molecules were found in heart extracts from newborn rats. Analysis of the eluates released from the column revealed a small peak at k' = 0.67, corresponding to OT, and a large second peak at k' = 1.34, calibrated as the retention factor of OT-GKR. A minor peak was observed at k' = 1.67, the point of OT-GK elution, and a minimal peak, if any, was detected at the point of elution of OT-G. The presence of OT-GKR in specific HPLC fractions was confirmed by specific RIA with antibodies specific for OT and antibodies detecting OT-X forms.
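For reference, the retention factor k' quoted for the OT standards is the usual chromatographic capacity factor, k' = (t_R - t_0)/t_0, where t_R is the analyte retention time and t_0 the column dead time. The sketch below uses hypothetical retention times and an assumed dead time, chosen only so that the computed values land near the reported ones.

```python
def retention_factor(t_r, t_0):
    """Chromatographic retention (capacity) factor k' = (t_R - t_0) / t_0."""
    return (t_r - t_0) / t_0

# Hypothetical retention times in minutes; t_0 is an assumed dead time.
t_0 = 6.0
for name, t_r in [("OT", 10.0), ("OT-GKR", 14.0), ("OT-GK", 16.0), ("OT-G", 18.1)]:
    print(f"{name:>7}: k' = {retention_factor(t_r, t_0):.2f}")
```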
Immunocytochemistry reveals OT-GKR in developing hearts
Immunocytochemistry of whole mouse embryos (embryonic day 15) demonstrated the entire OT system in somites, the mesoderm masses distributed along the neural tube that develop into the dermis, skeletal muscle and vertebrae. As shown in Figure 1A, significant OT-GKR expression was found inside somites, whereas the OT nonapeptide (Fig. 1B) and OTR (Fig. 1C) were seen on the periphery of somites showing apoptosis (Fig. 1D). Control staining with the OT-GKR-specific antibody pre-absorbed with OT-GKR was negative in somites (Fig. 1E) as well as in whole mouse sections (Fig. 1F). In sections stained with the OT-GKR antibody, intense brown staining was observed in fetal hearts (Fig. 1G and 1H). Double staining by immunofluorescence indicated OT-GKR deposits in cells stained with the CM marker troponin C (Fig. 1I). Fetal heart sections were also stained with anti-OTR antibody (Fig. 1J), whereas with anti-OT antibody the staining was barely visible (Fig. 1K). Immunostaining with anti-vasopressin antibody (Fig. 1L) was negative, and some staining was found with anti-V1aR antibody (Fig. 1M). No staining was seen in the corresponding negative control (Fig. 1N).

Docking analysis shows OT-GKR interaction with OTR
Because of the significant molecular differences between OT-GKR and OT, the question was raised whether OT-GKR interacts with the OTR binding sites. For this reason, we performed computational docking analysis of OT molecules in OTR binding sites. Figure 2 illustrates the docking of the 3-D human OTR with the modeled OT-GKR and OT molecules in front upright view (Fig. 2A) and in a view from the extracellular side (Fig. 2B). Three conformations for both OT-GKR and OT were analyzed. Six related receptor-Gα-segment-OT-GKR complexes were obtained. Possible hydrophobic and electrostatic interaction points in dynamic complexes of these molecules were indicated by estimated binding affinity energies of -6.6±0.4 kJ/mol for OT-GKR and -11.8±0.6 kJ/mol for OT. Using distance criteria, the program identified the receptor amino acid residues interacting with the ligands. The essential hydrogen bonds and strong electrostatic interactions between both OT molecules and the receptors were characterized by visual inspection. The results are reported in Figure 2D and Table 1. Several amino acid residues have been proposed to interact with OT and OT-GKR in the OTR model, where the red bars represent docking with OT-GKR and the black bars indicate docking with the OT molecule (Fig. 2D). Docking to OTR at positions V115, K116, Q119, M123, Q171, F185, T205, Y209 and Q295 was noted for both the OT and OT-GKR models (Table 1). These docking positions constituted 42% of all observed docking sites of OT-GKR. Among the OT-GKR interactions, special notice should be given to the binding of arginine-12 (R12) to OTR (Fig. 2D). We also analyzed the docking of the OT-GKR and OT molecules in the 3-D human V1aR model; the binding affinity energies were similar, -7.38±0.3 kJ/mol for OT-GKR and -11.11±0.6 kJ/mol for OT. Figure 2E and Table 1 show OT-GKR and OT docking at the respective binding sites. Both molecules were docked at positions Q104, K128, Q311, L335, S338 and N340. OT-GKR was bound exclusively at positions G134, S138 and A299.

OT-GKR stimulates contracting cell colonies in EC P19 cells
EC P19 cell cardiogenic differentiation is not spontaneous, or is very rare. Using the hanging drop method, in the presence or absence of inducers, EC P19 embryoid bodies (EBs) were analyzed for their beating activity from day 2 until day 14 after plating. In cell cultures not exposed to inducers (NI), no beating cell colonies were found (Fig. 3A), although they sometimes displayed rare beating foci. We chose micromolar concentrations to compare the action of OT and the OT-X forms. The 10⁻⁶ M concentration was found to be the most efficient in inducing cardiomyogenesis. As with OT treatment, the extended OT forms induced the appearance of numerous colonies, with cell-specific organization, in linear, parallel arrays or round clusters displaying synchronized contractions (Fig. 3B-E). Video analysis revealed that OT and OT-X were equally efficient in producing beating cell colonies by day 14 (Fig. 3F). However, observations on day 8 disclosed a significantly larger number of contracting cell colonies in samples induced by OT-GKR (9±0.6) compared to cells induced by OT (5±0.4), OT-G (4±0.6) or OT-GK (7±0.6), p<0.05. To ascertain OTR involvement in OT-GKR-mediated cardiomyogenesis, OT-GKR-treated EBs were additionally exposed to 10⁻⁶ M OTA (Fig. 3F).
Indeed, OTA effectively inhibited the number of OT-GKR-stimulated contracting colonies (2±0.3 foci in 24 wells at day 8, and 4±0.6 at day 12) vs. OT-GKR alone (23±0.2/24 at day 12, p<0.05). Application of the V1aR antagonist also diminished the number of beating cells induced by OT-GKR (to 2±0.3 foci in 24 wells at day 8 and 8±0.6 at day 12, p<0.05), indicating that both OTR and V1aR contribute to the generation of contracting cell colonies. To verify receptor function in the cardiomyogenesis of EC P19 cells, we used specific siRNA for OTR and V1aR silencing. As illustrated in Figure 3G, staining with OTR antibody disclosed OTR expression in most EC P19 cells. Similar staining for V1aR is presented in Fig. 3I. This was significantly reduced in cells subjected to both siRNA treatments (OTR ...).

Because OT-GKR accumulation in fetal hearts was seen inside cells expressing the CM marker, we investigated whether the endogenous production of OT-GKR in EC P19 cells initiates cardiomyogenesis. For this purpose, the cardiogenic potency of OT-GKR was studied in EC P19 cells stably expressing the pcDNA3.1/Amp-OT-GKR-IRES/EGFP (green fluorescence) construct (Fig. 4A). As shown in Figure 4B-E, OT-GKR protein was disclosed by immunofluorescence in approximately 30% of transfected cells. Although at day 12 the beating cell colonies induced by endogenous OT-GKR were fewer (10/24±0.4) than those receiving OT-GKR from the medium (21/24±0.6, p<0.05), the OT-GKR-transfected cells displayed large clusters of beating activity (Fig. 4F). For the quantitative assessment of cardiomyogenic differentiation, the EGFP expression pattern in P19Cl6-GFP cells was analyzed by fluorescence microscopy and flow cytometry (Figure ...).

OT-GKR induces differentiation markers in EC P19 cells
Cardiac and skeletal markers were altered during OT-mediated cell differentiation. Figure 6 illustrates the expression of genes at the 6th day of EC P19 cell differentiation, when the first beating cell colonies were detected. Exposure of EC P19 cells to OT and OT-GKR increased GATA-4 mRNA (Fig. 6A), the transcription factor involved in cardiac development, and Mef2c mRNA (Fig. 6B), the gene involved in cardiac morphogenesis and myogenesis. As shown in Figures 6C and 6D, treatment also induced the mRNA of genes involved in skeletal muscle morphogenesis, myogenin mRNA (Fig. 6C) and MyoD mRNA (Fig. 6D). Interestingly, the expression of these markers was higher in OT-induced cells than in cells stimulated by OT-GKR. Induction of all tested mRNA by OT-GKR was reduced in the presence of OTA to the level seen in NI controls (Fig. 6A-D). The differentiation process in cells induced by OT and OT-GKR was indicated by loss of OCT-4 immunostaining, a marker of the undifferentiated state (Figs. 7A-A4). Cells treated with OT (Fig. 7B2) or OT-GKR (Fig. 7B3) displayed DHPRα, a marker of the advanced contractile apparatus, and the cardiac transcription factor MLC-2v, a marker of the ventricular phenotype (Figs. 7C2 and 7C3). Sarcomeric α-actinin, the marker of both skeletal and cardiac muscles, was produced in larger quantities by cells treated with OT (Fig. 7D2) than by those stimulated with OT-GKR (Fig. 7D3).

Discussion
This study reports original observations that: (i) fetal mouse and newborn rat hearts produce the OT-GKR molecule in cells expressing the CM marker troponin C; (ii) experiments on EC P19 cells, the model of early heart differentiation, demonstrate the ...
Table 1. List of the OTR and V1aR residues involved in the interactions with OT-GKR and OT; * - AVP agonist binding site; ** - OTA antagonist binding site. (doi:10.1371/journal.pone.0013643.g002)

Our previous data indicated that extended OT forms could be produced in the developing heart, since OT synthesis was seen in CM cultures from newborn rats [8] and in EC P19 cells [12]. In the present study, HPLC analysis of newborn rat hearts and immunocytochemistry of whole mouse embryos revealed that OT-GKR is abundant in developing rodent hearts. The selectivity of the OT-GKR antibodies and the lack of their reaction with the OT nonapeptide have already been demonstrated by RIA cross-reactivity analysis [15]. Moreover, confocal microscopy in D3 stem cells producing OT-GKR from transfected cDNA constructs further indicated positive reactions with anti-OT-GKR but negative reactions with anti-OT antibodies [15]. On the other hand, recent data suggest positive effects of OT on osteoblast development, together with bone formation defects upon OT and OTR deficiency [17]. In the present study, the localization of the OT system in somites, the embryonic vertebral precursors, is consistent with findings that OT is involved in osteogenesis and is important in skeleton mineralization [17,18]. This result provides further evidence that the identification of the OT system in fetal hearts is specific and not due to methodological aberrations. We have already reported that OT-GKR increases cytosolic Ca2+ in ES D3 cells [15]. Among the receptors of neurohypophyseal hormones, this effect is attributed to OTR and V1aR, whereas V2R is coupled with adenylate cyclase and the second messenger cAMP [6]. Moreover, all of these receptors have the ability to bind AVP and/or OT with varying affinities. Thus, both ligands are capable of initiating signaling cascades mediated by either receptor [19]. In fact, our studies indicate involvement of NO synthases in the CM differentiation mediated by OT [12] as well as by AVP [20]. This suggests the physiological relevance of both the OT and AVP systems and their versatility in cardiomyogenesis. Early studies demonstrated that eNOS favors the maturation and cardiomyogenesis of murine embryoid bodies in vitro, because chronic NOS inhibition in these embryonic cells with the guanylyl cyclase inhibitor 1H-[1,2,4]oxadiazolo-[4,3-a]-quinoxalin-1-one (ODQ) resulted in differentiation arrest, an event reversed upon incubation with the NO donor [21]. Likewise, NO donors and human iNOS-gene adenoviral transfection in mouse embryonic stem cells cultured in embryoid bodies facilitated their differentiation into beating cardiomyocytes [22]. OT signaling targeting eNOS is generally activated through a PLC/calcium/calmodulin pathway [23], but eNOS activation may also occur via the phosphatidylinositol-3-kinase (PI-3-K)/AKT pathway [24], as has been found in endothelial cells [25] and in CM [16,26]. This raises the possibility of NO being a critical OT signaling molecule for both the preservation of cardiac cells and the differentiation of the cardiac stem cell reserve. In P19 cells expressing GFP under a cardiac-specific promoter to monitor their CM differentiation, NO stimulated guanylate cyclase to produce cGMP, an activator of cGMP-dependent protein kinase G [12]. Recent studies indicate that the NO pathways that promote CM differentiation include repression of self-renewal genes, such as NANOG, and an increase in differentiation genes such as GATA4 [27].
During proliferation of HUVEC, calcium mobilization in response to OT treatment seems to be instrumental in the activation of the NO pathway, as shown by the dramatic reduction of OT-induced NO release when calcium is chelated [28]. We demonstrated that in CM, chelation of intracellular calcium by BAPTA and inhibition of calmodulin kinase II dramatically reduced OT-mediated glucose uptake [16]. A Ca2+ mobilization study in D3 cells showed functional OT-GKR and OT activity in embryonic stem cells [15]. The observed sustained effect on Ca2+ may be due to the nature of both peptides. In support of this notion, prolonged and long-lasting Fura-2 mobilization of Ca2+ was demonstrated in cardiac cells in response to AVP [29,30]. Further data suggest that activation of OTR provides a local Ca2+ signal that induces eNOS activation [15] and possibly natriuretic peptides [11]. By comparison, the guanylyl cyclase receptors of natriuretic peptides generate cGMP, which can also contribute to CM differentiation from embryonic stem cells [31]. An increase in the mRNA expression of the cardiac-specific transcription factor Nkx2.5 and the cardiac markers MLC2 and MHC followed treatment of embryonic stem cells with NO donors and cGMP activators [32]. In our previous studies in P19 cells, GATA4 expression was only moderately reduced when L-NAME was administered together with OT, but the transcription factors MEF2c and Nkx2.5 were extensively downregulated. This lack of balance of transcription factors can severely impair the cardiomyogenic program, which requires physical interaction and synergistic modulation of target gene expression [33,34]. The differences in the expression of the myogenic regulatory factors MyoD and myogenin, as well as GATA4 and MEF-2, in the response of P19 cells to OT and OT-GKR stimulation can influence the developmental decisions of stem cells differentiating into the skeletal or cardiac muscle lineage. Indeed, the difference in α-actin expression between OT- and OT-GKR-differentiated P19 cells might be directly related to the level of myogenic regulatory factors promoting the skeletal muscle lineage [35]. We performed computerized docking analysis to assess the relationship between OT-GKR, OTR and V1aR. Interactions between OTR residues and 4 OT-GKR amino acids, Ile-3, Leu-8, Tyr-2 and Arg-12, were observed in OTR-OT-GKR complexes. The recognition of Ile-3 by OTR seems to be specific for OT molecules, because this amino acid residue is absent in the cyclic part of the AVP molecule. Replacement of Ile-3 by other amino acid residues causes a significant decrease of affinity for OTR [36]. Usually, the OT-binding site is formed by transmembrane helices 3-7 and extracellular loops 2 and 3 of the receptor [37]. Based on studies on other members of the OT-VP receptor family, specifically V1aR, it is hypothesized that the cyclic part of OT is lodged in the upper third of the receptor binding pocket and interacts with transmembrane domains 3, 4 and 6, whereas the linear C-terminal part of the OT molecule remains closer to the surface and interacts with transmembrane domains 2 and 3, as well as with the connecting first extracellular loop [6]. This hypothesis is supported by various findings using site-directed mutagenesis techniques, as well as by domain-swapping experiments between the OTR and the V2R [6,19].
In our study, molecular docking of OT and OT-GKR showed that while both peptides are able to interact with OTR with significant binding energies, the binding pocket for OT-GKR might be slightly different from the binding pocket for OT. The results demonstrate that OT-GKR can interact with several sites in transmembrane domains 3, 4, 5, and 7 of V1aR. The OT binding sites disclosed in the V1aR model are in full agreement with those reported by Ślusarz et al. [38]. These results provide guidelines for experimental site-directed mutagenesis and, if confirmed, they may be helpful in designing new selective OT analogs with agonistic properties for OTR and V1aR. Some data suggest that OT-GKR binding to V1aR is functional. V1aR is present in ES D3 cells in the very early stages of cardiac development and is then strongly down-regulated [20]. Inhibition of P19 cell differentiation with OTA revealed that the antagonist did not reduce the number of OT-GKR-stimulated beating cell colonies to control levels. This indicates the presence of other signaling pathways in response to OT-GKR. The high sequence identity between OT and AVP receptors [39] suggests that AVP receptors may be involved, at least in part, in OT-GKR-mediated pathways. We observed that both a V1aR antagonist and V1aR silencing partially blocked the cardiomyogenic effects of OT-GKR in EC P19 cells. These results are consistent with the observation that OTA completely blocked OT-mediated stimulation of glucose uptake in rat neonatal CM, whereas the glucose uptake induced by OT-GKR was only partially blocked [16]. C-terminally extended OT peptides stimulate EC P19 cell differentiation into beating cell colonies expressing CM markers. OT-GKR displays the highest cardiomyogenic action among OT molecules. Treatment with both OT and OT-GKR of EC P19 cells and their derived clone, EC P19 Clone 6, expressing a GFP reporter under the transcriptional control of the MLC-2v promoter, produced similar morphological changes and induced GFP fluorescence. For the moment, this action relates to differentiation to the ventricular CM phenotype, since the GFP-P19Cl6 model of CM differentiation is controlled by the ventricle-associated MLC-2v promoter [40]. Nevertheless, the potential of OT and OT-GKR to promote the ventricular phenotype is of direct interest in the development of cell therapies for the heart. A positive effect of OT/OTR signaling in inflammation is already known [41,42,43]. Furthermore, OT treatment has recently been shown to have a beneficial effect in healing myocardial infarction [26,43,44]. The clinical application of OT-GKR in this pathology could be safe because of the specific interaction with OTR and V1aR, as described in the present study. The weaker effects of OT-X compared with OT on uterine contractions have already been reported [45], and replacement of OT by OT-GKR in the therapy of cardiac pathologies could reduce the vasoconstriction attributed to V1aR activation by OT [46]. A specific question is whether the cardiomyogenic action of OT-X is the result of cleavage to the OT peptide by proteolytic activity potentially present in EC P19 cultures. In a study by Altstein et al. [4,5], however, rat brain and plasma pro-AVP cleavage efficiency in adults and fetuses was high (99 and 95% cleavage, respectively), resulting in the formation of fully processed amidated AVP forms, with no detectable, partially processed peptides. Pro-oxytocin (pro-OT) processing in adults was very similar (over 99% cleavage), resulting in the formation of fully processed, amidated OT.
However, pro-OT processing efficiency in the fetus was very low and incomplete, culminating in 40% unprocessed precursor and the accumulation of C-terminally extended OT-X. Along the same line of reasoning, it is possible that OT-GKR differs from OT in efficiency with respect to other OT functions in the heart, such as stimulation of ANP release [47]. If this is the case, we can speculate that the relative levels of OT and OT-GKR (and, hence, the relative levels of OT-processing enzymes) could have a finely tuned, regulatory impact on heart development and homeostasis. Conclusion The present study demonstrates that OT-GKR peptides have a cardiomyogenic action via OTRs and also via V1aR. The results raise the possibility that C-terminally extended OT molecules can contribute to heart growth during fetal life, even if the posttranslational machinery of OT processing is not completely developed. This OT-GKR-induced cardiac differentiation is relevant to the development of cell therapies for hearts injured by infarction, in which the cardiac differentiation of somatic stem cells in diseased adult hearts could be induced for their regeneration. Materials and Methods High pressure liquid chromatography (HPLC) and RIA Dried acetone-extracted homogenates were dissolved in an aqueous solution of 20% acetonitrile containing 0.1% trifluoroacetic acid and applied to a Vydac 218-TP24 column (56250 mm) for reverse-phase HPLC (Waters, Milford, MA). The column was eluted with a linear gradient of 20-50% CH3CN/0.1% TFA at a flow rate of 1.2 ml/min. The fractions were collected and lyophilized in a Speed-Vac. Direct RIA [15], performed after reconstitution of the samples in RIA buffer, ascertained the presence of OT-GKR in the HPLC fractions. The OT-VA18 antibody was used to measure OT-GKR concentrations. Synthetic OT as well as pituitary gland extracts, chromatographed under identical conditions, served as standards [6]. Molecular docking MolDock software was used to investigate the interaction of OT molecules with OTR and V1aR [48]. This program makes use of predicted cavities during the docking process and identifies potential ligand-binding modes (see www.molegro.com for details). 3-D models of OT-GKR and OT were constructed with the Biopolymer module of the SYBYL molecular modeling package (Tripos Associates, St. Louis, MO). The 3-D models of activated OTR (OTR Gq11) and human vasopressin V1aR were described previously [38]. The MolDock scoring function served to pre-compute score grids for docking evaluation. Potential binding sites were detected with the grid-based cavity prediction algorithm. The saved conformations of the ligand-receptor complexes were subjected to detailed 3-D analysis for interactions at active sites. Cell culture and differentiation P19 cells (CRL-1825) from the American Type Culture Collection (Manassas, VA, www.atcc.org) were cultured as reported elsewhere for parental P19 cells [11]. Green fluorescent protein (GFP)-P19Cl6 cells, a gift from Dr. C. L. Mummery (Hubrecht Laboratory, University Medical Center, Utrecht, Netherlands), were cultured as described for parental P19 cells [12]. To analyze cardiomyogenesis in conditions of endogenous OT-GKR production in stem cells, the OT-GKR-IRES-EGFP construct (prepared as reported previously) [15] [20]. Differentiating P19 cell EBs in suspension were transferred to 24-well plates, followed by transfection with OTR siRNA (Cat. No. SI01367779, Qiagen) and V1aR siRNA (Cat. No. SI02673083, Qiagen) according to the manufacturer's instructions.
After 24 h, the cells were re-transfected as before and incubated in differentiation media in the presence or absence of OT-GKR. EB outgrowths were examined for beating activity. Immunocytochemistry and microscopic analysis Cell morphology was examined under a Model IX51 inverted microscope (Olympus, Tokyo, Japan, www.olympus.com) equipped for epifluorescence analysis. Phase contrast micrographs were taken with a Q Imaging QICAM-IR Fast 1394 Digital CCD camera. At day 8, contracting cell colonies in fields of 3 independent samples were counted from the video record with ImageJ software (National Institutes of Health, Bethesda, MD, www.nih.gov). Immunocytochemistry was performed in P19 cells, as described elsewhere [10]. To obtain green fluorescence, a secondary biotinylated horse antibody against mouse IgG (BA-2001, Vector Laboratories, Burlingame, CA) was followed by a streptavidin-Alexa Fluor 488 conjugate (S11223, Invitrogen Life Technologies). Control staining, obtained by overnight pre-incubation of the anti-OT-GKR antibody at 4°C in the presence of 10^-6 M synthetic OT-GKR or by the omission of primary antibodies, was negative, emphasizing the specificity of the immunocytochemistry. Panoramic, cross-sectional, digital images of stained whole embryos were prepared with Adobe Photoshop CS software (Adobe Systems Inc., San Jose, CA). Fluorescence-activated cell sorting (FACS) analysis GFP-P19Cl6 cell cultures were digested to a single-cell suspension with Accutase (Cat. No. AT104, Innovative Cell Technologies, Inc., San Diego, CA, www.innovativecelltech.com) for adhered cells (days 6 and 14) or Accumax (Cat. No. AM105, Innovative Cell Technologies, Inc.) for suspended EBs (day 5 of differentiation). The dissociated cells were washed with PBS, suspended in PBS containing Ca2+ (1 mM) and Mg2+ (0.5 mM) at room temperature, and filtered through a cell strainer with 70 µm nylon mesh (Falcon, Cat. No. 352360, BD Biosciences, Mississauga, ON, Canada, www.bdbiosciences.ca). GFP-positive cells were quantified from a minimum of 10,000 viable cells and sorted in the FL1 channel on the basis of forward-scattered and side-scattered light in a FACS Aria cell sorter (BD Biosciences) as the cells traversed the beam of an argon ion laser (488 nm). The BD Biosciences software program CellQuest was applied for data acquisition and analysis. Non-induced cells served as negative controls. Sorted GFP-positive cells were collected in culture media and allowed to reattach to culture dishes, at least for the confirmation of beating contractility. Reverse transcription-polymerase chain reaction (RT-PCR) Total cellular RNA was extracted with TRIzol Reagent (Cat. No. 15596-018, Invitrogen Life Technologies) according to the manufacturer's protocol. To remove genomic DNA, total RNA was treated with 2 units of Turbo DNase (Turbo DNA-free, Cat. No. AM1907, Applied Biosystems/Ambion, Streetsville, ON, Canada). First-strand cDNA was synthesized in a final volume of 40 µl containing first-strand buffer, 4 µg of cellular RNA, 4 µl of hexanucleotide primers (Cat. No. 588753, Invitrogen Life Technologies), and avian myeloblastosis virus reverse transcriptase (12 units/µg RNA, Cat. No. 28025-013, Invitrogen Life Technologies) for 180 min at 37°C. First-strand cDNA (5 µl) was then utilized for PCR amplification with exon-specific oligonucleotide primers in a Robocycler Gradient 40 thermocycler (Stratagene, La Jolla, CA). For all PCR studies, the number of cycles employed was within the linear range of amplification.
These values were normalized to the corresponding 18S mRNA. Primer sequences and conditions for the PCR analysis of the 143-bp GATA-4 NM_008092 (annealing temperature 61°C, 32 cycles), 57-bp Mef2c NM_025282.1 (annealing temperature 60°C, 34 cycles), 272-bp Myogenin NM_008092 (annealing temperature 54°C, 18 cycles), and 144-bp MyoD NM_010866 (annealing temperature 54°C, 30 cycles) mouse transcripts have already been described [12]. The PCR products were size-fractionated by 2% agarose gel electrophoresis and visualized with the Storm 840 Imaging System and ImageQuant software (Version 4.2, Molecular Dynamics Inc., Sunnyvale, CA). Statistics The results are expressed as mean ± SEM. Comparisons between groups were evaluated by 1-way ANOVA, followed by the Newman-Keuls multiple comparison test, with the PRISM computer program. Statistical significance was taken as p < 0.05.
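To make the statistical workflow above concrete, the following sketch reproduces the reported procedure on simulated data: group means with SEM, a 1-way ANOVA, and a post hoc comparison. All values are invented, and Tukey's HSD from statsmodels is used only as a stand-in because the Newman-Keuls test run in PRISM is not part of the common Python statistics libraries.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Simulated measurements (e.g., normalized expression) for three conditions;
# the values are invented solely to illustrate the analysis workflow.
rng = np.random.default_rng(0)
groups = {
    "control": rng.normal(1.0, 0.2, 6),
    "OT":      rng.normal(1.6, 0.2, 6),
    "OT-GKR":  rng.normal(2.1, 0.2, 6),
}

# Descriptive statistics reported as mean +/- SEM.
for name, values in groups.items():
    print(f"{name}: {values.mean():.2f} +/- {stats.sem(values):.2f}")

# 1-way ANOVA across the three groups.
f_stat, p_value = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Post hoc pairwise comparisons (Tukey HSD as a stand-in for Newman-Keuls).
all_values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(all_values, labels, alpha=0.05))
```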
6,799.8
2010-10-26T00:00:00.000
[ "Biology", "Medicine" ]
Training Self-Regulated Learning in the Classroom: Development and Evaluation of Learning Materials to Train Self-Regulated Learning during Regular Mathematics Lessons at Primary School The aim of the intervention, based on the self-regulation theory by Zimmerman (2000), was to promote a powerful learning environment for supporting self-regulated learning by using learning materials. In the study, primary school teachers were asked to implement specific learning materials into their regular mathematics lessons in grade four. These learning materials focused on particular (meta)cognitive and motivational components of self-regulated learning and were subdivided into six units, with which the students of the experimental group were asked to deal on a weekly basis. The evaluation was based on a quasi-experimental pre-/post-control-group design combined with a time series design. Altogether, 135 fourth graders participated in the study. The intervention was evaluated by a self-regulated learning questionnaire, a mathematics test, and process data gathered through structured learning diaries for a period of six weeks. The results revealed that students with the self-regulated learning training maintained their level of self-reported self-regulated learning activities from pre- to posttest, whereas a significant decline was observed for the control students. Regarding students' mathematical achievement, a slightly greater improvement was found for the students with the self-regulated learning training. Introduction According to Boekaerts et al. [1], the concept of self-regulation is used in a variety of psychological fields (see also [2]). In research on educational settings, self-regulated learning [3] is classified as an important factor for effective (school-based) learning and academic achievement (e.g., [4][5][6]). Regarding theories and models of self-regulation, there are different approaches to describing the construct. Some models regard self-regulation as consisting of different layers (e.g., [7]), while other models emphasize the procedural character of self-regulation and describe different phases (e.g., [8][9][10]). In our study, we refer to the self-regulation model developed by Zimmerman [8], who defines self-regulation as a cyclical process that "refers to self-generated thoughts, feelings, and actions that are planned and cyclically adapted to the attainment of personal goals" (page 15). The model distinguishes between three learning phases: the forethought or planning phase, the performance or volitional control phase, and the self-reflection phase. For each of these phases, two components are characterized, which are in turn represented by specific processes.
As components of the forethought phase, both the analysis of the given task (task analysis) and self-motivation beliefs are relevant variables at the beginning of the learning process. Task analysis includes the processes of goal setting and strategic planning. According to Locke and Latham [11], goal setting has been defined as a decision upon specific outcomes of learning or performance. Highly self-regulated students organize their goal systems hierarchically and tend to set process goals in order to achieve more distal outcome goals [8]. Furthermore, strategic planning is a process relevant to the forethought phase, and closely related to goal setting, because after selecting a specific goal, students engage in planning how to reach it [9,12]. Indeed, these processes are quite useless if students are not motivated or cannot motivate themselves to use the corresponding strategies. Therefore, self-motivation beliefs, such as self-efficacy, outcome expectations, intrinsic value, and goal orientation, are relevant motivational variables of the forethought phase, and they affect the direction, intensity, and persistence of students' learning behavior [13,14]. Self-efficacy refers to "personal beliefs about having the means to learn or perform effectively" [15, page 17], whereas outcome expectations refer to judgments of the consequences that behavior will produce [16]. In line with Deci and Ryan [17], intrinsic value is defined "as the doing of an activity for its inherent satisfaction rather than for some separate consequences" (page 56). Regarding goal orientation, a first distinction is made between a mastery goal construct and a performance goal construct (e.g., [18]): whereas mastery goals (also called mastery orientation) are focused on learning and self-improvement, performance goals (also called performance orientation) represent a more general concern with demonstrating ability and trying to do better than (or not to appear worse than) others [19,20]. A further distinction is made between two types of performance goals: performance-approach goals and performance-avoidance goals [18]. Students can be motivated to try to outperform others in order to demonstrate their competence (performance-approach) or to avoid failure in order to avoid looking incompetent (performance-avoidance). With respect to self-regulated learning theory, a positive influence of mastery goals on the different components of self-regulated learning has been found [10]. In addition, these motivational variables are important components of self-regulated learning as they initiate the learning process and affect students' performance [14]. In the next phase, the performance or volitional control phase, self-regulated learning is determined by processes of self-control and self-observation. In this regard, self-control strategies, or volitional strategies, are necessary when disturbances occur while performing a task [21,22]. In his model, Zimmerman [8] differentiated between self-instruction, task strategies, imagery, and attention focusing as important strategies of self-control. Corno [23] emphasized that a flexible use of volitional strategies assists self-regulated learning because it enables students to shield their goal-related behavior from distractions. In the framework of our study, we concentrated on attention focusing as an effective self-control strategy for avoiding distractions and speculation about irrelevant matters [24].
Another important component of the performance phase concerns the ability of self-observation, which is described as the systematic observation and documentation of thoughts, feelings, and actions regarding goal attainment [25]. Regarding self-regulated learning, students cannot adequately engage in self-regulatory behavior without self-observation because they are only able to modify their behavior if they are attentive to its relevant aspects [26]. As for the processes of self-observation, Zimmerman [8] adduced the processes of self-recording and self-experimentation. Self-recording has the advantage of retaining personal information at the point when it occurs and includes the possibility of altering or modifying the behavior. Self-experimentation offers the possibility of systematically varying different aspects of behavior. As a common self-recording technique, Zimmerman [8] argued for diaries to support self-observation processes because of the reactivity effect [27]. Subsequent to the performance phase, the completion of a task is the initial point of the self-reflection phase. This phase is characterized by the components of self-judgment and self-reaction. Zimmerman [8] describes self-judgment as consisting of two processes, self-evaluation and causal attributions, which include the comparison of one's behavior with one's goals [28]. Students evaluate their learning results and draw conclusions concerning further learning behavior. In this context, there are different types of criteria for evaluating one's performance. In line with Zimmerman [8], we distinguished between normative criteria and self-criteria. Here, self-criteria are regarded as more effective for self-regulated learning [29] because they involve the comparison of current performance with earlier levels of performance and allow judgments about learning progress. Self-evaluative judgments are related to causal attributions. Students attribute their behavior by considering the results. There is evidence that in cases of poor performance, attributions to insufficient effort or a poor task strategy can be beneficial to motivational aspects, whereas in cases of successful performance, attributions to one's ability are beneficial to motivation [30,31]. The comparisons of results to goals, as well as causal attributions, are linked to the students' affect or self-reactions. In this context, Zimmerman [8] described perceptions of satisfaction or dissatisfaction (called self-satisfaction) and distinguished between adaptive or defensive inferences that modify a person's self-regulatory approach during subsequent efforts to learn or perform. Thereby, the feedback resulting from current performance influences prospective performance. Zimmerman [8] designated this procedural nature of self-regulation as a feedback loop. The theoretical model is depicted in Figure 1. As self-regulated learning has become a key construct in education in recent years because of its importance in influencing learning and achievement in school and beyond [33], there are many studies on enhancing students' self-regulatory abilities by training them either during or after their regular classes (e.g., [34][35][36]). Leopold et al. [37] fostered text understanding by an intervention combining text highlighting and self-regulation strategies. Souvignier and Mokhlesgerami [38] focused on the enhancement of cognitive, motivational, and metacognitive aspects of self-regulated learning with respect to reading comprehension. Regarding science lessons, Labuhn et al.
[39] trained seventh graders in cooperation with teachers. The target groups of these studies were students at the secondary school level (ranging from fifth to eleventh grade). As the development of self-regulation begins in early childhood [40,41], and in line with the results of a meta-analysis by Dignath and Büttner [42], interventions have been developed to foster self-regulated learning of students in primary school [43,44] or even kindergarten [45]. Dignath et al. [46] pointed out that improving the self-regulated learning of primary school students has positive effects on learning outcomes, strategy use, and motivation (see also [47]). Otto [43] trained primary school students, as well as their teachers and parents, and was able to compare direct and indirect effects of self-regulation training. Rozendaal et al. [48] followed a similar approach. In the framework of their study, they trained significant reference persons (teachers) on how to improve students' self-regulated learning abilities [49]. The abovementioned studies represent different approaches to enhancing self-regulated learning by training either the students themselves or other relevant persons, such as teachers or parents. Thereby, self-regulated learning was combined with different academic subjects, such as reading comprehension, text understanding, or mathematical modelling and problem-solving. This approach is in line with the results of a meta-analysis conducted by Hattie et al. [50], which pointed out that the direct and isolated instruction of self-regulated learning strategies had turned out to be less effective regarding its transferability to students' learning behavior. Instead, the authors argued that direct instruction of strategies ought to be linked to factual content in order to apply these strategies in a natural setting. With regard to mathematical learning, De Corte et al. [51] argued that "self-regulation constitutes a major characteristic of productive mathematics learning" because the main goal of learning and teaching mathematics concerns "the ability to apply meaningfully learned knowledge and skills flexibly and creatively in a variety of contexts and situations" (page 155). There are a few studies (e.g., [47,49]) that combine the instruction of mathematical problem-solving strategies with multidisciplinary self-regulated learning strategies. The presented study was designed with regard to the approach of De Corte et al.
[52], who promoted the conception of the powerful learning environment, which fosters the application of self-regulatory learning strategies. Therefore, the teachers received teaching materials that included instructions to train their students in their natural learning environment at school. Following the processual character of Zimmerman's model [8], these materials focused on particular strategies of each of the three phases. In detail, the forethought phase was represented by strategies of goal setting, strategic planning, and intrinsic value. With respect to the following phases, the learning materials focused on attention focusing as a strategy of the performance or volitional control phase and on causal attribution as a strategy of the self-reflection phase. In order to enhance their transferability, the learning materials were related to the current mathematics curriculum. As self-regulated learning strategies are transferable to different situations and areas [53], students should thus be enabled to use these strategies in different contexts. Hypotheses As the intervention was designed to improve the self-regulated learning strategies of fourth grade students, the purpose of the study concerned the influence of self-regulated learning interventions on students' self-regulated learning. In addition, an effect was expected on students' mathematics achievement because the intervention addressed mathematical contents and was conducted during regular mathematics lessons. In the framework of the study, a training to improve self-regulated learning was developed and implemented into regular mathematics lessons for a period of six weeks. In this process, the teachers received learning materials and instructions on how to train their students. It was expected that training particular self-regulatory processes could have an effect on students' self-regulated learning. Longitudinally, there should be an increase in self-regulated learning strategies in the trained group compared to the control group. In detail, the variables goal setting, strategic planning, intrinsic value, attention focusing, and causal attribution, as well as self-regulated learning, should be enhanced in the experimental group. As the training was linked to the contents of the mathematics curriculum, an effect of the intervention on the mathematical achievement of the trained students was expected, too. A stronger increase in mathematics achievement should be found in the trained group compared to the control group. As the training effects were expected to be stable, there should be no significant changes in the variables between posttest and follow-up measurement in the experimental group. Beyond the pre/posttests, the students of the experimental group were also asked to complete a structured diary task addressing their self-regulated learning. Therefore, process data could be analyzed by means of interrupted time series analyses. With regard to the trained variables goal setting, strategic planning, intrinsic value, attention focusing, and causal attribution, intervention effects were assumed. In addition, it was expected that variables which were not part of the training but were dealt with within the diary would improve over the intervention period. This should be the case for the variables self-efficacy, self-recording, and self-evaluation, as well as for self-regulated learning in general. Participants.
The study was conducted in seven German primary schools with altogether 135 fourth graders. Participation was voluntary and the students' legal guardians were asked for their consent. In the experimental group (EG), 63 students took part, whereas 72 students were assigned to the control group. The mean age of the participants was 9.26 (SD = .56), and 50.40% were female. There were no significant differences between the experimental and control group concerning students' mathematics marks (t = −1.56, P = .12) and the mathematics marks on their report card (t = −0.44, P = .66). The students of the experimental group were involved in training carried out by their teachers. The control group did not receive any training. Design. The study was evaluated by a time series design combined with a longitudinal design, including pretesting and posttesting of an experimental group (EG) and a control group (CG). The experimental group was trained in self-regulated learning and each student was asked to fill out a learning diary for the duration of the training. The control group received neither training nor diaries. Intervention. Based on the study of Perels et al. [49], learning materials to foster self-regulated learning strategies were developed with respect to fourth grade students' learning abilities. The learning materials addressed (meta)cognitive strategies, such as goal setting and strategic planning, as well as volitional/motivational strategies, such as intrinsic value, attention focusing, and causal attribution. On the one hand, these strategies were selected with respect to the (meta)cognitive abilities of primary school students because it had to be taken into account that students of this age have a growing (metacognitive) awareness of their own thinking processes and the opportunity to control them [40]. As Bronson pointed out, primary school students "can learn to consciously set goals, select appropriate strategies to reach the goals, monitor progress and revise their strategies when necessary, and control attention and motivation until a goal is reached" [40, page 213]. On the other hand, the learning materials focused on the abovementioned strategies in order to represent the different phases of Zimmerman's self-regulation model [8]. Therefore, goal setting, strategic planning, and intrinsic value were selected according to the forethought phase, while the strategy of attention focusing represented the performance and volitional control phase. As a strategy belonging to the self-reflection phase, causal attribution was selected.
The learning materials focused on the abovementioned strategies and were subdivided into six units. Each of these units, excluding the first, referred to one particular self-regulated learning strategy. In order to impart these self-regulatory contents to the students in a playful and child-oriented manner, a fictitious character named Kalli Klug was developed with which the students could identify and which guided them through the different units. The first unit aimed to introduce the fictitious character to the students; therefore, a one-page profile of Kalli Klug was handed out to the students. The students learned that the character was an endearing bear of the age of nine, who had learned several strategies that helped him to improve his learning behavior and who wanted to relay this information to the students. In this context, a learning diary was introduced as one method to optimize learning behavior. The contents of units 2 and 3 were related to cognitive and metacognitive strategies. In detail, the third unit of the learning materials included cognitive and metacognitive strategies because the students were asked to apply particular cognitive learning strategies, such as organizing, as well as metacognitive strategies, such as comprehension monitoring. Units 4 and 6 dealt with motivational strategies, such as self-motivation and favorable attributional styles. The fifth unit focused on volitional strategies, such as attention focusing. Table 1 gives an overview of the contents of the units. Every unit was designed for the duration of one lesson (45 minutes). The teachers received the learning materials in the form of units according to the number of students in the classroom, along with instruction plans on how to impart the contents. Additionally, they received supporting documents which explained the theoretical background of the units. Every unit followed the same procedure: each began with a short repetition of the preceding unit. Then, the teachers presented a new problem with which the character had been confronted (e.g., how to deal with distractions that keep one from learning). Following this, the students had to think about this problem and find strategies to solve it. Alternatively, they learned the strategies which the character had used in order to solve the problem by himself. In addition, the students had to transfer these strategies to their own learning behavior. The units finished with a task that had to be done as homework. The teachers were asked to work on these learning materials together with their students during their regular mathematics lessons. In order to support the implementation of the contents, the teachers received instructions with recommendations for proceeding. It was the teachers' task to transfer these interdisciplinary strategies to the mathematical contents of their lessons. For example, the second unit focused on goal setting. The students learned how to set goals and were prompted to set their personal goals for their mathematics learning for the following week. Therefore, it can be said that the teachers were actively and personally involved in the implementation of the training. The learning materials were made available to the teachers a week before the official start of the training. As the students had to work on one unit per week, there was enough time for the teachers to familiarize themselves with the learning materials. Further support was available in the form of a mentor, available at a teacher's discretion [58].
Instruments 3.4.1. Self-Regulated Learning Questionnaire. Within the framework of the study, a questionnaire was used to measure fourth grade students' self-regulated learning. A first version of this questionnaire was tested and revised in a pilot survey with a parallel student target group (N = 58). The students filled out the questionnaire a week before and after the intervention, as well as after a period of twelve months (follow-up measurement). The responses were coded on a scale with scores ranging from 1 to 4 (1: I disagree, 2: I somewhat disagree, 3: I somewhat agree, and 4: I agree). Some of the items were taken from established instruments [43, 59-61], and, where necessary, selected scales were newly developed (for details, see Table 2). Reliabilities (Cronbach's alpha) were assessed for all scales (Table 3). The questionnaire was administered during regular classes and instructed by qualified experimenters in a standardized way. On the one hand, the questionnaire was designed to represent the contents of the several units; on the other hand, the instrument was developed with respect to the phases and processes of Zimmerman's self-regulation model [8], such as goal setting, strategic planning, intrinsic value, attention focusing, self-recording, self-evaluation, and causal attribution. These processes were chosen to represent the scales of the overall scale self-regulated learning. Following the model, the forethought phase was composed of the scales goal setting, strategic planning, and intrinsic value, with 13 items altogether. Regarding the performance or volitional control phase, two scales with nine items in total were composed, which covered attention focusing and self-recording. The self-reflection phase referred to the scales self-evaluation and causal attribution, which were measured by nine items. Altogether, the questionnaire consisted of 31 items. In Table 3, the reliabilities of the questionnaire are depicted for the measurements (pretest/posttest/follow-up measurement). The reliabilities of the posttest were regarded as the criterion. Since Cronbach's alpha ranged between 0.61 and 0.85, the reliability of the instrument can be rated as satisfactory (α > .60). As the study was designed for regular mathematics lessons, the scales were related to mathematics; for example, "Before I start with a mathematics task, I plan how to begin."
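As a minimal illustration of the reliability analysis described above, the sketch below computes Cronbach's alpha for a respondents-by-items matrix. The data, the number of items, and the generating model are assumptions made purely for the example; only the alpha formula itself reflects the reported procedure.

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for an (n_respondents x n_items) matrix of item scores."""
    item_scores = np.asarray(item_scores, dtype=float)
    n_items = item_scores.shape[1]
    item_variances = item_scores.var(axis=0, ddof=1).sum()
    total_variance = item_scores.sum(axis=1).var(ddof=1)
    return (n_items / (n_items - 1)) * (1 - item_variances / total_variance)

# Example: simulated answers of 135 students to a 5-item scale (1-4 Likert),
# generated so that the items share a common underlying trait.
rng = np.random.default_rng(1)
trait = rng.normal(0.0, 0.6, size=(135, 1))
items = np.clip(np.round(2.5 + trait + rng.normal(0, 0.5, (135, 5))), 1, 4)
print(f"Cronbach's alpha = {cronbach_alpha(items):.2f}")
```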
Learning Diary. In order to measure self-regulated learning at the state level, the students of the experimental group were also asked to fill out paper-and-pencil diaries for a period of six weeks. The items of the diary had to be filled out before and after performing homework tasks and were related to items of other instruments which had already been developed in this context (see [43,54]). As with the questionnaire, they corresponded to the phases of self-regulated learning and were presented in a closed format, coded on a four-point Likert-type scale with scores ranging from 1 to 4 (1: I disagree, 2: I somewhat disagree, 3: I somewhat agree, and 4: I agree). Altogether, the students had to rate 19 items which asked for their daily learning behavior at home. Therefore, the items were worded with regard to the current learning behavior for that day. Before doing their homework, the students had to answer eight items with regard to the processes of the forethought phase (e.g., goal setting: "I know exactly what I want to learn today" or intrinsic value: "Today, I have a mind to learn"). After having finished their homework, they were asked to answer eleven items related to processes of the volitional control phase and the self-reflection phase (e.g., attention focusing: "Today I've learned very concentratedly" or self-recording: "Today while learning, I thought about my learning process"). A split-half reliability was calculated (odd-even coefficient) by dividing the days for each person into two groups, one with even numbers and one with odd numbers. The mean values of each person were correlated for the variables (see the sketch at the end of this section). Table 4 shows the detailed results for each self-regulatory variable measured by the diary. All variables correlated highly significantly (P < .001). Mathematics Test. Additionally, the students had to work on a standardized mathematics test [62] consisting of eight tasks altogether, which dealt with arithmetic, calculations concerning practical problems, and geometry. As the students were asked to work on it before and after the intervention, two versions were administered which were similar regarding item difficulty (approximately P_i = .67) and item-scale correlation (approximately r_i(t−i) = 0.33). The students were able to reach a maximum of ten points. Teacher's Register. As the training was carried out by teachers, it was of interest to measure the teachers' evaluation of the learning materials, including the instructions. The teachers' assessments of the learning materials were used as an indicator for the implementation of the materials. Therefore, a kind of teacher's register was handed out to the teachers in order to evaluate each unit regarding design, applicability, and comprehensibility. With respect to a teacher's daily work routine, the evaluation system followed the German system of notation (1: very good, 2: good, 3: satisfactory, 4: adequate, 5: poor, and 6: insufficient). Additionally, the teachers were asked to estimate the motivation of their students while working on the learning materials (1: not motivated, 2: less motivated, 3: motivated, and 4: very motivated). A further function of this register was to give teachers an opportunity for feedback and suggestions for useful variations of the learning materials.
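The odd-even split-half procedure described for the diary data can be sketched as follows. The diary matrix and its dimensions are simulated, and the Spearman-Brown correction at the end is an optional step not explicitly mentioned in the text.

```python
import numpy as np
from scipy import stats

def odd_even_reliability(diary, spearman_brown=True):
    """Split-half reliability for an (n_students x n_days) diary matrix.

    Each student's mean over odd-numbered days is correlated with the mean
    over even-numbered days across students, as described in the text.
    """
    diary = np.asarray(diary, dtype=float)
    odd_means = diary[:, 0::2].mean(axis=1)   # days 1, 3, 5, ...
    even_means = diary[:, 1::2].mean(axis=1)  # days 2, 4, 6, ...
    r, p = stats.pearsonr(odd_means, even_means)
    if spearman_brown:                         # correct to full test length
        r = 2 * r / (1 + r)
    return r, p

# Example: simulated ratings (1-4) of 44 students over 30 diary days.
rng = np.random.default_rng(2)
level = rng.uniform(1.5, 3.5, size=(44, 1))
diary = np.clip(np.round(level + rng.normal(0, 0.4, (44, 30))), 1, 4)
r, p = odd_even_reliability(diary)
print(f"split-half reliability = {r:.2f} (p = {p:.3g})")
```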
Results Following the succession of the hypotheses, the results of the longitudinal data are reported first, followed by the tests of the time series hypotheses. The research questions postulated that training on self-regulated learning leads to an improvement of self-regulated learning variables. We expected no changes for the untrained group (control group). The differences between the experimental group and control group were calculated by means of analyses of variance with time as a repeated measurement factor. As it was not possible to randomly assign the students to the conditions, the pretest differences were controlled first. Regarding the self-regulated learning variables, significant pretest differences between the groups were found for the scales strategic planning, t(133) = 2.57, P = .01, d = .43, and self-recording, t(133) = 2.09, P = .04, d = .34. As can be seen, the students of the experimental group reported higher pretest values than the students of the control group did (see Table 5). Because of these pretest differences, analyses of covariance with the pretest value as covariate were conducted to control for them. Table 5 gives an overview of the results for the interaction time × training, as well as means and standard deviations for the overall scale and the subscales. The results indicate a significant interaction effect for the overall scale self-regulated learning, F(1, 133) = 6.58, P = .01, η² = .05, as well as for the scales goal setting, F(1, 133) = 3.99, P = .04, η² = .03, and intrinsic value, F(1, 133) = 6.68, P = .01, η² = .05. There were no significant interaction effects for the scales attention focusing, self-evaluation, and causal attribution. Regarding strategic planning and self-recording, the results of the analysis of covariance showed significant effects for both scales (strategic planning: F(1, 133) = 5.74, P = .02, η² = .04; self-recording: F(1, 133) = 4.51, P = .04, η² = .03). Results of the Longitudinal Analyses Regarding the overall scale self-regulated learning, there was a small nonsignificant increase among the students of the experimental group, whereas a significant decline was found for the students of the control group, t(71) = 3.36, P = .001, d = 0.41. With respect to the self-regulated learning variables, this significant decline for the students of the control group was also detected for the scales strategic planning, t(71) = 2.73, P = .01, d = 0.32, intrinsic value, t(71) = 4.06, P = .00, d = 0.49, and self-recording, t(71) = 2.82, P = .01, d = 0.33. For the students of the experimental group, there was a significant increase concerning the scale goal setting, t(61) = −2.28, P = .03, d = 0.28. Figure 2 presents the results for the students' self-regulated learning and mathematical achievement separately for the experimental and control group. Pre/Postanalysis of the Mathematics Test. Regarding the mathematical competencies of the students, the experimental group as well as the control group should improve their mathematics achievement because both groups were continuously taught mathematics. However, the experimental group should benefit from the training on self-regulated learning strategies in terms of a greater increase in their mathematics achievement. The results of the t-test showed that the mathematical competencies of both groups improved after the training period (see Figure 2). Regarding the effect size, the experimental group showed a stronger increase, t(62) = −5.29, P = .00, d = .68, than the control group, t(71) = −2.61, P = .01, d = .31.
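The covariance-adjusted group comparisons reported above can be illustrated with the sketch below, which models posttest scores from group membership with the pretest as a covariate. Apart from the group sizes, all values are simulated, and the variable names are chosen only for the example.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Simulated pre/post questionnaire scores (1-4 scale) for the experimental
# group (EG, n = 63) and the control group (CG, n = 72); the EG is given a
# small pretest advantage and a stable trajectory, the CG a slight decline.
rng = np.random.default_rng(3)
n_eg, n_cg = 63, 72
data = pd.DataFrame({
    "group": ["EG"] * n_eg + ["CG"] * n_cg,
    "pre": np.concatenate([rng.normal(2.95, 0.35, n_eg),
                           rng.normal(2.80, 0.35, n_cg)]),
})
shift = np.where(data["group"] == "EG", 0.02, -0.15)
data["post"] = data["pre"] + shift + rng.normal(0, 0.25, len(data))

# ANCOVA: posttest scores modelled from group with the pretest as covariate.
model = smf.ols("post ~ pre + C(group)", data=data).fit()
print(anova_lm(model, typ=2))   # F and p for the pretest-adjusted group effect
print(model.params)             # adjusted difference of EG relative to CG
```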
In addition, it was examined whether a training effect could be found. As there were significant pretest differences between the groups for the overall measure (the sum over all tasks of the test), an analysis of variance was conducted with the pretest values as covariate. The results showed no significant training effect. Figure 2: Interaction time × group for the overall scale self-regulated learning as well as for mathematical achievement. Mathematical achievement measures could take values from 0 to 10; self-regulated learning was rated on a four-point scale. Follow-Up Measurement. The students of the experimental group received the same questionnaire again in order to measure the stability of the training's effect after a period of twelve months. The data of the variables should be stable, which means that no significant additional effects were expected and that the values should not decrease significantly. Therefore, the assumption that there were no changes regarding goal setting, strategic planning, intrinsic value, self-recording, self-evaluation, attention focusing, causal attribution, and the overall scale self-regulated learning was tested, and the alpha level was increased to 20% [63]. In general, the results show that the variables did not change significantly between the posttest and the follow-up measurement. Table 6 shows the detailed results for the scales as well as for the overall scale self-regulated learning. Results of the Training Evaluation Based on Process Data. In order to describe the training evaluation based on the process data of the experimental group, interrupted time series analyses were conducted for the trained self-regulated learning variables related to the units of the learning materials, and trend analyses were conducted for the untrained variables self-efficacy, self-recording, and self-evaluation. As 70% of the diaries were filled out with more than 22 data points (>73%), data for the variables of the learning diary were aggregated from 44 students and included in the analyses. Therefore, the mean of each variable computed across all participants could be generated for each day. In order to examine the training effects for the components related to the units based on the learning diary data, a multiple baseline design was used and interrupted time series analyses were conducted. Step functions were expected to show an immediate impact and to continue over the long term. In order to analyze ARMA processes, the residuals were used [64]. With the residual data, autocorrelations and partial autocorrelations were computed to identify ARMA processes.
In Table 7, the results for the trained variables of each unit are depicted. The first column represents the subscales of the diary. The b_0 score shows the intercept for the variable as an indicator of the basic level, whereas b_1 is the indicator of the level change. Using the t-score, the means before (baseline) and after the training can be compared to expose changes. The ARMA model describes how the level of the variable, measured at a previous point in time, influences the same variable at a following point in time. The number of autoregressive (AR) terms in the model reports the dependency among successive observations; each term has an associated correlation coefficient that describes the magnitude of this dependency. With regard to the moving average (MA) terms, the model represents the persistence of a random shock from one observation to the next. After the model estimation, (partial) autocorrelations were computed in order to test for white noise residuals (with the Ljung-Box Q test; a computational sketch of these analyses is given at the end of this section). The results showed that after the first training unit, students reported having been able to improve their goal setting strategies (t = 4.64, P = .00). The second unit caused no enhancement with respect to the variable strategic planning. After the third unit, the variable intrinsic value improved significantly (t = 2.65, P = .01). In contrast, with respect to the variables attention focusing and causal attribution, there were no effects of the fourth and fifth units. However, the variable causal attribution showed an AR(1) process. For the other variables, there were no dependencies among successive observations (white noise). Additionally, trend analyses were conducted for the variables that were not explicitly trained but should have been influenced by the intervention. Because of the reactivity effect (see [65][66][67]), positive linear trends were expected for the nontrained variables self-efficacy, self-recording, and self-evaluation, as well as for the overall scale self-regulated learning. Regarding the variables self-efficacy and self-evaluation, there were no significant changes, whereas significant linear trends were found with respect to self-recording (P = .04; b_0 = 3.07; b_1 = .01; RSQ = .14) and self-regulated learning (P = .03; b_0 = 3.31; b_1 = .01; RSQ = .16). Thereby, the time trend over a period of 30 days could explain 14% of the variance of self-recording and 16% of the variance of self-regulated learning. Figure 3 shows the results for the linear trend of the overall scale self-regulated learning. Teachers' Evaluation of the Learning Materials. The teachers' assessment of the learning materials regarding their design, applicability, and comprehensibility ranged between 1.60 and 1.73 (design: M = 1.60, SD = .72; applicability: M = 1.73, SD = .95; comprehensibility: M = 1.67, SD = .61). The students' motivation while working on the learning materials was estimated with a mean value of 3.30 (SD = .62). Based on these results, the implementation of the learning materials can be regarded as successful.
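The interrupted time series and trend analyses summarized in this section can be sketched as follows. The daily class means are simulated, the cut point after day 10 is hypothetical, b0 and b1 correspond to the baseline level and level change described for Table 7, and the Ljung-Box test checks the residuals for white noise as in the text.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(4)
days = np.arange(1, 31)

# Simulated daily class means of one diary variable with a level shift of
# +0.25 after a (hypothetical) training unit delivered on day 10.
step = (days > 10).astype(float)
y = 3.0 + 0.25 * step + rng.normal(0, 0.10, days.size)

# Interrupted time series: b0 = baseline level (intercept), b1 = level change.
X = sm.add_constant(pd.DataFrame({"step": step}))
its_fit = sm.OLS(y, X).fit()
print(its_fit.params)                               # b0 (const) and b1 (step)
print(f"t = {its_fit.tvalues['step']:.2f}, p = {its_fit.pvalues['step']:.3f}")

# Residual diagnostics: Ljung-Box test for white noise; a significant result
# would call for adding AR/MA terms to the model.
print(acorr_ljungbox(its_fit.resid, lags=[10], return_df=True))

# Linear trend analysis, as used for the non-trained variables: regress the
# daily means on the day number and report the slope and explained variance.
trend_X = sm.add_constant(pd.DataFrame({"day": days.astype(float)}))
trend_fit = sm.OLS(y, trend_X).fit()
print(f"b1 = {trend_fit.params['day']:.3f}, RSQ = {trend_fit.rsquared:.2f}")
```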
Discussion The aim of the intervention was the enhancement of fourth grade students' self-regulated learning by working on interdisciplinary teaching materials, which were related to particular strategies of Zimmerman's self-regulation model [8]. By means of analyses of variance with time as a repeated measurement factor, significant interaction effects were found for the overall scale self-regulated learning, as well as for the scales goal setting, intrinsic value, strategic planning, and self-recording. Regarding the results within the groups, it could be pointed out that the overall scale self-regulated learning did not change in the expected direction. Instead of a significant increase for the experimental group, there was a significant decrease for the control group, whereas for the experimental group the overall scale remained stable. Regarding the experimental group, this result for the overall scale was supported by the results of the scales strategic planning, intrinsic value, attention focusing, self-recording, self-evaluation, and causal attribution. The exception was the scale goal setting, for which a significant increase was found, as expected. For the control group, the results of the scales strategic planning, intrinsic value, and self-recording showed a significant decline, as did the overall scale self-regulated learning. Twelve months after training, the students of the experimental group filled out the same questionnaire again in order to measure the stability of the intervention effects. There should be no significant change of the data in terms of an increase or decline. The results show that all scales were stable after a period of twelve months. Besides the improvement in students' self-regulated learning, we also expected an effect with respect to students' mathematical achievement. As the learning materials were related to mathematical contents and implemented during regular mathematics lessons, we dealt with the question of whether there was a supportive effect of self-regulated learning on students' mathematics achievement [5]. Regarding the effects between the groups, no significant interaction effect was found. The results showed an enhancement for the experimental group as well as for the control group. As both groups had been taught mathematics, this increase was not unexpected. Regarding the effects within the groups, we expected a greater increase in mathematics achievement for the experimental group than for the control group. With respect to the effect sizes, the students of the experimental group showed better improvement in their mathematics achievement than the control group did. These results are in line with Perels et al. [49]. In their study, they also found an improvement for both groups, but a greater increase for the students belonging to the experimental group.
On the level of process data, interrupted time series analyses indicated an increase in some of the trained variables in the expected direction after the training. In detail, this was the case for the variable goal setting after the second unit, as well as for the variable intrinsic value after the fourth unit. Regarding strategic planning, attention focusing, and causal attribution, no significant changes were found. Additionally, linear trend analyses were performed for the nontrained variables self-efficacy, self-recording, and self-evaluation, as well as for the overall scale self-regulated learning. Although these variables were not part of the training, the students had to answer items corresponding to them by filling out the diary each day. Therefore, we expected an influence in terms of the reactivity effect [27,65]. Regarding the scale self-recording and the overall scale self-regulated learning, significant linear trends were found as expected, whereas there were no trends for the variables self-efficacy and self-evaluation. The absent linear trends for these variables are in contrast to the results of other studies (see, e.g., [43,67]). Therefore, the postulated reactivity effect [65] has to be considered critically because evidence for it was limited. In this study, the learning diary primarily seemed to serve as an evaluation instrument and not as a part of the intervention. In summary, the results lead to the assumption that the learning materials seemed to be beneficial with regard to fourth grade students' self-regulated learning and mathematics achievement. However, the results of the pretest and posttest measurements for self-regulated learning have to be discussed critically. Regarding the experimental group, there was only a small, nonsignificant increase found for the overall scale and the scales strategic planning, intrinsic value, attention focusing, self-recording, self-evaluation, and causal attribution. Additionally, no interaction effects were found for the variables attention focusing, self-evaluation, and causal attribution. As the variables self-recording and self-evaluation were not involved as part of the training, this result was not unexpected. Obviously, it was not possible to improve these variables by training other specific processes of self-regulated learning. With respect to the other variables, the lack of effects was not expected. It is debatable whether there was enough time to practice and transfer the strategies of these units, which were very complex. The students worked on the teaching materials for the duration of one lesson per week and had to deal with one task per training session. It would probably have been useful if the students had worked on more than one task during each training session to make sure that they transferred the learned strategies to their everyday work. Furthermore, it may be possible that the imparted strategies initially interfere with already existing strategies [68]. As the study was conducted in grade four, the students may already have developed their own strategies to regulate their learning behavior. Greater effects might be expected when there is a continuous and fairly long-term instruction of self-regulated learning in regular classes [69].
Moreover, there are limiting factors and unanswered questions regarding this study: for the assessment of self-regulated learning, only self-report methods (questionnaire and learning diary) were used. These self-report methods only measured students' evaluation of their use of strategies, not their actual use [70]. In future research, multimethod approaches should be used. In this study, the students were also videotaped during regular mathematics lessons (before and after the intervention phase). For further analysis, the observation data have to be analyzed and related to the results of the self-report data. Consequently, it will be possible to analyze whether students actually used the self-regulated learning strategies supported by the learning materials. In this context, other on-line methods, such as thinking-aloud protocols, might also be of interest (see [71]). Additionally, there is another question concerning the measurement of self-regulated learning. By using learning diaries, we were able to assess and analyze students' self-regulated learning on a daily basis. Following Schmitz and Wiese [9], we used these data as process data to conduct time series analyses. This approach has to be regarded critically because learning diaries represent self-report measurements. It has to be questioned to what extent these data can be regarded as process data. Another limitation concerns the state aspect of Zimmerman's model [8]. He postulated that self-regulation is an adaptable and cumulative process. According to these assumptions, his self-regulation model tends to focus on state aspects of self-regulation. However, in the study we used self-report data, which rather concern trait aspects of self-regulation. Thus, there is a discrepancy between the theoretical framework of the study and the chosen assessment methods. However, other authors, such as Schmidt [54] or Hong and O'Neil [72], regard self-regulation at both the state and the trait level. They hypothesize that academic self-regulation consists of transitory (meta)cognitive states and relatively stable (meta)cognitive traits. For example, students with high self-regulatory traits tend to use their metacognitive skills more effectively than students with low trait self-regulation [73]. Hong [74] compared state and trait self-regulation models and came to the conclusion that every self-regulation state refers to a general trait component (see also [75]). Furthermore, she reported high correlations between state and trait constructs (see also [76]). Therefore, analyzing self-regulatory traits by using questionnaire data permits inferences about self-regulatory states, as postulated in Zimmerman's self-regulation model [8].
Furthermore, the implementation of the developed learning materials has to be discussed because the contents of the units were imparted by the teachers themselves. From the teachers' point of view, the learning materials and the instructions were evaluated as very good to good with respect to design, applicability, and comprehensibility. Furthermore, the teachers estimated the motivation of their students while working on the learning materials to be very positive. These estimations indicate that the developed teaching materials could be successfully implemented in the regular classroom situation. In fact, an innovation such as these learning materials can be evaluated as being successfully introduced as soon as the teachers have adopted it [77]. Adoption in this context means that the teachers are able and willing to implement an innovation into their lessons. Moreover, they have to feel confident in their ability to adapt it to the needs and abilities of their students. Following Bitan-Friedlander et al. [78], teachers' adoption of an innovation in the educational field depends on "agreeing with the theoretical content and with the pedagogical value of the innovation" [78, page 617]. The extent to which an innovation might be adopted by a teacher can be defined in terms of the teacher's personal concerns. In the present study, the teachers expressed being excited about the learning materials. However, there were no other clues as to what extent the teachers were involved and motivated to work with the learning materials. For further studies, this might be an interesting and helpful approach. 
Another limitation refers to the question of how the students were assigned to the experimental and the control group. As the learning materials needed to be implemented by teachers into students' regular learning environment, it was not possible to realize a randomized assignment of the students to the experimental and control groups. Therefore, students' pretest values of self-regulated learning and mathematical achievement were controlled. 
Finally, the significant interaction effect for the overall scale self-regulated learning and the scales goal setting, intrinsic value, strategic planning, and self-recording mainly occurred due to the significant decline of the control group. This decline was not expected and cannot be explained in the framework of this study. For further intervention research, it might be worthwhile to assess more information concerning the control group. 
In this context, it also might be of interest to design an intervention which involves more or even all of the postulated strategies of Zimmerman's self-regulation model [8]. In our study, there had to be a focus on the selected strategies for two reasons. Firstly, the (meta)cognitive abilities of the target group had to be considered (see [40]). Secondly, the duration of the intervention was limited because the learning materials were implemented into regular mathematics lessons. This implied that the more time was spent on the learning materials, the less time could be spent on the regular mathematics contents. Therefore, and for developmental psychological reasons, the intervention was reduced to six units. However, the study involved both (meta)cognitive and motivational aspects of self-regulated learning corresponding to the three learning phases of Zimmerman's model [8]. This represents an advantage of the study in contrast to other trainings which focused either on (meta)cognitive or motivational components (for an overview, see [79]). 
In summary, the present findings show that it is possible to maintain a rather high level of self-regulated learning by using self-regulated learning materials implemented by teachers. In our opinion, it is worth emphasizing that embedding specific self-regulated learning strategies into regular mathematics lessons was not at the cost of students' mathematical achievement, but supported it. Thus, it might be assumed that if an improvement of students' self-regulated learning occurs, this improvement might be related to improvements in mathematical achievement. Further studies should investigate if and under what conditions this assumption holds true. To this end, the learning materials should be optimized and the evaluation instruments adapted to other subjects. 
The present study implies practical consequences for creating powerful learning environments that support self-regulated learning. As the results show, it is possible to embed self-regulated learning strategies in regular lessons by using interdisciplinary learning materials. As self-regulated learning represents an important factor for academic and lifelong learning [80], teaching these strategies should be integrated into regular elementary school lessons in order to improve the development of advantageous learning behavior as early as possible. 
Figure 3: Trajectory and linear trend for self-regulated learning measured on a four-point scale. 
Table 1: Overview of the contents of the different units. 
Table 2: Overview of the scales of the self-regulated learning questionnaire regarding the sources, authors, and changes. 
Table 3: Reliabilities of the self-regulated learning questionnaire. N: number of items; follow-up: follow-up measurement after 12 months. 
Table 4: Split-half reliabilities of diary scales, evaluated with the odd-even method. 
Table 5: Descriptive data of the self-regulated learning variables and results for the interaction time × training. 
Table 6: Results of the t-tests for follow-up measurements of the experimental group. N = 58 (three students were absent on the day of the follow-up measurement); d: effect size; a minus sign indicates an increase, a plus sign indicates a decrease. 
Table 7: Results of the interrupted time series analysis to examine the effects of the intervention.
10,238.2
2012-12-17T00:00:00.000
[ "Education", "Mathematics" ]
A Platform-Independent Method for Detecting Errors in Metagenomic Sequencing Data: DRISEE We provide a novel method, DRISEE (duplicate read inferred sequencing error estimation), to assess sequencing quality (alternatively referred to as “noise” or “error”) within and/or between sequencing samples. DRISEE provides positional error estimates that can be used to inform read trimming within a sample. It also provides global (whole sample) error estimates that can be used to identify samples with high or varying levels of sequencing error that may confound downstream analyses, particularly in the case of studies that utilize data from multiple sequencing samples. For shotgun metagenomic data, we believe that DRISEE provides estimates of sequencing error that are more accurate and less constrained by technical limitations than existing methods that rely on reference genomes or the use of scores (e.g. Phred). Here, DRISEE is applied to (non amplicon) data sets from both the 454 and Illumina platforms. The DRISEE error estimate is obtained by analyzing sets of artifactual duplicate reads (ADRs), a known by-product of both sequencing platforms. We present DRISEE as an open-source, platform-independent method to assess sequencing error in shotgun metagenomic data, and utilize it to discover previously uncharacterized error in de novo sequence data from the 454 and Illumina sequencing platforms. Introduction Accurate quantification of sequencing error is the single most essential consideration of sequence-dependent biological investigations. While true of all investigations that utilize sequencing data, this is particularly true with respect to metagenomics. Metagenomic studies produce biological inferences as the nearexclusive product of computational analyses of high throughput sequence data that attempt to classify the taxonomic (through 16s ribosomal amplicon sequencing [MG-RAST [1], QIIME [2]]) and functional (through whole genome shotgun sequencing [MG-RAST [1]]) content of entire microbial communities. The accuracy of these inferences rests largely on the fidelity of sequence data, and consequently, on the ability of existing methods to quantify and account for sequencing error. Surprisingly, the most widespread methods to determine sequencing-error in metagenomic data lack essential features and/or produce underestimates of the overall error that disregard a substantial portion of sequencing-related experimental procedures. Sequence-based experimental inferences, particularly those related to the identification and characterization of features (protein or 16s rRNA coding regions, regulatory elements, etc.) are greatly affected by the presence of sequencing errors [3]. Errors in metagenomic amplicon-based sequencing have led to grossly inflated estimates of taxonomic diversity [4,5,6]. While methods such as denoising [2,7,8] have been developed to address these issues in amplicon-based metagenomic sequencing [2,9], no analogous techniques have been reported to account for noise/ error in the context of shotgun-based metagenomic sequencing. Limitations inherent to methods used to assess de novo sequencing error are largely to blame. At present, two methods are commonly used: reference-genome and score -based. Reference-genome-based methods compare de novo sequenced reads to preexisting standards (published genomes). Samples are typically cultured from a clonal isolate for which a reliable reference genome is readily available. 
Sequenced reads undergo an initial alignment to the selected reference genome to match de novo sequences with the regions in the reference genome to which they correspond. Reads that do not exhibit a high enough level of identity with the reference genome are excluded from further analysis. Reads that exhibit a high fidelity match to a region in the reference genome are compared to that region in great detail. Deviations between sequenced reads and their corresponding loci in the reference genome are scored as errors; these are typically reported with respect to frequency and type (i.e. insertion, deletion, substitution). Selection of the most appropriate reference genome is essential. This is problematic when the best available reference is a related strain or species. In these cases, real biological variation can be mistaken for sequencing error [10,11,12,13]. Reference-genome-based methods provide a particularly effective means to examine sequencing error in the context of genomic (i.e. single genome sequencing or resequencing) data, but are not applicable to metagenomic samples as these typically contain enormous taxonomic diversity (samples contain a broad spectrum of species) for which little adequate reference data is available. Many species have no appropriate reference genome(s), and reference metagenomes do not currently exist. Score-based methods use an alternative approach. Sequencer signals are compared with sophisticated, frequently proprietary, probabilistic models that attempt to account for platform-dependent artifacts, generating base calls, each with an affiliated quality (Phred or Q score) that provides an estimate of error frequency, but no information regarding error type. Although score-based methods are applicable to metagenomic data, their inability to consider error type can prove to be a substantial limitation. For example, similarity-based gene annotation is extremely sensitive to frameshifting insertion/deletion errors but only moderately affected by substitutions [3]. In this context, knowledge of error type, specifically the ratio of insertion and/or deletions to substitutions provides crucial information, knowledge unattainable with conventional Phred or Q scores. The absence of information regarding error type is an even greater concern in light of documented platform-dependent biases in sequencing error type: Illumina-based sequencing exhibits high substitution rates [14], whereas 454 technologies exhibit a preponderance of insertion/deletion errors [13]; identical Q scores from these two technologies are likely to represent different types of error, rendering ostensibly similar metrics incomparable [13,15,16,17,18,19,20]. The most concerning, but paradoxically least discussed and perhaps least understood, deficit of score-based methods is their implicit disregard of experimental procedure. Typical sequencing efforts employ a host of procedures to extract, amplify, and purify genetic material, experimental processes that necessarily contribute errors (i.e. introduction of non-biological bias in sequence content and/or abundance relative to original biological template sequences); however, as these errors are introduced before the actual act of sequencing, they can not be accounted for with score-based methods. Thus, a large portion of experimental error in sequencing is frequently overlooked (Figure 1a) (an in depth literature search revealed no works that directly address this issue). 
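Since the discussion above contrasts DRISEE with Phred/Q scores, it may help to recall how a quality score maps back to an error frequency. The conversion below is the standard Phred definition (not something specific to this paper); it also makes plain why a Q score alone says nothing about error type.

```python
# Standard Phred/Q-score definition: Q = -10 * log10(P_error).
# A quality value therefore encodes only an expected error *frequency*,
# never the error *type* (substitution vs. insertion/deletion).
def phred_to_error_prob(q: int) -> float:
    return 10 ** (-q / 10.0)

for q in (10, 20, 30, 40):
    print(f"Q{q}: expected per-base error rate ~ {phred_to_error_prob(q):.4%}")
# Q10 -> 10%, Q20 -> 1%, Q30 -> 0.1%, Q40 -> 0.01%
```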
Reference-genome-based and score-based sequencing error determination methods require extensive prior knowledge in the form of reference genomes and/or elaborate platform-dependent error models. At present it is not possible to apply reference-genome-based methods to metagenomic data. Score-based methods provide, at best, an incomplete assessment of error that is incomparable between technologies and provides no information with respect to error type. Neither of these approaches is well suited to platform-independent analysis of error in shotgun-based metagenomic data. The absence of an appropriate means to assess sequencing error, in a platform-independent manner, in the context of metagenomic data, grows more acute with the increasing democratization of high-throughput sequencing technologies (www.technologyreview.com/biomedicine/26850/) and the rapid proliferation of projects that utilize them [21,22,23,24] (in addition, www.1000genomes.org, www.commonfund.nih.gov/hmp, www.earthmicrobiome.org). This includes an increasing trend toward meta-analyses (studies that consider data from multiple sources) to examine collections of samples that can exhibit a diverse technical provenance [25,26,27,28]. Meaningful comparisons of technically diverse sequence data require accurate and platform-independent measures of sequencing error, such that bona fide observations can be differentiated from background noise. Current methods, score-based methods in particular, are not well equipped to provide these comparisons. 
A brief description of Duplicate Read Inferred Sequencing Error Estimation 
The limitations of reference-genome-based and score-based methods inspired the creation of Duplicate Read Inferred Sequencing Error Estimation (DRISEE). DRISEE exploits artifactually duplicated reads (ADRs), nearly identical reads that share a completely identical prefix, present with abundances that greatly exceed chance expectations, even when a modest level of biological duplication is taken into account [12,26]. We exploit the highly improbable abundances of ADRs to distinguish them from other reads (see Methods for details). While 100% identity in the prefix region is used to cluster reads, only the non-prefix bases (those not required to exhibit identity with other reads) are used in the error calculations. No additional requirement for sequence identity/similarity is made of the non-prefix bases. Given their technical origins, it is reasonable to assume that sequence variation within groups of ADRs is more likely to be the product of technical artifact(s) (i.e. sample processing and/or sequencer errors) than a reflection of genuine diversity in the originally sampled population or culture. Based on this premise, DRISEE utilizes multiple alignment (by default, multiple alignments are processed with QIIME [2] integrated Uclust [29] - users will soon be able to choose from a variety of other multiple alignment algorithms) of groups of prefix-identical clusters of ADRs to create internal standards (consensus sequences) to which each individual duplicate read is compared. Sequencing error is determined as a function of the variation that exists within clusters of ADRs. This strategy is platform-independent and can be used to quantify error in metagenomic or genomic samples with respect to error frequency and type. DRISEE identifies duplicate reads using stringent requirements for prefix length and abundance that are extremely unlikely to occur unless the sequences have been artifactually duplicated. 
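The clustering-and-consensus idea just described can be illustrated with a short sketch. This is not the MG-RAST implementation: it uses the prefix length (50 bp) and minimum bin abundance (20 reads) quoted in the text, but it compares columns directly instead of running the multiple-alignment (Uclust) step that the real tool uses, so insertions and deletions are not handled here.

```python
# Minimal sketch of the DRISEE idea: bin reads by an identical 50-bp prefix,
# keep bins with >= 20 reads (treated as artifactual duplicates), build a
# per-position consensus over the non-prefix bases, and score deviations
# from that consensus as inferred error.
from collections import Counter, defaultdict

PREFIX_LEN, MIN_BIN_SIZE = 50, 20

def drisee_error(reads):
    bins = defaultdict(list)
    for r in reads:
        if len(r) > PREFIX_LEN:
            bins[r[:PREFIX_LEN]].append(r[PREFIX_LEN:])   # only non-prefix bases are scored

    matches = mismatches = 0
    for tails in bins.values():
        if len(tails) < MIN_BIN_SIZE:
            continue                                      # not abundant enough to call artifactual
        longest = max(len(t) for t in tails)
        for pos in range(longest):
            column = [t[pos] for t in tails if len(t) > pos]
            if len(column) < 2:
                continue
            consensus = Counter(column).most_common(1)[0][0]
            matches += sum(base == consensus for base in column)
            mismatches += sum(base != consensus for base in column)

    total = matches + mismatches
    return mismatches / total if total else 0.0
```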
In the work presented here, a prefix length of 50 bases and a minimum abundance of 20 reads were used; the chance occurrence of such a cluster is < 4E-32 (see Methods). It is important to note that this probability is so small as to be deemed effectively impossible in biological sequence data (by way of comparison, the number of atoms in the human body has been estimated at roughly 10^28 [30]); however, ADRs routinely exhibit abundances that greatly exceed these expectations, making it relatively easy to identify these sequences and simultaneously differentiate them from much lower-abundance biological duplication (there are obvious exceptions to this notion: conserved regions in 16S ribosomal genes, repetitions in eukaryotic DNA, etc.). Figure 1b provides a visual overview of DRISEE; Text S1 (Supplemental Methods) outlines a typical DRISEE workflow in much greater detail. 
Author Summary 
Sequence quality (referred to alternatively as the level of sequencing error or noise) is a primary concern to all sequence-dependent investigations. This is particularly true in the field of metagenomics, where automated tools (e.g. annotation pipelines like MG-RAST) rely on high-fidelity sequence data to derive meaningful biological inferences, and is exacerbated by the capacity of next-generation sequencing platforms that continues to expand at a rate greater than Moore's law. We demonstrate that the most commonly utilized means to assess sequencing error exhibit severe limitations with respect to analysis of metagenomic data. Furthermore, we introduce a method (DRISEE) that accounts for these limitations through the application of a novel approach to assess sequencing error. DRISEE-based analyses reveal previously unobserved levels of sequencing error. DRISEE provides a platform-independent measure of sequencing error that objectively assesses the quality of entire sequence samples. This assessment can be used to exclude low-quality samples from computationally expensive analyses (e.g. annotation). It can also be used to evaluate the relative fidelity of analyses after they have been performed (e.g. annotation of error-prone samples is less reliable than that of samples with low levels of sequencing error). 
DRISEE tables, the preliminary output of DRISEE 
The initial output of a DRISEE analysis is a table, excerpted examples of which are presented as Tables 1 and 2. It indicates the number (Table 1), or percent (Table 2), of sequences (indexed by consensus sequence position) in all considered clusters of ADRs that match or do not match the consensus derived from the ADR cluster to which they belong. DRISEE tables can indicate the match/mismatch counts for a single cluster of prefix-identical reads from a single sequencing sample, for multiple clusters from a single sample (Tables 1 and 2 present one such example), or for multiple clusters collected from a large number of samples that may represent some common trait of interest (e.g. samples produced with the same sequencing technology, that used the same RNA/DNA extraction procedures, that were collected as part of the same sequencing project, etc.). This adaptable tabular format represents the simplest incarnation of a DRISEE error profile; it can be analyzed and visualized in a number of ways (numerous examples are presented below) to garner detailed platform-independent information regarding sequencing error in genomic and metagenomic shotgun sequencing data. A more detailed description of the tabular format is included in the legend for Tables 1 and 2. 
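The "< 4E-32" figure can be sanity-checked numerically. The exact probability expression appears in the Methods and is garbled in this extraction, so the formula below - the chance that n reads share one identical l-base prefix, with four possible bases per position - is a reconstruction chosen because it reproduces the quoted value, not an equation taken verbatim from the paper.

```python
# Hedged reconstruction of the chance-occurrence estimate for ADR bins:
# with four equally likely bases, one specific 50-base prefix shared by
# 20 reads is, on this reconstruction, of order 4**(-50) / 20 ~ 4e-32,
# matching the value quoted in the text.
def chance_prefix_occurrence(l: int, n: int) -> float:
    """Approximate chance that n reads share one identical l-base prefix."""
    return 4.0 ** (-l) / n

print(chance_prefix_occurrence(50, 20))   # ~3.9e-32
```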
Validation of DRISEE with simulated and real sequencing data; comparison of DRISEE to reference-genome-based estimation of sequencing error 
Initial validations of DRISEE with simulated data showed nearly perfect correlations between known and DRISEE-based error estimates (Figure 2a, R^2 = 0.99). Additional validations with real genomic sequencing data exhibit good correlation with error estimates produced by conventional reference-genome-based analyses [12] of the same samples (Figure 2b, R^2 = 0.89, excluding outliers). 
DRISEE reveals unexpected levels of error in genomic and metagenomic data from two widely utilized high-throughput sequencing technologies 
In further trials, DRISEE was applied to genomic and metagenomic shotgun data produced by two widely utilized high-throughput sequencing technologies [4,11,12,13,19,31]. The majority of samples (n = 307) exhibit DRISEE-based errors that fall outside the range of reported sequencing errors (error < 0.25%, n = 73; error > 4%, n = 234; avg ± stdev = 12.63 ± 15.12) (Figure 3). The Supplemental Methods (Text S1) include a description of how data sets were selected. 
Figure 1: (1) Simplified procedural diagram of a typical sequencing protocol. Sample collection: first, the biological sample is collected. Extraction/initial purification: the RNA/DNA then undergoes extraction and initial purification procedures. Pre-sequencing amplification(s): next, the extracted genetic material may undergo amplification (e.g. whole genome amplification - see main text) followed by additional purifications and/or other processing procedures. "Sequencing": genetic material is placed in the sequencer itself and is sequenced; note that sequencing itself frequently involves additional rounds of amplification. Analyses of sequencing output: sequencer outputs are analyzed. (2) Given a procedure such as that in (1), the portion of the procedure over which score/Phred-based methods can detect error is indicated in red. (3) The portion of the procedure over which reference-genome-based methods can detect error is indicated in green; note that reference-genome-based methods are only applicable to single-genome data and cannot consider metagenomic data. (4) The portion of the procedure over which DRISEE-based methods can detect error is indicated in blue; DRISEE methods can be applied to metagenomic or genomic data, provided that certain requirements are met (see Methods). 
DRISEE detects error levels much higher than those produced by a conventional score-based approach; comparison of DRISEE to Phred-based estimation of sequencing error 
To compare DRISEE-derived errors with those determined with a more conventional score-based approach, we obtained FASTQ data (i.e. Phred scores) via SRA (http://www.ncbi.nlm.nih.gov/sra) for subsets of DRISEE-analyzed samples: 20 of the 65 metagenomic 454 samples and 12 of the 159 Illumina metagenomic samples. Per-base DRISEE and Phred [32]-based errors for these samples were calculated and compared (see Methods). In 454 and Illumina-based metagenomic sequencing data, DRISEE profiles reveal error levels much higher than those reported by archived Phred values (Figure 4a & b). 
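The validation comparisons described above (Figure 2a and 2b) amount to a linear fit between DRISEE estimates and known or reference-genome-based error rates. A small sketch of that comparison is shown below; the numeric values are placeholders, not data from the study.

```python
# Sketch of the validation check: correlate DRISEE error estimates against
# known (simulated) or reference-genome-based error rates and report the
# R^2 of a linear regression. All values here are illustrative placeholders.
import numpy as np

known  = np.array([0.5, 1.0, 2.0, 4.0, 8.0])     # % error introduced (assumed values)
drisee = np.array([0.55, 0.98, 2.1, 3.9, 8.2])   # % error recovered by DRISEE (assumed values)

slope, intercept = np.polyfit(known, drisee, 1)
pred = slope * known + intercept
r2 = 1 - np.sum((drisee - pred) ** 2) / np.sum((drisee - drisee.mean()) ** 2)
print(f"fit: y = {slope:.2f}x + {intercept:.2f}, R^2 = {r2:.3f}")
```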
It is also intriguing to note that, whereas Phred values exhibit nearly indistinguishable trends between the 454 and Illumina data, DRISEE error profiles differ markedly for each technology (Figure 4a & b). 
DRISEE reveals drastic differences in sequencing error among experiments and even between individual samples from the same experiment 
After observing differences in error profiles between 454 and Illumina technologies, we explored the possibility that DRISEE could be used to observe differences in sequencing error produced by a single sequencing platform (Illumina). Sequencing samples from five projects (i.e. groups of samples that were produced together in a single experimental framework) were explored by comparing the total DRISEE error profile for each (Figure 4c). While two projects exhibited similar error profiles (Sample Sets 2 and 5), most were unique. The ability of DRISEE to resolve unique error profiles was tested further by exploring two individual samples taken from the same project/experiment (Sample Set 3), those that exhibited the highest and lowest average DRISEE errors. Although the two samples were produced on the same sequencing platform as part of the same experimental project, the individual error profiles are drastically different (Figure 4d). The two samples underwent annotation via MG-RAST; a summary of the annotation results for each sample appears, as a pie chart, embedded in the plot of the DRISEE profiles. 
DRISEE provides detailed data regarding error type 
We also used DRISEE to provide data regarding error type. Figure 5 presents all error types together (total error) as well as a breakdown by type: Illumina data exhibit a majority of substitution errors [14], whereas 454 data exhibit a majority of insertion/deletion errors [13] (Figures 5a and 5b). No other method provides estimates with respect to error type in metagenomic shotgun data. 
Discussion 
DRISEE provides a more complete estimate of sequencing error than is possible with score-based methods, one that accounts for error introduced at any/all procedural steps in a sequencing protocol - all steps that have the potential to introduce errors (i.e. deviation from the original biological template sequences) - from collection of a biological sample to extraction of DNA/RNA, intermediary processing of the extracted material and, finally, sequencing itself (see Figure 1a). Errors introduced by processes outside of the actual act of sequencing are ignored by score-based methods; thus it is not surprising that DRISEE-derived errors are generally larger than Q/Phred scores, as they account for errors introduced over a much broader scope of experimental procedures, from sample collection, to a wide variety and number of possible intermediary processes, to sequencing itself. An example may help to illustrate the critical importance of this consideration: amplification is commonly utilized to generate sufficient quantities of material for sequencing from an initial RNA/DNA sample. Here we refer specifically to amplification performed outside of the sequencer/sequencing protocol. Various methods exist - classically, variants of the polymerase chain reaction were used, while more recent incarnations have adopted isothermal techniques - and all depend on high-fidelity enzymes (e.g. Taq or phi29 DNA polymerase) and are experimental processes prone to experimental error. Even with high-fidelity enzymes, amplification products will contain errors (i.e. deviations from the original biological template). 
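Because the error-type breakdown matters downstream (frameshifting indels affect gene calling far more than substitutions), it may be useful to see how types can be tallied from a gapped alignment of a read against its bin consensus. The sketch below is illustrative only; the alignment strings (with "-" marking gaps) are assumed to come from the bin's multiple-alignment step.

```python
# Tally substitution vs. insertion/deletion errors from a gapped alignment of
# a read against the consensus of its ADR bin (the kind of per-type breakdown
# reported in Figure 5).
def count_error_types(consensus_aln: str, read_aln: str):
    counts = {"match": 0, "substitution": 0, "insertion": 0, "deletion": 0}
    for c, r in zip(consensus_aln, read_aln):
        if c == "-" and r != "-":
            counts["insertion"] += 1      # extra base in the read
        elif r == "-" and c != "-":
            counts["deletion"] += 1       # base missing from the read
        elif c == r:
            counts["match"] += 1
        else:
            counts["substitution"] += 1
    return counts

print(count_error_types("ACGTAC-GT", "ACGAACTG-"))
# {'match': 6, 'substitution': 1, 'insertion': 1, 'deletion': 1}
```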
Successive amplification(s) propagate previous errors and introduce new ones, leading to populations of reads that increasingly diverge from their original biological templates. Amplification products are frequently used as the starting material for a sequencing run; thus the starting material may contain large numbers of unique reads that do not exist in the original biological sample. Score-based methods have no means to distinguish these unique and non-biological reads from the original templates. Scores do provide useful information, namely the fidelity with which sequencer base calls are made, but these estimates carry no information with respect to the origin of the sequenced read: is the sequence genuine/biological, or an error-containing artifact of imperfect amplification? Through the careful examination of prefix-identical reads, DRISEE is able to address this question; in the context of shotgun metagenomic data, no other method can. We assert that reference-genome-based error determination methods provide the most complete and accurate measure of sequencing error. This is due to the fact that (1) such methods consider the entire scope of procedures that accompany a typical sequencing experiment and (2) they compare raw sequence data to an absolute standard, a reference genome. Score-based metrics (e.g. Q or Phred) only consider error introduced by the actual act of sequencing (ignoring error introduced by any processes that precede actual sequencing - e.g. DNA/RNA extraction, sample amplification and purification, etc.) and are the product of proprietary black-box software products that can vary considerably among different sequencing technologies. Unfortunately, reference-genome-based methods cannot be applied to metagenomic data (reference metagenomes do not exist, and are unlikely to anytime in the near future). DRISEE can be thought of as a reference-genome-like method; the key difference is that the reference sequences are derived internally from the pool of artifactual duplicate reads, and not from an external reference genome. 
Figure 2: DRISEE performance on simulated and real data. (a) Simulated data sets were generated from real whole genome sequences [12], taken from a single sequenced genome, and randomly fragmented into reads that exhibit length distributions consistent with different sequencing technologies (see Methods). Total DRISEE error rates for each sample (Y-axis) are plotted against the known, artificially introduced error rates (X-axis). The equation and R^2 values represent a linear regression of the displayed data. (b) DRISEE and a conventional reference-genome-based error method were applied to a set of published genomic data sets [12]. 
The similarity between reference-genome-based and DRISEE-derived errors for the same genomic sequencing samples (Figure 2b) is not surprising; both methods rely on comparisons to reference standards. Unfortunately, reference-genome-based methods cannot be applied to metagenomic data (the appropriate reference standards do not exist). Reference-genome-based methods possess another potential fault, the utilization of preliminary identity/similarity filters that may lead to artifactual deflation of error estimates. In particular, conventional reference-genome-based methods employ a preliminary similarity search to align sequenced reads to the most similar portion of the selected reference genome. Reads that fail to align to the reference genome with the selected initial level of stringency (criteria are generally lenient, e.g. 
90% identity for the full length of the read [12]) are discarded from subsequent analysis. In this way, the most error prone reads, those that do not align well to the reference genome, even with lenient criteria, and would contribute significantly to calculated error, are not considered. DRISEE takes a very different approach. Reads are binned based on 100% identity in their prefix region, but no identity/similarity requirement is made of the non-prefix bases. Criteria for prefix length and abundance provide conditions so improbable as to preclude any possibility other than technical duplication. Technical duplicates should be identical to each other, not just in their prefix region, but through the length of the entire read, except for differences introduced by error. While 100% identity in the prefix region is used to cluster reads, only the non-prefix bases (those not required to exhibit identity/similarity with other reads) are used in the error calculations. As no additional requirement for sequence identity/ similarity is required of the non-prefix bases, DRISEE can provide estimates of error that are less constrained by filters placed in conventional reference-genome based methods. As an example, consider a 100 bp read. Under the reference-genome-based method utilized by Niu et al. (see Figure 2b), 90 bp would be required to perfectly align with a reference genome before error analyses are conducted; thus, the maximum detectable deviation from the reference standard is 10% (i.e. a maximum of 10% error can be detected). Alternatively, DRISEE would cluster the read into a bin of reads with the same 50 bp prefix and would subsequently ignore this prefix to produce an estimate of error solely on the non-prefix bases (those not required to exhibit identity/similarity with other reads in their respective bin). This allows DRISEE to consider errors that span a much broader range (errors in excess of 50% have been observed -see Figure 3). Given that DRISEE considers the complete scope of procedures implemented in a given sequencing experiment, and score-based methods only provide information with respect to the actual act of sequencing, it is not surprising that DRISEE produces error estimates that are generally higher (Figures 3, 4a, & 4b). The uniqueness of DRISEE error profiles was unexpected. Distinct error profiles are observed for each of two sequencing technologies, 454 ( Figure 4a) and Illumina ( Figure 4b); each exhibits a clearly unique error profile, whereas Q-value derived error profiles for the very same samples are indistinguishable from each other. Furthermore, unique profiles were observed when samples processed with the same sequencing technology (Illumina) were grouped by experiment, suggesting the presence of platformindependent technological or lab-dependent errors (Figure 4c). Even finer distinctions are observable among the error profiles for single samples taken from the same experiment (Figure 4d). DRISEE provides a means to assess the relative quality of sequencing between technologies (Figure 4a and b), experiments performed on the same platform (Figure 4c), and even between individual samples taken from the same experiment (Figure 4d). The ability of DRISEE to provide a preliminary estimate of sample quality, and indications as to the suitability of a sample for subsequent analyses, is clearly demonstrated in Figure 4d. Two samples from the same experiment exhibit vastly different DRISEE error levels (1 vs. 45% average error). 
These values are reflected in the MG-RAST-based annotations of the samples. Nearly 90% of the reads from the high error sample fail MG-RAST quality control procedures; just 4% of the reads are successfully annotated as known proteins. The higher quality data set loses a much smaller portion of its reads to quality control (23%) and has eight times as many reads annotated as known proteins (33%). In the current age of compute-constrained bioinformatics, the identification and correction/removal of low quality sequence data, from relatively mild procedures like read trimming -DRISEE informed read trimming is currently under development -to more drastic action, including the elimination of entire sequencing samples, is an acute and steadily growing necessity. DRISEE can provide researchers with the ability to identify low quality sequence data before time-consuming and potentially costly analyses are performed. DRISEE also provides researchers with a platform-independent means to assess error among samples, after they have undergone analyses, allowing a quantitative assessment as to the fidelity of analysis-derived inferences. As an example, annotations related to high error samples like that presented in Figure 4d (purple DRISEE profile) should be treated with a great deal more skepticism than those derived from a higher quality data set (e.g. 4d, green DRISEE profile). This is especially true when considering samples with subtle differences that may easily be obscured by high levels of sequencing error. Arguably, DRISEE has some limitations. At present, it is not applicable to eukaryotic data, sequences with low complexity, and/or known sequences that may exhibit an unusually high level of biological repetition, particularly amplicon ribosomal RNA data. These types of data are likely to meet DRISEE requirements for prefix length and abundance, but represent real biological variation that could be misinterpreted by DRISEE as sequencing error. Moreover, DRISEE operates on artifactually duplicated reads-an approach that works well with current platforms such as 454 and Illumina but may require procedural modifications (such as the intentional inclusion of highly abundant sequence standards) if future developments eliminate ADRs. In summary, DRISEE provides accurate assessments of sequencing error of metagenomic (Figures 3-5) and genomic ( Figure 2) data, accounting for error type as well as frequency ( Figure 5). DRISEE error profiles can be used to explore correlations between sequencing error and metadata (e.g. Figure 4a & b suggests the presence of platform dependent trends in DRISEE calculated errors; Figure 4d demonstrates a correlation between DRISEE calculated error and the percent of reads that MG-RAST is able to successfully characterize), allowing investigators to differentiate experimentally meaningful trends from artifacts introduced by previously uncharacterized sequencing error. Traditional score-and reference-genome-based methods do not allow for such observations with respect to shotgun metagenomic data. DRISEE also offers the advantage that it requires no data other than an input FASTA or FASTQ file. Moreover, DRISEE considers error independent of sequencing platform, without prior knowledge. These characteristics make DRISEE a promising method-particularly with respect to the enormous quantities of shotgun-based metagenomic data that are anticipated in the near future. DRISEE will soon be available to analyze sequencing samples in MG-RAST. 
We also provide MG-RAST-independent code to allow users to perform DRISEE analyses without MG-RAST: https://github.com/MG-RAST/DRISEE. 
Overview 
Duplicate Read Inferred Sequencing Error Estimation (DRISEE) can be applied to sequence data produced from any sequencing technology. It provides an error profile (Tables 1 and 2 provide an excerpted example) that can be used to explore the sequencing error, as well as biases in error, that are present in a single sequencing run or any group of sequencing runs. The latter capability enables the user to produce error profiles specific to a particular sequencing technology, sample preparation procedure, or sequencing facility - in short, to any quantified variable (i.e., metadata) related to one or more sequencing samples. DRISEE exhibits several desirable characteristics that are not found in the most widely utilized methods to quantify sequencing error: reference-genome-based methods that rely on comparison to standard sequences (generally a published sequenced genome), and quality score-based methods that rely on sophisticated, platform-dependent models of error to derive base calls with affiliated confidence estimates (Q or Phred scores) for each sequenced base. DRISEE can be applied to metagenomic or genomic data produced with any sequencing technology and requires no prior knowledge (i.e., reference genomes or platform-dependent error models). DRISEE relies on the occurrence of artifactually duplicated reads - nearly identical sequences that exhibit abundances that greatly exceed expectations of chance, even when a modest amount of possible biological duplication is taken into account. Illumina and 454 platforms exhibit a well-documented [12,26], but poorly understood, propensity to produce large numbers of ADRs. DRISEE utilizes this artifact as a means to create internal sequence standards that can be used to assess error within a single sample, or across multiple samples. We identify ADRs as those reads that exhibit an identical prefix (prefix = the first l bases of a read) at some threshold abundance (n) that exceeds chance expectations, even those that take biological duplication into account. The precise values of l (prefix length) and n (prefix abundance) can be varied to accommodate the scale of any sequencing technology. In the work presented here, bins (groups) of duplicate reads were used to calculate error values if they exhibited an identical prefix length (l) of 50 bases with an abundance (n) of 20 or more reads. These requirements are arbitrary, but were selected on sound statistical and biological assumptions. Chief among these is the extreme improbability that such an occurrence (20 reads, each with identical 50 bp prefixes) could occur by chance (i.e. without technical duplication via WGA, PCR, etc.). These criteria are stringent enough to justify assumptions of biological and statistical uniqueness; indeed, such an occurrence is extremely unlikely by chance, where p is the probability that a prefix of length l (50 bp) will be observed n (20) times and 4 represents the number of possible bases (A, T, C, and G). Even in data that are Illumina scale (on the order of 1 million reads per run), a chance observation of 20 reads that exhibit the same 50 bp prefix is highly improbable (chance ≈ 1E06 × 4E-32 = 4E-26); however, ADRs frequently exceed these limits, making them easy to detect, and providing an ideal population to probe for sequencing error - a population of reads that should be completely identical (i.e. 
identical beyond their 50 bp prefix) except for errors introduced by sequencing procedures. The default values for nucleotide length and number of reads required for a bin of ADRs to undergo DRISEE analysis are arbitrary; however, they possess a key feature: improbability far beyond that expected by chance, even if biological repetition were present, and even when data are Illumina scale (1E06 reads). Less stringent criteria (prefix length 20 bp, prefix abundance 20; p = 5E-14) were applied to data generated by 454 technologies, yielding extremely similar estimations of error (data not shown). Much more stringent criteria were selected for this study such that the method could be applied to 454 and Illumina data without concern for the difference in scale in the outputs of the two technologies (454 ≈ 1E05, Illumina ≈ 1E06 reads per run). DRISEE exhibits a universality that other methods lack, but only if the data under consideration meet the following criteria: (1) data must be in FASTA or FASTQ format; (2) there must be enough ADRs to safely infer that they are the product of artifact and not of real biological variation; (3) input sequence data should not be culled, trimmed, or modified in any way by sequencer processing software (note that while DRISEE utilizes ADRs in its calculations, it does not cull these sequences from processed datasets); (4) data under consideration should be the product of random (i.e. shotgun) sequencing; (5) at this time, amplicon data (specifically, directed sequencing of ribosomal RNA amplicons) are not suitable for DRISEE analysis, as ribosomal amplicon reads start with highly conserved regions (primer target sites) followed by regions that exhibit a large degree of real biological variation (the hypervariable regions) that DRISEE could misinterpret as error. 
Data access 
Unless otherwise indicated, data sets examined in this study were obtained via SRA or MG-RAST. Table S1 (Supplemental Table 1) contains a complete list of sequence data used in the accompanying manuscript. Datasets are referenced by their SRA (http://www.ncbi.nlm.nih.gov/sra), MG-RAST (http://metagenomics.anl.gov/), or both identifiers/accession numbers. An MG-RAST-independent version of the DRISEE code, with detailed documentation, including installation and running instructions as well as runtime-related statistics, can be downloaded from https://github.com/MG-RAST/DRISEE. See Text S1 (Supplemental Methods) and Figure 1b for a detailed workflow-based description of DRISEE. 
Tables 1 and 2 overview 
DRISEE analysis tables take the same form whether they exhibit the counts derived from a single bin of artifactually duplicated reads, multiple bins from the same sample, or much larger collections of bins spanning multiple samples. The excerpted tables displayed here represent the raw and percent-scaled DRISEE error profile for all considered prefix-identical bins in a single metagenomic sequence sample (MG-RAST ID 4462612.3). The DRISEE table is presented as raw counts per base pair position (Table 1) or percent error per position (Table 2). Tables 1 and 2 contain three sections (ID, Summary, and bp counts), described in the legends below. 
Text S1 Supplemental Methods. Contains an extended workflow description of a typical DRISEE analysis and some additional detailed descriptions of methods briefly referred to in the main text. Supporting Information (DOC)
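The two table forms described in the overview above (raw counts per position, Table 1; percent error per position, Table 2) are related by a simple normalization. The sketch below illustrates that conversion with hypothetical counts; the real tables additionally break mismatches down by error type, which is omitted here.

```python
# Hypothetical illustration of converting a Table 1-style row of raw
# per-position match/mismatch counts into a Table 2-style percent error.
raw_counts = {                                 # counts pooled over all qualifying ADR bins (made up)
    51: {"match": 980, "mismatch": 20},        # consensus positions beyond the 50-bp prefix
    52: {"match": 965, "mismatch": 35},
    53: {"match": 940, "mismatch": 60},
}
for pos, counts in raw_counts.items():
    total = counts["match"] + counts["mismatch"]
    pct_error = 100.0 * counts["mismatch"] / total
    print(f"position {pos}: {pct_error:.2f}% error")
```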
7,758.8
2012-06-01T00:00:00.000
[ "Biology", "Computer Science" ]
Gauged Peccei-Quinn Symmetry - A Case of Simultaneous Breaking of SUSY and PQ Symmetry Recently, a simple prescription to embed the global Peccei-Quinn (PQ) symmetry into a gauged $U(1)$ symmetry has been proposed. There, explicit breaking of the global PQ symmetry expected in quantum gravity are highly suppressed due to the gauged PQ symmetry. In this paper, we apply the gauged PQ mechanism to models where the global PQ symmetry and supersymmetry (SUSY) are simultaneously broken at around $\mathcal{O}(10^{11-12})$\,GeV. Such scenario is motivated by an intriguing coincidence between the supersymmetry breaking scale which explains the observed Higgs boson mass by the gravity mediated sfermion masses, and the PQ breaking scale which evades all the astrophysical and the cosmological constraints. As a concrete example, we construct a model which consists of a simultaneous supersymmetry/PQ symmetry breaking sector based on $SU(2)$ dynamics and an additional PQ symmetry breaking sector based on $SU(N)$ dynamics. We also show that new vector-like particles are predicted in the TeV range in the minimum model, which can be tested by the LHC experiments. I. INTRODUCTION The Peccei-Quinn (PQ) mechanism [1][2][3][4] provides us with a very successful solution to the strong CP problem. The effective θ-angle of QCD is canceled by the vacuum expectation value (VEV) of the pseudo-Nambu-Goldstone boson, the axion a, which results from spontaneous breaking of the global U (1) Peccei-Quinn symmetry, U (1) P Q . The solution of the strong CP problem based on a global symmetry is, however, not on the very firm theoretical ground. As the QCD anomaly explicitly breaks the U (1) P Q symmetry, it cannot be an exact symmetry by definition. Besides, it is also argued that all global symmetries are broken by quantum gravity effects [5][6][7][8][9][10]. The explicit breaking of the PQ symmetry easily spoils the success of the PQ mechanism. In Ref. [11], a simple prescription has been proposed, with which the global U (1) P Q symmetry is embedded into a "gauged" U (1) symmetry, U (1) gP Q . There, the anomalies of the gauged PQ symmetry are canceled between the contributions from two (or more) PQ charged sectors. With appropriate charge assignment of U (1) gP Q , the PQ charged sectors are highly decoupled with each other, and a global U (1) P Q symmetry appears as an accidental symmetry. As a part of the gauge symmetry, the accidental U (1) P Q is also well protected from explicit breaking caused by quantum gravity effects. This prescription provides a concise generalization of previous attempts to achieve the PQ symmetry as an accidental symmetry resulting from (discrete) gauge symmetries [12][13][14][15][16][17][18][19][20][21]. In this paper, we apply the construction of the gauged PQ symmetry to a model in which the global PQ symmetry and supersymmetry are simultaneously broken at around O(10 11−12 ) GeV [22]. Such scenario is motivated by an intriguing coincidence between the supersymmetry breaking scale which explains the observed Higgs boson mass by the gravity mediated sfermion masses in the hundreds to thousands TeV range [23] and the PQ breaking scale which evades all the astrophysical and the cosmological constraints. 1 The organization of the paper is as follows. In section II, we summarize the supersymmetric version of the gauged PQ mechanism. In section III, we construct a model in which supersymmetry and the PQ symmetry are broken simultaneously by SU (2) strong dynamics. 
In section IV, we apply the gauged PQ mechanism to the model of simultaneous symmetry breaking. The final section is devoted to our conclusions. [Footnote 1: For correspondence between the sfermion mass scale and the Higgs boson mass, see also [24][25][26]. For constraints on the PQ breaking scale, see, e.g., [27][28][29].] 
II. GENERAL PRESCRIPTION OF THE GAUGED PQ MECHANISM 
In this section, we briefly summarize a supersymmetric version of the gauged PQ mechanism [11]. 
A. Would-be Goldstone and Axion Superfields 
As a simple example, let us consider two global PQ symmetries U(1)_PQ1 and U(1)_PQ2, which are broken by the VEVs of Φ_1, Φ̄_1 and Φ_2, Φ̄_2, respectively. For instance, such a vacuum is achieved by the superpotential. Here, Φ_i and Φ̄_i (i = 1, 2) have charges ±1 under U(1)_PQi and have vanishing charges under U(1)_PQj (j ≠ i), respectively. The superfields X_1,2 have vanishing charges under both of the PQ symmetries. The parameters λ_1,2 are coupling constants, and Λ_1,2 are dimensionful parameters. After the spontaneous breaking of the PQ symmetries, the Φ's lead to the Goldstone superfields A_1,2. By using the Goldstone superfields, the PQ symmetries are realized nonlinearly. The PQ symmetries are communicated to the supersymmetric Standard Model (SSM) sector by introducing extra quark multiplets as in the KSVZ axion model [30,31]. Throughout this paper, we assume that the extra multiplets form 5 and 5̄ representations of the SU(5) gauge group of the Grand Unified Theory (GUT). Let us suppose that Φ_1,2 couple to N_1 and N_2 flavors of the KSVZ extra multiplets 5_i, 5̄_i (i = 1, 2), respectively. Through the above coupling, both of the global PQ symmetries are broken by the Standard Model anomaly. The anomalous breaking of the global PQ symmetries leads to the anomalous coupling of the Goldstone superfields, where W^α_l (l = 1, 2, 3) denote the field strength superfields of the Standard Model gauge interactions. [Footnote 3: Here, the gauge indices of SU(3)_c and SU(2)_L are suppressed, and the GUT normalization is used.] We normalize the gauge field strength so that the gauge kinetic functions are given in terms of g_l and θ_l, the gauge coupling constants and the vacuum angles of the corresponding gauge interactions. An important observation here is that there is a linear combination of the PQ symmetries for which the Standard Model anomalies are absent. In fact, a U(1) symmetry under which Φ_1,2 have charges q_1 and q_2 is free from the Standard Model anomaly for an appropriate choice of the charges. In the gauged PQ mechanism, we identify the anomaly-free combination as a gauge symmetry U(1)_gPQ. The gravitational anomaly and the self-anomaly of the U(1)_gPQ are canceled by adding U(1)_gPQ charged singlet fields. Hereafter, we take q_1 and q_2 to be both positive and relatively prime numbers without loss of generality. In the gauged PQ mechanism, one of the linear combinations of A_1,2 is the would-be Goldstone supermultiplet, and the other combination corresponds to the physical axion superfield. To see how the physical axion is extracted, let us consider the Kähler potential of the Φ's, where V and g are the U(1)_gPQ gauge supermultiplet and the gauge coupling constant, respectively. Under the U(1)_gPQ gauge transformation, the gauge field is shifted, with Θ being the gauge parameter superfield. By substituting Eqs. 
(2) and (3), the Kähler potential is reduced accordingly. The physical axion and the would-be Goldstone superfields A and G are obtained from A_1,2. By using A and G, the Kähler potential is rewritten. The final expression of Eq. (15) shows there is no bilinear term which mixes A and Ṽ. Therefore, we find that A corresponds to the physical axion superfield, while G is the would-be Goldstone superfield which is absorbed by V in the unitarity gauge. It should be noted that the physical axion A is invariant under the gauge U(1)_gPQ transformation. For a later purpose, let us discuss the domain and the effective decay constant of the axion. The domains of the imaginary parts of A_1,2 (corresponding to the phases of Φ_1,2) are determined accordingly. When q_1 and q_2 are relatively prime integers, the gauge-invariant axion interval is given in [11]. Accordingly, the global U(1)_PQ symmetry is realized with F_a defined as an effective decay constant. 
B. Accidental Global PQ Symmetry 
As argued in [5][6][7][8][9][10], global symmetries are expected to be broken by Planck-suppressed superpotential terms, where M_PL = 2.4 × 10^18 GeV denotes the reduced Planck scale. When supersymmetry is spontaneously broken in a separate sector, the above superpotential contributes to the axion potential through the supergravity effects, where m_3/2 denotes the gravitino mass. [Footnote 4: In supergravity, a superpotential term W_i directly appears in the scalar potential, with n_i being the mass dimension of W_i.] In the final expression, we use Φ_1^{q_2} Φ̄_2^{q_1} = (Λ^{q_2+q_1}/2^{(q_2+q_1)/2}) e^{ia/F_a}, and the intrinsic θ angle of QCD is absorbed by the definition of the axion field. The first term represents the axion mass term due to the QCD effects [3], where m_u,d are the u- and d-quark masses, m_π the pion mass, and f_π ≈ 93 MeV the pion decay constant. As a result, the effective θ angle at the vacuum of the axion is determined. Thus, for q_1 + q_2 ≳ 12, m_3/2 = O(10^6) GeV, and Λ_1,2 = O(10^12) GeV, the explicit breaking terms of the global PQ symmetries are small enough to be consistent with the measurement of the neutron EDM, i.e. θ_eff < 10^-11 [32]. In this way, a high-quality global PQ symmetry appears as an accidental symmetry in the gauged PQ mechanism. 
C. Domain Wall Problem 
Before closing this section, let us briefly discuss the domain wall problem. The anomaly cancellation condition in Eq. (10) is generically solved in terms of N_GCD ∈ ℕ, the greatest common divisor of N_1,2. Then, the anomalous coupling in Eq. (7) is rewritten accordingly, and hence, it should be noted that the anomalous coupling of the axion respects a discrete Z_{N_GCD} symmetry for N_GCD > 1. The Z_{N_GCD} symmetry is eventually broken in the vacuum of the axion. Thus, the model with N_GCD > 1 suffers from the domain wall problem if the global PQ symmetry is broken after inflation, since the average of the axion field value in each Hubble volume is randomly distributed. To avoid the domain wall problem, spontaneous breaking of the global PQ symmetry is required to take place before inflation, which in turn requires a rather small inflation scale to avoid the axion isocurvature problem (see, e.g., Refs. [28,33]). The local string, on the other hand, corresponds to the configurations in which the phases of Φ_1 and Φ_2 wind q_1 times and q_2 times simultaneously. With the U(1)_gPQ gauge field winding simultaneously, the tension of the local string is finite even in the limit of infinite volume for the local string. A striking difference between the global strings and the local strings is how the axion field winds around the strings. 
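Before turning to the string discussion below, the quality estimate quoted in Section II B above (θ_eff < 10^-11 for q_1 + q_2 ≳ 12) can be reproduced at the order-of-magnitude level. The expressions below are a hedged reconstruction of the standard Planck-suppressed-operator counting, not equations taken from the paper; all O(1) factors and the precise normalization of the fields are dropped.

```latex
% Order-of-magnitude sketch (assumed form, prefactors dropped):
% the lowest-dimensional U(1)_gPQ-invariant operator built from \Phi_1, \bar\Phi_2
% contributes to the scalar potential via supergravity as
\delta V \;\sim\; m_{3/2}\,\frac{\Lambda^{q_1+q_2}}{M_{PL}^{\,q_1+q_2-3}}
          \;=\; m_{3/2}\,\Lambda^3\left(\frac{\Lambda}{M_{PL}}\right)^{q_1+q_2-3},
% which shifts the axion vacuum relative to the QCD-induced potential
% (of height m_a^2 F_a^2 \sim (76\,\mathrm{MeV})^4) by roughly
\theta_{\rm eff} \;\sim\; \frac{\delta V}{m_a^2 F_a^2}\,.
% For q_1+q_2 = 12, m_{3/2} = 10^{6}\,\mathrm{GeV}, \Lambda = 10^{12}\,\mathrm{GeV},
% and M_{PL} = 2.4\times10^{18}\,\mathrm{GeV}, one finds
% \delta V \sim 10^{42}\,\mathrm{GeV}^4 \times (4\times10^{-7})^{9}
%          \sim \text{a few}\times10^{-16}\,\mathrm{GeV}^4 ,
% giving \theta_{\rm eff} \sim 10^{-11}, consistent with the bound quoted above.
```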
Around the local strings, only the would-be Goldstone field winds, while the axion winds around the global strings. Thus, when the axion potential is generated at around the QCD scale, the axion domain walls are formed only around the global strings, while they are not formed around the local strings. Once the domain walls are formed around the global strings, they immediately dominate over the energy density of the universe, which causes the domain wall problem. Therefore, for the domain wall problem not to occur, the local strings should be formed preferentially at the phase transition. The string tensions of the global strings and the local strings, however, depend on model parameters. Thus, there is no guarantee that only the local strings preferentially survive in the course of the cosmic evolution. As an example, let us consider a case with Φ_1 ≫ Φ_2. In this case, the cosmic strings are formed at the first phase transition, i.e. Φ_1 ≠ 0 with Φ_2 = 0. At this stage, strings around which the phase of Φ_1 winds just once are expected to be dominantly formed. They are local because we can take an appropriate charge normalization for the U(1)_gPQ. As the temperature of the universe decreases, the string networks follow the scaling solution, where the number of the cosmic strings in each Hubble volume becomes constant (see, e.g., Ref. [34]). Once the temperature becomes lower than the scale of the second phase transition, i.e. Φ_2 ≠ 0, the local strings formed at the first phase transition are no longer local strings. [Footnote 5: The configuration of the gauge field formed at the first phase transition does not coincide with the one required for the local string with Φ_2 ≠ 0.] Besides, formations of the global strings of Φ_2 are also expected at the second phase transition, in which the phase of Φ_2 winds just once. To form a genuine local string, it is required to bundle q_1 ex-local strings (formed by Φ_1) and q_2 global strings (formed by Φ_2) into a single string. However, the confluence of global strings into a local string is quite unlikely, as there is no correlation between the nature of the cosmic strings in adjacent Hubble volumes. Therefore, when Φ_1 ≫ Φ_2, the domain wall problem is expected to be unavoidable even if N_GCD = 1. In summary, let us list the possibilities to avoid the domain wall problem. The first possibility is a trivial one where both the gauged and the global PQ symmetries are broken before inflation. This solution does not require N_GCD = 1. In this possibility, there is a constraint on the Hubble scale during inflation from the axion isocurvature problem. The next possibility is only applicable for N_GCD = 1 with q_1 = 1 and q_2 = N (> 1). Here, it is assumed that the first phase transition (i.e. Φ_1 ≠ 0) takes place before inflation while the second phase transition (i.e. Φ_2 ≠ 0) occurs after inflation. In this second possibility, the local strings formed at the first phase transition are inflated away. The global strings formed at the second phase transition, on the other hand, do not cause the domain wall problem, as each of the global strings is attached to only one domain wall [35,36]. In addition to these two possibilities, there can be another possibility. It should be noted that the second possibility (and the third possibility, if numerically confirmed) is one of the advantages of the gauged PQ mechanism over models in which the global PQ symmetry results from an exact discrete symmetry, such as Z_N. 
In such models, the axion potential also respects the Z N symmetry, and hence, the domain wall problem is not avoidable when the global PQ symmetry is spontaneously broken after inflation. In the gauged PQ models, on the other hand, it is possible that the global PQ symmetry is broken after inflation without causing either the domain wall problem or the axion isocurvature problem. III. DYNAMICAL SUPERSYMMETRY/PQ SYMMETRY BREAKING In this section, we discuss a model of simultaneous breaking of supersymmetry and the global PQ symmetry. As we are interested in solutions to the strong CP problem without severe fine-tuning, it is natural to seek models in which the PQ breaking scale is generated by dimensional transmutation. Thus, in the following, we construct a simultaneous supersymmetry/PQ symmetry breaking sector based on strong dynamics. For now, we do not consider the gauged PQ mechanism, which will be implemented in the next section. A. Simultaneous Breaking of Supersymmetry and Global PQ Symmetry As the simplest example of the dynamical supersymmetry breaking models, we consider a model of supersymmetry breaking based on SU (2) gauge dynamics (the IYIT model) [37,38]. The advantage of this model is that the nature of dynamical supersymmetry breaking is calculable by using effective composite states. The model consists of four SU (2) doublets, Q i (i = 1 − 4), and six singlets, Z kl . Those superfields couple via the superpotential where λ kl ij denote coupling constants with λ kl ij = −λ kl ji = −λ lk ij . The maximal non-abelian global symmetry of the IYIT model is the SU (4) flavor symmetry, SU (4) f , which is broken by the superpotential, in turn, does not allow a supersymmetry breaking scale lower than the Planck scale due to the condition for the flat present universe. In addition, it is also known that R-symmetry (or at least an approximate R-symmetry) is relevant for supersymmetry breaking vacua to be stable [39,40]. Given its importance, we assume that the Z N R (N > 2) symmetry is an exact discrete gauge symmetry [41][42][43][44][45][46][47]. 7 In this paper, we take the simplest possibility, Z 4R , assuming the presence of an extra multiplet in the 5, 5̄ representations of the SU (5) GUT. The Z 4R symmetry is free from the Standard Model anomaly when the R-charges of the bilinear term of the Higgs doublets and that of the extra multiplets are vanishing [48][49][50][51]. 8 In this model, we identify the global PQ symmetry with a U (1) subgroup of SU (4) f (Tab. I). As it is a subgroup of SU (4) f , the PQ symmetry is free from the SU (2) anomaly. Under the global U (1) P Q symmetry, the superpotential is reduced to where the λ's are dimensionless coupling constants with λ̃ kl ij = 0 for ij = 12, 34 or kl = 12, 34. Hereafter, we take λ 12 12 = λ 34 34 = λ for simplicity, although it is straightforward to extend the following analysis to λ 12 12 ≠ λ 34 34 . As we will see shortly, the PQ symmetry is spontaneously broken by the VEVs of Q 1 Q 2 and Q 3 Q 4 . 7 In Ref. [18], it is proposed to achieve the global PQ symmetry as an accidental symmetry protected by the exact discrete R-symmetry without relying on the gauged PQ mechanism. 8 For GUT models which are consistent with the Z 4R symmetry, see, e.g., [52,53].
By assuming the KSVZ axion model, the PQ symmetry is communicated to the SSM sector through couplings to the KSVZ extra multiplets in 5 and5 representations of the The PQ charges of the KSVZ extra multiplets are given in Tab Now, let us discuss how supersymmetry and the PQ symmetry are broken spontaneously. Below the dynamical scale of SU (2) dynamics, Λ, the IYIT model is well described by using the composite fields, M ij ∼ Q i Q j , with an effective superpotential, Here, are the PQ neutral mesons. The coupling constantsλ and the singlets Z 0 's are also rearranged accordingly. In the effective superpotential, the quantum modified constraint [54] is implemented by a Lagrange multiplier field X . By assuming that λ's are perturbative, and λ ± (= λ) are smaller thanλ's, the VEVs of M ± are given by Other fields do not obtain VEVs of O(Λ). 10 At this vacuum, the PQ symmetry is spontaneously broken by M ± while supersymmetry is broken by the VEVs of the F -components of Z ± , i.e., simultaneously. 9 The KSVZ extra multiplets should be distinguished the extra multiplets required to cancel the Standard Model anomaly of the Z 4R symmetry. 10 The scalar components of Z ± and X obtain small VEVs of O(m 3/2 ). Here, let us comment that the Z 4R is not enough to restrict the superpotential in the form of Eq. (31). In fact, there can be superpotential terms such as Z 3 0 or Z 0 Z + Z − without the U (1) A (or Z 4 ) symmetry. As those terms make the supersymmetry breaking vacuum in Eqs. (35) and (36) metastable, the coefficients of those terms should be rather suppressed to make the vacuum long lived. Such suppression can be achieved, for example, by assuming that a subgroup of Z 4 and U (1) P Q is an exact symmetry where Z 0 's are charged but Z ± are neutral. 11 It is also possible to suppress the unwanted terms by extending the SU (2) dynamics of the IYIT sector into a conformal window by adding extra doublets [55][56][57]. B. Axion Supermultiplet The degeneracy due to the PQ symmetry breaking is parametrized by the axion superfield A, with which the PQ symmetry is realized by Here, we reduce the domain of the U (1) P Q rotation parameter from α = 0−4π to α = 0−2π, since all the SU (2) gauge invariant fields have the PQ charge of ±2 (see Tab. I). In other words, the sign changes of Q's by a phase rotation with α = 2π can be absorbed by a part of SU (2) transformation. The effective Kähler potential and superpotential of M ± and Z ± are given by, where the ellipses denote the higher dimensional operators. By substituting the axion su- 11 As this symmetry is not broken spontaneously at the vacuum, and hence, Z 0 's and M 0 's are predicted to be stable. Thus, the simultaneous breaking of the IYIT sector should take place before inflation to avoid the production of those stable particles if we assume the above symmetry. perfield, the effective theory is reduced to with some irrelevant holomorphic terms omitted in the Kähler potential. The scalar potential is accordingly given by, 12 In the final expression, we rearranged the scalar fields by introducing complex scalar fields S and T , so that the PQ symmetry is manifest in the scalar potential. The above scalar potential shows that the complex scalar T and the real component of with which the pseudo-flat direction is stabilized at its origin. 13 The superpotential in Eq. (42) also shows that the fermion partners of A (the axino) and T obtain a Dirac mass of λΛ, with each other. 
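To get a feeling for the scales involved, here is a minimal numerical sketch relating the IYIT dynamical scale Λ to the axion decay constant of this sector and to the gravitino mass; the normalizations f_a ≈ √2 Λ, F ≈ λΛ² and m_3/2 = F/(√3 M_PL) are rough, with order-one factors dropped, so the numbers are indicative only.

```python
# Minimal sketch of the IYIT-sector scales (my own normalization assumptions):
#   f_a   ~ sqrt(2) * Lambda                      PQ/axion decay constant of this sector
#   F     ~ lambda * Lambda^2                     SUSY-breaking F-term of Z_+-
#   m_3/2 ~ F / (sqrt(3) * M_PL)                  gravitino mass
import math

M_PL = 2.4e18  # reduced Planck scale [GeV]

def iyit_scales(Lam, lam=1.0):
    f_a = math.sqrt(2.0) * Lam             # [GeV]
    F   = lam * Lam**2                     # [GeV^2]
    m32 = F / (math.sqrt(3.0) * M_PL)      # [GeV]
    return f_a, m32

for Lam in (1.0e11, 1.0e12, 1.0e13):
    f_a, m32 = iyit_scales(Lam)
    print(f"Lambda = {Lam:.0e} GeV  ->  f_a ~ {f_a:.1e} GeV,  m_3/2 ~ {m32/1e3:.1e} TeV")
```

For Λ of order 10^12 GeV this gives a gravitino mass of a few hundred TeV, consistent with the range quoted later for the allowed parameter region.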
The fermion partner of S corresponds to the 12 Throughout the paper, we use the same symbols to describe the superfields and their scalar components. 13 Here, we neglect the one-loop contributions from the U (1) gP Q gauge interaction by assuming that the gauge coupling constant is small. The contributions from the gauge interaction, in fact, destabilize the origin of the pseudo flat direction [59][60][61]. goldstino which is absorbed into the gravitino by the super-Higgs mechanism. Putting together, the model achieves dynamical breaking of supersymmetry and the PQ breaking simultaneously. The axion supermultiplet splits into a massless axion and massive saxion/axino with masses of the supersymmetry/PQ breaking scale. The axion couples to the SSM sector via the coupling in Eq. (33), i.e., where a = √ 2 Im[A] denotes the axion field and f a = √ 2Λ. After integrating out the extra KSVZ multiplets, the axion couples to the SM gauge fields through with which the strong CP problem is solved with κ being a dimensionless coupling constant. 15 The corresponding symmetry breaking terms in the scalar potentials are given by, 14 Here, we require that U (1) P Q is not broken by renormalizable interactions as a part of definition of the global symmetry. 15 Lower dimensional operators which break the PQ symmetry, such as Z 4 + /M P L , are forbidden by the Z 4R symmetry. Here, we inserted the VEVs of M ± and those of F -terms of Z ± . Due to the explicit breaking, the VEV of the axion, and hence, the effective θ angle is shifted to . Thus, unless Im[κλ] is finely tuned to be smaller than O(10 −11 ), the effective θ angle is too large to be consistent with the measurement of the neutron electric dipole moment (EDM) [32]. IV. GAUGED PQ EXTENSION OF SIMULTANEOUS BREAKING MODEL Let us now implement the gauged PQ mechanism to the model of the simultaneous breaking of supersymmetry and the PQ symmetry in section III. For that purpose, we introduce an additional sector based on SU (3) dynamics which breaks a PQ symmetry spontaneously. In the following, we call this model the SU (3) model, and put primes on the superfields and the symmetry groups in this sector. In addition to the global U (1) P Q symmetry, the superpotential possesses a continuous Rsymmetry and a U (1) A symmetry (broken down to a Z 6 symmetry by the SU (3) anomaly) TABLE II. Charge assignment of the dynamical PQ symmetry breaking sector. The chiral superfields, Q 's, and Z 's, are the SU (3) triplets and singlets, respectively. The U (1) P Q symmetry corresponds to U (1) B symmetry in the SU (3) sector. The KSVZ extra multiplets are denoted by 5 and5 . The U(1) R and U (1) A symmetries are accidental symmetries, with Z 4R being an exact symmetry. The R-charges of the KSVZ extra multiplets are taken to be r 5 + r 5 = 2. in Tab. II. As discussed previously, however, we consider that only Z 4R is an exact symmetry, and assume that U (1) R and U (1) A are accidental symmetries broken by higher dimensional operators. 16 Below the dynamical scale of SU (3) , Λ , the SU (3) sector is well described by the composite mesons and baryons, with an effective superpotential, Here, the second term implements the deformed moduli constraint by a Lagrange multiplier field X [54]. The mesons are neutral under U (1) P Q while the baryons have charges ±3. From the superpotential in Eq. (56), we find that the PQ symmetry is spontaneously broken by the VEVs of B ± . 
Accordingly, the vacuum is parametrized by the Goldstone superfield A , 17 16 Without U (1) A (or Z 6 ), the superpotential terms such as Z 3 are allowed even if we assume the Z 4R symmetry. Such terms, however, do not change the following discussion. 17 The origin of A is set at which B + = B − , and B + = B − for A = 0, accordingly. TABLE III. The charge assignment of the gauged PQ symmetry and the Z 4R symmetry. The singlet fields Y 's and Y 's are introduced to cancel the self-triangle and gravitational anomalies of U (1) gP Q (see subsection IV G). with Λ 2 = √ 2Λ . By using A , the PQ symmetry is non-linearly realized by As in the case of the IYIT sector, the domain of the PQ symmetry is reduced from α = 0−6π to α = 0 − 2π as the SU (3) invariant fields have the PQ charges of ±3. The U (1) P Q symmetry in this sector is also communicated to the SSM sector through the couplings to N f flavors of the KSVZ extra multiplets, 5 and5 . With the charge assignment in Tab. II, the baryons couple to the extra multiplets in the superpotential, Once U (1) P Q is broken, the axion obtains the anomalous coupling to the SSM gauge fields, while the extra multiplets obtain masses of O(Λ 3 /M 2 PL ). B. Gauged PQ Symmetry Now, we are ready to find out a model of the gauged PQ symmetry by combining the simultaneous supersymmetry and the PQ symmetry breaking model in section III and the PQ symmetry breaking model in subsection IV A. To apply the prescription in section II, let us first identify Φ 1 with the meson operator M + in section III and Φ 2 with the baryon operator B + , i.e., and assign U (1) gP Q charges of q 1 and −q 2 to them (Tab. III). 18 Then, the anomaly-free condition of the U (1) gP Q symmetry in Eq. (10) is given by, C. Accidental Global PQ Symmetry As discussed in the previous section, the global PQ symmetry can be explicitly broken by the U (1) gP Q invariant operator consisting of the fields in the two sectors. Among the explicit breaking terms, the most relevant ones are given by, 19 with κ being a dimensionless coupling constant. It should be noted that these terms are consistent with the Z 4R symmetry, and hence, no factor of m 3/2 is required unlike the terms in Eq. (22). These operators roughly contribute to the axion potential, where the VEVs of M ± , B ± , and those of the F -terms of Z ± are inserted, Therefore, in the simultaneous breaking model with the gauged PQ mechanism, the effective θ angle at the 18 The U (1) gP Q charges of Q 1,2 andQ 's corresponds to q 1 /2 and q 2 /3, respectively. 19 There are lower dimensional operators which break the global PQ symmetry with M ± replaced by M P L × Z ± in Eq. (64). The explicit breaking effects of those operators are comparable to the ones of Eq. (64) due to suppressed A-term VEVs of Z ± = O(m 3/2 ). vacuum is given by, Thus, for 3q 1 + 2q 2 > ∼ 14, the explicit breaking of the global PQ symmetries are small enough to be consistent with the measurement of the neutron EDM, i.e., θ eff < 10 −11 [32]. D. Mass Spectrum of the KSVZ Multiplets The KSVZ multiplets, (5,5) and (5 ,5 ) were introduced to communicate the PQ symmetries to the SSM sector. After PQ symmetry breaking, those extra multiplets obtain supersymmetric masses of the order of respectively (see Eqs. (33) and (60)). The scalar components of the KSVZ multiplets also obtain masses of O(m 3/2 ) through supergravity effects. Thus, most of the KSVZ multiplets become heavy and beyond the reach of the LHC experiments except for the fermion components of (5 ,5 ). 
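A rough numerical illustration of these mass estimates follows; the order-one coefficients are my assumption, while the scalings m ~ Λ²/M_PL for the multiplets coupled to the IYIT sector and m ~ Λ′³/M_PL² for those coupled to the SU(3)′ sector are the ones indicated in the text.

```python
# Hedged estimate of the supersymmetric KSVZ fermion masses (order-one factors dropped).
M_PL = 2.4e18  # reduced Planck scale [GeV]

def m_ksvz_su2(Lam, c=1.0):
    """Multiplets coupled to the IYIT sector:  m ~ Lambda^2 / M_PL."""
    return c * Lam**2 / M_PL

def m_ksvz_su3(Lam_p, c=1.0):
    """Multiplets coupled to the SU(3)' sector:  m ~ Lambda'^3 / M_PL^2."""
    return c * Lam_p**3 / M_PL**2

print(f"IYIT sector,  Lambda  = 1e12 GeV : m_KSVZ ~ {m_ksvz_su2(1e12):.1e} GeV")
print(f"SU(3)' sector, Lambda' = 1e13 GeV : m_KSVZ ~ {m_ksvz_su3(1e13):.1e} GeV")
print(f"SU(3)' sector, Lambda' = 1e14 GeV : m_KSVZ ~ {m_ksvz_su3(1e14):.1e} GeV")
```

The strong Λ′ dependence of the second estimate is what makes the fermionic components of (5′, 5̄′) potentially light enough to be relevant for collider searches, as noted above.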
20 The KSVZ extra multiplets are assumed to couple to the SSM particle via, where5 SM denotes the SSM matter multiplet, and ( ) are coefficients. Here, we take r 5 = r 5 = 1 so that5 and5 have the same R-charges with5 SM . Through the mixing terms, the KSVZ extra multiplets decay immediately into the SSM particles. 20 The extra multiplet to achieve the Z 4R symmetry also obtains the mass of O(m 3/2 ) from the R-symmetry breaking effects [62]. Finally, let us note that there can be mixing terms between (5,5) and (5 ,5 ) through, Although these operators consist of the fields in the two PQ symmetric sectors, they are invariant under not only the gauged PQ symmetry but also under the global PQ symmetries. Thus, these terms do not affect θ eff . They do not affect the KSVZ mass spectrum significantly neither. From these reasons, we neglect these mixing terms throughout this paper. E. PQ Charges in the SU (3) Model For a given q 1 and q 2 , there are upper limits on Λ and Λ to achive a high-quality global PQ symmetry (see Eq. (67)). The dynamical scales are also constrained from below for an appropriate supersymmetry breaking scale and for heavy enough KSVZ extra multiplets. As a lower limit on the supersymmetry breaking scale, i.e., Λ, we require so that the observed Higgs boson mass, m H 125 GeV, is achieved by the gravity mediated sfermion masses of O(m 3/2 ). As a lower limit on the KSVZ extra multiplets, we put from the null results of the searches for a heavy b-type quark at the LHC experiments [63][64][65][66]. In Fig. 1 given by As the extra multiplets contribute to the renormalization group evolutions of the SSM gauge coupling constants and make them asymptotically non-free, the perturbative unification puts upper limits on N f and N f , and hence, on q 1 and q 2 . In Fig. 1, we color the charges by red, with which θ eff 10 −10 is not compatible with the perturbative unification. Here, we use the renormalization group equation at the one-loop level and require that g 1,2,3 < 4π below the GUT scale, i.e., M GUT 10 16 GeV. We also take the masses of the sfermions, the heavy charged/neutral Higgs boson, and the Higgsinos to be at the gravitino mass scale. The gaugino masses are assumed to be dominated by the anomaly mediation effects [67,68] which are roughly given by (see, e.g. [69]), although the constraints do not depend on them significantly as long as they are in the TeV range. The gravitino mass is take to be within 10 TeV ≤ m 3/2 ≤ 10 PeV. These choices are motivated by the pure gravity mediation model in Refs. [70] (see also Refs. [71][72][73][74] for closely related models). 22 In the renormalization group evolution, we also take into account an extra multiplet required for the anomaly free condition of the Z 4R symmetry, whose masses are also at the gravitino mass scale. The figure shows that the requirement for perturbative unification excludes the charges . This is expected as N f flavors of the KSVZ extra multiplets have masses of 10 TeV m KSV Z 10 PeV. 23 On the other hand, a large q 1 is allowed. This is because the explicit breaking terms are suppressed by (Λ /M PL ) 3q 1 , and hence, a high-quality global PQ is possible even for a large Λ as long as q 1 is large. For a large Λ , m KSV Z also becomes large, with which the perturbative unification is possible even if N f = q 1 is large. It should be noted, however, that the effective field theory approach is no more reliable when Λ is too close to the Planck scale. 
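The perturbativity requirement can be illustrated with a one-loop toy computation. The sketch below is a simplified version of the analysis described above: it uses the MSSM one-loop coefficients, adds +1 per (5 + 5̄) flavor to every coefficient above a single common threshold m_KSVZ, and ignores the sfermion/Higgsino and Z_4R-multiplet thresholds, so the numbers are indicative only.

```python
# One-loop check that the SSM gauge couplings stay perturbative (g_i < 4*pi) up to the
# GUT scale when Nf extra (5 + 5bar) multiplets enter at a common mass m_KSVZ.
import math

M_Z, M_GUT   = 91.2, 1.0e16                  # GeV
alpha_inv_MZ = {1: 59.0, 2: 29.6, 3: 8.5}    # approximate 1/alpha_i(M_Z), GUT-normalised U(1)
b_MSSM       = {1: 33.0 / 5.0, 2: 1.0, 3: -3.0}

def alpha_inv_at_GUT(Nf, m_ksvz):
    """Each (5 + 5bar) flavour adds +1 to every one-loop coefficient above m_ksvz."""
    t1 = math.log(m_ksvz / M_Z) / (2.0 * math.pi)
    t2 = math.log(M_GUT / m_ksvz) / (2.0 * math.pi)
    return {i: alpha_inv_MZ[i] - b_MSSM[i] * t1 - (b_MSSM[i] + Nf) * t2 for i in (1, 2, 3)}

for Nf, m in [(1, 2.0e3), (5, 1.0e5), (7, 1.0e8), (10, 1.0e3)]:
    a = alpha_inv_at_GUT(Nf, m)
    perturbative = all(v > 1.0 / (4.0 * math.pi) for v in a.values())   # g_i < 4*pi
    print(f"Nf={Nf:2d}, m_KSVZ={m:.0e} GeV -> 1/alpha(M_GUT) = "
          f"({a[1]:5.1f}, {a[2]:5.1f}, {a[3]:5.1f}), perturbative: {perturbative}")
```

This reproduces the qualitative pattern described above: many light extra flavors drive a coupling to a Landau pole below the GUT scale, while the same number of flavors at a higher threshold remains acceptable.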
In the figure, we color the charges by orange if they require a large Λ , i.e., 10 16 GeV Λ 10 17 GeV. For N GCD ≥ 2, there are no appropriate charges with which θ eff < 10 −10 and the perturbative unification are compatible. F. Parameter Regions in the SU (3) Model In Fig. 2, we show the parameter regions for a given q 1 and q 2 . In each panel, we take m 3/2 < 10 PeV and λ = 1, 10 −1 , 10 −2 , respectively. The gray shaded region is excluded, as θ eff < 10 −10 is not satisfied (see Eq. (67) The figure shows that the dynamical scale Λ is tightly constrained from above to achieve θ eff < 10 −10 for the minimum charge choice, i.e., q 1 = 5 and q 2 = 1. This is understood as the explicit breaking terms are not effectively suppressed for rather small charges. As a result, the PQ breaking scales are required to be low to avoid large explicit breaking effects. The 22 Here, the Higgsino mediation effects neglected for simplicity. Besides, the gaugino spectrum is deflected from the anomaly mediation in the presence of the KSVZ extra multiplets [75]. 23 If we restrict to m 3/2 < 1 PeV, the constraint becomes tighter and the charges with q 2 > 5 are excluded. upper limit on Λ becomes tighter for a larger Λ as is expected from Eq. (67). Furthermore, as the dynamical scale Λ becomes larger for a smaller λ, the upper limit becomes even tighter for a smaller λ for a given m 3/2 . The constraints from the perturbative unification are, on the contrary, weaker since m KSV Z becomes larger for a smaller λ for a given m 3/2 . An interesting property of the minimum choice is that the model predicts the KSVZ extra multiplets (5 ,5 ) For q 1 = 7 and q 2 = 1, the upper limit on Λ is weaker than for the minimum choice. This is because the suppression factor of the explicit breaking term, (Λ /M PL ) 3q 1 , can be very small even for a rather large Λ due to a large exponent. The constraint form the perturbative unification is, on the contrary, tighter for a large q 1 as N f is proportional to q 1 . For a large N f , the masses of the KSVZ extra multiplets, m KSV Z , is required to be high to avoid the blow-up of the gauge coupling constants below the GUT scale. For q 1 = 1 and q 2 = 7, the upper limit on Λ is also weaker than the minimum choice for λ = 1 due to a strong suppression of the explicit breaking terms by (Λ/M PL ) 2q 2 . As the suppression factor is sensitive to Λ, the upper limit on Λ becomes very tight for a smaller λ for a given gravitino mass. In all cases, we find that the gravitino mass is required to be in the hundreds TeV or larger, and hence, the model can be consistent with the observed Higgs boson mass achieved by the gravity mediated sfermion masses. It is also notable that the dynamical scale Λ is larger than Λ in the allowed parameter region. Therefore, both the accidental global PQ symmetry and supersymmetry are broken by the IYIT sector while the gauged PQ symmetry is mainly broken by the SU (3) sector. This feature is attractive as it explains the coincidence between the global PQ breaking scale and the supersymmetry breaking scale. Before closing this subsection, let us comment on the axion dark matter abundance. The axion starts coherent oscillation when the Hubble expansion rate becomes comparable to the axion mass, which leads to the present axion dark matter density [76], Here, θ i is the initial misalignment angle of the axion field. Thus, the axion can be a dominant component for dark matter of F a = O(10 12 ) GeV, i.e., Ω DM 0.12 [77]. 
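For orientation, the misalignment abundance can be evaluated with a commonly used fit (not necessarily the exact expression of Ref. [76]), Ω_a h² ≈ 0.18 θ_i² (F_a/10^12 GeV)^1.19:

```python
# Misalignment-production estimate of the axion dark matter abundance (standard fit).
def omega_axion_h2(F_a, theta_i=1.0):
    return 0.18 * theta_i**2 * (F_a / 1.0e12) ** 1.19

for F_a in (1.0e11, 5.0e11, 1.0e12, 2.0e12):
    print(f"F_a = {F_a:.0e} GeV : Omega_a h^2 ~ {omega_axion_h2(F_a):.3f}")
```

As expected, F_a of order 10^12 GeV with an order-one misalignment angle saturates the observed Ω_DM h² ≈ 0.12.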
As the figures show, F a = O(10 12 ) GeV is possible in a wide range of the parameter space. Therefore, the model based on SU (3) can be consistent with the axion dark matter scenario. 24 G. Cancellation of Self-and Gravitational Anomalies As mentioned in section II, the gravitational anomaly and the self-anomaly of U (1) gP Q are canceled by adding U (1) gP Q charged singlet fields. In this subsection, we show a concrete model of the anomaly cancelation. In the IYIT sector and the SU (3) sector, the U (1) gP Q charged fields are paired with fields with opposite charges. Thus, the fields in these sectors do not contribute to the self-anomaly nor the gravitational anomaly. The charges of the KSVZ extra multiplets are, on the other hand, not paired, and hence, they contribute to the anomalies, respectively. The easiest way to cancel the anomaly is to introduce 5N f singlet superfields Y with a charge q 1 and 5N f singlet superfields Y with a charge −q 2 . The charges of Y 's and Y 's are given in Tab. III. As the singlet fields do not have mass partners with opposite charges, the supersymmetric masses of them are generated only after U (1) gP Q breaking. The mass terms of Y 's are given Here, we take the Z 4R charge of Y 's to be 1, so that their scalar and fermion components are odd and even under the R-parity, respectively. The factor m 3/2 encapsulates the effects of spontaneous breaking of the Z N R symmetry. As a result, the fermionic components of 24 For m 3/2 O(1) PeV, the wino is expected to be heavier than O(1) TeV, whose relic abundance exceeds the observed dark matter density. In such parameter region, we need to assume either a dilution mechanism of dark matter or R-parity violation. The supersymmetric masses of Y 's are even smaller, Here, we take the Z 4R charge of Y 's to be 1, and the factor m 3/2 encapsulates the effects of spontaneous breaking of Z N R again. As a result, the fermionic components of Y 's obtain, If the light fermions are abundantly produced in the early universe, they contribute to the dark radiation and result in an unacceptably large number of effective neutrino species, N eff . To evade this problem, we assume that spontaneous breaking of U (1) gP Q takes place before the end of inflation. We also assume that the gauge superfields of U (1) gP Q are heavier than the reheating temperature after inflation. Furthermore, it is also assumed that the branching fraction of the inflaton into Y 's and Y 's are suppressed. With these assumptions, we can achieve cosmologically consistent models where the self-and the gravitational anomalies are canceled by the U (1) gP Q charged singlets. H. SU (N ) Dynamical PQ Symmetry Breaking Model So far, we have considered the dynamical PQ breaking sector based on the SU (3) gauge theory. There, the deformed moduli constraint plays an important role to break the global PQ symmetry (i.e., the baryon symmetry) spontaneously. In this subsection, we discuss the models of dynamical PQ breaking based on SU (N ) gauge theory other than N = 3. We call such models, the SU (N ) dynamical PQ breaking model. symmetry, and the global PQ symmetry is identified with a subgroup of the maximal nonabelian group SU (4) f as in the case of the IYIT sector. Then, the global PQ symmetry breaking is achieved by introducing four PQ neutral singlet superfields, Z . 26 In this model, the KSVZ extra multiplets coupling to the SU (2) sector obtain masses via, leading to the dynamical scale Λ should be much higher than Λ to satisfy m KSV Z 750 GeV. 
Here, Q · · ·Q denotes the baryon operators of the SU (N ) sector. The SU (N ) models are very similar to the SU (3) model except for the dynamical scale Λ , although we do not discuss details of the SU (N ) model further. 26 It is tempting to make the SU (2) sector also be the IYIT supersymmetry breaking sector by introducing six singlet fields, Z 's, instead. In this case, however, supersymmetry and the gauged PQ symmetry are broken by the dynamics, while the global PQ symmetry is broken separately. V. CONCLUSIONS In this paper, we apply the gauged PQ mechanism to a model in which the global PQ symmetry and supersymmetry are broken simultaneously. As a concrete example, we considered models which consist of simultaneous supersymmetry/PQ symmetry breaking sector based on SU (2) dynamics (the IYIT sector) and a dynamical PQ symmetry breaking sector based on SU (N ) dynamics (the SU (N ) sector). As we have seen, the SU (3) Finally, let us comment an advantage of the gauged PQ mechanism over the models in which the high-quality global PQ symmetry results from an exact discrete symmetry, such as Z N . As we have discussed briefly in subsection II C, the gauged PQ mechanism with N GCD = 1, q 1 = 1 and q 2 = N (> 1) allow models which are free from both the domain wall problem and the axion isocurvature problem. The assumption here is that the first stage of the phase transition (i.e. Φ 1 = 0) takes place before inflation while the second stage of the phase transition (i.e. Φ 2 = 0) occurs after inflation. Then, the local strings formed at the first phase transition are inflated away, while the global strings formed at the second phase transition do not cause the domain wall problem as Φ 2 couples to only one-flavor of the KSVZ extra multiplet. 27 As the global PQ symmetry is broken after inflation, the model does not suffer from the axion isocurvature problem. This option is not available in the models with an exact discrete symmetry where the axion potential is also symmetric under 27 In this case, the axion dark matter density is dominated by the axions produced by the decay of the string-domain wall networks, which requires F a = O(10 11 ) GeV [35,36]. Such a rather low F a is, for example, achieved in the SU (2) model. the discrete symmetry.
SU(5) grand unified theory with A4 modular symmetry We present the first example of a grand unified theory (GUT) with a modular symmetry interpreted as a family symmetry. The theory is based on supersymmetric $SU(5)$ in 6d, where the two extra dimensions are compactified on a $T_2/\mathbb{Z}_2$ orbifold. We have shown that, if there is a finite modular symmetry, then it can only be $A_4$ with an (infinite) discrete choice of moduli, where we focus on $\tau = \omega=e^{i2\pi/3}$, the unique solution with $|\tau|=1$. The fields on the branes respect a generalised CP and flavour symmetry $A_4\ltimes \mathbb{Z}_2$ which is isomorphic to $S_4$ which leads to an effective $\mu-\tau$ reflection symmetry at low energies, implying maximal atmospheric mixing and maximal leptonic CP violation. We construct an explicit model along these lines with two triplet flavons in the bulk, whose vacuum alignments are determined by orbifold boundary conditions, analogous to those used for $SU(5)$ breaking with doublet-triplet splitting. There are two right-handed neutrinos on the branes whose Yukawa couplings are determined by modular weights. The charged lepton and down-type quarks have diagonal and hierarchical Yukawa matrices, with quark mixing due to a hierarchical up-quark Yukawa matrix. I. INTRODUCTION The flavor puzzle, the question of the origin of the three families of quarks and leptons together with their curious pattern of masses and mixings, remains one of the most important unresolved problems of the Standard Model (SM). Following the discovery of neutrino mass and mixing, whose origin is fundamentally unknown, there are now almost 30 undetermined parameters in the SM, far too many for any complete theory. The lepton sector in particular involves large mixing angles that suggest an explanation in terms of discrete non-Abelian family symmetry [1,2]. Furthermore, such discrete non-Abelian family symmetries have been combined with grand unified theories (GUTs) in order to provide a complete description of all quark and lepton (including neutrino) masses and mixings [3]. It is well known that orbifold GUTs in extra dimensions (ED) can provide an elegant explanation of GUT breaking and Higgs doublet-triplet spitting [4]. Similarly, theories involving GUTs and flavor symmetries have been formulated in ED [5][6][7][8][9][10][11][12]. These EDs can help us to understand the origin of the discrete non-Abelian group symmetry such as A 4 and S 4 which may be identified as a remnant symmetry of the extended Poincaré group after orbifolding. Some time ago it was suggested that modular symmetry, when interpreted as a family symmetry, might help us to provide a possible explanation for the neutrino mass matrices [13,14]. Recently it has been suggested that neutrino masses might be modular forms [15], with constraints on the Yukawa couplings. This has led to a revival of the idea that modular symmetries are symmetries of the extra-dimensional spacetime with Yukawa couplings determined by their modular weights [16]. However to date, no attempt has been made to combine this idea with orbifold GUTs in order to provide a unified framework for quark and lepton masses and mixings. In this paper we present the first example in the literature of a GUT with a modular symmetry interpreted as a family symmetry. The theory is based on supersymmetric SUð5Þ in 6d, where the two extra dimensions are compactified on a T 2 =Z 2 orbifold, with a twist angle of ω ¼ e i2π=3 . 
Such constructions suggest an underlying modular A 4 symmetry with a discrete choice of moduli. This is one of the main differences of the present paper as compared to recent works with modular symmetries which regard the modulus τ as a free phenomenological parameter [15,16]. We construct a detailed model along these lines where the fields on the branes are assumed to respect a flavor and generalized CP symmetry A 4 ⋉ Z 2 which leads to an effective μ − τ reflection symmetry at low energies, implying maximal atmospheric mixing and maximal leptonic CP violation. The model introduces two triplet flavons in the bulk, whose vacuum alignments are determined by orbifold boundary conditions, analogous to those used for SUð5Þ breaking with doublet-triplet splitting. There are also two right-handed neutrinos on the branes whose Yukawa couplings are determined by modular weights. The charged lepton and down-type quarks have diagonal and hierarchical Yukawa matrices, with quark mixing due to a hierarchical up-quark Yukawa matrix. The remainder of the paper is organized as follows. In Sec. II we discuss the orbifold T 2 =Z 2 and its symmetries, as follows. In Sec. II A, we give a review of modular transformations while in Sec. II B we describe how the orbifold T 2 =Z 2 is only consistent with modular A 4 symmetry and a choice of modulus. In Sec. II C, we explicitly show the orbifold T 2 =Z 2 with twist angle ω ¼ e i2π=3 and modular A 4 symmetry. In Sec. II D we study the remnant symmetry after compactification on the T 2 =Z 2 orbifold, while Sec. II E connects this remnant symmetry and the modular symmetry. In Sec. II F we discuss the enhanced A 4 ⋉ Z 2 on the branes. In Sec. III, we present the field content of the SUð5Þ GUT with A 4 modular symmetry and a Uð1Þ shaping symmetry, including the Yukawa sector and the specific structure for the effective alignments that the modular symmetry can generate, resulting in the low energy form of the SM fermion mass matrices which we show can lead to a very good fit to the observables. Finally in Sec. IV we present our conclusions. In order to make the paper self-contained, some necessary background information is included in the Appendixes. In Appendix A we show the explicit proof that only the A 4 modular symmetry is consistent with the branes, with specific choices of modulus. We supplement the general A 4 group theory in Appendix B, the consistency conditions for generalized CP symmetry consistent with A 4 in Appendix C and the general theory for modular forms in Appendix D. Finally we show sample fits of the observed data in Appendix E. A. Review of modular transformations In this subsection we present the general theory of modular transformations. The structure of the extra-dimensional torus is defined by the structure of the lattice by where ω 1;2 are the lattice basis vectors. The variable z refers to the complex coordinate z ¼ x 5 þ ix 6 , where x 5 and x 6 are the two extra-dimension coordinates. The torus is then characterized by the complex plane C modulo a two-dimensional lattice Λ ðω 1 ;ω 2 Þ , where Λ ðω 1 ;ω 2 Þ ¼ fmω 1 þ nω 2 ; m; n ∈ Zg, i.e., T 2 ¼ C=Λ ðω 1 ;ω 2 Þ . The lattice is left invariant under a change in lattice basis vectors described by the general transformations or equivalently if a; b; c; d ∈ Z and ad − bc ¼ 1. These are called modular transformations and form the modular group Γ [15]. 
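The group-theoretic statements used below are easy to verify numerically. With the standard choice of generators, S = [[0, -1], [1, 0]] and T = [[1, 1], [0, 1]], one finds S² = (ST)³ = -1 in SL(2, Z), and reducing the matrix entries mod 3 while identifying M with -M generates a group of order 12, i.e. the finite modular group isomorphic to A4. A small sketch:

```python
# Verify S^2 = (ST)^3 = -1 in SL(2,Z) and count the elements of SL(2,Z_3)/{+-1}.
import numpy as np

S = np.array([[0, -1], [1, 0]])
T = np.array([[1, 1], [0, 1]])
I = np.eye(2, dtype=int)

assert np.array_equal(S @ S, -I)
assert np.array_equal(np.linalg.matrix_power(S @ T, 3), -I)

def canon(M):
    """Canonical representative of M in SL(2, Z_3) modulo the sign."""
    M = np.mod(M, 3)
    return min(tuple(M.flatten()), tuple(np.mod(-M, 3).flatten()))

# Breadth-first generation of the finite group <S, T> mod 3, modulo +-1.
elems, frontier = {canon(I)}, [I]
while frontier:
    nxt = []
    for M in frontier:
        for G in (S, T):
            P = np.mod(M @ G, 3)
            if canon(P) not in elems:
                elems.add(canon(P))
                nxt.append(P)
    frontier = nxt

print("order of the level-3 finite modular group:", len(elems))   # -> 12, the order of A4
```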
Without loss of generality, the lattice vectors may be rescaled as such that the torus is equivalent to one whose periods are 1 and τ = ω 2 /ω 1 and we can restrict τ to the upper half-plane H = Im τ > 0. The modular transformations on the rescaled basis vectors which leave the lattice invariant are given by 1 A SL(2, Z) transformation on the modulus parameter τ and its negative are equivalent, as can be seen from Eqs. (2) and (4). Therefore, we can use the infinite discrete group PSL(2, Z) = SL(2, Z)/Z 2 , generated by to describe the transformations that relate equivalent tori. This is also called the modular group Γ̄ satisfying Γ̄ = Γ/{±1}. 2 The generators of the infinite-dimensional modular group can also be written as They satisfy the presentation where S, T ∈ SL(2, Z). We will be considering the finite-dimensional discrete subgroups by imposing an additional constraint on T M , where M is a positive integer, where S, T ∈ SL(2, Z M ). These groups, with M ≤ 5, are isomorphic to the known discrete groups as Γ̄ 2 ≃ S 3 , Γ̄ 3 ≃ A 4 , We now introduce a convenient (if nonunique) representation for the modular transformations consistent with the presentation in Eq. (8), which satisfies the presentation of the Γ̄ M group, for any integer M > 2. This representation will be useful in the following discussion. B. Why the orbifold T 2 /Z 2 suggests modular A 4 symmetry with modulus τ = ω In this subsection we present an argument which shows that a particular T 2 /Z 2 orbifold (as assumed in this paper) suggests an underlying modular A 4 symmetry with specific modulus parameters. We begin by defining the orbifold T 2 /Z 2 in terms of two arbitrary lattice vectors ω 1 and ω 2 , The action of the orbifold in Eq. (10) leaves four invariant 4d branes given by 3 After compactification, the symmetries of the branes remain unbroken; therefore it is relevant to study any possible symmetry among the branes which will affect the fields localized on them. Therefore, we want to check if the modular transformations in Eq. (9) leave an invariant set of branes for some value of M. At this stage the modulus τ = ω 2 /ω 1 can apparently take any value. However we present a proof in Appendix A that only the A 4 symmetry is consistent, meaning that M = 3, when the basis vectors are related by The p and q are integers satisfying that is an integer, which has infinitely many discrete solutions. Furthermore, since the modular forms restrict τ to be in the upper complex plane, then q < 0. This paper's approach is to focus on the orbifold first and then derive the modular symmetries, instead of going directly into the modular symmetries. We will restrict ourselves to the case where |ω 1 | = |ω 2 |, which happens when p = −1, q = −1. This way we focus on studying the effects only of the angle between both vectors. We can, without loss of generality, choose ω 1 = 1. Furthermore, the modular symmetries require τ to lie in the upper complex plane; in this case the only solutions to Eqs. (12) and (13) are ω 2 = ω = e i2π/3 . This uniquely fixes the modulus coming from the orbifold T 2 /Z 2 . We emphasize that this is one of the main differences of the present paper as compared to recent works with modular symmetries which regard the modulus τ as a free phenomenological parameter [15,16]. In our work, we assume a specific orbifold T 2 /Z 2 , for which we have shown that one consistent choice for a surviving modular symmetry is A 4 with fixed modulus τ, although we shall not address the problem of moduli stabilization [17].
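The special point selected by this argument can be checked directly: τ = ω = e^{i2π/3} lies on |τ| = 1, satisfies 1 + ω + ω² = 0, and is a fixed point of the combined modular transformation S∘T : τ → −1/(τ + 1).

```python
# Numerical confirmation of the special modulus tau = omega = e^{2*pi*i/3}.
import cmath

omega = cmath.exp(2j * cmath.pi / 3)

print("|omega|          =", abs(omega))                    # 1.0
print("1 + w + w^2      =", 1 + omega + omega**2)          # ~ 0
print("S(T(omega))      =", -1 / (omega + 1))              # ~ omega
print("fixed under ST?  ", abs(-1 / (omega + 1) - omega) < 1e-12)
```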
and modular A 4 symmetry Following the argument of the previous subsection, we henceforth focus on the orbifold T 2 =Z 2 with particular twist angle denoted as ω ¼ e i2π=3 , identified as the modulus τ associated with a particular finite modular symmetry A 4 , where A 4 is the only choice consistent with this orbifold. This orbifold then corresponds to the identification where the first two equations are the periodic conditions from the torus T 2 and the third one is the action generated by the orbifolding symmetry Z 2 . The twist corresponds to ω ¼ e i2π=3 . The orbifold symmetry transformations leave four invariant 4d branes as shown in Fig. 1: The transformations permute the branes and leave invariant the set of four branes in Eq. (15). These transformations satisfy where the first line is the presentation of the group A 4 and both lines complete the presentation of S 4 [1]. In Fig. 1 we show how these transformations act on the extra-dimensional space and how the "remnant A 4 symmetry" is realized. Fixing M ¼ 3, the set of branes is invariant under the modular transformations on the lattice vectors ð1; ωÞ T . These transform the basis vectors as (noting that 1 þ ω þ ω 2 ¼ 0) leaving the lattice invariant as can be seen from Fig. 2. The matrices S; T ð3Þ fulfill the presentation of the group they generate to be where S; T ð3Þ ∈ SLð2; Z 3 Þ, so that the branes are indeed invariant under the discrete modular groupΓ 3 ≃ A 4 . As we will see in Sec. II F, this symmetry will be enlarged. D. Remnant brane symmetry for T 2 =Z 2 with ω = e i2π=3 So far we have shown that the choice of orbifold T 2 =Z 2 is consistent with the finite modular symmetry A 4 with a discrete choice of moduli, where we focus on τ ¼ ω ¼ e i2π=3 . Now we will take a step back, forget about modular symmetries for a while, and just consider the symmetries of the branes with a twist angle ω ¼ e i2π=3 . We will discover an S 4 symmetry that has apparently nothing to do with modular symmetry, which we refer to as "remnant S 4 symmetry." In the next subsection we shall show how the subgroup remnant A 4 symmetry is related to the previous A 4 finite modular symmetry. In this section, we will find that the branes are invariant under an S 4 and its subgroup A 4 symmetry which can be identified as a remnant symmetry of the spacetime symmetry after it is broken down to the 4d Poincaré symmetry through orbifold compactification. Here, we assume that the spacetime symmetry before compactification is a 6d Poincaré symmetry. The compactification breaks part of this symmetry. However, due to the geometry of our orbifold with twist angle ω ¼ e i2π=3 , a discrete subgroup is left unbroken. This group may be generated by the spacetime transformations (which belong to the extradimensional part of the 6d Poincaré). The orbifolding leaves four invariant branes, and this specific orbifold structure leaves them related by the group S 4 . This symmetry, together with 4d Poincaré transformations, is a subgroup of the extra-dimensional Poincaré symmetry that survives compactification. This is the standard "remnant symmetry" [6,18]. Any field located in the branes will transform under the 4d Poincaré group as usual. Since the branes transform into each other by the remnant symmetries, the fields on the brane should also transform under them. 
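The four branes quoted above are precisely the points of the torus C/(Z + Zω) left fixed by the orbifold action z → −z, i.e. the points with 2z in the lattice. A short numerical check:

```python
# Verify that z = 0, 1/2, omega/2, (1+omega)/2 are fixed under z -> -z modulo the lattice.
import cmath

omega = cmath.exp(2j * cmath.pi / 3)

def in_lattice(z, tol=1e-9):
    """True if z = m + n*omega for integers m, n."""
    n = z.imag / omega.imag
    m = z.real - n * omega.real
    return abs(m - round(m)) < tol and abs(n - round(n)) < tol

fixed_points = [0, 0.5, omega / 2, (1 + omega) / 2]
for z in fixed_points:
    print(z, "fixed under z -> -z :", in_lattice(-2 * z))   # -z = z  <=>  2z in the lattice
```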
The four branes transform under the remnant S 4 symmetry and we choose the embedding of the representation 4 → 3 þ 1 so that the fields in the branes can only transform under those irreducible representations [7,11]. The fields in the branes do not depend on z but are permuted into each other by the S, T, U transformations. E. The connection between remnant A 4 symmetry and finite modular A 4 symmetry We have shown that the set of branes is invariant under a remnant S 4 (and its subgroup A 4 ) subgroup of the extradimensional Poincaré symmetry. We shall return to the S 4 symmetry in the next subsection and we now show that the remnant A 4 symmetry can be identified with the finite modular A 4 symmetry discussed earlier. Essentially, if we impose a modular symmetry A 4 on the whole space, its action on the branes is the same action as the remnant spacetime symmetry; i.e., it permutes the branes but leaves invariant the whole set. The modular symmetry acts on the basis vectors of the torus while the remnant symmetry is a spacetime symmetry and acts on the fixed points; therefore from the point of view of the branes, the "remnant A 4 symmetry" is an active transformation while the finite modular A 4 symmetry is an equivalent passive transformation. This way we may identify the remnant A 4 symmetry of the branes as a modular symmetry since the effect on the branes of each type of transformation is identical; it is just a choice of "picture" (active or passive) which we choose. The action of the modular symmetry on the branes behaves as a "normal" symmetry (i.e., modular forms are not relevant) since fields located on the brane do not depend on the extra-dimensional coordinate. The modular symmetry can therefore be imposed as any usual symmetry. In the orbifold T 2 =Z 2 , the branes are only consistent with the modular groupΓ 3 , as shown in Appendix A. Any theory with this orbifold and fields allocated on the branes will only be consistent with the modular symmetryΓ 3 . In such a setup, the branes see the finite modular symmetry as simply equivalent to a remnant symmetry, a subgroup of the extradimensional Poincaré group. We can see in Eq. (18) that the S, T transformations (and therefore theΓ 3 modular transformations) correspond to specific passive reflections, rotations and translations. In this way theΓ 3 must be a subgroup of the 6d Poincaré group. All modular groups are. However not all modular groups are consistent with the invariant branes, as we have shown. On the other hand, fields in the bulk, which feel the extra dimensions, will also transform under some representation of the 6d Poincaré; however in this case they will transform under a nonlinear realization of thisΓ 3 symmetry, and this is precisely what are referred to as the modular forms [15]. We conclude that the modular symmetryΓ 3 acts either as a linear or nonlinear realization of the remnant symmetry A 4 , depending on whether we are concerned with brane fields or bulk fields. F. Enhanced A 4 ⋉ Z 2 symmetry of the branes We now recall that, in our setup, the brane fields enjoy a larger S 4 symmetry than the remnant A 4 symmetry, as shown in Sec. II D. However this larger S 4 symmetry is not related to a finite modular symmetry, since the branes can only be invariant under the modular transformations corresponding toΓ 3 ≃ A 4 , and is not enjoyed by the fields in the bulk. SUð5Þ GRAND UNIFIED THEORY WITH A 4 MODULAR … PHYS. REV. D 101, 015028 (2020) We note here that S 4 ≃ A 4 ⋉ Z 2 . The symmetry generated by U from Eq. 
(16) is a remnant symmetry of the orbifolding process, but it cannot be interpreted as a modular transformation. We conclude that the remnant symmetry of the branes isΓ 3 ⋉ Z 2 ≃ A 4 ⋉ Z 2 . The Z 2 symmetry is generated by C · U where U is the usual matrix representation of the generator from S 4 and C stands for complex conjugation of the complex coordinate, which is equivalent to a change of sign in x 6 , i.e., the parity transformation of the sixth dimension P 6 . The Z 2 is not a modular symmetry while the A 4 is. The product of both symmetries is not direct since the generator U does not commute with all A 4 generators and is the corresponding S 4 generator. After compactification, the remnant Z 2 (which is not a subset of A 4 ) acts on the brane fields generalized CP transformation where the transformations P 1 ; …; P 5 are trivial whileP 6 ¼ P 6 U, where P 6 is the trivial parity transformation, while the U is a family transformation [19]; thus a field transforms as Under CP the fields transform as shown in Appendix C. As stated before, this effective symmetry transformation only affects nontrivially the brane fields and the fields in the bulk are unaffected, transforming under canonical CP and not forced to preserve it. Thus in our approach the generalized CP is a remnant symmetry in a particular sector of the theory, corresponding to the fields on the branes. We have shown that the remnant Z 2 symmetry on the branes behaves as an effective generalized CP transformation. In Appendix C we check its compatibility with the A 4 flavor symmetry and find that it is consistent, as indeed it must be. A. The model In this section we construct a supersymmetric SUð5Þ GUT model on a 6d orbifold T 2 =Z 2 with twist ω ¼ e i2π=3 , with an A 4 modular symmetry as a flavor symmetry, extended by the Z 2 symmetry on the branes. Furthermore we impose a global Uð1Þ as a shaping symmetry. We impose different boundary conditions at each invariant brane, which break the original symmetry into the minimal supersymmetric standard model (MSSM). The Uð1Þ as a shaping symmetry forbids any higher order terms, while a discrete Z N would allow them, the smaller the N, the corrections would appear at lower order. The A 4 modular symmetry will require the Yukawa couplings to be specific modular forms, while the Z 2 symmetry will further restrict the possible mass matrix structure so that the theory has strong predictions for leptons [20]. As we shall see later, the up quarks will lie in different A 4 singlets with modular weight zero, so that only the subgroup Z 3 is a remnant while the Z 2 behaves trivially. This forces stringent relations for the lepton mass matrices but not for the quarks. All the fields in the bulk ψ will transform under the modular transformations where ρ is the usual matrix representation of the corresponding A 4 transformation. Each field has a weight −k, with no constraint in k since the fields are not modular forms. The superfields that are located on the brane do not depend on the extra dimensions and therefore they must have weight zero [15]. We arbitrarily choose a weight for each of the bulk fields. The whole field content is listed in Tables I and II. The fields that do not have weight or parity under the boundary conditions are located on the branes and feel the symmetry A 4 ⋉ Z 2 ; see Table I. The transformations of the fields TABLE I. Fields on the branes, including matter and righthanded neutrino superfields. A working set of charges is fq 1 ; q 2 ; q 3 g ¼ f2; 0; 1g. 
Note that the 3 representations on the brane transform under A 4 ⋉ Z 2 as shown in Table VI and Eq. (C4). Representation Localization under this symmetry are discussed in Appendix B. The 3 representations on the brane transform under A 4 ⋉ Z 2 are as shown in Table VI and Eq. (C4). The field F contains the MSSM fields L and d R and is a flavor triplet. It is located on the brane. The fields T AE i contain the MSSM u R , e R , Q; they are three flavor singlets. There are two copies of each T with different parities under the boundary conditions; as we shall see in the next section, this allows different masses for down quarks and charged leptons. There are only two right-handed neutrinos N c a;s . The MSSM Higgs fields h u;d are inside the H 5;5 respectively. We have two flavons ϕ 1;2 that help to give structure to the fermion masses. Finally, the field ξ generates the hierarchy between the massesà la Froggat-Nielsen [21]. B. GUT and flavor breaking by orbifolding Since the orbifold has the symmetry transformations of Eq. (14), the fields must also comply with them. However since we are in a gauge theory, the equations need not be fulfilled exactly but only up to a gauge transformation, so any field complies with where the G's are gauge transformations that must fulfill where the first equation comes from the fact that it belongs to the parity operator, the second is due to the fact of the commutativity of the translations and the third one denotes the relation between parity and translations. Since the branesz i are invariant under the orbifold symmetry transformations of Eq. (14), they act as boundaries which, due to the G's gauge transformations, impose the boundary conditions which correspond to a reflection at each of the branes. These boundary conditions are related to the gauge transformations as For simplicity, we choose all G's to commute, meaning that G 5;6 ¼ G −1 5;6 , and therefore all boundary conditions become matrices of order 2. The boundary conditions imply an invariance at each brane under some A 4 × SUð5Þ transformation, and they are chosen to break the symmetry in a particular way as follows: where I 3 ; T 1;2 ∈ SUð3Þ, while I 5 ; diagð−1; −1; −1; 1; 1Þ ∈ SUð5Þ, and explicitly and the last boundary condition is fixed and defined by the other boundary conditions as The boundary condition P 0 breaks the effective extended N ¼ 2 → N ¼ 1 SUSY. The boundary conditions P 1=2;ω=2 leave their corresponding Z 2 symmetry invariant and together break A 4 completely and SUð5Þ → SUð3Þ × SUð2Þ × Uð1Þ. The fields F; N c a;s ; ξ lie on the brane and are unaffected by the boundary conditions. The fields T AE are A 4 singlets and do not feel the A 4 breaking conditions. They have different parities and feel the SUð5Þ breaking condition. The fields T þ contain the light MSSM u R , e R fields, while T − contains the light field Q. This allows for independent masses for charged leptons and down quarks since they come from different fields. The Higgs fields feel the SUð5Þ breaking condition leaving only the light doublets, solving the doublet triplet splitting problem [7] (for a recent discussion see e.g., [11]). The flavons ϕ 1;2 feel the A 4 breaking conditions. They have different parities under the conditions and this fixes their alignments to be We may remark that these flavon vacuum expectation value (VEV) alignments do not break the Z 2 symmetry generated by U, even though they are in the bulk. 
We see that the orbifolding breaks the symmetry SUð5Þ × A 4 ⋉ Z 2 → SUð3Þ × SUð2Þ × Uð1Þ × Z 2 while solving the doublet triplet splitting, separating charged lepton and down-quark masses and completely aligning flavon VEVs. We do not show an explicit driving mechanism for the VEVs v 1;2;ξ . We assume that they are driven radiatively [22]. C. Yukawa structure In 6d, the superpotential has dimension 5 while each superfield has dimension 2. A 6d interacting superpotential SUð5Þ GRAND UNIFIED THEORY WITH A 4 MODULAR … PHYS. REV. D 101, 015028 (2020) is inherently nonrenormalizable. We work with the effective 4d superpotential, which happens after compactification. We assume the compactification scale is close to the original cutoff scale. We use Λ to denote both the compactification scale and the GUT scale, which is taken to be the cutoff of the effective theory. Assuming this makes the Kaluza-Klein modes to be at the GUT scale so that they do not spoil standard gauge coupling unification or any of the current precision tests. With the fields in Tables I and II, we can write the effective 4d Yukawa terms where i, j ¼ 1, 2, 3. Due to the stringent Uð1Þ shaping symmetry, there are no higher order terms. The field ξ has a VEV and generates hierarchies between familiesà la Froggat-Nielsen [21]. The first line in Eq. (30) gives the two right-handed (RH) neutrino Majorana masses without any mixing. The fields in both terms have zero weight so the modular symmetry does not add anything new. The second line generates Dirac neutrino masses. They have nontrivial weights and their structure will be discussed in Sec. III D. The third line gives masses to charged leptons. They are all weight zero automatically and the mass matrix is diagonal. The fourth line generates a diagonal down-quark mass matrix. Since it involves a different field (T − instead of T þ ) the coupling constants are independent. Finally the fifth line gives masses to the up quarks. It is a general nonsymmetric mass matrix with complex entries. Since the fields in these terms have a nontrivial weight but the T AE are singlets, the modular symmetry does not change the matrix structure. We remark that the top-quark mass term is renormalizable. At the GUT level, the μ term is forbidden, so it should be generated by another mechanism at a much smaller scale [23]. D. Effective alignments from modular forms In Eq. (30) we have a few terms involving nontrivial weights under the modular symmetry. This implies that the couplings y ν s ; y ν a ; y u ij ð31Þ are modular forms with a positive even weight [24]. They involve the Dedekind η function and its exact form can be found in Appendix D. The modular forms are functions of the lattice basis vector parameter τ from Eq. (3). Usually this parameter is chosen to give a good fit to the flavor parameters. In our case, the specific orbifold of our model is set to fix and the modular form structure is fixed up to a real constant due to the extra condition coming from the generalized CP symmetry. The modular form y ν s must be a triplet under A 4 to construct an invariant singlet with the triplet field F. Furthermore, it has weight α to compensate the overall weight of the corresponding term. We show the effective triplet alignments it can have in Table III for different weights α. The possibilities are very limited since many modular forms vanish when τ ¼ ω, as shown in Appendix D. Larger weight modular forms repeat the same structure so that this table is exhaustive, as discussed in Appendix D. 
The modular form y ν a must have weight β. It multiplies the flavon ϕ 2 , so that they must be contracted into a triplet ðy ν a hϕ 2 iÞ 3 which will generate the effective alignment. In the case of y ν a being a singlet under A 4 , the effective alignment is simply given the flavon VEV hϕ 2 i in Eq. (29), which was fixed by the orbifold boundary conditions. When y ν a is a triplet under A 4 , it must be contracted with ϕ 2 as shown in Appendix B, 3 × 3 → 1 þ 1 0 þ 1 00 þ 3 a þ 3 s . This gives different possible products for the effective triplet. The actual effective alignment is an arbitrary linear combination of all possibilities and can be found in Table IV. For β ¼ 0 the only modular form is a singlet, so the only triplet that can be built is hϕ 2 i. For β ¼ 2, the only modular form is the triplet Y ð2Þ 3 shown in the Appendix D. The effective triplet is the linear combination TABLE III. The effective alignments of the modular form y ν s as a triplet, depending on its weight α. The parameter y is an arbitrary real constant to comply with the extended symmetry 3;2 , so that the actual alignment comes from the linear combination of hϕ 2 i × Y 1;1 0 → 3 and hϕ 2 i × Y 3 → 3 a þ 3 s . By choosing the weights α, β, the structure of the neutrino mass matrix is completely defined. In principle, the y in Table III and y 1 , y 2 , y 3 in Table IV correspond to general complex numbers; however as we will see below they are constrained to comply with the nontrivial CP symmetry of the model. We have obtained all the possible A 4 invariant modular forms. However we have to comply with the extended symmetry A 4 ⋉ Z 2 . The U generator only transforms nontrivially the triplet field F which is contracted to a triplet modular form. A U transformation of the field F can be reabsorbed by transforming the modular form by where the C stands for complex conjugation. Invariant terms under the full symmetry must involve modular forms that are also invariant under the Z 2 transformation. From Table III, the only invariant case is when α ¼ 6 with a real y. From Table IV, the only invariant cases happen when β ¼ 0 with real y 1 or β ¼ 6 with y 1;2 real and y 3 imaginary. The triplet field F is not only taking part in the Dirac neutrino mass terms but also in the down-quark and charged lepton mass terms; therefore they also must be invariant under the enhanced symmetry A 4 ⋉ Z 2 . In this case, the field F is contracted with the flavon field ϕ 1 and it is easy to check that the transformation in Eq. (33) leaves the VEV invariant when real and therefore the charged lepton and down-quark mass terms when the parameters y d i and y e i involved are real. Finally, the modular form y u ij must have weight α þ 2γ to build an invariant. All the fields in the corresponding terms are singlets, so these modular forms must be singlets also and will not change the structure. Depending on i, j, the modular form y u ij must be a different type of singlet. The weight α þ 2γ has to be large enough so that the space contains the three types of singlets. This modular form does not add anything to the structure of the up-quark matrix but allows us to build the A 4 invariants for all T i T j combinations. The smallest weight that allows modular forms of all three types of singlets is 20, as discussed in Appendix D. These modular forms y u ij are in general complex. The case β ¼ 0 does not have enough freedom to fit the neutrino data. We conclude that the smallest phenomenologically viable choice for weights is E. 
E. Mass matrix structure

We are now able to express the mass matrices following Eq. (30) and the effective alignments given in Sec. III D. First, we define the dimensionless VEV parameters (the flavon VEVs in units of Λ, the original cutoff scale). The down-quark and charged lepton mass matrices are diagonal, Eq. (36), where the parameters y^d_i and y^e_i are real due to the enhanced symmetry A4 ⋉ Z2 on the branes, while the y^u_ij are in general complex. Since the down-quark and charged lepton mass matrices in Eq. (36) are diagonal, the fit to the observed masses is straightforward. The hierarchy between the masses of the different families is understood through the powers of ξ̃ and can be achieved assuming dimensionless couplings of order O(1).

[Table IV: the effective alignments of the modular form y^ν_a contracted with ⟨ϕ2⟩ into a triplet, depending on its weight β. The parameters y_i are dimensionless constants constrained by the A4 ⋉ Z2 symmetry.]

All the contributions to quark mixing come from the up sector. The complex parameters in the up-type mass matrix [see Eq. (37)] fix the up, charm and top quark masses as well as the observed Cabibbo-Kobayashi-Maskawa (CKM) mixing angles. We obtain a perfect fit for weight γ = 7. Different values of ṽ1, ṽ2 and ξ̃ can fit the observed masses using different dimensionless couplings, still of order O(1); we show an example in Appendix E. The form of the Dirac neutrino mass matrix depends on the weights α and β. All the possible alignments are given in Tables III and IV. The Z2 symmetry restricts us to the case α = 6 with β = 0 or β = 6. In the case β = 0 we only have two free parameters, {y, y1}, and we cannot find a good fit: the solar and reactor angles come out too small. Therefore, the only phenomenologically viable case is α = β = 6, and we restrict ourselves to this case in the following. As shown in Appendix B, we have to take into account the Clebsch-Gordan coefficients when contracting (y^ν_s F)_1 and (y^ν_a ⟨ϕ2⟩ F)_1 into singlets, i.e., 3 × 3 → 1, after which we obtain the effective alignments α6 and β6 for α = 6 and β = 6, respectively, as defined in Eq. (39). The Dirac neutrino mass matrix is then built from these alignments. The RH neutrino Majorana mass matrix is diagonal, with hierarchical RH neutrino masses given by the different powers of the field ξ. Furthermore, the RH neutrino Majorana masses are very heavy, so the left-handed neutrinos acquire a very small Majorana mass through the type I seesaw [25]. The resulting neutrino mass matrix, Eq. (43), is written in terms of the alignments α6 and β6 defined in Eq. (39). The effective parameters at low energy are {y, y1, y2, y3}, previously defined in Tables III and IV. The Z2 symmetry fixes {y, y1, y2} to be real while y3 is purely imaginary. Finally, we remark that this structure, with the expected hierarchy between the RH neutrinos, can naturally give the correct baryon asymmetry of the Universe (BAU) through leptogenesis. Leptogenesis is achieved through the CP violation in the neutrino Dirac mass matrix. The correct order of the BAU is obtained when the RH neutrino masses are M1 ∼ 10^10 GeV and M2 ∼ 10^13 GeV [26]; in this model these are the naturally expected masses, as can be seen from Eq. (41) and the sample fit in Appendix E. The contributions from the entries of the neutrino Dirac mass matrix and the required BAU will fix the precise value of M1.
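Since both the light neutrino masses and the leptogenesis estimate above rest on the type I seesaw, we recall its standard schematic form below. This is the textbook relation, with sign and transpose conventions assumed rather than copied from the paper's own equations; m_D denotes the 3×2 Dirac matrix built from the alignments α6 and β6, and M_R = diag(M1, M2).

```latex
% Type I seesaw with two RH neutrinos (standard form; conventions assumed,
% not transcribed from the paper's numbered equations):
m_\nu \;\simeq\; -\, m_D\, M_R^{-1}\, m_D^{\mathsf T}
      \;=\; -\left( \frac{m_{D1}\, m_{D1}^{\mathsf T}}{M_1}
                   + \frac{m_{D2}\, m_{D2}^{\mathsf T}}{M_2} \right),
\qquad
M_R \;=\; \mathrm{diag}(M_1,\, M_2),
```

where m_{D1}, m_{D2} are the two columns of m_D. Being a sum of two rank-one terms, m_ν has rank 2, which is why the lightest neutrino remains massless (m1 = 0) with only two RH neutrinos.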
We conclude that the CP violation in the neutrino sector and the RH neutrino mass hierarchy of the model assure us that the BAU can be generated naturally [11].

F. μ − τ reflection symmetry

The neutrino mass matrix in Eq. (43) is μ − τ reflection symmetric (μτ-R symmetric). This corresponds to the interchange symmetry between the muon neutrino ν_μ and the tau neutrino ν_τ combined with CP symmetry, namely ν_e → ν_e*, ν_μ → ν_τ*, ν_τ → ν_μ*, where the star denotes the charge conjugate of the neutrino field. This can easily be seen from the alignments in Eq. (39), which build the neutrino mass matrix in Eq. (43): since the parameters {y, y1, y2} are real while y3 is purely imaginary, the transformation in Eq. (44) leaves the alignments, and accordingly the neutrino mass matrix, invariant. For a review of μτ symmetry see, e.g., [27] and references therein; see also the recent discussion in [28]. It is known that a neutrino mass matrix that is μτ-R symmetric in the flavor basis (which is our case) is equivalent to μ − τ universal (μτ-U) mixing in the Pontecorvo-Maki-Nakagawa-Sakata (PMNS) matrix; see Ref. [29]. The consequence of μ − τ reflection symmetry is a nonzero reactor angle θ13 together with a maximal atmospheric mixing angle and a maximal Dirac CP phase, θ23 = 45° and δℓ = −90°. We remark that this is a prediction of the model, due to the A4 ⋉ Z2 symmetry on the branes. The parameters {y, y1, y2, y3} in the neutrino mass matrix (43) fit the remaining PMNS observables, namely {θℓ12, θℓ13, Δm²21, Δm²31}, so the contribution to the χ² test function comes only from the predictions for θℓ23 and δℓ; we use the recent global fit values of neutrino data from NuFit 4.0 [30]. The best fit points together with the 1σ ranges are θ23 = 49.6° (+1.0°, −1.2°) and δℓ = 215° (+40°, −29°) for normal mass ordering, without the Super-Kamiokande atmospheric neutrino data analysis. However, the distributions of these two observables are far from Gaussian, and the predictions of a maximal atmospheric mixing angle θ23 = 45° and maximal CP violation δℓ = −90° still lie inside the 3σ (4σ) region, with χ² = 5.48 (6.81) without (with) Super-Kamiokande. Appendix E explains how a numerical fit can be performed, and Table V shows two numerical fits, although these are only examples, as a good fit can be found for a large range of the parameters y, y1, y2 and y3. This is because the predictions of the model, θ23 = 45° and δℓ = −90°, follow from the μτ-R symmetry, while the four free parameters are used to fit the rest of the observables in the PMNS matrix. The best fit from NuFit 4.0 is for normal mass ordering, with inverted ordering disfavored by Δχ² = 4.7 (9.3) without (with) the Super-Kamiokande atmospheric neutrino data analysis. We tried a fit to inverted mass ordering and the χ² test function rises to χ² ∼ 6800. Therefore, the model predicts normal mass ordering together with maximal atmospheric mixing and CP violation, and a massless neutrino, m1 = 0, since we only add two RH neutrinos.

IV. CONCLUSIONS

In this paper we have presented the first example in the literature of a GUT with a modular symmetry interpreted as a family symmetry. The theory is based on supersymmetric SU(5) in 6d, where the two extra dimensions are compactified on a T²/Z2 orbifold. We have shown that, if there is a finite modular symmetry, then it can only be A4.
Furthermore, if we restrict ourselves to the case |ω1| = |ω2|, the only possible value of the modulus parameter is τ = ω = e^{i2π/3}. We emphasize that this is one of the essential distinctions of the present model from recent works with modular symmetries, which regard the modulus τ as a free phenomenological parameter [15,16]. In the present paper we assume a specific orbifold structure which fixes the modulus to a discrete choice of moduli, and we focus on the case τ = ω = e^{i2π/3}, although we do not address the problem of moduli stabilization. We have shown that it is possible to construct a consistent model along these lines, which successfully combines an SU(5) GUT group with the A4 modular symmetry and a U(1) shaping symmetry. In this model the F fields on the branes are assumed to respect an enhanced symmetry A4 ⋉ Z2, which leads to an effective μ − τ reflection symmetry at low energies and predicts the maximal atmospheric angle and maximal CP phase. In addition, there are two right-handed neutrinos on the branes whose Yukawa couplings are determined by modular weights, leading to specific alignments which fix the Dirac mass matrix. The model also introduces two triplet flavons in the bulk, whose vacuum alignments are determined by orbifold boundary conditions, analogous to those responsible for Higgs doublet-triplet splitting. The charged leptons and down-type quarks have diagonal and hierarchical Yukawa matrices, with quark mixing arising from a hierarchical up-quark Yukawa matrix. The resulting model, summarized in Tables I and II, provides an economical and successful description of quark and lepton (including neutrino) masses, mixing angles and CP phases. Indeed, the quarks can be fit perfectly, consistently with SU(5), using only O(1) parameters. In addition, we obtain a very good fit for the lepton observables, with χ² ≈ 5 (7) without (with) Super-Kamiokande data, using four O(1) parameters which determine the entire lepton mixing matrix U_PMNS and the light neutrino masses (eight observables), which implies that the theory is quite predictive. The main predictions of the model are a normal neutrino mass hierarchy with a massless neutrino, and the μ − τ reflection symmetry predictions θℓ23 = 45° and CP phase δℓ = −90°, which will be tested soon.

APPENDIX A: UNIQUENESS OF A4 AS A MODULAR SYMMETRY FOR THE BRANES

In this Appendix we show that the set of branes in Eq. (11) is invariant under the modular transformations of Γ̄3 for an infinite set of discrete values of the modulus parameter τ. First, we apply the finite modular transformations in Eq. (9) to the lattice vectors and obtain the S-transformed branes. Using the orbifold transformations from Eq. (10), we can add ω1 to the second and fourth branes and recover the original set in Eq. (11); therefore the brane set is always invariant under the S transformation, for any value of ω1 and ω2. On the other hand, for the T-transformed branes, Eq. (A3): if the set is to be invariant up to permutations of the branes, the second term in Eq. (A3) must correspond to one of the original branes. Let us make the Ansatz that the second brane from Eq. (A3) corresponds to the fourth original brane, (ω1 + ω2)/2, up to orbifold transformations, so that it must satisfy a relation in which p, q are general integers representing the general orbifold transformations. This relates the basis vectors ω1 and ω2, and we may rewrite the transformed branes of Eq. (A3) in terms of ω1.
We now make the Ansatz that the third brane corresponds to the third original brane, ω2/2 (and automatically the fourth corresponds to the original second one), so that it must satisfy a second relation, with r, s general integers representing the general orbifold transformations. After some simple manipulations this can be written in a form whose left-hand side is complex while the right-hand side is real. Cancelling the imaginary part requires 2(2p + 1) cos(2π/M) to match the right-hand side; since the right-hand side is an integer, the left-hand side must be an integer as well, which constrains M, and the equation becomes a Diophantine equation to be solved for r. We have made two Ansätze to obtain this equation; this is the only solution, since any other Ansatz would yield an equation without solutions (an odd number set equal to zero). These are straightforward calculations, carried out in the same way as the one above, and would be repetitive to show. The invariance condition then fixes the branes, where the choice M = 3, 6 only changes the integer m. Since only ω2 is physical (and not the specific integers p, q), we can reabsorb the m dependence into p. As stated before, we study discrete modular symmetries with M ≤ 5, so the only solution is M = 3, which fixes the relation between the branes in terms of integers p, q subject to an integrality condition with infinitely many discrete solutions.

APPENDIX B: GROUP THEORY

A4 is the even permutation group of four objects, which is isomorphic to the symmetry group of a regular tetrahedron. It has 12 elements and can be generated by two generators, S and T. A4 has four inequivalent irreducible representations: three singlets 1, 1′, 1″ and one triplet 3. We choose to work in the same complex basis as [15], and the representation matrices of the generators are shown in Table VI. The product of two triplets, φ = (φ1, φ2, φ3) and ψ = (ψ1, ψ2, ψ3), decomposes as 3 × 3 = 1 + 1′ + 1″ + 3_s + 3_a, where 3_{s,a} denote the symmetric and antisymmetric products. The component decompositions of the products are shown in Table VII. The 12 elements of A4 are 1, S, T, ST, TS, T², ST², STS, TST, T²S, TST² and T²ST, and they fall into four conjugacy classes, labeled mC^k_n in Schoenflies-like notation, where m is the number of elements corresponding to rotations by an angle 2πk/n.

APPENDIX C: GENERALIZED CP CONSISTENCY CONDITIONS FOR A4

Here we check the compatibility of the Z2 symmetry on the branes with the A4 flavor symmetry. The remnant Z2 symmetry behaves as an effective generalized CP transformation, under which a field on the branes is mapped by the matrix X_r to its complex conjugate evaluated at x′ = (t, x1, x2, x3, x5, −x6), where X_r is the representation matrix in the irreducible representation r. To combine the flavor symmetry A4 with the Z2 symmetry, the transformations have to satisfy certain consistency conditions [31], which were applied specifically to the A4 flavor symmetry in [20]. These conditions ensure that if we perform a Z2 transformation, then apply a family symmetry transformation, and finally an inverse Z2 transformation, the resulting net transformation is equivalent to a family symmetry transformation. It is sufficient to impose the consistency conditions on the group generators only, where ρ_r denotes the representation matrix of the generators S and T; see Table VI.
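For completeness, the presentation of A4 in terms of its generators and the generic form of the consistency condition are reproduced below from the standard literature; they are not transcribed from the paper's own numbered equations, and the labels S′, T′ are assumptions consistent with the discussion that follows.

```latex
% Standard presentation of A4 in terms of the generators S and T:
A_4 \;=\; \big\langle\, S,\, T \;\big|\; S^2 \,=\, T^3 \,=\, (ST)^3 \,=\, 1 \,\big\rangle .
%
% Generalized CP consistency condition (schematic): for each generator g = S, T,
% the CP matrix X_r must map the conjugated representation back into the group,
X_r\, \rho_r(g)^{*}\, X_r^{-1} \;=\; \rho_r(g'), \qquad g' \in A_4,
% with g' = S' for g = S and g' = T' for g = T, as used in the text below.
```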
As shown in [20], S′ and T′ can only belong to certain conjugacy classes of A4 [see Eq. (B2) for the elements in each conjugacy class]. The resulting transformations under the generalized CP symmetry Z2 are consistent with Eqs. (C2) and (C3) for S′ = S and T′ = T. However, in the model under consideration we do not have any field on the branes transforming under the 1′ and 1″ representations, so the Z2 transformation only affects the 3 representations. We conclude that the 3 representations on the brane transform under A4 ⋉ Z2 as shown in Table VI and Eq. (C4).

APPENDIX D: MODULAR FORMS

In this section we show the construction of modular forms for Γ̄3 ≃ A4, following [15]. In model building, the difference between the usual discrete symmetries, which arise as remnant symmetries of the branes, and the modular symmetries of the spacetime lattice is that, in the latter case, the fields transform under a transformation of τ with a factor of (cτ + d) raised to a power fixed by the weight k (an arbitrary number), multiplied by ρ, the usual matrix representation of the transformation. The invariance of the action forces the usual dimensionless couplings y in the superpotential to behave as [32] y → (cτ + d)^{k_y} ρ_y y, where the weight k_y must be an even integer [24] and ρ_y is the usual matrix representation of the transformation. To build an invariant in global supersymmetry we need to satisfy two conditions: first, the weight k_y has to cancel the overall weight of the fields, and second, the product of ρ_y with the representation matrices of the fields has to contain an invariant singlet. When k = 0 for every constant, we recover the usual discrete symmetry. The weight 0 form is just a constant, a singlet under A4. The first nontrivial modular form has weight 2 and transforms following Eq. (4). The weight 6 modular forms are written as products of the weight 2 forms. Due to relations among the Dedekind functions, the modular forms satisfy constraints which reduce the number of possible modular forms; in our case τ = ω, which reduces the possibilities even further, and only one triplet combination is different from zero. All modular forms are built from products of the weight 2 triplet. We can also build the modular forms of weight 8. Following [15], this is a 15-dimensional space that decomposes as 2 × 1 + 2 × 1′ + 2 × 1″ + 3 × 3. For simplicity we work out only the specific case τ = ω. This case is greatly restricted, and one can check, by performing all possible multiplications 3 × 3 × 3 × 3, that the only nonzero modular forms have a triplet with the same structure as the weight 2 one. From this we conclude that any higher weight triplet only repeats the previous structures without giving any new one. For weight 10 we would have the same triplet as for weight 4, but two singlets, since the nontrivial singlet products become available; this is the first space that contains two singlets. The next space that contains all three singlets is built from powers of these singlets, so the corresponding modular form must have weight 20.

APPENDIX E: NUMERICAL FIT

We perform a χ² test when fitting the effective neutrino mass matrix in Eq. (43), with input parameters x = {y, y1, y2, y3}, from which we obtain a set of observables P_n(x). We minimize a χ² function built from the observables P_n^obs ∈ {θℓ12, θℓ13, θℓ23, δℓ, Δm²21, Δm²31} with statistical errors σ_n.
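The minimized function is the standard χ² sum over these observables; its usual form (reconstructed here, since the displayed equation did not survive extraction) is

```latex
\chi^2(x) \;=\; \sum_{n} \left( \frac{P_n(x) - P_n^{\rm obs}}{\sigma_n} \right)^{2},
\qquad
x \;=\; \{\, y,\; y_1,\; y_2,\; y_3 \,\},
```

with P_n running over {θℓ12, θℓ13, θℓ23, δℓ, Δm²21, Δm²31} and σ_n the corresponding 1σ uncertainties.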
We use the recent global fit values of neutrino data from NuFit 4.0 [30] and we ignore renormalization group running corrections as well as threshold corrections associated with the two extra dimensions. Most of the observables follow an almost Gaussian distribution and we take a conservative approach, using the smaller of the given uncertainties in our computations, except for θℓ23 and δℓ. The best fit from NuFit 4.0 is for normal mass ordering, with inverted ordering disfavored by Δχ² = 4.7 (9.3) without (with) the Super-Kamiokande atmospheric neutrino data analysis. We tried a fit to inverted mass ordering and found χ² ∼ 6800; therefore in the following results we only consider normal mass ordering. The model predictions are shown in Table VIII. The neutrino mass matrix in Eq. (43) predicts a maximal atmospheric mixing angle, θℓ23 = 45°, and maximal CP violation, δℓ = −90°, within the 3σ region of the latest neutrino oscillation data. This is a consequence of the μτ-R symmetric form of the neutrino mass matrix when y, y1, y2 are real while y3 is imaginary.

[Table VIII: model predictions in the neutrino sector for weights α = β = 6. The neutrino masses m_i as well as the Majorana phase are pure predictions of the model, together with the maximal atmospheric mixing angle θℓ23 = 45° and maximal CP phase δℓ = 270°. The bound on Σ m_i is taken from [34] and the bound on m_ee from [35]. There is only one physical Majorana phase since m1 = 0.]

Furthermore, since we only have two RH neutrinos, m1 = 0 and there is only one physical Majorana phase, α23 [33]. The predicted effective Majorana mass m_ee [33] is also given in Table VIII. The fit has been performed using the Mixing Parameter Tools (MPT) package [36]. The values of y, y1, y2 and y3 are shown in Table V. Fit 1 shows a good fit in which all of the dimensionless real parameters y are of O(1); however, a large range of parameters gives an equally good fit, see for example fit 2. The VEV ratios |ξ̃|, |ṽ_i| are parameters that do not enter the fit directly; they are chosen to reproduce the hierarchy between the fermion Yukawa couplings, making the dimensionless couplings natural, i.e., O(1). In the case of the neutrino mass matrix, even for fixed |ξ̃| and |ṽ2|, there is a large range of parameters y, y1, y2 and y3 that gives a good fit to the observables, meaning that the modular forms for weights α = 6 and β = 6 give a constrained form of the neutrino mass matrix which is phenomenologically suitable. For comparison, we also give the value of the χ² test function for the case β = 0, in which we only have two free parameters y and y1: it rises to χ² ∼ 1500, while for β = 6, with four free parameters, we find a perfect fit for a variety of values of y, y1, y2 and y3. These VEV ratios |ξ̃|, |ṽ_i| also appear in the quark and charged-lepton mass matrices in Eqs. (36) and (37). For different values of |ξ̃|, as in fits 1 and 2 in Table V, different dimensionless O(1) parameters y^d_i, y^e_i and y^u_ij can be used to reproduce the correct masses of the down- and up-type quarks and charged leptons; we show an example in Table IX for fit 1. In this case we take into account the running of the MSSM Yukawa couplings to the GUT scale and we follow the parametrization of [37].
The matching conditions at the SUSY scale are parametrized in terms of four parameters η̄_{q,b,l}, which we set to zero, and the usual tan β, for which we choose tan β = 5.

[Table IX: input parameters entering the charged fermion mass matrices in Eqs. (36) and (37), giving the correct charged fermion masses and CKM parameters for the choice of ξ̃ corresponding to fit 1.]

The Yukawa parameters y^u_ij are in general complex; however, most of the phases can be reabsorbed and we are left with only four physical phases, ϕ21, ϕ23, ϕ31 and ϕ32, where the subscript refers to the entry of the mass matrix to which the phase is attached.
13,617.8
2018-12-14T00:00:00.000
[ "Physics" ]
Efficient and Universal Merkle Tree Inclusion Proofs via OR Aggregation

Zero-knowledge proofs have emerged as a powerful tool for enhancing privacy and security in blockchain applications. However, the efficiency and scalability of proof systems remain a significant challenge, particularly in the context of Merkle tree inclusion proofs. Traditional proof aggregation techniques based on AND logic suffer from high verification complexity and data communication overhead, limiting their practicality for large-scale applications. In this paper, we propose a novel proof aggregation approach based on OR logic, which enables the generation of compact and universally verifiable proofs for Merkle tree inclusion. By aggregating proofs using OR logic, we achieve a proof size that is independent of the number of leaves in the tree, and verification can be performed using any single valid leaf hash. This represents a significant improvement over AND aggregation, which requires the verifier to process all leaf hashes. We formally define the OR aggregation logic, describe the process of generating universal proofs, and provide a comparative analysis demonstrating the advantages of our approach in terms of proof size, verification data, and universality. Furthermore, we discuss the potential of combining OR and AND aggregation logics to create complex acceptance functions, enabling the development of expressive and efficient proof systems for various blockchain applications. The proposed techniques have the potential to significantly enhance the scalability, efficiency, and flexibility of zero-knowledge proof systems, paving the way for more practical and adaptive solutions in the blockchain ecosystem.

Introduction

Zero-knowledge proofs (ZKPs) have garnered significant attention in recent years due to their ability to enhance privacy and security in various applications, particularly in the domain of blockchain technology [1,2]. ZKPs allow one party (the prover) to convince another party (the verifier) that a statement is true without revealing any additional information beyond the validity of the statement itself [3]. This property makes ZKPs a powerful tool for enabling secure and privacy-preserving transactions, smart contracts, and other applications in blockchain systems [4][5][6][7]. One of the fundamental building blocks of many blockchain protocols is the Merkle tree [8][9][10], which is a data structure that enables the efficient and secure verification of large datasets. Merkle trees are used to store transactions, account balances, and other critical information in a compact and tamper-evident manner [8]. To prove the inclusion of a specific data element within a Merkle tree, a prover must provide a Merkle proof, which consists of a path of hashes from the leaf node (representing the data element) to the root of the tree [8][9][10]. However, the efficiency of Merkle proofs becomes a critical issue when dealing with large-scale blockchain systems. Specifically, we address the following problem:

• For a given set X of leaves in a Merkle tree, create a universal proof that allows for efficient verification of whether an arbitrary pair (b, h) belongs to X, where h is the hash value of b, without the need to provide or process all leaves from X during each verification.
This challenge is particularly relevant in scenarios where selective verification of individual leaves is required, such as in decentralized exchanges or supply chain management systems, where the ability to efficiently prove the inclusion of specific transactions or items without revealing the entire dataset is crucial. While Merkle trees offer efficient verification for individual elements, proving the inclusion of multiple elements or generating universal proofs for all elements in the tree remains a challenge. This limitation becomes particularly apparent in scenarios that require frequent verifications or deal with large-scale datasets, where the cumulative overhead of multiple Merkle proofs can impact the system performance [11][12][13]. Traditional proof aggregation techniques based on AND logic, where multiple proofs are combined into a single proof, were proposed to address this issue [14,15]. However, these methods often result in increased verification complexity and data communication overhead, especially for large Merkle trees, as they require processing all leaves during verification. Recent work has explored alternative aggregation strategies, including the use of OR logic in the context of Sigma protocols [16,17]. Building upon these foundations, we propose a novel application of recursive OR aggregation specifically tailored for Merkle tree proofs, which allows for the efficient verification of individual leaves without the need to process the entire dataset. In this paper, we present a practical approach to compressing Merkle proofs into a single, compact zero-knowledge proof using recursive OR aggregation. Our method enables the generation of a universal proof that can attest to the inclusion of any leaf in the Merkle tree, significantly reducing the overall proof size and verification complexity. This approach is particularly valuable in blockchain systems, where efficient proof generation and verification are crucial for scalability and performance. The key contributions of our work are as follows:

1. We adapt and extend the concept of OR aggregation, which was previously discussed in the context of Sigma protocols, to create a recursive aggregation scheme specifically designed for Merkle tree proofs.
2. We provide a detailed description of the process for generating a universal, compact proof for Merkle tree inclusion using recursive OR aggregation.
3. We present a comparative analysis that demonstrates the advantages of our approach in terms of proof size, verification data, and universality, particularly in contrast to traditional AND aggregation methods.
4. We discuss the practical implications of our method for blockchain applications, including potential optimizations for smart contract execution and improvements in the overall system efficiency.

The rest of this paper is organized as follows: Section 2 provides the necessary background on zero-knowledge proofs, Merkle trees, and existing proof aggregation techniques. Section 3 introduces our proposed recursive OR aggregation scheme for Merkle tree proofs, including the formal definitions and the process of generating universal proofs. Section 4 presents a comparative analysis of our approach with traditional aggregation methods and discusses potential applications and extensions of our scheme. Finally, Section 5 concludes this paper and outlines future research directions.
Zero-Knowledge Proofs

ZKPs are cryptographic protocols that allow a prover to convince a verifier that a statement is true without revealing any additional information beyond the validity of the statement [2]. The concept of ZKPs was first introduced by Goldwasser, Micali, and Rackoff in 1985 [3], and since then it has been extensively studied and applied in various domains, including authentication, digital signatures, and blockchain technology [18,19]. A zero-knowledge proof must satisfy three properties:

1. Completeness: If the statement is true, an honest prover should be able to convince an honest verifier of its validity.
2. Soundness: If the statement is false, no prover (even a dishonest one) should be able to convince an honest verifier that it is true, except with a negligible probability.
3. Zero-knowledge: The verifier should not learn any information from the proof except for the validity of the statement.

Merkle Trees

Merkle trees, also known as hash trees, are a fundamental data structure used in many blockchain protocols to enable the efficient and secure verification of large datasets [28]. A Merkle tree is a binary tree in which each leaf node contains the hash of a data block, and each non-leaf node contains the hash of its child nodes' hashes [28,29]. The root of the tree is a single hash value that represents the entire dataset. The primary advantage of Merkle trees lies in their ability to provide efficient proofs of inclusion for individual elements without requiring the verifier to process the entire dataset [30]. This property is particularly valuable in blockchain systems, where it enables light clients to verify transactions without downloading the full blockchain [31]. To prove the inclusion of a data element in a Merkle tree, a prover needs to provide a Merkle proof, which consists of the hashes along the path from the leaf node (representing the data element) to the root of the tree. The verifier can then reconstruct the root hash using the provided hashes and compare it with the known root hash to verify the inclusion of the data element [30]. While Merkle trees offer efficient verification for individual elements, the cumulative cost of generating and verifying multiple proofs can become significant in scenarios involving large-scale data or frequent verifications. This issue has led researchers to explore various optimization techniques and alternative proof structures [11][12][13].
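Before turning to aggregation strategies, a minimal sketch of the baseline construction and proof check just described may be useful. It is illustrative only and assumes SHA-256, a power-of-two number of leaves, and a simple left/right sibling encoding, none of which is mandated by the paper.

```python
import hashlib
from typing import List, Tuple

def H(data: bytes) -> bytes:
    """Cryptographic hash function H(.) -- SHA-256 is an assumption here."""
    return hashlib.sha256(data).digest()

def merkle_root(leaf_hashes: List[bytes]) -> bytes:
    """Recursively hash pairs of adjacent nodes until a single root remains."""
    level = leaf_hashes
    while len(level) > 1:
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def verify_merkle_proof(leaf: bytes, path: List[Tuple[bytes, str]], root: bytes) -> bool:
    """Recompute the root from a leaf and its sibling path.

    `path` lists (sibling_hash, side) pairs from the leaf level upward,
    with side == "L" when the sibling sits to the left of the running hash.
    """
    node = H(leaf)
    for sibling, side in path:
        node = H(sibling + node) if side == "L" else H(node + sibling)
    return node == root
```

The verifier touches only the O(log n) hashes on the path, which is the per-leaf efficiency that the aggregation techniques discussed next try to preserve while adding universality.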
Proof Aggregation Techniques

As blockchain networks scale and the volume of data stored in Merkle trees grows, the efficiency of proof generation and verification has become an increasingly important consideration. To address this challenge, various proof aggregation techniques have been proposed [32,33]. The most common proof aggregation approach is based on AND logic, where the aggregated proof is considered valid only if all the constituent proofs are valid [32,33]. In the context of Merkle tree inclusion proofs, AND aggregation allows the prover to combine the proofs for multiple data elements into a single proof. However, the verifier still needs to process all the leaf hashes to validate the aggregated proof, leading to high verification complexity, especially for large Merkle trees. Recent research has explored alternative aggregation strategies to overcome the limitations of AND-based approaches. Notable among these is the concept of OR aggregation, which has been discussed in the context of Sigma protocols [16,17]. OR aggregation allows for the construction of proofs that are valid if at least one of the constituent proofs is valid, potentially offering advantages in terms of proof size and verification efficiency. Other proof aggregation techniques explored in the literature include the following:

• Batch verification [6,23]: this approach allows for the simultaneous verification of multiple signatures or proofs, reducing the overall computational cost.
• Recursive proof composition [1,12]: this technique involves using the output of one proof as an input to another, enabling the construction of more complex proofs from simpler building blocks.
• Probabilistic proof aggregation [34,35]: these methods use probabilistic techniques to reduce the proof size and verification time, often at the cost of introducing a small probability of error.

A particularly relevant work in this context is the Maru project [36], which proposes an approach for embedding Merkle path elements into proofs. While this method offers improvements in terms of proof size and verification efficiency, it results in proofs that are specific to individual leaves rather than universal for the entire tree. Our work builds upon these foundations, particularly the concept of OR aggregation, and extends it to create a recursive aggregation scheme specifically tailored for Merkle tree proofs. By doing so, we aim to address the limitations of existing approaches and provide a more efficient and flexible solution for generating compact, universal proofs of inclusion in Merkle trees.

Enhanced Aggregation Logic

Before introducing our enhanced aggregation logic for Merkle tree proofs, it is crucial to establish the foundations upon which our work is built. We begin by reviewing key concepts from Sigma protocols, which form the basis for many zero-knowledge proof systems. In the context of this paper, "aggregation" refers to the process of combining multiple individual proofs or data elements into a single coherent structure that can be verified as a whole. Specifically, in the realm of zero-knowledge proofs within Merkle trees, aggregation aims to consolidate numerous individual proofs of inclusion into a unified proof. This unified proof not only asserts the validity of multiple data elements concurrently but also optimizes the computational and communication overhead associated with their verification. We utilize OR aggregation logic, where a single composite proof is deemed valid if at least one of its constituent proofs holds true. This method contrasts with AND aggregation, which requires all constituent proofs to be valid for the composite proof to be accepted, and typically involves higher complexity and resource demands.

Foundations: Sigma Protocols and OR Composition

Sigma protocols, introduced by Cramer et al. [37], are three-move public coin protocols that allow a prover to convince a verifier of the validity of a statement without revealing any additional information. A Sigma protocol Π for a relation R consists of algorithms (P1, P2, V), where the following occurs:

1. P1(x, w) → a: the prover's first move, which generates the initial message a.
2. P2(x, w, a, c) → z: the prover's response z to the verifier's random challenge c.
3. V(x, a, c, z) → {0, 1}: the verifier's decision to accept or reject the transcript.

Sigma protocols possess three key properties:

1. Completeness: an honest prover can always convince an honest verifier.
2. Special soundness: given two accepting transcripts (a, c, z) and (a, c′, z′) with c ≠ c′, one can efficiently extract a witness w.
3. Special honest-verifier zero knowledge: there exists a simulator that can produce transcripts indistinguishable from real protocol executions.

Building upon Sigma protocols, Cramer et al. [37] introduced the OR composition technique, which allows for proving knowledge of at least one witness among multiple statements. This technique forms the theoretical basis for our approach to Merkle tree proof aggregation. We now introduce our novel approach to proof aggregation in zero-knowledge proof systems for Merkle trees, which addresses the limitations of traditional AND aggregation logic. Our enhanced aggregation scheme, which is based on OR logic, enables the generation of compact and universally verifiable zk-proofs for Merkle tree inclusion.

Motivation for an Improved Universal Proof

Let M be a Merkle tree with n leaves, where n = 2^d for some integer d ≥ 0. Each leaf is associated with a data block b_i (i = 1, . . ., n), and the corresponding leaf hash is computed as h_i = H(b_i), where H(·) is a cryptographic hash function. The Merkle tree is constructed by recursively hashing pairs of adjacent nodes until a single root hash h_root is obtained. In traditional approaches, proving the inclusion of a leaf in a Merkle tree requires providing a path of hashes from the leaf to the root. While this is efficient for single-leaf verification, the method becomes cumbersome when proving the inclusion of multiple leaves or when generating a universal proof for all leaves. To address this, previous work explored proof aggregation techniques. The most common approach is based on AND logic, where an aggregated zk-proof π_AND is considered valid only if all constituent zk-proofs π_1, . . ., π_m are valid; formally (Figure 1), the aggregated proof verifies exactly when every constituent proof verifies, where V(π_i, h_i) denotes the verification function that outputs 1 if π_i is a valid proof for h_i and 0 otherwise.
[Figure 1: aggregation logic "AND" of zero-knowledge proofs.]

While AND aggregation has been effective in various scenarios, it poses significant challenges when applied to large Merkle trees. The main issue is verification complexity: the verifier needs to process all leaf hashes to validate the proof, leading to high computational and communication overhead for large trees. To illustrate this, consider the problem of proving the inclusion of a single leaf b_i in a Merkle tree M. In a standard Merkle proof, the prover provides the verifier with a path of hashes from the leaf b_i to the root h_root, along with the corresponding sibling hashes at each level. The verifier can then recompute the root hash and compare it with the known value to verify the inclusion of b_i. However, if we were to use AND aggregation to create a single zk-proof for the inclusion of leaf l_i (highlighted in yellow in Figure 2), the prover would need to provide proofs for all the leaves in the tree, i.e., π_1, . . ., π_n, where n = 2^d. The aggregated proof π_AND would then be validated by verifying each constituent proof (Figure 2). The main challenge with using AND aggregation for Merkle tree inclusion proofs is the verification complexity. While the size of the aggregated proof π_AND itself may be compact, the verifier would need to be provided with all the leaf hashes h_1, . . ., h_n to validate the proof (highlighted in red in Figure 2). In a tree with 2^30 leaves (corresponding to a 1 GB data block), this would require the prover to send, and the verifier to process, 2^30 hash values, each of which is typically 256 bits long, resulting in a total communication overhead of 32 GB. This makes the verification process impractical for large Merkle trees.
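To make the data-overhead point concrete, the following schematic sketch (illustrative only; `verify_single` is a placeholder for whichever zk-proof verifier is used, not an API from the paper) shows that the AND-aggregated check consumes all n leaf hashes, and it reproduces the 32 GB estimate quoted above.

```python
from typing import Callable, Sequence

def verify_and_aggregated(proofs: Sequence[bytes],
                          leaf_hashes: Sequence[bytes],
                          verify_single: Callable[[bytes, bytes], bool]) -> bool:
    """AND logic: the aggregate is accepted only if every constituent proof verifies.

    The verifier must be handed all n leaf hashes, so the public verification
    data grows linearly with the number of leaves.
    """
    return all(verify_single(p, h) for p, h in zip(proofs, leaf_hashes))

def and_verification_data_bytes(num_leaves: int, hash_bytes: int = 32) -> int:
    """Size of the leaf-hash data the verifier must receive under AND aggregation."""
    return num_leaves * hash_bytes

# Reproduces the estimate in the text: 2**30 leaves x 32-byte (256-bit) hashes = 32 GiB.
assert and_verification_data_bytes(2**30) == 32 * 2**30
```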
One way to mitigate this issue is to embed the specific Merkle path elements for a particular leaf into the final proof, as was done in the Maru project [36]. This approach eliminates the need to provide all the leaf hashes during verification. However, the resulting proof is no longer universal, as it is tailored to prove the inclusion of a single, specific leaf: if the prover wants to demonstrate the inclusion of a different leaf, a new proof must be generated, embedding the corresponding Merkle path elements. Formally, let π_AND(b_i) denote the AND-aggregated proof for the inclusion of leaf b_i, with the Merkle path elements h_1, h_2, . . ., h_d for b_i embedded in the proof (highlighted in yellow in Figure 3). The verification of π_AND(b_i) would only require the leaf hash h_i and the root hash h_root (highlighted in orange in Figure 3). Figure 3 shows the following:

• x, the public statement (highlighted in green in Figure 3);
• w, the secret witness (highlighted in red in Figure 3);
• ||, the concatenation function (combining vectors).

While this approach reduces the communication overhead and verification complexity compared with AND aggregation, it comes at the cost of proof universality. If the prover wants to demonstrate the inclusion of a different leaf, a new proof must be generated, embedding the corresponding Merkle path elements. Consequently, the prover must generate a separate proof π_AND(b_i) for each leaf b_i (i = 1, 2, . . ., n) they want to prove inclusion for. This can be inefficient in scenarios requiring frequent proof generation for different subsets of leaves or when dealing with a large number n of leaves in dynamic environments.
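A schematic sketch of this leaf-specific workflow is given below; the `verify` callable is a placeholder for the underlying zk-proof verifier and is not an interface defined in [36]. It makes explicit that a proof must already exist for the particular leaf being checked.

```python
from typing import Callable, Dict, Tuple

Proof = bytes
# Placeholder verifier: V(pi, (h_leaf, h_root)) -> bool.
Verifier = Callable[[Proof, Tuple[bytes, bytes]], bool]

def verify_leaf_specific(proofs_by_leaf: Dict[int, Proof],
                         i: int,
                         h_i: bytes,
                         h_root: bytes,
                         verify: Verifier) -> bool:
    """Maru-style check: each leaf b_i needs its own pre-generated proof.

    If no proof was generated for index i, the prover must run proof
    generation again with the Merkle path of that leaf embedded.
    """
    if i not in proofs_by_leaf:
        return False  # a different leaf => a new, leaf-specific proof is required
    return verify(proofs_by_leaf[i], (h_i, h_root))
```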
In contrast, our OR aggregation method addresses this limitation by creating a single, universal proof that can verify the inclusion of any leaf without requiring regeneration for different leaves or subsets. This approach maintains the efficiency of verification while providing greater flexibility and reducing the computational overhead for the prover in dynamic scenarios.

OR Aggregation for Merkle Tree Proofs

Building upon the concept of OR composition in Sigma protocols, we propose an enhanced aggregation scheme based on OR logic specifically tailored for Merkle tree proofs. Our approach allows for the construction of a valid proof if at least one of the constituent proofs is valid, significantly reducing the verification complexity. Formally, let π_1, . . ., π_m be proofs for the validity of leaf hashes h_1, . . ., h_m, respectively. The OR aggregation of these proofs, denoted by π_OR, is defined as follows (Figure 4): the aggregated proof π_OR is valid if and only if at least one of the constituent proofs π_1, . . ., π_m is valid. This property is crucial for our approach, as it allows for efficient verification using any single leaf. Specifically, if we supply any valid leaf hash h_i ∈ {h_1, . . ., h_m} to the proof-checking function V(π_OR, h_i), we obtain confirmation of inclusion for that leaf. This formulation demonstrates that our OR-aggregated proof can verify the inclusion of any leaf in the Merkle tree using a single, compact proof. The OR aggregation logic enables a more efficient traversal of the Merkle tree, where proofs for individual leaves can be aggregated in a way that naturally follows the tree structure. While our OR aggregation process follows a structure similar to the standard Merkle tree construction, it operates on proofs rather than hash values. This key distinction allows us to create a universal proof for leaf inclusion without modifying the underlying Merkle tree structure. Let M be a Merkle tree with n leaves, and let b_1, . . ., b_n be the leaf nodes with corresponding hashes h_1, . . ., h_n.
The aggregation process begins at the leaf level and progresses upward, combining proofs for adjacent nodes to form aggregated proofs for their parent nodes. At each level, we apply our OR logic to the proofs. Here, π_OR^Parent is the aggregated proof for a parent node, which is derived from the proofs π_left and π_right of its left and right child nodes, respectively. This operation preserves the critical property that the aggregated proof remains valid if either of its constituent proofs is valid. This approach directly addresses the challenge of efficient selective verification, allowing us to prove the inclusion of any leaf b_i with hash h_i in the Merkle tree using a single, compact proof. Unlike standard Merkle proofs, our method does not require providing the entire path from leaf to root for each verification.

[Figure 4: aggregation logic "OR" of zero-knowledge proofs.]

Generating a Universal Proof for Merkle Tree Inclusion

Our OR aggregation scheme enables the generation of a universal proof that succinctly attests to the inclusion of any valid leaf in the Merkle tree. This process consists of the following steps (Figure 5):

1. Generate proofs for each leaf: For each leaf node b_i (i = 1, . . ., n) in the Merkle tree, generate a zero-knowledge proof π_i that attests to the correctness of the leaf hash h_i. This can be done using a suitable zero-knowledge proof system, such as zk-SNARKs or zk-STARKs.
2. Aggregate proofs using OR logic: Starting from the leaves, recursively aggregate the proofs of adjacent nodes using OR logic, as described in Section 4.2. At each level, the proofs of sibling nodes are combined to form a proof for their parent node (highlighted in yellow in Figure 5). This process is repeated until a single proof π_OR^root is obtained for the root of the tree.
3. Output the universal proof: The aggregated proof for the root of the Merkle tree, π_OR^root, serves as the universal proof of inclusion. This proof has the property that it can be validated by providing any one of the valid leaf hashes as the input.
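A minimal sketch of steps 1–3 follows. It is illustrative only: the actual OR composition of zk-proofs (e.g., via recursive proof composition) is abstracted behind the `prove_leaf` and `or_combine` placeholders, which are not APIs defined by the paper.

```python
from typing import Callable, List, Sequence

Proof = bytes
ProveLeaf = Callable[[bytes], Proof]         # step 1: pi_i attesting to h_i = H(b_i)
OrCombine = Callable[[Proof, Proof], Proof]  # step 2: pi_parent = pi_left OR pi_right

def aggregate_or(leaves: Sequence[bytes],
                 prove_leaf: ProveLeaf,
                 or_combine: OrCombine) -> Proof:
    """Produce the universal proof pi_OR_root for a tree with n = 2**d leaves."""
    level: List[Proof] = [prove_leaf(b) for b in leaves]      # one proof per leaf
    while len(level) > 1:                                      # recurse up the tree
        level = [or_combine(level[i], level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]                                            # step 3: pi_OR_root
```

The returned proof is then checked as V(π_OR^root, h_i) for any single valid leaf hash h_i, so both the proof and the verification data stay constant-size in n.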
The resulting universal proof π_OR^root is compact, as its size is independent of the number of leaves in the tree. Moreover, the proof can be efficiently verified by providing any one of the valid leaf hashes, without requiring the prover to send all the leaf hashes or embed specific Merkle path elements for each leaf.

Comparison with Existing Approaches

Our OR aggregation scheme for Merkle tree proofs builds upon the theoretical foundations of Sigma protocols and OR composition techniques, while addressing the specific challenges of Merkle tree verification in blockchain systems. Unlike the approach used in the Maru project [36], which embeds Merkle path elements for a particular leaf, our method generates a truly universal proof that can be verified using any leaf in the tree. Furthermore, our approach differs from traditional Sigma-protocol-based systems in its specific application to Merkle trees and its recursive nature. While standard OR composition allows for proving knowledge of one out of many witnesses, our scheme enables the aggregation of proofs across all levels of the Merkle tree, resulting in a single, compact proof for the entire structure. By leveraging the efficiency of OR logic in this context, we achieve a significant reduction in proof size and verification complexity compared with AND-based aggregation methods, especially for large Merkle trees. This makes our approach particularly suitable for blockchain applications where efficient proof generation and verification are crucial for scalability and performance.

Comparative Analysis of Merkle Tree Proof Techniques

Our proposed OR aggregation logic for Merkle tree proof aggregation offers several advantages over traditional approaches. To quantify these benefits, we conducted a comprehensive comparative analysis of our method against standard Merkle proofs, AND aggregation, and the Maru project's approach [36]. Table 1 presents a summary of our findings, comparing key metrics across the four approaches. The key observations from Table 1 are as follows:

1. Standard Merkle proof: while efficient for single-leaf verification, it lacks universality and scales logarithmically with tree size.
2. AND aggregation: offers a universal proof but requires all leaf hashes for verification, leading to a high data overhead.
3. Our OR aggregation: combines the advantages of constant-size proofs, minimal verification data, and universality.

Practical Implications for Blockchain Systems

The efficiency gains provided by our OR aggregation technique have several practical implications for blockchain systems:

1. Improved throughput: By reducing the verification complexity to O(1), our approach allows for significantly higher transaction throughput in blockchain networks. This is particularly important for large-scale, high-volume applications.
2. Reduced storage requirements: the compact nature of our universal proofs means that less storage is required for maintaining proof data, potentially leading to reduced costs for node operators.
3. Enhanced light client functionality: our method enables more efficient light client implementations, as clients can verify the inclusion of any leaf in the Merkle tree with minimal computational and data transfer overhead.
4. Flexible verification: the ability to verify the inclusion of any leaf using a single universal proof provides greater flexibility in how blockchain data can be accessed and verified.

Extending the Technique to New Applications

The introduction of OR aggregation logic alongside traditional AND aggregation opens up new possibilities for constructing complex acceptance functions at the proof-generation level. By combining these aggregation functions, we can create sophisticated proof systems that cater to various business logic requirements in blockchain applications. For instance:

1. Partial group verification: in scenarios where a condition must be met by at least one participant from a group, OR aggregation can be used to efficiently verify this without checking each proof individually.
2. Complete group verification: for cases requiring all participants to satisfy a condition, AND aggregation can be employed to create a single, verifiable proof of complete compliance.
3. Nested conditions: complex scenarios involving combinations of conditions (e.g., "all participants from group A OR at least one from group B") can be represented by nesting AND and OR aggregations.

This flexibility in constructing acceptance functions at the proof level can significantly enhance the expressiveness and efficiency of blockchain applications. It allows for the offloading of complex verification logic from smart contracts to the proof generation phase, potentially leading to more streamlined and cost-effective contract execution.

Potential Limitations and Future Work

While our OR aggregation technique offers significant advantages, it is important to acknowledge potential limitations and areas for future research:

1. Proof generation overhead: Although verification is highly efficient, the initial proof generation process may be more computationally intensive than traditional methods. Future work could focus on optimizing this process.
2. Security considerations: As with any new cryptographic technique, thorough security analysis is crucial. Future studies should focus on formal security proofs and potential attack vectors.
3. Integration with existing systems: further research is needed to explore the best practices for integrating our approach with existing blockchain protocols and infrastructure.
Extension to other data structures: while our focus has been on Merkle trees, future work could explore the application of similar OR aggregation techniques to other cryptographic data structures used in blockchain systems. 5. Theoretical foundations: further research could explore the theoretical underpinnings of our approach, potentially leading to new insights in the field of zero-knowledge proofs and their applications. In conclusion, our OR aggregation technique for Merkle tree proofs represents a significant advancement in the field of blockchain scalability and efficiency.By enabling constant-time verification and compact universal proofs, our approach addresses key limitations of existing methods and opens new possibilities for high-performance blockchain applications.As the blockchain ecosystem continues to evolve, techniques like ours will play a crucial role in enabling the next generation of scalable, efficient, and secure distributed systems. Conclusions In this paper, we introduce a novel proof-aggregation technique based on OR logic, which addresses the limitations of traditional AND aggregation in the context of Merkle tree inclusion proofs.Our approach, which builds upon and extends the OR composition concept from Sigma protocols, enables the generation of compact and universally verifiable proofs, allowing for efficient and scalable verification of Merkle tree inclusion. We formally defined the OR aggregation logic and described the process of generating a universal proof for Merkle tree inclusion using this approach.The resulting proof is not only compact in size but also universal, capable of being verified using any single valid leaf hash.This provides a significant advantage over traditional Merkle proofs and AND aggregation methods, particularly for large-scale blockchain applications. Through a comparative analysis, we demonstrated the benefits of our proposed approach in terms of the proof size, verification data, and universality.Our OR aggregation scheme achieves constant-size proofs and verification data, regardless of the size of the Merkle tree.This represents a substantial improvement over standard Merkle proofs, which scale logarithmically, and AND aggregation, which requires linear growth in verification data. Furthermore, we discuss the potential of combining OR and AND aggregation logics to create complex acceptance functions at the proof generation level.This flexibility enables the development of expressive and efficient proof systems that can cater to various business logic requirements in blockchain applications.While our approach offers substantial benefits, we acknowledge that there are areas for future research and potential limitations to address.These include optimizing the proof-generation process, conducting thorough security analyses, and exploring integration strategies with existing blockchain protocols. The proposed techniques have the potential to significantly enhance the scalability, efficiency, and expressiveness of zero-knowledge proof systems in the context of Merkle tree inclusion proofs and beyond.As the adoption of zero-knowledge proofs continues to grow in blockchain applications, the ability to construct flexible and efficient proof aggregation schemes will be crucial in enabling the development of scalable and practical solutions. 
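As an illustration of the aggregation logic summarized above, the following minimal Python sketch builds a Merkle tree and the acceptance predicate that OR aggregation encodes at the root: any single valid leaf hash satisfies it, while a hash outside the tree is rejected. A plain closure stands in for the recursive zk-proof π_OR^root, and all names are illustrative; the sketch captures the acceptance logic only, not the cryptographic argument itself.

```python
import hashlib

def h(data: bytes) -> bytes:
    """SHA-256 stands in for the circuit-friendly hash used in the real scheme."""
    return hashlib.sha256(data).digest()

def build_levels(leaves):
    """Standard Merkle tree: returns the list of levels, leaves first, root last."""
    level = [h(leaf) for leaf in leaves]
    levels = [level]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def or_acceptance(levels):
    """
    Acceptance predicate encoded by OR aggregation: a candidate leaf hash is
    accepted iff it equals some leaf hash committed under the root. In the real
    construction this predicate is enforced by a single recursive zk-proof; a
    Python closure stands in for that proof here.
    """
    leaf_hashes = set(levels[0])
    root = levels[-1][0]
    return root, (lambda candidate: candidate in leaf_hashes)

leaves = [f"tx-{i}".encode() for i in range(8)]
root, accepts = or_acceptance(build_levels(leaves))

assert accepts(h(b"tx-3"))        # any single valid leaf hash satisfies the root statement
assert not accepts(h(b"tx-99"))   # a hash outside the tree is rejected
print("root commitment:", root.hex()[:16], "...")
```

In the real construction the closure would be replaced by the compact recursive proof itself, verified against the root commitment.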
In conclusion, our OR aggregation technique for Merkle tree proofs represents a significant step forward in addressing the scalability and efficiency challenges faced by current blockchain systems. By enabling constant-time verification and compact universal proofs, our approach opens new possibilities for high-performance blockchain applications and contributes to the ongoing evolution of secure and scalable distributed systems. Figure 2. AND logic to create a single zk-proof of inclusion. Figure 3. Logic for generating a single inclusion proof with Merkle path embedding (as used in the Maru project [36]): (a) Merkle tree; (b) Proof generation scheme. Figure 5. OR logic to create a single zk-proof of inclusion. Table 1. Comparative analysis of Merkle tree proof techniques, where n is the number of leaves in the Merkle tree.
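To put rough numbers on the comparison summarized in Table 1, the sketch below tabulates how proof size and verification data scale with the number of leaves n, counted in hashes. The asymptotics follow the observations stated above, while the constants are illustrative; the Maru-style proof is omitted because its exact metrics are not restated here.

```python
import math

def costs(n: int) -> dict:
    """Approximate proof-size / verification-data scaling, in number of hashes."""
    return {
        "standard Merkle proof": {"proof": math.ceil(math.log2(n)), "verify_data": 1, "universal": False},
        "AND aggregation":       {"proof": 1, "verify_data": n, "universal": True},
        "OR aggregation (ours)": {"proof": 1, "verify_data": 1, "universal": True},
    }

for n in (2 ** 10, 2 ** 20):
    print(f"n = {n}")
    for name, c in costs(n).items():
        print(f"  {name:24s} proof={c['proof']:>7}  verify={c['verify_data']:>8}  universal={c['universal']}")
```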
9,142
2024-05-13T00:00:00.000
[ "Computer Science" ]
A Non-Volatile All-Spin Analog Matrix Multiplier: An Efficient Hardware Accelerator for Machine Learning We propose and analyze a compact and non-volatile nanomagnetic (all-spin) analog matrix multiplier performing the multiply-and-accumulate (MAC) operation using two magnetic tunnel junctions – one activated by strain to act as the multiplier, and the other activated by spin-orbit torque pulses to act as a domain wall synapse that performs the operation of the accumulator. Each MAC operation can be performed in ~1 ns and the maximum energy dissipated per operation is ~100 aJ. This provides a very useful hardware accelerator for machine learning (e.g. training of deep neural networks), solving combinatorial optimization problems with Ising-type machines, and other artificial intelligence tasks which often involve the multiplication of large matrices. The non-volatility allows the matrix multiplier to be embedded in powerful non-von-Neumann architectures. I. INTRODUCTION Artificial intelligence (AI) is pervasive and ubiquitous in modern life (smart cities, smart appliances, autonomous self-driving vehicles, information processing, speech recognition, patient monitoring, etc.). Estimates by OpenAI predict an explosive growth of computational requirements in AI by a factor of 100× every two years, which is a 50× faster rate than Moore's law governing the evolution of the chip industry [1]. Most AI applications leverage machine learning (or deep learning based on neural networks) to perform two primary functions, training and inference. Algorithms for these tasks require the multiplication of large matrices, such as in updating the synaptic weight matrices in deep learning networks (an essential feature of training a neuronal circuit) and in solving combinatorial optimization problems. Hardware accelerators that can perform matrix multiplications rapidly and efficiently are therefore very attractive since they can speed up AI tasks immensely. They are particularly useful in computer vision [2], image and other classification tasks [3], approximate computing [4], speech recognition [5], patient monitoring [6] and biomedicine [7]. The earliest ideas for devising hardware-based matrix multipliers date back to 1909, when Percy Ludgate conceived of a machine made of mechanical parts that was understandably unwieldy, slow and unreliable [8]. Modern matrix multipliers employ electronic charge-based circuitry that is fast, convenient and reliable [9], but also energy-hungry and volatile, i.e. they lose all information once powered off. Recently, matrix multipliers have been implemented with optical networks [10,11], which can be extremely energy-efficient and fast, but their drawback is the large footprint. They too are usually volatile since they use capacitors. In this paper, we present an all-magnetic (all-spin) implementation of a matrix multiplier, which is energy-efficient, fast and has a much smaller footprint than its optical counterparts. Its most important advantage is that it is non-volatile and hence the matrix products can be stored indefinitely in the device after powering off. Consider the matrix multiplication operation C = AB. This operation consists of multiplying pairs of numbers (one member of the pair picked from a row of one matrix and the other from a column of the other matrix) and then adding up the products of the pairs to produce an element of the product matrix.
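The row-times-column decomposition just described is exactly the multiply-and-accumulate pattern the proposed hardware implements. A minimal software sketch of the same decomposition (purely illustrative, with the hardware roles noted in comments):

```python
def matmul_mac(A, B):
    """
    Matrix product C = A @ B expressed through the two primitives discussed above:
    a multiplier (one pair product per step) and an accumulator (running sum).
    Each element c_ij is produced by a chain of multiply-and-accumulate (MAC) steps.
    """
    rows, inner, cols = len(A), len(B), len(B[0])
    C = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            acc = 0.0                      # accumulator (domain-wall synapse in the hardware)
            for k in range(inner):
                acc += A[i][k] * B[k][j]   # multiplier (strain-gated MTJ in the hardware)
            C[i][j] = acc
    return C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(matmul_mac(A, B))   # [[19, 22], [43, 50]]
```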
Thus, one would need: (1) a "multiplier" to multiply pairs of numbers, and (2) an "accumulator" (which accumulates the individual products and adds them up). These are the two ingredients of a hardware accelerator for matrix multiplication. In this work, we implement the multiplier with a single straintronic magnetic tunnel junction (MTJ) and the accumulator with another magnetic tunnel junction (driven by spin-orbit torque) acting as a domain wall synapse [12]. Each MTJ can have a footprint of ~(100 nm) 2 , and with all the peripherals, the footprint of the entire device can be < 1 m 2 . The matrix multiplier can operate at clock rates of ~GHz and dissipate ~100 aJ of energy per multiply-and-accumulate (MAC) operation. In the next two sections, we describe the multiplier and the accumulator. II. MULTIPLIER A schematic of the proposed multiplier is shown in Fig. 1. It consists of an elliptical MTJ that has a (magnetically) "hard" layer and a "soft" layer, separated by an intervening insulating spacer layer. Any residual dipole interaction between the hard and the soft layer creates an effective magnetic field Hd in the soft layer that is directed along the latter's major axis (easy axis) in a direction opposite to the magnetization of the hard layer. The soft layer is magnetostrictive and placed in elastic contact with an underlying poled piezoelectric thin film deposited on a conducting substrate (this construct constitutes a 2-phase multiferroic). Two electrically shorted electrodes, delineated on the piezoelectric film, flank the MTJ, while the back of the substrate is connected to ground. When a (gate) voltage V G is applied to the shorted electrode pair, it generates biaxial strain in the piezoelectric film pinched between the two electrodes, which is transferred to the elliptical soft layer. The strain is either compressive along the major axis and tensile along the minor axis of the soft layer, or vice versa, depending on the voltage polarity [13]. With the right voltage polarity, these strains rotate the soft layer's magnetization away from the major axis of the ellipse (the easy axis) towards the minor axis (hard axis) because of the Villari effect. The rotation is arrested midway by the magnetic field Hd and hence the magnetization ultimately settles into a steadystate orientation that subtends some angle  ss with the major axis (or the magnetization of the hard layer). The value of  ss depends on the applied strain and H d . The hard layer's magnetization remains unaffected. Since the resistance of the s-MTJ depends on the angle  ss between the magnetizations of the hard and the soft layers, the strain changes the MTJ resistance since the value of H d is fixed. This is the operational basis of a "straintronic" MTJ (s-MTJ), whose basic function was demonstrated in [14]. To implement the multiplier, a constant current source I bias is connected between the hard and soft layers of the s-MTJ (terminals '1' and '2'), as shown in Fig. 1(a). This drives a current through the s-MTJ. The gate voltage V G is applied at terminal '3' to generate the strain in the soft layer, and a fourth terminal is connected to the hard layer (common with terminal '1'), which outputs a voltage V 0 . Terminal 2, connected to the soft layer, is grounded and hence 0 is the resistance of the s-MTJ that can be altered by the gate voltage V G generating strain, as explained before. A. 
Rotation of the soft layer's magnetization due to the gate voltage We have modeled the rotation of the soft layer's magnetization as a function of the gate voltage V G in the presence of H d and thermal noise using stochastic Landau-Lifshitz-Gilbert simulations [15]. This allows us to find the  ss versus V G relation. The s-MTJ resistance is given by   3 3 characteristic, which we show qualitatively in Fig. 1(b). With proper choice of the s-MTJ parameters, we can produce a linear region in the G s-MTJ vs. V G characteristic where We show this analytically in the Appendix. In Fig. 2, we plot the  ss versus V G characteristics obtained from the stochastic Landau Lifshitz Gilbert simulation and the resulting G s-MTJ versus V G plot. The simulation procedure is described in ref. [15] and the Appendix. The parameters for the elliptical soft layer of the s-MTJ used in the simulation are given in Table I. The soft layer is assumed to be made of Terfenol-D, which has large magnetostriction. The piezoelectric film is assumed to be (001) PMN-PT which has a large piezoelectric coefficient. The plot in Fig. 2 . When the gate voltage V G is chosen to be in that region, one can perform an analog multiplication of two input voltages V in1 and V in2 encoding the two matrix elements that are to be multiplied. We elucidate this in the next subsection. Table I. B. Operation of the multiplier To understand how the multiplier works, refer to Fig. 1(c) and note that 1 That implements a "multiplier" since the current I out flowing through the s-MTJ (which is also the current through the series resistor R) is proportional to the product of the two input voltages V in1 and V in2 . The voltage out V is proportional to this current and hence it too is proportional to the product . Similar ideas were used to design probability composer circuits for Bayesian inference engines in the past [16]. In our case, V in1 and V in2 are voltage "pulses" of fixed width and varying amplitude. Their amplitudes are proportional to the two matrix elements to be multiplied. Note from Fig. 2(b) that the linear region in the plot extends over a voltage range of ~100 mV. Therefore, for this choice of parameters, the amplitude of the V in1 pulse should be no more than ~50 mV. Since we would like the two voltage pulses V in1 and V in2 to have similar limits on the amplitude, both should have an amplitude no more than 50 mV. We can, of course, increase the voltage range by redesigning with different parameters, but that will increase the energy dissipation per MAC operation. III. ACCUMULATOR Next, imagine that the resistor R of Fig. 1(c) is a heavy metal (HM) strip, on top of which we place a p-MTJ (which is an MTJ whose ferromagnetic layers have perpendicular magnetic anisotropy) with the soft layer in contact with the HM strip. We can insert a thin insulating layer and a thin metallic layer between the soft layer and the heavy metal, which will not 4 4 impede the operation of the accumulator. This configuration is shown in Fig. 3(a). The current pulses I out pass through the heavy metal strip (which is the resistor R) and because of spinorbit interaction in that strip, they inject spins into the soft layer of the p-MTJ (through the thin insulating and metallic layers). That causes domain wall motion in the latter during each pulse because of spin orbit torque due to the spin Hall effect [17][18][19]. 
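Before the accumulator dynamics are quantified below, the multiplier relation from Section II can be illustrated numerically: the product operation relies only on the conductance being locally linear in the gate input, so that the current is proportional to V_in1 × V_in2. In the minimal sketch below, the conductance offset G0 and slope gamma are illustrative placeholders rather than device values, and the residual bias term G0·V_in2 is assumed to be removed by the surrounding circuit.

```python
import numpy as np

# Toy model of the analog multiplier: within the linear region of the
# G_sMTJ vs V_G characteristic, G ~ G0 + gamma * V_in1 (values illustrative).
G0 = 1.0e-3      # S, conductance at the bias point (hypothetical)
gamma = 2.0e-2   # S/V, slope of the linear region (hypothetical)

def i_out(v_in1, v_in2):
    """Current through the s-MTJ when V_in1 modulates the gate and V_in2 drives the stack."""
    return (G0 + gamma * v_in1) * v_in2

rng = np.random.default_rng(0)
v1 = rng.uniform(0, 0.05, 1000)   # pulse amplitudes limited to ~50 mV, as in the text
v2 = rng.uniform(0, 0.05, 1000)

# After subtracting the bias term G0*V_in2 (assumed handled by the circuit),
# the remaining current is proportional to the product V_in1*V_in2.
product_term = i_out(v1, v2) - G0 * v2
assert np.allclose(product_term, gamma * v1 * v2)
print("maximum product current ~", product_term.max(), "A")
```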
The distance a domain wall moves over the duration of a pulse is approximately proportional to the amplitude of the pulse since the domain wall velocity is proportional to the current density. The arrangement is shown in Fig. 3(b). After any number of pulses, a fraction of the soft layer will have its magnetization parallel to that of the hard layer, a small fraction will be un-magnetized and will be the "domain wall" separating two domains, and the remainder of the soft layer will have its magnetization antiparallel to that of the hard layer. The fractions with parallel and anti-parallel magnetizations change with successive current pulses flowing through the heavy metal. Fig. 1. (c) The conductance of the p-MTJ is the conductance of the parallel combination of three conductors associated with the anti-parallel configuration, domain wall interface, and parallel configuration. This is the well-known basis of a domain wall synapse [12]. Here, we have used a p-MTJ in the spirit of ref. [12], but there is no reason why an MTJ with in-plane magnetic anisotropy cannot be used instead. The conductance of the p-MTJ (measured between its hard and soft layers) is the conductance of the parallel combination of three conductors corresponding to the parallel configuration of the p-MTJ, the domain wall (DW) interface and the antiparallel configuration [12], as shown in Fig. 3(c). If the domain wall in the soft layer of the p-MTJ is located at a distance x from one edge and L is the length of the soft layer (excluding the domain wall width, which is w), then [12]  where G P is the p-MTJ conductance in the parallel state, G AP is the conductance in the antiparallel state and G DW is the conductance associated with the domain wall in the soft layer. A. Operation of the accumulator To understand how the accumulator works, consider the fact that the amplitudes of the voltage pulses V in1 and V in2 are proportional to the two matrix elements a and b that are to be multiplied. The pulses all have a fixed width of t. The current The i-th current pulse will move the domain wall by an amount and v i is the domain wall velocity imparted by the i-th current pulse. The domain wall velocity is proportional to current density for low densities [18] and is hence proportional to the amplitude of the current pulse. Therefore, from Equation (4), we get The last equation is an important result showing that the amount by which the domain wall moves after each pulse is proportional to the product of the two numbers a and b. Since constants. Finally, from Equation (6), we obtain Fig. 4 shows the composite system that constitutes the allspin matrix multiplier. In addition to the multiplier shown in Fig. 1(c) and the accumulator shown in Fig. 3(a), we use a voltage source V s proportional to 1/B, a conductor whose conductance is equal to A, and another conductor whose which is proportional to the (i, j)-th element of the product matrix. The voltage dropped over the last conductor is proportional to this current and hence proportional to the (i, j)th element of the product matrix c ij . We just have to measure this voltage after the pulse sequence has ended (i.e. one row has been multiplied with one column) to obtain a voltage proportional to c ij , which is the result of multiplying the i-th row of the first matrix with the j-th column of the second. 
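A toy end-to-end model of the accumulator described above is sketched below: each pulse advances the domain wall by a distance proportional to a_i·b_i, so the final wall position, read out through the p-MTJ conductance, encodes the dot product. The calibration constant, the conductance values, and the neglect of the finite wall-width term G_DW are simplifying assumptions of this sketch, not device parameters.

```python
# Toy model of one multiply-and-accumulate chain: pulse i carries a current
# proportional to a_i*b_i, the domain wall advances proportionally, and the
# final wall position is read out through the p-MTJ conductance.
L_nm = 100.0            # soft-layer length (nm), from the text
k_cal = 1.0             # nm of wall motion per unit product (hypothetical calibration)
G_P, G_AP = 1.0, 0.5    # parallel / antiparallel conductances (arbitrary units, illustrative)

def accumulate(row, col):
    """Advance the wall once per pulse; clamp it inside the soft layer."""
    x = 0.0
    for a, b in zip(row, col):
        x = min(x + k_cal * a * b, L_nm)
    return x

def p_mtj_conductance(x):
    """Linear interpolation between G_P and G_AP; the wall-width term G_DW is neglected."""
    return G_P * (x / L_nm) + G_AP * (1.0 - x / L_nm)

row, col = [1.0, 2.0, 3.0], [4.0, 5.0, 6.0]
x_final = accumulate(row, col)
print("dot product          :", sum(a * b for a, b in zip(row, col)))   # 32
print("wall position (nm)   :", x_final)
print("read-out conductance :", p_mtj_conductance(x_final))
```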
After obtaining c_ij, the domain wall synapse is reset with a magnetic field or a reverse current pulse to make x = 0, and then the process is repeated to obtain the product of another row of the first matrix with another column of the second (which would be the next element of the product matrix). B. Energy dissipation The energy dissipation incurred during the rotation of a nanomagnet's magnetization due to strain is very small, theoretically around 1 aJ at room temperature [15], while the energy dissipation associated with domain wall motion will be on the order of I^2*R*t, where I is the current inducing the domain wall motion, R is the resistance of the heavy metal strip and t is the pulse width. There is some additional dissipation in the passive resistors, but it can be made arbitrarily small by choosing the bias voltages to be small. We will neglect any other dissipation due to domain wall viscosity, which would be comparatively smaller. Therefore, the energy dissipated during each MAC operation is ~I^2*R*t. We will assume that the HM strip has a width of 50 nm, a thickness of 5 nm (cross-sectional area = 250 nm^2) and a length of 100 nm. Hence its resistance is R = 40 ohms if it is made of Pt, whose resistivity is 10^-7 ohm-m. From Fig. 1(c) we see that the current through the heavy metal strip will have a maximum value of ~50 μA, since V_in2(max) ~ 50 mV and R_P = 1 kΩ. Assuming a pulse width t = 1 ns, the maximum energy dissipation per multiply-and-accumulate (MAC) operation is ~(50 μA)^2 × 40 Ω × 1 ns = 10^-16 J, i.e. ~100 aJ. This is a small energy price to pay for the small footprint and the non-volatility of this device. IV. CONCLUSION We have shown how to implement a matrix multiplier with two MTJs, passive resistors and some bias sources. The energy dissipation per multiply-and-accumulate (MAC) operation is much smaller than what would be encountered in traditional electronic implementations, although not as small as in optical implementations [10]. Our matrix multiplier is also not as fast as optical implementations, or even electronic implementations, but it is non-volatile and will retain the result of the operation (i.e. the matrix element c_ij) indefinitely after powering off. The non-volatility is a major advantage since it will allow most or all computing to be performed at the edge without the need to access the cloud. This reduces the likelihood of hacking, data loss, intrusion and eavesdropping. Cybersecurity is critical for artificial intelligence, and the ability to perform all or most computing at the edge offers increased protection against cyber threats. The extremely low energy dissipation also offers protection against hardware Trojans, which are disastrous for AI and are very hard to detect. Trojans, however, affect the power consumption and therefore can be detected with a technique called side-channel analysis [20], which searches for anomalies in power consumption. A matrix multiplier that itself consumes very little power will make such power anomalies more conspicuous and thus facilitate Trojan detection. A.1: We consider the elliptical soft layer of a straintronic MTJ as shown in Fig. 5. This figure shows the axis designation, with the z-axis along the major (easy) axis of the soft layer and the y-axis along the minor (hard) axis. We will assume that the hard layer's magnetization is along its own easy axis and is pointing along the +z-direction. In that case, the polar angle θ shown in Fig.
5 is the angle between the magnetizations of the hard and soft layers of the s-MTJ. Ref. [15] showed that the stochastic Landau-Lifshitz-Gilbert equation yields the following equations to describe the temporal evolution of the polar and azimuthal angles of the magnetization vector () in the soft layer in the presence of thermal noise: Here M s is the saturation magnetization of the soft layer,  is the uniaxial stress generated in the soft layer along the major axis by the applied gate voltage V G (we neglect the strain generated along the minor axis since it is much smaller),  s is the saturation magnetization of the soft layer's material,  is the soft layer's volume, L maj is the length of the major axis, L min is the length of the minor axis and d is the thickness of the soft layer. The quantity  0 is the permeability of free space and  B is the Bohr magneton. The field h i (t) [i = x, y, z] is the random magnetic field due to thermal noise and     Table I to find the steady state value of  (i.e.  ss ) as a function of V G . This is shown in Fig. 2(a). Thermal noise introduces some randomness in the magnetization trajectory, and hence we find  ss as a function of V G by averaging over 100 trajectories. This yields the plot in Fig. 2(a) and ultimately the plot in Fig. 2 where  ss is the steadystate angle between the magnetizations of the hard and the soft layer at any given stress (or, equivalently, any given V G ). From ref. [15], we obtain that the magneto-static energy in the plane of the nanomagnet (i. e. when  = 90 0 ) for any magnetization orientation and at any given stress is   where H d is the effective magnetic field in the soft layer due to any residual dipole coupling with the hard layer. As mentioned earlier, this field is antiparallel to the magnetization of the hard layer. The strength of this field can be tailored by engineering the material composition of the hard layer, which is usually made of a synthetic antiferromagnet. It can also be adjusted with an external in-plane magnetic field, if needed. The steady state value of the angle  is that where the magneto-static energy is minimized. Taking the derivative of Equation (A3) with respect to  and setting it equal to zero, we find the angle where the energy is minimum, and it corresponds to the steady state value ss. We get   Using the values in Table I, G V   and that is what we observe in Fig. 2 and hence 1 1 . Thus, we have derived the existence of the linear region in the G s-MTJ vs. V G characteristic analytically and found that it exists when G V   is close to  . Since  = 0.26 V and  = -0.001 V, while R AP = 2 k, we find that  = -0.96 (k-V) -1 and  = -0.261 V. This value of  shows excellent agreement with what we obtained in Fig. 2(b), 8 8 but  is larger in magnitude by more than a factor of 2, which is still acceptable within the limits of the approximations used to derive this analytical result. A. Maximum current pulse amplitude The maximum current that flows through the heavy metal strip was calculated as 50 A. The corresponding current density through a 250 nm 2 cross-section is 2  10 11 A/m 2 . If v d is the domain wall velocity at that current density (which is material dependent), then the maximum domain wall displacement caused by the maximum current pulse of duration 1 ns is v d t with t = 1 ns. Since the soft layer of the p-MTJ is 100 nm long, it can sustain N = 100 nm/v d t current pulses before the domain wall moves completely through it. 
Hence the largest matrix size that can be handled is N × N. B. Digital (non-binary) multiplier If we wish to use this device as a digital, but not just binary, multiplier (meaning its elements can have integral values that are not just 0 and 1), then we need to know the largest digit we can have as a matrix element. That depends on how small we can make the quantization step size when we digitize. The minimum step size is, say, twice the thermal noise voltage appearing at any input terminal, which is 2√(k_B T/C_in), where C_in is the input terminal capacitance [21]. We can reasonably assume that C_in ~ 1 fF when we factor in line capacitances. This makes the minimum step size ~4 mV at room temperature. Hence the largest digit that we can encode is 50 mV/4 mV = 12. We can, of course, increase this number by using an optimized design where the amplitude of the voltage pulses can exceed 50 mV. This would require decreasing . Here, however, we were interested in demonstrating just the basic principle and hence have not focused on design optimization. We can also calculate the current through the HM strip at the minimum step size of 4 mV. This is ~4 μA. The corresponding current density is 4 μA/250 nm^2 = 1.6 × 10^10 A/m^2, which is more than enough to induce domain wall motion in many materials [22]. Hence, the smallest digit is 1, since the current pulse corresponding to this digit can also induce domain wall motion.
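The headline numbers quoted above (strip resistance, peak current, ~100 aJ per MAC, maximum pulse count, and the ~4 mV quantization step) follow from short arithmetic. The sketch below reproduces them; the assumed domain-wall velocity is an illustrative placeholder, since the real value is material dependent.

```python
import math

# Geometry and material numbers quoted in the text.
rho_Pt = 1e-7                  # ohm*m, Pt resistivity
L_strip = 100e-9               # m, heavy-metal strip length
A_strip = 50e-9 * 5e-9         # m^2, 50 nm x 5 nm = 250 nm^2 cross-section
R = rho_Pt * L_strip / A_strip           # -> 40 ohm

I_max = 50e-3 / 1e3            # 50 mV across R_P = 1 kOhm -> 50 uA
t_pulse = 1e-9                 # s
E_mac = I_max ** 2 * R * t_pulse         # I^2*R*t -> 1e-16 J, i.e. ~100 aJ
J_max = I_max / A_strip                  # -> 2e11 A/m^2

v_d = 20.0                     # m/s, assumed wall velocity at J_max (material dependent)
N_max = L_strip / (v_d * t_pulse)        # pulses before the wall crosses the 100 nm layer

# Quantization step set by kT/C thermal noise at the input terminal.
k_B, T, C_in = 1.380649e-23, 300.0, 1e-15
step = 2 * math.sqrt(k_B * T / C_in)     # ~4 mV
largest_digit = int(50e-3 // step)       # ~12

print(f"R = {R:.0f} ohm, E/MAC = {E_mac:.1e} J, J_max = {J_max:.1e} A/m^2")
print(f"N_max = {N_max:.0f} pulses (for the assumed v_d), step = {step * 1e3:.1f} mV, largest digit = {largest_digit}")
```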
5,348
2021-09-26T00:00:00.000
[ "Computer Science", "Engineering", "Physics" ]
Frictional Energy Dissipation due to Phonon Resonance in Two-Layer Graphene System The frictional energy dissipation mechanism of a supported two-layer graphene film under the excitation of the model washboard frequency is investigated by molecular dynamics simulations. The results show that two local maxima in the energy dissipation rate occur at special frequencies as the excitation frequency increases from 0.1 to 0.6 THz. By extracting the vibrational density of states of the graphene, it is found that large numbers of phonons with frequencies equal to the excitation frequency are produced. A two-degree of freedom mass-spring model is proposed to explain the molecular dynamics results. Since the washboard frequency for atomically surfaces in wearless dry friction can be analogous to the excitation frequency in the molecular dynamics simulations, our study indicates that the phonon resonance would occur once the washboard frequency is close to the natural frequency of the frictional system, leading to remarkable local maxima in energy dissipation. Introduction Friction is a common phenomenon that occurs in aspects of our lives. Nevertheless, it is difficult to give a universal understanding of the energy dissipation mechanism due to its complexity. For example, the friction generally shows a gradually decreasing trend with the increase of sliding velocity on the macro-scale [1], while on the atomic scale the friction usually presents a trend of gradual increase with the increase in the sliding velocity [2,3]. The variation of friction is closely related to the energy dissipation mechanism in the friction process [4]. For a long time, the researchers generally use the potential barrier height [5,6] and shear strength [7,8] of friction interface to compare and quantify the magnitude of friction force when two interfaces slide relative to each other. The maximum friction generally implies a unique energy dissipation channel [9]. However, the fundamental physics involved in the friction process, such as the phonon [10][11][12][13] and/or electron [14,15] related dissipation, remains unclear. The friction process can excite large numbers of nonequilibrium phonons (elastic waves) [16], which dissipates the mechanical kinetical energy through the phonon scattering [11,17,18] with other phonons, boundaries, and/or impurities. As for the two-wall carbon nanotube oscillators, Tangney et al. [19] showed that the friction between the inter-tube and the out-tube would have a maximum value when the group velocity of excited phonons was equal to the sliding velocity. Panizon et al. [20] deduced a formula to calculate the friction force based on linear response theory. Their results show that the resonance would occurs and can cause a local maximum in the friction force when the group velocity of the excited phonons is equal to the phase velocity. Our recent theoretical and experimental studies [21] also suggest that when the atomic force microscope (AFM) tip slides on the atomically flat surface, the phonon mode of the substrate excited by the slider also resonates with the entire friction system, resulting in multiple local maxima in the friction force with the increasing sliding velocity. The similar resonance principle is also adopted by quartz crystal microbalance (QCM) to measure the energy dissipation rate of adsorbed films [22] and two-dimensional materials [23]. 
Although it is common sense that a physical resonance can lead to a local maximum in energy dissipation, it would be interesting to relate resonance under the washboard frequency to the energy dissipation mechanism of wearless dry friction, because the reaction rate theory [3], not the resonance, is often used to explain the energy dissipation mechanism traditionally. Therefore, we speculated that the excited phonons will be strengthened and the energy dissipation will increase if the excited phonons in the wearless dry friction process are consistent with the natural frequency of the whole friction system. Conversely, when the excited phonon frequency is far away from the natural frequency of the friction system, the energy dissipation may decrease or be suppressed. The phonon is the physical particle representing mechanical vibration and is responsible for the transmission of everyday sound and heat [24]. The classical frictional force and power dissipation are essentially due to the excitation of phonons in the dry friction [20]. In general friction processes, such as on rough surfaces, friction excites many phonon modes with different frequencies because rough surfaces are random and aperiodic. But on atomically flat surfaces, only the phonons with washboard frequency and/ or its harmonics may be excited in the wearless dry friction [21,25]. In order to verify the importance of the phonon resonance to dissipate the frictional energy, this work investigated the frictional energy dissipation mechanism caused by inter-layer shear motion of graphene films using molecular dynamics (MD) method. An external periodic excitation is applied to make one layer of graphene vibrate tangentially on the other layer of graphene. The similar method is also used by Sokoloff to explore the possible nearly frictionless sliding for mesoscopic solids [26]. The frequency of the periodic excitation is analogy for the washboard frequency v/a in the wearless dry friction on atomically flat surfaces with sliding velocity v and substrate period a. Thus, only a single phonon mode with excitation frequency dissipates energy in each MD simulation. The resultant energy dissipation due to periodic excitation is also extremely important for the performance of graphene-based MEMS devices [27,28]. Our study shows that two local maxima in the energy dissipation rate of the system does appear at special excitation frequencies. We further explain the simulation results in details by extracting the vibrational density of states and the natural frequency of the system with various system parameters. Figure 1a shows the atomic model for all our MD simulations, including a period-excited upper graphene and a supported lower graphene. The in-plane size is about 90×104 Å 2 and there are 6912 atoms in the simulation system. To illustrate the details of the model more clearly, a schematic diagram of the atomic model is also shown in Fig. 1b. The upper graphene is connected to a support by three independent springs along x, y, and z direction with stiffness k x , k y , and k z , respectively. The k x and k y are the shearing stiffness of the spring and k z is the normal stiffness. By applying periodic excitation to the support end along the x direction, like the washboard excitation in the wearless dry friction, the upper graphene would vibrate relative to the lower graphene and result in energy dissipation. 
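For orientation, the excitation frequencies used below (0.1 to 0.6 THz) can be translated into equivalent sliding velocities through the washboard relation f = v/a. A minimal sketch follows, taking the ~2.46 Å graphene lattice period as the washboard period a; this specific choice is an assumption of the sketch rather than a value stated in the text.

```python
a = 2.46e-10   # m, graphene lattice period assumed as the washboard period

for f_THz in (0.1, 0.2, 0.4, 0.6):
    v = f_THz * 1e12 * a          # washboard relation: f = v / a  ->  v = f * a
    print(f"f = {f_THz:.1f} THz  <->  v ~ {v:.0f} m/s")
```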
To simplify the simulations, we also used a set of springs along x, y, and z directions with stiffness of k subx , k suby , and k subz to attach to the lower graphene, while the other ends of these springs are fixed. Both k subx and k suby are set as the effective interfacial shearing stiffness k 0 between two graphene layers at the state of equilibrium. The k 0 depends on the used interlayer interactions and can be calculated by differencing the interfacial shearing force with respect to the lateral displacement [21]. Model and Method In order to distinguish the excited phonons in the friction process from the intrinsic phonons within the graphene at the finite temperature, the simulation temperature is set at 0.001 K. The optimized Tersoff potential [29] is used to describe the carbon-carbon interactions within each graphene layers, while the Lennard-Jones (LJ) potential [30] for the interlayer carbon-carbon interactions between the two graphene layers. Since no normal load is applied in the upper graphene in the MD simulations, the interlayer distance between the two layers of graphene is 3.35 Å. The normal stiffness k z and k subz are set the same as k z = k subz = 18.0 eV/ Å 2 . The shearing stiffness between the two-layer graphene is calculated to be k 0 = 7.47 eV/Å 2 from the used LJ potential. We first equilibrium the simulation system at 0.001 K for 500 ps, and then a periodic excitation with the frequency f 0 and the amplitude Am are applied to the support to imitate the dry friction on an atomically flat surface [21,26]. The Langevin thermostats on both the upper and lower graphene are used to control the temperature of the simulation system. There are no fixed atoms in the simulation system. The periodic boundary conditions are applied along the in-plane directions. The equations of motions are solved with the velocity-Verlet algorithm with a timestep of 0.5 fs. Results and Discussion Firstly, we set k x = k y to k 0 , and the amplitude and excitation frequency to Am = 0.1 Å and f 0 = 0.1 THz, respectively. Figure 2 shows the relation between the accumulated energy in the thermostat and the effective simulation time. The results show that the accumulative energy is proportional to the simulation time, indicating the energy dissipation is stable during the periodic excitation. When we increased the excitation frequency, such as to 0.2, 0.4, and 0.6 THz, the cumulative energy in the thermostat still shows a good linear relation with the simulation time, as also shown in Fig. 2. Thus, the energy dissipation rate in the dissipative system under periodic excitation is defined as the proportional coefficient between the accumulated energy and the effective simulation time. Next, the excitation frequency is gradually increased from 0.1 to 0.6 THz in the MD simulation, and the corresponding energy dissipation rate is calculated as mentioned above. Figure 3c shows that the energy dissipation rate presents two local maxima that corresponds to the excitation frequencies Fig. 1 The atomic model (a) and the corresponding schematic diagram (b) to investigate the energy dissipation of two-layer graphene system under periodic excitations. The stiffness k subx and k suby of the supported springs are set as the effective shearing stiffness k 0 of two-layer graphene, to simulate the lower graphene being supported on the bulk graphite. 
The stiffness k x and k y of the dragged springs can be adjusted freely based on the effective shearing stiffness k 0 of two-layer graphene in the molecular dynamics simulations. The normal stiffness k z and k subz are set the same as k z = k subz = 18.0 eV/Å 2 In order to confirm our observations, we reduced the shearing stiffness of the spring between the support and the upper graphene layer by two times and four times, respectively. In these cases, the interfacial shearing stiffness between the two graphene layers is greater than that of the spring between support and the upper graphene layer. Figure 3a, b show that two maxima in the energy dissipation rate still appears with the increase of excitation frequency from 0.1 to 0.6 THz. We also increase the shearing stiffness of the spring by a factor of 2, corresponding to the situation on which the interfacial shearing stiffness is less than that of the spring. Figure 3d shows that the energy dissipation rate of the system only has a maximum value with the increasing excitation frequency. However, the energy dissipation rate with the excitation frequency in Fig. 3d can obviously be fitted using two Lorentz functions with center frequency of 0.25 and 0.40 THz, indicating these two special excitation frequencies can result in significant energy dissipation in the friction system. In order to clearly present the details in Fig. 3, the relation between the maximum energy dissipation rate and the shearing stiffness k x = αk 0 at two characteristic peaks and the relation between the frequency at the maximum energy dissipation and the shearing stiffness are presented in Fig. 4a, b, respectively. Figure 4a shows that the energy dissipation rate at two characteristic peaks increases significantly as the shearing stiffness k x increases. Interestingly, the energy dissipation rate in the first peak at 0.16 THz dominates with the k x of 0.25k 0 , whereas the energy dissipation rate in the second peak at 0.4 THz dominates with the k x of 2k 0 . To explain the above observation, we first extracted the velocity amplitude of the center of mass of the upper layer graphene along the excitation direction (or x direction). The results in Fig. 5a show that the atomic amplitudes of the two characteristic peaks increase with increasing shearing stiffness. We also extracted the velocity amplitude of the lower graphene in Fig. 5b and found that the velocity amplitude also increases. In atomic-scale friction models, such as the Prandtl-Tomlinson model [2], the energy dissipation rate depends on the atomic velocity amplitude. Generally, the higher the velocity amplitude, the larger the energy dissipation rate will be. Both Fig. 5a, b show that the velocity amplitude increases markedly with increasing shearing stiffness k x for both graphene layers. This is because, when the shearing stiffness k x is small, the shearing force exerted on graphene under the same excitation amplitude will be small, and the corresponding velocity amplitude and energy dissipation rate will also be small. On the contrary, when the shearing stiffness is increased, the shearing force exerted on graphene will be large, and the corresponding velocity amplitude and energy dissipation rate will also be large. Figure 5a also show that the velocity amplitude of the first characteristic peak is larger than that of the second peak when the shearing stiffness is 0.25k 0 . However, when k x = 2.0k 0 , the velocity amplitude of the first characteristic peak is smaller than that of the second peak. 
Figure 5b shows that the velocity amplitude of the first peak is slightly larger than that of the second peak. Which peak dominates, the first peak or the second peak, under different shearing stiffness depends on the relative velocity amplitude at the two characteristic peaks. By considering the data in Fig. 5a, b together, it is evident that the velocity amplitude at the first peak dominates at low shearing stiffness k x , while at high shearing stiffness k x the velocity amplitude at the second peak dominates. Figure 4b shows that the frequencies of the two characteristic peaks increase gradually with increasing shearing stiffness k x . For example, when the shearing stiffness increases from 0.25k 0 to 2.0k 0 , the excitation frequency corresponding to the first peak of energy dissipation rate increases from 0.16 to 0.25 THz, and the excitation frequency corresponding to the second peak of the energy dissipation rate also increases from 0.34 to 0.40 THz. This is because the enhancement of the shearing stiffness will increase the effective stiffness of the system assuming that the two layers of graphene in the MD simulations are respectively regarded as two concentrated masses in series. Increasing the effective stiffness of the system certainly increases the resonant frequency. More quantitative explanation is presented with a mass-spring model as followed. In order to understand the relationship between the energy dissipation rate and the excitation frequency, we first extracted the time-dependent atomic velocities under the excitation for a duration time of 500 ps. Then, these atomic velocities in the lower graphene layer are transformed into the vibrational density of states (vDOS) from the Fourier transform of the velocity autocorrelation function [31]. Figure 6 shows that when the excitation frequency is 0.1 THz, the phonons with frequencies around 0.1 THz are obviously present in the vDOS of the lower layer graphene, while other phonon frequencies are strongly suppressed due to the low simulation temperature. We also separately calculated the vDOS perpendicular to the excitation direction with a duration time of 500 ps. The inset of Fig. 6 shows that the intrinsic phonons within graphene do appear in the vDOS but the intensity is extremely weak. This indicates that the MD simulations at low temperatures are indeed able to suppress the intrinsic phonons within graphene. When changing the excitation frequency, the excited phonons with the same frequency appear in the vDOS of the lower graphene layer. This indicates that large numbers of phonons with the same phonon frequency [THz] vDOS along the z-direction Fig. 6 The vibrational density of states of the lower graphene layer with the excitation frequency of f 0 = 0.1 THz. The high-frequency phonons are suppressed and not presented here due to the low simulation temperature. The inset shows the extremely weak phonon intensity in the z-direction vDOS frequency as the excitation frequency are generated in the lower graphene under external periodic excitation. It is expected that the local maximum of the energy dissipation rate may be resulted from the phonon resonance when the excited phonons in the substrate have the same frequency with the natural frequency of the friction system. Note that the intensity of the phonon mode in the calculated vDOS is almost constant over time in our MD simulations, even on the resonance condition that the excitation frequency equals the natural frequency of the system. 
This is because the dissipated energy during each excitation period is removed artificially from the simulation system by the thermostat in order to keep the temperature of the simulation system unchanged. In order to explain the MD results quantitatively, a two-degree of freedom mass-spring model is established as shown in the inset of Fig. 7. By comparing with the MD model, we set both mass m 1 and m 2 in the model as m 1 = m 2 = m = N × m c , where N is the atom number within the upper graphene layer and m c is the mass of each carbon atom. There is no normal load applied in the MD simulation and the energy dissipation mainly depends on the shearing stiffness between the support and the upper graphene layer. Thus, the spring stiffness k in that connects the two mass is set as the effective interfacial shearing stiffness k 0 between the two graphene layers in the state of equilibrium, i.e., k in = k 0 . The spring stiffness k sub that connects the lower mass and the fixed substrate is also set as k 0 , i.e., k sub = k 0 , and the spring stiffness k that connects the upper mass and the vibrating support can be adjusted with the MD model. When a small periodic excitation is applied on the vibration end, the resultant equations of motion for the two-degree of freedom model can be written as where A is a small amplitude, ω = 2πf 0 is the excitation angular frequency, y 1 and y 2 are the corresponding displacements of the lower and the upper degree of freedom, γ 1 and γ 2 are their corresponding damping. By solving the above equation, it is found that the below condition should be satisfied when the amplitude of both degrees of freedom reach the maximum, The detailed process to obtain the above equation can be found in the supporting information. As stated above, the interfacial shearing stiffness between the two graphene layers is equal to the shearing stiffness of the spring that connects the lower graphene layer and the fixed end in the MD simulations. This means that both k in and k sub should be constants and equal to k 0 in the two-degree of freedom mass-spring model, i.e., k in = k sub = k 0 to make the model be consistent with the MD simulations. Thus, we set k sub ∕k = k in ∕k=k 0 ∕k = 1∕ , then the predicted excitation angular frequency that responds to the maximum energy dissipation rate in the friction system should be the two solutions to the Eq. (3), i.e., ωand ω +, , which can be easily obtained. The two special frequencies ωand ω + are the natural angular frequency of the two-degree of freedom, which are shown in Fig. 7 as a function of the shearing stiffness of the spring k = αk 0 . It is shown that the two natural frequencies gradually increase when increasing the shearing stiffness of the dragged spring. (1) Fig. 7 The natural frequencies of the two-degree of freedom massspring system as a function of the spring stiffness k = αk 0 . The circles on the lines represent four sets of simulation conditions in the MD. The inset is the two-degree of freedom mass-spring model to explain the MD simulations The special frequencies predicted by the two-degree of freedom mass-spring model are further compared with the results of MD simulation. Table 1 shows that, when the controlled shearing stiffness in MD simulations gradually changes from 0.25k 0 to 2k 0 , the excitation frequency corresponding to the maximum energy dissipation rate of the system in Fig. 3 and Fig. 
4b is basically consistent with the natural frequency of the system predicted by the two-degree of freedom mass-spring model, which explains our MD simulation results. Conclusion In summary, we investigated the frictional energy dissipation in two-layer graphene system under periodic excitation using molecular dynamics simulations. The results show that the energy dissipation rate presents two local maxima when the excitation frequency increases gradually from 0.1 to 0.6 THz. By extracting the vibrational density of states, it is found that the periodic excitation can induce large numbers of phonons in the substrate, which has the same frequency with the excitation frequency. When the frequency of these phonons is equal to the natural frequency of the frictional system, the maximum in the energy dissipation rate appears. The proposed two-degree of freedom massspring model is also confirmed the MD results. Since the excitation frequency in our MD simulation is analogous to the washboard frequency for an atomically flat surface in wearless dry friction, the results indicate that the energy dissipation rate in the frictional system can be modulated by controlling the washboard frequency and the resonant frequency of the system. This may explain the local maximum of the friction force with the sliding velocity [21]. As for the commensurability-dependent friction behavior on an atomically flat surface, the phonon resonance can occur at the commensurate interface because the excited phonon modes on the two sides of the interface are the same due to the same period along the sliding direction. But at the incommensurate interface, the atomic periods on the two sides of the interface are different along the sliding direction, which cannot result in phonon resonance. Thus, the phonon resonance mechanism can also explain the larger friction at commensurate interfaces than that at incommensurate interfaces [32]. Our study establishes the relationship between the excitation frequency and energy dissipation rate in a model two-layer graphene system. The uncovered phonon resonance mechanism can be used to regulate the friction force and energy dissipation in many systems, including the nano-sensors and actuators.
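The two-degree-of-freedom estimate invoked above can be reproduced with a few lines of linear algebra. In the sketch below, the per-layer mass (3456 carbon atoms) and the eV/Å²-to-N/m conversion are rough values inferred from the model description, so the output should be read as order-of-magnitude agreement with the reported 0.16 to 0.40 THz peaks rather than a fit.

```python
import numpy as np

# Two-degree-of-freedom estimate of the resonance frequencies (rough numbers).
eVA2_to_Nm = 1.602e-19 / (1e-10) ** 2     # 1 eV/Angstrom^2 = 16.02 N/m
k0 = 7.47 * eVA2_to_Nm                    # interlayer shear stiffness from the text
m_c = 12.011 * 1.6605e-27                 # mass of one carbon atom (kg)
m = (6912 // 2) * m_c                     # assumed mass of one graphene layer (3456 atoms)

def natural_frequencies_THz(alpha: float):
    """Eigenfrequencies of support--k--(upper m)--k_in--(lower m)--k_sub--wall with k = alpha*k0."""
    k = alpha * k0
    K = np.array([[k0 + k0, -k0],         # lower layer: k_sub + k_in, both equal to k0
                  [-k0, k0 + k]])         # upper layer: k_in = k0 plus the dragged spring k
    omega = np.sqrt(np.linalg.eigvalsh(K / m))
    return omega / (2 * np.pi) / 1e12

for alpha in (0.25, 0.5, 1.0, 2.0):
    f_lo, f_hi = natural_frequencies_THz(alpha)
    print(f"k = {alpha:>4}*k0 : f- = {f_lo:.2f} THz, f+ = {f_hi:.2f} THz")
```

With these rough inputs the sketch gives approximately 0.16/0.34 THz at k = 0.25 k0 and 0.25/0.40 THz at k = 2 k0, consistent with the peak positions quoted above.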
5,238.8
2022-09-26T00:00:00.000
[ "Physics", "Materials Science" ]
EXISTENCE AND ASYMPTOTIC BEHAVIOR FOR L 2 -NORM PRESERVING NONLINEAR HEAT EQUATIONS . We consider a nonlinear parabolic equation with a nonlocal term, which preserves the L 2 -norm of the solution. We study the local and global well posedness on a bounded domain, as well as the whole Euclidean space, in H 1 . Then we study the asymptotic behavior of solutions. In general, we obtain weak convergence in H 1 to a stationary state. For a ball, we prove strong asymptotic convergence to the ground state when the initial condition is positive. Introduction In this work, we study the existence, uniqueness, and asymptotic behavior of the solution to the following nonlinear, nonlocal parabolic equation ( 1) , where u : R + × Ω → R, g ∈ R, σ > 0, and the functional µ[•] is defined by . Here we consider both the case when Ω ⊂ R d is a regular, bounded domain with boundary ∂Ω of class C 2 , and when Ω = R d is the Euclidean space.In the latter case, the Dirichlet boundary condition may be interpreted as u → 0, as |x| → ∞. Let us note that the functional µ[u] ensures that the L 2 -norm is preserved along the flow, namely u(t) for any t > 0. Hence, µ[u] may be interpreted as a Lagrange multiplier that takes into account the fact that the solution is constrained to stay on a given sphere in L 2 , whose radius is prescribed by the initial datum.In this respect, equation (1) may be viewed as an L 2 gradient flow constrained on a manifold.More precisely, having set λ = u 0 L 2 , let us define M = {u ∈ H 1 0 (Ω) : u L 2 = λ}.Then (1) can be written as Date: April 19, 2024. 1 where ∇ M is the L 2 -gradient projected onto the tangent space T u M and the energy functional is defined by Nonlocal heat flows arise in geometrical problems where some L p -norms, related to specific geometrical quantities, are required to be preserved by the dynamics, see for instance [2], [26] and the references therein.In particular, the Yamabe flow was set up to show the existence of a solution to the celebrated Yamabe problem.More precisely, the conformal transformation bringing a metric g 0 , with scalar curvature R 0 , into a metric with constant scalar curvature can be determined by the asymptotic limit for large times of the evolutionary problem where ∆ 0 is the Laplace-Beltrami operator and p = 2 * − 1.Here the Lagrange multiplier s(t) is determined in such a way that the volume, proportional to |u| 2 * , is preserved.We refer the interested reader to [30,7] for more details. Our study is also motivated by recent numerical works that exploit normalized gradient flow methods to compute the ground states for some models in Bose-Einstein condensation, see [12,13,3] for instance.In this perspective, our results may be seen as a rigorous justification for the methods proposed in those papers, see Theorem 1.7 below.Equation ( 1) with g = 0 was already rigorously studied in [10].Apart from the nonlocal term, equation (1) becomes linear when g = 0.This fact allows for a detailed description of the asymptotic behavior of the solution, which is characterized by the initial condition.In [10], this result is also used to study a singularly perturbed heat flow, with applications to an optimal partition problem [9], see also [14] for a numerical implementation of the method. In [21], the authors consider equation (1) on a closed manifold (i.e., a compact manifold with no boundary) and g < 0. 
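The displays for equation (1) and for the functional μ[·] did not survive extraction. A reading consistent with the rest of the text (the bilinear-control form (7) with q ≡ 1 and p(t) = μ[u(t)], together with the stated conservation of the L²-norm) is the following; it is offered as a reconstruction, not a quotation of the original.

```latex
\begin{aligned}
&\partial_t u = \Delta u + g\,|u|^{2\sigma}u + \mu[u]\,u
  \quad \text{in } (0,\infty)\times\Omega,
  \qquad u|_{\partial\Omega}=0,\quad u(0,\cdot)=u_0,\\[4pt]
&\mu[u] \;=\;
  \frac{\displaystyle\int_\Omega |\nabla u|^2\,dx \;-\; g\int_\Omega |u|^{2\sigma+2}\,dx}
       {\displaystyle\int_\Omega |u|^{2}\,dx}.
\end{aligned}
```

With this choice, testing the equation against u gives d/dt ‖u(t)‖²_{L²} = 0, which is exactly the stated conservation ‖u(t)‖_{L²} = ‖u₀‖_{L²} for all t > 0.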
They show the global existence of solutions and study their asymptotic behavior.Let us remark that this choice of g makes the energy functional (3) always non-negative definite. In this work, we extend the above results by considering g ∈ R and the case when either Ω is a bounded domain or Ω = R d .The positive sign of g implies that the energy does not control the H 1 -norm of the solution in general.In particular, we show the existence of global-in-time solutions in the case g > 0 and 0 < σ < 2/d, for arbitrarily large initial data.Let us remark that, under the same assumptions g > 0 and 0 < σ < 2/d, the classical semilinear parabolic equation, i.e. equation (1) with µ = 0, experiences a possible finite-time blow-up, see [24,Theorem 17.6] for instance.The L 2 constraint provided by the dynamics (1) then prevents this kind of singularity formation.Moreover, in the case of σ ∈ 2 d , 2 (d−2) + and g ≥ 0, it is possible to exploit some arguments borrowed from the standard potential well method (see [17,23] for instance) to show global existence of solutions for some initial data.More precisely, we determine a subset of H 1 that is invariant for the dynamics and initial data belonging to that region emanating global solutions.On the other hand, we also identify a set of initial data whose evolution experiences a grow-up behavior, see Theorem 1.5 below. Our second main goal is to investigate the asymptotic behavior for large times.As in [21], by standard compactness methods it is possible to show that along sequences of times going to infinity, the solution is converging to a stationary solution.On the other hand, when Ω is a bounded domain we can further improve this result by showing that solutions emanating from non-negative initial data converge to the ground state solution, see Theorem 1.7 for a more precise statement.We now present our main results.First of all, we prove the local well-posedness of the Cauchy problem (1).Let us remark that the usual fixed point argument, see for instance [24,Sect. 16], cannot be applied to (1) in a straightforward way.Indeed, there is a quite delicate interplay between the power-type nonlinearity and the nonlocal term.A general proof of the local well-posedness result appears to be missing so far.For this reason, in proving Theorem 1.1 we are going to adopt different strategies.We refer to the beginning of Section 3 for a more detailed discussion.We introduce the following conditions on σ or Ω = R d and σ verifying the conditions in (4).Then for any u 0 ∈ H 1 0 (Ω) there exists a maximal time of existence As previously mentioned, in the case of g > 0, the energy functional in (3) is indefinite, hence global well-posedness does not follow straightforwardly from available a priori bounds.In the following theorem, we present some conditions under which the solutions constructed above can be extended globally. Theorem 1.2.Let the assumptions of Theorem 1.1 hold.If we further assume g ≤ 0 or g ≥ 0 and σ < σ 2 , then for any u 0 ∈ H 1 0 (Ω), there exists a unique global solution u ∈ C([0, ∞), H 1 0 (Ω)) to (1).The next theorem will provide additional conditions under which the solution is global.To state it, let us define and Ω is a bounded domain.Since Ω is a bounded domain, W is a nonempty set, see Section 2.4 and 3.4 for more details.In the spirit of the potential well method, see [23] for instance, we have the following global existence result. 
The potential well method can also be used in the whole Euclidean space R d to obtain additional sufficient conditions for global existence, as in [17].To state the next theorem, we introduce the following notations.Let 2 d < σ < 2 (d−2) + and let Q be the unique positive solution to Let K be defined as Once again, we note that the above set is nonempty (see Section 3.4).Then the following holds. While it is interesting to further extend the range of models for which global wellposedness holds, another main open problem is to study the possible occurrence of a finite-time blow-up in some specific regimes.Let us recall that the usual arguments for nonlinear parabolic equations (see for instance [24,Theorem 17.1]) do not apply here, due to the conservation of the L 2 -norm.On the other hand, our next result shows a grow-up scenario for some data, in the case of g ≥ 0 and σ ≥ 2 d .Notice that such conditions complement the hypothesis of Theorem 1.2 where global existence is proven for any data. Let T max (u 0 ) > 0 be the maximum time of existence of the corresponding strong solution to (1).Then either T max (u 0 ) < ∞ (and lim t→Tmax ∇u(t) After having shown the existence of global solutions, we now turn our attention to their asymptotic behavior.When g = 0 in (1) and Ω is bounded, the eigenspaces associated with the Dirichlet Laplacian are invariant along the dynamics.As a byproduct, the solution asymptotically approaches the eigenspace of least energy, which contains a non-trivial component of the initial datum, see [10] where this is proved by explicit calculations.On the other hand, allowing for g = 0 leads to more complex dynamics.In [21], this problem is addressed in the case of g < 0 for bounded domains.The authors prove convergence to some stationary state (no uniqueness is given) for some sequence of times going to infinity.The next proposition extends this result to arbitrary g = 0 for both Ω bounded and Ω = R d . Then there exists a sequence Let us emphasize that, in the previous proposition, we only obtain the upper bound u ∞ L 2 ≤ u 0 L 2 instead of the equality, because of a possible loss of mass at infinity (when Ω = R d ).This result may be improved when considering a bounded domain.By exploiting further compactness properties, available in this case, it is possible to show the strong convergence of the sequence.Moreover, in the case when g ≤ 0, 0 < σ < 2 (d−2) + or g > 0, 0 < σ < 2/d, it is well known (see Section 2.3) that there exist regular domains Ω ⊂ R d for which there exists a unique, positive, radially symmetric stationary solution, that is also a minimizer of the energy functional (3) under a constraint on the total mass: (5) Such a solution is usually called the ground state solution.Notice that a minimizer of the problem (5) satisfies the elliptic equation where µ[Q] is a Lagrange multiplier.The maximum principle then ensures that global solutions emanated by non-negative initial data converge asymptotically to the ground state. 
be the solution to (1).Let Ω ⊂ R d be a bounded domain such that there exists a unique positive solution Q ∈ H 1 0 (Ω) to equation (6) which is also the unique minimizer of the problem (5).Then The problem we address in this work can also be looked at as an example of a bilinear control system.Indeed, consider the nonlinear heat equation (7) ∂ t u = ∆u + g|u| 2σ u + p(t)q(x)u where q is a given smooth function defined on Ω and p ∈ L 2 (0, T ) is a scalar function of our choice.The problem consists of finding, for any initial condition u 0 , a control p which steers the solution to a given target at time T .In [4], this question was addressed for d = 1, g = 0, with the ground state solution as a target, proving an exact controllability result in any time T > 0. This result was later extended, in [1], to arbitrary eigensolutions of evolution equations of parabolic type in one space dimension, under various boundary conditions.Notice that, in both [4] and [1], q is not allowed to be the identity. On the other hand, the approach proposed in [10] applies to the above problem in arbitrary dimension, with q(x) ≡ 1 and g = 0, by choosing p(t) = µ[u(t)].In [10], however, the target is attained asymptotically (as T → ∞) also by an appropriate selection of the initial condition.So, in this work, we extend the approach by [10] to nonlinear flows (g = 0) restricting the target to the ground state (for u 0 ≥ 0).Let us mention that in [5], the authors studied the well-posedness of (1) via the fixed point argument, similarly to Theorem 3.11 below.They also showed that solutions starting close to a local minimum of the energy at fixed mass converge to it asymptotically.Finally, let us also mention that a similar analysis can be also performed on other models, such as the Navier-Stokes equations, see for instance [11,8] where the twodimensional system is studied under various constraints.This work is organized as follows: in Section 2, we will present some useful preliminary results regarding the heat semigroup generated by the Laplacian on L 2 (Ω), the stationary states of system (1), and the potential well method.In Section 3, we will prove our main results (Theorem1.1 up to 1.4) regarding the local and global existence of solutions to the system (1).In Section 4, we will study the asymptotic behavior and give the proof of Theorem 1.7 and Theorem 1.5. Preliminaries 2.1.On the notion of solutions.In this subsection, we introduce the different notions of solutions that are going to be used in this work.Here, Ω can be either a bounded domain with C 2 boundary or the Euclidean space Ω = R d . Definition 2.1.Let u 0 ∈ H 1 0 (Ω).We give the following three definitions.a) Strong solution: We define u to be a strong solution if there exists be the corresponding mild solution to (1).Then the energy and Heat semigroup. In this work, we will denote with e t∆ both the semigroup generated by the Laplacian on L 2 (Ω) and L 2 (R d ).The function u = e t∆ u 0 solves the linear heat equation ( 9) with Dirichlet boundary conditions u |∂Ω = 0, if Ω is bounded.One can prove the following L p − L q smoothing estimates of the heat semigroup (see [24,Proposition 48.4]): As a consequence of the L p − L q smoothing property in Proposition 2.3, we also obtain the following space-time estimates. . 
Then for any f ∈ L 2 , we have Moreover, if (ρ, γ) is another pair satisfying condition (10), then t 0 e (t−s)∆ f (s) ds In the proposition above and in what follows we use the following notation: for any p ≥ 1, we denote by p ′ ∈ R the constant such that The following estimate holds, see [19,Theorem 16.3]. Proposition 2.5.Let Ω be a regular domain of class C 2 boundary.Let {e t∆ } t>0 be the heat semigroup in R d or the Dirichlet heat semigroup in Ω.Then there exists a constant C > 0 such that for any Finally we also notice that for any u 0 ∈ H 1 , a solution u(t) to the linear system (9) dissipates the L 2 -norm in the following way (14) e t∆ u 0 2 Stationary states. In this subsection, we recall some results concerning the stationary solutions of equation ( 1), namely satisfying Note that, for any given α > 0, if the problem ( 16) (15) where the constant µ[Q] is a Lagrange multiplier.Moreover, we recall the following Pohozaev's identities. Proof.We take the scalar product of ( 15) with Q and obtain (17).Similarly, we obtain ( 18) by taking the scalar product of ( 15) with x • ∇Q. As a consequence of ( 17), we have that . In what follows, we denote by in the origin.We start with the following standard proposition (see [20,25,27]). We proceed by recalling the result of Gidas, Ni, Nirenberg [16, Theorem 1] stating that any real non-negative solution to (15) with Ω = B(0, R) or Ω = R d is radially symmetric. Next, we recall the result stating the uniqueness of positive, radial solutions due to Kwong [18] and Mcleod-Serrin [22, Theorem 1].Proposition 2.9.For any d > 1, there exists exactly one positive solution where Let us notice that Propositions 2.7, 2.8, 2.9 imply the following. 2.4. Potential well in a bounded domain.In this subsection, we present some preliminary definitions and results regarding the potential well method in a bounded set.We use this method to provide an alternative sufficient condition for global existence for the Cauchy problem (1), stated in Theorem 1.3.This argument was first introduced in [28] and [23] for parabolic problems.We refer the interested reader to [24,Section 19] for more details. Let Ω be a bounded domain, 0 < σ < 2 d−2 and g > 0. We fix g = 1 without losing generality to facilitate the exposition.We define the functional I by Notice that I can also be written as (19) where E is the energy functional defined in (3).The potential well associated with problem ( 1) is the set , where Λ = Λ(σ, Ω) is the best constant in the Sobolev embedding We also define the exterior of the potential well as Let Ω be bounded and Then the infimum in ( 23) is attained for some v ∈ H 1 0 (Ω), due to the compactness of the embedding H 1 0 (Ω) ֒→ L 2σ+2 (Ω).By multiplying v for a suitable constant ν, we may suppose that Thus, there exists an element v so that E[v] = p where p is defined as in (22) and To show that such a p is the infimum of the problem (21) we notice that for any u ∈ . Next, we state a sufficient smallness condition for a function in On the other hand, using ( 22), (23) and This implies that I[f ] > 0 and consequently that f ∈ W. 2.5.Potential well method in R d .In this subsection, we present some preliminary results regarding the potential well method in the whole Euclidean space R d .This argument was introduced in [17] to prove a sufficient condition for the global existence of the Nonlinear Schrödinger equation.We suppose that 2 d < σ < 2 (d−2) + and g > 0. 
We fix g = 1 without losing generality.We recall that the best constant in the Gagliardo-Nirenberg inequality (25) is given by where Q is the unique positive solution of the elliptic equation In [29], this profile is found as the maximizer of the following problem We define the potential well as the set dσ − 2 and Q is the ground state defined above.Finally, we observe that ( 26) Indeed, from the Pohozaev's identities (17), (18) it follows that This implies (26). Existence of Solutions In this section, we are going to prove our local and global well-posedness results.As already mentioned in the Introduction, there is a delicate interplay between (the mathematical difficulties determined by) the power-type nonlinearity and the nonlocal term involving µ[u].More precisely, it does not seem possible to prove Theorem 1.1 by a general contraction argument.Indeed, the presence of the nonlocal term would require setting up the fixed point in Sobolev spaces, say L ∞ t H 1 0 for instance.On the other hand, the power-type nonlinearity is not locally Lipschitz in Sobolev spaces for low values of σ, say 0 < σ < 1 2 , see Remark 3.12.For this reason, we are going to provide three different proofs of the local well-posedness.The first one, stated in Theorem 3.1 below, involves a two step procedure.First, we find a solution to our problem for any data in H 2 (Ω) ∩ H 1 0 (Ω).Then, by a density argument, we show the our problem is locally well posed in H 1 0 (Ω).This proof is well suited when dealing with a bounded domain due to the compact embedding The second proof, stated in Theorem 3.9, is based on a fixed point argument and crucially relies on smoothing properties of the heat semi-group, see Proposition 2.5.This proof requires the nonlinearity to satisfy the condition σ < The third proof relies on the classical fixed point argument.To overcome the difficult interplay between the nonlocal term and the power-type nonlinearity, a lower bound on σ is required.In particular, for σ ≥ 1 2 , the power-type nonlinearity is locally Lipschitz in Sobolev spaces and we perform a contraction argument using the natural and stronger distance.We remark that unifying the results of the second and third proof, we cover all the energy subcritical range 0 < σ < The proof aims at avoiding the use of the fixed point argument on the integral formulation of (1).Due to the interplay between the nonlinearities, this standard argument does not work for σ < 1 2 , see Remark 3.12 below.In particular, the steps of this proof are as follows. To apply the Schauder fixed point theorem, we will need the following lemma which considers (1) where µ[u] is replaced by a bounded function of time. Our goal is to prove that G : Y → Y is a contraction with the distance defined in (29).By using the space-time estimates (11), (12) and then Holder's inequality and Sobolev's embedding, it follows that there exist two constants K 0 > 0 and Moreover, for any u, v ∈ Y, we deduce that Thus, given any u 0 ∈ H 1 0 (Ω), we choose and we choose T small enough so that Note that T depends only on u 0 H 1 and µ L ∞ [0,T ) .It follows that G(u) ∈ Y and In particular, G is a contraction in Y.By Banach's fixed-point Theorem, G has a unique fixed point u ∈ Y. u solves the integral equation u = e t∆ u 0 + t 0 e (t−s)∆ (g|u(s)| 2σ u(s) + λ(s)u(s))ds, and u ∈ C([0, T ], H 1 0 (Ω)).The persistence of regularity is a standard result. The following corollary immediately follows from the previous lemma. 
be the corresponding local solution to (28).Then there exists a time 0 Moreover, there exists a constant C > 0, depending only on u 0 H 1 , so that Proof.The time of existence in Lemma 3.2 depends only on u 0 H 1 and λ L ∞ [0,T ] and u ∈ C([0, T ], H 1 0 (Ω)).So by continuity, we can find a time Next, using Schauder fixed point theorem, we prove that a solution to (1) exists for We will then show that the two Cauchy problems are equivalent. where v is a solution to the Cauchy problem . By (33), there exists a time T = T u0 > 0 such that I u0 ⊂ B u0 and for any u ∈ B u0 , In order to apply the Schauder fixed point theorem, we shall prove that I u0 is precompact.We observe that for any u ∈ B u0 , v = F u0 (u) ∈ W where From the Aubin-Lions lemma, the embedding of W in C([0, T ], H 1 0 (Ω)) is compact.Since I u0 ⊂ W , we obtain that I u0 is indeed precompact.Applying the Schauder fixed point theorem, we find that F u0 admits a fixed point u ∈ C([0, T ], H 1 0 (Ω)), a solution to (36) . By the standard persistence of regularity, we conclude that u ∈ C([0, T ], H 2 (Ω) ∩ H 1 0 (Ω)). Next, we employ a density argument to prove that we can find solutions to (36) with initial data in Then there exists a solution u ∈ C([0, T ], H 1 0 (Ω)) to (35). Proof.Let u 0 ∈ H 1 0 (Ω) and let {u We can suppose that there C > 0 such that We denote by {u ) the solutions to (36) emanating from u (n) 0 .Using Corollary 3.3 and (37), we see that there exists T > 0, such that and sup where C 1 (C) > 0. We observe that {u (n) } is uniformly bounded also in L 2 ([0, T ], H 2 (Ω)).Indeed, we multiply equation (36) by ∆u (n) and integrate in space to get Integrating this equation in time leads to We suppose that d ≥ 3. The case d ≤ 2 is similar.By Holder's inequality, we see that . Using Hölder's inequality in time yields where q = 4 σ(d − 2) is such that (q, r) satisfies (10) and Consequently, we obtain and the two sequences are uniformly bounded in these spaces.So there exists a subsequence {u (n k ) } k∈N , and ).Thus, u satisfies weakly (and strongly) (36). Proof.Taking formally the scalar product of (35) with u and integrating in time, we see that solutions satisfy L 2 , we obtain that the L 2 -norm is constant in time.Since Theorem 3.5 does not imply uniqueness of solutions, we will show it in the following Proposition.Proposition 3.7.For any u 0 ∈ H 1 0 (Ω) there exists a unique strong solution to (1).Proof.Let u 0 ∈ H 1 0 (Ω) so that u 0 H 1 = M and suppose that u, v ∈ C([0, T ], H 1 0 (Ω)) are two different strong solutions to (1).Here we choose T > 0 so that if T u , T v > 0 are the maximal times of existence of u, v then T < min(T u , T v ).In particular, there exists a constant and (39) sup By multiplying the equation by (u − v) and integrating in space, we obtain 1 2 We start by estimating the contribution given by the power-type nonlinearities.We suppose that d ≥ 2. Notice that We choose the exponent p so that , By Gagliardo-Nirenberg and Young's inequalities, for any δ > 0, we obtain Thus, it follows that for any δ > 0, we have ( 41) Next, we deal with the nonlocal term.We observe that . 
By Young's inequality, for any δ > 0 we have From (41) and the triangular inequality, we also obtain that, for any δ > 0, (42) In particular, from (40), ( 42) and (41) we have For δ < 1 2C(M) , we obtain by Gronwall's lemma that Proof of Theorem 3.1: Theorem 3.5 implies that for any initial condition u 0 ∈ H 1 0 (Ω), there exists a local strong solution to (1).Moreover, it is possible to extend the time of existence until the H 1 -norm of the solution is bounded, giving the blow-up alternative (27).On the other hand, Proposition 3.7 implies that for any initial condition u 0 ∈ H 1 0 (Ω), there exists at most one solution. Remark 3.8.Let us note that the approach developed in Theorem 3.5 cannot be used in the case Ω = R d .Indeed, at several points, the use of the Aubin-Lions lemma required the compact embedding H 1 (Ω) ֒→ L 2 (Ω). The case Let either Ω be a regular bounded domain or Ω = R d .For any u 0 ∈ H 1 0 (Ω), there exists a maximal time of existence T max > 0 and a unique solution u ∈ C([0, T max ), H 1 0 (Ω)) to equation (1).Moreover, either T max = ∞, or T max < ∞ and we have We need the following lemma, where the restriction on σ is justified by direct application of Sobolev's embedding in (45) below. Proof.The first term on the left-hand side of (44) is bounded by To obtain the bound on the second term on the left-hand-side of (44), we observe that The condition 0 < σ < Next, we obtain the bound in (45).Suppose that d > 2.Then, by Hölder's inequality, we get Inequality (45) follows from Sobolev's embedding theorem.Finally, observe that the case d ≤ 2 follows from a similar and easier proof. Proof of Theorem 3.9: Let T, M, N > 0 be some constants that will be chosen later and consider the set equipped with the natural distance Clearly, (X , d) is a complete metric space.Given any u 0 ∈ H 1 0 (Ω), u 0 ≡ 0, u, v ∈ X , we also define the map Our goal is to prove that F is a contraction in the space X .First of all, we observe that, by Gagliardo-Nirenberg's inequality, for any u ∈ X there exists a constant Let us show that F maps the set X into itself.We observe that there exists C 1 > 0 such that Notice that, for 0 < σ < 1 (d−2) + , we have 2 < 4σ + 2 < 2d (d−2) + .Thus, by Sobolev's embedding theorem, it follows that there exists a constant C 2 (M, N ) > 0 such that (49) Moreover, by using ( 14), ( 48) and (49) we also see that there exists C 3 > 0 such that In particular, it follows that there exists K(M, N ) > 0 such that (50) inf Furthermore, by the smoothing property of the heat semigroup for the gradient stated in (13), we infer that Notice that, since σ < where K is defined in (50) and ( 54) where K 1 is defined in (51).This implies that F : X → X .Next we show that F is a contraction map on X , namely there exists 0 < K 2 < 1 such that, for any u, v ∈ X , we have From the smoothing effect of the heat semigroup (13) we get (55) As a consequence of inequality (44) we have that, for any u, v ∈ X there exists a constant From (56) we obtain that Moreover, from the inequality (45) we obtain (58) From inequalities (55),( 57),(58) we have . 
By choosing T satisfying conditions (52), (53), (54) and such that T , which is a solution to (1).The uniqueness of this solution follows from the fact that the map F is a contraction in X .Moreover, notice that we can extend the local solution until the H 1 -norm of u(t) is bounded.Hence we obtain the blow-up alternative, that is, if T max > 0 is the maximal time of existence, then either In this subsection, we present our third proof for the local well-posedness result, based on a fixed point argument.We state the following theorem for the case Ω = R d , however, it is straightforward to see that the same proof applies also to the case of bounded domains.The necessity of the restriction σ ≥ 1 2 is further commented in the remark 3.12 below the proof of the following theorem.Proof.Fix M, N, T > 0, to be chosen later, let r = 2σ + 2 and q = 4σ + 4 dσ so that the pair (q, r) satisfies condition (10).Consider the set equipped with the distance Clearly (E, d) is a complete metric space.Consider now u, v ∈ E and observe that the inequality Moreover, using the embedding H 1 (R d ) ֒→ L r (R d ) and Hölder's inequality, we have Using Hölder's inequality in time, we deduce from the above estimates that Next, we observe that, for any u, v ∈ E, from We also notice that from the embedding For any u 0 ∈ H 1 (R d ), let F (u 0 )(u)(t) = F (u)(t) be defined as By using the space-time estimates in Proposition 2.4, the embedding H 1 (R d ) ֒→ L r (R d ) and Hölder's inequality, we obtain (62) and ( 63) As a consequence of (62), ( 63) and (60), there exist C 1 > 0 and K(M, N ) > 0 such that In the same way, by exploiting (61), we also have that We set M = 2C 1 u 0 H 1 and we choose T > 0 small enough so that (64) which is possible because Moreover, we observe that for any t ∈ [0, T ], there exists C 2 , C 3 > 0 such that ) Thus, by exploiting (60), we have that there exists Now we set N = u 0 L 2 .By choosing T > 0 which satisfies condition (64) and also (T we obtain that F maps E into itself and it is a contraction.This is enough to conclude that there exists a local in time mild solution to (1).The uniqueness follows from the fact that F is a contraction.Moreover, we obtain the blow-up alternative (27) because we can repeat this argument extending the solution locally in time until the H 1 -norm of the solution diverges. Remark 3.12.Let us emphasize why we need the condition σ ≥ 1 2 .On one hand, in the absence of the nonlocal term µ, we could use the contraction principle in the set E with the weaker distance Indeed, it is standard to prove that (E, g) is a complete metric space.On the other hand, the presence of µ requires a stronger distance induced by the L ∞ t H 1 x -norm, as it is clear from inequality (61).This implies that we have to use the distance d instead of g.But for σ < 1 2 , this is not possible because the power-type nonlinearity is not locally Lipschitz continuous in Sobolev spaces (namely, inequality (59) fails). 3.4. Global well-posedness.From Proposition 2.2 we obtain that for any u 0 ∈ H 1 0 (Ω) and t ∈ [0, T max (u 0 )], the energy of the corresponding solution satisfies and thus it is non-increasing in time.The global existence follows from a classical argument which combines the blow-up alternative (27) and the bound on the energy. 
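As a guide to the proof below, the following sketch records the formal computations behind the two facts just invoked: conservation of the L^2-norm and monotonicity of the energy (the analogue of (8)), together with the Gagliardo-Nirenberg bound that closes the argument when g > 0 and dσ < 2. It assumes the reconstruction of (1) and (3) recalled after the introduction and enough regularity to integrate by parts; constants are not optimized.

```latex
% Mass: pair (1) with u; the choice of \mu[u] makes the right-hand side vanish.
\frac12\,\frac{d}{dt}\|u\|_{L^2}^2
  \;=\; -\|\nabla u\|_{L^2}^2 \;+\; g\,\|u\|_{L^{2\sigma+2}}^{2\sigma+2}
        \;+\; \mu[u]\,\|u\|_{L^2}^2 \;=\; 0.

% Energy: pair (1) with \partial_t u; the L^2 constraint kills the \mu[u]-term.
\frac{d}{dt}E[u(t)] \;=\; -\int_\Omega |\partial_t u|^2\,dx \;\le\; 0.

% A priori bound for g>0: Gagliardo--Nirenberg plus conservation of mass give
\|\nabla u\|_{L^2}^2
  \;\le\; 2E[u_0] \;+\; \frac{g}{\sigma+1}\,C_{GN}\,
          \|\nabla u\|_{L^2}^{d\sigma}\,\|u_0\|_{L^2}^{2\sigma+2-d\sigma},
% and d\sigma < 2 allows the last term to be absorbed by Young's inequality,
% yielding a uniform-in-time bound on the H^1-norm.
```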
Proof.Let u 0 ∈ H 1 0 (Ω) and let u ∈ C([0, T max ), H 1 0 (Ω)) be the corresponding local solution to (1) given by Theorems 3.1, 3.9 or 3.11.Then the L 2 -norm of the solution is constant and the energy is non-increasing in time.Thus for g ≤ 0, 0 < σ < 2 (d−2) + , we have and dσ < 2 to conclude that In particular, in both cases, the H 1 -norm of the solution is uniformly bounded in time.From the blow-up alternative (27) we conclude that the solution is global. In what follows, we will provide an alternative sufficient condition for global existence in bounded domains, as stated in Theorem 1.3.The proof is based on the potential well method which was introduced in Section 2.4.We start with the following lemma where we use the notations introduced in Section 2.4. Proof.Fix g = 1 without losing generality and suppose that u 0 ∈ W. Let T = T max (u 0 ) be the maximal time of existence of the corresponding solution to (1).Equation (8) implies that E[u(t)] ≤ E[u 0 ] < p, for all t ∈ [0, T ).Since I[u 0 ] > 0, it follows by the continuity of the flow associated with (1) that I[u(t)] > 0 for all t ∈ [0, T ).Indeed, by contradiction, suppose that there exists a time t 0 > 0 such that I[u(t 0 )] = 0 and E[u(t 0 )] < p.Notice that this contradicts the definition of p. Hence u(t) ∈ W. The same argument applies for the case when u 0 ∈ Z. Lemma 3.14 implies that the sets W and Z are invariant under the flow of equation (1).As a consequence, we obtain the following. Proof.We fix g = 1 without losing generality.The condition u 0 ∈ W and Lemma 3.14 imply that u(t) ∈ W for every t ∈ [0, T max (u 0 )) and consequently that Hence the H 1 -norm of u is uniformly bounded by (2σ+2)p σ .From the blow-up alternative (27), we conclude that T max (u 0 ) = ∞.Remark 3.16.Proposition 2.12 provides a sufficient smallness condition on the initial datum for global existence.Indeed, if u 0 ∈ H 1 0 (Ω) is so that ∇u 0 2 L 2 ≤ 2p, then u 0 ∈ W. Remark 3.17.Notice that when Ω is a bounded domain, the set W is not empty because ∇u L 2 < √ 2p implies u ∈ W (see Proposition 2.12), while the set Z is not empty since if E[u] ≤ 0, then from (19), I[u] < 0 and u ∈ Z.On the other hand, when Ω = R d , we have p = 0 and W is empty.Indeed, take any u = 0 such that I Thus, for λ → 0, we obtain that E[v] → 0. As a consequence, from (19), the set W is empty when Ω = R d and Theorem 1.3 does not provide any additional sufficient conditions for the global existence in H and thus, when σ < 2 d , Q belongs to Z.We proceed by providing additional sufficient conditions for global existence in the whole space R d , as stated in Theorem 1.4.We use the same notations introduced in Section 2.5.We start by proving that the set K is invariant under the flow of equation (1). 
Proof.We suppose that g = 1 without losing generality.By Gagliardo-Nirenberg's inequality (25) it follows that where By taking the derivative of f , we notice that it has two critical points f ′ (0) = 0, and f ′ (x 1 ) = 0, and , is a local maximum.From (26) we can write (25) is an equality for Q, we have From the condition E[u 0 ] u 0 2α ), ( 66) and ( 8) we obtain that for any t ∈ [0, T max (u 0 )), the following inequality is true Suppose by contradiction that there exists a time t 0 > 0 such that ∇u(t 0 ) L 2 u(t 0 ) α L 2 = ∇Q L 2 Q α L 2 .Then we would have f ( ∇u(t 0 ) L 2 u(t 0 ) α L 2 ) = f (x 1 ) which is in contradiction with (67).Thus we can conclude that the set K is invariant under the flow of equation ( 1).Proof.Since K is invariant under the flow of equation ( 1), it follows from the conservation of the L 2 -norm that for any t ∈ [0, T max (u 0 )), and thus the H 1 -norm of the solution is uniformly bounded in time.The blow-up alternative (27) implies that the solution is global in time. Asymptotic Behavior In the following proposition, Ω can be either a regular bounded domain or the whole R d .(Ω)) be the solution to (1).Then there exists a sequence {t n } n∈N , t n → ∞ as n → ∞, such that where u ∞ solves the stationary equation Proof.Since ∂ t u ∈ L 2 ([0, ∞), L 2 (Ω)), sup t>0 |µ[u(t)]| ≤ C, and u L ∞ ([0,∞),H 1 ) ≤ C, then there exists a sequence {t n } n∈N , t n → ∞ as n → ∞ such that where the last convergence is in a weak H −1 (Ω) sense.In particular, we get that By taking the scalar product of this equation with u ∞ , we obtain that µ The result of the Proposition 4.1 can be improved in a bounded domain Ω ⊂ R d due to the Rellich-Kondrachov theorem.Indeed: embedding H 1 0 (Ω) ֒→ L 2σ+2 (Ω), we obtain u(t k ) L 2σ+2 → u L 2σ+2 .By the lower semi-continuity, it follows that which implies that E[u] = E[Q].In particular, ∇u(t k ) L 2 → ∇Q L 2 , and so u(t k ) → Q in H 1 0 (Ω).This is a contradiction with u(t k ) − Q H 1 ≥ ε.Remark 4.4.We would like to notice that in general, there exist initial data not converging to the ground state.Indeed if g = 1, σ < 2 d and u 0 ∈ Z, where Z is defined in (24), then u ∞ where u ∞ ∈ Z is a stationary state with µ[u ∞ ] < 0. Similarly, if u 0 ∈ W where W is defined in (20) then u ∞ ∈ W and µ[u ∞ ] > 0. Thus, at least in one of the above cases, the solution does not converge to the ground state. Finally, we show that in the whole space R d and when σ ≥ 2 d there exists a set of initial data whose solutions satisfy the grow-up condition.Specifically, if E[u 0 ] < 0, then there exists a sequence {t k } k∈N , t k → T max such that lim k→∞ ∇u(t k ) L 2 = ∞.Gagliardo-Nirenberg's inequality and the conservation of the L 2 -norm imply that Thus the energy is bounded from below.Since E[u(t)] is a continuous, decreasing function, there exists the limit and In the same way, we can see that µ[u(t)] is bounded from below and from above.Then there exists a sequence 2 (d− 2 ) + for d ≤ 4. 
Finally, in Subsection 3.4 we globally extend the previous results, under the assumptions on the nonlinear term stated in Theorem 1.2 above. Then we provide additional sufficient conditions on the initial data for global existence, as stated in Theorem 1.3 and Theorem 1.4, using the potential well method.

3.1. Local existence via the Schauder fixed point theorem. In this subsection, we will prove the local well-posedness for bounded domains and the entire energy subcritical range 0 < σ < 2/(d−2)_+.

Theorem 3.1. Let 0 < σ < 2/(d−2)_+ and let Ω ⊂ R^d be a regular bounded domain. For any u_0 ∈ H^1_0(Ω), there exists a maximal time of existence T_max > 0 and a unique solution u ∈ C([0, T_max), H^1_0(Ω)) to (1).

The proof proceeds in the following steps.
(1) We show the existence of solutions for the model where the nonlinearity µ[u] is substituted by a function λ ∈ L^∞[0, ∞), and we gather estimates on the H^1-norm for this model (Lemma 3.2 and Corollary 3.3).
(2) We employ the Schauder fixed point argument. Here we need extra regularity of the initial data, u_0 ∈ H^2 ∩ H^1_0, to obtain the compactness of the image. Moreover, we have to substitute the L^2-norm in the denominator of µ by a constant in order to have a convex domain for the map (Proposition 3.4, equation (35)).
(3) We use a density argument to show the existence of solutions for u_0 ∈ H^1_0 (Theorem 3.5).
(4) We prove the equivalence of problems (1) and (35) when the constant of step (2) is chosen to be the square of the L^2-norm of the initial condition (Proposition 3.6).
(5) We show the uniqueness of the solution in Proposition 3.7.

The following result states that any mild solution is a strong solution, see [24,
Comprehensive evaluation of environmental dimension reduction of multi-type islands: a sustainable development perspective In recent years, the sustainable development of islands has attracted increasing attention from countries all over the world. An important prerequisite for promoting sustainable development is to understand the foundation and sustainable development potential of islands. Constructing index systems and models is an important means of evaluating the sustainability of islands. This study used factor analysis (FA) to construct an indicator system and set weights. Thirty-eight indicators were set from both natural and social directions to evaluate the sustainable development of seven typical islands in China. The FA removed the 10 indicators that were too relevant, and the 28 effective indicators were reduced into 9 main factors for evaluation. The results showed that the evaluation results are in line with the actual development of the island, which verifies the applicability of the model to different types of islands. The study also found that the changing trends of island social sustainability, tourism sustainability, ecological sustainability, resource sustainability, and economic sustainability are consistent. The value of fully balanced islands is higher than that of unbalanced or undeveloped islands. Among the seven islands, social islands have the highest total value, and ecological islands have the lowest total value. Introduction Since the twentieth century, oceans and islands have gradually become the focus of national and social attention Zhang et al. 2020b;Zheng et al. 2020b). The importance of island development, management, and ecological protection is increasing daily (Liu et al. 2018), and an island view of coordinated development, green development, and sustainable development has gradually formed. In response to the island's complex geography, resource endowments, and various conflicts arising from the development process (Douglas 2006), the island's development situation tends to shrink. Affected by the long-term concept of valuing land and ignoring sea, as well as unfavorable factors such as being far away from the mainland, inconvenient transportation, and hard life, the development of China's islands is in a lagging, disorderly, and extensive stage. In the process of island development, the following problems are present: the development order is chaotic, the construction level lags behind, the development level is low, and the resources and environment are seriously damaged. Since 2018, China has proposed the construction of ecological islands and reefs. To prevent ecological damage to islands, most island projects have been suspended. However, currently, it seems that how to balance the relationship between development and protection still has not been effectively resolved. The lack of accurate positioning, scientific evaluation, and comprehensive planning of the islands is very important reason. Western maritime powers and international small island countries have done much scientific research on the scientific development and utilization of islands (Rigg and Richardson 1934;Tokusige 1939), such as island management (Kim 2020), island ecology (Petridis et al. 2017), and nonresident island development and utilization (Hwang and Ko 2018). Many countries with islands are more likely to adopt centralized and specialized agency management systems for island Responsible editor: Marcus Schulz * Shaoyang Chen<EMAIL_ADDRESS>1 management. 
The centralized formulation of marine island master plans through a unified organization makes it easier to achieve high resource utilization and ecological environment protection (Shi et al. 2015). International research on islands mostly focuses on ecological and environmental protection. The restoration of the island ecological environment is exemplified by New Zealand. As a small island country, New Zealand attaches great importance to ecological restoration research and has made a series of achievements in island ecological restoration (Towns and Ballantine 1993). Although China's efforts to protect the island ecological environment started relatively late compared to foreign countries, with China's emphasis on islands in recent years, the protection and management of islands has increased. Some achievements have been made in island development (Shen 1995), island protection (Zhang et al. 2020a), and ecosystem evaluation (Chen and Dong 2019). Compared with other countries with more successful island development (e.g., Australia, Thailand), the overall development and utilization level of uninhabited islands in China is low, and both efficiency and benefit are not commensurate with the relatively superior natural endowment status. The concept of sustainable development can be traced back to the 1970s which then quickly penetrated all areas of social development. In 2004, some experts and scholars (Li and Wang 2004) made the following definitions for the sustainable development of the island: adapt measures to island conditions, plan rationally, rely on scientific and technological progress, strengthen legal management, and rationally and effectively develop and utilize the island's ecological environment without reducing its carrying capacity so that it not only meets the needs of the present generation but also does not pose a hazard to the needs of future generations. Since then, the sustainable development of the island has become the primary topic in the study of the island environment. The development of islands is susceptible to human activities and environmental changes (Brauko et al. 2020). From the perspective of politics (Liao and Liu 2019), the economy (Baldacchino 2006;Liu et al. 2020;), resources (del Rio-Rama et al. 2020Zheng et al. 2020a), and the environment (Liu et al. 2017), disorderly islands' human activities and economic development models are not conducive to the sustainable development of islands (Zhao et al. 2016). The vulnerability of islands compared to land emphasizes the importance of island management and planning. Establishing the concept of sustainable development with strategic significance and a fair and just support system to provide a solid foundation and guarantee for the sustainable development of the island (Wang et al. 2006). Domestic and international research on the application of the concept of island sustainable development involves ecology , tourism (Moreno 2005), fisheries (Karcher et al. 2020), land resources (Zhang and Xiao 2020), etc. Its emphasis is on balancing the natural ecology and social development of the island. For small island states in particular, sustainable development research is more relevant. Small island states pay more attention to how to achieve the sustainable development of islands (Tilley et al. 2019), and the study of protecting the island's ecological environment (Hafezi et al. 2020) and socioeconomic development (Mauthoor 2017) is more in-depth. 
Currently, a series of evaluation applications for sustainable development have been carried out in different industries, such as business management , the automobile sales industry (Zhou et al. 2019), and water resource utilization (Dai et al. 2019). Huang et al. (1998) discussed the sustainability of urban eco-economics and selected 80 eco-economic indicators to evaluate the sustainability of Taipei City. Rajak et al. (2016) used fuzzy logic methods to evaluate the sustainability of urban transportation systems. Tang et al. (2019) selected 39 indicators from the three directions of economic, social, and ecological development to construct a city sustainability evaluation indicator system. Then, the entropy method was used to assign weights, and the gray correlation method was used for evaluation research. Che et al. (2021) selected 19 evaluation indicators from the four directions of environmental sustainability, economic development sustainability, social well-being sustainability, and technological innovation sustainability to construct a comprehensive regional coordination evaluation model. Sustainability evaluations were carried out in 31 provinces and regions in China. It can be seen from this that the evaluation research of sustainable development has developed in-depth from a single direction to a multidirection (Yang and Ding 2018). Generally, sustainable development is a composite ecosystem composed of three directions: nature, economy, and society (Singh et al. 2009;Tanguay et al. 2010;Peng and Deng 2020). However, the definition of sustainability shows uncertainty (Burgass et al. 2017). There are also many differences in the determination of the directions of the indicator system. It is not only related to the development of various directions, such as the environment, economy, society, and technology, but also related to the mutual relationship and interaction between the various directions (Che et al. 2021). Therefore, researchers determine different sustainability directions according to their needs generally (Shaker and Sirodoev 2016). Since the establishment of the evaluation index system for the sustainable development of islands in 2004, many scholars have successively evaluated the sustainable development of islands (Ke et al. 2013;Ke et al. 2014;Gao et al. 2019;Long et al. 2020;Nesticò and Maselli 2020;Xu et al. 2020). The assessment of foreign islands is mainly centered on risk assessment, which explores the impact of disturbance on the island environment itself and human society, involving anthropogenic factors such as fisheries fishing (Gilman et al. 2014) and natural factors such as hurricane crossing (Sealey et al. 2020). Various models and frameworks have been developed to assess the sustainability of islands, such as ecological footprint (Fang et al. 2018;Dai et al. 2019) and data envelopment analysis (DEA) evaluation models (Wu et al. 2009). The ecological footprint is used as a measure of sustainability, but it has obvious shortcomings (Fiala 2008). The ecological footprint proposed by Rees focuses on ecological sustainability, ignoring the sustainability of the socioeconomic system (Rees 1996). DEA has drawbacks in evaluating efficiency and determining weights (Charnes et al. 1979;Wu et al. 2009). 
Extensive research has been conducted on the sustainability evaluation method based on the indicator system, which follows five conventional analysis stages: indicator selection, data processing, normalization, weighting, and aggregation, relying on a set of methods to ensure the scientific objectivity of the analysis (Miller et al. 2017). The total value score is calculated by the linear weighted sum of multiple indicator scores (Opon and Henry 2020). The choice of methodology involves much subjectivity, and the most commonly used methods include the analytic hierarchy process (Zhu and Wang 2017) and entropy method. The analytic hierarchy process (AHP) is affected by subjective factors (Tang et al. 2019), and expert opinions are highly subjective, resulting in weight predictions that contradict the actual situation. The entropy weight method does not consider the influence relationship between the index and the index, and the weight distribution is easily polarized. Currently, there is no unified and universally applicable island evaluation model at home or abroad (You et al. 2015;Karampela et al. 2017). Many indicator sets only reflect a limited range, and no indicator set is sufficiently comprehensive to characterize sustainability. The consistency and correlation of data also lead to uncertainty in the indicators (Opon and Henry 2020). Drawing on the island research experience of major international marine countries and small coastal island countries, combined with the results of the evaluation of the sustainable development of domestic islands, this study used the network information collection method and statistical analysis method to study and analyze the representative data of the sustainable development of the islands and constructs the evaluation system of the FA method. FA is a multivariate statistical method used to reduce a large number of variables to fewer potential dimensions. It is also used to observe the relationship between data and test whether the assumed relationship or potential dimension in the data can be confirmed (Watson and Thompson 2006). The core of FA is correlation analysis. Usually, it uses covariance to measure the relationship between two variables. The key aspect is that the researcher decides how many factors to keep (Dinno 2009). FA was first applied in the field of psychology (Russell 2002) and is now widely used in various disciplines, such as earth sciences and oceanography (Bopp and Biggs 1981;Dimitriadou et al. 2019;Zheng et al. 2020c). This study used FA to reduce the dimensionality of indicators. First, FA eliminated useless indicators. Then, a large number of potentially relevant indicators were transformed into several unrelated main factors through linear combination, and fewer main factors were used to reflect the sustainable information of the island. The factor weight was obtained by calculating the variance contribution rate of the main factor in the dimensionality reduction process. FA is used to determine the weights in the evaluation of sustainable development, which can not only complete the sustainability evaluation of an island but also complete the comparison of development levels between different islands (Fu and Ma 2016;Yang et al. 2020). The study uniformly evaluated seven islands to test the feasibility of the evaluation method. 
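The dimensionality-reduction workflow just described (eliminating redundant indicators, extracting a few uncorrelated main factors, and deriving weights from their variance contribution rates) was carried out in SPSS. The Python sketch below shows how the same steps, including the KMO and Bartlett adequacy checks applied later in the paper, could be reproduced with the factor_analyzer package. It is illustrative only: the data frame X (islands by standardized indicators of one module), the number of factors, and the assumption that formula (4) simply normalizes the contribution rates to sum to one are ours, not the study's actual inputs or outputs.

```python
# Illustrative re-implementation of the paper's SPSS factor-analysis workflow (not the authors' code).
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

def fa_module(X: pd.DataFrame, n_factors: int):
    """Factor analysis of one indicator module; X has one row per island,
    one column per standardized indicator."""
    # Adequacy criteria used in the paper: KMO > 0.5 and Bartlett significance < 0.05.
    chi_square, p_value = calculate_bartlett_sphericity(X)
    _, kmo_total = calculate_kmo(X)
    if kmo_total <= 0.5 or p_value >= 0.05:
        raise ValueError("Module fails the KMO/Bartlett criteria; regroup the indicators.")

    fa = FactorAnalyzer(n_factors=n_factors, rotation="varimax")
    fa.fit(X)

    # Variance contribution rate e_i of each retained factor -> weights
    # (formula (4), assumed here to be e_i / sum(e)).
    _, proportion, _ = fa.get_factor_variance()
    weights = proportion / proportion.sum()
    scores = fa.transform(X)          # per-island scores of each main factor
    return weights, scores, fa.loadings_

# Hypothetical usage for one of the six modules (7 islands x its indicators):
# weights, scores, loadings = fa_module(X_social, n_factors=2)
```

Running one such call per module and concatenating the retained factors mirrors the module-by-module dimensionality reduction described in the paper.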
Using different kinds of islands as samples for unified evaluation not only reduces the amount of necessary calculation for the individual evaluation of each island but also helps analyze the development of different types of islands. This approach is conducive to the state's hierarchical and classified management of islands and the macrocontrol of the development of different types of islands. The evaluation provides a theoretical basis for the sustainable development of the islands and a decision-making basis for the formulation of island development measures. The study area The results of the national survey of island names in sea areas show that there are more than 11,000 islands in China (Pan et al. 2018), including 12 major island counties (Zhao and Zheng 2017). The total area of islands accounts for approximately 0.8% of China's land area. According to whether the island has a household registration, it is divided into resident islands and nonresident islands. The evaluation model requires that the evaluation objects have universal applicability, so the study selected seven representative islands in terms of residents' life, tourism, natural environment, and ecology. The selected islands and reasons for selection are given in Table 1. The geographic locations of the seven selected islands are shown in Fig. 1. Data acquisition Island surveys are the basis for ecological environmental quality assessments of islands. Data acquisition methods include statistical yearbooks, literature inquiries, field surveys and measurements, satellite remote sensing, and camera equipment monitoring. The data in this experiment came from various regional government portals and district and county statistical yearbooks, which were authoritative and representative. The study collected the data of China's seven islands in 2019. The data acquisition URL is given in Table 2. Method The evaluation system is divided into the following four steps: the construction of the index system, the standardization of data, the determination of weight, and the comprehensive processing of index factors. The framework of the model is shown in Fig. 2. Construction of indicator system The index system goes through two steps: theoretical screening and dimensionality reduction screening. First, indicators were selected theoretically through the literature. Second, according to the SPSS FA, correlation screening and dimensionality reduction experiments were carried out on the indicators. Reflecting island sustainability information with fewer irrelevant main factors weakens the problem of inaccurate evaluation caused by repeated calculation of related indicators. In the theoretical screening stage, referring to the high-frequency indicators in the field of sustainable development evaluation in databases such as Science Direct, Web of Science, and China National Knowledge Infrastructure (CNKI), based on regional sustainable development journal literature (Fu and Ma 2016;Tang et al. 2019;Che et al. 2021), this study established the natural environment and social environment as the first-level indicators. It is necessary to evaluate the sustainability of the island from a comprehensive and multiangle perspective. Under the first-level indicators, secondary indicators were set up in terms of culture, economy, social development, resources, ecology, and environment. As a geographical unit different from the land, the island's sustainability has its own characteristics. 
Then, referring to the characteristics of the island area, the study adjusted the indicator settings of the island sustainable development evaluation model (Ke et al. 2013; Gao et al. 2019; Nesticò and Maselli 2020), for example by adding indicators of cultural tourism, the fishery economy, and the tourism economy. At the same time, islands of different types and geographic locations lead to differences in evaluation objectives. In the actual evaluation process, indicators were selected based on the functions of the different islands. For uninhabited islands, social factors were negligible, and the focus was on the natural ecological environment. For tourist islands, the factors of island tourism were appropriately increased. For islands with special resources, emphasis was placed on the evaluation of their resource protection. A total of 38 indicators were covered to build an evaluation system for island environmental value elements. The evaluation index system is shown in Fig. 3.

Data standardization

The different formats of the data in the evaluation system result in the incomparability of different indicators, so the selected indicator data should be normalized. The SAVEE method (Chen 2011) provides standardized equations to realize the normalization of index data. This study improved the SAVEE standardized equations to achieve the normalization of quantitative data. The study used qualitative adjustment of the k value in the standardization formula to standardize different types of island index data to the same dimension, so that the same evaluation standard applies. For quantitative data, within the limit distance X, the research object value b was standardized according to the equations in Table 3. According to the impact of a single indicator on the total value of the island's environment, the indices were divided into positive correlation indices, negative correlation indices, and normal distribution indices. The standardized formulas are shown as follows.

[Table 1, fragment: descriptions of the selected islands, including an island of relatively small area and permanent population with a unique landscape whose economic development is dominated by tourism, the Miaodao Islands (D4), Nanji Island (D5), Shedao Island (D6, the only island in the world inhabited solely by vipers, with a national protection zone for snake resources), and Shanhu Island (D7, an island composed of coral reefs, rich in coral and guano resources and one of the islands richest in phosphate rock).]

Positive correlation index: (1) y = − e k×(− ×(b+ )÷X). In the formula, y is the standardized value, b is the unstandardized index value, X is the limit distance value, and k = {1, 3, 5, 7, 9} (k takes its value according to the degree of discretization of the index data; the greater the degree of discretization, the greater the value of k).

Negative correlation index: (2) y = e − ×(b+ )÷X. In the formula, y is the standardized value, b is the unstandardized index value, and X is the limit distance value.

Normal distribution index (equation (3)): in the formula, y is the standardized value, b is the unstandardized index value, µ is the mean, and σ is the standard deviation.

[Fig. 1: Geographical location of the target islands.]

The quantification of the qualitative indices of the island environmental index system adopts the expert scoring method. The expert scoring method is a method of quantifying qualitative descriptions.
The purpose is to count, analyze, and integrate the opinions of all participating experts and finally reach a consensus (Yang et al. 2019). Considering the impact of the indicators on the overall sustainable development of the islands and the comparative differences of the indicators between different islands, this study quantified the qualitative indicators based on the 0.1-0.9 scoring standard. The scoring standards are given in Table 4. When qualitative indicators were quantified, a number of experts in related fields score secondary qualitative indicators. The value of (3) y = e − 1 2 × − 2 Determination of weight The contribution of indicators to sustainability is not the same (Mikulic et al. 2015), so indicator weighting is a necessary step. Determining the weight of the evaluation index is difficult in the construction of the evaluation system. There are many methods to determine the weight. Each method has its advantages and disadvantages (Ni 2002). Subjective weighting methods are highly subjective, and the data cannot be true and reliable. The objective weighting method uses rigorous mathematical algorithms and requires accurate data. A total of 38 indicators were selected in the study. Among them, some indicators may be relevant, making the indicator system and weights inaccurate. Therefore, to scientifically determine the weight of the index and accurately summarize the situation of the island, FA was adopted to determine the weight. This study used SPSS software to perform FA experiments to reduce the dimensionality of indicators and obtain weights. The main factor obtained through FA was the linear combination of variables. The weight was obtained based on the variance contribution rate of the main factors. FA requires that the number of indicators be less than the number of samples. However, this study contains 38 indicators and 7 samples in total. Therefore, this study considered both the principles and data, divided the indicators into different element layers and modules, and performed FA on each module. First, the indicators were qualitatively divided into five element layers, including the social element layer, tourism element layer, ecological element layer, resource element layer, and economic element layer. Using the average grouping method to weight the elements layers, the weight of a single elements layer was 20%. Then, factor dimensionality reduction analysis was performed on the selected indicators of each element layer. In the experiment, 10 invalid indicators were removed, and the dimensionality reduction operation was analyzed for the remaining 28 indicators. Through experiments, it was found that dividing the five element layers into 6 modules can simultaneously meet the KMO and Bartlett test conditions (KMO>0.5 & Sig<0.05). Then, FA dimensionality reduction was performed on the 6 modules. After continuous index adjustment, the FA divided the 28 indices into 9 main factors (Fig. 4). FA can reduce the dimension of the index and transform the general index into several groups of unrelated comprehensive factors through linear combination. The main factor is the linear combination of the index, and the score of the main factor can be obtained according to the score coefficient matrix. The main factor weight formula of FA is as follows: In the formula, ω i is the weight of the index, and e i is the contribution rate of the main factor. 
The main factor score formula is as follows: In the formula, Y i is the score of the main factor, A ij is the score coefficient matrix, and y is the standardized value of the index. Comprehensive processing of index factors The index weights are integrated to analyze the overall environmental conditions of the island, and the weighting formula is shown as follows: Table 3 Quantitative data standardization equation There is a negative correlation between AQI and island value. Educational institutions (300), total tourism income (500,000 yuan), total reception (800,000 people), per capita GDP (100,000 yuan), rural per capita GDP (100,000 yuan) The index is positively correlated with the island value, and the extreme value is selected according to the actual situation and expert experience. Fitness venues (50), minimum living allowance (1500 yuan), total agricultural output value (10,000,000 yuan), total industrial output value (10,000,000 yuan), total retail sales of consumer goods (10,000,000 yuan), total fishery output value (1,500,000 yuan) The index is positively correlated with the island value, and the extreme value is selected according to the actual situation and expert experience. Average annual precipitation (1200 mm), temperature (16℃), population per unit area (4 people/hm2) The index and the quantity of value have a normal distribution, µ is the mean, and σ is the standard deviation according to the actual situation. In the formula, ω i is the weight and Y i is the standardized value of the ith main factor. Index dimensionality reduction and weight gain The 38 secondary indicators were divided into 20 qualitative indicators and 18 quantitative indicators. The quantitative indicators included 13 positive correlation indicators, 1 negative correlation indicator, and 3 normal distribution indicators. Quantitative data were standardized by the SAVEE standard equation, and qualitative data were directly converted into a percentile form. Then, the qualitative and quantitative data were standardized. The standardized index value was transformed into a percentile form. The standardization results are shown in Table 5. Through FA experiments, 28 indicators were reduced into 9 main factors. The weights and scores of the main factors are calculated by using Formulas (4) and (5). The weights (6) S = ∑ i × Y i and score calculation formulas of the main factors are given in Table 6. FA condition test FA requires KMO and Bartlett test conditions (KMO>0.5&Sig<0.05). The research carried out KMO and Bartlett tests on 6 modules. As shown in Fig. 5, the KMO test results of the 6 modules were all greater than 0.5, and the Bartlett sphere test results were all less than 0.05. The results meet the conditions of the FA variable test and prove that the adjusted index can be used for dimensional reduction experiments through FA. Island factor score The total scores of social factors, tourism factors, ecological factors, resource factors, and economic factors are calculated using Formula (6). The score results are shown in Table 7. Figure 6 shows that the individual factor scores of social islands, tourist islands, and ecological islands all have a downward trend. Compared with other factors, the downward trend of the resource factor is more moderate. 
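To make the aggregation in formulas (5) and (6) concrete before turning to the per-island comparisons, the following minimal sketch combines a factor score coefficient matrix A, the standardized indicator values y of one island, and the factor weights w into a total score. Variable names, array shapes, and the example numbers are placeholders of ours, not the study's data.

```python
# Minimal sketch of the score aggregation in formulas (5)-(6); all inputs are placeholders.
import numpy as np

def factor_scores(A: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Formula (5): Y_i = sum_j A_ij * y_j, where A is the factor score coefficient
    matrix (n_factors x n_indicators) and y holds one island's standardized values."""
    return A @ y

def total_score(w: np.ndarray, Y: np.ndarray) -> float:
    """Formula (6): S = sum_i w_i * Y_i."""
    return float(w @ Y)

# Hypothetical example with 9 main factors retained from 28 effective indicators.
rng = np.random.default_rng(42)
A = rng.normal(size=(9, 28))            # placeholder score coefficient matrix
y = rng.uniform(0.0, 100.0, size=28)    # placeholder standardized indicator values
w = np.full(9, 1.0 / 9.0)               # placeholder factor weights (from formula (4) in practice)

print(round(total_score(w, factor_scores(A, y)), 3))
```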
[Table 5: standardized values (percentile form) of the indicators b1–b38 for the seven islands D1–D7.]

The scores of the five factors (social, tourism, ecological, resource, and economic) of social islands are higher than those of tourist islands and ecological islands, indicating that social islands have the highest level of development and utilization. The social, ecological, resource, economic, and tourism factors are all at a relatively high level, which implies that the development of social islands is balanced. Ecological islands are mostly uninhabited islands. Although ecological islands are rich in ecological resources, they are ecologically fragile, and the islands are almost in an undeveloped state. Tourist islands are rich in natural landscapes and mainly develop tourism. The scores on ecological factors of ecological islands are significantly higher than the values of their other factors. Shanhu Island has valuable resources such as coral reefs, and Shedao Island has unique snake resources. Therefore, both islands have high ecological protection value. The evaluation result is consistent with the actual situation of the islands. The social factor score largely reflects the quality of life, education, and social security of the entire island. The tourism factor score is set according to the geographical characteristics of the island and reflects the sustainability of the island's tourism industry. The ecological factor score presents the island's ecological level and ecological protection and is also an important factor in the island's sustainability. The resource factor is similar to the ecological factor and is an important factor in the island's sustainability, reflecting the utilization of the island's resources. The economic factor score intuitively reflects the level of economic development of the island. Analyzing the interrelationship between the island main factors shows that the changing trends of island social sustainability, tourism sustainability, ecological sustainability, resource sustainability, and economic sustainability are consistent (Liu et al. 2021). This consistency shows that the sustainability of a single element of the island has the potential to drive the sustainability of other elements. This potentiality is of great significance to the development of islands based on their advantages in island planning. For example, the establishment of ecological landscapes on tourist islands promotes the development of island tourism, thereby promoting the overall economic development of the islands and driving the sustainability of the entire island society. Analyzing the results of the island factor score in Fig.
6, the score of a single factor shows a downward trend, indicating that there may be a certain correlation between the factors. Figure 6 shows that all factors show the same changing trend. The main reason may be that in the process of island development, society, tourism, ecology, resources, and economy will restrict and influence each other. Compared with other factors, the change trend of the resource factor is more gradual. After analysis, this may be because resource factors are less affected by human activities than social factors, ecological factors, tourism factors, and economic factors. Social, tourism, economic, and ecological factors are formed by human participation in the construction of islands, are more affected by human activities, and change more drastically. This observation shows that the value of an island is not only affected by the characteristics of the island itself but also restricted by human development and construction activities. Island total value score Formula (6) is used to obtain the total score of the island. The total score of the island is given in Table 8. Table 8 shows that the Zhoushan Islands have the highest score of 87.625, and Shedao Island has the lowest score of 20.587. Social islands such as Chongming Island and the Zhoushan Islands have the highest scores, exceeding 70; tourist islands have a medium score, approximately 40; ecological islands have the lowest score, approximately 20. The comprehensive evaluation results of different types of islands show that the value of balanced development islands is higher than that of unbalanced development islands or undeveloped islands (Zhou et al. 2015). The comprehensive value of social islands is higher than that of tourist islands, while the comprehensive value of ecological islands is the lowest. The total value of Chongming Island and the Zhoushan Islands is the highest compared to other islands. The results indicate that the development of Chongming Island and the Zhoushan Islands is in good condition, the facilities on the islands are well constructed, and the islands have the strongest development level. This phenomenon is due to the large area of Chongming Island and the Zhoushan Islands, their large population base, and the high level of social development that drives the level of island development and utilization. Their social development and the islands' natural level are relatively balanced. Weizhou Island, the Miaodao Islands, and Nanji Island rank second in value. This is mainly due to the relatively small area of these islands, their small resident populations, and their uneven development. These islands focus their development according to their own characteristics. There are some residents living on Weizhou Island, the Miaodao Islands, and Nanji Island. The islands have rich and unique natural landscapes. Good natural scenery and ecology have attracted many tourists. The residents of the islands mainly make a living from fishing and tourism. Therefore, these three islands are defined as tourist islands. Due to their geographical location and limited area, these islands are not suitable for large resident populations. Their degree of development and construction is lower than that of Chongming Island and the Zhoushan Islands. The islands mainly rely on the development of tourism, so their value is relatively low. Shedao Island and Shanhu Island have the smallest value and the weakest level of development. They are in an undeveloped or weakly developed stage.
The two islands are uninhabited islands and are almost undeveloped. The islands are very small and lack freshwater resources, which is not conducive to the lives of residents. Shedao Island is the only island in the world on which a single snake species, the black-browed viper, survives. The toxins of these vipers have scientific research value. China has established nature reserves to protect snake resources and the survival and reproduction of the vipers. Shanhu Island is an island composed of coral reefs. The island is rich in corals and guano resources. It is one of the islands with abundant phosphate rock. Shedao Island and Shanhu Island are islands with precious ecological resources. Although the degree of development and construction of the islands is very weak, the ecological value of their resources cannot be ignored. The study of the overall value of islands requires a comprehensive and multi-angle evaluation and monitoring of the sustainable development level of island society, tourism, ecology, resources, and economy. In this study, these factors and the characteristics of each island were comprehensively considered to formulate an island development plan and promote the sustainable development of the island. Currently, the contradiction between island development and the environment is increasingly prominent. For islands with comprehensive and balanced development, such as social islands, attention should be given to the relationship between balanced development and the environment. As the development of the island continues, the island's environmental protection plan must be carried out. For islands with unbalanced development, such as tourist islands, while developing the island's advantageous industries, attention should be given to the island's social sustainability and ecological sustainability to move toward balanced development. The unified evaluation of different types of islands is of great significance to the hierarchical and classified management of islands. Conclusion The changing trends of island social sustainability, tourism sustainability, ecological sustainability, resource sustainability, and economic sustainability are consistent, indicating that the sustainability of a single element of the island has the potential to drive the sustainability of other elements. This potentiality is of great significance to the development of islands based on their advantages in island planning. The value of fully balanced islands is higher than that of unbalanced or undeveloped islands. The sustainability of islands requires comprehensive and balanced development, especially attention to the relationship between balanced development and ecology. The unified evaluation of different types of islands can provide services for the hierarchical and classified management of islands. After an island is evaluated, it can be classified by level and category, and then the manager can develop and utilize the island rationally according to the island's own characteristics. This study used FA to process the indicators. The theoretically selected island sustainability indicators may be correlated with one another, which could result in an inaccurate evaluation of island sustainability. The FA method considers the correlation between indicators and reduces the dimensions of multiple indicators. FA turns the indicator variables into low-dimensional, uncorrelated main factors. Each main factor is a linear combination of the indicator variables. FA therefore used a small number of uncorrelated main factors to reflect the island's sustainability information.
The FA method can not only complete the sustainability evaluation of an island but also complete the comparison of development levels between different islands. This research also proposed different standardization equations based on the analysis of the indicators of different types of islands. For the indicators of different types of islands, appropriate standardization weights were set to make the data standardization fair. A limitation of this study is that quantitative data could not be obtained for some indicators, only qualitative descriptions. This study used the expert scoring method, taking the average of many expert opinions, to make the quantification of qualitative indicators as objective as possible. To date, FA has been successfully applied to inland urban ecosystems in previous experiments (Fu and Ma 2016). In this study, FA was successfully applied to island ecosystems, and its applicability in open coastal ecosystems needs to be verified. Data availability In this study, the environmental data of the seven islands mainly come from two kinds of websites (Table 2). One is the official website of the local government, including the municipal government official website and the district government official website, and the other is the local tourism website. Among them, most of the indicator data come from the statistical yearbooks on the government websites. The datasets of both websites are open to public readers. Declarations Ethics approval and consent to participate Not applicable. Consent for publication All authors are responsible for the article and agree to publish. Competing interests The authors declare no competing interests.
8,393
2021-07-12T00:00:00.000
[ "Computer Science" ]
Challenges in Children's Literature Translation: a Theoretical Overview There is an increasing demand for translation of children's literature nowadays, and this demand is accompanied by an increased need for researchers to study the nature and features of such a discipline. It is worth mentioning that the term "children's literature" in English-speaking countries is a broader term covering children, adolescents and sometimes young adults. The present paper aims to highlight some comprehensive theoretical aspects concerning children's literature translation. Special attention is paid to the issues which have generated intense and ongoing debates among theoreticians as to which translation strategies and procedures would be more beneficial to the target language child reader. Before elaborating on such issues, this paper casts some light on the various definitions of children's literature and its characteristics, its status and the influence it exerts on the potential readership. Ambivalence of children's literature – the texts being addressed to both children and adults – constitutes one of the biggest challenges for the author and the translator of children's literature alike. Such a phenomenon is investigated in this paper and illustrated with some book titles. Another feature which is tackled in this paper is that of asymmetry, which refers to the unequal communication levels between adults and children. Finally, conclusions will be drawn regarding the most popular theoretical trends of children's literature and children's literature translation. INTRODUCTION There are some reasons behind the assumption that children's literature is a minor and peripheral literary form in many cultures, including Albania. According to Zohar Shavit, this is due to the fact that the emergence and development of children's literature have followed common patterns across different countries (1996: 27). This condition of inferiority derives from the history and tradition of this body of literature, which is strictly bound to those of childhood, representing a minority group that has historically suffered a status of inferiority and subordination to other groups. Thus, the main system of literature tends not to attribute a high value to literature for children, which in turn has resulted in minor literary research. The most evident repercussion of this peripheral status on the translation of books for children has been identified by many (Shavit, O'Sullivan, among others) in the marked tendency of translated children's books towards 'acceptability', as introduced by Toury, 'domestication', as introduced by Venuti, or, in other words, Schleiermacher's well-known principle of 'bringing the author towards the reader' (49). The great freedom allowed to translators and/or editors, and the high degree of rewriting, abridging, adapting and other kinds of intervention that books for children have undergone, seem to derive from the specific attitude adopted towards the genre in the target context; the more this was considered peripheral, marginalized and of little literary merit, the more freedom seemed to be allowed in translating works for children. Klingberg, in his book Children's Fiction in the Hands of Translators, states that the extent to which the characteristics of the young readers are taken into consideration can be referred to as the degree of adaptation, and that it should be preserved in translation because the original should not change as far as level of difficulty or interest is concerned (1986).
II. DEFINITIONS OF CHILDREN'S LITERATURE Several attempts have been made on the part of scholars to provide a unanimously accepted definition of what can be considered children's literature. There are scholars who even go so far as to question the existence of children's literature. As Jack Zipes (2001) puts it in "Why Children's Literature Does Not Exist," "There has never been a literature conceived by children for children, a literature that belongs to children, and there never will be." Another researcher who raises the question of whether there is a need to define children's literature at all is Riitta Oittinen, who argues that works of literature and whole literary genres acquire different meanings and are redefined again and again. The cultural concept of "children" and "childhood" also changes radically with time, place, gender, and perceiver, and so the corpus of texts ("children's literature") is unstable. Childhood two hundred years ago (and consequently the books designed for it) may seem so remote from current childhood and its texts that a distinction might be made between "historical children's literature", or books that were for children, and "contemporary children's literature," books that address or relate to recognizable current childhoods (P. Hunt 1996; Flynn 1997). The body of texts can be seen as a symbiotic movable feast: the book defines its audience, which is children, and that in turn affects how children are generally defined as well as how they actually will be in the future. In this context, the term "children" is increasingly being interpreted as "comparatively inexperienced/unskilled readers" (Nell & Paul 2001: 43). Jacqueline Rose, who, in The Case of Peter Pan (1984), carefully uses the term "children's fiction," suggests that children's fiction is impossible, not in the sense that it cannot be written, but that it hangs on "the impossible relation between adult and child". Children's fiction sets up a world in which the adult comes first (author, maker, giver) and the child comes after (reader, product, receiver), but where neither of them enters the space in between (ibid: 44). III. CHARACTERISTICS OF CHILDREN'S LITERATURE Before we start to elaborate on the challenges of children's literature translation, it is essential to refer to some peculiarities and characteristics of children's literature as such. One of the characteristics of children's literature is its ambivalence due to its dual readership. To Rudvin and Orlati, ambivalent texts are those "written for and received by both adults and children at various textual levels of both production and reception" (2006: 159). This is a challenge to a translator and an issue of concern in children's literature translation. Quoting Metcalf: "More children's books than ever before address a dual audience of children and adults, which on the other hand comes with a dual challenge for the translator, who now has to address both audiences in the translated literature" (2003: 323). Preserving multiple levels in the text, the conventional one easily grasped by the child reader and the other understandable only to adults, is one of the biggest challenges for translators of children's literature (Frimmelova 2010: 35). The Harry Potter saga is a very good illustration of an ambivalent text. Hundreds of pages and a seven-book compilation cannot be appealing to teenagers only, not to mention the linguistic complexities and layers it encompasses due to the author's sophisticated style of writing.
Asymmetry is another feature of children's literature, which entails the relationship between the writers, who are adults, and the readers, who are children. When the partners in communication are not equal, communication structures are asymmetric. Children's literature differs from adult literature in that the authors of children's books and their audience have a different level of knowledge and experience. It is adults who decide on the literary form and it is they who decide what to publish and what to sell, without giving the children a chance to decide for themselves. Another important characteristic of children's literature, seen from the pedagogical viewpoint, is its aim to educate the child reader. As Puurtinen points out, adults expect children's literature to help in the development of the child's linguistic skills. Therefore, there might be a stronger tendency for authors and translators of children's literature to normalize the texts by grammaticising them, in order to avoid the readership learning faulty grammar from the books (Puurtinen 1998). IV. THEORETICAL ASPECTS OF CHILDREN'S LITERATURE TRANSLATION: There are two main trends of translation procedure: source-oriented translation and target-oriented translation. The first approach advocates the preservation of the source language and cultural characteristics (being faithful to the form and meaning), whereas the latter favors the "merging" of the source text into the target language culture, bringing it closer to the readership. Instead of aiming at an adequate translation, the translator should aim at an acceptable translation, considering the fact that children's reading abilities are not as advanced as adults' and their knowledge of the world is limited. The polysystem is conceived as a heterogeneous, hierarchized conglomerate of systems which interact to bring about an ongoing, dynamic process of evolution within the polysystem as a whole. Even-Zohar's polysystem theory places literature in two positions: in the center and in the periphery. The closer to the periphery, the lower the cultural status of the subsystem within the polysystem. Translated literature constitutes one of the subsystems, and it might position itself either in the center, representing a significant part of a country's literature, or remain in the periphery, exerting less influence (Baker 1998: 176). According to Shavit, unlike contemporary translators of adults' books, the translator of children's literature can permit himself great liberties regarding the text as a result of the peripheral position of children's literature within the polysystem. That is, the translator is permitted to manipulate the text in various ways by changing, enlarging or abridging it, or by deleting or adding to it (1986: 111). "In viewing translation as part of a transfer process, it must be stressed that the subject at stake is not just translations of texts from one language to another, but also the translations of texts from one system to another -- for example, translations from the adult system into the children's" (Shavit 1986: 111). Another translation theory that has made a great contribution to the translation process of children's literature is Vermeer and Reiss's skopos theory. The skopos (purpose) of translation is the main criterion of this theory, which shifted the attention from source-oriented approaches to target-oriented procedures, thus putting the reader at the center of this process. As a result of this approach, the status and responsibilities of the translator changed as well,
having more freedom to resort to strategies which meet the children's special demands as the main readers. "The translator is 'the' expert in translational action. He is responsible for the translational action" (Vermeer: 223). According to skopos theory, the translator is considered a "cultural product" and the process of translation "a culture-sensitive procedure" (Vermeer in Snell-Hornby, Pöchhacker and Kaindl 1994). In the context of children's literature, skopos theory made significant changes to the status of translators, readers and the translation process. "It is the task of the translator to decide how she/he will compensate for the children's lack of background knowledge without oversimplifying the original and forcing children into simple texts that have lost any feature of difficulty, foreignness and challenge" (Stolze 2003: 209). In the late 1980s, Klingberg, in his Children's Fiction in the Hands of the Translators, criticized what he perceived as the most common way to translate books for children. In his view, the main aim of this activity should be that of enriching the reader's knowledge and understanding of foreign cultures. Yet, most translators' interventions on the source texts, what he categorizes as 'cultural context adaptations', 'purifications', 'modernizations', 'abridgements' and 'serious mistranslations', hinder that aim. Klingberg suggested that translation strategies which tend to preserve the foreign spirit of the originals should be preferred, so that the child reader can get acquainted with the country and the culture from which those books come. Zohar Shavit has made an important contribution to the translation of children's literature in that she utilized the polysystem theory introduced by Itamar Even-Zohar to explain the translational patterns of children's literature. Polysystem theory had a strong impact on research into the translation of children's literature, because it elevated a genre regarded as minor to a central object of research. CONCLUSION The study of children's literature is a well-established discipline and many scholars are making their contribution, despite the misconception that children's literature is less important and less sophisticated than adults' literature. On the other hand, translation studies of children's literature are embryonic, and only in the last two decades have theorists been elaborating on translation strategies with a focus on children as a target group and on their reading competences and demands. The primary aim of this paper has been to give an overall view of the subject of children's literature and its translation from a theoretical perspective. Even though an attempt has been made to give a panorama of the current situation of this field, it was impossible, due to the constraints and the length of this paper, to cover all the facets of this discipline. However, it was concluded that there is no final definition of children's literature because of the wide range of topics, genres and elements it covers and the fact that this kind of literature is written by adults and addressed to children. There are scholars who believe that there is no such thing as children's literature, due to the fact that the child reader is a passive actor who is offered everything that adults consider appropriate for them. As far as the characteristics of children's literature are concerned, it was observed that such texts are appealing to children as well as adults, and such ambivalence constitutes one of the biggest challenges both for writers and translators. Asymmetry was another feature of children's literature which was highlighted in this paper. Asymmetry refers to the relationship between the writers, who are adults, and the readers, who are children. Additionally, from the pedagogical viewpoint, the purpose of children's literature is to educate. While analyzing the theoretical aspects of translation, it was observed that different theoreticians have different approaches as to whether to preserve the culture of the source text during the translation process or to simplify it and replace culture-bound words with their equivalents in the target language. Finally, we must say that, no matter what strategy the translator resorts to, he/she must produce a text which conveys the elements of the unusual, but which is acceptable and easy to read and remember, without underestimating the children's knowledge of the world.
3,144.4
2015-08-30T00:00:00.000
[ "Economics" ]
Methicillin-Resistant Staphylococcus aureus (MRSA) Detected at Four U.S. Wastewater Treatment Plants Background: The incidence of community-acquired methicillin-resistant Staphylococcus aureus (CA-MRSA) infections is increasing in the United States, and it is possible that municipal wastewater could be a reservoir of this microorganism. To date, no U.S. studies have evaluated the occurrence of MRSA in wastewater. Objective: We examined the occurrence of MRSA and methicillin-susceptible S. aureus (MSSA) at U.S. wastewater treatment plants. Methods: We collected wastewater samples from two Mid-Atlantic and two Midwest wastewater treatment plants between October 2009 and October 2010. Samples were analyzed for MRSA and MSSA using membrane filtration. Isolates were confirmed using biochemical tests and PCR (polymerase chain reaction). Antimicrobial susceptibility testing was performed by Sensititre® microbroth dilution. Staphylococcal cassette chromosome mec (SCCmec) typing, Panton-Valentine leucocidin (PVL) screening, and pulsed field gel electrophoresis (PFGE) were performed to further characterize the strains. Data were analyzed by two-sample proportion tests and analysis of variance. Results: We detected MRSA (n = 240) and MSSA (n = 119) in 22 of 44 (50%) and 24 of 44 (55%) wastewater samples, respectively. The odds of samples being MRSA-positive decreased as treatment progressed: 10 of 12 (83%) influent samples were MRSA-positive, while only one of 12 (8%) effluent samples was MRSA-positive. Ninety-three percent and 29% of unique MRSA and MSSA isolates, respectively, were multidrug resistant. SCCmec types II and IV, the pvl gene, and USA types 100, 300, and 700 (PFGE strain types commonly found in the United States) were identified among the MRSA isolates. Conclusions: Our findings raise potential public health concerns for wastewater treatment plant workers and individuals exposed to reclaimed wastewater. Because of increasing use of reclaimed wastewater, further study is needed to evaluate the risk of exposure to antibiotic-resistant bacteria in treated wastewater. Staphylococcus aureus is a bacterial pathogen associated with a wide range of human infections, including skin infections, pneumonia, and septicemia (Bassetti et al. 2009). Infections with this microorganism can be difficult to treat because the strains are often resistant to one or more antibiotics, including methicillin. Methicillin-resistant S. aureus (MRSA) was first isolated in 1960, and for the past four decades MRSA infections have been largely associated with hospital environments and referred to as hospital-acquired MRSA (HA-MRSA) (Bassetti et al. 2009; Gorwitz et al. 2008). However, in the late 1990s, community-acquired MRSA (CA-MRSA) infections began to appear in otherwise healthy people who had no known risk factors for these infections (Bassetti et al. 2009; Gorak et al. 1999). The incidence of CA-MRSA has continued to increase in the United States. Outbreaks of CA-MRSA have occurred among individuals sharing close contact with others in schools, prisons, and locker rooms, but other possible environmental reservoirs of MRSA have yet to be comprehensively explored (Diekema et al. 2001). Identifying environmental reservoirs of MRSA in the community is critical if the spread of CA-MRSA infections is to be controlled. Of other potential environmental reservoirs, wastewater has been identified as a possible source of exposure to MRSA in the community (Börjesson et al. 2009, 2010; Plano et al. 2011).
Colonized humans shed MRSA from the nose, feces, and skin; therefore, MRSA can end up in municipal wastewater streams (Börjesson et al. 2009, 2010; Plano et al. 2011; Wada et al. 2010). Börjesson et al. (2009) recently detected MRSA resistance genes in all treatment steps at a Swedish municipal wastewater treatment plant (WWTP). These authors also cultured MRSA from influent samples (Börjesson et al. 2009), as well as from influent and activated sludge samples (Börjesson et al. 2010). Currently, as water shortages expand, treated municipal wastewater is increasingly used for applications including landscape and crop irrigation, groundwater recharge, and snowmaking (Levine and Asano 2004; Tonkovic and Jeffcoat 2002). During these activities, individuals applying, using, or coming in contact with reclaimed wastewater could potentially be exposed to MRSA and other bacteria that may remain in treated wastewater (Iwane et al. 2001). To our knowledge, no studies have demonstrated the occurrence of MRSA in wastewater in the United States. In the present study, we evaluated the occurrence of MRSA and methicillin-susceptible S. aureus (MSSA) at four WWTPs located in two different regions of the United States: the Mid-Atlantic region and the Midwest. To further assess the MRSA strains, isolates were characterized by staphylococcal cassette chromosome mec (SCCmec) typing and pulsed field gel electrophoresis (PFGE), and screened for Panton-Valentine leucocidin (PVL), an exotoxin often associated with virulent strains of S. aureus. Materials and Methods Study sites. Four WWTPs were included in this study: two in the Mid-Atlantic region and two in the Midwest. The treatment steps and sampling locations at each of the treatment plants are illustrated in Figure 1.
Mid-Atlantic WWTP1 (Figure 1A) is a tertiary WWTP in an urban area that processes 681,390 m³/day of wastewater, with a peak capacity of 1.51 million m³/day. Mid-Atlantic WWTP2 (Figure 1B), a tertiary WWTP in a suburban area, processes 7,570 m³/day of wastewater and has a peak capacity of 45,425 m³/day. Tertiary wastewater treatment includes primary treatment (physical removal of solids), secondary treatment (biological treatment), and additional treatment that can include, but is not limited to, chlorination, ultraviolet radiation, or filtration. The incoming wastewater (influent) at both Mid-Atlantic plants includes domestic and hospital wastewater, and effluent (discharge) from both Mid-Atlantic plants is piped to landscaping sites for reuse in spray irrigation. Midwest WWTP1 (Figure 1C) is a tertiary WWTP in a rural area that processes 1,363 m³/day of wastewater, with a peak capacity of 10,978 m³/day. The incoming water includes domestic wastewater and agriculturally influenced stormwater. Seasonal chlorination occurs in June, July, and August, and chlorinated effluent is piped to a landscaping site for reuse in spray irrigation. Midwest WWTP2 (Figure 1D), a secondary WWTP (with no on-site disinfection) in a rural area, processes 1,439 m³/day and has a peak capacity of 7,571 m³/day. Secondary wastewater treatment includes only primary treatment (physical removal of solids) and secondary treatment (biological treatment). The incoming water at this plant includes domestic wastewater, wastewater from a food production facility, and agriculturally influenced stormwater. Unchlorinated effluent is piped to an agricultural site for crop irrigation. Sample collection. A total of 44 grab samples were collected between October 2009 and October 2010: 12 samples from Mid-Atlantic WWTP1; 8 from Mid-Atlantic WWTP2; 12 from Midwest WWTP1; and 12 from Midwest WWTP2. The timing of each sampling event was determined by the availability and schedule of the WWTP operators. The sampling time schedule and specific sampling locations for each plant are indicated in Tables 1 and 2 and Figure 1. Samples were collected in 1-L sterile polyethylene Nalgene® Wide Mouth Environmental Sample Bottles (Nalgene, Lima, OH), labeled, and transported to the laboratory at 4°C. All samples were processed within 24 hr. Isolation. Membrane filtration was used to recover S. aureus and MRSA from wastewater samples. Briefly, 300 mL of each sample were vacuum filtered through a 0.45-µm, 47-mm mixed cellulose ester filter (Millipore, Billerica, MA). Filters were then enriched in 40 mL of m Staphylococcus broth (Becton, Dickinson and Company, Franklin Lakes, NJ), vortexed, and incubated at 37°C for 24 hr. A 10-µL loopful of each enrichment was then plated in duplicate on MRSASelect (Bio-Rad Laboratories, Hercules, CA) and Baird Parker agar (Becton, Dickinson and Company) for the isolation of MRSA and total S. aureus, respectively. Plates were incubated at 37°C for 24 hr. Resulting black colonies with halos on Baird Parker agar and hot pink colonies on MRSASelect were considered presumptive S. aureus and MRSA, respectively.
These colonies were purified on Brain Heart Infusion (BHI) agar (Becton, Dickinson and Company) and archived in Brucella broth (Becton, Dickinson and Company) with 15% glycerol at -80°C. For quality control and quality assurance throughout the isolation process, S. aureus ATCC 43300 [American Type Culture Collection (ATCC), Manassas, VA] was used as a positive control and phosphate-buffered saline was used as a negative control. Identification. S. aureus and MRSA were confirmed using Gram stain, the coagulase test (Becton, Dickinson and Company), the catalase test, and polymerase chain reaction (PCR). DNA extraction was carried out using the MoBio UltraClean® Microbial DNA Isolation Kit (Mo Bio Laboratories, Carlsbad, CA) following the manufacturer's recommendations. For confirmation of S. aureus, we carried out PCR amplification of the S. aureus-specific nuc gene using NUC1 and NUC2 primers (Fang and Hedin 2003). For MRSA differentiation, we performed PCR amplification targeting the mecA gene, which encodes for methicillin resistance, using ECA1 and MECA2 primers, as previously described by Fang and Hedin (Brakstad et al. 1992; Fang and Hedin 2003; Smyth et al. 2001). The method was modified by including an internal control, using primers targeting the 16S rDNA genes, in a multiplex PCR assay (Edwards et al. 1989). PCR amplification consisted of an initial denaturing step of 95°C for 3 min, followed by 34 cycles of denaturing at 94°C for 30 sec, annealing at 55°C for 30 sec, and extension at 72°C for 30 sec, with a final extension at 72°C for 5 min. PVL screening. All MRSA isolates, confirmed by possession of the nuc and mecA genes by PCR and an identifiable SCCmec type (n = 236), were screened for PVL by PCR of the pvl gene according to Strommenger et al. (2008). S. aureus ATCC strain 25923 was used as a positive control. PFGE. We performed PFGE on a subset of 22 MRSA isolates. To ensure a diverse, representative subset, isolates were selected using the following criteria: treatment plant, sampling date, SCCmec type, and each sampling location that had a positive sample. PFGE was based on the Centers for Disease Control and Prevention (CDC) Laboratory Protocol for Molecular Typing of S. aureus by PFGE (CDC 2011). We used SmaI (Promega, Madison, WI) to digest genomic DNA. Digested samples were run in 1% SeaKem® Gold agarose gels (Cambrex Bio Science Rockland Inc., Rockland, ME) in 0.5X TBE (tris-borate-EDTA) using a CHEF Mapper (Bio-Rad) for 18.5-19 hr at 200 V, 14°C, and initial and final switch times of 5 and 40 sec. Cluster analysis was performed using BioNumerics software v5.10 (Applied Maths Scientific Software Development, Saint-Martens-Latem, Belgium) using the Dice coefficient and the unweighted pair-group method. Optimization settings for dendrograms were 1.0% with a position tolerance of 0.95%. Based on the similarity of the control strains, isolates were considered clones if similarity was ≥ 88%. Salmonella serotype Braenderup strain H9812 was used as the standard. PFGE strain types were compared with USA types (100, 200, 300, 400, 500, 600, 700, 800, 1000, and 1100).
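The clone definition used above (Dice similarity of banding patterns, UPGMA clustering, and a ≥ 88% similarity cut-off) was applied in BioNumerics; the short Python sketch below only illustrates the underlying logic. The band positions are hypothetical, and the band-position tolerance applied by BioNumerics is ignored here.

# Illustrative sketch of Dice/UPGMA clustering of PFGE banding patterns; this is
# not the BioNumerics workflow used in the study. Band positions (kb) are hypothetical.
from itertools import combinations

import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

isolates = {
    "MRSA_01": {668, 452, 361, 324, 262, 208, 167, 135},
    "MRSA_02": {668, 452, 361, 324, 262, 208, 167, 112},
    "MRSA_03": {580, 410, 290, 244, 190, 150, 98},
}

def dice_similarity(a, b):
    # Dice coefficient: 2 x shared bands / total number of bands in both patterns.
    return 2 * len(a & b) / (len(a) + len(b))

names = list(isolates)
n = len(names)
dist = np.zeros((n, n))
for i, j in combinations(range(n), 2):
    d = 1 - dice_similarity(isolates[names[i]], isolates[names[j]])
    dist[i, j] = dist[j, i] = d

# UPGMA corresponds to average-linkage hierarchical clustering.
tree = linkage(squareform(dist), method="average")

# Cut the dendrogram at 12% distance (i.e. 88% similarity) to define clonal groups.
clusters = fcluster(tree, t=0.12, criterion="distance")
for name, cluster_id in zip(names, clusters):
    print(name, "-> clonal group", cluster_id)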
Statistical analyses. Descriptive statistics include the percentages of wastewater samples positive for MRSA (Table 1) and MSSA (Table 2) by WWTP. Because PFGE was not performed on all isolates, statistical analyses of antibiotic resistance data were limited to MRSA (n = 84) and MSSA (n = 58) isolates expressing unique phenotypic profiles; this allowed us to reduce bias that could be introduced by including clones. Two-sample tests of proportions were performed between MRSA and MSSA isolates with respect to the percent resistance of each group of isolates to each of the 18 tested antibiotics. Analysis of variance was then used to compare the average numbers of antibiotics against which MRSA and MSSA isolates were resistant. In all cases, p-values ≤ 0.05 were defined as statistically significant. All statistical analyses were performed using Stata/IC 10 (StataCorp LP, College Station, TX) and SAS 9.2 (SAS Institute Inc., Cary, NC). Across all treatment plants sampled, 55% (24/44) of wastewater samples were positive for MSSA (Table 2): 60% (12/20) of samples from Mid-Atlantic WWTPs and 50% (12/24) of samples from Midwest WWTPs. Eighty-three percent (10/12) of influent samples from all WWTPs were MSSA-positive; 100% from Mid-Atlantic WWTPs and 71% from Midwest WWTPs. MSSA was not detected in tertiary-treated (chlorinated) effluent samples (Table 2). However, MSSA was detected in two effluent samples from Midwest WWTP1 in September and October 2010 when chlorination was not taking place. Of all four WWTPs, Midwest WWTP2 had the lowest percentage of MSSA-positive wastewater samples, and MSSA was detected only in the influent. Antibiotic resistance patterns. In total, 240 MRSA isolates were isolated from all of the WWTPs. However, because PFGE was not performed on all isolates, the statistical analyses concerning antibiotic resistance patterns among these isolates were limited to those that could be confirmed as unique (n = 84) using phenotypic analyses. The unique MRSA isolates had a median OXA+ MIC of ≥ 16 µg/mL (range, 4 to ≥ 16 µg/mL) and expressed resistance to several antibiotics approved by the U.S. Food and Drug Administration for treating MRSA infections, including TET, CIP, LEVO, GAT, and CLI, as well as LZD and DAP (Figure 2), which are important alternatives to older antibiotics for treating severe MRSA infections (Johnson and Decker 2008). Antimicrobial resistance patterns among unique MRSA isolates varied by WWTP and sampling location (Figure 2). In general, at both Mid-Atlantic WWTPs and at Midwest WWTP1, the percentage of isolates resistant to individual antibiotics increased or stayed the same as treatment progressed (Figure 2A-2C). At Midwest WWTP2, only influent samples were positive for MRSA, and the majority of these isolates were resistant to most of the tested antibiotics (Figure 2D). In total, 119 MSSA isolates were isolated from all WWTPs. Similar to our statistical analyses of MRSA isolates, our analyses of antimicrobial resistance patterns among MSSA isolates were limited to those isolates that could be confirmed as unique (n = 58) using phenotypic analyses. Antimicrobial resistance patterns among unique MSSA isolates also varied by WWTP (Figure 3). The percentages of ERY-, AMP- and PEN-resistant unique MSSA isolates at Mid-Atlantic WWTP1 increased as treatment progressed, whereas the percentages of isolates resistant to the fluoroquinolones (LEVO, CIP, and GAT) decreased from influent to activated sludge reactor samples (Figure 3A). At Mid-Atlantic WWTP2, the percentages of ERY-, AMP-, PEN-, and GAT-resistant MSSA isolates increased from influent to activated sludge reactor samples (Figure 3B). Similarly, among Midwest WWTP1 and Midwest WWTP2 MSSA, resistance to AMP and PEN increased as treatment progressed (Figure 3C,D).
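The group comparisons described above were run in Stata and SAS; purely as an illustration of the same tests, a Python sketch might look like the following. The resistance counts are hypothetical, not the study's data.

# Illustrative Python versions of the two-sample proportion test and the ANOVA
# described above; the study used Stata/IC 10 and SAS 9.2.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.proportion import proportions_ztest

# Two-sample test of proportions: resistance to one antibiotic among unique
# MRSA (n = 84) versus unique MSSA (n = 58) isolates.
resistant = np.array([70, 18])   # hypothetical numbers of resistant isolates
totals = np.array([84, 58])      # unique MRSA and MSSA isolates tested
z_stat, p_prop = proportions_ztest(count=resistant, nobs=totals)
print(f"two-proportion z = {z_stat:.2f}, p = {p_prop:.4f}")

# Analysis of variance comparing the number of antibiotics (out of 18) to which
# each isolate is resistant, between the two groups.
rng = np.random.default_rng(0)
mrsa_counts = rng.integers(5, 16, size=84)   # hypothetical per-isolate counts
mssa_counts = rng.integers(0, 8, size=58)
f_stat, p_anova = f_oneway(mrsa_counts, mssa_counts)
print(f"ANOVA F = {f_stat:.2f}, p = {p_anova:.4f}")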
PFGE. Clusters based on > 88% similarity resulted in 12 unique types among our subset of 22 isolates, suggesting a heterogeneous population among MRSA from U.S. WWTPs (Figure 5). Three different USA types, 100, 300, and 700, were identified. Nine isolates did not match any of the USA types. MRSA and MSSA occurrence in U.S. wastewater. Although MRSA has been identified in WWTPs in Sweden (Börjesson et al. 2009, 2010), to our knowledge, this is the first report of the detection of MRSA at municipal WWTPs in the United States. Fifty percent of total wastewater samples were positive for MRSA, and 55% of total samples were positive for MSSA. Yet, the odds of samples being MRSA-positive decreased as treatment progressed. For example, 10 of 12 (83%) influent samples were MRSA-positive, but only 1 of 12 (8%) effluent samples was MRSA-positive (Table 1). Based on these findings, wastewater treatment seems to reduce the number of MRSA and MSSA isolates released in effluent. However, the few isolates that do survive in effluent might be more likely to be MDR and virulent isolates. Previous studies conducted in Sweden have also reported a decline in MRSA as wastewater treatment progressed. Specifically, Börjesson et al. (2009) showed that the concentration of MRSA as measured by real-time PCR assays decreased as treatment progressed from approximately 6 × 10³ to 5 × 10² mecA genes per 100 mL from inlet to outlet, except for a peak in activated sludge reactor samples of 5 × 10⁵ mecA genes per 100 mL (Börjesson et al. 2009). On the basis of these findings, we might also expect to see an overall decrease in MRSA concentrations throughout the wastewater treatment process in the United States, except for perhaps a peak in activated sludge. It is also interesting that at Midwest WWTP2, the only WWTP in the study that did not employ an activated sludge step, MRSA was detected only in the influent. The lack of MRSA detected beyond influent at Midwest WWTP2 could be due to the effectiveness of an anaerobic step in the sequencing batch reactor (Figure 1) (Minnigh H, personal communication). Cycling of MRSA between humans and the environment. Our findings also provide evidence that municipal wastewater could serve as a medium for the cycling of CA-MRSA strains between humans and the environment. MRSA has been found at concentrations between 10⁴ and 10⁸ colony-forming units (CFU)/g of fecal material (Wada et al. 2010). PVL-positive strains, SCCmec type IV, and USA 300, all of which characterize the majority of the MRSA isolated from wastewater in the present study, have traditionally been associated with CA-MRSA (Gorwitz et al. 2008; Seybold et al. 2006). The high prevalence of PVL-positive CA-MRSA in the U.S. population compared with those in other countries could explain the high percentage of PVL-positive MRSA isolates in wastewater in the present study (Seybold et al. 2006; Tristan et al. 2007). The association of PVL-positive MRSA and CA-MRSA with skin and soft tissue infections could also explain the occurrence of PVL-positive MRSA isolates in wastewater samples in the present study, because MRSA could be shed in showers at concentrations of approximately 1.4 × 10⁴ to 1.0 × 10⁵ CFU/person (Lina et al. 1999; Plano et al. 2011).
The large cluster of MRSA isolates we recovered that were PVL-positive and showed similarity to USA 300 is concerning because both USA 300 strains (which are typically resistant to erythromycin and β-lactam antibiotics) and the pvl gene are associated with increased virulence, severe bloodstream infections, and necrotizing pneumonia (Gorwitz et al. 2008; Lina et al. 1999; McDougal et al. 2003). Moreover, the abundance of SCCmec type IV among the recovered MRSA isolates could indicate superior survival characteristics, namely the lower energy cost of SCCmec type IV carriage (Börjesson et al. 2010). SCCmec type IV strains that we recovered appeared to persist longer in the wastewater treatment process than type II strains. However, this phenomenon warrants further investigation because our results are based on only one WWTP (Mid-Atlantic WWTP1), and a previous study found that SCCmec types were not significantly associated with MRSA survival (Levin-Edens et al. 2011). Four isolates that did not have the mecA band in SCCmec typing but were found to be OXA+ resistant through antimicrobial susceptibility testing could have the novel mecA homolog, MRSA-LGA 251, as identified by García-Álvarez et al. (2011). Public health implications. Our findings raise potential public health concerns for WWTP workers and individuals exposed to reclaimed wastewater. WWTP workers could potentially be exposed to MRSA and MSSA through several exposure pathways, including dermal and inhalation exposures. However, few studies have evaluated microbial exposures among WWTP workers. Mulloy (2001) summarized findings of exposures to Leptospira, hepatitis A, and bacterial enterotoxins and endotoxins among WWTP workers. Yet, to our knowledge, no studies have evaluated MRSA or MSSA carriage rates among these populations. Encouraging frequent handwashing and the use of gloves among WWTP workers could reduce the potential risks associated with possible MRSA exposures. Other individuals who are exposed to reclaimed secondary wastewater, including spray irrigators and people living near spray irrigation sites, could potentially be exposed to MRSA and MSSA. No federal regulations exist for wastewater reuse from either secondary or tertiary facilities, although the U.S. Environmental Protection Agency (EPA) has issued water reuse guidelines (U.S. EPA 2004). States determine whether to develop regulations or guidelines to oversee the use of reclaimed wastewater within their boundaries, and most state guidelines allow secondary effluent to be used for certain reuse applications, including spray irrigation of golf courses, public parks, and agricultural areas (U.S. EPA 2004). In the present study, we detected MRSA and MSSA in unchlorinated effluent from Midwest WWTP1, a WWTP with only seasonal chlorination (it could be defined as a secondary treatment plant during periods when chlorine is not applied). Our findings suggest that implementing tertiary treatments for wastewater that is intended for reuse applications could reduce the potential risk of MRSA exposures among individuals who are working on or living by properties sprayed with reclaimed wastewater. Limitations. There are some notable limitations of this study. First, the number and timing of sampling events and samples collected at each WWTP were not the same because of access issues at some of the plants. Second, enrichment of the samples preempted our ability to report concentrations of MRSA and MSSA in wastewater.
Finally, because PFGE was performed on a representative subset of all MRSA isolates, the true heterogeneity of the MRSA isolates contained in the wastewater samples may have been underestimated. On the other hand, MRSA strains have evolved from a small number of clonal strains, so the likelihood of isolating MRSA with phenotypic and genetic similarities during our isolation procedure was high (Enright et al. 2002; Fang and Hedin 2003). However, the goal of the present study was to evaluate the occurrence of MRSA at WWTPs in the United States and, even if clones were selected, the findings concerning the presence and types of MRSA at the four WWTPs are still accurate. Conclusions To our knowledge, our study is the first to demonstrate the occurrence of MRSA in U.S. municipal wastewater. Although tertiary wastewater treatment may effectively reduce MRSA in wastewater, secondary-treated (unchlorinated) wastewater could be a potential source of exposure to these bacteria in occupational settings and reuse applications. Because of increasing use of reclaimed wastewater, further study is needed to evaluate the potential risk of antibiotic-resistant bacterial infections from exposure to treated wastewater.
5,534.2
2012-09-06T00:00:00.000
[ "Medicine", "Biology" ]
The C-terminal LCAR of host ANP32 proteins interacts with the influenza A virus nucleoprotein to promote the replication of the viral RNA genome Abstract The segmented negative-sense RNA genome of influenza A virus is assembled into ribonucleoprotein complexes (RNP) with viral RNA-dependent RNA polymerase and nucleoprotein (NP). It is in the context of these RNPs that the polymerase transcribes and replicates viral RNA (vRNA). Host acidic nuclear phosphoprotein 32 (ANP32) family proteins play an essential role in vRNA replication by mediating the dimerization of the viral polymerase via their N-terminal leucine-rich repeat (LRR) domain. However, whether the C-terminal low-complexity acidic region (LCAR) plays a role in RNA synthesis remains unknown. Here, we report that the LCAR is required for viral genome replication during infection. Specifically, we show that the LCAR directly interacts with NP and this interaction is mutually exclusive with RNA. Furthermore, we show that the replication of a short vRNA-like template that can be replicated in the absence of NP is less sensitive to LCAR truncations compared with the replication of full-length vRNA segments which is NP-dependent. We propose a model in which the LCAR interacts with NP to promote NP recruitment to nascent RNA during influenza virus replication, ensuring the co-replicative assembly of RNA into RNPs. INTRODUCTION Influenza A viruses belong to segmented negative strand RNA viruses (sNSVs) and represent a major threat to human and animal health. The influenza A virus genome is composed of negative-sense single-stranded viral RNA (vRNA) segments, which are assembled into separate viral ribonucleoprotein (vRNP) complexes with a heterotrimeric RNA-dependent RNA polymerase and multiple copies of viral nucleoprotein (NP) (1)(2)(3). The viral polymerase is responsible for transcription and replication of the vRNA in association with host factors (4). During transcription, the polymerase copies vRNA into capped and polyadenylated mRNAs in association with host RNA polymerase II (4). During replication, the polymerase first copies the vRNA to generate complementary RNA (cRNA), which is assembled with polymerase and NP into a complementary ribonucleoprotein (cRNP) complex and serves as a template for vRNA synthesis. Multiple polymerase molecules are required for replication: in addition to the polymerase resident in the vRNP and cRNP that acts as a replicase, a trans-activating polymerase is required that specifically promotes the cRNA to vRNA step by assisting template realignment (5)(6)(7)(8). Furthermore, an encapsidating polymerase which captures the nascent RNA strand and initiates its assembly into progeny RNP during both steps of replication is required (9). It has been proposed that during replication elongation, NP molecules are recruited to the growing nascent strand of cRNA and vRNA to ensure their co-replicative assembly into RNPs so that no exposed cRNA and vRNA is generated that could be recognized by innate immune sensors. Non-segmented NSVs (nsNSVs) encode an acidic phosphoprotein (P) which bridges the polymerase (L) and nucleoprotein (N), and recruits N to nascent replication products (3,10,11). The P protein also acts as chaperone of N preventing its oligomerization to guarantee the supply of monomeric RNA-free N to the nascent RNA strand (12)(13)(14)(15). However, sNSVs lack an intrinsic P protein and it remains unclear how the virus recruits NP during the elongation step of viral genome replication. 
Acidic nuclear phosphoprotein 32 (ANP32) family proteins are known as important host factors of influenza viruses, specifically supporting viral genome replication (16,17). These proteins are composed of an N-terminal leucine-rich repeat (LRR) domain, followed by a C-terminal low-complexity acidic region (LCAR). A 33-amino-acid insertion in avian ANP32A between the LRR and LCAR domains was found to be critical for the activity of avian influenza virus polymerase (18). In a cryo-EM study, the LRR (amino acid residues 1-158) bridges an asymmetric dimer of influenza virus polymerase heterotrimers, which has been proposed to act as a replication platform for the viral genome (9). Although full-length human ANP32A (huANP32A) or chicken ANP32A (chANP32A) were used to form complexes with influenza C virus polymerase, the structure of the LCAR could not be fully resolved due to its flexibility. A region that extends from the LRR of chANP32A is located in a groove formed by both polymerase molecules. Despite not being fully modelled, this region is estimated to include 20-30 amino acid residues, suggesting that the N-terminal 180-190 residues of chANP32A are in contact with the polymerase dimer while the rest of the protein, which belongs exclusively to the LCAR, could be solution accessible. We hypothesize that the highly acidic, flexible LCAR could act as a molecular whip recruiting basic NP molecules to nascent RNA, thus mimicking the role of the P protein during nsNSV genome replication. In this paper, we investigate the role of the ANP32 LCAR in influenza A virus genome replication. We show that the ANP32 LCAR is required for RNA synthesis during virus infection and interacts directly with NP. Two previously identified RNA binding grooves of NP contribute to the interaction. We also find that the LCAR is important for polymerase function in a minigenome assay using a full-length neuraminidase (NA) genome segment as template RNA. However, the LCAR is dispensable for RNA production from a 47-nucleotide (nt) long template which can be replicated in the absence of NP (19). We propose that influenza virus uses the host protein ANP32 for NP recruitment to nascent RNA during the elongation stage of viral genome replication. Viral infections 293T-DKO cells in six-well plates were transiently transfected with 2.5 µg of plasmids expressing the indicated ANP32 proteins or their truncated versions. Twenty-four hours post-transfection, cells were infected with influenza A/WSN/33 virus at a multiplicity of infection (MOI) of 5 in DMEM containing 0.5% FBS. Eight hours post-infection, cells were harvested for the extraction of total RNA, and RNA levels were analysed by primer extension. RNA isolation and primer extension assays Total RNA was extracted from 293T-DKO cells in six-well plates using 500 µl TRI reagent (Sigma) according to the manufacturer's instructions, and 2 µg of RNA were subjected to primer extension analysis as previously described (26). Briefly, RNA was reverse transcribed using SuperScript III reverse transcriptase (Invitrogen) with ³²P-labeled NA or NP segment-specific primers. A primer targeting cellular 5S rRNA was included as an internal control. Transcription products were resolved by 6% or 12% denaturing PAGE with 7 M urea in TBE buffer and detected by phosphorimaging on an FLA-5000 scanner (Fuji). ImageJ was used to quantify cDNAs, and values were normalized to the cDNA derived from the 5S rRNA control. The values for the 'vector' control were subtracted from the sample values. Data were analysed using Prism 8 (GraphPad).
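The normalization just described (each signal divided by the 5S rRNA loading control, then the empty-vector background subtracted) is simple enough to sketch in Python; the band intensities and sample names below are hypothetical and purely illustrative.

# Minimal sketch of the quantification described above; all intensities and sample
# names are hypothetical.
band_intensities = {
    # sample: (viral RNA signal, 5S rRNA signal), e.g. as measured in ImageJ
    "vector": (1200.0, 50000.0),
    "huANP32A_WT": (9800.0, 48000.0),
    "huANP32A_1-188": (1500.0, 52000.0),
}

def normalized_signal(sample):
    viral, rrna_5s = band_intensities[sample]
    return viral / rrna_5s  # correct for loading using the 5S rRNA control

background = normalized_signal("vector")
for sample in band_intensities:
    if sample == "vector":
        continue
    corrected = normalized_signal(sample) - background  # subtract vector background
    print(f"{sample}: background-corrected RNA level = {corrected:.4f}")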
Minigenome assays Approximately 80% confluent monolayers of 293T-DKO cells in six-well plates were transfected with plasmids pcDNA-PB1 (1 µg), pcDNA-PB2 (1 µg), pcDNA-PA (0.5 µg), and pcDNA-NP (2 µg) together with pPOLI-NA (0.2 µg) and plasmids encoding the indicated ANP32 proteins (2.5 µg) using Lipofectamine 2000 (Invitrogen) according to the manufacturer's instructions. For the NP-independent replication assay, pcDNA-NP was omitted and the pPOLI-NA plasmid expressing NA vRNA was replaced with pPOLI-NP47 expressing a 47-nt short vRNA-like template derived from the NP segment. Total RNA from transfected cells was isolated using TRI reagent at the indicated time points post transfection. The extracted RNA was analysed by primer extension assay as previously described (26). Recombinant protein production Wildtype and mutant NP from influenza A/NT/60/1968 (H3N2) virus were cloned into the pGEX-6P-1 vector (GE Healthcare) with an N-terminal GST tag followed by a PreScission protease site for expression in Escherichia coli. Proteins were purified on Glutathione Sepharose (GE Healthcare). GST-tagged or untagged NP were released with 25 mM reduced glutathione or by cleavage with PreScission protease overnight, respectively, and further purified on a Superdex 200 Increase 10/300 GL column (GE Healthcare) using 25 mM HEPES-NaOH, pH 7.5, 150 mM NaCl and 5% (v/v) glycerol. Full-length and truncated GST-tagged huANP32A, huANP32B and chANP32A were cloned into pGEX-6P-1 as described above for NP. After purification on Glutathione Sepharose, GST-tagged ANP32 proteins were eluted using 25 mM reduced glutathione. Proteins were buffer exchanged into 25 mM HEPES-NaOH, pH 7.5, 150 mM NaCl and 5% (v/v) glycerol to remove glutathione using Amicon concentrators (Millipore). An empty pGEX-6P-1 vector was used to produce the GST tag as a negative control. Untagged chANP32A (1-220) was produced following the same protocol as described above for NP. GST pull-down assays All pull-down assays were performed in 25 mM HEPES-NaOH, pH 7.5, 150 mM NaCl, 5% (v/v) glycerol at 4°C. Approximately 200 µg bait (GST tag alone or GST-tagged ANP32 proteins) was incubated with 50 µl Glutathione Sepharose (GE Healthcare) for 2-3 h. The beads were then washed once with the same buffer as specified above before being loaded with 100 µg analyte (wildtype or mutant NP). If indicated, a 1.5× or 4.5× molar excess (over NP) of a 29-nt RNA (5′-AGUAGAAACAAGGCCGUAUAUGAACAGA-3′, Dharmacon) was added together with NP. The beads were then incubated for another 3-5 h and washed 3 times with the same buffer as above. The GST tag was cleaved overnight with 50 µg PreScission protease in the presence of 1 mM DTT to release un-tagged bait from the beads. Protein samples were analysed by SDS-PAGE. To analyse binding of huANP32A and NP from cell lysates to purified GST-NP and GST-huANP32A, respectively, 293T cells in six-well plates were transfected with 3 µg of pCAGGS-huANP32A or pcDNA-NP. Forty-eight hours post transfection, cells were lysed for 1 h at 37°C in 500 µl of cell lysis buffer (50 mM HEPES-NaOH, pH 8.0, 150 mM NaCl, 25% (v/v) glycerol, 0.5% NP-40, 1 mM β-mercaptoethanol, 2 mM MgCl₂, 1 mM PMSF, 1× complete EDTA-free protease inhibitor cocktail tablet (Roche)) in the presence or absence of 250 U of Benzonase (Sigma). 400 µl of cell lysate was applied to the beads with bait bound as described above. The beads were then incubated for another 3-5 h and washed 3 times before eluting overnight with 25 mM reduced glutathione.
Samples were analysed by western blotting. Analytical size exclusion chromatography Analytical size exclusion chromatography (SEC) experiments were performed on a Superdex 200 Increase 10/300 GL column (GE Healthcare) in 25 mM HEPES-NaOH, pH 7.5, 150 mM NaCl, 5% (v/v) glycerol at 4 °C. In the analytical SEC of NP (R416A) and chANP32A (1-220), either a mixture of purified NP (R416A) and chANP32A (1-220), or a complex of the two from a GST pull-down, was loaded on the column. The mixture of purified NP (R416A) and chANP32A (1-220) (1:2 molar ratio of NP:ANP32A) was incubated on ice for 3 h before injection onto the column. Split luciferase assay The assay was performed as described previously (24) with minor modifications. Briefly, control 293T or 293T-DKO cells were seeded at ∼65% confluency in 48-well plates. Cells were co-transfected with 20 ng pCAGGS expression plasmids encoding A/Victoria/3/75 (H3N2) NP-luc1 and chANP32A-luc2, huANP32A-luc2, huANP32B-luc2 or huANP32A 1-149 and incubated for 24 h at 37 °C. Samples in which pCAGGS-NP-luc1 was substituted with pCAGGS-luc1 and pCAGGS-NP, or pCAGGS-ANP32-luc2 was substituted with pCAGGS-luc2 and pCAGGS-ANP32, were used as background controls. Cells were lysed in 60 µl Renilla lysis buffer (Promega), with or without 10 ng/µl of RNase A, with gentle shaking for 1 h at room temperature. Gaussia luciferase activity was assayed using 20 µl of cell lysate and 100 µl of the Renilla luciferase reagent (Promega). Injection of substrate and measurement of bioluminescence were carried out using the FLUOstar Omega plate reader (BMG Labtech). Normalized luminescence ratios were calculated by dividing the signal from the chosen interacting partners by the sum of the two controls as described (27). The LCAR of ANP32 proteins is pivotal to viral RNA synthesis during influenza virus infection We have previously reported the structure of a complex between the influenza virus polymerase and ANP32A (9). In the structure, the N-terminal LRR mediates dimerization of heterotrimeric polymerase molecules and we proposed this dimer provides a replication platform for the influenza virus RNA genome. However, most of the C-terminal LCAR remains unresolved in the structure and its role in influenza virus replication remains unclear. To address this, we constructed a series of truncated versions of ANP32 proteins (huANP32A, huANP32B and chANP32A) based on reported structural and functional studies (9,28) (Figure 1A). To assess the effect of LCAR deletions on viral RNA synthesis in an infection scenario, we took advantage of the previously described double knockout human 293T cells that do not express huANP32A and huANP32B (293T-DKO) (20). [Figure 1 residue landmarks for huANP32A, huANP32B and chANP32A: 149, 188, 208, 220, 235, 249; 149, 188, 208, 220, 235, 251; and 149, 188, 208, 220, 235, 250, respectively. Legend fragment: values are relative to that of the polymerase in the presence of wildtype ANP32 proteins from three independent experiments; error bars represent the standard error of the mean (n = 3); significance was assessed using ordinary two-way ANOVA and asterisks indicate a significant difference as follows: *P < 0.05; ****P < 0.0001.] All ANP32 constructs expressed equally well in 293T-DKO cells (Figure 1B). Truncated huANP32A or huANP32B proteins were pre-expressed in 293T-DKO cells prior to infection with influenza A/WSN/33 virus and RNA accumulation at 8 hours post-infection was analysed using primer extension (Figure 1C).
Expression of full-length wildtype huANP32A (WT) resulted in a substantial increase of all three viral RNAs, including mRNA, cRNA and vRNA, compared to the control (vector only). Constructs retaining parts of the LCAR (1-208, 1-220 and 1-235) were also able to support viral RNA accumulation similar to wildtype (Figure 1C). However, deletion of most of the LCAR (construct 1-188) or the complete LCAR together with part of the LRR (construct 1-149) diminished RNA accumulation to basal levels (vector). Similar results were observed for huANP32B, although construct 1-188 was able to support some activity while construct 1-208 showed some reduction, as compared to wildtype (Figure 1D). These results indicate that the LCAR of ANP32 proteins plays an important role in RNA synthesis during virus replication, which is consistent with recently published studies (24,28,29). The LCAR of ANP32 proteins interacts directly with NP We hypothesized that the highly acidic flexible LCAR could act as a molecular whip recruiting the basic NP to nascent RNA during influenza virus RNA genome replication. To address this, we performed assays using purified recombinant ANP32 proteins and NP. The NP we used is derived from influenza A/NT/60/1968 (H3N2) virus (30). Since purified wildtype influenza A virus NP forms oligomers (31,32), we introduced the point mutation R416A, which makes NP monomeric (33,34). In a GST pull-down assay, NP was found to interact with N-terminally GST-tagged, full-length huANP32A, huANP32B and chANP32A, while no interaction was observed between NP and the GST tag alone (Figure 2A). When using LCAR-truncated ANP32 proteins comprising amino acid residues 1-188, corresponding to the region that mediates influenza virus polymerase dimerization (9), much weaker interactions were observed, particularly with chANP32A (Figure 2A). These results indicate that the LCAR is the primary mediator of the interaction between ANP32 proteins and NP. To further investigate the ANP32-NP interaction, we used a series of LCAR-truncated chANP32A constructs (Figure 1A). The amount of bound NP increased as the length of ANP32 proteins increased from 1-188 to 1-281 (wildtype) (Figure 2B). Notably, there was a substantial increase in binding between the 1-188 and 1-220 constructs, suggesting that the region of 189-220 is particularly important for NP binding. To corroborate this finding, we expressed a peptide corresponding to region 189-220 of chANP32A and tested its ability to bind NP. We found that the peptide pulled down NP at the same level as chANP32A 1-220, indicating that the 189-220 region of the LCAR enables efficient binding to NP, while the LRR contributes much less to this interaction (Figure 2C). The interaction of chANP32A with NP was further tested by size exclusion chromatography. Complexes were prepared by either GST pull-down or pre-mixing individually purified proteins. Both samples resulted in an earlier elution peak on a Superdex 200 column, compared with either chANP32A or NP alone (Figure 2D). This indicates that the complex formed by chANP32A and NP is stable in solution. RNA binding grooves of NP are involved in LCAR binding To characterize the ANP32A binding site on NP, and to gain a better understanding of the ANP32A-NP complex, we made several NP mutants based on its structure (Figure 3A) and performed GST pull-down assays in the absence or presence of a 29-nt RNA.
The presence of RNA severely reduced the interaction between a monomeric NP mutant (R416A) and chANP32A (Figure 3B). This suggests that RNA and the LCAR share the same binding interface on NP, and that RNA possesses a stronger binding affinity to NP. To further analyse this, we used a monomeric NP mutant (R416A) with four arginine-to-alanine mutations (R74A/R75A/R174A/R175A/R416A) in the G1 RNA binding groove (also known as the G1(4) mutant) (32,35). We found that this NP mutant, with or without the 29-nt RNA, was incapable of binding to chANP32A (Figure 3B). These data show that the G1 groove of NP contributes an important interface for the interaction with chANP32A. Next, we set out to test whether oligomeric NP could bind to chANP32A, using wildtype and a G1(4) mutant NP (R74A/R75A/R174A/R175A) in the assay. We found that wildtype NP bound to chANP32A but showed very little binding to the control GST tag (Figure 3C). Judging from band intensities, there appears to be more wildtype NP co-released with chANP32A post protease cleavage, compared with the R416A mutant. This could be due to multiple NP molecules binding to a single chANP32A resulting from its oligomerization. Surprisingly, we found that the G1(4) mutant showed chANP32A binding levels similar to that of the wildtype (Figure 3C). The presence of the 29-nt RNA strongly suppressed the interaction between chANP32A and the G1(4) mutant, as well as the wildtype NP (Figure 3D). These results suggest that multiple RNA binding sites of NP are likely to be involved in ANP32 protein binding. In addition to the G1 RNA-binding groove, NP possesses a second RNA-binding groove referred to as G2 (32). Available structures of NP suggest that in the monomeric NP mutant (R416A), this site could be partially blocked by the tail loop (residues 402-428) and the C-terminal acidic tail (residues 491-498). The tail loop packs against a site next to the G2 groove and lies close to R162; the C-terminal tail lies parallel to the G2 groove and close to R150 and R152 (Figure 3A). In wildtype oligomeric NP, on the other hand, the tail loop reaches the neighboring NP to form an inter-molecular salt bridge between R416 and E339 on the adjacent NP protomer; the C-terminal tail is either missing or partially modelled in a position opposite the RNA binding grooves (Figure 3A). Thus, key residues such as R150, R152 and R162 in the G2 RNA binding groove could be spatially blocked in the monomeric NP but remain exposed in the oligomeric state. These observations could explain why the monomeric and oligomeric forms of the G1(4) mutants show different affinities for chANP32A and led to the speculation that the G2 groove could contribute an additional chANP32A binding site. To test this hypothesis, we generated several mutants of NP with either the tail loop (ΔT) or the C-terminal tail (ΔC) deleted. Removal of the C-terminal tail (residues 491-498) rescued the chANP32A binding affinity of the monomeric G1(4) mutant, independent of whether the R416A point mutation or deletion of the whole tail loop (402-428) was used to prevent NP oligomerization (Figure 3E). These data suggest that in the monomeric NP, it is not the tail loop but the C-terminal tail that blocks the second NP-chANP32A interface, which is probably defined by the G2 groove.
[Figure 2 legend, in part: ...and an LCAR peptide corresponding to amino acid residues 189-220 of chANP32A, along with chANP32A 1-220 (C), with a cleavable N-terminal GST tag were immobilized on glutathione sepharose before the addition of NP with a R416A mutation. Bound proteins were released by treatment with PreScission protease. A purified GST tag alone was used as negative control (A, C). Unbound (upper gels) and released (lower gels) samples were analysed by SDS-PAGE and staining with Coomassie Brilliant Blue. Molecular weight markers are indicated in kDa. Note that the 189-220 LCAR peptide is too small to be captured on the gel. (D) Size exclusion chromatography of a complex of chANP32A 1-220 and NP R416A formed either using GST pull-down (pull-down) or mixing the two components (mixing).] To address whether the G2 groove plays a role, we mutated all eight arginine residues to alanine in both G1 and G2 grooves. This NP mutant had substantially reduced affinity to chANP32A as well as huANP32A and huANP32B compared with wildtype NP (Figure 3F). The interaction was diminished in the monomeric version of this mutant with both the tail loop and C-terminal tail deleted. These data confirm the contribution of both G1 and G2 grooves to the ANP32 protein interaction. ANP32 proteins interact with NP in cells To address whether ANP32 proteins and NP interact in cells, we first performed pull-down assays combining lysates of 293T cells expressing NP or huANP32A with purified recombinant GST-huANP32A or GST-NP expressed in bacteria. GST-huANP32A specifically pulled down NP and, in the reciprocal experiment, GST-NP specifically pulled down huANP32A from cell lysates (Figure 4A, B). GST-ANP32A pulled down significantly larger amounts of NP from cell lysates treated with the endonuclease Benzonase compared to lysates without Benzonase treatment, while Benzonase treatment did not increase huANP32A binding to GST-NP in the reciprocal experiment. This result indicates that RNA bound to NP in cell lysates competes with huANP32A binding, in agreement with our data above that RNA interferes with the ANP32-NP interaction. Next, we performed a split luciferase assay with the N-terminal half of Gaussia luciferase (gluc1) fused to the C terminus of NP and the C-terminal half (gluc2) fused to the C terminus of huANP32A, huANP32B and chANP32A. We used huANP32A lacking the complete LCAR as negative control (huANP32A 1-149). We detected significant bioluminescence in lysates containing any of the three full-length ANP32 proteins and NP, compared with truncated ANP32A (Figure 4C). Luciferase activity further increased if RNase A was included in the lysates, in agreement with the data above. Taken together, these data show that ANP32 proteins and NP interact in cells and that RNA interferes with the interaction. The LCAR of ANP32 proteins is required for efficient replication of a full-length influenza genome segment but not a short vRNA-like template Having established that the influenza virus NP interacts with the LCAR of ANP32 proteins, we next explored how LCAR deletions affect viral RNA replication that is dependent on NP. The requirement for NP during replication can be relieved when the full-length gene segment is replaced with a vRNA-like template that is shorter than 76 nt (19). Using a minigenome assay with either a full-length neuraminidase-encoding vRNA (1409-nt) or a 47-nt vRNA template in 293T-DKO cells, we tested the polymerase-supporting effect of truncated ANP32 proteins at 24 h post-transfection (Figure 5).
Expression of huANP32A proteins retaining parts of the LCAR (1-220 and 1-235) resulted in the replication of both full-length and short vRNA templates at levels similar to that of the wildtype huANP32A ( Figure 5A). The shortest version of huANP32A (1-149), which lacks part of the LRR domain that has been shown to associate with the polymerase dimer (9), supported the replication of neither full-length nor short vRNA template ( Figure 5A). Importantly, huANP32A constructs with deletions of most of the LCAR (1-188 and 1-208) were able to support the replication of the short but not the full-length vRNA template. Similar results were observed with analogous truncation mutants of huANP32B and chANP32A ( Figure 5B, C). These results suggest that NP-dependent replication of full-length templates is more sensitive to LCAR deletions than the replication of short vRNA-like templates that does not require NP. To investigate this further we monitored the kinetics of viral RNA accumulation in 293T-DKO cells expressing truncated (1-188) or wildtype ANP32 proteins. Replication of the full-length template was substantially reduced in the presence of truncated huANP32A (1-188) compared to wildtype huANP32A at all time points tested. In contrast, replication of the short vRNA-like template only showed reduction at 12 h but by 24 h post transfection reached levels similar to those observed in the presence of wildtype huANP32A ( Figure 6A). Changes in mRNA accumulation largely mimicked those in vRNA levels in agreement with vRNA serving as template for mRNA synthesis. We observed similar results with huANP32B ( Figure 6B) and chANP32A ( Figure 6C). Collectively, our results demonstrate that the LCAR of ANP32 proteins is more important for the replication of full-length vRNA segments compared to short vRNA-like templates that can be replicated in the absence of NP. These data are in agreement with our hypothesis that the LCAR is involved in recruiting NP to the nascent RNA during influenza virus RNA replication. DISCUSSION Recently there has been considerable interest in the ANP32 family of proteins, essential host factors in influenza virus replication that work in concert with the viral RNA polymerase to mediate the replication of the viral RNA genome (18,20,24,25,28,29,(36)(37)(38)(39)(40)(41)(42)(43)(44)(45). Using structural studies, our group demonstrated that the ANP32 LRR is involved in mediating the dimerization of the influenza virus polymerase which we proposed is important for viral genome replication initiation (9). However, the role of the ANP32 LCAR, which is largely unresolved in the available structures, remains poorly understood. In this paper, we demonstrated the importance of the ANP32 LCAR in RNA synthesis during viral infection. Using pull-down and split luciferase assays, we showed that ANP32 proteins interact directly with NP via the LCAR. Mutagenesis of NP revealed that the G1 and G2 RNA binding grooves of NP contribute to the NP-ANP32 interface. Consequently, we found that RNA interferes with the ANP32-NP interaction and nuclease treatment resulted in increased association between ANP32 and NP in cell lysates. The presence of RNA in cell lysates could explain why this interaction was not detected in previous studies (36). We also showed that the replication of a full-length viral genome segment is dependent on the LCAR while the NP-independent replication of short vRNA-like templates (19,46) is less sensitive to LCAR deletions. 
Interestingly, we observed a reduction in viral RNA levels when truncated ANP32 is expressed compared to the wildtype for the short template at early time points post transfection (Figure 6). [Figure 6. ANP32 LCAR truncation leads to delayed accumulation of viral RNAs during the replication of a full-length influenza genome segment compared to a short vRNA-like template. (A-C) 293T-DKO cells were co-transfected with plasmids expressing the indicated wildtype (WT) or 1-188 truncation mutant huANP32A (A), huANP32B (B), and chANP32A (C) proteins together with plasmids to express the PB1, PB2 and PA polymerase subunits, NP, and full-length NA vRNA (1409-nt) or a short vRNA-like template (47-nt) as indicated. The PB1 expression plasmid was omitted (-PB1) as a negative control. Total RNA was extracted at the indicated time points post transfection (hpt) and the accumulation of vRNA, cRNA, and mRNA was analysed by a primer extension assay. The quantitations show ratios of vRNA and mRNA accumulation in cells expressing wildtype and 1-188 ANP32 proteins for the full-length 1409-nt and short vRNA-like 47-nt templates from three independent experiments. Error bars represent the standard error of the mean (n = 3). Significance was assessed using ordinary two-way ANOVA and asterisks indicate a significant difference as follows: *P < 0.05; **P < 0.01; ***P < 0.001; ****P < 0.0001.] We speculate that this could be due to the ANP32 LCAR having functions beyond NP recruitment. For example, in the absence of structural information on the complete LCAR in the context of a polymerase-ANP32 complex, we cannot exclude that the LCAR makes contacts with the polymerase that contribute to the formation of the replicase complex. Nevertheless, the overall higher levels of RNA present with the short template compared to the long template at all time points tested suggest the importance of the LCAR in NP recruitment. Segmented NSVs do not encode an equivalent to the P protein expressed by non-segmented NSVs; therefore, how free NP is recruited to the nascent RNA strand to form RNPs remains elusive. From the data shown in this paper, we propose that influenza virus uses the LCAR of ANP32 proteins to substitute for the function of the P protein in RNP assembly. The interactions between ANP32 proteins and NP are likely to be electrostatic due to the involvement of the highly acidic LCAR and the basic grooves of NP. This is different from interactions between N and P proteins from non-segmented NSVs, which are mostly hydrophobic (47-52). In addition, P proteins are also proposed to act as chaperones to maintain N as monomeric before its association with nascent viral RNA. It is unclear whether ANP32 proteins could perform a similar function. Influenza viruses might use different strategies to maintain NP in monomeric form, for instance, using other host factors as molecular chaperones, such as importins (53) or UAP56 (54), or alternatively, exploiting post-translational modifications such as reversible phosphorylation (34,55,56). We propose a model in which the LCAR of ANP32 proteins is specifically involved in NP recruitment to nascent viral RNA.
After polymerase dimerization mediated by the N-terminal LRR and replication initiation, the LCAR acts to capture monomeric RNA-free NP molecules and bring them spatially closer to the nascent RNA strand. As the nascent RNA strand extends, NP, due to its higher affinity to RNA, dissociates from the LCAR of ANP32 proteins and binds to RNA through its RNA binding grooves, resulting in the assembly of progeny RNPs as this process repeats (Figure 7). [Figure 7 legend, in part: (iii) The ANP32 LCAR facilitates the transfer of NP to nascent cRNA; NP dissociates from the LCAR and binds to cRNA due to its higher affinity to RNA. (iv) cRNA is assembled into a mature cRNP before being released from the template vRNP complex.] In the absence of the LCAR domain, NP might still be able to bind to nascent RNA, albeit less efficiently (Figure 6). The length of the LCAR (100-130 amino acids) is sufficient to accommodate multiple NP monomers, suggesting that it could also serve to increase local NP density and thus enhance the efficiency of NP recruitment to viral RNA. To understand how the LCAR mediates recruitment of NP to nascent RNA, it will be necessary to obtain structures of complexes of polymerase bound to ANP32A, nascent RNA and NP. The NPs of influenza A viruses share high sequence similarity at the putative ANP32 protein interaction interface (30), suggesting a conserved mechanism of influenza virus genome replication elongation across influenza A virus subtypes. As both cRNA and vRNA are assembled with NP into cRNPs and vRNPs, respectively, we propose that LCAR-dependent NP recruitment occurs during both vRNP and cRNP production. Indeed, a recent study reported the involvement of ANP32 family proteins in both influenza virus vRNA and cRNA synthesis (29). In conclusion, our paper reveals an important additional role of the ANP32 family of host proteins in influenza virus genome replication. DATA AVAILABILITY Source data as well as plasmids are available upon request.
7,316.4
2022-05-27T00:00:00.000
[ "Biology" ]
Some new hybrid power mean formulae of trigonometric sums We apply the analytic method and the properties of the classical Gauss sums to study the computational problem of a certain hybrid power mean of trigonometric sums and to prove several new mean value formulae for them. At the same time, we also obtain a new recurrence formula involving the Gauss sums and two-term exponential sums. Introduction For any integer m and odd prime p ≥ 3, the cubic Gauss sums A(m, p) = A(m) are defined in the usual way, where, as usual, $e(y) = e^{2\pi i y}$. We found that several scholars have studied the hybrid mean value problems of various trigonometric sums and obtained many interesting results. For example, Chen and Hu [1] studied the computational problem of the hybrid power mean S_k(p) defined in their formula (1) and proved an exact computational formula for (1). Zhang and Zhang [3] proved a further identity of this type. Other related contents can also be found in [4-12], which will not be repeated here. In this paper, inspired by [1] and [2], we consider the mean value H_k(c, p) given in (2), where c is any integer with (c, p) = 1 and p ≡ 1 mod 3. We do not know whether there exists a precise computational formula for (2). Actually, there also exists a third-order linear recurrence formula for H_k(c, p) for all integers k ≥ 1 and c. But for some integers c the initial values of H_k(c, p) are very simple, whereas for other c the initial values of H_k(c, p) are more complex, so a satisfactory recursive formula for H_k(c, p) is not available in general. The main purpose of this paper is to use an analytic method and the properties of classical Gauss sums to give an effective calculation method for H_k(c, p) for some special integers c. We will prove the following two theorems. Some notes: First, in Theorem 1, if (3, p - 1) = 1, then the question we are discussing is trivial. Second, in the first and third formulas of Theorem 1 we take c = 3 (and c = 1 in the second formula). These choices are made to obtain the exact value of the mean value; otherwise, the results will not be pretty. Several lemmas To complete the proofs of our theorems, several lemmas are essential. Hereafter, we will use related properties of the classical Gauss sums and the third-order character mod p, all of which can be found in books concerning elementary number theory or analytic number theory, such as [13] and [14]. First we have the following: Lemma 1 Let p be a prime with p ≡ 1 mod 3. Then for any third-order character ψ mod p, we have the stated identity. Proof First, applying the trigonometric identity and noting that $\psi^{3} = \chi_{0}$, the principal character mod p, we obtain the first expression. Noting that $\psi^{2} = \overline{\psi}$ and $\tau(\psi)\tau(\overline{\psi}) = p$, from the properties of Gauss sums we have $\sum_{a=0}^{p-1}\psi\left(a^{3}-(a+1)^{3}+1\right) = \sum_{a=0}^{p-1}\psi\left(-3a(a+1)\right)$. Since ψ is a third-order character mod p, for any integer c with (c, p) = 1, from the properties of the classical Gauss sums we obtain (7). Applying (7) and combining with (4), this proves Lemma 1. Lemma 2 Let p be a prime with p ≡ 1 mod 3, and let ψ be any third-order character mod p. Then the stated identity holds, where τ(ψ) denotes the classical Gauss sum, and d is uniquely determined by $4p = d^{2} + 27b^{2}$ and d ≡ 1 mod 3. Conclusion The main work of this paper includes two theorems. In Theorem 1, we obtained some exact values of (2) when k = 1, 2, and 3. In Theorem 2, we showed that H_k(1, p) satisfies an interesting third-order linear recurrence formula.
These works not only profoundly reveal the regularity of a certain hybrid power mean of the trigonometric sums, but also provide some new ideas and methods for further study of such problems.
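The algebraic simplification used inside the proof of Lemma 1, rewriting the argument of ψ before the character sum is evaluated, can be checked directly; the following display works through it.

```latex
\begin{aligned}
a^{3} - (a+1)^{3} + 1
  &= a^{3} - \bigl(a^{3} + 3a^{2} + 3a + 1\bigr) + 1 \\
  &= -3a^{2} - 3a \\
  &= -3a(a+1),
\end{aligned}
\qquad\text{so}\qquad
\sum_{a=0}^{p-1}\psi\!\left(a^{3}-(a+1)^{3}+1\right)
  = \sum_{a=0}^{p-1}\psi\!\left(-3a(a+1)\right).
```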
968.8
2020-05-15T00:00:00.000
[ "Mathematics" ]
Using Taxes to Manage a Multigear Fishery: An Application to a Spanish Fishery Using Taxes to Manage a Multigear Fishery: An Application to a Spanish Fishery When fishing gears alter the composition of fish populations or modify the recruitment rate, it is advisable to include the degree of their fishing selectivity in the analysis. Fishing selectivity can cause two different management problems: interspecies selectivity or by‐catch of fish stocks for which no quota has been set by the regulator. The case study is the Spanish fishery of hake ( Merlucius merlucius ), where the fleet operates using two main gears; most of the vessels are trawlers but a few boats use longlines and other fixed gears. Fishery management by means of effort taxes and how the degree of intraspecies selectivity may affect the resource and tax levels are analyzed. The results show that the tax level will depend on the social value of the marine stock, the marginal productivity of each fleet's effort, and the effect that the fishing activity of each one has on the growth of the hake biomass. Introduction From an economic point of view, fishery resources are assets that provide flows of income over time but show certain characteristics. These are linked with the renewable character of fish stocks, the institutional structure under which the activity takes place, and the existence of externalities in the use of a resource. Bioecological rules are essential to determine the functions of production and meet the necessary biological restrictions in an objective function optimization. However, the institutional conditions in the fish stock exploitation establish who is entitled to capture that resource and under what circumstances, and this is essential to understand and predict the behavior of the economic agents involved in the economic activity (the fishermen) and properly drive any regulatory intervention. Concern for the implications associated with the extraction of marine resources is relatively recent; scarcity problems were largely associated with nonrenewable natural resources until the mid-twentieth century. From then on, the fishing economy has developed quickly. This can be explained by the increasing concerns for the conservation of resources to the perception of degradation of nature and the environment. The effects of the decisions taken at the Third Conference of UN on Law of the Sea in the mid-1970s also have influenced this development, as it recognized the extension of fishery jurisdiction to 200 miles from coastal line and transforming the status of fishery resources from free access to the exclusive property of coastal states. Marine resource exploitation is one of the typical examples of the tragedy of the commons in which the logic of individual maximization of benefits leads to a continual increase in pressure on the resources and their consequent overexploitation. As the population has expanded, the problem of a lack of resources has become more evident. Society has increasingly valued natural and environmental resources. Key institutional figures have become more necessary for establishing more efficient and sustainable management of natural resources to prevent a tragedy of the commons. Thus, the study of the commons is relevant when analyzing common ownership or open access systems, but its conceptual significance goes far beyond these concrete systems because it represents the starting point in the search to understand the rise and formation of institutions. 
These characteristics pose specific management problems for those who need to build theoretical formalization different from those used for the rest of economic assets and those who must be focused on the determination of optimal trajectories for the exploitation of the renewable natural resources sustainably over time. The marine resources must be managed in a rational way, especially if the welfare of future generations is taken into account in the decision-making process. In a fishery where two or more fleets are using several fishing technologies or gears, it is useful to assume that fishing activity influences the net natural dynamics of the marine resources through the catches, whereas the natural growth function depends on the fish biomass and environmental conditions, and these are taken as stable and constants over time in the specialized literature [1][2][3][4]. However, in some fisheries (as the Spanish hake fishery), several fishing technologies could alter the composition of fish populations or modify the recruitment rate [5]. In this case, it is advisable to include the degree of their fishing selectivity in the study. The selectivity could cause two different management problems: interspecies selectivity or bycatch of fish stocks for which no quota has been set by the regulator [6][7][8][9]. The case study is the Spanish fishery of European hake (Merlucius merlucius) in Ibero-Atlantic grounds. The Spanish fleet involved in this fishery operates uses two main gears; most of vessels are trawlers, but a few boats use longlines and other fixed gears (majority gillnets). Trawlers harvest mainly young individuals of hake of a lower size than that corresponding to sexual maturity (although it too catches mature fish). The other fishing technology (artisanal fleet) catches only mature fish. Based on this, we focus on the intraselectivity problem. We introduce in the analysis of the management of the fishery by means of effort taxes [10][11][12][13][14][15][16]. On the contrary, and given that the International Council for the Exploration of the Sea (ICES; this institution analyzes the stock situation and proposes management measures to the European regulator) and the European Commission (EC) recommend that one of the two technologies involved in the hake fishery (in particular, trawling fleet) improves the level of fishing selectivity and aim to individuals of a larger size, we pose several scenarios and study how the levels of hake stock and the tax applied to each group of vessels would be affected. The results obtained show that the optimum tax level depends not only on the social value of the marine resource and the marginal productivity of each fleet's effort but also on the effect that the fishing activity of each one has on the growth of the hake biomass. Furthermore, and as the fleet that is less conservationist with the stock (trawlers) improves the degree of selectivity of its technology, the equilibrium fishing effort level for this fleet increases and the optimum tax falls, to the detriment of the stationary values corresponding to the other fleet. The particular issue with which this chapter is concerned is how the degree of intraspecies selectivity may affect the hake stock and tax levels. The chapter is structured as follows: the Spanish fishery is described in Section 2. A simple management model applied to the fishery is analyzed in Section 3. The primary results are summarized in Section 4. Lastly, the chapter concludes with the discussion presented in Section 5. 
Description of the fishery The M. merlucius species is listed within the group of demersal beings and therefore a fish stock of long life. Although it is distributed in the area located between the coast north of Morocco and the North Sea, the ICES valued it separately since 1979, distinguishing two biological units: Northern stock (corresponding to zones IV, VI, and VII and divisions VIIIa and VIIIb; see Figure 1) and Southern stock (divisions VIIIc and IXa). Thus, these two stocks are considered by European regulators as two different management units. This is due to the existence of two well-differentiated recruitment areas: one on the west coast of France (Northern stock) and the other on the coast northwest of the Iberian Peninsula (Southern stock). The fishery we are studying is European hake in ICES divisions VIIIc and IXa, better known as the Southern stock of European hake. The juvenile individuals of European hake mainly feed on zooplankton and decapod prawns (Nephrops norvegicus). Larger hake feed predominantly on fish, with blue whiting (Micromesistius poutassou) being the most important prey in waters deeper than 100 m. Horse mackerel (Trauchurus trauchurus) and mackerel (Scomber scombrus) are the most important prey species in shallower waters. Hake are known to be cannibalistic species located at the top of the food chain. European hake recruitment processes lead to patches of juveniles found in the localized areas of the Iberian continental shelf. European hake concentrations could vary in density according to the strength of the year class; however, they remain generally stable in size and spatial location. The ICES estimates that the spatial patterns could be related to environmental conditions. On the eastern shelf of the Cantabrian Sea, years of large inflow of the shelf-edge current have produced low recruitment rates due to larvae and pre-recruits being transported away from spawning areas. The recent high recruitment has not yet been linked to an environmental process. European hake in ICES divisions VIIIc and IXa is caught in a mixed fishery by trawlers and artisanal vessels. The trawling fleet is homogeneous and uses mainly two gears: pair trawl and bottom trawl. The artisanal fleet is quite heterogeneous and uses a wide variety of fixed gears, mainly large and small fixed gillnets and longlines. The amount of hake in the landings of Spanish trawlers is low in relative terms. However, trawling vessels provide by 55% of the total Spanish hake landings for last years. These fishing gears affect the hake biomass in different ways. Trawling, although it catches individuals of all ages, has a negative impact on young individuals preventing them from reaching adulthood. The more traditional method, however, affects mainly mature fish and is less damaging to the hake stock. Trawl fleet is one of the most important fleets among those operating on the Spanish Atlantic continental shelf in terms of landings value. The standard vessel has approximately 145 GRT of fishing capacity and 330 kW of engine power, is close to 28 m long, has 9 crew members, and has an average age of 20 years. The main target species are hake, megrim, anglerfish, lobster, and horse mackerel. The longline and gillnet fleet is less important than the trawler fleet and the standard vessel has approximately 35 GRT and 150 kW, is close to 20 m long, has 5 crew members, and has an average age of 18 years. 
The European Union (EU), within the framework of the Common Fisheries Policy (CFP), manages European hake fishery with total allowable catch (TAC), mainly set based on biological criteria. In addition to TACs, EU implements minimum sizes of catches for hake since 1987 and closed areas. The Spanish Government sets a closed list of vessels of each fishing fleet for the last decades. Furthermore, and in the face of the poor biological situation of the stock (see Figure 2), since 2006, a recovery plan has been implemented, aimed at recovering the spawning biomass above precautionary biomass and reducing fishing mortality to 0.27 [17]. To do so, the EC, while continuing with the establishment of downward TAC, proposes to reduce the effort exercised in the fishery and includes the improvement in the selectivity of some of the fishing methods. Regarding the Southern stock of European hake, we have obtained information from the ICES on the spawning biomass for the period 1985 to 2014. Figure 2 shows how the hake biomass has decreased to such an extent in the late 1990s, as it reached only 25% of that which existed in the early 1980s, falling well outside the biological safety limits in spite of the recovery experienced in the last 3 years [18]. This hake biomass evolution indicates that the resource is being exploited to excess. With respect to the total catches, we can see that it has shown a decreasing trend in the said period and in keeping with the deterioration of the fish biomass (see Figure 2). The trends in both variables show that the measures adopted by the EU were not sufficient to avoid the overexploitation of hake stock and the resource is still being overfished in the last years. Therefore, it is necessary to introduce a regulatory mechanism to manage the hake fishery in a sustainable way to avoid the overexploitation of resource and depletion of the fish stock. Method If the regulator of fishery establishes a tax on effort (τ i ), both fleets will assume an increase in the unit cost of the effort and will be faced with the following problem: where p, w, h, e, and X denote the unit price of hake, unit cost of effort, total landings, fishing effort, and fish stock, respectively. The parameter δ represents the discount rate. The usual natural growth function of the marine resource (F) is modified by a new parameter θ, which catches the selectivity of both fleets. The fish stock dynamic is shown as follows: where F(·) is the natural growth function of the resource. The effects that the different technologies have on it are defined as follows [19]: where the parameter γ i (0≤γ i <1, i=1,2) shows the level of fishing selectivity of each technology or fleet. If the i-fleet technology has no effects on the fish stock dynamics, the fleet shows a high selectivity level and this fleet can be considered as conservationist with the marine resource. In this case, the parameter γ i takes on a zero value. In contrast, if technology has effects on the marine stock dynamics in a negative way, the fleet shows a nonselective level and it can be considered as a less conservationist fleet with the fish stock. Therefore, the fishing selectivity parameter will approach the unit value. 
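As a purely illustrative sketch of how the selectivity parameters could enter a numerical version of the stock dynamics, the snippet below combines an assumed logistic natural growth function and Schaefer catch equations with a simple selectivity penalty. This is not the paper's equation (3), and all function forms and parameter values are placeholders chosen only so the example runs.

```python
import numpy as np

def stock_path(gamma, effort, years=50, r=0.45, K=1.0, q=(0.004, 0.002), X0=0.6):
    """Simulate normalized hake biomass under two fleets (illustrative only).

    gamma  : (gamma_trawl, gamma_artisanal) selectivity parameters in [0, 1)
    effort : (e_trawl, e_artisanal) constant annual efforts
    Assumed forms: logistic growth F(X) = r*X*(1 - X/K) and Schaefer catches
    h_i = q_i*e_i*X; each fleet's impact on net growth is scaled by (1 + gamma_i),
    so a less selective fleet removes extra growth potential (juvenile damage).
    """
    X = X0
    path = [X]
    for _ in range(years):
        F = r * X * (1.0 - X / K)
        damage = sum((1.0 + g) * q_i * e_i * X
                     for g, q_i, e_i in zip(gamma, q, effort))
        X = max(X + F - damage, 0.0)
        path.append(X)
    return np.array(path)

# Compare a poorly selective trawl fleet (gamma_1 = 0.7) with an improved one (0.3).
baseline = stock_path(gamma=(0.7, 0.1), effort=(40.0, 30.0))
improved = stock_path(gamma=(0.3, 0.1), effort=(40.0, 30.0))
print(baseline[-1], improved[-1])   # long-run biomass is higher with better selectivity
```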
From one of the first-order conditions to resolve the problem (1) [20], the following equation is obtained: In the absence of regulation (i.e., no tax would be implemented), we would obtain the following expression [19]: and if both expressions (4) and (5) are compared, the optimum tax value can be obtained (i=1,2): This expression indicates that the tax level depends not only on the social value of the marine resource (μ) and the marginal productivity of the effort (∂h i /∂e i ) but also on the effect on the natural growth of the resource (∂G(·)/∂e i ). On the contrary, the lower (higher) the marginal productivity of the fleet i, the lower (higher) the tax level that will have to be paid to fish in the fishery. On the contrary, and for γ i ≠γ j , if fleet i shows a high (low) selectivity level and with γ i < γ j (γ i > γ j ), then γ i →0 (γ i →1) and the effect of the activity of i on the natural growth function will be lower (higher), allowing a greater (smaller) growth of the fish population, that is, (γ j -γ i )> 0 ((γ j -γ i ) < 0) and ∂G(·)/ ∂e i > 0 (∂G(·)/ ∂e i < 0). Consequently, given that ∂h i (·) > 0, the tax level for this fleet will be higher (lower) than that which corresponds to the other fleet. Estimations Because fishing effort (fishing days) data are not available separately for the trawling and artisanal fleets for the last 10 years, we will use the parameter values estimated by Garza-Gil and Varela-Lafuente [19] for this fishery, who made an econometric estimation through the Ordinary Least Squares (OLS) method with annual observations for 20 years and for different options of the natural resource dynamic and the production functions. These values are summarized in Table 1. Substituting those values of the parameters in the above expression (6), the stationary solutions for the tax levels can be estimated. However, previously, and because the selectivity parameters are unknown, we must assume some value for them. Regarding trawling, this fleet catches mainly smaller-sized individuals, as mentioned in the previous sections, and therefore has a negative impact on the Southern stock hake population by preventing a greater number of young fish from reaching maturity and being able to spawn for next years. On that basis, we will assume a selectivity parameter value for this fleet initially closer to unit value than to zero, in particular γ 1 =0.7. Regarding the artisanal fleet, although it captures mostly mature individuals, it also captures a small amount of young individuals. This figure does not reach 10% of the landings [19]. Therefore, we will assume a selectivity value for artisanal fleet closer to zero (0.1). On the contrary, the trawling may improve the selectivity of this gear, as the EC [17] and the ICES [18] proposed in its management recommendations with a view to improving the pattern of hake production for this fishery. Accordingly, some options may increase, for example, the size of the mesh and expand the cod-end of the fishing nets (the "cod-end" is the rearmost part of a trawl net, of net of the same mesh size, having either a cylindrical or a tapering shape). If this technology improves its fishing selectivity level, the negative effects of its activity on hake dynamic will decrease. In this case, other possible and lowest values for γ 1 can be posed. The results obtained for different values of parameter γ 1 are shown in Table 2. 
Expression/value Unit It can be seen that the tax level on trawling (in euros per fishing day) is higher than that applied to the artisanal fleet in the scenarios contemplated for selectivity parameter due fundamentally to the fact that it shows a greater marginal productivity in the effort and a negative effect on hake biomass. Consequently, it should pay more to fish in the fishery. Furthermore, as the trawling selectivity improves (γ 1 →0) and therefore the negative effect of the activity of this fleet on the hake population diminishes, the tax per unit of effort applied to this fleet also decreases, whereas, for the artisanal fleet, it increases and its effort level decreases. "1" indicates trawling and "2" artisanal. Table 2. Hake biomass (metric tons) and tax levels (euros/day) for different γ 1 and γ 2 = 0.1. Discussion and conclusions The intensive exploitation of the fishery resources around the world for the last decades has shown the natural limitations of the productivity of fish stocks. In this environment with a depletion of marine resources, economists have been worried about searching for management tools oriented to change the behavior of fishermen to save the resource and also to maintain a positive economic return. From an economic point of view, fish populations are treated as capital assets that can provide flows of income over time. The aim is therefore to determine the path of exploitation of marine resources in a sustainable way and to incorporate the biological conditions of the marine resource and institutional conditions of fishing into the analysis. In this way, the fishing economy has advanced since the first works by Gordon [22] and Scott [23], which includes biological and institutional conditions of basic form, to the development raised by Clark [11] and Clark and Munro [24], who introduced the theory of capital to manage a fishing resource in a dynamic context. In general, the regulatory mechanisms can be classified into two groups [11]: (1) those that are directed toward the direct control on the fish stocks as well as to maintain high production levels and (2) a group of mechanisms that, in addition to indirectly control the size of the stock, points to sustain activity in economically efficient levels. The methods that have been traditionally implemented, such as production quotas, closed seasons, closed zones, and restrictions on the equipment, correspond to the first group and it has been shown that they have failed to prevent the overexploitation of fish stocks [25][26][27]. The allocation of property rights and the system of taxes (on production or on the inputs) are in the second group. Among the latter, individual property rights require the creation of markets; the regulator may establish certain rules with respect to fishery exploitation (distribution of the surplus of the marine resource among fishermen involved in the fishery) and allow a rights transaction market to emerge to ensure that fishermen comply with its conduct selling or buying part of that right. Taxes can be defined as mechanisms based on the regulation via prices; the essence of these instruments involves the introduction of a price (cost) linked to the behavior that the regulator wants to promote or discourage. In this chapter, we have studied the European hake fishery (Southern stock), where two fishing fleets are operating using different technologies. 
We have shown the way in which effort taxes exercised in this multigear fishery make it possible to reach a socially optimum solution for this marine resource, introducing a variable into the analysis, which includes the effects of fishing activity on the natural growth function of the hake population. The efficient stationary solutions for the hake stock levels, its social value and the effort exercised by the two fleets involved in the fishery (trawling and artisanal), propose different scenarios with regard to the selectivity parameter for the fleet that has a more intensive impact on young individuals and then on marine resource dynamics. If trawling selectivity improves, then the optimum level of the natural resource and its shadow price increases, whereas the global level of effort diminishes, increasing that of the trawling fleet and reducing that of the longline fleet [19]. If the present situation is compared to the optimal estimations obtained in this study, it can be seen that the Southern stock of European hake is being fished in an inefficient way, both from an economic point of view and the conservation of the natural resource point of view. In particular, the amount of hake biomass existing at the end of the period studied is significantly lower than that derived from a socially stationary solution. Even in a few years, landings have exceeded the spawning hake biomass in Iberian-Atlantic waters. To reach socially stationary solutions, we have incorporated an intervention mechanism based on taxes, particularly a tax based on effort exercised by each fleet. The tax equilibrium level is directly related to the social value of the fishing resource, with the marginal productivity of the effort exercised and with the effect that fishing activity has on the natural growth of the resource. In particular, the tax level on the trawling effort is greater than that applied to the artisanal fleet, as it is more productive and affects the hake population more negatively. Therefore, it will pay more to exercise its effort in the fishery. On the contrary, the equilibrium level obtained for the tax on the effort of the artisanal fleet is lower, as it is less productive and much more selective. However, when the trawling fleet improves its selectivity, its effort equilibrium level increases and the optimum tax decreases, to the detriment of the stationary values that correspond to the artisanal fleet. In this framework, the proposed regulation involving declines in the level of fishing (reducing the pressure on the stock of fish) is not usually well received by the fishing industry. However, an efficient regulation allows maintaining the marine resources in a sustainable way and it will generate economic income for fishermen. An inefficient situation to an efficient change must be associated with a policy of income distribution suitable based on the criteria of equity. The regulation mechanism based on taxes could offer a solution to the externalities associated with the absence of efficient allocations. Although the analysis shown in this chapter is simple, the results can orient the regulator to achieve a more rational exploitation of the Southern stock of hake.
5,440.4
2016-09-28T00:00:00.000
[ "Environmental Science", "Economics", "Business" ]
Analysis of the bioethanol production process control . The article presents the analysis of the automatic control of the bioethanol production process intended for biofuel. It presents the formulated general concept of the system and the method of designing a closed control system based on the iterative prototyping procedure. The modeling and the simulation were carried out in the Matlab®-Simulink environment. The simulation model of the object was developed based on the experimentally registered characteristics. It has been adjusted, i.e. the compatibility of its behavior with the object it reproduces has been confirmed. Based on the tuned model of the object, a control system model was created, which was the basis for computer simulation which enabled the control algorithm parameters to be established. The final verification of the correct operation of the system was performed with the use of hardware simulation. It was based on entering a negative feedback loop of the virtual control system of the real object elements into the loop. The results of the simulation confirmed the correctness of the adopted design. Introduction With the increase of ecological awareness of the automotive market participants, there are a number of initiatives aimed at making transport more environmentally friendly. One of the visible results of such aspirations is the use of fuels obtained from renewable energy sources. This type of fuels includes ethanol obtained from biomass. It is possible to use it directly as fuel (Nissan's e-biofuell cell technology) or as a component in the production of biodieselThe interest in producing this type of fuel for private needs is demonstrated by companies and institutions that have a fleet of vehicles and private individuals, including farmers. It is necessary to introduce ecological regulations [1]. A distillation / rectification column is necessary to produce bioethanol intended for fuel. One of the basic parameters of her work is temperature. A correctly configured control system is necessary for precise control of this process. The study presents a model of such a system formulated in the Matlab ® -Simulink environment and the selected control unit. The developed control algorithm was verified on the real object. The purpose of the work was to determine the initial parameters for the integration of system components within the design of the control system for the process of producting bioethanol for biofuel. The scope of work consisted of: determining the dynamic properties of the control object, formulation of simulation models of the object and the control system, computer and hardware simulation in the Matlab-Simulink environment. Methodology The methodology, according to which the control system was developed, was based on the iterative prototyping procedure. The general concept of control was formulated at the beginning. Next, a simulation model of the control object was developed. It was created on the basis of experimentally determined dynamic characteristics. The model was then adjusted by confirming its compliance with the object it represents.Based on the adjusted model of the object, a simulation model of the control system was created. It was the basis for computer simulation which enabled selection of the control algorithm parameters. The last stage of the procedure was to check the algorithm on the object. 
For this purpose, the actual elements of the control object -temperature converter (measuring element) and two heaters (actuators) were introduced to the virtual feedback loop of the control unit [2,3,4]. Control object He control object constitutes a rectification column. Its general view is shown in fig 1. The device consists of a rectified liquid container (1), a defecator (2) placed above it and a liquid cooler (3) located on top of the cooler of the liquid to be obtained. For the distillation/rectification process to occur, it is necessary to provide heat from the heater installed in the tank (1). The core of the column's operation is based on the use of crossflow contact of the liquid freely flowing downwards the deflegmator with the vapors of the rectified mixture. Inside the deflegmator, there are structural elements that increase the surface contact of the liquid with the siad vapors. During the process, about ¼ of the liquid volume is directed for collection. Simulation model of the control object The preliminary stage of the development of the control object model was the experimental determination of the step response. For this purpose, the tank was filled with the liquid to be rectified (35 dm3). Next, the voltage applied to the 4000 [W] heater was increased which created an enforcement. The reaction of the object to that enforcement in the form of a change in the temperature of the rectified liquid T is the desired step response ( fig. 2) [2,3,4,5]. Fig. 2. Step response of the control object The obtained process (Figure 2) was the basis for the development of the transmittant simulation model G (s), and it is expressed by the dependence 1. Model of control system in the computer simulation On the basis of the object model (dependence 1) a simulation model of the control system was created. It was the basis for computer simulation which allowed the control algorithm to be selected. In the Matlab-Simulink environment, 2 versions of the control system were implemented -in configurations with Relay and PID controllers ( fig. 3) [6,7]. The significance of the most important functional blocks in the diagram is as follows: Setpoint represents a given temperature course, Controller is a regulator transmittance, fcn transfer and transport delay represent the control object. The diagram presents the control loop there is also an additional block, which is not part of the structure of the system -it is a signal generator. This block simulates interfering effects. Its presence during simulation tests enables the analysis of the influence of the interfering signal on the quality of control. In addition, the following symbols appear on the diagram: kp -proportional gain, Tiintegration time (doubling), Td -differentiation time (advance). During the computer simulation, the set signal was shaped according to the algorithm that predicts the increase and maintenance of the liquid temperature in the column tank (Fig. 1) at the level ensuring the proper operation of the process. The control quality provided by the modeled systems (Figure 3) was determined by using the integral indexes WJS1 and WJS2 as assessment criteria. Where WJS1 is the integral of the absolute error of regulation (2), while WJS2 is the integral of the absolute value of the derivative of the control signal (3). 
Here, e is the regulation error. WJS2 provides information on the dynamics of the control signal, while the value of the WJS1 index informs about the quality of control (the lower the value, the better the control quality) [8,9,10]. The system operation was analyzed under ideal conditions (without interference) and in the presence of a disturbing signal with a sinusoidal waveform of variable amplitude. Based on the conducted simulation tests, it should be stated that the algorithms of the tested regulators ensured an acceptable control quality during the computer simulation. In view of the above, the easier-to-implement Relay controller should be considered for controlling the process. Model of control system in the hardware simulation Hardware simulation, which involved including the actual elements of the control object - the temperature transducer (measuring element) and two heaters (executive elements) - in the feedback loop of the virtual control system, confirmed the correctness of the design assumptions and the proper operation of the control program. The system prepared in this manner formed a prototype of the control system. The block diagram according to which the hardware simulation was carried out is illustrated in Fig. 4 [11]. Fig. 4. Block diagram of hardware simulation The nature of the hardware simulation required transforming the control system model illustrated in Fig. 3 so that it could communicate with the system's environment. For this purpose, the blocks representing the object model were replaced with input (Analog Input) and output (Analog Output) blocks and with scaling blocks for the input and output signals (Fig. 5; its blocks comprise the measuring element - temperature transmitter, the control object, the regulator, and the setpoint value). By analysing the illustrated waveforms, it can be observed that the control system prototype correctly generated a feedback effect with regard to the control object - the temperature is maintained at the specified level. Summary The use of simulation models and an iterative procedure, in which computer and hardware simulations are repeated until the prototyped system fulfills the required conditions, made it possible to determine the initial parameters for the integration of the elements included in the system. Computer simulation allowed the selection of a regulator. The hardware simulation made it possible to verify the real-time prototype of the control system. Using the proposed methodology, it was confirmed that the easy-to-implement algorithm of the Relay regulator controlling the temperature in the bioethanol production process guarantees its maintenance at the given level.
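To make the quality assessment concrete, the following is a minimal, self-contained sketch (not the authors' Simulink model) of a temperature loop under Relay and PID control, evaluated with integral indexes analogous to WJS1 and WJS2. The process parameters K, T, L, the setpoint, and the controller tunings are hypothetical, since relation (1) and the actual tuning values are not reproduced here.

```python
# Minimal sketch: discrete-time simulation of a first-order-plus-dead-time temperature
# process under Relay and PID control, with WJS1 = integral(|e|)dt and
# WJS2 = integral(|du/dt|)dt as quality indexes. All numeric parameters are assumed.
import numpy as np

K, T, L = 0.02, 1800.0, 120.0      # assumed process gain [deg C/W], time constant [s], dead time [s]
dt, t_end = 1.0, 6 * 3600.0        # simulation step and horizon [s]
setpoint = 78.0                    # assumed target temperature [deg C]

def simulate(controller):
    n = int(t_end / dt)
    delay = int(L / dt)
    y = 20.0                       # initial liquid temperature [deg C]
    u_hist = np.zeros(n + delay)   # buffer implementing the transport delay
    us, es = [], []
    state = {"i": 0.0, "e_prev": 0.0, "on": False}
    for k in range(n):
        e = setpoint - y
        u = controller(e, state)
        u_hist[k + delay] = u
        # first-order lag driven by the delayed control signal (ambient 20 deg C)
        y += dt / T * (K * u_hist[k] - (y - 20.0))
        us.append(u); es.append(e)
    es, us = np.array(es), np.array(us)
    wjs1 = np.sum(np.abs(es)) * dt                     # integral of |e| dt
    wjs2 = np.sum(np.abs(np.diff(us, prepend=us[0])))  # integral of |du/dt| dt = sum |du|
    return wjs1, wjs2

def relay(e, state):
    # two-position controller with an assumed +/- 0.5 deg C hysteresis band
    if e > 0.5: state["on"] = True
    elif e < -0.5: state["on"] = False
    return 4000.0 if state["on"] else 0.0              # heater power [W]

def pid(e, state, kp=800.0, Ti=600.0, Td=30.0):        # hypothetical tuning
    state["i"] += e * dt
    d = (e - state["e_prev"]) / dt
    state["e_prev"] = e
    u = kp * (e + state["i"] / Ti + Td * d)
    return float(np.clip(u, 0.0, 4000.0))              # saturate to the heater range

for name, ctrl in [("Relay", relay), ("PID", pid)]:
    w1, w2 = simulate(ctrl)
    print(f"{name}: WJS1 = {w1:.0f}, WJS2 = {w2:.0f}")
```

Comparing the two printed index pairs mirrors the assessment described above: WJS1 reflects tracking quality, while WJS2 penalizes an aggressively switching control signal.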
2,055.2
2020-01-01T00:00:00.000
[ "Computer Science", "Engineering" ]
Remote Sensing Supported Sea Surface pCO2 Estimation and Variable Analysis in the Baltic Sea: Marginal seas are a dynamic and, to a large extent, still uncertain component of the global carbon cycle. The large temporal and spatial variations of sea-surface partial pressure of carbon dioxide (pCO2) in these areas are driven by multiple complex mechanisms. In this study, we analyzed the variable importance for the sea surface pCO2 estimation in the Baltic Sea and derived monthly pCO2 maps for the marginal sea during the period of July 2002-October 2011. We used variables obtained from remote sensing images and numerical models. The random forest algorithm was employed to construct regression models for pCO2 estimation and produce the importance of different input variables. The study found that photosynthetically available radiation (PAR) was the most important variable for the pCO2 estimation across the entire Baltic Sea, followed by sea surface temperature (SST), absorption of colored dissolved organic matter (aCDOM), and mixed layer depth (MLD). Interestingly, Chlorophyll-a concentration (Chl-a) and the diffuse attenuation coefficient for downwelling irradiance at 490 nm (Kd_490nm) showed relatively low importance for the pCO2 estimation. This was mainly attributed to the high correlation of Chl-a and Kd_490nm to other pCO2-relevant variables (e.g., aCDOM), particularly in the summer months. In addition, the variables' importance for pCO2 estimation varied between seasons and sub-basins. For example, the importance of aCDOM was large in the Gulf of Finland but marginal in other sub-basins. The model for the pCO2 estimate in the entire Baltic Sea explained 63% of the variation and had a root mean squared error (RMSE) of 47.8 µatm. The pCO2 maps derived with this model displayed realistic seasonal variations and spatial features of sea surface pCO2 in the Baltic Sea. The spatially and seasonally varying variables' importance for the pCO2 estimation sheds light on the heterogeneities in the biogeochemical and physical processes driving the carbon cycling in the Baltic Sea and can serve as an important basis for future pCO2 estimation in marginal seas using remote sensing techniques. The pCO2 maps derived in this study provide a robust benchmark for understanding the spatiotemporal patterns of CO2 air-sea exchange in the Baltic Sea. Introduction Global oceans are an important sink of atmospheric CO2 and take up approximately 30% of the global anthropogenic CO2 emissions [1]. While the global ocean uptake of CO2 increases at a rate proportional to the atmospheric CO2, substantial differences exist between oceans and marginal seas [1,2]. The changing air-sea exchange of CO2 in marginal seas, particularly those at high latitudes, is found to be the major source of uncertainties in the estimate of ocean CO2 uptake [3,4]. As the atmospheric CO2 is rather globally homogeneous, sea surface partial pressure of carbon dioxide (pCO2) in the marginal sea is the key component for precisely determining the direction of the air-sea exchange of CO2. Therefore, deriving maps of the changing pCO2 for marginal seas over time is critical for a precise estimate of global air-sea exchange and ocean uptake of CO2 [2,3,5]. Generally, sea surface pCO2 is jointly determined by biogeochemical processes, vertical and horizontal mixing of sea water, and the air-sea exchange of CO2 [6,7]. Many sea surface variables related to these processes can be retrieved from remote sensing images.
Given their vast spatial coverage, remotely sensed sea surface variables have increasingly been used in sea surface pCO2 estimation. Remotely sensed Chlorophyll-a concentration (Chl-a) is commonly used as an indicator of biological activities in water [8]. Sea surface temperature (SST) largely determines the solubility of CO2 in sea water and has been frequently used to estimate pCO2 from remote sensing [9][10][11][12][13]. In addition, bacterial respiration produces CO2 by decomposing dissolved organic matter (DOM) [14,15]. Therefore, absorption of colored dissolved organic matter (aCDOM) retrieved from remote sensing images has been used in sea surface pCO2 estimation [16,17]. Furthermore, after [18] found from in-situ measurements that sea surface salinity (SSS) was highly related to sea surface pCO2, SSS derived directly from remote sensing images or from remotely sensed aCDOM was adopted to support sea surface pCO2 estimates [16,19]. Kd_490nm, a proxy of water transparency, was derived from remote sensing and included in sea surface pCO2 estimation to indicate the effect of biological activities [16]. Mixed layer depth (MLD) determines thermal stratification between different water masses but is not retrievable with remote sensing approaches. Therefore, some studies used the MLD obtained from ocean models to support the derivation of sea surface pCO2 maps [9,12]. Similarly, model-yielded gross primary production (GPP) and net primary production (NPP) were also included to support pCO2 estimation by approximating the biological control on pCO2 in sea water [9,12]. Sea surface pCO2 in many global marginal seas has been estimated with various remote sensing supported approaches [9,12,16,17,[20][21][22][23]. Most of the studies chose the variables based on empirical knowledge and focused on deriving pCO2 maps with small estimate errors (e.g., RMSE). However, few studies have investigated the spatiotemporal variabilities of the variables' relevance to sea surface pCO2 in marginal seas. Considering the high spatial variabilities in the controlling forces of sea surface pCO2 in marginal seas, some studies divided the targeted seas into sub-basins/subsets and separately constructed models for pCO2 retrieval in each of the sub-basins/subsets [12,22,24]. Though this strategy produced maps of good quality in the sub-basins/subsets, it provided little knowledge on the variables' relevance to the pCO2 distribution. Furthermore, Reference [25] regarded the sea surface pCO2 in the targeted area as a mixture of the pCO2 controlled by different processes (e.g., vertical mixing and biological uptake) and determined each of the processes separately from different sets of variables. Despite successful applications in multiple marginal seas [10,25,26], their method was often limited to pCO2 estimation in summer time and thus failed to provide information for other seasons. Overall, large space remains for investigation of the variables' relevance (importance) in sea surface pCO2 estimates across different times and spaces. The Baltic Sea is a semi-enclosed marginal sea located in northern Europe. The carbon budget of the Baltic Sea displays considerable seasonal and interannual variabilities. To date, the few studies attempting to estimate sea surface pCO2 in the Baltic Sea using remote sensing approaches, e.g., [12], have barely provided information on the variables' relevance/importance to the pCO2 estimate for this marginal sea.
In this study, we aimed to analyze the importance of different variables for pCO2 estimation and derive improved monthly pCO2 maps for the Baltic Sea from 2002 to 2011. We conducted the following: (1) filtering the in-situ pCO2 data for the model training and validation; (2) assessing the relative importance of the input variables for the pCO2 estimation on different spatial and seasonal scales; and (3) deriving pCO2 maps for the Baltic Sea. Study Area The Baltic Sea is located at high latitudes (55-60° N) in Europe. As the sun illumination and temperature there exhibit significant seasonal changes, the Baltic Sea and adjacent terrestrial ecosystems also undergo high seasonality. In addition, the wide span of the Baltic Sea in latitude forms a large spatial gradient in sun illumination and the corresponding environmental conditions, like SST. The Baltic Sea has restricted water exchange with the open North Atlantic Ocean via the Danish straits and is a semi-enclosed marginal sea. More than 600 rivers drain the catchment of in total 1.7 million km² and export to the Baltic Sea substantial freshwater and terrigenous substances, including organic carbon [27][28][29][30]. Therefore, the Baltic Sea is characterized by a high concentration of CDOM, and most parts of the sea present as "brown water". With varying inputs from different rivers, the sub-basins of the Baltic Sea exhibit highly heterogeneous biogeochemical conditions. Consequently, the pCO2 distribution in the Baltic Sea displays evident seasonality and spatial heterogeneity [31]. Upwelling characterized by evident seasonality and spatiality occurs frequently in the Baltic Sea and brings deep water with pCO2 of up to 2000 µatm to the sea surface [32,33]. The high concentration of nutrients brought up together with the deep water leads to cyanobacteria and phytoplankton blooms after the upwelling event, which further complicates the pCO2 distribution in the Baltic Sea [34]. To date, nearly all the pCO2-related studies in the Baltic Sea were based on in-situ measurements from ships and/or buoys, and the findings are often valid only for limited sites of the sea. Therefore, analyzing the variables' relevance and obtaining reliable pCO2 maps are critical for better understanding the carbon cycle and the air-sea exchange in the Baltic Sea [35]. Data We chose the variables for pCO2 estimation based on previous studies and the characteristics of the Baltic Sea. The variables SST, photosynthetically available radiation (PAR), Chl-a, Kd_490nm, and aCDOM were remotely sensed. SSS and MLD were produced by the numerical model NEMO-NORDIC together with data assimilation. In-situ pCO2 measurements from three different sources were used to train and validate the model for pCO2 estimation. Remote Sensing Products The Moderate Resolution Imaging Spectroradiometer (MODIS) on board the Aqua satellite was designed for ocean surface investigations. The sensor has mapped the earth every two days since July 2002. A MODIS image consists of 36 spectral bands covering the spectrum of wavelengths from 0.63 to 14.38 µm. Images from MODIS Aqua have been successfully used to detect coastal water clarity [36], survey red tides [37], map lake suspended matter [38], and retrieve coastal dissolved organic carbon [39]. Variables like Chl-a and SST, retrieved from MODIS-Aqua images with already mature algorithms, have been widely used to estimate sea surface pCO2 or simulate sea surface CO2 flux in different oceans and marginal seas [11,16,17,40,41].
From the National Aeronautics and Space Administration (NASA) Goddard Space Flight Center (https://oceancolor.gsfc.nasa.gov/), we obtained the level-3 monthly mean MODIS products of PAR, Kd_490nm, and SST covering the period of August 2002-October 2011. All data have a spatial resolution equivalent to 4×4 km at the equator (Table 1). The Medium Resolution Imaging Spectrometer (MERIS) on board the Envisat satellite was designed for ocean color observation. During its life span from 2002 to 2011, MERIS mapped the earth every 1-3 days and measured water surface radiances in 15 spectral bands from the visible to the infrared spectrum. Up to now, MERIS data have been frequently used to investigate water-related issues in the global ocean and marginal seas, including mapping sea algae coverage [42], detecting phytoplankton blooms [43] and cyanobacterial blooms [44], and estimating Chl-a, aCDOM, and suspended matter [45][46][47][48][49]. Most of these studies targeted European lakes and seas and demonstrated the great potential of MERIS data for investigating these waters. Specifically, [45] found that Chl-a retrieved from MERIS for the Baltic Sea had distributions similar to those of in-situ measurements. The MERIS data from the MERCI data base (https://merisrr-merci-ds.eo.esa.int/merci) were used to retrieve Chl-a and aCDOM for the Baltic Sea with the Free University of Berlin (FUB) processor, which was especially developed for European coastal waters. Invalid pixels (i.e., land, mixtures of land and water, various cloud types, and cloud shadow) were masked out from MERIS images before the Chl-a and aCDOM retrieval. The performance of Chl-a and aCDOM retrieved from MERIS with the FUB processor in the Baltic Sea was assessed to be excellent [49,50]. In this study, the daily Chl-a and aCDOM derived from MERIS images were aggregated monthly and resampled to 4×4 km. The Chl-a and aCDOM derived from the full MERIS archive span from July 2002 to December 2011. Comparison of the contributions of the Chl-a products from MODIS and MERIS to pCO2 estimation in the method employed here did not show significant differences (Figure S2). Modeled Data MLD and SSS are important variables for pCO2 estimates. However, remotely sensed SSS has a much coarser resolution than other variables, such as Chl-a, and MLD is not yet obtainable from remote sensing. Alternatively, modeled MLD and SSS have been applied in many studies on sea surface pCO2 estimation [9,12,20,51,52]. Therefore, we employed the monthly MLD and SSS produced by the NEMO-NORDIC model, a Baltic and North Sea model based on the NEMO ocean engine and a local singular evolutive interpolated Kalman (LSEIK) filter data assimilation, with a spatial resolution of 4 × 4 km [53] (Table 1). Validation of the modeled SSS against station observations demonstrated a bias smaller than 0.5 ppt and an RMSE of 0.5 ppt [53]. In-Situ Data We used all the in-situ sea surface pCO2 measurements available in the Baltic Sea during August 2002-November 2011 (Table 2 and Figure 1). They included the data from the Surface Ocean CO2 Atlas (SOCAT) (2nd Version) [54], the measurements from a moored buoy at the Östergarnsholm site [55], and data from [56]. All the data in SOCAT have undergone quality control and were of error < 10 µatm [54,57]. We used pCO2 measurements acquired from 2002 to 2011 to match the remotely sensed variables. The data from SOCAT for this period were obtained from the Finnpartner vessels, which travelled between Lübeck and Helsinki every second day [58].
The pCO2 measurements are available every 1-2 min and appear as a series of points distributed along the ship tracks (Figure 1A). At the Östergarnsholm site, the sea surface pCO2 is measured by a submersible autonomous moored instrument (SAMI) mounted on a buoy moored one kilometer east of the island Östergarnsholm in the central Baltic Sea (Figure 1A). The SAMI sensor was installed four meters below the water surface and has recorded the pCO2 there every 30 or 60 min from May 2005 to the present [55]. The pCO2 measurements from the Östergarnsholm site also fulfill the accuracy criterion of <10 µatm. The pCO2 data used by [56] filled the data gap left by the previous two data sources in the Gulf of Bothnia. The data set consisted of both manual bottle measurements from discrete stations and continuous ferry box measurements obtained with the same method as the vessel data in SOCAT (Figure 1A). The measurements were mainly from the years 2006, 2009, and 2010. More details about the data are available from [56]. Random Forest Random forest is a tree-ensemble model where the trees are constructed based on a set of training samples [59]. Random forest has shown excellent performance in classification and regression [60,61]. Therefore, it has been used in various fields. For example, it has been used to estimate the gross primary production of vegetation from remote sensing images [62] and to downscale soil moisture data and chlorophyll fluorescence of coarse resolutions [63,64]. With respect to pCO2 estimation from remote sensing data, [17] derived pCO2 maps for the Gulf of Mexico with an RMSE of 31.7 µatm using a similar tree-based algorithm. In addition, [16] compared random forest with other commonly used approaches (e.g., multiple linear regression) and proved that random forest was a robust algorithm for sea surface pCO2 estimation from remote sensing data in the Gulf of Mexico [16]. In this study, random forest models were trained to express the relationship between the in-situ pCO2 measurements and the spatially and temporally co-located variables (i.e., Chl-a, aCDOM, SST, PAR, Kd_490nm, SSS, and MLD). Each random forest model contained a number of trees (known as Ntree), with each node splitting into a number of leaves (known as Mtry). At each node, a bootstrapped subset of randomly selected training samples was used to construct the relationship between the Mtry variables (e.g., Chl-a and SST) and the dependent variable (i.e., pCO2) in the form of split leaves [65]. The tree grew as the nodes were produced and connected in a cascade manner. Each decision tree was independently produced. The forest construction was finished as the trees grew to Ntree, a user-defined number of trees [59]. The final random forest is a set of trees with the best performance in expressing the relationship between the variables in the training samples. Further details on the random forest model are to be found in Breiman (2001). Each random forest model contained 500 trees (Ntree = 500) with a leaf size of three (Mtry = 3). We used the random forest algorithm implemented in the package randomForest [66] for the open access software R [67]. Subsequently, the importance of each variable in the random forest model was extracted and analyzed. The importance of a variable Xm was determined by the mean decrease accuracy (MDA) of the random forest model when the variable Xm is randomly permuted in the training samples [59].
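As a rough Python analogue of this workflow (the study itself used the randomForest package in R), the sketch below fits a random forest regressor on matched predictor variables and reports permutation importance, scikit-learn's counterpart of the mean-decrease-accuracy measure. The synthetic data frame and its column names are placeholders for the co-located data set, not values from the study.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

predictors = ["PAR", "SST", "aCDOM", "MLD", "SSS", "Chl_a", "Kd_490nm"]

# synthetic stand-in for the matched-up in-situ/remote-sensing data set
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(1000, len(predictors))), columns=predictors)
df["pCO2"] = 350 - 40 * df["PAR"] - 15 * df["SST"] + rng.normal(scale=20, size=1000)

train = df.sample(frac=2 / 3, random_state=1)   # stand-in for the month-wise selection
valid = df.drop(train.index)

model = RandomForestRegressor(n_estimators=500,  # Ntree = 500, as in the paper
                              max_features=3,    # roughly corresponds to Mtry = 3
                              random_state=1)
model.fit(train[predictors], train["pCO2"])

pred = model.predict(valid[predictors])
rmse = float(np.sqrt(np.mean((pred - valid["pCO2"]) ** 2)))
print(f"validation RMSE: {rmse:.1f} uatm")

# permutation importance: drop in model score when each variable is randomly shuffled
imp = permutation_importance(model, valid[predictors], valid["pCO2"],
                             n_repeats=10, random_state=1)
for name, value in sorted(zip(predictors, imp.importances_mean), key=lambda p: -p[1]):
    print(f"{name:>9s}: {value:.3f}")
```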
The importance of a variable Xm in a random forest model thus indicates its contribution/relevance to the model and the response of the corresponding variable to the pCO2 variation in the training data set. For each variable, the importance was derived independently. The variables are not complementary to each other in the pCO2 estimate; therefore, the sum of the variables' importance does not stay at a constant value, like 100%, across different temporal and spatial scales. Filtering In-Situ Data The diurnal differences of sea surface pCO2 in the Baltic Sea can reach up to 40 µatm [68], and using only the data from day time or night time would introduce an 8% to 36% error in monthly air-sea CO2 fluxes [69]. Pre-analysis also found that using in-situ pCO2 measurements from all 24 h for sea surface pCO2 estimation would increase the uncertainty of the results by 30-60 µatm (Supplementary Materials Figure S2). Therefore, we only used the in-situ pCO2 measurements obtained during the period when the two satellites (i.e., MODIS Aqua and MERIS) pass over the Baltic Sea, i.e., 9:00-14:00 UTC. Subsequently, the in-situ data were aggregated monthly to match the frequency of the remotely sensed and modelled variables. The variables exactly co-located to the in-situ pCO2 measurements were extracted and used for random forest model construction and validation. Using the variables (e.g., SST) derived for months characterized by frequent upwelling occurrences can significantly affect the monthly pCO2 estimates by introducing large biases (Figure S3). Therefore, the upwelling effect should be eliminated to the largest possible extent. To achieve this, we constructed a random forest model using the in-situ data from each month as validation data and the rest as training data. All the models with the alternating absence of in-situ data from each month were constructed with identical settings. Inspection of the mean absolute errors (MAE) and RMSE of these models showed which monthly data were dominated by upwelling (Figure S4). Nearly all of them were in fall, when upwelling prevails in the Baltic Sea [32]. In-situ pCO2 measurements from these months were eliminated from training and validating the model. Sea surface pCO2 maps in these months were not predicted, as this would produce misestimates for these months. After narrowing the time window of in-situ pCO2 measurements down to 9:00-14:00, aggregating these in-situ pCO2 measurements monthly, and filtering out the data from the upwelling-dominated months, 10,769 in-situ pCO2 measurements with matching variables remained, as shown in Figure S1. Analyzing Variables' Importance for pCO2 Estimation We derived the variables' importance for the pCO2 estimation on two scales: spatial and temporal. On the spatial scale, the random forest models were constructed both for the overall Baltic Sea and for its sub-basins indicated in Figure 1B. In each sub-basin, a random forest model was trained with the in-situ data in the sub-basin from 2/3 of the months, selected at random. Each model was then validated with the in-situ data in the sub-basin from the remaining 1/3 of the months. We constructed 50 random forest models in each sub-basin with the training and validation data selected in this way. In the temporal analysis of the variables' importance for the pCO2 estimates, the in-situ measurements were divided into different seasons. Specifically, February-April was spring, May-July was summer, and August-October was fall.
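A minimal sketch of this data-preparation step is given below, assuming a table of in-situ pCO2 measurements with a UTC timestamp and position columns. The file name, column names, position binning, and the list of upwelling-dominated months are placeholders, not values from the study.

```python
import pandas as pd

# hypothetical file of underway/buoy measurements with columns: time_utc, lat, lon, pCO2
obs = pd.read_csv("insitu_pco2.csv", parse_dates=["time_utc"])

# keep only observations between 9:00 and 14:00 UTC (satellite overpass window)
hours = obs["time_utc"].dt.hour
obs = obs[(hours >= 9) & (hours < 14)]

# aggregate to monthly means on coarse position bins (illustrative rounding)
obs["month"] = obs["time_utc"].dt.to_period("M")
obs["lat_bin"] = obs["lat"].round(2)
obs["lon_bin"] = obs["lon"].round(2)
monthly = (obs.groupby(["month", "lat_bin", "lon_bin"], as_index=False)["pCO2"]
              .mean())

# drop months flagged as upwelling-dominated by the leave-one-month-out screening
# described in the text (the two months listed here are placeholders)
upwelling_months = [pd.Period("2005-09"), pd.Period("2008-10")]
monthly = monthly[~monthly["month"].isin(upwelling_months)]
print(monthly.head())
```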
The limited availability of satellite data due to frequent and extensive cloud coverage in November, December, and January did not allow for such an analysis during these months. As in the spatial analysis, in-situ data from 2/3 of the months, selected at random, were used for training and the remaining 1/3 for validation. Fifty random forest models were constructed in each season with the training data selected in the same manner and validated with the corresponding complementary data. Constructing the Final Model for pCO2 Estimation in the Baltic Sea We constructed a final random forest model for pCO2 estimation in the entire Baltic Sea. This model was trained with the in-situ pCO2 measurements in odd months of even years (e.g., March 2002) and even months of odd years (e.g., April 2003) and validated with the remaining data. By doing this, both the training and validation data covered each of the 12 months in a year and the pCO2-relevant processes from each month. Exchanging the training data and validation data yielded models with nearly the same performance (Figure S7). The monthly mean pCO2 distribution in the entire Baltic Sea was predicted with this model. The Pearson correlations of the pCO2 estimated with the above model to each of the variables were analyzed. In order to speed up the processing, the correlation was analyzed on a 0.5° × 0.5° grid. In each month, the mean pCO2 and the mean of each targeted variable (e.g., Chl-a) in the same grid cell were derived. The Pearson correlations between pCO2 and each of the variables in each grid cell were obtained across the study period of 2002-2011. Comparing the Random Forest to Self-Organized Map (SOM) and Multiple Linear Regression (MLR) for pCO2 Estimation in the Baltic Sea SOM is an artificial neural network algorithm which classifies the input samples into a number of classes based on their Euclidian distance from each other in the space determined by the variables of the input data [20,70]. Often, the number of classes (neurons) is given a priori in a grid format (e.g., 2 × 5). Each class corresponds to a neuron which contains the coefficients determining the relationship between the variables and the dependent variable in the same class, which is also called labelling the class with the dependent variable (output). In the case of sea surface pCO2 estimation with SOM, the remotely sensed variables, like Chl-a and SST, in the training data are used to calculate the distance between the input samples for classification. In the pCO2 prediction with such a SOM model, each sample is attributed the pCO2 of the class to which it shows the closest distance. A detailed description of a SOM application for sea surface pCO2 estimation with remote sensing data is available in Telszewski et al. (2009). SOM and its variants have been widely used to estimate sea surface pCO2 with the support of remote sensing products [11,12,20,[71][72][73][74]. In this study, we used the SOM algorithm implemented in the R package kohonen [75]. We set the neuron (class) grid size to 25 × 20, in order to have the total number of classes equal to the number of trees in the random forest models constructed in this study. Furthermore, multiple linear regression (MLR) has been used in many studies for estimating sea surface pCO2 in marginal seas and produced good results [9,16]. Therefore, we compared the performance of SOM, MLR, and random forest in the sea surface pCO2 estimation in the Baltic Sea.
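For illustration, the sketch below implements the SOM-based estimation idea just described with the Python MiniSom library (the study itself used the R package kohonen): a 25 × 20 neuron grid is trained on standardized predictors, each neuron is labelled with the mean pCO2 of the training samples it wins, and new samples receive the label of their best-matching neuron. Neurons that win no training samples yield NaN predictions in this simplified version; the arrays X_train, y_train, and X are assumed inputs.

```python
import numpy as np
from minisom import MiniSom

def fit_som_labels(X_train, y_train, grid=(25, 20), n_iter=10000, seed=0):
    # standardize the predictor variables before computing Euclidean distances
    mean, std = X_train.mean(axis=0), X_train.std(axis=0)
    Z = (X_train - mean) / std
    som = MiniSom(grid[0], grid[1], Z.shape[1], sigma=1.5, learning_rate=0.5,
                  random_seed=seed)
    som.train_random(Z, n_iter)
    # label each neuron with the mean pCO2 of the training samples it wins
    sums, counts = np.zeros(grid), np.zeros(grid)
    for z, y in zip(Z, y_train):
        i, j = som.winner(z)
        sums[i, j] += y
        counts[i, j] += 1
    labels = np.where(counts > 0, sums / np.maximum(counts, 1), np.nan)
    return som, labels, (mean, std)

def predict_som(som, labels, scaler, X):
    # assign each new sample the pCO2 label of its best-matching (winner) neuron
    mean, std = scaler
    Z = (X - mean) / std
    return np.array([labels[som.winner(z)] for z in Z])
```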
During the comparison, the same variables were used in the three algorithms without any preselection. Random forest, SOM, and MLR models were trained with identical data and validated likewise. Two schemes of training data selection were adopted, one with in-situ pCO2 measurements from 2/3 of the months selected at random (scheme No. 1, same as in Section 4.3) and the other using 2/3 of the in-situ pCO2 measurements selected at random as training data (scheme No. 2). Scheme No. 2 was similar to the training data selection by [12]. In both schemes, the validation data were the complement of the training data. Spatiotemporal Characteristics of Variable Importance to pCO2 Estimation On the scale of the entire Baltic Sea, PAR was the most important variable (mean importance of 66%) for the sea surface pCO2 estimate during 2002-2011. This means that the errors of a random forest model constructed without PAR would be 66% higher than those of one constructed with PAR. PAR was followed by SST, MLD, aCDOM, and SSS with mean importances of 21%, 20%, 15%, and 14%, respectively. Chl-a and Kd_490nm showed the lowest importance of 12% and 10% (Figure 2A). The variables' importance differed among the sub-basins of the Baltic Sea. Compared to the pCO2 estimate in the entire Baltic Sea (Figure 2A), the importance of PAR, SST, aCDOM, SSS, and MLD for pCO2 estimation in the Gulf of Finland (i.e., sub-basin No. 2) increased by 26%, 13%, 15%, 5%, and 1% (Figure 2B). For pCO2 estimation in this sub-basin, PAR was still the most important variable. With a mean importance of 25%, aCDOM and SST were the next most important variables, followed by SSS and MLD with respective importances of 18% and 16% (Figure 2B). The importance of Chl-a and aCDOM for the pCO2 estimation in the southern Baltic Sea (i.e., sub-basins No. 3 and 4) was similar to that for the overall Baltic Sea, with slightly lower importance of SSS in sub-basin No. 3 (Figure 2A). The filtering and the narrowing of the time window left the Gulf of Bothnia (i.e., sub-basin No. 1, Figure 1B) with too few in-situ measurements for such a sub-basin analysis. The variables' importance for pCO2 estimation also varied on seasonal scales. For the sea surface pCO2 estimate in the entire Baltic Sea during February-April, PAR was the most important variable with a mean importance of 56%, followed by MLD (20%), SSS (15%), SST (15%), and aCDOM (10%). Chl-a and Kd_490nm showed mean importances of 8% (Figure 3B). From May to July, all the variables displayed a similar importance (12-14%), with lower values for Kd_490nm (7%) and MLD (5%) (Figure 3C). The low importance of all the variables in May-July means that during this period the alternating absence of the variables in the constructed models did not significantly change the accuracies of the respective models. In other words, during May-July, the combination of any six out of the seven variables used in the study can well cover the variations of pCO2 in the Baltic Sea. For pCO2 estimation in the entire Baltic Sea in the period of August-October, PAR and SST were the two most important variables with respective importances of 38% and 31% (Figure 3D), followed by MLD (16%) and SSS (12%), and the remaining variables with importance of about 10%. Chl-a and Kd_490nm showed overall low importance for the pCO2 estimate across the Baltic Sea, regardless of the season. From November to the following January, the dense cloud cover over the Baltic Sea region barely allowed any optical images qualified for the retrieval of remotely sensed variables.
The RMSEs of the 50 models were in the range of 30-80 µatm. The models trained with data from May-July showed smaller RMSEs (41 µatm) than those trained with in-situ data from February-April and August-October (52 µatm and 55 µatm) (Figure 3D). Overall, PAR showed the highest importance for the pCO2 estimate in the Baltic Sea across different seasons and locations. SST was the second most important variable. aCDOM is important for the pCO2 estimate in the Gulf of Finland. MLD is important for the pCO2 estimate in all the sub-basins of the Baltic Sea but varied seasonally. SSS is important for pCO2 estimation in the Baltic Sea both spatially and temporally. Chl-a, which has been commonly considered as a determining variable for pCO2, showed low importance for the pCO2 estimate over the entire Baltic Sea and its sub-basins. Kd_490nm showed low importance for pCO2 estimation in the Baltic Sea across different seasons and sub-basins. pCO2 Maps from the Final Random Forest Model The final random forest model for sea surface pCO2 estimation for the entire Baltic Sea engaged all the variables, namely, PAR, Chl-a, aCDOM, SST, Kd_490nm, SSS, and MLD. Its RMSE was 47.8 µatm and its coefficient of determination (i.e., R²) was 0.63 (Figure 4A). The mean absolute error (MAE) of the model was −3.26 µatm, implying a slight overall underestimate of pCO2. The pCO2 predicted with this model exhibited minor overestimates for pCO2 larger than 450 µatm and slight overestimates for pCO2 around 200 µatm (Figure 4A). Both the estimated and observed pCO2 values were mainly in the range of 100-500 µatm, with a few pCO2 observations between 500 µatm and 600 µatm (Figure 4A). The variable importance in the final model was similar to that in Figure 2A. Specifically, PAR was the most important variable, followed by SST, MLD, and aCDOM. Chl-a and Kd_490nm showed the lowest importance (Figure 4B). For the period of August 2002-October 2011, pCO2 maps covering the entire Baltic Sea were retrieved for each month except November, December, January, and February, when the remotely sensed variables were not available due to frequent cloud coverage. Taking the year 2005 as an example (Figure 5), the sea surface pCO2 in the Baltic Sea was in the range of 100-500 µatm. On the spatial scale, the pCO2 maps exhibited reasonable transitions across the Baltic Sea (Figure 5). In addition, detailed features of the pCO2 variation were also displayed in those maps. For example, in April 2005, much lower pCO2 was present at the river mouths in the southern Baltic Sea compared to other areas. In May 2005, a strip of low pCO2 was present in the central Baltic Proper. In September 2005, an area of pCO2 higher than in both August and October was displayed in the southern Baltic Sea (Figure 5). The sea surface pCO2 in the Baltic Sea exhibited significant seasonal variations (Figure 5). Generally, low (undersaturated) pCO2 conditions of 100-300 µatm prevailed during the summer months (e.g., July), while the late fall months (e.g., October) were characterized by oversaturated pCO2 conditions of up to 500 µatm (Figure 5). The pCO2 variation at different sites in the Baltic Sea also exhibited these characteristics (Figure 6). The sea surface pCO2 in the Baltic Sea also showed significant spatial gradients and variation across the months, particularly between April and September (Figure 5).
In April, July, and August, the southern central Baltic Sea (excluding the sub-basin No. 4 in Figure 1B) often displayed pCO2 approximately 100-150 µatm lower than the northern sub-basins (Figure 5). In May, the Gulf of Finland and the Gulf of Riga (sub-basin No. 2 in Figure 1B) showed the lowest pCO2 of 100 µatm in the Baltic Sea. In June, sea surface pCO2 in the two narrow gulfs increased slightly, while the Gulf of Bothnia exhibited its lowest sea surface pCO2 of the year. In September, the sea surface pCO2 in the southern Baltic Sea increased rapidly and displayed a gradient reversed relative to that in August. In October, the pCO2 in the entire Baltic Sea was in the range of 380-420 µatm, rather homogeneous in comparison to other months (Figures 5 and 6). On the other hand, different areas in the Baltic Sea showed their minimum pCO2 at different times. While the Gulf of Finland (No. 42 in Figure 6A) and the Baltic Proper (i.e., No. 61 in Figure 6A) had two seasonal minima, in May and July, the Bothnia Sea (i.e., No. 8 in Figure 6A) and the Bothnia Bay (No. 28 in Figure 6A) showed their only seasonal minimum of 180-250 µatm in June. Thirdly, the seasonal change points of pCO2 in the Baltic Sea varied spatially. The pCO2 in the Bothnia Bay and Bothnia Sea started decreasing in May (Figure 6B,C), but the pCO2 in the Baltic Proper and the Gulf of Finland in the south showed this change already in April, one month earlier (Figure 6D,E). The pCO2 in the Gulf of Bothnia (i.e., No. 8 and 28 in Figure 6A) increased already in July, but such changes in the pCO2 in the southern Baltic Sea were delayed by one month to August. Consequently, in August, when pCO2 in the northern Baltic displayed values almost equal to those in the winter months (Figure 6B,C), pCO2 in the Baltic Proper and the Gulf of Finland remained on the level of its summer value (Figure 6D,E). Furthermore, in the Gulf of Finland (i.e., No. 42 in Figure 6A), significant inter-annual pCO2 differences were present in April and August (Figure 6D), but, in the Baltic Proper (i.e., No. 62, Figure 6A), this occurred in May, July, and August (Figure 6E). Across the period of 2002-2011, the estimated pCO2 was correlated to the variables in the Baltic Sea to different degrees and in different directions, varying spatially (Figure 7). The Chl-a-pCO2 correlation varied between −0.5 and 0.5, with generally positive correlation in the northern Baltic Sea and negative correlation in the south. The estimated pCO2 was generally negatively correlated to the co-located aCDOM in the Baltic Sea, with correlation coefficients ranging from −1 to 0, and the correlation exhibited larger absolute coefficients than the Chl-a-pCO2 correlation, particularly in the southern Baltic Sea. The SST-pCO2 correlation mostly exhibited negative coefficients (i.e., from −0.5 to 0) in the Baltic Sea, with larger absolute values in the south than in the north. An exceptionally high positive SST-pCO2 correlation, up to 0.8, was present in the very west part of the Baltic Sea. The PAR-pCO2 correlation in the Baltic presented the largest absolute coefficients, and pCO2 was mostly negatively correlated to PAR in the entire Baltic Sea (i.e., from −1 to −0.6), showing the same pattern as the SST-pCO2 correlation. The Kd_490nm-pCO2 correlation showed a similar pattern to the Chl-a-pCO2 correlation, with slightly higher absolute coefficients at the southeastern coasts.
SSS exhibited a high positive correlation to the co-located pCO2 in the coastal waters, with values ranging from 0 to 0.8 and mostly around 0. MLD was positively correlated to pCO2 in the entire Baltic Sea with large coefficients (0.5-1), except in the very north and west parts of the sea. Comparison of Random Forest and SOM In both schemes of training and validation data selection described in Section 4.5, the majority of the validation data were in the range of 100-500 µatm. The pCO2 estimated with random forest was in the same range as the validation data (Figure 8A,C). In contrast, the SOM model constrained the pCO2 estimate to the range of 230-430 µatm (Figure 8A,C), particularly in scheme No. 2, where the training data were randomly selected pCO2 measurements (Figure 8C). In addition, one pCO2 value estimated from SOM often corresponded to a large range of observed pCO2, forming evident horizontal features in the cross-validation (Figure 8A,D), particularly when the prediction covered multiple months. However, such patterns were not notable in the pCO2 estimated with random forest or MLR (Figure 8B,E). In an example of the 50 experiments where the training data were selected with scheme No. 1 (Figure 8A,B), the coefficient of determination of the random forest model prediction was 0.68, much larger than 0.58 and 0.6, the coefficients of determination of the predictions with the SOM and MLR trained with the identical pCO2 measurements. The mean RMSE of the 50 random forest models trained with training data selected with scheme No. 1 was 49 µatm, while the mean RMSEs of their SOM and MLR counterparts were 55 and 62 µatm (Figure 8C). In the case of training data selected with scheme No. 2, the mean RMSE of the 50 random forest models was 24 µatm, significantly lower than 30 and 48 µatm, the respective means of the RMSEs of the 50 SOM models and MLR models trained with the same sets of training data (Figure 8F). This indicated that random forest outperformed SOM and MLR in the pCO2 estimation in the Baltic Sea. Characteristics of Variable Contribution to the pCO2 Estimate We analyzed the importance of different variables for the pCO2 estimation in the Baltic Sea using random forest on different spatial and temporal scales. It was evident that the spatiotemporal variability in the variables' importance was high, but some general patterns were visible. Chl-a displayed overall low importance (small contribution) to the pCO2 estimate across different spatial and temporal scales in the Baltic Sea (Figures 2 and 3). The Chl-a-pCO2 correlation in the Baltic Sea was also relatively low compared to the other variables' correlations to pCO2 (Figure 7). This is in contrast to previous findings that Chl-a was closely related to pCO2 in global oceans [13] and marginal seas, like the Gulf of Mexico [10]. The limited importance of Chl-a is probably due to the following: (1) in addition to Chl-a, PAR and SST are also fundamental factors for the photosynthesis-induced biological fixation of carbon; (2) the studies that established or confirmed correlations between Chl-a and pCO2 did not include aCDOM [13,76], but high correlation (r > 0.9) was found between remotely sensed Chl-a and aCDOM in the Gulf of Mexico [17] and the West Florida Shelf [41]. Chl-a and aCDOM also displayed similar spatiotemporal patterns in the Baltic Sea (Figure S8).
In the analysis of the variables' importance, aCDOM exhibited a more pronounced response to the pCO2 variation than Chl-a (Figure 2A), as it showed a higher correlation to pCO2 than Chl-a did (Figure 7). Similarly, sea surface pCO2 in the Gulf of Mexico is more closely related to aCDOM than to Chl-a [41]. However, despite its low importance for the sea surface pCO2 estimate in the Baltic Sea at all the spatial and temporal scales and its generally low correlation to pCO2 (Figures 2, 3 and 7), we still regard Chl-a as an important variable for the pCO2 estimation in the Baltic Sea. This is particularly the case during summer (i.e., May-July), when cyanobacteria and phytoplankton blooms frequently take place, taking up CO2 and reducing the sea surface pCO2 in the Baltic Sea [58]. The low importance of Chl-a in May-July (summer in this study) (Figure 3B) is very likely because, during this time, the effect of the absence of Chl-a in the model was compensated by variables highly correlated to Chl-a during this period (e.g., aCDOM and SST). Likewise, the other variables also exhibited low importance for the pCO2 estimate in May-July (Figure 3B). Yet, this was the case for the Baltic Sea; its applicability to other marginal seas should be treated carefully. Overall, PAR exhibited the highest importance for the pCO2 estimation in the Baltic Sea across the different sub-basins and in nearly every season, except summer. In addition, the PAR-pCO2 correlation coefficients were of the largest absolute values among all the variable-pCO2 correlations (Figure 7). The high importance of PAR for pCO2 in the Baltic Sea and its sub-basins and the high correlation of this variable to sea surface pCO2 are attributed to the high seasonality of the sun illumination. As the Baltic Sea is located at high latitudes (54-66° N, Figure 1), the sun illumination in the central Baltic Sea, for example, varies from 6 h in winter to 18 h in summer. As phytoplankton photosynthesis is largely determined by the available sun illumination, it is reasonable that the seasonality of pCO2 aligns with that of PAR. In addition, river discharge loaded with CDOM, etc., is also characterized by high seasonality and is, to a large extent, synchronized with PAR [30], as is the bacterial respiration dependent on the available organic matter. Therefore, it is reasonable that PAR exhibited high importance for sea surface pCO2 estimation in the Baltic Sea and its sub-basins. The importance of PAR in the pCO2 estimate in the Baltic Sea in different seasons can be attributed to the wide span of the Baltic Sea in latitude (12°) (Figure 1) and the resultant large gradient in sun illumination. On a day in spring, the sun illumination in the southern Baltic Sea is 2-3 h longer than that in the north, and similarly in fall. The gradients in PAR largely impose differences in the intensities of phytoplankton photosynthesis, in the SST distribution, and ultimately in the CO2 uptake of sea water via primary production. As for summer, when PAR and the other variables displayed similar but low importance, the sun illumination in the northern Baltic Sea is up to 6 h longer than in the southern Baltic Sea, displaying an even larger spatial gradient across the Baltic Sea than in other seasons. However, owing to snowmelt, the co-occurring freshwater discharge and the nutrients it loads are all very high in the Baltic Sea in late spring and early summer [30], creating high spatial variability in the nutrients, DOM, etc. Yet, the spatial patterns of CDOM, etc.,
are likely different from that of PAR, depending on the catchment sizes and land cover types. When all the processes determining pCO2 take place with similarly high intensities, none of the variables exhibits prominent importance; rather, all of them jointly determined the pCO2 in the Baltic Sea in summertime to a similar degree (importance). Concerning the determination of the seasonality in sea surface pCO2, the Julian day of the year (DOY) has been frequently used in previous studies [12,16]. However, in this study, PAR holds two advantages over DOY. Firstly, PAR is a direct measure of the sun radiation available for photosynthesis and has a physical meaning, while DOY is only a proxy of the seasonality. Secondly, a trigonometric conversion is often applied to DOY to correctly approximate the seasonality. Specifically, the minus cosine of DOY was used for the pCO2 estimate in waters in the northern hemisphere and the cosine of DOY for waters in the southern hemisphere [16,18]. Consequently, a trigonometric conversion of DOY attributes a spatially constant value to the entire hemisphere and overlooks the effect of the spatial gradient of sun illumination. In contrast, PAR captures well the spatial gradient of sun illumination along the latitude and expresses its effect on photosynthesis in the water. Therefore, we suggest that future sea surface pCO2 estimations consider the participation of PAR instead of DOY (Figure 1). SST holds the same position in the pattern of the variables' importance for the pCO2 estimate in the Baltic Sea and its sub-basins (Figure 2). This was probably because the seasonality magnitudes of SST in each sub-basin are of the same order, particularly when the sub-basins are relatively small and well mixed horizontally. In many cases, despite its correlation to pCO2 being of the same order as the Chl-a-pCO2 and Kd_490nm-pCO2 correlations, SST showed a larger importance than Chl-a, which aligned with the prediction errors produced by alternately omitting the variables in [17]. In the pCO2 estimates for the Baltic Sea in different seasons, SST was more important in August-October than in the other seasons (Figure 3). This was probably because, in fall, the large spatial gradient of SST in the Baltic Sea corresponded to the pCO2 distribution to a similar degree as PAR did, and more than the other variables did. For example, the sea surface in the Gulf of Bothnia starts freezing already in October, which lowers the primary production, whereas the southern Baltic Sea remains open water at that time and allows biological CO2 uptake [77]. Despite its low importance for the pCO2 estimate for the entire Baltic Sea, aCDOM was more important for the pCO2 estimate in the Gulf of Finland than in the other sub-basins (Figure 2B). The aCDOM-pCO2 correlation in the Baltic Sea is also relatively large, particularly at the coast and in the Gulf of Finland (Figure 7). As mentioned previously, bacterial respiration produces CO2 by decomposing organic carbon, like DOM [14,15]. The relatively narrow waters of the Gulf of Finland receive a large terrestrial input of DOM from the rivers, including the Neva, which drains the largest sub-catchment of the Baltic Sea, approximately 1/6 of the total Baltic Sea catchment [30]. The changes of sea surface pCO2 in the Gulf of Finland largely responded to the changes in CDOM there. Therefore, aCDOM is important for pCO2 estimation in the Gulf of Finland (Figure 2B) and thus in the Baltic Sea as well.
A similar mechanism very likely applies to coastal waters receiving river discharge. Moreover, this study used the aCDOM derived from MERIS images. The MERIS sensor was succeeded by the Ocean and Land Color Instrument (OLCI) sensors on the Sentinel-3 satellites in 2016. Therefore, aCDOM derived from OLCI images will likely play an equivalent role in the pCO2 estimate in the Baltic Sea and other similar waters. Though less important than PAR and sometimes slightly less important than SST, MLD was important for the pCO2 estimation in the Baltic Sea and all its sub-basins (Figure 2B). pCO2 in the Baltic Sea is largely and positively correlated to MLD (Figure 7). This probably results from the seasonally varying amount of fresh water discharged by the many rivers, which lies above the relatively saline and heavier water [78]. In addition, seasonal winds in the Baltic Sea might have jointly determined the high variation of MLD [32] and, consequently, the vertical mixing of sea water and pCO2 as well. In this study, Kd_490nm showed low importance for the pCO2 estimation in the Baltic Sea, regardless of season or sub-basin (Figures 2 and 3), and a relatively weaker correlation to pCO2 (i.e., from −0.7 to 0) compared to variables like PAR and aCDOM. This aligns with the previously found negative correlation between Kd_490nm and pCO2 in the Gulf of Mexico [16]. Here, we argue that the reasons behind the low contribution of Chl-a to the pCO2 estimation very likely also apply to Kd_490nm. This argument is well supported by previous studies. It was found that Kd_490nm in the Baltic Sea was a function of inherent optical properties, i.e., the absorption and scattering of phytoplankton, and of the effects of illumination and viewing angle [79,80]. Furthermore, [81] observed a strong positive correlation between Kd_490nm and the river discharge into the Baltic Sea, and the latter is rich in CDOM. In addition, a positive correlation of Kd_490nm to Chl-a and aCDOM was noticed in the Baltic Sea (Figure S9), and the Kd_490nm-pCO2 and aCDOM-pCO2 correlations also exhibited similar patterns (Figure 7). Impact of the Unbalanced In-Situ Measurement Distribution on the Model for the pCO2 Estimate The in-situ pCO2 measurements available in the Baltic Sea during 2002-2011 were unevenly distributed, namely, relatively sparse measurements in the north and dense measurements in the south (Figure 1). In order to ensure the participation of the in-situ data from the northern Baltic Sea, we selected the in-situ data month-wise to train and validate the model for pCO2 estimation, instead of selecting randomly from the in-situ measurements. However, this measure meant that the variables' importance could not be determined for the Gulf of Bothnia, due to the few months of in-situ measurements in this basin (i.e., March 2006 and September 2009). In the future, including additional in-situ pCO2 measurements from the Gulf of Bothnia can help analyze the variables' importance for the pCO2 estimate in that region and understand the processes controlling pCO2 there. These additional in-situ pCO2 measurements are also expected to improve the RMSE of the pCO2 estimate for the entire Baltic Sea. Despite the unbalanced distribution of in-situ data in the Baltic Sea, monthly pCO2 maps were retrieved for the Baltic Sea for the period of August 2002-October 2011 (Figure 5).
The RMSE of the model for pCO2 estimation was 47.8 µatm (Figure 4), slightly larger than 25 µatm and 31.7 µatm, the RMSEs of the models constructed by [16] and [17], respectively, for pCO2 estimation in the Gulf of Mexico using similar tree-based regression algorithms. Still, the RMSE of 47.8 µatm is relatively small for pCO2 estimation in the Baltic Sea, considering the following factors: (1) the pCO2 estimation was undertaken at a monthly frequency, where the in-situ data from an entire month were integrated and matched to the few days with remote sensing images; (2) the magnitudes of the seasonal changes in pCO2 in the Baltic Sea are much larger than those in middle- or low-latitude marginal seas. For example, the pCO2 in the Baltic Sea was in the range of 100-600 µatm (Figure 8), while, in the Gulf of Mexico, it was 200-450 µatm [16], and, in the South China Sea, it was 250-450 µatm [11]; (3) the processes controlling pCO2 across the Baltic Sea (e.g., phytoplankton photosynthesis, bacterial respiration, and runoff) vary spatially and temporally [30,82] and thus increase the difficulty of mapping pCO2 in the Baltic Sea with high accuracy; (4) upwelling takes place in the Baltic Sea with varying frequencies among years and months [83] and complicates the pCO2 processes in multiple manners [34,84]. Even though we eliminated the months dominated by upwelling, some upwelling events might have remained in the remaining months and increased the RMSE of the model; (5) most importantly, the random forest model covered the processes that took place in the entire Baltic Sea in all the seasons of the period 2002-2011. This task itself is a challenging one due to the above factors. All these factors rendered deriving sea surface pCO2 in the Baltic Sea more challenging than in other marginal seas. The random forest algorithm outperformed SOM and MLR in the sea surface pCO2 estimation (Figure 8). We attribute this to how the three algorithms treat the variables. In random forest, a series of forests was constructed, and the most effective one was chosen for prediction [59,65]. While the variables and training samples were randomly selected for the tree construction, the best model was the one with little participation of the unimportant variables. In contrast, when the model was constructed with SOM, all the input variables had the same weights [70]. This very likely amplified the contribution of the unimportant or correlated variables and suppressed the important ones at the corresponding temporal and spatial scales, thus causing misestimates (Figure 8A,C). The variants of SOM, such as SOMLO, probably also inherit such effects. MLR attributes weights to the input variables by determining their correlation coefficients to the dependent variable. The effect of the coefficients is very evident in the case when the training samples were chosen across months and covered a large variation. For example, in the experiments in Figure 8A-C, the samples covered 2/3 of the months, and MLR yielded an RMSE similar to that of random forest and better than that of SOM. In contrast, in the experiment where the samples were 2/3 of the entire in-situ data set from random selection, samples from the same season/months of high similarity were likely used.
Given that the time window of the in-situ data was narrowed down to 9:00-14:00 and the in-situ data from the months dominated by upwelling were also removed, we did not consider the effect of outliers on the modeling, and the errors produced by the models were regarded as arising from the misestimates of the models. Overall, random forest performs better than MLR and SOM regardless of the variation range of the training data. MLR performs better than SOM when the training data cover a large variation, and SOM performs better than MLR when the training data cover a relatively small variation. pCO2 Maps for the Baltic Sea and Their Spatiotemporal Characteristics In this study, we produced monthly pCO2 maps for the entire Baltic Sea over the period of August 2002-October 2011. These maps showed that pCO2 across the Baltic Sea was characterized by strong seasonality: generally, high pCO2 in winter and low pCO2 in summer (Figures 5 and 6). The trend aligned well with that derived from in-situ data in the Baltic Sea [85]. The seasonality of pCO2 in the Baltic Sea was similar to that in the Gulf of Maine but different from the one observed in the Gulf of Mexico by [16]. In addition, the range of the seasonal pCO2 variation in the Baltic Sea (i.e., 100-500 µatm) was larger than that observed for the two marginal seas (i.e., 300-500 µatm) (Figures 5 and 6) [16]. These different seasonal variation trends and variables' importance (e.g., Kd_490nm) suggest that the processes determining the pCO2 in the Baltic Sea are likely different from those observed in other seas, or that the same processes work with different intensities, for example, the gradient in PAR. In addition to the similar seasonal trend, minor differences exist in the seasonal trends of pCO2 within the Baltic Sea. For example, the Baltic Proper and the Gulf of Finland showed pCO2 minima both in May and July, while the Bothnia Bay and Bothnia Sea showed only one minimum, in June (Figure 6). May is the time when most rivers pass their annual peak of water levels [30], and, in July, the daytime is the longest of the year in the Baltic Sea, with the most sunny days. In addition, different areas in the Baltic Sea showed interannual variations in different months (Figure 6). For example, the waters in the Gulf of Finland exhibited large interannual variation in April (Figure 6D), when the large river input takes place in this sub-basin [27]. The Baltic Proper showed such variations during May-July (Figure 6E), when the primary production is high in this sub-basin and upwelling also occurs very often there [58,68]. This indicates that the dominant drivers of pCO2 are spatially variable across the Baltic Sea. The pCO2 maps derived from this model exhibited continuous transitions between the sub-basins of the Baltic Sea (Figure 5). Therefore, these maps are a significant improvement over those produced in previous studies by dividing the Baltic Sea into different sub-basins [12]. Conclusions This study analyzed the variables' importance in the pCO2 estimation for the Baltic Sea across different times and sub-basins with the support of remote sensing and derived pCO2 maps for the Baltic Sea from August 2002 to October 2011. We found that the contributions of the variables to the pCO2 retrieval for the Baltic Sea vary both spatially and temporally and likely replicate the spatiotemporal characteristics of the driving forces. Among all the variables, PAR was the most important, followed by SST and MLD.
Chl-a contributed surprisingly little to the pCO2 estimate. aCDOM was important for the pCO2 estimation in the Gulf of Finland and the Gulf of Riga. The random forest model used for the pCO2 estimate for the entire Baltic Sea had an RMSE of 47.8 µatm, an MAE of −3.26 µatm, and a coefficient of determination of 0.63. The pCO2 maps derived in this study are among the most reliable pCO2 fields available for the Baltic Sea and can potentially support determining the role of the Baltic Sea as a sink/source of atmospheric CO2. Moreover, the variable importance/relevance results from this study can provide a benchmark for understanding the different drivers of pCO2 in the Baltic Sea and how they vary in time and space. In the Baltic Sea region, frequent clouds in November, December, and January lead to the absence of pCO2 maps during those three months. This is an inevitable situation considering the high-latitude location of the Baltic Sea. Derivation of sea surface pCO2 for the Baltic Sea in wintertime will need to be achieved by combining the remote-sensing-supported results with additional sources of information, e.g., modeling. Supplementary Materials: The following are available online at https://www.mdpi.com/2072-4292/13/2/259/s1, Figure S1: Spatial and temporal distributions of the in-situ data used for training and validating the pCO2 estimate. Figure S2: Diurnal effect on the pCO2 estimate. Figure S3: Scenarios where upwelling affects the pCO2 estimate from remote sensing images. Figure S4: The effect of upwelling on the pCO2 estimate with remote sensing images. Figure S5: The monthly mean Chl-a product derived from MODIS and MERIS images in May, July and September 2011, mapping the Baltic Sea. Figure S6: aCDOM from MODIS and MERIS in the Baltic Sea. Figure S7: The performance differences between Chl-a from MODIS and Chl-a from MERIS in the pCO2 estimate. Figure S8: Alternative to the final model for pCO2 estimation in the entire Baltic Sea. Figure S9: Relationships between variables in the Baltic Sea. Author Contributions: S.Z., A.R. and P.P. designed the study. S.Z. carried out the data collection, analysis and manuscript preparation. Writing-review & editing, S.Z., A.R., P.P. and M.B.W. Investigation, S.Z., P.P. and M.B.W. All authors have read and agreed to the published version of the manuscript.
13,510
2021-01-13T00:00:00.000
[ "Environmental Science", "Mathematics" ]
Prognosis of Failure Events Based on Labeled Temporal Petri Nets Article history: Received: 21 February, 2020 Accepted: 30 April, 2020 Online: 18 June, 2020 To reduce the risk of accidental system shutdowns, we propose to provide control system developers (supervisors, SCADA) with a prediction tool that determines the occurrence date of an imminent failure event. Existing approaches report the rate of occurrence of a future failure event (stochastic methods) but do not provide an estimated date of its occurrence. Estimating this date makes it possible to schedule the repair of the system before a failure occurs and thus provides visibility into the future evolution of the system. The approach consists in modelling the operating modes of the system (nominal, degraded, failed); the goal is to follow the evolution of the system in order to detect its degradation (switching from nominal to degraded mode). When degradation is reported, a prognoser is generated to identify all possible sequences, and more precisely those ending with a failure event. It then checks which of these sequences (containing a failure event) are prognosable. The last step of the approach is carried out in two parts: the first part consists in calculating the execution time of the so-called prognosable sequences (by optimizing the number of possible states and solving a system of inequalities); the second part finds the minimum execution time, i.e., the earliest occurrence of a failure event. Introduction The supervision applications provided to control system developers (in manufacturing, robotics, logistics, vehicle traffic, communication networks or IT) make it possible to report the detection of a dysfunction or accidental shutdown of the system and to locate its origin. The discrete event systems (DES) community has developed diagnostic methods that focus on the logical, dynamic or temporal sequence of failure events that cause such a dysfunction. However, the criticality of some systems and their complexity call for a method of failure event prognosis that reports their occurrence in advance, in order to avoid any damage caused by a failure. The challenge is therefore to prevent the future occurrence of a failure event. Which modeling tool is suitable for such a system? Knowing that the more the complexity of the system increases, the more its state space grows, how can this problem of combinatorial explosion be overcome? And what are the limits of the prognosis? Several fault prognosis methods have been developed; some have adopted a stochastic approach [1] [2] [3] while others have chosen non-stochastic ones [4]- [6], based either on state automata or on Petri nets. These approaches are interested in predicting a failure m steps in advance, based on a stochastic process; however, their assessment is difficult and the probabilistic information is not always realistic. Others propose a prognosis approach [7] that consists of giving occurrence rates of possible traces that end with a failure event. These approaches indicate the occurrence of a future failure event, but do not specify its occurrence date. The possible occurrence date of a failure event makes it possible to plan the intervention date to repair the system before a failure occurs, and thus provides visibility into the future evolution of the system. The challenge of each group working on this topic is to predict future behavior as faithfully as possible.
[8] introduces the notion of the signature of a trace, which consists in using several formal systems devoted to the description of event signatures and the recognition of behaviors, called chronicles. This concept has been used in diagnostic work [9], [10] and is based on error detection, localization, evaluation, recognition and response. [11] proposed a method for calculating the execution time of a trace, but it is still diagnosis-oriented. The development of a new temporal prognosis approach requires a modeling tool that captures the time constraints of the system (temporal prognosis) while using labels (since it involves predicting an event over time). An extension of Petri nets offers this possibility; such nets are called Temporal Labelled Petri Nets (TLPN for short). The aim is to propose a correct control of a system subject to unforeseen failures. Existing studies use the logical order of failure event occurrences to make the prognosis. In this paper, we are interested not only in the logical order of events, but also in the date of their occurrence. We assume that the system has three possible operating modes (nominal, degraded, and failed). The occurrence of events makes the system switch between these modes, and the event occurrence dates allow the synchronization of state switching in the model. A delayed occurrence of an event, for example, can be explained by a degradation of the system; approaches based only on the logical order of event occurrences cannot detect this delay. Hence the interest of a time-based prognosis approach. Two contributions are proposed in this paper. The first concerns the formal representation and the second the methodology of the prognosis computation. Indeed, the model is based on a TLPN. The association of events with timed transitions will be presented. The evolution from one mode to another is represented by transition firings. The firing of each transition depends on the occurrence of an event and its corresponding occurrence date. The second contribution relates to the methodology of the prognosis. A prognoser is built from the TLPN model. It is an oriented state graph, which identifies all possible sequences, namely those that end in a failure event. But before predicting a failure event, it is important to make sure that it is possible to do so. That is why we introduce the prognosability property, whose objective is to determine the sequences ending with a failure event that can be predicted; such an event is called prognosable, and the goal is to predict the earliest date of the failure event occurrence. To calculate the execution time of these sequences and optimize the number of possible states, the resolution of a system of inequalities based on the works of [11]- [13] is used. The idea is to find the set of minimum values solving the system of inequalities; these values constitute the minimum time after which the occurrence of the failure event is certain. The paper is organized as follows: the second section is devoted to the basic concepts of Petri nets (PN). The third section introduces temporal PNs (according to Berthomieu [14]- [19] and Popova [11]- [13], [20]- [22]). The fourth section focuses on labelled PNs. In the fifth section, we discuss time-labelled PNs, used to support the prognosis approach of the sixth section. In this last section, the formal approach of our proposal is presented, with an algorithm for predicting a temporal failure event and a case study with explanations.
Pre is the backward incidence function that assigns to each pair (p, t) of places and transitions a non-negative integer: Pre: P × T → ℕ, where Pre(p, t) = w is the weight of the arc from place p to transition t. Post is the forward incidence function that assigns to each pair (t, p) of transitions and places a non-negative integer: Post: T × P → ℕ, where Post(t, p) = w is the weight of the arc from transition t to place p. The initial marking m0 is a mapping m0: P → ℕ; it represents the initial global system state. A marked net system S = <N, m0> is a net N with an initial marking m0. When a transition t is enabled, it may be fired; from the marking m, the firing of t leads to a new marking m′, denoted m[t>m′. The symbol •t denotes the set of all places p such that Pre(p, t) ≠ 0, and t• the set of all places p such that Post(t, p) ≠ 0. Analogously, •p denotes the set of all transitions t such that Post(t, p) ≠ 0, and p• the set of all transitions t such that Pre(p, t) ≠ 0. Temporal Petri Nets (TPN) Temporal Petri nets (TPN) were introduced in [5] and then studied in [16], [20]- [26]. The set T of transitions can be divided into two subsets [27]: the set of timed transitions and the set of immediate transitions, whose intersection is empty and whose union is T. The aim of this distinction is to determine the firing priorities of the transitions; firing immediate transitions has a higher priority than firing timed transitions. Behavior, states and reachability relation Definition 2: According to [1], a state of a temporal net is a pair E = (m, I) in which m is a marking and the application I associates a firing time interval with each transition. The initial state E0 = (m0, I0) consists of the initial marking m0 and the application I0, which associates to each enabled transition its static firing time interval. Transition t may fire iff it remains logically enabled for an elapsed time θ included in [Tmin; Tmax], where θ is the amount of time that has elapsed since the transition t became enabled. According to [11], a state of a TPN is a pair E = (m, h) in which m is a place marking (noted p_marking) and h is a clock vector (of dimension equal to the number of net transitions) that corresponds to the transition markings (noted t_marking). Thus, the p_marking describes the situation of the places and the t_marking that of the transitions; such a pair (p_marking, t_marking) describes a TPN state. The symbol $ means that a transition is not enabled. The pair E = (m, h), where m is a p_marking and h is a t_marking, is a state of the TPN if and only if m is a marking reachable in the net. Definition 4 shows that each transition t has a clock: if t is not enabled by the marking m, the associated clock is not activated (sign $); if t is enabled by m, the clock of t indicates the time elapsed since the last enabling of t. The initial state is given by E0 = (m0, h0). In general, each TPN has an infinite number of states, owing to the dense formulation of time, and the construction of the reachability graph of such a PN is then generally impossible. To reduce this state space and provide a finite representation of the reachability graph, two different methods have been defined: [14] introduces the notion of state classes, and [11] provides a parametric description that reduces the state space without affecting the net properties. This reduced state space is used to define the reachability graph of a TPN; such a graph will provide a basis for predicting the failure events of the system.
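To make the untimed part of these definitions concrete, the following minimal Python sketch encodes Pre and Post as matrices (rows: places, columns: transitions) and implements the enabling test and firing rule of Definition 1. The two-place, two-transition net is purely illustrative and is not the net of the paper's figures.

```python
# Minimal sketch of the untimed firing rule: enabling and firing with Pre/Post matrices.
import numpy as np

Pre = np.array([[1, 0],    # Pre[p, t]: arc weight from place p to transition t
                [0, 1]])
Post = np.array([[0, 1],   # Post[p, t]: arc weight from transition t to place p
                 [1, 0]])
m0 = np.array([1, 0])      # initial marking m0 : P -> N

def enabled(m, t):
    """Transition t is enabled iff every input place holds enough tokens."""
    return bool(np.all(m >= Pre[:, t]))

def fire(m, t):
    """m [t> m' : consume Pre tokens, produce Post tokens."""
    if not enabled(m, t):
        raise ValueError(f"transition {t} is not enabled")
    return m - Pre[:, t] + Post[:, t]

m1 = fire(m0, 0)   # fire the first transition from m0
print(m0, "-> t1 ->", m1)
```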
Parametric state and parametric sequence Let N_t be an arbitrary TPN, let σ = t1 … tn be a firing sequence in N_t, and let τ = τ0 τ1 … τn−1 be a time sequence with τi ∈ ℝ+. Then there is at least one dated sequence σ(τ) = τ0 t1 τ1 t2 … τn−1 tn of σ in N_t, called the timed sequence of σ, which leads the net from the initial state E0 to a state E (noted E0[σ(τ)>E) with E = (m, h). Consider, for example, a sequence leading the net from the initial state E0 to a state E′ in which the switch from m0 to m1 is made 2 time units after the firing of t1. In addition to this feasible sequence, there is obviously an infinity of feasible sequences leading N_t from E0 to E, which makes the reachability graph infinite. Instead of considering fixed numbers τi, a variable xi is used to denote the time elapsed between the firing of transition ti and that of transition ti+1 in σ. Thus, instead of an infinity of execution sequences between the states E0 and E, we study a single sequence, called the parametric sequence σ(x) = x0 t1 x1 … xn−1 tn, leading the net from the state E0 to a parametric state E*. Assume now that the parametric marking and the parametric clock vector are already defined for the sequence σ = t1 … tn. For each transition, the clock value accumulates the variables xi while the transition was enabled and remains enabled, and it is reset when the transition is newly enabled. The parametric clock h_σ(t) is thus a sum of variables (a parametric t_marking); it is a vector of linear functions h_σ(t) = f(x) with x := (x0, …, x|σ|), together with a set of conditions (a system of inequalities). The Popova approach not only reduces the system's state space (considering only the essential states) [12], but also determines the time required to reach each state. By using parametric states, it is not necessary to check all possible values of the clocks, and the system of inequalities makes it possible to determine the minimum values of the firing times. We take advantage of this last remark to produce the earliest possible prognosis of a failure event. Labeled Petri net In discrete event systems, partial observation often results in the addition of events or labels as sensor responses of the system. Thus, a Labelled Petri Net is a classic Petri net in which labels are associated with transitions. Definition 6: A Labelled Petri Net (LPN) is a net N_L = <P, T, Pre, Post, m0, Σ, ℒ> in which <P, T, Pre, Post, m0> is a marked Petri net, Σ is the set of labels associated with transitions, and ℒ: T → Σ ∪ {ε} is the transition labeling function associating a label (event) e ∈ Σ ∪ {ε} with each transition t ∈ T, with ε the empty (or silent) event. Thus ℒ(t) = e means that the label of the transition t is e. Remark: Σ can be partitioned into the set of observable events and the set of unobservable events. In this paper we assume that the same label e ∈ Σ can be associated with several transitions, i.e., two transitions t1 and t2 with t1 ≠ t2 can be labelled with the same event e in an LPN. Letting Σ* be the set of all event traces, the transition labeling function ℒ can be extended to sequences; moreover, if ℒ(λ) = ε then λ is the empty sequence. Temporal labelled Petri net In this paper, the aim is to provide a prognosis of the occurrence date of a failure event based on discrete event systems. To represent the behavior of such a system, we adopt the temporal labelled Petri net as a modeling tool that represents both the events and their occurrence dates; each event sequence of the net is therefore given a temporal signature. The temporal labelled Petri net (TLPN) is an extension of the temporal PN [17] [18] in which each transition is associated with an observable (or unobservable) event [5] [26] [29].
Definition 7: A TLPN is a net N_TL = <P, T, Pre, Post, m0, Σ, ℒ, I> in which <P, T, Pre, Post, m0> is a marked Petri net, Σ is the set of labels associated with transitions, ℒ is the transition labelling function and I is the function associating a static time interval with each transition. A change of TLPN state can occur either through a transition firing or through an elapse of time. Here, the definition of a state and of its transition function are the same as for a TPN according to the Popova approach presented in Section 2.2 [11] [21]. Failure prognosis based on TLPN The failure prognosis is intended to predict the properties of a system that do not comply with the specifications. The aim is to predict the occurrence of failure events in the system before they happen. The prognosis in discrete event systems has been discussed in various research papers. Most of them have developed a prognosis approach predicting a failure event m steps in advance, based on finite state automata [3][4] [6] or Petri nets [1]- [2], [30]- [34], using stochastic and/or non-stochastic approaches [6] [35]. Our proposed approach consists in predicting a failure event n time units in advance. The first contribution relates to a formal representation framework. The adopted model considers the three possible operating modes of the system, as shown in Figure 2: • The nominal mode contains only the set of states that represent a nominal execution of the system. • The degraded mode groups all states in which the system operates with a tolerable degradation that does not affect the behavior of the system. • The failed mode contains all states that represent the failed behavior of the system. Figure 2 also shows the interest of the prognosis, because it aims to explain causality. Indeed, the diagnosis cannot prevent a failure situation, whereas the prognosis offers more visibility on the future evolution of the system and makes it possible to act before a fault occurs. Our purpose is to determine a prognosis within an operating-mode management context. To model such behavior, we propose an extension of the temporal labelled Petri nets within a context of operating modes. This extension provides the ability to represent temporal constraints and labels in the modeling process. Figure 3 shows an example of the operating modes of a system based on a TLPN model. State switching is conditioned by the firing of transitions; a transition is fired if it is enabled. The prognosis requires an observer module consisting of an observer place and an observer transition. This module has no influence on the behavior of the system; it only observes the occurrence time of a failure event (Figure 3). To do this, we suppose that: • only one transition is fired at a time; • only one mode is active at a time; • the PN is safe; • the firing of transitions is immediate and there is no firing delay; • all TLPN events are observable. After firing a transition, the TLPN changes from the state E = (m, h) to the state E′ = (m′, h′) (see Definition 4). f is a failure event and r is a repair event. The transition t6 is a failure transition, i.e., ℒ(t6) = f. By firing the transition that triggers the degradation, the system switches to the degraded mode and the corresponding place is marked (its marking becomes 1). This place remains marked until the system switches to the failed mode. The introduction of the observer place and transition does not influence the behavior of the system; their interest will be explained in the following section.
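As an illustration of the TLPN ingredients of Definition 7, the following sketch attaches a label and a static firing interval [Tmin, Tmax] to each transition and walks a single token through the three operating modes at the earliest possible dates. The tiny nominal → degraded → failed → repaired net and its intervals are assumptions for illustration only; they are not the net of the paper's figures.

```python
# Hedged sketch of a 1-safe TLPN fragment: labeled transitions with static intervals.
from dataclasses import dataclass

@dataclass
class Transition:
    name: str
    label: str                 # event from Sigma (e.g., 'g', 'f', 'r')
    interval: tuple            # static firing interval (Tmin, Tmax)
    src: str                   # single input place (1-safe example)
    dst: str                   # single output place

transitions = [
    Transition("t_deg",  "g", (3, 5),  "nominal",  "degraded"),   # degradation event
    Transition("t_fail", "f", (4, 9),  "degraded", "failed"),     # failure event
    Transition("t_rep",  "r", (1, 2),  "failed",   "nominal"),    # repair event
]

def enabled_from(place):
    """Transitions enabled when the (single) token is in `place`."""
    return [t for t in transitions if t.src == place]

# Earliest-time trace from the nominal mode to the failure label 'f':
place, clock = "nominal", 0
while place != "failed":
    t = enabled_from(place)[0]
    clock += t.interval[0]          # take the lower bound of the static interval
    place = t.dst
    print(f"{t.label} at t >= {clock} ({place})")
```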
To represent the sequences ending with a failure event, we use both notions of parametric state and parametric sequence, which allow us to construct a reachability graph containing only the essential states, i.e., the states E = (m, h) in which the time associated with each enabled timed transition is a natural integer. Knowing the behavior of the net in the "essential" states is sufficient to determine its overall behavior at any time (cf. [12] [22]). The advantage of this approach is the application of linear optimization (generated by the system of inequalities in each state), which makes it possible to calculate the execution time of a sequence at the earliest and at the latest. Clock times must be accumulated to progress from a state E of the net to a failed state E′. To do this, an observer module is introduced into the model in order to record the cumulative time between E and E′. This observer module has no impact on the behavior of the system; it only records the time required to progress from a non-failed (but not necessarily nominal) state E to a state E′ that is considered failed. To calculate this execution time, we propose an extension (Definition 11) of Definition 5. Before discussing the proposed approach, we formulate the following assumptions: 1- the system model is known; 2- all events are observable (the case of prognosis under partial observation is not considered here); 3- the prognosis begins when the model switches from the nominal mode to the degraded one. Remark: the approach remains the same if the prognosis is started from any nominal state of the system. The following framework (Figure 4) describes the steps of the proposed prognosis approach. The first step, called the behavioral model, is required to describe the possible operating modes of the system (Figure 3). The prognoser is an oriented state graph (Figure 8) built from the system model; its role is to detect all possible traces ending with a failure event. Once the system switches from nominal to degraded mode, the prognoser must identify all the sequences of the model, namely those that lead to a failure event. Such an event cannot be predicted in all sequences; the prognosability property is therefore introduced to determine the sequences whose failure event can be predicted. From a system of inequalities, the execution time of each sequence is calculated; this step is called the "time signatures of execution traces". The minimal time signature then represents the earliest date before a failure event occurs. The resolution of the system of inequalities is the last step, which calculates the time signature of execution for all the prognosable sequences; the minimum execution time generated by this step represents the earliest occurrence time of a failure event. In the example shown in Figure 7, the prognosis starts from the firing of transition t6, because the degraded mode begins there. Indeed, if the event g occurs, at the earliest after 3 time units, the model switches to the degraded mode. From this state the observer place is activated, and its corresponding transition remains enabled until the failure event f occurs. Thus, the time intervals associated with the transitions enabled from place P6 are combined into a system of inequalities as long as the failure event of transition t13 has not occurred. When the event r is generated (meaning that the system has been repaired), the observer place is re-initialized to allow the next operating cycle.
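A hedged sketch of the "time signature" step is given below: the delays x_i between successive firings of a prognosable sequence are bounded by the static intervals of the enabled transitions, and the earliest occurrence of the failure event is the minimum of the total execution time over the resulting system of inequalities, obtained here with a linear program. The intervals and the extra coupling constraint are illustrative values, not those of the paper's case study.

```python
# Hedged sketch: earliest failure date as the minimum of sum(x_i) over interval constraints.
from scipy.optimize import linprog

# Per-step interval constraints: a_i <= x_i <= b_i along the prognosable sequence.
bounds = [(3, 5), (2, 6), (4, 9)]        # e.g., g, an intermediate event, then f

# Optional coupling constraint (A_ub @ x <= b_ub), e.g., the first two delays
# together may not exceed 8 time units while their transitions remain enabled.
A_ub = [[1, 1, 0]]
b_ub = [8]

c = [1, 1, 1]                            # minimise the total execution time
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print("earliest failure occurrence after", res.fun, "time units")  # here 3 + 2 + 4 = 9
```

With purely independent interval constraints the minimum is simply the sum of the lower bounds; the linear program becomes useful as soon as the inequalities couple several delays, as in the parametric states described above.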
The place P6 is called the candidate place for the prognosis: once this place is marked, the occurrence of the failure event can be predicted. In the prognoser's states, the marked places are listed, and D means that the system is in the degraded mode with the observer place marked. When the prognoser switches to a state in which this place is marked, the prognosis process is activated. The prognosis process is achieved by identifying all the sequences ending with an F state. According to the prognoser's model, and from the state {D6, O}, two event sequences σ1 and σ2 ending with the failure event are identified; to simplify, the cycles they contain are not taken into consideration. Then, the execution time of each sequence (its time signature) is calculated by applying Algorithm 2. The aim is to find all the minimum values solving the system of inequalities; these values constitute the minimum time after which the occurrence of the failure event is certain. Definition 11, which is an extension of Definition 5, allows us, from a TLPN, to recursively determine the parametric state and parametric sequence leading to a failure state, and thus to generate the system of inequalities composed of the constraints obtained from the intervals associated with each transition enabled from a candidate place. Before presenting Definition 11, let us first reconsider the set of transitions enabled from a marking. We considered the smallest possible values for each variable xi. Thus, from the candidate place P6, the failure state (place P12) is reached after at least 12 time units, assuming that only one cycle is executed in degraded mode. We can, of course, predict the failure state from any nominal or degraded state. Conclusion In this paper, we have presented two contributions to determine the prognosis of a failure event in discrete event systems. The first one concerns the exploitation of the technique of state and event sequence parametrization on a model of temporal labelled Petri nets. The interest is to reduce the state space of the model for an analysis of both the order and the date of occurrence of events. The second contribution is the proposal of an algorithm based on a system of inequalities to determine the occurrence date of a future failure event. The proposed algorithm makes it possible to determine, from a place belonging to the set of candidate places, the minimum time necessary to reach a critical place from which the occurrence of the failure event is certain. Work in progress considers the system under partial observation, which makes it possible to address the problem of the system's prognosability. The work presented in this paper assumed that the PN used is safe; in practice, however, the system is composed of several components, and it would be more interesting to consider a multi-token model, assign a type of clock according to the nature of the token, and then predict the failure status of each component within the same model. It would also be very important to predict the failure event of a system while considering its aging state.
5,846.4
2020-01-01T00:00:00.000
[ "Computer Science", "Engineering" ]
Testing students' e-learning via Facebook through Bayesian structural equation modeling Learning is an intentional activity, and several factors affect students' intention to use new learning technology. Researchers have investigated technology acceptance in different contexts by developing various theories/models and testing them by a number of means. Although most of the theories/models developed have been examined through regression or structural equation modeling, Bayesian analysis offers more accurate data analysis results. To address this gap, the unified theory of acceptance and use of technology is re-examined in this study in the context of e-learning via Facebook using Bayesian analysis. The data (S1 Data) were collected from 170 students enrolled in a business statistics course at the University of Malaya, Malaysia, and tested with the maximum likelihood and Bayesian approaches. The difference between the two methods' results indicates that performance expectancy and hedonic motivation are the strongest factors influencing the intention to use e-learning via Facebook. The Bayesian estimation model exhibited better data fit than the maximum likelihood estimator model. The results of the Bayesian and maximum likelihood estimator approaches are compared and the reasons for the discrepancy between the results are discussed. Introduction The use of information systems and the Internet as teaching tools is a noteworthy aspect of today's tech-savvy community, because these tools are expanding rapidly into education, while teaching without technology is now seen as uninteresting [1]. According to various researchers, e-learning is usually associated with face-to-face activities [2]. It is also a complementary tool to traditional learning and teaching processes, since e-learning can facilitate education and training through information and communication technology (ICT) for anyone, anytime and anywhere. Some organizations utilize e-learning for employee training, as it lowers training costs, increases learning flexibility (place and time), and enables on-demand training. The Bayesian approach allows for the simultaneous estimation of all cross-loadings and residual correlations in an identifiable model, which is not possible with ML estimation [20]. For example, from a traditional perspective, maximum likelihood structural equation modeling (ML-SEM) is applied to analyze the appropriate number of hidden indicators (constructs or latent variables) that determine the observed indicators. ML-SEM can facilitate concurrent analysis to illustrate the connections between observed indicators and the corresponding latent variables as well as the connections among latent variables (Ullman [23]). Disadvantages of ML-SEM, including the required multivariate normal distribution of the variables and problems with small sample sizes, have encouraged researchers to seek better approaches for prediction analysis. Several studies suggest that Bayesian structural equation modeling (B-SEM), which represents a nonparametric method, is able to overcome the limitations of ML-SEM [24,25]. Lee and Song [26], Scheines, Hoijtink, and Boomsma [27], and Dunson [28] argued that Bayesian approaches help to utilize genuine prior information together with the information available in the observed data for enhanced results, yield better distributions of indices and statistics, such as percentiles and means of posterior distributions for unknown research parameters, and provide more trustworthy outputs for small samples.
Lee [29] suggested three advantages of the B-SEM approach: a) it leads to direct latent variable approximation, which is superior to traditional regression modeling for obtaining the factor score estimates, b) it models measurement indicators directly with their latent variables through familiar regression models, provides additional direct interpretation and allows the application of common techniques in regression modeling, such as outliers and residual analyses, and c) statistical modeling progress is based on the first moment properties of raw individual data, which are simpler than the second moment properties of the sample covariance matrix. Hence, B-SEM is easier to use in more complex situations. In view of the above explanation, the main motivation of this study is to re-examine the UTAUT2 model in the context of e-learning via Facebook and test the data with stronger techniques for more accurate results. Considering the advantages and robust prediction power of B-SEM, UTAUT2 is examined in the context of e-learning and the results are compared with ML-SEM. The comparison offers additional knowledge about the predictive power of Bayesian techniques, consequently providing opportunities for future research. From another perspective, the current research results demonstrate the possibility to use Facebook for teaching and learning, whereby instructors can utilize it to connect, befriend and communicate with students, and extend the communicative activities of the traditional physical classroom into virtual forms. The following section provides a related literature review with respect to prior research on e-learning, technology acceptance and a theoretical background of Bayesian analysis from an SEM perspective. Subsequently, the research method, data analysis and study results are specified. Based on the present research findings, the principal results are discussed together with the outputs prior to the final concluding remarks. Background of the study A review of the literature indicates that research on technology acceptance and usage has been quite active, especially in recent years with social network/social media usage increasing very fast. Several models have been developed mainly in the information science domain to predict individual technology acceptance. Researchers have applied these models in a range of contexts. In the following section, previous literature related to e-learning and technology acceptance is discussed. Prior research on e-learning Recently, e-learning has become a widely accepted learning approach [30]. This method of learning (E-learning) emphasizes the use of telecommunication technology for teaching and learning [31] and involves web-based communication systems. It enables learners to access various learning tools, such as discussion boards and document sharing systems anywhere, anytime [3]. E-learning comprises all forms of electronically supported learning and teaching processes [32]. In its broadest definition, e-learning includes instructions delivered via all electronic media, including the Internet, intranets, extranets, satellite broadcasts, audio/video, interactive TV, and CD-ROM [33]. Universities and educational organizations mostly use e-learning technologies to attain new and innovative ways of delivering education to students [4]. Social networks are one of the technologies that students use extensively to communicate with each other and share information. 
Social network sites are now adopted by many students as well as research scholars at academic institutions [14]. Social networks create the possibility for collaborative learning environments, which benefit learners and makes it easier, faster, more productive, and more memorable to meet, share and collaborate [34]. Sánchez, Cortijo, and Javed [9] indicated that Facebook is among the most popular SNSs among college students. Mason [35] stated that Facebook has many of the qualities desirable of an effective education technology (for teaching and learning) in its reflective element to use mechanisms for peer feedback and goodness-of-fit with the social aspects of university teaching and learning. It was found that educational use of Facebook is significantly related to its use for collaboration, communication, and resource or material sharing [12]. Student adoption and use of Facebook appear to be positively related with usefulness, ease of use, social influence, facilitating conditions and community identity. Considering the fact that e-learning is related to telecommunication technology use for education and learning, several researchers of technology acceptance and education have conducted studies to identify the factors that affect students' willingness to use such technology. Different theories/models have been applied to explain individual learning behavior in diverse contexts, such as Facebook [36], e-learning [37,38], mobile learning [39,40], iPad use for learning [41] and distance learning [42]. The following section discusses previous studies on technology acceptance. Prior research on technology acceptance A number of researchers have employed the Unified Theory of Acceptance and Use of Technology (UTAUT) to explain factors that affect individual intention to use new technology. This theory was developed based on a comprehensive review of eight of the most common theories employed to predict computer use (Theory of Reasoned Action, Technology Acceptance Model (TAM), Theory of Planned Behavior (TPB), The Motivational Model, Combined TAM and TPB, Model of PC Utilization, The Innovation Diffusion Theory, and Social Cognitive Theory), based on conceptual and empirical similarities to predict individual adoption and use of technology [15]. UTAUT postulates that three core constructs, namely performance expectancy, effort expectancy, and social influence act as direct determinants of behavioral intention, while facilitating conditions and behavior intention are direct determinants of use behavior. It is argued that variables moderating these relationships are voluntariness of use, experience, age, and gender. In a study by Venkatesh, Thong, and Xu [16] three constructs were added to UTAUT, namely hedonic motivation, price value, and habit, and UTAUT2 was developed tailored to consumer IS adoption behavior. Venkatesh and Thong measured the determinants of intention to use technology in two stages. In UTAUT2, performance expectancy, effort expectancy, social influence, hedonic motivation and price value were considered predictors of individual intention, while facilitating conditions, habit and behavioral intention were considered determinants of technology use [16]. Venkatesh and colleagues defined performance expectancy as the degree to which individuals believe that using a system will help them attain gains in job performance, and they defined effort expectancy as the degree of ease associated with the use of new technology. 
Venkatesh and colleagues defined social influence as the degree to which individuals perceive that important others believe they should use the new system, while they argued that social influence is not significant in voluntary situations and only becomes significant when use is mandated by organizations. Based on UTAUT2, facilitating conditions are the degree to which individuals believe that appropriate organizational and technical infrastructure and facilities should exist to support new system use. They defined habit as the extent to which people tend to perform automatically because of learning, while hedonic motivation was defined as the fun or pleasure derived from using a technology [16]. Researchers have applied UTAUT and UTAUT2 in different contexts and settings to measure technology adoption and use behavior [43][44][45], while data was analyzed with an array of methods and techniques. In the current study, UTAUT2 (Fig 1) serves as a base model to examine students' acceptance and use of e-learning via Facebook. UTAUT2 demonstrates the greater predictive power of Bayesian analysis compared to SEM in model testing. The research framework includes eight constructs. Performance expectancy, effort expectancy, hedonic motivation, and social influence are the initial independent variables; use behavior is the main dependent variable; and intention to use acts as the mediator in both relationships between hedonic and facilitating conditions and use behavior. Theoretical Bayesian background To clearly demonstrate how the Bayesian approach functions, besides the underlying model, a random sample Y = (Y 1 , . . ., Y n ) is assumed from a distribution f(y|θ) with an unknown parameter, i.e. interest θ 2 O. The primary goal from an estimation viewpoint is to estimate θ using the information available in sample Y. To this end, the ML estimator can be used to estimate θ and obtain the ML estimate: This classical means of estimating θ is influenced by the frequentist approach. Distribution models that rely on the frequentist approach for parameter estimation are classified as generative models [46]. These are employed to model the distribution of all available data assumed to have been generated with a fixed θ. When parameter θ can be treated as a random variable, unlike with the frequentist estimation approach, in Bayesian analysis the researcher assigns a belief in the form of a probability statement to the parameter(s) of interest. It is worth noting that any statement about θ is made prior to data observation. Upon observing sample Y, this probability statement about θ is updated by using the Bayes theorem. More specifically, a prior distribution π (θ) is assumed for θ. According to the chain rule, the joint probability of (Y, θ) is given by: To update the probability statement about θ, it suffices to evaluate π(θ|y), y = (y 1 , . . ., y n ), which is the conditional distribution of θ given y. Hence, according to the chain rule we have: The above is known as the Bayes theorem. The LHS of the Bayes theorem is called the posterior distribution of θ. Hence, the posterior distribution is proportional to the likelihood multiplied by the prior distribution as follows: With Bayesian analysis, it is possible to assign probabilities to theories or models given the data, which is often the goal. With frequentist approaches, probabilities are assigned to the data given the theory or model, but they provide no information about the probability of the theory, model, or hypothesis [47]. 
The above is a bona fide probability function, but this is not necessarily the case for the prior distribution. Since the prior distribution is a key part of any Bayesian analysis, the various prior distributions are the focus of the next paragraphs. Specifying the prior distribution. There are situations in which the researcher assigns a proper or an improper belief to parameter θ. This belief can be collected from previous research, such as meta-analyses and studies with similar data, or it can reflect the expert knowledge of researchers or practitioners [48]. Therefore, two categories of prior distributions are identified, namely subjective and objective. These consist of three main types of prior distributions that vary in their degree of (un)certainty about the parameter value of interest: non-informative, highly informative, and moderately informative [49]. When specifying highly informative priors, the researcher has a high degree of certainty about the parameter of interest and specifies numerical information about that parameter based on previous knowledge. These informative prior distributions are proper, that is, the function used as a prior density has a finite integral and is a probability density function (pdf). In contrast, non-informative priors are specified when the researcher has no prior knowledge about the parameter of interest; they are improper, namely, the function used as a prior density has an infinite integral and is thus not a pdf. Finally, moderately informative priors are informative prior distributions for which the available scientific information is limited. Well-known non-informative priors include Laplace's prior, the invariant prior, the Jeffreys prior, reference priors and matching priors. Among informative priors, conjugate priors are the most familiar. See Robert [50] for more details. Bayesian computation. In this section, the well-known Markov chain Monte Carlo (MCMC) numerical approach to Bayesian computation is briefly discussed. One elemental quantity that requires integration over a possibly high-dimensional parameter space is the denominator of the Bayes theorem; thus, to generate a sample from the posterior distribution, this denominator has to be handled. A practical computational tool for generating a sample from the posterior distribution is the MCMC simulation algorithm, which uses π(θ|y) ∝ L(θ) × π(θ) to generate samples. The MCMC method constructs a Markov chain on the state space θ ∈ Θ whose steady-state distribution is the posterior distribution. It then returns a collection of M samples {θ(1), θ(2), . . ., θ(M)}, where each sample can be assumed to be drawn from π(θ|y). It is nonetheless important to note that MCMC is an iterative method, such that, given the current state θ(i), the algorithm makes a probabilistic update to θ(i+1). There are a number of procedures in MCMC, with the two most general being the Metropolis-Hastings algorithm and the Gibbs sampler. Interested readers may refer to Marin and Robert [51] for more details. Lee [29] extensively studied B-SEM, among others, and pointed out the advantages of B-SEM: a) it is a more flexible approach for dealing with complex situations; b) it utilizes useful prior information (if available); c) it achieves reliable results with small/moderate sample sizes [26]; and d) it gives direct estimates of latent variables. For further studies on B-SEM, please refer to Lee and Song [48] and Kaplan and Depaoli [52]. In this study, the research indicators collected are in the form of ordered categories.
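To illustrate the Metropolis-Hastings idea described above, the following minimal sketch samples from a posterior π(θ|y) that is known only up to its normalizing constant, i.e., proportional to likelihood × prior. The normal-mean example with a diffuse normal prior is an assumption chosen for illustration; it is not the SEM posterior used in this study.

```python
# Hedged sketch: random-walk Metropolis-Hastings for an unnormalised posterior.
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(loc=1.5, scale=1.0, size=50)        # observed sample

def log_unnorm_posterior(theta):
    log_lik = -0.5 * np.sum((y - theta) ** 2)      # N(theta, 1) likelihood (up to a constant)
    log_prior = -0.5 * theta ** 2 / 10.0           # diffuse N(0, 10) prior
    return log_lik + log_prior

samples, theta = [], 0.0
for _ in range(20000):
    proposal = theta + rng.normal(scale=0.3)       # random-walk proposal
    log_alpha = log_unnorm_posterior(proposal) - log_unnorm_posterior(theta)
    if np.log(rng.uniform()) < log_alpha:          # accept with probability min(1, alpha)
        theta = proposal
    samples.append(theta)

posterior = np.array(samples[5000:])               # discard burn-in
print("posterior mean:", posterior.mean(),
      "95% interval:", np.percentile(posterior, [2.5, 97.5]))
```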
Yanuar, Ibrahim, and Jemain [53] suggested that, before conducting a Bayesian examination, a threshold specification must be identified so that the ordered categorical data can be treated as manifestations of a hidden continuous normal distribution. A brief explanation of the threshold specification is given below. Suppose X and Y are defined as X = (x1, x2, . . ., xn) and Y = (y1, y2, . . ., yn), which denote the ordered categorical data matrix and the latent continuous variables, respectively. The connection between X and Y is defined by applying the threshold specification; the procedure for x1 is described as an instance. More precisely, let c be the number of categories for x1, and let the thresholds τ0 < τ1 < … < τc be the cut points related to y1, with τ0 and τc the extreme (boundary) thresholds; in this work, c = 3 is assumed. The values of τ1 and τ2 are evaluated from the proportion of cases in each category of x1 by applying the inverse standardized normal distribution to the cumulative category proportions, where Φ−1(·) represents the inverse standardized normal distribution, N is the total number of cases, and Nr is the number of cases in the r-th category. In the current study, it is assumed that Y follows a multivariate normal distribution. In line with Lee [29], a prior model is assigned to the thresholds and the structural parameters; due to the ordinal nature of the thresholds, a diffuse prior can be adopted for them. Moreover, from a subjective viewpoint, a natural conjugate prior can be implemented for θ with the conditional representation π(θ) = π(Λ|Ψε)π(Ψε), where ψεk is the k-th diagonal component of Ψε, Λk is the k-th row of Λ, and Γ denotes the gamma distribution. Finally, an inverse-Wishart distribution is adopted for Φ. It is further supposed that all hyperparameters are known. The posterior distribution is found by normalizing the product L(Θ|X = x)π(Θ); for sampling from the posterior distribution Θ|X = x, MCMC is applied to deal with the computational complexity. Materials and method With respect to the advantages and robust predictive power of the Bayesian approach in data analysis, and for the purpose of measuring Facebook use for e-learning (re-examining the UTAUT2 model) among students, a questionnaire was developed. The data were collected via the questionnaire delivered to students who were taking a business statistics class at the University of Malaya, Malaysia. The following sections explain the sampling procedure, the measurement and the Bayesian approach in more detail. Sampling and data collection procedures The respondents of the study are 170 bachelor students enrolled in a business statistics class at the Faculty of Business and Accountancy, University of Malaya, Malaysia. To enable the measurement of e-learning via Facebook, the class instructor created a business statistics Facebook group at the beginning of the semester. The lecturer provided the Facebook group address to the class, and all students requested to join this Facebook group within a week. This Facebook group was managed to facilitate the use of e-learning materials by the students in the mentioned class. Every week, information and supplemental materials related to the study subject were uploaded to the Facebook group, such as videos, texts, journal papers, and books. Students in this Facebook group could ask their lecturer questions or communicate with classmates.
The questionnaire was distributed at the end of the semester to the 170 students who were using the Facebook group for learning, in order to measure the students' experience of e-learning via Facebook. No information pertaining to the respondents' names and identities was collected for this study, and the data were aggregated and analyzed anonymously. Measurements In the current study, the original questionnaire developed by Venkatesh, Thong, and Xu [16] was applied and adapted to the e-learning context. The items related to the eight variables (S1 File), namely use behavior, intention to use, facilitating conditions, habit, social influence, hedonic motivation, effort expectancy, and performance expectancy, were adopted from Venkatesh, Thong, and Xu [16]. All indicators in this study were measured on a seven-point Likert scale (1 = strongly agree to 5 = strongly disagree). Results The data analysis was based on the 170 questionnaires collected from students taking a business statistics class. The two approaches applied for data analysis are maximum likelihood and the Bayesian approach. The first part of the modeling was implemented using AMOS version 18, a flexible tool that allows examining the interrelationships, under the normality assumption of the variables, in the UTAUT2 framework for e-learning with Facebook. Second, B-SEM was employed with the same framework as in the first part of the data analysis, using the WinBUGS (version 1.4) software. Four statistical indices were applied to compare the outputs of the Bayesian and maximum likelihood estimators. Missing data, outliers, and normality Missing data occur when no value is stored for an observation. Three values were missing, from intention to use, habit, and effort expectancy; the missing data were replaced with the medians of the variables. Outliers can be classified into two categories: simple and multivariate. Simple outliers have extreme values on a single variable, whereas multivariate outliers have extreme values on a combination of variables. The Mahalanobis distance is a very general measure utilized for multivariate outlier detection. Cases with the highest Mahalanobis D-squared values, which can be calculated using AMOS or SPSS, are the most probable significant outliers, and such outliers degrade the analysis outcomes [54]. The impact of significant outliers on the analysis needs to be assessed and investigated carefully to determine whether they can be retained. Byrne's [55] suggestion for outlier analysis in SEM was followed in this study. Table 1 presents the Mahalanobis distance testing output. Case number 18 is the furthest from the centroid, with a Mahalanobis D-squared value of 36.227. The p1 value indicates that, assuming normality, the probability of D-squared (for observation number 18) exceeding a value of 36.227 is < 0.0037. The p2 value, also assuming normality, indicates that the probability that the largest D-squared value for any individual case would exceed 36.227 is < 0.0091. Given the wide gap in Mahalanobis D-squared values between the first five observations (numbers 18, 36, 48, 101, and 127) and the other cases, these five observations were judged to be outliers and deleted from further analysis. Such outliers could affect the model fit, R2, and the size and direction of the parameter estimates (see Table 1). With SEM, the skewness and kurtosis indices are used for the normality test [56].
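A minimal sketch of the screening just described is given below: Mahalanobis D-squared for multivariate outliers (with the p1-style tail probability under multivariate normality) and skewness/kurtosis for the univariate normality check. The data file and its columns are placeholders, not the study's questionnaire items.

```python
# Hedged sketch: Mahalanobis D-squared outlier screening plus skewness/kurtosis per item.
import numpy as np
import pandas as pd
from scipy import stats

df = pd.read_csv("utaut2_items.csv")               # assumed file of item scores
X = df.to_numpy(dtype=float)

# Mahalanobis D-squared for each respondent.
diff = X - X.mean(axis=0)
inv_cov = np.linalg.pinv(np.cov(X, rowvar=False))
d2 = np.einsum("ij,jk,ik->i", diff, inv_cov, diff)

# p1: probability of a D-squared at least this large under multivariate normality.
p1 = 1 - stats.chi2.cdf(d2, df=X.shape[1])
suspects = np.argsort(d2)[::-1][:5]                # the five most extreme cases
print(pd.DataFrame({"case": suspects, "D2": d2[suspects], "p1": p1[suspects]}))

# Univariate skewness and kurtosis per item (|value| < 2 as the working threshold).
print(df.apply(lambda s: pd.Series({"skew": stats.skew(s), "kurt": stats.kurtosis(s)})))
```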
Byrne [57] mentioned that the absolute values of skewness and kurtosis should be less than 2. Table 2 shows the kurtosis and skewness ranges of the indicators of the latent variables; the absolute kurtosis and skewness values are less than 2, therefore the normality of the indicators' distributions is acceptable. Validity and reliability Fornell and Larcker [58] determined the following criteria for SEM validity and reliability: 1. Reliability based on Cronbach's alpha: this index must be equal to or higher than 0.7 for every research model construct [59]. 2. Convergent validity based on the average variance extracted (AVE): this index must be equal to or higher than 0.50 for every research model construct [60]. Model fit The data were run through AMOS and the results (Table 4) indicate a good fit of the data to the model. As seen in Table 4, the index outputs confirm that the measurement model fits the data significantly. The data were then run through both techniques and the results of both models are displayed in Figs 2 and 3. The figures show the estimated structural equations that address the relationships between the latent variables for ML-SEM and B-SEM. According to the results, the relationships between performance expectancy, hedonic motivation, social influence, and habit, on the one hand, and intention to use e-learning via Facebook, on the other, are significant in both models. Habit, facilitating conditions and intention to use have a significant positive relationship with e-learning use via Facebook. The effect of effort expectancy and facilitating conditions on intention to use is not significant in either model. A comparison of the two models shows that B-SEM outperformed ML-SEM: ML-SEM predicted 66% of the variance in e-learning use via Facebook, while B-SEM predicted 71% of the variance in students' e-learning use via Facebook. The beta values for all hypothesized relationships were stronger in the model tested with B-SEM. These results confirm the superior ability of B-SEM over the other technique. In addition, the results indicate that performance expectancy is the strongest factor affecting students' willingness to use Facebook for e-learning. In other words, students will use the new method if they think it will improve their academic performance or they will benefit from it. The habit of using Facebook is a strong predictor of use, which suggests that students use Facebook for different purposes, such as connecting with friends and family, socializing or learning. The effect of effort expectancy is not significant in either model, since young generations are familiar with technology and using Facebook is habitual, so they require no effort to use it. The significant effect of hedonic motivation demonstrates that it is important for students to have a pleasurable experience and enjoy the technology if they are going to use it. Social networks, especially Facebook, can satisfy this need and create an enjoyable experience for students. The non-significant effect of facilitating conditions on intention to use Facebook for e-learning indicates that the presence of facilities matters for students once they actually use it, but does not influence their intention. Comparison between ML-SEM and B-SEM This section presents a comparison analysis of the ML-SEM and B-SEM techniques in predicting the use behavior index in the UTAUT2 framework.
Four indices were used to compare the two prediction techniques: the coefficient of determination (R2), the root mean square error (RMSE), the mean absolute error (MAE) and the mean absolute percentage error (MAPE). These are the most common statistical indices for model evaluation and are defined as follows: 1. Coefficient of determination: R2 = 1 − Σ(yi − ŷi)2 / Σ(yi − ȳ)2. 2. Root mean square error: RMSE = sqrt((1/n) Σ(yi − ŷi)2). 3. Mean absolute error: MAE = (1/n) Σ|yi − ŷi|. 4. Mean absolute percentage error: MAPE = (1/n) Σ|(yi − ŷi)/yi|. In these formulas, yi is the i-th actual value of the dependent variable and ŷi is the i-th predicted value. Table 5 presents the values of the four performance indices, R2, RMSE, MAE and MAPE, for ML-SEM and B-SEM. The R2 value for B-SEM is greater than that for ML-SEM, and the RMSE, MAE and MAPE values for B-SEM are lower than those for ML-SEM. Therefore, the performance indices indicate that the B-SEM technique yields superior estimation to ML-SEM. The main reason B-SEM performed better is the Bayesian estimation framework defined, which permits simultaneous adjustment of the parameters and effective learning of the associations between inputs and outputs in causal and complex models. The scatter plots in Fig 4 illustrate that the B-SEM predicted values are closer to the real values than the ML-SEM predictions. The present comparative analysis shows that B-SEM has the better predictive ability. Discussion The main objective of this research was to demonstrate the power of the ML and Bayesian approaches with the SEM technique to predict students' intention to use, and their use of, Facebook for e-learning based on the UTAUT2 model. These two approaches were compared in terms of prediction power and accuracy. ML-SEM was applied as a representative parametric modeling method, while B-SEM served as a representative nonparametric modeling technique to explore students' use of Facebook for e-learning. Based on the UTAUT2 model, we measured the effect of performance expectancy, effort expectancy, hedonic motivation, social influence, facilitating conditions and habit on the intention to use, and the use of, Facebook for e-learning. The results indicate significant effects of the determinants of intention to use and of Facebook use for e-learning, but nonsignificant effects of effort expectancy and facilitating conditions. The significant effect of performance expectancy suggests that students would use Facebook for e-learning if they perceived a benefit from using it and believed it would increase their academic performance or help them learn. The significant effect of hedonic motivation on intention to use highlights that students enjoy e-learning via Facebook and that it creates a pleasurable experience for them. The habit of using Facebook is an important factor for students, while using Facebook for e-learning is easy, so effort expectancy is not an important factor for them. Habit is significant because students are used to Facebook and normally utilize it for other purposes too, such as socializing with friends and communicating with others; it is also easy to use. Having appropriate facilities is important for students who wish to use Facebook for e-learning, but it does not influence their intention to use; this suggests that students need facilities, so helping with, or providing, facilities will affect use. It also highlights the importance of providing e-learning materials over different channels to students in order to facilitate and encourage usage. The results of this study are consistent with the UTAUT2 findings.
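As a minimal sketch of the four comparison indices defined earlier in this section (R2, RMSE, MAE, MAPE), the following snippet computes them for two sets of predictions of the use behavior scores; the actual values and the two prediction vectors are placeholders standing in for the ML-SEM and B-SEM outputs.

```python
# Hedged sketch: the four model comparison indices used in Table 5.
import numpy as np

def indices(y, y_hat):
    resid = y - y_hat
    ss_res = np.sum(resid ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return {
        "R2":   1 - ss_res / ss_tot,
        "RMSE": np.sqrt(np.mean(resid ** 2)),
        "MAE":  np.mean(np.abs(resid)),
        "MAPE": np.mean(np.abs(resid / y)),
    }

y        = np.array([3.2, 4.1, 2.8, 5.0, 3.9])      # actual values (illustrative)
pred_ml  = np.array([2.9, 4.4, 3.3, 4.5, 3.5])      # ML-SEM predictions (illustrative)
pred_bay = np.array([3.1, 4.0, 2.9, 4.8, 3.8])      # B-SEM predictions (illustrative)

for name, pred in [("ML-SEM", pred_ml), ("B-SEM", pred_bay)]:
    print(name, {k: round(v, 3) for k, v in indices(y, pred).items()})
```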
Furthermore, re-examining the UTAUT2 model through the Bayesian and Maximum Likelihood approaches shows that the Bayesian approach produced better results in terms of the number of statistically significant parameters. The error statistics also illustrate that the Bayesian-based framework with SEM provided a reasonably well-fitting model with a higher coefficient of determination (0.762) than ML-SEM (0.716). Compared with the traditional method, introducing Bayesian statistics to traditional SEM improved model performance by reducing the RMSE from 0.286 to 0.118, the MSE from 0.112 to 0.097, and the MAPE from 0.097 to 0.073 (Table 5). Moreover, the values predicted by B-SEM are closer to the actual values than those predicted by ML-SEM (Fig 4). The results further demonstrate that the Bayesian framework model is less sensitive to sample size. The Bayesian model with SEM is a robust approach, since it does not require distributional assumptions such as normality. Therefore, this study suggests that a Bayesian approach can produce better results for testing UTAUT2 and predicting use behavior. Unlike ML-SEM, the Bayesian method places its emphasis on the raw individual observations rather than on the sample covariance matrix. Moreover, Bayesian statistics is on the rise in mainstream psychology as well as in management and information systems. It provides researchers with a number of theoretical and practical advantages over the "traditional" ML approach. As the Bayesian paradigm is further incorporated into information systems, researchers gain access to methods uniquely suited to creating cumulative knowledge. Conclusions and future research This study was conducted to examine students' use of Facebook for the purpose of e-learning. The data were tested with Bayesian analysis and compared to the ML approach. This study is unique from a methodological perspective, in that it is the first study to compare the ML estimator with the Bayesian estimator within an e-learning framework based on Facebook. This is a new research modeling contribution, given the increasing accuracy of alternative estimation and prediction techniques available in software packages. The Bayesian approach allows researchers to model information system data meaningfully, and appears to offer greater statistical power than approaches that do not take censoring into account. In summary, this was the first study in which the flexible and innovative B-SEM approach was applied to evaluate various factor structures for predicting use behavior in UTAUT2 studies. B-SEM can be applied to test hypotheses and theories and is capable of producing superior results to ML-SEM. This research also provides some directions for researchers who endeavor to apply B-SEM modeling. In this research, a practical method was used for structural and parametric learning. This methodology additionally provides guidelines for updating posterior probabilities as new evidence is generated. The results of the current study can benefit academia, teaching institutions and organizations in two ways. First, the presented results provide knowledge related to the effectiveness of social networks, especially Facebook, for teaching and learning. Academic managers and instructors can thus utilize social network sites and build platforms for effective communication between lecturers and students.
In this respect, academic managers may use the findings of this research owing to the importance of providing these kinds of templates to enhance teaching quality in universities. The findings are also applicable to other organizations that need to train staff, and communicate or distribute information faster and more effectively. Second, this study created more knowledge related to data analysis and the higher predictive power of Bayesian analysis compared to maximum likelihood and regression, which can assist academics to obtain better results from their data. In the current study, the lecturer distributed and collected the questionnaires, therefore any mistreatment of implementation reliability was minimal. However, the question of whether the findings can be generalized to other settings (subjects, times, places, etc.) is an important concern in any research. The structure of this research focused only on students, which may limit the ability to generalize the outputs of this study to other research settings like e-learning in organizations, institutes, and companies, since students are dissimilar from organizations and/or other individual users in some respects. The notion of modeling the use behavior index in UTAUT2 studies by considering various indicators that describe the latent factors can be further explored by incorporating new survey data. This notion is particularly suitable with the sequential Bayesian approach if taking the results of this study into consideration as prior input for new surveys. Future studies can apply the findings from this study and use the B-SEM technique to analyze data, particularly in contexts that require stronger and more accurate results. In engineering for instance, neural networks and fuzzy sets are the most familiar techniques for non-parametric studies. These two methods and combinations of them may thus be suitable for future UTAUT2 studies. Supporting information S1 Data. Raw data.
7,768.6
2017-09-08T00:00:00.000
[ "Computer Science", "Education" ]
Possible Magnetic Resonance Signal Due to the Movement of Counterions around a Polyelectrolyte with Rotational Symmetry Experimental, theoretical and computational studies have revealed that the characteristic time scales involved in counterion dynamics in polyelectrolyte systems may span several orders of magnitude, ranging from subnanosecond times to time scales corresponding to acoustic-like phonon mode frequencies, with a structural organization of counterions in charge density waves (CDWs). These facts raise the possibility of observing Magnetic Resonance (MR) signals due to the movement of counterions in polyelectrolytes. If this signal is detected in macroions or other biological systems with rotational symmetry, such as micelles, vesicles, organelles, etc., the method opens a new tool to measure the counterion velocity with precision. Introduction Polyelectrolytes are ionizing macromolecules. An important property of polyelectrolyte molecules is the formation of electric double layers surrounding the polymer chains. Most biological macromolecules under physiological conditions are polyelectrolytes in solution, and their biological activity depends on their physico-chemical properties. Depending on the strength of the electrostatic interactions, it has been found [1] that distinct "phases" of counterions can be formed, i.e., a "condensed" layer of mobile, oppositely charged counterions [2] [3] and a "diffuse" phase consisting of counterions loosely bound to the considered macroion. The latter phase of collective motion of the more mobile ions can be involved in the formation of charge density waves. Experimental [4]- [7], theoretical [8]- [14] and computational [15]- [17] studies reveal that the characteristic time scales involved in counterion dynamics in polyelectrolyte systems may span several orders of magnitude, ranging from subnanosecond times to time scales corresponding to acoustic frequencies. The counterions exhibit an acoustic-like phonon mode that suggests the existence of a correlated phase. At small length scales within the domains, counterions exhibit liquid-like correlations and dynamics, and they are organized into counterion charge density waves (CDWs) [18]. The measured speed of sound is of the order of 2000 m/s. We believe that these CDWs also exist on the surface of polyelectrolytes with rotational symmetry, generating a circular current loop, which produces a magnetic field B and the corresponding magnetic moment µ at the center of the macroion. This magnetic moment µ is oriented in an external magnetic field B_o, producing a magnetic resonance signal under the application of a certain frequency ν. Magnetic Field on the Axis of a Circular Current Loop Consider a circular loop of wire of radius R located in the xy plane and carrying a steady current I, as shown in Figure 1. The magnetic field at an axial point P a distance z from the center of the loop is given by [19]

$$B_z = \frac{\mu_o I R^2}{2\,(z^2 + R^2)^{3/2}}, \qquad (1)$$

where $\mu_o$ is the permeability of free space and $\vec{\mu} = I\,\pi R^2\,\hat{z}_o$ is the magnetic moment associated with the current loop, with $\hat{z}_o$ a unit vector in the z direction. To obtain the magnetic field at the center of the loop we set z = 0 in Equation (1); at this special point,

$$B_z = \frac{\mu_o I}{2R}. \qquad (2)$$

If the current loop is produced by charged particles in movement, with electric charge q and velocity v, the current is $I = q/\tau = q v/(2\pi R)$ and the magnetic moment is given by

$$\mu_z = I\,\pi R^2 = \frac{q v R}{2}, \qquad (3)$$

where we have used $\tau = 2\pi R/v$ and $\omega = v/R$, with τ the period and ω, v the angular and linear velocities of the counterions, respectively.
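As a rough numerical illustration of Equations (1)–(3) — a sketch only, using the order-of-magnitude values quoted in the text rather than reproducing the paper's own calculation — the current, the field at the center of the loop and the magnetic moment generated by a counterion circulating on a macroion can be estimated as follows:

import math

MU_0 = 4 * math.pi * 1e-7      # permeability of free space, T*m/A
E_CHARGE = 1.602176634e-19     # elementary charge, C

def loop_quantities(z_valence, v, R):
    # current, center field and magnetic moment of a charge circulating on a loop of radius R
    q = z_valence * E_CHARGE
    period = 2 * math.pi * R / v          # tau = 2*pi*R / v
    current = q / period                  # I = q / tau = q*v / (2*pi*R)
    b_center = MU_0 * current / (2 * R)   # Eq. (2)
    mu_z = q * v * R / 2                  # Eq. (3)
    return current, b_center, mu_z

# counterions of valence z = 2 moving at ~2000 m/s on a macroion of radius 10 nm
I, B_c, mu = loop_quantities(2, 2000.0, 10e-9)
print(f"I = {I:.3e} A, B(center) = {B_c:.3e} T, mu_z = {mu:.3e} A*m^2")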
Magnetic Resonance A magnetic moment in an external magnetic field $B_o$ acquires an energy E given by the following scalar product [20],

$$E = -\vec{\mu}\cdot\vec{B}_o = -\mu_z B_o \cos\theta . \qquad (4)$$

We observe that the highest and lowest energies occur when θ = π and θ = 0; this means that when $\mu_z$ and $B_z$ point in opposite directions the energy is maximum, and when they point in the same direction it is minimum. This last state is the most natural tendency of the magnetic dipole, parallel to the field. The difference in energy between these two states is

$$\Delta E = 2\,\mu_z B_o . \qquad (5)$$

Replacing $\mu_z$ given by Equation (3) in Equation (5), and considering q = z e, we obtain

$$\Delta E = z\,e\,v\,R\,B_o . \qquad (6)$$

As an example, consider the circular movement of counterions on the surface of a polyelectrolyte with rotational symmetry used in nanomedicine, as shown in Figure 2; this does not mean that the effect can necessarily be visualized in this particular molecule. These ions with charge q and velocity v produce a magnetic field B and a magnetic moment µ perpendicular to the plane of the macroion, in accordance with Equation (2) and Equation (3). If we place this magnetic moment in an external magnetic field $B_o$, its orientation will no longer be random. The small magnetic moment may spontaneously "flip" between the most favorable orientation, the low-energy state, and the less favorable orientation, the high-energy state, and vice versa. The energy required to induce flipping and obtain a Magnetic Resonance (MR) signal, given by Equation (6), is shown in Figure 3 to depend on the strength of the magnetic field $B_o$ in which the macroion containing the magnetic moment is placed. The input radiation energy needed to accomplish the transition of Equation (6) is given by Planck's law,

$$\Delta E = h\,\nu . \qquad (7)$$

Using Equation (6), we obtain for the resonance frequency

$$\nu = \frac{\Delta E}{h} = \frac{z\,e\,v\,R\,B_o}{h}, \qquad (8)$$

where h is Planck's constant. Conclusions The distribution of molecular magnets over the different energy states is given by the Boltzmann equation,

$$\frac{N_{upper}}{N_{lower}} = \exp\!\left(-\frac{\Delta E}{k_B T}\right), \qquad (9)$$

where $N_{upper}$ and $N_{lower}$ represent the populations of molecular magnetic moments in the upper and lower energy states, respectively, $k_B$ is the Boltzmann constant and T is the absolute temperature (K). To give some idea of the consequences of an increasing magnetic field on the populations of the molecular magnet states, the distribution of a small number (about two million, not a real value) of macroion magnets, calculated from Equation (9), is shown in Figure 3. Such a small population difference presents a significant sensitivity problem for MR, because only the difference in population is detected; the other moments effectively cancel one another. As seen from Equation (8) and Equation (9), the use of stronger magnetic fields will increase the population ratio and, consequently, the sensitivity. Table 1 shows how the resonance energy varies with the field, and Table 2 reports NMR energy and frequency data for 3 nuclei. Compared with Table 1 for the counterions, we observe that the NMR values are lower by one to three orders of magnitude. If this signal is detected in macroions or other biological systems with rotational symmetry, such as micelles, vesicles, organelles, etc., this method opens a new tool to measure the counterion velocity with precision. The magnetic moment for phonon counterions with velocities of the order of 2500 m/s and for a macroion of 10 nm radius with Figure 1. Magnetic field on the axis of a circular current loop. (The SI unit of magnetic field is the tesla, T.) Figure 3. Dependence on magnetic field strength B_o of ΔE and the relative populations of the energy levels for counterions with z = 2 and velocity v = 2000 m/s and a macroion with R = 10 nm.
Table 1. Results shown in Figure 3.
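A short numerical sketch of Equations (6)–(9), again using only the example values given in the text (valence z = 2, v = 2000 m/s, R = 10 nm) and standard physical constants, might read:

import math

E_CHARGE = 1.602176634e-19   # elementary charge, C
PLANCK = 6.62607015e-34      # Planck constant, J*s
K_B = 1.380649e-23           # Boltzmann constant, J/K

def resonance(z_valence, v, R, B0, T=300.0):
    dE = z_valence * E_CHARGE * v * R * B0        # Eq. (6)
    nu = dE / PLANCK                              # Eq. (8)
    ratio = math.exp(-dE / (K_B * T))             # Eq. (9): N_upper / N_lower
    return dE, nu, ratio

for B0 in (1.0, 3.0, 7.0):                        # field strengths in tesla
    dE, nu, ratio = resonance(2, 2000.0, 10e-9, B0)
    print(f"B0 = {B0} T: dE = {dE:.3e} J, nu = {nu:.3e} Hz, N_upper/N_lower = {ratio:.6f}")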
1,614.2
2015-01-27T00:00:00.000
[ "Physics" ]
APPROXIMATION BY BV-EXTENSION SETS VIA PERIMETER MINIMIZATION IN METRIC SPACES . We show that every bounded domain in a metric measure space can be approximated in measure from inside by closed BV -extension sets. The extension sets are obtained by minimizing the sum of the perimeter and the measure of the difference between the domain and the set. By earlier results, in PI-spaces the minimizers have open representatives with locally quasiminimal surface. We give an example in a PI-space showing that the open representative of the minimizer need not be a BV -extension domain nor locally John. Introduction In this paper we study the existence of BV -extension sets in complete and separable metric measure spaces X.By BV -extension sets we mean sets E for which any integrable function with finite total variation on E can be extended to the whole space X without increasing the BV -norm by more than a constant factor.BV -and Sobolev-extension sets are useful in analysis because via the extension one can use tools a priori available only for globally defined functions also for the functions defined only in the extension set.Not every domain of a space is an extension set, so in cases where one starts with functions defined on an arbitrary domain Ω one first approximates Ω from inside by an extension set, then restricts the functions to this set and then extends them as global functions.Such process immediately raises the question: when can we approximate a domain from inside by extension domains (or sets)? In the Euclidean setting, an answer to this has been known for a long time.For instance, from the works of Calderón and Stein [7,20] we know that Lipschitz domains of R n are W 1,p -extension domains for every p ≥ 1.Any bounded domain in R n can be easily approximated from inside and outside by Lipschitz domains.It was later observed that in a more abstract setting of PI-spaces (that is doubling metric measure spaces satisfying a local Poincaré inequality [14]; see Section 4), good replacements of Lipschitz domains are uniform domains.In [4] it was shown that uniform domains in p-PI-spaces are N 1,p -extension domains, for 1 ≤ p < ∞, for the Newtonian Sobolev spaces, and in [17] it was shown that bounded uniform domains in 1-PI-spaces are BV-extension domains.Finally, in [19] it was shown that in doubling quasiconvex metric spaces one can approximate domains from inside and outside by uniform domains.Since PI-spaces are quasiconvex [9,15], we conclude that in PI-spaces one can approximate domains by extension domains. Recently there has been increasing interest in analysis in metric measure spaces (X, d, m) without the PI-assumption.However, the extendability of BV -functions seems to have been studied only in Date: May 5, 2023.2000 Mathematics Subject Classification.Primary 30L99.Secondary 46E35, 26B30. 1 some specific cases, such as infinite dimensional Gaussian case [5].We continue into the direction of general metric measure spaces and show in Theorem 3.4 that even without the PI-assumption one can still approximate domains Ω from inside by closed BV -extension sets.It is not clear if an approach similar to the approximation by uniform domains could work in general metric measure spaces.Therefore, we take a completely different approach and obtain the extension set by minimizing the functional A → Per(A) + λm(Ω \ A) for a large parameter λ > 0. 
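For orientation, the total variation and perimeter used throughout are the standard relaxation-type quantities of metric-space BV theory recalled in Section 2 below; in the usual conventions (which may differ from the paper's in inessential details) they read, together with the minimized functional,

\[
|Df|(A) = \inf\Big\{ \liminf_{i\to\infty} \int_A \operatorname{lip}_a(f_i)\, dm \; : \; f_i \in \mathrm{LIP}_{loc}(A),\ f_i \to f \text{ in } L^1_{loc}(A) \Big\},
\qquad
\operatorname{Per}(E;A) = |D\chi_E|(A),
\]
\[
M_\lambda(A) = \operatorname{Per}(A) + \lambda\, m(\Omega\setminus A), \qquad A \subset \Omega \text{ closed}.
\]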
Section 3 contains the proof of Theorem 3.4 and remarks on the minimization procedure.Before it, in Section 2 we recall and prove preliminary results on BV -functions and sets of finite perimeter.In Section 4 we connect the minimization approach to domains with locally quasiminimal boundary in PI-spaces, and also show that in PI-spaces the open representatives of the minimizers of the functional, and consequently domains with locally quasiminimal boundary need not be BV -extension domains, nor locally John domains.In the final part of the paper, Section 5 we list open questions raised by our extension result. Preliminaries We will always assume (X, d, m) to be a metric measure space where (X, d) is a complete and separable metric space and m is a Borel measure that is finite on bounded sets.The set of all Borel subsets of X is denoted by B(X).We define the open and the closed ball with center x ∈ X and radius r > 0 by B r (x) := {y ∈ X : d(x, y) < r} and Br (x) := {y ∈ X : d(x, y) ≤ r}, respectively.We shall denote by LIP(X) the space of all Lipschitz functions on X and by Lip(f ) the (global) Lipschitz constant of f ∈ LIP(X).Given any f ∈ LIP(X) and E ⊂ X we set Lip(f ; Having this notation at our disposal, the asymptotic Lipschitz constant (or the asymptotic slope) of a function f ∈ LIP(X) is a function lip a (f ) : X → [0, +∞) given by Notice also that lip a (f ) ≤ Lip(f ).Given an open set A ⊂ X we will say that a function f : X → R is locally Lipschitz on A if for every x ∈ A there exists r > 0 such that B r (x) ⊆ A and f | Br(x) is Lipschitz.We denote the space of all locally Lipschitz functions on A by LIP loc (A). Functions of bounded variation.We next recall the definition of the space of functions of bounded variation (BV functions, for short), as well as some of the characterisations of the total variation (measure) associated with a BV function.The below presentation is based on [11]. We extend |Df | X to all Borel sets as follows: given B ∈ B(X), we define 18,Thm. 3.4]).It follows from the definition that, given an open set A ⊂ X Given a Borel set B ⊂ X and f ∈ L 1 loc (m | B ), we introduce the following notation: |Df | B := the total variation measure of f computed in the metric measure space (X, d, m | B ). Definition 2.2 (The spaces • BV (B) and BV (B)).Let (X, d, m) be a metric measure space.Let B ⊂ X be Borel.We define We endow the space • BV (B) with the seminorm and the space BV (B) with the norm given by Remark 2.3.The following characterisation of the total variation measure of the whole space will be useful for our purposes.By [11,Theorem 4.5.3]we have that In general, we cannot restrict to globally Lipschitz functions when calculating the total variation measure: consider A = (0, 1) ∪ (1, 2) ⊂ R and f = χ (0,1) . We will use the following version of Lipschitz extensions where the asymptotic Lipschitz constant is preserved.Proposition 2.4 ([12, Theorem 1.1]).Let (X, d) be a metric space, C ⊂ X a subset and g : C → R a Lipschitz function.Then for every ε > 0 there exists an (Lip(g) + ε)-Lipschitz function f : X → R whose restriction to C coincides with g and such that for every x ∈ C. Moreover if g is bounded (resp.with bounded support), then f can be chosen to be bounded (resp.with bounded support). By combining Proposition 2.4 with Remark 2.3 we get the following. 
Proof.By Proposition 2.4 every f ∈ LIP(B) can be extended to an element of LIP(X) without changing the asymptotic Lipschitz constant on B, thus (taking into account Remark 2.3) we obtain and thus BV (B) = BV (Y ) (cf. [12,Theorem 3.1]).Now, take A ⊂ X open.Since every f ∈ LIP loc (A) can be restricted to an element of LIP loc (B ∩A), we get that By the definition of total variation measure, the inequality (3) extends to all Borel sets A ⊂ X.Finally, by (2) and recalling that |Df | Z is a finite Borel measure for any metric measure space (Z, d Z , m Z ), we have for all Borel A ⊂ X that The equality (1) follows by taking A = B in the above equality, combined with Remark 2.3 and Proposition 2.4. We define the notion of sets of finite perimeter on a Borel subset B ⊂ X. Definition 2.6 (Sets of finite perimeter on a Borel subset B).Let (X, d, m) be a metric measure space and B, E ∈ B(X).We define the perimeter of E on B as We say that E has finite perimeter on B if the quantity Per B (E) is finite.Moreover, we define for every F ∈ B(X) the quantity Per B (E; To shorten the notation, whenever B is equal to the whole (base) space X, we will often write Per(E) instead of Per X (E). Extension sets and extension properties.Definition 2.7 (BV -extension set).A set B ∈ B(X) is said to be a BV -extension set if there exist C > 0 and a map E B : BV (B) → BV (X), such that for every f ∈ BV (B) the following hold: i Given a BV -extension set B, we define the operator norm of E B as Definition 2.8 (Extension property for sets of finite perimeter).Let B ∈ B(X).We say that B has the extension property for sets of finite perimeter with respect to the full BV -norm if there exists C > 0 such that for every E ⊂ B with Per B (E) < +∞ there exists E ∈ B(X) such that the following two properties hold: Approximation by BV-extension sets from inside In this section we prove the main result of the paper, Theorem 3.4, according to which we can estimate domains from inside by closed BV -extension sets.In the proof we will need the following two results.The first one connects the extendability of BV -functions with the extendability of sets of finite perimeter.In the Euclidean case, such result was obtained by Burago and Maz'ya [6].Later it was extended to PI-spaces by Baldi and Montefalcone [3].The connection of perimeter-and BVextensions with W 1,1 -extensions was studied in detail in [13] in Euclidean spaces, and then in general metric measure spaces in [8].In [8,Proposition 3.4] the extension result closest to what we need was proven.There a Borel set was shown to be a BV -extension set if and only if it has the extension property for Borel sets of finite perimeter with the full norm.We need to make a small modification to this result, since in our proof we need to stay in the class of closed sets and consequently will only use open sets for testing the perimeter extensions.Proposition 3.1.Let (X, d, m) be a metric measure space.A Borel subset Ω ⊂ X has the extension property for BV if and only if it has the extension property for open sets of finite perimeter with the full norm. 
Proof.Having already the equivalence between BV -extension and perimeter extension of Borel sets given by [8, Proposition 3.4], we only need to show that perimeter extension for open sets implies BVextension for functions in BV (Ω)∩L ∞ (Ω).Towards this, take f ∈ BV (Ω)∩L ∞ (Ω).By the definition of the total variation, there exists a sequence of open sets U n ⊃ Ω and functions Now, by assumption we can extend each relatively open set , where C > 0 is the constant given by the assumption on having the extension property for open sets. As in the proof of [8,Proposition 3.4], this implies that we get an extension fn ∈ BV (X) of . By an application of Mazur's lemma (see again the proof of [8, Proposition 3.4] for details), this implies that we also get an extension This concludes the proof. The next lemma is the reason why our approach works only for closed sets.Later in Example 3.3 we observe that the claim of the lemma fails for general sets B ⊂ X. Lemma 3.2.Let (X, d, m) be a metric measure space.Given a closed set B ⊂ X and a set A ⊂ B of finite perimeter on B, it holds that Since B is closed, by Corollary 2.5 there exists a sequence (g i ) i ⊆ LIP(X) such that (5) For a fixed i ∈ N we then have that Therefore, up to taking a (relabeled) subsequence of (f i ) i , we may assume that ( 6) Taking into account (6), this gives where the last inequality follows from ( 5) and ( 4). Notice that Lemma 3.2 does not hold in general if we replace the closed set B with a general Borel set.This is seen from the next simple example. Example 3.3.Let us consider (R, d Eucl , L 1 ) as our metric measure space.Let B = (0, 1) ∪ (1, 2) and A = (0, 1).Then we have that Theorem 3.4.Let (X, d, m) be a metric measure space.Let Ω ⊂ X be a bounded open set.Then for every ε > 0 there exists a closed set G ⊂ Ω such that m(Ω \ G) < ε and so that the zero extension gives a bounded operator from BV (G) to BV (X). Proof.Let us denote C Ω = {A ⊂ Ω : A closed}.We consider the following functionals.For λ > 0 define M λ : C Ω → [0, +∞] as M λ (A) := Per(A) + λm(Ω \ A).We will show that for λ large enough, a minimal element in a partial order given by M λ will give the desired set G. We divide the proof into several steps. Step 1: For every λ > 0, we have inf A∈C Ω M λ (A) < +∞.Moreover, given ε > 0 there exists λ > 0 such that for any sequence Together with the above, this gives the existence of s ∈ [0, r] such that proving the first part of the claim.Let now ε > 0. Take r > 0 so small that m r = m(B(∂Ω, r)∩Ω) < ε 2 .Note that for any λ > 0, we have lim i→∞ M λ (A λ i ) ≤ mr r + λm r and so, by taking λ > 1 r , we get This proves the claim of Step 1. Next, we shall consider the following (non-empty, due to Step 1) subset of C Ω : Consider now a partial order A ≺ λ B on C Ω,λ defined as Step 2: For every λ > 0 and C ∈ C Ω,λ , the set {A ∈ C Ω,λ : A ≺ λ C} has a minimal element with respect to the partial order ≺ λ . Proof of Step 2. By Zorn's Lemma, it suffices to prove that any chain (A λ i ) i∈I ⊂ {A ∈ C Ω,λ : A ≺ λ C} contains a lower bound.By selecting inductively elements in the chain so that m(A λ i \ A λ j ) > 0, we may assume that I = N.Moreover, we may assume that A λ i+1 ⊂ A λ i for all i ∈ N. We claim that gives the lower bound.Trivially, A λ ⊂ A λ i for all i ∈ N, so it is enough to prove that M λ (A λ ) ≤ M λ (A λ i ) for all i ∈ N. 
To verify the latter, notice that by the continuity of measure, we have that and so by the lower semicontinuity of the perimeter, we have also Per(A λ ) ≤ lim inf i→+∞ Per(A λ i ), proving the claim.We now show that for any λ > 0 and a minimal element G λ ∈ C Ω,λ with respect to ≺ λ we have that the zero extension from G λ gives a bounded operator.Given any Borel set B ⊂ X, in what follows we will denote by E B the zero-extension operator from BV (B) to BV (X). We are now ready to combine the results obtained in the three steps above and get the claim of the theorem. Step 4. Fix ε > 0. There exists a closed set G ⊂ Ω such that Proof of Step 4. Let λ (depending on ε) be such that the claim of Step 1 holds and fix any minimizing sequence (A λ i ) i∈N .Then, for i ∈ N large enough we have that i } with respect to the partial order ≺ λ , whose existence has been proved in Step 2. By Step 3 we know that G λ,A λ i is a BV -extension set, thus it only remains to check that m(Ω \ G λ,A λ i ) < ε.To verify this, notice that, by the minimality This proves the statement of Step 4 (and of the theorem itself) for G = G λ,A λ i . By approximating a measurable set from outside by an open set, Theorem 3.4 gives the following corollary. Corollary 3.5.Let (X, d, m) be a metric measure space and let F ⊂ X be a bounded Borel set.Then for every ε > 0 there exists a closed set G ⊂ X such that m(F ∆G) < ε and so that the zero extension gives a bounded operator from BV (G) to BV (X).Remark 3.6.A stronger version of Corollary 3.5 where we require in addition that G ⊂ F , does not hold.A counter example is given by taking F to be a fat Cantor set in R equipped with the Lebesgue measure. We end this section with an example where the set G does not have an open representative. Example 3.7.Let X = R 2 with the Euclidean distance.We define Ω = Q ∪ ∞ n=1 T n , where Q = (0, 1) × (−1, 0) and T n are defined as follows.We start by defining a triangle with unit length base: Notice that T contains the base, but not the other two sides of the triangle.We then define Furthermore, define where x n = (2 −2n+1 + 2 −2n , 0) is the center point of the base of the triangle T n . 
Step 1: Let us show that we can split the functional M λ with respect to the cube Q and the triangles T n .First notice that for all A ⊂ Ω we have Towards showing that the perimeter part of the functional M λ also splits, we next show that for a finite perimeter set is the closed upper half plane.We do this by showing the chain of inequalities The equality in the chain (9) follows by subadditivity.We first show the inequality Per(A∩R 2 + ; R 2 + ) ≤ Per(A; R 2 + ).To this end we define and call U i the 1 i -neighborhood of R 2 + .This way we obtain Further let f i ∈ LIP loc (U i ) be such that f i ⇀ χ A and R 2 lip a (f i ) dm → Per(A; R 2 + ).We may assume that f i have values in [0, 1].Now setting g i = f i φ i we have g i ⇀ χ A∩R 2 + and g i is an admissible sequence of Lipschitz functions for Per(A ∩ R 2 + ; R 2 + ).By the Leibniz rule we now obtain Next we show the second inequality To this end we let φ i be as before.Further let We may again assume that f i and g i have values in [0, 1].Therefore, we can set h i = φ i g i + (1 − φ i )f i , for which it holds h i ⇀ χ A .Now again by a similar approximation as before using the Leibniz rule we obtain from which the claimed inequality follows.Now Per + is an open set, we have ).Let us recall that the perimeter measure enjoys the following locality property: given an open set U ⊂ X and sets of finite perimeter E, F ⊂ X such that m(U ∩ (E∆F )) = 0, it holds that (10) Per(E; U ) = Per(F ; U ). Taking into account that T n are pairwise disjoint compact sets together with (10), one can easily verify that Consequently, we get Step 2: Let G λ be a minimizer of M λ .We look to show that for large λ > 0 and n > 0, G λ will contain one of the points x n , but nothing of the respective triangle T n , in the measure sense, i.e. m(G λ ∩ T n ) = 0.This means that G λ does not have an open representative. Although it is not strictly needed in the following, we first notice that Q ⊂ G λ as long as λ > 0 is large enough.Next we will perform a reflection of the part of G λ that lies inside the triangles T n across the line where Per euc denotes the Euclidean perimeter.The first inequality follows since an admissible Lipschitz function for the definition of the perimeter on the left hand side will define an admissible Lipschitz function for the definition of the Euclidean perimeter on the right hand side via a reflection.For the second inequality we used the Euclidean isoperimetric inequality.Now given λ > 0 and as long as the minimizer G λ contains boundary of positive measure and thus there is no open representative of G λ . Notice that the example above has a closed representative since we can always add in the boundary of the set G λ to itself since the dirac masses do not add perimeter, and the rest of the boundary is a null set with respect to m. Remarks on quasiminimal sets in PI-spaces As noted in the Introduction, in PI-spaces we can approximate a domain from inside and outside by uniform domains which are extension domains for BV and Sobolev functions.Therefore, we will focus here only on connecting our approach of the more general existence result obtained in Section 3 with other results on the structure of minimizers in PI-spaces.Here with a PI-space we mean a complete metric measure space (X, d, m) where the measure is doubling and the space satisfies a local (1, 1)-Poincaré inequality.Recall that a measure m is doubling on X if there exists a constant C > 0 so that for every x ∈ X and r > 0 we have m(B(x, 2r)) ≤ Cm(B(x, r)). 
A metric measure space satisfies a local (1, 1)-Poincaré inequality if there exist constants C > 0 and λ ≥ 1 so that for every function f in X with an upper gradient g f , every x ∈ X and r > 0 we have where f A denotes the average of f in a set A ⊂ X of positive and finite measure.The proof of Theorem 3.4 is based on the minimization of the functional If we replace the term λm(Ω \ A) by λm(Ω∆A) we obtain a more studied functional A minimization of the functional M λ leads to a set which is close in measure to Ω, but not necessarily contained in Ω.Still, the argument in the proof of Theorem 3.4 for showing that the minimizer is a BV -extension set works also for the functional M λ provided that the minimizer has a closed representative (in order to use Lemma 3.2).Since in general we do not know if the minimizer of M λ or M λ has a closed representative, instead of using a global minimizer we took a minimal element in a decreasing chain of closed sets.Recall that by Example 3.7 we know that the minimizer need not have an open representative. In PI-spaces we do have a closed representative for the global minimizer of M λ in the class of Borel sets.This can be seen via the regularity results of quasiminimal sets.By [2, Proposition 3.20 and Remark 3.23] we have that in PI-spaces the minimizer of the functional M λ is locally K-quasiminimal in X. Recall that a Borel set E ⊂ X is said to be K-quasiminimal, or to have K-quasiminimal boundary in an open set Ω ⊂ X, if for all open U ⋐ Ω and every Borel sets F, G ⋐ U we have A set E is said to be locally K-quasiminimal in Ω, if instead of requiring the minimality for all open U ⋐ Ω we require that for every x ∈ Ω the exists an open neighbourhood V ⊂ Ω of x so that for all U ⋐ V the above holds. By [16,Theorem 4.2] a K-quasiminimal set in a PI-space has a representative for which the topological and measure theoretic boundaries agree.Recall that the measure theoretic boundary of E consists of those points where the (upper) density of both E and X \ E are positive.By a density point argument, the measure theoretic boundary has always measure zero.Consequently, a K-quasiminimal set has both an open and a closed representative.The proof of Theorem 3.4 then gives that the closed representative is a BV -extension set.However, as we will see in Example 4.1, being a BV -extension set is not invariant under taking representatives, so we cannot conclude directly that the open representative is also a BV -extension set. Notice also that for the functional M λ we have the local K-quasiminimality only inside Ω.Therefore, via [16,Theorem 4.2] we only know that the topological boundary of the minimizer has measure zero inside Ω.However, if we start with a domain Ω with m(∂Ω) = 0, we can conclude that also the minimizer of M λ has both an open and a closed representative. The above argumentation leads to natural questions: In a PI-space, is every domain with locally quasiminimal surface a BV -extension set?Is the closure of a domain with locally quasiminimal surface a BV -extension set?We end this section with an example showing that the answer to the first question is negative.In fact, the example shows that even the open representative of a minimizer of M λ need not be a BV -extension set in a PI-space.The same example also answers a question in [16]: domains with locally quasiminimal surface need not be local John domains in PI-spaces. 
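For reference, the local (1, 1)-Poincaré inequality invoked at the beginning of this section has the following standard form (supplied here as the usual formulation, with C and λ the constants mentioned above):

\[
\frac{1}{m(B(x,r))}\int_{B(x,r)} \big|f - f_{B(x,r)}\big|\, dm \;\le\; C\, r\, \frac{1}{m(B(x,\lambda r))}\int_{B(x,\lambda r)} g_f\, dm ,
\]
where $f_{B(x,r)}$ denotes the average of $f$ over the ball $B(x,r)$.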
Recall that a domain Ω is a local John domain if there exist constants C, δ > 0 such that for every x ∈ ∂Ω, every 0 < r < δ and all y ∈ B r (x) ∩ Ω there exists a point z ∈ B Cr (x) ∩ Ω with d(y, z) ≥ r/C and a curve γ ⊂ Ω such that ℓ(γ y,w ) ≤ Cdist(w, ∂Ω) for all w ∈ γ, where γ y,w is the shortest subcurve of γ joining y and w, and ℓ(α) denotes the length of a curve α.A motivation for asking about the local John condition comes from the Euclidean setting, where David and Semmes showed that bounded sets with quasiminimal boundary surfaces are locally John domains [10]. Example 4.1.Consider the metric measure space X = X 1 ∪ X 2 ∪ X 3 where X i = {i} × [0, 1] and for every t ∈ [0, 1] the points (i, t, 0), i = {1, 2, 3} are identified.(Later on we will not always write the first coordinate that was above used only as a label.)Let us write the common part of X i as D = X 1 ∩ X 2 ∩ X 3 .In other words, X = [0, 1] × T , with T being a tripod with unit length legs.The distance d on X is the length distance on each X i given by and the reference measure m is the sum of weighted Lebesgue measures on each X i : The obtained metric measure space (X, d, m) is an Ahlfors 2-regular and satisfies the (1, 1)-Poincaré inequality.We will consider a domain Ω ⊂ X as Ω = Ω 1 ∪ Ω 2 ∪ Ω 3 , where each Ω i ⊂ X i is defined as follows.We start by defining as a basic building block a triangle Now, for the set Ω 1 in X 1 we simply choose and the set Ω 3 ⊂ X 3 is given by The common part J ⊂ D for the sets above is defined by See Figure 1 for an illustration of the domain Ω. 1 lives in three copies of the unit square, X 1 , X 2 , and X 3 , that are glued together at one edge.The domain minimizes M λ and thus has locally quasiminimal surface.One intuitive way to see the quasiminimality is to observe that with local variations one cannot decrease the perimeter much when trying to remove the slits appearing at the common edge in the X 1 ∪ X 2 square.The slits prevent the domain from being locally John or BV -extension domain. Claim 1: For any λ ≥ 1, the domain Ω is a minimizer of M λ among Borel subsets of X.To prove it, we first show that This can be verified by simply taking as a sequence (f n ) n of Lipschitz functions approaching to χ Ω in L 1 (m) whose elements f n are given by Then, denoting we have that Thus, it remains to estimate the measure of Ω n .Notice that by the choice of the distance d and the slopes in the triangle T , we have for k ∈ {2, 3} that Claim 3: Ω is not locally John domain.To show this, we take as the center x := (0, 0) ∈ ∂Ω.Given any C ≥ 1 and δ > 0 we take k ∈ N large enough so that Notice that by the selection of k we have 0 < r < δ.Then so the point z in the John condition is forced to be selected outside E k .Consequently, any curve γ joining y and z in Ω must pass through a point where in the last inequality we used again the selection of k.This contradicts the John condition with the given parameters C and δ. 
Remark 4.2.Notice that as a minimizer of M λ in a PI-space, the domain Ω of Example 4.1 also has quasiminimal surface.If we use as the measure m in the example the 2-dimensional Hausdorff measure, we have that the space (X, d, m) is isotropic.(Let us recall that a metric measure space is isotropic whenever the density function θ E associated with the set of finite perimeter E and for which it holds that Per(E, •) = θ E H | ∂ e E is independent on the set E itself.We refer to [1] for more details about the mentioned density function.)Since the property of being quasiminimal is invariant under a change of the reference measure to a comparable one, we will thus obtain a version of the example where the space is isotropic, but the domain only has quasiminimal surface instead of being a minimizer of M λ .Notice also, that changing to a distance d induced by the Euclidean distances in X i we also preserve the quasiminimality, since the change in distance is bi-Lipschitz. Open questions Our extension result leads to several questions that we have not yet been able to answer.In Theorem 3.4 we proved that we can approximate domains from inside by closed BV -extension sets.For the special case of PI-spaces, in Section 4, we noted that minimizers of M λ have also open representatives.However, Example 4.1 showed that the open representatives need not be BV -extension sets even in PI-spaces.What still remained open is if being a minimizer of M λ is really needed or if having just quasiminimal surface is enough: Question 5.1.Let (X, d, m) be a PI-space and Ω ⊂ X a bounded domain with locally K-quasiminimal surface.Is then Ω a BV -extension set?Another question stemming from the proof of Theorem 3.4 is if we really need to take the partial order into use to guarantee that the minimal element has a closed representative.Question 5.2.Let (X, d, m) be a metirc measure space and Ω ⊂ X a bounded domain.Let E be a minimizer of M λ (or M λ ) among Borel subsets of Ω (or X respectively).Does E have a closed representative? For PI-spaces the answer to Question 5.2 is positive for M λ , see again Section 4. Independent of the minimization approach, the obvious question still remaining is: Question 5.3.Let (X, d, m) be a metric measure space, Ω ⊂ X a bounded domain and ε > 0. Does there exist a BV -extension domain A ⊂ Ω such that m(Ω \ A) < ε? None of our approximations is from outside because we argue that the minimizer is an extension set by comparing the value of the functional to value at a modification of the minimizer where we take away an open subset. Question 5.4.Let (X, d, m) be a metric measure space, Ω ⊂ X a bounded domain and ε > 0. Does there exist a BV -extension domain (or just a BV -extension set) A ⊃ Ω such that m(A \ Ω) < ε? In addition to knowing the answer to the above questions, it would be interesting to see if we can also approximate domains by Sobolev W 1,p -extension domains in the absence of the local Poincaré inequality.In particular, the case p = 1 is intimately connected to the BV and perimeter extensions even in general metric measure spaces [8]. Corollary 2 . 5 . Let (X, d, m) be a metric measure space.Let B ⊂ X be closed and define Y = (B, d| B×B , m| B ). Then BV (B) = BV (Y ) and the total variation measures |Df | B and |Df | Y agree on the Borel subsets of B for every f ∈ BV (B).Moreover, Step 3 : Fix any λ > 0 and C ∈ C Ω,λ .Let G λ,C be a minimal element in {A ∈ C Ω,λ : A ≺ λ C} with respect to the partial order ≺ λ .Then we have that E G λ < +∞.Proof of Step 3. 
By Proposition 3.1, we only need to check that the zero extension is bounded for characteristic functions of open sets of finite perimeter in G λ,C . So, let A ⊂ G λ,C be relatively open with Per G λ,C (A) < +∞. Then by the minimality of G λ,C and the fact that m Figure 1. The domain Ω in Example 4.1 lives in three copies of the unit square, X 1 , X 2 , and X 3 , that are glued together at one edge. The domain minimizes M λ and thus has locally quasiminimal surface. One intuitive way to see the quasiminimality is to observe that with local variations one cannot decrease the perimeter much when trying to remove the slits appearing at the common edge in the X 1 ∪ X 2 square. The slits prevent the domain from being locally John or a BV-extension domain.
8,177.4
2023-05-04T00:00:00.000
[ "Mathematics" ]
and In chiral models with SU(3) group structure, strange form factors of baryon octet are evaluated by constructing their sum rules to yield theoretical predictions comparable to the recent experimental data of SAMPLE Collaboration. We also study sum rules for the flavor singlet axial currents for the EMC experiment in a modified quark model. Introduction There have been many interesting developments concerning the strange flavor structures in the nucleon and the hyperons. Especially, the internal structure of the nucleon is still a subject of great interest to both experimentalists and theorists. In 1933, Frisch and Stern [1] performed the first measurement of the magnetic moment of the proton and obtained the earliest experimental evidence for the internal structure of the nucleon. However, it wasn't until 40 years later that the quark structure of the nucleon was directly observed in deep inelastic electron scattering experiments and we still lack a quantitative theoretical understanding of these properties including the magnetic moments. Quite recently, the SAMPLE Collaboration [2] reported the experimental data of the proton strange form factor through parity violating electron scattering [3]. To be more precise, they measured the neutral weak form factors at a small momentum transfer Q 2 S = 0.1 (GeV/c) 2 to yield the proton strange magnetic form factor [2] G s M (Q 2 S ) = +0.14 ± 0.29 (stat) ± 0.31 (sys). This positive experimental value is contrary to the negative values of the proton strange form factor which result from most of the model calculations except the predictions [4,5] based on the SU(3) chiral bag model [6] and the recent predictions of the chiral quark soliton model [7] and the heavy baryon chiral perturbation theory [8]. (See Ref. [9] for more details.) On the other hand, the EMC experiment [10] also reported the highly nontrivial data that less than 30% of the proton spin may be carried by the quark spin, which is quite different from the well-known prediction from constituent quark model. To explain this discrepancy, it has been proposed [11] that the experimentally measured quantity is not merely the quark spin polarization ∆Σ but rather the flavor singlet axial current (FSAC) via the axial anomaly mechanism [12]. Recently, at the quark model renormalization scale, the gluon polarization contribution to the FSAC in the chiral bag model has been calculated [13] to yield a significant reduction in the relative fraction of the proton spin carried by the quark spin, consistent with the small FSAC measured in the EMC experiments. In this paper, in the chiral models with SU(3) group structure, we will investigate the strange form factors of octet baryons in terms of the sum rules of the baryon octet magnetic moments to predict the proton strange form factor. We will also study the modified quark model with SU(3) group structure to obtain sum rules for the strange flavor singlet axial current of the nucleon in terms of the octet magnetic moments µ B and the nucleon axial vector coupling constant g A . In section 2, we construct the sum rules of the baryon octet magnetic moments in the SU(3) chiral models. In section 3 we construct the sum rules for the nucleon strange flavor singlet axial current in the modified quark model. Strange form factors Now we consider the baryon magnetic moments in the chiral models such as Skyrmion [14], MIT bag [15] and chiral bag [6] with the general chiral SU(3) group structure. 
In the higher dimensional irreducible representation of SU(3) group, the baryon wave function is described as [4,16] where the representation mixing coefficients are given by Here E λ is the eigenvalue of the eigen equation H 0 |B λ = E λ |B λ . (For explicit expressions for the Hamiltonian H = H 0 +H SB in the chiral models, see Ref. [5].) Using the above baryon wave function in the Hamiltonian H the spectrum of the magnetic moment has the hyperfine structure (For the other magnetic moments, see Ref. [5].) Here one notes that the coefficients are solely given by the SU(3) group structure of the chiral models and the physical informations such as decay constants and masses are included in the above inertia parameters, such as M, N and so on, calculable in the chiral models. Using the V-spin symmetry sum rules [5], one can obtain the relation which will be used later to obtain sum rules of the strange form factors of octet baryons. Now we consider the form factors of the octet baryons which, in the chiral models, are definitely extended objects with internal structure associated with the electromagnetic (EM) current, to which the photon couples, According to the Feynman rules the matrix element ofV µ γ for the baryon with transition from momentum state p to momentum state p + q is given by the following covariant decomposition where u(p) is the spinor for the baryon states and q is the momentum transfer and σ µν = i 2 (γ µ γ ν − γ ν γ µ ) and M B is the baryon mass and F γ 1 and F γ 2 are the Dirac and Pauli EM form factors, which are Lorentz scalars and p 2 = (p + q) 2 = M 2 B on shell so that they depend only on the Lorentz scalar variable q 2 (= −Q 2 ). We will also use the Sachs form factors, which are linear combinations of the Dirac and Pauli form factors which can be rewritten as On the other hand, the neutral weak current operator is given by an expression analogous to Eq. (2.4) but with different coefficients: Here the coefficients depend on the weak mixing angle, which has recently been determined [17] with high precision: sin 2 θ W = 0.2315 ± 0.0004 . In direct analogy to Eq. (2.7), we have expressions for the neutral weak form factors G Z E,M in terms of the different quark flavor components: Here one notes that the form factors G f E,M (f = u, d, s) appearing in this expression are exactly the same as those in the EM form factors, as in Eq. (2.7). Utilizing isospin symmetry, one then can eliminate the up and down quark contributions to the neutral weak form factors by using the proton and neutron EM form factors and obtain the expressions It shows how the neutral weak form factors are related to the EM form factors plus a contribution from the strange (electric or magnetic) form factor. Measurement of the neutral weak form factor will thus allow (after combination with the EM form factors) determination of the strange form factor of interest. It should be mentioned that there are electroweak radiative corrections to the coefficients in Eq. (2.9), which are generally small corrections, of order 1-2%, and can be reliably calculated [18]. The EM form factors present in Eq. (2.10) are very accurately known (1-2 %) for the proton in the momentum transfer region Q 2 < 1 (GeV/c) 2 . The neutron form factors are not known as accurately as the proton form factors (the electric form factor G n E is at present rather poorly constrained by experiment), although considerable work to improve our knowledge of these quantities is in progress. 
Thus, the present lack of knowledge of the neutron form factors will significantly hinder the interpretation of the neutral weak form factors. At zero momentum transfer, one can have the relations between the EM form factors and the static physical quantities of the baryon octet, namely G γ E (0) = Q B and G γ M (0) = µ B with the electric charge Q B and magnetic moment µ B of the baryon. In the strange flavor sector, the Sachs magnetic form factor in Eq. (2.6) yields the strange form factors of baryon octet degenerate in isomultiplets F s with the fractional strange EM charge Q s B . Here note that one can express the slope of G s E at Q 2 = 0 in the usual fashion in terms of a strangeness radius r s defined as r 2 s = −6 dG s E /dQ 2 Q 2 =0 . Now we construct model independent sum rules for the strange form factors of baryon octet in the chiral models with the SU(3) flavor group structure. Since the nucleon has no net strangeness the nucleon strange form factor is given by [5] which, at least within the SU (3) On the other hand, the quantities G Z E,M in Eq. (2.10) for the proton can be determined via elastic parity-violating electron scattering to yield the experimental data G s M (Q 2 S ) = +0.14 ± 0.29 (stat) ± 0.31 (sys) [2] for the proton strange magnetic form factor. Here one notes that the prediction for the proton strange form factor (2.13) obtained from the sum rule (2.12) is comparable to the SAMPLE data. Moreover, from the relation (2.10) at zero momentum transfer, the neutral weak magnetic moment of the nucleon can be written in terms of the nucleon magnetic moments and the proton strange form factor [19] 4µ Z p = µ p − µ n − 4 sin 2 θ W µ p − F s 2N (0). (2.14) Next, we obtain the other octet baryon strange form factors [5] F s 2Λ (0) = which, similarly to the nucleon strange form factors, can be rewritten in terms of the octet magnetic moments to yield the sum rules for the other octet strange form factors Table 1 are unreliably sensitive in the strange flavor channel. Strange flavor singlet axial currents In this section, we consider a modified quark model [20]. In the nonrelativistic quark model, the quarks possess the static properties such as mass, electromagnetic charge and magnetic moments, which are independent of their surroundings. However this assumption seems to be irrelevant to the realistic experimental situation. In the literature [20], the magnetic moments of the quarks were proposed to be different in the different isomultiplets, but to be the same within an isomultiplet. The magnetic moments are then given by where µ B f is an effective magnetic moment of the quark of flavor f for the baryon B degenerate in the corresponding baryon isomultiplet, and ∆f B is the spin polarization for the baryon. Using the SU(3) charge symmetry one can obtain the magnetic moments of the octet baryons as follows [20] After some algebra we obtain the novel sum rules for spin polarizations ∆f with the flavor f in terms of the octet magnetic moments µ B and the nucleon axial vector coupling constant g A where we have assumed the isospin symmetry µ B u = −2µ B d . Here one notes that the above sum rules (3.3) are given only in terms of the physical quantities, the coupling constant g A and baryon octet magnetic moments µ B , which are independent of details involved in the modified quark model, as in the sum rules in Eqs. (2.12) and (2.16). Moreover these sum rules are governed only by the SU(3) flavor group structure of the models. 
Using the experimental data for g A and µ B , we obtain the strange flavor spin polarization ∆s ∆s = −0.26 (3.5) which, together with the other flavor spin polarizations ∆u = 0.81 and ∆d = −0.44, one can arrive at the flavor singlet axial current of the nucleon as follows 2 ∆Σ = ∆u + ∆d + ∆s = 0.11 (3.6) which is comparable to the recent value ∆Σ = 0.28 obtained from the deep inelastic lepton-nucleon scattering experiments [22]. Here note that the strange flavor singlet axial current ∆s in Eq. (3.5) is significantly noticeable even though the flavor singlet axial current ∆Σ in Eq. (3.6) is not quite large. Now it seems appropriate to discuss the strange form factor in this modified quark model. Exploiting the relations (3.2), together with the isospin symmetry µ B u = −2µ B d , one can easily obtain We thus arrive at the sum rule for the nucleon strange form factor in the modified quark model Substituting the experimental values for µ p and µ n , and the above predictions ∆u = 0.81 and ∆d = −0.44, we obtain which reveals the discrepancy from the SAMPLE experimental values, differently from the prediction (2.13) of the SU(3) chiral model case. However, 2 In fact, in the literature [20], ∆Σ is evaluated using the sum rule for ∆Σ. However, here we have explicitly obtained the sum rules for its flavor components ∆f (f = u, d, s) and as expected, this result is quite comparable to the prediction in the literature [21] where, similar to Eq. (3.2), the SU(3) charge symmetry relations with the quark-loops are used. The difference between the preditions of F s(0) 2N in the SU(3) modified quark model and the SU(3) chiral model originates from the assumptions of these models, for instance, those in the SU(3) modified quark model that the magnetic moments of the quarks are different in the different isomultiplets, but do not change within an isomultiplet. Conclusions In summary, we have investigated the strange flavor structure of the octet baryon magnetic moments in the chiral models with SU(3) group structure. The strange form factors of octet baryons are explicitly constructed in terms of the sum rules of the baryon octet magnetic moments, which originate from the SU(3) flavor group structure, to yield the theoretical predictions. Especially in case of using the experimental data for the baryon magnetic moments as input data of the sum rules, the predicted proton strange form factor is comparable to the recent SAMPLE experimental data. On the other hand, we have studied the modified quark model with SU(3) group structure, where the magnetic moments of the quarks are different in the different isomultiplets, but do not change within an isomultiplet. In this model, we have obtained the sum rules for the spin polarizations ∆f with the flavor f (f = u, d, s) in terms of the octet magnetic moments µ B and the nucleon axial vector coupling constant g A , to yield the flavor singlet axial current of the nucleon, comparable to the recent experimental data. Moreover, the strange flavor spin polarization has been shown to be quite noticeable. However, exploiting the sum rule for the nucleon strange form factor constructed in the modified quark model, we have obtained the value, which shows discrepancy from the SAMPLE experimental values but is comparable to the prediction in the previous literature. Through further investigation, it will be interesting to study deep structure of the sum rules of the models with SU(3) group structures, which could lead to unification of these models. 
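As a numerical footnote to the quantities quoted above (a sketch only: the octet moments are the standard experimental values in nuclear magnetons, the weak mixing angle is the value quoted in Section 2, and the numbers inserted for F s 2N (0) are placeholder assumptions rather than the model's prediction):

# Flavor singlet axial current from the spin polarizations quoted in Eqs. (3.5)-(3.6)
delta_u, delta_d, delta_s = 0.81, -0.44, -0.26
print("Delta Sigma =", round(delta_u + delta_d + delta_s, 2))   # -> 0.11

# Neutral weak magnetic moment of the proton from Eq. (2.14):
#   4 * mu_Z_p = mu_p - mu_n - 4 * sin^2(theta_W) * mu_p - F^s_2N(0)
MU_P, MU_N = 2.793, -1.913       # experimental proton and neutron moments, nuclear magnetons
SIN2_THETA_W = 0.2315            # weak mixing angle as quoted in Section 2
for f_s in (-0.3, 0.0, 0.3):     # placeholder assumptions for F^s_2N(0)
    mu_z_p = (MU_P - MU_N - 4.0 * SIN2_THETA_W * MU_P - f_s) / 4.0
    print(f"F^s_2N(0) = {f_s:+.1f} -> mu^Z_p = {mu_z_p:+.3f} mu_N")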
STH would like to thank Bob McKeown for helpful discussions and kind concerns at Kellogg Radiation Laboratory, Caltech where a part of this work has been done, and M. Rho for useful discussions and comments. He also
3,502.8
2001-11-30T00:00:00.000
[ "Physics" ]
Deep learning-based pupil model predicts time and spectral dependent light responses Although research has made significant findings in the neurophysiological process behind the pupillary light reflex, the temporal prediction of the pupil diameter triggered by polychromatic or chromatic stimulus spectra is still not possible. State-of-the-art pupil models are limited to estimating a static diameter at the equilibrium state for spectra along the Planckian locus. Neither the temporal receptor weighting nor the spectral-dependent adaptation behaviour of the afferent pupil control path is mapped in such functions. Here we propose a deep learning-driven concept of a pupil model, which reconstructs the pupil's time course either from photometric and colourimetric or from receptor-based stimulus quantities. By merging feed-forward neural networks with a biomechanical differential equation, we predict the temporal pupil light response with a mean absolute error below 0.1 mm from polychromatic (2007 ± 1 K, 4983 ± 3 K, 10,138 ± 22 K) and chromatic spectra (450 nm, 530 nm, 610 nm, 660 nm) at 100.01 ± 0.25 cd/m2. This non-parametric and self-learning concept could open the door to a generalized description of the pupil behaviour. with short-wavelength stimuli than with longer wavelengths 25,39,41 . Thus, a distinction must be made between a phasic and a tonic pupil light response. Historically, these notable findings have had little impact on pupil modelling research. The origin of pupil modelling began with the functions of Holladay 42 and Crawford 43 , each based on investigations of unknown age. With their ground-breaking works, they set the requirement for upcoming pupil models: developing a model that can predict the pupil diameter as a function of a V(λ)-weighted quantity. It was indirectly assumed that the pupil control path is managed by an additive combination of L- and M-cones. This assumption is the basis of all pupil models published until the year 2012. Moon and Spencer 44 and De Groot and Gebhard 45 created combined models based on previously published data sets. These two models differ mainly in the predicted pupil diameter at high and low luminance. The models from Crawford and from Moon and Spencer both used a hyperbolic tangent fitting function, taking care of the minimum and maximum pupil diameter. De Groot and Gebhard 45 believed that an intense saturation of the pupil diameter at high luminance using a hyperbolic tangent function does not correspond to the pupil's physiological nature. However, a high variance of the raw data between all authors up to the year 1952 is noticeable, which Stanley and Davies 46 attributed to differently sized adaptation surfaces.
Therefore, Stanley and Davies 46 proposed a pupil model that integrates the adaptation field size as an additional dependent parameter. Watson and Yellot 47 reviewed all pupil formulas and developed an unified pupil model with the additional parameters "age" and "number of eyes". Including the model from Watson and Yellot 47 , all formulas predict the static sustained pupil diameter in millimetres at the equilibrium state, caused by white light from thermal radiators. The time and spectral dependency of the afferent pupil control path were not taken into account in any of these models, although these are essential dependence parameters. In 2017, Rao et al. published a pupil model that takes into account the influence of ipRGCs by using a cirtopic luminance as an additional parameter 48 . The model was based on pupil examination, which used white light from phosphor-converted LEDs with an exposure time of 80 s. However, using the model requires knowledge about the measured stimulus spectrum, which complicates its application compared to L-and M-cone based pupil models. Therefore, the more rigorous application must make a significant contribution to the prediction accuracy, justifying the extra work. In a recent study, it was found that at 60 s exposure time, the mean prediction error of the Watson and Yellot pupil model with polychromatic white light of different correlated colour temperatures ( ∼ 2000 K, ∼ 5000 K, ∼ 10,000 K) is less than ± 0.5 mm 25 . At one second exposure time, it was 0.71 ± SD 0.15 mm 25 . Furthermore, with chromatic spectra of the peak-wavelengths 450 nm, 530 nm, 610 nm and 660 nm, the averaged prediction error at one second adaptation time was 0.94 ± SD 0.12 mm 25 . Therefore, adding a static ipRGC-component for the steady-state pupil diameter for longer exposure times like in the model from Rao et al. is not sufficient. The temporal influence is much more significant than the spectral impact when using white polychromatic light spectra 25 . Neither the dynamic receptor weighting nor a time-dependent prediction of pupil diameter is possible with any state-of-the-art pupil model. Even with spectra along the Planckian locus, pupil models reveal flawed predictions due to the missing time dependence, showing that being able to reconstruct the wavelength-dependent time course of the pupil light response would be the next step 25 . Moreover, the history of pupil modelling showed that parametric model approaches with fixed functions are not sustainable. When adding additional dependent parameters or renewing the data, the whole structure of the model has to be changed. With this work, we aim for a non-parametric and data-driven model approach, which can consider additional stimulus dependencies without changing the model structure itself. This could make it possible to build a self-learning pupil model based on a publicly accessible database, leading to a general pupil behaviour function. The published standards in pupil research have created a basis for the vision of such a pupil light database 49 . Here, we developed a concept for a deep learning-based pupil model that can consider the temporal and adaptive weighting dependence of the retinal receptors. 
We combined time-variant and time-invariant model approaches with a data-driven non-parametric neural network to link model parameters with spectral stimulus quantities, making it possible to reconstruct the pupil light response up to its equilibrium state by using only photometric and colourimetric, or receptor-based stimulus quantities. Materials and methods The requirements for a time- and wavelength-dependent pupil model approach. The structure of state-of-the-art pupil model approaches needs to be changed when additional exogenous influencing parameters inside the function are necessary. For instance, the age of subjects y significantly affects the pupil diameter d p , because the maximum aperture decreases with rising age 50 . To take this achromatic effect into account, Watson and Yellot had to modify the function of Stanley and Davies by embedding it into another function to derive the age dependency y in the unified pupil model d p,Watson (L, y, e, α) . Such a strategy is not effective and would not have been necessary for a data-driven non-parametric pupil model. Given the pupil's dependency parameters, it is foreseeable that cognitive influences will be included to improve the prediction accuracy in the future. Such cognitive influencing parameters can cause intersubject or intrasubject scatter in the measured raw data. Studies have shown that the intrasubject variance of a single participant reaches from ± 0.3 mm to ± 0.6 mm 51,52 . A higher variance of up to ± 1.5 mm is associated with intersubject studies 43,50 . Thus, a pupil model can never be more accurate than these variances. Large sample sizes behind a pupil model lead to an improved model quality since the mean of the population is approximated more accurately. A generalised pupil model would not actively decrease the prediction error of a single observer. However, by knowing the pupil diameter's distribution in a population at a given stimulus, a confidence measure could be modelled too. Non-parametric functions that have sufficient degrees of freedom are the key to making a data-driven model possible. Before cognitive influences can be modelled, an approach must be found to model the complex properties of the exogenous influences on the afferent pupil path. In this area, there is a gap that has not been closed. The afferent pupil path's mechanism affects the temporal constriction and dilatation of the pupil differently depending on the radiance L e,λ of the stimulus spectrum x(λ) for λ ∈ [380, 780] nm, and the exposure time t L . When using short exposure times (0 < t L ≤ 2 seconds), the pupil reacts after a latency time τ of 220 ms to 550 ms and contracts up to a peak diameter d Peak 53 , followed by a re-adaptation phase in which the pupil diameter dilates back to its pre-stimulus state (Fig. 1A). When a stimulus spectrum x(λ) is constant, the latency τ , constriction velocity and peak constriction depend on the used radiance L e,λ or luminance L of the light source. As the luminance L increases, the constriction velocity and peak constriction increase while the latency time τ reduces [53][54][55] (Fig. 1A). The afferent pupil control path starts adapting to the stimulus x(λ) itself after the peak constriction when the exposure time of the stimulus t L is increased. In this adaptation phase, the influence of the L-, M- and S-cones decreases and the melanopsin-activated ipRGC signal becomes dominant 7 . 
This adaptive weighting of the receptors causes the decrease ("pupil escape") of the initial peak constriction with increasing adaptation time (Fig. 1B). When steady-state light stimuli with constant luminance L but different chromatic spectra x(λ) are used, the pupil light response's wavelength dependency becomes more apparent. Studies have shown that both the latency time τ and the peak constriction d Peak (L, λ) are wavelength dependent. The pupil contracts more strongly and faster at short wavelengths than at long wavelengths 36,[56][57][58] . Additionally, the chromatic pupil adaptation mechanism at longer wavelengths takes more time to reach the equilibrium state 39,59,60 (Fig. 1C). Therefore, the pupil light response can be defined as d p (t, x(λ)) . Existing L- and M-cone based pupil models only predict a static pupil diameter d p (L) with the luminance L at the equilibrium state. Neurophysiological models or practical models derived from empirical data are conceivable to describe these time- and wavelength-dependent processes. The neurophysiological approach would have the goal of deriving the photon-to-photoreceptor relationships all the way up to the transmission of frequency-coded action potentials via the afferent pupil path and the regulation of the iris muscles by the Edinger-Westphal nucleus, allowing the complex temporal pupil responses (Fig. 1A-C) to be reconstructed. Although such an approach would have the advantage of modelling the neurophysiological findings of recent years, it would make its application considerably more difficult in practice, since knowledge of the spectrum and calculated receptor signals would be a prerequisite. It must be taken into account that the predictions of L- and M-cone based pupil models are flawed, but often used, since they can calculate the pupil diameter by using standard measurement equipment. Therefore, an alternative pupil model must be able to compensate for the deficits of current L- and M-cone models and offer the possibility of adding further model dependencies. Participants. We used the data from an intra- and intersubject pupil experiment with chromatic and polychromatic spectra to develop and train the proposed data-driven pupil model approach 25 . The complete pupil data used in this manuscript are from the authors' previous publication 25 . Therefore, the methodology in the collection and pre-processing of the participants' data is reported from the previously conducted experiments 25 . The pupil experiments were split into a chromatic and a polychromatic stimuli session. The subjects in the chromatic trial had an age between 19 and 25 y, mean age 21.95 SD ± 1.73 y. In the polychromatic session, the observers were 19 to 25 years old, mean age 22.2 SD ± 1.77 y. One subject was tested in-depth with twelve repetitions (Age: 33 y). Prerequisites for participation were an age between 19 and 25 y, no history of ocular disease, and no use of medications or drugs that could influence the pupil response. Furthermore, we instructed the subjects to abstain from caffeine and alcohol for 48 h before the experiment. The study was approved by the ethics committee of the Technical University of Darmstadt (ID: EK 12/2019) and carried out in accordance with the ethical principles of the Declaration of Helsinki 25 . All guidelines and regulations of the TU Darmstadt's ethics committee were met. We received signed consent from all participants. 
Figure 1 (caption): (A) Phasic pupil response at one short light pulse. As the luminance intensity increases, the constriction velocity and the peak pupil constriction increase. At the same time, the latency of the pupil decreases with increasing luminance intensity. After peak constriction, the pupil undergoes re-adaptation. (B) During longer exposure times, the pupil adapts to the light stimulus itself, resulting in an increasing dilatation up to the equilibrium state. (C) The latency time, constriction velocity and peak constriction depend on the used light wavelength. Short-wavelength stimuli cause a lower latency and a greater pupil constriction. The equilibrium time is reached faster with short wavelengths 25 .
Through a mirror inside the box, a homogeneously illuminated 700 × 700 mm rectangular surface was reached, corresponding to a visual angle of 53.1°. The gaze position was fixed to the middle of the adaptation surface through a 0.8° fixation target from Thaler et al., consisting of a bull's-eye combination with a cross-hair structure 61 . The pupil measurements from the authors' previous publication to obtain the training data were split into two studies 25 . In the first study, chromatic LED spectra with the peak wavelengths 450 nm (99.73 SD ± 0.4 cd/m 2 ), 530 nm (100.12 SD ± 0.2 cd/m 2 ), 610 nm (100.16 SD ± 0.2 cd/m 2 ) and 660 nm (99.97 SD ± 0.2 cd/m 2 ) were used. The second study was conducted with polychromatic spectra along the Planckian locus with correlated colour temperatures of 10,138 SD ± 22 K (99.83 SD ± 0.2 cd/m 2 ), 4983 SD ± 3 K (100.10 SD ± 0.4 cd/m 2 ) and 2007 SD ± 1 K (100.17 SD ± 0.3 cd/m 2 ). For simplicity, we labelled these spectra as ∼ 10,000 K, ∼ 5000 K and ∼ 2000 K. The polychromatic spectra were optimized using a heuristic multi-objective optimization method (genetic algorithm). On each experimental day, the spectra were measured twenty times using a calibrated Konica Minolta CS2000 spectroradiometer. The spectra are reported in the Supplementary Table S2. Within the experiment, the stimuli were presented in a fully randomized order, each with 300 s adaptation time. The longer adaptation time was intended to capture the pupil light response up to its equilibrium state, ensuring that our model approach had training data for the complete pupil adaptation 8 . Prior to each stimulus, a reference stimulus of 5500 K (199.45 SD ± 0.43 cd/m 2 ) was switched on for 300 s to adapt the pupil back to a baseline. The luminance increment between the anchor and stimulus spectrum was intended to provide a comfortable transition between the chromatic and the phosphor-converted anchor spectrum 25 . Preliminary studies showed that at steady luminance the transition between the anchor and the 450 nm spectrum was uncomfortable for the subjects, leading to increased eye blink rates in phasic pupil data 25 . For comparability, the anchor luminance was preserved in the second study with polychromatic spectra. One test session took 40 min with the chromatic spectra and 30 min with the polychromatic stimuli. The observers fixated the target inside the observation chamber during the whole time to avoid pupil foreshortening error 62 . An instructor checked the gaze position of the participants with real-time gaze tracking. Pupil measurement and pre-processing of the data. The pupil diameter of the left eye was recorded during the whole 300 s adaptation time with an extrinsically and intrinsically calibrated stereo camera system at 120 frames/s from Smart Eye Pro, consisting of two 659 × 494 pixels Basler acA640-120gm cameras and 8 mm lenses. 
Camera calibration was performed with a checkerboard, resulting in an average accuracy of 0.15 mm for edge detection. Prior to each experiment, gaze calibration was conducted with the participants. We removed the blink artefacts from the pupil data with the blink detection algorithm from Smart Eye Pro. All pupil data which had an edge detection accuracy of less than 97 percent were deleted from the dataset. Other non-physiological artefacts were cleaned by using a velocity filter. The pupil data were differentiated numerically and all strong outliers beyond a percentile threshold criterion of 99.993 and 0.007 percent were removed. We linearly interpolated all missing data. The pupil data were smoothed using a Savitzky-Golay filter with a window size of 3000 data points. However, the first three seconds were excluded from the smoothing, to avoid artificially induced minimization of the phasic pupil diameter. The concept of modelling the pupil light response. Our empirical modelling approach of the time- and wavelength-dependent pupil light response aims to reconstruct the pupil diameter using the respective photometric and colourimetric parameters from which it was triggered. There is a direct and an indirect approach to this task. The direct way would be to train a recurrent neural network with measured, empirically collected pupil data d p,meas (t 1 , t 2 , . . . , t n ) for t 1 , t 2 , . . . , t n ∈ R, with one data set for each stimulus condition C. When designing the neural network, the input parameters (features) would be a sequenced abstraction {x i } N i=1 with x i ∈ R of the stimulus spectrum, and the output d p,out (t 1 , t 2 , . . . , t n ) would be the pupil diameter per time unit t . The number of input parameters N could be chosen freely, but its goal is to provide enough information, allowing the neural network to reconstruct the pupil diameter d p,out (t) . For instance, it would be possible to use different combinations of luminance, CIExy-2° chromaticity coordinates and receptor signals as input values {x i } N i=1 . The combination of luminance and CIExy-2° chromaticity coordinates (N = 3) would have the advantage of considerably simplifying the use of the later model, since knowledge of a spectrum is not required to predict the pupil light response d p,out (t) . Usually, sequence-to-sequence recurrent neural network architectures are used for such tasks, but they require a substantial amount of data to achieve the desired accuracy. The accuracy would be limited by the skew of the number of parameters (N, n) between input and output. At a resolution of one second with t ∈ [0, 300] , the neural network output would correspond to 300 pupil diameter values, which need to be determined from three photometric quantities (L, CIExy-2°) as input {x i } N=3 i=1 . Even if the time resolution of the set is halved and the number of input parameters N is increased, a neural network would still have to determine 150 diameter values d p,out (t) from six input values {x i } N=6 i=1 (CIExy-2°, luminance, L-cone, M-cone, S-cone, melanopsin signal). The reconstructed pupil data should not exceed a mean absolute error of ∼ 0.5 mm, since existing L- and M-cone models already predict the pupil diameter caused by polychromatic spectra within such an error range 25 . In today's pupil research applications, a model's prediction error should not exceed ∼ 0.1 mm as cognitive and vision science focuses on smaller diameter margins 63 . 
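The pre-processing chain described at the beginning of this subsection (velocity-based outlier rejection with the quoted percentile thresholds, linear interpolation of missing samples, and Savitzky-Golay smoothing that spares the first seconds) can be sketched in a few lines of Python. The filter polynomial order and the function name are illustrative assumptions rather than the authors' exact implementation.

import numpy as np
from scipy.signal import savgol_filter

def preprocess_pupil_trace(t, d, window_samples=3000, exclude_s=3.0):
    # t: time stamps in seconds, d: pupil diameter in mm with NaN where blinks
    # were removed; a 3000-sample window corresponds to ~25 s at 120 frames/s
    d = np.asarray(d, dtype=float).copy()

    # velocity filter: numerical differentiation, then removal of extreme outliers
    v = np.gradient(d, t)
    lo, hi = np.nanpercentile(v, [0.007, 99.993])
    d[(v < lo) | (v > hi)] = np.nan

    # linear interpolation of all missing samples (blinks, removed outliers)
    valid = ~np.isnan(d)
    d = np.interp(t, t[valid], d[valid])

    # Savitzky-Golay smoothing, skipping the first seconds so that the fast
    # phasic constriction is not artificially flattened
    start = np.searchsorted(t, exclude_s)
    d_smooth = d.copy()
    d_smooth[start:] = savgol_filter(d[start:], window_samples | 1, polyorder=3)
    return d_smooth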
For this reason, we chose an indirect procedure, aiming to reduce the number of output values n from the neural network. We developed a so-called base function F(y 1 , y 2 , . . . , y D ) for y 1 , y 2 , . . . , y D ǫR C to model the measured pupil data d p,meas (t) by varying the model parameters {y i } D i=1 . In this way, the temporal pupil response can be reconstructed by knowing the parameters y D . The primary requirement for the base function is sufficient degrees of freedom D , allowing to reconstruct d p,out (t) = F(y 1 , y 2 , . . . , y D ) from the empirical pupil data d p,meas (t) which is measured in different light spectra conditions C . As measured sample set, we had {d p,meas (t i )} t=300 i=1 for d p,meas (t i )ǫR S×C available. S denotes the number of subjects in each of the seven stimuli conditions C with the spectra types 420 nm, 530 nm, 610 nm, 660 nm, ∼ 2000 K, ∼ 5000 K and ∼ 10,000 K from the intra-and intersubject experiments. For modelling, the median of the subjects {d p,meas (t i )} t=300 i=1 with d p,meas (t i )ǫR C was used. Therefore, the number of subjects S or the performed repetitions in the pupil measurements had no direct effect when training the model. The data sets {d p,meas (t i )} t=300 i=1 were used to model each pupil response with the base the function F(y 1 , y 2 , . . . , y D ) . As a result, by knowing the model parameters for a corresponding stimulus spectrum condition C , the temporal pupil diameter d p,out (t) can be reconstructed with the base function F(y 1 , y 2 , . . . , y D ) . The idea is that each temporal median pupil data set {d p,meas (t i )} t=300 i=1 from the light conditions C receives its own model parameters With such an approach, it is no longer necessary to find a direct relationship between associated stimulus quantities {x i } N i=1 and pupil data per time unit d p,meas (t) . The indirect approach predicts the model parameters using a neural network to insert them into the base function F(y 1 , y 2 , . . . , y D ) . Thus, the number of output parameters of the neural network is defined by the degrees of freedom D of the base function F . However, the degree of freedom D from the base function F must be sufficient enough to model the measured wave-and time-dependent pupil responses d p,meas (t) ( Fig. 1A-C). Wavelength-dependent pupil adaptation in the collected train data. The pupil's wavelengthdependent adaptation behaviour is essential for a time-dependent model and must be covered in the train data {d p,meas (t i )} t=300 i=1 . Therefore, we analysed whether the wavelength-dependent temporal behaviour of the afferent pupil path is catched in our data. Using the mean of the pupil diameter μ(t) 450nm as a reference and subtracting it from the other mean values μ(t) 530nm , μ(t) 610nm , μ(t) 660nm , the adaptation behaviour can be related to each other ( Fig. 2A, B). In the intersubject experiment, the comparison of the mean differences showed that the equilibrium state for the spectra 610 nm and 660 nm is reached at 90 s. It takes about 20 s for the 530 nm spectrum ( Fig. 2A). The intrasubject experiment showed a more characteristic spectral adaptation behaviour (Fig. 2B). At 610 and 660 nm, the equilibrium status is reached at about 120 s and 530 nm after approximately 10 s. To assess the adaptation response from polychromatic spectra, we used the mean pupil diameter μ(t) 10,000K as a reference. In the intersubject experiment at ∼ 2000 K, the adaptation process is completed after 30 s. 
In the trial with the individual subject, the steady state is reached after 60 s with the ∼ 2000 K stimulus. At ∼ 5000 K, there is no clear chromatic adaptation either in the individual or in the multiple subject examination because 5700 K was used as pre-stimulus. Thus, the adaptation mechanism is covered in the data and can be considered in the proposed model. The consequence of the measured time- and wavelength-dependent pupil light response {d p,meas (t i )} t=300 i=1 is that it needs to be categorized into a phasic and a tonic section, each with the different characteristics discussed. These sections were used to break down the base function F into two "child" functions before fusing them into a combined model d pM (t). The phasic pupil light response represents the constriction of the pupil after a specific latency time τ from the starting point d p,meas (t 1 , λ) to the peak pupil diameter d p,meas (t Peak , λ) until the beginning of dilatation, with t 1 ≤ t ≤ t d,start (Fig. 1C). In our data, this process takes place approximately in the first two seconds ( t d,start ≈ 2 s ). In the tonic section t d,start ≤ t ≤ t eq , the pupil adapts to the stimulus itself under a sustained light stimulus until a state of equilibrium t eq is reached. The velocity and gradient of adaptation up to the equilibrium state vary significantly with the spectral distribution x(λ) . This tonic time t eq is defined in our data as 300 s since we measured the pupil diameter in this time window. Using the initial pupil diameter to reconstruct the temporal pupil light response. When predicting or reconstructing the pupil response in time, the initial pupil diameter d p,meas (t 1 , x(λ)) is necessary as a starting point. The initial point should preferably be independent of the spectrum, meaning d p,meas (t 1 , x(λ)) ≈ d p,meas (t 1 , L), to facilitate the prediction of the starting position. This would allow the prediction of this pupil diameter d p,meas (t 1 , L) with a classical L- and M-cone based pupil model. For this purpose, we statistically checked in our data whether the initial pupil diameter is significantly affected by the spectrum x(λ) (Fig. 2C, D). According to graphical inspection with a quantile-quantile plot, normally distributed data can be assumed in both inter- and intrasubject experiments. The Mauchly test revealed for the intersubject examination that the assumption of sphericity had been met (p = 0.6 > 0.05). Therefore, a correction of the degrees of freedom is not needed. According to the repeated-measures ANOVA, there is no significant difference (F(6, 66) = 0.85, p = 0.537 > 0.05) in the initial pupil diameter between the used spectra for the multiple subject trial (Fig. 2C). Within the data from the individual subject, the Mauchly test showed that the assumption of sphericity had been met (p = 0.41 > 0.05) (Fig. 2D). The results from the repeated-measures ANOVA showed that the initial pupil diameter is not affected by the type of the spectrum (F(6, 66) = 6.23 · 10 −2 , p = 0.999 > 0.05). Due to the latency of the pupil and the usage of a constant anchor spectrum, the initial pupil diameter always results from the pre-stimulus at 5700 K. The randomized order of the experiments did not significantly affect the initial pupil diameter, and we can assume in the following d p,meas (t 1 , x(λ)) ≈ d p,meas (t 1 , L) . A wavelength dependence of the initial pupil diameter would have indicated that the anchor pre-stimulus was not presented long enough to adapt the pupil back to its baseline. 
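The sphericity check and the repeated-measures ANOVA reported above can be reproduced on a long-format table of initial pupil diameters, for example with the pingouin package in Python. The file name and column names below are hypothetical placeholders, not the authors' actual data layout.

import pandas as pd
import pingouin as pg

# long-format table with one row per subject and spectrum:
# assumed columns "subject", "spectrum", "d_init" (initial pupil diameter in mm)
df = pd.read_csv("initial_pupil_diameters.csv")

# Mauchly's test of sphericity for the within-subject factor "spectrum"
spher, W, chi2, dof, pval = pg.sphericity(df, dv="d_init", within="spectrum", subject="subject")
print(f"Mauchly W = {W:.3f}, p = {pval:.3f}")

# one-way repeated-measures ANOVA on the initial pupil diameter
aov = pg.rm_anova(data=df, dv="d_init", within="spectrum", subject="subject", detailed=True)
print(aov[["Source", "F", "p-unc"]])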
Developing the base functions to model the phasic and tonic pupil light response. There are different time-variant function proposals for the phasic pupil light reflex from the research areas of biomechanics and control engineering. The pupil response is treated as a time-dependent control loop or mechanical feedback system. With such functions, the phasic pupil course can be reconstructed with corresponding characteristics of the constriction velocity and constriction peak. Unlike the classical L- and M-cone based pupil models, the time-variant function proposals have not been developed with comprehensive empirical data. A valid prediction of the absolute pupil diameter as a function of an arbitrary intensity magnitude or light spectrum x(λ) is not possible without extensive modification. The function proposals that describe the pupil light reflex as a control system are so-called black-box approaches, which do not provide information about the internal mechanisms of the pupil behaviour 64 . In 1957, Stark et al. 65 described the pupil light reflex as a servomechanical control system with a delayed linear differential equation of third order. Subsequent work has extended the control loop 66,67 by using other non-linear differential equations, to create a generalized description of the phasic pupil response [68][69][70][71][72] . Although the proposed control systems describe the behaviour of the phasic pupil light reflex systematically, the transfer functions are not intended to be converted into a closed equation 73 . In their presently proposed form, the functions cannot be used to calculate the pupil diameter as a function of an intensity quantity or spectrum x(λ) . Furthermore, they do not provide insight into the actual physiological processes of iris muscle activity caused by the parasympathetic and sympathetic nervous system 73 . Biomechanical approaches break down the pupil light reflex dependencies into individual components, creating functions of the physiological subprocesses for an overall function. In the work of Longtin and Milton 74 , it is discussed that a biomechanical pupil function should include the neuronal feedback control mechanism, spontaneous pupil changes from the autonomic nervous system and the regular oscillation of the pupil 75 . Longtin and Milton 74 modelled the rate of action potentials in the receptors as a function of luminous flux and then built an equation to describe the efferent signal from the Edinger-Westphal nucleus to the pupil's muscles. The relationship between pupil muscle activity and the resulting pupil area is derived using the Hill function. A generalized retarded non-linear differential equation is proposed to describe the temporal pupil area as a function of luminous flux. The model parameters of the differential equation depend on muscle activity in the iris. Pamplona et al. 55 took this approach and determined the missing constants with the available pupil data from Moon and Spencer 44 . As a result, the function of Longtin and Milton was combined with the model of Moon and Spencer to predict the phasic pupil light reflex as a function of luminance. The resulting model did not consider the fact that Moon and Spencer measured the tonic pupil diameter. Furthermore, the adaptation phase's spectral dependence cannot be adequately integrated due to the proposed function's low degrees of freedom. 
The consequence would be a derivation and adaptation of the entire equation for each stimulus condition C in the pupil data {d p,meas (t i )} t=300 i=1 . Usui and Hirata 64 have created a biomechanical pupil function based on iris muscle activity. The constrictor and dilatation muscle are mechanically considered as elastic viscous elements. The equation could be adapted to study data and represent the activity of the autonomic nervous system. However, with a total of 19 differential equations, the entire pupil equation is relatively extensive 73 . Even when the equations are combined, the model still consists of three independent second order delayed differential equations 73 . A simplified time-variant pupil function was developed by Fan and Yao 73 with a single delayed differential equation of second degree (Eq. 1). For this purpose, the two iris muscles were modelled separately as viscoelastic materials. The constriction and dilation path were considered separately with the time-dependent muscle forces ḟ p (t) and f s (t). K c and K d are the elasticity constants of the constriction and dilatation muscle in the iris. L 0d and l 0c define the length of the iris muscles, D the viscosity constant and P 0 the static iris force at resting. The temporal pupil diameter d Phasic (t) is mainly determined by the time-dependent iris muscle force functions ḟ p (t) and f s (t). In Eqs. (2) and (3) f s0 , f p0 are the static iris muscle forces. τ p and τ s define the latency until the respective muscle activity is triggered. The parameters t p and t s represent the duration of the parasympathetic and sympathetic modulation. We decided to use the function of Fan and Yao 73 to model the phasic pupillary reflex since it combines enough degrees of freedom to fit {d p,meas (t i )} t=300 i=1 in any condition of C by changing the model parameters X p,Ph = [ḟ p (t), f (t) s , P 0 , τ p , τ s , �t p , �t s ] . The values X k,Ph = [L 0d , l 0c , K d , K c , D] are stimulus independent iris muscle parameters and needs to be calculated once. Coming back to the discussed concept of the neural network, the model parameters X p,Ph ǫR D1 are the first half of values that need to be predicted from the stimulus quantities {x i } N i=1 . However, to solve the differential equation numerically, the initial pupil diameter r(0) = d p,meas (t 1 , L) must be known. In the previous section, we showed that d p,meas (t 1 , L) is statistically independent of the used spectrum x( ) and resulted from the anchor stimulus. Therefore, we used classical L-and M-cone-based pupil models to predict the starting point d p,meas (t 1 , L) . A recent work showed that these models could predict the static equilibrium pupil diameter for white light along the Planckian locus with acceptable prediction errors 25 . We assume that no chromatic stimuli were used as reference light for adaptation, which would also be unusual. The unified model of Watson and Yellot 47 in Eqs. (4) and (5) was chosen to predict d p0 (t 1 , L, α, e) = d p,meas (t 1 , L) , because this function was reported as most valuable compared to other L-and M-Cone models 25 . In the model by Watson and Yellot, the pupil diameter is determined with the parameters L as luminance, α as viewing angle in deg 2 of the stimulus area and y as the age of a subject. The reference age y 0 is a constant defined by 28.58 years. 
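For orientation, a small Python sketch of the unified formula (Eqs. 4 and 5) used to obtain the starting diameter is given below. The Stanley-Davies term, the age-correction slope and the monocular attenuation factor are written down from memory of Watson and Yellott's publication and are therefore assumptions that should be verified against the original paper before quantitative use; only the reference age of 28.58 years is taken directly from the text above.

def stanley_davies(L, a):
    # Stanley-Davies pupil diameter (mm) from luminance L (cd/m^2) and
    # adaptation field size a (deg^2); coefficients quoted from memory
    F = (L * a / 846.0) ** 0.41
    return 7.75 - 5.75 * (F / (F + 2.0))

def watson_yellott(L, a, y, y0=28.58, monocular=False):
    # Unified pupil model: Stanley-Davies diameter plus a linear age correction;
    # the age slope and the monocular factor (~0.1) are assumptions
    M = 0.1 if monocular else 1.0
    d_sd = stanley_davies(L * M, a)
    return d_sd + (y - y0) * (0.02132 - 0.009562 * d_sd)

# Example with values reported in this study: anchor luminance ~199.45 cd/m^2,
# a 53.1 deg field (~2820 deg^2 when treated as a square area) and a mean age of ~22 years
d0 = watson_yellott(L=199.45, a=53.1 ** 2, y=22.0)
print(f"predicted starting diameter d_p(0) = {d0:.2f} mm (before the offset correction described below)")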
With such a starting point, the Fan and Yao function is able to fit the temporal phasic pupil diameter d p,meas (t) for t 1 ≤ t ≤ t d,start well for the different stimulus conditions C but fails to describe the tonic pupil response at t d,start < t ≤ t eq . The function oscillates for larger time periods, which is not able to describe the wavelength dependent tonic adaptation behaviour ( Fig. 2A, B). Therefore, we take a separate function for the tonic pupil response. We found that a ninth-degree polynomial (Eq. 6) showed appropriate conditions to be considered as a tonic function. It was able to represent any tonic pupil response for each condition C in an automated fitting algorithm. Especially the extreme case where the pupil diameter at short wavelengths is particularly early in equilibrium compared to long wavelengths was covered with this function. The parameters of the masking functions q and r determine the position and transition behaviour between the two functions d Phasic (t, X k,Ph , X p,Ph ) and d Tonic (t, X p,Ton ) . These parameters need to be determined only once and are independent of the pupil data. The resulting base function d pM (Eq. 9) can fit the time-dependent pupil data {d p,meas (t i )} t=300 i=1 for d p,meas (t i )ǫR C from any experimental measurement condition C and reconstruct it with the respective stimulus-dependent model parameters X p = [X p,Ph , X p,Ton ] for X p ǫR CxD . Thus, the temporal pupil light response can be replicated with time-independent model parameters {X p,i } D i=1 in each stimulus condition C . The other model parameters q, r and X k,Ph can be considered as constants when the function is fitted to {d p,meas (t i )} t=300 i=1 in the different stimulus conditions C . The combined model (Eq. 9) with the tonic (Eq. 6) and phasic (Eq. 1) function were implemented in MathWorks MATLAB, which is available as an open-source project. Computing the model parameters of the phasic and tonic pupil functions. The base function d pM (t, q, r, X k,Ph , X p ) was used to fit the measured pupil response data {d p,meas (t i )} t=300 i=1 in each stimulus conditions C with the spectra 420 nm, 530 nm, 610 nm, 660 nm, ∼ 2000 K, ∼ 5000 K and ∼ 10,000 K. This procedure was performed for both the inter-and intrasubject experiment. The results for the intrasubject experiment are reported in the Supplementary Information. We varied the model parameters X p and solved the differential equation numerically by using an ode45 solver, to fit the pupil data. The stimulus independent parameters q, r and X k,Ph were determined only once and kept constant for all light conditions to reduce the number of wavelength-dependent parameters. As stated, we calculated the initial pupil diameter d p0 (t 1 , L, α, e) with the Watson and Yellot model, using it as a solving condition for the numerical solution of the differential equation. Due to the delayed pupil light response, the anchor spectrum caused the initial pupil diameter. Therefore, the luminance of the anchor spectrum (199.45 cd/m 2 ) was set into the Watson and Yellot model. As age parameter, we took the mean value of our sample from the polychromatic (n: 20 However, the measured average initial pupil diameter across all subjects and conditions was 2.38 mm in the dataset. Therefore, an offset correction of 0.41 mm was performed for matching the prediction. 
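As an illustration of how the base function d_pM is assembled from Eqs. (6)-(9), the following sketch blends a numerically solved phasic trace with the tonic polynomial via the two tanh masking functions. The phasic solution is assumed to be available as an array (e.g., from an ODE solver applied to Eq. (1)), the polynomial coefficient ordering is an assumption, and the argument grouping of the masks follows the equations as printed below; the released MATLAB implementation should be consulted for the authors' exact definitions.

import numpy as np

def f_masc1(t, q, r):
    # Eq. (7): fades the phasic solution out around the phasic/tonic transition
    return 1.0 - (0.5 + 0.5 * np.tanh(t - q / r))

def f_masc2(t, q, r):
    # Eq. (8): fades the tonic polynomial in
    return 0.5 + 0.5 * np.tanh(t - q / r)

def d_tonic(t, coeffs_tonic):
    # Eq. (6): ninth-degree polynomial; coeffs_tonic holds the ten tonic
    # parameters X_p,Ton (highest order first, as expected by np.polyval)
    return np.polyval(coeffs_tonic, t)

def d_combined(t, d_phasic, coeffs_tonic, q=1.1359, r=0.3517):
    # Eq. (9): combined base function d_pM(t); d_phasic is the numerical
    # solution of the Fan-and-Yao differential equation evaluated on t
    return d_phasic * f_masc1(t, q, r) + d_tonic(t, coeffs_tonic) * f_masc2(t, q, r)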
The prediction difference is partly due to the fact that our spectrum was generated with a multi-channel LED light whose spectrum differs from the thermal radiators used to develop the Watson and Yellot model. Such an approach was used in a recent publication to adapt classical L- and M-cone based models to pupil data caused by chromatic and polychromatic LED spectra 25 . The offset-corrected prediction of the Watson and Yellot model was used as r(0) in Eq. (1). We programmed a graphical user interface in MathWorks MATLAB to fit the differential equation to the median of the measured pupil data {d p,meas (t i )} t=300 i=1 . The software made it possible to change the model parameters X p and visualize the solution of the differential equation d pM (t, q, r, X k,Ph , X p ) (Supplementary Fig. S1) for each lighting condition. We have stored the measured pupil raw data with the calculated model parameters (Table 1) for each condition in the available software (see Supplementary Information). The parameters of the masking functions, q = 1.1359 and r = 0.3517, were determined manually with the programmed graphical user interface (Supplementary Fig. S1):

(7) f_Masc1(t, q, r) = 1 − (0.5 + 0.5 · tanh(t − q/r))
(8) f_Masc2(t, q, r) = 0.5 + (0.5 · tanh(t − q/r))
(9) d_pM(t, . . .) = d_Phasic(t, X_k,Ph , X_p,Ph) · f_Masc1(t, q, r) + d_Tonic(t, X_p,Ton) · f_Masc2(t, q, r)

During the adjustment, we ensured a smooth transition between the phasic and tonic functions in all lighting conditions. As a result of this approach, 17 dependent and seven constant values represent the temporal pupil light response for each light condition. Using the base function d pM (t, q, r, X k,Ph , X p ) , we have reduced the feature set from 300 pupil diameter values (1 s resolution) to 17 model parameters X p . Thus, by combining a neural network with the base function, the time- and wavelength-dependent pupil diameter can be reconstructed by predicting X p from the stimulus quantities. Linking stimulus quantities with model parameters through a neural network. The knowledge of the model parameters X p alone is not advantageous because the connection to the stimulus characteristics in each condition C is missing. Therefore, we used the calculated stimulus-dependent parameters X p of the base function to train a neural network with photometric, colourimetric or receptor signals as input parameters (Table 1). We aimed to establish a link between the model parameters X p of the base function and the stimulus quantities {x i } N i=1 . Ideally, this would ensure that after the input of stimulus values from a stimulus condition, such as luminance and CIExy-2° chromaticity points, the respective model parameters X p from Table 1 could be predicted through the neural network. The reconstruction of the temporal pupil light reflex d p,out (t 1 , t 2 , . . . , t n ) would be possible by solving the base function d pM (t, q, r, X k,Ph , X p ) with the predicted values X p from the neural network. At first, we need to determine which combination of stimulus features makes sense as input parameters to the neural network. We trained three variants of feedforward neural networks, each with different input combinations. From the measured stimulus spectra x(λ) , we calculated the photometric, colorimetric and receptor-based quantities and used the mean stimulus values (Table 2) for training.

Table 2. Metrics that were used as features for the neural network. The features were calculated from the repeatedly measured spectra in the pupil examinations 25 . The values are given with standard deviation in the table, but for training the neural network, the mean values were used. On each study day, stimuli were measured twenty times with a Konica Minolta CS-2000 spectroradiometer. S-cone, M-cone, L-cone and ipRGC excitation were calculated with the 10-deg cone fundamentals and the melanopic action spectra reported in CIE S 026/E:2018. The cone and ipRGC excitation values are specified as α-opic radiance in W/m 2 sr.

We used the input parameters luminance and the CIExy-2° chromaticity points for the neural network's first variant {x v1,i } N=3 i=1 . Variant two {x v2,i } N=4 i=1 was trained with the L-, M-, S-cone and the melanopsin signals. The luminance, CIExy-2° chromaticity points and the melanopsin signal were used in the third variant {x v3,i } N=4 i=1 . The training data sets were normalized with the unity-based normalization X i = (X i − X Min )/(X Max − X Min ) before the training was conducted. The neural networks were trained and implemented using PyTorch 1.5 with PyTorch Lightning 76 in Python 3. We trained the model by minimizing the mean squared error MSE = (1/N) · Σ i (y i − y 0i ) 2 between the output of the neural network y i and the target model parameters y 0i (Table 1, Supplementary Table S1). The weightings were optimized using an Adam optimizer 77 , with a learning rate of 0.001 and a batch size of 7. We used three fully connected hidden layers (40, 380, 80) with a rectified linear unit (ReLU) activation function. The number of neurons of the input layer corresponded to the number of input parameters N (Variant 1: 3, Variant 2: 4, Variant 3: 4) and the number of neurons of the output layer was 17. Three fully connected hidden layers were used with 40, 380 and 80 neurons, respectively. The neural networks were trained for 4000 epochs (Supplementary Fig. S2) by using the calculated model parameters X p ∈ R C with C as stimulus conditions. For each variant, two neural network versions were trained: one based on the intersubject parameters (Table 1) and the second with the intrasubject parameters (Supplementary Table S1). The training process over the epochs is reported in Supplementary Fig. S2. Results The deep learning-driven pupil model approach. The structure of the overall model proposal to reconstruct the time-dependent pupil response d p,out (t 1 , t 2 , . . . , t n ) with a neural network as a data-driven component is summarized in Fig. 3. After the neural networks have been trained (Variants 1 to 3) with the corresponding data sets (Table 1, Supplementary S1, S2), they are able to output the model parameters of the tonic X p,Ton and phasic X p,Ph functions from photometric or receptor-based quantities x v1 , x v2 and x v3 (Fig. 3: Step 1). The next step in the model is to determine the initial pupil diameter d p0 (t 1 , L, α, e) with the Watson and Yellot model (Fig. 3: Step 2). It is inserted as an initial state d p (0), together with the predicted model parameters of the neural network (Fig. 3: Step 1), into the second-order differential equation d Phasic (t, X k,Ph , X p,Ph ) and solved numerically to reconstruct the phasic pupil light response. The second part of the predicted model parameters, X p,Ton , from the neural network is applied to the tonic model d Tonic (t, X p,Ton ) to reconstruct the pupil course from the peak pupil diameter to the equilibrium state (Fig. 3: Step 4). 
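A minimal plain-PyTorch sketch of the described network is given below (the authors used PyTorch Lightning): N normalized stimulus features are mapped through the 40-380-80 hidden layers to the 17 base-function parameters, trained with the Adam optimizer and an MSE loss. The tensor shapes and the training loop are illustrative assumptions; unity-based normalization of inputs and targets is assumed to have been applied beforehand.

import torch
from torch import nn

class ParameterNet(nn.Module):
    # feed-forward network mapping N stimulus features to the 17 model parameters X_p
    def __init__(self, n_features: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 40), nn.ReLU(),
            nn.Linear(40, 380), nn.ReLU(),
            nn.Linear(380, 80), nn.ReLU(),
            nn.Linear(80, 17),
        )

    def forward(self, x):
        return self.net(x)

def train(x: torch.Tensor, y: torch.Tensor, epochs: int = 4000):
    # x: (7, N) normalized stimulus features, y: (7, 17) fitted base-function parameters;
    # with only seven stimulus conditions, one batch covers the whole training set (batch size 7)
    model = ParameterNet(x.shape[1])
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
    return model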
This part is particularly important for mapping the wavelength- and time-dependent adaptation of the pupil control path (Fig. 1, 2). In the last step, the predictions from the phasic and tonic models are combined by the masking functions (Eqs. 7, 8) according to the combined model equation (Eq. 9), to obtain the total reconstructed pupil response up to 300 s. Thus, the entire time course of the pupil light response can be determined by using photometric or receptor-based quantities. In this overall system, the neural networks represent the data-driven component. The structure in Fig. 3 is embedded in an algorithm in MathWorks MATLAB and Python, allowing the complete temporal pupil response to be returned from the respective stimulus quantities. Reconstructing the temporal pupil light response with the proposed model approach. We used the discussed structure of the proposed pupil model approach (Fig. 3) and the trained neural networks to perform a direct comparison between the measured pupil diameter from the intersubject experiments and the predicted, reconstructed pupil response. Figure 4 (A-G) shows the measured median pupil diameter and the predicted pupil response (Variant 1) for each lighting condition. The median pupil diameter is plotted with the respective percentile range of the raw data. The mean absolute error (MAE) between measured and predicted pupil diameter is between 0.015 mm and 0.069 mm for chromatic and polychromatic stimuli. The residuals analysis showed that for each variant of a neural network, the prediction error of the proposed concept is below ± 0.3 mm (Fig. 4H). At most times, the error is even less than ± 0.2 mm. Only with the stimulus of λ Peak = 610 nm is an excursion of up to -0.3 mm prediction error observed between 240 and 250 s, which is due to fluctuations of the median diameter (Fig. 4C). The same analysis was performed for the combined model trained with the intrasubject data sets, showing that the error was even smaller than for the intersubject data, due to the lower fluctuation of the median diameter (Supplementary Fig. S3). As a comparison to our model concept, we calculated the residuals of the classical L- and M-cone based pupil model by Watson and Yellot in relation to the measured median diameter (Fig. 4I). The prediction of the Watson and Yellot model had an absolute prediction error of greater than 0.6 mm for the phasic pupil diameter. For the tonic pupil diameter, the error increases to 1.14 mm due to the time- and wavelength-dependent receptor weighting of the pupil path (Fig. 4I), showing that the inaccuracy of the L- and M-cone based pupil model is not only caused by the lack of melanopsin weighting. Discussion The key idea of this work is to model the temporal pupil light response for different stimuli through a time-variant biomechanical differential equation and predict its model parameters using a deep learning approach. We showed that the concept works well for both chromatic and polychromatic spectra with a mean absolute error of less than 0.1 mm across the 300 s of the pupil's time course. The trained neural networks were able to find a pattern between the light parameter features and the model parameters successfully. All input parameter combinations x v1 , x v2 , and x v3 achieved a loss that would allow their usage in the proposed combined model. 
Furthermore, the fusion of the combined model with neural networks revealed that with all three light-metric feature combinations, the residuals were in a range of ± 0.2 mm. Similar results were obtained with the intrasubject dataset, indicating the validity of the proposed pupil modelling concept. Specifically, the first input variant x v1 could make a simplified application possible 78 since only the CIExy-2° chromaticity points and the luminance of a stimulus are necessary for determining the base function's model parameters and reconstructing the temporal pupil light response. Compared to the previously published models from Holladay 42 to Rao et al. 48 , we additionally took into account the temporal, spectral receptor weighting of the afferent pupil control path. We can predict the pupil's spectrally dependent phasic and tonic time course up to 300 s adaptation time, which outperforms previous approaches. Additionally, the combined model is non-parametric, meaning a continuous extension of the prediction space through upgrades of the underlying database is possible without changing the basic structure. Analysis of the residuals from the Watson and Yellot function (Fig. 4I) showed that in pupil modelling the spectral dependence needs to be considered together with the temporal behaviour. The adaptive weighting of the ipRGCs leads to different tonic pupil response patterns depending on the stimulus spectra. Therefore, previous approaches are currently reaching their limits and cannot be extended to solve the issue of pupil modelling. Note that the neural networks' input values are used to support the pattern recognition between the input features and the predicted model parameters of the basis function. At the moment, our input parameters are used for classifying the respective stimulus spectrum without considering external study-dependent parameters such as the adaptation field size α . For instance, we used the CIExy-2° coordinates although the adaptation field size in our setup corresponded to a visual angle of 53.1°. Suppose the neural network should also manage the pupil's relationship between different adaptation field sizes. In that case, it makes more sense to use a separate parameter α as input to the model in the future. A simultaneous change of the CIExy observer is not needed, because the chromaticity point features are only intended for specifying the stimulus itself without considering the adaptation field size. Thus, each input feature should have its own task of identifying a stimulus or experimental condition modality. However, it will be interesting to see how the currently used input parameters behave when using pupil data caused by metameric stimuli, i.e. different spectra with the same chromaticity points. We assume that in such a case, in addition to luminance and CIExy-2° coordinates, the melanopsin signal needs to be integrated as an input (input variant x v3 ) for characterizing the stimulus. Our proposed combined model is currently based on the temporal pupil light response of seven different spectra at a constant luminance, which is insufficient for a finalized pupil light response model. When focusing on the future perspective of our approach, it is necessary to train the neural networks with an additional amount of temporal pupil data, ensuring continuous development of the stimulus modalities' prediction space. With sufficient training data, it should be possible to reconstruct the temporal pupil light response even for stimulus metrics that are explicitly not present in the training data. 
However, considering the number of parameters that influence the pupil's control path, the data collection must be prioritized. In our view, the next step is to collect data on the pupil light response to fully model the behaviour with varying luminance and spectral power distributions by using the silent substitution technique 79 . For this purpose, the parameters of the anchor's luminance, the anchor's spectrum and the exposure time of the main stimulus should not be varied, as this leads to additional influencing parameters, impairing the training result of the neural network. As the next important step, we consider the modelling of the exposure time, which would require a similar experimental protocol but with different adaptation times of the main stimulus. Due to the non-parametric model approach, the adaptation time could be mapped to the neural network as an additional input parameter, if sufficient training data is available. In the same way, other influencing parameters such as the adaptation field size α or cognitive effects could be increasingly incorporated into the combined model to approach a comprehensive pupil behaviour description with new data dependency layers. A weakness of the proposed model is the integrated polynomial equation for describing the tonic pupil behaviour. The tonic function alone requires ten input parameters, which need to be predicted by the neural network. In principle, this has not led to any disadvantage in reconstructing the temporal pupil light response. However, this approach is not elegant, making an alternative function with a smaller number of parameters preferable. This is an open issue which we need to address in an upcoming work. Furthermore, we currently assume a static reference spectrum (anchor) as an adaptation in our proposed model. If one wants to model the temporal pupil light reflex relating to different anchor spectra, it is not sufficient to change the starting point d p (0) of the pupil course with the Watson and Yellot component (Fig. 3: Step 2). Although the Watson and Yellot model determines the starting point d p (0) of the pupil's course, a change in the reference spectrum or luminance also means that the entire pupil light response could be different, affecting the tonic X p,Ton and phasic X p,Ph model parameters. In fact, for modelling the relationship between different adaptation spectra and the pupil light response from a main stimulus, the combined model additionally needs an adaptation input in the neural network. In general, one must consider that a higher number of input parameters in the neural network leads to a more robust prediction for additional dependencies, but simultaneously to a more complex application of the model, because more parameters have to be entered. In future, only the neural network's input count needs to be changed if more dependencies should be modelled, since the base function has a sufficient degree of freedom for describing any temporal pupil response. The research applications in the field of pupillometry are highly interdisciplinary [80][81][82][83][84][85][86][87][88] across species 89 , covering the topics of clinical diagnostics 41,[90][91][92][93][94][95] , cognitive science [96][97][98][99][100][101][102][103] , neuroscience 104 , vision science 105,106 , the autonomic nervous system [107][108][109] and quantification of the circadian photoentrainment 39,[110][111][112][113] . 
A reliable data-driven pupil model that integrates the findings of past years could also be an essential step forward for these research areas. However, individual research groups will not be able to model the pupil behaviour's cognitive and light-induced dependencies alone, so the focus should, in our view, be on a non-parametric data-driven approach 114 . Therefore, in future works, we will connect the current combined model with a publicly accessible pupil database, achieving an automated self-maintenance of the neural networks as the database grows. The entire code and neural networks are provided with this manuscript so that this concept could become a door-opener to an overall model of the light- and cognition-induced pupil dependencies. Data availability The training data, graphical toolbox and the implemented pupil model with the respective neural networks are available at the main author's GitHub page: https://github.com/BZandi/DL-PupilModel . Received: 20 July 2020; Accepted: 11 December 2020 Author contributions B.Z. had the initial idea of the model structure. B.Z. and T.Q.K. worked out the concept of the model approach. B.Z. wrote the manuscript, created the figures, did the data analysis, implemented the formulas in MATLAB and built the neural networks in Python. B.Z. programmed the graphical user interface which was used to obtain the model parameters. B.Z. and T.Q.K. revised the manuscript. All authors have read the manuscript. Funding Open Access funding enabled and organized by Projekt DEAL.
13,141.8
2021-01-12T00:00:00.000
[ "Physics", "Computer Science" ]
Development of degradable pre-formed particle gel (DPPG) as temporary plugging agent for petroleum drilling and production Temporary plugging agent (TPA) is widely used in many fields of petroleum reservoir drilling and production, such as temporary plugging while drilling and petroleum well stimulation by diverting in acidizing or fracturing operations. The commonly used TPA mainly includes hard particles, fibers, gels, and composite systems. However, current particles have many limitations in applications, such as insufficient plugging strength and slow degradation rate. In this paper, a degradable pre-formed particle gel (DPPG) was developed. Experimental results show that the DPPG has an excellent static swelling effect and self-degradation performance. With a decrease in the concentration of total monomers or cross-linker, the swelling volume of the synthesized DPPG gradually increases. However, the entire self-degradation time gradually decreases. The increase in 2-acrylamide-2-methylpropanesulfonic acid (AMPS) in the DPPG composition can significantly increase its swelling ratio and shorten the self-degradation time. Moreover, DPPG has excellent high-temperature resistance (150 °C) and high-salinity resistance (200,000 mg/L NaCl). Core displacement results show that the DPPG has a perfect plugging effect in the porous media (the plugging pressure gradient was as high as 21.12 MPa), and the damage to the formation after degradation is incredibly minor. Therefore, the DPPG can be used as an up-and-coming TPA in oil fields. Introduction In recent years, temporary plugging technology has been widely used in various fields of petroleum reservoir drilling and production (Kang et al. 2014;Xiong et al. 2018;Zhang et al. 2019a). The design idea of temporary plugging technology is to use plugging agents (usually chemical agents) to plug the flow channels (i.e., layers with higher permeability) (Li et al. 2019;Liu et al. 2018). However, in the subsequent oil and gas production process, it is hoped that the agents sealing in the layers can be effectively removed or self-recovered (Zhang et al. 2020a). In this way, the oil and gas flow channels can be further increased or expanded, and more crude oil and natural gas can be produced. For example, the use of temporary plugging technology during drilling can shield and protect oil and gas reservoirs during drilling. It can avoid damaging the permeability of the reservoir and ensure that oil and gas can flow easily to the wellbore and be produced. Besides, in the process of oil and gas reservoir developement, if the reservoir permeability is low, stimulation technologies such as acidification and fracturing of the formation are required (Zhang et al. 2020a, b). Temporary plugging technology can be used to plug the treated layers so that subsequent treatment agents (e.g., fracturing fluid or acid, etc.) can be diverted into other untreated layers (Xue et al. 2015). In this way, the reservoir conformance can be improved, and thereby increasing oil and gas production (Jia et al. 2020). Take the multi-stage fracturing process as an example, as shown in Fig. 1. If the temporary plugging technology is used in multiple fracturing processes, it can divert the subsequent injection of high-energy fracturing fluids to form a more effective and complex fracture network Yuan et al. 2020). Therefore, the flow conductivity and oil drainage area can be improved to increase the single well production. 
Temporary plugging agent (TPA) differs from conventional plugging materials in that it can dissolve in the formation water or fracturing fluids, or it can self-degrade after the diverting operation is completed. Therefore, it causes little damage to the formation. After years of development, there are many types of temporary plugging agents, mainly divided into the following four types: granular TPA (Shi et al. 2020), fiber-type TPA (Zhang et al. 2019b, c), gel-type TPA (Nasiri et al. 2018; Zhao et al. 2016), and compound TPA (Liu et al. 2020). Non-deformable granular TPA includes water-soluble inorganic salt particles (e.g., calcium carbonate), oil-soluble particles (e.g., wax balls and resin), and temperature-sensitive degradable particles (e.g., polylactic acid). These mainly rely on the mechanism of particle bridging to accumulate in the fractures or pore throats and form temporary plugging layers (Allison et al. 2011). However, the granular TPA is generally made of inorganic salt particles with high compressive strength and a low crushing rate. Therefore, the inability to deform causes this type of TPA to have limited adaptability and requires manual selection, which increases the difficulty of oilfield operations. Gel-type TPA mainly refers to water-soluble polymer gels. The gel-type TPA can generate a glue liquid named gelant on the ground. After the gelant is injected into the reservoir, it will undergo a cross-linking reaction to form a temporary plugging layer and seal the fractures. Then, a chase gel breaker is injected to react with the polymer gel, and the gel is finally degraded into a low-viscosity liquid, reducing damage to the reservoir (Wang et al. 2019a). The fiber-type TPA realizes the plugging effect mainly through the three processes of capturing, bridging, and compacting (Li et al. 2019). After plugging treatments, the fiber-type TPA can be completely dissolved in water or residual acid, which can protect the reservoir from damage. Conventional non-deformable granular TPA can be subdivided into water-soluble or acid-soluble inorganic salt particles, oil-soluble particles, and temperature-sensitive degradable particles. These particles mainly rely on the particle bridge plugging effect to form temporary plugging layers at fractures or pore throats and follow the plugging requirement that the particle size should be larger than 1/3-1/2 of the pore throat size (Cargnel and Luzardo 1999). Huo (2009) used oil-soluble resins with different softening points to synthesize a new type of oil-soluble temporary plugging agent. The temporary plugging efficiency was more than 94%, and the permeability recovery percentage in oil was more than 90%, which can meet most fracturing operation requirements. Jiang and Mu (2006) experimentally investigated the thermal stability, compatibility with crude oil, pressure resistance, dissolution rate, plugging, and backflow properties of wax plugging agents. The performance of the wax balls could meet the requirements of the temporary plugging and repeated fracturing process in the Ansai Oilfield, China, with excellent performance and reasonable cost. Cargnel and Luzardo (1999) found that calcium carbonate (CaCO3) could be used as a bridging agent and applied in drill-in fluids; it can prevent massive loss circulation to the reservoir formation (Nasiri et al. 2017).
At present, the most commonly used temperature-sensitive degradable particles are made of polylactic acid (PLA) (Lv et al. 2019; Schultz et al. 2020; Surjaatmadja and Todd 2009). PLA is a new type of biodegradable material made from starch extracted from renewable plant resources (such as corn) (Takahashi et al. 2016). It is reported that it can be degraded by microorganisms in nature and eventually generates carbon dioxide and water without polluting the environment (Reddy and Cortez 2013). The degradation of polylactic acid is divided into two stages. First, it is hydrolyzed into lactic acid monomers. Then, the lactic acid monomers are degraded into carbon dioxide and water under the action of microorganisms. However, it takes about 60 days for polylactic acid to degrade completely. Reddy et al. (2018) conducted a detailed study on the degradation ability of polylactic acid temporary plugging agents. They found that when using polylactic acid as the temporary plugging agent, degradation accelerators need to be added to shorten the degradation time. Commonly used accelerators include ethylenediamines, ethanolamines, and polyamines. However, polylactic acid has insufficient plugging strength and a degradation rate that is either too fast or too slow for the downhole conditions, and the mechanical properties of polybutyrate are inadequate. Therefore, Xiong et al. (2018) optimized the ratio of raw materials so that the particles can adapt to the degradation requirements of different bottom-hole temperatures.
Fig. 1 Schematic diagram of multiple fracturing by diverting process (Du et al. 2013)
Although the fiber-type material has excellent flexibility and outstanding leak-proof and plugging performance, its degradation rate is slow, and its strength is limited (Zou et al. 2019). It is challenging for it to penetrate deeply into the fracture or reach the tip of the fracture. Besides, the cost of degradable fibers is prohibitive. In addition, non-deformable TPA (e.g., inorganic particles, hard resins, polylactic acid) has a specific compressive strength. However, due to the large pores between the stacked particles, the plugging strength is limited (Wei 2017). Moreover, the rigid particles are easily crushed under high pressures and may not be able to maintain a permanent and effective fracture opening. Pre-formed particle gel (PPG) can be changed into deformable particles after absorbing water (Wang et al. 2019b; Zhu et al. 2017a). Because of its easy injection, strong plugging strength, and excellent environmental protection, it is widely used in profile control and water shutoff operations (Bai et al. 2007a, b; Wang and Bai 2018). However, there are few reports on its use as a temporary plugging agent. The main reason is that conventional PPG was cross-linked by N,N'-methylene bisacrylamide (NMBA), which is thermally stable in low and medium temperature reservoirs (Zhou 2011). In this study, we synthesized a degradable pre-formed particle gel (DPPG) as a temporary plugging agent (TPA), which introduces a cross-linking structure that can self-degrade under reservoir conditions. The bottle test method was first used to investigate the effects of different formulations (e.g., total monomer concentration, cross-linking agent concentration, initiator concentration, and monomer ratio) on the swelling volume and degradation performance of DPPG. Then, the reservoir adaptability of the optimized temporary plugging agent (e.g., to brine salinity and formation temperature) was evaluated.
Subsequently, the polymerization and self-degradation mechanisms of DPPG were explained based on the static experimental results and the microscopic morphology and structure changes of DPPG. Finally, the core displacement experiment was used to investigate the plugging and degradation performance of DPPG in the porous medium. This research can provide a theoretical and experimental basis for the further oilfield application of the temporary plugging agent. To investigate the temporary plugging effect of the degradable pre-formed particle gel (DPPG) under oilfield conditions, the physical model used in this experiment was designed and fabricated. The design and fabrication of the physical model are shown in Fig. 2. Preparation of DPPG The preparation method of DPPG is the same as that of conventional PPG (Elsharafi and Bai 2016). First, a certain amount of water-soluble monomers (such as AM and AMPS) was weighed and dissolved in a certain amount of reverse osmosis (R.O.) water. After complete dissolution was achieved, a certain amount of cross-linker (DT-2) and initiator (potassium persulfate) were added in sequence. After being fully dissolved again, the mixture was placed in a thermostat water bath at 45 °C and allowed to react for three hours. The products were then dried and crushed, and DPPG samples with different particle sizes and various compositions were finally obtained, as shown in Fig. 3. In this study, twenty DPPGs with different total monomer, cross-linker, or initiator concentrations, or different monomer ratios, were synthesized. Their formulations are shown in Table 2. Evaluation of static experiments (bottle test method) The bottle test method was used to observe the swelling ratio and degradation of the temporary plugging agent (TPA) under different conditions. First, a certain amount of DPPG was put into a transparent, scaled test tube with enough solvent. For example, 0.1 g of DPPG with a dry particle size of 20-30 mesh was added to 20 mL of NaCl aqueous solution. Then it was put in a thermostat water bath (e.g., 65 °C). The packed volume of DPPG in the test tube was recorded over time. The correspondence between swelling time, reading interval, and the oilfield operation process is shown in Table 3. As shown in Table 3, the packed volume of swollen DPPG was recorded every 10 min in the first two hours to simulate the swelling performance when the DPPG is prepared in the mixing tank on the ground. Then, it was recorded every 30 min from 2 to 5 h to simulate the swelling performance in the wellbore before the DPPG is injected into the formation. After the DPPG had been aged in brine for five hours, the solution in the test tube was removed, and the packed volume was recorded. This step simulates the following situation: when the DPPG is injected into the target horizon, free water will flow into the pores and throats in the formation due to the pressure gradient, and only the swollen DPPG without free water remains. After that, the swollen DPPG samples without free water were aged in an oven at 65 °C, and the packed volume of the gel was recorded every day. The time at which the DPPG samples change from a solid gel to a flowable liquid state without visible particles is defined as the entire degradation time in this study. Each experiment was repeated three times to reduce the experimental error.
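As a rough companion to the bottle-test bookkeeping described above, the sketch below tabulates a swelling ratio from packed-volume readings over time. The reading times, volumes, and the volume-based definition of the ratio are illustrative assumptions rather than values from the study.

```python
from dataclasses import dataclass

@dataclass
class BottleTestReading:
    minutes: float           # aging time since the DPPG was dispersed in brine
    packed_volume_ml: float  # packed volume of swollen DPPG read from the scaled tube

def swelling_ratio(readings, initial_volume_ml):
    """Swelling ratio over time: packed volume divided by the initial particle volume.

    This mirrors the bottle-test bookkeeping described in the text; the volume-based
    definition of the ratio is an assumption made here for illustration.
    """
    return [(r.minutes, r.packed_volume_ml / initial_volume_ml) for r in readings]

# Hypothetical readings: 0.1 g of 20-30 mesh DPPG (assumed initial packed volume ~0.1 mL)
# dispersed in 20 mL of 1 wt% NaCl at 65 °C.
readings = [BottleTestReading(10, 2.1), BottleTestReading(60, 5.0), BottleTestReading(120, 6.3)]
for t, ratio in swelling_ratio(readings, initial_volume_ml=0.1):
    print(f"t = {t:5.0f} min  swelling ratio = {ratio:5.1f}x")
```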
Infrared spectrum analysis A Fourier transform infrared (FT-IR) spectrometer (Nicolet iS20, Thermo Fisher Scientific) was used to measure the infrared spectrum of the sample. Before the test, the DPPG particles were dried and ground with KBr powder. The mixture was compressed into a pellet for FT-IR analysis. Microstructure test An optical microscope (Axio Vert.A1, the Carl Zeiss Company) was used to observe the morphological changes of the swollen DPPG during the entire dynamic degradation process. After being aged for different times, the DPPG particles were placed under an optical microscope at a magnification of ten times. This observation process keeps the original appearance of the water-swollen DPPG unchanged. In addition, a scanning electron microscope (SEM, Hitachi S4700, Tokyo, Japan) was used to observe the microscopic changes in the skeleton structure before and after the degradation of DPPG. During the experiment, the DPPG before and after degradation needed to be freeze-vacuum dried, and liquid nitrogen was used as the freezing liquid. After the sample was freeze-dried, conductive glue was employed to stick the dry DPPG sample on the sample stage. Finally, gold was sprayed to enhance its conductivity. Core displacement experiment The core displacement experiment was used to evaluate the temporary plugging performance of DPPG in porous media. The experimental device diagram is shown in Fig. 4. The core was placed in the core holder, and the confining pressure was 25 MPa. First, brine was injected into the core at a constant flow rate of 0.5 mL/min by an ISCO pump until the displacement pressure was stable. Then, the swollen DPPG was injected into the core at a constant flow rate of 1 mL/min until the displacement pressure reached the preset value (i.e., 20 MPa). After that, brine was injected into the core again at a constant flow rate of 0.5 mL/min. During the flooding process, the real-time changes of the pressure and the breakthrough pressure gradient were recorded. After that, the core holder with the remaining DPPG was aged in the oven at 65 °C. After the remaining DPPG was completely degraded, the core holder was installed in the experimental setup again. Please note that the inlet and outlet of the core were swapped to simulate the flow-back production process of the petroleum reservoir. Brine was injected again to calculate the reservoir damage caused by DPPG. The core permeability can be calculated by the Darcy equation, Q = KAΔp/(μL) (1), where Q is the flow rate through the core under the pressure difference Δp, cm³/s; A is the cross-sectional area of the core, cm²; L is the length of the core, cm; μ is the fluid viscosity, mPa·s; Δp is the pressure difference between the inlet and outlet of the core, 0.1 MPa; and K is the proportionality coefficient, which is the permeability of the porous medium, μm². Preparation and characterization of DPPG According to the different formulations in Table 2, different types of DPPG samples can be synthesized by free radical polymerization. Monomers such as AM and AMPS are connected into long chains through free radical polymerization.
During this reaction, DT-2, which has a self-degrading function, acts as a cross-linker. It can connect the above polymer chains to form an intricate three-dimensional (3D) network structure. We can take the DPPG C4 sample as an example. Its infrared spectrum is shown in Fig. 5. The peak at 3500 cm −1 can be attributed to the superposition of the stretching vibration absorption peaks of -NH 2 . In addition, the peak at 1654.19 cm −1 is the C=O stretching vibration in the amide group, which is the characteristic peak of polyacrylamide. The flexural vibration absorption peak at 1449.66 cm −1 attributable to -CH 3 is the characteristic peak of AMPS. Moreover, the peak at 1182.85 cm −1 is attributable to the characteristic vibration frequency of C-N. The absorption peak at 1039.70 cm −1 is attributed to the C-O bending vibration, which is the characteristic peak of the cross-linker DT-2. Last, the peaks at 802 and 627.41 cm −1 are the out-of-plane bending vibration frequencies of -NH. Effect of monomer concentrations on the static swelling and degradation performance of DPPG To investigate the effect of monomer concentrations on the swelling volume and the degradation time of DPPG, we carried out the static evaluation experiment. The experimental sample was the DPPG C1 to C5, the brine used was 1% NaCl solution, and the test temperature was 65 °C. The specific compositions of the DPPG C1 to C5 are shown in Table 2. Figure 6 gives the swelling and degradation performance of DPPG C1 to C5 composed of different monomers ranging from 12 wt% to 28 wt%. It can be seen from Fig. 6 that in the swelling process of the first 60 min, all the swelling rates of DPPG samples C1 to C5 were rapid. Moreover, the swelling rate of DPPG C1 was the quickest, but the differences between them were small. When they were aged at 65 °C for 70 min, the swelling volume of DPPG C5 reached the highest. The DPPG C1 reached its highest packed volume at the time of 90 min, which shows the most excellent swelling performance. In addition, when they got their highest volumes, their swelling rates began to slow down and could almost remain stable. The final swelling volumes of DPPG samples were recorded. In summary, the swelling performance of the DPPG samples decreased with the increase in the total monomer. The possible reason is that the more considerable the total amount of monomers in the DPPG samples, the longer the main chain of the polymer formed, and the more significant the space hindrance after cross-linking. Therefore, water molecules are more difficult to enter the inside of the polymer structure, which leads to the swelling ratio of the DPPG decreasing. Thus, in the actual oilfield operation and application process, we can adjust the total amount of monomers during the synthesis of DPPG, so that the swelling volume can be changed to adapt to the field application of different petroleum reservoir conditions. For example, according to the bridging principle of particle plugging, large particles will be suitable for large porous media in petroleum reservoirs. After the DPPG was fully swelled in 1 wt% NaCl solution for five hours, the free water in the test tube was taken away by a rubber-tip dropper. Then the DPPG samples were placed in an oven at 65 °C again, and the packed heights were recorded every day. The experimental results are shown in Fig. 6. The complete degradation time of DPPG C1 was the shortest, and the time was five days. 
The complete degradation time of the DPPG increased as the total amount of monomer increased. The possible reason is that the cross-linking density increases with the increase in the total amount of monomers, which is manifested by the changes in their swelling ratios. Thus a stable three-dimensional network structure is formed, resulting in an increase in the degradation time of the DPPG. Figure 7 shows the state of the DPPG C4 when it is completely degraded. For clear observation, it shows actual pictures before and after DPPG degradation. Fifty grams of swollen DPPG samples were taken for the experiment. Before degradation, as shown in Fig. 7a, b, the sample consisted of transparent water-absorbing swollen particles. After degradation, as shown in Fig. 7c, it degraded into a pale-yellow aqueous solution with a very low viscosity, like water. Effect of cross-linker concentration on the static swelling and degradation performance of DPPG To study the influence of cross-linking agent concentration on the swelling volume and the complete degradation time of the temporary plugging agent, DPPG samples C6 to C10 with various cross-linker concentrations were prepared. Bottle tests were performed, and the experimental results are shown in Fig. 8. In these tests, 1 wt% NaCl aqueous solution was used. It can be seen from Fig. 8 that in the swelling process of the first 60 min, the overall swelling rates of the DPPG samples C6 to C10 were rapid. Moreover, the packed volume of the swelled DPPG C10 was the smallest, and the maximum swelling was reached at 60 min. In contrast, the swelling rate of the DPPG C6 was the highest, and the maximum swelling was reached at 100 min, which was longer than for the DPPG C10. Besides, the swelling performance of the DPPG samples decreased with the rise in the cross-linker concentration. With the rise in the cross-linker concentration, the density of the polymer microstructure will be increased. Therefore, it is more difficult for the water molecules to enter the internal structure of the polymer, resulting in a decrease in the water absorption capacity of the DPPG. Figure 9 shows the bulk volume of the swollen DPPG samples C6 to C10 after they completely absorbed water and the free water was removed. Figure 8 also shows the degradation performance of the swollen DPPG samples (without free water) with different cross-linker concentrations over time. The cross-linker concentration of the DPPG C6 was the lowest; however, its degradation was the fastest, and the degradation was completed in two days. In contrast, the complete degradation time of the DPPG C10 was the longest, and its degradation could be completed in 15 days. In summary, under the same monomer concentrations, the complete degradation time of the DPPG samples increased with an increase in the concentration of the cross-linker. The above phenomenon can also be explained in conjunction with the change rule of the swelling performance of the DPPG samples. The smaller the swelling ratio of the DPPG, the denser the microscopic grid, that is, the higher the cross-linking density, and therefore the more time it takes to degrade. Effect of initiator concentration on the static swelling and degradation performance of DPPG To study the influence of initiator concentration on the swelling performance and complete degradation time of DPPG, different initiator concentrations (i.e., 0.4, 0.8, 1.2, 1.6, and 2.0 wt%) were used to prepare DPPG samples. They were numbered as DPPG samples C11 to C15. Their compositions are shown in Table 2.
The DPPG samples were dispersed in 20 mL of 1% NaCl solution at 65 °C, and their swelling volumes over time were recorded. The results are shown in Fig. 10. It can be seen from Fig. 10 that the swelling rates of DPPG samples were very fast in the first 80 min of swelling with water. The swelling ratio quickly approached 60-70 times its initial volume. However, the difference between them was relatively small. When they absorbed water for 100 min, these five groups of DPPG samples nearly completed swelling. Among them, DPPG C14 had the most substantial swelling volume, followed by DPPG samples C15 and C13, and DPPG C11 had the lowest swelling volume. With the increase in the initiator concentration in the process of DPPG synthesis, the volume swelled with water first increased and then decreased. It is because when the initiator concentration is low, the reaction speed will increase as the initiator concentration increases, and the microscopic cross-linking density of the synthesized DPPG is low. Therefore, when the DPPG swells with water, the higher the crosslinking density of DPPG, the larger the hydration swelling volume. When the concentration of the cross-linking agent exceeds a critical concentration (e.g., 1.6 wt% in this study), the polymerization will rapidly exothermic, causing the local polymerization to occur quickly. This, in turn, leads to an increase in the cross-linking density of some parts of the DPPG. Therefore, the volume of its swelling with water reduces. After the DPPG was swelled in a 1 wt% NaCl aqueous solution for five hours, the free water around the DPPG samples in the test tube was taken away with a glue-tip dropper, and the self-degradation performance of the DPPG was evaluated. The experimental results are shown in Fig. 10. The complete degradation time of the DPPG C11 was the longest, and total degradation occurred only after 16 days of aging. The complete degradation time of the DPPG C14 was the shortest, and the complete degradation occurred on the 10th day. So, the complete degradation time of the DPPG first decreased and then increased as the initiator concentration increased, which is opposite to their swelling performance. That is, the more the expansion multiple of DPPG, the shorter the time for its complete self-degradation. It is very likely to be related to its microstructure, which will be discussed in detail in the following sections. Effect of monomer ratio on the static swelling and degradation performance of DPPG To study the effect of the ratio of AM to AMPS monomers in the DPPG system on the swelling and self-degradability of DPPG, DPPG samples were prepared with five different monomer ratios. Their total monomer concentration was fixed at 24 wt%, but the ratios of AM to AMPS monomers were 3:7, 4:6, 5:5, 6:4, and 7:3, respectively. Their numbers are DPPG samples C16 to C20, and the specific compositions are shown in Table 2. They were dispersed in 1 wt% NaCl solution, and their swelling volumes were evaluated over time at 65 °C, and the complete degradation times were also recorded. The experimental results are shown in Fig. 11. It can be seen from Fig. 11 that the swelling rates of DPPG samples C16 to C20 increased rapidly within 50 min and slowed down significantly between 50 and 100 min. At 120 min, the swelling of the five groups of DPPG samples was completed and remained stable. Among them, the DPPG C16 had the most substantial swelling volume, and the DPPG C20 was the smallest. 
So, for the DPPG samples synthesized with different monomer ratios, that is, as the concentration of AM increases or AMPS decreases in the composition, swelling volume of DPPG in 1 wt% NaCl solution will be significantly reduced. It is mainly because AMPS is negatively charged; the repulsion between the molecular chains during synthesis will dramatically increase when the AM in the composition of DPPG decreases or AMPS increases. In addition, because the spatial volume of AMPS is much larger than that of AM, the steric hindrance effect is obvious (Zhu et al. 2017b(Zhu et al. , 2019. Therefore, it can play the role of expanding the grid, making the grid density smaller during cross-linking (i.e., the grid size becomes more substantial). In addition, as the proportion of AMPS in the DPPG system increases, the final expansion volume when it encounters water will also be more significant. When the above five kinds of DPPG samples swelled in water for five hours, the free water around them was taken away, and the swollen particles were retained for self-degradation experiments. The results show that the complete selfdegradation time of the DPPG C16 was the shortest, which was two days. The self-degradation time of the DPPG C20 was the longest (12 days). Moreover, there was an obvious opposite relationship between the complete self-degradation time and the final swelling volume. That is, as the swelling volume of the DPPG increased, its complete degradation time would decrease. Effect of reservoir brine salinity and temperature on the static swelling and degradation performance of DPPG To study the influence of reservoir brine salinity and temperature on the swelling effect of DPPG, we selected 0.1 g of the DPPG C14 with a particle size of 20-30 mesh as an example. The salinity was 1%, 2%, 5%, 10%, 15%, and 20%, separately, and the temperature ranged from 45 to 150 °C. The packed volume and complete degradation time of these DPPG samples were recorded, as shown in Fig. 12. It shows the swelling and degradation performance of DPPG samples at different reservoir brine salinity and temperatures. As the brine salinity increased from 1 wt% to 20 wt%, the final swelling volume of DPPG samples decreased. The main reason is that with an increase in the degree of brine salinity, salt ions can enhance the shielding capacity of the counterions on sulfonic acid and carboxyl anions. Therefore, the hydration capacity of the hydrophilic functional groups of the polymer chains in the DPPG increases, and then the reverse polyelectrolyte effect can lead to the shrinkage and curling of the polymer chains. Therefore, the water absorption capacity of the polymer reduces. In addition, the degradation rate of the DPPG previously swollen by the salinity of 1 wt% NaCl was the fastest, and the degradation was completed on the tenth day. However, the degradation rate of the DPPG previously swollen by the salinity of 20 wt% NaCl was the slowest, and the degradation was completed on the 18th day. Therefore, the DPPG shows excellent salt tolerance, even in the solution of 20% salinity. The swelling performance is still satisfactory and can be completely degraded into water-like solutions. Moreover, there is also an apparent opposite relationship between the final swelling volume and the complete self-degradation time. That is, with an increase in the swelling volume of DPPG, its complete degradation time will decrease. 
As for the influence of reservoir temperature, when the DPPG C14 swelled in brine at 45 °C, its final swelling volume was 5.5 mL, this swelling capacity is still suitable for oilfield operations. With the increase in the aging temperature from 60 to 120 °C, the final swelling volume increased slightly. However, it is worth noting that when DPPG was aged at 150 °C for 210 min in the saline solution, its packed volume decreased. It is mainly because under extremely high-temperature conditions, the temperature resistance of the functional groups AM and AMPS is limited, leading to high-temperature hydrolysis in part of the polymer chains of DPPG. This allows part of the DPPG to be dissolved in the aqueous solution, so the packed volume reduces accordingly. In summary, as the temperature increases, the swelling ratio of the DPPG also increases. Therefore, this series of DPPG shows excellent high-temperature adaptability, which can be applied to reservoirs even with the temperatures up to 150 °C. After the DPPG C14 was fully expanded for 5 h, the DPPG being aged at 150 °C had the fastest degradation time. The degradation was completed at 12 h. However, the DPPG being aged at 45 °C had the lowest degradation rate, and the complete degradation was achieved after 13 days. In summary, the higher the temperature, the faster the degradation rate of the DPPG. It is mainly because as the experimental temperature increases, the hydrolysis rate of the amide group and the self-breaking of C-O bond of the polymeric cross-linker (DT-2) increases, and thus the degradation rate increases. Morphology and microstructure changes of DPPG during self-degradation To study the changes in the morphology and microstructure of the DPPG during the self-degradation process, we arbitrarily selected the DPPG C14 as the research object for observation. First, we weighed 0.1 g of DPPG C14 with a dry particle size of 20-30 mesh, dispersed it in 20 mL of 1% NaCl solution, and then placed it in a thermostat at 65 °C for aging. After swelling for five hours, the free water was taken away, and the swollen DPPG was retained. Then the morphology changes of the DPPG under different aging times at 65 °C were investigated using an optical microscope. When the DPPG was degraded entirely, we took a small amount of the dissolved liquid and used a Brookfield viscometer to measure the viscosity of the degraded solution. Figure 13 shows that when the DPPG was fully expanded for 5 h, the shape of the particles was a solid gel particle with sharp edges and corners. It had a certain degree of elasticity and viscosity. After being placed in the oven for 24 h, the edge of the temporary plugging agent began to soften. However, the overall change was not noticeable. When it was aged for 48 h, the edges and corners of the swollen particles became smooth, and some edges and corners began to degrade. After 72 h, the peripheral edges and corners of the swollen particles were all degraded and became very smooth. At the aging time of 96 h, the edges of the swollen particles began to degrade and shrink inwardly. After 192 h, the DPPG particles had been completely degraded to be a liquid flow state. Then, the Brookfield viscometer was used to measure the viscosity of the degraded liquid, which was 2.6 mPa s. Therefore, the viscosity of the degraded DPPG is lower, which has little damage to the formation. Figure 14 shows the scanning electron microscope (SEM) images of swollen DPPG C14 before and after self-degradation. It can be seen from Figs. 
14a-c that the monomers AM and AMPS can form a dense spatial three-dimensional microscopic network with the addition of the self-degradable monomer DT-2 (whose primary function is cross-linking). These hydrophilic networks can expand rapidly after encountering water molecules, as shown in Fig. 13. The smaller the grid density of the gel microstructure (i.e., the larger the pores between the grids), the more water molecules can enter. Therefore, the swelling ratio of the corresponding DPPG is also higher. This, in turn, explains the previous experimental results: as the monomer concentration or the cross-linking agent concentration decreases, the grid density of the formed DPPG is smaller, and the expansion ratio is higher. In addition, with an increase in the proportion of AMPS in the system, the steric hindrance effect is strengthened and the grid density of the synthesized DPPG is significantly reduced, thus the swelling ratio is increased considerably. In addition, the smaller the grid density of the gel, the larger the pore size in the grid, and the more conducive the structure is to the self-degradation of swollen DPPG. Figure 14d shows the SEM image of DPPG C14 after degradation. There is no visible particle or network structure, and almost all the DPPG has been degraded. This is due to the introduction of the degradable DT-2 functional structure into the DPPG. It can undergo self-degradation and finally degrade into a solution with a viscosity close to that of water. Therefore, the damage of the DPPG to the formation is minimal. Core displacement experiment The previous parts have studied the static expansion and self-degradability of DPPG. To further verify that DPPG particles have a sound temporary plugging effect under real reservoir conditions, we also used artificial cores for displacement experiments. The drilled cylindrical core used is shown in Fig. 2. The purpose of the drilled channel is to simulate the high-permeability layer in the petroleum reservoir, so that a certain volume of DPPG can be injected into the core and thus form a plugging layer. The experimental results of the core displacement experiment are shown in Fig. 15. In the DPPG injection stage, the swollen particles were injected at a flow rate of 2 mL/min. The pressure gradient between the two ends of the core can be seen to rise rapidly. When the injection pressure of DPPG reached 20 MPa, water was injected. The water injection pressure could be as high as 21.12 MPa. This indicates that the DPPG has excellent plugging performance. As the injection of water continues, the water flow will form a flow channel along the surface of the compacted deformed particles under the injection pressure gradient. However, compared to the previously drilled channel, water mobility is significantly decreased. So, the breakthrough pressure gradient of DPPG can be 17.58 MPa, which means the DPPG can significantly decrease the core permeability to water. Then, the plugged core was sealed and aged in an oven at 65 °C. After the DPPG was degraded entirely, the water flooding test was performed on the core again. It can be seen from Fig. 15 that the core permeability after degradation of the temporary plugging agent is measured to be 24.26 × 10⁻³ μm², which is slightly lower than its original permeability (26.65 × 10⁻³ μm²). Therefore, DPPG causes little damage to the low-permeability layer of the core. So, it can be seen from the core displacement experiments that DPPG has excellent plugging performance in reservoir cores.
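To make these core-flooding numbers concrete, the sketch below evaluates the Darcy relation of Eq. (1) for a brine-flooding stage and checks the quoted permeability recovery. The core geometry, flow rate, and pressure drop in the example are hypothetical placeholders; only the two permeability values are taken from the text.

```python
import math

def permeability_um2(q_ml_per_min, area_cm2, length_cm, viscosity_mpa_s, dp_mpa):
    """Rearranged Darcy relation K = Q*mu*L/(A*dp), evaluated in SI and returned in um^2."""
    q = q_ml_per_min * 1e-6 / 60.0   # mL/min -> m^3/s
    area = area_cm2 * 1e-4           # cm^2   -> m^2
    length = length_cm * 1e-2        # cm     -> m
    mu = viscosity_mpa_s * 1e-3      # mPa.s  -> Pa.s
    dp = dp_mpa * 1e6                # MPa    -> Pa
    return q * mu * length / (area * dp) * 1e12   # m^2 -> um^2

# Hypothetical brine-flooding reading: 0.5 mL/min through a 2.5 cm diameter, 6 cm long core
area = math.pi * (2.5 / 2.0) ** 2
k = permeability_um2(0.5, area, 6.0, 1.0, 0.046)
print(f"K = {k * 1e3:.1f} x 10^-3 um^2")   # ~22 x 10^-3 um^2 for these assumed inputs

# Permeability recovery after DPPG degradation, using the two values reported in the text
print(f"recovery = {24.26 / 26.65:.1%}")   # ~91%
```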
Besides, the DPPG can become a solution with a viscosity like water through self-degradation. Thus, it causes very little damage to the core permeability. Therefore, DPPG has an excellent temporary plugging effect and can be used as a temporary plugging material with high application potential. The preceding sections have described the static evaluation and core displacement experiments of DPPG, as well as its microscopic morphology characteristics and changes. Based on these, we can propose the following synthesis and self-degradation mechanisms of DPPG, as shown in Fig. 16, which will further facilitate the understanding of the above experimental results. First, the low molecular weight monomers acrylamide (AM) and 2-acrylamide-2-methylpropanesulfonic acid (AMPS) undergo free radical polymerization under the cross-linking effect of the low molecular weight polymer DT-2. Thus, the microscopic three-dimensional network of the polymer DPPG can be formed. Since each monomer in the polymer (i.e., AM and AMPS) has excellent hydrophilicity, the DPPG can quickly swell when it encounters water. That is, water molecules can enter the DPPG microscopic grid through osmotic hydration. The smaller the grid density, the larger the pores between the grids, and therefore the more water molecules can be absorbed; that is, the higher the water-absorbing swelling ratio. Besides, the higher the AMPS monomer content in the system, the more obvious the steric hindrance effect of AMPS. Therefore, the larger the size of the spatial microgrid formed, the more water can be absorbed, and the better the expansion effect in saltwater. Furthermore, in the self-degradation stage of DPPG, the self-breaking of the DT-2 cross-links allows the structure to break down by itself. Therefore, the grid of the entire three-dimensional network is gradually disassembled. Finally, it is completely degraded into an aqueous solution of ultra-low molecular weight residues. Therefore, DPPG, as a temporary plugging agent (TPA), causes very little damage to the formation. Conclusions A degradable pre-formed particle gel (DPPG) was developed and used as the temporary plugging agent (TPA) for petroleum reservoir drilling and production. Bottle tests were used to evaluate the influence of the composition of the DPPG on the static temporary plugging performance. Then the reservoir adaptability of the optimized DPPG was evaluated. Finally, the core displacement experiment was used to investigate the plugging and degradation performance of DPPG in the porous medium. The main conclusions are as follows: (1) A decrease in the total amount of monomers and cross-linker, or an increase in the proportion of the monomer AMPS, will cause an increase in the DPPG swelling volume and a decrease in the complete degradation time. (2) As the content of the initiator increases, the final swelling volume of the DPPG first increases and then decreases. However, the change in the complete degradation time is just the opposite. (3) The DPPG has excellent temperature resistance and salinity resistance. The temperature resistance was up to 150 °C, and the salinity resistance could reach 200,000 mg/L. (4) For the sandstone core with a partially opened fracture and an initial permeability of 26.65 × 10⁻³ μm², the pressure gradient of DPPG plugging between the two ends of the core could reach 21.12 MPa, and the permeability could be restored to 24.26 × 10⁻³ μm² after temporary plugging. That is, the core permeability recovery rate was about 91%. Therefore, DPPG has high plugging strength and causes low core damage.
(5) DPPG has a relatively regular spatial three-dimensional microscopic network, which gives it an excellent water swelling effect. The smaller the grid density, the larger the pores between the grids, the stronger the water absorption capacity, and the shorter the time required for complete self-degradation. (Fig. 16 Synthesis, expansion, and self-degradation mechanism of DPPG, showing the polymerization and degradation processes.) Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
9,641.8
2020-11-28T00:00:00.000
[ "Engineering", "Environmental Science" ]
In search of new reconstructions of (001) α-quartz surface: a first principles study † Electronic supplementary information (ESI) available (additional structures and calculations for the 3 × 3 surface). See DOI: 10.1039/c4ra10726h. Cite this: RSC Adv., 2014, 4, 55599. α-Quartz (P3₁21 space group) is a stable silica (SiO2) polymorph 1 and one of the most abundant minerals, having wide applications in construction, piezoelectric devices, glass production industries, etc. It is also one of the major parts of oil shales 2 and seems to play a significant role in their properties. A number of studies have focused on the analysis of α-SiO2 properties and formed the basic knowledge on phase stability, 1 electronic properties, 3-7 defects, 8-10 some surface properties, 3,11-16 etc. Nevertheless, the understanding of α-SiO2 surface properties is still weak. It is known that, similar to other semiconductors/insulators, cleaved α-SiO2 surfaces tend to minimize high energy dangling bonds by surface reconstructions. 16 However, experimental investigations of α-SiO2 surfaces (especially of reconstructions) are rare and do not provide sufficient details on atomic ordering at the surfaces. This is particularly due to the difficulty of using atomic force microscopy (AFM) for the analysis of semiconductor and insulator surfaces. AFM studies reported that the α-SiO2 surface is flat. 17 Low-energy electron diffraction (LEED) investigations showed that the α-SiO2 (001) surface has a (1 × 1) pattern. 18,19 However, both AFM and LEED studies did not provide any information on the reconstructions at low temperatures. To the best of our knowledge, only a √84 × √84 reconstruction, which takes place at temperatures above 873 K, has been reported so far. 18 Nevertheless, this reconstruction can be attributed to the α-β quartz phase transition occurring at 846 K. 18 Therefore, the use of computational studies for the understanding of α-SiO2 surfaces is of significant interest. During the last 15 years, a few fundamental studies 11,14,16 showed that the α-SiO2 (001) surface has the lowest energy. In particular, Rignanese et al. reported a first principles study of cleaved and reconstructed α-SiO2 (001) surfaces, and they suggested that a "dense" surface is the most favorable reconstruction. 14 The properties of the same reconstructed surface were further studied by Rignanese et al. 13 and Goumans et al. 11 Moreover, the "dense" surface was later used for the analysis of molecular adsorption on SiO2 surfaces 3,11,13 and SiO2 interfaces with other materials. 20,21 Recently, using a mix of molecular dynamics (MD) and first principles calculations, Chen et al. showed that the "dense" surface may further reconstruct. 22 Nevertheless, all previous studies of the reconstructions have some limitations, such as the small size of the studied systems, limited analysis of system stability, use of empirical interaction potentials for MD, etc. Because of this, in this work, using Born-Oppenheimer molecular dynamics (BOMD) simulations and "static" DFT calculations, a detailed analysis of the stability of reconstructed and cleaved α-SiO2 (001) surfaces was performed. We show that the well-known "dense" surface should be considered as metastable.
Moreover, the electronic and structural properties of the found reconstructed surfaces are analyzed and discussed in detail. All calculations were carried out using the Vienna ab initio simulation package (VASP) with the Perdew-Burke-Ernzerhof (PBE) 23 functional. Projector augmented wave (PAW) pseudopotentials 24,25 were used to model the effect of core electrons. The non-local parts of the pseudopotentials were treated in reciprocal and real space for the "static" DFT and BOMD calculations, respectively. The cutoff energies for the plane wave basis set were set to 400 and 300 eV for all "static" DFT and BOMD calculations, respectively. It should be noted that for BOMD, the increase of the cutoff energy to 400 eV did not affect the final ("static" DFT) energies of both the 27-layer unit cell and the 2 × 2 supercell slabs. The Brillouin-zone integrations were performed using Γ-centered Monkhorst-Pack 26 grids (see Table S1 †). The SiO2 surface was modelled as a periodic SiO2 slab containing N (N = 27, 36, or 45) atomic layers and a vacuum region of about 12 Å. In this study, we used three slab sizes: unit cell, 2 × 2 supercell, and 3 × 3 supercell (only for the 27-layer slab, see ESI †). For the studied systems, all atoms were relaxed until the internal forces were smaller than 0.01 eV Å⁻¹. The fixation of the central layers and its impact on the surface energy of the cleaved surface (for the 27-layer slab) were also studied, but the effect was found to be insignificant. To find possible reconstructions of the α-SiO2 (001) surface, BOMD simulations with two different annealing/quenching temperature protocols were used. For the first temperature protocol, atomic velocities were initialized at 1 K using the Maxwell-Boltzmann distribution. Then, the system was heated to 1000 K over a period of 20 ps. Using canonical ensemble BOMD (number of atoms, volume, and temperature are conserved), the resulting system was annealed at 1000 K for a period of 20 ps. Finally, the resulting system was quenched to 1 K with a cooling rate of 0.033 K fs⁻¹. For the second temperature protocol, the velocities were initialized using the Maxwell-Boltzmann distribution at a few different initial temperatures: 500, 600 (for unit cell calculations only), 1000, and 1500 K. Then, canonical ensemble BOMD simulations were carried out for a period of 15-20 ps with a time step of 1.5-2 fs. To ensure that the increase of simulation time does not affect the structures and energetics, the key BOMD calculations were performed for a period of about 45 ps. Finally, the resulting systems were quenched to 1 K with a cooling rate of 0.033 K fs⁻¹, and the obtained configurations were further optimized using a quasi-Newton algorithm. For all BOMD simulations, the system temperature was controlled using the Nosé thermostat. 27-29 It should be noted that the results obtained from the first and second temperature protocols were found to be similar (the largest energy difference was found to be about 0.02 eV), and hence, in this study, results only for the second temperature protocol are included. Moreover, the key results are presented only for the lowest energy structures predicted based on the mix of BOMD and "static" DFT calculations. Nevertheless, it should be noted that for the slabs annealed at different temperatures, the largest difference of "static" DFT energies was found to be as much as 0.02 eV.
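As a small sanity check on the annealing/quenching protocol, the sketch below converts the stated cooling rate into the implied quench duration and step count. The linear-ramp assumption and the 2 fs time step used in the example are illustrative choices; only the temperatures and the 0.033 K fs⁻¹ rate come from the text.

```python
def quench_schedule(t_start_k, t_end_k, cooling_rate_k_per_fs, timestep_fs):
    """Length of the linear quench stage implied by the stated protocol.

    Returns (duration in ps, number of MD steps). A linear temperature ramp is assumed;
    the time step value is one of the values quoted for the BOMD runs.
    """
    duration_fs = (t_start_k - t_end_k) / cooling_rate_k_per_fs
    return duration_fs / 1000.0, int(round(duration_fs / timestep_fs))

dur_ps, n_steps = quench_schedule(1000.0, 1.0, 0.033, 2.0)
print(f"Quench from 1000 K to 1 K at 0.033 K/fs: ~{dur_ps:.0f} ps, ~{n_steps} steps of 2 fs")
```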
In α-SiO2, each Si forms bonds with four O atoms with two slightly different bond lengths (in this study, 1.625 and 1.628 Å) and an average O-Si-O angle of about 109.5°. The formed Si-O bonds have a covalent polar nature (in some references, 30,31 the Si-O bond can also be treated as ionic). The analysis of computed Bader charges 32 indicates that Si atoms have a charge reduction (electron density reduction) of about 3.23e (the Bader charge is 3.23e), while the O atoms have a charge increase of about 1.62e (the Bader charge is −1.62e). The computed Bader charges correlate well with the analysis of Si and O electronegativities 33 and previously reported results (from 3.26e to 3.33e and from −1.63e to −1.67e for Si and O, respectively). 34,35 For α-SiO2, the computed PBE band gap is found to be 5.71 eV, and it is, as expected, smaller compared to experimental values (8.9 ± 0.2 eV for amorphous SiO2 (ref. 36)), but it is comparable with other theoretical studies (from about 5.6 eV for the local density approximation or PBE to 9.4 eV for GW). 5-7,14 To study surface stability, the surface energy (γ) was calculated as γ = (E_S − E_SiO2)/(2A), where E_S and E_SiO2 are the energies of the slab and of the bulk SiO2 containing the same number of SiO2 units as the slab, and A is the surface area of one side of the slab. Considering the α-SiO2 structure and taking into account the presence of two surfaces in the slab model, the stoichiometric (001) slab can be built as a Si/O- or O/O-terminated slab (see Fig. S1 †). However, since the Si/O-terminated α-SiO2 (001) slab is highly energetically unstable (the average surface energy was found to be 3.42 J m⁻² for the 27-layer slab), this study was limited to the analysis of O/O-terminated slabs. This is consistent with other studies of α-SiO2 surface stability. 11,14 The surface energy of the cleaved α-SiO2 (001) surface is found to be 2.23 J m⁻² and does not depend on the slab thickness (number of SiO2 layers). Due to the presence of dangling bonds, the cleaved surface tends to reconstruct. Using BOMD simulation, the evolution of the potential energy of the 27-layer unit cell slab (Fig. 1a) was studied. It was found that the potential energy decreases in two steps ((1) reconstruction of one surface and (2) reconstruction of the second surface). As a result of the reconstruction, the under-coordinated Si surface atoms become four-coordinated. Moreover, the comparison of the cleaved and reconstructed structures indicates that the reconstructed system has significant atomic displacements for the top 6 surface atomic layers (see Fig. 1a). The atomic displacements lead to the formation of 3- and 6-member rings (named based on the number of Si-O pairs in the rings), which are not typical for bulk α-SiO2 (see Fig. 1a and b). The same reconstruction was also observed for both the 36- and 45-layer systems (see ESI †), and for all considered slab thicknesses, the surface energy was found to be 0.39 J m⁻². The Bader charge analysis shows that atoms at the surface layers of the reconstructed and cleaved surfaces have a significant difference in the atomic charges. For the reconstructed surface, the average Bader charges for first-layer O and Si surface atoms are −1.60e and 3.20e, respectively, which are comparable to those (−1.62e and 3.23e) for bulk α-SiO2. In contrast, for the cleaved surface, the average Bader charges on surface O and Si atoms are −1.41e and 3.01e, respectively.
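The surface-energy expression above is easy to evaluate numerically; a minimal sketch, assuming energies in eV and the cell area in Å², is given below. The slab and bulk reference energies and the 43 Å² area in the example are hypothetical placeholders, not values taken from this work.

```python
EV = 1.602176634e-19   # J per eV
ANG2 = 1e-20           # m^2 per square Angstrom

def surface_energy_j_per_m2(e_slab_ev, e_bulk_ev, area_ang2):
    """gamma = (E_S - E_SiO2)/(2A) for a symmetric slab with two equivalent surfaces.

    E_bulk is the energy of bulk SiO2 containing the same number of SiO2 units as the slab;
    the factor of 2 accounts for the two surfaces of the slab, as in the text.
    """
    return (e_slab_ev - e_bulk_ev) * EV / (2.0 * area_ang2 * ANG2)

# Hypothetical numbers for illustration only: a slab that is 2.4 eV less stable than the
# bulk reference over a 43 A^2 surface cell gives gamma of roughly 0.45 J/m^2.
print(f"gamma = {surface_energy_j_per_m2(-1000.0, -1002.4, 43.0):.2f} J/m^2")
```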
This difference in atomic charges correlates well with the difference between the surface energies of the cleaved and reconstructed surfaces. The formed 6-member ring has a triangle-like structure (named the "dense" surface, see Fig. 1b), which is the same as that reported based on MD simulations 11,12 and DFT calculations using the local density approximation. 13,14 Despite these previous studies, the possibility of further reconstruction of the "dense" surface is not well studied. To the best of our knowledge, only Rignanese et al. 14 reported a first principles study of surface stability as a function of time, but that study was limited to a short MD simulation period of about 300-400 fs (which is believed to be too short for significant changes of the reconstruction). Other studies 11,12,22 used classical MD to obtain the reconstructed surface, which is known to have limitations in describing SiO2 properties. 37 Because of this, using the "dense" pre-reconstructed 2 × 2 supercell slab (see Fig. 2a), BOMD simulations of the 27-layer slab were performed. For all considered annealing temperatures (500, 1000, and 1500 K), the application of the annealing/quenching temperature protocol leads to the transformation of the 6-member triangle-like rings into 6-member ellipse-like rings (see Fig. 2a and b; this reconstruction is called the 2 × 2 reoptimized "dense" surface). This reconstruction is found to be similar to that reported based on classical MD simulations by Chen et al. 22 This reconstruction reduces the surface energy from 0.39 to 0.34 J m⁻² for all considered temperatures, indicating that the annealing temperature does not affect the final energetics (see Fig. 2c). The transformation of the 6-member ring induces the turning of the 3-member rings, resulting in atomic displacements at deeper atomic layers than those for the "dense" surface (see Fig. 2a and b). For instance, the analysis of Si-O bond length distributions for the central Si slab layer shows a significantly broadened distribution compared to that for the "dense" surface (see Fig. 2d). As an illustration, for the "dense" 27-layer slab, Si atoms in the central layer form two bonds of mostly identical bond lengths (about 1.628 Å; it should be noted that for the "dense" surface, the Si-O bond lengths formed by Si atoms within the central layers approach those for bulk α-SiO2 (1.625 and 1.628 Å) in the 45-layer slab). For the 2 × 2 reoptimized "dense" surface in the 27-layer model, the Si-O bonds have bond lengths varying from 1.622 to 1.635 Å for all considered temperatures (see Fig. 2d). This broadening indicates the existence of an indirect surface-surface interaction, which can be induced by overlapping of the atomic displacements. Because of this, it is too crude to use the 27-layer model for the analysis of reconstructions at real α-SiO2 (001) surfaces. However, these results can be useful for the analysis of atomic structures of thin α-SiO2 (001) nanosheets. Application of the annealing/quenching temperature protocol to the 36- and 45-layer 2 × 2 supercell slabs leads to the transformation of the 6-member triangle-like rings into 6-member ellipse-like rings for all considered temperatures (500, 1000, and 1500 K). The observed rings are structurally similar to those for the 27-layer model. Moreover, similar to the 27-layer model, the formation of the 6-member ellipse-like rings results in the turning of the 3-member rings and significant atomic displacements within at least 13 (for the 45-layer model) atomic layers.
Herewith, for the 45-layer model, the largest change of the Si-O bond length for the central layers is about 0.002 Å, indicating that the 45-layer slab can be used to represent the real α-SiO2 (001) surface. For both the 36- and 45-layer supercell slabs, the surface energies were found to be 0.35 J m⁻², which is slightly larger compared to that for the 27-layer system (0.34 J m⁻²), but it is still about 0.04 J m⁻² lower than that for the "dense" surface (see Fig. 3a). The analysis of the Si-O bond length distribution in the 6-member rings for the "dense" and 2 × 2 reoptimized "dense" surfaces suggests that the reconstruction causes a reduction of the dispersion of the distribution (see Fig. 3b and c). As an illustration, for the 6-member rings at the "dense" surface, the Si-O bond lengths vary from 1.616 to 1.629 Å, while for the 2 × 2 reoptimized "dense" surface, the variation is from 1.620 to 1.628 Å. Moreover, for the 2 × 2 reoptimized "dense" surface, the number of Si-O bonds with bond lengths lying within those for bulk α-SiO2 (1.625 and 1.628 Å) is found to be significantly larger compared to that for the "dense" surface (see Fig. 3b and c). The ring transformation also induces an increase of the average nearest in-plane Si-Si distance by about 0.05 Å (the average nearest in-plane O-O distance is not affected significantly by the transformation). Taking into account that the Bader charges for the surface atoms at the 2 × 2 reoptimized "dense" surface are the same as those for the "dense" surface (−1.60e and 3.20e for O and Si, respectively), the increase of the Si-Si distance implies a reduction of the repulsive electrostatic interaction between the ions. All these are the reasons why the 2 × 2 reoptimized "dense" surface has a smaller surface energy compared to that for the "dense" surface. Finally, electronic properties were calculated for the cleaved, "dense", and 2 × 2 reoptimized "dense" surfaces (all calculations were done for 45-layer slab supercells; see Fig. 3d, which shows the density of states for the cleaved, "dense", and 2 × 2 reoptimized "dense" slabs, with the Fermi level set at the valence band maximum (VBM) and referenced to 0 eV). For the cleaved surface, the dangling bonds produce occupied O-p states above the VBM of bulk α-SiO2. These states reduce the band gap from 5.71 eV (for the bulk) to 3.68 eV, which is consistent with the observation by Rignanese et al. 14 In contrast, both the "dense" and 2 × 2 reoptimized "dense" surfaces have similar DOSs and do not provide O-p states above the VBM. The minor difference between the DOSs for the "dense" and 2 × 2 reoptimized "dense" surfaces comes from the difference in band gap (5.70 and 5.62 eV for the "dense" and 2 × 2 reoptimized "dense" surface, respectively), indicating that the surface reconstruction can induce minor reductions of the band gap (by 0.08 eV) due to the surface effects. In summary, based on DFT calculations, a detailed analysis of possible reconstructions at the α-SiO2 (001) surface was performed. It was found that the "dense" surface has a tendency to reconstruct; the 6-member triangle-like rings at the "dense" surface reconstruct into 6-member ellipse-like rings. This reconstruction is caused by the optimization of both the Si-O bond length distribution and the Si-Si interactions at the surface layer. The 2 × 2 reoptimized "dense" surface has a surface energy about 10% lower than the "dense" surface and atomic displacements for the top 13 surface atomic layers.
The lowest energy reconstruction found in this work induces only a minor change in the electronic properties (a reduction of the band gap by about 0.08 eV).
4,013.8
2014-10-28T00:00:00.000
[ "Physics" ]
Sequential Pattern Mining: A Proposed Approach for Intrusion Detection Systems Technological advancements have played a pivotal role in the rapid proliferation of the fourth industrial revolution (4IR) through the deployment of Internet of Things (IoT) devices in large numbers. COVID-19 caused serious disruptions across many industries with lockdowns and travel restrictions imposed across the globe. As a result, conducting business as usual became increasingly untenable, necessitating the adoption of new approaches in the workplace. For instance, virtual doctor consultations, remote learning, and virtual private network (VPN) connections for employees working from home became more prevalent. This paradigm shift has brought about positive benefits, however, it has also increased the attack vectors and surface, creating lucrative opportunities for cyber-attacks. Consequently, more sophisticated attacks have emerged, including Botnet attacks which typically lead to Distributed Denial of Service (DDoS). These pose a serious threat to businesses and organisations worldwide. This paper proposes a system for detecting malicious activities in network traffic using sequential pattern mining (SPM) techniques. The proposed approach utilises SPM as an unsupervised learning technique to extract intrinsic communication patterns from network traffic, enabling the discovery of rules for detecting malicious activities and generating security alerts accordingly. By leveraging this approach, businesses and organisations can enhance the security of their networks, detect malicious activities including emerging ones, and thus respond proactively to potential threats. The performance evaluation for the proposed approach reveals a True Positive Rate (TPR) of over 99% and a False Positive Rate (FPR) of 0%. INTRODUCTION 4IR has played a pivotal role in the digital transformation of businesses and industries.The COVID-19 pandemic has further accelerated this trend, forcing us to rely more heavily on technology for daily activities such as accessing government services and transportation.This paradigm shift has revolutionised how employees work, promoting remote work and increasing the use of online communication platforms such as Teams and Zoom.This shift has brought numerous benefits, including cost savings, increased productivity and efficiency.However, it has also increased the attack surface for adversaries, creating a lucrative opportunity for cyber-attacks due to the deployment of a large number of smart technologies that operate without human intervention.These technologies have also increased the risk of sophisticated attacks, such as multi-stage attacks (MSAs) [2,3,14,22,23].An example of MSAs are Botnet attacks, which are often used to launch DDoS attacks [8,9] at a later stage, these have been a serious threat in recent years.As businesses and industries continue to embrace digital transformation, it is important to remain vigilant and take proactive measures to mitigate the risks of cyber-attacks. 
The stages of a cyber-attack typically begin with reconnaissance, which involves gathering information about the target organisation to map its security posture.This is followed by a scanning attack, which is a pre-attack stage that adversaries use to identify potential attack vectors that can be exploited to gain access to the network.During this stage, port scanners, ping scanners, and related tools are employed to discover open ports and obtain information about the network services running on them, as well as details about the operating systems and versions in use.The output of this stage usually consists of a list of attack vectors that can be used to penetrate the target organisation's defences. Port scanning involves sending packets to the target host to initiate a TCP connection through a three-way handshake.Through this process, a scanner can determine the state of the port on the target network hosts by sending a packet with the SYN flag set and analysing the response from the host being scanned.There are various types of port scans, such as the syn scan, TCP connect scan, and stealth scan.The stealth scan is particularly effective as it limits the noise generated during the scan by not completing the full three-way handshake, thus making it relatively more difficult to detect. Network monitoring tools, such as Zeek [29] and Snort [7,13,17], are equipped with pre-defined rules and signatures that enable the detection of common scanning attacks.Additionally, firewalls are typically deployed to secure networks, employing different sets of rules to filter out malicious traffic while allowing only legitimate traffic into the network.Given the availability of these intrusion detection tools and technologies, the likelihood of successful execution of scanning techniques by attackers is considerably low. As technology continues to advance and security measures become more sophisticated, attackers are constantly developing new techniques to gain access to target networks.In addition to standard scanning methods, adversaries create custom scans that involve sending packets with combinations of TCP flags that are not typically used in normal communication.This leads to mapping firewall rules and gaining more understanding of the traffic filtration rules implemented on the firewall.This then helps them develop attack strategies that allow them to send traffic that probes the network in a manner that evades the implemented rules.By doing so, attackers can identify attack vectors and exploit them to gain unauthorised access to the target network.It is imperative for organisations to detect these malicious activities at an early stage to prevent ultimate attacks.Timely detection and appropriate countermeasures can protect organisations from severe financial and reputation damage. Intrusion Detection Systems (IDSs) are security measures that can either be devices or software designed to monitor hosts or networks proactively [20,32].Their primary objective is to detect and report malicious activities to the network security team.IDSs can be classified into two categories based on their behaviourhost-based IDSs (HIDSs) and network-based IDSs (NIDSs).NIDSs analyse network traffic collected from devices such as routers and switches, whereas HIDSs process and analyse log files to detect attacks on a particular host [32]. 
Additionally, IDSs can also be classified based on the techniques they utilise, such as signature-based IDSs and anomaly-based IDSs.Signature-based IDSs identify threats by analysing predefined signatures of known malicious activities, while anomaly-based IDSs monitor and identify unusual network behaviour that deviates from the norm.In summary, IDSs are an essential component of a robust cybersecurity strategy that adds another layer of security that helps detect and prevent potential security threats and attacks. This paper proposes an approach for intrusion detection of malicious activities in network traffic that utilises SPM techniques.As a proof of concept, this work focuses on detecting the second phase of a typical attack life cycle, which is the scanning phase.SPM is an unsupervised learning technique that extracts intrinsic communication patterns from network traffic.The patterns discovered through SPM are then used to detect scanning activities on the monitored network.Additionally, a rule-based approach is proposed as part of the system for the classification of scanning traffic based on the discovered sequential patterns. The rest of this paper is organised as follows: Section 2 discusses related work, Section 3 presents the proposed methodology, The experimental setup, dataset used and results are discussed in Section 4 and finally, the conclusion of the paper is provided in Section 5. RELATED WORK Ananin et al. [1] conducted a comprehensive review of various port scan types, including scanning attacks, and developed a mathematical model for detecting anomalies related to these attacks.They evaluated their approach by implementing an algorithm derived from the mathematical models to test their detection model. Birkinshaw et al. [5] proposed an Intrusion Detection and Prevention System (IDPS) designed to detect port scanning attacks and Denial of Service (DoS) attacks.The authors stress the importance of early detection, such as during port scanning, to prevent the potentially devastating impact of ultimate attacks such as DoS.Their proposed approach utilises Software Defined Network (SDN) technology and is capable of real-time detection.Moreover, the approach can be extended to include the detection of other types of malicious activities.The authors reported a low False Positive Rate (FPR) for their approach. Husák et al. [18] conducted a study highlighting the underutilisation of data mining techniques in the cybersecurity domain.They provided an in-depth discussion of rule mining and SPM use cases, particularly in the context of cyber alert analysis.Moreover, they conducted a survey on alert correlation and attack prediction.The authors evaluated pattern mining techniques, considering speed, using a real dataset of alerts.Finally, they presented a comparison of different methods and shared valuable lessons learned, and thus demonstrated the importance of exploring the full potential of data mining techniques in the cybersecurity domain. Tıktıklar et al. 
[31] conducted a study that investigated the existing SPM algorithms.The study analysed the underlying principles of the algorithms and performed a comparative analysis across various domains such as cybersecurity, telecommunications, air quality monitoring, and user behaviour analysis.The evaluation of the algorithms was based on a real-life telecommunications dataset.The study compared three SPM algorithms, namely GSP, Prefix Span, and CMRules, and concluded that their performance may vary depending on the dataset analysed. Fournier-Viger [11] conducted a comprehensive survey on SPM and identified its trends for discovering patterns in sequential data.SPM algorithms have found numerous applications in different domains ranging from bioinformatics to e-commerce.One of the prominent applications of SPM is natural language processing, particularly in text analysis.In addition, SPM algorithms have been used in market analysis to analyse customers' purchasing patterns, which helps in recommending products to customers.The study discusses some popular SPM algorithms such as PrefixSpan, highlighting their strengths and weaknesses.Jafarian et al. [19] proposed a DNS-based technique for detecting network scanning attacks aimed at enterprise networks, both internal and external.Their approach involves monitoring the network subnet's ingress and egress flow and correlating it with the preceding DNS query/response.This method has been shown to effectively detect scans with less overhead. In their study, Yue et al. [33] analysed the Train Ethernet Consist Network (ECN), which is responsible for transmitting train control signals.They identified intrusion threats to the data security of railway vehicles due to the increased interaction between the train network and the external environment.To address these challenges, they proposed an ensemble-based IDS that can detect ECN attacks such as IP Scan, Port Scan, DoS and Man-in-the-Middle (MITM) attacks.Their proposed IDS employs Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN) to detect such attacks.The authors evaluated their IDS on an ECN testbed and reported a high accuracy of 0.975. In their study, Sagatov et al. [28] emphasised the significance of protecting networks against scanning attacks, which are often the first steps in exploiting network vulnerabilities.These attacks exploit protocol behaviour to gather information about open ports and the services running on a target network, which can then be used to exploit any discovered vulnerabilities.The researchers proposed a method to detect the initial stages of attacks in TCP and UDP, which could help address the challenges of defending against these attacks.They tested their method on a testbed they created and evaluated its effectiveness. Aparicio-Navarro et al. [4] proposed an IDS that uses Fuzzy Cognitive Map (FCM) and Pattern-of-Life (PoL) techniques to detect malicious activities.The IDS is designed to address the increasing complexities of cyber-attacks.In their evaluation, the team reported a high detection rate of 99.76% with a low FPR of 6.33%.Other intrusion detection approaches for detection some of malicious activities that have been a serious threat recently include machine learning approaches [6, 15,16,34].Specifically, feature selection approaches contribute to improved performance evidenced by high TPR and low FPR [21,24]. PROPOSED METHODOLOGY The proposed methodology is illustrated in Fig. 
1.The proposed system takes in network traffic as input, which is then processed by the Network Traffic Filter Module.This module extracts key features from the packets transmitted between hosts such as ICMP type and code IDs or TCP header flags.These key features are then organised into a sequence that accurately represents the activities between the hosts.The extracted features can be related to different communication activities between two hosts communicating through TCP, User Datagram Protocol (UDP) or any other protocol.The output of this module is traffic filtered with only relevant features organised as a database of sequences.This sequence database is passed to the Sequential Pattern Miner Module for further processing. The Sequential Pattern Miner Module extracts frequent sequential patterns that are passed to the Detection Rule Generator module and the deployed Malicious Activity Detection module.At this point, the system follows a process of analysing sequential patterns through both the Malicious Activity Detection Module and the Detection Rules Generator Module.The Malicious Activity Detection Module is designed to identify any instances of malicious activity based on the detection rules that have been implemented within the module.The Detection Rules Generator Module, on the other hand, is responsible for supporting the development of new detection rules.This is done by forwarding unknown patterns to the Network Security Team for a thorough analysis, which is then used to create new detection rules.These newly created rules are then evaluated and ultimately deployed in the Malicious Activity Detection Module.Sequential pattern mining is a technique used to extract valuable insights from sequential data in various domains.For instance, it is used in recommender systems to analyse sequences of products purchased together or subsequently, revealing crucial insights about customer buying behaviour.The discovered sequential patterns are then used to recommend products to customers based on their purchasing patterns [30].Apart from the retail domain, sequential pattern mining has also been successfully employed in other domains, such as cybersecurity [18], to analyse sequential patterns. The generation and analysis of sequences are a crucial part of the proposed system.Sequential Pattern Mining Framework (SPMF), a data mining software, is used to extract sequential patterns from the sequences [10,12].The sequences generated from network traffic are preprocessed to transform them into a format compatible with SPMF.Since SPMF only takes integer values as input, the preprocessing includes converting feature values for the sequences into integers.Once the sequences of activities are ready, they are passed into the Sequential Pattern Miner Module, which extracts sequential patterns from the network traffic sequences for further analysis using SPMF revealing insights and patterns of network malicious activities taking place on the network. 
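As a concrete illustration of this preprocessing, the sketch below encodes per-connection TCP flag sequences into SPMF's plain-text input format (integer items, -1 as the item separator, -2 as the sequence terminator); the packet tuples, flag-to-integer mapping, and file name are illustrative assumptions rather than the authors' implementation.

```python
# Hypothetical sketch of the Network Traffic Filter step: group packets by
# connection, keep only the TCP flag field, map each flag value to an integer
# ID, and write one SPMF-style sequence per connection.
from collections import defaultdict

packets = [
    # (src_ip, src_port, dst_ip, dst_port, tcp_flags)
    ("10.0.0.5", 40001, "10.0.0.9", 22, 0x0002),   # SYN
    ("10.0.0.9", 22, "10.0.0.5", 40001, 0x0012),   # SYN, ACK
    ("10.0.0.5", 40001, "10.0.0.9", 22, 0x0010),   # ACK
    ("10.0.0.5", 40001, "10.0.0.9", 22, 0x0004),   # RST
]

flag_to_id = {}                     # SPMF accepts integer items, so flags get IDs
sequences = defaultdict(list)
for src, sport, dst, dport, flags in packets:
    key = tuple(sorted([(src, sport), (dst, dport)]))   # one sequence per connection
    item = flag_to_id.setdefault(flags, len(flag_to_id) + 1)
    sequences[key].append(item)

with open("sequences.spmf", "w") as fh:
    for seq in sequences.values():
        # each item forms its own itemset; -1 separates itemsets, -2 ends a sequence
        fh.write(" -1 ".join(str(i) for i in seq) + " -1 -2\n")
```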
The sequential pattern mining algorithm utilised, and the one implemented in SPMF, is PrefixSpan [11,27]. PrefixSpan uses two techniques: projection of the database onto subsequences and depth-first search for traversing the entire sequence database when mining frequent sequential patterns. This process of finding sequential patterns is performed recursively. To mine sequential patterns, the algorithm requires an input sequence database and a minimum support threshold, where support is the frequency of occurrence of a sequential pattern, i.e., how many sequences contain that pattern. Upon receiving the input, the algorithm scans the entire database and counts the support of each single-item pattern in the set of sequences. The support of each pattern is then evaluated against the minimum support threshold; any pattern with support below the threshold is considered infrequent and is eliminated. The process is repeated to find the next sequential patterns, consisting of the occurrence of one item followed by another, and this is performed for each sequence in the sequence database. Again, the support of these subsequences is compared against the minimum support, and those found to be infrequent are eliminated. This process continues until ever longer frequent patterns are discovered [27]. One of the benefits of the PrefixSpan algorithm is that it considers only the observed sequence database, as opposed to generating candidate sequences as other algorithms do, and it is easy to extend. EVALUATION RESULTS This section provides a discussion of the evaluation of the proposed approach. It is split into two subsections: Section 4.1 covers the dataset description and the steps followed, and Section 4.2 covers the analysis and discussion. Experimental Setup To evaluate the effectiveness of the proposed system, the reconnaissance dataset [25] consisting of port scanning activities is utilised. Specifically, TCP three-way handshake traffic relating to the TCP flags used for setting up communication connections is derived from this dataset. As a proof of concept for the performance evaluation of the proposed system, the publicly available dataset by the Canadian Institute for Cybersecurity at the University of New Brunswick is utilised [25]. This dataset consists of network traffic generated from 105 IoT devices, on which 33 different attacks were executed, including reconnaissance activities and, more specifically, port scanning. The experiment was performed following the steps illustrated in Fig. 1; the process begins with the extraction of relevant features. The feature extraction focuses on the TCP three-way handshake negotiation between two hosts communicating through TCP. This approach provides a detailed evaluation of the network traffic and its patterns. Specifically, for each TCP connection setup, a sequence is generated from the network packets, with a particular emphasis on the TCP flags of each connection setup. This enables an in-depth analysis that yields insights into how these malicious activities work and what their target goals are, which in turn supports the development of countermeasures to combat them. Analysis and Discussion This section provides a detailed discussion of the results of the experimental setup.
Fig. 3 shows a sample of frequent sequential patterns within the port scanning traffic uncovered by the SPM process. While existing signature-based detection approaches can already detect this type of scan and related ones, advanced cyber-attackers do not confine themselves to standard communication patterns; instead, they experiment with different custom scans that are not necessarily aimed at determining whether a particular port is open but at mapping firewall rules [26]. Once the firewall rules on the target network are well understood, the adversaries can develop a successful strategy to breach firewalls and further probe the network for running services. This leads to the discovery of the version numbers of these services and, ultimately, of vulnerabilities that are exploited to gain access. With the proposed SPM system, these custom patterns will be detected by rules generated for the detection of such malicious activities. Figure 2: Samples of Network Activity Sequences. The proposed approach is evaluated by analysing the TCP handshake traffic and labelling it for horizontal scanning. To create ground truths for horizontal port scans, the approach considers a scenario where a source IP address scans multiple IP addresses on the same port. The number of IP addresses scanned can be set to a sufficiently large value to constitute a horizontal scan. A detection rule is then developed to identify similar patterns across multiple devices, which is indicative of the same type of malicious activity. Beyond just detecting a scan, the frequent sequential patterns detected on multiple hosts are forwarded to the network security team for further insight into the goals of the malicious activity. This approach can reveal the firewall rules that the malicious activity is attempting to circumvent. Once the goals of the sequential patterns are determined, specific rules can be developed to detect similar patterns more quickly. The confusion matrix in CONCLUSION This paper presents an approach for IDSs that utilises SPM for detecting malicious activities in network traffic. The proposed system uses SPM to identify sequential patterns from the network traffic, which are then utilised to detect malicious traffic using a rule-based engine. The system is evaluated on a publicly available reconnaissance dataset for detecting port scanning activities and is capable of detecting advanced custom scans and stealth scans. The proposed system also facilitates the generation of security rules by forwarding unknown sequential patterns related to new advanced custom scans to the network security team for further analysis. This approach provides efficient and realistic labelling of the scanning attack and improves network security. Future work will focus on developing and adding more rules to the system to enable the detection of other malicious activities in addition to port scanning. Figure 1: Proposed Methodology for the detection of malicious activities. Fig. 2 provides a sample of sequences generated for each pair of source IP address & source port and destination IP address & destination port while setting up network communication. For example, for two communicating hosts, namely the scanner and the target host, the generated sequence might be 0x0002 - 0x0012 - 0x0010 - 0x0004, which translates to [SYN] -> [SYN, ACK] -> [ACK] -> [RST]. This communication sequence corresponds to a type of scanning activity known as a stealth scan or half-open scan.
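A minimal sketch of how such rules might be expressed on top of the mined patterns is given below; the flag encoding follows the stealth-scan sequence quoted above, while the threshold, field names, and alert format are illustrative assumptions rather than the rules deployed in the proposed system.

```python
# Hedged sketch of two detection rules built on top of the mined patterns:
# (1) flag the stealth/half-open flag sequence described above, and
# (2) flag a horizontal scan when one source hits many destinations on one port.
from collections import defaultdict

STEALTH_SCAN = [0x0002, 0x0012, 0x0010, 0x0004]   # SYN, SYN/ACK, ACK, RST
HORIZONTAL_THRESHOLD = 20                          # distinct targets per (src, port); assumed

def matches_stealth_scan(flag_sequence):
    return list(flag_sequence) == STEALTH_SCAN

def detect(connections):
    """connections: iterable of (src_ip, dst_ip, dst_port, flag_sequence) tuples."""
    targets = defaultdict(set)
    alerts = []
    for src, dst, dport, flags in connections:
        if matches_stealth_scan(flags):
            alerts.append(("stealth-scan", src, dst, dport))
        targets[(src, dport)].add(dst)
    for (src, dport), dsts in targets.items():
        if len(dsts) >= HORIZONTAL_THRESHOLD:
            alerts.append(("horizontal-scan", src, dport, len(dsts)))
    return alerts
```

In this sketch, a frequent pattern reported by the miner that matches the stealth-scan template across many hosts would raise one alert per affected connection plus a single horizontal-scan alert for the offending source.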
4,583.6
2023-12-21T00:00:00.000
[ "Computer Science", "Engineering" ]
Recognition of Functional Areas Based on Call Detail Records and Point of Interest Data With the recent emergence of big data, there has been significant progress in the study of big data mining and rapid developments in urban computing. With the integration of planning and management in urban areas, there is an urgent need to focus on the identification of urban functional areas (UFAs) based on big data. )is paper describes the concept of communication activity intensity, which is more meaningful than the number of communication activities or the user density in identifying UFAs. )e impact of diverse geographical area subdivisions on the accuracy of UFA recognition is discussed, and a k-means clustering method for dynamic call detail record data and kernel density estimation technique for static point of interest data are established at the traffic analysis zone level. A case study on the region within Beijing’s 3rd Ring Road is conducted, and the results of UFA identification are qualitatively and quantitatively verified. )e causes of large passenger flows on certain metro lines in Beijing are also analyzed. )e highest identification accuracy is obtained for park and scenery areas, followed by residential areas and office areas. In conclusion, the proposed method offers a significant improvement over the identification accuracy of previous techniques, which verifies the reliability of the method. Introduction In the process of urban planning and management [1], the division of urban functional areas (UFAs) is a fundamental step. e distribution of UFAs is directly related to decisionmaking regarding urban transportation, resource management, and factory relocation [2]. As a city develops, the requirements for the integration of urban planning and management change, requiring some dynamic adjustment to the urban planning procedure. At the same time, as urban traffic congestion increases, it is important to alleviate this congestion to prevent an imbalance between urban traffic supply and demand caused by an unreasonable layout of urban functions. However, there is a certain deviation between the existing urban planning and real-world urban development. erefore, the precise and timely identification of UFAs is urgently required. Furthermore, the identification of UFAs has positive significance for policy formulation, resource allocation, transportation, and enterprise development [3]. Of course, it also has great significance for refining future traffic demand management. Traditional urban land use classification is largely based on questionnaire surveys, which are time-consuming, laborintensive, and nonexhaustive and do not reflect the structure of the city in real time [4]. However, some researchers believe that the arrival of the big data era signifies a change in our mode of thinking [5][6][7], and so the application of big data in planning is currently a hot topic of research [8]. ere is also a recognition that constructing UFAs based on big data is essentially self-fulfilling. In recent years, many studies have made full use of big data for urban land use classification or UFA detection [9,10]. For example, the number of regional mobile phone calls has been used to represent the characteristics of urban functions [11], and points of interest (POIs) data have been collected to demonstrate the land use of an area [12,13]. 
However, three challenging problems must be solved before mapping the functional area to very-high-resolution images [14], namely, the spatial units, features used for the analysis, and category criteria. ere have been many studies on UFAs using massive mobile phone data, including Call Detail Record (CDR) databases and Location-Based Service (LBS) databases. Several studies have also focused on the division and selection of Geographical Area Subdivisions (GASs) when using big data. Previous studies considering CDR volumes did not take the GAS size into account. Additionally, there has been a lack of data such as POIs, which include the attributes of land use at the application of CDR data, and little application of combined qualitative and quantitative methods in the verification of results. is paper describes a set of data-driven methods for UFA identification. We consider the abovementioned factors comprehensively, including the influence of different GAS sizes, statistical indicators of CDR data, data sources containing land use features, and verification methods that are both qualitative and quantitative. e purpose of this study is to develop a practicable method for UFA identification, thus enabling reliable decision-making for urban planning and traffic planning and improving the utilization rate of existing big data applications in the engineering field. Based on CDR data and POI data, the proposed approach makes the following contributions. First, their largescale and long-period properties mean that CDR data can be used to record citizens' daily activities. A novel data-driven method of UFA recognition is proposed, and the intensity of daily activities, which depends on land use, makes a great contribution to identifying the function of a district within a city. Second, this study demonstrates that both calculation indicators and GASs must be considered before constructing the CDR model. e size of GASs is shown to have a significant impact on the results of numerical experiments, which further affect the indicators of CDR data such as the Number of Communication Activities (NCA), CAI, and the User Density (UD). ird, POI data overcome the shortcomings of CDR data in the analysis of land use characteristics. Combining POI data and CDR data can improve the accuracy of UFA identification. Finally, although many previous methods have employed qualitative verification, few have also adopted quantitative verification. e structure of this paper is organized as follows. In the next section, relevant research related to UFA identification is reviewed. Novel region classification methods (GASs, CDR data model, and POI data model) and the required data sources are then presented in Section 3. Section 4 describes a case study of Beijing along with qualitative and quantitative verification and presents the results and a detailed discussion. Finally, Section 5 provides some conclusions and recommendations for future studies. Literature Review Research on land use classification and UFA recognition has been the subject of considerable effort in the field of geographic information [15][16][17]. However, satellite remote sensing data and other traditional detection methods have some shortcomings, such as a long collection cycle, high cost, and poor representation of the difference between intrinsic functions. Several scholars [18,19] have studied land use classification models, but their accuracy varies greatly depending on the input data. 
To overcome these limitations, mobile phone data have been used to explore the spatial structure of cities [20]. e results of spatiotemporal changes in urban activities based on mobile phone data can be displayed using thermodynamic diagrams [21]. is opened the way to a wide range of big data applications in urban computing [22][23][24][25][26]. For example, Croce et al. [27] used Floating Car Data (FCD) for zoning and graph building, while Alonso et al. [28] used a great quantity of observed traffic data to estimate the effects of traffic control regulation on the macroscopic fundamental diagram of the traffic network. Croce et al. [29] integrated transport models with big data on transport and energy in an attempt to design transport services with electrical vehicles. CDR data use the auxiliary positioning function of the Global Positioning System (GPS) [30], allowing the analysis of crowd activities or human activity patterns. A literature review has investigated the use of mobile phone data to track travel behavior [31]. Population activities and human activity patterns are closely related to urban land use and UFAs [32], allowing urban functional types to be distinguished from the perspective of "humans" by CDR data. Of course, there has been much research on the application of big data for land use, for example, traffic data from loop sensors [28], Smart Card Data (SCD) [24], FCD data [27], and GPS data [33]. e employment space and commuting scope of the urban population in the suburbs of New York were analyzed by using CDR data in different periods [34]. Urban activities have also been analyzed dynamically in Monza and Brianza province, Italy, using the amount of mobile phone conversations, messages, and the number of mobile switching center users in different time intervals [11]. However, some experts have mentioned the greater influence of density than volume for CDR data applications [35]. Iounousse has identified the land use of a city using unsupervised clustering based on satellite data [36]. In terms of GASs, their size differs from buildings to administrative regions [37]. Moreover, the GASs may not represent a complete region in the city [38]. Additionally, researchers have conducted experiments that indicated the significance of traffic analysis zones (TAZs) in CDR applications and provided useful suggestions for urban transportation planning agencies [39]. UFA recognition typically uses a clustering method [24,36,38,40,41] or a semantic model [9,15,31,42]. Semantic models can realize hierarchical recognition, but ignore the shape and size of objects, which have a great impact on the results. In addition, erroneous classification objects can also lead to incorrect results, and the correlations between the UFAs are known to have a strong influence on the overall classification. Clustering methods can overcome these shortcomings; furthermore, the clustering approach is adaptive to individuals and obtains results quickly and precisely. e lack of discussion on GASs and quantitative verification in previous studies has led to inaccurate recognition results, and the combination of static data in existing methods is inadequate when using CDR data. In this study, to identify UFAs, the k-means clustering model is applied to dynamic CDR data that have been translated into the CAIs of GASs, and kernel density estimation (KDE) is used for static POI data based on TAZs. 
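As a rough sketch of the CDR side of this pipeline, the code below clusters placeholder CAI signatures with k-means and selects the number of clusters using a validity index of the kind defined later in the paper (the ratio of intra-cluster to inter-cluster distance, where smaller is better); the synthetic data, the exact index formulation, and the range of candidate k are assumptions for illustration only.

```python
# Hedged sketch: choose k for k-means over CAI-style signatures via a
# validity index (mean intra-cluster distance / mean inter-centroid distance).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
signatures = rng.random((200, 288))   # 200 GASs x 288 five-minute slots (placeholder data)

def validity_index(X, labels, centroids):
    intra = np.mean(np.linalg.norm(X - centroids[labels], axis=1))
    k = len(centroids)
    inter = np.mean([np.linalg.norm(centroids[p] - centroids[q])
                     for p in range(k) for q in range(p + 1, k)])
    return intra / inter

scores = {}
for k in range(3, 9):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(signatures)
    scores[k] = validity_index(signatures, km.labels_, km.cluster_centers_)

best_k = min(scores, key=scores.get)   # smaller validity index = better partition
print(scores, "-> chosen k =", best_k)
```

In an application to real data, the rows would be the weekday and weekend CAI signatures aggregated per GAS rather than random placeholders.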
Additionally, in the verification of UFA recognition, qualitative and quantitative analyses are used based on static Baidu high-resolution image map data and field survey data. Data Sources e case study covers the region within Beijing's 3rd Ring Road, an area of 159 km 2 . e study described in this paper was conducted using Beijing CDR data from June 1-30, 2015. ese data were obtained from strategic cooperation projects undertaken by our research team and the Beijing branch of China Mobile Communications Group Co., Ltd. As a result, the data have strict privacy protection (with private information removed) and right of use protection. e research included 3198 mobile communication base stations, with 880 macrocellular stations and 2318 microcellular stations. is covered an average of about 4.94 million daily users and 100.73 million daily records. e CDR data format and examples are presented in Table 1. POI Data. e POI data refer to all geographic entities that can be abstracted as points. e POI data were extracted from the Beijing electronic map in 2015 (see Table 2). Baidu High-Resolution Image (BHRI) Map Data. e BHRI map data used in this paper can be found at https://map.baidu.com and are publicly available. Discussions of GASs. Five different GASs were collected from previous studies, namely, the raster layer [43], Voronoi layer [41,44], road network segmentation layer [33], TAZ layer [24], and administrative layer [6]. e influence of the different GASs on the identification of UFAs is discussed in Table 3. From the discussion in Table 3, the TAZ layer appears to have several advantages in terms of UFA identification. CDR Data Model. Compared with other methods, clustering has many advantages, such as easy operation, rapid output of results, and the ability to focus on individuals. e k-means clustering method is widely used in clustering analysis of UFA recognition based on human activity data. Hence, the k-means clustering is used in this study to deal with CDR data. As there is some difference between human travel characteristics on workdays and at the weekend [45], these periods are analyzed separately, which is very helpful for the recognition of UFAs. In addition, NCA, CAI, and UD are also considered in the CDR data model. Several Definitions. e following items are used in our model of CDR data (see Table 4): (1) CAIs of GAS: the ratio of the number of calls made or received in a certain GAS at a fixed time interval of the day to the area of the GAS coverage (2) UDs of GAS: the ratio of the number of users in a certain GAS at a fixed time interval of the day to the area of the GAS coverage (3) Matrix of NCAs, CAIs, and UDs: the distributions of NCA, CAI, and UD in each GAS at a 5 min time slot of the day, expressed as V n (τ), v w n (τ), and u w n (τ), where τ denotes the 5 min time slot, τ ∈ 0, 1, . . . , 287 { } (4) Signature of each GAS: the aggregation result of the NCA matrix, CAI matrix, and UD matrix, which indicates a certain UFA, expressed as Index Calculation (1) Matrix of user numbers: where U n (δ, τ) is the matrix of user numbers in the nth GAS; n ∈ 1, 2, . . . , N { } is the number of GASs; δ represents each day of a month (δ ∈ 1, 2, . . . 30 { } in this paper); C is the number of mobile communication base stations in a certain GAS; P c n (δ, τ) represents the number of users in the nth GAS connected to the cth mobile communication base station in the τth 5 min interval of day δ. 
(2) Matrix of communication numbers: where V n (δ, τ) is the matrix of communication numbers in the nth GAS; R c n (δ, τ) represents the number of communications in the nth GAS connected to the cth mobile communication base station in the τth 5 min interval of day δ. (3) Area of GASs: area statistics are mainly determined using ArcGIS, and the nth GAS area is referred to as A n . e specific statistical operations are not discussed in this article. (4) Average value calculation: (a) e average user numbers in GASs: Journal of Advanced Transportation where U w n (τ) represents the matrix of average user numbers in GASs on weekday or weekends; w ∈ 1, 2 { }, with 1 denoting weekday and 2 denoting weekend. (b) Average communication numbers of GASs: where V w n (τ) represents the matrix of average communication numbers in GASs on a weekday or weekend. where v w n (τ), u w n (τ) denote the matrixes of CAIs and UDs, respectively and A n is the area of the nth GAS (km 2 ). (b) Signature of CAI: (c) Signature of UD: where S n,Ω w (τ), s n,Ω w (τ), t n,Ω w (τ) are the signatures of GASs based on the NCA, CAI, and UD on a weekday or a weekend. e signatures are calculated by SPSS. Clustering Analysis. Unsupervised clustering technology requires the number of clusters to be known beforehand. In the case of k-means, the optimal number of clusters is determined by whether close clustering or good separation is required. A validation method [46] can be used to select a better value of k. e cluster validity index is the ratio of the intracluster distance to the intercluster distance. e ideal classification will minimize the intracluster distance and maximize the intercluster distance, so a smaller value of the validity index indicates better classification. e cluster validity index is calculated as follows: where C ita and C ite denote the intracluster and intercluster distances; VI is the validity index; C p is the set of signatures belonging to the cluster defined by centroid c p ; Z n represents a signature, such as S n , s n , and t n ; and p, q ∈ k. Journal of Advanced Transportation Figure 1 shows the clustering results obtained with different values of k. e following can be inferred: (1) e validity value of the CAI data is smaller than that of the NCA and UD data, which indicates that clustering analysis based on CAIs results in a large intracluster distance and small intercluster distance. is suggests a better clustering result and demonstrates that the size of the GAS has a significant impact on the recognition of urban functions. (2) e UD and NCA data do not provide good results, indicating that the NCAs or UDs may not have as great an impact on the clustering results as the CAIs, which can broadly distinguish the mechanisms of CDR data. ere are many situations in which the communication base stations are triggered, including active triggering and passive triggering, cross-region triggering, and switching on-off. In practice, there may be great deviations in results if the communication activity is ignored. (3) e different k values produce different values of VI. e smallest value is given by the CAI data with k � 6. us, combined with some relevant research about the types of urban functions and the Chinese standard [47], five single UFAs (residential, commercial, park and scenery, office, and education areas) plus mixed areas are considered in this paper. Name Layer diagram Discussion Raster layer is kind of layer has no difference between cells. 
In reality, small segmentation makes little significance, but the theoretical basis is insufficient with large divisions. Moreover, this layer crosses the traffic corridor in the city, which may lead to incorrect identification results Voronoi layer All units in this layer have irregular sizes, and the accuracy of recognition obviously decreases in areas where the density of mobile base stations is high, making incorrect results more likely in these units. e phenomenon of crossing traffic corridors also exists Road network segmentation layer Although this layer does not cross traffic corridors, there is a huge discrepancy in the road network distribution between urban centers and suburbs, which strongly affects the results. Furthermore, the results from this layer lack practical significance TAZ layer ere is no crossing of traffic corridors in this layer, and the division takes multiple factors into account, such as landform, administration, human history, road network, and urban function. us, the results are much more meaningful in terms of traffic field, land use, and urban planning Administrative layer e granularity of the division is somewhat coarse, making correct identification difficult. Furthermore, the recognition results are of little significance for urban planning and transportation planning Journal of Advanced Transportation 5 Seven days of the week, Δ ∈ Monday, Tuesday, Wednesday, Thursday, Friday, Saturday, Sunday U n (δ, τ) Matrix of user numbers in the nth GAS V n (δ, τ) Matrix of communication numbers in the nth GAS C Number of mobile communication base stations in a certain GAS P c n (δ, τ) Number of users in the nth GAS R c n (δ, τ) Number of communications in the nth GAS, which is the cth mobile communication base station in the τth 5 min interval of day δ A n Area of nth GAS N w Number of weekdays or weekends in a month, N 1 � 22, N 2 � 8 in this paper U w n (τ) Matrixes of average user numbers of GASs on a weekday or weekend V w n (τ) Matrixes of average communication numbers of GASs on a weekday or weekend v w n (τ) Matrixes of CAIs u w n (τ) Matrixes of UDs S n, Ω w (τ) Signature of GASs based on the NCAs on weekday or weekend s n, Ω w (τ) Signature of GASs based on the CAIs on weekday or weekend t n, Ω w (τ) Signature of GASs based on the UDs on weekday or weekend C ita Intracluster distances C ite Intercluster distances VI Validity index C p Set of signatures that belong to the cluster defined by a centroid c p Z n Signature, S n , s n , t n ∈ Z n , p, q ∈ k f(s) KDE function at spatial position s h Distance attenuation threshold m Number of elements for which the distance is less than or equal to h from location s c j Sample element g(x) Spatial weighting function i Type of cluster; i ∈ (1, 2, 3, 4, 5) a i Identification index of type i S i A Actual function area of a cluster i, km 2 S i B Area of detecting function, which is the area of the GAS containing i, km 2 6 Journal of Advanced Transportation POI Data Processing. e POI data were classified for modeling. First, any POI data unrelated to functional identification were removed, for instance, bus station data, exit and entrance data, and other POI data. Several POIs were then reclassified according to the Chinese standard [47]; in this study, the school POI data were divided into university, high school and middle school, and primary school and kindergarten. Residential areas were permitted to include some public facilities, such as primary schools, kindergartens, and convenience stores. 
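The reclassification step just described can be sketched as a simple category mapping; the raw category names, record layout, and mapping below are illustrative assumptions rather than the exact classes of the Beijing POI table.

```python
# Illustrative sketch of POI preprocessing: drop categories unrelated to
# functional identification and remap the rest onto the study's functional classes.
DROP_CATEGORIES = {"bus station", "exit and entrance", "other"}

RECLASSIFY = {
    "university": "education",
    "high school and middle school": "education",
    "primary school and kindergarten": "residential",   # treated as a residential support facility
    "convenience store": "residential",
    "office building": "office",
    "government agency": "office",
    "supermarket": "commercial",
    "shopping mall": "commercial",
    "hotel": "commercial",
    "restaurant": "commercial",
    "park": "park and scenery",
}

def preprocess_pois(pois):
    """pois: iterable of dicts with 'category', 'lon', and 'lat' keys."""
    cleaned = []
    for poi in pois:
        category = poi["category"].lower()
        if category in DROP_CATEGORIES or category not in RECLASSIFY:
            continue
        cleaned.append({**poi, "function": RECLASSIFY[category]})
    return cleaned
```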
In addition, office buildings, government agencies, and parking lots of office buildings were classified as office areas. Commercial areas were distinguished by supermarkets and shopping malls, hotels, and restaurants. e processing results of the POI data are presented in Table 5. POI Model. In general, nonparametric estimation, which is not affected by the overall parameters, is the most widely used method for determining the probability density. Moreover, it can be applied to any sample analysis. KDE is a nonparametric estimation method for the unknown probability density function. us, the POI data model uses KDE. e calculation can be expressed as follows: where f(s) is the KDE function at spatial position s; h is a distance attenuation threshold; m is the number of elements for which the distance is less than or equal to h from location s; c j is the sample element; and g(x) is the spatial weighting function. e two key parameters in the KDE function are g(x) and h. Different average weights must be selected when choosing a certain g(x) function. e uniform function gives the same weight to all points within the scope of the study; the triangular function gives a linear decreasing trend; the Epanechnikov function is relatively slow; and the Gaussian function has no boundaries, allowing weights to be assigned to all points. is study adopts an adaptive bandwidth for the KDE of the Gaussian kernel function [48], as this ensures better convergence and smoothness than the fixed-bandwidth KDE function. Recognition Procedures. e identification procedure is illustrated in Figure 2. First, based on CDR data, we calculate the parameters required for the index calculations. Second, the characteristics of the CDR clustering results are analyzed, including the weekday and weekend features, number of peak values, intensity of peak values, and distribution of peak values; this allows the travel behaviors and public cognitions to be understood. ird, based on the clustering of POI data, the UFA identification results are modified. Fourth, verification is conducted using the BHRI data, field data, and the identification index. Finally, we obtain the final UFA results. Results and Discussion. To recognize the UFAs, the clustering results of POI data are shown in Figures 3 and 4. e characteristics of residents' travel behavior and public cognition are now introduced to explain the signatures. UFA identification based on Figure 3 is also discussed. Cluster 1: the main feature of this cluster is that it has a very high CAI on a weekday and an obvious double peak concentrated at 08:00-11:00 and 14:00-16:00. e CAI in the morning is greater than that in the afternoon. Furthermore, there is still a certain number of CAIs between 18:00 and 21: 00, which indicates that some people are still working during this period. e CAIs of this cluster are lower on the weekend than on weekdays, probably the result of overtime being worked on weekends. However, the value of CAI begins to decrease at 16:00 and is very low after 17:00, which suggests that the work is much more flexible on weekends than on weekdays, so employees can leave their offices early. Based on this analysis, cluster 1 is considered to represent office areas. Cluster 2: this cluster is characterized by the fact that the CAIs on the weekend are higher than those on weekdays, and there is no double peak on weekdays. 
In contrast, there is a peak activity from 15:00 to 17:00 on weekends, which indicates that people in these areas use their mobile phones to contact friends, fellow travelers, or drivers to arrange their journey home. Combined with the POI results in Figure 4(a), we can infer that this is the signature of park and scenery areas. Cluster 3: this cluster has the obvious feature that the CAI values on workdays and weekends are relatively low. Additionally, there is a double peak on weekdays and higher CAIs than on weekends. However, no double peak occurs at the weekend. is can be explained by residents working at home on workdays and taking a nap after lunch. In contrast, people who are enjoying their leisure time do not need a specific period of rest. ere are around 500 calls/km 2 on weekdays, which might be to invite friends or clients to dinner. us, these GASs are likely to be residential areas. Cluster 4: in this cluster, a notable double peak occurs on the left side and has a higher value on weekdays than on weekends. However, the CAIs on both workdays and weekends are not especially high. e trends on weekdays and weekends are similar after 19:00, and the intensity values are only slightly different between day and night, which indicates that it is mainly young people living and working here. In conclusion, this kind of schedule suggests universities and high schools. With the help of the POI data in Figure 4(b), we can firmly conclude that these are education areas. Cluster 5: the fifth cluster type features a slight difference in intensity between workdays and weekends, and there is a Journal of Advanced Transportation double peak on working days. Furthermore, high CAIs are maintained from 08:00 to 21:00 and longer into the night on weekends. ough the CAI values decline on both weekdays and weekends after 21:00, their number and duration during this period on weekends are stronger and longer than that on weekdays. All of these features are more likely to occur in commercial areas. Cluster 6: with the lowest CAI values on weekdays and almost the highest values (albeit with significant fluctuations) on weekends, this cluster cannot be accurately summarized, especially at weekends. At the same time, no travel behaviors or human activities can fully explain this pattern. us, this region is tagged as a mixed area. e cluster members of each signature, as calculated by SPSS, were displayed in GIS, and the spatial distribution of the UFA recognition results are shown in Figure 5. According to the UFA recognition results in Figure 5, several conclusions can be drawn. First, the residential areas have a high density of occupation and are widely distributed. However, the distribution of park and scenery areas is relatively concentrated. Second, most of the GASs south of metro line 1 are residential areas; in contrast, the educational areas are largely located to the north of metro line 1, which may result in tidal traffic situations. As a result, the passenger flow on north-south subway lines (e.g., metro lines 4 and 5) is very high. ird, office areas are mainly distributed around and between metro lines 6 and 1. is places significant traffic pressure on these metro lines, with the spatial and temporal characteristics of passenger flow making for heavy daily average passenger numbers Fourth, the concentrated distribution of park and scenery areas, especially in urban central areas, brings greater Qualitative Verification. 
In terms of qualitative verification, we consider some typical GASs and field survey data, as well as the BHRI map. Representations of the six clusters are discussed in Table 6. Quantitative Analysis. In the quantitative analysis of the proposed methodology, the mixed areas about 8.5% of the total area are neglected because more than one functional component is present. For those areas with a sole functional result, the identification index is defined as the ratio of the area covered by that function to the whole area of the GAS. is is schematically illustrated in Figure 6. Of course, this index can be used to represent the accuracy of recognition. e actual function area is calculated using the field survey results and the BHRI map, and the area of each GAS is computed by GIS. e identification index is computed as follows: where i is the cluster type, i ∈ (1, 2, 3, 4, 5); a i is the identification index of type i; S i A is the actual function area of cluster i (km 2 ); and S i B is the area of the GAS in which i is located (km 2 ). e lowest identification index was found to be 63.16% for the commercial area, which is higher than the overall accuracy obtained in previous studies [49,50]. e dynamic needs of urban planning and management can be satisfied if the identification index is above 60% [50]. us, the identifications have great practical significance because the results are all above 60%. e average identification index is 78.30%, far more than the mean value achieved in the previous research, which demonstrates the great progression made by this study. As Table 7 shows, the park and scenery areas have the highest identification index of 96.00%. is can be explained by the fact that the GASs or TAZs were considered when these functions were divided; furthermore, it shows the significance of choosing reasonable GASs before identifying the urban functions. e next-highest identification index values are given by the residential areas and office areas. e POIs of multitype residential facilities (e.g., kindergartens, drug stores, and convenience stores) are very helpful in identifying residential areas. Moreover, there are very high CAIs in office areas, so an impressive identification index can be achieved. e education areas and commercial areas have lower identification index values. is can be explained by the many hotels for conference attendees and departments for school staff around the education area; likewise, with complex land use close to commercial areas, people come and go, but do not stay too long, which affects the CAIs to a certain extent. Identification results is region includes Bank of Beijing Mansion, China Unicom building, Yuhang building, Kaifu building, and others, which means the office area has been correctly identified GAS GAS of BIT Identification results e residential districts of Songjiazhuang, Fangnan, Zhengxin, and others are located in this GAS, and there is no doubt that these are residential areas 12 Journal of Advanced Transportation Conclusions and Future Work e tendency toward integrated urban planning and management requires dynamic recognition of UFAs. However, the selection of GASs in the previous research has nonnegligible effects on the identification of UFAs. In this study, three indexes of CDR were presented, and the concept of CAI was selected as the main focus of the study. Moreover, POI data were found to be very helpful in identifying UFAs. 
Thus, k-means clustering for CDR data and the KDE method for POI data were applied to the region within Beijing's 3rd Ring Road. It is worth noting that the proposed method could be used with a combination of other information, such as SCD data or blog check-in data, which contain POI information. In the final UFA identification results, the park and scenery areas were identified most accurately. The average identification index was about 78.30%, far higher than in previous research. The findings of this study are conducive to dynamic urban management and planning. Note that the proposed method has not yet been applied to the whole city in a case study. Moreover, urban planning theories and related planning data should be considered in future research on UFA identification. Additionally, further research may focus on the application of new technologies in big data mining, such as deep learning and machine learning, which can provide reliable information for the integration of planning and management. Data Availability The data used to support the findings of this study are available from the corresponding author upon request. Disclosure The manuscript abstract was previously presented at the Transportation Research Board 98th Annual Meeting. Conflicts of Interest The authors declare that there are no conflicts of interest regarding the publication of this paper.
7,504
2020-04-02T00:00:00.000
[ "Computer Science" ]
Frontal Structural Neural Correlates of Working Memory Performance in Older Adults Working memory is an executive memory process that allows transitional information to be held and manipulated temporarily in memory stores before being forgotten or encoded into long-term memory. Working memory is necessary for everyday decision-making and problem solving, making it a fundamental process in the daily lives of older adults. Working memory relies heavily on frontal lobe structures and is known to decline with age. The current study aimed to determine the neural correlates of decreased working memory performance in the frontal lobes by comparing cortical thickness and cortical surface area from two demographically matched groups of healthy older adults, free from cognitive impairment, with high versus low N-Back working memory performance (N = 56; average age = 70.29 ± 10.64). High-resolution structural T1-weighted images (1 mm isotropic voxels) were obtained on a 3T Philips MRI scanner. When compared to high performers, low performers exhibited significantly decreased cortical surface area in three frontal lobe regions lateralized to the right hemisphere: medial orbital frontal gyrus, inferior frontal gyrus, and superior frontal gyrus (FDR p < 0.05). There were no significant differences in cortical thickness between groups, a proxy for neurodegenerative tissue loss. Our results suggest that decreases in cortical surface area (a proxy for brain structural integrity) in right frontal regions may underlie age-related decline of working memory function. INTRODUCTION Working memory is a vital process underlying human thought. Working memory is a limited capacity system that involves active manipulation of information currently being maintained in focal attention (Glisky, 2007). Working memory is one component of executive function that allows for transitional information to be held and manipulated temporarily in memory stores, before either being forgotten or encoded into long-term memory (Baddeley, 1992;Goldman-Rakic, 1996;Owen et al., 2005). As with other components of executive function, working memory processes rely heavily on frontal lobe structures (Courtney et al., 1998). Working memory processes guide voluntary or goal-directed behaviors including short-term maintenance of relevant information, mental manipulations, and mental organization of imminent sequence of actions (Goldman-Rakic, 1987;Boisgueheneuc et al., 2006). Working memory is necessary for everyday decision-making and problem-solving, making it a fundamental process in the lives of older adults. Activities of daily living such as preparing meals, taking medication, paying bills, as well as organizing and planning daily routines and appointments require working memory and other components of executive function (Mograbi et al., 2014). As such, declines in working memory can lead to deficits in these domains and consequently lead to loss of independence and decreased quality of life (Klingberg, 2010;Williams and Kemper, 2010). Working memory performance can be impacted by age-related reductions in working memory capacity and is increasingly susceptible to interference in older adults. Not surprisingly, memory loss and perceived declines in memory performance are frequent complaints in older adult populations (Gazzaley et al., 2007;Kaup et al., 2014). 
As frontal cortices undergo the most pronounced structural decline with advanced age (Lemaitre et al., 2012) and play an important role in working memory function (Freeman et al., 2008), identifying frontal structures underlying age-related working memory decline may provide important therapeutic targets for combating cognitive aging. The prefrontal cortex participates in cognitive features of behavior, engaging the organization of goal-directed behaviors (Fuster, 1988). Although the frontal lobe is the last brain region to mature in humans around age 25, it is also one of the first regions to structurally decline during the aging process, following the 'last in, first out' model of aging . Studies of brain morphometry show that the prefrontal cortex experiences the most striking reductions (Lemaitre et al., 2012). Similarly, agerelated decreases in cortical surface area are greatest in frontal regions , while the greatest age-related volume reductions occur in the middle frontal gyrus, the superior frontal gyrus (SFG), and the frontal pole (Lemaitre et al., 2012). The frontal lobes, and the right frontal lobe in particular, play an important role in working memory function. The ability to hold onto visuospatial information, to be fractioned into separate visual and spatial components, is thought to be principally represented within the right hemisphere (Baddeley, 2000). Prabhakaran et al. (2000) compared the retention of verbal and spatial information held in integrated or unintegrated forms using functional magnetic resonance imaging (fMRI), and found greater right frontal activation for integrated information, providing evidence for the right frontal lobe being particularly critical for retention of integrated information (Baddeley, 2000;Prabhakaran et al., 2000). Previous fMRI work studying the functional neural basis of aging and working memory have shown distinct activation patterns in older versus younger adults, and for high versus low performance rates on an N-Back working memory task (Cabeza, 2002;Dolcos et al., 2002). Positron emission tomography (PET) and fMRI studies of higher-order cognitive functions have been associated with prominent activations in the prefrontal cortex. Often, activations are sometimes lateralized, which may reflect the nature of the processes and/or the stimuli involved (Nyberg et al., 1996;Cabeza et al., 2002). Prefrontal cortex activity tends to be less asymmetrical in older than younger adults . Young high performers on working memory tasks tend to exhibit significant activation of the dorsolateral prefrontal cortex (DLPFC) lateralized to the right hemisphere. Older adults with low performance exhibit more robust right hemisphere activation than young, potentially reflecting inefficiency of activation, whereas older adults who perform at the same level as young adults exhibit bilateral activation patterns in the prefrontal cortex. This difference in activation patterns of high performing older adults compared to high performing younger adults may counteract age-related neurocognitive declines as a form of compensatory mechanism (compensation hypothesis), or it could reflect age-related deficits in recruiting specialized neural mechanisms (dedifferentiation hypothesis; Cabeza et al., 2002). While the functional pattern of working memory performance in older adults has been well explored, the age-related structural alterations in frontal cortices underlying working memory decline versus compensation remains unclear. 
Older adults exhibit significant deficits in tasks that involve active manipulation, reorganization, or integration of the contents of working memory (Salthouse et al., 1989). Investigating the structural neural correlates of performance on working memory tasks in older adults is necessary to understand how working memory systems change with age. This study aimed to determine the frontal structures underlying poorer working memory performance. We hypothesized that working memory deficits would be associated with decreases in cortical surface area in right frontal brain regions in healthy older adults. We did not expect any significant changes in cortical thickness, as decreases in thickness signify neurodegenerative tissue loss (Shefer, 1973;Fischl and Dale, 2000) and our population of interest was healthy older adults. In contrast, cortical surface area serves as a proxy for gray matter structural integrity (Fischl and Dale, 2000;Salat et al., 2004;Dickstein et al., 2007;Lemaitre et al., 2012). Participants We recruited healthy community-dwelling older individuals in the Gainesville and North Florida region (N = 56, 50% female, 52 right-handed). A thorough medical history questionnaire for each participant provided detailed information on health status and medication status, and allowed us to rule out the presence of age-related brain disorders. Exclusionary criteria for the study included pre-existing neurological or psychiatric brain disorders, MRI exclusions, mild cognitive impairment (MCI) or diagnosis with a neurodegenerative brain disease (i.e., dementia or Alzheimer's disease). The Montreal Cognitive Assessment (MoCA) was given to assess general cognitive ability and rule out possible MCI (Nasreddine et al., 2005). Additionally, the MoCA allowed us to control for differences in global cognitive function and ensure our analyses were directly relevant to working memory rather than a reflection of generalized cognitive deficits. The MoCA cut-off score to be an eligible participant in the study was 20. A comprehensive neuropsychological battery was performed on each participant to provide for clinical assessment of MCI status. A clinical neuropsychologist assessed participant performance on the battery to determine MCI status. No participants in this sample were clinically indicated to have MCI using this approach. Participants did not significantly differ in age, sex, education, MoCA score, or intracranial volume (ICV; p > 0.05). ICV is especially important to control for as it closely relates to brain size (Hentschel and Kruggel, 2004;Im et al., 2008), and thus was also included as a covariate in our model to rule out the possibility of head size driving any cortical thickness or cortical surface area differences between groups. The total sample (N = 56) consisted of 28 female and 28 male older adults. The ranges for the following covariates in the total sample were: MoCA scores = 20-30, age = 44-89 years old, education = 12-20 years, and ICV = 975547.27-1988968.30. See Table 1 for demographic means, standard deviations, and statistics for the total sample. All participants in the study underwent cognitive testing followed by an MRI scanning session where the N-Back task was performed inside the scanner. fMRI data on N-Back will be presented in a subsequent paper. N-Back performance data was used to characterize participants into high and low working memory groups (described in detail below). Prior to any study procedures, all participants provided written informed consent.
The study protocol was carried out in accordance with the Declaration of Helsinki, and the University of Florida Institutional Review Board approved all procedures in this study. N-Back Task The N-Back task requires continuous performance in which participants are asked to monitor the identity of a series of stimuli and indicate when the currently presented stimulus is the same as the one presented n-trials previously (Kirchner, 1958;Owen et al., 2005). This task is known to engage working memory processes and thus was used in this study. Participants completed an N-Back practice session to ensure that all instructions were clear and that participants could accurately perform the task. All N-Back tasks were created with E-Prime version 2.0 (Psychology Software Tools Inc., Pittsburgh, PA, USA). The task was completed inside the scanner, with images projected onto a screen behind the participants' head and viewed through a mirror mounted on the head coil. Responses were made via an MRI-compatible button box, using the middle and index finger. Participants performed two runs of the N-Back, which included both 0-Back and a 2-Back version of the N-Back, totaling 15 min of functional task time. For 0-Back, participants were asked to respond by button press (with index finger) when they saw a X on the screen, and respond with another button press (with middle finger) when they saw any other letter (distractors). This task was used as an attention control. Each letter was displayed one at a time, for 700 ms, followed by a crosshair for 2300 ms. The participants could respond by button press at any point in the total 3000 ms trial interval. During the 2-Back task, participants viewed single letters (i.e., letters of identical font, color, size) on the screen with the same timing scheme as 0-Back. When a letter appeared and was the same as the letter that was presented two letters prior, participants were asked to respond to that target letter by a button press of their index finger (see Figure 1 for visual example). All letters that did not match the 2-Back pattern were used as distractors, and participants were asked to respond by button press with their middle finger. The order of whether participants received 2-Back or 0-Back first was randomized. N-Back Working Memory Performance Characterization N-Back accuracy rates were collected and recorded in E-Prime v2.0 then transferred as total percent accuracy scores of both runs into SPSS. The data was then processed through SPSS version 21. All participants responded to greater than 75% of all N-back trials. A median split based on 2-Back accuracy was performed to determine high versus low performers. High performers (N = 29) scored 67% or above correctly on 2-Back, while low performers (N = 27) had an accuracy score of 66% or below. For 5 participants (three high performers, two low performers), one of their runs was lost during data collection due to technical problems. For these participants, the one run collected was used for analyses. Working Memory Group Demographics Behavioral data for 0-Back average accuracy was 83.71 ± 17.38% (range = 19-98%) while 2-Back average accuracy was 64.88 ± 16.93% (range = 20-90%) for the overall sample (N = 56). High and low working memory performers on the 2-Back task were determined by performing a median split of accuracy scores. Accuracy scores of 67% or above were grouped as high performers. In contrast, scores below 67%, were grouped as low performers. 
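As a concrete illustration of the performance characterization just described (per-participant 2-Back accuracy followed by a median split at 67%), a minimal Python sketch is given below; the trial-level data layout and column names are assumptions, not the study's actual E-Prime/SPSS pipeline.

```python
# Minimal sketch: compute 2-Back accuracy per participant and apply the
# 67% cut-off described in the text. Column names are assumed.
import pandas as pd

trials = pd.DataFrame({
    "subject": ["s01"] * 4 + ["s02"] * 4,
    "task":    ["2back"] * 8,
    "correct": [1, 1, 0, 1, 0, 1, 0, 0],
})

accuracy = (trials.loc[trials["task"] == "2back"]
                  .groupby("subject")["correct"]
                  .mean() * 100)

# Median split as described: >= 67% -> high performers, otherwise low performers
group = accuracy.apply(lambda a: "high" if a >= 67 else "low")
print(pd.concat({"accuracy_pct": accuracy, "group": group}, axis=1))
```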
See Table 2 for more detailed task performance information for high and low groups. The range for the following covariates for the low performing group was: age = 47-89, education = 12-20, MoCA = 20-30, ICV = 1001786.49-1954571.14. The range for the following covariates for the high performing group was: age = 44-85, education = 12-20, MoCA = 21-30, ICV = 975547.27-1988968.30. High versus low working memory groups did not significantly differ on the above covariates (p > 0.05). For high and low group demographic means, standard deviations, and test statistics, see Table 1. MRI Acquisition T1-weighted MPRAGE structural MRI scans were performed on all participants. Participants were imaged in a Philips Achieva 3.0 Tesla (3T) scanner (Philips Electronics, Amsterdam, The Netherlands) with a 32-channel receive-only head coil. Scan parameters: repetition time (TR) = 7.0 ms; echo time (TE) = 3.2 ms; flip angle = 8°; field of view = 240 mm × 240 mm × 170 mm; voxel = 1 mm × 1 mm × 1 mm. Foam padding was placed around the head to limit motion during the scan. No images exhibited evidence of motion artifact. Participants were given headphones and earplugs to minimize noise while inside the scanner. T1-Weighted Neuroimaging Processing Cortical reconstruction and volumetric segmentation were performed with the FreeSurfer version 5.3 image analysis suite. The technical details of these procedures are described in prior publications (Dale and Sereno, 1993; Dale et al., 1999; Fischl et al., 1999a,b, 2001, 2002, 2004a; Fischl and Dale, 2000; Segonne et al., 2004; Han et al., 2006; Jovicich et al., 2006). Briefly, this processing includes removal of non-brain tissue (Segonne et al., 2004), automated Talairach transformation, segmentation of the subcortical white matter and deep gray matter volumetric structures (Fischl et al., 2002, 2004a), intensity normalization (Sled et al., 1998), tessellation of the gray matter/white matter boundary, automated topology correction (Fischl et al., 2001; Segonne et al., 2007), and surface deformation following intensity gradients to optimally place the gray/white and gray/cerebrospinal fluid borders at the location where the greatest shift in intensity defines the transition to the other tissue class (Dale and Sereno, 1993; Dale et al., 1999; Fischl and Dale, 2000). Once the cortical models are complete, a number of deformable procedures can be performed for further data processing and analysis, including surface inflation (Fischl et al., 1999a), registration to a spherical atlas which utilizes individual cortical folding patterns to match cortical geometry across subjects (Fischl et al., 1999b), and parcellation of the cerebral cortex into units based on gyral and sulcal structure (Fischl et al., 2004b; Desikan et al., 2006). This method uses both intensity and continuity information from the entire three-dimensional volume in segmentation and deformation procedures to produce representations of cortical thickness, calculated as the closest distance from the gray/white boundary to the gray/CSF boundary at each vertex on the tessellated surface (Fischl and Dale, 2000). The maps are created using spatial intensity gradients across tissue classes and are therefore not simply reliant on absolute signal intensity. The maps produced are not restricted to the voxel resolution of the original data and thus are capable of detecting submillimeter differences between groups. FreeSurfer measures have been shown to be both reliable and valid.
Procedures for the measurement of cortical thickness have been validated against histological analysis (Rosas et al., 2002) and manual measurements (Kuperberg et al., 2003;Salat et al., 2004). FreeSurfer morphometric procedures have been demonstrated to show good test-retest reliability across scanner manufacturers and across field strengths (Han et al., 2006;Reuter et al., 2012). Once processed through FreeSurfer, all output was visually inspected for processing errors (e.g., mislabeling white matter, gray matter, or skull inclusions) and manually corrected for when necessary. Neuroimaging Measures: Cortical Thickness and Cortical Surface Area The relationship between cortical surface area and cortical thickness creates a quantifiable brain volume. For example, although two objects may have the exact same volume, the shape or contours of the objects can vary considerably, exhibiting very different topography. If we consider a cube measuring 3 × 3 × 3 versus a rectangular shape measuring 3 × 9 × 1, both shapes share the same volume of 27 mm 3 ; this exemplifies that surface area and thickness may exhibit a very different pattern while sharing the same volume. When we consider the human brain, age-related changes in surface area versus thickness may have different implications for behavioral and cognitive processes. These two components exhibit distinct patterns of change when comparing healthy versus diseased brains (Dotson et al., 2015). Gray matter, which makes up the cortical ribbon, experiences volume loss throughout adulthood into advanced age (Scott and Thacker, 2004). Neuronal density is relatively stable throughout life; any robust decrease in neuronal density is thought to reflect a disease state (Morrison and Hof, 2002;Dickstein et al., 2007). Decrease in cortical thickness is a proxy for neuronal loss due to neurodegenerative disease (Shefer, 1973;Fischl and Dale, 2000). While changes in cortical surface area and its relationship to general cognitive function is less known (Schnack et al., 2015), cortical surface area is thought to reflect the structural integrity of gray matter (Fischl and Dale, 2000;Salat et al., 2004;Lemaitre et al., 2012). It has been suggested that preservation in neuronal number, but loss of neuronal dendritic architecture underlies neocortical volume loss with increasing age in the absence of Alzheimer's disease (Morrison and Hof, 2002;Freeman et al., 2008). In normal healthy aging, to our best knowledge, there are no studies that have closely examined cortical surface area changes and the possible role this may play in driving age-related declines in working memory function. Regions of Interest and Neuroimaging Statistical Analyses Frontal lobe regions (defined as all regions anterior to the pre-central gyrus using the Desikan-Killiany parcellation, see Table 3 for a comprehensive list of ROIs) and two control regions outside the frontal loges (left and right pericalcarine areas of the occipital cortex; i.e., V1; Desikan et al., 2006) were analyzed for both thickness and area using separate univariate general linear models with performance group (high versus low) as a fixed factor and age, sex, years of education, ICV and MoCA score as covariates using the software SPSS version 21. Control sites were included to assess the regional specificity of our frontal focused analyses. 
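To make the analysis just described more tangible, the sketch below fits one such univariate general linear model in Python (the study itself used SPSS); the data file, column names, and the choice of the right superior frontal gyrus as the example region are assumptions for illustration only.

```python
# Hedged sketch: surface area of one ROI modelled with group as a fixed factor
# and age, sex, education, ICV and MoCA as covariates (statsmodels, not SPSS).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("surface_area_by_region.csv")  # hypothetical per-subject table

model = smf.ols(
    "rh_superiorfrontal_area ~ C(group) + age + C(sex) + education + icv + moca",
    data=df,
).fit()

print(model.summary())
# p-value of the group contrast (assumes 'high' is the reference level):
print(model.pvalues["C(group)[T.low]"])
```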
To control for multiple comparison type I error we implemented a false discovery rate (Benjamini and Hochberg, 1995) threshold of FDR < 0.05 using the software R, which is freely available for download online (https://www.r-project.org/). No significant differences in cortical thickness were observed after correcting for multiple comparisons (FDR > 0.05). As a control brain region, the pericalcarine gyrus of the occipital lobe was analyzed in both hemispheres and did not significantly differ in thickness or surface area between groups. See Figure 2 for significant surface area results and Table 3 for all surface area and thickness results. DISCUSSION The current study investigated the neural correlates of age-related decreases in working memory performance in frontal cortices. We found significant differences in cortical surface area for three regions of the right frontal lobe. Low working memory performers had significantly less surface area for the inferior frontal gyrus (pars opercularis), SFG, and the medial orbital frontal gyrus, when compared to high working memory performers. FIGURE 2 | Cortical surface differences between low versus high working memory performers. Arrows connect graphs of between-group differences to the affiliated gyri ROI highlighted on a FreeSurfer brain model. POP, pars opercularis of the inferior frontal gyrus; SFG, superior frontal gyrus; MOF, medial orbital frontal gyrus. Error bars = ±1 SE. These areas of decreased structural integrity are consistent with prior fMRI findings for functional correlates of working memory performance (Curtis and D'Esposito, 2003;Owen et al., 2005). These results are also consistent with prior research demonstrating right lateralized BOLD activation of frontal cortices in young adults with high working memory performance, but bilateral (potentially compensatory) activation of right and left frontal cortices in older adults able to maintain a high level of working memory performance. In contrast, older adults unable to maintain performance showed a unilateral increase in activation of right frontal regions, perhaps consistent with less efficient neural processing. Collectively, our structural MRI findings, when considered in concert with prior functional MRI research, suggest that areas in the right prefrontal cortex are critical substrates for age-related change in working memory function. Our findings provide evidence that right lateralized structural abnormalities in inferior, superior, and medial orbital frontal gyri underlie age-related working memory decline. Pars Opercularis of the Inferior Frontal Gyrus The pars opercularis (BA44), a sub-region of the inferior frontal gyrus, is included in the functionally defined ventrolateral prefrontal cortex (VLPFC; Molnar-Szakacs et al., 2005). The VLPFC is consistently found to be active in working memory fMRI studies; early functional neuroimaging studies that activated this region in humans tended to emphasize the explicit retrieval of one or a few pieces of information, as well as the sequencing of responses based directly on stored information (Owen et al., 2005). Aron et al. (2004) argue that the right VLPFC plays a critical role in cognitive inhibition. Cognitive inhibition is a component of executive control that can be localized to the right inferior frontal gyrus, specifically the pars opercularis (Molnar-Szakacs et al., 2005;Falquez et al., 2014). Inhibition can be defined as the suppression of inappropriate responses (Aron et al., 2004).
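The false discovery rate threshold described at the start of this section was applied in R; the Python function below is only an equivalent illustration of the Benjamini-Hochberg procedure, with made-up p-values.

```python
# Benjamini-Hochberg FDR: find the largest ordered p-value with
# p_(i) <= (i/m) * alpha, then declare it and all smaller p-values significant.
import numpy as np

def bh_fdr(pvals, alpha=0.05):
    p = np.asarray(pvals, dtype=float)
    order = np.argsort(p)
    m = p.size
    thresholds = alpha * (np.arange(1, m + 1) / m)
    passed = p[order] <= thresholds
    k = passed.nonzero()[0].max() + 1 if passed.any() else 0
    significant = np.zeros(m, dtype=bool)
    significant[order[:k]] = True
    return significant

# Example with illustrative p-values from per-region group contrasts
print(bh_fdr([0.001, 0.004, 0.03, 0.20, 0.45]))
```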
Cognitive inhibition could be one of a set of functions (including working memory maintenance of task sets and items, selection and manipulation of information in working memory, and conflict detection) implemented by different, possibly overlapping prefrontal cortical regions. The voluntary blocking of memory retrieval may also be dependent on this region. As more information in the environment is perceived than can accurately and appropriately be attended to, inhibition is an integral feature of the prefrontal cortex that allows irrelevant information to be inhibited enabling more important information to be processed more quickly and efficiently. Superior Frontal Gyrus The SFG is a large region of the prefrontal cortex, making up about 1/3 of the frontal lobe in the human brain. The SFG is thought to contribute to higher cognitive functions, and play a particularly important role in working memory (Boisgueheneuc et al., 2006). The functional anatomical region referred to as the DLPFC (BA9) overlaps structurally, in part, with SFG (Owen et al., 2005;Falquez et al., 2014). The DLPFC plays a crucial role in terms of working memory. It has been established as a crucial node that supports working memory processes. Neurophysiological unit recordings of the DLPFC in monkeys have shown persistent sustained levels of neuronal firing during retention intervals of delayed response tasks (Curtis and D'Esposito, 2003). Sustained activity in the DLPFC is thought to provide a bridge between the stimulus cue and its contingent response (i.e., goal-directed behavior) in a working memory task. Goldman-Rakic (1987) has shown that lesions in the DLPFC impair the ability to maintain sensory representations on-line that are no longer present in the external environment. Studies of patients with SFG lesions show global impairments in working memory tasks with impairments present months to years post-lesion, indicating that the SFG may be a key component in the working memory network (Boisgueheneuc et al., 2006). Medial Orbital Frontal Cortex The orbitofrontal cortex in primates is situated ventrally and frontally in the brain (Kringelbach and Rolls, 2004), and can be further divided into distinct areas. Two major subdivisions have been cytoarchitecturally and functionally identified: the lateral orbitofrontal cortex and medial orbitofrontal cortex (MOF). The medial orbital frontal cortex surface includes BA14 (Rolls, 2004). The MOF receives input from all sensory modalities. Accumulating evidence from fMRI implicates the orbitofrontal cortex as a necessary component in working memory, demonstrating activity in this area while coordinating multiple working memory operations (Wager and Smith, 2003;Owen et al., 2005;Barbey et al., 2011). Studies of human brain lesion patients with damage to orbitofrontal cortex have shown specific behavioral outcome deficits on components central to working memory (Barbey et al., 2011). Orbitofrontal damage was associated with deficits on working memory tasks involving coordination of maintenance, manipulation, and monitoring processes (e.g., N-Back task). However, this association was not seen on neuropsychological tests of working memory maintenance (digit/spatial span forward) or manipulation (digit/spatial span backward and letter-number sequencing; Barbey et al., 2011). LIMITATIONS Although not significantly different on age, the low performers tended to be older than high performers. 
If the sample size increased, it is possible this could impact the overall results as structural brain changes increase in older age. Even still, age was used in our models as a covariate to account for any numerical differences in age between groups. It is also possible that the clinical assessment of MCI by the study neuropsychologist did not capture participants in the earliest stages of MCI. This possibility is supported by the range of MoCA scores in this study, although these ranges were not significantly different between groups. Nonetheless, our findings may by biased by an unknown number of participants in either group that were in the earliest stages of MCI and thus evidencing early neurodegenerative tissue loss. The N-Back may also exhibit limitations inherent to the task regarding its use for the study of lateralized differences in structure-function relationships. As the functional foci of activation for N-Back changes with age and development, this tool may not be ideal for full identification of all frontal related working memory related neural correlates. CONCLUSION Normal physiological processes of aging are associated with neuronal circuitry changes, which may result in impaired cognition and behavior in some older individuals. Individuals that show poorer cognitive performance tend to show impairments of executive functions first (e.g., working memory, planning, and goal directed behavior), thus it has been postulated that neurons and circuits of the prefrontal cortex may be particularly vulnerable during normal aging in humans and non-human primates (Dickstein et al., 2013). During the aging process, there is evidence that neurons undergo morphological changes such as reduced complexity of dendritic arborization and dendritic length, as well as decreases in spine numbers. As spines are the major sites for excitatory synapses, changes in spine numbers could reflect a change in synaptic densities (Dickstein et al., 2007). These morphological changes may underlie surface area reductions, as neuron numbers remain relatively stable in older aged individuals lacking neurodegenerative diseases. As dendrites are pivotal in forming and maintaining neural networks, regulating synaptic plasticity, and integrating electrical inputs (Dickstein et al., 2007), it is perhaps not surprising that a potential marker of age-related change in dendritic morphology correlates with poorer performance on behavioral tasks. There is great variability in cytoarchitectonic features of the cortex between individuals (Kringelbach and Rolls, 2004). The difficulty of deciphering the functional role of any brain region lies in the complexities of connections between and within brain structures, which may lend a single structure the ability to activate for a multitude of tasks. The N-Back task used in this study requires considerable vigilance and working memory processes to accurately detect target letters in the correct 2-Back pattern, a task that is quite challenging. The right inferior frontal gyrus, an area implicated in cognitive inhibition and working memory, demonstrated significant reduction in surface area in older adults with lower working memory performance. The SFG, a crucial substrate of working memory processes, also exhibited a reduction in structural integrity. Finally, the MOF, a region shown to be necessary in coordination of working memory maintenance, manipulation, and monitoring processes also exhibited significantly reduced cortical surface area in low working memory groups. 
Taken together, these regions appear to play an important role in age-related working memory decline. The structural integrity of these three regions may also play an important role relative to compensatory processes previously found in functional MRI studies of N-Back performance. For example, deficits in these right frontal regions may interfere with compensatory engagement of left frontal structures found to activate in older adults able to maintain a high level of working memory performance. Future research investigating differences in both functional and structural connectivity between right and left frontal regions in high versus low working memory performers will be important for further evaluating this theory. In addition, these three frontal areas may prove to be important therapeutic targets for brain stimulation or other methods capable of upregulating cerebral metabolism and function in brain regions showing decline. ETHICS STATEMENT The Institutional Review Board (IRB) at the University of Florida approved this study. Prior to any study procedures, all participants provided written informed consent. The study protocol was carried out in accordance with the Declaration of Helsinki, and the University of Florida Institutional Review Board approved all procedures in this study. All study participants were healthy older adults.
6,797
2017-01-04T00:00:00.000
[ "Psychology", "Biology" ]
Human activity recognition has become one of the most active research topics in image processing and pattern recognition. Manual analysis of video is labour intensive, fatiguing, and error prone. Solving the problem of recognizing human activities from video can lead to improvements in several application fields like surveillance systems, human computer interfaces, sports video analysis, digital shopping assistants, video retrieval, gaming and health-care. This paper aims to recognize an action performed in a sequence of continuous actions recorded with a Kinect sensor based on the information about the position of the main skeleton joints. The typical approach is to use manually labeled data to perform supervised training. In this paper we propose a method to perform automatic temporal segmentation in order to separate the sequence into a set of actions. By measuring the amount of movement that occurs in each joint of the skeleton we are able to find temporal segments that represent the singular actions. We also propose an automatic labeling method of human actions using a clustering algorithm on a subset of the available features. Introduction Human activity recognition is a classification problem in which events performed by humans are automatically recognized. Detecting specific activities in a live feed or searching in video archives still relies almost completely on human resources. Detecting multiple activities in real-time video feeds is currently performed by assigning multiple analysts to simultaneously watch the same video stream. Manual analysis of video is labour intensive, fatiguing, and error prone. Solving the problem of recognizing human activities from video can lead to improvements in several application fields like surveillance systems, human computer interfaces, sports video analysis, digital shopping assistants, video retrieval, gaming and health-care [15,13,8,10]. Ultimately, we are interested in recognizing high-level human activities and interactions between humans and objects. The main sub-tasks of this recognition are usually achieved using manually labeled data to train classifiers to recognize a set of human activities. An interesting question is how far we can take the automatic labeling of human actions using unsupervised learning. From our experiments we have found that this labeling is possible, but still with a large margin for improvement. Related Work Human activity recognition is a classification problem in which events performed by humans are automatically recognized by a computer program. Some of the earliest work on extracting useful information through video analysis was performed by O'Rourke and Badler [9], in which images were fitted to an explicit constraint model of human motion, with constraints on human joint motion and constraints based on the imaging process. Rashid [16] also did some work on understanding the motion of 2D points, from which he was able to infer 3D position. Driven by application demands, this field has seen relevant growth in the past decade. This research has been applied in surveillance systems, human computer interfaces, video retrieval, gaming and quality-of-life devices for the elderly. Initially the main focus was recognizing simple human actions such as walking and running [4]. Now that that problem is well explored, researchers are moving towards recognition of complex realistic human activities involving multiple persons and objects.
In a recent review written by [1] an approach-based taxonomy was chosen to categorize the activity recognition methodologies which were divided into two categories. Single-layered approaches [2,20,18] typically represent and recognize human activities directly based on sequences of images and are suited for the recognition of gestures and actions with sequential characteristics. Hierarchical approaches represent high-level human activities that are composed of other simpler activities [1]. Hierarchical approaches can be seen as statistical, syntactic and description-based [3,6,8,14,17,21]. The previous approaches all used computer vision (CV) techniques to extract meaningful features from the data. Motion capture data (MOCAP) has also been used in this field, a relevant approach found was [22] where they pose the problem of learning motion primitives (actions) as a temporal clustering one, and derive an unsupervised hierarchical bottom-up framework called hierarchical aligned cluster analysis (HACA). HACA finds a partition of a given multidimensional time series into m disjoint segments such that each segment belongs to one of k clusters representing an action. They were able to achieve competitive detection performances (77%) for human actions in a completely unsupervised fashion. Using MOCAP data has several advantages mainly the accuracy of the extracted features but the cost of the sensor and the required setup to obtain the data is often prohibitive. With the cost in mind Microsoft released a sensor called Kinect which captures RGB-D data and is also capable of providing joint level information in a non-invasive way allowing the developers to abstract away from CV techniques. Using Kinect [11] the authors consider the problem of extracting a descriptive labeling of the sequence of sub-activities being performed by a human, and more importantly, of their interactions with the objects in the form of associated affordances. Given a RGB-D video, they jointly model the human activities and object affordances as a Markov random field where the nodes represent objects and sub-activities, and the edges represent the relationships between object affordances, their relations with sub-activities, and their evolution over time. The learning problem is formulated using a structural support vector machine (SSVM) approach, where labelings over various alternate temporal segmentations are considered as latent variables. The method was tested on a dataset comprising 120 activity videos collected from 4 subjects, and obtained an accuracy of 79.4% for affordance, 63.4% for sub-activity and 75.0% for high-level activity labeling. In [7] the covariance matrix for skeleton joint locations over time is used as a discriminative descriptor for a sequence of actions. To encode the relationship between joint movement and time, multiple covariance matrices are deployed over subsequences in a hierarchical fashion. The descriptor has a fixed length that is independent from the length of the described sequence. Their experiments show that using the covariance descriptor with an off-the-shelf classification algorithm one can obtain an accuracy of 90.53% in action recognition on multiple datasets. In a parallel work [5] authors propose a descriptor for 2D trajectories: Histogram of Oriented Displacements (HOD). Each displacement in the trajectory votes with its length in a histogram of orientation angles. 3D trajectories are described by the HOD of their three projections. 
HOD is used to describe the 3D trajectories of body joints to recognize human actions. The descriptor is fixed-length, scale-invariant and speed-invariant. Experiments on several datasets show that this approach can achieve a classification accuracy of 91.26%. Recently, [12] developed a system called Kintense, which is a real-time system for detecting aggressive actions from streaming 3D skeleton joint coordinates obtained from Kinect sensors. Kintense uses a combination of: (1) an array of supervised learners to recognize a predefined set of aggressive actions, (2) an unsupervised learner to discover new aggressive actions or refine existing actions, and (3) human feedback to reduce false alarms and to label potential aggressive actions. The system is 11%-16% more accurate and 10%-54% more robust to changes in distance, body orientation, speed, and subject, when compared to standard techniques such as dynamic time warping (DTW) and posture-based gesture recognizers. In two multi-person households it achieves up to 90% accuracy in action detection. Temporal Segmentation This research is framed in the context of a doctoral program whose final objective is to predict the next most likely action that will occur in a sequence of actions. In order to solve this problem we divided it into two parts: recognition and prediction. This paper only addresses the recognition problem. Human activity can be categorized into four different levels: gestures, actions, interactions and group activities. We are interested in the actions and interactions categories. An initial survey was conducted to analyze several datasets from different sources, such as the LIRIS (Laboratoire d'InfoRmatique en Image et Systèmes d'information) dataset [19], the CMU (Carnegie Mellon University) MoCap dataset, and the MSR-Action3D and MSR-DailyActivity3D datasets [13], and to verify their suitability for our problem. All these datasets contain only isolated actions, and for our task we require sequences of actions. We saw this as an opportunity to create a new dataset that contains sequences of actions. We used Kinect to record the dataset, which contains 8 aggressive actions such as punching and kicking, organized into 6 distinct sequences (each sequence contains 5 actions). We recorded 12 subjects, and each subject performed 6 sequences, for a total of 72 sequences and 360 actions. An example of a recorded sequence is illustrated in Figure 1. Kinect captures data at 30 frames per second. The data are recorded in .xed files, which contain RGB, depth and skeleton information, and also in a lighter version in .csv format containing only the skeleton data. We expect to make the dataset available to the public in the near future on a dedicated website. Since our dataset contains sequences of actions, our very first task was to automatically decompose each sequence into temporal segments where each segment represents an isolated action. We went for a very simple approach. From visual observation we noticed that during each action there were joints that moved more than others. If we could measure that movement and compare it across joints, we would be able to tell which joint is predominant in a certain action and then assign a temporal segment to a joint. Figure 2 shows a timeline which represents the movement of the right ankle. It is perfectly visible that there are two regions where that joint has a significantly higher absolute speed. These two regions represent moments in time where an action was performed that involved mainly the right leg.
Our first step was to create these regions, which we called regions of interest. This was achieved by selecting frames in which the absolute speed value was above the standard deviation multiplied by a factor of two. Then we selected all the neighboring frames that were above the average value, with a tolerance of 3 frames below the average. This data was collected for four different joints: right and left ankle, right and left wrist. Then we searched for overlapping regions. While the user performs a kick the rest of the body moves, especially the hands, to maintain balance. Overlapping regions were removed by considering only the joint moving at the higher average speed in each frame. Figure 3 illustrates an example result of our automatic segmentation method. Each color of the plot represents a temporal segment to which we assigned a joint as being the dominant joint for that action. We obtained 5 temporal segments, which successfully correspond to the number of actions that the sequence contains, in this case: right-punch; left-punch; front-right-kick; front-left-kick; side-right-kick. Table 1 shows that the automatic segmentation can be improved. These results reflect the measurements between the frames of the annotated data and the frames of our automatic temporal segments. Overall the segmentation is satisfactory and we believe that the segments contain the most important part of the actions. This method might be revisited in the future to improve the overall performance of our system. In most cases, as seen in [12], action labeling is achieved by manually labeling the segments obtained by the segmentation algorithm and then using that data to train one classifier per action. Those classifiers would then be used in an application capable of recognizing actions in real time. Instead, we thought that it would be more interesting if we could automatically label identical actions performed by different subjects. For example, a right-punch performed by subject 1 should be very similar to a right-punch performed by subject 2. This process is composed of the following stages: 1: automatically find temporal segments that represent the actions of the sequence; 2: sample the dataset based on the previously found temporal segments; 3: extract meaningful features for each segment; 4: use clustering to automatically group similar actions and thus label them. Sampling To sample the data for the clustering algorithm, the program automatically selects the automatically found temporal segments, which ideally should be 5 per sequence, corresponding to the number of actions that compose the sequence. The most active joint is assigned to each segment. Based on the window-frame of the segment found for a specific joint, we create new temporal segments for the remaining joints on the same exact window-frame. This can be portrayed as stacking the joints' timelines one on top of another and making vertical slices to extract samples of data that correspond to temporal segments where an action has occurred. Feature Extraction An action can be seen as a sequence of poses over time. Each pose respects certain relative positions and orientations of the joints of the skeleton. Based on the positions and orientations of the joints we extracted several features that will be used to model the movements performed by the subjects.
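A compact sketch of the region-of-interest rule described at the beginning of this section is given below; the exact growing rule and the synthetic speed signal are simplifications kept only to show the idea, not the authors' implementation.

```python
# Simplified sketch: seed frames where the absolute speed exceeds twice its
# standard deviation, then grow each region over neighbouring frames, allowing
# up to 3 consecutive frames below the average (tolerance from the text).
import numpy as np

def regions_of_interest(abs_speed, tol=3):
    mu, sigma = abs_speed.mean(), abs_speed.std()
    seeds = np.flatnonzero(abs_speed > 2 * sigma)
    regions = []
    for s in seeds:
        if any(lo <= s <= hi for lo, hi in regions):
            continue                      # seed already covered by a region
        lo = hi = s
        below = 0
        while lo > 0 and below <= tol:    # grow towards earlier frames
            below = below + 1 if abs_speed[lo - 1] < mu else 0
            lo -= 1
        below = 0
        while hi < len(abs_speed) - 1 and below <= tol:  # grow towards later frames
            below = below + 1 if abs_speed[hi + 1] < mu else 0
            hi += 1
        regions.append((lo, hi))
    return regions

speed = np.abs(np.random.default_rng(1).normal(size=300))  # stand-in for a joint's speed
print(regions_of_interest(speed))
```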
We have experimented with several features (speed; absolute speed; speed per axis; joint flexion angles; bone orientation). After a comparison of these different approaches (to be published) we selected the angles of the elbows and the knees, a1, a2, a3, a4, and the relative positions of the wrists and ankles, s1, s2, s3, s4 (Figure 4), and used these to calculate other features like the relative speed of each joint. Different subsets of these features combined will constitute the feature vectors that will be used by the clustering algorithm. Clustering Experiments As previously mentioned, the objective is to cluster similar actions performed by different subjects (or by the same subject in different recordings). For that purpose we use k-means, which is one of the simplest unsupervised learning algorithms. We made several experiments with different combinations of features. Our initial experiments used simply the average speed of each joint over the whole segment as a feature. Results of this experiment are shown in Table 2. Clustering all the segments of the same sequence of actions being performed by different subjects brought interesting results. All the actions were correctly labeled except for the side-right-kick. As shown in the table, this action was classified as a front-right-kick 33.33% of the time. These two actions are similar and originate from the same body part. These results lead us to believe that more features could help distinguish these movements more clearly. Table 3 shows the results of clustering using also the angles of the knees and the elbows. Surprisingly, the results are worse. The right-punch and the left-punch, even though they come from different arms, are labeled with the same cluster label, and the same happened to the front-right-kick and the side-right-kick. This can be explained by the angle features becoming more relevant than the speed features. Given that angles are less discriminative of these movements, this results in more misclassifications. When considering the amplitude of the movements of the lower body members, the differences between a right-punch and a left-punch become minor. To prove this, a simple experiment was performed using only the temporal segments originating from an action of the upper part of the body. Table 4 shows that, using only the upper body, k-means is perfectly capable of distinguishing the actions of the right arm from the actions of the left arm using the same features as in Table 3. Table 5 shows the results using only the angles of the knees and elbows as features. In this case the kicking actions are diluted amongst several clusters. So using only the angles as features has proven insufficient to correctly label the actions. Since a single value (the average) is used to represent a temporal segment, a loss in the granularity of information might be a problem. In the following experiments temporal segments were divided into equal parts to increase the feature vector, but the results were 30 to 40% lower. This can be explained by Figure 5, where we show a comparison between the same movement performed by two different subjects. The curves are similar but they start at different frames; if we divide the temporal segment into four parts, for subject 10 the first two parts will have a higher value, while for subject 1 the last two will. For this reason these two actions would probably be assigned to different clusters. Overall the results are very similar for all the other sequences that we have in our dataset (a total of 8). Due to space limitations we were unable to include the clustering results for each sequence.
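For concreteness, the clustering step described above can be sketched as follows; the four average-speed features and the value of k are illustrative assumptions, not the exact feature vectors used in the experiments.

```python
# Sketch: one average-absolute-speed feature per tracked joint for each temporal
# segment, clustered with k-means so that similar actions share a label.
import numpy as np
from sklearn.cluster import KMeans

# rows = segments, columns = average absolute speed of
# [right wrist, left wrist, right ankle, left ankle] during the segment
features = np.array([
    [1.9, 0.3, 0.2, 0.2],   # right-punch-like segment
    [0.3, 2.0, 0.2, 0.2],   # left-punch-like segment
    [0.2, 0.3, 1.8, 0.3],   # right-kick-like segment
    [0.3, 0.2, 0.2, 1.9],   # left-kick-like segment
])

labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(features)
print(labels)   # segments dominated by the same joint end up in the same cluster
```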
Our final experiment (Table 6) was to see how well k-means coped with all the sequences at the same time, using only the average speed as a feature, since it was the feature that proved to have the best results. Fig. 5: Temporal segment of a right punch performed by subject 1 and subject 10. Again there is a clear separation between actions from the right and the left side of the body. As for actions that come from the same part of the body, there is room for improvement. Conclusion In this paper, we described a new dataset of sequences of actions recorded with Kinect, which is, to the best of our knowledge, the first to contain whole sequences. We proposed a method to achieve automatic temporal segmentation of a sequence of actions through a simple filtering approach. We also proposed and evaluated an automatic labeling method of human actions using a clustering algorithm. In summary, our results show that, for the type of actions used, k-means is capable of grouping identical actions performed by different users. This is evident when the clustering is performed with all of the subjects performing the same sequence of actions. When all the sequences are used, the accuracy decreases. This might be explained by the effect that the neighboring actions have on the current action. So for different neighboring actions, the same current action will have a different start and ending. By using several features (absolute speed, absolute 3D speed, joint angle) we also show that the choice of features greatly affects the performance of k-means. The poor results achieved when using the angles of the knees and elbows appear to be related to how the flexion angles are calculated using the law of cosines. In our next experiment Euler angles will be used, which represent a sequence of three elemental rotations (rotations about X, Y and Z). We also think that we could improve the results if we applied dynamic time warping to the temporal segments; this technique is often used to cope with the different speeds at which the subjects perform the actions, as sketched below. Our study showed how clustering and filtering techniques can be combined to achieve unsupervised labeling of human actions recorded by a camera with a depth sensor which tracks skeleton key-points.
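As referenced in the conclusion above, dynamic time warping is a natural next step for comparing segments executed at different speeds. The sketch below is a textbook DTW distance between two one-dimensional joint trajectories, not the authors' planned implementation.

```python
# Classic dynamic time warping: cumulative cost over all monotone alignments
# of the two sequences; robust to differences in execution speed.
import numpy as np

def dtw_distance(a, b):
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

# The same punch executed at two speeds should still yield a small distance.
fast = np.sin(np.linspace(0, np.pi, 20))
slow = np.sin(np.linspace(0, np.pi, 35))
print(dtw_distance(fast, slow))
```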
4,327.4
2016-06-15T00:00:00.000
[ "Computer Science" ]
Resonant amplification of curvature perturbations in inflation model with periodical derivative coupling In this paper, we introduce a weak, transient and periodical derivative coupling between the inflaton field and gravity, and find that the square of the sound speed of the curvature perturbations becomes a periodic function, so that the equation of the curvature perturbations can be transformed into the form of the Mathieu equation in the sub-horizon limit. Thus, the parametric resonance will amplify the curvature perturbations so as to generate an abundant formation of primordial black holes (PBHs). We show that the generated PBHs can make up most of the dark matter. Associated with the generation of PBHs, the large scalar perturbations will give rise to scalar induced gravitational waves which may be detected by future gravitational wave projects. However, in the standard slow-roll inflation, which predicts nearly scale-invariant curvature perturbations, the possibility of the formation of PBHs is negligible. This is because the cosmic microwave background radiation (CMB) observations have implied a very small amplitude of the power spectrum of the curvature perturbations, of about $\mathcal{O}(10^{-9})$. To generate a sizable amount of PBHs requires that the amplitude of the power spectrum of the curvature perturbations reaches about $\mathcal{O}(10^{-2})$. Since the CMB observations only give a limit on the curvature perturbations at large scales [28] and the magnitude of the power spectrum at scales smaller than the CMB one is not restricted strongly by any observations, the formation of abundant PBHs will be possible if there are some mechanisms to enhance the curvature perturbations at small scales. It is well known that the amplitude of the power spectrum of the curvature perturbations $\mathcal{R}$ is given by $P_{\mathcal{R}} = \frac{H^2}{8\pi^2 \epsilon c_s}$ when the mode exits the horizon during inflation in the standard slow-roll inflation. Here $\epsilon$ is the slow-roll parameter, which is proportional to the rolling speed of the inflaton, $c_s$ is the sound speed of the curvature perturbations and $H$ is the Hubble parameter. So, a natural way to enhance the curvature perturbations is to reduce the rolling speed of the inflaton or to suppress the sound speed $c_s$ [94][95][96][97][98][99][100][101]. Additionally, some other ways can also predict the formation of PBHs. The decrease of the rolling speed of the inflaton can be realized by flattening the potential of the inflaton field. The corresponding inflation model is called inflection-point inflation. Recently, it has been shown that gravitationally enhanced friction can also slow the rolling speed of the inflaton [59][60][61][62][63][64]. In this mechanism, a derivative coupling between the inflaton field and gravity is invoked. In addition, the growth of curvature perturbations caused by parametric resonance has also been extensively studied [125][126][127][128][129][130][131]. The parametric resonance can be obtained by adding a periodic correction to the potential of the inflaton field or by considering a periodic sound speed. As the derivative coupling can realize the decrease of the inflaton's rolling speed, can it lead to a parametric resonance that amplifies the curvature perturbations? This motivates the present study. We find that the parametric resonance will occur after introducing a periodical derivative coupling between the inflaton field and gravity, since the equation of the curvature perturbations can be transformed into the form of the Mathieu equation.
We demonstrate that the enhanced curvature perturbations will lead to an abundant generation of PBHs, which can make up most of the dark matter, and that the SIGWs may be detected by future GW projects. The rest of this paper is organized as follows: In Sec. II, we introduce the inflation model with a periodical derivative coupling between the inflaton field and gravity. Sec. III discusses the parametric resonance and Sec. IV describes the formation of PBHs. In Sec. V, we investigate the SIGWs. Finally, we give our conclusions in Sec. VI. II. INFLATION WITH PERIODICAL DERIVATIVE COUPLING We consider an inflation model with a non-minimal derivative coupling between the inflaton field $\phi$ and gravity. The action of the system is given in Eq. (1). Here $g$ is the determinant of the metric tensor $g_{\mu\nu}$, $M_{pl}$ is the reduced Planck mass, $R$ is the Ricci scalar, $G_{\mu\nu}$ is the Einstein tensor, $\theta(\phi)$ denotes the coupling function, and $V(\phi)$ is the potential of the inflaton field. This action belongs to a class of the general Horndeski theories with second-order equations of motion [132,133], which can be free of the ghost and gradient instabilities [133]. The Lagrangian of such Horndeski theories has the term $G_5(\phi, X)G^{\mu\nu}\nabla_{\mu}\nabla_{\nu}\phi$, where $G_5$ is a generic function of $\phi$ and $X \equiv -\partial_{\mu}\phi\,\partial^{\mu}\phi/2$. By choosing the function $G_5 = -\kappa^2\chi(\phi)/2$, the term containing $\theta$ in Eq. (1) can be recovered from the Horndeski Lagrangian after integration by parts, with $\theta$ being defined as $\theta \equiv d\chi/d\phi$. In the spatially flat Friedmann-Robertson-Walker background, the background equations derived from the action (1) are Eqs. (3) and (4). Here the overdot denotes the derivative with respect to the cosmic time $t$, and $\theta_{,\phi} \equiv d\theta/d\phi$. To describe the slow-roll inflation, we define the slow-roll parameters given in Eq. (5). Defining $z \equiv a\sqrt{2Q}$ and $u_k \equiv z\mathcal{R}_k$, we find that $u_k$ satisfies Eq. (10), where $k$ is the wave-number and $a$ is the cosmic scale factor. Solving Eq. (10) leads to the power spectrum of the curvature perturbations. The CMB observations have implied that at large scales this power spectrum is nearly scale-invariant, with an amplitude of about $\mathcal{O}(10^{-9})$ [28]. To generate a sizable amount of PBHs, we need to enhance the curvature perturbations at scales smaller than the CMB one through the parametric resonance. Thus, we choose the coupling function $\theta(\phi)$ to take an oscillating form. Here $w$ is a dimensionless constant, which must satisfy $|w| \ll 1$, and $\phi_c$ is a quantity with the same dimension as $\phi$ and is set to be much less than $\phi$. Meanwhile $\phi_s$ and $\phi_e$ represent the beginning and the end of the coupling, respectively, and $\Theta$ is the unit Heaviside step function. The value of $\phi_s$ is chosen to be away from that of $\phi$ at the beginning of inflation, and thus the derivative coupling does not affect the curvature perturbations at the CMB scale. So, the amplitude of the power spectrum of the curvature perturbations at the CMB scale remains of the standard form $P_{\mathcal{R}} = H^2/(8\pi^2 \epsilon c_s)$. The spectral index $n_s$ and the tensor-to-scalar ratio $r$ follow from this spectrum. To be consistent with the CMB observations, we choose the potential of the inflaton field to be the Starobinsky potential [137], where $\Lambda$ is a constant. Since the parametric resonance occurs deep inside the Hubble horizon ($c_s k \gg aH$), Eq. (10) can be simplified in the sub-horizon limit, yielding Eq. (14).
Considering the slow-roll conditions given in Eq. (5), $w \ll 3H^2$ and $\phi_c \ll \phi$ during inflation, we find that the background equations (Eqs. (3) and (4)) can be reduced accordingly, and then the square of the sound speed of the curvature perturbations can be simplified to the form given in Eq. (16). We find that $\delta c_s$ oscillates around zero and satisfies $|\delta c_s| \ll 1$. So, the sound speed could exceed the speed of light. However, it has been found that a superluminal sound speed will not result in causal paradoxes when the scalar field is non-trivial [138][139][140][141][142][143][144]. This suggests that there may be no violation of causality from the superluminal sound speed for the model considered in this paper. If a different coupling function, i.e. $\theta(\phi) \sim \sin^2(\phi)$, is chosen, we find that a subluminal oscillation of the sound speed squared is possible. Substituting Eq. (16) into Eq. (14), one can obtain Eq. (17). We assume that the inflaton field evolves from $\phi_s$ to $\phi_e$ during a short time, which indicates that during this short time the evolution of $\phi$ can be expressed approximately as $\phi \approx \phi_s + \dot{\phi}_s (t - t_s)$. Defining $k_c = |\dot{\phi}_s|/\phi_c$, we find that Eq. (17) can be transformed into the form of the Mathieu equation. For the Mathieu equation, the resonant bands are narrow regions near $A_k(x) \simeq n^2$ ($n = 1, 2, 3, \ldots$). The width of each resonant band is $\Delta k \sim q^n$. If $0 < q \ll 1$, the resonance in the first resonant band ($n = 1$) is the most violent. Therefore, we only consider the influence of the first resonant band on $u_k$. For the first instability band, the Floquet index $\mu_k$ describes the rate of exponential growth; here $\Re$ refers to taking the real part. We then find that the resonance occurs in a narrow band, $k_- a < k < k_+ a$ (Eq. (21)). In obtaining the expressions of $k_{\pm}$, we have used the condition $|C| \ll 1$ derived from $0 < q \ll 1$. Since Eq. (21) is time dependent, the duration that the $k$ mode stays in the resonant band is finite, and is given by $T_{\rm in}(k) = \min(t_e, t_F) - \max(t_s, t_I)$ with $k$ satisfying $k_- a_s < k < k_+ a_e$, where the subscripts $s$ and $e$ represent the moments at which the inflaton field equals $\phi_s$ and $\phi_e$, respectively, and $t_I$ and $t_F$ represent, respectively, the times at which the $k$ mode enters and leaves the resonant band. Thus, the resonantly amplified width of the power spectrum is $\Delta k = k_+ a_e - k_- a_s$, which is determined mainly by $k_c$, $\phi_e$ and $\phi_s$. During the parametric resonance, the curvature perturbations will be enhanced exponentially, with the amplification controlled by the integral of $\mu_k$ over the time spent in the band. Defining $B_k(t) = k/(k_c a)$, this integral can be re-expressed in terms of $B_k$. The amplified modes can be divided into three groups: (1) the modes entering the band before $t_s$; (2) the modes entering the band after $t_s$ and exiting before $t_e$; (3) the modes exiting the band after $t_e$. For these three groups the wavenumber lies in different ranges. Apparently, for the second group, $k_+ a_s < k < k_- a_e$, $B_k(t_I)$ and $B_k(t_F)$ are independent of $k$, which results in $A_k$ being independent of $k$ for this group. The enhanced power spectrum of the curvature perturbations can then be expressed approximately as in Eq. (27). In the region $k_+ a_s < k < k_- a_e$, since $A_k$ is independent of $k$, the enhanced power spectrum will have a plateau, which can be seen in Fig. (1), where we plot the evolution of $P_{\mathcal{R}}/P_{\mathcal{R}_0}$ with $k$.
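Since the paper's own equations are not reproduced in this excerpt, the textbook Mathieu form invoked above, together with the Floquet exponent commonly quoted for the first narrow instability band ($0 < q \ll 1$), is recalled here purely for orientation:

$$\frac{d^{2}u_{k}}{dx^{2}} + \left[A_{k}(x) - 2q\cos(2x)\right]u_{k} = 0, \qquad \mu_{k} \simeq \sqrt{\left(\frac{q}{2}\right)^{2} - \left(\sqrt{A_{k}} - 1\right)^{2}},$$

so the first band is centred at $A_{k} \simeq 1$, has a width of order $q$, and reaches its maximal growth rate $\mu_{k} \simeq q/2$ at the centre of the band. These standard expressions are not taken from the paper itself and may differ from its exact definitions of $A_{k}$ and $q$.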
( 27), respectively.It is easy to see that the approximate results are consistent well with the numerical ones, and the power spectrum can be enhanced by several orders.These enhanced curvature perturbations can lead to the generation of significant gravitational quadrupole moments during inflation, which will emit GWs.However, this issue is beyond the scope of the present paper and is left to be investigated in the future. Figure (2) shows the power spectrum of the curvature perturbations from numerical calculation, which indicates clearly that the power spectrum is compliant with the CMB observations at the CMB scale, and it can be amplified to generate a sizable amount of PBHs at scales smaller than the CMB one. IV. PBHS When the sufficiently large curvature perturbations re-enter the Hubble horizon during the radiation-dominated period, the gravity of the high-density regions will overcome the radiation pressure and lead to the formation of PBHs.The PBH mass has the following The evolution of P R /P R 0 with k.The blue and red lines represent the numerical results from Eq. ( 10) and the approximate ones given in Eq. ( 27).The parameters are set to be FIG. 2: The power spectrum of the curvature perturbations.The green shaded region is excluded from the current CMB observation [28].The orange-and blue-shaded regions are excluded by the µ distortion of CMB [145] and the effect on the n − p ratio during big-bang nucleosynthesis (BBN) [146], respectively.Cyan shaded region indicate the limitations of current PTA observations in the power spectrum [147]. relationship with k: Here γ is the ratio of the mass of PBH to the total mass of the Hubble horizon when the PBH is formed.It represents the effective collapse rate, and its specific value is related to the details of gravitational collapse.In our analysis we set γ ≃ (1/ √ 3) 3 [5].In Eq. ( 28), M ⊙ represents the solar mass, and g * is the number of degrees of freedom of the relativistic particle at the time of the PBH formation.Assuming that the PBHs form in the radiationdominated period, we can set g * = 106.75. after assuming that the probability distribution function of the disturbance obeys the Gaussian distribution.Here erfc is the complementary error function, and δ c is the threshold for the relative density perturbation of the PBH formation, which is chosen to be δ c ≃ 0.4 [149,150] in our calculation of the PBH abundance.The variance σ 2 (M ) represents the coarse-grained density contrast with the smoothing scale k, and it takes the form Here W is the window function.We find that the PBH mass spectrum can be obtained from the following equation Here Ω DM represents the current dark matter density parameter and h is the reduced Hubble constant.While Ω DM h 2 is constrained to be Ω DM h 2 ≃ 0.12 by the Planck 2018 observations [28].We show the numerical results of the PBH mass spectrum in Fig. (3) and find that the PBHs can make up most of dark matter since Ω PBH Ω DM ≃ 0.99. 
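For readers who wish to reproduce the qualitative behaviour of the PBH abundance, the short sketch below (in Python) evaluates the Gaussian formation fraction β = ½ erfc(δc/(√2 σ)) quoted above for a toy enhanced power spectrum. The log-normal bump standing in for the resonance plateau, the Gaussian window function, the (16/81)(kR)⁴ radiation-era conversion factor and all numerical values are assumptions introduced here for illustration only; the paper's own expressions for σ²(M) and for the mass spectrum are not reproduced in this sketch.

import numpy as np
from scipy.special import erfc
from scipy.integrate import quad

delta_c = 0.4  # collapse threshold adopted in the text

def P_R(k, k_peak=1e12, amp=5e-2, width=0.5):
    # Toy curvature power spectrum: CMB-level floor plus a log-normal bump
    # standing in for the resonance plateau (illustrative values only).
    return 2.1e-9 + amp * np.exp(-0.5 * (np.log(k / k_peak) / width)**2)

def sigma2(R):
    # Coarse-grained density variance with a Gaussian window; the
    # (16/81)(kR)^4 factor is the standard radiation-era conversion from
    # curvature to density contrast and is an assumption of this sketch.
    integrand = lambda lnk: (16.0 / 81.0) * (np.exp(lnk) * R)**4 \
        * np.exp(-(np.exp(lnk) * R)**2) * P_R(np.exp(lnk))
    val, _ = quad(integrand, np.log(1e-4 / R), np.log(1e4 / R), limit=200)
    return val

def beta(R):
    # Gaussian formation fraction in the form quoted in the text.
    return 0.5 * erfc(delta_c / np.sqrt(2.0 * sigma2(R)))

for k in (1e11, 1e12, 1e13):  # comoving wavenumbers in Mpc^-1 (illustrative)
    print(f"k = {k:.1e} Mpc^-1 -> beta ~ {beta(1.0 / k):.3e}")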
Associated with the formation of PBHs, which are assumed to be generated in the radiation-dominated era, the large metric scalar perturbations become an important GW source and radiate the observable SIGWs.The second-order tensor perturbations h ij satisfy the equation: where a prime denotes the derivative with respect to the conformal time, H ≡ a ′ /a, T lm ij is the transverse-traceless projection operator, and is the GW source term [156,157].Here Ψ is the metric scalar perturbation.In the radiationdominated era, the evolution of 3) [157], where ψ k is the primordial perturbation, which relates with the power spectrum of the primordial curvature perturbations through Solving Eq. ( 32), one can obtain the GW energy density for each logarithmic interval k [158]: where η c represents the time when Ω GW stops to grow.The current energy density spectrum of SIGWs can be expressed as [147,158] Ω GW,0 h 2 = 0.83 g * 10.75 where Ω r,0 h 2 is the current density parameter of radiation which is set to be 4. the sensitive intervals of different gravitational wave detectors including SKA [159], EPTA [160], TAIJI [22], TIANQIN [23], LISA [21], and ALIGO [161]. In Fig. ( 4), we show the current energy spectrum of SIGWs.One can see that the SIGWs possess a multi-peak structure and may be detected by the future GW projects including LISA, Taiji and TianQin. VI. CONCLUSIONS In order to produce a sizable amount of PBHs, the amplitude of the power spectrum of the small-scale curvature perturbations must be enhanced by about 7 orders of magnitude compared to that at the CMB scale.In inflation models in which the inflaton field couples derivatively with gravity, it has been found that the curvature perturbations can be amplified through the gravitationally enhanced friction mechanism [59].In this paper, we find that, if there is a weak, transient and periodical derivative coupling between the inflaton field and gravity, the sound speed square of the curvature perturbations becomes a periodic function, which results in that the equation of the curvature perturbations in the sub-horizon limit can be transformed into the form of the Mathieu equation.Thus, for some k-modes, the parametric resonance will amplify their fluctuations.These amplified fluctuations are stretched to be super-horizon by the inflation and then the power spectrum will be enhanced at scales smaller than the CMB one.When the enhanced curvature perturbations re-enter the horizon during radiation-dominated era, they will lead to the formation of PBHs, which can explain most of dark matter.Associated with the generation of PBHs, the large scalar perturbations will radiate the observable SIGWs.We demonstrate that the current energy spectrum of the SIGWs has a multi-peak structure, which is different from that in the inflation model with the gravitationally enhanced friction [60], and it can be detected by future GW projects including LISA, Taiji and TianQin.Therefore, the future detection of SIGWs will help us to distinguish different mechanisms of enhancing curvature perturbations at small scales.Finally, it is worth noting that when loop corrections are considered the perturbation theory will be broken in the cases of the amplification of the primordial curvature perturbations due to the decrease of the inflaton's rolling speed [162] and the parametric resonance from the oscillating potential [163].Whether the loop corrections will break the perturbation theory in the scenario considered in the present paper needs to be studied separately since 
the gravity, which couples derivatively with the inflaton field, is different from the theory of general relativity.

When …/(2M_pl²)| ≪ 1 and {ϵ, |δ_ϕ|, δ_X, |δ_D|} ≪ 1 are satisfied, slow-roll inflation is obtained. Furthermore, we add the condition |3θ(ϕ)H …| ≪ 1; thus, the background dynamics in the non-minimally derivative coupled inflation model will be almost the same as that in the minimal coupling case. During inflation, the quantum fluctuations provide the seed for the formation of large-scale cosmic structures. The fluctuations are described by the curvature perturbations R. Expanding the action given in Eq. (1) to second order, one can obtain the second-order action of R [133-136].

FIG. 3: The mass spectrum of PBHs. The colored regions are ruled out by observations.

FIG. 4: The current energy spectrum (solid blue line) of SIGWs; the various dotted lines represent the sensitive intervals of different gravitational-wave detectors.

and k_− a_e ≤ k < k_+ a_e, respectively. The corresponding B_k(t_I) and B_k(t_F) can be calculated as
4,260.2
2024-01-15T00:00:00.000
[ "Physics" ]
Selective oxidation of tool steel surfaces under a protective gas atmosphere using inductive heat treatment For the realization of liquid lubricant free forming processes different approaches are conceivable. The priority program 1676 “Dry forming Sustainable production through dry machining in metal forming” addresses this issue in the context of metal forming processes. The present study reports results from one subproject of the priority program that employs selective oxidization of tool steel surfaces for the implementation of a dry sheet metal deep drawing process. Within the present study, specimen surfaces of the tool steel (1.2379) were heat-treated to optimize their tribological properties with respect to sliding wear behaviour in contact with drawn sheet metal (DP600+Z). The heat treatment was designed to result in the formation of selective oxide layers that can act as friction reducing separation layers. The heating setup employed an inductive heating under protective gas atmosphere. Selective oxidation was realized by controlling the residual oxygen content. Specifically, the specimens were heated in the near-surface region just above the annealing temperature, thus avoiding the degradation of mechanical properties in the bulk. Evaluation of hardness along cross-sections of each specimen revealed suitable initial temperatures for the inductive heat treatment. Oxide layer systems were analyzed regarding their tribological sliding wear behaviour after selective oxidation, as well as their morphology and chemical composition before and after the sliding wear tests. Introduction For modern industrial manufacturing, economic and ecological objectives become more and more significant. In this respect, lubricant free dry metal forming is a promising approach and is examined within the scope of the priority program 1676 founded by the German Research Foundation (DFG). However, the sheet metal forming and the bulk metal forming industries are two of the most affected sectors in the context of related manufacturing failures [1][2][3]. In sheet metal forming, deep drawing processes feature large contact areas between tool surface and sheet metal, and thus, lubricant oils are used to reduce friction and increase tool wear resistance. Lubricants are ecological polluting products which are inconsistent with requirements of sustainable production. Moreover, process chains are extended by additional cleaning processes. Selective oxidation of tool steel surfaces promises sustainability in modern manufacturing. The specific loading case of deep drawing processes is very complex, but depends in general on friction. Yet, the direct contact of the tool surface and the drawn sheet metal should be avoided. Therefore thin oxide layers were generated on the tool steel surfaces using a heat treatment under defined atmospheres.In earlier studies, positive effects of α-Fe2O3 oxide layers on friction and wear behaviour have already been reported. Lubricant free pin on disc tests at temperatures between 20 °C and 600 °C showed that oxidized tool steel had friction-reducing properties. However, α-Fe2O3 was also generated from metallic debris, so that homogeneous oxide layers could not be realized [4][5][6]. The present study focuses on the wear behaviour of selectively oxidized tool steel specimens (1.2379), which were heat treated inductively under an argon process atmosphere with an oxygen content of 0.03 vol.-%. 
These process conditions were selected to favour the formation of α-Fe2O3 oxide layer systems, which can act as a friction reducing separation layers. The friction coefficients and wear resistance were determined to compare the systems' behaviour with results obtained in previous studies using convective tempering methods. Materials and preparation For investigations of sliding wear, cylindrical specimen of the tool steel X153CrMoV12 (EU alloy grade 1.2379) with an elemental composition of 12% Cr, 1.55% C, 0.9% V and 0.8% Mo (in wt.-%) were hardened to 56 ± 2 HRC ((600 ± 30) HV30). The specimens were circular ground to generate a surface roughness with an arithmetical mean height value of SA = 1.04 µm ± 0.09 µm, which was measured using a 3D scanning laser microscope (Keyence VK-9710). Prior to the oxidizing heat treatments, contaminations were removed from the specimen surfaces by different cleaning steps in an ultrasonic bath. After the first cleaning step with ethanol (> 96 %) for 10 minutes, lipophilic contaminations were removed in a further cleaning step with acetone (> 99 %) before cleaning with pure ethanol (> 99.8 %) to remove hydrophilic contaminations. Afterwards the specimens were marked at the front to localize positions for subsequent surface analysis of selected areas. The design of the specimen is shown schematically in Fig. 1 and is described in detail in [7]. The dual-phase-steel DP600+Z (EU alloy grade 1.0936 with an elemental composition of 2.0 % Mn, 1.5 % Si, 1.0 % Cr + Mo + Ni, 0.14 % C, 0.07 %P, min. 0.015 % Al, 0.015 % S, 0.005 % B; in wt.-%) was used as sheet metal for the wear investigations. The metal strips had a nominal thickness of 0.96 mm and a width of 35 mm and were factory-provided hot-dip galvanised with a Zn coating of 9.5 µm ± 0.5 µm thickness. Selective oxidation Selective oxidation of the specimen surfaces was performed in a tubular furnace. Stationary conditions were achieved by controlling the gas flow, the composition of the atmosphere and the process temperature. Argon (Ar) with a purity of 99.996% was used as inert shielding gas. The heat treatments were conducted inductively with a coupled electrical power of 3000 W at a frequency of 500 kHz. The inductive heat treatment was performed at a constant oxygen content of 0.03 vol.-% measured by a lambda probe applying different peak temperatures above the annealing temperature (530 °C) of the tool steel at a constant heating rate of 10 °C/s. After holding the specimen at the selected target temperature for one minute, the temperature was decreased to 500 °C. Subsequently the specimens were hold isothermally for 15 minutes prior to cooling down to ambient temperature at a rate of 5 °C/s, cf. Fig. 2. The actual peak temperatures of the inductive heat treatment are shown in Table 1 along with the parameters used for the convectively tempered specimen. For sake of completeness, the native specimen is included as well. Measurement of hardness After the heat treatment, the hardness of the specimens was measured along cross sectional areas from the edge to the centre, in order to investigate the influence of different peak temperatures on the mechanical properties. The measurements were conducted according to the ISO 6507-1 standard (Metallic materials -Vickers hardness test -Part 1). Strip drawing tests Strip drawing tests were carried out in order to determine the friction coefficient µ of the selectively oxidized specimen surfaces. 
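As a purely illustrative aside, a capstan-type relation is one simple way to convert the two strip forces measured in such a test into a friction coefficient for a 90° wrap. The short sketch below (in Python) applies this relation with invented force values; the capstan form and the numbers are assumptions made here, not the equation or data actually used in the study, whose own relation is referenced further below but is not reproduced in this text.

import math

def friction_coefficient(F_front, F_back, wrap_angle=math.pi / 2.0):
    # Capstan-type estimate for a strip redirected over a cylindrical tool
    # surface: F_front = F_back * exp(mu * wrap_angle), hence
    # mu = ln(F_front / F_back) / wrap_angle. Illustrative assumption only.
    return math.log(F_front / F_back) / wrap_angle

# Hypothetical forces in newtons (not measured values from this study)
print(f"mu ~ {friction_coefficient(F_front=1200.0, F_back=1000.0):.3f}")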
The tests were performed with 90° redirection, to reproduce the loading case of deep drawing and related wear investigations. For this purpose, steel strips of DP 600+Z were slid over the corresponding specimen surfaces at a drawing speed of 20 mm/s with a contact pressure of 18 MPa. The strip drawing setup is illustrated schematically in Fig. 3. The friction coefficient µ was calculated from the measured forces using the equation Wear investigations The wear investigations were carried out at a hydraulic wear test bench to analyze the sliding wear behaviour of the tribological system formed by the wear specimen and the sheet metal. By pulling the sheet metal from a coil over the oxidized specimen with 90° re-direction, a loading case is generated that resembles the one of modern deep drawing processes. For the investigations, the sheet metal used was provided from a coil, which was cleaned inline to remove the prelube-oil from the surface. A 10% solution of a cleaning fluid (Tickopur R33) was used. To avoid residues of the cleaning solutionespecially phosphates -on the surface of the sheet metal, the sheet metal was then cleaned with an ethanol (> 96%) soaked sponge prior to getting in contact with the tool steel specimen. Detailed information of the wear test bench procedure can be found in [8]. The actual setup of the wear test bench is shown in Fig. 4. Surface analysis Different methods were employed to analyze the specimen surface. The specimen were initially characterized regarding their topography using 3D microscopy (Keyence VK-9710). Further investigations were performed using a scanning electron microscope (SEM: Zeiss Supra 55 VP), which was equipped with different detectors. Inlens and secondary electron detectors were employed prior to energy dispersive X-ray spectroscopy (EDX) to determine the chemical composition of selected areas. EDX data were collected at an acceleration voltage of 3 kV in areas of 90 µm x 60 µm. For representative cases, high resolution cross sections were analyzed using a focussed ion beam SEM (Zeiss Auriga). Low acceleration voltages were used in order to minimize contribution to the signal from the underlying substrate when analyzing the very thin oxide layer systems. Oxide layer characterisation For comparison with results from previous investigations that had employed convective tempering under defined argon atmospheres (O2 content = 0.03 vol.-%), the surface morphology of the inductively tempered specimen surfaces was characterized in detail. Reference samples (diameter: 12 mm, height: 2 mm) of the hardened (56 ± 2 HRC) tool steel (1.2378) used were polished with 3 µm diamonds dispersion and inductively tempered under using the conditions of the selective oxidation process of specimen B1, cf. Table 1. Secondary electron images of the surface are shown in Fig. 5. The oxide layer exhibit the typical structure of α-Fe2O3 layers, which have been characterized in detail in [8]. The structure of the oxide layer generated seems homogeneous in most regions. However, it is not uniform throughout, and several areas have not been oxidized. The inset in Figure 5 shows a cross section prepared from the marked area using focused ion-beam cutting. The selected area was coated with a platinum layer before sectioning in order to protect the oxide during the process. The oxide layer has an average thickness of about 450 nm, while precipitations can also be detected at the surface that vary in size and are not oxidized. 
Beneath the oxide layer, a thin reaction zone occurs (dark appearing areas) suggesting a near-surface enrichment of light elements. The EDX mappings (Fig. 6.) revealed a superposition of iron (Fe) and oxygen (O) in the areas corresponding to the generated oxide layer. The precipitations seen in uncovered regions consisted mainly of chromium (Cr), carbon (C) other alloying components like vanadium (V), which is typical for carbides in this material. For comparison, a convectively heat-treated reference sample (diameter: 12 mm, height: 2 mm) was characterized. The selective oxidation process was performed using the conditions (510 °C for 60 minutes) described in detail in [8]. In this case, the oxide formed was also an α-Fe2O3 layer system. Morphologically, the layer was similar to the layer on the inductively tempered specimen. However, the chain like bondings of the oxide to the substrate had coarsened. This can be attributed to a thin oxide layer, which was detected with an average thickness of about 150 nm in a cross section using focused ion-beam cutting (Fig. 7, inset). Compared to inductive tempering chromium precipitations were also detected, but in a higher extend preventing the oxide layer to cover the surface at these areas. So, large-scaled areas, which are not covered by the oxide layer can be detected after convective tempering. Measurement of hardness Hardness measurements were performed along cross sectional areas of the tempered specimen in order to determine the effect of different peak temperatures above the annealing temperature on the mechanical properties of the tool steel. Figure 8 present hardness data measured on the specimen in direction from the specimen surface to the centre. Clearly, tempering procedures with short initial temperatures between 570 °C (specimen B1) and 620 °C (specimen B2) have negligible influences on the material, while a heat treatment up to 700 °C decreases the HV30 value by nearly 40% (Specimen B4) . Surface analysis after wear The wear investigations were performed at constant loading conditions at 80 °C specimen temperature and a normal stress of 80 MPa. Selected specimens that have kept a HV30 value within the scope of tolerance after the hardening procedure were tested with 500 strokes. Therefore, specimen B1 (initial temperature: 570 °C) and specimen B2 (620°C) , which seem to have no noticeable losses of hardness after the tempering process were characterized regarding their sliding wear resistance in comparison to conventional heat treated (A1) and the reference specimen (R1). The surface topography of the specimen was measured before and after the wear investigations in order to characterize the morphological changes of the surfaces. For this purpose, the arithmetical mean height SA and the dale void volume VVV measurements were determined in the area at the central 45° position of the specimen prior and after the wear investigations. This position represents a constant normal stress loading case, while the contact area of the tool steel and the drawn sheet metal amounts to π/2 of the specimen surface. In Fig. 9 the arithmetical mean height SA for different specimens before and after the wear investigation are compared. In general, selectively oxidized surfaces seem to feature smaller initial SA-values than untreated reference samples before wear testing. Furthermore, the convectively tempered specimen A1 shows the lowest initial arithmetical mean height. 
Clearly, the magnitude of reduction in the SAvalues after wear testing is nearly constant (at about 10 %) for selectively oxidized specimen, while the reference sample R1 exhibits a loss of nearly 30 %. Thus, the arithmetical mean height of the oxidized specimens is not significantly changing during wear, but even appears to decrease slightly. The clear decrease of the SA-value of the reference specimen R1 after the wear tests could be related to two mechanisms that seem to govern the wear process. As a major effect, smoothening of the surface appears. Secondly, transferred zinc coating from the sheet metal can result in adhesive zinc pick-up, which was also observed in previous studies [8]. This is confirmed by analyzing the dale void volume VVV, which is a parameter that provides information about the empty volume of surface valleys. In Fig. 10 the dale void volumes determined prior to and after the wear tests for each specimen are given. The dale void volume VVV shows a trend that can be related with the arithmetical mean height. Following wear, VVV decreases substantially for the reference surfaces, whereas only small changes were obtained for the oxidized surfaces. The friction coefficient decreased slightly for the selectively oxidized specimen compared to the virgin reference. Furthermore, the inductively tempered specimen showed lower friction coefficients than convectively tempered specimen. As a result an increasing effect of thicker oxide systems as friction reducing separation layers occurs. To gain further insight into the wear mechanisms, SEM micrographs of the specimen B1 and specimen B2 were taken prior to and after the wear tests. For condition B1, an untested specimen is shown in Fig. 11 (top) along with a SE image of the critical 45° area in a tested one. The surface is completely covered, with a chain-like bonding oxide structure, which is characteristic for α-Fe2O3 layer systems. After the wear tests, the layer appears smoothed (Fig. 11 bottom). The SE micrograph shown in Fig. 11 was recorded at the central position of the worn area. The layer structure is still recognizable and the chain-like oxide bondings are even more concentrated, and oxide particles seem to have been picked up during the wear tests at uncovered substrate areas. Fig. 11. Specimen B1 prior to (top) and after (bottom) the wear test (500 strokes); SE images were taken a low acceleration voltage of 3 kV to minimise charging effects This effect was also confirmed for specimen B2. The micrograph (Fig. 12 top) also showed a layer system with a similar structure of chain-like oxide bondings prior to the wear tests, but the occurrence is more intense. Again small particles have accumulated in between the chain structure of the oxide compounds after the wear test (Fig. 12 bottom). Discussion and outlook Depending on the actual heat treatment parameters, the oxide layer systems generated inductively at higher peak temperatures showed substantially different behaviour. On the one hand, peak temperatures above the annealing temperature of the tool steel can favour surface activation, and thus, selective oxidation processes that are caused by thermal acceleration of diffusion and higher reactivity of the tool steel elements with oxygen. On the other hand, even short dwell periods at peak temperatures above 620 °C reduced the hardness of the tool steel by nearly 40% in the bulk. 
In this context, it is important to note that the hardened tool steel features precipitations of chromium carbides, which also occur in the near surface areas. The focussed ion-beam prepared cross sections of the inductively tempered reference specimen showed that the carbides correspond to the areas that were not covered by the α-Fe2O3 oxide layer. As this may negatively affect the long-term wear behaviour, powder metallurgically manufactured tool steel with fewer or smaller chromium carbides might provide for even better performance in this respect. This approach is based on the assumption that layer defects increase constantly as a function of wear. The measured arithmetical mean height and dale void volume of inductively tempered specimen indicated promising wear behaviour. Decreasing arithmetical mean height and dale void volume were determined after the wear tests and SEM micrographs also showed a characteristic oxide structure with several particle pickups in between the oxide compounds. It is assumed that these result from tribooxidation effects. Increased surface temperatures at the contact areas between tool steel and drawn sheet metal should favour oxide growth. Thus, it appears feasible that an erosion of the oxide layer might be counterbalanced by a growth of oxide particles. The present study demonstrates that inductive heat treatment could be a viable approach generating reproducible oxide layer systems by selective oxidation processes. Compared to conventional heat treatments, the higher heating rates in combination with short isothermal dwell periods would reduce overall processing times. More importantly, however, the hardness of the bulk is less affected by this approach. Albeit the current study indicated promising wear properties, it should be noted that the contact pressures in industrial deep drawing processes could be substantially higher than those used in the present set-up for measuring sliding wear resistance, and work is underway to address this effect.
4,219.6
2018-01-01T00:00:00.000
[ "Materials Science" ]
Protein Domain of Unknown Function 3233 is a Translocation Domain of Autotransporter Secretory Mechanism in Gamma proteobacteria Vibrio cholerae, the enteropathogenic gram negative bacteria is one of the main causative agents of waterborne diseases like cholera. About 1/3rd of the organism's genome is uncharacterised with many protein coding genes lacking structure and functional information. These proteins form significant fraction of the genome and are crucial in understanding the organism's complete functional makeup. In this study we report the general structure and function of a family of hypothetical proteins, Domain of Unknown Function 3233 (DUF3233), which are conserved across gram negative gammaproteobacteria (especially in Vibrio sp. and similar bacteria). Profile and HMM based sequence search methods were used to screen homologues of DUF3233. The I-TASSER fold recognition method was used to build a three dimensional structural model of the domain. The structure resembles the transmembrane beta-barrel with an axial N-terminal helix and twelve antiparallel beta-strands. Using a combination of amphipathy and discrimination analysis we analysed the potential transmembrane beta-barrel forming properties of DUF3233. Sequence, structure and phylogenetic analysis of DUF3233 indicates that this gram negative bacterial hypothetical protein resembles the beta-barrel translocation unit of autotransporter Va secretory mechanism with a gene organisation that differs from the conventional Va system. Introduction Domain of Unknown Function (DUF) 3233 (PFAM: PF11557) is a family of uncharacterised hypothetical proteins conserved among gram negative gammaproteobacteria. Representative members of this domain include marine bacteria from genus Vibrio, Shewanella, Colwellia and Alcanivorax of which Vibrio cholerae, Vibrio parahaemolyticus, Vibrio splendidus and Vibrio vulnificus are pathogenic to human and aquatic life. Vibrio cholerae causes seasonal outbreaks of cholera of epidemic proportions in developing countries with high mortality rates [1]. The enterotoxins produced by the bacteria after colonising the host small intestine disrupts the ion transport by the intestinal epithelial cells causing outflow of large volumes of fluids into the intestine leading to watery diarrhoea, dehydration and in severe cases, death [1] [2]. Significant fraction of genomes of Vibrio species lack structure function annotation and most of these gene products are classified as hypothetical proteins or domains of unknown function [3]. The PFAM [4] database in its 24 th release lists about 3000 DUF families. Many of these DUF families are kingdom specific (DUF2883, DUF3328, DUF3329), limited/shared between kingdoms (DUF1497, DUF3609) or restricted/specific to certain organisms (DUF1196, DUF2667). The specific and ubiquitous nature of these domains suggests their functional importance in organism specific niches or a common biological role. Identifying homologous protein families through sequence based search marks the first step in the annotation of DUFs, providing an initial broad picture of the protein's probable family and function. Sequence homology search becomes increasingly powerful when we advance from normal sequence-sequence based searches to methods that uses profile or HMM information like HHsenser [5], which increases the efficiency of finding remote homologues. In silico structure prediction methods together with sequence similarity detection methods assist the annotation of foldfunction space. 
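As a concrete illustration of how such an iterative, profile-based search could be scripted before turning to the structural analysis, the sketch below (in Python) invokes the BLAST+ psiblast program. The query file name, the local database path and the decision to pass the 0.005 cut-off as the profile-inclusion threshold are assumptions made for illustration; the exact command line used in this study is not stated.

import subprocess

# Assumed inputs: a FASTA file holding the V. cholerae DUF3233 sequence and a
# locally formatted copy of the NCBI nr database (both paths are placeholders).
query_fasta = "duf3233_vc.fasta"
nr_database = "/data/blastdb/nr"

cmd = [
    "psiblast",
    "-query", query_fasta,
    "-db", nr_database,
    "-num_iterations", "4",          # the study reports convergence at the fourth iteration
    "-inclusion_ethresh", "0.005",   # assumed interpretation of the 0.005 threshold
    "-outfmt", "6 sseqid evalue pident qcovs stitle",
    "-out", "duf3233_psiblast.tsv",
]
subprocess.run(cmd, check=True)
print("PSI-BLAST finished; hits written to duf3233_psiblast.tsv")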
Fold recognition methods like I-TASSER [6] help predict the 3 dimensional (3D) structure and functions of proteins that share low sequence identity with other known structures. In this study we analyse the sequence and structural characteristics of DUF3233 using computational approaches and try to infer various properties of this domain. Sequence search by HHsenser identifies similarity with the beta-barrel translocation unit of autotransporter Va secretory proteins. Sequence homology combined with secondary structure prediction indicates a beta-barrel domain of 12 betastrands. The predicted 3D model from I-TASSER shows the structure with an overall beta-barrel topology with an N terminal helix running along the central barrel axis perpendicular to the 12 antiparallel strands that form the barrel. Amphipathicity and membrane barrel discrimination analysis suggest the domain is a potential outer membrane gram negative beta-barrel protein. Autotransporter translocation units belong to the transmembrane beta-barrel fold in SCOP database [7], defined by a betabarrel of 12 to 14 antiparallel strands with an N terminal helix perpendicular to the barrel. Finally with the analysis of genomic context of DUF3233 we could infer that this outer membrane beta-barrel translocation domain has a gene organisation that is not typical of the autotransporter Va secretory mechanism. Results Sequence based characterization of DUF3233 as autotransporter b-domain protein Sequence search for homologues with PSI-BLAST using a representative query, Vibrio cholerae DUF3233 (RefSeq: NP_232949) against the NCBI nr database with a threshold 0.005, reached convergence at the 4 th iteration. The resulting sequences identified were hypothetical proteins conserved among gram negative proteobacteria. For improved search and better coverage of homologous sequence space, information from aligned regions of DUF3233 sequences in the form of a multiple sequence alignment profile was queried with HHsenser. From the resulting sequences in the permissive alignment list we were able to infer homology between DUF3233 and the outer membrane beta-barrel translocation domain of autotransporter proteins. DUF3233 shares sequence similarity with outer membrane beta-barrel domain of Ochrobactrum intermedium autotransporter (e-value 1E-34, 95% coverage, 22% identity), Rhizobium leguminosarum adhesin autotransporter (e-value 2E-29, 94% coverage, 18% identity) and Yersinia aldovae AidA adhesin autotransporter (e-value 2E-26, 89% coverage, 18% identity). Interestingly, a number of gram negative hypothetical proteins were picked up as potential homologues, which showed fair amount of similarity to the autotransporter beta-domain (Table S1). DUF3233 is a solitary outer membrane autotransporter bbarrel domain Proteins targeted for transport across membranes posses leader sequence or signal peptide at their N-terminus, which directs translocation. We analysed DUF3233 sequences using a combination of artificial neural networks and HMMs implemented in SignalP [10] to predict the presence and location of signal peptide cleavage sites. SignalP identified the presence of N-terminal signal peptide of an average length of 23 amino acid residues having a positively charged amino terminal followed by a hydrophobic region and hydrophilic carboxy terminal. Signal peptides are cleaved from the exported protein by specific proteases called signal peptidases (SPases) [11]. 
Prediction of cleavage mechanism of these signal sequences by LipoP [12] identifies SPase 1 target site, indicating DUF3233 might be a non-lipoprotein. We browsed DUF3233 genomic region of all representative organisms with STRING [13] to look for possible gene fusion events with other domains and found no such occurrence. DUF3233 is a single domain protein found on the small chromosome 2 in Vibrio species with an upstream gamma-glutamyltranspeptidase (GGT) or response regulatory protein transcribed in one potential operon (Table 1). These upstream proteins lack the N-terminal signal sequence for inner membrane transport and analysis through SecretomeP [14] indicates that these proteins are not exported through non-classical secretory system. Gene organisation of DUF3233 therefore suggests a solitary translocation unit with an absent upstream secretory protein. Structure based validation of DUF3233 as transmembrane b-barrel domain of autotransporter proteins DUF3233 sequence based PSI-BLAST search for proteins with known structures (72,386 structures in PDB as of April 2011) fetched results with a maximum alignment length covering 51 residues. Secondary structure assignment by PSIPRED [15] predicts an N-terminal a-helix (a N ) followed by 12 consecutive b-strands (b 1 -b 12 ) interspersed by two short turns of a-helices a 1 and a 2 predicted to occur between b 1 -b 2 and between b 5 -b 6 respectively. With no suitable template with significant sequence homology available, we used the fold recognition algorithm implemented in I-TASSER to predict a 3D model of DUF3233. The translocation unit of NalP from N. meningitidis (PDB: 1UYN_X, 15% identity, 85% coverage, normalised Z-score above 1) was identified by I-TASSER in the top four threading templates to model V. cholerae DUF3233. The predicted structure of V. cholerae DUF3233 (Figure 1) resembles the beta-barrel translocation unit of autotransporter proteins, aligning over 75% structurally equivalent positions with the template and an RMSD of 2.3. The domain has an N-terminal a-helix running along the central axis surrounded by beta-barrel formed by twelve antiparallel beta-strands. Predicted strand assembly within the outer membrane shows the carboxy and amino terminal of the betabarrel point towards the periplasmic space, the central helix is oriented such that its N-terminal is pointed towards the external environment. Secondary structure based sequence alignment of DUF3233 with the translocation unit of autotransporters shows a similar domain organisation ( Figure 2). Using alignment of DUF3233 sequences the average hydropathy, amphipathicity and similarity plots was generated with AveHAS [16]. Figure 3, shows 12 hydrophobicity and amphipathicity peaks with an average stretch of 10 to 15 residues per peak that may form transmembrane beta-strands. DUF3233 is evolutionarily linked to the autotransporters To infer evolutionary relation with type V secretory proteins, we analysed DUF3233 representatives with members of both Va and Vb family (Table S2). The third type of proteins found in the type V secretory system, type Vc or AT-2 proteins, which are characterised by trimeric C-terminal beta-barrel [17] were not considered for phylogenetic analysis. The inferred phylogenetic tree ( Figure 4) classifies members of the two families into two separate clans. Proteins are grouped into clusters with similar function, architecture and organism type as analysed in [18] and [19]. 
DUF3233 family sequences though related to autotransporters form a distinct group from the main autotransporter clan indicating that these domains represent new cluster of autotransporters. milieu that includes enzymes, which break down carbohydrates, proteins and lipids, and virulence factors such as adhesins and toxins by those involved in pathogenesis. Transport of these molecules is mediated by protein complexes through conserved secretory pathways. Of the 6 types of secretory mechanisms known in gram-negative bacteria (type I to type VI), type V represents the simplest transport system. Proteins of the type V secretory system fall under the autotransporter (Va), two partner secretion (Vb) and the AT-2 (Vc) families, and share a similar domain organisation: an N-terminal signal peptide for inner membrane translocation followed by a passenger protein which is normally a virulence determinant and a C-terminal translocation unit for transporting the upstream passenger protein [18]. Discussion Proteins of the autotransporter (Va) family were the first to be described [20] and form the largest representation of this system [19]. Autotransporters export a wide range of toxins and enzymes [21] to the cell surface or secrete them into the external environment. The passenger domain and translocation unit of autotransporters are both expressed as a single polypeptide [20] making the translocation unit highly specific and committed for transporting only the upstream passenger. Solved experimental structures of the autotransporter translocation unit [22][23][24] show that they all possess a similar structure, a beta-barrel of 12 antiparallel strands with a central N-terminal helix running along the barrel axis. Proteins of the two partner secretion (Vb) are widely distributed and follow a similar mode of function, transporting cytolysins, adhesions and metalloproteases [19]. The secreted exoprotein and the transporter unlike the Va proteins are not linked but, are expressed as two separate proteins transcribed in a single operon [25]. Vb transporters are predicted to have a multidomain architecture [26] and a relatively wider barrel made of 16 [27] to 20 [28] beta-strands. The newly discovered AT-2 family (Vc) [29] represents proteins secreted via a homotrimeric mechanism [30]. Proteins secreted through this system are mainly implicated in virulence [31]. With a domain organisation similar to that of autotransporters, the system functions with the coming together of three individual proteins each complete with an N-terminal signal peptide, a passenger unit and four beta-strand domain at the C-terminal which makes a complete closed 12 stranded beta-barrel translocation unit upon trimerisation [31] [32]. The present work describes sequence and structure based characterization of proteobacteria DUF3233 as a beta-barrel transmembrane domain of autotransporter proteins. DUF3233 packs an average 312 amino acid residues (including N-terminal signal peptide) and is currently classified as a domain of unknown function. DUF3233 is encoded as a single domain protein, homologous to the translocation unit of autotransporters. One aspect of DUF3233 that distinguishes it from other main class autotransporters is that it lacks a covalently linked N-terminal passenger domain, to which C-terminal translocation units of all autotransporters are committed to transport. 
Few autotransporter representatives of two-polypeptide architecture [19] might suggest the secretion of co-transcribed upstream proteins similar to the TPS system, but considering the cytosolic nature of upstream proteins, extracellular translocation seems unlikely. Few representative members from the Vibrio genus express DUF3233 and upstream putative GGT or response regulatory proteins in a single operon (Table 1). Over expression of GGT [33] and GGDEF domain proteins [34] [35] are implicated in pathogenesis. The prokaryotic GGT is shown to be a major factor in the colonisation of gut and gastric mucosa [36] [37]. The upstream response regulators are two-domain proteins with an N-terminal CheY-like regulatory and a conserved C-terminal GGDEF effector domain, which is responsible for eliciting pathogenic response through cyclic di-GMP mediated exopolysaccharide synthesis and biofilm formation [38]. Genes encoding virulence products in V. cholerae are organised in clusters or operons [39], and since gene encoding DUF3233 is located among virulent genes, the possibility of the involvement of DUF3233 in pathogenesis cannot be overlooked. The translocation units of autotransporters exhibit conserved amino acid consensus motif at their carboxy terminus, the barrel closing beta-strand displays alternate arrangement of polar and hydrophobic residues terminating with a conserved aromatic amino acid at the barrel terminus which is usually a phenylalanine or a tryptophan [40]. Hendrixson et al., [41] demonstrated the importance of C-terminal consensus motif on the viability of H. influenzae Hap translocation unit. Deletion of terminal 12 residues proved detrimental to the outer membrane localisation; while the stability and/or outer membrane localisation of the translocation unit was affected with the deletion of all three terminal residues, point mutations of these residues showed no effect on the outer membrane localisation or secretion of the mature protein [41]. DUF3233 displays consensus pattern at its C-terminal that resembles the conserved motif found among autotransporters discussed above ( Figure S1). A stretch of alternating polar and hydrophobic residues precedes the terminal beta-strand having a hydrophobic segment and a conserved 'terminal' phenylalanine or a tyrosine residue. Interestingly the extreme carboxy terminus harbours three conserved polar residues [N/D][Q/E][D/E] after the 'terminal' F/Y. Secondary structure and predicted models of DUF3233 show the hydrophilic residues of the ''polar tail'' are not part of the terminal beta-sheet, but instead form a short overhang pointed towards the periplasm. As yet, we do not know the significance and possible role of these tail polar residues on the outer membrane localisation and stability. DUF3233 exhibits certain features that are in common with the translocation units of type Va secretory proteins and yet possesses characteristics that are not typical to the proteins of the above system. DUF3233 represents a translocation unit that is devoid of a secretable passenger unit. Considering its location in the representative genomes alongside other virulence genes, we hypothesize that this domain is involved in pathogenesis. However, the mechanism apparently looks new and different than a typical type Va secretion system. 
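As a rough way to screen candidate sequences for the C-terminal signature discussed above, the short sketch below (in Python) encodes the terminal F/Y followed by the [N/D][Q/E][D/E] polar tail as a regular expression. The pattern is only a simplified approximation of the observed consensus, and the example sequences are invented; it is not a validated motif definition.

import re

# Terminal aromatic (F or Y) immediately followed by the three-residue polar
# tail [N/D][Q/E][D/E] at the extreme C-terminus, as described in the text.
TAIL_MOTIF = re.compile(r"[FY][ND][QE][DE]$")

def has_duf3233_like_tail(sequence: str) -> bool:
    # True if the sequence ends with the F/Y + polar-tail signature.
    return bool(TAIL_MOTIF.search(sequence.upper()))

# Invented toy sequences, not real DUF3233 entries
for seq in ("MKTAYIAKQRFNQD", "MKTAYIAKQRWAAA"):
    print(seq, "->", has_duf3233_like_tail(seq))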
Our study on the proteobacterial protein DUF3233 with a combination of methods like sequence similarity searches, outer membrane beta-barrel discrimination, phylogenetic analysis, and fold recognition has led us to a consensus at annotating fold and function to this domain. Sequence similarity search suggests that DUF3233 has remote homology with the translocation unit of autotransporter proteins of the type V secretory system. The domain's outer membrane beta-barrel nature was further emphasised by signal peptide, outer membrane beta-barrel discrimination and amphipathicity analysis. Secondary structure prediction and alignment with translocation units, and inputs from the predicted model suggests that DUF3233 and the translocation unit of autotransporter proteins share a similar domain organisation. Drawing a consensus from various in silico prediction methods it appears that DUF3233 is a cluster of remote homologues of autotransporters. This is the first report of an autotransporter like protein family in Vibrio species, and though within the realm of bioinformatics we were able to infer its probable family and fold, the pathogenic mechanism still remains to be explored and seeks further experimental studies. Materials and Methods Sequence similarity search 20 DUF3233 genes from NCBI comprising one sequence each from Aliivibrio, Colwellia and Ferrimonas, five sequences from Shewanella and twelve from Vibrio species were fetched using their corresponding RefSeq ids. YP_001366070 and YP_001555810 were excluded because of their short domain size and a total of 18 DUF3233 sequences were included in this study. The signal peptides of DUF3233 sequences have not been included in various analyses of this study unless mentioned. PSI-BLAST [42] profile-sequence search was used to probe homologous protein families against NCBI nonredundant (nr) database with default parameters. ClustalW multiple sequence alignment profile of DUF3233 protein sequences was taken in as input for HHsenser to search the nr database with a threshold e-value of 0.001 and default parameters for improved homolog coverage. TMBETADISC-RBF [8], PSSM profile based discrimination of beta-barrel OMPs from other folding types like globular and membrane proteins was used to assess the outer membrane nature of DUF3233. DUF3233 sequences were queried against HHomp database [9] to detect homology with other known OMPs. Signal peptide and Genomic context analysis The presence of N-terminal signal peptide and the putative cleavage sites were predicted with SignalP 3.0 [10]. Using LipoP 1.0 [12] the signal peptide sequences were checked for lipoprotein signal peptide signatures that differentiate them from other signal peptides and subsequently cleavage by signal peptidase II from signal peptidase I. SignalP 3.0 and SecretomeP 2.0 [14] were used to determine inner membrane transport of proteins upstream of DUF3233 via Sec dependent or other non classical secretory pathways. STRING (version 8.3) [13] and DOOR (version 2.0) [43] were used for gene neighbourhood and operon analysis. Structure prediction and transmembrane b-barrel analysis Secondary structure assignments were made using PSIPRED [15]. 3D structure of V. cholerae DUF3233 (RefSeq: NP_232949) was predicted using I-TASSER fold recognition method [44]. The predicted structure was superimposed with the template using TopMatch [45] and visualised with Rasmol [46]. 
The WHAT [47] program was used to predict hydropathy and amphipathicity using sliding windows of 13, 15, 17 residues and an angle of 100u for a-helix and 180u for b-strand. Multiple sequence alignment profile was used to plot the average hydropathy, amphipathicity and similarity plot using the AveHAS [16] program. Orientation of the domain within the outer membrane was predicted using the Viterbi method implemented in PRED-TMBB [48]. Secondary structure based sequence alignment of DUF3233 family and with the representative autotransporter translocation units was done with inputs from ClustalX [49], ProbCons [50], Ali2D [51] and the alignment was adjusted manually. Phylogenetic analysis Amino acid sequences of translocation unit of autotransporters, and pore forming beta-domain of two-partner secretion (TPS) family members, reported in [18] and [19] were used for phylogeny analysis. Three representative DUF3233 sequences from V. cholerae (RefSeq: NP_232949), C. psychrerythraea (RefSeq: YP_269983) and S. loihica (RefSeq: YP_001095898) along with beta-barrel domain sequences of the Va, Vb secretory systems were analysed with MEGA 4 [52]. Pairwise distances were calculated and the phylogenetic tree of the aligned sequences was generated using minimum evolution method. A bootstrap test of phylogeny was performed with p-distance model on the inferred evolutionary tree and a consensus bootstrap tree was generated from 1000 replicates. [18], Two partner secretion (Vb) [19] proteins and DUF3233 representatives used for phylogenetic analysis. (DOC)
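For readers without access to the WHAT and AveHAS programs, the following sketch (in Python) reproduces the flavour of the amphipathicity analysis: a sliding-window Kyte-Doolittle hydropathy together with an Eisenberg-style hydrophobic moment evaluated at the 180° periodicity used above for β-strands (100° would correspond to the α-helical setting). The hydropathy scale is the published Kyte-Doolittle scale, but the window length, the moment formulation and the toy sequence are assumptions of this sketch and may differ from the settings of the original programs.

import math

# Kyte-Doolittle hydropathy scale
KD = {'A': 1.8, 'R': -4.5, 'N': -3.5, 'D': -3.5, 'C': 2.5, 'Q': -3.5,
      'E': -3.5, 'G': -0.4, 'H': -3.2, 'I': 4.5, 'L': 3.8, 'K': -3.9,
      'M': 1.9, 'F': 2.8, 'P': -1.6, 'S': -0.8, 'T': -0.7, 'W': -0.9,
      'Y': -1.3, 'V': 4.2}

def window_profiles(seq, window=13, delta_deg=180.0):
    # Sliding-window mean hydropathy and Eisenberg-style hydrophobic moment;
    # delta_deg = 180 is the beta-strand periodicity, 100 the alpha-helical one.
    delta = math.radians(delta_deg)
    profiles = []
    for i in range(len(seq) - window + 1):
        h = [KD[a] for a in seq[i:i + window]]
        mean_h = sum(h) / window
        mx = sum(hn * math.cos(n * delta) for n, hn in enumerate(h))
        my = sum(hn * math.sin(n * delta) for n, hn in enumerate(h))
        profiles.append((i, mean_h, math.hypot(mx, my) / window))
    return profiles

# Invented toy sequence with an alternating hydrophobic/polar stretch
toy = "VSVSVSVSVSVSVSGGNDTLVLVLVLVL"
for pos, mean_h, moment in window_profiles(toy)[:5]:
    print(f"position {pos:2d}  mean hydropathy {mean_h:+.2f}  moment {moment:.2f}")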
4,431
2011-11-01T00:00:00.000
[ "Biology" ]
Study on Improvement of Dumping Site Stability in Weak Geological Condition by Using Compacted Layer Berau Basin, a sub-basin of Tarakan Basin, had been developed during Eocene to Miocene period. Rocks in Berau Basin consist of sedimentary, volcanic and igneous rocks aged from Pre-tertiary until Quaternary epoch. The youngest identified rock formation was alluvial deposit consists of mud, silt, sand, gravel and swamp with brown to dark color. This youngest rock formation is relatively weak geological condition and can cause problems in the coal mining operation. PT Berau Coal as one of the coal mining companies in Berau Basin area had experienced some problems related to the occurrence of alluvial deposit. A large failure has occurred at one of its out pit dumping area which lies over the swamp material. The failure caused a higher operating cost since it made that the distance for waste rock dumping became to be farther than the designated area. Therefore, in order to prevent similar failure occurring at dumping area which lies above swamp material, an improvement of dumping site stability on weak geological condition has to be needed. The proposed method for improving the stability of out pit dumping area in weak geological condition is to construct the compacted layer of waste rock before the out pit dumping area construction. Based on experimental results, a minimum of 40 kPa pressure is needed to give a proper compaction to the waste rock. The result of numerical analysis by Finite Element Method (FEM) shows that construction of compacted layer on the base of out pit dumping area can improve its stability. Introduction PT Berau Coal is one of the Indonesian coal mining companies located in Berau Regency, East Kalimantan Province, Indonesia.PT Berau Coal runs several mining operations within this area.One of its mining operations is located in Lati area (Figure 1(a)).The coal deposit in Lati area is a syncline with Northwest-Southeast axis.The dip of coal seam is around 10˚ to 23˚.Lati coals consist of 4 major coal seams which are P, Q, R and T seam.The average thicknesses of these seams are 2.6 m, 2.4 m, 3.1 m and 2.4 m respectively.Due to the geometry of coal deposit, the mining activity in Lati area was divided into 5 pits which are Pit West, East, T, Others and North.Boundary of Lati open pit coal mine is shown in Figure 1(b). Lati area is located in Berau Basin which is a sub-basin of the Tarakan Basin [1].Formation of the Tarakan-Basin was begun with transgression process which happened during the Eocene until early Miocene Epoch.In the middle of Miocene, regression in Tarakan Basin occurred continuously by eastward gradational deposition to form delta deposit.Tarakan Basin experienced more active regression during Miocene to Pliocene Epoch.Depocenter thick delta sedimentation process with relatively eastward movement continued as the time goes by.Regionally, rocks in this area consist of sedimentary rock, volcanic rock and igneous rock with predicted age ranging from pre-tertiary to quaternary period.Berau Basin, from the oldest formation to the youngest ones, consist of Banggara Formation (Kbs), Sambakung Formation (Tes), Tabalar Formation (Teot), Birang Formation (Tomb), Latih Formation (Tml), Tabul Formation (Tmt), Labanan Formation (Tmpl), Domaring Formation (Tmpd), Sinjin Formation (Tps), Sajau Formation (TQps) and alluvial deposition (Qa).The stratigraphic column of rocks in Berau area can be seen in Figure 2. 
Figure 3 shows regional geology map of Lati area (see Figure 2 for the Index).In general, rocks in this area can be divided into several formations as follows: a) Birang Formation (Tomb): substitution between napal, limestone and tuff in upper part and substitution between chert, napal, conglomerate, quartz sand and limestone in lower part.Thickness of rock formation is more than 1100 m.Some fossils are contained in the formation which are Lepidocylina ephicides, Spiroclypeus sp., Miogypsina sp., Margionopora vertebralis, Operculina sp., Globigerina tripartita, Globoquadrina altispira, Globorotalia mayeri, Globorotalia peripheronda, Globigerinoides immaturus, Globigerinoides sacculifer, Praeorbulina transitoria, Uvigerina sp., Cassidulina sp.Predicted Epoch: Oligocene-Miocene.b) Latih Formation (Tml): quartz stone, limestone, siltstone and coal in upper part.Sandy shale in the middle part and limestone at the bottom part.Coal seam with brown, dark color.Thickness of the formation is no more than 800 m.The formation deposited in delta environment estuarine and shallow sea.Some fossils are contained in the formation, such as Praeorbulina glomerosa, Praeorbulina transitioria.Predicted Epoch: early Miocene -Middle Miocene.c) Alluvial deposit (Qa): mud, silt, sand, gravel, pebble and swamps with dark until black color and the thickness is more than 40 meter. The presence of the youngest rock formation in Lati area, which is alluvial deposit, makes several problems to mining operation especially the waste rock dumping operation due to its weak strength characteristics.In 2007, one of the out pit dumping areas of Pit East was failure (Figure 4).4,400,000 BCM of overburden in 30 meter height and 40˚ slope was collapsed.The base of this dumping area was known to be the swamp material. Even though the out pit dumping area failure has not caused fatality or equipment damage, it has caused significant financial loss.Waste rock dumping operation plays an important role in mining activities [3].This operation needs to be conducted effectively in order to reduce the operating cost.Moreover, dumping area must remain stable to ensure a safe and continual mining operation.Waste rock dumping operation in Lati area is planned to be carried out by combination of out pit dumping and backfilling operation.Out pit dumping is needed especially at the beginning of mining activity until there is enough space available of mined out area which can be used for the backfilling operation.The maximum distance of out pit dump is set to be 1.5 km from the mining front.By this planned operation, the operating cost will be lowered and the disturb area from the mining activity can be minimized.However, the failure resulting higher operating cost since overburden from Pit East has to be dumped further from the designated area.Therefore, in order to prevent the similar case happening at dumping area which lies over swamp material, an improvement for stability of dumping site on weak geological condition is needed.This paper will discuss the impact of swamp material to stability of dumping area which lies above it.Moreover, the improvement of dumping area stability by using compacted layer of waste rock will also be discussed. 
Investigation of Failure out Pit Dumping Area Site investigation was carried out to find the cause of failure in out pit dump of Pit East.Core and bulk samples were taken from the failure out pit dump.The samples were then wrapped by series of plastic wrap and aluminum foil to minimize physical and chemical alteration.Finally samples were wrapped by sponge and put into PVC pipe to resist shaking during transportation to the laboratory.Laboratory test was then carried out in order to understand samples' physical and mechanical properties which will be the input data for numerical analysis.Based on the laboratory test, the physical and mechanical properties of waste rock, swamp and base rock materials were given in the Table 1. Numerical analysis using Finite Element Method (FEM) was carried out to simulate the stability of dumping area by using properties obtained from the laboratory test and geometry of the failure out pit dump.Finite Element Method has been widely known as reliable and accurate method to analyze wide range of slope stability problems such as reported by [4] and [5] Failure occurs in the model after the third dumping stage as in the real field condition.Each dumping stage height is 10 meter so after the third dumping stage, the total dumping height is 30 meter.Based on the result of numerical analysis, it can be known that the shear failure was occurred at the bottom of the out pit dump from second stage and becoming worse at the third stage.A low strength factor zone at the base of dumping area was increased by increasing height of the out pit dump.Therefore, there is possibility that failure occurring at the base of dumping area triggered the failure of out pit dump by developing crack or slip plane to the surface of dumping area. The result of numerical analysis shows that the occurrence of swamp material in the base of out pit dumping area has triggered the collapse.To give a better understanding of the influence of swamp material on the stability of the dumping area, further analysis was carried out.A numerical model for an out pit dump with no swamp material underneath was constructed and the result is shown in Figure 9.Moreover, numerical models for an out pit dump with several different thickness of swamp material at the base were also constructed in order to understand the influence of swamp material thickness and the result are shown in Figure 10, Figure 11. Figure 9 shows that the strength factor of dumping area without existence of swamp material is in stable condition (the strength factor is more than 1).This result is strengthens by no shear or tension failure developed around the base of the out pit dumping area.Figure 10 and Figure 11 represent that the out pit dumping area is becoming more unstable by increasing thickness of the swamp material.Therefore, the existence of swamp material has obvious impact on the stability of dumping area. Improvement of Stability of Dumping Site The previous analysis has shown that the swamp material which is the base of the out pit dumping area is the cause of failure.Therefore, an improvement must be taken in order to prevent such failure occurs at other dumping site with similar condition and to prevent additional loss at the operating cost.The proposed method to improve stability of the out pit dumping area is by layering the base of the out pit dumping area with compacted waste rock before the waste rock dump is constructed. 
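Before turning to the compaction experiments, a back-of-envelope bearing-capacity check illustrates why a high waste dump founded on soft swamp is critical. The sketch below (in Python) compares the vertical stress imposed by the dump column (γH) with a Prandtl-type undrained bearing capacity (about 5.14 cu) of the swamp. The unit weight and undrained shear strength used here are placeholder values, not the laboratory data of Table 1, and the check ignores the finite thickness of the swamp layer and the stiffer base rock beneath it; it is not a substitute for the FEM analysis above.

def dump_base_pressure(unit_weight_kn_m3, height_m):
    # Vertical stress at the dump base from the waste rock column, in kPa.
    return unit_weight_kn_m3 * height_m

def undrained_bearing_capacity(cu_kpa, Nc=5.14):
    # Prandtl-type ultimate bearing capacity of a soft swamp layer, in kPa.
    return Nc * cu_kpa

# Placeholder material values (NOT the laboratory results of Table 1)
gamma_waste = 18.0  # kN/m3, assumed unit weight of loosely dumped waste rock
cu_swamp = 40.0     # kPa, assumed undrained shear strength of the swamp

for height in (10, 20, 30):  # the three 10 m dumping stages modelled above
    q = dump_base_pressure(gamma_waste, height)
    q_ult = undrained_bearing_capacity(cu_swamp)
    print(f"stage height {height:2d} m: base pressure {q:5.0f} kPa, "
          f"capacity {q_ult:5.0f} kPa, ratio {q_ult / q:.2f}")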
Waste Rock Compaction Test

To understand the behavior of the waste rock material when compacted, three samples were taken from dumping site Q10, located near Pit East. The water contents (w) of samples 1, 2 and 3 were 11.0%, 12.5% and 14.0% respectively. The samples were compacted using a mold and hammer [6]; a compacted sample is shown in Figure 12. The coefficient of permeability of each sample was then determined, and the results are presented in Figure 13.

Based on the results shown in Figure 13, the coefficient of permeability decreases with increasing compaction pressure. The coefficients of permeability of samples 2 and 3 are relatively low compared with that of sample 1, because the water contents of samples 2 and 3 are higher than that of sample 1. The largest effect of compaction pressure was found for sample 3, which has the highest water content. However, the coefficient of permeability converges to roughly the same value once more than 40 kPa of pressure is applied. Therefore, a minimum pressure of 40 kPa is needed to give proper compaction to the various materials contained in the waste rock.

Numerical Analysis of the Effect of the Compacted Layer

After determining the pressure needed to compact the waste rock, the use of a compacted waste rock layer to improve the stability of the out-pit dumping area was studied. First, laboratory tests were carried out to obtain the physical and mechanical properties of the compacted sample; a summary of these properties is given in Table 2.

As established by the previous investigation, the swamp material at the base of the failed out-pit dump was the cause of the instability. Therefore, constructing a compacted layer of waste rock above the swamp material was proposed to improve the stability of future out-pit dumping areas. To evaluate this proposed method, a numerical model simulating the effect of a compacted layer at the base of the out-pit dump was constructed, as shown in Figure 14.

Compacted layer thicknesses of 2 m, 4 m and 6 m were used in the numerical model to evaluate their effect on the stability of an out-pit dumping area underlain by 15 m of swamp material; the results are shown in Figures 15-17. Further analysis was conducted by varying both the compacted layer thickness and the swamp thickness to obtain the required ratio of compacted layer to swamp material, as shown in Figures 18 and 19.

Based on the results shown in Figures 15-17, both the shear failure and the low-strength-factor zone decrease with increasing thickness of the compacted layer. The required compacted layer thickness for 15 m of swamp material was 6 m. Moreover, the result in Figure 17 indicates that another 10 m dumping stage could be constructed with a 6 m compacted layer at the base of the out-pit dumping area. Similar results were obtained for compacted layers of 2 m and 4 m over 5 m and 10 m of swamp material respectively; the out-pit dumping area was stable in both simulations. Therefore, the thickness ratio between the swamp material and the compacted layer required to maintain out-pit dump stability is around 2.5.
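The reported ratio of about 2.5 between swamp thickness and compacted-layer thickness can be used as a simple planning rule of thumb. The helper below is only a sketch of that arithmetic, not part of the original analysis, and it should not replace site-specific numerical modelling.

```python
def required_compacted_thickness(swamp_thickness_m: float, ratio: float = 2.5) -> float:
    """Estimate the compacted waste-rock layer thickness needed under an out-pit dump
    founded on swamp material, using the thickness ratio (swamp : compacted ~ 2.5)
    reported in the study."""
    return swamp_thickness_m / ratio

for t_swamp in (5.0, 10.0, 15.0):
    t_layer = required_compacted_thickness(t_swamp)
    print(f"swamp {t_swamp:4.1f} m  ->  compacted layer ~ {t_layer:.1f} m")
```

For 5 m, 10 m and 15 m of swamp material this reproduces the 2 m, 4 m and 6 m compacted layers found adequate in the numerical models.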
Conclusion

Based on the investigation, the cause of the out-pit dumping area failure near Pit East, Lati Area, PT Berau Coal, was the presence of swamp material at the base of the dumping area. To prevent such failures in the future, the stability of out-pit dumping areas is improved by covering the swamp material with compacted waste rock before the waste rock dump is constructed. A minimum pressure of 40 kPa is needed to form the compacted layer. Numerical modelling shows that the stability of the out-pit dumping area can be improved by placing a compacted layer above the swamp material. The thickness ratio between the swamp material and the compacted layer required to maintain out-pit dump stability is around 2.5.

Figure 4. Failure at the out-pit dumping area near Pit East.
Figure 6. Result of numerical analysis for the first dumping stage.
Figure 7. Result of numerical analysis for the second dumping stage.
Figure 8. Result of numerical analysis for the third dumping stage.
Figure 9. Result of numerical analysis for the out-pit dumping area without swamp material.
Figure 10. Result of numerical analysis for the out-pit dumping area with 5 m of swamp material.
Figure 11. Result of numerical analysis for the out-pit dumping area with 10 m of swamp material.
Figure 12. (a) Mold and hammer for the compaction process; (b) sample after compaction.
Figure 13. Relation between compaction pressure and coefficient of permeability.
Figure 18. Numerical model with a 2 m compacted layer and 5 m of swamp material.
Figure 19. Numerical model with a 4 m compacted layer and 10 m of swamp material.
Table 1. Physical and mechanical properties of rocks around the out-pit dumping area.
Table 2. Physical and mechanical properties of compacted waste rock.
3,529.4
2015-03-04T00:00:00.000
[ "Geology" ]
Wiki-Pi: A Web-Server of Annotated Human Protein-Protein Interactions to Aid in Discovery of Protein Function Protein-protein interactions (PPIs) are the basis of biological functions. Knowledge of the interactions of a protein can help understand its molecular function and its association with different biological processes and pathways. Several publicly available databases provide comprehensive information about individual proteins, such as their sequence, structure, and function. There also exist databases that are built exclusively to provide PPIs by curating them from published literature. The information provided in these web resources is protein-centric, and not PPI-centric. The PPIs are typically provided as lists of interactions of a given gene with links to interacting partners; they do not present a comprehensive view of the nature of both the proteins involved in the interactions. A web database that allows search and retrieval based on biomedical characteristics of PPIs is lacking, and is needed. We present Wiki-Pi (read Wiki-π), a web-based interface to a database of human PPIs, which allows users to retrieve interactions by their biomedical attributes such as their association to diseases, pathways, drugs and biological functions. Each retrieved PPI is shown with annotations of both of the participant proteins side-by-side, creating a basis to hypothesize the biological function facilitated by the interaction. Conceptually, it is a search engine for PPIs analogous to PubMed for scientific literature. Its usefulness in generating novel scientific hypotheses is demonstrated through the study of IGSF21, a little-known gene that was recently identified to be associated with diabetic retinopathy. Using Wiki-Pi, we infer that its association to diabetic retinopathy may be mediated through its interactions with the genes HSPB1, KRAS, TMSB4X and DGKD, and that it may be involved in cellular response to external stimuli, cytoskeletal organization and regulation of molecular activity. The website also provides a wiki-like capability allowing users to describe or discuss an interaction. Wiki-Pi is available publicly and freely at http://severus.dbmi.pitt.edu/wiki-pi/. 
Introduction Annotations of proteins such as their sequence, structure, interactions and functions, or their association to diseases and drugs, are provided by a number of web-based databases such as Uniprot [1], HPRD [2], Gene Cards [3], Gene Ontology [4], KEGG [5], PDB [6], OMIM [7] and REACTOME [8].Some databases such as BioGRID [9], STRING [10], DIP [11], MINT [12], InnateDB [13], and IntAct [14] are designed exclusively to provide information about protein-protein interactions (PPIs).These PPI databases provide a valuable resource by curating experimentally known interactions, and have become the goldstandard data sources for a number of bioinformatic studies such as prediction of protein-protein interactions and protein functions, gene prioritizations and other systems biology studies.The contribution of most of these websites is the presentation of datasets that are painstakingly compiled by curators from literature.Conversely, a crowdsourcing model for curating protein annotations was explored by WikiGenes [15].Similar to Wikipedia, users can collaboratively create, edit and update articles on the site.Thus, instead of a small group of creators, researchers around the globe are able to contribute to that knowledge base.However, all of these web-based data resources provide a gene-centric view of interactions.That is, the ''central players'' of these databases are genes and not the interactions.In most of these web resources, interactions are merely provided as lists with respect to a specific protein, and any information about the interactions, if provided, is about the type of interaction or the experimental method or publication that reports the said interaction.Although the information that an interaction exists between two proteins is useful by itself, it may be insufficient from a biomedical researcher's perspective.Biomedical researchers often have one or a few proteins that they study in detail, and exploring the interactions of these proteins requires rich annotations about the interacting partners in order to identify an interaction that is relevant to their research -namely, an interaction that would potentially lead to further experiments in their own lab. Currently there is no search engine that allows retrieval of PPIs by their biomedical associations.Existing databases primarily allow a user to search for interactions by gene symbol or other widely used identifiers, be it protein/gene name, Entrez gene identifier, or Ensembl identifier.However, biologists specializing in the study of a certain disease or pathway may be interested in retrieving interactions associated with that disease or pathway, and not by a single gene.For example, a researcher studying diabetes is not able to retrieve PPIs associated with diabetes using any of the existing PPI databases (although specialized databases may exist occasionally for a few well-studied diseases).InnateDB and IntAct provide search functionality, and users can search for PPIs by experimental details but not by specifying biomedical attributes of the proteins. 
PPIs can contribute to the discovery of a gene's biological function.An example where PPIs have contributed to the discovery of gene function is Disrupted in Schizophrenia 1 (DISC1), a novel protein discovered in 2000 with no known homolog in human.DISC1 was identified to be associated with schizophrenia; although it had well characterized protein domains such as coiledcoil domains, leucine-zipper domains, and nuclear localization and export signals, nothing was inferred about its function [16,17].To understand the function of DISC1, PPIs were determined using yeast 2-hybrid technology [18,19].Availability of this 'DISC1 interactome' has led to a large number of studies that concluded the association of DISC1 to cAMP signaling, axon elongation and neuronal migration, and accelerated the research pertaining to schizophrenia in general and DISC1 in particular [20].Therefore, it is useful to have a web resource of PPIs that displays not only the symbols of interacting partners but also comprehensive information on what the interacting partners of a gene can tell about the gene itself. We developed a web resource, Wiki-Pi, which addresses the above issues.It provides an effective means to search and retrieve interactions of interest, and displays the retrieved interactions with annotations of their biomedical associations so as to enable further discoveries.The search for interactions can be carried out by specifying biological and disease-relevant annotations of genes. Wiki-Pi provides the seed information necessary for gene function discoveries, by readily presenting the annotations of the gene at hand as well as those of its interacting partners.Further, Wiki-Pi facilitates knowledge-creation via crowdsourcing.It allows users to discuss or describe their hypothesis, or other known facts that are not part of existing database, in the wiki portion of each interaction.The website is freely available at http://severus.dbmi.pitt.edu/wiki-piand is viewable in all major browsers including those on smartphones and e-readers. Data and Functionality Wiki-Pi is a web resource whose focus is on telling the story of each interaction in the human interactome.Only binary biophysical interactions are presented.Each interaction can be viewed on its own webpage (Figure 1).The mechanism to reach individualized PPI pages is via the search functionality provided on the homepage (Figure 2) or via a search box provided conveniently at the top of any page. Data Sources Binary biophysical interactions of the human interactome have been collected from HPRD and BioGRID.Currently, Wiki-Pi contains 48,419 unique interactions among 10,492 proteins.Data sources for annotations are given in Table 1.Excluding HPRD, all of the data from the databases is automatically updated monthly.Only data from HPRD is updated manually (we note that HPRD has not updated its database since April 13, 2010).We rely on these databases for curated PPIs, and do not curate them from other resources ourselves.The database of interactions and other annotations are loaded into MySQL. Individualized Page for Each PPI A webpage of a PPI consists of two sections: an automatically generated annotation section with detailed annotations describing the interaction and its participant proteins, and a wiki section where users can discuss the interaction.The details of the annotation section from top to bottom are as follows (see Figure 1). 
Biomedical Annotations.The top of the section gives a link to the PubMed record of the original publication reporting the interaction; this publication source is obtained from HPRD or BioGRID.Following that, the count of papers citing that publication is shown; this count is obtained from PubMed.The citation count is provided so as to give an idea of the extent of the scientific impact of that interaction.Sometimes the original publication is cited more for the experimental method than for the interactions itself, but this can be easily concluded by following the PubMed link to the original publication.Next, biologically and medically relevant characteristics of the two participant proteins are shown where available: PDB IDs and structure, Gene Ontology cellular component, molecular function and biological process terms at the GO Slim level, associated pathways from REACTOME, associated diseases from KEGG, and drugs binding to that protein from DrugBank [21].These annotations provide useful information for analyzing the biological function of the given interaction.Additionally, links to corresponding pages of the genes in other databases, namely, Entrez gene [22], HPRD, Ensembl [23], and Uniprot, are provided. GO Terms Enriched among Interacting Partners.A unique feature of this web resource is that it provides for each gene in the interaction, a list of Gene Ontology biological process terms statistically enriched among its interacting partners.The enriched terms are computed by employing BiNGO plugin in Cytoscape [24,25].The hypergeometric statistical test of significance is used with a Benjamini & Hochberg False Discovery Rate (FDR) correction at a significance level of 0.05.For instance, when calculating enriched terms for gene 'a' (see Figure 3), the study group consists of the interacting partners b 1 , b 2 , …, b n , while the reference set consists of n genes randomly selected from the entire genome.BiNGO then collects GO biological process terms of b 1 , b 2 , …, b n .For each of the terms in the collection, it computes whether the number of genes associated with that term is significantly greater among interacting partners compared to that of random set.The methodology is described in detail in the original publication of BiNGO [24].For a given gene ('a'), if more than 50 terms are found to be enriched among interacting partners associations, only the top 50 enriched terms in the order of increasing p-value or decreasing statistical significance are shown on the website.For example, when viewing the annotations for an interaction between DISC1 and another protein, GO biological process terms that are significantly overrepresented in DISC1's interacting partners are shown.Viewing these terms would provide a handle for biologists in determining any novel associations of that gene in specific biological processes or diseases.These terms are especially useful when many interactions are known for a protein, but its functional characteristics are unknown [26]. 
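The enrichment calculation described above boils down to a hypergeometric test per GO term followed by Benjamini and Hochberg FDR correction. The sketch below reproduces that logic in plain Python/SciPy under simplifying assumptions (a flat term-to-gene mapping with no propagation up the GO hierarchy); it is an illustration of the statistics, not the BiNGO or Wiki-Pi code, and all function and variable names are ours.

```python
# Minimal sketch of the per-term enrichment test described above:
# a hypergeometric test for each GO term followed by Benjamini-Hochberg
# FDR correction, analogous to what the BiNGO plugin computes.
from scipy.stats import hypergeom

def enriched_terms(partners, term2genes, background, alpha=0.05, top=50):
    """partners: set of interacting-partner gene symbols of the gene of interest
    term2genes: dict mapping a GO term to the set of genes annotated with it
    background: set of all genes used as the reference set"""
    partners = partners & background
    N, n = len(background), len(partners)         # population and sample sizes
    raw = []
    for term, genes in term2genes.items():
        K = len(genes & background)               # annotated genes in the population
        k = len(genes & partners)                 # annotated genes among the partners
        if k > 0:
            # P(X >= k) with X ~ Hypergeometric(N, K, n)
            raw.append((term, hypergeom.sf(k - 1, N, K, n)))
    raw.sort(key=lambda tp: tp[1])
    m = len(raw)
    q = [p * m / (i + 1) for i, (_, p) in enumerate(raw)]
    for i in range(m - 2, -1, -1):                # enforce monotone BH-adjusted values
        q[i] = min(q[i], q[i + 1])
    keep = [(term, p, min(qi, 1.0)) for (term, p), qi in zip(raw, q) if qi <= alpha]
    return keep[:top]                             # at most the top 50 terms, as on the site
```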
Tag Clouds from Abstracts. To give an overview of the topics that each of these genes is associated with, tag clouds are presented, constructed from the abstracts of papers associated with each protein as given by pubmed2ensembl [26]. An interaction may be more interesting if it connects two different processes together, whereas it may be less novel if the interaction is between two proteins that participate in the same biological process. Therefore, in addition to the above tag clouds, another tag cloud is displayed for each protein, made up of words that associate with one protein but not the other. The tag cloud for a given gene is calculated as follows (a minimal sketch of this weighting is given below). First, the gene's Ensembl identifier is mapped to a PMID (PubMed identifier) as given in the pubmed2ensembl (http://www.pubmed2ensembl.org/) data. The abstract of that publication is obtained and treated as a document representing that particular gene. Starting with all of the abstracts as a corpus, stop words (such as 'for', 'it', 'the', etc.) are removed, and stemming is carried out on the remaining words. Tf-idf, a measure of relevance used in information retrieval, is then computed: tf refers to term frequency, idf refers to inverse document frequency, and tf-idf gives the relevance of a term to a given document [27]. The size of a word in the tag cloud corresponds to the tf-idf value of that term with respect to the document.

Wiki for Further Annotations by Users. The second section of the interaction page is the wiki, where users are encouraged to provide insights and discuss predictions about the relevance of the interaction to a biological process, disease or pathway. The wiki section may be used for crowdsourcing not only knowledge curation but also knowledge creation about each interaction.

Navigation through Search

Users navigate Wiki-Pi primarily by using the search interface. Wiki-Pi allows full-text search as well as field-specific search; it does not require users to know any query language such as Structured Query Language (SQL).
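The tag-cloud weighting described above is ordinary tf-idf over one abstract per gene. A minimal sketch of that computation is shown below; it omits the stop-word removal and stemming steps that the site also applies, and the function name is ours, used purely for illustration.

```python
# Minimal tf-idf weighting for a gene's tag cloud, as described above.
import math
from collections import Counter

def tfidf_weights(doc_tokens, corpus_tokens):
    """doc_tokens: tokens of the abstract representing one gene
    corpus_tokens: list of token lists, one per abstract in the whole corpus
    Returns {term: weight}; the weight sets the font size in the tag cloud."""
    n_docs = len(corpus_tokens)
    df = Counter()                                # document frequency of each term
    for tokens in corpus_tokens:
        df.update(set(tokens))
    tf = Counter(doc_tokens)
    return {
        term: (count / len(doc_tokens)) * math.log(n_docs / (1 + df[term]))
        for term, count in tf.items()
    }
```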
Indexing for Information Retrieval. The index for free-text search is constructed from gene symbols, gene names, GO annotations, pathways, drugs, and diseases (but not enriched GO terms and abstracts). Stop words are removed and stemming is carried out on all the content prior to indexing. Stemming, in the context of information retrieval, is a process by which words like 'inflammation' and 'inflammatory' are mapped to their stem 'inflamm'. When a word is queried, all interactions whose annotations (for either gene) contain that word are retrieved. The search functionality is built on the open-source search engine Sphinx (http://sphinxsearch.com/); a toy illustration of stem-based indexing is sketched below.

Search Functionality. Interactions may be retrieved with a simple search in which any of the indexed content is entered in the search box. For example, a query can simply be a gene symbol (e.g., AKT1) or any term that appears among the annotations of the gene (e.g. 'blood', 'cytokine', 'hemostasis'). As stemming has been performed on all words prior to indexing, searching for ''inflammation'' will retrieve interactions that contain not only the word inflammation but also the word inflammatory. By allowing users to search for interactions based on fields such as GO terms, pathways, diseases, and drugs, researchers without a particular protein in mind can still successfully retrieve interactions of interest. When multiple words are given in the simple search box, interactions containing all of the words are retrieved. An advanced-search page is also provided to retrieve interactions with more complex queries. Here, users can construct queries such as ''DISC1 but not immunity'', ''interactions of any of these proteins: TLR1, TLR2, …'', ''genes associated with schizophrenia that interact with genes associated with immunity'' and so on. An example is shown in Figure 4, where the query is: ''an interaction where one gene is involved in the immunity pathway, while the other gene contains the term cancer anywhere in its annotation but not the word immunity''. Note that users do not type such natural-language sentences, but enter the query words in the appropriate boxes on the advanced-search page. Advanced search also allows users to restrict the search to any of these fields: disease, pathway, drug, symbol, gene name, GO terms, or Entrez identifier (e.g. 'disease:diabetes', 'pathway:hemostasis' or 'drug:diflunisal').

Display of Search Results. The results of the search are presented in a tabular format showing the gene symbols, names, pathways, diseases and drugs of the participant genes (Figures 5 and 6). The rows are sortable by the number of attributes associated with the genes. Each interaction may be clicked to view the detailed annotation page of the interaction (Figure 1).
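As noted above, the production search is handled by Sphinx; the toy inverted index below only illustrates the idea of stemming annotation text before indexing, so that 'inflammation' and 'inflammatory' match the same stem. The crude suffix-stripping stemmer and all names here are stand-ins for illustration, not the stemmer or schema actually used by the site.

```python
# Toy stem-based inverted index over interaction annotations (illustrative only).
from collections import defaultdict

STOP = {"the", "a", "of", "and", "in", "for", "to"}

def crude_stem(word):
    # naive suffix stripping, a stand-in for a real stemmer (e.g. Porter)
    for suffix in ("ation", "atory", "ing", "ed", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def build_index(annotations):
    """annotations: {interaction_id: free-text annotation of both genes}
    Returns {stem: set of interaction ids whose annotations contain it}."""
    index = defaultdict(set)
    for pid, text in annotations.items():
        for word in text.lower().split():
            word = word.strip(".,;:()")
            if word and word not in STOP:
                index[crude_stem(word)].add(pid)
    return index

def search(index, query):
    """All interactions whose annotations contain every stemmed query word."""
    stems = [crude_stem(w.lower()) for w in query.split()]
    hits = [index.get(s, set()) for s in stems]
    return set.intersection(*hits) if hits else set()
```

With this scheme a query for "inflammation" and a query for "inflammatory" both reduce to the stem "inflamm" and therefore retrieve the same set of interactions.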
Formulation of Novel Hypotheses Uniquely Enabled by Wiki-Pi

Unique features of Wiki-Pi make it possible to address scientific queries that are not feasible with other tools. Without Wiki-Pi, a biomedical scientist is left with manual curation of information from several data sources, with no guarantee of finding the seed evidence required to crystallize a novel hypothesis. A comparison of the functionality of Wiki-Pi with that of other existing PPI databases is given in Table 2. Note that Wiki-Pi is the sole database that allows a user to search by specifying conditions on both of the proteins involved in a given interaction. Imposing strict conditions on the interaction in effect narrows down the search space of PPIs; this is critical, as there are tens of thousands of PPIs in existing databases. This capability is invaluable when hypothesizing the functions of genes that are not well studied. Wiki-Pi is especially useful today, as many genome-wide association studies (GWAS) are being published. GWAS are unbiased by current scientific knowledge (i.e. they do not have a literature bias) and often implicate genes with currently unknown biological functions in the disease under study. The number of GWAS has increased rapidly in the past couple of years. So far, 1,309 publications have reported GWAS results on 674 traits or diseases (www.genome.gov/gwastudies [28], accessed 2012-July-17). Though extensive work is being carried out to identify the common genetic variants that influence various diseases or traits through GWAS, the roles of these genes and the exact mechanisms of their action are yet to be discovered. Very little information is available about some of the GWAS-identified genes in terms of their molecular function and biological process. Wiki-Pi enables research on each of these genes and provides novel insights that may not otherwise materialize unless a scientist is versed in all the specialized domains involved.
Possible Function of IGSF21 and the Likely Mediators of Its Association with Diabetic Retinopathy. Using Wiki-Pi, we analyzed immunoglobulin superfamily member 21 (IGSF21), which was identified through a recent GWAS to be associated with diabetic retinopathy, a condition in which new blood vessels form at the back of the eye, causing bleeding and blurring of vision [29]. No information is currently known about IGSF21 except for the protein-protein interactions determined through high-throughput experiments and the fact that it is an extracellular protein. Searching Wiki-Pi for interactions of IGSF21, and then viewing the list of GO terms enriched among its interacting partners, reveals that this extracellular protein may be involved in regulating metabolic processes and catalytic activity, as well as cytoskeletal organization and response to external stimuli (see Figure 7 and File S1, generated by pasting the list of interacting partners of IGSF21 into the Cytoscape BiNGO plugin [24]). Although this enriched-term calculation reveals that IGSF21 may be involved in signaling mechanisms in response to external stimuli, specifically in cytoskeletal organization, it does not reveal its relation to diabetic retinopathy. Its relation to diabetic retinopathy is revealed further with the advanced-search feature of Wiki-Pi, which may be used to find interactions where one gene is IGSF21 and the other gene includes the term ''blood'' in any of its annotations (http://severus.dbmi.pitt.edu/wiki-pi/index.php/search/adv?a-all = symbol%3Aigsf21 &b-all = blood). This query results in four interactions, namely with (i) heat shock 27 kDa protein 1 (HSPB1), (ii) v-Ki-ras2 Kirsten rat sarcoma viral oncogene homolog (KRAS), (iii) thymosin beta 4 X-linked (TMSB4X), and (iv) diacylglycerol kinase delta 130 kDa (DGKD). The annotations of these four interacting partners on their corresponding interaction pages on Wiki-Pi show that HSPB1 is involved in blood vessel endothelial cell migration and that the other three, namely KRAS, TMSB4X, and DGKD, are all involved in blood coagulation. Further, the KRAS annotations show that it is involved in the insulin receptor signaling pathway (GO biological process). Researching these genes outside of Wiki-Pi (i.e. in PubMed), it is also found that (i) TMSB4X may play a role in diabetic retinal neovascularization in the context of proliferative diabetic retinopathy [30], and that (ii) DGKD deficiency causes peripheral insulin resistance and metabolic inflexibility [31]. We conclude that IGSF21 may be involved in signaling cellular responses to external stimuli, specifically triggering cytoskeletal organization and regulation of metabolic and catalytic activity, and that its association with diabetic retinopathy may be mediated through its interactions with the genes HSPB1, KRAS, TMSB4X and DGKD, which are involved in blood coagulation.
Conclusions

Wiki-Pi provides a means for effectively retrieving and studying human protein-protein interactions. The data itself is not curated by us but is retrieved from other widely used human protein information databases (Table 1). Wiki-Pi presents this information in a manner that is easy for biologists to find and assimilate. The database is also timely because, in the last few years, several genome-wide association studies have been completed that identified genes associated with specific diseases or traits. The biological role of many of these genes is currently unknown or not fully characterized. If any such gene has known PPIs, its biological role may be inferred from the functions of its interacting partners.

Wiki-Pi facilitates the discovery of the molecular interconnections, if any, between seemingly unrelated biological processes that govern the human body: e.g. psychological stress and inflammation [32,33,34,35,36,37,38], or schizophrenia and immunity [32,39,40,41]; although these processes are hypothesized to be related, the molecular pathways connecting them are not well understood. Wiki-Pi makes it possible to search for interactions connecting these processes.

Biologists routinely draw inferences by putting together information about proteins, formulate hypotheses, and conduct experiments to validate them; Wiki-Pi makes the assimilation of such information extremely easy by presenting all or most of the required annotations readily at hand. Wiki-Pi complements traditional databases, promoting research in molecular biology and biomedical informatics of human proteins. Future developments include the integration of additional data sources (both interactions and annotations) and the addition of authorship tracking for the wiki.

Figure 2. Website homepage. The homepage gives a search box and also shows a shortlist of interactions, some of which are populated randomly from the database while others are those most frequently searched on Wiki-Pi. URL: http://severus.dbmi.pitt.edu/wiki-pi/. doi:10.1371/journal.pone.0049029.g002
Figure 3. Concept diagram of the GO term enrichment calculation. Gene a interacts with genes b1, …, bn. GO terms ti of each interacting partner are shown to its right. BiNGO computes the statistically enriched GO terms (functional categories that the genes are enriched in) and finds that the enriched terms are t20, t30, and t12. See methods in [24] for details of the computation. doi:10.1371/journal.pone.0049029.g003
Figure 4. Advanced-search feature. The image shows the results of a search where one gene is involved in the immunity pathway, while the other gene contains the term cancer anywhere in its annotation but not the word immunity. Note that the results can be sorted by the number of pathways, diseases or drugs associated with the genes (counts of each gene are considered individually). URL: http://severus.dbmi.pitt.edu/wiki-pi/index.php/search/adv?a-all = pathway%3Aimmunity&b-any = cancer&b-none = immunity. doi:10.1371/journal.pone.0049029.g004
Figure 5. PPIs retrieved when searched by gene symbol. In these search results also, similar to those in Figure 3, the results can be sorted by the number of pathways, diseases or drugs associated with the genes (counts of each gene are considered individually). URL: http://severus.dbmi.pitt.edu/wiki-pi/index.php/search?q= brca1. doi:10.1371/journal.pone.0049029.g005
Figure 6. PPIs retrieved when searched by disease. In these search results also, similar to those in Figure 3, the results can be sorted by the number of pathways, diseases or drugs associated with the genes (counts of each gene are considered individually). URL: http://severus.dbmi.pitt.edu/wiki-pi/index.php/search?q= alzheimers. doi:10.1371/journal.pone.0049029.g006
Figure 7. Statistically enriched Gene Ontology biological process terms of PPIs of IGSF21. The Wiki-Pi website makes available only a list, not an image, of enriched GO biological process terms. For clarification, this network diagram was generated with the BiNGO Cytoscape plugin [24], for GO biological process terms, with the hypergeometric statistical test of significance and a Benjamini & Hochberg False Discovery Rate (FDR) correction at a significance level of 0.05, by pasting the list of interacting partners (gene symbols) from Wiki-Pi. The statistical significance of each node (GO term) is shown in color, with darker color indicating stronger significance. A high-resolution image with node labels is available as File S1. doi:10.1371/journal.pone.0049029.g007
Table 2. Comparison of the functionality of Wiki-Pi with other PPI databases. The search functionality and annotations displayed for the retrieved interactions are compared across different PPI databases; for each function, the cell shows a tick mark if the function is supported by the corresponding web server. doi:10.1371/journal.pone.0049029.t002
5,520.2
2012-11-28T00:00:00.000
[ "Computer Science", "Biology" ]
A New Global Scalarization Method for Multiobjective Optimization with an Arbitrary Ordering Cone We propose a new scalarization method which consists in constructing, for a given multiobjective optimization problem, a single scalarization function, whose global minimum points are exactly vector critical points of the original problem. This equivalence holds globally and enables one to use global optimization algorithms (for example, classical genetic algorithms with “roulette wheel” selection) to produce multiple solutions of the multiobjective problem. In this article we prove the mentioned equivalence and show that, if the ordering cone is polyhedral and the function being optimized is piecewise differentiable, then computing the values of a scalarization function reduces to solving a quadratic programming problem. We also present some preliminary numerical results pertaining to this new method. Introduction Scalarization is one of the most commonly used methods of solving multiobjective optimization problems.It consists in replacing the original multiobjective problem by a scalar optimization problem, or a family of scalar optimization problems, which is, in a certain sense, equivalent to the original problem.The existing scalarization methods can be divided into two groups: 1) Methods that use some representation of a given multiobjective problem as a parametrized family of scalar optimization problems.Such scalarization methods should have the following two properties (see [1], p. 77): (i) an optimal solution of each scalarized problem is efficient (in some sense) for the original multiobjective problem, (ii) every efficient solution of the multiobjective problem can be obtained as an optimal solution of an appropriate scalarized problem by adjusting the parameter value.Some examples of possible scalarizations of this kind are given, for instance, in [1] (pp. 77-78) and [2]. 2) Methods that use local equivalence of a multiobjective optimization problem and some scalar optimization problem whose formulation depends on a given point.Such equivalence enables one to solve the multiobjective problem locally by using necessary and/or sufficient optimality conditions formulated for the scalar problem (for examples of such an approach, see [3], Thm. 1 and [4], Prop. 2.1 and 2.2). There are also scalarization approaches which combine properties of both groups such as the Pascoletti-Serafini scalarization [5] (for a survey of different scalarization methods, see [6], Chapter 2; for adaptive algorithms using different scalarizations, see [6], Chapter 4; for scalarizations in the context of variable ordering structures, see [7], Chapters 4 and 5). 
In this paper, we propose a new scalarization method different from the ones mentioned above. It consists in constructing, for a given multiobjective optimization problem, a single scalarization function whose global minimum points are exactly the vector critical points, in the sense of [8], of the original problem. This equivalence holds globally and enables one to use global optimization algorithms designed for scalar-valued problems (for example, classical genetic algorithms with "roulette wheel" selection) to solve the original multiobjective problem. We also show that, if the order is defined by a polyhedral cone and the function being optimized is piecewise differentiable, then computing the values of the scalarization function reduces to solving a quadratic programming problem. So far, the term "scalarization function" has been used for a scalar-valued function defined on the image space of an optimization problem, which transforms a vector-valued objective function into a scalar-valued one (see [9], Thm. 1.1). However, by using such a scalarization, we are able to find only some (usually a small part) of the Pareto solutions, or efficient points, of the original multiobjective optimization problem, while the other Pareto solutions are lost. Contrary to this approach, our scalarization function is defined on the space of feasible solutions of the original problem and attains its minimum (zero) value on the set of vector critical points of this problem. The set of vector critical points is larger than the set of efficient solutions and can serve as an approximation of the latter.

The purpose of this research is to describe the idea of our new scalarization method and to present some underlying theory for the case of an unconstrained multiobjective optimization problem. The extension to constrained optimization is also possible and will be the subject of further investigations.

A Global Scalarization Function for an Arbitrary Ordering Cone

Let Ω be an open set in ℝ^n, let f = (f_1, …, f_p): Ω → ℝ^p be a locally Lipschitzian vector function, and suppose that C is a closed convex pointed cone in ℝ^p; problem (3) is the problem of minimizing f over Ω with respect to the partial order induced by C.

Definition 1 [10]. The (Clarke) generalized Jacobian of f at x ∈ Ω is defined as ∂f(x) = co { lim_{i→∞} Jf(x_i) : x_i → x, f is differentiable at x_i }, where Jf(x) denotes the usual Jacobian matrix of f at x whenever f is Fréchet differentiable at x, and "co" denotes the convex hull of a set.

The calculation of Clarke's generalized Jacobian in the general case can be quite difficult due to the lack of exact calculus rules. For piecewise differentiable functions, however, there is a representation of the generalized Jacobian as the convex hull of a finite number of Jacobian matrices, which was obtained by Scholtes in [11]. To formulate this result, we need some additional definitions.

Definition 2. Let Ω be an open subset of ℝ^n and let f: Ω → ℝ^p be continuous. We say that f is piecewise differentiable at x ∈ Ω if there exist a neighborhood U ⊂ Ω of x and finitely many continuously differentiable functions f^1, …, f^k: U → ℝ^p (selection functions) such that f(y) ∈ { f^1(y), …, f^k(y) } for every y ∈ U. The set of essentially active indices for f at x is defined as I_f^e(x) = { i : x ∈ cl( int { y ∈ U : f(y) = f^i(y) } ) }.

Proposition 3 [11]. If f is piecewise differentiable at x, then ∂f(x) = co { Jf^i(x) : i ∈ I_f^e(x) }.

Definition 4 [8]. Let x ∈ Ω. We say that (i) x is a vector critical point for problem (3) if there exist λ ∈ C^+ \ {0_p} and M ∈ ∂f(x) such that M^T λ = 0_n, where 0_n is the zero vector in ℝ^n; (ii) x is an efficient solution for (3) if there is no y ∈ Ω with f(y) − f(x) ∈ −C \ {0_p}; (iii) x is a weakly efficient solution for (3) if there is no y ∈ Ω with f(y) − f(x) ∈ −int C; (iv) x is a local weakly efficient solution for (3) if there exists a neighborhood U of x such that no y ∈ U satisfies f(y) − f(x) ∈ −int C. It is obvious that (ii) implies (iii) and that (iii) implies (iv); the implication from (iv) to (i) (for locally Lipschitzian f) follows from [12] (Thm. 5.1 (i)(b)). Some opposite implications can be obtained under additional assumptions of a generalized convexity type. In particular, Gutiérrez et al.
[8] have identified the class of pseudoinvex functions for which the implication from (i) to (iii) holds, and the class of strong pseudoinvex functions for which the implication from (i) to (ii) holds.

Definition 5 [13]. Let C be a nontrivial convex cone in ℝ^p. A nonempty convex subset B of C is called a base for C if each nonzero element z ∈ C has a unique representation of the form z = λ b with λ > 0 and b ∈ B.

Remark 6. If B is a base of the nontrivial convex cone C, then 0_p ∉ B.

Lemma 7 (a finite-dimensional version of [13], Lemma 2.2.17). Let C be a nontrivial closed convex cone in ℝ^p with nonempty interior and let ȳ ∈ int C. Then B = { λ ∈ C^+ : ⟨λ, ȳ⟩ = 1 } (11) is a compact base for C^+.

In the sequel, we consider a fixed vector ȳ ∈ int C and the base B for C^+ defined by (11). In order to define a global scalarization function for problem (3), we first consider the mapping h defined by (12), which assigns to each pair (λ, M), where λ ∈ ℝ^p and M is a p×n matrix, the vector h(λ, M) = M^T λ ∈ ℝ^n.

Lemma 8. A point x ∈ Ω is a vector critical point for problem (3) if and only if 0_n ∈ h(B × ∂f(x)) (13). Proof. If x ∈ Ω is a vector critical point for problem (3), then equality (7) holds for some λ ∈ C^+ \ {0_p} and M ∈ ∂f(x). Since B is a base for C^+, there exist μ > 0 and b ∈ B with λ = μ b, hence (13) holds. Conversely, if (14) is true for some b ∈ B and M ∈ ∂f(x), then, since 0_p ∉ B by Remark 6, we see that x is a vector critical point for (3).

For a nonempty subset S of ℝ^n, let d_S denote the distance function of S, defined by d_S(y) = inf { ‖y − z‖ : z ∈ S }, where ‖·‖ denotes the Euclidean norm. We now introduce the following scalarization function: s(x) = d_{h(B × ∂f(x))}(0_n) for x ∈ Ω. (16) Note that s depends on the choice of ȳ. The name "scalarization function" is justified by the following.

Theorem 9. A point x ∈ Ω is a vector critical point for problem (3) if and only if s(x) = 0. Proof. If x is a vector critical point for (3), then by Lemma 8, condition (13) holds, which gives s(x) = 0. Conversely, suppose that s(x) = 0. Since h is continuous and the sets B and ∂f(x) are compact, the set h(B × ∂f(x)) is also compact; hence it is closed. Therefore, the equality s(x) = 0 implies condition (13).

Having defined the scalarization function s, we can now replace problem (3) by the following scalar optimization problem: minimize s(x) over x ∈ Ω. (17) Obviously, problems (3) and (17) are not equivalent, because there may exist vector critical points which are not (weakly) efficient solutions for (3). Nevertheless, by solving problem (17) we can obtain some approximation of the set of solutions of (3).

Computing the distance function in (16) is not easy in the general case, but under additional assumptions on both C and f it is possible to apply some existing algorithms to perform this task. The details are described below. A convex cone which is a polyhedral set is called a polyhedral cone.

Theorem 11. Suppose that the ordering cone C in ℝ^p is polyhedral and the function f: Ω → ℝ^p is piecewise differentiable at x; let B be the base for C^+ defined by (11) and let h be the function defined by (12). Then, for each x ∈ Ω, the set h(B × ∂f(x)) is polyhedral, or equivalently, it can be represented as the convex hull of a finite number of points in ℝ^n. Proof. It follows from ([14], Thm. 19.1) that a convex set D in ℝ^p is polyhedral if and only if it is finitely generated, which means that there exist vectors a_1, …, a_l such that, for a fixed integer k with 0 ≤ k ≤ l, D consists of all vectors of the form λ_1 a_1 + … + λ_l a_l, where λ_1 + … + λ_k = 1 and λ_i ≥ 0 for i = 1, …, l (19)-(20). In particular, if D is bounded, then no λ_i can be arbitrarily large, which implies that k = l, and conditions (19)-(20) reduce to λ_1 + … + λ_l = 1 with λ_i ≥ 0, i.e., D is the convex hull of a_1, …, a_l. By assumption, C is polyhedral; hence, by [14] (Corollary 19.2.2), C^+ is also a polyhedral cone, which implies that the base B is a polyhedral set. By Proposition 3, ∂f(x) is the convex hull of a finite number of Jacobian matrices, so it is a bounded polyhedral set as well. It is easy to prove that the Cartesian product of two polyhedral sets is a polyhedral set and that the image of a polyhedral set under a linear transformation is a polyhedral set (see [15], Proposition A.3.4). Therefore, h(B × ∂f(x)) is a bounded polyhedral set, i.e., the convex hull of a finite number of points in ℝ^n.

Theorem 11 reduces the problem of computing the value s(x) given by (16) to the problem of computing the Euclidean projection of 0_n onto the polyhedron h(B × ∂f(x)). This is a particular case of a quadratic programming problem (see [16], p. 398). There are also specialized algorithms designed for computing such projections (see [17], [18]).
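A small numerical sketch of this projection step is given below. It assumes that the vertices generating the polytope (the rows of V) have already been computed from the base B and the Jacobians of the selection functions, and it uses a general-purpose SLSQP solver from SciPy rather than one of the specialized projection algorithms cited above; it is an illustration of the quadratic program, not the authors' implementation.

```python
# Distance from the origin to a polytope given as the convex hull of finitely many
# points, i.e. min ||V^T lam|| subject to lam >= 0 and sum(lam) = 1.
import numpy as np
from scipy.optimize import minimize

def dist_origin_to_hull(V):
    """V: (m, n) array whose rows generate the polytope co{v_1, ..., v_m}."""
    m = V.shape[0]

    def objective(lam):
        p = V.T @ lam
        return float(p @ p)                      # squared distance to the origin

    cons = ({"type": "eq", "fun": lambda lam: np.sum(lam) - 1.0},)
    bounds = [(0.0, 1.0)] * m
    lam0 = np.full(m, 1.0 / m)
    res = minimize(objective, lam0, bounds=bounds, constraints=cons, method="SLSQP")
    return float(np.sqrt(res.fun))

# Example: distance from 0 to the segment joining (1, 1) and (2, -1)
print(dist_origin_to_hull(np.array([[1.0, 1.0], [2.0, -1.0]])))   # ~1.342
```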
The Case of Two Objectives

For two objectives, under differentiability assumptions, it is possible to find a representation of the scalarization function s in terms of the gradients ∇f_1 and ∇f_2. Let p = 2 and suppose that the mapping f = (f_1, f_2) is continuously differentiable on Ω, so that ∂f(x) = { Jf(x) }. The following theorem helps to compute the scalarization function (16) for bi-objective problems.

Theorem 12. Let p = 2, let ȳ ∈ int C, and let B be the compact base for C^+ defined by (8). Then there exist vectors a = a(x) and b = b(x), determined by the endpoints of B and by the gradients ∇f_1(x) and ∇f_2(x), such that h(B × ∂f(x)) is the closed line segment joining a and b. Proof. It follows from (8) that B is a subset of some line in ℝ^2. Moreover, by Lemma 7, B is compact and convex, so it must be a closed line segment. Denote by b^(1) and b^(2) the endpoints of B. Using (21) and the linearity of h with respect to the first argument, we obtain the assertion.

Pareto Optimization

We now consider the case of classical Pareto optimization, i.e., when C = ℝ^2_+. According to Theorem 12, h(B × ∂f(x)) is the closed line segment S joining the two points a and b; hence the scalarization function has the form s(x) = d_S(0_n). For any point x ∈ ℝ^n, there are two possible cases: (i) a = b; then s(x) = ‖a‖. (ii) a ≠ b; then s(x) is the distance from 0 to the line segment S joining a and b. We now consider case (ii). The line L passing through a and b can be written as a + t(b − a), where a is a point on the line and b − a is the line direction. The closest point on L to 0 is the projection q of 0 onto L, which is equal to a + t_0 (b − a) for the value t_0 of the parameter that minimizes the distance to 0. Using the same parametrization, we can represent the line segment S as the set of points a + t(b − a) with t ∈ [0, 1]. If t_0 ≤ 0, then the point in S closest to 0 is a; if t_0 ≥ 1, then the point in S closest to 0 is b; finally, if 0 < t_0 < 1, then the point in S closest to 0 is q. Hence the function s can be described case by case as above. Taking into account the definitions of a and b, we see that this scalarization function depends only on the values of the gradients of f_1 and f_2, so it is easily computable.

Example 13 (problem FON in [19], p. 187). Let f_1(x) = 1 − exp( −Σ_{i=1}^{3} (x_i − 1/√3)^2 ) (24) and f_2(x) = 1 − exp( −Σ_{i=1}^{3} (x_i + 1/√3)^2 ) (25). The authors of [19] consider problem (3) with Ω = [−4, 4]^3 and C = ℝ^2_+, and state that the set of efficient (Pareto) solutions for this problem is equal to the set of points with x_1 = x_2 = x_3 ∈ [−1/√3, 1/√3] (26). Here the set Ω is closed (contrary to the rest of our paper), but this constraint is in fact inessential and the problem can also be considered on the whole space ℝ^3. Computing the partial derivatives of f_1 and f_2, we obtain from (24)-(25) the explicit formulas ∂f_1/∂x_i = 2(x_i − 1/√3) exp( −Σ_j (x_j − 1/√3)^2 ) and ∂f_2/∂x_i = 2(x_i + 1/√3) exp( −Σ_j (x_j + 1/√3)^2 ) (27)-(28). We have designed a program in Maple to compute s(x), using formulae (23) and (27)-(28). This program consists of three nested loops over the values of the variables x_1, x_2, x_3, each variable taking values from −4 to 4 in steps of 0.01. We have obtained s(x) = 0 for each x satisfying (26), and s(x) > 0 for all other points x. However, there are some points x, not belonging to the Pareto optimal set (26), at which the values s(x) are smaller than a small threshold α; see (29). This example shows that one must be careful when using global optimization algorithms to minimize s, because points like the ones appearing in (29) can easily be misclassified as vector critical points.

Conclusion

We have presented a new scalarization method for solving multiobjective optimization problems, based on computing the Euclidean distance from the origin to a subset determined by the generalized Jacobian of the mapping being optimized. This article contains the main underlying theory and only some preliminary numerical computations pertaining to this method. More numerical results will be presented in further research.
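As a concrete illustration of the two-objective computation described above, the sketch below evaluates s(x) for the FON problem of Example 13 as the distance from the origin to the segment joining the two gradients. This assumes the base B induced by ȳ = (1, 1), for which the endpoints of the segment are the gradients themselves; a different choice of ȳ would rescale the endpoints. It is an illustrative reimplementation, not the authors' Maple program.

```python
# s(x) for the bi-objective Pareto case (C = R^2_+), assuming the base induced by
# y_bar = (1, 1): the distance from the origin to the segment joining grad f1(x)
# and grad f2(x). Applied to the FON test problem of Example 13.
import numpy as np

def grad_fon(x):
    """Gradients of f1 = 1 - exp(-sum (x_i - 1/sqrt(3))^2) and
    f2 = 1 - exp(-sum (x_i + 1/sqrt(3))^2)."""
    c = 1.0 / np.sqrt(3.0)
    e1 = np.exp(-np.sum((x - c) ** 2))
    e2 = np.exp(-np.sum((x + c) ** 2))
    return 2.0 * (x - c) * e1, 2.0 * (x + c) * e2

def dist_origin_to_segment(a, b):
    d = b - a
    denom = float(d @ d)
    if denom == 0.0:
        return float(np.linalg.norm(a))          # degenerate segment: a == b
    t = np.clip(-(a @ d) / denom, 0.0, 1.0)      # projection of 0 onto the line, clipped
    return float(np.linalg.norm(a + t * d))

def s(x):
    g1, g2 = grad_fon(x)
    return dist_origin_to_segment(g1, g2)

# On the Pareto set x1 = x2 = x3 in [-1/sqrt(3), 1/sqrt(3)], s vanishes:
print(s(np.full(3, 0.2)))                # ~0 (vector critical)
print(s(np.array([1.0, -0.5, 0.3])))     # > 0
```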
3,367
2017-02-07T00:00:00.000
[ "Computer Science", "Mathematics" ]
Remote plasmonic-enhanced Raman spectroscopy with the plasmon-molecule coupling in distance over 100 nm . We propose remote plasmonic-enhanced Raman scattering (RPERS) spectroscopy for molecular sensing and imaging applications. RPERS requires no contact between analyte molecules and metallic nanostructures, which overcomes the limitations of surface-enhanced Raman scattering (SERS). We constructed RPERS substrates consisting of silver nanoislands and columnar silica structures, which demonstrated a 2×10 7 enhancement in Raman scattering for Rhodamine 6G molecules, even when the metal nanostructures and analyte molecules were over 100 nm apart. The RPERS substrate also exhibited improved reproducibility (<15% RSD), long-term stability (>1 month), and sensitivity (>10 times) compared to conventional SERS substrates. We also confirmed the feasibility of RPERS for biophotonic analysis, i.e., enhancing Raman histological imaging of oesophagus tissues with oesophageal adventitia of a Wistar rat attached atop the columnar silica structure layer. Our demonstration is a promising advancement in the field of enhanced spectroscopy using plasmon and offers a solution to the challenges faced by conventional SERS spectroscopy. It has the potential to pave the way for future developments in remote plasmonic-enhanced spectroscopy. Introduction Raman spectroscopy is a versatile technique used to analyze molecular species and structural changes based on the Raman spectrum derived from the molecular vibrations of a sample 1,2 .It can be applied to various states of matter, such as solids, liquids, and gases, and is widely used in molecular sensing, bioimaging, and other applications. However, the primary limitation of Raman spectroscopy is its weak Raman scattered light intensity, which results in low molecular detection sensitivity and long measurement times.To overcome this limitation, surface-enhanced Raman scattering (SERS) spectroscopy has been developed 3 .SERS utilizes plasmons generated by light excitation of metal nanostructures, which can significantly enhance the Raman scattered light intensity in the vicinity of the metal nanostructures (<10 nm) by 10 2 to 10 7 orders of magnitude.This results in significant improvements in molecular detection sensitivity (<nM) and measurement times (~ms). Despite its potential, SERS spectroscopy has limitations that hinder its practical application.One concern is the possibility of denaturation of both the metal nanostructures and the measured molecules due to contact between them.Another challenge is achieving precise quantitative measurements without careful control of molecular positioning with 1 nm accuracy, as SERS signal intensity is particularly strong in metal nanogaps (hotspots). To address these limitations, in the present study, we proposed plasmon-mediated long-range enhancement of Raman scattering via dielectric nanostructures, namely, remote plasmonic-enhanced Raman scattering (RPERS) spectroscopy that overcomes the limitations of SERS spectroscopy. 
Fundamental characteristics of RPERS Figure 1 illustrates the structure of the RPERS plate, which was composed of Ag nanoislands (AgNIs) and columnar SiO2 structures (CSS) on a float slide glass plate 4 .The AgNIs measured between 50-150 nm in lateral dimension and less than 20 nm in height, while the CSS layer, which was around 100 nm thick, acted as a protective layer for the AgNIs.The AgNIs and CSS layers were produced using a sputtering process, enabling the creation of an RPERS plate with a large area. The fundamental enhancement capability of RPERS spectroscopy was demonstrated using Rhodamine 6G (R6G) molecules, as shown in Fig. 2a.Remarkably, we achieved an optical enhancement of 2×10 7 for the RPERS plate, compared to the slide glass plate, even though the distance between the metal nanostructures and the analyte molecules was over 100 nm.The detection sensitivity of 1.8 pM achieved by RPERS spectroscopy was comparable to that of general SERS spectroscopy, indicating that RPERS was a highly sensitive detection method.Furthermore, compared to the AgNI plate without CSS, which was used for SERS measurements, the RPERS plate demonstrated better signal linearity and signal-to-noise ratio, as shown in Fig. 2b.This is thought to be due to the uniform enhancement provided by RPERS, whereas SERS signals fluctuated significantly depending on the molecule's position with respect to hotspots.To demonstrate that the observed RPERS enhancement occurred even when the analyte molecules were separated from the AgNIs by a CSS layer that was more than 100 nm thick, we performed several tests from various viewpoints.Firstly, we rinsed the R6G molecules that were bound atop the CSS surface with ethanol for a few seconds.This resulted in the extinguishing of the enhanced Raman signals.Secondly, we examined the dependence of the RPERS enhancement on the molecular species and found that it was not equivalent to SERS.Thirdly, we conducted an adhesive tape test where we dispersed 2-naphthalene thiol fine powders onto the adhesive side of a tape.We observed enhanced Raman signals of 2-naphthalene thiol only when the tape was attached to the CSS surface of the RPERS plate.Moreover, the signals appeared and disappeared reversibly with the tape on and off the surface.These observations suggested that the analyte molecules were bound atop the CSS surface, providing an explanation for the enhanced Raman scattering even when the metal nanostructure and the analyte molecule were more than 100 nm apart. RPERS spectroscopy for bioimaging Finally, we demonstrated the potential of RPERS spectroscopy for bioimaging applications, as illustrated in Fig. 3.A tissue section of the oesophagus, including the oesophageal adventitia, from a Wistar rat was attached to the RPERS plate.The Raman signals were significantly enhanced by the RPERS plate, enabling clear and highresolution Raman imaging of the tissue section. 
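The 2×10^7 enhancement figure quoted in the previous section is an analyte-normalised enhancement factor relative to the plain glass reference. The text above does not spell out the formula, so the sketch below uses the conventional definition EF = (I_enh / N_enh) / (I_ref / N_ref); the numbers are placeholders chosen only to reproduce the order of magnitude, not the measured values.

```python
def enhancement_factor(i_enh, n_enh, i_ref, n_ref):
    """Conventional analyte-normalised enhancement factor: Raman signal per probed
    molecule on the enhancing substrate divided by the signal per probed molecule
    on the reference (plain glass) substrate."""
    return (i_enh / n_enh) / (i_ref / n_ref)

# Placeholder values for illustration only (chosen to give ~2e7)
ef = enhancement_factor(i_enh=1.0e4, n_enh=1.0e5, i_ref=5.0e1, n_ref=1.0e10)
print(f"EF ~ {ef:.1e}")
```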
Conclusion This study presented RPERS spectroscopy, which allowed for significant enhancement of Raman spectroscopy without requiring contact between analyte molecules and metallic nanostructures.The RPERS plate offers several advantages over conventional SERS plates, including improved reproducibility, stability, and sensitivity.Our demonstration represented a promising advance in the field of enhanced Raman spectroscopy using plasmon and provided a solution to the challenges faced by conventional SERS spectroscopy.This technology has the potential to pave the way for future developments in remote plasmonic-enhanced spectroscopy.
1,321.6
2023-01-01T00:00:00.000
[ "Chemistry", "Materials Science" ]
Imaging Fibrosis and Separating Collagens using Second Harmonic Generation and Phasor Approach to Fluorescence Lifetime Imaging In this paper we have used second harmonic generation (SHG) and phasor approach to auto fluorescence lifetime imaging (FLIM) to obtain fingerprints of different collagens and then used these fingerprints to observe bone marrow fibrosis in the mouse femur. This is a label free approach towards fast automatable detection of fibrosis in tissue samples. FLIM has previously been used as a method of contrast in different tissues and in this paper phasor approach to FLIM is used to separate collagen I from collagen III, the markers of fibrosis, the largest groups of disorders that are often without any effective therapy. Often characterized by an increase in collagen content of the corresponding tissue, the samples are usually visualized by histochemical staining, which is pathologist dependent and cannot be automated. In this paper we have used second harmonic generation (SHG) and phasor approach to auto fluorescence lifetime imaging (FLIM) to obtain fingerprints of different collagens and then used these fingerprints to observe bone marrow fibrosis in the mouse femur. This is a label free approach towards fast automatable detection of fibrosis in tissue samples. FLIM has previously been used as a method of contrast in different tissues and in this paper phasor approach to FLIM is used to separate collagen I from collagen III, the markers of fibrosis, the largest groups of disorders that are often without any effective therapy. Often characterized by an increase in collagen content of the corresponding tissue, the samples are usually visualized by histochemical staining, which is pathologist dependent and cannot be automated. Fibrotic diseases are responsible for organ death and often the only possible course of action is exchange with a healthy organ 1 . The various diseases associated with the fibrosis include liver cirrhosis, idiopathic pulmonary fibrosis, diabetic nephropathy, arteriosclerosis, scleroderma, rheumatoid arthritis and fibrosarcomas [1][2][3][4][5][6][7] . Fibrotic diseases are one of the largest groups of disorders without any effective therapy. They usually arise as the wound healing process fails to end after the normal wound healing response. During this wound healing process new tissues are synthesized and the proteins being produced include collagens and fibronectins. Failure to end this synthesis results in overproduction of fibril forming proteins and fibrosis 1,8 . Collagens are one of the major components of the extracellular matrix (ECM) in tissues and are the major components of fibrosis 7,8 . They are the most abundant proteins in the human body, consisting of almost 30% of the total protein mass. There are various different types of collagens that are present in mammalian systems, some fibrous and some non-fibrous. The fibrous collagens give rise to complicated fibril structures and are responsible for the tensile strength and fibrillar network. The non-fibrous collagens are responsible for various other biological functions including tissue flexibility 8 . It has also been shown that the ratio of collagen III to collagen I is important for diseases like dilated cardiomyopathy and fibrosis 3,5,9 . Collagen I is mostly heterotrimeric, non-centrosymmetric and the most abundant in the tissues. Collagen III is often co-distributed with collagen I. 
The major source of type IV collagen is basement membranes and type V collagen is usually present in a small amount at the core of the collagen I fibers 8 . Collagens have been studied using various different techniques, including immunohistochemical staining of the excised tissue to determine the type of collagen, second harmonic generation (SHG) imaging, and HPLC combined with mass spectrometry 6,8,[10][11][12][13] . Fluorescence imaging and fluorescence lifetime imaging (FLIM) have also been employed, although not for separating different fluorescence components 4,14-18 . These fluorescence techniques were used for the characterization of collagen and separating collagens from other tissue components, but not for separating different types of collagens. The main way to separate different collagens to date has been staining the excised tissues with the dye, picrosirius red 19 . This technique although widely employed, is pathologist dependent and cannot be automated. SHG have also been widely used to image collagen fibrils. The non-centrosymmetric structure of some collagen fibers give rise to SHG signals and can be used for imaging 11 . The caveat is that SHG cannot be used for either the non-fibrous or for the symmetric fibrous collagen samples 2,10,16,17,20 . Amongst the fibril forming collagens, collagen I and II result in the strongest SHG signals and collagen III, although fibrous, result in very weak SHG signals 10 . It has also been shown that in a gel formed from the mixture of collagen I and V, increasing the fraction of collagen V results in smaller fiber formation and fibers of smaller diameter. The collagen V usually forms the core of the fiber and is usually wrapped around by collagen I. Thus collagen V by itself does not give good SHG contrast 21,22 . Accordingly SHG being the most widely used technique for the label free imaging of collagens, cannot be used for the imaging of all types of collagens. HPLC followed by mass spectroscopy has also been employed for the separation of collagen signals and have been shown to separate signatures of collagen I through V. However, this technique is incapable of giving images and hence the localization of different collagens in different tissues cannot be visualized 8 . Collagens are known to show autofluorescence when excited with single photon UV excitation or with a two photon excitation around 730 nm. The fibrous collagen I, when excited with 730 nm light in two-photon excitation scheme, shows auto-fluorescence at 450 nm to 600 nm wavelength range 17,20 . These fibers were determined to have bi-exponential fluorescence lifetime decay with 39% amplitude of 0.29 ns and 61% amplitude of 1.68 ns 15 . The fluorescence properties of collagens, mostly collagen I in solution, including one and two photon excitation spectra, absorption spectra, excitation dependent emission spectra, and fluorescence lifetime have been known in literature 23,24 . However, the properties of the fibrous form of collagen I are very different compared to the properties in solution 15,20 . Even though this vast amount of knowledge about fluorescence of collagens has been known, it has never been employed to separate out the signatures arising from different collagens in their native like structures. In this paper, first we present FLIM as a technique to separate different collagen signatures (collagen I through V) in the pre-formed gels 14,15,25 . 
The phasor approach to lifetime, which offers a fit free method of separating pixels having different fluorescence lifetimes, is used for the analysis of the FLIM images [26][27][28][29][30][31] . In this approach, populations having similar lifetimes can be selected in the phasor plot and the fluorescence image is painted accordingly. This approach was first used to find the signature positions in the phasor plot of the different collagen autofluorescence from the individual gels and then further used to separate collagen I and III in the SMRT mRID mouse femur. The SMRT mRID mouse produces spontaneous myelofibrosis, a progressive bone marrow fibrosis, and results in increasing collagen I and collagen III in the bone marrow samples 32 . Combined with SHG, FLIM and the phasor approach represent a new method of separating different collagens in an image with a label free approach and give the possibility of use as a diagnostic tool for fibrosis. Results and Discussion Pure collagen gels. The objective of this paper was to identify locations in the phasor plot that can be used to separate different collagen types in an image. Collagen gels consisting of purified collagen I through V were prepared and imaged with a 32 μs pixel dwell time for 20 repeat images. The images were acquired with a 38 μm field of view and a resolution of 256 × 256 pixels. Each individual fluorescence image was first corrected for the background. The positions of the phasor points originating from that particular image were selected with a colored cursor in the phasor plot and the image was painted accordingly. Figure 1a shows the intensity images after background correction (from left to right are collagen I through V). The corresponding FLIM images (Fig. 1b) were colored according to the chosen cursors in Fig. 1c. In the phasor plot (Fig. 1c) each type of collagen forms a different cluster of phasor points. Each cluster is indicated by a circular colored cursor. The points in the images are colored according to the cluster they belong to. The red, green, cyan, yellow and magenta represent the selected phasor points for collagen I, II, III, IV and V, respectively. Figure 1b, showing the intensity image painted with the chosen cursor colors, proves that the clusters in the phasor plot are completely separate and can be used to identify the type of collagen. Each point in the phasor plot can be associated with a phase angle φ and a distance from the origin. We define the lifetime obtained from the phase of each point in the phasor plot as τ_φ = tan(φ)/(2πf), where f is the repetition frequency of the laser (80 MHz). The average phase lifetimes of the collagens are shown in Fig. 1d. The average phase lifetime for the collagens is around 1.5 ns, which agrees with previously known values 33 . This figure also underlines the importance of the phasor approach to FLIM analysis. All five collagens measured using the phase lifetime have lifetimes between 1 ns and 2 ns, and cannot be easily separated. However, they are clearly separated in the phasor plot of Fig. 1c. The other important observation is that the relative intensities of SHG and fluorescence are dependent on the type of collagen. Collagen I and II give very strong second harmonic signals. Collagen IV and V do not produce SHG.
Collagen III has a weak contribution in the SHG channel, but this signal in our instrument is due to leakage of the very strong fluorescence signal through the filters used for the separation of SHG signals, as collagen III is responsible for the strongest fluorescence amongst the collagens under study. Figure 2a shows the SHG intensity images acquired in the same field of view as that of the fluorescence images of Fig. 1a. The phasor points originating from SHG images appear at the coordinate of s = 0 and g = 1, as the lifetime of SHG is basically zero. Figure 2c shows the phasor plots arising from each of the five collagen SHG images. The phasor plot for collagen III has a non-zero lifetime and is similar to the fluorescence lifetime of collagen III, thus signifying that the origin of the signal for collagen III is not SHG and is actually fluorescence. After selection of the different populations in the phasor plot (Fig. 2c) using colored cursors, the collagen I and II images were masked with red (the SHG mask) and the collagen III image was masked with green, the mask for non-zero lifetime. On the contrary, collagens IV and V do not produce any SHG signal. Collagen IV has a non-fibrous structure and hence does not produce second harmonic signals. Collagen V is known to be fibrillar, but only in the presence of collagen I. In a gel formed from a mixture of collagen I and V, an increasing fraction of collagen V results in a decrease of fibrillar structure and also a decrease in the fibril diameter. An increase of 20% of collagen V in the mixture of collagen I and V decreases the fibril structures by 40%. Thus a gel formed by only collagen V does not produce fibrillar structure and SHG signals 21 . The relative intensities obtained in the fluorescence and SHG images of the different collagens acquired under the same laser power indicate the possibility of separating collagens based on the ratio of the SHG and fluorescence signals. Collagen IV and V have almost no SHG signals, thus they can only be separated by the FLIM analysis. (Figure 1 caption: Figure 1a, from left to right, shows the fluorescence intensity signals originating from gels of collagen I to V. Figure 1b shows the same intensity images masked with the cursor colors chosen in the phasor plot (Fig. 1c). Red, green, cyan, yellow and magenta colors were chosen to select the phasor clusters in Fig. 1c and the intensity images were painted correspondingly. Figure 1d shows the calculated phase lifetimes and shows that the separation in the phasor plot (Fig. 1c) is more significant. The field of view in these images is 38 μm.) For the other three collagens, the ratio of the intensities of the SHG and fluorescence signals acquired in the same field of view was calculated and is shown in Supplementary Figure S1. The Y axis in this figure is on a semi-logarithmic scale, and thus the very large differences in the SHG to fluorescence intensity ratios of these different collagens represent a large separation of collagen type based on this criterion.
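The quantities used above (the phasor coordinates g and s, the phase angle and the phase lifetime) can be made concrete with a short numerical sketch. The snippet below is illustrative only and is not the authors' analysis code; it assumes a decay histogram covering one laser period and uses the 80 MHz repetition rate quoted in the text. A mono-exponential 1.5 ns decay lands inside the universal semicircle with τ_φ close to 1.5 ns, while an SHG-like instantaneous signal lands near (g, s) = (1, 0).

```python
import numpy as np

F_LASER = 80e6                        # laser repetition rate from the text (80 MHz)
OMEGA = 2 * np.pi * F_LASER

def phasor(decay):
    """First-harmonic phasor coordinates (g, s) of a one-period decay histogram."""
    n = decay.size
    wt = 2 * np.pi * (np.arange(n) + 0.5) / n      # omega * t at the bin centres
    total = decay.sum()
    return (np.sum(decay * np.cos(wt)) / total,
            np.sum(decay * np.sin(wt)) / total)

def phase_lifetime(g, s):
    """Lifetime from the phase angle: tau_phi = tan(phi) / (2 * pi * f)."""
    return np.tan(np.arctan2(s, g)) / OMEGA

period = 1.0 / F_LASER
t = (np.arange(256) + 0.5) * period / 256          # bin centres over one period

g_f, s_f = phasor(np.exp(-t / 1.5e-9))             # fluorescence-like decay, tau = 1.5 ns
print(g_f, s_f, phase_lifetime(g_f, s_f))          # tau_phi comes out close to 1.5 ns

shg = np.zeros(256)
shg[0] = 1.0                                       # all counts in the first bin (zero lifetime)
print(phasor(shg))                                 # close to (g, s) = (1, 0)
```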
Collagen mixtures in gels. Collagen I and III are known to coexist in tissues. Both the ratio of the two collagens and their total amount have been shown to indicate the extent of different fibrotic diseases. As mentioned earlier, collagen I results in stiffness and tensile strength while a large amount of collagen III results in greater elasticity. Thus a change in the relative ratios of these two collagens can determine the behavior of the extracellular matrix and was shown to be an important factor in cardiac myopathy [3][4][5] . Therefore separating the signals of collagen I and III becomes important for diagnostic purposes. To determine whether the phasor approach to FLIM can separate the collagen I and III signals, a mixed gel was formed from a 3:1 mixture of these two collagens and then SHG and FLIM images were acquired. Figure 3ai,ii show the fluorescence images selected for regions of high intensity and low intensity, respectively. This selection was done based on the histogram in Fig. 3ci, where the top shows the selection for Fig. 3ai and the bottom shows the selection for Fig. 3aii. Collagen III is much more fluorescent than collagen I and thus, to observe collagen I, the lower fluorescence intensity must be selected. Figure 3bii,bi show the masked images of the intensity overlapped with the phasor colors in Fig. 3cii. In Fig. 3bi, most of the image is colored cyan and in Fig. 3bii most of the image is colored red, the cursor colors (Fig. 3ci) for the FLIM signatures of collagen III and I, respectively. Figure 3aiii-ciii show the SHG intensity image, phasor masked image and the phasor plot of the second harmonic generation, respectively. The lifetime of SHG is zero and thus the phasor points appear at s = 0, g = 1 (Fig. 3ciii). A comparison between Fig. 3bii,biii shows that most of the fiber structures in the SHG image can also be separated by the red fluorescence mask in Fig. 3biii. Collagen I has a very strong SHG signal. In the mixture, the bleed through of collagen III fluorescence in the SHG channel has a much lower intensity than the SHG signal of collagen I and is actually very close to the background. Thus after background correction, the phasor points originating from the bleed through disappear from the phasor plot (Fig. 3ciii). It is also evident that the bright image in Fig. 3ai does not give rise to the signals in the SHG channels and only the dim fluorescent spots, i.e. the ones from collagen I, coexist in both SHG and fluorescence images. This proves that at least in gels, collagen I and III can be separated based on lifetime. (Figure 2 caption: Signals in the SHG channel for gels of collagen I to V. (Fig. 2a) SHG intensity image of collagen I to V (left to right). (Fig. 2b) SHG intensity images overlapped with the color mask chosen in the phasor plots (Fig. 2c). The red cursor was used to select the phasor points of zero lifetime (SHG) and the green cursor was used to select the fluorescence phasor points (non-zero lifetime). It is evident that the signal in the SHG channel for collagen III (Fig. 2a) can be identified with fluorescence since the position in the phasor plot is not at the (1,0) position.) Fibrosis in biological samples. A mouse model that spontaneously develops myelofibrosis is the SMRT mRID mouse. In these mice, two receptor interaction domains of the epigenetic repressor silencing mediator of retinoid and thyroid hormone receptors are targeted, and these mice develop spontaneous myelofibrosis, characterized by bone marrow fibrosis and increasing collagen content in the bone marrow 32 . The FLIM and SHG measurements were further extended to study these mouse femur slices, obtained from Dr. Ronald Evans' lab at Salk Institute, San Diego, CA. Each individual image was taken with a 925 μm field of view and with 256 × 256 pixels.
Both FLIM and SHG images were then tiled and are shown in Fig. 4b,c, respectively. Figure 4a shows the image of the bone taken with a camera. Red, cyan and orange colored cursors were chosen in the phasor plot (Fig. 4f) to select collagen I fluorescence, collagen III fluorescence and the SHG signals, respectively. Fig. 4d,e represent the phasor masked images of the corresponding intensity images, Fig. 4b,c. A comparison between the masked FLIM (Fig. 4d) and masked SHG (Fig. 4e) shows that the part of the image covered in SHG is mostly covered by red in the fluorescence image, red being the cursor used for the collagen I FLIM signature, signifying the inability of collagen III to produce SHG. (Figure 3 caption: The fluorescence image was selected either for the high intensity (Fig. 3ai) using the top histogram (Fig. 3ci) or for the low intensity (Fig. 3aii) using the bottom histogram in Fig. 3ci. The fluorescence intensity images were masked using the cursor colors in the phasor plot (Fig. 3cii) and colored accordingly to show the predominantly collagen III rich region (Fig. 3bi) and collagen I rich region (Fig. 3bii). The SHG intensity image, phasor masked image and the corresponding phasor plot for SHG generation are shown in Fig. 3aiii-ciii.) This result shows that in the mouse femur tissue, collagen I and III can be separated by FLIM. The strong correlation between the pixels measured by SHG and the red mask (the mask for collagen I) in the FLIM images shows that collagen I can be identified by both SHG and FLIM (Supplementary Fig. S2). The phasor approach to separating collagens using FLIM imaging was further extended to study myelofibrosis in the bone marrow. Two cell types of bone, osteoblasts and osteoclasts, maintain a balance in which osteoclasts resorb the bone matrix and osteoblasts regenerate it continuously. In the case of idiopathic myelofibrosis the bone marrow gets occupied by fibrotic tissue, e.g. collagens, which changes the microenvironment of the bone marrow. Most of the treatments available for myelofibrosis are supportive, and the one main treatment is the significantly risky allogeneic stem cell transplantation 32 . The femur slices from two different mice, one wild type control mouse and one SMRT mRID mouse, were imaged using the phasor approach to fluorescence lifetime imaging. The different areas of these two samples imaged using the FLIM technique are shown in Supplementary Figure S3. (Figure 4 caption, partial: ... Fig. 4a-c, respectively. The FLIM and SHG phasor masked images are shown in Fig. 4d,e, where the masking color indicates the cursor color of the phasor plot (Fig. 4f). Most of the parts chosen by cyan (Fig. 4d) are not present in Fig. 4e and mostly the red part of the masked image correlates with the SHG image.) These images were then analyzed using the continuous cursor analysis in the phasor plot. As mentioned in the materials and methods section, one of the key unique features of the phasor approach is that a continuous color scheme can be used to show the differential contribution of two separate species in any individual pixel. Figure 5c shows the continuous color scheme used to show the differential contribution of collagen I and collagen III in these images. The redder color is representative of the more collagen I rich areas and the more violet color is representative of more collagen III rich areas. A comparison between the phasor masked images of the wild type mice in Fig. 5a and the SMRT mRID mouse femur slices in Fig.
5b shows that while the periphery in both cases is made of mostly collagen I, the bone marrow of the SMRT mRID mouse is more violet in color and hence has more contribution from collagen III. This is similar to the results of staining shown before 32 . Thus Fig. 5 demonstrates that the phasor approach to FLIM can indeed be used to image fibrosis in tissues. (Figure 5 caption: Fig. 5a and Fig. 5b show the phasor masked FLIM images of the non-fibrotic wild type mice and fibrotic SMRT mRID mice, respectively. The more violet color in Fig. 5b is representative of a higher contribution from collagen III. Figure 5c shows the phasor plot and the continuous cursor used for the analysis.) Materials and Methods Preparation of collagen gels. Collagen (No. C3657-1MG) from human placenta was purchased from Sigma Aldrich (St. Louis, MO). All the collagen gels were prepared using the following procedure. The collagen samples were first diluted to 3.75 mg/ml. The eight chamber borosilicate coverglass system (Lab-Tek) was placed in the refrigerator at 4 °C. All the components were placed on ice to decrease the temperature shock. In a 2 ml sterile tube, 317 μl of water and 533 μl of collagen were added and vortexed to ensure complete mixing. 100 μl of 10X PBS pre-mixed with phenol red was added to this solution while vortexing and then neutralized with 0.5 N NaOH very slowly until the appearance of a light pink color. 350 μL of this collagen mixture and 50 μL of 1X PBS were added to the wells of the Lab-Tek chamber. The chamber was placed at 20 °C for one hour, then transferred to the 37 °C incubator overnight and imaged the next day. The gel containing the mixture of collagen I and III was prepared by mixing 270 μL of collagen I and 90 μL of collagen III prior to the addition to the Lab-Tek chambers. Biological sample preparation. The femurs of both wild type mice and SMRT mRID mice were simultaneously decalcified and fixed with CAL-EXII (Fisher Scientific, USA). The femurs were then embedded and frozen in O.C.T. compound (TissueTek, USA). 10 μm frozen sections were obtained using a Leica CM 1850 Cryostat (Leica, Germany). SHG and FLIM images were obtained for these bone slices. Generation and initial characterization of SMRTmRID mice are described previously 34 . These mice were further backcrossed for 4 more generations to sv129. Only age matched male mice (average cohort size 6-10) were randomly assigned and used. All mice were bred and maintained in the Salk Institute animal facility under specific pathogen free conditions. Procedures involving animals were reviewed and approved by the Institutional Animal Care and Use Committee (IACUC) at the Salk Institute, and conformed to regulatory and ethical standards. The methods were carried out in accordance with the approved guidelines. The mouse studies were not blinded, as the same investigators performed the grouping, dosing and analyses, rendering blinding of the studies unfeasible. Microscopy. The fluorescence lifetime imaging and the second harmonic generation imaging were carried out using the homebuilt DIVER (Deep Imaging via Enhanced-Photon Recovery) microscope. The details of this microscope construction are explained elsewhere [35][36][37] . Briefly, the DIVER microscope is based on an upright laser scanning fluorescence microscope. The main difference from a regular upright microscope is in the emission path of the instrument. Here the sample is placed directly on top of the filter wheel assembly and the filter wheel is placed right on top of a wide area PMT.
The collagen gel samples were excited with 710 nm line of a Deep See MaiTai laser with a 40X water immersion objective (Olympus Plan Apo). The bone samples were excited with a 20X air objective (Olympus). Different filter sets were used to select either the SHG or the fluorescence generated in the samples. A combination of UG11 and BG39 (used to protect the PMT from direct excitation) creates a window of 350 nm ± 20 nm (FWHM) and was used to collect the SHG signal. Another filter with a window of 400 nm to 560 nm was used for the collection of the fluorescence signals. The signals were recorded using the FLIMBOX and directly transferred to the phasor plot 38 . The second harmonic signal has a lifetime of zero and appears at the position of s = 0 and g = 1 at the phasor plot. Fluorescence signals have non-zero lifetime and appear elsewhere in the phasor plot. A solution of Rh110, having a lifetime of 4 ns, was used for the calibration of the phasor and used as the standard for all the samples. Phasor approach to fluorescence lifetime 26,29 . The lifetime signals originating from the different collagens samples were analyzed by the phasor approach to fluorescence lifetime. The details of this approach for both the TCSPC (time correlated single photon counting) and phase and modulation measurements are explained elsewhere and have been used extensively in biological samples 14,28,29,31,38 . Briefly, the intensity decay originating from each point of an image is transferred to the phasor plot and creates a single point. A particular population in the phasor plot can then be chosen using a colored cursor and the fluorescence intensity image can be painted accordingly. This results in a fit free method to analyze FLIM images. Different populations corresponding to different lifetimes can easily be selected in the phasor plot and thus the intensity image can be masked according to the fluorescence lifetime. This is instantaneous and unlike the TCSPC approach, does not require a multi-exponential fitting at every pixel of an image. Thus the phasor approach is computationally much less expensive and faster. If the intensity decay at any pixel can be defined by a mono-exponential, then the phasor point originating from that pixel appears in the semicircle shown in blue in the phasor plots (called the universal semicircle). Multi-exponential decays result in phasor points inside the universal semicircle. A mathematical property of this method is that if at one pixel, there are contributions from two or more different exponential components, i.e. different phasor positions on the universal circle, then the corresponding point in the phasor plot of that pixel lies along the line or lines joining the those individual components at the universal circle. This is called the law of linear combination. According to this law, the relative contribution of those components can be obtained graphically by calculating the distance between the combination point in the phasor plot and the individual component positions in the universal circle. The SHG signals have a lifetime of zero as the signal from the SHG is coherent with the laser and they appear at s = 0, g = 1 in the phasor plot. Fluorescence signals from the collagens have non-zero lifetime and appear inside the semicircle.
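The law of linear combination described above lends itself to a simple graphical computation. The sketch below is illustrative (not the authors' code; the component phasor positions are made-up numbers): it projects each pixel's phasor onto the segment joining two component phasors and returns the fractional contribution of the first component, which is the basis of the continuous-cursor coloring used for the collagen I / collagen III maps.

```python
import numpy as np

def fraction_of_component_a(g_px, s_px, phasor_a, phasor_b):
    """Fractional contribution of component A at each pixel.

    By the law of linear combination, a pixel that mixes A and B lies on the
    segment joining phasor_a and phasor_b; the fraction of A is the relative
    distance of the (projected) pixel phasor from phasor_b.
    """
    a = np.asarray(phasor_a, dtype=float)
    b = np.asarray(phasor_b, dtype=float)
    p = np.stack([np.asarray(g_px, float), np.asarray(s_px, float)], axis=-1)
    ab = a - b
    # Project onto the A-B segment and clip to the physically meaningful range [0, 1].
    return np.clip(np.einsum('...i,i->...', p - b, ab) / np.dot(ab, ab), 0.0, 1.0)

# Example with two hypothetical component positions (not measured values):
collagen_I_phasor = (0.55, 0.45)
collagen_III_phasor = (0.35, 0.45)
print(fraction_of_component_a(0.45, 0.45, collagen_I_phasor, collagen_III_phasor))  # -> 0.5
```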
Naive Bayesian classifier for hydrophobicity classification of overhead polymeric insulators using binary image features with ambient light compensation : Dispersion nature of water droplets over the insulator surface is used for hydrophobicity classification. Stochastic nature of water dispersions makes naive Bayesian classifier a preferable choice, which has been investigated in this work. About 12 features describing the characteristics of water droplets are extracted from the binary image using binary large objects analysis. Ambient light intensity is a significant factor that affects the binary image quality. As these insulators are installed in the outside environment, variations in ambient light intensity are inevitable. An adaptive threshold technique is proposed to compensate for ambient light variations. Six classes of various ambient light intensities have been considered in this study, and the proposed adaptive threshold technique can produce quality binary image consistently. Features extracted from the binary image are ordered according to their principal components (PCs) using PC analysis. Improvement in classification accuracy with the accumulation of ordered features is analysed. Results illustrate the use of the first eight features provides a reliable classification accuracy of 97.6% for test image samples. In comparison to the other existing classifiers, the proposed classifier illustrates optimal performance in terms of classification accuracy and computational time. Introduction Overhead insulators are installed in electrical transmission lines to provide isolation to poles from high voltage. They play a dominant role in determining the safety of electrical transmission and distribution systems. Porcelain, glass and polymeric insulators are the types of insulators used in electrical systems [1]. Among them, polymeric insulators exhibit lower failure rate as compared with non-polymeric insulators. Polymeric insulators are lightweight, less costly, less possibility of breakage, easy to mount and mould and exhibit low surface energy. They have better hydrophobic surface even in contaminated state and faster recovery of hydrophobicity. Polymeric insulators also provide better flashover characteristics than other types of insulators. It makes them widely used in electrical transmission lines [2]. As these insulators are installed in the outside environment, they are affected by various environmental factors such as ultraviolet radiation, moisture, acidic components in rain, thermal stress, corona discharges, dry band arcing and pollutant [3]. It degrades the hydrophobic property of the insulator over time. Loss of hydrophobicity in insulators leads to reduction of flashover voltage even under a constant degree of pollution [4,5]. It may lead to flashing/short circuit and eventually the failure of the electrical systems [6]. Therefore, monitoring the hydrophobicity level of the polymeric insulator is essential and it can be used to predict the lifetime of the insulator, which can ensure reliable operation of the electrical system [7]. Contact angle, surface tension and spray method are the standard techniques (IEC 62073) used to identify hydrophobicity class (HC) of insulator [8]. Contact angle and surface tension method requires a detailed inspection and demands the insulators to be shifted to the laboratory. Therefore, they are considered as offline methods [9,10] and may lead to misclassification due to subjective analysis. 
Spray method analyses the dispersion pattern of water droplets to classify hydrophobicity by comparing with standard patterns available in Swedish Transmission Research Institute (STRI) guide [11]. This method makes the insulator to be tested on the field itself. HC is identified by the inspector, which makes it a subjective analysis leading to incorrect classification. Digital image processing technique has been a promising tool to analyse the water dispersion pattern and to provide reliable identification of insulator's hydrophobicity [12]. Water is sprayed over the insulator surface, and images are acquired using the digital camera. Acquired images are pre-processed and features are extracted. The literature illustrates the use of colour parameters, statistical features and geometrical features for hydrophobicity classification [13,14]. Distilled water is coloured in contrast with the insulator colour and sprayed to measure the colour parameter. Dispersion of coloured water is identified by using the colourbased filter. Colour parameters depend on the concentration of colour mixed in the water, ambient light and spray method, which makes them highly stochastic. Statistical features are extracted from the grey-scaled image. Various statistical features including fractal dimension, standard deviation, entropy, maximum intensity, kurtosis, skewness, variance, homogeneity, contrast, correlation and energy [15] are widely used. Grey-scale intensity level is the vital parameter to determine the quality of these features, which depends on the ambient light. Since insulators are installed in the outside environment, the image acquired is subjected to ambient light variations and dust. It makes statistical features suitable only for laboratory test conditions [14]. The literature illustrates the use of pre-processing techniques such as histogram equalisation, white top hat filter, Sobel edge operator and digital filters, which are used to minimise illumination effect [16,17]. Unlike grey-scale images, the binary image has two levels (black and white) of intensity. It facilitates extraction of features describing the count, size, shape and dispersion of white spaces (corresponds to water) from its black (corresponds to insulator surface) background. It makes the binary image features a more reliable choice for hydrophobicity classification [18]. Binary large object (BLOB) analysis is one of a preferred technique for binary image feature extraction, which has been least investigated for hydrophobicity classification. It isolates the group of connected pixels called BLOB objects. Then, the properties of each BLOB objects are analysed individually. Even though binary image provides several advantages, selection of a threshold for binary image conversion remains a challenging problem [19]. It is mainly due to the variations in ambient light intensity prevailing in the outside environment, where insulators are installed. Thus, the proposed work aims to select threshold adaptively based on the ambient light intensity. Classifiers using classical and learning techniques are designed to classify hydrophobicity using these extracted features [20,21]. Support vector machine (SVM), decision tree, neural network and fuzzy clustering are widely adopted to classify the hydrophobicity intelligently. However, the stochastic nature of water dispersion over the insulator surface encourages the use of Bayesian classifier. 
It extracts the probabilistic distribution of binary features for the given class of hydrophobicity and stores it as a priori knowledge. This prior knowledge is used to predict the HC of the given insulator using Bayesian theorem. Main contribution of the research paper is as follows: (i) ambient intensity-based adaptive threshold for binary image conversion of acquired images, (ii) binary feature extraction using BLOB analysis, (iii) design of Bayesian classifier, (iv) optimal feature selection using principal component analysis (PCA) and analysing the improvement in classification accuracy and (v) testing and validation of the proposed hydrophobicity classifier with other classifiers. The research paper is organised as follows: Section 2 describes the experimental procedure and image acquisition techniques used to collect the sample images of insulators, Section 3 illustrates binary feature extraction using BLOB analysis, design of Bayesian classifier is explained in Section 4 and Section 5 describes the ordering of features using PCA and optimal selection of best features. Section 6 describes the performance evaluation and results and Section 7 concludes the work with a summary and future directions. Experimental procedure The proposed work employs spray method as described in the STRI guide to determine hydrophobicity. The STRI guide provides a standard procedure and apparatus to be used for hydrophobicity classification, which makes the HC depend only on the nature of insulator. In this method, water is sprayed on the insulator surface, and the image of water dispersion pattern is acquired. An operator classifies the HC of the insulator from the acquired image by comparing with the standard reference image provided in the STRI guide. Classification of hydrophobicity by the operator is carried out by visual comparison, which may lead to misclassification of HC due to subjective analysis. Automatic inspection of insulators is required for monitoring it periodically, which is done by image processing and learning techniques, and lead to accurate classification of HC. Table 1 shows the criteria for evaluation of HC according to STRI guide. Acquiring a large number of insulator samples with various hydrophobicity classes is a challenging task. A solution of isopropyl alcohol and distilled water with different concentrations is prepared. It is sprayed on the surface of a new polymeric insulator to generate various classes of hydrophobicity. Alcohol tends to lower the surface tension of water and lower its density. This makes the solution to spread uniformly over the surface emulating the same effect as if the surface is less hydrophobic. The literature [22] illustrates the percentage of alcohol by volume in the spraying solution and their corresponding hydrophobic class as in Table 1. The concentration of isopropyl alcohol is inversely propositional to the hydrophobicity due to the degradation of water surface tension. For instance, a 0% alcohol concentration produces a high class of hydrophobicity (HC-1) and 100% concentration of alcohol emulates a hydrophilic surface with a very low grade of hydrophobicity (HC-7) as illustrated in Fig. 1. A specimen is cut from the fresh polymer insulator and placed on a flat surface. In case of field specimen, contamination is inevitable and will lead to a false classification of hydrophobicity [23]. Hence, the polymer insulators are cleaned to remove the contaminants from the surface before proceeding with the test. 
Solutions of different alcohol concentrations are sprayed on the polymer surface. A camera is mounted vertically 25 cm above the insulator surface, and images are captured for various hydrophobic classes. In the proposed work, a total of 414 images are captured with at least 30 images per HC. About 330 images are considered for training the Bayesian classifier and the remaining 84 images are used to generate testing data for validation. The proposed work uses the binary image of the insulator to evaluate the hydrophobicity. Extraction of the binary image uses a threshold to determine the white and black pixels. Hence, an accurate threshold is required to provide a binary image of high quality. As the threshold depends on the ambient light intensity, an adaptive threshold technique is adopted. It involves two stages, namely stage-1 to evaluate the ambient light intensity and stage-2 to provide the threshold based on the evaluated ambient light intensity. At stage-1, an image of the insulator before spraying of water is acquired. The acquired colour image is converted to grey scale (I g ) and its mean intensity (I al ) is evaluated as in (1), which quantitatively provides the ambient light intensity. This mean intensity (I al ) is used to calculate the binary threshold level (T b ) using the pre-defined calibrated curve, which is discussed in Section 3.2. Stage-2 involves the acquisition of the insulator image after spraying water. The acquired image is converted to binary form using the calculated threshold (from stage-1) as in (2), which can provide ambient light compensation and improve the image quality. Binary features are extracted using BLOB analysis from the binary image (I b ) [24]. A total of 12 features describing the distribution of both discrete droplets and wetted traces from the water runnels are observed. Methodology In the proposed work, the methodology involves pre-processing of the acquired image, adaptive threshold selection, extraction of binary features and design of the Bayesian classifier, which is explained as follows. Pre-processing Images of the insulator surface with water droplets are acquired from the digital camera. The images are converted to grey scale to make them colour independent. Variations in intensity distribution due to shadows and ambient lights are compensated by using histogram equalisation. Histogram equalisation performed on the grey-scale image tends to equalise the intensity of the image and eventually enhances the features as illustrated in Fig. 3. Then, the images are cropped with the pre-defined region and size, making them uniform for further processing. Adaptive binary threshold using ambient light intensity In the proposed work, the grey-scale image (I g ) of the insulator is converted to a binary image (I b ), which requires a threshold (T b ) as in (2). The value of the threshold plays a vital role in determining the image quality and depends largely on the ambient light illumination in which the image is acquired. As these polymer insulators are installed in transmission lines, which are predominantly in the outside environment, the acquired images are subjected to variations in ambient light intensity. This demands an intensity-based binary threshold that can compensate for intensity variations, which has been addressed in this work. To find a correlation between the ambient light intensity and the binary threshold, six classes of various ambient light intensities prevailing in the outside environment are considered in this paper (see Figs.
4a, 5a and 6a for classes 1, 3 and 6, respectively). About 20 images for each class are acquired before and after water spraying. Images acquired before water spraying (stage-1, see Fig. 2) are used to calculate the mean ambient light intensity (I al ) level. The binary threshold (T b ) producing optimal binary image after water spraying are also determined. The correlation between the intensity level (I al ) and the binary threshold (T b ) are determined and it is found to be a linear relation. Fig. 7 illustrates intensity level and their corresponding thresholds of the images obtained under various ambient lighting conditions (20 images × 6 lighting conditions = 120 image samples). Mean intensity levels of these images across the ambient light classes are obtained. A linear relation is observed between the intensity level (I al ) and the binary threshold (T b ). This leads to fit a linear curve using least-square technique with the reliable level of root-mean-square error 0.02155 as in (3). Using this curve, the binary threshold for the given ambient light intensity conditions can be determined, which can provide a binary image of reliable quality To ascertain the efficiency of the proposed ambient light compensation technique, the effect of binary threshold on binary features is evaluated. Number of water droplets (N wd ) is one of the important binary feature, which is considered in this paper. The acquired image of insulator after water spraying (see Fig. 8a) is converted to grey-scale image and histogram equalisation is applied as described in Fig. 8b. Histogram equalisation enhances the image and improves the sharpness of the edges leading to distinguish water droplets from its background effectively. A pre- defined region is cropped from the image as shown in Fig. 8c. The choice of region is capable of avoiding any insulator edge and observing the water dispersion pattern. To demonstrate the impact of threshold, the cropped images are converted to binary image with both fixed threshold (see Fig. 9) and with the proposed adaptive threshold (see Fig. 10). BLOB analysis is performed to isolate the group of white space regions, which corresponds to the water droplets. Area-based filtering technique is used to denoise the segmented binary image. Binary objects with the lesser area often correspond to noise, which is removed by a pre-defined area constraint. The number of water droplets is identified as illustrated in Figs. 9d and 10d. It is observed that the adaptive threshold is capable of accurately identifying the water droplets, whereas the fixed threshold tends to group multiple water droplets along with its background. Furthermore, the robustness of the proposed adaptive threshold technique is evaluated across all the six classes of ambient light intensities (C a ) considered in this paper. These classes of intensities indicate discrete samples of increasing ambient lights that occur in an outdoor environment at a forenoon session of day time. Ambient light intensity class (C a = 1 ) corresponds to low light intensity that occurs at dawn/dusk and C a = 6 indicates a high light intensity at noon. Since the same class of lighting conditions will occur in decreasing order on the afternoon session, it becomes redundant and hence they are omitted. The number of water droplets in the same insulator is determined across various light intensities with area constraint. 
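The two-stage adaptive threshold can be summarised in a few lines of code. The sketch below is a minimal reading of the procedure described above, not the authors' implementation: the calibration pairs (I_al, T_b) stand in for samples such as the 120 calibration images of Fig. 7, and droplets are assumed to appear brighter than the background after histogram equalisation.

```python
import numpy as np
import cv2

def ambient_intensity(pre_spray_bgr):
    """Stage 1: mean grey level I_al of the image acquired before spraying."""
    return float(cv2.cvtColor(pre_spray_bgr, cv2.COLOR_BGR2GRAY).mean())

def calibrate(i_al_samples, t_b_samples):
    """Least-squares linear fit T_b = m * I_al + c from calibration images."""
    m, c = np.polyfit(np.asarray(i_al_samples, float),
                      np.asarray(t_b_samples, float), 1)
    return m, c

def binarise(post_spray_bgr, i_al, m, c):
    """Stage 2: equalise, then threshold with the ambient-compensated T_b."""
    grey = cv2.equalizeHist(cv2.cvtColor(post_spray_bgr, cv2.COLOR_BGR2GRAY))
    t_b = m * i_al + c                       # adaptive binary threshold for this lighting
    _, binary = cv2.threshold(grey, t_b, 255, cv2.THRESH_BINARY)
    return binary
```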
It is observed that the number of water droplets identified remains consistent and close to the actual number of water droplets (N wd = 68) as in Fig. 11. Feature extraction using BLOB analysis BLOB analysis primarily groups the clusters of white pixels (indicating water droplets) and determines their properties. This makes BLOB a preferred tool for hydrophobicity classification as it predominantly depends on the dispersion pattern of water droplets [25]. It uses a template of connected pixels, either four-connected or eight-connected, to detect the regions. Once the regions are segmented, the properties of these regions are determined and can be used as features for hydrophobicity classification. 12 features describing the dispersion pattern of water droplets are extracted from the BLOB analysis as explained below. Number of water drops: The number of water droplets in the given area is a key marker to indicate the dispersion of water over the hydrophobic surface. As the water is sprayed uniformly, the droplets tend to cluster locally to provide a minimum contact area with the insulator. This leads to an increase in the number of water droplets (with smaller area) for a higher HC as illustrated in Fig. 12a. On a low hydrophobic insulator, water tends to spread all over the surface, leading to a reduced number of water droplets (with larger area) as in Fig. 12d. In BLOB analysis, the number of water droplets N wd is calculated from the number of binary objects N bl detected. However, BLOB analysis has the potential to identify even a small cluster of white pixels, which may correspond to high-frequency noise. A threshold for the area (A bl thre ) is set to filter out the noise, and the binary objects having an area higher than the threshold are considered as water droplets as in (4). It is possible to define a minimum pixel count that corresponds to a water droplet as the camera is mounted, in accordance with the standards, at a fixed distance of 25 cm from the insulator. As there is no relative movement/translation between the camera and insulator, the minimum pixel value corresponding to the water droplet is constant and set as the area threshold. The segmented images with identified water droplets for various classes of hydrophobicity are illustrated in Fig. 12. It is clearly evident that the number of water droplets significantly reduces with an increase in their area. This makes the number of water droplets a clear marker of hydrophobicity. Circular factor (C f ): The circular factor is one of the crucial shape features to classify hydrophobicity. It describes the closeness of the water droplet to a perfect circle. It is directly related to the surface tension existing between the water and the insulator surface. If the water droplet appears to be circular, then the insulator exhibits a higher hydrophobicity. Water droplets may be traced out (non-circular) on a lower hydrophobic surface. The circular factor is computed as circularity = (perimeter of BLOB)^2 / (4π × area of BLOB) (5). Coverage rate of water: In the given binary image, the white pixels correspond to the presence of water and the black pixels represent the background surface of the insulator. The coverage rate of water (C r ) is calculated as the ratio of the number of white pixels N wpix to the total number of pixels (N pix ) in the given image. On a highly hydrophobic surface, the water tends to occupy a minimal area due to its higher surface tension. This makes the coverage rate of water lower for a good hydrophobic insulator.
On the other hand, water tends to spread over the lesser hydrophobic surface making a higher coverage rate Coverage rate of maximum water droplet: The water droplet with a maximum area can provide some insight into the distribution of water over the insulator surface. The area of this maximum water droplet is calculated as its coverage rate and used for hydrophobicity classification 3.3.5 Solidity: Solidity is the measure of droplet shape. It is an area section of the object related to its convex hull. A more circular water drop produces the solidity value closer to one and indicates a higher hydrophobic surface. If it is stretched, then the value is less than one indicating a lesser hydrophobic insulator. Maximum perimeter: Perimeter of the binary objects P max can provide information about the shape of water dispersed over the insulator surface. On a high hydrophobic surface, the water droplet experiences a circular shape with the minimal perimeter. Similarly, the water tends to disperse on a lower hydrophobic surface having a higher perimeter. Shape factor: Shape factor is a standard metric to determine the shape of the binary objects. In the proposed work, the shape factor of the maximum water droplet is considered as a feature. It is proportional to the ratio of diameter D max to the perimeter of the maximum binary objects in the given image as in the equation below: Euler number: Euler number is the difference between the number of objects and the number of holes in that image. Negative sign shows that the sum of holes is higher than the sum of objects. It is one of a significant feature that describes the reflectivity of the water droplet. On a higher hydrophobic surface, the water droplet tends to form a convex shape and reflects the light to create a single brighter spot. The reflection of the insulator surface in the water droplet creates a darker spot just near the brighter spot as shown in Fig. 13. It creates a hole in the binary image of the water droplet, which can be measured using Euler number. The water tends to disperse on a lesser hydrophobic surface. It will create a uniform reflection of light and also no reflection of the insulator surface is observed. The binary image is also observed to be with no or a minimal number of holes as shown in Fig. 13. Eccentricity mean: Unlike the conventional technique, BLOB analysis provides the elliptical fit of the identified binary objects. It provides a major r M i and minor r m i radii that cover the possible area of the ith binary objects. Thus, the eccentricity mean for the given image is calculated as the ratio of minor to the major radius, which is close to unity for a circular object. It provides the overall inferences about the shape of the water droplets (Fig. 14) Water droplets are fitted into an elliptical shape. Major and minor axes of the ellipse are calculated. Eccentricity is defined as the ratio of these two axes. Water droplets having more circular nature produces the eccentricity value closer to one. In this work, both mean and maximum values of eccentricity are considered as a separate feature. Eccentricity maximum: The maximum value of the eccentricity is also considered as one of the features for hydrophobic classification. For a perfect circle, the major and minor axes are identical leading the eccentricity equals to one. Eccentricity maximum describes the availability of any water droplet closer to a circular shape. Histogram-based major area: Histogram is used to classify the water droplets by their area. 
The binary objects identified from the BLOB analysis are classified into 20 area bins. The number of water droplets in each of the area bins is found, and a histogram is plotted as in Fig. 15. The area bin having the highest number of water droplets is considered as major area and used as a feature to classify hydrophobicity. As the insulator surface loses hydrophobicity, the number of water droplets falls and the dispersion/area coverage by water increases as illustrated in Fig. 15. Histogram-based major water droplets: The number of water droplets having the majority area is also considered to be a feature. A good class of insulator produces a higher number of water droplets belonging to a smaller area bin, and the isolation of water tends to decrease as the insulator loses its hydrophobicity as observed in Fig. 15. Bayesian classifier Naive Bayesian classification (NBC) is a statistical supervised classification method [26]. It is one of the widely used machine learning technique for both binary classification and multi-class classification problems. NBC is based on Bayes theorem to calculate the conditional probability (P H c f s ) for the given image features ( f s ) belong to a particular class of hydrophobicity (H c ) as in (10). It is simple to be constructed and suitable for high dimensionality systems NBC finds its application in challenging image classification problem as reported in the literature [27]. NBC is preferred for image classification because of its higher classification accuracy and training speed [28]. It provides the likelihood of class for the extracted features, which enables variable thresholds and makes reliable for real-time applications [27,29]. NBC outperforms with higher accuracy than other conventional techniques [30] in texture classification, which is closely related to the proposed work. It makes NBC a preferred choice for analysing water dispersion patterns in the proposed work. In the proposed work, binary features are extracted from the acquired insulator image and its corresponding hydrophobic classes are determined by its alcohol concentration. The extracted features and their corresponding HCs are used as training data to design multiple Gaussian models, which represent the a priori knowledge P f s H c . Once the NBC is trained, its classification accuracy is evaluated using the features available in the training data. For testing, new images for various classes of hydrophobicities are acquired. Binary features are extracted after pre-processing with an adaptive threshold. These features are used to determine the HC of the insulator. Optimal feature selection Among the 12 features extracted from BLOB analysis, optimal features required for classification are selected as follows. Initially, PCA is used to order the feature set ( f BL ) in accordance with the magnitude of eigenvectors (E g ). Eigenvectors provide information about the interdependencies among features. The independent feature will have a higher magnitude of eigenvector co-efficient and are considered as preferred features. Eigenvectors (E gs ) are sorted in the increasing order of its magnitude and the corresponding feature indices (i f ) are identified. Next, the features are reordered according to the identified indices and a new set of features ( f blr ) is created. This reordered feature is accumulated one by one and a Bayesian classifier B c is designed. This Bayesian classifier is used to predict the HC H cp using the given features f bs . 
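Before turning to the accuracy evaluation, the sketch below illustrates how a few of the BLOB features listed above could be computed with scikit-image. It is not the authors' code: the area threshold is a placeholder, the circularity follows the reconstructed form (perimeter squared over 4π times area), and scikit-image's eccentricity is defined differently from the minor-to-major axis ratio used in the paper.

```python
import numpy as np
from skimage.measure import label, regionprops, euler_number

def blob_features(binary, area_threshold=20):
    """A few of the droplet features described above, from a binary image
    (water = 1/True, insulator background = 0/False)."""
    labelled = label(binary > 0, connectivity=2)        # 8-connected BLOBs
    props = [p for p in regionprops(labelled) if p.area >= area_threshold]
    if not props:
        return {}

    areas = np.array([p.area for p in props], dtype=float)
    perims = np.array([p.perimeter for p in props], dtype=float)
    # Note: skimage's eccentricity = sqrt(1 - (b/a)^2), not the b/a ratio of the paper.
    ecc = np.array([p.eccentricity for p in props], dtype=float)

    return {
        "N_wd": len(props),                                         # number of water droplets
        "coverage_rate": float((binary > 0).sum()) / binary.size,   # C_r
        "coverage_rate_max_droplet": float(areas.max()) / binary.size,
        "circularity_mean": float(np.mean(perims**2 / (4 * np.pi * areas))),
        "max_perimeter": float(perims.max()),
        "solidity_mean": float(np.mean([p.solidity for p in props])),
        "eccentricity_mean": float(ecc.mean()),
        "eccentricity_max": float(ecc.max()),
        "euler_number": euler_number(binary > 0, connectivity=2),
    }
```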
Classification accuracy is evaluated using the predicted H cp and the actual class of hydrophobicity H ca of the insulator. The improvement in classification accuracy with the introduction of each additional feature is analysed by using the confusion matrix C m . This procedure is repeated for all the identified 12 features as illustrated in Algorithm 1 (see Fig. 16). Feature ordering using PCA Eigenanalysis of the correlation matrix (formulated using the feature sets) is carried out using PCA [31]. The variations in eigenvalues across all the PCs of the features are identified. These values are used to identify the predominant components to evaluate the eigenvectors. From Table 2, it is observed that around 85% of the variance lies in the first three PCs, and the remaining PCs are eliminated due to their lesser significance. Eigenvectors for these first three PCs are evaluated as in Table 3. The coefficients of these eigenvectors having a significantly higher magnitude are identified. The features are reordered in accordance with the strength of their coefficients as illustrated in Table 3. Feature evaluation using accumulation effect The improvements in classification accuracy with the accumulation of features in the order prescribed by PCA are evaluated. The confusion matrix has been used as a performance index to evaluate the classification accuracy for the training and testing image sets. Fig. 17 shows the improvement in accuracy with the sequential inclusion of one feature at a time. It is observed that the inclusion of features beyond eight yields no significant improvement in classification accuracy. Hence, the first eight features ordered by PCA are considered to be optimal and the remaining four features are discarded. Results and discussion A total of 330 training images and 84 test images of the insulator are considered in this paper. These images are compensated for ambient light variations and converted to binary images. The identified eight BLOB features (see Table 2) have been extracted for these images and the training feature set is used to design the NBC. The performance of the NBC is evaluated using the testing data set with a confusion matrix as in Table 4. It is observed that the proposed NBC can classify HC accurately at higher classes (HC-1-3) and produces minor classification errors at lower classes of hydrophobicity. This is mainly due to the minimal shape variations of water droplets in lower HCs. Furthermore, the influence of the training sample size is analysed as follows. The training samples (N tr ) are chosen as per the ratio (R) of testing samples (N te = 84) as in (11). The classification accuracy for the proposed NBC trained using the samples selected based on these ratios is illustrated in Fig. 18. A ratio less than unity corresponds to fewer training samples than testing samples (N tr < N te ) and vice versa. At unity ratio, the number of training data is equal to the testing data (N tr = N te ). At lower ratios, it is observed that the proposed classifier tends to overfit, which is indicated by a large deviation between the classification accuracies for training and testing data. With a higher number of training data, the NBC is capable of learning the likelihood of features amidst outliers. This improves the classification accuracies at higher ratios. Thus, the proposed NBC trained with all the training samples (R = 4) produces classification accuracies of 99.1 and 97.6% for training and testing samples, respectively.
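The feature-ordering and accumulation procedure can be prototyped with scikit-learn. The sketch below is one plausible reading of the steps above (rank features by their strongest absolute loading on the first three PCs, then add them one at a time to a Gaussian naive Bayes classifier); the random arrays are only placeholders for the 330 training and 84 testing feature vectors.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score

def order_features_by_pca(X, n_components=3):
    """Rank features by the largest absolute loading they have on the retained PCs."""
    pca = PCA(n_components=n_components).fit(X)
    strength = np.abs(pca.components_).max(axis=0)   # one strength value per feature
    return np.argsort(strength)[::-1]                # strongest feature first

def accumulate_and_score(X_train, y_train, X_test, y_test, order):
    """Add ordered features one at a time and record the NBC test accuracy."""
    scores = []
    for k in range(1, len(order) + 1):
        cols = order[:k]
        clf = GaussianNB().fit(X_train[:, cols], y_train)
        scores.append(accuracy_score(y_test, clf.predict(X_test[:, cols])))
    return scores

# Placeholder data: 12 BLOB features per image, HC labels 1-7.
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(330, 12)), rng.integers(1, 8, 330)
X_test, y_test = rng.normal(size=(84, 12)), rng.integers(1, 8, 84)
order = order_features_by_pca(X_train)
print(accumulate_and_score(X_train, y_train, X_test, y_test, order))
```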
Multi-fold crossvalidation (N = 10 folds) is also performed to assess classification accuracy. These folds provide an ordered selection of training data and testing data from the same set of samples. This makes each data to be used for both training and testing (not simultaneously) at least once. Classification accuracy of 97.82% is observed, which indicates the reliability of the proposed NBC. Furthermore, the performance of the proposed Bayesian classifier is compared with other closely related classical techniques. This includes decision tree, linear discriminate analysis (LDA), K-nearest neighbourhood (KNN) and SVM. These techniques are also found to be employed for hydrophobicity classification with different features [18,21,32]. Similar to the feature selection procedure used in the proposed NBC, features ordered by PCA are sequentially accumulated for other classifiers. Their classification accuracies for testing data are evaluated as illustrated in Fig. 19. It is observed that the decision tree and SVM are aligned with the proposed NBC in providing optimal classification accuracy for eight features. LDA and KNN produce higher classification accuracy for 9 and 12 features, respectively. Conclusion In this paper, the HC of polymer insulator is identified by the spray method. Dispersion of water droplets over the insulator surface is a direct marker of HC. Features are extracted from the digital image of the insulator to infer the water dispersion pattern. Ambient light variations have been measured, and the threshold for binary images is determined adaptively. Six classes of ambient light variations are used to validate the adaptive threshold technique. Results illustrate a robust extraction of feature (number of water droplets) amidst variations in ambient light intensity. NBC is used to build a priori model of features for the given HC and used to classify the insulator's HC. About 12 features are extracted from the binary image by BLOB analysis. Features are ordered by PCA and improvements in classification accuracy with the accumulation of features are analysed for possible feature reduction. Experimental results demonstrate that the use of first eight features can produce a significant classification accuracy of 97.6% for testing data. The proposed classifier is equipped with ambient light compensation, which makes it suitable for field testing of insulators. Automated acquisition of insulator images and online classification using an unmanned aerial vehicle is the future scope of the proposed work. Furthermore, binary image-based features used in this proposed work are best suited for evaluation of pattern, shape, size and location of the foreground objects. Hence, the proposed procedure can be applied for image-based object classification. Identification of various elements in transmission lines, evaluation of flashover pattern in contaminated insulators and assessment of transmission line layout using aerial images are some of the potential applications that can use the proposed methodology.
Charging and discharging at the nanoscale: Fermi level equilibration of metallic nanoparticles Surrounding environment, excess charge and size affect the Fermi level of the electrons in nanoparticles, having a significant influence on their properties. Introduction From an electrochemical viewpoint, metallic nanoparticles (NPs) can be regarded as multivalent redox species with a wide range of redox states that may be charged or discharged by interaction with their environment. The redox properties of metallic NPs, especially those of gold (Au) and silver (Ag), have been extensively studied and may be divided into three distinct voltammetric regimes based on their sizes: (i) bulk continuum, (ii) quantized charging and (iii) molecule-like voltammetry. 1,2 For the largest metallic NPs, with core sizes typically in the range 2 to 100 nm, the redox potentials of each charge state are so close together that they form a continuum. As the core size of the NPs decreases to less than 2 nm, the NPs are renamed "nanoclusters (NCs)" and a threshold is reached where the redox potentials of the different charge states are separated enough or "quantised" such that they may be measured distinctly. Such measurements have been achieved with metallic NCs of gold, copper and various alloys coated with an organic monolayer of alkanethiols and are commonly referred to as monolayer-protected clusters (MPCs). 3 In this perspective, we will focus on the shifts of the Fermi level and its influence on the chemical and electrochemical properties of NPs. For further information on the synthesis of subnanometer sized metal NCs and their interesting catalytic, fluorescent and chiral properties, the reader is referred to recent reviews. 4,5 Additionally, we will limit the scope to metallic NPs, and not semiconductor nanocrystals or quantum dots, as reviewed and discussed in detail elsewhere. [6][7][8] The Fermi level of an electron in solution The electrochemical potential of an electron in an aqueous solution, $\bar{\mu}^{\mathrm{S}}_{e^-}$, is a concept associated with the presence of a redox couple (ox/red) in solution (S) and the following virtual redox reaction between an electron and that redox couple: ox$^{\mathrm{S}}$ + e$^{-}$(S) ⇌ red$^{\mathrm{S}}$. At equilibrium, we can define the electrochemical potential for the virtual electron in solution as the difference between the electrochemical potentials of the reduced and oxidised species, respectively. It represents the work to bring an electron at rest in vacuum to the solution containing the redox couple. 9

$\bar{\mu}^{\mathrm{S}}_{e^-} = \bar{\mu}^{\mathrm{S}}_{\mathrm{red}} - \bar{\mu}^{\mathrm{S}}_{\mathrm{ox}} = \alpha^{\mathrm{S}}_{e^-} - e\psi^{\mathrm{S}}$ (1)

where $\alpha$ is the real potential and $\psi^{\mathrm{S}}$ is the outer potential associated with the presence of excess charge on the solution. By analogy with the Nernst equation, we can define the standard redox potential on the absolute vacuum scale (AVS) by considering the virtual reduction reaction between an electron at rest in vacuum and the oxidised species in solution, ox$^{\mathrm{S}}$ + e$^{-}$(V) ⇌ red$^{\mathrm{S}}$, to have

$e[E_{\mathrm{ox/red}}]^{\mathrm{S}}_{\mathrm{AVS}} = -\Delta G_{\mathrm{red}} = \bar{\mu}^{\mathrm{S}}_{\mathrm{ox}} - \bar{\mu}^{\mathrm{S}}_{\mathrm{red}}$ (2)

as by definition the electron at rest in vacuum is the origin of the AVS scale, $\bar{\mu}^{\mathrm{V}}_{e^-} = 0$. By comparing eqn (1) and (2), we can define the electrochemical potential of the electron in solution and consequently define a Fermi level for the electron in solution as shown in Scheme 1. Eqn (3) states that the Fermi level of an electron in solution depends on the real potential of ox and red.
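Combining eqn (1) and (2) makes that statement concrete; the short derivation below uses only the definitions given above and is written out here for clarity (it is not a quotation of the paper's eqn (3)):

```latex
% eqn (1):  \bar{\mu}^{S}_{e^-} = \bar{\mu}^{S}_{red} - \bar{\mu}^{S}_{ox}
% eqn (2):  e[E_{ox/red}]^{S}_{AVS} = \bar{\mu}^{S}_{ox} - \bar{\mu}^{S}_{red}
% Comparing the two relations gives the Fermi level of the electron in solution:
\begin{equation}
  E_{F}^{S} \;\equiv\; \bar{\mu}^{S}_{e^-}
  \;=\; \bar{\mu}^{S}_{red} - \bar{\mu}^{S}_{ox}
  \;=\; -\,e\,[E_{ox/red}]^{S}_{AVS}
\end{equation}
```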
In the case of a system with multiple redox couples, eqn (3) has to be fulfilled for all the redox-active species in equilibrium, and typically one redox species in excess will dominate the Fermi level of the solution. The ionization energy of a metallic NP in vacuum The work function $\Phi$ is the work to remove an electron from a neutral and large piece of metal, whereas the ionization energy is a term usually associated with atoms and molecules but also with charged NPs for the extraction of an electron: $\mathrm{NP}^{\mathrm{V}}_{ze} \rightarrow \mathrm{NP}^{\mathrm{V}}_{(z+1)e} + e^{-,\mathrm{V}}$. The ionization energy IE in vacuum of a spherical metallic NP of charge ze and radius r can be expressed using elementary electrostatics [10][11][12] as

$$IE^{\mathrm{V}}_{\mathrm{NP},ze} = \Phi + \left(z + \tfrac{1}{2}\right)\frac{e^2}{4\pi\varepsilon_0 r} \quad (4)$$

so that $IE^{\mathrm{V}}_{\mathrm{NP},ze}$ contains a bulk term for the work function, and a charging term. More generally, the ionisation energy of a neutral NP has been proposed to read

$$IE^{\mathrm{V}}_{\mathrm{NP}} = \Phi + a\,\frac{e^2}{4\pi\varepsilon_0 r} \quad (5)$$

where the coefficient a can be considered to be equal to 1/2 as in eqn (4) or 3/8 according to the electrostatic model used. 13 A recent review by Svanqvist and Hansen 14 compared both experimental and computational values of work functions of small clusters, and concluded that metals tended to have a coefficient a of ca. 0.3. This variation of the coefficient a is due to quantum effects. 14 Eqn (4) shows that as a metallic NP becomes more negatively charged (z < 0), its ionization energy in vacuum decreases as the energy required to extract an electron decreases. Inversely, as a metallic NP becomes more positively charged (z > 0), its ionization energy increases, as illustrated in Scheme 2. (Scheme 2: representation of the apparent ionization energy for the extraction of an electron from a metallic NP in vacuum when the surface of the metallic NP is neutral ($\psi_{\mathrm{NP}} = 0$), negatively ($\psi_{\mathrm{NP}} < 0$) and positively ($\psi_{\mathrm{NP}} > 0$) charged.) (Fig. 1B: variation of the Fermi level of bare metal NPs of different radius and the corresponding equilibrium potential between the Au NP and AuCl$_4^-$ ions, as calculated using eqn (8) and (14) when the activities of Au NPs and AuCl$_4^-$ are taken as unity.) Eqn (4) also demonstrates that the ionization energy of a neutral metallic NP in vacuum is higher than the corresponding work function of the bulk metal. Hence, electrons in neutral metallic NPs are at a lower Fermi level than in the bulk metal. In vacuum, the ionisation energy of a neutral Au NP lies somewhere between that of a gold atom, 9.2 eV, 15 and the work function of bulk gold metal, approximately 5.3 eV. 9 It is important to note that the charge on the NP could be electronic or electrostatic due to the presence of adsorbed ionic species or ligands. The difference between these ionization energies under neutral and charged conditions is directly related to the excess charge on the metallic NP and, therefore, to the outer potential. For spherical metallic NPs, the outer potential is directly related to the excess charge ze through the capacitance and is given by $\psi_{\mathrm{NP}} = ze/C$, with $C = 4\pi\varepsilon_0 r$ for an isolated sphere in vacuum. If we consider a nanoparticle on a support, the capacitance will depend on the geometry of the support. The Fermi level and redox potential of a metallic NP in solution The redox potentials of a metallic NP can be evaluated with thermodynamic cycles, as previously shown by Su and Girault. 11 For the reduction of a metallic NP in solution, the thermodynamic cycle yields the absolute standard redox potential of eqn (8), where d and $\varepsilon_d$ are the thickness and relative permittivity of an adsorbed layer.
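As a quick numerical illustration of the charging term in eqn (4) and (5), the sketch below evaluates the ionization energy of a small Au NP for a few charge states; the radius, charge states and coefficient a used here are illustrative values rather than figures taken from the text.

```python
# Sketch: ionization energy of a spherical metallic NP in vacuum, following the electrostatic
# expression discussed above (bulk work function plus a charging term). The prefactor a
# (1/2, 3/8, or ~0.3) is model-dependent; the values below are illustrative only.
import numpy as np

E_CHARGE = 1.602176634e-19      # C
EPS0 = 8.8541878128e-12         # F m^-1

def ionization_energy_eV(work_function_eV, radius_m, z=0, a=0.5):
    """IE = Phi + (z + a) * e^2 / (4 pi eps0 r), returned in eV."""
    charging_term_J = (z + a) * E_CHARGE**2 / (4 * np.pi * EPS0 * radius_m)
    return work_function_eV + charging_term_J / E_CHARGE

# Neutral 1 nm-radius Au NP (Phi_Au ~ 5.3 eV): IE rises above the bulk work function (~6.0 eV).
print(ionization_energy_eV(5.3, 1e-9))
# The same NP carrying 2 extra electrons (z = -2): extraction becomes noticeably easier (~3.1 eV).
print(ionization_energy_eV(5.3, 1e-9, z=-2))
```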
Eqn (8) shows that the absolute standard redox potential of a spherical, metallic and chemically inert NP depends on the work function of the bulk metal but also on a term that takes into account the size and charge of the metallic NP and the dielectrics of the solvent and of an adsorbed molecular layer (if present). Fig. 1A compares the redox potentials of bare and layer-coated NPs of radii 5 and 10 nm in solution, with $d = 0.8$ nm and $\varepsilon_d = 10$ or 2, and taking $\Phi_{\mathrm{Au}}$ as 5.3 eV. It is worth noting that the charging of these metallic NPs is not quantized but strongly depends on the presence of an adsorbed layer. The capacitance of these larger metallic NPs (>5 nm) is rather small, and hence a change of charge by one electron results in a very small variation of the Fermi level of the NP, $E^{\mathrm{NP}}_{\mathrm{F}}$. The slopes in Fig. 1A represent the reciprocal of the capacitance. As gold is a noble metal, $E^{\mathrm{NP}}_{\mathrm{F}}$ for an uncharged particle is well below the Fermi level for the H$^+$/H$_2$ redox couple (taken here equal to $-4.44$ eV). Indeed, Fig. 1 shows that $E^{\mathrm{NP}}_{\mathrm{F}}$ remains more negative than this value unless the charge on the NP becomes largely negative. Approximately 500 negative charges are needed on a 10 nm Au NP, corresponding to a charge density of 64 mC m$^{-2}$ (for comparison, silica has a charge density of 10 mC m$^{-2}$ at neutral pH), to reach 0 V vs. the Standard Hydrogen Electrode (SHE). For comparison, nanorods have been charged to a charge density of 2100 mC m$^{-2}$. 16 Metallic NPs in solution are often synthesized with anions adsorbed on the NPs (e.g. citrate anions in the Turkevich synthesis of gold NPs). It is important to note that the adsorbed ionic charges contribute to the position of $E^{\mathrm{NP}}_{\mathrm{F}}$, and the charge ze in eqn (8) includes both the excess number of electrons and the adsorbed ionic charges. Keeping a simple electrostatic model that treats the solvent as a dielectric continuum, the capacitance of an MPC is given by 17

$$C_{\mathrm{MPC}} = 4\pi\varepsilon_0\varepsilon_d\,\frac{r(r+d)}{d} \quad (11)$$

and this equation is self-consistent with eqn (8) when calculating the separation of the redox potentials upon charging. This model can be extended to take into account the effects of the diffuse electrical double layer surrounding the MPC (by both linearized and non-linear Poisson-Boltzmann (P-B) models), [18][19][20] and also the solvent and ion penetration into the surrounding monolayer. 21,22 In fact, the capacitance of an MPC can be considered as two capacitors in series, one for the monolayer of thickness d and one for the bulk solution. If we consider the ionic atmosphere around the NP, the bulk capacitance becomes 17

$$C_{\mathrm{bulk}} = 4\pi\varepsilon_0\varepsilon_{\mathrm{S}}\,(r+d)\left[1+\kappa(r+d)\right]$$

with $\kappa$ the reciprocal of the Debye length, which is determined by the ionic strength of the electrolyte solution, and $\varepsilon_{\mathrm{S}}$ the relative permittivity of the solvent. These simple equations illustrate the dominating effect of the monolayer when determining the values of the capacitance and, hence, the separation between the different redox potentials. However, this equation does not include the effect of the charge on the capacitance of the NP, as the linearized Poisson-Boltzmann equation was used to account for the electrochemical double layer. 17 The most sophisticated approach involves a generalized P-B equation that considers the monolayer as a disordered medium in which the thermal motion of the counter ions is decreased because of electrostatic correlations and monolayer structural effects. This generalised P-B equation was proposed on the basis of replacing the Boltzmann distribution by the Tsallis q-exponential distribution. 23
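A minimal numerical check of two points made above: the surface charge density corresponding to roughly 500 electrons on a 10 nm radius NP, and the per-electron Fermi level step implied by the concentric-sphere MPC capacitance of eqn (11). The monolayer thickness and permittivity used for the second estimate are illustrative assumptions.

```python
# Sketch: (i) surface charge density for ~500 electrons on a 10 nm radius NP, and
# (ii) the Fermi level step per added electron from the concentric-sphere capacitance above.
import numpy as np

E = 1.602176634e-19      # C
EPS0 = 8.8541878128e-12  # F m^-1

def surface_charge_density(n_electrons, radius_m):
    """Charge per unit surface area of a sphere, in C m^-2."""
    return n_electrons * E / (4 * np.pi * radius_m**2)

def mpc_capacitance(r, d, eps_d):
    """Concentric-sphere capacitance of a monolayer-protected cluster (F)."""
    return 4 * np.pi * EPS0 * eps_d * r * (r + d) / d

print(surface_charge_density(500, 10e-9) * 1e3)   # ~64 mC m^-2, as quoted in the text

# Per-electron potential step e/C for a 5 nm radius core, 0.8 nm monolayer, eps_d = 10:
C = mpc_capacitance(5e-9, 0.8e-9, 10.0)
print(E / C)   # ~4 mV: large NPs therefore show essentially continuous (non-quantized) charging
```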
The comparison of these models shows that the capacitance calculated from the simple concentric sphere model (eqn (11)) gives both the correct order of magnitude and some qualitative features of the total MPC capacitance, although significant deviations are predicted, especially when ion and solvent penetration are important. Recent works by Su et al. have also stressed the importance of ion penetration into the monolayer. 24,25 The Fermi level and redox potential of a soluble metallic NP in solution The link between $E^{\mathrm{NP}}_{\mathrm{F}}$ and r of a metallic NP has also been highlighted by the pioneering theoretical and experimental work of Plieth 26 and Henglein. 27 Henglein predicted large shifts to higher $E^{\mathrm{NP}}_{\mathrm{F}}$ as r decreased on the basis of gas-phase thermodynamic data and kinetic measurements. These results revealed that for exceptionally small metallic NPs of silver, [27][28][29] copper, 30 lead 27 and others with one to fifteen atoms present, the predicted negative shifts were not smoothly monotonic but oscillated due to small quantum mechanical effects at this scale. $E^{\mathrm{NP}}_{\mathrm{F}}$ for Ag$_n$ clusters is predicted to rise to such extents with decreasing r that, for the smallest odd-atom clusters, n = 1 and 3, the expected redox potentials are dramatically more negative than the bulk value. 28 By comparison, the standard redox potentials of bulk silver and of the strong reducing agent zinc are +0.799 V and $-0.76$ V, respectively. In fact, according to eqn (2), the redox potential for the oxidation of a silver atom corresponds to an ion-solvent interaction energy of ca. $-476$ kJ mol$^{-1}$ when considering the ionization energy of the Ag atom of 731 kJ mol$^{-1}$. This compares well with the hydration energy of $-430$ kJ mol$^{-1}$. 31 Plieth considered the contribution of the chemical potential of a metal atom on a metallic NP of the same metal for growth and dissolution reactions in the presence of reducing or oxidising agents, respectively. For the reduction of a metal cation resulting in the addition of a metal atom to the NP, $\mathrm{M}^{+,\mathrm{S}} + e^{-,\mathrm{V}} + \mathrm{NP}^{ze}_{n} \rightleftharpoons \mathrm{NP}^{ze}_{n+1}$, the standard redox potential differs from that on a large metal electrode by a term inversely proportional to r. As a first approximation, considering only a polycrystalline NP, we have

$$\left[E^{\circ}_{\mathrm{M^+/NP}}\right]_{\mathrm{AVS}} = \left[E^{\circ}_{\mathrm{M^+/M}}\right]_{\mathrm{AVS}} - \frac{2\gamma V_{\mathrm{m}}}{z e N_{\mathrm{Av}} r} \quad (14)$$

where $\gamma$ is the surface tension, $N_{\mathrm{Av}}$ is Avogadro's constant and $V_{\mathrm{m}}$ is the molar volume of the metal. This approach considers the change in Gibbs free energy associated with an increase in the metal's surface area, but it does not take into account the differences in surface energies of different facets. Additionally, the surface energy depends also on the Galvani potential difference between the NP and the solution according to the Lippmann equation. 32 The additional term in eqn (14) accounts for the difference of the chemical potential of a metal atom between a bulk metal and a NP. Eqn (14) is applicable for dissolution and growth reactions in solutions containing metal cations and additional oxidising or reducing agents, but it does not consider the charge of the NPs. Eqn (8) and (14) account for different phenomena. Eqn (8) expresses the variation of the redox potential upon charging or discharging of a metallic NP capable of storing either positive or negative charges upon oxidation or reduction, respectively. Eqn (14), on the other hand, accounts for the size effect of the redox potential for (i) the reduction of a metal cation resulting in the growth of a NP or (ii) the oxidation of a metallic NP not capable of storing positive charges as it dissolves upon oxidation.
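To give a feel for the magnitude of the 1/r term in eqn (14), the sketch below evaluates the size-dependent shift for silver using a surface tension of 1 J m$^{-2}$ and the bulk molar volume; both numbers are illustrative assumptions rather than values from the text.

```python
# Sketch: size-dependent shift of the redox potential from the 2*gamma*Vm/(z*F*r) term in
# eqn (14), evaluated for silver with illustrative parameters (gamma and Vm are assumptions).
F = 96485.0            # C mol^-1 (equals z*e*N_Av per mole of electrons, here z = 1)
GAMMA_AG = 1.0         # J m^-2, assumed surface tension of silver
V_M_AG = 10.27e-6      # m^3 mol^-1, molar volume of bulk silver

def potential_shift(radius_m, z=1):
    """Negative shift (V) of E(M+/NP) relative to the bulk electrode."""
    return -2.0 * GAMMA_AG * V_M_AG / (z * F * radius_m)

for r_nm in (1, 2, 5, 10, 50):
    print(r_nm, "nm:", round(potential_shift(r_nm * 1e-9), 3), "V")
# ~ -0.21 V at 1 nm, shrinking to a few mV at 50 nm: only the smallest NPs are strongly affected.
```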
The major difference between eqn (8) and (14) is that eqn (8) gives the Fermi level of the electron on the metallic NP, whereas eqn (14) gives the Fermi level of the electron in solution for the redox couple M$^+$/M$_{\mathrm{NP}}$. This is not often made clear in the literature and is often a source of confusion. Of course, at equilibrium eqn (8) and (14) should be equal, thereby defining a relationship between the excess charge and r of the NP according to this simple electrostatic model. This is illustrated in Fig. 1B for the case of gold with chloride. For example, a 5 nm radius Au NP in equilibrium with an AuCl$_4^-$ solution with an activity of 1 has a positive excess charge of +32e. Eqn (14) is a simple form of a more general equation that also considers the charge of the metallic NP. This expression can be derived by utilizing the chemical potential of the charged NP presented by Lee et al. 33 The higher $E^{\mathrm{NP}}_{\mathrm{F}}$ of some metallic clusters allows seemingly strange reactions not possible with bulk materials, for example electron transfer from a noble metal to a non-noble metal. Indeed, Henglein reported such a reaction involving electron transfer from small Ag clusters to Cu$^{2+}$. 30 Experimental proof of the predicted exceptional reducing abilities of small metallic clusters was found indirectly by investigating their abilities to reduce organic molecules. [27][28][29] The difficulty of designing experiments that directly show $E^{\mathrm{NP}}_{\mathrm{F}}$ rising, and the NP stability decreasing, with decreasing r is reflected in several seemingly contradictory reports from electrochemical stripping voltammetry, 34-37 electrochemical scanning tunnelling microscopy (STM) and microscopy experiments. [38][39][40][41] However, these contradictory results are on reflection not overly surprising as, though thermodynamically the stability of a NP should decrease with decreasing r, other interfering mechanisms may come into play that unexpectedly stabilize the NP. Such unpredictable stabilizing mechanisms are particularly a problem at the extreme nanoscale, and this is exactly the size regime where the most dramatic changes in $E^{\mathrm{NP}}_{\mathrm{F}}$ and redox potentials are expected with decreasing r. One seemingly simple and direct experimental approach to prove that a metallic NP becomes less stable as r decreases is to attach it to a conductive electrode surface and carry out stripping voltammetry. The position of the peak potential, $E_{\mathrm{P}}$, for the stripping peak represents the oxidative dissolution of the metallic NP to ions and is a direct indication of the NP's stability. According to Plieth, 26 Henglein, 27,29,30 and eqn (8) and (14) above, one would logically expect to see $E_{\mathrm{P}}$ shift negatively (easier to oxidise) as r decreases (Fig. 2). Indeed, this approach has been utilized by the groups of Compton 34 and Zamborini. [35][36][37] Zamborini et al. observed the $E_{\mathrm{P}}$ of chemically synthesized Ag NPs attached to indium tin oxide (ITO) electrodes by amine linker molecules shifting negatively as a function of r for NPs < 35 nm in size. 35 The latter results are qualitatively in agreement with the thermodynamic predictions; however, little variation was observed in the 35-50 nm size range. 35 These authors also investigated the stripping of electrodeposited 4-250 nm Au NPs 36 and chemically synthesized and tethered Au NPs with sizes <4 nm (ref. 37) in the presence of halides at conductive ITO electrode surfaces (see Fig. 2). In both instances, $E_{\mathrm{P}}$ shifted negatively with decreasing r, once more in qualitative agreement with the thermodynamic expectations.
The dissolution of Au NPs is much more complex than that of Ag NPs, however, requiring considerable further study to elucidate the precise mechanism of Au oxidation and complexation. The dissolution of small clusters of atoms or NPs on conductive substrates has also been monitored by STM and microscopy. Sieradzki et al. observed the oxidative dissolution of <4 nm Pt NPs at potentials less than the bulk potential. 41,42 Furthermore, they made a distinction between the mechanisms of Pt dissolution for the NP and the bulk metal. Whereas bulk Pt dissolves from the oxide, the Pt NPs are dissolved by a direct electrochemical route involving the electro-oxidation of Pt NPs to Pt$^{2+}$ ions. 41,42 Similarly, Del Popolo et al. reported that small Pd NPs dissolve at more negative oxidation potentials relative to bulk Pd. 40 Meanwhile, Penner and co-workers electrodeposited Ag NPs on HOPG and by microscopy noted a thermodynamically unexpected enhanced stability of the NPs in comparison to the bulk Ag. 39 Currently, there is considerable debate in the literature to explain these contradictory results. One possibility is that the interaction of the electrode surface with small NPs has stabilizing effects, possibly due to mechanical alloying or quantum mechanical effects that render the bulk energy term inappropriate to describe the bonding. 38 Sieradzki et al. put forward several other possibilities, including the NPs compensating their increased energy by bonding more strongly with passivating agents in solution, such as oxygen, protons, or hydroxyl groups. 41,42 They also attribute the primary source of error in the thermodynamic predictions to the use of bulk surface and cohesive energies and the neglect of edge and vertex atoms in NPs. 41,42 Another approach by Miaozhi et al. demonstrated the effect of the oxide particle size on the equilibrium potential between bulk silver and Ag$_2$O NPs. 43 The reaction is the equilibrium between bulk silver and Ag$_2$O NPs (eqn (15)). By considering the chemical potential of the Ag$_2$O NPs, they obtain an equilibrium potential (eqn (16)) in which decreasing cluster size increases the redox potential. This approach allows the surface energy of the NPs to be determined. For Ag$_2$O, the surface energy can be estimated from the slope of the redox potential vs. 1/r. An experimental slope of 1.283 V nm gives a value of 0.381 mJ cm$^{-2}$. Electrochemical equilibria in solution It is important to realise that a metallic NP immersed in a solution will reach, albeit sometimes very slowly (in some cases this might take days or even years), an electrochemical equilibrium with the surrounding solution. If the redox potential in solution is dominated by a single redox couple, ox/red, in excess, then the Fermi level of the electrons in the metallic NP, $E^{\mathrm{NP}}_{\mathrm{F}}$, will change to become equal to the Fermi level of the electrons in solution, $E^{\mathrm{S}}_{\mathrm{F,ox/red}}$, for this redox couple. This change results in either an electrostatic charging of the metallic NP accompanied by an oxidation of the redox couple in solution or a discharging of the metallic NP accompanied by a reduction of the redox couple in solution. Both scenarios are illustrated in Scheme 3. In this case, it is assumed that the metallic NP itself is completely chemically inert in solution. Under standard conditions ($c_{\mathrm{ox}} = c_{\mathrm{red}}$), this equilibrium is given by $E^{\mathrm{NP}}_{\mathrm{F}} = E^{\mathrm{S}}_{\mathrm{F,ox/red}}$ (17). In other words, the charge on the metallic NPs will be imposed by the redox couple in solution to satisfy eqn (17).
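Since eqn (16) is not reproduced above, the following check simply assumes a shift of the form $\Delta E = 2\gamma V_{\mathrm{m}}/(nFr)$ with n = 2 electrons per Ag$_2$O unit and a molar volume computed from the bulk density of Ag$_2$O; with these assumptions the reported slope of 1.283 V nm indeed returns a surface energy close to the quoted 0.381 mJ cm$^{-2}$.

```python
# Sketch: back out the Ag2O surface energy from the reported slope of redox potential vs. 1/r.
# Assumes dE = 2*gamma*Vm/(n*F*r) with n = 2 and Vm from bulk density; both are assumptions.
F = 96485.0                     # C mol^-1
M_AG2O = 231.74e-3              # kg mol^-1
RHO_AG2O = 7140.0               # kg m^-3
V_M = M_AG2O / RHO_AG2O         # ~3.25e-5 m^3 mol^-1

slope = 1.283e-9                # V m (i.e. 1.283 V nm, as reported)
n = 2                           # electrons transferred per Ag2O unit

gamma = slope * n * F / (2 * V_M)        # J m^-2
print(gamma)                              # ~3.8 J m^-2
print(gamma * 1e3 / 1e4)                  # ~0.38 mJ cm^-2, close to the quoted value
```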
This has been proven experimentally by performing potentiometric titrations of NPs, demonstrating that NPs can behave as any normal redox couple (also see Fig. 3). 44 At this stage, it is necessary to recognize that the initial potential of $E^{\mathrm{NP}}_{\mathrm{F}}$ "at rest" immediately after synthesis is determined by the synthesis method chosen, and especially the stabilizing ligands employed, as well as the storage conditions, i.e., aerobic or anaerobic. Thus, one could in principle view NP synthesis as a dynamic process that initially leads to the formation of metallic nuclei, followed by their growth until Fermi level equilibration of all of the components in the synthesis media (metallic ions, metallic NPs, reductant, solvent, oxygen, etc.) takes place. As a result, NPs prepared with a relatively weak reductant such as citrate have a lower $E^{\mathrm{NP}}_{\mathrm{F}}$ than NPs prepared with a stronger reductant such as sodium borohydride (NaBH$_4$). 45 This residual charge remaining on the NP due to the reduction step is difficult to reproduce, and not always observed, 46 but in theory will always be present unless a reductant with an identical reduction potential to $E_{\mathrm{F}}$ of the uncharged polycrystalline metal is employed. 45 Thus, it is immediately apparent that Fermi level equilibration in metallic NPs is a crucial process that offers an alternative perspective not only on the behaviour and reactivity of metallic NPs, as discussed above, but also on their synthesis. Once synthesized, $E^{\mathrm{NP}}_{\mathrm{F}}$ of metallic NPs in solution is determined by their redox environment. Therefore, introduction of chemical reductants, such as NaBH$_4$ or ascorbic acid, 47 or oxidants, such as cerium(IV) sulfate (Ce(SO$_4$)$_2$), 46 to the NP solution will result in $E^{\mathrm{NP}}_{\mathrm{F}}$ raising or lowering, respectively, as depicted in Scheme 3. The latter is also true for biphasic systems, as demonstrated by Wuelfing et al., who used a biphasic approach to lower the Fermi levels of MPCs of Au suspended in dichloromethane by liquid-liquid interfacial electron transfer with an aqueous Ce(SO$_4$)$_2$ solution. 46 Fermi level equilibration of metallic NPs with polarized electrodes $E^{\mathrm{NP}}_{\mathrm{F}}$ may be raised or lowered by collision with a polarized electrode. Upon collision, the Fermi levels of the metallic NP and the electrode will equilibrate. As the Fermi level of the electrode is controlled by the voltage source, the Fermi level of the NP is shifted to reach the Fermi level of the electrode. Ung et al. demonstrated that $E^{\mathrm{NP}}_{\mathrm{F}}$ of polymer-stabilized Ag NPs equilibrates with a polarized gold-mesh bulk electrode in solution by spectro-electrochemically monitoring the optical properties of the colloidal solutions upon charge-discharge (discussed vide infra). 48 Due to the high ionic strength of the media, the NPs could theoretically (from DLVO theory) approach the electrode surfaces to within 1 or 2 nm, and Fermi level equilibration was proposed to occur via tunnelling of electrons in their hundreds and thousands across the double layers of the NP and electrode. 48 Much earlier, Miller et al. demonstrated that $E^{\mathrm{NP}}_{\mathrm{F}}$ of Pt NPs could equilibrate with a reductively polarised Hg-pool working electrode using a methylviologen redox shuttle as a mediator. 49 Pietron et al. 44 took advantage of the unique properties of MPCs of Au, namely the discrete quantized nature of their capacitive charging, to introduce a more quantitative approach to raising $E^{\mathrm{NP}}_{\mathrm{F}}$ for a solution of Au MPCs.
Stirred solutions of Au MPCs were subjected to classical electrolysis conditions at either oxidizing or reducing potentials, in a toluene-acetonitrile solvent. The Fermi levels of the Au MPC cores and the electrode equilibrated by the injection or removal of electrons from the Au MPC core and the simultaneous formation of an ionic space-charge layer around the Au MPC. The so-called quantized charging behaviour, which in electrochemical terms would be called an oxidation or reduction reaction, was monitored by differential pulse voltammetry and used as a means to estimate the average "stoichiometric oxidizing or reducing capacity", i.e., the oxidizing (hole) or reducing (electron) equivalents per mole, of the Au MPCs in solution at an arbitrarily set potential. The resulting Au MPC solutions were shown to be both remarkably stable, discharging at very slow rates, and capable of maintaining their new oxidative or reductive potentials even after isolation in a dried form and re-dissolution in a new solvent. Potentiometric titrations of charged Au MPCs with electron donor and acceptor molecules, such as ethylferrocene and tetracyanoquinodimethane (TCNQ), respectively, revealed the ability of these nanoscopic metallic NPs to act as non-molecular oxidizing or reducing agents, see Fig. 3. The classical behaviour of these potentiometric titrations, proceeding in a predictable and quantifiable way, highlighted the ability of the Au MPCs to act as quantifiable electron or hole carriers by Fermi level equilibration with a polarised electrode. 44 Chemisorption of nucleophiles or electrophiles to metallic NPs $E^{\mathrm{NP}}_{\mathrm{F}}$ may be raised or lowered by the chemisorption of nucleophiles or electrophiles, respectively, at the NP's coordinatively unsaturated surface atoms. 30,50 Such adsorbates can substantially influence the reactivity and optical properties (discussed vide infra) of the metallic NPs. The shift of $E^{\mathrm{NP}}_{\mathrm{F}}$ can be due to (i) the charge of the nucleophile, for example anions, as discussed above in relation to eqn (8), charging the NP with adsorbed negative charges and raising $E^{\mathrm{NP}}_{\mathrm{F}}$, but also (ii) the change of the surface potential of the NP, as shown in the case of neutral nucleophiles such as triphenylphosphine on silver. As proposed by Henglein, a surface atom carrying a nucleophile will acquire a slight positive charge ($\delta^+$) while excess electron density is transferred to the interior of the NP, which becomes slightly negative ($\delta^-$). 50 In this case, the nucleophile affects the local charge density of the surface, altering the surface potential and hence also the Fermi level. The effect of adsorbed species on the surface potential is well known from, for example, work function measurements in UHV. For all nucleophiles, the surface potential $\chi$ and hence the real potential $\alpha$ of the NP are altered, and the Fermi level changes according to the sign of the surface potential until equilibrium is reached (see Scheme 5). At this point, no further nucleophiles may be adsorbed and, in essence, this signifies that the nucleophile desorption/adsorption equilibrium is dictated by the position of $E^{\mathrm{NP}}_{\mathrm{F}}$. Consequently, by manipulation of $E^{\mathrm{NP}}_{\mathrm{F}}$ the nucleophile desorption/adsorption equilibrium can be drastically shifted in either direction. Full desorption of nucleophiles can be achieved by adding or subtracting excess electrons to the NP. This behaviour is similar to ion or molecule adsorption on polarized metal electrodes, where the surface coverage of the electrode can be adjusted by changing its potential (Fermi level).
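The "quantized" aspect of this charging can be made concrete with the concentric-sphere capacitance used earlier: for a small MPC the potential step per electron, e/C, amounts to a few hundred millivolts, so holding the electrolysis potential a set distance from the MPCs' rest potential injects a correspondingly small, countable number of electrons or holes per cluster. The core radius, monolayer thickness and permittivity below are illustrative assumptions, not values from the study described above.

```python
# Sketch: rough estimate of the average number of electrons (or holes) injected per Au MPC
# when the cell is held at a potential dE away from the MPCs' rest potential, z ~ C*dE/e.
import numpy as np

E = 1.602176634e-19
EPS0 = 8.8541878128e-12

def mpc_capacitance(r, d, eps_d):
    """Concentric-sphere capacitance of a monolayer-protected cluster (F)."""
    return 4 * np.pi * EPS0 * eps_d * r * (r + d) / d

C = mpc_capacitance(r=0.8e-9, d=0.8e-9, eps_d=3.0)   # ~5e-19 F, so e/C ~ 0.3 V per electron
for dE in (0.1, 0.3, 0.6):                            # electrolysis potential offsets in V
    print(dE, "V ->", round(C * dE / E, 2), "electrons per MPC on average")
```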
One approach is to introduce highly reducing free radicals to the solution, and the full desorption of anionic nucleophiles such as hydrogen sulfide (SH$^-$) and iodide (I$^-$) anions from the surface of Ag NPs was achieved in this manner. 51 Another approach is to raise $E^{\mathrm{NP}}_{\mathrm{F}}$ by chemisorption of a highly reducing nucleophile that prevents the initial adsorption or induces the full desorption of another competing nucleophile. Indeed, in such a manner, SH$^-$ is capable of fully desorbing I$^-$ from the surface of a Ag NP under conditions where the surface of the NP is unsaturated with SH$^-$ (i.e., sub-monolayer conditions). 51 The adsorption of nucleophiles, and the subsequently raised $E^{\mathrm{NP}}_{\mathrm{F}}$, renders metallic NPs drastically more susceptible to oxidation and therefore dissolution. The equilibrium may be shifted to favour further nucleophile adsorption by discharging the metallic NP with O$_2$ or weak electron acceptors that ordinarily would not react with the metallic NP in the absence of chemisorbed nucleophiles. For instance, Ag NPs may be oxidized and dissolved by weak electron acceptors such as nitrobenzene or MV$^{2+}$ that would ordinarily never attack them. 50 Nucleophilic CN$^-$ and SH$^-$ chemisorbed to Pd NPs raise $E^{\mathrm{NP}}_{\mathrm{F}}$ to such an extent that the Pd NPs begin to dissolve, forming Pd(CN)$_4^{2-}$ and PdS, respectively, in the absence of an oxidant by reducing water and producing H$_2$. 52 Additionally, as a metallic NP dissolves, r decreases and $E^{\mathrm{NP}}_{\mathrm{F}}$ increases (eqn (14)), thus further increasing the driving force for dissolution. Recently, Smirnov et al. introduced a facile method to encapsulate oil droplets in an unbroken film of Au NPs, creating "metal liquid-like droplets". 53 In this study, aqueous citrate-stabilized Au NPs were emulsified with an organic solution of 1,2-dichloroethane containing a lipophilic electron donor, tetrathiafulvalene (TTF, see Fig. 4). After emulsification and settling, the aqueous phase became devoid of any Au NPs, with a lustrous gold film now present at the liquid-liquid interface. The suggested mechanism involves TTF acting as a nucleophile injecting electrons into the Au NP, raising $E^{\mathrm{NP}}_{\mathrm{F}}$ until the system reaches Fermi-level equilibration and, thereby, significantly influencing the adsorption/desorption dynamics of citrate and TTF species. Specifically, at equilibrium, the more reduced Au NPs may induce the removal of anionic citrate ligands electrostatically, further facilitating the adsorption of TTF$^{\cdot+}$. The removal of the electrostatically stabilising citrate ligands ultimately causes the Au NPs to aggregate and form a dense film at the liquid-liquid interface. Such a mechanism was supported by the observations of Weitz et al., who noted that TCNQ, adsorbed as its radical anion, TCNQ$^{\cdot-}$, cannot displace citrate. 54 As an electron acceptor or electrophile, TCNQ lowers $E^{\mathrm{NP}}_{\mathrm{F}}$ during charge transfer to a less reducing (i.e. more positive) potential, thereby increasing the electrostatic attraction between citrate and the surface of the Au NPs. Metallic NPs landing experiments Over the past twenty years, a new area of research based on Fermi level equilibration between NPs and ultramicroelectrode (UME) surfaces, known as "nanoparticle impact" or "nanoparticle landing" electrochemistry, has developed at pace. 55
The premise of this field is that when NPs, travelling under thermal Brownian motion in an inert electrolyte solution, strike a polarized UME surface (e.g., a carbon fibre or a mercury-plated Pt microelectrode), the impacts are directly detected either electrochemically, as sharp current-time transients at a constant electrode potential, 55 or potentiometrically, via monitored changes of the open-circuit potential. 56 (Scheme 5: the Fermi level of a metallic NP is raised on chemisorption of neutral dipoles, modifying the surface potential and hence the Fermi level. The direction of change depends on the overall change in the surface potential; in this scheme the surface potential is lowered, making extraction of electrons easier.) The simplest experiments involve direct electrochemical measurements of the NPs by raising and lowering of $E^{\mathrm{NP}}_{\mathrm{F}}$ on impact. The majority of these studies involve oxidation (and subsequent dissolution, see Scheme 6A) of Ag, 57 Au, 58 Cu 59 or Ni 60 NPs to reveal a host of information such as the NP size distributions and conductivities, 57 identification of individual NPs in a mixture, 61 the concentrations of the NPs, 61 the NP residence time on the electrode sufficient to ensure complete oxidation, 62 etc. Bard et al. demonstrated that single NP collisions with an electrode could be observed via electrocatalytic amplification. 63 In effect, on contact with a polarized substrate, the NP acts as a single nanoscopic spherical electrode for redox reactions that are catalysed only on the NP and not on the underlying electrode. For example, by raising $E^{\mathrm{NP}}_{\mathrm{F}}$ of Pt NPs on impact with a carbon UME, the NPs were able to act as nanoelectrodes and catalyse proton reduction (see Scheme 6B). 63,64 Similarly, lowering $E^{\mathrm{NP}}_{\mathrm{F}}$ on impact allowed Pt NPs colliding with a Au UME to oxidize hydrazine 64 and IrO$_x$ NPs colliding with a Pt UME (pretreated with NaBH$_4$) to oxidize water. 65 Much work has focused on mechanism elucidation and on developing a clear appreciation of how the NP interaction with the electrode, in terms of NP residence time, 65 permanent NP adsorption 64 and NP deactivation, 66 or lack thereof, 67 after adsorption, significantly affects the observed potentiometric responses, see Fig. 5B, or current signals (permanent stepwise changes, see Fig. 5C, vs. transient spikes, see Fig. 5D). Finally, core-shell metallic NPs may be synthesized by NP collisions at a reductively polarized electrode in the presence of the ions of another metal. Using this approach, by raising $E^{\mathrm{NP}}_{\mathrm{F}}$ of Ag NPs on impact, Tl$^+$ was deposited to form Ag@Tl NPs, 68 while Cd$^{2+}$ was deposited to form Ag@Cd NPs. 69 Unwin and co-workers 70 recently introduced a novel method to carry out landing experiments using scanning electrochemical cell microscopy (SECCM, see Fig. 5E and F). Their approach involves miniaturising the "landing area" by using tiny electrochemical cell volumes and not tiny electrodes. (Scheme 6: Fermi level equilibration between polarized electrodes and metallic NPs that "land" or "impact" on the electrode surface. (A) Oxidative dissolution of NPs by lowering their Fermi levels on impact with a positively polarized electrode. (B) "Electrocatalytic amplification" of metallic NP impacts by raising $E^{\mathrm{NP}}_{\mathrm{F}}$ on impact, thereby electrocatalysing a reduction reaction that is kinetically limited on the electrode surface. (C) NP-enhanced tunnelling of electrons through the insulating layer, lowering $E^{\mathrm{NP}}_{\mathrm{F}}$ on impact and thereby electrocatalysing an oxidation reaction that does not happen without the NP.)
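For the oxidative-dissolution measurements just described, the integrated charge of a single current spike reports directly on the amount of metal dissolved, and hence on the NP size. The sketch below works through that back-of-envelope conversion for a one-electron Ag oxidation using bulk density and molar mass; it illustrates the principle rather than reproducing any specific study's analysis.

```python
# Sketch: relating the integrated charge of a single oxidative-dissolution impact spike to the
# radius of the dissolved NP via Faraday's law (illustrative, for a 1-electron Ag oxidation).
import numpy as np

F = 96485.0          # C mol^-1
RHO_AG = 10490.0     # kg m^-3
M_AG = 107.87e-3     # kg mol^-1

def charge_from_radius(r_m, z=1):
    """Charge (C) liberated by complete oxidation of a spherical Ag NP of radius r."""
    mol_ag = (4.0 / 3.0) * np.pi * r_m**3 * RHO_AG / M_AG
    return z * F * mol_ag

def radius_from_charge(q_c, z=1):
    """Invert the relation: NP radius (m) implied by an integrated spike charge q (C)."""
    mol_ag = q_c / (z * F)
    return ((3.0 * mol_ag * M_AG) / (4.0 * np.pi * RHO_AG)) ** (1.0 / 3.0)

print(charge_from_radius(10e-9))           # ~0.04 pC for a 10 nm radius Ag NP
print(radius_from_charge(0.04e-12) * 1e9)  # ~10 nm recovered from that charge
```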
In SECCM, landing experiments thus take place on a polarised macro-sized electrode in contact with a nanodroplet of solution, rather than the typical approach of immersing an UME in a large volume of solution. In a proof-of-concept paper, SECCM was used to show the O$_2$ reduction reaction (ORR) and the hydrogen evolution reaction (HER) at the surface of impacting Au NPs by modulating $E^{\mathrm{NP}}_{\mathrm{F}}$ on changing the potential at the HOPG surface. 70 Uniquely, this approach allows landing experiments on substrates that are not amenable to UME manufacture, such as HOPG and even a carbon-coated TEM grid. Strikingly, the ability to electrochemically interrogate individual NPs on a TEM grid means that unambiguous correlations of a single NP's physical attributes and electrochemical properties, e.g. catalytic activity, are possible (see Fig. 5G and H). 70 Another interesting property of metallic NPs is described in Scheme 6C. NPs can significantly enhance tunnelling of electrons through an insulating layer (either a self-assembled monolayer or a solid layer). 71,72 Tunnelling to the metallic NP is much more probable than tunnelling to molecules in solution, as NPs have a significantly higher density of states compared to dilute molecular redox species in solution. 73 Tunnelling currents decay exponentially with distance d as $i \propto A\exp(-\beta d)$, where the parameter $\beta$ depends only on the insulating layer. However, a higher density of states significantly enhances the pre-exponential factor, 74 allowing tunnelling to metallic NPs when tunnelling to molecules is no longer possible. This phenomenon has been utilized to fabricate robust and stable tunnelling nanoelectrodes, where a single metallic NP (acting as the electrode) is captured in a collision with an insulating layer on an UME. 73 Experimental approaches to quantifying changes in Fermi levels after charge and discharge events Quantitative comparison of the relative apparent Fermi levels of differently functionalized floating or supported NPs may be achieved by titrating the stored electrons in charged metallic NPs and also in NP-semiconductor (SC) nanocomposites with electron acceptor molecules (A) such as C$_{60}$ (ref. 75) and dyes (e.g., methylene blue 76,77 or thionine 76,78). Such an approach was used by Grätzel 79 and Nozik 80 in the 1980s to make a connection between the energy levels of electrolyte redox molecules and those of SCs, and has since found prominence with the group of Kamat to determine the electrons stored in SCs and their nanocomposites with both metallic NPs 78 and carbon supports. 76 The key to these measurements is that the standard reduction potential of A, $E^{\circ}_{\mathrm{A/A^-}}$, in the solvent containing the nanocomposites is known and used as a reference to calibrate the apparent Fermi level vs. SHE. The Fermi level of the nanocomposite establishes a Nernstian equilibrium with the A/A$^-$ redox couple; [A$^-$] and [A] are determined by UV/vis spectroscopy, and the Nernst equation is applied to determine the apparent Fermi level, $E^*_{\mathrm{F}}$. The yield of A$^-$ will increase as $E^{\circ}_{\mathrm{A/A^-}}$ shifts anodically to more positive potentials. Thus, this technique provides information on the relative differences in apparent Fermi levels between two different nanocomposites by titrating identical quantities of both with the same redox probe. A relatively lower yield of A$^-$ for a particular nanocomposite indicates that it has a more positive Fermi level and provides less driving force for electron transfer.
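A minimal sketch of the Nernstian readout just described: given the probe's standard reduction potential and the optically measured [A]/[A$^-$] ratio, the apparent Fermi level follows directly. The E° and concentration values below are placeholders for illustration only.

```python
# Sketch: apparent Fermi level of a charged nanocomposite from the measured [A-]/[A] ratio of
# an electron-acceptor probe, via the Nernst equation (E0 and the concentrations are placeholders).
import numpy as np

R, T, F = 8.314, 298.15, 96485.0

def apparent_fermi_level_vs_she(E0_acceptor_V, conc_A, conc_Aminus):
    """E_F* (V vs. SHE) assuming equilibrium of A + e- <=> A- with the nanocomposite."""
    return E0_acceptor_V + (R * T / F) * np.log(conc_A / conc_Aminus)

# Example: a probe with E0 = -0.25 V vs. SHE found 80% reduced implies E_F* of about -0.29 V.
print(apparent_fermi_level_vs_she(-0.25, conc_A=0.2, conc_Aminus=0.8))
```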
A number of alternative techniques exist that allow quantitative monitoring of changes in the Fermi level of supported metallic NPs due to charge injection and discharge. For example, an estimation of the number of electrons stored on either charged carbon (C) or SC supports is possible by discharging their stored electrons to reduce metallic ions, such as Ag$^+$, in solution to form Ag/C, 81 Ag/SC 77 or even Ag/C/SC 82 nanocomposites. Although such an approach can quantify the number of electrons stored on the nanocomposite, and provides a useful method to prepare metallic NP nanocomposites, it does not provide information on the apparent Fermi level of the materials as it is unreferenced (unlike potentiometry of Au MPCs, which is referenced to a reference electrode in the electrochemical cell, or the titration method for large metallic NP/SC nanocomposites, described above, which is referenced to $E^{\circ}$). In the specific case of ZnO, the injection and storage of electrons into the conduction band may be monitored by spectroscopy, as charge storage causes the absorption band edge to shift to lower wavelengths and, simultaneously, the green emission arising from oxygen vacancies to disappear. 78,82,83 Thus, for ohmic materials such as Pt NPs deposited on ZnO an almost complete recovery of the emission is seen, as the Pt NPs efficiently discharge electrons to the solvent, whereas with Au NPs only 60% of the emission is recovered, as a portion of the electrons remains on the ZnO due to the capacitive storage of electrons on Au and their poor discharge abilities. 78 Pioneering work by Henglein and Mulvaney highlighted that changes in the Fermi level of metallic NPs due to charging or discharging events directly influence their localized surface plasmons. 45,48,85 A technique based on this principle, surface plasmon spectroscopy (SPS), monitors changes in the Fermi levels of metallic NPs that exhibit well-defined surface plasmon modes (Fig. 6). The relationship between the surface plasmon resonance (SPR) wavelength maximum ($\lambda$) and the relative electron concentrations (n) before and after electron injection or discharge is given by the Drude-type scaling $\lambda \propto n^{-1/2}$, i.e. $\lambda_{\mathrm{after}}/\lambda_{\mathrm{before}} = (n_{\mathrm{before}}/n_{\mathrm{after}})^{1/2}$. 84 Thus, charging of metallic NPs leads to a blue-shift of the SPR, while discharging leads to a red-shift (Fig. 6A and B). The magnitude of the shift in SPR is highly dependent on the aspect ratio of the metallic nanorods (Fig. 6C and D). Greater changes in wavelength ($\Delta\lambda$) are observed for the same change in electron concentration ($\Delta n$) for metallic nanorods vs. nanospheres and for high-aspect vs. low-aspect nanorods. 47 Mulvaney et al. have furthermore defined the relationship that links the SPR to the nanorod's shape and the shift of the Fermi level with n as $\lambda^2 = \lambda_p^2\left(\varepsilon_{\infty} + \frac{1-L}{L}\,\varepsilon_{\mathrm{m}}\right)$, 47 where $\lambda_p$ is the bulk plasma resonance, $\varepsilon_{\infty}$ is the high-frequency contribution from interband transitions, $\varepsilon_{\mathrm{m}}$ is the dielectric function of the medium and L is the shape-dependent depolarization factor. The versatility of SPS lies in its ability to probe any event that leads to a change in a metallic NP's Fermi level, such as chemisorption, 45,47,51,87 core-shell formation by underpotential deposition of metals, 88,89 Fermi level equilibration between metallic NPs and SCs in NP/SC 77,83 or core-shell NP@SC 90 nanocomposites, electron transfer from solution-phase reductants and oxidants 47 or interaction with polarized electrodes, 48,85 etc. The current state-of-the-art in SPS seeks to use the technique to study redox reactions at single metallic NPs. 16,91
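A small sketch of that scaling: treating the SPR peak as $\lambda \propto n^{-1/2}$, a 1% change in the free-electron density of a nanorod with its resonance at 700 nm shifts the peak by roughly 3-4 nm. The peak position and percentage change are illustrative numbers, not values from the studies cited above.

```python
# Sketch: Drude-type estimate of the SPR shift on charging, using lambda ~ n^(-1/2) so that
# lambda_after = lambda_before * sqrt(n_before / n_after); numbers are illustrative only.
import numpy as np

def spr_after_charging(lambda_before_nm, relative_change_in_n):
    """SPR peak after changing the free-electron density by the given fractional amount."""
    n_ratio = 1.0 / (1.0 + relative_change_in_n)        # n_before / n_after
    return lambda_before_nm * np.sqrt(n_ratio)

# Injecting 1% extra electrons into a nanorod with its SPR at 700 nm blue-shifts the peak:
print(spr_after_charging(700.0, +0.01))   # ~696.5 nm
# Removing 1% of the electrons (discharging) red-shifts it by a similar amount:
print(spr_after_charging(700.0, -0.01))   # ~703.5 nm
```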
The transition from studying ensembles of NPs to single NPs will allow the effects of NP size, shape and interaction with the substrate on the rates of catalysis to be precisely determined instead of being obscured by incorporation into ensemble averages. 16,91 As SPS is a non-invasive optical technique that is influenced by the redox environment within which the metallic NP finds itself, much future work is envisioned where SPS will indirectly relate the redox conditions inside biological tissues and cells optically. Perspectives Metallic NPs are increasingly becoming part of everyday consumer products, ranging from cosmetics to clothes to medical and electrical devices. Thus, it is inevitable that our exposure to these nanomaterials will increase, as they appear in air, water, soil and organisms due to the ramp-up in metallic NP production to meet consumer demand. 92,93 The attractive features of Ag NPs that make them commercially sought-after, in particular their antimicrobial effects, on the flip-side cause them to be greatly detrimental to many mammalian organs. 94 Thus, it is now an imperative grand challenge for the nanotechnology community to (i) thoroughly investigate the nanotoxicity of metallic NPs to humans, animal and plant life under various environmental conditions 95 and (ii) develop sensitive and selective analytical methods to detect and determine the environmental fates of metallic NPs. 96 The in vitro and in vivo nanotoxicity of metallic NPs is influenced by a host of factors such as their sizes, shapes, redox properties, surface chemistry, chemical stability or propensity to dissolve under certain environmental conditions. 92,93,99 The latter point is key: the toxicity of a pristine metallic NP will not be the same as a metallic NP that interacts with and is influenced by its environment. 100 As shown in Scheme 5, the chemisorption of nucleophiles or electrophiles in solution, such as inorganic ligands, can dramatically increase the likelihood of a metallic NP dissolving to release ions. A body of research is emerging relating the toxicity mechanisms for different metallic NPs with the release of toxic ions. 101 Indeed, metallic NPs that were unable to release toxic ions (for example by surrounding them in a stable coating) were much less toxic. 101 Even if a metallic NP is inert in solution (so it does not dissolve), its Fermi level in solution may shift considerably after equilibration with redox species. This means that the redox properties at the surface of the metallic NP may change from benign to toxic by, for example, facilitating the production of reactive oxygen species and dramatically increasing the amounts of free radicals produced when the metallic NP is charged. Similarly, as the size of a metallic NP decreases, its Fermi level and associated redox properties vary. Thus, unsurprisingly, in certain instances smaller NPs were generally shown to be more toxic than larger metallic NPs. 99 Accordingly, the theoretical framework introduced in this perspective to comprehend the charging and discharging of metallic NPs will be of considerable use to explain the nanotoxicity mechanisms of metallic NPs. Particularly promising avenues of research allowing the analytical determination of metallic NPs are NP landing or impact studies, either at polarised UME surfaces or using SECCM.
These techniques, entirely based on Fermi-level equilibration of NPs with polarised electrode surfaces and redox species in solution, are expected to lead to the development of highly automated, cost-effective, and rapid screening microelectrochemical devices for much-needed point-of-care environmental monitoring systems. A key step in this process will be the development of lab-on-a-chip microfluidic devices capable of quantifying NP impacts. Recent steps in this direction have been made by the groups of Pumera and Crooks. Pumera and co-workers detected Ag NPs in a microfluidic lab-on-a-chip device by electrochemically oxidising (i.e., dissolving) the Ag NPs on impact with an embedded electrode. 96 Ag NPs of 10 and 40 nm in size were detected, although not as individual NPs but as groups of NPs undergoing simultaneous oxidation. One potential perspective of this work is to multiplex it with micellar electrokinetic chromatography to allow the separation and detection of various sizes of Ag NPs. 102 Crooks and co-workers developed two microfluidic devices with either Hg or Au electrodes, see Fig. 7A, to carry out electrocatalytic amplification studies under flowing conditions, monitoring the collision dynamics of Pt NPs with N$_2$H$_4$ as the sacrificial redox molecule. 97 These devices demonstrated several advantages over conventional electrochemical cells, including lower limits of detection, higher collision frequencies and more stable electrochemical responses (flat baselines and uniform current transients) over long periods of time. 97 A possible perspective suggested by Crooks and co-workers is to further enhance the principal advantage of these devices, the high collision frequencies observed, by using NPs with magnetic core/catalytic shell structures and applying magnetic fields to enhance mass transfer. 103,104 This field is as yet in its infancy, with fundamental issues, such as a reported irregular distribution of metallic NPs within the flow profile, 97 still to be overcome, but it holds huge promise, in particular if multiplexed with efficient metallic NP separation techniques. The power of SECCM, as developed by Unwin's group, lies firstly in its ability to determine the catalytic activity of a single NP within an ensemble of NPs. 105 Within an ensemble of NPs, large variations in morphology and catalytic activity are possible, thus obscuring the discrimination of the truly catalytic NPs from those less so. The nanoscale resolution of SECCM, in conjunction with its ease of multiplexing with TEM for example, will give unprecedented access to the structure-activity relationship of a single NP without interference from the "ensemble". The second advantage of SECCM lies in its flexibility regarding the choice of substrate that the metallic NPs impact onto, due to its inherently different mode of operation compared to the UME approach. SECCM forms a nanoscale electrochemical cell by contacting a nanodroplet at the tip of a nanopipette with a macroelectrode. 106 This means that it is not constrained by the need to fabricate an UME. Thus, interesting substrates that lack an amenable technique to form an UME, such as transition metal dichalcogenides, 107 may now be assessed by SECCM. This will lead to a whole host of studies to determine the influence of the underlying "inert" substrate on the catalytic activity of either an impacting or an electrodeposited NP. Such effects are real, as is evident in the difference in electrocatalytic activity of Au NPs supported on carbon and on titania in the CO oxidation reaction. 108
These fundamental studies are key to the development of a complete understanding of advanced nanocomposite materials that may potentially impact the field of electrocatalysis, and in turn fuel cell and solar cell technologies, amongst others. As noted earlier, surface plasmon spectroscopy (SPS) can monitor changes in the Fermi levels of metallic NPs that exhibit well-defined surface plasmon modes. This principle may be utilized to develop a new class of colorimetric sensors for the sensing of species of interest in solution. For example, in a study by Jiang and Yu, 87 inorganic anions were detected in the presence of Ag nanoplates by monitoring the shifts in the SPR absorption peak (and therefore colour) after Fermi level equilibration between the Ag nanoplates and chemisorbed inorganic anions (see Fig. 7B). Although in that study individual inorganic anions could be distinguished from others in a mixture or from inorganic cations, much future work is needed to improve the selectivity of such sensors. The current state-of-the-art in SPS seeks to use the technique to study electrochemical processes, such as redox reactions or electrodeposition, at single metallic NPs. Mulvaney and co-workers have used dark-field microscopy (DFM) to study the scattered light from single Au nanorods (see Fig. 7C for details of the experimental setup). 16 DFM permitted the modulation of the optical properties of single Au nanorods to be observed after electrochemical charge injection via an ITO electrode. 16 Using this approach, the kinetics of the charging and discharging of single Au nanorods were directly observed during a redox reaction involving the oxidation of ascorbic acid on the surface of the Au nanorods. 91 These results constituted the first direct measurement of the rates of redox catalysis on single metallic nanocrystals. 91 Recently, Mulvaney and co-workers monitored the electrodeposition of metallic Ag onto Au nanostars adsorbed to ITO electrodes by DFM and SEM, and accurately modelled their observations with COMSOL simulations (Fig. 7D and E). 98 Clearly, as discussed regarding SECCM, the transition from studying ensembles of NPs to single NPs will allow the effects of NP size, shape and interaction with the substrate on the rates of catalysis to be precisely determined instead of being obscured by incorporation into ensemble averages. This makes SPS a powerful new tool for researchers at the forefront of electrocatalyst development. For further information on the optical characterisation of single plasmonic NPs using alternative techniques to SPS, the reader is referred to recent reports from the groups of Link 109 and Tao, 110 and in particular to an informative tutorial review on the topic. 111 As SPS is a non-invasive optical technique that is influenced by the redox environment within which the metallic NP finds itself, much future work is envisioned where SPS will indirectly relate the redox conditions inside biological tissues and cells optically in a reversible manner over a wide potential range. Regulation of the intracellular redox potential is critically important to cell function. 112 The disruption of intracellular redox potential may be implicated in the initiation and progression of several disease states including cancer, cardiovascular, neurodegenerative, and autoimmune diseases. 112 However, despite the importance of the intracellular redox potential as a potential diagnostic marker, its study is severely limited by the lack of suitable measurement techniques.
A current state-of-the-art technique involves using an optical approach to infer the intracellular redox potential, 113 demonstrating in principle the viability of SPS in a cellular environment. Briefly, Au nanoshells modified with molecules whose surface-enhanced Raman spectroscopy (SERS) spectra change depending on oxidation state may be controllably delivered to the cytoplasm without any toxic effects. 113 Then, by accurately measuring the proportions of oxidized and reduced species optically with SERS, the intracellular redox potential can be calculated using the Nernst equation. 113 Similarly, the shifts in the SPR of metallic NPs, nanorods, nanostars, nanoplates, etc. may potentially be monitored optically in real time, perhaps by DFM as discussed above, allowing measurement of the changes in the redox environment in the cell. SPS studies also represent the first steps in the development of a new class of "plasmonic electrochromic" devices, i.e., devices capable of dynamic colour changes by electrochemical manipulation of the plasmon absorbance of metallic NPs under an applied potential. 84,114 Interest in the development of plasmonic electrochromic devices has surged in recent years by replacing metallic NPs with transparent conductive oxide (TCO) nanocrystals. 84,114 TCO nanocrystals may be charged and discharged by Fermi level equilibration with an electrode, such as transparent ITO. However, TCO nanocrystals have much lower carrier concentrations than metallic NPs and, as a result, for the same value of $\Delta n$ dramatic increases in $\Delta\lambda$ are observed. Thus, once more, a field whose history is steeped in the fundamental background of understanding the nanoscale charging and discharging of metallic NPs has taken an unexpected and highly productive turn by applying that theory to new non-metallic nanomaterials. These TCO nanocrystal-based plasmonic electrochromic devices hold huge promise for the development of highly advanced electrochromic windows capable of independent regulation of the visible light and solar heat transmitted into a building, by controllably shifting the plasmon absorption from the visible to the near-infrared (NIR) regions of the solar spectrum under an applied potential. 84,114 A major perspective in this field is the development of new TCO nanocrystals with plasmon absorption peak wavelengths closer to 1250 nm and their integration into devices of low manufacturing cost and long cycle life, capable of both high optical contrast and fast switching times. The recently demonstrated technique to form gold metal liquid-like droplets by Fermi level equilibration and subsequent adsorption of tetrathiafulvalene with citrate-stabilized Au NPs 53 is expected to lead to applications in optics (as filters and mirrors), 115,116 biomedical research (size-selective membranes for dialysis, or drug-delivery capsules), model systems to probe the collapse and folding of biological membranes and cellular structures, sensors (SERS at fluid interfaces), catalysis (nanoscale bioreactors) and perhaps an alternative gold recovery method in the mining industry. Fermi level equilibration will provide another approach to rationalize the observed phenomena and help to further develop these systems.
A measure of local uniqueness to identify linchpins in a social network with node attributes Network centrality measures assign importance to influential or key nodes in a network based on the topological structure of the underlying adjacency matrix. In this work, we define the importance of a node in a network as being dependent on whether it is the only one of its kind among its neighbors' ties. We introduce linchpin score, a measure of local uniqueness used to identify important nodes by assessing both network structure and a node attribute. We explore linchpin score by attribute type and examine relationships between linchpin score and other established network centrality measures (degree, betweenness, closeness, and eigenvector centrality). To assess the utility of this measure in a real-world application, we measured the linchpin score of physicians in patient-sharing networks to identify and characterize important physicians based on being locally unique for their specialty. We hypothesized that linchpin score would identify indispensable physicians who would not be easily replaced by another physician of their specialty type if they were to be removed from the network. We explored differences in rural and urban physicians by linchpin score compared with other network centrality measures in patient-sharing networks representing the 306 hospital referral regions in the United States. We show that linchpin score is uniquely able to make the distinction that rural specialists, but not rural general practitioners, are indispensable for rural patient care. Linchpin score reveals a novel aspect of network importance that can provide important insight into the vulnerability of health care provider networks. More broadly, applications of linchpin score may be relevant for the analysis of social networks where interdisciplinary collaboration is important. Introduction Each network centrality measure was developed to infer distinct theorized aspects of importance or influence based on the topological characteristics of the adjacency matrix of the network in which a node is embedded. More recently, advancements in network centrality have included community-aware centrality and multi-component centrality measures. Community-aware centrality measures identify nodes that are essential to connect two or more communities of the network (Tulu et al. 2018). Extensions from this work have defined influence based on the extent to which a node is a hub within its community and a bridge across communities (Ghalmane et al. 2019a, b). Redefining local and global influence in networks with overlapping communities, new representations of centrality measures have been developed that are specifically designed to identify influential nodes in overlapping modular networks (Ghalmane et al. 2019a, b). These structural centrality measures remain agnostic to node attributes. Node attributes are used to describe node characteristics and can be continuous or discrete. Widely used network measures that do consider node attributes include assortativity and homophily, which are network-level measures of the correlation (assortativity) or tendency (homophily) of nodes to be connected to similar others (Newman 2003; McPherson et al. 2001). Many studies have observed assortativity or homophily in social networks by characteristics such as happiness, smoking and drinking behavior, and race (Bollen et al. 2011; Bliss et al. 2012; Cheadle et al. 2013; Smith et al. 2014; Mollica et al. 2003).
Community detection is another example of an established network algorithm that has evolved to consider node attributes (Zanghi et al. 2010; Newman and Clauset 2016; Jia et al. 2017). Network communities can be identified by combining structural and attribute information such that communities consist of nodes that are not only more densely connected than nodes outside of the community, but also share similar attributes (Jia et al. 2017). This collection of work provides strong evidence that the attributes of individuals in a network relate to, and even influence, network structure. This raises the question of how attributes can be leveraged to identify strategically important nodes in a way that is distinct from the centrality measures that rely purely on the underlying adjacency matrix. In line with this question, previous work has decomposed centrality measures according to categorical attribute data (Everett and Borgatti 2012; Krackhardt and Stern 1988). The present work introduces a new node-level measure that combines the topological data from the adjacency matrix with accompanying external attribute data. Herein, we propose linchpin score, which describes the tendency of a node to be the only one of its kind among its neighbors' ties. The term linchpin was chosen because it refers to nodes that are indispensable within their two-hop neighborhoods. We consider a node to be more indispensable, or a linchpin, if more of its neighbors have no other existing ties to other similar nodes. This term thus defines the importance of a node in a network as being dependent on whether or not its neighbors are directly connected to others that are similar to the focal node. Motivation Traditional health services research methods evaluate the quality of individual physicians as a function of other physician-level attributes. However, physicians are embedded within professional networks, and their individual outcomes and ability to deliver high-quality care may be impacted by their own position in their peer network or the characteristics and outcomes of their peers. Patient-sharing networks offer a quantitative, scalable approach for indirectly measuring relationships between physicians based on shared patients observed in administrative data. Prior work has shown that patient-sharing relationships correspond with self-reported referral and advice-seeking relationships between physicians (Barnett et al. 2011). Patient-sharing network characteristics have been associated with care utilization, care quality, and patient outcomes (Pollack et al. 2013; Bachand et al. 2018; Tannenbaum et al. 2018; Moen et al. 2016; Zipkin et al. 2021; Barnett et al. 2012). Increased patient-sharing within physician group practices has been shown to correspond with patient-reported care coordination and timeliness of care. However, while increased patient-sharing within teams of physicians is hypothesized to reflect care coordination, the absence of ties to other physicians may suggest barriers in access to specialist referrals or other important resources (Hollingsworth et al. 2015). Health care delivery systems depend on the availability of personnel and infrastructure to deliver high-quality care. The shortage of medical professionals in rural areas is a significant national concern. Access to health care is typically measured according to the supply per capita of, or the distance to, one type of provider or service (Levit et al. 2020).
The National Rural Health Association reports 13.1 physicians per 10,000 people in rural areas compared with 31.2 physicians per 10,000 people in urban areas. The number of specialists per capita is even more skewed, with 30 specialists per 100,000 people in rural areas compared with 263 specialists per 100,000 people in urban areas. Given the importance of multidisciplinary care coordination in the delivery of high-quality care for many complex and chronic conditions, there is a strong premise for using networks to understand access to care in a way that recognizes the importance of professional relationships. In creating this measure, we also take inspiration from the concept of network vulnerability to selective node removals (Chen and Hero 2013). One of the most conventional network vulnerability measures is the susceptibility of the size of the largest connected component to the removal of nodes. In this sense, a node would be considered more vital to the network if the largest connected component was more disrupted (e.g., broken into smaller, disconnected components) upon its removal. Many studies assessing network vulnerability focus on infrastructure networks (Grubesic and Murray 2006; Corley and Chang 1974). Yet social networks can also be vulnerable to disruption upon an individual node's removal. Here, we hypothesize that the neighborhood of a physician would be more vulnerable to the physician's removal if their neighbors have no existing connections to other physicians of the same specialty. While this measure has implications for a broader range of network studies, an application of linchpin score in health services research would be to identify networks or sub-networks that are more vulnerable to removal of physicians of a specific specialty. The rest of the paper is structured as follows. We next propose and formally define linchpin score. Then, we measure linchpin score in a physician network using specialty as the node attribute of interest. We calculate linchpin score for the physicians in the network, summarize linchpin score by specialty, and compare the observed linchpin scores to those measured in random networks. We then evaluate whether linchpin score is associated with degree, betweenness, closeness, and eigenvector centrality. Finally, we examine linchpin score within 306 patient-sharing physician networks, representing the 306 hospital referral regions in the United States. We test the extent to which linchpin score and other centrality measures are associated with physician rurality within hospital referral regions and across specialty types. Methods A network consists of a set of nodes V and a set of edges E between them. An edge e_{ij} connects node v_i with node v_j. Let c_i denote the type of node v_i for attribute c, where i = 1, ..., N is the index of the nodes in the network. While the applications in this work focus on categorical node attributes, linchpin score can be extended to continuous node attributes by setting a threshold to bin the continuous variable into discrete categories. The linchpin score for node v_i, denoted by l_i, is the number of neighbors of node v_i with no other ties to any other node equal to node v_i for attribute c, divided by n_i, the degree of node v_i. The neighbors of node v_i are not allowed to have the same attribute value as node v_i to contribute to l_i. Let the event that nodes i and j have the same value of attribute c be denoted by the binary variable a^c_{ij}. That is, a^c_{ij} = 1 if c_i = c_j and a^c_{ij} = 0 otherwise. 
The number of neighbors v_k of node v_j, other than node i itself, for which c_k = c_i determines a second binary variable: let b^c_{ij} = 1 if node v_j has at least one such neighbor and b^c_{ij} = 0 otherwise. The definition of linchpin score is then expressed as l_i = (1 / n_i) Σ_{j : e_{ij} ∈ E} (1 − a^c_{ij})(1 − b^c_{ij}). The linchpin score of node i is seen to be the weighted degree of node i, where the weight for each neighbor j is given by (1 − a^c_{ij})(1 − b^c_{ij}), divided by the degree of node i. The first term comprising the weight, 1 − a^c_{ij}, indicates whether nodes i and j have different values of attribute c, while the second term, 1 − b^c_{ij}, indicates whether none of the other neighbors of node j (besides node i) have the same value of attribute c as node i. The linchpin score ranges from 0 to 1, with 0 indicating all of the neighbors of node v_i are connected to at least one node that is equal to node v_i for attribute c (Fig. 1A), and 1 indicating that none of the neighbors of node v_i are equal to node v_i for attribute c nor are connected to another node (besides node i) that is equal to node v_i for attribute c (Fig. 1B). Figure 1C illustrates the linchpin score for a node v_i that has two out of four connections also tied to another node that is equal to node v_i for attribute c. In Fig. 1D, we consider the circumstance where node v_i is directly connected to another node of the same value for attribute c. In this case, we do not count the neighbor with the same attribute value as node v_i in the calculation of l_i, but that neighbor would still contribute to n_i. We take this approach because it would be reasonable to expect that the focal node's direct ties could relatively easily form a new tie with the neighbor that has the same attribute value as the focal node, if the focal node were to be removed from the network. The R code to calculate linchpin score on any network dataset that contains node attributes is available at https://github.com/mnemesure/linchpin_centrality/blob/master/linchpin_network_fx2.R. Example network datasets For the patient-sharing network analysis, we linked four publicly available data sources. The first data source is the Physician Shared Patient Patterns Data from 2015 released by the Centers for Medicare and Medicaid Services (CMS) (Physician Shared Patient Patterns 2015). The Physician Shared Patient Patterns Data lists health care physicians who participate in the delivery of health services to the same Medicare beneficiary within specific time intervals (30 days, 60 days, 90 days, and 180 days). It reports the number of patients each physician dyad shared within the specified time interval. We used the Physician Shared Patient data to create undirected patient-sharing networks for which ties between physicians indicate shared patients within 30 days in 2015. The second data source was the November 2015 Physician Compare National Downloadable File released by CMS and archived by the National Bureau of Economic Research (Physician Compare 2015). The dataset contains general information about individual eligible health care professionals including specialty, practice affiliation, and practice ZIP code. For the purposes of this study, we used this dataset to obtain specialty and practice ZIP code. The third data source links ZIP codes to hospital referral regions as defined and made available by the Dartmouth Atlas (Dartmouth Atlas Supplemental Data 2015). Hospital referral regions represent regional health care markets for tertiary medical care, and there are 306 geographically contiguous hospital referral regions in the US. 
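To make the definition above concrete, the following is a minimal sketch of the linchpin score computation in Python, assuming a networkx graph with a categorical node attribute; the function name, attribute key, and graph layout are illustrative assumptions, not the authors' released R code.

```python
import networkx as nx

def linchpin_score(G, node, attr="specialty"):
    """Fraction of `node`'s neighbors that have no other tie to a node sharing
    `node`'s attribute value; same-attribute neighbors count toward the degree
    but cannot contribute to the numerator."""
    c_i = G.nodes[node][attr]
    neighbors = list(G.neighbors(node))
    n_i = len(neighbors)
    if n_i == 0:
        return 0.0
    count = 0
    for j in neighbors:
        if G.nodes[j][attr] == c_i:
            continue  # the Fig. 1D case: same-attribute neighbors do not contribute
        # b^c_ij: does neighbor j have any other tie to a node with attribute c_i?
        has_other = any(k != node and G.nodes[k][attr] == c_i
                        for k in G.neighbors(j))
        if not has_other:
            count += 1
    return count / n_i
```

For example, linchpin_score(G, v, "specialty") returns 0 when every neighbor of v has another tie to a node with v's specialty, and 1 when none of v's neighbors does.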
We assigned each physician to a hospital referral region based on their practice ZIP code. This further allowed us to parse the national network into sub-networks that represent the patient-sharing patterns within regional health care delivery markets. We examined the Providence, RI hospital referral region as our example network due to its relatively small size and our familiarity with the area. The fourth dataset includes the 2010 Rural-Urban Commuting Area (RUCA) codes for each ZIP code released by the United States Department of Agriculture Economic Research Service updated on August 17, 2020 (Rural-Urban Commuting Area Codes 2020). Rural physicians were identified as those who practice in a rural ZIP code (RUCA codes 4.0-10.6) based on the practice ZIP codes listed in the Physician Compare National Downloadable File. Physicians who practice in multiple locations that included both rural and urban ZIP codes were categorized as urban. Linchpin scores of physicians in a patient-sharing network Networks of physicians are frequently assembled based on administrative data of patient encounters: two physicians are connected if they have encounters with common patients. In this example, we evaluate linchpin score of physicians using physician specialty as the node attribute of interest. If a physician is the only one of their specialty among their neighbors' ties, it is reasonable to expect that the physician is indispensable for the proper coordination and delivery of health care to the patients cared for by that set of physicians. The Providence, RI physician network includes 1,749 physicians and has a density of 0.017 and a global transitivity of 0.296 (Fig. 2). The mean linchpin score by specialty type varies substantially (Table 1). Intensive care, obstetrics-gynecology, and endocrinology are the specialties with the highest mean linchpin score (0.88, 0.43, and 0.40, respectively), whereas radiologists and cardiologists have the lowest mean linchpin scores. We evaluated the correlation between the number of physicians in the specialty category and mean linchpin score using Kendall's Tau, a non-parametric correlation coefficient. Examining the mean linchpin score by specialty type reveals an inverse association with the number of physicians who have that specialty (Kendall's τ = − 0.6, p < 0.001). In other words, specialties that are rarer tend to have higher linchpin score. By comparing linchpin score of physicians within the same specialty, one would identify physicians who are more indispensable to ensure that other members of the network have access to that specialty for referrals. Those physicians with greater linchpin score would be less easily replaced by another physician of their same specialty based on existing ties if they were to leave the network. Specialties with the highest variance in linchpin score among physicians of that specialty type are obstetrics-gynecology, infectious disease, and endocrinology (Table 1). Next, we compared the linchpin scores of specialties in the observed physician network to a network in which specialty is distributed at random. We generated ten permuted networks that were identical to the observed network in structure and in the number of nodes with each specialty, but with specialty assigned at random. We then calculated the mean and standard deviation of linchpin scores of physicians in each specialty and present the means across the 10 permuted networks. 
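A minimal sketch of the specialty-permutation comparison just described, reusing the linchpin_score function sketched earlier; the number of permutations and the attribute name are taken from the text, everything else is an illustrative assumption.

```python
import random
from collections import defaultdict
from statistics import mean, stdev

def permuted_linchpin_by_group(G, attr="specialty", n_perm=10, seed=0):
    """Shuffle the node attribute while holding the network structure fixed,
    then summarize linchpin scores by attribute value across permutations."""
    rng = random.Random(seed)
    nodes = list(G.nodes)
    labels = [G.nodes[v][attr] for v in nodes]
    by_group = defaultdict(list)
    for _ in range(n_perm):
        shuffled = labels[:]
        rng.shuffle(shuffled)  # same counts per specialty, random placement
        H = G.copy()
        for v, lab in zip(nodes, shuffled):
            H.nodes[v][attr] = lab
        for v in nodes:
            by_group[H.nodes[v][attr]].append(linchpin_score(H, v, attr))
    # Mean and standard deviation of linchpin score per specialty across permutations.
    return {g: (mean(s), stdev(s)) for g, s in by_group.items()}
```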
The negative association between mean linchpin score and number of physicians who have that specialty was even stronger in the random networks (Kendall's rank correlation τ = − 0.9, p < 0.001) than what we found for the observed network. We also found that 12 of the 18 specialty groups have lower mean linchpin score in the observed network compared with the random networks (Table 1). (Table 1. Summary statistics of linchpin score by specialty type. The number of nodes (N) represents the number of physicians with that specialty in the Providence, RI physician network. The linchpin mean and SD for the observed physician network are presented alongside the linchpin mean and SD for 10 permuted networks where the network structure was constant but specialty was randomly assigned. SD, standard deviation.) Taken together, these results suggest that the patient-sharing patterns may have formed in ways that make the networks less vulnerable, or less dependent on linchpin physicians. Correlations between linchpin score and centrality measures To examine whether a physician's linchpin score was associated with node centrality measures, we present a correlation matrix for linchpin, degree, betweenness, closeness, and eigenvector centrality for the Providence, RI physician network (Fig. 3). Previous work has demonstrated that network centrality measures tend to be correlated (Valente et al. 2008; Rajeh et al. 2020). We find that linchpin score is modestly correlated with betweenness centrality (Kendall's τ = 0.25, p < 0.001). In general, linchpin score seems to be identifying a distinct set of important nodes in each network that are not captured by the other centrality measures and vice versa. Consistent with previous work, we observed moderate to high correlations between the node centrality measures. Closeness centrality and eigenvector centrality were most highly correlated (Kendall's τ = 0.82, p < 0.001). Linchpin score and physician rurality The motivation for developing linchpin score was to identify locally unique physicians who would not be easily replaced by another physician of the same specialty through existing ties if they were to leave the network. Linchpin score is most relevant for attributes that are difficult to change. For example, a physician cannot easily change specialties. Networks characterized by high linchpin score for a specialty of interest could be considered more vulnerable to the removal of physicians with that specialty. (Fig. 3. Heatmap of the correlation matrix of linchpin, degree, betweenness, closeness, and eigenvector centrality for the physician network. Correlation was measured using Kendall's Tau non-parametric correlation coefficient.) We aimed to test this with a study of physicians practicing in rural and urban areas in the United States. As a consequence of the uneven distribution of specialists across rural and urban areas, we expect that physician networks caring for predominantly rural patients may differ in both the organization of ties and the types of specialty groups present compared with physician networks caring for predominantly urban patients. We first examined associations between physician rurality and the node-level network measures. Then, we examined each specialty separately to determine whether the node-level measures were able to distinguish differences in network importance among rural and urban physicians by specialty type. 
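A minimal sketch of the centrality comparison described above, using networkx centrality functions and scipy's Kendall's tau; the `linchpin` argument is assumed to be a node-to-score mapping such as the one produced by the earlier sketch.

```python
import networkx as nx
from scipy.stats import kendalltau

def centrality_correlations(G, linchpin):
    """Pairwise Kendall's tau between linchpin score and standard centralities."""
    measures = {
        "linchpin": linchpin,
        "degree": nx.degree_centrality(G),
        "betweenness": nx.betweenness_centrality(G),
        "closeness": nx.closeness_centrality(G),
        "eigenvector": nx.eigenvector_centrality(G, max_iter=1000),
    }
    nodes = list(G.nodes)
    corr = {}
    for a, xa in measures.items():
        for b, xb in measures.items():
            tau, _ = kendalltau([xa[v] for v in nodes], [xb[v] for v in nodes])
            corr[(a, b)] = tau
    return corr
```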
We consider rural areas as being more vulnerable to a specialist leaving, and we hypothesized a priori that rural specialists would have higher linchpin scores compared with urban specialists. We further hypothesized that this association may not be observed for general practitioners, who are more prominent in the care of rural patients. We calculated the linchpin score, degree, betweenness, closeness, and eigenvector centrality for all physicians within all 306 hospital referral region networks. All network measures were standardized to have a mean of 0 and a standard deviation of 1 to better compare model estimates between network measures across hospital referral regions of different sizes. Eigenvector centrality was excluded from the models due to issues of high collinearity. We first examined associations between physician rurality and network measures within hospital referral regions. Physician rurality was represented as a binary variable assigned based on the rurality of the practice ZIP code, as defined in the methods. We excluded regions where fewer than 3% of physicians practiced in a rural ZIP code (n = 147), as some hospital referral regions are entirely urban. For each of the 159 hospital referral regions remaining, we estimated a separate multivariable logistic regression predicting physician rurality. Physician linchpin score, degree, betweenness, and closeness centrality were the independent variables of interest, and we included physician specialty as a covariate. Based on the model results, we calculated the number of hospital referral regions for which each network measure was a significant predictor of rurality (corresponding to a p-value less than 0.01) and the number of hospital referral regions for which each network measure was the strongest predictor of rurality (corresponding to the highest z value). Our results, shown in Table 2, demonstrate that closeness centrality is more likely to be associated with physician rurality compared with the other network measures. This suggests that the differences in physician connectedness between rural and urban physicians are best detected using a centrality measure based on average distances to other nodes. This may reflect the regionalization of health care, often embodied by a regional "hub" with spokes extended to adjacent, more rural settings. To learn more about the characteristics of hospital referral regions where closeness centrality or linchpin score was predictive of physician rurality, we further characterized these hospital referral regions using network measures such as network size (e.g., number of physicians), network density, and network transitivity. Network density and transitivity have previously been shown to impact the relationships among node-level measures, such as the correlation between centrality and hierarchy measures (Rajeh et al. 2020). We also evaluated the proportion of physicians within each hospital referral region who practiced in a rural setting. With bivariate analyses, we found that hospital referral regions where closeness centrality was a significant predictor of rurality were characterized by networks of larger size (p < 0.001), lower density (p < 0.001), and lower transitivity (p < 0.001). Hospital referral regions where linchpin score was a significant predictor of rurality were also characterized by networks of larger size (p = 0.02), lower density (p < 0.001), and lower transitivity (p = 0.01). 
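A minimal sketch of the per-region model described above, assuming one pandas DataFrame per hospital referral region with illustrative column names; statsmodels' Logit stands in here for whatever software the authors actually used.

```python
import pandas as pd
import statsmodels.api as sm

def rurality_model(df):
    """Logistic regression of physician rurality on standardized network measures
    within one hospital referral region. `df` is assumed to have one row per
    physician with columns: rural (0/1), specialty, linchpin, degree,
    betweenness, closeness."""
    measures = ["linchpin", "degree", "betweenness", "closeness"]
    X = df[measures].apply(lambda col: (col - col.mean()) / col.std())  # z-scores
    # Specialty enters as a covariate via dummy coding.
    X = pd.concat([X, pd.get_dummies(df["specialty"], drop_first=True)], axis=1)
    X = sm.add_constant(X.astype(float))
    fit = sm.Logit(df["rural"].astype(int), X).fit(disp=False)
    # z values (tvalues) and p-values for the four measures of interest.
    return fit.tvalues[measures], fit.pvalues[measures]
```

Counting, across regions, how often each measure is significant (p < 0.01) or has the largest z value corresponds to the comparison summarized in Table 2.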
The performance of closeness centrality and linchpin score was not associated with the proportion of rural physicians. Next, we evaluated associations between network characteristics and rurality within each of the 18 specialty groups. For each specialty group, using data from all 306 hospital referral region networks, we developed separate mixed effect logistic regressions predicting rurality of physicians with linchpin score, degree, betweenness, and closeness centrality as independent variables. We included network size, network density, and network transitivity as covariates, and included a random effect for hospital referral region. In Fig. 4, we show the adjusted associations between physician rurality and each of the network measures. To facilitate comparisons in the association between centrality and rurality across specialty types, we grouped the results by network measure. We observe significantly greater linchpin scores for rural physicians across almost all specialty groups, highlighting the importance of individual specialists in delivering services specific to that specialty among their direct ties. The only specialty groups that exhibit lower linchpin score in rural areas are general practitioners and, to a lesser extent, surgeons. These results are consistent with our hypothesis and provide additional evidence that general practitioners are more prominent in managing care for rural patients. They are more likely to be either directly connected or indirectly connected to each other through referrals to other specialists in their local networks, resulting in lower linchpin score. The patterns across all specialty groups for the other centrality measures varied. Notably, linchpin score is the only measure that distinguishes rural specialists from rural general practitioners. Closeness centrality was consistently lower across all specialty types, including general practitioners, in rural areas compared with urban areas. Degree centrality of rural physicians compared with urban physicians tended to be either greater or not significantly different. Betweenness centrality did not show a strong association with the rurality of physicians across most specialties. Altogether, these results demonstrate that incorporating specialty in defining physician network characteristics adds important contextual information to understanding which physicians are important. Closeness centrality was lower among all rural physician specialty groups compared with their urban counterparts, indicating that while closeness centrality was a strong predictor of rurality, it was not able to pick up differences between types of physicians in terms of which were locally unique for rural patients, which is an important aspect of rural health care access and quality. Linchpin score, on the other hand, did not tend to be the strongest predictor of rurality, but it was able to distinguish the different roles in the network played by specialists and general practitioners. Fig. 4 Mixed effect models predicting physician rurality with linchpin (A), closeness (B), degree (C), and betweenness (D) for each specialty. Odds Ratios (ORs) and 95% confidence intervals (CIs) are shown. The gray vertical line indicates OR = 1. ORs > 1 indicate that rural physicians had higher values for a given network measure. The scales across the forest plots are not equal.
5,993.4
2021-07-01T00:00:00.000
[ "Computer Science", "Sociology" ]
Evolutionary relationships of courtship songs in the parasitic wasp genus, Cotesia (Hymenoptera: Braconidae) Acoustic signals play an important role in premating isolation based on sexual selection within many taxa. Many male parasitic wasps produce characteristic courtship songs used by females in mate selection. In Cotesia (Hymenoptera: Braconidae: Microgastrinae), courtship songs are generated by wing fanning with repetitive pulses in stereotypical patterns. Our objectives were to sample the diversity of courtship songs within Cotesia and to identify the underlying patterns of differentiation. We compared songs among 12 of ca. 80 Cotesia species in North America, including ten species that have not been recorded previously. For Cotesia congregata, we compared songs of wasps originating from six different host-foodplant sources, two of which are considered incipient species. Songs of emergent males from wild caterpillar hosts in five different families were recorded, and pattern, frequency, and duration of song elements analyzed. Principal component analysis converted the seven elements characterized into four uncorrelated components used in a hierarchical cluster analysis that grouped species by similarity of song structure. Species songs varied significantly in duration of repeating pulse and buzz elements and/or in fundamental frequency. Cluster analysis resolved similar species groups in agreement with the most recent molecular phylogeny for Cotesia spp., indicating the potential for using courtship songs as a predictor of genetic relatedness. Courtship song analysis may aid in identifying closely related cryptic species that overlap spatially, and provide insight into the evolution of this highly diverse and agriculturally important taxon. Introduction Acoustic signals are used by diverse groups of insects for species recognition, fitness displays, and courtship elicitation. Songs used during insect courtship are generally stereotypical within a species and likely play a role in reproductive isolation. Moreover, courtship songs may be a useful identifying character, especially among cryptic or closely related species [1]. For example, songs of Drosophila species groups are species-specific and have been studied for evolutionary patterns [2][3][4][5]. Furthermore, courtship song analyses have been used in conjunction with other characters to distinguish closely related species. In Cotesia congregata, songs have been compared among multiple host-foodplant complex sources. Wasps from two of these host-foodplant complex sources, Manduca sexta on tobacco ("MsT") and Ceratomia catalpae on catalpa ("CcC"), have diverged genetically and are likely incipient species [47]. These wasps display a lower male response rate to the female pheromones of the reciprocal source, slight differences in duration and frequency of some song elements, and typically produce sterile hybrid females resulting from CcC♂xMsT♀ crosses [15]. In this study we describe the courtship songs of ten additional species of Cotesia, and use clustering to explore the relationships and patterns among songs. Further, we identify song differences among select host-associated populations and incipient species of C. congregata. Parasitic wasp collection Cotesia spp. were primarily collected from wild caterpillar hosts at multiple sites in the United States; some C. nr. phobetri came from an ongoing laboratory colony (Table 1). Caterpillars known to be hosts of Cotesia were targeted for collection, particularly hosts of Cotesia species that have published gene sequences. 
When possible, wasps from different sites were collected for wider population sampling. In most cases, each wasp species came from a single host species. In contrast, C. congregata were collected from six different sphingid host species feeding on different plant families ( Table 1). All permissions were obtained as necessary for field collections on both public and private land from property owners and managers. None of the species involved are listed as endangered or protected. Caterpillars, usually collected before parasitization status was known, were reared on their host plant in plastic containers under ambient laboratory conditions until parasitoid egression or pupation. Individual unattached Cotesia cocoons were placed in clear gel capsules (size 00) 2-4 days after egression. Cotesia species forming a connected cocoon mass were chilled upon adult emergence and placed in individual capsules or vials. Adults were sexed under a dissecting microscope. Wasp songs were recorded within 24 hours of emergence. Voucher samples of each species were both point pinned and stored in 95% EtOH at -20˚C. The song of one species included in analysis (Cotesia marginiventris) was obtained from a USDA-ARS sound library (https://www.ars.usda.gov/ARSUserFiles/3559/soundlibrary.html) and originally described by Sivinski & Webb [24]. Audio recordings Males in capsules were randomly selected from each brood for recording. Individuals were placed in an open paper arena with a drop of honey as a food source to encourage them to stay in the arena. Courtship songs were induced by exposing individual males to an immobilized female of the same species. Songs were recorded using a miniature omnidirectional microphone (model 4060, DPA, Longmont, CO; 20-20,000 Hz) positioned 5-7 mm above the male and a high resolution digital audio recorder (model 702, Sound Devices, Reedsburg, WI; 48 kHz sampling rate, 24 bit resolution) in a sound isolation booth (Industrial Acoustics, Bronx, NY) at 23 ± 1.5˚C and 40-55% RH. Generally, one recording per brood was analyzed; however, recordings of different individuals were analyzed for species with fewer than four collected broods. Additional individuals of C. congregata were recorded to test for relatively small differences among host-foodplant sources. Additional C. nr. phobetri were recorded from each brood because they could not initially be identified to a known species. Duration of song elements and fundamental frequency were quantified using Raven Pro v1.3 [48]. Waveforms were high-pass filtered at 100 Hz to reduce background noise. Songs were divided into multiple elements based on acoustic characteristics shared across species (Fig 1). "Pauses" occur between other song elements when wings are held motionless above the body and do not generate sound. "Pulses" are high amplitude elements comprising the greatest range of wing movement, referred to as "boings" in C. congregata [20]. "Terminal buzzes" immediately follow pulses and consist of continuous lower-amplitude sounds. "Pre- pulse buzzes" are steady lower amplitude sounds that precede a pulse with no pause in between. For "pulse-buzz units" we measured the duration of the pulse and buzz together. "Interpulse interval" is the time from the start of one high-amplitude pulse to the start of the next. Pulses, buzzes, and pauses make up the pulse-buzz units and interpulse interval, which were included because they make discrete units that may be important in species recognition. 
Song amplitude was not compared because distance from the microphone varied slightly among some species. Quantification of song elements started with the second complete pulse-buzz cycle and continued for six complete pulse-buzz cycles. Spectrograms of the entire song were produced using a short-time Fourier transform (Hann window, size = 2,000 samples, 50% overlap). Frequency spectra for song sections were calculated using fast Fourier transforms (Hann window, size = 1,000 samples, 50% overlap). Frequency of the first harmonic (fundamental frequency) was used in all comparisons. Comparisons and analysis Several statistical procedures were used to determine differences in courtship songs, consolidate elements, and group songs by structure. The considerable differences among some songs present a challenge for direct comparisons using standard statistical tests. For example, some elements were drastically reduced or absent in some species. For statistical tests, the repeating song elements were averaged so that each individual wasp was treated as an N of 1. Where appropriate in similar species, duration and fundamental frequency were compared using analysis of variance (ANOVA) followed by Tukey's post-hoc test. Song element durations were log transformed to meet the assumptions of ANOVA. Pulse and terminal buzz frequency were compared between adjacent elements of each song for each species and all species together using linear regression. Principal components analysis (PCA) for each individual wasp was used to condense dimensionality of the data into principal components (PCs) based on cumulative explained variance. Hierarchical cluster analysis (Ward method) of mean principal components of each species was used to group wasps. The resulting dendrogram was compared to the most recent molecular phylogeny of Cotesia [44] to determine if the same groups were resolved (see Table 2 for genes used and GenBank accession numbers of species included in this study). Subsequent PCAs were performed on six different host-foodplant complex sources of C. congregata, on the MsT and CcC incipient species, and the two geographically isolated sources of C. nr. phobetri. Sources that displayed separation in the PCAs were followed by a Welch's unequal variance t-test to compare duration and frequency of song elements. The feasibility of matching songs back to their species or population based on song elements and PCs was tested using linear discriminant analysis. All statistical and multivariate analyses were performed with JMP v11 (SAS Institute, Cary, NC). Description of wasp songs Eleven species of Cotesia were collected at different sites in the United States (Table 1); one additional species was from a published recording (C. marginiventris). Some target host species collected did not yield any Cotesia. Multiple broods were collected of each species except for the uncommon C. teleae, which came from a single Antheraea polyphemus larva that produced only one living male concurrently with females. Cotesia congregata were collected from six different sphingid host species. Cotesia nr. phobetri (currently undescribed) were supplied from a laboratory colony originating from the host Grammia incorrupta (formerly G. geneura), which feeds on forbs in grassland habitat and was found in Redington Pass, Pima County, AZ [49]. 
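As a minimal sketch of the signal-processing settings reported above (high-pass filtering at 100 Hz, Hann-windowed STFT with 2,000-sample segments and 50% overlap), assuming a WAV recording and scipy; the peak-picking step for the fundamental frequency is an illustrative simplification, not the authors' Raven Pro workflow.

```python
import numpy as np
from scipy import signal
from scipy.io import wavfile

def song_spectrum(path):
    """Spectrogram and a rough fundamental-frequency estimate for one recording."""
    fs, x = wavfile.read(path)              # e.g., recordings sampled at 48 kHz
    x = x.astype(float)
    # High-pass filter at 100 Hz to reduce background noise.
    b, a = signal.butter(4, 100, btype="highpass", fs=fs)
    x = signal.filtfilt(b, a, x)
    # Short-time Fourier transform: Hann window, 2,000 samples, 50% overlap.
    freqs, times, Sxx = signal.spectrogram(x, fs=fs, window="hann",
                                           nperseg=2000, noverlap=1000)
    # Fundamental frequency taken here as the spectral peak below 1 kHz
    # (species fundamentals reported in this study fall between ~176 and 328 Hz).
    mean_spec = Sxx.mean(axis=1)
    low = freqs < 1000
    f0 = freqs[low][np.argmax(mean_spec[low])]
    return freqs, times, Sxx, f0
```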
Initially unknown specimens found as cocoon clusters independent of hosts in a recently mowed horse pasture in Gloucester County, VA, were identified to be either C. nr. phobetri or a closely related sister species based on morphology, cocoon structure, similarity of habitat, and song structure (Discussion, 8th paragraph). Unparasitized caterpillars of Grammia virgo were found in an adjacent field, which suggests this species is the probable host. The other Cotesia species in our study were each collected from a single host species. All twelve species of Cotesia generated songs by wing fanning that consisted of repeating high-amplitude pulses and, in most species, lower amplitude buzzes (Fig 2). Pulses were accompanied by abdominal movements. All males continued to produce songs until copulation or the female moved away; therefore, number of pulses was not used as a factor. Songs varied in duration of pulse, buzz, and pause elements (Table 3). Some songs were different enough from the others as to not require statistical comparisons of individual elements, e.g., C. flaviconchae produces a song with unique elements. (Table 2. Available genes and GenBank accession numbers of Cotesia species used in this study. Not all species have been sequenced using all four genes, and therefore are not included in the phylogeny. Columns: Species, NADH1, mt16S rDNA, n28S rDNA, LW rhodopsin.) Moreover, songs with considerably longer element durations had greater variance than songs with relatively short element durations (e.g., compare pause duration between C. teleae and C. congregata in Table 3), resulting in violation of the assumptions for ANOVA. Although an ANOVA could not be performed with all 12 species, an ANOVA was performed with four species that had song elements of similar durations and frequencies. Among all species, fundamental frequency ranged from 176 Hz in C. phobetri to 328 Hz in C. euchaetis (Table 4). All songs produced detectable harmonics up to 4-5 kHz for pulses and 1-2 kHz for buzzes (Fig 2). Analysis of individual elements was useful for comparing similar or sister species but of limited use for comparisons across all species. Courtship songs were divided into groups using a combination of element duration and frequency, and general patterns rather than focusing on individual song elements. The most common courtship song structure included pause-pulse-buzz elements repeating ca 2-3 times a second. A subset of four species with similar pause-pulse-buzz patterns was used to discern more nuanced differences. The songs differed in either duration of interpulse intervals and component elements (ANOVA: F(3,37) = 8.03, p = 0.0003; Fig 3A) or fundamental frequency (ANOVA: F(3,37) = 46.08, p < 0.0001; Fig 3B). In C. congregata, the courtship songs consisted of an initial buzz followed by repeating pulse-buzz elements with a short (20-26 ms) pause, followed by a pulse ("boing") that decays into a buzz [20]. Courtship songs of C. phobetri and C. euchaetis had similar patterns, although terminal buzzes after pulses were longer than in C. congregata. Cotesia nr. phobetri songs were similar to those of C. euchaetis but with shorter terminal buzzes. Cotesia marginiventris songs had pulse-buzzes that varied more in duration, producing a warble sound [21,24]. The song of C. marginiventris did not contain the discrete high-amplitude pulses present in C. congregata, C. phobetri, and C. euchaetis, but otherwise followed a similar pattern. 
Cotesia glomerata was similar but lacked discrete pauses between pulse-buzz elements and had a sudden, <50 ms, power spike at the start of each pulse that was not observed in other species (Fig 2). Three species produced courtship songs that consisted of rapid repeating pulses without long terminal buzzes. Cotesia empretiae produced a pulse train of ~2 seconds that consisted of short repeating pulses with different durations. Cotesia diacrisiae produced rapid short pulses at a rate of four per second. Cotesia orobenae also produced rapid repeating pulses that were shorter in duration and had longer pauses compared to C. diacrisiae. These three species could be readily distinguished by waveform patterns (Fig 2). Three species produced songs with long pauses between pulses. Cotesia rubecula produced songs with pulses similar in duration and pacing to those of C. phobetri and C. congregata but lacked a terminal buzz. Cotesia teleae produced a pulse-buzz element with pauses that were about five times longer than those of the C. congregata group. The C. flaviconchae courtship song was substantially different from other Cotesia songs. It was the only species that produced songs with a buzz before the high-amplitude pulse ("pre-pulse buzz") lasting 486 ± 32 ms at 286 ± 2 Hz. The buzz-pulse repeated every 4-9 seconds, while all other songs repeated in less than a second. Analysis of all species Songs from all 12 Cotesia species were grouped using PCA and cluster analysis. Frequency of adjacent pulse and terminal buzz elements could not be accurately calculated in all sections of songs produced by some species due to short durations of one of these elements (e.g., C. empretiae); however, adjacent pulses and buzzes were correlated in song sections containing sufficiently long durations of both elements (r^2 = 0.65, d.f. = 770, p < 0.0001; Pulse = −31.0 + 1.1 × TBuzz; Fig 4). Therefore, frequency was consolidated into one term by using the "pulse-buzz unit" frequency in the PCA. The PCA using the seven song elements resulted in four PCs explaining 91.7% of total variance. PC1 was best represented by duration of the pulse-buzz unit, interpulse interval, and pre-pulse buzz, PC2 by frequency, PC3 almost entirely by pause duration, and PC4 by frequency, pause duration, and pulse duration (Table 5). The factor scores of the first four PCs differed significantly among some, but not all, species (ANOVA). For example, C. flaviconchae differs significantly from all other species by PC1 and PC2, whereas species with highly similar song patterns such as C. congregata and C. phobetri may only differ by one PC (in this case PC4) but not the others. Differences in one or more PCs among species indicated groups with most species forming close clusters with some overlap (Fig 5). Species separated from the main cluster and containing relatively long duration elements had greater variance in PCs (e.g., C. flaviconchae and C. rubecula). Hierarchical cluster analysis of species using the first four PC mean factor scores resolved four main groups (Fig 6). Group 1 consists of wasps with short rapid pulses, group 2 with pulses and terminal buzzes, group 3 with long pauses between pulses, and group 4 of only C. flaviconchae with a thus far unique song pattern. Groupings from the cluster analysis generally reflect genetic groups [44], with the exception of C. rubecula, which lacks the terminal buzz found in all other related species in the "rubecula" group (Fig 7). Differentiation of C. 
congregata host-foodplant complex sources Wasps from the different C. congregata host-foodplant complex sources could not be distinguished by courtship songs alone. The PCA using the six song elements present in C. congregata resulted in three PCs explaining 88.8% of total variance and four PCs explaining 99.9% of total variance. PC1 was most represented by pulse-buzz unit duration, interpulse interval, and terminal buzz duration, PC2 by pause duration and frequency, PC3 by pulse duration, and PC4 by frequency (Table 6). A PCA of the two geographic sources of MsT and the one source of CcC host-foodplant complexes produced a similar component matrix (Table 7). High overlap of every PC prevents discrimination of host-plant complex sources (Fig 8) and geographically separated populations (Fig 9), even if means of some elements and the first three PCs differ significantly between some groups (ANOVA: p < 0.001). Differentiation of C. nr. phobetri by location Wasps from the two sources of C. nr. phobetri could be clearly distinguished by their courtship songs. The PCA using the six song elements present in C. nr. phobetri resulted in three PCs explaining 94.5% of total variance and the fourth PC explaining the remaining variance. PC1 was most represented by duration of pulse and buzz components, PC2 by pause duration and frequency, PC3 by pause duration, and PC4 by pulse duration (Table 8). The C. nr. phobetri populations from Virginia and Arizona can be reliably distinguished by PC1 (pulse and buzz durations) but not the other PCs (pause duration and frequency) (Fig 10). Linear discriminant analysis sorts wasps by population with 100% accuracy. Mean durations of the pulse and buzz components were longer in songs of wasps originating in Arizona than Virginia (unequal variance t-test, p < 0.001), with mean pulse-buzz unit duration 0.13 s longer (t(13) = −8.6, p < 0.0001). Overall song pattern and structure remained the same in wasps from both populations and were more similar to each other than to the other wasp species analyzed (Figs 4 and 5). Discussion Courtship songs of the twelve species of Cotesia presented in this study were variable yet distinguishable. Songs were characterized quantitatively by dividing songs into elements that were shared across most species. Principal components analysis was used to reduce dimensionality of correlated song elements. Species grouped using hierarchical cluster analysis corresponded to groups in the genetic phylogeny [44], with one exception. Some species that have not been sequenced can be provisionally placed into these pre-identified groups based on courtship song characteristics. Likewise, the general pattern of courtship songs may be predicted for species that have been placed within a phylogeny but were not found and recorded in this study. However, the relationships among the groups do not correspond strictly to the genetic phylogeny and the large differences among groups make their placement difficult. Differentiation within C. congregata by host-foodplant complex was not reliable; in contrast, C. nr. phobetri could be identified by source. Courtship song analysis has potential use for the systematics of this genus. Songs were generally unique to each species and could be distinguished by waveform and frequency characteristics. All songs consisted of repeating pulse and/or buzz elements generated from wing fanning, which is common among parasitic wasps. The song pattern was stereotypical for each species (Fig 2 and supplemental audio). 
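A minimal sketch of the PCA-plus-Ward-clustering workflow described in the Comparisons and analysis section above, assuming a matrix with one row per wasp and one column per song element; scikit-learn and scipy stand in for the JMP analyses actually used.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage

def song_groups(element_matrix, species_labels, n_components=4):
    """PCA of per-wasp song elements, then Ward clustering of species means."""
    X = StandardScaler().fit_transform(element_matrix)
    pcs = PCA(n_components=n_components).fit_transform(X)
    species = sorted(set(species_labels))
    labels = np.asarray(species_labels)
    # Mean factor scores per species are the input to the hierarchical clustering.
    means = np.vstack([pcs[labels == s].mean(axis=0) for s in species])
    Z = linkage(means, method="ward")
    return species, Z
```

Passing Z and the species list to scipy.cluster.hierarchy.dendrogram would then draw a species dendrogram analogous to the one compared against the molecular phylogeny.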
Notably, songs of all species had overlapping ranges in the duration and frequency of some elements (Fig 5). Songs of some species with relatively longer elements (e.g., the pause element produced by C. rubecula) were more variable. The species representing the "rubecula" group produced the most similar song patterns (short pause, pulse, terminal buzz) as indicated by tight clustering in the PCA (Fig 5). Songs of some species in this group differed only in the duration or frequency of one or a few elements (Fig 3). Although we attempted to capture the primary acoustic variation among species, some unique characteristics were not integrated into the PCA, e.g., the pre-pulse power spike produced by C. glomerata and the warble sound produced by C. marginiventris, which can be used to easily distinguish these species from others with otherwise similar song patterns. We do not rule out that some as yet unrecorded species may produce songs so similar to a sister species as to not be distinguishable, as was the case among the host-foodplant complexes of C. congregata. Moreover, playback experiments are necessary to determine whether wasps can distinguish between songs of closely related species. Importantly, some clustering of species was expected given the morphological and genetic similarity of many members of Cotesia. Phylogenetics and taxonomy of the Microgastrinae are active areas of study [50]. Most work has been at the subfamily or genus level [51][52][53][54], on closely related species clusters or cryptic species [47,[55][56][57], or by determining the evolutionary relationship with symbiotic viruses [58][59][60][61]. Considering the diversity of Cotesia and the difficulty of producing high-resolution phylogenies, not all of the same genes or all common species have yet been sequenced. Therefore, some species in this study cannot be placed reliably in current phylogenies (e.g., C. phobetri, C. teleae). Eight of the recorded species are included in the most recent genetic phylogeny for Cotesia reported by Michel-Salzat and Whitfield [44]. This phylogeny contains four identified groups, three of which are represented in our current study. Courtship songs of two species in the fourth group ("melanoscela") have been described in other studies [22,43]. In our study, species phylogenetically grouped together had similar courtship songs with the exception of C. rubecula. Several species not placed on the genetic phylogeny can be putatively placed within a group based on song characteristics (e.g., C. phobetri). The most basal genetic group containing C. empretiae and C. diacrisiae had songs that consisted of rapid repeated pulses, although the placement of these species has low nodal support. Most songs of derived groups consisted of pulses and longer terminal buzzes. The apparent phylogenetic signal of courtship songs allows for predictions of song structure before recording. For example, most species in the "rubecula" group have a pause-pulse-buzz pattern of similar duration (Fig 7). Species such as C. schizurae and C. electrae likely have songs similar in pattern to C. congregata and not more different than the more distantly related C. euchaetis or C. marginiventris. In both the genetic phylogeny and acoustic dendrogram, C. glomerata was placed as a sister group to most of the "rubecula" group, with its main distinction being the pre-pulse power spike. In the "kariyai" group, C. 
cyaniridis is predicted to have a song with a buzz that leads directly into a pulse with a long interpulse duration. The song may differ in details of timing and frequency, but otherwise should sound similar to the closely related C. flaviconchae (Fig 7). These predictions are supported by the recent recording of two male C. schizurae (host Schizura unicornis; 37.7549, -77.3458) from the same brood. This species, closely related to C. congregata, was targeted for collection but not initially found. The courtship song of C. schizurae (S13 Audio) is similar enough in structure and duration of pulses, buzzes, and pauses to that of C. congregata to be firmly placed as a closely related species. Although courtship songs can be used to construct species groups, predicting evolutionary relationships among these groups is more challenging due to the lack of data for intermediate species. Among other insect groups, phylogenetic signals have been reported for the diverse songs of psyllids, but divergence may occur more rapidly among sympatric species [62]. Predicting the courtship songs of other acoustically active insect groups is challenging due to rapid diversification of song characteristics unrelated to genetic distance (e.g., the Drosophila willistoni species complex [3] and Chrysoperla lacewings [63]). Cotesia rubecula is the only species that deviates from expectation. It is genetically placed with species with pulse-terminal buzz patterns (e.g., C. congregata) but lacks a discrete terminal buzz (Fig 2). The most parsimonious explanation is that C. rubecula secondarily lost the long terminal buzz and replaced it with a long pause. The time between pulses is similar to those within its genetic group. Alternatively, C. rubecula may not belong in this group, although high nodal support for its inclusion makes this possibility less likely (Fig 7). Notably, C. rubecula has other characteristics that differentiate it from most species recorded in the "rubecula" and "glomerata" groups: it is solitary and the largest Cotesia species recorded. Determining whether a solitary or gregarious life history or relative size influences courtship song characteristics would require recording a broader range of species, particularly more solitary species (the only other being C. marginiventris in this study). Moreover, C. rubecula and C. glomerata are the only species collected that utilize the same host, Pieris rapae on Brassicaceae. Their songs may display greater divergence in part due to character displacement, which has been demonstrated for the songs of a few insects [64]; however, extensive surveying suggests competitive exclusion between these two species over most of the range in the United States [65]. Cotesia teleae has a song that challenges direct placement into a group. The pattern of the pulse-buzz unit is similar in many ways to those of the "rubecula" group; however, it has a short pulse with a high energy terminal buzz that loses amplitude at the end (Fig 2). Most distinctly, there are long pauses between pulses. Possibly, it belongs with the "rubecula" group (the cluster analysis supports a relation with C. rubecula), but its placement remains less certain than for other species without either genetic information or another species with a similar song pattern. The song of C. teleae is also the only one analyzed using a single male. A single parasitized polyphemus caterpillar (Antheraea polyphemus) yielded few adult wasps. 
The brood began egression in October with most wasp larvae going into diapause, which was not broken in the lab and yielded only a single male concurrently with females. Considering that courtship songs are conserved within species and this male was healthy, the recorded song is presumably a reliable representation of this species. A second male emerged without a living conspecific female present and would not initiate courtship when presented with other species. Attempts to find a second brood over multiple years failed. Since additional samples of C. teleae are improbable, this recording was included in this study. The "melanoscela" group, containing C. sesamiae and C. flavipes, is the only major group not included in this analysis. These two species, widely used as biocontrol agents of stemborer pests, are not native to North America and could not be acquired for this study. Similar to other Cotesia spp., their songs consist of repeating pulse, buzz, and pause elements with a frequency of 222-290 Hz [22,43] (note: our terminology differs from that used in these references). The overall pattern of higher-amplitude pulse decaying into a longer terminal buzz has some structural similarities to songs of C. marginiventris and others in the "rubecula" group; however, the time between high amplitude elements (termed buzz 1 in [22,43]) is nearly a full second in C. flavipes and C. sesamiae, which is considerably longer than most of the other Cotesia species recorded. The species of the "melanoscela" group would form a distinct cluster based on reported song characteristics, and may be placed close to the "rubecula" group based on overall pattern. Courtship song analysis can be used to match unidentified wasps to species, particularly those with similar morphology. Two cases occurred during this study in which parasitoid cocoons were found separated from their host. Unknown Cotesia cocoons found on a garden tomato plant were identified as C. orobenae upon recording. Presumably, the cross-striped cabbageworm (Evergestis rimosalis) hosts had decimated nearby cabbages and had then migrated to the tomato before parasitoid egression. The hosts were absent, leaving only the wasp cocoons remaining on the leaves (C. orobenae cocoons typically do not remain attached to the host). In the second case, three loose bundles of parasitoid cocoons were found with no host in a mowed horse pasture in Virginia, USA. After recording the adults, the species acoustically matched those grouped with C. congregata but not any currently recorded species. Subsequently, cocoons of C. nr. phobetri that originated from Arizona were received. These wasps produced courtship songs that very closely resembled those of the unknown Virginia wasps, and also were similar in morphology, cocoon structure, and habitat. This is the first known record of C. nr. phobetri outside of Arizona. Considering that this species utilizes a common caterpillar genus, Grammia, as hosts in a common habitat type, it may be widespread in the United States. Courtship song elements may differ even among closely related species or host-associated populations. For example, allopatric populations of C. sesamiae and C. flavipes utilizing different hosts had courtship songs that differed in element duration and frequency [43]. Likewise, C. congregata originating from hosts M. sexta on tobacco (MsT) and Ce. catalpae on catalpa (CcC) differed significantly in pulse and pause durations, though the differences were not enough to reliably distinguish all individuals [15]. 
We expanded this earlier finding by using four additional host-foodplant sources of C. congregata in Virginia and an additional population of MsT wasps from Indiana (Table 1). These six sphingid host species represent two subfamilies and thus were phylogenetically diverse [66]. Mean song element duration (pulse and pause) and PCs differ among MsT and CcC wasps; however, the degree of range overlap with the additional sources prevents reliable discrimination by source (Figs 8 and 9). The slight differences may indicate recent reproductive isolation that over time may become discrete differences under sexual selection or genetic drift. Breeding crosses using these additional sources of C. congregata indicate a pattern of asymmetric hybrid female sterility with either MsT or CcC wasps, suggesting only two primary lineages (Bredlau et al., in submission). In contrast, geographically separated populations of C. nr. phobetri differ in song element durations (Fig 10), even though they are similar enough to be recognized as the same species (S7 and S8 Audio). We cannot determine whether the Virginia and Arizona populations represent sister species, host-associated races, or isolated populations without additional information on reproductive compatibility and range. These populations are separated by 3,150 km and thus may be expected to have some differences in song elements regardless of species status. Another possibility is that laboratory rearing of the Arizona population for three years could have resulted in slight changes in courtship song elements, as reported for other braconid wasps [25]. Collecting wild C. nr. phobetri at multiple sites would be required to make that assessment. The other geographically separated samples came from C. glomerata and C. rubecula; however, not enough individuals were recorded to discern acoustic differences within these species. Relatively small differences in songs among closely related species indicate a phylogenetic signal that may have useful applications for systematics. Sexual selection likely plays a role in the differentiation of some songs. However, within the majority of the "rubecula" group, songs consist of similar pause-pulse-buzz patterns more indicative of a slow build-up of differences rather than active sexual selection. Furthermore, song differences in reproductively isolated incipient species of C. congregata are slight and cannot be used to reliably identify host-foodplant complex sources. Song differentiation of reproductively isolated species may change over time via genetic drift in the absence of strong sexual selection, producing minor changes in elements among closely related species. In this scenario, songs may seem arbitrarily different among species with relatively small changes, yet still be conserved within species. Courtship shortly after emergence on the natal host-plant may play a role in limiting contact among sympatric, closely related species, thereby reducing selective pressure on courtship song differentiation. Likewise, other factors such as host-plant learning and adaptations to host immune systems may play a greater role in parasitic wasp speciation, leading to differing rates of song differentiation. In contrast, species clusters of Drosophila are reported to have large differences in courtship songs, suggesting strong sexual selection leading to differentiation before other traits [2][3][4][67]. Playback experiments using the D. 
buzzatii species cluster demonstrate that females are more likely to accept males with a conspecific song, supporting the role of sexual selection [68]. Likewise, cryptic species complexes of lacewings [8] and sand flies [10] can be reliably distinguished by courtship song patterns. Courtship songs of Cotesia spp. are structurally complex and highly variable compared to those produced by other braconids. For example, courtship songs of other microgastrines vary from the consistent wing fanning sounds that increase in amplitude ca. every 2 seconds produced by Glyptapanteles flavicoxis [28] (in a genus closely related to Cotesia [44]) to the short repeating pulse trains of Microplitis croceipes that may merge into a warble [24]. More distantly related braconids, such as those in the Opiinae [24][25][26], typically produce short repeating pulses similar in structure to the song of C. diacrisiae. Wasps in the Aphidiinae [27] and Euphorinae (Bredlau, unpublished) produce longer pulses (200-220 ms) with longer pauses between pulses (200-370 ms) that do not trail into terminal buzzes, similar in structure to the song of C. rubecula. At least one ichneumonid, Campoletis sonorensis, also produces a song consisting of pulses (215 ms) and long pauses (215 ms) (Bredlau, unpublished). Among parasitic wasps, constant wing fanning or repeating pulses are the most common patterns. Considering that all Cotesia songs consist of pulses in one form or another, the ancestral song most likely also consisted of pulses or consistent wing fanning varying in amplitude that later developed pauses before the high amplitude components. Similarity of songs among species such as C. rubecula and those in other subfamilies likely evolved independently. For example, convergence of song structure has been reported for allopatric cryptic species of Chrysoperla lacewings [69]. The analysis presented in this study has several limitations. Even the relatively simple songs of parasitic wasps contain multiple acoustic elements and frequencies, often with different degrees of variance depending on song structure. Principal component analysis is useful for reducing the dimensionality of complex datasets to uncorrelated variables, and in identifying elements that contain the greatest variance. Moreover, PCA is a common method of data exploration widely understood by biologists and has been used in the comparison of songs in diverse taxa including birds (e.g. [70,71]) and insects [4,8,10,63,72]. We used PCA as a means to reduce the acoustic data for comparison and included the number of PCs that adequately explained aspects of the courtship songs in the cluster analysis. However, limitations such as uneven scaling of long vs short elements and time vs frequency were not accounted for because their biological significance is unknown. No one element could adequately capture the differences among songs; furthermore, some elements calculated were intentionally redundant. Alternative methods to a cluster analysis that consider the probability of a given acoustic tree among all possible trees should be considered with additional data. Additionally, the sequencing of all sampled wasps with additional genes or recordings of those already sequenced will permit a more thorough examination of song trait evolution. 
This comprehensive study of courtship song diversity within a genus of parasitic wasps, Cotesia, reveals a wide diversity of song patterns that can be divided into groups based on the duration and frequency of song elements. The basal song most likely consisted of regular pulses generated by high-amplitude wing strokes, as seen in other members of subfamily Microgastrinae [24]. This song diverged into the several distinct patterns among the major groups of Cotesia. Many wasps not yet recorded can likely be placed into these groups based on a combination of song structure and morphology. The unique structure of songs for each species can potentially be used for species recognition and as a reproductive barrier between cryptic species; however, the influence of sexual selection is uncertain. Despite measurable differences among species, the songs among C. congregata host-foodplant complexes cannot be reliably distinguished, suggesting that song differentiation does not proceed without other reproductive barriers. In total, fifteen Cotesia species have been recorded out of the estimated 1,000 species globally [50]. Considering the size of this genus, other entirely new song patterns may yet be discovered. When combined with additional genetic data, courtship song analysis should prove useful in determining the systematics and evolutionary history of groups of parasitic wasps, particularly in this highly diverse and agriculturally important taxon.
8,407
2019-01-04T00:00:00.000
[ "Biology" ]
Transforming Area Coverage to Target Coverage to Maintain Coverage and Connectivity for Wireless Sensor Networks Area coverage is one of the key issues for wireless sensor networks. It aims at selecting a minimum number of sensor nodes to cover the whole sensing region and maximizing the lifetime of the network. In this paper, we discuss the energy-efficient area coverage problem considering boundary effects in a new perspective, that is, transforming the area coverage problem to the target coverage problem and then achieving full area coverage by covering all the targets in the converted target coverage problem. Thus, the coverage of every point in the sensing region is transformed to the coverage of a fraction of targets. Two schemes for the converted target coverage are proposed, which can generate cover sets covering all the targets. The network constructed by sensor nodes in the cover set is proved to be connected. Compared with the previous algorithms, simulation results show that the proposed algorithm can prolong the lifetime of the network. Introduction With development of wireless communication technology and microelectromechanical system (MEMS) technology, the cost, power, and volume of sensors decrease, while functions increase. Sensors can sense data or events in sensing regions, handle them, communicate with other sensors, and finally transmit the processed data to the base station. In recent years, wireless sensor networks (WSNs) have been used extensively in many fields, such as monitoring the living environments and behaviors of wild animals [1], detecting the temperature and pressure of craters and earthquakes [2], military supervising, tracking targets, health cares, and vehicular applications [3,4]. In a WSN, the batteries of sensors are limited and it is not feasible to recharge for large numbers of sensors in many applications. In such an energy-constrained WSN, if sensors are all in the working state simultaneously, excess energy would be wasted and the collected data would be highly correlated and redundant. Therefore, it is critical to make an effective schedule to let one subset of sensors in the working state and the remaining in the sleeping state such that the sensing region can be covered in a long time without recharging. Coverage problems in WSNs have been widely studied [5][6][7][8] and can be classified into the following three types [9]. Area coverage [10][11][12][13] is to cover a sensing region. Sensors are randomly or deliberately deployed to cover every point in the region. Target coverage [14][15][16] is to cover a target set. Sensors and targets are deployed in a region and sensors can cover all the targets in the target set. Barrier coverage [17,18] is to cover a long belt region. A belt region is covered if an intruder is detected when crossing the region along any path. These three kinds of coverage are shown in Figure 1. In this paper, we mainly discuss the area coverage problem, which is similar to the art gallery problem [19] in computational geometry. In the art gallery problem, cameras are placed such that every point in the art gallery is monitored by at least one camera. In area coverage, sensors are deployed and scheduled such that every point in the sensing region can be covered by at least one sensor. Full coverage and connectivity are two important requirements for WSNs. 
The environment can not be monitored in an accurate way without full coverage and sensor nodes can not communicate with each other to process the sensed data and transmit them to the base station without connectivity. In recent years, many approaches have been proposed for area coverage problems and target coverage problems, respectively. However, few results integrate them together. In this paper, we study area coverage problems by applying an approach that can solve target coverage problems effectively. The main contributions of the paper are as follows. (1) We design an energy-efficient approach for area coverage problems from a new perspective, that is, solving the area coverage problems by transforming the area coverage problem to the target coverage problem. (2) We propose two dynamic schemes for the converted target coverage problems to generate cover sets and prove that these cover sets can cover the sensing region completely. Compared with the previous works, our approach can prolong the lifetime of the network significantly by simulations. The two schemes can also be used in the general target coverage problems. (3) We prove that as long as the sensing region is completely covered, the network is connected when the communication radius of every sensor node is no smaller than twice of its sensing radius. The rest of the paper is organized as follows. Section 2 gives the existing results about area coverage, target coverage and connectivity. Section 3 gives the problem statement, some definitions, and assumptions. Section 4 gives the proposed algorithm and two schemes to generate cover sets. In Section 5, we discuss the performance of the algorithm. In Section 6, we show the superiority of our algorithm by simulations. Finally, Section 7 concludes the paper. Related Works We mainly consider the problem of transforming area coverage to target coverage to maintain coverage and connectivity simultaneously. We now give some related works about area coverage, target coverage, and connectivity in this section. Area Coverage. Area coverage is to cover or monitor a region such that every point in it can be covered by at least one sensor node. In [11], Zalyubovskiy et al. proposed an energy-efficient algorithm for area coverage problems. The locations of sensor nodes are determined and their sensing ranges and communication ranges are adjustable. The algorithm adjusts the sensing radii of sensor nodes and arranges them optimally. In addition, the authors gave two types of coverage models. In the first model, the centers of three neighboring disks with the same sensing radius are connected to be an equilateral triangle. In the second model, the centers of four neighboring disks with the same sensing radius are connected to be a square. In [12], Misra et al. proposed a coverage algorithm based on Euclidean distance. It requires a cluster head node with high-power, high-computation, and communication capacity for each local region. These cluster heads communicate with common sensor nodes in the same local region, deal with their locations, and generate cover sets independently. A new one will be activated after a cover set finishes its work. In [13], Kasbekar et al. proposed a polynomial time and distributed k-coverage algorithm to maximize the network lifetime. The algorithm does not require the knowledge of locations of sensor nodes and directional information, but only needs to know the distance between any two communication neighbors and their sensing radii. 
The algorithm includes two phases, initialization phase and activation phase. In the initialization phase, every node u knows local information including the intersection point set P u covered by it, the set of sensor nodes T u , and P u P v , where T u contains sensor nodes covering intersection points in P u and P v is the intersection point set covered by sensor node v(v ∈ T u ). In addition, it also has global information including the number of sensor nodes deployed and the maximum of the initial energy of all sensor nodes. In the activation phase, every sensor node is assigned a weight, called activation preference. Sensor node u competes with the sensor nodes in T u for being an active node according to their activation preferences. A contending sensor node u activates itself once it detects that it has a lower activation preference than all the contending sensor nodes in T u . Once a sensor node u detects that all the intersection points in its sensing range in P u are k-covered by the already active sensor nodes in T u , it enters the sleeping state. The algorithm guarantees that the sensing region is covered completely, whereas it does not consider the tradeoff between energy and coverage. If there is only one intersection point uncovered in the sensing range of a sensor node, the node must be activated, which is against saving energy. Target Coverage. Target coverage is to cover a target set such that all the targets in it can be covered by at least one sensor node. In [14], Cardei et al. proposed a centralized solution based on linear programming (LP) to generate nondisjoint cover sets for the target coverage problem. It takes a high complexity of O(m 3 n 3 ), where m is the number of cover sets and n is the number of sensor nodes. The authors also proposed a greedy algorithm, with a lower complexity of O(dk 2 n), where d is the minimum number of sensor nodes that cover all the targets and k is the number of targets. In [15], Zorbas et al. proposed an effective coverage algorithm (CCF) for target coverage. CCF divides all the sensor nodes into cover sets, each of which can cover all the targets. These cover sets are disjoint or nondisjoint. The algorithm assigns a weight for each sensor node u, which combines both its monitoring capacity and remaining energy. The authors gave a static-CCF scheme and a dynamic-CCF scheme to produce cover sets, respectively. During the construction of cover sets, the node with a higher weight is preferred to be selected. The static-CCF assigns a weight for every node to describe its relation with Critical Targets. The weight is computed only once at the beginning of the algorithm, and it remains constant until the termination of the algorithm. On the other hand, in the dynamic-CCF scheme, the weight varies dynamically with the set of critical targets during the process of the scheme. In [16], Shih et al. mainly considered the connected target coverage (CTC) problem in wireless heterogeneous sensor networks with multiple sensing units. The problem can be reduced to a connected set cover problem and further formulated as an integer linear programming (ILP) problem. However, the ILP problem is NP-complete. Therefore, two distributed heuristic schemes, REFS (Remaining Energy First Scheme) and EEFS (Energy Efficiency First Scheme), were proposed. 
In REFS, each sensor node determines whether it should activate itself such that all targets can be covered and the sensed data can be delivered to the sink, according to its remaining energy and the decisions of its neighbors. The advantages of REFS are its simplicity and reduced communication overhead. To utilize energy of sensor nodes efficiently, EEFS was proposed. In EEFS, the sensor node considers its contribution to coverage and connectivity to make a better decision. Definitions of the Problem We give some assumptions and definitions before describing the problem. Assumption 1. All the sensor nodes are randomly deployed in a sensing region A, and A can be completely covered. If A is very large, we divide it into several smaller subregions by following a divide-and-conquer approach [12]. Each of them executes the algorithm independently. Assumption 2. All the sensor nodes are static and locationaware. Every node has a unique ID to identify itself, and their locations can be obtained via some localization techniques [20]. Assumption 3. All sensor nodes have the same sensing radius and the same communication radius; that is, all the sensor nodes are homogeneous. Assumption 4. The sensing range of a sensor node u is a disk of radius r, centered at the location of u. A sensor node v located within the sensing range of u is denoted by v ∈ SN(u). If the distance between point p in the sensing region and u is less than r, that is, dist(p, u) < r, p is covered by u. The communication range of a sensor node u is a disk of radius R, centered at the location of u. A sensor node w located within the communication range of u is denoted by w ∈ CN(u). If the distance between u and w is less than R, that is, dist(w, u) < R, they can communicate with each other. Assumption 5. To guarantee the connectivity of the network, the communication radius R of each node must be no smaller than twice of its sensing radius r, which will be proved in Theorem 8. If the network is not connected, the sensed data can not be delivered to the base station and the paper will lose real significance. Definition 1. Given a convex region A, if every point p ∈ A is covered by at least one sensor node, then A is covered. This kind of coverage is called area coverage. Definition 2. Given a target set T, if each target t ∈ T is covered by at least one sensor node, then T is covered. This kind of coverage is called target coverage. Definition 3. The set composed of sensor nodes in the working state and satisfying the requirement of area coverage or target coverage is called cover set. If some sensor nodes belong to different cover sets, these cover sets are called nondisjoint cover sets. If each sensor node belongs to only one cover set, these cover sets are disjoint cover sets. The size of cover sets is an important measurement of the performance of one scheduling algorithm. The smaller a cover set is, the less the energy consumption is. Definition 4. Network lifetime is the time interval from the activation of the network until the first time at which a coverage hole appears, or from the point that the network starts operation until the set consisting of all the sensor nodes with nonzero remaining energy is not a cover set any more. According to the assumptions and definitions above, we can describe the problem as follows. 
Given a convex region A, transform the area coverage to the target coverage and find efficient strategies for target coverage to generate as many cover sets as possible such that nodes in each cover set can cover all the targets in the converted target coverage problem. Algorithm Description In this section, we propose an algorithm for the area coverage problem with boundary effects (Pseudocode 1). The algorithm includes two phases, the transformation phase and the cover set generation phase. The former considers how to transform the area coverage problem to the target coverage problem. In this phase, a large sensing region is divided into several smaller subregions, in each of which there are some parameters and information of intersection points among disks of sensor nodes or between disks of sensor nodes and boundaries, which can be used as the input of the second phase. These intersection points are the targets in the second phase. In the second phase, we propose two centralized schemes for the converted target coverage problem obtained from the first phase. Each sub-region executes them independently. The two schemes generate cover sets that can cover all the targets obtained from the first phase. In both of the two schemes, every sensor node is assigned a weight. When selecting a new sensor node for a cover set, the node with a larger weight is preferred. The process operates until the target set is empty. Then a new cover set is generated. Pseudocode 1 (Area coverage algorithm): run the transformation phase; then, while S_remain ≠ ∅ and |C_max| < max_sets, run Algorithm 1 or Algorithm 2 to obtain a cover set C and set C_max = C_max ∪ {C}; finally, return C_max. Obviously, the two schemes can also be used in the general target coverage problems. Transformation Phase. If sensor nodes are randomly deployed in a large convex sensing region A, A can be divided into several subregions, that is, A = A_1 ∪ A_2 ∪ · · · ∪ A_k, where k is the number of subregions. Each sub-region has at least one virtual boundary. Given two sensor nodes w, z ∈ S, where S is the set of sensor nodes deployed in the sensing region A, if their sensing disks intersect (not tangent), that is, 0 < d(w, z) < 2r, we say w intersects with z. If the distance between a sensor node and one boundary of the region is less than r, we say the node intersects with the boundary. In Figure 2, suppose the sensing region is a rectangle ABCD; we divide it into four smaller subregions and the dotted lines l_1 and l_2 are the virtual boundaries: dist(l_1, l_1′) = dist(l_1, l_1″) = dist(l_2, l_2′) = dist(l_2, l_2″) = 2r. When nodes (such as u and v) are located between l_1′ and l_1″ or between l_2′ and l_2″, the intersection points (t_1 and t_2) among disks of sensor nodes in different subregions cannot be ignored. We let the intersection points among disks of sensor nodes or between disks of sensor nodes and boundaries (not including virtual boundaries) be the targets forming a target set used in the second phase, denoted by T, T = {t_0, t_1, . . . , t_m}; t_j is a target and m is the number of targets in T. Each target in T is assigned an ID. If the number of targets in T at the same place exceeds one, we regard them as a single target and assign only one ID for them. Theorem 5. Given a set of sensor nodes C ⊂ S, if C covers all targets in T, then C is a cover set that can cover the sensing region A completely. Proof.
By contradiction, assume that all targets in T are covered, but there is a small region not covered, denoted by D, and p is a point in D. D can be one of the following two cases. Case 1. Point p lies in a region D whose boundary is only composed of exterior arcs of a collection of sensing disks (see Figure 3). Since points on the boundary circles of the sensing disks are themselves outside the (open) sensing ranges of the corresponding sensor nodes (coverage requires dist(p, u) < r; see Assumption 4 in Section 3), the entire boundary of D, including the intersection points of sensing disks, is not covered. This contradicts the assertion that all targets are covered. Case 2. Point p lies in a region D bounded by exterior arcs of a collection of sensing disks and boundaries. As shown in Figure 4, D is in a region bounded by exterior arcs of nodes u, v, x, and the lower boundary. Similarly to Case 1, the entire boundary of D, including the intersection points of nodes u, v, and x and the intersection points between nodes v, x, and the lower boundary, is not covered. This contradicts the assertion that all targets are covered. In summary, a set C that covers all the intersection points among disks of sensor nodes or between disks of sensor nodes and boundaries (not including virtual boundaries) can also cover the whole sensing region. Thus, the theorem is true and we can transform the area coverage problem to the target coverage problem. Cover Set Generation Phase. In this phase, we propose two dynamic cover set generation schemes (Algorithms 1 and 2) for the converted target coverage problem. Both of them can generate cover sets to cover all the targets in T; both take the target set T as input and output a cover set C. Whenever there is more than one sensor node with the largest weight among the candidates, the one with more energy is selected. Before executing the two schemes, the following parameters need to be stated. N_i is the set of sensor nodes covering the target t_i; that is, N_i = {s_j | dist(s_j, t_i) < r, j = 1, . . . , n}. Q_j is the set of targets covered by the sensor node s_j, that is, Q_j = {t_i | dist(s_j, t_i) < r, i = 1, . . . , m}, where j = 1, . . . , n, and n is the number of sensor nodes. C_cur is the set of sensor nodes that have already been selected into the cover set currently. Initially, C_cur = ∅. C is the cover set produced by the two schemes, and C_cur ⊂ C. C_max is the set of all cover sets. |C_max| is the number of cover sets. S is the set of sensor nodes deployed. S_remain is the set of sensor nodes with nonzero energy. Initially, S_remain = S. S_cur is the set of sensor nodes currently remaining with nonzero energy except the sensor nodes that have been added into the current cover set C_cur. S_avail is the set of candidate sensor nodes that have not been added but could be added into the current cover set C_cur. Initially, S_avail = ∅. T is the set of intersection points obtained in the transformation phase. T_remain is the target set uncovered. Initially, T_remain = T. T_covered is the target set including the targets already covered by sensor nodes in the current cover set C_cur. w is the number of cover sets that each sensor can participate in initially.
L_sj is the remaining number of cover sets that the sensor node s_j can participate in currently. Initially, L_sj = w. W(s_j) is the number of targets lying in the sensing range of s_j but uncovered by other nodes that have been selected for C_cur. weight(s_j) is the weight of the node s_j assigned by the two cover set generation schemes. Initially, Algorithm 1 assigns a weight for every sensor node s_j, that is, weight(s_j) = |Q_j|, which is the number of targets lying inside the sensing range of s_j but uncovered by other nodes that have been selected for C_cur. Afterwards, an initial sensor node that has the largest weight is selected from S_cur. If there is more than one sensor node that has the largest weight in S_cur, the one with more energy is selected. If there is no sensor node in S_cur, Algorithm 1 ends. After selecting a new node, denoted by selected, Algorithm 1 removes it from S_cur, adds it into C_cur, removes the targets covered by it from T_remain, and puts them into T_covered. But the targets that have ever been added into T_covered will not be added again. For each target t_i in Q_selected, the weight of each node s_j in N_i decreases by 1, that is, weight(s_j) = weight(s_j) − 1, which is to bring down the possibility of being selected. If the weight of a sensor node is 0, which implies the targets covered by it have already been covered by other sensor nodes in C_cur, it will be removed from S_cur. Afterwards, the sensor nodes with nonzero weights in N_i and CN(selected) are put into the set S_avail, and the nodes that have ever been added into S_avail will not be added again. Then the sensor node in S_avail with the largest weight is selected for C_cur; that is, the node that can cover the largest number of targets uncovered by other nodes in C_cur until now is selected. Loop until all the targets are covered and T_remain is empty or S_avail is empty. Then C_cur becomes a cover set C. Initially, Algorithm 2 assigns a weight for every sensor node s_j with weight(s_j) = a × (W(s_j)/|Q_j|) + b × (L_sj/w), where a + b = 1. a and b are parameters used to decide the percentage of residual energy and the percentage of the number of uncovered targets. a and b are initialized at the beginning of the application and do not change during the application lifetime. W(s_j)/|Q_j| is used to describe the current coverage contribution of sensor node s_j, where W(s_j) is the number of targets lying in the sensing range of s_j but uncovered by other nodes that have been selected for C_cur. L_sj/w can be used to describe the energy available currently. The process of Algorithm 2 is similar to that of Algorithm 1. When selecting a sensor node for a cover set, Algorithm 1 emphasizes its coverage contribution, that is to say, selecting the one covering as many uncovered targets in T as possible, while Algorithm 2 considers the tradeoff between energy and coverage contribution. We now prove the correctness of the two cover set generation schemes. Theorem 6. Both of the two schemes can generate at least one cover set, if there exists one in the network. Proof. Assume that C_1 is a cover set. For C_1, we have ∪_{s_j ∈ C_1} Q_j = T, (1) where C_1 ⊆ S, C_1 ≠ ∅, Q_j ⊆ T, and Q_j ≠ ∅. By contradiction, suppose that C_1 does exist, that is to say, sensor nodes in S can cover all the targets in T.
However, our schemes cannot produce a cover set, which means that there does not exist a sensor node s_j in S_cur such that s_j can cover targets in T_remain; that is, (∪_{s_j ∈ S_cur} Q_j) ∩ T_remain = ∅. (2) C_cur is the set of nodes that have already been selected into the cover set, and the remaining sensor nodes belong to S_cur, so that S = S_cur ∪ C_cur. (3) Remove all the targets covered by nodes in C_cur from T_remain. Then, (∪_{s_j ∈ C_cur} Q_j) ∩ T_remain = ∅. (4) From (2), (3), and (4), we have (∪_{s_j ∈ S} Q_j) ∩ T_remain = ∅. (5) In addition, T_remain ⊆ T and C_1 ⊆ S; thus (5) contradicts (1), and the initial hypothesis proves to be false. Therefore, our schemes can generate at least one cover set to guarantee full coverage of the sensing region. Algorithm Analysis In the cover set generation phase, we select sensor nodes for cover sets according to the following three criteria. (1) The sensor node covering a larger number of targets has a higher priority. (2) Minimize the probability of covering a target multiple times. (3) Promote candidates with more remaining energy. For (1), Algorithm 1 assigns a weight for every sensor node s_j, that is, weight(s_j) = |Q_j|, and selects a sensor node that has the largest weight. For (2), in Algorithms 1 and 2, for every target t_i in Q_j, the weight of every node in N_i decreases after selecting a new node s_j for a cover set, which reduces the probability of covering a target multiple times. For (3), Algorithm 1 selects a sensor node with more energy when there is more than one node that has the largest weight. When considering the tradeoff between the remaining energy of a sensor node and the number of targets covered by it, we propose Algorithm 2, which combines criteria (1) and (3) and assigns a weight for every sensor node s_j as follows: weight(s_j) = a × (W(s_j)/|Q_j|) + b × (L_sj/w), with a + b = 1. Now, we give the algorithm for solving area coverage problems (Pseudocode 1). The complexity of the algorithms executed in each subregion is O(n²m), where n is the number of sensor nodes in the smaller sub-region and m is the number of targets in the sub-region produced in phase one. Since the number of targets and sensor nodes in each sub-region is smaller than those in the whole sensing region, our algorithm is effective. Since there are many sensor nodes in the sensing region, one target may be covered by multiple sensor nodes. Assume that there are x sensor nodes in the smallest set N_i. Initially, each sensor node has enough energy to participate in w cover sets. Thus, the algorithm can produce at most x × w cover sets, which is an upper bound of the number of cover sets produced and is also a theoretical result, that is, max_sets = x × w. When there is no sensor node in S_remain or the number of cover sets produced reaches max_sets, the algorithm ends and returns C_max. Obviously, the cover sets produced by the algorithm are nondisjoint cover sets. Both Theorems 5 and 6 above fit the case of one subregion, and the proof of coverage in the whole region A is as follows. Theorem 7. The whole large region A can be covered when it is divided into multiple subregions and each of them executes the algorithm independently. Proof. We divide a large sensing region A into several smaller subregions by following a divide-and-conquer approach in order that our algorithm can be widely used in large sensing regions. In the transformation phase, the intersection points among disks of sensor nodes or between disks of sensor nodes and boundaries (not including virtual boundaries) of subregions are calculated as the targets used in the cover set generation phase. Targets in all these subregions constitute the targets in T.
In each sub-region, the cover set generation schemes work independently on the targets, which also include the intersection points between disks of sensor nodes and virtual boundaries. Thus, the number of targets in all these subregions is more than the number of targets in T. From Theorem 6, the cover sets generated can cover all the targets in each sub-region. Thus, all the targets in T can also be covered. From Theorem 5, if all the targets in T can be covered, the whole region can also be covered. Thus, the theorem holds. According to the theorem, the proposed algorithm can be applied in large networks with thousands or more nodes. Theorem 8. The network constructed by sensor nodes in any cover set is connected when the communication radius of any sensor node is no smaller than twice its sensing radius. Proof. By contradiction, if the network is not connected, there exists a pair of sensor nodes with no path between them. Assume that (u, v) is a pair of nearest (in Euclidean distance) sensor nodes that are not connected. Consider a disk of radius r whose center, denoted by x, lies on the line segment from u to v with dist(u, x) = r, as shown in Figure 5 ((u, v) is a pair of nearest sensor nodes that are not connected). Firstly, we prove that there must be other sensor nodes inside the disk of x. We prove it by contradiction. Assume that there are no other nodes inside the disk. In order to contain another sensor node y, the disk at x must move a shortest distance ε along uv toward the node v, to a position x′. When the disk has moved a distance less than ε along uv toward v and is located at x″, there are still no other nodes inside the disk, since dist(x″, y) > r > dist(x′, y) and dist(u, x″) > r. Thus, the point at the disk center x″ is not covered by any sensor node, which contradicts the condition that the whole region is covered completely. Therefore, there must be other sensor nodes inside the disk of x. Assume that a sensor node p is inside the disk of x before moving. Since dist(p, u) < 2r and 2r ≤ R, node u and node p are connected. However, u and v are not connected. Thus, p and v are not connected; otherwise, u and v would be connected. In addition, dist(u, v) > dist(p, v), for ∠1 < π/2 < ∠2. This contradicts the hypothesis that (u, v) is a pair of nearest sensor nodes that are not connected. Therefore, the theorem is true. According to Theorem 8, we can ensure the network connectivity. Then the sensed data can be transmitted to the base station. Simulations We describe the simulation results from three aspects: the size of cover sets, the number of cover sets, and coverage redundancy. The size of cover sets is the number of working sensor nodes in cover sets. The size of cover sets is inversely proportional to the lifetime of the network. The number of cover sets is the number of cover sets theoretically produced by the algorithm and is directly proportional to the lifetime of the network. When the required coverage degree is 1, the situation in which every point in the sensing region is covered by exactly one sensor node is hardly achievable for full coverage; the situation in which many small parts of the sensing region are multicovered is called coverage redundancy. Without considering fault tolerance of the network, coverage redundancy must be reduced when designing an algorithm for coverage problems. The smaller the redundancy is, the longer the lifetime of the network is. Our algorithm includes two phases, the transformation phase and the cover set generation phase; a compact sketch of both phases is given below.
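To make the two phases concrete, the following is a minimal, self-contained Python sketch. It is an illustration under simplifying assumptions, not the algorithm as specified above: it ignores region boundaries, virtual boundaries, sub-region splitting, energy bookkeeping (w and L_sj), and the Algorithm 2 weight, and it builds a single cover set with the greedy rule of Algorithm 1 (pick the node covering the most still-uncovered targets). All helper names are made up for the example.

```python
# Sketch of (1) transforming area coverage to target coverage by collecting
# intersection points of sensing disks, and (2) greedily building one cover set
# in the spirit of Algorithm 1. Boundaries, sub-regions, and energy are ignored.
import itertools, math, random

R_SENSE = 10.0

def disk_intersections(a, b, r=R_SENSE):
    """Intersection points of two sensing disks of radius r centered at a and b."""
    (x1, y1), (x2, y2) = a, b
    d = math.hypot(x2 - x1, y2 - y1)
    if d == 0 or d >= 2 * r:            # coincident or non-intersecting (or tangent) disks
        return []
    h = math.sqrt(r * r - (d / 2) ** 2)
    mx, my = (x1 + x2) / 2, (y1 + y2) / 2
    ux, uy = (x2 - x1) / d, (y2 - y1) / d
    return [(mx - h * uy, my + h * ux), (mx + h * uy, my - h * ux)]

def transform_to_targets(nodes):
    targets = []
    for a, b in itertools.combinations(nodes, 2):
        targets.extend(disk_intersections(a, b))
    return targets                      # boundary intersections are omitted in this sketch

def greedy_cover_set(nodes, targets, r=R_SENSE):
    # Q_j analogue: indices of targets strictly inside each node's sensing range.
    covers = {i: {j for j, t in enumerate(targets) if math.dist(n, t) < r}
              for i, n in enumerate(nodes)}
    uncovered, cover_set = set(range(len(targets))), []
    while uncovered:
        best = max(covers, key=lambda i: len(covers[i] & uncovered))
        if not covers[best] & uncovered:
            break                       # no node helps; the region cannot be covered
        cover_set.append(best)
        uncovered -= covers[best]
    return cover_set

random.seed(1)
nodes = [(random.uniform(0, 50), random.uniform(0, 50)) for _ in range(100)]
targets = transform_to_targets(nodes)
print(len(targets), "targets;", len(greedy_cover_set(nodes, targets)), "nodes in cover set")
```

Note that coverage uses the strict inequality dist(p, u) < r from Assumption 4, so a target lying exactly on a disk boundary does not count as covered by that disk.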
The second phase works on the targets generated in the first phase. The schemes in the second phase can also be used in the general target coverage problems. To prove the superiority of our schemes for target coverage problems, we firstly compare our schemes International Journal of Distributed Sensor Networks with static-CCF [15]. As shown in Figure 6, t and n are the number of targets and the number of sensor nodes in a 50 m × 50 m region, a = 0.5, b = 0.5 in Algorithm 2, α = β = γ = 1/3 in static-CCF, and w = 5. The sensing radius r of every sensor node is 10 m. Figures 6(a) and 6(c) consider the case that the number of targets t is 500. When the number of nodes increases, the density of nodes increases, and it will generate more cover sets to cover all these targets. Thus, the number of cover sets increases while the average size of cover sets decreases. Figures 6(b) and 6(d) describe the case that the number of sensor nodes n is 400. When the number of targets increases, it will need more sensor nodes to cover all these targets. Thus, the number of cover sets decreases while the average size of cover sets increases. When the number of targets is small, the number of nodes covering them is also small. However, in these three schemes, when the number of targets and the number of sensor nodes is large enough, the average size of cover sets almost the same. From Figure 6, we can conclude that our schemes generate more cover sets than static-CCF, which represents our schemes can prolong the network lifetime compared to static-CCF, since our schemes pay more attention to sensor nodes that can cover more targets uncovered. Therefore, our schemes are energy-efficient. Figure 6 considers target coverage problems. In the following simulations, we consider area coverage problems. If the sensing region is large, we can divide it into several subregions by following a divide-and-conquer approach. Each sub-region executes the algorithm independently. In the following simulations, we mainly consider one sensing region with the size of 50 m × 50 m and the sensing radius r of every sensor node is 10 m. In the following simulations, the number of cover sets produced by our algorithm is almost the same as the number in theory. When different numbers of sensor nodes are deployed, the number of targets generated differs, and so does the number of cover sets. The network topologies under Algorithms 1 and 2 are shown in Figure 7. Only Algorithm 2 considers the values of a and b, and here we only consider the case of a = 1, b = 0 and the case of a = 0.5, b = 0.5. In the figure, " " means the sensor nodes deployed, "+" means the working nodes in the cover set generated by Algorithm 1, and " * " means the working nodes in the cover set generated by Algorithm 2. Here, 100 sensor nodes are deployed in a 50 m × 50 m region. Since the first cover set generated by Algorithm 2 remains the same with different values of a and b, we choose the second cover set to describe the network topology. Figure 8 shows the average size of cover sets with w = 10 under different values of coefficient a in Algorithm 2. Here, 100 sensor nodes are randomly deployed in the region. When a = 0 and b = 1, the remaining energy of sensor nodes is the only factor considered and their coverage contributions are not considered. Thus, the average size of cover sets produced is large. As the value of a increases, the algorithm begins to focus on the coverage contributions of sensor nodes and the average size of cover sets decreases. 
When a = 1 and b = 0, sensor nodes can participate in producing cover sets as long as their remaining energy is enough for covering the region once again. Figure 9 shows the number of cover sets under different numbers of sensor nodes randomly deployed. Here, a = 0.9, b = 0.1, and w = 10. Obviously, the number of cover sets increases with increasing number of sensor nodes. Algorithm 1 can produce more cover sets than Algorithm 2 and CPLC [12]. The reason is that Algorithm 1 selects sensor nodes that can cover a larger number of targets uncovered by other nodes that have been selected for C cur . Algorithm 2 selects sensor nodes that have the largest comprehensive abilities of the remaining energy and the coverage contributions; that is, Algorithm 2 considers both targets remaining uncovered by other nodes in C cur and the residual energy of nodes. When different numbers of sensor nodes are deployed, the number of sensor nodes in the smallest N i differs, and so does the number of cover sets. CPLC firstly selects two nearest sensor nodes, denoted by x and y, and then two furthest neighbors of them in the two communication neighbor sets are selected into the cover set, respectively. The process continues until the two sets are empty. However, some nodes nearby may be all selected. Thus, some redundant sensor nodes are selected into cover sets and the number of cover sets is reduced correspondingly. Figure 10 describes the relation between energy and the number of cover sets produced. Here, a = 0.9, b = 0.1, and n = 100. In our paper, the number of cover sets produced increases linearly with w since the number of cover sets produced is almost the same as the number in theory and the number of cover sets produced in theory is x × w, where x is a constant since all sensor nodes are static. In CPLC, if we want to guarantee full coverage, the number of cover sets is small. Figure 11 represents the comparison of the average coverage degrees among CPLC, Algorithms 1 and 2 under different numbers of sensor nodes randomly deployed. When the required coverage degree is 1, many small parts of the sensing region may be multicovered, and the coverage degree of the whole sensing region on average is called the average coverage degree. Obviously, compared to CPLC, the average coverage degree obtained from our proposed schemes is lower; that is, the proportion of the sensing region multicovered is small. Without considering fault tolerance of the network, the lower the average coverage degree is, the less the coverage redundancy is. Here, w = 5, a = 0.5, b = 0.5, and R = 2r = 20 m. The average size of cover sets produced by Algorithm 1 remains 20-22 all the time as the number of sensor nodes randomly deployed increases. The average size of cover sets produced by Algorithm 2 increases slowly since it considers the coverage contribution and the remaining energy when selecting sensor nodes. However, when the number of sensor nodes is large enough, the average size of cover sets almost remains 23-25 since the density of sensor nodes is large enough for covering the whole region. The average size of cover sets produced by CPLC is larger than our proposed two schemes. Thus, our algorithm can prolong the network lifetime compared to CPLC. Therefore, our algorithms are energy-efficient. Figure 13 represents the comparison of coverage redundancy under different algorithms. Here, 200 sensor nodes are randomly deployed, a = 0.5, b = 0.5, and R = 2r = 20 m. We compare our algorithm with CPLC [12]. 
We can obtain the coverage percentage of the sensing region by dividing the region into 1 m × 1 m small squares. In Figure 13, each data column represents the proportion of small squares whose coverage degree is no less than the distributed coverage degree. From Figure 13, we can see that all the squares are 1covered in these three algorithms. In CPLC, 92.5% of squares are 2-covered, and 19.4% are 4-covered. In Algorithm 1, 70% of squares are 2-covered, and 3.5% are 4-covered. In Algorithm 2, 72% of squares are 2-covered, and 3.9% are 4-covered. We can conclude from Figure 13 that CPLC has the highest coverage redundancy, Algorithm 2 comes second, and Algorithm 1 has the lowest redundancy. Without considering fault tolerance, the lower the redundancy is, the smaller the number of working nodes in the cover set is. Conclusion and Future Work In this paper, we discuss the area coverage problem with boundary effects and propose a new approach that integrates the area coverage with the target coverage and transforms the coverage of all the points in the sensing region to the coverage of a fraction of targets to achieve full area coverage. The algorithm is divided into two phases, transformation phase and cover set generation phase. The first phase is to transform the area coverage problem to the target coverage problem. The second phase gives two cover set generation schemes for the converted target coverage problem. The two schemes can also be used in general target coverage problems. Then, we set the communication radii of sensor nodes no smaller than twice of their sensing radii to guarantee network connectivity. Finally, we prove by simulations that our proposed algorithm is better than other algorithms and that the lifetime of the network is prolonged. In Section 3, we give some assumptions. Assumption 2 is that every sensor node is static and location-aware. Assumption 3 is that all the sensors are homogeneous. Due to the variety of the real environment, we can relax the two assumptions to be closer to the real environment. For example, we can study the case that the locations of sensor nodes are unknown, and the case that all the sensor nodes are heterogeneous, or both of the two cases.
10,122
2012-10-01T00:00:00.000
[ "Computer Science", "Engineering", "Environmental Science" ]
Change sign detection with differential MDL change statistics and its applications to COVID-19 pandemic analysis We are concerned with the issue of detecting changes and their signs from a data stream. For example, when given time series of COVID-19 cases in a region, we may raise early warning signals of an epidemic by detecting signs of changes in the data. We propose a novel methodology to address this issue. The key idea is to employ a new information-theoretic notion, which we call the differential minimum description length change statistics (D-MDL), for measuring the scores of change sign. We first give a fundamental theory for D-MDL. We then demonstrate its effectiveness using synthetic datasets. We apply it to detecting early warning signals of the COVID-19 epidemic using time series of the cases for individual countries. We empirically demonstrate that D-MDL is able to raise early warning signals of events such as significant increase/decrease of cases. Remarkably, for about 64% of the events of significant increase of cases in studied countries, our method can detect warning signals as early as nearly six days on average before the events, buying considerably long time for making responses. We further relate the warning signals to the dynamics of the basic reproduction number R0 and the timing of social distancing. The results show that our method is a promising approach to the epidemic analysis from a data science viewpoint. Motivation We address the issue of detecting changes and their signs in a data stream. For example, when given time series of the number of COVID-19 cases in a country, we may expect to give warning of the beginning of an epidemic by detecting changes and their signs. Although change detection (see e.g. [1,2,3]) is a classical issue, it has remained open how the sign of changes can be found. In principle the degree of change at a given time point has been evaluated in terms of a discrepancy measure (e.g., the Kullback-Leibler (KL) divergence) between probability distributions of data before and after that time point (see e.g. [1,4]). It is reasonable to think that the differentials of the KL divergence may be related to signs of change. This is because the first differential of the KL divergence is a velocity of change while its second differential is an acceleration of change. The problem here is that in real cases, the KL-divergence and its differentials cannot be exactly calculated since the true distribution is unknown in advance. A question lies in how we can estimate the discrepancy measure and its differentials from data when the parameter values are unknown. The purpose of this paper is to answer the above question from an information-theoretic viewpoint based on the minimum description length (MDL) principle [5] (see also [6,7] for its recent advances). The MDL principle gives a strategy for evaluating the goodness of a probabilistic model in terms of the codelength required for encoding the data, where a shorter codelength indicates a better model. We apply this principle to change detection where a longer codelength indicates a more significant change. Along this idea, we introduce the notion called the differential MDL change statistics (D-MDL) for the measure of change signs.
We theoretically and empirically justify this notion, and then apply it to the COVID- 19 pandemic analysis using open datasets. Related Work There are plenty of work on change detection (see e.g. [1,2,3,4,8,9,10,11]). In many of them, the degree of change has been related to the discrepancy measure for two distributions before and after a time point, such as likelihood ratio, KL-divergence. However, there is no work on relating the differential information such as the velocity of the change to change sign detection. Most of previous studies in change detection are concerned with detecting abrupt changes [3]. In the scenario of concept drift [12], the issues of detecting various types of changes, including incremental changes and gradual changes, have been addressed. How to find signs of changes has been addressed in the scenarios of volatility shift detection [13], gradual change detection [14] and clustering change detection [15,16,17]. However, the notion of differential information has never been related to change sign detection. The MDL change statistics has been proposed as a test statistics in the hypothesis testing for change detection [14,18]. It is defined as the difference between the total codelength required for encoding data for the non-change case and that for the change case at a specific time point t. A number of data compression-based change statistics similar to it have also been proposed in data mining [19,20,21]. However, any differential variation of the compression-based change statistics has never been proposed. Significance of This Paper The significance of this paper is summarized as follows: (1) Proposal of D-MDL and its use for change sign detection. We introduce a novel notion of D-MDL as an approximation of KL-divergence of change and its differentials. We then propose practical algorithms for on-line detection of change signs on the basis of D-MDL. (2) Theoretical and empirical justification of D-MDL. We theoretically justify D-MDL in the hypothesis testing of change detection. We consider the hypothesis tests which are equivalent with D-MDL scoring. We derive upper bounds on the error probabilities for these tests to show that they converge exponentially to zero as sample size increases. The bounds on the error probabilities are used to determine a threshold for raising an alarm with D-MDL. We also empirically justify D-MDL using synthetic datasets. We demonstrate that D-MDL outperforms existing change detection methods in terms of AUC for detecting the starting point of a gradual change. (3) Applications to COVID-19 pandemic analysis. On the basis of the theoretical and empirical advantages of D-MDL, we apply it to the COVID-19 pandemic analysis. We are mainly concerned with how early we are able to detect signs of outbreaks or the contraction of the epidemic for individual countries. The results showed that for about 64% of outbreaks in studied countries, our method can detect signs as early as about six days on average before the outbreaks. Considering the rapid spread, six days can earn us considerably long time for making responses, e.g., implementing control measures [22,23,24]. The earned time is especially precious in the presence of a considerably long period of the incubation of the COVID-19 [25,26,27]. Moreover, we analyze relations between the change detection results and social distancing events. 
One of the findings is that for individual countries, an average of about four changes/change signs detected before the implementation of social distancing correlates with a significant decline from the peak of daily new cases by the end of April. The change analysis is a pure data science methodology, which detects changes only using statistical models without using differential equations about the time evolution. Meanwhile, the SIR (Susceptible Infected Recovered) model [28] is a typical simulation method which predicts the time evolution of the infected population with physics model-based differential equations. Although the fitness of the SIR model or its variants to COVID-19 data was argued in e.g. [29,30], the complicated situation of COVID-19 due to virus mutations, international interactions, highly variable responses from authorities, etc. does not necessarily make any simulation model perfect. Therefore, the basic reproduction number R0 [31] (a term in epidemiology, representing the average number of people who will contract a contagious disease from one person with that disease) estimated from the SIR model may not be precise. We empirically demonstrate that, as a byproduct, the dynamics of R0 can be monitored by our methodology, which only requires the information of daily new cases. The data science approach then may form a complementary relation with the simulation approach and give new insights into epidemic analysis. The software for the experiments is available at https://github.com/IbarakikenYukishi/differential-mdl-change-statistics. An online detection system is available at https://ibarakikenyukishi.github.io/d-mdl-html/index.html The rest of this paper is organized as follows: Section 2 introduces D-MDL and gives a theory of its use in the context of change sign detection. Section 3 gives empirical justification of D-MDL using synthetic datasets. Section 4 gives applications of D-MDL to the COVID-19 pandemic analysis. Section 5 gives concluding remarks. Definitions of Changes and their Symptoms Let X be a domain, where we assume that X is discrete without loss of generality. For a random variable x ∈ X, let p(x; θ) = p_θ(x) be the probability mass function specified by a parameter θ. Suppose that θ changes over time. In the case when θ gradually changes over time, we define the sign of change as the starting point of that change. Let us consider discrete time t. Let θ_t be the parameter value of θ at time t. Let D(p||q) denote the Kullback-Leibler (KL) divergence between two probability mass functions p and q: D(p||q) = Σ_{x∈X} p(x) log (p(x)/q(x)). We define the 0th, 1st, and 2nd change degrees at time t as Φ_t = D(p_{θ_t}||p_{θ_{t−1}}), Φ_t^(1) = D(p_{θ_{t+1}}||p_{θ_t}) − D(p_{θ_t}||p_{θ_{t−1}}), and Φ_t^(2) = D(p_{θ_{t+1}}||p_{θ_t}) − 2D(p_{θ_t}||p_{θ_{t−1}}) + D(p_{θ_{t−1}}||p_{θ_{t−2}}). When the parameter sequence {θ_t : t ∈ Z} is known, we can define the degree of changes at any given time point. We can think of Φ_t as the degree of change of the parameter value itself at time t. We can think of Φ_t^(1), Φ_t^(2) as the velocity of change and the acceleration of change of the parameter at time t, respectively. All of them quantify the signs of change. However, the parameter values are not known in advance in general cases. The problem is how we can define the degree of changes when the true distributions are unknown. Differential MDL Change Statistics In the case where the true parameter value is unknown, the MDL change statistics has been proposed in [14,18] to measure the change degree from a given data sequence. Below we denote x_a, . . . , x_b by x_a^b. In the case of a = 1, we may drop a and write it as x^b.
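Before turning to the unknown-parameter case, the known-parameter change degrees can be illustrated numerically. The sketch below assumes a univariate Gaussian model with a hand-made mean path (both are assumptions made only for illustration) and computes Φ_t, Φ_t^(1), and Φ_t^(2) as the KL divergence between consecutive distributions and its first and second differences.

```python
# Toy illustration (assumed univariate Gaussian model, known parameters) of the
# change degrees: the KL divergence between consecutive distributions and its
# first and second differences, i.e. the "velocity" and "acceleration" of change.
import math

def kl_gauss(mu_p, var_p, mu_q, var_q):
    """KL divergence D(N(mu_p, var_p) || N(mu_q, var_q))."""
    return 0.5 * (math.log(var_q / var_p) + (var_p + (mu_p - mu_q) ** 2) / var_q - 1)

# A parameter path whose mean starts drifting gradually from 0 toward 1 at t = 50.
mus = [0.0 if t < 50 else min(1.0, 0.1 * (t - 50)) for t in range(100)]
var = 1.0

def phi0(t):   # degree of change at t
    return kl_gauss(mus[t], var, mus[t - 1], var)

def phi1(t):   # velocity of change
    return kl_gauss(mus[t + 1], var, mus[t], var) - kl_gauss(mus[t], var, mus[t - 1], var)

def phi2(t):   # acceleration of change
    return (kl_gauss(mus[t + 1], var, mus[t], var)
            - 2 * kl_gauss(mus[t], var, mus[t - 1], var)
            + kl_gauss(mus[t - 1], var, mus[t - 2], var))

for t in (49, 50, 51, 55):
    print(t, round(phi0(t), 4), round(phi1(t), 4), round(phi2(t), 4))
```

In this toy path the first and second differences become nonzero at the onset of the drift, one step before the 0th-order degree itself grows appreciably, which is the intuition behind using differentials as change signs.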
When the parameter θ is unknown, we may estimate it as θ̂ using the maximum likelihood estimation method from a given sequence x^n, i.e., θ̂ = argmax_θ p(x^n; θ). Note that the maximum likelihood function max_θ p(x^n; θ) does not form a probability distribution of x^n because Σ_{x^n} max_θ p(x^n; θ) > 1. Thus we construct a normalized maximum likelihood (NML) distribution [33] by p_NML(x^n) = max_θ p(x^n; θ) / Σ_{y^n} max_θ p(y^n; θ) and consider the logarithmic loss for x^n relative to this distribution, − log p_NML(x^n) = − log max_θ p(x^n; θ) + log C_n, (1) which we call the NML codelength, where C_n is called the parametric complexity defined as C_n = Σ_{y^n} max_θ p(y^n; θ). (2) It is known in [32] that equation (1) is the optimal codelength that achieves the Shtarkov's minimax regret in the case where the parameter value is unknown. It is known in [33] that under some regularity condition for the model class, C_n is asymptotically expanded as follows: log C_n = (d/2) log (n/(2π)) + log ∫ √|I(θ)| dθ + o(1), (3) where I(θ) is the Fisher information matrix defined by I(θ) = lim_{n→∞} (1/n) E_θ[− ∂² log p(X^n; θ)/∂θ∂θ], d is the dimensionality of θ, and lim_{n→∞} o(1) = 0. According to [14], the MDL change statistics at time point t is defined as the difference between the NML codelength of a given data sequence for the non-change case and that for the change case at time t: Ψ_t^(0)(x^n) = (− log max_{θ_0} p(x^n; θ_0) + log C_n) − {(− log max_{θ_1} p(x_1^t; θ_1) + log C_t) + (− log max_{θ_2} p(x_{t+1}^n; θ_2) + log C_{n−t})}. (4) It is a generalization of the likelihood ratio test [1,34]. If the parameters θ_1 and θ_2 are known (so that the complexity terms are unnecessary) and the non-change case corresponds to θ_0 = θ_1, then under the independence assumption the statistic, normalized by n, is approximately ((n − t)/n) D(p_{θ_2}||p_{θ_1}). The last approximation holds by the law of large numbers when n is sufficiently large, under the assumption that the true distribution is p(x_1^t; θ_1) p(x_{t+1}^n; θ_2). Letting t = n/2 and p_{θ_2} = p_{θ_{t+1}}, p_{θ_1} = p_{θ_t}, the statistic becomes proportional to D(p_{θ_{t+1}}||p_{θ_t}). This implies that the MDL change statistics in equation (4) is equivalent with the KL-divergence between two probability distributions in the case where their parameters are known in advance. Therefore, by extending the change degrees Φ_t, Φ_t^(1), . . . to the cases where the true parameters are unknown, we may consider the following statistics: the αth differential MDL change statistics, which we abbreviate as the αth D-MDL (α = 0, 1, 2, . . . ). The 0th D-MDL is the original MDL change statistics as in [14]. For example, let us consider the uni-variate Gaussian distribution: p(x; θ) = (1/√(2πσ²)) exp(−(x − µ)²/(2σ²)), (7) where x ∈ R and θ = (µ, σ). We assume |µ| < µ_max and σ_min < σ < σ_max, where µ_max < ∞ and 0 < σ_min, σ_max < ∞ are hyperparameters. The 0th D-MDL at time t is then calculated from equation (4) with the Gaussian maximum likelihood codelengths, where σ̂_0, σ̂_1 and σ̂_2 denote the maximum likelihood (ML) estimators calculated for x_1^n, x_1^t and x_{t+1}^n, respectively, and C_n is the corresponding normalizer (parametric complexity) of the NML. The 1st and 2nd D-MDL are calculated according to equation (5) and equation (6), in analogy with Φ_t^(1) and Φ_t^(2). Consider testing the null hypothesis H_0 that no change occurs at time t against the composite hypothesis H_1 that a change occurs at time t. With the MDL principle, the test statistic is given as follows: for an accuracy parameter ε > 0, h_0(x^n; t, ε) compares the 0th D-MDL Ψ_t^(0), as in equation (4), with ε. H_1 is accepted if h_0(x^n; t, ε) > 0, otherwise H_0 is accepted. We call this test the 0th D-MDL test. We define the Type I error probability as the probability that the test accepts H_1 although H_0 is true (false alarm rate), and the Type II error probability as the probability that the test accepts H_0 although H_1 is true (overlooking rate). The following theorem justifies the use of the 0th D-MDL in change detection.
Theorem 2.1 [14] Type I and II error probabilities for the 0th D-MDL test are upper bounded as in equations (10) and (11), where C_n is the parametric complexity as in equation (2) and d is the Bhattacharyya distance as in equation (12). This theorem shows that the Type I and II error probabilities in equation (10) and equation (11) converge to zero exponentially in n as n increases for some appropriate ε. We see that the error exponents depend on the parametric complexities of the model class as well as the Bhattacharyya distance in equation (12) between the null and composite hypotheses. In this sense the 0th MDL test is effective in change point detection. The 1st D-MDL test Next we give a hypothesis testing setting equivalent with the 1st D-MDL scoring. We consider the situation where a change point exists at either time t or time t + 1. Let us consider the following hypotheses: the null hypothesis H_0 is that the change point is t, while the composite one H_1 is that it is t + 1. We consider the following test statistic: for an accuracy parameter ε > 0, h_1(x^n; t, ε) compares the NML codelength for H_0 with that for H_1. We accept H_1 if h_1(x^n; t, ε) > 0, otherwise we accept H_0. We call this test the 1st D-MDL test. We easily see that h_1(x^n; t, ε) > 0 holds if and only if Ψ_t^(1) > ε, where Ψ_t^(1) is the 1st D-MDL. This implies that the 1st D-MDL test is equivalent with testing whether the 1st D-MDL is larger than ε or not. Thus the basic performance of discriminability of the 1st D-MDL can be reduced to that of the 1st D-MDL test. Let p_t be the probability distribution at time t. Note that if t + 1 is the only change point, Ψ_{t+1}^(0) ≈ D(p_{t+1}||p_t), and if t is the only change point, Ψ_t^(0) ≈ D(p_t||p_{t−1}). Hence this test is also equivalent with a comparison of the degree of change at time t + 1 and that at time t. The following theorem shows the basic property of the 1st D-MDL test. Theorem 2.2 Type I and II error probabilities for the 1st D-MDL test are upper bounded as in equations (15) and (16), where C_n is the parametric complexity as in equation (2) and d is the Bhattacharyya distance as in equation (12). (The proof is in Sec. 1 of the supplementary information.) This theorem shows that the Type I and II error probabilities in equation (15) and equation (16) converge to zero exponentially in n as n increases, where the error exponents are related to the parametric complexities for the hypotheses as well as the Bhattacharyya distance between the null and composite hypotheses. In this sense the 1st MDL test is effective. The Type I error probability in equation (15) will be used for determining a threshold of the alarm. The 2nd D-MDL test Next we consider a hypothesis testing setting equivalent with the 2nd D-MDL scoring. Suppose that change points exist either at time t or at times t − 1 and t + 1. H_0 is the hypothesis that a change happens at time t, while H_1 is the hypothesis that two changes happen at times t − 1 and t + 1. In H_0, t is a single change point, while in H_1, t is a transition point between two close change points. Thus this hypothesis testing evaluates whether time t is a change point or a transition point of close changes. The test statistic is defined analogously: for an accuracy parameter ε > 0, h_2(x^n; t, ε) compares the NML codelength for H_0 with that for H_1. Under this setting, Ψ_t^(2) ≈ 2 h_2(x^n; t, ε) + 2ε. (18) This implies that the 2nd D-MDL test is equivalent with testing whether the 2nd D-MDL is larger than 2ε or not. Thus the basic performance of discriminability of the 2nd D-MDL can be reduced to that of the 2nd D-MDL test. The following theorem shows the basic property of the 2nd D-MDL test.
Theorem 2.3 The Type I and II error probabilities for the 2nd D-MDL test are upper bounded as in equation (19) and equation (20), where C_n is the parametric complexity as in equation (2) and d is the Bhattacharyya distance as in equation (12). This theorem can be proven similarly to Theorem 2.2. It shows that the Type I and II error probabilities in equation (19) and equation (20) converge to zero exponentially in n as n increases, where the error exponents are related to the sum of the parametric complexities of the hypotheses as well as to the Bhattacharyya distance between the null and composite hypotheses. In this sense the 2nd D-MDL test is effective. The Type I error probability bound in equation (19) will be used for determining the threshold in Sec. 2.5.

Sequential Change Sign Detection with D-MDL
In the previous sections, we considered how to measure change sign scores at a specific time point t. In order to detect change signs sequentially in the case where there exist multiple change points, we conduct sequential change sign detection using D-MDL in a similar manner to [14]. We give two variants of the sequential algorithm. One is the sequential D-MDL algorithm with fixed windowing, while the other is that with adaptive windowing. In the former, we prepare a local window of fixed size to calculate the D-MDL at the center of the window. We then slide the window to obtain a sequence of D-MDL change scores as with [14] (see also [35] for local windowing). We raise an alarm when the score exceeds a predetermined threshold β. The algorithm is summarized as follows (a code sketch of this variant is given at the end of this section).

Sequential D-MDL algorithm with fixed windowing. Given: 2h (window size), T (data length), β (threshold parameter). For each time point t, compute the D-MDL score Ψ^{(α)}_t at the center of the window x_{t−h+1}, …, x_{t+h} by sliding the window, and make an alarm if and only if Ψ^{(α)}_t > β.

Next we design the sequential D-MDL algorithm with adaptive windowing. In [36], a sequential algorithm with adaptive windowing (SCAW2) was proposed by combining the 0th D-MDL with the ADWIN algorithm [9] (see also [37] for adaptive windowing), in which the window grows until the MDL change statistics exceeds a threshold. The threshold is determined so that the total number of detected change points is finite. The sequential D-MDL algorithm with adaptive windowing is obtained by replacing the 0th D-MDL with the general D-MDL in SCAW2. Given the data length T, it outputs the size of the window whenever a change point is detected.

Hierarchical Sequential D-MDL Algorithm
Practically, we combine the algorithm with adaptive windowing for the 0th D-MDL and the algorithms with fixed windowing for the 1st and 2nd D-MDL. We call this combination the hierarchical sequential D-MDL algorithm. It is designed as follows. We first output not only a 0th D-MDL score but also a window size with the 0th D-MDL with adaptive windowing, and raise an alarm when the window shrinks, i.e., when equation (21) is satisfied. We then output the 1st and 2nd D-MDL scores using the window produced by the 0th D-MDL, and raise alarms when the 1st or 2nd D-MDL exceeds its threshold, so that the 1st and 2nd D-MDL are expected to detect change signs before the window shrinkage. Note that the window shrinks only with the 0th D-MDL, but neither with the 1st nor the 2nd D-MDL. In this algorithm, the threshold is determined so that the Type I error probability in equation (15) is less than the confidence parameter δ_1. That is, combining equation (15) with equation (3) yields the lower bound of equation (22), and we employ the right-hand side of equation (22) as the threshold of an alert of the 1st D-MDL.
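The fixed-windowing variant summarized above translates almost directly into code. The sketch below reuses d_mdl_0, d_mdl_1, and d_mdl_2 from the previous sketches and is only illustrative: the threshold β is passed in by hand here, whereas the paper derives it from the error bounds, and the adaptive-windowing and hierarchical variants are not reproduced.

```python
import numpy as np

def sequential_d_mdl_fixed(x, h, beta, order=0):
    """Sequential D-MDL with fixed windowing (a sketch of the algorithm above).

    At each admissible time point the window x[t-h:t+h] is taken, the D-MDL score
    of the requested order is computed at the window's center, and an alarm is
    raised when the score exceeds the threshold beta.
    """
    score_fn = {0: d_mdl_0, 1: d_mdl_1, 2: d_mdl_2}[order]
    scores, alarms = [], []
    for t in range(h, len(x) - h):
        window = x[t - h:t + h]            # local window of size 2h
        s = score_fn(window, h)            # D-MDL at the center of the window
        scores.append(s)
        if s > beta:
            alarms.append(t)               # alarm at the absolute time index
    return np.array(scores), alarms
```

For example, scores, alarms = sequential_d_mdl_fixed(x, h=50, beta=0.1, order=1) scans a stream for velocity-of-change signs with a window of 100 points; the value of β here is arbitrary and would in practice come from the bounds above.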
The threshold for the 2nd D-MDL Ψ^{(2)}_t can be derived similarly to that for the 1st one. Note that, by equation (18), the threshold of the underlying hypothesis test is 2ε, i.e., twice the accuracy parameter. Letting δ_2 be the confidence parameter, the threshold for a window of size w satisfies

threshold_w^{(2)} ≥ 2 ( d \log(w/2) + \log(1/δ_2) ).   (23)

We employ the right-hand side of equation (23) as the threshold of an alert of the 2nd D-MDL. In practice, δ_1 and δ_2 are estimated from data (see Sec. 4.2).

Datasets
To evaluate how well D-MDL performs for abrupt/gradual change detection, we consider two cases: multiple mean change detection and multiple variance change detection. In the case of multiple mean change detection, we constructed datasets as follows: each datum was independently drawn from the Gaussian distribution N(µ_t, 1), where the mean µ_t abruptly/gradually changed over time. In the case of abrupt changes, µ_t is shifted stepwise at each change point according to the Heaviside step function H(x), which takes 1 if x > 0 and 0 otherwise. In the case of gradual changes, H is replaced with a continuous function S that rises from 0 to 1 over a transition period. In the case of multiple variance change detection, each datum was independently drawn from the Gaussian distribution N(0, σ²_t), where the variance σ²_t abruptly/gradually changed over time according to the same rule: in the case of abrupt changes the shifts follow H, and in the case of gradual changes H is replaced with S as with the multiple mean changes. We define a sign of a gradual change as the starting point of that change. In all the datasets, change points for abrupt changes and change signs for gradual changes were set at nine points: t = 1000, 2000, …, 9000.

Evaluation Metric
For any change detection algorithm that outputs change scores {s_t} for all time points, letting β be a threshold parameter, we convert the change scores into binary alarms {a_t} by a_t = 1 if s_t > β and a_t = 0 otherwise. By varying β, we evaluate the change detection algorithms in terms of benefit and false alarm rate, defined as follows. Let T be the maximum tolerant delay of change detection. When the change truly starts at t*, the benefit b_t of an alarm at time t decreases linearly from 1 at t = t* to 0 at delay T, and is 0 otherwise, where t* is a change point for an abrupt change, while it is a sign for a gradual change. The total benefit of an alarm sequence a_0^{n−1} is B(a_0^{n−1}) = Σ_t a_t b_t. The number of false alarms is N(a_0^{n−1}) = Σ_t a_t Θ(b_t = 0), where Θ(A) takes 1 if and only if the condition A is true, and 0 otherwise. We evaluate the performance of any algorithm in terms of the AUC (area under the curve) of the graph of the normalized total benefit B / sup_β B against the false alarm rate (FAR) N / sup_β N, with β varying.

Methods for Comparison
In order to conduct the sequential D-MDL algorithm, we employed the univariate Gaussian distribution whose probability density function is given by equation (7). We employed three sequential change detection methods for comparison: (1) Bayesian online change point detection (BOCPD) [11], a retrospective Bayesian online change detection method; it originally calculates the posterior of the run length, and we modified it to compute a change score by taking the expectation of the reciprocal of the run length with respect to the posterior; (2) ChangeFinder (CF); and (3) ADWIN2, an adaptive-windowing change detector [9]. We conducted the sequential D-MDL algorithms with fixed window size in order to investigate their most basic performance in terms of the AUC metric.
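The dataset construction and the benefit computation described in the Datasets and Evaluation Metric paragraphs above can be sketched as follows. The jump size delta, the transition length width of the smoothing function S, and the linear decay of the benefit over the tolerant delay are illustrative assumptions; the paper's exact values and functional forms are not reproduced in the text.

```python
import numpy as np

def heaviside(x):
    """H(x): 1 if x > 0, else 0."""
    return (np.asarray(x) > 0).astype(float)

def smooth_step(x, width):
    """Continuous surrogate S for H: a linear ramp from 0 to 1 over `width` steps.
    The paper's exact choice of S is not reproduced in the text, so this shape is assumed."""
    return np.clip(np.asarray(x, dtype=float) / width, 0.0, 1.0)

def synth_mean_changes(n=10000, delta=1.0, gradual=False, width=300, seed=0):
    """Gaussian stream N(mu_t, 1) whose mean shifts by `delta` at t = 1000, ..., 9000.
    `delta` and `width` are illustrative values, not the paper's."""
    rng = np.random.default_rng(seed)
    t = np.arange(n)
    mu = np.zeros(n, dtype=float)
    for cp in range(1000, 10000, 1000):
        step = smooth_step(t - cp, width) if gradual else heaviside(t - cp)
        mu += delta * step
    return mu + rng.normal(0.0, 1.0, n)

def benefit_and_false_alarms(alarm_times, change_points, tol):
    """Total benefit and false-alarm count, with the benefit of an alarm decaying
    linearly from 1 to 0 over the tolerant delay `tol` (assumed form)."""
    total_benefit, false_alarms = 0.0, 0
    for t in alarm_times:
        delays = [t - cp for cp in change_points if 0 <= t - cp <= tol]
        if delays:
            total_benefit += 1.0 - min(delays) / tol
        else:
            false_alarms += 1
    return total_benefit, false_alarms

x_abrupt = synth_mean_changes(gradual=False)
x_gradual = synth_mean_changes(gradual=True)
```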
The sequential D-MDL algorithm with adaptive windowing outputs the window size rather than the D-MDL values themselves, hence in order to evaluate the effectiveness of the magnitude of D-MDL, the sequential D-MDL with fixed windowing is a better target for the comparison. All of CF, BOCPD, and ADWIN2 had some parameters, which we determined from 5 sequences so that the AUC scores were made the largest. Results The performance comparison is summarized in Table 1. We see that both for the datasets, in the case of abrupt changes, the 0th D-MDL performs best, while in the case of gradual changes, the 1st D-MDL performs best and the 2nd D-MDL performs worse than the 1st but better than the 0th. That matches our intuition. Because the 0th D-MDL was designed so that it could detect abrupt changes while the 1st one was designed so that it could detect starting points of gradual changes. The purpose of our analysis is to demonstrate the importance of monitoring the dynamics of the epidemic through detecting the occurrence of drastic outbreaks and their signs. We define outbreak as a significant increase in the number of cases in a country. We are mainly concerned with the following two problems: 1. How early are the outbreak signs detected prior to outbreaks? 2. How are the outbreaks/outbreak signs related to the social distancing events? As a byproduct, the analysis of the dynamics of the basic reproduction number R0 [31] is conducted, which can serve as supplementary information to the particular value estimated from the SIR model [38]. Data Source We employed the data provided by European Centre for Disease Prevention and Control (ECDC) via https://www.ecdc.europa.eu/en/publications-data/download-todays-data-geographic-dis We studied 37 countries with no less than 10,000 cumulative cases by Apr. 30 since some countries started to ease the social distancing around the date. More details can be found in Sec. 2.1 of the supplementary information. Data Modeling We studied two data models by considering the value of R0, which by definition is the product of transmissibility, the average contact rate between susceptible and infected individuals, and the duration of infectiousness [38]. At the initial phase of an epidemic, R0 is larger than one [31]. And the cumulative cases may grow exponentially [39,40,41,42]. We thus employed the Malthusian growth model [43] because it is widely used for characterizing the early phase of an epidemic [41,42]. In particular, the cumulative cases at time t, C(t), grows according to the following equation: where C(0) is the number of cases at the start of an epidemic, and r is the growth rate. In the experiments, we took the logarithm of C(t) to obtain the linear regression of the logarithm growth with respect to time as follows: log C(t) = rt + log C(0). (25) We modeled the residual error of the linear regression using the univariate Gaussian. When a change is detected in the modeling of the residual error, we examine the increase/decrease in the coefficient of the linear regression, i.e., r. We expect to detect changes in the parameter of the exponential modeling to monitor the increase/decrease of R0 because R0 is proportional to r as derived from the SIR [40]. In later phases, the exponential growth pattern may not hold. For instance, when R0 < 1, daily new cases would continue to decline and cease to exist [31]. Considering the complicated real scenarios, epidemic models with certain assumptions on the growth rate or R0 may not fit an epidemic at a given time. 
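Equation (25) corresponds to the Malthusian form C(t) = C(0) e^{rt}. The helper below is a minimal sketch, not the authors' pipeline: it fits the growth rate r by ordinary least squares on the log scale and returns the residuals, which are then modeled with the univariate Gaussian of equation (7); on an alarm, one would compare the slope r estimated before and after the detected change point.

```python
import numpy as np

def fit_growth_rate(cum_cases):
    """Least-squares fit of log C(t) = r * t + log C(0)  (equation (25)).

    Returns the growth rate r, log C(0), and the residuals of the regression,
    which the paper models with a univariate Gaussian for change detection.
    """
    cum_cases = np.asarray(cum_cases, dtype=float)
    t = np.arange(len(cum_cases))
    y = np.log(np.maximum(cum_cases, 1.0))   # guard against log(0) on early days
    r, log_c0 = np.polyfit(t, y, deg=1)      # slope = growth rate r
    residuals = y - (r * t + log_c0)
    return r, log_c0, residuals

# Toy usage: a noiseless exponential C(t) = 5 * exp(0.2 t) recovers r ≈ 0.2.
toy = 5.0 * np.exp(0.2 * np.arange(60))
r, log_c0, res = fit_growth_rate(toy)
print(round(r, 3))
```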
Therefore, we employed the univariate Gaussian distribution as in equation (7) to directly model the number of daily new cases, without assuming any patterns of the growth. The change in the parameter of the Gaussian modeling may reveal the relation between one and R0, i.e., R0 > 1 when daily new cases increase significantly or R0 < 1 when daily new cases decrease significantly. We conducted the hierarchical sequential D-MDL algorithm as in Sec. 2.6. The confidence parameter δ for the 0th D-MDL as in equation (21) was set to be 0.05. Those for the 1st and 2nd D-MDL, i.e. δ 1 , δ 2 as in equation (22), equation (23) were determined as follows: we calculated the D-MDL scores around the time when the initial warning was announced by an authority; we determined δ 1 , δ 2 so that the score was the threshold. For example, the initial warning for Japan was set on Feb. 27, when the government required closing elementary, junior high and high schools. If the resulting δ 1 , δ 2 was larger than 1, it was set to be 0.99 because of the concept of confidence parameter. Results for South Korea We present two representative case studies (the results for all the studied countries are included in Sec. 2.2 of the supplementary information), South Korea and Japan (next subsection). The date of the implementation of social distancing was considered as Feb. 25 from which many non-essential services were closed. We present results in Fig. 1 and Fig. 2 for the Gaussian modeling and the exponential modeling, respectively. Change scores were normalized into the range from 0 to 1. With the Gaussian modeling, there were several alarms raised before the social distancing event. For each alarm raised by the 0th D-MDL, the interpretation can be that a statistically significant increase of cases occurs, with reference to the cases in Fig. 1(a). Hereafter, a change that is detected by the 0th D-MDL and that corresponds to the increase of cases is regarded as an outbreak, which instantiates our definition of outbreak. The outbreak detection is the classic change detection. We further relate it to R0. Around the dates of the alarms, R0 > 1 was considered since we can confirm that the new infections resulted from community transmission. Correspondingly, R0 was estimated at 1.5 by an epidemiological study [44]. When the 0th D-MDL raised an alarm, the window size shrank to zero. Before that, both the 1st and the 2nd D-MDL raised alarms, which are interpreted as the changes in the velocity and the acceleration of the increase of cases, respectively. We can conclude that the 1st and the 2nd D-MDL were able to detect the signs of the outbreak by examining the velocity and the acceleration of the spread. The sign detection is the new concept with which we propose to supplement the classic change detection. The 0th D-MDL raised several alarms after the event, and the latest ones corresponded to decreases of cases. It is not difficult to tell that the corresponding R0 was less than one. We think that the social distancing played a critical role in containing the spread because it can suppress R0 through reducing the contact rate, which was supported by studies e.g. [44,45,46,47]. Both the 1st and the 2nd D-MDL again demonstrated the capability of early sign detection. As for the exponential modeling, there was only one alarm raised by the 0th D-MDL and it was after the social distancing event. Although the alarm was raised on Apr. 2, the date as the change point was within the window as of Apr. 2, and was identified as Mar. 1. 
It turned out that the alarm corresponded to a decrease in the coefficient of the linear regression. As we can see the cumulative cases in Fig. 2(a), the sub-curve before Mar. 1 might have experienced an exponentially upward trend while the sub-curve after that became almost flatten. We can conclude that R0 declined from a value larger than one to a value considerably smaller than one on Mar. 1. The 1st D-MDL and the 2nd D-MDL did not raise alarms of signs, which may be because R0 changed slowly as shown by only one alarm raised by the 0th D-MDL. Results for Japan The results are presented in Fig. 3 and Fig. 4. The policies of social distancing were enacted on Apr. 7. With the Gaussian modeling, the observations were similar to those for South Korea before the event. But after that, the 0th D-MDL raised no alarms, implying that R0 did not decrease to a value considerably smaller than one. With respect to the exponential modeling, there were no alarms raised by the 0th D-MDL, showing that there were no statistically significant changes in the value of R0. As a comparison, the Gaussian modeling was effective and efficient at estimating the relation between one and R0. The exponential modeling was able to monitor the change in the value of R0, but in a relatively slower manner. The two models form a complementary relation on monitoring the dynamics of R0. For instance, for Japan, the Gaussian modeling showed that the value of R0 reminded at a value larger than one, and the exponential modeling showed that its value did not significantly change. In terms of sign detection, the Gaussian modeling successfully detected signs while the exponential modeling did not, which is because the Gaussian modeling deals with daily new cases whose growth pattern would significantly change in the same direction (i.e., either increase or decrease) within a short period with either R0 > 1 or R0 < 1 while the exponential modeling deals with cumulative cases whose growth pattern only changes significantly when R0 changes significantly, and a significant change of R0 within a short period did not happen in South Korea and Japan. Summarization of Results on Individual Countries This section presents two observations. The first is how early the signs can be detected prior to outbreaks with the Gaussian modeling. For the countries studied, there were 106 outbreaks in total. The number of outbreaks whose signs were detected by either the 1st or the 2nd D-MDL is 68, representing a detection rate of 64%. For each outbreak whose signs were detected, we measured the time difference between the earliest sign alarm and the outbreak alarm. The time difference in terms of the number of days is 6.25 (mean) ± 6.04 (standard deviation). Considering the fast spread, six days can buy us considerably long time to prepare for an outbreak, and even to avoid a potential outbreak. As a comparison, the 1st D-MDL detected signs for 65 outbreaks and the 2nd D-MDL detected signs for 27 outbreaks. The smaller number for the 2nd D-MDL might be because the 1st D-MDL is better at detecting starting points of gradual changes, and is consistent with results on the synthetic datasets as in Table 1. The number of days before which the 1st D-MDL detected signs was 6.35 ± 5.91, and the number for the 2nd D-MDL was 5.56 ± 6.50. Note that not all the outbreaks allowed for sign detection since the 1st D-MDL sign detection requires one more data point and the 2nd one requires two more data points in the window than the 0th D-MDL, respectively. 
The number of outbreaks allowing for a 1st D-MDL sign is 88, while the number for a 2nd one is 81. Hence, it turns out that some outbreaks occurred too quickly for signs to be detected. Second, we observed that, on average, countries responding faster, in terms of a smaller number of alarms before the social distancing event, saw a quicker contraction of daily new cases. As of Apr. 30, the curve of daily new infections in many countries had flattened and had even started to turn downward. Therefore, alarms for declines in the number of cases from the global peak were raised for ten countries: Austria, China, Germany, Iran, Italy, Netherlands, South Korea, Spain, Switzerland, and Turkey, in alphabetical order. These countries are referred to as downward countries. In total, the number of all kinds of alarms raised before the event for downward countries was 4.30 ± 2.79, while it was 5.96 ± 4.22 for the other countries. Separately, the number of alarms raised by the 0th, the 1st, and the 2nd D-MDL for downward countries was 2.00 ± 1.55, 1.70 ± 1.42, and 0.60 ± 0.66, respectively, while it was 1.85 ± 1.41, 3.08 ± 2.34, and 1.04 ± 1.53, respectively, for the other countries. Therefore, if social distancing is a viable option, it is suggested that the action had better be taken early, e.g., before more than about four alarms have been raised. We further measured that it took an average of 30 days to suppress the spread if prompt social distancing policies were enacted. By contrast, the average number of days from the event to Apr. 30 is nearly 37 for non-downward countries, which is considerably more than the time used for suppressing the spread in downward countries. Table 2 summarizes all the statistics; please refer to Sec. 2.3 of the supplementary information for more detailed numbers.

Table 2: Summarization of statistics, where ± connects mean and standard deviation.
Total number of outbreaks: 106
Number of outbreaks whose signs were detected by either the 1st or the 2nd D-MDL: 68
Detection rate of either the 1st or the 2nd D-MDL: 64%
Number of days before an outbreak for the first sign of either the 1st or the 2nd D-MDL: 6.25 ± 6.04
Total number of outbreaks that allowed for the 1st D-MDL sign detection: 88
Total number of outbreaks that allowed for the 2nd D-MDL sign detection: 81
Number of outbreaks whose signs were detected by the 1st D-MDL: 65
Number of outbreaks whose signs were detected by the 2nd D-MDL: 27

Conclusion
This paper has proposed a novel methodology for detecting signs of changes from a data stream. The key idea is to use the differential MDL change statistics (D-MDL) as a sign score. This score can be thought of as a natural extension of the differentials of the Kullback-Leibler divergence for measuring the degree of change to the case where the true mechanism generating the data is unknown. We have theoretically justified D-MDL using the hypothesis testing framework and have empirically justified the sequential D-MDL algorithm using synthetic data. On the basis of the theory of D-MDL, we have applied it to the COVID-19 pandemic analysis. We have observed that the 0th D-MDL found change points related to outbreaks and that the 1st and 2nd D-MDL were able to detect their signs several days earlier. We have further related the change points to the dynamics of the basic reproduction number R0.
We have also found that the countries with no more than five changes/change signs before the implementation of social distancing tended to experience the decrease in the number of cases considerably earlier. This analysis is a new promising approach to the pandemic analysis from the view of data science. Future work includes studying how we can integrate the change analysis such as our methodology with the conventional simulation studies such as SIR model. It is expected that our data science approach has a complementary relation with the simulation approach and gives new insights into epidemiology. Proof of Theorem 2.2 Let the maximum likelihood estimator of θ i beθ i (i = 0, 1, 2, 3). Let us define the event as Type I error probability is evaluated as follows: Let the true parameter be θ * 0 and θ * 1 . where we have used the following relations: Next we evaluate Type II error probability. Let us define the event as Let the true parameter be θ * 2 and θ * 3 . Then under the event (2), The Type II error probability is upper-bounded as follows: where d and p θ * 2 * p θ * 3 are defined as in Theorem 2.1 and 2.2. This completes the proof. Details of Experiments Here, we give more details about the experiments on the change/change sign detection for all the countries studied. Data Information We employed the data provided by European Centre for Disease Prevention and Control (ECDC) via https://www.ecdc.europa.eu/en/publications-data/download-todays-data-geographic-dis For information, there are 37 countries that had no less than 10,000 cases in total by Apr. 30 We collected the date on which the social distancing was implemented from the information listed in the IHME COVID-19 predictions via https://covid19.healthdata.org/united-kingdom. If a certain country is not listed in the website, we referred to the Wikipedia page for the COVID-19 pandemic of the country, e.g., the COVID-19 pandemic in South Korea https://en.wikipedia. Ecuador was excluded from the list above because the social distancing is introduced to be related to changes incurred by declines in the number of cases and there was a very large number of cases in the initial phase of the epidemic in Ecuador. The large number might be an outlier due to the data collection procedure, and would make any changes after that date downward changes. But we still studied the change/change sign detection for Ecuador. Results for All the Studied Countries This section presents the results for all the studied countries with both the Gaussian modeling and the exponential modeling. Since the interpretation for each country is similar and there are many countries, we omit the explanations. Please refer to the main content for the illustration of South Korea and Japan. One point worthy mentioning is that with the exponential modeling, the 1st and the 2nd D-MDL raised alarms for Pakistan ( Fig. 76 shows the histograms of the number of days before which the 1st D-MDL and the 2nd D-MDL detected the first signs of outbreaks, respectively. Fig. 77 shows the histograms of the numbers of alarms (all the 0th, the 1st and the 2nd) before the implementation of social distancing for downward countries and non-downward ones, respectively. Fig. 78(a) shows the number of days between the implementation of social distancing and the date of the first alarm raised by the 0th D-MDL for the decline in the number of cases for downward countries. Fig. 78(b) shows the number of days between the implementation of social distancing and Apr. 30 for non-downward countries.
10,057.6
2020-07-30T00:00:00.000
[ "Computer Science", "Mathematics" ]
Existence of Nonradial Solutions for Hénon Type Biharmonic Equation Involving Critical Sobolev Exponents and Applied Analysis 3 To avoid heavy notation from now on, we will write simply U j for U λ,x (j) , PU j for PU λ,x (j) , and φ j for φ λ,x (j) . We set d jl = x (l) − x (j) , d = 1 2 min j ̸ =l 󵄨󵄨󵄨󵄨 d jl 󵄨󵄨󵄨󵄨 = 1 2 󵄨󵄨󵄨󵄨 x (1) − x 󵄨󵄨󵄨󵄨 , (18) and we assume that 2d ≤ r and λd ≥ 1 for all λ under consideration. Note also that due to the definition of x, we have d = 2 (1 − r) sin π k ∼ C k (19) for all r small. Lemma 2. ∫ Ω U 2 ∗ −1 j U l dx = {{{{{{{{{ {{{{{{{{{ { S N/4 + O ((λr) −N ) , j = l, Introduction In this paper we consider the following Hénon type biharmonic problem: where ≥ 0, 2 * = 2/(−4), Ω is the unit ball of R , ≥ 5, and n denotes the unit outward normal at the boundary Ω. We consider first the case where = 0, namely, the equation It is well known that (2) admits no nontrivial radial solution (see [1], Theorem 3.11, or [2], Theorem 4).The nonexistence of any nontrivial solution to (2) seems to be still unknown; only more restricted results are available.In order to obtain existence results for (2), one should either add subcritical perturbations or modify the topology or the geometry of the domain.For subcritical perturbations, we refer to [2,3] and references therein.Domains with nontrivial topology are studied in [2,4].They demonstrated how domains with topology often carry solutions that cannot be present otherwise.The corresponding second order elliptic problem has been investigated by Bahri and Coron in [5].Berchio et al. [6], among other things, considered the minimization problem inf where 2 0,rad (Ω) denotes the subspace of radial functions in 2 0 (Ω).Actually, they treated general polyharmonic problem.They proved the infimum in (3) is attained.The minimizers of (3), after rescaling, are a solution of (1).It is natural to ask whether (1) has a nontrivial nonradial solution.We will answer this problem partially here. Our main result is as follows. Theorem 1.Let ≥ 8 and let Ω be the unit ball in R .Then, for every > 0 large enough, problem (1) admits at least one nonradial solution. The corresponding second order elliptic problem, namely, the Hénon equation, has been studied by many authors, where > 1. Ni [7], among other things, proved the existence of radial positive is radial provided is large enough.Serra [12] studied the case = ( + 2)/( − 2) and proved the existence of nonradial positive solutions of (4) for large.Theorem 1 can be regarded as an extension of Serra's result to biharmonic problem. In order to outline the proof of Theorem 1, we introduce some notations.We write R = R 2 × R −2 ≃ C × R −2 and = (, ).For a given integer , let be the group Z × O( − 2).We consider the action of on 2 0 (Ω) given by () () = () (, ) = ( (2/) , ) , where ∈ {0, 1, . . ., − 1} and ∈ O( − 2).Define It is easy to see that functions in are radial in .Since both the numerator and the denominator of the functional are invariant under the action of , the functional is invariant.So the critical points of restricted to are critical points of .After scaling, these correspond to weak solution of (1), which are in fact classical solutions by standard elliptic theory (see [13,14]).Set () . The paper is organized as follows.In Section 2, we establish some estimates we will need and investigate the compactness properties of Palais-Smale sequences for .In Section 3, we prove Theorem 1.Throughout this paper, the constant will denote various generic constants. 
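Several displayed formulas in the introduction above, including problem (1) and the minimization problem (3), were lost in extraction, together with the symbols α, N, and 2*. The LaTeX fragment below is a plausible reconstruction under explicit assumptions: the critical exponent 2* = 2N/(N − 4), the weight |x|^α with α ≥ 0, Dirichlet boundary conditions (suggested by the outward normal n mentioned in the text), and the usual quotient of the H²₀ seminorm by the weighted critical norm; the original paper may differ in these details.

```latex
% Plausible reconstruction of problem (1) and of the radial minimization (3);
% the exponent, the weight, and the boundary conditions are assumptions rather
% than text recovered verbatim from the source.
\begin{equation}
  \begin{cases}
    \Delta^{2} u = |x|^{\alpha}\,|u|^{2^{*}-2}u & \text{in } \Omega,\\
    u = \dfrac{\partial u}{\partial n} = 0      & \text{on } \partial\Omega,
  \end{cases}
  \qquad 2^{*} = \frac{2N}{N-4},\quad \alpha \ge 0,\quad N \ge 5,
\end{equation}
\begin{equation}
  S_{\alpha,\mathrm{rad}}(\Omega)
    = \inf_{u \in H^{2}_{0,\mathrm{rad}}(\Omega)\setminus\{0\}}
      \frac{\int_{\Omega} |\Delta u|^{2}\,dx}
           {\bigl(\int_{\Omega} |x|^{\alpha} |u|^{2^{*}}\,dx\bigr)^{2/2^{*}}}.
\end{equation}
```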
Case 1.Consider = .Direct computations yield that where denotes surface area of unit sphere in R .Combining (21), we prove the first case of Lemma 2. Case 2. Consider with the same type of calculation as in the proof of the first case, we see that To estimate the integral over R in (22), we follow exactly the calculation in [5] (see page 279-280).It is easy to see that We have also where We denote by Green's function of Δ 2 ; that is, where denotes the Dirac mass at , and n is the outer unit normal at ∈ Ω.We also denote by the regular part of ; that is, (, ) = − where By the definition of and ( () , ), we get For each ∈ Ω, we have Thus, For each ∈ Ω, we have By [15] (page 155), we have the following explicit formula: where with ∈ Ω, ∈ Ω.Using (45) and (47), we have, for all ∈ Ω, Thus, We split the term to be estimated as and then for the last integral we have Concerning the integral over Ω\ /2 ( () ), we first notice that, by (39), Therefore, As in [5,12] we expand ( () , ⋅) up to the fifth order near () , writing where denotes the th order term (e.g., Note that Δ 2 ( () , ) = 0. Using the symmetry of and the usual scaling arguments, we have (59) By ( 53)-(59), we obtain The proof of Lemma 3 is completed. Case 2. Consider ̸ = .Using the same argument similar to the ones in the proof of the first case, we get the desired result. Proof.(i) The proof makes use of the same estimate as the one in the proof of Lemma 2, with replaced by this time. (ii) We first write and notice that the first integral in the right hand side in (61) has been estimated in (35).Next, we treat the second integral.We will make use of notation and formulas already established in the proof of Lemma 2 to get estimate (35).Since /2 ≤ | |/4 by definition, we have the decomposition Now we have to evaluate three integrals in the right hand side in (63).The first integral and the third integral have been estimated in (33) and (32), respectively.Finally, we deal with the second integral over Γ 2 \ /2 (0). Case 1.Consider = .Set For ∈ Ω, ∈ Ω, we have By definition of (, ), we get By (48), we obtain Therefore, we can write Since ≤ (1/2), the last term can be estimated as in (59); namely, which gives the required estimate. Case 2. Consider ̸ = .The computation can be adapted from the ones in the proof of Case 1. Define Due to the definition of the points () , we have ũ ∈ .Notice that ũ depends on , , and through the choice of the points () .Lemma 6.As → ∞ (i.e., → 0), one has Proof.By definition of (see (75)) and , By Lemmas 2 and 3, we have By the symmetry of the points () , we have since the series of 1− is convergent.Recall that will be taken small so that we can always assume ≤ 1/2.By (80), we obtain from Lemma 2 (81) Substituting ( 79) and ( 81) into (78) and recalling the definition of , we prove (76). By the first part of Lemmas 4 and 5, and recalling that > ∼ / and The remainders generated by the second part of Lemma 4 can be dealt with as in (80).We obtain Therefore, since / → 0. Finally, from Lemma 5, (89) Substituting ( 86), (88), and (89) into (85) and recalling the definition of , we obtain the required estimate.Proposition 7. Let ≥ 6.For every > 0, there exists > 0 such that, for every integer ≥ , Σ < 4/ . 
(90) Proof.The function ũ constructed in (75) depends on , , and , and for each it belongs to .We show that, for appropriate choice of these parameters, there results (ũ) < 4/ .For simplicity, we set and we begin with an estimate of , noticing that we can write it as By definition of and since | () − () /| () | 2 | ≥ for all , , we have Moreover, as in (87), so we obtain Note that (1 − 2) 2/2 * ≥ 1 − 3, for all ≥ 6, > 0 and small enough; we see that, from Lemma 6, Choose = −3(−5)/4(−2) and = 1+ with > 0 and small.It is easy to see that all the quantities depending on in the square brackets tend to zero as / → ∞; therefore, we obtain (ũ) We must check that, for suitable values of the parameters, the right hand side is strictly less than 4/ .Direct computations show that it is enough to prove We take so large that and this is possible because and 2+(3/4)(−5) < −3 for all ≥ 6, as one immediately checks.Furthermore, noticing that 3( − 5)/4( − 2) < − 2, we see that since / → 0. Therefore, the third and the last big is unnecessary in the expression of .We are thus led to Since is fixed, we have < 0 for large (depending on ) if we take small enough (essentially < 3(−5)/4(−2)(− 4)). Next we show that if Σ < ( The corresponding energy functional of problem ( 1) is defined by by Lemma 10.1 in [17], there exists a sequence of rescaling Since supp ⊂ Ω, we get → ∞ as → ∞ and ∈ Ω. We can also assume that → ∈ Ω. Lemma 10.Let T be the sequence constructed above.Then, as → ∞, one has Moreover, the sequence Set () = (/ + ).Changing variables as in the first part, we have Since → in 2,2 (R ), we get by the Brézis-Lieb lemma [19]: By changing variables, we obtain Inserting these into (118), we get which, combined with (116), yields (i). Proof.It is clear that there exists a sequence of positive numbers → ∞, a sequence 1 of points of Ω with 1 → 1 ∈ Ω \ {0}, and a nontrivial critical point V 1 of , 1 such that, setting T 1 = T( 1 , 1 ), the sequence is a Palais-Smale sequence for at level (2/)Σ /4 − (V 1 ).We now iterate this scheme.If 1 → 0 strongly in 2 * (R ), then the fact that it is a Palais-Smale sequence implies that 1 → 0 strongly in 2,2 (R ).Since also and the lemma is proved with = 1.Otherwise, 1 ⇀ 0 weakly in 2 * (R ) but not strongly.In this case, starting with Lemma 10.1 in [17], we can work on 1 as we did for .So we can find sequences 2 → ∞, 2 → 2 ∈ Ω \ {0} and a nontrivial critical point V 2 of , 2 such that the sequence is a Palais-Smale sequence for at level (R ), then we obtain and the lemma is proved with = 2. Otherwise, 2 ⇀ 0 weakly in 2 * (R ) but not strongly, and we iterate the above argument.This procedure will end after a finite number of steps.Actually, notice that, by Remark Proof of Theorem 1 This last section is devoted to the proof of Theorem 1.We are now ready for the main result of the paper. Proof of Theorem 1.For every > 0, problem (1) has a solution in some .Indeed, given > 0, there exists > 0 such that, for ≥ , By Proposition 14, Σ is achieved by a function ∈ .By invariance, is a critical point of on 2 0 (Ω) which, after scaling, gives rise to a weak solution of (1).By [13], is a classical solution.We have to show that, at least for large, is not radial. 4/ for suitable values of and . 
4/ then Σ is achieved.So we are led to analyze what happens if a minimizing sequence in tends weakly to zero in 2 0 (Ω).Let ∈ be a minimizing sequence for problem (9) such that ⇀ 0 weakly in 2 0 (Ω).Without loss of generality, we can assume that ( ) → 0 in by Ekeland's variational principle.Since is invariant under the action of , we also have ( ) → 0 in 2 0 (Ω).By homogeneity of (), we normalize to obtain a sequence (still denoted by ) such that as → ∞, ( ) → Σ , T + (1) ∫ We are now ready to describe the behavior of Palais-Smale sequence of .Let { } be a Palais-Smale sequence for at level (2/)Σ /4 and ⇀ 0 in 2 0 (Ω).Then there is a positive (depending only on Σ ) such that, for every = 1, 2, . . ., , there exist sequences { } ⊂ R + and { } ⊂ Ω, with → ∞ and → ∈ Ω \ {0} as → ∞, and there exists a nontrivial critical point V ∈ 2,2 (R ) of , such that (up to subsequence) Remark 12. Checking the process of the proof of Lemma 11, it is easy to see that if one does not suppress the cut-off functions , one can obtain the following representation of : by definition of , so that, after at most := [(Σ /) /4 ] steps, the remainder will be a Palais-Smale sequence at level zero; namely, it will be (1) in 2,2 (R ), obtaining the requested representation for and ( ).
3,182.2
2014-10-14T00:00:00.000
[ "Mathematics" ]
A REAL TIME COMMIT PROTOCOL BASED ON PRIORITY FOR DRTDBS . With the rapid boom within the subject of information technological understanding, everything is becoming online and need of allotted database application is growing. To coordinate the transaction’s execution several methodologies have been proposed. Distributed real-time database device (DRTDBS) deals with numerous troubles that degrade the machine performance, priority inversion are truly certainly one of them. In DRTDBS primarily based programs, the vital goal is to lessen the amount of transactions missing their ultimate deadline by way of the usage of minimizing commit time. This paper presents a real-time commit protocol based mostly on priority to clear up the inversion problem in allotted real-time environments. Focus of this protocol is to lessen the commit processing time through reducing messages and time overhead. The real time overall performance of this protocol is measured with the help of distributed database machine simulation. The effects show the substantial improvement in actual time device overall performance with none consistency problem. INTRODUCTION Today everything is becoming online to make human life ease and diverse programs are growing to assist them, these applications are based on database systems. Various studies had been finished to improve the general performance of the actual-time facts based application programs. Due to an inherently allotted nature of these applications, they may be accessed by using many websites globally at the identical time. Many such applications are available within the industries. Stock marketplace, Educational institutions, Banking area, Social networking websites (Facebook, Twitter, Linked In), Organization Automation, name/call tracing etc. are some of the example of DRTDBS. These changing necessities attracted the database research network to develop a commit protocol especially focused on fulfilling the real time need of the end users.DRTDBS is the combination of real time system and disbursed database system, simply so it needs to satisfy the records (data) consistency and timing constraints. It is defined as the logical sequence of severe interrelated databases without globally shared memory and connected through a network, specifically designed to serve the purpose of real time system (RTS) in allocated environment [1]. For speedy access to the massive quantity of distributed real-time records, nowadays, RTS showcase very adaptive, dynamic or maybe intelligentbehavior and shares lengthy lifetimes, a couple of degree timing constraints, and becoming more and more complex. With the logical as well as temporal properties in Real Time Systems (RTS) correctness of end result is decided and ensures the atomicity too. Real time system assists transactions having explicit time constraints; this is represented as a deadline, which means that it ought to be finished earlier than given specific time [2, 3, and 4]. Distributed real-time transactions (DRTT) categorized as hard, firm and soft DRTT depending on the effects of lacking its deadline. 
Cohorts of a distributed transaction carry out their operations at different sites during the execution stage, while, we use commit protocol for making sure atomicity of the transaction all through a commit stage Although, to meet the closing date of DRTT continues to be tough due to numerous reasons, conflicts factor between transactions become a main component chargeable for degradation in overall performance of the device [5] [6].DRTDBS deals with many issues to ensure the ACID properties. Priority inversion problem is a cease end result of executing-committing conflict which takes place when a high priority transaction is blocked thru a low priority transaction's prepared cohort. To resolve the executing committing conflicts, the best choice of commit protocol is required in DRTDBS otherwise it is able to increase the execution times of transactions [7]. As an answer of the hassle cited, our proposed protocol reduces the delay as lots as half of by using requiring a single round of message switch which in turn leads to the enormous overall performance development over PIC protocol. Rest of the paper is ordered as follows: Section.2 in short gives the literature survey of severe protocols with their drawbacks. Section.3 defines the proposed protocol. Performance Evaluations of proposed protocol has been accomplished in Section.4, and, Section.5 concludes the paper. LITERATURE REVIEW Although, lot of research work is done to optimize the executing-executing conflicts, but forexecutingcommittingconflicts comparatively less work is done to optimize commit processing in DRTDBS. The major aim in DRTDBS is growing the proportion of the transactions which may be finished effectively earlier than its remaining time; not the throughput of the applications. A listing of commit protocols has been proposed to reduce the commit processing time in DRTDBS, to make sure swift completion of the DRTT. 2.1To reduce the fruitless.borrowing a commit protocol referred to as ACTIVE has categorized the borrower cohorts as commit.and abort based [8].So that data inaccessibility is reduced because borrowerdependent on commit is allowedto lend its data to an incoming cohort. But this protocol consider borrower having borrow factor greater.than a threshold value only. 2.2According to FIVE, to solve the problem of kill transactions and also to reduce the fruitless borrowing it categorizing the transactions into three sections according to borrowing factor [9]. This protocol overcome the problems if ACTIVE protocol. 2.3SPEEDITY is another approach which says that if there is delay from lender in commitment due to serious reason, the transaction starts their execution by reversing the abort dependency with commit dependency.in between shadow of borrower and lender and by applying the shadowing approach it ensures the transactions survival too [10]. 2.4An automated methodbased on multi-objective genetic algorithm and heuristic particle.-swarm-optimization technique called Design Space Explorationfor Component BasedReal-Time Distributed System says that after changing hardware topology and task mapping on different nodes and also by altering their priority to execution the presented method generates alternative architectures [11]. 
2.5SWIFT and PROMPT study firm deadline based applications [12,13].In such programs if any transaction misses its pre-described remaining time or already disregarded their final time then it will be right away aborted, and all the resources held through it get released so that these resources will have become available to be used by a few special transaction inside the system. Here, permitting a transaction to run similarly which has already not noted its last time is of no need and once in a while it creates a horrific effect on the system average performance. 2.6PIC protocol creates an extra overhead with the two rounds of message transfer in processing priority inheritance records messages [13]. That's why, PIC protocol doesn't provide any performance enhancements over 2PC protocol due to the delays. 2.7 For transaction processing over the open community a queue sensing distributed real time commit protocol-QSDRCP is advanced with the aid of creating the chain of commit structured transactions [14].It increases the commit transaction percentages and increases the overall system's efficiency. PROPOSED PROTOCOL FOR DRTDBS The proposed protocol offers a well changed opportunity method to the PIC protocol. It lets in a prepared cohort of low precedence transaction to get access to data object. The overall performance of PIC protocol is not as expected as in disbursed real time environments.But the proposed protocol, gives significantly better result by completing the process in single phase.According to this protocol in case of Priority Inversion, when at any site acohort of low prioritytransaction 't 1 ' is in voting phase i.e., it receive VOTE MESSAGE from coordinator. If at the same time another new transaction 't 2 ' having high priority arrive and request for the same data item held by 't 1 The proposedprotocol follows the following steps. ' then their priority will be upgraded on the basis of HEALTH-FACTOR value. HEALTH-FACTOR is value that is equal to TIME-LEFT, which means time left for a transaction to reach their deadline. After that the PRIORITY-INHERENT message is directly send to all other sibling cohorts and coordinator in parallel fashion and after that, all further processing associated with the transaction at that site takes place at inherited priority. Likewise PIC protocol there may be no want to PRIORITY-INHERENT message trade between coordinator and cohorts. By the usage of this protocol all Cohorts will get hold of the PRIORITY-INHERIT message in half time compared to the base PIC protocol. Most importantly, it'll extensively restrict the firm real time transaction completion time by using manner of disposing of one round of message switch. This is the most vital resource in distributed real-time environment. Let's consider the low prioritytransaction denoted as 't 1 ' and high priority transaction denoted as 't 2 1. 2. Check, ifpriority (t is in voting phase; 2 ) is greater than or equal to priority(t 1 3. Block t ); 2 5. If priority(t 2 ) greater than priority(t 1 6. Calculate the health factor of t1 in the voting phase , t ); HF Block t ); 2 for the time w(t 2 10. Else If ); 11. Reverse the priority from t 2 to t 12. Send PRIORITY-INHERIT message to all participants (including Coordinator) of T End if until t1 is committed or aborted; With the help of figure 1 drawn below we can visualize the working scenario of proposed protocol. PERFORMANCE ANDEVALUATION . This eliminates one phase of message transfer, and thereby improves system's performance. 
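Because the numbered steps of the proposed protocol above did not survive extraction intact, the sketch below encodes only the behaviour that the surrounding prose states unambiguously: when a higher-priority transaction t2 requests a data item held by a prepared (voting-phase) low-priority cohort t1, the HEALTH-FACTOR (time left to the deadline) is used to decide whether t1 inherits t2's priority, and the PRIORITY-INHERIT message is then sent directly to the sibling cohorts and the coordinator in one parallel round, eliminating the second message round of PIC. The class and function names, the message strings, and the exact branch on the health factors are illustrative assumptions, not the paper's specification.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Transaction:
    name: str
    priority: int
    deadline: float              # absolute deadline of the transaction
    in_voting_phase: bool = False

def health_factor(txn: Transaction, now: float) -> float:
    """HEALTH-FACTOR = TIME-LEFT: the time remaining before the transaction's deadline."""
    return txn.deadline - now

def on_conflicting_request(t1: Transaction, t2: Transaction, now: float,
                           siblings: List[str], coordinator: str) -> List[str]:
    """Decision taken when t2 requests a data item held by cohort t1.

    Returns the messages that would be sent; a real implementation would talk to
    the lock manager and the network layer instead of returning strings.
    """
    if not t1.in_voting_phase or t2.priority <= t1.priority:
        return [f"BLOCK {t2.name}"]                       # no priority inversion: t2 simply waits
    # Priority inversion: use the health factors to decide whether t1 inherits t2's priority.
    if health_factor(t1, now) <= health_factor(t2, now):  # assumed tie-breaking rule
        t1.priority = t2.priority                         # t1 finishes its commit at the inherited priority
        return [f"PRIORITY-INHERIT {t1.name} -> {target}" # single parallel round, no relay via the coordinator
                for target in siblings + [coordinator]]
    return [f"BLOCK {t2.name}"]                           # t1 is closer to its deadline; t2 waits

# Hypothetical usage with made-up numbers:
t1 = Transaction("T1", priority=2, deadline=90.0, in_voting_phase=True)
t2 = Transaction("T2", priority=5, deadline=150.0)
print(on_conflicting_request(t1, t2, now=60.0,
                             siblings=["cohort_B", "cohort_C"], coordinator="coordinator"))
```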
A distributed real-time database system including 6 sites (200 data item/site) was simulated using different parameters assumed in earlier studies for main . memory resident [15,16,17]. We ensured large level of resource and facts contention at some point of general overall performance.We ensured significant level of resource and data contention during performance study. The 5 impartial run (5000transactions/run) is calculated as a result in every set of experiment. The proposed protocol is compared with PIC protocol. Health-Factor is considered as priority challenge coverage. In test, 'Miss %' that is described as percentage of the transaction this isn't able to satisfy their deadline, is used to measure the performance degree. The performance of proposed protocol is measured through particularly discovering out the amount of transactions that misses their deadline and gets killed. As Our work is an extension to PIC protocol. Therefore, we compared the performance of presented protocol with PIC protocol. Figure.2 and Figure. Result shows some differences between both protocols performances under all load conditions. CONCLUSION In this paper, we suggest a commit protocol based totally at the idea of precedence inheritance to overcome the trouble of execute-commit conflict. According to this protocol, there is no need of two round of message transfer between participants (Coordinator and Cohorts) when priority inversion occurs; only single round message transfer between participants is needed. Here, on the basis of HEALTH-FACTOR, transaction's priority is reversed between each other and all Cohorts will receive the PRIORITY-INHERIT message directly from updated cohorts instead of coordinator. So it reduces the time in assessment to the base PIC protocol and also reduces the commit processing time. Most importantly, it will significantly minimize the overall distributed firm real-time transaction completion time which is the most critical resource in distributed real-time environment. As future research work, an extensive performance study of proposed work on transaction execution in real environment.
2,553.6
2018-04-20T00:00:00.000
[ "Computer Science" ]
Social vices associated with the use of Information Communication Technologies ( ICTs ) in a Private Christian Mission University , Southern Nigeria 1 Department of Student Affairs, Covenant University, P. M. B. 1023 Ota, Nigeria. 2 Department of Psychology, Nnamdi Azikiwe University, P. M. B. 5025 Awka, Nigeria. 3 Department of Business Management, Covenant University, P. M. B. 1023 Ota, Nigeria. 4 Department of Religion and Human Relations, Nnamdi Azikiwe University, P. M. B. 5025, Awka, Nigeria. 5 Department of Political Science, Nnamdi Azikiwe University, P. M. B. 5025, Awka, Nigeria. INTRODUCTION The rapid aculturation arising from globalisation has been identified as an important factor responsible for increase social vices in modern societies (Udebhulu, 2009).Individuals have to contend with these vices because they violate societal norms and values.In other words, they could be regarded as 'a thorn in the flesh' of human peace and tranquility.Although Jones et al. (1985) noted that the rate of vices in the developed economy is very high as indicated by its increasing occurrence, but it could be observed that it has minimal impact on national development because of a robust structure to fund security systems that are committed to protecting lives and properties and bringing perpetrators to book.However, the problem is a major issue of concern in most developing countries, where complex vices are alien to their culture (Omonijo and Nnedum, 2012b).Moreover, powerful security network and committed security personnel to combat social-ills, mostly ICT related ones are relatively lacking in this aspect of the world, and most especially in their Universities. Nigeria is a classic case in point, where large quantity of literature on social vices are found.Prominent among them being the works by Jumaat (2001), Kuna (2008), Atabong et al., 2010;Fasasi, 2006;Kayuni, 2009;Olasehinde-Williams (2009), Okafor and Duru (2010), Jekayinfa et al. (2011), Osakwe (2011) and Omonijo et al. (2013b).Other studies focused on vices hindering the peace and smooth running of academic calender on many campuses.Some examples include, investigations on the escalation of cultism, which has claimed lives of many young promising students (Ajayi et al., 2010;Arijesuyo and Olusanya, 2011); dynamics of Gang Criminality and Corruption in Nigeria Universities (Kingston, 2011); cultism or gangsterism and its effect on moral development of learners in Nigerian tertiary institution (Pemede and Viavonu, 2010). Another frequently studied topic is on ICT-related social problems prevailing among underdraduates.These challenges have been threatening academic achievement of many students in these institutions (Okwu, 2006;Utulu et al., 2010;Abdulkareem and Oyeniran, 2011;Folorunso et al., 2006;Omonijo et al., 2011a;Omonijo and Nnedum 2012b).Global revolution in ICTs, in spite of its usefulness, has lucid problems it creates to diverse areas of human endeavours (Okonigene and Adekanle, 2010;Omonijo and Nnedum, 2012b).It is also evident from these studies that the educational sector seems to have failed in rendering quality education that is much needed for personal and national development, hence the birth of private Universities in Nigeria (Obasi, 2006;Ajadi, 2010;Aina, 2010) as cited by Anugwom et al. 
(2010); suggesting that the high level of discipline which has continued to decline in the public sector educational systems, is one of the core issues being addressed in the Private sector.Hence, social vices, mostly, ICT-related types, which students indulge in with impunity in the public sector is regarded as grievous misconducts and treated as such in the private sector driven educational system.Consequently, this study is focused on a Private Christian Mission University, where many students have been sanctioned based on ICT-related social vices.Implications of these sanctions for studentship are vital issues that have been hitherto ignored in the literature.Thus, to achieve the goal of this study the following research questions are raised. 1. What are the diverse disciplinary actions taken against students for involving in ICTs related -social vices on campus? 2. What are the ICTs associated with social vices and implications for studentship?3. What programmes could be used to rehabilitate students engaged in ICT associated vices?Findings of this work, as planned reveal how the use of ICT devices could result in social vices on campus.Apart Omonijo et al. 3079 from the academic value of this article, the study is expected to come out with programme to inform policy makers on how to rehabilitate the affected students.The fact that many existing studies in the area of education in Nigeria fell short of these efforts, suggests that this study could be significant.The quest for national development and its attendant successes is largely dependent on these youths (Enueme and Onyene, 2010).Therefore, the view of these young students as future leaders as suggested by Omonijo et al. (2011a) should inform the design and development of a transformation programme to assist in reclaiming them from the consequences of these vices and reposition them to be of immense value to a nation (Nigeria) in dire need of advancement. STATEMENT OF THE PROBLEM Numerous studies on ICTs related problems have been conducted within the Nigerian context (Folorunso et al., 2006;Lenhart and Madden, 2007;Abdulkareem and Oyeniran, 2011;Omonijo et al., 2011a;Adeoye, 2010;Arinola et al., 2012;Abdullahi, 2012).The flagrancy of the problem keeps on debasing measures put in place to curtail its escalation, and worsening general safety of the entire citenzenry (Adeyemi, 2012;Fasan, 2012).The involvement of regional vigilante groups such as Oodua Peoples Congress (OPC), Ijaw National Congress (INC), Arewa People`s Congress (APC), The Movement for the Actualization of the Biafra (MASSOB), "Bakassi Boys" (BB), Egbesu Boys of Africa (EBA) etcetera amplified the problem Fatai (2012), by creating more tension for the nation (Adebanwi, 2004;Akinwunmi, 2005).Scholarly endeavours on the possible solutions in recent times have so far concentrated on the need to have regular seminars, symposium, lectures and researches.The study of Adeoye, (2010) on various ways in which students employ mobile phones to perpetuate examination misconduct is one of such efforts.Findings of this study revealed four ways through which students indulge in examination misconduct with mobile phones and resolutions on how to curb them.Nevertheless, the study failed to examine "e-cheating" habit of students within academia."E-cheating" according to Omonijo et al. 
(2011a), is the habit of students employing ICT gadgets to indulge in examination misconduct.Although mobile phone is one of these devices, but other ICT materials such as I-pods, I-pads, desktop computer, galaxy tabs etcetera were conspicuously omitted in Adeoye (2010) study.This gap in knowledge was addressed by Omonijo and Nnedum (2012b), in three selected Universities in Nigeria.Using data of 199 students, five ICT devices were identified with examination misconduct.However, the work recommended 10 ways of getting rid of this social problem among the nation's undergraduates.Nonetheless, the study limits its scope to public citadel of learning and ignored private institutions, which are not only more ICT compliant, but effectively and efficiently managed than public sector institutions (Aina, 2010).Moreover, the study focused on only one social vice (examination misconduct) and thereby excluded other social-ills associated with ICTs, diverse disciplinary actions taken against students as well as implications of such actions on their studentship.Hence, the need to make up for these gaps in knowledge on this subject matter from the Nigerian perspective. AN EXPOSITION OF RESEARCH ON SOCIAL VICES IN NIGERIA Social vices arise from behaviours of maladjusted people in the society Okwu (2006), but this ailment does not constitute much problem to humanity because movement of affected persons is seriously restricted to a defined location.The bulk of social vices escalating in the society recently has to do with high level of illiteracy, mass unemployment Omonijo and Nnedum (2012a); abject poverty Omonijo et al. (2011b), prevalence of general indiscipline at all levels of the society, incomplete socialisation (Nwosu, 2009;Anho, 2011) and globalisation, which touches on economic, political, social, cultural, technological and environmental facets of human life (Jike and Esiri, 2005).The socio-cultural and technological aspects according to Jike and Esiri (2005) are crucial to this discourse, as it has resulted in the acculturation of countries worldwide.It has also prompted developing nations to embrace ICTs, which is partly responsible the current challenges confronting modern nations (Udebhulu, 2009). 
In the Nigerian, Omonijo and Nnedum (2012b) observed that exposure of children to ICTs has been instrumental to the raising wave of social vices such as examination misconduct, criminal behaviours, Srivastava (2005) among others.In many homes in Nigeria, parents are not available to train their children due to their engagement in white collar jobs, businesses and other economic activities (Nwosu, 2009).As a result, the activities of children are not checked by their parents at home.This is an indication that deviation from the traditional role of women in home keeping, caring for the children and aged as emphasized by Murdock (1949), has created a vacuum, which most parents filled with ICTs.The time spent on child training in the traditional settings is now being spent in work settings and businesses for salaries, remunerations and profit making ventures.The over pursuance of wealth syndrome by most parents has produced wards who do not know and comprehend their parents.Consequently, most children reflect what they watch in television programmes, videos and internet web pages (Aggarwal, 2010).Some of them equally learn from nannies, housemaids and relations (Nwosu, 2009).Such children are at risk of developing dysfunctional and psychopathic behaviours, due to ineffective parenting, poor supervision and unchecked access to ICTs (Ajiboye et al., 2012). On the other hand, in homes, where parents are available, children are often led into dysfunctional behaviours like cheating, dishonesty, cultism, smuggling, prostitution, probably for financial gain and other reasons best known to them (Nwosu, 2009).In fact, the study by Omonijo and Fadugba (2011) identified ten ways in which parents influence their wards to indulge in examination misconduct.The danger of this as Nwosu (2009) noted is that children first learn ways of coping with the society through socialization in the family before proceeding to institutions of learning.The implication of this is that if children are not brought up properly at home, it would definitely affect their behaviours in the school environment, this scenario seems to exhibit true situation of most children in various institutions of higher learning in Nigeria today. 
THEORETICAL INSIGHTS Over the years, studies have shown that man's society translates from a primitive form to a more complex state in the process of time (Marx and Engels, 1848; Darwin, 1861; Spencer, 1887; Durkheim, 1893; Marx, 1894; Tonnies, 1925; Sorokin, 1937; Toynbee, 1946; Rostow, 1960; Lerner, 1958; Levy, 1966; Comte, 1856, as cited by Coser, 1977). Of all these works, none lends more credence to this article than Rostow (1960). The paradigm emphasises the process of change towards the social, economic and political systems that developed in Western Europe and North America from the 17th to the 19th century and spread to other parts of the world (Eisenstadt, 1966). It claims that developing societies must pass through five stages before attaining development, engaging the efficacy of capitalism. The instrumentality of capitalism as a weapon for achieving development makes this view different from the standpoint of classical Marxism, which opts for socialism and seriously abhors the private accumulation of wealth and the exploitation of the working class that capitalism stands for (Rodney, 1972). On this note, Frank (1971) condemns Rostow (1960) and presents 'development of underdevelopment' as the radical counterpart of his take-off stage. Frank (1971) goes further and scornfully describes the entire thesis as an uneven structure, tagged "metropolis-satellite relations". The nature of this relationship is a gigantic and systematic rip-off, because surplus is continuously appropriated and expropriated upwards and outwards to the detriment of underdeveloped societies (Frank, 1971). Scholars such as Lenin (1919), Fanon (1965) and Rodney (1972) share the same view with Frank (1971), mostly on the grounds of slavery and colonialism, and conclude that development is not possible within capitalist relations. Hence, their advice for developing countries to de-link radically from the world system. Examining these approaches in the development context, dependency scholars could be commended for observing the implications of slavery and colonialism in the process of development of underdeveloped societies, which modernization scholars fail to recognise. Certainly, there is wide agreement among critics that the conceptual weakness inherent in modernization theories consists in failing to emphasize both internal and external connections or relationships between and within societies. Nevertheless, dependency scholars should not have been employing this experience to justify the continuous underdevelopment of Africa in the comity of nations. Thinking and acting in that direction is what Omonijo et al. (2011b) called an 'escapist approach' meant to shift the blame to extrinsic others. Dependency scholars have also been questioned for maintaining that development is not possible within metropolis-satellite relations. Although this suggestion explains the South-South underdevelopment experience within capitalist powers, it negates the spectacular growth of many East Asian economies (Japan, Hong Kong, Singapore, Taiwan and South Korea) and of late developers like China, Thailand, Malaysia and Indonesia within metropolis-satellite relations (Pereira, 1993). Besides, the dependency approach fails to recognize the role of internal factors in the backwardness of Africa. Factors such as inter-tribal wars, religious riots and communal clashes Omonijo et al. 
(2011b), ethnicity and tribalism Nnoli (1980), prevalence and persistence of endemic corrupttion across the Nation (Akani, 2001;Offiong, 2001;Omonijo et al., 2013a) have played more cogent roles in her backwardness than imperialism (Warren, 1980).Moreover, radical de-link from the world system is a policy problem limiting the application of dependency paradigm.It is practically impossible and thus, very useless to any development planner worldwide.Therefore, it may never be an antidote to African development.No matter the situation, no country will ever be an Island on its own.Nations will continue to interact with one another in order to enforce the law of comparative advantage (Smith, 2003).In the process, acculturation, which introduces a new way of life including latest technologies to the existing culture Mishra (2010), is enforced.The leadership of each society determines how the change will be managed and sustained for the betterment of the entire citizenry.Therefore, underdeveloped nations' contact with the West was a positive development as posited by Warren (1980), but poorly managed largely by the elite of underdeveloped societies as indicated in Rostow's thesis. The first stage, that is traditional, is akin to Pre-Colonial era of underdeveloped societies, when there were no formal education, industrialization and white collar jobs.Traditional African religious worshipping was prevalent in traditional settings.Human beings were being used to make sacrifice to gods.Giving birth to twins and albinos was a taboo that warranted death.People's means of livelihood were subsistence agriculture, petty trading, fishing etc (Omonijo and Nnedum, 2012b).Obnoxious cultural practices such as widowhood, preference for male child, female genital mutilations etc were in vogue.The second stage, that is precondition for take-off, could be considered as colonial era, which introduced underdeveloped societies to western culture.The period marked establishment of formal education, paid jobs, modern means of communication, trade and commerce.Another form of government that brought about the commencement of the rudiment of democracy emerged (Omonijo, 2008).In the same manner, a new form of religion known as Christianity was introduced to traditional people.This development marked the commencement of destruction of barbaric cultural practices and worshipping of idols or gods.Education and politics brought about the emergence of elite as a social class.The third stage, that is take off could be considered as independence era, but it failed to capture Rostow's prescriptions in social, political and economic terms. 
Socially, our institutions could not be properly reshaped to permit the pursuit of growth. Elites have been squandering resources by marrying many wives, taking chieftaincy titles and organising unnecessary parties (obituaries, naming ceremonies, birthday parties etc.). Politically, the reins of power were handed over to mediocrities in the 1959 general election. The 1965 elections were mercilessly rigged. The chaos and anarchy that characterised the 1965 general elections were greater than those of 1959 (Akani, 2001). This resulted in military incursion into politics and later aggravated into a civil war that cost the nation 2 million persons and valuable properties (Omonijo et al., 2011b). Economically, investment, which should have been a proportion of national income, suffered a serious setback due to corruption and other internal problems. These adversely affected strategies such as Exportation of Primary Produce (EPP), Import Substitution Industrialization (ISI) and Export Oriented Industrialization (EOI) put in place to ensure the growth of manufacturing industries (Ake, 1986; 1996). Consequently, per capita output failed to outstrip population growth. Hence, the continuous progress that would have ushered Nigeria into industrialization and the last two stages of development, the drive to maturity and the age of mass consumption, was nipped in the bud. Haralambos et al. (2000) believe that a country is considered to be industrialized when her industrial sector contributes at least 25% of GDP, consists of 60% or more of manufacturing and employs more than 10% of the population. These conditions failed to materialize in Nigeria because elites, mostly the ruling class, siphoned financial resources meant for their actualisation (Omonijo, 2008), which invariably affected the nation's human capital in terms of brain drain (Omonijo et al., 2011b). 
In every developed nation, elites played an active role in her take-off stage. Such elites include Bismarck of Germany, Meiji of Japan, Lenin of the USSR, Ataturk of Turkey, Bonaparte of France and Chamberlain of Britain (Aboribo, 2009). Thus, the stage is an actively pursued project in which the state plays crucial economic roles. Instead of taking a cue from these societies, as well as from Paraguay, which refused satellization and permitted self-generating development, African elites, whether in the military or in politics, being 'a class in itself' (Wright, 2006; Borland, 2008), have failed woefully to act decisively. They mercilessly embezzled resources meant for national development (Omonijo et al., 2013a) in their countries and stockpiled the loot in foreign banks. It is not evident in the literature that a white man loots the resources of his country and stockpiles the loot in Africa. Ironically, the loot of African elites is being used to boost the economies of Western nations. What a shame! Consequently, the structure of Nigeria could not make adequate provision for employment for the classes of people interested, leading to a high rate of joblessness (Omonijo and Nnedum, 2012b) that aggravates abject poverty among the citizenry (Omonijo et al., 2013c). Hence, the nation retrogressed from one of the richest 50 nations in the early 60s to become one of the poorest in the world in recent times (Omonijo et al., 2013c). Going by Merton (1968), the inability to secure a means of livelihood legitimately can prompt an escalation of social ills among youths. Untrained, unfed and uncared-for children may likely resort to stealing, child prostitution, thuggery, kidnapping, Advance-Fee-Fraud "419", secret cults and examination misconduct in order to fend for themselves. With the proliferation of ICTs, such social ills can be facilitated more easily than before through the "almajiri" and "omole" or "agboro" or "omoonile" and "boko haram" syndromes in Nigeria. Although underdeveloped countries began to embrace modernization in the early 60s as a tool for achieving development, the process was largely derailed by the elites of these countries as noted earlier. The global revolution in ICTs is an aspect of modernization meant to provide effective and efficient computer systems for processing information for the betterment of humanity in underdeveloped societies (Olaniyi, 2009; Ramjit and Singh, 2004 as cited by Omonijo and Nnedum, 2012b). E-learning, e-administration and e-banking or e-commerce are all parts of ICT, which modernity originally designed for human comfort, but hoodlums in Nigeria seem to have hijacked the initial good intention of introducing ICT through modernity to the detriment of the nation's advancement. While developed nations are advancing in science and technology, many Nigerians are advancing in using the same to perpetrate social vices (cyber scams, e-cheating, hijacked e-mails, fake websites and all sorts of computer fraud) with the aid of internet online business transactions (Chawki, 2009, as cited by Igwe, 2011). This could be corrected by instituting sound education at all levels, through discipline, not only in the Private Christian Mission Institutions but also in the public sector. Research design This study employs an ex-post facto descriptive design because the events that led to it took place in the past. Moreover, a cross-sectional design was used to complement the former. This is because opinions of different sections of the university community were sought for the study. 
Research instruments Primary and secondary means of data collection were adopted. This involved an excursion into the literature and the retrieval of information from registers. Information concerning the number of students penalized for ICT-related social vices in the last seven years and their penalties was retrieved from written documents produced by the Chairman of the Disciplinary Committee (CDC) in the institution under study. Moreover, in-depth interviews were used to complement the retrieved information. Population of study and sample size Students and staff constitute the population of study. The total population is 8,322. Out of this figure, the student body represents 7,840, academic staff constitute 402, while staff of Student Affairs represent 80. Out of this number of 8,322, 60 interviewees were randomly selected, that is, 45 from the student body, 10 from the academic staff and 5 from the student affairs unit. Sample techniques Opinions of staff acquainted with students' activities as regards ICTs on campus were of paramount importance to the study, rather than general opinions of people that may not reflect reality. Thus, a purposive sampling method was used to select interviewees from the population. A proportional sampling method was first of all applied to the population. Therefore, the University was divided into 22 departments. Each of them produced interviewees according to its population. The same method was applied to academic and student affairs staff. A simple random method was later used to select each interviewee from their departments and sub-units. Data analysis The data were analyzed using frequency tables and percentages. RESULTS Table 1 presents the descriptive statistics on the diverse disciplinary actions taken against perpetrators of ICT-related social vices. Table 2 shows descriptive statistics on the ICTs which students used to indulge in social vices and the implications. Around 18.7% of them used their laptops to indulge in pornography while 15% used their I-Pods. Also, 4.8% used their mobile phones while 2.7% used their modems. This was followed by 28.3% of students that were involved in the act of stealing ICT tools. The result shows that 24.6% of them were identified with laptops, 3.2% were linked with I-Pods while 0.5% were associated with mobile phones. Similarly, 10.2% of the students were involved in indecent behaviour relating to storing pictures, in which they were smoking cigarettes and Indian hemp as well as drinking alcohol, on the following ICT materials: I-Pods 5.9% and laptops 4.3%. Students caught in possession of cult-related materials on their laptops represent 9.1% while those caught for examination misconduct with ICT devices represent 8.5%. 3.2% of them used their desktops to cheat, 2.7% used their I-Pods and 2.1% used their calculators. 1.6% of students were involved in computer-related fraud, i.e. hacking into databases and bank fraud. Finally, 1.1% of students used their laptops to store indecent pictures. The first implication of the above on studentship is expulsion from the University, which is the ultimate penalty. This is followed by being advised to withdraw from the institution. 
Data in Table 2 also show that greater proportion, 8.7% of the students served 1 year suspension for using their ICT devices to engage in indecent behaviour.About 17.9% of them used laptops while the same number used I-pods to store indecent movies.Also 38.6% used the following ICT devices in storing pornographic pictures and browsing restricted websites.These include 19.2% that used modem, 18.1% I-pod and 1.3% mobile phones.In the same vein, 10.5% of the affected students used the following devices to store indecent movies: 12.8% I-pods, 6.4% mobile phone and 1.3% laptop.52 of them involved in possession of indecent pictures on their ICT tools: 25.8% used mobile phones, 24.2% used modem, 17.7% used laptops and 14.5% used I-pods.6.4% students were caught for indecent music and video with I-pods.1.3% student each was caught for examination misconduct with mobile phone and I-pod while 1.3% was caught with cult-related materials with an I-Pod.In addition, 76.4% of the students used their mobile phones to engage in immoral sexual communication with the opposite sex, followed by 9.2% students who used the following ICT tools for indecent movies: laptops 5.1% and mobile phones 4.1%.However, 3.6% of the students used their modem to browse pornographic web sites, 3.1% used their I-pods to store indecent music and video materials, 2.6% students used their laptops to store indecent pictures and finally 0.2% used his Galaxy tab to store indecent music and video.These students served 4 weeks suspension. Note: 5 students did not agree with counselling and advise. 1 academic staff did not subscribe to referral to youth development and leadership institute.Finally, 3 academic staff, 2 staff of student affairs and 15 students did not agree with Bible school attendance. Examination of the result in Table 3 will, reveal that, three programmes for rehabilitation of affected students were suggested by 59 interviewees among the staff.The first on the list is compulsory counselling and advice.It was found that 16. 7% of the interviewees were academic staff, 8.3% were members of staff in Student Affairs and 75% were students.The remaining 8.3% did not subscribe to this program.This was followed by Referral to Youth Development and Leadership Institute in which 15% academic staff subscribed to.8.33% members of student affairs subscribed to it and 75% students suggested the same thing.Meanwhile, 1.7% academic staff did not agree with this programme.Lastly, Bible school attendance is the third program.11.7%, 5% and 50% of academic staff, members of Student Affairs Department and students subscribed to this respectively.Meanwhile 5% academic staff, 3.3% members of Student Affairs and 25% students did not agree with this programme. Note: 4 academic staff and 1 member of student affairs unit did not subcribe to referral to youth development and leadership institute. From Table 4, it is evident that three programs were suggested by 60 interviewees.The first on the list is compulsory counselling and psychotherapy for students slated for suspension.16.7% of them were academic staff, 8.3% were members of staff in Student Affairs and 75% of them were students.This is followed by Referral to Youth Development and Leadership Institute during holidays in which 14.3% academic staff subscribed to.9.5% members of student affairs subscribed to it while OTHER IMPLICATIONS OF INVOLVING IN ICT-RELATED VICES ON STUDENTSHIP. 
The implications of involvement in ICT-related vices are diverse; therefore, they are stated in categories below: Category A, Expulsion: Expulsion is the ultimate penalty an erring student can receive. Such a student must vacate campus as soon as he or she collects the letter of expulsion. They are no longer students of the institution ad infinitum, except by a decision of the highest level of management reversing the expulsion. Privileges of registration, class attendance and residence in the hall of residence are withdrawn. It equally connotes withdrawal of the privileges of the use of university facilities like the sports complex, cyber cafe, library etcetera. All the money spent from admission to the point of expulsion is wasted. They do not have any right to an academic transcript. Instead, they will start their academic career afresh by sitting for another Universities Matriculation Examination. Category B, Advised-to-withdraw: Advised-to-withdraw is next to expulsion. Under this category, the students involved are expected to vacate campus as soon as they collect their letters of withdrawal. They are no longer students of the university ad infinitum, except by a decision of the highest level of management reversing the withdrawal. Privileges of registration, class attendance and residence in the hall of residence are withdrawn. It equally connotes withdrawal of the privileges of the use of university facilities like the sports complex, cyber cafe, library etc. All the money spent from the day of admission up to the point of withdrawal is wasted. However, the students involved are allowed to collect their academic transcript at the point of withdrawal and continue their study in another university. Category C, 1 year Suspension: This is next to withdrawal. The students involved are expected to vacate campus as soon as they collect their letter of suspension. With this development, they are no longer students of the university for one solid year, except by a decision of the highest level of management reversing the suspension. Privileges of registration, class attendance and residence in the hall of residence are suspended. It equally connotes suspension of the privileges of the use of university facilities like the sports complex, cyber cafe, library etcetera. Such students will forfeit one year and they will not be able to graduate with their set. Category D, 4 weeks suspension: This is the least penalty that can be given to students. Such students will vacate campus as soon as they collect their letters of suspension. With this development, they are no longer students of the university for a month, except by a decision of the highest level of management reversing the suspension. Privileges of registration, class attendance and residence in the hall of residence are suspended. It equally connotes suspension of the privileges of the use of university facilities like the sports complex, cyber cafe, library etcetera for this period. However, if this period falls within examinations, such students may not sit for all the examinations slated for that period. This will prevent them from registering for the next academic semester. In that wise, such students will lose one academic session. Apart from the above, a copy of the disciplinary letters received by students is always kept in their files for future reference. That serves as a negative implication on their studentship. Moreover, such students are not allowed to take part in any excursion outside the University. Furthermore, they are not allowed to hold any leadership position throughout their studentship on campus. 
RELATIONSHIP BETWEEN ICT AND SOCIAL VICES A-Mobile phone: Students employed this device to store relevant materials in courses being examined prior to the examination. Such students were caught while copying these materials from their phones to answer scripts in the examination hall. Moreover, the same device was used to store pornographic materials and indecent movies. Such students (male and female) used to watch these materials at night in order to learn, secretly, how to engage in fornication with the opposite sex. Moreover, students engaged mobile phones to indulge in indecent sexual communication with the opposite sex in and outside campus. Discussions on how to meet with the opposite sex in club houses and hotels were being made with this device. Finally on this device, those who cannot afford expensive mobile phones used to steal from other careless colleagues. B-Laptops/Desktops: These devices were used to store pornographic materials, indecent movies, cult-related materials and pictures such as nudity. Since laptops are being used to browse for academic materials, students copy these materials from the internet and watch them secretly on campus. More often than not, they store them in hidden places where they can only be discovered by ICT-knowledgeable persons. Secret cult materials such as songs, pictures, logos etcetera are stored on their laptops. Furthermore, students use this device in hacking into databases with the aim of committing fraud. Also, pictures where students were drinking alcohol, smoking cigarettes and Indian hemp at parties outside the campus were stored on laptops. Where computer desktops are very essential for examinations, students used to copy answers from other students through the internet in the examination hall. The use of laptops led to a high rate of stealing on campus. Students who needed money for other things such as school fees, secret cult initiation and material things stole the laptops of their colleagues who were careless and sold them outside the University. C-I-Pods: I-Pods were also used to store pornographic materials, indecent movies and pictures. Moreover, cult-related materials such as logos, songs and pictures are stored in them directly. Also, pictures where students were drinking alcohol and smoking cigarettes and Indian hemp at parties outside the campus are taken and transferred to their I-Pods. Since I-Pods are used to store lecture notes, students also use them to store materials relevant to subjects being examined during examinations. The aim of such an attempt is to copy answers from the device. Also, it has led to stealing on campus. Students who cannot afford it steal from their friends or roommates. D-Calculator: This was used to perpetrate examination misconduct. Relevant materials such as formulae etcetera were inscribed on calculators prior to the examination. Students were caught in the process of copying them to their answer scripts. E-Galaxy tab: This device was used to store indecent music and videos. Students used to listen to and watch them in their private time. F-Modem: This device was used to browse web sites for pornographic materials, indecent movies and music etcetera. They stored these materials on their ICTs. 
DISCUSSION Apparently, it could be deduced from this study that the citadel of learning under study operates by stringent rules and regulations. Moreover, it is observed that justice is administered without fear or favour. No matter whose ward is involved, the wrath of the law is applied as indicated in Tables 1 and 2, and the seriousness of the vices committed determined the penalties meted out. This is largely appreciated and commendable because these are very rare not only in the management of public sector education, but in other institutions in Nigeria and their implementation of justice (Omonijo and Nnedum, 2012b). Nigeria is a country where justice is denied. Thus, evil people hold sway in every affair of life. This may be associated with the escalation of social ills in her tertiary institutions as well as moral decadence in the country at large. Being caught and punished may deter offenders from committing a crime again, as well as deter future offenders who contemplate committing crimes (Saridakis and Spengler, 2012). However, the financial cost of these disciplinary actions on parents is grievous. It costs a parent .6 million naira to finance a ward in a year; if such a child is expelled after two or three or five years, it means such a parent has lost a substantial amount in cash as well as in human capital to ICT-related vices. Its psychological trauma, mostly to parents who borrowed money to pay school fees, could lead to high blood pressure, hypertension and sudden death. The sociological implication is that such a child may lose the proper parenting that is needed at that point in life. Psychologically, such a child will have to grapple with the self-enacted stigma that trails a university drop-out in Nigerian society. In terms of time, if a child is rusticated after four years on campus, it means starting his or her higher education afresh. Four years lost may prevent a student from catching up with his mates in life. In order not to inflict a permanent stigma on such students, disciplinary actions could be complemented with spiritual exercises such as counselling, psychotherapy and spiritual transformation as presented in Tables 3 and 4. These measures could enable them to be transformed, as Jesus is not interested in the death of any sinner, but in his repentance and acceptance into eternal glory. Therefore, such a programme could go a long way in destroying and reconditioning the dysfunctional habits in the lives of affected students. 
CONCLUSION As long as human society exists, the occurrence of social vices may not be altered. More importantly, the more human society advances in science and technology, the more likely humanity is to experience more complex vices, which are the vicious fallout of postmodernism. However, the rate of its escalation in Nigeria, previously known for moral decency and decorum, is beyond the writers' imagination. In other words, the way Nigeria suddenly emerged as a purveyor of cultural and structural vices due to the alarming level of social decadence arising from ICTs is an issue of concern to academia, which public institutions seem to have failed in addressing. Therefore, the hope of restoring sanity in the citadel of learning in Nigeria lies in Private Christian Mission Institutions. Elites who lack discipline would definitely be void of the ingredients for effective and efficient administration. This may not be unconnected with bad government, resulting in the backwardness of Nigeria. When the righteous are in positions of authority, the people may likely rejoice, but when the wicked are in power, the people may lament, mourn and regret. Nigeria has degenerated to this level because graduates recruited into work settings from the public tertiary institutions lack the moral, spiritual, psychological and sociological competence required for constructive engagement in the postmodern workplace. Also, it is observed with keen interest that most students enrolling in Private Christian Mission institutions do not realise the essentials of the discipline that the institution is out to enforce. They must have conceived, out of their ignorance, that the social vices being displayed with impunity in the public sector are part and parcel of life, which should be left unchanged. This must have occasioned the large number of victims recorded in the last 7 years. However, programs of action should be instituted to rehabilitate them, as a faith-based citadel of learning, to ensure their usefulness in the nation's building at this crucial time when the nation needs a crop of regenerated leaders to bail it out of oblivion. RECOMMENDATIONS Based on the above conclusion, this study makes the following recommendations. 1. The use of ICT devices should be strictly monitored by the school management on campus and by parents at home. 2. Parents should equally make themselves available at home to train their children and stop leaving them in the hands of nannies, house boys and maids. 3. Parents should stop exposing their wards to ICT without adequate checks. The use of ICTs should not be used to replace their non-availability at home. 4. Manifestations of social ills should be reviled in the formal sector and disparaged in the informal sector in Nigeria. 5. Faith-based organizations should commence emphasizing holiness and righteousness, mostly among children, instead of dwelling so much on prosperity. 6. Good conduct among children should be commended and rewarded in the family, school and church environments. 7. Acculturation is good, but things that add value to the existing culture should be copied while bad habits should be left in the lurch. Table 1. Descriptive statistics on diverse disciplinary actions taken against perpetrators of ICT-related social vices. Table 2. Descriptive statistics on ICTs associated with social vices and implications on studentship. Table 3. Descriptive statistics on suggested programmes to rehabilitate expelled and advised-to-withdraw students. Table 4. 
Descriptive statistics on suggested programmes to rehabilitate suspended students.
8,596
2013-08-21T00:00:00.000
[ "Education", "Sociology", "Computer Science" ]
High Yield and Packing Density Activated Carbon by One-Step Molecular Level Activation of Hydrophilic Pomelo Peel for Supercapacitors Highly hydrophilic pomelo peel is used as an activated carbon (AC) precursor so that KOH can be homogeneously absorbed within it. Subsequent cryodesiccation retains the original morphology of the pomelo peel and distribution of KOH, which provides the precondition of the one-step molecular level activation. The resulting AC has a high yield of 16.7% of the pomelo peel. The specific surface area of the AC prepared by the one-step molecular activation of cryodesiccated mixture of pomelo peel and KOH (CAC-1) is 1870 m2 g−1, which is higher than that of the AC by the one-step activation of oven-dried mixture (AC-1) and AC by the two-step calcination (AC-2). The CAC-1 has the highest specific capacitance of 219 F g−1 among all the three samples. Importantly, the CAC-1 electrode has a high packing density of 0.63 g cm−3. The aqueous supercapacitor based on the CAC-1 has a volumetric cell capacitance of 30.7 F cm−3, which corresponds to 123 F cm−3 for a single electrode. When the ionic liquid of 1-ethyl-3-methyl-imidazolium tetrafluoroborate is used as electrolyte, the CAC-1 shows maximum specific energy of 40.5 Wh kg−1 and energy density of 25.5 Wh L−1. With the exploitation of renewable energy, supercapacitor has become one of the most important electrochemical energy storage devices. Compared with secondary batteries, supercapacitors based on carbon materials have unique advantages of higher power and longer lifetime. [1][2][3] Among all carbon materials, AC is currently the most ideal commercial electrode material [4][5][6] for supercapacitors due to its low cost, large specific surface area (SSA), tunable pore structure, good electrical conductivity and excellent surface chemistry. In a conventional approach, ACs are typically prepared by a twostep process consisting of carbonization and subsequent activation both at high temperature. 7,8 Biomass wastes as natural materials are often selected as the precursors of ACs. 9 For example, recently, the pomelo peel was carbonized and activated, both of which were carried out at high temperatures of 400°C-800°C. 10 Sometimes, in order to improve the electric conductivity and capacitive performance of ACs, even a post-synthetic step of vacuum treatment was supplemented, for instance, when bamboo was used as the precursor. 11 The processes at high temperatures increase the cost in the preparation of ACs. Additionally, the solid coke after carbonization is hydrophobic and cannot homogeneously contact with KOH. Therefore, high cost and heterogeneous mixing of carbon and activating agent are two main obstacles in the preparation of commercial ACs. Currently, one-step activation of precursors is desirable for the preparation of low-cost ACs. 12,13 However, the one-step activation derived AC generally has low yield and low packing density. 14 Therefore, it is still a challenge to prepare ACs with low cost, high yield and packing density. Here, we reported an AC with high yield (16.7%) and packing density (0.63 g cm −3 ) by the one-step molecular level activation of hydrophilic pomelo peel for supercapacitors. The highly hydrophilic pomelo peel thoroughly sucked KOH solution and subsequent cryodesiccation retained the original morphology of the pomelo peel, which ensured the homogeneous distribution of KOH and provided the precondition of molecular level activation. 
The AC by the one-step activation after cryodesiccation had an SSA of 1870 m² g⁻¹ and a specific capacitance of 237 F g⁻¹ at 0.5 A g⁻¹ in 6 mol l⁻¹ KOH, corresponding to a volumetric capacitance of 149 F cm⁻³. Besides, the one-step activation saved energy and cost in the production of AC from the commercial point of view. Its symmetrical two-electrode supercapacitor with aqueous electrolyte showed maximum specific and volumetric cell capacitances of 48.8 F g⁻¹ and 30.7 F cm⁻³, respectively. CAC-1 also showed a maximum specific energy of 40.5 Wh kg⁻¹ and an energy density of 25.5 Wh l⁻¹ in a cell with the 1-ethyl-3-methyl-imidazolium tetrafluoroborate (EMIMBF4) electrolyte. Experimental Preparation of samples.-The pomelo peel was chopped and dried before use. 1 g of pomelo peel was immersed into a 10 ml solution containing 0.45 g of KOH for 24 h. The wet solid was cryodesiccated in the chamber above the cold trap under 75 Pa. After cryodesiccation, the mixture was heated to 800 °C at 5 °C min⁻¹ and held at that temperature for 1 h under an Ar atmosphere. When it was cooled to room temperature, the powder was washed and dried to obtain the AC, which was recorded as CAC-1. For comparison, the CAC-1a and CAC-1b samples were prepared by the same method but with 0.225 g and 0.90 g KOH respectively, while the AC-1 sample was prepared by the same method but by oven desiccation instead of cryodesiccation. Additionally, a conventional two-step method consisting of carbonization and activation with KOH, both at 800 °C for 1 h, was also carried out to obtain AC-2. Characterization of samples.-The morphology and elemental content of the samples were investigated by scanning electron microscopy (SEM, FEI Nova 400 Nano SEM). The elemental analysis of CAC-1 was executed by CHN and O modes in an Elementar Vario EL III. The elemental analysis of the ash was revealed by X-ray energy dispersive spectroscopy (EDS). The N2 adsorption/desorption measurements were carried out at 77 K using automatic volumetric adsorption equipment (ASAP2020HD88). The surface element analysis was conducted by X-ray photoelectron spectroscopy (XPS) and the functional groups by infrared (IR) spectroscopy. Graduated cylinders were used to measure the tapping density of the cryodesiccated and the oven-desiccated pomelo peel respectively after soaking in KOH solution, by shaking. Electrochemical measurements.-A mixture of AC, acetylene black and binder, polyvinylidene difluoride (PVDF) or polytetrafluoroethylene (PTFE), with a weight ratio of 75:15:10 was pasted on nickel substrates as the working electrodes. The packing density of the electrode was calculated by Eq. 1, ρ = m/(π r² h), where ρ is the electrode packing density, m is the mass of the mixture on the current collector weighed by balance (Mettler Toledo MS105DU), h is the thickness of the electrode and r is the radius of the electrode. For the three-electrode cells, a platinum foil and a Hg/HgO electrode were used as the counter and reference electrodes in 6 mol l⁻¹ KOH as electrolyte. For the two-electrode tests, two AC electrodes were assembled into coin-type symmetrical supercapacitors with 6 mol l⁻¹ KOH and 1-ethyl-3-methyl-imidazolium tetrafluoroborate (EMIMBF4) electrolyte. Cyclic voltammetry (CV) and galvanostatic charge/discharge (GCD) were employed with a CHI 660E electrochemical workstation and the cyclic performances were measured by a LAND CT2001A system. 
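A minimal numerical sketch of the packing-density relation quoted as Eq. 1 is given below (Python); the electrode mass and radius used here are hypothetical values chosen only to make the example run, not measurements from this work.

```python
import math

def packing_density(m_g, r_cm, h_cm):
    """Eq. 1: electrode packing density rho = m / (pi * r**2 * h)."""
    return m_g / (math.pi * r_cm ** 2 * h_cm)

# Hypothetical disc electrode, for illustration only (not values from the paper):
m = 5.0e-3   # g, total mass of AC + acetylene black + binder on the current collector
r = 0.6      # cm, assumed electrode radius
h = 69e-4    # cm, electrode thickness (69 um is the thickness quoted later for the PTFE electrode)
print(f"packing density = {packing_density(m, r, h):.2f} g cm^-3")
```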
Results and Discussion The IR spectrum of the pomelo peel in Fig. S1 (available online at stacks.iop.org/JES/168/060521/mmedia) shows the main functional groups of C-OH bond at 1054 cm −1 , C=C bond at 1650 cm −1 , C=O bond at 1748 cm −1 and hydroxyl at 3335 cm −1 . After calcination in inert atmosphere, CAC-1 exhibited the C, O and H weight fractions of 87%, 4% and 1% respectively by the organic elemental analysis. After quick ash test of CAC-1 by rapid annealing in air, the ash with 7% weight percentage remained, consisting of 48% of O, 20% of Ca, 15% of Ni, 7% of Mg, 6% of S and 4% of Si in atomic percentage revealed by EDS. The other 1% component could be attributed to absorped water in CAC-1. Only Ni element in the form of Ni(OH) 2 or NiO was possibly electrochemical active in aqueous electrolytes. The 30% Ni in weight percentage in the ash corresponded to 2% Ni in CAC-1. The specific capacitance of the commercial Ni(OH) 2 was only 17 F g −1 (Fig. S2), meaning that the Ni in CAC-1 may produce a specific capacitance of 0.5 F g −1 . Therefore, the contribution of the ash on capacitance can be negligible. The XPS C1s spectrum of CAC-1 in Fig. S3 shows three peaks which are attributed to C-C, C-O and C(O)O bonds at 284.6, 286.0 and 288.9 eV respectively, suggesting that residual oxygen functional groups append on carbon skeleton. Figure 1a shows the SEM image of the pomelo peel as the carbon precursor with the dense wrinkle. The conventional two-step process obtained a similar morphology with shrunk size of ca. 126 μm without any macropores (Fig. 1b). After soaking the pomelo in KOH, the oven-desiccated pomelo peel sample was tightly stacked due to shrunk volume, compared with pomelo peel (Fig. 1c). The cryodesiccated pomelo peel after soaking in KOH basically maintained the appearance of the untreated pomelo peel and became even looser, which indicated the advantage of cryodesiccation (Fig. 1e). The retained morphology by cryodesiccation resulted in the homogeneous distribution of KOH in the pomelo peel, while the shrunk volume by the oven-desiccation led to the partial segregation of KOH during evaporation of water under ambient pressure. Many macropores with the average size of 1.0 μm and the average wall thickness of 0.8 μm are seen in the bulk carbon with size of 28 μm in Fig. 1d for AC-1. Figure 1f shows a similar porous structure for CAC-1 with a similar bulk size of 26 μm. The decreased bulk size of AC-1 and CAC-1 compared with AC-2 suggested that KOH could attack both the pomelo peel precursor and its carbonization products at different stages of the one-step activation process. However, the distribution range of macropores on the surface in CAC-1 was slightly narrower than that in AC-1, which may be attributed to the more homogeneous distribution of KOH in the pomelo peel by cryodesiccation. The adsorption and desorption isotherms of N 2 and the pore size distributions of AC-1 and AC-2 and CAC-1 are shown in Fig. 2. All the samples in Fig. 2a exhibit type I isotherms, which correspond to the microporous structure. CAC-1 possessed the largest SSA of 1870 m 2 g −1 and the largest total pore volume of 0.99 cm 3 g −1 . According to Eq. 2, 15 the apparent density of CAC-1 was calculated to be 0.69 g cm −3 . where Vt is the total pore volume, ρ a and ρ 0 are the apparent and true density. In contrast, AC-1 showed an SSA of 1383 m 2 g −1 and a total pore volume of 0.70 cm 3 g −1 , while AC-2 the smallest SSA of 867 m 2 g −1 and the smallest total pore volume of 0.44 cm 3 g −1 . 
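The apparent-density estimate referenced as Eq. 2 is consistent with the standard relation 1/ρ_a = V_t + 1/ρ_0; a short sketch under that assumption, taking a typical skeletal carbon density of about 2.2 g cm⁻³ (an assumed value, not quoted in the text), reproduces the reported 0.69 g cm⁻³ for CAC-1.

```python
def apparent_density(v_total_cm3_per_g, true_density_g_per_cm3):
    """Assumed form of Eq. 2: 1/rho_a = V_t + 1/rho_0."""
    return 1.0 / (v_total_cm3_per_g + 1.0 / true_density_g_per_cm3)

rho_0 = 2.2  # g cm^-3, assumed true (skeletal) density of carbon
for name, v_t in [("CAC-1", 0.99), ("AC-1", 0.70), ("AC-2", 0.44)]:
    print(name, f"{apparent_density(v_t, rho_0):.2f} g cm^-3")  # CAC-1 gives ~0.69
```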
As shown in Fig. 2b, the maxima of CAC-1 pore distribution lies at 0.50 nm, while both of AC-1 and AC-2 lie at 0.58 nm, suggesting that cryodesiccation of the mixture leads to more homogeneous distribution of KOH in pomelo peel. In the range of 0.7-2.0 nm, CAC-1, AC-1 and AC-2 exhibit similar pore size distributions but CAC-1 possesses more solvated ion-accessible pores in the same range. The electrochemical properties of the three comparative samples with the PVDF binder were measured by CV and GCD in threeelectrode cells with the 6 mol l −1 KOH electrolyte at a potential range from −1 to 0 V vs Hg/HgO. At a scan rate of 10 mV s −1 , all the three cyclic voltammograms (CVs) exhibit a quasi-rectangular shape with no redox peaks (Fig. 3a), indicating that all the samples have nearly ideal capacitive behavior. Apparently, CAC-1 has the largest CV area, indicative of the best specific capacitance. Figure 3b exhibits the GCD curves of CAC-1, AC-1 and AC-2 at a specific current (I m ) of 1 A g −1 . Consistently, CAC-1 had the highest specific capacitance of 219 F g −1 (calculated by the segment with shorter time referred by the arrow in Fig. S4), which was 33% higher than 165 F g −1 for AC-1 and 49% higher than 147 F g −1 for AC-2 at the same specific current. After the same cryodesiccation and activation, different mass ratio of KOH to pomelo peel gave rise to different properties. Fig. S5 shows the GCD curves of CAC-1, CAC-1a and CAC-1b. The specific capacitances of CAC-1a and CAC-1b were 182 F g −1 and 196 F g −1 respectively, which were evidently lower than 219 F g −1 of CAC-1. When the scan rate increases to 100 mV s −1 , the CV of CAC-1 in the three-electrode cell show a distorted rectangle (Fig. 3c), suggesting that most of the charge storage might have occurred on the outer surfaces at higher scan rates because the cations has much shorter times to build a certain level of potential along the porous structure of the electrode. 16 Figure 3d shows that the GCD curves of CAC-1 at different specific currents. A maximum specific capacitance of 237 F g −1 was obtained at 0.5 A g −1 . The corresponding areal capacitance was 166 mF cm −2 , which was much higher than 15 mF cm −2 of hollow carbon spheres 17 and 10 mF cm −2 of porous carbons, 18 but lower than 1590 mF cm −2 of well-aligned carbon fibers. 6 In order to obtain the packing density, a CAC-1 electrode with the thickness of 69 μm was prepared with the PTFE binder, as shown in Fig. S6. The whole electrode packing density was calculated to be 0.63 g cm −3 , which was close to its own apparent density (0.69 g cm −3 ), comparable to the apparent density (0.65 g cm −3 ) of the commercial YP50f and higher than the packing density (0.50 g cm −3 ) of the AC from pomelo peel by a two-step method. 10 Assuming that CAC-1, acetylene black and binders had the same true density, the volumetric capacitances of the CAC-1 material could be obtained according to Eq. 3. where C v and C m are volumetric and specific capacitances respectively. The maximum volumetric capacitance of the CAC-1 material in the three-electrode cell was 149 F cm −3 obtained at 0.35 A cm −3 , which was higher than 98 F cm −3 of YP50f according to the results in the literature. 19 Compared with that of AC-2, the electrochemical performances of CAC-1 and AC-1 were significantly improved. The effectiveness of the one-step calcination was mainly because the pomelo peel had good absorption of KOH solution, which could homogeneously activate the carbon precursor. 
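Eq. 3 appears to be the usual mass-to-volume conversion C_v = ρ C_m; a one-line check under that assumption, using the packing density of 0.63 g cm⁻³ and the 237 F g⁻¹ value quoted above, recovers the reported 149 F cm⁻³.

```python
def volumetric_capacitance(c_m_F_per_g, packing_density_g_per_cm3):
    """Assumed form of Eq. 3: C_v = rho * C_m."""
    return c_m_F_per_g * packing_density_g_per_cm3

print(volumetric_capacitance(237, 0.63))  # ~149 F cm^-3, three-electrode value for CAC-1
```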
Among the three samples, CAC-1 showed the overwhelming superiority in specific capacitance. Microscopically, from the SEM images, only the cryodesiccated pomelo peel after soaking in KOH maintained the appearance of the untreated pomelo peel and became even looser because KOH was absorbed within it. Macroscopically, the tapping density of the cryodesiccated mixture was 0.14 g cm −3 , which was smaller than 0.35 g cm −3 of the ovendesiccated mixture, meaning that partial segregation of KOH might occur during the evaporation of water accompanied by volume shrinking by oven desiccation. Therefore, cryodesiccation was beneficial to homogeneous distribution of KOH in pomelo peel. It was worth noting that the yield of CAC-1 was 16.7% of the pomelo peel, which was much higher than 2.7% for the AC by the one-step activation of agar and similar with 16.2% for the agar derived AC by the two-step method. 14 The high performance and yield illuminated prospect of CAC-1 as a commercial AC. A symmetrical two-electrode coin-type supercapacitor was assembled with two pieces of the CAC-1 electrodes with 6 mol l −1 KOH as the electrolyte. Because of the compact structure of the twoelectrode cell, the CV shows a rectangular outline even at 100 mV s −1 (Fig. 4a). According to the discharge curves in Fig. 4b, the specific capacitance of the CAC-1 cell with was 49.8 F g −1 at 0.5 A g −1 , corresponding to 199 F g −1 at 1 A g −1 for a single electrode, which was close to 219 F g −1 in the three-electrode cell. At 0.25 A g −1 , the specific capacitance of the CAC-1 cell was 51.3 F g −1 , corresponding to 205 F g −1 for a single electrode. Figure S7 shows the cyclic performance of the CAC-1 cell. 112.6% of the initial capacitance could be retained for the CAC-1 cell at 2.5 A g −1 after 10000 cycles. When the specific current was altered to be 1 A g −1 , followed by another continued 10000 cycles, 104.5% of the retention was obtained relative to the initial capacitance at the first cycle at 2.5 A g −1 . It was worth noting that the CAC-1 cell showed progressively increased specific capacitances in the initial cycles, which was mainly attributed to the following two aspects. On one hand, the wettability at the electrolyte/electrode interface increased with cycles, leading to the thorough utilization of deep pores. 20 On the other hand, the residual oxygen content of 4%, as revealed by the elemental analysis, contributed more pseudocapacitance after cycles. 21 The good cyclic performance of the CAC-1 cell at two different specific currents indicated that the CAC-1 materials could be qualified for high rate capability and long-time stability. However, the typical mass loading of the CAC-1 electrodes with the PVDF binder was only 0.7 mg cm −2 , which was far from the commercial requirement. When the PTFE binder was used, the average mass loading of the CAC-1 electrodes increased to 7 mg cm −2 . Figure S8 shows the CVs of the supercapacitor with the thick electrodes, also demonstrating the ideal capacitive behavior. As shown in Fig. 4c, at 0.25 A g −1 , the specific capacitance of the supercapacitor with the thick electrodes only decreased to 48.8 F g −1 , corresponding to 195 F g −1 for a single electrode, which was 95% of the value for thin electrode with loading of 0.7 mg cm −2 . As the packing density of the electrode was 0.63 g cm −3 , the volumetric cell capacitance was 30.7 F cm −3 at 0.16 A cm −3 , which corresponded to 123 F cm −3 for a single electrode (Fig. 4d). 
When the current density was 3.15 A cm −3 , the volumetric cell capacitance was still 23.3 F cm −3 . In order to increase the energy of the CAC-1 based supercapacitor, EMIMBF 4 was applied as the electrolyte to extend the potential window. 22 The CVs of CAC-1 in EMIMBF 4 was investigated in a three-electrode cell with a silver reference electrode at 10 mV s −1 (Fig. S9). Figure S10 also shows the CVs in a twoelectrode supercapacitor with the ionic liquid electrolyte within 3.5 V at different scan rates. Small positive current tails are seen at a potential window (cell voltage) of 3.5 V in both Figs. S9 and 10. According to Fig. S11, the specific cell capacitance of the supercapacitor with 3.5 V was calculated to be 32.5 F g −1 at 0.5 A g −1 with the coulombic and energy efficiencies of 94% and 73%. As the energy efficiency at 1 A g −1 with 3.5 V was also 73%, the cyclic performance of the CAC-1 cell with ionic liquid electrolyte was carried out with the condition. Figure S12 exhibits that 75.0% of the initial capacitance can be retained for the cell after 10000 cycles at 1 A g −1 , suggesting the moderate durability of CAC-1 in EMIMBF 4 with 3.5 V cell voltage. Nevertheless, the retention of CAC-1 was inferior to that of the AC prepared by an extra post-synthesis vacuum annealing, which suggested that the deterioration of cyclic performance was attributed to irreversible redox reaction between the O functional groups in CAC-1 and ionic liquid. 23 However, the coulombic and energy efficiencies at 0.25 A g −1 were only 87% and 67%, which were too low in real energy stored systems. Combining with the CVs of CAC-1 in a three-electrode cell in Fig. S9, the very stable potential window should be from −1.5 to 1.5 V vs Ag/Ag + . As shown in Fig. S13, the CVs of the CAC-1 supercapacitor with a 3 V cell voltage present rectangular profiles without tails even at the small scan rate. Figure 5a shows the GCD curves of the CAC-1 supercapacitor with a 3 V cell voltage. The specific cell capacitances of the supercapacitor were calculated to be 30.1 and 32.4 F g −1 at 0.5 and 0.25 A g −1 , respectively. The coulombic and energy efficiencies increased to 93% and 75% at 0.25 A g −1 . Although the energy efficiency was lower than that of the D-glucose derived carbon, 24 it was still much higher than agar derived carbon. 14 Such reduced energy efficiency was at least partly due to heat dissipation. 25 Based on the GCD curves in Figs. 4b and 5a, the relationships between power and energy of the aqueous and ionic liquid cells are plotted in Fig. 5b respectively. Accordingly, the highest specific energies of the cells with ionic liquid (3 V) and aqueous (1 V) electrolytes are 40.5 and 6.8 Wh kg −1 at specific powers of 375 W kg −1 and 125 W kg −1 , respectively. Similarly, the highest specific powers of the cells with ionic liquid and aqueous electrolytes are 7500 and 2500 W kg −1 at specific energies of 23.8 and 5.1 Wh kg −1 , respectively. The maximum energy densities of the cell with the ionic liquid and aqueous electrolytes were calculated to be 25.5 and 4.3 Wh l −1 respectively, while the maximum power densities to be 4725 and 1575 W l −1 , respectively. The specific energy (34.3 Wh kg −1 ) at 1 A g −1 of the CAC-1 cell with EMIMBF 4 was higher than 21.6 Wh kg −1 at the same specific current for the AC derived from pomelo peel by the two-step calcination 26 because larger potential window of 3 V could be achieved for CAC-1. 
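The cell-level figures above follow from the usual symmetric-cell relations; a brief sketch is given below, assuming an ideal symmetric device (two equal electrodes in series, energy taken as C_cell V²/2 over the rated window, no equivalent-series-resistance correction).

```python
def single_electrode_capacitance(c_cell_F_per_g):
    # Symmetric cell: two equal electrode capacitances in series, each referred
    # to half of the total active mass, hence the conventional factor of 4.
    return 4.0 * c_cell_F_per_g

def specific_energy_Wh_per_kg(c_cell_F_per_g, voltage_V):
    # E = (1/2) C V^2 in J per gram of active material; dividing by 3.6 converts to Wh per kg.
    return 0.5 * c_cell_F_per_g * voltage_V ** 2 / 3.6

print(single_electrode_capacitance(48.8))            # ~195 F/g (aqueous cell, thick electrodes)
print(specific_energy_Wh_per_kg(32.4, 3.0))          # ~40.5 Wh/kg (EMIMBF4, 3 V window)
print(specific_energy_Wh_per_kg(48.8, 1.0))          # ~6.8 Wh/kg (aqueous, 1 V window)
print(0.30 * specific_energy_Wh_per_kg(32.4, 3.0))   # ~12.2 Wh/kg assuming 30 wt% active material in a packaged device
```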
Based on a weight ratio of 30% for AC material in a packaged supercapacitor device, a practical specific energy of 12.2 Wh kg −1 for a packaged device was expected, which was beyond 5 Wh kg −1 for a commercial application. Conclusions Homogeneous distribution of KOH in the pomelo peel has been realized by immersing hydrophilic pomelo peel into the KOH solution and subsequent cryodesiccation. Based on this, the one-step molecular level activation has been developed. The resulting CAC-1 has advantages of high yield (16.7%) and high packing density (0.63 g cm −3 ) in addition to low cost due to the one-step process at high temperature. Compared with AC-2 prepared by the conventional two-step method and AC-1 by the one-step activation of oven-dried mixture, CAC-1 has the highest SSA of 1870 m 2 g −1 and the highest specific capacitance of 219 F g −1 at 1 A g −1 in the KOH electrolyte of all the three samples. Because of the high packing density of the CAC-1 electrode, the volumetric capacitance is 149 F cm −3 , which is much higher than 98 F cm −3 of YP50f in three-electrode cells. The maximum specific cell capacitance of the two-electrode CAC-1 cell is 48.8 F g −1 , corresponding to 195 F g −1 for a single electrode. The highest volumetric cell capacitance is 30.7 F cm −3 , which corresponds to 123 F cm −3 for a single electrode. When the ionic liquid of 1-ethyl-3-methyl-imidazolium tetrafluoroborate is used as electrolyte, CAC-1 shows a maximum specific energy of 40.5 Wh kg −1 and energy density of 25.5 Wh l −1 . High yield, high packing density and low cost indicates that the pomelo peel derived CAC-1 has a commercial promise for supercapacitors.
5,499.2
2021-06-02T00:00:00.000
[ "Chemistry" ]
Effects of periodic matter in kaon regeneration We study the effects of periodic matter in kaon regeneration, motivated by the possibility of parametric resonance in neutrino oscillations. The large imaginary parts of the forward kaon-nucleon scattering amplitudes and the decay width difference $\Delta\Gamma$ prevent a sizable enhancement of the $K_L\to K_S$ transition probability. However, some interesting effects can be produced using regenerators made of alternating layers of two different materials. Despite the fact that the regenerator has a fixed length one can obtain different values for the probability distribution of the $K_L$ decay into a final state. Using a two-arm regenerator set up it is possible to measure the imaginary parts of the $K^0(\bar{K}^0)$-nucleon scattering amplitudes in the correlated decays of the $\phi$-resonance. Combining the data of the single-arm regenerator experiments with direct and reverse orders of the matter layers in the regenerator one can independently measure the CP violating parameter $\delta$. Recently, there has been a renewed interest [1] in the possibility of parametric resonance in neutrino oscillations in matter suggested in [2,3]. For a neutrino beam propagating in a medium with periodic density, one can obtain a large probability for the transition from one flavour state to another, even if the neutrino mixing angles both in vacuum and in matter are small. In nature, there are other systems similar to oscillating neutrinos, in particular the neutral mesons K 0 −K 0 . Hence, it is interesting to investigate if one can obtain the parametric resonance in this case. The analogue of the neutrino weak flavour basis are K 0 andK 0 and the mass eigenstates are K L and K S . Since the former states are maximally mixed, it is obvious that one cannot enhance the K 0 −K 0 transition probability. However, in this case this is not the relevant question. Let us assume that we have a neutral kaon beam propagating in vacuum. After a time t larger than the K S lifetime (τ S = 0.894 × 10 −10 s) the beam is essentially a K L beam. If this beam traverses a thin slab of material, a small K S component will emerge, because K 0 and K 0 have different scattering amplitudes. This is the well-known regeneration phenomenon (see, e.g., [4]). Assuming that the beam enters the regenerator at t = 0 and denoting by |K R (t) the state of the beam when it emerges, we have where K S,L |T |K L are the transition amplitudes in the regenerator, and K S,L | are the reciprocal states (see Eqs. (6) and (7) below). If the regenerator is a medium with a density that is a periodic function of the coordinate along the beam direction, we would like to see if it is possible to enhance the K L → K S transition amplitude. Our aim in this letter is to address this question. Assuming CPT conservation, but not CP conservation, the rest-frame evolution equation for the K 0 −K 0 system propagating in a medium is where t is the proper time (we follow closely the notation of ref. [4]). Hence, in vacuum (V =V = 0) the eigenstates of the Hamiltonian H 0 are and with the corresponding eigenvalues µ L,S = m L,S − i 2 Γ L,S , µ = (µ L +µ S )/2 and ∆µ = µ L −µ S . Since the phase of p/q is of no physical significance, we shall assume this ratio to be real 1 . We write where δ ≃ 3 × 10 −3 [5] is a measure of CP violation. Since CP is not conserved, the diagonalization of H 0 cannot be accomplished with a unitary transformation. 
This, in turn, implies the use of the reciprocal basis [6] K L | = 1 2 In a medium with N a nuclei per unit volume, V (V ) is given in terms of the forward scattering amplitude f (0) (f (0)) for a K 0 (K 0 ) beam [4], i.e. with the average kaon mass m = 2.52 fm −1 . The simplest way to introduce a periodic medium is to consider two different elements with number densities N a and N b positioned one after the other and to build a regenerator with κ layers of this ab junction. The beam evolution through this multilayer regenerator can be described in terms of the evolution operator with and where H i (i = a, b) are the Hamiltoneans for layers a or b, given in Eq. (2). Since this is a 2 × 2 matrix it is convenient to represent it using the Pauli σ matrices. One can then write where and E a is a three dimensional vector with components which are complex numbers. Introducing the complex unit vector and one immediately obtains With the obvious replacements a → b one obtains from Eq. with X = sin ϕ a cos ϕ b n a + sin ϕ b cos ϕ a n b − sin ϕ a sin ϕ b (n a × n b ) . The vectors that we have introduced have complex components. However, the dot products, such as n a · n b , must be simply understood as Notice that the third component of n a × n b is identically zero. Then Eq. (22) shows that X (3) is symmetric with respect to the interchange of a and b. On the other hand, Eq. (15) shows that n (2) a and n (2) b vanish in the limit of CP conservation. Then, in this limit, X (2) is antisymmetric in a and b. Furthermore, in the same approximation the first component of n a × n b is also zero. Hence X (1) is symmetric with respect to the interchange of a and b. A straightforward calculation shows that Y 2 +X·X = 1. Then, defining another complex angle Φ such that it is possible to rewrite Eq. (20) in the form This evolution operator is written in the K 0 −K 0 -basis. Denoting U ab ≡ U b U a , the symmetry properties of X (i) deduced above enables us to obtain but i.e. the difference vanishes if CP is conserved. Finally, the evolution matrix for the propagation through κ ab-layers is simply Inserting U κ between the appropriate bra-and ket-vectors given by Eqs. (3)-(4) and (6)- (7) respectively, one obtains the K L → K L and the K L → K S transition amplitudes and Let us start our discussion with a careful examination of Eqs. (14)-(16). The vectors E a and E b have the first components proportional to ∆µ/2 and the third components proportional to ∆V i /2. These quantities ∆µ and ∆V play a crucial role in the effect that we are searching for. On the contrary, the mean values µ and (V i +V i )/2 are far less important. Their real parts disappear when we take the modulus square of the amplitude to obtain the transition probabilities and their imaginary parts give the overall damping factors. As a first approximation, we neglect CP violation. Let us further assume that ∆µ and ∆V i are real. A real ∆µ means that ∆Γ = 0. Although this is not true for the K-meson system, there is no fundamental reason why it could not be so. Indeed, such a situation occurs closely for the B 0 −B 0 mesons. A real ∆V i implies equal imaginary parts for the K 0 andK 0 forward scattering amplitudes. As it is well known, this is not the case. This is in contrast with the case of neutrinos, where the absorption is weak, and to the leading order in weak interaction the scattering amplitudes are real. Within this unrealistic approximation it is possible to achieve a parametric resonance in K L ↔ K S transitions in matter. 
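To make the composition of the layer evolution operators concrete, a minimal numerical sketch follows; the Hamiltonian entries, layer times and number of junctions are schematic placeholders (not the kaon parameters or the nuclear scattering amplitudes used later), and CP violation is neglected so that the reciprocal bra vectors reduce to ordinary Hermitian conjugates.

```python
import numpy as np
from scipy.linalg import expm

# Effective 2x2 Hamiltonians for layers a and b in the K0/K0bar basis.
# The numbers below are placeholders, NOT physical kaon or material parameters.
H_a = np.array([[1.0 - 0.05j, 0.5], [0.5, 1.2 - 0.08j]], dtype=complex)
H_b = np.array([[1.0 - 0.05j, 0.5], [0.5, 0.9 - 0.03j]], dtype=complex)

t_a, t_b, kappa = 0.7, 0.6, 5  # proper times per layer and number of ab junctions (illustrative)

U_a = expm(-1j * H_a * t_a)
U_b = expm(-1j * H_b * t_b)
U_kappa = np.linalg.matrix_power(U_b @ U_a, kappa)  # composition over kappa ab-layers

# CP-conserving limit, one common phase convention: K_S,L = (K0 -/+ K0bar is replaced by +/-)/sqrt(2).
K_L = np.array([1.0, -1.0]) / np.sqrt(2.0)
K_S = np.array([1.0, 1.0]) / np.sqrt(2.0)

P_LS = abs(K_S.conj() @ U_kappa @ K_L) ** 2  # K_L -> K_S transition probability
P_LL = abs(K_L.conj() @ U_kappa @ K_L) ** 2  # K_L survival probability
print(P_LS, P_LL)
```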
The parametric resonance condition is X 3 = 0 [7]; we shall consider a particular realization of this condition in which the times t a and t b are chosen such that cos ϕ a = cos ϕ b = 0. Then it follows from Eq. (22) that As described above, the third component of the cross product n a × n b is identically zero and if one neglects CP violation the first component is also zero. In this approximation Eqs. (33) and (34) become and For an appropriate number of layers, κ, one can suppress the K L → K L probability and, at the same time, enhance the K L → K S transition probability. To illustrate this effect, we plot the K L → K S transition probability as a function of κ in Fig. 1. The calculation was done for a regenerator made of 27 Al and 184 W and for an initial K L beam obtained from the decay of the φ resonance at rest. The values of the K 0 andK 0 scattering amplitudes on protons and neutrons were taken from ref. [8]. As we have explained, ∆Γ and the imaginary parts of ∆V i were set equal to zero. The times t a and t b were chosen in such a way that a complete K L → K S conversion could be obtained. If we move away from this resonance condition we still obtain an oscillatory K L → K S transition probability P (K L → K S ) but with a smaller maximal conversion. For instance, decreasing both t a and t b by 17 % reduces the maximum value of P (K L → K S ) from 1 to 0.145. The resonance values of t a and t b (59.210 × 10 −11 s and 57.289 × 10 −11 s) are a factor of seven or six larger than τ S . This by itself is sufficient to explain that the effect disappears as soon as we introduce the right values of Γ and ∆Γ, even with Im(V ) = Im(V ) = 0. We have checked that, in this case, P (K L → K S ) ≃ 10 −4 for κ = 1 and decreases slowly with κ. In addition, if we introduce the correct values for the imaginary parts of the scattering amplitudes, P (K L → K S ) for κ = 1 is further reduced to 8 × 10 −5 and even P (K L → K L ) becomes 0.05, while in the previous case it was 0.93. Clearly, any measurable effect with kaons propagating in matter requires times of the order of τ S . Unfortunately, for such times, even the toy model without imaginary parts gives a maximum value for P (K L → K S ) of the order of 0.02 only. Then, the damping due to the imaginary parts washes out the effect. This is shown in Fig. 2 where we compare for the same t a and t b P (K L → K S ) in the toy model (P 1 ) and for real kaons traversing a real 27 Al -184 W regenerator. So far, in all cases that we have considered, the total time that the particles spend in the regenerator, t = κ(t a +t b ), increases linearly with κ. Obviously, after a few layers most of the particles will disappear due to their decay or absorption. Hence, it is interesting to examine another type of experiment, where the total time t is kept fixed, i.e. as κ increases the times t a and t b are proportionally reduced. In Fig. 3 we plot P (K L → K S ) as a function of κ for this situation. For the beam velocity that we are considering, 10 −11 s corresponds to a pathlength of the order of 1 mm in vacuum. Then from Fig. 3 one can see that a regenerator made of a 12 mm layer of 27 Al followed by another layer of 12 mm of 184 W (κ = 1) is less efficient than another regenerator with four alternating 27 Al -184 W layers of 6 mm each (κ = 2). Perhaps this effect is better illustrated if, instead of the transition probability, we consider the decay of the kaons into a final state f after traversing the regenerator. From Eq. 
(1) one can calculate the time distribution P (K R (t) → f ) of the final state f after the kaon state initially produced as K L passes through the regenerator and then spends outside it the proper time t (which for simplicity we took equal to the proper time spent inside the regenerator). The result is (e.g. ref. [4]) where In our example, shown in Fig. 4, we have assumed that one measures the π + π − final state. The magnitude and the phase of η +− were taken from ref. [5]. The probability distribution after passing κ ( 27 Al-184 W) layer junctions increases with κ. In the same Fig. 4 we also plot the probability distribution for a regenerator where the layers are in reverse order. In this case P (K R → f ) decreases with κ, and both curves tend to a common limit. This is easy to understand. As the number of layers increase we are effectively approaching a "mixed material" with a density that has the average density of aluminum and tungsten. Since the regeneration effect is proportional to the density of the regenerator, one can understand that P (K R → π + π − ) increases with κ for the 27 Al-184 W regenerator and decreases in the 184 W-27 Al case 2 . The variation with the order of the layers (notice that their total number is fixed) is a nice example of quantum mechanics interference. In this problem, the evolution matrix for each individual layer (cf. Eqs. (10)-(11)) is an element of the U(2) group. Hence, the evolution for the total number of layers is, of course, an element of U(2). Since U(2) is a non-Abelian group, shuffling the layers one obtains a different evolution operator. From this point of view, Fig. 4 is a consequence of the non-commutativity of the U(2) group. One should realize that the results shown in Fig. 4 are independent of the CP-violating parameter δ. However, it is possible to use this type of regenerators to measure CP violation at the φ-factories. To see how the effect arises let us recall that the φ-meson decays into the antisymmetric combination where p denotes the momentum of the particle. We assume that in the direction of −p we have a detector, called "left", and in the direction of p another detector, called "right". Both detectors measure muons from the semileptonic decays of the kaons. These decay amplitudes are The kaons propagating to the right from the decay point have to traverse a regenerator made of two layers of different materials a and b. On the other hand, the kaons that propagate to the left must traverse a similar regenerator with two layers of the same width but in reverse order, b followed by a. With this setting one can show that the amplitude to detect in coincidence two µ + on both detectors is We have introduced the ratio between the ∆S = ∆Q violating amplitude and the dominant one. Experimentally, x = [−2 ± 6 + i(1.2 ± 1.9)] × 10 −3 [5], which is consistent with zero; theoretically, within the standard model one expects x ∼ 10 −7 . Therefore in Eq. (43) we have neglected the term of order x 2 . Finally, let us point out that Eq. (43) goes trivially to zero when a=b. This is a simple consequence of the antisymmetry of the initial state. With a similar notation one can obtain the amplitude for two µ − in coincidence. 
The result is The amplitude for a µ + in the left detector and a µ − in the right detector is The other asymmetric amplitude A(π + , π − ) is We shall now consider two asymmetries which can be measured in the two-arm experiments, and also an asymmetry which can be measured in the single arm experiments of the CPLEAR type (see e.g. ref. [9]), Here The ratios R 1 and R 3 are CP-asymmetric observables. They depend on the intrinsic (i.e fundamental) CP violation parameter δ. Furthermore, since the regenerators are made out of matter and not of equal amounts of matter and antimatter, they are themselves CPasymmetric and so induce a macroscopic, extrinsic CP violation which in general contributes to both CP-violating observables, R 1 and R 3 . However, interchanging the order of the layers leads to a partial cancellation of the extrinsic CP violating effects. For this reason the ratio R 3 is primarily sensitive to the fundamental CP violation. In the limit x = 0 the cancellation of the extrinsic CP violation in R 3 is exact and this leads to the result R 3 ≃ 2δ. This is not so for R 1 which does not vanish when δ = 0. The ratio R 1 , for example, is normally of the order of unity as the extrinsic CP violation is of this order. For an aluminum-tungsten regenerator, using x = 0, t a = 24 × 10 −11 s and t b = 12 × 10 −11 s, we find R 1 = 1.349 for δ given in ref. [5], whereas for δ = 0 the corresponding value is R 1 = 1.334. One should notice that R 1 is very sensitive to the imaginary parts of the effective Hamiltonian. Switching off the imaginary parts of the matter-induced potentials V i andV i reduces R 1 by about a factor of 200, while switching off the decay rates Γ S and Γ L R 1 would reduce it by about a factor of 8. If all the imaginary parts are set equal to zero, R 1 is suppressed by a factor 2 × 10 −4 . The parameter R 2 may appear as a CP-violating observable too, but in fact it is not. To see that one has to notice that CP transformation not only interchanges particles with their antiparticles but also flips the sign of all the momenta; for the two-arm setup under discussion this implies an additional interchange of the arguments of A(π i , π j ) so that R 2 is unchanged under the CP transformation δ → −δ. It has a moderate sensitivity to the imaginary parts of the effective Hamiltonian. For the same regenerator and x = 0, we find R 2 = −0.69 with normal values of all the imaginary parts. Switching off V i andV i reduces |R 2 | by about a factor of 2, while switching off Γ S and Γ L would reduce it by about a factor of 1.4. If all the imaginary parts are set equal to zero, R 2 goes to zero. Thus, by measuring R 1 and R 2 one can obtain an information on the imaginary parts of the effective Hamiltonian of the K 0K 0 system in matter, and in particular on the imaginary parts of the K 0 (K 0 )-nucleon scattering amplitudes. In conclusion, we have studied the effects of periodic matter in kaon regeneration. Motivated by the possibility of the parametric resonance in neutrino oscillations in matter we considered similar effects in K L → K S transitions. Unfortunately, the large ∆Γ and imaginary parts of the forward kaon-nucleon scattering amplitudes prevent a sizable enhancement of the K L → K S transition probability (cf. Fig. 2). However, some interesting effects can be produced using regenerators made of alternating layers of two different materials. 
Even though the regenerator has a fixed total length, different values of the probability distribution for the $K_L$ decay into a final state can be obtained (cf. Figs. 3 and 4). Finally, we have pointed out that a two-arm regenerator setup makes it possible to measure the imaginary parts of the $K^0(\bar{K}^0)$-nucleon scattering amplitudes in the correlated decays of the φ-resonance. Combining the data of single-arm regenerator experiments with direct and reverse ordering of the matter layers in the regenerator, one can independently measure the CP-violating parameter δ.
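The two numerical observations made above, that P(K_L → K_S) depends on the number of layers even at fixed total regenerator length and that it depends on the order of the layers, can be reproduced qualitatively with the same kind of sketch. The layer potentials and times remain illustrative placeholders rather than the actual 27Al and 184W inputs of Figs. 3 and 4, so only the qualitative trends should be read from the output.

```python
import numpy as np
from scipy.linalg import expm

# Same illustrative setup as before; all layer parameters are placeholders.
Gamma_S, Gamma_L, dm, delta = 1.0, 1.0 / 579.0, 0.474, 3.0e-3
p = (1.0 + delta) / np.sqrt(2.0 * (1.0 + delta**2))
q = (1.0 - delta) / np.sqrt(2.0 * (1.0 + delta**2))
K_L = np.array([p, -q], dtype=complex)
K_S = np.array([p,  q], dtype=complex)
S = np.column_stack([K_L, K_S])
S_inv = np.linalg.inv(S)
H0 = S @ np.diag([dm - 0.5j * Gamma_L, -0.5j * Gamma_S]) @ S_inv
Ha = H0 + np.diag([0.8 - 0.6j, 0.5 - 1.0j])   # placeholder layer-a potentials
Hb = H0 + np.diag([2.0 - 1.5j, 1.2 - 2.5j])   # placeholder layer-b potentials

T = 2.0                                        # fixed total proper time inside the regenerator
for kappa in (1, 2, 4, 8):
    ta = tb = T / (2 * kappa)                  # thinner layers as kappa grows, total length fixed
    Ua, Ub = expm(-1j * Ha * ta), expm(-1j * Hb * tb)
    U_ab = np.linalg.matrix_power(Ub @ Ua, kappa)   # a first, then b, repeated kappa times
    U_ba = np.linalg.matrix_power(Ua @ Ub, kappa)   # same layers in reverse order
    P_ab = abs(S_inv[1] @ U_ab @ K_L) ** 2
    P_ba = abs(S_inv[1] @ U_ba @ K_L) ** 2
    print(f"kappa = {kappa}:  P(K_L->K_S) ab = {P_ab:.3e},  ba = {P_ba:.3e}")
```

Because the single-layer operators are elements of U(2) and do not commute, the ab and ba orderings give different probabilities even though the total amount of each material traversed is the same.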
4,774
2001-07-23T00:00:00.000
[ "Physics" ]
Transparent Polyurethane Nanofiber Air Filter for High-Efficiency PM2.5 Capture Fine particulate matter (PM) has seriously affected human life, such as affecting human health, climate, and ecological environment. Recently, many researchers use electrospinning to prepare nanofiber air filters for effective removal of fine particle matter. However, electrospinning of the polymer fibers onto the window screen uniformly is only achieved in the laboratory, and the realization of industrialization is still very challenging. Here, we report an electrospinning method using a rotating bead spinneret for large-scale electrospinning of thermoplastic polyurethane (TPU) onto conductive mesh with high productivity of 1000 m2/day. By changing the concentration of TPU in the polymer solution, PM2.5 removal efficiency of nanofiber-based air filter can be up to 99.654% with good optical transparency of 60%, and the contact angle and the ventilation rate of the nanofiber-based air filter is 128.5° and 3480 mm/s, respectively. After 10 times of filtration, the removal efficiency is only reduced by 1.6%. This transparent air filter based on TPU nanofibers has excellent filtration efficiency and ventilation rate, which can effectively ensure indoor air quality of the residential buildings. Introduction Fine particulate matter (PM) is composed of various solid fine particles and droplets with up to hundreds of chemical components. PM is mainly composed of three major chemical substances, including water-soluble ions, carbon-containing compounds, and other inorganic compounds [1][2][3][4][5]. PM is mainly from the burning of fossil fuels and garbages, and it is rich in toxic substances and harmful particulate matter [1,[3][4][5][6]. According to the size of the particle diameter, PM is mainly divided into PM2.5 and PM10, which means that the aerodynamic diameter of the particles is less than 2.5 μm and 10 μm. PM10 stays in the air from a few minutes to a few hours with a limited travel distance; however, PM2.5 has a long residence time in the atmosphere and can last from several days to several weeks [2,5]. Even if PM2.5 falls to the ground, it is easy to be blown back into the air by the wind. Through the process of breathing, PM2.5 can enter the body and accumulate in the trachea or the lung, which will negatively affect the human health [7][8][9]. PM2.5 also has a major impact on the climate and the ecological environment, such as affecting the rainfall process [10][11][12][13][14]. In the past 10 years, PM2.5 air pollution is becoming more and more serious, especially in some developing countries such as China and India [4,15]. In daily life, people at those countries often encounter severe haze weather. For this reason, it is very necessary to take some protection against PM2.5. At present, the protection measures to the severe haze are mainly focused on the outdoor personal protection, such as wearing professional dust masks, which can effectively filter the particle matter [16,17]. The indoor personal protection, such as ventilation systems and air purifier are expensive, complicated to install and requiring replacement for the filter elements [6]. The indoor air filters generally provide air protection for commercial building, due to the high cost of pumping systems for active air exchange. Recently, there are two transparent air filters for residential buildings by windows passive ventilation come into the vision of consumer [17]. 
One is porous membrane filter, but the porosity of this filter is very low, which means high ventilation cannot be achieved. Another one is nanofiber air filter, which porosity can reach 70% and can achieve high ventilation. Some laboratories have prepared a variety of window screens to protect the quality of indoor air with nanofiber. For instance, Chen et al. [18] reported an air filter prepared using electrospun TPU polymer; TPU nanofiber air filter is very effective for removing PM2.5 (98.92%) with very low-pressure drop (10 Pa). Khalid et al. [19] reported a nanofiber window screen made by direct blowing technology, which has good optical transparency (80%) and high PM2.5 filtration efficiency (99%). Liu et al. [6] prepared a transparent air filter by electrospinning, which achieved high ventilation and high PM2.5 filtration efficiency (> 95.0%). However, this research was developed in laboratories and the research of the industrial process of nanofiber filter is little. Materials and Instruments Polymer TPU was obtained from Bayer Co., Ltd., Germany, with tear resistance, abrasion resistance, and UV protection; the substrate conductive mesh is provided by Qingdao Junada Technology Co., Ltd., China. The N, N-dimethylfomamide (DMF) and acetone were provided by the Tianjin Zhonghe Shengtai Chemical Co., Ltd. Scanning electron microscopy (SEM Feiner High Resolution Professional Edition Phenom Pro) is used to study the morphology of TPU fibers. An automatic filtration performance tester for evaluating filtration performance FX3300 Lab Air-IV was purchased from Shanghai Lippo Co., Ltd., China. AFC-131 is used to test ventilation rate purchased from Shanghai Huifen Electronic Technology Co., Ltd. Thermo Scientific Nicolet iS5 is used to measure infrared and analyze the functional groups of TPU fiber membranes. Theta optical contact angle meter was used to analyze the contact angle of TPU fiber film. The light transmittance was evaluated using a UV1901PC ultraviolet spectrophotometer and purchased from Shanghai Aoxiang Scientific Instrument Co., Ltd., China. Preparation of Nanofibrous Membranes TPU nanofiber membrane was fabricated using electrospinning equipment NES-1 (Qingdao Junada Technology Co., Ltd.), which is displayed in Fig. 1a. The mainframe is 2350 mm long, 2200 mm wide, 2700 mm high, and weighs 1980 kg. The touch screen is Siemens PLC, the power is 30 kV, and the spinning width is 1.1 m. The average fiber diameter is about 120 nm, and the weight of the nanofiber membrane is about 0.5 g per square meter. The substrate is suitable for cellulose, synthetic fiber, etc., and the polymer material is suitable for TPU, PVP, PAN, etc. The electrospinning principle is shown in Fig. 1b, and schematic diagram of a nanofiber membrane produced by electrospinning is shown in Fig. 1c. The solution used in the electrospinning was to dissolve different masses of TPU in a mixed solvent in a ratio of DMF to acetone in a volume ratio of 1:1; the spinning voltage was positive pressure 30 kV and negative high pressure − 30 kV, which resulted in a stable jet; substrate moving speed was 10 m/min; and the spinning distance was controlled at 200 mm. The temperature and relative humidity during this process were controlled at 25°C and 50% RH. In order to get different average diameters of nanofibers, the concentration of TPU in the solution was adjusted from 6 to 16 wt%. The TPU solution was electrospun onto conductive mesh under the same conditions. 
The different concentrations of TPU fiber membranes were named TPU-6, TPU-8, TPU-10, TPU-12, TPU-14, and TPU-16, respectively. Characterization of Morphology and Structures One of the important trends in the membrane characterization of nanofibers is the morphology of the membrane surface. The morphology of the TPU nanofiber membrane was observed by SEM, and the voltage used was a 10 kV, scanning imaging system. As shown in Fig. 2a-f, the microscopic morphologies of the nanofiber membrane obtained from the electrospinning TPU solution are showed under different TPU concentrations of 6 wt%, 8 wt%, 10 wt%, 12 wt%, 14 wt%, and 16 wt%, respectively. When the TPU concentrations between 6 wt% and 12 wt% (Fig. 2a-d), there are many bead-like nanofibers of different sizes. This can be attributed to the low viscosity of the polymer TPU molecular chain with the low concentration of the TPU solution. Therefore, in the process of electrospinning low concentration TPU solution, the ejection was difficult to resist the stretching of the electric field force [32]. In addition, due to the viscoelasticity of the TPU molecular chain, the ejection stretched by the electric field force will aggregate to form beaded nanofibers [33]. However, as the concentration of TPU increases, the viscosity of the solution increases, and the electrospinning process will form nanofibers instead of beaded nanofibers, so the beaded nanofibers become less and less and eventually disappear completely (Fig. 2e-f). On the other hand, the viscosity of the solution is an important parameter affecting the diameter of the nanofiber [34]. When the concentration of the TPU solution increases, the viscosity of the solution also increases, so the diameter of the nanofiber increases, as shown in Fig. 2a-f. When the concentration of TPU is higher than 14 wt%, the diameter of nanofibers increases rapidly (Fig. 2e-f). The average diameter of the nanofiber is calculated by Nan-Measurer. The average TPU nanofiber diameter is0 .10 μm,~0.12 μm,~0.14 μm,~0.17 μm,~0.34 μm, and~1.97 μm, corresponding to TPU-6, TPU-8, TPU-10, TPU-12, TPU-14, and TPU-16. Fourier Transform Infrared Spectrum Analysis To identify the composition of the prepared TPU nanofiber membrane, it is necessary to carry out Fourier transform infrared spectroscopy (FTIR) analysis on the sample. First, preheat the equipment for one and a half hours, the pressure is controlled at 15 Mpa, the working voltage is 220 V, the ambient temperature is controlled at 20°C, the ambient humidity is controlled at 40%, the frequency is 50 Hz, and the current is 7.5 A. The test results are as shown in Fig. 3, which is obviously the same as the infrared spectrum of the substrate polyurethane. The spectrum is shown in Fig. 3 Filtration Efficiency Analysis Filtration efficiency is the most important parameters for evaluating transparent air filters. The filtration efficiency test was carried out on different TPU fiber membranes. In this study, the test conditions were the same, the temperature was 20°C, the relative humidity was 40.6%, the flow rate is 2.0 m 3 /h, and PM pollutants are aerosol particles. The size distribution of PM and the filtration effect of each sample are shown in Fig. 4a. The filtration efficiency is positively correlated with the PM particle size. For the same size of PM particles, such as PM2.5 (Fig. 
4b), with the TPU concentration increases from 6 to 12 wt%, the removal efficiency is significantly increased, which can be attributed to the fact that the membrane waved by nanofibers with larger diameter are better to resistant PM particles. However, with the TPU concentration increases from 12 to 16 wt%, the increase in the spacing between the fibers and the disappearance of the bead string fibers results in a significant decrease in the removal efficiency of the TPU fiber membrane [18]. The increase in the concentration of the solution makes the elongation of the electrospinning jet more difficult and slower, resulting in an increase in the pore size of the TPU fiber membrane. Figure 4c-e shows the passage of particulate matter through different diameter fiber membranes. The larger fiber diameter effectively prevents the PM from passing through the fiber membrane, and as the TPU concentration becomes larger, the fiber diameter becomes larger, but the distance between the phase fibers also becomes larger, resulting in a decrease in filtration efficiency. The highest removal efficiency of PM2.5 is the TPU-12. When the particle diameter is ≥ 0.525 μm, the removing efficiency is 100%, and the pressure drop is only 10 Pa. In addition, the TPU-10 on PM2.5 removing efficiency is 99.654%. Ventilation Rate Analysis Maintaining high ventilation is an important property to evaluate the performance of the air filter. Six samples were tested for ventilation rate under the same conditions. The measurement area was 20 cm 2 and the measurement pressure was 200 Pa. The ventilation rate of different concentrations of TPU nanofiber membranes is shown in Fig. 5a, and reasons for affecting the ventilation rate: nanofiber packing density and the fiber average diameter [34]. The nanofiber packing density is calculated as follows: Here, α is the nanofiber packing density, W is the basis weight of the nanofiber membrane, ρ f is the density of nanomaterial, and Z is the nanofiber film thickness. The ventilation rate begins to decline is primarily owing to the addition of TPU nanofiber average diameters (Fig. 5b, c). As the concentration of TPU increases from 8 to 14 wt%, decreasing in the packing density of nanofibers leads to an increase in the distance between the nanofibers, which is beneficial to ventilation rate, even though the diameter of the nanofibers is increased (Fig. 5d). When the nanofiber membrane is made of a solution with a TPU concentration of 14 to 16 wt%, nanofiber diameter plays a crucial role in ventilation rate, and the associated ventilation rate drops slightly (Fig. 5e). When the TPU concentration increases to 10 wt%, the ventilation rate is up to 3480 mm/s, such a high ventilation rate is equivalent to a blank screen without a nanofiber membrane. Contact Angle Analysis Hydrophobicity is an important parameter for evaluating the performance of air filters, and the wettability of obtained TPU fiber membrane was measured by DSA using a 5-μL droplet. The results are shown in Fig. 6a-f, the contact angles are 138.6°, 133.4°, 128.5°, 122.8°, 112.7°, and 107.7°, corresponding to TPU-6, TPU-8, TPU-10, TPU-12, TPU-14, and TPU-16. The contact angle of all samples was greater than 90°, indicating that the transparent air filter prepared with polymer TPU is highly hydrophobic due to the hydrophobic functional groups on the surface of the TPU nanofiber membrane, the small fiber diameter leads to smooth membrane surface and fiber membrane dense structure. 
However, as the concentration of TPU becomes larger, the contact angle becomes lower and lower (Fig. 6g), because the roughness of the surface of the fiber membrane becomes larger. The relationship between contact angle and surface roughness of nanofiber membrane can be understood by Wenzel equation, which is defined as follows: Here, r is the surface roughness factor, which is the proportion of the actual area of the surface to the geometric projected area ( r ≥ 1), θ ′ is the contact angle of the rough surface. As shown in Fig. 6h-i, with the TPU concentration increases, the diameter of the TPU nanofiber increases, and increased roughness of the surface of the nanofiber membrane, resulting in an increasingly low contact angle. Transparency and Reproducibility Testing Another important parameter of the transparent air filter is transmission; the transmittance of the six samples was tested and the results are shown in Fig. 7a. It was found that the transmittance first kept decreasing and then increased, corresponding to the increase in TPU concentration from 6 to 12 wt% and 12 to 16 wt%. When the TPU concentration is from 6 to 12 wt%, the transmittance of the fiber membrane is gradually reduced, mainly because the solution concentration is too low at the beginning (such as 6 wt% and 8 wt%), and the electrospinning process does not easily form fibers. When the concentration of the solution increases, the solution concentration is more suitable for electrospinning, so that more and more fibers are formed by electrospinning. The nanofiber diameter also becomes larger, and the fiber membrane becomes thicker and thicker, so that less light can pass through the fiber membrane. On the other hand, since the concentration of the solution is too low, electrospinning forms a large number of beads ( Fig. 2a-d), which is adverse for light to pass through the fiber membrane. When the solution concentration is from 12 to 16 wt%, the transmittance of the fiber membrane gradually increases, mainly because the viscosity of the solution increases, and the electrospinning process becomes difficult gradually, so that less nanofiber is produced. Another reason is that as the concentration of the solution increases, the beaded string disappears, contributing more light to pass through the fiber membrane. Transmittances of 80%, 75%, 60%, 30%, 45%, and 70%, corresponding to TPU-6, TPU-8, TPU-10, TPU-12, TPU-14, and TPU-16. The TPU-10 not only have a filtration efficiency of 99.654% and the transmission rate is as high as 60%. Figure 7b shows the photograph of the TPU-10 nanofiber membrane with 60% transmittance. For air filters with a transmission of more than 50%, sufficient light can be transmitted through the room to meet indoor lighting requirements. Considering that long-term filtration performance and high air flow are important factors in air filters, we have recycled TPU fiber membranes and continued to test filtration efficiency and ventilation rate, and the results are shown in Fig. 8. Figure 8a shows error bars for combined removal efficiency of 10 cycles of testing of PM2.5 filtration of TPU nanofiber membrane. After 10 passes of TPU-10 filtration, the filtration efficiency was only reduced by 1.6% (from 99.4 to 97.8%). In addition, an error bars for the aeration rates of the 10 test cycles for different TPU concentration fiber membranes are shown in Fig. 8b. The ventilation rate changed slowly and did not decrease significantly. 
After ten test cycles, the ventilation rate was reduced by only about 10 mm/s, indicating that the ventilation performance is very stable. Conclusion In summary, we used a rotating-bead spinneret to electrospin a transparent air filter that can be produced on a large scale. By changing the concentration of the TPU polymer in solution, not only is a significant PM2.5 removal efficiency (99.654%) achieved, but also good optical transparency (60%) and a high ventilation rate (3480 mm/s). In addition, after 10 cycles of filtration and ventilation tests on the TPU transparent air filter, the filtration efficiency was reduced by only 1.6%, and the ventilation rate changed very slowly and remained essentially constant. These results indicate that TPU nanofiber membranes prepared by electrospinning combine good water repellency, good optical transparency, a high ventilation rate, and high filtration performance, and can be used as filter materials in many fields.
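For reference, a minimal numerical sketch of the two relations invoked above, the fiber packing density α = W/(ρ_f Z) and the Wenzel contact-angle relation cos θ′ = r cos θ, is given below. The basis weight follows the ≈0.5 g/m² quoted earlier, while the bulk density, membrane thickness, intrinsic contact angles, and roughness factors are assumed values chosen only for illustration.

```python
import numpy as np

# Illustrative inputs only (not measured values from this work):
W     = 0.5e-3     # basis weight of the nanofiber web [kg/m^2], ~0.5 g/m^2 as quoted above
rho_f = 1.2e3      # assumed bulk density of TPU [kg/m^3]
Z     = 5.0e-6     # assumed membrane thickness [m]

# Fiber packing density: fraction of the membrane volume occupied by polymer
alpha = W / (rho_f * Z)
print(f"packing density alpha = {alpha:.3f}  (porosity = {1.0 - alpha:.3f})")

# Wenzel relation: cos(theta_apparent) = r * cos(theta_intrinsic), with r >= 1
for theta_intrinsic in (80.0, 110.0):            # assumed intrinsic contact angles [deg]
    for r in (1.0, 1.2, 1.4):                    # roughness factor = true area / projected area
        c = np.clip(r * np.cos(np.radians(theta_intrinsic)), -1.0, 1.0)
        print(f"theta = {theta_intrinsic:5.1f} deg, r = {r:.1f} "
              f"-> apparent angle = {np.degrees(np.arccos(c)):.1f} deg")
```

With these inputs the packing density comes out below 10%, i.e. the web is highly porous, and the roughness factor r ≥ 1 amplifies whatever wetting behaviour the smooth film already has.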
4,099.2
2019-12-01T00:00:00.000
[ "Engineering" ]
Beyond DNA Repair: Additional Functions of PARP-1 in Cancer Poly(ADP-ribose) polymerases (PARPs) are DNA-dependent nuclear enzymes that transfer negatively charged ADP-ribose moieties from cellular nicotinamide-adenine-dinucleotide (NAD+) to a variety of protein substrates, altering protein–protein and protein-DNA interactions. The most studied of these enzymes is poly(ADP-ribose) polymerase-1 (PARP-1), which is an excellent therapeutic target in cancer due to its pivotal role in the DNA damage response. Clinical studies have shown susceptibility to PARP inhibitors in DNA repair defective cancers with only mild adverse side effects. Interestingly, additional studies are emerging which demonstrate a role for this therapy in DNA repair proficient tumors through a variety of mechanisms. In this review, we will discuss additional functions of PARP-1 – including regulation of inflammatory mediators, cellular energetics and death pathways, gene transcription, sex hormone- and ERK-mediated signaling, and mitosis – and the role these PARP-1-mediated processes play in oncogenesis, cancer progression, and the development of therapeutic resistance. As PARP-1 can act in both a pro- and anti-tumor manner depending on the context, it is important to consider the global effects of this protein in determining when, and how, to best use PARP inhibitors in anticancer therapy. INTRODUCTION Poly(ADP-ribose) polymerase-1 (PARP-1) is a nuclear enzyme which binds DNA via two zinc finger motifs and transfers chains of ADP-ribosyl moieties (PARs) from nicotinamide-adeninedinucleotide (NAD + ) to chromatin-associated acceptor proteins, including PARP-1 itself. This post-translational modification plays an important role in promoting DNA repair by releasing PARP-1 from DNA and allowing for recruitment of proteins involved in both base excisional repair (BER) and homologous recombination (HR) (1). Accordingly, PARP-1 is an attractive anticancer target, and poly(ADP-ribose) polymerase (PARP) inhibitors have been identified as chemo-and radiation-sensitizing agents in an array of cancers (2)(3)(4)(5), including our report on the sensitization of head and neck cancer to radiotherapy following PARP inhibition (6). Perhaps the most well-known tumoricidal effects of PARP inhibitors are in BRCA-mutated cancers, which harbor DNA repair defects and become dependent on PARP-1-mediated repair for survival. Two landmark studies (7,8) found inhibition of PARP-1 in cells containing BRCA mutations resulted in the generation of chromatid breaks, G2 cell cycle arrest, and enhancement of apoptosis, results which have been confirmed in early phase clinical trials (9,10). Interestingly, recent studies also show potential efficacy of PARP inhibition in sporadic tumors lacking DNA repair defects. A clinical study of the PARP inhibitor olaparib in women with heavily pretreated high-grade serous ovarian cancer without germline BRCA1/2 mutations resulted in objective responses in 11/46 (24%) (11), indicating there may be additional determinants of sensitivity to PARP inhibition. Pre-clinical studies have identified susceptibility to PARP inhibition alone in HR-proficient HER2positive breast cancer, pancreatic cancer, prostate cancer, Ewing's sarcoma, small cell lung carcinoma, and neuroblastoma, among others (12)(13)(14)(15)(16)(17). These reports demonstrate the existence of non-DNA repair functions of PARP-1 that may be targetable for cancer treatment. 
It is thus becoming increasingly apparent that a number of PARP-1-mediated cellular processes influence characteristics of tumor development, progression, and treatment response, including several of the eight "hallmarks of cancer" proposed by Hanahan and Weinberg (18) (Figure 1). In this review, we will discuss cancer-related functions of PARP-1 -including regulation of inflammatory mediators through NF-κB, cell death and energetics, ERK-mediated tumor progression and invasion, mitosis, gene transcription, and sex hormone signaling -and examples of how these functions may be exploited to expand the patient population potentially benefiting from treatment with PARP inhibitors. NF-κB-MEDIATED TUMOR-PROMOTING INFLAMMATION In multiple cancers, including breast, prostate, and head and neck among others, the NF-κB signaling pathway undergoes a loss of regulation resulting in constitutive activation (19). Briefly, NF-κB is a family of transcription factors including RelA/p65, RelB, c-Rel, p50, and p52, which exist as homo-and hetero-dimers. DNAbinding affinity and DNA sequence specificity is dependent on the composition of the dimer. Inhibitory proteins bind NF-κB dimers FIGURE 1 | Non-DNA repair functions of PARP-1 influence the "hallmarks of cancer" (18). This schematic depicts multiple PARP-1-mediated processes which either stimulate or inhibit six of the eight "hallmarks of cancer," as indicated by green and red boxes respectively. These hallmarks, proposed by Hanahan and Weinberg, are malignant characteristics that provide a framework for understanding the biology of cancer. and sequester them in the cytosol in the absence of a stimulus; pathway activation causes proteasomal degradation of inhibitors, allowing the dimer to translocate to the nucleus and activate proinflammatory transcription programs. Although NF-κB signaling mediates the acute immune response responsible for targeting and eliminating cancerous cells, chronic inflammation mediated by this "hallmark" pathway can lead to the malignant phenotype (Figure 1), facilitating escape from immune surveillance, cancer survival, metastasis, and angiogenesis (20). Activation of NF-κB can be regulated by PARP-1 via multiple mechanisms (Figure 2). First, PARP-1 directly interacts with histone acetyl-transferases p300 and CREB-binding protein (CBP) to synergistically co-activate NF-κB-dependent gene expression. In response to inflammatory stimuli, p300/CBP acetylates PARP-1 at specific lysine residues. This modification is necessary for PARP-1-p50 interaction, enhancement of p300-p50 interaction, and co-activation of NF-κB-mediated transcription programs (21,22). Co-activation is negatively regulated by the activity of class I histone deacetylases (HDACs) (22) and SUMO1/3-mediated SUMOylation of the automodification domain of PARP-1 (23). Second, enzymatic activation of PARP-1 variably affects NF-κB, with outcomes dependent on the identity of the PAR acceptor protein. AutoPARylation of PARP-1 following detection of DNA strand breaks promotes the formation of a "signalosome" containing IKKγ (NEMO), the regulatory subunit of a NF-κB inhibitory complex, along with PIASγ, and ATM. Chains of PAR on activated PARP-1 provide the scaffold needed for SUMOylation of IKKγ by the PIASγ PAR binding motif, leading to activation of IKK and NF-κB (24). The effects of PARylation on NF-κB itself are less clear, with different sources reporting decreased, increased, or unaffected DNA-binding activity (25)(26)(27). 
Taken together, these studies demonstrate a strong role for PARP-1 in regulating NF-κB activity. The interaction between PARP-1 and the NF-κB pathway promotes production of pro-inflammatory cytokines such as TNFα, IL-6, INFγ, E-selectin, and ICAM-1, as well as expression of nitric oxide synthase (28)(29)(30); PARP inhibition has been shown to attenuate upregulation of these factors in response to inflammatory stimuli (28,29). Furthermore, PARP inhibition may also prevent inflammation-associated adverse side effects of traditional chemotherapeutics (31), supporting the use of PARP inhibitors in multidrug regimens. Loss of PARP-1 activity not only decreases pro-tumor inflammation, but also inhibits two related hallmarks of cancer through anti-inflammatory mechanisms: proliferative signaling (32) and metastasis (33,34) (Figure 1). Recently, we discovered an unexpected sensitivity to PARP inhibition in DNA repair proficient HER2-positive breast cancer cells through attenuation of NF-κB-mediated signaling (13). HER2 over-expressing cancers have activated NF-κB, which acts to block apoptosis and possibly mediate resistance to HER2-targeted drugs (35). In HER2-positive breast cancer cells, treatment with PARP inhibitor significantly reduced the expression of NF-κB activator Frontiers in Oncology | Cancer Molecular Targets and Therapeutics IKKα and phosphorylated p65 while increasing inhibitory IkBα. These events resulted in decreased NF-κB transcriptional activity in HER2-positive, but not HER2-negative, breast cancer cells (13). Furthermore, overexpression of HER2 alone was sufficient to confer sensitivity to PARP inhibitor, suggesting synthetic lethality with PARP inhibition in tumors that are oncogene-addicted to HER2 signaling through NF-κB. This study represents a specific application of PARP-1-regulated NF-κB signaling to cancer therapy, one that may soon be expanded into a clinical trial. CELLULAR ENERGETICS AND CELL DEATH Cancer cells are characterized by excessive proliferation, impaired cell death signaling, and deregulated metabolism (Figure 1). These features are often mediated by altered mitochondrial activity coupled with inactivation of apoptotic signaling through decreased expression of pro-apoptotic factors like p53 or overexpression of anti-apoptotic factors like Bcl-x. Integrity of regulatory pathways for cell death and metabolism is important for response to many cancer treatment modalities, as well as in cancer imaging and diagnostics. Cellular energetics and death signaling are heavily regulated by PARP-1, allowing activity of this protein to serve as a switch between cell fates and to affect both tumor proliferation and therapeutic response. In response to damage stimuli, activated PARP-1 acts early in the apoptosis initiation pathway to stabilize p53 and facilitate its function (36). If damage is excessive, high levels of PAR synthesis by PARP-1 deplete its NAD + substrate; additional interactions between PARP-1 and NMNAT-1, a NAD + synthase, and SIRT1, a NAD + -dependent protein deacetylase, further contribute to PARP-1 as a controller of NAD + availability and, thus, NADdependent metabolic reactions. ATP-dependent NAD + salvage saps cellular ATP stores, resulting in energy deprivation and, eventually, energy crisis-induced necrosis (Figure 3). Furthermore, PARP-1-mediated PARylation may inactivate caspase-8 and reduce caspase-mediated apoptotic signaling (37). 
Hyperactivation of PARP-1 and accumulation of PAR can also cause translocation of PAR to the cytosol, where it interacts with the outer mitochondrial surface. Here it binds apoptosis inducing factor (AIF) and induces its release and translocation to the nucleus, ultimately resulting in large-scale DNA fragmentation and a novel PARP-1-dependent cell death mechanism known as "parthanatos" (38). To prevent these events, activated caspases cleave PARP-1 into two fragments: an 89-kDa C-terminal fragment with low levels of catalytic activity and a 24-kDa N-terminal peptide which inhibits the catalytic activity of uncleaved nuclear PARP-1. Conservation of NAD + and, thus, ATP allows the cell to undergo programed cell death (39)(40)(41). Accordingly, inhibition of PARP-1 preserves ATP levels, improves antioxidant status, and normalizes anti-apoptotic Bcl-x levels in the kidney following chemotherapy-induced injury (42,43). Poly(ADP-ribose) polymerase-1 also regulates the classical necroptotic pathway mediated by the death promoting MAP kinase, c-Jun N-terminal kinase (JNK). This signaling network is activated in many cancers and has been implicated as a driver of both tumor development and treatment response (44,45). PARP-1 downregulates MAP kinase phosphatase MKP-1 expression and inhibits the survival kinase Akt, both of which activate JNK (46,47), suggesting potential benefit for PARP inhibition in tumors with elevated JNK activity. JNK1 mediates phosphorylation and sustained activation of PARP-1, creating a feed-forward regulatory loop (48). In conjunction, PARP-1-induced depletion of ATP stimulates AMP-activated protein kinase (AMPK) while inhibiting mTOR to promote autophagy, yet another cell death pathway important in cancer survival and treatment response (49). Pharmacologic inhibition of PARP-1 promotes Akt activity and mTOR signaling resulting in decreased cell death (50), although these results are contradicted by a recent report showing PHLPP1mediated downregulation of Akt activity and increased cell death following PARP inhibition (51). Clinically, targeting the role of PARP-1 in cell death pathways appears to be complex. PARP-1 inhibition may reduce PARmediated inactivation of caspase-8, sensitizing cancer cells to tumor necrosis factor-related apoptosis-induced ligand (TRAIL) therapy (37). Additionally, inhibition of PARP-1 prevented cisplatin-and methotrexate-induced ATP depletion and nephrotoxicity (42,43), as well as imatinib (Gleevec)-induced JNK activation and cardiotoxicity (52), without significantly affecting the anticancer activity of these agents. However, activation of the Akt survival pathway may counteract the cytotoxic effects of PARP inhibition and cause resistance to therapy (47), suggesting Akt pathway inhibition may enhance PARP inhibition in anti-tumor therapy. Despite these complexities, the influence of PARP-1 on metabolic co-factors and cell death signaling is significant, and further studies examining the role of PARP inhibition in manipulating these processes is warranted. ERK-MEDIATED ANGIOGENESIS AND METASTASIS In addition to the JNK-mediated signaling described previously, a second family of MAP kinases known as extracellular signalregulated kinases or ERKs is involved not only in cell death determination but also in tumor progression, angiogenesis, and metastasis. ERK activation is pivotal in cancer cell survival through upregulation of anti-apoptotic proteins and inhibition of caspase activity (53). 
Inhibition of this pathway by targeting ERK or MEK, which is immediately upstream of ERK in signaling, has been associated with suppression of ovarian tumor growth (54), reduced metastatic potential of melanoma cells (55), and increased sensitivity to cytotoxic agents (56). Recent studies indicate an important role for PARP-1 in promoting ERK signaling. Poly(ADP-ribose) polymerase-1 is activated and autoPARylated by a direct interaction with phosphorylated ERK2 (pERK2), resulting in enhanced pERK2-catalyzed phosphorylation of target transcription factors and increased gene expression (57). Furthermore, PARP inhibition causes loss of ERK2 stimulation by decreasing the activity of critical pro-angiogenic factors including vascular endothelial growth factor (VEGF), transmembrane signaling protein syndecan-4 (SDC-4), platelet/endothelial cell adhesion molecule (PECAM1/CD31), and hypoxia inducible factor (HIF). This ultimately results in reduced angiogenesis and inflammation (58)(59)(60)(61)(62). The effects of PARP-1 on ERK signaling are further enhanced by PARP-1-mediated transcription of vimentin, an intermediary angiogenic filament upregulated in tumor vasculature and pivotal for the endothelial-to-mesenchymal transition characteristic of metastasis (63). Pharmacologic inhibition of PARP reverted this transition, correlating with a reduction in the number and size of metastatic melanoma foci in a mouse model (63). Collectively, these studies indicate PARP-1 directly fosters ERK signaling in addition to mediating separate but parallel signaling pathways reinforcing the same end result of increased angiogenesis and metastasis, two tumor-promoting features (Figure 1). As such, PARP inhibition may be effective in blocking the ERK signaling network or increasing activity of ERK/MEK inhibitors, agents already shown to be efficacious in acute myeloid leukemia, multiple myeloma, melanoma, colorectal, breast, lung, and pancreatic cancers (64)(65)(66)(67)(68). Furthermore, selective ERK inhibition induces tumor regression in MEK inhibitor-resistant models (67), raising the question of whether PARP inhibition could be similarly effective in either MEK or ERK-resistant tumors due to its proximity in the signaling pathway. As MEK, ERK, and PARP inhibitors have only recently entered early phase clinical trials, it will be some time before we know which patients benefit most from these drugs, either alone or in combination, but their interaction warrants further investigation. MITOTIC REGULATION The high proliferation rate of cancer cells is a result not only of decreased cell death but also of improperly regulated cell cycling, allowing evasion of growth suppressing signals. Although multiple cell cycle checkpoints can be impaired in cancer, the mitotic Frontiers in Oncology | Cancer Molecular Targets and Therapeutics or spindle assembly checkpoint is of great importance both in tumorigenesis and as an anticancer target. This point of regulation, which is responsible for ensuring appropriate chromosome segregation, is required for cell viability. Cells with a weakened mitotic checkpoint are capable of survival but do not maintain proper chromosome segregation, resulting in genomic instability and aneuploidy. These are common features of tumor cells and may even act as drivers in cancer development (Figure 1). PARP-1 can act on many mediators of cell cycle progression through its effects on gene expression (68), which will be detailed in a later section. 
However, direct regulation of the mitotic checkpoint by PARP-1 is another important factor that may be targetable in cancer treatment. Recent reports suggest multiple roles for PARP-1 in the structural machinery of mitosis. First, PAR, which is primarily synthesized by PARP-1, is required for assembly and function of the bipolar spindle (69). In addition, PARP-1 both localizes to and PARylates proteins at centromeres and centrosomes during mitosis (70,71). PARP-1 also mediates PARylation of p53, which is responsible for regulating centrosome duplication and monitoring chromosomal stability (71). Loss of PARP-1activity is associated with mislocalization of centromeric and centrosomal proteins, resulting in incomplete synapsis of homologous chromosomes, defective chromatin modifications, and failure to maintain metaphase arrest, indicating loss of mitotic checkpoint integrity (71,72). Similarly, inhibition of PARP-1 is associated with genomic instability characterized by reduced stringency of mitotic checkpoints, centrosome hyperamplification, and chromosomal aneuploidy, the most common characteristic of solid tumors (71,73,74). Furthermore, PARP-1 has been shown to interact with the E3 ubiquitin ligase, CHFR, a tumor suppressor with an important role in the early mitotic checkpoint. Binding of these two proteins results in degradation of PARP-1 and cell cycle arrest in prophase, an effect stimulated by the microtubule inhibitor docetaxel resulting in resistance to this drug in CHFR-over-expressing cancer cells. Concomitant use of a PARP inhibitor with docetaxel significantly increased apoptosis in these cells, suggesting a role for PARP inhibition in sensitizing cancers with high CHFR activity to microtubule inhibitors (75). GENE TRANSCRIPTION The clinical characteristics of cancer, including growth, metastatic potential, and response to treatment, are greatly influenced by dysregulation of gene transcription. Gene expression profiles are currently being utilized as tumor biomarkers, indicators of treatment sensitivity or resistance, and prognostic predictors. In the future, there may even be a role for therapeutic agents that reactivate a silenced tumor suppressor or silence an activated oncogene. In total, 3.5% of the transcriptome is regulated by PARP-1 with 60-70% positively regulated (76), including genes involved in tumor promotion such as JUND, MDM2, HGF, FLT1 (VEGFR1), EGFR, HIF2A (EPAS1), SPP1 (OPN), MMP28, ANGPT2, and PDGF (77). As discussed below and shown in Figure 4, this regulation can occur broadly through interactions with nucleosomes and modification of chromatin, can be gene specific through interactions with promoters and binding factors, or can result as a combination of the two, as binding of PARP-1 to nucleosomes mediates its localization to specific target gene promoters (78,79). CHROMATIN STRUCTURE One mechanism by which PARP-1 alters gene expression is through regulation of chromatin structure and, thus, DNA accessibility. Simultaneous binding of multiple neighboring nucleosomes by PARP-1 compacts chromatin into a supranucleosomal structure, repressing gene transcription (79). This structural change is further stimulated by histone deacetylation mediated by a complex consisting of PARP-1, ATP-dependent helicase Brg1 (SmarcA4), and HDACs (80). Conversely, PARylation of core histones promotes charge repulsion-induced relaxation of chromatin and recruitment of transcription machinery (81)(82)(83). 
PARP-1mediated PARylation also results in disassociation of linker histone H1, a repressor of RNA polymerase II-mediated transcription; accordingly, higher proportions of PARP-1:H1 indicate active promoters (84), suggesting potential utility of PARP-1 as a biomarker for actively transcribed genes. Although these outcomes can be separated by PARP-1 activity (protein binding versus enzymatic function), pharmacologic inhibition of PARP affect both actions, indicating manipulation of chromatin accessibility through PARP-1 is not currently an option for cancer therapy. METHYLATION PATTERNS Along with chromatin structure, methylation patterns also play a large role in determining DNA accessibility. Alterations in DNA methylation are commonly found in many cancers and serve as a functional equivalent to a gene mutation in the process of tumorigenesis. Inhibition of PARP-1 is associated with transcriptional silencing through accumulation of DNA methylation and CpG island hypermethylation throughout the genome (85). This effect may be mediated by dimerization of PARP-1 with CCCTCbinding factor (CTCF), a chromatin insulator which binds to hypomethylated DNA regions. As the CTCF-PARP-1 interaction is PAR-dependent, decreased PAR following PARP inhibition abrogates this function (86,87). Loss of CTCF-PARP-1 complex activity results in transcriptional silencing of multiple loci including tumor suppressors CDKN2A-INK4 (p16), CDH1 (e-cadherin), and P19ARF (88,89). Poly(ADP-ribose) polymerase-1 can also hinder DNA methylation by dimerization with DNA (cytosine-5-)-methyltransferase 1 (DNMT1), a methyltransferase found overexpressed in gastrointestinal tract carcinomas, resulting in inhibition of its methyltransferase activity (85,90). In contrast, PARP-1 binding and PARylation of the Dnmt1 promoter actually enhances its transcription by preventing methylation-induced silencing (91). The reduced catalytic efficiency of PARylated DNMT1 may come as a result of negatively charged PARylated PARP-1 out-competing DNA for binding with DNMT1 (92). Interestingly, PARP-1-DNMT1 can form a ternary complex with CTCF at unmethylated CTCF-target sites in a PAR-dependent manner. Loss of PAR from this complex causes dissociation of PARP-1 and CTCF, allowing the still-bound DNMT1 to methylate the site and inhibit transcription (92). Although some specific tumor suppressors are mentioned above as being affected by PARP-1-mediated chromatin insulation, the activity of PARP-1 in regulating DNA methylation patterns www.frontiersin.org at specific genes or genic regions is largely unknown. As such, it is difficult to predict the effect of PARP inhibition on cancer growth and progression through this mechanism. However, with the advent of genomic profiling, it has recently become possible to identify methylation changes specific to certain cancer subtypes. Anticancer agents with epigenetic modifying activity, such as DNA methyltransferase inhibitors, are being investigated in these cancers and show promising results, especially in hematologic malignancies (93). The effect of PARP inhibition on epimutations has not been studied, but the reports described above suggest PARP inhibitors could have similar applicability. RNA POLYMERASE II ACTIVITY Poly(ADP-ribose) polymerase-1 can also promote transcription in a more sequence-specific manner by positively regulating RNA polymerase II activity at active promoters. 
This occurs through: (1) PARylation-induced exclusion of histone demethylase KDM5B, maintaining levels of activating histone mark K3K4me3 (82), (2) PARylation-induced dissociation of the DEK repressor, promoting loading of the RNA polymerase II mediator complex (94), and (3) creation of a PAR scaffold for retention of RNA polymerase II (95). Surprisingly, a recent report showed that inhibition of PARP-1 enzymatic activity was associated with increased H3K4me3, resulting in upregulation of sodium iodide symporter transcription and elevated radio-iodine uptake in thyroid cancer cell lines (96). This contradictory work may result from target gene specific functions of PARP-1, as the previously cited studies were focused on genes known to be positively regulated by PARP-1. However, it does illustrate the need for greater understanding of PARP-1 involvement at active gene promoters, as well as the potential for manipulating PARP-1-mediated transcription to enhance efficacy of cancer therapy. DNA AND TRANSCRIPTION FACTOR BINDING Gene expression can be further regulated by direct interactions between PARP-1 and DNA elements or binding factors. PARP-1 acts as a promoter-specific switch at target genes, facilitating the release of inhibitory co-regulators and recruitment of stimulatory co-regulators (97,98). PARP-1 binding of the NF-κB immediate upstream region (IUR) element activates transcription of CXCL1, which encodes melanoma growth stimulatory activity protein and is overexpressed in the progression of malignant melanoma (99). Binding of PARP-1 to the transcription factor E2F-1 increases E2F-1 promoter activity and expression of the E2F-1-responsive oncogene Myc (c-Myc) (100). PARP-1 expression and activity are also required for cancer cell invasion (Figure 1) mediated by ETS transcription factors -whose fusion products drive Ewing's sarcoma, acute myeloid leukemia, and prostate cancer -and the Ewing's sarcoma fusion protein EWS-FLI (14,15). While PARP-1 interaction with these factors promotes pro-tumor signaling, other interactions have the opposite effect. PARP-1 suppresses selfinhibition of AP-2, a transcription factor that negatively regulates Frontiers in Oncology | Cancer Molecular Targets and Therapeutics (58)(59)(60)(61)(62) cell cycle and proliferation (101). Increased AP-2 expression suppresses cancer cell growth (102) and may inhibit ras oncogenemediated transformation (101), effects likely diminished by PARP inhibition (Figure 1). PARP-1 has also been shown to bind the inhibitory element of COX-2, which mediates inflammation and promotes VEGF-mediated pro-angiogenesis pathways activated in cancer cells (103,104). Instances of PARP-1-mediated enzymatic activity affecting specific transcription factors or genes often translate to a clear role for PARP-1 inhibitors as anticancer agents, even in monotherapy. For example, ETS-positive prostate tumors and EWS-FLI-positive Ewing's sarcomas are highly sensitive to PARP inhibitors (14,15). However, PARP-1 has multiple and diverse functions involving both PARylation activity and DNA-binding capability. Enzymatic inhibition, which decreases PARP-1 self PARylation, actually increases DNA binding and may be detrimental in some cancers, such as the malignant melanoma example given above. A greater understanding of the relative effects of PARP-1 on transcriptional activity is needed in order to select tumors with a molecular profile conducive to pharmacologic inhibition through this mechanism. 
SEX HORMONE SIGNALING Sex hormones have been implicated in the development, progression, and treatment sensitivity of prostate, breast, gynecologic, and colon cancers. Sex steroid effects are mediated through their receptors, which act as transcription factors in steroid-responsive tissues. Any of the multiple levels of regulation controlling these signaling pathways can become impaired, leading to the abnormal proliferative responses characteristic of cancer progression (Figure 1). Similar to PARP-1-mediated regulation of transcription factor activity, PARP-1 plays a role in regulating three of the sex hormone receptors most commonly linked to cancer: the estrogen receptor (ER), progesterone receptor (PR), and androgen receptor (AR). Approximately 80% of breast carcinomas are positive for ER, identifying ER-targeted therapies as excellent, although not infallible, treatment options in these cancers (105). PARP-1 interacts with the ERα isoform both directly and through estradiol-induced PARylation to enhance binding of ERα and other activating factors to target gene promoters (106,107), suggesting PARP inhibition may enhance the activity of ER-targeted agents. A similar interaction occurs between PARP-1 and PR: PARP-1 binding of PR, as well as hormone-activated CDK2-induced PR PARylation, acts to stimulate cancer cell proliferation (108). PARP-1 regulation of PR activity is of great interest in endometrioid carcinomas specifically, as expression of PARP-1 and PR is positively correlated at each pathologic stage of this cancer (109). However, the effects of PARP inhibition in endometrial cancer have yet to be determined. Recently, a report detailing the strong interaction between PARP-1 and AR has generated much excitement over the potential for PARP inhibitors in prostate cancer treatment. Human prostatic adenocarcinoma, a cancer highly resistant to standard therapies, is reliant on AR activity for growth and survival. Accordingly, AR-targeted therapies are the primary treatment for these patients. Unfortunately, there are multiple mechanisms for AR reactivation leading to tumor recurrence, a lethal phenotype known as castration-resistant prostate cancer. PARP-1 enzymatic activity, which is significantly upregulated in castration-resistant prostate cancer, promotes both AR chromatin binding and transcription factor functions. Although PARP-1 does localize with AR to regulatory sites of AR-target genes, the two proteins appear to be members of separate complexes at these loci. Inhibition of PARP-1 in vivo: (1) depletes both PARP-1 and AR at target genes, (2) significantly reduces expression of target genes, including the pro-tumorigenic ETS genes referenced previously, (3) sensitizes both castration-resistant and castration-sensitive prostate cancer cells to genotoxic insult and androgen depletion, (4) enhances the anti-tumor effects of anti-androgen therapy, and (5) delays onset of resistance to anti-androgen therapy. Ex vivo studies of castration-resistant prostate tumors displayed a significant anti-tumor response to both veliparib and olaparib, two well-known PARP inhibitors, that correlated with reduced AR activity (110). These results suggest PARP inhibitors have the potential to significantly enhance existing prostate cancer therapy and improve outcomes for patients with castration-resistant tumors. PROMISE AND CHALLENGES Poly(ADP-ribose) polymerase inhibitors are exciting new drugs that are easily delivered, can be highly efficacious, and are associated with few side effects.
Mild nausea is commonly reported, with rare instances of more serious symptoms such as temporary cognitive deficits and myelosuppression. While ongoing clinical trials are focused on exploiting the role of PARP-1 in DNA repair, we have identified in this review multiple targetable functions of PARP-1 that are not dependent on HR defects (Figures 1-4; Table 1). One of the challenges in broadening the use of PARP inhibitors in anticancer therapy is more efficient identification of patients who may respond to these drugs. Some ongoing clinical trials include analysis of protein expression -including HR proteins, NF-κB, and PARP-1 itself -in relation to clinical response in search for potential biomarkers of sensitivity. However, the list of candidates is extensive and will continue to grow as additional functions of PARP-1 are discovered. Banking tumor biopsies from patients enrolled in PARP-1 clinical trials will greatly expedite the development of a panel of biomarkers, as will increased use of cancer genome sequencing and microarray technologies. Another challenge will be in identifying and overcoming mechanisms of resistance to PARP inhibition. For example, a second BRCA mutation or a deletion of the original mutation can cause reversion to HR-proficiency and resistance to PARP inhibitors in BRCAmutated cancers (111). As the majority of clinical applications proposed here are theoretical or in pre-clinical development, associated mechanisms of resistance are entirely unknown, although development of such resistance is practically assured. Thirdly, many of the functions discussed here are effected by PARP-1 binding rather than enzymatic activity. Currently available PARP inhibitors act at the catalytic site of PARP-1, which does result in some degree of altered binding capacity via changes in autoPARylation status. However, treatment with PARP inhibitors may not effectively inhibit specific PARP-1 interactions, or may require different dosing. It will be important to study the various clinically available agents to determine if, and to what extent, binding domains are affected. Despite these obstacles, PARP inhibition is an extremely promising anticancer strategy and, as the first agents near completion of phase III trials, it will be exciting to see the magnitude of impact PARP inhibitors will have in clinical practice. AUTHOR CONTRIBUTIONS Alice N. Weaver and Eddy S. Yang conceptualized the topic. Alice N. Weaver conducted the literature review and wrote the article. Eddy S. Yang critically revised the article and provided guidance and supervision.
6,518.4
2013-11-27T00:00:00.000
[ "Biology", "Medicine" ]
Denial of Service Attacks Prevention using Traffic Pattern Recognition over Software-Defined Network Recent trends have shown a migration of software from local machines to server-based services. These service-based networks depend on high up-times and strong resilience in order to compete in the market. Along with this growth, denial of service attacks have equally grown. Defending against these attacks has become increasingly difficult with the growth of the Internet of Things and the different varieties of denial of service attacks. For this, our research offers a solution of implementing software-defined networking and real-time metric-based techniques to mitigate a denial of service attack within a smaller time window than other comparable solutions. The use of our method offers both efficient attack handling and the flexibility to fit a variety of implementations. The end result is a network that can automatically adapt against new attacks based on previous network activity. Introduction Networks are implemented with the core ideas of network integrity, confidentiality, and availability. The security of a network depends on the state of these core ideas and how they are implemented. As network technologies see advancements in transfer speeds and computational power, the ability to implement flexible and effective security solutions has become difficult. The availability of networks is under constant threat from attacks focusing on bringing down the service or network itself. Defending against denial of service (DoS) attacks has become important in an environment where users rely on these services on a daily basis. The DoS attack in general seeks to bring down network services by flooding a service with dummy traffic, with the goal of overloading the service, bringing it down, and affecting its availability. An example of such attacks is presented in Fig. 1. As a service becomes unavailable due to a DoS attack, the hosting companies will be significantly affected through loss of profit and customers. The development of software-defined networking (SDN) has brought network security solutions that will help in solving these issues. Figure 1. DoS vs DDoS Software-defined networking is the process of virtualizing network hardware to facilitate the flexibility and scalability of software implementations. The idea is to connect hosts through the use of virtual switches and virtual controllers that can be used to automate network functions programmatically. A virtual switch serves the same purpose as a hardware switch by creating a topology to connect hosts while handling basic traffic flow. The virtual controller delegates flows to the switches and handles network-wide operations. The two elements of virtual controllers and virtual switches allow complete control over a network that can adapt to threats. In this work, we present a novel DoS mitigation scheme based on SDN. This work serves to implement an SDN that uses the adaptability and programmability features to defend against different DoS attacks. The remainder of this paper is organized as follows: we discuss the related work in Section 2, followed by our motivations and contributions in Section 3. Our problem statement is then outlined in Section 4.
This section will consist of detailed definitions for both the simulations and technologies used, as well as denial of service attacks. Our proposed denial of service prevention scheme will be presented and discussed in Section 5, followed by our analysis and performance evaluation in Section 6. We will summarize and conclude this paper in Section 7. Related Works The details of how denial of service attacks are handled and defended against are outlined in [1]. A variety of current solutions exist today that involve specialized hardware switches and server load balancing. Solutions such as AutoSlice serve to handle load balancing in order to reduce the overall strain on the network [2]. This is accomplished by evenly distributing traffic throughout the network when large flows are detected. The downside of applying this method to defend against DoS is that it can create network-wide strain during larger denial of service attacks. During a large DoS attack, the solution might propagate the attack across the entire network. Singh et al. in [3] offered another solution based on the use of a buffer to create a waiting window before blocking the traffic. In their proposed solution, as soon as suspicious traffic is identified, the queue buffer at the switches is increased to see whether the traffic keeps coming. During this process the attacking hosts are requested to decrease their traffic rate. If the attack persists, the host is then blocked by the network. Unfortunately, this slowly allows potential attacks to consume more bandwidth over time. Although this prevention strategy offers a way to identify large legitimate traffic, it is still vulnerable to large distributed attacks. This is because of the large timing window the prevention algorithm allows before blocking an attacking host. Both solutions stated above also impose a large increase in processing overhead on the network controller. This comes at a large cost for smaller networks that may not be able to handle the increase in workload. The costs are based on fundamental network architecture problems involving limited bandwidth and security implementations [4][5]. For many companies a complex security solution for DoS is not an option due to the lack of resources or the incompatibility with their current systems. To lessen the resource requirement needed to better secure a network, cheaper solutions for monitoring networks have been found, such as sFlow [6]. Technologies such as sFlow allow for low-resource monitoring that can be adapted to improve security [7][8]. Monitoring the network allows the polling of different metrics at the switch or host level, such as the number of messages exchanged between network entities, the bandwidth consumption, and the number of dropped packets, which can be used to recognize the traffic pattern in the network and can easily be exploited in scripting solutions. By nature a network is supposed to transmit data, and the denial of service attack exploits this core function, which makes it hard to defend against completely [1][4][5]. The idea of denial of service prevention is to identify and mitigate attacks in a timely manner. This minimizes the time frame in which the targeted network is affected. In this work, we intend to refine this defense against denial of service using a more adaptable scheme that offers rapid detection and mitigation.
Motivations The current customer climate demands a high degree of reliability and availability from network services that are used on a daily basis. This exponential increase in network traffic has put a strain on older network architecture models, which fail to handle large data flows without down time. As current solutions to denial of service attacks based on load balancing and buffer limits fall short of delivering quick identification and mitigation of attacking hosts [1][3], we focused our work on contributing a more rapid solution to denial of service attacks than the previously stated solutions, as well as lower processing workloads for the virtual network devices. Problem Statement The main issue with current solutions to DoS attacks is decreased availability. During a DoS attack, affected networks are effectively cut off from the Internet, rendering the services being provided unusable [1]. Our research seeks to improve on this by allowing for increased service availability, recognizing traffic patterns through smart analysis of the network traffic and packets. This will improve the overall network uptime while also maintaining network performance during attacks. Throughout our discussion, we will be using the following definitions: Definition 1. A Denial of Service (DoS) attack is a network threat that seeks to affect the availability of a network or service by introducing fake traffic to the target. We define the Software-defined networking Denial of service Mitigation (SDM) problem that our work aims to solve as follows: given a network under DoS attack, detect and mitigate any DoS attack while improving the efficiency and the speed of mitigation. To achieve this we use traffic pattern recognition based on metric-based traffic analysis to rapidly identify attacking hosts while allowing legitimate traffic to continue to be processed.
Software-defined networking Denial of service Mitigation Our research focuses on mitigating two different types of denial of service attacks. This is done through a single algorithm that uses live network metrics to gauge the state of the network. By basing decisions on previous network states, our DoS Mitigation Scheme is able to defend against several attacks. Our Software-defined networking DoS mitigation scheme is presented in Algorithm 1. Our scheme begins by initializing the network, identifying any OpenFlow switches (S) on the network and their available ports (Lines 2-5). By doing this we can map the network topology, allowing for better flow control. Once this is done, the execution phase of the scheme begins (Lines 6-18). This phase polls each OpenFlow switch for live metrics. During normal traffic states these metrics are used to set the aggregate threshold (T_r) for each metric, such as the packet rate. Once this is complete, every polling cycle checks the current metrics against the metric thresholds. Any conflict during this step pushes the scheme into the update phase. This phase consists of identifying the traffic flows that breached the thresholds by tracing the packets back to the host. Once found, the host (h) is removed from the OpenFlow switches' flow tables and the switches are set to drop all packets from the host. This continues for a set timeout interval (T_i), at which time the host can re-enter the network. Mitigating Single-Host Denial of Service Attacks One aspect of our DoS Mitigation Scheme focuses on mitigating a denial of service attack that originates from a single attacker. This attacker will have a single target host with the goal of bringing down any services running on the host. Since denial of service attacks can be based on overworking common network functionality, there are various ways they could be executed. Amongst all of these variations there are common traits and characteristics of a single-host denial of service attack. These characteristics can then be used alongside monitoring of network activity. In order to properly handle denial of service attacks, legitimate network activity must be distinguished from malicious activity. Our SDM scheme does this by monitoring packet rate, packet size, and flow path. To illustrate our algorithm, let us consider the following example by defining a network flow that contains common properties of a denial of service attack. This network flow can then be monitored at the network switch level through sFlow to gain real-time metrics that will allow us to recognize network traffic patterns. Through the REST API and JSON handling, we are able to keep an up-to-date list of all network flows and their load on the network. Once the current flows of the network are mapped, thresholds can be defined based on the network activities. The threshold will be set to flag network flows that are acting outside of normal network metric ranges. Note that the thresholds can be adapted based on the host network's common activity. This makes our SDM scheme adaptable across a variety of network settings.
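As a rough illustration of the execution and update phases just described (and not the authors' actual implementation of Algorithm 1), the Python sketch below shows how per-switch metrics could be polled, compared against thresholds T_r, and used to blacklist a host for a timeout T_i. The polling function, constants, and host-keyed metric format are assumptions made for this sketch.

```python
import time

POLL_INTERVAL = 2.0      # seconds between polling cycles (assumed value)
TIMEOUT = 60.0           # T_i: how long a blocked host stays blacklisted (assumed value)

def poll_switch_metrics(switch_id):
    """Placeholder for an sFlow/REST query returning per-host metrics for one
    OpenFlow switch; a real implementation would query the collector's API."""
    return {}

def block_host(host_ip):
    """Placeholder for pushing drop rules to the controller
    (see the Floodlight sketch later in this section)."""
    print(f"blocking {host_ip}")

def sdm_loop(switches, thresholds):
    blocked = {}                                   # host -> time it was blocked
    while True:
        now = time.time()
        # release hosts whose timeout T_i has expired
        for h, t0 in list(blocked.items()):
            if now - t0 > TIMEOUT:
                del blocked[h]
        for s in switches:
            for host_ip, metrics in poll_switch_metrics(s).items():
                if host_ip in blocked:
                    continue
                # update phase: any metric above its threshold T_r flags the host
                if any(metrics.get(name, 0) > limit
                       for name, limit in thresholds.items()):
                    block_host(host_ip)
                    blocked[host_ip] = now
        time.sleep(POLL_INTERVAL)
```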
Let us consider the packet rate of each flow on the network as the first threshold to be monitored, and let us define a network flow with a packet rate exceeding 1000 packets per second as being malicious.Note that, any flow that passes this threshold is collected by sFlow and sent off to the Floodlight Controller for analysis.Besides that packet rate, we use packet size, which is compared across packets to check for variance.Without this variance, it can be assumed that the network flow could be malicious.With this comparison along with the packet rate monitoring threshold, our SDM Scheme is able to rapidly identify and stop a single host denial of service attack by detecting large increases of packet rate over a short time period. Figure 2. DoS Attack Mitigation Scheme This scheme is presented in Fig. 2, which outlines a generalized idea of how our SDM scheme handles a denial of service flow entering the network.The important part of our SDM scheme is the ability for other flows to still function during the mitigation process, which allows for regular network activity to continue while the attack is being handled.In step 1 of the flow chart we see a large DoS flow entering the network.The flow is then handled by the OpenFlow switches (step 2), where they can detect when a threshold is met.Once a threshold is met, a message will be sent to the Floodlight controller (step 3), which in turn replies with instructions to all network switches to drop the detected flow (step 4).With this, our SDM scheme can minimize the amount of malicious traffic the target host receives. Mitigating Distributed Denial of Service Attacks Attacking Figure 3. DDoS Attack Mitigation Scheme Another form of denial of service attack is the distributed denial of service attack.This attack follows the same principle of a single host denial of service attack, but instead of using a single host, the attack is distributed across multiple attackers.Spreading the attack across multiple hosts makes it difficult to identify the source of the attack.Conventional methods of blacklisting an attacker's IP address no longer work due to the large amount of sources [1].Our SDM scheme can further detect a distributed attack by focusing on aggregating the flow data instead of singling out a single flow.This can be done by tracking the same metrics previously tracked, such as packet rate and packet size.With this, the thresholds are also adjusted to better handle aggregate network data.The threshold for our SDM scheme for packet rate can then both monitor a single flow exceeding the threshold as well as the summation of multiple smaller flows exceeding the threshold. We will use Fig. 3 to provide a visualization of how our SDM scheme handles distributed denial of service attack.Similar to Fig. 2, step 1 shows a large DDoS flow from multiple attacking hosts entering the network.Once this attack triggers the threshold across the OpenFlow switches in step 2 a message can then be sent to the Floodlight controller in step 3.This JSON message contains all suspect flows that have entered the network during the time frame of the attack.The Floodlight controller then replies to drop all suspect flows blocking all communications for a short period of time with the attacking hosts. 
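To make the two per-flow checks concrete, here is a minimal sketch of a classifier combining the 1000 packets-per-second rate threshold with a packet-size variance test. The field names, the variance cutoff, and the choice to require both conditions are illustrative assumptions, not the paper's exact rule.

```python
import statistics

RATE_LIMIT = 1000        # packets per second, as in the example above
MIN_SIZE_VARIANCE = 4.0  # bytes^2; below this, packet sizes are "too uniform" (assumed value)

def is_suspect(flow):
    """flow: dict with 'packet_rate' (pps) and 'packet_sizes' (recent sample, bytes).
    Flags a flow that exceeds the rate threshold while its packet sizes show
    almost no variance, as expected for a flood of identical packets."""
    too_fast = flow["packet_rate"] > RATE_LIMIT
    sizes = flow["packet_sizes"]
    too_uniform = len(sizes) > 1 and statistics.pvariance(sizes) < MIN_SIZE_VARIANCE
    return too_fast and too_uniform

# Example: a ping-flood-like flow versus ordinary mixed traffic
print(is_suspect({"packet_rate": 15000, "packet_sizes": [84] * 50}))            # True
print(is_suspect({"packet_rate": 300, "packet_sizes": [120, 1500, 640, 80]}))   # False
```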
Another aspect of our SDM scheme is the timing window of the metrics captured. For this, a constant buffer of metrics is kept and refreshed with the most current sFlow data. By focusing only on current sFlow metrics, normal network flows are not flagged as suspect. Once our defined thresholds are triggered by the aggregate metrics, flows are then dropped by the Floodlight controller. The dropped flows are selected based on their packet rate and packet size. Once a flow is dropped, it is allowed to reconnect with the network after a set amount of time. Analysis and Performance Evaluation In this section we present the performance of our proposed SDM scheme and discuss how it handles both types of denial of service attacks. In our implementation, we used Mininet to build the network topology [9], which is presented in Fig. 4. Figure 4. Test network architecture - Mininet For gathering results, sFlow was used to capture real-time data for the network. The D-ITG traffic generator was used to create scripts that simulate normal network activity, single-host denial of service attacks, and distributed denial of service attacks [10]. The network topologies created in Mininet consist of simple tree-based topologies, along with a larger topology to simulate the distributed denial of service attack. In Fig. 4, OpenFlow switches are denoted as s1 through s6 while hosts are denoted as h1 through h16. For this example, c0 represents the remote Floodlight controller used for our network. For the sake of this research, ping floods are used to test the resilience of the network to denial of service attacks. These attacks are combined with UDP flows to show that our algorithm is able to mitigate a variety of denial of service attack types. Using the topology in Fig. 4, our SDM algorithm was put into place with a packet rate threshold of 1000 packets per second. In this test, individual flows are monitored to see whether each reaches the designated threshold. Along with the packet rate threshold, packet size is also monitored to gauge how much variance each flow has. The first run of our algorithm consisted of an isolated flow, where a single DoS attack is sent through the network without legitimate traffic. For this, the attacking host sends a ping flood to the target host through several OpenFlow switches. The resulting network flow is presented in Fig. 5. Analysis and Evaluation of a Single-Host DoS Attack As seen in Fig. 5, once the denial of service attack begins, it quickly reaches upwards of 15,000 packets per second. This heavy traffic on a switch disrupts any normal network traffic that may be on the switch. Fig. 6 also shows further behavior of the denial of service attack flow. The majority of the packets during the denial of service attack are under the Internet Control Message Protocol (ICMP). This helps further identify the attack as a ping flood. The overall effect of the denial of service attack on the network load is shown in Fig. 7. Figure 6. Unprotected DoS Flow Packet Analysis As seen in the preceding figures, a single-host denial of service attack can be enough to generate substantial network load.
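For readers wishing to reproduce a comparable testbed, a tree-like topology in the spirit of Fig. 4 could be scripted with Mininet's Python API and pointed at a remote Floodlight controller, roughly as sketched below. The switch and host counts, the controller address, and the final ping-flood command are illustrative assumptions rather than the authors' exact setup.

```python
from mininet.net import Mininet
from mininet.node import RemoteController, OVSSwitch
from mininet.topo import Topo

class TestTopo(Topo):
    """Rough analogue of the Fig. 4 layout: a root switch s1 with five
    leaf switches s2-s6, each serving a few hosts (counts are illustrative)."""
    def build(self):
        root = self.addSwitch("s1")
        host_id = 1
        for i in range(2, 7):
            leaf = self.addSwitch(f"s{i}")
            self.addLink(root, leaf)
            for _ in range(3):
                h = self.addHost(f"h{host_id}")
                self.addLink(leaf, h)
                host_id += 1

if __name__ == "__main__":
    net = Mininet(topo=TestTopo(),
                  switch=OVSSwitch,
                  controller=lambda name: RemoteController(name, ip="127.0.0.1", port=6653))
    net.start()
    # e.g. launch a ping flood from h4 towards h1 to exercise the SDM scheme
    h1, h4 = net.get("h1", "h4")
    h4.cmd(f"ping -f {h1.IP()} &")
    net.stop()
```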
We can now compare these results with the results of our denial of service prevention algorithm (SDM), which are presented in Fig. 8. It can be seen that as soon as the packet rate approaches the threshold of 1000 packets per second, the flow begins to be mitigated. From this point a JSON message is produced containing the source and the destination for the flow, along with any metrics pertaining to the flow itself. Upon receiving the JSON message, the Floodlight controller can then push an update to all OpenFlow switch flow tables, allowing the denial of service traffic to be dropped and the host to be blacklisted. Figure 8. Protected DoS Flow In this example our scheme is able to quickly handle the ping flood produced by the attacking host. For the attacking host to continue communicating within the network, a JSON message will need to be sent to remove the host from the blacklist. For our SDM this functionality is automated and occurs after a set amount of time. To further test our SDM scheme and its performance in identifying DoS flows, the same ping flood was sent during normal network operations. The D-ITG traffic generator was used to send a stream of UDP data packets across the network. During this time the ping flood was sent to the target host to disrupt the UDP traffic flow. Fig. 9 outlines our scheme's performance during the attack. Once the ping flood begins, a large influx of ICMP packets enters the network. Our algorithm quickly identifies this flow as malicious and is able to drop the packets while maintaining the UDP data flow. The use of sFlow metrics and the Floodlight controller allows for fine-grained control over network flows, allowing normal network traffic to remain untouched. Figure 9. Protected UDP Flow In comparison to the prevention scheme presented in [3], our solution is able to quickly identify and mitigate attacking hosts. By using a metric-based approach we were able to avoid the waiting times that are implemented by Singh et al.'s algorithm, which requests hosts to verify that they are sending legitimate traffic. Along with this, Singh et al.'s scheme causes increased overhead for the virtual controller. During larger scale attacks this overhead can lead to further delays and network strain. Our SDM scheme avoids this by performing the detection and processing through an sFlow server, and only communicating with the Floodlight Controller when flow handling updates are needed. Figure 10. Average Mitigation Time: SDM vs EP As seen in Fig. 10, on average our method of denial of service mitigation performs faster than Singh et al.'s method. This is due to the added processing time Singh et al.'s method takes to identify and mitigate flows. Before a flow is dropped, Singh et al.'s method allows for at least a 100 second response time from the attacker. We argue that this adds too much delay in mitigating the attack, which could lead to a decrease in network performance during the attack.
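The flow-table updates described above could, for instance, be pushed through Floodlight's Static Flow Pusher REST interface. The sketch below is a hedged example: the endpoint path, field names, and controller address follow commonly documented usage but may differ between Floodlight versions, and the rule layout is not taken from the paper.

```python
import json
import urllib.request

CONTROLLER = "http://127.0.0.1:8080"   # assumed Floodlight REST endpoint

def push_drop_rule(dpid, attacker_ip, name):
    """Install a rule on switch `dpid` matching the attacker's IPv4 source
    address with no output action, i.e. the packets are dropped."""
    entry = {
        "switch": dpid,
        "name": name,
        "priority": "32768",
        "eth_type": "0x0800",
        "ipv4_src": attacker_ip,
        "active": "true",
        "actions": ""            # empty action list -> drop
    }
    req = urllib.request.Request(
        CONTROLLER + "/wm/staticflowpusher/json",
        data=json.dumps(entry).encode(),
        method="POST")
    with urllib.request.urlopen(req) as resp:
        return resp.read()

def remove_rule(name):
    """Remove the rule again once the timeout T_i has expired."""
    req = urllib.request.Request(
        CONTROLLER + "/wm/staticflowpusher/json",
        data=json.dumps({"name": name}).encode(),
        method="DELETE")
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```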
Overall, our single-host denial of service prevention algorithm was successful in identifying and mitigating large denial of service flows. This was done through the use of highly adaptable software-defined network techniques. In the following section a similar algorithm is applied to distributed attacks. Analysis and Evaluation of a Distributed DoS Attack To simulate a distributed denial of service attack, several hosts on our test network were scripted to send a synchronized attack on a single host using various traffic types. A D-ITG script was used to synchronize the attack across multiple hosts. Hosts were chosen across the network to involve multiple network switches in the simulation. This allowed us to test the ability of our system to collect aggregate switch data while watching for spikes. Both ICMP and UDP traffic were used in our simulated attack to further test the ability of our algorithm to identify suspect flows of various traffic types. Along with this, the D-ITG script produced a uniform distribution for packet rate during the simulation. Other network traffic distributions can further be tested to probe for weaknesses in detection. Due to processing constraints, fluctuations in packet rate from each attacking host can be expected, but on average the rate was set at a fixed value of 5000 packets per second. The idea behind our mitigation scheme is to use the sudden increase of network activity as an indicator of which flows are suspect. For this, the Floodlight controller for our network keeps an aggregate total for several metrics that is updated with only the most current values. This creates a timing window to monitor the network. The same type of script we used in our single-host denial of service attack can be used to select suspect flows and send triggers to drop them once identified. A threshold of 20,000 packets per second was used to trigger the mitigation script. This threshold was chosen based on normal network activity, and can be adjusted based on an individual network's needs. For our simulation, 20,000 packets per second allows normal network traffic to continue without triggering mitigation. Fig. 11 displays our distributed denial of service script in action against a multi-host attack on a single target. Once our script is implemented with sFlow, we can see that the aggregate packet rate shows a spike once the distributed denial of service begins. Once this threshold window is detected, suspect network flows are dropped and blocked for a set period of time. As seen in Fig. 11, once the aggregate packet rate drops below the threshold, the rate at which flows are dropped decreases. This is done to avoid dropping legitimate network traffic. Even if attacking flows still persist, the attack as a whole has been mitigated. Table 1 shows a sample of the data collected by sFlow and outlines the collection of the aggregate packet rate across all network switches. Mean packet rates are shown as approximate values due to fluctuations in packet rate caused by host performance. Network switches are shown as sFlow agents, labeled s1 through s6 in our example. Each sFlow agent sends its metrics to the remote sFlow instance where it is analyzed further.
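A minimal sketch of the aggregate, windowed check just described is given below: per-agent packet rates are kept in short sliding buffers and the most recent totals are compared against the 20,000 packets-per-second trigger. The window length and data layout are assumptions for illustration.

```python
from collections import deque

AGG_LIMIT = 20000          # packets per second across all switches (as in the text)
WINDOW = 5                 # number of recent polling cycles kept (assumed value)

history = {f"s{i}": deque(maxlen=WINDOW) for i in range(1, 7)}

def update_and_check(sample):
    """sample: dict mapping sFlow agent (switch) -> packet rate in the last cycle.
    Keeps only the most recent WINDOW samples per agent and triggers when the
    aggregate of the latest cycle exceeds AGG_LIMIT."""
    for agent, rate in sample.items():
        history[agent].append(rate)
    latest_total = sum(q[-1] for q in history.values() if q)
    return latest_total > AGG_LIMIT

# Example: five leaf switches each forwarding ~5000 pps of attack traffic
print(update_and_check({f"s{i}": 5000 for i in range(2, 7)}))  # True (25000 pps total)
```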
Based on our topology presented in Fig. 4, we can see that each network switch excluding s1 had an attacking host, and that the attacking flows were unable to maintain their flows towards the target at 10.0.0.1. For our attack, the subnet 10.2.0.0/16 consisted of all attacking hosts spread across multiple OpenFlow switches. Once an individual flow is identified, a blocking signal can then be sent to the Floodlight controller, which can then issue orders to drop packets pertaining to the suspect flow. By acting at the switch level, our SDM scheme is able to stop the distributed denial of service attack at the first switch it encounters. The values and thresholds used in our model are specific to the simulated network we created. For our work to be adapted to a new network, the thresholds and values will need to be adjusted to match the common traffic load of that network. Since our method is based on a variety of real-time metrics, it allows a large variety of traffic types to be identified. Even more fine tuning can be added to increase the versatility of our model in detecting suspect network flows. By relying on real-time network metrics, our research successfully mitigated denial of service attacks. Further work and study could help improve the understanding of more advanced attacks to find the key indicators of their occurrence. Once this behavior is found, it could quickly and efficiently be implemented in our type of mitigation model. Conclusion Recent trends have shown a migration of software from local machines to server-based services. These service-based networks depend on high up-times and strong resilience in order to compete in the market. Along with this growth of network services, denial of service attacks have equally grown. With a simple set of tools, attackers could bring down one of these services. Our research in the use of software-defined networking to detect and mitigate denial of service attacks has shown that the adaptability and flexibility of metric-based solutions allow for a complete solution to denial of service attacks. By relying on real-time metrics, our network was able to adapt to large flows of data quickly and effectively while maintaining services on unaffected flows. Figure 7. Unprotected DoS Flow Network Load Definition 2. A Distributed Denial of Service (DDoS) attack is a DoS attack where a multitude of attackers are used to send a synchronized attack on a target. Definition 3. A virtual switch is a software-based implementation of a hardware switch in a network. It often serves as a node to route traffic between hosts on a network. Definition 4. A virtual controller is a software-based implementation of a hardware controller in a network. A controller often serves as a primary point of control that can designate work to network switches. Table 1. Sample Network Flows
6,168.4
2019-12-12T00:00:00.000
[ "Computer Science", "Engineering" ]
Gradient flow exact renormalization group The gradient flow bears a close resemblance to the coarse graining, the guiding principle of the renormalization group (RG). In the case of scalar field theory, a precise connection has been made between the gradient flow and the RG flow of the Wilson action in the exact renormalization group (ERG) formalism. By imitating the structure of this connection, we propose an ERG differential equation that preserves manifest gauge invariance in Yang--Mills theory. Our construction in continuum theory can be extended to lattice gauge theory. Introduction The gradient flow [1][2][3][4][5][6] is a continuous deformation of a gauge field configuration A a µ (x) along a fictitious time t ≥ 0. It is given by a gauge-covariant diffusion equation is the field strength of the flowed or diffused field B a µ (t, x), 1 and is the covariant derivative with respect to B a µ (t, x). The gradient flow bears a close resemblance to the coarse graining along renormalization group (RG) flows [7]. This aspect of the gradient flow has been investigated from various perspectives [6,[8][9][10][11][12][13][14][15][16][17]. In this paper we further our understanding of how the gradient flows are related to the RG flows by using the exact renormalization group (ERG) formalism (for reviews of ERG, see for instance Refs. [18][19][20]). In scalar field theory, the analogue of Eq. (1.1) would be [21] ∂ t ϕ(t, x) = ∂ µ ∂ µ ϕ(t, x), ϕ(t = 0, x) = φ(x). (1.4) It is actually possible to make a precise connection between the gradient flow and the flow of a Wilson action under ERG [15] (see also Ref. [17]). In D dimensional Euclidean space, the ERG differential equation for the Wilson where K and k are cutoff functions satisfying K(p) = 1 for |p| → 0, 0 for |p| → ∞, , k(p) and ∆(p) ≡ −2p 2 dK(p) dp 2 . (1.8) The origin of the anomalous dimension η τ in the above has been elucidated in Ref. [23]. Particularly for K(p) = e −p 2 , it has been shown [15] that the correlation functions of the 1 f abc is the structure constant defined from the anti-hermitian generator T a of the gauge group by [T a , T b ] = f abc T c . 2 Throughout this paper, we use abbreviations, p ≡ d D p (2π) D , δ(p) ≡ (2π) D δ (D) (p). (1.5) preserves gauge invariance. In Sect. 3.4, we solve the ERG equation in the lowest approximation, i.e., in the lowest order in a parameter λ (3.10). This parameter turns out to provide a convenient expansion parameter analogous to the conventional gauge coupling. In Sect. 4, we generalize the construction of the Wilson action in Sect. 3.1 to lattice gauge theory. We conclude the paper in Sect. 5. There is a short appendix to Sect. 3 about the normalization of the gauge field. In this paper, we only present the basic idea and basic equations for our formulation of Yang-Mills theory; we defer possible applications for future studies. Scalar field theory As pointed out in Ref. [25], the change of a Wilson action S τ under a change of the cutoff scale in Eq. (1.6) can be formulated as an equality of modified correlation functions. In terms of dimensionless variables, Eq. (38) of Ref. [25] with t → 0, ∆t → τ , and e ∆tγ → Z The anomalous dimension in Eq. (1.6) and the wave function renormalization factor Z τ are related by Here, the modified correlation functions are defined by [25] 3) where the ordinary correlation functions are denoted with single brackets: (2.4) In terms of ordinary correlation functions, Eq. (2.1) reads Now, let us choose the Gaussian as the cutoff function K. 
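Several of the displayed equations in this section appear to have been lost in text extraction. As a reading aid, the block below restates the standard forms of the flow equations the surrounding text refers to; these are plausible reconstructions based on the gradient-flow literature rather than verbatim copies of the paper's numbered equations, and sign and normalization conventions may differ.

```latex
% Plausible reconstructions (conventions may differ from the paper's own):
\begin{align*}
  &\text{Yang--Mills flow:} &
  \partial_t B_\mu(t,x) &= D_\nu G_{\nu\mu}(t,x), &
  B_\mu(t=0,x) &= A_\mu(x),\\
  && G_{\mu\nu} &= \partial_\mu B_\nu - \partial_\nu B_\mu + [B_\mu, B_\nu], &
  D_\mu\,\cdot\, &= \partial_\mu\,\cdot\, + [B_\mu,\,\cdot\,],\\
  &\text{Scalar flow, Eq.~(1.4):} &
  \partial_t \varphi(t,x) &= \partial_\mu\partial_\mu \varphi(t,x), &
  \varphi(t,p) &= e^{-t p^2}\,\phi(p).
\end{align*}
% With the Gaussian cutoff K(p) = e^{-p^2}, diffusion by flow time t multiplies
% each momentum mode by K(p)^t = e^{-t p^2}, which is what ties the gradient
% flow to the ERG cutoff in the construction of Ref. [15].
```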
We then have where is the diffused scalar field in Eq. (1.4) given in momentum space. In terms of functional integrals, this reads Using field variables in coordinate space we get δ δφ(p) = d D x e ipx δ δφ(x) and δ δϕ(t,p) = d D x e ipx δ δϕ(t,x) . Hence, we can rewrite Eq. (2.9) as (2.14) The first equality is obvious. In the second equality, we have made the replacement, , which is justified in front of the delta function. Then, we have interchanged δ δφ(x ′ ) and φ(x ′ ) neglecting an infinite constant δ δφ(x ′ ) φ(x ′ ) = δ (D) (x = 0) because this contributes only to the constant term in S τ [φ]. Finally, using the relation , (2.15) we obtain an ERG equation Here, the derivative with respect to x ′ does not act on x ′ in δ δφ(x ′ ) . Switching back to momentum space, we get where, as in Eq. (2.8), we identify the flow time t and the scale parameter τ by The field B ′b ν (t, x ′ e τ ) in the delta function is diffused from the integration variable A ′ by the flow equation Note that we have added a "gauge fixing term" with the parameter α 0 [3,4] to the original flow equation (1.1); this term suppresses the gauge degrees of freedom along the diffusion and guarantees the finiteness of gauge non-invariant correlation functions of the diffused gauge field in perturbation theory [4]. This somewhat peculiar addition is due to our tacit assumption of perturbation theory in this section. In fact, we exclude this term in lattice gauge theory discussed in the next section. In transcribing Eq. (2.13) to gauge theory, we have set Z τ = 1 because the diffused field does not receive wave function renormalization [4]; we will see that this choice is consistent with an effective presence of a cutoff in the Wilson action. We have also adopted k(p) = p 2 which yields Under a change of the scale parameter τ , Eq. (3.1) preserves the partition function: (3.4) The first equality follows from the vanishing of a total derivative [dA] δ δA a µ (x) F[A] = 0 for any well-behaved functional F[A]; for the second equality, we have used Eq. (3.1). The invariance of the partition function, expected of a Wilson action, remains formal unless the functional integral in the most right-hand side of Eq. (3.4) is regularized. In perturbation theory, at least, we can give a gauge invariant meaning to the last integral by dimensional regularization. With the lattice transcription of Eq. (3.1) in the next section, the invariance of the partition function can be given a rigorous meaning. Another important relation that follows immediately from Eq. This is analogous to Eq. (2.7) in scalar field theory. As for the right-hand side, note that the flow equation (3.3) can be written as an integral equation [3,4]: is the integration kernel of a linear diffusion, and Using Eq. (3.6), we can express δB δA , necessary on the right-hand side of Eq. (3.5), as a power series in B. The right-hand side of Eq. (3.5) is then given by correlation functions of the diffused field B. We now suppose that the "bare" action S τ =0 [A] contains a gauge coupling g 0 . Setting 6 g 0 = µ ǫ Z g (ǫ)g, where µ is an arbitrary mass scale and D = 4 − 2ǫ, we take ǫ → 0 for a continuum limit. By a general theorem [4] the right-hand of Eq. (3.5) has a finite limit. Hence, the correlation functions with respect to S τ [A] on the left-hand side of Eq. (3.5) are finite in the continuum limit. This suggests that our definition of the Wilson action (3.1) implements effectively an ultraviolet cutoff for the Wilson action. 
7 6 Here, Z g (ǫ) = 1 − g 2 (4π) 2 β0 2ǫ + O(g 4 ) and β 0 = 11 3 C A , where C A is the Casimir of the adjoint representation, f abc f bcd = C A δ ab . 7 In a lattice transcription of Eq. (3.1) in the next section, the presence of an ultraviolet cutoff in the Wilson action is obvious. Gauge invariance We next show that S τ [A] defined by Eq. (3.1) is invariant under any infinitesimal gauge transformation of the scaled gauge potential The τ dependent factor λ acts like a coupling constant: An infinitesimal gauge transformation on A is but the corresponding gauge transformation on A is modified by λ as , we first note that the first factor in Eq. is invariant under the transformation (3.12) because the functional derivative transforms in the adjoint representation under Eq. (3.12): (3.14) We next examine the argument of the delta function in Eq. (3.1). Under the transformation (3.12), we find (we write x ′ as x for simplicity) In the third line above, we can replace e −τ (D−2)/2 A c ν (x) by B ′c ν (t, xe τ ) since ω is infinitesimal, and the two are equal when ω = 0. The last line implies that the gauge transformation (3.12) on the external variable A induces a gauge transformation on B ′b ν (t, xe τ ) with the gauge function −ω b (xe τ ): In the functional integral (3.1), the integration variable A ′ and the diffused gauge field B ′ are related by the flow equation (3.3). We wish to show that there is a gauge transformation on A ′ that gives the gauge transformed B ′ , given by Eq. (3.16), as the solution of the diffusion equation (3.3). To show this, let us consider an infinitesimal gauge transformation on the diffused field B that depends on the flow time s (we save t for t = e 2τ − 1): This changes the flow equation (3.3) to If we choose ξ as the solution to the linear diffusion equation, where the gauge potential A a µ (x) under the tilde ( ) is replaced by the rescaled potential, Eq. (3.9). Using a relation analogous to Eq. (2.15) (with δ (D) (x) replacing D(x)): (3.25) Here, the gauge potential A a µ (x) is replaced by the combination (3.24) if it appears under the hat. This is our ERG equation for Yang-Mills theory. Note that without the hat, Eq. (3.25) would involve only the first order differentials of S τ , and our ERG equation would be merely a change of variables. It is the differential operator in the hat (3.24), whose origin is the exponentiated second order differentials in Eq. (3.22), that introduces higher-order differentials in Eq. (3.25). Once the ERG equation (3.25) has been obtained, we may forget the original construction (3.1) and the gradient flow behind it. Under the ERG flow, the gauge invariance is preserved in the sense explained in Sect. 3.2. For completeness, we give a little more explicit form of the ERG equation (3.25): In deriving this, we have interchanged the order of A c µ (x); this is justified because f abc is anti-symmetric in b ↔ c. To write a differential equation for S τ , we multiply e −Sτ from the left of Eq. (3.26) and write covariant derivatives explicitly to obtain Differentiating e Sτ further, we obtain a non-linear ERG equation that involves up to quartic differentials of S τ : . (3.28) Approximate solution to O(λ 0 ) From Eq. (3.28), we see that the parameter λ, whose original definition is Eq. (3.10), provides a convenient expansion parameter which organizes terms in the ERG equation. We expand the Wilson action in powers of λ as (3.29) where w n = O(λ 0 ). By substituting this into the right-hand side of Eq. 
(3.28), we obtain terms of the form ∞ n=2 λ n−2 1 n! d D x 1 · · · d D x n W a1···an n,µ1···µn (x 1 , . . . , x n )A a1 µ1 (x 1 ) · · · A an µn (x n ). (3.30) Therefore, the expansion of the Wilson action in the form (3.29) is consistent with the ERG equation (3.28). In this paper, we study only the lowest order O(λ 0 ) terms in some detail, 9 postponing the higher-order calculations for future studies. We thus set Equation (3.28) then gives ∂ ∂τ In deriving this, we have neglected δ (D) (x = 0) assuming dimensional regularization. Imposing the translational and rotational invariance and global gauge invariance, we can write where T (p) and L(p) are functions of p 2 . Equation (3.32) then gives The general solution is given by where C(p) and D(p) are arbitrary functions of p 2 . Locality demands that C(p) and D(p) can be expanded in powers of p 2 at p = 0: Unitary demands C 0 > 0 and D 0 > 0. 9 This is the only term for the abelian gauge theory. 14 As τ → +∞, the action S τ [A] approaches an infrared fixed point S * [A], corresponding to constants C 0 and D 0 : (3.37) Since C 0 > 0 and D 0 > 0 are arbitrary, their variations give marginal operators: (3.38) It can be seen that these correspond to the change of normalization of the gauge field A (see Appendix). 10 Infinitesimal C n and D n , on the other hand, give where n = 1, 2, . . . , which correspond to irrelevant operators at the fixed point. If we make a particular choice C 0 = 1 and D 0 = ∞ in Eq. (3.36), the fixed point action becomes transverse: (3.40) and the marginal operator at the fixed point is given by It is important to pursue the above analysis to higher orders in λ to see how the ordinary beta function arises in our formalism. Lattice gauge theory In the previous section, we have constructed a gauge invariant Wilson action and its associated ERG equation for a generic Yang-Mills theory in continuum R 4 . We now tailor the construction for lattice gauge theory. For simplicity, we consider an infinite volume lattice Z 4 . The discrete coordinates on Z 4 render our ERG transformation discrete. This discreteness is introduced through "block-spins." Let us pick a fixed "block-spin" factor b from one of the integers, 2, 3, . . . We then define a "block-spin" link variable by where U (x, µ) is a conventional link variable on the Z 4 lattice; here,μ denotes the unit vector in the µ direction. This U (x, µ) is regarded as a link variable on the coarse lattice bZ 4 scaled by the factor b. We then divide the range of the scale factor τ , originally continuous in 0 ≤ τ < ∞, into the contiguous intervals n∆τ < τ ≤ (n + 1)∆τ, n = 0, 1, 2, . . . , The nth interval corresponds to the scaling of x by a factor between b n and b n+1 . Multiplying a lattice coordinate x ∈ Z 4 by e ∆τ = b gives the coordinate bx on the coarse lattice bZ 4 . Now, we consider a continuous change of the Wilson action within one of the intervals in Eq. (4.2). A natural extension of Eq. (3.1) for the interval τ = (n∆τ, (n + 1)∆τ ] would be the discrete transformation from S n to S n+1 , given by 11 This needs a fair amount of explanation, which we give below. First, ∂ a x,µ is a link differential operator defined by (see also Appendix A of Ref. [3]) where T a denotes a (anti-hermitian) generator of the gauge group. The exponentiated link differential operator in Eq. where ∂ x,µ ≡ T a ∂ a x,µ . 
The initial value at τ = 0 is given by the "block-spin" link variable (4.1) constructed from the integration variable U ′ defined on Z 4 : It is the value of W τ at τ = ∆τ that appears in the delta function. A possible choice of S w [W ] is the plaquette action, where the sum runs over the plaquettes p belonging to the coarse lattice bZ 4 , and W (p) is the product of the "block-spin" link variables around p. Note that the lattice flow equation (4.6) is written in terms of the scale factor τ rather than the flow time t = b 2n e 2τ − 1. We have used ∂ ∂t = b −2n e −2τ ∂ 2∂τ and absorbed the factor b 2n e 2τ into the right-hand side; this prescription 11 Note that the formula (3.1) can be used to relate the Wilson actions between two non-zero τ s. is natural because we have rescaled the lattice coordinates by the factor b 2n e 2τ compared with n = 0. Thanks to this prescription, the ERG transformation (4.4) from S n to S n+1 does not depend on n explicitly. We obtain the lattice Wilson Hence, the partition function is preserved just as in Eq. (3.4). As for the gauge invariance, we first note that a gauge transformation is given by If ω is infinitesimal, the link differential operator transforms in the adjoint representation, (4.12) where the link differential operator acts on U g on the left-hand side, but it acts on U of U g on the right. This shows that (∂ a x,µ ∂ a x,µ F[U ]) U →U g = ∂ a x,µ ∂ a x,µ F[U g ], and in Eq. (4.4) the gauge transformation on U and the first exponentiated link differential operator commute. The gauge transformation (4.11) acts on the delta function in Eq. (4.4) as (we set x ′ → x for simplicity) This shows that the gauge transformation (4.11) on U induces an inverse gauge transformation W g −1 ∆τ on W ′ ∆τ defined on the coarse lattice bZ 4 . Now, if W ′ τ is the solution of the lattice flow equation (4.6) with the initial condition U ′ , given by Eq. (4.7), then W ′g −1 τ is the solution with the initial condition U ′g −1 as long as g does not depend on τ ; this follows from the property (4.12). Hence, the gauge transformation g on U induces the inverse gauge transformation g −1 on the initial condition U ′ . To obtain this transformation on bZ 4 , we can introduce the following gauge transformation on Z 4 : otherwise. (4.14) This gauge transformation commutes with the second exponentiated link differential operator in Eq. (4.4) and, as long as S n [U ′ ] is gauge invariant, the resulting Wilson action S n+1 [U ] is also gauge invariant. This completes our argument for the gauge invariance of the lattice ERG transformation. The structure of our Wilson action defined recursively by Eq. (4.4) resembles the "lattice effective action" that has been advocated and studied in Refs. [8,9]. Our definition is different in two crucial aspects, however: Eq. (4.4) has exponentiated link differential operators, and the lattice points are rescaled in each step of the ERG transformation. As we have emphasized in the previous section, these two are essential ingredients for obtaining an ERG differential equation that is non-linear in the Wilson action and entails scale transformation of space. Finally, let us derive an ERG differential equation in lattice gauge theory that follows from the definition (4.4) of the Wilson action. For this, we define S n+1 (τ )[U ] by We have introduced a diffusion factor τ so that As τ → 0+, S n+1 (τ ) reduces essentially to S n , written for the block-spin link variables U defined by Eq. 
(4.7): The dependence of S n+1 (τ ) on the diffusion factor τ is given by the differential equation, = exp x,µ,a For the first equality above, we have used the lattice flow equation (4.6) in evaluating , which follows from the definition of the link differential operator (4.5). It is understood that the operator ∂ ′c y,σ acts on W ′ τ . For the second equality, we have rewritten ∂ ′c y,σ as the derivative on U , ∂ ′c y,σ → −∂ c y,σ ; this identity holds because the link differential operator acts on the delta function as d ν)). This link differential operator on U can be put outside to act on the integral over U ′ . Then, we can replace ∂ c y,σ S w [W ′ τ ] by ∂ c y,σ S w [U ] thanks to the delta function. Therefore, from Eq. (4.15), we get an ERG differential equation By integrating this from τ = 0+ to τ = ∆τ , we restore the finite change of the Wilson action in Eq. (4.4). Thus, our ERG transformation in lattice gauge theory consists of the rescaling of lattice points by Eq. (4.17) and the diffusion from τ = 0+ to τ = ∆τ by Eq. (4.19). See Eq. (4.16). As we have shown, this transformation preserves the partition function and manifest gauge invariance of the Wilson action. It is important to note that neither Eq. (4.17) nor Eq. (4.19) depends explicitly on n. This implies a possibility of finding a fixed point solution, S n+1 = S n . The technique in Ref. [2] appears helpful to study such questions. Conclusion Imitating the structure of the Wilson action in scalar field theory, expressed by the field diffused by the flow equation, we have constructed a manifestly gauge-invariant Wilson action and its associated ERG differential equation in Yang-Mills theory. The construction, extended to lattice gauge theory, provides a non-perturbative gauge invariant Wilson action of Yang-Mills theory. We have presented only the basic idea and basic relations in this paper; we expect many future applications including analytic or numerical searches for non-trivial RG fixed points in gauge theory. We can also expect extensions in various directions, such as inclusion of matter fields and search for a reparametrization invariant ERG formulation of quantum gravity. It should be also interesting to clarify a possible relation to the other gauge invariant ERG formulations of gauge theory [30][31][32][33][34]. A. Normalization of the gauge field In Sect. 3, we have normalized the gauge field A a µ (x) so that the rescaled field A a µ (x) ≡ λA a µ (x), defined by Eq. (3.10), has the ordinary gauge transformation (3.11). In fact this is not the only choice of normalization. We can change the normalization of A a µ (x) arbitrarily so that the rescaled field is given by Let S z,τ [A] be the Wilson action of this field. We should then obtain This implies [25] e Sz,τ [A] = exp For where ǫ is infinitesimal, we obtain Hence, S z,τ satisfies the same ERG equation (3.25) as S τ except with the addition of on the right-hand side. We can interpret − dz(τ ) dτ as the anomalous dimension of the gauge field. The marginal operator O 0 (p), Eq. (3.41), that we have found at the end of Sect. 3 is in fact the operator N ; we find We believe that the right choice of the anomalous dimension is necessary to obtain a fixed point of the ERG transformation.
5,444.6
2020-12-07T00:00:00.000
[ "Physics" ]
Multi-degree-of-freedom systems with a Coulomb friction contact: analytical boundaries of motion regimes This paper proposes an approach for the determination of the analytical boundaries of continuous, stick-slip and no motion regimes for the steady-state response of a multi-degree-of-freedom (MDOF) system with a single Coulomb contact to harmonic excitation. While these boundaries have been previously investigated for single-degree-of-freedom (SDOF) systems, they are mostly unexplored for MDOF systems. Closed-form expressions of the boundaries of motion regimes are derived and validated numerically for two-degree-of-freedom (2DOF) systems. Different configurations are observed by changing the mass in contact and by connecting the rubbing wall to: (i) the ground, (ii) the base or (iii) the other mass. A procedure for extending these results to systems with more than 2DOFs is also proposed for (i)–(ii) and validated numerically in the case of a 5DOF system with a ground-fixed contact. The boundary between continuous and stick-slip regimes is obtained as an extension of Den Hartog’s formulation for SDOF systems with Coulomb damping (Trans Am Soc Mech Eng 53: 107–115, 1931). The boundary between motion and no motion regimes is derived with an ad hoc procedure, based on the comparison between the overall dynamic load and the friction force acting on the mass in contact. The boundaries are finally represented in a two-dimensional parameter space, showing that the shape and the extension of the regions associated with the three motion regimes can change significantly when different physical parameters and contact configurations are considered. Introduction Improving the fundamental knowledge of the dynamic behaviour of friction damped systems is one of the most pressing challenges in structural dynamics. In fact, friction joints and interfaces are found in a wide range of mechanical and civil structures. Furthermore, friction dampers are often introduced in engineering applications to achieve energy dissipation, isolation and vibration control. However, their effect on the dynamic performances of such systems is not yet fully understood. The dynamic response of systems with frictional interfaces is not always continuous. In fact, the following behaviours can also be observed in the relative motion between the surfaces of the joint: (i) stops can periodically occur in the motion, leading to the so-called stick-slip regime; (ii) the surfaces in contact can be completely stuck, a condition which will be referred to as no motion regime. These phenomena can have undesired and critical consequences on engineering structures if not accounted for during the design stage. For example, stick-slip can result in noise, energy loss, excessive wear and component failures [1], while unexpected full-stuck conditions in friction contacts can lead to a significant reduction of damping effects and alter the dynamic behaviour of the structure. The goal of this paper is the development of an analytical approach for the formulation of the boundaries of these motion regimes for multi-degree-of-freedom (MDOF) systems with a Coulomb friction contact. Specifically, two different boundaries will be investigated: (i) between continuous and stick-slip regimes; (ii) between motion and no motion regimes. Different contact configurations will be explored, considering different masses involved in the friction contact and either fixed or oscillating wall cases. 
The boundaries will be represented in two-dimensional parameter spaces, which will be therefore divided into three regions associated with continuous, stick-slip and no motion regimes. The observation of these parameter spaces will enable the determination of the motion regime for each given set of parameters of the system, of the contact and of the excitation considered. The steady-state response features of harmonically excited systems presenting a Coulomb contact between the mass and a fixed wall were widely explored in the literature (see, e.g. [2][3][4][5][6][7]) for the single-degree-offreedom (SDOF) case. Specifically, the determination of an upper bound for continuous non-sticking motion was mainly tackled by Den Hartog [2] and Hong and Liu [5]; in addition, many authors [7][8][9] further investigated the motion bounds accounting also for the different number of stops per cycle in stick-slip regime. In these systems, the upper bound for the presence of mass motion, either in continuous or stick-slip regime, is obtained when the amplitudes of the exciting and friction forces are equal, independently of the exciting frequency. However, a different behaviour was observed by Marino et al. [10] in Coulomb damped SDOF systems subject to joined base-wall harmonic excitation, where the rubbing wall is assumed to oscillate jointly with the base. The wall motion introduces a different dynamic load on the mass, whose amplitude becomes proportional to the square of the exciting frequency. Therefore, also the upper bound for the presence of a relative motion in the contact will become frequencydependent. The response of MDOF systems to harmonic excitation is often investigated numerically [11][12][13]. As time integration can be computationally expensive [14], frequency domain methods such as harmonic balance [15][16][17][18] or multi-harmonic analysis [19][20][21] have been explored. A more complete review on friction damped systems and current numerical approaches can be found in reference [22]. Analytical approaches are also described in the literature for 2DOF systems with a Coulomb contact: in 1966, Yeh [23] derived a closedform solution for the continuous non-sticking response of 2DOF systems with combined viscous and Coulomb damping, while more recently further theoretical developments were presented in references [24][25][26]. Alternative approaches such as the method of averaging have also been explored for finding approximate solutions when the number of DOFs of the system is larger [27]. Finally, the problem has often been addressed by introducing an equivalent viscous damper to account for the energy loss due to the frictional dissipation [14,28,29]. Nevertheless, to the best of the authors' knowledge, the problem of the determination of the boundaries among continuous, stick-slip and no motion regimes has never been tackled for these systems. In this contribution, the upper bound for nonsticking motion is evaluated by extending Den Hartog's approach [2]. In fact, Den Hartog determined the continuous dynamic response and the boundary between continuous and stick-slip motion regimes by considering a time interval, equal to half period of motion in steady-state conditions, where the governing equations are linear. This approach can also be used to investigate the behaviour of MDOF systems if a single friction contact, i.e. a single nonlinearity, is considered. In particular, this enables the use of standard modal analysis to evaluate the terms appearing in the boundary equation. 
An ad hoc procedure is introduced for determining the domain where relative motion is allowed in the friction contact. The approach is based on the evaluation of the overall dynamic load acting on the mass in contact when it is fixed. The upper bound is then described by equating the amplitudes of this dynamic force and of the friction force. Three different types of friction contacts are investigated for two-degree-of-freedom (2DOF) systems: ground-fixed wall contacts (Sect. 2), achieved between one of the masses and a fixed wall; base-fixed wall contacts (Sect. 3), achieved between one of the masses and a wall oscillating jointly with the base; mass-fixed wall contacts (Sect. 4), where two masses are connected by a spring and a Coulomb contact in parallel. These MDOF systems can provide a simplified model for several engineering applications, including friction dampers for civil buildings, car suspensions, bladed discs and many others. For each of the listed contact configurations, the analytical boundaries are evaluated and validated with results found using a numerical approach, which is introduced in Sect. 4. Subsequently, the analytical results for ground-fixed and base-fixed contacts are extended to systems with more than two DOFs in Sects. 2 and 3, respectively; particularly, a numerical validation is proposed for the case of a 5DOF system with a ground-fixed contact applied on either the fourth or the second mass at the end of Sect. 2. Ground-fixed wall contacts This section focuses on the study of a MDOF system with a Coulomb contact between one of the masses of the system and a ground-fixed wall. The purpose of this investigation is to determine which motion regime (continuous, stick-slip or no motion) can be observed for each set of physical parameters of the problem. Den Hartog's approach for the determination of motion regimes in SDOF systems [2] is recalled and extended to MDOF systems by considering the superposition of modal behaviour. Analytical expressions for the bounds of the different motion regimes are presented and validated with numerical results obtained using the approach described in Sect. 4 for 2DOF systems with a fixed contact on either the lower or the upper mass and for a 5DOF system with either the fourth or the second mass in contact with a fixed wall. Governing equations and dimensionless groups definition Let us consider a 2DOF system composed of two masses m 1 and m 2 and two springs of stiffness k 1 and k 2 , where either the lower mass (Fig. 1a) or the upper mass (Fig. 1b) is rubbing against a ground-fixed wall, generating a Coulomb friction force of amplitude F. Such systems are excited by a harmonic base motion of amplitude Y and frequency ω, described by the coordinate y. The coordinates describing the positions of the two masses are x 1 and x 2 , respectively. The governing equations of each of these systems can be written, respectively, as: and: where y = Y cos(ωt) and: When the sliding velocity is zero, the sgn() function is meant to assume any value between -1 and 1. The actual value will be such that the system is in equilibrium, i.e. the sum of the spring forces and of the friction force is zero. In writing this definition, it is also assumed that the magnitudes of the static and kinetic friction forces are equal. As several parameters appear in Eqs. (1) and (2), it is convenient to rewrite them in a non-dimensional form, using the smallest possible number of parameters required for describing the dynamic behaviour of the systems.
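The explicit expressions of Eqs. (1)–(3) have not survived extraction here. Purely as an illustration, the sketch below assembles the accelerations for the configuration of Fig. 1a under the arrangement described above (base – k 1 – m 1 – k 2 – m 2 chain, Coulomb force F acting on m 1 against the fixed wall); all parameter values are hypothetical, and the zero-velocity branch implements the equilibrium convention of the sgn() function discussed above.

```python
import numpy as np

# Hypothetical parameter values, for illustration only.
m1, m2 = 1.0, 0.5        # masses
k1, k2 = 100.0, 80.0     # spring stiffnesses
F = 5.0                  # amplitude of the Coulomb friction force
Y, omega = 0.01, 8.0     # base motion amplitude and frequency

def accelerations(t, x1, x2, v1, v2):
    """Assumed form of the equations of motion for Fig. 1a
    (ground-fixed contact on the lower mass)."""
    y = Y * np.cos(omega * t)          # harmonic base motion
    f_lower = -k1 * (x1 - y)           # lower spring force on m1
    f_upper = k2 * (x2 - x1)           # upper spring force on m1
    if v1 != 0.0:
        friction = -F * np.sign(v1)    # kinetic Coulomb force opposing sliding
    else:
        # At zero sliding velocity the contact transmits whatever force
        # (up to +/- F) keeps the mass in equilibrium.
        friction = -np.clip(f_lower + f_upper, -F, F)
    a1 = (f_lower + f_upper + friction) / m1
    a2 = -k2 * (x2 - x1) / m2
    return a1, a2
```

The same structure, with the friction term moved to the second equation, would describe the configuration of Fig. 1b.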
A possible non-dimensional form of Eqs. (1) and (2) is: and : In the above equations, a non-dimensional time and a non-dimensional position for the j-th mass were introduced, respectively, as: and the symbol indicates the derivative with respect to τ . The four non-dimensional groups chosen are: -the frequency ratio: -the friction ratio: -the stiffness ratio: -the mass ratio: It is worth noting that Eqs. (4) and (5) can be interpreted as the governing equations of equivalent nondimensional systems where, as shown in Fig. 2a, b, the masses are r 2 1 and γ r 2 1 and the springs have a stiffness equal to 1 and κ, respectively. The friction ratio β represents the amplitude of the friction force, while the base excitation is of unitary amplitude and unitary frequency. Sticking conditions The conditions for which a sticking phase will occur in the mass motion are discussed here. These conditions are required for the numerical integration of Eqs. (4) and (5) with the approach described in Sect. 4. Sticking will occur when, at a specific time, the relative velocity between the components in contact is zero and the amplitude of the sum of all the non-inertial forces acting on the mass in contact does not overcome the amplitude of the friction force. This translates into the conditions: for the system in Fig. 2a and in: for the system in Fig. 2b. Boundaries of motion regimes for a SDOF system The boundary between continuous and stick-slip motion regions for MDOF systems will be determined as an extension of the expression found by Den Hartog for SDOF systems with Coulomb damping [2]. Within Den Hartog's approach: -the Coulomb friction force is expressed as −Fsgn(ẋ), whereẋ is the relative velocity between the mass and the wall. This force introduces a nonlinearity in the problem only if the velocity sign changes in a certain time interval; -a steady-state response period included between two subsequent response maxima is considered. Assuming that the motion is continuous, the minimum displacement will occur in the middle of the interval, so the velocity sign will be constant and negative if only the first half cycle is taken into account; -therefore, a linear problem is defined for this subinterval and an analytical solution for mass motion is found, allowing the determination of closed-form expressions of the amplitude and of the phase angle of the response; -the conditions for which a stop occurs inside this time interval are used to evaluate a closed-form expression of the upper bound for non-sticking motion. For each frequency ratio r , the smallest friction ratio for which a stop occurs inside the considered time interval is expressed as: The following quantities are introduced in the above equation: -the response function: is the frequency response of an undamped SDOF system; -the damping function: describes the friction effect on the frequency response of the system; -the function: has been observed to be unitary for most values of r [2] and, therefore, the assumption of S = 1 will be considered in what follows. This assumption eliminates the time dependence of Den Hartog's boundary and reduces Eq. (13) to the solution presented by Hong and Liu in reference [5], which has been obtained with a different analytical approach. It is worth noting that Den Hartog's boundary was obtained under the assumption of steady-state motion. 
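The formulas defining the four non-dimensional groups and the sticking conditions were lost in extraction. The sketch below therefore assumes the natural definitions consistent with the equivalent system of Fig. 2a, namely r 1 = ω√(m 1 /k 1 ), β = F/(k 1 Y), κ = k 2 /k 1 and γ = m 2 /m 1 , together with a sticking condition of the form |cos τ − (1 + κ)x̄ 1 + κ x̄ 2 | ≤ β; it classifies the steady-state regime with a simple fixed-step integration, in the spirit of the numerical approach described later in the paper.

```python
import numpy as np

# Assumed non-dimensional groups (see note above); values are illustrative only.
r1, beta, kappa, gamma = 0.8, 0.4, 1.0, 0.5
dt, n_cycles, transient_cycles = 1e-3, 100, 80

x1, x2, v1, v2 = 0.0, 0.0, 0.0, 0.0
stuck, stops, vmax = True, 0, 0.0
t_transient = transient_cycles * 2.0 * np.pi

for t in np.arange(0.0, n_cycles * 2.0 * np.pi, dt):
    # Resultant of the non-frictional, non-inertial forces on the mass in contact.
    load = np.cos(t) - (1.0 + kappa) * x1 + kappa * x2
    if stuck and abs(load) > beta:
        stuck = False                                  # friction limit exceeded
    a1 = 0.0 if stuck else (load - beta * np.sign(v1 if v1 else load)) / r1**2
    a2 = -kappa * (x2 - x1) / (gamma * r1**2)
    v1_new = v1 + a1 * dt
    if not stuck and v1 * v1_new < 0.0 and abs(load) <= beta:
        stuck, v1_new = True, 0.0                      # sticking condition verified
        if t > t_transient:
            stops += 1
    v1, v2 = v1_new, v2 + a2 * dt
    x1, x2 = x1 + v1 * dt, x2 + v2 * dt
    if t > t_transient:
        vmax = max(vmax, abs(v1))

if vmax < 1e-6:
    regime = "no motion"
elif stops > 0:
    regime = "stick-slip"
else:
    regime = "continuous"
print(f"r1={r1}, beta={beta}: steady-state regime appears to be '{regime}'")
```

Sweeping r 1 and β on a grid with such a routine is, in essence, how the numerical boundaries shown later could be reproduced.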
In reference [4], Shaw demonstrated that SDOF systems with Coulomb friction are asymptotically stable in the absence of viscous damping, except that for r = 1/n, n = 1, 2, ...; particularly, an infinity of equally marginally stable solutions coexist if r = 1/(2n) [6]. Therefore, excluding these particular values, different motion regimes cannot coexist for given r and β depending on the initial conditions. Moreover, it must be observed that for r = 1 the amplitude of the response will grow indefinitely if β < π/4 [2] and, therefore, steady-state condition will not be reached. Continuous motion will occur below the boundary described by Eq. (13) and is depicted by the blue area in Fig. 3, while stick-slip motion is expected above this line (the orange area in Fig. 3). Steady mass motion will not be possible when the amplitude of the exciting force is smaller than the amplitude of the static friction force; this happens when β ≥ 1 (grey area). This basic notion will be used in more complex systems to obtain the boundary between motion and no motion regions. The analytical approach proposed for the evaluation of the boundary between continuous and stick-slip motion regimes in MDOF systems with a Coulomb friction contact is based on the following assumptions and observations. -It is assumed that the steady-state response of the system is independent of the assigned initial conditions and converges asymptotically to a stable solution. As previously mentioned, this stability property is well known for SDOF systems but, to the best of the authors' knowledge, it has never been thoroughly investigated in the MDOF case. The convergence to a unique steady-state response has been verified in all the numerical investigations carried out in this paper. -If the relative motion between the mass and the wall in contact is non-sticking, the governing equations of the system will be linear within a time interval equal to half period of motion. In fact, the Coulomb force will be constant in any interval where no change in the sign of their relative velocity occurs. -The response functions of the system can be obtained by neglecting the friction force and using a standard modal analysis procedure, as described in Sect. 2.4.1. -In addition, this conjecture is proposed: the boundary between continuous and stick-slip regimes can be expressed by using Eq. (13), in the assumption of S = 1. In this equation, the response function V is obtained as described above. The damping function U is formulated in a similar fashion as a superposition of the damping functions of each vibrating mode. Numerical investigations are carried out for varying parameters, masses in contact and numbers of DOFs to validate the boundaries obtained under these assumptions. Response functions Although the response functions of a MDOF system can be determined by using standard modal analysis, the main steps of the procedure will be reported in this section to define the relevant variables. The approach is described in detail for the 2DOF case and can be easily extended to systems with a larger number of degrees of freedom, as described in Sect. 2.6. The first step consists in evaluating the natural frequencies and the corresponding mode shapes of the undamped system, therefore disregarding the friction effect and the external excitation. Let us denote as i the natural frequencies of the linear system in the physical coordinates space. 
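As a numerical preview of the procedure detailed below, the following sketch assembles the mass and stiffness matrices assumed for the equivalent non-dimensional system (diagonal masses r 1 2 and γ r 1 2 , springs of stiffness 1 and κ, made explicit in the next equations), solves the generalised eigenvalue problem with unit-modal-mass normalisation, and cross-checks the resulting response functions against a direct steady-state solution; the antiresonance of the lower mass mentioned later in this section, expected here at r 1 = √(κ/γ), is also verified.

```python
import numpy as np
from scipy.linalg import eigh

def modal_response(r1, kappa, gamma):
    """Natural frequencies, unit-modal-mass mode shapes and response
    functions V1, V2 of the assumed non-dimensional 2DOF system (Fig. 2)."""
    M = np.diag([r1**2, gamma * r1**2])
    K = np.array([[1.0 + kappa, -kappa],
                  [-kappa,       kappa]])
    # Generalised eigenvalue problem K psi = Omega^2 M psi.
    lam, Psi = eigh(K, M)              # eigh returns M-orthonormal modes
    Omega = np.sqrt(lam)               # non-dimensional natural frequencies
    p = np.array([1.0, 0.0])           # unit base-excitation force on the lower mass
    p_modal = Psi.T @ p                # modal forces
    H = p_modal / (lam - 1.0)          # modal amplitudes (excitation frequency = 1)
    V = Psi @ H                        # response functions [V1, V2]
    return Omega, Psi, V

# Direct check: V must equal the solution of (K - M) X = p.
r1, kappa, gamma = 0.9, 1.0, 0.5
Omega, Psi, V = modal_response(r1, kappa, gamma)
M = np.diag([r1**2, gamma * r1**2])
K = np.array([[1.0 + kappa, -kappa], [-kappa, kappa]])
assert np.allclose(V, np.linalg.solve(K - M, [1.0, 0.0]))

# Expected antiresonance of the lower mass at r1 = sqrt(kappa/gamma).
_, _, V_anti = modal_response(np.sqrt(kappa / gamma), kappa, gamma)
print("V1 at the antiresonance:", V_anti[0])   # ~ 0
```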
The natural frequencies of the non-dimensional system can be expressed as: Such frequencies can be obtained as solutions of the generalised eigenvalue problem written in the form: where M = r 2 1 0 0 γ r 2 1 (19) and: are, respectively, the mass and the stiffness matrices of the non-dimensional system and where the corresponding mode shapes are indicated with ψ i = ψ 1,i ψ 2,i T . Thus, the natural frequencies can be viewed as eigenvalues and the mode shapes as eigenvectors. For a non-trivial solution of Eq. (18), it is required that: which leads to an algebraic equation. The resulting natural frequencies can be written as: Having found the natural frequencies, the mode shapes ψ i must satisfy: The mode shapes are defined up to a constant, so only the ratio between their components can be obtained from the system in Eq. (23): In order to define uniquely the components of each mode a normalisation is usually operated according to different criteria (see reference [30]). In this paper, the modes will be normalised so that the modal masses: are equal to 1. The normalised mode shape vectors obtained from such procedure are: These eigenvectors are independent and therefore any undamped motion of the system can be written as their linear combination. Let us define the modal matrix as the matrix whose columns are the mode shapes: The modal matrix is used to introduce the coordinate transformation: where the componentsη i of the vectorη are defined as modal coordinates. The introduction of this system of coordinates allows the rewriting of Eqs. (4) and (5) as systems of uncoupled equations. In fact, neglecting friction force at this stage, Eq. (4), as well as Eq. (5), can be written in matricial form as: wherep = cos τ 0 T . By introducing the transformation in Eq. (28), the governing equations assume the form: (30) or, in a more compact form: wherê and: are, respectively, the modal mass and the modal stiffness matrices, while: is defined as the modal force vector. Therefore, the i-th equation of the system in Eq. (31) can be written as: Equation (35) represents the governing equation of a SDOF system characterised by the natural frequencȳ i . Therefore, the amplitude H i of its response to the exciting forcep i can be expressed as: is the amplitude of the i-th modal force. From Eq. (28), it can be observed that: and, therefore, it is possible to obtain the response function for the j-th degree of freedom of the undamped system as: By introducing the i-th modal frequency ratio as: it is possible to write V j as: It is worth noting as the excitation vector p can assume different forms if different loading configurations are considered, e.g. when the harmonic excitation is applied to the upper mass. This case is not accounted in this section but it will be dealt with in Sects. 3 and 4. Let us introduce the modal weight: and denote the response functions of the i-th mode as: It is then possible to rewrite the j-th response function as: and, by introducing the matrix of the modal weights P and the vector v whose components are v i , the response vector as: This notation can be particularly useful when dealing with systems with a larger number of DOFs. The response functions of a 2DOF system under harmonic base excitation, observed on the lower and on the upper mass, are obtained by substituting Eqs. (26) and (37) into Eq. 
(41) and can be written, respectively, as: and: Damping functions and results In this study, it is proposed that modal superposition can be used to express the damping functions of a MDOF system. In a similar fashion as in Eq. (15), let us denote the damping function of the i-th mode as: Let us suppose that the damping function of a 2DOF system with a ground-fixed wall contact on the j-th mass can be written as: where F, ji is the friction modal weight relative to the i-th mode, expressed as: and e F is a vector where only the j-th component is different from zero and it is equal to 1. Comparing Eqs. (42) and (50), it is possible to note as in the latter the excitation vector P is replaced by e F . From Eq. (50), it is easily obtained that: By introducing the matrix of the friction modal weights, whose coefficients are F, ji , the damping vector U of a MDOF system can be written as: The components U j of such vector will indicate the damping function that must be considered if a fixedwall contact is imposed on the j-th mass of the system. An expression is proposed for the boundary between continuous and stick-slip motion in the space r 1 -β. Denoting with β j the friction ratio relative to a contact between the mass m j and the wall, the boundary can be written in a similar fashion to Eq. (13) as: The termm j refers to the second-order coefficient in the non-dimensional governing equations in Eqs. (4) and (5), i.e.m 1 = r 2 1 andm 2 = γ r 2 1 . The damping function U j can be rewritten, by substituting Eqs. (48) and (51) into Eq. (49), as: With respect to the system illustrated in Figs. 1a and 2a, where the contact occurs between the lower mass ( j = 1) and a fixed wall, the damping function will be therefore expressed as: and, consequently, the boundary will be expressed as: The boundary obtained from Eq. (56) is represented in Fig. 4 for different values of the mass and stiffness ratios. In the figure, it is shown as this analytical curve has an excellent agreement with the results obtained via numerical integration, using the approach described in Sect. 4 for 0 ≤ r 1 ≤ 2.5 and 0 ≤ β ≤ 1. Stickslip motion occurs also for low friction ratios when the frequency ratio is small; furthermore, in the same frequency range, the boundary shows an irregular pattern, partially recalling the one observed in SDOF systems ( Fig. 3). Nevertheless, a main difference is that a peak can always be observed in this range, specifically in correspondence of the lowest natural frequency of the system. Moving towards higher frequency ratios, it is possible to observe a very thin grey region (more clearly in Fig. 4a,d). This corresponds to an antiresonance of the system, which can be observed in the lower mass of a 2DOF system, independently of damping, at: At this frequency, in the presence of Coulomb damping, the friction prevents the system from exhibiting any vibration in steady-state conditions; therefore, no motion has been observed numerically. The right side of the boundary reproduces the same pattern observed in SDOF systems, with a finite peak with β ∼ = 0.8, reached slightly before the second resonant frequency ratio of the system, and then decreasing towards an asymptotic value [10]. The same approach can also be applied to a 2DOF system where the ground-fixed wall contact involves the upper mass (Figs. 1b, 2b). In this case, the damping function and the boundary condition can be written, respectively, as: and: This analytical function is shown in Fig. 
5, where it is compared with the numerical boundary between continuous and stick-slip motion, showing also in this case an excellent agreement. Particularly, the boundary appears to increase from zero to a finite peak, although not regularly at low frequencies. This peak is located between the peak of the boundary between motion and no motion regions (described in Sect. 2.5) and the second natural frequency of the system. It is also possible to observe as the frequency ratio of the peak appears to be only weakly influenced by the mass ratio. Finally, increasing r 1 above the peak frequency ratio, the boundary converges to zero. Some irregularities in the agreement between analytical and numerical boundaries can be observed locally (for instance at r ∼ = 1.2 in Fig. 5d); this is due to the approximation introduced by assuming S = 1, as specified in Sect. 2.3. Condition for the presence of a no motion region in 2DOF systems In Fig. 5, numerical results revealed a large no motion region, shown in grey. As stated in Sect. 2.3 for SDOF systems, steady-state response can be observed in Coulomb damped systems only when the amplitude of the exciting force is larger than the amplitude of the friction force. Specifically, in MDOF systems with a single source of Coulomb damping, this condition must be verified on the mass directly involved in the contact by comparing the amplitudes of the friction force exerted by the fixed wall and of the overall dynamic load acting on such mass when its displacement and velocity are equal to zero. Therefore, the conditions for steady motion between such mass m i and the wall in contact can be found assuming that the mass is perfectly fixed to the wall. For instance, when a friction contact between the lower mass and the wall is considered (Fig. 2a), the only force acting on m 1 , in addition to the friction force, is the base excitation transmitted by the lower spring. As the amplitude of the non-dimensional base motion and the stiffness of this spring are both unitary, the motion condition will be given, trivially, by β < 1, as observed for SDOF systems. Instead, when the contact occurs on the second mass, the only exciting force to be considered is the spring force due to the displacement of the lower mass and transmitted by the upper spring. Thus, the condition for the presence of a steady motion is expressed by: The amplitude X 1 of the lower mass motion can be evaluated by fixing the upper mass in the non-dimensional system (Fig. 2b). In this case, the system reduces to a SDOF, where the lower mass is attached to the ground on either side, by springs of stiffness, respectively, 1 and κ. Therefore, its governing equation will be: By imposingx 1 = X 1 cos τ , it is possible to write the response amplitude as: Substituting Eq. (62) into Eq. (60), it is possible to rewrite the motion condition as: This analytical boundary is plotted in Fig. 5 and shows a very good agreement with the boundary obtained from the numerical integration. A first observation is that this boundary is completely independent of the mass ratio; this justifies also the already mentioned weak dependence on γ shown by the peak of the boundary between continuous and stick-slip regimes. It is possible to observe how the motion is allowed at r ∼ = 0 for force ratios smaller than κ/(1 + κ). The boundary increases monotonically until reaching an infinite peak for r 1 = √ 1 + κ, which is, therefore, the only frequency ratio for which motion is always allowed. 
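The explicit expressions of Eqs. (60)–(63) were lost above. A sketch consistent with the surrounding description is given below: with the upper mass fixed, the remaining SDOF has mass r 1 2 and total stiffness 1 + κ, so X 1 = 1/|1 + κ − r 1 2 | and the assumed boundary is β = κ X 1 , which reproduces the stated low-frequency value κ/(1 + κ) and the unbounded peak at r 1 = √(1 + κ).

```python
import numpy as np

def beta_motion_limit_upper_contact(r1, kappa):
    """Assumed reconstruction of Eq. (63): largest friction ratio allowing
    steady sliding when the ground-fixed contact acts on the upper mass."""
    X1 = 1.0 / abs(1.0 + kappa - r1**2)   # lower-mass amplitude with m2 held fixed
    return kappa * X1                     # spring force transmitted to the contact

kappa = 1.0
for r1 in (0.01, 1.0, 1.4, 3.0):
    print(f"r1 = {r1:4.2f}  beta_lim = {beta_motion_limit_upper_contact(r1, kappa):7.3f}")
# The boundary tends to kappa/(1+kappa) as r1 -> 0, is unbounded at
# r1 = sqrt(1+kappa) and decays to zero at high frequency ratios.
```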
No motion Finally, the boundary converges to zero at high frequencies. It is worth observing that the several spikes shown by the numerical results in Fig. 5 above the boundary are due to residual transient motion not completely decayed at the end of the time interval considered in the numerical simulation, as underlined in Sect. 5. The motion regimes scenario defined by the analytical boundaries found in this section for 2DOF systems with a ground-fixed wall Coulomb contact is summarised in Table 1. 2.6 Boundaries for systems with more than two DOFs The procedures described in Sects. 2.4 and 2.5 for the analytical determination of the boundaries between motion regimes for 2DOF systems with a fixed Coulomb contact can be extended to systems with a larger number of DOFs, maintaining the limitation that Coulomb damping must be generated by a single contact between one of the masses and the ground-fixed wall. Governing equations First of all, it is important to define the governing equations of a generic MDOF system consistently with the formulation used for 2DOF systems so far. Consider a NDOF system, made of N masses m i connected in series by N springs of stiffness k i , which is subjected to a monoharmonic excitation with driving frequency ω, due to either a base motion or a direct mass excitation (Fig. 6a). If a friction contact is imposed between the j-th mass and a fixed wall, it will be possible to write the j-th governing equation of the system as: If present, the load k 1 y due to the base motion must be included in the equation if j = 1. Equation (64) can also be written in non-dimensional form as: where the j-th mass ratio and the j-th stiffness ratio are defined, respectively, as: and: The non-dimensional system described by Eq. (65) is shown in Fig. 6b. It can be observed that the system is completely described by 2N parameters: the frequency ratio r 1 , the friction ratio β, the mass ratios γ 2 , ..., γ N and the stiffness ratios κ 2 , ..., κ N . Trivially, γ 1 = 1 and κ 1 = 1 by definition. Boundary between continuous and stick-slip regimes Regarding the boundary between continuous and stickslip regimes, it is intuitive that the modal superposition can be applied to any number of DOFs. According to Eq. (65), the mass and the stiffness matrices will be, respectively: and: By substituting these matrices into Eq. (22), it is possible to derive the N natural frequencies of the undamped system and, from Eq. (21), its N mode shapes. Once these quantities are determined, it is possible to follow the remaining part of the procedure described in Sect. 2.4, determining the response function V j and the damping function U j for the j-th mass. The boundary curve is finally obtained by substituting these values, as well as posingm j = γ j r 2 1 ( j = 1, ..., N ), into Eq. (53). Boundary between motion and no-motion regimes The determination of the boundary between motion and no motion regimes can be lead similarly to Sect. 2.5. The first step consists in determining which dynamic forces act on the mass in contact m j when it is fixed atx j = 0. The sum of these forces, which will be compared with the friction force, can include, in general, dynamic loads applied directly on the mass and the spring forces due to the dynamic responses of the masses m j−1 and m j+1 . Particularly: -a spring force of module κ j X j−1 will be considered if any source of excitation is found in the lower part of the system (Fig. 
7a); -a spring force of module κ j+1 X j+1 will be considered if any source of excitation is found in the upper part of the system (Fig. 7b); The second step consists in the evaluation of the unknown response amplitudes X j−1 and/or X j+1 , which can be obtained by referring to the following undamped subsystems: -the response amplitude X j−1 can be evaluated from the lower subsystem, which is composed of the j −1 masses located below the mass in contact m j , while such a mass is replaced by a fixed wall, as shown in Fig. 7a; -the response amplitude X j+1 can be evaluated from the upper subsystem shown in Fig. 7b, where the N − j upper masses are instead considered. These subsystems will be, in general, two undamped MDOF systems and their dynamic response can be determined analytically using standard approaches (see, e.g. [29,30]). These responses can be substituted into the motion conditions obtained at the end of the first step, therefore yielding the final formulation of the motion boundary. It can be observed that the response amplitudes evaluated from each of the subsystems will display infinite peaks in correspondence of their natural frequencies and this will affect the shape of the boundary: if the excitation is below m j (such as a base motion), the boundary between motion and no motion regimes will exhibit j − 1 infinite peaks. Similarly, N − j infinite peaks will be visualised if any dynamic force is acting on the upper part of the system; finally, N − 1 peaks will be found if both loading conditions occur simultaneously. These infinite peaks imply that, at specific frequencies, steady-state motion will be observed in the contact for any values of friction ratio. Example: 5DOF systems with a ground-fixed Coulomb contact The procedure introduced for the analytical determination of the boundaries of motion regimes is applied to a 5DOF system under harmonic base motion with a Coulomb contact as an example of NDOF system with N > 2. Without loss of generality, let us consider a 5DOF system where all the masses are equal to m and all the springs have stiffness k, i.e. where all the stiffness and mass ratios are unitary. A ground-fixed contact, characterised by a friction force of amplitude F, is applied to the mass m 4 and the system is subjected to a base motion y = Y cos(ωt), as shown in Fig. 8a. The response and the damping functions V 4 and U 4 have been evaluated by applying the modal superposition procedure introduced in this section and the boundary between continuous and stick-slip motion has been obtained by substituting their values into Eq. (53) for j = 4. Regarding the boundary between motion and no motion regimes, it can be observed that the only dynamic force acting on m 4 , when fixed at x 4 = 0, is a spring force of amplitude k X 3 . Therefore, the boundary is obtained when F = k X 3 or, non-dimensionally, when β = X 3 . The value of X 3 can be determined by using standard modal analysis on the undamped subsystem shown in Fig. 9a. The so-determined boundaries are shown in Fig. 10a, where a comparison with the numerical boundaries obtained with the approach introduced in Sect. 5 is also achieved, exhibiting a very good overall agreement. 
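The three infinite peaks quoted in the next paragraph can be recovered directly from the lower subsystem of Fig. 9a. The sketch below assumes the clamped–clamped chain described in the text (the three masses below m 4 , with m 4 held still), with all mass and stiffness ratios unitary as in the example.

```python
import numpy as np
from scipy.linalg import eigh

# Lower subsystem for the contact on m4: three unit masses between the base and
# the fixed mass m4, connected by four springs of unit non-dimensional stiffness.
kappa = np.ones(4)          # kappa_1 ... kappa_4
gamma = np.ones(3)          # gamma_1 ... gamma_3
K = np.array([[kappa[0] + kappa[1], -kappa[1], 0.0],
              [-kappa[1], kappa[1] + kappa[2], -kappa[2]],
              [0.0, -kappa[2], kappa[2] + kappa[3]]])
# The subsystem resonates with the unit-frequency excitation where
# det(K - r1^2 diag(gamma)) = 0, i.e. at r1 equal to the square roots of the
# generalised eigenvalues of (K, diag(gamma)).
lam = eigh(K, np.diag(gamma), eigvals_only=True)
print("peak frequency ratios:", np.sqrt(lam))   # ~ [0.7654, 1.4142, 1.8478]
```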
As already mentioned for other results, the small spikes present in the grey area are due to a not completely decayed transient motion in the numerical solutions, while the presence of some local disagreement in the boundary between continuous and stick-slip regimes at r 1 ∼ = 1.5 is instead related to the approximation of S = 1 (see Sect. 2.3). As expected, the boundary between motion and no motion regions exhibits three infinite peaks for r 1 = 0.7654, r 1 = 1.4142 and r 1 = 1.8478. Such peaks correspond to the resonances of the 3DOF undamped subsystem in Fig. 9a. If the friction contact is applied on the second mass, as shown in Fig. 8b, the corresponding lower subsystem (a) (b) Fig. 9 Lower subsystems corresponding to the no-motion configurations of the main systems in Fig. 8a (a) and in Fig. 8b (b) has only one DOF (Fig. 9b) and a single infinite peak is found (at r 1 = √ 2) in the motion boundary plotted in Fig. 10b. In the figure, it is possible to observe a good agreement between analytical and numerical results; all the observations regarding the discrepancies between these results made for the previous system also apply to this configuration. Finally, it can be observed in both cases that the shape of the upper bound for non-sticking motion is strongly affected by the presence of resonances in the motion/no motion boundary and usually exhibits the same number of major peaks; however, in all the cases investigated for the fixed-wall configuration, their value was always finite. Base-fixed wall contacts In this section, the analytical formulation of the boundaries achieved for MDOF systems with a ground-fixed wall contact is extended to systems where a Coulomb contact occurs between a mass and a wall moving jointly with the base. As proposed in reference [10] for SDOF systems, Den Hartog results [2] can be extended to systems excited by joined base-wall motion if an appropriate reference system is chosen. The analytical bounds of the motion regimes for 2DOF systems with base-fixed wall contacts are derived in what follows. Governing equations and sticking conditions Let us consider a 2DOF system consisting of two masses m 1 and m 2 and two springs of stiffness k 1 and k 2 . The system is assumed to be excited by a harmonic base motion y = Y cos(ωt) and a friction contact is achieved between a moving wall jointed to the base and either m 1 (Fig. 11a) or m 2 (Fig. 11b). Such a system is governed by the equations: in the first configuration and by the equations: in the latter. In order to apply to this system the procedures described in Sect. 2, it is convenient to rewrite Eqs. (70) and (71) in the same form as Eqs. (1) and (2); this can be achieved by applying an appropriate variable transformation. Let us define the relative motions between either mass m 1 or m 2 , respectively, as: . Substituting Eqs. (72) and (73) into Eqs. (70) and (71), and after some algebraic manipulations, it is possible to write: Fig. 12 Equivalent system with a ground-fixed wall contact for a 2DOF system with a base-fixed wall contact involving a the lower mass or b the upper mass (a) (b) Fig. 13 Non-dimensional equivalent system with a ground-fixed wall contact for a 2DOF system with a base-fixed wall contact involving a the lower mass or b the upper mass when the contact occurs between the base-jointed wall and the lower mass and: when the upper mass is in contact. Equations (74) and (75) are the governing equations of the systems shown in Fig. 
12a, b, which will be defined equivalent systems of the systems introduced in Fig. 11a, b. These 2DOF equivalent systems present a ground-fixed wall contact and, therefore, the modal superposition procedure can be applied as described in Sect. 2.4. As it can be observed from Fig. 12a, b, both masses are excited by equivalent harmonic forces whose amplitudes are proportional to r 2 1 ; therefore, the dynamic load will increase significantly at high frequency ratios, unlike the friction force, allowing the presence of continuous motion also when high friction ratios are considered. This result is in perfect agreement with what was observed in [10] for Coulombdamped SDOF systems under harmonic joined basewall motion. In order to apply the modal superposition procedure, it is convenient to rewrite Eqs. (74) and (75) in a nondimensional form. Introducing the dimensionless state variables: and considering all the non-dimensional groups introduced in Sect. 2.1, it is possible to write, with the respect to the two different contact locations considered in this section: and: Equations (77) and (78) are representative of the non-dimensional equivalent systems shown in Fig. 13a, b. Following the criteria detailed in Sect. 2.2, it is possible to derive from Eq. (77) the sticking conditions needed for the numerical integration: Equation (79) can be rewritten in terms of x 1 and x 2 as: Similarly, the conditions for the configuration involving a friction contact on the upper mass will be: Boundary between continuous and stick-slip regimes for 2DOF systems The systems in Fig. 13a, b exhibit the same contact configurations as the systems shown in Fig. 2a, b, which have been referred to when introducing the modal superposition procedure in Sect. 2.4. The only relevant difference between these systems is found in the different load configurations, as the equivalent systems considered here are subjected to dynamic forces directly applied on the masses, rather than to base motion. Let us then write the excitation vector for the current sys-tems as: Substituting Equations (27) and (82) into Eq. (34), it is possible to write the modal force vector as: Therefore, considering Eq. (46), it is possible to write the response functions for each mass of the equivalent systems as: and: Regarding the damping functions, as the contact configurations and the friction forces are the same considered in Sect. 2, it is possible to write U z 1 = U 1 and U z 2 = U 2 , referring to Eqs. (55) and (58). Finally, the boundaries between continuous and stick-slip regimes are described, respectively, for the two cases, by Eqs. (56) and (59) for V j = V z j and U j = U z j . The analytical boundary between continuous and stick-slip regimes obtained from such a procedure for a 2DOF system with a contact between the lower mass and the base-fixed wall is represented in Fig. 14 for varying stiffness and mass ratios and it shows a good agreement with the numerical results. The curve is split into two parts by an antiresonance, which is further described in Sect. 3.3. At low frequencies, the boundary presents very small values of friction ratio until reaching a first sharp peak in correspondence of the lower natural frequency, while a second smoother peak appears shortly before the antiresonance. In this frequency range, some discrepancies between analytical and numerical results can be observed and they are due to the assumption of S = 1 (see Sect. 2.3). 
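As an aside before the description of Fig. 14 continues, note that Eqs. (84)–(85) did not survive extraction; a sketch of the corresponding computation is given below, in which the response functions of the equivalent system are obtained by a direct steady-state solve with the assumed equivalent force vector [r 1 2 , γ r 1 2 ] T rather than through the modal series.

```python
import numpy as np

def response_functions_base_fixed(r1, kappa, gamma):
    """Sketch of the response functions of the equivalent system of Fig. 13,
    assuming the joined base-wall motion acts as forces r1^2 and gamma*r1^2
    on the lower and upper mass, respectively."""
    M = np.diag([r1**2, gamma * r1**2])
    K = np.array([[1.0 + kappa, -kappa],
                  [-kappa,       kappa]])
    p = np.array([r1**2, gamma * r1**2])   # equivalent inertial forcing
    return np.linalg.solve(K - M, p)       # steady-state amplitudes at unit frequency

for r1 in (0.5, 1.5, 3.0, 10.0):
    Vz = response_functions_base_fixed(r1, kappa=1.0, gamma=0.5)
    print(f"r1 = {r1:5.1f}   V1z = {Vz[0]:9.3f}   V2z = {Vz[1]:9.3f}")
# The equivalent forcing grows as r1^2, which is why continuous sliding remains
# possible at high frequency ratios even for large friction ratios.
```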
After the antiresonance, the curve gradually increases to infinity; therefore, it will always be possible to observe a continuous motion between mass and wall by increasing the frequency ratio until a certain threshold value. The analytical boundary between continuous and stick-slip regimes for the contact configuration involving the upper mass is depicted in Fig. 15. It shows an excellent agreement with the numerical results. The boundary is very similar to the one shown for the previous configuration in Fig. 14, except for a few differ-ences, which will be described in more detail in the following subsection. Condition for the presence of a no motion region in 2DOF systems Both Figs. 14 and 15 highlight the presence of regions where no relative motion was observed numerically between the mass and the wall in contact in steadystate conditions. This means that the mass involved in the friction contact is stuck on the base-fixed wall and, therefore, forced to move with the same harmonic motion as the base. As specified in Sects. 2.3 and 2.5, this eventuality occurs when the amplitude of the friction force acting on the mass in contact is larger than the amplitude of the sum of the other forces acting on such a mass when its relative position and velocity are zero. In order to determine the analytical formulation of the boundary between motion and no-motion regions in the case where the lower mass is in contact, let us consider the non-dimensional equivalent system shown in Fig. 13a. The non-frictional forces acting on such a mass when it is still inz 1 = 0 are the equivalent exciting load r 2 1 cos τ , due to the base motion, and the spring force κz 2 , due to the motion of the upper mass. Therefore, according to what previously stated, the motion condition will be: If the lower mass is fixed to the wall, the system in Fig. 13a will behave like a SDOF system governed by the equation: The amplitude of the response to the excitation γ r 2 1 cos τ can be determined by imposingz 2 = Z 2 cos τ in the above equation and it can be written as: Substituting Eq. (88) into Eq. (86), the final motion condition can be written as: The boundary described by Eq. (89) is represented in Fig. 14 and shows a good agreement with numerical results. As shown in the figure, the boundary starts from the origin of the parameter space and increases until reaching an infinite peak at r 1 = √ κ/γ . Further increasing the frequency ratio, the boundary decreases until the already mentioned antiresonance. The frequency ratio of the antiresonance can be determined as a root of the numerator of Eq. (89): After the antiresonance, the boundary increases to infinity, coherently with what has been observed for the boundary between continuous and stick-slip motions. The same procedure can be used also for determining the motion condition when the upper mass is in contact with the moving wall. Referring to the nondimensional equivalent system in Fig. 13b, it is possible to observe that, when the upper mass is still, the overall excitation on this mass is given by the sum of the equivalent dynamic load γ r 2 1 cos τ , due to the base motion, and of the spring force κz 1 , due to the motion of the lower mass and transmitted by the spring of stiffness κ. Thus, the motion condition can be written as: When the upper mass is stuck, the system turns into a SDOF system where the lower mass is connected to a fixed wall by both springs, therefore with an overall stiffness equal to 1 + κ. 
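Before completing the derivation for the configuration with the upper mass in contact, the boundary just obtained for the lower-mass contact can be evaluated numerically. The explicit form of Eq. (89) was lost above; the sketch assumes the reconstruction β lim (r 1 ) = r 1 2 |κ(1 + γ) − γ r 1 2 |/|κ − γ r 1 2 |, which is consistent with the unbounded peak reported at r 1 = √(κ/γ) and with the antiresonance of Eq. (90).

```python
import numpy as np

def beta_motion_limit_base_fixed_lower(r1, kappa, gamma):
    """Assumed reconstruction of Eq. (89): largest friction ratio for which the
    lower mass can slide on the base-fixed wall."""
    return (r1**2 * abs(kappa * (1.0 + gamma) - gamma * r1**2)
            / abs(kappa - gamma * r1**2))

kappa, gamma = 1.0, 0.5
r_peak = np.sqrt(kappa / gamma)                    # unbounded peak (Fig. 14)
r_anti = np.sqrt(kappa * (1.0 + gamma) / gamma)    # antiresonance, Eq. (90)
for r1 in (0.5, 0.95 * r_peak, r_anti, 2.5):
    b = beta_motion_limit_base_fixed_lower(r1, kappa, gamma)
    print(f"r1 = {r1:5.2f}   beta_lim = {b:8.3f}")
```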
For this reduced SDOF system, the governing equation will be: and, therefore, the response amplitude will be: Introducing Eq. (93) into Eq. (91), the motion condition becomes: Equation (94) describes the boundary between no motion and stick-slip regime in Fig. 15, which shows a very good agreement with the corresponding numerical boundary. The analytical curve shows a similar behaviour to the one described for the previous configuration, with two main differences: -the infinite peak is observed at r 1 = √ 1 + κ, which is the root of the denominator of Eq. (94); -the antiresonance is placed at: All the boundaries described in this section are summarised in Table 2. Boundaries for systems with more than two DOFs The formulation of the boundaries among motion regimes in the parameter space r 1 − β can be extended to joined base-wall excited systems with a larger number of DOFs, similarly to Sect. 2.6, if only one mass of the system is rubbing against the moving wall. Also for this contact configuration, a fundamental step is the definition of a system of governing equations for the MDOF system, expressed consistently with the formulation used for 2DOF systems in Sect. 3.1. Let us consider a harmonically excited NDOF system where a friction contact is achieved between the mass m j and the wall. It is possible to write the governing equation for the j-th DOF of the system as: The RHS will be equal to k 1 y if j = 1. As proposed in Sect. 3.1, it is convenient to introduce the state variable z j = x j − y, so that Eq. (96) can be rewritten as: or, in a dimensionless form, as: As can be deduced by comparing Eq. (98) to Eq. (65), the introduction of the coordinates z 1 , ..., z N allows the representation of the system as a NDOF system with a ground-fixed contact. Specifically, the mass and stiffness matrices of the system will be the same as in Eqs. (68) and (69). Therefore, all the considerations stated in Sect. 2.6 apply. Particularly, it is worthwhile observing that the joined base-wall excitation produces equivalent dynamic loads equal to γ i r 2 1 cos τ on all the masses of the system. This means that, unless the mass in contact is placed at the bottom ( j = 1) or at the top of the system ( j = N ), both the lower and the upper undamped subsystems must be taken into account when following the procedure described in Sect. 2.6. Mass-fixed wall contacts This section focuses on the formulation of the boundaries among motion regimes for 2DOF systems where a Coulomb contact is achieved between the two masses, in parallel with a spring. The analytical results found in Sects. 2 and 3 can be extended also to this configuration after finding a variable transformation that allows the formulation of this problem in terms of an equivalent 2DOF system with a ground-fixed contact. Generalities and sticking conditions Let us consider a 2DOF system where the masses m 1 and m 2 are connected in parallel by a spring of stiffness k 2 and a Coulomb contact characterised by the friction force F. The lower mass m 1 is connected to the base by a spring of stiffness k 1 ; the system is excited by a harmonic base motion y = Y cos(ωt) (Fig. 16a). The governing equations of this system can be written as: The dynamic behaviour of this system can be analysed by seeking a variable transformation that allows Eq. (99) to be rewritten in the same form as Eq. (1). (Fig. 16 shows the 2DOF system under harmonic base excitation with a spring and a Coulomb contact in parallel between the masses (a), its equivalent representation as a 2DOF system with a ground-fixed wall contact on the lower mass (b) and the non-dimensional system corresponding to the latter (c).)
This would allow the results found for ground-fixed wall contacts in Sect. 2.4 to be extended to the contact configuration investigated in this section. The first step is the introduction of the state variable: i.e. the relative displacement between the components in contact. Multiplying Eq. (99a) by m 2 /m 1 and subtracting it from Eq. (99b), it is then possible to write: and therefore: Equation (102) partially recalls a result previously described by Den Hartog. In fact, in reference [31], he observes that a system composed of two masses m 1 and m 2 connected in parallel by a spring k and a Coulomb contact with friction force F, where a harmonic motion x 1 = X 1 cos(ωt) is imposed on mass m 1 , is equivalent to a SDOF system characterised by: -a mass m 1 m 2 /(m 1 + m 2 ); -a spring of stiffness k; -a ground-fixed wall contact with friction force F; -a harmonic excitation of amplitude (m 2 /m 1 )k 1 x 1 . Therefore, in Den Hartog's system the motion x 1 is given a priori and is not intended as a response, differently from what happens in the system described in this section. Furthermore, Den Hartog's system is not connected to the ground, so it will exhibit only one oscillating mode in addition to a rigid-body motion. Conversely, the system in Fig. 16a is in all respects a 2DOF system, where both x 1 and x 2 are unknown, so the problem cannot be reduced to the analysis of an equivalent SDOF system. Equation (102) also allows the definition of the sticking conditions for this system, useful for the numerical integration approach described in Sect. 4: or, in terms of x 1 and x 2 : Boundary between continuous and stick-slip regimes Let us introduce the coordinate x c of the centroid of the system: and consider the sum of Eqs. (99a) and (99b): This can be rewritten as: As expected, the motion of the centroid of the system is not influenced by either the friction force or the action of the spring k 2 . By writing x 1 and x 2 in terms of the new state coordinates x d and x c : it is possible to remove x 1 from Eqs. (101) and (107), obtaining the following system of equations: Equation (109) provides a useful alternative description of the system in terms of the relative motion between the masses and of its centroid. However, although the friction force is now present only in Eq. (109a), this system does not fulfil the requirement of presenting the same form as Eq. (1); in fact, it does not describe a 2DOF system. It is possible to further transform Eq. (109) in order to achieve this purpose. First of all, let us introduce the constant G, in order to keep the notation to a minimum; Eq. (109) then yields: Introducing a new variable z c , the equivalent form in Eq. (113) is obtained. Equation (113) describes the equivalent 2DOF system with a ground-fixed wall contact on the lower mass shown in Fig. 16b. Particularly, this system is excited by a harmonic load (1 + m 2 /m 1 )k 1 r 2 1 y applied on the upper mass; the amplitude of this force depends on the frequency ratio and, therefore, for a given system, it will grow when the driving frequency is increased. It is convenient to rewrite Eq. (113) in a non-dimensional form by using the quantities described in Sect. 2.1 and introducing the corresponding non-dimensional coordinates; Eq.
(113) will assume the form: These equations can be seen as the governing equations of an equivalent non-dimensional system, shown in Fig. 16c. At this point, the procedure introduced in this paper for the analytical determination of the boundaries of the motion regime from the parameters of the system can successfully be applied also to the system considered in this section. The mass matrix of the system will have the same expression as shown in Eq. (19), while the stiffness matrix will be: It is possible to evaluate the natural frequencies 1,2 of the system from Eq. (21) and it can be verified that, despite the different expression of the matrix K, they will be the equal to the ones obtained in Eq. (22). The mode shapes can be determined from the general eigenvalue problem indicated in Eq. (18), which yields: The ratio between the components of each mode shapes can be determined from either Eq. (117a) or (117b). Considering, for instance, Eq. (117b), the ratio will be: The formulation ofφ 1 andφ 2 is different from the one found in Eq. (24), but the mode vectors and the modal matrix will maintain the same form described, respectively, in Eqs. (26) and (27) for ϕ 1 =φ 1 and ϕ 2 =φ 2 . After introducing the transformation in modal coordinates from Eq. (31), it is necessary to evaluate the modal force, considering that the equivalent load shown in Fig. 16c is applied to the upper mass and has a different amplitude compared to the case studied in Sect. 2. For this system, the applied force vector is: By applying Eq. (34), the correspondingp is found: Following the same steps done for evaluating V 1 (in Eq. (46)), the response function V d , referred to the rel-ative motion x d (i.e. the lower mass of the equivalent system), can be obtained as: Similarly, the damping function can be obtained from Eq. (54). As the equivalent system exhibits the same natural frequencies as the systems studied in Sect. 2 and a formally identical modal matrix, the damping function will have the same form as the function U 1 described in Eq. (55) for the case of a friction contact applied on the lower mass: In conclusion, the boundary curve between continuous and stick-slip regimes in the r 1 -β parameter space can be written as: where the expression of β 1,lim obtained in Eq. (56) is multiplied by G since the amplitude of the friction force acting in the equivalent system is β/G. The boundary curve described by Eq. (123) is shown in Fig. 17 for different values of mass and stiffness ratios and agrees well with the boundary highlighted by numerical results. Starting from low frequencies, continuous motion is possible only for very small friction ratios; a very sharp peak can be observed in correspondence of the first natural frequency of the system. After the peak, the boundary increases reaching a smoother second peak, whose value is always smaller than 1 in the observed cases. Condition for the presence of a no motion region In Fig. 17, it is shown clearly that it is not always possible to observe a relative motion between m 1 and m 2 as the parameters of system are varied. In the absence of such motion, the system will exhibit a stuck configura-tion, reducing to an undamped SDOF system of mass m 1 + m 2 and spring k 1 . The condition for which the relative motion is possible can be described analytically by applying the procedure introduced in Sects. 2.5 and 3.3 to the nondimensional system. Referring to Fig. 
16c, it is possible to observe that the only exciting force acting on the lower mass when it is fixed, excluding friction force, is the spring force due to the motion of the upper mass. Being G the stiffness of the upper spring and β/G the intensity of the friction force, the motion condition will be: which can be rewritten as: In order to determine Z c , it must be considered that, when the lower mass is stuck, the upper spring and the upper mass behave like a SDOF system excited by the equivalent force (1+γ )r 2 1 cos τ due to the base motion. Thus, the governing equation of this system will be: and the amplitude of the response, obtained by substitutingz c = Z c cos τ , can be written as: Substituting Eqs. (110) and (127) into Eq. (125), and after some algebraic manipulations, the motion condition can be finally expressed as: This condition describes the boundary between the stick-slip region (in orange) and the no motion region (in grey) in Fig. 17. This boundary starts from the origin of the parameter space and quasi-static motion can be observed only for very small values of the friction ratio. As it can be deduced also from Eq. (128), an infinite No motion peak is reached at: After the peak, the boundary decreases until reaching an asymptotic value given by: when r 1 → +∞. Finally, it is interesting to observe that the motion condition is independent of the stiffness ratio. The motion regime scenario described in this section is summarised in Table 3. Numerical approach In the previous sections, numerical results have been used for validating the analytical expressions of the bounds among motion regimes in MDOF systems with different configurations of Coulomb contacts. This section focuses on the description of the numerical methods used for this purpose. The numerical integration of the governing equations of the MDOF systems analysed in this paper can be performed using standard numerical solvers as long as the solution is continuous; however, particular care must be taken when sticking phases appear in the motion. In fact, in stick-slip regime, the transitions between sliding and sticking phases cause sudden variations in the solution, which cannot be easily dealt with by most numerical methods. Stiff solvers are usually implemented in order to address this particular numerical problem (see, e.g. [32]). Nevertheless, in reference [33], it was shown that better performances in terms of accuracy and computational cost can be achieved for stick-slip motion in SDOF systems if a standard nonstiff solver is used for the integration during the sliding stages and explicit conditions are set a priori to account for the transitions between the sliding and the sticking regimes. This approach has been extended in this paper to account for MDOF systems and is detailed below, for simplicity, in the 2DOF case. However, the same procedure can also be used to account for systems with a larger number of DOFs. In the presence of either ground-fixed or base-fixed contacts, only the mass in contact will be stuck on the wall. In fact, the remaining mass will keep oscillating and, therefore, also during the sticking phases, its motion will not be known a priori. The integration process can be summarised as follows. -During the sliding phases, both masses are oscillating continuously and the solution is nonstiff, so the governing equations are integrated by using a variable-step Runge-Kutta (4,5) method, implemented in the Matlab function ode45 [34]. 
-The integration is stopped when the relative velocity between mass and wall, i.e. the argument of the sgn function, is equal to zero. If also the second sticking condition, defined in Eqs. (11b) and (12b) for ground-fixed contacts and in Eqs. (80b) and (81b) for base-fixed contacts, is verified, a sticking phase will start; otherwise, a further sliding phase will follow. -During the sticking phases, the mass in contact will move jointly with the wall, so its displacement x̄ s and its velocity x̄' s are imposed. Specifically, if the stop occurs at time τ 0 and at the position x̄ s0 , the imposed values will be: x̄ s = x̄ s0 , x̄' s = 0 (131) for a stuck ground-fixed contact and: x̄ s = x̄ s0 − cos τ 0 + cos τ, x̄' s = −sin τ (132) for a stuck base-fixed contact. -At the same time, numerical integration is performed for the mass not in contact, whose motion represents the only unconstrained degree of freedom of the system at this stage. Therefore, the dynamic behaviour of the system will be described by a single equation. For instance, if the case of a ground-fixed contact on the lower mass is considered, the governing equation is obtained by posing x̄ 1 = x̄ s . Substituting Eq. (131) into Eq. (4b), it can be written as: All the other configurations analysed in this paper can be dealt with similarly. -The sticking phase will be stopped when the second sticking condition is no longer verified, i.e. when the resultant dynamic load overcomes the friction force. In mass-fixed contacts, the sticking occurs between the masses, so this case needs to be addressed differently. As specified in Sect. 4.3, when the sticking occurs, i.e. when the sticking conditions expressed in Eq. (104) are verified, the system will transition to a stuck configuration, behaving as an undamped SDOF system of mass m 1 + m 2 and stiffness k 1 . Indicating with x̄ d0 the relative displacement between the two masses when the stop occurs and referring to Eq. (109b), it is possible to write the only governing equation needed for describing the dynamic behaviour of the system in a stuck configuration as: (1 + γ)r 2 1 x̄'' c + x̄ c = cos τ + G x̄ d0 The position of the two masses during this stage will be determined, from Eq. (108), as: In this paper, each integration, for varying r 1 , β, γ and κ parameters, has been performed for 100 cycles of base motion, aiming to determine the motion regime in steady-state conditions. For almost all the configurations of these parameters, no change of regime could be observed with longer durations. A few exceptions are described in the literature, regarding transitions between continuous and stick-slip regimes after a considerable number of motion cycles (see, e.g. reference [6]), but they were not considered relevant within the purposes of this numerical analysis, as they are limited to a few particular sets of parameters. As already mentioned in Sect. 2.5, residual motion in the friction contacts was sometimes observed after 100 excitation cycles above the boundary between motion and no motion regimes; this resulted in the small spikes observable in most of the graphical representations of the parameter space r 1 − β presented in this paper. Nevertheless, the amplitude of such residual motions was found to be negligible in most cases. During the integration process, the absolute and relative tolerances were set, respectively, to 10 −6 and 10 −12 . Concluding remarks The analytical boundaries of motion regimes for three types of MDOF systems with a Coulomb friction contact have been investigated.
Specifically, the boundaries among regions of: (1) continuous motion; (2) stick-slip motion; (3) no motion have been investigated in a nondimensional parameter space in terms of the frequency ratio and the friction ratio. The boundaries were evaluated in closed form and validated numerically for 2DOF systems with a (i) ground-fixed, (ii) base-fixed and (iii) mass-fixed wall contact. A procedure for extending these results to systems with more than two DOFs was also proposed for the cases (i) and (ii), with a further numerical validation for the case of a 5DOF system with a ground-fixed wall contact. The boundary between continuous (non-sticking) and stick-slip regimes was obtained directly for 2DOF systems with a fixed-wall Coulomb contact on either the upper or the lower mass by considering the superposition of the modal behaviour and applying Den Hartog's approach [2] for determining the response of each mode. In contrast, the non-sticking conditions for 2DOF systems presenting the contact configurations (ii) and (iii) were obtained by reducing these systems to equivalent configurations with a ground-fixed wall contact. This was achieved by introducing appropriate variable transformations in the governing equations of such systems. An ad hoc procedure was introduced for the determination of the boundary between motion and no motion regions, based on the principle that sliding will occur in a friction joint only if the overall dynamic load applied on the components in contact has a larger amplitude than the friction force. An excellent agreement was observed when comparing the analytical and the numerical boundaries for 2DOF systems in all the cases analysed. The investigation of the parameter space highlighted how the shape and the extension of the regions associated with the three motion regimes change significantly when different mass and stiffness ratios, wall motions or masses in contact are considered. It was observed that, for particular configurations and parameters, the boundary curves can be very close or can overlap, generating regions where different motion regimes can occur if small variations are introduced in either the friction or the exciting forces. This dynamic behaviour is usually unsuitable for structural design. Finally, it was shown that the boundary between motion and no motion regions is independent of: (i) the mass ratio for a ground-fixed wall contact and (ii) the stiffness ratio when the contact occurs between the two masses. Overall, the presented results give information relevant to the design and the analysis of friction joints in engineering structures. Current work on this topic is focusing on the analysis of the dynamic response features of MDOF systems with a Coulomb contact. Moreover, future work will focus on (i) the determination of motion regimes for discrete systems with more than one friction contact and (ii) the extension of this approach to continuous multi-modal structures.
15,563.8
2021-03-01T00:00:00.000
[ "Engineering" ]
Supercomputer Modeling of Dual-Site Acetylcholinesterase (AChE) Inhibition Molecular docking is one of the most popular tools of molecular modeling. However, in certain cases, such as the development of cholinesterase inhibitors as therapeutic agents for Alzheimer's disease, many aspects must be taken into account to achieve accurate docking results. For simple molecular docking with popular software and standard protocols a personal computer is sufficient; however, the results are quite often unreliable. Due to the complex biochemistry and biophysics of cholinesterases, computational research should be supported with quantum mechanics (QM) and molecular dynamics (MD) calculations, which requires the use of supercomputers. Experimental studies of inhibition kinetics can discriminate between different types of inhibition (competitive, non-competitive or mixed type), which is quite helpful for assessment of the docking results. Here we consider inhibition of human acetylcholinesterase (AChE) by the conjugate of methylene blue (MB) and 2,8-dimethyl-tetrahydro-γ-carboline, study its interactions with AChE in relation to the experimental data, and use it as an example to elucidate crucial points for reliable docking studies of bulky AChE inhibitors. Molecular docking results were found to be extremely sensitive to the choice of the X-ray AChE structure used as the docking target and to the scheme selected for the distribution of partial atomic charges. It was demonstrated that flexible docking should be used with additional caution, because certain protein conformational changes might not correspond to available X-ray and MD data. Introduction Therapy of Alzheimer's disease (AD) involves inhibition of brain AChE to restore acetylcholine (ACh) levels [9]. In addition to hydrolyzing ACh, AChE promotes aggregation of β-amyloid peptide through its interaction with the AChE peripheral anionic site (PAS). Thus, dual-site inhibitors targeting both the active site and the PAS are expected to be disease-modifying agents [10]. To develop dual-site anti-AD drugs, we combined two known pharmacophores, methylene blue (MB) and carbolines, into single conjugates (MBC) [14], see Fig. 1, and demonstrated that they were effective inhibitors of AChE capable of displacing propidium from the AChE PAS [1]. Docking and other computational methods have been used in drug design for decades. However, biophysical constraints can hamper the predictive power of these approaches [2]. For example, AChE contains a gorge with a midpoint constriction ("bottleneck") that separates the PAS and active site regions [15]. Consequently, inhibition is determined not only by geometric and interaction energy factors, but also by binding dynamics [7]. In the present work, we analyzed the results of different molecular docking approaches for MBC into AChE and compared them to kinetic data, which demonstrate mixed-type inhibition (Fig. 2). Thus, the compound should bind competitively to the active site and noncompetitively to the PAS, and docking should provide poses of the ligand both above and below the bottleneck. Methods The carboline part of MBC contains a piperidine ring condensed with an aromatic system, which gives rise to several conformers and enantiomers. Using OpenEye OMEGA 2.5.1.4 (OpenEye Scientific Software, Santa Fe, NM; http://www.eyesopen.com) [5], four configurations of MBC were generated (Fig.
3). pKa values were calculated with the Schrödinger Jaguar QM DFT pKa module [16]. Geometries were optimized with the GAMESS-US software [11] (B3LYP/6-31G*). For docking, optimized ligand structures were used with Gasteiger partial atomic charges and with charges derived from the QM results according to the Mulliken and Löwdin schemes. Additionally, Schrödinger QM-Polarized Ligand Docking (QM-PLD) [4] was used with extra precision docking and redocking; charges were calculated using the accurate QM method of Jaguar. Five X-ray structures of human AChE (PDB IDs 4EY4-4EY8, [3]) were used for docking. Rigid docking was performed with AutoDock 4.2.6 [8] as described earlier [12]. For flexible docking, Schrödinger Glide Induced Fit [13] was used with AChE as a target. The docking volume included the entire gorge, and extra precision docking was applied. The QM-calculated pKa value for the piperidine nitrogen was 7.84. Thus, under experimental conditions mimicking physiological pH 7.4, both protonated and non-protonated forms could be present. For this reason, both states were used for the docking study and the analysis of results. Partial atomic charges are crucial for the docking results obtained with the algorithms used in our study, as the charge distribution scheme defines the estimated binding energies and geometries of the complexes [4]. With respect to MBC docking into AChE, the influence of partial atomic charges was even more pronounced. In the case of apo-AChE as a target, poses below the bottleneck were obtained only for structures with partial charges derived from QM calculations according to the Löwdin scheme (Fig. 5). The results obtained with the Gasteiger scheme, with charges derived from QM data according to the Mulliken scheme, and with Schrödinger QM-PLD docking showed poorer occupation of the active site compartment for the other targets (Fig. 5). Only in the case of the AChE structure co-crystallized with donepezil was MBC docked in full correspondence with the experimental data (below and above the bottleneck), regardless of the partial atomic charge scheme (Fig. 5). This is ensured by the Tyr337 side chain, which forms the bottleneck, being rotated so that it does not block the gorge. For AChE structures co-crystallized with huperzine A and fasciculin-2 (Fig. 6), the MBC ligand could be found only in the PAS, which corresponds to non-competitive inhibition and thus does not agree with the experimental data. The Schrödinger Glide Induced Fit protocol for molecular docking of MBC provided positions similar to those obtained by rigid docking with 4EY7 as a target. The major difference in the compound's position was a flipped MB fragment, achieved through an appreciable displacement of Phe297 and Tyr124, while conformational changes for other principal residues of the gorge were less significant (Fig. 7). Conformations of the Phe297 and Tyr124 side chains, namely the χ1 torsion angle in the induced fit docking complexes, could be compared with conformations found in X-ray structures of human and mouse AChE and along MD trajectories for apo-AChE and for AChE in complex with another bulky inhibitor [6]. We found that the side chain conformations from flexible docking differ from those found in X-ray data or during MD simulations, even with a bulky inhibitor in the gorge (Fig. 8). This suggests that results of the Induced Fit protocol of Glide should be compared with other available data and that certain torsion angles should be fixed for redocking.
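As an open-source illustration of one of the charge schemes discussed above, the sketch below assigns empirical Gasteiger-Marsili partial charges to a ligand with RDKit. It is not part of the authors' OpenEye/GAMESS/Schrödinger/AutoDock pipeline, and the SMILES string is a simple placeholder scaffold (indole), not the actual MBC conjugate; QM-derived Mulliken or Löwdin charges would instead be imported from the quantum-chemistry output.

```python
from rdkit import Chem
from rdkit.Chem import AllChem

smiles = "c1ccc2c(c1)cc[nH]2"               # placeholder aromatic scaffold (indole), not MBC
mol = Chem.AddHs(Chem.MolFromSmiles(smiles))
AllChem.EmbedMolecule(mol, randomSeed=42)   # generate a 3D conformer
AllChem.MMFFOptimizeMolecule(mol)           # quick force-field relaxation

AllChem.ComputeGasteigerCharges(mol)        # empirical Gasteiger-Marsili charge scheme
for atom in mol.GetAtoms():
    q = atom.GetDoubleProp("_GasteigerCharge")
    print(f"{atom.GetIdx():3d} {atom.GetSymbol():2s} {q:+.3f}")
```

Swapping this empirical scheme for QM-derived charges changes the per-atom values fed to the docking engine, which is exactly the sensitivity examined in the comparison above.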
Conclusions The results of the kinetic and docking studies demonstrate the importance of choosing the right target structures. For bulky ligands, the structure of AChE co-crystallized with donepezil (4EY7) gave the best agreement with experimental data. The use of different partial atomic charges also leads to markedly different docking results; the use of charges derived from QM calculations is advisable. Induced-fit docking should be used with caution; conformational changes of protein residues should be related to protein dynamics data (X-ray and MD) to avoid artifacts. Overall, to achieve reliable results, docking studies require the support of computationally demanding QM and MD calculations, as afforded by supercomputing facilities. Figure 2. Steady-state inhibition of AChE by this compound; Lineweaver-Burk double-reciprocal plots of initial velocity and substrate concentrations in the presence of the inhibitor, showing mixed-type inhibition. Figure 3. Overlaid configurations of the piperidine fragment of the γ-carboline ring of MBC. Figure 5. Molecular docking results of conjugate MBC into AChE, corresponding to experimental data. Carbon atoms of the target AChE amino acids are colored according to Fig. 4; catalytic residues are colored violet. In the left columns, cyan shows poses obtained with partial atomic charges derived from the Gasteiger scheme, red shows poses derived from QM calculations according to the Mulliken scheme, and green shows poses derived from QM calculations according to the Löwdin scheme. Results of Schrödinger QM-PLD for each X-ray AChE structure are shown separately in the right column; ligand poses are colored pink. Figure 7. Protein-inhibitor complex obtained as a result of the Induced Fit procedure (Schrödinger/Glide). The MBC ligand carbon atoms are green and the protein carbon atoms are cyan. The docked complex is overlaid with the apo-AChE X-ray structure (carbon atoms are magenta). Figure 8. Distribution of values of the χ1 torsion angle over MD trajectories for apo-AChE (black line, total length 350 ns) and for AChE with the bulky inhibitor C-547 (green line, 50 ns) [6]. Corresponding values for X-ray structures of human and mouse AChE available in the PDB are overlaid on the distribution plot as red points, and Induced Fit docking results are overlaid as blue stars.
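For comparisons like the one in Fig. 8, the χ1 torsion of a gorge residue can be measured directly from coordinate files. The sketch below is an illustration using Biopython rather than the tooling used in the study; the file name, chain identifier and residue number are assumptions chosen only for the example.

```python
import math
from Bio.PDB import PDBParser
from Bio.PDB.vectors import calc_dihedral

# Assumed local copy of an AChE structure; chain "A" and residue 337 (Tyr337)
# are illustrative choices and would need to match the actual file numbering.
structure = PDBParser(QUIET=True).get_structure("ache", "4EY7.pdb")
residue = structure[0]["A"][337]

# chi1 is the N-CA-CB-CG side-chain dihedral
atoms = [residue[name].get_vector() for name in ("N", "CA", "CB", "CG")]
chi1 = math.degrees(calc_dihedral(*atoms))
print(f"chi1({residue.get_resname()}{residue.id[1]}) = {chi1:.1f} deg")
```

Running the same measurement over MD frames, X-ray entries and docking poses yields the kind of χ1 distribution overlaid in Fig. 8.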
1,871.8
2018-12-01T00:00:00.000
[ "Computer Science", "Medicine", "Chemistry" ]
Delocalization of Two-Dimensional Random Surfaces with Hard-Core Constraints We study the fluctuations of random surfaces on a two-dimensional discrete torus. The random surfaces we consider are defined via a nearest-neighbor pair potential, which we require to be twice continuously differentiable on a (possibly infinite) interval and infinity outside of this interval. No convexity assumption is made and we include the case of the so-called hammock potential, when the random surface is uniformly chosen from the set of all surfaces satisfying a Lipschitz constraint. Our main result is that these surfaces delocalize, having fluctuations whose variance is at least of order log n, where n is the side length of the torus. We also show that the expected maximum of such surfaces is of order at least log n. The main tool in our analysis is an adaptation to the lattice setting of an algorithm of Richthammer, who developed a variant of a Mermin–Wagner-type argument applicable to hard-core constraints. We rely also on the reflection positivity of the random surface model. The result answers a question mentioned by Brascamp et al. on the hammock potential and a question of Velenik. Introduction In this paper we study the fluctuations of random surface models in two dimensions. We consider the following family of models. Denote by T 2 n the two-dimensional discrete torus in which the vertex set is {−n + 1, −n + 2, . . . , n − 1, n} 2 and (a, b) is adjacent to (c, d) if (a, b) and (c, d) are equal in one coordinate and differ by exactly one modulo 2n in the other coordinate. Let U be a potential, i.e, a measurable function U : R → (−∞, ∞] satisfying U (x) = U (−x). The random surface model with potential U , normalized at the vertex 0 := (0, 0), is the probability measure μ T 2 n ,0,U on functions ϕ : V (T 2 n ) → R defined by dμ T 2 n ,0,U (ϕ) := (1.1) where the vertices and edges of T 2 n are denoted by V (T 2 n ) and E(T 2 n ) respectively, dϕ v denotes Lebesgue measure on ϕ v , δ 0 is a Dirac delta measure at 0 and Z T 2 n ,0,U is a normalization constant. For this definition to make sense the potential U needs to satisfy additional requirements. It suffices, for instance (see Lemma 3.1 for additional details), that inf Suppose ϕ is sampled from the measure μ T 2 n ,0,U . The expectation of ϕ is zero at all vertices by symmetry. How large are the fluctuations of ϕ around zero? Let us focus on the variance of ϕ at the vertex (n, n). It is expected that this variance is of order log n under mild conditions on U . This has been shown when the potential U is twice continuously differentiable with U bounded away from zero and infinity, and certain extensions of this class, as discussed in the survey paper [31,Remarks 6 and 7]. Specifically, a lower bound of order log n has been established by Brascamp et al. [5] when U is twice continuously differentiable, exp(−αU (x))dx < +∞, ∀α > 0, lim |x|→∞ (|x| + |U (x)|) exp(−U (x)) = 0, and either of the following holds: (1) sup x U (x) < ∞ or (2) sup x |U (x)| < ∞ or (3) U is convex and U (x) 2 exp(−U (x))dx < ∞. The class of potentials covered by their result can be further extended by taking suitable limits, as indicated in [5]. In addition, using arguments of Ioffe et al. [15] it is possible to derive qualitatively correct lower bounds for the variance for a class of, possibly discontinuous, potentials satisfying U −Ũ ∞ < ε for a small enough ε > 0 and some twice continuously differentiableŨ satisfying sup xŨ (x) < ∞. 
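For readers who want to experiment with these measures numerically, the following sketch is a plain single-site Metropolis sampler for the measure (1.1) on the discrete torus, shown with the hammock potential, together with a crude Monte Carlo estimate of Var(ϕ_(n,n)). It is only an illustration of the model: it is not the coupling-from-the-past sampler used elsewhere in the paper, the mixing time is not controlled, and all run parameters are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8                                   # torus parameter; vertices form a (2n) x (2n) grid
L = 2 * n
phi = np.zeros((L, L))                  # surface values, with the origin pinned at phi[0, 0] = 0

def U(x):
    # hammock potential: 0 on [-1, 1], +infinity outside
    return 0.0 if abs(x) <= 1.0 else np.inf

def neighbours(i, j):
    return [((i + 1) % L, j), ((i - 1) % L, j), (i, (j + 1) % L), (i, (j - 1) % L)]

def sweep(delta=0.5):
    for _ in range(L * L):
        i, j = rng.integers(L), rng.integers(L)
        if (i, j) == (0, 0):            # the normalisation phi_0 = 0 is kept fixed
            continue
        new = phi[i, j] + rng.uniform(-delta, delta)
        dH = sum(U(new - phi[a, b]) - U(phi[i, j] - phi[a, b])
                 for a, b in neighbours(i, j))
        if rng.random() < np.exp(-dH):  # Metropolis acceptance (reject if a constraint breaks)
            phi[i, j] = new

samples, target = [], (n, n)            # a vertex far from the pinned origin
for t in range(5000):
    sweep()
    if t > 1000:                        # discard a burn-in period
        samples.append(phi[target])
print("estimated Var(phi_(n,n)) ~", np.var(samples))
```

With the hammock potential every accepted move keeps the surface 1-Lipschitz, so the chain simply performs a random walk inside the set of admissible configurations; the variance estimate is expected to grow roughly logarithmically in n, in line with the lower bound proved in this paper.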
The case of the hammock potential, when U (x) = 0 for |x| ≤ 1 and U (x) = ∞ for |x| > 1, is explicitly mentioned as open in [5] and [31,Open Problem 2]. In this paper we prove a lower bound of order log n on the variance for a wide class of potentials, which includes the hammock potential. A sample from the random surface measure with the hammock potential is depicted in Fig. 1, both in 2 and 3 dimensions. We say that U ∈ C 2 (I ) for an interval I ⊆ R if U is twice continuously differentiable on I . We consider the class of potentials U satisfying the following condition: Either U ∈ C 2 (R) or U ∈ C 2 ((−K , K )) for some 0 < K < ∞ and U (x) = ∞ when |x| > K . interval. Sampled using coupling from the past [28] This class includes the hammock potential as well as "double well" potentials, oscillating potentials with finite support (that is, infinity outside of a bounded interval) and all smooth examples. In the case that U ∈ C 2 ((−K , K )) we allow the possibility of a discontinuity at the endpoints −K and K . The following theorem is the main result of this paper. Besides proving a lower bound on the variance at the vertex (n, n) we also obtain estimates for other vertices, for small ball and large deviation probabilities and for the maximum of the random surface. (1.2) and (1.3). Let n ≥ 2 and let ϕ be randomly sampled from μ T 2 n ,0,U . There exist constants C(U ), c(U ) > 0, depending only on U , such that for any v ∈ V (T 2 n ) with v 1 ≥ (log n) 2 we have In addition, We remark that condition (1.2) is mainly required in this theorem for the probability measure (1.1) to make sense. One may replace it by other conditions of a similar nature. Additional remarks may be found following Theorem 4.1 below. Our results can be viewed in a broader context of Mermin-Wagner-type arguments. Such arguments show, roughly, that continuous translational symmetry cannot be broken in one-or two-dimensional systems. For lattice models with compact spin spaces this implies that spins are uniformly distributed in the infinite volume limit. For lattice models with non-compact spin spaces, such as the random surface models we consider, such arguments prove delocalization and consequently non-existence of infinite volume Gibbs measures. We present now a non-exhaustive list of papers studying these phenomena. Such arguments were pioneered by Mermin and Wagner [18], who worked in a quantum context and relied on the so called Bogoliubov inequalities. These techniques were later extended and transferred to a classical context-see e.g. Hohenberg [14] and Brascamp et al. [5]. New techniques were developed by Dobrushin and Shlosman [6,7], McBryan and Spencer [17] and Fröhlich and Pfister [9,26]. The methods in all of the above papers require the potential to satisfy certain smoothness assumptions. Ioffe et al. [15] and Gagnebin and Velenik [12] presented extensions to some classes of non-smooth potentials. These works left open the case of potentials taking infinite values and a solution to this problem came from Richthammer [29], who studied Gibbsian point processes in R 2 . Our approach follows closely his elaborate technique introduced for proving that all Gibbs states of such point process models are translation invariant, even in the presence of hard-core constraints, as in the hard sphere model. The main ingredient in Richthammer's approach is an algorithm designed for perturbing a given configuration in a prescribed manner while preserving the hard-core constraints. 
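As a toy illustration of such a constraint-preserving perturbation in the lattice setting, the sketch below raises each vertex of a Lipschitz configuration by an amount in [0, τ(v)] without ever violating |ψ_v − ψ_w| ≤ 1 on the edges. It is a deliberately simplified, non-invertible variant: the actual addition algorithm adapted in Sect. 2 uses a ψ-dependent ordering and a continuous bump construction so as to be a bijection with a controllable Jacobian and near-τ increments, none of which is reproduced here.

```python
import networkx as nx

def greedy_addition(G, psi, tau):
    """Return psi' with psi <= psi' <= psi + tau (pointwise) that is still
    1-Lipschitz on the edges of G, assuming psi itself is 1-Lipschitz."""
    new = dict(psi)
    for v in G.nodes():                     # any fixed order works for this simplified variant
        headroom = min(new[w] + 1.0 - psi[v] for w in G.neighbors(v))
        new[v] = psi[v] + min(tau[v], headroom)   # headroom >= 0 since psi is Lipschitz
    return new

# Tiny usage example: the zero surface on a 4x4 grid, constant tau = 5,
# with tau = 0 at the pinned vertex so it stays normalised.
G = nx.grid_2d_graph(4, 4)
psi = {v: 0.0 for v in G}
tau = {v: 5.0 for v in G}
tau[(0, 0)] = 0.0
lifted = greedy_addition(G, psi, tau)
assert all(abs(lifted[u] - lifted[v]) <= 1.0 + 1e-12 for u, v in G.edges())
```

Each vertex is capped by the current values of all its neighbours, so whichever endpoint of an edge is treated last is constrained against the other endpoint's final value; this is why the output stays Lipschitz, while invertibility and quantitative control of the increments require the more delicate construction of Sect. 2.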
Our proof adapts this algorithm from the continuum to the graph setting and from the point process to the random surface context. The resulting adaptation is presented in some detail in Sect. 2 and we hope that it will be useful in other contexts as well. 1.1. Overview of the proof. In order to illustrate our proof we first explain how to establish a lower bound on fluctuations in the simpler case that the potential U satisfies that U is twice continuously differentiable on R and sup x U (x) < ∞, (1.4) in addition to the condition (1.2). The methods of this section are similar to the one of [26]. We then provide details on the modification of this method, following the approach of Richthammer, which we use for potentials satisfying condition (1.3). Modification of the argument for potentials satisfying The measure μ T 2 n ,0,U is supported on Lipschitz configurations (satisfying ψ(0) = 0) under our assumption on U . The fundamental difficulty in applying the previous argument to this case is that it may happen that although ψ is a Lipschitz configuration, one of the configurations ψ + or ψ − defined by (1.8) may fail to be, in which case the inequality (1.9) will not be satisfied. The solution we use for this problem is to replace the configurations ψ + and ψ − in the previous argument by T + (ψ) and are certain mappings, termed addition algorithms in our paper, which share many of the properties of the operations of adding and subtracting τ while preserving the class of Lipschitz configurations. The definitions and properties of T + and T − are adapted from the work of Richthammer [29], who showed that all Gibbs states of point process models in R 2 with hard-core constraints, such as the hard sphere model, are translation invariant. Our adaptation translates Richthammer's notions from the continuum to the graph setting and from the point process to the random surface context. The main properties of T + and T − are detailed in Sect. 2.1. We highlight the possibility of defining these mappings for general graphs and general addition functions τ , as we believe these extensions to be useful in other contexts and as they are captured with the same definitions and proofs. The mappings T + and T − are defined to satisfy T − (ψ) := 2ψ − T + (ψ), just as in the definitions of ψ + and ψ − in (1.8). It thus suffices to define T + (ψ). Let us remark briefly on this definition for a Lipschitz configuration ψ. Roughly speaking, a certain ψ-dependent ordering on the vertices of the graph is chosen. Then, for each vertex v in this order, an amount between 0 and τ (v) is added to ψ v in such a way that the Lipschitz property is maintained with respect to the previously treated vertices in the chosen order. The amount added at vertex v is chosen to vary continuously with the value ψ v , in such a way that the resulting operation is invertible. Two difficulties arise when replacing ψ + and ψ − by T + (ψ) and T − (ψ) in the argument of Sect. 1.1.1. First, the change of variables used in inequality (1.11) relied on the fact that the mappings ψ → ψ + τ and ψ → ψ − τ preserve Lebesgue measure. When making a change of variables from T + (ψ) and T − (ψ) to ψ, a Jacobian factor enters, which needs to be estimated. Second, the argument uses the fact that ψ + (n,n) and ψ − (n,n) differ significantly from ψ (n,n) , by the amount log(2n + 1). Thus we also need to show that the difference of T + (ψ) (n,n) and T − (ψ) (n,n) from ψ (n,n) is close to log(2n + 1), at least for most configurations ψ. 
It turns out that both these difficulties may be overcome if we can control the following percolation-like process. We say an edge e = (v, w) ∈ E(T 2 n ) has extremal slope for the configuration ψ if |ψ v − ψ w | ≥ 1 − ε, for some small ε > 0 fixed in advance. Sampling ϕ randomly from the measure μ T 2 n ,0,U , we denote by E(ϕ) the random subgraph of T 2 n consisting of all edges with extremal slope for ϕ. Both difficulties described above may be overcome by showing that with high probability, the subgraph E(ϕ) is "subcritical" in the sense that its connected components are small. Proving this turns out to be a non-trivial task, which requires us to make use of reflection positivity techniques, specifically, the chessboard estimate. We remark that here (and only here) we rely essentially on the fact that T 2 n is a torus (i.e., has periodic boundary) and that the measure μ T 2 n ,0,U is normalized at the single vertex 0. Analogous estimates were also required in Richthammer's work [29] but were provided by the underlying Poisson process structure of the problem considered there, via so-called Ruelle bounds. Reader's guide. In Sect. 2 we describe the mappings T + and T − mentioned in the previous section. The section begins by listing the main properties of T + and T − , continues with a precise definition of T + and proceeds to prove that the required properties of T + indeed hold with this definition. In Sect. 3 we discuss reflection positivity for random surface models and prove, via the chessboard estimate, that the subgraph of edges with extremal slopes mentioned in the previous section is "subcritical" with high probability. Sections 2 and 3 address disjoint aspects of the problem and may be read independently. In Sect. 4 we prove our main theorem, Theorem 1.1, under alternative assumptions, by modifying the argument presented in Sect. 1.1.1 to make use of the mappings T + and T − and extending it to provide information also on small ball and large deviation probabilities and on the maximum of the random surface. In the short Sect. 5 we use the results of Sect. 3 to reduce Theorem 1.1 to the case discussed in Sect. 4. Section 6 contains a discussion of future research directions and open questions. The Addition Algorithm and its Properties In this section we define the addition algorithm T + which forms a core part of our proof. The algorithm is an adaptation to the graph setting of an algorithm of Richthammer [29] used in a continuum setting. Our presentation adapts the proofs in [29] but emphasizes the applicability of the algorithm to general graphs and general addition functions τ . Properties of the addition algorithm. Here we describe the properties of the addition algorithm which will be used by our application. The algorithm itself is defined in the next section and the fact that it satisfies the stated properties is verified in the subsequent sections. Let G = (V, E) be a finite, connected graph. We sometimes write v ∼ w to denote that (v, w) ∈ E. Let τ : V → [0, ∞) and 0 < ε ≤ 1 2 be given. We define a pair of measurable mappings T + , T − : R V → R V related by the equality and satisfying the following properties: (1) T + and T − are one-to-one and onto. (2) For every ϕ ∈ R V and every v ∈ V , The properties stated so far do not exclude the possibility that T + is the identity mapping [implying the same for T − by (2.1)]. The next property shows that T + (ϕ) − ϕ is close to τ under certain restrictions on the set of edges on which ϕ changes by more than 1 − ε. 
We require a few definitions. Let d G stand for the graph distance in G. The next two definitions concern the Lipschitz properties of τ . In the following definitions we consider the connectivity properties of the subset of edges on which ϕ changes by more than 1 − ε. and write, for a pair of vertices where we mean in particular v Together with property (2) above this shows that T + (ϕ) − ϕ and ϕ − T − (ϕ) are approximately equal to τ when M(ϕ) ≤ L(τ, ε). A slightly stronger property is given in Proposition 2.7 below. Our final property regards the change of measure induced by the mappings T + and T − . We bound the Jacobians of these mappings when the subgraph E(ϕ) does not contain many large connected components. Partition the vertex set V into V 0 and V 1 by letting Given a function θ : for the measure on R V given by product Lebesgue measure on the subspace where (5) There are measurable functions J + : R V → [0, ∞) and J − : R V → [0, ∞) satisfying that for every θ : V 0 → R and every g : (2.10) Moreover, if ϕ satisfies M(ϕ) ≤ L(τ, ε) then Description of the addition algorithm. In this section we define the mapping T + whose properties were discussed in the previous section. Let the graph G = (V, E), function τ and constant ε be as above. Fix an arbitrary total order on the vertex set V . Define a Lipschitz "bump" function f : R → R by x ∈ [1, ∞). (2.11) We also define a family of shifted and rescaled versions of f . For a vertex v ∈ V and h, t ∈ R let if τ (v) < t . (2.12) One should have in mind the case τ (v) ≥ t and think of m v,h,t as being the same as f , scaled and shifted to have maximum τ (v), minimum t and to have its "center" at h. However, if the function just described has Lipschitz constant more than 1/2, we lower its maximum so that its Lipschitz constant becomes 1/2. For easy reference we record this as the function m v,h,t has Lipschitz constant at most 1 2 . (2.13) The case τ (v) < t is not used in the definition of T + below. It is included here as it is technically convenient in the analysis to have m v,h,t defined for all values of the parameters. The definition of T + is based on the following algorithm. The algorithm takes as input a function ϕ ∈ R V . It outputs three sequences indexed by 1 ≤ k ≤ |V |: (1) A sequence (P k ) which is a ordering of the vertices V , that is, (2) A sequence (s k ) ⊆ [0, ∞) with s k representing the amount to add to ϕ at vertex P k . (3) A sequence (τ k ) of functions, τ k : V × R → R, which will play a role in analyzing the Jacobian of the mapping T + . The mapping T + is then defined by (2.14) Addition algorithm Loop. For k between 1 and |V | do: (1) Set P k to be the vertex v in V \{P 1 , . . . , P k−1 } which minimizes τ k (v, ϕ v ). If there are multiple vertices achieving the same minimum let P k be the smallest one with respect to the total order . (2) Set s k := τ k (P k , ϕ P k ). In the next sections we verify that the mapping T + defined by (2.14) satisfies the properties declared in Sect. 2.1. An illustration of the action of the addition algorithm is provided in Table 1. Increments and Lipschitz property. In this section we verify properties (2) and (3) from Sect. 2.1 for T + . Property (2) is an immediate consequence of the definition (2.14) of T + combined with (2.17) below. (2.18) We shall prove by induction that Assume that for some 1 ≤ k ≤ |V | we have Recall that the function τ is non-negative. 
It follows from (2.19), (2.21) and the initialization and step (3) of the addition algorithm that In particular, In addition, it follows from (2.19) that Thus (2.15) implies that As k is arbitrary, this establishes (2.18). In the next lemma we investigate the gradient of T + (ϕ), establishing property (3) from Sect. 2.1 for T + . Lemma 2.2. For any ϕ ∈ R V and any edge Assume without loss of generality that v = P k and w = P for some 1 ≤ k < ≤ |V |. Observe that, by step (3) of the addition algorithm, Now assume that |ϕ v − ϕ w | ≥ 1. Then, by the definition (2.12) of m, we have that Combining the last two inequalities with (2.18) shows that s = s k . The equality (2.24) now follows from (2.14). Assume now that |ϕ v − ϕ w | < 1. On the one hand, by (2.18), On the other hand, by (2.26) and the definition (2.12) of m, Therefore, by (2.13) and our assumption that |ϕ v − ϕ w | < 1, Bijectivity. In this section we define an inverse (T + ) −1 to the mapping T + , thereby establishing that T + is one-to-one and onto as claimed in property (1) from Sect. 2.1. The definition of (T + ) −1 uses the same graph G = (V, E), function τ , constant ε, total order on V and family of functions m v,h,t as the definition of T + . It is based on the following algorithm which takes as input a functionφ ∈ R V and outputs four sequences indexed by 1 ≤ k ≤ |V |: Inverse addition algorithm Loop. For k between 1 and |V | do: If there are multiple vertices achieving the same minimum letP k be the smallest one with respect to the total order . (2.28) is well-defined on R, is also continuous and strictly increasing and we havẽ Proof. Fixφ ∈ R V and v ∈ V . We prove the lemma by induction. Let 1 ≤ ≤ |V |, suppose the algorithm is well-defined and the lemma holds for all 1 ≤ k < and let us prove the assertions of the lemma for k = . Observe thatτ (v, ·) is obtained by taking the minimum of τ (v) and the function m v,h,t (·) with various values of h and t. Thus, since m v,h,t (·) has Lipschitz constant at most 1 2 by (2.13), it follows thatτ (v, ·) has Lipschitz constant at most 1 2 . Thus h → h +τ (v, h) is continuous and strictly increasing from R onto R. The remaining assertions of the lemma are immediate consequences. We claim that (T + ) −1 is indeed the inverse of T + , that is, that These assertions are proved in the next two sections. Injectivity. In this section we prove (2.29), showing that T + is one-to-one. be the sequences generated when calculating T + (ϕ) and when calculating (T + ) −1 (φ) with ϕ := T + (ϕ). By (2.14) and (2.27) it suffices to show that We prove this claim by induction. We haveτ 1 = τ 1 by the initialization steps of the algorithms. Fix 1 ≤ k ≤ |V | and assume that (2.31) We need to show that These sequences need not be equal. However, they satisfy certain relations as the following lemma clarifies. Comparing the definitions of P k , s k and τ k+1 with those ofP k ,s k andτ k+1 and using (2.31) and (2.14) we deduce from the lemma that (2.32) holds, completing the inductive proof. Surjectivity. In this section we prove (2.30), showing that T + is onto. The proof is similar to the proof that T + is one-to-one as given in the previous section. The proof requires the following lemma, an analog of Lemma 2.1 for T + . Proof. The proof of (2.37) and (2.38) follows in exactly the same way as the proof of It remains to prove (2.39). We start by showing that To verify this, observe that by Lemma 2. 
Our choice of the pointP k in step (2) of the inverse addition algorithm ensures that We conclude by (2.40) thats The definition (2.12) of m implies that Putting together (2.41) and (2.42) and recalling (2.28) yields whence, by (2.40) again, As k is arbitrary, this establishes (2.39). To show that T + is onto it suffices, by (2.14) and (2.27), to show that We prove this claim by induction. We have τ 1 =τ 1 by the initialization steps of the algorithms. Fix 1 ≤ k ≤ |V | and assume that P j =P j , s j =s j for 1 ≤ j < k and τ j =τ j for 1 ≤ j ≤ k. (2.43) We need only show that As in the previous section, these sequences satisfy certain relations as the following lemma clarifies. Comparing the definitions of P k , s k and τ k+1 with those ofP k ,s k andτ k+1 and using (2.43) and (2.27) we deduce from the lemma that (2.44) holds, completing the inductive proof. Proof of Lemma 2.6. Let us first show that P k =˜ P k . By (2.27) and Lemma 2.3, . Thus, using (2.43), Hence we may write, using Lemma 2.3, Consequently, by (2.43), (2.37) and (2.39), The definition ofD k now yieldsD from which we conclude that completing the proof. 2.5. The shifts produced by the algorithm. Our goal in this section is to analyze the shifts produced by the addition algorithm of Sect. 2.2 and to give conditions under Recall from Sect. 2.1 that E(ϕ) is the subgraph of edges on which ϕ changes by at is the diameter of the largest connected component of E(ϕ). Recall also the definitions of τ (v, k) and L(τ, ε). Depending on the choice of τ and ε the value of L(τ, ε) may be negative, though our theorems will be meaningful only when this is not the case. The following is the main proposition of this section. The definitions of M(ϕ) and L(τ, ε) imply the following corollary. Proof of Proposition 2.7. Fix ϕ ∈ R V and let (P k ), (s k ) and (τ k ) be the outputs of the addition algorithm of Sect. 2.2 when running on the input ϕ. For v ∈ V , let k v stand for that integer for which v = P k v and let The lower bound in Proposition 2.7 is a consequence of the following fact: For any v ∈ V , This, together with |ϕ v − ϕ u | < 1 − and (2.12), imply that in step (3) of the addition algorithm, when finishing the proof of (2.47). 2.6. Jacobian definition. In this section we find a formula for the Jacobian of the mapping T + . We start with some smoothness properties of the functions used in defining T + . We write (P k ), (s k ) and (τ k ) for the outputs of the addition algorithm of Sect. 2.2 when running on the input ϕ. where the notation ∂ 2 τ k (P k , ϕ P k ) stands for the right derivative of τ k with respect to its second variable (which exists by Lemma 2.9), evaluated at (P k , ϕ P k ). Lemma 2.9 ensures also that the factors in the product are positive. Recall the definition of the partition V 0 , V 1 of V and the measure dμ θ from (2.8) and (2.9). Lemma 2.10. For any θ : V 0 → R and any function g : (2.49) We remark that T + is clearly Borel measurable by its definition in Sect. 2.2 and hence the integrand on the left-hand side of (2.49) is measurable. The rest of the section is devoted to proving this lemma. We need the following basic facts about Lipschitz continuous maps. Let d ≥ 1 be an integer. First, by Rademacher's theorem a Lipschitz continuous map T : R d → R d is almost everywhere differentiable. Second, the following change of variables formula holds for any integrable h : where we have written dϕ for the Lebesgue measure on R d . 
Here, as remarked in [8], Referring back to the definition of the addition algorithm in Sect. 2.2 we see that each A σ is measurable, possibly empty, and R V = ∪ σ ∈ A σ . For each σ ∈ we define a version of the addition algorithm in which the points are taken in the order σ . More precisely, we define an algorithm taking as input a function ϕ ∈ R V and outputting two sequences indexed by 1 ≤ k ≤ |V |: Addition algorithm with order σ Initialization. Set τ σ 1 (v, h) := τ (v) for all v ∈ V and h ∈ R. Loop. For k between 1 and |V | do: (1) Set s σ k := τ σ k (σ (k), ϕ σ (k) ). We then define a mapping T σ : Comparing the definitions of T + and T σ we conclude that Observe that T + maps X bijectively onto X by properties (1) and (2) (see Sect. 2.1) and the definition of V 0 . The measure dμ θ is supported on X ; identifying X with R V 1 in the natural way it coincides with the Lebesgue measure on X . By (2.12), the function m v,h,t (h ) is Lipschitz continuous as a function of h, t and h , for every fixed v. In addition, the composition and pointwise minimum of Lipschitz continuous functions is also Lipschitz continuous. It follows that for every v and k, the function τ σ k (v, h) is Lipschitz continuous as a function of h and ϕ (i.e., as an implicit function of ϕ w for every w ∈ V ). We thus deduce from the definition of s σ k and (2.53) that T σ is a Lipschitz continuous map. We also note that T σ maps X into X since as follows by induction on k using the fact that m v,h,t ≥ t by (2.12). Thus we may apply the formula (2.50) (by identifying X with R V 1 and dμ θ with the Lebesgue measure on for every σ ∈ and h : X → R integrable with respect to dμ θ . Here and below, we denote by ∇ W T σ , W ⊆ V , the matrix-valued function We continue to find a formula for |det(∇ V 1 T σ (ϕ))|. We note first that ∇ V 1 T σ (ϕ) exists for dμ θ -almost every ϕ ∈ X as, by the above discussion, T σ is Lipschitz continuous from X to X . By construction of T σ , ∇ V T σ has a triangular form when its rows and columns are sorted in the order of σ . Hence the definition of s σ k , Eqs. (2.53) and (2.55) yield that for dμ θ -almost every ϕ ∈ X we have (2.57) Now let h : R V → R be a function integrable with respect to dμ θ and define Putting together (2.48), the fact that J + ≥ 0, (2.51), (2.54), (2.57) and (2.56) we have Finally, T + is invertible by Sect. 2.4 and T + = T σ on A σ by (2.54). Hence T σ restricted to A σ is one-to-one. Thus, since h σ (ϕ) = 0 when ϕ / ∈ A σ , we may continue the last equality to obtain This equality is obtained for any h : R V → R integrable with respect to dμ θ . Letting g : R V → R be integrable with respect to dμ θ , Lemma 2.10 now follows by substituting h with g(T + (ϕ)). Formally, this is done by using the above equality to approximate g(T + (ϕ)) with h which are integrable with respect to dμ θ . In this section we establish that T − satisfies similar properties to those proved for T + , as claimed in Sect. 2.1. In this section, to emphasize the dependence on ϕ, we write (P ϕ k ), (s ϕ k ) and (τ ϕ k ) for the outputs of the addition algorithm of Sect. 2.2 when running on the input ϕ. Putting together (2.14) and (2.58) we see that (2.59) We claim that, due to the symmetry of the function f of (2.11), To see this observe first that the symmetry of f and (2.12) imply Thus, examining the addition algorithm of Sect. 2.2 we conclude that (2.61) Together with (2.14), this equality implies (2.60). Now, the fact that T − satisfies properties (1)-(4) in Sect. 
2.1 follows immediately from (2.58), (2.60) and the fact that T + satisfies these properties. We now show that T − also satisfies (2.10). Define J − : We remark that the symmetry of the function f of (2.11), while essential for establishing (2.60), is not necessary for establishing the properties of T − described in Sect. 2.2. These properties may also be obtained without using (2.60) by repeating the proofs used for T + . 2.8. The geometric average of the Jacobians. In this section we provide an estimate for the geometric average of the Jacobians J + and J − in terms of the connectivity properties of the subgraph E(ϕ) and the Lipschitz properties of the function τ . This estimate establishes property (5) from Sect. 2.1. Lemma 2.11. For any ϕ ∈ R V satisfying M(ϕ) ≤ L(τ, ε) we have Proof. Fix ϕ ∈ R V satisfying M(ϕ) ≤ L(τ, ε). Write (P k ), (s k ) and (τ k ) for the outputs of the addition algorithm of Sect. 2.2 when running on the input ϕ. Denote (2.63) where we have used that |∂ 2 τ k (P k , ϕ P k )| ≤ 1/2 for all k according to Lemma 2.9. Examination of the addition algorithm of Sect. 2.2 reveals that τ k (v, h) is the minimum of τ (v) and m v,ϕ w ,σ w (h) where w ranges over a (possibly empty) subset of the neighbors of v. Observing that the Lipschitz constant of m v,h,t is at most max 1 ε (τ (v) − t), 0 by (2.12), we see that Now, using our assumption that M(ϕ) ≤ L(τ, ε), Proposition 2.7 yields that (2.65) Plugging (2.65) into (2.64) shows that The lemma follows by substituting this estimate in (2.63). Reflection Positivity for Random Surfaces Recall the random surface measure μ T 2 n ,0,U , defined in (1.1), corresponding to a potential U . In this section we estimate the probability that the random surface has many edges with large slopes. We start by explaining why the measure μ T 2 n ,0,U is well-defined under our assumptions. Proof. Let U be a potential satisfying condition (1.2). In order that μ T 2 n ,0,U be welldefined it suffices that satisfies 0 < Z T 2 n ,0,U < ∞. We first show that Z T 2 n ,0,U < ∞. Let S be a spanning tree of T 2 n , regarded here as a subset of edges. Then 2). By integrating the vertices in V (T 2 n )\{0} leaf by leaf according to the spanning tree S the integral above equals exp(−U (x))dx |S| , which is finite by (1.2). We now prove (3.1), implying in particular that Z T 2 n ,0,U > 0. Condition (1.2) implies the existence of some α < ∞ for which the set A := {x : U (x) ≤ α} has positive measure. The Lebesgue density theorem now yields the existence of a point a ∈ A and an ε > 0 such that where we write |B| for the Lebesgue measure of a set B ⊆ R. This implies that and, using that Denote by (V even , V odd ) a bipartition of the vertices of the bipartite graph T 2 n , with 0 ∈ V even , and define the following set of configurations, We conclude from the definition of A, (3.3) and (3.4) that the integral in (3.2), restricted to the set , is at least (0.4ε exp(−α)) |V (T 2 n )\{0}| > 0. This can be seen by again fixing a spanning tree of T 2 n and integrating the vertices in V (T 2 n )\{0} leaf by leaf according to it. As a side note we remark that the fact that T 2 n is bipartite was essential for showing that Z T 2 n ,0,U > 0. If T 2 n is replaced by a triangle graph on 3 vertices then the analogous quantity to Z T 2 n ,0,U is zero when, say, {x : U (x) < ∞} = [−3, −2] ∪ [2,3]. However, the above argument can be easily modified to work for all graphs if {x : U (x) < ∞} contains an interval around 0. 
For 0 < L < ∞ and 0 < δ < 1 we say a potential U has (δ, L)-controlled gradients on T 2 n if the following holds: (1) There exists some K > L such that U (x) < ∞ for |x| < K . Either U (x) < ∞ for all x or there exists some 0 < K < ∞ such that U (x) < ∞ when |x| < K and U (x) = ∞ when |x| > K . (3.6) Then for any 0 < δ < 1 there exists an 0 < L < ∞ such that for all n ≥ 1, U has (δ, L)-controlled gradients on T 2 n . This theorem is proved in the following sections, making use of reflection positivity and the chessboard estimate. Reflection positivity. We start by reviewing the basic definitions pertaining to our use of reflection positivity and the chessboard estimate. Our treatment is based on [3, Section 5]. Let n ≥ 1. For −n + 1 ≤ j ≤ n the vertical plane of reflection P ver j (passing through vertices) is the set of vertices The plane P ver j divides T 2 n into two overlapping parts, P ver,+ j and P ver,− j , according to where here and below, arithmetic operations on vertices of T 2 n are performed modulo 2n (in the set {−n + 1, −n + 2, . . . , n − 1, n}). The parts P ver,+ j and P ver,− j overlap in The reflection θ P ver j is the mapping θ P ver j : which exchanges P ver,+ j and P ver,− j . We also define horizontal planes of reflection P hor j and their associated P hor,+ j , P hor,− j ,P hor j and θ P hor j in the same manner by switching the role of the two coordinates of vertices in T 2 n . We write simply P, P + , P − ,P and θ P when the plane of reflection P is one of the planes P ver j or P hor j which is left unspecified. Denote by F the set of all measurable functions f : Equivalently, F is the set of all measurable functions depending only on the gradient of ϕ. For a plane of reflection P we write F + P (respectively F − P ) for the set of f ∈ F for which f (ϕ) depends only on ϕ v , v ∈ P + (respectively v ∈ P − ). We extend the definition of θ P to act on R V (T 2 n ) and F by When ϕ is randomly sampled from a probability measure on R V (T 2 n ) we will regard a function f ∈ F as a random variable [taking the value f (ϕ)] and write E f for its expectation. In particular, the right-hand side is non-negative. For completeness, we provide a short proof of the chessboard estimate in Sect. 3.3 below. We remark that the same proof shows that if P is reflection positive with respect to all measurable functions on R V (T 2 n ) then it also satisfies the chessboard estimate with respect to this class. We restrict here to the class F in view of our application to random surface measures, see Proposition 3.5 below. Controlled gradients property. In this section we prove Theorem 3.2. We start by proving that our random surface measures are reflection positive. Proof. Suppose ϕ is randomly sampled from μ T 2 n ,0,U . Fix a plane of reflection P, a vertex v 0 ∈ P and supposeφ is randomly sampled from μ T 2 n ,v 0 ,U [the measure μ T 2 n ,v 0 ,U is obtained by replacing 0 with v 0 in (1.1)]. We write E μ T 2 n ,0,U and E μ T 2 n ,v 0 ,U for the expectation operators corresponding to ϕ andφ, respectively. Observe that since the induced measure on the gradient of ϕ is translation invariant. In addition, by symmetry,φ d = θ Pφ . (3.14) For two bounded f, g ∈ F + P the relation (3.8) now follows from (3.13) and (3.14) by To see the relation (3.9) observe that, by the domain Markov property and symmetry, conditioned on (φ v ) v∈P the configurations (φ v ) v∈P + and ((θ Pφ ) v ) v∈P + are independent and identically distributed. Thus, for any f ∈ F + P we have We now prove Theorem 3.2. 
Fix 0 < δ < 1, n ≥ 1 and suppose ϕ is randomly sampled from μ T 2 n ,0,U . Let K be the constant from We need to show that there exists some 0 < L < K , independent of n, such that f e i ,L ≤ δ k for all k ≥ 1 and distinct e 1 , . . . , e k ∈ E(T 2 n ). Fix some k ≥ 1 and distinct e 1 , . . . , e k ∈ E(T 2 n ). Define four block functions at (0, 0) by The definition (3.11) of the reflection operators (ϑ t ) implies that there exist k 1 , k 2 , k 3 , k 4 ≥ 0 with k 1 + k 2 + k 3 + k 4 = k and, for each 1 ≤ j ≤ 4, distinct Assume, without loss of generality, that k 1 ≥ k/4 (as the cases that k j ≥ k/4 for some 2 ≤ j ≤ 4 follow analogously). Then, by the chessboard estimate, Theorem 3.4, and thus it suffices to show that there exists some 0 < L < K , independent of n, such that We note that Thus, recalling (1.1), we have We estimate the numerator and denominator in the last fraction separately. First, we have already shown a lower bound on Z T 2 n ,0,U in (3.1). Second, denote by H the subset of edges (( j, k), ( j + 1, k)) ∈ E(T 2 n ) for which k is even. Let S be a spanning tree of T 2 n , regarded here as a subset of edges, satisfying by (1.2). The integral above can be estimated by integrating the vertices in V (T 2 n )\{0} leaf by leaf according to the spanning tree S. Recalling the definition of E L , two cases arise depending on whether or not the edge connecting a leaf to the remaining tree belongs to H . Thus we obtain Condition (1.2) ensures that C 2 (U ) < ∞ and the definition of K gives that lim L↑K C 3 (U, L) = 0. Thus, using (3.18), for every ε > 0 there exists an 0 < L < K , independent of n, for which This inequality, together with (3.16), (3.17) and (3.1), implies that we may choose an 0 < L < K , independent of n, so that (3.15) holds, as we wanted to show. Proof of the chessboard estimate. In this section we prove Theorem 3.4. Let ϕ be randomly sampled from the given measure P. Reflection positivity of P with respect to F implies that for each plane of reflection P, the bilinear form E(gθ P h) is a degenerate inner product on bounded g, h ∈ F P + . In particular, we have the Cauchy-Schwarz inequality, For a function f ∈ F of the form and a plane of reflection P, define two functions, the "parts of f in P − and P + ", by Define also the function ρ P f ∈ F by ρ P f := f P + θ P f P + and note that E(ρ P f ) ≥ 0 by (3.9). Observe that Thus, using the Cauchy-Schwarz inequality (3.19) with g = f P + and h = θ P f P − we have Our first goal is to show that starting with a function of the form (3.20), one may iteratively apply the operator ρ P with different planes of reflection P to reach a function of the form (3.20) with all the block functions identical. Proof. Let s = ( j, k) ∈ V (T 2 n ). Define the vertical planes of reflection (Q i ), 0 ≤ i ≤ log 2 (n) , by Q i := P ver j i for j i := j + 1 − 2 i modulo 2n. One may verify directly that ((a, b)) = ( j, b) for all −n + 1 ≤ a ≤ n. In the same manner, one may now take the horizontal planes of reflection (R i ), 0 ≤ i ≤ log 2 (n) , defined by R i := P hor k i for k i := k + 1 − 2 i modulo 2n, and conclude that For a bounded block function f 0 at (0, 0) define which is well-defined and non-negative by (3.9). Let f have the form (3.20). With the above notation, the chessboard estimate (3.12) becomes the inequality where we note that in Theorem 3.4 we may assume that m = |V (T 2 n )| by taking some of the block functions to be constant. Consider first the case that f s = 0 for some s ∈ V (T 2 n ). (3.23) Let P 1 , . . . 
, P m be the planes of reflection corresponding to s as given by Proposition 3.6. By iteratively applying the Cauchy-Schwarz inequality (3.21) with the planes (P i ) we may obtain that |E( f )| is bounded by a product in which f s , raised to some positive power, is one of the factors. Thus we conclude from (3.23) that E( f ) = 0, establishing (3.22) in this case. Second, assume that (3.23) does not hold. Define Let h ∈ F be an (arbitrary) function maximizing |E(h)| among all functions of the form ϑ t h t with each h t being one of the (g s ). (3.24) Observe that, by the Cauchy-Schwarz inequality (3.21) and the definition of h, we have |E(h)| ≤ E(ρ P h)E(ρP \P h) ≤ E(ρ P h)|E(h)| for any plane of reflection P. Thus, |E(h)| ≤ E(ρ P h) for any plane of reflection P. (3.25) In particular, E(ρ P h) also maximizes |E(h)| among functions of the form (3.24) (so that equality holds in the last inequality). Let P 1 , . . . , P m be the planes ofreflection corresponding to s = 0 as given by Proposition 3.6. By iteratively applying (3.25) with these planes we obtain that since g s = 1 for all s and h has the form (3.24). Finally, the definition of h now shows that implying (3.22) and finishing the proof of Theorem 3.4. Lower Bound for Random Surface Fluctuations in Two Dimensions Recall the definition of the controlled gradients property from Sect. 3. Throughout the section we fix n ≥ 2 and a potential U with the following properties: • There exists an 0 < ε ≤ 1/2 for which U has (1/8, 1 − ε)-controlled gradients on We fix ε to the value given by the first property. Write 0 := (0, 0). For the rest of the section we suppose that ϕ is a random function sampled from the probability distribution The theorem establishes lower bounds for the variance and large deviation probabilities of ϕ v as well as upper bounds on the probability that ϕ v is atypically small. The lower bound on the variance is expected to be sharp up to the value of c(U ). The theorem is not optimal in several ways. One expects the results to hold for all v ∈ V (T 2 n ) without the restriction on v 1 , one expects that the exponent 2/3 may be replaced by 1 and that the restrictions on r and t may be relaxed. We believe that further elaboration of our methods may address some of these issues. However, since our main focus is on vertices v for which v 1 is of order n and on estimating the variance of ϕ v we prefer to present simpler proofs. Theorem 4.2. There exists a constant c(U ) > 0 such that Again, this estimate is expected to be sharp up to the value of c(U ). Tools. In this section we let τ : V (T 2 n ) → [0, ∞) be an arbitrary function satisfying τ (0) = 0. We let T + , T − be the functions defined in Sect. 2 acting on the graph T 2 n with the given τ function and constant ε. We also recall the notation J + , J − , M(ϕ) and L(τ, ε) from Sect. 2.1. Our main tool for lower bounding the fluctuations of ϕ is the following lemma. Lemma 4.3. Denote and let F 0 be the sigma-algebra generated by (ϕ v ), v ∈ V 0 . There exists a constant c(U ) > 0 such that for any a, s > 0, any u ∈ V (T 2 n ) and any event A ∈ F 0 we have for the density of the measure μ T 2 n ,0,U . Fix a function θ : V 0 → R satisfying θ(0) = 0 and denote by dλ the measure Define the event and the quantity We wish to bound I from below and from above. We start with the bound from below. 1)] and observe that where we have used property (3) from Sect. 2.1 to justify our use of (4.5). 
Together with the definition of the event E this implies that To bound I from above we use the Cauchy-Schwarz inequality and the Jacobian identity in (2.10) to obtain (4.7) Comparing (4.6) and (4.7) and recalling that ϕ is sampled from the probability distribution μ T 2 n ,0,U we conclude that We continue by noting that by the definition of E, In addition, we recall from properties (2) and (4) of T + in Sect. 2.1 that if ψ satisfies |ψ u | ≤ a and M(ψ) ≤ L(τ, ε) then −a − ε 2 ≤ T + (ψ) u − τ (u) ≤ a and a similar relation for T − by (2.1). In addition, since A ∈ F 0 , properties (1) and (2) imply that A = T + (A) = T − (A). Therefore, using that T + and T − are one-to-one, Combining the last two inequalities with (4.8) establishes the lemma. Our next lemma bounds the error terms appearing on the right-hand side of (4.4). Lemma 4.4. For any s > 0 we have Proof. Given a vertex v ∈ V (T 2 n ) and k ≥ 1 denote by P v,k the set of all simple paths in T 2 n starting at v and having length k. Here, by such a path we mean a vector (e 1 , . . . , e k ) ⊆ E(T 2 n ) of distinct edges with e i = (v i , v i+1 ) and v = v 1 . Observe that, trivially, |P v,k | ≤ 4 k for all v and k. Now note that since U has (1/8, 1 − ε)-controlled gradients on T 2 n we have for each v ∈ V (T 2 n ) and k ≥ 1, Observe that We estimate each of the terms on the right-hand side separately. First, using (4.9) we have observing that the inequality holds trivially if L(τ, ε) is zero or negative. Second, using property (5) from Sect. 2.1 we see that Now, and using again (4.9) we conclude that Thus, Markov's inequality and (4.12) show that The lemma follows by combining this estimate with (4.10) and (4.11). Fluctuation bounds. In this section we prove Theorem 4.1. Fix and the function η : We aim to use the lemmas of the previous section with the τ function a constant multiple of η. The above definition is chosen so that we may control the quantities appearing in Lemma 4.4. The first case allows us to lower bound the function L while the second and third cases ensure that η is slowly varying. The next lemma formalizes these ideas. Write, as in (2.3), (4.14) Lemma 4.5. There exists an absolute constant C > 0 such that For any α > 0 we have Proof. The fact that η(w) depends only on w 1 and η(w 1 ) ≥ η(w 2 ) when w 1 1 ≥ w 2 1 shows that for each w ∈ V (T 2 n ) and k ≥ 0 we have By considering separately the latter two cases in the above inequality we have where we have also used that there are at most 4m vertices w ∈ V (T 2 n ) with w 1 = m (strict inequality is possible when m ≥ n). Continuing the last inequality we obtain for some absolute constants C, C > 0. We note that for any x, s, k ≥ 0 we have that Thus, (4.15) follows from the definitions (2.4) and (4.13) of L(τ, ε) and η. Proof of Theorem 4.1. Assume that v 1 ≥ (log n) 2 . It suffices to prove (4.2) and (4.3) as (4.1) is an immediate consequence of the case t = 1 of (4.3) and the fact that Eϕ v = 0 by symmetry. Let N (U ) > 0 be large enough for the following derivations. We first claim that choosing c(U ) sufficiently small and C(U ) sufficiently large the theorem holds when n ≤ N (U ). Indeed, this is clear for (4.2) as we may make the right-hand side greater than 1 by choosing C(U ) appropriately. To see this for (4.3) first note that our assumption that the potential U restricted to [−1, 1] is bounded away from infinity implies that P(|ϕ v | ≥ 0.99 v 1 ) > 0. 
Thus it suffices to check that 0.99 v_1 exceeds the required bound, and this follows, using our assumption that n ≥ 2, as Assume for the rest of the proof that n > N(U). Consequently, since v_1 ≥ (log n)^2, we have We start with the proof of (4.3). Let 1 ≤ t ≤ (1 + √v_1)/log n. If P(|ϕ_v| ≥ t log(1 + v_1)) ≥ 1/2 there is nothing to prove. Thus we suppose that P(|ϕ_v| ≤ t log(1 + v_1)) ≥ 1/2. Pick the function τ := 8t · η so that, since ε ≤ 1/2, we have τ(v) ≥ 2t log(1 + v_1) + ε/2 by (4.16). Combining the arithmetic-geometric mean inequality with Lemma 4.3, taking A to be the full event, we have where s > 0 is arbitrary. By Lemmas 4.4 and 4.5 we have Furthermore, our assumption that t ≤ (1 + √v_1)/log n and v_1 ≥ (log n)^2 combined with (4.15) yields that (2n)^2 and combining the last inequalities we conclude that for some c′(U), C(U) > 0 depending only on U. We now prove (4.2). We may suppose that r ≤ ¼ log(1 + v_1), to obtain, similarly to (4.17), Assume k ≤ η(v)/r [recalling that η(v) > r by our assumption that r ≤ ¼ log(1 + v_1) and (4.16)], so that by (4.15) and our assumption that N(U) is large we have that (4.18) holds. Hence, by Lemmas 4.4, 4.5 and the fact that r, k ≥ 1, for some constants C′(U), C″(U) > 0 (depending on U through ε). Thus, in particular, we have Summing over k and using that the sum of probabilities of disjoint events is at most one yields Since η(v) > r by our assumption that r ≤ ¼ log(1 + v_1) and (4.16), it follows that Together with (4.16) this proves (4.2).

Maximum. In this section we prove Theorem 4.2. Let ρ(U) > 0 be a constant to be chosen later, depending only on U and small enough for the following derivations. We may choose c(U) sufficiently small so that the theorem holds when n ≤ exp(1/ρ(U)^2) and thus we assume that n > exp(1/ρ(U)^2). Fix a collection of arbitrary vertices u_1, …, u_n ∈ V(T^2_n) satisfying (u_i)_1 ≥ n/2 and d_{T^2_n}(u_i, u_j) > 2n^{1/3} when i ≠ j. Define the events A_i, for 1 ≤ i ≤ n, where we mean that A_1 is the full event. We have and we aim to use Lemma 4.3 to estimate the summands on the right-hand side. Let v_0 := (⌈n^{1/3}⌉, 0) and let η : V(T^2_n) → [0, ∞) be the function defined by (4.13) with v = v_0. Noting that η takes its maximal value at v_0 we may define η_i as in (4.21), where w − u_i is the vertex in T^2_n obtained by taking the coordinate-wise difference modulo 2n. We define also the functions τ_i. Lemma 4.6. For all 1 ≤ i ≤ n we have In addition, if ρ(U) is sufficiently small then and for some absolute constant C > 0. Proof. Property (4.22) is an immediate consequence of the fact that η(v) = η(v_0) for all vertices v with v_1 ≥ (v_0)_1 and the definition of τ_i. To see (4.23), recall (4.19) and observe that, when ρ(U) is sufficiently small, as in (4.16). Now use the definition of τ_i and the fact that ε ≤ 1/2. Since (4.21) defines η_i via η we may use Lemma 4.5, taking ρ(U) sufficiently small, to obtain (4.24). Finally, Eq. (4.25) follows from a similar derivation as in the proof of Lemma 4.5. We may now apply Lemma 4.3 with τ_i playing the role of τ and A_i playing the role of A, noting that by (4.22) and our choice of the u_i, A_i is indeed measurable with respect to the sigma-algebra generated by {ϕ_v : τ_i(v) = 0}. Using also the arithmetic-geometric mean inequality and (4.23) we have where s > 0 is arbitrary.
Combining Lemma 4.4 with (4.24) and (4.25) we have Choosing s := exp(−20Cρ(U)^2 log n/ε^2), taking ρ(U) small enough and using (4.19) yields Plugging back into (4.27) and summing over i using (4.20) gives Finally, choosing ρ(U) sufficiently small this implies that It follows that there exists some 1 ≤ i ≤ n for which P(B_i ∪ A_i^c) ≥ 1/2, whence, by the definition of A_i, P(∪_{i=1}^n B_i) ≥ 1/2 and the theorem follows.

Discussion and Open Questions
In this work we prove lower bounds for the fluctuations of two-dimensional random surfaces. Specifically, we investigate random surface measures of the form (1.1) based on a potential U satisfying the conditions (1.2) and (1.3). These conditions allow for a wide range of potentials, including the hammock potential, when U(x) = 0 for |x| ≤ 1 and U(x) = ∞ for |x| > 1, as well as double-well and oscillating potentials. We prove that such random surfaces delocalize, with the variance of their fluctuations being at least logarithmic in the side-length of the torus. We also establish related bounds on the maximum of the surface and on large deviation and small ball probabilities. In this section we discuss related research directions and open questions.

Upper bound on the fluctuations. It is expected that, under mild conditions on the potential, an upper bound of matching order on the fluctuations of the random surface holds. For instance, that if ϕ is randomly sampled from the measure (1.1) then Var(ϕ_{(n,n)}) ≤ C(U) log n for some C(U) < ∞ and all n ≥ 2. One may well expect the result to hold for all potentials satisfying (1.2) and (1.3), and indeed even in greater generality. Certain potentials are known to satisfy such a bound but it appears that even the case of the potential U(x) = x^4 has not yet been settled [31, Remark 6 and Open Problem 1].

Reflection positivity. Our work relies crucially on reflection positivity and the chessboard estimate to establish what we called the controlled gradients property, see the beginning of Sect. 3. This restricts our results in ways that are probably not essential. Specifically, we may handle only random surface measures on a torus with even side length and we must normalize such measures at a single point. It is desirable to lift these restrictions, possibly by arriving at a more illuminating proof of the controlled gradients property. This would allow one to treat random surface measures on other graphs as well as on the graph T^2_n with other boundary conditions. For instance, one would expect our results to hold for zero boundary conditions, when ϕ_v is normalized to zero at all v = (v_1, v_2) with max(|v_1|, |v_2|) = n. With regard to this we put forward that the controlled gradients property possibly holds for any finite, connected graph G and any potential U satisfying the conditions (1.2) and (1.3), say. Precisely, let G and U be such a graph and potential. Write K := sup{x : U(x) < ∞} ∈ (0, ∞] and let ϕ be randomly sampled from the corresponding probability measure, normalized at some vertex v_0 ∈ V(G). Then it may be that for any 0 < δ < 1 there exists some 0 < L < K such that L depends only on δ and U (and not on G) and, if we define the random subgraph E(ϕ, L) of G by then P(e_1, …, e_k ∈ E(ϕ, L)) ≤ δ^k for all k ≥ 1 and distinct e_1, …, e_k ∈ E(G).

More general random surfaces. One may try to extend the applicability of our results in several directions. First, one may try to relax the condition (1.3) to allow for singular potentials. Ioffe et al.
[15] introduced a technique for proving lower bounds on fluctuations for potentials that are small perturbations, in some sense, of smooth potentials. These ideas were also incorporated in the work of Richthammer [29], upon which our addition algorithm is based. It is a promising avenue for future research to try to combine the techniques of [15] with our technique. This may allow one to treat all continuous (not necessarily differentiable) potentials as well as certain classes of discontinuous potentials. Second, one may try to extend the results to integer-valued random surface models. For instance, to probability measures on configurations ϕ : T^2_n → Z (rather than ϕ : T^2_n → R) with ϕ(0) = 0, for which the probability of ϕ is proportional to exp(−∑_{(v,w)∈E(T^2_n)} U(ϕ_v − ϕ_w)). This direction seems much more challenging, as our technique is based on an argument that relies crucially on the continuous nature of the model. We mention that while it is expected that many integer-valued random surface models have fluctuations with variance of logarithmic order, this has been established only in two cases: when U(x) = β|x| and U(x) = βx^2, both with β sufficiently small. This result is by Fröhlich and Spencer [10]. It is also known that if β is large then these models become localized, having fluctuations with bounded variance, a transition that is called the roughening transition. As specific examples of surfaces for which delocalization is expected but remains unproved we mention integer-valued analogs of the hammock potential, when U(x) = 0 for x ∈ {−1, 1} and otherwise U(x) = ∞ (the graph-homomorphism or homomorphism height function model), or when U(x) = 0 for x ∈ {−M, −M + 1, …, M} and otherwise U(x) = ∞ (the M-Lipschitz model). The former of these models can be used as a height function representation for the square-ice or 6-vertex models and is also related to the zero-temperature 3-state antiferromagnetic Potts model (i.e., uniformly chosen proper colorings of T^2_n with 3 colors). For more on these models we refer to [23], where it is proved that the homomorphism height function and 1-Lipschitz models are localized in sufficiently high dimensions.

Scaling limits and Gibbs states. The study of various limits for random surface models has received a great deal of attention in the literature. Infinite volume Gibbs states fail to exist for the random surface itself due to its delocalization but may exist for its gradients. Funaki and Spohn [11] proved that for uniformly convex potentials U, i.e., potentials satisfying 0 < c ≤ U″(x) ≤ C < ∞, a unique infinite volume gradient Gibbs measure exists for any value of 'tilt'. Another direction studied in [11] was to consider the Langevin dynamics of the random surface. Under hydrodynamic scaling, convergence to a solution of a PDE, the so-called motion by mean curvature dynamics, was established. Naddaf and Spencer [21], with an alternative scaling, proved the convergence of the model to a continuous Gaussian free field. Further, Giacomin et al. [13] extended these results to the Langevin dynamics, obtaining convergence to an infinite-dimensional Ornstein-Uhlenbeck process. Under similar convexity assumptions Miller [20] extended the scaling limit results to handle various choices of boundary conditions. Finally, we mention deep connections with SLE theory.
Schramm and Sheffield [30] discovered that, in the scaling limit, appropriately defined contour lines of the two-dimensional discrete Gaussian free field converge to an SLE curve with parameter κ = 4. It was conjectured that this is a universal phenomenon, independent of the details of the potential. A significant contribution in this area has been made by Miller [19], who resolved the conjecture for a large class of uniformly convex potentials. It is expected that the results described in this section hold under mild assumptions on the potential U. As a first step, one may let ϕ be randomly sampled from the random surface model (1.1) with the potential U(x) = x^4 or the hammock potential and try to prove that the law of ϕ_{(n,n)}, suitably normalized, converges to a Gaussian distribution. The above-mentioned works used uniform convexity via the Brascamp-Lieb inequality, the Helffer-Sjöstrand representation or homogenization techniques, and novel techniques may be required to extend the results beyond this setting. The question of uniqueness for gradient Gibbs states seems more delicate, as Biskup and Kotecký [4] gave an example of a non-convex potential admitting multiple gradient Gibbs states with the same 'tilt'.

Maximum in high dimensions. Our work establishes that the expected maximum of the random surfaces we consider is of order at least log n and it is expected that this is the correct order of magnitude. A curious question regards the maximum in higher dimensions. For instance, denote by T^d_n the d-dimensional discrete torus with vertex set {−n + 1, −n + 2, …, n − 1, n}^d and let ϕ be randomly sampled from the random surface measure (1.1) with T^2_n replaced by T^d_n for some d ≥ 3. It is known that for the discrete Gaussian free field, when U(x) = x^2, the maximum of the field is typically of order √(log n) as n tends to infinity. However, it may well be that the behavior of the maximum is now potential-specific. How would the maximum behave for the hammock potential, i.e., for a uniformly chosen Lipschitz function? Observe that if a Lipschitz function is at height t at a given vertex then it is at height at least t/2 in a ball of radius t/2 around that vertex, a ball containing order t^d vertices. This raises the possibility that the probability that a random Lipschitz function attains height t at a given vertex decays as exp(−c t^d). This bound would imply that the typical maximal height is of order at most (log n)^{1/d}, as n tends to infinity. Is this the correct order of magnitude? The technique of Benjamini et al. [2] may lead to a lower bound of this order. For the integer-valued models of Lipschitz functions mentioned above, the homomorphism height function and 1-Lipschitz models, an upper bound of order (log n)^{1/d} on the expected maximum was established in [23] in sufficiently high dimensions. We mention also the works [24, 25], where the maximum of such Lipschitz function models is studied on expander and tree graphs.

Decay of correlations. Let ϕ be randomly sampled from the random surface measure (1.1). Our results focus on estimating Var(ϕ_v) for various vertices v, i.e., the diagonal elements of the covariance matrix of ϕ. How do the off-diagonal elements behave? How fast do the values of ϕ decorrelate? A related question is to study the decay of correlations for the gradient of ϕ.
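To make the heuristic about the maximum in high dimensions slightly more explicit, here is a minimal sketch (our own illustration under the stated assumption, not a claim proved in the text) of how the conjectured tail bound exp(−c t^d) would translate into a maximum of order (log n)^{1/d} via a union bound:

```latex
% Sketch only: assumes the conjectured single-vertex tail bound
% P(phi_v >= t) <= exp(-c t^d) for the hammock potential in dimension d.
\[
  \mathbb{P}\Bigl(\max_{v \in V(T^d_n)} \varphi_v \ge t\Bigr)
  \;\le\; \sum_{v \in V(T^d_n)} \mathbb{P}(\varphi_v \ge t)
  \;\le\; (2n)^d \, e^{-c\,t^d},
\]
% which tends to zero once t >= (2 d log(2n) / c)^{1/d},
% so the typical maximum would be O((log n)^{1/d}).
```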
Sufficiently fast decay of gradient correlations will lead to an upper bound on Var(ϕ_v), by writing ϕ_v as the sum of the gradients of ϕ on a path leading from 0 to v and averaging over many such paths. With regard to this we mention the results of Aizenman [1] and Pinson [27], following ideas of Patrascioiu and Seiler [22], who give a lower bound, in a certain sense, for the decay of correlations for the hammock potential and for the integer-valued homomorphism height function model mentioned above.

High-dimensional convex geometry. The case that the potential U is the hammock potential is natural also from a geometric point of view. In this case the measure (1.1) is the uniform measure on the high-dimensional convex polytope of Lipschitz functions defined by Lip := {ϕ : T^2_n → R : ϕ_0 = 0 and |ϕ_v − ϕ_w| ≤ 1 when v ∼ w}. The field of convex geometry is highly developed and we mention here the central limit theorem of Klartag [16], which states that uniform measures on high-dimensional convex bodies have many projections that are approximately Gaussian. It would be interesting to use this point of view to obtain new results for the random surface with the hammock potential.
16,949.8
2015-09-02T00:00:00.000
[ "Mathematics" ]
In vivo corneal and lenticular microscopy with asymmetric fundus retroillumination

We describe a new technique for non-contact in vivo corneal and lenticular microscopy. It is based on fundus retro-reflection and back-illumination of the crystalline lens and cornea. To enhance phase-gradient contrast, we apply asymmetric illumination by illuminating one side of the fundus. The technique produces micron-scale lateral resolution across a 1-mm diagonal field of view. We show representative images of the epithelium, the subbasal nerve plexus, large stromal nerves, dendritic immune cells, endothelial nuclei, and the anterior crystalline lens, demonstrating the potential of this instrument for clinical applications.

Introduction
Non-invasive cellular-scale imaging of the cornea is a valuable tool for disease diagnostics, management, and monitoring. In clinical practice, it is often used to distinguish forms of microbial keratitis in situ, when corneal biopsy is either infeasible or fails to yield a diagnosis [1]. It is used routinely to examine the endothelium for evidence of structural change or dysfunction, which can cause corneal edema and concomitant vision impairment [2]. Cellular-scale corneal nerve imaging has also been suggested as a means to monitor systemic disease, such as diabetes mellitus, or recovery from refractive surgery [3]. Currently, the established microscopic clinical imaging methods are specular microscopy (SM) [4] and in vivo confocal microscopy (IVCM) [5]. Whereas SM is restricted to the endothelium, IVCM is able to produce high-contrast images throughout the full thickness of the human cornea and resolve nerves and cells in 3D [6]. A caveat is that IVCM is usually performed in contact with the cornea, meaning the objective lens (or protective cap) touches the cornea during the examination. Topical anesthetic must be administered prior to imaging. With an experienced technician, contact operation is safe and straightforward. Nevertheless, there are many subjects with phobias who will not tolerate contact operation. Moreover, for routine screening purposes where speed is critical, non-contact methods are highly desired. Much of the recent progress in non-contact in vivo corneal imaging has involved various flavors of optical coherence tomography (OCT) [7,8], largely due to its remarkable depth selectivity. Chen et al. described a micro-OCT system capable of cross-sectional swine cornea imaging [9]. However, the A-line rate was not fast enough to provide useful en face images in the presence of motion. Mazlin and colleagues took a different approach and applied a parallelized full-field version of OCT (FF-OCT). They were able to acquire very large en face images in human corneas [10]. With faster A-line rates, Tan et al. later showed that high-resolution imaging in 3D was feasible with spectral-domain OCT [11]. Recently, Gabor-domain optical coherence microscopy has also been successfully demonstrated in vivo, albeit only in anesthetized mice [12]. All the techniques mentioned so far are based on reflection, or, more precisely, backscattered light from corneal microstructures. Here we introduce an in vivo microscopy technique based instead on transmitted light. We call this approach retroillumination microscopy, in deference to the related but lower-resolution slit lamp technique [13].
The key idea is to use the ocular fundus as a diffuse back-reflector, thereby folding the light path of a widefield transmission microscope into one which requires access to only one side of the sample (either the cornea or crystalline lens). To maximize back-reflection, we use near-infrared (NIR) light, which is weakly absorbed in the fundus and virtually undetectable to the subject. Additionally, we implement asymmetric illumination, a well-established method for enhancing intrinsic phase-gradient contrast [14][15][16]. Our method is non-contact and produces images with high lateral resolution, comparable to state-of-the-art IVCM systems, and across a large field in the cornea (1-mm diagonal). A strength of the system is its instrumentational simplicity, making it a promising candidate for disease screening or global-health applications. The purpose of this report is to describe the retroillumination microscope design in detail. We also present representative images of the cornea and lens obtained from healthy volunteer subjects.

Hardware
A schematic for the retroillumination microscope is given in Fig. 1 and cross-sectional illustrations of the illumination beam at the cornea and fundus are given in Fig. 2. The subject's head is placed on a chin rest, while their gaze is stabilized with an external fixation target. The microscope is mounted on translation stages for alignment with the eye. Illumination consists of a high-power NIR LED (LZ1-10R602, Osram; 850 nm center wavelength), which is offset and magnified to span a semi-circular aperture and subsequently relayed through a beamsplitter to the back focal plane of a long-working-distance objective lens (MPlanApoNIR 20X/0.4, Mitutoyo). The diameter of the aperture is adjusted such that its image half fills the objective's back aperture. The objective lens then projects the semi-circular LED image onto the fundus with a visual angle of about 42° (right side of Fig. 2). The beam diameter at the pupil is about 3.8 mm (Fig. 2, left). The objective lens working distance is 20 mm, which allows the subject to freely blink. We assume the fundus acts as a spatially uniform diffuse reflector. Features such as retinal vessels and the optic nerve head exhibit rather weak contrast with NIR illumination [17] and hence can be ignored. Reflected light from the fundus obliquely transilluminates (i.e. retroilluminates) the anterior segment. Rays clearing the iris are collected by the same objective and magnified with a compound tube lens onto the sensor of a machine vision camera (acA2000-340kmNIR, Basler), running at 348 frames/sec. We chose this speed based on speeds used successfully in corneal FF-OCT [10]. We used two off-the-shelf achromats (AC508-300-B and AC508-150-B, Thorlabs) spaced by about 2 mm to achieve the desired tube lens power while avoiding additional aberration. The system described thus far is susceptible to direct backreflections from the objective and, to a lesser extent, from the anterior segment interfaces, which can severely impair image contrast. To mitigate direct backreflections, we use cross-polarized detection and exploit the fact that multiple scattering in the fundus largely depolarizes the retro-scattered illumination. Specifically, the LED light is linearly polarized (LPVIS100-MP2, Thorlabs) and combined with a polarizing beamsplitter cube (PBS252, Thorlabs).
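The bandwidth figures quoted in the next subsection follow from the 850 nm wavelength and 0.4 NA listed above. As a quick cross-check, the short Python sketch below (our own illustration using the standard incoherent-imaging cutoff formulas, not code or analysis from the paper) reproduces those numbers:

```python
import math

wavelength_mm = 850e-6   # 850 nm expressed in mm
na = 0.4                 # objective numerical aperture (illumination and imaging)
n_medium = 1.0           # imaging in air

# Lateral cutoff for incoherent imaging: 2*NA / wavelength
lateral_cutoff = 2 * na / wavelength_mm                                  # cycles/mm

# Axial cutoff for incoherent imaging: (n - sqrt(n^2 - NA^2)) / wavelength
axial_cutoff = (n_medium - math.sqrt(n_medium**2 - na**2)) / wavelength_mm

print(f"lateral cutoff ~ {lateral_cutoff:.0f} mm^-1 "
      f"({1e3 / lateral_cutoff:.1f} um period)")   # ~941 mm^-1, ~1.1 um
print(f"axial cutoff   ~ {axial_cutoff:.0f} mm^-1 "
      f"({1e3 / axial_cutoff:.0f} um period)")     # ~98 mm^-1, ~10 um
```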
In the illumination path, we also insert a small central block (mounted on a transparent window) conjugate to the cornea in order to darken the microscopy field of view (FOV) and reduce depolarized stray light. The block is responsible for the octagon-shaped hole in the incident illumination on the left in Fig. 2. Although this configuration resembles an ophthalmoscope configuration, with its spatially segmented input and output beams, we emphasize that the retroillumination microscope is focused on the anterior segment FOV, not the fundus. In practice, it is difficult to align the microscope to the eye using only the high-resolution FOV. We obtain a wider field for alignment by using an auxiliary camera to image the subject's pupil with ambient light. Specifically, a motorized flip mirror is inserted in the illumination path just after the aperture (omitted from Fig. 1 for clarity), which temporarily redirects an approximately 7 mm diameter low-resolution image of the pupil onto a free-running machine vision camera (DCC3240N equipped with camera lens MVL6WA, Thorlabs). We use the edges of the subject's iris to then center the microscope prior to microscopic imaging. This alignment procedure increases operator repeatability. After locating the desired structures, we usually capture sets of 1024 frames (3 s of video).

Resolution and field of view
In the asymmetric retroillumination microscope, and indeed in all microscopes based on asymmetric illumination, the phase-to-intensity point spread function is no longer an Airy pattern, making it difficult to ascribe a resolution based on, for instance, the Rayleigh criterion. Instead we report spatial frequency bandwidth as a surrogate for resolution. For 850 nm light and 0.4 NA (illumination and imaging), the maximum spatial frequency is 940 mm⁻¹ laterally (1.1 µm period) and 98 mm⁻¹ axially (10 µm period), in air. The camera produced images of dimensions 1540 × 1088 pixels. This corresponds to a field of view (FOV) of about 820 × 580 µm (or 1 mm diagonal) in the cornea.

Light levels and safety
The total incident light power on the cornea is about 50-100 mW. This light level (at 850 nm) does not cause significant pupil contraction and is just barely visible to the subject. Because the power is distributed over an area, the corneal and retinal irradiances are below the limits for non-hazardous Group 1 devices in the latest ophthalmic safety standard, ANSI Z80.36-2016.

Image processing
Even with asymmetric retroillumination, intensity contrast at the image plane is low (<5%). Two major sources of noise restrict useful post-hoc expansion of this range: photo-response non-uniformity (PRNU) and shot noise. Note that offset noise is corrected on-chip with correlated double sampling. Read and quantization noise are negligible. PRNU results from varying pixel gain and is easily corrected by dividing each raw frame by a calibration frame. We use a 256-frame average of a uniform-intensity field, near the expected signal level, as the calibration frame. Averaging several frames (equivalent to integrating more photons) reduces shot noise, but increases susceptibility to motion blur. Nevertheless, we could still average several frames by registering frames prior to averaging. Registration was performed with standard FFT-based phase-correlation methods [18]. We found that we can usually average at least 9 frames before axial motion decorrelates the FOV.
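The per-frame processing chain described above, together with the gradient-removal step described in the next paragraph, can be summarized in a short Python sketch. This is a minimal illustration of the kind of pipeline described, with hypothetical function and variable names and integer-pixel registration; it is not the authors' (GPU-based) implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, shift

def prnu_correct(frame, calibration):
    """Divide a raw frame by a flat-field calibration frame to remove
    photo-response non-uniformity (pixel-to-pixel gain variation)."""
    return frame / calibration

def phase_correlation_shift(ref, img):
    """Return the integer-pixel shift that aligns img to ref, estimated
    with standard FFT-based phase correlation."""
    cross_power = np.fft.fft2(ref) * np.conj(np.fft.fft2(img))
    cross_power /= np.abs(cross_power) + 1e-12
    corr = np.abs(np.fft.ifft2(cross_power))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap peaks in the upper half of each axis to negative displacements.
    return [p if p < s // 2 else p - s for p, s in zip(peak, corr.shape)]

def process_stack(raw_frames, calibration, n_avg=9, sigma_px=24):
    """PRNU-correct, register, average, and flatten a short stack of frames
    (about 9 frames and sigma = 24 px, as quoted in the text)."""
    frames = [prnu_correct(f, calibration) for f in raw_frames[:n_avg]]
    ref = frames[0]
    registered = [ref]
    for f in frames[1:]:
        dy, dx = phase_correlation_shift(ref, f)
        registered.append(shift(f, (dy, dx), order=1, mode="nearest"))
    avg = np.mean(registered, axis=0)
    # Divide by a heavily blurred copy of the image to remove
    # slowly varying illumination gradients.
    return avg / gaussian_filter(avg, sigma_px)
```

Sub-pixel registration and GPU execution, as used in the actual system, would refine the integer-pixel estimate and accelerate the NumPy/SciPy calls, respectively.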
Additionally, we remove any slowly-varying illumination gradients by dividing the image by a Gaussian-filtered version of itself (σ = 24 pixels, or 13 µm in the cornea). Processing is performed efficiently on a consumer GPU and a cropped version of the images is displayed in real time at approximately 20 Hz. Following acquisition, we perform the same processing on the full FOV and for each frame in the stack.

Subjects
We imaged the left eye of 3 healthy volunteers ranging in age from 26 to 57 and with varying fundus pigmentation. For each subject, informed consent was obtained prior to imaging. The research was approved by the Boston University Institutional Review Board and conformed to the principles stated in the Declaration of Helsinki.

Epithelium
The subbasal nerve plexus (SBP) was the most visible structure observed in the epithelium. The SBP is a highly branched network of sensory nerve fibers located in a narrow plane between the basal epithelial cell layer and Bowman's layer. Figure 3(A) shows the SBP across nearly the entire FOV close to the corneal apex of a 28-year-old male. The large dark spots are out-of-focus aggregates of either mucin or shed epithelial cell debris on the superficial cornea, anterior to the focal plane. These spots appear dark because the aggregates scatter light outside the collection aperture, resulting in a decrease in local image intensity. The aggregates usually shift over time and move rapidly following a blink. Enlarged areas are given in Figure 3, showing structures anterior to the SBP plane of focus. These structures are more discernible in the video. They may be Langerhans cells (resident macrophages) or intraepithelial nerve fibers. Other periepithelial features are shown in Figure 4. The tear layer has a punctate appearance (Fig. 4(A)) with sparse highly-scattering aggregates. The edges of squamous and wing epithelial cells are occasionally visible (Fig. 4(B)), but have low contrast and are therefore difficult to distinguish. The basal epithelial cell mosaic (Fig. 4(C)) is always visible with positive or negative contrast depending on the relative position in the focal plane. In one subject, we saw numerous dendritic cells with clearly resolvable cell bodies and processes (Fig. 4(D)). We were unable to detect any discernible features in Bowman's layer.

Stroma
Within the stroma, we observed large branching nerves. In contrast to subbasal nerves, which are largely confined to a plane, stromal nerves are distributed throughout the stromal volume. Hence, it was challenging to obtain images where large portions of the nerve were in focus. Figure 5 shows a few stromal nerve segments. Panels A and B are cropped views of the same stromal nerve trunk but at slightly different focal planes. Arrows indicate the common branch point. Keratocyte nuclei were notably absent from our retroillumination images. This was surprising based on their density in the stroma and high contrast in IVCM [5]. In video sequences, we occasionally observed distinct structures about the size of a cell, but in much lower abundance than expected for normal keratocytes. We also did not see any indication of the subepithelial nerve plexus; however, we confined most of our imaging to the central cornea, where the subepithelial plexus may simply be absent [19].

Endothelium
The endothelium, a monolayer of cells that coats the posterior surface of the cornea, was readily visible with retroillumination. A widefield 1-mm diameter view is shown in Figure 6.
Interestingly, it appears that endothelial cell nuclei, and not cell edges, exhibit the best contrast. Similar to basal epithelial cells, when endothelial cell nuclei are above (i.e. anterior to) the focal plane they produce positive contrast and, conversely, when they are below the focal plane they produce negative contrast. The curvature of the endothelium is also evident in Figure 6. Just anterior to the endothelium, at the approximate location of Descemet's membrane, we also repeatedly saw small, high-contrast spots. These spots were much smaller than the size of the nearby endothelial cells.

Crystalline lens
With a 20 mm working distance objective lens we can easily adjust the system to image the crystalline lens, which begins about 3.3 mm behind the air-cornea interface. However, ray tracing software indicated that strong spherical aberration, from the air-cornea refractive index mismatch, impedes clear imaging. To reduce sensitivity to spherical aberration, we lowered the effective imaging NA by relaying the objective's back focal plane to an external iris prior to focusing on the camera. Passage through the iris reduced the imaging NA down to 0.2. Despite the lower resolution, both the lens epithelium and anterior lens fibers were discernible. Example images from a 28-year-old male are shown in Figure 7.

Discussion
We have described a new in vivo corneal and lenticular imaging method, which we call retroillumination microscopy. The technique is non-contact and produces images with high lateral resolution (1-2 µm), comparable to state-of-the-art IVCM. Unlike most other in vivo eye imaging techniques, retroillumination microscopy is based on transmitted light. This difference has a fundamental impact on obtainable image contrast [8,20]. In order for light reflection, or more precisely, backscattering to occur, the sample must present an abrupt change in refractive index. This could either be an interface (specular reflection) or a clump of scattering structures each smaller than the wavelength of incident light (Rayleigh-like scattering). By contrast, transmission microscopy is sensitive to forward-scattered light, such as that primarily generated by larger structures, for example cell bodies or nuclei. A clear example is the different appearance of the corneal endothelium, which in reflection contrast usually appears as a hyper-reflective interface with dark paths delineating cell borders [4]. In transmission contrast, it is the cell nuclei that are most apparent, while cell edges are undetectable (see Fig. 6). As an aside, the adaptive optics ophthalmoscopy community has already recognized the utility of forward-scattered light as a method to enhance contrast of blood flow [21], photoreceptor cone inner segments [22], and retinal ganglion cells [23]. There are likely many other corneal features where transmission contrast can contribute complementary information. Our method bears resemblance to the well-known slit lamp technique known as retroillumination [13] (n.b. the naming similarity is intentional). Both techniques reflect light off posterior structures in order to back-illuminate the lens and cornea. However, unlike the slit lamp biomicroscope, which features separate, non-overlapping illumination and imaging paths, our system is designed around a single objective lens. With this configuration we are free to use a much higher collection angle (NA) in the imaging path without physically obstructing the illumination.
Thus our single-lens configuration provides higher optical resolution than that obtainable with standard slit lamps. Similarly, our design enables unimpeded illumination of large fundus areas. We use this freedom to implement asymmetric illumination, a well-established method to enhance phase-gradient contrast [14]. Transmission imaging also avoids superficial sample reflections, such as the prominent corneal anterior surface reflection. Excess background from this reflection can easily dominate intracorneal backscattering. Hence, high optical sectioning strength (e.g. confocal filtering or coherence gating) is normally required to diminish its effect, which in turn increases system complexity. In the absence of this reflection, we are able to form useful en face images across a large, 1-mm diagonal FOV with little more than a widefield microscope made of readily available off-the-shelf components. Retroillumination provides excellent contrast of corneal nerves, particularly the subbasal plexus, which is recognized as a potential biomarker for diabetic peripheral neuropathy [24]. Combined with the large FOV (3× larger area than current IVCM) and non-contact operation, retroillumination microscopy may be a useful tool for monitoring diabetic neuropathy or other ocular diseases affecting corneal nerves.

Disclosures
The authors declare no conflicts of interest.
3,906
2020-03-11T00:00:00.000
[ "Biology" ]
The aorta can act as a site of naive CD4+ T-cell priming

We demonstrate that naïve T cells can be primed directly in the vessel wall, with both the kinetics and frequency of T-cell activation found to be similar to splenic and lymphoid T cells. Aortic homing of naïve T cells is regulated at least in part by the P-selectin glycosylated ligand-1 receptor. In experimental atherosclerosis the aorta supports CD4+ T-cell activation, selectively driving Th1 polarization. By contrast, secondary lymphoid organs display Treg expansion. Conclusion: Our results demonstrate that the aorta can support T-cell priming and that naïve T cells traffic between the circulation and vessel wall. These data underpin the paradigm that local priming of T cells specific for plaque antigens contributes to atherosclerosis progression.

Introduction
Aortic adaptive immune responses play a role in atherosclerosis,1 with several immune cell subsets identified in human2,3 and murine4,5 vessels; however, to date the precise mechanisms leading to T-cell activation in the arterial wall remain poorly understood. Several studies have investigated whether vascular resident antigen-presenting cells (APCs) retain the ability to present antigen locally to T cells using reductionist approaches such as adoptive transfer of model-antigen-loaded dendritic cells (DCs),6 in vitro co-culture systems,7 and explanted aortas.8 Building on these approaches, antigen presentation has been demonstrated in vivo by constitutive aortic plasmacytoid DCs (pDCs).9,10 More recently, it was discovered that in the advanced stages of atherosclerosis in apoE-/- mice, the vessel wall orchestrates the formation of artery tertiary lymphoid organs that control vascular T-cell responses with a concomitant reduction in adjacent plaque size, without involvement of secondary lymphoid organs (SLOs).4 Therefore, current evidence suggests that aortic APCs possess the capacity to present antigen and that the aorta may act as a site of T-cell priming in atherosclerosis. In earlier stage pathology, it is assumed that adaptive immune responses are co-ordinated in SLOs and/or the vessel wall, but naïve T cells have yet to be identified as residing constitutively in vascular tissue and it remains controversial whether T-cell priming can occur within the aorta. Therefore, in this investigation, we set out to address the following questions: (i) Do naïve CD4+ T cells reside in the aorta and by what mechanism are they recruited? (ii) Can T-cell priming occur directly in the aorta and where do T-cell/DC contacts take place? (iii) How does the aortic CD4+ T-cell phenotype compare with T cells in SLOs and are there divergent effects on these distinct populations after induction of pathology? Our results demonstrate that the aorta can act as a site of naïve CD4+ T-cell priming and selectively induce a localized Th1 immune response in early experimental atherosclerosis.

Animals
B6.129P2-Apoe(tm1Unc)/J (apoE-/-) mice, C57BL/6 (wild type; WT), OT-II, and TEa mice were used in this study. All animals were euthanized by carbon dioxide. No anaesthetic agent was used. For full details see Supplementary material online. All the procedures were performed in accordance with local ethical and UK Home Office regulations and conform to the guidelines from Directive 2010/63/EU of the European Parliament on the protection of animals used for scientific purposes.
Flow cytometry
Cell suspensions from aortas were prepared by enzyme digestion as previously described.4,9 For experiments that involved removal of aortic adventitia, a modified digestion protocol was performed to allow removal of the adventitia.11 Spleens and renal lymph nodes (rLNs; abdominal aortic draining) were digested in collagenase D (Sigma-Aldrich, Irvine, UK). Intracellular staining was performed using a Cytofix/Cytoperm kit (BD Biosciences, Oxford, UK) or a True-Nuclear™ Transcription Factor buffer set (Biolegend, London, UK) according to the manufacturer's instructions. For BrdU staining, a BD Pharmingen FITC BrdU flow kit (BD Biosciences) was used according to the manufacturer's instructions. Because T cells represent a major population (~20%) of blood leucocytes, coupled with the fact that aortic naïve T cells were likely to be rare, we included an additional control for all T-cell experiments to ensure that the T cells analysed were 100% aortic resident: 3 min prior to the experiment endpoint, animals were injected i.v. with a CD45.1 antibody (OT-II mice) or a CD45.2 antibody (WT, TEa, and apoE-/- mice) to label circulating blood leucocytes for subsequent exclusion from analysis (Supplementary material online, Figure S1B). Experiments were analysed on an LSR II, an LSRFortessa (BD Biosciences), or a MACSQuant Analyzer (Miltenyi Biotec, Bisley, UK) using FlowJo (FlowJo LLC, Ashland, OR, USA). For full details and antibodies used see Supplementary material online.

Assessment of Th1 cytokine expression
WT and apoE-/- mice were injected i.p. with 300 µg Brefeldin A (Sigma-Aldrich) to block cytokine release and culled 5 h later. Aortas, spleens, and rLNs were harvested and stained for intracellular cytokines via flow cytometry as detailed in Supplementary material online.

Naïve CD4+ T cells exist constitutively in the adventitia and intima/medial layers of WT aorta
The combination of perfusion and removal of extra-aortic tissue resulted in a pure vascular preparation (Supplementary material online, Figure S1).

Figure 1 (E-G): In a separate experiment, OT-II mice were injected with ovalbumin or PBS and culled 72 h later. Single cell suspensions prepared for flow cytometry analysis of (E) aorta, (F) renal LNs, and (G) spleen were stained with CD62L and CD44 to once again distinguish naïve from activated cells. All plots are gated on CD4+Vα2+Vβ5+. Results are from two independent experiments, with each group containing five pooled aortas. Spleen, LNs, and blood were analysed individually.

Priming of CD4+ T cells occurs within the aorta
Evidence exists that T-cell activation occurs within the aorta,6,8,12 but in vivo proof of naïve T-cell priming is lacking. PBS-treated OT-II aortas contained a similar proportion of naïve CD4+ T cells to WT mice (Figure 1E). Following administration of antigen, there was a switch towards a more activated CD44hi phenotype with a concomitant reduction in naïve T cells (Figure 1E). This was also true for rLNs and spleen (Figure 1F and G), indicating each site harbours an antigen-activated CD4+ population in parallel with a reduced frequency of naïve cells. In summary, these results clearly demonstrate that OT-II T cells are being primed locally within the vessel wall.

Naïve T-cell recruitment to the aorta is suppressed by blockade of PSGL-1
Early stage atherosclerosis is associated with vascular T-cell expansion, activation and selective Th1 polarization
On examination of rLNs and spleen (Figure 6), the CD4+ populations were equivalent between WT and apoE-/- animals both for rLNs (Figure 6A) and spleen (Figure 6E). Both tissues showed a modest increase in TEM cells (~5%) in apoE-/- mice (Figure 6B and F, respectively), which is in contrast to the ~20% increase observed in apoE-/- aorta.

In one set of experiments, OT-II mice were injected i.v. with 200 µg chicken ovalbumin (OVA) and culled at 24, 48, or 72 h later. Phosphate-buffered saline (PBS) served as naïve control. Two hours prior to culling, mice were injected with 1 mg BrdU i.p. Single cell suspensions were prepared for aorta, rLNs, and spleen, and CD4+ OT-II+ T cells were stained for the proliferation markers BrdU and Ki-67 as described in Supplementary material online. In a separate series of experiments, OT-II mice were injected with 1 mg/kg FTY720 (Sigma-Aldrich) and 2 mg/kg anti-P-selectin (RMP-1) and anti-E-selectin (RME-1) antibodies (Biolegend) to block T-cell vascular recruitment, prior to receiving 200 µg OVA. Treatment with FTY720 and anti-P-selectin/anti-E-selectin was repeated on Day 2 (24 h following the first dose) to maintain effective plasma concentrations. The control group received 200 µg OVA in addition to the respective vehicle (dH2O) and isotype control antibodies (mouse IgG1, κ and mouse IgG2a, κ). Mice were culled 48 h following OVA administration (Supplementary material online, Figure S2). Single cell suspensions of aorta, rLNs, and spleen were obtained for flow cytometry analysis of CD4+ OT-II proliferation as assessed by Ki-67.

Blockade of aortic T-cell recruitment
WT mice were treated with 100 µg anti-P-selectin glycosylated ligand-1 (PSGL-1; 4RA10) or an equivalent dose of the corresponding isotype (Bio X Cell, West Lebanon, NH, USA) at 0 and 24 h to block aortic T-cell recruitment. Mice were culled at 48 h after the first dose. Aortic single cell suspensions were obtained for flow cytometry quantification of total CD45+ cells, CD4+ T cells, and naïve CD4+ T cells.

Results are expressed as mean ± standard error of the mean (SEM) of n animals/groups of pooled tissues for each experiment. The Student's t-test was used to compare two groups. Analysis of variance (ANOVA) was used for comparing three or more groups, with Tukey's multiple comparison post-test applied as described in the Figure legends. GraphPad Prism 6 software (San Diego, CA, USA) was used. A P-value <0.05 was taken to indicate statistical significance.
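As a hedged illustration of the statistical comparisons just described (unpaired Student's t-test for two groups; one-way ANOVA with Tukey's multiple comparison post-test for three or more groups), here is a minimal Python sketch using SciPy and statsmodels. The paper's analysis was performed in GraphPad Prism 6; the group values below are placeholders, not data from the study.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Placeholder measurements (e.g., CD4+ T-cell counts per pooled-aorta group);
# real values come from the flow cytometry quantification.
wt    = np.array([1.2, 1.5, 1.1, 1.4])
apoe  = np.array([4.8, 5.6, 4.1, 5.2])
third = np.array([2.0, 2.4, 1.9, 2.2])   # hypothetical extra group

# Two groups: unpaired Student's t-test (P < 0.05 considered significant).
t_stat, p_two_group = stats.ttest_ind(wt, apoe)
print(f"t = {t_stat:.2f}, P = {p_two_group:.4f}")

# Three or more groups: one-way ANOVA followed by Tukey's multiple
# comparison post-test.
f_stat, p_anova = stats.f_oneway(wt, apoe, third)
print(f"ANOVA F = {f_stat:.2f}, P = {p_anova:.4f}")

values = np.concatenate([wt, apoe, third])
labels = ["WT"] * len(wt) + ["apoE-/-"] * len(apoe) + ["group3"] * len(third)
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```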
To determine the constitutive repertoire of T cells that reside in the whole naïve aorta, we employed high-resolution cytometry by time of flight (CyTOF), which revealed multiple distinct CD4+ and CD8+ T-cell populations, two γδ populations and unidentified T-cell subtypes (Supplementary material online, Figure S3). Inspection of these data reveals that the aorta harbours a mix of CD44low and CD44hi T cells (Supplementary material online, Figure S3). Using flow cytometry, we found ~40% of aortic CD4+ T cells expressed the classical naïve phenotype: CD62L+CD44lo (Figure 1A). This compares with approximately two-thirds of CD4+ T cells being naïve in renal lymph nodes (rLNs), spleen, and blood (Figure 1B-D). Further characterization demonstrated that a clear CD4+ T-cell population could be observed both within the adventitia and intima/media (Supplementary material online, Figure S4B). Analysis of the T-cell population revealed naïve CD4+ T cells in both compartments but with a higher proportion of activated T cells (CD44hi) being present within the adventitia (Supplementary material online, Figure S4C). T cells were also visualized by confocal microscopy and found to exist in both the intima and adventitia of the naïve aorta, including T cells bound to the aortic endothelium (Supplementary material online, Figure S7A).

Figure 1. Naïve T cells constitutively reside in C57BL/6 wild-type aorta and are reduced in OT-II murine aorta following antigen administration. (A-D) C57BL/6 wild-type mice were culled and single cell suspensions prepared for flow cytometry analysis of (A) aorta, (B) renal lymph nodes, (C) spleen, and (D) blood. CD4+ T cells were stained with CD62L and CD44 to distinguish naïve from activated cells. All plots are gated on CD4+ TCR-β+ T cells. Results are from two independent experiments, with each group containing five pooled aortas.

The presence of naïve T cells within peripheral tissues remains a matter of controversy. To further validate our finding that the aortic CD62L+CD44- cells are truly naïve, we employed TEa mice, which consist of a CD4+ T-cell population expressing a single T-cell receptor (TCR) specificity and are also RAG-deficient; thus T cells remain naïve in the absence of the Eα52-68:I-Ab complex (their cognate antigen). By employing this model, we are able to determine if naïve T cells are capable of trafficking to and maintaining a clear resident population within the aorta, even under non-inflammatory conditions. This was indeed what we observed: in TEa mice, the vast majority of CD4+ T cells in the aorta are CD62L+CD44-, conclusively demonstrating that naïve CD4+ T cells do indeed reside within the aorta (Supplementary material online, Figure S5A). Data from renal lymph nodes are presented for comparison where, similarly, most CD4+ T cells were CD62L+CD44- naïve cells (Supplementary material online, Figure S5B). The aorta, therefore, acts as a reservoir of naïve CD4+ T cells that may form interactions with aortic resident antigen-presenting myeloid cells. To fully delineate the repertoire of myeloid cells within the naïve aorta, we performed CyTOF analysis of C57BL/6 aortas, which revealed 12 types of myeloid cells including multiple major histocompatibility complex Class II (MHC-IIhi) populations consisting of distinct cDC and macrophage subsets (Figure 2).
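To make the gating definitions used throughout these results concrete (naïve = CD62L+CD44lo and activated/effector memory = CD62L-CD44hi, assessed on CD4+ TCR-β+ events), here is a small, hypothetical Python sketch of boolean gating on exported cytometry events. The column names and thresholds are placeholders for illustration only; the actual analysis was performed in FlowJo with gates set against appropriate controls.

```python
import pandas as pd

# Hypothetical per-event fluorescence intensities exported from a cytometer.
events = pd.DataFrame({
    "CD4":   [1200, 1500,   90, 1800, 1100],
    "TCRb":  [ 900, 1100,  950, 1300,  100],
    "CD62L": [2000,  150, 1800, 1900, 2100],
    "CD44":  [ 120, 2500,  100,  140,  130],
})

# Placeholder positivity thresholds (would normally come from controls).
POS = 500
HI = 1000

cd4_t = (events["CD4"] > POS) & (events["TCRb"] > POS)                     # CD4+ TCR-beta+ gate
naive = cd4_t & (events["CD62L"] > POS) & (events["CD44"] < HI)            # CD62L+ CD44lo
effector_memory = cd4_t & (events["CD62L"] < POS) & (events["CD44"] > HI)  # CD62L- CD44hi

print(f"naive fraction of CD4+ T cells: {naive.sum() / cd4_t.sum():.0%}")
print(f"effector-memory fraction:       {effector_memory.sum() / cd4_t.sum():.0%}")
```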
Conventional approaches of assessing naïve T-cell homing and antigen-dependent activation, via adoptive transfer and subsequent priming in recipient mice, are not feasible given that donor lymphocytes recruited to the naïve aorta make up only a minor fraction of the total aortic lymphocyte population.6 Therefore, we adopted the approach of investigating only endogenous T-cell homing and priming using OT-II mice. The use of OT-II mice, where the majority of CD4+ T cells have a TCR specific for chicken ovalbumin (OVA) 323-339 in the context of I-Ab (alloantigen of H-2b-bearing mouse strains, including mice on a C57BL/6 background), allows us to assess whether the aorta can act as a site of priming by injecting OT-II mice with OVA, so that the great majority of aortic CD4+ T-cell-DC contacts will involve T cells with a TCR specific for the OVA peptide presented by aortic cDCs.

Figure 2. Mass cytometry reveals myeloid populations in C57BL/6 aorta. (A) Myeloid cells were gated as Lin-CD11b lo/hi and clustered using viSNE on the expression of cell surface and intracellular markers. Expression levels of selected myeloid markers in the resulting viSNE-clustered cell populations are shown for a representative C57BL/6 mouse. (B) Twelve cell populations consisting of monocytes (Ly6C+ and Ly6C-), conventional Type 1 and 2 dendritic cells (cDC1 and cDC2), neutrophils, five macrophage subsets, and two unidentified populations. Doughnut plot shows mean proportion of subsets in aortas of n = 6 mice. (C) Heatmap showing the relative expression level of 20 cell markers within the 12 myeloid cell subsets identified by the viSNE clustering shown in (B).

(Supplementary material online, Figure S7C). DC-T-cell contacts were not observed in PBS-treated mice.

Having identified naïve T cells within the aorta, we investigated the mechanism of naïve T-cell homing from the circulation into aortic tissue. PSGL-1 is the major leucocyte ligand for P-selectin, while also showing affinity for E- and L-selectin,19 but its role in vascular recruitment of naïve CD4+ T cells remains unknown. To investigate this, we injected C57BL/6 WT mice with the anti-PSGL-1 antibody, 4RA10,20 on two consecutive days and studied CD4+ T-cell populations within the aorta 48 h following the first injection (Supplementary material online, Figure S8). Since PSGL-1 is common to all leucocytes, there was a decrease in the number of total leucocytes (>80%; 2.7-fold) in the vessel of anti-PSGL-1-treated animals (Supplementary material online, Figure S8A). A marked reduction was also identified for total CD4+ T cells (88%; 4.8-fold; Supplementary material online, Figure S8B). Interestingly, both activated and naïve T cells were reduced by anti-PSGL-1 treatment, with naïve CD4+ T cells reduced by 96%, which equated to a 5.7-fold reduction in this population (Supplementary material online, Figure S8C). Our data demonstrate that PSGL-1 has a key role in promoting naïve T-cell recruitment to the aorta. Finally, we sought to assess differences in the phenotype of CD4+ T cells in the aorta, rLNs, and spleen of WT vs.
apoE-/- mice. As already demonstrated by others,6 apoE-/- aortas contained significantly more (~400%) CD4+ T cells compared with WT (Figure 5A). In apoE-/- aortas, we observed a lower frequency of naïve CD4+ T cells, and a significant switch from naïve (CD62L+CD44-) to an effector memory (TEM) population (CD62L-CD44+) was noted (Figure 5B), concomitant with increased surface expression of the T-cell activation marker CD69 (Figure 5C). No significant difference was noted for T-cell proliferation (Figure 5D). CD4+ tissue-resident memory cells (TRM) have been identified in several tissues including lung21,22 and skin23 but have never been investigated in the vasculature. Using a well-defined marker profile for TRM, namely sphingosine-1-phosphate receptor 1-negative (S1PR1-) CD103+CD44+CD69+, we identified a small population fitting the TRM profile in the aortas of apoE-/- mice and, while the absolute numbers of cells were low, they were 10 times higher than that observed in WT aortas (Figure 5E).

Figure 5. CD4+ T cells show a more activated phenotype in atherosclerotic aortas. Wild-type and apoE-/- mice were maintained on a high-fat diet for 10-12 weeks. Mice were culled and aortic single cell suspensions assessed via flow cytometry for (A) CD4+ T-cell number per vessel, (B) naïve (CD62L+CD44-) vs. effector memory (CD62L-CD44+) T-cell phenotype, (C) activation status, and (D) proliferation. (E) Additionally, aortas were stained for detection of TRM (S1PR1-CD103+CD44+CD69+) cells. (A) Data shown represent n = 6-7 groups with each group comprising 3-8 pooled aortas from 6 to 7 independent experiments. (B-E) Data shown represent n = 3-4 groups with each group comprising 6-8 pooled aortas from 3 to 4 independent experiments. Individual data points represent the average value per group; horizontal bars denote the mean. Results are presented as mean ± SEM. Student's unpaired t-test; *P < 0.05, **P < 0.01.

Also, in contrast to the aorta, no increase in CD44+CD69+ T cells was observed in the rLNs (Figure 6C) or spleen (Figure 6G). The cells of rLNs in apoE-/- mice did, however, show a higher rate of proliferation compared with WT, a phenotype not observed in the spleen (Figure 6D and H, respectively). The major pro-atherosclerotic T-cell response in atherosclerosis is driven by Th1 cells.24 We therefore employed direct in vivo intracellular cytokine staining to quantify T cells actively secreting Th1 cytokines without exogenous stimulation.25 Following 12 weeks' high-fat diet (HFD), mice were injected with Brefeldin A 5 h before culling. CD4+ T cells were assessed for expression of the signature Th1 cytokine IFN-γ, in addition to TNF-α (Figure 7A-C). There were ~10 times more IFN-γ+ CD4+ T cells in apoE-/- aortas compared with WT. Aortic Th1 cells were not actively producing TNF-α (Figure 7A). No significant differences in the Th1 phenotype were observed in either rLNs or spleen (Figure 7B and C, respectively). We considered that the increased frequency of CD44hi CD4+ T cells in the apoE-/- spleen and rLNs may be associated with Treg polarization and expansion, which is known to play a protective role in the pathology.24 CD25+FoxP3+ Tregs were detected in both WT and apoE-/- aortas and, although a trend towards fewer Tregs was observed in apoE-/- aortas, this did not reach significance (Figure 7D). This was in
contrast to SLOs, where Tregs formed a larger proportion of the total CD4+ population, with this value increasing in apoE-/- rLNs (Figure 7E) and spleen (Figure 7F). In summary, early stage atherosclerosis is associated with vascular T-cell expansion, activation and selective Th1 polarization, whereas SLOs are skewed towards enhanced Treg expansion, as observed previously.26

Figure 6. CD4+ T-cell phenotype in renal lymph nodes and spleen of apoE-/- mice vs. wild-type. Wild-type and apoE-/- mice were maintained on a high-fat diet for 10-12 weeks. Mice were culled and single cell suspensions from (A-D) rLNs and (E-H) spleen were assessed via flow cytometry for (A, E) CD4+ T-cell frequency, (B, F) naïve vs. effector memory T-cell phenotype, (C, G) activation status, and (D, H) proliferation. Data shown represent n = 20-21 mice from 3 to 4 independent experiments. Individual data points represent the average value per mouse; horizontal bars denote the mean. Results are presented as mean ± SEM. Student's unpaired t-test; *P < 0.05, **P < 0.01, ***P < 0.001.

We have conclusively identified naïve T cells within the aorta of several mouse strains. We also illustrated how the aorta supports local T-cell priming in a model system. Indeed, in OT-II mice, aortic CD4+ T cells can be primed as efficiently as lymphoid T cells. Naïve T cells also depend, at least in part, on PSGL-1 for aortic homing. We have also performed a comparative assessment of the CD4+ T-cell phenotype between the atherosclerotic aorta, an aortic-draining lymph node and the spleen, highlighting the importance of the local aortic immune response. A significant Th1 response evolved in the aorta, whilst T cells in SLOs were biased more towards a regulatory phenotype. These data support the hypothesis that atherosclerosis induces local vascular Th1 cell responses and that this local response is quite distinct from the more tolerogenic responses observed in SLOs. Naïve T cells have been identified in peripheral tissues including brain, pancreas, lung, skin, and testes.27 Here, we demonstrate that naïve CD4+ T cells constitutively reside in the aorta of WT mice. We found similar proportions of T cells amongst total leucocytes in both the adventitia and intima/media, with naïve T cells being detected in both compartments. We confirmed by microscopy that T cells can be found bound to non-inflamed aortic endothelium and reside in close proximity to DCs, indicating that T cells may be able to enter the aortic wall from the lumen. Indeed, microvessels are generally absent in the healthy aortic wall,28 which strongly suggests that the major point of entry in naïve vessels is via the arterial lumen. The presence of naïve T cells within the intima/media is intriguing, given that the intima is the principal location of APCs in the naïve aorta,7 where they accumulate in the subendothelial space. Here, we have fully delineated the repertoire of myeloid cells within the naïve aorta by CyTOF analysis, revealing 12 types of myeloid cells including multiple MHC-IIhi APC populations. This offers the potential for naïve T cells to encounter APCs in the environment where lipid accumulation/oxidation and atherogenesis begin. Much of the evidence regarding local T-cell activation and clonal expansion directly in the vessel is circumstantial, such as co-localization immunohistochemistry of atherosclerotic plaques3 or the use of artificial systems ex vivo,8 hence it is unknown if T-cell priming can occur in the aorta in vivo. We were able
to demonstrate activation of aortic OT-II T cells following antigen, with kinetics paralleling those observed for splenic OT-II cells and peak proliferation observed at 72 h. At 72 h, we also observed BrdU uptake, indicating cells were undergoing mitosis. This was following a 2-h pulse of BrdU, which would strongly indicate the BrdU+ cells were of a local nature (i.e. in the target tissue of interest). To confirm this hypothesis, we pre-treated OT-II mice with FTY720 and blocking antibodies to P- and E-selectin prior to administration of antigen, thus ensuring any clonally expanded OT-II T cells within the aorta could only derive from endogenous naïve T cells encountering an aortic APC presenting ovalbumin peptide on MHC-II. The result from this experiment confirmed that priming of naïve CD4+ cells occurred in the aortic wall and is direct evidence that APCs within the vessel can uptake and present antigen locally to CD4+ T cells and induce clonal expansion.

Figure 7: Atherosclerosis induces divergent effects on aortic vs. lymphoid tissue T-cell polarization. Wild-type and apoE-/- mice were maintained on a high-fat diet for 10-12 weeks. Mice were then separated into two experimental groups. Group 1 received 300 mg of Brefeldin A 5 h prior to culling, and single cell suspensions of (A) aorta, (B) rLNs, and (C) spleen were utilized for intracellular staining of CD4+ T cells for the Th1 cytokines IFN-γ and TNF-α. Representative plots and graphs show IFN-γ+ Th1 T cells. Plots are gated on CD4+ TCR-β+ and results represent n = 3 per group, with each group containing three pooled aortas, three pooled rLNs, and n = 8 spleens from three independent experiments. In Group 2, mice were culled and single cell suspensions of (D) aorta, (E) rLNs, and (F) spleen were assessed via flow cytometry for the presence of CD4+ Tregs (CD25+FoxP3+). Data shown for aorta represent n = 3-4 groups with each group containing 6-8 pooled aortas. Data shown for rLNs and spleen represent n = 20-21 individual organs. All data derived from 3 to 4 independent experiments. Individual data points represent average value per group/organ; horizontal bars denote mean. Results are presented as mean ± SEM. Student's unpaired t-test; *P < 0.05, **P < 0.01, ***P < 0.001.

Using OT-II mice, we were able to visualize T cells and DCs forming cell contacts following antigen challenge. Such contacts were observable in the adventitia, media, and intima. Of interest, DCs and T cells were found to reside in areas of the media where muscle fibres were less dense. The observation that DCs and T cells can interact in all the aortic layers indicates that the architecture of the aortic wall can support local T-cell activation, even in the absence of a local inflammatory stimulus. Whilst priming of T cells in healthy WT aorta is very unlikely given the small polyclonal population that exists with respect to SLOs, the situation in atherosclerotic plaques may offer an environment more conducive to T-cell activation. In the context of atherosclerosis, in the presence of chronic pro-inflammatory stimuli, local antigen presentation could, in theory, take place within the developing lesion, where T cells and APCs are more highly concentrated, with T cells co-localizing with activated DCs.2,3 Evidence for aortic T-cell priming in atherosclerosis was previously suggested in apoE-/- mice by the fact that the
T-cell repertoire within atherosclerotic aortas became more restricted over the course of pathology, while no changes in T-cell clonality could be detected in SLOs.12 The mechanism by which naïve T cells enter the aorta is unknown. We identified PSGL-1, the major leucocyte ligand for endothelial selectins, as a receptor of interest in regulating homing of naïve T cells into the aorta, since we demonstrated that naïve T cells are depleted in the aorta following anti-PSGL-1 treatment. PSGL-1 blockade or deficiency reduced atherosclerosis formation, adhesive interactions between endothelial cells and leucocytes, and neointima formation in apoE-/- mice.29-31 Naïve T cells lack a fully glycosylated PSGL-1, so binding to selectins is lower than for activated T cells, yet some binding affinity still remains.32 Adoptively transferred T cells have been previously shown to enter atherosclerotic aortas in a partially L-selectin-dependent manner despite L-selectin receptors being absent from aortic tissue.6 This apparent discrepancy can be reconciled by the fact that L-selectin expressed on an endothelium-bound T cell binds to PSGL-1 on an unbound T cell (a process termed secondary capture).33 The initial interaction (primary capture) involves a T cell binding to P- or E-selectin on the vessel wall. In fact, it has been demonstrated that the absence of L-selectin has no effect on aortic leucocyte primary capture and rolling. Moreover, following 12 weeks of HFD, neither T-cell number nor plaque area was altered between apoE-/- and apoE-/-/L-selectin-/- mice.34 We also noticed that TEM cells were less affected by PSGL-1 blockade (2.7-fold reduction) compared with naïve cells, making up a greater proportion of the CD4+ cells in the treatment group compared with isotype. TEM cells may use additional receptors to facilitate binding to endothelial selectins, including T-cell immunoglobulin and mucin domain 1 (TIM-1),35 CD44,36 and E-selectin ligand-1.37 PSGL-1 on naïve T cells can also bind the chemokines CCL19 and CCL21,38 but expression of these chemokines in vascular tissue is low or absent under non-inflammatory conditions.4,39,40 Therefore, we consider these unlikely to contribute to the reduced trafficking observed in WT mice. There may also be chemotactic factors in the vessel wall derived from myeloid cells that contribute to the reduced trafficking of naïve T cells observed after PSGL-1 blockade, as an indirect effect, due to concurrent reductions in other leucocyte subsets. Finally, we performed a phenotypic analysis of CD4+ T cells between WT and apoE-/- aortas, under similar HFD conditions. In line with previous results,6 here we show that aortic CD4+ T cells display a near fourfold increase in numbers in apoE-/- mice compared with WT mice. We next investigated the relative proportions of naïve vs.
activated CD4+ T cells. Naïve CD4+ T cells were significantly reduced in aortas of apoE-/- mice compared to WT, concomitant with an increase in TEM frequency. In contrast, when we examined rLNs and spleens, we discovered that the CD4+ populations were equivalent in terms of magnitude, with only a minor switch towards an activated phenotype, considerably less than what was observed in the aorta. This is consistent with a lack of increased proliferation of splenic CD4+ T cells previously observed at 8 weeks of HFD in apoE-/- mice.41 Th1 cells are pro-atherosclerotic in both humans3,42,43 and animal models.24 By employing intracellular cytokine staining to reveal in vivo cytokine profiles, we showed that IFN-γ+ (Th1) T cells were 10 times more abundant in apoE-/- aortas. In contrast, we did not detect significant differences in the Th1 population in either the rLNs or spleen. We also quantified CD4+ Tregs, which are known to be atheroprotective in animal models.24 Tregs were a small proportion of total CD4+ T cells within the aorta, with no significant differences between WT and apoE-/- mice. In contrast, Tregs grew as a proportion of CD4+ T cells both in rLNs and spleen. In support of this data, a study conducted on Foxp3-eGFP/LDLr-/- mice showed a progressive increase in the frequency of splenic CD4+ Tregs at 4, 8, and 20 weeks of HFD, while the total CD4+ T-cell population remained unchanged.26 The immune mechanisms that drive aortic T-cell activation and the Th1 response in apoE-/- mice are likely to be multi-factorial, with both antigenic and non-antigenic stimuli contributing to the aortic resident T-cell phenotype. One additional factor worth considering is the potential presence of B cell follicles, resembling early tertiary lymphoid organs, which can be found in even young apoE-/- mice44 and these structures, if present, may exert immunomodulation on the underlying vascular T-cell response. We have used a model in vivo system to illustrate the capacity that the naïve aorta has to promote local T-cell priming. Other approaches that could have further validated these findings, such as orthotopic aortic transposition utilizing an aortic graft from a donor with trackable (i.e. fluorescent) cells, could also have aided in discriminating local from systemic immune effects. However, not only are such approaches technically challenging, the grafts would yield a very low number of T cells with respect to an entire intact aorta, thus necessitating a large number of surgeries to produce sufficient tissue for quantifiable data. Future studies, however, utilizing such approaches coupled with detailed temporal phenotyping of TCR usage in experimental atherosclerosis would further enhance our knowledge of when and where antigenic stimulation of T cells occurs. In conclusion, the aorta can support T-cell priming, and local activation of CD4+ T cells is associated with the vascular-specific Th1 response we observed in early stage atherosclerosis in the apoE-/- aorta.
7,782.6
2019-04-13T00:00:00.000
[ "Biology", "Medicine" ]
Extreme enhancement of superconductivity in epitaxial aluminum near the monolayer limit BCS theory has been widely successful at describing elemental bulk superconductors. Yet, as the length scales of such superconductors approach the atomic limit, dimensionality as well as the environment of the superconductor can lead to drastically different and unpredictable superconducting behavior. Here, we report a threefold enhancement of the superconducting critical temperature and gap size in ultrathin epitaxial Al films on Si(111), when approaching the 2D limit, based on high-resolution scanning tunneling microscopy/spectroscopy (STM/STS) measurements. Using spatially resolved spectroscopy, we characterize the vortex structure in the presence of a strong Zeeman field and find evidence of a paramagnetic Meissner effect originating from odd-frequency pairing contributions. These results illustrate two notable influences of reduced dimensionality on a BCS superconductor and present a platform to study BCS superconductivity in large magnetic fields. INTRODUCTION Bardeen-Cooper-Schrieffer (BCS) theory has been vastly successful at explaining the behavior of conventional superconductors (1). Yet, superconductors, both conventional and unconventional, can exhibit complex and unexpected behavior when one or more length scales approach a lower dimensional limit (2). While the superconducting critical temperature (T c ) of some materials reduces in the monolayer limit, compared to the bulk (3)(4)(5), it has also been shown that T c can be greatly enhanced in this regime, as illustrated by FeSe/SrTiO 3 (6). Likewise, superconductivity can emerge at the interface of two insulating materials, as exemplified by the interface of LaAlO 3 /SrTiO 3 (7). As many types of quantum technologies depend on the growth of superconductors integrated into heterostructures, including superconducting spintronic devices (8), high-precision magnetometers (9), and qubits based on superconducting nanostructures (10), it is imperative to understand what the role of dimensionality and the influence of the environment is on the superconductivity. Elemental aluminum (Al) is exemplary of a type I BCS superconductor in the weak-coupling regime (1) and exhibits unexpected modifications to its superconducting behavior when scaled to the two-dimensional (2D) limit. It has been shown that the critical temperature of Al can be increased from its bulk value of T c = 1.2 K by growing thin films, both epitaxial and granular. However, widely varying growth procedures resulting in oxidized films (11)(12)(13)(14)(15)(16)(17)(18), granular Al (19)(20)(21), Al nanowires (22,23), or doped Al films (24,25) give dispersing values for T c clouding ultimately what contributes to the aforementioned enhancement. In some of these studies, the cleanliness of the interface and the Al itself, as well as the relevant thickness, is ill-defined. Moreover, these studies are often limited to a regime where the thickness is greater than six monolayers (MLs), mainly due to the challenges to synthesize monolayer scale epitaxial Al films. The dispersive findings question to what extent the enhancement of superconductivity is intrinsic to Al itself and to what extent the trend of increasing T c persists as films are thinned down further. 
To this end, experimental approaches that combine high-purity growth methods in a controlled ultrahigh vacuum (UHV) environment with a concurrent in situ characterization are vital to identify the intrinsic superconducting behavior of Al films near the 2D limit. In addition to the observed enhancement of T C , the upper critical field in the direction parallel to the film surface has been shown to increase substantially (16). Because of the low spin-orbit scattering rate in Al, these films characteristically show the Meservey-Tedrow-Fulde (MTF) effect, where the application of a magnetic field gives rise to a spin splitting of the quasiparticle excitations (26,27). In addition, it has been proposed that this high-field regime can promote odd-frequency spin-triplet correlations (28)(29)(30)(31)(32), but it has been challenging to confirm their presence experimentally (28,33,34). The combination of thin film Al and large magnetic fields, as used in superconducting qubit devices, especially those aiming to induce topological superconductivity (10,35,36), puts forward questions about how superconductivity is affected by external magnetic fields and the role of unconventional pairing. Here, we show that Al(111) films epitaxially grown on Si(111)-(7 × 7), approaching the monolayer limit, exhibit a greatly enhanced T c , up to about a factor of three, when compared to the bulk value. Using scanning tunneling microscopy/spectroscopy (STM/STS) at variable temperatures down to millikelvin, we first characterize the structural and large-scale electronic properties of epitaxial films of Al grown on Si(111) for various thicknesses (N). We subsequently characterize the associated superconducting gap (Δ) with each grown film. For the largest gap values, we corroborate these measurements with T c by measuring Δ(T ). Next, we probe the magnetic field-dependent properties of individual Al films for different thicknesses in magnetic fields with different field orientations. We confirm the expected type II behavior in out-of-plane magnetic fields, including the observation of an Abrikosov lattice. For inplane magnetic fields, we observe the MTF effect and use the spectral evolution in magnetic field to quantify the g-factor of the various films, which are all shown to exhibit g ≈ 2. We finally characterize the vortex structure in the presence of the MTF effect, which shows a reshaping of the vortex structure when compared to zero in-plane field. Based on numerical simulations using the Usadel equation, we quantify the observed structure and relate it to the presence of both even and odd-frequency pairing correlations as well as their contribution to the screening currents. Structural and spectroscopic properties of epitaxial Al films Epitaxially grown Al films (see Materials and Methods) imaged with STM typically show a closed film of a given thickness, decorated with a density of islands a monolayer higher ( Fig. 1A and fig. S2). Films with a given thickness exhibit two different periodicities ( Fig. 1, B and C). A short-range threefold periodicity with a ≈ 0.25 nm coincides with the expected atomic lattice constant of Al(111). In addition to the atomic periodicity, a long-range periodicity can be observed in films for thicknesses up to 26 MLs, which is also threefold symmetric and exhibits a periodicity a M ≈ 2.6 nm. 
This periodicity is commensurate with the underlying 7 × 7 reconstruction of Si(111) (37,38), and it is reminiscent of the moiré-type structures seen for other thin superconducting films (39,40). The appearance of both the moiré-type structure and the atomic periodicity is indicative that the interface is most likely pristine with negligible intermixing at the growth temperatures used. Epitaxial film growth is observed for Al films ≥4 MLs, as identified in (38). In attempts to measure even thinner Al films, our growth procedure resulted in broken and granular films. The thickness of a given film can be corroborated with STS measured in a voltage range of ±2 V. For a given N, layer-dependent broad peaks can be identified at given voltages, which vary depending on the given value of N (Fig. 1D). To better illustrate the measured peaks for both filled and empty states, dI/dV spectroscopy was normalized to I/V. Moreover, different films with the same value of N reproducibly show the same spectroscopic features, enabling spectroscopic fingerprinting of the layer thickness, although the films are closed (see section S1 and fig. S3). The appearance of such peaks in STS is reminiscent of quantum well states (QWS) typically observed on other thin metallic films grown on Si(111) (41). For reference, the QWS energies extracted from (42,43) are indicated in Fig. 1D by blue arrows underneath each measured spectrum. In this comparison, the QWS energies do not exactly match the measured peak positions, but there is a qualitative agreement between the energy difference between adjacent QWS, and the measured spectra, up to approximately 13 MLs. As seen from previous angle-resolved photoemission spectroscopy (ARPES) measurements (44) and the aforementioned calculations, the expected QWS have a smaller effective mass and are expected to disperse, when compared to the QWS of Pb/Si(111) (41). This inherently weakens the QWS intensity and makes a direct mapping of the exact QWS onset energies based solely on point-STS measurements imprecise. We note that a direct comparison to measured ARPES (44) is challenging, as we observe stronger features in the empty state region of the spectra, where there are no ARPES measurements. Likewise, ARPES spatially averages over regions of the film where we expect spectroscopic contributions from multiple thicknesses of the film. Superconducting gap and critical temperature as a function of film coverage We measured Δ as a function of coverage using high-energy resolution STS at variable temperature. Here, the coverage of a given film is defined as the cumulative Al material of its main layer and (vacancy) islands. Below, we first detail the spectral gap as measured at the lowest temperature, namely, T = 30 mK, for three coverages in Fig. 2A. A typical spectrum shows a BCS-like, hard gap structure symmetric around V s = 0 mV and sharp coherence peaks at the gap energy Δ, which can be fitted and extracted with a broadened Maki function (see section S2 and fig. S5 for a discussion on the possible broadening contributions) (45). We find that the gap value shows the largest enhancement of Δ = 0.560 ± 0.015 meV for a coverage of 3.9 MLs (4 MLs with a distribution of vacancy islands), which is more than a threefold enhancement compared to the bulk value of Δ bulk = 0.16 to 0.18 meV (46,47). 
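For readers who want to reproduce this kind of gap extraction, the sketch below fits a tunneling spectrum with a thermally smeared, Dynes-broadened BCS density of states. This is a simplified stand-in for the broadened Maki fit used above (a Maki fit additionally accounts for depairing and spin-orbit terms); the synthetic data, the effective temperature, and the broadening value are illustrative assumptions, not the authors' analysis parameters.

# Minimal sketch (not the authors' code): extract a gap Delta by fitting a
# Dynes-broadened BCS density of states, thermally smeared, to a dI/dV spectrum.
import numpy as np
from scipy.optimize import curve_fit

KB = 8.617e-2      # Boltzmann constant, meV/K
T_EFF = 0.25       # assumed effective electronic temperature for this sketch, K

def dynes_dos(E, delta, gamma):
    """Dynes-broadened BCS density of states, normalized to the normal state."""
    Ec = E + 1j * gamma
    return np.abs(np.real(Ec / np.sqrt(Ec**2 - delta**2)))

def didv_model(V, delta, gamma):
    """Tunneling dI/dV: DOS convolved with the thermal kernel -df/dE at T_EFF."""
    E = np.linspace(-3.0, 3.0, 2001)                 # meV
    dE = E[1] - E[0]
    dos = dynes_dos(E, delta, gamma)
    out = np.empty_like(V, dtype=float)
    for i, v in enumerate(V):
        kernel = 1.0 / (4 * KB * T_EFF * np.cosh((E - v) / (2 * KB * T_EFF)) ** 2)
        out[i] = np.sum(dos * kernel) * dE
    return out

# Synthetic spectrum standing in for a measured 30 mK dI/dV curve of a 4-ML film.
rng = np.random.default_rng(0)
V = np.linspace(-2.0, 2.0, 401)                      # mV
data = didv_model(V, 0.56, 0.01) + 0.01 * rng.normal(size=V.size)

popt, _ = curve_fit(didv_model, V, data, p0=(0.5, 0.02))
print(f"fitted gap = {popt[0]:.3f} meV")             # ~0.56 meV, roughly 3x the bulk 0.16-0.18 meV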
We find that the spectra taken at various locations on the sample, including on (vacancy) islands and along the long-range periodicity, reveal a uniform superconducting gap with a constant Δ (± 0.02 meV) and small variations in coherence peak height (see fig. S4). Therefore, we assign Δ for each sample as the spatial average of all gap values extracted from ≥18 spectra, where the error bar represents the standard deviation of those values. The uniformity in the value of Δ is in contrast to the variations in the band structure on larger energy scales, where we see clear differences in STS for different layer heights. This observation suggests that the value of Δ is not significantly modulated due to the presence of different QWS stemming from variations in the film thickness, in contrast to reports on Pb/Si(111) (39,48) and in line with observations for Pb/BP (49). Measurements on films with different coverage yield a monotonously increasing trend in Δ as the film coverage is lowered, as shown in Fig. 2B for samples between 4 and 35 MLs. Here, each data point represents one grown sample. For the largest coverages we measured, namely, 35 MLs, we still observed a slight enhancement in Δ compared to the bulk value (blue bar), as was also seen in (18). The monotonous trend contrasts the observations for Pb/ Si(111), where the critical temperature oscillates due to a modulation of the local density of states (LDOS) at E F . Here, we see no clear correlation between the QWS energies and the corresponding gap size. To quantify T c in relation to the measured values of Δ at millikelvin temperature, we performed temperature-dependent measurements of Δ(T) for four different film coverages (see Materials and Methods for details and section S3 and fig. S6 for the temperature calibration). Δ(T ) was measured for a given sample by incrementally raising the sample temperature between 1.3 and 4.0 K. With increasing T, Δ(T ) shows the expected decrease until the gap is eventually fully quenched, coinciding with T c (Fig. 2C). To quantify the value to T c , we first fitted each measured spectra with a BCS Dynes function (see section S2) (50). We subsequently fitted the numerically determined temperature dependence of the gap within BCS theory to the extracted Δ(T ), as exemplified for an Al film with a 4.7-ML coverage in Fig. 2D, and find T c = 3.31 ± 0.11 K. In Fig. 2E, we illustrate the extracted T c for four different films (see fig. S7). Based on BCS theory, the ratio between T c and Δ(T = 0) leads to an expected ratio of 2Δ(T = 0)/k B T c = 3.53, which typically describes superconductors in the weak-coupling limit, such as bulk Al (46,47). Based on the extracted values, we plot the ratio between Δ and T c in Fig. 2E. The overall trend indicates that the ratio is in close agreement to the expected value 3.53 as seen for the bulk Al, suggesting that the thin Al films studied here may be in the weakcoupling limit. We note that the T c was only measured for four films, and not for a given film multiple times. Therefore, the error bars coincide with the standard deviation given by the fits shown in Fig. 2D and fig. S7. To infer a coverage-dependent trend in the extracted ratio, further measurements are needed. Moreover, the effect of the sample morphology and defects on the gap value and the ratio requires further study. The threefold enhancement of Δ and T c is distinctly larger than reported epitaxial Al films in the literature, where capped films were studied ex situ only down to 6 MLs (18). 
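The weak-coupling numbers quoted above can be cross-checked with a few lines of arithmetic. The sketch below assumes the textbook ratio 2Δ(0)/k_B T_c = 3.53 and the common tanh interpolation for Δ(T) to estimate the zero-temperature gap implied by the fitted T_c = 3.31 K; it is a back-of-the-envelope illustration, not the full numerical BCS fit performed in the paper.

# Back-of-the-envelope check of the weak-coupling BCS relations (illustrative only).
import numpy as np

KB = 8.617e-2                     # meV/K
Tc = 3.31                         # K, fitted critical temperature of the 4.7-ML film
delta0 = 3.53 * KB * Tc / 2       # gap implied by the weak-coupling ratio
print(f"Delta(0) implied by Tc: {delta0:.3f} meV")    # ~0.50 meV

def bcs_gap(T):
    """Standard interpolation for the BCS gap below Tc (returns 0 above Tc)."""
    if T >= Tc:
        return 0.0
    return delta0 * np.tanh(1.74 * np.sqrt(Tc / T - 1.0))

for T in (1.3, 2.0, 3.0, 3.3):
    print(f"T = {T:.1f} K -> Delta ≈ {bcs_gap(T):.3f} meV")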
Likewise, the threefold enhancement exceeds most reported values for T c of other studies on oxidized (single) Al films (12-18, 20, 21), likely due to the thinner films, the crystallinity, and the absence of the oxide layer. This observation directly refutes an early idea that the origin of the enhancement effect was due to the oxygen layer (12). In other reports (24,25), enhanced values of T c for Al films were obtained by doping with ~2% of Si impurities. However, potential intermixing of Si and Al with this quantity of impurities would likely obscure the moiré pattern and atomic-resolution images presented in Fig. 1. In addition, we can also exclude a considerable influence of Si intermixing on the enhancement of superconductivity, since we do not observe a considerable change in gap enhancement for films when the annealing time (and thus potential intermixing) is minimized (see section S4 and fig. S8). These observations indicate that the enhanced superconductivity is an intrinsic property of ultrathin Al films, but it remains an open question if other weak-coupling superconductors present similar enhancement effects and what the role of the substrate/interface is (4).

Abrikosov lattice and out-of-plane magnetic field response

Subsequently, we characterize the magnetic field-dependent response of various Al films in two magnetic field orientations, i.e., perpendicular/parallel to the surface. First, we quantify the upper critical field for an Al film with an 11.7-ML coverage in a magnetic field perpendicular to the film plane (B⊥c2). By incrementally increasing B⊥ and measuring local point spectra (Fig. 3A), the coherence peaks flatten and the zero-bias conductance increases gradually until the gap has completely vanished at B⊥ = 100 mT. This upper limit for B⊥c2 gives an estimate for the coherence length ξ of 64 nm, as ξ = √(Φ0/(2πB⊥c2)), where Φ0 is the magnetic flux quantum (51). The expected type II behavior can be observed by spatially imaging the zero-bias conductance for nonzero values of B⊥. We measured a constant-contour dI/dV conductance map at V s = 0 mV (B⊥ = 50 mT), which reveals an Abrikosov lattice, with a vortex radius on the order of the coherence length (Fig. 3B and fig. S9).

MTF effect and the Clogston-Chandrasekhar limit

After characterizing the out-of-plane response, we characterized the response of various films to an in-plane magnetic field (B∥) for various coverages. Since screening currents cannot build up in the confined superconductor, orbital depairing is absent, and the magnetic field penetrates the superconductor, allowing us to study the superconducting state in combination with large magnetic fields compared to the typical out-of-plane critical values. In the absence of spin-orbit scattering, the quasi-particle excitations of the superconductor are sufficiently long-lived to observe the MTF effect in this regime (26,27). This effect is exemplified by a spin-splitting of the coherence peaks, where each peak shifts by ±gμB S B∥, giving a total Zeeman splitting of |E z | = gμB B∥ for S = 1/2. For a homogeneous superconductor in the absence of spin-orbit coupling, the superconducting state may only persist up to the Clogston-Chandrasekhar limit (52,53), given by h = Δ/√2, with h = μB B∥, where a first-order phase transition to the normal state occurs.
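Returning briefly to the out-of-plane data above, the Ginzburg-Landau estimate for the coherence length follows directly from the flux quantum. The snippet below is a minimal numerical check; it takes the 100 mT field at which the gap vanished as an upper bound on B⊥c2, so it is an order-of-magnitude consistency check against the quoted 64 nm rather than a reproduction of the authors' exact number.

# Minimal check of the Ginzburg-Landau estimate xi = sqrt(Phi0 / (2*pi*Bc2)).
import numpy as np

PHI0 = 2.0678e-15               # magnetic flux quantum, Wb
for Bc2 in (0.080, 0.100):      # T; 100 mT is the field at which the gap vanished
    xi = np.sqrt(PHI0 / (2 * np.pi * Bc2))
    print(f"Bc2 = {Bc2 * 1e3:.0f} mT -> xi ≈ {xi * 1e9:.0f} nm")
# ~64 nm for an effective Bc2 of about 80 mT and ~57 nm for 100 mT,
# of the same order as the 64 nm quoted above.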
In Fig. 3C, we illustrate the measured MTF effect for two Al films with a coverage of 3.9 and 8.5 MLs, where the STS was measured for increasing values of B ∥ , up to B ∥ = 4 T. The manifestation of the MTF effect is the appearance of a spin-split gap structure. We quantify the splitting in Fig. 3C by subdividing the gap structure into two independent spin-polarized distributions and fitting two Maki functions with equal gaps, shifted with respect to each other by the Zeeman energy ΔE z . As illustrated in Fig. 3D, we measured ΔE z (B ∥ ) for four film coverages (also see fig. S10) and quantified the splitting of the coherence peaks at each field increment. The resulting linear trend is used to extract the g-factors (see inset of Fig. 3D) with an average of g = 1.98 ± 0.02 (where g = ΔE z /μ B B ∥ for S = 1/2). This measurement shows that the quasiparticles in the ultrathin regime remain free-electron like, and the linearity of the graph further illustrates that spin-orbit coupling is negligible in these films. In addition, we note that the expected Clogston-Chandrasekhar limit for the 8.5-ML film is at B∥CC = Δ/(√2 μB) ≈ 5.5 T, i.e., above our experimental limit of B ∥ = 4.0 T. However, for films with a smaller gap size (with coverages of 11.7 and 17.4 MLs), we could observe a sudden quenching of superconductivity at in-plane fields near the theoretical limit.

Vortex structure in the presence of the MTF effect

The manifestation of the MTF effect in ultrathin Al films provides an opportunity to explore the atomic-scale variations in the conductance in response to variable magnetic field, for example, the resultant vortex behavior in the presence of the MTF effect. Moreover, the presence of large in-plane magnetic fields can induce pairing contributions in the form of odd-frequency spin-triplet correlations, which may act differently around a vortex and exhibit a paramagnetic Meissner response (33,54,55). Using a vector magnetic field, we induced vortices in a given Al film with B ⊥ = 30 mT and simultaneously applied B ∥ = 2.99 T to enter the MTF regime. We subsequently spatially mapped the zero-bias conductance in constant-contour mode, as illustrated for an 8.5-ML Al film (Fig. 4A). The resulting image shows multiple round vortices with an expected flux density (see also section S5). Note that the vortices may occasionally move, likely due to interactions with the tip (also see figs. S9 and S11). This can yield vortices that appear noisy as well as obscure the symmetry of the underlying vortex lattice. To further characterize the structure, we also performed STS along a horizontal and vertical line across a given vortex (Fig. 4, C and D). Both directions show a split gap structure with Δ = 0.45 meV at ~150 nm from the vortex center and a gradual decrease of Δ toward the center with a constant Zeeman splitting. Closer to the vortex center, the spectral gap is rapidly quenched, resulting in an extended region of ~70 nm in diameter without any spectroscopic indications of superconductivity. In this regime, the apparent region with conductance associated with the normal state is radially larger than what is expected for a typical vortex in the absence of an in-plane magnetic field component (e.g., fig. S9). Besides this extended region where the quasiparticle gap is zero, the total radius of a vortex in the MTF regime is also larger compared to the typical vortex shape in the absence of an in-plane magnetic field, as illustrated by comparing the zero-bias conductance profiles in Fig. 4B (also see fig. S9).
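Stepping back to the spin splitting and Pauli limit quoted earlier in this section, both numbers follow from one-line relations, ΔE_z = g μB B∥ and B∥CC = Δ/(√2 μB). The short check below evaluates them for the 8.5-ML film using the g-factor and gap values reported above; it is purely illustrative arithmetic.

# Zeeman splitting and Clogston-Chandrasekhar limit for the 8.5-ML film (arithmetic check).
import numpy as np

MU_B = 5.7884e-2     # Bohr magneton, meV/T
g = 1.98             # average g-factor extracted from the linear fits
delta = 0.45         # meV, gap of the 8.5-ML film far from a vortex

for B_par in (1.0, 2.0, 4.0):                              # T
    print(f"B_par = {B_par:.1f} T -> Zeeman splitting ≈ {g * MU_B * B_par:.3f} meV")

B_cc = delta / (np.sqrt(2) * MU_B)                         # Pauli-limiting in-plane field
print(f"Clogston-Chandrasekhar limit ≈ {B_cc:.1f} T")      # ≈ 5.5 T, above the 4 T experimental limit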
To explain the observation of the vortex structure in the presence of the MTF effect, or the MTF vortex for short, we modeled the superconducting vortex structure using the quasiclassical Keldysh Green's function formalism (56,57), assuming a single-phase winding in the superconducting gap parameter. We assume that the coherence length of the superconductor is large compared to the mean free path, dictated by the sample morphology (sample thickness, island size, and moiré periodicity), such that the quasiclassical Green's function solves the Usadel equation (58). Therefore, we consider the diffusive limit, where only s-wave correlations can persist. This is in contrast to considerations in the ballistic limit (31). We fix Δ ∞ , the gap size at infinite distance from the vortex, and the spin-splitting field h ∥ = μ B B ∥ to the experimental values (h ∥ /Δ ∞ = 0.38) and solve the Usadel equation self-consistently with both the superconducting gap equation and Maxwell's equations (see section S5 for more details). In Fig. 4E, we illustrate the calculated density of states and account for Dynes broadening as well as experimental broadening by convoluting with the Fermi-Dirac distribution with T eff = 250 mK. The simulated distance-dependent spectra show an excellent agreement with the experimental data, reproducing the zero-bias conductance profiles (Fig. 4B), the evolution of the spin-split gap structure, and the extended region with a quenched quasiparticle gap (see fig. S9 for the calculated profile for h ∥ /Δ ∞ = 0). In addition, we can extract the coherence length of ξ = 42 nm. The theoretical model provides a detailed understanding of the MTF vortex structure in a varying in-plane magnetic field. First, the solution to the gap equation consists of both even-frequency (ω e ) spin-singlet (1/√2)(|↑↓⟩ − |↓↑⟩) and odd-frequency (ω o ) spin-triplet (1/√2)(|↑↓⟩ + |↓↑⟩) pairing contributions. Therefore, there is always a coexistence of both types of pairing contributions in the presence of an in-plane magnetic field. To understand the vortex structure, it is important to identify the role of both types of pairing contributions. In Fig. 5 (A and B), we plot the contributions of ω e and ω o pairing correlations, Δ even and Ψ odd , respectively, as a function of distance across the MTF vortex structure, where r = 0 refers to the vortex center. Toward the vortex core, both order parameters decrease monotonically and gradually as the distance to the core is reduced. By evaluating the gap equation for increasing values of h ∥ , we find an increasing contribution of ω o pairs, as well as a more extended and gradual vortex profile. The combination of the shallow vortex shape and the presence of ω o correlations near the vortex core, which are more susceptible to single-particle excitations (54), explains the extended quenched gap region, despite a finite order parameter being present in this region. We also note that close to the vortex center, Δ even is reduced beyond the Clogston-Chandrasekhar limit, which is only allowed for a local region in the superconductor. Mesoscopically, the presence of vortices is driven by a circulating supercurrent that screens the penetrating magnetic flux. Therefore, we additionally calculated the ω e and ω o contributions to the supercurrent density and plot this as a function of distance in Fig. 5C for various values of h ∥ /Δ ∞ .
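Before discussing the screening currents further, the even-/odd-frequency classification invoked here can be stated compactly. The relations below are a hedged restatement of the standard symmetry argument for isotropic (s-wave) pairs in the diffusive limit, written in generic notation rather than the authors' exact formulation.

% Standard symmetry argument (generic notation, not the authors' exact equations).
% For s-wave pairs, Fermi statistics require the pair amplitude f to be odd under
% simultaneous exchange of spin indices and Matsubara frequency:
%   f_{\alpha\beta}(\omega_n) = - f_{\beta\alpha}(-\omega_n).
\[
\begin{aligned}
 \text{spin-singlet:}\quad & f_s \propto \tfrac{1}{\sqrt{2}}\left(|\!\uparrow\downarrow\rangle - |\!\downarrow\uparrow\rangle\right),
 & f_s(-\omega_n) &= +f_s(\omega_n) \;\;\text{(even frequency)},\\
 \text{spin-triplet:}\quad & f_t \propto \tfrac{1}{\sqrt{2}}\left(|\!\uparrow\downarrow\rangle + |\!\downarrow\uparrow\rangle\right),
 & f_t(-\omega_n) &= -f_t(\omega_n) \;\;\text{(odd frequency)}.
\end{aligned}
\]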
In the absence of h ∥ , we find the characteristic diamagnetic response of the screening current (59) (black dashed lines in Fig. 5C), consisting of purely ω e pairs. At finite values for h ∥ /Δ ∞ , we find two contributions to the screening current with opposite signs, originating from the ω e and ω o pairing correlations. This demonstrates a paramagnetic Meissner contribution from the ω o pair correlations. With increasing h ∥ /Δ ∞ , both screening current contributions extend further outward, and the paramagnetic component increases in amplitude, but the total screening current (i.e., the sum of both contributions) remains diamagnetic. In this way, the paramagnetic contribution to the supercurrent, originating from the odd-frequency correlations induced in the MTF regime, gives rise to an enhanced magnetic penetration depth and contributes to the enhanced vortex size. In addition to the aforementioned details, we calculated how the measurable vortex structure evolves as a function of h ∥ . Figure 5D provides a visual representation of the simulated spatial dI/dV signal at V s = 0 mV, showing the evolution of the vortex structure. For h ∥ /Δ ∞ = 0, the vortex starts as the expected structure with a sharp rise in conductance at the core (also see fig. S9). For a persistently rising field value, the high-conductance region broadens and flattens out near the core, as can be seen for h ∥ /Δ ∞ = 0.5, and finally develops a high-intensity ring around the vortex core at h ∥ /Δ ∞ = 0.7 due to the overlap of pronounced inner coherence peaks. We propose that these MTF vortices can appear in any type II superconductor in the presence of a large magnetic field, given that spin-orbit scattering and orbital depairing are negligible. These reshaped vortices are likely to occur in experimental setups, even in the absence of an applied out-of-plane field, since a small misalignment between the sample plane and the in-plane magnetic field direction can induce an out-of-plane component (where B⊥c2/B∥c2 ≪ 1). In our case, we find a small tilt angle of 0.2° (see section S5 and fig. S11), estimated by the observed vortex density at B ∥ = 4.0 T. Consequently, it is interesting to explore larger ratios of h ∥ /Δ ∞ , close to the Clogston-Chandrasekhar limit. In Fig. 6A, we show one instance of a vortex where B ∥ = 3.60 T, while B ⊥ = 0.0 T for an 11.7-ML Al film. Here, STS measurements along a horizontal line and the simulated dI/dV signal (Fig. 6, B and C) reveal the appearance of a zero-bias peak at finite distance from the vortex core, owing to the gradual merging of the two inner coherence peaks. We expect that for even larger h ∥ /Δ ∞ ratios, this will give rise to a pronounced ring as seen in Fig. 5D. For these films, where B⊥c1/B∥c2 is very small, small angular offsets in the magnetic field can lead to vortex formation near the Clogston-Chandrasekhar limit. For experiments where large in-plane magnetic fields are needed to induce a topological superconducting phase, the appearance of the aforementioned in-gap states at zero energy may make it more complicated to assign a topological character in this field regime.

DISCUSSION

In conclusion, we have demonstrated that the superconducting gap size and critical temperature of Al can be enhanced up to threefold in the 2D limit, for films as thin as 4 MLs. Based on thickness-dependent measurements of the superconducting gap combined with variable temperature measurements, we establish that the ratio of Δ to T c remains near the expected BCS ratio.
While the enhancement of superconductivity can be seen gradually as films reach the 2D limit, it remains an open question how the enhanced superconductivity arises. More specifically, it remains to be explored if, besides electron-phonon coupling, other low-energy excitations become relevant in the lower dimensional limit, such as plasmons. It is also particularly interesting to explore if this enhancement is unique to Al, or if it can be generalized to other superconductors in the weak-coupling limit. In addition to the enhancement of the critical temperature, we quantify the type II behavior of these films, including a characterization of the vortex lattice in the presence of the MTF effect. Notably, we find that the shape of the vortex structure in the presence of the MTF effect is strongly modified, including an experimental observation of a gapless region. Our simulations confirm a connection between the extended vortex shape and the presence of odd-frequency pairing contributions, as exemplified by a paramagnetic contribution to the screening supercurrent. In addition, these results highlight that the presence of pairing correlations and the observation of a tunneling gap are not synonymous in a tunneling experiment (60). Therefore, further investigation with pair-sensitive tunneling techniques can provide more insight into the unconventional pairing contributions in the high-field regime of superconductivity (59,61,62). MATERIALS AND METHODS All presented STM/STS measurements were performed using two different homebuilt systems with base temperatures of 30 mK (63) and 1.3 K (system A and system B, respectively). All presented experimental data were measured at T ≈ 30 mK, unless specified otherwise. Since both systems have an identical UHV chamber design (<5 × 10 −10 mbar), the sample growth was performed using the same procedures. First, the Si(111) wafer (As doped, resistivity <0.005 ohm·cm) is annealed at~750°C for >3 hours for degassing purposes by applying a direct current through the wafer. The temperature is measured by aligning a pyrometer onto the wafer surface. Afterward, the Si(111)-7 × 7 reconstruction is prepared by repeated flash-annealing to T = 1500 to 1530°C. Second, the Si substrate is cooled on a liquid nitrogen cold stage (~110 K) for lowtemperature Al growth. We deposited Al from a crucible with a cold-lip effusion cell (CLC-ST, CreaTec) at an evaporation temperature of T = 1030°C, yielding a deposition rate of 0.39 MLs (A) or 1.06 MLs (B) per minute (see section S1 and fig. S1). Third, after depositing the desired amount of material, the sample is placed onto a manipulator arm and annealed at room temperature for 30 min for coverages of >4 MLs and 10 to 20 min for coverages of <4 MLs (A) and 15 min for coverages of 4 to 6 MLs (B). The anneal time is stopped by placing the sample into a flow cryostat-cooled manipulator arm (for system A) and transferring the sample into the STM body. All samples were measured using an electrochemically etched W tip, which was prepared by dipping into an Au(111) crystal and subsequently characterized. STS measurements were done with a standard lock-in technique, where a sinusoidal modulation voltage (f mod = 877 to 927 Hz and V mod as indicated in the figure captions) was added to V s . For variable temperature measurements on system B, we calibrated the used temperature sensor by measuring and fitting the temperature-dependent superconducting gaps of a film of Sn/Si(111) and bulk V(111) (see section S3 and fig. S6). 
For vortex imaging, we spatially mapped the dI/dV signal in constant-contour mode, as done in (49). In this mode, we first recorded a constant-current line scan trace, measuring the values of z, with a closed feedback loop, at a bias voltage (V s = 3 mV). Next, the recorded values of z (including a z offset) were used at the measuring bias (V s = 0 mV). This method was repeated for every line of the image. Sharp topographic features, such as island edges, are likely to contribute to the signal in this measurement mode. In all presented vortex maps, the orientation of the in-plane magnetic field is 10° off the vertical (y) axis of the images (64-73).
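The constant-contour procedure described above amounts to a two-pass loop per scan line. The sketch below expresses that ordering as pseudocode against a hypothetical controller interface; the function and parameter names (scan_line_topography, measure_didv_along, open_feedback) are invented stand-ins rather than a real instrument API, and only the sequencing of the two passes and the bias switching is taken from the text.

# Pseudocode sketch of constant-contour dI/dV mapping (hypothetical instrument API).
def constant_contour_map(scanner, n_lines, v_setpoint=3e-3, v_measure=0.0, z_offset=0.0):
    """For each line: (1) record z(x) at the setpoint bias with the feedback loop closed,
    (2) replay z(x) plus an offset with the feedback open while recording dI/dV at the
    measuring bias."""
    image = []
    for line in range(n_lines):
        z_trace = scanner.scan_line_topography(line, bias=v_setpoint)    # pass 1: closed loop
        scanner.open_feedback()
        didv_trace = scanner.measure_didv_along(
            line, heights=[z + z_offset for z in z_trace], bias=v_measure)  # pass 2: replay contour
        scanner.close_feedback()
        image.append(didv_trace)
    return image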
7,303.8
2022-10-19T00:00:00.000
[ "Physics" ]
Do blogs as a virtual space foster students ’ learner autonomy ? A case study UK higher education institutions strive to foster learner autonomy in their students to create more successful learners, yet due to its complex nature, educators and academics continue to search for effective ways to achieve this. This case study investigates how one virtual blogging space on the Independent Learning (IL) pre-sessional module at the University of Southampton seeks to cultivate learner autonomy. This qualitative study was driven by a lack of empirical research exploring both perceptions and practices in social learning spaces. Investigations into both of these elements help to gain a deeper understanding of how learning spaces function, which is essential to recognising how they can meet their pedagogical goals. Whilst the blogging space was effective in fostering learner autonomy to some extent, there was evidence of conflicts in how the students, IL Facilitators (ILFs), and curriculum designers perceived the blogging space. This suggests the need for more time spent conveying the rationale of the blogging space to ILFs and subsequently students, and it also highlights the wider importance of understanding individual context. Introduction It is widely argued that blogs increase learner autonomy by promoting active and reflective learning in an interactive learning environment (Radcliffe, Wilson, Powell, & Tibbetts, 2008;Williams & Jacobs, 2004;Chang & Yang, 2013). Murray, Fujishima, and Uzuka (2014) note that "how learners imagine a space to be, perceive it, define it, and articulate their understandings transforms a space into a place, determines what they do there, and influences their autonomy" (p. 81). This case study seeks to empirically investigate these ideas in the context of the IL module on the pre-sessional English for Academic Purposes (EAP) programme at the University of Southampton (UoS). By examining learners' perceptions and practices in the virtual blogging space, the investigation strives to determine how learners' autonomy may be affected. Context The study took place on an eleven-week EAP pre-sessional programme at the UoS, focussing on the IL module which aims to support students' transition towards learner autonomy. The module supports students through weekly workshops using a flipped (blended) approach, face-to-face advisory sessions and reflective blogs, as well as various other non-compulsory activities. On the programme, it is the ILF role to run all elements of the course. Conceptualisation of IL IL curricular developers consider IL as "the ability to take responsibility for one's own learning" (SotonSmartSkills, 2017, p. 4), which is one of the most widely cited definitions of learner autonomy (Benson, 2013). The course designers argue that learner autonomy is achieved by learners developing their own learning strategies and subsequently being able to reflect on these. Furthermore, it involves students organising themselves, setting goals and deadlines, and evaluating their use of time and their work. IL module developers appear to hold the view that it is reflection which reinforces development and truly helps learners to progress academically (SotonSmartSkills, 2017). Pre-sessional blogs IL students are given a blogging topic each week and are encouraged to write a weekly blog post of two or three short paragraphs. 
Chang and Yang (2013) have demonstrated that blogs give students the opportunity for learners to develop reflective thought, and in the context of the pre-sessional programme the blogging topics are designed to scaffold this reflection. The blog also serves as a safe space for students to develop their academic skills (time management, critical thinking, and research skills). Williams and Jacobs (2004) and Radcliffe et al. (2008) both recognise the importance of blogs to provide learners with a high level of autonomy whilst allowing peerto-peer learning spaces in promoting active and reflective learning. Therefore, the course blog space is ultimately a platform which allows the ILF to support students' learning through interaction, as well as the opportunity for peer-to-peer interaction. Methodology This case study was heavily influenced by ethnographic research methods, allowing the investigation of social practices in their complexity rather than viewing patterns in isolation, which Dörnyei (2007) argues is one of the key principles of ethnography. Ten students and four ILFs took part in the study. All students were of Chinese nationality in their twenties and enrolled on the UoS pre-sessional programme in preparation for their business-related Master's degrees. The ILFs had diverse professional backgrounds and a varied number of years working on the presessional programme. Data was predominantly obtained from the students' blogs and semi-structured interviews, as well as from course documents (notably the Independent Facilitator Guide -SotonSmartSkills, 2017), and observations for contextual information. A qualitative content analysis approach (Zhang & Wildemuth, 2009) was used to analyse emerging themes to answer the research questions below. • How do pre-sessional students and ILFs perceive, define, and articulate their understandings of the virtual blogging space? • What social and educational practices take place in the virtual blogging space? • To what extent does the above possibly influence students' learner autonomy practices? How do students and ILFs perceive, define, and articulate their understandings of the virtual blogging space? Students and ILFs demonstrated a mismatch in how they perceived the blogs in numerous ways. Students perceived technology predominantly as a social tool, with the majority demonstrating little awareness of how it could be exploited for educational purposes. Yet, at the same time, they believed that they did not have much choice over the content of their blogs, leading students to a perceived lack of 'learner empowerment' which Little (1991) views as one of the key pillars of learner autonomy. Furthermore, the ILFs and students demonstrated conflicting views regarding the purpose of the blogs, with many students confused about its real purpose and not contributing. Finally, there was a conflict in perceptions of the social element of blogs. Although both course developers and students expected the space to be used for student-student interaction, ILFs viewed blogs only as a space for student-ILF interaction. 3.2. What social and educational practices take place in the virtual blogging space? The analysis found that although students did not recognise the blog to be a reflective space, almost all did engage with reflection to some extent, which is crucial to foster their learner autonomy (Little, 1991;Reinders, 2010). 
When evaluating the reflection, two main gaps were identified -lack of engagement with the cyclical nature of reflective processes, and reflections limited to 'surface level' rather than exploring the ideologies behind their actions. In terms of the social interaction, not all students fully engaged with their ILFs as they did not see this as necessary, yet those that did were often more likely to develop their reflective practices on the blog further. An analysis found that responding to students with follow-up questions prompted further reflection than simply responding with statements. As well as scaffolding their learning, student-ILF interaction fostered good rapports between them which could in turn have a positive impact on learning. Lastly, not only was the space used as a reflective tool, but the analysis also found that it served to reinforce some of the skills covered on the module, particularly critical thinking, which was determined to also be an important factor in promoting learner autonomy. Conclusions Interviews with ILFs and students showed there were some misunderstandings from both sides surrounding the intended purpose of the blogging space. In practice, ILFs were only aware of the use of blogs for teacher-student interaction, meaning that they were not promoting the virtual blogging space to its full potential. Similarly, based on Waring andEvans' (2015, cited in SotonSmartSkills, 2017) emphasis on learners having 'voice and choice' to nurture their autonomy, course developers suggested that learners should have the freedom to direct the contents of their blog. However, possibly due to ILF instructions, participants did not perceive themselves to have this freedom. The findings of this study confirm how important the understanding of the pedagogy by teachers and/or facilitators can impact on both students' perceptions and practices as well as using such spaces to their full potential. An analysis of the educational practices that take place in the virtual blogging space revealed that the blog does however, to some extent, influence learner autonomy practices, particularly in terms of reflection and practising other transferable study skills. This study reiterates Murray et al.'s (2014) claims that how learners perceive a space really does affect how they use the space, which in turn influences their learner autonomy. Educators should be aware of these claims in understanding how learners use their own learning spaces and how to help them in their learning journey.
1,968.4
2020-08-10T00:00:00.000
[ "Education", "Computer Science" ]
African values and institutional reform for sustainable development in Africa The way society patterned its institutions and framed its laws, is predicated on the prevalent values of the people, which is rooted in their culture, philosophies, and spirituality. The way such a society makes progress and promotes coexistence is linked to the values that they uphold. Similarly, African values are those axiological principles that form the foundation of social living and social ordering in traditional African society, which can still be relevant today. These social values of the African people are what were used to construct African inclusive institutions in the traditional setting before it was eroded by colonialism and imperialism. Thus, for Africans of today to rebuild inclusive institutions that will guarantee sustainable development across the continent, there is a need to revive and reintegrate the principles from the study of African values into the reform of contemporary African social institutions. Therefore, in this paper, the authors argue that for African institutions to deliver the good of sustainable development, they must be reformed along the lines of foundational principles of African cultural values. The paper employs the philosophical method of critical analysis in dissecting the issues within this discourse. INTRODUCTION Africa is a continent that is blessed by Nature with various natural resources both humans and materials, which can be used for the development of the continent and her peoples.More so, the continent has the potential to lead the world in everything that is positive and humane.However, this is not the case because of bad governance, failing societal and state institutions in delivering the benefits of sustainable development.Consequently, the majority of her peoples are suffering from extreme poverty, malnutrition, widespread violence, decaying infrastructure, and corruption (Bassey, 2016).Thus, in this paper, we shall critically analyze African values and integrate its foundational principles in the reform of contemporary African institutions in order to reap the benefits of sustainable development.The paper argues that for contemporary African institutions to deliver the dividends of sustainable development, they must be reformed along the lines of foundational principles of African values. African values are those axiological principles that Africans cherish and holds firmly (Bassey & Bubu, 2019).They were used to order and patterned African traditional society which made mutual co-existence and cooperation possible.These values could also be the regulatory mechanism that shaped human behavior in traditional African society.Although, when we talk about Africans, we mean the totality of the people that are indigenous to Africa who share almost similar cultural identity and values, with little modifications based on place and time.Some of these cultural values include the value of the sacredness of human life, value of the dignity of human labor and creativity, value of religion and the sacred, value of family and kinship, value of community and complementarity, value of self-reliance, value of good relation (Awoniyi, 2015;Igboin 2011;Kanu, 2015).It is these values we shall be appraising and their foundational principles will be employed in reforming contemporary African institutions in order to engender sustainable development in Africa. 
Contemporary African socio-economic and political institutions were bequeathed to Africans and after independence, African political elites were unable to redesign these institutions to accommodate the foundational principles of African values.This is so because social institutions are an organic outgrowth from a people's values, culture and history.These institutions shape their behavior and their general outlook of life.Since the current institutional arrangement does not reflect the intrinsic values of Africans, it has consequently promoted a sense of alienation and social dislocation among the people.This has also created a dividing line between the people and their political leaders with some seeing political offices as 'Whiteman' property to be plunder and loot.This has promoted a culture of neo-patrimonialism and prebendalism, corruption, nepotism, sectional politics, godfatherism and a clientele state that is structured to favor a few at the expense of the majority.This has made independent African State institutions not to "command the respect, loyalty, and dedication that characterize institutions in the full sense of the term" (Goldsmith, 1998:1).Since the ideological foundation of contemporary African societal institutions does not reflect the common values of the people it then, call for a reform that reintegrates it and transforms it to reflect the values, aspirations, and needs of the people.This is the task before this paper. More so, contemporary African institutional arrangement is extractive in nature, which creates incentives and rewards for a few who are highly connected to the government (Acemoglu & Robinson, 2013).This has consequently widened the gap of inequality between people in society.Today, in the name of democracy and a free market, power is now in the hands of a few, who have come to see their offices as an avenue to enrich themselves, family members, ethnic and religious members.This is one of the factors that are responsible for ethnoreligious conflicts across Africa (Chua, 2004).The situation is so pathetic to the extent that young Africans due to the hardship at home, have continuously taken the dangerous voyage of crossing the Mediterranean Sea to Europe, all in the name of looking for 'greener pasture'-some of them are used for forced labor, prostitution, and slavery.An extractive institutional arrangement is responsible for brain drain syndrome in Africa, whereby the best African minds who ought to be responsible for the development of the continent move to other parts of the world in search of opportunities to better themselves and their families. Consequently, the pathway to sustainable development cannot be achieved in Africa because contemporary African socio-economic and political institutions are exclusive in nature and it is constructed to favor a few, who are wellconnected to Power Brokers in an African nation.It is on this basis that this paper is calling for the opening of the space to reflect that African personality that was the basis for coexistence and mutual cooperation in Africa before the advent of colonialism and neocolonialism.Sustainable development is all about meeting the needs of the present without undermining the capacity of the next generation in meeting their own needs.It is all about social, economic, political and environmental justice which is geared at making humans responsible for their actions in building a just world order. 
The core of sustainable development is human development, which fits properly in African ontology that sees human life as the center of all developmental schemes (Bassey & Mendie, 2019).The imperatives of sustainable development are grounded on the principles of African communalism which is amply captured in the axiological principles of "I am because we are and since we are, therefore, I am" (Bassey, & Pimaro Jr, 2019: 130).This principle sees the community as the foremost and paramount in the conception, design and implementation of all developmental schemes; because life in the African worldview is a communitarian project in which one individual cannot carry it alone.Therefore, sustainable development in the African sense is pro-life because "it must create life, enhance life, promotes life, protects life, save a life, increase life and prolong life" (Ndubuisi, 2013:228).It is not only the life of the present generation also that of the next generation. However, for African to reap the benefits of sustainable development, its institutional arrangement must be reform base on the foundational principles of African values.This is so because the way African institutions (especially its socioeconomic and political institutions) are constructed they cannot deliver the good of sustainable development.Thus, in this paper, we will look at the crisis of African institutions, African values and distilled the foundational principles that can be employed in reforming contemporary African institutions, in order, for Africans to enjoy the benefits of sustainable development.The paper is divided into five sections with each containing the issues that inform the writing of the paper.Section one contains conceptual clarifications, section two discusses African values and its endogenous principles.Section three discusses the institutional crisis in Africa, while, section four discusses the integration of the endogenous principles of African values in the reform of contemporary African Society, and section five contains evaluation and conclusion. African Values African values are those axiological principles that shaped and guided human behavior in traditional African society (Bassey& Bubu, 2019).Traditional African Society is that society that exists before the advent of slavery, colonialism, and westernization.It still exists in what is today referred to as African rural society (village).These African values are still upheld in the village community's society, although some are gradually dying due to westernization in the disguise of modernity.However, we shall consider the following African values: the value of the sacredness of human life, the value of the dignity of human labor and creativity, the value of religion and the sacred, value of family and kinship, the value of community and complementarity, the value of self-reliance, value of good relation.These values are an aggregation of ethics, sociopolitical and economic values and aesthetic.It is rooted in the principles of African communalism or Ubuntu. 
Institutions According to Robinson (2014:3), "Institutions are those rules (both formal-written laws and the constitution and informal -like social norms) that structure economic, political and social life and generate different patterns of incentives, rewards, benefits, and costs."This entails that institutions are an organic outgrowth from a people's belief, values, and norms which give society its structure.Gacan (2007:37) also maintains that institutions are those "norms and values of a society, together with those organizations that are capable of changing and promulgating those norms and values".He went further to submit that, the state, the market, and the civil society are major players in shaping and reshaping society's institutions which makes the critical players in the development of a society.In precise terms, institutions comprise of the economic market conditions, the legal framework, public policies, respect for human rights, governmental agencies, social services structure, the family structure, religion and the sacred; the educational and the cultural dimensions of society.For any society to function well and enjoy some level of stability its institutions must be strong and democratic that accommodates the majority of the citizens that make up that society.Therefore, to function well, society also needs other public services; roads and a transport network so that goods can be transported; a public infrastructure so that economic activities can flourish, and some type of basic regulation to prevent fraud and malfeasance (Acemoglu & Robinson, 2013:76). Sustainable Development According to the United Nations General Assembly (1987:43), sustainable development is a "development that meets the needs of the present without compromising the ability of future generations to meet their own needs".This definition highlighted the importance of meeting humans' economic needs without doing damage to the environment.The contention among scholars before this definition by the UN is that there is no relationship between economic development and environment sustainability because all productive resources are in the environment which must be exploited in order to foster economic development.Consequently, this has led to pollutions, climate change, global warming, and other environmental hazards as a result of humans' exploitative activities on the environment.However, currently, the concept of sustainable development has gone beyond environmental sustainability to include economic sustainability and sociocultural sustainability. 
According to Emas (2015:2), "the overall goal of sustainable development (SD) is the long-term stability of the economy and environment; this is only achievable through the integration and acknowledgment of economic, environmental, and social concerns throughout the decision-making process".It is within an integrative framework that the concept and practice of sustainable development can be possible.This in turn demands a new kind of thinking that looks at issues of human needs from a holistic perspective that put the imperative of intergenerational equity in decision making.It is in the relation of this, that Emas (2015:6) opines that "sustainable development requires the elimination of fragmentation; that is, environmental, social, and economic concerns must be integrated throughout decision-making processes in order to move towards development that is truly sustainable".Hence, the goal of sustainable development is to preserve life, protect life, and prolong life not only in a single generation but generations to come. Value of the Sacredness of Human Life In the hierarchy of Africa values, the sacredness of human life is the utmost important.The respect and dignity accorded human life cannot be over-emphasized.This is one of the reasons that the Igbo of South East Nigeria uphold the concept of "Ndubuisi", which mean life "Ndu" is great (isi).Respect for human life spreads beyond the confine of the nuclear family, tribal or nationality but embrace humanity in general.Hence, members of the extended family, community, and tribe are regarded as brothers whose lives must be preserved and protected (Igboin, 2011:99).More so life is seen in two dimensions in African thought; according to the Annang people of South-South Nigeria, life can be viewed from two perspectives which include "Eti Uwem" and "Uwem Akpok"."Eti Uwem" is regarded as the good life which is characterized by peace, respect, pleasure, happiness, material satisfaction and social relevance.This kind of life is regarded as good and every member of the society desires and pray for this kind of life.On the other hand, "Uwem Akpok" means the lizard life; it is a life that is characterized by pain suffering, deprivation, agony, poverty, and low class in society.The society does not regard this state of life as life because it undermines human happiness and fulfillment.The average Annang man prays against this kind of life and when this state falls on him, he regards it as a sign that life is not fair to him. In the African mind, human life has an ultimate worth because it is intrinsically related to the Creator of life (God).The value of life draws its meaning from God the creator of life.Traditional African believes that God creates life and as such, it is the duty of man to preserve and protect this divine gift.This is why suicide and murder are viewed as a serious abomination against the people, the "gods" of the land and the ancestors.The value of life is so engraved in the psychic of the African people that they maintain that their dead relatives still interacts with the living which allows every family to call their ancestors in time of need, celebration and peace. 
Since human life is paramount to the African people, the welfare and well-being of man are at the center of all societal thoughts and actions. In other words, man is valued above every other possession. More so, there is a connection between the value of human life and a wide sense of brotherhood, which may not be biologically based. The concept of brotherhood in Africa goes beyond the Western understanding of it. Therefore, in whatever circumstance, the spirit of brotherhood stimulates a patriotic response and disposition of one towards another.

However, there are a handful of expressions of the abuse of human life despite the appreciation of life as a value. Inter-tribal wars and ethnic conflicts leading to the death of even blood-related African brothers are some of the experiences in the traditional African setting that negate the spirit of brotherhood. There is also Africans' involvement in the ignoble slave trade, a dehumanizing experience in the history of Africa. Some African scholars dismiss these instances by attributing them to the aftermath of European contact with traditional Africans. Nonetheless, there used to be the practice of burying people along with deceased kings and nobles, which Gyekye holds is based on false metaphysics (Gyekye, 1996:26). All the instances highlighted above do not negate the fact that the value of life is the supreme value in the African system of values.

Value of Religion and the Sacred

Africans are notoriously religious (Mbiti, 1969), in the sense that religion permeates every facet of African life. Africans carry their religion everywhere they go. They carry their religion to the parliament and the state house if they find themselves in government or politics. If they find themselves in the military, in business, or in a foreign land, they carry their religion there. This is why, in traditional African society, there is no atheist. Africans give a religious interpretation and meaning to everything that happens to them. If they have a bountiful harvest, they regard it as God's blessing; otherwise, they regard it as the wrath of God. This entails that all the African does, says, and permits is impregnated with a vision of the divine, and all natural reality is explainable in relation to the supernatural.
However, moral values rest on religious values; that is, it is perverse or sacrilegious to separate moral and religious values. To corroborate the above point, Kanu (2015:157) opines: "it is the presence of religion that lends meaning and authority to (moral) values. The sense of religion which is our spiritual selves is that instinctive feeling of immortality". The value of religion is the fundamental value that gives meaning to other moral values. This is why African moral imperatives are fully grounded on African religious beliefs. Religion in the African worldview is the custodian and enforcer of morality in African society. Africans use religious practices like taboo, totem, armlet, charm, and divination to promote social justice and adjudicate cases. Furthermore, religion in traditional African society is not an individualistic affair but that of the community (Ikegbu and Bassey, 2019). Almost everybody worships the god of the community, but each family or clan still maintains its individualized gods or deities. Hence, it is the sense of religion and the sacred that naturally endowed man with respect for human life and human dignity. Again, the religious or spiritual element in the African man characterizes his relationship with the divine, and it is an indubitable fact that the value of religion promotes moral excellence.

Cosmologically, the universe from the African understanding is a composite one: a blending of the divine, spirits, humans, and animate and inanimate beings, which constantly interact with one another. These visible and invisible elements that comprise the African cosmology are what have been referred to as the "Force of Life" or "Vital Forces" (Igboin, 2011:98). The vital forces are hierarchically structured in such a way that God, the creator of the universe, is at the top. In this pyramid structure, where God is at the top, invisible forces of life like divinities, spirits, and ancestors form part of the hierarchy. It is to this God, called Osanobuwa (Edo), Olodumare (Yoruba), Chukwu (Igbo), Ubangiji (Hausa), Oghere (Urhobo), and Abasi (Efik), that commitment is ascribed. Hence, the ethical or moral standards of the Africans are also believed to be derived from the injunctions of God.

Values of Community and Complementarity

Africans place a high value on communal living. Communal values express the worth and appreciation of the community; they are the values which guide the social interaction of the people towards a common goal. Interpersonal bonds go beyond biological affinity in expressing the values of communality: Africans share mutually, they care for one another, they are interdependent, and they are in solidarity with one another. Whatever happens to one member happens to the community as a whole (Igboin, 2011:99). The joy and sorrow of one extend to other members of the community in profound ways. The willingness to help others for the development of the community is reciprocal. It is within this communality that Africans are most fulfilled. It is on this note that Mbiti (1969:1) submits that "I am because we are, and since we are, therefore I am". This is dialectically opposed to Western rugged individualism, which has unfortunately threatened the very root of African communalism as a result of colonial activities in Africa.
Africans do not reject individual values but place a high premium on communal values and everything that promote the good of the community.In African society, the individual comes to the awareness of himself as a person through the framework provided by the community.Existence only becomes meaningful when an individual lives in the community of fellow humans.It is beautifully expressed by this Igbo aphorism "a so adina" which means let me not be alone.This is so because the community gives the individual its identity and the existential tools for self-realization and actualization.Hence, in a situation where there is a clash between individual value and community values, the community takes precedence over individual values, though individualistic values are linked closely with communal values.Africans have and also appreciate personal will and identity.Among the Edo, individualistic values are expressed in the following Maxim: "You first see the forest before calling the trees by their names".This means that from afar off the trees make up the forest, but on entering into the forest one can begin to identify different trees by their names.Communal values guide the social life of individual members of the community and appeal to all that matters but when one takes an extensive look at the people, one discovers that there is individualistic value; however, both communal and individualistic values co-exist perfectly together.They may sometimes clash but the communal values are the superintending values of persons in the community and are not consciously trampled upon (Gyekye, 1996:4). Moral imperatives are usually constructed in such a way that its ultimate end is to promote the community.The total well-being and welfare of the community are essentially important to the extent it informs moral values in African traditional society.Thus, responsibility, kindness, honesty, hospitality, accommodation, generosity, companionship, faithfulness, fruitfulness, love, dignity, diligence; are considered to be moral values.These form the bedrock of social value which abhors ethical egotism.Ethical egotism in its conceptual meaning holds that everybody is to pursue his own welfare and interest (Igboin, 2011:100), which gives rise to selfishness.Thus, communal living and the sharing of the interdependence and interrelationship of the community is what characterizes life in the African sense.Through this value, the African maintains good neighborliness, mutual assistance and sharing of each joy and sorrow. 
Value of Family and Kinship In Africa, the value of the family cannot be over-emphasized, it is the primary unit of the social life of the community.Its cohesion is a sine qua non for the unity of the community.As fundamental as the family is, it has social and moral values.The nuclear family functions within the extended family.Interestingly children have their rights and obligations towards their parent, likewise the parent towards their children.In this unit, marriage becomes the basic institution for the establishment of a family.Marriage as part of rites of passage into family life has its social and moral code in various societies which makes it worth the name.For a woman to be found a virgin carries a high value and dignity that attracts respect and honors to her parent.To bear children is very important because of the socio-religious implications.Divorce had no place except as an excruciating last resort.Even till date, Africans still view divorce as obstructing the solidarity, mutuality, love, care, togetherness, cohesion, nourishment, fellowship and continuity of the family (Igboin, 2011: 100). Furthermore, the family and kinship give the individual person its identity and place in society.Any man without a family is a non-existing person.The family is the custodian of the individual person.The family forms the basic unit of the society and it gives the individual person it origin, functions, and expectation as regards the entire society.It is the first part of socialization which the individual receive which equips him or her to function in the society.In traditional African society, the family also guides the career choice of individuals.For example, if a family is known to be very good in a particular craft every member of the family, from one generation to another tends to follow the craft.Therefore, every Africans live to preserve and protect their family names and dynasties because family provides the basis for the engagement of the community. Value of Good Relationship Life in the African community is based on the philosophy of live-and-let-live.The principle is based on the concept of the "clan vital" and applies to a concrete community.The Igbo of South East Nigeria put it this way "biri ka biri" which means "live and let's live in harmony".The relationship between individuals recognizes their worth as human beings and not only what they possess or what can they do for each other.However, these can come as secondary considerations, in terms of reciprocity and in terms of interpersonal relationship.People help one another without demanding immediate or an exact equivalent remuneration.Everyone is mindful that each person has something to contribute to his welfare, sometime and somehow.A Hausa proverb illustrates this point clearly when they say: "friendship with the ferryman right from the dry season means that when the rain comes, you will be the first to cross".This proverb emphasizes consistency in friendship, in that, the worth of the ferryman, as a human being is not determined solely by what he can offer during the rains; hence he must be befriended right from the dry season when his occupation is not in strict demand. 
The art of dialogue and conversation is a cherished value in African human relations.People freely discuss their problems and look for suggestions and solutions together.The unwillingness to talk to people about either private or public affairs can be interpreted as unfriendly conduct.Above all the African believes that he who discusses his affairs with others hardly runs into difficulties or makes mistakes in the execution of his plans.According to the Igbo of Nigeria: onye na agwa madu nsogbu ya, na acho uzo niyinyan ka" which means he who tells people what he is passing through will always find solutions. In traditional African community everyone is accommodated; the weak, aged, sick are affectionately taken care of in the comforting family atmosphere.The "comforting family atmosphere" is provided by the extended family system also.It is a system that ultimately rested and still rests on the philosophy of live-and-let-live, otherwise known as "the eagleand-kit principle.The African by this value is obligated to care for the widows and orphans of his deceased relative.Failure to do this earns him strong public criticism and as a result, it becomes difficult to find someone in the community without help.Therefore, no beggar existed in the true sense of the word. Value of Dignity of Human Labour and Creativity The Africans highly appreciate hard work.Even the indolent also acknowledges that hard work is a value that engenders positive influence in the family and communal circles.The hardworking African makes persistent efforts regardless of failures and setbacks.In fact, those who were not industrious became the initial victims of the slave trade (Igboin, 2011:100).Ironically, apart from the children of the kings and nobles, the hardworking people of Africa were almost the last in receiving western education at the inception of Christian missions and western colonialism.Parents only sent their lazy children to school while the hardworking ones were doing the family job.The value of hard work is appreciated as work was regarded as a cure for poverty.Poverty or failure is an orphan while success has many fathers and long genealogy.Nobody wants to associate with lazy people, and many of them cannot even get married or perform the required social responsibilities demanded by the family and community.Wealth results from hard work and the Igbo of Southern Nigeria put it this way"aka aja ebute onu manu manu."The hand that work will put food on the table.This implies that human labour is the gateway to better man's material well-being in the community.Hence, anyone who possessed wealth he could not account for was viewed with suspicion: the community scorned at such a person.The African uphold the idea that hard work by individuals uplifts the material and intellectual well-being of the community.The human person through its ingenuity creates goods and services that address human want that can be exchanged for other material values.This can boost, the local economy and foster community prosperity.The principle which the value hinges upon is that human life is important and every human effort must be channeled towards the preservation and betterment of that life. 
Value of the Sense of Solidarity The value of solidarity is evidently seen in the building of a hut or house for a kinsman, especially of someone that is old or a person that is not well to do in the material sense of it.This act is always seen as a collective responsibility that calls for the contributions of many.More so, the whole community or kinsmen as the case may be can mobilize a workforce to the farm of a dead relative or someone who is bereaved to help out in maintaining the farm and keep the bereaved family going.When such a job is to be done, the whole community turns out en mass with their supplies and music and proceeds to sing and dance their way through to the successful execution of each particular job.In this way, work becomes a veritable means of socialization and solidarity, this type of solidarity is a vital value for sustainable African development. Furthermore, the concept of a man as a person who co-exists with others gives rise to collective responsibility, interdependence and social living which is an important aspect of African socio-religious life.In traditional African society, people help one another without demanding immediate or an exact equivalent remuneration.Everyone is mindful that each person has something to contribute to the general welfare of the society.Also, the African sense of solidarity is evident in people's action when someone dies in a community or village.In most cases, people forgo their personal businesses in solidarity, not by a sanction to condone with their bereaved family and to assist in burial arrangements and funeral of the dead person.In this way, the entire community gets involved in mourning rituals.Hence, Africans due to their ontological makeup are people who show mass solidarity in the support of individual members of the community who may be suffering from one calamity or the other.In this way, the community is enriched with the true spirit of brotherhood, which to some extent is lacking in today urbanized and westernized African society. 
Value of Self-Reliance

Traditional Africans believe in the capacity of members of the society to chart their own path of progress through independent thoughts and actions, harnessing the resources of their immediate environment without depending perpetually on others. Total dependence on someone is seen in Africa as a sign of loss of identity and capacity as a human being. As such, Africans through this value seek to promote their own development or improvement through the pursuit of their own indigenous methods, principles, and resources, without seeing their survival as dependent on someone else. Self-reliance is not self-sufficiency but the ability to meet basic needs without over-demanding from others who are likewise trying to meet their own needs. African interdependence on one another is not based on demanding from others what one can do for oneself in normal circumstances. Therefore, from our study of African values, one thing stands out clearly: African values are communalistic in nature. The implication of this is that the principle of social living and social ordering is based on communalism. Communalism as a principle of social ordering and social living is based on the assumption that the community or society is superior to the individual, and those who do not share in the interconnectedness and interrelatedness provided by the community are considered strangers. The individual has duties towards the community, and the community has rights towards the individual, which include his/her property, body, and the way he/she lives. The community is the foundation of all African social arrangements, and it is the framework that provides the interdependence, interconnectedness, and interrelationship that characterize life in traditional African settings.

THE CRISIS OF INSTITUTIONS IN CONTEMPORARY AFRICAN STATES

Contemporary African institutions are colonial creations that were bequeathed to Africans after political independence. They do not reflect the values and the culture of the African peoples. They were constructed initially for the good of the colonial masters and their business interests in Africa. In other words, they were extractive in nature, with minimal incentives for the majority of Africans. Thus, after political independence, the African elites who took over from the colonial masters never made any effort to decolonize the very foundations of colonial society, which is predicated on extractive institutions. Post-independence African society still maintains the colonial practice of excluding the majority of the masses from resource sharing and the allocation of incentives. This time it is no longer between the 'Whiteman' and the rest of Africans but between Africans who are in power and those who are not. Consequently, this situation has created a backlash in Africa, which has led to wars, civil unrest, political instability, and social dislocation. Some state structures in Africa are too weak to manage the diversities that are inherent in independent African states. Almost all African states are still battling to establish their legitimacy before their people because the state did not grow organically from the people's values and culture; as such, the people still see the state as the property of the colonizers and an arena for misappropriating the common patrimony of the community. This is the root of the crisis of institutions in Africa today.
Legally, contemporary African courts are elitist in nature which has excluded the majority of Africans who are not schooled in the traditions of the west.The court is built to favor the winner and punish the loser in litigations.This winloss mentality is not African in nature because, in African traditional legal system, it is all about redistributive justice and reparations for offences against the land and the gods.In that system, judgment on any matter is reached based on the reasoning of the elders in council and the verdict of the oracle, which at the end all litigants are happy because even if the elders in council can be manipulated the oracle cannot be manipulated.Today, it is not the case, what we have is a situation whereby justice is for the highest bidder who can manipulate judges with their money to secure court verdict for their favor.This has weakened the people dependency and reliability on the judiciary system. Educationally, contemporary African education is western not in terms of the content of the knowledge but in terms of the values, it is imparting in learners.The system priced paper qualification above knowledge pursuit and skills development.The system is producing people who cannot link to their root and see everything in their root as diabolic, primitive and uncultured.It is this reason, Africans who after passing through western education see nothing good in Africa again.This is a big problem because an education system that does not develop the inherent capacities of a people, but rather replace it with something that is not intrinsically theirs is not oriented towards development. THE INTEGRATION OF THE ENDOGENOUS PRINCIPLE OF AFRICAN VALUES FOR SUSTAINABLE DEVELOPMENT Communalism is the endogenous principle from African values which hinges on the axiological principle of "I am because we are since we are, therefore, I am".Conceptually, communalism is a principle of social ordering and social living which is based on the assumption that the community comes first and it is superior to the individual; and those who do not share in the interconnectedness and interrelatedness provided by the community are not persons (Asuquo 2016:38).In traditional African society, this is the principle that informs the structure of society and the pattern of social ordering.People are disposed and open to the mutuality and the spirit of community and solidarity, which makes cooperation and collaboration possible.It is the need to foster this mutuality that the whole system of African ethics where built. 
In traditional African society, the principle of communalism is what informed African brotherhood, political organization, and economic pursuit (Samuel & Leonard, 2018), just as we have established that political and economic institutions are the strategic institutions for developing any society and people. The way public decisions are made, enforced, and sustained in the long run is grounded in politics. The way we structure our politics is a determinant factor in the way we can develop as a people. Also, the economy is all about how we share the common goods of the society for the welfare and wellbeing of the majority on an equal playing ground. It determines the material wellbeing of the society, which in turn determines the social consciousness of the people. Economics is all about the wealth of the nation and how it should be created, administered, and shared for the total prosperity of the society (Okoro, 2011:12). The prevailing social values and ideology inform the way any society goes about this ordering of society. This is why in the West, the premium placed on individualism informed the creation of liberal democracy as an ideology for political arrangement and capitalism as an ideology for economic arrangement. The whole institutional structure of some Western societies is rooted in the individualistic values they hold, and it is these values that form the way they engage with the world. Thus, our concern is how we can integrate African values into the reform of contemporary African political and economic institutions for sustainable development. This shows that we need to work out the underpinning ideological principles, which will be the foundation for building African institutions within colonial legacies in Africa. It is a fact of history that Africans existed before the advent of Western and Arabic intrusions into Africa. This entails that Africans had their own indigenous ways of doing things, which were consequently upstaged by colonialism through colonial religions and education. Furthermore, we need to be aware that this effort of using African values as a basis for the reform of contemporary African political and economic institutions is part of the process of African 'self-retrieval and decoloniality', in order to recover the lost indigenous ways of doing things, which can also help in addressing African and human problems today. Since political and economic institutions are key to the development of any society, our concern will be how we can integrate the principle of African communalism in reforming them, for Africa to reap the benefits of sustainable development.
Politically, contemporary African political structure is presidential which give power to the center and weaken the peripheral units of the state.Also, African states are becoming democratic in the sense that some usually conduct periodic elections and some have a parliamentary arrangement for lawmaking.But the overall philosophy is built on free-market democracy (Chua 2004).Free market democracy in Africa is an arrangement that includes privatization of state assets, elimination of subsidies in education and health care, open border in the name of free trade and foreign investments; and periodic election with universal suffrage (Chua 2004:16).Thus, the moving force of this arrangement is capitalism, which is what is obtainable in Africa today.The weakness of this system is that it does not have an inbuilt mechanism for wealth distribution because it builds on the "win-loss" philosophy, the individual win and society loss.Therefore, Africans can pride one individual as the richest man in Africa and society is still poor. This ideology which is the ground norm for formulating public policies in Africa is un-African and it cannot deliver the good of sustainable development in Africa.Hence, to reclaim the communalistic existence of traditional Africa in the modern world, we need to negotiate between socialism and capitalism.Although, we need to be aware that traditional African society is not a society of "I" and "I-alone" neither is it a society of "We" nor "We-alone" but it is a society of "we and I".The implication, therefore, is that the "I" and the "we" were symbiotically entangled (Okoro, 2015:16).In the same vein, Okoro (2015:16) submits that the hallmark of every traditional African socio-politico-economic system is to harmonize the extremes of capitalism and socialism and integrate the two through African spirituality which must be grounded on the doctrine of metaphysical symbiosis.What then is the practical entailment of this submission? Coming from the on -going, the political cum economic ideology should be built on regulated capitalism as different from laissez-faire capitalism and consensus democracy as different from liberal democracy.Regulated capitalism is an arrangement whereby individuals are free to pursue their economic interest but must play by the rules set by the society through the laws of the state.The state through government is the umpire and not a player in economic activities in the society.The state regulates and redistributes the wealth of the society base on the principles of equitable shared prosperity which is rooted in the African communal economy of the "one for all, and the all for the one".The principles of shared prosperity include social justice, equal opportunities, equity, philanthropy and solidarity, and brotherhood and environmental protection and generational investments. 
On the other hand, consensus democracy which is an arrangement whereby political decisions, structure, and processes are based on the consensus of the people through their representatives.This arrangement recognizes the power blocs in the community and builds consensus among them.These power blocs include: the ethnic bloc, the educational bloc, economic bloc, territorial bloc and at the apex is the state bloc with all its bureaucracies.The political system must accommodate these blocs and allocate them their area of operation in society.In the selection of people who will man the state bloc, the ethnic bloc, and territorial bloc should be the ground of selection.This entails that from ethnic communities, people can be selected to represent the ethnic communities in territorial blocs who will now form the state bloc.For this to be possible the leadership philosophy should be based on ethical leadership and African ethics should be the basis of testing those who want to function in the state bloc. CONCLUSION This paper has highlighted African values and distilled it endogenous principle, which is communalism and use same as the foundational principle for reforming contemporary African institutions, in order for Africa, to reap the good of sustainable development.This paper works out, the principles of regulated capitalism and consensus democracy in the reform agenda of contemporary African socio-politico-economic institutions.The paper maintains that traditional African values can be the foundational principles for African social living and social ordering in the modern world.This African alternative to doing things is also necessary for fostering human flourishing in all segments of life.Therefore, for Africans to reclaim their identity in the global space, Africans must globalize their values and Africanize the global with African values.
Binding Energies and Optical Properties of Power-Exponential and Modified Gaussian Quantum Dots We examine the optical and electronic properties of a GaAs spherical quantum dot with a hydrogenic impurity in its center. We study two different confining potentials: (1) a modified Gaussian potential and (2) a power-exponential potential. Using the finite difference method, we solve the radial Schrodinger equation for the 1s and 1p energy levels and their probability densities and subsequently compute the optical absorption coefficient (OAC) for each confining potential using Fermi’s golden rule. We discuss the role of different physical quantities influencing the behavior of the OAC, such as the structural parameters of each potential, the dipole matrix elements, and their energy separation. Our results show that modification of the structural physical parameters of each potential can enable new optoelectronic devices that can leverage inter-sub-band optical transitions. Introduction Quantum structures such as quantum wells, quantum dots (QDs), and nanowires are low-dimensional semiconductors that have enabled several technologies, such as singleelectron transistors [1], photovoltaic (PV) devices [2], light-emitting diodes (LEDs) [3], and photodetectors [4][5][6][7][8].QDs are particularly useful in optoelectronic applications due to quantum confinement effects that enable efficient luminescence, large extinction coefficients, and extensive lifetimes [9][10][11].For this reason, QDs are presently employed in various applications, including LEDs, photovoltaics, biomedical imaging, solid-state lighting, QD displays, biosensors, and quantum computing materials [12][13][14][15][16][17][18][19].QDs can be considered a middle ground between molecules and semiconductor materials that enable quantum mechanical properties that can be tailored by varying their physical features [20][21][22][23][24][25][26].For example, inserting a hydrogenic impurity at the center of a QD center affects the electronic distribution of all energy levels, their separations, and the electronic wavefunctions.This, in turn, affects the electrostatic attraction between the hydrogenic impurity and free carriers, the dipole matrix elements, and the optical absorption coefficient (OAC).There have been several studies that have examined the effects of inserting an impurity in the center of a QD [25,[27][28][29][30][31][32][33].The OACs in coupled InAs/GaAs QD systems were studied by Li and Xia, who found that the optical properties in these QD systems were different from QD superlattices [34].Schrey and coauthors studied the polarization and optical absorption properties in QD-based photodetectors and found that the QD enables large effects on the distribution of minibands in the superlattice [35].The variation of the OAC and nonlinear refractive index (NRI) as a function of the applied electric field, temperature, and hydrostatic pressure in a Mathieu-like QD potential with a hydrogenic impurity was examined by Bahar et al. [36].Batra and coauthors also evaluated the effect of a Kratzer-like radial potential on the OAC and NRI of a spherical QD [37].Bassani and Buczko studied the sensitivity of the optical properties to the impurity of donors and acceptors in spherical QDs [38].Narvaez and coauthors examined OACs arising from conduction-to-conduction and valence-to-valence bands [39]. A. Ed-Dahmouny et al. 
studied the effects of electric and magnetic fields on donor impurity electronic states and OACs in a core/shell GaAs/AlGaAs ellipsoidal QD [40].In their study, they showed that changes in the polarization of light caused blue or red shifts in the inter-sub-band OAC spectra, depending on the orientations of the two external fields and the presence/absence of a hydrogenic impurity.Fakkahi et al. examined the OACs of spherical QDs based on a Kratzer-like confinement potential [41].In their study, they demonstrated that the OACs and transition energies (1p -2s and 2s -2p) were strongly influenced by the structural parameters of the Kratzer confinement potential. In addition, the oscillator strengths in spherical QDs with a hydrogenic impurity were computed by Yilmaz and Safak [42].Finally, Kirak et al. studied the effect of an applied electric field on OACs in parabolic QDs with a hydrogenic impurity [43].In recent years, GaAs-based spherical quantum dots have emerged as a subject of intense research due to their unique properties.GaAs has a high electron mobility, good thermal stability, and excellent optical properties.Moreover, GaAs is widely used in thin film production and high-quality epitaxial growth methods.These factors collectively render GaAs quantum dots appealing for advancing high-performance semiconductor devices and facilitating nanoscale optoelectronic applications. In this work, we compute the two lowest energies, E 1p and E 1s , in GaAs spherical quantum dots as a function of the structural shape of two confining potentials: (1) a modified Gaussian potential (MGP) and ( 2) a power-exponential potential (PEP).We then present a complete analysis of OACs and binding energies as a function of energy separation and dipole matrix elements, as the structural parameters of these potentials are varied.The binding energy effectively captures the attractive force between the free electrons in different levels and the inserted impurity.Section 2 provides the mathematical details of our approach, and Section 3 presents our results for each potential. Geometrical Forms of MGP and PEP Potentials Before calculating the different energy levels and electronic wavefunctions in the QD, we first evaluate the effects of the structural parameters on the geometrical shape of the confining potentials.The spherical symmetry of these potentials introduces a quantization of the angular motion via the angular and magnetic numbers.Within this quantization, the total carrier wavefunction can be expressed by the well-known spherical harmonics.The adjustment and control of electronic transitions in QDs can be attained by varying the size of each layer in the structure or by changing the structural parameters governing the shape of the potentials. In the present paper, we examine two confining potentials: (1) the power-exponential potential, V PEP (r), and (2) the modified Gaussian potential, V MGP (r).These potentials are generated by the application of an external voltage and barriers/wells of the structure with analytical expressions given by the following [44][45][46][47][48]: where V c and R 0 are the depth and range, respectively, of these potentials, and q is a structural parameter.Figures 1 and 2 plot the two potentials as a function of the radius for a GaAs QD with different values of the structural parameter, q. 
Figure 1 shows that V_PEP(r) has a global minimum of −V_c at r = 0 and increases for higher values of r. For low values of q, the potential has a parabolic shape but gives a square-like confining potential for larger values of q. By increasing q, the potential widens but has the same value at r = R_0 regardless of the value of q. These geometrical changes enable us to understand their effect on the desired energy levels and to optimize the transitions between the initial and final levels to obtain the desired absorption.

Figure 2 plots the modified Gaussian potential as a function of the radius, r. To allow for a straightforward comparison, the same radius of R_0 = 200 Å is used in Figure 2 as was considered in Figure 1. When q = 2, the power-exponential and modified Gaussian potentials resemble each other; however, when q is increased, the shape of the potential tends to a negative Dirac-delta function at r = 0, which will dramatically affect the confinement of the wavefunctions and the energy levels of the ground and first-excited states.

Optical Absorption of the MGP and PEP Potentials

To compute the E_1s and E_1p energy levels and the R_1s(r) and R_1p(r) wavefunctions, the radial part of the Schrödinger equation is solved with each of the confining potentials within the effective mass approximation. The Schrödinger equation with the hydrogenic impurity is given by [49,50]: where ℏ, ε, and ℓ are the reduced Planck constant, dielectric constant, and angular quantum number, respectively. V_conf(r) is the confining potential, V_MGP(r) or V_PEP(r). In addition, R_nℓ(r) and E_nℓ denote the radial wavefunction and energy level of the confined electron.
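As a rough illustration of how such a radial eigenvalue problem can be solved numerically, the sketch below builds a finite-difference Hamiltonian and diagonalizes it. It is an assumption-laden example, not the authors' code: the paper reports a MATLAB implementation, whereas this sketch uses Python/SciPy, works in effective Rydberg units for GaAs, uses the substitution u(r) = r R(r), and substitutes a placeholder Gaussian well for the (unstated here) V_PEP or V_MGP expressions; the mesh radius and the names V_conf, R, N are illustrative choices.

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

# Effective Rydberg units for GaAs: energies in Ry* (~5.6 meV), lengths in
# a_B* (~100 Angstrom), so that hbar^2/(2 m*) = 1 and e^2/eps = 2.
Z   = 1          # 1 = hydrogenic impurity at the center, 0 = no impurity
ell = 0          # angular momentum: 0 for the 1s state, 1 for the 1p state
R   = 4.0        # outer mesh radius in a_B* (assumption, ~400 Angstrom)
N   = 1200       # number of grid points, as quoted in the text
dr  = R / N
r   = dr * np.arange(1, N + 1)

def V_conf(r):
    # Placeholder confining well: a Gaussian of depth V_c = 0.228 eV
    # (~40.7 Ry*) and range R_0 = 200 Angstrom (2 a_B*). Substitute the
    # paper's V_PEP(r) or V_MGP(r) here.
    V_c, R0 = 40.7, 2.0
    return -V_c * np.exp(-(r / R0) ** 2)

# Radial equation for u(r) = r * R(r):
#   -u'' + [V_conf(r) + ell(ell+1)/r^2 - 2 Z / r] u = E u,  u(0) = u(R) = 0.
diag = 2.0 / dr**2 + V_conf(r) + ell * (ell + 1) / r**2 - 2.0 * Z / r
off  = np.full(N - 1, -1.0 / dr**2)

# Three lowest eigenpairs of the tridiagonal Hamiltonian.
E, U = eigh_tridiagonal(diag, off, select='i', select_range=(0, 2))
print("Lowest eigenvalues (Ry*):", E)

# Radial wavefunction R_nl(r) = u(r)/r, normalized so that
# integral |R_nl|^2 r^2 dr = integral |u|^2 dr = 1.
u0 = U[:, 0]
u0 /= np.sqrt(np.trapz(u0**2, r))
R_nl = u0 / r
```

Running the same script with ell = 0 and ell = 1 (and Z = 0 or 1) gives the 1s and 1p levels whose trends are discussed below.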
Including or neglecting the hydrogenic impurity is controlled by setting Z = 1 or Z = 0, respectively. To find the values of E_nℓ and R_nℓ(r), the Schrödinger equation is discretized and transformed into an eigenvalue problem, Hx = λx, where H is a tridiagonal matrix, and λ and x represent E_nℓ and R_nℓ(r), respectively. After discretization, the Schrödinger equation takes a matrix form whose tridiagonal elements are built on a uniform radial mesh: the radial coordinate is r_j = j∆r with j = 1, . . ., N, where ∆r = R/N is the width of the radial mesh. As boundary conditions, the ground and first-excited wavefunctions vanish at the external boundary point (j = N + 1) due to the negligible probability of finding the electron at the edge of the confining potential at r = R. In our simulation, we diagonalized the N × N matrix with N = 1200 using the MATLAB (version 9.8) software package.

The OACs of the different potentials arise from an electronic transition from the 1s to the 1p state after the absorption of a photon with energy ℏω = E_f − E_i. We denote the OAC as α(ℏω) and compute it using Fermi's golden rule [51]. The parameters entering this expression are γ_FS, V_con, n_r, and N_if: the fine-structure constant, the confinement volume, the refractive index, and the carrier density associated with the transition, respectively. The Dirac δ-function in Equation (6) is replaced with a Lorentzian function of width ℏΓ [51]. In our study, the initial (i = 1) and final (f = 2) states are the 1s and 1p states, respectively. The physical parameters used in this study are: γ_FS = 1/137, n_r = 3.25, ℏΓ = 3 meV, m* = 0.067 m_0, and V_C = 0.228 eV. Furthermore, we use atomic units (ℏ = e = m_0 = 1) throughout this work, which correspond to a Rydberg energy and Bohr radius of 1 R_y ≈ 5.6 meV and 1 a_B ≈ 100 Å, respectively. In addition, the electromagnetic radiation is polarized along the z-axis, and |M_12|² is given by the expression in [51], where the 1/3 pre-factor arises from the integration over the spherical harmonics.

Optical Properties of GaAs Quantum Dot with PEP Potential

In this section, we discuss the effect of the structural parameter, q, on the E_1s and E_1p energy levels and the binding energy. We then analyze trends in |M_if|² and the OACs for the transition between these states. Figure 3 plots the energy levels of the ground (1s) and first-excited (1p) states as a function of the structural parameter q with and without the hydrogenic impurity. When q increases, the energy levels decrease rapidly at low values of q and tend toward constant values, which is due to the shape of the confining potential shown in Figure 1 (the energy levels are inversely proportional to the width of the well). Furthermore, in the presence of the hydrogenic impurity (Z = 1), the energy levels are reduced compared to those in the absence of the impurity (Z = 0) due to the strong attraction between the electrons and the impurity at the center of the QD. In addition, we observe a slow decrease in all energy levels for larger values of q, since the width of the potential (see Figure 1) becomes insensitive to the variation of large q values.
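To make the absorption machinery defined above concrete, the following is a minimal sketch of how the dipole matrix element, the Lorentzian broadening of the δ-function, and the resulting line shape could be assembled from the numerical wavefunctions. It is a schematic under stated assumptions, not the paper's exact Equation (6): the full golden-rule prefactor (fine-structure constant, refractive index, carrier density, confinement volume) is lumped into a single `prefactor` argument, and the Lorentzian is taken in its standard form (ℏΓ/π)/[(ℏω − ∆E)² + (ℏΓ)²].

```python
import numpy as np

def dipole_matrix_element_sq(r, R_1s, R_1p):
    # |M_12|^2 = (1/3) |integral of R_1p(r) r R_1s(r) r^2 dr|^2 for z-polarized
    # light; the 1/3 pre-factor comes from the spherical-harmonic integral.
    radial = np.trapz(R_1p * r * R_1s * r**2, r)
    return radial**2 / 3.0

def lorentzian(hw, dE, hGamma=3.0e-3):
    # Broadened replacement for the Dirac delta (energies in eV, hGamma = 3 meV).
    return (hGamma / np.pi) / ((hw - dE)**2 + hGamma**2)

def oac(hw, dE, M12_sq, prefactor=1.0):
    # alpha(hw) proportional to hw * |M_12|^2 * Lorentzian(hw - dE); all other
    # constants of the golden-rule expression are lumped into `prefactor`.
    return prefactor * hw * M12_sq * lorentzian(hw, dE)

# Example usage with R_1s, R_1p and energies from the solver sketched earlier:
#   M12_sq = dipole_matrix_element_sq(r, R_1s, R_1p)
#   hw     = np.linspace(0.0, 0.2, 1000)        # photon-energy sweep (eV)
#   alpha  = oac(hw, dE=E_1p - E_1s, M12_sq=M12_sq)
```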
The OAC between the ground and first-excited levels depends on the energy separation ∆E = E_1p − E_1s and the dipole matrix element, |M_12|². Figure 4 plots these physical quantities as a function of the structural parameter, q. For Z = 0, ∆E increases, reaches its maximum at q = 3, and subsequently decreases. This arises because E_1p and E_1s decrease when q < 3; however, the decrease in E_1s is faster than that of E_1p. As such, ∆E shows an increasing variation; however, the opposite trend occurs for q > 3, leading to a reduction in ∆E. Consequently, the OAC can undergo a red or blue shift as q increases.

Figure 4 also shows the variation of the dipole matrix element, |M_12|², which plays a crucial role in controlling the amplitude of the optical absorption. |M_12|² decreases for q < 3 and increases for q > 3, which is the opposite trend to that of ∆E. For low values of q, the overlap between the ground and first-excited wavefunctions is reduced; however, the overlap increases for larger values of q, resulting in an enhancement of |M_12|².

Figure 5 displays the variation of the OAC as a function of the incident photon energy for three values of the parameter q. The OAC peak moves to the left (redshifts) when q is increased, which arises from the variation of the energy separation shown in Figure 4. Furthermore, the amplitude diminishes for q = 6 and subsequently rises again when q = 11. The amplitude and position of the OAC are sensitive to q, which affects the geometrical shape of the confining potential and the delocalization of the 1s and 1p wavefunctions.
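The binding energies discussed next (Figures 6 and 10) are not defined explicitly in the text above; the conventional definition for a donor-like impurity, which we assume is the one used here, is the difference between the level computed without and with the impurity:

E_b(nℓ) = E_nℓ(Z = 0) − E_nℓ(Z = 1), e.g., E_b(1s) = E_1s(Z = 0) − E_1s(Z = 1).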
Figure 6 shows the variation of the binding energy (E_b) as a function of the parameter q. For low values of q, the binding energy increases sharply for both the 1p and 1s states and subsequently decreases. For all values of q, the binding energy of the 1s state is larger than that of the 1p state, which is due to the strong electrostatic attraction between the impurity and the electron in the 1s state compared to the 1p state. Furthermore, increasing q enlarges the confining potential, as shown in Figure 1, which leads to a reduction in all energy levels of the QD with and without the impurity. Therefore, the binding energy will be influenced by two effects: (1) the electrostatic attraction and (2) the geometrical confinement imposed by the confining potential. For higher values of q, the confining potential becomes too large and dominates the effect of the electrostatic attraction, leading to a reduction in the binding energies, as shown in Figure 6.

Optical Properties of a GaAs Quantum Dot with an MGP Potential

In this section, we examine the effect of the structural parameter q on the E_1s and E_1p energy levels, their energy separation, and the binding energy. We then discuss the behavior of the dipole matrix elements and the OACs between these states.
Optical Properties of a GaAs Quantum Dot with an MGP Potential
In this section, we examine the effect of the structural parameter q on the E_1s and E_1p energy levels, their energy separation, and the binding energy. We then discuss the behavior of the dipole matrix elements and the OACs between these states.

Figure 7 plots the energy levels of the ground (1s) and first-excited (1p) states as a function of the structural parameter q with and without the hydrogenic impurity. When q increases, these energies increase considerably in the presence and absence of the impurity, which is opposite to that observed in the previous section for the PEP confining potential. Increasing q reduces the width of the MGP; for higher values of q, the potential tends to the shape of a negative Dirac-delta potential (Figure 2), which increases the energy levels. In addition, the slope of each energy level is slowly reduced for higher values of q, since the confining potential no longer changes for very large values of q. Comparing Figures 3 and 7, the evolution of the energy levels as a function of the structural parameter is opposite for the PEP and MGP potentials. The PEP potential tends to a square-like quantum well, leading to a reduction in energy levels; however, the MGP potential tends to a Dirac-delta form, which shifts all of the energy levels to higher values.

Figure 8 plots |M_12|^2 and ΔE = E_1p − E_1s as a function of q, which shows that ΔE increases with q, reaches a maximum, and then diminishes. The maximum of ΔE in the presence of the hydrogenic impurity (Z = 1) is slightly different from that in its absence (Z = 0), which causes the blue and red shifts observed in the OAC. Furthermore, the amplitude of the OAC is sensitive to the variation of the dipole matrix element |M_12|^2. Figure 8 shows that |M_12|^2 first decreases with q, reaches a minimum, and finally increases, which is an opposite trend to that of the energy separation, ΔE = E_1p − E_1s.
Figure 9 displays the variation of the OAC as a function of incident photon energy for three values of the parameter q. The OAC peak moves to the right (blueshifts) when q is increased from 2 to 6; it subsequently moves to the left (redshifts) when q increases from 6 to 11. This arises from the variation of the energy separation shown in Figure 8. Furthermore, the amplitude decreases when q varies between 2 and 6 and rises again when q = 11.

Finally, we plot the binding energy in Figure 10. For low values of q, the binding energy increases gradually for the 1p and 1s states. For q > 5, the binding energy of the 1p state starts to decrease, whereas the 1s binding energy continues its increase. For all values of the parameter q, the binding energy of the 1s state is larger than that of 1p, which arises from the attraction between the hydrogenic impurity and the free electrons. Furthermore, increasing q subsequently reduces the confining potential (cf. Figure 2), which leads to the enhancement of all energy levels of the QD with and without the presence of the impurity. The difference in the variation of the binding energies in Figures 6 and 10 confirms the effect of the structural parameter q on the PEP and MGP potentials.
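The peak shifts and amplitude changes described above can be visualized with a schematic two-level lineshape. The sketch below assumes a Lorentzian-broadened absorption proportional to |M_12|^2; the prefactor, broadening, and the (ΔE, |M_12|^2) pairs are purely illustrative numbers, not the paper's results.

```python
import numpy as np

def oac(photon_energy, dE, M12_sq, gamma=0.003, prefactor=1.0):
    """Schematic two-level linear absorption: Lorentzian centered at dE (energies in eV)."""
    return (prefactor * photon_energy * M12_sq * gamma /
            ((dE - photon_energy) ** 2 + gamma ** 2))

E_ph = np.linspace(0.0, 0.05, 1000)     # incident photon energy grid (eV)
# Illustrative (dE, |M12|^2) pairs for a few q values -- invented for the example.
for q, dE, M12_sq in [(2, 0.012, 900.0), (6, 0.018, 400.0), (11, 0.010, 1500.0)]:
    spectrum = oac(E_ph, dE, M12_sq)
    peak = E_ph[np.argmax(spectrum)]
    print(f"q = {q:2d}: peak at {peak * 1000:.1f} meV, height ~ {spectrum.max():.1f} (arb. units)")
```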
Conclusions
In this work, we have examined the optical and electronic characteristics of spherical QDs in PEP and MGP potentials. A finite difference method was used to compute the energy levels, OACs, and binding energies for the two low-lying 1s and 1p states. Our calculations for the two confining potentials account for a hydrogenic impurity in the center of the QD. We first calculated the energy levels and their corresponding wavefunctions and subsequently evaluated the dipole matrix elements and energy separations between the 1s and 1p levels. We then examined the behavior of these physical quantities to interpret the blue and red shifts observed in the variation of the OAC.

Our findings show that an increase in the structural parameter of the PEP potential produces a red shift in the OAC, which arises from the change in the energy separation due to the widening of the potential. In addition, our findings showed that an increase in the structural parameter of the MGP potential first produces a blue shift in the OAC and, subsequently, a redshift. The trends in the binding energy as a function of the structural parameter of each confining potential were attributed to the attractive force between the free electrons and the hydrogenic impurity. Our simulations provide insight into the optical and electronic characteristics of spherical QDs in various confining potentials.

Figure 2 plots the modified Gaussian potential as a function of the radius, r. To allow for a straightforward comparison, the same radius of R_0 = 200 Å considered in Figure 1 is used in Figure 2. When q = 2, the power-exponential and modified Gaussian potentials resemble each other; however, when q is increased, the shape of the potential tends to a negative Dirac-delta function at r = 0, which will dramatically affect the confinement of the wavefunctions and the energy levels of the ground and first-excited states.

Figure 1. V_PEP(r) for different values of the parameter q with R_0 = 200 Å.
Figure 2. V_MGP(r) for different values of the parameter q with R_0 = 200 Å.
Figure 3. Variations in E_1s and E_1p as a function of the parameter q. The solid lines are energies without the hydrogenic impurity (Z = 0), and the dashed lines represent energies with the hydrogenic impurity (Z = 1).
Figure 4. Variations in E_1p − E_1s and |M_12|^2 as a function of the parameter q.
Figure 5. OAC as a function of incident photon energy for different values of the parameter q with (Z = 1) and without (Z = 0) the impurity.
Figure 6. Variation of the binding energy as a function of the parameter q for the 1s and 1p states.
Figure 7. Variation of E_1s and E_1p as a function of the parameter q. The solid lines are energies without the hydrogenic impurity (Z = 0), and the dashed lines represent energies with the hydrogenic impurity (Z = 1).
Figure 8. Variations in E_1p − E_1s and |M_12|^2 as a function of the parameter q.
Figure 9. OAC as a function of incident photon energy for different values of parameter q with (Z = 1) and without (Z = 0) the impurity.
Figure 10. Variation of the binding energy as a function of parameter q for the 1s and 1p states.
7,744.8
2024-06-27T00:00:00.000
[ "Physics" ]
Detection of Target Genes for Drug Repurposing to Treat Skeletal Muscle Atrophy in Mice Flown in Spaceflight Skeletal muscle atrophy is a common condition in aging, diabetes, and in long duration spaceflights due to microgravity. This article investigates multi-modal gene-disease and disease-drug networks via link prediction algorithms to select drugs for repurposing to treat skeletal muscle atrophy. Key target genes that cause muscle atrophy in the left and right extensor digitorum longus muscle tissue, gastrocnemius, quadriceps, and the left and right soleus muscles are detected using graph-theoretic network analysis, by mining the transcriptomic datasets collected from mice flown in spaceflight and made available by GeneLab. We identified the top muscle atrophy gene regulators by the Pearson correlation and Bayesian Markov blanket methods. The gene-disease knowledge graph was constructed using the scalable precision medicine knowledge engine. We computed node embeddings and random walk measures from the networks. Graph convolutional networks, graph neural networks, random forest, and gradient boosting methods were trained using the embeddings and network features for predicting links and ranking top gene-disease associations for skeletal muscle atrophy. Drugs were selected and a disease-drug knowledge graph was constructed. Link prediction methods were applied to the disease-drug networks to identify top ranked drugs for therapeutic treatment of skeletal muscle atrophy. The graph convolution network performs best in link prediction based on receiver operating characteristic curves and prediction accuracies. The key genes involved in skeletal muscle atrophy are associated with metabolic and neurodegenerative diseases. The drugs selected for repurposing using the graph convolution network method were nutrients, corticosteroids, anti-inflammatory medications, and others related to insulin.

Introduction
Spaceflight experiments using mice are being conducted to determine the impact of microgravity on different muscle groups [1]. A major health problem in spaceflight is muscle wastage due to microgravity. The primary muscles in the human body are the muscles of the upper limb and lower limb. Experiments on hind limb muscle wasting after a 13-day shuttle flight have shown reduced knee weight bearing and meniscal degradation, inducing an arthritic phenotype in cartilage and menisci [2]. Changes in electrical impedance characteristics in gastrocnemius muscles are also induced by spaceflight [3]. Skeletal muscle atrophy is a secondary effect of aging (sarcopenia) and of diseases such as diabetes, cancer and kidney diseases. Studies have shown that muscle gene expression is different in spaceflight vs. that on the ground. Models of sarcopenia and age-related muscle loss have been studied in [4]. Spaceflight induces similar muscle loss, and the analysis of gene expression (see [5]) has revealed that a majority of the 272 mRNAs that were significantly altered by spaceflight displayed similar responses to hind limb suspension. There are several molecular processes that influence muscle atrophy. The muscle RING-finger protein-1 (MuRF1), which plays an important role in muscle remodeling, is an E3 ubiquitin ligase expressed in skeletal and cardiac muscle tissues [6]. Spaceflight induces unique muscle atrophy in animal models.
MuRF1-nullified mice did not show improvement in soleus muscle loss, showing that atrophy proceeds under unique mechanisms in spaceflight [7]. Muscle mass is a balance between protein generation and degradation. A decreased rate of synthesis causes skeletal muscle wasting. The ubiquitin proteasome system is the main protein degradation pathway in muscle atrophy. It has been shown that proteasome inhibition reduces denervation-induced muscle atrophy [8]. One of the most important muscle-wasting cytokines is tumor necrosis factor-α (TNF-α), elevated levels of which cause significant muscular abnormalities. Although there has been some advancement in understanding cellular and molecular mechanisms such as the MuRF1/MAFbx/FOXO pathways and potential triggers behind muscle disuse, there is a significant gap in knowledge of the regulatory mechanisms of the associated genes and their functional significance. It is known that anabolic and catabolic pathways regulate muscle atrophy in adult organisms. Deacetylase inhibitors represent a prototype of epigenetic drugs that have been proposed as a possible intervention that targets multiple signaling pathways in the pathogenesis of muscle atrophy. Niclosamide has also been proposed to regulate myogenesis and catabolic pathways in skeletal muscle. Apart from microgravity, radiation exposure in spaceflight has been reported to aggravate the atrophic processes in soleus and gastrocnemius muscles that are already induced by spaceflight. Radiation was shown to inhibit the reparative processes [9]. Oxidative stress is increased by higher levels of radiation. The upregulation of heme oxygenase-1 (HO-1), which can be artificially induced, counters cellular damage due to radiation [10]. Several countermeasures have been proposed for alleviating muscle wastage in spaceflight. Exercise countermeasures do not alleviate the reduction in muscle function or muscle size due to the unloading effects of spaceflight [11]. Since exercise countermeasures seem insufficient for maintaining muscle function in deep space, it is important to find effective countermeasures for long duration spaceflights. Bone loss is prevented, and tibialis anterior and gastrocnemius muscle changes are eliminated, by countermeasures such as bisphosphonates and anti-RANKL therapies (Denosumab and OPG-Fc) and treatment of young mice with REGN1033 (a monoclonal antibody against myostatin) [12]. With future space missions, finding effective countermeasures for muscle atrophy in spaceflight has gained paramount importance. Simulated microgravity, use of animal models, applications of countermeasures, studies of interrelationships between bone and muscle tissues, and studies on the effect of radiation on skeletal muscles are necessary for human exploration of space [13]. In our earlier paper on drug repurposing [14], we applied three Machine Learning (ML) methods for identifying drugs for treatment of organ muscle atrophy. In this paper, we have added the Pearson correlation method for identification of key gene regulators of skeletal muscle atrophy, and have also implemented a Graph Convolutional Neural Network (GCN) for link prediction. The GCN results for identification of repurposable drugs for skeletal muscle atrophy are compared with the GNN method reported as the best method in [14]. NASA's GeneLab [15] datasets are collected in spaceflight under microgravity and low radiation doses in low Earth orbit. The radiation details of these datasets are provided in [16].
Section 2 presents the GeneLab datasets and ML methods used to identify key diseases associated with skeletal muscle atrophy and drugs for repurposing. Section 3 presents the results of the ML algorithms for link prediction in the constructed Gene Disease Knowledge Graph (GDKG) and Disease Drug Knowledge Graph (DDKG). Section 4 discusses the key genes and repurposable drugs selected by link prediction, and Section 5 presents the conclusions.

Materials and Methods
Datasets from the GeneLab repository [15] related to skeletal muscle atrophy were mined to study the effects on mice of microgravity and the low radiation doses found in low Earth orbit. All the -omics datasets in GeneLab were preprocessed and normalized before being published.

GeneLab Datasets
GLDS-99, 101, 103, 104: A cohort of 16-week-old female mice was flown aboard the ISS for 37 days. They were euthanized in spaceflight and returned to Earth, where left and right extensor digitorum longus muscle tissue (GLDS-99), gastrocnemius (GLDS-101), quadriceps (GLDS-103), and left and right soleus muscle (GLDS-104) samples were collected. RNA and DNA sequencing was carried out. GeneLab processed the RNA sequencing data into gene expression values using standardized methods. These datasets belong to the Rodent Research (RR) payload. The daily average absorbed dose from Galactic Cosmic Radiation (GCR) particles is 0.13126 mGy and from the Inner Radiation Belt (IRB) South Atlantic Anomaly (SAA) is 0.07331 mGy; the cumulative absorbed doses are 4.98795 mGy for GCR and 2.78573 mGy for the SAA.

GLDS-111 and GLDS-135: Adult male C57BL/6N mice were flown aboard the BION-M1 biosatellite for 30 days on orbit (BF) or housed in a replicate flight habitat on Earth (BG) as the reference flight control. GeneLab processed RNA sequencing data from mouse soleus and EDL muscles (GLDS-111) and longissimus dorsi and tongue (GLDS-135). The dosimeters SPD2 and SPD4 inside the Bion-M1 mouse habitat recorded an average absorbed dose of 0.630 and 1.149 mGy, respectively. These are averages of low and high LET radiation doses. The total average absorbed radiation dose for the mission is 18.81 mGy and 34.30 mGy for the SPD2 and SPD4 dosimeters, respectively. The total average absorbed dose from Galactic Cosmic Radiation (GCR), the Outer Radiation Belt (ORB), and the Inner Radiation Belt (IRB) is 0.985 mGy.

GLDS-21: Mice were flown on the STS-18 shuttle flight mission for 11 days, 19 h, and gene expression analysis was performed on gastrocnemius muscle. Mice were maintained on Earth for the same period. Additionally, to identify changes that were due to unloading and reloading, ground-based mice were subjected to hind limb suspension for 12 days and microarray analyses were conducted on their calf muscle. The average absorbed radiation dose is 2.19 mGy for the entire mission, with an average absorbed radiation dose rate of 0.18 mGy.

The workflow pipeline for identifying key genes and drugs for treating skeletal muscle atrophy is shown in Figure 1. The stages of the pipeline are numbered from 1 to 4 and each stage is explained below.

Finding Regulatory Relationships between Gene Pairs (Stage 1)
Graph-based Gene Regulatory Network (GRN) inferencing methods of Pearson correlation and Markov Blanket (MB) are utilized to identify the most regulated genes in the seven GeneLab datasets [17,18]. The gene expression values of pairs of genes are used to compute the Pearson correlation value.
The p-values are used to extract the most correlated pairs of genes by selecting all values below 5 × 10−7, which extracts the same pairs of genes as a correlation threshold of 0.9 and above. For identifying causally related gene pairs, the Markov Blanket (MB) method is used. Joint conditional probabilities are computed from the gene expression values, which are used to construct a Bayesian Network (BN). The incremental association Markov blanket of any node (gene) in a BN is the set of parents, children, and spouses (the other parents of their common children) of the gene. Genes are connected by edges if their upregulation is caused by another gene, or if they cause the upregulation of another gene. The MB(X) of a node (gene) X includes its parents, children, and spouses, which are the genes strongly relevant to gene X. The output is a list of pairs of genes that are connected by edges. The lists of the most correlated gene pairs and the causally related gene pairs are combined into one list and input to the next stage in Figure 1.
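As an illustration of the Pearson step of Stage 1, a minimal sketch on hypothetical expression data (gene names and values invented for the example) is shown below; the Markov-blanket step would be run separately with a Bayesian-network library.

```python
import itertools
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_samples = 40                       # illustrative; the real GeneLab sets are smaller
base = rng.normal(size=n_samples)
expr = {                             # hypothetical genes: rows of an expression matrix
    "gene_a": base + 0.05 * rng.normal(size=n_samples),
    "gene_b": base + 0.05 * rng.normal(size=n_samples),   # strongly co-regulated with gene_a
    "gene_c": rng.normal(size=n_samples),
    "gene_d": rng.normal(size=n_samples),
}

P_THRESHOLD = 5e-7                   # p-value cut quoted in the text

correlated_pairs = []
for g1, g2 in itertools.combinations(expr, 2):
    r, p = pearsonr(expr[g1], expr[g2])
    if p < P_THRESHOLD:
        correlated_pairs.append((g1, g2, round(r, 3)))
print(correlated_pairs)              # e.g. [('gene_a', 'gene_b', 0.999)]
```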
Construction of Knowledge Graphs (Stage 2)
The selected genes from Stage 1 are input to the Scalable Precision Medicine Open Knowledge Engine (SPOKE), which is a database of databases [19]. SPOKE is used for creating a network based on a data integration approach to prioritize disease-associated genes [20]. It is a graph-theoretic database organized in a hierarchical manner, with inputs from molecular research, clinical insights, environmental data and others. Currently it integrates 19 different databases. SPOKE creates a new graph with the provided list of skeletal muscle atrophy genes and the diseases associated with them. The list of genes and their associated diseases is input to Cytoscape to construct the Gene Disease Knowledge Graph (GDKG). The Disease Drug Knowledge Graph (DDKG) is constructed by finding, in the DrugBank database, the top ten drugs used to treat the diseases associated with skeletal muscle atrophy. A table of diseases and the top ranked drugs is built and input to Cytoscape to construct the DDKG.

Graph Concepts and Properties for Analysis of GDKG and DDKG
Graph concepts of random walk and preferential attachment used by the link prediction algorithms are described in this section. We also compute network measures on the constructed graphs. We follow Janwa, Massey, Velev and Mishra [21][22][23][24]. A graph is a representation of a set of entities and relations among them and represents an underlying concrete network, such as a GRN, the internet, or a social network. We formally present a graph as a pair of sets G = (V, E), where V are the vertices (nodes, points) and E ⊆ V × V are the edges (arcs), respectively. When E is a set of unordered pairs of vertices, the graph is said to be undirected. In a directed graph (representing key genes and target genes, for example) G = (V, E, o, t), E consists of an ordered set of vertex pairs, i.e., for each edge e ∈ E, e → (o(e), t(e)), where o(e) is called the origin of the edge e and t(e) is called the terminus of the edge e [22,23]. A graph is weighted if there is a map (weighting function, w : E → R+) assigning to each edge a positive real-valued weight. Weighting can represent the strength of a signal in a sender-receiver gene interaction, for example. A network's properties are governed by its topology, such as the degree distribution, clustering coefficients, motifs, assortativity, hierarchicity, etc. (see [24][25][26]); a more in-depth treatment regarding biomedical networks is given in [27]. The degree of a vertex v, deg(v), is the number of edges that connect the vertex with other vertices. In other words, the degree is the number of immediate neighbors of a vertex. In directed graphs, the in-degree and out-degree of a vertex can be defined as the number of incoming and outgoing edges, respectively. Thus, the degree distributions can tell a great deal about the structure of a family of networks. As a probability distribution, the degree distribution can be binomial, Poisson, or Gaussian (in the limit), or, as we will see, it can follow a power-law distribution that is characterized by a scale-free property. We say that a graph is scale-free when its degree distribution follows such a power law; in random probability models such as the Erdos-Renyi model, one does not find nodes of a very high degree. Similarity measures computed from neighborhoods in a graph are widely used in link prediction algorithms [28]. A semi-supervised scalable feature learning method is proposed in [29], where the authors develop a family of biased random walks resulting in a flexible search space of nodes for link (edge) prediction. We have used this method to obtain the highest ranked nodes for possible links between the muscle atrophy genes and their disease associations, as well as between diseases and drugs, in the Graph Neural Network (GNN) method.
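The biased walks of [29] interpolate between breadth-first and depth-first exploration through two parameters. A minimal sketch of one such walk is given below; the parameter names p and q follow the node2vec convention, the toy graph and node names are invented, and this is not the authors' implementation.

```python
import random
import networkx as nx

def biased_walk(G, start, length, p=1.0, q=0.5):
    """One node2vec-style biased walk: p is the return parameter, q the in-out parameter."""
    walk = [start]
    while len(walk) < length:
        cur = walk[-1]
        nbrs = list(G.neighbors(cur))
        if not nbrs:
            break
        if len(walk) == 1:
            walk.append(random.choice(nbrs))
            continue
        prev = walk[-2]
        weights = []
        for x in nbrs:
            if x == prev:
                weights.append(1.0 / p)      # step back to the previous node
            elif G.has_edge(x, prev):
                weights.append(1.0)          # stay within the local neighborhood
            else:
                weights.append(1.0 / q)      # move outward (exploration)
        walk.append(random.choices(nbrs, weights=weights, k=1)[0])
    return walk

# Hypothetical toy gene-disease graph for illustration.
G = nx.Graph([("GeneA", "Disease1"), ("GeneB", "Disease1"),
              ("GeneB", "Disease2"), ("GeneC", "Disease2")])
print(biased_walk(G, "GeneA", length=6))
```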
Random walks: A walk of length n in a graph is a sequence of alternating vertices and edges, v_0, e_1, v_1, e_2, ..., e_n, v_n, such that o(e_i) = v_{i−1} and t(e_i) = v_i for all i = 1, ..., n. Let T be the diagonal matrix with d_v along the diagonal. First, we consider the stochastic matrix P = T^{−1} A, which may be thought of as describing the probabilities of certain "information" being moved from one node to a neighboring node by a diffusion process. Let {v_0, e_1, v_1, e_2, ..., v_s} be a random walk in the graph with (v_{i−1}, v_i) ∈ E(G) for all 1 ≤ i ≤ s, determined by transition probabilities P(u, v) = Prob(x_{i+1} = v | x_i = u) which are independent of i. Normally, we take p(u, v) = w(u, v)/d_u, as defined by the stochastic matrix P. Apart from random walks, we have computed preferential attachment measures to obtain possible gene-disease and disease-drug link associations. We follow [30] for the computation of preferential attachment. For any node u, let Γ(u) denote the set of neighbors of u. Let Λ be a community of G, i.e., Λ is a set of cohesive vertices such that it contains more connections inside the set than outside the set. The preferential attachment score of u and v is defined as |Γ(u)||Γ(v)|.

ML Methods for Link Prediction (Stage 3)
We used four ML methods for identifying and ranking the top skeletal muscle gene-disease associations in the GDKG, and for identifying the top ranked drugs for repurposing from the DDKG. The Random Forest (RF), Gradient Boost (GB), and Graph Neural Network (GNN) methods were used for link prediction and drug repurposing for organ muscle atrophy [14]. In addition to the above, we implemented the GCN method. The problem of link prediction is to predict an edge between two existing nodes in a graph or network. Each of the methods is described below.

Random Forest (RF) Method
This method is based on decision trees, and an ensemble of trees is called a decision forest. Each tree is trained on a random subset of input features, and their predictions are combined to improve the overall prediction. The tree is based on discriminants instead of likelihoods; the discriminants are estimated directly, bypassing the class densities. The hyperparameters are: a tree depth of 15 with 500 estimators.

Gradient Boosting (GB) Method
The GB method is also an ensemble decision tree method which trains one tree at a time. Each regression tree is built on the prediction error of the tree from the previous step. This is a useful method for tabular datasets. Multiple weak learners are combined to give a better performance. It can find nonlinear relationships between model targets and features and can deal with outliers and missing values. The feature labels are the values of various node centralities. The positive and negative samples are the labels for the existent and non-existent edges in the network, respectively. The features of the nodes at the ends of the edges, along with the positive or negative label, form a well-defined dataset for the task of link prediction. The learning rate is 0.2 for this algorithm.

Graph Neural Network (GNN) Method
The GNN is a deep network with ten hidden layers and 100 nodes (neurons) in each of the hidden layers. The activation function for the hidden layers is the Rectified Linear Unit (ReLU) function. The limited-memory Broyden-Fletcher-Goldfarb-Shanno (lbfgs) solver from the sktlearn library in Python was used to predict the links. The input layer of the GNN takes as input random walk features computed on the knowledge graphs. The output of the GNN is a matrix of predicted edges. A sketch of how node-pair features can be assembled and fed to a feature-based classifier is given below.
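For the feature-based predictors (RF and GB), node-pair features such as the preferential attachment score |Γ(u)||Γ(v)| can be assembled directly from the graph. The sketch below uses a toy graph with hypothetical node names, together with the tree depth and estimator count quoted above; it is an illustration under those assumptions, not the authors' code.

```python
import itertools
import networkx as nx
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical toy gene-disease graph; node names are placeholders.
G = nx.Graph()
G.add_edges_from([("GeneA", "Disease1"), ("GeneB", "Disease1"),
                  ("GeneB", "Disease2"), ("GeneC", "Disease2"),
                  ("GeneC", "Disease3"), ("GeneA", "Disease3")])

def pair_features(G, u, v):
    # Preferential attachment |Gamma(u)||Gamma(v)| plus simple degree/neighbor features.
    return [G.degree(u) * G.degree(v), G.degree(u), G.degree(v),
            len(list(nx.common_neighbors(G, u, v)))]

# Positive samples: existing edges; negative samples: node pairs without an edge.
pos = list(G.edges())
neg = [(u, v) for u, v in itertools.combinations(G.nodes(), 2) if not G.has_edge(u, v)]
X = np.array([pair_features(G, u, v) for u, v in pos + neg])
y = np.array([1] * len(pos) + [0] * len(neg))

clf = RandomForestClassifier(n_estimators=500, max_depth=15, random_state=0).fit(X, y)
scores = clf.predict_proba(X)[:, 1]   # link probabilities for each candidate pair
```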
Graph Convolution Neural Network (GCN)
We used the Graph Convolution Neural Network (GCN) for link prediction in the GDKG and DDKG for skeletal muscle atrophy and compared it with the above methods. The GCN takes as input the knowledge graph with N nodes, where A is the N × N adjacency matrix. The GCN learns the graph G_i = (V_i, E_i), learns node embeddings, and predicts links between the nodes. The layer-wise propagation rule for each neural network layer is

H^(l+1) = σ( D̃^(−1/2) Ã D̃^(−1/2) H^(l) W^(l) ),

where H^(l) is the matrix of node activations in layer l (with H^(0) equal to the node feature matrix X). Here, Ã = A + I_N is the adjacency matrix of the undirected graph G with added self-connections, I_N is the identity matrix, D̃_ii = Σ_j Ã_ij is the diagonal node degree matrix of Ã, W^(l) is the layer-specific trainable weight matrix, and σ(·) is an activation function. With spectral analysis, a graph convolution is a multiplication of the spectra of signals in a Fourier domain [31]. As this is computationally expensive, the convolution kernel is approximated by Chebyshev polynomials of the eigenvalues in the spectral domain. A softmax activation function is applied row-wise to the output of the final layer, f(X, A). To evaluate the loss in this semi-supervised model, the cross-entropy error over the labeled nodes is calculated as

L = − Σ_(l ∈ Y_L) Σ_f Y_lf ln Z_lf,

where Y_L is the set of nodes with labels (the labeled training instances), Y_lf is the label indicator, and Z_lf is the softmax output. The weights of the neural network W are trained using gradient descent. Figure 2 shows the GCN trained for link prediction on the GDKG. The GCN has two hidden layers, with 32 nodes in the first hidden layer and 16 nodes in the second hidden layer, respectively. The GCN uses the Adam optimizer for gradient descent and weight updates for the network. The probabilities of the predicted links range from 0 to 1. These probabilities are predicted using the ReLU activation function shown in Figure 2.
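A minimal forward pass of a two-layer GCN with the hidden sizes quoted above (32 and 16), scoring links from inner products of node embeddings, might look as follows. The toy graph, random weights, and the inner-product link score are illustrative assumptions; training with Adam and the cross-entropy loss is omitted.

```python
import numpy as np

def normalized_adjacency(A):
    """A_hat = D^{-1/2} (A + I) D^{-1/2}, the renormalized adjacency used by the GCN."""
    A_tilde = A + np.eye(A.shape[0])
    d = A_tilde.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_tilde @ D_inv_sqrt

def gcn_forward(A, X, W0, W1):
    """Two-layer GCN forward pass producing node embeddings Z."""
    A_hat = normalized_adjacency(A)
    H1 = np.maximum(A_hat @ X @ W0, 0.0)        # ReLU on the first hidden layer
    return A_hat @ H1 @ W1

def link_probability(Z, u, v):
    """Score an edge (u, v) from the inner product of the two node embeddings."""
    return 1.0 / (1.0 + np.exp(-Z[u] @ Z[v]))

# Toy example: 5 nodes, 8 input features, hidden sizes 32 and 16 as in the text.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 0, 1], [1, 0, 1, 0, 0], [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1], [1, 0, 0, 1, 0]], dtype=float)
X = rng.normal(size=(5, 8))
W0, W1 = rng.normal(size=(8, 32)) * 0.1, rng.normal(size=(32, 16)) * 0.1
Z = gcn_forward(A, X, W0, W1)
print(link_probability(Z, 0, 2))
```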
Gene-Disease and Disease-Drug Associations (Stage 4)
The knowledge graphs are split into training and validation sets. The GridSearchCV library is used to estimate the best split of the data for cross validation. This implementation uses 10-fold cross validation for link prediction in both knowledge graphs. The computation of network features and graph features is implemented in Python using the libraries networkX, node2vec, pandas, numpy, and sktlearn. The link prediction accuracies for the four methods are calculated by comparing a binary label (an edge exists or does not exist) with a real-valued predicted score. The technique used for evaluation in this setting is the Area Under the Receiver Operating Characteristic (AUROC) curve. The predicted links are sorted from highest probability to lowest probability. The drug nodes with the highest link probability to the disease nodes are selected as candidates for repurposing.

Results
The seven gene expression datasets have from three to eight expression values. The datasets were combined, and the significantly regulated genes were extracted using the Pearson correlation and Incremental Association Markov Blanket (IAMB) methods. For details on the implementation of Pearson correlation and IAMB, please refer to [32]. Pearson identified the most correlated genes and IAMB identified causally related genes. A total of 473 genes were identified as the most significantly regulated from the seven datasets. Hence, we have included all of these genes in our analysis as important regulators of skeletal muscle atrophy in spaceflight.

Many diseases such as metabolic and neuromuscular diseases, cancer, chronic inflammatory diseases, and acute critical illness are associated with skeletal muscle atrophy, muscle weakness, and general muscle fatigue. Additionally, skeletal muscle atrophy is a secondary effect of many diseases, and it is important to find the diseases linked with this condition. The Scalable Precision Medicine Knowledge Engine (SPOKE) was used for identifying all the diseases related to muscle atrophy. SPOKE is a large heterogeneous network with many types of biological data organized in a hierarchical structure for the benefit of biomedicine and human health (Scalable Precision Medicine Knowledge Engine n.d.). The maximally regulated genes identified from the GRNs were input to the SPOKE. Figure 3 shows the GDKG constructed from all the diseases related to the muscle atrophy genes. Next, we applied ML methods to predict new gene-disease associations in the GDKG.

Link Prediction Using GCN and Other ML Methods
The graphs were preprocessed by computing the graph Laplacian. Each node was embedded into a feature vector and input to two hidden layers. Given the graph embedding, the GCN model is trained to predict new gene-disease interactions in the GDKG. The GCN predicted 21 new gene-disease associations with a probability greater than 0.8. The gene names and associated diseases are given in Table 1. Figure 4 shows the Receiver Operating Characteristic (ROC) curve for link prediction using the GCN, GNN, Random Forest, Gradient Boosting, and preferential attachment methods.
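The AUROC evaluation described in Stage 4 reduces to comparing held-out edge labels with the predicted scores; a minimal sketch with invented numbers is shown below.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# y_true: 1 for held-out edges that exist, 0 for sampled non-edges (illustrative values).
# y_score: real-valued link probabilities from any of the predictors above.
y_true = np.array([1, 1, 0, 1, 0, 0, 1, 0])
y_score = np.array([0.92, 0.75, 0.30, 0.81, 0.45, 0.12, 0.66, 0.51])

auroc = roc_auc_score(y_true, y_score)
fpr, tpr, thresholds = roc_curve(y_true, y_score)
print(f"AUROC = {auroc:.3f}")
```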
The link prediction methods were trained with 80% of the data and the remaining 20% were used for testing. The ten-fold cross validation accuracies for the gene-disease link prediction using the four methods are given in Table 2. The key diseases associated with skeletal muscle atrophy genes were identified and sorted, and the top ranked 100 diseases were selected. The drugs were selected from the DrugBank database [33], and the ten most commonly used drugs for each of the diseases were selected. The Disease-Drug Knowledge Graph (DDKG) was then built from the diseases and the drugs used to treat them. The DDKG is shown in Figure 5. Since the existing drugs are the most commonly used for these diseases, the link prediction method was used to find new repurposable drugs for these diseases, which in turn can be used for repurposing for muscle atrophy in spaceflight. Figure 6 shows the Receiver Operating Characteristic (ROC) curve for link prediction using the GCN, GNN, Random Forest, Gradient Boosting, and preferential attachment methods applied to the DDKG. A total of 60% of the data from the DDKG was used for training and the remaining 40% for testing. Table 3 lists the new predicted links with the highest probabilities for diseases and drugs using the GCN link prediction method. The predicted links with the highest probabilities for drugs and diseases using the GNN method are given in Table 4 for comparison. The ten-fold cross validation accuracies for link prediction applied to the DDKG are given in Table 5. The GDKG and DDKG are massively scalable knowledge graphs and have several properties, such as expansion and diffusion. Graph network measures computed on these graphs are listed in Table 6. The preferential attachment network measure-based link prediction gives an accuracy of 74.64% for the GDKG and 73.55% for the DDKG, respectively. We have compared the GCN-based link prediction in the knowledge graphs with the other ML methods: Random Forest, Gradient Boosting, GNN, and preferential attachment. The GCN method demonstrated the best performance, with the highest accuracies from ten-fold cross validation for link prediction in both the GDKG and DDKG.

Discussion
All of the 473 genes in the GDKG are highly activated and related to muscle atrophy in spaceflight. However, it is necessary to identify the few most important genes related to other conditions, which can enable the identification of drugs for repurposing. The GCN link prediction method has achieved the highest accuracy of 96.11%, as seen from the AUROC values for the ten-fold cross validation accuracies for the four methods of RF, GB, GNN and GCN given in Table 2. The GCN link prediction method has predicted 20 important genes. Their associations with other diseases [34] are given in Table 1. For example, RPS25 is an mRNA significantly affected in spaceflight gastrocnemius [5] and reduced in bed rest [35]. From Table 1, we see that this gene is not only significantly activated in muscle atrophy but is also associated with disorders of the central nervous system.
Similarly, many of the muscle atrophy genes in Table 1 such as SNF8 [36], ELK4 [37], FTO, and EIF3H are associated with neurodegenerative diseases. The Eukaryotic Initiation Factor (EIF) is one of the most complex translation initiation factors and consists of several subunits. The EIF3 complexes are central regulators of atrophy in skeletal muscle and are also linked to neurodegenerative diseases [38]. Muscle activity causes the ubiquitin-proteasome system to remove sarcomeric proteins. A decrease in muscle mass is associated with: (1) increased conjugation of ubiquitin to muscle proteins; (2) increased proteasomal ATP-dependent activity; (3) increased protein breakdown that can be efficiently blocked by proteasome inhibitors; and (4) upregulation of transcripts encoding ubiquitin, some ubiquitin-conjugating enzymes (E2), a few ubiquitin-protein ligases (E3) and several proteasome subunits [39]. Proteins such as NDUFS3, identified by the GCN link prediction method, are important for the reversion of myopathies in mice [40]. These are atrophy-associated proteins (NDUFS3, NDUFB2) that are part of the ubiquitin-proteasome system [41]. The loss of other target genes such as MEF2A results in progressive atrophy [42]. Myostatin, a member of the TGF-β family, is a negative regulator whose predominant secretion in skeletal muscles causes muscle atrophy. Similarly, an increase in the autophagy-related gene ATG3 is identified by GCN link prediction [43]. Resistive Exercise (RE) with superimposed vibration mechanosignals (RVE) is proposed to counter muscle atrophy and is effective against the overexpression of Mitochondrial Ribosomal Proteins (MRPs) and Mitochondrial Tu Translation Elongation Factor (TUFM) that cause muscle atrophy [44]. Some of the MRP proteins are identified to be linked with other diseases such as cancer. Lack of Zinc Finger RNA-binding (ZFR) proteins also causes severe muscle wasting [45]. The collagen β(1-O)galactosyltransferase type 1 (COLGALT1) has been identified, whose loss of function also causes muscle atrophy [46]. Many proteins such as RPL7A have increased expression in cancer [47]. Another critical regulator of muscle atrophy, the protein arginine methyltransferase (PRMT) PRMT5, is also linked by the GCN method [48]. Other genes such as SNW1 are also prioritized in other diseases such as Amyotrophic Lateral Sclerosis (ALS) [49]. Hence, we find that genes overexpressed in skeletal muscle atrophy are also prioritized in other diseases such as cancer and neurodegenerative diseases. The mitochondria-related gene MRPS21 has been identified here as well; its declined expression has been found in sarcopenia or age-related skeletal muscle deterioration [50]. The four ML link prediction methods are applied to the DDKG. As seen from Table 5, the GCN method obtains the highest accuracy of 99.19%. The top ranked drugs with new predicted links and probabilities above 0.7 using the GCN method are listed in Table 3. The drug L-carnitine is an essential nutrient that has been proposed as a dietary supplement to enhance β-oxidation and treat skeletal muscle atrophy conditions [51]. This nutrient is predicted with the highest probability by the GCN method. It is followed by thiamine, another essential nutrient selected by the GCN method; thiamine deficiency causes myotonic dystrophy.
It has been found that treating patients with intramuscular thiamine 100 mg twice a week for 11 to 12 months is effective in improving muscle strength [52]. Both L-carnitine and thiamine are potential nutrients that can be given as a dietary supplement countermeasure for skeletal muscle atrophy in spaceflight. There is no specific treatment for muscle atrophy, with only recent advances in the identification of treatments such as nanotechnology approaches [53]. However, ML-based methods such as the GCN can be used to select drugs. The drugs selected by the GCN method for repurposing are commonly used for the treatment of diseases that are associated with skeletal muscle atrophy. Bimagrumab is an anabolic medication used for treating muscle wasting in COPD [54]. Arcitumomab and golimumab are drugs belonging to the Monoclonal AntiBodies (MABs) family predicted by the GCN method (Table 3). Decline in anabolic signals and activation of catabolic pathways contribute differently to muscle atrophy pathogenesis associated with diseases or unfavorable conditions such as spaceflight. Hence, epigenetic drugs have been proposed [55] to target multiple pathways. Fluocinolone acetonide is a corticosteroid with glucocorticoid activity selected by the GCN method, which could be a useful drug for repurposing for skeletal muscle atrophy. As mentioned in [56], niclosamide is not a good drug for repurposing for glucocorticoid-induced muscle atrophy or cancer cachexia. Anti-inflammatory drugs such as dexamethasone, and drugs such as alendronate, have been proposed for the therapeutic management of muscle wasting and sarcopenia [57]. Similar drugs such as hydrocortisone and chloroquine are selected by link prediction. Insulin resistance is a significant cause of decreased protein and glucose available for muscle anabolism [58]. It can be noted from Table 3 that four insulin-related medications have been selected for repurposing. The drugs L-carnitine, clindamycin, vitamin C, L-ornithine, and nelarabine selected by the GCN have also been selected by the GNN method, with new predicted links and high probabilities, as seen in Table 4. Additionally, the common top ranked diseases with predicted links using GCN and GNN from the DDKG are metabolic diseases, type 2 diabetes, cancer, and neurological disorders. Although there is some overlap in the identified diseases and drugs using the GCN and GNN methods, the drugs predicted by the GCN method are more reliable, as this method has the highest accuracy for the link prediction probabilities. It also performs better when trained with fewer samples and achieves higher validation accuracies. The graph-theoretic measures of degree distribution, neighborhood connectivity, Eigenvector centrality, and subgraph centrality for the nodes in the GDKG and DDKG are listed in Supplementary Table S1 for the 473 genes, and in Supplementary Table S2 for the 98 drugs, respectively. The degree distribution ranges from 1 to 171 for the gene nodes in the GDKG network and from 5 to 76 for the drug nodes in the DDKG network, respectively. Some of the gene nodes, as well as drug nodes, have a higher number of connections in the networks. The neighborhood connectivity is higher in the GDKG because the network is constructed using a large number of diseases overlapping with skeletal muscle atrophy. The neighborhood connectivity is ten for all the drug nodes in the DDKG because we selected a maximum of ten significant drugs for each disease.
The Eigenvector centrality is a measure of the influence of a node in a network: the higher this score, the greater the connectivity of the node with nodes that themselves have a high score for the same measure. This measure is similar for the genes and the drugs in both networks. The subgraph centrality of a node is a weighted sum of the numbers of all closed walks of different lengths in the network starting and ending at the node. There are more closed walks for the gene nodes in the GDKG, hence this value is higher for the gene nodes in the GDKG than for the drug nodes in the DDKG. The graph theoretic measures for the whole GDKG and DDKG networks are given in Table 6. The DDKG network has a higher value of the spectral gap, indicating that the network is sparse and has higher measures for random walk, diffusion, and expansion. The GDKG network has a higher average number of neighbors, indicating that the skeletal muscle genes have a higher neighborhood connectivity measure. The preferential attachment network measure-based link prediction gives an average accuracy of 74.10%, while the ML-based methods give accuracies above 80%. The random walk measure is shown to be a better network measure for link prediction than preferential attachment. The ML methods of GNN, RF and GB, which use random walk features, perform better than preferential attachment-based link prediction alone. The GCN, which uses semi-supervised learning of the graph structure by node embeddings, performs best for link prediction in both networks, giving accuracies of 96.11% and 99.19% in the GDKG and DDKG, respectively. The average accuracy of the GNN, RF, and GB methods for link prediction in the GDKG network is 88.69%, whereas the GCN gives a much better accuracy of 96.11%. Overall, ML methods can be used for novel applications such as the identification of new gene regulators of diseases from spaceflight datasets and candidate drugs for their treatment.

Conclusions
Though skeletal muscle atrophy is known to be an incapacitating consequence of several chronic diseases, increasing morbidity and mortality, no drug is approved to treat this condition. It also severely affects animal models flown in spaceflight missions. In this paper, we have presented a comprehensive study on skeletal muscle atrophy, identifying the key genes that give rise to this condition in spaceflight microgravity. By the application of ML algorithms, we have identified the main gene regulators of skeletal muscle atrophy that are also highly activated in other diseases. By constructing disease-drug networks and applying ML algorithms for link prediction, we have identified top ranking drugs with the highest probabilities that are novel candidates for the management of skeletal muscle atrophy in spaceflight microgravity. In this work, we have mined seven GeneLab datasets to identify key genes and drugs. Through network analysis and ML methods, we show that our networks are scalable and can be expanded to include additional datasets, genes, and drugs to speed up the process of identifying repurposable drugs for medical conditions that arise in long duration spaceflights.
9,164.4
2022-03-01T00:00:00.000
[ "Medicine", "Engineering" ]
Spin Quantization in Heavy Ion Collision We analyzed recent experimental data on the disassembly of $^{28}$Si into 7$\alpha$ in terms of a hybrid $\alpha$-cluster model. We calculated the probability of breaking into several $\alpha$-like fragments for high $l$-spin values for identical and non-identical spin zero nuclei. Resonant energies were found for each $l$-value and compared to the data and other theoretical models. Toroidal-like structures were revealed in coordinate and momentum space when averaging over many events at high $l$. The transition from quantum to classical mechanics is highlighted.

I. INTRODUCTION
Recent experimental data [1] have shown evidence of resonances in the disassembly of the 28 Si nucleus into 7α. The data were obtained from the collision of a 28 Si beam at 35 MeV/A on a 12 C target; the experiment was performed at the Cyclotron Institute, Texas A&M University. The authors of [1] tentatively associated these structures with the population of toroidal high-spin isomers as predicted by a number of theoretical models [2][3][4][5][6][7][8][9][10][11][12][13][14][15][16][17][18][19][20]. In particular, the experimental analysis concentrated on the disassembly of the projectile nucleus into α-like particles. The data show to a high degree of confidence some structures at excitation energies of 114, 126 and 138 MeV, respectively, close to the predicted toroidal state at 143 MeV [12]. These encouraging results call for more experimental and theoretical efforts to uncover these resonances also for different nuclei, different disassembly routes and as a function of excitation energy. Due to the dynamics involved in the disassembly, microscopic models such as the Anti-symmetrized Molecular Dynamics (AMD) model [21] or the Constrained Molecular Dynamics (CoMD) model [22] could be used, but they may become numerically difficult to handle when a large number of events is needed. Furthermore, they may not be able to describe in detail the α-like events as selected from the data [1]. Hybrid models may help in overcoming numerical problems at the expense of some physical insight [23]. It was observed already in the 1930s that α-like nuclei (12 C, 16 O, ...) [24][25][26][27][28][29] display many properties that can be easily explained by assuming that those nuclei are made of α particles with no internal structure. Inspired by these many works, we implemented a dynamical model where α particles interact through suitably chosen two-body forces. We introduced two main variations with respect to what can be found in the literature. The first is the α-α interaction. For simplicity we used the phenomenological Bass potential (for A = 4), widely used for low energy heavy ion collisions [30]. This potential is very attractive at short distances, thus the particles strongly overlap, overcoming the Coulomb repulsion. As a second ingredient of the model, we treat α particles as Gaussian distributions with widths given by their radius. Overlapping particles experience Pauli blocking because of the internal structure of the αs. Thus, we include a repulsive effect due to the increase of the Fermi energy, suitably adjusted to take into account finite size effects [31]. With such simple assumptions we are able to reproduce the binding energies of even-even N = Z nuclei up to mass 104 with less than 5% discrepancy to the experimental data.
We do not want to stress much the properties of the model since our main goal is to simulate the dynamics of the disassembly to compare to data at least qualitatively. Furthermore, we would like to confirm or disprove the existence of exotic unstable shapes using a simple and transparent model and hope to be of guidance for future experiments. We dubbed the model the hybrid α-cluster model (hαc). It is a semi-classical model since it includes Pauli blocking effects. In fact the model ground states display strongly overlapping α-particles and a strong repulsion due to the increase of the Fermi energy. It means that ground states cannot be described by α-point particles and the nucleons degrees of freedom are essential. Systems excited by some external probes expand and the α-degrees of freedom may become dominant. Notice that since the repulsion is due to the Pauli blocking and the Coulomb potential, heavy ion collisions using this model can be simulated at energies above the Coulomb barrier up to maybe 80 MeV/A as we discuss below. At higher energies we may need to introduce a suitable collision term, which is a task to be discussed in the future. We introduced another quantum effect in the initial conditions, i.e., we give to the nucleus at time zero a quantized angular momentum l = l z along the z-axis. We assume that the angular momentum is transferred in the collision of 28 Si and 12 C, or 28 Si and 28 Si. The substantial difference between the two systems is that only even-l values in the entrance channel are allowed in the latter case. Changing the initial angular momentum revealed a wealth of model features ranging from a first order phase transition of dynamical origin to the formation of short living toroids when averaging over events. Due to its simplicity and numerical affordability, we can make prediction to be tested in future experiments. II. THE HYBRID α-CLUSTER MODEL In our model α-degrees of freedom are treated explicitly while nucleon (protons and neutrons) degrees of freedom are treated implicitly hence the hαc acronym. The interaction between the α-particles is given by the Coulomb repulsion (in the monopole-monopole approximation for simplicity) and the nuclear attraction. The latter is approximated as V αα = V Bass (A = 4), i.e., the Bass potential for mass A = 4 nuclei [30]. Coulomb repulsion is not sufficient to prevent a strong overlap among α-particles. Overlapping nuclei increase the repulsion due to the combined action of the Pauli principle and Heisenberg uncertainty principle, in particular the Fermi energy (per α-particle) is given by: MeV is the average kinetic energy of infinite nuclear matter, the factor of four takes into account the fact that we are dealing with α and not nucleons. For small nuclei corrections are needed to take into account finite size effects, which reduce the Fermi energy thus the parameter x F . In ref. [31], Equation 5.5, such correction was discussed for medium light nuclei resulting in x F = 0.65 for 8 Be. For overlapping α-clusters only one nucleon in one α particle is identical to another nucleon in the other α. This parameter takes into account the fact that the Heisenberg uncertainty principle is at play as well for non-identical nucleons [32]. 
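One way to read the Fermi-energy repulsion described above, combining the average kinetic energy of infinite nuclear matter, the factor of four for α-particles rather than nucleons, the finite-size factor x_F and a (ρ/ρ₀)^(2/3) density dependence, is sketched below. Both the functional form and the value of the average kinetic energy are assumptions for illustration, chosen only to be consistent with the quoted 86.7 MeV at maximum overlap for x_F = 0.65.

# Assumed reading of the Pauli-blocking repulsion between overlapping alphas:
#   eps_F = x_F * 4 * E_kin_avg * (rho/rho0)**(2/3)
# The functional form and E_KIN_AVG ~ 21 MeV are illustrative assumptions,
# tuned so that maximum overlap (rho/rho0 = 2, x_F = 0.65) gives ~86.7 MeV.
E_KIN_AVG = 21.0   # MeV, average kinetic energy of infinite nuclear matter (assumed)

def fermi_repulsion(rho_ratio, x_f=0.65):
    """Pauli repulsion per alpha-particle (MeV) as a function of rho/rho0."""
    return x_f * 4.0 * E_KIN_AVG * rho_ratio ** (2.0 / 3.0)

for r in (0.5, 1.0, 2.0):
    print(f"rho/rho0 = {r}: repulsion = {fermi_repulsion(r):.1f} MeV")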
The overlap between α-particles can be described as Gaussian distributions with standard deviation proportional to the α-radius r α = r 0 4 1/3 : (2) The parameter β = 1.22 is fitted to reproduce the binding energy of 12 C, it is the only free parameter entering the model if we exclude the radius parameter r 0 . The value of r 0 has some consequences regarding the moment of inertia, which we discuss below. For the purpose of this paper we use r 0 = 1.15 fm, unless otherwise noted, similar to the parameters entering the Bass potential [30]. At maximum overlap ρ = 2 and ε F Nα = 86.7 MeV which is the maximum repulsion in the two body channel to compare to the nuclear attraction V αα (r = 0) = −58 MeV. This implies that colliding nuclei will become transparent at beam energies well above the Coulomb barrier, similar to Time Dependent Hartree-Fock calculations [33]. A suitable collision term may remedy this shortcoming but it is outside the purpose of this work [34]. Equations (1) and (2) give the repulsion between particles and we treat it as a classical two-body force. The classical Hamilton equations of motion for interacting α-particles are solved numerically using the O(dt 5 ) Runge-Kutta method, dt = 1 fm/c is a typical time step used in the calculations. At the highest excitation energies or angular momenta discussed in this paper, the particle velocities become very large thus we implemented relativistic kinematics. This correction is important but we stress that the description in terms of classical interactions is still valid. To obtain the nuclear ground states and their binding energies, the equations of motion were solved adding a friction force until a minimum and stable configuration is reached. The particles position are saved on a file and used as initial positions in dynamical simulations. To generate events, the initial positions are rotated randomly for each event and/or many different ground states are generated. In Figure 1, we plot the binding energies of α-cluster nuclei as function of the mass number A. The free parameter β of the model was fixed to reproduce the 12 C binding energy. This leads to an overestimation of the binding energy of 8 Be of 2.6 MeV: 59.1 MeV theory vs. 56.5 MeV experiment [35]. This is an important feature since fixing the free parameter to the binding energy of 8 Be would lead to a general underestimation of all the other nuclei. It implies that the α-particles must be more overlapping for heavier nuclei, thus the increase in Fermi energy. It confirms our discussion above that the correct description of nuclear ground states must be in terms of nucleonic degrees of freedom while α-clusters may dominate at lower densities, i.e., in the expansion stage of the nuclear dynamics. Our hybrid model reproduces the binding energies of nuclei up to mass 104 with an error less than 5%. In Figure 1, we have included for comparison the contribution to the binding due to the α binding energy, full diamonds. We notice that changing the value of r 0 to 1.26 fm produces a similar agreement to the binding energies with β = 1.02. Thus, these data are not able to constrain the parameter values to high degree and we will investigate fusion cross sections of even-even N = Z nuclei for further constraints. Once the initial conditions are found, we can generate many initial ground states to be used as initial conditions in dynamical calculations. We treat each α-particle as a Gaussian distribution normalized to one of radius (variance) r α . 
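The ground-state search described above, i.e. solving the equations of motion with an added friction force until a stable minimum configuration is reached, can be sketched as follows. The pair potential used here is a generic placeholder, not the actual Bass-plus-Pauli interaction of the hαc model, and the numerical parameters are illustrative only.

# Minimal sketch of the friction-damped relaxation used to find ground-state
# configurations.  The pair potential is a generic placeholder, NOT the
# Bass + Pauli-repulsion interaction of the hac model.
import numpy as np

def pair_potential(r):
    # Placeholder: short-range repulsion plus intermediate-range attraction (MeV, fm).
    return 500.0 * np.exp(-r / 0.8) - 60.0 * np.exp(-((r - 2.0) ** 2) / 2.0)

def total_energy(pos):
    e = 0.0
    n = len(pos)
    for i in range(n):
        for j in range(i + 1, n):
            e += pair_potential(np.linalg.norm(pos[i] - pos[j]))
    return e

def forces(pos, h=1e-4):
    # Numerical gradient of the total potential energy.
    f = np.zeros_like(pos)
    for i in range(pos.shape[0]):
        for k in range(3):
            p = pos.copy(); p[i, k] += h; ep = total_energy(p)
            p[i, k] -= 2 * h;             em = total_energy(p)
            f[i, k] = -(ep - em) / (2 * h)
    return f

rng = np.random.default_rng(0)
pos = rng.normal(scale=2.0, size=(7, 3))      # 7 alpha-like particles
vel = np.zeros_like(pos)
dt, friction = 0.01, 0.5
for step in range(3000):                      # damped dynamics until relaxed
    f = forces(pos) - friction * vel
    vel += dt * f
    pos += dt * vel
print("relaxed configuration energy:", total_energy(pos))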
In Figure 2, we plot the density averaged over ensembles at two different times. Naturally the system is stable and the central density is rather reasonable. The displayed system is 28 Si and we are going to concentrate on this nucleus for the remainder of this paper since it was carefully investigated in ref. [1]. The calculated binding energy is 236.9 MeV (236.5 MeV from experiments). An important quantity is the moment of inertia I, which can be obtained This result should not surprise since the initial configurations are obtained by randomly oriented ground state initial condition and this procedure produces spherical shapes on average. Notice that increasing the value of r 0 → 1.26 fm gives Ψ = 0.125 MeV for a sphere, a result that we test briefly below. III. FUSION CROSS SECTIONS OF IDENTICAL SPIN ZERO NUCLEI The total fusion cross section of the nuclear reaction is where E cm is the reaction energy in the center of mass frame, µ is the reduced mass of the reaction system and Π l is the fusion probability of the reaction at angular momentum l. We simulate the reactions of identical spin zero nuclei, thus only even l-values are allowed, 28 Si + 28 Si and 12 C + 12 C at different angular momenta l for a given E cm to obtain the fusion probability Π l with hαc model. In Figure 3, we plot the fusion cross section of 28 Si + 28 Si and 12 C + 12 C as function of the reaction energy in the center of mass frame. The fusion cross sections calculated from the neck model are also presented for comparison [36,37]. The hαc model can reproduce the experimental cross section data qualitatively. While for 12 C + 12 C at high E cm where there are no experimental data, the difference between neck model and hαc model is quite large. Naturally, the model interaction can be improved for a better description of the data, but of course fusion below the barrier needs the inclusion of tunneling [37]. IV. ROTATIONS AND DYNAMICAL FIRST ORDER PHASE TRANSITION In this section, we will explore the dynamical properties of a 28 Si nucleus rotating along the z-axis with initial orbital angular momentum l = l z , in units of . We assume that the orbital angular momenta are transferred through the collision with the 28 Si target nuclei. Due to angular momentum and parity conservation, l must be even in the entrance channel. The amount of angular momentum transferred during the interaction of the two nuclei must be simulated microscopically but this is presently outside the validity of this model. For simplicity and numerical convenience we will restrict our investigation to even l. There are many methods theoretically to give the nucleus an initial angular momentum l, a popular one is the cranking model [1,[3][4][5][6][7][8][9][10][11][12][13][14]. Since we are dealing with individual particles (7α) we will use the following ansatz to give the initial momenta K y , K x ( -units) to particle i: In Equation (5) r 2 xy = i (x(i) 2 + y(i) 2 ) and the sum is extended to all the constituents α-particles of the nucleus. Different events are obtained by different ground states initial positions and we give the initial momenta according to Equation (5). Because of the finite number of particles, this will produce classical fluctuations, while the orbital momenta are quantized. This method is more justified when the excitation energy or angular momenta l is larger, i.e., at higher entropies. 
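Reading Equation (4) above as the standard partial-wave sum, sigma = (pi*hbar^2 / (2*mu*E_cm)) * sum over l of (2l+1)*Pi_l, restricted to even l for identical spin-zero nuclei, a numerical evaluation might look like the sketch below; the Pi_l values are invented placeholders rather than hαc output.

# Sketch of the partial-wave fusion cross section, Eq. (4), assumed to be
#   sigma = (pi*hbar^2 / (2*mu*Ecm)) * sum_l (2l+1) * Pi_l
# with even l only for identical spin-zero nuclei.  Pi_l values are placeholders.
import numpy as np

HBARC = 197.327          # MeV fm
AMU = 931.494            # MeV/c^2 per mass unit

def fusion_cross_section(ecm_mev, a1, a2, pi_l):
    """pi_l: dict {l: fusion probability}; returns sigma in mb."""
    mu = AMU * a1 * a2 / (a1 + a2)                         # reduced mass (MeV/c^2)
    prefactor = np.pi * HBARC ** 2 / (2.0 * mu * ecm_mev)  # fm^2
    sigma_fm2 = prefactor * sum((2 * l + 1) * p for l, p in pi_l.items())
    return sigma_fm2 * 10.0                                # 1 fm^2 = 10 mb

# Placeholder fusion probabilities for 28Si + 28Si (even l only).
pi_l = {l: 1.0 for l in range(0, 20, 2)}
pi_l.update({20: 0.5, 22: 0.1})
print(f"{fusion_cross_section(60.0, 28, 28, pi_l):.1f} mb")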
Of course one should not be surprised that we are utilizing the concept of entropy since our equations of motion are time reversible invariant. Entropy arises from events mixing and averages over phase space. Notice also that for each event there may be some angular momentum along the x and y directions, see Equation (5), but on average we have l x = l y = 0, this contributes to the initial classical fluctuations. In Figure 4, we plot the excitation energy as function of l(l + 1) recovering the familiar linear behavior [38]. The error bars are given by the variances obtained from event averaging and they are quite small at small l values as expected. The slope of this plot is proportional to the inverse of the moment of inertia and agrees with our estimate in the previous section for a rigid sphere. Changing the values of r 0 and β produces the expected variation, see Figure 4. The moment of inertia is a dynamical quantity since the system expands and breaks into fragments at high E * , thus the obtained value refers to time t = 0 fm/c. The calculations give the most probable energies E * for each value of l, the values obtained are reported in Table I. An interesting physical quantity is the excitation energy as function of the kinetic energy of the particles. If the system would reach thermal equilibrium, the latter quantity could be related to the temperature. In Figure 5, we plot these quantities and notice the peculiar behavior for E k near 25 MeV. The increase in excitation energy for fixed kinetic energy signals the occurrence of a first order phase transition and signals the opening of new channels such as evaporation, 'fission' and fragmentation, i.e., the sudden increase of the degrees of freedom of the system, from one nucleus to many fragments. We notice that we get a finite probability of breaking into 7α for l = 16, and E * (7α) = 54 MeV, which gives the model lowest excitation energy (in 1000 events) when breaking into 7α. The question remains if the transition is of thermal origin. A signature of thermal equilibrium is to observe the same features for each coordinate. We know that the nucleus is rotating along the z-axis, thus we expect the momenta along the same axis to be small, zero on average, see Equation (5). To reach equilibrium, energy must be transferred from the other directions and this may be impossible for high l-values (and angular momentum conservation). From the large 'error bars' (i.e., variances due to the initial conditions) in Figure 5 we may guess that the system becomes more and more chaotic but that does not prove that thermal equilibrium is reached. A definite answer to this problem may be obtained by repeating the plot as function of the kinetic energy along the z-axis, E kz . If the system reaches thermal equilibrium multiplying E kz by a factor of 3 should reproduce Figure 5. In Figure 6, we plot E * as function of 3 × E kz , compare to Figure 5. At low energy we observe an increase of E * up to about 25 MeV where the phase transition occurs. Higher l-values or E * values do not produce an increase in E kz but just an increase in the variances. It means that even if we increase the excitation energy the system does not have enough time to transfer kinetic energy from the x-y plane to the z-direction, i.e., to reach thermalization. This can be interpreted as an apparent maximum temperature that the system may sustain. Such behavior could be compared to the Lyapunov exponent of an expanding system as discussed in ref. 
[39], it proves that the phase transition is of dynamical origin. These findings could be experimentally tested [1]. The small density increase at large r is due to α-particle evaporation in some events. For this plot, 50 events were generated. In Figure 7, we plot the density distribution at two different times t = 150 fm/c and 1500 fm/c at E * = 34 MeV, i.e., in the region of the phase transition. The bump that we observe at later times is due to the escape (evaporation) of one α-particle in some events thus doubling the number of degrees of freedom. There have been suggestions that highly rotating nuclei may display toroidal shapes [2][3][4][5][6][7][8][9][10][11][12][13][14][15][16][17][18][19]r and this was also the focus of the experimental investigation in ref. [1]. Our model allows us to study the shape evolution as function of time for given l or E * . In Figure 8, we plot the coordinates of each α-particle in the x-y plane at two different times, 200 fm/c and 600 fm/c. Notice that only the events breaking into 7α are included in the plot. We use a simple algorithm to recognize the fragments, i.e., we assume that two particles belong to the same fragment if their relative distance is less than 5 fm. This may not be the best approach to recognize fragments at earlier times but it is of little importance since we follow the expansion for very long times. Using this algorithm we can easily estimate the probability of decays into all possible channels allowed by dynamics. In Figure 8, we see that at 200 fm/c matter is missing at the center during the expansion, left panel. At a later time, more α-particles are recognized and a toroidal shape is observed, notice the change of scales. This is due to the combined effect of the angular momentum and averaging over events. We can repeat the same considerations in momentum space, see Figure 9. At later times the system expands under the influence of Coulomb only and this explains why little expansion is seen in momentum space at the two different times, compare to Figure 8. V. CROSS SECTION ESTIMATE It is instructive to give an estimate of the cross-section in order to get some deeper insight into the process and be of guidance and stimulus to more experimental and theoretical investigations [1]. The cross section is given in Equation (4). We consider the reaction 28 Si on 12 C at 35 MeV/A [1], thus E cm = 294 MeV. In order to extend the sum to even l-values only, we assume that the angular momenta are transferred in symmetric Si + Si collisions. T l gives the probability that in the collision a certain angular momentum l and/or excitation energy E * is transferred to the 28 Si, see Table I. This is the part missing in the calculation and we will estimate it from the available phase space in the reaction. We assume that the maximum excitation energy E M that can be transferred to the Si is proportional to its mass, thus: To get this excitation energy in Si + Si we estimate a beam energy E/A = 29.4 MeV and this could be an interesting experiment to confirm our findings and widen the results of ref. [1] and also to investigate the angular momentum transfer to each nucleus during the reaction. Our simple ansatz for the available phase space is: T l → 0 if the excitation energy E * > E M . This crude approximation guarantees that the cross section vanishes at large excitation energies or large l-values. 
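The fragment-recognition rule described above (two α-particles belong to the same fragment if their relative distance is below 5 fm) amounts to finding the connected components of a proximity graph. A minimal sketch, with invented final-state coordinates:

# Sketch of the fragment-recognition algorithm: particles closer than 5 fm are
# linked, and fragments are the connected components of the resulting graph.
import numpy as np

def find_fragments(positions, cutoff=5.0):
    n = len(positions)
    parent = list(range(n))                 # union-find forest
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(positions[i] - positions[j]) < cutoff:
                parent[find(i)] = find(j)   # merge the two clusters
    clusters = {}
    for i in range(n):
        clusters.setdefault(find(i), []).append(i)
    return list(clusters.values())

# Invented final-state coordinates (fm) of 7 alpha particles.
pos = np.array([[0, 0, 0], [3, 1, 0], [1, 3, 0],        # one 3-alpha fragment
                [20, 0, 0], [22, 2, 0],                  # one 2-alpha fragment
                [-30, 5, 0], [40, -25, 10]])             # two free alphas
print(find_fragments(pos))   # e.g. [[0, 1, 2], [3, 4], [5], [6]]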
In particular, the most probable excitation energies E* associated with each l-value can be read from Table I. The hαc model provides the probability Π_l for the system to break into the different open channels for a given l-value, using the simple fragment-recognition algorithm discussed above. This quantity is plotted in Figure 10 for the channels indicated in the inset. We have included in the 7α channel events where one or more 8Be are produced; this mimics the experimental data, where such events are implicitly included. Recall, however, that the model binding energy of 8Be is overestimated. For reference we have included the experimental points from ref. [1] with large error bars, just to indicate the position of the resonances. We have seen in Figures 8 and 9 that those values indeed correspond to short-lived toroidal states. These results can be compared to the toroidal shell model results reported in ref. [1], Table II. Notice that the two models differ in the way the orbital angular momentum is given to the system. The 7α channel dominates at high E* values, while lower resonances are dominated by events where at least one large fragment is present. The larger the heaviest fragment, the lower the resonant energy. This is qualitatively similar to what was observed in the data [1]. We stress that all possible α-decay channels can be estimated in the present model. To include channels where nucleons or other fragments are produced we could couple the hαc model to a statistical one [23]. These competing channels would decrease the weight of the pure α-channels; thus we expect our cross sections to be overestimated. The experiment [1] provided the reaction cross section per unit energy for the different α-decay channels. The hαc model gives the energy distribution for each channel, whose central values E* are reported in Table I and whose variances Σ_l are reported in Figures 3-5. We notice that when selecting particular channels (i.e., 7α), the most probable energies E* and their variances change, see Figure 10, and we use these values in the following calculations. We assume that the energy distribution for each l-value is given by a Gaussian distribution normalized to one, g_l(E, E*, Σ_l), with units 1/MeV. Thus, we write the differential cross section as dσ/dE = [πħ²/(2μE_cm)] Σ_l (2l+1) T_l Π_l g_l(E, E*, Σ_l). (8) In Figure 11, we plot the differential cross section for the 7α channel only. The contributions of the different l-values are included and indicated in the inset. The estimated cross section is generally above the experimental one, given by the open crosses. The integrated cross section is a factor of 8 above the data (1.9 mb), partly due to the many open decay channels not included in the model. Notwithstanding these differences, there are some interesting features to notice. The model gives distinguishable bumps at low energies and l = 16-22. The lowest l-values are in a region where the experimental values have large error bars. The experimental cross section starts from 62 MeV (55 MeV in the hαc model) [40,41]. Another feature worth noticing is the increase of the widths with increasing l-values, see also Figure 4. When those widths become very large, the distributions of different l-values overlap and can no longer be distinguished. This signals the crossing into classically dominated dynamics, where the impact-parameter picture becomes a good approximation. Our results suggest that repeating the experiment at, say, higher beam energies may not reveal more values of E*, because they overlap with nearby l-values.
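Taking the differential cross section as the partial-wave sum of Equation (4) weighted by the transfer probabilities T_l, the 7α branching probabilities Π_l and the Gaussian energy distributions g_l (my reading of Equation (8)), the l-decomposed spectrum can be assembled as sketched below; all numerical inputs are placeholders rather than hαc results.

# Sketch of the differential cross section, read as
#   dsigma/dE = (pi*hbar^2/(2*mu*Ecm)) * sum_l (2l+1) T_l Pi_l(7a) g_l(E; E*_l, Sigma_l),
# with g_l a Gaussian normalised to one (1/MeV).  All inputs are placeholders.
import numpy as np

HBARC, AMU = 197.327, 931.494

def gaussian(e, mean, sigma):
    return np.exp(-0.5 * ((e - mean) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def dsigma_dE(e_grid, ecm, a1, a2, channels):
    """channels: list of (l, T_l, Pi_l, E*_l, Sigma_l); returns mb/MeV."""
    mu = AMU * a1 * a2 / (a1 + a2)
    pref = np.pi * HBARC ** 2 / (2.0 * mu * ecm) * 10.0     # mb
    out = np.zeros_like(e_grid)
    for l, t, p, estar, sig in channels:
        out += pref * (2 * l + 1) * t * p * gaussian(e_grid, estar, sig)
    return out

# Placeholder channel parameters: (l, T_l, Pi_l(7alpha), E*_l [MeV], Sigma_l [MeV]).
channels = [(16, 1.0, 0.02, 60.0, 4.0), (18, 1.0, 0.05, 75.0, 6.0),
            (20, 1.0, 0.10, 95.0, 9.0), (22, 0.8, 0.15, 115.0, 13.0)]
e_grid = np.linspace(40.0, 160.0, 241)
spectrum = dsigma_dE(e_grid, 294.0, 28, 12, channels)
print("integrated 7-alpha cross section:", np.trapz(spectrum, e_grid), "mb")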
Lower beam energies may reveal the wealth of distinguishable E * as in Figure 11. Of course a crucial point would be to have a detector with better granularity and coverage. If not just even l-values are admitted in the entrance channel (i.e., Si + C collisions) may open different scenarios. On the same footing experiments using a 29 Si on an identical target would also be interesting to confirm these findings. The 29 Si beam would have the problem that neutrons are emitted and those are usually difficult to detect in coincidence with 7α. Interesting consequences of our model can be derived from a simple inspection of Equations (5)-(8) and the oscillations displayed in Figure 11. Indeed similar features have been investigated in the fusion of two heavy ions below and above the Coulomb barrier [37,42,43]. Equation (8) suggests some simple scaling of the cross section by defining the dimensionless quantity: In the equation above, the E cm term is important when comparing the same system at two different beam energies. It does not guarantee overall scaling since at different beam energies, different l-values maybe relevant and we expect variations on the tail of the distributions. Similar to fusion reactions [42], Equation (9) gives the 'energy-weighted excitation function' (EWEF). Notice that in low energy fusion reactions it could be convenient to replace Equation (9) with [42,43]: , where σ R is the reaction cross-section. Since many exit channels are available in fragmentation reactions, it is useful to define the dimensionless excitation energy as: where the Q-value depends on the exit channel, in ref. [1] the decay of 28 Si in 7α was analyzed in detail and Q(7α) = 33.6 MeV. In Figure 12, we compare the experimental dimensionless quantities [1], open crosses, to the hαc model (full squares). The model calculations have been divided by a factor 8 to take into account the difference to the data in the total cross-section [1], compare to Figure 11. Standard errors have been included to the data to indicate the region of low statistics, thus of particular interest is the region 2.4 < E < 5.2. Figure 12 does not add much to Figure 11 but it will become a more interesting observable when data at different beam energies and different combinations of projectile and target will be available. Oscillations in the EWEF can be better displayed by defining its first and second derivative: The first derivative of the EWEF is plotted in Figure 13 as function of the dimensionless excitation energy. Interesting structures in the energies of interest can be noticed. In particular the data show peaks at E = 3.4, 3.6 and 4.1, corresponding to E * = 114, 122 and 138 MeV, respectively, very close to the data analysis of ref. [1]. More peaks may be seen at E = 3.1, 4.5 and 4.7 (E * = 104, 151 and 158 MeV) but in a region where statistics is rather low thus the need for higher statistics experiments. As expected, the model calculations display definite peaks especially at low excitation energies where the data is poor thus impossible to compare, see also Table I. The average data trend is rather well reproduced by the model. In Figure 14, we repeat our analysis for the second derivative of the EWEF. In this plot the difference between model and data is more marked. Peaks in the model calculations occur especially for E <3 while for the data E >3. The peaks are consistent to those obtained from the first derivative of the EWEF. 
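The energy-weighted excitation function and its first and second derivatives on the dimensionless energy axis can be obtained numerically from the spectrum, for instance with finite differences as sketched here. The exact dimensionless scaling constants of Equation (9) are not reproduced, and the spectrum is a placeholder array (e.g. the output of the previous sketch).

# Sketch: an energy-weighted excitation function (EWEF, up to constant factors)
# and its first and second derivatives on the dimensionless axis E/Q.
import numpy as np

Q_7ALPHA = 33.6                               # MeV, Q-value of 28Si -> 7 alpha
e_star = np.linspace(40.0, 160.0, 241)        # excitation energy grid (MeV)
dsig = np.exp(-0.5 * ((e_star - 115.0) / 13.0) ** 2)   # placeholder spectrum

eps = e_star / Q_7ALPHA                       # dimensionless excitation energy
ewef = eps * dsig                             # EWEF up to constant factors
d1 = np.gradient(ewef, eps)                   # first derivative
d2 = np.gradient(d1, eps)                     # second derivative
log_der = d1 / ewef                           # logarithmic derivative (discussed next);
                                              # constant factors cancel in this ratio
print(f"first-derivative peak at E/Q = {eps[np.argmax(d1)]:.2f}")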
To complete our comparison to fusion reactions we define the logarithmic derivative as [42,43]: This quantity has the obvious property that all constants entering the EWEF and its derivative cancel out, for instance the factor of 8 used to normalize the model to the data. In Figure 15, we plot L(E) as function of E and remark the substantial differences to the previous plots. Peaks in the data remain only for the values reported in ref. [1]. The model calculations show large fluctuations in the low energy region and in analogy to the fusion hindrance we can interpret these as a signature of the lowest resonances for this particular decay channel. Notice however that the model peaks reported in Figure 11 at E = 55 MeV have very low statistics and are not included in this plot (off scale). If this interpretation is correct we can assume that the L(E) becomes monotonic at large energies, i.e., in the classical limit. Unfortunately not much can be inferred from the low energy data since error bars are large, notice however that in the high-energy region of low statistics, the data show a flat behavior thus suggesting that low energy resonances are reachable with more statistics, better detectors and maybe lower beam energy. VI. CONCLUSIONS In this work, we have introduced a semi-classical model for nuclei whose constituents are α particles. In the ground state, the α-particles strongly overlap and this gives rise to repulsion due to the increase of the Fermi energy. This means that a proper description of the nuclear ground state must be done in terms of nucleonic degrees of freedom with the possibility that expanding nuclei coalesce into α-clusters. We have applied our hybrid model to the fragmentation results [1] of 28 Si breaking into even-even N = Z nuclei only. We have found preferential values of the excitation energies for each l-value but also larger and larger variances in the energy distribution due to the fluctuations in the initial conditions, which are classical in origin. For high l-values, the fluctuations become very large and different l-distributions overlap signaling the approach to classical mechanics. We have shown that the spin quantization could be determined experimentally in various different experimental situations by changing the beam energies and the masses of the colliding nuclei including radioactive species. These experiments require very well performing 4π detectors, high statistics and one could take advantage of running the experiments in inverse kinematics. A dynamical phase transition was also revealed together with the limiting temperature that the system could sustain. Above the phase transition, toroidal-like shapes are observed when averaging over many events. These findings may open up a new route of research based on the seminal results of ref. [1] and be linked to the oscillations seen in fusion reactions above and below the Coulomb barrier [42,43]. We have discussed scaled energy weighted excitation functions and its derivatives. We have shown how these quantities consistently display interesting features similar to the barrier fluctuations in fusion reactions. We believe this work may open a new route of investigations to link fusion reaction to deep-inelastic, incomplete fusion and fragmentation. Well performing 4π detectors are needed to improve the energy resolution and granularity and high statistics data are essential. More theoretical work is finally needed to link the proposed analysis to nuclear fundamental properties.
7,329
2021-09-24T00:00:00.000
[ "Physics" ]
Phylogeographic analyses point to long-term survival on the spot in micro-endemic Lycian salamanders Lycian salamanders (genus Lyciasalamandra) constitute an exceptional case of micro-endemism of an amphibian species on the Asian Minor mainland. These viviparous salamanders are confined to karstic limestone formations along the southern Anatolian coast and some islands. We here study the genetic differentiation within and among 118 populations of all seven Lyciasalamandra species across the entire genus’ distribution. Based on circa 900 base pairs of fragments of the mitochondrial 16SrDNA and ATPase genes, we analysed the spatial haplotype distribution as well as the genetic structure and demographic history of populations. We used 253 geo-referenced populations and CHELSA climate data to infer species distribution models which we projected on climatic conditions of the Last Glacial Maximum (LGM). Within all but one species, distinct phyloclades were identified, which only in parts matched current taxonomy. Most haplotypes (78%) were private to single populations. Sometimes population genetic parameters showed contradicting results, although in several cases they indicated recent population expansion of phyloclades. Climatic suitability of localities currently inhabited by salamanders was significantly lower during the LGM compared to recent climate. All data indicated a strong degree of isolation among Lyciasalamandra populations, even within phyloclades. Given the sometimes high degree of haplotype differentiation between adjacent populations, they must have survived periods of deteriorated climates during the Quaternary on the spot. However, the alternative explanation of male biased dispersal combined with a pronounced female philopatry can only be excluded if independent nuclear data confirm this result. Introduction Small-range endemism at both species and higher taxonomic level is a common phenomenon among all three amphibian orders. Various mechanisms exist leading to small ranges and a1111111111 a1111111111 a1111111111 a1111111111 a1111111111 isolation (e.g., [1,2]). We here put light on Lycian salamanders (Lyciasalamandra), an old clade of the Salamandridae, which has evolved high diversity of allopatric micro-endemic lineages [3]. Lycian salamanders occur in the western Taurus Mountains along the Mediterranean coast of Turkey and on the Greek Karpathos Archipelago. Seven species, some with various subspecies, have been suggested. Each is endemic to an isolated mountain ridge of its own, with almost no signs of admixture among each other even when they live in close proximity [4]. Lyciasalamandra is the sister genus of Salamandra and evolved circa 9.29 million years ago (mya) (95% confidence interval (CI): 6.12-12.8 mya; [5]). The final emergence of the mid-Aegean trench 10.2-12.3 mya probably initiated species evolution within Lyciasalamandra [6]. Phylogenetic studies consistently support a scenario of largely simultaneous emergence of seven major lineages [7,8], and the hypothesis of a hard polytomy could in fact not be rejected [6]. Intraspecific evolution within Lyciasalamandra species can be explained by the Messinian Salinity Crisis (5.3-5.6 mya) as well as repeated climatic alterations since the Late Pliocene and throughout the Pleistocene [5,6]. 
Given this high evolutionary age of Lyciasalamandra species and their intraspecific lineages [6] in combination with a low level of genetic diversity of local populations and a high amount of private local mitochondrial DNA haplotypes [8], it is sound to assume that these lineages, once having evolved, were not able to substantially disperse out of their mountains. Repeated bottlenecks associated with an increasing effect of genetic drift may have contributed to or even reinforced this pattern of low degrees of local genetic variability. And if eventually small-scale dispersal may have occurred, such founder events again should have reduced local genetic variability. All this may have led to today's small-scale differences of pattern and colour among populations, forming the basis for the description of numerous new taxa within the last years (e.g., [9][10][11][12][13][14][15][16]). Unfortunately, up to know data supporting such a scenario solely stem from organelle data, so male-biased dispersal, which may erode local population differentiation, can only be detected by also analysing nuclear DNA. But how could these salamanders survive such long periods 'on the spot', especially with regard to the tremendous climatic alterations which repeatedly occurred during the last 2.4 my [17]? Especially, since that time, cold periods (stitials) and warm (interstitials) periods regularly alternated [18]. This is known to have drastically affected the spatial distribution of plants and animals in the entire Western Palearctic (e.g., [19,20]). Although ice ages along the Mediterranean coast may not have been as severe as in the interior land masses [21], it is suggested that the increasingly dry conditions during glacial periods had an impact on amphibian populations [22,23]. The ecological niche of the enigmatic Lycian salamanders might be the key to understand this unique situation. Almost all known populations live on karstic limestone. Throughout the entire Mediterranean, such rock formations are associated with an increased biological diversity because they offer a variety of micro-habitats [24]. Especially at the slopes of south-facing coastal mountains karstic limestone provides a sufficient moisture gradient to organisms due to the opportunity to deeply hide inside crevices of rock formations and boulder fields. As major pre-requisite of this life-style, the Lycian salamanders had evolved a viviparous mode of reproduction, with females giving birth to only two juveniles per year. Relying on stable micro-climatic conditions through vertical movements in order to constantly follow suitable micro-habitat conditions within short distances (e.g., [25]) may also have constrained Lycian salamanders to evolve a high degree of ecological plasticity. Rödder et al. [26] demonstrated that the climate niche of six Lyciasalamandra species is similar, with merely the single south-east Aegean Sea Island (i.e. non-coastal) species, L. helverseni, showing a deviating climate niche. Further support for intra-generic climate niche conservatism comes from a study on demographic life-history parameters. Sinsch and co-authors [27] found that detectable differences in life history traits among populations of different species and subspecies are mainly due to variation in the period of surface activity rather than being the result of fundamental differences in their ecological adaptation. 
On the one hand, the highly specialised ecological and reproductive adaptations of Lyciasalamandra species allowed them to survive in primarily hostile environments. On the other hand, this may have 'trapped' them on suitable habitat spots, with restricted options to disperse and to cross areas lacking suitable micro-habitats. Such rare colonization events would result in low levels of within and high levels of among population genetic differentiation (e.g., [8]). Up to now, evidence for restricted gene flow among Lyciasalamandra populations comes from studies that were mainly conducted to solve their inter-and intra-specific phylogenetic relation (e.g., [6][7][8]). Accordingly, sample designs were not elaborated to infer evolutionary processes at smaller geographic scales. We here apply a population sampling across all species to test the hypothesis that most Lyciasalamandra populations are strongly isolated from each other. In the absence of strong male-biased dispersal, this would be shown by the existence of numerous local haplotypes and by low levels of genetic (haplotype) diversity within and high levels of haplotype diversity among populations. This prediction is based on the assumption that the pronounced adaptation of Lycian salamanders to a largely subterranean life style, coupled with viviparity, has continuously trapped them inside once colonised mountains. By applying species distribution models (SDMs) and using current and past climate scenarios, we therefore test a second hypothesis that these salamanders do not need to horizontally move out of trouble in times of drastically changed climatic conditions (as reflected by the Last Glacial Maximum (LGM)), which we use as a proxy for the numerous climatic alterations that repeatedly occurred during the Pleistocene; rather were they able to survive on the spot during periods of cold and dry climate. Population sampling We analysed 559 specimens of Lyciasalamandra from 118 populations (population average 4.7 specimens) by either using sequences stored in GenBank or by analysing new tissue samples (Fig 1, S1 File and S1 Table). They cover all currently described species and subspecies as well as their entire ranges. Turkish samples were collected under license according to the ethical permission of the Ege University Animal Experiments Ethics Committee (2014#001) and special permission (2014#62406) for field studies from the Republic of Turkey, Ministry of Forestry and Water Affairs. Sampling in Greece took place according to the Hellenic National Law (Presidential Decree 67/81) and under a special permit (#107439/758) issued by the Ministry of the Environment. Salamanders were collected by turning stones in the respective habitats; tissue samples (toe clips) were immediately stored in absolute ethanol. DNA extraction and sequencing DNA was isolated using the Qiagen Blood and Tissue Kit following the manufacturer's protocol. We sequenced fractions of two mitochondrial genes: 16SrDNA (primers 16SAL and 16SBH of [28]; initial melting for 120 s at 94˚C, 33 cycles of denaturation for 30 s at 94˚C, primer annealing for 30 s at 51˚C, extension for 60 s at 65˚C, final step at 65˚C for 10 min) and ATPase (primers L-LYS-ML and H-COIII-ML of [8], covering fractions of subunits 8 and 6; initial melting for 10 s at 94˚C, 30 cycles of denaturation for 30 s at 98˚C, primer annealing for 30 s at 67˚C, extension for 30 s at 72˚C, final step at 72˚C for 1 min). 
PCR reactions were prepared using either 5Prime Master Mix (16S) or the Phusion Flash Master Mix of Thermo Science (ATPase). PCR products were purified using the High Pure PCR Product Purification Kit of Roche. Sanger reactions for all genes were run using the Big Dye Terminator (ABI) with initial melting for 60 s at 96˚C, 25 cycles of denaturation for 10 s at 96˚C, primer annealing for 5 s at 50˚C, extension for 240 s at 60˚C. We sequenced single stranded fragments from both directions each on an ABI 3500 Genetic Analyzer Serie 2 automatic sequencer using standard protocols. each codon position in ATP6 and ATP8), implementing the greedy algorithm and the Akaike Information Criterion (AIC). We compared the Partition Finder runs with linked and unlinked branch length and selected the model with linked branch length across partitions according to the AIC value. Analyses of demographic population history Nucleotide diversity (π) and haplotype diversity (hd) were calculated to estimate the genetic diversity of distinct phyloclades using DnaSP v. 6.10 [38] (for phyloclade delineation see below). To gain information about the demographic history of phyloclades we calculated further statistics: pairwise mismatch distribution [39], with observed distributions tested against expected distributions under a constant growth and a growth-decline model, respectively; Tajima's D [40]; and Fu's Fs [41]; all analyses done with DnaSP (see S1 Fig for mismatch distribution). The mismatch distribution describes the frequency of pairwise substitutional differences among individuals of a given group and is expected to be unimodal in populations that underwent recent bottleneck and rapid expansion [39]. Tajima's D statistic tests for a departure from neutrality as measured by the difference between the number of segregating sites (h) and the average number of pairwise nucleotide differences (p). In the absence of balancing or purifying selection, population expansion can cause significant negative departures of Tajima's D from zero, while a population bottleneck can cause a significant positive departure from zero [40]. Under the assumption of neutrality, Fu's Fs statistic provides a test for population growth as well by identifying an excess of rare haplotypes in an expanding population when compared with the number of expected haplotypes in a stationary population (Fs<0; [41]). Conversely, Fs>0 would indicate a recent population bottleneck. Fs-values should be regarded significant if p<0.02 [41]. As additional test statistics for detecting patterns of population growth, we calculated R 2 of [42], which has shown by these authors to be superior to the raggedness statistic rg of [43]. Statistical significance of R 2 was tested using coalescent simulations in DnaSP. In addition, the demographic population history was reconstructed for each phyloclade using the Bayesian Skyline Plots (BSP) [44] method in Beast (version 2.5.0, [45]). We used the selected partitions of Partition Finder as predefined subsets and bModeltest for calculating the substitution models and the phylogeny in one single analysis [46]. As estimation for the substitution rate we inferred the mean rate of [6] (9.70E-3 sites per Mya) as strict clock rate. Coalescent Bayesian Skyline was used as tree prior and MCMC tests were run for 10 million steps each and sampled every 1,000 th generation. BSP can be found in S2 Fig. 
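For orientation, the core diversity statistics mentioned above (haplotype diversity, nucleotide diversity and Tajima's D) can be computed from an alignment with a few lines of Python, following the standard formulas; the sequences below are invented toy data, and DnaSP remains the tool actually used in the study.

# Toy sketch of the diversity statistics used above: haplotype diversity (hd),
# nucleotide diversity (pi) and Tajima's D.  Sequences are invented.
from collections import Counter
from itertools import combinations
from math import sqrt

seqs = ["ACGTACGTAA", "ACGTACGTAA", "ACGAACGTAA", "ACGAACGTTA", "ACGTACGTTA"]
n, L = len(seqs), len(seqs[0])

# Haplotype diversity: hd = n/(n-1) * (1 - sum(p_i^2)).
freqs = Counter(seqs)
hd = n / (n - 1) * (1 - sum((c / n) ** 2 for c in freqs.values()))

# Mean number of pairwise differences (k) and nucleotide diversity (pi = k / L).
pair_diffs = [sum(a != b for a, b in zip(s1, s2)) for s1, s2 in combinations(seqs, 2)]
k = sum(pair_diffs) / len(pair_diffs)
pi = k / L

# Tajima's D (Tajima 1989).
S = sum(len(set(col)) > 1 for col in zip(*seqs))          # segregating sites
a1 = sum(1.0 / i for i in range(1, n))
a2 = sum(1.0 / i ** 2 for i in range(1, n))
b1, b2 = (n + 1) / (3.0 * (n - 1)), 2.0 * (n ** 2 + n + 3) / (9.0 * n * (n - 1))
c1, c2 = b1 - 1.0 / a1, b2 - (n + 2) / (a1 * n) + a2 / a1 ** 2
e1, e2 = c1 / a1, c2 / (a1 ** 2 + a2)
D = (k - S / a1) / sqrt(e1 * S + e2 * S * (S - 1))

print(f"hd = {hd:.3f}, pi = {pi:.4f}, Tajima's D = {D:.3f}")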
Analysis of molecular variance (AMOVA) Ordered mitochondrial haplotypes (based on a p-distance matrix) were used to test for the degree of population fragmentation since, depending on their phylogenetic background, they may harbour information about their own history [47]. A hierarchical AMOVA [48] (ARLE-QUIN software of [49]) partitioned the total genetic variance into variance components within populations (V IP ), among populations within phyloclades (V PC ) and among phyloclades within species (V CS ). We also calculated the respective inbreeding coefficients (F-values) according to Weir and Cockerham [50]. Exact probabilities were calculated using the Markovchain method [51], with 1023 recombinations and replicates each. Phylogenetic tree and haplotype networks A phylogenetic tree of all haplotypes was calculated using Maximum Likelihood (ML) and Bayesian Inference (BI). For hierarchical outgroup rooting we added homologous gene fragments from complete mitochondrial genomes of Salamandra salamandra (EU880331), Chioglossa lusitanica (EU880308), Mertensiella caucasica (EU880319), Pleurodeles poireti (EU880329), Pleurodeles waltl (EU880330), Euproctus platycephalus (EU880317) and Euproctus montanus (EU880316). The ML tree was calculated with RAxML (Stamatakis 2014) using rapid bootstrapping and the greedy algorithm, running 2,000 bootstrap replicates. The BI tree was calculated with MrBayes (version 3.2.6, [52,53,54]) implementing two runs with four independent chains; each run with 10 million generations and sampled every 1,000 th generation with a burn-in of 20% (see S3 Fig for the BI tree). Afterwards we checked if the tree likelihood had converged. The results of Partition Finder were used as a priori configurations in MrBayes (see S2 Table). Since we could only use one substitution model for all partitions in RAxML [55], we decided to use the GTR+G model for all partitions. We generated species specific population level genealogies using TCS [56]. TCS is widely used to calculate haplotype networks based on statistical parsimony. We set a probability cutoff value of 90% that defined the maximum number of mutational connections between pairs of sequences and thus delineated unconnected haplotype networks as distinct intraspecific phyloclades. Modelling of current and past species distributions Grid-based correlative Species Distribution Models (SDMs; [57]) were used to predict the current potential distributions of Lyciasalamandra species within the general region of the genus' known range. For this purpose, we compiled 253 geo-referenced localities of all species based on literature and on own fieldwork data (S1 File). The minimum number of records per species was 18 (details below) after elimination of duplicates in the same grid cell, applying the grid system of SDM climate data (see below). Although SDM building is delicate when the number of records is low (cf. [58]), it can be feasible in narrow-ranged species [59], which is why we continued to compute SDMs for all seven species. As ecological predictors we used current high resolution climate data for the period 1979-2013 from the CHELSA project ([60]; version 1.2; data available at: http://chelsa-climate.org/, accessed 10 May 2018). CHELSA operates on monthly means and is based on a quasi-mechanistic statistical downscaling of the ERA interim global circulation model (GCM) with a GPCC bias correction [60]. The CHELSA website provides 'bioclim' variables (cf. [61]). 
From the 19 variables available, six were selected via pair-wise Pearson correlation analyses to avoid effects of multicollinearity, which is important when projecting SDMs into new space and time [62]. Of highly correlated variables (|r| > 0.7), we excluded the less informative one, based on a priori assumptions on biological importance to our target organisms: bio3, isothermality (BIO2/BIO7) ( � 100); bio11, mean temperature of coldest quarter; bio15, precipitation seasonality (coefficient of variation); bio18, precipitation of warmest quarter; bio19, precipitation of coldest quarter. CHELSA climate data are available at grid resolution 30 arc sec, thus reflecting macro-or mesoclimate. Like multiple authors before (cf. [57]), we consider these data as proxies to micro-climatic conditions. This is a crucial point to mention, especially with regard to Lyciasalamandra species, because part of their life these amphibians exploit deep-reaching systems of crevices in karstic limestone systems [3,27,63]. We defend our approach, as micro-climatic data for the focal taxa are not available and Rödder et al. [26] have shown that macro-climate can be effectively used for the computation of Lyciasalamandra SDMs. Along with these authors, we expect that CHELSA climate data show where the climate in general is favorable to our study species. With the goal to predict the potential geographic ranges of the target species during the LGM (ca. 21 K BP), SDMs based on the current climate were projected to LGM data available from the CHELSA website (i.e., bioclim variables, downloaded 10 May 2018). They are based on the implementation of the CHELSA algorithm on PMIP3 (Paleoclimate Modelling Intercomparison Project Phase III) data using the widely used and well performing Community Climate System Model 4 global circulation model [64] as a test dataset and a paleo-digital elevation model with resolution 30 arc sec [60]. Maxent 3.4.1 [65,66] was used for SDM building (https://biodiversityinformatics.amnh. org/open_source/maxent/, accessed 10 May 2018). This presence-only/background method operates with a machine-learning algorithm following the principle of maximum entropy. It makes predictions on the potential geographic range of a taxon by taking environmental (here: climatic) data from geo-referenced species records and random background data [65,67]. In this way, it contrasts the environmental conditions at species' presences against those at the background to fit a function to estimate the relative suitability to the species [68]. Maxent is a widely used SDM tool and often performs better than other SDM methods [69,70]. It offers various settings for SDM building allowing fine-tuning [65,70]. This requires some caution, however, as with these, the output can be dramatically altered when uncritically used [67,68,71,72]. Therefore, it is important to explore settings and to adapt them to the available data [66,70,73]. In our final model runs, we employed Maxent with specifications for SDMs based on small sample sizes with hinge features. In three species (L. antalyana, L. flavimembris, L. helverseni), with 29, 18 and 23 records, respectively, the cross-validate approach was chosen. For all other species (with 33 or more records), the subsample approach was used with each 25% of the records randomly set aside as test data. The number of replicates was equal to the number of records in the cross-validate and 100 in the subsample approach. 
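The variable-selection step described above (dropping one predictor from every pair of bioclim variables with |r| > 0.7) can be reproduced generically as in the sketch below; the data frame stands in for CHELSA values sampled at the occurrence records, and the simple keep-the-first rule replaces the study's choice based on biological importance.

# Sketch of the predictor-selection step: drop one variable from every pair of
# bioclim predictors whose pairwise Pearson correlation exceeds |r| = 0.7.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
base = rng.normal(size=100)
env = pd.DataFrame({
    "bio3":  rng.normal(size=100),
    "bio10": base + rng.normal(scale=0.1, size=100),   # strongly correlated pair
    "bio11": base + rng.normal(scale=0.1, size=100),
    "bio15": rng.normal(size=100),
    "bio18": rng.normal(size=100),
})

def drop_correlated(df, threshold=0.7):
    corr = df.corr().abs()
    cols, drop = list(df.columns), set()
    for i, c1 in enumerate(cols):
        for c2 in cols[i + 1:]:
            if corr.loc[c1, c2] > threshold and c1 not in drop and c2 not in drop:
                drop.add(c2)          # keep the first of the pair (simplification)
    return df.drop(columns=sorted(drop))

print(drop_correlated(env).columns.tolist())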
Extrapolation was used, but no clamping, and response curves were explored. One background was chosen for all species as a window enclosing all Lyciasalamandra records. The number of background points was 100,000. All other settings were default [66,68,70,74]. A Multivariate Environmental Similarity Surfaces (MESS) analysis [71] revealed that there were no conditions in the general region of the genus' known range (i.e. coastal mountain ranges and coastal islands) potentially leading to unrealistic extrapolations of response curves (S4 Fig). Maxent calculates the area under the receiver operating characteristic curve (AUC) as a measure of predictive accuracy [65]. Following the classification of [75], AUC values range between 0.5 for models with no predictive ability and 1.0 for models giving perfect predictions; values > 0.9 describe 'very good', > 0.8 'good', > 0.7 'useable' models. Although criticized (e.g., [67]), the AUC is informative as it mirrors the model's ability to distinguish between species records and background points, i.e. showing how general or restricted a distribution is along the range of the variables in the studied region [68]. To account for AUC critics, we also calculated True Skill Statistic (TSS) as sensitivity + specificity-1 [76]. The Maxent default ClogLog output format (ranging 0-1) was chosen for processing the resulting SDM maps in a GIS approach, and the 'maximum training sensitivity plus specificity ClogLog threshold' was used to distinguish potential presence from absence, as this threshold might not overestimate distributions [66]. Because of the almost exclusive association of Lyciasalamandra species with karstic limestone formations (see above), areas identified with SDMs climatically suitable but outside these geological conditions were cut off using a geology shapefile of the Central Energy Resource Team of the US Geological Survey (http://energy.cr.usgs. gov/oilgas/wep/, accessed 10 May 2018). In addition, in LGM maps a sea level layer was overlaid to only show areas beyond today's coast line that indeed were accessible continental shelf. It was obtained from the LGM Vegetation Download Page (http://anthro.unige.ch/ lgmvegetation/download_page_js.htm, accessed 10 May 2018). We prepared boxplots of Maxent ClogLog values at species records for current and LGM SDMs. These were tested using a Wilcoxon signed rank test. Molecular diversity and population demography Altogether, we found 153 combined 16S and ATPase haplotypes, with the fewest in L. flavimembris and the most in L. billae (Table 1). The majority of haplotypes were private to a single population (78.4%). Only one haplotype occurred in a maximum of seven populations (ant-h1 of L. antalyana). Consequently, haplotype diversity was high in all phyloclades, with hd almost always approaching a maximum value of 1.0. In contrast, nucleotide diversity π was mostly below 1%. Analyses of population demography show in parts contradicting results within and among phyloclades (Fig 2, Table 1 and S1 Fig) (Table 1). For all other phyloclades, multimodal mismatch distributions indicate a pattern of recurrent population bottlenecks and expansions. Total molecular variance was lowest in L. atifi (6.9%) and highest in L. fazilae 19.8 (Table 2). Molecular variance distribution shows a consistent pattern across species. Within-population variance was extremely low, ranging from 1.50% in L. luschani to 7.89% in L. billae. 
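A back-of-the-envelope version of the evaluation and comparison steps just described, i.e. the True Skill Statistic from a thresholded suitability prediction and a Wilcoxon signed-rank test of current versus LGM suitability at the record sites, is sketched below with invented suitability values in place of the Maxent ClogLog output.

# Sketch: TSS = sensitivity + specificity - 1 for a thresholded suitability
# prediction, and a Wilcoxon signed-rank test comparing current vs. LGM
# suitability at the same record sites.  All values are invented.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(1)
y_true = np.concatenate([np.ones(60), np.zeros(200)])             # records vs. background
suit = np.concatenate([rng.beta(5, 2, 60), rng.beta(2, 5, 200)])  # ClogLog-like scores

threshold = 0.5                                  # placeholder for the Maxent threshold
pred = (suit >= threshold).astype(int)
sens = (pred[y_true == 1] == 1).mean()           # sensitivity
spec = (pred[y_true == 0] == 0).mean()           # specificity
print(f"TSS = {sens + spec - 1:.3f}")

# Paired comparison of suitability at the 60 record sites, current vs. LGM.
current_suit = suit[:60]
lgm_suit = np.clip(current_suit - rng.beta(2, 5, 60) * 0.4, 0, 1)
stat, p = wilcoxon(current_suit, lgm_suit)
print(f"Wilcoxon signed-rank: W = {stat:.1f}, p = {p:.3g}")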
The respective fixation indices F IP were close to one, indicating almost fixed molecular differences among populations. Among phyloclade molecular variance made up most of the entire molecular variance, with a maximum of 91.5% in L. fazilae (a comparable three-level AMOVA could not be calculated for L. flavimembris since only one phyloclade was found in this species). Haplotype networks and spatial haplotype distribution Two to six distinct phyloclades appeared in all species but L. flavimembris (only one phyloclade ; Fig 2A-2G). Most phyloclades correspond to distinct geographic clusters. Exceptions are ati-II (interspersed by ati-III), bil-I (some haplotypes also occur throughout the range of bil-III) and hel-I (it occurs on all islands of the Karpathos Archipelago, but with hel-II and-III being found in between its populations). Within most phyloclades there seems to exist a finescale geographic differentiation that is largely mirrored by haplotype positions within the networks, e.g. in L. atifi Haplotypes of two phyloclades co-occur in L. atifi (ati-p5), L. billae (bil-p8, bil-p9, bil-p12), L. luschani (lus-p16, lus-p18, lus-p19) and L. fazilae (faz-p4) Phylogeographic structure of species Lycian salamanders have inhabited the Anatolian Mediterranean coast and the Karpathos Archipelago for more than 10 mya [6]. Subsequent diversification resulted in the seven species known today, with a varying number of distinct intraspecific lineages (i.e. some suggested to be subspecies). Several authors had already emphasized that Lyciasalamandra populations often harbour exclusive haplotypes, indicating a high degree of isolation among populations (e.g., [6,8,77]); however, their sampling was geographically more limited. Our dense spatial sampling of 118 populations across the entire known geographic range of the genus, combined with new analytic approaches, now offers the opportunity to evaluate the degree of possible isolation among populations as well as the processes that shaped this unique pattern in this enigmatic amphibian clade. All but one (L. flavimembris) species show a pattern of deep haplotype differentiation in the mtDNA with pronounced geographic structure, indicating possible long-term isolation and/or haplotypes, with circle size being proportional to haplotype abundance; black dots indicate haplotypes not found. Each line in between two haplotypes accounts for one mutational difference. https://doi.org/10.1371/journal.pone.0226326.g002 [78]). Given the high degree of differentiation among phyloclades and their small-scale geographic occurrence, we consider this one of the most exceptional cases of fine-scale genetic differentiation in mtDNA within an amphibian. Of course, comparative levels of intraspecific differentiation at mitochondrial loci were described also in other amphibian species. European examples include Rana temporaria [79,80], Rana iberica [81], Hyla orientalis [82], Lissotriton boscai [83], Lissotriton helveticus [84] and Salamandra algira [85]. However, samples of these species came from ranges of up to two orders of magnitude larger than those inhabited by Lyciasalamandra species (maximum distance between two samples of a species range from 40 km in L. flavimembris to 160 km in L. atifi, respectively). In addition, admixture of haplotypes of neighbouring phyloclades as observed in other species (e.g. R. 
temporaria; [80]), a pattern that often mirrors geographic introgression after secondary contact of lineages (in Europe most often after re-expansion from glacial refugia; [86,87]), is found only in eight out of the 118 populations studied. Pronounced intraspecific phylogenetic discontinuity visible through high mitochondrial haplotype differentiation is typical for amphibians, presumably because their relative dispersal ability is low [88]. A complex population differentiation comparable to what we found in Lyciasalamandra, is also observed in some tropical species, mainly driven by Phylogeography of Lycian salamanders forest habitats and topographic heterogeneity (e.g., [89]). However, in such cases haplotypes are often shared among populations (e.g., [90]). Phylogeographic structure within phyloclades In all Lyciasalamandra species, even geographically nearby populations harbour private haplotypes, although, from a human point of view, the habitat in-between seems to be appropriate (most often karstic limestone with pine forests [3,63]). This is most obvious in phyloclades ati-I of L. atifi, ant-I of L. antalyana, bil-I of L. billae, lus-I and lus-II of L. luschani, faz-II of L. fazilae and hel-I of L. helverseni (own field observation). From a geographic point of view, an extreme example is the island of Kasos, where all three phyloclades of L. helverseni occur within only 7.3 km (populations hel-p12, -13 and -14 in Fig 2F). Given the large mutational difference of such neighbouring haplotypes, a pattern like this is most easily be explained by strict isolation. Widespread haplotypes, covering a maximum area of 10 to 15 km in Lyciasalamandra, are rare and indicate either moderate recent female dispersal or incomplete lineage sorting. The most common haplotype in terms of populations (ant-h1 was found in seven populations) covers a distance of more than 50 km. Widespread haplotypes are possibly indicative of geographic expansion, a process that is usually associated with demographic expansion (e.g., [88,91]). Therefore, one would expect respective population genetic signals within phyloclades. This is in fact the case in ant-II and lus-VI, both having wide-spread haplotypes and both showing a signal of recent demographic expansion in the Bayesian skyline plots (S4 Fig). On the other hand, also some phyloclades with no widespread haplotypes showed signs of recent demographic expansion after a bottleneck (Table 1: significantly negative values of Fu's Fs; alternatively, a selective sweep of mtDNA haplotypes may also produce such a pattern; however, this invokes pronounced female dispersal, which contradicts current knowledge of the ecology of Lyciasalamandra [4]). Only in hel-I this was corroborated by a significantly negative Tajima's D (hel-I). Bayesian skyline plots in most cases showed only moderate signs of change in effective population sizes towards the recent (again hel-I is an exception). We therefore consider the calculated population genetic parameters to be less conclusive, which might be attributed to the fact that most phylogroups harboured predominantly haplotypes private to single populations. Klewen [63] suggested that rivers and rock formations other than limestone act as strong barriers to dispersal and hence gene flow among Lyciasalamandra populations. However, in between such river systems, dispersal should have easily been possible, given that most often karstic limestone connects even adjacent populations. 
In fact, star-like haplotype networks with locally occurring descendent haplotypes or haplogroups branching off from a central one occur in several Lyciasalamandra phylogroups. Such patterns are typical of range expansions, followed by local differentiation and subsequent isolation [88,91]. One reason why in most cases this pattern is not identified by population demographic parameters may be that such star-like structures almost always form only part of the respective haplotype network, with longer branches also connecting more differentiated haplotypes to the central haplotype. This inevitably increases the raggedness of mismatch distributions and increases deviation from a pattern expected under a simple growth-decline model. Therefore, and apart from the idea of strong isolation of populations, we conclude that in fact range expansion occasionally must have occurred regularly in the past. Future research including bi-parentally inherited genes is needed to test this assumption. Interestingly, within phyloclade hel-I of L. helverseni, haplotype hel-h1 occurs at two localities, Olympos village (hel-p4) and the city of Pigadia (hel-p10) (see Fig 2F) at the almost respective ends of the island of Karpathos, ca. 27 km away from each other. Pigadia is the only harbour connecting Karpathos Island with the Greek mainland. We therefore assume that hel-h1 may be a case of anthropogenic translocation, in this case from Olympos village to the city of Pigadia (see review on distribution patterns of Aegean herpetofauna and examples of human mediated translocations therein [20]). Admittedly, incomplete lineage sorting, which may also produce a pattern of disjunctive haplotype distribution throughout a species' distribution range by preserving ancestral polymorphisms cannot completely be ruled out as an alternative explanation, although in none of the other Lyciasalamandra phyloclades a similar pattern was observed. Potential range dynamics during the Quaternary To explain the pattern of haplotype differentiation and distribution observed today, there must have been geological episodes when female dispersal was possible. According to [6], the Messinian Salinity Crisis (MSC) was most likely responsible for the onset of intraspecific evolution in L. fazilae, L. helverseni and L. luschani. In L. atifi, L. antalyana, L. billae and L. flavimembris, intraspecific diversification seems to have started in the Quaternary, a period when recurrent and dramatic climatic alterations [18,22,92] seem to have shaped the evolution of western Palearctic biota (e.g., [93,94]). Unfortunately, the number of stitials and interstitials that occurred in the Quaternary is high [22]; so, given the uncertainty associated with split time estimation based on the molecular clock it is not possible to affiliate any split within this period to a single climatic event. During the LGM, sites where nowadays Lyciasalamandra species occur were largely not explained (S5 Fig). At these sites, Maxent ClogLog values were significantly lower compared to current climate (Fig 3), strongly supporting that unsuitable conditions for salamander survival prevailed except for L. fazilae and L. flavimembris. In some species, SDM projections into the LGM revealed suitable area not too far from nowadays occurrences, all in all including areas which contain karstic limestone, too (S5 Fig). In part, e.g. in L. helverseni, this included the shelf area of the Mediterranean Sea which was about 100 m lower than today (e.g., [95]). 
In consequence, one may argue that range shifts in order to following suitable habitats could have occurred. There exist numerous examples of European plant and animal species for the validity of the so-called expansion-contraction (EC) model, a demographic scenario in temperate species whereby glacial cycles result in the contraction in size and shift toward lower latitudes into refugial areas during periods of cooling, followed by population growth and re-colonisation during postglacial warming (e.g., [86,96]). Interestingly, numerous L. fazilae populations, especially those living far from the Mediterranean coast, appear to live under suboptimal current macroclimatic conditions (Fig 3, S5 Fig). This can be interpreted as an indication that horizontal range shifts due to climate change do not always allow theses salamanders to quickly adapt their geographic ranges to the climate. Also not in line with such a past range shift scenario is our observation that almost no suitable area for four species, L. antalyana, L. billae, L. luschani and L. helverseni, could not be identified during the LGM (Fig 3, S5 Fig). This casts some doubt on potential range shift scenarios. Also, range shifts are typical bottleneck events and should leave characteristic imprints in the genetic signature of populations [87,88,93]. In the species where this has been observed, they led to an erosion of genetic diversity within and among populations, and after re-expansion genetically uniform populations would inhabited large areas [88,93]. Species with a low dispersal capacity were ultimately doomed to extinction through loss of suitable habitat( [97], unless they could shift their ranges at small scales along altitudinal gradients [93]. Alternatively, they might have survived in so-called refugial sanctuaries [84]. Any recurrent range shift, either on larger or on smaller geographic scales, would produce the above mentioned pattern of genetically similar and at the same time depauperate populations. We should observe such a pattern if Lyciasalamandra populations had followed shifting suitable habitats between stitials and interstadials and vice versa. Our results of numerous local haplotypes across most phyloclades' ranges contrast with this expectation. Hence, micro-refugial survival in glacial sanctuaries best explains this unique pattern of fine-scale geographic allopatry of mitochondrial haplotypes within Lyciasalamandra (see [23,98] for a further examples of cryptic diversity that evolved in the same area during Plio-Pleistocene glaciations). Based on demographic and growth pattern analyses, [27] recently showed that the mainly subterranean life-style of Lycian salamanders in deep-reaching systems of crevices allows them to survive both cold winter and dry and hot summer. They observed a very small age-and size-related life-history variation across populations and species and concluded that this hints at a pronounced niche conservatism in Lyciasalamandra. Transferring this into a temporal dimension, this highly specialised and at the same time conserved ecological niche in combination with viviparity provides a sufficient explanation why many populations could have survived repeated climatic deteriorations 'on the spot'. A pronounced female philopatry, which is considered advantageous when living in harsh environments as it can help females to acquire and defend an appropriate shelter (see [99] for the viviparous alpine salamander), may have enforced this niche conservatism. 
Did our mitochondrial markers capture the whole story? Mitochondrial markers as studied by us are inherited almost exclusively by females [100], so gene flow mediated by males would remain undetected when studying organelle genes. Slight male-biased dispersal was suggested by [4] to explain discordant patterns of mitochondrial versus nuclear genes in a hybrid zone of L. antalyana and L. billae. Such gene flow, if strong enough, should admix nuclear alleles among populations, while population specific mitochondrial haplotypes would persist. However, several authors mentioned that neighbouring populations often show divergent, but within populations seemingly stable, phenotypes in terms of coloration and pattern (e.g., [3,63]). Unfortunately, such studies neither were based on statistically solid sample sizes, nor did they appreciate the potentially high environmental and ontogenetic plasticity in amphibian colouration (e.g., [101,102]). Genes for coloration and pattern are coded by the nuclear genome, and unless strong natural selection would locally stabilize distinct colour and pattern morphs, even occasional male-mediated gene flow would homogenize populations in terms of nuclear genes. Therefore, our observed pattern of strong isolation among populations does not seem to be an artefact of a merely mitochondrial based analysis. However, analyses of bi-parentally inherited nuclear genes are needed to test this hypothesis. Taxonomic implications Within Lyciasalamandra, numerous new taxa have been described during the last few years, however, almost always based on morphology alone (see above). Our phyloclade structure and subspecies designation only match perfectly in L. antalyana, which is also supported by the phylogenetic tree. In L. fazilae and L. helverseni, phyloclades diversity is higher than current taxonomic diversity. In the phylogenetic tree, the five phyloclades of L. fazilae form three well supported clusters ((faz-I + faz-II), faz-II and (faz-IV + faz-V)), while in L. helverseni, phyloclade structure perfectly matches the three clusters supported by the pylogenetic tree. In L. atifi, L. billae and L. flavimembris several subspecies have been described even within phyloclades that have formed in our analysis. Most extremely, in L. billae populations morphologically and geographically assigned to three subspecies, L. b. billae (bil-p6), L. b. irfani (bil-p7, -8) and L. b. yehudahi (bil-p9 to -14), respectively, even harbour haplotypes of the same phyloclades. Such patterns may be due to incomplete lineage sorting or secondary contact. L. billae, phyloclades bil-I, bil-II and bil-III form a common cluster, however, without three clearly delineated sub-clusters. The situation is similar in L. atifi: although ati-I and ati-III are well supported by the phylogenetic tree (S3 Fig), phyloclade differentiation is not mirrored by cluster delineation. L. flavimembris does not show any significant intraspecific differentiation, although two subspecies have been discriminated based on morphology alone. In L. luschani, clusters formed by the phylogenetic tree perfectly match current subspecies taxonomy, while the four phyloclades identified within L. l. luschani do not transfer to an unambiguous cluster pattern in the phylogenetic tree. Variance distribution among phyloclades is more than twice as high in L. luschani, L. fazilae and L. helverseni compared to the other species ( Table 2), indicating that also taxonomic differentiation within these species might be more justified. 
Almost all taxa newly described during the last decade were delineated exclusively on the basis of colour and pattern polymorphisms (e.g., [10-13]) and allopatric distribution. While the latter criterion is concordant with the definition of subspecies [103], the discriminant power of the morphological characters used for the delimitation of recently described taxa within Lyciasalamandra has never been proven. Nevertheless, and although we doubt the validity of the current taxonomy within Lyciasalamandra, we abstain from a taxonomic revision since our results are based on mitochondrial data alone.
8,610
2020-01-13T00:00:00.000
[ "Biology", "Environmental Science" ]
Study of distribution network loss allocation with high photovoltaic penetration ratio With the rapid development of renewable energy, the penetration ratio of photovoltaic (PV) in the distribution network is getting higher and higher. The fair and reasonable allocation of high loss caused by high PV penetration in the distribution network is of great significance to the healthy development of distribution systems. In order to study the loss allocation in distribution networks with high PV penetration, based on the cooperative game theory, combined with the power flow algorithm, the power flow and loss of the 72-node active distribution network in agricultural and pastoral areas were analyzed. Three scenarios were included: no PV, low PV penetration, and high PV penetration. The allocated loss of the distribution network with high PV penetration and multiple distributed generators was computed. The results show that the loss allocation method, which takes into account the power as well as the network topology, is applicable to distribution networks with multiple distributed generators. In distribution networks with high PV penetration, distributed generators are the main cause of high loss. If the network contains the extreme reverse power flow concerned in this paper, the loss allocated to DGs accounts for more than 95% of the total network power loss. Introduction In recent years, with the popularity of the concepts of new power systems, "carbon peak carbon neutral" and the promotion of construction tasks of rooftop PV, the distributed PV generation in the distribution network has been sustained and rapidly developed.In 2022, the new installed PV capacity in China was 87.41 million kilowatts(kW), and the cumulative installed capacity reached 392.61 million kW, a yearon-year increase of 28.6%.The additional 51.114 million kW of distributed PV are connected to the grid [1].Distributed PV continues to develop rapidly. In the case of high PV penetration in the distribution network, if there is no configuration of energy storage, reverse power flow may occur in the distribution network [2] (Reverse Power Flow, RPF).It can result in a temporal mismatch between the photovoltaic power generation and the electricity consumed by loads.When there is sunshine in the daytime, the PV outputs more power.RPF can occur and may result in excessive loss and voltage overshooting. 
Current research on RPF focuses on improving the protection strategy of relay protection devices and on methods to suppress RPF [3-5]. There are also some studies on loss allocation in distribution networks with distributed generators (DGs). For distribution network loss allocation with distributed generators, an improved average network loss method was proposed [6], but it lacks fairness in practical application. To address this issue, a new method of network loss sharing for radial distribution networks with DG was proposed [7], which allocates losses without any additional assumptions and approximations. However, the impact of reactive power transmission on network loss is not taken into account. Several studies have proposed loss allocation methods based on cooperative game theory [8-12], such as those based on the Shapley value (SV), the Nucleolus value (NV), Aumann-Shapley (AS), and τ-value loss allocation. For example, a method for network loss apportionment based on the Shapley value and circuit theory was proposed [8]. However, the workload is large and the computational burden is high when there are many loads and DGs. High loss caused by high PV penetration is not considered in current loss allocation methods, and quantitative research on the responsibility of distributed generators for high loss in distribution networks is still lacking. The study of reasonable loss allocation in active distribution networks with a high PV penetration ratio can clarify the responsibility for high loss borne by each power generator under the background of "double carbon", and it can provide the basis for the subsequent development of dynamic feed-in tariffs for distributed PV. In this paper, an active distribution network with high PV penetration is considered, based on cooperative game theory. The power flow and loss are analysed for a 72-node active distribution network in agricultural and pastoral areas with high PV penetration. Finally, the loss allocation is analysed in active distribution networks at typical moments.

Loss allocation model

The problem of loss allocation in the distribution network is modelled as a cooperative game problem. In this problem, the loads and distributed generators at different nodes in the distribution network are considered as the players involved in the game. The coalition in which all the players participate is the largest (grand) coalition, denoted as N. The grand coalition's total gain is the total loss, i.e., the sum of all line losses in the distribution network; the gain is denoted as v. In this paper, the value of the characteristic function v is calculated directly using the forward-backward sweep method for power flow calculation. Taking a radial distribution network containing ML loads and MG distributed generators as an example, the τ values of the loads and distributed generators are calculated separately; the detailed calculation steps are as follows. It is assumed that the grand coalition of the loads connected in this distribution network is represented by NL = {1, 2, 3, …, ML}. The upper limit value of load i is calculated as Formula (1):

Xi^L(N, v) = Ploss − Ploss-i,    (1)

where Ploss is the network loss when all the loads are connected in the network and Ploss-i is the network loss after removing only load i from the network.
The minimum clearance value of load i, κi^L, is calculated as Formula (2), where ploss-i is the network loss when only load i is connected in the network. We let NG = {1, 2, 3, …, MG} denote the set of distributed generators connected to the distribution network. The upper limit value Xi^G(N, v) and the minimum clearance value κi^G of each distributed generator can be calculated according to the same procedure used for Xi^L(N, v) and κi^L of the loads. Finally, the corresponding coefficients for the loads (j = L) and the distributed generators (j = G) are calculated according to Formula (3), and the τ values of loads and distributed generators are calculated according to Formula (4), respectively, where j denotes L or G and N is the grand coalition of all loads and all distributed generators in the distribution network. Since the sum of the line losses obtained with only loads and with only distributed generators connected is not equal to the line losses generated by connecting both loads and distributed generators in the actual distribution network, the calculated τ value is not used directly as the allocated loss of loads and distributed generators. To allocate loss effectively and fairly, the loss allocated to the loads is set equal to the corresponding τ values calculated when only the loads are connected, while the losses allocated to the distributed generators are set equal to the losses with both loads and generators connected to the network minus the losses allocated to the loads.

Simulation analysis

The algorithm procedures, including the forward-backward sweep power flow calculation and the calculation of the loss allocation based on cooperative game theory, are implemented in a 64-bit environment with MATLAB R2022. Based on the actual 10 kV network in agricultural and pastoral areas of Qinghai Province shown in Figure 1, the distribution network with high PV penetration is studied and analyzed.

Analysis of distribution network losses with a high PV penetration ratio

The RPF in the network is used to describe the PV penetration: the higher the PV penetration, the higher the RPF in the network. When the PV power is more than 10 times the power required by the total load, there is an extreme RPF in the radial distribution network. The distribution network considered in this paper consists of 72 nodes with 71 branches and a PV distributed generator connected at Node 72. The PV power curve is the actual PV power recorded in one day, with one sample taken every hour; this curve is used as the standard PV power curve. The moment at 0:00 AM is specified as the first moment, and there are 24 moments in one day. The actual maximum generation of this PV power in the simulation is more than 10 times the actual required power of the radial network. To compare the network loss and loss allocation under different PV penetration ratios, three simulation cases are set up. The standard PV power generation is reduced by the same ratio at all moments by multiplying by a factor r. The simulation results are shown in Figure 2.
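To make the allocation procedure above concrete, the following minimal Python sketch implements a τ-value-style split of the total loss. It is an illustration, not the authors' code: the routine network_loss is a hypothetical stand-in for the forward-backward sweep power flow, and because the explicit forms of Formulas (3) and (4) are not recoverable from this text, the sketch falls back on the standard τ-value construction (each allocation equals the minimum clearance value plus a common fraction of the distance to the upper limit, with the fraction chosen so that the allocations sum to the total loss).

```python
# Illustrative sketch (not the authors' code) of a tau-value-style loss allocation.
# Assumption: `network_loss(connected)` is a user-supplied routine that runs a
# forward-backward sweep power flow and returns the total line loss (kW) for the
# subset of loads/DGs listed in `connected`.

def tau_allocation(players, network_loss):
    """Allocate the total loss v(N) among `players` with a tau-value-style rule."""
    N = list(players)
    v_N = network_loss(N)                                   # loss with all players connected
    # Upper limit value: X_i = v(N) - v(N \ {i}), cf. Formula (1)
    X = {i: v_N - network_loss([j for j in N if j != i]) for i in N}
    # Minimum clearance value: loss when only player i is connected, cf. Formula (2)
    kappa = {i: network_loss([i]) for i in N}
    # Standard tau-value interpolation (stand-in for Formulas (3)-(4)): the scaling
    # factor lam enforces that the allocations add up to the total loss v(N).
    spread = sum(X[i] - kappa[i] for i in N)
    lam = (v_N - sum(kappa.values())) / spread if spread else 0.0
    return {i: kappa[i] + lam * (X[i] - kappa[i]) for i in N}
```

Following the adjustment step described above, the loads would keep the τ values computed with only loads connected, and the distributed generators would then absorb the remainder of the loss obtained when both loads and DGs are connected.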
Case 1: r = 0.05, the network does not have reverse power flow during the day. Case 2: r = 0.1, the network has a low reverse power flow at noon. Case 3: r = 1, there is an extreme reverse power flow in the network at noon. In Case 1, the generation power at the slack node in the network is always positive throughout the day and there is no RPF; the network loss is very small during the day. In Case 2, there is a very small RPF only from 12:00-18:00, and the network loss is slightly larger at this time than at other times of the day. In Case 3, the generation power at the slack node of the network is negative from 10:00-18:00, there is a large RPF in the network, and the loss of the network is much higher in this period than in other periods of the day. The losses allocated to the DG are consistent with the network loss profile, and the high losses due to high PV penetration are borne by the DG.

Loss analysis of distribution network with high PV penetration at a typical moment

A typical moment of the day is selected for analysis. At that moment, the actual operating load of the distribution network is 241.71 kW of total active power and 48.70 kvar of total reactive power, and the total loss in the network without DG is 0.16 kW. In order to simulate loss allocation in the scenario with high PV penetration, four distributed generators, DG1-DG4, are connected to the distribution network. The losses allocated to the DGs are shown in Figure 3; they show a certain positive correlation with the output active and reactive powers of the DGs. The loss allocation model takes into account not only the active and reactive powers of the DGs but also the location of the DGs in the distribution network, and the method remains applicable in the case of multiple DGs with a high penetration ratio. The loss allocated to loads is shown in Figure 4. Network loss is not allocated to nodes with no load, such as Nodes 12, 20, and 24. Although the required power at Node 53 is smaller than that at Node 2, the loss allocated to Node 53 is larger than that at Node 2, because Node 53 is further away from the slack node; this is reasonable for loss allocation.

Conclusions

Based on the cooperative game loss allocation method, the impact of extreme reverse power flow on power loss and loss allocation in active distribution networks has been studied. The conclusions are as follows: (1) In distribution networks with high PV penetration, if there is extreme reverse power flow of the kind considered in this paper, the loss allocated to DGs accounts for more than 95% of the total network loss. (2) In radial distribution networks, high PV penetration leads to extreme RPF in the network; the network loss rises and the economics and stability of the distribution network are degraded.

Figure 2. The simulation results. (a) Generation power at the slack node; (b) total power loss in the distribution network; (c) loss allocated to the DG. Figure 3. Proportion of DG generation power in total power generated and proportion of allocated loss in total loss allocated to DGs. Figure 4. Loss allocated to the load of each node at noon.
2,741.2
2024-04-01T00:00:00.000
[ "Environmental Science", "Engineering" ]
Non-invasive imaging platform reveals a potential tumourigenicity hazard of systemically administered cells The number of clinical trials using cell-based therapies is increasing, as is the range of cell types being tested, but without a thorough understanding of cell fate and safety. Therefore, there is a pressing need for monitoring of cell fate in preclinical studies to identify potential hazards that might arise in patients. Utilising a unique imaging toolkit combining bioluminescence, optoacoustic and magnetic resonance imaging modalities, we assessed the safety of different cell types by following their biodistribution and persistence in mice. Our imaging studies suggest that the intra-arterial route is more hazardous than intravenous administration. Longitudinal imaging analysis over four weeks revealed that the potential of mouse mesenchymal stem/stromal cells (mMSCs) to form tumours, depended on administration route and mouse strain. Clinically tested human umbilical cord (hUC)-derived MSCs formed growths in 15% of animals that persisted for up to three weeks, indicating a potential tumourigenicity hazard that warrants further testing. Introduction Cell-based regenerative medicine therapies (RMTs) have the potential to treat various diseases 1 , but the risk of tumour formation is a primary safety concern 2 . Mesenchymal stem/stromal cells (MSCs) isolated from bone marrow, adipose tissue or umbilical cord are being tested in clinical trials, but in many cases, preclinical safety data are not available. Bone marrow-derived MSCs have been used for many years and appear safe 3 , but a review of adipose-derived MSCs concluded that while adverse events are rare, they nevertheless do occur, and are likely to be related to underlying health conditions of the patients or administration route 4 . Human umbilical cordderived (hUC)-MSCs have only recently been introduced in clinical trials, with more than 50% of these initiated within the last 3 years (Supplementary Table 1). hUC-MSCs are less immunogenic than other types of MSCs, which contributes to their attraction as clinical RMTs. However, because of their low immunogenicity in combination with higher proliferative behaviour these cells may 4 also pose a greater potential risk 5 , yet until now, their safety profile has not been robustly assessed. The importance of preclinical safety testing is highlighted by a recent report where a tumour developed in a patient's spinal cord following intrathecal administration of stem cells 6 . The most common way to administer cells systemically in small animals is via the intravenous (IV) route 7 , delivering cells directly to the lungs where they are sequestered as a consequence of the pulmonary first-pass effect [8][9][10][11][12][13] . Although the IV route is also frequently used in clinical trials, administration via the arterial circulation is not uncommon. For instance, clinical trials testing the potential of cell therapies to treat myocardial infarction administer cells into the coronary arteries or left cardiac ventricle 14,15 , while in patients with peripheral artery disease or stroke, intra-arterial injection via the femoral or carotid artery, respectively, is frequently employed 16 . Intra-arterial administration will also lead to systemic distribution to other organs, including the brain, and cells passing through the blood-brain barrier could pose an important safety concern. 
However, a detailed analysis of cell fate after intra-arterial cell administration has so far not been reported 17 . Non-invasive imaging technologies have opened up exciting new possibilities for preclinical assessment of the safety of cell therapies by monitoring cell biodistribution and persistence through longitudinal in vivo cell tracking; however, a platform approach using multimodal imaging for the safety assessment of cells has not been previously implemented. Preclinical imaging technologies for cell tracking, some of which have clinical relevance, include magnetic resonance imaging (MRI) to detect cells labelled with superparamagnetic iron oxide nanoparticles (SPIONs), multispectral optoacoustic tomography (MSOT) to detect cells labelled with gold nanorods or near-infrared red fluorescent protein (iRFP) [18][19][20][21][22][23] , and bioluminescence imaging (BLI) for the detection of cells expressing the genetic reporter, firefly luciferase [24][25][26] . Genetic reporters are particularly advantageous because signals are only generated from living cells, thus allowing the monitoring of cell proliferation and tumour growth, and avoiding problems based on nanoparticle dissociation from cells, which can lead to false positive signals. However, the spatial resolution of 5 BLI is poor, making it difficult to precisely locate the cells 24 . By contrast, both preclinical MSOT and MRI have much higher spatial resolution (150 µm and 50 µm, respectively), providing details of the inter-and intra-organ distribution of administered cells. Moreover, as MRI is routinely used in the clinic, it provides a bridge for preclinical and clinical studies. Here, we have implemented a multi-modal imaging approach comprising BLI, MSOT and MRI, to assess biodistribution and fate of different cell types following venous and arterial administration in healthy mice. Some of these cell types are currently being used in clinical trials, including hUC- Table 1), kidney-derived cells 27 and macrophages 28 . We show that in a small number of mice, hUC-MSCs started to proliferate over time. Although these hUC-MSC growths eventually regressed, our data raise safety concerns regarding the use of these cells in clinical trials. Results Whole body biodistribution of different cell types following intravenous (IV) and intracardiac (IC) administration Bioluminescence imaging showed that IV delivery of ZsGreen + /Luc + mMSCs, mKSCs, hKCs and hUC-MSCs, resulted in signals exclusively in the lungs, while signals from IV-administered macrophages were also located more posteriorly (Fig. 1a). This was expected because macrophages are known to traverse the lungs and populate other organs, such as the liver and spleen. In contrast, intraarterial delivery via the left ventricle (from now on referred to as intra-cardiac (IC)) resulted in a whole-body distribution of all cell types (Fig. 1a). Organ-specific ex vivo imaging within 1h of mKSCs being administered IV confirmed the signal was limited to the lungs (Fig. 1b, d). In contrast, after IC administration, bioluminescent signals were detected in the brain, heart, lungs, kidney, spleen, and liver (Fig. 1b, d). A comparable ex vivo biodistribution was observed for mMSCs and hKCs following IV and IC administration (not shown), 6 while with hUC-MSCs, we found low but detectable signals in other organs besides the lungs following IV administration ( Supplementary Fig. 1). IV-administered macrophages were found predominantly within the lungs by ex vivo imaging (Fig. 
1c), but weaker signals were also detected in the spleen and liver (Supplementary Fig. 2), confirming the in vivo signal distribution. Ex vivo analysis of macrophages after IC injection showed signals in all organs (Fig. 1c, e).

[Figure 1 caption: (a) BLI immediately after administration, showing that cells were always confined within the lungs after intravenous (IV) administration, but distributed throughout the body after intracardiac (IC) administration; an exception was the macrophages, which also showed a more posterior signal after IV administration. The luminescence intensity scale has been adjusted individually for each cell type and is described in Supplementary Table 2. Ex vivo bioluminescence imaging of organs within 5h of administration of (b) mKSCs or (c) macrophages confirmed the in vivo cell biodistribution. Organs are indicated as kidneys (k), spleen (s), liver (li), lungs (lu), heart (h) or brain (b). Quantification of the bioluminescence signal intensity of organs ex vivo post (d) mKSC or (e) macrophage administration. Values represent the mean signal intensity measured in each organ and normalised to the total flux from all organs (n = 3 each group). Error bars represent standard error. (f) Mean pixel intensity of GNR-labelled macrophages measured via multispectral optoacoustic tomography for a period of 5 hours post IV administration, displaying the kinetics of their accumulation in the spleen and liver. Arrow indicates the time point at which the cells were administered.]

To monitor the temporal dynamics of macrophage migration, cells were labelled with GNRs, injected IV, and monitored continuously for 4.5h using MSOT. Signal intensity began to increase immediately in both the liver and spleen until around 90 min (Fig. 1f), but remained close to basal levels in the kidney, consistent with the BLI ex vivo analysis (Fig. 1e, f). However, when GNR-labelled macrophages were administered IC, increases in signal intensity in the kidney were comparable to those in the liver and spleen 4h post-administration (Supplementary Fig. 3c).

Cell distribution within organs using high-resolution magnetic resonance imaging

Since the spatial resolution of BLI is poor, we used MRI to evaluate the intra-organ biodistribution of ZsGreen+/Luc+/SPION+ mMSCs after IV or IC administration, focussing particularly on the brain and kidneys. Following IC injection, T2*-weighted imaging revealed hypointense areas distributed homogeneously throughout the brain (Fig. 2a) and localised in the cortex of the kidneys (Fig. 2b). However, hypointense contrast was not detected in the brain or kidneys of IV-injected mice, confirming that IV administration does not deliver mMSCs to either of these organs (Fig. 2a, b). Post mortem MR imaging of extracted organs performed at higher resolution confirmed the hypointense contrast throughout the brain and in the renal cortex of IC-injected mice (Fig. 2a, b). Histological analysis of ZsGreen expression by fluorescence microscopy in combination with Prussian Blue staining of SPIONs showed that labelled cells were located in the glomeruli (Fig. 2c). ZsGreen and Prussian Blue signals corresponded to the same spatial location, indicating that hypointense contrast in vivo was unlikely to result from false-positive detection of SPIONs (e.g. released from dead cells). To determine whether IC-administered cells had undergone extravasation, we performed confocal imaging of IB4-stained blood vessels.
This demonstrated that ZsGreen+ mMSCs were physically trapped in the lumen of microcapillaries (Fig. 2d), suggesting that the cells did not cross the blood-brain barrier or the glomerular filtration barrier.

Short-term fate of IC-injected cells

To determine how long the cells persisted in major organs, we injected 10^6 ZsGreen+/Luc+/SPION+ mMSCs into the left ventricle of BALB/c mice and tracked their fate in vivo by MRI and BLI, and post mortem by MRI and fluorescence microscopy (Fig. 3a). On the day of injection, whole-body distribution of IC-administered mMSCs was observed by bioluminescence, while in the kidneys, MRI revealed hypointense contrast specifically in the cortex. By 24h, bioluminescence signal intensity decreased, suggesting cell death. Correspondingly, fewer hypointense areas were observed in the renal cortex by MRI, supporting the disappearance of SPION-labelled cells. By 48h, bioluminescence was no longer detectable in the abdominal region, nor was any significant hypointense SPION contrast observed in the kidneys with MRI. This was confirmed by high-resolution MRI of organs ex vivo, showing a decrease in contrast in the renal cortex over time, and a decrease in the frequency of ZsGreen+ mMSCs in kidney glomeruli by fluorescence microscopy (Fig. 3a). Changes in the T2* relaxation time in the renal cortex indicated the relative number of SPION-labelled cells present at each time point. T2* was significantly lower on the day of cell administration (Fig. 3b) than at baseline but then increased towards baseline levels at 24h and 48h. Because the liver is the major organ for clearance of blood-transported particulates, we quantified the hepatic T2* relaxation time, which revealed a subtle but significant decrease from baseline through to 48h (Fig. 3c). These results suggest that following cell death, SPIONs accumulate predominantly in the liver and are not retained by the kidneys.

[Figure 3 caption, fragment: … where green fluorescence corresponds to ZsGreen expression and blue fluorescence to DAPI staining. Arrowheads indicate individual glomeruli. Scale bar corresponds to 100 µm. T2* relaxation time of (b) kidney cortices or (c) liver before (baseline) and up to 2 days after cell administration. The T2* relaxation time in the cortex of the kidney was significantly lower on the day of cell administration (day 0, mean = 7.98 ms, SE = 0.29) than at baseline (14.56 +/- 0.32 ms; one-way ANOVA, p < 0.001). The T2* relaxation time then increased towards baseline levels at day 1 (12.57 +/- 0.50 ms) and day 2 (13.19 +/- 0.23 ms), and by day 2 the difference compared with baseline levels was no longer statistically significant. In the livers, the T2* relaxation time revealed a subtle but significant decrease from baseline through to day 2 (baseline, 7.19 +/- 0.29 ms; day 0, 5.48 +/- 0.38 ms; day 1, 5.10 +/- 0.16 ms; day 2, 5.02 +/- 0.94 ms; one-way ANOVA, p = 0.006). See Supplementary Table 3 for Tukey pairwise comparisons.]

Effect of administration route on the long-term biodistribution and fate of mMSCs

To assess the effect of administration route on the long-term fate of cells, ZsGreen+/Luc+ mMSCs were administered to BALB/c SCID mice by IC or IV routes, and biodistribution was monitored by BLI at multiple time points for 28 days. While both IC and IV injection resulted in the typical immediate biodistribution patterns by 24h (Fig. 1a), by 96h following IV and IC administration the bioluminescence signal was undetectable, indicating loss of cells via cell death (Fig. 4a).
Continued imaging over time showed that bioluminescence signals began to increase again in animals after IC injection from around day 14, consistent with tumour development, but not in animals after IV injection. The increase in signal was particularly prominent in the hindquarters of all five IC-injected mice at day 14, and increased further until day 28 (Fig. 4a, Supplementary Fig. 4a). Detailed analysis of animals after IV administration of mMSCs revealed that bioluminescence signals in the lungs of one mouse increased over time (Supplementary Fig. 4b). Overall, whole-body bioluminescence intensity initially decreased following both IC and IV administration, and subsequently increased rapidly in the IC-injected mice (Fig. 4b-d).

Osteosarcoma formation after IC administration of mMSCs

Multiple abnormal growths were present in IC-injected BALB/c SCID mice, predominantly in skeletal muscle surrounding the femurs, but also in muscle near the hips, ribs, and spine (Fig. 5a, f), suggesting tumours had formed. Tumour sites corresponded to foci of intense BL signals which could also be identified using T2-weighted MR imaging (Fig. 4e). Furthermore, T2-weighted MR imaging allowed us to detect an abnormal mass in the lungs of one (out of three) IV-injected mouse that displayed an intense bioluminescence signal (Fig. 4e, Supplementary Fig. 4b). Although cells of the mMSC line have been suggested to home to the bone marrow 29 , flow cytometry analysis showed the bone marrow was negative for ZsGreen+ cells (Supplementary Fig. 5). Histologically, tumours were characterised by atypical solid proliferation of spindle cells associated with multifocal formation of pale amorphous eosinophilic material (osteoid). The tumours were therefore classified as osteosarcomas (Fig. 5h, j, k). Frozen sections of the tumour tissue exhibited specific ZsGreen fluorescence (Fig. 5i), further confirming that the neoplasms originated from mMSCs.

Formation of mMSC-derived tumours in different mouse strains

To determine whether tumours developed because the BALB/c SCID mice were immunocompromised, we investigated the long-term fate of the mMSCs following IC administration in three different immunocompetent mouse strains: BALB/c (same genetic background as the mMSCs), FVB (unrelated inbred strain), and MF1 (unrelated outbred strain). The biodistribution immediately after injection was similar between the strains, but at day 28, only the BALB/c mice displayed bioluminescence signals as high as those in the BALB/c SCID mice (Fig. 5b-e). Moreover, the timing and location of tumour formation were consistent in all immunocompetent and immunocompromised BALB/c mice. In the FVB and MF1 strains, mMSC foci tended to form in similar locations as in the BALB/c mice, but bioluminescence signals were weaker. Although signal intensity gradually increased in FVB mice from d7 to d28, in MF1 outbred mice signals increased initially up to d21, but then started to decrease as the mMSC foci began to regress.

hUC-MSCs are currently being tested in several clinical trials (Supplementary Table 1). When following the fate of IV- and IC-administered hUC-MSCs in BALB/c SCID mice, we found that in most cases BLI signals became weaker within a few days of administration, and remained undetectable for the duration of the study (8 weeks) (Fig. 6a, b). However, in a small number of mice (~15%), hUC-MSC foci had developed in locations outside the major organs (Fig. 6c, red arrows).
Although these foci initially expanded, they then appeared to regress, and by d21 were barely detectable, and did not reappear during the remaining 5 weeks of the experiment.

[Figure 6 caption: (a) Representative BLI of mice administered with 10^6 hUC-MSCs via the IC or IV route. The signal was progressively lost shortly after administration, with no evidence of malignant growth. (b) Mean whole-body quantification of the bioluminescence signal up to day 28 as obtained with two different cell doses. Error bars represent SE. (c) BLI images from mice that displayed signal that persisted up to or beyond day 7 (dose: 10^6 cells, ventral orientation). In all cases, the signals had disappeared by day 21 and had not returned by day 56. Images in blue frames (a, c) are presented on a lower intensity scale (1.0 x 10^4 - 1.0 x 10^5 p/s/cm^2/sr) to display weaker signals.]

Discussion

Here, we have employed a novel platform approach of non-invasive preclinical imaging encompassing BLI, MRI and MSOT to assess the biodistribution and persistence of a range of mouse and human cell types following IV and IC administration in healthy mice. These cells included mouse MSCs, kidney stem cells and macrophages, as well as human kidney-derived cells and hUC-MSCs, the latter already being tested as cell therapies in clinical trials. As expected, immediate analysis after IV administration revealed that, apart from macrophages, all other cell types were mostly sequestered in the lungs, although small numbers of hUC-MSCs could be detected in other organs following ex vivo analysis. After IC administration, all cell types showed a widespread distribution. However, irrespective of the administration route, analysis using all three imaging technologies determined that cells disappeared from major organs within 24-48 hours, which, based on the loss of BLI signals, was likely due to cell death. The observation that cells are cleared very quickly from the major organs following IC administration indicates that the arterial route poses no significant advantage for cell therapy administration. By contrast, our long-term tracking analysis over four to eight weeks provides the first evidence that arterial administration of cells may carry a higher tumourigenicity hazard. Therefore, this striking finding suggests that IV administration of cell therapies is safer for clinical applications. We thus recommend that further investigations be urgently undertaken before conducting clinical trials where cells are delivered arterially. Our platform of imaging techniques was also able to provide some mechanistic insight into the fate of cells after administration. Macrophages have previously been shown to home to the liver and spleen after passage through the lungs 30 . However, the dynamics of this homing process had not been described. Using multi-modal BLI and MSOT, we could monitor macrophage accumulation in the liver and spleen for 4.5h continuously at high spatial resolution. We found that labelled macrophages immediately started to accumulate in the liver and spleen, particularly in the first ~90 min, which indicated that some of the macrophages instantly passed through the pulmonary circulation. While BLI has the advantage of highly sensitive body-wide detection of luciferase-expressing cells, its spatial resolution is poor, which prevents organ-focussed imaging.
To visualise cells within major organs such as kidney and brain, and monitor their fate over time, we implemented a bimodal approach comprising BLI and MRI, taking advantage of the high spatial resolution of MRI in addition to the high sensitivity of BLI and the fact that luciferase activity is dependent on cell viability [20][21][22]24 . Detailed analysis of the biodistribution of mMSCs after IC injection using in vivo, and subsequently ex vivo MR imaging techniques revealed that SPION-labelled cells were scattered throughout the brain, while in the kidneys, they were restricted to the cortical regions. Ex vivo histological staining and fluorescence microscopy demonstrated that cells in the kidneys were found only within the glomeruli, bounded by endothelial cells within the microvasculature. Similarly, cells in the brain were only localised within the microvasculature, indicating that they lack the capacity to pass through the blood brain barrier. These results demonstrate that the mMSCs cannot extravasate into the brain and kidneys, and are in line with our observation that tumours were not found in these organs after four weeks. Surprisingly, during long-term cell tracking of the BALB/c-derived mMSCs, we observed tumour formation in skeletal muscle following IC administration to a similar degree in immune-competent BALB/c mice as in BALB/c SCIDs. mMSCs also gave rise to tumours in an unrelated inbred strain, albeit at a slower rate, while in an unrelated outbred strain, small foci of mMSCs expanded at early time points and later regressed. Taken together, these data suggest that the adaptive immune system might not be able to recognise tumours derived from syngeneic MSCs (equivalent to autologous MSCs in human applications), and that the genetic background of the host appears to have an effect on the propensity of MSCs to form tumours. This could be a concern for human trials using autologous MSCs where the ability of the cells to form tumours may not be detected by the recipient's immune system. Furthermore, the results suggest that the risk of tumour formation might depend on undefined genetic factors that would vary from patient to patient. Our observation that mMSCs distributed to most organs following IC injection, but tumours were predominantly localised in the skeletal muscles and not within the organs they originally appeared in, raises the question of how tumour formation is regulated in different organs and tissues. Our data indicate that the cells had a 'survival advantage' in muscular tissue, but not in the brain and the kidneys, from which they failed to extravasate. We hypothesise that following IC administration, a small number of MSCs were able to extravasate from the capillaries in the skeletal muscle where they started to proliferate. The mechanisms that regulate the ability of the mMSCs to extravasate and form tumours in the skeletal muscle but not in other organs are not known, and further analysis is required to determine the molecular and cellular factors controlling this process. Our results also show that the cells failed to home to and populate the bone marrow, which is surprising given the cells had been originally isolated from the bone marrow 31 . The D1 mMSC line used here has not previously been reported to generate invasive tumours, since subcutaneously injected cells provided no evidence of metastasing, even if they proliferated at the injection site 32,33 . 
Our observation that the mMSCs did not form tumours outside the lung following IV administration is therefore consistent with this finding. Since these observations suggested that arterial administration of MSC-based cell therapies could have important safety implications, we followed the fate of hUC-MSCs, which are currently being used in several clinical trials (Supplementary Table 1). While in most animals the cells became undetectable within a few days after IV administration, in a few mice the cells persisted longer, albeit transiently, in other body regions where their presence was not expected. We suggest that this unusual behaviour is not linked to cell size, because the hUC-MSCs were not smaller than mKSCs or mMSCs, but could possibly be due to their surface proteins, allowing some of the cells to escape the lungs 11,34 . The observation that hUC-MSC foci appeared in a small number of mice, grew in size, but later disappeared was difficult to explain, especially given that the mice were SCIDs and thus lacked an adaptive immune system. It is possible that the cells eventually elicited a xenogeneic response involving macrophages and natural killer cells 35 , after initially suppressing the native immune system, which is one of their central properties 36,37 . Alternatively, the hUC-MSCs may have expanded in the animal but then become senescent and died, irrespective of the host's ability to mount an immune response. Thus, after an 8-week period of cell tracking, we could not […]

Average cell diameter was estimated by measuring the volume of a cell pellet in a packed cell volume (PCV) tube according to the manufacturer's instructions (Techno Plastic Products, Switzerland). The cell diameter was calculated from the pellet volume V and the number of cells c in the pellet (assuming spherical cells, this corresponds to d = (6V/(πc))^(1/3)). For MR tracking, cells were labelled with diethylaminoethyl-dextran coated SPIONs synthesised in house as previously described 42,43 (Supplementary Table 6). At the respective study end points, mice were culled and organs with any visibly identifiable tumours were imaged ex vivo by BLI. Kidneys were cut coronally for ex vivo imaging, and all other organs were imaged whole. Bioluminescence signals of whole live mice or individual organs ex vivo were quantified by drawing regions of interest (ROIs) from which the total flux (photons/second) was obtained. The relative signal intensity from each organ was calculated as a percentage of the signal intensity from all organs. For ex vivo kidney imaging, the ROI was drawn around all four kidney halves and a single value for total bioluminescence signal was recorded. Images were recorded at the following wavelengths: every 10 nm from 660 nm to 760 nm, and every 20 nm from 780 nm to 900 nm, at a rate of 10 frames per second and averaging 10 consecutive frames. All mice were allowed to adjust to the imaging system for 15 minutes prior to recording data. For monitoring the biodistribution of macrophages after IV administration, a 15 mm section of the abdomen including the liver, kidneys and spleen of the mice was imaged repeatedly for a total of 4.5 hours; 30 minutes into the imaging, the BALB/c mice (n = 3) received 10^7 macrophages via a tail vein catheter. For the IC imaging, a 15 mm section of the abdomen was imaged once, followed by an ultrasound-guided (Prospect 2.0, S-Sharp, Taipei City) injection of 10^7 macrophages into the left ventricle of the heart of 3 BALB/c mice.
Mice were then returned to the photoacoustic imaging system for imaging as previously described. Data was reconstructed and multispectral processing was performed to resolve signals in the liver, kidney and spleen for GNRs. Regions of interest were drawn around the liver, right kidney and spleen ( Supplementary Fig. 3) to generate mean pixel intensity data. MR imaging ZsGreen + /Luc + /SPION + mMSCs (10 6 ) were administered to BALB/c mice IV (n = 2) or IC (n = 2 for short-term analysis; n = 5 for longitudinal tracking). The biodistribution of cells in the brain and kidney was imaged with a Bruker Avance III spectrometer interfaced to a 9.4T magnet system Statistical Analyses Statistical analyses were performed using Minitab 17 statistical software. A one-way ANOVA (analysis of variance) was used to compare multiple groups. When an ANOVA resulted in a statistically significant result (p < 0.05), a Tukey pairwise comparison was performed in order to determine which groups were significantly different. The Tukey pairwise comparison assigned each group at least one letter, and groups that did not share a letter were significantly different from one another.
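The statistical workflow described above (a one-way ANOVA followed by Tukey pairwise comparisons when the ANOVA is significant) can be reproduced outside Minitab. The short Python sketch below illustrates the same two steps; the numbers are placeholders, not study data, and the group names are chosen only for illustration.

```python
# Sketch of a one-way ANOVA followed by Tukey pairwise comparisons.
# The values below are placeholders, not measurements from the study.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

groups = {
    "baseline": np.array([14.1, 14.9, 14.7]),   # e.g. T2* (ms) per animal
    "day0":     np.array([7.5, 8.3, 8.1]),
    "day1":     np.array([12.2, 12.9, 12.6]),
}

f_stat, p_value = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

if p_value < 0.05:                               # post-hoc tests only if the ANOVA is significant
    values = np.concatenate(list(groups.values()))
    labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
    print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```

Groups that do not share a letter in the Tukey grouping output correspond to pairs flagged as significantly different in the table printed by pairwise_tukeyhsd.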
6,218.8
2017-10-24T00:00:00.000
[ "Biology", "Environmental Science", "Medicine" ]
Resonant Akhmediev breathers Modulation instability is a phenomenon in which a minor disturbance within a carrier wave gradually amplifies over time, leading to the formation of a series of compressed waves with higher amplitudes. In terms of frequency analysis, this process results in the generation of new frequencies on both sides of the original carrier wave frequency. We study the impact of fourth-order dispersion on this modulation instability in the context of nonlinear optics that lead to the formation of a series of pulses in the form of Akhmediev breather. The Akhmediev breather, a solution to the nonlinear Schrödinger equation, precisely elucidates how modulation instability produces a sequence of periodic pulses. We observe that when weak fourth-order dispersion is present, significant resonant radiation occurs, characterized by two modulation frequencies originating from different spectral bands. As an Akhmediev breather evolves, these modulation frequencies interact, resulting in a resonant amplification of spectral sidebands on either side of the breather. When fourth-order dispersion is of intermediate strength, the spectral bandwidth of the Akhmediev breather diminishes due to less pronounced resonant interactions, while stronger dispersion causes the merging of the two modulation frequency bands into a single band. Throughout these interactions, we witness a complex energy exchange process among the phase-matched frequency components. Moreover, we provide a precise explanation for the disappearance of the Akhmediev breather under weak fourth-order dispersion and its resurgence with stronger values. Our study demonstrates that Akhmediev breathers, under the influence of fourth-order dispersion, possess the capability to generate infinitely many intricate yet coherent patterns in the temporal domain. The most evolutionary physical system that has many internal interacting components or agents deviates from their initial equilibrium state over time and can develop instability in the system.A particular type of instability, namely, modulation instability (MI), arises in many areas of physics including but not limited to hydrodynamics 1-3 , nonlinear optics 4 , plasma physics 5 , biophysics 6 , nonlinear self-organization and pattern formations 7 . In nonlinear optics, MI remains at the heart of many nonlinear optical phenomena that arise when light propagates through a nonlinear optical medium such as crystal, optical fiber, or waveguides.Noise, which is naturally present in the applied optical field seeds the instability which upon further propagation amplifies exponentially due to its interaction with the dispersion and nonlinear properties of the medium.In the frequency domain this is equivalent to generating cascades of spectral sidebands 8,9 .In the more developed stage of MI inside the medium, the dynamics are highly complex and involve several stages of energy exchange among the spectral modes.This process is intimately connected with a novel nonlinear phenomenon called the Fermi-Pasta-Ulam (FPU) recurrence 10,11 . 
In an optical fiber, the FPU recurrence is: when the modulated input pump starts to propagate in the fiber, the pump generates new sidebands by giving up its own energy to these sidebands.When all the energy from the pump is transferred to many of the generated sidebands, we see the AB just reached its highest amplitude.However, the process starts to reverse at this point when the AB starts to come down from the highest amplitude point.The pump starts to take back its energy from the sidebands and eventually returns to its initial state where it started in the first place -which is the recurrence.This completes a growth-return cycle of an AB and is called the FPU recurrence. Applying the nonlinear Schrödinger equation (NLSE), early research on MI and FPU recurrence was done mainly using numerical studies.In 1984, Hasegawa first showed that one can generate a series of short optical pulses with a desired repetition rate with a limited number of initial conditions 12 .The following year, Akhmediev et al. developed a generalized theory and gave a solid mathematical foundation to the description of MI by deriving the exact analytical solution namely, the Akhmediev breather (AB) presented in 13 . Region of modulation instability The NLSE is the widely used equation that can capture the properties of long optical pulse propagation inside a fiber.However, with increasing pulse power, the fiber triggers varieties of higher-order linear and nonlinear optical effects such as higher-order dispersion, optical shock, and Raman effects 14 .With these effects, the fundamental NLSE is unable to model the propagation dynamics and needs modification.Incorporating these effects in NLSE the generalized nonlinear Schrödinger equation (GNLSE) is formed.Using the GNLSE, the first step towards studying the MI in a fiber system is to conduct a stability analysis using a propagating continuous wave to find the nature of MI in the system (see Methods section).We can represent the boundary of the region denoting the presence of MI for β 4 > 0 with the following pair of curves: In Fig. 1a, the region indicating the presence of MI is highlighted in the bright area for all β 4 > 0 , with the upper limit plotted up to β 4 = 3 .In Fig. 1b, the upper dashed blue curve represents β 4 (1) , while the lower solid curve represents β 4 (2) , enclosing the region where MI frequencies exist.The resonant MI region is defined within 0 < β 4 ≤ 0.75 .The corresponding MI band is presented in Fig. 1b with brick red lines.The primary MI band is situated in the central region between ω = −2 and 2 in Fig. 1a, where a narrow bandwidth MI curve emerges at the wings for small values of β 4 .The position of the MI band is marked by the black arrow at the bottom of Fig. 1a.As β 4 increases, this narrow band gain curve converges and eventually merges with the central region. (1) If an AB development is initiated by MI frequency from the resonant MI region, the MI dynamics here are predominantly influenced by phase-matching interactions.This behavior mirrors the well-explored phenomenon of soliton dispersive wave generation, where the soliton's wavenumber aligns with that of a linear wave, satisfying a phase-matching condition, resulting in energy leakage from the soliton in the form of dispersive waves 24 . 
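The pair of boundary curves referred to above can be written down explicitly under an assumed normalisation. The following minimal sketch assumes the linearised dispersion D(ω) = β2 ω²/2 + β4 ω⁴/24 with β2 = −1 and unit nonlinear phase γP = 1; this choice is an assumption, made because it reproduces the q1, q2 expressions and the region boundaries (0.75 and 1.5) quoted later in the text, and because the explicit form of Eq. (1) does not survive in this extract the boundary_curves function is only a candidate form.

```python
# Hedged reconstruction of the MI gain and of candidate Eq. (1) boundary curves,
# assuming D(w) = beta2*w**2/2 + beta4*w**4/24, beta2 = -1, gamma*P = 1.
import numpy as np

def mi_gain(w, beta4, beta2=-1.0, gP=1.0):
    """Linear-stability MI growth rate g(w); zero where the CW pump is stable."""
    D = 0.5 * beta2 * w**2 + beta4 * w**4 / 24.0
    arg = -D * (D + 2.0 * gP)
    return np.sqrt(np.clip(arg, 0.0, None))      # gain exists only where D*(D + 2*gP) < 0

def boundary_curves(w):
    """Upper and lower beta4 boundaries of the MI region (candidate form of Eq. (1))."""
    return 12.0 / w**2, 12.0 / w**2 - 48.0 / w**4    # loci of D = 0 and D = -2*gP

w = np.linspace(0.1, 6.0, 600)
print(mi_gain(w, beta4=0.04).max())   # resonant-MI example corresponding to Fig. 1b
```

With this normalisation the lower boundary peaks at β4 = 0.75 (where the two sub-bands merge) and both maximum-gain branches remain real up to β4 = 1.5, matching the three regions described in the text.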
Note that the phenomena of MI and phase-matching have an intimate relationship: the former is a process, and the latter is a condition required for optimal MI. In this work, resonant MI dynamics refers to the interaction between ABs that arise from two separate MI bands, where both MI and phase-matching mechanisms play a key role in the dynamic processes. In resonant MI dynamics, phase-matching takes place in two stages. In the first stage, during the development of the ABs, the phase-matching condition must be satisfied to spontaneously generate the discrete spectral sidebands from the noise 14 (see Sect. 10.2) so that they can achieve gain. This stage does not have any counterpart in general soliton dynamics. The second stage of phase matching takes place at the maximum compression point of the ABs, when their spectrum overlaps with dispersive linear waves; this has a direct counterpart in the soliton phase-matching phenomena arising from higher-order dispersion 24.

In the succeeding MI region, within the range 0.75 < β4 ≤ 1.5, the narrow-band MI curve at the wings converges and becomes part of the central wide-band MI curve. One example of the corresponding MI curve in this region is presented in Fig. 1c with β4 = 0.8. Unlike the preceding case, in this region MI is characterized by a double-peak gain band. The perturbation frequency with the highest gain corresponds to the first peak, and the second peak represents another harmonic of the first peak with the highest gain. Interaction between a perturbation frequency and its harmonic within one gain band is defined as higher-order MI 37. Finally, in the uppermost region (β4 > 1.5), only one MI band is situated in the central area. An example of a corresponding MI band with β4 = 2.0 is presented in Fig. 1d. Here, the AB develops from a perturbation frequency from the unstable gain band predicted by the linear stability analysis. Further elucidation of the impact of each scenario on an AB will be provided in subsequent sections.

To investigate whether the MI region shown in Fig. 1a can arise naturally in a waveguide, we employ a white-noise initial condition, used in Eq.
(7), where a(t) and b(t) are two independent real random functions with values uniformly distributed around 0 .Numerical results indeed demonstrate that the MI frequencies observed in Fig. 1a are excited in Fig. 1e.As the initial white noise propagates inside the waveguide, the frequency component corresponding to the highest gain in the noisy initial condition undergoes exponential amplification.We simulate the evolution over a distance of z = 20 , which proves sufficient to excite enough frequencies with the highest gain. For each value of β 4 in Fig. 1e, the excited frequency curve appears with roughness on its profile.When MI is excited with noise initially, the growth of the associated ABs becomes chaotic.This results in the development of a continuous spectrum around the MI frequencies with the highest gain, accompanied by noisy spectral content, as illustrated in Fig. 1e.This scenario mirrors realistic conditions encountered in waveguides or fibers. Conversely, when an MI is excited with an exact AB solution as the initial condition, the resulting spectra are discrete due to the exact periodicity of the ABs and are devoid of noise.In such cases, only the frequencies with the highest gain are stimulated.While this scenario is idealized, it does not always reflect practical situations in waveguides and fibers. It is noteworthy that Fig. 1a, derived from an exact stability analysis expression, demonstrates the absence of MI at approximately ω = 0 for all β 4 .However, when stimulated with noisy initial conditions as depicted in Fig. 1e, this region becomes filled with spurious excitation of frequency components.Nevertheless, utilizing the exact AB solution remains a crucial tool for systematically investigating AB behavior. To obtain a smoother profile, we simulate 1500 values of β 4 ranging from 0 to 1.5 .Remarkably, when put together a large number of β 4 vs ω profiles, the generated MI region closely aligns with the corresponding analyti- cal case depicted in Fig. 1a.The frequencies ω that achieve the maximum growth rate for each β 4 are given by: where q 1 = 6 − 2 √ 9 − 6β 4 and q 2 = 6 + 2 √ 9 − 6β 4 .ω R1 and ω R2 are the MI frequencies from the two sub- bands that have the maximum growth rate.This relationship is depicted by the purple curve in Fig. 1d.Equation (3) indicates that, for every β 4 value, the gain curve (purple) exhibits four symmetrical maxima on both sides of ω = 0 until β 4 < 1.5 .These maximas are divided into two groups, such as frequencies within the range 0 < β 4 ≤ 0.75 participate in resonant MI dynamics, while those within 0.75 < β 4 ≤ 1.5 initiate higher-order MI where MI frequencies are within the same MI band.Beyond β 4 > 1.5 , the highest gain MI frequencies indicate MI with a single MI frequency.Notably, at β 4 = 3 , the growth rate g(ω) = ± √ 2 deduced from Eq. ( 12) aligns with the standard MI frequency derived from the conventional AB solution 13 .Also, for each β 4 there exists at least one frequency on the growth curve g(ω) where MI is zero and these points are given by: which is presented by the green curve in Fig. 
1d. The width of both MI bands can be expressed by simple formulas. The two endpoints of the narrow gain band occurring at the wings are given by: The bandwidth of this gain band, denoted by Δω_b = ω_b2 − ω_b1, represents the range of frequencies within it. On the other hand, the bandwidth of the central gain band always spans from 0 to ±√(2[3 − √(9 − 12β4)]/β4). By varying the value of β4, the bandwidth can be determined for the MI frequencies within the three specific regions defined in Fig. 1a.

Until now, our discussion has centered on understanding the nature of MI in the presence of FOD. However, to comprehend how different types of MI regions affect an AB solution, it is essential to numerically generate an AB. While white-noise initial conditions can yield AB-like structures during evolution, they tend to be highly chaotic, making it challenging to provide a clear explanation of MI in the presence of β4. The standard analytic AB solutions can precisely explain MI only when β2 = −1, a scenario that excludes higher-order dispersion terms. In this study, our objective is to investigate the impact of arbitrary values of β4 on an AB. To achieve this, we numerically solve Eq. (7) with a more accurate initial condition: Here, α_mod is a small real number representing the magnitude of modulation, and ω = ωR1 is the modulation frequency from the central MI band with the maximum gain, which facilitates the formation of an AB. As the initial wave propagates along the z-direction, multiple instances of the AB emerge and recur due to the Fermi-Pasta-Ulam (FPU) recurrence mechanism, as detailed in 23. The AB solution effectively characterizes both the MI and FPU processes. Given its comprehensive coverage in prior works 4,15,37-40, we refrain from reiterating it here. In the subsequent sections, we explore how an AB is influenced by the various MI regions outlined in Fig. 1a.

Impacts of β4 on an Akhmediev breather in the resonant MI region
In the resonant MI region, we start by perturbing the standard AB with β4 = 0.04. In the evolution field, the initial condition Eq. (6) develops an AB that closely matches the exact breather solution, at least for its first appearance in Fig. 2a. The development of the AB is initiated by the excitation of the MI frequency ωR1 = 1.42 from the central MI gain region, as shown in the bottom panel of Fig. 2a. The boundary of the MI region is shown with blue dashed and solid lines. After the AB is fully developed with perturbation frequency ωR1 = 1.42, at the maximum compression point it resonantly and symmetrically excites the second MI frequency ωR2 = 17.26, which falls within the outer MI bands at the wings. Following Eq. (3), this arises as the 12th harmonic of ωR1, shown as ω12 in the mid-panel, where the pump ω0 is marked with the black arrow and the first sideband is ω1 = ωR1. Note that ω12 is phase-matched with the dispersive wave generated at this frequency and arises only when the AB is excited by ωR1 as defined by Eq. (3). We shall see later that the strength of the phase-matched energy exchange among the spectral components significantly influences the AB's extended temporal evolution.

After the phase-matched excitation of ω12, a highly complex energy exchange process starts among the neighboring discrete frequency components along z. Note that ω12 gets excited at the first compression point of the breather at z ≈ 10, indicated by the red arrow. It (white vertical arrow in the frequency domain) acts like a second pump besides ω0 and can generate sidebands similar to the pump. One of the side recurrences that arise from ω12 is indicated by the white box on the frequency evolution in the top panel at z ≈ 19. It is produced by the main AB formed at z ≈ 10 and arises due to spectral overlapping of the broadened main AB spectrum. We stress here that the first perturbation frequency is defined by Eq.
( 3) which is ω 1 = ω R1 = 1.43 and the phase-matched 12 th harmonic arises at ω 12 = 17.26 .A small offset of excitation frequency at ω 12 arises due to the fact that the solution is derived by numerical simulation which can deviate slightly from the exact value.After excitation, the small side recurrences interfere with the main AB, creating dispersive linear waves in the background in both directions. With β 4 = 0.04 , the strength of the interference among the generated dispersive waves is low enough that the main AB and its recurrences are still sustained.For all the values with β 4 << 0.75 , we observe these spectral and temporal dynamics where we show only one example in Fig. 2a.We will devote a separate section to discuss a more detailed picture of this scenario. With an increase of β 4 the external MI band is coming closer to the pump with widening bandwidth as shown in Fig. 2b with β 4 = 0.1 .The primary sideband next to pump ω 0 is excited at ω R1 = 1.43 as the first harmonic ω 1 shown in the bottom panel.The notable feature in the top panel is the formation of a highly complex temporal pattern with more amplified wide-bandwidth frequency components.Also, another notable feature is after its first appearance at z ≈ 10 , the AB lost its recurrence property entirely.The Presence of highly amplified disper- sive waves generated by the phase-matched ω R2 = 10.86 at the AB's ω th 8 harmonic dominates the background hindering the resurgence of the AB in the extended evolution.A detailed characterization of this behavior is provided in "Energy exchange among the harmonics in the resonant MI interaction regime" in Fig. 5d-f in the spectral domain and "Evolution trajectory of an AB under the influence of β 4 " where we track the evolution trajectory of the AB (see Fig. 7b). Comparing the spectral intensity of I|ω| for β 4 = 0.04 with β 4 = 0.1 clearly shows that the spectrums are amplified more in the latter case.The linear dispersive waves that arise symmetrically at the first compression point of the AB are strong enough to disrupt or destroy altogether the possibility of the next appearance of the AB.This is shown in the temporal domain where the interference among the linear waves creates a coherent pattern along z. With a further increase of β 4 to 0.65, the outside MI band is much closer to the central MI band with a slightly wider bandwidth shown in the bottom panel of Fig. 2c.The first sideband ω 1 excited at ω R1 = 1.51 and the second phase-matched harmonic ω 3 excited at ω R2 = 4.02 .The generated AB's bandwidth is narrower than before.It is worthwhile to mention that the spectral width depends on the position of the narrow MI band.If this band is far away from the pump, the AB excites all the harmonics until it excites the harmonics which is within the narrow MI band.With stronger β 4 , this band appears close to the pump, hence, the AB has to excite fewer harmonics before it can be phase-matched with the harmonic within the narrow MI band. 
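The pair of maximum-gain frequencies and the index of the phase-matched harmonic can be reproduced numerically. The sketch below reads Eq. (3) as ωR = √(q/β4) with the q1, q2 expressions quoted above; this reading reproduces the values (1.43, 17.26), (1.43, 10.86) and (1.51, 4.02) cited in the text for β4 = 0.04, 0.1 and 0.65, but it should be treated as an illustrative reconstruction rather than the paper's exact formula.

```python
import numpy as np

# Sketch: locate the two maximum-gain MI frequencies and the AB harmonic that is
# phase-matched with the outer band, assuming omega_R = sqrt(q / beta4) with
# q1 = 6 - 2*sqrt(9 - 6*beta4) and q2 = 6 + 2*sqrt(9 - 6*beta4).

def resonant_frequencies(beta4):
    if not 0 < beta4 <= 1.5:
        raise ValueError("two real maxima exist only for 0 < beta4 <= 1.5")
    root = 2.0 * np.sqrt(9.0 - 6.0 * beta4)
    q1, q2 = 6.0 - root, 6.0 + root
    return np.sqrt(q1 / beta4), np.sqrt(q2 / beta4)

for beta4 in (0.04, 0.1, 0.65):
    w_r1, w_r2 = resonant_frequencies(beta4)
    n = round(w_r2 / w_r1)      # index of the phase-matched harmonic of omega_R1
    print(f"beta4={beta4:5.2f}  omega_R1={w_r1:5.2f}  omega_R2={w_r2:5.2f}  "
          f"phase-matched harmonic ~ omega_{n}")
```

Running this gives harmonic indices 12, 8 and 3 for the three β4 values, matching ω12, ω8 and ω3 identified in Figs. 2a-c.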
However, the intensity of the spectrums near the pump is even higher as shown in the middle panel.With this, the interference among generated linear waves is even stronger albeit with fewer amplified frequencies resulting in the complex temporal evolution creating a coherent pattern shown in the top panel.Notably, the first appearance of AB at z ≈ 10 is highly modulated due to the strong interference that arises from the dispersive wave generated by the phase-matched harmonic frequency ω 3 with ω 1 and ω 2 .Also, the patterns on the background are qualitatively different from those formed in Fig. 2b.A strong resonant interaction among the ABs seeded by the phase-matched harmonics takes place only when 0 < β 4 ≤ 0.75 .It appears that there exists a correlation between the number of excited harmonics and their strength with the formed structures, and their complexity.A detailed explanation in this direction is beyond the scope of this current manuscript but can be the subject of future study and analysis. We find that when β 4 = 0.75 the MI band (brick-red curve) in Fig. 3a develops a cusp at ω x = 2 √ 2 = 2.82 and excits ω R1 = 1.53 and ω R2 = 3.7 at their maximum gain.The MI cannot develop at cusp point ω x .Note that it also marks the last point where the external and the central MI bands are separate from each other.After this point with β 4 > 0.75 , the external MI band starts to merge with the central MI band.Now we have one pertur- bation frequency and a phase-matched harmonic defined by ω R1 and ω R2 respectively.A double peak MI band characterizes them and they excite two mutually interacting AB. In this regime of MI, the development of an AB is dominated by higher-order MI dynamics where the resonant interaction is reduced significantly.This begins with the appearance of the lost AB again which is obscured previously by strong resonant MI interactions.In Fig. 3a, in the temporal domain, we can see hints of the return of the recurrence cycles.Two complete growth-return cycles take place at the early stage of the evolution. When MI bands are even closer to each other with increasing β 4 , the regular behavior of the recurrence cycle of an AB also starts to restored to its full shape.Figures 3b, c show the ABs evolution for β 4 = 1.0 and 1.5.In the top panel, on the frequency evolution, the repeated cycle of compression and decompression stages appeared clearly.With β 4 = 1.0 two MI frequencies within the same band are ω R1 = 1.6 and ω R2 = 3.0 .In these cases, the dynamics of higher-order MI dynamics are at play resembling the MI dynamics presented in 37 , (see Figs. 1 and 2). However, only at β 4 = 1.5 , the external MI band completely merges with the central MI band marking the end of the resonant MI dynamics regime.With β 4 ≥ 1.5 , the MI band possesses only one gain band with one maxima restoring the regular growth return cycle of a standard AB which is highlighted in Fig. 3c (please see the Supplementary Material to see these dynamics in a movie). Energy exchange among the harmonics in the resonant MI interaction regime In the resonant MI interaction regime, we observed that the presence of FOD with positive β 4 creates a complex energy exchange scenario during the AB development.In this section, we give a comprehensive explanation of how this happens and its connection to creating a highly complex temporal pattern as shown in Fig. 
2b.Note a detailed explanation of the cascaded four-wave mixing process involved in generating this type of discrete harmonics is given in 41 .In this work, considering the experimental scenario two pump frequencies ( ω ±p ) on both sides of the central mean frequency are induced to initiate the cascaded four-wave mixing process.However, our narrative is aligned with the work 15 where the central mean frequency is considered to be the pump where all the energy is stored, and following the four-wave mixing process, the subsequent harmonics are developed.We emphasize more on the directional flow of energy in the AB's extended evolution dynamics.Figure 4a is the same as the spectral domain of Fig. 2a.Because the spectrum is symmetric, we highlight only half of it.The AB achieves its first compression point at around z ≈ 10 while the pump generates and allocates most of its energy into the sidebands along the dashed white arrow indicated by A. The phase-matched harmonic ω 12 indicated by the white arrow 1 also acts as a pump and creates sidebands along dashed arrow B. Note that as the pump ω 0 and the harmonic ω 12 breathes and exchange energy with the sidebands in a synchronized way, there could be an overlap of shared energy among the sidebands generated by both ω 0 and ω 12 .In other words, the sidebands generated by ω 12 could find their origin in the main pump ω 0 .The AB's spectrum is widest at its maximum compression point at z ≈ 10 .At the moment, the pump starts to take back its energy again from the sidebands following the rules of Fermi-Pasta-Ulam recurrence 42 .However, instead of flowing all the energy from the sidebands towards the pump ω 0 , now the flow is split following arrows 2, 3 and 4. While a part of the energy is flowing towards ω 0 indicated by 2, the remaining energy is flowing towards ω 12 shown by the arrows 3 and 4.This marks the end of the first recurrence. The beginning of the second recurrence starts with the transfer of energy from the pumps to the newly created sidebands indicated by the arrows 5, 6, and 7.At the second compression point, the AB reaches its highest amplitude at z ≈ 19 .Return from this summit to the background is complete when the pump takes back its energy following the arrows 8, 9, and 10.This cycle of energy exchange keeps repeating along z.Because energy is flowing from two pumps in the same sidebands between ω 0 and ω 12 , there is a build-up of energy among the sidebands and this grows with increasing β 4 values.This is clarified in Fig. 5 where the spectral intensity is plotted in the longitudinal z direction for β 4 = 0.04 .In Fig. 5a, the top-most thick blue line is the pump ω 0 shown in Fig. 4a with an arrow and the curves below are the next three sidebands to its right.The pump depletes at each AB's compression point by transferring energy to the sidebands.Generally, the nearest few sidebands have the most energy.The further the sidebands are away from the pump, the less energy it acquires.To show the energy exchange, we only plot the next three sidebands from the pumps.Compared with the temporal evolution in Fig. 2a, Fig. 5a clearly shows four AB recurrences with compression and decompression dynamics up to along z = 40. In Fig. 5b, c, the top-most thick blue line is the phase-matched harmonic ω 12 shown by the arrow 1 in Fig. 5a. Figure 5b shows the energy evolution among three sidebands to the left of ω 12 whereas Fig. 
5c shows the sidebands to the right. Note that while ω0 starts to act as a pump from z = 0, ω12 does so only from z ≈ 10, after it appears. Because they are pumped by both ω0 and ω12, the sidebands in Fig. 5b are more amplified and show stronger interaction with the pump compared to Fig. 5c. Remarkably, both groups of sidebands (ω11, ω10, ω9) and (ω13, ω14, ω15) interact with ω12 like regular AB sidebands, making this part appear like a secondary AB, as shown in Fig. 2a with a white box. One notable feature is that the pump ω0 and the sidebands in Fig. 5a are far more amplified than those in Fig. 5b, c.

A similar observation is made for β4 = 0.1, where the sidebands between ω0 and ω8 are strongly amplified. In Fig. 5d, the topmost thick blue line is the pump ω0, indicated by the white arrow 1 in Fig. 4b. The three curves below are the first three harmonics to the left side of ω0. In Fig. 5e, f, the topmost thick blue line is the harmonic ω8, and the lines below are the first three harmonics on the left and right side of ω8, respectively. The important observation here is that, in Fig. 5d, as the MI progressed further along z, it rapidly lost its growth-return (recurrence) cycles, leaving no trace of an AB in the evolution scenario. The three sidebands next to the pump (ω1, ω2, ω3) are amplified and interact with the pump in its proximity. Along the entire z, the energy in the pump appears to remain the same, with almost no energy exchange with the sidebands, which characteristically indicates the presence of strong linear waves at play. They dominate the wave dynamics, and the interference among them creates complex temporal patterns.

Figure 5e shows the pump as the resonant harmonic ω8, which arises at the AB's first compression point at z ≈ 10. The first three sidebands to its left are (ω7, ω6, ω5). Both groups of sidebands, (ω1, ω2, ω3) in Fig. 5d and (ω7, ω6, ω5) in Fig. 5e, are heavily pumped by both ω0 and ω8, and the energy is trapped between them, resulting in a strong amplification of these sidebands. This amplification is seen in Fig. 4b along the dashed arrow A, where it is as strong as that of the primary sidebands close to ω0. However, in this situation, while the harmonic ω8 acts as a pump, it also works as a barrier. It prevents energy flow from ω0 to the sidebands ω9, ω10, ω11, making them less amplified, as shown in Fig. 5f, where the intensity of these harmonics is far below that of the pump ω8.

In Fig. 6a, with β4 = 0.65, we can observe that the pump (thick blue line) is still highly aperiodic. Its energy exchange with the neighboring harmonics ω1 to ω6 is almost static; hence there is no growth-return cycle, leaving no trace of an AB in the z direction, as outlined and demonstrated in Fig.
2c.With higher values of β 4 , the AB's spectrum reduced significantly with fewer sideband excitations.We only need to plot the first six harmonics to visualize the energy exchange interactions with the sidebands.These are shown in Fig. 6b-d where the black dashed line separates between more and less amplified sidebands.Starting from β 4 = 0.75 , we can observe the gradual restoration of an AB's growth return cycle from the behavior of the pump with increasing β 4 value.The full restoration is achieved in Fig. 6d with β 4 = 1.5. Evolution trajectory of an AB under the influence of β 4 Another way to describe and visualize the MI dynamics is to explore the movement of the AB's development trajectory on a complex plane.To investigate this we set a parallel where the first row of figures in Fig. 7 corresponds to the temporal evolution of ABs in Fig. 2 with weaker dispersion values.Similarly, the second row corresponds to the AB's temporal evolution in Fig. 3 with intermediary to stronger dispersion values.Each example of Fig. 7, shows how an AB develops along the evolution direction and undergoes multiple highs and lows creating a specific trajectory of change in a complex plane.The development trajectory of a standard AB followed by the marker + is shown by a black-dotted arrow line.This is rather clearly seen in the last example Fig. 7f and we do this to highlight how β 4 disrupts an AB's dynamics when it develops compared to a standard one. In Fig. 7a, the arrow 1 marks the start of the trajectory (bottom arrow on the colorbar) where ABs amplitude development starts.As the amplitude develops, the change in color profile follows the colorbar in the inset.When AB completes one recurrence, the trajectory ends at arrow 2 (middle arrow on the colorbar) following the upper half of the trajectory.One recurrence means the AB reached its maximum height and came back again on the same background field. For the next recurrence, the trajectory again begins at arrow 2, that is marking the color profile with the middle arrow in the colorbar and it completes the lower half trajectory marking the ends at the top arrow in the colorbar.With β 4 = 0 , within evolution length z = 40 , a standard AB appears twice, hence we observe only one upper and a lower trajectory.If there are more recurrences of AB appearances, they overlap with each other.www.nature.com/scientificreports/However, these overlaps may not be perfect depending on how irregularly the AB is evolving.The arrows 3 and 4 indicate where the AB reaches its highest amplitude.With β 4 = 0.04 in Fig. 2a, in the temporal evolution, we can see AB appears four times.For this reason, in Fig. 7a, we can see slightly irregular trajectories with round dot markers.However, with increasing β 4 , this trajec- tory continues to deviate from the ideal path.In Fig. 7b, with β 4 = 0.1 , we observe a drastic change in the ABs development trajectory.In fact, there is hardly any regular path formed which indicates that there is no AB at all.The scattered dots are the peak values of strong background linear waves that show a highly dispersed trajectory indicating the total disappearance of the FPU phenomena and the presence of strong dispersive waves.With further increasing values of β 4 = 0.65 in Fig. 7c, although the resolution of the trajectories increased, however, they are still highly irregular indicating persistent resonant interaction and the presence of linear dispersive waves on the wave field. In Fig. 
7d, with β4 = 0.75, many small, irregular, and incomplete trajectories are formed. However, these irregular behaviors are reduced significantly in Fig. 7e with β4 = 1.0, indicating that the AB that disappeared due to the strong resonant interaction is appearing again. Finally, with β4 = 1.5, because there is only one MI frequency, a smooth and consistent trajectory appears with the resurgence of the AB. Note that these are the inner small-amplitude trajectories, which are much smaller compared to the standard AB trajectory.

The trajectories in the complex plane can also be complemented by plotting the maxima of the ABs along z. The figures in the first and second rows of Fig. 7 correspond to Fig. 8a, b, respectively. In Fig. 8a, due to the strong resonant MI interaction, the trails of the temporal peak values exhibit high oscillations caused by the presence of amplified dispersive waves. The blue dashed line, representing β4 = 0.04, still shows regular recurrence dynamics; for such weak values of β4, the interaction of the resonant MI frequencies and the generated dispersive waves are not strong enough to eliminate the AB. However, with β4 = 0.1 and 0.65 in Fig. 8a, the maxima indicated by the green and red dotted lines become highly irregular due to the strong presence of the dispersive waves, and the AB's recurrence behavior disappears completely. These correspond to Fig. 7b, c in the complex-plane presentation.

In Fig. 8b, when the resonance frequency band is closer to or inside the main MI band with β4 = 0.75 and 1.0, the maxima of the ABs fluctuate less, and the recurrent AB dynamics reappear. With β4 = 1.5, the green dotted curve shows perfect Fermi-Pasta-Ulam (FPU) recurrence phenomena, corresponding to Fig. 7f with the resurgence of the AB. Note that the maxima developed with weaker β4 achieve higher amplitudes compared to those with stronger values.

Discussion
While previous studies have explored the impact of TOD and FOD on MI, a systematic examination of the various MI regimes and their specific influence on AB development has not been addressed 36,43. Although the disappearance and reappearance of an AB with varying strengths of higher-order dispersion have been investigated for third-order dispersion 23, an explanation for this phenomenon was not presented. A more comprehensive overview of various perturbations and the robustness of an AB can be found in a recent collection of research topics 44.
Recently the emergence of sub-regions in the MI band is also reported in vector Manakov equations where the stable gaps between MI bands are discussed 45 .The MI frequency harmonics that fall within these stable gaps do not grow into an AB in both vector and scalar cases.However, the main differences between these two systems are, that the regions of active MI and their shape are different (see Fig. 1b in Ref. 44 and Fig. 1a in this manuscript).Also, the highest growth rates in sub-MI bands of the vector Manakov systems are unequal whereas in the scalar GNLSE, they remain equal which may play a crucial role in strong resonant interactions and spectral amplification.Another important difference is in the Manakov system, the splitting of the MI band is not related to the higher-order dispersion whereas in our case, it is for the FOD.Indeed it is remarkable to realize that even without higher-order dispersion the Manakov systems allow such resonant interactions.Nonetheless, to reveal the true extent of similarities and differences between these systems requires more in-depth research. Our goal in this work is to investigate how the introduction of higher-order dispersion in the scalar GNLSE system reveals resonant MI dynamics that significantly affect the extended temporal and spectral evolution of an AB.Also, the role of strong dispersive waves arising from resonant MI on an AB has not been explored.The current study addresses these limitations. In summary, we utilized the GNLSE and AB solutions to illustrate the influence of FOD on MI dynamics.Under anomalous dispersion conditions, the incorporation of +β 4 revealed a resonant MI regime and we explained in detail how it impacts the evolution of an AB.Our results demonstrated that FOD introduces complex behaviors in general MI dynamics, drastically affecting AB development in both the temporal and spectral domains.As β 4 values increase, the resonant MI regimes manifest a complex energy exchange process.We elucidated how this complex energy exchange process amplifies frequencies between the pump and phasematched harmonics, resulting in a variety of intricate temporal patterns. 
Note that while we addressed several key points, our analysis is largely based on the spectral behavior of the interacting ABs. However, it is also important to conduct a comprehensive analysis of the temporal dynamics to answer several crucial questions. One such question is how β4 is related to the variety of complex patterns that form in the temporal domain. Exceedingly small changes in β4 generate entirely different temporal structures (see the Supplementary Material). In the spectral domain, this variation appears only as a narrower or wider spectral bandwidth. There must be a specific amplitude and phase relationship among the ABs and the background dispersive waves which plays a central role in creating those composite patterns. Realizing these connections will enrich our understanding of how ordered structures emerge spontaneously in nature. Herewith, we acknowledge that AB dynamics under higher-order dispersion and nonlinear effects currently remain a field of active research 17,44. These open questions may stimulate more debates and discussions leading to more concrete answers.

Our observations and analysis in this work provide fresh insight into the intricacies of MI dynamics influenced by FOD. Practical applications include optimizing optical parametric amplification processes in diverse waveguide setups. Furthermore, this study contributes to a deeper understanding of MI-induced continuous-wave supercontinuum generation, particularly in CMOS-compatible on-chip waveguides and photonic crystal fibers with substantial nonlinearity and strong dispersion 20,25,26,28. Our research simplifies the comprehension of MI in these systems, potentially advancing their application in nonlinear light generation and the controlled formation of optical rogue waves. Besides, self-organization is a fascinating phenomenon in the field of nonlinear science. This process plays a pivotal role in shaping spatial patterns in fields such as biology, neural networks in the brain, chemistry, physics, fluid dynamics, and plasmas 7,46. The current observation of intricate and complex pattern formation highlights the crucial connections between MI and self-organization, which may improve our understanding of this highly complex process.

Methods
The propagation of a modulated continuous wave (CW) inside a waveguide is modeled with the generalized nonlinear Schrödinger equation (GNLSE), given by: It is important to note that β3 is omitted in the investigation of MI through linear stability analysis. For phase matching in a four-wave mixing process, the β3 term cancels out, as shown in 14 and 20 (see Eq. 10.1.7), leading to β3 = 0. Although the theory of an AB establishes that odd-order dispersion introduces velocity in the pulse profile while even-order dispersion influences MI and the phase of the pulse profile 47, it is crucial to clarify that the literature sometimes mistakenly suggests that third-order dispersion (TOD) does not contribute to MI dynamics at all. Contrary to this misconception, several works, such as 22,23,48, have presented numerical studies on TOD's contribution to MI. For numerical simulations, we employed the standard split-step Fourier method in conjunction with the fourth-order Runge-Kutta method to solve Eq. (7) 49.
When β4 > 0 and the second term in Eq. (7) is retained with a positive sign, the equation admits a steady-state solution of the form: To assess the stability of this solution, the amplitude is perturbed at frequency ω, modifying the steady-state solution to: where a(z, t) = a1 cos(kz − ωt) + i a2 sin(kz − ωt). Solving Eq. (7) under these conditions yields the dispersion relation given by: Here, B2 and B4 are defined as follows: with ωc² = (4P0γ)/β2. The dispersion relation in Eq. (10) clearly illustrates the relative influences of the group-velocity dispersion parameter β2 and the FOD β4. The stability of the initial plane wave is determined by whether k is real; instability occurs only when k is imaginary. It is essential to highlight that, under specific conditions (γ = P0 = 1, β2 = −1, resulting in ωc² = −4), Eq. (10) represents the dispersion relation for the normalized form of Eq. (7) 14,32. This specific form is utilized in our analysis. Furthermore, when β4 = 0, Eq. (10) yields k = (ω/2)√(ω² − 4), providing the gain expression directly in AB solutions, as presented in 15,39. For β2 < 0 and β4 ≠ 0, k becomes imaginary when (β2 + β4ω²/12) < 0, and with these conditions the gain can be given as in 14,20,35.

Figure 1. The relationship between the β4 coefficient and the MI frequency ω is examined under different conditions. In (a), the MI frequency range is depicted for all β4 > 0, with β2 = −1. The bright region indicates the presence of MI, while the dark region represents its absence. Three distinct MI regions are identified and separated by red-dotted lines. (b) Illustrates the MI band within the resonant MI region with a solid brick-red curve for β4 = 0.04, denoted by a black arrow in (a). Similarly, (c) shows the region of MI where two MI frequencies are within the same MI band, and (d) shows the MI band with one MI frequency only, with the corresponding positions highlighted in (a) using black arrows. The right Y-axis represents the growth rate g(ω), and the blue curves (Eq. 1) represent the upper (dashed) and lower (solid) boundaries on the (β4, ω) plane where MI occurs. The left Y-axis displays β4. (e) Similar to (a) but numerically generated using a white-noise initial condition (Eq. 2). (f) Presents only those modulation frequencies with the highest gain, based on Eqs. (3) and (4), depicted by purple and green curves. The region below the dotted black line marks the resonant MI regimes (please see the Supplementary Material, where a movie with varying β4 vs ω is also provided).
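The numerical experiments described in the Methods can be reproduced with a compact split-step scheme. The sketch below solves a reduced version of Eq. (7) that keeps only β2, β4 and the Kerr term (shock and Raman terms are dropped), in the normalized units β2 = −1, γ = P0 = 1 used throughout; it uses an exact phase rotation for the nonlinear sub-step instead of the RK4 integrator mentioned in the Methods, and seeds the field with an Eq. (6)-type modulated initial condition. All names, the assumed equation form, and the grid parameters are illustrative.

```python
import numpy as np

# Reduced, normalized model assumed here (shock/Raman dropped):
#   i dpsi/dz - (beta2/2) psi_tt + (beta4/24) psi_tttt + gamma |psi|^2 psi = 0
beta2, beta4, gamma = -1.0, 0.04, 1.0
alpha_mod = 1e-3                                   # modulation depth of the Eq. (6)-type seed

# Max-gain frequency of the central band (same reconstruction as in the earlier sketch).
w_r1 = np.sqrt((6.0 - 2.0 * np.sqrt(9.0 - 6.0 * beta4)) / beta4)

periods, n_t = 16, 4096                            # periodic box holding an integer number of AB periods
T = 2.0 * np.pi * periods / w_r1
t = np.linspace(-T / 2, T / 2, n_t, endpoint=False)
w = 2.0 * np.pi * np.fft.fftfreq(n_t, d=t[1] - t[0])

psi = (1.0 + alpha_mod * np.cos(w_r1 * t)).astype(complex)   # modulated CW seed

z_max, n_z = 40.0, 8000
dz = z_max / n_z
# Linear operator in Fourier space: d(psi_hat)/dz = i*D(w)*psi_hat
D = 0.5 * beta2 * w**2 + beta4 * w**4 / 24.0
half_linear = np.exp(1j * D * dz / 2.0)

spectra = []                                       # |psi_hat|^2 snapshots along z for later analysis
for step in range(n_z):
    psi = np.fft.ifft(half_linear * np.fft.fft(psi))        # half linear step
    psi *= np.exp(1j * gamma * np.abs(psi)**2 * dz)          # exact Kerr phase rotation
    psi = np.fft.ifft(half_linear * np.fft.fft(psi))         # half linear step
    if step % 20 == 0:
        spectra.append(np.abs(np.fft.fft(psi))**2 / n_t**2)
spectra = np.array(spectra)
print("max field amplitude reached:", np.max(np.abs(psi)))
```

With β4 = 0.04 this setup develops the first AB compression near z ≈ 10 and shows the recurrence cycles discussed above; changing beta4 explores the other MI regimes.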
Figure 2. Impacts of β4 on an AB in the resonant MI region. In each case (a-c), the top panel shows the temporal and spectral evolution, while the mid-panel shows the resonant amplification of the harmonics of the AB's discrete spectrum. The bottom panel shows the movement of the MI band relative to its region boundary. The discrete spectra are taken at the AB's first compression points, indicated by the red arrows, with the values of (a) β4 = 0.04, (b) β4 = 0.1, and (c) β4 = 0.65. In each case, the MI frequency ωR1 excites the first harmonic ω1 and ωR2 excites the phase-matched harmonic, in (a) ω12, (b) ω8, and (c) ω3.

Figure 3. Impact of strong FOD on a numerically excited AB in the MI region 0.75 < β4 ≤ 1.5, where the MI frequencies ωR1 and ωR2 remain close to each other. Three instances of evolution with (a) β4 = 0.75, (b) β4 = 1.0, and (c) β4 = 1.5 are shown. With β4 = 0.75, both MI bands come into contact at ωx = 2.82, where there is no MI. With an increasing value of β4, the recurrence dynamics return to their regular behavior. In (c), the MI gain band resolves into perfect AB recurrence dynamics, where only one MI frequency develops one standard AB.

Figure 4. Same as the frequency evolution from Fig. 2a, b: the AB's spectral evolution along z with (a) β4 = 0.04 and (b) β4 = 0.1, showing the frequency components in +ω evolving up to z = 40. The numbered arrows 2 to 10 show energy flow among the harmonics. The dashed long arrows (A and B) show the range of frequencies on both sides of the phase-matched excited harmonics, in (a) ω12 and (b) ω8.

Figure 5. The harmonic ω12, excited at the resonant frequency ωR2 = 17.26, now acts as a pump (marked by the white arrow in Fig. 4a), interacting with sidebands (ω11, ω10, ω9) originating from its left side. Similarly, (c) illustrates the evolution of three sidebands (ω13, ω14, ω15) on the right of ω12. (d) Shows the evolution of the first three sidebands (ω1, ω2, ω3) in Fig. 4b next to the pump ω0 with β4 = 0.1, revealing energy exchange. (e) Displays the interaction of the harmonic ω8, indicated by the white arrow, as the pump with sidebands (ω7, ω6, ω5) situated on its left. Similarly, (f) shows the interaction of sidebands (ω9, ω10, ω11) with ω8 on the right.

Figure 7. The trajectory of the AB's recurrence cycles on a complex plane. An ideal AB trajectory is presented with the marker + in each of (a-f). The development of the AB's amplitude starts following the dark-blue color at the bottom arrow on the colorbar. This point is indicated by the black arrow 1 inside the upper trajectory loop, and the journey ends after one growth-return cycle at the middle arrow on the colorbar. This position is indicated by the black arrow 2 inside the loop. The second growth-return cycle starts again at arrow 2 but follows the downward loop and ends at arrow 1 again, taking the bright yellow color at the top of the colorbar. The top row shows (a) β4 = 0.04, (b) β4 = 0.1, and (c) β4 = 0.65, while the bottom row shows (d) β4 = 0.75, (e) β4 = 1.0, and (f) β4 = 1.5. The provided Supplementary Material also highlights these dynamics.

Figure 8. The ABs' amplitude variation along the propagation distance z with (a) weak β4 and (b) strong β4. The corresponding β4 values are the same as in Fig. 7, (a) for the top row and (b) for the bottom row, respectively.
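Fig. 5-style curves of individual sideband energies can be obtained by post-processing the spectra recorded by the split-step sketch above. The snippet below reuses its variables (spectra, periods, z_max); the choice of harmonic indices is only an example.

```python
import numpy as np
import matplotlib.pyplot as plt

# On a box holding `periods` AB periods, the n-th harmonic of omega_R1 sits in
# FFT bin n * periods, so the sideband energy along z is read off those bins.
z_samples = np.linspace(0.0, z_max, len(spectra))

def sideband_energy(spectra, harmonic, periods):
    """Energy in the +omega sideband `harmonic` (0 = pump) along z."""
    return spectra[:, harmonic * periods]

for n in (0, 1, 2, 3, 12):                 # pump, first sidebands, phase-matched harmonic
    plt.semilogy(z_samples, sideband_energy(spectra, n, periods),
                 label=f"$\\omega_{{{n}}}$")
plt.xlabel("z")
plt.ylabel("sideband energy (arb. units)")
plt.legend()
plt.show()
```

The pump curve depletes at each compression point while the low-order sidebands grow and return, which is the FPU recurrence picture described in the text.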
11,615.8
2024-05-09T00:00:00.000
[ "Physics" ]
Finite-state discrete-time Markov chain models of gene regulatory networks [version 1; peer review: peer review discontinued] In this study, Markov chain models of gene regulatory networks (GRN) are developed. These models make it possible to apply the well-known theory and tools of Markov chains to GRN analysis. A new kind of finite interaction graph called a combinatorial net is introduced to represent formally a GRN and its transition graphs constructed from interaction graphs. The system dynamics are defined as a random walk on the transition graph, which is a Markov chain. A novel concurrent updating scheme (evolution rule) is developed to determine transitions in a transition graph. The proposed scheme is based on the firing of a random set of non-steady-state vertices in a combinatorial net. It is demonstrated that this novel scheme represents an advance in asynchronicity modeling. The theorem that combinatorial nets with this updating scheme can asynchronously compute a maximal independent set of graphs is also proved. As proof of concept, a number of simple combinatorial models are presented here: a discrete auto-regression model, a bistable switch, an Elowitz repressilator, and a self-activation model, and it is shown that these models exhibit well-known properties. Introduction Efforts to study gene-expression regulation networks has led to detailed descriptions of many such networks, and many more can be expected to be identified in the near future. Therefore, there is a need to develop methods of computational and theoretical analysis of gene regulatory networks (GRNs). One of the most promising directions is to reduce the problem to the study of Markov chains generated in some way from the GRN [1][2][3][4][5] . Usually, Boolean networks 6 are used as a formal representation of a GRN. Classification of process states, studies of long-term behavior 7 , and development of optimal strategies for therapeutic intervention [7][8][9][10][11][12][13][14][15] provide good examples of this approach 16 . In contrast to Boolean networks, Hopfield networks are defined using arithmetic operations 17 . They are a well-developed branch of science dealing with stochastic processes of asynchronous state switching as a result of interactions. As such, they are similar to Boolean networks. A Hopfield-like formalism also leads to the definition of a Markov chain. In the Hopfield network field, essential results have been obtained in the study of various update schemes 18 , network oscillations 19 , solutions of combinatorial optimization problems [19][20][21][22][23][24][25][26] , estimation of convergence rates, and many other problems. This makes it valuable to study the possibility of using Hopfield like-networks to construct Markov chains from GRNs and other interaction graphs. Here, a GRN is considered to be a kind of interaction graph. Interaction (regulatory) graphs have emerged in various fields of the life sciences 27 . In recent years, their transition graphs have often been used to analyze the properties of interactions (regulations). One promising way to understand the nature of the regulations or interactions represented by interaction graphs is to analyze the Markov chain associated with their transition graphs. Method The proposed method may be viewed as a version of the Hopfieldlike network 17 where groups of randomly selected unstable units are updated in parallel 18 . 
Interaction graphs and non-steady-state vertices
Let G = (V, E) be a directed graph, where V is a set of vertices and E is a set of edges. Let B = {0, 1} be the set of vertex states. A vertex is said to be active if its state equals "1"; otherwise it is said to be inactive. The mapping function M: V → B gives the state of each vertex. For a given vertex v ∈ V, M(v) is the state of vertex v that corresponds to map M. M(v) = 1 is equivalent to saying that vertex v is active under map M, and M(v) = 0 is equivalent to saying that vertex v is inactive. The weighting function W: E → R gives the value of each edge of graph G, which represents the power of the interaction. If e = (u, v), e ∈ E, then u is said to be a direct ancestor of v and v a direct descendant of u. The influence on v under the map M is defined as the sum of the weights of the edges from all direct active ancestors of vertex v. The influence on v under the map M is denoted by I(v, M) (also called "the local field" or "the net input"). That is, I(v, M) = Σ_{(u,v) ∈ E} W((u, v)) M(u). The forced state of v under M is the state toward which this influence pushes the vertex: 1 if I(v, M) ≥ 0 and 0 otherwise. By definition, if the forced state and the current state of v are the same, then the current state of vertex v under map M is steady.

The random set update rule
Now consider a stochastic process {Y_j, j = 1, 2, 3, …} that takes values on the set of maps of some interaction graph G, where Y_j denotes the map of G at time period j. At each time period j, for each non-steady-state vertex v_i under map Y_j, the current vertex state is changed to the forced state with probability p_i, and the current state remains unchanged with probability 1 − p_i. Let S = {v_1, v_2, …, v_n} be the set of all non-steady-state vertices at time period j. The vertices chosen to change their state in a one-step transition constitute a random set X ⊆ S. To create a new map that is directly accessible from the current map Y_j, all vertices in X simultaneously change their state, whereas the other vertices remain unchanged. Let P = {p_1, p_2, …, p_n} be some vector of numbers such that 0 ≤ p_i ≤ 1, where p_i is referred to as the probability of a state change (firing) of the non-steady-state vertex v_i. For any X ⊆ S, let 1_X: S → B be the indicator function such that 1_X(v) = 1 if v ∈ X and 1_X(v) = 0 otherwise. Hence, it can be assumed that each v_i acts independently to create a random set X. Then the product over the vertices of S gives p_X, the firing probability of the random set X: p_X = Π_{v_i ∈ X} p_i · Π_{v_i ∈ S \ X} (1 − p_i). (4) Evidently, the probabilities p_X summed over all X ⊆ S equal one. Now this definition of the random set update rule and its probabilities can be used to define the transition graph of the combinatorial net model.

Examples of combinatorial models
The method described above will now be used to develop models of some important graphs of repressive interactions of self-activating nodes and to prove their main properties. Such models are called combinatorial models. Each combinatorial model consists of an interaction graph (combinatorial net) and the corresponding transition graph.

The combinatorial model of auto-repression
Negative auto-regulation, or auto-repression, occurs when the products of a certain gene repress their own gene. This form of simple regulation serves as a basic building block for most important transcription networks 27,29. Auto-repression can produce oscillations. For example, embryonic stem cells fluctuate between high and low Nanog expression, and Nanog activity is auto-repressive 30. The model of auto-repression presented here is shown in Figure 3; in this model, there are only non-steady states.
Therefore, it will oscillate infinitely between 0 and 1. Figure 3 shows the full transition graph of the auto-repression model. then M X can be said to be produced by X from M. In other words, the random set X of non-steady-state vertices produces the directly reachable map M X from a map M. The weights of the edges from map M to map M X are given by the probability defined by Equation (4). Random-walk network dynamics Assume that whenever the process is in state M, there is a probability p X that at the next step, it will be in state M X . This probability is defined for each random set X of non-steady-state vertices of map M. Generalized random set update rules It is well known that asynchronous and random set update rules are equivalent in the sense of global stable states 28 . However, in the sense of the reachability of one state from another, they are not equivalent. Figure 1 show a Mace combinatorial model that illustrates this fact. The vertex e provides a constant level of repression for vertex c, that is, equal to -2. Let vertex d of the Mace model be active at the start. Then it can activate both middle vertices a and b. Due to repression, vertex c of the Mace model can be activated only if both middle vertices a and b are active simultaneously. Asynchronous (one at a time) updating excludes simultaneous activation of these vertices, but the random set update rule does not. A synchronous update rule does not exclude simultaneous activation of a and b, but it makes the system deterministic. The random set update rule is more general than either synchronous or asynchronous update rules because it allows all possible system evolution paths. Therefore, the transition graphs of both synchronous and asynchronous update rules are subgraphs of the random set update graph. Combinatorial model of a bi-stable switch A bi-stable switch is a bi-stable gene regulatory network that is constructed from two mutually repressive genes 31 . These are very common in nature and extensively used in synthetic biology 32,33 . The ordinary differential equations (ODEs) used to construct their mathematical models are a convenient way to analyze some small circuits in detail. In this research, techniques have been developed that can be used to construct models of large networks of bi-stable switches and to prove some of their important properties. For this purpose, a probabilistic coarse-scale modeling approach 34 has been used here instead of fine-scale ODE modeling. The proposed model of a bi-stable switch illustrated in Figure 2b and Figure 4 exhibits two steady maps. Figure 2b presents an interaction graph G = (V, E) of the bi-stable switch model. The set V = {v 1 , v 2 } contains two vertices, and the set E contains two edges with weights of -1. Let the probability of firing a non-steady-state vertex be 1/2. Figure 4 presents the transition graph of the model. Combinatorial model of the Elowitz repressilator The Elowitz repressilator consists of three genes 35 , each of which is constitutively expressed. The first gene inhibits the transcription of the second gene, whose protein product in turn inhibits the expression of a third gene, and finally, the third gene inhibits the first gene's expression, completing the cycle. Such a negative feedback loop leads to oscillations. The combinatorial model of an Elowitz repressilator produces oscillations and consists of three vertices and three edges with weights equal to -1. Table 1. 
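A minimal Python sketch of the random set update rule defined above, applied to the bi-stable switch, is given below. It computes the influence and forced state of every vertex, fires a random subset of the non-steady vertices, and iterates the resulting Markov chain. The threshold convention (forced state 1 when the influence is non-negative) and the firing probability p = 1/2 follow the examples in the text; the function names and the dictionary representation are mine.

```python
import random

# Combinatorial net: directed graph with weighted edges and Boolean vertex states.
# weights[(u, v)] is the weight of the edge u -> v.

def influence(v, state, weights):
    """Sum of weights of edges from active direct ancestors of v."""
    return sum(w for (u, t), w in weights.items() if t == v and state[u] == 1)

def forced_state(v, state, weights):
    return 1 if influence(v, state, weights) >= 0 else 0

def step(state, weights, p=0.5, rng=random):
    """One transition of the Markov chain under the random set update rule."""
    non_steady = [v for v in state if state[v] != forced_state(v, state, weights)]
    new_state = dict(state)
    for v in non_steady:                    # each non-steady vertex fires independently
        if rng.random() < p:
            new_state[v] = forced_state(v, state, weights)
    return new_state

# Bi-stable switch: two mutually repressing vertices with edge weights -1.
weights = {("v1", "v2"): -1.0, ("v2", "v1"): -1.0}
state = {"v1": 0, "v2": 0}                  # start from the all-inactive map

for _ in range(50):
    state = step(state, weights)
print("final map:", state)                  # typically one of the steady maps {1,0} or {0,1}
```

Starting from the all-inactive map, the chain wanders among the non-steady maps until a transition lands on one of the two steady maps, which are absorbing, in line with the bi-stable behavior described above.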
Combinatorial model of self-activation A constitutively expressed gene is an example of self-activation. Such genes do not require any interaction to be active. A combinatorial model of self-activation consists of one vertex and no edges. In any case, the influence on it equals 0 because there are no other vertices. Therefore, a forced state of the vertex equals 1. Therefore, 1 is a steady state and 0 is a non-steady state. A vertex starting in steady state will stay in it infinitely. A vertex starting in a nonsteady state will flip to steady state with probability p and stay in non-steady state with probability 1-p. The amount of time T which the vertex spends in non-steady state is the random variable. The distribution of this random variable is a shifted geometric distribution with parameter p. A network of bi-stable switches An independent set (IS) in a graph is a set of vertices no two of which are adjacent. An independent set is called maximal (MIS) if there is no independent set that it contains properly. A Hopfield network whose stable states are exactly maximal independent sets was developed by Shrivastava 36 . An independent set in a graph is a clique in the complement graph, and vice versa. Therefore, cliques can be used to find or to enumerate MISs 20,21 . Finding independent sets (or cliques) has applications in various fields 37 Figure 7b illustrates the combinatorial network derived from the graph shown in Figure 7a. The first switch is formed by the subgraph induced by the {1,2} set of vertices of the C(H) network. The second switch is formed by the {2,3} set of vertices. Vertex 2 is a common member of these switches, and therefore they can interact by means of this vertex. Each edge of an underlying graph corresponds to a switch in a derived network. If two incident edges share a common vertex, then the corresponding switches interact because this vertex has the same state in both switches. C(H) is referred to as the derived network of a bi-stable switch, and H is referred to as the underlying graph. Conclusions A similar approach to constructing Markov chains for interaction graphs was developed in earlier works by the authors for neural and gene regulatory networks [38][39][40][41][42] . Both approaches can be used to construct Markov chains for gene regulatory networks. Systems of mutually repressive elements are ubiquitous in nature. A network of bi-stable switches can be used to create models of their stable states and of the self-evolution of such systems toward stable states.
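The derived-network construction discussed above can also be sketched directly: every edge of an underlying graph H becomes a pair of mutually repressive edges of weight -1 in C(H), the random set dynamics are run from a random map, and the resulting steady map is checked against the maximal-independent-set property. The example graph, the unit weights, and the firing probability 1/2 are assumptions consistent with the examples in the text.

```python
import random

# Derived network C(H): each edge {a, b} of an underlying graph H becomes the pair
# of repressive edges a -> b and b -> a with weight -1, i.e. a bi-stable switch.
# Steady maps of C(H) then correspond to maximal independent sets (MIS) of H.

H_edges = [(1, 2), (2, 3), (3, 4), (4, 1), (1, 3)]          # example underlying graph
vertices = sorted({v for e in H_edges for v in e})
adj = {v: set() for v in vertices}
for a, b in H_edges:
    adj[a].add(b)
    adj[b].add(a)

def forced(v, state):
    # influence = -(number of active neighbours); forced state is 1 iff influence >= 0
    return 1 if -sum(state[u] for u in adj[v]) >= 0 else 0

def step(state, p=0.5):
    non_steady = [v for v in vertices if state[v] != forced(v, state)]
    new_state = dict(state)
    for v in non_steady:
        if random.random() < p:
            new_state[v] = forced(v, state)
    return new_state

state = {v: random.randint(0, 1) for v in vertices}          # random initial map
for _ in range(1000):
    if all(state[v] == forced(v, state) for v in vertices):
        break                                                # steady map reached
    state = step(state)

mis = {v for v in vertices if state[v] == 1}
independent = all(adj[v].isdisjoint(mis) for v in mis)
maximal = all(v in mis or adj[v] & mis for v in vertices)
print("steady map:", state, "| independent:", independent, "| maximal:", maximal)
```

Whatever steady map the chain settles on, the set of active vertices is independent (no two active vertices are adjacent, since an active neighbour would force a vertex off) and maximal (every inactive vertex has an active neighbour), which illustrates the MIS computation mentioned above.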
3,106.6
2020-01-01T00:00:00.000
[ "Mathematics" ]
Tubulin Polymerization Promoting Proteins (TPPPs) of Aphelidiomycota: Correlation between the Incidence of p25alpha Domain and the Eukaryotic Flagellum

The seven earliest-diverging lineages of the 18 phyla of fungi are the non-terrestrial fungi, which reproduce through motile flagellated zoospores. There are genes/proteins that are present only in organisms with a flagellum or cilium. It was suggested that TPPP-like proteins (proteins containing at least one complete or partial p25alpha domain) are among them, and a correlation between the incidence of the p25alpha domain and the eukaryotic flagellum was hypothesized. Of the seven phyla of flagellated fungi, six have been known to contain TPPP-like proteins. Aphelidiomycota, one of the early-branching phyla, has some species (e.g., Paraphelidium tribonematis) that retain the flagellum, whereas the Amoeboaphelidium genus has lost the flagellum. The first two Aphelidiomycota genomes (Amoeboaphelidium protococcorum and Amoeboaphelidium occidentale) were sequenced and published last year. A BLASTP search revealed that A. occidentale does not have a TPPP, but A. protococcorum, which possesses a pseudocilium, does have a TPPP. This TPPP is the 'long-type', which occurs mostly in animals as well as in other Opisthokonta. P. tribonematis has a 'fungal-type' TPPP, which is found only in some flagellated fungi. These data on Aphelidiomycota TPPP proteins strengthen the correlation between the incidence of p25alpha domain-containing proteins and that of the eukaryotic flagellum/cilium.

Introduction
Avidor-Reiss et al. [1] previously suggested, based on bioinformatics analysis, that there are some genes/proteins that are present only and exclusively in organisms with flagella or cilia. Cilia (flagella) are microtubule-based cellular extensions with a sensory and/or motile function. The collection of these genes composes the ciliome. Genes of the ciliome are generally absent in species without a cilium/flagellum. The flagellum and the cilium are basically the same microtubule-based organelle and are usually distinguished by their number and length [2]. TPPP (Tubulin Polymerization Promoting Protein) is a microtubule-stabilizing protein containing a p25alpha domain (Pfam05517 or IPR008907) [3]. It is not a structural domain but was generated automatically from a sequence alignment from Prodom 2004.1 for the Pfam-B database. I proposed that the TPPP protein also belongs to the ciliome based on the sequence data available at that time [4]. Later, I modified this suggestion so that the assumption would also be valid for 'TPPP-like proteins', which contain at least one complete or partial p25alpha domain [5]. The members of the family of TPPP-like proteins differ from each other in the completeness of the p25alpha domain (long, short, truncated, partial) and in the presence or absence of other domains (e.g., DCX or EF-hand) [5]. A distinct 'fungal-type' TPPP, which is found in some flagellated fungi, contains both a complete and a partial p25alpha domain [6] (Figure 1). An essential role of TPPP in the formation of flagella was demonstrated in Chlamydomonas reinhardtii, biflagellated green algae, through the use of a null mutant of FAP265, its TPPP ortholog [7]. Very recently, it has also been shown that TPPP (Py05543) is required for male gametocyte exflagellation in Plasmodium yoelii [8].
(Figure 1 caption, recovered in part: Black and dashed line squares indicate highly conservative sequence motifs. Dotted lines represent disordered regions of various lengths which are present in some species. aa, amino acids; [6,9] and this paper.) Fungi consist of 18 phyla according to the latest classification by Tedersoo et al. [10]. Among these, the seven early-branching clades are the non-terrestrial fungi, which reproduce by using motile flagellated zoospores. In terrestrial fungi, the flagellum is lost. Thus, fungi provide an ideal opportunity to test and confirm the hypothesis of the correlation between the occurrence of the p25alpha domain and that of the eukaryotic cilium/flagellum, since the flagellum occurs in some phyla and not in others. Earlier, I found that of the seven phyla of flagellated fungi, five had one or more TPPP-like proteins [6]. These phyla are Rozellomycota, Neocallimastigomycota, Monoblepharomycota, Chytridiomycota, and Blastocladiomycota (Figure 1). For the two phyla without TPPP-like proteins, Aphelidiomycota and Olpidiomycota, complete genomes were not available; thus, I predicted that if this situation changed, it could be shown that they also possess p25alpha domain-containing proteins. Recently, Chang et al. [11] published the genome of an Olpidiomycota species, Olpidium bornovanus, which contained a fungal-type TPPP designated as two hypothetical partial proteins, KAG5460860 and KAG5458366. I have shown that they are parts of a single protein [9]. Thus, only the Aphelidiomycota phylum lacked data regarding proteins with a p25alpha domain. Multiple alignments of sequences were conducted by the Clustal Omega program [15].
Bayesian analysis, using MrBayes v3.1.2 [16], was also performed to construct a phylogenetic tree using whole sequences of TPPP proteins. Default priors and the WAG model [17] were used, assuming equal rates across sites. Two independent analyses were run with three heated and one cold chain (temperature parameter 0.2) for the numbers of generations indicated in the Figure legends, with a sampling frequency of 0.01, and the first 25% of generations were discarded as burn-in. The two runs were convergent. Results and Discussion Mikhailov et al. [18] recently published the genomes of A. protococcorum and Amoeboaphelidium occidentale. The ancestor of Aphelidiomycota was flagellated; in some lineages a reduction of the flagellum occurred, as in A. occidentale and A. protococcorum, while in other species it was retained (e.g., P. tribonematis) [18]. Mikhailov et al. [18], based on the data from [19], showed that the P. tribonematis transcriptome demonstrated the conservation of ciliogenesis and axonemal motor proteins, in accordance with the presence of flagellated zoospores, while the Amoeboaphelidium species lost most of these genes/proteins. Interestingly, despite the loss of the flagellum, A. occidentale possessed most of the components of the intraflagellar transport (IFT), which are perhaps the most characteristic flagellar proteins; however, A. protococcorum has only a few of them. What is the situation with TPPP-like proteins? My BLAST search shows that A. protococcorum does have a TPPP, whereas A. occidentale does not. Genomic and proteomic data for two different strains of A. protococcorum, X5 and FD95, were available. Both of them possess two long-type TPPPs (KAI3631655 and KAI3639621 in strain X5, KAI3650757 and KAI3652328 in strain FD95), which are almost identical to each other both within and between the strains (Table S1). A similar phenomenon occurred in the case of the green algae genus Ostreococcus, which, unlike other green algae such as Chlamydomonas, lost its flagellum but contains a highly divergent TPPP ortholog [4]. Long-type (i.e., animal-type) TPPPs are characterized by the presence of a complete p25alpha domain (Figure 1). They occur in Opisthokonta (e.g., Choanoflagellata, animals, and some species of flagellated fungi) (Figure 2). The sequences of TPPPs in A. protococcorum are most similar to the RKP02545 protein of the Chytridiomycota fungus Caulochytrium protostelioides (Table S1). Interestingly, RKP02545 is a fungal-type TPPP, similar to the other three fungal proteins among the 27 best hits listed in Table S1. All the other proteins are of animal origin. An NCBI BLAST search was not possible for P. tribonematis as there were no P. tribonematis data on this site. However, its metatranscriptomic nucleotide contig assembly and the predicted proteome were deposited in figshare [13]. My analysis of the proteome revealed that P. tribonematis contains a fungal-type TPPP (TRINITY_DN24782_c0_g1_i2|m.37417), which shows a high homology with proteins of this type (Table S2, Figure 3). This type of TPPP is specific to flagellated fungi and was previously found in the phyla Chytridiomycota, Blastocladiomycota, and Olpidiomycota [6,9]. Within Chytridiomycota, most species of the class Chytridiomycetes contain three kinds of TPPPs: an animal-type and two fungal-types [6].
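The BLASTP comparisons referred to above can be reproduced in outline with a remote NCBI search; the following Python sketch (using Biopython's qblast, with the A. protococcorum accession named in the text as the query) is only an illustration of the workflow, not the author's actual pipeline, and the hit lists returned by NCBI change over time.

```python
# Remote BLASTP of the NCBI nr database; the query accession KAI3631655 is taken
# from the text, the hit list size of 27 mirrors Table S1. Network access required.
from Bio.Blast import NCBIWWW, NCBIXML

result_handle = NCBIWWW.qblast("blastp", "nr", "KAI3631655", hitlist_size=27)
record = NCBIXML.read(result_handle)

for alignment in record.alignments:
    best_hsp = alignment.hsps[0]
    identity_pct = 100.0 * best_hsp.identities / best_hsp.align_length
    print(f"{alignment.accession}\t{identity_pct:.1f}% identity\tE={best_hsp.expect:.2g}")
```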
Figure 3. Comparison of the sequence of Amoeboaphelidium protococcorum TPPP with those of some fungal-type TPPPs. The multiple alignment (manually refined) of the sequences of p25alpha domains was conducted by Clustal Omega [15]. The N-termini (amino acids before the p25alpha domain) and the interdomain parts are not included in the alignment. Amoeboaphelidium, A. protococcorum strain X5 KAI3631655; Paraphelidium, Paraphelidium tribonematis TRINITY_DN24782; Paraphysoderma, Paraphysoderma sedebokerense KAI9140125; Caulochytrium, Caulochytrium protostelioides RKP02545; Spizellomyces, Spizellomyces punctatus XP_016604112; Gorgonomyces, Gorgonomyces haynaldii KAI8912588; Chytriomyces1, Chytriomyces confervae TPX65886; Chytriomyces2, Chytriomyces confervae TPX72533; Olpidium, Olpidium bornovanus KAG5460860 + KAG5458366. Amino acids that are identical and biochemically similar in at least three quarters of the fungal-type proteins are labeled by a black and grey background, respectively. The "Rossmann-like" sequences, GXGXGXXGR, and the LXXF(Y)XXF(Y)XXF sequence at the beginning of the p25alpha domain are indicated by bold letters. The phylogenetic tree of TPPPs of fungi was constructed using the Bayesian method (Figure 4). In addition to the TPPPs of Fungi (fungal- and animal-types), some animal-type TPPPs of Metazoa and Choanoflagellata were also included. Proteins of the fungi/Metazoa group were separated from the Monosiga brevicollis (Choanoflagellata) protein (a long-type TPPP). The long (animal)-type TPPPs of the fungi/Metazoa group formed a distinct clade within which proteins of animal (Amphimedon, Caenorhabditis, Drosophila, Homo) and of fungal origin were separated from each other. This suggests that the fungal-type TPPP was not present in the common ancestor of Opisthokonta, as it is more parsimonious to imagine that it evolved in the common ancestor of fungi than to assume that this type of TPPP was independently lost in Choanoflagellata and Metazoa. Figure 4. The phylogenetic tree of some TPPPs constructed by Bayesian analysis [16]. The number of generations was 1.4 × 10^6. Full and open circles at a node indicate that the branch was supported by the maximal Bayesian posterior probability (BPP) and ≥0.95 BPP, respectively.
All the other branches were supported by BPP ≥ 0.5. The accession numbers of proteins are listed in Table S3. Uppercase letters indicate animal-type TPPPs, lowercase letters indicate fungal-type ones, except for the outgroup Tetrahymena thermophila TPPP (XP_001023601) (phylum Ciliophora), which is a short-type TPPP. Fungal phyla are indicated by bold letters. The only exception is the Amoeboaphelidium long-type TPPP, which formed a clade with the fungal-type TPPP of Paraphelidium. Phyla (Aphelidiomycota, Chytridiomycota, Blastocladiomycota, and Olpidiomycota) and classes (Spizellomycetes, Chytridiomycetes, Rhizophydiomycetes) formed distinct clades according to their phylogeny. Caulochytrium has a separate position on our Bayesian tree. The exact position of this species has long been disputed; it was even claimed, without any molecular evidence, that 'Caulochytriomycota' formed a separate phylum [20]. New evidence strongly suggests that it belongs to Chytridiomycota as a sister to the class Chytridiomycetes [21] or Synchytriomycetes [22]. Interestingly, Aphelidiomycota TPPPs are sisters to a clade of Chytridiomycete fungal-type TPPPs. First, the two Aphelidiomycota TPPPs formed a common clade with each other, although one (P. tribonematis) contains two p25alpha domains and the other (A. protococcorum) only one. Based on the sequence alignment (Figure 5), they contain identical and biochemically similar amino acids at 47% and 66% of aligned positions, respectively (cf. Table S2); a minimal script for recomputing such values from an exported alignment is sketched below. Comparing these values with those of Table S1 shows that P. tribonematis TPPP has a higher homology with A. protococcorum TPPP than any other protein. (P. tribonematis TPPP is not included in Table S1 since it is absent from the NCBI database.) Figure 5. Sequence alignment of Paraphelidium tribonematis TRINITY_DN24782 and Amoeboaphelidium protococcorum KAI3631655 proteins by Clustal Omega [15]. Amino acids that are identical and biochemically similar are labeled by a black and grey background, respectively. In addition, species in Chytridiomycetes contain two fungal-type paralogs, labeled '1' and '2' in Figure 4. These paralogs are found in different clades, within which they are more similar to each other than to other TPPPs in the same species. The existence of the two groups has already been recognized [6]. They can be considered "outparalogs" [23] since the duplication events happened earlier than species speciation, perhaps in the common ancestor of Chytridiomycetes. These recent data on Aphelidiomycota strengthen the correlation between the incidence of p25alpha domain-containing proteins and that of the eukaryotic flagellum.
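The percent identity and percent similarity values quoted for the Figure 5 alignment can be recomputed from any exported pairwise alignment; the Python sketch below is a hypothetical helper (not the author's script) that takes the two gapped rows of a Clustal Omega alignment and, as an assumption on my part, treats a positive BLOSUM62 score as "biochemically similar".

```python
from Bio.Align import substitution_matrices

BLOSUM62 = substitution_matrices.load("BLOSUM62")

def identity_and_similarity(gapped_a: str, gapped_b: str):
    """Percent identity / similarity over columns where both sequences have a residue."""
    pairs = [(x, y) for x, y in zip(gapped_a, gapped_b) if x != "-" and y != "-"]
    identical = sum(x == y for x, y in pairs)
    similar = sum(BLOSUM62[x, y] > 0 for x, y in pairs)  # positive substitution score taken as "similar"
    return 100.0 * identical / len(pairs), 100.0 * similar / len(pairs)

# Usage: paste the two gapped rows exported from the alignment viewer.
# identity, similarity = identity_and_similarity(aligned_paraphelidium, aligned_amoeboaphelidium)
```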
(I do not discuss here whether Aphelidiomycota or Aphelida is the correct name, which depends on whether there is a sister relationship between Aphelida and 'true' Fungi [18] or Aphelidiomycota is considered a part of the fungal kingdom [10,22]. In any case, the place of Aphelidiomycota/Aphelida is the same on the phylogenetic trees obtained by both groups, and the monophyly of Aphelida and Fungi has been shown [24]. The presence of a fungal-type TPPP in P. tribonematis is an interesting addition to the phylogenetic classification of Aphelidiomycota.) The other six phyla of flagellated fungi contain one or more TPPP-like proteins [6,19]. One exception occurs in the phylum Neocallimastigomycota: the flagellated anaerobic gut fungus Orpinomyces sp. strain C1A has no TPPP-like proteins. However, its closest relatives in this phylum have apicortin, a TPPP-like protein containing a partial p25alpha domain and a DCX domain (Pfam03607 or IPR003533). It should be noted that the genome of Orpinomyces sp. is only 94% complete [25], so a TPPP-like gene may yet be identified. On the other hand, TPPP-like proteins practically do not occur in terrestrial, non-flagellated fungi. A total of 1571 terrestrial (non-flagellated) fungi are available on the MycoCosm webpage (https://mycocosm.jgi.doe.gov/mycocosm/home) [26]. Only four possess a TPPP-like protein, namely apicortin, which occurs in a family (Endogonaceae) of a relatively early-branching terrestrial (i.e., non-flagellated) fungal phylum, Mucoromycota [6]. This may be a relic, since no other p25alpha domain-containing protein was found in other terrestrial fungi. P. tribonematis of Aphelidiomycota retains the flagellum and possesses a fungal-type TPPP containing both a complete and a partial p25alpha domain. If the flagellum was lost a long time ago (e.g., in terrestrial fungi), these proteins cannot be found even in traces, in contrast to the flagellated species. Sometimes they are preserved as 'relics' in species at smaller phylogenetic distances (e.g., A. protococcorum), in which case they may acquire a new function. This suggestion is in accordance with the fact that the zoospore of A. protococcorum possesses a pseudocilium: a permanent immotile posterior projection containing microtubules, which may be considered a reduced posterior flagellum [27]. Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/jof9030376/s1, Table S1. Best protein hits when using Amoeboaphelidium protococcorum KAI3631655.1 as a query. BLASTP search on NCBI protein database. Table S2. Best protein hits when using Paraphelidium tribonematis TPPP (TRINITY_DN24782_c0_g1_i2|m.37417) as a query. BLASTP search on NCBI protein database. Table S3. Accession numbers of proteins shown in Figure 4. Funding: This research received no external funding. Data Availability Statement: The data presented in this study are available in this paper and in the Supplementary Material.
4,130
2023-03-01T00:00:00.000
[ "Biology" ]
Enhanced astaxanthin production by oxidative stress using methyl viologen as a reactive oxygen species (ROS) reagent in green microalgae Coelastrum sp. Microalgae are known to be a potential resource of high-value metabolites that can be used in the growing field of biotechnology. These metabolites constitute valuable compounds with a wide range of applications that strongly enhance a bio-based economy. Among these metabolites, astaxanthin is considered the most important secondary metabolite, having superior antioxidant properties. For commercial feasibility, microalgae with enhanced astaxanthin production need to be developed. In this study, the tropical green microalgae strain Coelastrum sp., isolated from the environment in Malaysia, was incubated with methyl viologen, a reactive oxygen species (ROS) reagent that generates superoxide anion radicals (O2−), as an enhancer to improve the accumulation of astaxanthin. The effect of different concentrations of methyl viologen on astaxanthin accumulation was investigated. The results suggested that supplementation with methyl viologen at a low concentration (0.001 mM) successfully acted as a ROS reagent, facilitating and thereby increasing the production of astaxanthin in Coelastrum sp. to a level 1.3 times higher than in the control. Introduction The ketocarotenoid astaxanthin (3,3'-dihydroxy-4,4'-diketo-β-carotene) is a secondary carotenoid from the same family as β-carotene, canthaxanthin, zeaxanthin, lycopene and lutein (Lorenz and Cysewski 2000). Among the carotenoids, astaxanthin is the highest-value carotenoid owing to its potent antioxidative activity and its greater effectiveness in scavenging free radicals (Dragoş et al. 2010). Its superior antioxidant properties support a broad range of applications in the food supplement, nutraceutical and pharmaceutical industries, as well as its use as a pigmentation source for fish aquaculture (Guerin et al. 2003; Ambati et al. 2014; Fraser and Bramley 2004). Astaxanthin biosynthesis has been observed in a limited number of organisms, including bacteria, the yeast Phaffia rhodozyma, fungi and some green microalgae (Orosa et al. 2001). Among the carotenoid-producing organisms, green microalgae are a potential resource of high-value metabolites with the potential of producing astaxanthin (Liu et al. 2014). However, the low productivity of these products in the native microalgae needs to be overcome (Clarens et al. 2010). Microalgae with an improved growth rate and enhanced carotenoid accumulation would make the commercial production of astaxanthin more feasible. Presently, green microalgae with the potential of accumulating astaxanthin have received tremendous attention because of the high cost of astaxanthin and its possible health benefits (Nakano et al. 1995). Numerous attempts have been made to develop strains with a high yield of astaxanthin. To make the production process more feasible, optimization of cultivation conditions and genetically modified strains have been applied over the past few decades, but the results have not yet been fully satisfactory (Kilian et al. 2011). As an alternative method, chemicals acting as enhancers have been proposed to initiate the production and accumulation of astaxanthin. The application of chemical enhancers could be a valuable approach to addressing the low productivity of astaxanthin (Asada 1994). Environmental oxidative stresses can enhance the massive accumulation of astaxanthin by green microalgae under conditions of high illumination, nitrogen starvation, salt stress or temperature stress (Lee and Soh 1991).
The effects of these unfavorable conditions have been attributed to the formation of reactive oxygen species (ROS). Excessive ROS may exceed the ability of the cells to detoxify the reactive intermediates, leading to oxidative stress conditions. These highly reactive ROS can react with lipid membranes, proteins and nucleic acids, and ultimately cause oxidative damage resulting in cell death (Lafarga et al. 2020). Therefore, ROS serve as signal molecules in microalgae to trigger the accumulation of astaxanthin and protect against oxidative stress damage (Apel and Hirt 2004). Astaxanthin may function as an effective antioxidant and a primary line of defense against oxidative damage by scavenging free radicals (Hu et al. 2018). Biosynthesis of astaxanthin in microalgal cells can be enhanced by the addition of a ROS reagent (Kobayashi 2003). A study by Li et al. (2010) showed that tolerance to excessive ROS is higher in astaxanthin-rich cells, which have the capacity to detoxify the superoxide anion radical. With the addition of iron into the culture medium, ROS levels in Haematococcus pluvialis increased; in response, the synthesis of fatty acids and astaxanthin was observed in the cells, protecting their lipid vesicles (Hong et al. 2015). Induction of oxidative stress using a ROS reagent in Chlorella zofingiensis was also effective in increasing carotenoid accumulation (Hu et al. 2018). It is known that the addition of various ROS reagents into the culture medium is able to improve carotenoid synthesis in microalgae (Chokshi et al. 2017). It has previously been shown that the ROS-generating compounds methyl viologen (MV) and iron ions (Fe2+) can lead to the formation of the superoxide anion radical and the hydroxyl radical, respectively (Kobayashi et al. 1997). The most common ROS reagents for astaxanthin synthesis have been hydrogen peroxide (H2O2), methylene blue (MB) and methyl viologen (MV), which generate hydroxyl radicals, singlet oxygen and superoxide anion radicals, respectively (Ma and Chen 2001). An appropriate concentration of ROS reagent is important to enhance astaxanthin formation in microalgae. Hydrogen peroxide (H2O2) was used by Ip and Chen (2005) to generate hydroxyl radicals for astaxanthin production in a heterotrophic culture of C. zofingiensis. The production of astaxanthin was increased by the addition of 0.1 mM H2O2 due to the formation of hydroxyl radicals (Ip and Chen 2005). H2O2 (0.1 mM) and MV (0.01 mM) were found to be the best ROS reagents for inducing carotenogenesis in Chlorococcum sp., where the astaxanthin content increased by almost 80% (Ma and Chen 2001). Similar results have been reported in H. pluvialis, where the superoxide anion radical generated from MV was the most effective ROS reagent involved in astaxanthin accumulation (Kobayashi et al. 1993). Currently, the green microalga Coelastrum sp. has proved to be a potential producer of astaxanthin. Tharek et al. (2020a) identified Coelastrum sp. as a viable strain capable of producing astaxanthin from a natural source under high light intensity and nitrogen starvation in mixotrophic culture. Besides that, studies on Coelastrum sp. HA1 showed that nitrogen limitation in the culture medium of this species enhances the production of astaxanthin (Liu et al. 2013). Also, culture of Coelastrum cf. pseudomicroporum in municipal wastewater under salinity stress can increase carotenoid production (Úbeda et al. 2017).
However, further research is required to improve the astaxanthin content in this species for commercial astaxanthin production. Therefore, the present study aimed to enhance astaxanthin yield by investigating the effect of oxidative stress generated by the ROS reagent methyl viologen on the growth and astaxanthin synthesis of a tropical green microalgae strain isolated from the environment in Malaysia, Coelastrum sp. The selection strategy was focused on driving Coelastrum sp. toward a high-yield and cost-effective production of astaxanthin. (NIES collection), Japan. The cultures were grown under normal conditions at 25±1°C with continuous aeration and enrichment with 1% CO2. They were illuminated continuously with fluorescent light at a normal photon flux density (PFD) of 70 μmol photons m−2 s−1 until the Coelastrum sp. cultures reached the exponential growth phase, over a period of 5 d. Cell growth was monitored by measuring absorbance at 750 nm using a spectrophotometer. Supplementation culture for stress induction In order to induce astaxanthin biosynthesis, the biomass of Coelastrum sp. was harvested and various supplements were added according to the optimized conditions for accumulating astaxanthin in Coelastrum sp., with details described in our previous work (Tharek et al. 2020b). Sodium acetate, sodium chloride and sodium nitrate were used at final concentrations of 0.5 g/L, 3 g/L and 0.1 g/L, respectively. Coelastrum sp. was then exposed to continuous illumination at a high photon flux density (PFD) of 250 μmol photons m−2 s−1. The cells were then subjected to extraction of astaxanthin, and all the experiments were carried out in triplicate. Exposure to a reactive oxygen species generating reagent Methyl viologen (MV) is a reactive oxygen species reagent that can produce the superoxide anion radical (O2−) (Rabinowitch et al. 1987). Determination of chlorophyll Microalgae culture at the end of the exponential phase (5-day culture) was subjected to analysis. A 200 µL aliquot of microalgae culture was treated with 80% acetone. The mixture was vortexed for 30 s and centrifuged at maximum speed for 5 min. The absorbance of the extracted chlorophyll was read at 663.6 nm and 646.6 nm. The concentrations of chlorophyll a, chlorophyll b and total chlorophyll were calculated by the Lichtenthaler equations and expressed in mg/L (Lichtenthaler 1987). Determination of astaxanthin To measure the astaxanthin content, a known volume of microalgae culture was taken and centrifuged at 2000 × g for 10 min. The pellet was then lyophilized using a freeze dryer (Lyphlock 6; Labconco, USA). The carotenoids were then extracted with solvent by homogenizing the cells with acetone, keeping the mixture in a water bath at 70°C for 10 min and then vortexing for a few minutes. The mixture was centrifuged at 2000 × g for 10 min and the supernatant was collected. Supernatant collection was repeated until the cells were faded. The astaxanthin concentration was then measured by the spectrophotometric method and calculated with the equation c (mg/L) = 4.5 × A480 × (Va / Vb) × f, where c is the astaxanthin concentration, Va (mL) is the volume of solvent, Vb (mL) is the volume of the algal sample, and f is the dilution ratio. The absorption peak of astaxanthin is at 480 nm and thus A480 was determined by measuring the absorbance at 480 nm. Acetone was used as the blank for the measurement.
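For reference, the spectrophotometric estimate just described is easy to script; the helper below is a direct transcription of the stated equation (not an additional method from the paper), and the example numbers are made up purely for illustration.

```python
# Astaxanthin concentration from absorbance: c (mg/L) = 4.5 * A480 * (Va / Vb) * f,
# with Va = solvent volume (mL), Vb = algal sample volume (mL), f = dilution ratio.
def astaxanthin_mg_per_l(a480: float, v_solvent_ml: float, v_sample_ml: float, dilution: float = 1.0) -> float:
    return 4.5 * a480 * (v_solvent_ml / v_sample_ml) * dilution

# Illustrative (made-up) numbers: absorbance 0.35 at 480 nm, 5 mL acetone extract
# obtained from a 10 mL culture sample, no further dilution.
print(astaxanthin_mg_per_l(0.35, 5.0, 10.0))  # ~0.79 mg/L
```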
Statistical analysis The experiment was carried out with replication from three separate cultures. All values shown in the figures are expressed as mean ± SD. Student's t-test was used to determine significant differences. Results and Discussion In general, carotenoids play two crucial roles in photosynthetic organisms. First, they act as light-harvesting pigments by trapping light energy and passing it to chlorophylls. Second, and more importantly, carotenoids can quench singlet oxygen (1O2), protecting the photosynthetic apparatus from unfavorable conditions (Young 1991). Shaish et al. (1993) reported that the massive amounts of carotenoid accumulated in green algal cells are involved in triggering β-carotene biosynthesis to protect the photosynthetic cell against oxidative stress. Under unfavorable conditions, such as high light, salt stress or nutrient deprivation, reactive oxygen species (ROS) are generated in the chloroplast when the photosynthetic process and CO2 fixation are perturbed (Mittler 2002). ROS are produced whenever there is excessive reducing power in photosynthesis and are then used as signal molecules to initiate the production and accumulation of many bioproducts (Asada 1994). Only a few studies have focused on the involvement of oxidative stress induced by ROS reagents in carotenoid synthesis (Lafarga et al. 2020). In the present study, the added reactive oxygen species (ROS) reagent, methyl viologen (MV), rapidly auto-oxidizes to produce the superoxide anion radical (O2−). (Figure caption: All data represent an average of 3 replications and error bars indicate mean ± SD. Statistical analyses were conducted using Student's t-test. Small letters a and b above the bars indicate significant increases and decreases, respectively, between the control and treatment groups (P < 0.05); a marks groups higher than the control and b marks groups lower than the control.) To investigate the tolerance of Coelastrum sp. cells towards the ROS reagent, different concentrations of MV were tested by growing 10% inoculums of Coelastrum sp. in the presence of MV with daily manual shaking. Based on the results obtained, the growth of the cultures markedly decreased after the 4th day in 0.01 mM, 0.1 mM and 1.0 mM MV, as shown in Figure 1. Moreover, the microalgae cultures under these conditions turned white, indicating the death of the cells. The growth of cultures in 0.0001 and 0.001 mM MV was shown to increase even after the 4th day of incubation, indicating the ability of the cells to survive in this range of MV concentrations. To further investigate the effect of the superoxide anion radical (O2−) on astaxanthin synthesis, the ROS reagent (MV) was added to Coelastrum sp. cultures during the exponential growth phase (5-day culture), the stage at which rapid utilization of the substrate and cell division occur. The same cell density was applied for all cultures supplemented with MV. The parameters examined to assess the effect of MV were the growth of Coelastrum sp., the chlorophyll content and the astaxanthin content. Figure 2 shows the growth of Coelastrum sp. incubated under different concentrations of MV with 1% CO2 enrichment. The growth of Coelastrum sp. without the addition of MV (control) was observed to be higher compared to cultures supplemented with MV.
In contrast, at higher concentrations (0.1 mM and 1.0 mM), the growth of Coelastrum sp. was significantly decreased; therefore, the accumulation of astaxanthin was inhibited (Figure 3). At the beginning of the culture (day 0), the algal cells were relatively green because of their high chlorophyll content and low carotenoid content. With the addition of MV, astaxanthin production proceeded markedly with a reduction of chlorophyll content, as shown in Figures 3 and 4. The superoxide anion radical generated by MV was found to be most effective for astaxanthin production at the extremely low concentrations of 0.001 mM and 0.0001 mM. The results in Figure 3 showed that MV at 0.001 mM increased the astaxanthin content to 1.3 times that of the control after the seventh day of incubation, the highest astaxanthin content observed. The color of Coelastrum sp. changed to orange after 7 d of incubation under the lower concentrations of MV, as depicted in Figure 5, indicating a faster accumulation of carotenoids. However, the production of astaxanthin did not proceed at 0.01 mM MV, and the astaxanthin content was found to decrease by about 50% after 7 d of incubation, as shown in Figure 3, suggesting that the low astaxanthin accumulation may be due to the free radicals being scavenged (Raman and Ravi 2011). At the high concentrations of 0.1 mM and 1.0 mM MV, the growth of the microalgae was reduced and astaxanthin accumulation was inhibited. These findings corroborate that astaxanthin accumulation is most effective at ROS reagent concentrations that the microalgal cells can tolerate. Methyl viologen, which generates the superoxide anion radical (O2−), was capable of triggering astaxanthin synthesis in Coelastrum sp. and was effective at low MV concentrations. This radical might enhance carotenoid formation in microalgal cyst cells by participating directly in the carotenogenic enzyme reactions as an oxidizer (Kobayashi et al. 1993). Therefore, the accumulation of carotenoid acts as a protective agent against oxidative stress damage (Shaish et al. 1993). However, excessive addition of MV could ultimately cause massive cell death and drastically reduce astaxanthin formation. Astaxanthin plays a vital role in protecting the algal cells against oxidative damage by reactive oxygen species. Consequently, microalgal cells have developed an efficient defense system that helps them survive under unfavorable conditions. Conclusions To produce bioproducts in an economically feasible way, the low productivity of microalgae needs to be addressed. Therefore, methyl viologen was applied as a reactive oxygen species (ROS) reagent and enhancer to improve the accumulation of astaxanthin in Coelastrum sp. In this study, we concluded that methyl viologen acts as a ROS reagent by generating the superoxide anion radical at a low MV concentration (0.001 mM), consequently leading to the highest astaxanthin production, 1.3 times higher than the control.
3,767.4
2020-11-10T00:00:00.000
[ "Engineering", "Biology" ]
Critical Review of the Methods to Measure the Condensed Systems Transient Regression Rate Accurate knowledge of the steady-state and transient burning rate of solid fuels and energetic materials is very important for evaluating the performance of different propulsion and/or gas generator systems. The practical demands imply an accuracy of the available burning rate data on the level of 1% or better and a proper temporal resolution. Unfortunately, existing theoretical models do not allow predicting the magnitude of the burning (regression) rate with the needed accuracy. Therefore, numerous burning rate measurement methods have been developed by various research groups over the world in the past decades. This paper presents a critical review of existing techniques, including the basic physical principles utilized for burning rate determination, an estimate of the temporal and spatial resolutions of the methods, as well as their specific merits and limitations. Known methods for measuring the linear regression rate include high-speed cinematography, X-ray radiography and the ultrasonic wave reflection technique. In practice, none of those methods can satisfy the practical demands. An alternative is the microwave reflection method, which potentially possesses high spatial and temporal resolution and may solve the measurement problem. In addition, there exist methods for measuring the transient mass or weight of the burning material. They are based on recording the frequency of oscillations of an elastic element with an attached specimen, or on a cantilevered rod with a strain gauge pasted to its base. In practice, these methods cannot provide the needed accuracy. Much better parameters can be obtained when using the recoil force or microwave resonator techniques. Recommendations for special applications of certain methods are formulated. Introduction When designing different propulsion devices, it is necessary to know the burning rate (linear regression rate) of the energetic material and its dependency on the pressure and initial temperature. In particular, when designing a solid-propellant rocket motor, the error in the propellant stationary burning rate should not exceed 1%. Consequently, accurate determination of the burning rate and its functional dependency is highly critical. For transient burning rate measurements, owing to great technical difficulties, the needed accuracy can be slightly relaxed, but it is necessary to provide a proper temporal resolution, for example, 1 kHz and higher. These requirements were formulated many years ago in review [1], and its main conclusions were later confirmed in review [2]. Note that at the present time none of the theoretical models is able to predict the burning rate with the accuracy required, because the detailed physical and chemical mechanisms of the transformations occurring in the reaction zones above and below the burning surface are not fully understood and the values of the characteristic parameters of energetic materials at high temperatures are unknown. Thus, it is quite evident that the development of reliable experimental methods for accurate measurement of energetic material regression rates is extremely desirable to satisfy the stringent demands in the design of propulsion systems. This paper summarizes the characteristics of various burning rate measurement techniques available in the literature. A comparative analysis of the different techniques is presented, and recommendations for the application of these methods are provided.
Experimental methods for measuring transient burning rates For transient regression rate measurements several experimental approaches are often employed. Measurement of the instantaneous web thickness One of the broadly used methods for measuring the instantaneous web thickness is based on highspeed cinematography of the transient combustion event. Despite the apparent simplicity of the method, its use for oscillatory combustion situations can produce significant error, which substantially sets the limit to their application. A spatial resolution associated with this technique is approximately 10-20 μm. If one wishes to measure the instantaneous burning rate under oscillating pressure and/ or heat flux conditions, a much higher spatial resolution is required. This point can be illustrated by considering a simple example with the following conditions: nominal burning rate of 1 cm/s, frequency of oscillation of 50 Hz, and the amplitude of burning rate fluctuation of 20%. During a half period of oscillation (0.01 s), the burning surface regresses at average distance of 100±20 μm. A spatial resolution of 10-20 μm translates into an error of 50-100% in the measurement of the burning rate fluctuation. This error is clearly unacceptable for the conditions stated. In many combustion instability studies, the frequencies of oscillation are often higher than 50 Hz; therefore, the cinematography methods are inadequate for oscillatory burning rate measurements. For studying the combustion behavior under high-pressure environment, real-time X-ray radiography coupled with high-speed movie/video cameras has been used [3,4]. This technique is especially useful for studying combustion processes in closed vessels. In this method, a continuous X-ray beam is used to penetrate the propellant sample and its enclosure. The attenuated beam strikes on an image intensifier which transforms the X-ray images into visible light images. These images are recorded on a high-speed, high-resolution video camera. The recorded images are then analyzed using an advanced image processor to determine instantaneous burning surface locations, grain motion as well as any anomalous behaviors, such as grain fracture. The time resolution of this system depends upon the framing rate of the camera. The spatial resolution depends on many factors, such as 1) the relative attenuation of the combustion chamber walls and the propellant grain, 2) the ratio of the distance between the X-ray source to the object and the distance between the source and the screen of the image intensifier, 3) the focal spot size of the X-ray source, and 4) the magnification scale of the image intensifier. The spatial resolution of real-time X-ray radiography is on the order of 100 μm. In addition to direct photographic techniques, optical projection methods have been used for the measurement of instantaneous length of a burning propellant sample in a windowed bomb [5]. The sample image is focused by optical lenses on an array of photodiodes which is masked by series of pin holes (0.5 mm in diameter) to increase the spatial resolution. As the propellant regresses, light from the flame illuminates a larger number of photodiodes. The time response of each photodiode is recorded and analyzed for the determination of the time variation of the burning surface location. The burning rate is then deduced by the use of a linear regression analysis. The spatial resolution of this method is estimated to be 100 μm. 
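The resolution argument worked through above for cinematography can be restated numerically; the short Python calculation below simply re-derives the 50-100% error figure from the quoted numbers (1 cm/s nominal rate, 50 Hz oscillation, 20% amplitude, 10-20 μm spatial resolution) and introduces no data beyond them.

```python
# Back-of-envelope check of the cinematography resolution example quoted in the text.
r0 = 1.0e4          # nominal burning rate, um/s (1 cm/s)
freq = 50.0         # oscillation frequency, Hz
amp = 0.20          # relative amplitude of the burning-rate fluctuation

half_period = 0.5 / freq                  # 0.01 s
mean_regression = r0 * half_period        # 100 um regressed per half period
fluctuation = amp * mean_regression       # +/- 20 um attributable to the oscillation

for resolution in (10.0, 20.0):           # spatial resolution of the method, um
    rel_error = resolution / fluctuation  # error relative to the fluctuation being measured
    print(f"{resolution:.0f} um resolution -> {100 * rel_error:.0f}% error in the fluctuation")
# prints 50% and 100%, matching the estimate given in the text
```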
The instantaneous burning surface location can also be determined by ultrasonic wave reflection methods [6][7][8][9][10].The first experiments using ultrasonic waves for measuring transient burning rates of solid propellants were performed in the USA [6] and later the ultrasonic wave reflection technique was further developed by researchers in Europe [7].The principle of the method is to measure the time elapsed between an emitted sound pulse and its echo generated from the reflection at the burning surface. The pulse repetition frequency can be as high as 5 kHz. Further, the known value of the sound velocity is used to deduce the distance traveled by the ultrasonic wave. Numerical differentiation of the distance vs. time data generates the information of burning rate vs. time. Both the ultrasonic emitter and receiver are located in the unburned base of the sample. In this setup the ultrasonic wave first propagates through the cold unheated substances (consists of the propellant and coupling material located between the ultrasonic transducer and propellant) and then travels through the higher temperature region composed of the preheat and reaction layer. One of the uncertainties of this technique is associated with the unknown solid-phase thermal wave profile under transient conditions. Since the speed of sound is a function of temperature, the ultrasonic wave accelerates as it approaches the burning surface. Difficulties can also appear due to distortion of reflected wave signal upon passage through the boundaries of phase transitions and gas-filled reacting layers (foam layer). Anisotropy of mechanical characteristics of the propellant material and the dependency of sound velocity on the stress level in the propellant material also introduce some uncertainties. Errors in determining the precise time at which the echo wave returns to the sensor are caused by the non-ideal shape of reflected signal and the occasional weakening of signal intensity. For a nominal sound speed of 2.5 km/s in a solid propellant sample with a frequency of ultrasonic source of 2.5-5 MHz, an error of 1/4 of the oscillation period corresponds to an error of 100-500 μm for determining the burning surface location. For relatively slow transients in combustion conditions, the accuracy of this method is estimated to be 5-10% [9]. It is difficult to estimate the accuracy of highly transient regression rate measurements using ultrasonic methods. The microwave reflection method of measuring the instantaneous propellant web thickness is at present the most accurate one; however, its setup and operation are technically complex. The initial development of this method, based on the measurement of the Doppler frequency phase shift between the initial 30 mm band microwave signal and that reflected from the burning surface of the solid propellant, was published in 1967 [11]. Afterwards, interest in this technique stimulated subsequent work with further developments [12][13][14]. Phase resolution can be enhanced via using shorter microwave wavelength or employing interferometry [12] when the electronic setup includes two klystrons operating at frequencies (10 GHz) and (10 GHz + 500 kHz) and a pair of double balanced mixers. In this setup, any drifting of klystron difference is self-compensated in the two mixers and the phase resolution of the recording system is within 0.08-0.16 milliradian. 
This corresponds to a spatial resolution of about 0.2 μm allowing a very detailed measurement of the instantaneous burning rate, which is a much more accurate than other methods. The potential limitations of the system are caused by the influence of propellant compress-ibility, signal distortion resulting from the reflection of electromagnetic waves from the ionization zone in the flame, noises generated from vibrations of the test stand. It was experimentally verified that under small and rapid perturbations of pressure, the error caused by the effect of propellant compressibility is negligibly small, and the vibrations can be minimized to a suitable level by a special design of the experimental test rig. In addition, it was experimentally demonstrated [12,14] that the distortions of signal due to the reflection from flame zone are insignificant. Methodical problems of burning-rate measurements with use of microwaves are discussed in [15,16]. It has been shown that the microwave meters can be successfully used in measurement of burning rate of solid propellants. The 2-mm band microwave meter can be applied for testing non-metallized and weakly metallized propellants. The 8-mm band microwave meter can be used for testing the propellants with the metal content up to 20% by mass. Measurement of instantaneous mass and weight of the burning material The instantaneous mass of the burning specimen can be measured using a well-known physical principle according to which the period of resonance oscillations (or natural frequency) of an elastic element depends on its mass, physical dimensions and mechanical properties. This method can be implemented using a vibrating mechanical element with an attached mass (propellant specimen), an electronic data acquisition system to record the harmonic motions of the assembly, and an electromagnetic actuating device to sustain nondecaying oscillations. Several different versions of this device were utilized by various researchers. In Ref. [17], a mechanical element has a form of a thin metallic membrane with a diameter of 50 mm. The propellant specimen was located on a rod attached to the center of the membrane. In Ref. [18], the mechanical element represented a cantilevered quartz rod with a diameter of 8 mm; the propellant specimen was fastened at the free end of the rod. The nominal natural frequency of oscillation of these two systems was 1 kHz. The propellant samples studied had a nominal mass of about 100 mg. Very small masses of gasifying solid propellants were measured in Ref. [19] using a mechanical element in the form of a cantilevered quartz tube; a stainless steel tip-end was attached to its free end. A specimen of energetic material with 1 mg mass was affixed (thin layer of 20-60 μm) to the tip-end. The eigenvalue frequency of cross oscillations of the quartz tube was 130 Hz. The data were collected and processed using a personal computer. In general, the choice of the resonance frequency depends on the accurate measurement of oscillation period and the desired temporal resolution, which should be at least two times higher than the characteristic frequency of the process under study. In addition, there is a vague physical restriction: an oscillating propellant specimen must retain its mechanical characteristics (Young's modulus) during transient combustion. 
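As a rough illustration of the vibrating-element principle behind these mass measurements, the sketch below models the elastic element plus specimen as a single-degree-of-freedom oscillator; the stiffness, fixture mass and frequencies are invented numbers chosen only to mimic the ~1 kHz devices described above, and a real instrument would rely on careful dynamic calibration rather than this idealization.

```python
# Single-degree-of-freedom model: f = (1 / 2*pi) * sqrt(k_eff / m_total), inverted to
# recover the attached specimen mass from the measured resonance frequency.
import math

def specimen_mass(f_hz: float, k_eff: float, m_fixture: float) -> float:
    """Mass (kg) of the attached specimen, given calibrated stiffness and bare-fixture mass."""
    m_total = k_eff / (2.0 * math.pi * f_hz) ** 2
    return m_total - m_fixture

# Illustrative calibration: a 2.0 g fixture with k_eff chosen so the bare assembly
# resonates near 1 kHz; adding a 100 mg specimen lowers the frequency, and the shift
# recovers the specimen mass.
k_eff = (2.0 * math.pi * 1000.0) ** 2 * 2.0e-3              # ~7.9e4 N/m
f_loaded = (1.0 / (2.0 * math.pi)) * math.sqrt(k_eff / 2.1e-3)  # ~976 Hz with specimen attached
print(specimen_mass(f_loaded, k_eff, 2.0e-3) * 1e3, "g")    # ~0.1 g
```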
Otherwise, these measurements become questionable as was demonstrated by our experiments on the transient combustion of sodium nitrate based pyrotechnic mixtures which form a thick liquid layer on the combustion surface. This layer prevents correct determination of the mass of the oscillating specimen. The weighing methods technical implementation has its own peculiarities. The existing methods display very low temporal resolution, although, very small specimens can be utilized with this method. In Ref. [20], a method was described for the continuous weighing of a 1-2 mg specimen. The sensing element used was a cantilevered rod with strain gauges pasted to the base. The propellant specimen was located at the free tip-end. The natural frequency of oscillations of the system was 120 Hz, and the weight sensitivity was 10-20 mg. The apparatus can be used to study processes whose frequency is no more than 30-50 Hz. The sensitivity and frequency range of the method can be substantially increased by using more sensitive strain gauges and data acquisition system. An improved design of the force transducer is given in [21]. The solid propellant specimen was fastened to the upper tip-end of a movable electrode which was attached to the inner surface of the gauge case by a pair of metallic membranes. The change in specimen weight produced a change in the distance between the movable electrode and the stationary base which in turn causes a change in the capacitance of a condenser. The condenser was a part of an oscillating inductance-capacitance (LC) circuit. A data acquisition system was used to record signals that were proportional to the specimen weight and its temporal derivative. A liquid damper was used to damp the natural oscillations of the mechanical system. The nominal weight of the specimen was 1 g, and the sensitivity was 2 mg. The eigenvalue frequency of the system was 500-600 Hz, and the working frequency band was 0-400 Hz. In order to decrease the temperature errors, all metallic elements of the gauge were produced from Invar alloy with a very low thermal expansion coefficient. The reaction forces produced by the combustion products can amount to tens of percent of the measured weight and can be a substantial source of error in the measurement of the instantaneous burning specimen weight. The influence of the reaction forces can be adequately subtracted from the data reduction procedure by placing the specimen in such a way that the reactive force vector is directed in the gravitational direction. In this arrangement, only an occasional variation in the direction of gasifying products from the burning surface can slightly affect the value of the instantaneous specimen weight. Indirect methods for burning rate determination The instantaneous mass of the burning propellant can be determined by an indirect method when the amount of unburned substance is deduced from the electrical capacity of the specimen in Hermance's capacitance method [22]. In his setup, two opposing lateral surfaces of a rectangular propellant strand were covered with a combustible metallic foil, serving as the facing plates of a planar capacitor. The regressing propellant specimen, acting as a variable capacitor, was mounted in parallel with a capacitor and an inductor of known characteristics in an L-C circuit. When driven by an alternating current, the change in the resonance frequency of the L-C circuit gave the change in the value of variable capacitance. 
Therefore, the instantaneous length and the burning rate of the propellant sample were deduced. One of the main errors of this method was caused by the unknown dependency of the flame/plasma conductance on pressure [23]. According to a detailed analysis [24], special studies are necessary for estimating the real contribution of the flame conductance dependency on both the propellant formulation and the pressure level. It is suggested that the relative flame contribution can be decreased by increasing the resonance frequency of the L-C circuit. According to Hermance [22], the error analysis showed at least 10% error in the measurement of the instantaneous burning rates under oscillating pressure conditions (0.1-1.0 kHz). The pressure-diagram technique is another indirect method: transient burning rates under oscillating pressure conditions can be determined by solving a set of equations describing the unsteady behavior of the interior ballistics of rocket motors [25,26]. In this method, the instantaneous chamber pressures at different locations should be accurately measured. The measured p-t traces are used as input information to the theoretical model for deducing the instantaneous burning rates. Essentially, the burning rates are obtained by solving an inverse problem from the model formulation and the measured pressure-time data [26]. However, the accuracy and reliability of this method depend strongly upon the appropriateness of a large number of assumptions, e.g., a spatially uniform distribution of pressure and temperature in the chamber, the absence (or formal accounting) of heat losses, and the constancy of the gas composition. Reactive (recoil) forces generated by the combustion products gasifying from the burning surface were first measured by Mihlfeith et al. [27]. They measured the response of the burning rate of a solid propellant to perturbations of the thermal radiation flux. Subsequently, their method was used in similar experiments by other researchers [21,28-31]. The method is based on the relationship derived from the steady-state momentum equation: F = mp²/ρg. In this equation, the propellant recoil force, F, is directly proportional to the square of the mass burning rate mp and inversely proportional to the gas density at the flame temperature, ρg = P·MW/(Ru·Tf). Assuming that the gas density is approximately constant during the experiment, the reactive force permits a reasonable estimation of the variations in mass burning rate. The application of the recoil-force transducer (Fig. 1) considerably enlarges the volume of experimental data on transient burning rate behavior. The capacitor-type transducer measures forces up to a limit of 5 g over a working frequency range of 0-500 Hz. The reactive force signal is recorded by turning the transducer axis into a horizontal position. In this manner the reactive force acts along the axis of a movable electrode. The weight of the specimen is compensated by the reaction of the supports (membranes), which allows the measurement of sufficiently long specimens (up to 10-20 mm for a 10 mm diameter) without losing the apparatus sensitivity. The gage sensitivity is 1-3 mg. Note that the information obtained corresponds to the signal averaged over the burning surface, and in some cases additional tests are needed to decipher the information.
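To make the data reduction concrete, the snippet below simply transcribes and inverts the quoted relation (F = mp²/ρg with ρg = P·MW/(Ru·Tf)); the pressure, molar mass, flame temperature and force values are placeholders rather than measurements, and the units attached to mp follow whatever convention accompanies the relation in the original reference.

```python
# Estimate of the mass burning rate from a measured recoil force, using the relation
# quoted in the text and assuming a roughly constant gas density during the run.
R_U = 8.314  # universal gas constant, J/(mol K)

def gas_density(p_pa: float, molar_mass_kg_per_mol: float, t_flame_k: float) -> float:
    """rho_g = P * M_W / (R_u * T_f), kg/m^3."""
    return p_pa * molar_mass_kg_per_mol / (R_U * t_flame_k)

def mass_burning_rate(recoil_force: float, rho_g: float) -> float:
    """Invert F = mp**2 / rho_g for the mass burning rate mp."""
    return (recoil_force * rho_g) ** 0.5

# Placeholder numbers: 1 bar chamber pressure, 25 g/mol products, 2500 K flame temperature.
rho_g = gas_density(1.0e5, 0.025, 2500.0)   # ~0.12 kg/m^3
print(mass_burning_rate(0.05, rho_g))        # mp corresponding to a 0.05 N recoil force
```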
For example, the reactive force signal in the combustion of noncatalyzed double-base propellant in air exhibits frequencies of 100-200 Hz, which are much higher than those of the natural frequency of oscillations of burning rate (15-20 Hz) associated with periodic perturbations of radiant energy flux. The use of high-speed cinematography testifies to the chaotic appearance and disappearance of hot spots with high intensity chemical reactions on the burning surface. The simultaneous existence of several hot spots on the surface can generate complex spectra of the reactive force signals. The characteristics of the reactive force spectrum for a given propellant depend on the active reaction sites on the burning surface. Specific features of registration of the recoil force using various types of transducers were discussed in [31]. The influence of various factors such as instrumental distortions, variable parameters of the ambient medium, and inhomogeneity of combustion was analyzed. It has been concluded that the method of registration of the recoil force for burning-rate measurement with the help of prior experimental calibration and visual control is preferable. Special measures should be taken to protect the transducer from the vibrations and the thermal action produced by equipment and combustion products, since these factors significantly affect the quality of the registered signal. These measures favor unique interpretation of the recoil-force signal. Finally, it can be noted that calibration is not necessary to obtain time characteristics such as the ignition delay, the combustion duration, the frequency of oscillations, or the phase shift. Thus, if the above measures for organization of an experiment are fulfilled, the method for registration of the recoil force can be effectively used to measure unsteady characteristics of the burning rate. Comparison of measurement methods The applicability of a prospective measurement method to a specific problem can be assessed by analyzing the characteristic parameters of both the instruments and the physical operating conditions. For all transient combustion events, the time resolution is the universal parameter to be considered in burning rate measurements. According to physical principles, any phenomenon can be detected by a recording system whose working frequency is at least two times higher than that of the physical phenomenon. Thus, the processes in the condensed phase with frequencies up to 500-600 Hz can be studied with a minimum time resolution of the recording system of about 1 ms. That requirement can be satisfied by the methods of cinematography, microwave probing, and the method of measuring instantaneous propellant mass. The method of measuring reactive forces can also be utilized in this range of operation. In fact, by decreasing the sensitivity of the force transducer, one can easily achieve a suitable time resolution. The sensitivity of a given method can be characterized through the determination of its spatial resolution. As already mentioned, the microwave method has the highest spatial resolution (better than 1 μm) among all techniques. The spatial resolution of other methods, including the ultrasound, optical and weighing methods, is at least an order of magnitude lower. For a fixed weighing sensitivity, the resolution of the burnout layer thickness can be increased by enlarging the burning surface area of the specimen. 
However, this increases the total weight of the specimen and complicates isochronic ignition over the surface. Therefore, the method of recording the reactive force is sometimes preferable, since its signal is directly related to the square of the burning rate and its treatment does not involve differentiation of the signal. However, problems remain in applying the recoil method to measure the burning rate of real propellants. Several researchers have shown that the theoretically predicted square dependency is not realized in static calibration runs and that the exponent relating force to burning rate may decrease to a value of 1.2-1.5 [30,32]. Various measurement methods provide different degrees of averaging over the burning surface. It is evident that the methods of weighing and of measuring the reactive force give information which is averaged over the entire burning surface. Deviation from the nominal burning surface area can certainly introduce errors into the deduced values of the linear burning rate. The methods of ultrasonic and microwave probing allow one to extract information from a burning area of definite size, whereas the optical methods are mainly meant for measurements at a certain point or along a particular line. Thus, a question of inconsistency arises when comparing data obtained with different methods. The solution of these problems often requires detailed knowledge of the combustion mechanism. The reliability and accuracy of regression rate measurements using the various techniques depend on knowledge of the mechanical and physical characteristics of the substance in question. For example, the thermal expansion of the substance and its deformation under the effect of alternating pressure introduce errors in the measurement of the regression rate recorded from the change of specimen length. In this family of techniques, the error in the measurement of the instantaneous burning rate is of the order of tens of percent. The interpretation of the data of the ultrasonic method substantially relies on knowledge of the dependency of the sound velocity in the condensed substance on temperature, phase state, and chemical composition. Even empirically this information is difficult to obtain, because experiments involving reacting exothermic substances usually require a special technique with a fast response and a high sensitivity. A similar situation is observed for the microwave method. In this case, one must also know the dielectric constants of the solid fuel being studied and of the coupling material, as well as the coefficients of microwave signal reflection from the boundaries of the different media. Thus, some claims of measurement accuracy are still debatable [33]. Only through the combined analysis of the substance properties and the physical principles of the measurement can one reliably estimate and substantiate the accuracy of a given method.

Conclusions

The interest in the study of transient combustion of solid propellants in recent years has promoted the development of several new techniques for measuring instantaneous burning rates. No universal method exists for measuring burning rates over a broad range of operating conditions. The selection of a measurement technique depends on the nature of the problem and the availability of the particular instrument to the researcher. 
The problems of regression rate measurement in transient combustion conditions are generally categorized into small- and large-amplitude regression rate variations. For small-amplitude pressure oscillations, one anticipates small variations in regression rates. The problem then usually becomes the measurement of the response function of the burning rate to a periodic oscillation of chamber pressure or heat flux at a known frequency. For this type of problem, either the microwave technique or the reactive force measurement technique is preferable. When measuring large variations in burning rate associated with drastic changes in pressure or heat flux, it is desirable to use the method for measuring the instantaneous weight of the propellant sample in combination with visualization of the combustion surface and flame behavior, in order to interpret any non-uniform surface burning. Flow visualization becomes especially important in transition regimes, across which a slight change in test conditions can cause a drastic change in both the burning rate and the stability of the propellant flame. For high-pressure combustion environments with large-amplitude pressure excursions, the real-time X-ray radiography method is very attractive, since the history of surface regression as well as grain motion and fracture can be clearly observed. Recently, a novel microwave method for measuring the transient mass gasification rate of condensed systems was reported [34]. This microwave resonator method for the dynamic measurement of the mass of gasifying solid fuel samples is based on measuring the attenuation of a microwave signal passing through a resonator sensor (Fig. 2) loaded with the investigated sample. Before firing experiments, the sensor is calibrated using samples of the studied material having different channel radii. The sensor is intended for measuring the instantaneous gasification rate of samples of dielectric gasifying materials under intensive gas blowing, with a spatial resolution of about a few microns and a frequency resolution better than 1 kHz. Finally, it should be noted that even though new breakthrough techniques based on basic physical principles could still be developed in the future, it is believed that further advancements in burning rate measurements will also depend upon improvement of the existing techniques.
6,339.2
2018-03-31T00:00:00.000
[ "Engineering", "Physics" ]
Sentiment Analysis of Students’ Feedback with NLP and Deep Learning: A Systematic Mapping Study : In the last decade, sentiment analysis has been widely applied in many domains, including business, social networks and education. Particularly in the education domain, where dealing with and processing students’ opinions is a complicated task due to the nature of the language used by students and the large volume of information, the application of sentiment analysis is growing yet remains challenging. Several literature reviews reveal the state of the application of sentiment analysis in this domain from different perspectives and contexts. However, the body of literature is lacking a review that systematically classifies the research and results of the application of natural language processing (NLP), deep learning (DL), and machine learning (ML) solutions for sentiment analysis in the education domain. In this article, we present the results of a systematic mapping study to structure the published information available. We used a stepwise PRISMA framework to guide the search process and searched for studies conducted between 2015 and 2020 in the electronic research databases of the scientific literature. We identified 92 relevant studies out of 612 that were initially found on the sentiment analysis of students’ feedback in learning platform environments. The mapping results showed that, despite the identified challenges, the field is rapidly growing, especially regarding the application of DL, which is the most recent trend. We identified various aspects that need to be considered in order to contribute to the maturity of research and development in the field. Among these aspects, we highlighted the need of having structured datasets, standardized solutions and increased focus on emotional expression and detection. Introduction The present education system represents a landscape that is continuously enriched by a massive amount of data that is generated daily in various formats and most often hides useful and valuable information. Finding and extracting the hidden "pearls" from the ocean of educational data constitutes one of the great advantages that sentiment analysis and opinion mining techniques can provide. Sentiments and opinions expressed by students are a valuable source of information not only for analyzing students' behavior towards a course, topic, or teachers but also for reforming policies and institutions for their improvement. Although both sentiment analysis and opinion mining seem similar, there is a slight difference between the two: the former refers to finding sentiment words and phrases exhibiting emotions, whereas the latter refers to extracting and analyzing people's opinions for a given entity. For this study, we consider that both techniques are used interchangeably. The sentiment/opinion polarity, which could either be positive, negative, or neutral, represents one's attitude towards a target entity. Emotions, on the other hand, are one's feelings expressed regarding a given topic. Since the 1960s, several theories about emotion detection and classification have been developed. The study conducted by Plutchik [1] categorizes emotions into eight categories: anger, anticipation, disgust, fear, joy, sadness, surprise, and trust. Sentiment analysis can be conducted at a word, sentence, or a document level. However, due to the large number of documents, manual handling of sentiments is impractical. Therefore, automatic data processing is needed. 
Sentiment analysis of text-based corpora, at the sentence or document level, is performed using natural language processing (NLP). Most research papers found in the literature published until 2016-2017 employed pure NLP techniques, including lexicon- and dictionary-based approaches for sentiment analysis. Only a few of those papers used conventional machine learning classifiers. Recent years have seen a shift from pure NLP-based approaches to deep learning-based modeling for recognizing and classifying sentiment, and the number of papers published recently on this topic has increased significantly. The popularity and importance of students' feedback have also increased recently, especially during the COVID-19 pandemic, when most educational institutions moved from traditional face-to-face learning to the online mode. Figure 1 shows the country-wise breakdown of interest over the past six years in the use of sentiment analysis for analyzing students' attitudes towards teacher assessment. The number of papers published recently indicates a growing interest towards the application of NLP/DL/ML solutions for sentiment analysis in the education domain. However, to the best of our knowledge, in order to establish the state of evidence, the body of literature is lacking a review that systematically classifies and categorizes research and results by showing the frequencies and visual summaries of publications, trends, etc. This gap in the body of literature necessitated a systematic mapping of the use of sentiment analysis to study students' feedback. Thus, this article aims to map how this research field is structured by answering research questions through a step-wise framework for conducting systematic reviews. In particular, we formulated multiple research questions that cover general issues regarding the investigated aspects in sentiment analysis, models and approaches, trends regarding evaluation metrics, bibliographic sources of publications in the field, and the solutions used, among others. The main contributions of this study are as follows:
• A systematic map of 92 primary studies based on the PRISMA framework;
• An analysis of the investigated educational entities/aspects and bibliographical and research trends in the field;
• A classification of reviewed papers based on approaches, solutions, and data representation techniques with respect to sentiment analysis in the education domain;
• An overview of the challenges, opportunities, and recommendations of the field for future research exploration.
The rest of the paper is organized as follows. Section 2 provides some background information on sentiment analysis and related work, while Section 3 describes the search strategy and methodology adopted in conducting the study. Section 4 presents the systematic mapping study results. Challenges identified from the investigated papers are described in Section 5. Section 6 outlines recommendations and future research directions for the development of effective sentiment analysis systems. Furthermore, in Section 7, we highlight the potential threats to the validity of the results. Lastly, the conclusion is drawn in Section 8.

Overview of Sentiment Analysis

Sentiment analysis is a task that focuses on polarity detection and the recognition of emotion toward an entity, which could be an individual, topic, and/or event. 
In general, the aim of sentiment analysis is to find users' opinions, identify the sentiments they express, and then classify their polarity into positive, negative, and neutral categories. Sentiment analysis systems use NLP and ML techniques to discover, retrieve, and distill information and opinions from vast amounts of textual information [2]. In general, there are three different levels at which sentiment analysis can be performed: the document level, sentence level, and aspect level. Sentiment analysis at the document level aims to identify the sentiments of users by analyzing the whole document. Sentence-level analysis is more fine-grained, as the goal is to identify the polarity of sentences rather than the entire document. Aspect-level sentiment analysis focuses on identifying aspects or attributes expressed in reviews and on classifying the opinions of users towards these aspects. As can be seen from Figure 2, the general architecture of a generic sentiment analysis system includes three steps [3]. Step 1 represents the input of a corpus of documents into the system in various formats. This is followed by the second step, which is document processing. At this step, the entered documents are converted to text and pre-processed by utilizing different linguistic tools, such as tokenization, stemming, PoS (Part of Speech) tagging, and entity and relation extraction. Here, the system may also use a set of lexicons and linguistic resources. The central component of the system architecture is the document analysis module (step 3), which also makes use of linguistic resources to annotate the preprocessed documents with sentiment annotations. Annotations represent the output of the system (i.e., positive, negative, or neutral), presented using a variety of visualization tools. Depending on the sentiment analysis form, annotations may be attached differently. For document-based sentiment analysis, the annotations may be attached to the entire documents; for sentence-based sentiment analysis, the annotations may be attached to individual sentences; whereas for aspect-based sentiment analysis, they are attached to specific topics or entities. Sentiment analysis has been widely applied in different application domains, especially in business and social networks, for various purposes. Some well-known sentiment analysis business applications include product and service reviews [4], financial markets [5], customer relationship management [6], and marketing strategies and research [5], among others. Regarding social network applications, the most common application of sentiment analysis is to monitor the reputation of a specific brand on Twitter or Facebook [7] and explore the reaction of people to a crisis, e.g., COVID-19 [8]. Another important application domain is politics [9], where sentiment analysis can be useful for the election campaigns of candidates running for political positions. Recently, sentiment analysis and opinion mining have also attracted a great deal of research attention in the education domain [2]. In contrast to the above-mentioned fields of business or social networks, which focus on a single stakeholder, the research on sentiment analysis in the education domain considers multiple stakeholders of education including teachers/instructors, students/learners, decision makers, and institutions. Specifically, sentiment analysis is mainly applied to improve teaching, management, and evaluation by analyzing learners' attitudes and behavior towards courses, platforms, institutions, and teachers. 
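As a minimal illustration of the generic three-step architecture described above (document input, pre-processing, and sentiment annotation), the sketch below applies NLTK's lexicon-based VADER analyzer at the sentence level; the two student comments and the polarity thresholds are illustrative assumptions, not material from the reviewed studies.

```python
"""Minimal sketch of a generic three-step sentiment pipeline
(input -> pre-processing -> polarity annotation) using NLTK's
lexicon-based VADER analyzer. The comments are invented examples."""

import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)   # lexicon used by VADER
nltk.download("punkt", quiet=True)           # sentence tokenizer models

comments = [
    "The lectures were engaging and the examples really helped me understand.",
    "Too much material, and the feedback on assignments arrived far too late.",
]

sia = SentimentIntensityAnalyzer()

for doc in comments:
    # Step 2: document processing (here, simple sentence segmentation).
    for sentence in nltk.sent_tokenize(doc):
        # Step 3: document analysis -> polarity annotation.
        scores = sia.polarity_scores(sentence)
        label = ("positive" if scores["compound"] > 0.05
                 else "negative" if scores["compound"] < -0.05
                 else "neutral")
        print(f"{label:8s} {scores['compound']:+.2f}  {sentence}")
```

The lexicon-based step could be swapped for a supervised or deep learning classifier without changing the overall pipeline.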
From the learners' perspective, there are a number of papers [10][11][12] that have applied sentiment analysis to investigate the correlation of attitude and performance with learners' sentiments, as well as the relationship between learners' sentiments and drop-out rates in Massive Open Online Courses (MOOCs). Regarding teachers' perspectives, sentiment analysis has been widely adopted by researchers [13][14][15] to examine various teacher-associated aspects expressed in students' reviews or comments in discussion forums. These aspects include teaching pedagogy, behavior, knowledge, assessment, and experience, to name a few. Sentiment analysis was also used in a number of studies [16,17] to analyze students' attitudes towards various aspects related to an institution, i.e., tuition fees, financial aid, housing, food, diversity, etc. Regarding courses, aspect-based sentiment analysis systems have been implemented to identify the key aspects that play a critical role in determining the effectiveness of a course as discussed in students' reviews, and then to examine the attitudes and opinions of students towards these aspects. These aspects primarily include course content, course design, the technology used to deliver course content, and assessment, among others.

Related Work

Referring to past literature, we found that one study [18] on sentiment analysis (SA) in the education domain focused on detecting the approaches and resources used in SA and identifying the main benefits of using SA on education data. Our study is an extended form of this article; thus a great deal of information is presented from different dimensions including bibliographical sources, research trends and patterns, and the latest tools used to perform SA. Instead of listing the data sources, we present the four categories of education-based data sources that are mostly used for SA. Furthermore, to increase convenience for researchers in this domain, we present groups of studies based on the learning approaches, the most frequently used techniques, and the most widely used education-related lexicons for sentiment analysis. Another review study [19] provided an overview of sentiment analysis techniques for education. The authors of this study provided a sentiment discovery and analysis (SDA) framework for multimodal fusion. Rather than the text, audio, and visual signals considered in [19], our review article aims to present all aspects related to the sentiment analysis of educational information, with a focus on textual information only, in a systematic way. Furthermore, we also provide a long list of current approaches employed for sentiment discovery and the results obtained by them. Similarly, [20] aimed to review the scientific literature of SA on education data and revealed future research prospects in this direction. The authors of [20] focused on the area in more depth, including the design of sentiment analysis systems, the investigation of topics of concern for learners, the evaluation of teachers' teaching performance, etc., drawing on about 41 relevant research articles. In contrast, to conduct our scientific literature review study, we initially filtered 612 research articles from different journals and conferences. At the final stage of filtering, we finalized and included in this work 92 of the most relevant, high-quality scientific articles published between 2015 and 2020. 
The main aim of this paper is to provide most of the available information regarding the sentiment analysis of educational information in a systematic way in a single place. Review studies of this kind are greatly helpful for readers in this domain. This review study will assist researchers, academicians, practitioners, and educators who are interested in sentiment analysis with a classification of the approaches to the sentiment analysis of education data, different data sources, experimental results from different studies, etc.

Research Design

To conduct this study, we applied systematic mapping as the research methodology for reviewing the literature. Since this method requires an established search protocol and rigorous criteria for the screening and selection of the relevant publications, we utilized the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, as indicated in [21]. The primary goal of a systematic mapping review (SMR) is to provide an overview of the body of knowledge and the research area and to identify the number of publications and the type of research and results available. Furthermore, an SMR aims to map the frequencies of publications over time to determine trends, forums or venues, and the relevant authors by which the research has been conducted and published. In contrast to the classical systematic literature review (SLR), which focuses on the identification of best practices based on empirical evidence, the focus of an SMR is on establishing the state of evidence. It is also worth mentioning that, from the methodology standpoint, an SLR is characterized by narrow and specific research questions, and the studies are evaluated in detail regarding their quality. On the other hand, an SMR deals with multiple, broader research questions, and studies are not assessed in detail with regard to quality. To ensure that all relevant studies were located and reviewed, our search strategy involved a stepwise PRISMA approach, consisting of four stages. The overall process of the search strategy is shown in Figure 3. The first stage in the PRISMA entailed the development of a research protocol by determining research questions, defining the search keywords, and identifying the bibliographic databases for performing the search. The second stage involved applying inclusion criteria, which was followed by stage three, in which the exclusion criteria were applied. The last stage was data extraction and analysis. The research questions (RQs) devised for this study were as follows:
• RQ1. What are the most investigated aspects in the education domain with respect to sentiment analysis?
• RQ2. Which approaches and models are widely studied for conducting sentiment analysis in the education domain?
• RQ3. What are the most widely used evaluation metrics to assess the performance of sentiment analysis systems?
• RQ4. In which bibliographical sources are these studies published, and what are the research trends and patterns?
• RQ5. What are the most common sources used to collect students' feedback?
• RQ6. What are the solutions with respect to the packages, tools, frameworks, and libraries utilized for sentiment analysis?
• RQ7. What are the most common data representation techniques used for sentiment analysis?

Search Strategy

To develop a comprehensive set of search terms, we used the PICO(C) framework. 
PICO (Population, Intervention, Comparison and Outcomes) aims to help researchers design a comprehensive set of search keywords for quantitative research in terms of population, intervention, comparison, outcome, and context [22]. As suggested by [23], to avoid missing possibly relevant articles, we also added a "context" section to the PICO schema. First, for all the sections of PICO(C) in Table 1, we identified the adequate keywords, and then we constructed the search string by applying Boolean operators, as shown in Table 2. To ensure that no possibly relevant article would be omitted in the study, we also used the context criterion. The resulting search string combines:
Context: ("MOOC" OR "SPOC" OR "distance learning" OR "online learning" OR "e-learning" OR "digital learning") AND
Intervention: ("Sentiment analysis" OR "opinion mining") AND
Outcome: ("Students' feedback" OR "teacher assessment" OR "user feedback" OR "feedback assessment" OR "students' reviews" OR "learners' reviews" OR "learners' feedback")

Time Period and Digital Databases

The time period selected for this study was from 2015 to 2020, inclusive. The research was conducted in 2020; therefore, it covered papers published until 30 September 2020. For our search purposes, we used the following online research databases and engines:

Identification of Primary Studies

As of September 2020, the search in Stage 1 yielded 612 papers without duplicates. In Figure 4, we present the total number of selected studies distributed per bibliographic database, identified during the first stage.

Study Selection/Screening

Screening was stage 2 of the search strategy process and involved the application of inclusion criteria. At this stage, the relevant studies were selected based on the following criteria: (a) the type of publication needed to be a peer-reviewed journal or a conference paper, (b) papers needed to have been published between 2015 and 2020, and (c) papers needed to be in English. In addition, as can be seen in Figure 3, at this stage we also checked the suitability of papers by examining the keywords, title, and abstract of each paper. After we applied the mentioned criteria, out of 612 papers, 443 records were accepted as relevant studies for further exploration. Table 3 presents the screened and selected studies distributed according to year and database source. The distribution of conference and journal papers reviewed in this study is illustrated in Figure 5. As can be seen from the chart, there has been an increasing trend of research works published in journals in the last two years, in contrast to the previous years, where most of the studies were published in conferences.

Eligibility Criteria

In Stage 3, we applied the exclusion criteria, in which we eliminated studies that (a) were not within the context of education, (b) were not about sentiment analysis, or (c) did not employ the techniques of natural language processing, machine learning, or deep learning. At this stage, all the titles, abstracts, and keywords were also examined once more to determine the relevant records for the next stage. This stage resulted in 137 identified papers, which were divided among the four authors in equal number to proceed to the final stage. The authors agreed to encode the data using three different colors: (i) green, papers that passed the eligibility threshold; (ii) red, papers that did not pass the eligibility threshold; and (iii) yellow, papers that the authors were unsure how to classify (green or red). 
The authors were located in three different countries, and the whole discussion was organized online. Initially, an online meeting was held to discuss the green and red list of papers, and then the main discussion was focused on papers listed in the yellow category. For those papers, a thorough discussion among the involved authors took place, and once a consensus was reached, those papers were classified into either the green or red category. In the final stages, a fifth author was invited to increase the level of criticism of the discussion among the authors, to double-check all of the followed stages, and to be able to distinguish the current contribution from the previous ones. After we applied these criteria, only 92 papers were considered for future investigation in the last stage of analysis. Systematic Mapping Study Results This section is divided into two parts: the first part presents the findings of the RQs, whereas the second highlights the relevant articles based upon the quality metrics. Findings Concerning RQs For the purposes of the analysis, the 92 papers remaining after the exclusion criteria were reviewed in detail by the five authors; in this section, the results are presented in the context of the research questions listed in Section 3. RQ1. What are the most investigated aspects in the education domain with respect to sentiment analysis? Students' feedback is an effective tool that provides valuable insights concerning various educational entities including teachers, courses, institutions, etc. and teaching aspects related to these entities. The identification of these aspects as expressed in the textual comments of students is of great importance as it aids decision makers to take the right action to specifically improve them. In this context, we examined and classified the reviewed papers based on the aspects that concerned students and that the authors aimed to investigate. In particular, we found three categories and their related teaching aspects which were objects of investigation in these papers: the first category comprised studies dealing with the comments of students concerning various aspects of the teacher entity, including the teacher's knowledge, pedagogy, behavior, etc; the second category contained papers concerning various aspects of the three different entities, such as courses, teachers, and institutions. Course-related aspects included dimensions such as course content, course structure, assessment, etc., whereas aspects associated to the institution entity were tuition fees, the campus, student life, etc.; the third category included papers dealing with capturing the opinions and attitudes of students toward institution entities. The findings illustrated in Figure 6 show that 81% of reviewed papers focused on extracting opinions, thoughts, and attitudes toward teachers, with 6% corresponding to institutions, whereas 13% presented a more general approach by investigating students' opinions toward teachers, courses, and institutions. RQ2. Which approaches and models are widely studied for conducting sentiment analysis in the education domain? Numerous approaches and models have been employed to conduct sentiment analysis in the education domain, which generally can be categorized into three groups. Table 4 shows the papers grouped based on learning approaches that the authors have applied within their papers. 
In total, 36 (out of 92) papers used a supervised learning approach, 8 used an unsupervised learning approach, and 20 used a lexicon-based approach. In addition, seven papers used both supervised and unsupervised approaches. Twenty papers used lexicon-based and supervised learning, whereas seven papers used lexicon-based and unsupervised learning. In total, three (out of 92) articles used all three learning approaches as a hybrid approach, in contrast with five other articles, which did not specify any learning approach. Table 5 emphasizes that the Naive Bayes (NB) and Support Vector Machines (SVM) algorithms, as part of the supervised learning approach, were used most often in the reviewed studies, followed by Decision Tree (DT), k-Nearest Neighbor (k-NN) and Neural Network (NN) algorithms. Furthermore, the use of a lexicon-based learning approach, also known as rule-based sentiment analysis, was common in a number of studies as shown in Table 4 and very often associated either with supervised or unsupervised learning approaches. Table 6 lists the most frequently used lexicons among the reviewed articles, where the Valence Aware Dictionary and Sentiment Reasoner (VADER) and SentiWordNet were used very often compared to TextBlob, MPQA, SentiStrength, and Semantria.

RQ3. What are the most widely used evaluation metrics to assess the performance of sentiment analysis systems? Information retrieval-based evaluation metrics were widely used to assess the performance of systems developed for sentiment analysis. These metrics include precision, recall, and the F1-score. In addition to this, some studies employed statistical metrics to assess the accuracy of systems. It is very interesting to depict the number of articles that used a specific evaluation metric to assess the performance of systems versus the number of articles that either did not perform any evaluation or decided not to emphasize the used metrics. Figure 7 illustrates the evaluation metrics used and the percentage of articles reporting each particular metric. As can be seen from Figure 7, 68% of the articles included either only the F1-score or other evaluation metrics including the F1-score, precision, recall, and accuracy. Only 3% of the studies used Kappa, 2% used the Pearson r-value, and the remaining 27% did not specify any evaluation metrics.

RQ4. In which bibliographical sources are these studies published, and what are the research trends and patterns? The publication trend during the review period included in this paper indicated that there was a variation regarding the distribution of publications across years and bibliographic resources. According to our findings, as illustrated in Figure 8, it is obvious that the majority of the papers were published during 2019, with Springer and IEEE being the most represented bibliographical sources. It is also interesting to note that during 2017, there were only three resources in which papers on sentiment analysis were published. For a better overview, we present the absolute number of publications across years with the publishers' details in Table 7. This will assist readers in swiftly identifying the time period and place of publication of the reviewed articles. Regarding the applied techniques, there were only two major categories of techniques used to conduct sentiment analysis in the education domain between 2015 and 2017: NLP and ML. The first efforts [12,32] towards applying DL were presented during 2018, as shown in Figure 9. 
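To make the dominant supervised setup and the evaluation metrics summarized above more concrete, the following minimal sketch trains Naive Bayes and linear SVM classifiers on TF-IDF features and reports precision, recall, and F1-score with scikit-learn. The six labeled comments are invented toy data, not taken from any of the reviewed studies, so the printed scores only illustrate the workflow.

```python
"""Minimal sketch of the most common supervised setup reported above
(Naive Bayes / SVM over TF-IDF features) with standard IR metrics.
The tiny labeled dataset is invented for illustration."""

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

texts = [
    "great course, the teacher explains clearly",
    "boring lectures and unclear grading",
    "helpful feedback and well organized material",
    "the platform kept crashing, very frustrating",
    "excellent examples, I learned a lot",
    "too fast, impossible to follow the content",
]
labels = ["pos", "neg", "pos", "neg", "pos", "neg"]

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=2, random_state=0, stratify=labels)

vectorizer = TfidfVectorizer(ngram_range=(1, 2))
X_train_vec = vectorizer.fit_transform(X_train)
X_test_vec = vectorizer.transform(X_test)

for name, clf in [("Naive Bayes", MultinomialNB()), ("Linear SVM", LinearSVC())]:
    clf.fit(X_train_vec, y_train)
    y_pred = clf.predict(X_test_vec)
    # Precision, recall and F1-score, as reported in most reviewed papers.
    print(name)
    print(classification_report(y_test, y_pred, zero_division=0))
```

With a realistic student-feedback dataset, the same report (optionally with cross-validation) would allow a direct comparison of classical learners before moving to deep learning models.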
Moreover, an increasing research pattern of DL application appeared in 2019 and 2020, especially during 2020, when an equal distribution of DL versus the other techniques can be observed.

RQ5. What are the most common sources used to collect students' feedback? Based on the literature review conducted in preparing this study, we came across several data sources, and based on their characteristics, we divided them into the following four categories for the convenience of our readers and the researchers working in this domain. The categories are as follows:
• Social media, blogs and forums: This category of datasets consists of data collected from online social networking and micro-blogging sites and discussion forums, such as Facebook and Twitter;
• Survey/questionnaires: This category comprises data that were mostly collected by conducting surveys among students and teachers or by providing questionnaires to collect feedback from the students;
• Education/research platforms: This category contains the data extracted from online platforms providing different courses, such as Coursera and edX, and research websites such as ResearchGate, LinkedIn, etc.;
• Mixture of datasets: In this category, we grouped all those studies which used several datasets to conduct their experiments.
As can be seen in Figure 10, there were only 64 (69.57%) papers that reported the sources from which the data were collected, whereas almost one-third of the papers failed to show any information regarding the sources of the datasets. Table 8 shows papers that reported the sources of the datasets used for conducting experiments, along with their corresponding categories and descriptions.

RQ6. What are the solutions with respect to the packages, tools, frameworks and libraries utilized for sentiment analysis? Sentiment analysis is still a new field, and therefore there is no single solution/approach that dominates in sentiment analysis systems. In fact, there are dozens of solutions in terms of packages, frameworks, libraries, tools, etc. that are widely used across application domains in general, and the education domain in particular. Figure 11 shows the findings of the articles reviewed in this study with respect to the most commonly used packages, tools, libraries, etc. for the sentiment analysis task. As shown in the treemap illustrated in Figure 11, Python-based NLP and machine learning packages, libraries, and tools (colored in blue) are among the most popular solutions due to the open-source nature of the Python programming language. Specifically, the NLTK (Natural Language Toolkit) package is the dominant solution, and it was used in 12 different articles for pre-processing tasks including tokenization, part-of-speech tagging, normalization, text cleaning, etc. Java-based NLP and machine learning packages, frameworks, libraries, and tools constitute the second group of solutions used for sentiment analysis. These solutions are colored in orange in Figure 11. RapidMiner is the most common Java-based framework and was used in three articles. The third group is composed of NLP and machine learning solutions based on the R programming language. Only three studies used solutions in this group to conduct the sentiment analysis task.

RQ7. What are the most common data representation techniques used for sentiment analysis? 
To provide our readers with more information on sentiment discovery and analysis, we briefly present the commonly used word embedding techniques for the sentiment analysis task. From the related reviewed articles, we observed that very few studies employed word embedding techniques to represent textual data collected from different sources. Only one article [48] employed the Word2Vec embedding model to learn the numeric representation and supply it as an input to a long short-term memory (LSTM) network. In addition to Word2Vec, GloVe and FastText models were used in two articles [14,45] to generate the embeddings for an input layer of a CNN and to compare the performance of the proposed aspect-based opinion mining system. As presented above, word embedding techniques were seen in very few papers (3) out of all the references (92), particularly regarding sentiment analysis in the education domain for students' feedback. Therefore, more focus is needed to bridge this gap by incorporating and testing different embedding techniques while analyzing the sentiment, emotion, or aspect of a student-related text.

Most Relevant Articles

To present the readers with a selection of the good-quality articles presented in this survey paper, we further narrowed down and short-listed 19 journal and conference articles. In particular, only articles published from 2018 to 2020 in Q1/Q2 level (https://www.scimagojr.com/journalrank.php) journals and A/B ranked (http://www.conferenceranks.com) conferences were identified as relevant, and these are summarized in Table 9. Table 9 depicts pivotal aspects that were examined in the reviewed articles, including publication year and type, techniques, approaches, models/algorithms, evaluation metrics, and the sources and size of the datasets used to conduct the experiments. It can be seen that it is almost impossible to directly compare the articles in terms of performance due to the variety of algorithms/models and datasets applied to conduct the sentiment analysis task. However, it is interesting to note that the performance of sentiment analysis systems has generally improved over the years, achieving an accuracy of up to 98.29% thanks to the recent advancements of deep learning models and NLP representation techniques.

Identified Challenges and Gaps

Based on the systematic mapping study, we found that there is still a wide gap in some areas concerning the sentiment analysis of students' feedback that need further research and development. The following list shows some of the prominent issues, as presented in Table 10.
• Limited resources: There is a lack of resources such as lexica, corpora, and dictionaries for low-resource languages (most of the studies were conducted in the English or Chinese language);
• Unstructured format: Most of the datasets found in the studies discussed in this survey paper were unstructured. Identifying the key entities to which the opinions were directed is not feasible until an entity extraction model is applied, which makes the existing datasets' applicability very limited;
• Unstandardized solutions/approaches: We observed in this review study that a vast variety of packages, tools, frameworks, and libraries are applied for sentiment analysis.

Recommendations and Future Research Directions

This section provides various recommendations and proposals for suitable and effective systems that may assist in developing generalizable solutions for sentiment analysis in the education domain. 
We consider that the recommendations appropriately address the challenges identified in Section 5. An illustration of the proposed recommendations is given in Figure 12.

Datasets Structure and Size

There is a need for a structured format to represent feedback datasets, whether they are captured at the sentence level or document level via a survey or a questionnaire form. A structured format in either an XML or a JSON file would be highly useful to standardize dataset generation for sentiment analysis in this domain. Furthermore, there is a need to associate the meta-data acquired at the time of the feedback responses. The meta-data would help to provide a descriptive analysis of the opinions expressed by a group of people for a given subject (aspect). Moreover, more than half (56.7%) of the datasets used in the reviewed papers were of small size, with merely 5000 samples or fewer, which affects the reliability and relevance of the results [102]. Additionally, most of these datasets are not publicly available, meaning that the results are not reproducible. Therefore, we recommend the collection of large-scale labeled datasets [14] to develop generalized deep learning models that could be utilized for various sentiment analysis tasks and for big data analysis in the education domain.

Emotion Detection

We found only a small number of articles focused on emotion detection. We feel that there is a greater need to take into consideration the emotions expressed in opinions to better identify and address the issues related to the target subject, as has been investigated in many other text-based emotion detection works [103]. Furthermore, there are standard publicly available datasets such as ISEAR (https://www.kaggle.com/shrivastava/isearsdataset) and SemEval-2019 [104] that can be used to train deep learning models for text-based emotion detection tasks utilizing the Plutchik model [1] coupled with emoticons [8]. People often use emoticons to express emotions; thus, one aspect that researchers could explore is to make use of emoticons to identify the emotions expressed in an opinion.

Evaluation Metrics

Our study showed that researchers have used various evaluation metrics to measure the performance of sentiment analysis systems and models. Additionally, a considerable number of papers (27%) failed to report the information regarding the metrics used to assess the accuracy of their systems. Therefore, we consider that a special focus and emphasis should be placed on including the utilized metrics in order to enhance the transparency of the research results. Information retrieval evaluation metrics such as precision, recall, and the F1-score are good practice for the performance evaluation of sentiment analysis systems relying on imbalanced datasets. Accuracy is another metric that could be used to evaluate the performance of systems trained on balanced datasets. Statistical metrics such as the Kappa statistic and Pearson correlation are other metrics that can be used to measure the correlation between the output of sentiment analysis systems and data labeled as ground truth. Moreover, this could help and benefit other researchers when conducting comprehensive and comparative performance analyses between different sentiment analysis systems.

Standardized Solutions

We have shown that the current landscape of sentiment analysis is characterized by a wide range of solutions that are yet to mature, as the field is obviously novel and rapidly growing. 
These solutions were generally (programming) language-dependent and have been used to accomplish specific tasks (i.e., tokenization, part-of-speech tagging, etc.) in different scenarios. Thus, standardization will play an important role as a means for assuring the quality, safety, and reliability of the solutions and systems developed for sentiment analysis.

Contextualization and Conceptualization of Sentiment

Machine learning/deep learning approaches and techniques developed for sentiment analysis should pay more attention to embedding the semantic context using lexical resources such as WordNet, SentiWordNet, and SenticNet, or semantic representation using ontologies [105], to capture users' opinions, thoughts, and attitudes from a text more effectively. In addition, state-of-the-art static and contextualized word embedding approaches such as fastText, GloVe, BERT, and ELMo should be further considered for exploration by researchers in this field, as they have proven to perform well in other NLP-related tasks [106,107].

Potential Threats to Validity

There are several aspects that need to be taken into account when assessing this systematic mapping study, as they can potentially limit the validity of the findings. These aspects include the following:
• The study includes papers collected from a set of digital databases, and thus we might have missed some relevant papers due to them not being properly indexed in those databases or having been indexed in other digital libraries;
• The search strategy was designed to search for papers using terms appearing in keywords, titles, and abstracts, and due to this, we may have failed to locate some relevant articles;
• Only papers that were written in English were selected in this study, and therefore some relevant papers that are written in other languages might have been excluded;
• The study relies on peer-reviewed journals and conferences and excludes scientific studies that are not peer-reviewed, i.e., book chapters and books. Furthermore, a few studies that conducted a systematic literature review were excluded, as they would not provide reliable information for our research study;
• Screening based on the title, abstract, and keywords of papers was conducted at stage 2 to include the relevant studies. There are a few cases in which the relevance of an article cannot be judged by screening these three dimensions (title, abstract, keywords) and instead a full paper screening is needed; thus, it is possible that we might have excluded some papers with valid content due to this issue.

Conclusions

In the last decade, sentiment analysis enabled by NLP, machine learning, and deep learning techniques has also been attracting the attention of researchers in the educational domain in order to examine students' attitudes, opinions, and behavior towards numerous teaching aspects. In this context, we provided an analysis of the related literature by applying a systematic mapping study method. Specifically, in this mapping study, we selected 92 relevant papers and analyzed them with respect to different dimensions such as the investigated entities/aspects in the education domain, the most frequently used bibliographical sources, the research trends and patterns, the tools utilized, and the most common data representation techniques used for sentiment analysis. We have shown an overall increasing trend of publications investigating this topic throughout the studied years. 
In particular, there was a significant growth of articles published during the year 2020, in which DL techniques were the most represented. The mapping of the included articles showed that there is a diversity of interest from researchers on issues such as the approaches/techniques and solutions applied to develop sentiment analysis systems, the evaluation metrics to assess the performance of the systems, and the variety of datasets with respect to their size and format. In light of the findings highlighted by the body of knowledge, we have identified a variety of challenges regarding the application of sentiment analysis to examine students' feedback. Consequently, recommendations and future directions to address these challenges have been provided. We believe that this study's results will inspire future research and development in sentiment analysis applications to further understand students' feedback in an educational setting. In future work, our plan is to further deepen the analysis that we performed in this mapping study by conducting systematic literature reviews (SLRs), as also suggested by [108].

Author Contributions: Conceptualization, Z.K. and A.S.I.; methodology, F.D. and Z.K.; investigation and data analysis; writing, original draft preparation; writing, review and editing; supervision, Z.K., F.D., A.S.I., K.P.N. and M.A.W.; project administration, Z.K. and F.D. All authors have read and agreed to the published version of the manuscript.
Funding: The APC was funded by an Open Access Publishing Grant provided by Linnaeus University, Sweden.
8,997.8
2021-04-28T00:00:00.000
[ "Computer Science", "Education", "Linguistics" ]
Controlling WiFi Direct Group Formation for Non-Critical Applications in C-V2X Network

The fifth-generation (5G) networks are expected to meet various communication requirements for vehicles. C-V2X, introduced in LTE V2X in Release 14, is designed to provide the ultra-high reliability and ultra-low latency performance required by the most demanding V2X applications. In the literature, research interests are primarily focused on safety-critical applications in a dynamic environment. Therefore, in most communication models, both safety and non-safety critical applications operate through the same radio access technology. This is the case for both C-V2X Direct Communication and IEEE 802.11p. However, in an urban environment characterized by high traffic density, the availability of resources can be problematic. In that case, it would be best to propose new communication strategies, because different use cases will have different sets of requirements. In this paper, we propose to increase the capacity of C-V2X Direct Communication by introducing WiFi Direct as a second connection alternative. Indeed, several works have shown that WiFi offloading can alleviate the congestion of cellular networks. Thus, an SDN-based P2P Group Formation is proposed by extending OpenFlow to manage the WiFi Direct control plane. This solution also allows establishing multi-hop communication, something that is not possible in the standard version of WiFi Direct. The performance evaluation of the P2P Group Formation procedure is proposed via simulations in an urban environment. The results show that our proposed procedure performs better than those proposed in the literature. To demonstrate the implementation feasibility of the proposed solution on real hardware, we also performed prototyping.

I. INTRODUCTION

The emergence of new ITS (Intelligent Transportation Systems) services suggests growing connectivity needs. Services such as creating a high-resolution video stream between two vehicles for real-time information sharing require high-speed, low-latency connectivity. In this scenario, the 3GPP has developed Cellular-V2X (C-V2X) to enable short-range communications. C-V2X can achieve the V2X (Vehicle-to-Everything) requirements and most efficiently pave the way to connected and automated driving [1]-[5]. However, the exponential rise of cellular data traffic, especially with the deployment of the Internet of Things, poses a significant problem for mobile operators. Therefore, simply using cellular infrastructure for vehicular communication may worsen the overload problem [5], [6]. C-V2X enables new services and features to be created. Applications can be classified as either safety-critical or non-safety-critical [2], [7]-[9]. Safety-critical applications mainly involve cooperation among vehicles and require direct communication (Vehicle-to-Vehicle, Vehicle-to-Infrastructure, and Vehicle-to-Pedestrians). To address the constraints of vehicular communication (low latency, high reliability, high mobility, high density), the 3GPP Release 14 proposes C-V2X Direct Communications over the LTE PC5 interface [4], [5]. There are also non-safety critical applications that are mostly related to Vehicle-to-Network (V2N) communication over the LTE Uu interface [3]. This mode of communication can handle V2N use cases like infotainment and latency-tolerant safety messages. 
However, several scenarios may also involve cooperation among vehicles and require direct communication. This is the case in particular for the Vehicular Cloud (VC) [10], [11], which concerns vehicles that are still and at rest in parking lots, or a dynamic v-cloud composed of moving vehicles. The Social Internet of Vehicles (SIoV) [12], [13] is another scenario that needs direct communication. The discovery of vehicle-related services such as gas-filling stations, electric vehicle charging stations, and public transport information can also be handled via cooperative communication. Common to all these scenarios is that there is no need for ultra-low-latency or highly reliable communication. Therefore, using the LTE PC5 interface in such situations is not the right solution, as it could make resource allocation more difficult [9]. In addition, the implementation of traffic prioritization schemes remains complex and will not fully resolve overload issues. Several works [7], [14]-[16] propose hybrid schemes based on C-V2X and IEEE 802.11p communication. However, 802.11p is only optimized for day 1 safety applications [17]. So applications such as VC and SIoV cannot be correctly handled by 802.11p, especially in dense traffic scenarios [5], [18]. The 3GPP proposes, via different releases, several methods to offload cellular traffic. WiFi offloading is a commonly used technique for reducing the traffic load in mobile networks [19], [20]. The WiFi offloading process is typically infrastructure-based (via an AP, Access Point). However, in the D2D (Device-to-Device) communication scenario, WiFi Direct has emerged as a potential candidate [21]-[23]. Offloading traffic to the D2D network enables low computational complexity at the base station besides increasing the network capacity [21]. In [6], the authors discuss the challenges and solutions for vehicular WiFi offloading. The highly dynamic topology of vehicles and intermittent connections constitute the major challenge for WiFi offloading in vehicular communication. However, in an urban environment, cars generally travel at low speed with frequent stops and are sometimes in quasi-stationary mode during traffic jams. In such scenarios, WiFi Direct may well be suitable for direct communication [24]-[26]. WiFi Direct, formally known as WiFi P2P, is a popular technology released by the WiFi Alliance [27], aimed at enhancing direct D2D communications without connecting to an AP. WiFi Direct is built on the infrastructure mode of IEEE 802.11 and operates in the 2.4 GHz and 5 GHz bands. A P2P Device can operate concurrently with a traditional WiFi network by utilizing multiple physical or virtual MAC entities. P2P Group Formation (P2P GF) consists of determining which P2P Device will be the P2P Group Owner (P2P GO), the equivalent of an AP. The choice of the P2P GO is essential for the stability of the P2P Group. In the literature, several works [28]-[30] propose to enhance WiFi Direct Group Formation. However, the proposed schemes do not take into account the constraints of vehicular communication. In contrast, some papers [24]-[26], [31], [32] discuss the use of WiFi Direct for data communication in VANETs (Vehicular Ad Hoc Networks). However, their work uses the standard P2P GF, or only limited details are provided on their proposed procedures. Multi-hop communication is essential for vehicular networks. By default, WiFi Direct does not support intergroup communication. Thus, some works [33]-[35] propose multi-hop communication schemes for WiFi Direct. 
The fact is, however, that these works focused on the feasibility of their solutions on Android devices, without taking into account the mobility of the P2P Devices, and did not provide selection criteria for relay nodes. The idea of leveraging Software Defined Networking (SDN) in vehicular networks has emerged as essential in the literature [36]. SDN is designed to make the network more flexible and agile by decoupling the control and data planes. Thus, the whole network intelligence is placed in the control plane and managed by a central entity named the Controller. In [19], SDN-based WiFi offloading using D2D communication is proposed. However, WiFi Direct is not used, because the authors consider it unsuitable for D2D-based WiFi offloading. Based on the remarks raised, this paper proposes an SDN-based P2P Group Formation to offload Direct Communication in the C-V2X network. The main contribution is the proposition of an extension of OpenFlow for managing the WiFi Direct control plane. We also propose a novel P2P Group Formation procedure that takes into account the vehicular communication constraints and features such as multi-hop communication. The novelty of the proposed Group Formation procedure is the introduction of a stability factor for the selection of GOs. This factor allows the P2P WiFi network to react more effectively to vehicle dynamics. Load balancing is also taken into account by dividing the network into small areas and by setting a limit on the number of P2P Devices per Group. We have implemented and evaluated the proposed architecture in a simulation. We also showed that the proposed solution can be implemented with real hardware by building a prototype. The rest of the paper is structured as follows. Section II presents the background of C-V2X and WiFi Direct. Section III provides the literature review on the use of WiFi Direct for vehicular communication. Our proposed solution is presented in section IV, followed by performance evaluation based on simulation and prototyping, respectively, in section V and section VI. Finally, section VII concludes the paper.

II. BACKGROUND
A. C-V2X

1) DIRECT COMMUNICATION
It is short-range communication (< 1 kilometer) between vehicles (V2V), between vehicles and pedestrians (V2P), and between vehicles and infrastructure (V2I). This mode, based on D2D communications, is implemented over the LTE PC5 interface and operates in the ITS 5.9 GHz band, independent of cellular networks.
2) NETWORK COMMUNICATION
It is long-range communication (> 1 kilometer) between the vehicle and the network (V2N). This mode is implemented over the LTE Uu interface and operates in the traditional licensed mobile broadband spectrum. The 3GPP Release 14 focused specifically on supporting V2V communications by adding two new modes:
3) MODE 3 (SCHEDULED)
The network infrastructure provides centralized collision-free resource allocation (Semi-Persistent Scheduling, SPS) for each V2V transmission over the PC5 interface. eNodeBs assist vehicular UEs via control signaling over the Uu interface.
4) MODE 4 (AUTONOMOUS)
The vehicular UE does not require support from the cellular infrastructure. V2V resource scheduling and interference management over the PC5 interface are supported in a distributed way.

B. WIFI DIRECT

In WiFi Direct, the role of each P2P Device is dynamically negotiated within the P2P Group, the equivalent of a BSS (Basic Service Set). The P2P Group Owner (GO) is the P2P Group creator; it acts like an AP that provides BSS functionality and services for P2P Clients. A Legacy Client (LC) is a WiFi STA that sees the P2P GO as a traditional AP. 
P2P Discovery enables P2P Devices to find each other and create P2P Group quickly. P2P Discovery is composed of different phases. 1) DEVICE DISCOVERY Consists of the detection of other P2P Devices by scanning (passive scan) in social channels (channels 1, 6, and 11 of 2.4 GHz band). The delay of such a procedure can be relatively high if several devices are simultaneously performing Device Discovery [37]. 2) SERVICE DISCOVERY It is an optional feature that allows a P2P Device to seek information about the services available in the group prior to joining it. 3) GROUP FORMATION Consists of determining which P2P Device will be the GO. Three Group Formation modes are proposed in the WiFi Direct specification. In standard Group Formation, the GO is selected during the GO Negotiation phase. This is done by exchanging a GO Intent value (from 0 to 15) with the device sending higher intent becoming GO. In autonomous Group Formation, the role of GO is not negotiated. A P2P Device autonomously creates a P2P Group and starts sending beacons. In persistent Group Formation, P2P Devices can use the Invitation Procedure to quickly re-instantiate the group. The next phase is the establishment of secure communication by employing WPA2-Personal security, and finally, a DHCP exchange for setting up the IP configuration. III. RELATED WORK To study the suitability of WiFi Direct technology for VANETs, performance analysis of WiFi Direct for VANET is proposed in [24]. The simulation results show that WiFi Direct can be considered as a potential wireless technology for VANET. In [25], the authors propose to adapt the WiFi Direct protocol by taking into account vehicular communication constraints. Thus, a broadcast mechanism between the GO and the clients is proposed to reduce transmission delays in WiFi Direct. Performance analysis is based on both analytical and simulation results. In [26], a V2X communication system based on WiFi Direct and cellular technologies is proposed. Smartphones are used as communication devices. The Device Discovery and WiFi Direct Group Formation procedure are managed by a server that receives the status (MAC address, position, velocity) of each vehicle via the cellular network. When conditions (distance between vehicles < 250m) for Group Formation are reached, the server sends a message to the concerned vehicles to form a WiFi Direct Group. The evaluation of this system is performed in a real environment and consists of measuring discovery delay and 4G data usage in different scenarios. In [31], a smartphone integrated driving safety application based on WiFi Direct is proposed. When the vehicle faces unusual condition, V2V communication through WiFi Direct is used to report this incident to the neighbor vehicles. WiFi Direct secure location-aided routing protocols for VANET are proposed in [32]. A modified Diffie-Hellman key exchange protocol is used to establish secure communication links between vehicles. Several works propose to enhance the WiFi Direct Group Formation procedure. In [28], the WD2 algorithm is proposed and consists of computing the intent value based on RSSI. The device with the highest intent value is selected as GO. Compared to the default WiFi Direct protocol, WD2 increases throughput and reduces Group Formation delay. An optimized version of WD2 is proposed in [29]. The procedure consists first of determining the weight of each device based on RSSI and then mapping the weight with a valid intent value. 
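As a rough illustration of the standard GO Negotiation outcome described above, the following Python sketch compares the GO Intent values exchanged by two P2P Devices. The handling of the equal-intent case via a tie-breaker flag is our assumption for illustration; the text above does not detail it.

def negotiate_go(intent_a: int, intent_b: int, tiebreaker_a: bool) -> str:
    """Return which device ('A' or 'B') becomes the P2P Group Owner.

    intent_a / intent_b: GO Intent values (0-15) sent during GO Negotiation.
    tiebreaker_a: assumed tie-breaker flag of device A, used only when both
                  intents are equal (hypothetical handling of that case).
    """
    if not (0 <= intent_a <= 15 and 0 <= intent_b <= 15):
        raise ValueError("GO Intent must be in the range 0..15")
    if intent_a > intent_b:
        return "A"
    if intent_b > intent_a:
        return "B"
    # Equal intents: fall back to the tie-breaker flag.
    return "A" if tiebreaker_a else "B"

# Example: a device advertising intent 10 wins over one advertising 7.
assert negotiate_go(10, 7, tiebreaker_a=False) == "A"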
The bit rate at each client device is evaluated based on the SNR (Signal-to-Noise Ratio). The GO that leads to near-optimal bit rate performance is selected from the list of preselected GOs. In [30], a Seamless Group Reformation scheme is proposed to maintain connectivity even when the GO disappears without notice. A Dormant Backend Links mechanism is also proposed to reduce the disruption time. In [38], a redundant GO scheme is proposed to minimize the packet loss and network discontinuity time in response to unforeseen GO failures. By default, WiFi Direct follows the client-server architecture; therefore, it does not support multi-hop or intergroup communication. Intra- and intergroup bidirectional communication schemes are introduced in [32] by letting a device have two virtual P2P network interfaces. Thus, the device can connect to two groups simultaneously by combining standard WiFi and WiFi Direct functionalities, building a communication bridge between the two groups. Some solutions are proposed in [34] to allow multi-hop communication in a WiFi Direct multi-group network. In the first solution, a time-sharing mechanism in which the gateway node switches between two groups is proposed. The second solution is based on simultaneous connections, as in [32]. Performance evaluation shows that the simultaneous connections mechanism performs best. In [35], bidirectional intergroup communication, also based on simultaneous connections, is proposed with the introduction of relay nodes. Very few papers [22], [23], [39] discuss LTE offloading onto WiFi Direct. In [22], performance gains for WiFi Direct offloading are studied. The cellular network assists P2P Devices in creating the WD Group. Their studies reveal that network-assisted D2D offloading provides significant gains in capacity and energy efficiency. A protocol for supporting D2D communications in cellular networks using WiFi Direct and LTE is proposed in [39]. This protocol allows the deployment of the D2D paradigm on top of the LTE cellular infrastructure. In [23], the same authors propose an extension of their previous work [39] by providing an analytical model for their proposed system. The cluster head, i.e., the GO, is selected as the device with the highest SNR with the eNB. The rest of the Group Formation procedure is identical to that of standard WiFi Direct. In [19], SDN-based WiFi offloading using D2D communication is proposed. Devices periodically send updated control information to the SDN controller using their cellular network interfaces. Based on this information, the controller can instruct devices to perform WiFi D2D communication when they are within signal coverage range. IV. SDN-BASED WIFI DIRECT GROUP FORMATION In this section, we present the proposed SDN-based architecture for C-V2X Direct Communication over WiFi Direct. We suppose that vehicles move in an urban environment where cellular coverage is always present. The goal is to increase the capacity of C-V2X Direct Communication by introducing WiFi Direct as a second connection alternative for non-critical applications (VC, SIoV, or advanced vehicle-related services). A. SYSTEM MODEL The system consists of vehicles, the UTRAN (Universal Terrestrial Radio Access Network), and an SDN WiFi-P2P Controller, as shown in Fig. 1a. Each vehicle is equipped with C-V2X interfaces (PC5 and LTE-Uu) and two WiFi interfaces: one for scanning and the other for communication. For intergroup communication, one of the GOs also acts as a GM in LC mode to communicate with the other P2P Group.
The SDN WiFi-P2P Controller is a fog device [40], i.e., located near the eNB. This geographical proximity to vehicles allows rapid transmission and processing of the WD control plane. An extension of the OpenFlow protocol is used to define new features for transporting the WD control plane. Thus, the proposed new OpenFlow messages are preceded by the P2P_ prefix, as shown in Fig. 1b. After exchanging HELLO messages, vehicles must register to the SDN WiFi-P2P Controller using the P2P_REGISTER message. This message contains the vehicle identifier (vID). The controller responds by sending a P2P_CONFIG message that indicates the next time to scan (nts). Once this moment arrives, each vehicle performs a scan with a second WiFi interface to find WD devices, i.e., neighbor vehicles. After that, vehicles send P2P_STATUS messages to the controller. These messages are crucial because they contain all the information (position, speed, angle, scan result. . .), allowing the controller to perform Group Formation procedure. After this procedure, the controller selects a set of vehicles that will be set in GO mode and send them a P2P_GROUP_FORMATION message containing its role or mode=GO, the assigned IP address, and the next time to scan. When receiving this message, each vehicle selected as GO starts a P2P Group by creating a P2P interface and send a P2P_REGISTER message to the controller to indicate the MAC address of the created P2P interface (GO MAC address). This message is important because it is a confirmation of the creation of the P2P Group. Thus, by receiving this message, the controller is able to send to vehicles considered as GM a P2P_GROUP_FORMATION message containing the mode=GM, the assigned IP address, the GO MAC address, and next time to scan. When a GM receives this message, it connects the GO via WPS provisioning and starts a P2P Group session. For intergroup communication, the controller also sends the same P2P_GROUP_FORMATION message to the GO considered as GM with the difference that the role becomes mode=LC. By default, the GO is also required to run a DHCP server to provide P2P Clients (GM and LC) with IP addresses. However, this is not a viable option in vehicular communication because multiple GO will be used in addition to the recurrent handovers of P2P Clients. In the proposed architecture, the SDN WiFi-P2P Controller is responsible for IP addresses management. Therefore DHCP server is no longer needed. The advantage is that the P2P Clients quickly establish a connection with the GO and keep their IP addresses during a handover. In addition, the communication is interrupted only during the T3 or T4 period (see Fig. 1b). B. GROUP FORMATION PROCEDURE After receiving vehicles (P2P Devices) status, i.e., position, speed, angle, p2p rule and scan result, the SDN WiFi-P2P Controller starts the Group Formation procedure consisting of the following steps: 1. Discovery data consist of a set of scan results, noted scan Vi , sent by each vehicle V i . where RSSI Vj,i is the RSSI of the P2P Device of a vehicle V j measured by V i . 2. Vehicle stability factor S Vi depends on the intent value IV Vi , the difference in speed v Vi and in angle θ Vi , and the cost C Vi for Group Formation. (a) IV Vi is a numerical value between 0 and 15. It is determined based on the method proposed in [29]. First, the P2P Device weight w Vi is determined as follows: where n is the total number of discovered P2P Devices. 
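To make the message exchange of Fig. 1b more concrete, the sketch below models the proposed P2P_-prefixed control messages as simple Python dataclasses. The message names and the pieces of information they carry (vID, next time to scan, role, IP address, GO MAC address, scan results) come from the description above; the exact field names and types are illustrative assumptions, not the wire format of the proposed OpenFlow extension.

from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class P2PRegister:
    """Vehicle -> controller, sent after the HELLO exchange. Also reused by a
    new GO to report the MAC address of the P2P interface it has created."""
    vid: str                      # vehicle identifier (vID)
    go_mac: Optional[str] = None  # set when confirming P2P Group creation

@dataclass
class P2PConfig:
    """Controller -> vehicle: schedule of the next scan."""
    nts: float                    # next time to scan (s)

@dataclass
class ScanEntry:
    vid: str
    rssi: float                   # RSSI of the neighbour's P2P Device (dBm)

@dataclass
class P2PStatus:
    """Vehicle -> controller: the information needed for Group Formation."""
    vid: str
    position: Tuple[float, float] # (x, y)
    speed: float                  # m/s
    angle: float                  # heading in degrees
    role: str                     # current P2P role: 'NONE', 'GO', 'GM' or 'LC'
    scan: List[ScanEntry] = field(default_factory=list)

@dataclass
class P2PGroupFormation:
    """Controller -> vehicle: assigned role, addressing and next scan time."""
    mode: str                     # 'GO', 'GM' or 'LC'
    ip_address: str               # assigned by the controller (no DHCP needed)
    nts: float
    go_mac: Optional[str] = None  # needed by GMs/LCs that must join a group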
Then the weight is mapped to an intent value using a scaling constant s, whose value is chosen so that when w_Vi = RSSI_max, IV_Vi = 15. (b) The difference in vehicle speed is normalized based on δv = v_Vi − v_Vj, where v_Vi and v_Vj are the speeds of vehicles V_i and V_j, respectively. (c) The difference in vehicle angle (direction) is normalized based on δθ = θ_Vi − θ_Vj, where θ_Vi and θ_Vj are the angles of vehicles V_i and V_j, respectively. (d) If V_i is already a GO, a Group Reformation is no longer needed; this is captured by the cost C_Vi. Finally, the vehicle stability factor S_Vi is determined as a weighted combination of these terms, where α_1, α_2, α_3 and α_4 are weighting factors. (e) S_Vi is used to rank vehicles in ascending order. In other words, vehicles with the highest stability factor are selected to be GOs. 3. The goal of subarea creation is to prevent all GOs from being in one place. Next, for each subarea z_i ∈ Z, determine the number k_i of GOs needed: k_i = n_Vzi / n_GM, where n_Vzi is the number of vehicles present in the subarea z_i and n_GM is the limit number of GMs per GO. If n_Vzi < 2, no GO will be needed in this subarea. 4. Each vehicle V_i,Zi present in z_i needs to be associated with a GO in an optimized way. Thus, for each GO_i present in scan_Vi during the discovery phase (see step 1), a GM-to-GO stability factor S_GMGOi is determined as a weighted combination, where α_5, α_6 and α_7 are weighting factors. S_GMGOi is used to rank the GOs present in scan_Vi in ascending order. The vehicle V_i,Zi selects the GO from the top. 5. This step begins with the GO with the highest S_Vi (see step 2). The selection of candidates, i.e. GMs, is based on the following rules: (a) Reject all candidates that are moving in the opposite direction. (b) Select the first n_GM candidates with the highest S_GMGOi values. (c) If a candidate is not selected, it will have to select the next GO based on the ranking result (see step 4). 6. After step 5, if there is a candidate that is not yet associated with a GO, this candidate will be associated with the first GO (based on the ranking via step 4) that has not yet reached the limit number n_GM of GMs. If all GOs have reached the limit number n_GM, then the GM is associated with the GO at the top of the ranking. 7. After the Group Formation, the next step is the establishment of intergroup communication via the LC mode. For each GO_i, the set of neighboring GOs discovered during the scan, G_i,neigh, is determined. 8. Then, a connection is established with the most stable neighbor in G*_i,neigh based on the formula (9). 9. If there is a GO with no neighbor, i.e. GO_i,neigh = 0, then it is considered as isolated. A. SIMULATION DESCRIPTION In this section, we assess the performance of the proposed Group Formation procedure. The simulation is carried out using the Python-TraCI library for interfacing a Python script with SUMO [41]. A segment of Dakar downtown is used to simulate an urban vehicular scenario, see Fig. 2. A complete scenario (Through Traffic Factor, Count, type of vehicles) is built using the OpenStreetMap Web Wizard. We used the WiFi Direct channel model for VANET proposed in [25]. To improve the GO selection procedure, two Group Formation strategies are proposed: • GF strategy 1: the aim is to maintain a good quality signal and reduce latency and packet losses in the P2P Group by assigning high values to the weighting factors relative to IV_Vi and C_Vi. • GF strategy 2: the aim is to reduce connection losses by maintaining the same topology in the P2P Group. Thus, high values are assigned to the weighting factors relative to v_Vi and θ_Vi. To prove the validity of these two proposed strategies, we also implemented the approaches proposed by Jeong et al. [26], Zhang et al. [28], and Jahed et al. [29] for comparison purposes.
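A minimal Python sketch of the vehicle stability factor and the per-subarea GO count described in steps 2-3 is given below. Since the formulas themselves are not reproduced in the text, the RSSI bounds, the normalizations and the weighted-sum form of S_Vi are assumptions made for illustration only.

import math
from dataclasses import dataclass

RSSI_MIN = -90.0   # assumed worst usable RSSI (dBm)
RSSI_MAX = -30.0   # assumed best-case RSSI (dBm)
V_MAX = 15.0       # assumed maximum urban speed (m/s), used only for normalization

@dataclass
class VehicleStatus:
    vid: str
    speed: float        # m/s
    angle: float        # heading in degrees
    mean_rssi: float    # mean RSSI over the discovered P2P Devices (dBm)
    is_go: bool = False # True if the vehicle is already a GO

def intent_value(mean_rssi: float) -> float:
    """Map the device weight (here: the mean RSSI) linearly to a GO Intent in
    [0, 15], so that IV = 15 when the weight reaches RSSI_MAX (cf. step 2a)."""
    frac = (mean_rssi - RSSI_MIN) / (RSSI_MAX - RSSI_MIN)
    return 15.0 * max(0.0, min(1.0, frac))

def stability_factor(v: VehicleStatus, neighbour: VehicleStatus,
                     a1=0.4, a2=0.2, a3=0.2, a4=0.2) -> float:
    """Assumed form of S_Vi: a weighted sum of the normalized intent, the
    speed/heading similarity to a neighbour, and a cost term that favours
    keeping a vehicle that is already a GO (step 2d)."""
    speed_sim = 1.0 - min(abs(v.speed - neighbour.speed) / V_MAX, 1.0)
    dang = abs(v.angle - neighbour.angle) % 360.0
    heading_sim = 1.0 - min(dang, 360.0 - dang) / 180.0
    cost = 1.0 if v.is_go else 0.0
    return (a1 * intent_value(v.mean_rssi) / 15.0
            + a2 * speed_sim + a3 * heading_sim + a4 * cost)

def gos_needed(n_vehicles_in_subarea: int, n_gm: int) -> int:
    """Step 3: number of GOs needed in a subarea, k_i = n_Vzi / n_GM
    (rounded up here; fewer than two vehicles means no GO is needed)."""
    if n_vehicles_in_subarea < 2:
        return 0
    return math.ceil(n_vehicles_in_subarea / n_gm)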
The simulation key parameters are summarized in Table 1, and the following metrics are determined during the simulation: • Connection losses: percentage of GMs having lost connectivity with the GO before the next scheduled scan. • Control plane overhead: total number of OpenFlow packets exchanged between P2P Devices (via LTE-Uu interfaces) and SDN WiFi-P2P Controller. • Overloaded GOs: percentage of GOs with number of GMs connected to them greater than n GM . • Number of Group Formation: the total number of Group Formation during the simulation. • Number of handovers: the total number of times the GMs have switched from one GO to another. B. RESULTS AND DISCUSSION During the simulation, vehicles can enter or leave the area Z . Each point in the x-axis (Fig. 3) represents the total number of vehicles that crossed this area during the whole simulation period (1200s). The simulation is repeated with different values of OpenStreetMap parameters (Through Traffic Factor, Count) to obtain the values represented on the x-axis. Fig. 3 shows the impact of the traffic density, scan interval, and GF strategies on some WiFi Direct key performance metrics. First, we are focused on the impact of the scan interval in connection losses, overhead and P2P Group size. Fig. 3a shows that only the scan interval affects the connection losses. Indeed, the connection losses decrease with decreasing scan intervals. This result is predictable because the topology of the vehicles does not change significantly over a short time interval. However, the use of short scan intervals induces a high network overhead, as shown in Fig. 3b. Fig. 3c shows that the scan interval has no major influence on the size of the P2P Group, unlike traffic density. Indeed, when the number of vehicles increases, the number of overloaded GOs also increases. However, this percentage is relatively low thanks to the strategy of the division of the network into small areas and subareas described in step3 of Group Formation procedures. The observations mentioned above are not enough to decide how to set the P2P network. As a result, we also determined the impact of weighting factors in connection losses, the number of Group Formation and handover via the two proposed GF strategies, as shown in Fig. 3d-3l. The first observation is that the proposed strategies best perform compared with the strategies proposed by Jeong et al. [26], Zhang et al. [28], and Jahed et al. [29]. Indeed, Fig. 3d, 3e, and 3.f show that GF strategy 1 and GF strategy 2 have fewer packet losses. This is due to the use of the vehicle stability factor (formula (7) and (9)) for the selection of GOs unlike the strategies based exclusively on RSSI (Zhang et al. and Jahed et al.) or distance (Jeong et al. [26]). Note also that for all of the strategies, the scan interval has a great impact on connection losses as predicted by Fig. 3a. Fig. 3g, 3h, and 3.i also show that GF strategy 1 and GF strategy 2 have the lowest number of Group Formation. This is mainly due to the consideration of C Vi parameter (formula (6)) in the vehicle stability factor. In effect this parameter promotes the maintenance of already selected GOs. Consequently, fewer new GOs are created. In addition, traffic density and scan intervals have also an influence on the number of Group Formation. Indeed, short scan interval induces an increase in the number of Group Formation. This situation is undesirable. 
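Since the evaluation is driven through the Python-TraCI interface to SUMO, a status-collection loop of the kind used to feed P2P_STATUS messages to the controller could look like the following sketch. The scenario file name, the scan interval value and the callback into the Group Formation procedure are placeholders, not values from the paper.

import traci  # provided by the SUMO tools (Python-TraCI library)

SCAN_INTERVAL = 10.0   # seconds between scans (placeholder value)
SIM_END = 1200.0       # total simulated time (s), as in the evaluation

def collect_status():
    """Return a P2P_STATUS-like record for every vehicle currently in the map."""
    records = []
    for vid in traci.vehicle.getIDList():
        records.append({
            "vid": vid,
            "position": traci.vehicle.getPosition(vid),  # (x, y) in m
            "speed": traci.vehicle.getSpeed(vid),         # m/s
            "angle": traci.vehicle.getAngle(vid),         # heading in degrees
        })
    return records

def run(scenario_cfg="dakar_downtown.sumocfg"):   # placeholder file name
    traci.start(["sumo", "-c", scenario_cfg])
    next_scan = 0.0
    try:
        while traci.simulation.getTime() < SIM_END:
            traci.simulationStep()
            now = traci.simulation.getTime()
            if now >= next_scan:
                status = collect_status()
                # ...hand `status` to the Group Formation procedure here...
                next_scan = now + SCAN_INTERVAL
    finally:
        traci.close()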
Thus, the integration of GF strategies allows a much better reduction in the number of Group Formations, especially with GF strategy 1. The division of the network into small areas (step 3), especially with the introduction of the limit number of GMs per GO (n_GM), has resulted in an increase in the number of handovers for GF strategy 1 and GF strategy 2, as shown in Fig. 3j, 3k and 3l. Indeed, to prevent GO overloading, some GMs are forced to leave their current P2P Groups to join less overloaded P2P Groups. In an attempt to maintain the same topology in the P2P Group, the parameters v_Vi and θ_Vi (formulas (4) and (5)) also promote the handover of GMs, especially for GF strategy 2. Note also that short scan intervals and high traffic density lead to a higher number of handovers. In short, the performance evaluation shows that the proposed GF strategy 1 is the most suitable for P2P Group Formation for vehicular communication. Indeed, this strategy has fewer connection losses and less Group Reformation. The reduction of Group Reformation is essential for good communication quality. In the prototype, parprouted [42] is used because simple Layer 2 bridging does not work with a wireless Ethernet client in STA mode; parprouted provides transparent IP proxy ARP bridging. However, the first tests showed that bridge connectivity is very unreliable on the Raspberry Pi. To solve this problem, we implemented an SDN-based proxy ARP in addition to parprouted. So, when a GO is selected as a client, i.e., in LC mode, the SDN WiFi-P2P Controller adds in the P2P_GROUP_FORMATION message the list of P2P Devices (MAC and IP addresses) located on both sides of the two interfaces (GO and LC). Based on this information, the concerned GO can sniff ARP Request packets and respond appropriately by generating ARP Reply packets using Scapy. B. PERFORMANCE STUDY For the network performance study, the iperf and ping tools are used to measure delay, bandwidth, and packet losses. Table 2 and Fig. 4 show the performance of intra- and intergroup communication. The results demonstrate that network performance depends on the number of hops. Thus, one-hop communication (GM11−GO1 and LC1−GO2) presents low delay, high bandwidth, and low packet losses, as shown in Table 2 and Fig. 4a, 4b, 4c and 4d. For two-hop communication, i.e., intra-GM communication (GM11−GM12), network performance is relatively degraded compared to the one-hop performance; see Table 2 and Fig. 4e and 4f. Group Formation has a negative impact on network performance, as shown in Table 2 and Fig. 4i and 4j. Indeed, during Group Formation, the concerned P2P Devices need to create a new P2P interface, which causes their unavailability. The duration of the latter is measured via T4, see Fig. 1b and Table 2, which details the duration of the different stages (T1, T2, and T3) of Group Formation. Based on these results, the P2P Devices' unavailability is T4 = 4.36 s, and the negative consequences, i.e., high delay, low bandwidth, and very high packet losses, are observed in Fig. 4i and 4j, exactly at t = 40 s. Concerning intergroup communication (GM11−GO2 and GM11−GM21), we also observed a degradation of the network performance compared to intragroup communication. The degradation gets worse with the number of hops, i.e., three-hop communication for GM11−GO2 (see Fig. 4g and 4h) and four-hop communication for GM11−GM21 (see Fig. 4k and 4l). Another factor that may explain the high latency, low bandwidth, and packet losses is the usage of L3 bridging for intergroup communication.
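The SDN-assisted proxy ARP described above can be approximated with Scapy as in the sketch below. The interface name and the address table pushed by the controller in the P2P_GROUP_FORMATION message are placeholders; the sketch only illustrates the sniff-and-reply behaviour, not the exact prototype code.

from scapy.all import ARP, Ether, sniff, sendp

# Table pushed by the SDN WiFi-P2P Controller in the P2P_GROUP_FORMATION
# message: IP -> MAC of the P2P Devices reachable through this GO/LC bridge.
PROXY_TABLE = {
    "192.168.49.12": "02:00:00:aa:bb:12",   # placeholder entries
    "192.168.49.21": "02:00:00:aa:bb:21",
}
IFACE = "p2p-wlan0-0"   # placeholder P2P interface name

def answer_arp(pkt):
    """Reply to ARP Requests on behalf of devices on the other side of the bridge."""
    if ARP in pkt and pkt[ARP].op == 1:          # who-has
        target_ip = pkt[ARP].pdst
        mac = PROXY_TABLE.get(target_ip)
        if mac is None:
            return
        reply = Ether(dst=pkt[Ether].src, src=mac) / ARP(
            op=2,                                 # is-at
            hwsrc=mac, psrc=target_ip,
            hwdst=pkt[ARP].hwsrc, pdst=pkt[ARP].psrc)
        sendp(reply, iface=IFACE, verbose=False)

# Sniff ARP traffic on the P2P interface and answer for the remote devices.
sniff(iface=IFACE, filter="arp", prn=answer_arp, store=False)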
Indeed, as mentioned above, bridge connectivity is very unreliable on the Raspberry Pi; in addition, ARP Request sniffing and ARP Reply generation using Scapy (a Python library executed in user space) present performance issues. Note that the network performance also depends on the hardware, especially the WiFi chipset and the CPU load. VII. CONCLUSION In this paper, we explored the potential of WiFi Direct to offload C-V2X Direct Communication (V2V, V2I, and V2P). An SDN-based Group Formation procedure adapted for vehicular communication is proposed. In addition to RSSI, other metrics such as vehicle speed, angle, and the cost for Group Formation are used for GO selection. Simulation results show that two major factors affect the WiFi Direct network performance. The first is the scan interval: short scan intervals greatly reduce connection losses. However, a high network overhead, a high number of Group Reformations, and more handovers are caused by the usage of short scan intervals. The introduction of Group Formation strategies reduces the number of Group Reformations and handovers, especially when RSSI and the maintenance of the same GO in the P2P Group are privileged during GO ranking. The performance evaluation also shows that our proposed strategies perform better than other works in the literature. To prove the feasibility of our solution, we implemented the proposed architecture on real hardware. In the proposed Group Formation procedure, vehicle trajectory prediction is not taken into account. However, this parameter could be decisive for the reduction of connection losses.
7,779.8
2020-01-01T00:00:00.000
[ "Computer Science" ]
Growth mechanisms of hBN crystalline nanostructures with rf sputtering deposition: challenges, opportunities, and future perspectives Most hBN nanostructures were fabricated using the chemical method. However, growing by the physical method also has many advantages, they are easy to synthesize this material on a large area with up- scaling setups. Even two-dimensional hexagonal boron nitride is similar to graphene structure, however there is a little work referring to the fabrication process of this material. Hence, a sufficiently detailed report on physically fabricated hBN materials is essential. This review analyzes the results that we have studied over the past ten years with the synthesis and fabrication of this material using physical vapor deposition - RF sputtering, incorporation with other techniques, strongly emphasized on growth mechanisms of this material. Introduction The chemical composition of the boron nitride (BN) compound consists of equal numbers of atoms B and N, (B 3 N 3 )n [1][2][3][4]. This compound exists in various crystal structures such as hexagonal BN (hBN), rhombic BN (rBN), wurtzite BN (wBN) and cubic BN (cBN), depending on the conditions of the crystal structure processing. Each type of structure has certain advantages and disadvantages. The hBN structure is the softest and most stable, while cBN is the hardest material structure among the said BN phases [5][6][7][8]. With such diverse properties, BN can be used in various applications such as a lubricant in equipment requiring high chemical and thermal stability [9]. By adjusting the fabrication conditions, the structure of BN can be obtained in many forms such as nanotubes, nanosheets, nanowalls and nano-cocoons [1,3,10,11]. Hence, explicitly understanding the formation of each BN phase for a typical fabrication method, will open new possibilities for their applications, which is one of fundamental research tasks. BN is a binary compound synthesized from elements in columns 13 (group III) and 15 (group V) of the periodic table. It is isoelectronic where B − and N + ions have the same number of electrons (1s 2 2s 2 2p 2 ). BN is a light binary compound at which the B and N atomic numbers (Z) are 5 and 7, respectively, leading to similarities in physical and chemical properties of C-based compounds. For example, cBN and diamond have quite similar crystal structures such as lattice constant, hardness, wide band gap, and high thermal conductivity [6,9,11]. Similar to graphite, hBN has B-N and B=N bonds, which have similar properties to the C-C and C=C bonds in graphite, especially since both materials contain the van der Waals bonds between the sheets. However, one significant difference is that hBN has mixed covalent-ionic bonds, while graphite has only covalent bonds in the plane of each sheet. Because of its higher ionization, the hardness of hBN is lower than that of graphite. In addition, BN has a higher antioxidant capacity than carbonaceous compounds due to the formation of nonvolatile boron oxide. Similar to BN, the growth of cBN and hBN mono-crystals is difficult to perform [11,12], while the graphite and diamond structures can be tuned for easier control [13][14][15][16][17][18][19][20][21]. Four BN crystal structures are depicted in figure 1. There, hBN is structurally similar to graphite, figure 1(a), with B and N atoms interspersed on the corners of a hexagon. 
The B and N atoms in the same plane are linked together by strong covalent bonds to form the hBN sheet, the different hBN sheets have weak interactions governed by van der Waals forces [1,3,4,12]. The distance between the B and N atoms is 1.46 Å in the hBN covalent plane, denoted by the A or B plane in figure 1(a). Between the two A and B planes there is a distance of 3.33 Å. The position of each B or N atom on one hBN plane plane can be found in another hBN one when it rotates 60°. This arrangement is the so-called ABAB stack [22]. In fact, point/line defects due to the lack of one or more N/B atoms can occur on those planes from any fabrication technique. The hBN phase structure and its properties will be discussed in more detail in the following sections. A similar layered structure can also be stacked in a rhombohedral (rBN), figure 1(b), where the covalent plane BN slides by a 3 where a is the lattice constant of the rBN structure, and rotates 60°with respect to the original hBN plane. The rBN structure type can be synthesized using high-pressure and temperature (HPHT) methods or ion beam-assisted physical vapor deposition (PVD) techniques [23][24][25]. The third form of BN is the cubic BN (cBN or βN) in which the N atoms are located at the corners and faces of a cubic lattice unit cell, the B atoms bond to the N at one of the four corners of the BN cube, the other three N atoms bond to B at the nearest faces such that B lies at the center of a tetrahedron, as depicted in figure 1(c). This cBN structure is identical to the zinc blende (ZnS) crystal structure [3]. The fourth form of BN is the wurtzite BN or γBN where the unit cell is a superposition of a B and an N atom in a tetrahedron, figure 1(d). The bond of each atom has sp 3 hybridization [3,[26][27][28][29][30]. The transformation of those BN structural types that can occur depending on the specific synthesis conditions such as temperature (T), pressure (p) and the composition of the reactive gas present in the synthesizing process such as H 2 , N 2 , Cl 2 . The BN phases are summarized in the pressure and temperature (p − T) dependence or the p − T phase diagram as shown in figure 2. Based on crystallographic symmetry, polymorphism, phase transitions can occur in the following sequence: hBN → γN and rBN → βN [26][27][28][29][30][31]. As shown in the phase diagram, figure 2, the cBN phase is usually produced at high temperature (T > 1800 K) and high pressure (p > 4 GPa), while the hBN phase can be fabricated at high T and low p (p 100 kPa). This means that the hBN phase is thermodynamically more stable at low T and p, cBN is stable only at high p, depicted by the Bundy-Wentorf transform curve on the phase diagram. In this region, the cBN phase will form spontaneously [ [29][30][31][32]. Such a thermodynamic diagram as shown in figure 2 was obtained from equilibrium processes. In fact, we used an unbalanced RF sputtering system with the plasma creation processes, thus the above phase diagram is not completely fitted with our sample making process. However, based on this phase diagram, BN can be classified into two subtypes according to the hardness [3,[27][28][29][30][31][32][33][34]. The soft phases of BN have low density and are characterized by sp 2 bonding, figures 1(a) and (b). The hBN and rBN phases are in this soft phase. Besides, turbostratic BN (tBN) and amorphous BN (aBN) also belong to this soft category. There, the aBN has no periodic order of the B and N atoms, and tBN is partially crystallized. 
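As a toy illustration of the p−T regions just described (cBN at high pressure and temperature, hBN at high temperature and roughly ambient pressure), the following sketch encodes only the coarse thresholds quoted above; it is not a substitute for the full Bundy-Wentorf phase diagram and ignores kinetics entirely.

def likely_bn_phase(temperature_k: float, pressure_pa: float) -> str:
    """Coarse phase guess from the thresholds quoted in the text:
    cBN forms at T > 1800 K and p > 4 GPa, while hBN is favoured at
    high temperature and low pressure (around 100 kPa or below)."""
    if temperature_k > 1800 and pressure_pa > 4e9:
        return "cBN (high-p, high-T region)"
    if pressure_pa <= 1e5:
        return "hBN (low-p region, thermodynamically stable)"
    return "near the hBN/cBN boundary - consult the full p-T diagram"

# Examples matching the text: HPHT synthesis vs. ambient-pressure growth.
print(likely_bn_phase(2000, 5e9))   # cBN region
print(likely_bn_phase(1500, 1e5))   # hBN region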
The formation of these phases is largely dependent on the density of defects that may be created during fabrication. The tBN and aBN phases have also been studied quite meticulously in our studies [35][36][37][38][39]. Meanwhile, the hard phases of BN are formed by sp 3 bonding and have a higher density compared to the soft ones. The wBN and cBN phases belong to this type of hard phase, figures 1(c), (d). We do not discuss the wBN and cBN phases much in this mini-review, because many systematic studies of the hard phases of BN have been published previously [3,6]. The fundamental parameters of the BN phases are listed in table 1; some parameters of graphite and diamond are also given for comparison. 1.1. The fundamental properties of the hBN phase As mentioned above, the hBN structure has an ABAB stack type as shown in figure 1(a), and the lattice of this phase is formed from the chemical composition (B 3 N 3 ) n [22,[40][41][42][43][44][45]. There, the c-axis of the ABAB stack is perpendicular to the (B 3 N 3 ) n plane. (Figure 2 caption: the boundaries of the hBN ↔ cBN transformation were defined from experimental data along the Bundy-Wentorf curve; reprinted from [33] with the permission of AIP Publishing.) In the bulk structure, each B atom is bonded to three N atoms, where the B plane is rotated 60° with respect to the A plane and moved along the [0002] direction, perpendicular to the A and B planes. Therefore, hBN is considered a highly crystallized anisotropic layered compound with strong covalent bonds in the layered plane and weak bonds in the third dimension, parallel to the c-axis. The a, b and c parameters of the hBN unit cell are shown in figure 1(a). Due to the lack of free electrons in the hBN crystal, the perfect hBN crystal is an insulator [1,3,23]. Moreover, the layers of the material can easily slide on their planes because of the weak interlayer bonds, thus hBN has very high compressive resistance and can be used in lubricating technology [1,23]. Furthermore, the π-electrons around the N atom partially create ionization of the B-N bond. To take advantage of the superior properties of hBN, various types of heterostructures have been created in combination with hBN [1,12,46]. However, impurities have always been an issue in the fabrication of hBN materials. When impurities are involved in hBN crystal formation, this leads to the creation of other undesirable phases such as tBN or aBN [36]. These sub-phases have a local structure similar to the hBN phase but have long-range disorder of the crystal structure. Our research over the past ten years on hBN materials has mainly concerned structures fabricated at small sizes, such as hBN thin films containing hBN nanowalls (hBN-NWs). Here, an hBN-NW is defined as a group of about a few dozen to several hundred hBN nanosheets oriented vertically relative to the substrate [47]. BN compounds are almost all produced synthetically, with the exception of the cBN structure found in nature by Dr. Q. S. Fang (2009). In most cases, hBN is synthesized using chemical reactions based on boron trioxide (B 2 O 3 ) or boric acid (B(OH) 3 ) and ammonia (NH 3 ) or urea (CO(NH 2 ) 2 ) in nitrogen gas [47,48]. BN can also be produced in a variety of other ways, such as hot pressing, where BN powder together with boron oxide is compressed at high temperature. The thermal properties of the obtained hBN largely depend on the crystallinity of the hBN during compression and heating.
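To visualize the geometry quoted above (a B-N distance of 1.46 Å in the plane and 3.33 Å between the A and B planes), a small coordinate generator for an idealized ABAB stack can be written as below. It ignores defects and edge termination, and the sublattice swap is only a simple way of mimicking the layer-to-layer rotation; it is a sketch, not a crystallographic tool.

import math

D_BN = 1.46          # in-plane B-N distance (angstrom), as quoted in the text
D_INTERLAYER = 3.33  # spacing between the A and B planes (angstrom)

def hbn_sheet(n_cells=2, z=0.0, swap=False):
    """Return (element, x, y, z) tuples for a small patch of one hBN sheet.

    The honeycomb is built on a hexagonal lattice of constant sqrt(3)*D_BN;
    `swap` exchanges the B and N sublattices, which mimics how B sits above N
    in the next layer of the ABAB stack.
    """
    a = math.sqrt(3.0) * D_BN
    a1 = (a, 0.0)
    a2 = (a / 2.0, a * math.sqrt(3.0) / 2.0)
    atoms = []
    for i in range(n_cells):
        for j in range(n_cells):
            ox = i * a1[0] + j * a2[0]
            oy = i * a1[1] + j * a2[1]
            first, second = ("N", "B") if swap else ("B", "N")
            atoms.append((first, ox, oy, z))
            # Second basis atom offset so that the bond length equals D_BN.
            atoms.append((second, ox + a / 2.0, oy + D_BN / 2.0, z))
    return atoms

# Two layers of an idealized ABAB stack.
layer_a = hbn_sheet(z=0.0)
layer_b = hbn_sheet(z=D_INTERLAYER, swap=True)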
The thermal properties of hBN materials are significantly enhanced by annealing under pressure, which further demonstrates that annealing increases the crystallinity of the hBN phase. In addition, the chemical vapor deposition (CVD) method is commonly used with boron trichloride (BCl 3 ) and N 2 precursors [3]. In particular, boron powder reacts with nitrogen plasma at very high temperatures >5000°C, resulting in ultrafine BN structure that can be used in lubricating technology [23]. Using chemical or physical deposition methods, low-dimensional hBN structures such as nanosheets, nanowalls, nanotubes and nanoshells can be fabricated [1,12]. The structural properties of the grown hBN films depend on the density of defects existing in such films. The researchers have found that hBN thin films are generally more defected than graphite or graphene consists of sp 2 monolayers. The BN structure is a binary compound with B and N components, while graphene is composed of only C atoms [1,12,[49][50][51][52][53]. Density and composition of defects can significantly alter the physical and chemical properties of hBN layers. For applications using hBN as the coating material, it is necessary to fabricate an hBN film with few defects. In contrast, using defects as quantum photonic centers, we need to use defects purposefully [54][55][56]. Therefore, scientists continue to exploit the interesting properties of 2D-hBN such as: (i) The chemical and mechanical stability of hBN will be useful for a number of industrial applications requiring high temperature thermal stability and flexible coating [1]. (ii) The hBN has a wide energy bandgap of 5.97eV which can be varied to emit ultraviolet light from deep levels [2]. There, a deep ultraviolet (DUV) emitter was designed and worked successfully when the hBN powder was heated to a certain temperature. This material emits photons with wavelength λ = 225 nm at a steady state of operation. The authors also show that the defect density existing in the hBN material plays an important role in the wavelength emitted from the device. This results in the wavelengths of the photons being in the DUV range of 225-400 nm [2,57]. (iii) In addition, the large area lowdimensional hBN layer is useful for electronic devices which are integrated with graphene (G) [58][59][60]. In such a device, the combination of the hBN dielectric and the high conductive G materials was realized. The 2D hBN film can be used as an insulating layer in heterostructures: metal/insulator/semiconductors (MIS) used in fieldeffect transistors (MIS-FETs) [61][62][63][64]. (iv) The hBN film has a wide band gap, this material is thus used to develop next generations of imaging detectors in the DUV-far-IR region. The hBN layer is also used in MIM tunneling structures [65][66][67]. (v) In particular, the porous hBN material containing many defects is useful for water purification applications [12,68]. Therein, porous hBN nanosheets can be produced by various techniques where they are able to absorb oils, organic solvents and dyes with an absorption weight of 33 times higher than that of its own weight, while the material is hydrophobic [61]. Emerging features of hBN structures at the nanoscale Two-dimensional (2D) materials with large band gaps have recently been of great interest because of their emerging properties, especially 2D-hBN materials [69]. 
Taking advantage of the intrinsic properties of those materials, which can be used for a variety of applications such as quantum technology, quantum computing, quantum communications or highly sensitive sensing devices [70]. Various fabrication methods recently used to fabricate 2D-hBN nanostructures with high purity or intentionally induced lattice defects through the fabrication processes such as bombarding the hBN crystal structures by a certain dose of high energy electrons or ions [71]. The creation of foreign elements implanted in the crystal or forming local lattice defects will cause a change of the system energy, especially a polarization could be formed due to electron deficiency or excess. However, the photon emission mechanism of defects in wide bandgap semiconductor materials such as diamond, SiC and hBN is still unclear [72,73]. Because there are many factors affecting the emission process at which the properties of defects are highly affected by the fabrication process on the optical and magnetic behaviors of those materials and/or the interference of noises occurring around the material surfaces under study. Hence, the factors affecting luminescence at defected sites/vacancies of hBN nanostructures still need to be exploited. Herein, we analyze some experimental results obtained when studying the nucleation and growth mechanisms of hBN-NWs films deposited on different substrate materials, i.e. Si, SiN and diamond using a physical vapor deposition technique, we have created hBN-NWs films with a high crystallinity [36,39]. However, the crystallization process of hBN-NWs still has many defects in their crystal lattice due to the bombarding events inside the chamber of the physical deposition method. The crystallization of hBN-NWs has produced many defects or vacancies, the orientation of the grown hBN-NWs still perpendicular to the substrate surface leading the luminescence detection is difficult because the c-axis of the hBN crystal layers is oriented relatively parallel to the substrate plane. This makes it very difficult to probe the luminescence of defect centers using any optical methods. For example, the optical detected magnetic resonance (ODMR) method will face many challenges because the hBN-NWs films are very rough, noise will be thus the main factor to saturate the signals of ODMR [74]. Due to the given reason, we evaluated the concentration of defects indirectly through measuring qualitatively the concentration of N-H bonds with the FTIR measurement [36]. There, we assumed that the N-H bonds were generated during the crystal formation process. At boron vacancy (VB) sites, we assumed that the free H atoms would bond to the N terminated edges. This assignment was also approved by many recent simulation research papers [75][76][77]. Those data also suggested that the formation energy with N atoms is greater than that of formation with B ones. Therefore, the crystal formation through bombarding of B and N ions from the BN target of the precursor material using a physical radio frequency (RF) sputtering technique will generate more B vacancies than N ones. Moreover, the outcome data from several research groups indicate that, some impurities are believed to interact or intercalate into the lattice defected sites atV B to enhance the emission brightness of the defected centers, i.e. C and O [76,77]. 
Based on the simulation data recently obtained in combination with our experimental results, we will therefore discuss in detail some experimental factors affecting the growth behaviors of the deposited hBN-NWs on different substrate surfaces. In fact, various semiconductor materials can be used as substrates such as Si, GaAs and InP [78][79][80] to enhance the crystallinity of the depositing materials at the substrate surface. Therefore, the surface properties of the substrate materials have a certain influence on the quality of the grown hBN film. During more than fifteen years of research on hBN materials using RF sputtering technique, we have used a variety of substrates to investigate the impact of substrate surface properties on the structural and optical characteristics of grown hBN films [36,39]. We temporarily divided the surface properties of the substrate into three categories. The first is, the Si substrate has a neutral surface characteristic profile, that is, Si atoms in the substrate structure are indirectly bonded to the B and N atoms in the fabrication process. The second is the diamond film grown by a CVD technique. The third is, a metallic bilayer substrate that fabricated with a combination of transition metals to lower the vaporization temperature of the given substrate relative to the higher vaporization temperature of each metal component, and also aiming at using transition metal atoms as catalysts in hBN crystal formation at the early stages of thin film deposition with our RF sputtering system [36,39,81]. Therefore, we will briefly describe in turn the types of substrate surfaces we have used in our studies in the following section: 1.3. Types of substrates used to fabricate hBN films Silicon (Si) is the main material used as the substrate for most of our studies. Herein, we can describe the two types of substrates as Si and diamond materials, as shown in figure 3. We only emphasized on those substrate surfaces because the Si is a neutral surface whilst the C edges are terminated at the NCD one. Such edges are facilitated to bond with free H atoms in our CVD/RF sputtering plasma. Moreover, the NCD substrate layer often consists of both sp 3 (cubic) and sp 2 (graphite) phases of the carbon. Therein, a small amount of the sp 2 phase is located at the boundaries of the NCD particles, while the sp 3 phase is in the cores of those NCD particles. We also used the Cr/Au substrate with its role as a catalyst agent in the crystallization process, however the Cr/ Au crystal structures are not shown in figure 3. In figure 3(a), we can see that the Si(100) surface has two atoms per cell contact to the (100) plane. Those Si atoms can be bonded indirectly to ionized particles such as H − , N + or B − . This binding is highly dependent on various parameters of atoms or ions in contact with the substrate surface and Si surface characteristics [36,39,47]. In most cases, the Si surface is treated as a neutral plane at which the ionized particles will stick after a number of physical collisions of those ionized particles and the chemical kinetics of those ions/atoms at the sole surface. The Si substrate is also capable of forming terminated H edges. In addition to Si(100) which was used as the substrate, the artificial diamond surface is also used to grow hBN films. Diamond is an allotrope of C where the C atoms are covalently bonded to each other [13][14][15][16][17][18][19][20][21]. 
The C atoms are arranged in a variant of the radial settable crystal structure, known as the diamond lattice, figure 3(b). Herein, eight C atoms are located at the corner of the unit cell with bond distance of 3.56 Å. Each C atom is symmetrically bonded tetrahedrally to four other nearest atoms. The bond distance from this atom for each corner of the tetrahedron is 1.54 Å. This very short bond distance creates a very strong bonding within those C atoms, this makes diamond the hardest material in nature. The crystal faces of diamond are terminated with C edges. At those terminated edges, the C atom can bond with either a free H or O atom or different atoms from other functionalized groups to minimize the total energy of the system [6,26,32]. Therefore, the matching possibility of hBN to diamond is also a possible feature for future applications, because diamond is applicable in many different fields such as chemistry and biology. In particular, it can be used as functionalization surfaces [13], biochemical platforms [19][20][21], high charge carrier mobility devices [82]. The high thermal conductivity of diamond is also an advantage [15]. The intrinsic properties of diamond can be modified by adding impurities to the diamond lattice, i.e. turning the diamond into dielectrics, metals or superconductors [18,83]. Even so, in our studies, we have only focused on exploiting the surface terminated C edges of diamond that can bond with free H atoms/ions in our CVD deposition systems. If H atoms are temporarily bonded to C, they can form a virtual buffer at the diamond surface. This buffer layer acts as a spring during the elastic collision between B and N ions when bombarded from the BN target of the RF sputtering. Hence, this buffer layer will temporarily reduce the acceleration of B and N ions, and at the same time limit the ions in the plasma to elastically interact with the neutral substrate surface as the case of Si. This will make the crystallization process faster, resulting in a better crystallized order than using a neutral Si surface substrate [39]. In fact, crystalline diamonds include a wide variety of defect concentrations, sizes and carbon phases [13,21,84]. An important factor commonly used to determine the quality of a CVD diamond sample is the sp 2 :sp 3 ratio [6,85,86]. Single crystal diamond (SCD) exists in various types and usually concerns with the impurity of N atoms [87]. Polycrystalline CVD diamonds can be grown on a non-diamond substrate such as Si, quark or Mo with a different crystal structure. An example of a CVD nanocrystalline diamond (NCD) growth is schematically shown in figure 4. The diamond seed particles are seeded on Si, they are grown step by step as a function of time (t). The growth development is often referred to as the column model [88]. There, each diamond is considered as an individual entity and is grown in the 3D space. In the process of development, the competition of those enlarged entities occurs, and the boundaries of those particles are subsequently defined. The sp 3 phase usually comes from the core of the diamond particle, whilst a large content of sp 2 comes from the boundaries of those crystal diamond particles. Depending on their average grain sizes, one can classify CVD diamonds into three categories: microcrystalline diamonds (MCDs), nano-crystalline diamonds (NCDs) and ultra-nano-crystalline diamond (UNCDs) with average particle sizes in the range of thousands, hundreds, and several nanometers, respectively [83,84]. 
In our work, NCD thin films are mostly used, while the MCD and UNCD types are not discussed in depth. In the experiment, the particle size and sp 2 :sp 3 ratio of the NCD film can be controlled by changing the deposition parameters such as seeding density, temperature and doping. (Figure 3 caption: the unit cells of crystalline (a) silicon (Si) and (b) diamond (nanocrystalline diamond, NCD); the surfaces of these materials were used in most of our studies over the past ten years or more, when hBN films were grown on the given substrate surfaces using a homebuilt radio frequency (RF) sputtering technique.) In addition, transition metals such as Ni, Cr, Fe and Pt can also be used as catalysts in the crystallization process of hBN [46,89]. There, these transition metal atoms participate in the chemical formation of the first lattice cells in the hBN layer structures. However, most studies use transition metals in the form of a single element, in order to work out the role of each transition metal in the formation of the hBN phase with a CVD technique [1,12]. Our studies not only use transition metals as catalysts for hBN phase formation, but the transition metals were also combined into a bilayer aiming to reduce the melting temperature of the bilayer substrate. This facilitates the evaporation process of hBN films in our RF sputtering system at a low working temperature (<600°C). The above substrates were commonly used in our studies over the past fifteen years. In addition, we emphasize analyzing the formation of the hBN phase during growth by the PVD-RF sputtering method as affected by various physical parameters, such as the target-to-substrate distance (d), the angle between the substrate plane and the central target-substrate axis (α), the substrate temperature (T sub ) and the substrate surface behavior when using different materials (Si, NCD, Cr/Au), as just discussed. For each change, we have systematically studied the structural characteristics of the material deposited at the substrate surface; the degree of lattice defects is also systematically examined by means of advanced techniques such as transmission electron microscopy (TEM), Raman and FTIR spectroscopy. To support the experimental TEM investigation, an amorphous substrate of Si 3 N 4 was used [36]. The conclusions based on the data obtained from TEM have also been confirmed by the Raman and FTIR spectroscopy techniques; the amorphous Si 3 N 4 substrate is only for the TEM investigation and does not affect the obtained data or the conclusions drawn from the analyzed results. 2. RF sputtering and experimental parameters 2.1. Unbalanced radio frequency sputtering Unbalanced RF sputtering is a physical deposition method based on the interaction between ionized gases (Ar, N 2 , H 2 , CH 4 , CO 2 ) and a solid target material (BN). In principle, the target material acting as the cathode can be bombarded with inert reactive gas ions or a mixture of both inert (Ar) and non-inert (N 2 , H 2 ) gases [47]. In conventional sputtering techniques, direct current (DC) is used, resulting in a positive charge being generated on the front surface of the target; this charging can be prevented by bombarding the insulating target with fluxes of both ions and electrons [90]. Therefore, an RF potential is applied to a metal electrode placed behind the target. With this RF potential, the electrons oscillate in the alternating field of the applied RF voltage.
Particles with enough energy will cause ionizing collisions, so that the discharge becomes self-sustaining without being extinguished. For this reason, a high voltage is no longer required to maintain the plasma. Since electrons are more mobile than ions, more electrons will reach the BN target surface in the positive half-cycle and, similarly, more ions in the negative half-cycle, resulting in a negative charge on the BN target surface. Hence, a negative DC potential generated on the BN target surface will repel electrons from the BN surface. This creates a sheath with a higher density of ions in front of the BN target. These ions bombard the BN target and sputtering is realized. If the frequency is less than 5 kHz, sputtering does not occur. Therefore, the actual RF frequency is usually in the range of 5-30 MHz, and the frequency of 13.56 MHz is widely used for plasma technology. The plasma forming criteria can be satisfied with this frequency. When electric charges appear between the two electrodes, the electrons will no longer oscillate in this regime because they do not receive enough energy to create plasma; the plasma between the electrodes can thus be extinguished. (Figure 4 caption: a simplified model of the polycrystalline diamond growth process, in which single-crystal diamond particles a few nanometers in size are used as nucleation seeds (a). When the grown diamond particles are large enough and the spaces among them are narrowed, interfaces are created as those particles initially come into contact (b). Those interfaces become increasingly complex as the diamond particles overlap; some priority directions are continuously developed for further growth, and the other ones are eliminated (c).) Hence, if a magnetic field is placed in the same direction as the static electric field, the kinetic energy of the electrons can be enhanced and the plasma will not be extinguished. Subsequently, the performance of the RF plasma can be improved by controlling the strength of the magnetic field between the two electrodes. Various dielectric and amorphous materials are commonly fabricated by RF sputtering. Metals can also be fabricated with RF sputtering if the RF power supply is capacitively coupled to a metal electrode. This prevents DC current in the circuit and prevents the accumulation of negative charge on the metal target. However, we only used one type of BN target throughout the course of our studies. The main difference between RF sputtering and other PVD techniques is the need for impedance matching between the power source and the discharge chamber. Therefore, grounding of the substrate is important to avoid unwanted RF voltage fluctuations on the substrate surface. In our unbalanced RF sputtering technique, the electrons gain energy directly from the RF source to maintain the plasma. The oscillations of the electrons are necessary to ionize the gas molecules present in the plasma chamber, so the RF technique can work at low pressures. This means that the plasma can be created with few collisions of gas ions [90][91][92][93]. The interaction of the active gas ions (Ar, N 2 , CO 2 , H 2 ) and the target material (BN) is quite complex. This process is largely influenced by a combination of gas composition, RF power, working pressure, and the magnetic field strength behind the BN target [93][94][95][96]. These parameters directly affect the nucleation and growth of BN materials on the substrate surface.
Because of that, we fixed the above parameters in order to stabilize the hBN sample fabrication conditions [35][36][37][38][39]. At the same time, we changed the external conditions such as the distance from the substrate surface to the BN target (d), the substrate temperature (T sub ), the tilt of the substrate surface with respect to the virtual line from the BN target center to the substrate (α) and the substrate materials (Si, NCD, Cr/Au), for the purpose of creating hBN with the desired quality and orientation. To generate plasma in our RF sputtering system, the sample chamber is always maintained at a high vacuum to avoid contamination from air entering the chamber. There, the vacuum level always remained at 9.8 × 10 −8 mbar. The gas mixture that we used has been optimized to a composition of Ar (51%), N 2 (44%) and H 2 (5%); with d = 3 cm, a working pressure of 2.1 × 10 −2 mbar and an RF source power of 75 W, this gives optimal hBN sample quality [36,47]. With different gas compositions, we can observe that the color of the plasma is different. The plasma produced by a gas composition of only Ar/N 2 is blue-purple, while in the presence of H 2 the plasma turns pale yellow, as shown in figure 5(a). Over time, erosion of the BN material occurs on the BN target and forms a racetrack groove on the target surface. We can see this effect before and after a certain period of use in figures 5(b) and (c), where the BN target is shown unused and after four years of use. (Figure 5 caption: an example of the plasma environment in the sample chamber of our unbalanced RF sputtering machine, where the evaporation process uses the reactive gases argon (Ar), nitrogen (N 2 ) and hydrogen (H 2 ). When these three gases are present, the color of the plasma bubble is pale yellow, as shown in (a). A high-purity BN ceramic compound target was used. With an evaporation rate of a few tens to several hundred nanometers per hour, the erosion rate of the BN target is relatively low. The surfaces of the BN target recorded when unused and after four years of use are compared in (b) and (c), respectively. Therein, a racetrack resulting from the erosion process is seen. The morphological shape of the racetrack is strongly determined by the magnetic configuration behind the BN target. The dark bands at the edges of the racetrack result from redeposition processes at the BN target surface.) The shape of the groove is largely induced by the design of the magnetic field behind the target [97][98][99]. The trajectories of electrons from the BN target to the substrate surface depend largely on the magnitude of the magnetic field (B) and the magnetic configuration behind the BN target. The trajectory of an electron or ion satisfies the Lorentz equation [98], m dv/dt = q(E + v × B). Here, v, m and q are the velocity, mass and charge of the electron or ion, and E and B are the electric and magnetic field vectors generated by the magnet behind the BN target. The Lorentz force is directly determined by the components of E and B, where the two vectors are always perpendicular to each other in 3D space. Thus, the trajectory of an electron/ion tends to be a spiral around the magnetic flux, with the radius of the helix expressed by the Larmor equation [98], r Larmor = m v ⊥ /(|q| B). Herein, r Larmor is the Larmor radius and v ⊥ is the velocity component perpendicular to the surface of the BN target, while the velocity component parallel to the BN target (v ∥ ) is unchanged.
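A quick numerical check of the Larmor radius quoted above can be done as follows; the magnetic field strength and electron energy are illustrative values only, not measurements from our system.

import math

ELECTRON_MASS = 9.109e-31      # kg
ELEMENTARY_CHARGE = 1.602e-19  # C

def larmor_radius(mass_kg, charge_c, v_perp_ms, b_tesla):
    """r_Larmor = m * v_perp / (|q| * B)."""
    return mass_kg * v_perp_ms / (abs(charge_c) * b_tesla)

def electron_speed_from_energy(energy_ev):
    """Non-relativistic speed of an electron with the given kinetic energy."""
    return math.sqrt(2.0 * energy_ev * ELEMENTARY_CHARGE / ELECTRON_MASS)

# Illustrative values: a 5 eV electron in a 0.03 T field near the target.
v = electron_speed_from_energy(5.0)          # about 1.3e6 m/s
r = larmor_radius(ELECTRON_MASS, ELEMENTARY_CHARGE, v, 0.03)
print(f"v_perp ~ {v:.2e} m/s, Larmor radius ~ {r * 1e3:.2f} mm")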
The vector sum of [E + (v ⊥ ×B)] is the third component of motion, which creates a drift in the direction perpendicular to both E and B directions, this is the so-called Hall drift. This affects significantly in an unbalanced RF sputtering system and negligible in a balanced one [98,100]. In most magnetron deposition systems, the drift effect always exists because the magnetic field design is imperfect. Resulting in a rather complicated trajectory of electrons because of several effects related to its path. This affects the condensation process of BN phases onto the substrates. Many research groups have been studying different sputtering techniques to build a unique model for both theory and experiment. However, quantitative results on such effects are yet to be observed [98,101,102]. The effect of B varies as a function of the deposition time or the thickness of the BN target. Such an effect is assumed to be constant during the sputtering process at which the depth of the erosion groove is small compared to the entire BN target thickness for a deposition time of several tens of hours. One of the technical solutions is used to control B locally by placing solenoid coils rolling around the sample chamber [103,104]. However, this solution is not applicable for our RF sputtering, because the magnetizing process during deposition can affect the condensation processes on the substrate surface. Physical parameters affecting the RF growing process As discussed, several parameters influence the nucleation and growth of thin films. This has raised many questions for researchers when designing a new generation of sputtering techniques [98,105,106]. The reactive gases can be used in the deposition process to combine with the target material. The ions of those reactive gases and the target material can react and condense on a substrate surface [107]. As described, the gas composition Ar/N 2 /H 2 was used for our research purposes. Therefore, the interaction between the ions of the said reactive gases and the BN target is much more complex than in the case of a single gas. During a deposition, various factors such as scattering, trapping, vibration, rotation and bouncing of ions or atoms/molecules on a substrate surface are often of interest [83,108,109]. Those parameters can participate in the dynamics of the chemical and physical processes of the reactive gases in the plasma environment and at the substrate surface. Here, we investigate the properties of deposited hBN films depending on several fundamental physical parameters which are highly related to our homebuilt RF sputtering system. During a deposition process, the mean free path of a particle, i.e. electron, ion, atom, molecule, from the BN target depends on the magnetic field at the backside of the target. Because the magnetic field strength changes as a function of d. This effect slightly affects the deposition rate of BN on any substrate material [36,104]. For each sputtering system, the dependence of the deposition rate as a function of d is different. Our RF sputtering system allows the farthest position of the substrate surface to the BN target is around 7cm. Therefore, we have selected d in the range of 3-6 cm in most of our results, as described in figure 6. This means that if each point on the BN target will have a different solid angle in respect of the substrate if d is changed. That is, the solid angle for a stage of d = 3 cm will be larger than the d = 6 cm case. 
The decrease in solid angle means a decrease in the efficiency of the plasma per unit substrate area as d increases from 3 cm to 6 cm. Because of this, the growth rate (R G ) of the BN material deposited onto the substrate decreases and, as a result, the material properties obtained on the substrate surface will differ. In addition to investigating the change in d, we also studied the effect of rotating the substrate plane to different angles (α) to probe the adhesion of the BN material to the substrate surface and to explore the chemical role of the BN material with respect to the substrate materials [36]. We assumed that if the substrate plane is tilted at an angle α with respect to the virtual line connecting the BN target and substrate centers, the electrons/ions from the BN target will behave differently when they collide with the substrate during a deposition. This can reveal different chemical properties of the deposited BN material with respect to the substrate surface; we discuss the results in detail in section 5.1. Changing d from 3 cm to 6 cm also means that the acceleration of the particles present in the plasma environment is changed, which alters the physical collisions of those particles with the substrate surface. Therefore, the temperature of the substrate decreases naturally when the substrate position is moved away from the BN target [98]; for example, at d = 3 cm, T sub = 125°C and at d = 6 cm, T sub = 78°C, so the temperature difference of 47°C is the result of the 3 cm displacement. Hence, if we compare the quality of BN thin films deposited at the two positions d = 3 cm and d = 6 cm, we have to provide heat energy to the substrate to compensate for the temperature difference. However, to ensure a fair comparison of the properties of the BN films deposited on a given substrate, e.g. Si, NCD or Cr/Au, the d value is fixed at 3 cm and T sub is varied from 125°C to 550°C, depending on the specific research purpose [35-39]. Our RF sputtering deposition system is only capable of raising the substrate temperature up to 550°C; the thermal energy is supplied by a DC source and monitored by a thermocouple probe, as depicted in figure 6. The above parameters affect the properties of the hBN films deposited on the substrate, and the obtained results will be discussed in more detail in section 5. Even so, we can readily anticipate a number of possible effects in the grown hBN thin films, such as lattice deformation and defects in the hBN nanostructures, a change in the composition of BN, heterogeneity of the structures, creation of new phases (e.g. aBN, tBN) and the adsorption of impurities during the deposition process if a rather complex reactive gas composition of Ar/N 2 /H 2 or Ar/N 2 /CH 4 is used. This will alter the physical and chemical properties of the deposited hBN films [36]. The properties of the hBN films deposited on different substrates vary, and consequently the structural and optical properties of those hBN films will change. In particular, the role of intercalated elements at the defect sites of the hBN lattice gives rise to many interesting properties, in which many research groups around the world are currently interested [53-55]. Hence, this mini-review only concerns the results we have achieved, some challenges we are facing and a few solutions that we are addressing. These aspects will provide the necessary information for a PVD approach to growing the hBN phase.
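The solid-angle argument above can be made concrete with a small-substrate approximation, Ω ≈ A/d 2 for a substrate of area A facing the target. The snippet below uses the 1 × 1 cm 2 substrate size quoted later in section 5.1 and should be read as an order-of-magnitude sketch only:

```python
A = 1.0e-4                 # substrate area, 1 x 1 cm^2 [m^2]
for d in (0.03, 0.06):     # target-substrate distance [m]
    # Small-area approximation of the solid angle seen from a point on the target,
    # with the substrate facing the target (cos(theta) = 1).
    omega = A / d**2
    print(f"d = {d*100:.0f} cm -> solid angle ~ {omega:.3f} sr")
# The inverse-square dependence gives a factor-of-four drop from d = 3 cm to 6 cm,
# consistent with the lower deposition efficiency per substrate-area unit at larger d.
```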
Substrate material surfaces

During the course of our studies, we mostly used commercial Si(100) substrates. The Si(100) substrate is considered a neutral surface and is easy to handle during measurements, with both high (ρ = 10-22 kΩ.m) and low (ρ = 7-10 Ω.m) resistivity, which makes it suitable for many common measurements such as Raman and FTIR spectroscopy in both reflectance and transmittance modes. In addition, the NCD substrate is used to take advantage of its surface containing free H-bonds. With this substrate, we assume that the behavior of electrons and ions landing on the substrate surface will differ from the case of the Si substrate surface. Hence, we carefully prepared the Si(100) and NCD substrates before using them in our studies. The surface profiles of the two substrates that we used are shown in figure 7.

Si surface

The surface of a Si(100) substrate is shown at the atomic level in an image taken by a scanning probe microscope (SPM). There, the surface roughness of the Si substrate is characterized through an important factor, ρ RMS , which is calculated as follows [110]:

ρ RMS = sqrt[ (1/N) Σ i (h i − h avg ) 2 ],

where ρ RMS is the surface roughness and h i and h avg are the actual and the average heights of all N points in the scanned Si(100) substrate region. An area of A = 512 × 512 pixels 2 was commonly scanned with a scanning speed of 4 lines/sec, and the exposure time of the SPM image recording is about 3-5 min. From an area of 1 × 1 μm 2 , the mean height (〈h〉), mean roughness (ρ RMS ), mean grain size (〈g s 〉), standard deviation (σ gs ) and mean grain radius can be calculated. The ρ RMS value for the Si(100) substrate surface was calculated to be around 0.2 nm, as shown in figure 7(a). A surface roughness of about 2 Å is an acceptable condition for our depositions. We assume that the Si substrate surface is not oxidized to SiO 2 ; however, a rather thin layer of SiO x may still exist at the Si substrate surface.

Figure 6. A simplified diagram of the deposition setup in the sample chamber of our unbalanced RF sputtering system. Herein, some peripheral physical parameters are defined, such as the distance between the target and substrate surfaces (d), the angle of inclination of the substrate surface from the vertical direction (α), and the heating supplied to the substrate (T sub ), read out by a thermocouple. The reactive gases used in our studies, argon (Ar), nitrogen (N 2 ) and hydrogen (H 2 ), were separately regulated by mass flow controllers and mixed together before being injected into the sample chamber. Reprinted with permission from [36]. Copyright (2016) American Chemical Society.

Nanocrystalline diamond surface

In addition to the Si substrate mentioned above, we used the NCD buffer as a substrate for hBN deposition. Therein, the surface roughness of the NCD buffer changes randomly depending on the conditions of the CVD fabrication method [39]. However, we used NCD substrates with the same fabrication process and a fixed thickness, so the surface profile of the NCD substrate may differ only in terms of the individual NCD grains at its surface. As an example, the surface profile of a 300 nm-thick NCD buffer is shown in figure 7(b), where we can see that the NCD grain faces have very different shapes and expose their faces outward in very random directions.
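The ρ RMS definition above is simply the standard deviation of the measured heights; a minimal Python version, run here on a synthetic 512 × 512 height map standing in for an exported SPM scan, looks as follows:

```python
import numpy as np

def rms_roughness(height_map):
    """RMS roughness: square root of the mean squared deviation from the average height."""
    h_avg = height_map.mean()
    return np.sqrt(np.mean((height_map - h_avg) ** 2))

# Synthetic 512 x 512 height map [nm]; a real input would be the exported SPM scan.
rng = np.random.default_rng(0)
heights = 0.2 * rng.standard_normal((512, 512))   # ~0.2 nm scatter, like a polished Si wafer
print(f"rho_RMS = {rms_roughness(heights):.3f} nm")
```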
As discussed in section 1.3, in addition to diamonds that are created in nature under high pressure and temperature, diamonds can also be synthesized artificially through various methods [15,16,19,20]. However, we use diamonds made by the chemical vapor deposition (CVD) technique in the laboratory, which requires relatively low temperatures of a few hundred to a thousand °C and pressures below 40 kPa [20]. With this technique, the sp 2 and sp 3 phases of carbon are both formed during the NCD deposition; the ratio between the two phases depends largely on the fabrication conditions such as pressure, temperature and gas precursors [13]. Various CVD techniques are in use with different plasma generators, i.e. hot filament, microwave, RF and combustion flame [21]. Here, we use a microwave plasma generator with the two precursor gases H 2 and CH 4 . Some other reactive gases, such as O 2 , Ar and N 2 , can be added during NCD growth, but we do not use them for these research purposes. The plasma generation during NCD fabrication was performed at a microwave frequency of 915 MHz [13-16]. The NCD deposition process realized in our ASTeX reactor is depicted in figure 8 and proceeds as follows. The precursor gases were initially mixed and introduced into the reaction chamber before diffusing towards a substrate surface, e.g. Si. The microwave power is set to 2500 W. Microwaves interact with electrons in the gas phase and transfer energy to them through collisions. This leads to the dissociation of gas molecules and the formation of active molecules and ions in the plasma environment; the microwave activation breaks gas molecules into reactive radicals and atoms [21]. The ions and electrons of those gas molecules are initially created, and the temperature inside the reactor chamber increases to hundreds or thousands of °C. Those molecules, atoms and ions of the reactive gases can be adsorbed, diffused, reacted or etched on the substrate surface until a suitable site for an NCD nucleus to grow is found, at which point the NCD deposition is initiated.

Figure 7. Surface profiles of the two substrates used in this work: (a) Si(100) and (b) diamond. The image of the Si substrate surface was recorded by scanning probe microscopy (SPM) and the diamond surface was imaged by scanning electron microscopy (SEM). Herein, a nanocrystalline diamond (NCD) thin film with a thickness of 300 nm was produced using the microwave plasma enhanced chemical vapor deposition (ASTeX-MW PE CVD) technique of Hasselt University.

There are many ways to explain the nucleation of NCD particles during a CVD deposition. However, there is no unique explanation covering all approaches. This means that the physical and chemical processes that occur during diamond creation and growth do not yet have a complete explanation. One of the outstanding research groups involved in diamond development, led by Prof. Peter K. Bachmann, proposed a suitable picture for diamond growth at the initial stage [111]. There, a C-H-O composition triangle was established based on their experimental results obtained from different reactors with different initial deposition gases. They concluded that the H atom is the most important element in the gas mixture and governs the entire chemical system that builds the diamond structure. A possible reaction process at the diamond surface and its growth is described in figure 8. Therein, the diamond surface is terminated by H radicals because a high density of H radicals exists in the plasma, figure 8(a).
During diamond growth, some H radicals can be removed and replaced by hydrocarbon radicals-CH 3 (methyl), figure 8(b). As a result there is an extra C added to the lattice, figure 8(c). The same process can be observed on the site adjacent to the attached methyl, figure 8(c) and (d). A further H abstraction process on one of the CH 3 groups and produces a radical on it, figure 8(e). This leads to C atoms in the neighboring positions attracting each other to complete the ring structure of diamond, figure 8(f). Resulting in locking the two C atoms into the diamond lattice, figure 8(g). When the two CH 3 reactants bond together, the H atoms are then released. Therefore, the stepwise NCD growth process is to add C atoms to the diamond lattice on suitable surface sites [21,111]. In addition to the above two substrate materials, we also used a bilayer substrate -heterostructure when combining two transition metal elements, Cr/Au [35,[37][38][39]81]. The aim is that the Cr and Au atoms will facilitate the deposition of the hBN films and the combination of those two metals will reduce the melting temperature of the bilayer substrate when compared with the melting temperature of each metal. The results of which will be discussed in section 5.8. Specimen preparations Regarding sample preparation, the procedure of cleaning the Si substrate surface before each prototyping is briefly described in this section. Herein, the Si substrate is commonly used for both PVD and CVD deposition methods. This was followed by the cross-sectional sample processing of SEM (X-SEM) and TEM (X-TEM) measurements. We use X-SEM measurement mainly to determine the thickness of hBN thin films in the range 0.5-4.0 μm. While X-TEM can accurately measure the thickness of thin films <500 nm. Determining the thickness of each hBN thin film, we can work out the growth rate of the thin film at a specific deposition condition, i.e. d = 3 cm or 6 cm and α = 0°or 90°, if the deposition time is known. We also used two X-TEM methods [112], however we highly analyze the results for the case with X-TEM specimens using focused ion beam (FIB) technique. One type of direct TEM sample used an amorphous Si 3 N 4 membrane to grow an hBN film on top, and then directly measure the projection of hBN nanowalls (hBN-NWs) or hBN particles on it. This allows us to determine how the hBN-NWs or particles initiate growth on the substrate. Finally, a metal bilayer of Cr/Au is used as a buffer on the Si substrate, with the aim of taking advantage of the bilayer with transition metals as catalysts to improve the quality of the deposited hBN film. Therefore, the following sections will briefly describe the preparation work for the above research process, which will assist in the interpretation of experimental results in the following sections of this mini-review. Si substrate cleaning Both high and low resistivity Si substrates are cleaned with the substrate cleaning procedure, which is cleaned prior to direct or indirect deposition of hBN films on top. The standard RCA sample cleaning procedure was used [113]. The composition of the solution used to clean the sample is in the ratio 5:1:1 for H 2 O:H 2 O 2 :NH 3 (RCA1) and H 2 O:H 2 O 2 :HCl (RCA2). The mixed solution heated up to 70°C. The Si substrates were immersed in the solution for 30 min. Those Si substrates were then taken out and washed with deionized (DI) water and dried with a pure gas flow of nitrogen [110,113]. The Si substrate surface was examined by SPM as seen in figure 7(a). 
With such surface quality, Si substrate continues to be used for further purposes. If the Si substrate surface is contaminated by a lot of dirt, then they can continue to use the process as described above. NCD seeding As discussed above, we use the NCD film as a substrate for the hBN-NWs to grow on it. To create a thin NCD film on Si substrate by CVD method, we prepared such samples following the procedure with some typical steps. In order to deposit diamonds on the Si substrate surface, this Si surface needs to be seeded with the initial diamond nanoparticles. There, a water-based colloidal suspension of nanodiamond particles with a concentration of 0.33 g.l −1 , was used. Such diamond powder was provided by the NanoCarbon Institute Co., Ltd., Japan [114]. The average size of diamond nanoparticles is estimated in the range of 5 -10nm. To seed diamond particles on the Si surface uniformly, a solution containing the nucleated diamonds was dropped to the Si substrate surface, and the Si substrate was then rotated at 4000 rpm. Therein, the Si substrate containing the seeded diamond particles was washed with DI water for the first 20 s of the 40 s spinning time [115][116][117]. After the Si substrate is seeded with NCD nano-diamond particles, the Si substrate is put into our CVD machine to deposit a NCD layer on it and the deposition process is described in section 4.3. Herein, the principle of diamond crystal nucleation was already described in section 3.2 (figure 8). Si/NCD/hBN heterostructure Prior to growing hBN films on 300 nm-thick NCD substrates, such a 300 nm-thick NCD film was realized with an ASTeX 6500 MWPE CVD [118][119][120]. There, the nanodiamond particles were seeded on the cleaned Si substrate, as described in section 4.2. The Si substrate with the seeded nanodiamond particles was loaded into the given ASTeX machine using a gas composition of H 2 and CH 4 . Working pressure and microwave power were maintained at 25 Torr and 2500 W. The deposition temperature was about 680°C. The Si/NCD bilayer film was then transferred to our unbalanced RF sputtering system for hBN thin film deposition. The transfer between CVD and RF systems is only 2 min, to ensure that contamination from the air is negligible for each NCD film surface. The growth mechanisms of the NCD and hBN nanostructures will be discussed in section 5.7. Cross-sectional SEM and TEM specimens The X-SEM and X-TEM images of the hBN films grown on both NCD and Cr/Au bilayer substrates were recorded. Therein, the samples with the thickness in the range of 0.5-4.0 μm are suitable to characterize by X-SEM image. The hBN thin films grown on the Si substrates were broken, and their cross-sections were exposed and attached to an L-shape substrate holder. The holder is capable of holding the samples upright, to perform SEM imaging from the cross-section of the sample with the normal scanning mode of SEM. Moreover, if the film thickness of samples is less than 500 nm, X-TEM samples were prepared to measure their thickness accurately by a TEM. Here, we used both the gentle ion milling (GentleMill TM ) and FIB techniques [112,[121][122][123][124][125][126][127] for sample cross-sectioning. These two methods were well described in our previous work [112]. Therefore, we skip describing it in this work. 
In order to exploit the structural properties of the interfaces between the substrate and hBN film, the FIB X-TEM samples are used most of all, this will be convenient in comparison of results obtained from samples with and without the NCD film. Because if the NCD layer is present, it is not feasible to use the GentleMillTM technique, as the NCD layer has a hardness much greater than that of the hBN one. Therefore, in this mini-review we mainly discuss the results of FIB X-TEM images. The FIB X-TEM samples are all about 100nm thick which is suitable for the characterization of structural and chemical properties, realized by Dr. Svetlana Kyerschuk at EMAT, Belgium. The detailed steps of FIB specimen preparation were described in [112]. Cr/Au bilayer buffer A DC sputter deposition technique was used to deposit metal materials to create a Cr/Au bilayer structure, performed at the Department of Materials Science and Engineering, National Tsing Hua University, Taiwan, realized by Dr. K. J. Sankaran and Prof. Nyan-Hwa Tai and Dr. Ping-Yen Hsieh. The Cr/Au bilayer was deposited on the Si substrate before growing the hBN thin film. The operation mechanism of the DC sputter is based on the Ar + gas glowing discharge [104,107]. The sputtering chamber contains a metal target (Cr, Au, Ni, Fe) as the cathode and a substrate holder as the anode, maintained in a high vacuum. The metal conducting target is bombarded by the high energy of Ar + ions, resulting in glow discharge plasma. The Si substrate is in contact with the plasma, a thin metal film is thus deposited. The thickness of the metal thin film depends on the deposition time and the distance from the metal target to the Si surface. Herein, the Cr and Au layers are deposited with their thicknesses of 10 nm and 100 nm, respectively. This aim at the thin Cr layer will help create the initial adhesion at the Si surface so that the Cr/Au bilayer will stick well on the Si surface. Moreover, the melting temperature of the Cr/Au bilayer will be much lower than the melting temperature of each individual metal [81]. Si 3 N 4 -TEM membrane To understand the nucleation of hBN-NWs on any substrate surface, we observed the hBN-NWs growing from the initial stage of thin film growth with TEM. With this approach, TEM images of hBN-NWs were viewed from the top of those hBN-NWs, perpendicular to the substrate surface. As mentioned above, the Si 3 N 4 -TEM membrane was chosen for this purpose. Using such a TEM membrane is suitable for direct structural analysis of TEM images, while being unaffected by sample preparation by FIB or GentleMill TM . The Si 3 N 4 thin film is not only transparent to the 200 keV electron beam of the TEM, but it is also an amorphous material where the interaction of the TEM electron beam with the Si 3 N 4 substrate is almost isotropic. The Si 3 N 4 membranes are typically around 35 nm-thick, supported on a 500 μm-thick silicon frame with an electron transparent window of 100 × 100 μm 2 , the Si 3 N 4 membranes were provided by TED PELLA, INC [39]. 5. hBN nanowalls growing on tilted substrates and varying in d 5.1. 0°and 90°-tilted substrates As discussed in section 2.2, growing hBN thin films on Si substrates rotated with different angles (α) to determine the role of Si neutral substrate surface. Moreover, the reactive gas components of RF sputtering contain a very strong reducing agent of H 2 . Hence, we want to see the role physical and chemical processes induce the process of hBN deposition [36,47]. 
To determine such effects, a series of samples was deposited with d varying in the range of 3-6 cm. The samples were grown under the same deposition conditions, such as gas precursors, working pressure and substrate temperature. After the hBN-NWs were grown, X-SEM images were recorded to measure the film thicknesses. The thickness value of each sample corresponds to its deposition time, which lets us calculate the R G values (nm/h). In addition to fabricating hBN films in the normal case (α = 0°), we also deposited hBN films at the α = 90° stage. In particular, when the Si substrate was rotated by 90°, we designed a Si substrate about four times larger than a regular one (1 × 1 cm 2 ). This Si substrate was erected so that its plane passes through the center of the BN target; it was glued to an aluminum plate, which was attached to the Mo substrate holder of the RF sputtering machine. Hence, we assume that the temperature at the sample positions close to the BN target (d = 3 cm) is similar to that at the locations far from the BN target (d = 6 cm): because the thermal conductivity of the aluminum plate is good enough, there is not too much temperature variation between the near and far positions of the hBN film. The R G values are calculated for each position of d as the means over the films, and the error of each measurement is calculated from five different measurements at the same d value. The correlation of the R G values as a function of d is shown in figure 9(a). In the R G (d) correlation, we found that the trends of the R G (d) functions for the two rotated Si substrates are similar in their linear form [36]. A small difference lies in the slopes of these two linear functions, which can be seen from the difference between the R G values taken at the two α values (ΔR G ) at two positions of the sample with respect to the BN target, e.g. ΔR G = 75 nm/h at d = 3 cm and ΔR G = 40 nm/h at d = 6 cm. The decrease in the R G value between the two α values is understandable, because the solid angle from the d positions of the α = 0° case is reduced by 50% when the Si substrate is rotated by 90°. This means that the flux of particles deposited on the Si substrate surface is physically reduced by 50%. In addition, the strong decrease of the R G value at the d = 3 cm position indicates that the kinetic processes occurring at the Si surface play an important role. Therein, the kinetic processes of the ions at the Si surface are quite fast for a small d value, so the efficiency with which the hBN material adheres to the substrate surface is low. As a result, the degree of adhesion of the hBN material at d = 3 cm is smaller than in the d = 6 cm case, leading to a higher porosity of the hBN films deposited at locations closer to the BN target. This result is confirmed by two X-SEM images of two hBN films with a thickness of 2.86 μm, deposited at d = 3 cm for the two different angles, figure 9(b) and (c). Based on the contrast of the two X-SEM images, we can see that the size of the hBN-NWs in the film deposited at α = 0° is larger than in the sample with α = 90°. That is, the density of nucleated hBN-NWs is low in the α = 0° case compared with the α = 90° one. This results in the porosity of the hBN films fabricated at α = 0° being lower than that of the α = 90° case. Therefore, in the sample fabricated at α = 0° the hBN-NWs grow linearly, because the spacing between the hBN-NWs is large enough, figure 9(b).
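The linear R G (d) trends discussed above can be extracted with an ordinary least-squares straight-line fit. The sketch below shows the procedure; the numbers in the arrays are placeholders for illustration, not our measured growth rates:

```python
import numpy as np

# Placeholder data: replace with the measured R_G values derived from the X-SEM thicknesses.
d_cm        = np.array([3.0, 4.0, 5.0, 6.0])
rg_alpha_0  = np.array([150.0, 120.0, 95.0, 70.0])   # hypothetical R_G [nm/h], alpha = 0 deg
rg_alpha_90 = np.array([ 75.0,  62.0, 48.0, 30.0])   # hypothetical R_G [nm/h], alpha = 90 deg

for label, rg in (("alpha = 0 deg", rg_alpha_0), ("alpha = 90 deg", rg_alpha_90)):
    slope, intercept = np.polyfit(d_cm, rg, deg=1)   # least-squares straight line
    print(f"{label}: R_G(d) ~ {slope:.1f} * d + {intercept:.1f}  [nm/h, d in cm]")
```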
In the α = 0° sample, the hBN-NWs therefore grow spontaneously and do not compete for space at the early stage of thin film growth, resulting in less overlapping of the NW-branches than in the samples fabricated at α = 90°. This assessment is clearly seen through the amplitude contrast of the X-SEM images in figure 9(b) and (c). The early overlap of NW-branches can be observed in figure 9(c), where a higher density of hBN-NWs deposited on the 90°-rotated Si substrate is observed within one-third of the film thickness from the Si surface (R 1 ). The hBN particles continue to grow and enlarge in size, and consequently compete for spacing. Some hBN-NW branches were suppressed in this competition, while others grew further, resulting in a higher porosity of the hBN film in the mid-thickness region due to the presence of bigger hBN-NWs (R 2 ). When the grown hBN-NWs are large enough, they continue to nucleate small branches that compete with each other for space. Those hBN-NW branches compete with each other to increase the density of the material, or to decrease its porosity, as seen in the R 3 region of figure 9(c). In particular, even though the Si substrate is rotated by 90°, the orientation of the hBN-NWs does not change: all the hBN-NWs tend to grow perpendicular to the Si substrate surface. The third axis of the hBN-NWs (c-axis) therefore tends to be parallel, or at a very small angle, to the Si substrate surface plane. This can be visualized directly from our two X-SEM images, and it was also studied in detail by our group member [128]: the c-axis of the hBN-NWs can be twisted at an angle of < 20° from the horizontal. Such a conclusion is quite consistent with the results we observed directly from those X-SEM images. To further explore the growth of the deposited BN material near the Si substrate surface, X-TEM images were recorded of an hBN film deposited on the Si substrate with a thickness of 300 nm, d = 3 cm and α = 0° [36], as shown in figure 10. The 300 nm thickness was chosen so that the entire film thickness of the X-TEM specimen can be seen with the TEM technique. Since this specimen has only a single hBN layer grown onto the Si substrate, we prepared the X-TEM specimen by the GentleMill TM method [112] to reduce the damage caused by a high-energy ion beam such as FIB. From the X-TEM image, we can see very clearly that two hBN films face each other because of the GentleMill TM preparation method, with the two Si substrates appearing on opposite sides of the head-to-head hBN film border, figure 10(a).

Figure 9. (a) The growth rate (R G ) of the hBN films deposited at different distances (d) from the BN target to the plane of the substrates tilted at different angles (α = 0° and 90°). The dashed lines serve as guides for the eyes. Each data point in the graph was calculated from a single deposited film; each error bar was calculated independently for each hBN film as the standard deviation of the thickness measured at ten different points in the cross-sectional SEM (X-SEM) image. The measurement error comes from a single film and does not represent repeatability for different films. (b) and (c) are typical X-SEM images of two hBN films of approximately the same thickness deposited at d = 3 cm with α = 0° and α = 90°, respectively. The SEM images of these samples are also seen in the right panels, where the density of hBN-NWs is low in (b), while it is higher in (c), in which the wall branches of the hBN-NWs emerged and participated in the growth process. In particular, the three thickness regions denoted R 1−3 in the X-SEM image in (c) are determined based on the amplitude contrast of the X-SEM image; the explanations of these regions are detailed in the text. Reprinted with permission from [36], Copyright (2016) American Chemical Society.

From the contrast of the X-TEM image, the thickness of the deposited hBN film was estimated to be around (300 ± 20) nm, as shown in figure 10(a). This thickness value is consistent with the expected growth thickness controlled by the deposition time, calculated from R G as seen in figure 9(a). A change in the structural properties of the deposited hBN film is also detected, based on the contrast of the TEM image. The results show that the BN film is non-uniform as the thickness increases: closer to the Si substrate surface, the region has lower contrast in the TEM image, indicating the presence of less crystalline structures, a conclusion consistent with the data shown in figure 9. To clarify the intrinsic structure near the Si/hBN interface, a high-resolution X-TEM (HRTEM) image was recorded at a location close to the Si substrate surface and the deposited BN film, as shown in figure 10(b). From the HRTEM image, we found that the material near the Si surface is a mixture of the imperfect hBN phase (tBN) and a part of the aBN phase. That is, short-range-ordered atoms in the hBN film dominate at the early stage of thin film growth. Farther from the Si substrate, towards the mid-area of the hBN film, the hBN phase begins to appear, as shown in figure 10(c). The hBN phase grown as hBN-NWs is partly in the visible region, but the two undesirable phases, tBN and aBN, remain; these two unwanted phases are confined in the spaces between the hBN-NWs. The hBN phase in the hBN-NWs is clearly seen in figure 10(c), where the hBN lattice planes are not perfectly aligned. This indicates a change in the lattice spacing between the hBN nanosheets, resulting in lattice distortion possibly caused by crystal defects in the grown hBN-NWs. The dominant growth direction of the hBN-NWs is vertical relative to the Si surface.

hBN nanowalls grown in differing film thicknesses

As discussed in section 5.1, all hBN films were grown and characterized with X-SEM and X-TEM images. To get a comprehensive view of the growth of hBN-NWs on the Si substrate surface, we measured plan-view SEM images of those hBN films at differing thicknesses (t d ). This aims to clarify the growth process of many hBN-NWs on a large scale and to investigate the interactions between hBN-NWs during their deposition. Hence, plan-view SEM images of hBN films were recorded for varying t d ; some typical SEM images are shown in figure 11. The SEM images shown in the left panel of figure 11 are for the films deposited at d = 3 cm, and the SEM images in the right panel are for the d = 6 cm samples. Figure 11 indicates that the growth characteristics of the hBN-NWs vary with the t d of the hBN films. In particular, at the same thickness t d ∼ 300 nm, a lower density of hBN-NWs grown per unit area of Si is observed at d = 3 cm compared with the case of d = 6 cm. With increasing t d , the size of the hBN-NWs increases linearly for the films with t d ≤ 710 nm. If the thickness continues to increase up to 1430 nm, the NW-branches have started to grow and dominate over the growth of the cores of the hBN-NWs.
The growth of the hBN films is also related to the growth of the NW-branches, which keeps the growth rate of the thin films low. There, the NW-branches tend to fill the spaces between the hBN-NWs. When the NW-branches are large enough (t d ∼ 3050 nm), we found that the morphological characteristics of the hBN films are quite similar for both cases of d = 3 cm and d = 6 cm. These results obtained from the given SEM images are consistent with the conclusions extracted from figure 9. Furthermore, the results become more comprehensive when we directly see the shapes of those hBN-NWs and their branches grown in hBN films of different t d ; these observations support the arguments drawn from the data of figures 9 and 10. To further investigate the crystal quality as a function of the film thickness, Raman and FTIR spectra were recorded for the thin films shown in figure 11. In principle, when an infrared or laser beam illuminates an hBN film, the hBN lattice vibrates, leading to a change in the dipole moment during an oscillation period [129], which is constrained by the phonon modes of the hBN lattice [61]. Therefore, the IR-active modes absorb energy at individual frequencies related to the vibration modes [129,130]. Those modes appear in the interferometer after the beam has been transmitted through or reflected from the grown hBN film; after a Fourier transform (FT) of the signal, the absorption spectrum of the hBN film is obtained. Since the hBN-NWs in the hBN films are oriented perpendicular to the Si surface, FTIR and Raman measurements were preferably realized perpendicular to the hBN film surface, as described in figure 12. Here, the FTIR spectra were mainly recorded in transmission mode in order to exploit the intrinsic properties of the hBN films. As discussed above, the c-axis of the hBN-NW lattice is oriented roughly parallel to the Si substrate surface; the directions of the incident IR and laser beams in both FTIR and Raman measurements are described in figure 12(a). The vibration modes of the hBN lattice when IR and laser beams interact with the hBN films are shown in simplified form in figure 12(b) and (c). Characterizing the optical vibrational modes of hBN crystals indirectly probes the intrinsic properties of the material, i.e. crystal anisotropy, thin film thickness, crystallinity, and the elemental composition present in the crystal of the sample. To prevent contamination from the adsorption of H 2 O and CO 2 from the air, a continuous flow of dry nitrogen gas was supplied into the chamber of the FTIR instrument, which ensures that unwanted absorption from the air is negligible. Each measurement takes about 30 min with a resolution of 2.0 cm −1 in the frequency range of 400-4000 cm −1 . There are two IR-active optical phonon modes for the hBN crystal, with frequencies at 783 cm −1 and 1376 cm −1 [3,131,132]. These two IR modes, denoted A 2u and E 1u , correspond to the out-of-plane and in-plane B-N bonds, respectively, as described in figure 12(b). In addition, there is a Raman-active optical phonon mode at 1367 cm −1 , denoted E 2g [3], which is described in figure 12(c). Depending on the relative orientation between those phonon modes and the incident beam direction, the crystal and phase characteristics of the grown hBN film can be determined.
Raman spectra of the hBN films were recorded in the backscattering configuration using a blue Ar+ gas laser beam with a wavelength of λ = 488 nm and a power of 10 mW, an integration time in the range of 5-300 s and a chromatic slit size of 0.1 mm [110]. We did not only perform Raman spectroscopy measurements on the sole hBN films, but also measured Raman spectra of NCD/hBN heterostructures, as shown in figure 13. Figure 13 shows the high-frequency Raman-active peak of the hBN film (E 2g hBN ) at 1367 cm −1 [132], while the NCD film has two vibration peaks, sp 3 NCD and sp 2 NCD [133,134]. The first peak is related to the vibration of the NCD cubic lattice; it is a sharp peak located at a frequency of ν = 1333 cm −1 . The second peak is related to the graphitic carbon phase, which appears as two broadened peaks, denoted 'G' and 'D'. These peaks are contributed by the random vibrations of carbon atoms in the covalent plane. The 'G' peak at ν = 1575 cm −1 results from the movements of neighboring carbon atoms in opposite directions within the sp 2 NCD plane. The 'D' peak results from dislocations in the NCD lattice; it is also known as the disorder-induced mode. The relative intensity of the 'D' and 'G' peaks can be used as a rapid means of determining the degree of disorder in the NCD sample [6]. The sp 2 carbon phase also has a peak overlapping those two 'D' and 'G' peaks, occurring at a frequency of ν = 1487 cm −1 .

Figure 12. Optical properties of the grown hBN films were characterized by FTIR and Raman spectroscopy. In both measurements, the incident beams are always perpendicular to the Si substrate surface, as described in (a). The infrared (IR) and laser beams of those techniques, penetrating the hBN films in either transmittance or reflection mode, interact with the hBN lattice within the films. Therein, we assume that the hBN nanosheets are vertical to the Si substrate surface. Therefore, the simplified phonon modes of the crystalline hBN films with respect to the IR and laser beams are described in (b) and (c).

Figure 13. Raman spectra of a heterostructure containing the grown NCD and hBN films, which have the same thickness of 300 nm. Therein, the NCD film was first grown on the Si substrate surface, then the hBN layer was deposited on the given NCD film to form a Si/NCD/hBN structure. The Raman spectra show some typical vibration modes of diamond (sp 3 ), hBN and graphite (sp 2 ); these phases are present at the cores of the NCD particles, in the hBN-NWs and at the boundaries of the NCD particles, respectively. Reprinted from [39], Copyright (2017), with permission from Elsevier.

We did not exploit the NCD film in detail, but such information is necessary to recognize the presence of the NCD phase; the quality of the NCD film needs to be determined in case the hBN film is deposited on top of the NCD one. For Raman spectra, the inverse of the crystal size is proportional to the full width at half maximum (Γ 1/2 ) of the high-frequency Raman peak (E 2g ) [132]. That is, if the value of Γ 1/2 at this frequency position decreases, we can conclude that the crystallinity of the material increases. Raman spectra and Γ 1/2 values at the E 2g frequency were measured and calculated for the hBN films deposited at d = 3 cm and d = 6 cm, as given in figure 11; the results are shown in figure 14. The Raman peaks were fitted with a Gaussian distribution function to estimate the crystal size via their Γ 1/2 value.
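The Γ 1/2 extraction mentioned in the last sentence amounts to a single-peak Gaussian fit. A minimal version with SciPy is sketched below on a synthetic spectrum; the peak height, width and noise level are invented stand-ins for a measured Raman trace around the E 2g line:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, a, x0, sigma, offset):
    return a * np.exp(-(x - x0) ** 2 / (2 * sigma ** 2)) + offset

# Synthetic spectrum standing in for a measured Raman trace around the E_2g peak.
wavenumber = np.linspace(1300, 1430, 400)                     # [cm^-1]
rng = np.random.default_rng(1)
counts = gaussian(wavenumber, 900, 1367, 12, 50) + rng.normal(0, 10, wavenumber.size)

p0 = [counts.max(), 1367, 10, counts.min()]                    # initial guess
popt, _ = curve_fit(gaussian, wavenumber, counts, p0=p0)
fwhm = 2 * np.sqrt(2 * np.log(2)) * abs(popt[2])               # Gamma_1/2 of the fitted peak
print(f"E_2g position: {popt[1]:.1f} cm^-1, Gamma_1/2 = {fwhm:.1f} cm^-1")
```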
In fact, the Raman spectrum of a deposited BN film is a combination of contributions from the three phases hBN, tBN and aBN. The hBN and tBN phases contribute mainly to the peak intensity and the Γ 1/2 value, while the aBN phase makes no contribution. Figures 14(a) and (b) show that the intensity of the Raman spectrum increases as the thickness of the hBN film increases. This means that the summed interaction between the laser beam and the hBN lattice is proportional to the film thickness. In addition, the width of the spectrum is inversely proportional to the thickness of the hBN thin film, which indicates that, in the thin samples, the signal caused by unwanted phases contributes predominantly to the entire Raman spectrum, leaving the Raman signal blurred. Herein, the peak of the Raman spectrum corresponding to the E 2g mode at ν = 1367 cm −1 is quite uniform for most of the samples, which indicates that the hBN phase is evident in the hBN films. Figure 14(c) shows that, for the hBN films with thickness t d > 1 μm deposited at d = 6 cm, the obtained hBN phase is quite homogeneous and governs the overall quality of the samples. In other words, the Raman signal is not affected by the BN layer close to the Si substrate surface. For the hBN films with thickness < 1 μm, the unwanted phases created at the Si substrate surface strongly influence the Raman signal, resulting in a wider Raman spectral width and a correspondingly large Γ 1/2 value. This result is in good agreement with the data obtained from figures 10 and 11. As for the hBN films fabricated at d = 3 cm, the quality of the hBN phase is quite uniform and is less affected by the material layer close to the Si surface. This leads to low and stable values of Γ 1/2 as a function of thickness.

Figure 14. Raman spectra of the hBN films with different film thicknesses deposited at d = 3 cm (a) and d = 6 cm (b), which are described in figure 11. The mean Γ 1/2 values of the E 2g peak were calculated from Raman spectra measured at five different locations on each film, as shown in (c). The error bars represent the standard deviation of the Γ 1/2 mean value for each hBN film measured at five different locations. Reprinted with permission from [36]. Copyright (2016) American Chemical Society.

hBN nanowalls grown in differing T sub

The above results did not take the effect of temperature into account, because the hBN films deposited at d = 3 cm and d = 6 cm differ by only about 47°C. This temperature difference arises naturally from interactions between atoms or ions inside the plasma chamber of the RF sputtering system. Therefore, we investigated the influence of the temperature supplied to the sample substrate during deposition. Therein, we predict that some kinetic processes occurring at the Si substrate surface during the deposition will be directly influenced by the substrate temperature (T sub ). Therefore, a series of hBN thin films was deposited at different T sub , X-SEM images of those samples were recorded, and the deposition rate (R G ) was then calculated for each specific value of T sub . The R G values as a function of T sub are plotted in figure 15(a). Furthermore, since the reactive gas mixture of the deposition process contains H 2 and the H ions have very strong reducing properties, they can act like an acid that corrodes the hBN material during the deposition process. This process counteracts the growth of the hBN film over time.
Although we cannot directly measure the kinetic effect of the said corrosion process, we can estimate this effect in the static state. Hence, the fabricated hBN thin films were reloaded into the sample chamber and exposed to the H 2 plasma, with the Ar and N 2 active gases removed [36]. The removal of the two gases Ar and N 2 causes the working pressure in the sample chamber to change; therefore, we added an extra amount of H 2 to the sample chamber to compensate, which means an increase in the corrosion concentration. At this stage, we assume that the corrosion process due to the increased H 2 concentration balances the bombardment produced by Ar and N ions during normal deposition. The hBN film thicknesses before and after etching by the H 2 plasma were measured, from which the etching rate (R E ) as a function of T sub was found, as plotted in figure 15(b). As shown in figure 15, for the hBN films deposited far from the BN target (d = 6 cm), T sub only slightly influences the growth rate of the hBN films. When T sub increases from 78°C to 250°C, R G tends to increase slightly. This shows that, in the temperature region < 250°C, the deposition process is enhanced by T sub , which makes the H ions more active and promotes the adhesion of the hBN material at the early stage of thin film deposition. When the temperature rises above 250°C, the etching process plays the key role, leading to a decrease in the R G value. For the hBN films deposited close to the BN target (d = 3 cm), R G was less affected by T sub < 250°C, but T sub did play a key role for T sub > 250°C, where the R G value decreased strongly as the temperature increased. We tentatively conclude that the substrate temperature greatly affects the growth rate of the hBN films. However, the question is why this effect is so large in the films deposited closer to the BN target. We assumed that this effect is caused by the etching action of the strongly reducing H ions at high temperature. Therefore, we measured this effect with the R E -T sub curve, as shown in figure 15(b). The obtained results are in agreement with our initial predictions: the etching rate of the samples deposited at d = 3 cm increases as a function of T sub for T sub > 250°C, while the effect is not obvious in the hBN films deposited at d = 6 cm.

Figure 15. Each data point represents a single deposition or etching experiment; error bars were calculated independently for each hBN film, and each mean value was calculated from ten different locations on the X-SEM image of the sample used to determine each thickness measurement. The substrate temperature was first raised by the external heating source to the temperature naturally generated by the interaction of the reactive gas ions with the substrate surface at the positions d = 3 cm (T sub = 125°C) and d = 6 cm (T sub = 78°C). The later measurement points were then started from T sub = 125°C up to 500°C with an increment step of 125°C. For the etching case, the grown films were put back into the RF sputtering chamber after deposition and the reactive gases Ar and N 2 were removed, leaving only H 2 in the RF chamber, to carry out the etching measurements. The etching rate was calculated as the thickness change of the grown film before and after etching divided by the etching time. Reprinted with permission from [36]. Copyright (2016) American Chemical Society.

The trend of the R E -T sub curve (d = 3 cm) appears to be inverse to that of the R G -T sub curve; our initial prediction was thus correct.
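The etching-rate bookkeeping described in the figure 15 caption is a one-line calculation; the helper below spells it out with hypothetical thickness values rather than measured ones:

```python
def etching_rate(thickness_before_nm, thickness_after_nm, etch_time_h):
    """R_E = (t_before - t_after) / etch_time, i.e. thickness lost per hour [nm/h]."""
    return (thickness_before_nm - thickness_after_nm) / etch_time_h

# Hypothetical numbers, only to show the bookkeeping: a 700 nm film thinned to 640 nm in 2 h.
print(f"R_E = {etching_rate(700.0, 640.0, 2.0):.1f} nm/h")
```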
Therefore, we can conclude that hBN crystal growth is strongly dependent on T sub at d = 3 cm. In the case of d = 6 cm, the density of hBN-NWs is high where the hBN particles adhere to the Si substrate surface at the early stage of thin film growth, leading to the growth and etching rates of the hBN films are low. As the density of hBN-NWs is related to the growth of NW-branches. Hence, the etching process is expected to affect the NW-branches directly rather than the cores of the hBN-NWs. This effect could also be seen in X-SEM images, recorded from the surface of the deposited hBN films at two different T sub and d values, figure 16. Figure 16 shows that two hBN films were grown at two different T sub , the morphology of those two films are clearly different. The morphological properties of the two films grown at d = 3 cm and d = 6 cm with T sub = 125°C are quite similar. When the temperature increased up to 500°C, those film surfaces changed significantly. Therein, the film deposited at d = 3 cm and T sub = 500°C did not show the existence of hBN-NWs on its surface, and the morphological behavior was replaced by distinguishable clusters with the sizes of BN particles being very small. These clusters can be distinguished among each other by the amplitude contrast of the SEM images, denoted by the yellow dashed lines in figure 16(b). In contrast, the film fabricated at d = 6 cm and T sub = 500°C still exhibited the existence of hBN-NWs on its surface. These hBN-NWs are different from those that existed on the film grown with T sub = 125°C at which the NW-branches have been truncated and the cores of hBN-NWs remained, figure 16(d). Those hBN-NWs and NW-branches are denoted by yellow and cyan arrows in those SEM images. Clusters surrounded by yellow dashed lines have an average diameter of about 0.5-1.0 μm, figure 16(b). The formation of these BN cluster boundaries might follow the growth model of diffusion-limited aggregation during deposition with the mechanism of chemical rather than physical diffusion [135][136][137][138]. This hints that chemical processes strongly influenced the deposition process where chemical and temperature effects strongly influenced the intrinsic structure of the hBN film. Thermal stability and defects of hBN-NWs From the data discussed above, it leads us to conclude that the quality of the hBN-NWs changes as a function of the experimental parameters. Therein, optimal conditions can be selected for the deposition of the best quality hBN films. Even so, the intrinsic properties of the hBN-NWs reveal that they have many lattice defects, resulting in lattice distortion as shown in figure 10(c). Furthermore, when studying the surface properties of hBN films with their SEM images, we found that the material etching caused by H ions is very significant. Based on the obtained results of other research groups, we predict that the H ions will be temporarily bound during the deposition of hBN films. Therein, the H atoms will bind temporarily at N and B vacancies in the hBN lattice. H ions can be intercalated on those vacancies during the deposition of hBN films, H is thus an important factor to support the formation of hBN lattice cells. However, it is also the main factor causing defects in the hBN crystal lattice. Due to the fact that the random and complex arrangement of atoms in the physical and chemical kinetic processes that occur at the Si substrate surface, resulting in the hBN film contains many defects. 
If defects exist, the residual bonds of B or N will connect directly to the free H ions. This means that there are N-H or B-H bonds in the grown hBN films [139-142]. Hence, these bonds should be the clue to determining the degree of defects in the grown films [36]. Therefore, FTIR spectra of the hBN films deposited at different temperatures were recorded. We have (i) investigated the possibility of the existence of N-H or B-H bonds in the grown samples and (ii), if the given bonds exist, how stable they are under T sub . We examined most of the grown samples carefully for the B-H bonds, with their IR vibration frequency of 2400 cm −1 , and we did not find the said vibration signal despite increasing the FTIR spectrum acquisition time. Even so, our results are consistent with the experimental and simulation data of other research groups [139,140]. Those groups also concluded that the B site in the hBN lattice is often the preferred position in the crystallization process of hBN materials. Thus, we also postulated that, if there is a lattice defect at the B site, the H ions will bind to the N-terminated edges. Because of that, we assumed that our hBN films mostly contain N-H bonds. FTIR spectroscopy was used to quantify the strength of the H-N bonding vibrations as a measure of the relative density of H present in the various hBN films. A series of hBN films with a thickness of 3.5 μm was deposited at d = 3 cm and d = 6 cm at differing T sub . The FTIR transmission spectra of the hBN films were recorded around the H-N vibrational mode at ν = 3450 cm −1 [143,144], the results of which we reported in [36], as recited in figures 17(a) and (b). FTIR measurements were taken perpendicular to the substrate surface, or quasi-parallel to the planes of the hBN-NWs, as described in figure 12. The summed IR absorption at the typical phonon peak of the FTIR spectrum was used to estimate the IR absorption band area at the N-H vibration mode (S IR−absorption−band ). This factor is calculated as the product of the absorbance and the bandwidth at half-absorbance, expressed by the following equation:

S IR−absorption−band = A peak × Δν 1/2 , (4)

where A peak is the peak absorbance and Δν 1/2 is the bandwidth at half-absorbance, evaluated after subtracting the given spectra. With the said spectral subtraction, the spectra of the individual hBN films are obtained, and we used 128 averaged scans to obtain the final spectrum for each measurement. The peak areas of the FTIR spectra were calculated using equation (4), as described in figure 17(c), and the obtained results are shown in figure 17(d). Therein, the temperature difference of 47°C between d = 3 cm and d = 6 cm was corrected by heating the samples measured at those d values to 125°C. This value is equal to the temperature naturally generated during the physical interaction of the ions in the plasma environment with the substrate surface at d = 3 cm. However, the FTIR spectrum of the unheated hBN film grown at d = 6 cm was still included for comparison. The data shown in figure 17 show that the peak position of the H-N bond may differ slightly, resulting from differences in the H-N binding environment. This can be explained by the fact that the defects and impurities present in the hBN films are not uniform. In addition, the presence of undesirable phases such as aBN and tBN affects the order and orientation of the N-H bonds. The absorbance areas obtained from the FTIR absorption spectra tended to decrease as T sub increased from 125°C to 250°C for the samples deposited at d = 3 cm. This suggests that the H-N bond density is high in the samples deposited at low T sub .
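Read literally, equation (4) is the product of two numbers taken from the background-subtracted FTIR spectrum. The sketch below shows that bookkeeping with invented example values for the N-H band; the 0.12 absorbance and 180 cm −1 width are illustrative only:

```python
def absorption_band_area(absorbance_peak, fwhm_cm1):
    """Equation (4): band area as peak absorbance times bandwidth at half-absorbance."""
    return absorbance_peak * fwhm_cm1

# Hypothetical N-H band near 3450 cm^-1: peak absorbance 0.12, half-absorbance width 180 cm^-1.
print(f"S_IR-absorption-band ~ {absorption_band_area(0.12, 180.0):.1f} (absorbance x cm^-1)")
```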
The density of H-N bonds remains stable and does not decrease further when T sub ≥ 250°C. This means that T sub = 250°C is the critical temperature at which the N-H bonds are stable in the films deposited at d = 3 cm. The number of N-H bonds decreases with increasing temperature. This also means that the dangling bonds at the N-terminated edges are reduced, resulting in a low probability of NW-branches being created. If the creation of NW-branches is low, the creation of unwanted phases is also low. Therefore, T sub clearly plays an important role in enhancing hBN phase formation. In other words, the chance of creating NW-branches at low T sub is higher than at high T sub . A significant decrease of S IR−absorption−band was seen when T sub increased from 78°C to 125°C in the film deposited at d = 6 cm. The density of N-H bonds remained at this absorption level when T sub was increased to 250°C. The density of N-H bonds present in the films deposited at d = 6 cm is higher than that of the films grown at d = 3 cm, and the desorption mechanism for the d = 6 cm case is similar to that of the d = 3 cm case. The N-H bond density in the films grown at T sub > 250°C also remained stable, and lower than that of the samples deposited at d = 3 cm. This hints that the N-H bond density present in the hBN-phase sample is higher than in the samples containing undesirable phases such as aBN and tBN, which is consistent with the conclusions obtained from the data discussed above. Many research groups have been delving deeper into this issue [81,144-146]; there, they are trying to find a correlation between the N-H bonds and the N-H concentration. The change of the N-H content as a function of temperature can also be quantified by thermal gravimetric analysis (TGA). The results show that the H-N bonds are usually broken in the range of 250-350°C. However, TGA requires a large amount of sample, which differs from our approach of using a small volume of hBN film. The difference between the desorption temperature values of our N-H bonds and those studied using the TGA method could come from differences in the sample mass or the porosity of the measured films. We also wanted to determine the difference in quality between hBN films deposited at different T sub , since the desorption process affects the quality of those deposited films. Therefore, a series of Raman spectra of hBN films with a thickness of 700 nm, deposited at d = 3 cm and d = 6 cm, was recorded, as shown in figures 18(a) and (b). The Γ 1/2 values of those Raman spectra, recorded around the E 2g vibration mode, were calculated as described for figure 14(c); the results are shown in figure 18(c). We only deposited hBN films with a thickness of t d = 700 nm in order to reduce the evaporation time; at this thickness the Raman signal is strong enough for analysis and is less affected by the unwanted phases created by the overlapping of the NW-branches when the thickness of the samples is large. The results reveal that the Raman spectra recorded around the E 2g vibration mode are clear, showing that the hBN phase quality is dominant in the hBN films, figures 18(a) and (b). There is only one large change in the Γ 1/2 -T sub curves: when T sub increases from 78°C to 125°C, the crystallinity of hBN markedly increases, figure 18(c). The results also show that T sub plays a role in the deposition process at the early stage of thin film growth.
At T_sub > 125 °C and d = 3 cm, the crystallinity of the hBN films changes insignificantly, but remains slightly better than that of the samples deposited at d = 6 cm. For the hBN films grown at d = 6 cm, on the other hand, the crystallization quality of the hBN phase gradually decreases with increasing T_sub. This is attributed to the desorption of N-H bonds, which strongly affects the samples grown at d = 6 cm while having little effect on the films deposited at d = 3 cm. This allows us to conclude that more N-H bonds are desorbed from the NW-branches than from the cores of the hBN-NWs.

To investigate the stability of the hBN phase at high annealing temperatures, the as-deposited films were annealed at temperatures up to 1000 °C. FTIR spectra were measured for samples that were not annealed, annealed at 750 °C, and annealed at 1000 °C; they are shown in figure 19. This approach allows us to investigate the thermal stability of the N-H bonds within those samples. Similar to the measurements described in figure 17, the FTIR spectrum was recorded around the vibration mode of the N-H bond at ν = 3436.6 cm−1, and the optical phonon modes of hBN can be observed as B-N bending A_2u(LO) at ν = 817 cm−1 and B-N stretching E_1u(TO) at ν = 1376 cm−1. The FTIR spectra of the A_2u, E_1u and N-H vibration modes are given in figure 19. These spectra were recorded from three hBN films with a thickness of 1.2 μm grown at d = 3 cm: one as-deposited sample kept for comparison, one sample annealed at 750 °C and one annealed at 1000 °C. Both annealed samples were held in high vacuum (10−6 mbar) for 5.5 hours to avoid contamination by the atmosphere.

For the N-H vibration mode, figure 19(c), the intensity of the N-H peak decreases with increasing annealing temperature, indicating that desorption of the N-H bonds occurs during annealing. Desorption is a temperature-dependent process, and the degree of desorption is greater at higher temperature. In particular, the crystal structure is suppressed when the annealing temperature reaches 1000 °C, as indicated by the annihilation of the A_2u and E_1u phonon modes in figures 19(a) and (b), possibly resulting from the desorption of the N-H bonds. This is also evidenced by the results of other research groups, which show that the desorption of H atoms in BN is nearly complete when annealing at 1000 °C [140,144,147]. [Figure 19. FTIR spectra recorded around the A_2u (a) and E_1u (b) modes to investigate the stability of the hBN crystalline structures at high temperatures; the stability of the N-H bonds existing in those hBN films is shown in (c). Reprinted with permission from [36]. Copyright (2016) American Chemical Society.]

If the H atoms leave the lattice, vacancies appear in the hBN crystal structure. As a result, the film annealed at 1000 °C shows more defects than the other hBN films. Our data are quite consistent with those of other research groups [140,146]. The loss of the phonon modes in the hBN film annealed at 1000 °C can be caused by the formation of non-crystalline phases in the sample. Therein, the migration of B and N vacancies can form more stable bonded configurations, resulting in the existence of di-vacancies in the film at high annealing temperatures. The formation of di-vacancies causes a lattice distortion of the hBN film, or warping relative to the hBN-NW planes [140], and the hBN phase can be transformed to aBN and/or tBN.
Although most of the hBN phase still remains, the presence of new BN phases within the annealed hBN film can disrupt the long-range crystalline order that sustains the LO and TO phonon modes. Moreover, to investigate the degradation of the hBN crystalline phase at different annealing temperatures, SEM images of the above films were recorded; the results are given in figure 20. They show that the hBN-NWs appearing at the film surface are quite stable: the sample annealed at 700 °C, figure 20(b), looks similar to the as-deposited sample, figure 20(a). When the temperature is increased further, the hBN-NWs and their NW-branches are degraded by the strong desorption processes, which distort and deform the hBN lattice or transform it into other phases such as tBN and aBN [36].

hBN-NWs grown on Si3N4 membrane

To better understand how the hBN-NWs grow on any substrate surface, a thin hBN film (< 100 nm) was grown onto a Si3N4 membrane, which is suitable for structural and chemical TEM characterization without a sample preparation step; such preparation can easily damage the sample [36]. At this thickness the grown hBN film does not cover the substrate surface, figure 21(a), and hBN-NWs are randomly nucleated and grown on the Si3N4 substrate surface. Based on the imaging contrast, the hBN-NWs appear as black-gray fringes in the BF-TEM image. Those structures follow the configurations of the initially created hBN-NWs; hence the local thickness at the hBN-NWs is higher than that of the regions without hBN-NWs. We can see that the hBN-NWs grow stacked on each other and are randomly oriented in different directions. The selected area electron diffraction (SAED) TEM image, figure 21(b), shows that the hBN-NWs are oriented perpendicular to the Si3N4 substrate surface: the halo intensity ring of electron diffraction from the (0002) planes of the hBN-NWs is dominant. The circular and uniform halo shape indicates that the hBN-NW planes contribute equally in all directions, i.e. the hBN-NWs are randomly oriented when growing on the Si3N4 surface; in other words, the planes of the hBN-NWs are parallel to the TEM electron beam.

Furthermore, if the hBN-NWs are considered as particles and projected onto the Si3N4 substrate surface, we can estimate their sizes with an image analysis tool, e.g. ImageJ, by modeling the projected image of each hBN-NW as an ellipse; the minor axis length of the ellipse is then the width of the hBN-NW. This analysis assumes that the hBN-NWs are single entities with a common shape, oriented perpendicular to the substrate surface. Several effects such as curling, overlapping and curving [1] can affect the apparent size of the hBN-NWs; however, we assume that these effects are insignificant for the majority of hBN-NWs. The calculation of the mean hBN particle sizes was realized in three steps, a sketch of which is given below. (i) A specific area is selected from the BF-TEM image; the hBN-NWs appear as dark areas, the electron intensity of the image is adjusted and a favorable area is cropped, as typically shown in figure 22(a). (ii) The hBN particles are selected by adjusting the electron intensity level of the TEM image, so that the particles are covered with pixels of a different color from the background image, figure 22(b). (iii) A threshold image is generated by choosing the limits of the minimum and maximum gray level, and the mean hBN particle size is estimated from more than 1300 particles with the ImageJ software [148].
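The following is a minimal Python sketch of an equivalent workflow using scikit-image rather than ImageJ; the threshold choice, the minimum-area filter and the pixel scale (nm_per_pixel) are illustrative assumptions, not parameters from the original analysis.

```python
import numpy as np
from skimage import io, filters, measure

def nw_sizes_from_tem(image_path, nm_per_pixel=0.5):
    """Rough equivalent of the ImageJ workflow described in the text:
    (i) load a cropped BF-TEM region, (ii) threshold so hBN-NWs separate from
    the background, (iii) label the particles and read off ellipse axes."""
    img = io.imread(image_path, as_gray=True)
    thresh = filters.threshold_otsu(img)         # gray-level limit (step iii)
    mask = img < thresh                          # hBN-NWs appear dark in BF-TEM
    labels = measure.label(mask)
    widths, lengths = [], []
    for region in measure.regionprops(labels):
        if region.area < 20:                     # discard noise specks
            continue
        widths.append(region.minor_axis_length * nm_per_pixel)   # NW width
        lengths.append(region.major_axis_length * nm_per_pixel)  # NW length
    return np.array(widths), np.array(lengths)

# Hypothetical usage:
# w, l = nw_sizes_from_tem("bf_tem_crop.tif", nm_per_pixel=0.42)
# print(w.mean(), l.mean())
```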
Based on those width and length distributions, the mean width and length of the hBN-NWs were estimated to be (6.2 ± 1.4) nm and (133.6 ± 4.4) nm, respectively. The statistical distribution of such values is usually estimated by fitting the data with probabilistic functions such as Gaussian, Lorentzian or Voigt functions. For the width distribution, however, we used the log-normal distribution function [149,150], because it gives the best fit to this distribution; the log-normal distribution function is described by equation (5). Meanwhile, the probability distribution of the hBN particle lengths is consistent with the Voigt function, which is the convolution of the Gaussian and Lorentzian functions. In equation (5), D is the width or length of the hBN particle, μ is the mean of ln(D) and δ is the standard deviation of ln(D). [Figure 22. BF-TEM image of the 100 nm-thick hBN film grown on the Si3N4 membrane, shown unmapped (a) and mapped (b) with the given mapping procedure for the hBN-NW features; this image area was selected from the BF-TEM image in figure 21(a). (c), (d) Width and length distributions of about 1300 wall-features projected onto the substrate surface plane, calculated based on the distributions in equation (5); from these distributions the mean width and length of the hBN-NWs were estimated. Reprinted from [39], Copyright (2017), with permission from Elsevier.]

The peak of the hBN width distribution is narrow, figure 22(c), confirming the good homogeneity of the hBN-NWs grown on the Si3N4 membrane, while the length distribution of the hBN particles is wider, figure 22(d), implying that the hBN particles grew more freely in their lengths than in their widths during deposition. Although we use two different functions to describe two similar physical properties, the physical significance of the distributions is not different.

To study some specific regions of figure 22(a) in detail, aiming at the intrinsic crystallinity of a single hBN-NW and at the superposition that can occur during the growth of hBN-NWs, HRTEM images were taken, as shown in figure 23. Figure 23(a) shows that a single hBN-NW projects onto the Si3N4 substrate surface with a shape similar to an ellipse, bounded by a red dashed curve. The electron intensity of the HRTEM image was scanned from A to B across the hBN particle, and the resulting electron intensity profile is included as an insert in figure 23(a). Its morphology tells us that the height of the hBN-NW is not uniform even within the grain: the core of the hBN-NW is higher, and the height decreases towards either side of the hBN-NW, i.e. towards the A and B points, so the top of the hBN-NW is tapered. Another HRTEM image, recorded in a region with a more complex hBN-NW structure, is shown in figure 23(b); it shows that the degree of overlap between neighboring hBN-NWs is greater there. Both selected HRTEM regions show wrinkles, as indicated by the green arrows in figure 23. This wrinkling behavior may be due to distortion induced by defects in the hBN-NWs [151]. The EELS spectrum of the hBN-NW film deposited on the said Si3N4 membrane was also recorded, as shown in figure 24. The EELS spectrum shows that the intensities of the B- and N-K edges are high enough to recognize the hBN phase: the sharp K-edge peak of sp2 B is dominant, and a very small peak of the C K-edge is also detected.
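As a concrete illustration of the size-statistics step described above, the sketch below writes out the standard log-normal density consistent with the parameter definitions given for equation (5) and fits it to a width histogram; the synthetic data, bin count and initial guesses are hypothetical, and in the actual analysis the measured widths from figure 22 would be used instead.

```python
import numpy as np
from scipy.optimize import curve_fit

def lognormal_pdf(D, mu, delta):
    """Standard log-normal density consistent with the definitions in the text:
    mu is the mean of ln(D), delta the standard deviation of ln(D).
    f(D) = 1/(D*delta*sqrt(2*pi)) * exp(-(ln D - mu)^2 / (2*delta^2))."""
    return np.exp(-(np.log(D) - mu) ** 2 / (2 * delta ** 2)) / (D * delta * np.sqrt(2 * np.pi))

# Hypothetical width data (nm) standing in for the ~1300 measured features:
widths = np.random.lognormal(mean=np.log(6.2), sigma=0.22, size=1300)
counts, edges = np.histogram(widths, bins=40, density=True)
centres = 0.5 * (edges[:-1] + edges[1:])

popt, _ = curve_fit(lognormal_pdf, centres, counts, p0=[np.log(6.0), 0.2])
mu, delta = popt
print("mean width ~", np.exp(mu + delta ** 2 / 2), "nm")   # mean of a log-normal variable
```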
The C signal might come from contamination during sample handling. The C component present in this sample is not significant compared with the C peak intensity in the other X-TEM specimens prepared with FIB, which will be discussed in section 5.7.

The growth mechanism of hBN-NWs on Si substrate

Based on the obtained results, the creation and growth of hBN-NWs on the Si surface were worked out, and are described through the growth model shown in figure 25. Therein, a layer of BN material containing disordered phases such as aBN and tBN is coated onto the Si substrate surface at the early stage of hBN film growth. The creation of these phases is due to the physical and chemical kinetic processes occurring at the substrate surface, which are dominantly driven by the interaction of the ions present in the plasma environment with the Si substrate surface. This disordered BN layer is about 10-30 nm thick; its thickness also depends on the concentration of H2 present in the reactive gas mixture of the RF sputtering. Herein, the initial etching by H ions facilitates the earlier creation of hBN-NWs, and the earlier the hBN-NWs are created, the more the hBN phase is enhanced. [Figure 23. Two HRTEM images magnified from two typical areas in figure 22(a): one area consists of a well-defined hBN-NW configuration (a) and the other has a more complex configuration of hBN-NWs (b). The picture attached to (a) is an electron intensity profile extracted from the AB segment, where the A and B sites lie at either side of the hBN-NW defined by the dashed ellipse. Reprinted from [39], Copyright (2017), with permission from Elsevier.]

Further from the Si substrate surface, the deposited BN film dominantly contains the hBN phase. The thickness of this hBN-phase layer is around 50-100 nm, and the hBN phase fraction in this layer is the largest; the hBN-NWs are well defined in the hBN phase. The porosity of this layer is usually high because the NW-branches have not yet been created. The thickness of this material layer depends on T_sub and d. We found that the hBN-NWs are structurally heterogeneous, with better hBN phase quality in their cores than at either side of the NWs. Moreover, the crystal planes of the hBN-NWs are curled and distorted during deposition due to defects, and these defects mainly occur at the B sites rather than the N ones. [Figure 24. An EELS spectrum recorded from the hBN film grown on the Si3N4 membrane analyzed in figure 23. The B- and N-K edge features are significantly exposed, which means that the hBN phase is highly present; moreover, the signal of the C-K edge is relatively low, which means that the contribution of contaminating components is small for this unprocessed TEM specimen.] [Figure 25. The growth mechanism of an imperfect hBN-NW nucleated from the Si substrate surface. The Si substrate is considered as a neutral surface, and a thin BN layer deposited at the substrate surface is mostly a/tBN phase. Continuing to grow beyond the a/tBN layer, a block of hBN nanosheets grows perpendicular to the substrate surface, each hBN nanosheet containing many defects. Those defect sites are assigned to the B sites rather than the N ones, and the existence of such defects in the hBN-NW structures will probably lead to a distortion of those NWs. Reprinted with permission from [36]. Copyright (2016) American Chemical Society.] When such
vacancies are created during deposition, free H atoms bind directly to the N-terminated edges, so random N-H bonds exist in the grown hBN film. The density of N-H bonds was also investigated, as it should correlate with the density of defects in the hBN crystal films. Such N-H bonds are weak and can therefore be affected by temperature: if T_sub increases, the mobility of the N-H bonds increases, and H atoms can be broken off and released from the hBN film. The as-deposited hBN films are stable up to around 750 °C, while the N-H bonds are completely destroyed if the temperature is increased to 1000 °C. Once the N-H bonds are absent, the vacancies existing in the hBN film can migrate and form di-vacancies; those vacancies and displacements make the hBN phase transform into disordered phases such as aBN and tBN.

If the thickness of the hBN films exceeds 100-150 nm, NW-branches are created in large numbers; those branches interact and overlap with each other and produce more defects than the previously deposited hBN layer. The creation and growth of those NW-branches reduce the porosity of this hBN layer. The overall growth of the hBN-NWs is quasi-perpendicular to the Si substrate surface. We emphasize that the unsaturated N edges of the hBN-NWs can also interact with the terminated edges of adjacent hBN-NWs; those N edges then act as nucleation sites for adjacent hBN-NWs, so that hBN-NWs extend outward in different directions. The dangling bonds of the N edges can therefore act as nucleation agents for the growth of NW-branches, or sub-hBN-NWs, from the primary hBN-NWs.

In summary, the growth behavior of the hBN films deposited with our unbalanced magnetron RF sputtering has been studied by changing various parameters of the home-built RF machine, such as d, T_sub and α, and a growth model of the hBN films deposited on the Si substrate surface is described in figure 25. The quality of the deposited hBN films depends on these factors. For example, hBN films deposited at smaller d values have a higher R_G, leading to a higher porosity and a higher crystallinity of the hBN phase, compared with the results obtained at larger d values. The orientation of the hBN-NWs does not change between films deposited at α = 90° and α = 0°; increasing α only reduces R_G, while the crystallinity of the deposited films decreases. This hints that the growth of hBN-NWs is dominantly governed by chemical rather than physical processes. Even so, undesirable phases such as aBN and tBN need to be reduced or suppressed, which is the main reason we are looking for different methods to reduce unwanted phases and improve the quality of the deposited hBN films. Ultimately, the quality, or defect density, of the hBN films fabricated by our RF sputtering can be changed by varying any one of d, T_sub, α and the H2 content in the reactive gas composition.

5.7. The growth mechanism of hBN-NWs on NCD film

As discussed, it is beneficial to use the NCD surface for hBN deposition, since the NCD surface has many free H bonds at the terminated C edges. These free and weak bonds act as springs at the NCD substrate surface. As a result, the ions present in the plasma environment of the RF sputtering, when interacting with the NCD surface, undergo much less scattering and backscattering at the substrate surface than in the case of Si.
Reduced scattering also reduces effects such as bouncing, curling and chemical diffusion at the NCD surface, which leads to faster binding between the hBN crystal structures and the NCD substrate surface. Based on these initial predictions, hBN films were grown onto NCD buffer layers with a thickness of 300 nm; at this thickness the NCD particles at the NCD surface are large enough and provide a flat surface for the hBN films to grow upwards.

Figure 26 shows the surfaces of several of the thin films. The surface of a 300 nm-thick hBN film deposited on the bare Si substrate is shown in figure 26(a); the hBN-NWs have fairly regular sizes, are oriented perpendicular to the Si surface, and the dimensions of the randomly nucleated hBN-NWs created from the Si surface have been discussed extensively in the previous sections. Figure 26(b) shows a 300 nm-thick NCD thin film grown on a bare Si substrate, in which NCD particles with different shapes are clearly seen in the SEM image. This NCD film was then used as a substrate and an hBN film was deposited on top; the morphology of the hBN film deposited on the NCD substrate is given in figure 26(c). Comparing the surface morphologies of the two hBN films deposited on Si and NCD substrates, figures 26(a) and (c), we found that the hBN-NWs tend to grow following the morphological structure of the NCD particles, as denoted by the dashed yellow circle in figure 26(c). The crystal faces of the NCD grains contain many C-H bonds, which help accelerate the deposition and adhesion of the hBN phase. As a result, the hBN-NWs grown on the NCD substrate are larger than the hBN-NWs deposited on the Si one [39].

In addition, the NCD particle size can be estimated using the method described in section 5.5. A threshold image is generated by choosing the limits of the minimum and maximum gray levels to estimate the grain size automatically, figure 26(e), from a cropped SEM image of appropriate magnification, figure 26(d). More than 800 grains were selected in this way and the particle size distribution function was then estimated, as plotted in figure 26(f). The average particle size was estimated to be (74 ± 8) nm, which is about nine times larger than the average width of the hBN-NWs deposited on the Si substrate at a similar thickness of 300 nm. As discussed for figure 8, the diamond surface consists of many NCD particles whose facets are oriented in different directions relative to the overall diamond substrate surface, and these NCD facets are particularly terminated with H bonds. Moreover, the width of the 300 nm-thick hBN-NWs grown on Si(100) averaged around 8.4 nm. Hence, hBN-NWs could be grown and localized directly from the facets of those NCD particles. This leads to the conclusion that the role of the facets of the diamond grains is very important: they localize the hBN-NWs on each NCD grain.

The hBN films deposited onto the Si and NCD substrates were also examined for their crystallinity by Raman and FTIR spectroscopy. Raman spectra of the hBN film and of the NCD/hBN heterostructure on the Si substrate are shown in figures 27(a) and (b). The Γ_1/2 value of the Raman peak corresponding to the E_2g vibration mode at ν = 1367 cm−1 was estimated to characterize the sp2 hBN phase [10,134,152].
The calculated Γ_1/2 value at the E_2g vibration peak is 26 cm−1 for the hBN film on Si, while a lower value (Γ_1/2 = 18 cm−1) is obtained at the E_2g mode for the hBN film deposited onto the NCD substrate. This further confirms that the crystal quality of the hBN phase in the hBN film grown onto the NCD substrate is higher than in the film deposited on Si. In addition, the Raman spectra of the hBN film deposited onto the NCD buffer also show that the characteristic properties of diamond are clearly revealed: the first-order Raman peak of diamond (sp3) at ν = 1333 cm−1 and a non-diamond carbon peak (sp2) at ν = 1487 cm−1 were also detected. The sp3 bonds are present in the cores of the NCD grains, while the sp2 signal results from the carbonaceous structures present in the NCD grain boundaries [153]; since the sp2 NCD phase is contributed by the NCD grain boundaries, it is scattered over a wide spectral range and its relatively low signal correlates with the signals of the remaining peaks. [Figure 26. Top-view SEM images of several thin films used to compare their surface behaviors: (a) a 300 nm-thick hBN film deposited on the Si substrate surface, with well-defined hBN-NWs randomly and uniformly distributed on the Si(100) surface and of quite similar sizes and shapes; (b) a 300 nm-thick NCD layer evaporated onto another Si substrate, with the grown NCD particles clearly seen on the substrate surface (surface properties discussed in figure 7(b)); (c) another 300 nm-thick hBN film deposited onto the NCD film of (b), with the hBN-NWs localized on the faces of the exposed NCD grains and forming clusters that follow the NCD particle shapes; (d) low-magnification SEM image of the NCD film of (b) used for the NCD particle size distribution calculation, realized similarly to the case of the hBN-NWs in figure 22; (e) the corresponding selected NCD particle map; (f) the mean NCD size distribution function calculated from (e) using equation (5). Reprinted from [39], Copyright (2017), with permission from Elsevier.]

FTIR spectroscopy was also used to detect vibrational modes associated with defects commonly occurring in hBN films, i.e. N-H at ν = 3437 cm−1 [144], B-C at ν = 1100 cm−1 [154,155] and sp3-BN at ν = 1085-1110 cm−1 [156,157]. The FTIR spectra of the hBN films whose Raman data are reported in figure 27 are shown in figure 28. A small peak is found at ν ~ 1110 cm−1, indicated by purple arrows, which implies the presence of an sp3 BN phase or of B-C bonds. In fact, the sp3 BN phase is difficult to form at low temperature (T_sub = 125 °C) and at an RF sputtering power of 75 W; hence the apparent presence of sp3 BN might come from disordered BN phases existing in the hBN-NWs, which would lead to a weak FTIR signal [156]. It is most likely that the vibration peak at ν = 1110 cm−1 comes from B-C bonding, as contamination has been observed in the samples and particularly at the interface of the NCD/hBN layers [155]. In addition, both samples show the fundamental peaks corresponding to the B-N bending (A_2u: ν = 817 cm−1) and B-N stretching (E_1u: ν = 1376 cm−1) modes, i.e. the optical phonon modes of sp2 hBN [1,156].
Based on these data, we conclude that the hBN films integrate well onto the NCD substrate, and the role of the NCD particles at the NCD surface is shown to be outstanding: the quality of the hBN film on the NCD buffer is significantly increased. To investigate the interfacial properties of the NCD/hBN heterostructure in detail, a cross-sectional SEM (X-SEM) image of the heterostructure was recorded; it shows that the hBN layer grew directly onto the rough NCD surface. Since the NCD particles are much harder than the hBN-NWs, X-SEM imaging is difficult: the contrast of the SEM image is strongly dominated by the NCD particles compared with the hBN-NWs, so the hBN-NWs appear blurred, as seen in figure 29(a). Therefore, an X-TEM image of the same sample was recorded, with the X-TEM specimen prepared by FIB, figure 29(b). Based on the contrast of the TEM image, the NCD particles are still clearly visible throughout the entire thickness of the NCD buffer and dominate the imaging contrast. However, the hBN-NWs in the grown film were strongly eroded by the high-energy ion beam of the FIB technique [123]; the NW-branches are more susceptible to erosion, and the remaining hBN film mainly consists of the hBN-NW cores. To visualize the interface of the Si/NCD layer in detail, BF-STEM and ADF-STEM images recorded in the area bounded by the white dashed square are given in figures 29(c) and (d), respectively. In particular, an interface area of the NCD/hBN heterostructure, marked by a yellow solid rectangle, was also characterized by STEM imaging, as shown in figure 30.

The NCD layer is clearly resolved down to each NCD particle, with (100) and (111) NCD faces identified from the STEM imaging contrast. A SiO2 layer of about 2.5 nm, created during the substrate preparation, is also visible. The NCD grain boundaries are clearly seen, and the growth of the NCD grains over time follows the growth mechanism of CVD diamond thin films [21]. In addition, the orientation of the sp3 NCD and sp2 hBN phases is clearly visualized in figure 30. The sp2 hBN phase grows directly from the NCD faces, and the purity of this phase is high: the mixed a/tBN phase layer that appears in the case of the Si substrate has disappeared. The quality of the hBN film is therefore increased when depositing onto the NCD substrate surface, in good agreement with the Γ_1/2 values calculated from the Raman spectra in figure 27. In particular, the orientation of the hBN-NWs is always perpendicular to the NCD faces, as indicated by the arrows in figure 30(b). This causes a localization effect during the deposition of the hBN film onto the NCD buffer, which is also consistent with the SEM images, figure 26(c). A few bright spots are seen in the ADF-STEM image, figure 30(b), denoted by yellow circles; these spots could be produced by adsorption of free C atoms generated during FIB specimen cutting with high-energy ion beams. Such sites could turn the hBN-NWs into single- or multi-photon emitting sources, a topic on which many research groups are currently focusing [54][55][56], and one of the issues our research groups have been working on.
In fact, we have not yet fabricated hBN monolayers in which the defect sites are controlled by doping with carbon or other foreign elements, which would make the hBN monolayers capable of acting as single- or multi-photon emitters. Finally, the crystal structure of the NCD particles located close to the Si surface is clearly seen, with well-defined crystal planes, in the BF-STEM image of figure 30(c). A SiO2 layer of about 2.5 nm is also seen, sandwiched between the NCD layer and the Si surface.

To probe the distribution of composition at the interface of the NCD/hBN-NW heterostructure, already seen in figure 30(a), EELS spectra of two areas located on the NCD layer and on the hBN-NW film were recorded, as seen in figure 31(a). These two EELS spectra were recorded in the two regions denoted by blue and purple circles in the spectrum image, defined by a green square in figure 31(b). The sp2 hBN and diamond phases are clearly shown in these spectra. Moreover, a small amount of amorphous carbon was found with a bonding energy of π* = 285 eV, while the bonding energy states of both carbon symmetries, π* = 285 eV and σ* = 291 eV, were also identified in the EELS spectrum recorded from the hBN layer. Although the X-TEM specimen was not coated with carbon, the carbon signal received is nevertheless quite large; these carbon elements could be created during the FIB X-TEM specimen preparation [112]. This carbon signal is much stronger than in the EELS spectra recorded from the TEM specimen prepared without FIB (figure 24). In particular, the broadened σ* peak might result from the presence of an amorphous-like carbon phase [6], which could be related to the bright spots denoted in figure 30(b). However, some other researchers have suggested that carbon can be adsorbed or intercalated into the defect sites of the hBN lattice, and that this is a main factor that leads hBN structures to emit single photons [76]. It is possible that our sample also has the ability to emit photons at such bright spots; even so, whether the emission would be single-photon or multi-photon needs to be investigated further in the future. Similarly, the energy states of the boron symmetries at π* = 192 eV and σ* = 198 eV are also detected.

An image showing the distribution of the B, N and C elements extracted from the EELS spectrum image is given in figure 32, in which the elemental compositions of B and N are represented in dark color, while the elemental composition of C is shown in white. The elemental distribution map of B shows that the B content is lower in the hBN lattice regions, i.e. in regions of high hBN crystallinity; this means that the B content is large in regions consisting of disordered BN phases. This conclusion is consistent with the data obtained from the hBN film grown on the bare Si substrate, where we showed that N-terminated edges frequently occur in the main hBN-NWs (figure 25). This is also reconfirmed by the map of the N elemental distribution, figure 32(b): the N atoms appear randomly scattered on the map as dark pixels over the entire hBN film thickness, which means that more N atoms exist at the surface of the hBN specimen and that they have a greater contrast than the B elements [53]. Finally, the map of the C elements is given in figure 32(c), indicating that the C atoms are derived from the NCD particle regions and mainly belong to the sp3 diamond phase.
This is also consistent with the sharp σ* peak observed in the EELS spectrum of the NCD layer, figure 31(a). A small amount of carbon distributed near the hBN/NCD interface was also detected, which may be due to C re-deposition during FIB X-TEM specimen preparation using a high-energy Ga+ ion beam [125,126]. [Figure 31. (a) EELS spectra recorded from two different areas of the hBN and NCD layers, indicated as blue and pink circles in the spectrum image (b). The signal level of the C-K edge in the NCD EELS spectrum is very large compared with that of the hBN EELS spectrum; however, the C-K edge signal in the hBN EELS spectrum is assigned to the FIB cross-sectioning process, since it is significant compared with that of the sample prepared without FIB, as seen in figure 24. In addition, the mixed sp3 and sp2 NCD phases cause the signal at the C-K edge peak to split in the energy range of 280-340 eV.] Both B and N have small Z numbers, resulting in a lower Z imaging contrast than in the case of C. The thickness of the X-TEM specimen also plays an important role here: the contrast was overwhelmed by the heterogeneity of the specimen thickness, especially the difference in thickness between the NCD grains and the BN film.

The mechanism of nucleation and growth of hBN films on the NCD surface is clearly different from that on Si. In particular, the hBN phase crystallizes earlier on an NCD substrate than on Si; we assume that the C-H bonds at the C-terminated edges of the NCD particle surface play the dominant role. Due to this early crystallization, the condensation process occurs rapidly at the early stage of hBN film formation, and as a result the growth rate of the hBN films (R_G) is greater than on the Si substrate. We verified this effect by comparing the R_G values obtained from hBN films grown on Si and NCD substrates at d = 3 cm for three different deposition times, t1 = 66 min, t2 = 150 min and t3 = 240 min. The R_G values calculated for the samples deposited on the bare Si substrates are R_G1−Si = 228 nm, R_G2−Si = 280 nm and R_G3−Si = 270 nm, while those calculated for the samples grown on the NCD substrates are R_G1−NCD = 336 nm, R_G2−NCD = 320 nm and R_G3−NCD = 242 nm. Based on the correlation of these R_G values, we can conclude that the hBN phase crystallizes faster on the NCD surface at the early stage of thin film growth, i.e. for a short growth time (< 150 min) or a thin hBN layer. This conclusion is consistent with the results discussed above. Even so, when hBN-NWs grow rapidly and are localized following the NCD facets, interactions among the hBN-NWs occur, leading to the early creation of NW-branches; the overlapping of those NW-branches then reduces the growth rate of the hBN films on the NCD substrate. The BN film deposited on the bare Si substrate does not have these limitations, so its thickness continues to increase rapidly. This is reflected in our data: R_G3−NCD < R_G3−Si at t3 = 240 min.

Based on the above data, we propose a growth mechanism of hBN-NWs on the NCD substrate, as shown in figure 33. The formation of hBN-NWs is mainly a chemically rather than physically driven process, resulting in hBN-NWs oriented perpendicular to the substrate surface regardless of whether the substrate is neutral (Si) or terminated by C edges (NCD).
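As a small illustration of the comparison just described, the snippet below simply tabulates the reported R_G values (units as given in the text) and shows the crossover between the NCD and Si substrates; it is only a bookkeeping sketch of the numbers quoted above, not part of the original analysis.

```python
# Reported growth-rate values for films grown on Si and NCD at three deposition times.
times_min = [66, 150, 240]
rg_si  = [228, 280, 270]   # R_G on bare Si
rg_ncd = [336, 320, 242]   # R_G on NCD buffer

for t, si, ncd in zip(times_min, rg_si, rg_ncd):
    ratio = ncd / si
    faster = "NCD" if ncd > si else "Si"
    print(f"t = {t:3d} min: R_G(NCD)/R_G(Si) = {ratio:.2f} -> faster on {faster}")
# Expected: growth is faster on NCD for short times (< 150 min) and slower at 240 min,
# consistent with early NW-branch overlap slowing growth on the NCD buffer.
```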
As seen at the interface of the NCD/hBN heterostructure, disordered BN phases are mostly absent and the hBN phase seems to grow directly onto the NCD surface. Even so, the hBN-NWs are still oriented perpendicular to the substrate surface throughout the film, and they typically change their orientation only slightly during the first growth stage, figures 30(a) and (b). This implies that the flow of incoming ions, denoted by the green arrow in figure 30(b), i.e. the physical effects, does not strongly affect the growth direction of the hBN-NWs. What, then, is the fundamental process that directly promotes the nucleation of hBN on diamond to obtain hBN-NWs? The addition of H2 to the reactive gases of the RF sputtering is thought to be a prerequisite [128], together with the use of an NCD substrate whose NCD particles are terminated by H dangling bonds when the NCD film leaves the CVD reactor. The growth mechanism of hBN-NWs deposited on the NCD substrate surface was already proposed [39] and can be described as follows. The proposed model starts with the removal of an H atom and the substitution of a sputtered B ion at a C-H transient bond on the NCD surface [21,158], initiating an hBN ring, figures 33(a) and (b). Sputtered N ions can connect to the B site to form a third of the hBN ring with two N-terminated edges, figures 33(c) and (d). Two further sputtered B ions then connect to these N sites, figure 33(e). Finally, one of the sputtered N ions locks the two bonds of the residual B atoms together, forming the first hBN ring in the lattice of the hBN-NW, figure 33(f). These processes occur in parallel and continuously to form hBN-NWs. Since the C-H bonds are distributed very randomly on the facets of the NCD particles, the growth proceeds asynchronously, which can cause defects in the hBN lattice.

The growth mechanism of hBN-NWs deposited on the NCD substrate described here is a hypothetical model deduced from our experimental results. In principle, both C-B and C-N bonds can be formed on the NCD surface, with binding energies of 356 kJ/mol for C-B and 305 kJ/mol for C-N. Hence, an alternative model of the growth of hBN-NWs on the NCD substrate could be built with an initial C-N bond, analogous to the mechanism starting from a C-B bond. However, the results in the literature show that C-B bonds are more stable than C-N, which means that B atoms incorporate into the diamond surface during CVD growth more easily than N. Such bonds can be assessed theoretically through their EELS spectra, whereas experimental verification is more difficult to realize with FIB-prepared TEM specimens, for which TEM alignment is also an issue.

hBN-NWs grown on Cr/Au heterostructure buffer

As discussed in sections 5.6 and 5.7, two different growth mechanisms are involved in the development of hBN-NWs deposited onto Si and NCD substrates. The quality of the hBN phase is enhanced when the hBN material is grown onto the NCD surface instead of the Si one, since the layer of disordered BN phases is eliminated when the hBN film is deposited onto the NCD substrate. However, the hBN films as a whole still tend to grow perpendicular to the substrate surface, whether it is Si or NCD, and the chemical role of the interface layer between the grown material and its substrate is assigned an important influence on the quality of the hBN crystalline phase. Given this situation, we continued by depositing hBN films onto Cr/Au bilayer substrates.
We hoped to exploit the transition metal elements in this heterostructure as catalysts in the deposition process, in order to orient the material layers parallel to the substrate surface instead of the perpendicular orientation obtained in the two cases above. Here, the characteristics of the substrate surface are a key factor in the nucleation of BN phases [83,159]. To minimize the kinetic energy of the free ionized particles at the substrate surface, weakly reactive metals such as Au, Pt, Pd and Ir are often used as a buffer layer [1]. However, a single metal layer of this kind requires a high temperature during the deposition process. Hence, we added a Cr layer (t_Cr = 10 nm) underneath to create good adhesion of the Au layer (t_Au = 100 nm) to the Si substrate surface. The Cr/Au heterostructure brings the relevant melting temperature down to within the working temperature range of our RF sputtering, < 600 °C.

A 300 nm-thick hBN film was grown onto the Si/Cr/Au substrate at d = 3 cm and T_sub = 450 °C. This temperature was chosen because of the melting temperature of the Si/Au composite structure, which we predicted to lie in the range 350-480 °C: since the eutectic point of the Au-Si compound is 363 °C, the melting point of this heterostructure is much lower than the melting points of its constituents, i.e. Au (1063 °C) and Si (1414 °C). A low-magnification HAADF-STEM image of the corresponding X-TEM specimen is shown in figure 34(a). The Cr/Au substrate elements have Z numbers of 24 (Cr) and 79 (Au), much larger than those of N, B and C. At T_sub = 450 °C, the Cr/Au structure could be partially melted [152,160-162]. Based on the Z imaging contrast of the STEM images, the Au layer shows the brightest contrast and has formed discontinuously as material droplets with a diameter of (145 ± 15) nm. The crystallization of the hBN layer seems to be improved at the Au/hBN interface, where the hBN-NWs appear to initiate and grow above the Au droplet regions. Bundles of hBN-NWs were formed, denoted by white U-shaped curves, close to the Au droplets. This gives us an initial view of the enhanced hBN phase formation in the presence of Au.

The elemental distribution through the cross-sectional TEM specimen, measured by energy dispersive x-ray (EDX) mapping, is given in figure 34(b). The Cr/Au substrate appears heterogeneous where the Au layer has formed islands or droplets. In fact, the Cr layer was only 10 nm thick, so it was barely detected at the expected location; it was possibly buried by a thin layer of Si redeposited from the Si substrate during FIB milling [163-167]. Some residual Cr accumulated as a droplet located between the Si substrate and the hBN film, distinguished as the purple region in figure 34(d); this Cr droplet may be due to the migration of Cr during deposition at T_sub = 450 °C, where chemical diffusion effects can occur [167]. All three elements are scattered throughout the protective Pt layer and the hBN film, which possibly results from the X-TEM FIB specimen preparation [126,127]. The region containing the Cr component, indicated by the yellow arrow in figure 34(b), was magnified and is shown as a BF-STEM image in figure 35. Some Au islands formed at the Au/hBN interface are clearly seen there, and the Cr/Au bilayer thickness is not uniform, as seen in figure 35(a).
The diffraction fringes of the hBN crystal structure at the Au/hBN interface are also recognized, as indicated by the colored arrows in figure 35(b); this image was recorded in the area marked by the cyan rectangular box in figure 35(a). It is clear that the hBN phase forms directly in the early stage of hBN film growth, and the orientation of the grown hBN sheets seems to depend locally on the geometry of each Au island. This situation is similar to the case of hBN-NWs growing along the NCD particle shapes, as already discussed in section 5.7. Therefore, the hBN-NWs grow competitively after a certain film thickness, and the shape of the Au droplets directly induces the orientation of the hBN-NWs, figure 35(b).

BF-STEM images recorded at three different locations near the Au island are shown in figure 36. In the first area, figure 36(a), the hBN sheets constituting the hBN-NWs appear to be parallel to the Au island surface, denoted by blue arrows, while the growth direction of the hBN-NWs tends to be vertical with respect to the substrate surface; the diffraction fringes of the Au crystals are indicated by white arrows. A very thin layer of aBN is nevertheless observed at the Au/BN interface, which might result from random sputtered N and B ions condensing on the Au surface, or from contamination of the Au surface during sample preparation prior to the deposition of the hBN film. At the other locations, figures 36(b) and (c), mixed growth directions of the different bundles of hBN-NWs, whose lattice fringes are clearly seen, are annotated with differently colored arrows. Thus, it is clear that both the tBN and hBN phases exist. Even though the tBN phase remains, the crystallinity of the hBN film is significantly improved compared with the case of hBN films grown onto the Si substrate, the two films having been deposited under the same RF sputtering conditions. The above results show that the Au layer has the ability to increase the crystallization of the hBN film, especially at the Au/hBN interface. However, the morphology of the Au layer as islands or droplets influences the growth direction of the hBN-NWs at the Cr/Au substrate surface, and the inhomogeneity of the Cr/Au substrate surface causes the competition between hBN-NWs to occur earlier than it would for hBN-NWs grown on a flat Au surface.

Moreover, the C signals were assumed to originate from the X-TEM specimen preparation. Therefore, the EELS spectrum of the X-TEM specimen was also recorded at the interface of the Au/hBN structure, at the location shown in figure 36(a), and analyzed in figure 37. The EELS spectrum of the specimen shows that the C peak intensity is significant compared with the main B- and N-K edge peaks recorded from the hBN film. Such a significant C K-edge signal comes from the FIB sample preparation, as already discussed for figure 31. This conclusion is reconfirmed by the EELS elemental maps of B, N and C given in figure 38. The distribution of the B and N elements in those maps is less obvious due to the low Z contrast, while the C atoms are mostly present on both sides of the interface of the Au/hBN region. [Figure caption fragment: the B and N signals are dominant, while the C signal is also clearly seen, confirming that the C signal is generated by the X-TEM specimen preparation using the FIB technique.] These data are consistent with the results obtained from the EELS spectrum discussed in figure 37(a).
It is clear that the crystallinity of the hBN films deposited on the Cr/Au buffer substrate is significantly enhanced. Similar to the other substrates, the overall orientation of the deposited hBN-NWs remains vertical relative to the substrate surface. This may result from the competition of hBN-NWs nucleated on those substrate surfaces to minimize the total free energy of the whole nucleation process [159]. The results obtained on the Cr/Au substrate also show that chemical processes largely drive the formation of hBN-NWs at the early stages of thin film development. Using a Cr/Au bilayer substrate improves the quality of the hBN phase at the Cr-Au/hBN interface, and in particular the hBN nanosheets are oriented parallel to the Au droplet surfaces. However, it is difficult to control the Au droplet size and the substrate surface morphology when increasing T_sub in our current experimental setup. Despite this, the enhanced crystallization of the hBN phase would be beneficial for future applications using the optical properties of this material [168-170]. The structural characteristics of grown hBN films with less a/tBN phase and with controllably oriented hBN-NWs can be of particular interest for photonic devices, where a fundamental understanding of the orientation of the hBN-NWs with respect to the conducting characteristics of the hBN film is important [1,13,168-172].

Wetting and other properties of hBN film

As discussed in the previous sections, we have succeeded in fabricating hBN-NWs with the highest crystallinity achievable with our home-built RF sputtering. The hBN crystalline phase was improved when the hBN film was directly bound to the NCD substrate. Furthermore, orienting the hBN monolayers or nanosheets is possible by using the Cr/Au bilayer substrate at T_sub = 450 °C, which acts as a catalyst for the crystal growth process at the substrate surface, resulting in a reduction of unwanted phases such as a/tBN. Even so, the grown hBN films are still porous, contain many defects and, in particular, have a high surface roughness. In order to use such characteristics for application purposes, we also measured the wettability of the hBN films. We wanted to analyze the dependence of the wettability of the grown hBN films on the surface roughness, or equivalently on the size of the hBN-NWs projected onto their substrate surface; in other words, we investigated the wettability of the material as a function of the hBN film thickness. Two series of hBN films were deposited at d = 3 cm and d = 6 cm, with thicknesses ranging from 100 to 900 nm, and SEM images of those films were recorded, as shown in figure 39. Similar to the results discussed for figure 11, the size of the hBN-NWs projected onto the substrate surface increases with the thickness of the hBN films for both d values. Based on the SEM imaging contrast, the main difference in morphology between the two hBN film series is that the widths and lengths of the hBN-NWs grown at d = 3 cm are larger than those of the hBN-NWs deposited at d = 6 cm at the same hBN film thickness.

We used the contact angle (CA) method to measure the wettability of the given films with respect to water droplets. This is a useful method [173-175] for determining the wettability of any solid plane interacting with a liquid, indirectly indicating the degree of wetting of the solid material by water or another solvent.
In fact, when a liquid droplet is placed on a solid surface, the wetting properties of that surface depend on several typical factors, such as the solid surface tension (γ), which governs the wetting behavior of the solid material. Hence, experimentally measuring the contact angle (θ_Y) is an indirect route to understanding the surface tension of a solid material [173,174]. The contact angle is defined as the angle at the intersection of the liquid-solid interface (e.g. H2O-hBN) and the liquid-vapor interface (e.g. H2O-air). We used a droplet size of 15 μl in all our measurements. In principle, the contact angle of a liquid drop on an ideal solid surface was first described by T. Young [176], where θ_Y is determined by the equilibrium of three interfaces; the standard forms of these relations are recalled below. From the contact angles obtained for the films shown in figure 39, we conclude that if the roughness of the hBN films is high, the water droplet-air interaction plays a dominant role in the CA value. This result is quite consistent with the original theoretical work of Wenzel and Cassie-Baxter [173,177]. In this picture, each hBN-NW can be considered as a hierarchical structure, or a groove shaped on an otherwise smooth surface. The shape and height of the hBN-NWs are strongly related to the surface roughness factor, which is directly seen in the SEM images in the upper panels of figures 40(b) and (c). In addition, the interstitial spaces between hBN-NWs also play an important role: air pockets exist in those interstitial spaces and repel water droplets from the surface of the hBN films, as described in the lower panels of figures 40(b) and (c). Moreover, many external factors affect the CA value, such as gravity, for which the droplet size and droplet type are also important. The wetting characteristics of hBN films with respect to other liquid droplets, such as oils or dyes, can therefore depend largely on the intrinsic properties of the hBN films, i.e. defect density, impurities, surface roughness and porosity, or the hBN-NW shapes projected on the hBN surface plane. Determining quantitatively the impact of the above parameters on the final CA value is still an open question for us, and one we are working on.

[Figure 39. SEM images of two series of hBN thin films with different thicknesses deposited at d = 3 cm (a)-(d) and d = 6 cm (e)-(h), sputtered at a substrate temperature (T_sub) of 125 °C. Their surface behaviors show that the hBN-NWs have different sizes and, in particular, that the roughness of the films differs: the surface of these films can be considered as an ideal plane etched with hBN-NWs of differing shapes and with different spaces between them. This results in different wetting properties of the given films towards water, oils and dyes.]

The intrinsic properties of hBN materials are of interest because, for example, (i) defects in hBN films are related to the ability of hBN films to adsorb pollutants, which can be used in water purification technology; (ii) defects capable of absorbing foreign elements at their vacancies can emit single or multiple photons, which can be used in quantum information encoding technology; and (iii) the temporary adsorption mechanism of the H atom during the deposition of hBN-NWs is very important: if another gas such as CO2 or CH4 is substituted for H2, both act as reactive gases and have the ability to intercalate into the hBN lattice during hBN creation.
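For reference, the standard textbook forms of the relations invoked above (Young's equation and the Wenzel and Cassie-Baxter roughness corrections) are, under the usual idealizing assumptions and not as reproduced from the original figures:

$$\cos\theta_Y=\frac{\gamma_{SV}-\gamma_{SL}}{\gamma_{LV}},\qquad \cos\theta_W=r\cos\theta_Y,\qquad \cos\theta_{CB}=f\,(\cos\theta_Y+1)-1,$$

where γ_SV, γ_SL and γ_LV are the solid-vapor, solid-liquid and liquid-vapor interfacial tensions, r ≥ 1 is the ratio of the true to the projected surface area of the rough film, and f is the fraction of the droplet base in contact with the solid, the remainder resting on air pockets such as those trapped in the interstitial spaces between the hBN-NWs described above.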
Such properties of hBN materials under those conditions remain challenging and are hot topics for many research groups.

Some features in porous hBN nanostructures

For the porous hBN nanostructures we have been investigating, the ability to control the nucleation and growth of hBN-NWs on an array of substrate materials needs to be investigated further and more deeply. As an advantage, hBN films deposited by unbalanced RF sputtering will be useful for large-scale material coating. The quality of the grown hBN-phase films is strongly influenced by the experimental parameters. We have concluded that the hBN material is more porous when deposited at positions closer to the BN target, i.e. at small d; at positions far from the BN target, i.e. at large d, the density of the grown material increases and disordered BN phases readily appear. On the other hand, increasing the substrate temperature also improves the quality of the hBN film, and T_sub = 250 °C was identified as the most suitable deposition temperature for obtaining the best quality of the grown hBN films. The porous hBN film is thermally stable below 1000 °C in high vacuum; above this temperature, the desorption of N-H bonds at the defect sites leads to many chemical processes occurring inside the hBN film, and the resulting creation of di-vacancies breaks the hBN crystal structure, turning the hBN material into poorly ordered phases. Rotating the substrate only reduces the growth rate per unit area of the substrate plane for the same deposition time, compared with the untilted substrate; this situation is equivalent to placing the substrate plane further from the BN target, resulting in a greater material density and more unwanted phases. In all cases the hBN-NWs tended to grow perpendicular to the substrate surfaces, which is also an advantage for using hBN-NWs as field electron-emission sources in some potential applications.

By altering the surface characteristics of the substrate materials, the quality of the hBN phase is enhanced for the two cases of deposition onto the NCD and the 450 °C-heated Cr/Au substrate surfaces. Growth mechanisms of the hBN films deposited onto these typical substrates were proposed based on the detailed experimental data; in both cases the hBN-NWs nucleate directly on the substrate without a thin layer of disordered aBN/tBN phase. There is still some difficulty in controlling the surface roughness of the Cr/Au buffer at the melting temperature of the Cr/Au bilayer: the hBN-NWs were localized on Au droplets, which we need to investigate further by reducing T_sub to a suitable point in the range 363-450 °C. The Cr/Au bilayer buffer not only suppresses the aBN phase created at the interface of the hBN film and the Cr/Au substrate, but also produces a thin hBN layer close to the Cr/Au surface that tends to orient the hBN nanosheets parallel to the substrate surface. [Figure caption fragment: (b), (c) schematics of the two thin films from figure 39 deposited at d = 3 cm with thicknesses of 100 nm and 900 nm, respectively; the 100 nm-thick film has small hBN-NWs, a uniform surface and a higher density of hBN-NWs than the 900 nm-thick film, which contains large hBN-NWs, a non-uniform surface and a low density of hBN-NWs. Differences in the surface behavior of the hBN films lead to different wettability with respect to liquids such as water, oils and dyes.]
The lattice spacing between the covalent planes of the hBN monolayers in an hBN-NW was twisted, which hints that a certain level of defects exists in those hBN monolayers, changing the conducting properties of the grown hBN films. N-H bonds are abundant in the porous hBN films because H dangling bonds are highly available in the reactive gas composition of our RF sputtering. Our data are consistent with the results reported in the literature [53], where B vacancies are preferred in such hBN monolayers; the presence of H atoms incorporated in the hBN lattice therefore indirectly indicates the existence of B vacancies in the hBN-NWs. During their growth, the hBN-NWs grow unevenly, leading to hBN wrinkles that persist in the very thin hBN layer and that result from defects or lattice distortion. Furthermore, an advantage of using NCD substrates is that the morphology and grain size of the NCD particles can be controlled by the addition of N2 during the growth of the NCD films with the CVD technique [162]. NCD layers with a higher sp3 content are more favorable for growing hBN films with uniform hBN-NW widths. This highlights the feasibility of this deposition approach for future coating purposes in negative electron affinity or field emission applications [1,13].

Concluding remarks

As a highlight of this review, we have analyzed in detail the growth mechanisms of hBN films containing hBN-NWs deposited on three different substrate surfaces: Si, NCD and Cr/Au. The Si surface is considered a neutral material, in which the terminating Si atoms at the substrate surface are very inert during the growth of the hBN film. The NCD surface, by contrast, is more active: H dangling bonds can connect to the C-terminated edges at the facets of the sp3 NCD particles and their sp2 boundaries. The weak C-H bonds at such surfaces act as springs, and the interaction of the reactive gas ions in the plasma environment of the RF sputtering with those facets alters the crystallographic properties of the grown hBN films. As a result, the hBN films grown on the NCD substrate have better crystallinity than the hBN films deposited on Si. In particular, the grown hBN-NWs are localized following the morphology of each NCD particle, which makes the hBN-NWs deposited on the NCD substrate more porous than those on Si. Those hBN-NWs grow directly from, and oriented vertically to, the facets of each NCD particle, which makes the competition between the growth orientations of the hBN-NWs occur early. This means that the disordered BN phases are suppressed at the interface of the NCD/hBN films, but they are more likely to exist between hBN-NWs due to the overlapping of the NW-branches. As for the Cr/Au substrate, the crystallinity of the hBN film is enhanced, but the hBN-NWs are localized following the Au droplet surfaces; controlling the fineness of the Cr/Au surface is therefore still an issue that needs further exploration. In particular, a very thin layer of the hBN material grown on the Cr/Au substrate changes its orientation so as to lie parallel to the surface of the Au droplets. That is, we can fabricate hBN films with a crystal orientation parallel to the Cr/Au substrate on a local scale and at a relatively low working temperature (T_sub = 450 °C) using our home-built RF sputtering. Based on the data analyzed above, some issues still need to be solved and require further study.
For example, it is necessary to optimize the Cr/Au substrate temperature to control the roughness of the Cr/Au surface where Au droplets could be present, which will affect how parallel to the substrate surface the hBN material grows on a large scale. Moreover, highly crystallized hBN-NWs can be produced under optimal conditions, but defects still exist, so it is necessary to control these defects using other reactive gases such as CO 2 or CH 4 . There, the role of C, O and H impurities should be explored systematically. An open question is how the intrinsic properties of hBN-NWs would change if the said impurities coexisted in the hBN-NW lattices. In fact, one research group has found that by incorporating C atoms into the vacancies of hBN monolayers, such a monolayer is able to act as a single-photon emitter source [55,56]. In particular, the temporal adsorption properties of foreign atoms or molecules present in the plasma environment, such as CO, CH 3 and SO 2 , which could be adsorbed into the defect sites of the hBN monolayer, need to be studied systematically with a theoretical approach to save time and effort. Finally, as we discussed, transition metal elements act as catalysts for the nucleation of the hBN phase, so the interaction of transition metals such as Au, Ni, Pt and Pd with the hBN crystalline monolayers needs to be elucidated. (DQH). This work is also supported by Duy Tan University (XHC) and the Military Institute of Mechanical Engineering (DKP), Vietnam. Data availability statement The data that support the findings of this study are available upon reasonable request from the authors. Author contribution statement DQH conceived the research, carried out the deposition of hBN films, performed some fundamental measurements such as Raman and FTIR spectroscopy, analyzed the obtained data, and drafted the manuscript. NHV, TQN, TDH, XHC and DKP planned the main tasks of the mini-review, arranged references for the manuscript, searched for and compared our results with data from other research groups during the writing/editing process of the manuscript, and co-supervised the project. Some advanced measurements such as FIB and S/TEM were supported by Hasselt University and the University of Antwerp, Belgium.
37,248.4
2023-02-23T00:00:00.000
[ "Materials Science", "Physics", "Engineering" ]
Study on mechanism of differential concentration corrosion Pipelines corrode easily in a seawater environment, and the dissolved oxygen in the seawater is one of the major parameters causing the corrosion. In practice, corrosion due to the oxygen concentration difference, i.e. differential concentration corrosion (DCC), cannot be avoided. However, a one-dimensional DCC model cannot satisfactorily predict the corrosion because the oxygen distribution near the pipe wall is two-dimensional. In this regard, a two-dimensional DCC model was proposed in this study to numerically investigate the distribution of corrosion potential and current in the ionic conductive layer near the pipe wall as well as the overall corrosion current. The results show that DCC plays a significant role in determining the corrosion potential and current. Without considering DCC, a large corrosion potential and current exist at locations with high oxygen concentration near the pipe wall, whereas a low corrosion potential and current occur at locations with low oxygen concentration. However, when DCC is considered, cathodic polarization is produced at locations with high corrosion potential and the corrosion rate there decreases, while anodic polarization is produced at locations with low corrosion potential and the corrosion rate there increases. In general, the corrosion potential is homogenized by DCC. Differential concentration corrosion (DCC), which is due to the non-uniform distribution of oxygen concentration in an electrolyte 1 , is an important corrosion mechanism. For example, when a structure is immersed in seawater, the corrosion potential at its upper part is higher than that of the lower part because the oxygen concentration near the surface of the seawater is higher than that in deep seawater. The potential difference generated between the upper and lower parts of the structure can lead to the transfer of electrons and subsequent corrosion of the structure. Generally, DCC occurs only when the oxygen concentration difference reaches a certain level. In reality, it is not easy to measure the oxygen concentration throughout the solution. Therefore, DCC has often been ignored in engineering analyses. However, in a complex environment where the distribution of oxygen is highly uneven, DCC should be considered to help explain the mechanism behind some peculiar corrosion phenomena. For example, Matsumura 2 studied the failure of pipelines in the Japanese Mihama Nuclear Power Plant. It was found that the outer elbows of these pipelines became thinner and thinner and were even damaged. This result is difficult to explain using traditional flow-accelerated corrosion (FAC) theory. Traditional theory 3 indicates that the bend in the pipeline experiences a large shear stress due to the fast fluid flow rate, leading to a thinner boundary layer and a higher concentration of oxygen. The oxygen concentration gradient is thus larger, causing a higher mass transfer rate and a higher electrochemical reaction rate; correspondingly, the corrosion rate increases. Based on this traditional theory, the inner side of the pipeline bend should corrode first. However, the facts are contrary to this estimation: the outer side of the elbow thins first and is even damaged. This contradictory result can only be explained by the DCC mechanism.
In the past, this kind of research work [4][5][6][7][8][9][10][11] was carried out under laboratory conditions, where the oxygen concentration in the container was evenly distributed, which cannot be applied directly to engineering. In practice, the distribution of oxygen is more complicated than under experimental conditions. For example, the fluid in a seawater pipeline contains a certain amount of dissolved oxygen, and significant turbulence tends to cause a complex distribution of oxygen when the fluid flows in the pipeline. The oxygen distribution is very non-uniform even if the shape of a pipeline is not complex, and it can be non-uniform in both the axial and circumferential directions. Laboratory conditions therefore cannot meet the needs of engineering practice. In this regard, numerical simulation was applied to study DCC under complex situations. For example, Lu et al. 12 proposed a model to predict reducing-pipe flow-accelerated corrosion. The concept of DCC was introduced into the traditional FAC model based on the difference in oxygen concentration at each end of a reducing pipe, and the corrosion rate of the reducing-pipe section was then calculated. Zhu et al. 13 predicted the corrosion rate of a loop pipeline elbow in a nuclear power plant using a DCC model based on the oxygen concentration difference between the inner bend and the external elbow. According to the model, there is a very thin ionic conductive layer between the inner bend and the external elbow, forming an electronic conduction loop within the pipeline. An analytical method proposed by Song 14 was adopted to calculate the corrosion rate at several points along the conductive path between the inner bend and the external elbow. The result indicates that the corrosion current considering DCC is higher than that without DCC by one order of magnitude; therefore, the elbow can be corroded and damaged earlier. The verified result obtained by Matsumura 2 indicates that macro-battery corrosion caused by concentration differences must be considered when the distribution of oxygen is non-uniform to a certain extent. Previous numerical studies on the corrosion of pipelines have almost exclusively used one-dimensional models assuming a uniform distribution of oxygen. However, such an approximate analysis is not sufficient, because in practice the distribution of oxygen is not uniform. Therefore, considering the uneven distribution of oxygen along the axial and circumferential directions in the pipeline, a two-dimensional DCC model was proposed in this study to numerically investigate the distribution of corrosion potential and current in the ionic conductive layer near the pipe wall as well as the overall corrosion current. Differential concentration corrosion mechanism The corrosion process can be explained using the following experiment. As illustrated in Fig. 1a, a semi-permeable membrane is used to divide a galvanic cell containing electrolyte into two parts. Two iron sheets with the same properties are submerged in the two parts. Different amounts of oxygen are injected into the two sides through a dedicated device to create different oxygen concentrations, i.e. c_O2,2 > c_O2,1. The following electrochemical reactions, reactions (1) and (2), can occur in both parts. The equilibrium potential, exchange current density and Tafel slope in both parts, i.e. E_e,a, I_0,a, β_a and E_e,c, I_0,c, β_c, are listed in Table 1, respectively.
The coupling of reactions (1) and (2) leads to a mixed potential, E_corr, and a corrosion current, I_corr. In general, E_corr is far from the equilibrium potentials E_e,a and E_e,c, so the reverse reactions of the two electrode processes can be ignored. In addition, the controlling step of the whole reaction is the discharge process if the concentration of oxygen is appropriately increased. Therefore, the above coupled reaction can be described by the simplified Butler-Volmer formulas, Eqs. (3) and (4). Consequently, E_corr and I_corr can be obtained by combining Eqs. (3) and (4), since |I_c| = I_a = I_corr. The electrochemical reactions in the two parts of the galvanic cell in Fig. 1a can be described using Eqs. (5) and (6). The equilibrium potential E_e,c of electrode reaction (2) and the exchange current density I_0,c vary with the different oxygen concentrations in the two parts. Based on the Nernst equation 15 , the equilibrium potential and exchange current density can be calculated from Eqs. (7)-(10), where c_O2,0 is the oxygen concentration at the inlet, corresponding to the equilibrium potential E_e,c; its value is listed in Table 2. Equations (7)-(10) can be substituted into Eqs. (5) and (6), and the corrosion potentials E_corr^1 and E_corr^2 as well as the corrosion currents I_corr^1 and I_corr^2 in the two parts can then be obtained, respectively. This is the corrosion occurring in the two parts when samples 1 and 2 are not connected. When parts 1 and 2 are connected by a wire as illustrated in Fig. 1b, under the condition E_corr^2 > E_corr^1, electrons flow through the wire from part 1 to part 2 due to the potential difference, and directional ion movement also occurs in the solution; thus an external current is generated. The corrosion potential difference leads to polarization, i.e. E_corr^2 decreases and E_corr^1 increases, which produces the polarization currents I_P^1 and I_P^2. The change in the corrosion current of parts 1 and 2 can be described as I′_corr^1 = I_corr^1 + I_P^1, and similarly for I′_corr^2. Two-dimensional DCC model Calculation of oxygen distribution. The generative mechanism of DCC was introduced using the aforementioned experimental device, but in practice the corrosion is more complex. The reducing pipeline depicted in Fig. 2a is used to help explain the corrosion mechanism under complex conditions. The oxygen distribution near the wall was simulated in ANSYS FLUENT 16.0 before carrying out the DCC modelling; a multiphase mixture flow model was applied. As shown in Fig. 2a, the inlet velocity, the inlet oxygen content c_O2,0 and the outlet pressure were set; the values are given in Table 2. A no-slip wall condition was applied. The turbulent flow in the pipeline was simulated with the κ-ε model. The wall-function method was used with a dimensionless distance y+ = 50, which refers to the distance between the centre point of the first layer of elements and the wall surface. Figure 2b,c show the computational mesh and the oxygen distribution near the wall, respectively. Due to the low density of oxygen, under the action of gravity most of the oxygen is concentrated in the upper part of the pipeline (Fig. 2c). The results show that the oxygen concentration in the upper part is 3 to 10 times higher than that in the lower part. Therefore, to simplify the simulation, the lower part of the pipeline was neglected.
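To make the coupling of the anodic and cathodic reactions concrete, the following minimal sketch computes a natural corrosion potential and current for a given oxygen concentration from simplified Tafel branches, shifting the oxygen equilibrium potential with a Nernst-type term. All parameter values are illustrative placeholders rather than the entries of Tables 1 and 2, and for brevity only the equilibrium potential (not the exchange current density) is varied with concentration.

```python
import numpy as np
from scipy.optimize import brentq

# Illustrative kinetic parameters (placeholders, NOT the values of Tables 1 and 2)
E_e_a, i0_a, beta_a = -0.44, 1e-6, 0.06     # Fe dissolution: V, A/m^2, V/decade
E_e_c0, i0_c, beta_c = 0.40, 1e-7, 0.12     # O2 reduction at the reference concentration
c_O2_ref = 8.0                              # reference dissolved-oxygen concentration

def oxygen_equilibrium_potential(c_O2, T=298.15):
    """Nernst-type shift of the O2 equilibrium potential with concentration."""
    R, F, n = 8.314, 96485.0, 4
    return E_e_c0 + (R * T / (n * F)) * np.log(c_O2 / c_O2_ref)

def natural_corrosion(c_O2):
    """Mixed potential from |I_c(E)| = I_a(E) with simplified Tafel branches."""
    E_e_c = oxygen_equilibrium_potential(c_O2)
    i_a = lambda E: i0_a * 10.0 ** ((E - E_e_a) / beta_a)     # anodic (iron) branch
    i_c = lambda E: i0_c * 10.0 ** (-(E - E_e_c) / beta_c)    # cathodic (oxygen) branch
    E_corr = brentq(lambda E: i_a(E) - i_c(E), E_e_a, E_e_c)  # current balance
    return E_corr, i_a(E_corr)

# Parts 1 and 2 of the cell: low and high oxygen concentration
for c in (2.0, 8.0):
    E, i = natural_corrosion(c)
    print(f"c_O2 = {c:4.1f} -> E_corr = {E:6.3f} V, I_corr = {i:.3e} A/m^2")
```

The higher-oxygen part yields the higher natural corrosion potential, which is the driving force for the macro-cell current once the two parts are connected.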
Only the upper part of the pipeline was used in the subsequent simulation, while the boundary between the lower and upper parts is regarded as an insulation boundary, as illustrated in Fig. 3a. The corresponding near-wall boundary layer is given in Fig. 3b. Numerical model of two-dimensional DCC. The viscous sublayer was used to understand DCC in the pipeline. The schematic of the discretization of the viscous sublayer is shown in Fig. 4a; three zones (Zones I, II and III) can be identified. As shown in Fig. 5, a representative element (i, j) and its neighbouring elements are taken to show the potential and current flow between elements. If DCC is not considered, there will only be a natural corrosion potential E_corr^{i,j} and a natural corrosion current I_corr^{i,j} in each element, where (i, j) stands for an arbitrary element number. In fact, however, all elements are interconnected with one another, which inevitably generates currents. Thus, the element corrosion potential E_corr^{i,j} can be polarized. The polarization potential is expressed as E^{i,j} − E_corr^{i,j} and the polarization current is I_F^{i,j} (Fig. 6a,b); E^{i,j} is the ultimate corrosion potential of element (i, j) after polarization. As the elements are connected to each other, the natural corrosion potential difference between them can drive a current, as illustrated in Figs. 4b, 5b, 6a and b. Taking element (i, j) as an example again, when DCC is not considered, the absolute values of the anodic reaction current and the cathodic current between the wall and the element in the solution are equal, and no net current is generated. However, when DCC is considered, the current flow between elements leads to a polarization of the corrosion potential. The anode and cathode currents are no longer equal, so a net current, i.e. an external or polarization current, is generated. At the same time, the elements around element (i, j) will also have current flowing in or out, as illustrated in Figs. 4b, 5b, 6a and b. If all elements are considered as a circuit, element (i, j) can be treated as a node in the circuit. When the current reaches a steady state, according to Kirchhoff's Second Law 16 , the net flow of current through the element is zero. Derivation of discrete equations used in zones I and III. According to the balance of current at a steady state, an equation for the corrosion potential of the polarized element can be obtained. As an example, element (i, j) is surrounded by four neighbouring elements. Since each element represents an electrolyte, according to the definition of corrosion potential, E^{i,j} is the potential difference between the wall and the solution, and −E^{i,j} represents the potential difference between the solution and the wall. The current flowing from element (i, j − 1) to (i, j), the current from element (i, j) to element (i, j + 1), the current along the Y direction from element (i + 1, j) to element (i, j), the current from element (i, j) to element (i − 1, j), and the Faraday current due to polarization are given by Eqs. (11)-(15), where R_P^{i,j} is the polarization resistance of element (i, j) in Ω m 2 (for details see the "Discrete equations of elements at the boundary between zones I and II" section) and S_AA′B′B ≈ R Δθ Δx 1 , as illustrated in Fig. 6a.
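A minimal sketch of the node balance just described is given below: at steady state, the currents exchanged with the four neighbouring elements through the solution resistances and the Faraday current through the polarization resistance must sum to zero for every interior element (i, j). The resistance arrays are placeholders for the expressions derived in the following subsections, and the polarization resistance is treated as constant, so the relaxation below solves a linearized version of the problem.

```python
import numpy as np

def net_current(E, E_corr, Rx, Ry, Rp, i, j):
    """Net current into element (i, j); Kirchhoff's law requires this to vanish."""
    neighbours = ((E[i, j-1] - E[i, j]) / Rx[i, j-1] +   # from (i, j-1)
                  (E[i, j+1] - E[i, j]) / Rx[i, j]   +   # from (i, j+1)
                  (E[i+1, j] - E[i, j]) / Ry[i, j]   +   # from (i+1, j)
                  (E[i-1, j] - E[i, j]) / Ry[i-1, j])    # from (i-1, j)
    faraday = (E[i, j] - E_corr[i, j]) / Rp[i, j]        # polarization current
    return neighbours - faraday

def relax_potentials(E_corr, Rx, Ry, Rp, n_sweeps=2000):
    """Gauss-Seidel relaxation of the coupled node balances (interior elements only)."""
    E = E_corr.copy()
    ny, nx = E.shape
    for _ in range(n_sweeps):
        for i in range(1, ny - 1):
            for j in range(1, nx - 1):
                g = (1/Rx[i, j-1] + 1/Rx[i, j] + 1/Ry[i, j] + 1/Ry[i-1, j] + 1/Rp[i, j])
                rhs = (E[i, j-1]/Rx[i, j-1] + E[i, j+1]/Rx[i, j] +
                       E[i+1, j]/Ry[i, j] + E[i-1, j]/Ry[i-1, j] + E_corr[i, j]/Rp[i, j])
                E[i, j] = rhs / g          # enforce zero net current at this node
    return E
```

The resistance terms used in this balance are defined next.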
R_x1^I, R_x2^I, R_y2^I and R_y1^I are the resistances of the solution between elements in the X and Y directions; more details can also be found in the "Discrete equations of elements at the boundary between zones I and II" section. The polarization current is assumed to flow from the wall to the element. According to Kirchhoff's Second Law, Eqs. (16) and (17) hold for each element (i, j); substituting Eqs. (11)-(15) into Eqs. (16) and (17) yields the discrete equations. The elements at other boundaries can be treated similarly, following Fig. 4b, to obtain discrete equations similar to Eq. (17). Derivation of discrete equations used in zone II. Kirchhoff's Second Law is applied to element (t, s) in Zone II, giving Eq. (19), where R_x1^II, R_y1^II, R_x2^II and R_y2^II are the resistances of an element of Zone II in the X and Y directions; their detailed expressions can be found in the "Discrete equations of elements at the boundary between zones I and II" section. It is easy to determine S_A′EFB′ from the geometric relationship in Fig. 6a, where R_x is the radius of the reducing pipe section at coordinate x and k = (r − R)/l_2; the other definitions of r, R, l_1 and l_2 are shown in Fig. 3b. Equation (19) is a standard form. When an element is located at a boundary, the standard form must be modified. For example, for element (m, n) shown in Fig. 4b, because an insulated boundary lies above this element, I_y,in_flow is zero; when Kirchhoff's Second Law is applied to this element, the equation changes accordingly. The elements at other boundaries in Zone II can be treated similarly. Discrete equations of elements at the boundary between zones I and II. As elements (i, k) and (i, k + 1) are at the junction of the straight pipe and the variable-diameter pipe, as illustrated in Figs. 4b and 6c, the calculation of resistance is different from that in Zones I or II. Application of Kirchhoff's Second Law to element (i, k) yields the corresponding discrete equation; for element (i, k + 1) the equation takes an analogous form, where R_x12 is the solution resistance between element (i, k) and element (i, k + 1), and R_x1, R_y1, R_x2 and R_y2 are expressed in detail in the "Discrete equations of elements at the boundary between zones I and II" section. Calculation of element resistance in zone II. Because the width of the element in this zone decreases with increasing values of x, the calculation of resistance is different from that in Zones I and III. As illustrated in Fig. 6c, a typical element is highlighted with red lines. The total resistance should be the integral of the differential resistances between neighbouring elements. Calculation of resistances. Calculation of element resistance in zones I and III. The integral of Eq. (28) gives the X-direction resistance, where x_c1 and x_c2 are the centre coordinates of the two adjacent elements. In a similar way, the Y-direction resistance can be calculated by an integral method; the resistance is given by Eq. (30), whose integral is expressed in terms of x_1 and x_2, the element start and end coordinates in the X direction. Calculation of element resistance at the boundary between zones I and II. For the elements at the junction of the straight pipe and the variable-diameter pipe, such as elements (i, k) and (i, k + 1), the resistance between the two elements is calculated accordingly. A similar calculation can be done to obtain the element resistance at the junction of zones II and III. Calculation of polarization resistance.
The polarization resistance between the pipe wall and the solution can be calculated according to the following formula, where β_a and β_c are the Tafel slopes of iron and oxygen, respectively, as listed in Table 1. Results and discussions Figure 3c illustrates the oxygen concentration distribution near the pipeline wall. It can be seen that in the middle part of the upper wall the concentration of oxygen is higher than that on both sides. Along the axial direction, the concentration of oxygen is highest near the outlet of the pipeline. The distribution of oxygen determines the corrosion of the pipeline wall. Figures 7a and 8a show the distribution of the natural corrosion potential and current without considering DCC. The distribution of corrosion potential and current is closely related to the distribution of oxygen: a high natural corrosion potential and current occur at locations with high oxygen concentration, and vice versa. However, because the elements are actually connected, the difference in the natural corrosion potentials of the elements will inevitably lead to a current flow between the elements, and the natural corrosion potential is bound to polarize. Moreover, the absolute values of the anodic and cathodic reaction currents are no longer equal, causing a polarization current. Figures 7b and 8b show the distribution of the polarization potential and current, respectively. The final corrosion potential and current after polarization are illustrated in Figs. 7c and 8c, respectively. The mechanism of concentration corrosion can be described in more detail. Figures 9 and 10 illustrate the distribution of the above physical quantities, such as the corrosion potential and current, at selected representative rows and columns. Figure 9c shows the polarized potential of each column, indicating the different polarization extents. For example, in the first column the degree of polarization is the highest and the polarization is anodic, which means that the corrosion potential increases. Because the natural potential of that element is the lowest, the current mainly flows into the element, resulting in anodic polarization. Similarly, anodic polarization occurs in columns 48 and 99; in comparison to column 1, the higher natural corrosion potential in these two columns causes a lower degree of polarization. For columns 222 and 247, cathodic polarization occurs in the middle part because the natural corrosion potential at these locations is very high (Figs. 7a, 8b and 11a). Therefore, the element current is mostly in the outflow state and cathodic polarization is dominant. The polarization of the elements in these columns has something in common, i.e. the degree of polarization near the circumferential edges is higher than that in the middle part. This is because the concentration of oxygen is lower at the edges and the corresponding corrosion potential is also lower. Driven by the polarization potential, the polarization current is generated, and its distribution is illustrated in Figs. 10b and 11b. In general, anodic polarization causes an anodic current while cathodic polarization causes a cathodic current and corrosion. The corrosion current is the algebraic sum of the natural corrosion current and the polarization current, as illustrated in Figs. 8c and 10c. It can be seen that the polarization current is basically of the same order of magnitude as the natural corrosion current.
Thus, the polarization current has a significant influence on the final corrosion current distribution, which indicates that concentration corrosion cannot be ignored in the corrosion analysis. Due to concentration corrosion, the corrosion current tends to decrease at locations with a high original corrosion current, whereas it can increase at locations with a low original corrosion current. Additionally, if the solution resistance is not considered, the potentials of all elements will eventually tend to become uniform. Conclusions A two-dimensional DCC model was developed to predict the distribution of corrosion potential and current in the pipeline. The calculation results show a significant influence of concentration corrosion on the overall corrosion of a seawater pipeline. The existence of concentration corrosion polarizes the corrosion potential and subsequently causes a polarization current. Areas with a high natural corrosion potential undergo cathodic polarization and a cathodic current, so the corrosion rate there decreases. In contrast, areas with an originally low natural corrosion potential undergo anodic polarization and an anodic current, so the corrosion rate there increases. The corrosion potential tends to be homogenized by the differential concentration corrosion. All these findings help clarify the corrosion mechanism in seawater pipelines in the presence of differential oxygen concentration. Methods Geometric modeling. The pipeline fluid is taken as the research object; the geometry of the fluid is shown in Fig. 2b. The geometry is generated in the software COMSOL and exported in the *.x_b format. Structured grid generation. In order to establish the DCC model successfully, the fluid must be divided into structured grids. This is done in ICEM, a module of the Ansys software. The geometry of the fluid (in *.x_b format) was imported into ICEM and a structured grid was generated using the mapping method. At the same time, the boundary conditions are defined in this model. After meshing, the grid is saved in MSH format and imported into FLUENT. Calculation and data export. In FLUENT, the calculation models (multiphase mixture flow, κ-ε turbulence model) and the fluid materials (the primary phase is water, the secondary phase is oxygen) are selected, the various boundary conditions (inlet velocity, oxygen concentration, etc.) are set, and the calculation is run. After the calculation, UDF technology is used to output the element centre coordinates and the oxygen concentration close to the wall surface to a file for MATLAB processing. Data processing and display in MATLAB. A MATLAB script is used to read the above data and then perform the calculation in the following order:
5,301.8
2020-11-06T00:00:00.000
[ "Materials Science" ]
Neural partial differential equations for chaotic systems When predicting complex systems one typically relies on differential equations, which can often be incomplete, missing unknown influences or higher-order effects. By augmenting the equations with artificial neural networks we can compensate for these deficiencies. We show that this can be used to predict paradigmatic, high-dimensional chaotic partial differential equations even when only short and incomplete datasets are available. The forecast horizon for these high-dimensional systems is about an order of magnitude larger than the length of the training data. Introduction For centuries, differential equation models derived from physical principles have been the preferred tool to forecast the behaviour of complex natural systems. More recently the advance of data-driven methods has enabled many promising approaches for forecasting spatiotemporal systems, e.g. with feed-forward neural networks [1], convolutional neural networks (CNN) [2] or reservoir computing [3]. In particular, chaotic systems are inherently difficult to forecast, as even the smallest deviations can lead to large errors later. Key challenges remain in predicting complex systems that are high dimensional and chaotic when only short time series and spatially incomplete data are available. We tackle these challenges by combining the knowledge that we have about the governing equations of these systems with data-driven methods into a hybrid model. We explore how hybrid methods help predict complex, chaotic systems of which we have only incomplete and sparse knowledge. Every numerical, physical model of a natural system is incomplete in some sense, for example due to unknown parts of the dynamics, or due to deliberately omitting higher-order effects. Hybrid approaches try to account for these deficiencies with data-driven methods to derive more complete hybrid models. The data-driven part of the hybrid models needs to be trained, and when directly augmenting a differential equation with an ANN it is no longer possible to use the standard backpropagation algorithm that is usually applied. Chen et al [4] presented an efficient algorithm to train through an ODE solver based on the adjoint sensitivity method. Rackauckas et al [5] expanded on this idea and developed the universal differential equations framework, which allows most types of differential equations to be freely augmented with universal approximators such as ANNs. These approaches are also related to prior research showing how the parameters of ODEs describing chaotic systems can be estimated, such as by Baake et al [6]. For the fully data-driven, and thus non-hybrid, case, Sun et al [7] showed how the complete right-hand side of differential equations can be modelled with ANNs based on the neural ODE approach by Chen et al [4]. Another hybrid approach is physics-informed neural networks, which can approximate solutions of PDEs with ANNs and also set up ANNs whose outputs are solutions of a specific PDE [8]. Combining a knowledge-based differential equation model with a reservoir computer has also recently shown great promise for predicting chaotic systems like the Lorenz-63 system and the Kuramoto-Sivashinsky (KS) equation [9,10]. Combining knowledge of systems with data-driven approximations such as polynomials has been done for low-dimensional ODEs (cf. [11]).
The approach that we aim for provides greater flexibility and capability through the use of ANNs as approximators and the possibility of using PDEs and their representations as high-dimensional ODEs through discretization. In this article we focus on a particularly challenging situation: we want to predict the dynamics of high-dimensional chaotic systems by combining discretized PDEs with ANNs, under the condition of very short training datasets and with parts of the spatial data missing. The universal differential equations framework [5] provides the basis for the introduction of the neural partial differential equations (NPDEs) that we will use. Compared to existing works, the results we present in the following advance the field of hybrid modelling in two key aspects. First, we show that it is possible to train models based on very short training data, and second, we do that for chaotic systems even when the data are subject to noise and incomplete. We will first introduce the method of NPDEs and will then apply them to two prototypical, spatiotemporally chaotic systems: the complex Ginzburg-Landau equation and the KS equation. We will show that the NPDE-based hybrid approach proposed here is capable of predicting the dynamics of these example systems in high spatial dimension and with only very short training data, compared to the forecast horizon. Methods The framework of universal differential equations [5] enables us to use universal approximators such as ANNs within partial differential equations (PDEs). The resulting NPDEs are hybrid models that are able to compensate for missing parts of the PDE by learning them from data, thereby attenuating structural model errors. NPDEs are thus discretized PDEs with an ANN as part of the equation. The ANN N will mostly be comprised of densely connected layers of nodes, Dense(x) = f_NL(W x + b), where the (N_out × N_in)-matrix W and the N_out-dimensional vector b are the trainable parameters Θ_i = {W, b} and f_NL is a nonlinear function, here the swish function f_NL(x) = x/(1 + exp(−x)) [12]. In order to train models like NPDEs, one needs to be able to compute gradients of the solution of differential equations with respect to the parameters of the equation, and thus also of the ANN. Appropriate algorithms such as the adjoint sensitivity method were originally used, e.g., for sensitivity analysis of meteorological models [13]. Recent advances showed that these can also be used within the context of artificial neural networks [4]. In particular, the universal differential equations framework made these methods much more accessible and easier to use [5]. The NPDE training and computations are all optimized to run on GPUs, which enables us to investigate even very high dimensional systems efficiently. The loss function that is minimized during the training process by a gradient descent algorithm is the sum of the least-squares errors of the predictions made by the NPDE and an additional parameter regularization of the ANN. The sum is taken over all discretized spatial coordinates x and time steps i_t of the predicted trajectory. Throughout the article ‖·‖_1 denotes the L1 norm and γ = 10^−5. In [5] non-chaotic applications of universal differential equations are discussed, and these are usually trained by minimizing the mean square error of a relatively long trajectory predicted by the universal or neural differential equations.
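As a rough illustration of the building blocks just described, the sketch below implements the dense layer with the swish nonlinearity and an L1-regularized least-squares loss with γ = 10^−5. The layer sizes, the use of swish on every layer and the data arrays are illustrative assumptions rather than the exact architecture used in the article.

```python
import numpy as np

def swish(x):
    """f_NL(x) = x / (1 + exp(-x))"""
    return x / (1.0 + np.exp(-x))

def dense(x, W, b):
    """Dense(x) = f_NL(W x + b), with W of shape (N_out, N_in) and b of length N_out."""
    return swish(W @ x + b)

def ann(x, params):
    """The network N: a stack of dense layers; params is a list of (W, b) pairs."""
    for W, b in params:
        x = dense(x, W, b)
    return x

def loss(prediction, truth, params, gamma=1e-5):
    """Sum of squared forecast errors over space and time plus L1 regularization of the ANN parameters."""
    squared_error = np.sum((prediction - truth) ** 2)
    l1 = sum(np.abs(W).sum() + np.abs(b).sum() for W, b in params)
    return squared_error + gamma * l1
```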
We found it difficult to train models for chaotic processes on long trajectories, as inherently small deviations at the start of the trajectory can lead to massive deviations later. We thus integrate the NPDE only from t_0 to t_0 + i_τ Δt for a small i_τ and repeat this from every initial condition in the training dataset. For i_τ = 1 we therefore train on the one-step-ahead forecast error. Increasing the length of the integration interval also increases the computational complexity massively. We thus first integrate with i_τ = 1 until the forecast error on a validation set converges and then slowly increase i_τ to its final value τ. The final length of the integration interval τ is a hyperparameter of the training procedure. When integrating the NPDE, we save the trajectories at constant sampling time steps Δt for better comparability, even though the solvers typically feature adaptive step size control. Depending on the model in question, Δt might need to be quite small to ensure successful training, as we will address later. In order to model possible derivatives in the unknown parts of the PDE, we introduce a novel trainable layer, the Nabla layer ∇. It is defined in terms of the finite-difference derivative matrix ∇_FD and a trainable parameter w. The second term of its right-hand side is thus a scaled, numerical first derivative of the input, and the layer learns whether or not to take a derivative of the input (or a linear combination of both the input and its derivative). The parameter w is approximately bound to the interval [−1; 1] by an additional penalty in the overall loss function. For this penalty we chose p(w) = max(w^6 − 1, −w^4 + w^2) because it has large values outside of [−1; 1] and local minima at 0 and ±1. When stacking k of these layers and a multi-layer perceptron (MLP) together, we are able to model functions of derivatives up to order k. To increase the numerical precision of the Nabla layer, we use alternating forward and backward finite-difference schemes when stacking Nabla layers, as we noticed an impact of the accuracy of the finite-difference schemes on the results, especially when higher-order derivatives are modelled. While we investigate only a finite-difference scheme here, other discretization approaches are likely to work as well and could be investigated in future research. Additional skip connections can help to train these models if they comprise many layers, resulting in a residual network (ResNet) [14]. Since we directly augment the differential equation, the NPDE approach is very flexible. It does not have a fixed input or output dimension. We could integrate the trained model at higher or lower resolutions, and as we deal with systems where local interactions are dominant, the NPDE approach enables us to deal with spatially incomplete data as well. We can learn the missing part of the equations from the incomplete data and predict the complete system by defining a 'learn domain' that is situated well within the known data (see figure 1). The initial conditions for each integration are set to 0 where we have no data, and the loss function is computed only from points within this learn domain. In these cases we only integrate for one sampling time step and thus use a one-step-ahead error, as longer integration intervals would allow features from outside the known domain to propagate into the learn domain.
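A rough NumPy sketch of the Nabla layer idea follows: a fixed periodic finite-difference derivative matrix is combined with a trainable scalar w, and the penalty p(w) keeps w near 0 or ±1. The text does not fully specify how the input and its derivative are mixed, so the (1 − |w|)·u + w·∇_FD u combination below is an assumption for illustration.

```python
import numpy as np

def fd_first_derivative(n, dx, forward=True):
    """Periodic finite-difference matrix for the first derivative (forward or backward scheme)."""
    D = np.zeros((n, n))
    for i in range(n):
        if forward:
            D[i, i], D[i, (i + 1) % n] = -1.0, 1.0
        else:
            D[i, (i - 1) % n], D[i, i] = -1.0, 1.0
    return D / dx

def nabla_layer(u, w, D):
    """Assumed mixing: w = 0 returns the input, w = +-1 returns its scaled numerical derivative."""
    return (1.0 - abs(w)) * u + w * (D @ u)

def w_penalty(w):
    """p(w) = max(w^6 - 1, -w^4 + w^2): large outside [-1, 1], local minima at 0 and +-1."""
    return max(w ** 6 - 1.0, -(w ** 4) + w ** 2)

# Stacking k such layers (alternating forward/backward schemes) before an MLP
# allows the network to represent functions of derivatives up to order k.
```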
Results In the following we will assume that we know only a part of the equation we are investigating and 'forget' another part of the equation, which is instead modelled by an artificial neural network N. The latter part will be trained with data. For this theoretical setup, we generate the data from the true, known system and then compare it with the prediction of the trained NPDE. We assess how well the NPDEs perform by first integrating them, for a suitably long time, from the first initial conditions that were not part of the training set. Importantly, when integrating we save the trajectory at the same regular sampling time steps Δt as when generating the initial true data. We then define the following measures of forecast accuracy: the non-normalized error and the normalized error [9], where ‖·‖_2 is the L2 norm. As defined in [9], we also compute a valid time t_v as the first time when e(t_v) > 0.4. The time will be expressed either as the number of forecasted sampling time steps N_f or by scaling it with the maximum Lyapunov exponent as a natural time scale of the system. The results of our NPDE approach will be compared to other methods. The first benchmark is a CNN with a bottleneck, similar to [15]. Another comparison is a hybrid reservoir computer [9] that also combines a knowledge-based model with a data-driven model, i.e., the incomplete PDE with a reservoir computer. For high-dimensional systems, the size of the reservoir network needs to be increased accordingly. For the systems investigated here, the necessary reservoir size becomes potentially prohibitively large. For our comparative purposes, we thus compute the hybrid reservoir with a lower-dimensional system with the same inter-grid spacing. Additionally, we also show how the incomplete model on its own performs as a predictor. Further details on these comparisons can be found in appendices C and D. Complex Ginzburg-Landau equation The complex Ginzburg-Landau equation (CGLE) [16,17] is defined for a complex-valued field u(x, t) on two spatial dimensions. The CGLE is a prototypical equation that models every reaction-diffusion system close to the onset of oscillation [17]. For various parameter configurations, such as α = 2, β = −1 as chosen here, this system exhibits chaotic behaviour. The physical size of the domain is set to 192 × 192 in arbitrary physical units and periodic boundary conditions are applied. The domain is then discretized with a finite-difference scheme to a grid with 128 × 128 nodes, thus transforming the PDE into a 16 384-dimensional ODE. Here, we focus on modelling the reaction term with an ANN, so the NPDE we investigate replaces this term by a network N_CGLE. As part of the NPDE, N_CGLE is defined so that it has only a single input: the value of the spatiotemporal field u at one specific position. Since u is complex valued, the real and imaginary parts are split as separate inputs. N_CGLE is a multilayer perceptron with two hidden layers, each with 10 densely connected nodes (see figure 1). A single long trajectory of the CGLE is integrated with a Tsitouras solver [18]. The initial conditions are uniformly random within the interval [−0.005; 0.005] for both the real and imaginary part. Although the solver has an adaptive step size, the trajectory is saved every Δt = 0.1. The first 2000 steps are not saved to avoid any transient dynamics. Only the next 25 steps after the transient form the training set and the remainder of the trajectory is kept for the validation and test sets.
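For concreteness, here is a rough sketch of how such a discretized right-hand side can look: a five-point periodic Laplacian for the known diffusive part and a small pointwise MLP, acting on the real and imaginary parts of u, standing in for the learned reaction term N_CGLE. The (1 + iα) prefactor of the diffusion term and the exact way the known and learned parts are combined are assumptions based on the standard form of the CGLE, not necessarily the exact formulation of the article.

```python
import numpy as np

def laplacian_periodic(u, dx):
    """Five-point Laplacian on a periodic 2D grid; works for complex-valued fields."""
    return (np.roll(u, 1, axis=0) + np.roll(u, -1, axis=0) +
            np.roll(u, 1, axis=1) + np.roll(u, -1, axis=1) - 4.0 * u) / dx**2

def reaction_mlp(u, params):
    """Pointwise MLP N_CGLE: inputs are (Re u, Im u) at each node, outputs are (Re, Im)."""
    x = np.stack([u.real.ravel(), u.imag.ravel()])      # shape (2, number of grid nodes)
    for k, (W, b) in enumerate(params):
        x = W @ x + b[:, None]
        if k < len(params) - 1:                          # swish on the hidden layers
            x = x / (1.0 + np.exp(-x))
    return (x[0] + 1j * x[1]).reshape(u.shape)

def npde_rhs(u, params, dx, alpha=2.0):
    """Known diffusive part (assumed (1 + i*alpha) * Laplacian) plus the learned reaction term."""
    return (1.0 + 1j * alpha) * laplacian_periodic(u, dx) + reaction_mlp(u, params)

# Example parameter shapes: two hidden layers of 10 nodes and a 2-dimensional output
rng = np.random.default_rng(0)
params = [(rng.normal(size=(10, 2)), np.zeros(10)),
          (rng.normal(size=(10, 10)), np.zeros(10)),
          (rng.normal(size=(2, 10)), np.zeros(2))]
u0 = rng.uniform(-0.005, 0.005, (128, 128)) + 1j * rng.uniform(-0.005, 0.005, (128, 128))
print(npde_rhs(u0, params, dx=1.5).shape)    # (128, 128)
```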
The NPDE is trained by minimizing the loss function, equation (2), using stochastic gradient descent with weight decay [19]. Figure 2 shows the prediction of the trained NPDE for different numbers of time steps N_f and how the normalized error evolves. We found that the NPDE makes accurate predictions that exceed the length of the training set by far. The normalized error increases exponentially with increasing t until it levels off at around 0.4, which coincides with the threshold of the valid time t_v for the CGLE NPDE. Therefore, we additionally measure when e(t) = 0.3 is reached for the first time. The valid time increases slightly when the final length of the integration interval τ is increased (see figure 2); however, increasing τ needs considerably more computation time. Ultimately, this increase is so small that it does not seem to justify the much higher computation time in the case of the CGLE. The valid time is N_f = 388 sampling time steps for τ = 1 and N_f = 479 for τ = 20, whereas e(t) = 0.3 is reached correspondingly earlier. Additionally, we investigated the sensitivity of the NPDE approach to noise in the underlying data by adding a small, normally distributed noise vector to the part of the trajectory x(t) that was used for training. We use only one constant noise vector η, each element of which is independently drawn from a normal distribution N(0, σ). In this way we simulate observational noise. For a given standard deviation of the noise, we train the model in the same manner as before, with 25 time steps of training data x_η. The forecast error e(t) can be evaluated by comparing the NPDE forecast against the original time series x or against the series with noise x_η; in our trials the forecast length did not differ significantly between the two cases. In figure 3 we report the results for noise with a standard deviation between 0.01 and 0.2. One can see a relatively smooth response to the increased observational noise. The forecast length decreases, but even at σ = 0.2 meaningful forecasts can still be made. In appendix table 1, forecast times are reported for the various noise levels. Accurate forecasts can even be made with incomplete data, as the results presented in figure 2 show, when only one half of the spatial field was used to train the NPDE. The valid time is N_f = 333 sampling time steps or 5.57 λ_max t. The slightly lower threshold e(t) = 0.3 is reached at N_f = 254 sampling time steps or 4.24 λ_max t. There is no significant difference between the accuracy inside and outside of the known domain, as the almost identical evolution of the normalized error shows. Kuramoto-Sivashinsky equation Another paradigmatic example of a spatiotemporally chaotic PDE is the KS equation. It is again solved with a finite-difference scheme, with periodic boundary conditions and length L = 1160. We discretize it to 4096 spatial grid points and solve for a long trajectory, of which we use an N_t = 25 long training dataset at Δt = 0.02 sampling time steps. For the NPDE approach we 'forget' the term with the second derivative and replace it with an ANN consisting of four Nabla layers and an MLP with a residual connection (see figure 1). During the training procedure the parameters of the Nabla layers introduced in equation (3) quickly converge, two of them to values very close to 0 and two to values very close to 1, thus correctly identifying the order of the derivative that is missing in the incomplete model. Figure 4 shows the results of the predictions of the NPDE.
The valid time t_v is 2891 time steps which, given a maximum Lyapunov exponent λ_max = 0.07, is equivalent to 4.05 λ_max t. The normalized error increases exponentially with increasing t. The hybrid reservoir can predict accurately up to a valid time of N_f = 52 or 0.08 λ_max t. We found that, especially for the KS system, the forecast profits from smaller values of the sampling time step Δt. This became most apparent when tasking the NPDE model with replacing the fourth-derivative term, as shown in appendix A. In this case a larger sampling time step, e.g. Δt = 0.1, fails to produce meaningful forecasts. This can be understood in view of the fact that the KS system is very sensitive to even the smallest changes to this term. Using Δt = 0.02 leads to forecast horizons similar to those reported here for the second-derivative term, as shown in appendix A. Discussion Using NPDEs one is able to make forecasts of only partially known high-dimensional chaotic systems, even when the datasets available for training are extremely short and spatially incomplete. Importantly, we showed that our NPDE approach works best for chaotic systems with short integration intervals and small sampling time steps. Due to the chaotic nature of the investigated systems, training has to start by integrating only a single sampling time step ahead, before slowly increasing the integration interval. However, for the prototypical systems that we investigated here, longer integration intervals do not significantly improve the forecasts made by the NPDE. This should change when non-Markovian systems are investigated. Additionally, we introduced a novel finite-difference layer that enables the NPDE approach to work well with systems such as the KS system, as well as when, e.g., diffusive effects are modelled. Essentially, the NPDE approach makes use of the ergodicity of such systems and is thus able to train and make accurate forecasts not despite but because of the fact that these systems are high dimensional. In the setups we used, the ANNs are an efficient tool due to the uniformity of the domain and their capability to fit any right-hand side of the equation as long as enough training data are available. Despite the short time series, the large amount of spatial information gives us enough data to train the artificial neural networks, even when the training data are subject to observational noise. The forecast horizon of the NPDE is much longer than the dataset used for training itself, and as the differential equation is modelled directly, one can also make predictions from arbitrary initial conditions. In many fields, such as climate science, datasets are often rather short, so the capability to be trained on such short datasets could prove extremely valuable. The CGLE system we investigated is 16 384-dimensional, whereas the KS system is 4096-dimensional. The NPDEs are optimized on GPUs and thus the approach is scalable; increasing the dimension further is certainly possible. The key challenges that we identified (high dimensionality, chaotic behaviour, short time series and incomplete data) are all successfully tackled by using NPDEs. As we showed, NPDEs are also useful in cases where only incomplete data are available. While this approach seems to be limited to systems without significant long-range interactions, it is still a powerful tool that enables predictions even when the complete spatial domain is not available as training data.
Based on these results we conclude that NPDEs are a promising approach with a wide range of possible applications, especially because they address one of the crucial limitations of machine learning: the need for long training datasets. In the future we hope to apply this method to experimental data from nonlinear optics on the one hand and from atmospheric dynamics on the other. General circulation models of the atmosphere seem to be an ideal application for our NPDE framework. Although very sophisticated models exist, they cannot resolve every possible influence and scale, which traditionally leads to parameterizations of the unresolved scales and processes such as cloud formation. In addition, the length of available observational training data is relatively short compared to the time scales of many phenomena in climate dynamics or in physiology, economy and ecology. Data availability statement All data that support the findings of this study are included within the article (and any supplementary files). Appendix C. CNN This architecture was shown to be successful in approximating complex spatiotemporal fields, like simple global circulation models [15]. The CNN consists of three convolutional layers with 3 × 3-sized kernels, each with 8 channels and each followed by a 2 × 2 max pooling layer. The dimension is thereby reduced; this is the so-called 'bottleneck' of the CNN. Then, three convolutional layers with 3 × 3-sized kernels, each followed by an upsampling layer, scale the dimension back to the input dimension. The CNN is trained by minimizing the one-step-ahead least-squares forecast error with a stochastic gradient descent method over 10 000 epochs of the dataset. The exact setup that we investigate is one where CNNs cannot excel easily. The reason the CNN approach leads to worse results than the neural PDE is the very short training data combined with the high dimensionality of the training data. It would likely perform much better with training datasets of lengths of a few thousand samples. Nevertheless, the comparison we present highlights the strength of our hybrid NPDE approach in exploiting the spatial structure, which allows it to be trained on very short time series. Appendix D. Hybrid reservoir Combining knowledge-based but incomplete models with a data-driven numerical model has previously been achieved successfully using reservoir computers. Pathak et al [9] showed that such a setup is able to forecast chaotic processes for very long times. However, in these examples very long input datasets were used. Here, we use the same basic setup as reported in [9] with reservoir size N = 20 000, spectral density ρ = 0.4, sparsity d = 0.03, input coefficients uniformly drawn from [−0.5; 0.5] and regularization constant 10^−4. The knowledge-based model is the NPDE without the neural network, thus the PDE with one term missing. It was integrated using the LSODA solver from the Fortran ODEPACK library. While for longer training datasets a forecast horizon of several Lyapunov times can be achieved, it is much lower for the short training datasets explored in this article. For the 128 × 128-sized grid that is used for the CGLE and the 4096-dimensional KS discretization, one would need much larger reservoir sizes, which are potentially prohibitively large. We therefore computed the hybrid reservoir comparisons on smaller grids, 50 × 50 for the CGLE and 128 for the KS.
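For the CNN benchmark described in appendix C, the sketch below shows one way such a bottleneck architecture can be put together in PyTorch; the 3 × 3 kernels, 8 channels, pooling and upsampling follow the description above, while the padding, ReLU activations, ordering of the final layers and the single input channel are assumptions.

```python
import torch
import torch.nn as nn

class BottleneckCNN(nn.Module):
    """Three conv (3x3, 8 channels) + max-pool stages, then three conv + upsample stages."""
    def __init__(self, channels=1):
        super().__init__()
        def conv(c_in, c_out):
            return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU())
        self.encoder = nn.Sequential(
            conv(channels, 8), nn.MaxPool2d(2),
            conv(8, 8), nn.MaxPool2d(2),
            conv(8, 8), nn.MaxPool2d(2),          # the spatial 'bottleneck'
        )
        self.decoder = nn.Sequential(
            conv(8, 8), nn.Upsample(scale_factor=2),
            conv(8, 8), nn.Upsample(scale_factor=2),
            nn.Conv2d(8, channels, 3, padding=1), nn.Upsample(scale_factor=2),
        )

    def forward(self, x):
        # one-step-ahead prediction of the next field snapshot
        return self.decoder(self.encoder(x))

model = BottleneckCNN()
snapshot = torch.randn(4, 1, 128, 128)        # batch of field snapshots
print(model(snapshot).shape)                  # torch.Size([4, 1, 128, 128])
```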
5,591.6
2021-04-01T00:00:00.000
[ "Physics", "Computer Science" ]
A review on factors influencing mechanical properties of AlSi12 alloy processed by selective laser melting AlSi12 has a high strength-to-weight ratio and good corrosion resistance. As a result, it has potential for use in the automotive and aerospace industries. However, AlSi12 is difficult to process using conventional manufacturing technologies because of its high thermal conductivity, high reflectivity and low flowability. It is therefore necessary to explore how emerging manufacturing technologies can be used to process it effectively. Additive manufacturing (AM) offers great design freedom. For the AM of metallic parts, several technologies are in use, including selective laser melting (SLM), electron beam melting, laser engineered net shaping, and cold spray additive manufacturing. Among these AM processes, SLM is a cutting-edge manufacturing technique that has the potential to change the way people think about design and production. SLM of AlSi12 alloy presents unique advantages in producing components with high strength and low weight while offering increased design freedom. However, more information is needed on how SLM can be used effectively to manufacture AlSi12 parts in a way that reduces defects without compromising the mechanical properties. Thus, this paper aims to review the factors that influence the mechanical properties of AlSi12 alloy parts produced using SLM. This information is useful in determining the factors that can be considered for manufacturing parts with outstanding characteristics. Introduction Additive manufacturing (AM) allows for the manufacture of topographically optimized parts with complicated shapes [1]. As a result, the product's functional integrity can be ensured and enhanced at an early design stage. Based on previous research, AM can be classified into different processes; among them, selective laser melting (SLM) is the most widely used method for metal AM [2]. This method can be used to manufacture complex geometries that are costly to produce using conventional methods. SLM has several undeniable benefits over traditional production methods such as extrusion, grinding, powder metallurgy, and casting [3]. These benefits include the ability to manufacture maximum-density three-dimensional components of complicated shapes, limited post-processing requirements, versatility in fabricating complex molded metal matrix composites, and so on [3]. Figure 1 shows an overview of the SLM process. In the SLM technique, a layer of metal powder is applied to a substrate surface using a powder coating system. After deposition, the powder layer is melted according to a set scanning pattern. Following the scanning of a layer, the build platform drops down by a set distance (usually 20 to 40 µm in SLM), and then another layer is deposited and scanned. This procedure is repeated until the parts are entirely constructed [4]. The AlSi12 alloy has a lot of promise for SLM, particularly in the transport sector, because of its excellent characteristics such as corrosion resistance and a high strength-to-weight ratio. This helps to reduce the weight of vehicle parts, hence reducing the overall vehicle weight and fuel consumption [2,5]. SLM provides distinctive advantages for AlSi12 (intricate design, tool-less welding, personalized style, and geometric freedom) [6]. Researchers are increasingly interested in processing the Al-Si alloy to produce components with the required characteristics using SLM.
Al powders have limitations such as high thermal conductivity, high reflectivity, and low flowability [2]. Furthermore, they have poor laser absorption and are easily oxidized and balled. Despite these limitations, aluminum may be alloyed with other metallic materials to address some of these problems. AlSi10Mg and AlSi12 are the two Al-Si alloys that are commonly used in SLM. Silicon lowers the melting temperature of aluminum and thereby increases its fluidity. It was reported by Spierings et al. [7] that finer particles have advantages for high component densities, scan surface consistency, and process productivity. Liu et al. [8] also showed that powder with a high fine-particle content results in high component density. Rijesh et al. [9] reported that when grain sizes are in the nanometer range, there is a substantial difference in mechanical characteristics (strength, ductility, and hardness). However, knowledge of how the particle size of the starting powder of AlSi12 alloy synthesized via SLM can significantly reduce the defects of the produced parts is still limited. Rashid et al. [10] reported that varying the scan strategy can result in significantly different microstructural and mechanical properties of the produced components. Maamoun et al. [11] also conducted a study to see how processing parameters influence the porosity, relative density, and roughness of the surface. Authors have reported on different processing parameters for deriving the anticipated performance characteristics. It is highly recommended to understand the influence of varying processing parameters and other factors on the properties of the printed parts. Thus, this paper presents a review of the factors that influence the mechanical properties of AlSi12 alloy processed using SLM. Section 2 outlines the factors that influence the mechanical properties of AlSi12, Section 3 covers the developments over time in the processability of AlSi12 alloy, Sect. 4 contains the discussion and Sect. 5 outlines the conclusions. 2 Factors affecting the mechanical properties of AlSi12 alloy 2.1 Effect of particle sizes on the built components for different materials Table 1 below shows that there is a lot of interest in understanding the SLM processability of various alloys with various particle sizes. It is clear from Table 1 that there is a common interest in studying the effect of particle size on the built part, and it was seen that reducing the powder size improves the quality of SLM parts. Most of the research was conducted using micro-sized powder particles. Effect of process parameters and heat treatment on mechanical properties of AlSi12 alloy by SLM Authors have used different processing parameters to enhance the resultant mechanical properties. The mechanical properties of focus are hardness, tensile strength, compressive strength, wear, and fatigue; the physical property considered is density. Density and hardness Baitimerov et al. [6] processed AlSi12 alloy through SLM to study the characteristics of the used powder during the manufacturing process. Three different batches of gas-atomized powders from separate vendors were used while adhering to the following parameters: the laser power was 200 W, the layer thickness was 50 µm with a stripe hatch pattern, the powder was dried for 1 h at 100 °C, and there was 500 ppm of oxygen inside the chamber. According to the findings, the relative density was ≥ 95%.
The formed powder particles were very fine and spherical, showing poor processability in SLM, while the opposite is true with AlSi12 powder (Fig. 2). The particles with a roughly spherical morphology had poor flowability. This poor flowability results in higher porosity values (Fig. 3). In another study, Baitimerov et al. [6] looked at the processing of AlSi12 by SLM to see what process parameters would result in the least number of pores. Samples had a variety of microstructures with a porosity of around 0.5%. The flowability of the AlSi12 powders was discovered to be slow. The surface morphology of the samples shows that they have a rough surface.
Table 1 The effects of particle sizes on the built components for different materials (method/study | results | reference):
- Explored the densification behaviour of gas and water atomized 316L stainless steel powders, 3-40 μm and 6-50 μm respectively | The results demonstrated that the parts fabricated with the gas atomized powders acquired a higher relative density and less porosity compared to those with the water atomized powders | [28]
- Investigated 316L stainless steel powders with 2 types of particle size distributions (Sandvik Osprey (SO), particle size 0-45 µm, and LPW Technology, in the range 15-45 µm) and the properties of the as-built part using SLM | The results indicated that powders with different particle size distributions behave differently and thus cause a difference in an as-built part's quality | [29]
- Examined the effect of different powder sizes of the 316L stainless steel on the part quality in the SLM process | They reported that the metal powders of a smaller size tended to reduce porosities in the fabricated parts compared to those of a larger size; the relative density was 99.75% for the 26.36 µm powder and 97.50% for the 50.81 µm powder | [28]
- Investigated the effect of Ti-6Al-4V powder variation of 20 μm to 50 μm on the powder bed thermophysical properties and the microstructure and tensile strength of as-built SLM parts | It was found that there is a difference in flowability and porosities of the different powders; there was no significant difference in powder bed densities of the 3 types of Ti-6Al-4V powder | [30]
- Investigated the powder particles of 316L stainless steel | They reported that high fine particle content results in a higher powder bed density, which leads to higher density sections under low laser energy intensity | [29]
- Compared the SLM processing activity of three 316L stainless steel powder batches with different particle size distributions | They discovered that fine particles are advantageous for high component densities, process productivity, and scan surface consistency | [31]
- Investigated the effect of three separate AlSi12 powders (with differing particle size distribution, morphology, and chemical composition) on SLM processability; powder B had a larger amount of fine particles (< 25 µm) than powder batches A and C | It demonstrated that the flowability of the powder as well as the apparent density of AlSi12 SLM samples influence their processability | [6]
To minimize surface roughness and porosity, a fine AlSi12 powder and a double-pass laser scanning strategy were proposed. According to the study by Rashid et al. [12], a relative density of 99.8% was obtained with an energy per layer of between 504 and 895 J. Samples had yield strength, tensile strength, and ductility of 225-263 MPa, 260-365 MPa, and 1-4%, respectively. Chou et al. [13] suggested a new method for controlling the heat input in AlSi12 by using pulsed SLM instead of traditional SLM. 
They used a laser with a power range of 0.5-4.5 kW, a travel speed of 90-180 mm/min, a 150-µm spot size, a 0.1-mm hatch distance, and a 0.1-mm layer thickness to process the AlSi12 alloy. It has been proven that by printing with a pulsed laser, Si may be refined to a size of less than 200 nm. The density and hardness of the printed component were 95% and 135 HV, respectively. Wang et al. [14] investigated how AlSi12 samples produced through SLM are influenced by the build chamber environment. When the samples were printed in argon, nitrogen, or helium chamber atmospheres, there was no discernible difference in density or hardness. The samples outperformed conventionally produced material with 1.5 times the yield strength, 20% higher tensile strength, and twice the elongation. The parameters needed to achieve a consistent relative density for the AlSi12 alloy generated by SLM were studied by Louvis et al. [15]. Laser intensity and laser scanning rate were the process parameters studied. Based on their results, the oxidation factor was the most important parameter that impacted relative density. Experimentally, two different SLM devices were employed, each with a different laser power: one with 50 W and the other with 100 W. A relative density of 89.5% was obtained from the system of 100-W laser power with a combination of optimum parameters. Oxide formation should be prevented to create AlSi12 components with a 100% relative density. Tensile and compressive strength Tensile strength Kang et al. [16] investigated the microstructure and strength of an in situ produced eutectic Al-Si alloy made from an elemental powder combination utilizing selective laser melting. A dense eutectic Al-Si alloy (approximately 99%) was produced using an argon atmosphere. The rapid cooling rate of SLM resulted in a microstructure with nano-sized Si particles and cellular Al. The ductility and tensile strength of SLM-treated materials diminish as the laser scanning speed rises. The pre-alloyed powder needs more energy density to produce denser samples from in situ SLM fabrication. Similarly, a controlled and ultrafine microstructure of AlSi12 treated by SLM was described in research by Li et al. [17]. Nonetheless, they performed a solution heat treatment for 4 h at 500 °C, followed by water quenching. At the Al grain boundaries, spherical Si particles formed. On the other hand, the coarse and fine Si precipitates were evenly dispersed throughout the Al matrix. The tensile characteristics improved after heat treatment, as was seen in the microstructure, and the material had an extraordinarily high ductility of around 25%. The tensile behavior of AlSi12 produced by SLM was studied by heating the base plate and experimenting with four different hatch types [18]. The following parameters were used: 320-W laser, 50-µm layer thickness, 110-µm hatch spacing, 73° hatch rotation, argon atmosphere, scanning speed of 1455-1939 mm/s; checkerboard, single and double melt, and single melt continuous scanning methods were used. A solution heat treatment for 6 h at 473-723 K was done. According to the findings, the differences in tensile characteristics were linked to fracture propagation route variance. The ductility of the samples manufactured without using contour scans increased significantly without losing tensile strength. With the right processing parameters, the tensile characteristics at room temperature can be adjusted in situ. Prashanth et al. [19] used the same parameters as Prashanth et al. 
[18] to investigate the mechanical properties of AlSi12 components produced through SLM. The findings revealed that the built sample had a yield and tensile strength of 380 and 260 MPa, respectively, which were substantially greater when compared to the yield and tensile strength of cast equivalents. Contrary to Prashanth et al. [18], the microstructures' texture of the produced samples changed depending on the construction orientation; however, this did not affect the tensile characteristics. Compressive strength Ponnusamy et al. [20] studied the behavioral change of the AlSi12 alloy manufactured through SLM at high strain rates. Different scanning techniques were used to treat the alloy, with a focus distance of 4 mm, a 1000 mm/s scanning speed, 285 W of laser power, 100 µm of hatch spacing, and a 40-µm layer thickness. Horizontal, inclined, and vertical construction orientations were used. According to the findings, the dynamic compressive strength rose as the print orientation angle grew from 0° to 90°. At 200 °C, the compressive and yield strength of the produced samples decrease. At increased temperatures, flow stress was greater for dynamic loading than for quasi-static loading. (Fig. 3 shows the change in porosity of AlSi12 alloy processed from one powder batch when changing the point distance (PD) and exposure time (ET) [6].) The same parameters were utilized in the work by Ponnusamy et al. [21], and a post-treatment of annealing for 3 h at 200 °C and 400 °C was performed. On the contrary, a reduction in flow stress was observed because of thermal softening occurring in printed samples that are exposed to high temperatures. The heat-treated samples showed a significant decrease in flow stress. The samples that were heat-treated at 200 °C and 400 °C showed a decrease in flow stress of 12 and 45%, respectively. Wear and fatigue Wear Prashanth et al. [19] investigated the mechanical characteristics of AlSi12 alloy that was produced using SLM. When compared to cast equivalents, the as-built samples demonstrated superior resistance to wear and comparable corrosion resistance. Annealing heat treatment, which caused the development of Si precipitates, degraded both wear and corrosion characteristics. Rathod et al. [22], who investigated the tribological characteristics of the Al-12Si alloy, agree with Prashanth et al. [19] that annealing causes Si to precipitate and the cellular structure to disintegrate, leading to a decrease in hardness. Similarly, they also found that when comparing the heat-treated SLM to the CC specimens, the as-prepared SLM specimens had the lowest wear rate. Although the hardness of SLM specimens manufactured using single melt (SM) and checkerboard (CB) scanning methods is equal, the wear rate of the former is substantially higher because of its high porosity. Fatigue Siddique et al. [23] studied the high cycle fatigue failure processes in AlSi12 alloys that were processed by SLM. The following parameters were used: 400 W of laser power, 39.6 J/mm³ of volume energy density, and an argon environment; a stress relief heat treatment at 240 °C for 2 h was done, followed by cooling in the oven. The microstructure of the printed samples was found to include precipitates and tiny grains, resulting in enhanced quasi-static strength when compared to their cast counterparts. The as-built and as-built hybrid samples were comparable in terms of fatigue strength. Siddique et al. [24] conducted a study assessing the performance of fatigue for selective laser melted parts. 
The X-ray and optical microscopy computed tomography methods showed similar porosity percentages. The loss in strength caused by hot isostatic pressing after treatment was equivalent to that of die-cast components. The presence of even smaller holes in the samples was considered responsible for the reduction in fatigue life. The post-treatment of hot isostatic pressing reduced the effect of surface weakening. Siddique et al. [25] investigated the fatigue behavior of the AlSi12 alloy that was produced through SLM. According to the findings, the stress reduction after a heat treatment at 240 °C resulted in increased pores owing to the formation of new pores. After comparing the two samples, the ones produced with and without the base plate heating, the results revealed that the ones produced with base plate heating had better fatigue performance at low loads. The porosity of those produced without base plate heating was greater, making them more susceptible to breaking owing to faults. For samples produced using base plate heating, this incidence was considerably decreased. Siddique et al. [26] investigated the impact of process-induced microstructure and defects on the mechanical characteristics of AlSi12 treated by SLM and agreed with Siddique et al. [25] that base plate heating had a positive impact. They discovered that because the cooling rate is reduced when the base plate is heated, the produced samples have a coarser granular microstructure. The printed samples have four times the tensile strength of sand-cast components and two times the tensile strength of die-cast parts. Residual stresses and fatigue data scatter were lower in samples made with base plate heating. Suryawanshi et al. [27] studied the influence of SLM on the simultaneous improvement of toughness and strength in an AlSi12 alloy. The results indicated that SLM alloys had lower crack development caused by fatigue and un-notched fatigue strength than cast alloys, which may be related to shrinkage porosity, unmelted particles, and tensile residual stresses. The inclusion of mesostructure Si in the printed samples resulted in increased toughness. Toughness was shown to be affected by crack and scan orientations concerning the construction. Table 2 summarizes the mechanical characteristics of SLMprinted AlSi12, thereby showing the effects of build orientation and heat treatment on hardness, yield strength (YS), and ultimate tensile strength (UTS). The dynamic compressive properties are as follows: UTS, ultimate compression strength, YS and strain, and lastly the fatigue and fracture toughness. It is clear from Table 2 that the AlSi12 alloy processed by SLM did not receive more attention for mechanical characterization in terms of heat treatment. Further research is needed to study the properties of the Al-Si alloys manufactured through SLM under varying heating effects, high temperatures, and wear. Figure 4 shows the studies that have been conducted on build orientation, heat treatment, and process parameter optimization of AlSi12 alloy over years. The included studies were conducted between 2014 and 2020. Much work was done on heat treatment of AlSi12 alloy. Thus, more work needs to be done on optimizing the process parameters and build orientation of AlSi12 to understand their behavior better. Table 3 shows the description of conducted studies in different years. Lately, there has not been much work done on AlSi12 through SLM. There is a balance in the number of studies carried out in 2015 and 2018. 
Table 3 The description of work done in different years (columns: paper description, year). Figures 5 and 6 illustrate the pie charts of the research focus area and the mechanical properties of AlSi12 processed by SLM, respectively. From the research focus area, heat treatment received much attention, with 60% of the studies conducted on it. Less work was done on build orientation, shown by 10%. From Fig. 6, it can be seen that the tensile property was studied a lot more than other mechanical properties. Discussion The purpose of this work was to look at factors that influence the mechanical properties of AlSi12 alloy. Some of the included factors are particle size, heat treatment, build orientation, and processing parameters:
• Under density and hardness behavior, it has been reported that oxidation is another factor that influences relative density. Therefore, it must be avoided when producing AlSi12 components with a 100% relative density using SLM. The particles with a nearly spherical morphology had limited flowability, resulting in high porosity levels. From the literature, it was reported that there are no significant differences in hardness or density when the chamber's atmospheres are varied [14]. In other studies, SLM had been used to create the AlSi12 alloy with 99.89% relative density [12]. Although there is a substantial influence of utilizing fine-fraction powder and a scanning approach that incorporates a double-pass laser scan strategy to decrease surface roughness and porosity, there has been little research on the behavior of particle sizes on the produced component.
• About 44.4% of studies have been conducted on tensile strength. It was seen that the ductility and tensile strength of SLM-treated materials diminish as the laser scanning speed rises. The samples printed without a contour scan showed a substantial improvement in ductility while maintaining tensile strength. Findings show that the tensile characteristics at room temperature may be adjusted in situ with the right process parameters.
• In terms of compression strength, it was discovered that when the print orientation angle rose from 0° to 90°, the dynamic compressive strength increased. When printed samples are evaluated at 200 °C, their compressive and yield strengths tend to be reduced. The flow stress becomes greater for dynamic loading than for quasi-static loading at increased temperatures. Heat treatment, on the other hand, results in a substantial decrease in flow stress.
• Regarding wear, the as-built samples outperform their cast counterparts with greater wear resistance and equal corrosion resistance. The development of Si precipitates after heat treatment (annealing) causes wear and corrosion characteristics to decrease. Annealing also induces Si precipitation and cellular structure disruption, which results in a loss in hardness. Furthermore, when compared to the CC specimen, the as-prepared SLM specimens had the lowest wear rate after heat treatment.
• According to the fatigue observations, stress reduction after a heat treatment at 240 °C resulted in increased porosity owing to the development of pores. Furthermore, at low loads, the samples produced with base plate heating outperformed the samples produced without base plate heating in terms of fatigue performance. Because the cooling rate is reduced while the base plate is heated, the printed samples have a coarser grain microstructure. When samples are printed using base plate heating, residual stresses are significantly reduced. 
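The processing parameters discussed throughout this review (laser power, scan speed, hatch spacing, and layer thickness) are often compared through a volumetric energy density. The following minimal Python sketch evaluates the commonly used formula E = P / (v * h * t); the function name and the example values (taken loosely from one of the parameter sets quoted above) are illustrative, and this formula is not necessarily the exact definition used by every cited study.

```python
def volumetric_energy_density(power_w, scan_speed_mm_s, hatch_mm, layer_mm):
    """Volumetric energy density E = P / (v * h * t), returned in J/mm^3."""
    return power_w / (scan_speed_mm_s * hatch_mm * layer_mm)

# Example with values similar to one parameter set quoted above
# (285 W, 1000 mm/s, 100 um hatch spacing, 40 um layer thickness):
e_v = volumetric_energy_density(285.0, 1000.0, 0.100, 0.040)
print(f"E_v = {e_v:.1f} J/mm^3")  # ~71.3 J/mm^3
```

Such a single scalar is only a first-order way to compare parameter sets; as the studies above show, scan strategy, build orientation, and powder characteristics can change the outcome at the same nominal energy density.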
Conclusion There is a lack of knowledge of how the particle size can reduce the defects of SLM-printed parts. Most of the research was done with micro-sized powder particles. As a result, there is a need to research the nano-scaled powder particles of AlSi12 to understand how they affect the mechanical properties of SLM-produced components. The conducted studies on heat treatment focused on techniques that are frequently utilized for traditionally made aluminum alloys, which may not be suitable for SLM-printed components due to their inherent properties. More study is needed to produce an optimal heat treatment for the AlSi12 alloy to increase its mechanical and tribological characteristics.
5,453
2022-07-16T00:00:00.000
[ "Materials Science" ]
Multi-spacecraft Analysis of the Properties of Magnetohydrodynamic Fluctuations in Sub-Alfv\'enic Solar Wind Turbulence at 1 AU We present three-dimensional magnetic power spectra in wavevector space to investigate anisotropy and scalings of sub-Alfv\'enic solar wind turbulence at magnetohydrodynamic (MHD) scale using the Magnetospheric Multiscale spacecraft. The magnetic power distributions are organized in a new coordinate determined by wavevectors (k) and background magnetic field ($b_0$) in Fourier space. This study utilizes two approaches to determine wavevectors: singular value decomposition method and timing analysis. The combination of the two methods allows an examination of magnetic field properties in terms of mode compositions without any spatiotemporal hypothesis. Observations show that fluctuations ($\delta B_{\perp1}$) in the direction perpendicular to k and $b_0$ prominently cascade perpendicular to $b_0$, and such anisotropy increases with wavenumber. The reduced power spectra of $\delta B_{\perp1}$ follow Goldreich-Sridhar scalings: $P(k_\perp)\sim k_\perp^{-5/3}$ and $P(k_{||}) \sim k_{||}^{-2}$. In contrast, fluctuations within $kb_0$ plane show isotropic behaviors: perpendicular power distributions are approximately the same as parallel distributions. The reduced power spectra of fluctuations within $kb_0$ plane follow the scalings: $P(k_\perp)\sim k_\perp^{-3/2}$ and $P(k_{||})\sim k_{||}^{-3/2}$. Comparing frequency-wavevector spectra with theoretical dispersion relations of MHD modes, we find that $\delta B_{\perp1}$ are probably associated with Alfven modes. Moreover, for the Alfv\'enic component, the ratio of cascading time to the wave period is found to be a factor of a few, consistent with critical balance in the strong turbulence regime. The magnetic field fluctuations within $kb_0$ plane more likely originate from fast modes based on isotropic behaviors. Introduction Plasma turbulence is typically characterized by a broadband spectrum of perturbations, transmitting energy across a wide range of spatial and temporal scales (e.g., Bruno & Carbone 2013;Verscharen et al. 2019). Plasma turbulence plays a crucial role in solar corona, solar wind, fusion devices, and interstellar medium (e.g., Bruno & Carbone 2013;Yan & Lazarian 2008). The large-scale behaviors of plasma turbulence, which have been successfully described using magnetohydrodynamic (MHD) models, are of particular astrophysical interest (e.g., Kraichnan 1965;Goldreich & Sridhar 1995). The solar wind is easily accessed for in situ measurements of fields and particles, providing a unique laboratory for studying the physics of turbulent plasma observationally (e.g., Tu & Marsch 1995;Verscharen et al., 2019). Spacecraft observations and associated modeling have advanced our understanding of the solar wind in the last decades. However, turbulent properties and the threedimensional structure of fluctuations remain unclear due to the limited number of sampling points and measurement difficulties. Thus, a study of the three-dimensional energy spectrum of the magnetic field is essential for understanding the dynamics of solar wind turbulence and their effects on energetic particle transports (e.g., Yan & Lazarian 2002;Yan & Lazarian 2004). Solar wind fluctuations are anisotropic due to the presence of the local interplanetary magnetic field, which has been suggested in various studies (e.g., Matthaeus et al. 1990;Oughton et al. 2015). 
Satellite observations and simulations have shown the variance, power, and spectral index anisotropy in magnetic field components parallel and perpendicular to the field (e.g., Cho & Lazarian 2009; Oughton et al. 2015). First, solar wind fluctuations perpendicular to the background magnetic field ($b_0$) are typically more significant than parallel components, consistent with the dominance of incompressible Alfvén modes in the solar wind (e.g., Bruno & Carbone 2013). Second, turbulence energy is predominantly transmitted perpendicular to $b_0$, based on the spatial correlation functions measured by single and multiple spacecraft (e.g., Matthaeus et al. 1990; He et al. 2011). Third, although solar wind fluctuations are often interpreted as a superposition of quasi-two-dimensional turbulence and a minority slab component, the perpendicular fluctuations are non-axisymmetric with respect to $b_0$, and are preferentially in the direction perpendicular to $b_0$ and the radial direction (e.g., Bruno & Carbone 2013). Theoretical progress has been achieved in understanding the anisotropic behaviors. Goldreich & Sridhar (1995) predicted that a scale-dependent anisotropy is present in incompressible strong MHD turbulence. The energy spectra of perpendicular and parallel components are $P(k_\perp) \propto k_\perp^{-5/3}$ and $P(k_{||}) \propto k_{||}^{-2}$, respectively, where $k_\perp$ and $k_{||}$ are wavenumbers perpendicular and parallel to $b_0$. The smaller turbulent eddies are more elongated along the local mean magnetic field (e.g., Cho & Lazarian 2009; Makwana & Yan 2020). A mechanism called three-wave resonant interaction also seems to be responsible for the anisotropy of magnetic field fluctuations (e.g., Shebalin et al. 1983; Cho & Lazarian 2002). Furthermore, according to compressible MHD theory, plasma turbulence can be decomposed into three eigenmodes (Alfvén, slow, and fast modes) in a stationary, homogeneous, isothermal plasma with a uniform background magnetic field (e.g., Makwana & Yan 2020; Zhao et al. 2021). Using the term 'mode' in this study, we refer to the carriers of turbulent fluctuations, which are not dependent on the propagation properties of classical linear waves. In Fourier space, incompressible fluctuations whose displacement vectors are perpendicular to the $kb_0$ plane are defined as Alfvén modes, whereas fluctuations whose displacement vectors are within the $kb_0$ plane are defined as magnetosonic modes (slow and fast modes). The mode compositions of the turbulence can profoundly affect turbulence anisotropy (e.g., Yan & Lazarian 2004). Second-order structure functions show that the cascade of Alfvén and slow modes is anisotropic, preferentially in the direction perpendicular to the local magnetic field rather than the parallel direction, whereas fast modes tend to show an isotropic cascade (e.g., Cho & Lazarian 2003; Makwana & Yan 2020). Moreover, both Alfvén and slow modes follow the Goldreich-Sridhar scalings, whereas fast modes follow isotropic scalings (Goldreich & Sridhar 1995; Cho & Lazarian 2003; Makwana & Yan 2020). Direct evidence from solar wind observations for the cascade of each mode is still lacking. To investigate the anisotropy and scalings of solar wind turbulence with respect to the magnetic field at MHD scales at 1 au, we calculate three-dimensional power spectra in wavevector space using the Magnetospheric Multiscale (MMS) spacecraft (Burch et al. 2016). Narita et al. 
(2010) have tried to obtain three-dimensional energy distributions of magnetic field fluctuations using the wave telescope technique in an ordinary mean-fieldaligned system. Compared with previous studies, this study organizes the three-dimensional magnetic power distributions in a new coordinate determined by wavevectors ( " ) and background magnetic field ( " ! ) in Fourier space. These measurements allow an examination of magnetic field fluctuations in terms of mode compositions. The organization of this paper is as follows. Section 2 describes data sets, analysis methods, and selection criteria. Section 3 offers observations. In Sections 4 and 5, we discuss and summarize our results. Data The study utilizes the magnetic field data from the fluxgate magnetometer (Russell et al. 2016) and the spectrograms of ion differential energy fluxes from the fast plasma investigation instrument ( ; + = proton plasma pressure; -./ = magnetic pressure), proton gyro-frequency 01 , proton gyro-radius 01 , and proton inertial length 1 . Analysis method The magnetic field observed by four MMS spacecraft consists of the background and fluctuating magnetic field, i.e., = ! + . The background magnetic field is obtained by averaging the magnetic field within the defined time window, ! = 〈 〉. We calculate three-dimensional power spectra of magnetic field fluctuations with the following steps. First, the time series of the fluctuating magnetic field is transformed into Fourier space by the Morlet-wavelet transforms (Grinsted et al. 2004). We obtain wavelet coefficients of three components of the fluctuating magnetic field, i.e., 2 %,'() ( , 30 ) , 2 *,'() ( , 30 ) , and 2 +,'() ( , 30 ) at each time and spacecraft-frame frequency ( 30 ), where the subscript represents the geocentric-solar-ecliptic coordinates. We utilize the intervals with twice the length of the studied period to eliminate the edge effect due to finite-length time series and cut off the affected periods. Second, we calculate unit wavevectors " ( , 30 ) = | | using the singular value decomposition (SVD) of the magnetic spectral matrix (Santolík et al. 2003;Zhao et al. 2021). The SVD technique provides a mathematical method to solve the linearized Gauss's law for magnetism ( • = 0) that states a divergencefree constraint of magnetic field vectors. The complex matrices of in this study are expressed as the wavelet coefficients. The wavevectors " are calculated by the SVD method with 32 s resolution since we find that the wavevectors of low-frequency fluctuations are relatively stationary with varying time resolution. selection criteria We search for events that satisfy the following criteria: (1) The spectrograms of differential energy fluxes show no evidence of high-energy reflected ions from the terrestrial bow shock, suggesting that fluctuations are in the free solar wind without the effects of the ion foreshock. (2) The magnetic field is devoid of strong gradients, discontinuities, and reversals, guaranteeing that plasma can be considered homogeneous. (3) The fluctuating magnetic field is smaller than the background magnetic field, and relative amplitudes of magnetic field Under such a condition, the nonlinear term ( ) ) is much less than the linear term ( ! • ), and thus fluctuations can be approximately considered as a superposition of MHD eigenmodes (Alfvén, fast, and slow modes). 
(4) Multi-spacecraft methods are sensitive to scales comparable to spacecraft separations and show limitations on much larger and smaller scales (e.g., Horbury et al. 2012). In this study, the spacecraft separations are roughly comparable to the proton inertial length ( 1 ). Therefore, given the applicability of MHD theory and measurement limitation, we only analyze fluctuations within ) > * < <=3> < 01 and 1 < 0.2, and set the magnetic power to zero out of this range. The parameter * is the duration studied. The only other criterion used is the requirement that the angle H I H 1 between " (obtained by SVD method) and ; (obtained by timing analysis) should be small. Typically, when analyzing solar wind data via single spacecraft, we assume 30 ∝ based on Taylor's hypothesis (Taylor 1938) for a given wave propagation angle H2 , . This approximation is considered reliable mainly because the velocity of the solar wind flow ( 3B ) is much larger than the phase speeds of MHD waves. However, the approximation does not always hold even though 3B is much faster than the phase speed of fluctuations, e.g., when fluctuations have wavevectors at large angles from the solar wind flow. Therefore, this study utilizes two methods for accuracy to identify the propagation directions, i.e., the SVD method and multi-spacecraft timing analysis (described in Section 2.2). The latter method allows determining wavevectors independent of any spatiotemporal hypothesis. Our results show that data counts are primarily concentrated in H I H 1 < 30°, whereas a small number of counts still exist in large-H I H 1 range. It indicates that not all fluctuations are aligned with the direction of minimum variance vectors of magnetic field fluctuations and satisfy 30 ∝ hypothesis (because the fluctuations are combinations of multiple modes with different dispersion relations). Therefore, we filter out fluctuations with a large H I H 1 , which invalidates the SVD assumption. Such fluctuations are beyond the scope of the present paper and will be the topic of a separate publication. Considering that the fluctuating magnetic field ( "# ) out of the ' ' ! plane dominates magnetic power (~80%), the propagation direction of "# should be mainly aligned with " , whereas fluctuations within the ' ' ! plane accounting for a tiny proportion have little impact on the direction of " . Therefore, this study sets a more stringent criterion H I H 1 < 10° for "# fluctuations and a moderate criterion H I H 1 < 30° for fluctuations within the ' ' ! plane. This study presents three representative events in sub-Alfvénic solar wind turbulence at 1 AU, and their properties are listed in Table 1. These three events are also included in the appendix by Roberts et al. (2020). During these intervals, four spacecraft have a ~1-minute time shift from the nose of the terrestrial bow shock, suggesting approximately the same plasma environment observed by MMS and OMNI. Meanwhile, the qualities of MMS tetrahedral configuration are around 0.9, allowing for distinguishing spatial and temporal evolutions and investigating three-dimensional structures of fluctuations. respectively. Observations An overview of three representative events of solar wind fluctuations is shown in Figure 1. For all three events, the magnetic field and plasma parameters are stationary. Figures 1c, 1j, and 1q show that relative amplitudes of the magnetic field <-3 | ! 
| = c〈| ( ) − 〈 ( )〉| ) 〉/|〈 ( )〉| ⁄ are less than unity, where the angular brackets denote a time average over 10, 20, and 30 minutes, respectively. Figures 1d, 1k, and 1r compare the fluctuating magnetic field and background magnetic field 〈 〉 '! -1M (average over 30 minutes). Given the small fluctuations with a strong background magnetic field, it is valid to linearize MHD equations, ignoring the second-and higher-order contributions. Thus, the fluctuations can be approximately considered as a superposition of MHD eigenmodes (Alfvén, fast, and slow modes). Figure 2 shows MMS locations and the directions of the background magnetic field in GSE coordinates for Event 1, Event 2, and Event 3, respectively. As shown in Figure 2, theoretically, the mean magnetic field is either not connected to terrestrial bow shock or nearly tangential to it. Indeed, the spectrograms of ion differential energy fluxes show no evidence of high-energy reflected ions (Figures 1e, 1l, and 1s). Thus, these intervals are free from ion foreshock contaminations. Figures 1f, 1m, and 1t show that average + is around 0.3, where proton plasma + is calculated by OMNI proton parameters and MMS magnetic field. In Figures 1g, 1n, and 1u, wave propagation angles with respect to the background magnetic field 〈 〉 '! -1M cover 0°-90°, allowing us to calculate the magnetic power distributions in wavevector space more reliable. The solar wind fluctuations are closely related to the local background magnetic field. This study explores the variation of the magnetic power distributions with the local background magnetic field by adjusting the length of time windows. We split the time intervals of these three events into several moving time windows with a step size of 5 minutes and a length of 10 minutes, 20 minutes, and 30 minutes, respectively. This study refers to them as 10 minute, 20 minute, and 30 minute data sets. We calculate the background magnetic field ( ) by averaging the magnetic field in each time window. In this way, in each time window is constant in time and along the same direction in both real and Fourier space, suggesting that the new coordinate determined by is independent of the space transformation. , is much less than the window length. As a result, it is reliable to assume that turbulent magnetic field fluctuations are stationary and homogeneous (Matthaeus & Goldstein 1982). We follow the approaches described in Section 2.2 to calculate threedimensional frequency-wavenumber magnetic power spectra in the spacecraft frame. Then we transform the magnetic power spectra into the rest frame of the solar wind by correcting the Doppler shift. To obtain " − ∥ magnetic power spectra, we construct a set of 100×100 bins, where ∥ represents the wavenumber parallel to the background magnetic field ( ! ), and " = c "# ) + ") ) represents the wavenumber perpendicular to ! . Each bin subtends approximately the same perpendicular and parallel wavenumber. We sum all magnetic power in each bin at all frequencies and times. To cover all MHDscale wavenumbers, we set the maximum wavenumber as -.S = ",-.S = ∥,-.S = 1.1 × Figures 4a-c, d-f, and g-i display magnetic power spectra of Event 1, Event 2, and Event 3, respectively. For all events, the magnetic power spectra are prominently distributed along the " axis, indicating a faster cascade in the perpendicular direction. Moreover, we observe an apparent scale-dependent anisotropy: magnetic power spectra are more stretched along the " axis with increasing wavenumbers. 
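As a minimal illustration of the binning step described above, the following Python sketch accumulates power into a ($k_\perp$, $k_{||}$) grid and forms the one-dimensional reduced spectra by summing over the other wavenumber. The function and variable names are illustrative assumptions; the actual power values and wavevector estimates come from the Morlet-wavelet transform, the SVD method, and the timing analysis described in Section 2, none of which are reproduced here.

```python
import numpy as np

def kperp_kpar_spectrum(k_par, k_perp, power, k_max, n_bins=100):
    """Sum power samples (one per time/frequency point) into a k_perp-k_par grid,
    then reduce to one-dimensional spectra P(k_perp) and P(k_par)."""
    p2d, kpar_edges, kperp_edges = np.histogram2d(
        np.abs(k_par), np.abs(k_perp),
        bins=n_bins, range=[[0.0, k_max], [0.0, k_max]],
        weights=power)
    p_kperp = p2d.sum(axis=0)   # sum over k_par bins -> reduced P(k_perp)
    p_kpar = p2d.sum(axis=1)    # sum over k_perp bins -> reduced P(k_par)
    return p2d, p_kperp, p_kpar, kperp_edges, kpar_edges
```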
Besides, magnetic power spectra show more isotropic behaviors as the window length increases. To quantitatively analyze these properties of magnetic power spectra, we obtain the one-dimensional reduced magnetic power spectra of "# as The parallel wavenumber can be expressed as ∥ ∝ " 0 6 / ! / 6 , where ! is the injection scale (approximately equivalent to the correlation length in this study) (Yan & Lazarian 2008). Here, we take Event 1 as an example to estimate the minimum wavenumber. Given that YZ Z 7 can be roughly expressed as stabilizes at around 400 , even though using longer time windows (larger than 30 minutes). Thus, the correlation length is around 6000 km. If we assume the minimum perpendicular wavenumber ",-1M~0 .1 × ∥,-.S~2 .5 × 10 $6 $# , the minimum parallel wavenumber is ∥,-1M~2 .2 × 10 $6 $#~" ,-1M . The calculations can be confirmed by the relationship between " and ∥ at the small wavenumbers in Figure 10d. It is challenging to quantitatively present magnetic power distributions in wavevector space through limited measurements. However, the longer the intervals we choose, the closer the observed power spectra are to actual distributions. Thus, we present reduced power spectra of "# fluctuations using 30 minute data sets, although shorter data sets show similar behaviors. Figure 10d). However, for Events 2 and 3 (2 hr measurements), magnetic power distributions satisfy the scale-dependence scaling at partial wavenumbers. Their power-law fits are easily affected by the wavenumber ranges, likely because of the incomplete results from the limited-time series. To investigate the anisotropy of magnetic power spectra of "# fluctuations, we show the ratio ' ( " )/ ' ( ∥ ) in Figure 6. First, the ratios ' ( " )/ ' ( ∥ ) are much larger than one at most wavenumbers, indicating a faster cascade in the perpendicular direction. Second, almost all data sets show a similar tendency that the ratio ' ( " )/ ' ( ∥ ) increases with the wavenumbers, especially for > 5 × 10 $6 $# , suggesting that the anisotropy of magnetic power spectra increases with the wavenumber. This result is consistent with simulation results: the smaller eddies are more stretched along the background magnetic field (Makwana & Yan 2020). Third, the ratios ' ( " )/ ' ( ∥ ) obtained by 10 minutes (blue) and 20 minutes (green) data sets are larger than those obtained by 30 minute data sets (yellow) at most wavenumbers, especially for > 5 × 10 $6 $# . It means that the anisotropy decreases when we use the longer time windows to calculate the background magnetic field. Therefore, these observations provide evidence that solar wind fluctuations are more likely aligned with the local magnetic field based on the size of the fluctuations rather than a global magnetic field. represents fluctuations perpendicular to and within the ' ' ! plane. Based on the ideal MHD theory, these magnetic field fluctuations are provided by compressible magnetosonic modes. Figure 7 presents the sum of " − ∥ wavelet power spectra of ∥ and ") fluctuations, which are normalized by the maximum power in all bins ( ' 2 :; >> , @AB;C ( " , ∥ ) = ( 2 ∥ ( " , ∥ ) + 2 .0 ( " , ∥ ))/( 2 ∥ + 2 .0 ) -.S ). The Results of Fluctuations within the Since these sub-Alfvénic fluctuations within the ' ' ! plane only occupy a tiny part of the total magnetic power (~20%), magnetic power cannot cover all wavenumbers. Thus, many vacant bins are present in Figure 7. Nevertheless, magnetic power spectra within the ' ' ! 
plane still show explicit isotropic behaviors: the perpendicular magnetic power distributions are comparable to those in the parallel direction. We calculate one-dimensional reduced power spectra of magnetic field fluctuations within the ' ' ! plane using Equations (2) and (3) for in-plane components with 30 minute data sets. Although the reliability of quantitative analysis is questionable, the normalized perpendicular wavenumber spectra ' ( " ) are roughly comparable to the parallel wavenumber spectra ' ( ∥ ) (Figure 8), and the ratios ' ( " )/ ' ( ∥ ) are around one (Figure 9). Therefore, the isotropic behaviors are independent of the wavenumbers. Moreover, the reduced power spectra follow a similar scaling ' ( " ) ∝ " $'/) and ' ( ∥ ) ∝ ∥ $'/) in Figure 8. The isotropic scalings are consistent with fast-mode scalings (Cho & Lazarian 2003;Makwana & Yan 2020). Further Analysis on MHD Modes Given the small fluctuations with a strong uniform background magnetic field, the MHD model has three MHD eigenmodes (Alfvén, fast, and slow modes), roughly taking the place of exact nonlinear solutions (Cho & Lazarian 2003). Using the term 'mode' in this study, we refer to the carriers of turbulent fluctuations, which are not dependent on the propagation properties of classical linear waves. The incompressible fluctuations whose displacement vectors are perpendicular to the "" plane are defined as Alfvén modes, whereas fluctuations whose displacement vectors are within the "" plane are defined as magnetosonic modes (slow and fast modes). In this section, we present further analysis on whether fluctuations under such definition can propagate like classical linear waves and satisfy theoretical dispersion relations of MHD modes. We discuss the possible physical explanations of our observations by taking the 30 minute data set of Event 1 as an example. 4.1. " The frequency <=3> is obtained by correcting the Doppler shift <=3> = 30 − • )A , where <=3> represents the frequency in the rest frame of the solar wind and 30 represents the frequency in the spacecraft frame. Considering that the SVD method only determines the propagation direction ( " ), we calculate <=3> using wavevectors ; derived from timing analysis of "# fluctuations. Here, ; is expressed as Y2 ./ . Although we set a stringent criterion H I H 1 < 10° for To simplify, we correct <=3> uncertainties with uniform angles between −10° and 10°. The distribution trend will not change when we change the correction angle; thus, our main conclusions are solid. Figure 10a presents the observed <=3> − ∥ power spectra of "# in the rest frame of the solar wind after a uniform angle correction ( H I H 1 = 10°). Similarly to " − ∥ magnetic power spectra, we construct a set of 100×400 bins to obtain ∥ − <=3> magnetic power spectra. Each bin subtends approximately the same parallel wavenumber and frequency in the plasma flow frame. We sum magnetic power in each bin at all times. To cover all MHD-scale fluctuations, we set the maximum parallel wavenumber as ∥,-.S = 1.1 × !.) U 3 and the maximum frequency as Based on the compressible MHD theory, the fluctuations whose polarization is perpendicular to "" plane in wavevector space are expected from Alfvén modes. To further understand the observed magnetic power distributions out of ' ' ! plane, we first compare <=3> − ∥ power spectra of "# with Alfvén-mode theoretical dispersion relations. The theoretical frequencies of Alfvén modes are given by where [ is the Alfvén speed. 
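As an illustration of the Doppler correction and of the Alfvén-mode dispersion relation just described (frequency proportional to $k_{||}$ times the Alfvén speed), the sketch below computes the rest-frame frequency from a spacecraft-frame frequency, a wavevector, and the solar wind velocity, and the corresponding theoretical Alfvén frequency. The function names, units, and example plasma values are assumptions made for illustration only; this is not the authors' code, and the measured event parameters are not reproduced here.

```python
import numpy as np

MU0 = 4.0e-7 * np.pi     # vacuum permeability [H/m]
M_P = 1.6726e-27         # proton mass [kg]

def rest_frame_freq(f_sc_hz, k_vec_rad_m, v_sw_m_s):
    """Doppler correction omega_rest = omega_sc - k . V_sw, returned in Hz."""
    return f_sc_hz - np.dot(k_vec_rad_m, v_sw_m_s) / (2.0 * np.pi)

def alfven_freq(k_par_rad_m, b0_nt, n_p_cm3):
    """Theoretical Alfven-mode frequency f = |k_par| * v_A / (2 pi) in Hz."""
    v_a = (b0_nt * 1e-9) / np.sqrt(MU0 * n_p_cm3 * 1e6 * M_P)  # Alfven speed [m/s]
    return np.abs(k_par_rad_m) * v_a / (2.0 * np.pi)

# Example with assumed, typical solar wind values: B0 = 6 nT, n_p = 5 cm^-3
k = np.array([1.0e-6, 0.0, 0.0])          # wavevector [rad/m]
v_sw = np.array([-400.0e3, 0.0, 0.0])     # solar wind velocity [m/s]
print(rest_frame_freq(0.05, k, v_sw), alfven_freq(k[0], 6.0, 5.0))
```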
In Figure 10b, we assume that magnetic power is totally provided by Alfvén modes. The theoretical dispersion relation of Alfvén modes is roughly consistent with the linear branch of the magnetic power spectrum (Figure 10a). This result provides direct observational evidence that these fluctuations may originate from Alfvén modes. It is noteworthy that <=3> are slightly higher than theoretical Alfvén frequencies if we do not correct the uncertainties resulting from H I H 1 . Comparing observations with the theoretical dispersion relations, we obtain the best match with the angle correction H I H 1 = 10°. To further understand the fluctuations without apparent linear relations between the frequency ( <=3> ) and parallel wavenumber ( ∥ ) at ∥ < 5 * 10 $6 $# , we calculate the collision number defined as = ( over 30 and then ∥ . Figure 10c shows collision number versus ∥ . We summarize the comparisons between <=3> − ∥ spectra and collision numbers as follows: (1) At ∥ < 5 * 10 $6 $# , the collision number is very close to 1 (Figure 10c), suggesting strong turbulence. At approximately the same wavenumbers, no clear relation between <=3> and ∥ exists (accounting for ~50% of total power; Figure 10a). This is likely because the fast decay of the turbulence within one wave period prevents the Alfvén waves from propagating. (2) At 5 * 10 $6 < ∥ < 1.5 * 10 $' $# , the collision number becomes slightly larger than 1 (Figure 10c), suggesting that turbulence becomes relatively weaker. This may be because some physical dissipation mechanisms diminish the turbulent amplitudes, leading to weaker nonlinear dynamics (Howes et al. 2011). It is worth noting that we deduce that the turbulence is still in the strong regime because the weak turbulence regime needs more collisions ( ≫ 1) than what we observed. The recent simulation also presents results similar to the power spectrum in Figure 10a (Gan et al. 2022). Gan et al. proposed an alternate interpretation for the low-frequency nonlinear fluctuations. They believed that the 'nonwave' power does not belong to any eigenmode branches since fluctuations cannot fit any dispersion relations. However, we have reservations about their explanations because there is no reason to completely separate the fluctuations in a continuous reduced magnetic power spectrum ' ( ∥ ) ( Figure 5d). In Section 3.1, we have shown that the reduced wavenumber spectra of "# fluctuations roughly follow the scalings: ' ( " ) ∝ " $%/' and ' ( ∥ ) ∝ ∥ $) , consistent with the Goldreich & Sridhar (1995) theory. To obtain a more intuitive wavenumber relationship, we extract the relation of " versus ∥ by taking the same values of the magnetic power spectrum at the " and ∥ axes. Figure 10d shows the variation of " versus ∥ for magnetic power of "# fluctuations (blue curve). The green dashed line represents isotropy ∥ = " , and the red dashed line denotes the Goldreich-Sridhar scaling ∥ ∝ " )/' . At wavenumber ∥ < 1.1 × 10 $' $# ( " < 1.5 × 10 $' $# ), the observed variation of " versus ∥ follows the Goldreich-Sridhar scaling ∥ ∝ " 0 6 / ! / 6 , with the normalization consistent with the correlation length ! obtained in Section 3.1. The collision number is close to one at approximately the same parallel wavenumber ∥ < 1.1 × 10 $' $# (Figure 10c). Therefore, our observations provide direct evidence for the validity of the Goldreich-Sridhar scaling in the solar wind. Given that the magnetic power within the ' ' ! 
plane only plays a limited role, the reliability of quantitative analysis is questionable. Besides, it is more difficult to correct the <=3> uncertainties resulting from H I H 1 , because we set a more relaxed criterion H I H 1 < 30° for in-plane fluctuations in order to obtain enough samplings. Therefore, magnetic field fluctuations within the ' ' ! plane are discussed qualitatively. Figure 11a shows <=3> − ∥ power spectra of the magnetic field within the ' ' ! plane in the rest frame of the solar wind without any angle correction for <=3> . The in-plane magnetic power ' 1M H I e I , +;.M= ( ∥ , <=3> ) = 1M H I e I , +;.M= ( ∥ , <=3> )/ 1M H I e I , +;.M=,-.S ( ∥ , <=3> ) is normalized by the maximum power in all bins. The magnetic power is concentrated in ∥ < 1 × 10 $' $# and shows no relationship between <=3> and ∥ . According to the ideal MHD theory, magnetic field fluctuations within the ' ' ! plane are most likely provided by compressible magnetosonic modes (fast and slow modes). The theoretical frequencies of fast and slow modes are given by where = m ∥ ) + " ) is the wavenumber, and ∥ is the parallel wavenumber to . In Figure 11b, we assume that magnetic power is totally provided by either fast or slow modes. Since H2 , between " and " ! varies with wavenumbers, fast modes do not show linear dispersion relations (Figure 11b). Given relatively large <=3> uncertainties, we only focus on the parallel wavenumber distributions of the magnetic power within the ' ' ! plane. If only fast modes existed, fast-mode magnetic power would roughly cover the wavenumber distributions of the observed magnetic power ( ∥ < 1 × 10 $' $# ) in Figure 11a. However, if there are only slow modes within the ' ' ! plane, the magnetic power would be concentrated in larger parallel wavenumbers ( ∥ > 7 × 10 $6 $# ) than actual observations. Overall, it is challenging to identify mode compositions of the fluctuations by comparing the disordered distributions with theoretical dispersion relations. In low-+ plasma, magnetic field fluctuations are expressed as ( for fast mode, where is the sound speed, and [ is the Alfvén speed. 3;fB and \.3> represent the fluctuating velocity of slow and fast modes, respectively (Cho & Lazarian 2003). If 3;fB~\.3> , we obtain ( ) \.3> . Therefore, fast modes theoretically may provide more magnetic field fluctuations than slow modes when + < 1 (Zhao et al. 2021). For these three events, the proton plasma + is ~0.3, in accord with this condition. Moreover, our observations show that magnetic field fluctuations within the ' ' ! plane present isotropic behaviors and follow scalings ' ( " ) ∝ " $'/) and ' ( ∥ ) ∝ ∥ $'/) , consistent with fast modes (e.g., Cho & Lazarian 2002;Makwana & Yan 2020). Therefore, we deduce indirectly that magnetic field fluctuations within the ' ' ! plane more likely originate from fast modes. Quantitative analysis for compressible fluctuations will be the subject of our future studies. Summary This study presents observations of three-dimensional magnetic power spectra in wavevector space to investigate the anisotropy and scalings of sub-Alfvénic solar wind turbulence at the MHD scale using the MMS spacecraft. The magnetic power spectra are organized in a new coordinate determined by " and " ! in Fourier space, as described in Section 2.2. This study utilizes two approaches to determine wavevectors: the singular value decomposition method and multi-spacecraft timing analysis. 
The combination of the two methods allows an examination of the properties of magnetic field fluctuations in terms of mode compositions independent of any spatiotemporal hypothesis. The specifics of our findings are summarized below. 1. The magnetic power spectra of $\delta B_{\perp1}$ (in the direction perpendicular to $k$ and $b_0$) are prominently stretched along the $k_\perp$ axis, indicating a faster cascade in the perpendicular direction. Moreover, such anisotropy increases as the wavenumber increases. The reduced power spectra of $\delta B_{\perp1}$ fluctuations follow the Goldreich-Sridhar scalings: $P(k_\perp) \propto k_\perp^{-5/3}$ and $P(k_{||}) \propto k_{||}^{-2}$. $\delta B_{\perp1}$ fluctuations are more anisotropic when a shorter-interval average magnetic field is used, suggesting that fluctuations are more likely aligned with the local magnetic field than with a global magnetic field. 2. The magnetic power spectra within the $kb_0$ plane show isotropic behaviors: the perpendicular power distributions are roughly comparable to the parallel distributions. Table 1 Sub-Alfvénic fluctuations in solar wind turbulence
7,341.6
2022-04-11T00:00:00.000
[ "Physics" ]
Astrophysical detections and databases for the mono deuterated species of acetaldehyde CH 2 DCOH and CH 3 COD (Tables 3, 4, 7, and 8 are only available at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/624/A70.) Context . Detection of deuterated species may provide information on the evolving chemistry in the earliest phases of star-forming regions. For molecules with two isomeric forms of the same isotopic variant, gas-phase and solid-state formation pathways can be differentiated using their abundance ratio. Aims . Spectroscopic databases for astrophysical purposes are built for the two mono deuterated isomeric species CH 2 DCOH and CH 3 COD of the complex organic molecule acetaldehyde. These databases can be used to search for and detect these two species in astrophysical surveys, retrieving their column density and therefore abundances. Methods . Submillimeter wave and terahertz transitions were measured for mono deuterated acetaldehyde CH 2 DCOH, which is a non-rigid species displaying internal rotation of its asymmetrical CH 2 D methyl group. An analysis of a dataset consisting of previously measured microwave data and the newly measured transitions was carried out with a model accounting for the large amplitude torsion. Results . The frequencies of 2556 transitions are reproduced with a unitless standard deviation of 2.3, yielding various spectroscopic constants. Spectroscopic databases for astrophysical purposes were built for CH 2 DCOH using the results of the present analysis and for CH 3 COD using the results of a previous spectroscopic investigation. These two species were both searched for and are detected toward a low-mass star-forming region. Conclusions . We report the first detection of CH 2 DCOH (93 transitions) and the detection of CH 3 COD (43 transitions) in source B of the IRAS 16293 − 2422 young stellar binary system located in the ρ Ophiuchus cloud region, using the publicly available ALMA Protostellar Interferometric Line Survey. Introduction Acetaldehyde and its isotopic species have been the subject of many spectroscopic investigations, due to their astrophysical relevance and to the large amplitude nature of the internal rotation of the methyl group. The microwave spectrum of the normal species was first analyzed by Kilb et al. (1957) and has since then been investigated up to the ν t = 4 torsional state (Herschbach 1959; Iijima & Tsuchiya 1972; Bauder & Günthard 1976; Kleiner et al. 1990, 1992, 1996; Smirnov et al. 2014), leading to its detection (Gilmore et al. 1976) in the interstellar medium (ISM). The isotopic species with a symmetrical CH 3 or CD 3 methyl group were also investigated (Kleiner et al. 1999; Coudert & López 2006; Elkeurti et al. 2010; Zaleski et al. 2017), but none are detected in the ISM. There is only a limited number of spectroscopic results for isotopic species with a partially deuterated CH 2 D or CD 2 H asymmetrical methyl group. The mono and bideuterated species CH 2 DCOH and CD 2 HCOH have been studied (Turner & Cox 1976; Turner et al. 1981), but only a few transitions characterized by a low K a value were assigned as there was no model available at that time to treat the internal rotation of a partially deuterated methyl group. Deuterated species are an important tool for understanding interstellar chemistry and specifically surface chemistry (Charnley et al. 1997; Ratajczak et al. 2011). 
Deuterium forms slightly stronger bonds than hydrogen at low temperatures (<100 K) and the abundance of deuterium-bearing molecules can become larger than the cosmic D/H ratio of 10 −5 . A large fractionation ratio has been found in many environments such as dark clouds, low-mass and high-mass protostars, as well as protoplanetary disks (see Ceccarelli et al. 2014, for a review). For complex organic molecules (organic molecules containing at least six atoms, Herbst & van Dishoeck 2009), there usually exist two different isomeric mono deuterated species and their abundance ratio yields additional information about interstellar chemistry. This may allow differentiation of gas-phase and grain surface formation pathways. For example, the observed gasphase [CH 2 DOH]/[CH 3 OD] ratios found in the Orion KL compact ridge as well as the low-mass protostar IRAS 16293−2422, are found to scale inversely with [HDO]/[H 2 O] owing to the H/D exchange equilibrium between the hydroxyl (-OH) functional groups of methanol and water in the ice (Faure et al. 2015). These observations are useful constraints for kinetics models of the deuterium chemistry occurring in the icy mantles of interstellar grains. This article focuses on the study of the mono deuterated CH 2 DCOH and CH 3 COD isotopic variants of acetaldehyde. We present in Sect. 2 the spectroscopic investigation of CH 2 DCOH and the compilation of its database and that of CH 3 COD. We first spectroscopically characterized the species that has a partially deuterated CH 2 D methyl group, prior to designing its database. For the species with a symmetrical CH 3 methyl group, the database is based on the previous spectroscopic investigation by Elkeurti et al. (2010). We present in Sect. 3 the astrophysical search and detection of both species. Spectroscopic investigation of CH 2 DCOH The main isotopic species of acetaldehyde and its isotopic variants with a symmetrical CH 3 or CD 3 methyl group were studied accounting for their internal rotation with theoretical approaches initially developed for methanol (Koehler & Dennison 1940;Burkhard & Dennison 1951;Ivash & Dennison 1953;Hecht & Dennison 1957a,b;Lees & Baker 1968;De Lucia et al. 1989). In the case of the present isotopic species, displaying internal rotation of an asymmetrical partially deuterated CH 2 D methyl group, several theoretical models are also available and were applied to mono and bideuterated methyl formate and methanol (Margulès et al. 2009;Coudert et al. 2012Coudert et al. , 2014Pearson et al. 2012;Ndao et al. 2015). In this section, the tunneling-rotation energy levels of CH 2 DCOH are calculated using the approach developed for mono deuterated methyl formate (Margulès et al. 2009), based on the high-barrier internal axis method (IAM) approach of Hougen (1985) and Coudert & Hougen (1988). This IAM treatment is used to analyze the previously available microwave transitions (Turner & Cox 1976;Turner et al. 1981) and the submillimeter wave and terahertz transitions measured in this work. Experimental The transitions measured in this work were recorded in the 150-990 GHz frequency range using the Lille spectrometer (Zakharenko et al. 2015). The absorption cell was a stainlesssteel tube (6 cm diameter, 220 cm long). The sample during measurements was at a pressure of about 10 Pa and at room temperature; the linewidth was limited by Doppler broadening. The frequency ranges 150-330, 400-660, and 780-990 GHz were covered with various active and passive frequency multipliers from VDI Inc. 
and an Agilent synthesizer (12.5-18.25 GHz) was used as the source of radiation. Estimated uncertainties for measured line frequencies are either 30 or 50 kHz depending on the observed signal-to-noise ratio (S/N) and the frequency range. Figure 1 shows two portions of the spectrum recorded in the submillimeter wave region.

Theory

The model developed previously for mono deuterated methyl formate (Margulès et al. 2009) can be applied to mono deuterated acetaldehyde CH2DCOH with almost no changes. The coordinates used in this model are the usual Euler angles χ, θ, φ and a large amplitude angular coordinate, denoted α, parameterizing the internal rotation of the methyl group with respect to the aldehyde group. Molecule-fixed coordinates of the atoms are obtained starting from the scheme introduced for the principal axis method in Sect. 3 of Hougen et al. (1994). The initial configuration drawn in their Fig. 1 defines atom positions in an x y z axis system such that the axis of internal rotation coincides with the z axis. The methyl group atoms are numbered from 1 to 3, with atom 1 being the deuterium atom and atoms 2 and 3 the two hydrogen atoms. The large amplitude coordinate α is the dihedral angle ∠DCCO. Using Sect. 3.1 and Eqs. (1) and (2) of Margulès et al. (2009) allows us to retrieve atom positions in an xyz molecule-fixed axis system which, for any value of α, is the principal axis system in the I^r representation (Bunker 1979). In agreement with the IAM approach of Hougen (1985) and Coudert & Hougen (1988), the non-superimposable equilibrium configurations of the molecule are chosen. There arise three energetically inequivalent configurations, shown in Fig. 2, identified by their configuration number n, with n = 1, 2, and 3, and characterized by α_eq^(n), the value of the torsional angle α around which the reference function is centered. Configurations 1 and 2 are the two C1 symmetry Out configurations with the deuterium atom outside the xz plane. They are higher in energy than Configuration 3, the Cs symmetry In configuration with the deuterium atom in the symmetry plane. The energy difference Ed between the zero point energy of the two Out configurations and that of the In configuration was estimated by Turner et al. (1981) to be 15.55 cm⁻¹. Equations (12) and (13) of Margulès et al. (2009) should be used to obtain the tunneling matrix elements H_JKγ1;JK'γ'2 of the 1 → 2 tunneling path connecting the isoenergetic Configurations 1 and 2. Similarly, Eqs. (14) and (15) should be used for the tunneling matrix elements H_JKγ1;JK'γ'3 of the 1 → 3 tunneling path connecting Configurations 1 and 3. In Eqs. (12)-(15) of Margulès et al. (2009), h2 and h3 are the magnitudes of the tunneling splittings, and θ2, φ2 and χ3, θ3, φ3 are five Eulerian-type angles describing the rotational dependence of the tunneling matrix elements. In addition to these parameters, computing the rotation-torsion energy also requires the rotational constants of the In and Out conformations, A_In, B_In, C_In and A_Out, B_Out, C_Out, respectively, and their energy difference Ed. When tunneling effects are small, the In conformation displays asymmetric-top rotational energies.

Fig. 2. Both energetically equivalent configurations of the Out conformation and the lower energy configuration of the In conformation are identified by their configuration number n = 1, 2, and 3. α_eq^(n) is the equilibrium value of the torsional angle α = ∠DCCO. Configuration 3 displays a symmetry plane; Configurations 1 and 2 have C1 symmetry.
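As a concrete illustration of the rigid asymmetric-top limit mentioned above, the sketch below diagonalizes the rotational Hamiltonian of one conformation in a symmetric-top |J, k⟩ basis. It is only a minimal sketch: the rotational constants are placeholders rather than the fitted Table 1 values, and the full IAM treatment of Margulès et al. (2009) adds the tunneling matrix elements and distortion terms on top of such rotational blocks.

```python
import numpy as np

def asymmetric_top_energies(A, B, C, J):
    """Rigid asymmetric-rotor energy levels for a given J (units of A, B, C).

    H = A*Ja^2 + B*Jb^2 + C*Jc^2 is set up in the symmetric-top basis |J, k>,
    k = -J..J, with the a axis as quantization axis:
      <J,k|H|J,k>   = (B + C)/2 * (J(J+1) - k^2) + A*k^2
      <J,k|H|J,k+2> = (B - C)/4 * sqrt[(J(J+1)-k(k+1)) * (J(J+1)-(k+1)(k+2))]
    """
    ks = np.arange(-J, J + 1)
    JJ = J * (J + 1)
    dim = len(ks)
    H = np.zeros((dim, dim))
    for i, k in enumerate(ks):
        H[i, i] = 0.5 * (B + C) * (JJ - k * k) + A * k * k
    for i, k in enumerate(ks[:-2]):
        off = 0.25 * (B - C) * np.sqrt((JJ - k * (k + 1)) * (JJ - (k + 1) * (k + 2)))
        H[i, i + 2] = H[i + 2, i] = off
    return np.sort(np.linalg.eigvalsh(H))

# Illustrative constants in cm-1 (placeholders, not the values listed in Table 1).
A_in, B_in, C_in = 1.89, 0.34, 0.30
for J in range(4):
    print(J, asymmetric_top_energies(A_in, B_in, C_in, J))
```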
For the + and − sublevels arising from the Out conformation, Eq. (21) of Margulès et al. (2009) shows that Ed ± h2 should be added to the asymmetric-top rotational energies, where the upper (lower) sign is for the + (−) sublevel. As h2 is negative (Hougen 1985; Coudert & Hougen 1988), the + sublevel is below the − sublevel. Parallel a-type and perpendicular b-type transitions arise within the In and Out conformations. For the latter conformation, the selection rule ± ↔ ± holds. Perpendicular c-type transitions arise within the Out conformation only and obey the selection rule ± ↔ ∓. When tunneling effects are large, distortion terms to the tunneling matrix elements H_JKγ1;JK'γ'2 and H_JKγ1;JK'γ'3 should be added, and those defined in Eq. (22) of Margulès et al. (2009) are used. Distortion effects to the five Eulerian-type angles are also accounted for using a polynomial-type expansion in J(J + 1). Assigning the levels arising from numerical diagonalization of the Hamiltonian matrix in terms of rotational quantum numbers Ka and Kc and of + and − tunneling labels is not straightforward, as the ordering of the tunneling sublevels changes for large enough Ka values. The level assignment chosen here is consistent with symmetry and ensures a smooth variation of the tunneling splitting as a function of Ka for each J value. For a given Ka value, the tunneling matrix element H_JKγ1;JK'γ'2 couples the two members of an asymmetry doublet. A mixing of the J(Ka,Kc) ± and J(Ka,Kc±1) ∓ rotational-tunneling sublevels arises and leads to forbidden transitions with even ∆Ka and ∆Kc (Turner et al. 1981). Such transitions were assigned in the previous investigations (Turner & Cox 1976; Turner et al. 1981) and in the present work. The Eulerian-type angles θ2, φ2 and χ3, θ3, φ3 were calculated numerically, solving Eqs. (49) of Hougen (1985) for each tunneling motion and computing α-dependent atom positions with the structure of Kilb et al. (1957). Table 1 lists the computed values along with calculated rotational constants and dipole moment components. The latter were obtained from Turner & Cox (1978) using their favored orientation.

Notes to Table 1. Eulerian-type angles, in degrees, involved in the rotational dependence of the tunneling matrix elements, the rotational constants, in cm⁻¹, and the dipole moment components, in Debye, are listed for the In and Out conformations. For symmetry reasons, the relation χ2 = φ2 + π is fulfilled and µy^In is zero. Superscripted In and Out labels identify the rotational constants and dipole moment components.

Fig. 3. Effects of the two tunneling motions ∆E, plotted in MHz as a function of J for several Ka values and for all three tunneling sublevels. In identifies the level arising from the In conformation; + and − identify the tunneling sublevels arising from the Out conformation. The effects of the tunneling motion connecting the In and Out conformations can be seen for the 17(8,9) In and 17(7,10) − sublevels.

Line assignment and line frequency analysis

Starting from the results of Turner & Cox (1976), parallel a-type and perpendicular b-type transitions within the In conformation were assigned up to J = 36 and Ka = 3. This first set of transitions was fitted with a Watson-type Hamiltonian. Parallel a-type and perpendicular b- and c-type transitions within and between the + and − sublevels of the Out conformation were afterwards assigned up to J = 36 and Ka = 4, using the results of Turner et al. (1981).
Fitting of this second set of transitions yielded rotational constants for the Out conformation, the magnitude of the tunneling splitting h2, and the Eulerian-type angles θ2 and φ2. Both sets of transitions were then fitted together, and the parameters corresponding to the 1 → 3 tunneling motion and the energy difference Ed could be obtained. Transitions perturbed by the coupling between the In and Out conformations (Cox et al. 2003) could then be included in the fit. New transitions were predicted and searched for. For the In conformation, it was possible to assign a-type transitions up to J = 36 and Ka = 14 and b-type transitions up to J = 39 and Ka = 13. For the Out conformation, a-type transitions were assigned up to J = 35 and Ka = 14 and perpendicular b- and c-type transitions up to J = 36 and Ka = 5. The smaller number of perpendicular transitions assigned for the Out conformation than for the In conformation may be due to two factors. The first possible explanation is a decreased line strength, due to a smaller value of the x dipole moment component of this conformation compared to that of the In conformation, as emphasized by Table 1. Alternatively, there may be a less favorable Boltzmann factor due to a larger A rotational constant for the Out conformation than for the In conformation and the fact that the Out conformation is 15 cm⁻¹ above the In conformation. Table 2 lists the number of assigned transitions for each conformation, counting forbidden even ∆Ka and ∆Kc transitions of the Out conformation as a-type transitions.

Table 2. Number of assigned transitions.
Turner & Cox (1976): In a-type 23, In b-type 41; all 64.
Turner et al. (1981): Out a-type 38, Out b-type 35, Out c-type 21; all 94.
This work: In a-type 488, In b-type 502, Out a-type 1033, Out b-type 349, Out c-type 26; all 2398.
All: In a-type 511, In b-type 543, Out a-type 1071, Out b-type 384, Out c-type 47; all 2556.
Notes. The number of assigned a-, b-, and c-type transitions for each conformation in the two previous investigations (Turner & Cox 1976; Turner et al. 1981) and in this work. c-type transitions within the In conformation are not allowed. No transitions were assigned between the In and Out conformations.

In the final analysis, experimental frequencies were introduced in a least-squares fit procedure where they were given a weight equal to the inverse of their experimental uncertainty squared. Unresolved K-type doublets were treated as in Margulès et al. (2009). The rotational Watson-type Hamiltonians used for the In and Out conformations were written using Watson's A-set of distortion parameters (Watson 1967, 1968a). The root mean square value of the observed minus calculated frequency is 0.054 MHz for transitions within the In conformation, 0.193 MHz for transitions within the Out conformation, and 0.151 MHz for all transitions. The unitless standard deviation of this final analysis is 2.3. The lowest order parameters obtained in the line position analysis of Sect. 2.3 are listed in Table 5; parameters are in cm⁻¹, except for the angles θ2, φ2 and χ3, θ3, φ3, which are in degrees, and uncertainties are given in parentheses in the same units as the last quoted digit. For the whole dataset, assignments, observed and calculated frequencies, and residuals are listed in Table 3, available at the CDS. This table displays 13 columns. Columns 1-4 (5-8) give the assignment of the upper (lower) level in terms of J, Ka, Kc rotational quantum numbers and a vibrational label v. The latter is zero for the In conformation and + or − for the two tunneling sublevels of the Out conformation (see Sect. 2.2). Column 9 is the observed frequency.
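As a small numerical illustration of the weighting scheme and fit statistics quoted above, the sketch below computes the rms of the observed-minus-calculated residuals and a weighted (unitless) deviation, each line being weighted by the inverse of its experimental uncertainty squared. The frequencies and uncertainties are invented for the example and are not the CH2DCOH measurements.

```python
import numpy as np

def fit_statistics(obs_mhz, calc_mhz, sigma_mhz, n_params=0):
    """Weighted statistics of a line position analysis.

    Each residual is weighted by 1/sigma^2; the unitless deviation is the rms of
    the residuals expressed in units of their experimental uncertainties
    (dividing by N - n_params gives the degrees-of-freedom corrected value).
    """
    obs, calc, sigma = map(np.asarray, (obs_mhz, calc_mhz, sigma_mhz))
    res = obs - calc
    rms_mhz = np.sqrt(np.mean(res ** 2))
    dof = max(len(res) - n_params, 1)
    unitless = np.sqrt(np.sum((res / sigma) ** 2) / dof)
    return rms_mhz, unitless

# Invented example: three lines with 30 kHz uncertainty and one with 50 kHz.
obs = [150123.412, 330456.780, 660789.015, 901234.600]
calc = [150123.380, 330456.822, 660788.950, 901234.530]
sigma = [0.030, 0.030, 0.030, 0.050]
print(fit_statistics(obs, calc, sigma))
```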
For the Eulerian-type angles describing the rotational dependence of the tunneling matrix elements, the discrepancies are at most 5%.

Spectroscopic database for CH2DCOH and CH3COD

For CH2DCOH, the spectroscopic database was built using the results of the previous sections. Transitions were calculated up to J = 26, and their line strength and line intensity were computed using the dipole moment components in Table 1. For CH3COD, the results of the analysis carried out by Elkeurti et al. (2010) were used, and transitions were calculated using the same maximum value of J. Partition functions Qrot, listed in Table 6, were computed for several temperatures taking degeneracy factors equal to (2J + 1). A zero energy was taken for the In conformation 0_00 level of CH2DCOH and for the νt = 0 A1 symmetry 0_00 level of CH3COD. For both species, lines were selected using the procedure in the JPL database catalog line files (Pickett et al. 1998). An intensity cutoff that depends on the line frequency was taken; its value in nm²·MHz units at 300 K is a function of F, the frequency in MHz, and of LOGSTR0 and LOGSTR1, two dimensionless constants both set to −8 (see the sketch below). The linelists, given in Table 7 for CH2DCOH and in Table 8 for CH3COD, are available at the CDS. They are formatted in the same way as the catalog line files of the JPL database (Pickett et al. 1998) and display 16 columns. Columns 1-3 contain respectively the line frequency (FREQ) in MHz, the error (ERR) in MHz, and the base 10 logarithm of the line intensity (LGINT) in nm²·MHz units at 300 K. Columns 4-6 give the degrees of freedom of the rotational partition function (DR), the lower state energy (ELO) in cm⁻¹, and the upper state degeneracy (GUP), respectively. Columns 7 and 8 contain the species tag (TAG) and format number (QNFMT), respectively. Finally, Cols. 9-12 (13-16) give the assignment of the upper (lower) level in terms of J, Ka, Kc, and a vibrational quantum number. For CH2DCOH, this quantum number is zero for the levels of the In conformation and 1 or 2 for the + and − sublevels of the Out conformation. For CH3COD, this label is 0 for A-symmetry levels and 1 and 2 for E-symmetry levels when νt = 0. This label is 3 and 4 for E-symmetry levels and 5 for A-symmetry levels when νt = 1. For both species, a minimum value of 10 kHz was adopted for the calculated error (ERR). For observed unblended microwave lines, the line frequency (FREQ) and the error (ERR) were replaced by their experimental values. This is then indicated by a negative species tag.

Astrophysical observations

High deuterium fractionation has been observed in various types of environments such as prestellar cores, hot cores, and hot corinos. Its study is considered to be an efficient probe of the physical and chemical conditions of these environments and helps us to understand their formation. This is especially interesting for the so-called complex organic molecules, such as methanol and bigger molecules, for which it may allow differentiation of gas-phase and solid-state formation pathways. We first used the ASAI (Astrochemical Surveys At IRAM) IRAM-30 m Large Program data to search for CH2DCOH and CH3COD. The goal of these observations was to carry out unbiased millimeter line surveys between 80 and 272 GHz of a sample of ten template sources, which fully cover the first stages of the formation process of solar-type stars, from prestellar cores to the late protostellar phase (Lefloch et al. 2018).
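Before turning to the line identification, here is a small sketch of the frequency-dependent intensity cutoff used when selecting lines for the databases described above. The functional form assumed here, 10^LOGSTR0 + 10^LOGSTR1 * (F/300000)^2, is the convention documented for JPL/SPCAT-style catalog tools; whether the present linelists use exactly this expression is an assumption of the example, so treat the code as illustrative only.

```python
LOGSTR0 = -8.0   # dimensionless cutoff constants, both set to -8 in the text
LOGSTR1 = -8.0

def intensity_cutoff(freq_mhz):
    """Frequency-dependent intensity cutoff in nm^2.MHz at 300 K.

    Assumed JPL/SPCAT-style form: 10**LOGSTR0 + 10**LOGSTR1 * (F / 300000)**2,
    with F the line frequency in MHz (300000 MHz = 300 GHz).
    """
    return 10.0 ** LOGSTR0 + 10.0 ** LOGSTR1 * (freq_mhz / 300000.0) ** 2

def keep_line(freq_mhz, lgint):
    """Keep a line if its intensity (LGINT is the base-10 log of the intensity
    in nm^2.MHz at 300 K) exceeds the cutoff at its frequency."""
    return 10.0 ** lgint >= intensity_cutoff(freq_mhz)

print(intensity_cutoff(150000.0), keep_line(150000.0, -6.5))
```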
We used the CASSIS software for the line identification in the publicly available ASAI data. We conclude that these species are not detected in any of the ASAI sources with these single-dish observations, either because they are only present in dense and hot regions such as the hot corinos, or because they are present with too small an abundance in the colder extended envelope to be detected by these observations. We then used ALMA interferometric observations toward the very line-rich source IRAS 16293−2422 (hereafter IRAS16293). IRAS16293 is a deeply embedded young stellar binary system located in the L1689 region in the ρ Ophiuchus cloud region, extensively studied through millimeter and submillimeter single-dish and interferometer observations. It has a cold outer envelope (with spatial scales of up to ∼6000 au) (Jaber Al-Edhari et al. 2017) and a hot corino at scales of ∼100 au (Jørgensen et al. 2016). Due to its hot-core-like properties, a wealth of complex organic molecules has been reported toward its two binary components, I16293A and I16293B, separated by 5″ (Wootten 1989). We used the publicly available ALMA Protostellar Interferometric Line Survey (PILS, Jørgensen et al. 2016), an unbiased spectral survey of IRAS16293 covering a frequency range of about 34 GHz in ALMA's Band 7, performed in ALMA's Cycle 2 (project-id: 2013.1.00278.S). Full observational details are given in Jørgensen et al. (2016). The entire raw dataset of this survey is accessible on the ALMA website. In this work, we only used the data obtained with the 12 m array (∼38 antennas in the array at the time of observations), which we reprocessed using the standard pipeline scripts to obtain data cubes with the full spectral resolution of δv ∼ 0.25 km s⁻¹, in a 0.5″ beam located ∼1″ east of source B (αJ2000 = 16h32m22.5375s; δJ2000 = −24°28′32.555″), necessary to decrease the damaging effect of line blending. We first computed local thermodynamic equilibrium (LTE) synthetic spectra, using the CASSIS software, of the expected brightest lines of the CH2DCOH and CH3COD species in the PILS frequency range, limiting the search to transitions with Ai,j ≥ 0.001 s⁻¹ and Eup ≤ 500 K. For the synthetic spectra we assumed a source size larger than the beam (3″), an excitation temperature of 100 K, a line width of 0.8 km s⁻¹, and column densities of 5 × 10¹⁴ cm⁻² for CH2DCOH and 3.5 × 10¹⁴ cm⁻² for CH3COD, respectively. We limited the search to a 10 GHz spectral band among the 34 GHz of the PILS survey, where the density of CH2DCOH and CH3COD lines is the largest. The goal being the identification of the CH2DCOH and CH3COD lines, we optimized neither the data processing nor the CASSIS LTE modeling to reproduce the line intensities. Figures A.1 and A.2 show the detection of 93 CH2DCOH lines and 43 CH3COD lines in this 10 GHz frequency range (among, respectively, the 101 and 99 lines present in the frequency range with the thresholds used). Tables A.1 and A.2 give the parameters of the detected lines. Note that Jørgensen et al. (2018) reported the detection of CH3COD in IRAS 16293-2422B.
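To give a feel for the LTE modeling step described above, the sketch below evaluates the integrated intensity and peak brightness temperature expected for a single optically thin line under assumptions of the same kind (excitation temperature of 100 K, column density of 5 × 10¹⁴ cm⁻², 0.8 km s⁻¹ line width, unit beam filling factor, Rayleigh-Jeans regime). The Einstein coefficient, upper-state energy, degeneracy, and partition function used here are placeholders, not values taken from the CH2DCOH or CH3COD catalogs, and the actual modeling was carried out with the CASSIS software.

```python
import numpy as np

H = 6.62607015e-34   # Planck constant, J s
K_B = 1.380649e-23   # Boltzmann constant, J/K
C = 2.99792458e8     # speed of light, m/s

def lte_integrated_intensity(freq_hz, a_ul, g_up, e_up_k, n_tot_cm2, t_ex, q_tex):
    """Integrated intensity (K km/s) of one optically thin line in LTE.

    N_u = N_tot * g_up * exp(-E_up / T_ex) / Q(T_ex)     (E_up given in Kelvin)
    W   = h c^3 A_ul N_u / (8 pi k nu^2)                 (RJ, optically thin)
    """
    n_tot = n_tot_cm2 * 1.0e4                 # cm^-2 -> m^-2
    n_up = n_tot * g_up * np.exp(-e_up_k / t_ex) / q_tex
    w_si = H * C**3 * a_ul * n_up / (8.0 * np.pi * K_B * freq_hz**2)  # K m/s
    return w_si / 1.0e3                       # K km/s

def gaussian_peak(w_k_kms, fwhm_kms=0.8):
    """Peak brightness temperature of a Gaussian line of the given FWHM."""
    return w_k_kms / (fwhm_kms * np.sqrt(np.pi / (4.0 * np.log(2.0))))

# Placeholder line parameters (not taken from the actual linelists).
w = lte_integrated_intensity(freq_hz=340.0e9, a_ul=5.0e-4, g_up=31,
                             e_up_k=120.0, n_tot_cm2=5.0e14, t_ex=100.0,
                             q_tex=2.0e4)
print(w, gaussian_peak(w))
```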
Conclusions

The rotation-torsion spectrum of the non-rigid mono deuterated acetaldehyde CH2DCOH was experimentally and theoretically investigated. Due to the internal rotation of the asymmetrical CH2D methyl group, the ground vibrational state of the molecule is split into three torsional sublevels. Transitions within and between these sublevels were measured in the submillimeter wave and terahertz spectra described in Sect. 2.1. These transitions, along with previously measured ones (Turner & Cox 1976; Turner et al. 1981), were fitted using the IAM treatment (Hougen 1985; Coudert & Hougen 1988) presented in Sect. 2.2. The frequencies of 2556 transitions could be reproduced with a 2.3 unitless standard deviation. The good agreement between the calculated spectroscopic parameters in Table 1 and their experimental values in Table 5 emphasizes a fairly good understanding of the first three torsional levels of mono deuterated acetaldehyde CH2DCOH. The present analysis allowed us to identify two types of tunneling motions. In addition to the tunneling motion connecting the two energetically equivalent Out configurations, dealt with in mono deuterated methyl formate (Margulès et al. 2009), it was possible to observe the tunneling motion connecting the energetically inequivalent In and Out conformations. This second tunneling motion leads only to shifts, as it connects levels that already have different energies (Cox et al. 2003). Figure 3 illustrates the effects of both tunneling motions. The tunneling motion connecting the two energetically equivalent Out configurations leads to an 800 MHz tunneling splitting clearly visible in this figure. The effects of the tunneling motion connecting the In and Out conformations are smaller and become important when level crossings occur. The results of the analysis were used to build a database for astrophysical purposes for CH2DCOH. A similar database, for the isomeric mono deuterated species CH3COD, was compiled starting from the results of the previously published analysis of Elkeurti et al. (2010). With these databases, we have conducted a search for CH2DCOH and CH3COD lines in the publicly available ASAI IRAM-30m Large Program data and the ALMA Protostellar Interferometric Line Survey (PILS, Jørgensen et al. 2016). Both CH2DCOH (93 transitions) and CH3COD (43 transitions) were detected, only toward source B of the IRAS 16293−2422 young stellar system located in the ρ Ophiuchus cloud region. Tables A.1 and A.2 list the transitions identified in this source.

Fig. A.2. CH3COD observed (in black) and modeled (in red) lines. The lines have been shifted to a VLSR of 2.7 km s⁻¹. The quantum numbers are indicated, sorted in frequency, for Ai,j ≥ 0.001 s⁻¹ and Eup ≤ 500 K. In the case of multiple transitions, the quantum numbers are indicated from left to right, with increasing VLSR.
6,448.4
2019-04-01T00:00:00.000
[ "Physics" ]
Updated taxonomy of Lactifluus section Luteoli: L. russulisporus from Australia and L. caliendrifer from Thailand

Abstract

Lactifluus russulisporus Dierickx & De Crop and Lactifluus caliendrifer Froyen & De Crop are described from eucalypt forests in Queensland, Australia and different forest types in Thailand, respectively. Both species have recently been published on Index Fungorum and fit morphologically and molecularly in L. sect. Luteoli, a section within L. subg. Gymnocarpi that encompasses species with alboochraceous basidiomes, white latex that stains brown, and typical capitate elements in the pileipellis and/or marginal cells.

Introduction

Since the division of Lactarius into Lactarius sensu novo and Lactifluus (Buyck et al. 2008), our understanding of both genera has increased significantly. Although Lactifluus is the smaller of the two genera, it is characterised by a higher genetic diversity, with subgroups in very different and genetically distant clades. Recently, efforts in Lactifluus culminated in a new infrageneric classification.

Materials and methods

Microscopic structures were checked for in Cotton blue in lactic acid and Cresyl blue (Clémençon 1997, 2009). Spore ornamentation is described and illustrated as observed in Melzer's reagent. A total of 40 spores (20 per collection) were measured for each of the two new species. For details on terminology we refer to Verbeken (1998) and Verbeken and Walleyn (2010). Line drawings were made with the aid of a drawing tube (a Zeiss camera lucida on a Zeiss Axioskop 2 microscope equipped with a magnification changer of 2.5× for spores, and an Olympus U-DA on an Olympus CX21 microscope for individual elements and pileipellis structures) at original magnifications of 6000× for spores and 1500× for individual elements and sections. Basidia length excludes sterigmata length. Spores were measured in side view, excluding the ornamentation, and measurements are given as (MINa) [AVa − 2*SD] - AVa - AVb - [AVb + 2*SD] (MAXb), with AVa = lowest mean value for the measured collections and AVb = greatest mean value for the measured collections, SD = standard deviation, MINa = lowest extreme value of collection "a" and MAXb = greatest extreme value of collection "b". The Q-value (quotient length/width) is given as (MIN Qa) Qa-Qb (MAX Qb), with Qa = lowest mean ratio for the measured collections and Qb = greatest mean ratio for the measured collections, MIN Qa = lowest extreme ratio of collection "a" and MAX Qb = greatest extreme ratio of collection "b". Other measurements are given as MIN-MAX values. Colour codes refer to Kornerup and Wanscher (1978). Microscopic photographs were taken using a Nikon Eclipse Ni-U microscope equipped with a DS-Fi1c camera and Nikon NIS-Elements software including the EDF module.

Molecular work

DNA from dried collections was extracted using the protocol described by Nuytinck and Verbeken (2003) with modifications described in Van de Putte et al. (2010), and from fresh material using the CTAB extraction method described in Nuytinck and Verbeken (2003). Protocols for PCR amplification follow Le et al. (2007). The internal transcribed spacer (ITS) was sequenced for a second collection of each new species using the primers ITS1-F and ITS4 (Gardes and Bruns 1993; White et al. 1990). PCR products were sequenced using an automated ABI 3730 XL capillary sequencer (Life Technologies) at Macrogen. Forward and reverse sequences were assembled into contigs and edited where needed with Sequencher v5.0 software (Gene Codes Corporation, Ann Arbor, MI, USA).

Results

In congruence with De Crop et al.
(2017), our molecular results show that the collections from Australia as well as those from Thailand belong to Lactifluus subg. Gymnocarpi sect. Luteoli (Fig. 2). The newly generated sequences for Halling 9674 and Wisitrassameewong 392 belong to the same species as Halling 9398 and Wisitrassameewong 378, respectively. These two species are supported by morphological and geographical differences (see discussion) and are fully described below as L. russulisporus and L. caliendrifer.

Original diagnosis. Basidiocarps small (up to 4 cm cap diam.). Cap and stipe dry, matt, yellowish white to pale brown. Context with unpleasant, fishy smell. Latex copious, watery white, staining tissues brown. Basidiospores broadly ellipsoid, 7.0-7.8-7.9-8.7 × 5.7-6.4-6.5-7 μm (n = 40, Q = 1.14-1.23-1.40); ornamented with irregular and isolated warts which are up to 1.3 μm high. True pleurocystidia absent, but with few to abundant sterile elements in the hymenium. Pileipellis a lampropalisade. L. russulisporus differs from its sister species, L. caliendrifer, by its longer basidia, slightly bigger spores with a somewhat heavier and more irregular ornamentation, and the absence of abundant thick-walled marginal cells.

Lactifluus russulisporus Dierickx & De Crop

Basidiomes rather small. Pileus 20-40 mm diam., convex to plano-convex and depressed on disc to uplifted and slightly depressed, yellowish white (4A2) to pale brown, dry, matt, subtomentose to finely subvelutinous and somewhat subrugulose to subcorrugate; margin inrolled. Stipe 10-30 × 5-10 mm, cylindrical, dry, matt, yellowish white, sometimes paler brownish towards the base, with white mycelium at the base. Lamellae adnexed to subdecurrent, rather close, pale greyish white to yellowish white, turning darker to near pale brown with age. Context white, solid to somewhat pithy in the stipe; smell unpleasant, fishy; taste mild. Latex copious, watery white, staining tissues brown.

Distribution. Known from Eastern Australia.

Ecology. East-Australian wet sclerophyll and subtropical rainforest, scattered to gregarious on soil under Leptospermum, Syncarpia, and Eucalyptus spp.

Etymology. Named after the spores, which are reminiscent of the spore ornamentation and shape of many Russula species.

Remarks. Lactifluus caliendrifer differs from its sister species, L. russulisporus, by the abundant thick-walled marginal cells, very long pileipellis hairs, and slightly smaller basidia and spores with more regular and lower warts.

Discussion

The morphological distinction between Lactarius and Lactifluus is not always straightforward in the field and can only be based on some general trends. For example, the genus Lactifluus is generally characterised by the complete absence of zonate and viscose to glutinose caps, and it contains many species with veiled and velvety caps (Buyck et al. 2008; Verbeken and Nuytinck 2013). A cellular hymenophoral trama and a lampropalisade as pileipellis structure are both characters which are more often observed in Lactifluus than in Lactarius. The newly described species can macroscopically be recognised as members of the genus Lactifluus by the tomentose to velvety appearance of their caps and the exuded milk that changes to brownish (which is more common in Lactifluus and very rare in Lactarius). Microscopically, the presence of a lampropalisade and a cellular trama indicate the affinity with Lactifluus. Lactifluus russulisporus and L. caliendrifer belong to L. subg. Gymnocarpi, which is supported by molecular (Fig.
2) and morphological data (e.g. brown discolouration of the latex and the absence of true pleurolamprocystidia). Both new species are placed in L. sect. Luteoli, which consists of seven species from all continents except South America and Antarctica and is characterised by capitate elements in the pileipellis and/or the presence of differentiated marginal cells. The sister species Lactifluus russulisporus and L. caliendrifer are clearly delimited molecularly, which is reflected in both geographical and morphological characters. Geographically, L. russulisporus is only known from Eastern Australia (Queensland), while L. caliendrifer is only known from Southeast Asia (Thailand). In the field, both species can be recognised by their cream to yellowish white basidiomes, dry and finely velvety to pruinose pilei, rather crowded, white to concolorous lamellae, and copious watery latex that stains brown. These features are common to most species in L. sect. Luteoli. Lactifluus caliendrifer can be distinguished macroscopically by its velvety pileus, whiter basidiomes, and its strong and fruity smell. Lactifluus russulisporus differs from its sister species by having a more yellowish-brown shade and an unpleasant, fishy smell. Lactifluus rubrobrunnescens is known to occur in Java (Indonesia) and can easily be recognised by a hollow stipe, latex that stains reddish brown, more globose spores (average Q = 1.16), distinctly capitate elements in the pilei- and stipitipellis, and marginal cells (Verbeken et al. 2001).

Notes on terminology

When it comes to terminology used in the genera Lactarius and Lactifluus, most authors tend to follow Verbeken and Walleyn (2010) and Verbeken (1998). Unfortunately, some confusion seems to exist concerning hymenophoral cells that can be termed either leptocystidia or sterile elements. Even though this type of cell is frequently present in Lactifluus (pers. observations), these cells are only rarely reported in species descriptions (Delgat et al. 2017), probably often being dismissed as basidioles and/or of limited taxonomic value. This problem presented itself during the description of the two new species, and a consensus between the authors of this paper was pursued. The term leptocystidium is composed of the Greek leptós, meaning "smooth, thin-walled", and cystidium, meaning "a sterile body, frequently of distinctive shape, occurring at any surface of a basidiome, particularly the hymenium from which it frequently projects" (Ainsworth 2008). In Clémençon (1997), leptocystidia are described in a similar manner, with the addition that they often have an excretory function. For the latter, we could not find evidence in our collections. According to Verbeken and Walleyn (2010), leptocystidia can be regarded as "thin-walled cystidia without remarkable content and thus only deviating by their shape. They are tapering at the top and often have a rostrate apex, which makes them easy to confuse with monosterigmatic basidia. One can consider them to be cystidia if they are regularly observed and if they never bear a spore or spore primordium". In the two new species, and by extension in most Lactifluus species, thin-walled sterile cells with no remarkable content occur in the hymenium. Furthermore, they do not exhibit a deviating shape, being cylindrical and usually ending blunt. If shape deviation is seen as a vital component for being a cystidium, these cells cannot be named as such. In addition, we dismiss the idea that these cells represent basidioles.
Firstly, no intermediate forms between these cells and basidioles were observed. Secondly, in L. russulisporus these cells display a different morphology in the two collections. In RH 9674, and by extension in general, they do not protrude from the hymenium and do not exhibit a deviant form, leaving open the possibility that they constitute basidioles or protobasidia (Fig. 7C). However, in RH 9398, they grow out strikingly, protruding clearly from the hymenium (Fig. 7A, B). The same behaviour is seen in the pseudocystidia and marginal cells in this collection. According to Moore (2005), principle nine of fungal developmental biology states that "meiocytes appear to be the only hyphal cells that become committed to their developmental fate. Other highly differentiated cells retain totipotency - the ability to generate vegetative hyphal tips that grow out of the differentiated cell to re-establish a vegetative mycelium." A possible hypothesis is that some stimulus, perhaps environmental, caused the totipotent cells in the hymenium to grow out, giving rise to the protruding sterile elements, pseudocystidia, and marginal cells in RH 9398. This explanation adds to the idea that these cells are not precursor cells of meiocytes (basidia). As these sterile elements are argued not to be cystidia or basidioles, the question remains as to what they are. Several terms might have been used to indicate the same kind of cells. For example, haplohyphidia refers to unmodified, unbranched or little-branched terminal hyphae in the hymenium of (mostly) Aphyllophorales. An intriguing term, paraphyses, is used in the works on the developmental biology of the hymenium done in Coprinopsis cinerea (Horner and Moore 1987; Rosin and Moore 1985a). These cells originate as branches of sub-basidial cells and insert into the basidial layer, later inflating so that they become the main structural component as a pavement from which basidia and cystidia protrude (Horner and Moore 1987; Moore 1985; Rosin and Moore 1985a). This description fits well with the sterile elements observed in Lactifluus (Figs 7, 8F). Nevertheless, paraphyses is a term strongly associated with Ascomycota, used for more hair-like (filiform) cells. It cannot be stated with certainty that Ascomycete paraphyses are homologous to the cells we find in Lactifluus. Given the lack of a distinctive deviating shape in most cases, the improbability of their being basidioles, and the neutrality of the term, we recommend the use of the term 'sterile elements' over the terms 'leptocystidia' and 'paraphyses' to refer to these cells. To this it can be added that marginal cells often bear a striking resemblance to sterile elements (Fig. 8). Furthermore, in Inocybe, little differentiated cystidia are referred to as paracystidia, which also show a similar morphology to marginal cells and might constitute the same type of cell (Jacobsson and Larsson 2012; Kuyper 1986). Presently it is difficult to argue whether this is due to homology or homoplasy. Marginal cells are sterile elements on a sterile edge that differ from pleurocystidia and are, in fact, 'hairs' sensu Romagnesi (Verbeken and Walleyn 2010). In species where the edge is fertile, sterile elements are also present on the edge. It is possible that, when no differentiated marginal cells are present on an infertile edge, sterile elements are present and consequently reported as being marginal cells. We suggest paying more attention to these sterile elements, which occur predominantly in Lactifluus.
Given the variation that we observe within L. russulisporus, it is likely that the taxonomic value of this character is rather low, but this needs more observations.

Acknowledgements. This work was supported by the 'Bijzonder Onderzoeksfonds' (BOF) of Ghent University and the Thailand Research Fund (BRG5580009) under the research grant entitled 'Taxonomy, Phylogeny, and Biochemistry of Thai Basidiomycetes'. Roy Halling was partially supported by National Science Foundation (USA) funds from grant DEB 1020421. The National Geographic Society Committee for Research and Exploration provided funding via grant 8457-08. The Queensland Herbarium (BRI) collaborated generously with assistance and support for herbarium and field studies in Australia. We would like to thank Viki Vandomme for conducting lab work.
3,108
2019-07-10T00:00:00.000
[ "Environmental Science", "Biology" ]
L-Arginine Grafted Chitosan as Corrosion Inhibitor for Mild Steel Protection

Corrosion prevention has been a global concern, particularly in metallic and construction engineering. Most inhibitors are expensive and toxic; therefore, developing nontoxic and cheap corrosion inhibitors has been a way forward. In this work, L-arginine was successfully grafted on chitosan by a thermal technique using a reflux condenser. The copolymer was characterized by Fourier-transform infrared spectroscopy (FTIR), thermogravimetric analysis (TGA), and X-ray diffraction (XRD). The corrosion inhibition performance of the composite polymer was tested on mild steel in 0.5 M HCl by electrochemical methods. The potentiodynamic polarization (PDP) and electrochemical impedance spectroscopy (EIS) results were consistent. The inhibition efficiency at the optimum concentration rose to 91.4%. The quantum chemical calculation parameters show good properties of the material as a corrosion inhibitor. The molecular structure of the inhibitor was subjected to density functional theory (DFT) to understand its theoretical properties, and the results confirmed the inhibition efficiency of the grafted polymer for corrosion prevention.

Introduction

Corrosion is defined as the degradation of materials caused by chemical or electrochemical attacks within the working environment [1]. It results in material losses and economic disadvantages for partial or total replacement of equipment and structures [2]. It is considered an electrochemical reduction-oxidation (redox) reaction that occurs on the surface of metallic materials, prompting the release of electrons by the dissolution of metal and their successive transfer to another position on the surface, causing the hydrogen ions to be reduced and resulting in gradual deterioration and subsequent failure of the host material [3]. Corrosion has not only economic but also social implications, including the safety and health of people, either working in industries or living in nearby towns. The petroleum industry is one of the sectors most affected by corrosion due to the presence of many corrosive substances in oil, which affects transportation of the petroleum products through pipelines [4]. Furthermore, corrosion represents a significant threat to storing radioactive wastes for safe disposal, and in medical implants as it causes blood poisoning [5]. Thus, it has been recognized as a global problem, affecting various aspects of human endeavors. Various approaches have been put in place to combat the persistent problem, with corrosion inhibitors being one of the most cost-effective and practical methods of preventing metallic corrosion in various corrosive media [6]. Corrosion inhibitors are compounds used in low concentrations to slow down or stop the electrochemical process [1]. Most conventional corrosion inhibitors contain one or more heteroatoms such as nitrogen, oxygen, phosphorus, and sulfur, and other functional groups that possess lone pairs of electrons (such as amino and hydroxyl) [7]. Corrosion occurs when metal atoms with partially filled d-orbitals, such as iron, are oxidized.
The nonmetal is reduced by gaining the electron from the metal to form an oxide. In the presence of air or moisture, this product deposits on the surface of the metal, creating a corrosive layer on the surface of the material. The presence of a corrosion inhibitor limits the corrosion reaction by altering the reaction mechanism of the corrosion process and keeping its rate to a minimum, thereby preventing economic losses due to corrosion [4]. The effectiveness of an inhibitor depends on its ability to interact with a metal surface by forming bonds with the surface through electron transfer. Inhibitors are usually adsorbed on the metal surface by dislodging water molecules on the surface, forming a compact barrier. The availability of nonbonded (lone pair) and p-electrons in inhibitor molecules facilitates the electron transfer from the inhibitor to the metal [4]. The inhibition efficiency of the inhibitor depends on the stability of the chelate formed. Therefore, it directly depends on the type and the nature of the substituents present in the inhibitor molecules [8]. The vast majority of conventional corrosion inhibitors are hazardous [9]. Thus, green corrosion inhibitors have been identified as the cheapest, most biodegradable, renewable, efficient, and ecologically friendly approach to reducing mild steel corrosion [10]. In recent years, there has been a rise in interest in using environmentally friendly, low-cost materials as corrosion inhibitors. This interest has grown to incorporate the use of polymers to prevent metallic corrosion [11]. Some researchers have claimed that chitosan is a suitable corrosion inhibitor for mild steel [6,12]. However, the extensive inter- and intramolecular hydrogen bonding of the polymer decreases its solubility in aqueous and acidic environments, thereby lowering its inhibitory action on mild steel [13,14]. Herein, we aim to boost the effectiveness of the polymer's inhibitory action by incorporating L-arginine (Figure 1). L-arginine is an amino acid (2-amino-5-guanidinopentanoic acid) found naturally in proteins, in food such as seafood, watermelon, nuts, seeds, seaweed, pork, and fish, and in rice and soy proteins [15]. L-arginine itself has been reported as a corrosion inhibitor for mild steel by Khalid et al. [16]. Gowri and co-workers [17] reported the application of L-arginine derivatives as a corrosion inhibitor for steel in seawater. Longer hydrocarbon chains in amino acids have often shown stronger corrosion mitigation [18]. As a result, adding more amino groups would enhance the electron density on the inhibitor molecule and, hence, the effectiveness of the inhibitor. Thus, the current research functionalized chitosan by grafting with L-arginine and utilized the copolymer as a mild steel corrosion inhibitor in 0.5 M hydrochloric acid, using electrochemical methods and density functional theory. The results show high inhibition efficiency even at a low concentration of inhibitor.

Materials and Methods

Chitosan of medium molecular weight (190-310 kDa, 75-85% degree of deacetylation), L-arginine, hydrochloric acid (HCl), and acetic acid were purchased from Sigma Aldrich (Shanghai, China). Tert-butyl hydroperoxide (70% solution in water) was purchased from Thermo Fisher Scientific (Waltham, MA, USA).
Other reagents such as acetone and ethanol were supplied by DKSH Specialty Chemicals (Bangkok, Thailand). All the chemicals used were of analytical grade and used without further purification. Mild steel coupons and emery paper were collected from the Khon Kaen University workshop, Thailand.

Chitosan Modification

L-arginine was grafted on chitosan by dissolving 2 g of chitosan in 100 mL of 0.01 M acetic acid solution, and 2 g of L-arginine in 50 mL of acetone. These two solutions were mixed in a beaker and 1 mL of tert-butyl hydroperoxide (TBH) was then poured into the resulting mixture. This mixture was then refluxed at 120 °C for two hours. Cs-g-L-arginine was then precipitated by adding 20 mL of 0.5 M NaOH to the mixture. The solution was filtered through 11 µm filter paper. The filtered Cs-g-L-arginine was then extracted with deionized water using Soxhlet extraction at 90 °C for six hours to remove excess homopolymer. The extracted copolymer was then dried in an oven at 60 °C for 6 h and stored in a desiccator at room temperature.

Characterizations of the Copolymer

Fourier-transform infrared (FTIR) spectra were obtained using a Tensor 27 FTIR spectrophotometer, S/N 3683 (Bruker Hong Kong Limited). Thermal analysis (TGA) was carried out using a Hitachi STA7200 TG/DTA thermogravimetric analyzer at a scan rate of 10 °C/min over the range 25 °C to 600 °C, and X-ray diffraction (XRD) spectra were analyzed using an Empyrean X-ray diffractometer from Malvern.

Corrosion Analysis

The electrochemical tests were carried out using a Metrohm Autolab potentiostat equipped with NOVA 1.1 software. The three-electrode system consisted of a saturated calomel electrode as the reference electrode, mild steel with 1 cm² exposed area as the working electrode, and copper wire as the counter electrode. The open-circuit potential was monitored for one hour before any electrochemical measurement to reach a steady-state potential.

Potentiodynamic Polarization (PDP)

The potentiodynamic polarization was carried out within the potential range of −8 V to +2 V at a scan rate of 0.001 V s⁻¹ and 303 K; the corrosion parameters were extrapolated from the Tafel plot fitting. The potentiodynamic corrosion inhibition efficiency (%IE) was calculated using Equation (1):

%IE_p = [(i0_corr − i_corr) / i0_corr] × 100    (1)

where %IE_p is the potentiodynamic inhibition efficiency, and i0_corr and i_corr are the corrosion current densities in the absence and presence of the inhibitor, respectively.

Electrochemical Impedance Spectroscopy

The electrochemical impedance spectroscopy (EIS) was carried out over the frequency range from 100 Hz to 10 MHz at 303 K. An electrochemical corrosion circuit was used to fit the corrosion data. The electrochemical corrosion inhibition efficiency %IE_EIS was calculated by Equation (2):

%IE_EIS = [(R_p − R0_p) / R_p] × 100    (2)

where R_p and R0_p are the polarization resistance values with and without inhibitor, respectively.

FTIR and XRD Spectral Analysis

Figure 2a displays the FTIR spectrum of pure chitosan, which has a broad band at 3355 cm⁻¹ corresponding to overlapping O-H and N-H stretching vibrations [18,19]. The amide II stretching band corresponds to 1603 cm⁻¹, and the band at 1591 cm⁻¹ corresponds to N-H bending [20]. The band at 1420 cm⁻¹ corresponds to the symmetrical deformation of CH2 and CH3 [20]. The FTIR spectrum of Cs-g-L-arginine shows a C=O stretch at 1603 cm⁻¹, shifted from 1685 cm⁻¹ in the pure L-arginine spectrum, indicating that L-arginine was grafted on the chitosan [21].
The appearance of some peaks from the pure L-arginine spectrum is evidence that the grafting process took place. The X-ray diffraction spectrum of Cs-g-L-arginine together with that of pure chitosan is presented in Figure 2b. The observed intense and strong broad peak at 2θ = 20° confirmed the semicrystalline structure of pure chitosan, consistent with the report by Thankamony et al. [22]. In comparison to pure chitosan, Cs-g-L-arginine had less intense and much broader peaks, indicating that grafting chitosan with L-arginine deformed the crystal zone in the chitosan system, making it less crystalline [15]. Furthermore, the formation of new peaks in the Cs-g-L-arginine spectrum is another indication that L-arginine was grafted on chitosan.

Thermal Analysis

The TGA curve of chitosan in Figure 3a shows two-stage degradation. The first weight loss of 6.58% around 100 °C is attributed to the evaporation of water [20]. A sharp weight loss (51.47%) around 297-450 °C is due to the depolymerization of chitosan and the decomposition of the amine group [23,24]. On the other hand, the Cs-g-L-arginine thermogram shows a three-step degradation pattern, with the first weight loss (11.25%) around 110 °C, which is associated with loss of water. The second step is around 271 °C to 320 °C (40.34%), which is associated with the depolymerization of chitosan and the decomposition of L-arginine from the chitosan backbone [25]. A third degradation step was observed from 320 °C to 500 °C. The DTA curve (Figure 3b) of pure Cs recorded in air shows a sharp exothermic peak around 300 °C, which is accompanied by thermal pyrolysis of the chitosan and thermal decomposition of amino and N-acetyl residues [23]. Similarly, the Cs-g-L-arginine curve shows two less intense exothermic peaks around 270 °C and 400 °C.
The DTG of pure chitosan exhibited a maximum thermal decomposition temperature (Tmax) of 300 °C, whereas Cs-g-L-arginine has a Tmax of around 400 °C. The DTG (Figure 3c) maxima temperature order is the same as that found in the DTA curve. The results from the thermal analysis indicate that grafting with L-arginine results in a drop in the thermal stability of the polymer [25]. Since the structure of pure chitosan has a significant amount of intermolecular hydrogen bonds, adding L-arginine distorts the hydrogen-bond cluster in this structure, which results in the decrease of the thermal stability of the chitosan.

Potentiodynamic Polarization (PDP)

The Tafel plot in Figure 4 shows the polarization curves for the corrosion of mild steel in 0.5 M HCl solution with different concentrations of Cs-g-L-arginine. The electrochemical parameters such as corrosion rate, cathodic Tafel slope (βc), anodic Tafel slope (βa), corrosion potential (Ecorr), and corrosion current (Icorr) were derived from the Tafel plots by the NOVA 1.1 software. The Tafel fit and percent inhibition efficiency (%IE) (calculated from polarization measurements according to Equation (1)) are shown in Table 1. From the Tafel plot, the corrosion current density decreased with the addition of the inhibitor, due to the adsorption of the polymer on the surface of mild steel, leading to a decrease in the rate of dissolution of mild steel by blanketing the mild steel surface against the corrosive agent [26,27]. The cathodic Tafel slope (βc) values are less than the anodic Tafel slope (βa) values in most cases. This implies that the addition of the inhibitor demotes the iron (Fe) dissolution much more than it retards the hydrogen evolution [12,28]. The inhibition efficiency increases linearly with increases in the concentration of Cs-g-L-arginine. At the optimum concentration of 500 ppm, the obtained inhibition efficiency was 91.4%, with a small shift of the Ecorr values towards the cathodic side.
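As a worked illustration of Equations (1) and (2), the snippet below computes the inhibition efficiency from one blank and one inhibited measurement, both from corrosion current densities (polarization route) and from polarization resistances (impedance route). The numbers are placeholders chosen only to show the arithmetic; they are not values from Table 1 or Table 2.

```python
def ie_polarization(i_corr_blank, i_corr_inhibited):
    """Equation (1): %IE from corrosion current densities (same units)."""
    return (i_corr_blank - i_corr_inhibited) / i_corr_blank * 100.0

def ie_impedance(rp_blank, rp_inhibited):
    """Equation (2): %IE from polarization (charge-transfer) resistances."""
    return (rp_inhibited - rp_blank) / rp_inhibited * 100.0

# Placeholder values for a blank and a 500 ppm Cs-g-L-arginine measurement.
print(ie_polarization(350e-6, 30e-6))   # about 91.4 %
print(ie_impedance(25.0, 290.0))        # about 91.4 %
```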
An inhibitor is generally considered to act as a cathodic or anodic type when the shift in corrosion potential exceeds 85 mV, while it is considered a mixed-type inhibitor when the shift is less than 85 mV [29]. This indicates that Cs-g-L-arginine behaves as a mixed-type corrosion inhibitor. It is well observed that additions of the inhibitor are accompanied by a lowering of the corrosion current density relative to the blank solution.

EIS Analysis

The Nyquist plots for the corrosion of the mild steel surface in 0.5 M HCl, inhibited by Cs-g-L-arginine, are depicted in Figure 5a. The plots comprise a depressed semicircle; the presence of a capacitive loop in the Nyquist curves is associated with charge transfer phenomena. The depressed shape of the capacitive semicircles can be attributed to frequency dispersion caused by inhomogeneous electrode surface behavior [30]. The impedance response changed considerably after the addition of the inhibitor, i.e., the diameter of the Nyquist plots increases with increasing inhibitor concentration. The shapes of the plots remained the same for the electrodes with and without the various concentrations of inhibitor, indicating an unaltered mechanism of the corrosion process [28,29]. The Bode plot (Figure 5b,c), in which the logarithm of the frequency is plotted against both the absolute value of the impedance (|Z|) and the phase shift, is one of the most used representation methods for electrochemical impedance spectroscopy results. The impedance and phase angle values increase with growing concentration of the inhibitor. In addition, a time constant can be found in the phase angle, usually due to the relaxation effect of the adsorption of the corrosion inhibitor molecules [31,32].
The electrochemical impedance parameters were obtained by fitting the various impedance profiles to an equivalent circuit, which is given in Figure 6. This equivalent circuit is composed of a constant phase element (CPE), the solution resistance (Rs), and the charge-transfer resistance (Rct). The system investigated here can be characterized by a distributed capacitance for the nonhomogeneous corroding surface of mild steel in 0.5 M HCl. This phenomenon of depression, modeled by the CPE, is usually associated with frequency dispersion, dislocations, surface roughness, the formation of porous layers, and the distribution of active sites [28]. Table 2 shows that as the inhibitor concentration is increased, the Rct values increase, which can be attributed to the creation of an adsorption layer on the steel surface [33]. The EIS findings revealed that Cs-g-L-arginine reaches the highest corrosion inhibition efficiency of 91.4% at optimum conditions, indicating that the presence of an electron-donating group favors metal-inhibitor interactions. Furthermore, the Cdl values decreased with increasing inhibitor concentration. This is caused by an increase in the thickness of the protective layer and/or a drop in the film's local dielectric constant [31]. The good performance of the inhibitor is supported by the rise in charge-transfer resistance values as well as the decrease in double-layer capacitance values obtained from the impedance measurements. This behavior indicates that the inhibitor acts as a barrier to the corrosion process, indicating the formation of a film [34]. By displacing H2O and other ions that were initially adsorbed at the steel/solution interface, the inhibitive layer on the electrode surface controlled the mild steel dissolution. In uninhibited 0.5 M HCl, the Fe-H2O complex is generated [32]. This complex is then transformed into the Fe-Cs-g-L-arginine complex.
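A minimal sketch of the equivalent-circuit model discussed above is given below: it evaluates the impedance of an Rs + (CPE in parallel with Rct) circuit over a frequency sweep and converts the CPE parameters into an effective double-layer capacitance using the commonly employed Hsu-Mansfeld relation. The circuit values are placeholders rather than the fitted parameters of Table 2; the fits reported here were obtained with the instrument software.

```python
import numpy as np

def circuit_impedance(freq_hz, rs, rct, q, n):
    """Impedance of Rs in series with (CPE parallel to Rct).

    CPE impedance: Z_CPE = 1 / (Q * (j*omega)**n), with 0 < n <= 1.
    """
    omega = 2.0 * np.pi * np.asarray(freq_hz)
    z_cpe = 1.0 / (q * (1j * omega) ** n)
    z_par = 1.0 / (1.0 / rct + 1.0 / z_cpe)
    return rs + z_par

def effective_cdl(q, n, rct):
    """Effective double-layer capacitance from the CPE parameters
    (Hsu-Mansfeld conversion): Cdl = (Q * Rct**(1 - n))**(1 / n)."""
    return (q * rct ** (1.0 - n)) ** (1.0 / n)

# Placeholder circuit: Rs = 2 ohm cm^2, Rct = 290 ohm cm^2, Q = 1e-4, n = 0.85.
freqs = np.logspace(5, -2, 50)            # 100 kHz down to 10 mHz
z = circuit_impedance(freqs, 2.0, 290.0, 1.0e-4, 0.85)
print(z[0], z[-1])                        # high- and low-frequency limits
print(effective_cdl(1.0e-4, 0.85, 290.0)) # effective Cdl in F cm^-2
```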
Quantum Chemical Calculations Density Functional Theory (DFT) The effectiveness of an inhibitor is often determined by its structure and molecular orbital distribution. Quantum and theoretical chemistry are known to be efficient tools for understanding the corrosion inhibition mechanism of a compound, and proper models with computational simulations based on quantum chemistry can support and confirm experimental findings. An inhibitor's effectiveness is determined by both its spatial and its electronic molecular structure. The optimized structures of chitosan-graft-L-arginine and pure chitosan are presented in Figure 7. As can be observed, the HOMO is mainly localized over the L-arginine moiety, while the LUMO density is localized exclusively on the chitosan ring, indicating that these regions are mainly involved in electron donation and acceptance, respectively, during the metal-inhibitor interactions [33]. The quantum chemical parameters are presented in Table 3. The smaller the value of ΔE, the stronger the interaction of Fe with the inhibitor and hence the higher the inhibition efficiency [35,36]. The energy gap of Cs-g-L-arginine is much lower than that of pure chitosan, indicating that grafting L-arginine onto chitosan increases the corrosion inhibition efficiency of chitosan; this further supports the results obtained from the electrochemical analysis. A high electronegativity of an inhibitor molecule indicates a strong affinity of the molecule to accept electrons from the metallic (Fe) surface; a molecule with higher electronegativity would therefore interact better with the Fe surface and show better inhibition efficiency [36]. From the results, Cs-g-L-arginine has a higher value of χ than pure chitosan, indicating that the modified copolymer has better inhibition efficiency than pure chitosan. Hydrogen ions produced by the acid will in turn promote the anodic dissolution of the mild steel, leading to various forms of corrosion [37]. Considering the number of nitrogen atoms in the inhibitor molecule, one may suggest that the molecule interacts with the hydrogen ions and at the same time interacts with the metal surface by donating lone pairs of electrons to the empty d- and f-orbitals of the iron atom.
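The global reactivity descriptors discussed above (energy gap ΔE and electronegativity χ) follow from the frontier-orbital energies via the standard Koopmans-type relations; the sketch below shows that arithmetic. The HOMO/LUMO energies used are hypothetical placeholders chosen only to mirror the qualitative trend described in the text, not the values of Table 3.

def reactivity_descriptors(e_homo_eV: float, e_lumo_eV: float) -> dict:
    """Standard frontier-orbital descriptors (Koopmans-type approximations)."""
    ionization_energy = -e_homo_eV          # I ~ -E_HOMO
    electron_affinity = -e_lumo_eV          # A ~ -E_LUMO
    gap = e_lumo_eV - e_homo_eV             # dE = E_LUMO - E_HOMO
    chi = (ionization_energy + electron_affinity) / 2.0   # electronegativity
    eta = (ionization_energy - electron_affinity) / 2.0   # global hardness
    return {"gap_eV": gap, "chi_eV": chi, "eta_eV": eta}

if __name__ == "__main__":
    # Hypothetical orbital energies for illustration only (not the Table 3 values).
    for name, homo, lumo in [("pure chitosan", -6.2, -0.4), ("Cs-g-L-arginine", -5.6, -1.1)]:
        d = reactivity_descriptors(homo, lumo)
        print(f"{name}: dE = {d['gap_eV']:.2f} eV, chi = {d['chi_eV']:.2f} eV, eta = {d['eta_eV']:.2f} eV")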
Conclusions Since most corrosion inhibitors are hazardous and expensive, interest in using eco-friendly and affordable materials as corrosion inhibitors has increased significantly in recent years, and this has been extended to the use of natural substances to prevent metallic corrosion. Chitosan is a suitable option because of its desirable qualities; however, it is more effective at inhibition when it is soluble, i.e., in an acidic aqueous environment, and modifying chitosan with another molecule that can inhibit corrosion (such as L-arginine) may increase the effectiveness of the inhibition. L-arginine was successfully grafted onto chitosan by a thermal method, and the modified copolymer was characterized by different characterization techniques. The corrosion inhibition efficiency of these polymers was tested on mild steel in 0.5 M hydrochloric acid by potentiodynamic polarization and electrochemical impedance spectroscopy and validated with DFT. The results show a high inhibition efficiency of the modified polymer of up to 91.4% at the optimum conditions. The EIS analysis supported the inhibitory function of the copolymer by demonstrating an increase in polarization resistance with increasing inhibitor concentration.
A polarization analysis showed that the inhibitor molecule suppresses both the cathodic and the anodic corrosion processes, leading to its classification as a mixed-type corrosion inhibitor. The quantum chemical calculations for the inhibitor molecule confirmed the experimental findings and provided evidence of the interaction between the metal and the inhibitor. Hence, chitosan modified by L-arginine can serve as an alternative for corrosion mitigation. Data Availability Statement: The data will be made available on request.
6,783
2023-01-01T00:00:00.000
[ "Materials Science" ]
Chemoselective bond activation by unidirectional and asynchronous PCET using ketone photoredox catalysts The triplet excited states of ketones are found to effect selective H-atom abstraction from strong amide N–H bonds in the presence of weaker C–H bonds through a proton-coupled electron transfer (PCET) pathway. This chemoselectivity, which results from differences in ionization energies (IEs) between functional groups rather than bond dissociation energies (BDEs), arises from the asynchronicity between electron and proton transfer in the PCET process. We show how this strategy may be leveraged to achieve the intramolecular anti-Markovnikov hydroamidation of alkenes to form lactams using camphorquinone as an inexpensive and sustainable photocatalyst. A. General Considerations All manipulations were performed with the rigorous exclusion of air and moisture unless otherwise stated. Commercial reagents were stored in a N2-filled glovebox and used without further purification. All liquid reagents and deuterated solvents were degassed by three cycles of freeze-pump-thaw and stored over activated 3 Å molecular sieves prior to use. All non-deuterated solvents were purified by the method of Grubbs and stored over activated 3 Å molecular sieves. 1 Camphorquinone, tributylmethylammonium dibutyl phosphate, triethylamine, tetrabutylammonium chloride (TBACl) and potassium hydrogen fluoride (KHF2) were purchased from Sigma Aldrich. The Ir photooxidant, ( B. Synthesis of New Amide Precursors and Products The known amide substrates were either purchased or prepared as previously described, 2 whereas the new ones were synthesized according to the procedures reported below. Cyclohex-2-en-1-yl (4-(Bpin)phenyl)carbamate (13). In a 20 mL scintillation vial equipped with a PTFE-coated stir bar, 2-cyclohexen-1-ol (0.404 g, 4.08 mmol, 1.00 equiv) and triethylamine (1.75 mL, 12.7 mmol, 3.20 equiv) were combined and CH2Cl2 (2 mL) was added. 4-Isocyanatobenzeneboronic acid pinacol ester (1.00 g, 4.11 mmol, 1.00 equiv) was added as a solid and more CH2Cl2 (3 mL) was used to effect quantitative transfer. After stirring the yellow solution at room temperature for 18 h, an aliquot was removed, dried, and subjected to 1H NMR analysis, which showed complete consumption of the starting materials. The sample was returned and recombined with the reaction. Volatiles were removed from the solution in vacuo and the residual solid was redissolved in a minimum of CH2Cl2 and subjected to column chromatography (0 → 30% EtOAc in hexanes; the desired product elutes first). After removing the solvents in vacuo, a white solid remained and was dried for 18 h. Yield after drying: 0.360 g (26%).
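As a quick arithmetic check of the quoted yield, the sketch below converts the isolated mass of 13 to a percent yield relative to the limiting reagent (2-cyclohexen-1-ol). The molar mass used is my own estimate from the assumed formula C19H26BNO4 (about 343.2 g/mol) and should be treated as an assumption, not a value given in the text.

# Hedged yield check for carbamate 13. The molar mass is estimated from the
# assumed formula C19H26BNO4 and is an assumption, not a value from the text.
MOLAR_MASS_13 = 343.2      # g/mol, assumed

def percent_yield(mass_g: float, molar_mass: float, limiting_mmol: float) -> float:
    """Percent yield = (moles of isolated product / moles of limiting reagent) * 100."""
    return (mass_g / molar_mass) / (limiting_mmol / 1000.0) * 100.0

if __name__ == "__main__":
    # 0.360 g of 13 isolated; 4.08 mmol of 2-cyclohexen-1-ol as the limiting reagent.
    print(f"Yield of 13 ~ {percent_yield(0.360, MOLAR_MASS_13, 4.08):.0f} %")  # ~26 %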
3-(4-(Bpin)phenyl)hexahydrobenzo[d]oxazol-2(3H)-one (8). In the glovebox, compound 13 (0.176 g, 0.518 mmol, 1.00 equiv), camphorquinone (17.6 mg, 0.106 mmol, 0.200 equiv) and phenyl disulfide (12.4 mg, 0.057 mmol, 0.100 equiv) were combined as solids in a 20 mL scintillation vial containing a PTFE-coated stir bar. CH2Cl2 (5 mL) was added, and the vial was capped and sealed with electrical tape. The reaction was then brought outside the glovebox and irradiated using a Kessil A160WE Tuna Blue LED lamp under fan cooling. After 24 h, the reaction was brought back into the glovebox and an aliquot was retrieved for NMR analysis, which still showed the presence of starting material. More camphorquinone (13.0 mg, 0.08 mmol, 0.20 equiv) and phenyl disulfide (12.0 mg, 0.06 mmol, 0.10 equiv) were added to the reaction, which was stirred under blue LEDs for another 14 h. After that, another aliquot was retrieved and analyzed by 1H NMR, which showed complete consumption of the starting material. The volatiles were removed from the reaction and the residue was redissolved in a minimum of CH2Cl2. The crude material was then subjected to column chromatography (100% hexanes, 1 CV; 0 → 70% EtOAc in hexanes, 10 CV; 70% EtOAc, 2 CV). The volatiles were removed under reduced pressure, yielding a faint-yellow solid, which was further dried for 18 h. Yield after drying: 0.140 g (80%). After 5 min, the reaction was removed from the ice bath and allowed to stir at room temperature for 2 h. After that, the volatiles were evaporated using a rotavap. To help remove most of the pinacol side-product, the crude material was redissolved in MeOH (10 mL) and water (5 mL) and the volatiles were evaporated. This process was repeated once more. To the resulting material, acetone (10 mL) was added to create a white cloudy suspension, which was stirred for 30 min. After that, the reaction was filtered through a PTFE filter (0.45 µm) into a 20 mL scintillation vial. To the stirring colorless solution, tetrabutylammonium chloride was added (0.166 g, 0.597 mmol, 1.00 equiv), immediately forming a white precipitate. The reaction was stirred at room temperature for 30 min, after which it was filtered through a small pad of silica. The silica was further washed with acetone (ca. 3 mL) and the volatiles were removed from the resulting filtrate on the rotavap, leaving a sticky colorless oil. Et2O (5 mL) was added to the crude material and the mixture was stirred vigorously for 5 min. The supernatant was carefully removed with a pipette and the process was repeated once more with hexanes (5 mL). After pulling vacuum, a white solid remained, which was dried for a further 18 h. Yield after drying: 0.210 g (68%). Tetrabutylammonium trifluoro(4-(2-oxohexahydrobenzo[d]oxazol-3(2H)-yl)phenyl)borate (12). In the glovebox, compound 14 (0.206 g, 0.391 mmol, 1.00 equiv), camphorquinone (19.4 mg, 0.177 mmol, 0.500 equiv) and phenyl disulfide (16.0 mg, 0.073 mmol, 0.200 equiv) were combined as solids in a 20 mL scintillation vial containing a PTFE-coated stir bar. CH2Cl2 (5 mL) was added, and the vial was capped and sealed with electrical tape. The reaction was brought outside the glovebox and irradiated using a Kessil A160WE Tuna Blue LED lamp under fan cooling. After 24 h, the reaction was brought back into the glovebox and an aliquot was retrieved for NMR analysis, which showed complete consumption of the starting material. The volatiles were removed from the reaction, CH2Cl2 (2 mL) and hexanes (5 mL) were added, and the reaction was stirred vigorously for ca.
5 min, after which the supernatant was removed. To the crude residue, CH2Cl2 (3 mL) was added and the suspension was filtered through a silica plug. After washing the silica with more CH2Cl2 (3 mL), hexanes (3 mL) was added to the filtrate and the solution was stirred for 5 min. Then the supernatant was removed and the orange sticky solid was dried for 18 h. Yield after drying: 0.150 g (73%). D. Single-Wavelength Kinetic Studies and Transient Absorption Spectroscopy The nanosecond transient absorption (TA) spectroscopy setup was described previously in detail. 3 A Quanta-Ray Nd:YAG laser (SpectraPhysics) provides 3rd harmonic laser pulses at 355 nm with a repetition rate of 10 Hz and a pulse width of ~10 ns (FWHM). A MOPO (SpectraPhysics) was used to provide tunable laser pulses in the visible region. The typical excitation energy was adjusted to ~4 mJ/pulse at 460 nm. Solutions were prepared in the glovebox and passed through a 1.0 cm flow cell (Starna) with a peristaltic pump for spectral acquisition. To extract the rate constants for HAT (kH) and the back reaction (kBR), the TA trace was modeled with the corresponding rate equation. As shown in Figure 4(A,B) of the main text, the signal at 430 nm is due to the amidyl radical exclusively; 4 therefore, the signal can be written as S430nm = ε[1′•], where ε = 4100 M−1 cm−1 is the extinction coefficient of the amidyl radical at 430 nm, determined from previous studies. 4 E. NMR Study of the Ground-State Association Between CQ and 1 Solutions of 1 (2 mM) and varying amounts of CQ (0, 20, 30, 40, and 50 mM) were prepared in anhydrous DCM-d2. The association constant (Ka) between CQ and 1 in DCM-d2 was determined using 1H NMR spectroscopy by plotting [CQ]/Δδ against [CQ] and calculating Ka = slope/intercept, where Δδ = δ1 − δobs is the difference in chemical shifts of the N-H proton of 1 by itself (δ1) and of 1 in the presence of added CQ (δobs). 4,5 F. Steady-State Stern-Volmer Studies Fluorescence was monitored on a QM4 fluorometer (Photon Technology International). Different samples were obtained by sequentially diluting a stock solution of the quencher and photocatalyst with a solution containing only the photocatalyst, and transferred into 1 cm quartz cuvettes (Starna) for measurement. Steady-state quenching studies were performed using the peak phosphorescence intensity with excitation at 450 nm. Samples were exposed to air after the measurements in order to fully quench the phosphorescence. The resulting fluorescence spectrum was subtracted from the total emission spectra in order to obtain the phosphorescence-only spectra. G. Photochemical CQ- and Ketone-Mediated Intramolecular Hydroamidation A mixture of CQ (100 µL of a stock solution of 0.100 g CQ in 3 mL CD2Cl2, 0.02 mmol, 20 mol%), disulfide (0.01 mmol, 10 mol%), 1,4-bis(trifluoromethyl)benzene or 1,3,5-tris(trifluoromethyl)benzene as an internal standard, and amide substrate (0.10 mmol) was diluted with 0.88 mL CD2Cl2 to give a final concentration of 100 mM substrate. The reaction solution was transferred to a J-Young NMR tube, which was taken to the spectrometer to establish the starting ratio of substrate to internal standard. The reaction was then irradiated using a Kessil A160WE Tuna Blue LED lamp under fan cooling. After 24 h, the reaction yield was determined by 1H NMR spectroscopy.
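The association constant in Section E above is obtained by plotting [CQ]/Δδ against [CQ] and taking Ka = slope/intercept. The sketch below performs that linear fit; the chemical-shift values are hypothetical placeholders, not the measured titration data.

import numpy as np

def ka_from_nmr_titration(cq_conc_M, delta_obs_ppm, delta_free_ppm):
    """Ka (M^-1) from a [CQ]/d_delta vs [CQ] linear fit, with Ka = slope/intercept (Section E)."""
    cq = np.asarray(cq_conc_M, dtype=float)
    d_delta = np.abs(delta_free_ppm - np.asarray(delta_obs_ppm, dtype=float))  # magnitude of the N-H shift change
    y = cq / d_delta
    slope, intercept = np.polyfit(cq, y, 1)
    return slope / intercept

if __name__ == "__main__":
    # Hypothetical titration of amide 1 (2 mM) with CQ in DCM-d2 (not the real data).
    cq = [0.020, 0.030, 0.040, 0.050]            # M
    d_obs = [7.25, 7.22, 7.20, 7.18]             # ppm, hypothetical observed N-H shifts
    print(f"Ka ~ {ka_from_nmr_titration(cq, d_obs, delta_free_ppm=7.35):.1f} M^-1")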
H. Quantum Yield Measurements Determination of the photon flux at 467 nm. A 0.15 M solution of ferrioxalate was prepared by dissolving potassium ferrioxalate hydrate (2.210 g) in H2SO4 (30 mL of a 0.05 M solution). A buffered solution of 1,10-phenanthroline was prepared by dissolving 1,10-phenanthroline (0.050 g) and sodium acetate (11.25 g) in H2SO4 (50.0 mL of a 0.5 M solution). Both solutions were stored in the dark. To determine the photon flux of the LED (Kessil PR160-467nm), the ferrioxalate solution (3.0 mL) was placed in a cuvette and irradiated for 20 seconds at λmax = 467 nm. After irradiation, the phenanthroline solution (0.53 mL) was added to the cuvette and the mixture was allowed to stir in the dark for 1 h to allow for complete coordination of ferrous ions to the phenanthroline. The absorbance of the solution was measured at 510 nm. A non-irradiated sample was also similarly prepared, and its absorbance measured at 510 nm. The difference in absorbance between the irradiated solution and the dark solution (Δ) was calculated and used to determine the yield of Fe2+ according to mol Fe2+ = (V × Δ)/(l × ε), where V is the total volume (0.00353 L) of the solution after addition of phenanthroline, Δ is the difference in absorbance at 510 nm between the irradiated and non-irradiated solutions containing added 1,10-phenanthroline, l is the path length (1.00 cm), and ε is the molar absorptivity of the ferrioxalate actinometer at 510 nm (11100 L mol−1 cm−1). The fraction of light absorbed (f) at 467 nm by the pure ferrioxalate actinometer was calculated using equation (2), f = 1 − 10^(−A467nm), and the photon flux was then obtained as photon flux = mol Fe2+/(Φ × t × f), where Φ is the quantum yield of the ferrioxalate actinometer at 467 nm and t is the time the actinometer was irradiated. Quantum yield measurement for hydroamidation of 1 with camphorquinone. A reaction mixture of 1 (0.148 g, 0.500 mmol), camphorquinone (17.2 mg, 0.103 mmol, 20 mol%), diphenyl disulfide (13.2 mg, 0.0604 mmol, ca. 10 mol%) and 1,4-bis(trifluoromethyl)benzene as an internal standard was dissolved in CD2Cl2 (5 mL). An aliquot (1 mL) was transferred to a J-Young NMR tube, which was taken to the spectrometer to establish the starting ratio of substrate to internal standard. The remaining solution (4 mL) was transferred to a cuvette containing a stir bar, which was capped, sealed with electrical tape and brought outside the glovebox to a darkroom. The reaction was then irradiated using a Kessil PR160-467nm LED lamp for 30 min. The reaction yield was determined by 1H NMR spectroscopy against the internal standard. The reaction quantum yield was calculated using equation (4), Φrxn = (mol product)/(photon flux × t × f′), where t is the reaction time and f′ is the fraction of light absorbed by camphorquinone at 467 nm (calculated as in equation (2); A467nm = 0.85).
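The actinometry arithmetic above (moles of Fe2+, fraction of light absorbed, photon flux, and the reaction quantum yield) can be chained as in the sketch below. The equation structure follows the standard ferrioxalate procedure as reconstructed above; the only inputs taken from the text are V = 0.00353 L, l = 1.00 cm, ε = 11100 L mol−1 cm−1 and A467nm = 0.85 for CQ, while the remaining numbers (ΔA, actinometer quantum yield, product amount) are placeholders.

def moles_fe2(delta_abs_510, volume_L=0.00353, path_cm=1.00, eps_510=11100.0):
    """Moles of Fe2+ formed: n = V * dA / (l * eps)."""
    return volume_L * delta_abs_510 / (path_cm * eps_510)

def fraction_absorbed(absorbance):
    """Fraction of incident light absorbed: f = 1 - 10**(-A)."""
    return 1.0 - 10.0 ** (-absorbance)

def photon_flux(n_fe2_mol, phi_actinometer, t_irr_s, f_actinometer):
    """Photon flux (einstein/s) = n(Fe2+) / (phi_act * t * f)."""
    return n_fe2_mol / (phi_actinometer * t_irr_s * f_actinometer)

def reaction_quantum_yield(n_product_mol, flux_einstein_s, t_rxn_s, f_cq):
    """Phi_rxn = n(product) / (flux * t * f'), with f' the fraction absorbed by CQ."""
    return n_product_mol / (flux_einstein_s * t_rxn_s * f_cq)

if __name__ == "__main__":
    # Placeholder dA, actinometer quantum yield and product amount; not quoted in this text.
    flux = photon_flux(moles_fe2(delta_abs_510=0.45), phi_actinometer=0.8,
                       t_irr_s=20.0, f_actinometer=fraction_absorbed(2.0))
    phi = reaction_quantum_yield(n_product_mol=5.0e-6, flux_einstein_s=flux,
                                 t_rxn_s=30 * 60, f_cq=fraction_absorbed(0.85))
    print(f"photon flux ~ {flux:.2e} einstein/s, Phi_rxn ~ {phi:.2f}")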
Figure S6. 1H NMR (400 MHz, CDCl3) spectrum for N-phenylacetamide-N-d (15). Inset shows the aromatic region for protic (bottom) vs deuterated (top) compounds. Red arrows indicate the disappearance of the N-H signal in the deuterated version. Figure S7. Comparison of the IR spectra for proteo-acetanilide (black trace) and N-phenylacetamide-N-d (green trace) showing a redshift of the N-D stretching frequency relative to the N-H stretching frequency. Figure S8. Electrochemical studies on CQ. (A) Cyclic voltammogram of 2 mM CQ in DCM with 0.1 M [TBA][PF6] as the supporting electrolyte. (B) Spectroelectrochemistry on 2 mM CQ in DCM with 0.1 M [TBA][PF6] as the supporting electrolyte in a 0.5 mm pathlength cell using a Pt mesh working electrode. Figure S9. TA spectra of CQ (10 mM) and phenol (20 mM) in DCM showing the evolution from an initial spectrum dominated by CQ* (orange trace) to one dominated by PhO• (blue trace). λexc = 460 nm. Figure S11. 1H NMR study of the association between amide 1 and CQ. (A) Stacked 1H NMR spectra showing the change in the amide N-H signal of 1 (marked by *) with varying concentrations of added CQ. (B) Plot of [CQ]/Δδ against [CQ] for solutions of 1 with varying amounts of CQ (black circles) and linear fit (solid line). Figure S13. Time traces for the cycloamidation reaction. Time traces for the yield of cyclized product 4 (dashed lines) and % remaining of CQ (solid lines). Black traces are for the reaction performed with PhSSPh and red traces are with (TripS)2. 3 Figure S14. Photoredox intramolecular cycloamidation using various ketones as the photocatalyst. Yields as determined by 1H NMR spectroscopy are denoted in parentheses. *For ketones that absorb poorly in the visible region, a 370 nm LED light source (Kessil) was used in place of the standard blue LEDs. Table S1. Correlation of the quenching rate (kq) of *CQ in DCM with different thermodynamic parameters of the quenchers. Calculated from the Stern-Volmer constant (KSV) using a value of τ = 30.6(0.1) µs for the lifetime of the CQ triplet state, as determined from time-resolved emission spectroscopy (see Figure S12 for Stern-Volmer plots).
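The Table S1 note states that the quenching rate constants were obtained from the Stern-Volmer constants using τ = 30.6 µs for the CQ triplet lifetime; the conversion is simply kq = KSV/τ, sketched below with a hypothetical KSV value.

def kq_from_stern_volmer(ksv_M_inv: float, tau_s: float) -> float:
    """Bimolecular quenching rate constant kq = K_SV / tau (M^-1 s^-1)."""
    return ksv_M_inv / tau_s

if __name__ == "__main__":
    TAU_CQ_TRIPLET = 30.6e-6            # s, from time-resolved emission (Table S1 note)
    ksv_example = 2.5e3                 # M^-1, hypothetical Stern-Volmer constant
    print(f"kq ~ {kq_from_stern_volmer(ksv_example, TAU_CQ_TRIPLET):.2e} M^-1 s^-1")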
3,169.6
2023-11-02T00:00:00.000
[ "Chemistry" ]
Facile Fabrication of Size-Tunable Core/Shell Ferroelectric/Polymeric Nanoparticles with Tailorable Dielectric Properties via Organocatalyzed Atom Transfer Radical Polymerization Driven by Visible Light An unconventional but facile approach to prepare size-tunable core/shell ferroelectric/polymeric nanoparticles with uniform distribution is achieved by metal-free atom transfer radical polymerization (ATRP) driven by visible light at ambient temperature, based on novel hyperbranched aromatic polyamides (HBPA) as a functional matrix. Cubic BaTiO3/HBPA nanocomposites can be prepared by an in-situ polycondensation process with the precursors (barium hydroxide (Ba(OH)2) and titanium(IV) tetraisopropoxide (TTIP)) of ferroelectric BaTiO3 nanocrystals, because the precursors can be selectively loaded into the domain containing the benzimidazole rings. At 1200 °C in an inert environment, the aromatic polyamide coating of the cubic BaTiO3 nanoparticles is carbonized to form a carbon layer, which prevents the regular nanoparticles from aggregating. In addition, the cubic BaTiO3 nanoparticles are simultaneously transformed into tetragonal BaTiO3 nanocrystals after the high-temperature calcination (1200 °C). The outer carbon shell of the tetragonal BaTiO3 nanoparticles is removed via calcination at 500 °C in air. A bi-functional ligand can then modify the surface of the tetragonal BaTiO3 nanoparticles. PMMA polymeric chains are grown from the initiating sites on the ferroelectric BaTiO3 nanocrystal surface via the metal-free ATRP technique to obtain core/shell ferroelectric BaTiO3/PMMA hybrid nanoparticles. Changing the molar ratio between the benzimidazole ring units and the precursors can tune the size of the ferroelectric BaTiO3 nanoparticles in the polycondensation process, and the thickness of the polymeric shell can be tailored by changing the white LED irradiation time in the organocatalyzed ATRP process. The dielectric properties of the core/shell BaTiO3/PMMA hybrid nanoparticles can also be tuned by adjusting the dimension of the BaTiO3 core and the molecular weight of the PMMA shell. impossible from individual materials alone 1,2 . For example, when the surface of Au nanoparticles is coated by organic ligands comprising thiol groups as a shell, unique optical properties can be generated 3,4 . Besides small-molecule ligands, functional polymeric coatings as shells on core nanoparticle surfaces are also widely used to prepare interfaces with special properties and characteristics that make them interact with a specific environment 5,6 . For example, the coating shell can be utilized to reduce undesirable interactions or enhance desired interactions 7 . Different from inorganic shells grown from the surface of nanoparticles, various approaches have been utilized for the preparation of functional polymer shells, such as in situ polymerization from the functional surface of nanoparticles 8 , directly attaching functional polymer ligands onto the surfaces of core nanoparticles 2,9 , layer-by-layer functional polymer deposition 10,11 , and in situ fabrication of inorganic nanoparticles using functional polymeric chains as ligands. However, in all of these strategies, it is challenging to control the growth of the functional polymeric shell 12 . Owing to their room-temperature ferroelectric behavior and high permittivity 13 , BaTiO3 nanomaterials are among the most widely investigated ferroelectric materials 14,15 . BaTiO3 nanomaterials can be used in various fields, such as multilayer capacitors, electro-optical devices, actuators and so on 16 .
The ferroelectric behavior and dielectric properties of BaTiO3 materials depend heavily on morphology, structure, nanoparticle size, crystalline structure, surface chemistry and so on 16,17 . In many cases, nanomaterials with low dielectric loss, high energy-storage capability and high permittivity are highly desirable, especially for applications in modern electric and electronics fields 17 . Core/shell BaTiO3/polymer nanoparticles combine the characteristics of BaTiO3 materials and a polymeric shell, showing high permittivity, low dielectric loss and easy processing 18 . However, in conventional routes, such as melt blending or solution mixing, many issues can arise, such as inhomogeneity and aggregation, resulting in undesirable properties 18 . A facile approach to fabricate size-tunable core/shell ferroelectric/polymeric nanoparticles with uniform distribution is therefore of great interest for the variety of application areas mentioned above. Hyperbranched aromatic polyamides (HBPA), possessing a three-dimensional molecular architecture with a highly branched backbone and many terminal functional groups, exhibit characteristics different from their linear polymer analogues, such as high solubility and low solution viscosity 19 . In addition, aromatic polyamides have been broadly utilized as high-performance polymeric materials in the electronics and aerospace fields owing to their excellent properties, such as excellent mechanical properties, high thermal stability, low relative permittivity, low dielectric constant, low thermal expansion, high breakdown voltage, long-term stability, good hydrolytic stability and so on 20 . Unfortunately, most linear aromatic polyamides are generally characterized by poor processability 19 . It is therefore worthwhile to bring hyperbranched structures into aromatic polyamides to improve the poor fabricability caused by the rigid repeating unit of linear aromatic polyamides. Furthermore, benzimidazole rings are widely utilized in the preparation of HBPA because of their stability, asymmetric structure, stiffness and metal-ion complexing ability 19 . Due to their unique chemical structure and synthesis conditions, HBPA containing benzimidazole rings are an outstanding choice as a functional polyamide matrix, and as the source of the subsequent protective carbon coating, for the preparation of tetragonal BaTiO3 nanocrystals 21 . Over the past several decades, owing to the ability to precisely design and synthesize a variety of polymers, atom transfer radical polymerization (ATRP) techniques have been among the most effective approaches 22 . In traditional ATRP techniques, transition-metal catalysts were used to mediate the redox equilibrium process 23,24 , introducing catalyst purification challenges and contamination and impeding their wide application in biomaterials, microelectronics, functional inorganic/organic core/shell nanocomposites, etc. 25 . Despite many investigations into reducing the catalyst loading and facilitating metal catalyst removal 25 , it is still a challenge to use conventional ATRP extensively for the preparation of core/shell inorganic/polymer hybrid nanoparticles. A metal-free ATRP technique remains highly desirable to circumvent the removal of the metal catalyst, avoid contamination and reduce toxicity concerns [26][27][28] .
Here we report an unconventional but facile approach to fabricate size-tunable core/shell ferroelectric/polymeric nanoparticles with uniform distribution via metal-free ATRP driven by visible light at ambient temperature, based on novel HBPA as a functional matrix. Ba(OH)2 and TTIP were used as precursors of the ferroelectric BaTiO3 nanocrystals; they were selectively loaded into the domain containing benzimidazole rings through coordination interactions between the benzimidazole rings and the precursors 19,29 , and then equimolar trimesic acid and 2-(4-aminophenyl)-1H-benzimidazol-5-amine as monomers were subjected to in-situ polycondensation with the precursors to form BaTiO3 nanoparticles embedded in the HBPA matrix. At 1200 °C under an inert environment, the aromatic polyamide coating of the cubic BaTiO3 nanoparticles was carbonized to form a carbon layer, which acted as a protecting shell to prevent the nanoparticles from aggregating. In addition, the cubic BaTiO3 nanoparticles were simultaneously transformed into tetragonal BaTiO3 after the 1200 °C calcination. The carbon layer on the surface of the tetragonal BaTiO3 nanoparticles was then removed via calcination at a relatively low temperature (500 °C in air). In addition, the bi-functional ligands used as the metal-free ATRP initiator were synthesized by modifying the hydroxyl group of 12-hydroxydodecanoic acid with 2-bromophenylacetyl bromide. The bi-functional ligands were then used for the surface modification of the tetragonal BaTiO3 nanocrystals. PMMA polymeric chains were grown from the initiating sites on the ferroelectric BaTiO3 nanocrystal surface by initiating the polymerization of methyl methacrylate (MMA) monomers via the metal-free ATRP technique, with 5,10-di(1-naphthyl)-5,10-dihydrophenazine as an organic photocatalyst under white LED irradiation at ambient temperature, to obtain core/shell ferroelectric BaTiO3/PMMA hybrid nanoparticles composed of ferroelectric BaTiO3 nanocrystals as the core and PMMA polymeric chains as the shell with different dimensions. The dimensions of the ferroelectric BaTiO3 nanoparticles can be adjusted by changing the molar ratio between the benzimidazole ring units and the precursors in the polycondensation process, and the thickness of the polymeric shell can also be tailored by changing the white LED irradiation time within the ATRP process. The dielectric properties of the core/shell BaTiO3/PMMA hybrid nanoparticles are tunable by adjusting the dimension of the BaTiO3 core and the molecular weight of the PMMA shell. Characterizations. (4) TEM characterization samples of the core/shell ferroelectric BaTiO3/PMMA hybrid nanoparticles were prepared by placing a drop of the nanoparticle toluene solution (volume: ~10 μL; c: 1 mg/mL) on regular TEM grids and then drying at room temperature. In addition, for the TEM characterization of the polymeric shell on the surface of the BaTiO3 nanoparticles, the PMMA shell was stained with RuO4 (ruthenium tetraoxide). X-ray diffraction (XRD) characterization was used to confirm the crystal structures of the samples (SCINTAG XDS-2000; Cu Kα radiation). In addition, the morphology of the nanocomposites and the energy dispersive spectroscopy (EDS) characterization of the samples were carried out via field emission scanning electron microscopy (FE-SEM; FEI Quanta 250). The molecular weight (MW) and polydispersity index (PDI) of the PMMA grafting chains were characterized by GPC (Agilent 1100 with a G1310A pump, a G1314A variable wavelength detector and a G1362A refractive index detector). THF was used as the eluent at 35 °C (1.0 mL/min).
All the columns, composed of two 5 μm LP gel mixed-bed columns (molecular range: 200-3 × 10^6 g/mol) and one 5 μm LP gel column (500 Å, molecular range: 500-2 × 10^4 g/mol), were calibrated with PS standard samples. The weight fractions of the organic shell in the tetragonal BaTiO3 nanocrystals coated with the metal-free ATRP initiators and in the core/shell ferroelectric BaTiO3/PMMA hybrid nanoparticles were measured via TGA (thermogravimetric analysis; TA Instruments TGA Q 50). In order to measure the dielectric properties of the core/shell tetragonal BaTiO3/PMMA hybrid nanoparticles and the corresponding PMMA shell in the microwave frequency range, all samples were compressed into a toroidal shape (inner diameter: 3.00 mm; outer diameter: 7.00 mm). The complex permittivity of the samples was characterized by a Vector Network Analyzer (Anritsu 37347C) incorporating an S-parameter test set. The S-parameters were measured using the coaxial transmission/reflection method and converted to complex permittivity via the Nicholson-Ross-Weir algorithm 30 . Preparation of cubic BaTiO3/HBPA nanocomposites. As shown in Figs 1 and S1, cubic BaTiO3/hyperbranched polyamide (HBPA) nanocomposites with benzimidazole rings were synthesized through a one-step procedure 14 with equimolar monomers of trimesic acid (TMA) and 2-(4-aminophenyl)-1H-benzimidazol-5-amine (APBIA). The NMP solution of TMA (10 mmol) and APBIA (10 mmol) was added to a dry 250 mL flask (three-necked, round-bottom) with a magnetic stirrer and a condenser. Triphenyl phosphite and pyridine as the condensing agents were added into the flask. The reaction mixture was then heated to reflux under a nitrogen atmosphere in an oil bath at 90 °C. After 3 h, Ba(OH)2 (10 mmol) and TTIP (10 mmol) were added into the reaction solution (molar ratio of precursors to benzimidazole ring = 1:1). After another 1 h of reaction, pale yellow precipitates of cubic BaTiO3/HBPA were formed by slowly pouring the reaction solution into methanol under uniform stirring. The precipitates were purified by washing successively with methanol and water, and then dried to constant weight in a vacuum oven at 80 °C. Synthesis of carbon-capped tetragonal BaTiO3 nanoparticles. After 2 h of calcination at 1200 °C under an argon atmosphere, the cubic BaTiO3 nanoparticles embedded in the HBPA matrix were transformed into tetragonal BaTiO3 nanoparticles, while the outer HBPA coating was transformed into a carbon layer acting as a protecting shell of the BaTiO3 nanoparticles to prevent the nanocrystals from forming larger irregular structures during the calcination process. Preparation of carbon-free tetragonal BaTiO3 nanoparticles. The carbon layer of the core/shell tetragonal BaTiO3/carbon nanoparticles prepared in the previous step was removed by calcination at 500 °C (5 h) under air. Since the tetragonal BaTiO3 nanoparticles had a thermodynamically stable crystalline structure, their shapes were retained after the 500 °C calcination. The color of the nanoparticles changed from black to gray as the carbon layer was removed. Synthesis of metal-free ATRP initiators with bi-functional groups. The metal-free ATRP initiators with bi-functional groups 7 were synthesized by modifying the hydroxyl group in 12-hydroxydodecanoic acid with 2-bromophenylacetyl bromide (Fig. S5). The typical process is as follows: anhydrous 12-hydroxydodecanoic acid (12 mmol) was dissolved in anhydrous 1-methyl-2-pyrrolidinone (NMP, 120 mL) and then cooled to 0 °C.
2-Bromophenylacetyl bromide (80 mL) was added dropwise into the reaction solution with magnetic stirring. After that, the reaction temperature was maintained at 0 °C for 2 h and then slowly increased to room temperature. The reaction solution was kept at ambient temperature to react for another 24 h. The as-prepared brown solution was concentrated by vacuum distillation. The resulting crude product was diluted with 200 mL dichloromethane, followed by washing with DI water (3 × 100 mL). The organic layer was concentrated by vacuum distillation to obtain the final bi-functional metal-free ATRP initiators. Fabrication of tetragonal BaTiO3 nanoparticles coated with metal-free ATRP initiators. Preparation of tetragonal BaTiO3 nanoparticles coated with metal-free ATRP initiators was carried out by functional ligand absorption. In a typical process, tetragonal BaTiO3 nanoparticles without carbon coating (100 mg) were first dispersed in toluene (100 mL) with an ultrasonic instrument for 2 h. Subsequently, the bifunctional metal-free ATRP initiator (50 mg) was dissolved into the solution system by another 2 h of ultrasonic dispersion to obtain tetragonal BaTiO3 nanoparticles capped with metal-free ATRP initiating sites. Finally, centrifugation was used to remove the excess ligands (10000 rpm, 20 min). Owing to the bi-functional ligands acting as a shell layer, the tetragonal BaTiO3 nanoparticle/ligand system can be dissolved in an organic solvent (e.g., toluene) to form a uniform solution. In order to investigate the stability of the bi-functional ligands on the surface of the BaTiO3 nanoparticles in an organic solvent (e.g., toluene), 20 mL of a toluene solution of BaTiO3 nanoparticles coated with bi-functional ligands was prepared (1 mg/mL). After the solution was vigorously stirred for 48 h at room temperature, the BaTiO3 nanoparticles coated with bi-functional ligands were recovered by centrifugation (10000 rpm, 20 min), and the residual solution was characterized by 1H-NMR after the toluene solvent was completely removed under vacuum. At the same time, the BaTiO3 nanoparticles coated with bi-functional ligands were characterized by TGA again to confirm the weight fraction of the bi-functional ligands. Fabrication of core/shell tetragonal BaTiO3/PMMA hybrid nanoparticles by metal-free ATRP driven by visible light. Core/shell tetragonal BaTiO3/PMMA hybrid nanoparticles were fabricated by metal-free ATRP driven by white LED light 31 . PMMA polymeric chains were grown from the initiating sites on the surface of the tetragonal BaTiO3 by the living polymerization of MMA monomers; 5,10-di(1-naphthyl)-5,10-dihydrophenazine was used as the photocatalyst under white LED light irradiation at room temperature. In a typical procedure, an ampule charged with a small stir bar, MMA (8 mL), visible-light photocatalyst (0.1 mol%), BaTiO3 nanoparticle-based initiators (50 mg), and 8 mL N,N-dimethylacetamide (DMA) was degassed via three freeze-thaw cycles in liquid nitrogen, then sealed at ambient temperature. The reaction was vigorously stirred in front of the white LED while cooling with compressed air to maintain ambient temperature. The ampule was taken out from the white LED irradiation at the desired times to stop the polymerization. The mixture solution was then diluted with acetone and precipitated in the mixed solvent (methanol/water, v/v = 1/1). After centrifugation, the final product was purified via dissolution/precipitation twice with acetone and methanol/water, and then dried at 50 °C in vacuum for 12 h.
Control experiments by metal-free ATRP driven by visible light. In order to compare with the initiators on the surface of the tetragonal BaTiO3 nanoparticles, free bi-functional initiators were added to the polymerization reaction system to initiate MMA monomers and grow free PMMA polymeric chains via the metal-free ATRP process, with 5,10-di(1-naphthyl)-5,10-dihydrophenazine used as the photocatalyst under white LED light irradiation at room temperature. In a typical procedure, an ampule charged with MMA (8 mL), 5,10-di(1-naphthyl)-5,10-dihydrophenazine (0.1 mol%), BaTiO3 nanoparticle-based initiators (50 mg), free bi-functional ligands (10 mg), and 8 mL N,N-dimethylacetamide (DMA) was degassed via three freeze-thaw cycles in liquid nitrogen, then sealed at ambient temperature. The reaction was then vigorously stirred in front of the white LED while cooling with compressed air to maintain ambient temperature. The ampule was taken out from the white LED irradiation at different times to stop the polymerization. After centrifugation, the BaTiO3-based nanoparticles were removed, and the as-prepared solution was diluted with acetone and then precipitated into the mixed solvent (methanol/water, v/v = 1/1). After filtration, the free PMMA polymers were purified via dissolution/precipitation twice with acetone and methanol/water, and then dried at 50 °C in vacuum for 12 h. Detachment of PMMA chains from the surface of BaTiO3 nanoparticles for measuring the molecular weight of the PMMA grafting chains. PMMA polymers as grafting chains on the surface of the BaTiO3 nanoparticles were detached by dispersing the core/shell tetragonal BaTiO3/PMMA hybrid nanoparticles in pyridine: 0.2 g of core/shell BaTiO3/PMMA hybrid nanoparticles (sample in Fig. 5) was dissolved in 50 mL of pyridine. The mixture solution was stirred at 100 °C for 24 h. After the reaction, the resulting BaTiO3 without PMMA gradually precipitated in the pyridine. After filtration, the resulting solution was concentrated to dryness, and the polymers were then dissolved in acetone, followed by precipitation in the mixed solvent (methanol/water, v/v = 1/1). The PMMA polymers were purified by dissolution/precipitation twice with acetone and methanol/water, and then dried at 50 °C in vacuum for 12 h. Results and Discussion As shown in Figs 1 and S1, hyperbranched polyamide (HBPA) with benzimidazole rings was synthesized through a one-step procedure with equimolar monomers of trimesic acid and 2-(4-aminophenyl)-1H-benzimidazol-5-amine. The whole polycondensation process was easily conducted in a homogeneous solution 32 . Moreover, it is noteworthy that the solvent used during the polymerization process can dissolve Ba(OH)2 and TTIP (the precursors of the BaTiO3 nanoparticles). The reaction solution was slowly heated to 90 °C and refluxed for 3 h. After that, Ba(OH)2 (10 mmol) and TTIP (10 mmol) as precursors were added into the reaction solution (molar ratio of precursors to benzimidazole ring = 1:1). After another 1 h of reaction, the pale yellow precipitates of cubic BaTiO3/HBPA were formed, and the cubic BaTiO3 nanoparticles were encapsulated in the HBPA functional matrix. The chemical structure of HBPA was confirmed by FT-IR, 1H and 13C-NMR analyses, and the characterization results are shown in Figs S2-S4, respectively. Firstly, SEM characterization was used to examine the morphology of the cubic BaTiO3/HBPA nanocomposites, and Fig. S6 shows the SEM images of the sample.
Clearly, nonuniform submicrospheres were observed. In contrast, the internal structures of the cubic BaTiO3/HBPA nanocomposites were investigated and confirmed by TEM characterization. The dark dots appearing in the TEM images correspond to BaTiO3 nanoparticles in the HBPA matrix (average diameter: 18.1 ± 1.9 nm), as shown in Fig. 2. Besides, X-ray diffraction (XRD) measurement was applied to characterize the crystalline phase of the BaTiO3/HBPA nanocomposites, with the diffraction patterns shown in Figs S7 and S9(A). According to the XRD patterns (patterns with 2θ = 20-60° are shown in Fig. S7; patterns with 2θ = 43-47° are shown in Fig. S9(A)), a single peak with 2θ at around 43-47° ((200) lattice plane) can be observed, suggesting the cubic crystalline phase of the BaTiO3 nanoparticles 13 . In Fig. 1, the HBPA coating layer of the cubic BaTiO3 nanoparticles was easily carbonized under argon (Ar) at 1200 °C to form a carbon coating layer as a protecting shell of the BaTiO3 nanoparticles, thereby preventing the BaTiO3 nanoparticles from aggregating into irregular larger structures. Meanwhile, after the high-temperature calcination, the crystalline structure of the BaTiO3 nanoparticles changed from cubic to tetragonal. The morphology of the tetragonal BaTiO3 nanocrystals capped by the carbon layer was observed in the TEM images in Fig. 3. By comparing the TEM characterization results of the BaTiO3 nanoparticles before and after the high-temperature calcination, it was confirmed that the size distribution of the nanoparticles was non-uniform due to the irregular carbon coating. The architecture of the carbon-coated BaTiO3 nanocrystals can also be observed more clearly by high-resolution TEM (HR-TEM). Clearly, the crystalline lattices of the tetragonal BaTiO3 nanoparticles are shown in the inner dashed circle of Fig. 3(D), and the irregular amorphous carbon coating is shown in the outer dashed circle. The TEM images indicate that the average diameter of the tetragonal BaTiO3 nanocrystals, including the outer irregular carbon coating, is about 18 nm. Furthermore, in order to confirm the crystal architecture of the tetragonal BaTiO3 nanoparticles, X-ray diffraction (XRD) measurements were also carried out, as shown in Figs S8 and S9(B). The XRD curves clearly show strong diffraction peaks, and the single peak (2θ = 43-47°) splits into two peaks 33,34 . Detailed refinement of the crystal architecture indicates that these as-prepared BaTiO3 nanoparticles are largely tetragonal, including some detectable orthorhombic phase. In addition, energy dispersive spectroscopy (EDS) analysis was also conducted to determine the composition of the tetragonal BaTiO3 nanoparticles capped by the carbon layer (Fig. S10). The carbon coating of the tetragonal BaTiO3 nanoparticles needs to be removed, by calcination at a relatively low temperature under air (500 °C), prior to the preparation of the final core/shell nanoparticles. Owing to their thermodynamic stability, the shapes of the tetragonal BaTiO3 nanocrystals were almost retained after the 500 °C calcination in air 14 . After removing the carbon coating layer, the color of the tetragonal BaTiO3 nanocrystal powder turned gray, as shown in the insets of Fig. 4(A,B). The tetragonal BaTiO3 nanocrystals were characterized by TEM after removal of the carbon coating (Fig. 4(A,B)) to compare the morphology under the different conditions. According to the TEM images, the nanocrystals, with an average diameter of about 17 nm, are uniform.
In addition, the composition of the nanocrystals was further determined by EDS microanalysis after removal of the carbon layer (Fig. S11). On the basis of the EDS characterization, it was found that the carbon shell layer had almost completely disappeared after the 500 °C calcination under air for 5 h. Surface modification of these tetragonal BaTiO3 nanocrystals without the carbon coating layer is necessary to prepare tetragonal BaTiO3 nanoparticles coated with metal-free ATRP initiators; the whole process is depicted in Fig. 1. The tetragonal BaTiO3 nanoparticles were dispersed in toluene by ultrasonication, followed by the addition of the bifunctional ligands to the toluene solution. Owing to the coordination interaction between the carboxyl group of the bifunctional ligands and the surface metal atoms of the tetragonal BaTiO3 nanoparticles, tetragonal BaTiO3 nanoparticles covered with metal-free ATRP initiating sites were formed by tethering the bi-functional ligands onto the surface of the nanoparticles, which showed a compact comb-like structure [35][36][37][38] . BaTiO3 nanocrystals without any ligands precipitated easily in organic solvents (e.g., toluene), while the tetragonal BaTiO3 nanoparticles coated with metal-free ATRP initiating sites could be easily dissolved in organic solvent (e.g., toluene) (Fig. 4(C)). The morphology of the tetragonal BaTiO3/bifunctional ligand nanocrystals was investigated by TEM measurements, and representative TEM and HR-TEM images are shown in Fig. 4(C,D). Analysis of the TEM images confirms that the average size of the tetragonal BaTiO3 nanocrystals is 17.1 ± 1.9 nm, smaller than the average diameter of the cubic BaTiO3 nanoparticles because of the crystalline transformation and perfection. For the determination of the presence of the metal-free ATRP initiators on the surface of the tetragonal BaTiO3, TGA was also applied (Fig. S12(A)). The weight fraction of the metal-free ATRP initiators was confirmed to be 4.6% (areal density of initiators on the surface of the BaTiO3 nanoparticles: 2.09/nm2). For investigating the stability of the bi-functional ligands on the surface of the BaTiO3 nanoparticles in organic solvent (e.g., toluene), a toluene solution of BaTiO3 nanoparticles coated with bi-functional ligands was vigorously stirred for 48 h at room temperature. After the BaTiO3 nanoparticles coated with bi-functional ligands were recovered by centrifugation, no bi-functional ligand was detected in the residual solution. In addition, the TGA curve (Fig. S12(B)) is almost the same as that of the freshly prepared sample (almost the same weight fraction of the bi-functional ligands). These experimental results suggest that desorption of the bi-functional ligands does not occur when the BaTiO3/bi-functional ligand nanoparticles are dispersed in solvent. Thereafter, the metal-free ATRP initiators on the surface of the tetragonal BaTiO3 nanoparticles were utilized to initiate the polymerization of MMA monomers under white LED irradiation at ambient temperature for the preparation of core/shell tetragonal BaTiO3/PMMA nanoparticles 27 . During the metal-free ATRP process, 5,10-di(1-naphthyl)-5,10-dihydrophenazine was used as an organic photocatalyst (PC). Our proposed mechanism for the initiation of MMA polymerization postulates reversible electron transfer (ET) from the photoexcited PC for reversibly activating an alkyl bromide initiator (Fig. 5) 27 .
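The initiator surface coverage quoted above (4.6 wt% organic fraction, reported as 2.09 initiators/nm2) can be related to the TGA weight fraction, the ligand molar mass, and the specific surface area of the cores. The sketch below shows that conversion under idealized assumptions: smooth spherical cores, an assumed BaTiO3 density of ~6.0 g/cm3, and a ligand molar mass of ~413 g/mol estimated from the assumed structure of the bi-functional initiator; with these assumptions the result is of the same order of magnitude as, but not identical to, the reported value, so it should be read only as an order-of-magnitude check.

AVOGADRO = 6.022e23

def initiator_areal_density(w_organic, d_core_nm, rho_core_g_cm3, M_ligand_g_mol):
    """Initiators per nm^2 from a TGA organic weight fraction, assuming smooth spherical cores."""
    rho_g_nm3 = rho_core_g_cm3 * 1e-21                  # g/cm^3 -> g/nm^3
    ssa_nm2_per_g = 6.0 / (rho_g_nm3 * d_core_nm)       # surface area per gram of core
    ligand_per_g_core = (w_organic / (1.0 - w_organic)) / M_ligand_g_mol * AVOGADRO
    return ligand_per_g_core / ssa_nm2_per_g

if __name__ == "__main__":
    # Assumed inputs: rho(BaTiO3) and M(ligand) are estimates, not values from the paper.
    sigma = initiator_areal_density(w_organic=0.046, d_core_nm=17.1,
                                    rho_core_g_cm3=6.0, M_ligand_g_mol=413.0)
    print(f"~{sigma:.1f} initiators/nm^2 (same order of magnitude as the reported 2.09/nm^2)")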
Besides the requirement for an excited triplet state (3PC*) with sufficiently strong reducing ability to activate the ATRP initiating sites, it is necessary to balance the interplay between the ability to oxidize the propagating radical and the stability of the radical cation (2PC•+) in order to efficiently deactivate the propagating polymeric chain and yield a controlled radical polymerization 26 . Based on computationally guided discovery 39,40 , we chose 5,10-di(1-naphthyl)-5,10-dihydrophenazine as the PC for the fabrication of the tetragonal BaTiO3/PMMA core/shell nanoparticles. It is worth noting that the phenazine core is shared by several biologically relevant molecules used as redox-active antibiotics 41 , whereas phenazine-based derivatives have attracted considerable attention in the field of organic photovoltaics 42,43 . Although 5,10-di(1-naphthyl)-5,10-dihydrophenazine has been investigated as a photocatalyst for organocatalyzed radical polymerizations 27 , metal-free ATRP driven by visible light using 5,10-di(1-naphthyl)-5,10-dihydrophenazine as the photocatalyst was applied here for the first time to the fabrication of size-tunable core/shell ferroelectric/polymeric nanoparticles. In the conventional ATRP process, a variety of ligated metal catalysts, such as Cu(I), Fe(II), Ru(II) and so on, are usually utilized to mediate the redox equilibrium process. Nevertheless, for the fabrication of functional inorganic/organic nanocomposites by ATRP techniques, a crucial limiting factor is the contamination by, and purification of, the metal catalysts. In our fabrication approach, the PMMA shell thickness can be readily controlled by tuning the white LED irradiation time during the metal-free ATRP process (Fig. S13). Firstly, the tetragonal BaTiO3/bi-functional ligand nanoparticles (average diameter: 17.1 ± 1.9 nm) were used as an example: PMMA chains as the polymeric shell were grown from the surface of the tetragonal BaTiO3 nanocrystals with 5,10-di(1-naphthyl)-5,10-dihydrophenazine as the ATRP photocatalyst under white LED irradiation at ambient temperature (5 h). The architectures of the core/shell tetragonal BaTiO3/PMMA nanoparticles were further investigated using TEM by staining the PMMA chains of the polymeric shell with ruthenium tetraoxide (RuO4) [44][45][46] . A clear ~6 nm shell thickness corresponding to the PMMA chains can be seen in the TEM images (Fig. 6). In addition, to obtain direct information on the PMMA grafting chains, the PMMA shell was detached from the tetragonal BaTiO3 nanocrystal surface by dispersing the core/shell tetragonal BaTiO3/PMMA hybrid nanoparticles in pyridine. According to GPC characterization, a single peak with a narrow distribution (PDI: 1.25) can be observed for the detached polymeric chains (Fig. S14). In addition, the molecular weight of 11,300 g/mol is close to that of the PMMA polymers synthesized from free ATRP initiators (12,950 g/mol). It is worth noting that the existence of the BaTiO3 core has almost no effect on the metal-free ATRP process of the MMA monomers. By analyzing the 1H-NMR and TGA characterization results, it was confirmed that a PMMA polymeric shell exists on the surface of the BaTiO3 core (Figs S15 and S16). The weight fraction of the total organic shell (initiators and PMMA) was determined by TGA (18.4%), and the weight fraction of the PMMA shell was 13.8%.
The PMMA shell thickness can be readily controlled by tuning the white LED irradiation time during the metal-free ATRP, and all results for the PMMA shell are summarized in Table S1 and Table S2. For example, with the white LED irradiation time increased to 20 h, the PMMA shell thickness can be tuned to ~11 nm. In addition, temporal control has been realized by utilizing a pulsed-irradiation sequence (Fig. 7). The visible-light-driven increase in PMMA shell thickness was detected only during irradiation and paused during the dark periods. At the same time, the corresponding molecular weight of the PMMA chains steadily increased with increasing irradiation time (Fig. 7(A) and Table S2). Importantly, the dimensions of the tetragonal BaTiO3 nanoparticles can be tuned by adjusting the molar ratio of benzimidazole ring units to precursors during the polycondensation process (Table S3 and Fig. S17). The size increase of the nanocrystals may be the result of the higher precursor concentration in the reaction solution 29,47 . For instance, when the molar ratio of precursors to benzimidazole ring units was adjusted from 1:1 to 10:1, the average size of the tetragonal BaTiO3 nanoparticles could be tuned to 39.2 ± 4.2 nm with the other conditions fixed (Fig. S18). In order to further confirm that the size of the BaTiO3 nanoparticles is independent of the ATRP process of the MMA monomers, larger BaTiO3 nanoparticles (D: ~39 nm, Sample-5 in Table S3) capped with bi-functional ligands were used as initiators to initiate the polymerization of MMA monomers under different LED irradiation times. Compared with the smaller BaTiO3 nanoparticles (samples in Table S1), the shell thickness is almost the same under the same LED irradiation time (samples in Table S4). The dielectric properties of the core/shell tetragonal BaTiO3/PMMA hybrid nanoparticles and the corresponding PMMA shell are shown in Fig. 8 (frequency range: f = 4-14 GHz). For example, the real part of the permittivity (ε′ = 2.71 ± 0.52) of the PMMA shell with low molecular weight (Sample-2 in Table S2, Mn,GPC = 11.3 kDa) is larger than that of the sample with high molecular weight (Sample-3 in Table S2, Mn,GPC = 23.4 kDa) (ε′ = 2.41 ± 0.34). Due to the higher coiling degree of the longer polymeric chains, the value of ε′ decreases with increasing Mn,GPC of the PMMA shell 48 . The imaginary parts of the permittivity (ε″) of the two samples are almost zero, indicating that the dielectric properties are nearly lossless (Fig. 8(A)). The permittivity values of core/shell nanoparticles with the same shell (Sample-2 in Table S2, Mn,GPC = 11.3 kDa) and different sizes of the core BaTiO3 nanoparticles (~17 nm and ~39 nm; Sample-1 and Sample-5 in Table S3) are shown in Fig. 8(B), and those with the higher-molecular-weight shell (Sample-3 in Table S2, Mn,GPC = 23.4 kDa) are shown in Fig. 8(C). In contrast, it is clear that both ε′ and ε″ are lower than in the case of the lower-molecular-weight PMMA shell. The difference in dielectric properties between the core/shell BaTiO3/PMMA hybrid nanoparticles (Fig. 8(B,C)) composed of the same BaTiO3 core may be attributed to the different molecular weights of the PMMA shell and the different volume fractions of the core nanoparticles arising from the different thicknesses of the PMMA shell. In addition, the difference in the dielectric properties of core/shell BaTiO3/PMMA hybrid nanoparticles composed of the same PMMA shell thickness can be attributed to the size effect of the core BaTiO3 nanocrystals 17,49-51 . In general, the results in Fig. 8 indicate that the dielectric properties of the core/shell BaTiO3/PMMA hybrid nanoparticles can be tuned by adjusting the dimension of the BaTiO3 core and the molecular weight of the PMMA shell.
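The dielectric comparison above depends on the volume fraction of BaTiO3 core set by the core diameter and the PMMA shell thickness. The sketch below gives that fraction for the core sizes (~17 nm and ~39 nm) and shell thicknesses (~6 nm and ~11 nm) quoted in the text; it is an idealized single-particle estimate (concentric spheres, fully dense shell), not a calculation taken from the paper.

def core_volume_fraction(d_core_nm: float, t_shell_nm: float) -> float:
    """Volume fraction of the core in a concentric core/shell sphere."""
    r_core = d_core_nm / 2.0
    r_total = r_core + t_shell_nm
    return (r_core / r_total) ** 3

if __name__ == "__main__":
    for d_core in (17.0, 39.0):
        for t_shell in (6.0, 11.0):
            phi = core_volume_fraction(d_core, t_shell)
            print(f"core {d_core:4.1f} nm, shell {t_shell:4.1f} nm -> core volume fraction ~ {phi:.2f}")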
Conclusion In conclusion, an unconventional but facile approach to prepare size-tunable core/shell ferroelectric/polymeric nanoparticles with uniform distribution was reported, based on metal-free ATRP driven by visible light at ambient temperature and on novel HBPA as the functional matrix. Cubic BaTiO3/HBPA nanocomposites, with HBPA as the matrix and Ba(OH)2 and TTIP as the precursors, were first fabricated by an in-situ direct polycondensation process. In an inert atmosphere, the aromatic polyamide capping layer of the cubic BaTiO3 nanocrystals was readily carbonized by calcination at 1200 °C to form a carbon coating layer on the surface of the BaTiO3 nanoparticles, preventing these nanocrystals from aggregating and merging. The cubic BaTiO3 nanocrystals were simultaneously transformed into tetragonal BaTiO3 after the 1200 °C calcination. The outer carbon layer acting as the shell coating of the tetragonal BaTiO3 nanoparticles was then removed via calcination at a relatively low temperature (500 °C) in air. The bi-functional ligands were then used for the surface modification of the tetragonal BaTiO3 nanocrystals. PMMA polymeric chains were grown from the initiating sites on the ferroelectric BaTiO3 nanocrystal surface by the metal-free ATRP technique to obtain core/shell ferroelectric BaTiO3/PMMA hybrid nanoparticles composed of ferroelectric BaTiO3 nanocrystals as the core and PMMA polymeric chains as the shell. The size of the ferroelectric BaTiO3 nanoparticles can be tuned by changing the molar ratio of benzimidazole ring units to precursors in the polycondensation process, and the thickness of the polymeric shell can also be tailored by changing the white LED irradiation time within the organocatalyzed ATRP process. The dielectric properties of the core/shell BaTiO3/PMMA hybrid nanoparticles can be tuned by adjusting the dimension of the BaTiO3 core and the molecular weight of the PMMA shell. Therefore, we envisage that this facile approach based on metal-free ATRP driven by visible light at ambient temperature will open up a new avenue for producing a variety of intriguing novel functional organic/inorganic hybrid nanomaterials for many applications (e.g., catalysts, electronics, etc.). Figure 8 caption (partial): (A) PMMA shells of low molecular weight (a; Sample-2 in Table S2, Mn,GPC = 11.3 kDa) and high molecular weight (b; Sample-3 in Table S2, Mn,GPC = 23.4 kDa); (B) core/shell tetragonal BaTiO3/PMMA hybrid nanoparticles with the same shell size (Sample-2 in Table S2, Mn,GPC = 11.3 kDa) and different sizes of core BaTiO3 nanoparticles (~17 nm (b) and ~39 nm (a); Sample-1 and Sample-5 in Table S3); (C) core/shell tetragonal BaTiO3/PMMA hybrid nanoparticles with the same shell size (Sample-3 in Table S2, Mn,GPC = 23.4 kDa) and different sizes of core BaTiO3 nanoparticles (~17 nm (b) and ~39 nm (a); Sample-1 and Sample-5 in Table S3).
7,821.8
2019-02-12T00:00:00.000
[ "Materials Science", "Physics" ]
Genetic Background of Hypertension in Connective Tissue Diseases Peroxisome proliferator-activated receptors (PPAR gamma-2) and beta-3-adrenergic receptors (ADRB3) are involved in the risk of hypertension. But their exact role in blood pressure modulation in patients with connective tissue diseases (CTD) is still not well defined. In this study, 104 patients with CTD and 103 gender- and age-matched controls were genotyped for Pro12Ala and C1431T polymorphisms of the PPAR gamma-2 gene and Trp64Arg polymorphism of the ADRB gene. Anthropometric and biochemical measurements were evaluated, followed by genotyping using TaqMan® SNP genotyping assays and polymerase chain reaction-restriction fragment length polymorphism. The prevalence of analyzed genotypes and alleles was comparable between patients with CTD and the control group, as well as hypertensive and normotensive subjects. Patients with CTD have lower body fat and higher body water amount, serum glucose, and triglyceride (TG) levels. Hypertensive subjects are older and have higher body mass, BMI, waist circumference (WC), body water content, glucose, and TG concentration. The multivariate analysis revealed that hypertensive subjects with Ala12/X or Trp64Trp have higher body mass and WC when compared to normotensive subjects. Trp64Trp polymorphism was also characterized by a higher TG level, while T1431/X subjects had higher WC. The presence of CTD, visceral fat distribution, and increased age are the predictors of hypertension development. Hypertensive patients with CTD and Trp64Trp polymorphism have an increased risk of visceral obesity development and metabolic complications, which in turn affects the value of blood pressure. In addition, either Ala12/X or T1431/X predicts the visceral body fat distribution in hypertensive subjects. Since the ratio between polymorphisms of PPAR gamma-2 and ADRB3 genes is not well established in hypertensive patients with CTD, we tested whether analyzed genetic factors are associated with blood pressure values and metabolic parameters in this group. Therefore, we determined the frequency of the analyzed variants and polymorphisms of the PPAR gamma-2 gene in CTD patients, and we investigated their association with hypertension in the context of anthropometric and biochemical parameters. Materials and Methods 2.1. Study Group. In this study, 111 patients were selected from patients in the Department of Rheumatology and Internal Diseases. The participants were also selected from our previous study [18]. Those patients with severe kidney and liver diseases, with infections, with untreated thyroid disorders, that are nonsmokers, with skin ulcerations during CTD, and without supplementation of minerals and vitamins were selected to the study. Thus, 104 subjects meeting the above criteria were included for further analysis. Average body mass-matched healthy controls were enrolled onto the study. Of the 104 patients, nearly 70% with CTD required treatment orally with glucocorticosteroid. All hypertensive patients with CTD used blood pressure-lowering medications. Informed consent was signed by each patient. Research was conducted according to the principles expressed in the Declaration of Helsinki, and signed consent was obtained from each patient. The study was approved by the local research ethics committee (Bioethics Committee of Poznan University of Medical Sciences, no. 791/15). Anthropometric Measurements. Basic anthropometric parameters included body mass (measured in underwear) and height measurements. 
The waist was measured on the midline between the lowest part of the 12th rib and the suprailiac crest by the WHO method, and hip circumference was measured at the widest point over the buttocks [19]. BMI was calculated as weight divided by height squared (kg/m2), and the waist-hip ratio (WHR) was estimated as the ratio of waist circumference to hip circumference. A bioimpedance analyzer (Bodystat 1500, Bodystat Ltd., UK) was used to assess fat content as a proportion of total body mass. The bioimpedance analysis was performed with a single-frequency (50 kHz) device. Each subject was examined at 8:00 AM in a controlled environment at room temperature (RT). After 20 min of rest in a supine position, brachial SBP and DBP were determined as the average of three measurements obtained by an experienced medical staff member on the patient's nondominant arm, following a 10 min rest, using a standard mercury sphygmomanometer. Blood pressure was measured according to the guidelines of the European Society of Hypertension Working Group on BP Monitoring [20]. The diagnosis of HT was given if systolic blood pressure exceeded 140 mmHg and/or diastolic blood pressure was higher than 90 mmHg. The specific characteristics of pulsatile arterial hemodynamics included the analysis of two components of blood pressure: mean arterial pressure (MAP) and pulse pressure (PP) [21]. PP was determined by subtracting the diastolic from the systolic blood pressure, and MAP was calculated using the formula MAP = (SBP + 2 × DBP)/3 [21,22]. Blood Parameter Measurements. Blood samples were drawn from the antecubital vein after an overnight fast and were collected in tubes containing EDTA. Serum samples were separated from clotted blood (15 min, RT) and centrifuged (15 min, 3000 × g). Enzymatic colorimetric assays (Pentra 400, Horiba ABX) were used to measure glucose and lipid profiles (total cholesterol (TC), high-density lipoproteins (HDL), low-density lipoproteins (LDL), and triglycerides (TG)). Samples were centrifuged immediately, and serum was separated and used directly for the assays. The serum level of LDL was determined using the Friedewald equation: LDL-C = TC − [HDL-C + (TG/5)] [23]. Genetic Evaluation. A detailed description of the methodology was included in our previous studies [2,18,24]. DNA samples from patients and controls were isolated from peripheral blood lymphocytes with a Gentra Puregene Blood Kit (Qiagen, Hilden, Germany). DNA purity and concentration were confirmed using a NanoDrop ND-1000 spectrophotometer. We selected SNPs previously associated with connective tissue diseases. We chose genomic regions based on a review of the literature and used the most significant reported SNPs that had been analyzed in relatively large groups of cases. All polymorphisms selected for this study had minor allele frequencies > 0.4 to achieve sufficient statistical power. Altogether, two SNPs in PPAR gamma-2 (rs1801282, rs3856806) and one in the β3-AR gene (rs4994) were analyzed. The SNPs were genotyped using predesigned TaqMan® SNP genotyping assays (Life Technologies, Carlsbad, California; assay IDs: PPAR gamma-2 (rs1801282: C_1129864_10) and β3-AR gene (rs4994: C_2215549_20)). The polymerase chain reaction-restriction fragment length polymorphism (PCR-RFLP) analysis was performed with HOT FIREPol Probe qPCR Mix Plus (no ROX) according to the manufacturer's instructions provided by Solis BioDyne (Tartu, Estonia). 
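For clarity, the derived indices used above (BMI, WHR, PP, MAP and the Friedewald LDL-C estimate) can be computed as in the following minimal Python sketch. The function name and the example values are illustrative only, and the lipid concentrations are assumed to be in mg/dL, which is what the TG/5 term of the Friedewald equation expects.

```python
def derived_measures(weight_kg, height_m, waist_cm, hip_cm,
                     sbp_mmhg, dbp_mmhg, tc, hdl, tg):
    """Anthropometric and hemodynamic indices described in the Methods.

    tc, hdl and tg are assumed to be in mg/dL (Friedewald TG/5 convention).
    """
    bmi = weight_kg / height_m ** 2                  # kg/m^2
    whr = waist_cm / hip_cm                          # waist-hip ratio
    pp = sbp_mmhg - dbp_mmhg                         # pulse pressure
    map_ = (sbp_mmhg + 2 * dbp_mmhg) / 3             # mean arterial pressure
    ldl = tc - (hdl + tg / 5)                        # Friedewald LDL-C estimate
    return {"BMI": bmi, "WHR": whr, "PP": pp, "MAP": map_, "LDL": ldl}

# Example: a blood pressure of 120/80 mmHg gives PP = 40 mmHg, as noted in the Discussion.
print(derived_measures(70, 1.70, 80, 95, 120, 80, tc=190, hdl=50, tg=120))
```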
The PCR thermal cycling was as follows: initial denaturation at 95°C for 15 min; 40 cycles of 95°C for 15 sec and 60°C for 60 sec. Thermal cycling was performed using a CFX96 Touch™ Real-Time PCR Detection System (Bio-Rad, Hercules, California, U.S.). As a quality control measure, negative controls and approximately 5% of samples were genotyped in duplicate to check genotyping accuracy. The genotypes of selected samples were confirmed by direct sequencing (OLIGO, IBB, Warsaw, Poland). In the case of C1431T (rs3856806) in the PPAR gamma-2 gene, the PCR-restriction fragment length polymorphism (PCR-RFLP) method was applied. The 170 bp PCR product of exon 6 was digested with the Eco72I enzyme (according to the manufacturer's instructions: Fermentas, Vilnius, Lithuania). Digestion products were separated by 2.5% agarose gel electrophoresis. Digestion yielded two bands of 127 bp and 43 bp, whereas the allele lacking the restriction site was not digested by this endonuclease and remained as the 170 bp product. The genotypes of selected samples were confirmed by direct sequencing (OLIGO, IBB, Warsaw, Poland). Statistical Analysis. GraphPad PRISM 5 Software (GraphPad, San Diego, CA) was used for statistical calculations. Genotype data were tested for deviations from the Hardy-Weinberg equilibrium. The chi-squared test was used to analyze the differences in genotype/allele frequencies between connective tissue disease (CTD) patients and the controls, as well as between normo- and hypertensive patients. The strength of associations of the PPAR gamma-2 (rs1801282 and rs3856806) and ADRB3 (Trp64Arg) genotypes with the studied groups was calculated using logistic regression and expressed as an odds ratio (95% CI), and the differences were considered significant if the value of probability (P) was less than 0.05. Contingency tables were used in these calculations. For polymorphisms, the wild-type or ancestral genotype/allele served as the reference category. The distributions of the anthropometric and biochemical data were tested with the Shapiro-Wilk normality test. If the analyzed data were not normally distributed, nonparametric tests were used. Since the number of Ala12Ala homozygotes was small (in both CTD patients and the control group) compared to Pro12Pro homozygotes, they were pooled with Pro12Ala heterozygotes for all the analyses and are presented as Ala12/X in Table 1. Similarly, patients with the C1431T and T1431T genotypes were collapsed together and are presented as T1431/X, and in the same way the Trp64Arg and Arg64Arg genotypes were analyzed together as Arg64/X. Student's t-test was used to compare continuous variables between two groups if the data distribution was concordant with the normal distribution (Shapiro-Wilk test). If the data did not meet the criteria mentioned above, the nonparametric Mann-Whitney U-test was used. For normally distributed data, a multifactor ANOVA was performed to determine whether the dependent variables differed significantly between the study and control groups in relation to polymorphism and statin intake. Otherwise, the nonparametric Kruskal-Wallis test was used. A P value less than 0.05 was regarded as statistically significant. Statistical analyses were performed with STATISTICA 12 (including STATISTICA Medical Package 2.0; StatSoft, Inc. 2014 software) and SPSS 22 (IBM, Inc., Chicago, IL, USA). Results The analysis of allele and genotype frequencies showed no differences between CTD and control groups ( Table 2). 
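As an illustration of the genotype-level statistics described above (testing for deviation from Hardy-Weinberg equilibrium and the chi-squared comparison of genotype frequencies between groups), the following Python sketch uses scipy. The genotype counts are hypothetical placeholders, not the study data.

```python
import numpy as np
from scipy.stats import chi2_contingency, chisquare

# Hypothetical genotype counts (wild-type hom., heterozygote, variant hom.)
ctd      = np.array([80, 22, 2])   # CTD patients
controls = np.array([78, 23, 2])   # matched controls

# 1) Hardy-Weinberg equilibrium within one group: compare observed genotype
#    counts with p^2 : 2pq : q^2 expectations (one df lost for estimating p).
n = ctd.sum()
p = (2 * ctd[0] + ctd[1]) / (2 * n)              # frequency of the major allele
expected = n * np.array([p**2, 2 * p * (1 - p), (1 - p)**2])
hwe_stat, hwe_p = chisquare(ctd, f_exp=expected, ddof=1)

# 2) Genotype-frequency difference between CTD patients and controls.
table = np.vstack([ctd, controls])
chi2, p_value, dof, _ = chi2_contingency(table)

print(f"HWE: chi2={hwe_stat:.2f}, P={hwe_p:.3f}")
print(f"CTD vs controls: chi2={chi2:.2f}, df={dof}, P={p_value:.3f}")
```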
Four different comparisons of anthropometric and blood pressure parameters in CTD vs. control group and hypertensive vs. normotensive subjects are shown in Table 3. The first comparison of patients with CTD and control group (Table 3 (I)) revealed that patients with CTD have a lower hip circumference, body fat and water, SBP, DBP, and MAP and a higher body water amount, serum glucose, and triglyceride level. The second comparison of hypertensive (n = 89) to normotensive (n = 118) patients (Table 3 (II)) demonstrated that hypertensive patients are older and have higher body mass, WC, LBM, body water content, BMI, glucose, and TG level. The third analysis including only patients with CTD diseases (Table 3 (III)) indicated that hypertensive CTD patients (n = 56) are older and have a higher body mass, WC, body fat amount, LBM, body water content, BMI, and serum glucose level when compared to normotensive subjects (n = 48). The fourth comparison of hypertensive subjects to normotensive ones in the control group (Table 3 (IV)) showed that the groups differed in age and glucose level. The multivariate analysis of all subjects in this study (Table 1 (I)) showed that hypertensive patients with Ala12/X or Trp64Trp genotypes have a higher body mass and waist circumference when compared to normotensive subjects. The levels of TG were higher in patients with Trp64Arg genotype, while subjects with T1431/X have higher WC. Similar relationships were observed in hypertensive CTD when compared to normotensive subjects with CTD (Table 1 (II)). For results presented in Tables 1 and 3, adjustment for the Family-Wise Error Rate (FWER) in multiple comparisons was not calculated, because these corrections were not included in the primary hypothesis of the current study. The P values uncorrected for use of multiple comparisons were presented for illustrative purposes, without making a categorical assertion. Gene-to-gene interaction in the context of WC and BMI in patients with hypertension and normal blood pressure was analyzed in the control and CTD groups using two-way ANOVA (Table 4). Discussion This study reveals associations between the analyzed polymorphisms and metabolic parameters and blood pressure characteristics in CTD. We have shown that hypertensive patients with the Ala12/X or Trp64Trp genotypes have an increased risk of visceral obesity development. A tendency to visceral fat distribution was also observed in hypertensive patients with CTD. The data presented in Table 2 showed no differences between CTD and control groups. Since ethnic and environmental variations for the analyzed alleles have been reported, we compared the data to other Caucasian populations. The frequencies of all analyzed alleles were comparable in both groups (CTD and control groups as well as hypertensive and normotensive subjects). The analyzed frequency of the Ala12 allele carrier was similar to the allele frequencies in other Caucasian populations (0.11-0.13), including those of Polish ethnicity [2,4,25]. The frequencies of the T1431 and Arg64 allele carriers were also comparable to frequencies observed in Polish subjects (T1431 (frequency 0.148) and Arg64 (frequency 0.101)) [24]. The data in Table 3 showed different comparisons in the four groups of analyzed patients. 
In this study, we analyzed not only SBP and DBP but also parameters of hemodynamic characteristics such as MAP (which refers to the steady pressure and vascular resistance of small arteries) and PP (which is determined by stroke volume, arterial stiffness, and wave reflections) [20,22]. The first comparison between CTD and the control group (Table 3 (I)) indicated that patients with CTD have proper blood pressure (SBP < 140 mmHg and DBP < 90 mmHg) and were characterized by lower MAP when compared to the control group. This could have been caused by hypotensive treatment modified at every admission to the hospital. Similar associations were observed when comparing hypertensive vs. normotensive patients with CTD (P = 0.0499) (Table 3 (III)). Unfortunately, low MAP is associated with a poorer prognosis and an 11% increased mortality in patients with cardiovascular diseases [26]. To compare body components between the analyzed groups, we used the bioimpedance method, which allows an estimation of lean body mass (fat-free mass) and body fat content [27][28][29]. In our study, body mass, LBM, and BMI values were comparable between the CTD and control groups. Similarly, other studies have not shown differences in fat-free mass (LBM) and fat mass between patients with rheumatoid arthritis and control groups if the groups have comparable BMI within the recommended range (<25 kg/m2) [30,31]. However, in our study, patients with CTD were characterized by higher water content, serum glucose, and triglyceride levels when compared to healthy subjects. Changes in body compartments and metabolic parameters can be associated with the inflammatory state present in the course of autoimmune disorders as well as being a side effect of glucocorticosteroid use [32,33]. The second comparison showed that hypertensive patients were older than normotensive subjects independently of the analyzed group (whole group (Table 3 (II)), CTD patients (Table 3 (III)), or control group (Table 3 (IV))). This fact reflects the general tendency of blood pressure to increase with age [34]. Moreover, both aging and hypertension have a critical role in cardiovascular and cerebrovascular complications [35]. In this study, hypertensive subjects had a higher body mass and WC (Tables 3 (II) and 3 (III)). These data are in accordance with the fact that the prevalence of hypertension increases with weight gain and the visceral distribution of body fat [36]. Moreover, hypertensive subjects have a higher body water content, which could be associated with the tendency towards water retention in the hypertensive state [37]. Table 4: Gene-to-gene interaction in hypertensive and normotensive groups with CTD and control groups. The analysis (Table 3 (II)) of all hypertensive (n = 89) and all normotensive subjects (n = 118) did not reveal differences in SBP, DBP, PP, and MAP, because hypertensive subjects used medications to lower blood pressure; however, the analysis of blood pressure suggested intensification of hypotensive therapy to achieve therapeutic goals mainly in the control group, because patients with CTD had proper blood pressure (Table 3 (I)). 
The third comparison (Table 3 (III)) including only patients with CTD shows that hypertensive patients were older and had a higher body mass, BMI, WC, and body fat content; however, the value of PP was comparable in all the presented analyses. PP is considered a predictor of cardiovascular disorders in the general population [38] and hypertensive state [39]. The calculation of hemodynamic parameters, if the normal blood pressure of 120/80 mmHg is present, gives PP = 40 mmHg. Unfortunately, in this study, the value of PP was high and exceeded 54 mmHg in all analyzed subjects. Thus, elevated PP in patients with CTD increases the risk of cardiovascular disorders [38]. This fact can be explained by a reduction in cardiac output, which neurohumorally activates a compensatory mechanism and a systemic vascular resistance. In consequence, the arterial stiffness increases [38,39]. In Table 3, chi-squared analysis for the number of patients with hypertension (n = 56, n = 33) and normal blood pressure (n = 48, n = 70) in CTD and control groups showed that the hypertensive state was significantly related to CTD (27.05%) while normotensive patients were predominantly present in the control group (33.82%; P = 0:0015). Data in Table 1 show that hypertensive patients, carriers of Ala12 allele, have a higher body mass and WC, which reflects the tendency for the coexistence of this allele with increased blood pressure [9-11, 40, 41]. Moreover, the Ala12 allele is also associated with higher body mass and BMI value and a tendency to obesity not only in Caucasian subjects [25] but also in other populations [4,5]. The Ala12 carrier is also related to increased body mass in women, and the additive effect of coexisting Ala12 and T1431 alleles is present [16]. In this study, patients with the T1431/X genotype were characterized by higher WC; however, we did not observe any additive effect of Ala12 or T1431 alleles (data not shown in tables). Interestingly, hypertensive homozygous subjects with the Trp64Trp genotype (both CTD and control groups) were characterized by a higher body mass, WC, and TG level when compared to normotensive subjects. In contrast to our study, Corella et al. reported that the Arg64 allele was associated with a higher BMI in a Mediterranean Spanish population [42]. We suspect that such differences are related to different ethnicities, which are related to different genetic and environmental factors and the presence of CTD. Two-way ANOVA has been used to determined differences between values of WC and BMI in hypertensive and normotensive groups and analyzed genotypes in patients with CTD and control group (Table 4). This analysis showed that Ala12/X genotype determined the higher values of waist circumference in patients with hypertension and CTD (P = 0:0216). Conclusion We did not find differences between genotype/allele frequencies between the analyzed hypertensive patients with CTD diseases and the control group; however, we showed that the analyzed polymorphisms Pro12Ala, T14131/X, and Trp64Trp were associated with worse anthropometric parameters in hypertensive subjects. From the analyzed genetic variants, the Trp64Trp genotype shows the stronger relation with hypertension, because it is associated not only with a higher body mass and waist circumference but also with higher triglyceride levels and may predict the development of metabolic syndrome in the future. 
Moreover, the hypertensive state was related to older age and a tendency toward visceral fat distribution (higher body mass, BMI, and WC). Although the patients with CTD were characterized by proper values of SBP and DBP, MAP was lower in this group. Hypertension was well treated in the CTD patients, but intensification of blood-pressure-lowering therapy is necessary in the control group. Our findings suggest complex genotype-environment interactions with hypertension risk, and further studies should explore the more complex relationship between the analyzed polymorphisms and metabolic risk. Data Availability The association study data used to support the findings of this study are included within the article.
4,494
2020-02-03T00:00:00.000
[ "Medicine", "Biology" ]
Biological aspects of the two-spotted spider mite on strawberry plants under silicon application Silicon is an inducer of plant resistance to arthropod pests, being a promising strategy for integrated management. The aim of this study was to evaluate the effect of silicon on biological, reproductive and population aspects of the parental and F1 generations of the two-spotted spider mite on strawberry plants. Potassium silicate, nanosilica and water were applied to the plants. Two-spotted spider mite females were confined to strawberry leaf disks for oviposition and, after hatching, the larvae were observed until the emergence of adults. Once adults had been obtained, couples were formed in order to evaluate pre-oviposition, oviposition, longevity and fertility, and the net reproduction rate, intrinsic rate of increase, finite rate of increase and generation doubling time were estimated. Silicon prolonged the duration of some immature stages of the mites in the parental and F1 generations, although it did not affect the duration of the whole biological cycle. The periods of pre-oviposition, oviposition and longevity of the parental generation and the longevity and oviposition of the F1 generation of the two-spotted spider mite were negatively affected by potassium silicate and nanosilica. The population parameters of the parental generation of the mites indicated that nanosilica is able to lead to a long-run decrease of this pest population. Silicon can be used in integrated pest management to control mites, since it does not interfere with the action of other control methods. Silicon (Si) has been one of the most studied chemical elements for the induction of plant resistance against arthropods (Reynolds et al., 2016; Catalani et al., 2017). Among the factors which promote resistance induction are an increase in the photosynthetic rate and in the mechanical resistance of cells (Moraes et al., 2005) and an increased production of allelochemicals, as well as the favoring of biological control agents (Reynolds et al., 2016). In addition to these mechanisms, there are some hypotheses which relate silicon to a promotion of increased leaf pubescence (Reynolds et al., 2016). Although plenty of studies on the effects of silicon on pest biology and on the induction of resistance in plants can be found in the literature, few studies have focused on the effects of Si on mites (Gatarayiha et al., 2010; Sadeghi et al., 2016; Catalani et al., 2017). Some studies have addressed the effects of silicon on disease management, nutrition and fruit organoleptic traits of strawberry. Gatarayiha et al. (2010), in a greenhouse, verified that conidia of the fungus Beauveria bassiana, added to potassium silicate, were efficient in the management of the two-spotted spider mite on corn. Sadeghi et al. (2016) verified that silicon negatively affects population and reproductive parameters of two-spotted spider mites on bean plants. The high production of Fragaria x ananassa strawberries obtained in the Chapada Diamantina region has aroused increasing interest in the crop, increasing the growth and expansion potential of this agricultural activity (SEBRAE, 2017). This region is considered one of the main producing regions in the Northeast of Brazil, with the harvest reaching 416 thousand tons and a productivity of 40 t ha-1 (SEBRAE, 2017), exceeding the national average of 36.1 t ha-1 (Fagherazzi et al., 2017). However, an arthropod complex which occurs in strawberry cultivation can compromise this productivity. 
Among them, the two-spotted spider mite stands out as the main pest of strawberry, because it promotes the appearance of chlorosis, loss of vigor, defoliation and wilting of plants, resulting in losses during production (Lourenção et al., 2000; Bortolozzo et al., 2007). Thus, the aim of this study was to evaluate the effect of silicon on biological, reproductive and population aspects of the parental and F1 generations of the two-spotted spider mite, using two sources of this element, aiming to improve integrated pest management for the strawberry crop. MATERIAL AND METHODS The studies were carried out from May to August 2018, in the Laboratório de Entomologia da Universidade Estadual do Sudoeste da Bahia (Entomology Laboratory of the State University of Southwest Bahia), at 25 ± 2 °C, 70 ± 10% relative humidity and a 12-h photophase, using strawberry plants cv. Mojave and two-spotted spider mites T. urticae. The plants were kept in 5-L pots, with a substrate composed of soil, sand and goat manure in a 2:1:1 (v/v/v) ratio, under plastic-house conditions. The specimens of T. urticae used in the bioassays were obtained from a stock culture started with individuals collected from a commercial strawberry planting in the Chapada Diamantina-BA region and maintained on common bean plants (Phaseolus vulgaris) cultivated in 20-L pots in a plastic house. The plants were monitored daily in order to avoid contamination by other phytophagous and predatory mites. We applied three solutions: potassium silicate and nanosilica, each containing 32 mol L-1 of Si, and deionized water (Si-free control), in 15 pots for each type of solution. The first application was done at the beginning of flowering of the strawberry crop and the following applications were made every 10 days, applying 6.9 mL per plant and covering the total plant area in order to guarantee product absorption. The sprayed leaves were marked to ensure the use of the leaves which received the three applications. For the treatment applications, we used a compression sprayer (1.25-L tank capacity), continuous jet, Guarany®. Assay using parental generation An assay was installed in a completely randomized design, with three treatments and 50 replicates, totaling 150 plots. The treatments consisted of leaves submitted to applications of two Si sources, potassium silicate and nanosilica, and deionized water (Si-free control). Each replicate consisted of leaf disks (2-cm diameter) obtained from 20 marked leaves (submitted to the treatments) and collected from the plants 10 days after the last application. The leaves were selected in order to obtain standardization of physiological age and position on the plant, thus seeking homogeneous conditions for mite development. Two-spotted spider mite adult females from the stock culture were confined in Petri dishes of 6.0 cm diameter, each containing a leaf disk, previously washed and dried, which was fixed to the center of the dish with hot glue, one female per dish. Water was added between the dish base and the leaf disk to ensure the viability of the plant material and prevent the escape of mites. Water was replenished whenever necessary. After being confined, the female mites were observed in order to verify the presence of eggs and, once laying was confirmed, two eggs were kept on each leaf and the female was removed to prevent further laying. Those females which had still not laid any egg were observed every four hours until all dishes had one or two eggs. 
After hatching, only one larva per leaf was maintained, being monitored in all mobile and quiescent stages until adulthood. Evaluations were done twice a day, at 12-h intervals, until the mites reached adulthood, observing the duration and viability of the egg, larva, protochrysalis, protonymph, deutochrysalis, deutonymph, telochrysalis and adult phases. Upon reaching the adult stage, the mites were sexed and each mite was transferred to a new experimental unit to form couples, for which it was necessary to use some males from the stock culture of the corresponding treatment. The pre-oviposition, oviposition, fecundity, fertility and longevity periods were determined for each treatment. Data referring to males from the stock culture and to mites that died on the cotton (attempted escape) were not used in the statistical analysis. The computer program TWOSEX-MSChart of Chi (2020), available at http://140.120.197.173/ecology/Download/Twosex-MSChart.rar, was used to analyze the raw data on development and reproduction, as well as to calculate the population parameters of all individuals, using the "two-sex life table" procedure (Chi & Liu, 1985; Chi, 1988). The population parameters estimated were the net reproduction rate (R0), intrinsic rate of increase (r), finite rate of increase (λ) and average generation time (T). The standard error of the data on development, fecundity and reproduction period and of the data on population parameters was estimated using the bootstrap method, following the procedure proposed by Huang & Chi (2012). During this procedure, the data for each of these biological parameters were re-sampled 100,000 times. Differences between treatments were compared using the paired bootstrap test based on the confidence interval of the differences (Efron & Tibshirani, 1993). Assay using F1 generation The experimental design was completely randomized, with three treatments and 50 replicates, totaling 150 plots. The treatments were identical to those of the assay with the parental generation. Ten days after the last applications of the products, the leaves of each treatment were collected and standardized as far as possible according to physiological age and position on the plant, in order to ensure that the leaves had received the three applications, as well as identical conditions for mite development. Leaf disks identical to the ones used in the previous assay were made, and afterwards one female was deposited on each leaf disk. After oviposition, the female was removed, leaving one or two eggs per dish. After hatching, only one larva was left per dish. All phases continued to be observed until adult emergence. After adult emergence, 50 females of each treatment were used (potassium silicate, nanosilica and control) in order to begin the bioassay using the F1 generation, obtained from replicates of the previous bioassay. The disks were put in Petri dishes, according to the procedure described in the previous item, and each leaf disk had a female mite on it. From then on, the same procedures adopted for the parental generation were used for conducting and evaluating the assay and the statistical analysis. Parental generation No significant differences were noticed for the egg, larva, protochrysalis, protonymph and deutochrysalis stages or for the total cycle; likewise, considering males and females, no significant difference in mortality rate was verified. 
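For readers who want to reproduce the population parameters named above (R0, r, λ and T) outside TWOSEX-MSChart, the sketch below computes them from an age-classified female life table in Python. The lx/mx values are invented placeholders, and r is obtained with the classical ln(R0)/T approximation rather than the two-sex procedure of Chi & Liu used in the paper.

```python
import numpy as np

# Hypothetical age-specific survival (lx) and fecundity (mx, daughters per female)
# for successive age intervals (days); values are illustrative only.
age = np.arange(1, 11)                      # pivotal ages x (days)
lx  = np.array([1.0, .98, .95, .92, .88, .82, .74, .60, .42, .20])
mx  = np.array([0.0, 0.0, 0.0, 1.2, 2.5, 3.0, 2.6, 1.8, 0.9, 0.2])

R0  = np.sum(lx * mx)                       # net reproduction rate
T   = np.sum(age * lx * mx) / R0            # average generation time
r   = np.log(R0) / T                        # intrinsic rate of increase (approximation)
lam = np.exp(r)                             # finite rate of increase
Dt  = np.log(2) / r                         # time needed to double the population

print(f"R0={R0:.2f}, T={T:.2f} d, r={r:.3f}/d, lambda={lam:.3f}, doubling={Dt:.1f} d")
```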
Deutonymph stage was lower in disks which received potassium silicate and teleiochrysalis stage was lower in control and under nano-silica treatment ( Table 1). The results found in this study corroborate the values obtained by Sadeghi et al. (2016), who noticed that Si altered development time of immature stages of T. urticae on bean plants. Several studies have been showing that silicon affects the initial phases of sucking insects, such as Bemisia tabaci biotype B (Hemiptera: Aleyrodidae), on bean plants grown in Si-treated soil (Gomes et al., 2008). Dalastra et al. (2011) verified that Si reduced the population density of silver thrips nymphs, Enneothripes flavens (Thysanoptera: Thripidae), on peanut plants, providing plant protection and increasing crop productivity. Pre-oviposition, oviposition days and longevity were shorter in potassium silicate treatment compared with the control; oviposition periods and longevity did not differ in relation to the sources of potassium silicate and nanosilica (Table 2). These results are not in accordance with those obtained by Catalani et al. (2017), who observed that silicon did not affect the survival and oviposition of the two-spotted spider mite females in papaya plants. On the other hand, Sadeghi et al. (2016) verified that Si application reduced oviposition period and longevity, both for female and male, on bean plants. The net reproduction and intrinsic rates and the finite rate of increase were lower in nanosilica-treated strawberry plants compared with the control, with no significant differences for these variables between silicon sources. Despite that, potassium silicate provided less time to double the mite population (Table 2). These results, in practical terms, would indicate a favoring for the mite; however, the lower intrinsic rate of increase represented by nanosilica treatment may suggest that plants treated with Si from this source did not demonstrate adequate conditions for the best performance of the mite population (Birch, 1948;Mottaghinia et al., 2011). The finite rate of increase is a parameter that represents a population multiplication factor within a time interval (Moro et al., 2012); therefore, Si-treatments would decrease the reproductive potential of the twospotted spider mite. The values of the net reproduction rate indicated harmful effects of the nanosilica on the two-spotted spider mite populations. Catalani et al. (2017) verified that potassium silicate applications provided a decrease in net reproduction rate of these mites on papaya plants compared with the control. The average generation time using the nanosilica source, even being equal to the control, is longer than with potassium silicate, which allows the authors to state that nanosilica application to strawberry plants has effects on the parental generation of T. urticae compared to untreated plants. F1 generation The duration of egg phase was longer in nanosilica-treated plants compared with plants treated with potassium silicate and this larva phase duration was also longer in nanosilica-treated plants in relation to the control. No significant differences in the development of immature stages of protochrysalis, protonymph and deutochrysalis were noticed; duration of teleiochrysalis stage was longer in plants which did not receive silicon application. The treatments did not affect the adult emergence period and mortality in immature stages of the two-spotted spider mites (Table 3). 
The stiffening of the cell wall caused by silicon may have hampered feeding by the immature phases of the pest (Moraes et al., 2005) and/or increased leaf pubescence, which, in theory, would decrease the movement of the immature phases on the leaves of the plants (Reynolds et al., 2016). A second hypothesis that can be raised is that silicon could have promoted an increase in the production of phenolic compounds involved in plant defense (Frew et al., 2016; Reynolds et al., 2016). Silicon hampers arthropod feeding and decreases the palatability and digestibility of plants (Massey & Hartley, 2009; Moraes et al., 2005); it could therefore be expected that the life cycle of the F1 generation mites would be extended, because the parental females kept on the plants that received Si applications would have inferior nutritional conditions compared with the control females, with a likely negative impact on their progenies. In addition, the immature phases of the mite would have greater difficulty in breaking the physical barriers created by silicon. The first hypothesis seems to be supported by the longer duration of the larva phase in the nanosilica treatment, however, without effects on the total development period. Pre-oviposition and oviposition did not differ among the treatments. Daily fertility and longevity of the two-spotted spider mite females were higher in the control, and fertility was lower on the plants which received nanosilica applications. All population parameters of the two-spotted spider mites were higher in the control compared to the nanosilica treatment, not differing from the plants which received potassium silicate applications (Table 4). No study relating the effects of silicon on two-spotted spider mites of the F1 generation was found. However, considering the parental generation, Sadeghi et al. (2016) verified that all silicon concentrations used, from 1.0 to 2.0 ppm, negatively affected all population parameters of the two-spotted spider mite on bean plants. Catalani et al. (2017) verified that, among the population parameters, the net reproduction rate was the most affected, and the lowest values were noticed where silicon was applied. The small number of studies on the effects of silicon on biological aspects of two-spotted spider mites, both on parental and F1 generations, shows that more studies are necessary in order to discuss pertinent issues, such as silicon sources, how to apply this chemical element and how to maintain plant resistance. In general, three silicon applications sprayed on the leaves, using the two sources mentioned in this study, negatively affected some biological and population parameters of the two-spotted spider mites. The authors concluded that an alternate use of these compounds may result in a decrease in subsequent populations of the two-spotted spider mites on strawberry plants. Table 4. Average and standard deviation of reproductive parameters (periods of pre-oviposition, oviposition, longevity, daily fecundity and fertility) and population parameters (R0 = net reproduction rate; r = intrinsic rate of increase; λ = finite rate of increase; T = average generation time) of the F1 generation of Tetranychus urticae on strawberry plants treated with two sources of silicon. Vitória da Conquista, UESB, 2019.
3,738
2021-03-01T00:00:00.000
[ "Biology", "Environmental Science" ]
Classifying Heart-Sound Signals Based on CNN Trained on MelSpectrum and Log-MelSpectrum Features The intelligent classification of heart-sound signals can assist clinicians in the rapid diagnosis of cardiovascular diseases. Mel-frequency cepstral coefficients (MelSpectrums) and log Mel-frequency cepstral coefficients (Log-MelSpectrums) based on a short-time Fourier transform (STFT) can represent the temporal and spectral structures of original heart-sound signals. Recently, various systems based on convolutional neural networks (CNNs) trained on the MelSpectrum and Log-MelSpectrum of segmental heart-sound frames that outperform systems using handcrafted features have been presented and classified heart-sound signals accurately. However, there is no a priori evidence of the best input representation for classifying heart sounds when using CNN models. Therefore, in this study, the MelSpectrum and Log-MelSpectrum features of heart-sound signals combined with a mathematical model of cardiac-sound acquisition were analysed theoretically. Both the experimental results and theoretical analysis demonstrated that the Log-MelSpectrum features can reduce the classification difference between domains and improve the performance of CNNs for heart-sound classification. Introduction Cardiovascular diseases (CVDs) are one of the major threats to human health [1]. Generally, doctors use a stethoscope (placed over what are called cardiac auscultation points) to determine the presence of certain CVDs. With the development of modern medical equipment technology, echocardiography and computed tomography (CT) are more accurate and comprehensive in diagnosing heart diseases than a stethoscope, but they are also more time-consuming and expensive. Consequently, they are not suitable for large-scale preliminary examination, especially in rural areas and grass-roots communities with insufficient medical resources. Heart sounds, physiological signals generated by myocardial contractions, have important clinical value in the prevention and diagnosis of CVDs, because they can reflect information about cardiovascular hemodynamic changes [2]. Usually, patients who have damaged the structure of the heart valve or exhibit abnormal heart function do not show clinical symptoms initially. Changes in the structure of the heart valves directly lead to narrowing of the blood vessels, increased blood flow, or abnormal channels between the arteries and veins, which, in turn, cause blood turbulence and produce murmurs. Consequently, automatic classification and recognition of heart-sound signals is of great importance for the prevention and diagnosis of CVDs. Up to now, an increasing number of artificial intelligence (AI) techniques have been used to automatically diagnose CVDs with the help of heart sounds [3,4]. In particular, feature extraction is very important in the classification process of heartsound signals [5]. When classifying heart-sound signals, it is common to transform the raw one-dimensional heart sound signals into two-dimensional features using a time-frequency analysis method, and then use these two-dimensional features to train the convolutional neural networks (CNNs). Some time-frequency analysis methods have been applied to examine heart-sound signals, such as STFT and continuous wavelet transformation (CWT) [6][7][8][9][10][11]. Specifically, STFT has been the most widely used method for research on non-stationary signals. 
The basic idea of STFT is to use a time-sliding analysis window to truncate non-stationary signals, decompose them into a series of approximately stationary signals, and then use Fourier transform theory to analyse the spectrum of each short-time stationary signal. In addition, it is easy to implement on hardware platforms, has practical application value in embedded systems, and can meet real-time requirements. Therefore, research results can be easily applied to smart wearable biosensors [12][13][14]. Usually, the original heart-sound signals are transformed into two-dimensional feature maps that offer a rich representation of the temporal and spectral structures of the original heart-sound signals. These feature maps are then used to train deep learning neural networks. The most commonly used features are Mel-frequency cepstral coefficients (MelSpectrums) [15][16][17] and log Mel-frequency cepstral coefficients (Log-MelSpectrums) [18][19][20]. These features are based on STFT. Systems based on CNNs trained on MelSpectrums and Log-MelSpectrums of segmental heart-sound signals are superior to other systems using hand-crafted features [21][22][23][24][25][26]. For instance, Deng et al. [22] introduced a novel feature extraction method based on MelSpectrums to represent the dynamics of heart sounds, which were fed to a fused model combining a CNN and a recurrent neural network (RNN) for classifying heart sounds. An accuracy of 98% was obtained when classifying normal and abnormal heart sounds. In addition, Nilannon et al. [25] combined MelSpectrum and spectrogram feature maps from fixed 5 s heart-sound signals to train a CNN model, and this method obtained an accuracy of 81.1%. Abdollahpur et al. [27] extracted 90 features in the time, frequency, perceptual, and Mel-frequency domains from segmented cycles of heart-sound signals. Three feed-forward neural networks combined with a voting system were used to perform the heart-sound classification task. Cheng et al. [28] presented a lightweight laconic heart sound neural network model that has low hardware requirements and can be applied to mobile terminals. This model was implemented using a two-dimensional spectrogram of heart sounds with a 5 s time period. Hence, this study has positive significance for recognising heart sounds in real life. For instance, Rubin et al. [29] used Springer's segmentation algorithm [30] to fix heart-sound signals into 3 s segments and convert them into two-dimensional MelSpectrum feature maps to train deep learning neural networks. Conversely, some recent studies used Log-MelSpectrum features. Maknickas and Maknickas [19] proposed a CNN-based model and trained it using Log-MelSpectrum features. The trained model produced an average classification accuracy of 86.02% for recognising normal and abnormal heart-sound frames. Nguyen et al. [31] suggested a long short-term memory and CNN model trained using Log-MelSpectrum features. The proposed model can classify five different heart sounds. In addition, Li et al. [32] improved the Log-MelSpectrum feature maps using dynamic and static MelSpectrum features, and used them as input features for deep residual learning. This method obtained an accuracy of 94.43% for the fused datasets of three different platforms. In general, these different STFT-based time-frequency features have been implemented for heart-sound classification and have made a substantial contribution. 
However, there is no a priori evidence of the best input representation for classifying heart sounds when using deep learning models. To solve this problem, the MelSpectrum and Log-MelSpectrum features of heart-sound signals, combined with the mathematical model of cardiac-sound acquisition, were analysed theoretically in this study. In addition, these two features were input to a general CNN model to evaluate further which features are more suitable for classifying heart-sound signals. To our knowledge, this is the first study that has analysed theoretically the MelSpectrum and Log-MelSpectrum features of heart-sound signals to determine which one is more suitable for classifying heart-sound signals when using CNNs. In addition, our study provides the following major contributions to the existing literature. First, by analysing the mathematical model of cardiac-sound acquisition, we conclude that the MelSpectrum and Log-MelSpectrum feature maps as input feature vectors of the CNN are efficient for additive and multiplicative noise suppression, respectively. Second, we evaluated our method based on published datasets from the PhysioNet/CinC Classifying Heart Sounds Challenge [33]. The MelSpectrum and Log-MelSpectrum features were input to a CNN-based model to classify heart-sound signals, and the experimental results showed that MelSpectrum and Log-MelSpectrum as input features of the CNN can be used as effective methods for classifying heart sounds. Furthermore, compared with MelSpectrum features, Log-MelSpectrum features are more suitable for processing heart-sound datasets that have domain differences and for improving the performance of the CNN for heart-sound classification. Extraction of MelSpectrum and Log-MelSpectrum Features The Mel filter, a useful tool for processing speech signals, has been widely applied in automatic speech recognition (ASR). It reflects the non-linear relationship between human hearing and the frequency of the sound heard. Recently, various studies have used Mel filters to extract valuable features from heart-sound signals, and the MelSpectrum and Log-MelSpectrum discussed herein are based on Mel filters. The parameters of the MelSpectrum and Log-MelSpectrum in our study are shown in Table 1, and the feature extraction process is shown in Figure 1. The detailed process of feature extraction is described as follows, with a minimal code sketch given after the list:
1. The heart-sound signals are band-pass filtered from 25 Hz to 950 Hz using a Butterworth filter, with a sampling frequency of 2000 Hz. The signals are then passed through a Savitzky-Golay filter to improve the smoothness of the time-frequency feature graph and reduce noise interference.
2. The filtered signals are framed and windowed using a Hanning window function to fix the signals into a selected frame length.
3. The frames are transformed into the periodogram estimate of the power spectrum using the STFT.
4. Each periodogram estimate is mapped onto the Mel scale using Mel filters, which consist of several triangular filters. The output of the Mel filters is called the MelSpectrum.
5. Logarithmic transformation is applied to the MelSpectrum features to obtain the Log-MelSpectrum.
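The following Python sketch illustrates steps 1-5 using scipy and librosa. It is only a sketch under stated assumptions: the frame length, hop size and number of Mel bands are illustrative choices (the exact values of Table 1 are not reproduced here), and librosa's standard triangular Mel filter bank stands in for the filters used by the authors.

```python
import numpy as np
from scipy.signal import butter, filtfilt, savgol_filter
import librosa

def log_mel_features(x, fs=2000, n_fft=256, hop=128, n_mels=128):
    """Steps 1-5: band-pass + smoothing, Hanning-windowed STFT power spectrum,
    triangular Mel filtering, then the logarithm. Parameters are illustrative."""
    # 1. Butterworth band-pass 25-950 Hz at fs = 2000 Hz, then Savitzky-Golay smoothing.
    b, a = butter(4, [25, 950], btype="bandpass", fs=fs)
    x = filtfilt(b, a, x)
    x = savgol_filter(x, window_length=11, polyorder=3)

    # 2-4. Framing/windowing, STFT power spectrum, and mapping onto the Mel scale.
    mel = librosa.feature.melspectrogram(y=x, sr=fs, n_fft=n_fft, hop_length=hop,
                                         window="hann", n_mels=n_mels,
                                         fmin=25, fmax=950, power=2.0)
    # 5. Logarithmic transformation (here in dB) gives the Log-MelSpectrum.
    log_mel = librosa.power_to_db(mel)
    return mel, log_mel

# Example: one second of a synthetic signal stands in for a real heart-sound frame.
mel, log_mel = log_mel_features(np.random.randn(2000))
print(mel.shape, log_mel.shape)
```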
Analysis of MelSpectrum and Log-MelSpectrum Heart-sound signals are easily disturbed by additive and multiplicative noise during the acquisition process. Figure 3 shows the mathematical model of cardiac-sound acquisition.
In Equation (1) below, s(n) is the original heart-sound signal, a(n) is the additive noise signal, including background sounds, breath sounds, lung sounds, and other noises caused by friction between the equipment and the skin, and h(n) is the pulse response of the stethoscope. The actual collected heart-sound signal y(n) is given by:
y(n) = [s(n) + a(n)] * h(n), (1)
where * denotes the convolution operation. The STFT of Equation (1) can be expressed as:
Y[l,k] = (S[l,k] + A[l,k]) H[l,k]. (2)
Here, H[l,k] is the representation of the impulse response of the stethoscope in the frequency domain; Y[l,k], S[l,k], and A[l,k] are the STFT forms of y(n), s(n), and a(n), respectively; and l and k are the frame in the time domain and the band in the frequency domain of the heart-sound signal, respectively. Taking the square of Equation (2), we obtain:
|Y[l,k]|^2 = (|S[l,k]|^2 + |A[l,k]|^2 + 2|S[l,k]||A[l,k]| cos θ) |H[l,k]|^2, (3)
where θ is the phase angle between the heart sound and the noise signals. Because s(n) and a(n) are independent, the above equation can be expressed approximately as:
|Y[l,k]|^2 ≈ (|S[l,k]|^2 + |A[l,k]|^2) |H[l,k]|^2. (4)
The power spectrum estimation results of each frame were filtered by Mel filter banks composed of M triangular filters and a weighted sum with each filter. After the Mel filter process, we obtain the output energy of the filter banks, namely, the MelSpectrum, expressed as the variable Mels in the formula:
Mels[l,m] = Σ_k W_m[k] |Y[l,k]|^2 ≈ Σ_k W_m[k] (|S[l,k]|^2 + |A[l,k]|^2) |H[l,k]|^2, m = 1, ..., M, (5)
where W_m[k] is the m-th triangular Mel filter. Using the logarithm function on both sides of Equation (5), and treating the stethoscope response as approximately constant within each Mel band, the Log-MelSpectrum features, expressed as the variable Log-Mels, are obtained:
Log-Mels[l,m] = log Mels[l,m] ≈ log( Σ_k W_m[k] (|S[l,k]|^2 + |A[l,k]|^2) ) + log |H[l,m]|^2. (6)
Equation (6) shows that the stethoscope-induced multiplicative component can be converted into an additive term in the Log-MelSpectrum domain; that is, the Log-MelSpectrum feature after logarithmic transformation can represent the multiplicative noise as an additive component in the feature space. Meanwhile, from the study, we established that if the training data are overlaid with irrelevant additive noise and enough data are available for the model to converge, the CNN is robust to additive noise. Therefore, Log-MelSpectrum feature maps can more easily improve the classification performance of the CNN and enhance the robustness of the model in different domains. This conclusion is further verified by the experiments described in the following section. Heart-Sound Datasets The heart-sound dataset used in our experiments was obtained from the 2016 PhysioNet/Computing in Cardiology (CinC) Challenge [33]. This dataset includes six sub-datasets: dataset-a, dataset-b, dataset-c, dataset-d, dataset-e, and dataset-f. Detailed information on these datasets is presented in Table 2. The distributions of these datasets are quite different. Specifically, dataset-e, collected by MLT201/Piezo and 3M Littmann devices, made up approximately 66% of the total datasets, whereas dataset-c, collected by AUDIOSCOPE, accounted for only 1.7%. The distribution of the datasets varied with different acquisition equipment. Thus, the datasets had domain differences, making it difficult to classify heart sounds. 
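To make the effect of Equation (6) concrete, the short numpy sketch below shows that a flat (per-band) stethoscope response scales MelSpectrum features multiplicatively, but only shifts Log-MelSpectrum features by a constant additive offset; the band energies and gain values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
mel_clean = rng.uniform(1.0, 10.0, size=128)   # Mel-band energies of s(n)+a(n) (illustrative)
gain_dev1 = 0.3                                # |H|^2 of one stethoscope, assumed flat per band
gain_dev2 = 1.7                                # a different recording device

# In the linear Mel domain the device response scales every band (multiplicative gap).
mels_dev1 = mel_clean * gain_dev1
mels_dev2 = mel_clean * gain_dev2
print(np.allclose(mels_dev2 / mels_dev1, gain_dev2 / gain_dev1))   # True

# In the log-Mel domain the same response becomes a constant additive offset,
# which a CNN (or simple mean normalisation) can absorb far more easily.
log_dev1 = np.log(mels_dev1)
log_dev2 = np.log(mels_dev2)
print(np.allclose(log_dev2 - log_dev1, np.log(gain_dev2 / gain_dev1)))  # True
```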
CNN Architecture Regardless of the network parameters and training speed, we chose a general convolutional network to classify the heart-sound fragments. VGG16, a simple but effective convolutional network, has been widely used in the fields of face recognition and image classification [34]. Therefore, we used the VGG16 network to perform the heart-sound signal classification task. Our VGG16-based network consists of a simple stack of seven 3 × 3 convolution layers (CLs), four fully-connected layers (FLs), and three maximum pooling layers (MLs). However, our input feature vector size is 128 × 128, whereas the original VGG16 input feature vector size is 224 × 224. Therefore, the VGG16 model structure was appropriately adjusted to process the heart-sound feature maps. The modified structure of VGG16 is shown in Figure 4. In this modified structure, the first maximum pooling layer reduces the dimension of the previous input from 128 × 128 to 64 × 64, the second maximum pooling layer reduces it from 64 × 64 to 32 × 32, and the third maximum pooling layer reduces it from 32 × 32 to 16 × 16. Meanwhile, the last layer, which is a softmax layer, is connected to the normal and abnormal classes in the datasets. The convolution kernels move across the feature maps along the time and frequency axes during CNN training, and the deep features are ultimately extracted from the heart-sound signals in both the frequency and time dimensions. Experimental Process Five experiments were conducted to evaluate the stability and generalisation performance of the heart-sound classification method. Specifically, one dataset was selected from data subset-a, -b, -c, -d, and -f in each experiment as test data, and the remaining heart-sound subsets were used for training and optimisation of the model parameters. The data subset-e was only used for model training because it accounted for 66% of the total number of heart sounds. The specific process is illustrated in Figure 5. The CNN hyper-parameters that yielded the best results are presented in Table 3. In the training phase, 20% of the training data was used for model validation, and an oversampling method was used to balance the normal and abnormal heart-sound samples. 
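The sketch below is one possible PyTorch rendering of such a modified VGG-style network for 128 × 128 single-channel feature maps, with seven 3 × 3 convolution layers, three max-pooling stages (128 → 64 → 32 → 16) and four fully-connected layers ending in two output classes. The channel widths and hidden-layer sizes are assumptions for illustration; the authors' exact configuration (Figure 4) may differ.

```python
import torch
import torch.nn as nn

class HeartSoundVGG(nn.Module):
    """VGG16-style net for 128x128 feature maps: seven 3x3 conv layers,
    three max-pooling layers and four fully-connected layers; channel
    widths are illustrative assumptions, not the authors' values."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                       # 128 -> 64
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                       # 64 -> 32
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, 128, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, 128, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                       # 32 -> 16
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 16 * 16, 512), nn.ReLU(inplace=True),
            nn.Linear(512, 256), nn.ReLU(inplace=True),
            nn.Linear(256, 64), nn.ReLU(inplace=True),
            nn.Linear(64, n_classes),              # softmax applied at loss/inference time
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = HeartSoundVGG()
print(model(torch.randn(4, 1, 128, 128)).shape)    # torch.Size([4, 2])
```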
Experimental Process

Five experiments were conducted to evaluate the stability and generalisation performance of the heart-sound classification method. Specifically, in each experiment one dataset was selected from subsets -a, -b, -c, -d, and -f as the test data, and the remaining heart-sound subsets were used for training and optimisation of the model parameters. Subset-e was used only for model training because it accounted for 66% of the total number of heart sounds. The specific process is illustrated in Figure 5.

The CNN hyper-parameters that yielded the best results are presented in Table 3. In the training phase, 20% of the training datasets were used for model validation, and an oversampling method was used to balance the normal and abnormal heart-sound samples. In addition, the Kaiming method was used to initialise the parameters and to keep the gradients of the learned parameters from vanishing or saturating during training.

Table 3. Initial hyper-parameters of the CNN.
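As a sketch of how this training setup could be realised, the code below applies Kaiming initialisation to the convolution and linear layers and builds an oversampled, class-balanced data loader. The use of PyTorch's WeightedRandomSampler and the batch size are assumptions for illustration; the exact oversampling procedure is not detailed in this section.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, WeightedRandomSampler

def kaiming_init(module):
    """Kaiming (He) initialisation for convolution and fully-connected layers."""
    if isinstance(module, (nn.Conv2d, nn.Linear)):
        nn.init.kaiming_normal_(module.weight, nonlinearity="relu")
        if module.bias is not None:
            nn.init.zeros_(module.bias)

def balanced_loader(dataset, labels, batch_size=32):
    """Oversample the rarer class so normal (0) and abnormal (1) samples are balanced."""
    labels = torch.as_tensor(labels)
    class_counts = torch.bincount(labels)
    sample_weights = 1.0 / class_counts[labels].float()  # rarer class gets larger weight
    sampler = WeightedRandomSampler(sample_weights, num_samples=len(sample_weights),
                                    replacement=True)
    return DataLoader(dataset, batch_size=batch_size, sampler=sampler)

# Usage with the ModifiedVGG16 sketch above (train_dataset/train_labels are placeholders):
# model = ModifiedVGG16()
# model.apply(kaiming_init)
# train_loader = balanced_loader(train_dataset, train_labels)
```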
Model Training Results and Analysis

The training and validation accuracy learning curves obtained by the CNN on the different training datasets are shown in Figures 6-10. The curves show that, as the number of iterations increases, the training and validation accuracies gradually improve and become stable. The accuracies of the MelSpectrum feature maps on the validation dataset-a, dataset-b, dataset-c, dataset-d, and dataset-f were 97.0%, 86.4%, 82.3%, 85.0%, and 89.5%, respectively. The accuracies of the Log-MelSpectrum feature maps for the five validation datasets were 93.9%, 93.2%, 87.25%, 89.7%, and 93.7%, respectively. The loss curves for training and validation on the different datasets are shown in Figures 11-15. As observed, the loss value of the model decreases with an increase in the number of iterations and eventually stabilises. The parameters of the model were set at realistic levels based on the accuracy and loss curves, and there was no overfitting or underfitting.

The validation accuracies are presented in Table 4. The accuracies of the Log-MelSpectrum and MelSpectrum time-frequency feature maps are 91.74% ± 3.72% and 87.42% ± 3.99%, respectively. Therefore, the Log-MelSpectrum and MelSpectrum time-frequency feature maps discussed in this section can be used as feature input vectors for the CNN, which is an effective heart-sound classification method. Furthermore, Log-MelSpectrum features are more suitable for processing heart-sound datasets that have domain differences and for improving the performance of the CNN for heart-sound classification, compared with MelSpectrum features.

Test Results and Analysis

The model performance results on test dataset-a, dataset-b, dataset-c, dataset-d, and dataset-f are presented in Table 5. Specificity (Sp), Sensitivity (Se), and the mean of Se and Sp (MAcc) were used as evaluation indices in this study, as defined in [33]. Based on the results, deep learning models trained with different input time-frequency features gave different prediction results on the same test datasets. In our experiments, the MAcc indices on test dataset-a, dataset-b, dataset-c, dataset-d, and dataset-f were 57.83%, 75.98%, 70.24%, 60.05%, and 64.61%, respectively, with MelSpectrum feature maps as the input, and 67.65%, 83.25%, 72.32%, 68.92%, and 66.54%, respectively, with Log-MelSpectrum feature maps as the input. Figure 16 shows the average performance of the model. The model trained with the Log-MelSpectrum feature maps has higher average Se, Sp, and MAcc than that trained with the MelSpectrum feature maps.
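For clarity, the basic definitions of these indices can be computed as in the sketch below. It implements only the plain Se, Sp, and their mean; any additional weighting used in the official challenge scoring [33] is omitted, and the example labels are purely illustrative.

```python
def se_sp_macc(y_true, y_pred):
    """Sensitivity (Se), Specificity (Sp) and MAcc = (Se + Sp) / 2.

    y_true and y_pred are sequences of 0 (normal) and 1 (abnormal) labels.
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    se = tp / (tp + fn) if (tp + fn) else 0.0  # abnormal recordings correctly detected
    sp = tn / (tn + fp) if (tn + fp) else 0.0  # normal recordings correctly recognised
    return se, sp, (se + sp) / 2.0

# Example: five recordings, three abnormal and two normal.
print(se_sp_macc([1, 1, 1, 0, 0], [1, 0, 1, 0, 1]))  # approximately (0.667, 0.5, 0.583)
```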
Discussion

Heart sounds reflect information about cardiovascular hemodynamic changes and can therefore be used to diagnose CVDs. It is of great value to use a computer to extract features from heart-sound signals for quantitative analysis. The most commonly used features are MelSpectrum and Log-MelSpectrum features. Systems based on CNNs trained on MelSpectrums and Log-MelSpectrums of segmented heart-sound signals are superior to other systems using hand-crafted features. However, no a priori evidence exists regarding the best input representation for classifying heart sounds when using CNN models. Based on the accuracy on the validation datasets and the test results for each test dataset, using either MelSpectrum or Log-MelSpectrum as input features of the CNN can be an effective method for classifying heart sounds. Furthermore, the Log-MelSpectrum feature maps more readily improve the classification performance of the model and enhance its robustness in different domains, compared with the MelSpectrum feature maps. This is because the Log-MelSpectrum feature maps can represent the multiplicative noise caused by the stethoscope as an additive component in the feature space, and the CNN is more robust to additive noise components.

In this study, different input feature representations, including MelSpectrum and Log-MelSpectrum feature maps, were analysed to determine the most suitable method for classifying heart-sound signals when using CNNs. In particular, the MelSpectrum and Log-MelSpectrum feature maps were discussed in combination with the mathematical model of cardiac-sound acquisition. Based on theoretical analysis, heart-sound signals are always disturbed by additive and multiplicative noises. The multiplicative noises are due to the stethoscopes, and these stethoscope-induced multiplicative noises can be converted into an additive term in the Log-MelSpectrum domain. Hence, Log-MelSpectrum feature maps can transform the multiplicative noise into a linear, additive term. Moreover, the CNN is robust to additive noise at the input layer. Therefore, we conclude that Log-MelSpectrum feature maps, used as the input feature vectors of the CNN, can efficiently suppress the additive noise. This conclusion is further validated by our experiments.

In the five different experiments, MelSpectrum and Log-MelSpectrum feature maps were used as input to train a modified CNN. The accuracies of the Log-MelSpectrum feature maps on the validation dataset-a, dataset-b, dataset-c, dataset-d, and dataset-f were all higher than those obtained using the MelSpectrum feature maps, and the variance of the mean accuracies using the Log-MelSpectrum inputs was smaller than that using the MelSpectrum inputs. Furthermore, the model trained with the Log-MelSpectrum feature maps achieved higher average Se, Sp, and MAcc than that trained with the MelSpectrum feature maps. The experimental results showed that using MelSpectrum or Log-MelSpectrum feature maps as inputs to the CNN can be an effective method for classifying heart sounds, and that Log-MelSpectrum features are more suitable for processing heart-sound datasets that have domain differences and for improving the performance of the CNN for heart-sound classification, compared with MelSpectrum features.

The average sensitivity and specificity on the testing datasets for the model trained with the Log-MelSpectrum feature maps are 73.86% and 70.69%, respectively. These results are lower than those of Maknickas [19] and Li [32]. This may be due to the following reasons. First, the model proposed by Maknickas is deeper than ours, and deep learning models with deeper layers normally exhibit more accurate performance; this has been the tendency in recent developments. Second, Li improved the Log-MelSpectrum feature maps with dynamic and static MelSpectrum features and used them as input features for deep residual learning. Although the sensitivity and specificity levels on the testing datasets are still far from those of a clinically useful diagnostic model, our work mainly addressed the question of which STFT-based features are more suitable for classifying normal and abnormal heart-sound signals. As far as we know, this is the first study to theoretically analyse the MelSpectrum and Log-MelSpectrum features of heart-sound signals to determine which is more suitable for classifying heart-sound signals when using CNNs. We believe this study provides a solid solution in the field of heart-sound classification and could promote the automatic diagnosis of CVDs.