Data Donations as Exercises of Sovereignty

We propose that the notion of individual sovereignty encompasses more than having the power to exclude others from one's personal space. Instead, sovereignty is realized at least in part along outward-reaching, interactive and participatory dimensions. On the basis of reflections from gift theory, we argue that donations can generate social bonds, convey recognition and open up new options in social space. By virtue of these features, donations offer the potential to advance individual sovereignty. We go on to highlight distinctive benefits of data donations, before articulating several difficulties and puzzles: data donors are bound to have a limited grip on future uses of their data and the people affected by their decision to share. Further characteristic traits of data donations come from the invasive and comprehensive character of state-of-the-art data gathering and processing tools, and the fact that the relevant sense of data ownership is far from straightforward. In order to minimize tensions with negative, protective aspects of sovereignty, we argue that thoughtful mechanisms at the level of consent procedures, the representation of data subjects in governance structures, and organizational-level constraints are necessary. Along the way, we will devote particular attention to challenges and opportunities within big data contexts.

Introduction

Donations are common in health contexts. Crowdfunding calls through websites like GoFundMe, in which patients rely on private donors to pay for unexpected medical expenses, are familiar especially in the United States (Snyder 2016; Berliner and Kenworthy 2017). There are plenty of opportunities to help others not just by giving money, but also by giving parts of our bodies, such as organs or blood. We can give such parts or materials more or less directly to patients in need, or contribute samples to biobanks in which they feed into research, development, public health surveillance and other beneficial activities. In these kinds of donations, the potential donor is in a position to seek and understand information about the need for her donation. Although some degree of uncertainty is often inevitable, she can learn about the features of potential recipients, the way in which her donation addresses a problem, and how her donation will be distributed. It is also quite straightforward what she is donating, e.g., an organ, blood, or a specimen. Moreover, the donor herself carries any inconveniences connected with her donation, and burdensome effects on others are typically minimal or absent.

Databases are growing at breathtaking speed, while tools and algorithms to process and interpret data become more powerful and sophisticated (Mayer-Schönberger and Cukier 2013; Murdoch and Detsky 2013; Raghupathi and Raghupathi 2014). Still, information that can feed into evidence bases is not always readily available. It needs to be discovered, harvested, shared, and analysed. In recent years, the roles individuals can play in data gathering processes have received increased attention. The widespread rollout of electronic health records has made it easier than ever to handle personal health data and opens up opportunities for sharing it in a variety of ways. By making their health data available, individuals can enable research and advance clinical progress (Nature Biotechnology 2015). Two potential applications are the following. First, medical data can feed into research.
By providing one's data for such purposes, one ideally provides researchers with the raw materials for discovering unforeseen correlations and helps to pave the way for new hypotheses, preventive actions, diagnostics, and treatments. One possible source for such data is direct-to-consumer genetic testing. For example, in 2014 the online networking service PatientsLikeMe launched its "Data For Good" campaign, which "underscores the power of donating health data to improve one's own condition, help others and change medicine" (PatientsLikeMe 2014). The campaign is motivated by survey data suggesting that "94 percent of American social media users agree with sharing their health data to help doctors improve care" (Grajales et al. 2014), provided their anonymity is secured. Further examples include the non-profit platform DNA.Land, which calls for users to upload their genomic data in order to "enable scientists to make new discoveries" and to learn more about their genome (DNA.Land 2018). The open source website openSNP allows users to upload genetic information, including full genomes, which are then published under a Creative Commons Zero licence. The data can be freely copied, modified, distributed, and analysed for commercial and non-commercial purposes. Second, an increasing number of clinical deep-learning-driven diagnostics and treatments rely on large amounts of patient data, cases, and background information. Data that is fed into such systems can guide a vast range of useful applications, e.g., the delineation of tumours in radiological images (Microsoft 2018) or therapeutic decision-making regarding metastatic breast cancer (Yang et al. 2016, 2017). Sharing one's personal health data for such purposes directly affects treatment options and prospects of present and future patients.

The question arises under which conditions applications like these can legitimately be based on donations of personal health data from individuals. The present paper argues that some of the most pressing challenges surrounding data donations are challenges about the data sovereignty of the donor. We begin by introducing the concept of data sovereignty (2.) and propose that it encompasses more than having the power to exclude others from one's personal space. Instead, it has a positive dimension as well. On the basis of reflections from gift theory, we propose that data donations can be exercises of positive data sovereignty. We go on to highlight the potentials (3.) of data donations, before articulating several difficulties and puzzles that arise from the idea of donating personal health data (4.). We close with some suggestions on how sovereign data donations could be made possible in practice (5.). Along the way, we will devote particular attention to complications and opportunities within big data contexts.

Before we begin, a conceptual remark on the idea of donating one's personal health data is in order. While the concept of data sharing has received a lot of attention throughout the literature (e.g., Borgman 2012), the notion of data donation is relatively new and less widespread (e.g., Prainsack and Buyx 2017, ch. 5). Data sharing and data donation both involve the provision of access to data. In our view, they differ along at least two dimensions. The first difference relates to exclusivity: if I share a good, I can still use at least a portion of it myself. If I donate something, typically the respective portion of the good is gone.
Relative to ordinary language, it is thus somewhat surprising to speak of data donations insofar as the putative donor typically does not lose even a portion of her data when granting others access (see also Barbara Prainsack's contribution to this volume). The second difference is motivational, and in our view provides an important reason to focus on data donations. Relative to ordinary language, the notion of donating, more than the notion of mere sharing, highlights the possibility of a particular kind of motivation for why we might give others access to our goods. When I exchange or trade something or a portion thereof, I expect a return. When I gift something, do I expect a return, too? As we will see in the following, this question is a matter of controversy. What does seem to distinguish gifting from exchanging is that the former involves a symbolic dimension that the latter lacks. For this reason, the following discussion is driven by the suspicion that when reflecting upon data donations, we should be mindful of such symbolic aspects of granting others access to one's data.

Donations and Sovereignty

Many areas in the health sector anticipate progress and efficiency gains from increasingly powerful data gathering and processing tools. The hope is that such innovations will advance a range of activities such as public health surveillance, research and development, the provision of medical care and the design of health systems. While these prospects are intriguing, novel and ever more penetrative data-processing tools can leave individuals susceptible to risks of harm and prompt us to consider at what point disproportionate intrusions into the personal sphere begin, especially if highly intimate and sensitive information is being processed. Big data applications thus bring a number of ethical questions to the forefront (Nuffield Council on Bioethics 2015; Mittelstadt and Floridi 2016; German Ethics Council 2017a, b), including how individuals can make autonomous choices about where their data goes and what is being done with it, while they are at once beneficiaries and objects of investigation of data- and computation-intensive tools that promise to speed up and enhance knowledge generation processes. One up-and-coming concept in these discussions is the notion of data sovereignty. Although not used uniformly throughout the literature, the concept relates to issues of control over who can access and process data (Friedrichsen and Bisa 2016; De Mooy 2017; German Ethics Council 2017a, b). For example, data sovereignty is discussed with regard to cloud computing, where it refers to what is undermined by uncertainty about which law applies to information stored in the cloud (De Filippi and McCarthy 2012). Commentators worry that governments which use cloud computing run the risk of compromising national sovereignty by conceding control over their data (Irion 2013). Some identify data sovereignty with the ability to geolocate data and to place it within the borders of a particular nation-state (Peterson et al. 2011). Only then is it possible for users to determine which privacy protections, intellectual property protections, and regulations apply, and which risks of legal and illegal access to data exist. In the German media discourse, data sovereignty is occasionally perceived as a threat to privacy and a "lobby notion" introduced by the data-processing industry to hollow out data protection standards (Krempl 2018). But quite the opposite is true.
While data sovereignty does indeed rest on the conviction that traditional input-oriented data protection principles like data minimisation and purpose limitation are unsuitable in big data contexts (German Ethics Council 2017a, b), two important clarifications are in order. First, proponents of data sovereignty highlight its orientation towards informational self-determination, which involves the protection of a personal sphere of privacy that sets the stage for participation in the public sphere (Hornung and Schnabel 2009). Second, the notion of data sovereignty is driven by the conviction that claims and rights like those related to informational self-determination can only be realized against the backdrop of the social contexts and structures in which they are articulated, recognized, and respected. Proponents of data sovereignty highlight that digitization has the potential to transform the social core in which articulations of these claims are always embedded. This is why it is inadequate to insist upon rigid, input-oriented data protection principles (Dabrock 2018). Instead, the focus must shift to the social transformations and tensions of digitization in which individuals should be put in a position to claim their right to informational self-determination reliably and robustly. In the following, we shall not deny that sovereignty motivates negative and protective claims and rights related to the data subject's privacy (although cf. Goodman 2016, pp. 153-155). Instead, we will focus on the question of whether the picture of sovereignty encompasses more than just the exclusion of others from one's personal information, and instead motivates claims of individuals to share rather than hold back their data.

In early modern political theory, sovereignty denotes absolute and unconditional power that is neither constrained by nor accountable to other powers. The notion became prominent after Bodin (1576) applied it to absolutist rulers in order to characterize their supreme authority. For Hobbes (1651), this authority is the result of a transfer of sovereignty from the people to the ruler. Other authors attributed sovereignty to nations, countries, or peoples. Sovereignty is typically indexed to a spatial or a substantive domain. The spatial domain is the territorial region which is subject to the sovereign's will. The substantive domain comprises the matters on which the sovereign is authoritative. Nevertheless, the claim to absolute power is one reason why the notion of sovereignty is sometimes looked upon with unease, and has led to controversies about whether the political sphere is fruitfully framed in terms of it. For example, Maritain (1951, ch. 2) worries that once the people transfer their power to the sovereign in Bodin's or Hobbes' model, their sovereignty is irretrievably lost. After having become the sovereign, the leader is free to determine the nature and boundaries of this power. Against this, one can invoke notions of legitimacy and argue that sovereignty properly understood is undercut by certain claims to power and ways of ruling. The apparent sovereign becomes a despot if she is guided by arbitrariness and self-interest or proceeds without appropriate forms of recognition from the people she governs. Sovereignty, although prima facie a property of the authoritative individual, is not something which can simply be claimed and possessed independently of social or political embeddedness.
It is something that is conferred upon the sovereign, a property that arises from the sovereign's relation to those who are eventually subject to the sovereign power and recognize the sovereign as authoritative and legitimate. The power of the sovereign goes hand in hand with the ability to constrain the power of others. Prior to early modern times, the historic function of the concept was not to entitle rulers to power, but to delimit the authority of worldly leaders. Sovereignty as unconstrained and absolute power was attributed to God in order to distinguish divine authority from the claims of kings and emperors, and to constrain the claims to power of the latter. Modern ascriptions of sovereignty also have implications about negative freedom. For example, Mill argues that with regard to things which concern only the subject herself, she is entitled to absolute independence from interference by society: "[o]ver himself, over his own body and mind, the individual is sovereign" (Mill 1859, p. 224). Nevertheless, sovereignty is not exhausted by negative claims. It can have positive implications about the space it determines as the domain of the sovereign. The sovereignty of a state is not exhausted by external sovereignty against outside interference. Instead, sovereignty has an internal dimension as well: within her territory the sovereign has the authority to govern according to her will. Similarly, Mill's individual who is the sovereign over her personal sphere is not merely entitled to the right and power to exclude others from her domain of sovereignty, but also to operate within this sphere; in Mill's case, to pursue her idea of the good life. For either dimension, one important realizer of sovereignty is power. Sovereignty is realized through the power to keep outsiders out of one's domain of sovereignty and to operate within this domain. This carries with it the constraint, criticism, and repudiation of claims to power of outsiders as well as of those insiders who are subject to the sovereign. Again, this is not crude and arbitrary power or force. Whether a claim to sovereign power is appropriate and legitimate depends on its content and the relationship between the putative sovereign and her claim's addressees. Negotiating sovereignty and its scope is a discursive process to be carried out in dialogue with others and society. When individuals pursue their idea of the good life, we should take note of the fact that this pursuit need not be exhausted by an atomistic sense of one's personal good. As Taylor (1985, p. 190) maintains: "Man is a social animal, indeed a political animal, because he is not self-sufficient alone, and in an important sense is not self-sufficient outside a polis." The sovereign individual's pursuit of the good life plausibly unfolds through social relations, embeddedness, and interactions. Crucial realizers of positive aspects of her sovereignty transcend the boundaries of her personal sphere, and rest on how this personal sphere is connected and related to others.

Consider now what this could mean for data sovereignty. In the case of data sovereignty, the relevant kind of power is control over one's data: where it goes, who has access, and what is being done with it. The foregoing suggests that the individual need not always exercise sovereignty in ways that close off her personal data from others, e.g., by categorically prioritizing her right to privacy.
As proponents of relational autonomy highlight, persons are not just independent, isolated, and self-interested beings (Mackenzie and Stoljar 2000; Dabrock et al. 2004; Baylis et al. 2008; Dabrock 2012; Steinfath and Wiesemann 2016; Braun 2017). Their selfhood and well-being depend on rich and complex relations to others, their community, and society as a whole. Importantly, this can mean that the data sovereign individual does not just close off her data, but shares it with others. In fact, practices of sharing one's personal data can constitute meaningful advances and reinforcements of the social structures in which the individual seeks to realize positive aspects of her sovereignty (see also Barbara Prainsack's contribution to this volume for a discussion of the relational nature of donations). This is a particularly fruitful option if these acts of sharing take the form of donating and endowing.

In his seminal discussion, Mauss (1950) describes a range of features which he claims are distinctive of the notion of a gift. When someone gives goods in the context of a trade, she expects a return. In contrast, while a gift might be tied to reciprocal obligations between donor and recipient, it always points to something beyond these. A gift is tied to the donor's generosity as well as some form of obligation on the side of the recipient. In this sense, there is a similarity with economic exchange because the relationship that is being constituted through the act of giving is two-way, mutual, or symmetrical. Still, the character of the gift cannot be captured in terms of the logic of exchange: the reciprocal obligations in question are incommensurable and cannot be set off against each other in an economic calculus. Gifts might not be incompatible with trade and exchange, but they involve much more. They provide systematic means for individuals and groups to articulate and reciprocate recognition, and thereby determine and shape identities. "[B]y giving one is giving oneself, and if one gives oneself, it is because one 'owes' oneself, one's person and one's goods, to others" (Mauss 1950, p. 59). Other authors insist that gifts need to be distinguished more sharply from exchange. For example, Derrida argues that a genuine gift cannot involve expectations of reciprocation of any kind. The gift "interrupts economy", "suspends economic calculation", "def[ies] reciprocity or symmetry", remains outside the "circle" of economic exchange, and is thus distinctively "aneconomic" (Derrida 1992, p. 7). One important consequence is that once the recipient perceives and recognizes the gift as a gift, it is "annulled" or "destroyed" as the act of giving becomes situated within a logic of exchange. Mere recognition of the gift as a gift already "gives back" (Derrida 1992, p. 13). In fact, not even the donor may be aware of the gift; otherwise the donor threatens "to pay himself with a symbolic recognition, to praise himself, to approve of himself, to gratify himself, to congratulate himself, to give back to himself symbolically" (Derrida 1992, p. 14). Because awareness and recognition annul the gift, the notion is inherently aporetic, and genuine gifts are impossible. Nevertheless, Derrida argues that it is out of the question to refrain from giving. He suggests that the gift is actually fundamental to exchange; giving is what "puts the economy in motion". We need to "engage in the effort of thinking or rethinking a sort of transcendental illusion of the gift.
[…] Know still what giving wants to say, know how to give, know what you want and want to say when you give, know what you intend to give, know how the gift annuls itself, commit yourself [engage-toi] even if commitment is the destruction of the gift by the gift, give economy its chance" (Derrida 1992, p. 30; his italics).

Hénaff too insists that the gift has certain unique features. He distinguishes symbolic from economic exchange. Drawing on Mauss, he agrees that gifts figure in symbolic exchanges whose purpose is to establish and foster social bonds through relations of recognition, honour, and esteem amongst parties. Symbolic exchange also prompts and articulates attitudes of generosity, benevolence, and gratefulness. Such symbolic exchange is "entirely outside the circuit of what is useful and profitable" (Hénaff 2010, p. 18). He criticizes Mauss' discussion for not always being consistent about the non-economic and non-commodifiable aspects of the gift as symbolic exchange (Hénaff 2010, p. 110). Hénaff further provides a threefold typology of the gift: ceremonial gifts which are public and reciprocal, gracious ones which are private and unilateral, and mutual aid which pertains to solidaristic or philanthropic activity (Hénaff 2013, pp. 15-16). The mutual and public character of the ceremonial gift ties it to practices of recognition and attributions of equality in public space; it accounts for its central role in "identifying, accepting, and finally honoring others" (Hénaff 2013, p. 19; his italics). He proposes that these characteristic features of ceremonial token gifts ultimately culminate in political and legal institutions that protect and guarantee recognition (Hénaff 2013, pp. 21-22) and, amongst others, open up room for gracious gifts and mutual aid.

Ricoeur is convinced that appreciating a gift need not take the form of a restitution that annuls it. What matters is the way in which the gift is received. If the gift succeeds in bringing about a kind of gratefulness that acknowledges the donor's generosity without forcing or pressuring the recipient to reciprocate, then appearances of aporia and impossibility can be circumvented. "Gratitude lightens the weight of obligation to give in return and reorients this toward a generosity equal to the one that led to the first gift" (Ricoeur 2005, p. 244). One important consequence is that ex ante, it must remain open whether this orientation towards the donor's generosity actually occurs. If the recipient reciprocates, she does so freely and without duty. A genuine gift involves openness and contingency. It cannot be forced or guided. As Mauss and Hénaff highlight, the gift can function as a source and catalyst for recognition. It does so by reflecting an endowment of the donor, a symbolic dimension through which the donor dedicates her gift and conveys a meaning beyond the commodifiable aspects of the good being given. In Mauss' view, the donor blends herself with the good being given. "Souls are mixed with things; things with souls. Lives are mingled together, and this is how, among persons and things so intermingled, each emerges from their own sphere and mixes together" (Mauss 1950, pp. 25-26). If, through dedications of this kind, the donor manages to establish social bonds or even, in Derrida's words, to interrupt patterns of economic exchange, then the gift extends the individual's room for manoeuvre in social space. It opens up options for shaping and enhancing interactions and deepening modes of integration among individuals.
The authors mentioned disagree whether gifts should be understood as diametrically opposed to economic exchange, or whether they, too, can involve mutual expectations, obligations, and relations of reciprocity. The latter seems plausible in view of the fact that through making a gift, the donor exposes herself to others and thereby engages in a potentially precarious gamble (Braun 2017, pp. 206-207). With regard to potlatch ceremonies, Mauss notes that "to lose one's prestige is to lose one's soul. It is in fact the 'face', the dancing mask, the right to incarnate a spirit, to wear a coat of arms, a totem, it is really the persona, that are all called into question in this way, and that are lost at the potlatch, at the game of gifts, just as they can be lost in war, or through a mistake in ritual" (Mauss 1950, p. 50). Gifts are attempts at giving and seeking recognition. Such attempts can fail in various ways. They can be confirmed and reciprocated, but also disappoint, overburden, be perceived as coercive, or simply not be met with gratefulness. Thus, there is no need to romanticize gifts. They can open up opportunities, but they can also generate burdens or injustices. Moreover, the donor can face reactions and structures that reject her attempts to give. By making, enabling, or accepting gifts, we have not yet established fairness, nor even ruled out violence. Gifts can only set the stage for negotiating these aspects.

In the acts of giving discussed by Mauss, Derrida, Hénaff, Ricoeur and others, two dimensions can be distinguished: first, there is an aspect of exchange insofar as these acts of giving involve transfers of goods and expectations of some form of return or reciprocation, assessed against the backdrop of an economic rationale or logic. Second, there is a distinctive gift aspect which expresses recognition and valuation of the recipient and thus yields community-sustaining potentials. The appropriateness and success of such expressive acts is assessed relative to a logic of recognition. In order to speak in a theory-neutral fashion and not beg any questions against authors like Derrida, who think the former aspect actually undermines the latter, we suggest the term 'donation' as denoting acts of giving for which it is conceptually open whether they encompass exchange and/or gift aspects. Considering only one of those dimensions would fall short of capturing the complexity of the target phenomenon. In the resulting picture, donations need not be entirely distinct from exchange, yet they are something over and above it. In the words of Waldenfels (2012), donations exceed relations of mere exchange.

Practices of organ and blood donation impact recipients in an immediate, intimate, and bodily way and are sometimes characterized as instantiating central features highlighted by Mauss' analysis: the presence and reinforcement of institutions that enable donations, expectations and even subtle pressures that motivate individuals to give, obligations on the side of the recipient to accept the gift, and recipients who are expected and feel the need to reciprocate (Fox and Swazey 1978; Vernale and Packard 1990; Sque and Payne 1994; Gill and Lowes 2008). In line with the insight that donations are risky and their effects contingent, organ and blood donations can impose undue burdens on recipients and effectively establish a "tyranny of the gift" (Fox and Swazey 1978, p. 383).
Gift-theoretic insights on the entanglement between giving, gratuity, and gratification further resonate with recent work inspired by the conceptions of relational autonomy just mentioned. One of the consequences of the complex interplay between selfhood and orientation towards others is that motivations for acts of giving often cannot be straightforwardly classified as either altruistic or self-interested. Apparently altruistic donations carry aspects of self-interest and vice versa. In particular, empirical work suggests a "simultaneity of self-interested and other-regarding practice in the field of organ donation" (Prainsack 2018; cf. also Simpson 2018). Technological innovations and their impact on research and clinical care prompt us to focus on the provision of personal health data as a new and promising way to affect others. We shall discuss some opportunities below (3.). Importantly, most people who make health-related donations do not give in these ways in order to generate a return. This suggests that a logic of exchange cannot fully explain what is happening, and the gift paradigm might be able to go some way towards explaining core features of these practices. If we buy into the idea that sovereignty means more than the right and power to keep others from interfering with one's personal sphere and is realized at least in part along outward-reaching, interactive and participatory dimensions, then donations could advance these positive aspects of sovereignty. Donations are surely just one amongst many ways to enter into relations with others, but their aneconomic aspects promise some distinctive community- and recognition-sustaining opportunities. Insofar as data sovereignty, and especially its positive dimension, is a worthwhile normative target notion, individuals should be enabled to donate personal health data. As we will argue, this is compatible with insisting on a range of mechanisms and safeguards to ensure that tensions between positive sovereignty and the protective aspects of its negative counterpart are minimized.

Reasons in Favour of Data Donations

In the following, we highlight three ways in which data donations could advance positive data sovereignty.

Solidarity

Data donations can express solidarity. Although the concept is not used uniformly throughout the literature, Prainsack and Buyx propose that solidarity involves "shared practices reflecting a collective commitment to carry 'costs' (financial, social, emotional, or otherwise) to assist others" (2012, p. 346; italics removed). Data donations fall under this definition insofar as they reflect the willingness to share efforts that are essential for advancing research and thereby helping those who are in need of findings and innovations. Prainsack and Buyx add the further condition that this willingness is based on the donor "recogniz[ing] sameness or similarity in at least one relevant respect" (2012, p. 346; italics removed), a condition that distinguishes solidarity from altruism and charity, which do not necessarily rest on an understanding of symmetry between the agent and the recipient. Data donations meet this recognition-of-similarity constraint, too. This is obvious if, as on PatientsLikeMe, the donor is providing her data for the benefit of individuals who share her risk profile or illness.
But even if motivations differ and/or it is not clear who exactly will benefit from the data, we can suppose that the donor's contribution is at least partly based on the insight that she herself could one day find herself in a situation where she benefits from donations of this kind, and so recognizes similarity with the beneficiary in a relevant respect. Two examples illustrate how the gathering and sharing of data can relate to solidarity. First, generating data about oneself, whether through genetic testing or by means of self-tracking devices like wearables and other new technologies, might appear egoistic, solipsistic, or self-centred. However, it can have an inherently social and communicative dimension (cf. Sharon 2017, pp. 111-112; Ajana 2018, p. 128). Such data gathering is carried out not only to further one's own ends, but also to share, report, discuss, and compare one's data with others. Second, consider the importance of data gatherers and contributors for personalized medicine. Personalization of health services might appear individualistic insofar as it focuses on specific traits of a given patient, and increasingly shifts responsibility for health towards the individual. But alternatively, personalized medicine can be seen as a context in which individual and collective good are inherently intertwined. Individual health tracking, testing, and data-sharing are key towards building up the databases that enable tailor-made health services. Through this "bottom-up" process from self-tracking to the generation of common knowledge bases, there is a sense in which personalization of services rests on "the idea that the overall health […] of the population can only be improved if individuals take on more personal responsibility for their own health"; we arrive at an "intertwining of the personal and collective good" (Sharon 2017, p. 100).

Beneficence

Donating data promises "New Opportunities to Enrich Understanding of Individual and Population Health" (Health Data Exploration Project 2014). Research yields public goods such as knowledge, technology, and health. Data is one essential ingredient in research success. In the age of wearables, smartphones, and other self-tracking devices, plenty of personal health data is being generated. However, most of it remains inaccessible to medical and public health research. Data donations could allow the research community to utilize generated data and to convert it into predictions, treatments, and other innovations that potentially benefit a great number of patients, health systems, and populations. In the ideal case, these benefits come at minimal costs for the donors. Unlike organ donation, data donation doesn't hurt. It is convenient and effortless. And unlike donations to charities, there is no financial burden for the donor. Data donations can also lead to self-interested benefits. The PatientsLikeMe campaign claims that donating one's data also helps "to improve one's own condition" (2014), and DNA.Land (2018) promises to reveal new insights about the donor's genome. At the very least, contributing to a common practice of data donations adds to improved evidence bases, understanding of diseases, and treatments that ameliorate clinical practice. It is also possible that the provision and analysis of data leads to the discovery of actionable incidental findings that would have otherwise remained unnoticed. Only upon receiving this information can the donor take preventive and curative steps.
There are many potential beneficiaries of data donations. Data helps foundational science, doctors, patients, healthy individuals, society as a whole, insurers, and others (German Ethics Council 2017a, sect. 4.4). Moreover, there is a plurality of services that can be ameliorated: knowledge generation and supply, diagnosis, prediction, treatment. Improvements can be achieved along several dimensions: in terms of the hedonic benefits they provide, the costs they save, and/or the contributions they make to social integration.

Participation

The significance and normative dimension of scientific research need not be exhausted by the benefits it generates. Focusing on genomic research, Knoppers et al. argue that a human right to benefit from science includes the right "to have access to and share in both the development and fruits of science across the translation continuum, from basic research through practical, material application" (2014, p. 899). In a similar vein, Vayena and Tasioulas (2015, 2016) argue that science is a central component of the kind of communal and cultural life to which all humans are entitled. The authors highlight an underappreciated participatory dimension of the right to science: human rights frameworks like the Universal Declaration of Human Rights (1948, art. 27) and the International Covenant on Economic, Social and Cultural Rights (1966, art. 15) entitle individuals to take part in scientific endeavours. Encouraging and enabling data donations would certainly be an important step towards respecting this right and including broader populations in scientific endeavours. Proponents of a participatory right to science could even insist that mere data gathering and sharing falls short of respecting the right to science in all its facets. Understood more comprehensively, it also entitles individuals to participate in financing, agenda setting, and governance, and even to take lead roles in initiating, designing, and carrying out studies. The human right to science imposes duties "to equip people with the basic scientific knowledge needed to participate in science or to provide citizen scientists with various forms of support and recognition, e.g. sources of research funding, access to oversight mechanisms and the opportunity to publish in scientific journals" (Vayena and Tasioulas 2015, p. 482). According to such positions, strong reasons in favour of enabling data donations actually imply that we should not stop there, but enable much more.

Challenges with Data Donations

Data donations can provide great benefits, express and foster solidarity, and enable individuals to participate in scientific research. But they also raise some difficulties and puzzles.

Trust

One aspect that well-established practices of giving like financial, organ, or blood donations share with data donations is their reliance on trust. They function only if the donor can expect that her willingness to give will not be exploited by collectors and facilitators, that her donation is being handled responsibly and put to work effectively, and that no third-party interests interfere with the equitable distribution of her donation. The donor also expects that her contributions are being made against the backdrop of appropriate safeguards that protect her from harm, and that burdens arising from the donation process are minimized. Important questions arise about which institutional designs best ensure that such expectations are met, trust does not erode, and the practice remains stable.
Data donations presuppose trust in similar ways. One case in point is the backlash against the NHS care.data scheme in the United Kingdom, which was intended to enable the sharing of personal health data for research, but was met with distrust due to shortcomings in communication and transparency (Sterckx et al. 2016).

Future Use

The scope and timing of financial, organ, or blood donations are clearly defined. Donations of biological specimens can be sought with a reasonably well-defined purpose in mind, but already here questions loom about admissible future uses of such samples beyond the initially intended purpose. For example, after plenty of samples were collected to speed up research and development efforts during the 2014 outbreak of the Ebola virus disease in West Africa, questions arose about how to use these biobanks responsibly in a way that provides long-term benefits for the health systems and scientific infrastructures of affected countries (Hayden 2015; World Health Organization 2015). Unlike organ and blood donations, biological specimens are not exhausted once they reach a beneficiary. They can be analysed repeatedly in a variety of study designs. To harness these potentials, regulators and researchers need to think carefully about consent mechanisms, the provision of appropriate information to sample donors, and mechanisms to govern access to the biobanks in which samples are stored. One distinctive feature of data donations is that the possibility of future uses familiar from biobank donations is driven to the extreme. Consider the de- and recontextualization processes which datasets tend to undergo in the age of big data. Donated health data is likely to be processed and analysed by means of algorithms and applications that are designed to discover and examine unforeseen correlation hypotheses (cf. Mittelstadt and Floridi 2016, p. 312). From a normative perspective, this raises at least three issues. First, the protective value of anonymization is limited. Some data, such as genomic information, is essentially personalized and cannot be anonymized. But even for other kinds of information, the possibility of de- and recontextualization entails that de-anonymization cannot be ruled out. Giving data might be relatively convenient and effortless, but depending on the kind of data and context, such linkages can have quite significant consequences. Surprising inferences can be drawn from personal information, especially once it is combined with and set in relation to other data sets. The problem is that individuals are less and less in a position to foresee and take into account potential harms and/or disadvantages that can accrue on the individual or the collective level. Second, because future uses and possible inferences about the data subject are to some extent unclear at the point of data collection, it is challenging to design consent mechanisms that inform individuals appropriately. The problem is not just that non-experts lack the competence to foresee the possibilities of recontextualization and linkage with other data sets, and that this leads to deep asymmetries of information between the data donor and users who have the expertise and technology to process it. At the point of donation, the range of possible recontextualizations, linkages, and inferences can remain inaccessible even to experts. In other words, the exact quality and character of the donation is in constant flux.
The question arises how, under these conditions, an individual can meaningfully deliberate upon whether or not to donate her health data. There is a tension between the very idea of making such a donation and the fact that it must remain somewhat opaque to both donors and collectors what exactly is being donated. Third, the availability of greater sets of data by itself does not guarantee improvements in the quality of data and/or the inferences drawn from it. The complexity of big data sets and the tools used to analyse them poses a range of epistemic challenges for data collectors and researchers that complicate the evaluation of big-data-driven hypotheses (cf. Mittelstadt and Floridi 2016, p. 327). The beneficent potential of data donations is directly tied to the scientific soundness of their analysis, processing, and conversion into research and development. Providing her data entitles the donor to reasonable expectations towards the scientific institutions which she authorizes to use and leverage her donation, for example the expectation that her data is being used responsibly and effectively in a way that reflects her philanthropic intentions. These expectations will be frustrated if scientific virtues like rigour, care, and modesty are not enacted consistently throughout data collection, analysis, and interpretation.

Invasiveness

The implications about asymmetries of information become even more significant once we consider how invasive data can be in the age of big data, genomics, and continuous and holistic tracking. When we speak of data that can be donated, we are referring to a vast number of biological markers such as an individual's complete and unique set of genetic information, physical parameters such as location and movements, lifestyle data, and even data about emotions, moods, and states of mind. Moreover, linkages amongst datasets lead to cumulative effects (Braun and Dabrock 2016a, pp. 316-317). First, the combination of clinical records with data from medical research, self-tracking technologies like fitness apps, lifestyle data, financial data, etc. results in levels of invasiveness which individual datasets do not achieve. Second, distinctions between seemingly discrete data kinds and spheres begin to vanish. The fact that companies like Apple, Google, and Microsoft are already active in all these domains underlines that linkages between them are only a matter of time. The penetrative character of data and devices means that what they extract from us transcends concepts like parthood or possession. The German philosopher Helmuth Plessner (1980) has drawn a distinction between the physical body (Körper) and the living body (Leib). According to Plessner, one distinctive feature of human life is eccentric positionality, i.e. a particular mode of relating to its own positionality in space: humans can conceive of themselves both as physical bodies existing in the corporeal, outer world of things and as experiencing selves occupying the centre of a spatially delineated physical body, the locus of perceptions, actions, and experiences (cf. also de Mul 2014). Qua physical bodies, humans live, but qua living bodies, humans are subjects of experienced life. This double aspect is reflected by the two simultaneously instantiated modes of being a living body (Leibsein) and having a physical body (Körperhaben).
In view of these concepts and distinctions introduced by Plessner, we might wonder whether, once individuals and their experiences are seen as complex conglomerates of algorithmic processes (for example Harari 2016, chs. 2, 10, 11), captured in their entirety by holistic, rich datasets and invasive devices, the difference between what we are and the features we have has collapsed. In this case, some kinds of data donations, the ones paradigmatically enabled by novel big data technologies, would involve much more than donating merely a part of me, or merely something about me. The question arises what about me is not being captured by data. As long as it remains unanswered, we are left with a sense in which the data donor can give all of herself, all she is. The scope of the potential donation is unprecedented.

Ownership

In order to donate something, it must be mine. I cannot donate things that belong to you, such as your blood or organs. My personal health data is certainly about me, but is it also mine? Much seems to depend on the sense of ownership in question. For example, it is contentious whether personal health data can be seen as private property. Montgomery offers several reasons to reject the suggestion. He notes that in the context of health data, intuitions about privacy "sit uneasily with property ideas": even if we commodify personal health data, "information 'about me' does not cease to be connected to my privacy when I give (or sell) it to others" (Montgomery 2017, p. 82). This suggests that ownership in the sense of private property is not primarily what motivates the regulation of health data. Moreover, according to a broadly Lockean account, private property results from mixing labour with resources. This idea undercuts rather than supports the view that my health data is mine. While I might have "invest[ed] bodily samples" (Montgomery 2017, p. 83), it is the medical service provider who analyses specimens and data, compiles them into evidence bases, and generates value based on the raw materials I am providing. If labour is any indication, then "[i]f anyone may claim proprietary rights over the information on the labour theory of property, it would seem to be the health professionals or service for which they work" (Montgomery 2017, p. 84). Montgomery suggests that if we really want to regard data like genomic information as property, it should not be considered private. One alternative is to regard such data as common property, i.e. property shared by a group of people (such as families), with outsiders excluded. But Montgomery himself prefers the paradigm of public property: genomic data is like the air we breathe in the sense that everybody is entitled to it, the resource is not exhausted by universal access, and the benefits connected to its usage motivate obligations of stewardship and preservation. We might have to complement such an account with the additional thesis that privacy-related, rather than property-related, claims could still exclude access to personal health data, especially given the degree of invasiveness and comprehensiveness described above. What matters for our purposes is that data donations are disanalogous to other ways of giving in that they do not involve a transfer of something the donor owns in a straightforward way (on this issue, see also Barbara Prainsack's contribution to this volume). In fact, as Montgomery also notes, data donations need not even involve a transfer: the data donor need not lose anything.
Instead, her donation might be best understood as a suspension of certain privacy claims. Considerations about ownership become highly relevant once calls for data donations are addressed not only to individuals, but also to data-processing organizations and institutions. In this context, data philanthropy refers to the provision of data from private sector silos for the public benefit, e.g., development aid, disaster relief efforts, and public health surveillance. Social media data can be key in the detection and monitoring of disease outbreaks. Organizations could share data of this kind not only on the basis of corporate social responsibility, but because they recognize the need for a "real-time data commons" (Kirkpatrick 2013). One necessary condition is that the privacy of individuals can be protected through measures like anonymization and aggregation. Even in cases where this is not possible, the hope is that "more sensitive data […] is nevertheless analysed by companies behind their firewalls for specific smoke signals" (Kirkpatrick 2011). Since such data is generated by the private entity, typically on the basis of some form of consent, there is a sense in which this entity is the owner. However, the owner and envisioned data philanthropist is not the data subject. It must be ensured that the interests of the latter are not compromised when data is being made available.

Affected People

In organ or blood donations, the identity of the beneficiary is often somewhat unclear: unless I am donating to a relative or friend, the recipient will be some indeterminate or unfamiliar other who is in need of the materials I am providing. Still, I have at least a vague idea about certain features and needs of the recipient, e.g., that she is in need of an organ. Something similar applies if I disclose personal health data for the benefit of people who share my illness or risk profile, e.g., on PatientsLikeMe. But once data is either decontextualized as described above or not donated with such a specific purpose in mind, e.g., when uploading one's genome to openSNP, the potential beneficiary and the way in which she benefits from the contribution become increasingly abstract. Not only does the range of beneficiaries of the data donation broaden; it is also less clear who carries the burdens and consequences connected with the act of sharing. The donation of my kidney is a sacrifice which I make myself. Setting aside the beneficiary, the effects of my donation on others are minimal. In particular, any burdens related to the donation are carried almost exclusively by myself. In contrast, consider how submitting my genome to a public database could reveal information not only about myself, but also about my children or relatives, e.g., on hereditary risk factors. The range of people being affected as well as the precise consequences of the donation are much less transparent to the donor than in other health-related donations.

Voluntariness

Donations are conscious, deliberate, uncoerced acts of giving, informed by beliefs about a need that is being addressed through the donation. Data donations can be made by means of the explicit provision of information to research projects and platforms, or by accepting the terms and conditions of platforms that gather, evaluate, and perhaps even publish the data of their users (Kostkova et al. 2016). In any case, the informed will of the donor cannot be bypassed. In this context, at least two challenges arise.
First, there is a risk of opacity or even deception about the purpose of data gathering, especially if the sharing of data offers significant benefits to private sector service providers. The question arises how societies and individual donors choose to evaluate the activities of commercial entities who convert philanthropic data donations into products that might improve lives to some extent, but in the first place generate non-altruistic, self-serving revenues. For example, the biotechnology company 23andMe (2018) motivates customers to become "part of something bigger" and make contributions that "help drive scientific discoveries" by allowing the company to use data from its direct-to-consumer genetic testing services for research purposes. At the same time, 23andMe is generating intellectual property from its biobank, such as a patent on a gene sequence which it found to contribute to the risk of developing Alzheimer disease (Hayden 2012), and a method for gamete donor selection that allows prospective parents to select for desired traits in their future child (Sterckx et al. 2013). Calls for data donations may allude to philanthropy, altruism, solidarity, and the good a donation can do, but in fact they might at least partly be driven by the self-interest of the data collector. The question of whether to share data in view of private sector benefits becomes particularly pressing in contexts where the latter conflict with the donor's beneficent aims. For example, consider a situation in which data provision that is intended as philanthropic advances medical research while enhancing and stratifying insurers' knowledge about the risk profiles of donors and customers. Such prospects can ultimately deter individuals from sharing. Where they do not, they provide opportunities for private sector entities to free-ride on philanthropic dispositions. Second, the informed will of the potential donor can be challenged by apparent moral pressures. Understood charitably, headlines like "Our Health Data Can Save Lives, But We Have to Be Willing to Share" (Gent 2017) can be seen as raising awareness of so far unrecognized, readily available, and effort-efficient means for the individual to improve the lives of others. But there is a somewhat questionable flipside to such statements. They might be taken to suggest that an individual acts wrongly if she ultimately prioritizes her privacy over the presumed benefits of a data donation, and/or if she judges the privacy risks to be disproportionate relative to the utility that would be generated by her donation. In other words, a perceived duty to participate might result (Bialobrzeski et al. 2012). In view of rhetoric that declares data a common good and public asset, Ajana sees a risk of pitting data philanthropists against privacy advocates when "in the name of altruism and public good, individuals and organisations are subtly being encouraged to prioritise sharing and contributing over maintaining privacy. […] First, it reinforces […] the misleading assumption that individuals wishing to keep their data private are either selfish and desire privacy because they are not interested in helping others, or bad and desire privacy to hide negative acts and information. Second, this binary thinking also underlies the misconception that privacy is a purely individual right and does not extend to society at large" (Ajana 2018, pp. 133-134).
A parallel can be drawn to worries regarding self-imposed surveillance and disciplining mechanisms (Foucault 1977) through self-tracking devices (Sharon 2017, pp. 98-99). Voluntary tracking and provision of personal health data can turn into liberty-constraining expectations that data is not only shared, but also that individuals take measures to improve their health markers (Braun and Dabrock 2016a, p. 323). The prospect of doing good with one's data can similarly be turned into a disciplining narrative that conveys implicit expectations that data should not be withheld. What initially appears to open up options for the individual ends up delimiting them. These dynamics would be unfortunate from a normative perspective. Data donations might be beneficial and morally commendable, and these features provide some reason to donate. But they hardly provide an all-things-considered reason, let alone a strict duty, to do so. Consider two examples: first, for the Kantian, the duty to help others is an imperfect one, i.e. it remains entirely up to the agent to what extent she helps others (Kant 1785, p. 423). Second, consider effective altruism, according to which there are strong moral reasons to give, e.g., donating money to charity, organs to patients in need, or time and labour to good causes (Singer 2009, 2015; MacAskill 2015), but also to ensure that the good your efforts bring about is maximized. To our knowledge, effective altruists have not yet explored data donations, but they could be intrigued by the benefits that can be realized through such acts of giving. Still, effective altruists agree that although, once you donate, you should donate as effectively as possible, there can be optionality about whether to donate at all. Strong normative reasons to give money to charity can be outweighed by the costs such donations incur for the donor. In such cases, "it would not be wrong of you to do nothing" (Pummer 2016, p. 81). According to these positions, it is far from unreasonable or immoral if an individual decides to be restrictive about her data. It is a fine line between holding her contributions in esteem and implicitly sanctioning or generating a burden of proof for the individual who decides to keep her information restricted.

To sum up, donating personal health data offers alluring opportunities (3.), but a number of challenges lurk along the way. Genuine donors typically have some idea about what they are donating, what the donation will be used for, whom it benefits, and who carries burdens related to the donation. However, in big data contexts, potential data donors are bound to have a limited grip on the nature of their donation, the future use of their data, and the people affected by their decision to share. Further disanalogies come from the invasive and comprehensive character of state-of-the-art data gathering and processing, and the fact that the relevant sense of ownership is far from straightforward. Finally, the voluntariness of data donations can be undercut by opaque or deceptive information and/or moral pressures that appear to deflate individual privacy claims. Earlier, we suggested that donations can advance positive data sovereignty as they foster social bonds and open up room for manoeuvre in social space. Specifically, we suggested that through data donations, individuals can enact beneficence and solidarity, and play an active role in scientific processes. The challenges just characterized aggravate the uncertainties that are inherent to any act of giving.
Important aspects of the good being given are in constant flux: what it will be used for, whom it benefits, and who carries burdens. If the donor decides to give nevertheless, she embarks on a venture into the unknown that can become precarious. Not only might the donation be in vain, fail to accord with the donor's intentions, and remain unsuccessful in advancing positive sovereignty. Even worse, the donation could backfire and end up compromising negative aspects of the donor's sovereignty that relate to protective claims and rights, for example against untoward interferences from others, disadvantages, discrimination, or exploitation. Donations, Consent and Control As mentioned earlier (2.), one important realizer of sovereignty is power. In the case of data sovereignty, the relevant power is control over one's data. The question arises how data donations can be facilitated and regulated in a way that guards and strengthens the data sovereignty of potential donors. We now suggest three governance areas that are crucial towards this goal. Ideally, mechanisms in these areas enable potential donors to contribute their health data for the benefit of others and scientific progress as a whole without leaving them susceptible to undue harms arising from the aforementioned challenges. Consent Several initiatives highlight a considerable degree of willingness on the side of individuals to share their data (Wellcome Trust 2013; Health Data Exploration Project 2014; PatientsLikeMe 2014). However, it has also been recognized that the willingness to share data, and especially preferences about what kind of data may be shared, can be expected to vary amongst user groups (Weitzman et al. 2010). The example of the care.data scheme shows that sharing and connecting health data can prompt scepticism as soon as insufficient attention is devoted to the consent of data subjects. It is thus necessary to focus on the conditions and mechanisms for meaningful, informed decision-making. As mentioned, many uncertainties surround the future use of one's data. In big data contexts, the informedness of one-time consent to data gathering and processing inevitably remains incomplete (Mittelstadt and Floridi 2016, p. 312). Given the prospective benefits of data donations outlined earlier, and the potentials of big data methods more generally, it stands to reason not simply to refrain from useful activities in the absence of fully informed consent, but to rethink and redesign informed consent in a way that makes these activities possible and honours the data subject's self-determination. Even if data is already collected and in principle available for analysis, it is highly questionable whether informed consent can legitimately be bypassed (Ioannidis 2013). And needless to say, for our context it matters that data crawling and processing without consent undermines the very idea of a data donation. A range of new consent forms are under discussion in the literature. Reliance on opt-out mechanisms in biobanks and online data gathering (CIOMS 2016, chs. 11, 22) is already widespread. Blanket consent places little to no constraint on future uses. Broad consent allows a wide range of future uses (Petrini 2010). Tiered consent can take several forms, from the specification of a range of approved uses, to the exclusion of certain uses, to requiring re-consent if usage for a new purpose is intended (Eiseman et al. 2003, pp. 134-7; Master et al. 2015).
Each of these options can enable valuable research, but also compromises the ideal of informed consent to some extent. For example, they do not satisfy the standards of informedness laid out in the Declaration of Helsinki (World Medical Association 1964). Some thus argue, e.g., that "blanket consents cannot be considered true consent" (Caulfield et al. 2003), since such consent is provided on the basis of information that is far too vague and does not allow the individual to act on her continuing interest in her health information. Others even conclude that informed consent is inapplicable to contexts like biobanking where uncertainty about future use is unavoidable (Cargill 2016). In fact, we must highlight a further problem. Inherent to alternative consent models is typically a more or less explicit distinction between sectors. Information and samples are being given for a certain range of future uses or certain tiers of research. Oversight mechanisms and committees are thus needed to determine whether a particular usage request of a researcher accords with the consent provided at enrolment. But note how, given our earlier remarks about future use and de- and recontextualization, these sectorial distinctions are in jeopardy in big data contexts. For example, consider the consent to the processing of one's social media data, given through acceptance of terms and conditions (Kostkova et al. 2016, p. 2). Once analysed by suitable algorithms and linked with other data sets, certain social media data (or metadata) effectively becomes health data. Of course, this can be seen as a challenge already for single-instance consent, given that it becomes increasingly less transparent to the individual what can and will be done with her data. But novel consent forms become even trickier once the sectorial distinctions inherent to broad or tiered consent forms fade. Problems like these motivate consent forms that are dynamic. Different individuals possess different preferences depending on the kind and context of data in question. Moreover, preferences can be expected to change over time, for example if technological advances open up new possibilities for drawing inferences from a given dataset. This calls for refined and dynamic control mechanisms that allow individuals to provide and withdraw data in accordance with their evolving preferences, a demand which has found its way into legislation on data portability, e.g., in Article 20 of the EU General Data Protection Regulation (GDPR). Once individuals become equipped with effective means to access and transfer their data, they turn from mere data subjects to active data distributors (Vayena and Blasimme 2017, pp. 507-8). One example of what this could mean for data donations is provided by Schapranow et al. (2017). While organ donation passes are common, similar mechanisms are lacking for data donations. The authors thus introduce a data donation pass, which can be maintained through a smartphone app in which individuals can choose in real-time whether and for how long they would like to provide their data to research projects, what kind of projects they would like to support, what kind of data is being shared, and when it shall be withdrawn. Besides highlighting potential benefits, the authors explicitly construe the data donation pass as a means for the individual to exercise data sovereignty. A schematic, purely illustrative sketch of how such a pass might be modelled in code is given below. Representation Innovative consent forms can be complemented by representatives who express or represent the donor's will in governance processes.
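To make the data donation pass just described more tangible, the following is a minimal, hypothetical sketch in Python. It is not the implementation of Schapranow et al. (2017); all class, field, and method names (DataDonationPass, DonationGrant, grant, withdraw, permits) and the data categories shown are assumptions introduced purely for illustration.

from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

@dataclass
class DonationGrant:
    # One consent decision: which data, for which kind of project, for how long.
    data_category: str               # e.g. "activity_tracker", "genomic" (hypothetical labels)
    project_type: str                # e.g. "academic_oncology_research"
    granted_at: datetime
    expires_at: Optional[datetime]   # None = open-ended until withdrawn
    withdrawn_at: Optional[datetime] = None

    def is_active(self, now: datetime) -> bool:
        # A grant is usable only while it has not been withdrawn and has not expired.
        if self.withdrawn_at is not None:
            return False
        return self.expires_at is None or now < self.expires_at

@dataclass
class DataDonationPass:
    donor_id: str
    grants: List[DonationGrant] = field(default_factory=list)

    def grant(self, data_category: str, project_type: str,
              expires_at: Optional[datetime] = None) -> DonationGrant:
        # Record a new, time-limited or open-ended donation decision.
        g = DonationGrant(data_category, project_type, datetime.utcnow(), expires_at)
        self.grants.append(g)
        return g

    def withdraw(self, data_category: str) -> None:
        # Real-time withdrawal: mark all matching, still-active grants as withdrawn.
        now = datetime.utcnow()
        for g in self.grants:
            if g.data_category == data_category and g.withdrawn_at is None:
                g.withdrawn_at = now

    def permits(self, data_category: str, project_type: str, now: datetime) -> bool:
        # A data processor may use the data only if a matching grant is currently active.
        return any(
            g.data_category == data_category
            and g.project_type == project_type
            and g.is_active(now)
            for g in self.grants
        )

Even a toy model of this kind makes the relevant control points explicit: what kind of data is shared, for which kinds of projects, for how long, and how it can be withdrawn in real time. These are precisely the points at which the governance mechanisms discussed in this section must take hold.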
For example, trustee or honest broker models authorize a neutral and unbiased individual, committee, or system to manage access requests by researchers and function as a firewall between the database and potential data processors (Vaught and Lockhart 2012). The purpose of honest brokers is typically to secure the privacy and anonymity of individuals. We can easily imagine extending their scope to representing further interests of the donor. In this context, we might also invoke the concept of custodianship, which aims at ensuring accountability to the data donor across the full spectrum from data collection to database maintenance and access permission. "Custodianship does not entail the right to ownership but acknowledges that a biospecimen is provided to research as a 'gift' to be used only with consent to advance science for the benefit of society" (Yassin et al. 2010). Going one step further, one can take on board some of the ideas from citizen science indicated earlier. For example, Shirk et al. (2012) distinguish several models of public involvement in scientific research. Such models could also be applied when including data subjects in governance processes: on one end of the spectrum, individuals merely contribute data or specimens to research projects. In collaborative projects, donors or members of the public refine research project designs together with investigators. In co-created projects, researchers and donors work as equals. And in collegial contributions, non-credentialed individuals even carry out research independently. Organizations Data sovereignty appears as a feature of individuals, but consent structures, participatory designs, and organizational self-control set the stage for it. Shaping these structures in a way conducive to data sovereignty is indispensable. This requires organization-level commitments and rules prompted by a thoughtful mix of incentives and frameworks along at least two dimensions. First, mechanisms of voluntary self-control should be considered, either at the level of corporate social responsibility or by setting up industry-wide, impartial licensing and control agencies. Second, the state can intervene by reshaping legislation for the operation of data-processing institutions, e.g., through the aforementioned EU GDPR. Either way, data sharing requirements need to be designed with care. For example, there is a potential tension between mandatory publication of publicly funded data and the willingness of individuals to donate. The former can speed up research, but it can also, especially in the case of genomic data, increase privacy risks and thus deter potential donors. Observation I In the literature, it is sometimes noted that data donations solve problems with research in which standard informed consent is impracticable. The idea is that in view of looming deanonymization, de- and recontextualization, and future uses, research is bound to rely on "information altruists" (Kohane and Altman 2005) who are aware of these risks, but share their data nevertheless. On the far end of the spectrum is probably the OpenSNP case where whole genomes are freely accessible. The upshot is that people who are willing to take risks facilitate research that would otherwise be impossible or very hard to carry out, while the consent requirements for the general, less risk-seeking public remain uncompromised. We saw earlier that sovereignty can indeed be transferred and delegated to others.
But we also saw that considerations about the legitimacy of the sovereign indicate that obligations of representation and accountability are tied to such transfers. Sovereigns who fail to represent their people are despots. Moreover, on reflection we might become convinced that certain fundamental aspects of individual sovereignty resist transfer to others. As Judith Butler puts it, when people vote, "[s]omething of popular sovereignty remains untranslatable, non-transferable, and even unsubstitutable, which is why it can both elect and dissolve regimes" (2015, p. 162). The implication for our purposes is that even if data sovereigns delegate power and authority to representatives and trustees, suspend their own authority through novel consent mechanisms, or renounce authority through blanket consent, some ethical constraints still remain in place. For example, individuals who upload their genome on OpenSNP do not thereby become fair game. Despite their broad consent, we can still raise questions about which use of their data is legitimate. Such questions arise from an ethical, but also from a legal perspective, e.g., when we debate which ways of discriminating against data subjects are unlawful. And in cases where consent procedures are tied to mechanisms of representation, Butler's remark suggests that representatives might be authorized to speak on behalf of data subjects, but can fail to articulate their voice. In some instances, the authority of representatives might "dissolve". These points illustrate that it remains an open and pressing question what researchers and data collectors owe to 'information altruists' and others who suspend their claims to full-fledged control over future use. The mere broadening of consent forms is not a surrogate for reflecting upon responsible institutional designs. Observation II There is considerable variation across the mentioned consent and representation models with regard to how well they cohere with the idea of a data donation. For example, in the above-mentioned picture of collegial research by Shirk et al., there is a sense in which data subjects are not donating any data at all. Their data does not go anywhere. It is merely channelled into a research process which the subjects themselves are designing and carrying out. Broad consent might secure a link between self-determination and the process of sharing and subsequent analysis of personal health data. But here, some of the earlier challenges strike back. Precisely because the consent is broad, questions arise about how the apparent donor can meaningfully endow her data. After all, crucial aspects of her donation must remain open, including what exactly it is for, who benefits from it, and whether only she carries burdens related to the donation. Tiered consent to data sharing, i.e. donating data towards specific purposes and/or with re-consent conditions in place, need not be strictly incompatible with the idea of a donation. But notice how, when provided by means of tiered consent, data is not simply given to others, be they researchers, developers, or the general public. Instead, claims to power remain attached to it, and are not renounced by the apparent donor. Similar points apply to trustee or honest broker models. One of their purposes seems to be the extension of the donor's will to future situations and applications she cannot foresee in the present.
These mechanisms allow the apparent donor to remain in command, if only indirectly and through representation, to ensure that use fits intended purpose. To put it bluntly: it is a little odd to make a donation or gift, but to tell recipients what to do with it. This demand is taken to its maximum with dynamic consent, where the subject never actually ceases to be in control. All these mechanisms and models certainly hold alluring promises with regard to the protection and autonomy of individuals. But the question arises whether the apparent donor is actually put into a position where she clings onto what she has promised to let go of when entertaining and committing to the idea of a genuine data donation. Taken together, the foregoing results lead to a puzzle. If I am giving some broad form of consent to use my personal health data, I lose my grip on the sense of endowment which authors like Mauss, Derrida, Ricoeur, and Hénaff highlight as a distinctive feature of gifts. If I cling onto my data through various models of extending my control, I am not actually letting go. Part of the puzzle might depend on the extent to which we regard donations as being more than exchange. It appears that all the aforementioned conditions are suitable means for the individual to retain power and control over her data and to constrain access to and use of it when this process is thought of as an exchange whose conditions the individual seeks to govern. But earlier (2.), we suspected that, when considered through the lens of gift theory, donations can be seen to exceed this logic, to point to something beyond economic exchange, and to involve the acceptance of risks and uncertainties about the consequences of their endowment. If so, there is a tension between conditions to facilitate data donations as exercises of data sovereignty (in particular the resulting claims to power and control) and the idea of what it means to donate, gift, and endow something to others. At this juncture, several strands of the foregoing discussion flow together. Data donations can reinforce the social structures in which individuals live their lives (2.). Specifically, data donations allow the individual to enact solidarity, beneficence, and participation (3.). Exercises of data sovereignty will thus not categorically result in restrictions to data access. Privacy must be ensured by default, but respecting individuals as data sovereigns further involves implementing responsible governance mechanisms to enable data donations. As we have seen, sovereignty is being realized through power and control. Data sovereignty in particular involves control over one's data: where it goes, who has access, and what is being done with it. Such control matters especially in view of the challenges and puzzles surrounding data donations (4.). Hence the three governance areas proposed above. However, on the one hand, gifting involves endowing, and donating means letting go of what one gives. On the other hand, sovereignty involves power and control. The latter might undermine the former. In view of this tension, should we not refrain from applying the sovereignty and gift paradigms, which we have claimed are inherently related, when trying to better understand the practice of data donations? Not necessarily. One intriguing way to resolve the tension just described is to regard data donations as data loans.
When deciding whether or not to give an item, asset, or commodity, my options certainly include keeping all my claims to the object in place, i.e. not giving at all, or renouncing the entirety of my claims and giving without any remaining strings attached. But in between, a continuum of acts of giving is conceivable where only some kinds of claims to the object are renounced or suspended. Loans are instances where certain claims are being suspended and can be reclaimed at the conclusion of the loan (on the significance of this picture for understanding public attitudes towards scientific research, cf. Starkbaum et al. 2015; Braun and Dabrock 2016b). Other claims can remain in place throughout, e.g., when there are expectations about the purpose of the loan. As this illustrates, it is not inconsistent to give while keeping certain claims to the item, asset, or commodity in place. Loans as well as donations are something the lender gives, and her aims can include conveying recognition, fostering bonds of solidarity, and reinforcing social structures. In our context, providing one's data to researchers need not be seen as a donation of the data itself. What is being given, potentially with all the aspects of endowment described earlier (2., 3.), is a loan of this data. Individuals might want to retain certain powers, for example the ability to cancel or modify access if the challenges and evolving circumstances described earlier (4.) increase precarity or shift the nature of their data loan. If the motivation is genuinely non-self-interested, the loan carries no economic interest or benefit, no expected return in the light of which the lender's action pays off for her, other than putting her in a position to offer symbolic appreciation and contributions to others, her community, the scientific enterprise, and society as a whole. As an exercise of sovereignty, the loan comes with only one condition: that it may be retracted, or at least that the consent may be modified, if and when the individual requests it. The picture of data donations as data loans does not resolve all challenges. Loans emphasize the precarious aspects of donations as they carry risks of exploitation and default. Lenders might strive in vain for control and security. Moreover, the question remains how individuals can lend something that they do not own in a straightforward way, and give a loan that in view of penetrative data processing is incredibly invasive. Nevertheless, the appeal of the picture is that it reflects both the ability to grant access to data and the implementation and justification of control mechanisms such as those outlined above. The latter might remain imperfect, but still be promising enough to set the wheel of giving in motion. Conclusion We have defended the thesis that donations of personal health data can advance individual sovereignty. The elements of gift theory have been used as a descriptive heuristic to gain a better understanding of donations. Gift theorists maintain that there are cases in which an analysis that focuses solely on exchange aspects elides important features of the target phenomenon.
Instead, they invite us to look for what Derrida calls aneconomic aspects in order to grasp acts of giving in all their complexity: whether or not these acts involve a sense of endowment, are being carried out without the intention to prompt a return, transcend the individual's self-interest, and/or convey a symbolic, non-commodifiable aspect that encodes the donor's dedication and investment of a part of herself into what she is giving. Note that these suggestions are descriptive. It does not follow that it is normatively desirable to make gifts, just that considering these aspects ameliorates our understanding of acts of giving. Once donations are examined through the lens of gift theory, it becomes apparent that they can generate social bonds, convey recognition and open up new options in social space, for example by interrupting patterns of economic exchange and enabling activities and interactions that would have otherwise remained unlikely or impossible. If these potentials are realized, donations can fruitfully advance individual sovereignty. Sovereignty is sometimes reduced to negative and protective rights and powers, but we suggested that it also encompasses positive entitlements to pursue one's notion of the good life through connecting and interacting with others. Our claim was not that donations are the only way to advance sovereignty. However, if data subjects are to be sovereigns about their health data, the positive dimension of sovereignty calls for ways to facilitate the sharing of data as an expression of the individual's informational self-determination. Such donations can enact solidarity and beneficence and enable donors to participate in scientific processes. The foregoing neither motivates a duty to donate nor deflates the importance of protections. Even though donations can advance positive sovereignty, we must not lose sight of potential conflicts with the negative, protective aspects of sovereignty. Data donations in particular have a range of features that exacerbate risks and uncertainties. In big data contexts, data donations become more invasive than other kinds of donations. Potential data donors are bound to have a limited grip on what they are giving, the future use of their data, and the people affected by their decision to share. We thus proposed that tensions between data donations and the negative, protective aspects of sovereignty should be minimized through consent procedures, the representation of data subjects, and organization-level constraints and commitments. These mechanisms complement one another and apply to a plurality of agents on different levels (Braun and Dabrock 2016a, pp. 324-5; German Ethics Council 2017a, sect. 5.3): individuals who become empowered to share and withdraw their data, representatives and brokers who mediate between individuals and data processors, data networks which provide means for data subjects to govern the flow of their information, and regulators who set formal and enforceable frameworks. These mechanisms seek to ensure the controllability of data donations for individuals as well as the accountability of data gatherers and processors. Ideally, the intentions of data donors, including those related to gifting and endowing, can then be introduced and unfold within the governance of the institution. Special attention should be paid to technological infrastructures.
First, data interoperability (Nature Biotechnology 2015) is necessary to transfer data, e.g., from electronic health records or direct-to-consumer genetic testing to data networks. Second, our call for dynamic consent mechanisms requires user-friendly interfaces in order to make users aware of new developments and allow them to control, submit, and withdraw data in real time. Third, developing such interfaces and/or setting up representatives, typically software data agents, to serve as data trustees presupposes a sufficient degree of standardization of programmatic data interfaces. Nevertheless, in the end all these measures might fall short. Recall Derrida's claim that gifts set the circle of the economy in motion. We can set up efficient infrastructures and implement controllability for donors as well as accountability of data-processing institutions. Still, Derrida's claim can be taken to remind us that institutions of giving will be set in motion only if individuals are ready to engage in this risky enterprise, an enterprise that opens up opportunities but in which frustrations and harms can never be ruled out. That is, a particular kind of endowment is required: individuals need to trust and engage in the act of giving despite the risk that it will not have its intended effects. This is not a normative demand that potential donors should trust the system that seeks their contribution. The claim is, again, descriptive: trust is what sets the system in motion, and if trust is lost, everything comes to a halt. This insight is perfectly compatible with the further claim that once donors trust and decide to give, mechanisms that implement accountability, controllability as well as norms of transparency remain indispensable to keep the process functional and sustainable. The necessity of such moments of endowment highlights a strength of gift theory: it helps us to discern certain aneconomic working principles of our institutions that might have otherwise escaped our attention. If the donor transfers authority over her data by means of broad consent, it becomes hard to get a grip on future uses and beneficiaries, which appears to be in tension with the idea of meaningfully endowing such data. If consent is dynamic or tiered, one is not actually letting go of what one appears to donate, and the sense in which one makes a genuine donation is thus deflated. These observations could be seen as reasons to refrain from applying the gift paradigm to data donations. However, we have argued for a different approach. Data donations, at least those that are cognizant of the claims of sovereign individuals, come in a particular form: unlike other forms of donation, they are most plausibly understood as loans rather than transfers. Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence and indicate if changes were made. The images or other third party material in this chapter are included in the chapter's Creative Commons licence, unless indicated otherwise in a credit line to the material.
If material is not included in the chapter's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
18,722.2
2019-01-01T00:00:00.000
[ "Sociology", "Philosophy" ]
On the Integration and Development of Artistic Creativity and Digitalization In the era of digital technology, the development of information technology and art culture helps to enhance artistic creativity through digital technology. New forms of art emerge from the parallel between art and "digitization". This article discusses digitization in artistic creativity, and theoretically examines and summarizes the creativity generated by the integration of artistic ideas and digital art in the current digital era in China. Digitization is a brand-new mode of production and economic form, which has become an important standard of modern economic development. At the end of the 1980s, the rapid development of digital technology and information technology made digital art appear in the world. Based on computer and Internet technology, it has created a space for digital development and provided a new platform for artists to create digital art across all aspects of digital technology and its application. At present, digital art is growing into a new art form in the art and creative industry of the 21st century. Artistic thinking and digitization The artistic thinking mode is a way of thinking related to creative thinking, focusing on imagination, innovation and expressive ability [1]. While artistic thinking emphasizes creativity and expression, digitization provides a platform for innovation and expression. The combination of artistic creativity and digital creativity can promote innovation and differentiated competition in the process of digital transformation, and deliver brand value and stories through visual, audio, video and other multimedia forms, so as to realize a unique brand experience. Artistic thinking, as the thinking mode applicable to the field of art, has a certain content of artistic thought and a consistent way of thinking. The artistic thinking mode encourages innovation and jumping out of the traditional framework, while digitalization provides the tools and technological platforms for innovation. Artistic thinking emphasizes resonance with users' emotions and needs, while digitalization provides more opportunities to interact with users. By bringing the artistic thinking mode into digital transformation, we can pay better attention to user experience, design more creative and personalized digital products and services, and improve user satisfaction. Digital technology provides more creative tools and platforms, so that artists can create and express themselves more conveniently and use a distinctly digital artistic mode of thinking to approach problems. For example, computer painting software can simulate the effect of various painting media, digital music software can simulate the timbre of different instruments, and virtual reality technology can create an immersive artistic experience. Artistic innovation thinking and digitization Digital technology provides new ways to experience art, such as virtual reality, augmented reality and other technologies that can create immersive feelings for the audience. In addition, digital technology also provides audiences with the opportunity to experience participatory art: they can interact and cooperate with art works through interactive installations, gamification and other means.
The digital transformation of the artistic thinking mode has brought more possibilities and innovation space for artistic creation and appreciation, and promoted the development and change of art [2]. Compared with the traditional artistic thinking mode, digital transformation pays more attention to the integration of technology and art, encouraging artists to jump out of the shackles of tradition and have the courage to explore and try new forms of artistic expression and experience. Cross-border integration and multiple thinking The artistic thinking mode does not stick to one pattern; it can span different fields and industries and absorb inspiration from different ways of thinking. In digital transformation, the integration of artistic thinking can bring more diverse thinking and creativity. By introducing artistic thinking, we can break the traditional thinking framework, examine problems from different angles, and create unique solutions. The development concept of the integration of the artistic thinking mode and digital transformation mainly includes innovation and creativity, user experience and emotional connection, cross-border integration and multiple thinking, and creative expression and brand building. This integrated development can bring more creative and differentiated digital solutions to enterprises, and promote greater success in the transformation and competition of the digital era. Artistic creativity and digitalization merge together 2.1 Breaking boundaries and merging The mutual integration of artistic creativity and digitalization refers to the combination of digital technology and artistic creation, which makes artistic creativity richer and more diversified, and opens up new ways of artistic expression. Artistic thinking focuses on interdisciplinary and cross-border cooperation and integration, and digital transformation likewise requires cooperation and collaboration between professionals in different fields. Artistic thinking can promote the exchange and collision of knowledge and experience in different fields, provide new ways of thinking and creative paths, and help enterprises break traditional boundaries and achieve a greater innovation effect in digital transformation. The digital age has encouraged artists, designers and engineers in different fields to create innovative and forward-looking works of art together. They use digital technology and scientific knowledge to combine art with science, technology, engineering and other fields, to break the boundaries of traditional artistic creation and open up new areas of creation. Digital technology provides more tools and media for artistic creativity. Through digital tools and software, artists can do image processing, audio editing, 3D modeling and more to create unique works of art. Digital technology can also expand the forms of art, such as virtual reality and augmented reality, so that audiences can interact with the works and get a richer experience [3]. Strengthening digital expression and creation Digital transformation provides new ways of expression and creation, and artistic thinking can help people make better use of digital tools for creation and expression [4]. Artistic thinking can spark creators' artistic inspiration, guide them to create art through digital technology and media, and lead to unique digital artworks.
Digital technology has changed the process and way of art creation. Traditional artistic creation relies on manual or traditional production processes, whereas digital technology enables artists to create more quickly and efficiently. Trial and error and modification in artistic creation are also more convenient, and artists can adjust and perfect their works at any time, which improves the flexibility and freedom of creation. In addition, digital technology brings brand-new opportunities for the dissemination and consumption of works of art. Artists can quickly spread their works to a global audience through platforms such as the Internet and social media, gaining wider influence and visibility. Digital technology has also brought new business opportunities and models to the art market, such as digital art and online auctions, promoting the exchange and transaction of artistic creativity. The integrated development of the artistic thinking mode and digital transformation can bring opportunities for enterprises and individuals to achieve innovation, breakthrough and expression. Artistic thinking can help digital transformation focus more on user experience, provide creative solutions, and promote integration and collaboration across disciplines. At the same time, digital transformation also provides new expression and creative tools for artistic thinking, and expands the boundary of artistic creation. The mutual integration of artistic creativity and digitalization not only expands the boundaries and possibilities of art, but also provides artists with more creative opportunities and ways of expression. The development of digital technology will continue to promote the innovation and development of artistic creativity, so that art and technology can promote each other and bring a richer and more diversified artistic experience to the audience. Overview of traditional art Traditional art can be analyzed in terms of painting, sculpture, furniture and decoration, theatrical stage performance, and literary poetry. Painting is one of the most common traditional art forms. Traditional painting can use a variety of media, such as paper, cloth or drawing board, and combine different painting tools and techniques, such as watercolor, oil painting, gouache and drawing. A painting can express the artist's feelings, imagination, and observations through elements such as color, line, shape, and texture. Sculpture is the creation of a work of art with a three-dimensional form by carving, shaping or constructing materials. Traditional sculpture usually uses wood, stone, metal and other materials to express the creativity and intention of the artist through techniques such as cutting, carving or casting. Traditional furniture and architectural design is also an important form of artistic expression. Such works demonstrate the artist's pursuit of aesthetics and function through the selection of materials, the layout of structures and the treatment of decorations. Traditional drama and stage performance convey stories and emotions through elements such as role playing, music, dance, sets and costumes, as well as the choreography and performance techniques of the plot. These forms of performance play an important role in cultural traditions. Literature and poetry are written art forms that express feelings and ideas through language, image, and rhythm. Traditional literary and poetic works, with their unique narrative and expression styles, show the talent and perception of the author.
The expression techniques of these traditional arts have been passed down and evolved from generation to generation throughout history, representing treasures of human creativity and cultural traditions. They express the ideas and spirit of the artist in their own unique ways, and at the same time give the audience space to enjoy beauty and to reflect. Integration of traditional art and digitalization Traditional artistic creation often needs to rely on manual or traditional production technology, while digital technology allows artists to create more quickly and efficiently. Trial and error and modification in the process of artistic creation are also more convenient. Artists can adjust and improve their works at any time, which improves the flexibility and freedom of creation. Traditionally, artists may use tools such as brushes, paints, and canvases to create paintings. However, with the development of digital technology, artists can now create art works by using techniques such as electronic drawing boards, computer software, and virtual reality. Within the framework of traditional art, artists can combine the elements and techniques of traditional art with digital tools and software to create unique retro-style art works. Through digital processing and presentation, traditional art is combined with modern technology to bring a new aesthetic experience to the audience. The combination of digital painting and traditional painting can give rise to new art forms. On the basis of traditional painting, digital technology can be used to innovate: one can create with digital painting software and output the finished product through a printer or other digital device. This combination of technologies can not only improve the efficiency of creation, but also open up more creative possibilities. Augmented reality technology [5] can be used to give dynamic effects and interactivity to flat paintings; by scanning the paintings, viewers can trigger the stories hidden in the images, enriching the viewing experience [6]. With the development of projection technology and VR technology, traditional dance can be combined with virtual reality technology, and the audience can be brought into a virtual stage space through devices such as head-mounted displays. The audience can observe the dancers' movement details up close while interacting with virtual elements, creating a richer and more immersive dance experience. Such work combines traditional art forms with digital technologies in innovative ways to bring new artistic experiences to the audience. This integration can not only open up the boundaries of artistic creation, but also promote development and innovation in the field of art. Overview of digital art In the digital age, art is facing new challenges and opportunities, showing a development trend of diversification and innovation. Through digitization, artists can use computer software to create more accurate and detailed works [7], and virtual reality technology allows art works to be appreciated at first hand. In addition, the digital transformation also provides artists with more creative tools and approaches, such as using data visualization technology to create works of art and transform complex data into visual images.
Virtual reality and augmented reality: Digital technologies such as virtual reality (VR) and augmented reality (AR) provide a whole new way of expression for art. Through VR technology, the audience can immerse themselves in a virtual art environment and interact with the works, while AR technology adds virtual art elements into the real world, enabling the audience to perceive and experience the art works in a new way. Big data and information technology in the digital age allow artists to transform abstract data into visual works of art. Data visualization art combines data and art: through charts, images, animation and other forms, it lets the audience more intuitively understand and perceive the meaning and value of data. Application of digital art Social media platforms in the digital age have become an important channel for artists to spread and display their works. Artists can share their creative process and works through social media to interact and communicate with the audience. Social media platforms also provide wider opportunities to engage and participate in the arts. Digital transformation has also changed the way art is spread. Artists can display and sell their art through online platforms, instead of relying on traditional galleries and art exhibitions. In addition, the digital transformation also enables art works to be appreciated and shared by more people, and spread through channels such as social media. In the digital age, virtual reality technology is used to create a new art exhibition experience and promote the development of virtual art exhibitions. The audience can immerse themselves in the artwork through virtual reality equipment, interact with the artist and explore the infinite possibilities of art. Interactive digital art installations, combining sensor technology and interactive design, can interact with the audience. The audience can manipulate the works of art through touch, sound or body movements, participate in the creation of art, and become part of the art. These developments bring art into a new digital realm, offering audiences a richer, more interactive and innovative art experience. Through artistic thinking, we are able to explore unlimited creative possibilities and inject new vitality into the development of the digital art field. Concluding Remarks General Secretary Xi Jinping stressed, "I hope you will have the courage to innovate and create, and use exquisite art to promote cultural innovation and development." This points out the direction for literary and art workers: innovation must be integrated into the whole process of literary creation to ensure the vitality of the new art in the new era. The artistic thinking mode focuses on innovation and creativity, while digital transformation requires a constant search for new business models, services and products. By introducing artistic thinking into the process of digital transformation, the innovation potential of the team can be stimulated and enterprises can promote continuous innovation in the digital field.
Artistic creativity in digital integration can stimulate people's innovative thinking and creativity. Through the unique perspective and creativity of artists, new and unique ideas and innovative applications can be explored in the field of digitalization. Artistic creativity in digital integration promotes cross-border cooperation and innovation between different fields. Collaborations between artists and digital technologists can lead to new mindsets and collaboration opportunities that drive innovation. Artistic creativity infuses the elements of creativity and beauty into the digital field, and promotes the development of digital integration in a richer and more dynamic direction. Digital technology provides artists with more creative tools and media. Artists can use digital technology to create new art forms, experiments and ways of expression. Through digital technology, artists can combine traditional art forms with digital elements to create more diverse works of art. The significance of the integrated development of artistic creativity and digitalization lies in promoting the diversity and innovation of artistic creation, expanding the communication channels and audience groups of art, and promoting the development of the art market and the prosperity of the art industry. Digital technology has opened up new possibilities for art, bringing a richer and more convenient art experience to artists and audiences. The digital age has brought new creative tools and forms of expression to art. Artists can use digital technologies such as virtual reality and augmented reality to create more innovative and interactive works of art. At the same time, the digital age also provides artists with broader communication channels and gives the audience opportunities to participate, promoting the diversified and global development of art.
3,663.8
2024-01-01T00:00:00.000
[ "Art", "Computer Science", "Economics" ]
Substance P releases and augments the morphine-evoked release of adenosine from spinal cord The effects of substance P on the morphine-evoked release of adenosine were examined. Substance P alone produced a multiphasic effect on release of adenosine, with release occurring at low nanomolar concentrations and at a micromolar concentration, but not at intermediate concentrations. An inactive dose of substance P augmented the morphine-evoked release of adenosine at a nanomolar concentration of morphine. Release of adenosine by substance P alone (1 nM) or substance P/morphine (100 nM/10 nM) was Ca2+-dependent and originated from capsaicin-sensitive nerve terminals. © 1997 Elsevier Science B.V. Substance P is present in small diameter unmyelinated primary afferent nerve terminals within the dorsal spinal cord and is involved in the transmission/modulation of nociceptive information [15]. Substance P depolarizes projection neurons and interneurons within the dorsal horn, and such postsynaptic actions have received emphasis with respect to pain transmission mechanisms [15]. There is also some evidence that substance P can modulate primary afferent function [10,17]. Substance P is released from primary afferent neurons by noxious stimulation [15] and release is increased under conditions of inflammation [6,19]. Opioids have been known for some time to inhibit the release of substance P from sensory nerve terminals, contributing to antinociception [9], but more recent studies report dual effects of opioids on substance P release, with stimulatory and inhibitory effects being due to actions on different opioid receptor populations [21]. At supraspinal sites, substance P releases endogenous opioids [8,14], and this contributes to some behavioural effects of substance P. Multiple forms of interactions appear to occur between opioids and substance P in relation to pain mechanisms. Within the spinal cord, release of adenosine mediates a component of morphine-induced antinociception. In behavioural studies, spinal opioid-induced antinociception is antagonized by pretreatment with methylxanthines [2,4], while in neurochemical studies, opioids stimulate the release of adenosine in both in vivo and in vitro spinal cord preparations [22]. The morphine-evoked release of adenosine from dorsal spinal cord synaptosomes occurs at nanomolar concentrations in the presence of elevated K+ concentrations; this release occurs via activation of μ-opioid receptors [2]. The present study determined whether substance P can induce adenosine release directly, and whether it augments morphine-evoked release of adenosine from dorsal spinal cord synaptosomes in a manner similar to K+. Male Sprague-Dawley rats (250-325 g; Charles River, Quebec, Canada) were used. Adenosine release from dorsal spinal cord synaptosomes was examined in a synaptosomal suspension as described previously in detail [2]. For intrathecal pretreatment with capsaicin, an acute cannula was inserted into the spinal subarachnoid space under halothane anaesthesia as described previously [22]. Capsaicin (60 μg in 20 μl of 60% dimethylsulfoxide/saline) or vehicle was injected over a 7-10-min interval prior to cannula withdrawal. Animals were allowed to recover at least 7 days before being used in neurochemical experiments. Any animal displaying motor deficits as a result of this procedure was excluded. For Ca2+-free experiments,
synaptosomes were prepared in a Krebs-Henseleit medium from which Ca2+ was omitted. Ca2+ was added back to synaptosomes during the incubation stage. All experiments included a time = 0 determination of adenosine generated by the experimental procedure, and this was subsequently subtracted from all other values. Adenosine release values are expressed as pmol adenosine released per mg protein. Statistical comparisons were made using analysis of variance and the Student-Newman-Keuls test for post hoc comparisons. Substance P released adenosine in a multiphasic manner, enhancing release at 0.1-1 nM, and again at 1 μM, but not at intermediate concentrations (Fig. 1A). The extent of the adenosine released by substance P at both concentrations is comparable to that produced by maximum depolarization with K+ (cf. [3]). Two threshold concentrations of substance P (0.01 nM and 100 nM) were combined with morphine. Substance P at 100 nM enhanced release of adenosine by 10 nM morphine (Fig. 1B), as does 6 mM K+ (cf. [3]). No augmentation of release was observed with 0.01 nM substance P (data not shown). The release of adenosine evoked by substance P (1 nM) and substance P/morphine (100 nM/10 nM) appears to originate from capsaicin-sensitive nerve terminals, as release from capsaicin-pretreated rats was significantly reduced (Fig. 2A). Such release was Ca2+-dependent, as no release occurred when Ca2+ was omitted from the medium (Fig. 2B). These characteristics of release are identical to those observed for morphine in the presence of 6 mM K+ (Fig. 2A,B). The present study demonstrates that substance P can release adenosine from dorsal spinal cord synaptosomes in a Ca2+-dependent manner. Substance P depolarizes a range of neuronal types by decreasing K+ conductances, leads to enhanced Ca2+ entry via voltage-gated Ca2+ channels, and induces Ca2+ release from intracellular stores [15]. Substance P releases a number of neurotransmitters from spinal cord preparations; in some cases release is Ca2+-dependent [11], but in others, it is Ca2+-independent [10,18], perhaps reflecting an involvement of different neurokinin receptors in these responses. An interesting feature of the adenosine release induced by substance P is its multiphasic nature. The neurokinin receptor subtype mediating release of adenosine by substance P at nanomolar concentrations is likely a neurokinin-1 receptor based on the potency of the effect [15]; other subtypes may mediate the inhibitory phase and subsequent stimulatory phase at higher concentrations. Micromolar concentrations of substance P previously have been shown to release glutamate, acetylcholine and gamma-aminobutyric acid from spinal cord preparations [10,11,18]. The capsaicin-sensitivity of the substance P-induced release of adenosine suggests that release occurs from small diameter primary afferent nerve terminals, as the capsaicin pretreatment schedule used here results in degeneration of C fibre profiles in the substantia gelatinosa [16]. A number of observations suggest that substance P can exert actions on afferent nerve terminals within the spinal cord. Thus, substance P releases glutamate from primary afferents [10], alters primary afferent nerve terminal excitability [17], and depolarizes sensory neuron cell bodies [20].
Ligand binding studies have failed to demonstrate any loss of substance P receptors in the dorsal horn following capsaicin pretreatment or rhizotomy [13,25], but postsynaptic upregulation may have obscured a change in a small population of receptors. More recently, in situ hybridization analysis of mRNA for substance P receptors and immunohistochemistry of the substance P receptor itself in the spinal cord showed no evidence of substance P receptors on primary afferent nerve terminals [1], and it was suggested that effects of substance P on C fibres are mediated indirectly by actions on interneurons. In the present study, release occurs from a synaptosomal suspension where anatomical juxtapositions are largely not retained. This observation initially suggests that a direct effect on synaptosomes occurs, perhaps by a direct depolarization. However, an indirect effect via release of endogenous opioids also is possible. Thus, spinal administration of substance P can produce a delayed analgesia which is blocked both by naloxone (suggesting release of endogenous opioids) [5,23] and by caffeine (suggesting an adenosine link also occurs) [24]. The present demonstration that the effect of a nanomolar concentration of morphine is enhanced by substance P indicates that an amplification mechanism could occur in the synaptosomal suspension due to simple diffusion of a mediator, without necessarily requiring an anatomical juxtaposition. Opioid-induced release of adenosine is capsaicin-sensitive [22], and this would then account for the capsaicin-sensitivity of the adenosine released by substance P and the substance P/morphine combination. The interaction between substance P and morphine in releasing adenosine is of interest from a functional point of view. Substance P is released by acute noxious sensory stimulation [15], and this could interact subsequently with morphine to augment antinociception. The spinal administration of low doses of substance P has been shown to potentiate antinociception by morphine using the thermal threshold tail flick test, and this exhibits a bell-shaped dose-response curve as does adenosine release [12]. Augmentation of the action of morphine could occur either by substance P releasing adenosine directly, with adenosine subsequently enhancing the action of morphine [3], or by substance P enhancing the ability of morphine to release adenosine and accentuating the component of opioid action due to adenosine release [2,4]. Interestingly, under conditions of inflammation where release of substance P is enhanced [6,19], morphine exhibits an enhanced spinal antinociception [7]. A substance P-adenosine-opioid axis could contribute to changes which occur under conditions of inflammation as well.
2,207.8
1997-06-20T00:00:00.000
[ "Biology", "Chemistry" ]
Spin-Orbit Interactions of Light: Fundamentals and Emergent Applications We present a comprehensive review of recent developments in Spin-Orbit Interactions (SOIs) of light in photonic materials. In particular, we highlight progress on detection of the Spin Hall Effect (SHE) of light in hyperbolic metamaterials and metasurfaces. Moreover, we outline some fascinating future directions for emergent applications of SOIs of light in photonic devices of the upcoming generation. Introduction Light's polarization degrees of freedom, also known as Spin Angular Momentum (SAM), and its orbital degrees of freedom, also known as Orbital Angular Momentum (OAM), can be coupled to produce a wide variety of phenomena, known as Spin-Orbit Interactions (SOIs) of light. Due to their fundamental origin and diverse character, SOIs of light have become crucial to a variety of active fields, such as singular optics, photonics, nano-optics and quantum optics, when dealing with SOIs at the single-photon level. Among a variety of fascinating exotic phenomena, SOIs give rise to the remarkable spin-dependent transverse shift in light intensity, known as the Spin Hall Effect (SHE), and to the Spin-Orbit Conversion (SOC) of light. The regular SHE of light at a planar interface forms because the various plane-wave components of the beam travel in slightly different directions and acquire slightly different complex reflection or transmission coefficients [1][2][3][4]. A handy quantum-like framework with generalized wavevector-dependent Jones-matrix operators at the interface, and expectation values of the position and momentum of light, provides a theoretical explanation of the photonic SHE [3][4][5][6]. Such a description also unifies the longitudinal (in-plane) beam shifts connected to the Goos-Hänchen (GH) effect [3][4][5][6] and the transverse SHE shifts, also known as the Imbert-Fedorov (IF) shifts, in the case of Fresnel reflection/refraction [7][8][9][10]. 2D metamaterials, commonly referred to as metasurfaces, are an interdisciplinary area that encourages the use of alternative methods for light engineering based on spatially ordered meta-atoms and subwavelength-thick metasurfaces of different compositions. They display exceptional qualities in light manipulation at a 2D interface. Metasurfaces can attain the functions of their 3D counterparts, such as invisibility cloaking and a negative refractive index. In addition, they can eliminate some of the current restrictions of 3D metamaterials, such as high resistivity or dielectric loss. Additionally, metasurfaces can be created via conventional nanofabrication methods, such as electron beam lithography, which are readily available in the semiconductor industry.
In this review, we provide a summary of recent findings and future potential applications of the SHE of light in photonic materials. As an optical equivalent of the solid-state spin Hall effect, the SHE of light offers promising opportunities for examining the physical characteristics of innovative photonic materials and nanostructures, such as determining the material characteristics of magnetic and metallic thin films, or the optical characteristics of two-dimensional atomically thin metamaterials, with unmatched spatial and angular resolution, a capability that the SHE and related techniques can provide when combined with quantum weak measurements and quantum weak amplification methods. Additionally, we provide a summary of recent developments in primary 2D metamaterials and metasurfaces applications for producing and manoeuvring the Spin Angular Momentum (SAM) and Orbital Angular Momentum (OAM) of light, for applications in multicasting and multiplexing, spin-based metrology or quantum networks. Spin Hall Effect of Light (SHEL) The Spin Hall effect of light (SHEL) typically refers to a spin-dependent transverse y-shift in the reflection or refraction of light at a sharp inhomogeneity of an isotropic optical interface [1][2][3][4]. An example of this effect is the so-called transverse Imbert-Fedorov beam shift, which occurs when a paraxial optical beam is reflected or refracted at a plane interface. A novel example of SHEL was recently reported for transmission of light through a uniaxial crystal plate with tilted optical axis [11]. Using the terminology of the closely related works [11][12][13], we begin with the theoretical description of the problem, namely the SHE of light in a tilted photonic material. We note that what is tilted in the photonic material is the optic axis, relative to the propagation direction of the light. In the (x, z) plane, the photonic material is tilted so that its axis forms an angle ϑ with the z-axis, and the material transmits mostly the y-polarization. In the zero-order approximation of the incident plane-wave field, the dichroic action of the photonic material in this geometry can be characterized by the diagonal Jones matrix M̂₀ = diag(T_x, T_y), so that the Jones vector of the transmitted wave is |ψ'⟩ = M̂₀|ψ⟩. Here T_x and T_y, which can depend on ϑ, are the amplitude transmission coefficients for the x- and y-polarized waves. While T_x = 0 and T_y = 1 for an ideal polarizer, we can assume |T_x/T_y| ≪ 1 for real dichroic plates. Also note that T_{x,y} = exp(∓iΦ/2) corresponds to the birefringent waveplate problem discussed in [11]. Considering that in the paraxial approximation the beam consists of a superposition of plane waves with their wavevector directions labeled by small angles Θ = (Θ_x, Θ_y) ≈ (k_x/k, k_y/k) [see Fig. 1(a)], the Jones matrix incorporates Θ-dependent corrections and can be written as in [11]; the first-order corrections contain the well-known Goos-Hänchen (GH) and Spin Hall (SHEL) terms [11], the latter being the main focus of this review.
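As a minimal numerical sketch of the zero-order description above (not the full Θ-dependent matrix of Ref. [11]), the snippet below applies an assumed diagonal Jones matrix diag(T_x, T_y) of the tilted dichroic plate to an incident polarization state; the transmission coefficients are illustrative placeholders, not measured values.

```python
import numpy as np

# Zero-order Jones matrix of a dichroic plate transmitting mostly y-polarization.
# T_x, T_y are illustrative amplitude transmission coefficients (|T_x/T_y| << 1).
T_x, T_y = 0.05, 1.0
M0 = np.diag([T_x, T_y]).astype(complex)

# Incident state |psi> = (E_x, E_y)^T, here 45-degree linear polarization.
psi_in = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)

# Transmitted Jones vector |psi'> = M0 |psi>, then renormalized.
psi_out = M0 @ psi_in
psi_out = psi_out / np.linalg.norm(psi_out)

print("transmitted Jones vector:", psi_out)
```

In the full treatment the matrix acquires small Θ-dependent corrections, and it is from those first-order terms that the GH and SHEL shifts originate.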
Quantum Weak Amplification Techniques A detailed explanation of the entire theory governing transverse shifts of light in tilted uniaxial crystals was elaborated in [11][12][13]. The anisotropic transverse shift, or SHE of light, is represented by the expectation value of the position operator Ŷ for an input state defined by a Jones vector |ψ⟩ and a transmitted state described by a Jones vector |ψ'⟩, where k is the wave vector, Φ₀ refers to the difference in phase between the ordinary and extraordinary waves as they propagate through the birefringent material, ϑ represents the tilt angle of the photonic medium, and (σ, χ, τ) represent the Stokes parameters of the light beam. The SHE can be measured directly, using the sub-wavelength shift of the beam centroid [14][15][16][17][18] [Eq. (3)]. Various alternative techniques, such as quantum weak measurements [19][20][21][22][23][24][25][26][27], can also be used; in this configuration, the SHE is characterized by the corresponding weak value Ŷ_weak. To estimate the classical beam shift, we compute the expected value of the centroid displacement based on the phase difference acquired by the beam [Eq. (3)] as it propagates through the photonic material. The phase difference accumulated during propagation in the photonic material is expressed through the angle-dependent extraordinary index n_e(ϑ) = n_o n_e / √(n_e² cos²ϑ + n_o² sin²ϑ), where n_o is the refractive index of the ordinary wave and n_e(ϑ) is the refractive index of the extraordinary wave propagating at the angle ϑ to the optical axis. The propagation distances of the ordinary and extraordinary rays in the tilted plate follow from the plate geometry [11,12]. Using Eqs. (3), (5) and (6), the expectation value of the spin Hall shift ⟨Ŷ⟩ and its weak value ⟨Ŷ⟩_weak can be derived and contrasted with experimental findings. SAM of Light In optics, the Spin Angular Momentum (SAM) and Orbital Angular Momentum (OAM) of light can be observed separately. A spin-orbital angular momentum decomposition for paraxial monochromatic beams is simple. This distinctive property, which inspires this review, explains in part the recent unrivaled development of photonic SAM-OAM conversion in metasurfaces and 2D metamaterials. At the same time, the spin and orbital description of quantum electromagnetic field theories produces a variety of complexities in the generic non-paraxial or non-monochromatic angular momentum description [10,11]. The SAM is connected to the polarization of light in the unified theory of angular momentum of light, which is based on the canonical momentum and spin densities developed in [11]. Accordingly, right-hand circular (RHC) and left-hand circular (LHC) polarizations of a paraxial beam correspond to positive and negative helicity σ = ±1. If the beam's mean momentum (measured in units of ℏ per photon) can be connected to its mean wave vector ⟨k⟩, then such a beam carries the corresponding SAM S = σ ⟨k⟩/|⟨k⟩|, where the helicity parameter σ is equivalent to the degree of circular polarization in the Jones formalism.
A plane wave is an idealized phenomenon that can extend to infinity. Extrinsic OAM cannot be carried by such a plane wave (like its mechanical counterpart L = r × p), because its position r is undefined. On the other hand, a circularly polarized electromagnetic plane wave can carry SAM. In the canonical momentum representation, the electric field vector of a circularly polarized plane wave propagating along the z-direction can be written as [28] E ∝ (x̂ + iσŷ) exp[i(kz − ωt)], where (x̂, ŷ, ẑ) are unit vectors and the helicity parameter σ = ±1 corresponds to the LHC and RHC polarizations, respectively. The wave number |k| results from the dispersion relation for a plane wave, that is, |k| = ω/c. The electric field described in Equation (8) represents an eigenmode of the z-component of the spin-1 matrix operators with eigenvalue σ, i.e., Ŝ_z E = σE [28], where the spin-1 operators (the generators of SO(3) vector rotations) are given in [28]. Therefore, the plane wave carries the SAM density S = σ k/|k|, defined as the local expectation value of the spin operator with the electric field of Equation (8). OAM of Light In 1936, Beth made the first demonstration of the mechanical torque produced by the transfer of angular momentum from a circularly polarized light beam to a birefringent plate [30,31]. In this experiment, a fine quartz quarter-wave plate was suspended from a fiber. Such a plate transforms RHC polarization, with spin component +ℏ, into LHC polarization, with spin component −ℏ, with a net SAM transfer of 2ℏ per photon to the birefringent plate. The torque measured by Beth, which constitutes a measurement of the SAM of the photon, agreed in sign and modulus with the quantum and classical expectations. In Reference [31], Laguerre-Gaussian modes with azimuthal angular dependency exp(−ilφ) were shown to exist, which are eigenmodes of the momentum operator L_z and carry an orbital angular momentum of lℏ per photon. Using the vector potential, a proper representation of a linearly polarized TEM_plq laser mode can be obtained in the Lorentz gauge [30], where u(r, φ, z) is the complex scalar field amplitude satisfying the paraxial wave equation and x̂ is the unit vector in the x-direction. In the paraxial regime, du/dz is taken to be small compared to ku, and second derivatives and products of first derivatives of the electromagnetic field are ignored. The solutions describing the Laguerre-Gauss beam, for the cylindrically symmetric case u(r, φ, z), are of the form given in [31], where w(z) is the radius of the beam, L^l_p is the associated Laguerre polynomial, z_R is the Rayleigh range, C is a constant, and the beam waist is taken at z = 0. Within this description, the time average of the real part of the momentum density ε₀ E × B, evaluated for the Laguerre-Gaussian distribution, yields a linear momentum density with components along the unit vectors r̂, φ̂ and ẑ. It may be seen that the Poynting vector (c²P) spirals along the direction of propagation of the beam. The z-component relates to the linear momentum, the r-component to the spatial dispersion of the beam, and the φ-component generates the OAM.
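As a quick numerical check of the statement Ŝ_z E = σE above, the sketch below builds the z-component of the spin-1 matrices (the generator of SO(3) rotations about z) and applies it to the circular-polarization vectors (x̂ + iσŷ)/√2; this is a minimal illustration, not the full canonical-momentum formalism of [28].

```python
import numpy as np

# z-component of the spin-1 matrix operators (generator of SO(3) rotations about z).
S_z = np.array([[0, -1j, 0],
                [1j,  0, 0],
                [0,   0, 0]])

for sigma in (+1, -1):
    # Circular-polarization unit vector E ~ (x_hat + i*sigma*y_hat)/sqrt(2).
    E = np.array([1, 1j * sigma, 0]) / np.sqrt(2)
    # Verify the eigenvalue relation S_z E = sigma * E.
    assert np.allclose(S_z @ E, sigma * E)
    print(f"sigma = {sigma:+d}: S_z E = sigma E holds")
```

The two helicities σ = ±1 are thus eigenstates of Ŝ_z, which is exactly why a circularly polarized plane wave carries a SAM density of σ per photon along its propagation direction.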
SAM Control in Metasurfaces Metasurfaces enable polarization conversion in the same way as ordinary wave plates, by manipulating two light eigen-modes that correspond to orthogonal polarizations. The Jones vector of the incident field and the Jones vector of the desired output field must be known in order to create a metasurface for polarization conversion (Figure 2). With this information, it is possible to determine a Jones matrix J(x, y) for each spatial point (x, y) on the metasurface plane connecting the incident and output waves. Appropriate nanoantenna designs can be engineered to produce the calculated Jones matrix. Metasurfaces have been used to convert between linear and circular polarizations, between various linear polarization states, and between opposing circular polarization states [32,33,34]. Because the requirement of circularly polarized input light is one of the main limitations of geometric-phase metasurfaces, it is necessary to precisely match the geometric phase to the propagation phase in order to obtain unrestricted control of polarization states. Typically, hybrid nanoantenna patterns are used to achieve this. In the context of this manuscript, the term metasurface refers to a nanostructured 2D metamaterial, the thickness of which is less than or comparable to the spatial attenuation length of surface evanescent waves. Thus, we do not refer to 2D nanostructures such as quantum wells or monolayers of solid materials, the physics of which is based precisely on the absence of a third spatial coordinate. OAM Control in Metasurfaces In order to create optical vortex beams with net topological charge or OAM of light (l), polarization control in metasurfaces that permits manipulation of the propagation phase and the geometrical phase can also be used [30][31][32][33][34][35][36][37][38][39][40][41][42][43][44][45][46][47]. The helixed wave front of such OAM beams, which is a direct result of the dependence of phase on azimuthal angle, is one of their most distinctive features. The number of twists a wave front has determines the integer l that represents the order of OAM (per unit wavelength). Since beams with different orders of OAM are orthogonal and do not interfere with each other, OAM has gained widespread acceptance as an unrestricted degree of freedom of light that can be used in fast free-space optical communication systems [39][40][41][42][43][44][45][46][47][48]. TAM Control in Metasurfaces Precise control of the Total Angular Momentum (TAM) of light can also be achieved in metasurfaces. More specifically, it is possible to introduce a controlled geometric phase and establish a link between polarization (SAM) and phase using phase elements with spatially variable orientations. According to [30], these devices are often constructed from periodic elements known as q-plates. The precise transformation made by q-plates can be written as in [34,35]: circular polarizations LHC (L) and RHC (R) are mapped to their anti-parallel spin states with an acquired OAM charge of ±2q per photon. Spin-Orbit Conversion (SOC) is the common name for this transformation. As introduced in the previous Sections, the rotational elements (i.e., θ(x, y)) are often the only ones that vary spatially. As a result, the OAM output states are limited to conjugate values (±2q).
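To make the ±2q statement concrete, here is a small sketch, under the simplifying assumption that the q-plate acts as an ideal half-wave plate whose fast axis is oriented at the azimuth-dependent angle qφ; it shows numerically that circular polarization flips handedness and picks up the azimuthal phase exp(±i2qφ).

```python
import numpy as np

def qplate_jones(q, phi):
    """Jones matrix of an idealized q-plate, modeled as a half-wave plate
    whose fast axis is rotated by the azimuth-dependent angle alpha = q * phi."""
    a = q * phi
    return np.array([[np.cos(2 * a),  np.sin(2 * a)],
                     [np.sin(2 * a), -np.cos(2 * a)]])

q = 0.5                               # example q-plate charge (illustrative)
L = np.array([1,  1j]) / np.sqrt(2)   # left circular |L>
R = np.array([1, -1j]) / np.sqrt(2)   # right circular |R>

for phi in np.linspace(0, 2 * np.pi, 5):
    out = qplate_jones(q, phi) @ L
    # Output should be |R> multiplied by the geometric phase exp(+i 2 q phi).
    assert np.allclose(out, np.exp(1j * 2 * q * phi) * R)
    wrapped_phase = np.angle(np.vdot(R, out))   # wrapped into (-pi, pi]
    print(f"phi = {phi:.2f} rad -> wrapped azimuthal phase on |R>: {wrapped_phase:+.2f} rad")
```

The azimuthal factor exp(i2qφ) on the output is precisely a vortex of topological charge 2q, which is the spin-to-orbit conversion described in the text.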
Additionally, the SOC operation carried out by q-plates is restricted to SAM states of circular polarization as a result of the device symmetry. A more general device is needed in order to perform the SOC operation on different SAM states, including elliptic polarization. As will be discussed later, J-plates can carry out this arbitrary mapping (Figure 2). A J-plate (represented by the variable J) has the capacity to map two arbitrary input SAM states, including but not limited to RHC or LHC, into two arbitrary TAM states. Any medium that allows the birefringence, the absolute phase shift, and the retarder orientation angle to vary spatially can be used to produce a J-plate. In other words, the relative phase shift between orthogonal spins (φ = φ(x, y)) should vary spatially in addition to the fast axes of the rotating plates. Metasurfaces were used to construct such J-plates [34]. The J-plate's precise transformation is best described as follows (Figure 2): the states |λ±⟩ are arbitrary elliptical polarizations whose parameters (χ, δ) determine the polarization state [34]. Implementing the SOC operation reduces to determining the actual Jones matrix J(φ) that transforms J(φ)|λ+⟩ = e^{imφ}|(λ+)*⟩ and J(φ)|λ−⟩ = e^{inφ}|(λ−)*⟩. It can be demonstrated that the required spatially varying Jones matrix has a corresponding closed form [34]. Unfortunately, the necessary control and sub-wavelength spatial variations in phase shift, birefringence, and orientation cannot be achieved with a conventional phase plate. On the other hand, sub-wavelength spaced metasurfaces and 2D metamaterials may provide such unprecedented control [56][57][58][59][60][61][62][63][64][65][66][67][68][69]. SHEL in Photonic Materials: Experimental Realizations Experimental configurations To confirm the above theoretical predictions for the SHE in tilted photonic materials, we performed a series of experimental measurements using the setups shown in Figure 3 [12,13]. While in Ref. [12] we use a sample of free-standing birefringent polymer foil, similar to the type Newport 05RP32-1064, as the photonic material, in Ref. [13] we use a hyperbolic trench metamaterial structure. As a source of the incident Gaussian beam, we employed a He-Ne laser (Melles Griot 05-LHR-111) of wavelength λ = 633 nm. The laser radiation was collimated using a microscope objective lens. We measure the anisotropic phase difference Φ₀ versus the tilt angle ϑ via Stokes polarimetry methods [12,13]. For this purpose we used the setup shown in Fig. 3(a). The appropriate linear-polarization state in the incident beam was chosen using the twin Glan-Laser polarizer (Thorlabs GL10) (P1). This was oriented at 45° polarization in the first experiment, equivalent to the diagonal state in the formalism introduced in the previous Sections. The Stokes parameters are then determined by passing the beam through the photonic material sample while utilizing a quarter wave plate (QWP) with a retardation angle of δ and a second polarizer P2 with a rotation angle of γ, as shown in Fig. 3(a). The Stokes parameters can be used to determine the phase difference, with the normalization factor S₀ given by the total intensity of the beam. The measured phase obtained using Eq. (5) is wrapped in the range (−π, π). To calculate the unwrapped phase difference, we employ an unwrapping technique with a tolerance set to 0.001 radians [12,13]. SHEL in Birefringent Polymers In Ref.
[12], a full experimental demonstration of the SHE of light in birefringent polymers was provided. The schematic of the tilted birefringent polymer film is depicted in Figure 4(a). We observe via Stokes polarimetry a spin-Hall shift (k⟨Ŷ⟩) in a 50 µm polymer film that is 10 times larger than the shift observed in Ref. [11] for a 1000 µm quartz sample. We attribute this increase to the polymer's greater effective birefringence. We also investigate the impact of tunable birefringence in the polymer film. By applying a controllable voltage, leading to tunable stress in the polymer, it is possible to induce tunable birefringence in the polymer film, which can in turn induce controllable light shifts. In Figure 4(b), we present numerical simulations of the SHE for a set of stress-induced birefringence values ∆n = 0.009, 0.03, 0.06, 0.07 as a function of the tilt angle ϑ; maximal tunability is achieved at ϑ = 0.3 radians [12]. Additionally, the quantum weak measurement apparatus shown in Fig. 3(b) was used to measure the spin Hall shift weakly and to observe the quantum weak amplification effect. In conclusion, we experimentally demonstrated the fine lateral circular birefringence of a tilted birefringent polymer, the first instance of the SHEL in a polymer material [12]. We revealed experimental results of this nanometer-scale phenomenon and found a quantum weak amplification factor of 200 using Stokes polarimetry and quantum weak measurement techniques. Because the polymer's birefringence may be controlled using mechanical stress in the case of stress-induced birefringence, or voltage in the case of liquid crystals, this lateral shift could be utilized as an optical switch at the nanoscale. Numerous cutting-edge applications in photonics, nano-optics, quantum optics, and metamaterials might become possible as a result. Such emergent applications of the SHEL are discussed in the following Sections. SHEL in Metamaterials In Ref. [13], we present experimental findings for the SHE of light in Hyperbolic Metamaterials (HMMs). In particular, we experimentally show an enhanced spin-Hall effect of light in HMMs in the visible region, with exceptional angle sensitivity. The effect is demonstrated in the transmission configuration using an HMM that is a few hundred nanometers thick and made up of alternating layers of metal and dielectric, as illustrated in Fig. 5(a). With a change of ≈ 0.003 rad (≈ 0.17 deg) in the angle of incidence, the transverse beam shift in the HMM configuration can change dramatically, from essentially no beam shift to a few hundred microns. Therefore, it is to be expected that compact spin Hall photonic devices can take advantage of the huge photonic spin Hall enhancement in such a tiny structure, with great angular sensitivity, to manipulate photons via polarization. The HMM sample has eight gold-alumina periods placed on a 500 µm thick glass substrate, for a total thickness of 176 nm. Al2O3 (10 nm)-APTMS (1 nm)-Au (10 nm)-APTMS (1 nm) are the four layers that make up one period of the HMM structure. Amino Propyl Tri Methoxy Silane, often known as APTMS, is an almost loss-free adhesion layer that is beneficial for highly confined propagating plasmon modes [13]. The Al2O3 layer was deposited via atomic layer deposition after sputtering of the Au layer. The HMM's ordinary and extraordinary permittivities, indicated as ε_o and ε_e respectively, are calculated as described in [20].
Based on the effective media approximation [13], HMMs are treated as homogenized uniaxial media with effective permittivities. The thicknesses of the individual layers are assumed to be deeply subwavelength within the effective media approximation [13]. The wavelength range of λ = 500-700 nm, normalized by the unit cell Λ = 22 nm, yields the ratio Λ/λ = 1/22.7-1/31.8. Therefore, it is justified to use the effective media approximation in our scenario. The HMM's ordinary and extraordinary permittivities, indicated as ε_o and ε_e respectively, are calculated from the volume-fraction-weighted effective medium expressions described in [20], where f_m and f_d are the volume fractions of metal and dielectric, respectively, and ε_m and ε_d are the permittivities of metal and dielectric, respectively. The Drude model with a thickness-dependent correction is used to describe the Au film's permittivity ε_m [21]. APTMS has a refractive index of 1.46 [22]. The dispersion of the HMM effective permittivities in the visible region is seen in Fig. 5(b). With λ = 500 nm as the zero-crossing wavelength for our HMM structure, type II HMM behavior (ε_o < 0 and ε_e > 0) is obtained as red wavelengths are approached. We ran simulations based on the theory developed by T. Tang et al. [13], with realistic parameters, in order to understand how the spin Hall beam shift behaves in HMMs (Fig. 5(b)). Air serves as the ambient medium for the entire HMM-SiO2 substrate structure, as depicted in Fig. 5(a). We consider that the incident light impinges on the HMM structure at an incident angle θ_i in the y-z plane. The relative permittivities of the media in regions 1-5 are indicated by ε_i (i = 1, 2, 3, 4, 5), respectively; ε₁ = ε₂ = ε₅ = 1 (air), ε₄ corresponds to the SiO2 substrate, and ε₃ corresponds to the HMM. The HMM is considered to be uniaxially anisotropic, non-magnetic, and described by a uniaxial relative permittivity tensor ε₃ with ordinary and extraordinary components ε_o and ε_e. Taking into account an input Gaussian beam of waist w₀, we can define the transverse beam shifts after transmission through the structure, where the transverse shifts for the right-hand circular (RHC) and left-hand circular (LHC) polarizations are indicated by η±, respectively. The expressions for the transverse shifts contain z-dependent and z-independent terms, which, respectively, represent angular and spatial transverse shifts [23]. In this case, our attention is on the spatial transverse shift of light transmitted by the HMM waveguide, which has the form given in [13], where t_s and t_p are the transmission amplitudes for the s and p modes, respectively [13], θ_t is the transmission angle, and k₁ = n₁k = n₁ 2π/λ with n₁ = 1 (air). We assume transmission along the laser beam axis, thus (dt_s/dθ_i) ≈ 0 and θ_t ≈ 0 for a large beam waist. We carried out a number of characterizations with the help of the polarimetric setup depicted in Fig. 3(a) to highlight the angular sensitivity of the photonic SHE in the HMM structure [Fig. 5(a)]. Polarimetric and quantum weak measurement techniques can be used to determine the transverse beam shift [12,13,16]. We used a He-Ne laser with a wavelength of λ = 633 nm and a diode laser with a wavelength of λ = 520 nm as sources of the incident Gaussian beam. A microscope objective lens was used to collimate the laser light.
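As a rough numerical sketch of the homogenization step described above, the snippet below evaluates the standard parallel/series effective-medium averages for a metal-dielectric multilayer; these closed forms and the simple Drude parameters for gold are assumptions for illustration, not the thickness-corrected model of Refs. [20,21].

```python
import numpy as np

def hmm_effective_permittivities(eps_m, eps_d, f_m):
    """Standard effective-medium averages for a metal/dielectric multilayer:
    in-plane (ordinary) and out-of-plane (extraordinary) permittivities."""
    f_d = 1.0 - f_m
    eps_o = f_m * eps_m + f_d * eps_d          # parallel (in-plane) average
    eps_e = 1.0 / (f_m / eps_m + f_d / eps_d)  # series (out-of-plane) average
    return eps_o, eps_e

# Illustrative values only: a simple Drude model for gold at 633 nm,
# and a dielectric of refractive index ~1.46 for the alumina/APTMS layers.
wavelength = 633e-9
omega = 2 * np.pi * 3e8 / wavelength
omega_p, gamma = 1.37e16, 1.0e14           # assumed Drude parameters for Au (rad/s)
eps_m = 1 - omega_p**2 / (omega**2 + 1j * gamma * omega)
eps_d = 1.46**2
f_m = 10.0 / 22.0                          # 10 nm of Au in a 22 nm unit cell

eps_o, eps_e = hmm_effective_permittivities(eps_m, eps_d, f_m)
# Type II HMM behavior expected in the red: Re(eps_o) < 0, Re(eps_e) > 0.
print(f"eps_o = {eps_o.real:.2f} + {eps_o.imag:.2f}j")
print(f"eps_e = {eps_e.real:.2f} + {eps_e.imag:.2f}j")
```

With these illustrative inputs the real part of ε_o comes out negative and that of ε_e positive, consistent with the type II behavior quoted in the text for the red end of the visible range.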
Using Stokes polarimetry, we measured the anisotropic phase difference Φ₀ versus the incident angle θ_i. A Quarter Wave Plate (QWP) and then a Glan-Thompson polarizer (P1) are used to produce the input polarization state (RHC). The Stokes parameters can be used to determine the phase difference, where S₃ is the normalized Stokes parameter for circular polarization, S₂ is the normalized Stokes parameter in the diagonal basis, and the normalization factor S₀ is determined by the total beam intensity. The retardation angle of the quarter wave plate QWP2 and the rotation angle of the polarizer P2 are represented by δ and α of I(δ, α), respectively. The measured phase obtained using Eq. (5) is wrapped in the range (−π, π). We employ an unwrapping algorithm [12] with a tolerance of 0.01 rad to calculate the unwrapped phase difference. In Figure 5, we can see the characteristics of the photonic spin Hall effect in transmission-configured HMM systems. The transverse beam shift in HMMs is quite sensitive to the incident angle; for λ = 633 nm, a change from θ_i = 0 rad to just θ_i = 0.003 rad (0.17°) [Fig. 5(c)] causes a massive beam shift of several hundred microns, demonstrating milliradian-level sensitivity. Increasing the angle beyond θ_i = 0.003 rad also significantly changes the beam shift, from ⟨Ŷ⟩ = 105 µm to merely ⟨Ŷ⟩ = 10 µm, which is almost one order of magnitude difference. Given that we observed a similar result in dielectric media [16], the significant anisotropy of HMMs is believed to be the cause of the strong peak in the beam shift. With a wider beam diameter and a wavelength of λ = 520 nm (Fig. 5(d)), the peak shift of ⟨Ŷ⟩ = 270 µm is reached with an incident-angle change of only 0.001 rad (0.057°), corresponding to ≈ 4700 µm/°. This means that the beam shift exhibits an even sharper resonance and, as a result, greater angular sensitivity. The beam shift dramatically decreases to ⟨Ŷ⟩ = 10 µm and less when the incident angle increases, for example, to θ_i = 0.01 rad (0.57°). Furthermore, the transverse beam shift drastically decreases to ⟨Ŷ⟩ = 5 µm when the incidence angle is inclined by just θ_i = 0.035 rad (≈ 2°). Such findings show the great angular sensitivity of the SHE in HMMs and prove that the transverse beam shift may be controlled over an order-of-magnitude range by minor angular adjustments. Experimental measurements were utilized to determine the waist (w₀ = 100 µm) for the simulations. In summary, we experimentally proved the photonic SHE in a hyperbolic metamaterial at visible wavelengths for the first time. The incidence angle has a significant impact on the transverse beam shift in the transmission arrangement. We found that a small difference of a few milliradians can change the beam shift by two orders of magnitude, going from a few hundred microns to a few microns, for example. An HMM that is about two hundred nanometers thick achieves this tremendous angular tunability. Such sensitivity can result in small and compact spin Hall devices, such as switches, filters, and sensors, that control light at the nanoscale by varying the wavelength, incidence angle, and spin.
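A minimal sketch of the Stokes-polarimetry phase retrieval and unwrapping described above (assuming the usual convention Φ₀ = atan2(S₃, S₂) for normalized Stokes parameters, and using numpy's standard unwrapping rather than the tolerance-based routine of Refs. [12,13]) could look like this:

```python
import numpy as np

def phase_from_stokes(S2, S3):
    """Wrapped anisotropic phase difference from normalized Stokes parameters:
    S2 (diagonal basis) and S3 (circular basis). Result lies in (-pi, pi]."""
    return np.arctan2(S3, S2)

# Synthetic example: a phase that grows with tilt angle beyond the (-pi, pi] range.
tilt = np.linspace(0, 0.6, 200)        # tilt angle in radians (illustrative)
true_phase = 25.0 * tilt**2            # assumed smooth phase accumulation
S2, S3 = np.cos(true_phase), np.sin(true_phase)

wrapped = phase_from_stokes(S2, S3)
unwrapped = np.unwrap(wrapped)         # removes the 2*pi jumps

print("max wrapped value  :", wrapped.max())
print("max unwrapped value:", unwrapped.max())  # close to true_phase.max() = 9.0 rad
```

The unwrapped curve is what is compared against the theoretical Φ₀(ϑ) or Φ₀(θ_i) predictions in the experiments discussed above.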
Emergent Applications of SOI in Photonic Materials SOIs may be engineered to modify they way in which an artificial substance disperses.SAM and OAM can therefore both be used as a means of control for light.The multifunctional spin-dependent element is made available by the geometric phase design, which also permits the spin-based optical devices.In addition, unlike conventional devices based on dynamic phase, the manipulation of cumulative geometric phase is essentially the control of light polarization.In conventional settings, a phase distribution is created by varying the reflective indexes or the thickness of optical materials.However, planarization and miniaturization of components are necessary for the advancement of integration optics.The photonic SHE may be developed in conjunction with the functions of conventional components to create infinitesimally tiny and multifunctional devices.Photonic SHE devices that provide the basic optical components variation for emergent applications of SOIs will be briefly outlined in this section.The transverse and spin-dependent shifting of light is referred to as the optical Spin Hall Effect (SHE).As a result, it has been enthusiastically advocated for a variety of sensing applications, including biosensing, material interface studies, polarization-dependent sensors, and refractive index spectroscopy [83,84,85].SHE biosensors usually include a graphene sheet, or some alternative 2D material, an Au film, and a BK7 glass. A change in the concentration of biomolecules in the sensing medium will cause a localized change in the refractive index close to the graphene surface.The photonic SHE's spin-dependent shifts are sensitive to changes in the sensing medium's refractive index.More specifically, it is possible to infer a quantitative link between the spin-dependent splitting and the sensing medium's refractive index.Furthermore, by varying the refractive index of the sensing medium, the spin-dependent splitting in the biosensor may be studied. Similar SHE-based sensing applications have been documented [83] (Fig. 6(a) and (c)).Weak measurement amplification methods can be used to further improve such sensing applications. Spintronics based on photonic SHE The creation of a spin current perpendicular to the direction of the charge current flow constitutes the Spin Hall Effect itself.The optical SHE based on polaritons, and consisting of a separation between real space and momentum space, was recently predicted for laser-induced spin-polarized exciton-polaritons, in a semiconductor microcavity, due to a combination of structural disorder-based dispersion of exciton-polaritons, and an effective magnetic field resulting from polarization splitting of the polariton states.The excitonic spin current is controlled by the linear polarization of the laser pump.The first experimental evidence for this effect was reported in [86,87,88].It was stated that polariton spin currents might travel over 100 µm distances in a superior GaAs/AlGaAs quantum microcavity.By rotating the laser pump polarization plane, it is feasible to switch the spin currents directions, opening the door to a host of amazing applications in optical spin switching and spintronics, in addition to further emergent spin-based metrology functionalities (Fig. 6 (d)). 
Applications in Quantum Information Networks A polarizing beam splitter or spin-dependent splitter can separate the orthogonally polarized components of a beam into different propagation directions. Since the beam is divided into several spatial modes according to its polarization, the photonic SHE generator may intuitively be thought of as a polarization beam splitter. Photonic Multiplexing and Multicasting Devices Metasurfaces can contribute to a new class of on-chip scalable devices, which are expected to revolutionize nanophotonic and optoelectronic circuitry through smart integration of multiple functions in metallic, dielectric, or semiconductor building blocks. Among artificially produced materials, metasurfaces offer considerable promise for technologically relevant applications because of their 2D nature, which is a key benefit for wafer-scale manufacturing and integration. Here, we will concentrate on applications of metasurfaces for high-speed data transmission in integrated OAM multiplexing and multicasting devices [79][80][81]. OAM division multiplexing is an experimental technique for boosting the transmission capacity of electromagnetic signals by using many orthogonal OAM channels to distinguish between signals, as stated in [82]. It corresponds to transmission over a few kilometers in OAM-maintaining fibers, in a manner equivalent to wavelength division multiplexing (WDM), temporal division multiplexing (TDM), or polarization division multiplexing (PDM). The extremely bulky optical components needed for OAM generation and OAM detection are one of the key constraints for scalable OAM multiplexing, in addition to the lower transmission range [33]. While OAM multiplexing can access a theoretically endless collection of states and as a result can offer an infinite number of channels, SAM or polarization multiplexing only offers two orthogonal states that correspond to the two states of circular polarization. Although 2.5 Tbit/s transmission rates in MIMO systems have been reported, OAM multiplexing still remains an experimental approach and has only been tested in the lab so far, over relatively short distances of a few kilometers in OAM-maintaining fibers. Nevertheless, it promises very significant improvements in bandwidth. Recently [73][74][75][76][77], the first experimental demonstration of an OAM multiplexing technique based on single-layer metasurfaces in the Terahertz (THz) band was realized. In particular, OAM multiplexing with four channels is made possible by the developed structure's ability to produce four focused phase vortex beams with various topological charges (or OAM number l) when illuminated by a Gaussian beam. The OAM signal is demultiplexed when a single vortex beam is employed as the incident light, because only one channel is recognized and extracted as a focal spot. The subwavelength-level thickness of the metasurface structure expands the range of viable methods for the integration and downsizing of THz communication systems. Excellent agreement between theoretical predictions and practical results can be seen in the performance of the developed OAM multiplexing and demultiplexing device, proving its suitability for scaled ultra-high-speed THz communications.
Conclusions In conclusion, we presented a thorough review of recent developments in Spin-Orbit Interactions (SOIs) of light in photonic materials. In particular, we highlighted progress on the detection of the Spin Hall Effect (SHE) of light in hyperbolic metamaterials and metasurfaces via polarimetric measurements, reporting unprecedented angular resolution at visible wavelengths. Moreover, we outlined some fascinating future directions for emergent applications of SOIs of light in photonic devices of the upcoming generation. As a rapidly expanding interdisciplinary field, the study of SOIs of light in 2D metamaterials and metasurfaces has important emergent applications in nanophotonics, biosensing, plasmonics, quantum optics, and telecommunication. SOIs in metamaterials and metasurfaces largely guarantee exceptional performance and versatility in the exact control of optical fields. Moreover, the applications made possible by the decreased dimensionality of optical metasurfaces are significantly different from those made possible by bulk metamaterials. In general, metasurfaces can offer a novel tool for scalable OAM generation and conversion with minimal losses. This feature can encourage many applications in integrated on-chip OAM generation, such as multiplexing and multicasting approaches, which may hold the promise of boosting transmission capacity and resolving scalability problems beyond the state of the art [12,13,77,78,79,80,81,82]. Acknowledgments This Review is intended as a contribution to the advancement of scientific knowledge, for the benefit of the entire society and its future generations. The author is grateful to Konstantin Bliokh, Andrei Lavrinenko, and Ricardo Depine for many helpful discussions. The author acknowledges Osamu Takayama and Radu Malureanu for providing the metamaterial samples used in Ref. [13]. This work was supported by ANPCyT via grant PICT Startup 2015 0710 and UBACYT PDE 2017. Figure 1(a), (b) and (c) depict the problem's geometry. The normalized Jones vector |ψ⟩ = (E_x, E_y)^T (T stands for the transposition operator) describes the polarization of the incident z-propagating paraxial beam. Quantum weak measurement techniques are also used; these enable substantial amplification when employing almost crossed polarizers for pre-selection and post-selection of the input and output polarization states of the system. The output polarizer corresponds to a post-selected polarization state |ψ'⟩ = (α', β')^T, whereas the input polarizer corresponds to a pre-selected state |ψ⟩ = (α, β)^T; here α (α') and β (β') represent the input (output) polarization components of the input (output) beam, E_x (E_x') and E_y (E_y'). As opposed to traditional expectation values, weak values Ŷ_weak might display a quantum weak amplification effect and lie beyond the operator's spectrum; moreover, weak values can take imaginary values. We examine the quantum weak amplification of the SHE shift using an initial beam with e-polarization, |ψ⟩ = (1, 0)^T, and a nearly orthogonal polarization state |φ⟩ = (ε, 1)^T, |ε| ≪ 1, for the post-selection polarizer.
Here z_R stands for the Rayleigh length. The second, angular term becomes dominant in the far-field zone and presents weak amplification for two reasons: first, because |ε| ≪ 1, and second, because z ≫ z_R in the far-field regime. It should be noted that the maximum weak amplification, achieved at |ε| ≈ (kω₀)⁻¹, gives a shift of the order of the beam waist times the propagation factor, ω₀ z/z_R. Figure 1: (a) General 3D geometry of the problem displaying the angle ϑ between the anisotropy axis of the plate and the beam axis z. (b) The in-plane deflections of the wave vectors (Θ_x) cause the well-known birefringence shift X, which is comparable to the GH shift, by altering the angle between k and the anisotropy axis. (c) View along the anisotropy axis of the crystal. The transverse Θ_y deflections of the wave vectors rotate the corresponding planes of wave propagation with respect to the anisotropy axis by the angle ϕ ≈ Θ_y/sin(ϑ). This causes a new helicity-dependent transverse shift ⟨Ŷ⟩, i.e., a spin-Hall effect similar to the IF shift. Further details are in the text. Spatial light modulators (SLM), holograms, laser mode conversion, and spiral phase plates (SPP) are examples of conventional techniques for producing OAM beams. On the other hand, by positioning nanoantennas with linearly increasing (or decreasing) phase shifts along the azimuthal direction, metasurfaces can produce helical wave fronts [49][50][51][52][53][54][55]. As a result, a metasurface can add an optical vortex to the incident light wave front, thus converting SAM into OAM; this transformation is also termed spin-to-orbit conversion (SOC) [32,33,34,70,71]. Due to the conservation of total angular momentum, the SOC typically facilitates the conversion of LHC and RHC polarization into states with opposing OAM (TAM). A metasurface can transform circular polarizations into states with independent values of OAM by adding an additional phase shift in the azimuthal direction. Recent years have seen the demonstration of the transformation of light with arbitrary elliptical polarization states into orthogonal OAM vortex states [71,72]. Figure 2: (a) Representation on the Poincaré sphere of Spin-Orbit Conversion (SOC) via q-plates. A state of circular polarization located at a pole of the Poincaré sphere is mapped into the opposite state of circular polarization, while imprinting a fixed azimuthal phase ±2qφ. (b) Representation on the Poincaré sphere of spin-orbit conversion (SOC) via J-plates. Arbitrary states of elliptic polarization |λ±⟩ are mapped into their opposite states of elliptic polarization, while imprinting a tunable azimuthal phase (n, m)φ. Figure 3: Schematic depiction of the experimental configurations for (a) polarimetric measurements and (b) quantum weak measurements. P1 and P2 represent double Glan-Laser polarizers (Thorlabs GL10); QWP is a quarter wave plate; L1 and L2 identify lenses. The source is a He-Ne laser (Melles Griot 05-LHR-111) with a 633 nm emission wavelength; the CCD camera is a Thorlabs WFS150-5C. The ordinary and extraordinary permittivities of the photonic material sample, and their corresponding axes, are denoted as ε_o and ε_e, respectively.
where S₂ = I(0°, 45°) − I(0°, 135°) is the normalized Stokes parameter in the diagonal basis, S₃ = I(90°, 45°) − I(90°, 135°) is the normalized Stokes parameter for circular polarization, and the normalization is given by the total intensity S₀. Using the quantum weak measurement configuration of Fig. 3(b), we measured the spin Hall shift weakly and observed the quantum weak amplification effect. A CCD camera (Thorlabs WFS150-5C) is used to image the beam. To achieve this, two lenses (L1) and (L2) with a focal distance of f = 6 cm were employed. Pre-selected and post-selected polarization states are produced by Glan-Thompson polarizers P1 and P2, respectively, with polarization states |ψ⟩ and |ψ'⟩. The first lens (L1), which had a 6 cm focal length, generated a Gaussian beam with a waist of 30 µm and a Rayleigh range of z_R = 4.6 mm. Therefore, z/z_R = 10.86 is the propagation amplification factor for a CCD camera placed at a distance of z = 5 cm. The amplification due to the almost crossed polarizers is set by ε ≈ 1.83 × 10⁻². For k = 2π/λ, the overall weak amplification factor becomes A = 200; this is confirmed in the experiment, in which a displacement between centroids of ∆Y = 1000 µm is measured at a tilt angle ϑ = 20° between post-selection polarizers oriented at ε = −1.83 × 10⁻² (Fig. 4(c), Top) and ε = +1.83 × 10⁻² (Fig. 4(c), Bottom); consequently, the SHE is amplified by a factor A = 200. For crossed polarizers (ε = 0), a Hermite-Gaussian distribution is created from the input Gaussian beam (Fig. 4(c), Middle), and the two centroids are separated from one another by approximately ∆Y = 1000 µm. Figure 4: (a) 3D geometry of the transmission of a paraxial beam through a tilted, transparent birefringent polymer layer. The beam undergoes a transverse shift ⟨Ŷ⟩ at the nanometer scale brought about by the Spin Hall Effect (SHE) of light. The paraxial angles (Θ_x, Θ_y) identify the propagation direction of the incident beam's wave vectors k. (b) A tunable birefringent polymer results in an enhanced SHE of light, as shown in numerical simulations considering different birefringence values: ∆n ≈ 0.009 (gray, index difference in quartz), ∆n ≈ 0.03 (green), ∆n ≈ 0.06 (blue), and ∆n ≈ 0.07 (red, index difference in stretched polymers [32]). The vertical dashed line at ϑ ≈ 0.3 rad illustrates the tunability of the beam shift by different birefringence. (c) Transverse intensity distributions (a.u.) in a false-color scale for an o-polarized beam transmitted through a tilted polymer plate and post-selected in the almost e-polarized state, with a tilt angle ϑ = 20°. Top: post-selected polarization state with ε = −1.83 × 10⁻². The beam centroid is shifted, resulting in a measurement of the weak value ⟨Ŷ⟩_weak = −500 µm. Middle: with crossed polarizers (ε = 0), a Hermite-Gaussian intensity distribution with peaks spaced apart by ∆Y = 1000 µm is created from the Gaussian distribution, corresponding to a weak amplification factor A = 200. Bottom: post-selected polarization state with ε = +1.83 × 10⁻², which corresponds to a weak value measurement of ⟨Ŷ⟩_weak = +500 µm.
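The order of magnitude of these numbers can be checked with a few lines of arithmetic. The sketch below evaluates the propagation factor z/z_R and the far-field estimate of the maximum amplified shift, of order ω₀ z/z_R, quoted earlier; it is a back-of-the-envelope consistency check under the stated beam parameters, not a reproduction of the full weak-value formula of Refs. [11,12].

```python
import numpy as np

# Beam and setup parameters quoted in the text (Ref. [12] experiment).
wavelength = 633e-9          # m
w0 = 30e-6                   # beam waist, m
z = 5e-2                     # camera distance, m
z_R = 4.6e-3                 # Rayleigh range, m
eps = 1.83e-2                # post-selection polarizer offset (almost crossed)

k = 2 * np.pi / wavelength
print(f"propagation factor z/z_R       = {z / z_R:.2f}")        # ~10.9
print(f"crossed-polarizer factor 1/eps = {1 / eps:.1f}")
print(f"1/(k*w0)                       = {1 / (k * w0):.2e}")
# Far-field estimate of the maximal weakly amplified shift, ~ w0 * z / z_R;
# compare with the few-hundred-micron weak values reported in Fig. 4(c).
print(f"max amplified shift ~ w0*z/z_R = {w0 * z / z_R * 1e6:.0f} microns")
```

The estimate lands in the same few-hundred-micron range as the measured weak values, which is the sanity check the text relies on.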
According to Figure 5(d), with λ = 520 nm we observe ⟨Ŷ⟩ = 165 µm for θ_i = 0 rad. At θ_i = 0.035 rad (≈ 2°), the beam shift decreases to ⟨Ŷ⟩ = 5 µm. The previously observed spin Hall effect in dielectric anisotropic media, such as a quartz crystal (≈ 150 µm / 5° = 30 µm/°) and a polymer film (approximately 250 µm / 20° = 12.5 µm/°) [16], is strikingly different from the extremely high angular sensitivity of this new device, in which the beam shift is two orders of magnitude larger and occurs within a few degrees of sample tilting. Additionally, the HMM has a thickness of only 176 nm, as contrasted with the dielectric materials, which have thicknesses of 50 µm for polymer films and 1 mm for quartz plates. Figure 5: (a) Scheme of the Spin Hall Effect (SHE) of light in hyperbolic metamaterials. The incidence angle, indicated by θ_i, switches the transverse beam shift along the y-axis, denoted as ⟨Ŷ⟩. The HMM structure's unit cell is made up of Al2O3 (10 nm)-APTMS (1 nm)-Au (10 nm)-APTMS (1 nm). Eight periods totaling 176 nm in thickness make up the HMM. (b) Effective permittivities of the multilayer HMM structure, calculated according to the effective media approximation for a unit cell of Al2O3 (10 nm)-APTMS (1 nm)-Au (10 nm)-APTMS (1 nm). The structure exhibits type II (ε_o < 0 and ε_e > 0) HMM behavior in the visible wavelength range. The vertical dashed lines indicate the wavelengths λ = 520 nm and 633 nm at which experiments were conducted. Measured and calculated spin Hall transverse shifts ⟨Ŷ⟩ for (c) λ = 633 nm and (d) λ = 520 nm, respectively, are shown as solid lines. The beam waist w₀ is fitted to the simulation results in (a) and (b). Note that there is an estimated beam divergence of ∆θ_i = 0.01 rad for λ = 633 nm and ∆θ_i = 0.02 rad for λ = 520 nm, indicated as lateral error bars, respectively. Further details are in the text. Metasurfaces have recently been shown to be able to replace bulk optical components and to be extended into the single-photon quantum optical regime. For use in quantum network applications, the decreased propagation losses brought about by metasurfaces make it possible to realize a spin-dependent splitter at the single-photon level. In particular, SHE switches and decoders are a fundamental building block for Quantum Information Networks, where polarization-dependent operations, such as C-NOT and SWAP gates, are routinely required [79,81,82] (Fig. 6(b)). Figure 6: SHE of light applications in precision measurements of (a) mechanical properties of photonic materials, (b) quantum network applications, (c) optical/electronic properties of 2D materials, (d) optical switching for spin-based metrology and spintronics. Further details are in the text.
10,287.2
2022-10-06T00:00:00.000
[ "Physics" ]
Historical Origin and Realistic Enlightenment of the “Nanniwan Policy”: On the Realistic and Pragmatic Scientific Spirit of Zhu De The proposal and implementation of the “Nanniwan Policy” fully reflect the practical, realistic, innovative and pragmatic scientific spirit advocated by Zhu De, which has practical significance and an enlightening effect on the construction of an innovative and harmonious society and the sustainable development of the social economy in present-day China. Deficiency of Clothing and Food in the Base Area, Seriously Threatening Survival and Development in the Base Area Ever since the Kuomintang-Communist cooperation in the Anti-Japanese War began in 1937, the battlefield in the enemy's rear area and the frontline battlefield took shape in China. These two battlefields depended on each other and collaborated with each other. After 1938, the Japanese invaders changed their policy of aggression against China: they mainly attacked the anti-Japanese base areas, while carrying out toward the Kuomintang government a policy centered on political inducement to surrender, supplemented by military attack. The anti-Japanese base areas suffered cruel "smashes" by the Japanese invaders several times. The Kuomintang government adopted a policy of besieging and blockading the anti-Japanese base area, successively ceased releasing such necessary material supplies as soldiers' pay, rations and ammunition to the Eighth Route Army, and blocked the normal commercial trade channels between the border area and the outside, which made it impossible to sell the native products of the border area or to buy in the urgently needed food, ammunition, medicine and salt, and which, in the meantime, stopped the donation assistance from progressives in all walks of life and from the overseas Chinese. Zhu De once said at a conference, "Chiang Kai-shek neither released pay nor food, and implemented a military and economic blockade of the Shaanxi-Gansu-Ningxia border region. He ordered not to give us food or clothing. He is planning to starve us to death and freeze us to death." The land in the Shaanxi-Gansu-Ningxia border region was infertile and natural disasters frequently occurred, so this region alone could not meet even the minimum survival needs of the border area government and army. Especially under the "great smash" of the Japanese invaders and the attacks of the Stubborn Army, the area of the base area declined sharply from more than 120 thousand square kilometers to only about 90 thousand square kilometers. Thus, developing the economy and securing supplies became a matter of primary importance for the government to resolve.
Financial Deficiency of the Border Area Government, Seriously Constraining Sustainable Development of the Anti-Japanese War The Shaanxi-Gansu-Ningxia border region was located on the Loess Plateau and was a closed and extremely backward, poverty-stricken area, where agriculture was quite backward. Before the Anti-Japanese War began, there was no industry in the border area. After the war began, there were 9 publicly owned factories in the border area, with a capital of 523 thousand RMB Yuan, and 29 production cooperatives, with a share capital of 135 thousand RMB Yuan and a production value of 20 thousand Yuan. These economic conditions could not provide sufficient sources of revenue for the finances of the border area. Before 1940, the government's primary revenue came through the channel of outside assistance, so it maintained a small surplus. When the Kuomintang government changed its policy to besiege and blockade the base area in 1940, and, furthermore, a large number of young people hurried to Yan'an and forward-deployed forces, such as the 359 Brigade headed by Wang Zhen, were recalled from the frontline to defend the border area, the population in the border area increased sharply and the finances became more deficient. In 1941, the financial deficit reached as much as 5.672 million RMB Yuan. How to develop the economy, increase financial revenue, open up new financial sources and provide solid material security for the Anti-Japanese War became an extremely urgent matter. The Economic Burden of Peasants Was Extremely Heavy, Their Accumulated Rancour toward the Government Deepened, and the Issue of Food Could Evolve into a Disturbing Political Issue As the most difficult period of the Anti-Japanese War approached, the burden on the people in the Shaanxi-Gansu-Ningxia border region became heavy. "The common people transported public salt and provided us with public salt as a replacement for money. Besides, we bought a government loan worth 5 million RMB Yuan in 1941, which was also a heavy burden." At the beginning of the Anti-Japanese War, the border area levied on the peasants an agricultural tax paid in grain of 10 thousand Dan; this rose to 50 thousand Dan in 1939, 90 thousand Dan in 1940, and sharply to 200 thousand Dan in 1941. At that time, there was already a population of 1.5 million in Yan'an, and these people had to bear a public grain burden of 200 thousand Dan. Furthermore, there were many other public loans and taxes. The burden on the peasants was extremely heavy, so the public voiced objections, the relationship between the cadres and the masses became tense, contradictions appeared between the government, the army and the peasants, and the dissatisfaction of the masses increased. As a result, cynical remarks emerged, such as "Why doesn't the Thunder God strike Mao Zedong dead?" Thus, it can be seen that the issue of food had become a grave political issue, involving the fundamental question of whether the central government and the army could gain the support of the common people, get through the most difficult time and carry the war through to the end. Then, the most critical step in coordinating the relationship between the government, the army and the common people, maintaining the fish-and-water affection between the government and the common people, and ensuring victory in the war was to develop the economy and lessen the burden of the people.
Zhu De mentioned in "Finishing the Financial Economic Plan of the Border Area in 1941" in December 1940, the most critical matter in the Shaanxi-Gansu-Ningxia border region was to resolve the problems of food, clothing, articles of everyday use and military supplies.If these problems were unlikely to be resolved, it was difficult to carry through the war to the end and victory of the Anti-Japanese War was not guaranteed. In order to continue to finish the undertaking of anti-Japanese, the Central Communist Party of China headed by Mao Zedong formulated in time the policy of "production of self-help", calling the people and the army in the border area to depend on themselves, better troops and simplify administration and conduct large scale production campaign.Proposal of "Nanniwan policy" and development of Nanniwan then emerged as the times required. Unveiling of "Nanniwan Policy" and Three Reflections of the Scientific Spirit of Zhu De The scientific spirit refers to the be practical and realistic attitude of respecting an objective rule, the realistic and pragmatic concept of daring to be the first and the human-oriented spirit of respecting science and technology as well as talents. In development of Nanniwan and the decision-making process of "Nanniwan policy", the realistic and pragmatic scientific spirit of Zhu De was mainly reflected in three aspects. Adhering to the Historical Materialism with Close Combination of Scientific Development and Building the Country through the Anti-Japanese War According to the historical materialism, Zhu De believed that, in order to effectively resolve the above three political and economic issues, we had no other choice but to focus on science and on application of natural science, and resolve the central economic issue with science and technology.Then we might ensure the material interest of the people and that the undertaking of the Anti-Japanese War was carried forward.He proposed the scientific development thought of "combining science with Anti-Japanese War."That is to say, the People's Republic of China at that time was in a great process of building the country through the Anti-Japanese War, so whether its gaining victory of the war or its success in building of the country depended upon science, both social science and natural science.According to Zhu De, "Natural science is a great power.Only if we achieved progress of the natural science, development of all fields in industry and agriculture, growth in production capacity, development and correct utilization of natural resources, and correct management of the industry, could we replenish our power, enrich the struggling forces of the troops, ensuring the people with an affluent life, improving the cultural degree and political consciousness of the common people, gain victory of the war and win success in building of the country.Whoever ignores this power is a big mistake." According to the scientific development thought, Zhu De creatively put forward the solution to the economic and more political issue of food in a scientific way, namely, "Nanniwan policy".The core content was that the troops opened up wasteland and grew food grain to develop a variety of diversifying operations and depended upon themselves to carry through the war. 
It was proved by the fact that it was exactly due to implementation of "Nanniwan policy" and proceeding of specific measures that effectively impelled large scale production campaign proceed towards a broader and deeper direction, which recuperated the financial resources of the people, strengthened unity between the army and the people, exercised the troops and obtained necessary material power for carrying through the war to the end. The Realistic and Pragmatic Spirit to Go Examination on the Spot When the Commander in Chief Zhu De returned to Yan'an from the frontline of Northern China, it was exactly the most difficult period of the Anti-Japanese War.Therefore, how to produce by self-help and how to carry through the war to the end, without doubt, became the most critical matter of all his businesses. Nanniwan was located forty to fifty kilometers far away from the southeast of Yan'an, closely neighboring to the military defense area of Hu Zongnan, and was a southern gate towards Yan'an, with an important strategic position.In addition, its land was fertile, with sufficient water sources and had nearly 10 thousand Mu of land to be reclaimed. Zhu De drafted some articles, such as "Discussion on the Economic Construction of the Border Area", proposed such opinions as developing resources in the border area, and enhancing the technology and production by self-help to achieve half self-sufficiency and total self-sufficiency of the finance, and creatively put forward the new mode of social and economic development of opening up wasteland by the troops for self-assistance. Zhu De, born in a peasant family, had personally gone to Nanniwan for several times for on-the-spot investigation and exploration and offered detailed and accurate evidence for formulation, implementation and adjustment of the policy. It was recorded by historical data, the Commander in Chief "personally made an on-the-spot survey on Nanniwan, organized the opening up of Nanniwan.At that time, Nanniwan was a desolate place, where birds and beasts were everywhere and wormwood and fleabane blocked the road.When the Commander in Chief got there, he could only find a shack for accommodation at night."Under such a circumstance, the Commander in Chief who was over fifty years old braved the wind and dew, drank cold water, and ate dry steamed bun, slept in a "wolf" nest, overcoming all obstacles and putting himself out of the way.He went to Nanniwan for three times so as to personally know about the geomorphic feature of Nanniwan and its vegetation condition, etc.. On the basis of an on-the-spot survey, he gradually produced a scientific and overall conception to develop Nanniwan. According to the change and requirement of objective conditions, Zhu De pursued new knowledge and new rules, change unfavorable conditions to favorable conditions and fully reflected the realistic, pragmatic, steadfast and earnest spirit. 
Facts proved that it was exactly this realistic, pragmatic and down-to-earth spirit that became the precondition and guarantee for the successful implementation of the "Nanniwan policy". Within only one year, Nanniwan took on a new appearance: "a new market was opened, cave dwellings dotted the mountains, golden grain was planted on the plain, and new rice was grown in the paddy fields. The wasteland was reclaimed and soldiers had enough to eat and wear. Flocks and herds grew fat on the farms, and the ma lan grass proved well suited for papermaking." The implementation and promotion of the "Nanniwan policy" advanced the large-scale production campaign. During 1941 and 1942, the goods obtained through the self-run operations of the army, institutions and schools covered a large part of their entire needs. This was an unprecedented miracle in Chinese history and provided an unshakable material foundation.

The Human-oriented Spirit of Respecting Science and Technology as well as Talent

The scientific spirit originates in the exploration of truth and the pursuit of innovation. Its primary expression is an emphasis on scientific innovation and respect for technical talent. Zhu De once pointed out, "All science and all scientists should serve and work for building the country through the Anti-Japanese War. Then we can defeat the Japanese fascist robbers and build a democratic republic of the Three Principles of the People." In formulating and implementing the "Nanniwan policy", Zhu De actively supported the scientific investigation of plant resources and natural conditions in Nanniwan carried out by the investigation group formed by the famous agricultural and forestry biologist Le Tianyu and others, and he attached great importance to Le Tianyu's suggestion, made in his "Report on Investigation of Forests in the Shaanxi-Gansu-Ningxia Border Region", to reclaim Nanniwan in order to increase food production. Moreover, he went in person to the natural science academy in Yan'an to invite experts such as Le Tianyu to go to Nanniwan to explore its ecological environment and determine the areas that could be developed. At the same time, Zhu De appointed Li Shijun, an agricultural expert with rich experience in the reclamation of wasteland by army units, as director of the administrative committee of the Nanniwan military region. In order to give full play to the roles of agricultural, forestry and animal husbandry scientists, a liberated-area agricultural association headed by Le Tianyu was set up in Yan'an in 1941 under the instruction of Zhu De. In the reclamation of Nanniwan, it was precisely the strong scientific capacity of this agricultural association, its scientific planning, and its concrete implementation and guidance in soil and water conservation, crop improvement and land development that ensured the scientific and sustainable development of Nanniwan. In 1942, the self-sufficiency rate of the 359th Brigade was 61%, and it rose to 100% in 1943. In 1944, 260 thousand mu of wasteland was reclaimed and tilled, 37 thousand dan of grain was harvested, 5624 pigs were raised, and 10 thousand dan of public grain was turned over, achieving "equal cultivation and balance". Respect for science and for scientific talent made the development of Nanniwan a great success and accumulated rich experience for later economic construction.
Zhu De spoke highly of this: "The aggression of the Japanese invaders has caused severe devastation and difficulties for scientific development in China. Yet the great undertaking of the Anti-Japanese War has pointed out a new path forward for science and given it stimulus. On the wasteland on which the brilliance of natural science had never shone, our scientific practitioners learned to overcome difficulties, discovered a great deal of buried treasure, and made many contributions. All this fills us with hope and confidence, with trust in our own power and in our scientific power, which is enough to build a new China…"

Contemporary Enlightenment of the Realistic and Pragmatic Scientific Spirit of Zhu De

A great practice breeds a great spirit. The decision on and implementation of the "Nanniwan policy" realized the stated goal of "developing the economy and securing supply", brought cave dwellings all over the mountains and a vigorous appearance to the troops, and provided ample food and soldiers who grew their own grain, thereby pushing forward the overall development of the large-scale production campaign and promoting coordinated social and economic development in the border area. The development of Nanniwan reflected Zhu De's surefooted scientific spirit of looking far ahead and aiming high. Its realistic and pragmatic spirit of both following objective laws and daring to innovate undoubtedly has great practical significance and offers enlightenment for the contemporary construction of a harmonious socialist society and of an innovative country.

Persisting in the Spirit of Scientific Development, Grasping the Primary Contradiction and Promoting Coordinated Social Development

The development of Nanniwan was guided by the scientific development of the Chinese revolution. It formulated an economic development plan of opening up and reclaiming wasteland by the army, grasped the major contradiction of economic construction, coordinated the relationship between the border government and the common people and between the army and the people, coordinated the relationships among the army's fighting, training and production, and also coordinated the organic connection between economic construction and carrying the war through to the end. It thus became a new mode fitting the social and economic development of its time and laid a solid material and mass foundation for the ultimate victory of the revolution. Today, China's reform and opening up is in a golden period of development and, at the same time, in a critical period in which contradictions are prominent. In the face of multiple problems and contradictions, it is all the more necessary to follow and make use of the objective laws of nature and society, to connect scientific development closely with the construction of a harmonious society, and to formulate scientific plans and strategies suited to different environments, conducive to resolving major contradictions and to the sound development of society and the economy, so as to benefit the people and promote the harmonious development of society.
Adhering to the Development Tenet that the Interests of the People Stand above All and Establishing a Scientific, People-oriented Concept of Performance

Troops opening up wasteland had existed in ancient China, and the development of Nanniwan was characterized by troops opening up and reclaiming wasteland. In essence, this approach started from the fundamental interests of the common people in Yan'an: it was a new economic policy proposed with a view to alleviating the burden on the peasants, restoring the financial resources of the people and ensuring victory in the Anti-Japanese War. Only if "the people both lose and gain, and their gains exceed their losses, can they support the long-term Anti-Japanese War". In accordance with Mao Zedong's instruction, "the more developed our self-sufficient economy, the lighter the taxes imposed on the people". It was necessary to take the common people as the orientation, to worry about what the masses worried about and think about what the masses thought about, to start from the most urgent problems the masses needed resolved, to settle practical matters effectively, to listen to the voice of the people and to understand their sufferings. It was also necessary to regard social, economic and scientific development as the starting point of political performance, and to take whether the common people were satisfied and whether they approved as the standard by which political performance was measured. Only in that way can a promising and harmonious country be built together.

Persisting in the Scientific Spirit of Pragmatic Innovation and Respecting Knowledge and Talent

As the primary productive force, science and technology will undoubtedly bring enormous material wealth, which is the material foundation for social and economic development. Emphasis on science and technology and on scientific talent was an important experience in the development of Nanniwan. In today's socialist construction, scientific and technological innovation is especially important. During China's current period of social transition, new situations and new issues emerge constantly. In the face of them, if we choose not to go deep into grass-roots organizations for on-the-spot investigation and not to step into the fields, peasant houses and workshops, and instead merely "give a hurried and cursory glance" and skim over the surface, then the building of a new socialist countryside, a harmonious society and a creative society will remain mere armchair strategy. The fact that the "Nanniwan policy" could be effectively implemented and that the "slushy mud bay" could be developed into a "Jiangnan of northern Shaanxi" owed much to Commander-in-Chief Zhu De, who at over fifty years of age still personally led his team to the wasteland and set an example by his own practice, and to Zhu De and his team resorting to science, respecting scientific personnel and drawing up a scientific development plan, which was without doubt a prerequisite for the successful development of Nanniwan. Today, when we review the development of Nanniwan and the implementation of the "Nanniwan policy", the practical, realistic and pragmatic scientific spirit of Zhu De undoubtedly retains practical significance for deepening China's contemporary reform and opening up and for building an innovative society.
4,809
2012-09-28T00:00:00.000
[ "Economics" ]
Second-order coherence function of a plasmonic nanoantenna fed by a single-photon source We study the second-order coherence function of a plasmonic nanoantenna fed by the near-field of a single-photon source incoherently pumped in the continuous-wave regime. We consider the case of a strong Purcell effect, when the single-photon source radiates almost entirely into the mode of the nanoantenna. We show that when the energy of thermal fluctuations, $kT$, of the nanoantenna is much greater than the interaction energy between the electromagnetic field of the nanoantenna mode and the single-photon source, $\hbar\Omega_R$, the statistics of the emission is close to that of thermal radiation. In the opposite limit, $\hbar\Omega_R \gg kT$, the nanoantenna radiates single photons. In the latter case, we demonstrate the possibility of overcoming the radiation intensity of an individual single-photon source. This result opens the possibility of creating a high-intensity single-photon source. © 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement SPSs suitable for nanophotonic applications have an important drawback in that their radiation rate is low [13]. The characteristic radiation rate of SPSs based on solid-state quantum emitters does not exceed one radiation event per nanosecond. This radiation rate could be increased by placing an SPS inside an open resonator, i.e. by using the Purcell effect [14]. This increase is proportional to the quality factor of the resonator and is inversely proportional to the volume of the resonator mode. In nano-optics, a system consisting of an antenna and an SPS should be nanosized. Metallic plasmonic nanoantennas satisfy this requirement. In such a system, the role of the resonator mode is played by localized surface plasmons. Although the Q-factor of plasmonic structures is relatively low, due to the small volume of the modes, the Purcell factor can reach values several orders of magnitude above unity. For many applications, it is important to know how the antenna-SPS system radiates. Note that when the Purcell factor is large, the SPS mainly radiates into the resonator mode [17,18]. In other words, an excited SPS passes the main part of its energy to the nanoantenna, which then reradiates this energy. Since the characteristic radiation rate of plasmonic structures is several orders of magnitude greater than the radiation rate of an SPS, we achieve the desired increase in intensity. On the other hand, since a nanoantenna without an SPS is in thermal equilibrium, it radiates as a black body with a second-order coherence function $g^{(2)}(0) = 2$ [19,20]. Thus, the system may no longer radiate single photons, even though $g^{(2)}(0) = 0$ for an SPS. This has been confirmed by recent experimental results involving measurements of the radiation statistics of plasmonic structures interacting with SPSs [21][22][23][24][25][26][27][28][29][30][31][32]. In the overwhelming majority of these experiments, $g^{(2)}(0)$ of the radiation of an antenna-SPS system has a value of a few tenths. Some of these experiments even demonstrate super-Poissonian statistics, with $g^{(2)}(0) > 1$ [25,29,31]. However, if an SPS passes only one photon into the antenna mode, we can expect this photon to be radiated by the antenna before thermalization. It has recently been theoretically shown that a plasmonic nanoantenna may produce single-photon radiation if it is excited by coherent pumping [33], or if coherent population trapping is used in a three-level system [34].
However, incoherent pumping is also widely used and more easily achieved in practical realizations of SPSs. In this paper, we demonstrate that it is possible for an antenna-SPS system to emit single photons under incoherent pumping of the SPS. Using computer simulation, we show that for a plasmonic nanoantenna-SPS system, the values of the second-order coherence function, $g^{(2)}(0)$, are in the range from 0 to 2, depending on the ratio of the energy of thermal fluctuations, $kT$, of the nanoantenna and the interaction energy, $\hbar\Omega_R$, between the mode of the nanoantenna and the SPS. For $kT \gg \hbar\Omega_R$, the Purcell factor is small, as is the part of the energy transferred from the emitter to the antenna; as a result, the nanoantenna radiates as a black body with $g^{(2)}(0) \approx 2$. In the opposite limit, $\hbar\Omega_R \gg kT$, the Purcell factor and the radiation rate are large. In such a case, the rearrangement of the quantum states of the nanoantenna effectively gives single-photon emission, and $g^{(2)}(0)$ can reach zero. The obtained result can be used to create nanoscale ultrafast SPSs based on plasmonic nanoantennas.

The model

We consider a plasmonic nanoantenna, whose size is much smaller than the radiated wavelength in free space, fed by an SPS. We assume that the SPS is a two-level system (TLS) that interacts with only one of the nanoantenna modes and transmits its energy to the nanoantenna through near-field interaction. The Hamiltonian of such a system has the form [20,35]:

$\hat{H}_S = \hbar\omega_M\,\hat{a}^{\dagger}\hat{a} + \hbar\omega_{TLS}\,\hat{\sigma}^{\dagger}\hat{\sigma} + \hbar\Omega_R\,(\hat{a}^{\dagger}\hat{\sigma} + \hat{\sigma}^{\dagger}\hat{a}), \quad (1)$

where $\omega_{TLS}$ and $\omega_M$ are the frequencies of the TLS transition and the antenna mode, respectively. The first term in Eq. (1) describes the nanoantenna mode; the operators $\hat{a}^{\dagger}$ and $\hat{a}$ are the creation and annihilation operators of a plasmon in the mode, and satisfy the commutation relation $[\hat{a}, \hat{a}^{\dagger}] = 1$ (for more detailed information about the second quantization procedure for the near field in dissipative dispersive media, see Refs. [36][37][38][39]). In the absence of interaction between the nanoantenna and the dipole emitter, the eigenstates of the system are products of the nanoantenna and dipole-emitter eigenstates. To describe the losses, we introduce three reservoirs interacting with the system; their Hamiltonian contains three terms [19,40]. The first term describes the electromagnetic field of free space, which is responsible for the radiative losses of the system; its operators $\hat{b}_{\lambda}$ and $\hat{b}_{\lambda}^{\dagger}$ are the annihilation and creation operators of the free-space field modes obtained after the second quantization procedure. The second term describes the dephasing of the TLS, i.e. a process of emission and absorption of a quantum of the reservoir excitation in which the energy of the system does not change, but the average dipole moment (the non-diagonal elements of its density matrix) decays [19,40]. The last term describes the interaction of phonons in the metal with the nanoantenna mode. Using the Born-Markov approximation and excluding the reservoir variables, we obtain the master equation for the density matrix in the Lindblad form [19,41]:

$\dot{\hat{\rho}} = -\frac{i}{\hbar}\,[\hat{H}_S, \hat{\rho}] + \sum_i L_i[\hat{\rho}],$

where the Lindblad superoperators $L_i[\hat{\rho}]$ describe the relaxation processes in the system due to the interaction with the reservoirs, with the corresponding rates, and $T_i$ is the temperature of the $i$-th reservoir. Note that in Eqs. (6) and (7) we add incoherent pumping of the TLS by introducing the term $\gamma_{pump}$, which corresponds to transitions between the eigenlevels with increasing energy [43].
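To make the model above concrete, the following is a minimal sketch, not the authors' code, of how such a Lindblad problem can be set up and solved numerically with the QuTiP library. All parameter values (coupling, loss and pump rates, thermal occupation) are illustrative placeholders rather than values taken from the paper.

```python
# Minimal sketch (assumed setup, not from the paper): a TLS coupled to one
# nanoantenna mode, with radiative loss, Joule losses at finite temperature,
# TLS dephasing and incoherent pumping; the steady-state g2(0) of the mode is computed.
import numpy as np
from qutip import destroy, qeye, tensor, steadystate, expect

N = 10                                    # Fock-space truncation for the antenna mode
a  = tensor(destroy(N), qeye(2))          # plasmon annihilation operator
sm = tensor(qeye(N), destroy(2))          # TLS lowering operator (sigma_-)

omega   = 1.0                             # omega_M = omega_TLS (resonance, arbitrary units)
Omega_R = 0.05                            # Rabi coupling, placeholder value
H = omega * a.dag() * a + omega * sm.dag() * sm + Omega_R * (a.dag() * sm + sm.dag() * a)

gamma_rad, gamma_J, gamma_deph, gamma_pump = 0.1, 0.05, 0.01, 0.001   # placeholder rates
n_th = 0.5                                # thermal occupation set by kT of the Joule reservoir

c_ops = [
    np.sqrt(gamma_rad) * a,               # radiation into free space (T_rad = 0)
    np.sqrt(gamma_J * (n_th + 1)) * a,    # Joule losses: emission into the phonon bath
    np.sqrt(gamma_J * n_th) * a.dag(),    # Joule losses: absorption from the phonon bath
    np.sqrt(gamma_deph) * sm.dag() * sm,  # pure dephasing of the TLS
    np.sqrt(gamma_pump) * sm.dag(),       # incoherent pumping of the TLS
]

rho_ss = steadystate(H, c_ops)
g2_0 = expect(a.dag() * a.dag() * a * a, rho_ss) / expect(a.dag() * a, rho_ss) ** 2
print(f"g2(0) = {g2_0:.3f}")              # expected between 0 and 2, depending on the parameters
```

Scanning the thermal occupation (i.e. the Joule-reservoir temperature) and the coupling strength in such a sketch reproduces, qualitatively, the crossover between thermal-like and single-photon statistics discussed in the following.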
We assume that the temperature of the free-space reservoir is zero, $T_{rad} = 0$, and that the temperature of the pumping reservoir is formally negative, $T_{pump} \to -0$, so that interaction with it results in an energy transfer only from the reservoir to the system. The temperature of the reservoir of Joule losses can change, and we investigate the dependence of the system behavior on this temperature. We suppose that the radiative and nonradiative losses of the nanoantenna and the TLS dephasing remain the same as for a non-interacting antenna and SPS. From the system of equations in (8), we can obtain the dynamics of the diagonal elements of the density matrix. We can then use these to calculate all the average values of the operators of interest at any moment in time, as $\langle \hat{O}(t) \rangle = \mathrm{Tr}\,(\hat{O}\hat{\rho}(t))$. In the following, we consider the behavior of the second-order coherence function $g^{(2)}(0)$.

The plasmonic nanoantenna as a single-photon source

The second-order coherence function $g^{(2)}(0) = \langle \hat{a}^{\dagger}\hat{a}^{\dagger}\hat{a}\hat{a} \rangle / \langle \hat{a}^{\dagger}\hat{a} \rangle^{2}$ characterizes the photon statistics of the emission. Here, we assume that the nanoantenna makes the main contribution to the radiation (which is much greater than that of the SPS). This assumption is reasonable because, as mentioned in [18], the antenna re-radiates most of the energy when the Purcell factor is large. As the ratio $\hbar\Omega_R / kT$ increases, $g^{(2)}(0)$ crosses over from 2 to 0. In the limit $\hbar\Omega_R \gg kT$, $g^{(2)}(0)$ tends to 0, corresponding to the radiation of single photons. Thus, for a sufficiently strong interaction and a low pumping rate, the plasmonic nanoantenna emits single photons, in agreement with experiment [23,27,32]. It should be noted that an increase in the pumping rate $\gamma_{pump}$ relative to $\gamma_{rad}$ causes $g^{(2)}(0)$ to tend to unity, and the light from the system becomes coherent (see the dashed and dot-dashed curves in Fig. 1). This behavior corresponds to the coherent generation of the near-field in the nanoantenna; in this case, the system turns into a nanolaser. However, when only one SPS is used, this regime cannot be achieved, since the corresponding pumping rate is very high and cannot be obtained in experiments (see also Ref. [45]). Thus, in the case of a nanoantenna fed by one SPS, the real pumping rate is much lower than the threshold value. The case of zero pumping corresponds to the situation in which only the reservoir with a temperature greater than zero provides energy to the system. Note that at room temperature, in the optical region, black-body radiation is negligible and the system essentially does not radiate. To create radiation which can be detected, $\gamma_{pump}$ should have a reasonable value that is greater than zero. Thus, it is possible to observe single-photon emission from a nanoantenna by setting the required temperature and the pumping power of the TLS. The effect described here was obtained using a numerical simulation of Eq. (8). To clarify the mechanism of this effect, we consider a simplified model of the original problem.

Low-quantum excitation limit

To understand the behavior described in the previous section, we consider a simplified model of the system consisting of a nanoantenna coupled to an SPS. Let us assume that the pumping power is zero and take into account only the excitations of the lower states, Eq. (2), which give the first nonzero contributions to $g^{(2)}(0)$. Suppose for a moment that we have only the interaction with the reservoir of Joule losses, with temperature $T$. In this case, the system comes to thermal equilibrium with the reservoir, and the diagonal elements of the density matrix are then distributed according to the Gibbs distribution [19], i.e. $\rho_{mm} \propto \exp(-E_m / kT)$, where $E_m$ is the energy of the $m$-th eigenstate.
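For reference, the value $g^{(2)}(0) = 2$ quoted for the thermal limit follows directly from such a Gibbs distribution when the antenna mode alone is considered. The short derivation below is a standard textbook result and is not taken from Eqs. (10)-(16) of the paper.

```latex
p_n \propto e^{-n\hbar\omega_M/kT}
\;\Rightarrow\;
p_n = (1-q)\,q^{\,n}, \qquad q = e^{-\hbar\omega_M/kT},
\qquad
\langle n \rangle = \frac{q}{1-q}, \qquad
\langle n(n-1) \rangle = \frac{2q^{2}}{(1-q)^{2}},
\]
\[
g^{(2)}(0)
  = \frac{\langle \hat a^{\dagger}\hat a^{\dagger}\hat a \hat a \rangle}{\langle \hat a^{\dagger}\hat a \rangle^{2}}
  = \frac{\langle n(n-1) \rangle}{\langle n \rangle^{2}}
  = 2, \qquad \text{independently of } T.
```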
Using Eqs. (10) to (12), we can calculate $g^{(2)}(0)$. At resonance, $\omega_M = \omega_{TLS}$, in the limits $\hbar\Omega_R / kT \gg 1$ and $\hbar\Omega_R / kT \ll 1$ (Fig. 1, dashed and solid curves, respectively), at zero pumping rate we obtain $g^{(2)}(0) \to 0$ and $g^{(2)}(0) \to 2$, respectively. Expressions (15) and (16) are in qualitative agreement with the results of the numerical simulation, as shown in Fig. 1. The obtained result can be qualitatively explained as follows. When the energy of thermal fluctuations, $kT$, is much higher than the interaction energy, $\hbar\Omega_R$, the excitation passed to the nanoantenna thermalizes before being re-emitted and the radiation statistics approach those of a black body; in the opposite limit, the single excitation received from the SPS is radiated before thermalization.

Radiation intensity

(Figure: radiation intensity of the antenna-SPS system as a function of the pumping rate.) In both cases, the radiation rates are larger than that for a single TLS, and may reach one radiation process per picosecond (for the parameters used in Refs. [27] and [32], the values are larger by two and three orders of magnitude, respectively). The quantitative distinction between the experimental results obtained in Refs. [27] and [32] (for the value of $g^{(2)}(0)$ and the radiation rates) is due to the significant difference in the Rabi constants of the systems and, consequently, in the Purcell factors. Moreover, according to Fig. 3 and Eq. (17), a small negative detuning, $\omega_M < \omega_{TLS}$, can reduce the value of $g^{(2)}(0)$ even further, as observed in the experiment in Ref. [27].

Conclusion

As discussed in the introduction, an attempt to increase the radiation rate of isolated SPSs by using plasmonic nanoantennas is expected to lead to the deterioration of the single-photon radiation properties, due to the contribution of the nanoantenna (open-cavity) radiation to the total emission of the system. In the present paper, we demonstrate the possibility of a nanoantenna fed by an SPS radiating single photons at high rates. We show that the second-order coherence function of the radiation, $g^{(2)}(0)$,
2,850.4
2019-08-05T00:00:00.000
[ "Physics" ]
SND@LHC: the scattering and neutrino detector at the LHC SND@LHC is a compact and stand-alone experiment designed to perform measurements with neutrinos produced at the LHC in the pseudo-rapidity region 7.2 < η < 8.4. The experiment is located 480 m downstream of the ATLAS interaction point, in the TI18 tunnel. The detector is composed of a hybrid system based on an 830 kg target made of tungsten plates, interleaved with emulsion and electronic trackers, also acting as an electromagnetic calorimeter, and followed by a hadronic calorimeter and a muon identification system. The detector is able to distinguish interactions of all three neutrino flavours, which allows probing the physics of heavy-flavour production at the LHC in the very forward region. This region is of particular interest for future circular colliders and for very-high-energy astrophysical neutrino experiments. The detector is also able to search for the scattering of Feebly Interacting Particles. In its first phase, the detector is ready to operate throughout LHC Run 3 and collect a total of 250 fb⁻¹.

Overview

SND@LHC is a compact experiment proposed to exploit the high flux of energetic neutrinos of all flavours from the LHC [1,2]. It is located slightly off-axis, covering the unexplored pseudo-rapidity (η) range from 7.2 to 8.4, in which a large fraction of neutrinos originate from charmed-hadron decays. Thus, neutrinos can probe heavy-flavour production in a region that is not accessible to other large LHC experiments, which are designed to study high-p_T physics at η < 5. Together with the FASER [3] experiment, SND@LHC will make the first observations of neutrinos produced at a collider, in an energy range which has so far been inaccessible at accelerators. SND@LHC is also sensitive to Feebly Interacting Particles (FIPs) through their scattering off nuclei and electrons in the detector target. The direct-search strategy gives the experiment sensitivity in a region of the FIP mass-coupling parameter space that is complementary to other indirect searches [4]. In order to shield the detector from most of the charged particles produced in the LHC collisions, SND@LHC is located in the TI18 tunnel, about 480 m downstream of the ATLAS interaction point (IP1). Charged-lepton identification and the measurement of the neutrino energy are essential to distinguish among the three flavours in neutrino charged-current interactions and to identify and study the corresponding neutrino production process. These features were the main drivers in the design of the SND@LHC apparatus, which also had to account for the geometrical constraints of the selected location. The detector was installed in TI18 in 2021 during Long Shutdown 2 and has been collecting data since the beginning of LHC Run 3 in April 2022. The SND@LHC experiment will run throughout the whole of Run 3 and is expected to collect 250 fb⁻¹ of data in 2022-25, corresponding to about two thousand high-energy neutrino interactions of all flavours in the detector target. The SND@LHC detector is composed of several parts (see Figure 1). The veto system tags events with charged particles entering the detector from the front. It is followed by the emulsion target, which acts as a vertex detector, and the target trackers that provide the timestamp for the events reconstructed in the emulsions. The combination of the emulsion target and the target tracker also acts as an electromagnetic calorimeter.
A shielding structure surrounding the target has been put in place to absorb low-energy neutrons and to act as a thermal insulation chamber. The target system is followed by a hadronic calorimeter and a muon identification system. The detector concept and the physics goals of the SND@LHC experiment have been described in the Technical Proposal [2]. This document details the detector layout, construction and installation phases. Sections 2 to 5 describe the sub-systems of the detector. Section 6 describes the data acquisition and online systems, while Section 7 discusses the offline software and simulation framework. Sections 8 and 9 give details about the commissioning and installation of the detector. Finally, in Section 10 we give some ideas about a possible upgrade of the detector.

The physics case

Neutrinos allow for precise tests of the Standard Model (SM) [5][6][7][8]. They are a probe for new physics [9,10] and provide a unique view of the Universe [11]. The neutrino-nucleon cross-section in the energy region between 350 GeV and 10 TeV is currently unexplored [12,13]. Indeed, measurements of neutrino interactions in the last decades were mainly performed at low energies for neutrino oscillation studies. Neutrinos in collisions at the CERN LHC arise promptly from leptonic W and Z decays and from charm and beauty decays. They are subsequently also produced in the decays of pions and kaons. The use of the LHC as a neutrino factory was first envisaged about 30 years ago [14][15][16], in particular for the then undiscovered ν_τ [17]. The idea suggested a detector intercepting the very forward flux (η > 7) of neutrinos (about 5% of which have τ flavour) from heavy-flavour decays. Recently, it was pointed out [18] that at larger angles (4 < η < 5) leptonic W and Z decays also provide an additional contribution to the neutrino flux, of which one third has τ flavour. The role of an off-axis setup has been emphasised in a recent paper [19]. Today, two factors make it possible and particularly interesting to add a compact neutrino detector at the LHC. The high intensity of collisions achieved by the machine translates into a large expected neutrino flux in the forward direction [1], and the high neutrino energies imply relatively large neutrino cross-sections. As a result, even a detector with a relatively modest size, fitting into one of the existing underground areas close to the LHC tunnel, has significant physics potential. Machine-induced backgrounds decrease rapidly while moving along and away from the beam line. A detailed study of a possible underground location for a neutrino detector was conducted in 2018 [1], during LHC Run 2. Four locations were considered for hosting a possible neutrino detector: the CMS quadrupole region (25 m from the CMS Interaction Point (IP5)), UJ53 and UJ57 (∼90 and 120 m from IP5), RR53 and RR57 (∼240 m from IP5), and TI18 (∼480 m from IP1). The potential sites were studied on the basis of expected neutrino rates, flavour composition and energy spectrum, predicted backgrounds, and in-situ measurements performed with a nuclear emulsion detector and radiation monitors. TI18 emerged as the most favourable location. Assuming a luminosity of 250 fb⁻¹ in LHC Run 3, a detector with a mass of 830 kg located in TI18 can observe and study about two thousand high-energy neutrino interactions of all flavours. The main physics goals of the SND@LHC experiment are summarised in the following sections.
Figure 2 shows the energy spectrum of incoming neutrinos and anti-neutrinos in the pseudo-rapidity range covered by the SND@LHC detector, 7.2 < η < 8.4, normalised to 250 fb⁻¹. Neutrino production in proton-proton collisions at the LHC is simulated with the FLUKA Monte Carlo code [20,21]. DPMJET3 (Dual Parton Model, including charm) [22,23] is used for the event generation, and FLUKA performs the particle propagation towards the SND@LHC detector with the help of the FLUKA model of the LHC accelerator [24]. FLUKA also takes care of simulating the production of neutrinos from decays of long-lived products of the collisions and of particles produced in re-interactions with the surrounding material. GENIE [25] is then used to simulate neutrino interactions with the SND@LHC detector material. About 1700 charged-current (CC) and 550 neutral-current (NC) neutrino interactions are expected in the target volume, mainly from muon neutrinos (72%) and electron neutrinos (23%).

Neutrino physics

In the explored range, electron neutrinos and anti-neutrinos are predominantly produced by charmed-hadron decays. Therefore, if one assumes that the deep-inelastic charged-current cross-section of the electron neutrino follows the SM prediction, as also supported by the HERA results in their SM interpretation [26,27], electron neutrinos can be used as a probe of charm production. Taking into account uncertainties in the correlation between the yield of charmed hadrons in a given region and the neutrinos in the measured region, it was evaluated that the measurement of charmed-hadron production in pp collisions can be done with a statistical uncertainty of about 5%, while the leading contribution to the uncertainty is the systematic error of 35% [2]. Furthermore, the measurement of the charmed hadrons can be translated into a measurement of the corresponding open-charm production in the same rapidity window, given the linear correlation between the parent charm quark and the hadron. The dominant partonic process for associated charm production at the LHC is the scattering of two gluons producing a charm-anticharm pair [28]. The average lowest momentum fraction (x) of interacting gluons probed by SND@LHC is around 10⁻⁶. The extraction of the gluon PDF at such low values of x, where it is completely unknown, could provide valuable information for future experiments probing the same low-x range, such as the FCC [29]. It can also reduce uncertainties on the flux of very-high-energy (PeV scale) atmospheric neutrinos produced in charm decays, essential for the evidence of neutrinos from astrophysical sources [30,31]. Since the three neutrino flavours can be identified, lepton flavour universality can be tested in the neutrino sector by measuring the ratios of ν_e/ν_τ and ν_e/ν_μ interactions. Both ν_e and ν_τ are mainly produced by semi-leptonic and fully leptonic decays of charmed hadrons. Unlike ν_τ, which are produced almost only in D_s decays, ν_e are produced in the decays of all ground-state charmed hadrons, essentially D⁰, D⁺, D_s and Λ_c. Therefore, the ν_e/ν_τ ratio depends only on the charm hadronisation fractions and decay branching ratios. The systematic uncertainties due to the charm-quark production mechanism cancel out, and the ratio becomes sensitive to the ν-nucleon interaction cross-section ratio of the two neutrino species, which is affected by the uncertainty on hadronisation processes.
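As a purely illustrative sketch of the statement above — that the ν_e/ν_τ yield ratio from charm depends only on hadronisation fractions and branching ratios — the following toy calculation shows how such a ratio would be assembled. All numerical values are hypothetical placeholders, not measured inputs of the SND@LHC analysis.

```python
# Toy illustration only: the nu_e / nu_tau yield ratio from charm decays,
# built from hadronisation fractions f_h and branching ratios. Any common
# charm-production factor cancels in the ratio. All numbers are hypothetical.
f = {"D0": 0.60, "D+": 0.24, "Ds": 0.08, "Lc": 0.08}           # placeholder hadronisation fractions
br_to_nu_e = {"D0": 0.07, "D+": 0.16, "Ds": 0.07, "Lc": 0.04}  # placeholder BR(h -> e nu_e X)
br_Ds_to_tau_nu = 0.055                                         # placeholder BR(Ds -> tau nu_tau)

nu_e_yield = sum(f[h] * br_to_nu_e[h] for h in f)
nu_tau_yield = f["Ds"] * br_Ds_to_tau_nu        # prompt nu_tau from Ds only, for simplicity

print(f"illustrative nu_e / nu_tau yield ratio ~ {nu_e_yield / nu_tau_yield:.0f}")
```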
The estimate of the branching ratios has a systematic uncertainty of about 22%, while the statistical uncertainty is dominated by the low statistics of the ν_τ sample, which corresponds to a 30% accuracy. The situation is rather different for ν_μ when compared to ν_e. The ν_μ are much more abundant, but heavily contaminated by neutrinos from pion and kaon decays, and therefore the production mechanism cannot be considered the same as in the case of ν_e. However, this contamination is mostly concentrated at low energies. Above 600 GeV, the contamination is predicted to be reduced to about 35%, and to be stable with energy. Moreover, charmed-hadron decays have practically equal branching ratios into electron and muon neutrinos. Therefore the ν_e/ν_μ ratio is not affected by the systematic uncertainties in the weighted branching fractions, but rather by uncertainties due to pion and kaon production in this range and to their propagation through the machine elements along the beamline, which can be assessed thanks to the available measurements used to constrain the simulation. The ν_e/ν_μ ratio provides a test of lepton flavour universality with an uncertainty of 15%, with equal 10% statistical and systematic contributions. SND@LHC plans to measure the ratio between charged-current (CC) and neutral-current (NC) interactions as an internal consistency test. Indeed, by summing over neutrinos and anti-neutrinos, the ratio between NC and CC deep-inelastic interaction cross-sections at a given energy can be written as a simple function of the Weinberg angle, with a correction factor accounting for the non-isoscalarity of the target [32]. In the approximation that the differential ν and ν̄ fluxes, as a function of their energy, are equal, the same formula also applies to the observed interactions, since the convolution with the flux would bring the same factor everywhere, which then cancels out in the ratio. The statistical uncertainty on the NC/CC ratio for observed events is expected to be lower than 5%, while the systematic uncertainty on the unfolded ratio amounts to about 10% [2].

Feebly Interacting Particles

The SND@LHC experiment is also capable of performing model-independent direct searches for FIPs. The background from neutrino interactions can be rejected by means of a time-of-flight (TOF) measurement. With a time resolution of ∼200 ps, it will be possible to disentangle the scattering of massive FIPs from that of neutrinos, with a significance that depends on the particle mass [2]. The hybrid nature of the apparatus, which combines emulsion trackers and electronic detectors, makes this separation possible. FIPs may be produced in the collisions at the LHC interaction point, propagate to the detector and decay or scatter inside it. A recent work [33] summarises the SND@LHC sensitivity to physics beyond the Standard Model, considering the scattering of light dark-matter particles via a leptophobic U(1) mediator, as well as decays of Heavy Neutral Leptons, dark scalars and dark photons. Elastic scattering off electrons was considered, showing an excess of neutrino-like elastic scatterings over the SM yield due to the additional dark-matter scattering process. The excellent spatial resolution of nuclear emulsions and the muon identification system also make SND@LHC well suited to search for the decay of neutral mediators into two charged particles.
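As a brief numerical illustration of the time-of-flight separation mentioned above: over the ∼480 m flight path from IP1 to TI18, a massive particle lags behind an effectively massless neutrino by an amount that can be compared with the ∼200 ps timing resolution. The masses and momenta below are hypothetical examples, not values from the paper.

```python
# Illustration only (hypothetical masses and momenta): extra flight time of a
# massive FIP relative to a light-speed particle over the ~480 m path from IP1 to TI18.
import math

L_PATH_M = 480.0               # flight path in metres
C_M_PER_S = 299_792_458.0      # speed of light

def tof_delay_ns(mass_gev: float, momentum_gev: float) -> float:
    """Delay with respect to a light-speed particle, in nanoseconds."""
    beta = momentum_gev / math.hypot(momentum_gev, mass_gev)   # beta = p / E
    return (L_PATH_M / C_M_PER_S) * (1.0 / beta - 1.0) * 1e9

for mass, momentum in [(1.0, 10.0), (2.0, 10.0), (5.0, 20.0)]:  # GeV, placeholder values
    print(f"m = {mass:3.1f} GeV, p = {momentum:4.1f} GeV -> delay ≈ {tof_delay_ns(mass, momentum):5.1f} ns")
```

For these illustrative values the delay amounts to tens of nanoseconds, well above the quoted ∼200 ps resolution; the separation becomes harder as the mass decreases or the momentum increases, consistent with the mass dependence of the significance mentioned above.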
Detector layout

The detector layout was developed to allow for the identification of the three neutrino flavours and the direct search for FIPs. The layout of the detector, with the exclusion of the neutron shield, is shown in Figure 3. The apparatus is composed of a target region followed downstream by a hadronic calorimeter and a muon identification system. Upstream of the target region, two planes of scintillator bars act as a veto for charged particles, mostly muons coming from IP1. The target region, with a mass of about 830 kg, is instrumented with five walls of Emulsion Cloud Chambers (ECC) [34], each followed by a Scintillating Fibre (SciFi) plane. The ECC technology alternates emulsion films, acting as tracking devices with micrometric accuracy, with passive material acting as the neutrino target. Tungsten is used as the passive material to maximise the mass within the available volume. The SciFi planes provide the timestamp for the reconstructed events and have an appropriate time resolution for time-of-flight measurements of particles from IP1. The combination of the emulsion target and the target tracker also acts as an electromagnetic calorimeter, with a total of 85 X₀. Veto, emulsion target and target tracker are contained in a 30% borated polyethylene and acrylic box, which has the dual function of acting as a shield against low-energy neutrons and of maintaining controlled temperature and humidity levels in order to guarantee optimal conditions for the emulsion films. The hadronic calorimeter and muon identification system are located downstream of the target and consist of eight 20 cm-thick iron slabs (green), making up 9.5 interaction lengths (λ_int) in total, each followed by one or two planes of 1 cm-thick scintillating bars. The hadronic shower starts developing already in the target region, which adds on average 1.5 λ_int, for an average total length of 11 λ_int, thus providing good coverage of the hadronic showers. The muon identification is mainly based on the last three planes of scintillator bars. These planes have double layers with narrower bars oriented both vertically and horizontally for higher granularity. The detector, including the neutron shield described in Section 4.4, exploits all the available space in the TI18 tunnel to cover the desired range in pseudo-rapidity. Figure 4 shows the side and top views of the detector positioned inside the tunnel. The size of the tunnel, the tilted slope of the floor, as well as the distance of the tunnel walls and floor from the nominal collision axis, imposed several constraints on the detector design, since no civil engineering work could be done in time for operation in Run 3. The detector layout was therefore optimised to find the best compromise between the geometrical constraints and the following physics requirements: a good calorimetric measurement of the energy, requiring about 10 λ_int; a good muon identification efficiency, requiring enough material to absorb hadrons; and a transverse size of the target region providing the desired azimuthal angular acceptance. The energy measurement and the muon identification set a constraint on the minimum length of the detector. Given the constraints from the tunnel, this requirement competes with the azimuthal angular acceptance, which determines the overall flux intercepted and therefore the total number of observed interactions. The geometrical constraints also restrict the detector to the first quadrant only around the nominal collision axis. The identification of the neutrino flavour is done in charged-current interactions by identifying the charged lepton produced at the primary vertex (see Section 7).
Electrons will be clearly separated from neutral pions thanks to the micrometric accuracy and fine sampling of the Emulsion Cloud Chambers, which will enable photon conversions downstream of the neutrino interaction vertex to be identified. Muons will be identified by the electronic detectors as the most penetrating particles. Tau leptons will be identified topologically in the ECCs, through the observation of the tau decay vertex together with the absence of any electron or muon at the primary vertex, following the technique developed by OPERA [35,36]. FIPs will be identified through their scattering off electrons and nuclei of the emulsion target material. In the case of a FIP scattering elastically off atomic electrons, the experimental signature consists of an isolated recoil electron that can be identified through the development of an electromagnetic shower in the target region. For FIPs interacting elastically with a proton, instead, an isolated proton will produce a hadronic shower in the detector. In both cases the background can be reduced to a negligible level by topological and kinematic selections. The timing information will be used to confirm any excess of events with the expected signature [2].

Veto system

The veto system aims at rejecting charged particles entering the detector acceptance, mostly muons coming from IP1. It is located upstream of the target region and comprises two parallel planes, located 4.3 cm apart, of stacked scintillating bars read out on both ends by silicon photomultipliers (SiPMs), as shown in Figure 5. One plane consists of seven stacked bars of EJ-200 scintillator [37], each 1 × 6 × 42 cm³. EJ-200 is found to have the right combination of light output, attenuation length (3.8 m) and fast timing (rise time of 0.9 ns and decay time of 2.1 ns). The emission spectrum peaks at 425 nm, closely matching the SiPM spectral response. The number of photons generated by a minimum-ionising particle crossing 1 cm of scintillator is of the order of 10⁴. Bars are wrapped in aluminized Mylar foil [38] to ensure opacity and isolate them from light in adjacent bars. Each bar end is read out by eight Hamamatsu S14160-6050HS [39] SiPMs (6 × 6 mm² active area, 50 µm pixel size). The SiPMs are mounted on a custom-built PCB that covers all seven bars on each end of a plane. A transparent silicone epoxy gel [40] fills the ∼1 mm space between the SiPMs and the bars. Each individual SiPM signal is read out by a single channel of the front-end (FE) board, containing two TOFPET2 ASICs (described in Section 6.1.2). A DAQ board collects the digitized signals from four FE boards. A CAEN mainframe, described in Section 6.1, which is shared with the muon system, houses low-voltage (LV) and high-voltage (HV) CAEN power supplies. Details of the data acquisition (DAQ) system and of the boards are described in Section 6. The total number of channels per PCB is 56, totaling 224 channels for the entire veto system. The stacked bars of each plane are housed in an aluminum frame with 4 mm-thick walls. The PCBs are mounted on 4 cm-wide rectangular flanges on both ends and act as end caps for the frame. An aluminum cover on each end is used to ensure light tightness and also acts as a heat sink for the FE board, which generates ∼3 W and is placed in a groove in the cover on the side opposite to the PCB. The two frames of the veto system are held together by a small support structure.
This in turn is attached to the support of the target region within 1 mm accuracy, as shown in Figure 7. A vertical shift of 2 cm between the two frames allows for 100% coverage of the target region, compensating for the inefficiency due to the dead area between bars introduced by the wrapping material (∼60 µm) and by variations in bar height (∼250 µm). The DAQ board is mounted on the support frame directly in front of the veto planes. Fine alignment is performed as part of the target-region alignment, as mentioned in Section 4.

Overview

The Target Tracker system is made of five scintillating fibre (SciFi) planes interleaving the five target walls. The SciFi technology is well suited to cover large surfaces in a low track-density environment, where a ∼100 µm spatial resolution is required. The role of the SciFi trackers is two-fold: to assign a timestamp to neutrino interactions reconstructed in the ECC walls and to provide an energy measurement of electromagnetic showers. Moreover, the combination of the SciFi and the scintillating bars of the muon detector will also act as a non-homogeneous hadronic calorimeter for the measurement of the energy of the hadronic jet produced in the neutrino interaction and hence of the neutrino energy. The matching with events reconstructed in the target walls is performed by connecting the centre of gravity of electromagnetic and hadronic showers, reconstructed in the SciFi immediately downstream of the ECC where the interaction occurred, with tracks reconstructed in the emulsions. The large multiplicity of tracks produced in neutrino interactions and the high density of passing-through muons prevent a track-by-track matching between SciFi and ECC. The measurement of the electromagnetic shower energy is based on information provided both by the ECC bricks and by the Target Tracker planes. The five target walls (∼17 X₀ each), interleaved with the SciFi tracker modules, form a coarse sampling calorimeter. The two main components employed in this SciFi tracker, the scintillating fibre mats and the multichannel SiPM photo-detectors, were developed by the EPFL group for the LHCb SciFi Tracker [41]. The read-out electronics is different from the one used in LHCb and has been optimised to achieve an improved time resolution and to detect electromagnetic showers.

The SciFi modules

The SciFi modules for SND@LHC, shown in Figure 8, closely follow the design of the 2.5 m long modules built for LHCb. The double-clad polystyrene scintillating fibres from Kuraray (SCSF-78MJ), with a diameter of 250 µm, are blue-emitting fibres with a decay time of 2.8 ns. The fibres are arranged in six densely-packed staggered layers, forming fibre mats of 1.35 mm thickness. A picture of the cross-section of such a mat is shown in Figure 9. The fibre winding and gluing process has been developed within the LHCb SciFi collaboration. A dedicated winding machine, shown in Figure 10, with tension and position control as well as optical feedback, has been engineered. Fibre mats produced for the SND@LHC tracker are 133 mm wide and 399 mm long; they are integrated into a fibre plane with less than 500 µm dead zones. (Figure 10: The winding wheel, with a circumference of 2.5 m, allows five 40 cm mats to be wound. The winding process has been refined and adjusted in order to obtain precise and regular fibre mats. The path of the fibre in the machine is highlighted in cyan.) A polycarbonate end-piece is glued and an optical surface cut is applied to each end of the fibre mat.
One side of the mat is brought into direct contact with the epoxy entrance window of the photo-detector, and the other end can optionally have a mirror or a light-injection fibre coupling.

The SiPM photo-detectors and readout electronics

The readout consists of the photo-detector (S13552 SiPM multichannel arrays by Hamamatsu) at the end of the fibre module, a short Kapton flex PCB holding the photo-detector and signal connectors, and the front-end electronics board, shown in Figure 11a. The light tightness of the module is ensured by a seal on the flat Kapton flex, the aluminium module frame and an opaque Tedlar sheet on both sides of the module. This encloses the photo-detector and the entire fibre region. The light tightness is evaluated during the assembly phase and leaks are closed with glue. The photo-detectors are not actively cooled, as their heat dissipation is low and the expected noise is acceptable at the operating temperature of 15°C. The SiPM multichannel array is optimised for low light-intensity detection. For the application in SND@LHC, the SciFi performance has to be tuned to maximise the hit detection efficiency at an acceptable noise rate. The final threshold chosen for operation produces a noise rate of 25 Hz per channel, which poses no problem to the event builder (described in Section 6.1.4) and can be efficiently suppressed by the online noise filter. The SciFi detector with this configuration features an efficiency of ∼99%, as discussed in Section 8.2.2. The array used in SND@LHC, shown in Figure 11, has an active channel area of 0.25 × 1.625 mm², a peak photo-detection efficiency (PDE) of 47% and a gain of 3.6 × 10⁶ at 3.5 V over-voltage. To obtain a high PDE, large pixels of 62.5 µm × 57.5 µm were chosen, leading to an array of 26 × 4 pixels per channel, which significantly limits the linearity and the dynamic range to about 50 photo-electrons per channel. For this operating condition, the light yield (LY) for a minimum-ionising particle (MIP) traversing the fibre plane in the centre of the module is ∼25 photo-electrons (PE). The readout chip chosen for this tracker is the TOFPET2 ASIC [42][43][44], described in Section 6.1.2. Its power consumption is 1.5 W per 64 readout channels, including the loss from linear voltage regulation. A water cooling system has been chosen to counteract the limited convection due to the dense packing between modules. To simplify the mechanical design of the water cooling, the FPGA of the DAQ boards is connected to the large aluminium support of the module and not to the water cooling. The thermal design has been verified and the temperature lies within the required range during operation. The heat dissipation of the electronics into the target enclosure is ∼24 W per board, or a total of ∼720 W for the complete SciFi tracker.

Low voltage and SiPM bias voltage

The power for each DAQ board is provided by a CAEN A2519, an 8-channel, 15 V, 5 A power supply module, hosted in one of the CAEN mainframes (see Section 6.1 for more detail). To optimise the cost of the SiPM bias voltage, a single channel per DAQ board is used. A group selection of SiPM arrays allows the breakdown-voltage spread among SiPMs biased by the same voltage to be minimised.

Calibration

The DAQ electronics provide an electrical injection signal, synchronous to all TOFPET2 FE chips on one board. This allows for a first-order time calibration between channels on the same board.
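As a minimal sketch of how this first-order calibration could work in practice (an assumed illustration, not the actual SND@LHC calibration code): since all channels on a board receive the injection pulse at the same true time, the mean recorded time of each channel, referred to the board average, directly estimates its offset.

```python
# Illustration with synthetic data (not SND@LHC data): per-channel time offsets
# recovered from a synchronous injection pulse seen by all channels of a board.
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_pulses = 64, 500

true_offsets = rng.normal(0.0, 0.3, n_channels)            # unknown per-channel offsets (ns), synthetic
jitter = rng.normal(0.0, 0.05, (n_pulses, n_channels))     # per-hit timing jitter (ns)
recorded = true_offsets + jitter                            # recorded times for each injected pulse

estimated = recorded.mean(axis=0) - recorded.mean()         # offset of each channel w.r.t. board average
corrected = recorded - estimated

print(f"spread across channels before: {recorded.std(axis=1).mean():.3f} ns")
print(f"spread across channels after:  {corrected.std(axis=1).mean():.3f} ns")
```

The track-based fine calibration described next plays the analogous role across different boards and layers, where no common injected pulse is available.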
Subsequently, a finer time calibration based on muon tracks among different boards and layers can be used to correct and verify the time calibration with the data collected during the runs. Studies from a DESY test beam in October 2019 show that, starting from the initial time alignment, the time calibration of the channels can be improved using data with single tracks producing multiple hits in all ten SciFi layers. When detecting electromagnetic showers with such a detector layout, a large number of tracks are produced in a small region of space and therefore only a projection of the shower profile can be obtained. Additionally, the pixelised silicon photomultiplier (SiPM) suffers from a non-linear amplitude response due to the limited number of pixels. With a light yield of 25 PE and a total of 104 pixels per channel, a pixel occupancy of almost 50% is expected for a shower track density of 2 tracks per channel (250 µm). Beyond this track density, a strongly non-linear response of the detector signal is expected. The saturation is of statistical nature and can be corrected for to make the detector response linear. To obtain the correlation between the signal amplitude measured by the TOFPET2 electronics and the number of MIP tracks in the detector, a Geant4 simulation will be used to model the EM shower development and the SiPM saturation.

Alignment

The mechanical alignment between the SciFi planes and the emulsion boxes is ensured with mechanical precision pins, constraining the relative position of the two objects. Because of the large number of tracks from high-momentum muons traversing the target (a few thousand/cm²/fb⁻¹), an accurate spatial alignment between SciFi planes can be obtained by using the tracks themselves. Each SiPM array of 128 channels is expected to have a constant shift relative to its nominal position, and each fibre mat (three per detection plane) has to be corrected for a constant rotation angle. These corrections have been studied during the commissioning in the SPS H6 beam line, presented in Section 8.2.

Overview

The emulsion target is based on the Emulsion Cloud Chamber (ECC) technique, which makes use of nuclear emulsion films interleaved with passive layers to build a tracking device with sub-micrometric spatial and milliradian angular resolution, as demonstrated by the OPERA experiment [34]. It is capable of detecting τ leptons [36] and charmed hadrons [45] by disentangling their production and decay vertices. It is also suited for FIP detection through the direct observation of their scattering off electrons or nucleons in the passive plates. The ECC technology alternates 1 mm-thick tungsten plates, acting as the neutrino target, with ∼300 µm-thick films, each made of two sensitive emulsion layers poured on a plastic base, acting as tracking devices with micrometric accuracy. The reconstruction of track segments in consecutive films provides the vertex reconstruction with an accuracy at the micron level. The fine segmentation of active films interleaving the tungsten plates is motivated by the longitudinal resolution required to observe the tau lepton track and by the need to keep the combinatorial background in the association of track segments sufficiently low over an integrated luminosity of about 25 fb⁻¹ (corresponding to ∼45 days of data taking in nominal conditions), after which the emulsion films are replaced.
It also makes the emulsion-tungsten ECC a high-sampling electromagnetic calorimeter with more than three active layers per radiation length, X₀, essential for electron identification and discrimination against neutral pion decays [46]. The emulsion target is made of five walls with a sensitive transverse size of 384 × 384 mm². Each wall consists of four cells, called bricks, as illustrated in Figure 12. Each brick is made of 60 emulsion films with a transverse size of 192 × 192 mm², interleaved with 59 tungsten plates of 1 mm thickness. The resulting brick has a total thickness of ∼78 mm, corresponding to ∼17 X₀, and a mass of 41.5 kg. The overall target mass, with five walls of 2 × 2 bricks, amounts to 830 kg. The layout of the target was optimised to fulfil conflicting requirements: overall dimensions that cover the desired pseudo-rapidity region and maximise the azimuthal angular acceptance, a large emulsion surface to maximise the event containment in the brick, and a reduced number of bricks per wall to minimise the dead area between adjacent cells.

Target walls

Nuclear emulsion films are the most compact, thinnest and lightest three-dimensional tracking detectors, with sub-micrometric position and milliradian angular resolution. A nuclear emulsion film has two sensitive layers (70 µm thick) on both sides of a transparent plastic base (170 µm thick). By connecting the two hits generated by a charged particle on the two sides of the base, the slope of the track can be measured with milliradian accuracy. The whole detector will contain 1200 emulsion films, for a total of 44 m². Emulsion films will be produced by Nagoya University in Japan and by the Slavich Company in Russia. Emulsion films are analysed by fully automated optical microscopes [47,48]. The scanning speed, measured in terms of film surface per unit time, was significantly increased in recent years [49][50][51], reaching ∼180 cm²/h. R&D is still ongoing [52] to further increase the scanning speed. Tungsten was selected as the target material in order to maximise the interaction rate per unit volume. Its small radiation length (∼3.5 mm) allows for good performance in the electromagnetic shower reconstruction in the ECC. Its low intrinsic radioactivity makes tungsten a suitable material for an emulsion detector. An ECC wall is contained in an aluminum box that hosts the four bricks, which are assembled one after the other by piling up 60 emulsion films and 59 tungsten sheets. The box is then closed using a semi-automatic tool that maintains the pressure needed to avoid relative displacements between emulsion films. Once closed, the box is light tight. Each wall is transported from the dark room where it is assembled to the TI18 tunnel by means of a custom trolley and, once there, inserted into the mechanical structure of SND@LHC. The different phases of the wall assembly, transportation and installation are illustrated in Figure 13. The emulsion target will be replaced every ∼25 fb⁻¹. The exchange of target walls will be performed during LHC Technical Stops. Since it is not assured that the integration of the target luminosity will coincide with Technical Stops, the Collaboration has developed a procedure for a fast brick replacement (a shift of about 5 hours) that could fit within shorter accesses to the LHC tunnel.

Target mechanics

The mechanical structure of the SND@LHC target was designed to provide a single support structure for both the five emulsion/tungsten walls and the five SciFi planes.
It is made of a vertical rectified aluminum plate, which guarantees a fine mechanical alignment of the target walls, and of five aluminum horizontal profiles, each sustaining a target wall, as shown in Figure 14. Each SciFi plane is fixed to the upstream wall box via three pins. Wall boxes are suspended from two horizontal profiles by two rope tensioners, two springs and a pendulum link. Each wall box is placed into the loading position with the transportation trolley; it is then suspended from the structure and translated to the final position via recirculating ball guides. Finally, the wall box is secured to the vertical plate with two screws. The whole structure is supported isostatically on three points. Alignment feet are used to adjust the height of the structure, to compensate for the inclined floor. Horizontal plates located below each foot are used for fine adjustment of the target position on the tunnel floor. The alignment of the target system is performed via three alignment spheres located at the rear of the vertical plate. The required mechanical tolerances are of the order of a millimetre, since the fine alignment is performed with penetrating tracks coming from the ATLAS interaction point. Neutron shield and cold box The interaction of the proton beams with the residual gas inside the LHC beam pipe produces low-energy neutrons, with a spectrum ranging from a few meV to a few hundred MeV, about half of them being thermal neutrons. The neutron flux expected in the TI18 tunnel is predominantly produced by the counterclockwise beam (beam 2), which passes by TI18 while moving towards IP1. Thermal neutrons would cause an increase of fog, i.e. the number of developed grains per unit volume, in the emulsion films. A shielding box was therefore built around the target region. It is made of 4 cm-thick 30% borated polyethylene and 5 cm-thick acrylic layers, as shown in Figure 15. It provides a background reduction factor of 10⁻⁷ [2]. The box also acts as an insulation chamber. For the long-term stability of the emulsion films, a cooling system was installed to keep the temperature of the target at (15 ± 1) °C and the relative humidity in the range 40 to 50%. Overview Downstream of the target region lies the hadronic calorimeter and muon system, shown in Figure 16. Its primary purpose is to identify muons passing through and, together with the SciFi, it serves as a sampling hadronic calorimeter, enabling measurement of the energy of hadronic jets. It comprises eight layers of scintillating planes interleaved with 20 cm-thick iron slabs, which act as passive material with a total thickness of 9.5 λint. This adds up to an average total of 11 λint for a shower originating in the target region. Muons are identified as the most penetrating particles, traversing all eight planes. The system is further divided into two sections: the first five upstream layers (US), made of 6 cm-thick horizontal scintillating bars, and the last three downstream layers (DS), made of fine-grained horizontal and vertical scintillating bars, illustrated in Figure 17. Upstream system The first five US layers are similar to the veto planes, albeit with different dimensions. Each layer consists of ten stacked bars of EJ-200, each bar having dimensions 1 × 6 × 82.5 cm³.
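The interaction-length budget quoted above can be reproduced from the geometry. The sketch below uses standard nuclear interaction lengths for iron and tungsten (our input, not quoted in the text) together with the slab and plate thicknesses given in the description; the average of ∼11 λint assumes a shower starting, on average, halfway through the target.

```python
# Rough interaction-length budget for the hadronic calorimeter / muon system.
LAMBDA_INT_FE_CM = 16.8      # nuclear interaction length of iron (approximate standard value)
LAMBDA_INT_W_CM = 9.9        # nuclear interaction length of tungsten (approximate standard value)

iron_depth_cm = 8 * 20.0                    # eight 20 cm-thick iron slabs
target_tungsten_cm = 5 * 59 * 0.1           # five walls of 59 x 1 mm tungsten plates

muon_system = iron_depth_cm / LAMBDA_INT_FE_CM
full_target = target_tungsten_cm / LAMBDA_INT_W_CM

print(f"muon system: {muon_system:.1f} lambda_int")        # ~9.5
print(f"full target: {full_target:.1f} lambda_int")        # ~3.0
print(f"average for a shower starting mid-target: "
      f"{muon_system + full_target / 2:.1f} lambda_int")   # ~11
```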
The length was chosen to be longer than the iron blocks to allow the FE to be placed outside the gap between them, reducing the space needed for the gap and the overall length of the muon system along the collision axis, a critical parameter in the apparatus design as described in Section 1. The bars are wrapped in aluminized Mylar foil in the same fashion as in the veto system. Every bar end is viewed by eight SiPMs: six Hamamatsu S14160-6050HS (6 × 6 mm², 50 µm pitch) and two Hamamatsu S14160-3010PS [53] (3 × 3 mm², 10 µm pitch). The SiPMs are arranged on a custom PCB as shown in Figure 18, which is read out by a front-end TOFPET2 board (see Section 6). The placement of SiPMs along a bar can be seen on the left of Figure 19. The two smaller-size SiPMs are used to increase the dynamic range of each bar, which has to cover the low light yield generated by minimum ionizing particles and the large light yield in case of hadronic showers created in the target region or iron blocks. The latter can lead to large charged-particle fluxes through the bars, and hence to large signals, which can saturate the larger SiPMs but not the smaller ones. Each SiPM is read out as a single channel, giving 80 channels per PCB and a total of 800 channels for all US layers. The PCBs are aligned to the bars within 1 mm, as shown on the right in Figure 19. The space between the SiPMs and bars on one side is filled with the same silicon epoxy gel (∼1 mm thick) as in the veto, while the PCB on the opposite end is pressed against the bars to minimize the air gap. This leads to a small asymmetry between left- and right-side signals for a given plane, which however does not affect the detection efficiency. Downstream system Muon identification is completed with three high-granularity DS stations, placed further downstream, which provide the muon position with a resolution better than 1 cm (Figures 16, 17). Bars are individually wrapped in aluminized Mylar foil. Because of the bar shape, most of the light collected by the edge SiPM is indirect. Therefore a tool was developed in order to ensure that the aluminized Mylar foil is wrapped very tightly around the scintillating bar, minimising the light loss due to multiple reflections. For vertical bars, the wrapping of the foil at the bar end without a SiPM was terminated with an additional flat layer that optimises reflection. Scintillating bars in the same plane can differ in dimensions and be out of square from one edge to the other within fabrication tolerances. Since the SiPMs are locked in position on the PCB, assembly tools have been developed to ensure that the one-to-one alignment of SiPMs and bar edges is preserved along the entire stack: the 6 mm-wide SiPMs are centred on the 10 mm-wide bar edge with an uncertainty of 1 mm. The quality of the contact between SiPMs and bar edges can differ because of differences in bar lengths. Bars were therefore sorted into groups of similar length, maximising the uniformity within a plane, and the distance of the PCB was adjusted so that the measured SiPM-to-bar-edge gap is less than 100 µm for all bars. Low voltage and SiPM bias voltage The low voltage for powering the DAQ boards and the bias voltage for the SiPMs are provided by CAEN power supplies, described in Section 6.1. For the US system, two separate HV bias lines are used for the two types of SiPMs, connected with two LEMO connectors on each PCB, as seen in Figure 18, while for the DS a single LEMO connector is used to power all SiPMs.
Mechanical support Bars and PCBs are housed in aluminum frames that provide light tightness. The thickness of the frames is 2 mm and the rectangular flanges are 4 cm wide. An aluminum cover shields the PCBs from outside light and protects them from the heat generated in the FE board, located on the opposite side of the cover. A Kapton gasket between the PCB and flange prevents electrical shorts between the electronics and the frame. The inside of the aluminum cover is also lined with Kapton to electrically isolate the PCB. Alignment Frame support mechanics are mounted on the iron blocks; adjustment screws allow for correcting the placement in position, rotation and tilt. Individual frames are installed in the gaps between the iron blocks, and then fixed to the support mechanics. Three spherical alignment markers, shown in Figure 21, are mounted on each frame for global survey measurements. Alignment was performed by the Geodetic Metrology Group within the Beams Department (BE-GM) at CERN, with each marker aligned with respect to the nominal positions to within 1-2 mm. Online system The SND@LHC online system includes all components involved in operating the experiment, i.e. the timing and data acquisition hardware and software that realise the data flow from the detector to the storage, the detector control system (DCS) that controls and monitors the detector services, such as power supplies, cooling system, etc., and the data quality monitoring (DQM) and real-time analysis (RTA) system, necessary to ensure a good quality of the collected data. Globally, the top-level software, the experiment control system (ECS), encompasses all the sub-components above, together with the system of logs and databases used to store information about the state, configuration and conditions of the data taking. The ECS is constructed to allow full automation of the data taking. The different components, shown in the scheme in Figure 22, and the readout system are described in more detail in the following sections. Figure 22: Simplified scheme of the SND@LHC online system. Readout system As discussed in Sections 2, 3 and 5, the SND@LHC experiment features two types of electronic detector systems: scintillator bars read out by SiPMs in the Veto, the hadronic calorimeter and muon system, and scintillating fibres read out by SiPMs in the Target Tracker. These sub-systems are read out with the same data acquisition (DAQ) electronics, consisting of front-end (FE) boards, described in Section 6.1.2, and DAQ readout boards, described in Section 6.1.3. They read out the signals from the SiPMs, digitize them and send the recorded data (including timestamp and integrated signal charge) to a DAQ server. The detector uses in total 37 DAQ boards, each of which is connected to four FE boards. The system runs synchronously with the LHC bunch crossing clock, and operates in a trigger-less fashion, i.e. all hits recorded by each board are transmitted to the DAQ server. Noise reduction is performed at the front-end level by setting an appropriate threshold for each channel, and on the DAQ server after event building. A Trigger Timing Control (TTC) crate [54], described in Section 6.1.1, is responsible for receiving the LHC clock and the orbit signals from the LHC Beam Synchronous Timing (BST) system and distributing them to the DAQ boards.
The detector is powered using CAEN A2519 modules to power the DAQ readout boards, requiring 12 V and 2 A each, and A1539B modules for the bias voltage of the SiPMs, requiring up to 60 V and up to 300 µA per channel. These modules are hosted in two SY5527 mainframes. The control of the power supplies is performed by the detector control system (DCS, in the control computer in Figure 22) discussed in Section 6.2.1, which also monitors the voltages and currents drawn on both the LV and HV channels and monitors the presence of alarms. The online system (DAQ server, DQM server and control computer shown in Figure 22) includes two servers located on the surface. One of them receives data from the DAQ readout boards, combines the data into events, and performs the online processing of the detector data, as described in Section 6.1.4, before saving the data to disk. The other one runs the ECS and the other elements of the online system. Timing system The LHC clock (40.079 MHz bunch crossing frequency) and orbit clock (11.245 kHz revolution frequency of the LHC) signals are obtained from the LHC BST system via optical fibres based on the TTC system. A scheme of the system used in SND@LHC is shown in Figure 23. The BST signal is received by a dedicated board, BST-TTC, that extracts the clock and orbit signals, cleans the clock using a Phase-Locked Loop, and distributes them to the detector using the TTC system. The board is the same as that used for the detector readout, described in Section 6.1.3, with the addition of a mezzanine card to generate the correct signal levels for the TTC modules. The clock and synchronous commands are distributed to the DAQ boards using the TTCvi and TTCex modules [55], housed in a VME crate. The TTCvi receives the clock and orbit signals, and generates the A-channel (trigger) and B-channel (synchronous and asynchronous commands) signals, which are encoded and transmitted by the TTCex. The TTCvi module can be programmed and controlled using the VME bus. A USB-to-VME converter allows it to be programmed from the computer server. Variations of several nanoseconds in the phase of the clock are to be expected due to temperature changes. For this reason, the absolute timing offset will be calibrated with the timestamps of the muons generated by collisions at the ATLAS interaction point and detected in SND@LHC. Front-end electronics The front-end (FE) boards, shown in Figure 24, are based on the TOFPET2 ASIC by PETsys [42]. The TOFPET2 is a 64-channel readout and digitization ASIC designed for time-of-flight positron emission tomography systems [56]. It incorporates signal amplification circuitry, discriminators, charge integrators, analog-to-digital converters (ADCs; in this case QDCs, i.e. charge-to-digital converters) and time-to-digital converters (TDCs). The ASIC has been found to be well suited to measuring signals produced by SiPMs, and to recording their timestamp and charge. The FE boards contain two TOFPET2 ASICs each, for a total of 128 channels, and have temperature monitoring capabilities for both the SiPMs and the boards themselves. Each channel of the TOFPET2 features a preamplifier and two amplifiers, one optimized for the timing measurement and the second for the charge measurement. A combination of up to three discriminators with configurable thresholds can be used. The first one is mainly used for timing measurements, and normally has the lowest threshold, while the other two are used to reject low-amplitude pulses and to start charge integration.
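The two clock frequencies quoted above fix the basic timing constants of the readout; the short sketch below simply unpacks them (the resulting ∼3564 bunch slots per orbit is the usual LHC value and follows directly from the two numbers in the text).

```python
# Relation between the two clocks distributed by the TTC system.
BC_FREQ_HZ = 40.079e6      # LHC bunch-crossing clock (from the text)
ORBIT_FREQ_HZ = 11.245e3   # LHC revolution frequency (from the text)

bc_period_ns = 1e9 / BC_FREQ_HZ
orbit_period_us = 1e6 / ORBIT_FREQ_HZ
slots_per_orbit = BC_FREQ_HZ / ORBIT_FREQ_HZ

print(f"bunch-crossing period: {bc_period_ns:.2f} ns")    # ~24.95 ns
print(f"orbit period: {orbit_period_us:.1f} us")          # ~88.9 us
print(f"bunch slots per orbit: {slots_per_orbit:.0f}")    # ~3564
```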
The TDCs feature a time binning of ∼40 ps and the QDCs have a linear response up to 1500 pC input charge. In addition, the gain of the QDC branch can be selected to have a value between 1.00 and 3.65 to achieve the best resolution and dynamic range, depending on the signal generated by the SiPMs [56]. The TOFPET2 ASIC requires calibration in a three-step procedure. The first two steps consist of adjusting the input level of each amplifier to just above the electronic noise produced by the detector, and then calibrating the TDCs and QDCs with the help of pulses of known duration and phase relative to the clock, injected from the FPGAs. This is performed with the bias voltage of the SiPMs below the breakdown voltage. Finally, the dark count rates of the SiPMs are measured as a function of the thresholds of the three discriminators in each TOFPET2 to determine the best settings to achieve an optimal efficiency and data rate. The last step is performed at the nominal operating voltage of the SiPMs. The calibration procedure has been implemented in an automated way, and the parameters obtained from it are stored in configuration files. DAQ electronics The DAQ readout boards, shown in Figure 25, are based on the Mercury SA1 module from Enclustra [57], featuring an Altera Cyclone V FPGA. This board is equipped with four high-speed connectors for the FE boards, a TTCrx ASIC [55] with an optical fibre receiver to receive the clock and synchronous signals from the TTC system, a 1 Gb Ethernet port used for data and command transmission, and a coaxial LEMO connector to deliver the bias voltage to the SiPMs. Each DAQ board collects the data digitized by four FE boards, i.e. 512 channels, and transmits it to the DAQ computer server located on the surface. Readout software Each DAQ board transmits all the recorded hits to the DAQ computer server, where event building is performed. The hits are grouped into events based on their timestamp, and saved to disk as a ROOT file. The DAQ boards also transmit periodic triggers received from the TTC system. These heartbeat triggers are used by the event building software in the DAQ server to verify that all the boards are running synchronously, and operating properly even when there is no data. The readout process, from starting the servers to starting the data taking, sending periodic triggers, monitoring the status of each element, etc., is fully controlled by the ECS, described in Section 6.2.3. The event building process is structured in two main steps, shown in Figure 26. In the first step, hits collected by all boards and belonging to the same event, i.e. with timestamps within 25 ns, are grouped into "events". The event timestamp corresponds to the timestamp of the earliest hit within the event. The events are then filtered and processed online, before being written to disk. The details of the processing depend on the chosen settings, but it always includes an online noise filter, described below. The noise filtering is performed in two steps. In the first one, events are required to have a minimum number of DAQ boards that have detected a certain number of hits. This is fast and eliminates all the events generated by single noise hits. In the second step, the hits are grouped by the plane that generated them. This allows more advanced requirements on the topology of the events to be imposed. The system includes a number of additional configurable data processors, such as the FE calibration, which may be applied during data acquisition.
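A minimal sketch of the event building and of the first filtering step described above is given below. Only the 25 ns grouping window and the idea of requiring a minimum number of boards with hits are taken from the text; the Hit structure, the field names and the default thresholds are illustrative assumptions, not the actual sndsw or DAQ-server code.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Hit:
    timestamp_ns: float
    board_id: int
    channel: int
    charge: float

def build_events(hits: List[Hit], window_ns: float = 25.0) -> List[List[Hit]]:
    """Group time-ordered hits into events; the event time is that of the earliest hit."""
    events, current = [], []
    for hit in sorted(hits, key=lambda h: h.timestamp_ns):
        if current and hit.timestamp_ns - current[0].timestamp_ns > window_ns:
            events.append(current)
            current = []
        current.append(hit)
    if current:
        events.append(current)
    return events

def passes_noise_filter(event: List[Hit], min_boards: int = 2, min_hits_per_board: int = 1) -> bool:
    """First filtering step: require enough boards, each with enough hits (thresholds illustrative)."""
    counts = {}
    for hit in event:
        counts[hit.board_id] = counts.get(hit.board_id, 0) + 1
    return sum(1 for n in counts.values() if n >= min_hits_per_board) >= min_boards

hits = [Hit(0.0, 1, 3, 12.0), Hit(4.1, 1, 4, 9.5), Hit(6.0, 2, 7, 20.3), Hit(300.0, 3, 1, 5.0)]
events = build_events(hits)
print([len(e) for e in events], [passes_noise_filter(e) for e in events])  # [3, 1] [True, False]
```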
The DAQ server writes the recorded data to a local disk. At the end of each run the data is transferred to permanent storage and converted to the format used by the reconstruction software. The DAQ server can cope with a maximum rate of 1 Mhits/s. The dark rate in the whole detector produces ∼400 khits/s, while the muon flux at the highest instantaneous luminosity (0.8 Hz/cm²) produces ∼450 khits/s, leaving ∼150 khits/s of spare bandwidth. Detector control and safety monitoring The detector control system (DCS) controls and monitors the status of all the detector services, i.e. the detector and electronics power supplies, the cooling system and the environmental sensors within the neutron-shielded box surrounding the target system, as well as the safety system. The voltages, currents and channel status of the power supplies are continuously monitored and transmitted to the ECS. The ECS then acts accordingly, logging the events or raising an alarm in case of problems. The neutron-shielded box surrounding the target system is equipped with sensors for temperature, humidity and smoke. The safety and environment monitoring system (SMS) monitors these environmental parameters inside the box, detects the presence of smoke and monitors the status of the cooling system. The monitoring and safety decision logic is implemented on a NUCLEO-H743ZI development board [58], featuring an ARM Cortex-M7 microcontroller. A schematic illustration of the SMS is depicted in Figure 27. Temperature and humidity are measured in five different locations, using digital sensors (Sensirion SHT31 [59]) which guarantee an accuracy of ±0.2 °C and ±2%, respectively, and reliable I2C communication with the host microcontroller. The positions of the sensors have been chosen to minimize interference with the emulsion replacement procedure. Three of them are positioned on the back metal plane, monitoring the temperature and humidity as close as possible to the emulsion boxes, while the remaining two sensors are placed on the opposite side, facing the cold box. This configuration ensures a comprehensive temperature and humidity mapping of the target system. The SMS is also equipped with three smoke sensors with relay output. They are used for fire and smoke detection and are connected to the microcontroller digital inputs with interrupt capability. In addition, a signal from the cooling system is used to monitor its status. The microcontroller firmware has been developed using Mbed OS [60], an ARM Real-Time Operating System. It continuously monitors its inputs, evaluates possible alarm conditions and takes the corresponding actions. The MQTT protocol [61] is used to transmit data. The DCS monitors and logs the temperature inside the neutron shield. If it exceeds 18°C, an alarm is raised and the power to the DAQ boards is cut, to protect the emulsion films. Alarms are also raised if a failure is detected in the cooling plant, i.e. if it turns off and stays off for more than 10 minutes. The SMS also acts as an interlock for the CAEN power supplies. If an alarm condition is detected, it can turn them off without relying on the DCS being functional. The control algorithm is designed to classify alarms according to three levels, referred to as low, medium and high. A low-level alarm occurs when the temperature or humidity reading of a single sensor exceeds the set threshold, or if only one of the three smoke sensors is triggered. An alarm is posted but no further action is taken.
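The rate budget quoted above is simple to restate explicitly; the sketch below only reuses the three figures given in the text.

```python
# Hit-rate budget of the DAQ server, using the figures quoted in the text.
MAX_RATE = 1_000_000        # hits/s the DAQ server can sustain
DARK_RATE = 400_000         # hits/s from SiPM dark counts, whole detector
MUON_RATE = 450_000         # hits/s from muons at 0.8 Hz/cm^2 instantaneous flux

spare = MAX_RATE - DARK_RATE - MUON_RATE
print(f"spare bandwidth: {spare / 1000:.0f} khits/s "
      f"({spare / MAX_RATE:.0%} of capacity)")   # ~150 khits/s, 15%
```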
A medium-level alarm occurs when the temperature or humidity readings of two or more sensors exceed the set thresholds, or if the cooling system has been turned off for more than 10 minutes. An alarm is posted and the power supplies are immediately turned off, to prevent the temperature inside the cold box from rising further and potentially affecting the emulsion films. A high-level alarm occurs when two or more smoke detectors are triggered. An alarm is posted and the power supplies are immediately turned off. In addition, it could be set up to trigger a fire-brigade response. Data processing and quality monitoring The data quality monitoring (DQM) is fundamental to ensure that useful data is recorded and to verify that all the sub-systems of the experiment operate correctly. The DQM process runs on the second computer server located at the surface and reads in real time the data file that is being written by the DAQ server. The process performs the conversion to the offline data format and makes this data available to the DQM agents, which process it and display the results in the ECS. Several levels of complexity of the processing have been implemented, from simple hit maps to ensure that all detector channels work as expected, to full real-time reconstruction that allows high-level quantities, such as efficiencies and resolutions, to be checked. Experiment control system The ECS is the top-level control of the experiment online system, providing a unified framework to control the hardware and software components, and to sequence all data taking operations. A hierarchical architecture has been implemented in which the ECS is a layer above the other online systems, preserving their autonomy to operate independently. With this architecture, the various online components do not strictly require the ECS to operate, e.g. detector calibration and data taking are stand-alone processes. The ECS also performs the logging of the relevant detector information, either in ELOG or in databases, depending on the type of information. The ECS consists of two main software components: the ECS Process Manager (EPM) and the ECS Graphical User Interface (GUI). The software is written in C++ and the inter-communication with the DAQ and the DCS system is done with Python scripts. The EPM is a process which runs on the main server and acts as the communication link between the different online system components. It takes care of starting them and continuously monitoring their status. The EPM process is kept alive by a cron-based watchdog: the status of the process is monitored at fixed time intervals, and the process is restarted if it is not running. The EPM and its slave processes are associated with state machines that are driven by the status of the process activities. The information of these state machines is stored in a shared memory supervised by the EPM. The ECS GUI has been designed to ensure a simple and compact view of the run control, of the sub-detector status and of the peripheral systems. The main window of the ECS GUI is shown in Figure 28. The ECS is designed to operate the online system automatically, controlled by a global finite-state machine that receives the status of the LHC and of the detector to perform predefined actions in order to run the data taking and recover from errors. The accelerator states are received from the LHC Data Interchange Protocol system, the power supply and environment conditions from the DCS, and the data acquisition status from the DAQ boards and the DAQ server.
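The three-level alarm classification described above maps naturally onto a small decision function. The sketch below follows the sensor counts, the 10-minute cooling timeout and the actions described in the text; the data structure and function names are our own and the real firmware on the microcontroller is of course implemented differently.

```python
from dataclasses import dataclass

@dataclass
class SensorStatus:
    temp_over_threshold: int      # sensors above the temperature threshold
    humidity_over_threshold: int  # sensors above the humidity threshold
    smoke_triggered: int          # smoke detectors triggered (out of 3)
    cooling_off_minutes: float    # how long the cooling plant has been off

def classify_alarm(s: SensorStatus) -> str:
    env_over = max(s.temp_over_threshold, s.humidity_over_threshold)
    if s.smoke_triggered >= 2:
        return "high"     # power supplies cut, possible fire-brigade response
    if env_over >= 2 or s.cooling_off_minutes > 10:
        return "medium"   # power supplies cut to protect the emulsions
    if env_over == 1 or s.smoke_triggered == 1:
        return "low"      # alarm posted, no further action
    return "none"

print(classify_alarm(SensorStatus(1, 0, 0, 0.0)))    # low
print(classify_alarm(SensorStatus(0, 2, 0, 0.0)))    # medium
print(classify_alarm(SensorStatus(0, 0, 2, 0.0)))    # high
```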
As an example, the ECS automatically starts the data acquisition when the LHC declares stable beams, stops it when the beams are dumped, and logs any event that can be useful to analyse the data. Furthermore, it can reboot a board that has become unresponsive, stop the DAQ and cut the power to the boards in the neutron-shielded box if the temperature rises above the thresholds, try to restore a tripped SiPM bias channel, etc. Offline software and simulation The offline software framework, sndsw, is based on the FairRoot framework [62], and makes use of the experience gained with the FairShip software suite, developed within the SHiP collaboration [63]. The reconstruction and analysis tools developed by the SHiP collaboration have been successfully applied to the SND@LHC use cases and further improved. The offline software is developed, maintained, and distributed on GitHub. sndsw and its dependencies are built from source and are configured using the AliBuild tool, developed within the ALICE Collaboration for their upgrade software. The recipes for the dependencies are shared with ALICE and SHiP, where possible, to reduce the maintenance of the framework. Specific patches and recipes for software uniquely used by SND@LHC are added where required. Container images with the dependencies, as well as an installation on CVMFS, are provided for various use cases. Raw and reconstructed data from the test beam and TI18 commissioning are available on EOS for analysis. Detector geometry The detector geometry is implemented using the TGeo package of ROOT and is used in the simulation by Geant4 as well as in the reconstruction. A model of the detector, the neutron shield and the surrounding tunnel can be seen in Figure 29. Electronic detectors and emulsion films are implemented as sensitive volumes. For the electronic detectors, the full granularity is implemented, from scintillator bars to individual scintillating fibres. The Geant4 simulation stops with the deposition of energy in the sensitive detectors. The digitisation step takes this energy and simulates an electronic signal, taking into account the transformation to photons, the light propagation and absorption along the scintillating fibre or bar, the photodetection efficiency of the SiPMs and the response of the front-end. Simulation Several simulation engines are available: muons from IP1 are simulated by FLUKA [20,21] and transported through the detector by Geant4 [64]; muon deep inelastic scattering is simulated with Pythia6 [65]; DPMJET3 (Dual Parton Model, including charm) [22] or Pythia8 [66] are used for neutrino production at IP1 and GENIE [25] for the neutrino interactions in the detector target. Figure 29: The SND@LHC detector layout and the TI18 tunnel geometry as implemented in the Geant4 simulation. In addition, Geant4 has been used to investigate the neutron shielding performance of various materials. Data reconstruction The event reconstruction is performed in two phases: the first one is performed during the data taking, using the response of the electronic detectors. The second phase incorporates the emulsion data, which will be available about six months after the exposure. First data became available from the test beam campaign in H8 for the hadronic calorimeter and muon system, using a pion beam at different energies for the energy calibration studies. Data from a parasitic run in H6, with the SciFi and Veto detectors additionally installed, are also available.
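The digitisation chain outlined above (energy deposit, conversion to photons, propagation and absorption, SiPM photodetection, front-end response) can be illustrated with a toy model. All numerical constants below are placeholders chosen for illustration and are not the values used in sndsw; only the sequence of steps mirrors the description.

```python
import math, random

PHOTONS_PER_MEV = 8000        # illustrative scintillator light yield
ATTENUATION_LENGTH_M = 3.5    # illustrative attenuation length
CAPTURE_AND_COUPLING = 0.03   # illustrative trapping/coupling fraction
SIPM_PDE = 0.4                # illustrative photodetection efficiency
THRESHOLD_PE = 3.5            # illustrative front-end threshold

def digitise(edep_mev: float, distance_to_sipm_m: float, rng=random.Random(1)):
    """Return (n_photoelectrons, above_threshold) for one energy deposit."""
    mean_pe = (edep_mev * PHOTONS_PER_MEV
               * CAPTURE_AND_COUPLING
               * math.exp(-distance_to_sipm_m / ATTENUATION_LENGTH_M)
               * SIPM_PDE)
    n_pe = rng.gauss(mean_pe, math.sqrt(max(mean_pe, 1e-9)))  # Poisson-like smearing
    return n_pe, n_pe > THRESHOLD_PE

print(digitise(0.5, 0.2))   # MIP-like deposit close to the SiPM
print(digitise(0.5, 3.0))   # same deposit far from the SiPM
```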
This data was used to make a first internal space alignment of the SciFi detector, with a subsequent alignment of the other detectors with respect to the SciFi. This will be repeated with the first data in TI18. The data was also used to determine the light propagation speed in the scintillator bars as well as the absorption length, as reported in Section 8. During Run 3 operation, the upstream veto planes will tag incoming muons that will be used for fine alignment between detector planes. The occurrence of a neutrino interaction or a FIP scattering will be first detected by the target tracker and the muon system. Electromagnetic showers are expected to be absorbed within the target region and will therefore be identified by the target tracker, while muons in the final state will be reconstructed by the muon system. In addition, the detector as a whole acts as a sampling calorimeter. The combination of data taken from both systems will be used to measure the hadronic and the electromagnetic energy of the event. A schematic representation of charged-current neutrino interactions in the SND@LHC detector is shown in Figure 30. The reconstruction of the emulsion data begins during the scanning procedure. Optical microscopes [49][50][51] analyse the whole thickness of the emulsion, acquiring tomographic images at equally spaced depths. After digitising the acquired images, an image processor recognises the grains as clusters, i.e. groups of pixels of a given size and shape. The track in an emulsion layer (usually referred to as a micro-track) is then obtained by connecting clusters belonging to different depth levels. Since an emulsion film is formed by two emulsion layers, the connection of the two micro-tracks through the plastic base provides a reconstruction of the particle's trajectory in the emulsion film, called a base-track. The reconstruction of particle tracks in the full volume requires connecting base-tracks in consecutive films. In order to define a global reference system, a set of affine transformations has to be computed to account for the different reference frames used for data taken in different films. Muons coming from the IP will be used for fine film-to-film alignment. Once all emulsion films are aligned, volume-tracks (i.e. charged tracks which crossed several emulsion films) can be reconstructed. The offline reconstruction tools currently used for track finding and vertex identification are based on the Kalman Filter algorithm and are developed in FEDRA (Framework for Emulsion Data Reconstruction and Analysis) [67], an object-oriented tool based on C++ and developed in the ROOT [68] framework. The topologies of some signal events that can be reconstructed in the SND@LHC brick are illustrated in Figure 31. About twenty neutrino interactions are expected in each brick, given the replacement every ∼25 fb⁻¹. The matching with the adjacent target tracker plane will be performed by aligning the centre of gravity of events reconstructed in the two detectors, thus assigning timing information to interactions reconstructed in the brick. The emulsion data will also be used to complement the target tracker system for the energy measurement of electromagnetic showers. Emulsion scanning system The emulsion readout is performed in dedicated laboratories equipped with automated optical microscopes, such as the one shown in the left panel of Figure 32.
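The construction of a base-track from the two micro-tracks of one film, as described above, amounts to connecting the hits on the two sensitive layers through the plastic base. The sketch below makes this explicit; the 170 µm base thickness is taken from the text, while the data layout and the example numbers are invented for illustration.

```python
BASE_THICKNESS_UM = 170.0   # plastic base thickness (from the text)

def base_track(top_hit_um, bottom_hit_um):
    """Return (slope_x, slope_y) of the track crossing the plastic base,
    given (x, y) hit positions in microns on the two emulsion layers."""
    dx = bottom_hit_um[0] - top_hit_um[0]
    dy = bottom_hit_um[1] - top_hit_um[1]
    return dx / BASE_THICKNESS_UM, dy / BASE_THICKNESS_UM

# A transverse displacement of ~1.7 um over the base corresponds to a ~10 mrad slope,
# consistent with the milliradian-level angular resolution quoted earlier.
print(base_track((100.0, 250.0), (101.7, 250.0)))   # (0.01, 0.0)
```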
The system analyses the whole thickness of the emulsion, acquiring tomographic images at equally spaced depths by moving the focal plane along the vertical axis. A recently developed upgrade of the European Scanning System (ESS) [47,48,69] combines the use of a faster camera with smaller and more numerous sensor pixels, a lower-magnification objective lens and new software, LASSO [49,50], allowing the scanning speed to be increased to 180 cm²/h [51], more than a factor of ten faster than before. The lens of the microscope guarantees a sub-micron resolution and, having a working distance longer than 300 µm in the direction perpendicular to the film, allows both sides of the emulsion film to be scanned. In order to make the optical path in the film homogeneous, an oil-immersion objective is used, with oil matching the refractive index of the emulsion. A single field of view is 800 × 600 µm². Larger areas are scanned by repeating the data acquisition on a grid of adjacent fields of view. The images grabbed by the digital camera are sent to a vision processing board in the control workstation to suppress noise. The total emulsion-film surface to be scanned in SND@LHC is expected to be about 44 m² every four months, thus requiring at least ten scanning systems fully devoted to this activity in order for the readout time to be approximately equal to the exposure time. Commissioning Commissioning of the electronic detectors and target mechanics largely took place in the autumn of 2021 in the North Area of the SPS. These tests included a test beam campaign of the hadronic calorimeter and muon system with hadrons using the H8 beamline, as well as commissioning of all the electronic detectors with parasitic muons using the H6 beamline. The former is necessary to tune Monte Carlo simulations for accurate shower reconstruction. The commissioning in H6 was used to evaluate the performance of all electronic sub-detectors when read out together. In addition, part of the floor in the H6 beamline was inclined to reproduce the angle of the floor in TI18, to allow the commissioning of the mechanical support. Pion test beam in H8 During the test beam campaign in H8, all five US stations and two DS stations, including the passive iron blocks, were tested. A wall of iron 80 cm wide, 60 cm tall and 29.5 cm thick was placed 20 cm upstream of the first US iron block, reproducing the target region in terms of hadronic interaction lengths. Besides an energy calibration measurement, the test beam served to investigate the appropriate DAQ settings for data taking. Three different gain settings of the QDC were investigated: 1.00, 2.50 and 3.65. Calibration was performed for each gain setting before the beginning of data taking. Subsequent tests on a spare PCB with a tuneable laser found that a gain setting of 2.50 provided the most linear behaviour of the recorded signal as a function of injected charge. The system, shown in Figure 33, was exposed to 140 and 180 GeV positive pions and 240 and 300 GeV negative pions. Additional runs were taken with cosmic muons when the SPS beam was off, and with halo muons when a beam dump was placed upstream to obstruct the beamline. During both test beam campaigns, the beam spot was about 1 cm in diameter and the particle rate ranged from 100 Hz to 2 kHz.
Analysis of the test beam results is ongoing, with initial studies focusing on signal distributions, light attenuation lengths of the bars, detection efficiency, spatial and time resolution, timing calibration, signal propagation speed in the bar, event displays, saturation effects of the SiPMs, MC/data comparison, background estimation and hadronic shower evolution. Some preliminary results from the test beams are presented here. The average signal size (in ADC counts) in each SiPM follows a Landau distribution convolved with a Gaussian, as seen in Figure 34. A comparison of the total charge (expressed in QDC units) in the first two US stations for different energies is shown in Figure 35: a significant increase is noticeable in the step from 140 GeV to 180 GeV. This is not fully understood and will be investigated with a follow-up test beam. Example event displays at 300 GeV and at a QDC gain setting of 3.65 are shown in Figure 36. The signal as a function of position along the DS bars is shown in Figure 37, and the measured attenuation length of 3.6 ± 0.1 m, obtained from the average of the listed values (with the values of the vertical bars taken as half the listed value, due to the presence of light reflected off the bottom), is consistent with the value given by the manufacturer (3.8 m) [37]. The time difference between signals collected at opposite ends of a bar, as seen for a DS horizontal plane in Figure 38, can be used to calculate the signal propagation speed along the bar, which, at about 15 cm/ns, closely matches the literature value [70]. The response of the different SiPM types at 300 GeV and the highest gain setting for the US can be seen in Figure 39, with the small-SiPM response indicating that hadronic showers are mainly contained in the first three layers. The drop seen in Figure 39 for the large SiPMs of the fourth US station is presumed to be due to dead channels on the PCB, although this is still under investigation. Muon test beam in H6 All electronic detectors were thoroughly tested before being installed in the TI18 tunnel. An important part of these tests was performed in the H6 beamline of the CERN SPS, where all electronic sub-detectors were operated together for the first time. A picture of the setup installed in H6 is shown in Figure 40. Due to space constraints, the order of the detectors was not the same as in the TI18 configuration: the veto was placed right in front of the hadronic calorimeter, while the SciFi was located behind the hadronic calorimeter and muon system. The measurements performed in H6 focused on the study of the performance and alignment of all subsystems. Several runs at different settings were collected. The details for each sub-system are given in the following sections. Figure 36: Event display from the second test beam at 300 GeV at a gain setting of 3.65, with the location of the detector superimposed, as seen from above (top) and from the side (bottom). The left panel shows a single-particle event with a fitted track, while the right panel shows a multiple-particle event. Veto, hadronic calorimeter and muon system results Commissioning of the veto and muon systems was carried out in two phases, before and after the test beam in H8. The commissioning tests in H6 represented the first test of the veto and its electronics. During the first commissioning phase, five US and two DS stations of the hadronic calorimeter and muon system were tested along with the SciFi.
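The two bar-level measurements mentioned above can be sketched on synthetic data: the attenuation length follows from an exponential fit of the signal versus position, and the propagation speed from the slope of the left-right time difference versus position. The numbers below are invented for illustration; only the method mirrors the text.

```python
import numpy as np

# (1) attenuation: S(x) = S0 * exp(-x / lambda_att)
x_m = np.linspace(0.0, 0.8, 9)                       # positions along the bar
true_lambda = 3.6
rng = np.random.default_rng(0)
signal = 100.0 * np.exp(-x_m / true_lambda) * rng.normal(1.0, 0.01, x_m.size)
slope, intercept = np.polyfit(x_m, np.log(signal), 1)
print(f"fitted attenuation length: {-1.0 / slope:.2f} m")

# (2) propagation speed: dt = (2*x - L) / v, so v = 2 / (slope of dt vs x)
bar_length_m = 0.825
v_true_m_per_ns = 0.15                               # ~15 cm/ns
dt_ns = (2 * x_m - bar_length_m) / v_true_m_per_ns
speed_slope, _ = np.polyfit(x_m, dt_ns, 1)
print(f"fitted propagation speed: {2.0 / speed_slope * 100:.1f} cm/ns")
```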
Ground loops were also discovered, leading to a significant noise increase and difficulties with the DAQ calibration. This led to the introduction of grounding cables between the HV ground and the LV ground, which were ready for the second phase of commissioning. During this phase, the dimensions and spacing of the veto system within the target structure mechanics were also checked. In the second phase, the target structure mechanics was removed and the veto was placed directly in front of the muon system, with the SciFi placed further downstream. A third DS station was also added. It was discovered that PCBs on three US stations and two DS stations displayed a number of missing channels; these boards were removed from the experimental hall for repair. The remaining stations were then tested along with the veto and SciFi, with the addition of grounding cables to the veto and muon system as mentioned before. An example of an event display including all electronic sub-systems is shown in Figure 41. Preliminary analysis of the data also indicated missing channels in two PCBs of the veto system, which were sent for repair at the end of the commissioning phase. SciFi results Several runs in different conditions were collected: the T1 and T2 thresholds, described in Section 6.1.2, were varied, while the E threshold was not used. T1 is lower and determines the timestamp of the hit, so its influence on the time resolution was studied, while T2 is higher and determines whether a hit is collected or not. It was studied to find the optimal compromise between dark-count rate and efficiency. In addition, data were collected at three different QDC gain values. The alignment of the SciFi stations, both their relative position and the inner degrees of freedom within one station, was studied by performing track reconstruction using four of the five stations, extrapolating the reconstructed tracks to the fifth one and minimising the residuals, i.e. the difference between the track extrapolation and the corresponding cluster position. Results obtained for one station are shown in Figure 42. They show that the alignment procedure works as expected, since the residual distribution is peaked at zero, and that the spatial resolution of the SciFi system is below 100 µm. The particle detection efficiency of the SciFi detector was studied at two different T2 thresholds (a higher one, producing ∼2 Hz of dark rate per channel, and a lower one, producing ∼20 Hz). The efficiency was studied similarly to the alignment, by reconstructing tracks using four stations, extrapolating them to the fifth one and looking for an associated cluster within a radius of 1 cm. The efficiency is calculated as the ratio between the number of tracks with an associated cluster in the plane and the total number of extrapolated tracks. At the low threshold, the efficiency is 97% over the whole station, while it drops to ∼65% at the higher threshold. This result, in combination with the maximum hit rate allowed by the DAQ server (see Section 6.1.4), led to the decision to run the detector at an even lower T2 threshold, producing ∼25 Hz of dark rate per channel. The efficiency of a representative SciFi layer at this threshold is shown in Figure 43. All layers show a consistent efficiency of ∼98%, which rises to ∼99% if the gaps between SiPMs are excluded from the computation. The time resolution of the SciFi tracker is limited by the number of detected photons and the scintillator decay time.
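The efficiency measurement described above (extrapolate tracks built from four stations to the probe station, count a track as detected if a cluster lies within 1 cm of the extrapolation) can be sketched as follows. The 1 cm matching radius is taken from the text; the track and cluster containers and the toy numbers are invented.

```python
import math

MATCH_RADIUS_CM = 1.0   # matching radius quoted in the text

def station_efficiency(extrapolated_points, clusters):
    """extrapolated_points, clusters: lists of (x, y) positions in cm on the probe station."""
    matched = 0
    for px, py in extrapolated_points:
        if any(math.hypot(px - cx, py - cy) < MATCH_RADIUS_CM for cx, cy in clusters):
            matched += 1
    return matched / len(extrapolated_points) if extrapolated_points else float("nan")

tracks = [(1.0, 2.0), (5.5, 3.2), (10.1, 7.7), (20.0, 15.0)]
hits = [(1.2, 2.1), (5.4, 3.0), (10.8, 7.5)]
print(f"efficiency: {station_efficiency(tracks, hits):.0%}")   # 75% in this toy example
```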
It has been measured by calculating the coincidence time resolution (CTR) between two fibre layers, correcting for the light propagation delay in the fibres. The results are shown in Figure 44: each fibre layer has a time resolution of ∼330 ps, which translates into ∼230 ps per plane, or ∼100 ps for the whole target tracker. Commissioning of the target structure In order to perform the commissioning of the target mechanical structure, the upstream section of the floor in H6 was inclined by 4° to reproduce the floor inclination in the TI18 tunnel. The whole structure was assembled and installed on the three alignment feet, as shown in Figure 45. A test of the transportation of the wall box along the slope with the trolley, of the insertion and extraction of the wall box inside the structure, as well as of the fixation of the SciFi plane, was also carried out. Target wall commissioning The commissioning of the target wall was performed in November 2021 at the Emulsion Facility at CERN. A test with a first batch of 192 × 192 mm² emulsion films was conducted in order to verify the chemical compatibility of the tungsten plates with the emulsions, the light tightness of the wall box, and the uniformity of track reconstruction in different bricks and in different positions within the brick. A full wall made of four bricks, each consisting of 58 tungsten plates, was assembled in dark room conditions. A stack of 30 emulsion films was used for the test, distributed between two bricks (B1 and B4) as reported in the schematic drawings in Figure 46. Steel plates with a surface of 10 × 10 cm² and a thickness of 300 µm were used to replace emulsion films in the remaining part of the wall. After the assembly, the wall box was exposed to cosmic radiation for 48 h without any dedicated shielding. The emulsion films were then developed and scanned with automated optical microscopes in one of the emulsion scanning laboratories of the Collaboration. During the scanning, aligned grains in adjacent emulsion layers are recorded by a camera and stored as digital pixels. After the scanning, an image processor recognised aligned clusters, formed by groups of pixels. These clusters need to be separated from a background of thermally excited grains, which get developed even if not exposed to radiation. This background is usually referred to as fog, and its density was measured by counting the number of grains per unit volume in both emulsion layers. An average grain density of 4.5 ± 0.2 per 1000 µm³ was measured, compatible with that of reference (i.e. not exposed) emulsion films, showing that contact with the tungsten plates and with the internal coating of the wall had not chemically contaminated the emulsion. The grain density was measured at different points of the emulsion surface and for different positions of the emulsion films within the brick, demonstrating the light tightness of the wall box. After the scanning, the reconstruction process was performed, as described in Section 7. The position and angular distributions of reconstructed base-tracks in an emulsion film are shown in Figure 47. A good alignment between consecutive films was obtained, proving that the distortion of the emulsion films is negligible. The tracks are distributed uniformly on the surface, as expected. Since the target was placed horizontally during the exposure, the cosmic rays crossed the emulsion films perpendicularly to their surface, leading to a peak of reconstructed tracks at low angles.
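The quoted time resolutions are consistent with the usual 1/√N scaling for averaging independent measurements; the sketch below just makes that arithmetic explicit, under our assumption that the two layers of a plane and the five planes of the tracker contribute independently.

```python
import math

per_layer_ps = 330.0                        # measured single-layer resolution (from the text)
per_plane_ps = per_layer_ps / math.sqrt(2)  # two independent layer measurements per plane
tracker_ps = per_plane_ps / math.sqrt(5)    # five planes in the target tracker

print(f"per plane: ~{per_plane_ps:.0f} ps, whole tracker: ~{tracker_ps:.0f} ps")
# -> ~233 ps and ~104 ps, matching the ~230 ps and ~100 ps quoted above
```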
Finally, track reconstruction in the whole target is performed, with a Kalman filter seeded with the base-tracks recorded in the individual emulsion films. Comparing the position and angle of each base-track with a linear fit in the two projections leads to an estimate of the tracking resolution. The results are reported in Figure 48, with Gaussian fits giving a resolution of ∼9 µm on the position and ∼8 mrad on the slope (defined as the tangent of the track angle in the corresponding projection). The average surface density amounts to (1.5 ± 0.1) × 10³ tracks/cm², reconstructed in the angular range from 0 to 1 rad. Over a total exposure time of 48 hours, this density corresponds to a flux of (0.52 ± 0.03) muons/cm²/min. The expected cosmic-ray flux in the same angular acceptance amounts to about 0.73 muons/cm²/min. The discrepancy can be attributed to the energy threshold for reconstructed tracks, which are required to pass through the tungsten layers. Track reconstruction was performed on both the upstream section (five emulsion films) and the downstream section (ten emulsion films) of the two exposed ECC bricks. As an example, a display of the reconstructed tracks in the downstream section of one brick is reported in Figure 49. Infrastructure and detector installation in TI18 The TI18 tunnel, shown in Figure 50, was initially constructed for the injection of positrons from the SPS to the LEP accelerator. It is 280 m long and has mostly a steep slope of about 15%, but levels out as it enters the LHC ring via the junction cavern UJ18 in LHC Sector 12, about 480 m from the ATLAS interaction point. The LEP machine elements in TI18 were removed during the preparatory works for the LHC, but the tunnel was left unused. All but the last short section of about 20 m before entering UJ18 has been closed off. At the level of the floor, this short section crosses the collision axis of IP1, making the location particularly suitable for the high pseudo-rapidity region sought by the experiment. At this location, the tunnel has a slope of 3.6% and a 2.9 m-wide floor. Detailed integration studies showed that the detector could be constructed on the floor without any modification to the tunnel structure. Yet, the use of TI18 presented a number of challenges. Firstly, TI18 is on the outside of the LHC ring while the 450 m transport path from the access shaft PM15 at IP1 to UJ18 is on the inside, requiring the preparation of dedicated transport paths above and under the machine for the infrastructure and detector components. The transport path had to be made compatible with the machine cryogenics under helium pressure. Secondly, TI18 was lacking all services in terms of ambient lighting, power, cooling and safety required by the experiment. Figure 51 shows the main modifications in UJ18. The transport path over the machine is ensured by an added rail fixed to the UJ18 ceiling and carrying a manual hoist with a 500 kg capacity. A protective table, capable of withstanding the fall of an object of up to 1.3 t, was produced and installed under the hoist and over the cryostat. A transport volume of 75 × 90 × 170 cm³ was opened up by modifying the location of the existing cable trays. Space below the machine was also freed to guarantee a path for transporting smaller objects with the help of low-profile trolleys. The passage will also be used to pass the trolley for the exchange of the emulsion walls during the run (Figure 13b). Space for the storage of detector components and for assembly was freed by removing obsolete ventilation ducts in UJ18.
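The conversion of the measured track density into the quoted flux is straightforward; the snippet below only reuses the numbers given above.

```python
# Converting the measured track density into a rate, as done in the text.
track_density_per_cm2 = 1.5e3      # reconstructed tracks/cm^2 in the 0-1 rad range
exposure_h = 48.0

flux_per_cm2_per_min = track_density_per_cm2 / (exposure_h * 60.0)
expected = 0.73                     # expected cosmic-ray flux quoted in the text

print(f"measured: {flux_per_cm2_per_min:.2f} muons/cm^2/min "
      f"({flux_per_cm2_per_min / expected:.0%} of the expected {expected})")
```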
This allowed detector components and infrastructure items to be brought in batches, avoiding transport bottlenecks in the LHC access system. The required detector electrical power of 11 kW could be provided from the existing electrical grid in UJ18. A dedicated circuit with an electrical box and associated emergency stop buttons was installed in TI18. Figure 52 shows an overview of the service and detector integration in TI18, together with detailed images of the experimental area. To free additional space for the detector installation in TI18, a 20 m-long obsolete ventilation duct was removed. The neutron-shielded box that surrounds the target region has dimensions of 2.19 × 1.76 × 1.86 m³ and is shown in Figure 53. In order to provide the required shielding, the walls of the box are made of acrylic and 30% borated polyethylene panels, having a thickness of 50 mm and 40 mm, respectively. The whole structure is supported by a skeleton of aluminium profiles. Doors on the upstream side and on the corridor side of the detector provide easy access for maintenance and for emulsion wall replacements. The neutron-shielded box is equipped with a closed-circuit cooling system that guarantees a stable ambient relative humidity and temperature of 45% and 15 °C, respectively, as required in order to prevent fading of the emulsion films. Two racks were installed in TI18 to house the detector power supplies and the readout system, and dedicated optical fibre tubes were installed over 600 m in the LHC tunnel in order to connect with existing fibres up to the surface rack that hosts the timing system and the DAQ and control computer servers. The eight iron walls of the muon system, each with dimensions 80 × 60 × 20 cm³ and a weight of 750 kg, rest on horizontal steel base plates which were positioned with an accuracy of 0.5 mm and grouted against the floor to compensate for the tunnel slope. Together with a smaller iron block at the end, the walls themselves provide the support for the mechanical structure holding the eight muon detector planes. The emulsion walls and the SciFi planes are carried by the target system structure, which is grouted to the existing floor with custom-made wedges. All detector components were aligned with an accuracy of 0.5 mm. The goal of being ready for data taking at the start of LHC Run 3 in 2022 limited the entire schedule for the infrastructure, detector installation and commissioning to nine months. The final phase of the LHC Long Shutdown 2 and the preparation of the machine for startup in 2022 set additional strict constraints on the planning. A large part of the works had to be done with the LHC dipoles cooled to 4.5 K, requiring further attention to the procedures. The main infrastructure modifications in UJ18 and TI18 were performed between the end of June and September 2021. September and October were dedicated to detector assembly and beam tests on the surface in the North Area Hall EHN1, while the LHC was closed for the pilot run. The detector installation, including the iron blocks, cooling plants and the related electronics, was successfully carried out in November and December, allowing the start of global commissioning. On April 7th, one-fifth of the target region was partially instrumented with emulsion films, together with a few independent small emulsion bricks to check machine-induced background during the LHC commissioning, as the very final step of the detector installation.
On May 24th, SND@LHC registered the first muons from pp collisions, and at the beginning of July the first ever events at a record centre-of-mass energy of 13.6 TeV were recorded. The first target fully equipped with emulsions was installed on July 26th, and the emulsions were replaced three times during the 2022 run, integrating a total of ∼40 fb⁻¹. Ideas for an HL-LHC upgrade An advanced version of the SND@LHC detector is envisaged for the HL-LHC. It will consist of two detectors: one placed in the same region as SND@LHC, i.e. 7.2 < η < 8.4, and the other one in the region 4 < η < 5. The first apparatus will have an angular acceptance similar to that of SND@LHC and will perform the charm production measurement and lepton flavour universality tests with neutrinos at the percent level; the second detector will benefit from the overlap with LHCb to reduce systematic uncertainties and will perform neutrino cross-section measurements. In order to increase the azimuthal angle coverage of the second detector, the idea is to search for a location in existing caverns, closer to the interaction point. We consider this second module as a near detector meant for systematic uncertainty reduction. Each detector will be made of three elements. The upstream one is the target region, for the vertex reconstruction and the electromagnetic energy measurement with a calorimetric approach. It will be followed downstream by a muon identification and hadronic calorimeter system. The third and most downstream element will be a magnet for the muon charge and momentum measurement, thus allowing for neutrino/anti-neutrino separation both for muon neutrinos and, in the muonic decay channel of the tau lepton, for tau neutrinos. The target will be made of thin sensitive layers interleaved with tungsten plates, for a total mass of ∼5 tons. The use of nuclear emulsion at the HL-LHC is prohibitive because the very high intensity would make the replacement rate of the target incompatible with technical stops. The Collaboration is investigating the use of compact electronic trackers with high spatial resolution, fulfilling both the task of vertex reconstruction with micrometric accuracy and that of electromagnetic energy measurement. The hadronic calorimeter and the muon identification system will be larger than 10 λint, which will bring the average depth of the hadronic calorimeter above 11.5 λint, thus improving the muon identification efficiency and energy resolution. The magnetic field strength is assumed to be about 1 T over a ∼2 m length. The configuration of the detectors allows all three neutrino flavours to be efficiently distinguished and their energy to be measured. The SND@LHC upgrade will open up a unique opportunity to probe the physics of heavy-flavour production at the LHC in a region inaccessible to other experiments.
Two Notch Ligands, Dll1 and Jag1, Are Differently Restricted in Their Range of Action to Control Neurogenesis in the Mammalian Spinal Cord Background Notch signalling regulates neuronal differentiation in the vertebrate nervous system. In addition to a widespread function in maintaining neural progenitors, Notch signalling has also been involved in specific neuronal fate decisions. These functions are likely mediated by distinct Notch ligands, which show restricted expression patterns in the developing nervous system. Two ligands, in particular, are expressed in non-overlapping complementary domains of the embryonic spinal cord, with Jag1 being restricted to the V1 and dI6 progenitor domains, while Dll1 is expressed in the remaining domains. However, the specific contribution of different ligands to regulate neurogenesis in vertebrate embryos is still poorly understood. Methodology/Principal Findings In this work, we investigated the role of Jag1 and Dll1 during spinal cord neurogenesis, using conditional knockout mice where the two genes are deleted in the neuroepithelium, singly or in combination. Our analysis showed that Jag1 deletion leads to a modest increase in V1 interneurons, while dI6 neurogenesis was unaltered. This mild Jag1 phenotype contrasts with the strong neurogenic phenotype detected in Dll1 mutants and led us to hypothesize that neighbouring Dll1-expressing cells signal to V1 and dI6 progenitors and restore neurogenesis in the absence of Jag1. Analysis of double Dll1;Jag1 mutant embryos revealed a stronger increase in V1-derived interneurons and overproduction of dI6 interneurons. In the presence of a functional Dll1 allele, V1 neurogenesis is restored to the levels detected in single Jag1 mutants, while dI6 neurogenesis returns to normal, thereby confirming that Dll1-mediated signalling compensates for Jag1 deletion in the V1 and dI6 domains. Conclusions/Significance Our results reveal that Dll1 and Jag1 are functionally equivalent in controlling the rate of neurogenesis within their expression domains. However, Jag1 can only activate Notch signalling within the V1 and dI6 domains, whereas Dll1 can signal to neural progenitors both inside and outside its domains of expression. Introduction The vertebrate central nervous system is composed of a variety of neuronal and glial cell types, whose production has to follow three fundamental rules: i) to be generated in the correct proportion; ii) to migrate to the right position and iii) to be functionally distinct. During embryonic spinal cord neurogenesis, neural progenitor cells are exposed to different concentrations of secreted TGFβ, Sonic hedgehog (Shh) and Wnt proteins that act in a graded manner to establish a pattern of progenitor identities along the dorso-ventral (DV) axis. This results in the generation of distinct neural progenitor domains in the spinal cord, each expressing specific combinations of transcription factors (TFs) from the homeodomain (HD) and basic helix-loop-helix (bHLH) families, which confer specific identities to each progenitor population (reviewed in [1,2]). In the ventral spinal cord, five progenitor domains have been defined: four that give rise to different classes of ventral interneurons, named V0, V1, V2, and V3, and a domain from which all motoneurons (MN) arise. Similarly, neural progenitors in the dorsal spinal cord are organized into six domains that generate six early-forming (dI1-6) and two late-developing (dILA and dILB) classes of interneurons.
Differentiating neurons arising from each progenitor domain express unique sets of TFs that regulate their final differentiation programs and their integration into the spinal cord circuitry. In the ventral spinal cord, for instance, V0 INs are characterized by the expression of Evx1, V1 INs express En1, V2a INs express Chx10, MNs express Hb9 and Isl1/2, and V3 cells express Sim1 [3]. Notch signalling is another mechanism that has been shown to be essential for appropriate neuronal production in the embryonic spinal cord, controlling the rate of neurogenesis [4,5]. Deletion of Notch1, which is exclusively expressed in the ventricular zone of the neuroepithelium where neurogenesis occurs, results in a neurogenic phenotype that is characterized by premature and excessive neuronal differentiation in the spinal cord [6,7]. Two other Notch genes, Notch2 and Notch3, are also expressed in the embryonic neuroepithelium [8]. Complete elimination of Notch activity could be achieved through the generation of mutant mice with simultaneous deletion of the three bHLH-O genes Hes1, Hes3 and Hes5, which encode the main effectors of Notch signalling in the embryonic spinal cord [6,9]. Analysis of these triple-mutant mice showed that all neural progenitors in the spinal cord are dependent on Notch signalling to maintain their neurogenic potential. In the absence of Notch activity, progenitors enter differentiation prematurely and neurogenesis collapses due to progenitor depletion. In addition to its essential role in progenitor maintenance, Notch signalling has also been shown to regulate specific neuronal fate decisions in the spinal cord, controlling for instance the generation of excitatory V2a and inhibitory V2b interneurons from the V2 domain [4,5]. These diverse Notch functions are likely mediated by different Notch ligands, all of which are expressed in the embryonic vertebrate spinal cord in unique spatio-temporal patterns. The Dll3 and Jag2 genes are expressed in differentiating neurons [10,11], with Jag2 being expressed exclusively in differentiating motoneurons [11]. The other ligands are specifically expressed in the ventricular region of the neuroepithelium: Dll1 and Jag1 are expressed in a strikingly complementary pattern [8,12], with Jag1 expression restricted to the V1 and dI6 progenitor domains [13][14][15] and Dll1 expression present in the remaining DV progenitor domains of the embryonic spinal cord, coinciding with Dll4 in the V2 domain [12,14]. We have previously shown that Dll1 inactivation leads to premature neuronal differentiation in all domains where the gene is expressed [14]. Similarly, it has been recently reported that Jag1 mutants reveal accelerated neurogenesis within its domains of expression, resulting in the overproduction of V1-derived interneurons [15]. The finding that two ligands share a common role in progenitor maintenance in adjacent domains of the embryonic spinal cord raises the question of whether one ligand could compensate for the absence of the other in regulating neuronal production. A functional equivalence between different Notch ligands has been reported in the Drosophila embryo, where complete phenocopy of Notch mutations in wing veins and sensory lineages can only be achieved after deletion of both Delta and Serrate [16]. In addition, ectopic expression of Serrate was shown to partially rescue the severe neuronal hyperplasia observed in Delta-deficient embryos [17], reinforcing the notion of functional redundancy between different ligands.
This is further supported by our analysis of mouse Dll1 mutants, where Dll4 can partially compensate for the lack of Dll1 in the spinal cord V2 domain, attenuating the overproduction of V2 INs due to Dll1 deletion [14]. To investigate whether Jag1 and Dll1 have differential roles in the control of neuronal production, we have used conditional mouse models to delete one or both genes specifically in the progenitor domains of the embryonic spinal cord. Analysis of neuronal production in these mutants supports a model where both ligands regulate neurogenesis in similar ways within their own domains of expression. However, Dll1 and Jag1 show different signalling capacities to adjacent domains: while Dll1 is able to signal to Jag1-expressing domains, regulating neuronal production in the absence of Jag1, the latter is unable to sustain neurogenesis in adjacent Dll1-expressing domains when Dll1 is inactivated. Thus, Dll1 is able to compensate for the loss of Jag1 function, while Jag1 fails to do the same in the absence of Dll1. Ethics Statement Animal experiments were approved by the Animal Ethics Committee of Instituto de Medicina Molecular (AEC_027_2010_DH_Rdt_general_IMM) and conducted according to National Regulations. All animals were fed ad libitum and housed in SPF facilities. Immunofluorescence and in situ hybridization Embryos were fixed in 4% paraformaldehyde at 4°C (2 h for immunofluorescence (IF) and O/N for in situ hybridization (ISH)), cryoprotected in 30% sucrose and embedded in 7.5% gelatin:15% sucrose, and 12 μm sections were used in the analysis. For IF, sections were degelatinized at 37°C for 15 min, followed by a pre-treatment with 3% H2O2:methanol for 30 min at room temperature (RT), except for the antibodies against Jag1 and GFP. Permeabilization was performed using Triton X-100 (0.5%) for 15 min, followed by blocking (10% Normal Goat Serum, 0.1% Triton X-100) for 1 h at RT. Primary antibodies were incubated O/N at 4°C. The following antibodies were used in this study: Double in situ hybridizations using Dll1 and Hes5 mRNA probes were performed as previously described [12], with modifications. The Dll1 DIG-labelled probe was first detected with AP-conjugated anti-DIG antibody (1:2000; Roche) and signal was developed using Fast Red substrate (Roche). To detect the second Hes5 Fluorescein-labelled probe, sections were incubated with HRP-conjugated anti-Fluorescein antibody (1:1000, Roche), and signal developed by the TSA-Plus Fluorescein System (Perkin-Elmer), according to the manufacturer's instructions. Cell counts and Imaging Cell counts were performed for eight cryostat sections from at least three spinal cords (i.e., twenty-four sections for each genotype). For the described antibodies, quantification of neuronal types was done by counting the number of immunopositive cells, which were normalized to the total number of cells (DAPI) in images taken with either a 20× or 40× objective on a Leica DM5000B fluorescence microscope. Statistical significance was determined using Student's t-test. Confocal images were captured with a Zeiss LSM510 META confocal microscope. Results Jag1 mutants exhibit a milder neurogenic phenotype than Dll1 mutants To investigate the role of Jag1 and Dll1 in regulating neuronal production within and outside their domains of expression in the embryonic spinal cord, we have analysed in parallel the phenotypes of mutant embryos where either Jag1 or Dll1 was specifically inactivated in the neuroepithelium.
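As a worked illustration of the quantification scheme just described (immunopositive cells normalized to the total number of DAPI-stained cells per section, with genotypes compared by Student's t-test), the following is a minimal sketch. The counts, the function name, and the use of scipy.stats.ttest_ind are illustrative assumptions rather than the authors' actual analysis pipeline.

```python
import numpy as np
from scipy import stats

def fraction_positive(marker_counts, dapi_counts):
    """Per-section fraction of immunopositive cells, normalized to DAPI+ nuclei."""
    marker_counts = np.asarray(marker_counts, dtype=float)
    dapi_counts = np.asarray(dapi_counts, dtype=float)
    return marker_counts / dapi_counts

# Hypothetical per-section counts (e.g., 8 sections pooled over >=3 embryos per genotype).
control_foxd3 = fraction_positive([52, 48, 55, 50, 49, 53, 51, 47],
                                  [900, 870, 910, 880, 860, 905, 890, 875])
mutant_foxd3  = fraction_positive([63, 66, 60, 68, 64, 61, 65, 67],
                                  [895, 905, 870, 915, 885, 900, 880, 910])

# Two-sample Student's t-test on the normalized fractions, as in the paper.
t_stat, p_value = stats.ttest_ind(control_foxd3, mutant_foxd3)
print(f"control = {control_foxd3.mean():.3f}, mutant = {mutant_foxd3.mean():.3f}, p = {p_value:.4f}")
```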
These embryos were obtained by crossing floxed Jag1 and floxed Dll1 mice [20,21] with mice carrying a Cre recombinase under the control of the rat Nestin promoter, which drives Cre expression in all neural progenitors [18]. Jag1 single mutant (Jag1 f/f ;NesCre) and Dll1 single mutant embryos (Dll1 f/f ;NesCre) were compared with each other and with control littermates. Comparison of Jag1 f/f ;NesCre with control embryos at E10.5 and E11.5 showed no differences in the general morphology of the spinal cord, whereas E11.5 Dll1 f/f ;NesCre spinal cords were severely affected, with an enlargement of the floor plate accompanied by the disappearance of the central lumen [14]. A similar morphology has been reported in a conditional Notch1 mutant [7]. In order to monitor the production of the distinct INs arising from the Jag1-expressing V1 and dI6 domains of the embryonic spinal cord, as well as the Dll1-expressing V2 and V0 domains, we have used various markers, individually or in combinations, as depicted in Figure 1. For V1-derived interneurons (INs), we followed the expression of En1, a homeobox-containing TF, and Foxd3, a winged-helix TF, which are both expressed in all postmitotic V1 INs [13,22]. To detect specific subsets of V1-derived neurons at later stages, we used Calbindin expression to label Renshaw cells [23,24] and Foxp2 expression to mark non-Renshaw cells [25]. To identify dI6 INs, we have analysed the expression of bHLHb5, a TF present in dI6 INs and also in more ventral V1 and V2 INs, but not in V0 INs [26]. Combined analysis with Evx1, which is selectively expressed by a more ventral subset of V0 INs (V0v) [27,28], allows the unequivocal identification of dI6 INs. In addition, we have evaluated the expression of Pax2, a TF common to multiple spinal cord INs, including the dI6 INs, as well as the V0 and V1 INs, but not to dI5 INs [29]. Finally, the expression of the homeodomain TF Chx10 was used to identify V2a INs arising from the Dll1-expressing V2 domain [30]. Quantification of Foxd3+ V1 INs in Jag1 f/f ;NesCre embryonic spinal cord at E11.5 revealed that lack of Jag1 function results in a mild, but statistically significant, increase of V1 IN production when compared to control embryos (Fig. 2 A,C,D). On the contrary, Dll1 f/f ;NesCre embryos showed similar numbers of Foxd3+ V1 INs to those in control embryos (Fig. 2 A,B,D). This Jag1-specific V1 phenotype was further confirmed by a modest increase of En1+ INs found in Jag1 mutants at E11.5, when compared to control embryos (Fig. S1). A recently published work has also reported an increase in V1 INs in a different Jag1 mutant mouse [15], although the V1 neurogenic phenotype we observed in Jag1 f/f ;NesCre embryos is not as striking. We next quantified the number of Chx10+ V2a INs in Jag1 f/f ;NesCre embryos (Fig. 2 C,E), and no significant alteration was observed, in contrast with the marked increase of V2a INs detected in Dll1 f/f ;NesCre mutants (Fig. 2 B,E). Noticeably, the increase of V1 INs in Jag1 f/f ;NesCre mutants is less pronounced than the increase in V2a INs found in Dll1 f/f ;NesCre mutants, being also statistically less significant (t-test, p<0.05 versus p<0.005) (Fig. 2 D, E). Our findings show that Jag1 is necessary to maintain the normal pace of neurogenesis within the V1 domain, but does not control progenitor maintenance in the adjacent Dll1-expressing V2 domain.
In addition, the relatively mild Jag1 phenotype in the V1 domain suggests that not all V1 neural progenitors are affected by the lack of Jag1-mediated Notch signalling. This is further supported by our finding that the number of later V1-derived IN sub-types (Calbindin+ and Foxp2+) is not altered in Jag1 f/f ;NesCre spinal cords (E15.5), when compared to control embryos (Fig. S2). Together, these results raise the hypothesis that control of V1 neurogenesis in the absence of Jag1 may be, at least partially, rescued by Dll1 signalling from the V0 and V2 neighbouring domains. To further test this hypothesis, we analysed neuronal production in the other Jag1-expressing domain of the spinal cord, the dI6 domain. Our results show that the number of Bhlhb5+ dI6 INs in Jag1 f/f ;NesCre (E11.5) embryos is indistinguishable from that detected in control and in Dll1 f/f ;NesCre embryos (Fig. 3 A-C,G). Similarly, quantification with Pax2 confirmed that dI6 neurogenesis is not affected in Jag1 mutants, when compared to controls (Fig. 3 D,F,H). The normal production of dI6 INs in Jag1 mutants offers further support to the hypothesis that Dll1 signalling from adjacent domains can compensate for the absence of Jag1 and restore the control of neurogenesis. The increase in the number of Pax2+ INs detected in Dll1 mutants (Fig. 3 E,H) results from the overproduction of Pax2+/Evx1− V0d INs, and is not due to an excess of Pax2+/Bhlhb5+ dI6 INs (Fig. 3 D-F, H). In parallel, we confirmed that Dll1 is necessary and sufficient for the control of V0 neurogenesis, as an increase of Evx1+ V0v INs could only be detected in Dll1, and not in Jag1, mutants (Fig. 3 A-F, I). Nestin-Cre driver effectively inactivates Jag1 in V1 and dI6 spinal cord progenitors To exclude that the mild neurogenic phenotype found in Jag1 f/f ;NesCre embryos was due to poor Cre recombinase activity driven by the Nestin-Cre driver, we evaluated the extent of Nestin-Cre-mediated recombination in the embryonic spinal cord of Jag1 mutants. To assess this, a Rosa26-derived reporter line that conditionally expresses the YFP gene (Rosa26-YFP) was bred into the Jag1 f/f ;NesCre line, allowing us to identify cells where Cre-mediated recombination is active [19]. E11.5 Jag1 f/f ;R26-YFP/+;NesCre embryos were collected and exhibited intense YFP immunofluorescence along the whole DV axis of the developing spinal cord, indicating widespread Cre-mediated recombination in the neuroepithelium (Fig. 4 A,B and A″,B″). In addition, we have used immunofluorescence to detect the presence of the Jag1 protein in control and Jag1 mutant embryos. Our results show that Jag1 is completely absent from the dI6 and V1 domains of Jag1 mutants, demonstrating that the Nestin-Cre driver effectively deletes Jag1 in the embryonic spinal cord (Fig. 4 A′,B′ and A″,B″). Notch signalling is still active in the V1 domain of Jag1 mutants Given the mild V1 phenotype detected in Jag1 mutant embryos, we next asked whether Notch signalling continues to be active in the V1 domain, even in the complete absence of Jag1 protein in the mutant neuroepithelium. To address this, we analysed the expression of Hes5, the main target and effector of Notch activity in the developing spinal cord [9]. In situ hybridization with a Hes5 probe in Jag1 f/f ;NesCre and control embryos revealed that Hes5 mRNA expression is slightly diminished in the V1 domain of Jag1 mutants, but is still broadly detected in V1 progenitors (Fig. 5 A,B).
Simultaneous detection of Dll1 mRNA expression shows that Dll1 transcription continues to be excluded from the V1 domain of Jag1 f/f ;NesCre embryos (Fig. 5). These findings confirm the absence of cross-inhibition between the two genes in the developing spinal cord, as previously suggested by studies in the chick embryo, where misexpression of Dll1 or Jag1 did not alter the endogenous expression domains of Jag1 and Dll1, respectively [15]. The observed Notch activity in the V1 domain of Jag1 mutant embryos favours the hypothesis that Dll1-expressing cells located at the boundary between the V0/V1 and V1/V2 domains are capable of signalling to neural progenitors in the adjacent V1 domain, preventing massive differentiation of V1 INs. Consistent with this observation, high-resolution confocal analysis of the spinal cord neuroepithelium in Jag1 f/f ;NesCre and control embryos after Dll1/Hes5 double in situ hybridization shows the presence of Dll1-expressing cells flanking Hes5+ V1 progenitors, suggesting that cells from neighbouring domains are indeed able to laterally signal to V1 progenitors and mediate Notch-driven Hes5 expression in these cells (Fig. 5 C,D). Dll1-mediated signalling from adjacent domains can compensate for the absence of Jag1 in the V1 and dI6 domains To definitively confirm our hypothesis that Jag1 absence is compensated by Dll1 from adjacent domains, we generated mutant embryos where both Dll1 and Jag1 were conditionally deleted in the neuroepithelium. For this purpose, we crossed double-floxed Dll1;Jag1 female mice (Dll1 f/f ;Jag1 f/f ) with males carrying one floxed allele of Dll1, one floxed allele of Jag1 and one allele of the Nestin-Cre driver (Dll1 f/+ ;Jag1 f/+ ;NesCre). This strategy allowed us to generate an allelic series for phenotypic analysis. Neuronal production was monitored in these embryos using the previously described markers (Fig. 1). For all neuronal types assessed, double heterozygote embryos (Dll1 f/+ ;Jag1 f/+ ;NesCre) were indistinguishable from control embryos and were therefore used as controls (data not shown). If Dll1 signalling from adjacent domains is able to control neurogenesis in the V1 domain of Jag1 mutants, the prediction is that the mild V1 phenotype detected in Jag1 f/f ;NesCre embryos would become more pronounced in the absence of the two ligands. Indeed, quantification of Foxd3+ V1 INs in E11.5 full conditional Dll1;Jag1 double mutants (Dll1 f/f ;Jag1 f/f ;NesCre) revealed the highest increase, when compared to all other genotypes. For instance, single Jag1 mutants displayed a 23% increase in Foxd3+ V1 INs (p<0.05), while Dll1 f/f ;Jag1 f/f ;NesCre mutants exhibited a 49% increase (p<0.005) (Fig. 6 A,J). This excess in V1 neurogenesis was further confirmed by the analysis of En1+ V1 INs in Dll1 f/f ;Jag1 f/f ;NesCre mutants (Fig. S3). In addition, Dll1 f/f ;Jag1 f/f ;NesCre mutants showed a marked increase in V1-derived Calbindin+ Renshaw cells and FoxP2+ non-Renshaw cells (Fig. S4), in contrast to single Jag1 f/f ;NesCre mutant embryos (Fig. S2). Analysis of the dI6 domain also shows a clear increase in the number of Pax2+ and Bhlhb5+ dI6 INs in full double mutant embryos (Dll1 f/f ;Jag1 f/f ;NesCre) when compared to control or to single mutant Jag1 f/f ;NesCre embryos (Fig. 6 D,G,K). Together, these results indicate that the absence of Jag1 activity in the V1 and dI6 domains of the developing spinal cord can be compensated by Dll1-mediated signalling from adjacent domains.
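To make the allelic series explicit, here is a small sketch of the genotype combinations expected from the cross described above, assuming the three loci assort independently and the NesCre transgene is hemizygous in the father; these Mendelian proportions are an illustration of the breeding strategy, not data from the paper.

```python
from itertools import product
from collections import Counter

# Mother (Dll1 f/f ;Jag1 f/f) always transmits a floxed allele of each gene;
# father (Dll1 f/+ ;Jag1 f/+ ;NesCre) transmits floxed or wild-type alleles with
# equal probability and the Cre transgene to roughly half the offspring
# (assumptions: hemizygous Cre, independent assortment of the three loci).
dll1_from_father = ["f", "+"]
jag1_from_father = ["f", "+"]
cre = ["NesCre", "no Cre"]

offspring = Counter()
for d, j, c in product(dll1_from_father, jag1_from_father, cre):
    offspring[(f"Dll1 f/{d}", f"Jag1 f/{j}", c)] += 1

total = sum(offspring.values())
for genotype, n in sorted(offspring.items()):
    print(genotype, f"expected fraction = {n / total:.3f}")   # each combination at 1/8
```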
This is further supported by the finding that one functional copy of Dll1 (Dll1 f/+ ;Jag1 f/f ;NesCre) is sufficient to partially compensate for the lack of Jag1 in the V1 domain, reverting the stronger V1 phenotype observed in Dll1 f/f ;Jag1 f/f ;NesCre embryos into a mild phenotype, similar to that observed in Jag1 single mutants (Fig. 6 B,J). Moreover, an identical trend is detected in the dI6 domain, where the presence of one functional copy of Dll1 results in normal numbers of dI6 INs (Fig. 6 E,M). These embryos, with just one copy of Dll1, also show a full rescue of the excessive differentiation of Chx10+ V2a INs (Fig. 6 B,K) and of Evx1+ V0 INs (Fig. 6 E,H,L), confirming the functional activity of the Dll1 allele. To test the gene-dosage dependence of the Jag1 phenotype, we analysed embryos where only one functional copy of Jag1 is present, in the complete absence of Dll1 (Dll1 f/f ;Jag1 f/+ ;NesCre). In these embryos, the number of Foxd3+ V1 INs is similar to that detected in control embryos, revealing that one functional copy of Jag1 is sufficient to ensure normal control of V1 neurogenesis (Fig. 6 C,J). The same applies to the dI6 domain, where the number of Bhlhb5+ dI6 INs detected in E11.5 Dll1 f/f ;Jag1 f/+ ;NesCre embryos is comparable to that found in control littermates (Fig. 6 E,I). Quantification of Pax2+ INs in Dll1 f/f ;Jag1 f/+ ;NesCre embryos shows similar numbers to those in single Dll1 and Dll1 f/f ;Jag1 f/f ;NesCre mutants (Fig. 6 I,N). From these results we could confirm that the excess of Pax2+ INs located dorsally to Evx1+ INs is not due to the overproduction of dI6 INs, but rather of V0d INs. The finding that one functional copy of either Jag1 or Dll1 is able to rescue the dI6 phenotype detected in Dll1 f/f ;Jag1 f/f ;NesCre embryos, together with our data showing that dI6 neurogenesis is not affected in either Dll1 or Jag1 single mutants, implies that both ligands are able to control the rate of dI6 neurogenesis. Finally, we evaluated V2 and V0 neurogenesis in Dll1 f/f ;Jag1 f/+ ;NesCre embryos at E11.5 and found that the functional copy of Jag1 present in these embryos is unable to rescue the increases in Chx10+ V2a INs (Fig. 6 C,K) and Evx1+ V0 INs (Fig. 6 F,I,L) due to Dll1 deletion. These results confirm that Jag1 does not signal to adjacent Dll1-expressing domains and that neurogenesis in these domains is exclusively regulated by Dll1. (A schematic summary of the above results is presented in Fig. S5). Discussion Although Notch signalling is widely used during several developmental processes, it is not yet clear how different Notch ligands are employed to control a multitude of distinct cellular decisions. Here, we address the function of two different ligands, Dll1 and Jag1, during spinal cord neurogenesis. These ligands are expressed in non-overlapping complementary domains of the embryonic spinal cord, and analysis of mouse embryos carrying mutations in Dll1 and Jag1, singly or in combination, reveals that the two ligands play equivalent roles in controlling the rate of neuronal production within their domains of expression. However, while Jag1 signalling is restricted to cells within its domains of expression, our results reveal that Dll1 is able to signal to neural progenitors in the adjacent Jag1-expressing domains and prevent their untimely differentiation in the absence of Jag1 function.
These results imply that Dll1- or Jag1-mediated activation of Notch in the spinal cord neuroepithelium is not qualitatively different, with both ligands contributing to the regulation of neural progenitor maintenance but not to neuronal cell-type diversity. Dll1 and Jag1 are functionally equivalent in controlling the rate of neurogenesis within their expression domains In mammals, four Notch receptors (Notch1-4) can bind five different ligands, named Delta-like (Dll) 1, 3 and 4, and Jagged (Jag) 1 and 2 [31]. All ligands exhibit different expression patterns during embryonic spinal cord neurogenesis. While Jag1 and Dll1 are expressed transiently in non-overlapping complementary domains along the DV axis, in cells committed to differentiation [32], Dll3 is expressed later in differentiated neurons, across all DV domains [10]. A more restricted expression pattern is shown by Dll4, which is exclusively expressed by V2 differentiating neurons [14,33], and by Jag2, which is expressed in differentiated MNs [32]. Our previous work showed that Dll1 signalling is necessary to regulate neurogenesis and that Dll1 deletion causes a neurogenic phenotype characterized by premature and excessive neuronal differentiation in the spinal cord domains where the gene is normally expressed [14]. A recent paper reported that deletion of Jag1 causes an acceleration of neurogenesis in the V1 domain where this gene is expressed, suggesting that Dll1 and Jag1 play similar functions within their expression domains, controlling the rate of neuronal differentiation [34]. Here, we have analysed a conditional Jag1 mutation in the developing spinal cord and confirmed that Jag1 is necessary in the V1 domain to regulate neurogenesis. However, when compared to the excessive neuronal differentiation caused by Dll1 mutation in the V0 and V2 domains, the V1 neurogenic phenotype due to Jag1 deletion is milder and seems to be rescued at later stages, as two V1-derived subtypes of INs are produced in normal numbers in Jag1 mutants. This milder phenotype correlates with our finding that Notch activity is still present in the V1 domain of Jag1 mutants, as detected by the expression of the Notch target and effector Hes5. These results led us to consider the hypothesis that deletion of Jag1 in the V1 and dI6 domains can be compensated by Dll1 signalling from adjacent domains. Since there is no evidence for any physical boundary separating the various progenitor domains along the DV axis of the embryonic spinal cord, it is conceivable that Dll1-expressing cells, in direct contact with V1 and dI6 progenitors, may activate Notch in these cells, enabling neurogenesis to proceed at a normal pace in the absence of Jag1. Neuroepithelial cells expressing Dll1 might even reach progenitors located further away inside the V1 and dI6 domains, as suggested by the recent findings that the Drosophila Delta protein is present in filopodia of signalling cells within the fly wing and notum epithelium, being able to mediate lateral inhibition over several cell diameters during specification of sensory organ precursors [35,36]. Given the difference in width of the two Jag1-expressing domains, with the V1 domain being 2-3 times wider than the dI6 domain (see Fig. 3), the predicted long-range signalling ability of Dll1-expressing cells could account for our findings that dI6 neurogenesis is normal in Jag1-mutant embryos and that only a milder neurogenic phenotype could be detected in the V1 domain.
In this scenario, neural progenitors at the centre of the wider V1 domain may be too far to be reached by neighbouring Dll1-expressing cells, and will commit to differentiation in the absence of Jag1 signalling, while all progenitors in the thinner dI6 domain receive Dll1-mediated signalling. In Jag1-expressing domains, control of neurogenesis can be achieved by either Jag1- or Dll1-mediated Notch signalling To test whether Jag1 inactivation can be compensated by Dll1 signalling from adjacent domains, we have generated an allelic series of Dll1;Jag1 double mutants and analysed neuronal production in the spinal cord of the various mutant combinations. Our results show that simultaneous deletion of both copies of Dll1 and of Jag1 causes an extensive differentiation of various subtypes of INs produced from the DV domains where each ligand is expressed. In the V1 domain, we could observe that absence of both Jag1 and Dll1 causes a stronger and more significant increase in the number of INs than that observed in Jag1 single mutants. In the case of the dI6 domain, a neurogenic phenotype can only be detected when both Jag1 and Dll1 are deleted. Thus, clear disruption of V1 and dI6 neurogenesis only occurs when the two ligands are deleted, showing that Dll1 signalling is indeed able to compensate for the lack of Jag1. This conclusion is further supported by the finding that a single copy of Dll1 (in Dll1 f/+ ;Jag1 f/f ;NesCre embryos) is enough to restore dI6 neurogenesis and revert the strong V1 neurogenic phenotype to a milder one, similar to that detected in Jag1 single mutants. In addition, the fact that the identity of dI6 and V1 INs is not altered when Jag1-mediated signalling is replaced by Dll1-mediated signalling from adjacent domains reveals that these Notch ligands do not regulate neuronal type specification within each DV domain. Figure 6. Jag1 deletion can be compensated by Dll1 signalling from adjacent domains. In double mutant Dll1 f/f ;Jag1 f/f ;NesCre embryos (A), both Foxd3+ V1 INs and Chx10+ V2a INs are strongly increased, while one functional copy of Dll1 (B) rescues the V2a phenotype completely and the V1 phenotype partially. On the contrary, one functional copy of Jag1, in the absence of Dll1 (C), rescues the V1 phenotype but fails to revert the excess of V2a INs. (D-I) A neurogenic phenotype in the dI6 domain can only be observed in the absence of both ligands, using either bHLHb5 (D-F) or Pax2 (G-I) to identify dI6 neurons, located dorsally to the Evx1+ V0v INs (indicated with an asterisk). The presence of one functional copy of Dll1 (E,H) or of Jag1 (F,I) is enough to prevent excessive dI6 neurogenesis. The number of Evx1+ V0v INs is only increased in the complete absence of Dll1 (D,F,G,I), as one functional copy of Dll1 is enough to revert the V0v neurogenic phenotype, even in the absence of Jag1 (E,H). Although one functional copy of Jag1 (F) is enough to revert the excess of Bhlhb5+ dI6 INs detected in Dll1 f/f ;Jag1 f/f ;NesCre embryos (D), an excess of Pax2+ INs located dorsally to Evx1+ V0v INs (indicated with an asterisk) can still be detected in Dll1 f/f ;Jag1 f/+ ;NesCre embryos (I), when compared to Dll1 f/f ;Jag1 f/f ;NesCre (G). The excess of Pax2+ INs arises from the Dll1-dependent V0d domain (Pax2+/Evx1−) and not from the Jag1-expressing dI6 domain (Pax2+/Evx1−/Bhlhb5+). On the contrary, one functional copy of Dll1 is enough to rescue both the V0d and dI6 neurogenic phenotypes (E,H). Scale bar: 50 μm.
(J-N) Graphs depicting the quantification of various types of INs in different allelic combinations of Dll1 and Jag1. The percentage of positive cells for each marker is relative to the total number of cells, detected by DAPI staining of the entire spinal cord sections where the counts were done. Error bars represent s.d. for at least three biological replicates. Student's t-test: * p<0.05; ** p<0.01; *** p<0.001. doi:10.1371/journal.pone.0015515.g006 Dll1 and Jag1 are differently restricted in their range of action to control neurogenesis in the developing spinal cord While our results show that Dll1 can signal outside its own domains of expression and compensate for the absence of Jag1 in the dI6 and V1 domains, Jag1 can only control neurogenesis inside these domains, failing to compensate for Dll1 deletion in adjacent domains. This is particularly evident in our analysis of dI6 neurogenesis: while double mutant embryos (Dll1 f/f ;Jag1 f/f ;NesCre embryos) show a marked increase in Pax2+ INs derived from the neighbouring dI6 and V0 domains, the presence of one functional copy of Jag1 (Dll1 f/f ;Jag1 f/+ ;NesCre) is able to restore the normal number of dI6 INs (identified as Pax2+/Bhlhb5+) but not the number of the immediately adjacent dorsal V0d INs (also Pax2+ but negative for bHLHb5 and Evx1). The described incapacity of Jag1 to signal to neighbouring cells within Dll1-expressing domains might be due to the presence of Lunatic Fringe (LFng), which is known to modulate the response of Notch receptors to different ligands [37][38][39][40]. Actually, LFng is expressed in the same domains as Dll1 and is excluded from the dI6 and V1 domains, where Jag1 is expressed [41]. Studies in both Drosophila and vertebrates have shown that the glycosyltransferase activity of Fng proteins leads to a modification in Notch receptors that blocks activation of the pathway by the Serrate/Jagged class of ligands [37]. This offers a simple explanation for the finding that Jag1 is unable to compensate for the absence of Dll1 in neighbouring progenitors, as Notch receptors in these cells have been modified by LFng and are therefore unable to be activated by Jag1. On the contrary, several reports have shown that modification of Notch by Fringe enhances Delta-mediated activation [37,39,42]. This suggests that the overlapping LFng and Dll1 expression in the developing spinal cord results in high levels of Notch activity, which are necessary for the proper control of neurogenesis. However, our results indicate that Fringe activity is not absolutely needed for the ability of Notch to respond to Dll1 signalling during neurogenesis: in the absence of Jag1, a functional copy of Dll1 is sufficient to regulate neural progenitor differentiation in the Fringe-negative dI6 and V1 domains, thereby implying that the levels of Notch activity elicited by Dll1 binding are still sufficient to control neurogenesis. This is in agreement with biochemical data reported by Yang et al., who showed that, in the absence of Fringe, the levels of Notch activity elicited by Dll1 or Jag1 are identical [40]. These findings also suggest that levels of Notch activity are not uniform along the DV axis of the developing spinal cord, being higher in Dll1+/LFng+ domains than in Jag1+/LFng− domains. Nonetheless, our results do not support the model proposed by Marklund et al., in which both Dll1 and Jag1 are prohibited from signalling across domain boundaries [15].
This model is based on the finding that ectopic Dll1 expression in the chick spinal cord was unable to inhibit neuronal differentiation in the Jag1-expressing V1 domain. However, these data do not rule out that Dll1 signalling from cells located in adjacent domains can activate Notch in V1 and dI6 progenitors, as the endogenous expression of Jag1 in electroporated cells can result in cis-inhibition of the ectopically expressed Dll1. A similar cis-inhibition of Dll4 signalling by Jag1 has been described in stalk cells during retina angiogenesis [43] and might explain the lack of Dll1 activity in the chick gain-of-function experiments [34]. In summary, Dll1 and Jag1 can similarly activate Notch signalling in neural progenitors of the embryonic spinal cord to regulate their commitment to differentiation, although the two ligands are differently restricted in their range of action: while Jag1 is effectively prevented from signalling to progenitors located in adjacent Dll1-expressing domains, Dll1 can efficiently signal to progenitors in Jag1-expressing domains and regulate their differentiation.
7,999.6
2010-11-24T00:00:00.000
[ "Biology" ]
Microbial Interaction Network Estimation via Bias-Corrected Graphical Lasso With the increasing availability of microbiome 16S data, network estimation has become a useful approach to studying the interactions between microbial taxa. Network estimation on a set of variables is frequently explored using graphical models, in which the relationship between two variables is modeled via their conditional dependency given the other variables. Various methods for sparse inverse covariance estimation have been proposed to estimate graphical models in the high-dimensional setting, including graphical lasso. However, current methods do not address the compositional count nature of microbiome data, where abundances of microbial taxa are not directly measured, but are reflected by the observed counts in an error-prone manner. Adding to the challenge is that the sum of the counts within each sample, termed “sequencing depth,” is an experimental technicality that carries no biological information but can vary drastically across samples. To address these issues, we develop a new approach to network estimation, called BC-GLASSO (bias-corrected graphical lasso), which models the microbiome data using a logistic normal multinomial distribution with the sequencing depths explicitly incorporated, corrects the bias of the naive empirical covariance estimator arising from the heterogeneity in sequencing depths, and builds the inverse covariance estimator via graphical lasso. We demonstrate the advantage of BC-GLASSO over current approaches to microbial interaction network estimation under a variety of simulation scenarios. We also illustrate the efficacy of our method in an application to a human microbiome data set. Introduction Microorganisms are ubiquitous in nature and responsible for managing key ecosystem services [1]. For example, microbes that colonize the human gut play an important role in homeostasis and disease [2][3][4]. To better reveal the underlying role microorganisms play in human diseases requires a thorough understanding of how microbes interact with one another. The study of microbiome interactions frequently relies on DNA sequences of taxonomically diagnostic genetic markers (e.g., 16S rRNA), the count of which can then be used to represent the abundance of Operational Taxonomic Units (OTUs, a surrogate for microbial species) in a sample. The OTU abundance data possess a few important features in nature. First, the data are represented as discrete counts of the 16S rRNA sequences. Second, the data are compositional because the total count of sequences per sample is predetermined by how deeply the sequencing is conducted, a concept named sequencing depth. The OTU counts only carry information about the relative abundances of the taxa instead of their absolute abundances. In addition, the sequencing depth can vary drastically across samples. Last, the OTU data are high-dimensional in nature, as it is likely that the number of OTUs is far more than the number of samples in a biological experiment. When such data are available, interactions among microbiota can be inferred through correlation analysis [5]. Specifically, if the relative abundances of two microbial taxa are statistically correlated, then it is inferred that they interact on some level. More recent statistical developments have started to take the compositional feature into account and aim to construct sparse networks for the absolute abundances instead of relative abundances. 
For example, SparCC [6], CCLasso [7], and REBACCA [8] use either an iterative algorithm or a global optimization procedure to estimate the correlation network of all species' absolute abundances while imposing a sparsity constraint on the network. All the above methods are built upon the marginal correlations between pairs of microbial taxa, and they could lead to spurious correlations that are caused by confounding factors such as other taxa in the same community. Alternatively, interactions among taxa can be modeled through their conditional dependencies given the other taxa, which can eliminate the detection of spurious correlations. In an ideal setting, the Gaussian graphical models are a useful approach to studying the conditional dependency, in which the data are modeled through a multivariate normal distribution and the conditional dependency is determined by the non-zero entries of its inverse covariance matrix. Graphical lasso is a commonly used method to estimate sparse inverse covariance matrix for high-dimensional data under the Gaussian graphical models [9,10]. However, both the count nature and the compositional features of the microbiome abundance data result in violations of the multivariate normality assumption. SPIEC-EASI is a popular method for estimating a microbial interaction network that is represented by a sparse inverse covariance matrix between the abundances of species [11]. It is a two-step procedure that first performs a central log-ratio (clr) transformation on the observed counts [12] and then applies graphical lasso to the transformed abundances. As noted in Kurtz et al. [11], the clrtransformed abundances add up to zero, which leads to a singular covariance matrix and thus an ill-posed problem for estimating its inverse. To overcome this difficulty, SPIEC-EASI treats the covariance matrix of the clr-transformed abundances as an approximation to that of the log-transformed abundances that is no longer singular. Therefore, the second graphical lasso step is treated as estimating the well-defined inverse covariance matrix of the log-transformed abundances instead of the above-mentioned ill-posed problem. In other words, SPIEC-EASI is built upon the approximation of two covariance matrices, and thus lacks a clear objective function. More recently, several other methods have been proposed to infer a microbial interaction network, including gCoda [13], CD-trace [14], and SPRING [15]. However, existing methods for inverse covariance estimation including these methods and SPIEC-EASI do not properly account for two related features intrinsic to microbiome data: (a) the data are compositional counts in nature, and (b) sequencing depth is finite and varies from sample to sample. In microbiome research, a common strategy to tackle uneven sequencing depths is rarefaction, in which data on samples with higher sequencing depths are thinned by randomly subsampling from the observed counts so that the sequencing depths are the same in the rarefied data. However, this is known to amount to substantial loss of data [16]. Another widely used practice in microbiome data analysis, also adopted albeit implicitly in SPIEC-EASI, is to simply discard the sequence depth by converting the count data directly to compositional proportions as a proxy for the true relative abundances in a sample. 
However, it relies on the assumption that the estimated proportion of a taxon in a sample is equal to its true value and ignores the uncertainty of the proportion estimates as reflected by the sampling variance of these estimates. Therefore, this approach does not adequately account for the variation in the microbial counts and has been reported to result in excessive false positives in differential abundance analysis of microbiome data [16]. In this paper, we show, in the context of covariance estimation, that the proportion-based approach leads to substantial bias in the estimator, which can deteriorate the accuracy of the inferred interaction network. To address this challenge, we quantify the bias by directly modeling the compositional count data. We develop BC-GLASSO (bias-corrected graphical lasso), a method for inverse covariance estimation in microbiome data, which accounts for the compositional count nature of microbiome data and embraces the heterogeneous sequencing depths. BC-GLASSO is a two-step procedure similar to SPIEC-EASI but possessing key distinctions. First, BC-GLASSO is built upon the logistic normal multinomial distribution that is commonly applied to model compositional count data [12,17,18], and thus has a clear objective function. This is a hierarchical model that models the compositional counts using a multinomial distribution and hierarchically the multinomial probabilities using a logistic normal distribution. Compared to SPIEC-EASI, the true covariance matrix is defined on the additive log-ratio (alr) transformed multinomial probabilities instead of the clr-transformed abundances, with the benefit of being positive-definite and possessing a well-defined inverse matrix. Second, we show that the naive estimator of the true covariance matrix, which is the sample covariance matrix based on the alrtransformed abundances, has estimation bias in this hierarchical model. The bias can be approximated by a term that is inversely proportional to the sequencing depths. Last, motivated by the form of the estimation bias, we propose a bias correction procedure by accounting for and, in fact, taking advantage of the heterogeneous sequencing depths. The bias-corrected estimator of the true covariance matrix is easy to compute because it can be written as a weighted average of sample-specific covariance matrix estimators based on the alr-transformed abundances. Finally, we apply graphical lasso to estimate a sparse inverse covariance matrix based on this bias-corrected estimator. The non-zero entries in this sparse inverse covariance matrix are interpreted to represent an edge between the associated taxa in a microbial interaction network. The rest of the paper is organized as follows. In Sect. 2, we will describe the BC-GLASSO method by introducing the logistic normal multinomial model for the compositional counts, approximating the estimation bias of the naive estimator of the desired covariance matrix, and correcting its estimation bias. In Sect. 3, we will evaluate the performance of BC-GLASSO via simulation studies and compare it with SPIEC-EASI. We will show that BC-GLASSO performs better in terms of reducing the estimation bias for the covariance matrix and detecting the edges in the microbial interaction networks more accurately. In Sect. 4, we report a real data application, in which we compare the performance of BC-GLASSO and SPIEC-EASI when applied to the data from the American Gut Project [19]. Section 5 concludes this paper with some discussion. 
Some details for the theoretical derivations in Sect. 2 are presented in the Appendix. Data and Model Consider an OTU abundance data set with n independent samples, each of which comprises observed counts of K + 1 taxa, denoted by X_i = (X_{i,1}, …, X_{i,K+1})^T for the i-th sample, i = 1, …, n. Due to the compositional property of the data, the total count of all taxa for each sample i is a fixed number, denoted by M_i. Naturally, a multinomial distribution is imposed on the observed counts: X_i | p_i ~ Multinomial(M_i, p_i) (1), where p_i = (p_{i,1}, …, p_{i,K+1})^T represents the sample-specific multinomial probabilities for individual taxa satisfying p_{i,1} + ⋯ + p_{i,K+1} = 1. To model the variability of the multinomial probabilities in the population, we build a logistic normal distribution on p_i. We first choose one taxon, without loss of generality the (K + 1)-st taxon, as a reference for all the other taxa and then apply the additive log-ratio (alr) transformation [12] on the multinomial probabilities: Z_{i,k} = log(p_{i,k} / p_{i,K+1}), k = 1, …, K (2). Let Z_i = (Z_{i,1}, …, Z_{i,K})^T and further assume that the Z_i follow an i.i.d. multivariate normal distribution, Z_i ~ N(μ, Σ) (3), where μ is the mean and Σ is the covariance matrix. The above model in (1)-(3), known as a logistic normal multinomial model, is a hierarchical model with two levels. The multinomial distribution is imposed on the compositional counts, which is the distribution of the observed data given the multinomial probabilities. In addition, the logistic normal distribution is imposed on the multinomial probabilities as a prior distribution. The logistic normal multinomial model has been applied to microbiome data to detect covariates that are associated with differential microbial taxa [18]. The goal of this paper, however, is to infer interactions between microbial taxa. To this end, we set Ω = Σ^{-1} to be the inverse covariance matrix, or the precision matrix. Ω is the parameter of interest whose non-zero entries encode the conditional dependencies between Z_{i,1}, …, Z_{i,K}, which are interpreted as edges in the microbial interaction network. Our objective is to find a sparse estimator of the inverse covariance matrix Ω based on the observed data (X_1, …, X_n). Naive Estimation A naive approach to estimating Ω is a two-step procedure. First, one can estimate Z_1, …, Z_n from the multinomial distribution by applying the same alr transformation on the counts as in (2), Ẑ_{i,k} = log(X_{i,k} / X_{i,K+1}) (4), and then apply graphical lasso directly on Ẑ_1, …, Ẑ_n by treating them as surrogates for Z_1, …, Z_n: Ω̂ = argmin_{Ω ≻ 0} { tr(Σ̂ Ω) − log det(Ω) + λ ‖Ω‖_1 } (5), where Σ̂ is the sample covariance matrix of Ẑ_1, …, Ẑ_n and λ is a tuning parameter. This naive estimation shares the same spirit as SPIEC-EASI [11], except that the alr transformation is used in (4), whereas SPIEC-EASI uses the central log-ratio (clr) transformation log(X_{i,k} / g(X_i)), where g(X_i) is the geometric mean of the counts X_{i,1}, …, X_{i,K+1}. As noted in Kurtz et al. [11], the clr transformation results in a singular covariance matrix for the transformed data, and thus the non-existence of the inverse covariance matrix. Nonetheless, Kurtz et al. [11] argued that this covariance matrix is an approximation of the covariance matrix of the logged counts log(X_{i,1}), …, log(X_{i,K+1}) and they applied graphical lasso to this covariance matrix directly. Therefore, SPIEC-EASI is built upon the approximation of two covariance matrices, and thus lacks a clear objective function. In this paper, we focus on the alr transformation in (4) instead of the clr transformation and call the resultant estimator of Ω from (5) the naive estimator.
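A minimal sketch of the naive two-step estimator described above, assuming strictly positive counts (zero handling is discussed later in the paper); the function names are ours and sklearn's graphical_lasso is used as a generic solver for (5).

```python
import numpy as np
from sklearn.covariance import graphical_lasso

def alr_transform(counts):
    """Additive log-ratio transform, using the last column (taxon K+1) as reference.

    counts: (n, K+1) array of per-sample taxon counts, assumed strictly positive."""
    counts = np.asarray(counts, dtype=float)
    return np.log(counts[:, :-1] / counts[:, [-1]])    # shape (n, K)

def naive_estimator(counts, lam):
    """Naive estimator: alr-transformed counts as surrogates for Z_i, then
    graphical lasso on their sample covariance matrix, as in equation (5)."""
    z_hat = alr_transform(counts)
    sigma_hat = np.cov(z_hat, rowvar=False)            # naive sample covariance
    _, omega_hat = graphical_lasso(sigma_hat, alpha=lam)
    return sigma_hat, omega_hat
```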
In the logistic normal multinomial model in (1)-(3), the inverse covariance matrix Ω is defined on the true parameters Z_1, …, Z_n but not their estimators Ẑ_1, …, Ẑ_n. The naive estimation treats Ẑ_1, …, Ẑ_n as known, which ignores the variation of Ẑ_1, …, Ẑ_n as the estimators of Z_1, …, Z_n. In Sect. 2.3, we will show that the naive estimator has an estimation bias due to ignoring this variation. Estimation Bias In this subsection, we investigate how the naive sample covariance matrix Σ̂ based on Ẑ_1, …, Ẑ_n estimates the true covariance matrix Σ in (3). In particular, we evaluate the estimation bias of each element in the covariance matrix separately. For 1 ≤ k, l ≤ K, let σ_{kl} = Cov(Z_{i,k}, Z_{i,l}) be the true covariance between Z_{i,k} and Z_{i,l}. Notice that σ_{kl} does not depend on i because Z_1, …, Z_n share the same distribution. The naive estimator of σ_{kl} is the sample covariance σ̂_{kl} = n^{-1} Σ_{i=1}^n (Ẑ_{i,k} − Z̄_k)(Ẑ_{i,l} − Z̄_l) = n^{-1} Σ_{i=1}^n σ̂_{i,kl} (6), where Z̄_k = n^{-1} Σ_{i=1}^n Ẑ_{i,k}. In (6), it is seen that the sample covariance σ̂_{kl} is the arithmetic mean of σ̂_{1,kl}, …, σ̂_{n,kl}, the corresponding contributions from each sample. In the following, we will argue that σ̂_{i,kl} is biased as an estimator of σ_{kl} and so is σ̂_{kl}. When M_i is large, the Taylor expansion of log(X_{i,k}/M_i) around the conditional mean of X_{i,k}/M_i, namely p_{i,k}, gives the following approximation by ignoring higher-order terms: Ẑ_{i,k} ≈ Z_{i,k} + (X_{i,k}/M_i − p_{i,k})/p_{i,k} − (X_{i,K+1}/M_i − p_{i,K+1})/p_{i,K+1} (7). A direct evaluation then leads to the following approximations: E(Ẑ_{i,k}) ≈ μ_k + a_k M_i^{-1} and Cov(Ẑ_{i,k}, Ẑ_{i,l}) ≈ σ_{kl} + b_{kl} M_i^{-1} (8), where a_k and b_{kl} are quantities that do not depend on i. Plugging (8) into E(σ̂_{i,kl}) leads to the following approximation of its estimation bias when M_i and n are both large: E(σ̂_{i,kl}) ≈ σ_{kl} + C_{kl} M_i^{-1} (9). For details of the above derivations, we refer to the Appendix. The result in (9) implies that σ̂_{i,kl} has an approximate bias of the order of M_i^{-1} as an estimator of σ_{kl}, ignoring higher-order terms. In addition, as the arithmetic mean of the σ̂_{i,kl}, the naive sample covariance σ̂_{kl} is also approximately biased, with a bias term of the order of n^{-1} Σ_{i=1}^n M_i^{-1}, when all M_i's and n are large. The expression in (9) has a similar form to a simple linear regression if we treat σ̂_{1,kl}, …, σ̂_{n,kl} as the responses and M_1^{-1}, …, M_n^{-1} as the explanatory variables. In this linear regression, σ_{kl} serves as the intercept and C_{kl} serves as the slope. This observation motivates us to develop a bias correction procedure based on fitting such a simple linear regression in Sect. 2.4. Bias Correction and Graphical Lasso We fit a simple linear regression of the responses σ̂_{1,kl}, …, σ̂_{n,kl} on the explanatory variables M_1^{-1}, …, M_n^{-1}: σ̂_{i,kl} = β_0 + β_1 M_i^{-1} + ε_i (10), and use the least-squares estimator of the intercept β_0, denoted by σ̃_{kl}, to estimate σ_{kl}. It is not hard to show that σ̃_{kl} = Σ_{i=1}^n (n^{-1} + δ_i) σ̂_{i,kl}, where δ_i = −(‖M^{-1}‖_1/n)(M_i^{-1} − ‖M^{-1}‖_1/n) / (‖M^{-1}‖_2^2 − ‖M^{-1}‖_1^2/n), with M^{-1} = (M_1^{-1}, …, M_n^{-1})^T, and ‖⋅‖_1 and ‖⋅‖_2 denote the L1 and L2 norm of a vector, respectively. Compared to the naive estimator σ̂_{kl}, which is an arithmetic mean of σ̂_{1,kl}, …, σ̂_{n,kl}, the bias-corrected estimator σ̃_{kl} is a weighted mean. It is seen that when sample i has a higher sequencing depth M_i, δ_i will be larger and so is the weight, which agrees with the intuition that a higher sequencing depth gives more accuracy in estimating its compositional probabilities. In addition, the fact that Σ_{i=1}^n δ_i = 0 implies that the weights still add up to one, the same as in the naive estimator. Figure 1 presents a scatter plot of the estimated and true covariances in the (1,2)-entry of the covariance matrix from a simulation study. If an estimator has no bias, the points on the scatter plot should lie around the straight line which indicates the equality of the quantities on both axes.
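A minimal numerical sketch of the bias correction just described: for each pair (k, l) the per-sample contributions are regressed on 1/M_i and the fitted intercept is kept as the corrected covariance. The function name and the exact scaling of the per-sample contributions (plain centred products here) are our assumptions; the paper's Appendix gives the precise derivation.

```python
import numpy as np

def bias_corrected_covariance(z_hat, depths):
    """Bias-corrected covariance: regress sigma_hat_{i,kl} on 1/M_i, keep the intercept.

    z_hat:  (n, K) alr-transformed counts
    depths: (n,) sequencing depths M_i"""
    z_hat = np.asarray(z_hat, dtype=float)
    inv_depth = 1.0 / np.asarray(depths, dtype=float)
    n, K = z_hat.shape

    centred = z_hat - z_hat.mean(axis=0)
    design = np.column_stack([np.ones(n), inv_depth])     # intercept + 1/M_i column
    sigma_tilde = np.empty((K, K))

    for k in range(K):
        for l in range(k, K):
            contrib = centred[:, k] * centred[:, l]       # per-sample contributions
            intercept, _slope = np.linalg.lstsq(design, contrib, rcond=None)[0]
            sigma_tilde[k, l] = sigma_tilde[l, k] = intercept
    return sigma_tilde
```

Note that, as a weighted (rather than simple) average, the corrected matrix trades a little extra variance for the removal of the depth-driven bias, which matches the behaviour reported around Figure 1.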
It is obvious that the naive estimator has a substantial bias (left panel) and the bias correction procedure is effective in removing the bias (right panel). This can also be justified by evaluating E(σ̃_{kl}) by combining (9) and (10), which turns out to be approximately unbiased, E(σ̃_{kl}) ≈ σ_{kl}. In other words, the bias-corrected estimator σ̃_{kl} is approximately unbiased when all sequencing depths and the sample size are large. In Fig. 1, we also notice that the variance of the bias-corrected estimator σ̃_{kl} is slightly higher than that of the naive estimator σ̂_{kl}. In this particular simulation, the sample variance of σ̃_{kl} is approximately twice the sample variance of σ̂_{kl}. With the bias-corrected estimator of the covariance matrix Σ̃, whose (k, l)-entry is σ̃_{kl} for 1 ≤ k, l ≤ K, we can apply graphical lasso to achieve a sparse estimator of the inverse covariance matrix Ω = Σ^{-1} as in Ω̃ = argmin_{Ω ≻ 0} { tr(Σ̃ Ω) − log det(Ω) + λ ‖Ω‖_1 } (11). Note that the only difference of (11) from (5) is the replacement of the naive estimator Σ̂ by the bias-corrected estimator Σ̃. Therefore, we call this approach bias-corrected graphical lasso (BC-GLASSO) and the resultant inverse covariance matrix estimator the BC-GLASSO estimator. In the following sections, we will apply BC-GLASSO to simulated data and real data to evaluate its performance by assessing its estimation unbiasedness for the covariance matrix and its identification accuracy for the interaction network. Simulation Studies We perform simulation studies under a variety of settings to assess the effectiveness of bias correction and the accuracy of network identification using BC-GLASSO and to compare its performance with SPIEC-EASI. Note that in addition to adopting the naive approach described in Sect. 2.2, SPIEC-EASI also uses the clr transformation rather than the alr transformation used in BC-GLASSO. For a fair comparison and to highlight the impact of the proposed bias correction technique enabled by more careful modeling of the data, we will adopt the alr transformation in our implementation of SPIEC-EASI instead of its default clr transformation. In addition, we also implement CD-trace and gCoda for comparison. Simulation Settings We consider four types of network structures: the random-edge, cluster, hub, and scale-free networks (Fig. 2). First, in the random-edge network, each pair of nodes, independently of other pairs, has a probability of 0.3 to be connected by an edge. In its corresponding inverse covariance matrix Ω, ω_{kl} is 1 if nodes k and l are connected and 0 otherwise, while ω_{kk} is a constant, common to all k, that controls the condition number of Ω at 100. Second, in a cluster network, the nodes are evenly partitioned into 2 disjoint groups of the same size. Each group forms a cluster that is interconnected as a random-edge network with the connection probability 0.3. Third, similar to the cluster network, the nodes in a hub network are also evenly partitioned into 2 disjoint groups of the same size, and each group has a center to which all the other nodes within the same group are connected. Finally, in a scale-free network, the distribution of degrees (the number of connections each node has to other nodes) follows a power law. The cluster, hub, and scale-free networks as well as their respective inverse covariance matrices are generated using the Huge package in R [20,21]. In this package, we set the off-diagonal element of Ω to be v = 0.3 for the cluster network and v = 0.03 for the hub and scale-free networks. Throughout our simulations, we fix the number of nodes to be K = 50 for all four types of networks.
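As an illustration of the random-edge construction just described, the sketch below builds a K = 50 precision matrix with edge probability 0.3 and a common diagonal value chosen so that the condition number is 100. The closed-form choice of the diagonal is our own shortcut, not necessarily the authors' exact procedure, and the other three network types would come from the huge R package.

```python
import numpy as np

def random_edge_precision(K=50, edge_prob=0.3, target_cond=100.0, seed=0):
    """Random-edge network: off-diagonal entries are 1 for connected pairs, 0 otherwise;
    a common diagonal value is chosen so the condition number equals target_cond."""
    rng = np.random.default_rng(seed)
    adj = np.triu(rng.random((K, K)) < edge_prob, k=1).astype(float)
    omega = adj + adj.T                       # symmetric adjacency, zero diagonal

    # Adding d*I shifts every eigenvalue by d; solve (max + d)/(min + d) = target_cond.
    eigvals = np.linalg.eigvalsh(omega)
    d = (eigvals.max() - target_cond * eigvals.min()) / (target_cond - 1.0)
    return omega + d * np.eye(K)

omega = random_edge_precision()
print(np.linalg.cond(omega))                  # approximately 100 by construction
```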
For each network structure, we simulate microbiome compositional counts on n = 500 samples with heterogeneous sequencing depths given by M = (M_1, ⋯, M_n)^T. We consider three settings for M: (M1) half of the M_i's are K/2.5 and the other half are 40K, (M2) each M_i is independently drawn from the uniform distribution from K/2.5 to 40K, and (M3) the M_i's are generated from the real sequencing depths in the 16S data from the American Gut Project (see Sect. 4). Specifically, for setting (M3), after removing rare OTUs (average relative abundance < 0.01%) in the real data, we compute the total reads of each sample based on a randomly selected set of K + 1 taxa. Then, after removing samples whose total reads are below K + 1, we randomly draw 500 samples from the rest and use their total reads as M. In summary, settings (M1) and (M2) contrast cases with high and low heterogeneity in sequencing depth, while setting (M3) tries to mimic the real situation. Given the inverse covariance matrix Ω for each network structure, we independently draw Z_1, …, Z_n from the multivariate normal distribution N(μ, Ω^{-1}), where μ = (μ_0, ⋯, μ_0)^T. In general, with other factors held fixed, the greater μ_0 is, the rarer the reference taxon tends to be. In our simulations, we use μ_0 = log(4/K) < 0, which implies that the reference taxon is on average more abundant than the other taxa. Given Z_1, …, Z_n and the sequencing depths generated as described in the previous paragraph, the compositional counts are generated based on the multinomial distribution in (1) and (2). The simulated count data may include zero counts. In order to perform the log-ratio transformation to obtain the Ẑ_i's, we add to each count X_{i,k} a small positive number equal to p_k(K + 1), where p_k = n^{-1} Σ_{i=1}^n X_{i,k}/M_i is the estimated mean relative abundance of taxon k. This allows a zero count to be replaced by a positive number whose value depends on the relative abundance of the associated taxon in the other samples for which its observed abundance is non-zero. For each simulation setting, the process described above is independently repeated to create 100 replicates. Simulation Results To assess the effectiveness of BC-GLASSO in correcting the estimation bias in the covariance matrix, we apply BC-GLASSO and SPIEC-EASI to the simulated data sets to evaluate their performances. We compare the estimated covariances to their corresponding true values across simulation replicates to obtain the empirical bias and mean squared error (MSE) separately for each off-diagonal entry in the covariance matrix. Table 1 summarizes the results by averaging the absolute values of the empirical bias and the MSE values across all pairs of taxa. In addition, the entry-wise bias values are visualized in heatmaps (see Fig. S1-S4 in the Supplemental Materials). To assess the accuracy with which a method is able to recover the interaction network, we compare the true network with the inferred network obtained by joining pairs of nodes with non-zero entries in the estimated inverse covariance matrix. The true positive rate is defined to be the frequency with which an edge in the true network is present in the inferred network, and the false positive rate is defined to be the frequency with which an edge not present in the true network is identified in the inferred network.
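A minimal sketch of the count-generating mechanism described at the start of this subsection: draw Z_i from N(μ, Ω^{-1}), map back to multinomial probabilities through the inverse alr transform, draw counts at depth M_i, and add the taxon-specific pseudocount p_k(K + 1). Function names and minor details of the zero handling are illustrative assumptions.

```python
import numpy as np

def simulate_counts(omega, depths, mu0=None, seed=0):
    """Draw compositional counts from the logistic normal multinomial model:
    Z_i ~ N(mu, Omega^{-1}), p_i = inverse-alr(Z_i), X_i ~ Multinomial(M_i, p_i)."""
    rng = np.random.default_rng(seed)
    K = omega.shape[0]
    mu = np.full(K, np.log(4.0 / K) if mu0 is None else mu0)   # reference more abundant
    cov = np.linalg.inv(omega)

    z = rng.multivariate_normal(mu, cov, size=len(depths))     # (n, K)
    logits = np.column_stack([z, np.zeros(len(depths))])       # reference taxon has logit 0
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    counts = np.vstack([rng.multinomial(int(m), p) for m, p in zip(depths, probs)])

    # Taxon-specific pseudocount p_k * (K + 1), added to every count so zeros become positive.
    p_k = (counts / counts.sum(axis=1, keepdims=True)).mean(axis=0)
    return counts + p_k * (K + 1)
```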
We implement BC-GLASSO, SPIEC-EASI, CD-trace, and gCoda with a range of values for the tuning parameter, which allows us to plot the true positive rate of each method against its false positive rate in an ROC curve. For CD-trace and gCoda, because the true network has one less node than the dimension of the estimated inverse covariance matrix, we build the estimated network based on the K-dimensional submatrix on the top left by excluding the last dimension. The ROC curves from different simulation settings are included in Fig. 3. From Table 1, we find that BC-GLASSO effectively reduces the overall bias by up to 2 orders of magnitude. We note that under some scenarios, the bias reduction comes at the cost of a moderate increase in MSE. This is due to a potential increase in the variance of the bias-corrected estimator as compared to the naive estimator. In addition, as demonstrated in Fig. 3, BC-GLASSO outperforms the naive procedure in recovering the interaction network across all network structures and sequencing depth settings, and exhibits a greater advantage over SPIEC-EASI in a setting with high heterogeneity in the sequencing depths (M1) than in a setting with low heterogeneity (M2). Substantial improvement in network recovery is achieved by BC-GLASSO when the sequencing depths are obtained mimicking the real situation in setting (M3). For example, for the scale-free network, BC-GLASSO, when compared to SPIEC-EASI, reduces the false positive rate from 24.7% to 15.4% with a fixed true positive rate of 90%. For a random-edge network, BC-GLASSO increases the true positive rate from 44.8% to 59.0% with a fixed false positive rate of 20%. As further demonstrated in Fig. 3, the comparison with CD-trace shows that BC-GLASSO either outperforms or achieves almost the same performance as CD-trace for most of the settings, with the exception of a hub network with high sequencing depth heterogeneity (M1), for which CD-trace is slightly better than BC-GLASSO. However, BC-GLASSO yields substantial improvement over CD-trace in several settings including the cluster network with setting (M1). Overall, we conclude that BC-GLASSO compares favorably with CD-trace. Moreover, the comparison with gCoda indicates that the performance of gCoda is dominated by that of CD-trace and BC-GLASSO. In summary, BC-GLASSO is effective in reducing bias in the estimation of the covariance matrix compared to SPIEC-EASI. In some scenarios, BC-GLASSO can yield a higher MSE due to the inflation of the estimation variance. However, in all of our simulation scenarios, BC-GLASSO always outperforms SPIEC-EASI in terms of the accuracy of recovering the interaction network represented by the estimated inverse covariance matrix. The overall performance of BC-GLASSO also surpasses that of CD-trace and gCoda in terms of recovering the microbial interaction network. We also investigate the performance of using SPIEC-EASI on rarefied data, where data from samples with higher sequencing depths are subsampled without replacement so that all sequencing depths are equal to the smallest one. We note that BC-GLASSO cannot be applied to rarefied data (after rarefaction all sequencing depths are equal, so the regression on M_i^{-1} used for bias correction is degenerate). Therefore, we compare the ROC curves of three methods: SPIEC-EASI applied to unrarefied data, SPIEC-EASI applied to rarefied data, and BC-GLASSO. The results are summarized in supplemental Fig. S8. It is clearly seen that SPIEC-EASI applied to the rarefied data performs the worst among the three methods: its ROC curve is dominated by the other two methods most of the time.
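A sketch of the ROC construction used in the comparisons above: sweep the graphical-lasso penalty, compare the estimated edge set against the true network, and record (FPR, TPR) pairs. The helper name and the sklearn-based solver are our assumptions; the published comparisons use each method's own implementation.

```python
import numpy as np
from sklearn.covariance import graphical_lasso

def network_roc(sigma_hat, omega_true, lambdas, tol=1e-8):
    """(FPR, TPR) pairs over a grid of penalties; edges are off-diagonal non-zeros."""
    K = omega_true.shape[0]
    upper = np.triu_indices(K, k=1)
    true_edges = np.abs(omega_true[upper]) > tol

    roc = []
    for lam in lambdas:
        _, omega_hat = graphical_lasso(sigma_hat, alpha=lam)
        est_edges = np.abs(omega_hat[upper]) > tol
        tpr = (est_edges & true_edges).sum() / max(true_edges.sum(), 1)
        fpr = (est_edges & ~true_edges).sum() / max((~true_edges).sum(), 1)
        roc.append((fpr, tpr))
    return np.array(roc)
```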
This is not very surprising. In our simulation settings, the sequencing depth varies from sample to sample, and in all settings for the sequencing depth, (M1)-(M3), the smallest sequencing depth is usually quite small. As such, rarefaction results in a considerable loss of information for the samples with higher sequencing depths and also adds artificial uncertainty through the random subsampling [16]. Therefore, SPIEC-EASI applied to the rarefied data performs the worst.

Real Data Analysis

We illustrate the use of BC-GLASSO with an analysis of 16S data from the American Gut Project (AGP) [19] to infer the interaction network between microbial taxa in the human gut. We focus on the data collected on 3,679 stool samples and remove from the data set samples collected from other body sites. To better capture the variation in the composition of the human microbiota, it is helpful to take into account subgroup structure of the population that may exhibit fundamentally different biological properties. Recent research has found evidence that, while the gut microbiome takes on smooth gradients of compositional diversity across individuals [22], human populations can generally be stratified into two main clusters, referred to as "enterotypes," based on the abundance of specific taxa [23]. These enterotypes are compositionally and potentially functionally distinct [24]. In our analysis, we propose to estimate the microbial interaction network separately for each enterotype, which helps minimize the detection of spurious interactions due to confounders related to population stratification and enhances our ability to identify biologically relevant interactions. Recent studies have shown the existence of two common enterotypes, one marked by a high relative abundance of Bacteroides and the other by a high relative abundance of Prevotella [25], and that the Prevotella-to-Bacteroides ratio (P/B ratio) can be used to effectively classify humans into these subpopulations [26]. In our analysis of the AGP data, we analyze two groups of samples separately: the P group, which includes samples whose P/B ratio is in the upper 25%, and the B group, which includes samples whose P/B ratio is in the lower 25% (Fig. 4). We conduct our analysis at the genus level and aggregate the counts for OTUs that belong to the same genus. OTUs that do not have genus-level taxonomic information are aggregated into a pseudo-genus, which we will refer to as the "unlabeled genus." We filter the data to remove samples with very low sequencing depths and genera that are highly sparse. More specifically, samples with total reads smaller than 100 are removed from the analysis. Separately for the two groups based on the P/B ratio, we remove genera with zero abundance for over 95% of the subjects within a group. The resulting data set has 786 subjects and 143 genera in the P group, and 864 subjects and 142 genera in the B group. To select the reference taxon, we aim to find a genus whose abundance remains stable across samples. To this end, we evaluate the inter-sample dispersion of the relative abundance of each taxon and find the taxon whose relative abundance is the least dispersed. More specifically, we first calculate each genus' relative abundance in a sample by taking the ratio of its observed count to the total read count for the sample.
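A minimal sketch of this preprocessing, assuming a genus-level count table as input (thresholds from the text, names ours), is shown below; the dispersion criterion described next is then applied to the resulting relative abundances.

```python
import numpy as np

def preprocess_group(counts, min_total_reads=100, max_zero_fraction=0.95):
    """Filter samples and genera for one P/B group and compute relative abundances.

    counts : (n_samples, n_genera) genus-level count matrix for the group
    """
    # Drop samples with very low sequencing depth (total reads < 100).
    depth = counts.sum(axis=1)
    counts = counts[depth >= min_total_reads]

    # Drop genera with zero abundance in more than 95% of the remaining samples.
    keep = (counts == 0).mean(axis=0) <= max_zero_fraction
    counts = counts[:, keep]

    # Relative abundance: each genus count divided by the sample's total reads.
    rel_abund = counts / counts.sum(axis=1, keepdims=True)
    return counts, rel_abund
```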
Then, we measure the dispersion of a taxon's relative abundance by the coefficient of variation, defined as the standard deviation of the relative abundance across samples divided by its mean. (Fig. 4: empirical distribution of the log(P/B ratio) from the AGP data and the regions of this distribution corresponding to the P group and the B group.) The genus that minimizes the coefficient of variation is taken as the reference taxon. This criterion is applied separately for the P group and the B group, and for both groups the unlabeled genus is chosen to be the reference. This reference has a non-zero count for over 95% of the samples in both groups. We apply BC-GLASSO and SPIEC-EASI to estimate the inverse covariance matrix separately for the two groups. For both methods, the value of the tuning parameter is selected using a grid search based on BIC. In Figs. 5, 6, and 7, we compare the results based on BC-GLASSO and SPIEC-EASI for the P group. The comparison between the two methods is qualitatively similar for the B group, and we show the corresponding results in Fig. S5-S7 in the Supplemental Materials. Fig. 5 visualizes the estimated correlation matrices based on the two methods. To visualize the microbial interactions identified by each method, we show the heatmap of the estimated inverse covariance matrix, where a non-zero entry corresponds to an edge in the inferred microbial network (Fig. 6). Because an entry in an inverse covariance matrix is connected to the negative of the partial correlation between two variables [27], we show the inverse covariance matrices on the negative scale. Both methods are able to identify two clusters of taxa that are densely connected to each other. The first cluster includes about 14 genera from the family Enterobacteriaceae, and the second cluster includes all of the seven genera from the family [Tissierellaceae]. Comparing the inverse covariance matrices based on the two methods, however, SPIEC-EASI seems to lead to a background rate of additional non-zero interactions that arise evenly from all families, while for BC-GLASSO the identified interactions align more closely with the taxonomic relationships of the genera, showing a clearer trend of genera from the same family exhibiting similar patterns of interactions. We are further interested in the interactions that are detected exclusively by one method. In summary, among the 142 genera analyzed for the P group (the 143 genera excluding the reference), there are a total of 10,011 potential pairwise interactions. Among these, 938 interactions are identified by both methods, BC-GLASSO detects an additional set of 151 interactions, and SPIEC-EASI detects an additional set of as many as 1,460 interactions (Fig. 7b). Interestingly, 114 out of the 151 interactions exclusively identified by BC-GLASSO (shown in red in Fig. 7a) are associated with a small number of genera, all of which are from the family Enterobacteriaceae. These genera are found to have extensive interactions with the rest of the microbiome, which are captured by BC-GLASSO. The genera include Pantoea (44), Enterobacter (34), Plesiomonas (14), Klebsiella (12), and Erwinia (10), where the numbers in parentheses indicate how many edges associated with a genus are uniquely identified by BC-GLASSO. The interactions associated with these genera exclusively identified by BC-GLASSO are listed in Supplemental Table S1. In contrast, the 1,460 interactions that are present only in the network produced by SPIEC-EASI (shown in white in Fig.
7a) are widespread and distributed across all families, rather than concentrated within specific taxa. The unique interactions revealed by BC-GLASSO represent important discoveries that can advance follow-up research and potentially impact the development of clinical resources. For example, members of the genera Enterobacter, Klebsiella, and Plesiomonas include known opportunistic pathogens and pathobionts that can impact the health of the host when highly abundant in the gut [28][29][30]. Our identification of additional linkages between members of these genera and other gut taxa can help guide experiments that uncover how other gut taxa impact the success of these organisms in the gut, possibly through competitive exclusion or biocontrol.

Discussion

It is increasingly recognized that microbiome data have unique characteristics that require tailored statistical methods. With these characteristics in mind, in this paper we focus on the problem of inferring the interaction network between microbial taxa through sparse inverse covariance estimation for microbiome data. We have highlighted a key disadvantage of the popular proportion-based approach, namely the bias that originates from the failure to properly model the abundance counts and to adequately capture the variation in the data. To address this issue, we have developed BC-GLASSO, a model-based method for inverse covariance estimation which directly tackles the compositional count data and exploits the heterogeneity in sequencing depth. Features of BC-GLASSO include: (a) the method is based on a hierarchical model in which the technical variation of the count data is modeled using a multinomial distribution and the biological (i.e., inter-sample) variation of the microbiome composition is modeled using a logistic normal distribution on the multinomial probabilities; (b) the unevenness in sequencing depth, which frequently poses a challenge in microbiome data analysis, is not only properly accounted for in our model but also taken advantage of to correct the bias in the estimator; (c) despite the hierarchical model used in BC-GLASSO, the method remains computationally fast even on big data sets owing to the linear models underlying the bias correction procedure. We have demonstrated the advantage of BC-GLASSO relative to a leading approach through simulation studies. In particular, BC-GLASSO consistently outperforms the competing method under a variety of network structures and different setups for the sequencing depths. The strength of BC-GLASSO is manifested by a greater accuracy in the inferred network as well as the reduced bias of the covariance estimator. We have also applied BC-GLASSO to infer the microbial interaction network in a data set from the American Gut Project, where BC-GLASSO detected a group of genera from the Enterobacteriaceae family which have extensive interactions with the rest of the microbiome. In our presentation of BC-GLASSO, the η_i's are assumed to follow N(μ, Σ). We note that the normality assumption on the η_i's is not essential. In fact, even without specific distributional assumptions on the η_i's, all theoretical derivations and properties reported in this paper on the entries of the covariance estimator in BC-GLASSO remain valid so long as the η_i's are assumed to be i.i.d. with some mean μ and covariance Σ. The use of the graphical lasso in the second step of BC-GLASSO, however, does assume normality of the η_i's.
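For readers who want the hierarchical model in feature (a) written out explicitly, a sketch consistent with the description above is the following; the symbols η_i, μ, Σ and the additive log-ratio link are our notational choices and may differ from the exact display in (1)-(2).

```latex
% Logistic-normal multinomial hierarchy (notation illustrative)
\begin{aligned}
\eta_i &\overset{\text{i.i.d.}}{\sim} N(\mu, \Sigma), \qquad i = 1, \dots, n,\\
p_{i,k} &= \frac{\exp(\eta_{i,k})}{1 + \sum_{l=1}^{K} \exp(\eta_{i,l})}, \quad k = 1, \dots, K,
\qquad p_{i,K+1} = \frac{1}{1 + \sum_{l=1}^{K} \exp(\eta_{i,l})},\\
(X_{i,1}, \dots, X_{i,K+1}) \mid \eta_i &\sim \operatorname{Multinomial}\big(M_i;\ p_{i,1}, \dots, p_{i,K+1}\big),
\end{aligned}
```

so that taxon K + 1 serves as the reference and η_{i,k} = log(p_{i,k}/p_{i,K+1}).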
One caveat of our approach from the perspective of biological interpretation is that the interaction network excludes up to K possible edges that could theoretically exist in the community being studied. Specifically, potential interactions between the reference taxon and all other taxa are not modeled by our method and thus cannot be analyzed or interpreted by users. If users are interested in understanding how particular taxa relate to the rest of the community, they should avoid using those taxa as the reference, given that our approach excludes the reference taxon from the final interaction network. Without extensive prior biological information, we recommend picking as the reference a taxon which is present in the majority of the samples and whose relative abundance is least dispersed across samples. Microbiome studies often record metadata on the samples, including covariates such as the age, sex, and dietary information of a subject. Some of these covariates have been associated with the abundance of particular taxa in the microbiome. It can be helpful for such associations to be accounted for when estimating the microbial interaction network. To this end, a potential approach is to extend BC-GLASSO to incorporate covariates in the hierarchical model. This may be done, for example, by allowing the mean μ to depend on covariates. This is beyond the scope of this paper but may present worthwhile opportunities for future research. Although BC-GLASSO is motivated by problems that arise in microbiome research, it can be applied to compositional count data from other types of applications so long as the total count varies substantially across samples. Examples include ecological data on species abundance [31], where it may be of interest to estimate the ecological relationship between species, and RNA-Seq data, where it may be of interest to infer the regulatory relationship between genes.
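The covariate extension suggested above could, for instance, take the following form; this display is our illustration of the idea rather than a result from the paper, and the linear regression structure is an assumption.

```latex
% Covariate-adjusted mean for the latent log-ratios (illustrative sketch)
\eta_i \mid z_i \sim N\big(\mu(z_i), \Sigma\big), \qquad \mu(z_i) = \beta_0 + B\, z_i,
```

where z_i collects sample-level covariates (e.g., age, sex, diet), β_0 is an intercept vector and B a coefficient matrix; the interaction network would still be read off from the sparsity pattern of Σ⁻¹ after adjusting for z_i.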
9,153.2
2020-06-04T00:00:00.000
[ "Computer Science" ]
Determination of the Effects of Sevoflurane Anesthesia in Different Maturing Stages of the Mouse Hippocampus by Transcriptome Analysis

Purpose: Postoperative cognitive dysfunction (POCD) is a serious complication after general anesthesia. POCD is more likely to occur in elderly patients, but the mechanism of POCD has not been fully elucidated. We hypothesized that differences in the mRNA expression profile of the brain depending on the maturing stage cause the difference in the effect of sevoflurane anesthesia. We investigated the mRNA expression profiles of hippocampal cells in young mice and in aged mice under sevoflurane anesthesia using transcriptome analysis. Methods: This study was conducted after approval from our institutional animal ethics committee, the Animal Research Center of Sapporo Medical University School of Medicine (project number: 12-033). Eight mice were assigned to two groups: a young group and an aged group. The 4 mice in each group were anesthetized with 3.5% sevoflurane for 1 hour. Subsequently, mRNA was isolated from hippocampal cells and RNA sequencing was performed on an Illumina HiSeq 2500 platform. Mapping of the quality-controlled, filtered paired-end reads to mouse genomes and quantification of the expression level of each gene were performed using R software. Results: The Lhx9 gene, which is thought to be associated with neuronal inflammation, was the most highly upregulated gene in aged mice. The Epyc gene, which encodes a protein related to the phospholipase-C pathway and ERK signaling, was the most down-regulated gene in aged mice. Conclusions: The findings suggest that sevoflurane anesthesia induces neuronal inflammation via a LIM-homeodomain family-related gene in aged mice and causes POCD.

Introduction

Postoperative cognitive dysfunction (POCD) is a frequent and serious complication after general anesthesia [1]. POCD is known to have a negative impact on the quality of life of affected patients [2]. Despite the high prevalence of POCD, the mechanism of POCD has not been fully elucidated. Recent studies have revealed that clinical risk factors of POCD include frontal cortex function, lifestyle, medication, and age [3][4][5][6]. General anesthesia might cause neuroinflammation in the developing brain [7], but it is difficult to determine cognitive changes caused by the anesthetic agent per se. POCD is usually transient, and it is difficult to establish clear diagnostic criteria for POCD [1,8]. Elucidation of the biological mechanism of POCD would be useful for improving the diagnosis and prevention of POCD. It is known that the requirement for volatile anesthetics decreases with advancing age [9]. This suggests that volatile anesthetic agents cause different biological changes depending on the brain maturing stage. We previously reported, using transcriptome analysis, that exposure to sevoflurane changes the mRNA profile in the juvenile mouse hippocampus. In the juvenile mouse, the Lhx9 gene was highly down-regulated by sevoflurane exposure, while the Rtn4rl2 gene was highly up-regulated [10]. The Lhx9 gene encodes a LIM-homeodomain factor, which is essential for the development of thalamic neurons [11]. The Rtn4rl2 gene encodes the Nogo receptor, which is involved in the adhesion of dendritic cells to myelin in the central nervous system [12]. These findings suggest that sevoflurane anesthesia induces neuroinflammation in juvenile mice, but data for aged mice have not been shown.
Surgical stress induces systemic inflammation and increases levels of cytokines such as TNF-alpha. After inflammatory cytokines cross the blood-brain barrier, they activate glial cells, which cause neuroinflammation. Cholinergic neurons alter the activation of glial cells, but this alteration is affected by aging. Consequently, the aging of cholinergic neurons is thought to be a potential biological mechanism of POCD and the reason why POCD is likely to occur in elderly patients [13]. Is general anesthesia itself harmful for the aged brain [13]? Is the anesthetic agent itself likely to cause neuroinflammation in the aged brain? Alternatively, the anesthetic agent might activate unknown pathways that lead to the occurrence of POCD. We hypothesized that the change in the mRNA expression profile in aged mice after sevoflurane exposure is different from that in juvenile mice, especially in the hippocampus, which integrates memory and cognitive function [14]. Recent progress in genomics has enabled us to comprehensively analyze cellular modifications at the gene expression level using transcriptome analysis. The DNA microarray technique has uncovered various mechanisms of diseases; however, there has been no investigation of the association between POCD and the hippocampus by a transcriptome-wide association study. In this study, the mRNA expression profiles of hippocampal cells in juvenile mice and in aged mice under sevoflurane anesthesia were investigated by using transcriptome analysis.

Materials and Methods

With approval from the Sapporo Medical University School of Medicine animal ethics committee (project number: 12-033) for this study, male C57/BL6 mice (8 weeks old, body weight of 20-25 g) were purchased from Japan SLC, Inc. (Hamamatsu, Japan) and housed at 22°C under controlled lighting (12:12-hour light/dark cycle) with food and water provided ad libitum. Eight male mice were assigned to two groups: a young group (8 weeks of age, n=4) and an aged group (35 weeks of age, n=4). In both groups, 3.5% sevoflurane (Maruishi Co., Ltd., Shizuoka, Japan) in 100% oxygen was provided to mice in a plastic chamber for 1 hour. The mice were then decapitated after being anesthetized with 3.5% sevoflurane. The brain of each mouse was immediately removed from the skull, frozen at -70°C with 2-methylbutane, and placed in a Petri dish containing ice-cold phosphate-buffered saline. The brain was cut along the longitudinal fissure of the cerebrum, and the regions posterior to the lambda were cut off using tissue matrices (Brain Matrices, EM Japan, Tokyo, Japan). Thereafter, the brain was placed with the cortex of the left hemisphere facing down and any noncortical forebrain tissue was removed. Tissue blocks containing hippocampal cells were obtained using Brain Matrices (EM Japan). Meningeal tissue was removed from the hemisphere according to a previously described method [15]. Finally, dissected hippocampal cells were homogenized and lysed into six samples for each mouse using the RNeasy® Plus Micro Kit (Qiagen, Hilden, Germany) and QIAcube (Qiagen). Quality control for isolated RNA was performed using the Agilent 2200 TapeStation system (Agilent Technologies, Santa Clara, CA, USA). To pass the initial quality control step, a sample had to yield >1 μg of RNA and to have an equivalent RNA integrity number (eRIN) of ≥ 8. The eRIN determined by a 2500 Bioanalyzer instrument (Agilent Technologies) has been reported to provide accurate information [16].
Isolated RNA was then pooled into two samples per group and labeled. A cDNA library was prepared using TruSeq® RNA Library Prep Kits (Illumina, Inc., San Diego, CA, USA) according to the manufacturer's instructions. RNA-seq was performed in the paired-end (101 cycles × 2) mode on an Illumina HiSeq 2500 platform (Illumina, Inc.). Base call (.bcl) files for each cycle of sequencing were generated by Illumina Real Time Analysis software (Illumina, Inc.), underwent primary analysis, and were de-multiplexed into FASTQ (.fastq) files using Illumina's BCL2FASTQ conversion software (ver. 1.8.4, Illumina, Inc.). Raw paired-end RNA-seq reads in FASTQ format were assessed for base call quality, cycle uniformity, and contamination using FastQC (http://www.bioinformatics.bbsrc.ad.uk/projects/fastqc/). Mapping of the quality control-filtered paired-end reads to mouse genomes and quantification of the expression level of each gene were performed using R software (ver. 3.1.1 with the TCC package) [17,18]. The quality control-filtered paired-end reads were mapped to the public mouse genome data published by UCSC (NCBI37/mm9, http://genomes.UCSC.edu/). Differential gene sets were filtered to remove those with fold changes <1.5 (up- or down-regulated) and with a false discovery rate-corrected P value of 0.05. Sample size was calculated with the following parameters: power ≥ 0.8, probability level <0.05, and anticipated effect size=14.

Results

All total RNA samples had a yield of ≥ 1 µg and an eRIN value of ≥ 8. The average number of base calls after primary filtration was 41,778,221 base pairs, and the average mean quality score (Phred quality score) was 37.1. We investigated changes in the expression levels of a total of 37,681 genes (Supplementary Table 1). A total of 7,716 genes were filtered out because they showed little change in mRNA expression levels. MA plotting showed a total of 7,027 genes that were expressed differentially between the maturing stages. The Lhx9 gene was the most highly upregulated in aged mice (Table 1). The Htr5b gene, which encodes the serotonin receptor, the Cbln3 gene, which encodes cerebellin 3 precursor protein, and the Gabra6 gene, which encodes the gamma-aminobutyric acid type A (GABAA) receptor alpha 6 subunit, were highly up-regulated in aged mice (log2 ratios being 7.48, 7.33, and 6.27, respectively). The Epyc gene was the most down-regulated gene in aged mice (Table 2). The Oprd1 gene, which encodes the delta opioid receptor, the Drd1a gene, which encodes dopamine receptor D1A, and the Adora2a gene, which encodes the adenosine A2a receptor, were highly down-regulated in aged mice (log2 ratios being 7.64, 5.54, and 5.52, respectively).

Discussion

We first confirmed the quality of the RNA samples for transcriptome analysis. The quality and amount of RNA samples are likely to vary depending on the type, state, and part of the tissue, and confirmation of the quality is an important requirement for transcriptome analysis [19]. Using a previously described method, we homogenized some of the hippocampal cells without any tissue fixation or freezing technique [15]. Consequently, we were able to obtain quality-controlled RNA samples in this study [20]. We investigated a total of 37,681 genes using the genome data published by UCSC. A total of 18,814 genes showed very small average expression levels of mRNA, namely less than 1 count per sample, in the hippocampus of both juvenile and aged mice.
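The counting, filtering, and differential-expression thresholds described above can be summarized in a small sketch; this is a generic Python illustration (the study itself used R with the TCC package), and the column names, pseudocount, and function signature are hypothetical.

```python
import numpy as np
import pandas as pd

def flag_differential_genes(counts_young, counts_aged, fdr,
                            min_mean_count=1.0, min_fold_change=1.5, max_fdr=0.05):
    """Filter low-expression genes and flag differentially expressed ones.

    counts_young, counts_aged : pd.DataFrame, genes x samples, normalized counts
    fdr                       : pd.Series of FDR-corrected P values per gene
    """
    mean_young = counts_young.mean(axis=1)
    mean_aged = counts_aged.mean(axis=1)

    # Drop genes expressed at very low levels in both groups (< 1 count per sample).
    expressed = (mean_young >= min_mean_count) | (mean_aged >= min_mean_count)

    # Fold change between groups; the 0.5 pseudocount (illustrative) avoids division by zero.
    log2_ratio = np.log2((mean_aged + 0.5) / (mean_young + 0.5))

    significant = (np.abs(log2_ratio) >= np.log2(min_fold_change)) & (fdr <= max_fdr)
    return pd.DataFrame({
        "log2_ratio": log2_ratio,
        "expressed": expressed,
        "differentially_expressed": expressed & significant,
    })
```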
In the remaining 18,867 genes, we found that a total of 7,027 genes were differentially expressed between the groups in this study. These data support the idea that mRNA expression levels in hippocampal cells differ depending on the maturing stage and suggest mechanisms underlying the differences in the efficacy of sevoflurane among maturing stages. Understandably, since a very large number of genes were expressed differently in the two groups, we could not identify the factor that critically alters the effect of sevoflurane in this study. Further study is needed to identify the factor that alters the effect of sevoflurane. Next, we demonstrated that the Lhx9 gene was the most up-regulated gene in aged mice. In our previous study, the Lhx9 gene was found to be the most down-regulated gene in anesthetized juvenile mice, and we therefore could not determine whether the Lhx9 gene was up-regulated in aged mice by sevoflurane per se [10]. However, the Lhx9 gene showed divergent mRNA expression between juvenile and aged mice in the hippocampus. The Lhx9 gene encodes a LIM-homeodomain factor that is essential for the development of the gonads, spinal cord interneurons, and thalamic neurons [11,21,22]. In juvenile mice, sevoflurane might suppress brain development via LIM-homeodomain factors or compensate for the hyperexcitability of the thalamocortical network by suppressing LIM-homeodomain factors [23], while in aged mice sevoflurane exposure might increase Lhx9 gene expression or leave it unchanged. If it is assumed that expression of the Lhx9 gene enhances neuroinflammation in the mouse hippocampus, sevoflurane might not induce neuroinflammation in aged mice, or the neuroprotective mechanism might be vulnerable in aged mice. Expression of the Lhx9 gene might contribute to the development of POCD, and this could be the focus of future research. The Htr5b gene and the Cbln3 gene were also highly up-regulated in aged mice in this study. Serotonin receptors encoded by the Htr5b gene are widely distributed in the central and peripheral nervous systems and play a role in neurotransmission [24]. Serotonin antagonists are used as anti-emetic agents for chemotherapy-induced emesis and postoperative nausea and vomiting. Our previous results also showed that serotonin receptor genes were not up-regulated by sevoflurane exposure in juvenile mice. These results might suggest that serotonin antagonists are more effective for postoperative nausea and vomiting in aged patients. The Cbln3 gene is known as a protein-coding gene whose product accumulates at parallel fiber-Purkinje cell synapses, and the proteins provide an anatomical basis for a common signaling pathway regulating circuit development and synaptic plasticity in the cerebellum [25]. Assuming that the expression level of the Cbln3 gene is increased because it acts protectively against neuroinflammation caused by sevoflurane, the juvenile brain might be more prone to neuroinflammation caused by sevoflurane. Therefore, further investigation is needed to determine whether the Cbln3 gene has a protective effect in the hippocampus. Notably, the Gabra6 gene, which encodes GABAA receptor subunit alpha 6, was highly up-regulated in aged mice. GABAA receptors increase tonic inhibition in somatostatin interneurons and alter circuit activity within the dentate gyrus [26]. GABAA receptors are also known to be a potential target of volatile anesthetics [27]. The Epyc gene was the most down-regulated gene in aged mice.
The Epyc gene is located in the mapping interval of MYP3, which has been suggested to be a candidate locus for high myopia [28,29]. The EPYC protein is predominantly expressed in cartilage, and it is important for fibrillogenesis through the regulation of collagen fibrils [30,31]. It is unclear whether the Epyc gene is associated with the effect of sevoflurane. The Oprd gene, which encodes the delta-opioid receptor (OPRD), and the Drd1a gene, which encodes the dopamine receptor D1a, were also highly down-regulated in aged mice. Ghrelin, which has been identified as the endogenous ligand for the growth hormone secretagogue receptor 1 alpha, induces acute pain and increases OPRD mRNA expression [32]. The serum growth hormone concentration in juvenile mice might be higher than that in aged mice and might cause the higher expression level of the Oprd gene in the brain. The methods used in this study might have been more harmful for juvenile mice than for aged mice, or it is possible that juvenile mice are more likely to feel pain than aged mice. This result regarding Oprd mRNA expression suggests that juvenile mice should be handled without painful procedures. Further investigation is needed to determine whether the treatment of mice affects the expression of the Oprd gene. The dopamine D1 receptor in the hippocampus is essential for the functional relationship between associative learning and synaptic strength at the CA3-CA1 synapse [33]. D1 receptor knock-out mice are known to have reduced spatial learning and fear learning. Sevoflurane per se might inhibit expression of the Drd1a gene in the hippocampus in aged mice and/or enhance expression of the Drd1a gene in juvenile mice. The juvenile mice showed more than 300 counts of Drd1a mRNA per sample, while the aged mice showed less than 10 counts per sample in this study. Therefore, the difference between juvenile and aged mice in the expression level of the Drd1a gene in the hippocampus suggests a difference in postoperative spatial cognitive function. Interestingly, the Adora2a gene, which encodes the adenosine A2a receptor, was also highly down-regulated in aged mice in this study. The adenosine modulation system mostly operates through inhibitory A1 receptors and facilitatory A2 receptors, and these receptors mutually switch synaptic activities in the brain [34]. Brain insults up-regulate the adenosine A2a receptor through adaptive change of the brain, and the adenosine A2a receptor bolsters neuronal plasticity. The Adora2a gene has been reported to show an age-dependent decrease in the human hippocampus. In this study, the Adora2a mRNA expression level was dramatically decreased in aged mice, whereas the published database showed that the mRNA expression level in the elderly human hippocampus was only half of that in the juvenile human [35]. This difference suggests that sevoflurane per se inhibits expression of the Adora2a gene in the hippocampus of aged mice, or that the Adora2a gene is expressed differently among animal species. The adenosine A2a receptor has been reported to be associated with caffeine-induced insomnia [36]. Down-regulation of the Adora2a gene might influence the excitation at emergence from general anesthesia and cause POCD in aged patients. Further study is needed to confirm the association between Adora2a mRNA expression and POCD. We could not determine whether the changes in mRNA expression levels of individual genes were caused by sevoflurane per se or by other pathways.
However, our results indicated that there was age-dependent variation in the mRNA expression profile. Although the molecular mechanisms of POCD after sevoflurane exposure were predicted in the present study, further experiments based on the regulation of individual genes are needed to confirm our speculations. Furthermore, we did not examine the behaviors of the animals that might suggest spatial learning, because the mRNA expression profile might change while recording their behavior. While our data cannot be directly extrapolated to humans, they might provide clues for the molecular mechanism of POCD. In addition, the sample size was small in this study, despite having been determined to obtain a power of ≥ 0.8, and we overlooked changes in the expression of genes that were expressed at low levels. Further studies with larger numbers of samples are needed to confirm the changes in genes that are expressed at low levels. In conclusion, expression of the Lhx9 gene, which is thought to be associated with neuronal inflammation, was the most highly upregulated in aged mice. The Epyc gene, which encodes a protein related to the phospholipase-C pathway and ERK signaling, was the most down-regulated in aged mice. These findings may be useful for exploring the mechanisms of POCD and neuronal inflammation after general anesthesia.
3,997
2017-05-05T00:00:00.000
[ "Biology" ]
Giant atoms with time-dependent couplings

We study the decay dynamics of a two-level giant atom that is coupled to a waveguide with time-dependent coupling strengths. In the non-Markovian regime where the retardation effect cannot be ignored, we show that the dynamics of the atom depends on the atom-waveguide coupling strengths at an earlier time. This allows one to tailor the decay dynamics of the giant atom and even realize a stationary population revival with appropriate coupling modulations. Moreover, we demonstrate the possibility of simulating the quantum Zeno and quantum anti-Zeno effects in the giant-atom model with periodic coupling quenches. These results have potential applications in quantum information processing and quantum network engineering.

I. INTRODUCTION

Giant atoms have spurred a rapidly growing interest in the past few years due to the exotic self-interference effects therein [1]. Such systems feature nonlocal interactions between the atoms and the waveguide fields, which are possible if the atomic size is much larger than the wavelength of the field [2][3][4][5][6][7] or if the field is confined in a meandering waveguide that can contact each atom multiple times [8][9][10]. While the interaction at each atom-field coupling point can still be well described by the dipole approximation, the atoms in these systems can no longer be viewed as points, and the phase accumulations of photons (or phonons) between different coupling points should be taken into account. To date, a variety of intriguing phenomena have been witnessed in giant-atom structures, such as frequency-dependent Lamb shifts and relaxation rates [8], decoherence-free interatomic interactions [9,11-14], unconventional bound states [15][16][17][18][19][20][21], and phase-controlled frequency conversions [22,23]. Most recently, giant atoms have also been extended to the non-perturbative regime [24], to chiral quantum optics [12-15, 25, 26], and to synthetic dimensions [27]. Besides the progress above, it is also an interesting topic to study non-Markovian retardation effects in giant-atom systems. Indeed, such effects are common, and should be taken into account if the propagation time of photons between different coupling points is comparable to or even larger than the lifetime of the atom [6]. In this case, both the dynamic evolutions [6,16,28,29] and the stationary scattering properties [23,30,31] of the giant atom exhibit significant non-Markovian features that have no counterparts in the Markovian regime, such as bound states that oscillate persistently between the giant atom and the one-dimensional continuum [16] and non-Markovianity-induced nonreciprocity [23]. Moreover, non-Markovian retardation effects have also been well studied in systems featuring a semi-infinite waveguide, where a small atom placed in front of the waveguide end can be mapped into a giant atom with two identical coupling strengths [32][33][34][35][36]. In these works, however, the atom-field couplings are assumed to be constant and the non-Markovian retardation effects affect the dynamics in a relatively simple manner. In this paper, we consider a two-level giant atom that is coupled to a waveguide with time-dependent coupling strengths. We reveal that the non-Markovian retardation effect is closely related to the instantaneous atom-waveguide coupling strengths at an earlier time.
Based on this mechanism, we consider some simple modulation schemes for the time-dependent couplings, which enable dynamical control of the decay dynamics of the giant atom without changing the frequency of the emitted photons. In particular, it is possible to observe a limited energy backflow from the waveguide to the atom via a sudden change in the coupling strengths. Moreover, we demonstrate how to simulate the quantum Zeno effect (QZE) and quantum anti-Zeno effect (QAZE) [37][38][39][40][41] through a sequence of coupling quenches. This provides an alternative platform for studying Zeno physics and quench dynamics in open quantum systems.

II. MODEL AND EQUATIONS

We consider a two-level giant atom, which is coupled to a waveguide at two different points with time-dependent coupling strengths g_1(t) and g_2(t), respectively, as shown in Fig. 1(a). For ultracold atoms in optical lattices, time-dependent couplings can be implemented by dynamically modulating the relative position of the potentials. Such a scheme has recently been used to simulate an effective giant atom coupled to a high-dimensional bath [42]. Here we consider a viable solid-state implementation platform based on superconducting quantum circuits. As shown in Fig. 1(b), a transmon qubit is coupled to a superconducting transmission line with the interaction mediated by a Josephson loop (i.e., a loop containing a Josephson junction) at each coupling point [26]. In this way, the time-dependent couplings can be achieved by modulating the external fluxes through the loops [43][44][45][46]. In this case, the Hamiltonian of the system can be written in the form of Eq. (1) (ℏ = 1 hereafter), where σ_+ (σ_−) is the raising (lowering) operator of the giant atom with transition frequency ω_0; a†_k (a_k) is the creation (annihilation) operator of the waveguide mode with frequency ω_k and wave vector k (along the propagation direction); d is the separation distance between the two atom-waveguide coupling points. In Eq. (1), we have employed the Weisskopf-Wigner approximation [47] (the intensity of the atomic power spectrum is concentrated at the transition frequency ω_0) such that the coupling strengths g_{1,2}(t) can be treated as k-independent. In the single-excitation subspace, the state of the system at time t takes the form of a superposition of the atomic excitation and the single-photon waveguide states, where c_k(t) [c_e(t)] is the probability amplitude of creating a photon with wave vector k in the waveguide (of exciting the giant atom) and |V⟩ denotes the ground state of the whole system. By solving the Schrödinger equation, one obtains the coupled equations of motion for c_e(t) and c_k(t), where the superscript dot denotes the derivative with respect to time t. For the case in which the waveguide field is initialized in the vacuum state, the formal solution of c_k(t) can be written as an integral over the history of the atomic amplitude, where we have changed the integration variable via ∫_{−∞}^{+∞} dk = 2∫_{0}^{+∞} dω_k/v_g, with v_g the group velocity of the emitted photons in the waveguide. According to the Weisskopf-Wigner approximation, one can assume ω_k ≈ ω_0 + ν = ω_0 + (k − k_0)v_g in the vicinity of ω_0, with k_0 the wave vector corresponding to ω_0 [48,49]. In this way, Eq.
(5) becomes Eq. (6) (see Appendix A for more details), where φ = k_0 d and τ = d/v_g are the phase accumulation and the propagation time of photons traveling between the two atom-waveguide coupling points, respectively; Γ_j(t, 0) = 4πg_j(t)²/v_g (j = 1, 2) is the instantaneous decay rate of the atom at the jth coupling point and Γ_12(t, τ) = 2π[g_1(t)g_2(t − τ) + g_1(t − τ)g_2(t)]/v_g describes the retarded correlation decay due to the giant-atom structure; Θ(t) is the Heaviside step function. Equation (6) shows that in the non-Markovian regime, where the propagation time τ is comparable to or larger than the lifetime of the atom, the retarded feedback term (i.e., the second term) depends on the coupling strengths g_1(t − τ) and g_2(t − τ) at the earlier moment t − τ. We point out that such a non-Markovian feature is crucial for observing the unconventional results demonstrated below. For the case of time-independent coupling strengths g_1(t) ≡ g_1 and g_2(t) ≡ g_2, Eq. (6) reduces to Eq. (7), with Γ_j = 4πg_j²/v_g and Γ_12 = 4πg_1g_2/v_g. If g_1 = g_2, Eq. (7) is strictly connected to the dynamic equation governing a small atom coupled in a time-independent manner to a semi-infinite waveguide [32][33][34][35][36]. In the following, we will investigate how the time dependence of the coupling strengths affects the decay dynamics of the giant atom.

III. COSINE-SHAPED MODULATION

We first consider cosine-shaped modulations for the atom-waveguide coupling strengths, i.e., g_j(t) = g_{j,0} cos(Ω_j t + θ_j), with g_{j,0}, Ω_j, and θ_j the amplitude, frequency, and initial phase of the modulation at the jth coupling point, respectively. Here we restrict ourselves to the simple case of g_{1,0} = g_{2,0} = g_0 and Ω_1 = Ω_2 = Ω. We only consider an initial phase difference by assuming θ_1 = 0 and θ_2 = θ without loss of generality. In this case Eq. (6) becomes Eq. (8), with Γ_0 = 4πg_0²/v_g. Clearly, the dynamics of the giant atom can be controlled via either the modulation frequency Ω or the initial phase difference θ, both of which are experimentally tunable. For example, Eq. (8) simplifies to Eq. (9) if Ωτ = (2m + 1)π (m is an arbitrary integer), and to Eq. (10), whose instantaneous part reads ċ_e(t) = −(Γ_0/2)[cos²(Ωt) + cos²(Ωt + θ)]c_e(t), if Ωτ = (2m + 1/2)π. In this case, further control of the atomic dynamics can be achieved by tuning the initial phase difference θ as mentioned above. Before proceeding, we briefly revisit the typical dynamics of a time-independent giant atom described by Eq. (7). It has been shown that the spontaneous emission of such an atom can be suppressed if φ = (2m + 1)π and g_1 = g_2 (the two atom-waveguide coupling paths interfere destructively) and if the propagation time τ is negligible compared with the lifetime of the atom (i.e., the system is in the Markovian regime) [6,8,28,34-36]. In this case, the giant atom is effectively decoupled from the waveguide and becomes "decoherence free" [9,11]. On the other hand, the atom can also exhibit superradiance behavior if the two coupling paths interfere constructively (in this context, "superradiance" refers to enhanced radiative decay due to the giant-atom structure [50]). For the time-dependent model here, we demonstrate in Fig. 2 the dynamic evolutions of the atomic population |c_e(t)|² (the atom is initially in the excited state) with different values of Ωτ and θ. If both Ωτ and θ are integer multiples of 2π, as shown in Fig.
2(a), the atom still exhibits the long-lived population and the superradiant emission when φ = (2m + 1)π and φ = 2mπ, respectively, similar to the case with time-independent coupling strengths. In this case, the superradiance behavior shows a slight oscillation arising from the cosine-shaped modulations, while the long-lived population still does not change with time since the two decay rates (i.e., the instantaneous and the retarded decay rates) are always identical. Interestingly, the figure shows that the decay dynamics of the atom can be tuned flexibly between the long-lived population and the superradiance behavior by changing the phase difference θ. This can be understood from Eq. (8): the retarded feedback term is completely opposite when θ = 0 and θ = π, while its influence on the decay dynamics is halfway between the two extremes when 0 < θ < π. In view of this, the present proposal provides an alternative scheme for tailoring the decay dynamics of the atom without changing its transition frequency [6]. This scheme is, however, not suitable for a small atom in front of a mirror because it is challenging to introduce the phase difference θ in that case. The product Ωτ of the modulation frequency and the propagation time also plays an important role for the decay dynamics, as shown in Eq. (8). This can be verified from the results in Fig. 2(b), where we change the value of Ω and fix the propagation time τ (i.e., the separation between the coupling points). In this case, the decay dynamics can be tuned between the decoherence-free behavior and the superradiance behavior by changing the value of Ω instead. This can also be seen from the opposite signs of the feedback terms in Eqs. (9) and (10). However, the superradiance behaviors are no longer coincident when φ takes different values (see the blue solid and yellow dashed lines). The latter one shows a slower oscillation due to the smaller modulation frequency, as shown in the inset in Fig. 2(b).

IV. STEPLIKE MODULATION

In this section, we consider steplike modulations of the atom-waveguide coupling strengths, i.e., the coupling strengths change abruptly at a specific moment from one value to another. For simplicity, we assume identical modulations at the two coupling points, i.e., g_1(t) = g_2(t) = g_0 + Δ_g Θ(t − t′) in Eq. (6), with g_0 and Δ_g the initial value and the variation of the coupling strengths, respectively. Figures 3(a) and 3(b) depict the dynamic evolutions of the atomic population with the steplike modulations for φ = 0 and φ = π, respectively. In both cases, one can find a revival or reduction of the population from t = t′ to t = t′ + τ. For φ = 2mπ, as shown in Fig. 3(a), the atomic population becomes damped again when t > t′ + τ, with the damping rate determined by the absolute value of the final coupling strengths. For φ = (2m + 1)π, however, the atomic population becomes undamped again after the variation, as shown in Fig. 3(b). This implies that one can partially offset the initial energy loss of the giant atom that arises from the non-Markovian retardation effect. The results in Fig. 3(b) can be understood from the form taken by Eq. (6) for t ∈ [t′, t′ + τ) and from that for t > t′ + τ. On the one hand, Eq. (12) shows that for φ = (2m + 1)π a population revival (reduction) should occur at t = t′ if the value of F_s is negative (positive) (see Appendix B for more details). This can be seen in Fig.
3(c), where the steady value of the atomic population is inversely proportional to F_s, and if F_s < 0, it becomes larger than that without modulation (see the horizontal dashed line). On the other hand, for t > t′ + τ, one can see from Eq. (13) that the two decay paths cancel each other again if φ = (2m + 1)π. The results in Fig. 3(a) are much more complicated to analyze with this approach. Nevertheless, the sudden revival or reduction and the modified decay of the atom can also be explained by the altered interference between the instantaneous and retarded terms in Eq. (6). We would like to point out that the population revival in Fig. 3 does not violate the energy conservation of the whole system. According to Eq. (12), the sudden change in the coupling strength g(t) modifies the interference between the instantaneous and retarded terms, which leads to a further energy loss from the atom to the waveguide, or to an energy backflow from the waveguide (the region between the two coupling points) to the atom. However, the atomic population cannot grow back to unity due to the initial energy loss before t = τ.

V. QUANTUM ZENO AND QUANTUM ANTI-ZENO EFFECTS

Considering that the subsequent decay of the atom can be tuned by a sudden change in the atom-waveguide coupling strengths, as shown in Fig. 3, it is natural to ask whether the QZE and its inverse version, i.e., the QAZE, can be simulated by repeating such sudden changes. The answer is positive, as will be shown below. The QZE (also known as Zeno's paradox) states that the decay of an unstable quantum system can be hindered by frequent observations. This effect requires that the time interval between the observations is shorter than the Zeno time, before which the survival probability of the system exhibits a short-time quadratic decay [37][38][39][40][41]. Such a short-time behavior, however, is lacking in dynamics governed by Eq. (6), since the Weisskopf-Wigner approximation kills the short-time memory of the environment (the waveguide). In view of this, we resort to a discrete version of the present model, where a two-level giant atom is coupled to a tight-binding lattice (e.g., a one-dimensional array of coupled transmission line resonators [51,52]) with time-dependent coupling strengths. In this case, the short-time behavior can be clearly observed by directly solving the coupled-mode equations of the whole system [53,54]. Now the Hamiltonian of the system can be written as in Eq. (15), where a_m is the annihilation operator of the mth resonator of the lattice; ω_a is the frequency of each resonator in the array; J is the coupling constant between adjacent resonators (for simplicity, we only consider nearest-neighbor couplings). Here we have assumed that the atom is coupled to the 0th and the Nth resonators of the lattice with identical time-dependent coupling strength g(t). By performing the transformation a_m = Σ_k a_k e^{ikm}/√(2π), Eq. (15) becomes Eq. (16), with ω_k = ω_a − 2J cos k the dispersion relation of the lattice and g̃(t) = g(t)/√(2π) the renormalized coupling strength. In view of this, the tight-binding model here serves as a one-dimensional structured waveguide: if the atomic frequency is within the energy band of the lattice, i.e., ω_0 ∈ [ω_a − 2J, ω_a + 2J], Eq. (16) basically describes a giant atom weakly coupled to a bath, in which case photon escape from the atom to the lattice can be observed; otherwise, if the atom is tuned off resonance from the lattice band, photon escape is prohibited and atom-photon bound states can be formed [55][56][57].
In the resonant case of ω_0 = ω_a (i.e., the frequency of the atom lies at the middle of the energy band of the lattice), the phase accumulation between the two atom-lattice coupling points is given by φ = Nπ/2 [50,58]. In this case, the dynamic evolution of the atomic population |c_e(t)|² can be determined by solving the coupled-mode equations for the atomic and resonator amplitudes. For our purpose here, we consider a periodic quench scheme for the time-dependent coupling strength in Eq. (15): within the nth period (n ∈ Z⁺), the coupling is turned on for a duration t′ and then quenched off for a duration t′′. We first consider the case of N = 4, where the atom exhibits enhanced decay due to the constructive interference between the two coupling paths [27,50]. Figure 4 depicts the dynamic evolutions of the atomic population in this case with different values of the quench-off duration t′′ and shows the evolution without coupling quench for comparison. In the absence of coupling quench, the atom exhibits a short-time parabolic decay before the subsequent exponential behavior (see the top inset) due to the strong memory effect of the structured bath. Based on this short-time behavior, one finds that the atomic decay slows down gradually and even tends to be inhibited as the quench-off duration t′′ increases. Physically, this is because the memory of the lattice (i.e., the feedback from the lattice to the atom) tends to vanish for long enough quench-off durations, and the decay of the atom restarts with the short-time parabolic behavior whenever the couplings are turned on again. That is to say, a coupling quench with a large enough quench-off duration mimics an ideal observation which results in a "collapse" of the state [54]. From this perspective, the turn-on duration t′ corresponds to the time interval between the observations, while the quench-off duration t′′ serves as the duration of each observation. Finally, we demonstrate in Fig. 5 how periodic coupling quenches can be used to induce a long-lived giant atom to decay, simulating the QAZE of an open quantum system [38,39,41]. As shown in Fig. 5, in the absence of coupling quench and for N = 2, the population of the giant atom is in some sense undamped (with a persistent oscillation) after the retarded feedback term arising from the giant-atom structure takes effect. To simulate the QAZE with the present model, we consider periodic coupling quenches with long enough quench-off durations (i.e., large enough t′′) to avoid the memory effect of the lattice sites, especially those between the two coupling points (for N = 2, the energy can be partially confined between the two coupling points such that the feedback coming from the lattice cannot be ignored if the quench-off duration is short [27]). Therefore, in Fig. 5 we consider the cases of g_0 t′′ = {0.9, 1.4, 1.9}, which demonstrate that the atomic population decays periodically in an exponential-like manner and eventually falls to zero. This is because the retarded feedback (and hence the giant-atom effect) never kicks in if t′ is smaller than τ and t′′ is large enough. In view of this, the QAZE here should diminish gradually as the time delay τ approaches zero. This can be seen from the inset of Fig. 5, where the atomic decay is almost suppressed if the model enters the Markovian regime with small enough τ (see, e.g., the green line with τ = d/v_g = N/(2J) = 0.025/g_0 [50,58]).
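The quench protocol described above can be illustrated with a small numerical sketch. Since the coupled-mode equations did not survive extraction here, the code assumes a standard single-excitation form consistent with the description of Eqs. (15)-(16) (nearest-neighbor hopping −J, atom coupled to sites 0 and N, resonant case in a frame rotating at the resonator frequency); all names, step sizes, and parameter values are our own illustrative choices.

```python
import numpy as np

def simulate_quenched_giant_atom(n_sites=201, N=4, J=1.0, g0=0.1,
                                 t_on=0.5, t_off=2.0, t_max=50.0, dt=0.005):
    """Single-excitation dynamics of a giant atom on a tight-binding lattice
    with periodically quenched couplings (illustrative sketch, not Eq. (17))."""
    offset = n_sites // 2
    site0, siteN = offset, offset + N
    period = t_on + t_off

    def coupling(t):
        return g0 if (t % period) < t_on else 0.0       # periodic on/off quench

    def rhs(t, psi):
        # psi[0] is the atomic amplitude, psi[1:] the lattice amplitudes.
        # Assumed equations (rotating frame, hbar = 1):
        #   i dc_e/dt = g(t) (c_0 + c_N)
        #   i dc_m/dt = -J (c_{m-1} + c_{m+1}) + g(t) c_e (delta_{m,0} + delta_{m,N})
        g = coupling(t)
        c_atom, c_lat = psi[0], psi[1:]
        hop = np.zeros_like(c_lat)
        hop[1:] += c_lat[:-1]
        hop[:-1] += c_lat[1:]
        d_atom = -1j * g * (c_lat[site0] + c_lat[siteN])
        d_lat = -1j * (-J * hop)
        d_lat[site0] += -1j * g * c_atom
        d_lat[siteN] += -1j * g * c_atom
        return np.concatenate(([d_atom], d_lat))

    psi = np.zeros(n_sites + 1, dtype=complex)
    psi[0] = 1.0                                        # atom initially excited
    times = np.arange(0.0, t_max, dt)
    population = np.empty_like(times)
    for i, t in enumerate(times):
        population[i] = abs(psi[0]) ** 2
        k1 = rhs(t, psi)                                # classical 4th-order Runge-Kutta step
        k2 = rhs(t + dt / 2, psi + dt / 2 * k1)
        k3 = rhs(t + dt / 2, psi + dt / 2 * k2)
        k4 = rhs(t + dt, psi + dt * k3)
        psi = psi + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return times, population
```

Scanning t_off for fixed t_on reproduces, qualitatively, the slowdown of the decay with increasing quench-off duration discussed above.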
Moreover, the present scheme is quite different from that of the small-atom case [54], where the atom is assumed to be off-resonant with the lattice and its decoherence is partially suppressed due to the formation of an atom-photon bound state.

VI. CONCLUSIONS

To summarize, we have investigated the decay dynamics of a two-level giant atom with a non-Markovian retardation effect and time-dependent atom-waveguide coupling strengths. We have revealed that the dynamic evolutions of the atomic population can be remarkably modified by the time-varying couplings, depending on the specific modulation form of the coupling strengths as well as the time delay of the retardation effect. Different from the case without modulation, here the retarded feedback term at time t depends on the atom-waveguide coupling strengths at the earlier time t − τ, where τ is the traveling time of photons between the two coupling points. This thus provides an alternative way of dynamically controlling the decay dynamics of the atom without changing the frequency of the emitted photons. In particular, by changing the coupling strengths abruptly, it is possible to realize a stationary population revival with which the atomic population grows to a higher value and stays there permanently. Moreover, we have extended our model to a discrete version, in which the atom can exhibit an obvious short-time parabolic decay due to the strong memory effect of the structured bath, and have simulated the QZE and QAZE with the help of periodic coupling quenches. Different from the small-atom case, the present model exhibits the QAZE even if the transition frequency of the atom lies within the energy band of the lattice. These results not only have potential applications for controlling decoherence effects in quantum networks, but also provide an alternative platform for studying non-Markovian retardation effects, Zeno physics, and quench dynamics in open quantum systems.
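As a complementary illustration of the continuous-waveguide model of Secs. II-IV, the sketch below integrates a delay equation of the form suggested by the definitions of Γ_j(t, 0), Γ_12(t, τ), φ and τ given there, namely ċ_e(t) = −[Γ_1(t,0) + Γ_2(t,0)]/2 c_e(t) − Γ_12(t,τ) e^{iφ} c_e(t − τ) Θ(t − τ); since the original displays did not survive extraction, this assumed form is our reconstruction and may differ from the paper's Eq. (6) in sign or phase conventions, and all parameter values are hypothetical.

```python
import numpy as np

def decay_with_delay(g1, g2, phi, tau, v_g=1.0, t_max=20.0, dt=0.001):
    """Integrate an assumed delay equation for the atomic amplitude c_e(t).

    g1, g2 : callables giving the time-dependent couplings g_1(t), g_2(t)
    phi    : phase accumulation between the coupling points
    tau    : photon propagation time between the coupling points
    """
    steps = int(t_max / dt)
    delay_steps = int(round(tau / dt))
    c = np.zeros(steps, dtype=complex)
    c[0] = 1.0                                   # atom initially excited
    for i in range(steps - 1):
        t = i * dt
        gamma1 = 4 * np.pi * g1(t) ** 2 / v_g    # instantaneous decay rates
        gamma2 = 4 * np.pi * g2(t) ** 2 / v_g
        gamma12 = 2 * np.pi * (g1(t) * g2(t - tau) + g1(t - tau) * g2(t)) / v_g
        retarded = c[i - delay_steps] if i >= delay_steps else 0.0
        dc = -0.5 * (gamma1 + gamma2) * c[i] - gamma12 * np.exp(1j * phi) * retarded
        c[i + 1] = c[i] + dt * dc                # forward Euler step
    return np.arange(steps) * dt, np.abs(c) ** 2

# Example: identical cosine-shaped modulations with a phase difference theta
g0, Omega, theta = 0.2, 1.0, np.pi
times, population = decay_with_delay(
    g1=lambda t: g0 * np.cos(Omega * t),
    g2=lambda t: g0 * np.cos(Omega * t + theta),
    phi=np.pi, tau=1.0)
```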
5,345.2
2022-01-29T00:00:00.000
[ "Education", "Physics" ]
Reeb-Thurston stability for symplectic foliations

We prove a version of the local Reeb-Thurston stability theorem for symplectic foliations.

Introduction

A symplectic foliation on a manifold M is a (regular) foliation F, endowed with a 2-form ω on T F whose restriction to each leaf S of F is a symplectic form ω_S ∈ Ω²(S). Equivalently, a symplectic foliation is a Poisson structure of constant rank. In this paper we prove a normal form result for symplectic foliations around a leaf. The result uses the cohomological variation of ω at the leaf S, which is a linear map (see Section 1 for the definition)

(1) [δ_S ω]_x : ν*_x → H²(S_hol), x ∈ S,

where ν denotes the normal bundle of T F, and S_hol is the holonomy cover of S. The cohomological variation arises in fact from a linear map:

(2) δ_S ω_x : ν*_x → Ω²_closed(S_hol).

The local model for the foliation around S, which appears in the classical results of Reeb and Thurston, is the flat bundle (S_hol × ν_x)/π_1(S, x), where π_1(S, x) acts on the second factor via the linear holonomy

(3) dh : π_1(S, x) → Gl(ν_x).

For a symplectic foliation, the flat bundle can be endowed with leafwise closed 2-forms, which are symplectic in a neighborhood of S; namely, the leaf through v ∈ ν_x carries the closed 2-form j¹_S(ω)_v whose pull-back to S_hol × {v} is p*(j¹_S(ω)_v) = p*(ω_S) + δ_S ω_x(v). Our main result is the following:

Theorem 1. Let S be an embedded leaf of the symplectic foliation (M, F, ω). If the holonomy group of S is finite and the cohomological variation (1) at S is a surjective map, then some open around S is isomorphic as a symplectic foliation to an open around S in the flat bundle (S_hol × ν_x)/π_1(S, x) endowed with the family of closed 2-forms j¹_S(ω), by a diffeomorphism which fixes S.

This result is not a first order normal form theorem, since the holonomy group and the holonomy cover depend on the germ of the foliation around the leaf. The first order jet of the foliation at S sees only the linear holonomy group H_lin (i.e. the image of dh) and the corresponding linear holonomy cover, denoted S_lin. Now, the map (2) is in fact the pull-back of a map with values in Ω²_closed(S_lin). Using this remark, and an extension to noncompact leaves of a result of Thurston (Lemma 2), we obtain the following consequence of Theorem 1.

Corollary 1. Under the assumptions that S is embedded, π_1(S, x) is finitely generated, H_lin is finite, H¹(S_lin) = 0 and the cohomological variation is surjective, the conclusion of Theorem 1 holds.

Our result is clearly related to the normal form theorem for Poisson manifolds around symplectic leaves from [3]. Both results have the same conclusion, yet the conditions of Theorem 1 are substantially weaker. More precisely, for regular Poisson manifolds, the hypotheses of the main result in loc. cit. are (see Corollary 4.1.22 and Lemma 4.1.23 [5]):
• the leaf S is compact,
• the cohomological variation is an isomorphism (when regarded as a map into the appropriate second cohomology group).
There is yet another essential difference between Theorem 1 and the result from [3]: namely, even in the setting of Corollary 1, the result presented here is a first order result only in the world of symplectic foliations, and not in that of Poisson structures. The information that a Poisson bivector has constant rank is not detectable from its first jet. A weaker version of Theorem 1 is part of the PhD thesis [5] of the second author.
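To make the local model concrete, consider the simplest special case, stated here as an illustration we read off from the definitions above rather than as a statement from the paper: if the holonomy of S is trivial, then S_hol = S, the π_1-action is trivial, and the flat bundle reduces to a product.

```latex
% Local model in the trivial-holonomy case (illustrative)
\big(S_{\mathrm{hol}} \times \nu_x\big)/\pi_1(S,x) \;\cong\; S \times \nu_x,
\qquad
j^1_S(\omega)_v \;=\; \omega_S + \delta_S\omega_x(v) \ \in\ \Omega^2_{\mathrm{closed}}(S),
```

so that the surjectivity hypothesis in Theorem 1 amounts to asking that the classes [δ_Sω_x(v)] ∈ H²(S), as v ranges over the transverse directions, span H²(S).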
The local model and the cohomological variation In this section we describe the local model of a symplectic foliation around a leaf, and define the cohomological variation of the symplectic structure on the leaves. In the case of general Poisson manifolds, the local model was first constructed by Vorobjev [9]. The approach presented here is more direct; for the relation between these two constructions see [5]. Let (M, F ) be a foliated manifold, and denote its normal bundle by Then ν carries a flat T F connection, called the Bott connection, given by where, for a vector field Z, we denote by Z its class in Γ(ν). For a path γ inside a leaf S, parallel transport with respect to ∇ gives the linear holonomy transformations: This map depends only on γ modulo homotopies inside S with fixed endpoints. Applying dh to closed loops at x, we obtain the linear holonomy group The linear holonomy cover of a leaf S at x, denoted by S lin,x is the covering space corresponding to the kernel of dh; thus it is a principal H lin,x bundle over S. Also, S lin,x can be defined as the space of classes of paths in S starting at x, where we identify two such paths if they have the same endpoint and they induce the same holonomy transport. The Bott connection induces a foliation F ν on ν whose leaves are the orbits of dh; i.e. the leaf of F ν through v ∈ ν x covers the leaf S through x, and is given by Therefore, S lin,x covers of the leaves of the foliation F ν above S via the maps (4) p The local model of the foliation around the leaf S is the foliated manifold The linear holonomy induces an isomorphism between the local model and the flat bundle from the Introduction Consider now a symplectic structure ω on the foliation F , i.e. a 2-form on T F ω ∈ Ω 2 (T F ) whose restriction to each leaf is symplectic. We first construct a closed foliated 2form δω on (ν, F ν ), which represents the derivative of ω in the transversal direction. For this, choose an extension ω ∈ Ω 2 (M ) of ω and let Since ω is closed along the leaves of F , Ω(X, Y ) ∈ ν * , thus Ω ∈ Ω 2 (T F ; ν * ). Now, the dual of the Bott connection on ν * induces a differential d ∇ on the space of foliated forms with values in the conormal bundle Ω • (T F ; ν * ); this can be given explicitly by the classical Koszul formula , X 0 , . . . , X i , . . . , X j , . . . , X p ), for η ∈ Ω p (T F ; ν * ), X i ∈ Γ(T F ). Denote the resulting cohomology by H • (F ; ν * ). It is easy to see that Ω is d ∇ -closed. In fact, this construction can be preformed in all degrees, and it produces a canonical map (see e.g. [2]) which maps [ω] to [Ω]. Also, if ω + α is a second extension of ω (where α vanishes along F ), then Ω changes by d ∇ λ, where λ ∈ Ω 1 (T F ; ν * ), is given by Note that there is a natural embedding where p : ν → M is the projection. It is easy to see that under J the differential d ∇ corresponds to the leafwise de Rham differential d Fν on the leaves of F ν . In particular, we obtain a closed foliated 2-form δω := J (Ω) ∈ Ω 2 (T F ν ), which we call the vertical derivative of ω. Since δω vanishes on M (viewed as the zero section), it follows that p * (ω) + δω is nondegenerate on the leaves in an open around M ; thus (ν, F ν , p * (ω) + δω) is a symplectic foliation around M . Consider now a symplectic leaf S. Restricting p * (ω) + δω to the leaves above S, we obtain closed foliated 2-forms along the leaves of the F νS , denoted by is symplectic will be regarded as the local model of the symplectic foliation around S; i.e. 
we think about the local model as a germ of a symplectic foliation around S. In order to define the cohomological variation of ω, consider first the linear map , where the map p v is the covering map defined by (4). By the discussion above, choosing a different extension of ω changes p * v (δ S ω) by an exact 2-form; hence the cohomology class [p * v (δ S ω)] is independent of the 2-form Ω used to construct δ S ω. The induced linear map to the cohomology of S lin,x , will be called the cohomological variation of ω at S In the Introduction we denoted the lifts of [δ S ω x ] to the holonomy cover S hol , respectively to the universal cover S uni of S, by the same symbol. We finish this section by proving that, up to isomorphism, the local model is independent of the choices involved. The proof uses a version of the Moser Lemma for symplectic foliations (Lemma 5 from the next section). produce local models that are isomorphic around S by a diffeomorphism that fixes S. Proof. A second 2-form is of the form for some λ ∈ Ω 1 (T F ; ν * ). We apply the Lemma 5 to the symplectic foliation (ν, F ν , p * (ω) + δω), and the foliated 1-form α := J (λ) which vanishes along M . The resulting diffeomorphism is foliated. In particular, above any leaf S of F it sends the local model corresponding to Ω to the local model corresponding to Ω ′ . Five lemmas In this section we prove some auxiliary results used in the proof of Theorem 1. Reeb Stability around non-compact leaves Consider a foliated manifold (M, F ) and let S be an embedded leaf. The classical Reeb Stability Theorem (see e.g. [6]) says that, if the holonomy group H hol is finite and S is compact, then a saturated neighborhood of S in M is isomorphic as a foliated manifold to the flat bundle where T is a small transversal that is invariant under the holonomy action of H hol . Since actions of finite groups can be linearized, it follows that the holonomy of S equals the linear holonomy of S. So, some neighborhood of S in (M, F ) is isomorphic as a foliated manifold with the flat bundle from the previous section Below we show that the proof of the Reeb Stability Theorem from [6] can be adapted to the non-compact case, at the expense of saturation of the open. (6), by a diffeomorphism that fixes S. Proof. Since the holonomy is finite, it equals the linear holonomy, and we denote H := H hol = H lin and S := S hol = S lin . The assumption that S be embedded allows us to restrict to a tubular neighborhood; so we assume that the foliation is on a vector bundle p : E → S (with E ∼ = ν S ), for which S, identified with the zero section, is a leaf. Then the holonomy of paths in S is represented by germs of a diffeomorphism between the fibers of E. Each point in S has an open neighborhood U ⊂ E satisfying • for every x, y ∈ S ∩ U , the holonomy along any path in S ∩ U connecting them is defined as a diffeomorphism between the spaces Let U be locally finite cover of S by opens U ⊂ E of the type just described, such that for all U, U ′ ∈ U, U ∩ U ′ ∩ S is connected (or empty), and such that each U ∈ U is relatively compact. We fix x 0 ∈ S, U 0 ∈ U an open containing x 0 , and denote by Consider a path γ in S starting at x 0 and with endpoint x. Cover the path by a chain of opens in U ξ = (U 0 , . . . 
, U k(ξ) ), such that there is a partition Since the holonomy transformations inside U j are all trivial, and all the intersections U i ∩ U j ∩ S are connected, it follows that the holonomy of γ only depends on the chain ξ and is defined as an embedding Denote by K the kernel of π 1 (S, x 0 ) → H. The holonomy cover S → S can be described as the space of all paths γ in S starting at x 0 , and two such paths γ 1 and γ 2 are equivalent if they have the same endpoint, and the homotopy class of γ −1 2 • γ 1 lies in K. The projection is then given by [γ] → γ(1). Denote by x 0 the point in S corresponding to the constant path at x 0 . So, we can represent each point in S (not uniquely!) by a pair (ξ, x) with ξ ∈ Z and endpoint x ∈ U k(ξ) ∩ S. The group H acts freely on S by pre-composing paths. For every g ∈ H fix a chain ξ g ∈ Z, such that (ξ g , x 0 ) represents x 0 g. Consider the open on which all holonomies h x0 x0 (ξ g ) are defined, and a smaller open and h x0 x0 (ξ gh ) are the same, by shrinking O 1 if necessary, we may assume that Consider the following open Then O ⊂ O 1 , and for h ∈ H, we have that So h x0 x0 (ξ h ) maps O to O, and by (7) it follows that the holonomy transport along ξ g defines an action of H on O, which we further denote by Since U is a locally finite cover by relatively compact opens, there are only finitely many chains in Z of a certain length. Denote by Z n the set of chains of length at most n. Let c ≥ 1 be such that ξ g ∈ Z c for all g ∈ H. By the above, and by the basic properties of holonomy, there exist open neighborhoods {O n } n≥1 of x 0 in O: satisfying the following: 1) for every chain ξ ∈ Z n , O n ⊂ O(ξ), 2) for every two chains ξ, ξ ′ ∈ Z n and x ∈ U k(ξ) ∩ U k(ξ ′ ) ∩ S, such that the pairs (ξ, x) and (ξ ′ , x) represent the same element in S, we have that Denote by S n the set of points in x ∈ S for which every element in the orbit xH can be represented by a pair (ξ, x) with ξ ∈ Z n . Note that for n ≥ c, S n is nonempty, H-invariant, open, and connected. Consider the following H-invariant open neighborhood of S × {x 0 }: On V we define the map for ( x, v) ∈ S n ×O n+c , where (ξ, x) is pair representing x with ξ ∈ Z n and x ∈ U k(ξ) . By the properties of the opens O n , H is well defined. Since the holonomy transport is by germs of diffeomorphisms and preserves the foliation, it follows that H is a foliated local diffeomorphism, which sends the trivial foliation on V with leaves V ∩ S × {v} to F | E . We prove now that H is H-invariant. Let ( x, v) ∈ S n ×O n+c and g ∈ H. Consider chains ξ and ξ ′ in Z n representing x and xg respectively, with x ∈ U k(ξ) ∩ U k(ξ ′ ) ∩ S. Then ξ ′ and ξ g ∪ξ both belong to Z n+c and (ξ ′ , x), (ξ g ∪ξ, x) both represent xg ∈ S. Using properties 2) and 4) of the opens O n , we obtain H-invariance: . Since the action of H on V is free and preserves the foliation on V, we obtain an induced local diffeomorphism of foliated manifolds: Hence x and x ′ , both lie in the fiber of S → S over x, thus there is a unique g ∈ H with x ′ = xg. Let n, m ≥ c be such that ( x, v) ∈ S n ×O n+c and ( x ′ , v ′ ) ∈ S m ×O m+c , and assume also that n ≤ m. Consider ξ ∈ Z n and ξ ′ ∈ Z m such that (ξ, x) represents x and (ξ ′ , x) represents x ′ . Then we have that . Since both (ξ ′ , x) and (ξ g ∪ ξ, x) represent x ′ ∈ S, and both have length ≤ m + c, again by the properties 2) and 4) we obtain which proves injectivity of H. 
Thurston Stability around non-compact leaves To obtain the first order normal form result (Corollary 1), we will use the following extension to non-compact leaves of a result of Thurston [8]. Lemma 2. Let S be an embedded leaf of a foliation such that K lin , the kernel of dh : π 1 (S, x) → H lin , is finitely generated and H 1 ( S lin ) = 0. Then the holonomy group H hol of S coincides with the linear holonomy group H lin of S. Proof. Denote by V := ν x , the normal space at some x ∈ S. The linear holonomy gives an identification of the normal bundle of S in M with the vector bundle ( S lin × V )/H lin . Passing to a tubular neighborhood, we may assume that the foliation F is on ( S lin × V )/H lin , and that its linear holonomy coincides with the holonomy of the flat bundle, i.e. the first order jet along S of F equals the first order jet along S of flat bundle foliation. Consider the covering map The leaf S 0 := S lin × {0} of the pull-back foliation p * (F ) on S lin × V satisfies: (1) S 0 has trivial linear holonomy; (2) H 1 ( S 0 ) = 0; (3) π 1 ( S 0 ) ∼ = K lin is finitely generated. Thurston shows in [8] that, under the assumption that S 0 is compact, the first two conditions imply that the holonomy group of S 0 vanishes. It is straightforward to check that Thurston's argument actually doesn't use the compactness assumption, but it only uses condition (3); and we conclude that also in our case the holonomy at S 0 of p * (F ) vanishes. Now consider a loop γ in S based at x such that [γ] ∈ K lin . This is equivalent to saying that γ lifts to a loop in S lin , hence to a loop γ in S 0 . The holonomy transport along γ induced by p * (F ) projects to the holonomy transport of γ induced by F , and since the first is trivial, so is the latter. This proves that K lin is included in the kernel of π 1 (S, x) → H hol , and since the other inclusion always holds, we obtain that H hol = H lin . Foliated cohomology of products Let M and N be two manifolds. Consider the product foliation T M × N on We denote the complex computing the corresponding foliated cohomology by The elements of Ω • (T M × N ) can be regarded as smooth families of forms on M : Denote the corresponding cohomology groups by We need two versions of these groups associated to a leaf M × {x}, for a fixed We explain the third map; the first two are constructed similarly. Consider an element [η] ∈ H q gx (T M × N ), which is represented by a foliated q-form η that is closed on some open containing M × {x}. We define the corresponding linear map: The germ at x of the function η, c is independent of the choice of the representatives, yielding a well-defined element Ψ gx ( Proof. Denote the constant sheaves on M associated to the groups C ∞ (N ), C ∞ x (N ) and C ∞ gx (N ) by S 1 , S 2 and S 3 , respectively. By standard arguments, the de Rham differential along M induces resolutions S i → C • i by fine sheaves on M : N ). Hence, the foliated cohomologies from (9) are isomorphic to the sheaf cohomologies with coefficients in S 1 , S 2 and S 3 respectively. On the other hand, for any vector space V , denoting by V the constant sheaf on M , one has a natural isomorphism: Hence, we obtain isomorphisms: ). We still have to check that these maps coincide with those from (9). For this we will exploit the naturality of the maps in (10). In the first case, consider the evaluation map ev y : C ∞ (N ) → R, for y ∈ N . 
This induces a sheaf map ev M y : S 1 → R into the constant sheaf over M , which is covered by a map ev By naturality of (10), it follows that the following square commutes: Since Φ R is the usual isomorphism given by integration, and by the explicit description of the map Ψ, this implies that Ψ = Φ. For the second map in (9) and (11) we proceed similarly, but using the inclusion i : C ∞ x (N ) → C ∞ (N ) instead of ev y . This gives rise to a sheaf map S 2 → S 1 which lifts to their resolutions, and then we obtain a commutative square Using also that Ψ = Φ, this implies the equality Ψ x = Φ x . Similarly, for the third map in (9) and (11), but using the projection map p : C ∞ (N ) → C ∞ gx (N ) (instead of the inclusion), we obtain a commutative square (N ) . Again, since Ψ = Φ, we obtain that Ψ gx = Φ gx . This concludes the proof. We will use the following consequences of Lemma 3 (the first appeared in [4]). is exact for all y ∈ N . Then there exists θ ∈ Ω q−1 (T M × N ) such that dθ = η. Moreover, if η x = 0 for some x ∈ N , then one can choose θ such that θ x = 0. Proof. First, we claim that the projection p : ×N ) induces a surjective map in cohomology. By the description of the maps Ψ and Ψ gx , we have a commutative diagram . By Lemma 3, the horizontal maps are isomorphisms, and since the vertical map on the right is surjective, so is the vertical map on the left. Equivariant submersions We prove now that submersions can be equivariantly linearized. Since ω is invariant, it follows that ω 1 coincides with ω on U 1 := g∈H g U 0 . We compute now the variation of ω at S. Since ω and ω 1 coincide around S, they have the same variation at S. Using the extension of ω (or equivalently of ω 1 ) that vanishes on vectors tangent to the fibers of the projection to S, we see that the variation δ S ω is given by the H-equivariant family: The local model is represented by the H-equivariant family of 2-forms: Smoothness of f follows from Lemma 3. Clearly, f (0) = 0 and its differential at 0 is the cohomological variation that restricts to a diffeomorphism between the leaf S v and the leaf S χ(v) . The pullback of ω 1 under χ is the H-equivariant family We have that Equivalently, this relation can be rewritten as where {α v } v∈U is an H-equivariant family of exact 2-forms that vanishes for v = 0. By Corollary 2, p * (α) is an exact foliated form on S × U , and moreover, we can choose a primitive β ∈ Ω 1 (T S × U ) such that β 0 = 0. By averaging, we may also assume that β is H-equivariant, thus it is of the form β = p * (β) for a foliated 1-form on β on ( S × U )/H that vanishes along S. We obtain:
5,922
2013-07-16T00:00:00.000
[ "Mathematics" ]
Theologizing the Aristotelian Soul in Early Modern China: The Influence of Dr Navarrus’ Enchiridion (1573) over Lingyan lishao (1624) by Francesco Sambiasi and Xu Guangqi : Lingyanlishao 靈言蠡勺 [LYLS] (Humble Attempt to Discuss the Soul, 1624) by the Calabrian Jesuit Francesco Sambiasi (1582–1649) and the Chinese mandarin Xu Guangqi 徐光啓 (1562–1633) was the first Chinese-language treatise on the scholastic Aristotelian soul and a pioneering work in Sino–Western intellectual exchanges. Until now, the dominant assumption has been that the first volume (juan) of this work is simply an adaptation of the Coimbra commentaries on De Anima [DA] and Parva Naturalia [PN]. This article demonstrates, however, that while most of the first juan is based on these Coimbra commentaries, its treatise on the substance of the soul was likely derived from another source, namely the Enchiridion, a sixteenth-century confessional manual by the Spanish Augustinian theologian Doctor Navarrus.

Introduction: Coimbra in China

When Michele Ruggieri (1543-1607) and Matteo Ricci (1552-1610) established the first stable Jesuit mission in Zhaoqing in the early 1580s, they discovered a metaphysical chasm between their Christianity and the neo-Confucianism of their Chinese interlocutors. Whereas Christianity insisted upon the absolute transcendence of the Creator God and the immateriality of the immortal substantial soul, they perceived in the neo-Confucian tradition a monism opposed to these metaphysical claims that were integral to their soteriological message. Hence, from the beginning of the mission they polemicized with Chinese thought using philosophic arguments derived from the scholastic Aristotelian tradition in which they had been trained (Canaris 2019). Over time, the Jesuits realized that a more comprehensive presentation of scholastic Aristotelianism was necessary to establish Christianity as the intellectual peer of Chinese thought. The manuals chosen for the systematic exposition of scholastic Aristotelianism in Chinese were a series of textbooks on the Aristotelian corpus prepared by Jesuits at the University of Coimbra (Meynard 2017). First published between 1592 and 1606 in eight volumes under the oversight of the Portuguese Jesuit Manuel de Góis (1543-1597), these textbooks, known as the Cursus Conimbricenses, exerted significant influence at the time and were used at various Jesuit colleges throughout Europe, including La Flèche, where Descartes studied in his youth (Des Chene 2000). The commentaries provided a relatively accessible summary of scholastic philosophy, which had been updated to suit the needs of a posthumanist Europe (Carvalho 2018). In sum, nine Chinese-language works inspired by these Coimbra commentaries were published between 1624 and the 1640s, covering a broad range of Aristotelian works, such as De Anima [DA], Parva Naturalia [PN], De Caelo, De Generatione et Corruptione, De Meteorologica, Isagoge, Categoriae, Ethica Nicomacheana, and the Problemata (Meynard 2017). The first of these scholastic-Aristotelian works in Chinese was a treatise on the soul entitled Lingyan lishao 靈言蠡勺 [LYLS] (Humble Attempt to Discuss the Soul), which was published in two juan 卷 (volumes) at the Shenxiu Church 慎脩堂 in Hangzhou with a preface dated between 14 August and 12 September 1624 (Chan 2015, p. 366).
1 Like many Chinese Christian works published in the late Ming and early Qing, LYLS was the fruit of a collaboration between a missionary and a Chinese Catholic convert. The Calabrian Jesuit missionary Francesco Sambiasi (1582-1649) explained orally (koushou 口授) the content, which was then recorded (bilu 筆錄) and presumably polished into literary Chinese by the illustrious Ming convert, scholar, and politician Xu Guangqi 徐光啓 (1562-1633). 2 As the first systematic treatment of the soul in Chinese, this work holds especial significance in the history of Sino-Western intellectual exchange, yet it has received scarce attention compared with other Jesuit Chinese writings. Sambiasi and Xu were highly conservative in their translation choices, making the work seem less interesting from the perspective of comparative philosophy and theology. While Sambiasi's confrère Giulio Aleni (1582-1649) made daring comparisons with the neo-Confucian tradition in his own adaptation of the Coimbra commentary on DA, LYLS follows a conventional structure, engages minimally with Chinese thought, and seems closely wedded to its source texts (Aleni 2020). The present article argues that the chief contribution of LYLS consists in its attempt to articulate an holistic account of the soul that is accommodated to the spiritual and practical needs of the Chinese missionary context. This process of accommodation can be fully understood only through a precise identification of its European sources. The current scholarly consensus is that the first juan is an adaptation of the Coimbra commentaries on DA and PN (Verhaeren 1935; Meynard 2015). The present article proposes a new possible textual connection that has been hitherto unnoticed by scholarship: the Enchiridion, a sixteenth-century confessional manual by the Spanish Augustinian theologian Martín de Azpilcueta (1492-1586), commonly known as Doctor Navarrus. Through a close textual analysis, it is argued that Sambiasi and Xu based the first section of the first juan 卷 (volume) of LYLS on the Enchiridion to construct a more accessible and concise theological definition of the soul that is obscured by the philosophic focus of the Coimbra commentaries.

Composition and Content of LYLS

LYLS was published at a testing time for the Jesuit China mission. The Church in China had just emerged from a spate of persecution instigated in 1616 by Shen Que 沈㴶 (1565-1624), the vice minister of the Nanjing Ministry of Rites. Between 1617 and 1620, Sambiasi lived together with Giulio Aleni, Niccolò Longobardo (1559-1654), and other Jesuits in Hangzhou under the protection of the literatus Yang Tingyun 楊廷筠 (1562-1627), a prominent Chinese Catholic convert (Standaert 1988, pp.
91-92).At the same time, the Jesuits themselves were divided over mission policy in what became known as the Terms Controversy.Jesuits exiled from Japan took exception to what they believed were excessively liberal accommodations to Chinese thought and culture adopted by the China missionaries and insisted that indigenous Chinese vocabulary could not express the transcendence of Christian theological concepts such as God, the angels, and the soul.Longobardo, who succeeded Ricci as superior of the mission was convinced by the arguments of the Japan Jesuits, writing a treatise in the mid-1620s on the topic (Longobardo 2021).While Sambiasi's own position in the Terms Controversy is unclear, 3 Sambiasi and Xu's conservative translation choices and lack of engagement with Confucian thought in LYLS reflects the tense environment in which the work was composed.Like other Chinese Christian texts published at the time, such as Tianzhu shengjiao qimeng 天主聖教啓蒙 (Introduction to the Catholic Religion, 1619) by João da Rocha and Daiyipian 代疑篇 (Treatise to Supplant Doubts, 1621) by Yang Tingyun, LYLS predominately employs the phonetic transliteration ya-ni-ma 亞尼瑪 to render the Christian concept of the rational soul.However, the semantic translation linghun 靈魂 is still occasionally used, especially in glosses, to maintain continuity with Jesuit writings published before the Terms Controversy, such as Ricci's Tianzhu shiyi 天主實義 (True Meaning of the Lord of Heaven, 1603) (Ricci 2016). LYLS is divided into two brief juan.The first juan follows a structure typical of early modern treatments of the soul though with a distinctive innovation.After the preface (yin 引), it contains a treatise on the substance of the rational soul (論亞尼瑪之體), a treatise on the vegetative and sensitive powers of the rational soul (論亞尼瑪之生能覺能), and a treatise on the rational powers of the rational soul (論亞尼瑪之靈能) that orders largely scholastic Aristotelian content according to the Augustinian division of memory (論記含者), intellect (論明悟者), and will (論愛欲者) (Zhou 2017).Aquinas had famously rejected this threefold division, arguing on Aristotelian grounds that intellectual memory was inseparable from the intellect. 4Early modern and Jesuit manuals invariably adhered to Aquinas on this point, but Alessandro Valignano (1539-1606), the Jesuit Visitor of the Japan and China missions, perhaps inspired by the Spiritual Exercises, employed the Augustinian division in his Japanese catechism (Catechismus Christianae fidei, 1586), which was followed by Ricci in Tianzhu shiyi (Valignano 1586, pp. 32v-33r;Ricci 2016, p. 297).While the treatise on the rational powers in LYLS is structurally Augustinian, its content is emphatically Aristotelian.In contrast, the second juan is not Aristotelian but Augustinian in content, containing a treatise on the likeness of the soul's dignity to God (論亞尼瑪之尊與天主相似) followed by a treatise on the supreme good (論至美好之情) (Meynard 2015).The highly conventional nature of these contents suggests a dependency on European textual archetypes, but the identification of their sources is made difficult by the significant overlap in these works.Mere thematic and doctrinal correspondences are not sufficient to prove a dependence, as the same points recur in multiple texts from the Middle Ages onward, often in similar order. 
Already in 1935, the Lazarist missionary Hubert Verhaeren identified the Coimbra commentary on DA as the major source of LYLS (Verhaeren 1935). Verhaeren noted that the preface of LYLS was a close translation of the prooemium of the Coimbra commentary and that the first juan followed the same general order of subjects, including the soul, its nature, its vegetative, sensitive, and rational powers, and the three faculties of the rational soul. He also noted that the first chapter on the substance of the soul, which is divided into nine articles, seemed to follow the order of the Coimbra commentary, and discovered passages which had been evidently adapted from the Coimbra commentary. Yet Verhaeren was also aware of the significant differences between the Coimbra commentary on DA and LYLS. The first obvious discrepancy is the theological content of the second juan, which has no correspondence in the Coimbra commentaries since the Jesuit curriculum separated the study of philosophy and theology. While Thierry Meynard has proposed some possible candidates, the evidence is inconclusive and further philological work is needed to confirm Sambiasi and Xu's sources for the second juan (Meynard 2015, p. 230). Yet even in the first juan, there is significant content not found in the Coimbra commentaries. For instance, the first juan discusses topics such as salvation by grace and works and the immortality of the soul, which are not explicitly discussed in the Coimbra commentary on DA. While not rejecting, tout court, the possibility of the Coimbra commentaries as a source, Isabelle Duceux argued that the theological anthropology of LYLS was closer to that of Thomas Aquinas' Summa theologica and sought to identify the parallels between the two works in her annotations to her Spanish translation of the work (Duceux 2009, p. 37). However, in his review of Duceux's translation, Thierry Meynard has convincingly shown that LYLS is structurally and philosophically closer to the Coimbra commentaries than the Summa theologiae (Meynard 2015). For instance, where Aquinas lists four inner senses of the sensitive power of the soul (common sense, phantasia, estimative power, and memorative power), LYLS agrees with the Coimbra commentary in reducing these to two inner senses (common sense and phantasia). Furthermore, Meynard demonstrated that Sambiasi and Xu's treatment of memory was an adaptation of the Coimbra commentary on PN, which Sambiasi also consulted in other Chinese works of his.

The Enchiridion as One of the Sources of LYLS

The focus on major works such as the Coimbra commentaries and Aquinas' Summa has led to the neglect of alternative sources that were popular at the time but have since fallen into oblivion. This article contends that while the Coimbra commentaries on DA and PN were the most important sources for the first juan of the LYLS, Sambiasi and Xu sought to construct a more integrated theological definition of the soul by consulting another source that has been overlooked by scholarship: Doctor Navarrus' Enchiridion, which was first published in Portuguese at Coimbra in 1552 and republished in at least eighty-one editions before 1615 in Latin, Portuguese, and Spanish (Decock 2018, p. 121). The Latin edition, first published in 1573 in Rome, became the standard version, and was significant in moral theology and even economics. 5
Dr Navarrus trained in theology at the University of Alcalá between 1509 and 1516, received his doctorate in canon law from Toulouse in 1518, and taught first at the University of Salamanca from 1524 and then at the University of Coimbra from 1538 to 1556. Although Dr Navarrus was not a Jesuit, his moral theology was influential for the development of Jesuit casuistry (Maryks 2008, pp. 51-52). Dr Navarrus' Enchiridion was recommended by the official edition of the Directory to the Spiritual Exercises from 1599 and helped shape the structure of the Ratio Studiorum. Many compendia of the Enchiridion were produced, one of the most successful being by the Jesuit Pietro Alagona (1549-1624). Alagona's compendium was first published in Rome in 1590 and was republished in at least twenty-three editions in Latin, Italian, and French (Dunoyer 1967, pp. 102-8). With European expansion in the Americas and Asia, Dr Navarrus' writings and their compendia played an integral role in the production of normative knowledge and practice on a global scale (Bragagnolo 2024). Perhaps in part due to Dr Navarrus' blood relationship with St Francis Xavier, Dr Navarrus' writings also had a particularly strong influence on the Jesuits' missions in India and Japan. 6 Among the books that the Portuguese Jesuit Melchior Nunes Barreto (c.1520-1571) brought to Japan in 1556 were eight copies of manuales de Navaro, which amounted to two copies of Navarrus' textbook for each priest in Japan (Gay 1959-1960, p. 157). In the inventory of the Macau College compiled in 1616, one copy of "Navarros" and fifty-three copies of a compendium of Dr Navarrus' textbook are listed (Humbertclaude 1941). Yoshimi Orii argues that these copies were most likely the edition of Alagona's compendium that the Jesuits had printed in 1597 in Japan with the European printing press brought by the Jesuits to Nagasaki in 1590 (Orii 2024). Three copies of Navarrus' work can be found in the collection of books belonging to Bishop Diogo Valente (1568-1633) (Golvers 2006). While the inventory was compiled on 11 November 1633, after Valente's death, it is not certain when these books entered into Valente's possession. According to Golvers, some may have arrived with Nicolas Trigault in 1619, some from the College of Macau, and others from the Japan mission. A copy of the 1593 Latin edition of this book can also be found in Verhaeren's catalogue of the Jesuits' Beitang Library in Beijing, though this edition also bears the stamp of Policarpo de Sousa (1697-1757), Bishop of Beijing from 1743 till his death, and is unlikely to be the copy consulted by Sambiasi and Xu (Verhaeren 1969, p. 250). As Alagona's compendium does not include the citations that can be found in LYLS, Sambiasi and Xu most likely consulted the complete Latin edition of the Enchiridion that could have been easily obtained in Macau. This Latin edition contained a series of ten preliminary chapters (or "preludes"), the first five of which were a series of treatises on the soul. These preludes were designed as the theoretical preparation for the discussion on moral theology that is the principal focus of the work. Hence, throughout Dr Navarrus' theoretical exposition of the soul, there is always a concern to ensure its relevance for spiritual cultivation and pastoral practice.
There is very little scholarship on Dr Navarrus' treatment of the soul. As the Catalan theologian Josep-Ignasi Saranyana remarks, the general presumption has been that the work is devoid of philosophic or speculative value. 7 Dr Navarrus' treatment of the soul is fundamentally a summary of Aquinas' Summa theologiae, and its structure and content are heavily influenced by the Summa moralis by the Dominican friar and archbishop of Florence Antonino Pierozzi (1389-1459), who similarly begins his work on moral theology with a treatise on the soul that follows a similar structure. Dr Navarrus derives his basic definition of the soul from the Summa moralis and expands Pierozzi's definition with more examples and content. 8 While Dr Navarrus cites the Summa moralis on occasion, he does not formally acknowledge his structural and theoretical debt to the work. 9 There are compelling textual reasons why Dr Navarrus' Enchiridion, and not the Coimbra commentary on DA, must be considered the primary source of the first treatise on the substance of the soul in the first juan of LYLS. First, the nine claims that Sambiasi and Xu make about the soul in this treatise follow almost exactly the definition of the soul provided in the first prelude of Dr Navarrus' Enchiridion (Azpilcueta 1593, pp. 3-5). While similar content can be found in the Coimbra commentary, contrary to Verhaeren's claim, it is not an exact match, since in LYLS there are theological topics, such as the immortality of the soul, grace, and beatitude, which are not found in the Coimbra commentary but are found in the Enchiridion (see Table 1). Moreover, Sambiasi and Xu's nine claims about the soul are followed by a list of six mistaken conceptions about the soul, which can also be found in the same order and with similar content in the Enchiridion (see Table 2) (Azpilcueta 1593, pp. 6-8). Third, all the citations of Augustine, pseudo-Augustine, and pseudo-Bernard of Clairvaux in the first treatise of LYLS can also be found in the first prelude of the Enchiridion and in the same order (see Table 3). Notably, at the conclusion of the list of mistaken conceptions of the soul in the Enchiridion, Dr Navarrus refers to the citation of pseudo-Bernard of Clairvaux made at the beginning of the prelude. The same citation reappears in the same place in LYLS, though Sambiasi and Xu chose to re-paraphrase the text to serve as the conclusion of the treatise. These citations cannot be found in any of the Coimbra commentaries. Considering that there are approximately 15,726 characters in the first juan, around 22% of the first juan has been adapted from the Enchiridion.

Table 1. Comparison of the nine-part definition of the soul in the first treatise on the substance of the soul (論亞尼瑪之體) in juan 1 of LYLS and the first prelude (De essentia animae rationalis) of the Enchiridion. N.B.: only the titles of the sections have been reproduced here. A close textual analysis of the first three definitions can be found below.
[Error 5] Furthermore, some infer from this that the human heart is the seat of the ya-ni-ma, and that it alone dwells in the centre and governs all the parts of the body.They compare it to the ruler of a kingdom who lives in the court and rules the four realms.This is not the case.The ya-ni-ma lives fully throughout the whole body, gives life to its substance and forms its substance.For example, in one part it can be found in its entirety.But it gives life to its parts, forms its parts, and there is nowhere where it is not present.How can we say that it only lives in the centre and governs from a distance each part?However, the ya-ni-ma, despite being wholly present throughout, it gives life to it, forms it.While it lives within the heart, it puts into action and is involved in all the vital functions.For example, the fire in the body and the blood in the body all come from the heart just as how water comes from a spring and is divided into tributaries.] Furthermore, from this can be inferred that it is wrong to say that the ya-ni-ma is the blood of people or that it is a part of people's blood.The ya-ni-ma is of a spiritual category, and it is completely inside the entire body, and it is completely inside all the parts.How could it be blood?How could it be in blood?However, blood is the vehicle (yu 輿) of life.It is not possible here to analyse systematically the entirety of Sambiasi and Xu's attempt to adapt and translate Dr Navarrus' nine-part definition of the substantial soul, but the first three definitions will suffice to illustrate their strategy.A striking difference between the Enchiridion and LYLS is that the Chinese definitions in LYLS are often longer than those in the Enchiridion.This is surprising, because the Jesuits' Chinese-language translations are generally more concise than their Western-language source texts.For instance, the Coimbra commentaries contain detailed argumentation that is not only difficult to translate into Chinese, but also would perhaps be distracting for the Chinese reader, who would need simple definitions and clear articulations of philosophic and theological positions rather than the minutiae of debates irrelevant to the mission field.Yet the definitions provided in the Enchiridion are already extremely concise, and their concision would have posed obstacles to their lucidity in Chinese.Let us take for instance the first of Dr Navarrus' definitions of the soul as subsistent: I said substance lest the definition lack genus.Every excellent definition must consist of genus and difference.This was taught by Aristotle through theory and by the jurist Ulpian through praxis and usage as explained in the commentaries by Bartolus and others.It is agreed that the term substantia is the genus for the human soul because every human soul is a substance, as St Thomas proves.On the contrary, however, not every substance is a human soul.[Dixi, Substantia, ne definitio careat genere, quo et differentia debet constare omnis optima definitio, quod docuit per theoriam Aristoteles et per praxim et usum Iurisconsultus Ulpianus, ubi Bart.et alii hoc explicant: et constat verbum substantia, esse genus ad animam humanam: omnis etenim anima humana est substantia, ut probat divus Tho.non tamen e contrario omnis substantia est anima humana.](Azpilcueta 1593, p. 
3) To a European reader, versed in the basics of scholastic logic, Dr Navarrus' intensional definition would have been perfectly clear and sufficient: substance is the genus or general category to which a rational soul belongs, and the ensuing eight points constitute the differentiae that distinguish the rational soul from other substances.But to a Chinese reader, a mere literal Chinese rendering of terms such as genus and differentia would have been bewildering; hence, in the Chinese translation of this passage, Sambiasi and Xu add a basic definition of these logical terms, explaining the difference between genus (zong 總) and species (zhuan 專), as well as between substance (zili 自立) and accident (yilai 依賴): 又從此推,或言亞尼瑪是人之血 What is meant by substance?Everyone who investigates the nature of things and wants to define the name of a thing, must use the genus and species as a method; it is not possible to omit either.(The genus means that which is shared by the many.For example, people have life; plants and animals also have life.Life is shared by people and things.As for species, people have a soul by which they can make rational inferences; plants and animals do not have this.Only people have a soul.Therefore, if we say that people are a living thing, we are speaking in terms of genus.If we say that people have the ability to reason, we are speaking in terms of their species.)Substance is the genus of the ya-ni-ma.The substance is not a ya-ni-ma, but the ya-ni-ma is a substance.For example, when we speak of living things, we do not only mean people, but rather people are a living thing.(In the theory of investigating things there is substance and accident.The substance is an independent body upon which other things depend.The accident cannot subsist by itself; it exists only by depending on the substance.If it does not depend upon a subsistent thing, it cannot be a thing by itself.)(Huang and Wang 2013, pp. 1:320-321) In this instance, it is not readily apparent where Sambiasi and Xu sourced this extra information.Sambiasi and Xu may have composed their own original comment, or they may have drawn upon other Coimbra commentaries, such as the Coimbra commentary on the Dialectica where these terms find precise definitions (Couto 1606).Another possibility is that they consulted the sources cited in Dr Navarrus' marginal annotations.In the passage above, Dr Navarrus cites Topics 6.1-2 where Aristotle explains the principles for defining things and Commentary on the Sentences II,d. 3,q. 1,art. 6 where Aquinas defines the soul as a being in the genus of substance both as a species, insofar as it is subsistent and can survive independently of the body, and as a principle, insofar as it is the form of the body.Similar content can be found in the additional text of LYLS.Sambiasi and Xu's additional text is differentiated from their direct translation of the Enchiridion through the use of smaller characters, whereas the text in larger characters corresponds almost exactly to Dr Navarrus' original text.In this way, Sambiasi and Xu have transposed early modern commentarial practices, which rely primarily upon marginal annotations, to the traditional Chinese convention of interlinear commentary in smaller characters, which dates at least to the Tang dynasty (Gardner 1998).Despite their failure to precisely identify their sources, to the Chinese reader, it is apparent that LYLS consists of multiple textual layers. 
In subsequent sections, it is very likely that Sambiasi and Xu had consulted Dr Navarrus' references, particularly the Summa, to flesh out the text.Dr Navarrus' original definition of the soul as subsistent consists of one jargon-filled sentence, which would have been impenetrable for the Chinese reader: I said "subsistent by itself" in order to differentiate it from the vegetative soul of plants and the sensitive soul of other animals, which cannot subsist by them-selves, as the same St Thomas proves.[Dixi, per se subsistens, ut differat ab anima vegetativa plantarum, et a sensitiva ceterorum animalium, quae non possunt per se subsistere, ut probat idem sanctus Thomas.](Azpilcueta 1593, p. 4) This brief definition has three marginal annotations from the Summa theologiae: the first being Summa theologiae 1, q. 75, art.3, which explains that animal souls are not subsistent and depend on the body; the second being Summa theologiae 1, q. 75, art.6, which explains the incorruptibility of the intellectual soul compared to the animal soul; and the third being Summa theologiae 1, q. 76, art.3, which argues that humans only have one soul that subsumes the functions of vegetative and sensitive souls.In LYLS, Sambiasi and Xu translate Dr Navarrus' definition literally, and then supply an interlinear gloss that matches the content in each of Dr Navarrus' three marginal annotations.The only discrepancy is that the content of the second marginal annotation is presented not as a gloss but as part of the main text: What is meant by subsistence?We speak of subsistence to draw a distinction from living souls and perceptive souls.(There are three types of souls: living soul, perceptive soul and rational soul.The soul of plants has life but lacks perception and reason.The soul of animals has life and perception but lacks reason.The soul of people has life, perception and reason.)The living soul and the perceptive soul come from matter and both depend upon their bodies to exist.The living and perceptive souls are exhausted when the thing upon which they rely is exhausted.The rational soul is in people and does not come from matter.It does not rely upon its body for existence.Even when people die it does not expire.Therefore, it is subsistent.(Subsistence and substance have different meanings.For example, a person is a substance, and a horse is also a substance.However, the form of a horse is due to the presence of the horse.Without the horse, there is no form of the horse.It cannot be said that [the form of the horse] is subsistent.The ya-ni-ma of a person is present regardless of whether the person is present or not.Therefore, it is said to be subsistent.)(Huang and Wang 2013, p. 1:321) Even though Sambiasi and Xu based this part of LYLS upon the first prelude of the Enchiridion, Sambiasi and Xu sought to accord it with the corresponding passage in the Coimbra commentary on DA.In the third part of Dr Navarrus' definition, the rational soul is defined as "incorporeal" (incorporea): I said "incorporeal" to differentiate it from the corporeal substance and to refute Diogenes and other pagan philosophers, who said that the soul is wind, or air, as Saint Isidore and Saint Antoninus relate.Isidore calls these philosophers heretical followers of Tertullian.[Dixi, Incorporea, ad differentiam substantiae corporeae, et ad damnationem Diogenis et aliorum ethnicorum Philosophorum, qui dixerunt animam esse ventum, vel aerem, ut refert B. Isidorus et S. 
Antoninus quos appellat haereticos Tertullianistas Isidorus.](Azpilcueta 1593, p. 4) Sambiasi and Xu's translation of this passage is almost identical in structure and content, except that Sambiasi and Xu define the soul as "spiritual" (shen zhi lei 神之類) and implicitly redirect Dr Navarrus' critique from the Presocratics to the neo-Confucians, who argued that the soul (hun 魂) was subject to the realm of qi 氣, and thus could not be understood as purely immaterial: By the above-mentioned category of spirits is meant that the spiritual category is differentiated from the others which do not belong to the category of spirits, namely the living and perceptive souls.This serves to rectify other erroneous theories, such as that which says that the soul qi.[前謂神之類,言神類以別於他不 屬 神之類,如生、覺魂等。又以正他諸妄說,如謂魂爲氣等也。] (Huang and Wang 2013, p. 1:321) Interestingly, in the Coimbra commentary on DA, the soul is defined in similar terms as a "spiritual substance" (spiritus, sive substantia spiritalis) (Góis et al. 1598, p. 41).While Aquinas used spiritalis and incorporeus as effective synonyms, he had a strong preference for the term "incorporeal" due to his polemic with the doctrine of universal hylomorphism, which had been attributed to the Jewish philosopher Avicebron (Solomon Ibn Gabirol, 1021/1022-1070) (Saranyana 1988, p. 194).Universal hylomorphism postulates that all creation, including spiritual substances such as angels, consist of matter and form, but that the matter of spiritual substances was of a purer and subtler matter.This "spiritual matter" was not understood as corporeal or extended but a principle of passivity capable of undergoing change.This doctrine was promoted by Bonaventure and other Franciscans, but was opposed by Aquinas, who explained the mutability of angels in terms of their composition of essence and being (Case 2020).While in papal documents preceding this debate such as the decree Firmiter of the Fourth Lateran Council (1215), the term "spiritual" is used freely in relation to incorporeal creatures, and in later documents, such as the Council of Vienne (1312), and the Fifth Lateran Council (1512-1517), the term "spiritual" is scrupulously avoided (Saranyana 1988, p. 194).Despite adopting the term "spiritalis", the Coimbra commentary clearly reaffirms the Thomistic view that the soul is not only incorporeal, but also immaterial. 15 Sambiasi and Xu's use of the Coimbra definition should not be considered a philosophic position in these debates, especially as they make clear later in LYLS that the soul as a spiritual substance is immaterial and "cannot be seen by human eyes and thus can understand the principles of all things" (亞尼瑪,神類也。無形無質,亦不屬於人目,而明達萬 物萬事之理) (Huang and Wang 2013, pp. 1:339-340).Nonetheless, it is possible that Sambiasi and Xu chose this translation to introduce the doctrine of the soul in less confrontational terms.In the Chinese intellectual tradition, shen 神 (spirit) was often used as a synonym of hun 魂 (spiritual soul) to indicate the part of the spirit that ascends after the death of the body in contrast to the po 魄 (bodily soul) that descends after death (Yü 1987).Even as Sambiasi and Xu reject the neo-Confucian definition of the soul as subordinate to qi, by defining the soul as shen, they stress that the Christian doctrine of the soul was not diametrically opposed to Chinese thought. 
Sambiasi and Xu's translations of Dr Navarrus' citations of pseudo-Bernard of Clairvaux, pseudo-Augustine, and Augustine also constitute sophisticated attempts to scaffold the presentation of dogma for the Chinese reader. The citation that opens and closes the treatise on the substance of the soul is the famous incipit of the Meditationes piissimae de cognitione humanae conditionis, a work then attributed to St Bernard of Clairvaux but now regarded as the work of an unknown twelfth-century Cistercian monk (Bell 2023, pp. 24-25). Sambiasi and Xu's translation captures the spirit of Bernard's exhortation to cultivate the interior life, but features conspicuous transpositions. First, pseudo-Bernard contrasts our obsession over others (alios) and our neglect of ourselves (seipsos), whereas Sambiasi and Xu objectify this as a contrast between the forgetting of self (忘自己) and the pursuit of many goods (多物). Second, while pseudo-Bernard identifies God as the object of both exterior and interior inquiry, Sambiasi and Xu universalize, in Aristotelian terms, the object of inquiry as "the good" (meihao 美好), which is then indicated in smaller characters (and thus presented as commentary) as God (Tianzhu 天主). This more abstract representation of the good that is obtained through interior cultivation might suggest a comparison with the thought of Lu Jiuyuan 陸九淵 (1139-1193) and the School of Mind, which similarly stresses moral cultivation through introversion (Tian 2023). However, Sambiasi and Xu deliberately avoid Mencian terms like liangxin 良心 (good mind), employing instead the word meihao, which was not commonly used in Confucian philosophic texts. At the same time, these transpositions serve to draw a stronger thematic link with the preface (yin 引) of LYLS, which is drawn from the prooemium of the Coimbra commentary on DA. Here, Sambiasi and Xu cite the Delphic injunction to "know oneself" (renji 認己) as the epistemic foundation of all knowledge. In this way, they establish both the political and theological relevance of the soul and make a comparison between their aims and the neo-Confucian concept of gewu qiongli 格物窮理 (investigating things and probing the principle), which also sought to ground empirical research on the cultivation of the mind-heart (xin 心). Hence, their phraseology simultaneously evokes a degree of conceptual familiarity while stressing the novelty of their approach.
Conclusions: Theologizing Aristotle There is an obvious reason why Sambiasi and Xu would prefer to consult Doctor Navarrus' summary over the Coimbra commentary on DA for their definition of the soul.The Coimbra commentary, despite its relative accessibility compared to other scholastic manuals, was a complex text with dense philosophic argumentation that would have been not only difficult to translate but also bewildering for the Chinese reader.In contrast, Doctor Navarrus' treatise on the soul was concise and lucid, providing a clear and easy-tofollow structure.But Sambiasi and Xu's interest in the Enchiridion goes beyond its practicality: while Doctor Navarrus does not reject Aristotle, he regards the Aristotelian definition of the soul as the "act of the physical body potentially possessing life" (actus corporis physici, organici, potentia vitam habentis) as insufficient for Christianity.He was aware of the polemics over the immortality of the soul that had been stirred by Pietro Pomponazzi's treatise De immortalitate animae.Aristotle was notoriously ambiguous about the immortality of the rational soul, and Alexander of Aphrodisias (fl.200) argued that Aristotle's definition of the soul as the form of the body was incompatible with immortality.This view was revived by Pomponazzi but then condemned in the Fifth Lateran Council (1512-1517) as heretical precisely at the time when Dr Navarrus was studying at Alcalá.Hence, Dr Navarrus insists on using Scripture and revelation to inform his "more fitting" (aptius) theological definition that explicitly includes topics like the immortality of the soul, grace, and salvation: Secondly, it is abundantly clear that no one can have perfect knowledge of our rational soul without Sacred Scripture or knowledge of the orthodox faith.For this reason, I have omitted the definition of the soul related by Aristotle and explained by the angelic and omniscient Thomas, namely "the rational soul is the act of the organic physical body possessing potentially life."Let us define it more appropriately for our purpose as follows: the rational soul is a substance subsistent by itself, incorporeal, immortal, created by God out of nothing, is infused in the body in space and time so as to be its substantial form, suited to obtain beatitude through grace and good works.[Secundo, quod animae nostrae rationalis perfecta cognitio nulli unquam sine sacrarum literarum, aut fidei orthodoxae cognitione plene patuit.Quapropter omissa definitione animae, quam tradit Aristoteles quamque explicat Angelicus, et omniscius ille Thomas scilicet, Anima rationalis est actus corporis physici organici, potentia vitam habentis: definiamus eam nostro proposito sic aptius: Anima rationalis est substantia per se subsistens, incorporea, immortalis, creata a Deo, ex nihilo, ubi et quando infunditur corpori, ut sit forma substantialis eius, per se ad beatitudinem, per gratiam, et bona opera consequendam apta.](Azpilcueta 1593, p. 
3) The need for an integrated theological definition of the soul was even more pressing in China.In the European context, Aristotelian philosophers such as Pompanazzi could distinguish between the philosophic truth of the soul's mortality and the theological truth of its immortality, but in China, such distinctions would be perilous for the efficacy of the evangelical message.After all, neo-Confucian thought shared the Alexandrian assumption that the animating force of the body was not immortal insofar as hun and po were traditionally understood to dissipate after the death of the body.Hence, Sambiasi and Xu, like Dr Navarrus, stressed the need to ground the definition of the rational soul on Scripture (shengjing 聖經) and faith (xinde 信德) and translated Dr Navarrus' definition literally: If you wish to understand fully the wonder of the ya-ni-ma, there are two things which are needed: first, we must rely upon the affirmations of the Lord of Heaven in the classics; second, we must rely upon the light of the virtue of faith.(The virtue of faith is the virtue of believing in the Lord of Heaven.)In this work we rely upon the Sacred Scriptures and the virtue of faith to discuss the soul in outline.Ya-ni-ma is a substance, is self-subsistent, like a spirit, cannot die, and is created by the Lord of Heaven; [its creation] comes from nothing to existence; it is bestowed upon us in space and time; it is the form of our body; by relying upon e-laji-ya 額辣濟亞 (which is translated as holy favour) and our own good deeds, it can attain true beatitude.(Huang and Wang 2013, p. 1:320) The inclusion of grace and beatitude within Dr Navarrus' definition of the soul reflects his concern that scholastic metaphysics, as exemplified by the theologians at Coimbra, was simply too abstract to be useful for the practical goal of salvation, which should be at the heart of all metaphysical enquiry. 16While Dr Navarrus' first three preludes are essentially a theoretical and dogmatic overview of the soul, from the fourth prelude onward, he applies his conclusions to the pursuit of salvation, elucidating, for example, the passions of the soul, cultivation of virtues, the nature of sin, and the sacrament of confession.We see a similar concern for practical spirituality in LYLS where Sambiasi and Xu detail the need for both grace and good works (shanxing 善行) to obtain salvation (zhenfu 真福).Like Dr Navarrus, Sambiasi and Xu conclude their definition of the rational soul with St Augustine's call for repentance: "All people who can decide for themselves (zizhu 自主) and wish to remove their past unrighteousness cannot become righteous without repenting.This is called deciding for oneself [i.e., free will].Since children are ignorant, they cannot make their own decisions and cannot reason" (凡能自主之人,欲去前不義,不自悔,不能遷於義者,曰能 自主。爲孩童無知,不能自主者,不論故也) (Huang and Wang 2013, p. 1:323). 
Whereas the Coimbra commentary on DA was intended for a university-teaching context, LYLS, like the Enchiridion, was intended to be used on the mission field: in Dr Navarrus' case, the confessional, and in Sambiasi and Xu's case, in the conversion of China. In LYLS, Sambiasi and Xu sought to provide theologically and philosophically precise definitions of the soul to stir the reader to conversion and to keep in view the promise of salvation. In this context, detailed reconstructions of philosophic disputes would be not only unnecessary, but even counterproductive. Similarly, a purely philosophic treatment of the soul without reference to its theological ramifications would have been quite confusing for a Ming-dynasty scholar who would be hearing about the Christian doctrines for the first time. Doctor Navarrus' theological definition of the soul provided Sambiasi and Xu with a convenient starting point for their pioneering work of intellectual exchange.
Notes
1 All translations in this article are the author's, unless otherwise noted. LYLS can be found in various archives, including the Archivum Historicum Societatis Iesu in Rome (Jap. Sin. II, 60) and the Biblioteca Apostolica Vaticana in Rome (Borgia Cinese, 324.6). For a modern punctuated edition, see (Huang and Wang 2013, vol. 1, pp. 320-53). For a list of editions, see the Chinese Christian Texts Database (https://libis.be/pa_cct/index.php/Detail/objects/1061, accessed on 1 March 2024). In this article, the author has employed the unpublished, punctuated edition of Huang Zhipeng 黃志鵬, which differs in places from that of Huang and Wang. For the convenience of the reader, page references have been provided for the edition of Huang and Wang.
2 The precise nature of their collaboration is difficult to ascertain. As Xu Guangqi did not know Latin, it is assumed that Sambiasi was responsible for consulting the European sources, while they were both responsible for the translation choices.
3 In 1623, Sambiasi conducted an interview with Xu Guangqi on the Terms Controversy. While the report detailing the interview has been lost, its title provided by Giandomenico Gabiani in a catalogue of Jesuit writings on the Terms Controversy suggests that Sambiasi disagreed with Longobardo, as Longobardo's arguments are described as being "never well proven" (numquam bene probatis argumentis) (Bernard-Maître 1949, p. 70).
4 Aquinas, Summa theologiae Ia, q. 79, art. 6.
5 The work, contained in an appendix a Latin translation of Azpilcueta's treatise on usury. the Coimbra commentary on DA.
12 Not discussed in the Coimbra commentary on DA.
13 For a modern English translation, see Bell (2023).
14 Whereas Dr Navarrus simply refers to the citation made at the beginning of the prelude, Sambiasi re-paraphrases the quote. Notably, both Sambiasi and Dr Navarrus place this repeated citation after the enumeration of mistaken conceptions about the soul.
15 Table 2. Comparison of the six mistaken conceptions of the soul in the first treatise on the substance of the soul (論亞尼瑪之體) in juan 1 of LYLS and the first prelude (De essentia animae rationalis) of the Enchiridion. N.B.: some of the entries have been abbreviated due to their length. (Azpilcueta 1593, p. 6) [Error 3] Furthermore, from this it is inferred that the ya-ni-ma of a person is not a person but a part of a person. Since it lacks shape or appearance and cannot die, it must unite with the body to form a person.
Table 3.
Comparison of citations in the first treatise on the substance of the soul (論亞尼瑪之體) in juan 1 of LYLS and the first prelude (De essentia animae rationalis) of the Enchiridion.
9,695
2024-03-25T00:00:00.000
[ "History", "Philosophy" ]
Improved PSO Algorithm Based on Exponential Center Symmetric Inertia Weight Function and Its Application in Infrared Image Enhancement : In this paper, an improved PSO (Particle Swarm Optimization) algorithm is proposed and applied to the infrared image enhancement. The contrast of infrared image is enhanced while the image details are preserved. A new exponential center symmetry inertia weight function is constructed and the local optimal solution jumping mechanism is introduced to make the algorithm consider both global search and local search. A new image enhancement method is proposed based on the advantages of bi-histogram equalization algorithm and dual-domain image decomposition algorithm. The fitness function is constructed by using five kinds of image quality evaluation factors, and the parameters are optimized by the proposed PSO algorithm, so that the parameters are determined to enhance the image. Experiments showed that the proposed PSO algorithm has good performance, and the proposed image enhancement method can not only improve the contrast of the image, but also preserve the details of the image, which has a good visual effect. Introduction The work of this paper mainly includes two parts. First, we propose an exponential central symmetric inertia weight function and a local optimal solution jump mechanism to optimize the PSO algorithm, and then we put forward a new infrared image enhancement method based on the combination of bi-histogram equalization and dual-domain image decomposition algorithm. The proposed improved PSO algorithm is used for parameters optimization and then to obtain the enhanced image. The contrast of images are improved while preserving image details. Meta-heuristic algorithms have strong flexibility, are simple and easy to implement, do not rely on gradient information, and avoid local optimal solutions. Therefore, they are widely used in various fields of engineering problems. Meta-heuristic algorithms can be divided into evolutionary-based, physics-based, and swarm-based algorithms. Evolutionary-based algorithms are inspired by the principles of biological evolution in nature, the most typical being the genetic algorithm [1]. Each new individual is a combination of the best from the previous generation; individuals formed by the combination of excellent individuals are likely to be better than the previous generation, thus the algorithm is optimized with the process of evolution. Physics-based algorithms simulate rules of physical change, such as simulated annealing algorithm [2] and gravity search algorithm [3]. This kind of algorithms simulates some basic physical laws, such as the laws of gravity, ray, electromagnetic force, etc. Among them, PSO algorithm is often considered. Although it has some defects, such as premature convergence and can easily to fall into local optimal solution, many scholars have improved PSO. Although meta-heuristic algorithms differ in principle, they have a common feature that they are composed of exploration and exploitation phases [13]. The exploration phase wants to traverse as many possible search areas as possible. Finding a balance between the two, i.e., global search and local search, is a challenging task. Ma [19] proposed a chaotic PSO algorithm with arctangent acceleration coefficient to seek a balance between global search and local search. 
Wang [20] proposed a hybrid quantum PSO algorithm, which uses flight and jump operations to improve the accuracy of QPSO (Quantum Particle Swarm Optimization) and enhance the search ability. Zhang [21] introduced scalar operators and learning operators into PSO and proposed a vector cooperative PSO algorithm. Zhou [22] introduced two mechanisms, namely competitive group optimization and reverse learning, choosing different learning mechanisms according to fitness value, and proposed a reverse learning competitive PSO algorithm. Engelbrecht [23] proposed a dynamic PSO algorithm based on arithmetic crossover. Chen [24] used two different crossover operations to disseminate promising samples through the crossover of the optimal position of each particle's personal history to establish an effective guiding paradigm and maintain good diversity. Tawhid [25] combined the PSO algorithm with the crossover operator of genetic algorithm to solve the global optimization problem, avoiding the problems of population stagnation and premature convergence. With the continuous development and progress of infrared technology, the infrared imaging system has been widely used in target detection [26], precise guidance [27], optical remote sensing [28], night navigation [29], and other fields. However, the low contrast of infrared image limits its application. Therefore, it is of great significance to search for effective methods to improve the quality of infrared images. Image enhancement algorithms can be roughly divided into spatial-domain based algorithms, transform-domain algorithms, and learning based algorithms. Spatial-domain based algorithms enhances the image at the gray level; typical algorithms include histogram equalization [30]. Transform-domain algorithms transform the spatial domain image into the frequency domain [31], such as wavelet [32]. In recent years, deep learning technology has been developed rapidly and applied to image enhancement, such as deep bilateral learning [33], deep photo enhancer [34], and scale-recurrent network [35]. Traditional algorithms based on spatial-domain and transform domain are usually based on a priori knowledge or experience, setting some parameters for image enhancement. Learning based algorithms establish the model and enhance the images through a lot of learning and training. The enhancement result has a great relationship with the accuracy of the model and the number of samples. Histogram equalization algorithm, as the basis of image enhancement algorithm, has the advantages of simple implementation and remarkable effect, thus it has been widely used. However, the traditional histogram equalization algorithm has the defect of reducing contrast, thus many scholars have improved it accordingly. Kim [30] proposed a BBHE (Brightness preserving Bi-Histogram Equalization) algorithm, which takes the average brightness of the image as the threshold. The image is decomposed into two sub-graphs, which are processed with histogram equalization, respectively. After that, the image is merged to maintain the brightness characteristics of the original image to a certain extent. Shajy [36] used RMSHE (Recursive Mean-Separate Histogram Equalization) to enhance medical images and obtain good results. The [37] used the minimum mean variance constraint before and after bi-histogram equalization to determine the gray scale threshold, making the contrast enhancement effect visually appear natural. 
Tang [38] proposed a bi-histogram equalization using modified histogram bins method to segment images according to their median brightness to achieve the retention of average brightness. Ashiba [39] proposed adaptive histogram equalization with contrast limitation to enhance the infrared image. However, the histogram equalization algorithm still has the following defects: (1) the number of gray levels decrease, the image information entropy decreases, and local details are missing; (2) the edge is not enhanced; and (3) the average gray value is fixed. In this paper, an improved PSO algorithm is proposed and applied to infrared image enhancement. Firstly, a new exponential center symmetry inertia weight function is constructed to make the inertia weight coefficient change with the number of iterations and the current position of particles. The global search ability is increased in the early stage of the search, and the local search ability is strengthened in the late stage of the search, so as to achieve the balance between local search and global search. Then, a local optimal solution jumping strategy is introduced into the PSO algorithm. We call the new PSO algorithm EXPSO. A new infrared image enhancement method combining the advantages of bi-histogram equalization algorithm and dual-domain image decomposition algorithm is proposed. The fitness function is constructed by using five image evaluation indexes to search for the optimal parameters, and the EXPSO algorithm is used to optimize the parameters to obtain a better image enhancement effect. The main contributions of this paper are as follows: 1. A new inertia weight function of PSO algorithm is constructed to make the weight coefficient change with the number of iterations and the current position of particles. Global search ability is increased in the early stage of search, and local search ability is strengthened in the late stage of search, so as to achieve the balance between local search and global search. 2. The mechanism of jumping out of the local optimal solution is introduced into the PSO to avoid the algorithm falling into a local optimal solution. 3. A new infrared image enhancement technology is proposed, which combines the advantages of bi-histogram algorithm and dual-domain image decomposition to increase the contrast of the enhanced image without losing the image details. The rest of the paper is structured as follows. Section 2 introduces the improved PSO algorithm. An infrared image enhancement algorithm based on bi-histogram equalization and dual-domain image decomposition is proposed in Section 3. Experiments are presented in Section 4, including verifying the performance of the PSO algorithm and the effect of the proposed image enhancement algorithm. Particle Swarm Optimization PSO was proposed by Kennedy [4] and is widely used. In the PSO algorithm, the current position of the particle is a candidate solution to the corresponding optimization problem, and the particle has two properties: position and velocity. Let the position of the ith particle of the population be After h iterations, the optimal position of the individual is The optimal position of the group is The update formula of position and velocity can be expressed as follows: where c 1 and c 2 are learning factors, while r 1 and r 2 are random numbers between 0 and 1. Exponential Center Symmetry Inertia Weight Function The inertia weight factor was proposed by Shi [40]. The inertia weight factor of traditional PSO algorithm is fixed. 
If its value is too large, the convergence speed will slow down; if its value is too small, it easily falls into a local optimal solution. The way we think about it is that, in the early stage of the search, by setting a large weight, the algorithm has strong global search ability and guarantees the particle traverses the entire space, while, in the late stage of the search, using a small inertia weight factor strengthens the local search ability and increases the speed of convergence, which can significantly improve the performance of the algorithm. Therefore, this paper uses the current iteration depth and fitness value to construct the function of inertia coefficient to optimize the PSO algorithm. First, the function based on iteration depth is constructed as follows: where h max denotes the maximum number of iterations set. It can be seen that the function is a monotone decreasing function of [−1, 1]. Then, the function based on the fitness is constructed as follows: where f it max and f it min stand for the maximum and minimum of the current calculated fitness, respectively. Their values are constantly updating as the particle search proceeds and their initial values are f it max = f it and f it min = 0; when h > 2, f it max and f it min are updated. Then, the weight coefficient function is constructed based on s 1 (•) and s 2 (•) as follows: where σ 1 and σ 2 are constants that control the change rate of w. At the beginning of iteration, the weight coefficient is larger to enhance the global search ability of the algorithm; at the end of iteration, the weight coefficient is smaller to enhance the local search ability of the algorithm, so as to accelerate the convergence speed of the algorithm and avoid falling into the local optimal solution. The relationship between weight coefficient and iteration depth transformation is shown in Figure 1. Local Optimal Solution Jumping Strategy We introduce a mutation factor to construct the optimal solution jumping strategy. If the particle state is the same for m consecutive iterations, the mutation factor is introduced and tested to see whether the mutation factor makes the fitness function better. If the mutation is better, the mutation is retained; otherwise, the mutation is deleted. The mutation factor is expressed as: where h k is the depth of iteration when immersed in a local optimal solution. ξ is the step length and it is defined as follows: where u ∼ N 0, σ 2 u , v ∼ N (0, 1) and σ u is defined as follows: The particle jumping process is shown in Figure 2. The figure shows that the small step and large step occur alternately during the process, which can help the particle jump out of local optimal solutions. EXPSO Algorithm Flow The flow of EXPSO algorithm is shown in Algorithm 1. Algorithm 1 Pseudo code of EXPSO. 
Algorithm 1. Pseudo code of EXPSO.
    Initialize the parameters (X_max, X_min, D, m, v_min, v_max, c_1, c_2, N)
    Initialize the particle swarm positions
    Calculate the fitness of each particle
    while Iter < Iter_max do
        Update the inertia weight factor using Equation (7)
        Calculate the fitness of each new particle
        Get p_best and x_best
        if p_best stays the same for m consecutive generations then
            Update x using Equation (8)
            Calculate the fitness of each new particle
            if p_best_new > p_best then
                Replace x and p_best
            end if
        end if
        Update p_best and x_best
        Iter = Iter + 1
    end while
Image Enhancement Method
The idea of image enhancement in this paper is to improve the image contrast by using the method of bi-histogram enhancement and to improve the image edge details by using a dual-domain image decomposition method. The fitness function is constructed by combining the advantages of the two, and EXPSO is used to search for the optimal parameters and obtain a better visual effect. The flow chart of the method is shown in Figure 3.
Contrast Enhancement Based on Bi-Histogram Equalization
The average brightness of the original image I is set as I_m ∈ {I_0, I_1, · · · , I_{L−1}}. Setting it as a threshold, the image is decomposed into two sub-images I_L and I_U. Histogram equalization is carried out for the two sub-images, respectively, and then the processed sub-images are merged to get the output image. The process can be expressed as follows: where p_L(x) and p_U(x) are the cumulative probability functions of the two sub-images whose gray value is x, respectively. The traditional bi-histogram equalization algorithm uses the average brightness to segment the image. The infrared image is usually dark, so this choice can easily cause obvious errors. Therefore, the proposed EXPSO algorithm is adopted in this paper to optimize the threshold X_T. Section 3.3 details the specific optimization process.
Detail Enhancement Based on Dual-Domain Image Decomposition
In this paper, by referring to the ideas in the literature [41], the original image is decomposed into high and low frequency components by dual-domain image decomposition. This algorithm considers not only the spatial distance of the pixels in the neighborhood, but also the difference in the gray values of the pixels. For pixel x, N_x is defined as a window centered on x with radius r, and the bilateral kernel function inside is defined as follows: where σ_s and γ are the spatial parameter of the kernel function and the pixel-related parameter, respectively, and σ² is the noise variance. The expression of the dual-domain filter is: where I is the original image. The image is decomposed into low and high frequency components by dual-domain image decomposition. Texture features and details are distributed in the high frequency component. Therefore, the detail texture can be highlighted by enhancing the high-frequency image. In this paper, a simple and effective method of linear amplification is used to enhance the detail, and its expression is as follows:
f_biout(I) = I_outL + β I_outH    (16)
where β is the enhancement factor. After bi-histogram equalization and dual-domain image decomposition enhancement, combining the advantages of both, the final output enhanced image can be expressed as: where α is an adjustment factor used to control the contribution proportion of bi-histogram equalization and dual-domain image decomposition, and X is the output image. It can be seen from Equation (17) that there are six parameters to be determined.
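As a rough illustration of this pipeline, the sketch below splits the image at a brightness threshold and equalizes the two parts (a simplified stand-in for the BBHE-style transfer functions), separates low and high frequencies with a plain Gaussian filter in place of the dual-domain filter of [41], amplifies the high-frequency part as in Equation (16), and blends the two results as in Equation (17). Only four of the six parameters appear here, and the names (x_t, sigma_s, beta, alpha) are illustrative rather than the authors' exact formulation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def bi_histogram_equalize(img, x_t):
    """Split the image at threshold x_t and equalize each part separately.
    Simplified stand-in for the BBHE-style step; the paper optimizes x_t with EXPSO."""
    out = np.zeros_like(img, dtype=np.float64)
    lower = img <= x_t
    upper = ~lower
    for mask, lo, hi in ((lower, float(img.min()), float(x_t)),
                         (upper, float(x_t), float(img.max()))):
        if mask.sum() == 0:
            continue
        vals = img[mask]
        hist, bins = np.histogram(vals, bins=256)
        cdf = np.cumsum(hist) / hist.sum()          # cumulative probability of each bin
        out[mask] = np.interp(vals, bins[:-1], lo + cdf * (hi - lo))
    return out

def detail_enhance(img, sigma_s, beta):
    """Low/high-frequency split (Gaussian stand-in for the dual-domain filter [41])
    followed by linear amplification of the high-frequency part, cf. Eq. (16)."""
    low = gaussian_filter(img.astype(np.float64), sigma_s)
    high = img - low
    return low + beta * high

def enhance(img, x_t, sigma_s, beta, alpha):
    """Blend contrast and detail enhancement, cf. Eq. (17)."""
    contrast = bi_histogram_equalize(img, x_t)
    detail = detail_enhance(img, sigma_s, beta)
    out = alpha * contrast + (1.0 - alpha) * detail
    return np.clip(out, img.min(), img.max())
```

In this simplified reading, the parameters passed to enhance are exactly the kind of quantities the fitness function of the next subsection scores, so they can be handed to the optimizer as one candidate vector per particle.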
We next construct the fitness function and use the EXPSO algorithm to optimize these parameters to obtain the final image.
Fitness Function
In this study, five commonly used image evaluation indexes were used to construct the fitness function: entropy, average gradient, contrast, Niqe, and Brisque.
(1) Information entropy. Information entropy is used to measure the information contained in the image. The higher the information entropy, the richer the information contained in the image and, to some extent, the better the image quality. The calculation formula is as follows:
H(X) = − Σ_x P(x) log P(x)    (18)
where P(x) is the probability of the occurrence of gray value x.
(2) Average gradient. The average gradient reflects the change of gray value in the edge region of the image, which can reflect the sharpness of the image and the retention ability of the detail texture. The calculation formula is as follows:
(3) Contrast. Contrast can reflect the strength of the enhancement effect. Wu [42] put forward this definition of contrast in 2011, considering that the histogram of image I has N nonzero entries. The calculation formula of contrast is: where x_k is the gray level and p_k is the probability of gray level x_k.
(4) Niqe. Niqe is a no-reference image quality evaluation algorithm proposed by Mittal et al. [43] in 2013. It evaluates the image quality according to the distance between the feature model parameters of the image to be evaluated and pre-established model parameters. The evaluation value of the Niqe algorithm is consistent with human visual perception. The smaller the Niqe value, the better the image quality. In this article, N(X) represents the Niqe value of image X.
(5) Brisque. Brisque is a no-reference image quality assessment model based on natural scene statistics, operating on locally normalized luminance coefficients of the image. Samples with various types of distortion are used to train an SVM model, from which a quality score is finally calculated [44]. In this paper, B(X) represents the Brisque value of the image X.
Higher information entropy, higher average gradient, and higher contrast indicate better results, as do lower Niqe and Brisque values. Therefore, a multi-objective optimization model is constructed in this paper as follows: where H(X), A(X), C(X), N(X), and B(X) represent information entropy, average gradient, contrast, Niqe, and Brisque, respectively. The model is a multi-objective optimization problem. To simplify it, we normalize it into a single-objective optimization problem as follows:
ε_1 + ε_2 + ε_3 + ε_4 + ε_5 = 1    (22)
where I is the input image, ε_i are the weight factors, and X is the output image. This model has only boundary constraints. The proposed EXPSO algorithm is used to minimize the function F, and each parameter is solved and substituted into Equation (17) to obtain the final enhanced image.
EXPSO Algorithm Performance Experiment
Six function optimization problems were used to test the performance of the proposed EXPSO algorithm. The functions are shown in Table 2. The dimension of each function is 30. The proposed algorithm was compared with PSO [4], HFPSO [45], GQPSO [46], and HCQPSO [47]. The results are shown in Figure 4.
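Before turning to those results, the sketch below illustrates the two ingredients of the EXPSO update loop described earlier: an inertia weight that shrinks with iteration depth and adapts to each particle's relative fitness, and a jump applied to particles whose personal best has not improved for m consecutive iterations. The exact expressions of Equations (4)-(9) are not reproduced above, so the weight formula and the Cauchy-distributed jump here are placeholders, not the authors' exponential center-symmetric function or step-length rule.

```python
import numpy as np

def expso(f, dim, n_particles=30, iters=200, c1=2.0, c2=2.0,
          sigma1=1.0, sigma2=1.0, m=5, bounds=(-5.0, 5.0), seed=0):
    """Minimise f with a PSO variant whose inertia weight depends on the
    iteration depth and on each particle's relative fitness (EXPSO-style sketch)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    fit = np.apply_along_axis(f, 1, x)
    pbest, pbest_fit = x.copy(), fit.copy()
    gbest = pbest[pbest_fit.argmin()].copy()
    stagnant = np.zeros(n_particles, dtype=int)

    for h in range(1, iters + 1):
        s1 = 1.0 - 2.0 * h / iters                    # decays from ~1 to -1 with iteration depth
        span = fit.max() - fit.min() + 1e-12
        s2 = (fit - fit.min()) / span                 # 0 for the current best particle, 1 for the worst
        # placeholder for the exponential weight of Eq. (7), kept in a conventional PSO range
        w = np.clip(0.4 + 0.5 * np.exp(sigma1 * s1 - sigma2 * s2), 0.4, 0.9)

        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w[:, None] * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        fit = np.apply_along_axis(f, 1, x)

        improved = fit < pbest_fit
        stagnant = np.where(improved, 0, stagnant + 1)
        pbest[improved], pbest_fit[improved] = x[improved], fit[improved]

        # jump mechanism: perturb particles whose personal best stalled for m iterations
        stuck = stagnant >= m
        if stuck.any():
            trial = np.clip(pbest[stuck] + rng.standard_cauchy((int(stuck.sum()), dim)), lo, hi)
            trial_fit = np.apply_along_axis(f, 1, trial)
            better = trial_fit < pbest_fit[stuck]
            idx = np.flatnonzero(stuck)[better]
            pbest[idx], pbest_fit[idx] = trial[better], trial_fit[better]
            stagnant[idx] = 0

        gbest = pbest[pbest_fit.argmin()].copy()
    return gbest, pbest_fit.min()
```

For example, expso(lambda z: float(np.sum(z**2)), dim=30) minimizes the 30-dimensional sphere function, a benchmark of the same kind as the test functions in Table 2.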
It can be seen that the EXPSO algorithm in this paper has certain advantages in convergence accuracy and convergence speed.
Infrared Image Enhancement Experiment
To verify the effectiveness of the proposed algorithm, it was compared with state-of-the-art methods, namely SRRM [48], BBHE [30], CLAHE [39], DPE [34], EFF [49], CRM [50], and JED [51], on publicly available datasets, namely the OTCBVS Benchmark Dataset [52] and the FIR Sequence Pedestrian Dataset [53]. Information entropy, average gradient, contrast, Niqe, and Brisque were used as the objective evaluation factors. The experimental results are shown in Figure 5. It can be seen in the figure that the contrast of the original image is relatively weak. The image contrast is not significantly improved after processing by the algorithms shown in Figure 5b,f-h. Figure 5d is the processing result of the CLAHE algorithm, with good contrast, but details are lost. For example, the upper left corner of Img1 is too bright, resulting in details being lost, and the ground is too bright and the grass is too dark in Img2. Figure 5c,i presents good visual effects. It can be seen from Img1 and Img2 that the overall brightness of the algorithm in this paper is higher than that of the algorithm shown in Figure 5c. The algorithm in this paper can enhance the contrast while preserving the details and texture of the image. The performance of each algorithm can be further seen from the objective evaluation factors. The results under the information entropy index are shown in Table 3. Under the information entropy index, our proposed algorithm achieved the best results. The results show that the algorithm in this paper did not lose image information entropy, but increased it, while the traditional BBHE algorithm reduced the image information entropy. Average gradient results are shown in Table 4. In terms of average gradient, the algorithm proposed in this paper obtained the best results except on Img1. Contrast results are shown in Table 5. Under the contrast index, the algorithm in this paper greatly improved the contrast of the images and obtained the highest contrast. Niqe results are shown in Table 6. In terms of Niqe, the proposed algorithm worked best on all three images. Brisque results are shown in Table 7. In terms of Brisque, the proposed algorithm worked best on all three images. The objective evaluation factors also show that the algorithm in this paper increases the contrast of the image while preserving the image information.
Conclusions
In this paper, an improved PSO algorithm called EXPSO is proposed and applied to infrared image enhancement. The new exponential center symmetry inertia weight function is constructed and the local optimal solution jumping mechanism is introduced to make the algorithm consider both global search and local search. A new image enhancement method is proposed based on the advantages of the bi-histogram equalization algorithm and the dual-domain image decomposition algorithm. The fitness function is constructed by using five kinds of image quality evaluation factors (information entropy, average gradient, contrast, Niqe, and Brisque), and the parameters are optimized by the EXPSO algorithm, so that the parameters are determined to enhance the image. Experiments were carried out to verify the effectiveness of the proposed EXPSO algorithm and the effect of the image enhancement method.
Experimental results show that the EXPSO algorithm converges more quickly than the other four algorithms. In the image enhancement experiment, the proposed algorithm has good effect under five objective evaluation factors. The experimental results show that the proposed image enhancement method can not only improve the contrast of the image, but also preserve the details of the image.
5,349.4
2020-02-05T00:00:00.000
[ "Computer Science", "Engineering" ]
Seroconversion for SARS-CoV-2 in rheumatic patients on synthetic and biologics Disease Modifying Anti-Rheumatic Drugs in São Paulo, Brazil Introduction - To date, there is a lack of information on how immunomodulatory drugs for autoimmune rheumatic diseases (ARDs) impair humoral immune response following SARS-CoV-2 exposure. Hence, we examined anti-SARS-CoV-2 IgG/IgM positivity in ARD patients on disease-modifying anti-rheumatic drugs (DMARDs). Methods - We conducted a prospective study with ARD patients on different synthetic or biologic DMARDs (sDMARDs or bDMARDs) and control patients without DMARDs. All patients underwent a clinical baseline interview. They were tested for anti-SARS-CoV-2 IgG/IgM at baseline and three months later. Patients were monitored for incident respiratory symptoms during the follow-up. rRT-PCR for SARS-CoV-2 was performed for suspected COVID-19 infection. A univariate analysis was conducted according to antibody positivity to nd signicant associations for seroconversion. Results - We included one hundred patients for the analysis. Half of the patients who turned IgG positive in the study remained asymptomatic. All positive rRT-PCR patients showed seroconversion for anti-SARS-CoV-2 IgG. A borderline signicant association was found for bDMARD use in IgG-positive patients (42.9% vs. 19.8%, p=0.056). On the other hand, none of the patients on non-antimalarial sDMARD had detectable anti-SARS-CoV-2 IgG compared to 35.4% of the remainder of the sample, reaching borderline statistical signicance (0.0% vs. 35.4%, p=0.050). Conclusions - Serology for COVID-19 yielded a 14% incidence in our sample, half evolving asymptomatically. Temporally withholding bDMARD therapy in ARD patients during the pandemic based on possible humoral response impairment is not suitable. sDMARD was associated with a lower incidence of anti-SARS-CoV-2 IgG positivity, and further studies on this possible impact are warranted. Introduction Coronavirus disease 2019 , caused by a newly described beta coronavirus known as severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2 virus) [1], has spread worldwide since the rst o cial case was reported in Wuhan, Hubei Province, central China, in December 2019 [2]. Since then, with a well-marked feature of fast dissemination by inter-human contact, in addition to its high level of virulence, the disease has brought people to an unprecedented health crisis and has forced the World Health Organization (WHO) to declare that COVID-19 has become pandemic [3]. Currently, the entire world registers over 21 million infected people and a number of lethal cases of approximately 760.000 [4]. However, despite being classi ed as a major public health problem, coronavirus disease is usually characterized by the presence of mild respiratory symptoms (cough, fever, dyspnea and fatigue) accompanied by lymphopenia. Nevertheless, in the most severe cases, it might evolve to pneumonia with an acute respiratory syndrome and sometimes lead to death [5]. In this pandemic setting, for the purpose of identifying susceptible groups, it has been shown that patients with severe SARS-CoV-2 infection share some comorbidities, such as diabetes mellitus, arterial hypertension, coronary heart disease, and previous lung disease [6]. 
Since all these conditions are characterized by in ammation, it is reasonable to presume that COVID-19 infection might arise in patients with chronic in ammatory rheumatic diseases [7], especially because the increased risk posed by viral infections in in ammatory disease patients is well known [8]. Furthermore, most synthetic and biologic disease-modifying antirheumatic drugs (DMARDs), currently used in rheumatologic clinical practice, have already been shown to increase both the incidence and severity of infections in general [9,10]; thus, COVID-19 could additionally run a more severe course in these patients. However, with the increase in coronavirus scienti c data, from the initial case reports to tens of reasonably well-designed studies on risk factors for COVID-19, it became clear that in ammatory rheumatic diseases were seldom included as a risk factor both for incident or severe SARS-CoV-2 infection [11]. In addition, there is evidence of some biologic DMARDs being used for the treatment of severe cases of COVID-19 [12,13,14], as well as hydroxychloroquine, a drug used for a long time in rheumatic diseases, which has shown some e cacy in COVID-19 treatment [15]. The diagnosis of COVID-19 acute infection is based on clinical features but preferably con rmed by the detection of viral RNA in naso/oropharyngeal swabs by nucleic acid ampli cation methods such as realtime reverse transcription-polymerase chain reaction (rRT-PCR) and loop-mediated isothermal ampli cation (LAMP) [16,17,18]. Serologic tests for IgG, IgA and IgM anti-SARS-CoV-2, targeting different viral antigens, have recently been implemented in clinical practice. Its value resides in con rming exposure to SARSCoV-2, including patients with negative RT-PCR results, being more effective, particularly after 10 days of symptom onset, negative [19]. Serology may also prove to be important for identifying the development of persistent COVID-19 immunity, as detected by the persistence of serum antibody positivity, particularly for IgG [20]; however, whether it could prevent recurrent infection is still unknown. The ability to produce detectable levels of anti-SARS-CoV-2 antibodies after COVID-19 exposure seems to vary among patients. Some patients will develop high titers of IgM/IgA and most importantly IgG, while a substantial amount of them will not present any serum antibody detected by current methods, even after a PCR-con rmed COVID-19 infection [21]. The factors, clinical or demographic, that determine one person to produce detectable antibodies after exposure are unclear. Likewise, it is also unknown whether rheumatic patients and the use of conventional or biologic DMARDs have any effect on anti-SARS-CoV-2 antibody development. This study aimed to assess the serologic behavior of rheumatic patients on synthetic and biologic DMARD during the COVID-19 pandemics in São Paulo, Brazil. Patient selection One hundred patients (≥18 yrs) with a diagnosis of rheumatic diseases followed by ve rheumatologists (members of this research team: FMS, MOP, JBL, JFC, CPF) were enrolled in this prospective study from March 2020 to August 2020 in São Paulo, Brazil. To ensure representativeness of using multiple different synthetic and biologic DMARDs, a convenience sampling method was performed by selecting patients according to medication use into four groups: Group 1 (no antimalarial/DMARD), Group 2 (antimalarial monotherapy), Group 3 (antimalarial plus any other synthetic DMARD) and Group 4 (antimalarial plus biologic DMARD). 
Clinical and demographic data
Patients underwent a baseline clinical interview by telephone, email or office appointment to confirm medical information. Demographic and disease clinical data were collected. Patients were also asked at baseline whether they had had any respiratory symptoms suggestive of COVID-19 at any time since the beginning of the pandemic. They were then assessed weekly for a total period of 12 weeks using a specific questionnaire to monitor symptoms such as cough, rhinorrhea, dyspnea, anosmia, fatigue, diarrhea and fever, as well as the need for hospitalization.
Laboratory data assessment
Study participants were scheduled for two at-home blood sample collections for anti-SARS-CoV-2 IgM and IgG identification. An automated chemiluminescence immunoassay (CLIA) for the qualitative determination of IgG and IgM antibodies against the spike (S) and nucleocapsid (N) proteins from SARS-CoV-2 in human serum or plasma was run in the MAGLUMI analyzer (Snibe Diagnostics, Shenzhen, China) according to the manufacturer's instructions. The results are presented in arbitrary units per mL (AU/mL) in comparison to calibrators also provided in the kit. The first blood collection was drawn at baseline and the second one up to twelve weeks later. Between these two procedures, all patients were monitored through weekly telephone contact actively searching for new-onset respiratory symptoms. Symptomatic patients were referred to their treating rheumatologist to judge whether these symptoms could be otherwise explained by previous chronic respiratory conditions. If the acute respiratory syndrome was deemed to be highly suggestive of COVID-19 infection by the treating physician, then the patient was submitted to at-home naso/oropharyngeal swab collection for SARS-CoV-2 rRT-PCR testing. The combined naso/oropharyngeal swabs were immersed in 3 mL of sterile saline 0.9% and transported to the lab. RT-PCR: An aliquot of 200 µL was extracted by the DSP Virus/Pathogen kit in the automated platform QIAsymphony and eluted in 60 µL. Five microliters of eluate was subjected to rRT-PCR with primers and probe from the viral E gene in duplex to the cellular control RNAseP, as described [22], employing TaqMan Fast Virus 1-Step Master Mix (ThermoFisher, Brazil). A Ct value of 35 was adopted as the cut-off. The limit of detection was determined as 408 copies/mL by probit analysis using the ACCUPLEX SARS-COV-2 reference material (0505-0126, Seracare, USA). Patients whose serologic test resulted in IgG positivity at baseline were censored and thus not submitted to the second blood collection.
Statistical analysis
All demographic and clinical variables were compared between patients according to serologic status, which was assessed in four different scenarios: positivity for any immunoglobulin (Ig) at any time, positivity for IgG at any time, seroconversion for any Ig throughout the follow-up and seroconversion for IgG throughout the follow-up. Seroconversion was defined as the absence of the respective antibody at baseline followed by a later positive test. All analyses were performed using R software version 3.5.2 (R Development Core Team, 2005). Chi-square, Fisher's exact, Mann-Whitney, Student's t, and Welch's t tests were used as appropriate. A univariate analysis was performed between baseline variables for the different serologic classifications. The significance level was set at 5% (p=0.05).
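The analyses themselves were run in R, as stated above; purely as an illustration of the same univariate plan (chi-square or Fisher's exact test for categorical variables, Welch's t-test or Mann-Whitney for continuous ones), a Python sketch with hypothetical column names could look as follows. The data frame layout, the binary coding of the categorical variables, and the normality check used to choose between parametric and non-parametric tests are assumptions of this sketch, not details reported by the study.

```python
import pandas as pd
from scipy import stats

def univariate_by_serostatus(df, group_col="igg_positive",
                             categorical=("sex", "bdmard", "sdmard"),
                             continuous=("age",)):
    """Compare baseline variables between seropositive and seronegative patients.
    df holds one row per patient; group_col must be a boolean column; the
    categorical variables are assumed to be binary (2x2 tables)."""
    pos, neg = df[df[group_col]], df[~df[group_col]]
    results = {}
    for col in categorical:
        table = pd.crosstab(df[group_col], df[col])
        if (table < 5).any().any():                  # small cell counts -> Fisher's exact test
            _, p = stats.fisher_exact(table)
        else:
            _, p, _, _ = stats.chi2_contingency(table)
        results[col] = p
    for col in continuous:
        a, b = pos[col].dropna(), neg[col].dropna()
        if stats.shapiro(a).pvalue > 0.05 and stats.shapiro(b).pvalue > 0.05:
            _, p = stats.ttest_ind(a, b, equal_var=False)   # Welch's t-test
        else:
            _, p = stats.mannwhitneyu(a, b)
        results[col] = p
    return pd.Series(results, name="p_value")
```

Each returned p-value would then be read against the 5% threshold defined above, exactly as in the four serologic scenarios described for the study.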
The study was approved by the local ethical board (Ethics Committee from Hospital Santa Paula) and by the national ethical board (CONEP-National Commission on Ethics and Research) at the register number CAAE: 30444020.3.0000.0008. All patients signed a written informed consent form before enrollment, and the study was conducted in accordance with the Declaration of Helsinki [23]. Results A total of 100 patients were selected and included in the nal analysis ( Figure 1). The demographic data are described in Table 1. The cohort was largely represented by autoimmune rheumatic diseases. Systemic lupus erythematous (SLE) was the most common diagnosis (19%), followed by psoriatic arthritis (PsA) (16%) and rheumatoid arthritis (RA) (15%). The sample size for each group was as follows: Group 1 (n=28), Group 2 (n=23), Group 3 (n=23) and Group 4 (n=26). Twenty-six (26%) patients were not on any synthetic or biologic DMARD, including antimalarial drugs. These individuals represented a miscellaneous combination of rheumatic non-autoimmune diseases. They served the purpose of a control group (Group 4). At baseline, 7 (7%) patients tested positive for anti-SARS-CoV-2 antibodies, either IgG, IgM or both. Of these, 6 were positive for IgG and, hence, were censored. None except for 1 could recall any respiratory symptoms since the beginning of the pandemics. The patient who did recall respiratory symptoms presented four weeks before study enrollment with typical COVID-19 symptoms, including fever, fatigue, cough and dyspnea. By that time, her chest CT con rmed a highly likely COVID-19 pneumonia, and although she was admitted for a few days, no oxygen supplementation was warranted. Her recovery was unremarkable. The remaining 94 (94%) patients were submitted to weekly follow-up and nally to the second blood test. Thirty-three (33%) patients presented respiratory symptoms, mostly mild, during the follow-up. None of them required admission. Nine of these cases were considered highly suggestive of COVID-19 infection and were then submitted to SARS-CoV-2 rRT-PCR testing. Three (33.3%) were positive, and six (66.7%) were negative. Notably, all three positive rRT-PCR patients later had detectable anti-SARS-CoV-2 IgG. Additionally, two suspected patients whose rRT-PCR results were negative also had detectable anti-SARS-CoV-2 IgG in the follow-up. Twenty-one (21%) individuals tested positive for some anti-SARS-CoV-2 Ig at some point of the study. As expected, there was a trend for a higher incidence of respiratory symptoms among those who tested positive for some Ig compared to those who did not (52.6% vs. 29.1%, p=0.062). No other signi cant difference or trend was found when Ig-positive patients were compared to Ig-negative patients ( Table 2). Fourteen (14%) patients tested positive for anti-SARS-CoV-2 IgG at some point of the study. These patients were signi cantly older (54.3 yrs ± 8.2 vs. 45.2 yrs ± 14.6, p=0.002) than their IgG-negative counterparts. There was also a borderline signi cant association for more frequent use of bDMARDs in IgG-positive patients (42.9% vs. 19.8%, p=0.056) (Figure 2). It is remarkable to note that half of the patients (50.0%) who turned IgG positive in the study remained asymptomatic (Table 3). In Figure 3, the nal results for any time anti-SARS-CoV-2 positivity in the entire sample is depicted. Potential predictors for any Ig seroconversion and speci cally for IgG seroconversion were also assessed. 
Fourteen (14%) patients subsequently tested positive for some anti-SARS-CoV-2 Ig at follow-up after negative baseline serology. These patients presented more frequently with respiratory symptoms during the follow-up compared to those patients who remained persistently Ig negative (64.3% vs. 27.7%, p=0.012) ( Table 2). Eight (8%) patients developed detectable IgG in the second serology after testing negative at baseline. A trend for a higher incidence of respiratory symptoms was found in these patients compared to those who showed no IgG seroconversion (62.5% vs. 31.4%, p=0.075). While none of these patients were on use of sDMARD, nearly one-third of patients who remained IgG negative during the follow-up were on sDMARD, reaching borderline statistical signi cance (0.0% vs. 35.4%, p=0.050) ( Table 3). Discussion This was a prospective study in which all patients underwent the same standardized protocol, with blood serology by a highly accurate method at two different time points. Our study assessed the pattern of anti-SARS-CoV-2 antibodies during the pandemics of COVID-19 in Brazilian rheumatic patients, and we found that fourteen percent were infected by SARS-CoV-2, as con rmed by anti-SARS-CoV-2 IgG positivity. Herein, although infected patients presented more often with respiratory symptoms, it is remarkable to note that asymptomatic COVID-19 infections were fairly frequent in this population (50.0%). None of the patients showed severe COVID-19, and all patients who presented with respiratory symptoms in the study fully recovered. We also found a higher use of bDMARD and a lower use of sDMARD in those patients who turned SARS-CoV-2 IgG positive, even among asymptomatic COVID-19 infections. To date, this is the rst prospective study to assess anti-SARS-CoV-2 seroconversion in rheumatic disease patients. Synthetic and biologic DMARDs are well known for increasing both the frequency and severity of infections in rheumatic disease patients who are on chronic use [10]. Although the magnitude and propensity for speci c pathogens may vary among different drugs, on average, this has been true for both bacterial and viral etiologies [8,24]. In this scenario, COVID-19 started to be a challenge to rheumatologists: whether the rheumatic diseases or their own treatment could be a risk factor for SARS-CoV-2 infection or either for the outcome of coronavirus disease in those infected rheumatic patients. At rst, it was reasonable to expect that autoimmune rheumatic disease patients on synthetic and/or biologic DMARDs would be particularly vulnerable to more frequent and severe COVID-19 infections. Recently, different cohorts with rheumatic patients infected by SARS-CoV-2 have been published, and this idea has been contradicted [25,26,27,28]. However, some authors have shown that the clinical course and disease severity of COVID-19 in these patients are closely related to what overtakes the general population. Therefore, risk factors such as age and previous cardiovascular and pulmonary diseases are likely to play a major role in determining the risk for infection severity in rheumatic disease patients [29]. Accordingly, in our study, despite synthetic and biologic DMARD users, we found no severe clinical manifestations in our infected patients. However, how the immune system in synthetic and biologic DMARD users reacts to SARS-CoV-2 exposure and the degree to which its antibody production capacity is affected is vastly unknown. 
To contribute to lling in the knowledge gap on the matter, our cohort was able to show some seroconversion patterns in rheumatic disease patients on synthetic and biologic DMARDs after SARS-CoV-2 exposure. Fourteen (14.0%) percent of our cohort eventually had anti-SARS-CoV-2 IgG detected by CLIA, which has been shown to be highly speci c for diagnosing COVID-19 [30]. Supporting this is the fact that all PCR-con rmed COVID-19 infections in our cohort had a later IgG titer above the upper limits and were hence considered IgG positive. We did not consider isolated anti-SARS-CoV-2 IgM positivity as a surrogate of COVID-19 infection because of the cross reaction with rheumatoid factor IgM [31], present in part of our sample. Notably, the only patient who initially tested positive for IgM and negative for IgG further tested negative for both antibodies in the follow-up blood collection. He remained asymptomatic throughout the study. A second patient whose serology was negative in the rst blood exam tested positive for isolated IgM in the follow-up test. She also remained asymptomatic during the study and ever since. IgM titers can be detected before IgG increases in acute COVID-19 infections; however, persistent or transient positivity for IgM not followed by IgG detection is rather common in the authors' experience, and false positivity must be considered in these cases [21]. We found a statistical trend for a higher prevalence of bDMARD use in our patients who tested positive for anti-SARS-CoV-2 IgG when compared to patients not on bDMARDs. This difference must be interpreted with caution since it might simply result from a more frequent use of health services by bDMARD users than their counterparts. Hence, it should not be automatically taken as an immune promoting in uence or as any sort of COVID-19 infection protective role by bDMARDs. It is, however, reassuring to notice that slightly over one quarter (26.0%) of bDMARD patients in the study adequately produced anti-SARS-CoV-2 IgG, and none evolved into severe COVID-19 infection. Although no de nitive conclusion can be drawn from these data, it does seem that bDMARD users retain their humoral immunity against SARS-CoV-2. These results are in line with the recently published data from the COVID-19 Global Rheumatology Alliance, where bDMARD use was associated with less severe COVID-19 infection in autoimmune rheumatic disease patients [25]. In the opposite direction, the absence of non-antimalarial sDMARD users in those patients who seroconverted for anti-SARS-CoV-2 IgG during the follow-up must be interpreted with caution, as confounding factors might have in uenced this result. For instance, different levels of SARS-CoV-2 exposure may exist between sDMARD users and non-sDMARD users. Furthermore, the lack of anti-SARS-CoV-2 production may not necessarily be associated with a lack of immune response to COVID-19, as cellular immunity has been studied and seems to play a protective role in COVID-19 infection [32,33,34]. The strength of this cohort is based on the fact of being a prospective study analyzing the region with one of the highest COVID-19 infection incidences during the peak rate and the overwhelming health system; data reliability, as the responsible treating physicians were also members of the research team; the sensitivity and speci city of the serologic tests; and the fact that we were able to assess patients suspected for COVID-19 infection with PCR throughout the protocol. 
The limitations of the study include the sample, which was composed of patients diagnosed with a wide range of different rheumatic diseases, some of which were not autoimmune. Thus, a role for each of these conditions in SARS-CoV-2 seroconversion could not be assessed separately. Similarly, both sDMARD and bDMARD use encompassed many different drugs, and although a distinct effect on SARS-CoV-2 seroconversion is expected for each of these drugs, it could not be assessed due to the small sample size. Serology for COVID-19 yielded a 14% incidence in this population; half of these patients evolved asymptomatically, and none presented severe clinical manifestations. Hence, temporarily withholding rheumatic patient treatment during the pandemic based on this concern is not warranted. Furthermore, bDMARD use seems not to hamper the humoral immune response to SARS-CoV-2, although no definite conclusion about this matter can be drawn from our study, and sDMARD use was associated with a lower incidence of anti-SARS-CoV-2 IgG positivity. Whether sDMARD hampers the humoral immune response, switches humoral to cellular immunity or even impacts COVID-19 infection remains to be elucidated.
Figure 1. Schematic overview of the study design. Prospective study with four treatment arms. A total of 100 patients with rheumatic diseases were enrolled. Six of these patients had detectable anti-SARS-CoV-2 IgG at baseline. All patients were followed for up to 12 weeks with regular weekly telephone contact actively searching for incident respiratory symptoms. Seroconversion for anti-SARS-CoV-2 IgG was found in eight patients.
4,667.6
2020-10-23T00:00:00.000
[ "Medicine", "Biology" ]
Phase transitions in small-world systems: application to functional brain networks
In the present paper the problem of symmetry breaking in systems with a small-world property is considered. The obtained results are applied to the description of functional brain networks. The origin of the entropy of fractal and multifractal small-world systems is discussed. Applying the maximum entropy principle, the topology of these networks has been determined. The symmetry of the regular subgraph of a small-world system is described by a discrete subgroup of the Galilean group. An algorithm for determining this group and the transformation properties of the order parameter is proposed. The integer basis of the irreducible representation is constructed and a free energy functional is introduced. It is shown that accounting for the presence of random connections leads to an integro-differential equation for the order parameter. For q-exponential distributions the equation of motion for the order parameter takes the form of a fractional differential equation. We consider a system described by a two-component order parameter and discuss the features of the spatial distribution of solutions.
Introduction
Recently Eguiluz et al. [1] have presented a method for constructing functional brain networks from the results of functional magnetic resonance imaging measurements in a human. In these experiments, the magnetic resonance activity of certain parts of the brain (so-called voxels) is measured at each discrete time step. By x_i(t) we denote the activity of voxel i at the instant of time t. It was proposed to consider two voxels functionally linked if the value of their temporal correlation exceeds a certain positive threshold r_c, independent of the value of their anatomical connection. The correlation coefficient between any pair of voxels i and j is calculated as
r(i, j) = [⟨x_i(t) x_j(t)⟩ − ⟨x_i(t)⟩⟨x_j(t)⟩] / (σ_i σ_j),    (1)
where σ_i² = ⟨x_i²(t)⟩ − ⟨x_i(t)⟩², brackets ⟨…⟩ represent temporal averages and x_i(t) is the blood oxygenation level dependent signal of voxel i in the case of brain scanning data. The elements r(i, j) of the correlation matrix determine the value of correlations among various parts of the cerebral cortex. Using highly correlated nodes, Eguiluz et al. [1] have constructed a network and determined that the degree distribution of the obtained network has the form P(k) ~ k^(−γ), where γ ≈ 2 (figure 1) [1]. It has also been shown that these networks possess a small-world structure, a community structure and are fractals [2]. However, it should be stressed that the degree distributions for r_c = 0.6 and r_c = 0.7 are actually described by the q-exponential distribution, with q ≈ 1 for r_c = 0.7 and q > 1 for r_c = 0.6 [3]. For r_c = 0.5 the form of the degree distribution of the correlation network is characteristic of stretched exponential probability distributions [3]. Fitting with the help of formula (2) is actually performed by using the maximum likelihood method [5]. In paper [6] the correlation network obtained from functional magnetic resonance imaging measurements is compared with the one derived from numerical simulation of the 2D Ising model at various temperatures. Near the critical temperature a striking similarity in the statistical properties of these two networks is observed, making them indistinguishable. The similarity of these networks suggests that a collective dynamics is inherent in the human brain.
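A minimal sketch of this construction, assuming the voxel time series are available as a matrix, is shown below: pairwise temporal correlations are computed as in Equation (1), voxels are linked when the correlation exceeds the threshold r_c, and the empirical degree distribution is read off. The synthetic signals and the threshold value in the usage lines are illustrative only; the paper works with real BOLD signals and thresholds around 0.5-0.7.

```python
import numpy as np

def functional_network(signals, r_c=0.6):
    """signals: array of shape (n_voxels, n_timesteps) holding x_i(t).
    Returns the adjacency matrix of the thresholded correlation network
    and the degree of every voxel."""
    r = np.corrcoef(signals)                 # correlation matrix r(i, j), cf. Eq. (1)
    np.fill_diagonal(r, 0.0)                 # ignore self-correlations
    adjacency = r > r_c                      # link voxels i and j when r(i, j) > r_c
    degrees = adjacency.sum(axis=1)
    return adjacency, degrees

def degree_distribution(degrees):
    """Empirical P(k); for thresholds around 0.6-0.7 the text reports a
    q-exponential / approximately power-law form with gamma near 2."""
    k, counts = np.unique(degrees, return_counts=True)
    return k, counts / counts.sum()

# toy usage with synthetic signals that share a weak common component
rng = np.random.default_rng(1)
common = rng.standard_normal(300)
x = rng.standard_normal((200, 300)) + 0.4 * common   # 200 "voxels", 300 time steps
adj, deg = functional_network(x, r_c=0.15)           # low toy threshold just to obtain links
k, pk = degree_distribution(deg)
```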
On the other hand it also follows that the dynamics of the brain functioning in such systems takes place near the critical point and generates spatio-temporal structures. To study the dynamics generating spatio-temporal structures in brain we develop the theory of the Landau -Ginzburg type for the systems with a small-world structure. We first introduce an algorithm to generate small-world networks. Then applying the maximum entropy principle and accounting multifractality of the system we determine a degree distribution for such systems. Using the principle of least action we derive the equation of motion for the order parameter representing spatio-temporal structures near the critical point. We also consider a specific example. Topology of functional brain network with a small-world property The algorithm constructing a small-world network was for the first time proposed by Watts and Strogatz [7]. At the initial instant of time there is one-dimensional lattice of nodes with periodic boundary conditions where each link connecting a vertex to one of its nearest neighbours in the clockwise sense is left in place with probability 1 − , and with probability is reconnected to a randomly chosen other vertex. Long range connections are therefore introduced. As a result a network structure with a small-world property emerges. However this network is not scale-free that is it has degree distributions with non-power law forms [8]. A human brain consists of ~10 10 neurons, each of which is connected with other neurons by ~2 • 10 4 links. As is shown in [1] the degree distributions of functional brain networks in the log-log scale have a rectilinear region. Then another construction of a small-world network is a more convenient model for a human brain. We proceed from a closed system of nodes with periodic boundary conditions where each node is connected with the neighbors by links. A new edge is added to this system at each instant of time, one of the ends of this edge being connected with one of the nodes of the regular lattice with probability 1/ while the other end of the edge being connected with that in accordance with the preferential attachment principle / . Thus the rate of the connectivity change of the node (without taking into account the contribution of the initial regular graph edges) is determined by two contributions and represented by the equation Taking into account that at the instant a full network connectivity is = 2 and the change in the total degree of the network at one time step is ∆ = − − 1 = 2 we obtain = 1. Then equation (3) takes the form The solution of this equation has the form where is a constant of integration which can be determined from the condition = 2 . The degree distribution of this network is described by the -exponential distribution in the form [9] Here is a measure of complexity of the system. The value = 1 corresponds to the degree distribution in the form of the Gaussian distribution emerging at large temporal shaping of such system. At ≠ 1 and for large enough the network is characterized by the degree distribution emerging at the earlier steps of forming a network. Distribution (6) describes the results of processing of functional magnetic resonance imaging measurement data = 0.6 [1] pretty well. The presented algorithm shows that order and disorder are inherent in small-world systems. However it should be taken into consideration that the structure of these networks may be homogeneous, fractal and multifractal. 
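Since the normalizations entering Equations (3)-(5) did not survive extraction above, the sketch below implements only the verbal recipe: a regular ring in which every node is linked to its nearest neighbours, to which one edge is added per time step, with one endpoint chosen uniformly and the other chosen with probability proportional to its degree. The node count, the number of ring neighbours per side and the number of added edges are illustrative parameters, not values from the paper.

```python
import numpy as np
import networkx as nx

def grow_small_world(n_nodes=500, n_neighbors=2, n_new_edges=2000, seed=0):
    """Regular ring substrate plus preferentially attached long-range edges."""
    rng = np.random.default_rng(seed)
    # regular substrate: each node linked to its n_neighbors nearest neighbours on each side
    g = nx.circulant_graph(n_nodes, offsets=list(range(1, n_neighbors + 1)))
    for _ in range(n_new_edges):
        i = int(rng.integers(n_nodes))                       # uniform endpoint (probability 1/N)
        degrees = np.array([g.degree(v) for v in range(n_nodes)], dtype=float)
        degrees[i] = 0.0                                     # avoid self-loops
        j = int(rng.choice(n_nodes, p=degrees / degrees.sum()))  # preferential attachment ~ k_j
        g.add_edge(i, j)                                     # duplicate picks leave the edge count unchanged
    return g
```

Running degree_distribution on the degrees of such a grown graph gives a skewed distribution of the general kind discussed above, with the regular ring contributing the "ordered" part and the added edges the "disordered" part.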
To derive a distribution function for such systems we use the maximum entropy principle. The famous Boltzmann-Gibbs entropy which is determined as is characteristic for homogeneous structures, where the distribution function is normalized to unity. We consider the quantity Fractional generalization of this integral gives where is a gamma function. For admissible systems when = in case of a fractal structure we have = ln presenting the Shafee entropy [8]. It is clear that at = 1 we obtain the Boltzmann -Gibbs entropy. A multifractal is a mixture of fractals. Suppose that dimensions of fractals in a multifractal are distributed uniformly that is = 1/( − 1). Then multiplying (9) by and integrating over we obtain Consequently for admissible systems we have representing the Tsallis entropy [7]. If → 1 the Tsallis entropy coincides with that of Boltzmann -Gibbs. These results can be generalized if we consider a multifractal structure as a mixture of fractal substructures. We introduce the distribution function allowing to introduce an equation for determination of the systems entropy and , is an incomplete gamma function. Considering the standard constraints =1 = 1 and where are observed quantities and applying the maximum entropy principle we determine the distribution function where = 1, … , , and and are the Lagrange multipliers determined from the constraints. Distribution (16) is characteristic of the systems with a skewed degree distribution. Note that distribution function (16) describes the degree distribution of the functional brain network precisely enough in case = 0.5. Using the constraints = 1 , −1 = , −1 2 = 2 and entropy (11) in the framework of the maximum entropy principle we obtain the degree distribution in the form (6) describing the degree distribution of the functional brain network in case = 0.6, and where = [9]. In case of constraints = 1, = and entropy (11) the maximum entropy principle gives a distribution function in the form where describing the degree distribution of the functional brain network precisely enough in case = 0.7. Order parameter in functional brain networks The exact solution of the Ising model has been found only for the one-dimensional and twodimensional cases in the zero external field [12]. So for the description of spatio-temporal structures occurring in the functional brain networks we shall develop a theory of phase transitions of Landau -Ginzburg type [13]. It should be stressed that the theory of phase transitions in graphs is discussed in [14] and analysis of the systems with long-range space interactions and temporal memory is given in [15]. We shall proceed from the construction of a small-world network which is a regular graph with additional shortening links. Under such consideration a regular substructure of a small-world network could possess a symmetry of the discrete subgroup of the Galilean group. There occurs a problem to determine transformation properties of the order parameter. To solve this problem we first have to determine a number of the components of the order parameter involved into the process under consideration. We proceed from the measured time series and use the Takens method [16,17]. Choosing from a set of experimental data equidistant points, we obtain a set of discrete variables 0 , … , −1 + − 1 , where = 1, … , − 1 and is a temporal shift. It is necessary construct the points of the phase space and calculate a correlation function of the attractor. where is the Heaviside function and is distance. 
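The delay-embedding procedure and the correlation integral of the attractor can be sketched numerically as follows. The toy signal, delay and embedding dimensions are assumptions for illustration only; the correlation sum C(r) is the fraction of point pairs closer than r (the Heaviside-function sum mentioned above), and its log-log slope is the quantity examined in the next step.

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Takens delay embedding: rows are points (x_t, x_{t+tau}, ..., x_{t+(dim-1)tau})."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

def correlation_sum(points, r):
    """C(r): fraction of point pairs whose distance is below r."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    iu = np.triu_indices(len(points), k=1)
    return np.mean(d[iu] < r)

if __name__ == "__main__":
    t = np.arange(0, 200, 0.05)
    x = np.sin(t) + 0.5 * np.sin(2.3 * t)        # toy signal standing in for measured data
    for dim in (2, 3, 4, 5):
        pts = delay_embed(x, dim, tau=10)[:800]
        radii = np.logspace(-1.5, 0.3, 8)
        c = np.array([correlation_sum(pts, r) for r in radii])
        good = c > 0
        slope = np.polyfit(np.log(radii[good]), np.log(c[good]), 1)[0]
        print(f"embedding dimension {dim}: correlation-dimension estimate ~ {slope:.2f}")
```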
Further we construct the dependence ln on ln [16,17]. If the value of the slope depending on reaches a plateau above a certain then the system represented by the present temporal sequence must have an attractor. The value which reached saturation should be considered as a dimension of the attractor and the value above which the saturation is observed should be considered as the minimum number of variables necessary for modulation of the behavior corresponding to the current attractor [16,17]. Based on the results of the analysis of the time series we can choose the as a number of the order parameters. If 0 is an irreducible representation of the discrete subgroup of the Galilean group of a regular subgraph and , = 1, … , , = 1, … , are the order parameters which are transformed in accord with this irreducible representation where is a number of wave vectors in the star * of the irreducible representation 0 and is dimension of a small representation, then the dimension 0 is = × . Thus the choice of a particular irreducible representation could be made from the standard reference books of irreducible representations [18]. It should be stressed that if the wave vector under consideration is expressed irrationally in terms of vectors of the reciprocal lattice then only invariants of the rotation group have corresponding irreducible representations. If the wave vector is expressed rationally in terms of vectors of the reciprocal lattice then the irreducible representation under consideration admits anisotropic invariants. The phenomenon of hysteresis is inherent in a human brain and thus an irreducible representation may not satisfy the Landau condition that is contain invariants of the third order. As small-world systems are initially inhomogeneous a free energy functional must have the Lifshitz invariant. So the task is to consider irreducible representations corresponding to the internal points of the Brillouin zone. During the transition into modulated structures the point symmetry elements of the initial phase remain [19]. As the Landau and Lifshitz conditions are violated we deal with phase transitions of the first order. Derivation of the equation of motion for the order parameter near the critical point in the functional brain network To derive an equation of motion for the order parameter in the system with a small-world property we determine the free energy functional in the form [13,15]: (22) We took into account the presence of random shortening links in the structure. Here is a spatial coordinate, is time and the functions 0 , , ′ , ′ and 1 , , ′ , ′ describe the influence of the small-world property on the critical properties of the system. Integration is performed over the region in two-dimensional space 2 to which , belong. The equation of motion for the order parameter , is derived with the use of the Gateaux derivative of the functional , , determined as: where = is a smooth integrable function. Further it is suitable to introduce the functions: We note that such choice of 0 , , ′ , ′ and 1 , , ′ , ′ allows to separate spatiotemporal derivatives. The dynamic equation for the order parameter is determined from the principle of stationarity , = 0 and for the arbitrary function , has the form: This is an integro-differential equation allowing to obtain an equation of motion for the order parameter for various kernels 0 , , ′ , ′ 0 and 1 , , ′ , ′ . We suppose that 0 , , ′ , ′ = − ′ 0 , ′ , 1 , , ′ , ′ = − ′ 1 , ′ . In this case time and the spatial coordinate separate. 
In case of one component order parameter in accord with the symmetry arguments , = 2 2 , + 4 4 , , then from (27) we obtain: Here is a control parameter of the system and is a positive parameter. The form of the function is determined from the condition of invariance. Hence includes the integer basis of invariants of the irreducible representation of the space group of the high symmetry phase. The solutions of (28) are investigated in [20]. Systems described by two-component order parameter In case of the system described by the two-component order parameter which is transformed according to the irreducible representation corresponding to the internal rational point of the Brillouin zone we have an integer basis consisting of two invariants. In the polar coordinate system these invariants have the form 2 and cos , where and are an amplitude and a phase of the order parameter correspondingly and is a parameter of anisotropy. In this case a spatial dependence of the order parameter in accord with (27) takes the form: The integral expression in (29) could be considered as averaging over the distribution function where is a renormalized phase of the order parameter. When = 2 the spatial distribution of is determined by the equation (33) In case 0 > 0 2 , the trajectory increases indefinitely. where am is the Jacobi elliptic function. The solution corresponding to 0 = 0 2 determines a separatrices. In this case d d = ±2 0 cos 2 , and for the initial condition = 0 = 0 the solution of this equation has the form: For finite motion the solution of fractional equation (30) is determined as is the Mittag-Leffler function and the function + is the Euler gamma function. When = 2 we have 2.2 − 2 = sin and the solution of the linear fractional equation has the form = sin . When 1 < < 2, we have a continual number of differential equations. The oscillating function with a decreasing amplitude is a solution of (30). Conclusion Order and disorder are inherent in small-world systems and consequently in functional brain networks. We propose the algorithm constructing a network equivalent to functional brain networks. We suppose that a regular substructure of a small-world network could be described by a discrete subgroup of the Galilean group. In such approach random connections generate the medium in which the order parameter moves. We have presented the equations allowing to determine an entropy for fractal and multifractal small-world networks and applying the maximum entropy principle we have derived characteristic degree distribution functions. The obtained various distribution functions in the log-log scale have a rectilinear region. Using this property with the help of the principle of the least action we have derived an equation of motion for the order parameter in the form of a fractional differential equation. We have presented the algorithm for determination of the transformation properties of the order parameter. The Rapp diagram is constructed from the analysis of the measured time series. The number of the components of the order parameter is determined from this diagram. On the other hand the number of the components of the order parameter coincides with the dimension of the irreducible representation equal to that of the irreducible representation of a small group multiplied by a number of wave vectors in the star corresponding to the wave vector under consideration. These data can be found in literature [13]. 
We show that in the case of functional brain networks only internal points of the Brillouin zone should be considered; we therefore deal with phase transitions that proceed without the loss of point symmetry elements [14]. We also show that, for a two-component order parameter, the spatial distribution of the order parameter is determined by a one-dimensional sine-Gordon fractional differential equation. A change of the fractional order, caused by the fractal dimension of the structure, changes the medium in which the order parameter moves: it can be shown that if the fractional order of the equation of motion for the order parameter decreases from 2 to 1, the non-zero solution becomes identically zero. Thus our results shed light on the principles of human brain functioning near the threshold of criticality.
4,226.6
2015-04-13T00:00:00.000
[ "Mathematics" ]
Hopfion canonical quantization We study the effect of the canonical quantization of the rotational mode of the charge Q=1 and Q=2 spinning Hopfions. The axially-symmetric solutions are constructed numerically, it is shown the quantum corrections to the mass of the configurations are relatively large. Introduction Since the early 1960s, the topological solitons have been intensively studied in many different frameworks. These localized regular field configuration are rather a common presence in non-linear theories, they arise as solutions of the corresponding field equations in various space-time dimensions. Examples in 3+1 dimensions include well known solutions of the Skyrme model [1], monopoles in Yang-Mills-Higgs theory [2] and the solitons in the Faddeev-Skyrme model [3], [4]. Though the structure of the Lagrangian of the Faddeev-Skyrme model is exactly the same as Skyrme theory, the topological properties of these models are very different, while in the former model the O(4) scalar field is the map S 3 → S 3 , the triplet of the Faddeev-Skyrme fields is the first Hopf map S 3 → S 2 . It was shown that solutions of the latter model should be not just closed fluxtubes of the fields but knotted field configurations [5]. Consequent analysis revealed a very rich structure of the Hopfion spectrum [6,7]. A number of different models which describe topologically stable knots associated with the first Hopf map S 3 → S 2 are known in different contexts. It was argued, for example, that a system of two coupled Bose condensates may support Hopfionlike solutions [8], or that glueball configurations in QCD may be treated as Hopfions [9]. One of the reasons for the interest in Skyrme model is related with the suggestion that, in the limit of large number of quark colours there is a relation between this model and the low-energy QCD with an identification between topological charge of the Skyrmion and baryon number [11,12]. This approach involves a study of spinning Skyrmions and semiclassical quantization of the rotational collective coordinates as a rigid body. The classical Skyrmion is usually quantized within the Bohr-Sommerfeld framework by requiring the angular momentum to be quantized, i.e., the quantum excitations correspond to a spinning Skyrmion with a particular rotation frequency. In the recent paper [13] an axially symmetric ansatz was used to allow the spinning Skyrmion to deform. Furthermore, it was suggested to treat the Skyrme model quantum mechanically, i.e., apply the canonical quantization of the collective coordinates of the soliton solution to take into account quantum mass corrections [14]- [17]. It turns out the correction decreases the mass of the spinning Skyrmion, so one can expect similar effect in the Faddeev-Skyrme model. Similarity between the Lagrangians of the Faddeev-Skyrme and Skyrme models suggests to take into account (iso)rotational collective degrees of freedom of the Hopfions whose excitation may contribute to the kinetic energy of the configuration and strongly affect other properties of the spinning Hopfions [18]. An obviously relevant generalization then is related with canonical quantization of the rotational excitations. Though the spinning Hopfions were considered in early paper [4], a systematic study of their properties was not performed yet. One of the reason of that is that consistent consideration of the soliton solution of the Faddeev-Skyrme model is related with rather complicated task of full 3d numerical simulations [6,7]. 
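Before turning to the axially symmetric construction, the first Hopf map S^3 -> S^2 mentioned above can be illustrated with a short numerical check. The sketch below uses the standard textbook parametrisation in which a unit 4-vector is written as a complex doublet z and the image point is phi_a = z^dagger sigma_a z; it is an illustration, not code from the paper.

```python
import numpy as np

# Pauli matrices
SIGMA = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def hopf_map(point_s3):
    """First Hopf map S^3 -> S^2: write a unit 4-vector (a, b, c, d) as a
    complex doublet z = (a + i b, c + i d) and return phi_a = z^dag sigma_a z,
    which automatically satisfies |phi| = 1."""
    a, b, c, d = point_s3
    z = np.array([a + 1j * b, c + 1j * d])
    return np.array([np.real(np.conj(z) @ (s @ z)) for s in SIGMA])

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    v = rng.normal(size=4)
    v /= np.linalg.norm(v)          # a random point on S^3
    phi = hopf_map(v)
    print("phi =", np.round(phi, 4), " |phi| =", round(float(np.linalg.norm(phi)), 6))
```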
However this task becomes much simpler if we restrict our consideration to the case of the axially symmetric Hopfions of charge 1 and 2. In this Letter we are mainly concerned with canonical quantization of the rotational collective coordinates of these Hopfions. The model Let us begin with a brief review of the Faddeev-Skyrme model in 3+1 dimensions which is the O(3)-sigma model modified by including a quartic term: Here φ a = (φ 1 , φ 2 , φ 3 ) denotes a triplet of scalar real fields which satisfy the constraint |φ a | 2 = 1. For finite energy solutions the field φ a must tend to a constant value at spatial infinity, which we select to be φ a (∞) = (0, 0, 1). This allows a one-point compactification R 3 ∼ S 3 , thus topologically the field is the map φ(r) : R 3 → S 2 characterized by the Hopf invariant Q = π 3 (S 2 ) = Z and is the "pion" mass term which is included to stabilize the spinning soliton. Note that our choice for this term is a bit different from the usual mass term in the conventional Skyrme model (i.e., µ 2 (1 − φ 3 ) ) since for the fields on the unit sphere it seems to be more convenient to perform numerical calculations. The energy of the Faddeev-Skyrme model is bound from below by the Vakulenko-Kapitansky inequality [19] E ≥ const|Q| 3 4 . In the classical case one can rescale the Lagrangian (1) to absorb the coupling κ into the rescaled mass constant, however consequent canonical quantization of the spinning Hopfion does not allow us to scale this constant away. For the lowest two values of the Hopf charge Q = 1, 2 the Hopfion solutions can be constructed on the axially symmetric ansatz [4] parametrised by two functions f = f (r, θ) and g = g(r, θ) of r, θ as a triplet of the scalar fields in circular coordinate system where n, m ∈ Z. An axially-symmetric configuration of this type A m,n has topological charge Q = mn, where the first subscript labels the number of twists along the loop and the second is the usual O(3) sigma model winding number associated with the map S 2 → S 2 , thus the ansatz (2) corresponds to the configurations A 1,1 and A 2,1 . Furthermore, one readily verifies that the parametrization (2) is consistent, i.e. the complete set of the field equations, which follows from the variation of the original action of the model (1), is compatible with two equations which follow from variation of the reduced action on ansatz (2). However this trigonometric parametrization is not very convenient from the point of view of numerical calculations because of the numerical errors which originate from the disagreement between the boundary conditions on the angular-type function g(r, θ) on the ρ-axis and the boundary points r = 0, ∞, respectively 1 . Indeed, the reduced classical rescaled two-dimensional energy density functional, resulting from the imposition of axial symmetry stated in ansatz (2), is given by 1 Note that numerical difficulties of the same type are common in the Skyrme model [20]. Hopfions at µ 2 = 2 and ω = 0. The resulting system of the Euler-Lagrange equations can be solved when we impose the boundary conditions such that the resulting field configuration will be regular on the symmetry axis, at the origin and on the spatial asymptotic. The charge Q = 1 A 1,1 configuration possesses the maximum of the energy density at the origin, the energy density isosurfaces are squashed spheres as seen in Fig.1, left. The charge Q = 2 A 2,1 solutions have toroidal structure( see Fig.1, right). 
Inclusion of the mass term increases the attraction in the system, the total energy of the massive Hopfion increases monotonically as mass parameter µ increases [21]. The residual O(2) global symmetry of the ansatz (2) with respect to the rotations around the third axis in the internal space allows us to consider the stationary spinning classical Hopfions Here, to secure stability of the configuration with respect to radiation, the rotation frequency ω is a parameter restricted to the interval Substituting this ansatz into the lagrangian (1) gives where M is the static energy of the Hopfion and Λ is the moment of inertia Λ = 1 16π sin θdrdθ sin 2 f 2r 2 + ∂f ∂θ and the conserved quantity is the classical spin of the rotating configuration J = ωΛ. Note that the structure of the expression for the density of the moment of inertia (7) in the rigid body approximation does not depend on the phase function g(r, θ). However the function f (r, θ) is angle dependent. The mass of the static Hopfion as a function of the parameter µ is presented in Fig. 2 As the angular velocity ω increases, the total energy of the spinning configuration as well as the moment of inertia and the angular momentum are increasing monotonically [18]. Investigation of the energy density distribution reveal very interesting picture, as ω increases a hollow circular tube is formed inside the Hopfion energy shell, both for the charge 1 and charge 2 as shown in Figs.3. The moment of inertia of the configuration diverges as ω → µ. The classical spinning Hopfion can be quantized within the Bohr-Sommerfield scheme by requiring the spin to be quantized as J 2 = j(j + 1), where j is the rotational quantum number taking half-integer values [4,22]. The difference between our approach, where rotation occurs only around z axis and therefore is characterized by of U(1) representations (i.e. takes only integer values), and the discussion presented in the paper [22] in that in the latter case the charge Q = 1 A 1,1 configuration was considered by a analogy with the case of the spinning Skyrmion where the usual hedgehog ansatz U = exp (iF (r)(n a · τ a )) with a single radially dependent profile function f (r) was implemented instead of the parametrization (2). The relation between these two parametrizations can be explicitly written as The functions f (r, θ) and g(r, θ) which parametrize the axially-symmetric ansatz (2) are related to the approximation by radial function F (r) of [22] as cos f (r, θ) = cos(2θ) sin 2 F (r) + cos 2 F (r), tan g(r, θ) = cos F (r) sin F (r) cos θ . Surprisingly, the hedgehog parametrization works extremely well for the minimal energy A 1,1 configuration. It was pointed out also by Ward [23] who used the stereographic parametrization of the A 1,1 and A 2,1 Hopfions in terms of the single radial-dependent function F (r). For the former case this parametrisation is: The relation to the ansatz (2) is given by the expression thus, we can represent the profile functions f (r, θ) and g(r, θ) as tg g(r, θ) = − F r cos θ . Finally, note that these two radial functions F (r) and F (r) which are used in the parametrizations (8) and (11), respectively, are related as Thus we will revisit the problem of the canonical quantization of the Hopfion using approach previously discussed in [15]- [17]. For the sake of simplicity here we restrict our analyse to the case of the axially-symmetric configurations A 1,1 , A 2,1 . 
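A minimal numerical illustration of the rigid-body relations quoted above (J = omega*Lambda and E = M + J^2/(2*Lambda), together with the Bohr-Sommerfeld condition J^2 = j(j+1)) is sketched below. The values of the static mass M and the moment of inertia Lambda are placeholders in model units, not results from the paper.

```python
import math

def spinning_energy(mass, inertia, j):
    """Rigid-body rotational band: J**2 = j*(j+1), omega = J/Lambda,
    E = M + J**2 / (2*Lambda)."""
    j_sq = j * (j + 1)
    omega = math.sqrt(j_sq) / inertia
    energy = mass + j_sq / (2.0 * inertia)
    return omega, energy

if __name__ == "__main__":
    mass, inertia = 1.0, 0.8        # placeholder static mass and moment of inertia
    for j in (0, 0.5, 1, 1.5, 2):   # half-integer values allowed in the Bohr-Sommerfeld scheme
        omega, energy = spinning_energy(mass, inertia, j)
        print(f"j = {j}: omega = {omega:.3f}, E = {energy:.3f}")
```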
Similarity of the Lagrangian (1) with the conventional Skyrme model suggests that in order to apply the standard canonical quantization procedure it is convenient to re-express the expression (1) in terms of the hermitian matrix fields which parametrises the Hopfion configuration. This matrix can be written compactly as where the usual algebra of the Pauli matrices (τ + , τ 0 , τ − ) yields Here the symbol in the square brackets is the SU (2) Clebsh-Gordan coefficient. Quantization. Momenta of inertia Similarity of the form of the Lagrangian (19) with that of the Skyrme model suggests that we can quantize the rotational degrees of freedom of the axiallysymmetric Hopfion by wrapping the ansatz (16) with time-dependent unitary matrices A q(t) [12] which rotates the configuration about the third axis: Thereafter the collective rotational degrees of freedom q(t) are treated as quantummechanical variables, i.e. the generalized rotational coordinate q(t) and velocitẏ q(t) satisfy the commutation relations q, q = if 00 . The explicit form of the constant f 00 will be completely determined by canonical commutation relations between quantum coordinates and momenta. As usual, to calculate the effective Lagrangian of the rotational zero mode we have to evaluate the time derivative of the matriẋ Taking into account the commutation relation (21) we obtain Then, keeping only terms proportional to the square of the angular velocity the effective kinetic Lagrangian density can be written as Utilizing the definition of the moment of inertia (7) we can write Thought the expression (26) coincides with its classical counterpart in (6), the corresponding quantum momentum is conjugated to the rotational collective coordinate q and it is defined asp Thus, the canonical commutation relation p, q = −i allows us to define f 00 = 1 Λ . We can also define the U(1) group generator which is the angular momentum operatorĴ = −p = −Λq for eigenstates |k = e −ikq |0 with integer eigenvalues k = 0, ±1, ±2, . . .. We are now in position to evaluate the explicit form of the quantum-mechanical Lagrangian of the Faddeev-Skyrme model. Using expression (24) we obtain: The total effective Hamiltonian corresponds to the complete Lagrangian L = L cl + L q which includes both classical and quantum mechanical parts: Here the quantum mass correction ∆M appears when the canonical commutation relation is taken into account: (31) where we used the definition (7). Note that an interesting peculiarity of the integrand in (31) is that it exactly reproduces the structure of the density of the moment of inertia (7), thus in the rigid body approximation we can immediately evaluate the quantum corrections to the axially-symmetric configurations A 1,1 , A 2,1 as thus, for the configurations with topological charges Q = 1, 2 the quantum correction to the Hopfion mass is negative and it is about 16% and 25% of the classical masses, respectively. A more consistent treatment of the quantum correction to the Hopfion mass needs minimization of the total energy functional Varying it we obtain rather cumbersome set of two coupled integro-differential equation for functions f (r, θ) and g(r, θ) which then should be solved numerically. The results will be reported elsewhere. Conclusion The main purpose of this letter was to present the scheme of the canonical quantization of the rotational mode of the charge Q = 1 and Q = 2 spinning Hopfions and evaluate the quantum corretions to the mass of these axiallysymmetric configurations. 
To this end we have used the technique described in [15]-[17] in the context of the Skyrme and baby Skyrme models [24]. The model is stabilised by an additional coupling to a potential (mass) term, by analogy with the baby Skyrme model; this leads to the appearance of a Yukawa-type exponential tail of the Hopfion fields. The analysis of the quantum corrections to the mass of the axially symmetric charge Q = 1, 2 solitons shows that, as in the Skyrme model, the corrections are negative and relatively large. It remains to analyze systematically the effect of quantization of rotating Hopfions beyond the usual Bohr-Sommerfeld framework and the rigid-body approximation implemented in the present letter. As a direction for future work, it would be interesting to study the effect of canonical quantization on spinning knotted Hopfions, e.g. to consider how the shape of the celebrated Q = 7 trefoil knot configuration K 3,2 is affected by the quantum corrections, or whether the axial symmetry of the spinning charge Q = 3 buckled configuration is restored. Other buckling and twisting transmutations of the Hopfions, related to changes of symmetry of various spinning configurations of higher Hopf degree, are also possible; one can expect that an axially symmetric state may be the lowest-energy state in such cases. This work is now in progress [18].
3,640
2012-04-02T00:00:00.000
[ "Physics" ]
Wireless Sensor Network (WSN) of a flood monitoring system based on the Internet of Things (IoT) . Indonesia records exceptionally high rainfall, particularly during the rainy season when almost all areas of the country are consistently showered with heavy rain. Vigilance is therefore crucial due to the risk of flooding from overflowing rivers or dams. It is essential to develop flood monitoring systems to mitigate the risk and impact of flooding. This study aimed to design and build a flood monitoring system with parameters that support flood warnings. These include measurement of the water level using an ultrasonic sensor and rainfall using a tipping bucket-based hall sensor. The flood detection system was installed at Pondok Aren, Tangerang Selatan, Banten. A website was developed to display information on water levels and rainfall measurements every 10 minutes, as well as cumulative rainfall over 24 hours, presented in values, tables, and graphs. The device design included a warning feature in the form of a strobe light that would activate if the water level exceeded the minimum threshold in addition to providing rainfall status notifications. The system performed well in trials, with data transmitted to the database every 10 minutes. Raingauge sensors exhibited a 0.86% error rate, while the ultrasonic sensor showed an average error rate of just 0.25%. Introduction The vast archipelagic nation of Indonesia faces a significant and persistent threat from natural disasters, 76% of which are categorized as hydro-meteorological events, including floods, landslides, tropical cyclones, and droughts [1]. Situ Parigi is an artificial lake spanning 52,500 square meters with depths ranging from 1 to 4 meters.It is located in Perigi Lama Village, Pondok Aren, South Tangerang, Banten.Situ Parigi contains two water gates that are controlled by gatekeepers.The gates are opened when the water level reaches a specified height to prevent flooding in the surrounding areas due to river water overflow.Opening the water gates also helps to maintain the level of Situ Parigi, thus ensuring its capacity is not exceeded.During the rainy season, Situ Parigi frequently experiences heavy rainfall, creating a rapid increase in its water level.The gatekeepers must therefore conduct daily or afternoon inspections of the water level at the gates, especially during periods of intense rainfall.However, these inspections continue to be performed manually by physically visiting the gates.In a bid to improve the process, this research introduces the development of an automatic water level monitoring system. Floods are among the frequent natural disasters that occur in Indonesia, particularly during the rainy season.These disasters can lead to both material and non-material * Corresponding author<EMAIL_ADDRESS>losses for communities, including damage to buildings, the loss of valuable belongings, and even the loss of life.Factors contributing to flooding include high rainfall and overflowing river water levels.To minimize casualties and the impact of flood disasters, it is essential to promptly disseminate information and early warnings about potential floods [2]. 
A monitoring and warning system that is accessible, fast, and continuously available is critical for delivering urgent flood-related information to communities.An early warning mechanism is also an essential tool for informing the community so that they can prepare for impending floods [3].The implementation of an effective natural disaster early warning system requires appropriate technology.One commonly used method comprises an Internet of Things (IoT)-based disaster warning information system.This offers numerous advantages, including automatic and real-time operation 24/7 [4]. A flood early warning system aims to assist the community in anticipating flood occurrences.This research therefore develops a flood monitoring system employing rainfall and ultrasonic sensors to monitor water levels.The system will trigger an audio alarm and strobe lights whenever the water level reaches or exceeds a predetermined threshold.This monitoring approach not only simplifies the task of inspecting water levels for the gatekeepers of Situ Parigi but also enhances overall disaster preparedness. Ultrasonic sensor The progression of global digitalization has led to significant advances in technology that enable measurements to be conducted free from any physical contact with the objects being measured.One such cutting-edge technology employs sound waves, commonly known as ultrasonic waves [5].The technique of water level detection using ultrasonic sensors has gained widespread popularity due to its high accuracy, which minimizes analysis errors.Ultrasonic sensors operate on the principle of sound wave reflection to detect the presence of a specific object in front of them.Ultrasonic waves are sound waves with frequencies above 20 kHz.They share similar properties with regular sound waves, including the ability to bounce off surfaces and propagate through solid and air mediums with low energy, making them suitable for distance measurements both in the air and underwater [6].An ultrasonic sensor comprises two main units: a transmitter and a receiver.The transmitter circuit emits ultrasonic waves while the receiver circuit detects the reflected waves [7].Ultrasonic sensors operate at frequencies ranging from 40KHz to 400 KHz.The transmitter emits ultrasonic waves into the air.When these waves encounter specific objects, they are reflected and then received back by the receiving sensor unit within the ultrasonic sensor, as shown in Figure 1.[8]. The measurement distance for the JSN-SR04T ultrasonic sensor ranges from 25cm to 4.5m, which means it can be positioned more safely in a higher location compared to submerged conditions.The process of determining the distance from the JSN-SR04T ultrasonic sensor uses the following equation 1 [9].D = (HLT) x (SS) / 2 (1) Where: D = distance (in cm) HLT = high-level time or sensor output data (in µs) SS = speed of sound (0.034 cm/µs) Fig. 2. Ultrasonic sensor JSN-SR04T Rain gauge The tipping bucket rain gauge is commonly used to measure rainfall.It is favored for its automatic functionality and high efficiency in collecting rainfall data Figure 3. Fig. 3. 
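Before moving on to the rain gauge, equation (1) above can be captured in a short helper. The sketch below converts the echo pulse width to a distance and then to a water level by subtracting the measured air gap from the sensor mounting height; the mounting height and the example pulse width are assumed values, not figures from the installation.

```python
SPEED_OF_SOUND_CM_PER_US = 0.034   # SS in equation (1)

def distance_cm(high_level_time_us):
    """Equation (1): D = (HLT * SS) / 2, with HLT in microseconds and D in centimetres."""
    return high_level_time_us * SPEED_OF_SOUND_CM_PER_US / 2.0

def water_level_cm(high_level_time_us, mount_height_cm):
    """Water level = assumed mounting height of the sensor above the bed
    minus the measured air gap."""
    return mount_height_cm - distance_cm(high_level_time_us)

if __name__ == "__main__":
    echo_us = 5882.0                                        # example pulse width, roughly 100 cm
    print(round(distance_cm(echo_us), 1), "cm air gap")
    print(round(water_level_cm(echo_us, mount_height_cm=250.0), 1), "cm water level")
```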
Tipping bucket rain gauge The tipping bucket comprises four main parts: a rainwater collection funnel, a tipping sensor, a transducer in the form of a reed switch, and a transducer pulse generator connector.When obtaining measurements, the area of the funnel hole is a determining factor in the amount of rainfall received.The conversion factor used to calculate the amount of rainfall from the rain gauge is based on the volume of water required to tip one bucket.Typically, the conversion factor is expressed as a unit of volume (e.g., millimeters or inches) per tip.For example, if one tip of the rain gauge results in 0.2 millimeters of water, then the conversion factor is 0.2 mm/tip.Thus, each tip of a bucket corresponds to 0.2 millimeters of measured rainfall [10]. It is essential to periodically check and calibrate the conversion factor to ensure the accuracy of the rainfall data produced.The rainfall counter operates via a sensor that measures rainfall by collecting rainwater up to a specific value (e.g., 0.1 mm, 0.2 mm, 0.5 mm, or 1.0 mm).When the rainwater reaches this value, the sensor is triggered and the reed switch connects, generating a square pulse signal.This signal is then counted or converted to obtain the total amount of rainfall. The pulses are recorded manually or automatically using a digital data recorder.The recorder stores information on the number of pulses or triggers generated by the sensor.The total rainfall over a specific period can be calculated by analyzing this data. The system provides an accurate and efficient method for collecting rainfall data.Because the sensor is triggered by a specific amount of rain, each measurement can be precisely calibrated.The collected data is valuable for various applications, including weather analysis, hydrological research, and other fields that require detailed information about rainfall in a particular region.A detailed algorithm explanation for Figure 5 is given as follows: 1. Start: The system begins the program by initializing the devices, which consist of the JSN-SR04T ultrasonic sensor, The hall effect magnetic sensor, and the NodeMCU ESP8266.2. If the device initialisation fails, the system will rerun the process.If the device initialization is successful, the program proceeds to the next step. System Block Diagram Fig. 6.Block diagram of the system The system employs numerous input systems containing ultrasonic sensors to measure the water level and rain gauges to measure rainfall.The readings and measurements from these sensors and instruments are then processed through an Arduino microcontroller.The resulting data, which includes flood information and warnings, can be received through multiple media, namely an LCD for direct monitoring on the device, and can also be sent via the internet to a website.If the water level exceeds the predefined threshold, the relay will activate a strobe light. 
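The tip-count conversion described above (illustrated here with the 0.2 mm/tip example factor) and the 10-minute and 24-hour accumulations can be sketched as follows. The tip timestamps are invented for the example.

```python
from datetime import datetime, timedelta

MM_PER_TIP = 0.2   # example conversion factor from the text

def rainfall_mm(tip_count):
    """Rainfall depth corresponding to a given number of bucket tips."""
    return tip_count * MM_PER_TIP

def accumulate(tip_events, now, window):
    """Sum rainfall from tip timestamps falling within `window` before `now`."""
    recent = [t for t in tip_events if now - t <= window]
    return rainfall_mm(len(recent))

if __name__ == "__main__":
    now = datetime(2023, 1, 1, 12, 0)
    # Invented tip timestamps for illustration.
    tips = [now - timedelta(minutes=m) for m in (2, 5, 9, 45, 300, 1000)]
    print("10-min rainfall:", round(accumulate(tips, now, timedelta(minutes=10)), 2), "mm")
    print("24-h rainfall:  ", round(accumulate(tips, now, timedelta(hours=24)), 2), "mm")
```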
Implementation of the system The implementation of the instrument and display are shown in Figures 7 and 8, respectively.Figure 7 shows all the components of the device, assembled as a whole, viewed from the front and also from the side, along with the data logger component.The implementation of the website interface followed that of the designed website layout.The system website interface displaying water level and rainfall monitoring data at the Situ Parigi reservoir can be accessed at https://fewsstmkg.com.The website also presents monitoring data for water levels at West Jurang Mangu and rainfall at Pondok Pucung.The interface consists of five menu tabs: home, graphs, tables, locations, and info.The data on the website is updated approximately every ±10 minutes.Figure 8 shows the display output of the website interface. Testing of JSN-SR04T Sensor The JSN-SR04T sensor was tested to gauge its output response and assess its performance in measuring the height parameter.This testing was performed by comparing the sensor with a rolling tape measure.The comparative data were recorded and used to calculate the correction values generated by the sensor [11].Set points of 100 cm, 140 cm, and 150 cm were used, representing the distances between the sensor and the Styrofoam.Five measurements were taken for each set point.The comparative data from the JSN-SR04T sensor can be seen in Table 1.Based on the comparative results between the JSN-SR04T sensor and the rolling tape measure, it was determined that the sensor had an average measurement error of 0.2% at set point 100, 0.286% at set point 140, and 0.133% at set point 150. Testing of the Magnetic Sensor on the ARG The magnetic sensor on the ARG was tested to understand its output response, thus facilitating an assessment of its performance in measuring the tip count parameter.This evaluation enabled the calculation of rainfall values for each tip of the tipping bucket on the ARG.Comparative data were recorded and utilized to obtain the correction values generated by the magnetic sensor, as detailed in Table 2.The data in Table 2 indicate that the water measurements using the tipping bucket were read relatively accurately, with an average error rate of 0.86%. Testing of the Water Level Float Sensor The water level float sensor was tested to understand its output response; this enabled an assessment of its performance in the specific parameter of sending a high signal.The sensor was tested by repeatedly raising and lowering the sensor float.When the float rises, the sensor sends a high output signal; when it lowers, it sends a low output signal.The sensor output was displayed on an I2C 20x4 LCD screen, and the data can be seen in Table 3. Table 3. Results of the water float sensor comparison Based on the results of the water level float sensor testing in Table 3, it can be observed that the water level float sensor has a measurement error of 0%. System Field Test A field test of the system was conducted by installing the equipment at the Situ Parigi reservoir.Water level and rainfall data were then sent to a website using the ESP8266 every 10 minutes.The process of sending data to the website proceeded smoothly.The data stored in the database is displayed on the website https://fewsstmkg.com.Selected data acquired during the system field test is shown in Figure 9. 
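The percentage errors reported for the sensor tests can be reproduced from paired sensor/reference readings with a few lines of code. The readings below are invented stand-ins for one set point, not the measured values from Table 1.

```python
def percent_errors(sensor, reference):
    """Absolute percentage error of each sensor reading against the reference."""
    return [abs(s - r) / r * 100.0 for s, r in zip(sensor, reference)]

def mean_percent_error(sensor, reference):
    errs = percent_errors(sensor, reference)
    return sum(errs) / len(errs)

if __name__ == "__main__":
    # Invented readings at the 100 cm set point, five repetitions as in the test procedure.
    tape = [100.0] * 5
    jsn = [100.1, 99.8, 100.3, 99.9, 100.2]
    print(f"average error: {mean_percent_error(jsn, tape):.3f} %")
```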
Based on the sample data from Figure 9, the water level of the Situ Parigi reservoir was observed to rise with an increase in rainfall intensity.During the testing period, the rainfall at Situ Parigi was classified as moderate, with a 24-hour rainfall intensity measurement of 50.46 mm/day.To test whether the strobe light would activate when the water level of the Situ Parigi reservoir reached or exceeded the level of the water level float sensor, the float on the sensor was manually raised.When the float was lifted, the strobe light activated, as shown in Figure 12.This observation confirmed that the strobe light functioned correctly. An automatic transfer switch (ATS) is a device that operates automatically when the power source from the utility grid (PLN) is interrupted or experiences an outage.In such cases, the switch will transition to an alternative power source, which in this system is a battery.To test that the ATS was functioning correctly, a cut in the power from the PLN was simulated by moving the miniature circuit breaker (MCB) switch to the OFF position.This disconnected the flow of utility power to the terminal. When the MCB switch was returned to the ON position, the utility power from PLN began flowing again, as indicated by the illumination of a blue 5mm LED light.However, when the MCB switch was moved to the OFF position, the utility power from PLN ceased, triggering an automatic switch to the battery power source with a transfer delay of approximately ±2 seconds.Therefore, the ATS functioned correctly and effectively transferred power sources during the simulated utility power outage. Conclusion The following conclusions are drawn from this research: 1.The design of this device incorporated a JSN-SR04T sensor, a water level float sensor, a hall effect magnetic sensor, and a strobe light.A NodeMCU ESP8266 was used as the microcontroller and data transmitter.The water level and rainfall measurement data were presented through an I2C 20x4 LCD and a website.2. The system measured parameters such as water level and rainfall every 10 minutes, and the data were subsequently transmitted to a database using the NodeMCU ESP8266 via the Internet.Data were transmitted to the database every 10 minutes.The system display comprised an I2C 20x4 LCD (on-site) and a website (online).3. The water level and rainfall monitoring system at Situ Parigi reservoir, utilizing the JSN-SR04T sensor, water level float sensor, and hall effect magnetic sensor, was found to function effectively.The transmission of data to the database worked well, and the magnetic sensor and water level float sensor both had an error rate of 0%.Additionally, the average error rate for the JSN-SR04T sensor was 0.25%.4. To display the measured water level and rainfall values on the website, the data measured every 10 minutes by the sensors were sent via ESP8266 over the internet using the HTTP protocol to a MySQL database.PHP was used to connect the data from the database to the website.JavaScript was employed to provide real-time data visualisation on the website, while HTML and CSS were utilised to enhance the website aesthetics.The early warning system was established by placing the water level float sensor at a maximum water height of 180 cm.When the water level equaled or exceeded the height of the water level float sensor, the sensor sent a high signal to the ESP8266.This signal was then processed and led to the activation of the strobe light to provide an early warning. Fig. 5 . Fig. 5. 
Flowchart of the prototype water level and rainfall monitoring system.

3. The JSN-SR04T sensor measures the distance from the sensor to the Styrofoam. Next, the program calculates the water level of the reservoir based on this distance value.
4. The hall effect magnetic sensor provides a LOW input when the seesaw moves on the tipping bucket. The program then calculates the rainfall value based on this input.
5. If the water level in the reservoir exceeds the height of the water level float sensor, the latter will rise, activating the reed switch within it, closing the circuit, and providing a HIGH input signal to the NodeMCU ESP8266. This HIGH input is processed by the program in the NodeMCU ESP8266, causing the relay to be set to LOW. This allows electricity to flow to the strobe light, which activates it and provides an early warning.
6. Display the water level of the reservoir and the rainfall value on the I2C 20x4 LCD.
7. The reservoir water level and rainfall data are sent through the NodeMCU ESP8266 Wi-Fi module to the MySQL database and displayed on the website.
8. If the device is not turned off, the program restarts from measuring the distance from the JSN-SR04T sensor to the Styrofoam. If the device is turned off, the system program is completed.

Fig. 8. Implementation of the website display
Fig. 11. Data chart of the website

Figures 9-11 show that the data successfully sent to the database can be displayed on the website. The tab menus on the website are explained as follows:
1. Section a) displays the home tab page, containing information about the water level, 10-minute rainfall, and 24-hour rainfall at Situ Parigi reservoir. It also provides information about the water level at Jurang Mangu Barat and the 10-minute and 24-hour rainfall at Pondok Pucung. The background box displaying the 24-hour rainfall value for the Situ Parigi reservoir and Pondok Pucung changes colour and provides a scrolling text notification indicating the rainfall status for each location based on the respective 24-hour rainfall values. The colour changes according to the following ranges of 24-hour rainfall values:
- Rainfall of 0.5-20 mm/day (green) indicates light rain.
- Rainfall of 20-50 mm/day (yellow) indicates moderate rain.
- Rainfall of 50-100 mm/day (orange) indicates heavy rain.
- Rainfall of 100-150 mm/day (red) indicates very heavy rain.
- Rainfall >150 mm/day (purple) indicates extreme rain.
The date and time of the last sensor data sent to the database are also displayed, enabling technicians to quickly inspect any issues if the device is not sending data to the database.
2. Section b) shows the table tab page, displaying tables for the water level and 10-minute rainfall at Situ Parigi reservoir. It also includes a table for the water level at Jurang Mangu Barat and the 10-minute rainfall at Pondok Pucung. Each displayed table contains the values from the latest 100 data points sent to the database.
3. Section c) presents the graph tab page, showcasing graphs for the water level and 10-minute rainfall at Situ Parigi reservoir. It also includes graphs for the water level at Jurang Mangu Barat and the 10-minute rainfall at Pondok Pucung. Each displayed graph represents the values from the latest 100 data points sent to the database.

Fig. 12. Data chart of the website
Table 2. Result of rain gauge testing
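The colour-coded rainfall status described above maps directly onto a small lookup function. The sketch below follows the listed 24-hour ranges; how boundary values (e.g., exactly 50 mm/day) should be assigned is not stated in the text, so the cut-offs here are an assumption.

```python
def rainfall_status(rain_24h_mm):
    """Map 24-hour rainfall (mm/day) to the status/colour bands used on the website."""
    if rain_24h_mm > 150:
        return "extreme rain", "purple"
    if rain_24h_mm >= 100:
        return "very heavy rain", "red"
    if rain_24h_mm >= 50:
        return "heavy rain", "orange"
    if rain_24h_mm >= 20:
        return "moderate rain", "yellow"
    if rain_24h_mm >= 0.5:
        return "light rain", "green"
    return "no significant rain", "none"

if __name__ == "__main__":
    for value in (0.0, 12.5, 75.0, 160.0):
        print(value, "->", rainfall_status(value))
```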
4,254.6
2023-01-01T00:00:00.000
[ "Environmental Science", "Engineering", "Computer Science" ]
Identification of 37 Heterogeneous Drug Candidates for Treatment of COVID-19 via a Rational Transcriptomics-Based Drug Repurposing Approach A year after the initial outbreak, the COVID-19 pandemic caused by SARS-CoV-2 virus remains a serious threat to global health, while current treatment options are insufficient to bring major improvements. The aim of this study is to identify repurposable drug candidates with a potential to reverse transcriptomic alterations in the host cells infected by SARS-CoV-2. We have developed a rational computational pipeline to filter publicly available transcriptomic datasets of SARS-CoV-2-infected biosamples based on their responsiveness to the virus, to generate a list of relevant differentially expressed genes, and to identify drug candidates for repurposing using LINCS connectivity map. Pathway enrichment analysis was performed to place the results into biological context. We identified 37 structurally heterogeneous drug candidates and revealed several biological processes as druggable pathways. These pathways include metabolic and biosynthetic processes, cellular developmental processes, immune response and signaling pathways, with steroid metabolic process being targeted by half of the drug candidates. The pipeline developed in this study integrates biological knowledge with rational study design and can be adapted for future more comprehensive studies. Our findings support further investigations of some drugs currently in clinical trials, such as itraconazole and imatinib, and suggest 31 previously unexplored drugs as treatment options for COVID-19. Introduction Coronavirus disease 2019 (COVID-19) is a new infectious disease caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). The common clinical features of the COVID-19 range from mild respiratory symptoms to severe acute respiratory distress syndrome, and may be accompanied with a wide spectrum of gastrointestinal, cardiovascular, or neurological manifestations [1]. COVID-19 first appeared in December 2019 in Wuhan, China, where the first pneumonia cases of unknown origin were reported [2]. The disease has spread rapidly around the globe and the World Health Organization (WHO) declared a COVID-19 pandemic on March 11th 2020 [3]. By 23 December 2020 the number of infected subjects has reached over 78 million with more than 1.7 million deaths worldwide. Currently, there are over 21 million active cases, with about 0.5% of patients in critical condition [4,5]. Overall, COVID-19 has caused numerous socioeconomic consequences in every aspect of daily life, ranging from great economic loses in all sectors, challenges in the healthcare system, travel restrictions, social distancing and lockdowns [6], all of which have seriously impacted mental health, with higher rates of anxiety, depression and stress reported among the general population [7]. To date, several vaccines have been developed to prevent further spreading of COVID-19 [8]. Furthermore, massive efforts have been undertaken to find effective treatments for those who had already contracted the disease-at present, there are more than 4000 clinical studies related to COVID-19 as listed on ClinicalTrials.gov database, where majority were focused on chloroquine or hydroxychloroquine with or without azithromycin, lopinavir/ritonavir, remdesivir or dexamethasone [9]. 
Except for dexamethasone which led to improvement in survival of hospitalized patients in need for supplemental oxygen [10], none of the aforementioned drugs showed significant efficacy in ameliorating the COVID-19 outcome [11][12][13]. Some of the other identified drugs, such as ivermectin, ibrutinib, imatinib or ruxolitinib, are currently in different phases of clinical trials, but so far none of them has been recommended for treatment of COVID-19 by the COVID-19 Treatment Guidelines Panel [14]. Only two drugs are currently approved for COVID- 19: in October 2020 U.S. Food and Drug Administration (FDA) approved remdesivir as the first drug for the treatment of severe COVID-19 cases in need for hospitalization [15], and in November 2020 the same entity issued an emergency use authorization for the drug baricitinib [16]. However, since these drugs seem to have limited efficiency and serious side effects [17], it is urgent to identify more potent and safe therapeutics that could significantly decrease mortality and relieve the burden of COVID-19 on human health and healthcare systems worldwide. In conventional circumstances, the process of developing new drugs and testing their safety and efficacy in clinical trials takes several years, and the median cost of this process is almost a billion USD per drug [18]. The fastest and cheapest way to discover therapeutics for COVID-19 is repurposing of already approved drugs. Multiple experimental and computational drug repurposing strategies have been developed to discover novel indications for well characterized drugs that had already passed extensive clinical studies [19]. Examples of successful drug repurposing include sildenafil-originally an antihypertensive drug repurposed for erectile dysfunction, thalidomide-sedative that is also effective against erythema nodosum leprosum and multiple myeloma, or aspirin-an analgesic that can also be used to decrease the risk of cardiovascular disease and colorectal cancer [19]. In the past decade, a principle of transcriptomic signature reversion has been increasingly employed as a computational drug repurposing strategy, especially in cancer research [20]. This approach identifies drugs with inverted transcriptomic signatures in relation to the signature of a disease. Treatment of patients with such drugs could thus potentially reverse the disease transcriptomic signature, presumably ameliorating the disease phenotype as a result [19]. Signature matching of transcriptomic data started in 2006 with The Connectivity Map (CMap) project [21] which further evolved into the Library of Integrated Network-Based Cellular Signatures (LINCS) [22,23]. Transcriptomic signature reversion approach using CMap has been extensively used in pharmacogenomics studies. Examples of experimentally validated drugs that were identified by this approach include antiepileptic drug topiramate that showed potential for treating inflammatory bowel disease [24], mTOR inhibitor rapamycin shown to induce glucocorticoid sensitivity in acute lymphoblastic leukemia [25], and ikarugamycin and quercetin shown to reduce inflammation in cystic fibrosis [26]. Even though clinical trials are still missing to provide a definite proof-of-concept, such approach has demonstrated a potential for drug prioritization in pharmacogenomics research [20]. Several in silico drug repurposing studies based on a transcriptome reversal approach have already been published in the context of COVID-19 [27][28][29][30][31]. 
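The signature-reversion principle underlying CMap/LINCS can be illustrated with a simplified, correlation-based score: a drug whose induced expression changes are strongly anticorrelated with the disease signature is a candidate for reversal. This is a stand-in for the actual connectivity score, and the gene symbols and fold changes below are invented.

```python
import numpy as np

def reversal_score(disease_sig, drug_sig):
    """Rank correlation between disease and drug signatures over shared genes;
    strongly negative values suggest signature reversal."""
    genes = sorted(set(disease_sig) & set(drug_sig))
    d = np.array([disease_sig[g] for g in genes])
    r = np.array([drug_sig[g] for g in genes])
    d_rank = np.argsort(np.argsort(d))
    r_rank = np.argsort(np.argsort(r))
    return float(np.corrcoef(d_rank, r_rank)[0, 1])

if __name__ == "__main__":
    # Invented log-fold-change signatures keyed by gene symbol.
    disease = {"IL6": 2.1, "CXCL8": 1.8, "HMGCR": -1.2, "LDLR": -0.9, "SQLE": -1.5}
    drug = {"IL6": -1.0, "CXCL8": -0.7, "HMGCR": 1.1, "LDLR": 0.8, "SQLE": 1.3, "ACTB": 0.0}
    score = reversal_score(disease, drug)
    print("reversal score:", round(score, 2), "(negative = candidate for repurposing)")
```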
However, one common issue in such studies is a complete lack or insufficiency of criteria for inclusion of datasets in the analysis, which may then produce misleading results. Careful consideration of biological parameters, such as the host cell tropism of the virus, is needed to assess the suitability of each dataset. Furthermore, high variability among datasets and noise in the transcriptomes often cast doubts on the validity of the results [32]. Combining biological knowledge with bioinformatics approaches is much needed to ensure the validity of such studies and to increase the chances that the selected drugs will be efficient in suppressing disease symptoms. In this study, we used the CMap computational drug repurposing approach to identify drug candidates with the potential to revert host transcriptome alterations triggered by the SARS-CoV-2 virus. Specifically, we searched publicly available transcriptomic datasets deposited via Gene Expression Omnibus (GEO) [33] of SARS-CoV-2-infected cells with matching non-infected controls, identified differentially expressed genes (DEGs), and employed the LINCS [22] database to select drugs with transcriptomic signatures opposite to those induced by the SARS-CoV-2 infection. Of note, we used a rational strategy based on unsupervised machine learning approaches to select biologically relevant datasets, i.e., those that exhibited a relatively high magnitude of response to viral infection. Moreover, we filtered DEGs to obtain those that are shared among multiple datasets, so that they reflect a general and robust response to infection which should correspond to the physiological state of an infected organism. The obtained final list of 37 drug candidates was then characterized using bio- and chemoinformatic analyses to provide additional insights into the pathogenesis and viral-host interaction mechanism(s).

Results

In this study we determined transcriptome alterations in cells infected with SARS-CoV-2 relative to non-infected controls and used these data to reveal drugs that induce opposite-sense changes in the subset of the transcriptome affected by the virus. We propose that these drugs could reverse the host transcriptomic signature induced by SARS-CoV-2 and are thus potential candidates for the treatment of COVID-19. The overall study design is depicted in Figure 1.

Selection of the Relevant Datasets

At first, publicly available transcriptomes obtained from cells infected with SARS-CoV-2 and their non-infected counterparts were identified. Considering the heterogeneity in design of the identified studies, datasets for the analysis were pre-selected based on pre-defined criteria (see Materials and Methods). As a result, seven transcriptomic datasets were selected (Figure S1). These datasets were obtained from four cell lines: the normal human bronchial epithelial cell line (NHBE), the human lung adenocarcinoma alveolar epithelial cell line (A549) with and without overexpressed angiotensin-converting enzyme 2 (ACE2; the host receptor to which SARS-CoV-2 binds to enter the cell), and the human lung adenocarcinoma airway epithelial cell line (Calu-3). For three cell lines, datasets obtained upon infection at two different multiplicities of infection (MOI) were included in the analysis (Table 1). To provide an additional source of data for NHBE, we included a dataset obtained from human bronchial organoids (hBO) generated from the NHBE cell line, even though this dataset did not fully match the inclusion criteria (the samples were collected for RNA-seq analysis 5 days post-infection instead of 24 h post-infection). We next identified DEGs in each infected sample relative to the corresponding non-infected control (Table S1, Figure S2a). Interestingly, we observed high variation in the number of DEGs among different datasets, indicating that some samples are more responsive to SARS-CoV-2 than others. We thus decided to reduce the list of analyzed datasets to only those with a relatively high magnitude of response to the virus in terms of transcriptome changes. To that end, we performed PCA on all eight transcriptome datasets and found that the distance on the score plot between infected and non-infected counterparts was apparently larger for A549-ACE2 and Calu-3 (regardless of the MOI) than for the A549, hBO and NHBE samples (Figure 2). This PCA analysis indicated that the magnitude of response to the virus was higher for A549-ACE2 and Calu-3 relative to A549 without ACE2 overexpression, hBO and NHBE samples. To additionally evaluate the effect of MOI, separate PCA analyses were conducted for each cell line infected at the two MOIs (Figure S3a-c). In the A549-ACE2 and Calu-3 cell lines, there were more differences between non-infected and infected samples than between samples treated with different MOI, indicating high sensitivity of these lines to low quantities of the virus.
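The PCA-based selection step just described can be sketched as follows: project the expression profiles onto the first two principal components and measure the distance between the infected and control centroids, taking a larger distance to indicate a stronger response. The expression matrix here is synthetic; the study itself used the GEO transcriptomes.

```python
import numpy as np
from sklearn.decomposition import PCA

def response_magnitude(expr, infected_mask):
    """Distance between infected and control centroids in the PC1-PC2 plane.
    expr: samples x genes matrix; infected_mask: boolean array over samples."""
    scores = PCA(n_components=2).fit_transform(expr)
    infected_centroid = scores[infected_mask].mean(axis=0)
    control_centroid = scores[~infected_mask].mean(axis=0)
    return float(np.linalg.norm(infected_centroid - control_centroid))

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    genes = 2000
    control = rng.normal(0, 1, size=(3, genes))
    # Infected samples share a common shift, mimicking a strong transcriptomic response.
    infected = rng.normal(0, 1, size=(3, genes)) + rng.normal(1.5, 0.2, size=genes)
    expr = np.vstack([control, infected])
    mask = np.array([False, False, False, True, True, True])
    print("infected-vs-control distance on the PCA score plot:",
          round(response_magnitude(expr, mask), 2))
```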
Conversely, A549 cells showed negligible changes in the transcriptome upon infection with the low MOI of 0.2 as compared to the high MOI of 2, suggesting a high threshold for infection. Next, differential gene expression analyses were performed for the A549, A549-ACE2, and Calu-3 cell lines, considering pairs of SARS-CoV-2 infected and non-infected samples with MOI as a covariate (Table S2, Figure S2b). Combining transcriptomes of different MOIs for each cell line in one analysis corresponds to a general and robust response to SARS-CoV-2 infection, which better represents the physiological situation in which different cells are exposed to a range of quantities of viral particles. Furthermore, pooling two different MOIs in this kind of analysis contributes to the reduction of noise in the data. In order to reduce the noise in the dataset obtained from NHBE cells, for which there were no transcriptomes available upon infection with different MOIs, we performed an analogous analysis of NHBE cells pooled with hBOs (DEG analysis and PCA), given that the hBOs were generated from NHBE cells (Table S2, Figure S3d). To create the final selection of datasets for further analysis, taking into account the magnitude of response to the virus and the effect of different MOIs, we performed hierarchical clustering on the results of the four differential gene expression analyses (Figure S4). This analysis yielded two distinct clusters: one that contained A549-ACE2 and Calu-3 infected with SARS-CoV-2 at both MOIs, and another that included the A549, hBO and NHBE samples. Hence, we opted to continue our study only on the A549-ACE2 and Calu-3 cell lines, with the data for both MOIs included.
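As a rough illustration of the pooled analysis described above, a DESeq2 design with MOI as a covariate might look as follows. The object and level names are assumptions, and the pre-filter and fold-change/FDR thresholds repeat those given later in the Materials and Methods.

library(DESeq2)

# Placeholder inputs for one cell line: counts for infected and mock samples
# at both MOIs, with 'moi' and 'condition' factor columns in the sample table.
keep <- rowSums(counts_mat) >= 10                        # drop low-count genes (paper: counts below 10)
dds  <- DESeqDataSetFromMatrix(countData = counts_mat[keep, ],
                               colData   = coldata,
                               design    = ~ moi + condition)   # MOI as covariate
dds  <- DESeq(dds)
res  <- results(dds, contrast = c("condition", "infected", "mock"))

# DEG thresholds used in this study: FDR < 0.05 and fold change > 2 (or < 0.5)
degs <- subset(as.data.frame(res), padj < 0.05 & abs(log2FoldChange) > 1)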
Selection of the Relevant DEGs upon SARS-CoV-2 Infection The next objective was to identify the most relevant DEGs, i.e., those that reflect a robust change in the host transcriptomic signature upon SARS-CoV-2 infection in more than one cell type (Figure S5). To that end, we overlapped the DEGs obtained from the two selected differential gene expression analyses (A549-ACE2 and Calu-3 infected with low and high MOI) and selected only the DEGs that were shared between these datasets. Furthermore, only DEGs whose expression consistently changed in the same direction, either up or down, were considered. This procedure resulted in a list of 636 DEGs (Table S3). The cellular pathways affected by SARS-CoV-2 infection were determined by performing Gene Ontology (GO) enrichment analysis of these 636 DEGs. The analysis showed that SARS-CoV-2 infection upregulates various cellular responses, such as immune and neuroinflammatory response pathways, and downregulates cold-induced thermogenesis and cholesterol biosynthetic pathways (Figure 3, Table S4). As expected, one of the GO categories affected by SARS-CoV-2 infection was "host defense against viral infection". Since host defense against viral infection represents a beneficial mechanism for the cells and presumably should not be reverted by drugs, we excluded a total of 97 overexpressed genes that fell into this GO category (Table S5). The resulting 539 DEGs were employed in the subsequent steps of our study (Table S6, Figure S6).
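A compact sketch of this intersection-and-filtering step is given below. The result tables, the use of Entrez identifiers, and the specific GO identifier chosen for the antiviral defense category (GO:0051607, "defense response to virus") are illustrative assumptions rather than the authors' exact choices.

library(clusterProfiler)
library(org.Hs.eg.db)

# res_a549ace2 and res_calu3 are assumed to hold the significant DEGs (rows
# named by Entrez ID) from the two selected differential expression analyses.
shared         <- intersect(rownames(res_a549ace2), rownames(res_calu3))
same_direction <- sign(res_a549ace2[shared, "log2FoldChange"]) ==
                  sign(res_calu3[shared, "log2FoldChange"])
consensus_degs <- shared[same_direction]        # 636 genes in this study

# GO enrichment (biological process) of the consensus DEGs
ego <- enrichGO(gene          = consensus_degs,
                OrgDb         = org.Hs.eg.db,
                ont           = "BP",
                pAdjustMethod = "BH",
                pvalueCutoff  = 0.05)

# Drop genes annotated to the antiviral defense category before the drug query
defense_genes <- unlist(strsplit(ego@result["GO:0051607", "geneID"], "/"))
final_degs    <- setdiff(consensus_degs, defense_genes)   # 539 genes in this study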
Identification of Drugs with a Potential to Reverse the Transcriptomic Signature upon SARS-CoV-2 Infection To identify drugs capable of inducing an inverted transcriptomic signature in the host relative to the transcriptional changes triggered by SARS-CoV-2, we performed a LINCS connectivity map analysis of the final list of DEGs upon SARS-CoV-2 infection (Figure S7). Although LINCS includes data obtained in as many as 30 cell lines, at least for some compounds, it does not provide data specifically for the selected A549-ACE2 and Calu-3 cells. Furthermore, some drugs display highly variable effects in different cell lines, making it difficult to predict how they would affect the two lines of interest. To increase the probability that the selected drugs would have analogous effects on the A549-ACE2 and Calu-3 cell lines, we filtered out compounds with documented highly variable effects across multiple cell lines. This additional filtering step led to the retention of drugs with robust effects across multiple cell lines. In addition, we focused on already approved drugs with the aim of repurposing them (see Materials and Methods). The final list includes 37 drug candidates with the potential to reverse the transcriptomic signature upon SARS-CoV-2 infection (Table 2). These drugs meet the selection criteria described above (see also Materials and Methods). Bio- and Chemoinformatic Characterization of the Drug Candidates for Repurposing against SARS-CoV-2 Infection To evaluate whether the 37 drug candidates share some biological and molecular properties, we performed clustering of the drugs based on the following parameters: pharmacological class and current indication, mechanism of action (MOA), molecular structure, and known protein targets (Tables S7 and S8). In terms of their current indication, we found that 11 of the drugs cluster as anti-infective agents, 7 as neuropsychiatric drugs, and 5 as cardiovascular drugs, whereas the remaining drugs are pharmacologically heterogeneous (Figure S8a). The MOAs of these 37 drugs were also heterogeneous (Figure S8b). The clusters with two or, at most, three drugs were bacterial ribosomal subunit inhibitors, bacterial topoisomerase II inhibitors, 14-alpha demethylase inhibitors, histamine receptor antagonists, and dopamine receptor antagonists. Furthermore, no significant similarity of molecular structures was found among the drugs, with the maximum Tanimoto coefficient of 0.54 obtained for itraconazole and ketoconazole, which is still lower than the threshold of 0.85 above which drugs are considered significantly similar (Figure S9). The selected drug candidates are also heterogeneous in terms of their physicochemical properties (Table S9, Figure S10). Grouping the drugs based on their protein targets reveals no obvious preference for any of the four main drug target groups: G-protein-coupled receptors (GPCRs), ion channels, kinases, and nuclear receptors (Table S10, Figure S11a).
Indeed, the majority of the selected drugs target GPCRs (14/37, 38%), and most drug targets are membrane-bound (174/282, 62%; Figure S11b), as is the case when the unfiltered list of all existing drugs is considered [37][38][39]. In summary, this analysis illustrates that the 37 drugs with a potential to reverse the SARS-CoV-2 transcriptomic signature are highly heterogeneous in terms of their properties, with the main clusters based on their current therapeutic indication (Figure S12). In order to find out which specific biological pathways affected by SARS-CoV-2 infection can be reversed by the selected 37 drug candidates, we performed Drug and Target Set Enrichment Analyses (DSEA and TSEA). The biological pathways regulated by these drugs (Table S11) were overlapped with the pathways affected by the virus identified in the previous step of this study (Figure 3, Table S4). The overlap contains the following categories: metabolic and biosynthetic process, immune system process, cellular and tissue developmental process, cellular architecture and dynamics, signaling pathways, and response to stimulus, with the steroid metabolic process being targeted by almost half of the selected drug candidates (Figure 4, Table S12). We conclude that these pathways, in particular the steroid metabolic processes, might be the key druggable pathways for the reversal of the SARS-CoV-2 transcriptomic signature. Discussion In this study, we used a rational approach to filter currently available transcriptomic datasets and determine the relevant DEGs upon SARS-CoV-2 infection, followed by using this information to identify drug candidates with an inverse transcriptomic signature as compared to the SARS-CoV-2-induced transcriptome changes. This work revealed 37 diverse drug candidates that could potentially reverse the SARS-CoV-2 signature through targeting a range of biological pathways, including immune response, metabolic and biosynthetic processes, cell differentiation and proliferation, and signaling pathways. To increase the chances that the obtained drug candidates for repurposing will be efficient in vivo, we applied multiple measures of caution during the development of our bioinformatics pipeline.
This was achieved by introducing several filtering steps at each level of the analysis, namely: (1) selection of transcriptomic datasets obtained from biosamples whose transcriptomes are highly responsive to SARS-CoV-2 infection; (2) reduction of noise in the available transcriptomic data; (3) removal of the genes important for cellular defense against the virus from the list of target DEGs; and (4) removal of the drugs with documented variable effects across different cell lines. At the time this study was performed, several datasets obtained from various cells or tissues were available. However, not all tissues or cells are equally affected by SARS-CoV-2, and hence not all datasets have equal value as a source of transcriptomic data. Indeed, the host cell tropism of SARS-CoV-2 depends on the cellular expression of factors that control viral entry and reproduction. For instance, to successfully infect host cells, SARS-CoV-2 requires the presence of angiotensin-converting enzyme 2 (ACE2) and transmembrane serine protease 2 (TMPRSS2) at the cellular surface [40]. Moreover, different cell types vary in their ability to support production of new virions [41]. In line with these findings, we observed dramatic differences in transcriptome alterations among several cell types exposed to SARS-CoV-2. Using PCA and hierarchical clustering on the selected cell types (Figure 2, Figure S4), based on their total transcriptomes and their DEGs, we were able to select two cell lines, A549 cells expressing ACE2 and Calu-3 cells, for further analyses. These cell lines were relatively sensitive to the virus, i.e., their transcriptomes were more responsive to viral infection as compared to wild-type A549 cells, NHBE cells and hBOs. The selected cell types are presumably more vulnerable to the infection and thus represent priority targets for therapeutic intervention. Of note, our results are in agreement with previously published data suggesting that A549 and NHBE have low or variable levels of the ACE2 receptor [34,42,43] and are thus not ideal models for studying SARS-CoV-2 infection. Data noise is a common issue in the analysis of transcriptomes obtained from different sources or from a limited number of samples. Correspondingly, we observed high variability in DEGs among the different datasets. To reduce the noise, we harmonized the transcriptomic data by including only datasets that met a set of defined criteria and by filtering datasets as described above. Finally, only consistent DEGs, i.e., those shared between the two selected biosamples, were used in further analyses. Upon viral infection, many biological pathways are hijacked by the virus and used for the production of new virions. However, cellular defense against the virus is also activated. Reversal of the expression of genes that belong to the latter pathways would obviously represent a disadvantage for the cell, at least in the early stages of infection. Therefore, we performed a gene ontology enrichment analysis to classify all affected DEGs into various biological pathways. To ensure that the selected drugs would not affect cellular defense against the virus, we omitted all genes that fell into this category from the list of DEGs used for drug selection. To identify drugs that could be repurposed for COVID-19, we used the filtered subset of DEGs as input for the LINCS connectivity map analysis.
Since LINCS does not contain data specifically for the selected cell lines, and many drugs display cell type-specific effects, an additional parameter was introduced in the analysis: the robustness of drug effects across multiple cell types. In this way, we eliminated drugs for which we found evidence that their effects are not conserved across multiple cell lines. This step is important as it increases the likelihood that the selected drugs will have the desired effect in other cell types, including our biosamples of interest. The above-described pipeline (Figure 1) resulted in a list of 37 drug candidates with the potential to reverse the SARS-CoV-2 transcriptomic signature. However, bio- and chemoinformatic analysis of these drugs revealed that they are diverse in terms of their chemical structure, physicochemical properties, targets and the biological pathways that they affect, which is in agreement with our observation that the DEGs upon SARS-CoV-2 infection are involved in multiple cellular processes. Therefore, it is possible that each small cluster of drugs could reverse a different subset of the SARS-CoV-2 transcriptome signature. It is tempting to speculate that a combination of two or more individual drugs belonging to different clusters could have more potent transcriptome-reversal effects as compared to using a single drug. We identified several biological processes affected by SARS-CoV-2 that can be targeted by drugs, and these include metabolic, developmental, immune, and signaling processes (Figure 4). Interestingly, the steroid metabolic process was at the top of these processes, as it was targeted by almost half of the selected drugs. This result is in agreement with a documented role of cholesterol in the infection of cells by another coronavirus, transmissible gastroenteritis virus [44], as well as by a porcine nidovirus [45]. Furthermore, cholesterol is an important constituent of cellular membranes, and these are essential for almost all aspects of the viral life cycle, including the attachment of the virus to the cell surface, fusion of the virus with the plasma membrane and/or endosomes, viral replication in double-membrane vesicles and budding of the virus from intracellular membrane compartments [46]. Finally, steroids have a substantial effect on the host immune response [47][48][49]. Some of the drugs from our list are already being tested for their effects against COVID-19. These include the anti-infective drugs ritonavir, azithromycin, atovaquone and itraconazole, the antineoplastic drug imatinib and the antidepressant drug fluoxetine [9]. Furthermore, azithromycin, one of the drug candidates selected in this study, was previously identified by a recently published network-based approach [50]. In addition, at least three receptors targeted by drugs identified in this study were also suggested as putative targets by other network-based approaches. These include sigma non-opioid receptor 1 (SIGMAR1) [51,52], targeted by nortriptyline, and the beta-2 adrenergic receptor (ADRB2) and androgen receptor (AR) [53], targeted by nortriptyline, levobunolol and ketoconazole. Our results lend support to further investigation of these drugs or drug targets in experimental approaches to the treatment of COVID-19. Finally, this study suggests novel drug candidates for COVID-19 treatment, such as memantine, ibutilide, or trimethadione.
In comparison with other studies that employed a similar computational approach for drug repurposing based on transcriptome reversal [27][28][29][31], we observed only minor or no overlap of the drug candidate lists. Shared candidates were ADRA1B antagonists (nortriptyline in our study), as well as the ACE inhibitor perindopril and NR1I2 agonists (econazole and ritonavir in our study), which were also identified by El-Hachem et al. [30]. This limited agreement between similar studies may stem from the use of different starting transcriptomic datasets as well as from differences in the criteria applied in the dataset and DEG selection procedures. The main strengths of this study are a sound study design and the integration of current biological knowledge with rigorous statistics. The pipeline we developed employs a rational and biologically relevant selection of datasets and differentially expressed genes with the aim of increasing the reliability of the results. A limitation of this work is that it was performed in the early phase of COVID-19 investigations on a relatively small number of available datasets. Moreover, the selected datasets were obtained from cancer cell lines, which are not an optimal source of transcriptomic data for studying a cancer-unrelated disease such as COVID-19. Nevertheless, our pipeline could be applied in future, more comprehensive studies upon publication of transcriptomic datasets obtained from more relevant biosamples, such as SARS-CoV-2-infected primary human cell lines. This approach would also benefit from a more profound understanding of the cellular tropism of SARS-CoV-2 and of the vulnerability of different primary cell lines to the virus. In such a future study, our pipeline could be refined to incorporate an additional dataset filtering step with positive selection of vulnerable cell lines and negative selection of indifferent cell lines, whereas drugs could be additionally filtered based on their selective efficiency in the relevant cell types only, to avoid bystander toxicity. The pipeline could also be further upgraded to address more complex questions, such as the temporal dimension of transcriptome changes upon SARS-CoV-2 infection and the timing of drug treatments. This would require more knowledge of the dynamic nature of cellular changes post-infection. A larger number of relevant transcriptomic datasets could then be analysed after their initial clustering based on time post-infection and cell type. Finally, while in this study we focused only on robustly affected genes with a fold change higher than 2 due to the limited number of datasets, the developed pipeline could be further refined by optimizing the fold change threshold in a gene-specific manner, given that a given fold change in expression does not have equal biological effects for all genes. This will also be possible upon the generation of more transcriptomic datasets and more profound knowledge about the genes and pathways that are key to the pathogenesis of COVID-19. Differential Gene Expression Analyses The complete bioinformatics pipeline was performed in the free software environment for statistical computing R, version 4.0.0 [54]. Differential gene expression analysis was performed with the R package DESeq2 version 1.28.1 [55]. Raw counts from each of the included transcriptomic datasets were first pre-filtered to remove genes with read counts lower than 10. The remaining raw counts were normalized using the DESeq2 variance stabilizing transformation (VST).
PCA was performed on the VST-normalized counts. For further downstream analysis, only DEGs with a false discovery rate (FDR)-adjusted p-value < 0.05 and a fold change > 2 for upregulated genes or < 0.5 for downregulated genes were considered. Hierarchical clustering of datasets was performed with the DEGs as input, using the Euclidean distance measure and complete linkage as the clustering method (base R function hclust). The R package biomaRt version 2.44.0 [56,57] with the Ensembl database was used to convert gene names to Entrez IDs for downstream analysis. Functional enrichment analysis was performed with the R package clusterProfiler version 3.16.0 [58]. The GO over-representation test was done separately for up- and downregulated DEGs, and the results were filtered based on an FDR-adjusted p-value less than 0.05. Redundant GO terms were removed by applying the semantic similarity method implemented in the simplify function, using a similarity cut-off of 0.4 [59]. Library of Integrated Network-Based Cellular Signatures (LINCS) Database Analysis Transcriptomic signatures induced by SARS-CoV-2 infection were compared with the signatures induced by treatments with various small-molecule compounds using the CMap analysis approach. The CMap analysis was conducted using the LINCS Phase 1 reference database via the R package signatureSearch version 1.2.5 [60]. Within signatureSearch, the LINCS reference database consists of differential gene expression data for 12,328 genes obtained upon treatment of 30 cell lines with 8140 compounds as perturbagens, corresponding to a total of 45,956 signatures [60]. The results of the LINCS analysis are lists of perturbagen-cell line connectivity scores represented by Tau (Tau is a standardized score ranging from −100 to 100, where a more negative/positive value signifies more extensive reversal/enhancement of the transcriptomic signature by a perturbagen in a given cell line) [22]. The obtained list of signatures was further filtered according to the following pipeline: (1) an FDR-adjusted p-value of the weighted connectivity score was given for each perturbagen-cell line combination, and only significant combinations with an FDR-adjusted p-value less than 0.05 were selected; (2) a Tau connectivity score was given for all significant perturbagen-cell line combinations; wherever a perturbagen was tested in multiple cell lines, the mean Tau connectivity score and its coefficient of variation (CV, the standard deviation divided by the mean) were calculated, and only perturbagens with CV < 1, i.e., those that showed a coherent transcriptomic signature in multiple cell lines, were chosen; finally, all perturbagens with Tau < −85 were retained for further analysis (the recommended Tau threshold of −90 was lowered to −85 to increase the final number of identified drug candidates); (3) the list of perturbagens was additionally reduced to include only approved drugs, which were used for downstream analysis. Information about drug approval status was obtained via the CLUE Repurposing App (https://clue.io/repurposing-app/; selection of 2427 drugs in the launched phase).
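The query-and-filter sequence just described might be sketched as follows with signatureSearch and dplyr. The qSig()/gess_lincs() calls follow the package's general workflow, but the reference-database handle, the result column names (pert, Tau, WTCS_FDR) and the approved-drug table are assumptions, not the exact code used here.

library(signatureSearch)
library(dplyr)

# up_degs / down_degs: Entrez IDs of the final up- and downregulated DEGs
query <- qSig(query       = list(upset = up_degs, downset = down_degs),
              gess_method = "LINCS",
              refdb       = "lincs")              # LINCS Phase 1 reference database
gess  <- gess_lincs(query, tau = TRUE)
hits  <- result(gess)                             # one row per perturbagen-cell line pair

candidates <- hits %>%
  filter(WTCS_FDR < 0.05) %>%                     # step (1): significant connections only
  group_by(pert) %>%                              # step (2): summarise across cell lines
  summarise(mean_tau = mean(Tau),
            cv       = sd(Tau) / abs(mean(Tau)),  # CV on the magnitude of the mean
            .groups  = "drop") %>%
  filter(cv < 1, mean_tau < -85) %>%              # coherent and strongly reversing
  filter(pert %in% approved_drugs$pert_iname)     # step (3): launched drugs (CLUE list)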
Bio- and Chemoinformatic Analyses of Candidate Drugs Information about the drugs (molecular formula, molecular structure as canonical Simplified Molecular Input Line Entry System (SMILES) strings, chemical class, pharmacological class, current indication based on the Anatomical Therapeutic Chemical (ATC) classification, mechanism of action (MOA) and cellular location) was collected from the PubChem online database (https://pubchem.ncbi.nlm.nih.gov/) [61]. Physicochemical profiles of the drugs were estimated using ADMET Predictor™ 9.5 software (Simulations Plus, Inc., USA) with the canonical SMILES of the compounds as inputs [62]. The parameter relative polar surface area (RelPSA) was calculated using DataWarrior software [63]. Ionisation states of the drugs were estimated from the acidity and basicity ionization constants calculated by the ADMET Predictor™ 9.5 software. Information on drug targets and the type of drug-target interaction was obtained from the online databases DrugBank (https://go.drugbank.com/) [64] and the Drug Gene Interaction Database (DGIdb; https://www.dgidb.org/) [65]. The cellular location of drug targets was extracted from DrugBank. Information about drug target protein families and superfamilies was obtained from UniProtKB (https://www.uniprot.org/) [66] and InterPro (https://www.ebi.ac.uk/interpro/) [67], while information on enzyme class was obtained from the Integrated relational Enzyme database (IntEnz; https://www.ebi.ac.uk/intenz/) [68]. Functional enrichments at the level of drugs (Drug Set Enrichment Analysis (DSEA)) and targets (Target Set Enrichment Analysis (TSEA)) were performed with signatureSearch using the hypergeometric test function and GO annotation. Results were filtered based on an FDR-adjusted p-value less than 0.05, and redundant GO terms were removed using the REVIGO online tool (http://revigo.irb.hr/) [69] with a similarity cut-off of 0.7. Clustering of the drugs was performed in the following steps: (1) for structural similarity only, canonical SMILES were converted into circular ECFP6 (extended-connectivity fingerprint of diameter 6) fingerprints using the R package rcdk version 3.5.0 [70] with default options; (2) a similarity matrix was calculated from the binary (or ECFP6, in the case of structural similarity) fingerprints with the default Tanimoto similarity metric using the package fingerprint version 3.5.7 [71]; (3) hierarchical clustering was performed using the base R function hclust with the distance matrix (1 − Tanimoto similarity) as input and the default option of complete linkage as the clustering method (an illustrative sketch of this clustering step is given at the end of this section). Preparation of Figures All figures (except the pipelines and the drug-target-pathway network) were designed in R, version 4.0.0 [54], using the following packages: ggplot2 version 3.3.2 to visualize the results of the PCA and create barplots [72], dendextend version 1.14.0 to visualize the results of hierarchical clustering as dendrograms [73], and clusterProfiler version 3.16.0 for depicting the results of the GO enrichment analysis [58]. The drug-target-pathway network was visualized using the open-source network visualization software Cytoscape version 3.7.1 [74].
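The structural-similarity clustering referred to above could be reproduced roughly as below; drug_smiles is a placeholder named character vector of canonical SMILES taken from the candidate table, and the sketch is not the authors' exact script.

library(rcdk)
library(fingerprint)

mols <- parse.smiles(drug_smiles)                            # parse canonical SMILES
fps  <- lapply(mols, get.fingerprint, type = "circular")     # ECFP6-style circular fingerprints

sim <- fp.sim.matrix(fps, method = "tanimoto")               # pairwise Tanimoto similarity
hc  <- hclust(as.dist(1 - sim), method = "complete")         # 1 - similarity used as distance
plot(hc, labels = names(drug_smiles))                        # dendrogram of the candidates

max(sim[upper.tri(sim)])   # largest pairwise similarity (0.54 in this study)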
Supplementary Materials: The following are available online at https://www.mdpi.com/1424-8247/14/2/87/s1, Figure S1: Selection of the relevant datasets (detailed pipeline), Figure S2: Minor portion of DEGs is shared among multiple datasets, Figure S3: The PCA score plots for the three cell lines with two different MOIs and for a combination of NHBE cells and hBO, Figure S4: Hierarchical clustering of various biosamples based on transcriptomic signature changes upon SARS-CoV-2 infection, Figure S5: Selection of the relevant DEGs (detailed pipeline), Figure S6: Final list of consensus DEGs upon SARS-CoV-2 infection, Figure S7: Selection of the drugs (detailed pipeline), Figure S8: Distribution of 37 repurposable drug candidates with a potential to reverse transcriptomic signature upon SARS-CoV-2 infection based on their properties, Figure S9: Hierarchical clustering of 37 drug candidates based on molecular structure, Figure S10: PCA biplot demonstrating heterogeneity of 37 drugs in physicochemical space, Figure S11: Distribution of 37 drug candidates based on drug target properties, Figure S12: Hierarchical clustering of 37 drug candidates based on combined properties; Table S1: List of DEGs for each dataset separately (8×), Table S2: List of DEGs for each group of datasets separately (4×), Table S3: List of 636 DEGs common between A549-ACE2 and Calu-3, Table S4: List of significantly enriched pathways involved in SARS-CoV-2 infection, Table S5: Description of GO Biological Process categories for which DEGs were excluded, Table S6: Final list of 539 DEGs common between A549-ACE2 and Calu-3 after exclusion of "host defense against viral infection" genes, Table S7: Characterization of 37 drug candidates with a potential to reverse transcriptomic signature upon SARS-CoV-2 infection, Table S8: Target characterization of 37 drug candidates, Table S9: Physicochemical properties of 37 drug candidates, Table S10: Main drug target protein families distribution comparison for all FDA approved drugs and 37 drug candidates, Table S11: List of significantly enriched pathways regulated by 37 drug candidates, Table S12
9,249.8
2021-01-25T00:00:00.000
[ "Biology", "Medicine" ]
African Journal of Biotechnology Review Bacterial species identification getting easier The traditional methods of bacterial identification are based on observation of either the morphology of single cells or colony characteristics. However, the adoption of newer and automated methods offers advantages in terms of rapid and reliable identification of bacterial species. The review provides a comprehensive appreciation of new and improved technologies such as fatty acid profiling, sequence analysis of the 16S rRNA gene, matrix-assisted laser desorption/ionization time-of-flight (MALDI-TOF), metabolic fingerprint profiling using BIOLOG and ribotyping, together with the computational tools employed for querying the databases that are associated with these identification tools, and high-throughput genomic sequencing in bacterial identification. It is evident that with the increase in the adoption of new technologies, bacterial identification is becoming easier. INTRODUCTION Bacteria are primarily grouped according to their morphological characteristics (shape, presence or absence of flagella, and arrangement of flagella), substrate utilisation and Gram staining. Another important trait is their pattern of growth on solid media, as different species can produce very diverse colony structures (Christopher and Bruno, 2003). The traditional methods that employ observation of either the morphology of single cells or colony characteristics remain reliable parameters for bacterial species identification. However, these traditional techniques have some disadvantages. Firstly, they are time-consuming and laborious. Secondly, variability of culture due to different environmental conditions may lead to ambiguous results. Thirdly, a pure culture is required to undertake identification, making the identification of fastidious and unculturable bacteria difficult and sometimes impossible. To evade these problems, newer and automated methods which rapidly and reliably identify bacteria have been adopted by many laboratories worldwide. At least one of these methods, namely analysis of the 16S rRNA gene, does not require a pure culture. Combining these automated systems with the traditional methods provides workers with a higher level of confidence for bacterial identification. This review serves as a comprehensive appreciation of these new technologies. The methods we discuss are fatty acid profiling, sequence analysis of the 16S rRNA gene, protein profiling using matrix-assisted laser desorption/ionization time-of-flight (MALDI-TOF), metabolic fingerprint profiling using BIOLOG, and ribotyping, together with the computational tools employed for querying the databases that are associated with these identification methods. We further discuss the role of high-throughput genomic sequencing in bacterial identification. Unfortunately, laboratories in poor countries cannot afford some of these new systems. With increased access to these technologies, workers in many laboratories will find the identification of bacterial species easier.
THE MORPHOLOGICAL IDENTIFICATION OF BACTERIA As it has always been the desire of humankind to understand the environment, the classification and identification of organisms have always been among the priorities of the early scientists. Unlike zoologists and botanists, who have a plethora of morphological traits with which to identify animals and plants, the morphological characters for identifying bacteria are few and limiting. This not only provided a challenge, but also an opportunity for creativity. Gram staining was a result of the creative insight of Hans Christian Joachim Gram (1850-1938) to classify bacteria based on the structural properties of their cell walls. It was based on Gram staining that bacteria could be differentially classified as either Gram positive or Gram negative, a convenient identification and classification tool that remains useful today. Although there are few morphological traits, and little variation in those traits, identification based on morphology still has significant taxonomic value. When identifying bacteria, much attention is paid to how they grow on the media in order to identify their cultural characteristics, since different species can produce very different colonies (Christopher and Bruno, 2003). Each colony has characteristics that may be unique to it, and this may be useful in the preliminary identification of a bacterial species. Colonies with a markedly different appearance can be assumed to be either a mixed culture, the result of environmental influence on a bacterial culture which normally produces known colony characteristics, or a newly discovered species. The features of the colonies on solid agar media include their shape (circular, irregular or rhizoid), size (the diameter of the colony: small, medium, large), elevation (the side view of a colony: elevated, convex, concave, umbonate/umbilicate), surface (how the surface of the colony appears: smooth, wavy, rough, granular, papillate or glistening), margin/border (the edge of a colony: entire, undulate, crenated, fimbriate or curled), colour (pigmentation: yellow, green among others), structure/opacity (opaque, translucent or transparent), degree of growth (scanty, moderate or profuse) and nature (discrete or confluent, filiform, spreading or rhizoid). Cell shape has also been used in the description and classification of bacterial species (Cabeen and Jacobs-Wagner, 2005). The most common shapes of bacteria are cocci (round in shape), bacilli (rod-shaped) and spirilla (spiral-shaped) (Cambray, 2006). Observations of bacterial morphologies are made by light microscopy, which is aided by the use of stains (Bergmans et al., 2005). The Dutch microbiologist Antonie van Leeuwenhoek (1632-1723) was the first person to observe bacteria under a microscope. Without staining, bacteria are colourless, transparent and not clearly visible, and the stain serves to distinguish cellular structure for a more detailed study. The Gram stain is a differential stain with which to categorise bacteria as either Gram positive or Gram negative. Observing bacterial morphologies and the Gram reaction usually constitutes the first stage of identification. Specialised staining for flagella reveals whether bacteria have flagella, and the arrangement of the flagella differs between bacterial species. This serves as a good and reliable morphological feature for identifying and classifying bacterial species.
Light microscopy was traditionally used for identifying colonies of bacteria and the morphologies of individual bacteria. The limitation of the light microscope was its often insufficient resolution to project bacterial images with enough clarity for identification. Scanning electron microscopy (SEM) coupled with high-resolution back-scattered electron imaging is one of the techniques used to detect and identify morphological features of bacteria (Davis and Brlansky, 1991). SEM has been widely used in identifying bacterial morphology by characterizing surface structure and measuring cell attachment and morphological changes (Kenzata and Tani, 2012). A combination of morphological identification with SEM and in situ hybridization (ISH) techniques (SEM-ISH) has improved understanding of the spatial distribution of target cells on various materials. This method was developed in order to obtain phylogenetic as well as morphological information about the bacterial species to be identified, using in situ hybridization with rRNA-targeted oligonucleotide probes (Kenzata and Tani, 2012). These morphological identification techniques were improved in order to better identify poorly described, rarely isolated, or phenotypically irregular strains. An improved method was proposed for bacterial cell characterization by segmenting digital bacterial cell images and extracting geometric shape features describing cell morphology. Classification techniques, namely 3σ and K-NN classifiers, are then used to identify the bacterial cells based on their morphological characteristics (Hiremath et al., 2013). In addition to microscopy, several other tools for bacterial identification are useful to confirm identities based on morphology, thereby increasing the level of confidence in the identity. Among these tools is the analysis of fatty acid profiles, which is discussed next. FATTY ACID ANALYSIS Fatty acids are organic compounds commonly found in living organisms. They are abundant in the phospholipid bilayer of bacterial membranes. Their diverse chemical and physical properties determine the variety of their biochemical functions. This diversity, which is found in unique combinations in various bacterial species, makes fatty acid profiling a useful identification tool. The fatty acid profiles of bacteria have been used extensively for the identification of bacterial species (Purcaro et al., 2010). Fatty acid profiles are determined using gas chromatography (GC), which distinguishes bacteria based on their physical properties (Núñez-Cardona, 2012). Reagents to cleave the fatty acids are required for saponification (45 g sodium hydroxide, 150 ml methanol and 150 ml distilled water), methylation (325 ml certified 6.0 N hydrochloric acid and 275 ml methyl alcohol), extraction (200 ml hexane and 200 ml methyl tert-butyl ether) and sample clean-up (10.8 g sodium hydroxide dissolved in 900 ml distilled water). Information on the fatty acid composition of purple and green photosynthetic sulphur bacteria includes fatty acid nomenclature, the distribution of fatty acids in prokaryotic cells, and published information on the fatty acids of photosynthetic purple and green sulphur bacteria (Núñez-Cardona, 2012). This information also describes a standardised gas chromatography technique for the fatty acid analysis of these photosynthetic bacteria using a known collection and wild strains.
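To illustrate the basic idea of profile-based matching in a very simplified form, the toy sketch below (in R) compares one unknown fatty acid composition against a small, made-up reference library by Euclidean distance. The values and taxon names are purely illustrative and do not represent real reference data or the commercial MIDI algorithm.

# Reference library: percent composition of a few fatty acids per taxon (toy values)
library_profiles <- rbind(
  taxon_A = c(iso_15_0 = 35, anteiso_15_0 = 40, c16_0 = 5,  iso_17_0 = 10, c18_1 = 10),
  taxon_B = c(iso_15_0 = 1,  anteiso_15_0 = 1,  c16_0 = 30, iso_17_0 = 3,  c18_1 = 65),
  taxon_C = c(iso_15_0 = 50, anteiso_15_0 = 20, c16_0 = 10, iso_17_0 = 15, c18_1 = 5)
)

# Profile measured by GC for an unknown isolate (toy values)
unknown <- c(iso_15_0 = 33, anteiso_15_0 = 42, c16_0 = 6, iso_17_0 = 9, c18_1 = 10)

# Euclidean distance to each reference profile; the smallest distance is the best match
dists <- apply(library_profiles, 1, function(ref) sqrt(sum((ref - unknown)^2)))
sort(dists)[1]   # closest reference taxon (here, taxon_A)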
The cellular fatty acid analysis for bacterial identification is based on the specific fatty acid composition of the cell wall. The fatty acids are extracted from cultured samples and are separated using gas chromatography. A computer-generated, unique profile pattern of the extracted fatty acids is compared, through pattern recognition programs, to existing microbial databases. These databases include fatty acid profiles coupled with assigned statistical probability values indicating the confidence level of the match. This has become very common in biotechnology. Fatty acid analysis for bacterial identification using gas chromatography became simpler with the availability of computer-controlled chromatography and data analysis (Welch, 1991). The fatty acid analysis method uses the electronic signal from the gas chromatographic detector and passes it to the computer, where the integration of peaks is performed (Sasser, 2011). The whole-cell fatty acid methyl ester content is a stable basis for bacterial identification because the analysis is rapid, cheap, simple to perform and highly automated (Giacomini et al., 2000). In addition, bacterial identification can be done at or below the species level. Adams et al. (2004) determined the composition of the cellular fatty acids (CFA) of Bacillus thuringiensis var. kurstaki using the MIDI Sherlock microbial identification system on a Hewlett-Packard 5890 gas chromatograph. This study revealed the capability to detect strain variation in the bacterial species B. thuringiensis var. kurstaki and to clearly differentiate strain variants on the basis of qualitative and quantitative differences in hydrolysable whole CFA compositions in the preparations examined. Since this technology was used to resolve strain differences within a species, we can easily assume that the differentiation of species is done more accurately when fatty acid profiling is used. Kloepper et al. (1991) isolated and identified bacteria from the geocarposphere, rhizosphere, and root-free soil of field-grown peanut at three sample dates, using the analysis of fatty acid methyl esters to determine if qualitative differences exist between the bacterial microflora of these zones. The dominant genera across all three samples were Flavobacterium for pods, Pseudomonas for roots, and Bacillus for root-free soil. Heyrman et al. (1999) likewise grouped isolates into clusters on the basis of their fatty acid profiles; other clusters contained nocardioform actinomycetes and Gram-negative bacteria, respectively. A cluster of the latter contained extremely halotolerant bacteria isolated in Herberstein (Heyrman et al., 1999). At present, no bacterial identification method is guaranteed to provide absolute identity for all presently known bacterial species, and therefore a number of methods are employed for a single identification procedure. Another method that is widely used for bacterial identification is sequence analysis of the 16S rRNA gene.
SEQUENCE ANALYSIS OF THE 16S rRNA GENE Ribosomal RNA genes are a critical part of the protein synthesis machinery. They are omnipresent, and therefore classification based on the analysis of ribosomal RNA genes does not leave out any of the known bacteria. For this reason, analysis of ribosomal RNA genes is a suitable tool for bacterial species identification and taxonomic categorisation. Moreover, ribosomal RNA genes are conserved but have sufficient variation to distinguish between taxa (Woese, 1987). In prokaryotes, ribosomal RNA genes occur in three or four copies in a single genome (Fogel et al., 1999). The 16S rRNA gene has become a reliable tool for identifying and classifying bacteria. Over time, the 16S rRNA gene has shown functional consistency with relatively good clock-like behaviour (Chanama, 1999), and its length of approximately 1,500 bp is sufficient for bioinformatic analysis (Janda and Abbott, 2007). Analysis of the 16S rRNA gene requires that this gene be amplified by polymerase chain reaction (PCR) and the resultant PCR product sequenced. The gene sequence can then be matched with previously obtained sequences available in various DNA databases. This method has been so widely adopted that DNA sequence databases are flooded with sequences of the 16S rRNA gene. Almost all new sequences deposited for query have matches, and any 16S rRNA gene copy which does not match any known bacterial species is believed to be new (Chanama, 1999). In certain instances there is no requirement for pure-colony amplification of the 16S rRNA gene, which makes this method suitable for studies of fastidious and unculturable bacteria and a good tool for the metagenomic analysis of environmental samples. Petrosino et al. (2009) defined metagenomics as "culture-independent studies of the collective set of genomes of mixed microbial communities, (which may) be applied to the exploration of all microbial genomes in consortia that reside in environmental niches, in plants or in animal hosts". With the advent of metagenomic analyses of gross DNA samples, analysis of the 16S rRNA gene is proving its worth. In 16S rRNA-based metagenomics, gene sequencing has been widely used for probing the species structure of various bacteria in the environment (Shah et al., 2010). The 16S rRNA gene sequence is used to detect bacterial species in natural specimens and to establish phylogenetic relationships between them (Eren et al., 2011). This is made possible by the fact that all bacterial species contain the 16S rRNA gene, which has highly conserved regions on which to design universal primers, as well as hypervariable regions that are useful in distinguishing species.
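A minimal sketch of sequence-based assignment in R is shown below, using the assignTaxonomy() classifier from the dada2 package. The query sequence is a short placeholder and the reference training-set file name is an assumption; any appropriately formatted 16S training set (for example, a SILVA release) could be supplied.

library(dada2)

# Placeholder 16S fragment (a real analysis would use a full amplicon sequence)
query_seq <- c(isolate1 = "AGAGTTTGATCCTGGCTCAGGACGAACGCTGGCGGCGTGCCTAATACATGCAAGTCG")

# Naive Bayesian classification against a reference training set (file name assumed)
taxa <- assignTaxonomy(seqs     = query_seq,
                       refFasta = "silva_nr99_v138_train_set.fa.gz",
                       minBoot  = 80)   # bootstrap confidence threshold
taxa                                    # Kingdom ... Genus assignment per query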
The 16S rRNA gene has hypervariable regions which are an indication of divergence over evolutionary time. The 16S rRNA genes of bacteria possess nine hypervariable regions (V1-V9) that display considerable sequence diversity in different species of bacteria (Chakravorty et al., 2007). These regions are flanked by conserved regions on which universal primers can be designed for their amplification. Given that the variation of the hypervariable regions is correlated with the identity of taxa, it is often unnecessary to analyse the whole 16S rRNA gene when identifying species. This adds to the convenience of using the 16S rRNA gene for identifying bacterial species. Since high-throughput sequencing platforms sequence short segments of DNA, analysis of only these hypervariable regions, which are a few hundred bases long, falls within the scale of massively parallel sequencing. This has accelerated the generation of 16S rRNA sequences and their entry into public databases. It is easy for sequence analysis of the 16S rRNA gene to be adopted by many laboratories because it generally requires only PCR and sequencing, which are widely used techniques for many other applications. As a result, there are many studies that have employed sequence analysis of the 16S rRNA gene in taxonomic classification. Computational tools have been employed to identify a wide range of bacteria through the sequence analysis of their 16S rRNA genes. Using this method, 16S rRNA gene fragments are amplified by PCR, and bacteria are identified based on the similarity of their 16S rRNA gene sequences to entries in existing microbial databases. According to Barghoutti (2011), when pure PCR products of the 16S gene are obtained, sequenced, and aligned against a bacterial DNA database, the bacterium can be identified. For bacterial identification, the 16S rRNA gene is regarded as the most widely accepted gene (Song et al., 2003). Signature nucleotides of 16S rRNA genes allow classification and identification of bacterial species even if a particular sequence has no match in the database. The distinctive approach when identifying bacterial species using this method is to perform high-throughput sequencing of 16S rRNA genes, which are then taxonomically classified based on their similarity to known sequences in existing databases (Mizrahi-Man et al., 2013). Kumrapich et al. (2011) examined the endophytic bacteria in the internal tissues of sugarcane leaves and stems using molecular methods. They used a nutrient agar medium to cultivate the endophytes, whereupon 107 isolates of bacteria from the internal tissues of sugarcane leaves and stems were selected for analysis, and 23 species of bacteria were identified and divided into three groups based on the 16S rRNA sequences and phylogenetic analysis. The taxa identified were Sphingobacterium, Bacillus amyloliquefaciens, Bacillus cereus, Bacillus megaterium, Bacillus pumilus, Bacillus subtilis, Agrobacterium larrymoorei, Burkholderia cepacia, Chromobacterium violaceum, Acinetobacter (one strain), Enterobacter (three strains), Klebsiella (one strain), Serratia (one strain), Pantoea (three strains), and Pseudomonas (two strains). Based on amplified 16S rRNA gene sequencing, Bhore et al. (2010) identified bacterial isolates from the leaves of Gaultheria procumbens (eastern teaberry, checkerberry, boxberry, or American wintergreen) as Pseudomonas resinovorans, Paenibacillus polymyxa, and Acinetobacter calcoaceticus. Muzzamal et al.
(2012) isolated and identified an array of 76 endophytic bacteria from the roots, stems, and fresh and wilted leaves of various plants in Pakistan. The morphological, biochemical and physiological characterisation and 16S rRNA gene sequence analysis of the selected endophytic isolates led to the identification of different bacterial species belonging to the genera Bacillus, Pseudomonas, Serratia, Stenotrophomonas and Micromonospora. Although sequence analysis of the 16S rRNA gene has been by far the most common, reliable and convenient method of bacterial species identification, this technique has some shortfalls. Firstly, with this method it is not possible to differentiate between species that share the same sequence of this gene. Identification of bacterial species based on sequence analysis of the 16S rRNA gene relies on matching the obtained sequence with existing sequences, and matching with a sequence that was incorrectly identified leads to incorrect identification. Other problems associated with using the 16S rRNA gene are sequencing artefacts and problems with the purity of bacterial isolates, which may lead to incorrect identification. The problems associated with sequence analysis of the 16S rRNA gene when identifying bacterial species argue for the use of alternative methods to confirm findings. Among these alternative methods is matrix-assisted laser desorption/ionisation time-of-flight mass spectrometry (MALDI-TOF MS), which relies on exploiting differences in bacterial protein profiles. MATRIX-ASSISTED LASER DESORPTION/IONISATION TIME-OF-FLIGHT MASS SPECTROMETRY (MALDI-TOF MS) A rapid, high-throughput identification method, MALDI-TOF MS, has been introduced into bacterial taxonomy. This system has brought reliability, simplicity and convenience. MALDI-TOF is, so far, the only polypeptide fingerprinting-based method to be used for bacterial identification. The first studies regarding the identification of bacteria by MALDI-TOF were conducted towards the end of the 1990s, and the technology was made available as a research tool. It was commercialised for use in private and public laboratories in 2008; the delay in commercialising MALDI-TOF was due to the lack of robust information tools and efficient databases. The MALDI-TOF technique offers easily determinable peptide/protein fingerprints for the identification of bacterial species. This technique has the ability to measure peptides and other compounds in the presence of salts and to analyse complex peptide mixtures, making it an ideal method for measuring non-purified extracts and intact bacterial cells. Bacterial cultures to be queried are spotted on the MALDI-TOF plate, which is placed in the time-of-flight (TOF) chamber. Each sample is spotted at least in duplicate to verify reproducibility. A control specimen of known identity is included to ensure correct identification. The samples are allowed to air-dry at room temperature, inserted into the mass spectrometer and subjected to MALDI-TOF MS analysis. In addition to the cell-smear and cell-extract methods, additional sample preparation methods, as described previously (Smole et al., 2002), are used on a small number of strains. These include heat treatment (15 min at 95°C) of the cell extracts and cell smears, sonication (30 s, 0.3 MHz) of intact cells and the so-called sandwich method (Williams et al., 2003).
MALDI-TOF MS has been successfully applied to a number of taxa, including Listeria species (Barbuddhe et al., 2008), Campylobacter spp. (Fagerquist et al., 2007; Grosse-Herrenthey et al., 2008), Streptococcus pyogenes (Moura et al., 2008), the Burkholderia cepacia complex (Vanlaere et al., 2006), Arthrobacter (Vargha et al., 2006), Leuconostoc spp., Fructobacillus spp., and Lactococcus spp. (De Bruyne et al., 2011). According to De Bruyne et al. (2011), different experimental factors, including sample preparation, the cell lysis method, matrix solutions and organic solvents, may affect the quality and reproducibility of bacterial MALDI-TOF MS fingerprints, and this warrants the use of alternative methods to guarantee correct identification. Computational tools for MALDI-TOF are used according to the tasks they perform: first, pre-processing of spectra; then unsupervised data mining methods, which can be used for preliminary data examination; and then supervised classification, applied, for example, in biomarker discovery. A MALDI-TOF dataset represents a set of mass spectra, with two spatial coordinates x and y assigned to each spectrum. Unsupervised methods used for data mining can be applied without any prior knowledge and aim at revealing general data structure. Supervised methods (mainly classification) require specifying at least two groups of spectra which need to be differentiated, for example by finding m/z values differentiating spectra of tumor regions from spectra of control regions (Alexandrov, 2012). For isolates requiring identification to the species level (n = 986), correct species identification is achieved by the Biotyper and Vitek MS systems and the Saramis database. BIOLOG Different methods have traditionally been used to identify bacteria based on biochemical activity. These methods include the oxidase test and the catalase test. The Biolog OmniLog Identification System [or simply "Biolog" (Biolog Inc., Hayward, California)] is a system that utilises automated biochemical methodologies, an instrument (Miller and Rhoden, 1991; Holmes et al., 1994; Morgan et al., 2009) that tests a microorganism's ability to utilise or oxidise a panel of 95 carbon sources. Tetrazolium violet is incorporated in each of the substrates contained in a 96-well microtitre plate. Biolog's patented technology uses each microbe's ability to use particular carbon sources, together with chemical sensitivity assays, to produce a unique pattern or "phenotypic fingerprint" for each bacterial species tested. As a bacterium begins to use the carbon sources in certain wells of the microplate, it respires. With bacteria, this respiration process reduces a tetrazolium redox dye and those wells change colour to purple. The end result is a pattern of coloured wells on the microplate that is characteristic of that bacterial species. A unique biochemical pattern or "fingerprint" is then produced when the results are surveyed. The fingerprint data are analysed, compared to a database, and an identification is generated. The Biolog system was originally created for the identification of Gram-negative bacteria, but since the introduction of this system in 1989, the identification capability of the instrument has broadened to include Gram-positive bacteria (Stager and Davis, 1992). According to Morgan et al.
According to Morgan et al. (2009), isolates are prepared according to the manufacturer's instructions in the OmniLog ID System User Guide (Biolog, Hayward, CA). All isolates, except the Bacillus species, are cultured at 35°C on a Biolog Universal Growth (BUG) agar plate with 5% sheep blood. After an incubation period of 18 to 24 h, the bacterial growth is emulsified to a specified density in the inoculating fluid (0.40% sodium chloride, 0.03% Pluronic F-68, and 0.02% gellan gum). Bacillus species require a special "dry-tube method" preparation as described by the manufacturer: colonies are picked with a sterile wooden Biolog Streakerz™ stick and rubbed around the walls of an empty, sterile glass tube; inoculating fluid (5 ml) is added to suspend the bacterial film; and the suspension is subsequently used to inoculate the culture wells of Gram-positive microplates (Biolog, Hayward, CA). For all isolates, each well of the Gram-positive or Gram-negative microplate is inoculated with 150 μL of the bacterial suspension. Depending on the type of organism, the microplates are incubated at 30 or 35°C for 4 to 24 h. If bacterial identification has not occurred after 22 h, a reading of "no ID" is given. Each metabolic profile is compared with the appropriate GN or GP OmniLog Biolog database (Biolog, Hayward, CA), which contains biochemical fingerprints of hundreds of Gram-negative and Gram-positive species (Morgan et al., 2009). Biolog has been applied successfully to a number of taxa such as Paenibacillus azotofixans (Pires and Seldin, 1997), Xanthomonas campestris pv. campestris (Massomo et al., 2003) and Glycine spp. (Hung and Annapurna, 2004). Computational tools such as standard multivariate analysis methods, which include cluster analysis, principal component analysis and principal coordinate analysis, are available for simple summarisation of numerical taxonomic traits. Another tool is co-inertia analysis, a multivariate statistical method that performs a joint analysis of two data tables and gives equal consideration to both of them. This is a two-table ordination method that facilitates the establishment of connections between tables whose data domains contain the same or even different numbers of variables, and it allows various standard single-table ordination methods, such as principal component analysis and correspondence analysis, to be connected. The Mantel test is a regression procedure in which the variables themselves are distance or dissimilarity matrices summarising pairwise similarities among objects. Computations and graphic displays for the Mantel test and co-inertia analysis can be obtained with the ADE-4 package (Thiolouse et al., 1997); the documentation and downloads for this programme are available on the internet. The Biolog method indicates potential, but not actual, catabolic activity of a community. Glimm et al. (1997) noticed that the assortment of substrates does not necessarily reflect the substrates available to bacteria in the soil environment, so one can suspect that some microbial species are incapable of growing on the plates because of the lack of proper substrates.
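A minimal, self-contained version of the Mantel test just mentioned is sketched below. It is a simple permutation test written for illustration, not the ADE-4 implementation, and the two dissimilarity matrices are random stand-ins for, say, Biolog substrate-utilisation distances versus 16S-based genetic distances.

import numpy as np

def mantel(d1, d2, n_perm=999, seed=0):
    # Correlate the upper triangles of two square dissimilarity matrices and
    # assess significance by permuting the rows/columns of one matrix.
    rng = np.random.default_rng(seed)
    iu = np.triu_indices_from(d1, k=1)
    r_obs = np.corrcoef(d1[iu], d2[iu])[0, 1]
    exceed = 0
    for _ in range(n_perm):
        p = rng.permutation(d1.shape[0])
        if np.corrcoef(d1[p][:, p][iu], d2[iu])[0, 1] >= r_obs:
            exceed += 1
    return r_obs, (exceed + 1) / (n_perm + 1)

# Toy symmetric dissimilarity matrices standing in for real data.
rng = np.random.default_rng(1)
a = rng.random((8, 8)); d_phenotypic = (a + a.T) / 2; np.fill_diagonal(d_phenotypic, 0)
b = rng.random((8, 8)); d_genetic = (b + b.T) / 2; np.fill_diagonal(d_genetic, 0)
r, p_value = mantel(d_phenotypic, d_genetic)
print(f"Mantel r = {r:.3f}, p = {p_value:.3f}")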
According to Morgan et al. (2009), the Biolog system requires pure cultures and subsequent growth of the bacteria, and pure culture and growth are frequently problematic when it comes to slow-growing, fastidious, unusual, nonviable, or non-culturable bacteria. The turnaround time required for identifying bacterial isolates can be several days to several weeks. The Biolog system is better at identifying fermentative organisms than nonfermenters. However, it should be noted that biochemically active nonfermenters do achieve high identification rates (88%) in the Biolog system, so a different product may be more suitable for inactive nonfermenters. Because of these disadvantages, other bacterial species identification procedures are required. RIBOTYPING The identification of bacterial species based on ribotyping exploits sequence differences in rRNA. DNA is extracted from a sample and digested with restriction enzymes to generate a unique combination of discrete-sized fragments (the ribotyping fingerprint) for a particular bacterial species. This pattern is queried against a database containing numerous patterns of different bacterial species. Before a ribotyping fingerprint database had been developed, rRNA fragments produced from restriction digestion would be probed with a known DNA probe for bacterial species identification. A well-known ribotyping system, the RiboPrinter®, is an automated system used for characterising bacterial samples and is a well-regarded method of genotyping pure culture isolates which is often used in epidemiological studies. The basis of ribotyping is the use of rRNA as a probe to detect chromosomal restriction fragment length polymorphisms (RFLPs). The whole DNA of a pure culture is extracted and cleaved into fragments of various lengths using restriction endonucleases. The resultant fragments are separated by gel electrophoresis and then probed with labelled rRNA oligonucleotides. Kivanç et al. (2011) used the RiboPrinter® to identify a total of 45 lactic acid bacteria from 10 different boza (a malt drink) samples in Turkey. In a study by Inglis et al. (2002), an automated ribotyping device was used to determine the ribotypes of a collection of Burkholderia pseudomallei isolates, and the comparison of automated ribotyping with DNA macrorestriction analysis showed that an EcoRI ribotyping protocol can be used to obtain discriminating molecular typing data on all isolates analysed. Optimal discrimination was obtained by analysing gel images of automated EcoRI ribotype patterns obtained with BioNumerics software in combination with the results of DNA macrorestriction analysis.
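The core idea behind a ribotyping fingerprint, a reproducible pattern of restriction-fragment sizes, can be illustrated with a toy digest. The DNA sequence below is invented and only the EcoRI recognition site is real; actual systems such as the RiboPrinter® work on whole genomic DNA and detect the rRNA-operon fragments with labelled probes.

ECORI_SITE = "GAATTC"  # EcoRI recognition sequence; the cut falls after the G

def fragment_sizes(dna, site=ECORI_SITE):
    # Return the lengths of the fragments produced by cutting at every site.
    cuts, start = [], 0
    while True:
        i = dna.find(site, start)
        if i == -1:
            break
        cuts.append(i + 1)  # cut between G and AATTC
        start = i + 1
    bounds = [0] + cuts + [len(dna)]
    return [b - a for a, b in zip(bounds, bounds[1:])]

toy_genome = "ATGGAATTCCGT" * 3 + "GAATTCAAAT"
print(fragment_sizes(toy_genome))  # the fragment-length pattern is the "fingerprint"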
HIGH-THROUGHPUT SEQUENCING TECHNIQUES There are four sequencing technologies available (capillary sequencing, pyrosequencing, reversible terminator chemistry and sequencing-by-ligation). Sanger capillary sequencing is still based on the same general scheme applied in 1977 for the φX174 genome. The Roche/454 GS FLX Titanium sequencer was the first of the new high-throughput sequencing platforms on the market; it was released in 2005 and is based on the pyrosequencing approach, in which single strands are iteratively complemented while the signal emitted from the nucleotide being incorporated is read out simultaneously. The Illumina Genome Analyzer II/IIx is a reversible-terminator technology and employs a sequencing-by-synthesis concept similar to that used in Sanger sequencing; however, the Illumina protocol requires that the sequences to be determined are converted into a special sequencing library, which allows them to be amplified and immobilised for sequencing (Bentley et al., 2008). The SOLiD platform (sequencing-by-ligation) is very different from the rest discussed thus far, as the sequence extension reaction is carried out not by polymerases but by ligases (Shendure et al., 2005). Sanger capillary sequencing is a low-throughput method, and the sequencing error observed for Sanger sequencing is mainly due to errors in the amplification step (a low rate when done in vivo), natural variance and contamination in the sample used, as well as polymerase slippage at low-complexity sequences like simple repeats (short variable number tandem repeats) and homopolymers (stretches of the same nucleotide). The high-throughput techniques (pyrosequencing, reversible terminator chemistry and sequencing-by-ligation) make bacterial identification easier and make it possible even for single research groups to generate large amounts of sequence data very rapidly and at substantially lower costs than traditional Sanger sequencing. Novel DNA sequencing technologies, called high-throughput sequencing (HTS) techniques, are capable of generating massive amounts of genetic information with increased speed, accuracy and efficiency. High-throughput genome sequencing provides a more detailed real-time assessment of the genetic traits of bacteria than could be achieved with routine subtyping methods. HTS technologies are used for studying diversity and genetic variations and for solving genomic complexities. Approximately 300 complete bacterial genomes had been sequenced by 2010. This has aided and sped up the identification of bacterial species, and HTS technologies remain especially useful for identifying the bacterial species that constitute a population in a sample. CONCLUSION The traditional identification of bacteria on the basis of phenotypic characteristics is generally not as accurate as identification based on genotypic methods. The more traditional methods, whereby bacteria have been identified based on their physical properties, are compound light microscopy in combination with histological staining, and electron microscopy. The latter, in its conventional scanning form, generally offers unique advantages such as high resolution and great depth of field.
The fatty acid profiles of bacteria, which are determined with the aid of gas chromatography, have also been used extensively for the identification of bacterial species. Bacterial phylogeny and taxonomy have further benefited greatly from the use of sequence analysis of 16S ribosomal RNA, which makes the identification of rarely isolated, phenotypically anomalous strains possible. Comparison of the bacterial 16S rRNA gene sequence has emerged as a preferred genetic technique. 16S rRNA gene sequence analysis can better identify poorly described, rarely isolated, or phenotypically aberrant strains, can be routinely used for identification of mycobacteria, and can lead to the recognition of novel pathogens and non-cultured bacteria. Cutting-edge technologies such as MALDI-TOF MS, Biolog and the RiboPrinter® have facilitated bacteriological identification even further. The MALDI-TOF MS technique offers easily determinable peptide or protein fingerprints for the identification, typing and characterisation of various strains. Biolog has been used to identify various lactic acid bacteria strains; it tests a microorganism's ability to utilise or oxidise a panel of carbon sources, and the method is used when characterising bacterial samples within a fixed degree of similarity. Computational tools have been developed for querying the relevant microbial databases associated with these bacterial identification methods. From the current review, it is evident that with the increasing adoption of new technologies and high-throughput sequencing techniques, bacterial identification is becoming easier.
isolated 428 bacterial strains, of which 385 were characterised by fatty acid methyl ester analysis (FAME). The majority (94%) of the isolates comprised Gram-positive bacteria and the main clusters were identified as Bacillus sp.
7,175
2013-10-31T00:00:00.000
[ "Biology", "Computer Science", "Environmental Science" ]
AN OVERVIEW OF RANDOMIZATION IN CLINICAL TRIALS Randomization is the process of assigning treatments to the various experimental units, i.e. to the control and treatment groups in clinical trials, in a purely chance manner. The process of reducing experimental error by dividing relatively heterogeneous experimental units into homogeneous blocks is known as local control. This paper describes some of the main steps for those performing and reviewing randomized controlled trials (RCTs). Introduction:- In the 1920s RA Fisher introduced randomization as an essential technique of his approach to the design of experiments, validating significance tests. In its absence the experimenter had to rely on his judgment that the effects of biases could be discounted. Twenty years later, A Bradford Hill promulgated the random assignment of treatments in clinical trials as the only means of avoiding systematic bias between the characteristics of patients assigned to different treatments. The two approaches were complementary, Fisher appealing to statistical theory, Hill to practical needs. The two men remained on good terms throughout most of their careers. To rule out subjective bias in the subjects under study, blinded trials should be conducted. In a single-blind trial, one group of patients is given one drug and another group is given a different drug of the same colour and size, or a placebo, so that no patient knows what he or she is given. In a double-blind trial, neither the patients nor the observers know which patients are given the drug and which are on placebo. In a triple-blind trial, neither the patients, nor the observers, nor the person analyzing the data knows which patients are given the drug and which are on placebo. Random allocation of patients to treatment and control groups may be done using random numbers; for example, all patients may be handed one of two envelopes, red or white, at random. Discussion:- Designing an experiment means deciding how the observations or measurements should be taken to answer a particular question in a valid and efficient way. According to RA Fisher, the basic principles of the design of experiments are randomization, replication and local control. One of the main purposes of randomization is to improve comparability between treatment groups by balancing observed and unobserved covariates in expectation. Randomization furthermore helps to mitigate the risk of selection bias and, depending on the randomization procedure, can protect against imbalanced group sizes throughout the allocation process. Despite the many benefits of randomization, there are also some limitations. One issue that cannot be addressed by randomization is that patients usually enter a clinical trial sequentially and are often treated immediately.
Consequently, new patients will be enrolled and assigned to therapies while others have already received treatment. This delay in time entails several potential sources of bias: on the one hand, the treatment success itself may be affected by unobserved time trends (chronological bias), which may result, for example, from improved treatment performance due to gained experience, or from changes in inclusion or exclusion criteria. On the other hand, sequential enrollment creates a risk of selection bias whenever blinding cannot be fully attained. Before calculating the sample size, the investigator must determine whether the outcome variable is quantitative (i.e., continuous, such as blood pressure, pulse rate or the weight of a patient) or qualitative (i.e., categorical, such as severity of disease: mild, moderate, intense; or signs and symptoms: present/absent). A study in general should have a single primary outcome measure; other measures should be secondary. If, however, more than one measure is of equal importance, separate sample size calculations should be done and the largest sample size arrived at should be used. A rough idea of the outcome variable in the two groups is the most important input for calculating sample size. If the outcome variable is quantitative, the mean and standard deviation are the summaries required, while for a qualitative outcome variable proportions are needed. To be more specific, calculation of the sample size for any study requires that the investigator specify the following parameters, so that the remaining parameter, i.e., the sample size, can be calculated: 1. The amount of error the investigator is prepared to tolerate in concluding that a difference exists when in fact there is no difference. This is known as Type I error. 2. The amount of error the investigator is prepared to tolerate in concluding that no difference exists between the two groups when in fact a real difference is present. This is known as Type II error. 3. The third parameter, which is inversely related to the size of the study, is the difference that is regarded as clinically important. Fixed randomization is used in two forms: 1. simple randomization and 2. block randomization. Simple randomization is the most elementary form of randomization; it is usually carried out using a random number table or a random number generator on a computer. Hill introduced blocked randomization in 1951 to avoid serious imbalances between the two groups. Blocked randomization guarantees that at all times during randomization the numbers of patients in the two groups will be equal. Summary: The benefits of randomization are numerous. It guards against accidental bias in the experiment and produces groups that are comparable in all respects except the intervention each group receives. The purpose of this paper is to introduce randomization, including its concept and significance, and to review several randomization techniques to guide researchers and practitioners in better designing their randomized clinical trials. The use of online randomization was also demonstrated in this article for the benefit of researchers. Simple randomization works well for large clinical trials (n>100); for small to moderate clinical trials (n<100) without covariates, use of block randomization helps to achieve balance. For small to moderate size clinical trials with several prognostic factors or covariates, adaptive randomization methods could be more useful in providing a means to achieve treatment balance.
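To make the simple and block randomization procedures described above concrete, here is a minimal Python sketch. It is illustrative only and not taken from the paper; the block size and random seed are arbitrary choices.

import random

def simple_randomization(n, seed=42):
    # Each patient is assigned by an independent "coin flip"; group sizes may drift apart.
    rng = random.Random(seed)
    return [rng.choice(["T", "C"]) for _ in range(n)]

def block_randomization(n, block_size=4, seed=42):
    # Each block contains equal numbers of treatment and control assignments,
    # so group sizes stay balanced throughout the allocation process.
    rng = random.Random(seed)
    allocation = []
    while len(allocation) < n:
        block = ["T"] * (block_size // 2) + ["C"] * (block_size // 2)
        rng.shuffle(block)
        allocation.extend(block)
    return allocation[:n]

print(simple_randomization(10))
print(block_randomization(10))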
1,492.4
2020-12-31T00:00:00.000
[ "Mathematics" ]
Analyzing Stereotypes in Generative Text Inference Tasks Stereotypes are inferences drawn about people based on their demographic attributes, which may result in harms to users when a system is deployed. In generative language-inference tasks, given a premise, a model produces plausible hypotheses that follow either logically (natural language inference) or common-sensically (commonsense inference). Such tasks are therefore a fruitful setting in which to explore the degree to which NLP systems encode stereotypes. In our work, we study how stereotypes manifest when the potential targets of stereotypes are situated in real-life, neutral contexts. We collect human judgments on the presence of stereotypes in generated inferences, and compare how perceptions of stereotypes vary due to annotator positionality. Introduction Social categories refer to collections of people with shared traits; stereotypes, cognitive structures that associate categories (e.g., man, Black, poor, professor) with both roles (e.g., doctor) and traits (e.g., absent-minded), are central to how people construe social meaning (Levon, 2014; Macrae and Bodenhausen, 2001; Greenwald et al., 1998). Social psychology has studied how stereotypes, as a cognitive process, are entwined with the production of human affects of prejudice and in-group favoritism, as well as behaviors like discrimination (Stangor, 2014; Jackson, 2011). Linguistic anthropology and sociolinguistic studies argue that language, as the predominant way of naming categories and transmitting knowledge, is the only (or at least the primary) mechanism by which social stereotypes are shared as part of cultural knowledge (Fishman, 1956; Stangor and Schaller, 2012; Maass and Arcuri, 1996).
Table 1: Annotation example; the hypothesis is automatically generated from the premise. Both annotators found the hypothesis grammatically correct and plausible. One annotator viewed this hypothesis as negatively stereotypical towards Cuban people, assuming that they have problems with jobs. The other annotator had the opposite opinion. Annotators differ in their backgrounds and the social groups they belong to.
In this paper, we study ways in which categories implicate inferences around stereotypical roles and traits computationally. 1 Approaching stereotyping through the lens of inference allows us to focus on what models learn as implications rather than simply associations (e.g., that lexical semantics models typically find antonyms like "hot" and "cold" to be highly related). Specifically, we train models for English textual inference, including both logical inference (NLI) and commonsense inference (CI), and investigate how stereotypes are reproduced by these models. The models we train generate hypothesis text given a fixed premise text (e.g., "PERSONX lights up candles", where PERSONX is substituted with the target category label), and by varying the target category label, we are able to investigate what and how much stereotypical information the model produces in its generated hypotheses (see Table 1). To perform this analysis, we collect human judgments on the generated hypotheses, given explicitly stated target categories in an otherwise neutral premise, such as that in Table 1. We focus on 71 target categories drawn from six stereotype domains that are particularly salient in the United States 2 , listed in Table 2. With the collected human judgments, we first investigate which models and categories lead to stereotyped inferences, and the degree to which the invoked stereotypes are negative.
It is well established that stereotypes are both an individual phenomenon, something that resides in the heads of individual people, as well as a cultural phenomenon: "[stereotypes] exist also in 'the fabric of society' itself" (Stangor and Schaller, 2012), and as such who the annotators are matters (Hovy and Spruit, 2016; Jørgensen et al., 2015; Hazen et al., 2020). In view of this, part of our analysis specifically considers how individual annotators' perceptions of stereotypes may vary. Overall, we find that socioeconomic status and politics are the domains most likely to yield stereotyped inferences. This is notable, as most existing work in this space has focused on the domains of gender and race (see § 2). We also discover that within these domains, certain target categories are more likely to yield negatively stereotyped inferences; specifically, the categories of poor, working class, and formerly incarcerated people. For human judgements, we observe that annotators disagree the most on the questions about whether an inference is based on the identity mentioned in the premise, as well as whether it reflects a stereotype or not. This appears especially true when the hypotheses include less well-known stereotypes, or stereotypes toward groups that are not typically stereotyped in US culture. Significant limitations. The most significant limitation is our focus on English and US culture, as discussed above; this means that while we may recognize negative stereotypes of (for instance) Latin Americans in the US, we will likely miss negative stereotyping of Roma in Spain. (Although we focus on the US, many of these categories are salient globally, especially gender, sex and class (Fiske, 2017); other domains may also be globally relevant due to the US's export of stereotypes through media (Crane, 2014).) Our work is also limited to just six stereotype domains, and we do not explicitly account for intersectionality. While our annotators are of diverse cultural backgrounds, another limitation is that there are only four, limiting the breadth of our analysis of annotator positionality. Related Work Our work builds on a growing body of recent computational literature on stereotypes (often termed "bias"). A major focus of past work has been on the domains of gender and race, across a variety of tasks including language modeling, coreference resolution, natural language inference, machine translation, and sentiment analysis (Rudinger et al., 2018; Lu et al., 2018; Dinan et al., 2019; Kiritchenko and Mohammad, 2018); Blodgett et al. (2020) provide a review. There has simultaneously been a range of work aimed at mitigating problems of stereotyping in NLP systems, including many in the space of text generation (Sheng et al., 2020; He et al., 2019; Clark et al., 2019; Huang et al., 2020). In comparison to this line of work, our main extensions are (a) a broader range of domains considered, and (b) a specific focus on the generation of entailed text. Several very recent papers have also explored other stereotype domains, including disabilities (Hutchinson et al., 2020), and larger collections of domains similar to ours. For instance, two recently released datasets by Nadeem et al. (2020) and Nangia et al. (2020) provide example texts and measurements to determine if a language generation system exhibits stereotyping toward the domains of nationality, race, religion, profession, orientation, disability, age, appearance, socioeconomic status, and gender. Li et al.
(2020) probe transformer-based question answering models for stereotypes in the gender, nationality, religion, and ethnicity domains. Here, question/answer pairs are constructed where a particular answer either does or does not contain a known stereotype. Our analysis is similar to these, with a slightly broader set of domains, a focus on inference rather than question answering, and a post-hoc analysis of what a model actually produces, rather than a predefined dataset of potentially expected stereotypes. An advantage of the dataset approach is re-usability, while an advantage of the post-hoc analysis approach is that it may capture stereotypes we had not thought of a priori. Data Generation & Annotation We conduct experiments to study stereotypes with a focus on generative text inference tasks. To do that, we construct a list of stereotype domains and a list of target categories for each of the domains. We also manually create a list of underspecified, real-life context situations for instantiated premises. Using these constructed premises, we conditionally generate hypotheses from three models. The resulting premise-hypothesis pairs are then judged for stereotypes by four human annotators. Background on Text Inference Tasks We consider two text inference tasks: natural language inference (NLI; also textual entailment) and commonsense inference (CI); both are typically framed as classification tasks (Dagan and Glickman, 2004; Bowman et al., 2015; Williams et al., 2018). Namely, given a text premise p and a text hypothesis h, determine the relationship r between the two. For NLI, the typical set of relationships is r = ENTAILED if p logically entails h, CONTRADICTED if h contradicts p, and NEUTRAL otherwise. While CI tasks are less standardized than NLI, here we follow the if-then formulation used in ATOMIC (Sap et al., 2018) and COMET (Bosselut et al., 2019). There, a premise is a short sentence describing a scenario involving a generic participant ("PersonX"). Associated with each premise is a multiplicity of hypotheses, capturing likely or plausible inferences belonging to one of several predefined relation types, e.g., X-INTENT (inferences about PersonX's intent) or X-EFFECT (inferences about the scenario's effect on PersonX). See appendix Table A1 for the full list of relations. Following Bosselut et al. (2019), we consider text inference from a generative perspective: given a premise p and relation type r, generate a hypothesis h that bears that relation to p. This framing enables us to explore what trained models have learned about inference, without providing explicit hypothesis prompts. For NLI, we focus on two finetuned GPT-2 models using the SNLI (Bowman et al., 2015) and MNLI (Williams et al., 2018) datasets. For CI, we use the COMET model (Bosselut et al., 2019), which is trained on the ATOMIC (Sap et al., 2018) dataset. 3 More details are in Appendix A. Experimental Setup Our goal is to construct hypotheses like "The [TARGETCATEGORY] person is cutting up fish for dinner." To this end, we define a set of domains and target categories, and a set of context situations. Stereotype Domains. Certain social categories are more likely to be referenced in stereotyped inferences. As discussed in § 2, previous works have mostly focused on two domains: gender (typically men vs. women) and race (typically Black vs. White).
To broaden the space of consideration, we mostly follow Nangia et al.'s (2020) taxonomy of stereotype domains, which is a narrowed version of the US Equal Employment Opportunities Commission's list of protected categories; to this set, we also add the political stance domain. Overall, the six stereotype domains we choose to focus on are: race/color/ethnicity/ancestry (henceforth, race), gender, religion, nationality, socioeconomic status (henceforth, socio), and political stance (henceforth, politics). Target Categories. Within each stereotype domain, our goal is to select target categories that are (a) common and (b) most likely to be the target of stereotypes in the United States; we rely on authoritative sources to assemble these lists. For religion, nationality, race, socio, and politics, we mostly follow the lists from outside resources (such as Pew, the World Atlas, and Wikipedia; see Appendix C); for gender, we manually create the list. Note that many categories have multiple possible labels; we attempt to use ones that are currently generally benign and affirming, to avoid triggering stereotypical inferences based on an explicitly negative representation of the target category 4 . For instance, we use formerly incarcerated person instead of felon and Black or African American instead of older and/or related derogatory terms. 5 This choice, however, means that our results do not capture the full extent of stereotypes, as more derogatory terms often come with stronger stereotypical inferences, even for the same category (Devine and Baker, 1991). Table 2 is the list of our 71 target categories, which also includes spelling variations for some categories (e.g., presence or absence of a hyphen). In our analysis, we merge multiple terms under one category into a single label (e.g., Latino, Latina, and Latin American are analyzed as Latin). The table of substitutions is provided in the supplement. Context Situations. For our experiments, we manually construct a list of 102 real-life contexts into which the target categories will be inserted. Our aim here is to create premises that describe situations that are unlikely themselves to lead to the production of stereotypes in hypotheses, independently of the target category. When we construct the list we follow two strategies, thus creating two types of contexts. The first type, DAILY ACTIVITIES, are neutral contexts in which it is difficult to imagine a direct extension to a stereotype. These are situations that do not beg for any particular continuation, such as "PERSONX holds a cup" or "PERSONX goes for a walk". The second type, TRIGGERS, are situations that are neutral, but which may trigger some stereotypical output for different target categories. For instance, with the premise "PERSONX is holding a knife", plausible hypotheses include "PERSONX is cooking", "PERSONX is being attacked", and "PERSONX is attacking somebody." See the supplement for the full list. 6 One limitation of using these context templates is that the resulting premise can be somewhat unnatural. However, in order to control triggers other than the tested target category, we opted for simple, if sometimes stilted, premises over more complex ones.
We generate around 130k examples for later human annotation from the cross-product of: (a) target category (71 total), (b) context situation (103 total), (c) model (3: SNLI, MNLI, COMET), and (d) entailment relation (1 for NLI and 3 for CI 7 ), generating three hypotheses for each. To get an initial sense of the outputs generated, we ran the Hugging Face sentiment analysis system on the generated hypotheses 8 ; details are in Appendix B. For each example, it gives a label "negative" or "positive" with an associated probability. For instance, "African Americans participate in demonstrations" gets a negative label, with probability near chance: 54%. Overall, the hypotheses generated from the three models are around half negative and half positive. In general, CI (COMET) tends to produce on average slightly more negative inferences (56%) than NLI (51% for both MNLI and SNLI). Human Annotation The annotation is done by the four authors of this paper. 9 For annotation, we sample a subset of examples from the generated dataset. For hypotheses generated about target categories in our neutral contexts, we have annotators answer the questions in Table 3.
Table 3: Annotation questions.
VALID? Does the hypothesis include more information than the premise? Is it grammatically correct? (If not, skip the remaining questions.)
PLAUS? Is the hypothesis plausible given the premise? (If not, skip the remaining questions.)
IDEN? The hypothesis appears to be based on the target category identity.
SITU? The hypothesis appears to be based on the context situation.
SENTI? What types of inferences does the hypothesis have about the target category?
STEREO? Does the hypothesis contain known (to you) stereotypes of the target category?
WHAT? What is the evoked association of the target category based on the hypothesis?
VALID?: annotators decide whether the hypothesis is grammatically correct and includes more information than the premise. Possible answers are yes or no. For instance, given "A Catholic person saw dust on the floor.", an invalid hypothesis is "The dust on the floor PERSONX." PLAUS?: annotators decide whether the hypothesis is plausible given the premise. The possible answers are yes or no. A plausible hypothesis is one that is logically consistent with the premise. For instance, given "A Catholic person saw dust on the floor.", an implausible hypothesis is "The dust on the floor was the same as tan." IDEN?: annotators choose whether the hypothesis appears to be based on the target category's identity, to one of the following degrees: yes, maybe yes, not sure, maybe no, no. For example, given the premise "A Buddhist person is sleeping.", the hypothesis "The person is poor." is likely based on the target category identity. SITU?: annotators choose whether the hypothesis appears to be based on the situation: yes, maybe yes, not sure, maybe no, no. For instance, given the premise "A Buddhist person is sleeping.", the hypothesis "The person is having a happy dream." is based on the situation. It is possible for an inference to be based on both situation and identity, as in the same premise with hypothesis "The poor person is having a happy dream." SENTI?: annotators identify whether the hypothesis expresses positive or negative sentiment toward the target person on a five-item Likert scale: from negative, maybe negative, neutral to maybe positive, positive. For example, "The American person is rude and entitled" is a negative inference, whereas "The woman makes sure her family is happy" is a maybe positive inference.
STEREO?: annotators choose whether the hypothesis conforms to stereotypes they know; options are yes, no, or maybe. People's perceptions of whether a hypothesis is stereotypical or problematic are highly subjective (Hazen et al., 2020), and one research question we seek to answer is how annotators' levels of agreement may vary for different target categories (see §4.2). Overall, stereotypes can be harmful even when positive; the nurturing stereotype of women is used to justify exclusion from professional settings (Tinsley et al., 2009), and, for women who do not conform to the stereotype, can lead to increased sexual harassment (Leskinen et al., 2015). WHAT?: annotators write the possible associations evoked by the hypothesis as free text. 12 Findings & Analysis We analyze our results from two perspectives: model behavior and human judgment. We first discuss the models' behaviors for different stereotype domains, target categories, and situations. We then explore how human judgments deviate depending on target categories and domains, and provide an annotator agreement analysis. We are also interested throughout in evidence of defaulting (Rosch, 1975), the observation that some target categories often go unremarked (e.g., "woman" may often be used to mean "cis woman", and "American" may often be used to mean "white American"), in the generations, for instance because the models are unused to seeing language with explicit defaults. In analyzing our results, we start from the normative position that identical model behavior across target categories is insufficient, despite being a prevalent goal in past literature (Blodgett et al., 2020, inter alia). We take this position for two reasons. First, because if a person of some category sees an offensive stereotype about themselves in a downstream system, they are harmed even if the same output is generated for other categories. Second, because social hierarchies enable members of some groups to more easily subjugate members of other groups, the same oppressive stereotypes are more likely to harm people in categories lower on the social hierarchy than those higher. For example, for the premise "PERSONX has a child", the generated hypothesis from MNLI is "PERSONX is not allowed to have a child" for African American, Asian American, and Amish. This evokes historically forced sterilization of African American women (Prather et al., 2018), the recently canceled Chinese one-child policy (Xie et al., 2018), and stereotypes of Amish families having many children (https://amishamerica.com/how-many-children-do-amish-have/). These stereotypes are harmful for each of these groups, even though the same hypothesis is also generated for others. The degree of harm also varies by category; for instance, if the same hypothesis were generated for white American, it is unclear that it would cause much harm. More examples from COMET are in appendix Table A2, and in the supplement for SNLI and MNLI. Model Behavior With the collected human annotations, we seek to answer the following research questions: 1. Which models and domains are more prone to invalid and implausible hypotheses? 2. What target categories have more hypotheses based on identity? 3. Which models and domains are more likely to lead to stereotyped hypotheses? Which target categories are more prone to negative inferences? 4. What are the commonly evoked associations? We address each question in turn, expanding on the question, motivating it, and presenting the results. 1. Which models and domains are more prone to invalid and implausible hypotheses?
We aim to assess the models' capability of generating plausible hypotheses. It is harmful if models fail to do so for particular target categories, because then downstream systems will not be able to rely on such an inference model. Additionally, we use this question as a filtering step. For each of the stereotype domains (and models), we wish to know what percentage of generated hypotheses is illegitimate. By illegitimate, we mean hypotheses that are grammatically incorrect, contain no additional information beyond the premise, or are implausible. We compare the results across models and find that the MNLI model is more prone to generating illegitimate hypotheses than the SNLI and COMET models (21.9% versus 7.4% and 8.1% for SNLI and COMET, respectively). We then compare the percentage across stereotype domains to see for which domains the inference systems are more likely to fail at generating legitimate hypotheses. The results are shown in appendix Figure A3. We find that the inference models generate more illegitimate hypotheses for target categories of religion, socioeconomic status, race, and nationality (∼13%) than for target categories of gender and politics (∼7%). We also find that the percentage of illegitimate hypotheses is extremely high for British, Asian, low class, poor, and atheist. We can also see some effect of defaults: cis woman (4.8%) generates more implausible outputs than woman (0%), possibly because in the training data for these models, "cis woman" is rarely seen. 2. What target categories have more hypotheses based on identity? When target categories are embedded in real-life, neutral situations, we prefer a model that generates outputs keyed more to the situation than to the identity of the person mentioned. If a model frequently makes inferences based on identities and ignores the situations for some target categories, this can lead to harms related to those categories. To perform this analysis, we first filter out invalid and implausible hypotheses (VALID?, PLAUS?). Then, among the remaining 1144 annotations, we check how many hypotheses are based on identity by looking into IDEN?. For this analysis, annotations of yes and maybe yes are counted as based on identity. Figure 1 shows for each target category the percentage of hypotheses (post-filtering) that are based on identity. We find that across models, around 29% of generated hypotheses are based on identities, and that the domains of socioeconomic status and religion focus more on identities, in comparison to politics, nationality, race and gender (39% and 33% vs. 29%, 25%, 23%, and 19% respectively). In general, we find that, on average, more vulnerable target categories have a higher percentage of hypotheses generated based on identities. (This is not universal: the target category of aristocratic has generations with the same level of dependency on identity as the low class category, despite the asymmetry in social position here.) We are particularly interested in cases where a hypothesis is based only on identity and not at all on situation: this means that the model has essentially focused exclusively on a person's identity and ignored everything else. Therefore, we explore SITU? and check how many hypotheses are not based on situation for each target category and stereotype domain. Annotations of no or maybe no for SITU? are counted as not based on situation.
In the results, we see that hypotheses generated about formerly incarcerated, poor, working class, and Filipino turn out to be highly dependent on identities. However, among these categories, formerly incarcerated and Filipino have 38.9% and 23.5% of hypotheses exclusively based on identities (and not situation), while the poor and working class categories only have 6.7% and 14.3% of such inferences. (These percentages are color-coded in Figure 1: higher percentages in red, lower in blue.) Overall, the highest percentage of inferences based exclusively on identities is for the religion domain (14.2%) and the lowest is for the gender domain (4.4%). Similar to our observation on IDEN?, we find that vulnerable target categories tend to have more hypotheses that completely ignore the situation. Categories like formerly incarcerated, Asian, Filipino, refugee, Amish, and fascist have a high percentage of hypotheses generated independently of the situation. On the other hand, categories such as white, woman, man, trans man, French, and American have no hypotheses in which the situation is ignored. 3. Which models and domains are more likely to lead to stereotyped hypotheses? Which target categories are more prone to negative inferences? Although the previous question reflects how much the models' generations depend on identity information, we still want to see directly how frequently explicitly stereotypical hypotheses are generated across different models and stereotype domains. If some model consistently generates hypotheses with stereotypes of some target categories, then it can cause representational harms to people in those target categories. To answer this question, we delve into the annotations for STEREO?. For STEREO?, votes for yes and maybe are categorized as containing stereotypes, while no is categorized as not containing stereotypes. For SENTI?, we count positive and maybe positive as positive inferences, negative and maybe negative as negative, and neutral as neither positive nor negative. We find the percentages of stereotyped hypotheses and negative hypotheses are similar across all three models: around 28% contain known stereotypes and 59% carry negative sentiment. Detailed results comparing stereotype domains are shown in Figure 2. Overall, these models generate more stereotyped hypotheses for the domains of socioeconomic status, politics, and nationality, compared to the domains of race, gender, and religion. The most stereotyped categories from each domain are trans woman, Cuban, Latin American, Fascist, Jewish, and poor. In terms of percentage of negative inferences, socioeconomic status has the lowest share (54%) and religion has the highest (63%). Moreover, we find that the target categories most affected by stereotypes are not necessarily prone to negative inferences. For instance, poor has 67% of stereotyped inferences, while only 33% of those are negative. On the other hand, woman has less than 10% of stereotyped inferences, but 76% are negative. Overall, all models produce negative inferences even for categories with a low level of stereotyping: models achieve some parity in distributing negative generations across domains, but, as discussed in the conclusion, this does not necessarily make the models fair. 4. What are the evoked associations? In Table 4, we provide keywords that annotators associated with the target categories. The full list is in the supplementary materials. Some of these associations relate to existing stereotypes, some do not.
For instance, based on the generated hypotheses, democrat is associated with "rude", "causing trouble", and "making deals." Even though there might be no related stereotypes, such hypotheses might still be harmful to the target category. Human Perceptions of Stereotypes We explore human perceptions of stereotypes. It is known that people's perceptions of whether a hypothesis is stereotypical or not can be subjective (McGarty et al., 2002). Overall, we find that annotators agree highly on VALID? and PLAUS?, with 91.8% and 85.8% agreement respectively, and disagree substantially on IDEN?, SENTI?, and STEREO?, with 39.2%, 37%, and 21.8% agreement respectively. To calculate annotator agreement, we use the 255 examples that were annotated by all four annotators. Throughout this section, we calculate agreement as the fraction of times all annotators give the same answer. 14 We filter out examples that have fewer than three annotations. This may happen because, for example, some annotators mark the example as invalid or implausible and thus skip the rest of the questions. Then, for examples that have four annotations, we randomly pick three of them to calculate agreement. Agreement on Hypotheses Origins. Annotators agree more on the situation question (66.5% agreement) than the identity question (39.2%), likely because the situation question is defined purely on the basis of the stated hypothesis, while the identity question depends on annotators' perceptions of that identity. We observe zero agreement on whether the hypothesis is based on identity or not for several target categories such as white, Asian, Mormon, and liberal. On the other hand, the categories of Jewish, communist, and atheist have complete agreement (100%). In general, we see that annotators have more disagreements on the questions that involve target categories' identities, most likely because these rely more on cultural context. Agreement on Stereotyped Hypotheses. Overall, for STEREO? annotators agree on only 21.8% of the examples. We observe that annotators have complete agreement on categories that are either highly stereotyped, such as homeless, trans man, and communist, or have very few widely known stereotypes, such as atheist and Native American. In addition, both the atheist and Native American categories have a very low level (around 6%) of stereotyped hypotheses. We suspect that it is simply easier for annotators to detect stereotypes for typically stereotyped categories. There are also some exceptions like cis woman, which has a high percentage of stereotyped hypotheses (33.3%) but low annotator agreement (0%). We suspect the reason is that the stereotypes towards cis woman in our dataset are not well-known existing stereotypes, which tends to lead to more disagreements. As an example of annotator disagreement for STEREO?, given the premise "A Latin American person has a child," annotators disagreed about whether the hypothesis "The person then gets pregnant" represents a stereotype or not; those who annotated it as a stereotype did so because it evokes a fertility threat stereotype (Gutiérrez, 2009), a stereotype not known by all annotators. Overall, we find that annotators' perception and ability to detect stereotypes vary based on their knowledge of the target categories, arguing that a large and diverse set of annotators is important for problems around stereotyping.
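A minimal sketch of how such agreement figures might be computed is given below. It is an assumed reconstruction of the procedure described in the text (the fraction of examples on which all annotators give the same answer, with three annotations randomly kept per example), not the authors' actual code.

import random
from itertools import combinations

def full_agreement(per_example_answers, seed=0):
    # Keep only examples with at least three annotations; for those with four,
    # randomly keep three, then count the examples where all answers match.
    rng = random.Random(seed)
    usable = [a for a in per_example_answers if len(a) >= 3]
    picked = [rng.sample(a, 3) if len(a) > 3 else list(a) for a in usable]
    return sum(len(set(a)) == 1 for a in picked) / len(picked)

def pairwise_agreement(per_example_answers):
    # Fraction of annotator pairs (within each example) that give the same answer.
    pairs = [x == y for a in per_example_answers for x, y in combinations(a, 2)]
    return sum(pairs) / len(pairs)

answers = [["yes", "yes", "yes"], ["yes", "no", "yes", "yes"], ["no", "no", "no"]]
print(full_agreement(answers), pairwise_agreement(answers))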
Because of the subjective nature of these annotations, we further consider agreement at two levels: (1) how often all annotators agree, and (2) how often a randomly chosen pair of annotators agrees. High percentages for (1) indicate that a question is not particularly subjective (or that all annotators have the same subjective opinion), while a small value of (1) but a large value of (2) indicates that a strong degree of subjectivity exists, but that even among four annotators some of them frequently agree. For (1), agreement on the more objective questions such as hypothesis correctness, plausibility, and relatedness to situation is 91.0%, 82.9%, and 66.7% respectively. On the other hand, we observe zero agreement for stereotypes, 24.9% agreement for identity, and 26.6% for sentiment. This suggests, especially given the 0% for stereotypes, that more annotators are needed in order to feel confident about coverage. For (2), we observe overall a high level of agreement for correctness, plausibility, and relatedness to situation, with 95.3%, 88.0%, and 82.5% agreement respectively. We additionally observe a reasonable level of agreement for sentiment and stereotypes: 57.1% and 61.2% respectively. Agreement regarding whether a hypothesis is based on identity is the lowest at 50.1%. This suggests that while annotators can agree on these questions, there is sufficient subjectivity that all four rarely do. Conclusion & Discussion We investigated stereotypes in generative inference models from two perspectives: model behavior and human perceptions. We find that the domains most stereotyped by our NLI and CI models are religion and socioeconomic status, rather than gender and race, which are the focus of many previous studies. On the other hand, the stereotype domains and target categories we studied are not exhaustive either; even in a US context, we are most obviously missing domains related to disability, beauty/body type, sexuality, age, pregnancy, and so on. Moreover, since we investigated inference tasks, instead of focusing on models generating "fair" hypotheses over target categories, we are much more concerned with how each hypothesis is perceived by a human reader. We observe some cases in which the models generate similar outputs across several target categories, but for which the generated text is highly stereotyped and thus may cause representational harms. Finally, from the human judgments, though our work is limited to US culture and the backgrounds of our four annotators, we find that people's different backgrounds influence their perceptions of stereotypes. Even though this might result in lower agreement scores, such diversity can actually be useful (Pavlick and Kwiatkowski, 2019) in helping to explore the problem space. Overall, when deploying a system, it is important to consider annotators' backgrounds carefully. Considering annotators of different ages, professions, education levels, and cultures might give a multiplicity of valuable perspectives on stereotypes. A Implementation Details Text Inference Datasets. For training our generative inference models, we use three datasets: two of them cover natural language inference, and one is for commonsense inference. The Stanford Natural Language Inference (SNLI) corpus was created by Bowman et al. (2015). It contains about 570k examples. Each example has a premise, a relation (entails, contradicts, neutral), and related hypotheses. Premises were taken from captions for the Flickr30k corpus (Plummer et al., 2015).
Hypotheses are written by crowd workers as independent image captions. The Multi-Genre Natural Language Inference (MNLI) corpus by Williams et al. (2018) was built following the SNLI structure. It has 433k examples. MNLI, being much broader than SNLI, covers ten different domains and has a range of styles, degrees of formality, and topics. The Atlas of Machine Commonsense (ATOMIC) corpus was introduced by Sap et al. (2018). The corpus has about 300k events associated with 877k textual descriptions of inferential knowledge. Such knowledge is collected and organized as if-then relations for hypotheses specifically about a person in a premise named PersonX. There are 4 groups of relations (see Table A1), and each group has several if-then relations; in total, there are 9 if-then relations. For instance, given the premise "PersonX drops a glass" and the relation "Causes for PersonX - because PersonX wanted", the hypothesis is "to get a glass". B Sentiment analysis The Hugging Face sentiment analysis system 16 is used on all 130,000 generated hypotheses from the three models; a small illustrative sketch of this kind of off-the-shelf scoring is given after Table A2 below. For CI (COMET), averaged over all target categories, 56% of inferences have negative sentiment. The domain with the most negative generations is politics, with on average 67% negative inferences, and the most affected target category is fascists, with 85% negative. The two next most negative domains are socioeconomic status and religion, with about 61% negative. The most negative categories in these are protestants and formerly incarcerated, which are 87% and 79% negative, respectively. For MNLI, the results have less variance: the overall average level of negative sentiment is about 51%. The most negative domain is politics (54%) and the most negative category fascists (80%). The domain with the highest variance is socioeconomic status, ranging from 89% negative hypotheses for poor to 24% for wealthy (with an overall average of 30%). The SNLI model has an even more uniform distribution across stereotype domains; all target categories fall within a narrow 45%-52% range, with the only more extreme case being fascists at 64%.
Target Category | Hypothesis
African American | The person then gets called racist.
Black | The person is seen as loving.
Latin American | The person then gets pregnant.
Arab | The person is seen as loving.
Pacific Islander | The person then has to buy food.
Latina | The person is seen as caring.
White American | The person is seen as caring.
Native Hawaiian | The person then gets a lot of respect from people.
Asian American | As a result, others want to go to a restaurant.
Hispanic | The person is seen as untrustworthy.
American Indian | Then child is protected.
Formerly incarcerated | The person then gets arrested.
Filipino | As a result, others want cry.
American | The person is seen as a parent.
British | Then child cries.
Jamaican | The person then gets pregnant.
Mexican | The person then is asked to stop by a counselor.
German | As a result, others want cry.
Italian | The person then is asked questions.
Cuban | As a result, others want to have a good life.
Russian | The person then gets pregnant.
cis woman | The person is seen as loving.
woman | As a result, others want to get a new baby.
man | The person is seen as responsible.
trans man | The person is seen as responsible.
trans woman | The person then is a father.
Buddhist | The person is seen as a parent.
Catholic | The person is seen as pious.
Jewish | As a result, others want cry.
Japanese | The person then gets called a father.
Amish | As a result, others want go to church.
Muslim | Others then are protected by personx.
Mormon | The person is seen as a sinner.
Protestant | Others then cry in sadness.
communist | The person is seen as racist.
capitalist | The person then gets a raise.
conservative | Others then are loved.
republican | The person is seen as racist.
fascist | Then child is abused.
socialist | Others then get a new home.
libertarian | Others then are taken care of.
liberal | The person then gets a raise.
democrat | The person is seen as a racist.
bourgeoisie | The person then gets pregnant.
homeless | The person then gets a new home.
immigrant | Others then they learn about Personx 's culture.
formerly incarcerated | The person is seen as unintelligent.
upper class | The child is taken care of.
rich | The person then gets gifts.
wealthy | As a result, others want have a good life.
refugee | The person is seen as caring.
lower class | The child is taken care of.
middle class | Others then learn.
aristocrat | Then child cries from sadness.
Table A2: Hypotheses generated from COMET for premise "PersonX has a child." across target categories.
Figure A3: Annotation results for the question of which stereotype domains and target categories are more prone to lead to illegitimate hypotheses. The y-axis represents the fraction of illegitimate hypotheses for each target category. For each stereotype domain on the x-axis, the grey line and the shaded box represent the average percentage and its 95% confidence interval for this domain.
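The kind of off-the-shelf sentiment scoring described in Appendix B can be sketched as follows. The specific checkpoint the authors used is not stated here, so the default Hugging Face sentiment-analysis pipeline stands in for it; the example hypotheses are taken from the text above.

from transformers import pipeline

# Default sentiment-analysis pipeline; returns a label and a probability per text.
classifier = pipeline("sentiment-analysis")
hypotheses = [
    "African Americans participate in demonstrations",
    "The person is seen as loving.",
]
for text, result in zip(hypotheses, classifier(hypotheses)):
    # Each result is a dict such as {"label": "NEGATIVE", "score": 0.54}.
    print(f"{result['label']:>8} ({result['score']:.2f})  {text}")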
8,540
2021-01-01T00:00:00.000
[ "Computer Science" ]
Interaction of Spherical Flames of Hydrogen-Air and Methane-Air Mixtures in the Closed Reactor at the Central Spark Initiation with Close-Meshed Obstacles

It is known that if the composition of a gas mixture is far from the concentration limits of ignition, the velocity of flame propagation in the presence of obstacles can quickly increase to supersonic values [1,2]. In studies of rapidly accelerating flames it is possible to observe DDT (deflagration-to-detonation transition); however, the velocity of the supersonic combustion wave in the presence of obstacles is often below the Chapman-Jouguet velocity [3,4]. From the practical point of view, therefore, the main interest in accelerated flames stems from problems of engine operation and explosion safety, and is connected chiefly with the transition of fast combustion to non-stationary quasi-detonation regimes, whose destructive effect is greater than that of the Chapman-Jouguet regime [5]. The relevance of such research is also connected with ensuring fire safety in volumes of complicated geometry, in particular onboard manned spacecraft. It should be noted that the influence of obstacles, according to [1], can act in two ways: one can observe either the occurrence of a detonation wave due to reflections of shock waves, or quenching of a detonation wave as a result of heat losses.

Introduction
The influence of obstacles located in volumes filled with combustible mixture on the propagation of the flame front (FF) has been investigated for a long time. These studies aim to establish both the dependence of the combustion regime on the type of obstacle and the possibility of influencing combustion regimes by varying the obstacle shape. The dual influence of obstacles noted above (acceleration via shock-wave reflections versus quenching via heat losses) also applies to the initial stage of flame acceleration, namely to the moment when the laminar flame meets an obstacle of net shape; that is the subject of the present research. This interaction causes the development of flame instability, promoting flame acceleration [6]. On the other hand, contact of the FF with the reactor surface increases the contribution of heterogeneous reactions, in particular chain termination [7], which should promote flame suppression.
This ambiguous mechanism of obstacle action explains why physical means of detonation suppression (nets, nozzles, etc.) [8] are not always effective. The influence of obstacles (nets and perforated spheres with a minimum cell size of 2x2 mm and an aperture diameter of 4 mm, respectively) located in the combustible gas on the visible combustion velocity of stoichiometric hydrogen-air and hydrogen-oxygen mixtures was investigated in [2]. Acceleration of the FF by a factor of 1.5-2.5 was always observed downstream of the obstacles. DDT was observed for hydrogen-oxygen mixtures, depending on the energy of initiation [2]. However, data on the interaction of flames of lean hydrogen-air mixtures with penetrable obstacles are practically absent from the literature, even though such experiments are of interest for establishing the influence of two opposing factors: flame acceleration (instability development) and flame suppression (chain termination on the obstacle surface). The present work is aimed at investigating the flame propagation dynamics of lean hydrogen-air mixtures and a stoichiometric natural gas-air mixture inside a net sphere, FF propagation through the net sphere, and the further propagation of the FF outside the net sphere.

Experimental
Experiments were performed with lean mixtures of hydrogen (7.5-15% H2) with air and a stoichiometric mixture of natural gas (NG) with air at initial atmospheric pressure and 298 K in a horizontally located stainless steel cylindrical reactor 15 cm in length and 13 cm in diameter. The reactor was equipped with an optical quartz window on one of its butt-ends (Figure 1). Spark ignition electrodes (6) were located in the reactor centre; the gap between them was 0.5 mm. The net sphere (5), with grooves cut out for the electrodes, was fixed on these partially insulated electrodes. The net sphere consisted of two hemispheres fastened by a spring (7); thus the volume enclosed in the net sphere and the external reactor volume communicated only through the net cells. The net was made from aluminum wire. Net spheres of 3 cm in diameter (wire diameter 0.2 mm, cell size 0.04 mm2), d = 4 cm (wire diameter 0.25 mm, cell size 0.08 mm2) and d = 6 cm (wire diameter 0.3 mm, cell size 0.1 mm2) were used. As is known, an aluminum surface is always covered with its oxide; hence the net surface consisted of aluminum oxide Al2O3, which effectively terminates the active centers of combustion (reaction chains) [6]. Experiments were performed as follows. First the reactor was filled with CCl4 (if needed, for better visualization of the H2-air flame); note that an additive of up to 4% CCl4 is inert for the given mixtures [9]. Then the reactor was filled with H2 or natural gas (NG), and air was added to 1 atm. The mixture was kept for 15 min to ensure complete mixing, and then spark initiation was performed (discharge energy 1.5 J). The dynamics of ignition and FF propagation were recorded through the optical window with a color high-speed digital camera (Casio Exilim F1 Pro, frame rate 60-1200 frames/s). The video file was stored in computer memory and processed frame by frame. The pressure change in the course of combustion was recorded by means of a piezoelectric gauge. Before each experiment the reactor was pumped down to 10^-2 Torr.
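The next paragraph refers to equations (1) and (2) for the expansion degree ε_T and the normal velocity U_n, but does not reproduce them explicitly. A plausible form, assuming the standard closed-vessel treatment in which the expansion degree of the combustion products is inferred from the maximum explosion pressure and the normal velocity from the visible flame velocity (an assumption; the exact expressions used in [6] may differ), is:

ε_T = 1 + (P_b - P_0) / (γ P_0)    (1)
U_n = V_v / ε_T    (2)

Relation (2) simply states that the visible (laboratory-frame) flame speed exceeds the normal burning velocity by the expansion factor of the combustion products.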
The degree of expansion of the combustion products ε_T was determined from equation (1) [6] (P_b is the maximum pressure developed in the course of combustion), and the normal velocity U_n of the FF was calculated from equation (2) [6]. In equations (1) and (2), P_0 is the initial pressure, γ = 1.4 is the ratio of specific heats, and V_v is the visible flame velocity.

Results and Discussion
In all experiments with hydrogen flames, the flame initiated in the net sphere passed through the net cells (except for the mixture 4% CCl4 + 7.5% H2 + air). This means that the interaction of a rather slowly propagating flame with a surface providing effective chain termination (Al2O3) does not lead to flame suppression; the normal FF velocity in 15% H2 with air (about 50 cm/s) is roughly 6 times lower than in the stoichiometric flame, and in 10% H2 with air (about 20 cm/s) roughly 15 times lower [9]. Thus, heterogeneous termination of H atoms is not sufficient for flame quenching under our conditions. It has been established that the flame in the 4% CCl4 + 7.5% H2-air mixture does not pass through the net spheres; however, inside the sphere separate flame cells caused by thermal diffusion [9] are observed after initiation. If there is no net sphere in the reactor, the cellular flame rises upwards as in [10]. Thus, under our conditions there is a critical concentration of H2 below which the flame does not pass through a net sphere. This agrees with calculations [11] showing that the influence of chain termination on flame propagation should be observed in the immediate vicinity of the lower concentration limit of flame propagation (which for hydrogen-air mixtures is about 5% H2 [9]). Sequences of video images of FF propagation, illuminated with 4% CCl4, in mixtures of 10% H2 and 15% H2 with air, illustrating the influence of a net sphere on the dynamics of flame propagation, are presented in Figures 2 and 3. As is seen, after propagation through the net sphere the FF is considerably disturbed in comparison with flame propagation without a net sphere. As is seen in Figure 2, the FF consists of cells (of thermal-diffusion nature, see above) which become smaller with increasing H2 concentration, in agreement with [9,11]; in the course of FF propagation, long-wave disturbances also occur. As is seen in Figure 4b, at a time of about 365 ms acoustic fluctuations occur in the presence of the net sphere during combustion of the 10% H2-air mixture, and the pressure-rise time becomes shorter in comparison with the process without the net sphere, in accordance with [2]. As is seen in Figure 5a, during combustion of the 15% H2-air mixture without the net sphere, acoustic fluctuations occur after the pressure maximum, in the time interval 110-125 ms. In the presence of the net sphere of d = 3 cm (Figure 5b), acoustic fluctuations arise considerably before the pressure maximum, at a time of about 95 ms, i.e. the presence of the obstacle leads to faster development of flame instability and intensifies combustion. Notice that in a spherical bomb of larger diameter (38.4 cm) (Figure 3 of [14], curve φ = 0.4) acoustic fluctuations also arise before the pressure maximum. It should also be noted that the larger the diameter of the net sphere, the later the acoustic fluctuations arise: as is seen in Figure 5c, acoustic fluctuations arise after the pressure maximum, at a time of about 115 ms.
This means that the presence of a net obstacle leads to the development of instabilities on the FF and to the occurrence of acoustic fluctuations. In the following series of experiments it was shown that combustion of the stoichiometric NG-air mixture covers the entire reactor volume for all net spheres used in the present work. However, unlike the combustion of hydrogen-air mixtures, at the inner surface of the net sphere the FF actually stops and its luminescence vanishes (Figure 5). The dynamics of the increase in FF radius in the absence and in the presence of a net sphere is shown in Figure 3c. As is seen, the FF slows down in the vicinity of the net sphere; however, after propagation through the obstacle the FF is considerably accelerated, in agreement with [2]. Then, at the reactor wall, the FF slows down again owing to the change in the conditions of expansion of the combustion products [6,9,12,13]. One can see bright streams of hot gas emerging from the volume inside the net sphere, which arise after the FF reaches the reactor walls (shots 17-19, Figure 3b). As was shown in dedicated experiments, these streams are due to small glowing Al2O3 particles carried away by the gas flow. According to contemporary understanding of hydrogen combustion, secondary exothermic reactions are absent in this process [9]; in addition, the presence of a net sphere should lead to faster cooling of the gas inside it, so the gas stream would be expected to be directed into the net sphere. However, the gas stream is directed outward. Determining the nature of this phenomenon requires more detailed research. It has been established that in the presence of net spheres, flame propagation in mixtures of both 10% H2 and 15% H2 in air is accompanied by a characteristic sharp sound, i.e. acoustic fluctuations of the gas occur. Notice that flame propagation in the 10% H2-air mixture without a net sphere is not accompanied by a sound effect. The dependencies of the change in total pressure on time for flame propagation in the specified mixtures in the presence of the net sphere are shown in Figures 4 and 5. These observations also mean that the mechanism of penetration of the NG-air flame through the net obstacle differs from that for hydrogen-air mixtures. The sequence of video images of flame propagation in the NG-air mixture through the net obstacle is shown in Figure 6a, and the dynamics of the increase in FF radius in the presence of a net sphere is shown in Figure 6b. As is seen from Figure 6a, on approach of the FF to the inner surface of the net sphere the flame front practically disappears (shot 20). During combustion outside the obstacle the FF is not accelerated but propagates with an almost constant velocity. Therefore the excitation of acoustic fluctuations by flame acceleration does not take place for this mixture and is not recorded experimentally. An estimate of the normal velocity of flame propagation outside the obstacle using equations (1) and (2) gives about 27 cm/s; this value is close to the normal velocity of a spherical flame for this mixture composition (35 cm/s [9]). One can assume that the attenuation of the NG-air flame is connected with intensive heterogeneous destruction of active intermediate combustion products on the net surface, whereas stable intermediate products (for example, hydroperoxides), diffusing through the net cells, re-initiate flame propagation outside the net sphere. The absence of a sound effect during combustion of this mixture supports this assumption, i.e. the presence of the net obstacle does not lead to FF instability or to the occurrence of acoustic fluctuations.
Let us specify that the normal FF velocities of the 10% H2 and 15% H2 mixtures with air are 21 cm/s and 45 cm/s, respectively, in agreement with [10], in which data from several groups of authors on the determination of U_n, together with a curve of its average values, are presented. Notice that the normal velocity of flame propagation in the stoichiometric NG-air mixture is about 35 cm/s, i.e. the values of U_n for the three mixtures under investigation are close to each other. However, the molar heat of combustion of NG is considerably higher than that of hydrogen [9]. Therefore the disappearance of the NG-air flame at the net obstacle cannot be explained on the basis of thermal theory alone [6]. This means that the work provides direct evidence that the active centers of methane and hydrogen combustion, which determine flame propagation, have a different chemical nature [15,16]. The reason for the close-to-zero velocity of the NG-air flame in the vicinity of the net obstacle is that light hydrogen atoms easily penetrate through the net obstacle, whereas the chain carriers of methane combustion, on the contrary, are effectively terminated on Al2O3.

Conclusions
It is shown that the flames of lean hydrogen-air mixtures (8%-15% H2) can propagate through aluminum net spheres (cell size 0.04-0.1 mm2); the flame of 15% H2 in air is accelerated after the obstacle, and acoustic gas fluctuations occur in the reactor. The smaller the diameter of the net sphere, the earlier the acoustic fluctuations occur. By contrast, the flame of the 8% natural gas-air mixture passes through the obstacles relatively slowly; after the obstacle the flame velocity remains constant, and acoustic fluctuations are not observed experimentally. It is concluded that the active centers of methane and hydrogen combustion, which determine flame propagation, have a different chemical nature.
3,668
2013-12-25T00:00:00.000
[ "Engineering", "Chemistry", "Physics" ]
Property Tax Exemption for Government-Owned Real Estate in Mexico The purpose of this article is to contribute to reflection on whether or not current policies exempting government-owned real estate from paying property tax are appropriate, from the perspective of Mexican municipal finance laws. This source of public revenue was given to Mexican municipalities in February 1983, so for the past 34 years it has been and remains an unfulfilled promise in terms of tax collection. A number of studies have been published on the economic determinants of property tax in Mexico, but exemptions from this tax have not been studied from a regulatory standpoint; this issue is still unexplored and unaddressed by experts in the field. This paper seeks to answer the following questions: What are the municipal finance laws regarding exemptions of government real estate from property tax? How do property tax exemptions for government real estate limit municipal revenue potential? What other factors have contributed to limiting the revenues generated by this tax? Introduction The purpose of this paper is to analyze property tax exemptions extended to government-owned real estate in Mexico.This analysis is particularly pertinent at the present time because it offers evidence of the need to reform municipal finance laws to ensure that this tax is more productive, by repealing tax exemptions that should no longer be in effect.Under current financial conditions, municipalities should no longer grant tax subsidies to properties owned by state and federal governments, because these have much more budget leeway for their public administration.Municipalities need to strengthen this local source of revenues in order to bolster funding for local public services. The paper is organized in four sections.The first offers a brief review of the theoretic background on local autonomy and exemptions.The second describes the method, instruments, and primary and secondary sources used to gather the information.The third section presents the empirical evidence, and the fourth presents conclusions. Local Autonomy and Exemptions In Mexican states, local autonomy requires that governments have the power to attend to local interests and certain power of their resources, which in turn requires a measure of fiscal autonomy.But what does fiscal autonomy mean?For Oates, only revenues in which local authorities decide upon the taxable object or event, the tax base and tax rate, can be considered autonomous [1].For that author, any other arrangement violates the implicit connection between payment of a tax, and the benefit obtained from the public expenditure. The two basic issues in public revenues are to what extent municipal governments should be self-funding, and by which method should they collect their revenues.From a political standpoint, the greater the degree of revenue autonomy, the greater the capacity for local control.According to the principle of subsidiarity, as in today's European Union, public responsibilities should be entrusted to the authorities closest to the citizens [2].Furthermore, as Bahl indicates, fiscal autonomy permits a municipal government to determine the size and composition of its budget according to its needs [3]. 
In practice, one of the essential problems of local public finance is that revenues are limited, because the federal government appropriates the largest share. Municipal governments' economic capacity thus depends on transfers from other spheres of government, compounded by the fact that numerous agencies operating in the local sphere are completely independent of municipal authority [4]. Local governments face serious administrative difficulties in levying taxes, particularly a lack of information on taxpayer income, properties or consumption, making it hard to precisely determine the taxable bases [5]. Tax collection is also more expensive for local governments because they lack economies of scale. Furthermore, state governments decide on the substantive elements of municipal taxes, and municipalities only manage and collect them, so their tax system does not necessarily correspond to their funding needs [6]. Given the growing importance of local administration in recent years, it is indispensable that municipalities have enough revenues to finance their public services and to play their part in economic activity [7]. Properties of federal agencies such as the Mexican Social Security Institute (IMSS), among many others, are also assumed to be exempt [13]. In this country, exemptions are partly to blame for the fact that in 2011, municipalities had to rely on federal allocations to fund 67.5% of their public expenditure [14]. This illustrates the exaggerated degree of tax centralism that municipalities face [15]. This tax, which is known as the warhorse of local taxation [16], provides revenues for funding public services in Italy, France and Portugal [17]. In light of these contrasts, a review of the regulatory framework for this tax in Mexico may yield useful information for policymakers.

Method
This work was developed through qualitative instruments. The authors first reviewed the theory on local autonomy and property taxes, then examined statistics on property tax collection and social welfare indicators published by the National Institute for Statistics, Geography and Informatics [18]. A total of 32 municipal finance laws were identified, and information was also drawn from the webpages of the 32 Mexican states. Information was compiled on the number of properties that are exempt from tax, the amount of taxes exempted, and effective collection, through e-mails, phone calls and contact with municipal authorities 1. The sample is not intended to be representative, as not all local government webpages contain information on exemptions. Nationwide data were unavailable because they are not published by INEGI, which made municipal governments more useful sources of information. This limited the data available on the number of exempt properties to those provided for the municipalities of four states in northwestern Mexico, which belong to zone one of the eight work tables of the current National Tax Coordination System. The analysis of property tax exemptions is absent from the literature in Mexico; Tello notes only that the system is rife with special treatments [13]. This paper, with a small sample, represents only a first step in the empirical study of property tax exemptions extended to government-owned real estate in this country.

Empirical Evidence.
Review of the Regulatory Framework and Data This paper argues that eliminating property exemptions for government properties and thus increasing revenues from this source would enable local governments to depend less on federal and state allocations and to fund public services through local taxes.It would also fortify accountability for elected officials now in office, and according to Haughwout and Inman, misconduct in the management of local public finance causes both companies and people to migrate out of territories [19].This was the motivation for main reforms by which this tax was de-centralized in the past. The 1983, 1999 and 2013 Reforms to Increase Tax Collection For Bird and Slack, property tax revenues rarely account for more than 3 percent In an effort to amend the poor results and increase from this tax, the 1999 reform added an exemption, that read: "…unless those goods are used by stateowned enterprises or private parties, in any form, for administrative ends or purposes other than those inherent to its public purpose."This reform was intended to force state-owned enterprises to pay taxes, bolstering local governments' limited taxation capacities [22]. But hard data from the OECD shows us that in 2014, property tax collection amounted to 0.32 (see Graph 1) percent of GDP, the clearest evidence that the 1983 and 1999 reforms to constitutional article 115 did not have the intended effect, and that revenues continued to stagnate [23].The reform failed to give municipal governments control over the fees, rates and the land and building assessment value tables that would enable them to collect taxes due on these properties.In actual practice, the Mexican municipality today continues to lack taxation powers. Recent Diagnosis Aware of this historical scarcity of local sources revenues for Mexico's muni-Graph 1. Mexico property tax (2000)(2001)(2002)(2003)(2004)(2005)(2006)(2007)(2008)(2009)(2010)(2011)(2012)(2013)(2014).Source: Author's elaboration with database of the OECD (2017) Tax on Property, https://data.oecd.org/tax/tax-on-property.htm.cipal governments, the federal executive branch designed a public policy focused on improving the performance of this tax.This consisted of the addition of an article 2-A to the Tax Coordination Law (TCL) which provided for a new way of distributing the Municipal Promotion Fund (MPF) [24].This fund had in the past been distributed completely in accordance with tables on property tax and water rights collections data.With the 2013 reform, starting in 2015 these funds would be distributed "70 percent by the same criteria and 30 percent according to the revenues generated from property taxes, provided the government of each state is responsible for collecting the property tax on behalf of the municipality" [25].It remains to be seen whether this adjustment to the TCL will actually improve the productivity of property taxes, because one particularly power full imitation is the exemption granted to government buildings. Exploring Municipal Finance Legislation Having analyzed the theory and background of property tax performance in the preceding section, particularly in federal legislation, this section turns its focus to the content of municipal finance law with regard to property tax exemptions for government buildings. 
No statistics are available on the precise amount of tax lost by income tax exemptions for each state, but a state-by-state review of local regulations can provide some insight into the sources of revenue not being exploited in each.Table 1 shows the municipal finance laws of 31 states and the Federal District (now called Mexico City).The second column cites the article on payment of property tax, the third shows the type of government property exempt from payment, and the fourth shows the requirement(s) to qualify for the exemption.There is one exception, the state of Morelos, in which the 2014 Municipal Finance Law, article 93 b is 2, establishes that "property tax is due upon the ownership or possession of land located within municipal territory, regardless of its use or purpose" [26].The findings of this paper suggest that this would be ideal for all municipalities in Mexico-in other words, that all properties should be subject to taxes regardless of who owned them, which would infuse new life into municipal public finances. In the state of Campeche, in addition to public property exemptions, article 28 of the local finance law allows for exemptions on property acquired by family inheritance, as defined in the Civil Code of that state. In another three states-Chiapas, Hidalgo and Tabasco-a written request is needed in order for public property to qualify for a property tax exemption.In Hidalgo, owners must submit a request to the Municipal Treasury within 30 days after receiving a tax collection notice.In Tabasco, besides the written request, owners must attach the corresponding proof to be analyzed by the person in charge of municipal tax policy. In Mexico City, exemption must also be requested in writing by the property owner, and it must be renewed every 5 years.In Durango, federal and state gov- ernments must prove they own the buildings in order to qualify for the exemption.In Morelos, only state-owned government property used for the purposes of educational activity is exempt from property tax. In Nayarit, the law considers all properties subject to tax, except for those exploited directly by the federal, state and municipal governments; and in San Luis Potosi, public property used for any other than its public purpose is subject to tax. There are twenty states in which municipal governments, in keeping with their respective municipal finance laws, obligate governments to pay tax on properties used for other than public purposes.These are:Baja California, Baja California Sur, Coahuila, Colima, Chihuahua, Guanajuato, Guerrero, Jalisco, Michoacán, Nuevo León, Oaxaca, Puebla, Querétaro, Quintana Roo, Sinaloa, Sonora, Tamaulipas, Tlaxcala, Veracruz and Yucatán.Finally, in Zaca-N.G. Z. Espinoza Property Tax Collection by State As Table 2 Some Data on Exemptions: Northwest Mexico How do property tax exemptions for government properties limit tax revenue potential for local governments?The theoretic section of this paper notes that there is no data available on the amount of expenditure the government saves by not paying this tax.Each state has its own municipal finance law, which includes property tax exemptions or abatements for public or private property.Ideally, there would be statistical data for each municipality, but given the exiting limitations this paper focuses on data from municipalities in northwestern Mexico, except Baja California Sur to illustrate the limitations posed by property tax exemptions. 
In the case of Sonora, exemptions of government property account for 5 percent of the total value of the properties listed in municipal land registries (see Table 3).These tax benefits extended to government properties are an indicator of the impact they have on municipal fiscal health.Baja California Sur did not provide information on exemptions, so for this 2 For example, tax stimulus offered by the Law to Promote Investment for the Economic Development of Sinaloa (2014). Other Factors That Limit Tax Collection Having examined in the previous section the impact of property tax exemptions for government properties, this section reviews other factors, particularly property tax revenues to be mediocre and unproductive for local governments 3 . These include poor technical quality of the land registries 4 and scarcity of available resources.Most clearly, however, the 32 municipal finance laws of all the states afford a preferential treatment to buildings owned by municipal, state and federal governments, which not only means a greater loss of collected taxes, but an unequal burden for those who do pay property tax. Recommendations Impose budget coordination rules by which federal and state governments must compensate municipalities for at least 20 percent of the amount of tax revenue they would have received on exempt properties.This is consistent with the 20 percent that each state's General Participation Fund allocates to its municipalities, pursuant to the Federal Tax Coordination Law. The additional revenues municipalities would receive to offset the exemptions should be earmarked for: 1) Improving local taxation system through measures such as personnel training and new technology to make property tax administration more efficient. 2) Evaluating whether the property tax is bing properly administered since 2015, which was the year in which municipalities signed an agreement with state governments to help them boost property tax collection. 3) Funding the implementation of systems for incorporating and updating information on formal and informally owned property in the land registry in order to augment and improve it, as this is the main source of tax and non-tax information on public and private property. Conclusions Despite the limitations imposed by the lack of data on tax collection in all Mexican municipalities, this paper has endeavored to provide a descriptive analysis of property tax exemption policies toward government properties, from the perspective of municipal finance law.The findings illustrate the need to eliminate this type of tax privilege in order to strengthen local autonomy and make it easier to identify taxable property, determine the tax base and set the tax rate.In For the purposes of this article, the term "local governments" refers to sub-national governments of all kinds. 1 The authors are grateful to the public officials of the municipal property tax departments of Baja California and the Cadastral Institute of Sonora for providing this information (November and December 2014).Data from Sinaloa were obtained from Publication 169, Praxis series.N. G. Z. Espinoza, M. A. 
Moya DOI: 10.4236/me.2018.9100571 Modern Economy of a country's GDP [8].The most recent data for Mexico indicate that in the year 2013, according to data from the OECD (2016) (see Appendix), it was barely 0.2 percent [20].In this country, reforms to article 115 of the constitution in 1983 and 1999, the first of which decentralized property tax to the municipal level, have not borne the expected fruit.Regulations permitting exemptions of this tax have been based on the resolution published in the Official Gazette of the Federation (OGF), amending article 115 of the Mexican constitution read in part: federal law may not limit the faculty of local governments to impose real-estate property taxes nor to grant exemptions.Local laws may not establish exemptions or subsidies with respect to private companies or individual; only property in the public domain of the Federation, of the States or Municipalities, may be exempted from those taxes [21]. ITax on property in GDP percentage 15 year period (2000-2014) keeping with the principle of subsidiarity, public officials in charge of collecting municipal taxes and providing local goods and services in Mexico should be administered by the authorities who are closest to the people served.A review of municipal finance laws from state to state show that the exemptions vary.Despite the 1983 and 1989 amendments to article 115 of the Constitution, and more recently, in 2013, to the Federal Tax Code, tax collection has been stagnating for three decades.In fiscal year 2012, Mexico City was responsi-3 . Local governments are considered capable of promoting local economic growth, for example by providing the necessary infrastructure.But in Mexico, municipal revenues today account for a scant 2% of total public revenues; 91% go to the federal government and the remaining 7% to state governments.Property tax exemptions N. G. Z. Espinoza, M. A. Moya DOI: 10.4236/me.2018.9100569Modern Economyerode one of the few revenue sources available to local governments, so eliminating or curbing these exemptions may restore their capacity to meet the increasingly ambitious goals assigned to them. 
[12]en states that there are justifications for granting tax exemptions on both public and private property.But the problem is that this may create locational distortions and tend to encourage inefficient use of land-based inputs.Exemptions may [also] increase the regressivity of the property tax[9].As for exemptions to attract investment, it has been argued that "tax exemption may induce a new company to locate in the municipality, or an established company to expand, […] but it also reduces government revenues"[10].Neighboring municipalities may engage in tax competition in an attempt to attract investment by offering subsidies in the form of non-payment of property taxes.In an exploration of this type of tax in the United States, state that "tax competition for economic development has a long history in the United States and its reform appears to be an intractable challenge; there is reason to be optimistic about improving the use of property tax incentives.State governments control local government taxing powers […] [and can] use property tax incentives for business more effectively"[11].Bahl, Youngman and Martínez-Vazquez sustain that this type of tax exemption in developing countries, such as Mexico, can lead to tax revenues 2 or 5 times below the levels of developed nations[12].This problem of municipal public finance in Mexico is explained in Tello as follows: The property tax sys- Table 1 . Exemptions for government properties.: Appendix.Author's preparation based on a query regarding 32 municipal finance laws in effect in the states of Mexico, as researched in November 2014.*In México City (formerly Distrito Federal, now Ciudad de México) it is called the Tax Code; in Tamaulipas and Chihuahua the Municipal Code; in Tlaxcala the Financial Code; and in Veracruz the Public Finance Code (Appendix). Source shows, in the year 1989, property tax collection in Mexico totaled 415 million pesos, 31 percent of which came from Mexico City.In 1996, property tax revenues totaled 5188 million pesos, and all of the states together, excluding Mexico City, contributed less than half of this amount-barely 46 percent of the total.In 2012, once again, Mexico City accounted for the lion's share of property tax revenues, with 67 percent of the 31,542 million pesos collected by the local governments of our country.During 2014, the government of Mexico City brought in tax revenues of around 39,898 million pesos, of which about 29 percent were from property tax- [28]7].(ElUniversal, 2015).The strong performance of public revenue collection may have been the result of a December 2013 amendment to the article 127 of the Mexico City Tax Code (2014) indicating that "the tax base for property tax shall be the assessed value determined by taxpayers […] according to the market value of the property"[28].The rest of the Mexican states continue to calculate property tax based on the official assessment of the land registry office.So the government of Mexico City collects, nationwide, proportionally more of the total property taxes. Table 3 . 
[31]erty tax exemptions in Northwestern Mexico.: information obtained by phone and e-mail in November and December 2014 from staff at the Land Registry Institute of Sonora and the Municipal Land Registry of Baja California.Data on Sinaloa were obtained from the Forum of Public Administrators held by INAP in September 2015.Tax collection tables were obtained from the webpage of INEGI (State and Municipal Public Finance Statistics).statewehaveonly the tables from INEGI (2014) on property tax collection.In Sinaloa, for example, there were 32,558 properties owned by the three levels of government, that did not pay property tax.In addition, articles 38, 43 and 44 of the Municipal Finance Law of Sinaloa (2014) grants a partial tax exemption to labor and peasant unions, and a 50 and 40 percent abatement for owners in the residential and commercial sectors, respectively[29].This has reduced total tax revenues for Sinaloa, which in turn receives less from the Municipal Promotion Fund, obligating the state to keep its vehicle use and ownership tax in place., then the municipal government, through the Treasury, will proceed to revoke the exemption and request payment of property tax, effective as of the date it is proven to have a different use[30].A full set of data was available from the state of Baja California, and sources interviewed phone said that its municipalities administer the land registry directly, as the attribute is municipal by nature, not state or federal.In this state some 18,694 properties are exempt (see Table4), representing 1.4 percent of the total, and the municipality of Ensenada, for example, extends property tax emptions to 2037 properties, Mexicali to 5,790, Tijuana to 8,961, Playas de Rosarito to 982 and Tecate to 924, which represent 1.1, 1.4, 1.5, 1.1 and 1.4 percent of the tax rolls of each respective municipality.Baja California's 2015 Municipal Finance Law allows exemption from payment of tax on tax public property at the federal, state or municipal level, except when the property is used by private agents or state-owned enterprises for ends other than its public purpose[31].This data property tax exemptions in these states of Mexico reveals that, despite the constitutional reforms to article 115 in 1983 and 1999, there has been no improvement in the performance of municipal property tax revenues.The exception would be Mexico City, which in 2014 obtained strong returns from its property taxes, because starting in that year it began calculating the tax rate based on the market value of the property instead of the land registry assessment, based on changes in its tax code (2014).This experience might be usefully applied by municipalities in the other states of Mexico. Source Table 4 . 
Property tax exemptions in baja california.ones, that limit the hidden potential of property taxes.Social welfare indicators were analyzed for the same states as those examined in the previous section(INEGI, 2014), and reveal that the tax problem, particularly property tax, correlates with other more deep-rooted variables like per capita household income, unemployment, poverty and extreme poverty, in Mexican municipalities.The four states of northwest Mexico show a marked disparity in income, as evident in the Gini coefficient (see Table5): approximately a third of the population lives in poverty and the rate of informal employment averages 43%.High rates of poverty and unemployment be assumed to have a negative impact, both in terms of tax collection-because they indicate that a substantial portion of the publication is not in a very good position to pay taxesand in terms of the pressure they put on the demand for public services.Other issues that exert pressure on municipal governments on the spending side are the types of housing and access to basic services, as shown in Table6.In Sinaloa, only 89% of homes are equipped with basic services (plumbing, sewage, electricity), and only 49.5% of homes were roofed in a resistant material.Thus, compounding the problem of limited property tax collection revenues is the pressing need for public assistance and infrastructure to improve quality of life in the municipality.The data presented in this section attest to the variety of factors that cause Source: Information obtained by phone and e-mail in November and December 2014 from staff at the Land Registry Institute of Sonora and the Municipal Land Registry of Baja California.Data on Sinaloa were obtained from the Forum of Public Administrators held by INAP in September 2015.Tax collection tables were obtained from the webpage of INEGI (State and Municipal Public Finance Statistics).socioeconomic
5,615.4
2018-01-03T00:00:00.000
[ "Law", "Economics" ]
Inhomogeneity of epidemic spreading
In this study, we use the characteristic infected cluster size to investigate the inhomogeneity of epidemic spreading in static and dynamic complex networks. The simulation results show that the epidemic spreads inhomogeneously in both cases. Moreover, the inhomogeneity of the epidemic spreading becomes smaller with increasing speed of the moving individuals and almost disappears when the speed is high enough.

I. INTRODUCTION
In recent years, physicists and biologists have been attracted to the study of epidemic spreading. Undoubtedly, the successive large-scale outbreaks of SARS (Severe Acute Respiratory Syndrome) and H1N1 have made them even more enthusiastic about this subject. Complex networks, as a new branch of statistical physics, provide a reliable model for the intensive study of epidemic spreading. When studying epidemic spreading in complex networks, researchers usually adopt the homogeneous network structure 1-8 and homogeneous mixing 3,9-11 hypotheses. As research progressed, researchers began to question the credibility of the homogeneous network structure hypothesis and to take into account the inhomogeneity of the network structure. 1-5,12 In this study, we examine the credibility of the homogeneous mixing hypothesis, i.e., the assumption that each infected individual has the same probability of contacting any susceptible (healthy) individual, 13 which has not been well discussed until now. We focus on the susceptible-infected-susceptible (SIS) model 14 on complex networks. Here each node in the network represents an individual, and each link represents a connection along which the epidemic can spread. A susceptible individual can be infected by its infected neighbors with some probability, and an infected individual can become susceptible again with another probability. Since infected individuals always infect their neighbors, we expect that the spreading of the epidemic may actually be inhomogeneous; that is, the probabilities of infected individuals connecting with susceptible ones differ significantly. To investigate this inhomogeneity, we perform large-scale numerical simulations on static and dynamic networks. In these simulations, we study two spreading modes of the SIS model. In one mode, the probability that a susceptible individual is infected is unrelated to the number of its infected neighbors; in the other, the probability increases with that number. In this work, the inhomogeneity of the epidemic spreading is characterized by the characteristic infected cluster size (hereinafter referred to as "CICS"), where a cluster is a subnet whose nodes are connected 13,15 (i.e., from any node one can reach any other node along links in the subnet), an infected cluster includes only infected individuals, and the CICS is the typical size of the largest infected cluster, namely the number of infected individuals in the largest infected cluster. The simulation results show that the infected individuals are always distributed inhomogeneously and are prone to gather into large clusters, even if they walk randomly. More interestingly, the inhomogeneity of the epidemic spreading decreases with increasing speed of the individuals, and the epidemic spreads nearly homogeneously when the moving speed is high enough.

II. MODEL
In our model, N individuals walk randomly in a square of linear size L with periodic boundary conditions, and they are distributed randomly in the square initially.
The x i ͑t͒ and i ͑t͒ are the position and motion direction of the ith individual at time t. Then, at time t + ⌬t, they are updated according to where v i ͑t͒ = ͑v cos i ͑t͒ , v sin i ͑t͒͒ is the velocity of the ith individual at time t, v is the speed which is the modulus of the velocity. The speed v is same for all individuals and remains constant in motion. i follows the uniform distribution in ͓− , ͔. ⌬t is the update interval and is set to 1. If ͉x i ͑t͒ − x j ͑t͉͒ Յ r 0 , i j, the jth ͑ith͒ individual is called a neighbor of the ith ͑jth͒ at time t, where r 0 is the interaction radius. This means that there is a link between the ith and jth nodes in the corresponding network. We assume that the update of the individuals' positions and states is simultaneous with the fixed time interval ⌬t. Each individual has two states: susceptible ͑S͒ and infected ͑I͒. The next state of each individual depends on its current state and its neighbors' states. Concretely, if the ith individual is infected currently, then it is cured and becomes susceptible at the next time step with probability ␤; if it is susceptible currently, then it can be infected by its infected neighbors in either of two modes. In mode 1, the ith individual is infected with probability ␣ if it has k͑i͒ Ͼ 0 infected neighbors. In mode 2, it is infected with probability 1−͑1−␣͒ k͑i͒ . Apparently, each infected neighbor of the ith individual infects it independently. Researchers have analyzed the epidemic thresholds of mode 1 in various complex networks [1][2][3]16 and have shown that the degree distribution of the network has a significant impact on the epidemic spreading. Liu et al. 17 investigated the epidemic spreading of mode 2 in community networks. The epidemic spreading of mode 2 in dynamic networks has been explored by Frasca et al. 18 recently. In this work, the mode 1 and mode 2 are both studied. Without lack of generality, we set ␤ = 1, as it only affects the time scale of the infection evolution. 19 Considering the reality, we will not set ␣ to a large value. As a result, the infected individuals can gather into several clusters rather than one. Then, the homogeneity of the epidemic spreading can be explored by analyzing and comparing the CICSs of the different cases. In the following, the number of individuals N is fixed to 100. = N / L 2 is denoted as the density of the network. The size of the square L is measured in units of the interaction radius r 0 ͑r 0 =1͒. As we can see, L is critical for determining the network structure. If L approaches to r 0 = 1, most individuals are connected together. Then, too many individuals will be infected persistently. By contrast, if L is much larger than r 0 = 1, individuals have few opportunities to interact with each other. The network is divided into many small clusters, then not many individuals will be infected. As a result, the epidemic will disappear soon. In this work, we only discuss the cases when L is proper. When v =0 ͑static networks͒, the percentage of the infected individuals is plotted as a function of ␣ with different L in Fig. 1. As shown in Fig. 1͑a͒ ͓Fig. 1͑b͔͒, when L =10 in mode 1 ͑mode 2͒, only when ␣ is above 0.6 ͑0.45͒, the epidemic can spread in the network. When L = 4 in mode 1 ͑mode 2͒, the epidemic persists as long as ␣ is above 0.15 ͑0.07͒; about one third ͑half͒ of the individuals are infected when ␣ equals to 0.5. 
Thus, we choose L = 7 to ensure that neither the epidemic will disappear, nor too many individuals will be infected, with a large range of ␣ value. Moreover, Fig. 6͑b͒ in Sec. IV indicates that the moving of individuals cannot affect the percentage of the infected individuals in mode 2 and has little influence on the percentage in mode 1. So, we also choose L = 7 when v Ͼ 0 ͑dynamic networks͒. In static networks, the individuals are distributed randomly and keep still over time, so each individual connects to another with the same probability. Thus the degree distribution P͑k͒, as the probability that a node is linked to k other nodes, of static networks is Poissonian. In dynamic networks, the individuals are randomly distributed initially and walk randomly in the square. Therefore, the probability that an individual connects to another is a constant, which equals to the probability of the static case. Then the degree distribution P͑k͒ of dynamic networks is also Poissonian. In Sec. III, we investigate the epidemic spreading in static networks ͑v =0͒. Section IV is devoted to the study of the epidemic spreading in dynamic networks ͑v Ͼ 0͒. III. EPIDEMIC SPREADING IN STATIC NETWORKS In order to characterize the homogeneity of the epidemic spreading, we introduce the "homogeneous mode." In this mode, the infected individuals, whose number is N H , are randomly scattered in a square of linear size L ͑L =7͒. Then their distribution is homogeneous. When the infection den- sity H = N H / L 2 is not quite large, the distribution of the infected cluster size in homogeneous mode is similar to the distribution of the cluster size of the site percolation. 20 That is, there exists a c . When Ͻ c , the number of infected clusters n decays approximately as a power law in , while decaying much faster than a power law for Ͼ c ͑see Fig. 2͒. In reality, the infected clusters whose sizes are larger than c barely exist, so the infected cluster with the critical infected cluster size c is regarded as the largest infected cluster that can be observed, and c is denoted as the CICS. With the increase of H , a critical infection density c emerges, above which it is possible that all the individuals gather into one cluster. Our experiments show that c is about 1.4. As shown in Fig. 1, the maximum number of infected individuals is about 50, that is, the maximum infection density is about 1 ͑50/ 7 2 ͒. In such cases, N H infected individuals always gather into more than one cluster. Then we can characterize the homogeneity of their distribution with the CICS. For the convenience of discussion, we denote the CICS of homogeneous mode as H . Correspondingly, the number and density of the infected individuals and the CICS in mode 1 ͑mode 2͒ are denoted by N 1 , 1 = N 1 / L 2 , and 1 ͑N 2 , 2 = N 2 / L 2 , 2 ͒, respectively. We will see later that when 1 = 2 = H , 1 , 2 , and H differ significantly, that is, the epidemic spreads inhomogeneously in mode 1 and mode 2. At t = 0, a given number of individuals are taken as the seeds of the infection ͑the proportion is 10%͒, while all the others start from the susceptible state. Also, all individuals are scattered randomly in the square, namely, the infected and susceptible individuals mix well. After an initial transient process, the systems stabilize in a steady state with a constant average infection density. For each value of ␣, we can get the values of ͑ 1 , 1 ͒. With changing ␣, 1 is plotted as a function of 1 ͓the middle curve in Fig. 3͑a͔͒. 
In the same way, 2 is plotted as a function of 2 ͓the top curve in Fig. 3͑a͔͒. In homogeneous mode, each H ͑or each N H = H L 2 ͒ corresponds to one H , then we can plot H as a function of H ͓the bottom curve in Fig. 3͑a͔͒. For the sake of getting ␣ value corresponding to a given 1 ͑ 2 ͒, we plot i versus ␣, i =1,2 in Fig. 3͑b͒. The initial infection density is 10%, so the epidemic will be persistent with time, only when the infection density is larger than 0.1 after a long spreading. Therefore, we make the epidemic spread in the square for a long time and then discuss the situation when H,1,2 Ն 0.1 below. As is shown in Fig. 3͑a͒, we draw the following conclusions by contrasting i , i =0,1,2 with H = 1 = 2 . When H,1,2 ͓0.1, 0.7͔, 1 and 2 are larger than H significantly, which means that the inhomogeneity exists. Concretely, when H,1,2 is near 0.2, 1 and 2 are up to 2-3 times H , which is a significant difference. With increasing H,1,2 , the multiple becomes less although the difference between H and 1 ͑ 2 ͒ has little change. Then we can say that the smaller the infection density H,1,2 , the larger the inhomogeneity of the epidemic spreading. In these cases, when H = 1 = 2 , H can reach two or more times 1 and 2 ͓see the line of Fig. 3͑a͔͒, that is, the number of infected individuals of homogeneous mode is two or more times that of mode 1 and mode 2. It illustrates that, from another point of view, the epidemic spreading in mode 1 and mode 2 is inhomogeneous. When H,1,2 is about 0.8, H and 1 ͑ 2 ͒ are similar, that is, infected individuals are distributed homogeneously in mode 1 and mode 2. However, the ␣ values of mode 1 and mode 2 reach 0.5 and 0.8, respectively ͓see Fig. 3͑b͔͒. They are so large that we do not discuss this case. In fact, the epidemic always spreads near the infected individuals, that is, the infected individuals only infect their susceptible neighbors, and thus it is a natural thing for epidemic to spread inhomogeneously when the infection density is not so large. Figure 3͑a͒ also shows that when 1 = 2 and H,1,2 ͓0.1, 0.7͔, 2 Ͼ 1 . It means that the epidemic spreading in mode 2 is more inhomogeneous than in mode 1, namely, the infected individuals are much easier to gather into large clus-ters in mode 2 than in mode 1. In addition, when the values of ␣ are the same, 2 Ͼ 1 , as can be seen from Fig. 3͑b͒. It also demonstrates the same conclusion. The reason is that the probability that a susceptible individual is infected increases with the number of its infected neighbors in mode 2, i.e., the susceptible individuals are more easily infected in mode 2 than in mode 1. IV. EPIDEMIC SPREADING IN DYNAMIC NETWORKS In this section, we discuss the inhomogeneity of the epidemic spreading while the individuals walk randomly in the square ͑v Ͼ 0͒, and let v ͓0.2, 10͔. As has been argued, L is set to 7, the initial proportions of the infected individuals in mode 1 and mode 2 are 10%. In Fig. 4, we plot the evolution of the infection density 1 and 2 in mode 1 and mode 2 in the case that v ͓0.2, 10͔, respectively. As shown in Fig. 4, after the initial transient process, 1 ͑ 2 ͒ fluctuates narrowly around a value, which means that the SIS model reaches the steady state. Figure 5 shows the evolution of 1 ͑t͒ and 2 ͑t͒ in the case that v ͓0.2, 10͔ and 1 = 2 = 0.4. For each value of v, we can plot 1 ͑ 2 ͒ as a function of 1 ͑ 2 ͒ by changing the ␣ value. Curves of i versus i , i =1,2 at v = 0.2 and the curve of H versus H ͓same with that in Fig. 3͑a͔͒ are given in Fig. 6͑a͒. 
It shows that when 1 ͑ 2 ͒ ͓0.1, 0.7͔, the epidemic spreading is inhomogeneous; when 1 ͑ 2 ͒ is about 0.8, the infected individuals are distributed homogeneously. Clearly, this conclusion is consistent with that of static networks, which are drawn in Fig. 3͑a͒. The correspondences between 1 ͑ 2 ͒ and ␣ for different v ͓0,10͔ are given in Fig. 6͑b͒. Interestingly, the different v ͑including v =0͒ corresponds to the similar 2 for a given ␣, as shown in Fig. 6͑b͒. This means that the moving of individuals cannot affect the proportion of the infected individuals in mode 2. This conclusion can be obtained by letting the delay of the model = 0 in Ref. 18. In order to further study the impact of speed v on the inhomogeneity of the epidemic spreading, we will investigate the variation of 1 ͑ 2 ͒ with increasing of v for different 1 ͑ 2 ͒. As the epidemic spreading emerges the inhomogeneity when 1 ͑ 2 ͒ ͓0.1, 0.7͔, 1 ͑ 2 ͒ are set to 0.2, 0.4, and 0.6 in the following discussion. At first, we plot 1 as a function of 1 by changing the value of ␣ for each v. Then a set of functions of 1 and 1 is obtained by changing the v value. Corresponding to 1 = 0.2, a set of ͑v , 1 ͒ can be gotten. Then 1 is plotted as a function of v when 1 = 0. bottom subplot of Fig. 7͑a͒. For the sake of comparison, we plot two straight lines in the same subplot, which respectively correspond to H ͑the bottom dashed line͒ and 1 ͑the top one͒ for 1 = 0.2 in Fig. 3͑a͒. Similarly, the curves and straight lines corresponding to 1 = 0.4, 0.6 are also plotted in Fig. 7͑a͒ ͑the middle subplot for 1 = 0.4, the top subplot for 1 = 0.6͒. In the same way, we plot 2 as a function of v for 2 = 0.2, 0.4, 0.6, respectively, and the corresponding straight lines in Fig. 7͑b͒. As can be seen in Fig. 7, when v ͓0.2, 2͔, 1 ͑ 2 ͒ is apparently larger than H for each 1 ͑ 2 ͒, which means that the epidemic spreading is inhomogeneous. Concretely, when v = 0.2, 1 ͑ 2 ͒ is close to that of the static network ͑v =0͒. With increasing of v, 1 ͑ 2 ͒ decreases evidently and moves toward H ͑homogeneous mode͒ progressively. This means that the inhomogeneity of the epidemic spreading becomes smaller with increasing speed v. Generally speaking, it is easier to maintain a structure in the static environment. Interestingly, our simulation results indicate that when the individuals walk randomly and the speed is not very high, their distribution is inhomogeneous. That is, the inhomogeneity of the epidemic spreading is kept in the dynamic environment. When v Ͼ 2, 1 ͑ 2 ͒ is always near H , which means that the distribution of the infected individuals is approaching to that in homogeneous mode. This is because when v is large enough, any infected individual can easily jump out the area that is covered by its infected neighbors. Therefore, the large infected clusters cannot form. As also shown in Fig. 7, the infection density affects the difference between the maximum and minimum of 1 ͑ 2 ͒. When 1 = 0.2 ͑ 2 = 0.2͒, the maximum of 1 ͑ 2 ͒ is 1.54 ͑2.26͒ times the minimum, while when 1 = 0.6 ͑ 2 = 0.6͒, the maximum of 1 ͑ 2 ͒ is 1.16 ͑1.34͒ times the minimum. Then we can say that the smaller the infection density 1 ͑ 2 ͒, the stronger v affects the inhomogeneity of the epidemic spreading. Besides, comparing Fig. 7͑a͒ and Fig. 7͑b͒ represents that when 1 = 2 and v ͓0.2, 2͔ are the same, 2 are always larger than 1 . This means that the inhomogeneity is more obvious in mode 2 than in mode 1 at the same speed v, which is in accord with the case of static networks. V. 
V. CONCLUSIONS

In this paper, the inhomogeneity of epidemic spreading in two spreading modes of the SIS model is investigated. Simulations in static and dynamic networks show that infected individuals tend to gather into large clusters because infected individuals always infect their neighbors; for this reason, the epidemic usually spreads inhomogeneously. Even in dynamic networks, the inhomogeneity is well preserved, and the smaller the infection density, the more inhomogeneously the epidemic spreads. However, the inhomogeneity decreases as the individuals' speed increases in dynamic networks, and the epidemic spreading becomes almost homogeneous when the speed is large enough.
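The conclusions rest on comparing how strongly the infected individuals cluster in space with what a homogeneous placement would give. The paper's own inhomogeneity measure is not restated in this excerpt, so the sketch below uses one simple proxy, the average number of infected individuals found within the interaction radius of an infected individual, compared against the value expected for a uniform distribution; it only illustrates the kind of comparison being made and is not the authors' definition:

```python
import numpy as np

def local_vs_uniform(pos, infected, L, r_int):
    """Average number of infected neighbours around an infected individual,
    compared with the expectation for uniformly scattered infected individuals."""
    inf_pos = pos[infected]
    n_inf = len(inf_pos)
    if n_inf < 2:
        return 0.0, 0.0

    diff = inf_pos[:, None, :] - inf_pos[None, :, :]
    diff = (diff + L / 2.0) % L - L / 2.0      # shortest displacement on the torus
    dist = np.hypot(diff[..., 0], diff[..., 1])
    np.fill_diagonal(dist, np.inf)
    observed = (dist <= r_int).sum(axis=1).mean()

    expected = (n_inf - 1) * np.pi * r_int**2 / L**2   # uniform-placement expectation
    return observed, expected


# Usage with the arrays from the previous sketch:
# obs, exp = local_vs_uniform(pos, infected, L, R_INT)
# A ratio obs/exp well above 1 indicates clustering of the infected individuals.
```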
Preparation and Properties of Chitosan/Graphene Modified Bamboo Fiber Fabrics

Chitosan (CS) and graphene (Gr) were used to modify bamboo fiber fabrics in order to develop new bamboo fiber fabrics (CGBFs) with antimicrobial properties. The CGBFs were prepared by chemical crosslinking, with CS as binder assistant and Gr as functional finishing agent. A method of firmly attaching the CS/Gr to bamboo fiber fabrics was explored. Keeping the amount of CS constant, the best impregnation modification scheme was determined by varying the amount of Gr and evaluating the properties of the CS/Gr modified bamboo fiber fabrics. The results showed that the antibacterial rate of the CGBFs with 0.3 wt% Gr was more than 99%; compared with the control sample, the maximum tensile strength of the CGBF increased by 1% in the longitudinal direction and 7.8% in the weft direction, and the elongation at break increased by 2.2% in the longitudinal direction and 57.3% in the weft direction. After 20 washes with WOB (without optical brightener) detergent solution, the antimicrobial rate remained above 70%. Therefore, these newly developed CS/Gr modified bamboo fiber fabrics hold great promise for antibacterial applications in home decoration and clothing textiles.

Introduction
Chitin deacetylates to form chitosan, which has good biocompatibility, degradability, and bactericidal and bacteriostatic effects in different environments [1]. Graphene, a two-dimensional carbon nanomaterial, has attracted much attention due to its excellent mechanical, thermal, optical, and electrical properties. Graphene and its composites can also form hydrogen bonds with biological molecules on the cell wall through oxygen-containing groups such as the carboxyl and hydroxyl groups on the graphene lamellae, which can isolate the cytoplasm of bacteria and eventually cause the bacteria to lose nutrients and die [2]. Zhao et al. [3] prepared graphene oxide-based antibacterial cotton fabric by direct adsorption, radiation crosslinking, and chemical crosslinking. Abate et al. [4] modified polyester fabric with chitosan to optimize its antimicrobial and hydrophobic properties. The chitosan/graphene composite film prepared by Xie et al. [5] has good mechanical and vapor transport properties. Bamboo fibers are made from bamboo pulp and viscose fibers by wet spinning. Unlike ordinary viscose fibers, bamboo fibers have no obvious skin-core structure. Scanning electron microscopy showed that bamboo fibers have varying cross-sectional sizes, an unevenly distributed microporous structure, and a large number of grooves and hollow channels, which give bamboo fibers good hygroscopicity, air permeability, and a comfortable handle [6], and provide a suitable matrix.

Preparation and Characterization of CGBFs
The test temperature was controlled at about 26 °C. Six pieces of bamboo fiber fabric with sizes of 100 mm × 100 mm (weft direction × warp direction) were selected, flattened, and trimmed of wool edges so as to make the surface clean and smooth. They were dried in a 60 °C oven until completely dry. Then, 1 g of chitosan was dissolved in 100 mL of acetic acid solution with 1% mass concentration, stirred for 30 min, and then stirred for 3 h with the APTES crosslinking agent; the mass fraction of APTES was 2% relative to the CS solution. Gr at mass concentrations of 0.1, 0.2, 0.3, 0.4, and 0.5% was then added to the crosslinked chitosan solution. For convenience, the different ratios of CS to Gr were given different sample codes, as shown in Table 1.
Table 1. Sample codes of different ratios of chitosan (CS) to graphene (Gr).

The CS/Gr mixed solutions were obtained by ultrasonicating for 30 min, and the sample codes were assigned as follows: 0.1 wt% CS and 0.1 wt% Gr (C1G1); 0.1 wt% CS and 0.2 wt% Gr (C1G2); 0.1 wt% CS and 0.3 wt% Gr (C1G3); 0.1 wt% CS and 0.4 wt% Gr (C1G4); and 0.1 wt% CS and 0.5 wt% Gr (C1G5). Bamboo fiber fabrics were soaked in the respective CS/Gr mixed solutions for 45 min, then rinsed and dried for further tests.

Antibacterial Rate Test
According to the national standard GB/T 20944.3-2008, the antimicrobial activity of the modified bamboo fiber fabrics was evaluated using E. coli. The CGBFs were tested for their antimicrobial activity. The colony count method was used to calculate the antimicrobial activity according to Formula (1), and the average antimicrobial activity R was obtained from three counts:

R = (T0 − T1)/T0 × 100%  (1)

where T0 is the number of bacteria on the plate of the blank sample and T1 is the number of bacteria on the plate of the tested sample.

Observation of Micromorphology
At general room temperature, the composite fabrics of the control sample and the modified samples were fixed on a metal bracket with conductive adhesive, sputter-coated with platinum and gold after drying, and observed by environmental scanning electron microscopy (ESEM, FEI Company, New York, NY, USA).

Fourier Transform Infrared Spectrometer (FTIR) Test
The surface functional groups of the composite fabrics of the control and modified samples were measured with an FT-IR spectrometer (Avance 300, Bruker Company, Berlin, Germany). The fabrics were completely dried and placed flat on top of the carrier sheet for infrared scanning. The transmission mode of the infrared microscope was selected; the number of scans was 200, the spectral resolution was 1.5 cm−1, and the acquisition range was 4000-650 cm−1.

Mechanical Properties Test
According to ISO 13934-1 "Testing of Tensile Strength of Fabrics," the fabric strength was tested with a YG (B) 026D-250 strength tester (Aoran Tech. Co. Ltd., Shanghai, China). The finished fabrics were cut to specimens of 210 cm × 297 cm. The gauge length was 100 mm and the drawing speed was 100 mm/min. Five repeats were tested for each sample and the average values are shown in Table 2. The breaking strength and elongation at break were taken as the indexes for evaluating fabric strength.

Washing Fastness Test
The washing fastness of the impregnated fabrics was tested according to GB/T 12490-2014 "Textile Color Fastness Test for Family and Commercial Washing Fastness." After washing and drying, the L, a, and b values of each washed fabric were measured with a CM-2500d color meter (Konica Minolta, Tokyo, Japan).

Washing Resistance and Antimicrobial Test
According to the national standard GB/T 8629-2017 (household washing and drying procedures for textile testing), the fabrics were washed and dried several times. According to the national standard GB/T 20944.3-2008, the antimicrobial activity of the modified bamboo fiber was evaluated using E. coli. The antimicrobial rate was obtained by counting the number of bacteria and taking the average value, as mentioned in Section 2.1.2.

Viscosity Test
A DV-S rotary viscometer (Nanjing Zijin Metrology Co., Ltd., Nanjing, China) was used to measure the viscosity of the impregnating solution.
Pour the impregnating solution into the test container and insert the cylinder into the impregnating solution until the solution completely covers the top rotor of the cylinder. Adjust the speed, turn on the motor, and read the value after the data stabilize. Measure three times continuously; the difference between each measured value and the average should not exceed ±3% of the calculated average. Otherwise, a fourth measurement should be made, and the average value is then taken.

Principle of Preparation of CS/Gr Solution
Amino groups on chitosan can form amide (-NHCO-) bonds with carboxyl groups on graphene, as shown in Figure 1. The amino group of chitosan is at the second position of the molecular structure and its reactivity is weak. Therefore, adding the APTES crosslinking agent to strengthen the amino group makes it easy to form composite materials closely linked with graphene. The modification mechanism is shown in Figure 2 [8].

Effect of Graphene Addition on Antibacterial Property of Impregnated Fabrics
Under normal conditions, chitosan is insoluble in water but soluble in acidic media. In an acidic medium, the amino group of the chitosan molecule is protonated to form -NH3+, which results in acid dissolution [9]. As shown in Figure 3, the chitosan molecule carries amino groups, and some studies have shown that the amino group is the main driving force of its antimicrobial properties [10].
As shown in Figure 4, the number of Escherichia coli colonies on the plates of the modified bamboo fiber fabrics dropped significantly, which means that the modified bamboo fiber fabrics have a better antimicrobial effect. The sharp edges of graphene sheets cause physical damage to bacterial cell membranes, which results in the outflow of intracellular substances and the death of the bacteria [11].

The antibacterial rate of the chitosan composite fabrics is shown in Figure 5. When the concentration of chitosan is 0.3%, the antibacterial rate reaches over 98% and tends toward 100%. The antibacterial rate of the chitosan/graphene modified fabric is shown in Figure 6. When 0.1% graphene is combined with 0.1% chitosan, the antibacterial rate reaches 94%, while the antibacterial rate of the 0.1% chitosan fabric alone is less than 75%. It can be seen that the addition of graphene improves the antibacterial property of the fabric.

In addition, when the impregnating solution, whose pH value is about 3.7, is adjusted to neutral, the color of the fabric begins to yellow and the antimicrobial property decreases. This is because the antimicrobial property of chitosan is mainly based on the protonated amino group; in the neutral environment obtained after neutralizing the acetic acid, the protonated amino group no longer exists. However, the fabric treated with the chitosan/graphene solution retains a strong antimicrobial property and has less environmental impact, so the addition of graphene makes up for the defect that chitosan has poor antimicrobial properties under neutral and alkaline conditions.
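The antibacterial rates quoted above come from plate counts evaluated with Formula (1), R = (T0 − T1)/T0 × 100%. A minimal sketch of that calculation, assuming triplicate counts for the blank and treated samples (the replicate structure and variable names are illustrative, not taken from the paper):

```python
def antibacterial_rate(blank_counts, treated_counts):
    """Average antibacterial rate R (%) from Formula (1): R = (T0 - T1) / T0 * 100.

    blank_counts   -- colony counts T0 on the blank-sample plates (one per replicate)
    treated_counts -- colony counts T1 on the treated-sample plates (one per replicate)
    """
    rates = [
        (t0 - t1) / t0 * 100.0
        for t0, t1 in zip(blank_counts, treated_counts)
    ]
    return sum(rates) / len(rates)  # average over the replicates


# Illustrative numbers only: three plate counts each for blank and treated fabric.
print(antibacterial_rate([312, 298, 305], [4, 6, 3]))  # roughly 98.6 %
```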
Morphology Characterization of Impregnated Fabrics with Different Graphene Additions
Figure 7 shows ESEM images of the CS/Gr modified bamboo fiber fabrics (CGBFs). With the increase of graphene content, the surface adhesion on the bamboo fiber fabrics increased significantly. Under ESEM, the surface of the untreated fabric was smooth, free of impurities, and showed the characteristic tubular texture. By comparison, when the graphene content reaches 0.4 wt%, the surface of the treated bamboo fiber fabric is covered by the graphene finishing agent, which masks the original characteristic stripes on the fiber surface. When the graphene content is 0.3 wt% or lower, graphene is loaded on single fibers and does not bridge between fibers, so adding a small amount of graphene will not block the fabric, maintains its air permeability, and improves wearing comfort [12]. At the same time, considerable irregular accumulation can be found on the surface of the bamboo fibers covered by the finishing agent, which corresponds to the graphene microsheets dispersed on the fabric. With the increase of graphene concentration, the dispersion of CS/Gr on some parts of the bamboo fibers becomes macroscopically uneven.
Figure 8 shows the infrared spectra of the CS/Gr modified fabrics. The broad absorption peaks at 3500-3300 cm−1 are due to the stretching of the -OH group; these characteristic hydroxyl absorption peaks indicate that all samples contain cellulose. After graphene is added, the CS/Gr spectra show peaks at 1364 cm−1 and 1017 cm−1, corresponding to the C-O and C-O-C stretching of graphene, respectively. The peaks at 3330 cm−1 and 1017 cm−1 are pronounced because the O-H and C-O stretching peaks of graphene overlap with the stretching peaks of the fabric cellulose. The absorption band near 1636 cm−1 is the in-plane stretching vibration peak of the amide -NH, which indicates that the amino group of chitosan covalently binds to the carboxyl group of graphene and proves that the crosslinking of chitosan and graphene has been completed. Compared with the CS/Gr composite fabrics, the -OH stretching vibration peak at 3330 cm−1 of the crosslinked fabrics shifted to a lower wavenumber (red shift) and its width narrowed, indicating that crosslinking enhanced the interaction between the modifiers and the fabrics [13].

Effect of Graphene Addition on Mechanical Properties of Impregnated Fabrics
The mechanical properties of the modified fabrics change with increasing graphene content, as shown in Table 2. Graphene is a rigid material with high strength, so the tensile strength increases after graphene is added. When the amount of chitosan is 0.1%, the maximum tensile strength decreases by 4.2% in the warp direction and 4.7% in the weft direction, while the elongation at break is unchanged in the longitudinal direction and decreases by 7.8% in the weft direction. This shows that after chitosan finishing, the fabric becomes brittle and hard and its tensile resilience decreases [14,15], so the breaking strength and elongation at break decrease accordingly. After graphene is added, the maximum tensile strength increases by 1% in the longitudinal direction and 7.8% in the weft direction, and the elongation at break increases by 2.2% in the longitudinal direction and 57.3% in the weft direction, indicating that the mechanical properties of the fabric are improved by adding graphene.

Table 2. Mechanical properties of the control sample, CS and CS/Gr modified fabrics.

Effect of Graphene Addition on Washing Resistance and Antibacterial Activity of Impregnated Modified Fabrics
Washability is an important index for evaluating fabric properties. The antibacterial rate of fabrics impregnated with 0.1% chitosan solution decreased by 32% after five washes; the reason is that the acetic acid is easily removed by washing.
The fabrics with different graphene contents were washed 20 times, and the antimicrobial rate was measured every 5 washes; the results are recorded in Figure 9. The sterilization efficiency of these five fabrics decreased slowly with the number of washes. There is hydrogen bonding between chitosan and cellulose, whereas the crosslinking between graphene and cellulose fibers is less stable; after crosslinking with chitosan, graphene and chitosan form an enclosed structure [16]. At the same time, it was found that when the graphene content was 0.3, 0.4, and 0.5 wt%, the decline began to level off after 10 washes. Perhaps with the increase of graphene content, the self-polymerization of the long cellulose molecular chains decreased relatively, and the contact area between graphene and the cellulose fibers was not markedly reduced, so the washing durability increased [17].
Color Fastness
The test chromaticity indices are L, a, and b; the color characteristics of each sample are the average values of five test points. As can be seen from Figure 10a, graphene has a great influence on fabric coloration. With increasing graphene content, the L-value decreases gradually, which indicates that the larger the graphene content, the lower the lightness of the fabric and the darker the color after impregnation. At the same time, with increasing washing times, the change in the lightness index gradually flattens, which indicates that the amount of impregnating solution the surface can adsorb is limited and some of it is washed off; this corresponds to the results of the washability and antimicrobial tests. The more graphene is added, the greater the color difference before and after washing. Deeper shades have lower fastness to washing and wet rubbing, because the dye molecules are more saturated in deeper shades and are more easily removed from the inside of the fibers during washing [18]. Figure 10b shows that with increasing graphene content, the red-green chromaticity index a-value gradually decreases while remaining positive, and with increasing washing times the a-value shows a downtrend, indicating that the fabric shifts toward green after washing. At the same time, as shown in Figure 10c, the yellow-blue chromaticity index b-value gradually increases, indicating that the fabric shifts toward blue after washing, which makes the color of the fabric lighter [19].
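The washing-induced color change discussed above is reported through the individual L, a, and b channels; a common single-number summary of such a change is the CIE76 color difference ΔE*ab between the readings before and after washing. The paper does not state that this particular metric was used, so the following is only an illustrative sketch with invented values:

```python
import math

def delta_e_cie76(lab_before, lab_after):
    """CIE76 color difference between two (L*, a*, b*) readings."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab_before, lab_after)))


# Illustrative values only (not measured data): one fabric reading before and
# after 20 washes; a larger delta E means a more visible color change.
before = (42.1, 1.8, 9.5)
after = (47.3, 1.2, 10.9)
print(round(delta_e_cie76(before, after), 2))
```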
Effect of Graphene Addition on Viscosity of Impregnating Solution
Under different control variables, the results of the viscosity test are shown in Figure 11. With increasing graphene content, the viscosity of the impregnating solution fluctuates slightly, but the overall trend is a gradual rise. This shows that graphene can increase the viscosity of the impregnating liquid system when the other conditions remain unchanged. The size distribution of the graphene particles used in this experiment is wide; all particles are very small and most reach the micron level [8]. This indicates that although the size of the graphene powder decreases after grinding, agglomeration still occurs. At the same time, the chitosan solution itself has a certain viscosity. The viscosity of the impregnating solution increases slightly after graphene is added. The reason may be that with the increase of the graphene mass fraction, the distance between graphene lamellae decreases and the probability of contact increases, so agglomeration easily occurs, which leads to an increase in the graphene sheet diameter. The viscosity of the impregnating solution therefore increases with the graphene mass fraction [20].

Conclusions
Bamboo fiber fabrics were impregnated with a chitosan/graphene solution, which gave the obtained fabrics excellent antimicrobial and mechanical properties. When the mass ratio of graphene was 0.2%, the antimicrobial rate of the fabrics reached more than 98%.
ESEM images showed that the impregnating modifier (CS/Gr) had been grafted onto the fabric. When the mass ratio of graphene was below 0.3%, no bridges formed between the fibers, which ensured good air permeability of the fabric. FTIR characterization also confirmed that the amino group of chitosan combined with the carboxyl group of graphene to produce an amide group, proving that the CS/Gr and the fabric were successfully crosslinked. The addition of chitosan made the fabric brittle and weakened its mechanical properties, whereas the tensile strength and elongation at break increased after graphene was added, improving the mechanical properties of the fabric. The antibacterial rate of the fabrics with a composite mass ratio of 0.1 wt% graphene remains above 70% after 20 washes, while that of the fabrics treated with 0.1% chitosan decreases by 32% after 5 washes owing to the lack of strong bonding between chitosan and the fabric and the removal of acetic acid by washing. At the same time, the color of the modified fabric is only slightly affected by repeated washing; after washing, the fabric shifts toward green and blue. To sum up, the CS/Gr modified fabric is a low-cost, highly efficient, green, and environmentally friendly textile.
Genome-Wide Association Study to Identify Marker–Trait Associations for Seed Color in Colored Wheat (Triticum aestivum L.) This study conducted phenotypic evaluations on a wheat F3 population derived from 155 F2 plants. Traits related to seed color, including chlorophyll a, chlorophyll b, carotenoid, anthocyanin, L*, a*, and b*, were assessed, revealing highly significant correlations among various traits. Genotyping using 81,587 SNP markers resulted in 3969 high-quality markers, revealing a genome-wide distribution with varying densities across chromosomes. A genome-wide association study using fixed and random model circulating probability unification (FarmCPU) and Bayesian-information and linkage-disequilibrium iteratively nested keyway (BLINK) identified 11 significant marker–trait associations (MTAs) associated with L*, a*, and b*, and chromosomal distribution patterns revealed predominant locations on chromosomes 2A, 2B, and 4B. A comprehensive annotation uncovered 69 genes within the genomic vicinity of each MTA, providing potential functional insights. Gene expression analysis during seed development identified greater than 2-fold increases or decreases in expression in colored wheat for 16 of 69 genes. Among these, eight genes, including transcription factors and genes related to flavonoid and ubiquitination pathways, exhibited distinct expression patterns during seed development, providing further approaches for exploring seed coloration. This comprehensive exploration expands our understanding of the genetic basis of seed color and paves the way for informed discussions on the molecular intricacies contributing to this phenotypic trait. Introduction Since its domestication approximately 10,000 years ago, wheat (Triticum aestivum L.) has become a cornerstone of global food security, contributing significantly to meeting the dietary needs of the global population.Its widespread cultivation and consumption have established wheat as a primary source of calories and protein, providing sustenance for a substantial portion of the global population [1].This unique variation in wheat both adds to its nutritional profile and holds promise for enhancing the overall dietary diversity and health benefits available to consumers [2].Wheat provides important nutrients and compounds such as anthocyanins, carotenes, and phenolic acids, which have strong antioxidant effects [3].Colored wheat, with its anthocyanin content, has a powerful ability to combat chronic diseases such as obesity, cancer, and cardiovascular issues, and it can even slow aging [4].In contrast to common wheat, the red color of which arises from carotenoids and catechol in the outer layer, the color of colored wheat is mainly attributable to anthocyanins.Colored wheat also contains many tocopherols, phenolic acids, and essential trace elements needed for the human body [5,6]. 
The transformative impact of single nucleotide polymorphism genotyping arrays (SNP arrays) extends beyond their pivotal role in exploring genetic variations in both animal and plant populations [7,8].By facilitating the identification and analysis of hundreds of thousands of SNPs in a single assay, these arrays serve as a robust platform for unveiling genome-wide sequence variability among individuals and populations [9].SNP arrays provide a high-throughput and cost-effective method for analyzing genetic diversity, and they have been extensively employed in constructing genetic linkage maps, exploring evolutionary relationships, unraveling functional genomics, and supporting conservation efforts.Genotyping arrays have played, and continue to play, a critical role in the genotyping of various crop species.Consequently, the common study of SNPs often identifies loci that are blocks of correlated SNPs associated with the trait of interest [10]. In recent decades, high-density SNP genotyping arrays such as Illumina Wheat 9K, 90K, 15K, Axiom ® Wheat 660K, Wheat 55K, Axiom ® HD Wheat (820K), Wheat Breeders' 35K Axiom, and Wheat 50K Triticum TraitBreed arrays have been developed for marker-assisted breeding in common wheat [11][12][13][14][15][16].This technology facilitates the rapid genotyping of wheat varieties, precise identification of genetic variants linked to crucial traits, and marker development for easy integration into breeding programs.High-density genotyping arrays significantly increase researchers' ability to study many wheat samples, making it easier to identify genetic variations and advanced wheat breeding techniques. In this study, we used the F 3 population of both colored and noncolored wheat lines to identify loci associated with seed color using the comprehensive Illumina Wheat 90 K SNP array.In addition, we explored the mechanisms governing changes in seed color through a comparative analysis of RNA sequences during the seed developmental stages of colored and noncolored wheat.By integrating the results of genome-wide association studies (GWASs) and RNA sequencing (RNA-Seq), we unraveled the changes in expression in differentially expressed genes (DEGs) located near quantitative trait loci that regulate seed color.This collaborative approach sought to enhance our understanding of the complex mechanisms governing seed coloration in wheat, with GWASs providing valuable insights into genetic associations, complemented by a detailed exploration of gene expression patterns through RNA-Seq.In addition, the findings from this study offer novel insights into potential candidate genes influencing wheat seed coloration, particularly during the critical seed filling and maturity stages. Phenotypic Evaluations Images of the F 3 seeds are presented in Figure 1.Of the initial 214 individuals in the F 3 segregated population, some seeds, including damaged or broken ones, were excluded, resulting in 155 F 3 plants available for this study.This subset of 155 F 3 plants was evaluated for traits related to seed color, encompassing chlorophyll a, chlorophyll b, carotenoid, anthocyanin, L*, a*, and b*.The distribution of the results from the phenotype evaluation is depicted in Figure 2A-G, and essential summary statistics, including range, mean, and coefficient of variation, are presented in Table S2. 
Pearson's correlation coefficients (r) estimated between the traits in the F3 population are presented in Figure S1. The associations were positive and highly significant (all p < 0.01) among carotenoid, chlorophyll a, and anthocyanin; L* and carotenoid; a* and L*; b* and carotenoid; and L* and a*. By contrast, strong negative correlations were detected between carotenoid and chlorophyll b, L* and anthocyanin, and b* and anthocyanin (all p < 0.01) in the F3 population (Figure S1).

Evaluation of Marker Distribution, Population Structure, and Linkage-Disequilibrium (LD) Decay
Of the 81,587 SNP markers initially present on the wheat 90K iSelect array for genotyping, 3969 high-quality SNP markers remained after eliminating those with minor allele frequencies <0.05 and missing data >10%. The selected SNP markers exhibited a genome-wide distribution, with the highest number on the A subgenome (2500), followed by the B (1249) and D (218) subgenomes. An analysis of their chromosome-wide distributions revealed the highest marker density on chromosome 2A (653), followed by chromosomes 1A (591) and 2B (315). Conversely, chromosomes 5D (11) and 7D (20) contained the fewest markers (Table S3).

The population structure of the 155 wheat genotypes was examined using the ∆K method and validated using principal component analysis (PCA). The ∆K method and the PCA-based population structure analysis identified three distinct groups in the GWAS panel (Figure 3A,B). LD decay was estimated by calculating r² for all 3969 markers. Genome-wide LD decayed with genetic distance, and LD decayed by 50% at 134 Mb for the entire genome (Figure 3C).
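The marker-filtering step described above (minor allele frequency <0.05, missing data >10%) can be sketched in a few lines. The genotype encoding and matrix layout below are assumptions for illustration, not the array-processing pipeline actually used:

```python
import numpy as np

def filter_markers(genotypes, maf_min=0.05, max_missing=0.10):
    """Keep markers passing MAF and missing-data thresholds.

    genotypes -- array of shape (n_samples, n_markers) with allele dosages
                 0, 1, 2 and np.nan for missing calls (an assumed encoding).
    Returns a boolean mask over markers.
    """
    missing_rate = np.mean(np.isnan(genotypes), axis=0)
    # Alternate-allele frequency, ignoring missing calls.
    p = np.nanmean(genotypes, axis=0) / 2.0
    maf = np.minimum(p, 1.0 - p)
    return (maf >= maf_min) & (missing_rate <= max_missing)


# Toy example: 4 samples x 3 markers. The second marker is monomorphic (fails MAF)
# and the third has 25% missing calls (fails the missing-data threshold).
g = np.array([[0, 0, 2],
              [1, 0, 2],
              [2, 0, np.nan],
              [1, 0, 1]], dtype=float)
print(filter_markers(g))  # [ True False False]
```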
GWASs
Significant MTAs for the seven phenotypic traits were identified by scrutinizing Q-Q and Manhattan plots in GWASs using FarmCPU and BLINK (Figure 4A-E). The application of a stringent threshold (−log10(P) > 5) served as a robust criterion for designating MTAs as significant in the GWAS. The analysis revealed eleven MTAs, including three from FarmCPU and eight from BLINK (Table 1). All 11 MTAs originated from BLINK (L*, a*, and b*) and FarmCPU (L* and a*). Notably, some MTAs were detected by multiple methods, such as BS00067992_51 (detected in FarmCPU L* and BLINK), Ra_c13247_528 (detected in BLINK L* and a*), and RAC875_rep_c105150_1024 (duplicated in FarmCPU a* and BLINK a*). The phenotypic variation explained (PVE) by these SNPs ranged between 0.17% and 86.08%. In BLINK (a*), the SNP with the lowest PVE was Ra_c13247_528 (0.17%); interestingly, this SNP was also detected by BLINK (L*), albeit with a considerably higher PVE of 19.64%. In addition, the analysis of the chromosomal distribution of the MTAs revealed distinct patterns, with the majority located on chromosomes 2A, 2B, and 4B. Specifically, six MTAs were identified on chromosome 2A, whereas chromosomes 2B and 4B each harbored one MTA (Table 1).
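Applying the −log10(P) > 5 cut-off to a table of per-marker association results can be expressed directly; the column names and the example rows below are hypothetical stand-ins for the FarmCPU/BLINK output, and the p-values are invented for illustration:

```python
import numpy as np
import pandas as pd

# Hypothetical per-marker association results (illustrative values only).
results = pd.DataFrame({
    "snp":   ["BS00067992_51", "Ra_c13247_528", "marker_X"],
    "chrom": ["2A", "2A", "5B"],
    "trait": ["L*", "a*", "b*"],
    "p":     [2.1e-7, 4.8e-6, 3.2e-3],
})

results["neg_log10_p"] = -np.log10(results["p"])
significant = results[results["neg_log10_p"] > 5.0]   # the paper's threshold
print(significant[["snp", "chrom", "trait", "neg_log10_p"]])
```

With these illustrative numbers, the first two rows pass the threshold and the third does not, mirroring how the Manhattan-plot cut-off separates significant MTAs from the background.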
Gene Expression Analysis during Seed Development in the Vicinity of MTAs
To gain deeper insights into the genomic context of these MTAs, a comprehensive annotation was conducted using IWGSC Wheat RefSeq v1.1. This annotation effort uncovered a noteworthy discovery: 69 genes were identified within the genomic vicinity of the significant MTA loci (Table S4). These genes, positioned within a 250 kb radius of the MTAs, represent a rich source for further exploration and potential functional implications related to the observed phenotypic traits. Based on the RNA-Seq data, 16 of the 69 genes displayed a greater than 2-fold difference in gene expression between colored and noncolored wheat during the seed developmental stages (10 DAF, 20 DAF, and 30 DAF; Figure 6A,B). Two genes (TraesCS2A02G424200 and TraesCS2A02G424600) were found in close proximity to the MTA associated with L*, whereas five genes (TraesCS2A02G532800, TraesCS2A02G436300, TraesCS2A02G436800, TraesCS2A02G436200, and TraesCS2A02G435800) were located near the MTA linked to a*. In addition, two genes (TraesCS2A02G409400 and TraesCS2A02G409600) on chromosome 2A were near the MTA related to b*. All these genes were identified via BLINK analysis (Table 2). Two genes on chromosome 4B (TraesCS4B02G070800 and TraesCS4B02G071000) and four genes on chromosome 2A (TraesCS2A02G551200, TraesCS2A02G551900, TraesCS2A02G551700, and TraesCS2A02G552400) were found to be closely associated with L* and a*, as identified via FarmCPU analysis. The expression patterns of all 16 genes during the seed developmental stages are illustrated in Figure 6B. Among them, eight genes, categorized as transcription factors, flavonoid pathway-related genes, and ubiquitination pathway genes, were selected, and their expression patterns are depicted in Figure 6C. To assess the reliability of the RNA-Seq results, RT-qPCR was employed to validate the expression profiles of selected genes, including the anthocyanin regulatory R-S protein (MYC protein, TraesCS2A02G409600), a MYB transcription factor (TraesCS2A02G552400), a bHLH transcription factor (TraesCS2A02G409400), cinnamoyl-CoA reductase (CCR, TraesCS4B02G071000), cinnamyl alcohol dehydrogenase (CAD, TraesCS2A02G424600), and an F-box protein (TraesCS2A02G551700). The RT-qPCR results were consistent with the RNA-Seq findings, confirming the concordance between the two independent methods. These genes were specifically chosen from the MYB-bHLH-WD40 (MBW) complex, lignin pathway, and E3 ubiquitin ligase categories (Figure 6C) for comprehensive validation, and the congruence of the results further strengthens the robustness of our findings (Figure 7).
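The two selection steps described above, collecting annotated genes within ±250 kb of each MTA and then keeping those with a >2-fold expression difference between colored and noncolored seed, can be sketched as below. The data frames, column names, gene identifiers, positions, and expression values are hypothetical placeholders, not the study's actual annotation or expression tables:

```python
import pandas as pd

WINDOW = 250_000  # +/- 250 kb around each marker-trait association

# Hypothetical inputs (illustrative records only).
mtas = pd.DataFrame({
    "snp": ["mta_1"], "chrom": ["2A"], "pos": [601_500_000],
})
genes = pd.DataFrame({
    "gene_id": ["gene_A", "gene_B", "gene_C"],
    "chrom":   ["2A", "2A", "2A"],
    "start":   [601_300_000, 601_650_000, 605_000_000],
})
# Mean expression (e.g., TPM) in colored vs noncolored seed at one stage.
expr = pd.DataFrame({
    "gene_id":    ["gene_A", "gene_B", "gene_C"],
    "colored":    [35.0, 2.0, 8.0],
    "noncolored": [10.0, 9.0, 7.5],
})

# Step 1: genes whose start falls within the window around an MTA on the same chromosome.
hits = genes.merge(mtas, on="chrom")
hits = hits[(hits["start"] - hits["pos"]).abs() <= WINDOW]

# Step 2: keep genes with a >2-fold difference in either direction.
hits = hits.merge(expr, on="gene_id")
fold = hits["colored"] / hits["noncolored"]
candidates = hits[(fold > 2) | (fold < 0.5)]
print(candidates[["gene_id", "colored", "noncolored"]])
```

In this toy run, gene_A and gene_B fall inside the window and pass the fold-change screen, while gene_C is discarded at the distance step.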
Discussion In this study, we conducted a comprehensive examination of seven phenotypic traits within an F 3 population derived from both colored and noncolored wheat using GWASs.The variability in the range of each phenotypic dataset was notable, including coefficients of variation surpassing 50% for chlorophyll b, carotenoid, and anthocyanin contents (59.22%, 51.77%, and 54.84%, respectively).This substantial variation likely influenced the outcomes, as evidenced by all 11 significant MTAs being associated with the phenotypic traits L*, a*, and b*.Considering these results, the observed high interrelation among L*, a*, and b* represents a noteworthy observation.This robust correlation indicates potential associations among these phenotypic traits, suggesting the possibility of shared genetic or biochemical pathways influencing seed color.The CIELAB color space, comprising the L*, a*, and b* channels, captures distinct aspects of color perception, such as lightness, the greenmagenta spectrum, and the blue-yellow spectrum.These channels, reflecting specific color attributes, might hold associations with underlying biological factors [17].In particular, an increase in anthocyanin content was negatively correlated with L* and b*, suggesting that as anthocyanin levels rise, seed brightness decreases, manifesting in a blue-yellow spectrum shifting toward the blue end. The relationships among the genotypes were analyzed using two distinct methods as follows: subgrouping analysis based on population structure and PCA.Both analyses identified three consistent subgroups, affirming the reliability of the genotype analysis.LD decay over genetic or physical distance in a population influences the marker coverage density required for effective GWASs.More rapid LD decay implies the necessity of higher marker density to capture markers in close proximity to causal loci [17].In this study, LD decayed to half of its maximum value at 134 Mb across the entire genome.Wheat, being a self-pollinating species with an extremely large genome, exhibits a larger LD decay distance than other plants, including maize [18,19].Moreover, LD decay can vary among mapping populations of the same species, as observed in Chinese wheat landrace (5.98 Mb) and Mexican bread wheat (22.85 Mb) [20,21].These variations are likely attributable to differences in cultivation practices, breeding methods, breeding history, and evolutionary history [22].Additionally, the use of recombinant inbred lines (RILs) with distinct seed coat phenotypes, namely, noncolored wheat (yellow) and colored wheat (deep purple), in the development of the F 3 population could be one reason for the observed higher LD decay distance. 
In this study, BLINK and FarmCPU analyses identified eight MTAs associated with L*, a*, and b* traits.Furthermore, among the 69 genes near these eight MTAs, 16 exhibited significant expression patterns during seed developmental stages, and the corresponding expression patterns of these genes were also determined.Interestingly, the anthocyanin regulatory R-S protein (TraesCS2A02G409600), a MYC transcription factor with a basic helix-loop-helix motif, demonstrated continuous upregulation during seed development in colored wheat both in the results of RNA-Seq and RT-qPCR, underpinning its role as a key regulator of anthocyanin structural genes [23].Moreover, the MYB transcription factor (TraesCS2A02G552400) and bHLH transcription factor (TraesCS2A02G409400) were also highly expressed during seed developmental stages in colored wheat.MBW protein complexes, which comprise MYB, bHLH, and WD40 repeat factors, are recognized as transcriptional regulators governing the production of secondary metabolites, including proanthocyanidins and anthocyanins [24].These regulatory elements assemble into the ternary complex MBW, and this complex might utilize alternative MYB and bHLH components to regulate specific steps in the biosynthetic pathways of proanthocyanidins and anthocyanins [25,26]. Phenylpropanoid compounds, including flavonoids and lignin, consist of numerous secondary metabolites that are widely distributed in various tissues and organs of plants.The biosynthesis of lignin and flavonoids shares the early enzymatic steps of the phenylpropanoid pathway before diverging into the flavonoid and lignin pathways [27].Shi et al. (2022) reported the mechanism underlying the homeostatic regulation of flavonoid and lignin biosynthesis in the phenylpropanoid pathway of plants [28].In this study, CCR (TraesCS4B02G071000) and CAD (TraesCS2A02G424600), which are involved in specific steps of the monolignol pathway, were downregulated during seed developmental stages in colored wheat, as demonstrated by both RNA-Seq and RT-qPCR.Moreover, similar trends have been reported in Arabidopsis in which mutant lines deficient in CCR and CAD genes accumulate higher amounts of flavonol glycosides in the stem, indicating a redirection of the phenolic pathway [29]. 
The ubiquitin-proteasome system, which regulates selective protein degradation via the 26S proteasome, is a key mechanism for the post-translational regulation of gene expression and protein quality control in eukaryotes [30].This system plays a pivotal role in governing signal transduction, metabolic processes, differentiation, cell cycle transitions, and stress responses by orchestrating the degradation of specific proteins [31,32].Ubiquitin E3 ligases, which are conserved throughout eukaryotes, perform diverse regulatory functions by catalyzing the covalent attachment of ubiquitin to target proteins [33].The Arabidopsis genome encodes more than 1500 E3 ubiquitin ligase proteins, which are categorized into various families such as the HECT, RING1, Kelch-type, U-box, and Cullin-RING ligase (CRL) families.Among these, the F-box protein operates as a component of the SKP1-Cullin-F-box complex within the CRL family of E3 ubiquitin ligases [34][35][36][37][38][39].Three E3 ubiquitin-protein ligases (TraesCS4B02G070800, TraesCS2A02G551700, and TraesCS2A02G435800), including one RING E3 ubiquitin ligase and two F-box proteins, exhibited significant expression during seed developmental stages.In addition, validation via RT-qPCR analysis revealed that TraesCS2A02G551700 displayed increased expression in colored wheat during seed developmental stages.Although the specific roles of these E3 ligases in seed coloration remain elusive, further molecular investigations could reveal their functional associations with seed pigmentation.Subsequent research endeavors employing molecular biology approaches could help elucidate the intricate functions linking these E3 ligases to seed coloration. Plant Materials RILs with distinct seed coat phenotypes, namely, yellow (accession no.10DS1673) and deep purple (accession no.10DS1674) were obtained from Korea University Wheat Subgene Bank [40].Crossbreeding between yellow and deep purple wheat lines resulted in the generation of F 2 plants.F 3 seeds from each of the 155 F 2 plants were selected for use in this study, with three seeds selected from each plant.Seeds were germinated on moistened filter paper at room temperature for 24 h, followed by vernalization at 4 • C in a dark chamber for 4 weeks.Each seedling was then transferred to a Magenta box (6.5 × 6.5 × 20 cm 3 , Greenpia Technology Inc., Seoul, Republic of Korea) containing polypro mesh.Seedlings were grown in Magenta boxes filled with 180 mL of water for 14 days in the growth facility at 23 • C and a day/night photoperiod of 16 h/8 h. Anthocyanin and Chlorophyll Content Analysis For anthocyanin content, homogenized F 3 wheat seeds were mixed with 1 mL of methanol-hydrochloric acid (1% HCl, w/v) and incubated at 4 • C for 24 h.The absorbance was measured at 530 and 657 nm using a UV/VIS spectrophotometer (Jenway, Keison Products, Chelmsford, UK) as described previously [41].The anthocyanin content was determined using the formula Q = (A 530 − 0.25A 657 ) × M −1 (Q: anthocyanin yield; A 530 and A 657 : absorption at the indicated wavelengths; M: mass of the plant).The leaves of each F 3 plant were ground using liquid nitrogen, and 100 mg of the resulting powder was used for chlorophyll measurements.Chlorophyll content was determined following the method outlined by Hong et al. 
(2018) [41].To determine the chlorophyll and carotenoid levels, samples of homogenized 14-day-old wheat seedlings were suspended in 100% acetone at 4 • C in the dark [42].The homogenized samples were centrifuged at 12,000× g for 10 min, and the supernatant was used for pigment determination.The absorbance of the supernatant was recorded at 470, 644.8, and 661.6 nm using a UV/VIS spectrophotome-ter.The chlorophyll content was estimated using the extinction coefficients provided by Lichtenthaler (1987) [42]. Grain Color Determination The color of wheat grains was determined using the L*, a*, and b* color scale with a ColorMate spectrophotometer (SCINCO, Seoul, Republic of Korea).Before the color measurement, the instrument was calibrated with standard black and white tiles.Each seed sample was placed in a Petri dish prior to reading the color parameters.The color L*, a*, and b* values were monitored and measured using embedded software (ColorMaster software 2017) in the device with three technical replicates. Genotyping and SNP Calling For the genotyping assay, leaves were sampled from each F 3 population and stored at −80 • C until use.DNA was extracted from a single plant from each germplasm following the CTAB method outlined in the USDA instructor's manual [43].The extracted DNA was sent to the USDA-ARS Small Grain Genotyping Center in Fargo, ND (https://wheat.pw.usda.gov/GenotypingLabs/fargo;accessed on 7 March 2022) for processing using the Illumina iSelect 90K SNP Assay (Illumina, San Diego, CA, USA).SNP allele clustering and genotype calling were performed using GenomeStudio Module Polyploid Genotyping 2.0 software (https://support.illumina.com/downloads/genomestudio-2-0.html,accessed on 12 June 2023).Markers with minor allele frequencies <0.05 and missing data >10% were removed, resulting in 3969 high-quality SNPs for population structure and genome-wide association analyses.Following filtering, missing genotypes were imputed using BEAGLE v4.1 with the default settings [44]. Population Structure and LD The program STRUCTURE v2.3.4,a model-based Bayesian cluster analysis tool was employed to infer the population structure [45].The analysis involved 5000 burn-in periods followed by 50,000 Markov chain Monte Carlo iterations, ranging from 1 to 10 clusters (K), to identify the optimal K. Three independent runs were conducted for each K, and the most likely subgroups were determined by assessing the estimated likelihood values (∆K) using Structure Harvester [46].LD between marker loci on each chromosome was assessed with the squared allele frequency correlation (r 2 ) using standalone TASSEL v.5.0 [47] and visualized using R.The LD decay distance was determined by fitting a non-linear model following the procedure described by Remington et al. [48], with an r 2 threshold set at 0.1 and r 2 equal to half of the maximum LD value. 
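The half-decay estimate described above can be reproduced in outline with a short script. The sketch below is illustrative only: it fits a simple hyperbolic decay rather than the full Remington et al. expectation, and the array names (dist_bp, r2) are hypothetical placeholders for pairwise r² values exported from TASSEL.

```python
# Sketch: estimate the LD half-decay distance from pairwise r^2 values.
# Assumptions: a simple monotone decay function is used instead of the full
# Remington et al. expectation, so the result is illustrative only.
import numpy as np
from scipy.optimize import curve_fit

def ld_decay(d, a, c):
    """Simple hyperbolic decay of expected r^2 with distance d (bp)."""
    return a / (1.0 + c * d)

def half_decay_distance(dist_bp, r2):
    (a, c), _ = curve_fit(ld_decay, dist_bp, r2, p0=[0.5, 1e-7], maxfev=10000)
    d_grid = np.linspace(dist_bp.min(), dist_bp.max(), 100000)
    fitted = ld_decay(d_grid, a, c)
    half_max = fitted.max() / 2.0          # r^2 equal to half of the maximum LD value
    return d_grid[np.argmin(np.abs(fitted - half_max))]

# Example with synthetic marker pairs (not the real data):
rng = np.random.default_rng(0)
d = rng.uniform(1e3, 5e8, 5000)
r2_obs = np.clip(0.45 / (1.0 + 6e-9 * d) + rng.normal(0, 0.02, d.size), 0, 1)
print(f"LD half-decay distance ~ {half_decay_distance(d, r2_obs) / 1e6:.0f} Mb")
```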
GWASs For GWASs of SNPs related to seed color, the GAPIT R package (version 3.0) was used [49].Two GWAS methods were applied, namely, fixed and random model circulating probability unification (FarmCPU) and Bayesian-information and linkage-disequilibrium iteratively nested keyway (BLINK) [50,51].In total, 3969 SNPs obtained after filtering were used for GWASs.To visualize the false positives of the implemented methods, Q-Q and Manhattan plots were generated using the internal program within GAPIT.The Manhattan plots depict the genomic distribution of marker associations, whereas the Q-Q plots assess the observed versus expected p-values.A stringent threshold of −log 10 P of 5.0 was applied to ensure robust identification of significant MTAs across the implemented genome-wide association study methods. Transcriptome Data Analysis Deep purple wheat and yellow wheat were cultivated in a radiation breeding research farm located at 35.5699 • N and 126.9722 • E (Jeongeup, Republic of Korea).The spikes were tagged at flowering time, and the grains were harvested at 10, 20, and 30 days after flowering.The samples were stored at −80 • C until further use.Total RNA was isolated from developing wheat seeds collected 10 days after flowering (DAF), 20 DAF, and 30 DAF using Meng and Feldman's method [52].An RNeasy plus micro kit (Qiagen, Hilden, Germany) was used to purify total RNA.Total RNA was isolated from developing wheat seeds collected 10 DAF, 20 DAF, and 30 DAF to construct RNA-Seq paired-end libraries using the TruSeq RNA sample preparation kit (Illumina).Each library was sequenced using the Illumina HiSeq2000 platform.The raw reads were preprocessed using Trimmomatic v0.36 to remove adapter sequences and low-quality bases [53].The preprocessed reads were mapped to a high-quality wheat (T.aestivum L.) reference genome (International Wheat Genome Sequencing Consortium) from IWGSC using HISAT2 v2.1 [54,55].The alignment was capable of determining alternative spliced transcripts for gene models based on IWGSC RefSeq v1.1.The HTSeq v0.6.1 high-throughput sequencing framework was employed to count the number of reads mapped to the exons of each gene [56].DEGs were determined by p < 0.05, false discovery rate < 0.05, and absolute fold change >4 using edgeR [57] in the Bioconductor package.DEGs were identified by pairwise comparison at each time point between yellow and purple seeds.The log2-transformed transcript per million values was calculated using TPMCalculator and used to construct heatmaps of DEGs under yellow and purple wheat [58].To identify genes associated with each agronomic trait, high-confidence annotated genes located within ±250 kb of each identified marker-trait association (MTA) were selected from the transcriptome data.The heatmap of gene expression was generated using MeV software, version 4.9.0 [59]. Gene Expression Analysis RT-qPCR was performed using Bio-Rad CFX Opus 96 (Bio-Rad, Hercules, CA, USA) and TB Green premix EX Taq II (Takara, Tokyo, Japan).RT-qPCR primers for the indicated genes were designed using an oligonucleotide properties calculator.Each PCR reaction mixture (20 µL) contained 10 µL of 2 × TB Green premix, 1 µL of the first-strand cDNA, and gene-specific primers.The reactions were performed in the Bio-Rad CFX Opus 96 system under the following conditions: 30 s of denaturation at 95 • C, followed by 40 cycles of PCR amplification at 95 • C for 10 s and 65 • C for 30 s.The primers are presented in Table S1. 
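Tying the GWAS and transcriptome analyses together, the candidate-gene selection step described above (high-confidence genes within ±250 kb of each MTA, with MTAs called at −log10 P ≥ 5.0) can be sketched as follows. The column names and in-memory data frames are hypothetical placeholders, not the pipeline actually used.

```python
# Sketch: pull high-confidence genes within ±250 kb of each significant MTA.
# Assumptions: column names (chrom, pos, start, end, snp_id) are hypothetical;
# the window size and -log10(P) >= 5.0 threshold follow the text above.
import pandas as pd

WINDOW = 250_000
THRESHOLD = 5.0  # -log10(P)

def candidate_genes(mtas: pd.DataFrame, genes: pd.DataFrame) -> pd.DataFrame:
    """Return annotated genes overlapping a ±250 kb window around each significant MTA."""
    hits = []
    significant = mtas[mtas["neg_log10_p"] >= THRESHOLD]
    for _, snp in significant.iterrows():
        window_genes = genes[
            (genes["chrom"] == snp["chrom"])
            & (genes["end"] >= snp["pos"] - WINDOW)
            & (genes["start"] <= snp["pos"] + WINDOW)
        ].copy()
        window_genes["mta"] = snp["snp_id"]
        hits.append(window_genes)
    return pd.concat(hits, ignore_index=True) if hits else pd.DataFrame()

# Usage: candidate_genes(gwas_results, iwgsc_annotation)
```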
Conclusions
Overall, our comprehensive investigation into the genetic basis of seed color in wheat uncovered eight significant MTAs related to colorimetric traits (L*, a*, and b*) and candidate genes associated with seed coloration. The identified MTAs and candidate genes, including those encoding putative components of the MBW complex and E3 ubiquitin ligases, provide valuable insights into the molecular mechanisms governing seed color in wheat. Further investigations are essential to validate these correlations and unveil the precise roles of the identified genes in determining wheat seed color. This research provides a foundation for future studies to unravel the intricate molecular mechanisms governing the diverse colors of wheat seeds.

Figure 1. F3 population seed images. The figure displays seed images representative of the F3 population used in this experiment, showing the observed variation in seed color. The F3 population originated from a crossbreeding of recombinant inbred lines (RILs) with distinct seed coat phenotypes, namely, yellow (accession no. 10DS1673, sourced from the Korea University Wheat Subgene Bank) and deep purple (accession no. 10DS1674). The intentional inclusion of RILs with diverse seed coat phenotypes contributed to the generation of a genetically heterogeneous population, facilitating the exploration of seed color-related traits in subsequent analyses.

Figure 3. Genotype analysis and linkage-disequilibrium (LD) decay. (A) Principal component analysis of 155 genotypes using 3969 single nucleotide polymorphisms provides insights into the genetic relationships among individuals. (B) Population structure analysis with three clusters reveals distinct subgroups within the 155 genotypes, enhancing our understanding of the genetic diversity of the population. (C) The LD decay plot depicts the genome-wide decay of LD with genetic distance. The region in which LD decays to half is highlighted in green, and 50% decay occurred at 134 Mb across the genome.

Figure 4. Manhattan and Q-Q plots for significant MTAs. (A) Manhattan and Q-Q plots for BLINK (L*) analysis, illustrating genomic regions with significant associations with the L* trait in wheat. (B) Manhattan and Q-Q plots for BLINK (a*), highlighting significant MTAs related to a* in the wheat genome. (C) Manhattan and Q-Q plots for BLINK (b*), revealing genomic loci significantly associated with b* in wheat. (D) Manhattan and Q-Q plots for FarmCPU (L*), displaying genomic regions with noteworthy associations with the L* trait using FarmCPU. (E) Manhattan and Q-Q plots for FarmCPU (a*), presenting significant MTAs related to a* in wheat through FarmCPU.

Figure 5. Box plots of allelic differences of significant MTAs. (A) Allelic differences for the significant MTAs identified via FarmCPU analysis for L* in wheat. (B) Allelic differences for significant MTAs identified via BLINK analysis for L* in wheat. (C) Allelic differences for significant MTAs identified via BLINK analysis for b* in wheat. Statistical analysis was performed using ANOVA followed by Duncan's post hoc analysis (p < 0.001) to assess significant differences in mean phenotypic values among genotypes with different allelic variants. Different letters indicate statistically significant differences, and the blue circles represent the distribution of lines based on allelic differences. MTA, marker-trait association; BLINK, Bayesian-information and linkage-disequilibrium iteratively nested keyway; FarmCPU, fixed and random model circulating probability unification.

Figure 6. RNA sequencing results and DEGs positioned within a 250 kb radius of the MTAs. (A) Images of seed samples during seed developmental stages at 10 DAF, 20 DAF, and 30 DAF used for RNA sequencing. Heatmaps illustrating (B) sixteen genes displaying a greater than 2-fold difference in gene expression between colored and non-colored wheat during the seed developmental stage, and (C) DEGs involved in transcription factors, phenylpropanoid compounds, and E3 ubiquitin ligase.

Table 2. Identification of genetic loci associated with phenotypic traits of wheat (Triticum aestivum L.) based on genome-wide association studies. BLINK, Bayesian-information and linkage-disequilibrium iteratively nested keyway; FarmCPU, fixed and random model circulating probability unification.
7,890.4
2024-03-22T00:00:00.000
[ "Agricultural and Food Sciences", "Biology" ]
Stiffness Enhancement in Nacre-Inspired Nanocomposites due to Nanoconfinement Layered assemblies of polymers and graphene derivatives employ nacre’s tested strategy of intercalating soft organic layers with hard crystalline domains. These layered systems commonly display elastic properties that exceed simple mixture rule predictions, but the molecular origins of this phenomenon are not well understood. Here we address this issue by quantifying the elastic behavior of nanoconfined polymer layers on a model layered graphene-polymer nanocomposite. Using a novel, validated coarse-grained molecular dynamics simulation approach, here we clearly show that the elastic properties of layered nanocomposites cannot be described by volume fraction considerations alone and depend strongly on both interfacial energy and nanostructure. We quantify the relative importance of polymer nanoconfinement and interfacial energy on polymer structure and elasticity, and illustrate the validity of our model for two polymers with different intrinsic elastic properties. Our theoretical model culminates in phase diagrams that accurately predict the elastic response of nacre-inspired nanocomposites by accounting for all material design parameters. Our findings provide widely applicable prescriptive guidelines for utilizing nanoconfinement to improve the mechanical properties of layer-by-layer nanocomposites. Our findings also serve to explain why the elastic properties of organic layers in nacre exhibit multifold differences from the native and extracted states. (transition region between two different phases in a material) formation in nanocomposites. Systematic studies on synthetic nanocomposites reveal that several molecular mechanisms, such as topological constraints induced by impermeable platelets, chain adsorption onto surfaces, and dispersion of nanoinclusions influence the mechanical properties of polymer nanolayers [16][17][18] . These mechanisms are collectively called nanoconfinement effects, and it hypothesized that they may contribute to the exceptionally high elastic response observed in nacre and nacre-inspired systems. Most of the circumstantial evidence for these effects comes from polymer thin films, which exhibit drastic changes in glass transition behavior due to substrate effects, in analogy with layered nanocomposites [19][20][21][22][23] . Nanoconfinement of polymer thin films near hard surfaces with strong adhesion energy gives rise to a higher apparent glass-transition temperature (T g ), and elastic properties may change both above and below (T g ) [24][25][26][27] . The length-scale over which these properties change, the so-called interphase width, is a key factor governing the viscoelastic properties of nanocomposites, although it is difficult to measure it experimentally 15,24,[28][29][30][31][32] . Such interphases also exist in nacre, as evident from nanoindentation experiments, AFM imaging, and finite element modeling 12,[33][34][35][36][37] . These investigations concur on the observation that the elastic modulus of organic layers are higher than what is anticipated for the organic layers, lying broadly in the range 2 -40 GPa 12, [33][34][35][36][37] . Conversely, studies on the actual bulk properties of the organic layers reported an elastic modulus of 100 Pa 38 to 20-100 MPa 39 for the organic phase using different experimental techniques. 
We note that the micromechanics models and measurements employed for these analyses often do not account for anisotropy that is likely to occur in such systems, as observed in semicrystalline polymer-clay nanocomposites 40. Thus, the values obtained are considered to be representative isotropic equivalent material constants. While it is clear that the organic layers confined in their nanolayers in nacre and nacre-inspired nanocomposites exhibit significant differences under nanoconfinement, how these properties depend on factors such as layer thickness and interfacial energy remains to be established. In this article, we aim to study the nanoconfinement effect using a novel coarse-grained molecular dynamics (CGMD) model of nacre-inspired poly(methyl methacrylate) (PMMA)/graphitic systems, as recently synthesized and studied in experiments 41. For the hydrated organic layer in nacre, the elastic modulus is reported to be close to PMMA's elastic modulus. Thus, the PMMA/multi-layer graphene system has constitutive behavior similar to that of the nacre constituents. The simulation approach utilizes recent advances in mesoscale modeling of materials, namely the development of coarse-scale models that can capture the mechanical properties of multi-layer graphene 42 and methacrylate polymers 43,44 at length and time scales inaccessible to all-atom MD simulations. This ability allows us to efficiently carry out size-dependence studies using models validated by experiments. Here, we first discuss the details of the modeling approach. We follow up with results focusing on the properties of the soft phase, interrogating size-dependence along with interfacial energy. Finally, we summarize our conclusions and present analytical models that provide guidelines for designing and optimizing material properties in nacre-inspired systems.

Methods
Coarse-Grained Models. 2-bead per monomer model for PMMA. The coarse-grained potential for PMMA used in this study is based on a generalized CG force field that we developed in a recent study 43. The bonded parameters were derived from all-atomistic probability distributions of local structural metrics, and long-range interactions were based on molecular mobility and density measurements as described in the original work 43. Each monomer in PMMA is modeled as 2 bead groups in our CG model: the backbone methacrylate group "A" (C4O2H5) and the side-chain group "B". The bond stretching, angle bending, and dihedral interactions in the CG model are developed by matching them to the respective atomistic probability distribution functions using the inverse Boltzmann method. We employ a Gromacs-style 12-6 Lennard-Jones (LJ) potential to model the cohesive nonbonded interactions between beads excluding the nearest bonded neighbors, V_nb(r) = 4ε[(σ/r)^12 − (σ/r)^6] S_LJ(r), where ε is the depth of the potential well and σ is the point at which the potential crosses the zero energy line. S_LJ(r) is a polynomial function that provides the interaction a smooth transition to zero from r_inner = 12 Å to r_outer = 15 Å. The parameters have been calibrated to match the experimental density at room temperature and the glass transition temperature, T_g, of bulk PMMA, resulting in ε_AA = 0.5 kcal/mol, σ_AA = 5.5 Å for backbone beads and ε_BB = 1.5 kcal/mol, σ_BB = 4.42 Å for sidechain beads, which yield a density of 1.15 g/cm³ and a T_g of 385 K for bulk PMMA.
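As an illustration of the calibrated non-bonded term, the minimal sketch below evaluates the 12-6 LJ energy for the A-A and B-B bead pairs using the parameters quoted above. The smoothing polynomial S_LJ(r) is not specified here, so the sketch simply truncates the potential at the 15 Å outer cutoff; it is illustrative, not the production force field.

```python
# Sketch: 12-6 LJ cohesive interaction for the CG PMMA beads, using the
# calibrated parameters quoted in the text (epsilon in kcal/mol, sigma in Å).
# The S_LJ(r) smoothing between 12 and 15 Å is replaced by a hard cutoff here.
import numpy as np

def lj_12_6(r, eps, sigma, r_outer=15.0):
    """12-6 Lennard-Jones energy (kcal/mol), truncated at r_outer (Å)."""
    r = np.asarray(r, dtype=float)
    v = 4.0 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)
    return np.where(r < r_outer, v, 0.0)

r = np.linspace(4.0, 15.0, 6)
print("A-A backbone beads :", lj_12_6(r, eps=0.5, sigma=5.5))
print("B-B sidechain beads:", lj_12_6(r, eps=1.5, sigma=4.42))
```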
Additionally, the model is validated using experimental data on the Flory-Fox constants for PMMA that define the molecular weight dependence of T_g, which our model readily captures with no additional empirical input.

Coarse-grained Model for Multi-layer Graphene. The details of the graphene model are explained in our earlier work 42 and are briefly summarized here. We cluster 4 atoms into one bead, and the total potential energy can be written as V = V_g_bond + V_g_ang + V_g_dih + V_g_nb, where V_g_bond, V_g_ang, V_g_dih, and V_g_nb represent the total bond, angle, dihedral, and pairwise non-bonded interactions, respectively. The bond term is a Morse-type potential characterized by a well depth D_0, a width parameter α, and an equilibrium bond length d_0; the angle term is harmonic with spring constant k_θ and equilibrium angle θ_0; the dihedral term is governed by the spring constant k_φ; and the non-bonded term is a 12-6 LJ potential in which ε is the depth of the potential well and σ determines the equilibrium distance between two non-bonded beads (r_eq = 2^(1/6) σ). Based on the geometry of the mapping, in our system: d_0 = 2.8 Å, θ_0 = 120°, and σ = 3.46 Å. The rest of the parameters are calibrated based on material properties as: D_0 = 196.38 kcal/mol, α = 1.55 Å, k_θ = 409.40 kcal/mol, k_φ = 4.15 kcal/mol, ε = 0.82 kcal/mol. For the non-bonded interactions, the cutoff distance is calibrated as 12 Å, and the depth of the energy is calibrated such that the interlayer adhesion energy is 260 mJ/m². As discussed in the original work, all of these values are in close agreement with data from experiments and density functional theory calculations 42. This graphene CG model yields, for a monolayer, an elastic modulus of 900 GPa, a failure strength of 81 GPa, and an in-plane shear modulus of ~2 GPa in the zigzag and ~1.5 GPa in the armchair pulling directions, all in good agreement with experimental results 42. The interlayer shear modulus is about 2 GPa for the system studied here, but depending on the stacking configuration, it can be much lower. A key feature of the model is its ability to predict the elastic and plastic response of multi-layer graphene, and features such as superlubricity, where a drastic reduction in shear resistance can be observed at specific stacking arrangements. Our CG model can quantitatively capture complex mechanical behavior such as non-linear elasticity, buckling of the sheets under large shear deformation, and anisotropy between the zigzag and armchair directions for large deformation and fracture, and is thus well equipped to model graphene properties in nanocomposite materials.

Methodology for the Coarse-Grained Molecular Dynamics Simulations. We use the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS), a widely used open-source simulation package, to carry out our CGMD simulations. The systems are composed of 2 phases: the relatively soft polymer phase and the hard graphene phase. Periodic boundary conditions (PBC) are applied in all 3 directions (x, y, z), so that with the two phases stacked together, the system constitutes infinitely long layers in both the x and y directions, and an infinite repeating bilayer structure of alternating graphene and polymer phases in the z direction. Thereby, a simplified, uniform layer-by-layer structure inspired by the nanostructure of nacre can be formed. The graphene phase has N finite graphene sheets that are 9.7 nm long (x) and 50.96 nm wide (y), containing 2 graphene flakes of equal size in each sheet plane. With PBC, the resulting lateral spacing between the flakes in the y direction is ~4.8 Å.
When N > 1, the sheets, AB stacked in a staggered fashion, are shifted by one half of the length of the graphene flake with respect to the neighboring (above or below) layers, resulting in an overlapping percentage of 50%. In the polymer phase, PMMA chains with 100 monomers per chain are first equilibrated at 800 K and then slowly cooled down to room temperature. The polymer films with different thickness h are then placed onto the graphene phase to create the layered systems. The interaction between the graphene and polymer is captured by the LJ 12-6 potential V_gp(r) = 4ε_gp[(σ_gp/r)^12 − (σ_gp/r)^6], where ε_gp is the depth of the Lennard-Jones potential well for the graphene-polymer interaction strength and σ_gp is the point where the potential crosses the zero line. The cutoff distance is set to 15 Å. Previous studies have shown that stronger interfacial interactions lead to the higher T_g and elastic modulus near the interface seen in supported thin films and nanocomposites [44][45][46][47]. For graphene-derived materials, this can be straightforwardly achieved through surface functionalization, as in graphene oxide. To better understand the impact of interfacial interaction strength on the elastic response, ε_gp = 0.5, 1.25, and 2.0 kcal/mol are used in our modeled nanostructures, so that the interfacial energies represent different types of interfaces, from a weakly bonded graphene-polymer interface to a highly adhesive graphene oxide-polymer interface where a no-slip condition at the interface is ensured. The interfacial energy between graphene and the soft layers scales linearly with ε_gp and is 0.08 J/m², 0.25 J/m², and 0.45 J/m², respectively. For the weak interfacial interaction strength, the calculated adhesion energy is comparable to experimentally measured graphene-polymer adhesion energies 48. Our interfacial energy systems ensure no-slip boundary conditions at the polymer-graphene interface, which can be achieved experimentally with surface functionalization, as in the case of graphene oxide. To construct our modeled structures, we select N = 1, 2, 5, 8, corresponding to graphene phase thicknesses of 0.34 nm, 0.68 nm, 1.7 nm, and 2.72 nm. We choose h = 2 nm, 5 nm, 10 nm, 20 nm, and 40 nm for the polymer phase. We also wish to elucidate whether the confinement effect depends on the type of the polymer used, specifically its bulk elastic properties. Therefore, we reduce the cohesive interaction in sidechain groups by adjusting the ε_BB parameter in our CG PMMA model to 0.1 kcal/mol. This results in a hypothetical polymer that has a lower glass transition temperature and a lower modulus for the polymer phase 44,45. In total, 60 systems with varying h, N, and interfacial interaction ε_gp are studied for both low cohesive energy and high cohesive energy polymer nanocomposites. A schematic of the system is shown in Fig. 1. The densities of the modeled systems are calculated to be 1.19 ± 0.03 g/cm³, which is the same as the density of the bulk polymer phase. It is therefore considered that the confinement of the polymer phase does not significantly change the average density of the polymer. For equilibration, we start with a fast push-off phase using a soft potential to randomize chain configurations, and then equilibrate the system at 800 K for 6 ns. During the equilibration process, the cutoff distance of our CG model is set to 2^(1/6) σ so that only repulsive interactions are allowed, letting the polymer phase overcome conformational energy barriers and achieve equilibrium at high temperature.
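A small numerical check of that cutoff choice: truncating the 12-6 LJ potential at r = 2^(1/6)σ, its minimum, leaves only the repulsive branch. This is purely illustrative and not part of the LAMMPS workflow itself.

```python
# Sketch: the LJ minimum sits at r = 2^(1/6)*sigma, so cutting the potential off
# there removes the attractive tail (a WCA-style purely repulsive interaction).
import numpy as np

def lj(r, eps=1.0, sigma=1.0):
    return 4.0 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)

r_min = 2.0 ** (1.0 / 6.0)      # location of the LJ minimum (reduced units)
print(lj(r_min))                 # -> -eps, the bottom of the attractive well
r = np.linspace(0.9, r_min, 5)
print(lj(r) - lj(r_min))         # shifted energies inside the cutoff are all >= 0
```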
The system is then cooled down to 300K and equilibrated for 2 ns with the full potential described in Equations 1-7. Polymer chains in our modeled nanostructures have 100 monomers per chain and are below the entanglement length measured in experiments 49 , and thus find their relaxed conformations more readily. Here we ensure that the polymer chains have converged conformations by initializing polymer chains with desired end-to-end distances and monitoring the mean-square internal distances (MSID) in polymer chains during the equilibration procedure. Steady states are attained by monitoring the convergence in MSID curves for the polymer layers 50 . After equilibration runs, a strain-controlled uniaxial tensile test is performed by deforming the simulation box in the y direction at a strain rate of 2 × 10 8 s −1 . This high strain rate is inherent to MD simulations, which includes dynamical information usually on ps or ns timescales. We note that since the polymers used are below their glass-transition temperature, strain rate effects on the measured moduli are not expected to be very large. We also note that high strain rates are highly relevant to ballistic impact and other protection applications where nacre-inspired systems could potentially be utilized. In such cases, the deformations occur athermally, and strain rate effects are minimal for modulus measurements, which are governed chiefly by the cohesive interactions. Previous molecular dynamics studies show that employing a strain rate between 0.4 and 40 × 10 8 s −1 yields consistent results for elastic modulus calculations in other systems 51 . During the deformation, the pressure is kept at zero in all directions except for the loading direction. Virial stress is computed for each atom in the simulation box and is averaged over all atoms in polymer phase to get stress in confined polymer film. Elastic modulus of polymer phase is then computed from the slope of a linear fit to the stress-strain curve with strain ε = 0-0.015. Results and Discussion Stress strain curves in modeled systems. First, we present results from the constant strain rate tensile testing simulations, which provide insights into the mechanical response of the system. We focus initially on how the thickness of the polymer layer influences elastic properties. For this purpose, we present results from a series of computational thought experiments where we vary the nanostructure of the multilayer system by controlling both h, the thickness of the polymer layer, and N, the thickness of the graphene layer as defined by the number of sheets. A typical stress strain curve of the nanocomposite system as well as the polymer phase is shown in Fig. 2(a,b). The overall stress-strain behavior of the nanocomposite indicates that the material can be considered linear elastic up to 1.5% strain. Shortly after the linear elastic region, multilayer graphene starts to yield due to interlayer sliding between graphene sheets and stick-slip events occur between graphene flakes, marking the onset of a plastic regime. This plastic deformation mechanism is associated with a post-yield plateau in the stress-strain curve that exhibits repeated peaks and valleys as the sheets slide. Meanwhile, the interfacial energy between the polymer and graphene phase is large enough such that the chain ends are physisorbed and move with the graphene layers. The large shear stresses that develop in the soft layers eventually give rise to graceful failure of the material. 
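Since the analysis that follows focuses on this linear elastic region, a minimal sketch of the modulus-extraction step described in Methods (slope of a linear fit to the stress-strain curve over strain 0-0.015) is given below; the arrays are placeholders standing in for the averaged virial-stress output of a single run.

```python
# Sketch: extract the elastic modulus from the small-strain part of a
# stress-strain curve, as described in Methods. Data below are synthetic.
import numpy as np

def elastic_modulus(strain, stress_gpa, strain_max=0.015):
    """Slope of a linear fit to stress vs. strain for strain <= strain_max."""
    mask = strain <= strain_max
    slope, _ = np.polyfit(strain[mask], stress_gpa[mask], 1)
    return slope

strain = np.linspace(0.0, 0.05, 200)
stress = 3.5 * strain + 0.001 * np.random.default_rng(1).normal(size=strain.size)
print(f"E ~ {elastic_modulus(strain, stress):.2f} GPa")
```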
Given the complexity of the mechanisms involved and their size-dependence, here we limit our focus to the small deformation regime, that is, the linear elastic region, where the nanoconfinement effects are already not well understood. Specifically, we aim to investigate the combined influence of geometric nanoconfinement and interfacial energy on the elastic properties of the polymer phase.

Structural characteristics of the confined polymer layer. The first key question we ask here is whether the confinement by graphene layers induces structural changes in the polymer layer. If any changes occur in the structural arrangement of the polymer chains, one could potentially correlate these with the changes in the mechanical behavior as well [52][53][54][55]. In order to study the confinement effect on the conformation of polymer chains in the soft phase, we calculate the gyration tensor for each polymer chain and plot the average value against polymer layer thickness, h, in Fig. 3. R_g is a measure of the molecule's size and orientation in specified directions and is defined by R_g² = (1/M) Σ_i m_i (r_i − r_com)², where M is the total mass of the system, m_i and r_i are the mass and position of each atom in the system, and r_com is the center of mass of all the atoms. In our systems, the z-axis is orthogonal to the plane of the graphene and polymer layers. Therefore, two directions are of interest here to study the confinement effect: in-plane (parallel to the xy plane, or equivalently the plane of the graphene sheets) and out-of-plane (perpendicular to the xy plane). R_g,xy is the square root of the average of the squared x- and y-direction R_g values. The confinement effect on structure is clear in our bilayer systems: as h is reduced, both the in-plane and out-of-plane R_g deviate from the bulk polymer R_g: R_g,xy increases and the out-of-plane R_g,z decreases. The calculated R_g results suggest that the strength of the interfacial interaction does not drastically change the average structural conformations for the cases studied here. Instead, the soft layer thickness, h, and the associated topological changes seem to be the more dominant factor governing R_g, as all 3 studied interfacial energy systems produce very similar results. Therefore, for clarity, only systems with ε_gp = 2.0 kcal/mol are shown in Fig. 3. Each data point in Fig. 3 is an averaged value over 5 distinct simulation runs. The errors are small compared to the values and are therefore omitted from Fig. 3 for clarity.

Elastic modulus for confined polymer phase. The next question that remains to be answered is whether the structural changes observed in the confined soft layers directly correlate with the changes in mechanical properties that are associated with nanoconfinement. For this purpose, we compare the elastic moduli of the polymer in the bulk (E_bulk) and nanoconfined (E_film) phases, and again systematically map out the effects of material nanostructure. To calculate the elastic modulus E_bulk for the bulk phase of the polymers, we perform a tensile test on systems with periodic boundary conditions and obtain the slope of a linear fit to the stress-strain curve with strain ε = 0-0.015; averaging results from 5 distinct simulation runs yields 3.50 ± 0.21 GPa and 0.30 ± 0.11 GPa for the polymer with ε_BB = 1.5 kcal/mol (PMMA) and 0.1 kcal/mol (low cohesive interaction polymer), respectively, in our CGMD model. Figure 4 illustrates how E_film scales with h under nanoconfinement for both material systems and degrees of polymerization. For clarity, the calculated elastic modulus data points are omitted in Fig.
4; only predictions from our analytical model (Eq. 11) are presented. Detailed plots containing both the elastic modulus data points and the prediction curves from the analytical models are available in the Supporting Information. For the weak interfacial interaction cases with ε_gp = 0.5 kcal/mol, the elastic modulus of the PMMA layer does not see an increase from its bulk value, but an increase of over 130% for the low cohesive polymer layer with ε_BB = 0.1 kcal/mol is still observed. For strong interfacial interactions ε_gp of 1.25 kcal/mol and 2.0 kcal/mol, as the thickness of the soft layer h decreases from 40 nm to 2 nm, the elastic moduli of the confined PMMA layer increase by 50% and 90%, respectively. The trend is similar in polymer layers with ε_BB = 0.1 kcal/mol, but in this case the elastic modulus increase is much more significant, ranging from roughly 5 to 8 times the bulk values. The most interesting observation arising from this comparative analysis is that although the elastic moduli of the polymers studied here differ by a factor of 10 in the bulk phase, the values are much closer in the confined state, which shows the importance of the confinement effect for soft polymers. Our simulation results compare well with recent experimental studies on the elastic modulus of supported PMMA thin films using nanoindentation techniques, which indicate an increase in elastic modulus with decreasing film thickness 56. Thus, at a very high degree of confinement, common in many synthetic nacre-inspired systems, the degree of confinement rather than polymer chemistry may be the most important factor governing the in-plane elastic response. To quantitatively describe the effect of confinement on the elastic moduli of the polymer phase, we propose the following model to capture the relationship between elastic modulus and film thickness: E_film = E_bulk (1 + h_0/h), (9) where E_film is the effective elastic modulus of the confined polymer phase, E_bulk is the elastic modulus of the bulk phase of the polymer, h is the thickness of the confined film, and h_0 is a fitting parameter that determines how rapidly the elastic modulus converges to the bulk value for confined polymer films. For the same type of polymer, the fitting parameter h_0 is similar for different values of N but depends on the interfacial energy as well as the soft layer thickness. In Fig. 5a, we plot the average h_0 across all N values for each system to show the trend with interfacial energy. The growth in h_0 with increasing interfacial energy is clear from this analysis. For the high cohesive energy polymer with ε_BB = 1.5 kcal/mol (PMMA), h_0 = 0 nm, 0.96 nm, and 1.71 nm, and for the polymer with ε_BB = 0.1 kcal/mol, h_0 = 3.75 nm, 12.33 nm, and 15.36 nm for the three values of ε_gp studied, respectively. Taking our nanostructures with strong interfacial interaction strength ε_gp = 2.0 kcal/mol as an example, for a confined PMMA film, a 17.1 nm thick film would have an elastic modulus that is within 10% of the bulk value. On the contrary, this thickness increases to 153.6 nm for the low cohesive polymer with ε_BB = 0.1 kcal/mol. Based on this analysis, one may ask whether the 1/h scaling relationship between E_film and thickness h identified here has any physical basis. Here we attempt to provide an explanation for this observation using simple composite concepts. On the basis of the chain segment order parameter spatial distribution (details in the Supporting Information) in our confined soft layers, we employ a composite bilayer model to quantify the thickness dependence of E_film and justify the best-fit scaling.
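Before turning to that composite picture, a quick numerical check of the confinement model E_film = E_bulk(1 + h_0/h) as reconstructed above, using the fitted h_0 values and bulk moduli reported in the text; the specific cases evaluated are chosen only for illustration.

```python
# Sketch: evaluate E_film = E_bulk * (1 + h0 / h) with values quoted in the text
# (bulk moduli 3.50 GPa for PMMA and 0.30 GPa for the low-cohesion polymer).
def e_film(e_bulk_gpa, h0_nm, h_nm):
    return e_bulk_gpa * (1.0 + h0_nm / h_nm)

# PMMA, strong interface (h0 = 1.71 nm): a ~17.1 nm film sits within 10% of bulk.
print(e_film(3.50, 1.71, 17.1) / 3.50)   # -> 1.10
# Low-cohesion polymer, intermediate interface (h0 = 12.33 nm) at h = 2 nm:
print(e_film(0.30, 12.33, 2.0) / 0.30)   # -> ~7.2-fold increase over bulk
```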
Based on our finding that the structure properties of confined polymer phase approach bulk-like when one proceeds 2R g distance away from graphene polymer interface (details in supporting information), here we define an interface region h int of 2R g distance from graphene polymer interface. Beyond this region, we assume that the properties converge to bulk like properties and the chains cannot sense the interfaces directly. With this simplification, the confined polymer film with thickness h can be considered to be composed of 2 interface layers with thickness h int and 1 interior layer with bulk like properties. In the interface layer, the elastic modulus can be considered as: where E surf is the upper limit of elastic modulus at graphene polymer interface when confinement effect is infinite or h → 0. As the distance from graphene polymer interface increases beyond the 2R g limit, the modulus can be considered as E bulk for the interior layer. If the film thickness h is less than 2h int , this means that the effects of the two interfaces are pervasive throughout the film and there is no interior bulk-like region. Following this picture, the polymer phase modulus E film can be described as: Further investigation in Equation 11 reveals that when h > 2h int , this bilayer composite model resumes to our empirical model in Equation 9. By equating Equation 9 and 11, we can define E surf in terms of h 0 using our simulation data simply as: Results from Fig. 5(b) shows that E surf increases with interfacial interaction strength in confined polymer layers with ε BB = 1.5 kcal/mol (PMMA) and 0.1 kcal/mol. It should be noted that despite the huge difference in elastic response of bulk PMMA and low cohesive polymer with ε BB = 0.1 kcal/mol, i.e. 10 times difference in modulus, these two polymer films have comparable surface moduli under nanoconfinement (only ~2 fold difference). This is because the surface moduli presumably depends most strongly on the graphene polymer interaction strength compared to other factors. Comparing Fig. 5(a,b) suggests a similar trend for both h 0 and E surf in variables as expected. These results ascertain that nanoconfinement greatly alters properties of polymer layers and quantify the size-effects associated with layer-by-layer systems. Our nacre inspired model for elastic modulus of the confined polymer phase successfully captures the CGMD simulation results and provides simple guidelines for designing nacre inspired nanocomposite materials. The key insight here is that the interchain molecular interaction of the confined polymer, which governs the thickness of the interface region, governs h 0 in our model and determines how fast the confinement effect changes with changing h. Adhesive interaction at the interface is another key factor to consider in such nanocomposites since it influences E surf in the analytical model and governs the effectiveness of confinement in changing the elastic response of the polymer phase. The effect of both confinement and interfacial energy on elastic moduli of the polymer phase can be best reflected in a phase diagram of E film in Fig. 6, where we utilize our proposed model in Equation 11 to predict E film with different confinement thickness h as well as interfacial interaction strengths. For each film thickness h, we first calculate E film in systems with ε gp = 0.5, 1.25, 2.0 kcal/mol using Equation 11 and then linearly extrapolate E surf values for other values of ε gp . In Fig. 
6, the predicted E film is normalized with the elastic modulus of the bulk polymer. This analysis illustrates that the confinement effect is much stronger in low cohesive energy polymer layers as even with low interfacial energies, nanoconfinement still results in a significant increase in the elastic moduli of polymer phase. It should be noted that in experiments, it is extremely difficult to increase interfacial energy without compromising other properties of the materials. For example, in our PMMA/graphitic systems, functionalizing graphene sheets, as in the case of graphene oxide, can increase the interfacial energy. However, the modulus of graphene oxide decreases monotonically with degree of functionalization due to the breaking of perfect sp 2 carbon network 57,58 . The elastic modulus gain from the confined soft layers is on the order of few GPas and may not be enough to overcome modulus loss of tens and hundreds of GPas when increasing the degree of functionalization in graphene polymer nanocomposites. Therefore, from materials by design point of view, maintaining a high modulus in the hard layer while achieving large interfacial interactions seems to be crucial. Overall, the trends predicted here with our simulations agree very well with a very recent experimental study on graphene oxide PMMA nanocomposites, where the composite modulus nonlinearly overshoots the rule of mixtures predictions when the polymer layer thickness is reduced to tens of nm 59 . The modulus of multilayer graphene phase calculated from each system shows that changing number of graphene sheets N does not change the overall elastic modulus of graphene phase when N> 1. The modulus is calculated to be E g ~ 300 GPa, which is in agreement with simulation results on multilayer graphene sheets from our previous study 42 . For systems with N = 1, the calculated elastic modulus of graphene phase is lower due to sheet discontinuities, and depends on the graphene polymer interfacial interaction strength (details in supporting information). Regardless, the elastic modulus can be estimated by using a rule of mixtures using our predictions for nanoconfinement and interfacial energy effects. Figure 7 summarizes the elastic modulus predictions for the whole nanocomposite using our simple model. In this particular system, the much stiffer graphene phase dominates the overall elastic response of the nanocomposite. In many biological and bio-inspired nanocomposites, the hard phase materials possess much a lower elastic response than graphene and interfacial energy can be very high through the use of strong electrostatic interactions. Additionally, our analysis on lower cohesive forces between polymers also serves to emulate hydrated systems where a lower bulk modulus but a greater increase in the confined modulus is likewise anticipated. Thus, the nanoconfinement effects seen here are likely conservative estimates and a much greater contribution from the stiffening of the soft polymer phase can be anticipated in certain relevant cases. Conclusions In this work, we utilized coarse-grained molecular dynamics simulations to systematically study the nano confinement effect on the elastic modulus of the confined polymer phase in nacre inspired nanocomposite materials. Structural characterization of the confined polymer phase illustrated that the graphene phase leads to highly aligned polymer chains near the graphene/polymer interface region. 
Elastic modulus calculations show that a high degree of confinement increases the elastic modulus of the polymer phase by as much as 2-6 times, depending on the type of polymer. These results provide fundamental insight into how the elastic response of the polymer is altered tremendously under confinement compared to the unconfined state, especially at length scales below 5 nm, which is becoming relevant with more recent synthesis approaches to nacre-inspired systems. Our analytical model physically explains the effect of confinement arising from the hard-soft materials interface and quantitatively captures the effect of confinement on the modulus of the polymer layer. In the context of materials by design, our work serves as a guideline for fabricating nacre-mimetic nanocomposites with optimized elastic properties. Utilizing the same methods used in this article, the fracture toughness of these nanocomposites could further be studied to provide a complete overview of the materials-by-design approach. The CGMD approach laid out in this study could also be extended to analyze the mechanical behavior of other 2D materials in nanocomposites, and should be straightforward to generalize to other materials systems inspired by nacre.
7,256.2
2015-11-20T00:00:00.000
[ "Materials Science", "Physics" ]
Co-Occurrence of Wing Deformity and Impaired Mobility of Alates with Deformed Wing Virus in Solenopsis invicta Buren (Hymenoptera: Formicidae) Simple Summary Deformed wing virus (DWV) is a major honey bee pathogen found throughout the world. DWV, in association with the varroa mite, causes wing deformity, a shortened abdomen, and neurological impairments, leading to the mortality of millions of honey bee colonies worldwide. At least 12 ant species have been shown to harbor DWV, including the red imported fire ant, one of the most invasive and detrimental pests in the world. To date, there have been no reports in the literature of DWV causing symptoms in ants. In this study, we observed the classic honey-bee-like symptoms of deformed wings in laboratory and field colonies of the red imported fire ants and verified the presence and replication of DWV. This is the first report of the co-occurrence of DWV-like symptoms and DWV in ants. However, more research is needed to determine whether DWV is indeed the causative agent of DW syndrome in S. invicta. Abstract Deformed wing virus (DWV), a major honey bee pathogen, is a generalist insect virus detected in diverse insect phyla, including numerous ant genera. Its clinical symptoms have only been reported in honey bees, bumble bees, and wasps. DWV is a quasispecies virus with three main variants, which, in association with the ectoparasitic mite, Varroa destructor, causes wing deformity, shortened abdomens, neurological impairments, and colony mortality in honey bees. The red imported fire ant, Solenopsis invicta Buren, is one of the most-invasive and detrimental pests in the world. In this study, we report the co-occurrence of DWV-like symptoms in S. invicta and DWV for the first time and provide molecular evidence of viral replication in S. invicta. Some alates in 17 of 23 (74%) lab colonies and 9 of 14 (64%) field colonies displayed deformed wings (DWs), ranging from a single crumpled wing tip to twisted, shriveled wings. Numerous symptomatic alates also exhibited altered locomotion ranging from an altered gait to the inability to walk. Deformed wings may prevent S. invicta alates from reproducing since mating only occurs during a nuptial flight. The results from conventional RT-PCR and Sanger sequencing confirmed the presence of DWV-A, and viral replication of DWV was confirmed using a modified strand-specific RT-PCR. Our results suggest that S. invicta can potentially be an alternative and reservoir host for DWV. However, further research is needed to determine whether DWV is the infectious agent that causes the DW syndrome in S. invicta. 
Introduction
The red imported fire ant, Solenopsis invicta Buren (Hymenoptera: Formicidae), is among the world's 100 worst invasive alien species [1]. Native to South America, it has invaded many countries and regions [2] and become a significant pest in the infested areas due to its adverse impacts on human health, agriculture, wildlife, pets, and livestock [3,4]. The red imported fire ant can be a problem for honey bees and beekeepers. For example, in Texas, fire ants were often observed preying on bee brood and dead adult honey bees, particularly when the bee colonies were weak [5]. It is a great challenge to control fire ants, and their management relies heavily on synthetic insecticides. Due to the ever-increasing public concern about the potential adverse effects of synthetic insecticides, tremendous effort has been made in searching for safer alternatives. The utilization of their natural pathogens, such as viruses, as potential biological control agents has been an active research area [6].

Deformed wing virus (DWV) is one of the most intensively studied insect pathogens in the world due to its significance for the health of honey bees and other pollinators [7]. DWV negatively impacts honey bees, resulting in physical abnormalities, including wing deformities, shortened abdomens, discoloration of adult bees, and neurological impairments [7]. Its presence correlates with colony failure, particularly in association with the ectoparasitic mite, Varroa destructor [7][8][9]. DWV is a positive-sense, single-stranded RNA virus belonging to the genus Iflavirus in the order Picornavirales [10,11] and is a quasispecies virus with three main variants (A, B, and C), found in many parts of the world, including at least 32 U.S. states [12]. To date, DWV has been detected in 65 arthropod species in eight insect orders and three Arachnida orders [7]. DWV is known to induce wing deformities only in honey bees, bumble bees, and wasps [7,13,14]. At least 12 ant species have been shown to harbor DWV, including Solenopsis invicta (red imported fire ant) [11,12], with replication found in only a few ant species, including Linepithema humile [10,15] and Myrmica rubra [16]. There have been no reports of a replicative form of DWV causing any visible pathogenic symptoms in ants as previously described in honey bees, bumble bees, and wasps (e.g., deformed wings, ataxia, leg paralysis, or body discoloration).

This study had three main objectives. Firstly, it aimed to document the observed classic honey-bee-like symptoms of deformed wings in S. invicta. This was achieved through a detailed description, supplemented with still images and videos of DW alates. The second objective was to demonstrate the co-occurrence of DWV with DW alates of S. invicta by analyzing DWV not only in alates with deformed wings, but also in workers from the same colonies. The final objective was to verify the presence of the replicating form of DWV in S. invicta workers and both asymptomatic and symptomatic alates.

Ant Colony Collection and Maintenance
A total of 43 S.
invicta colonies were used for this project.Twenty-seven colonies were collected and maintained in the laboratory, and 14 colonies were used in situ-left in the ground and sampled for various assays.Two colonies were initiated using new queens collected on 21 July 2021, in the parking lot of Nelco Cineplex, Greenville, Mississippi (see Table S1 in the Supplementary Materials for the information on each colony).For the laboratory colonies, ants were separated from mound soil using a modified dripping method [17], unless otherwise stated.All but two lab colonies were maintained at 26 ± 2 • C and 50% RH in Fluon-coated (Insect-a-Slip, Rancho Dominguez, CA, USA) plastic trays (55 cm × 44 cm × 12 cm) and given ad libitum access to food and water, which consisted of frozen crickets, 10% sugar water solution, and distilled water.Two lab colonies were provided with a finely ground ant food consisting of dried banana, granola bar, and frozen crickets in a 0.5:0.5:1.0 ratio. RNA Extraction, cDNA Synthesis, and Reverse-Transcriptase PCR Workers (20 to 50 mg fresh weight; 10 to 20 ants), individual alates (single, 5 to 11 mg fresh weight), or pooled alates (30 to 50 mg fresh weight; 5 alates) were added into microcentrifuge tubes separately and stored at −80 • C until RNA extraction.Micropestles (Eppendorf, Enfield, CT, USA) were used to homogenize the samples, and total RNA was extracted from each sample using the Zymo Direct-zol™ RNA MiniPrep Kit (Zymo Research, Irvine, CA, USA) following the manufacturer's guidelines.The extracted RNA was treated with DNAse I to remove contaminating genomic DNA following the manufacturer's directions.One microgram of total RNA from each sample was used for cDNA synthesis by the SuperScript™ IV First-Strand Synthesis System (Invitrogen Inc., Carlsbad, CA, USA) following the manufacturer's instruction.RT-PCR was performed with sets of primers, found in Table 1, to detect DWV and the genetic variant, DWV-A.Primers used for this study: At first, we used the DWV, DWV-A, and DWV-B sets of primers for random hexamers' cDNA.We detected the presence of DWV-B in a single sample, so we did not keep using this set of primers; the DWV primer set generates an amplicon of 139 bases, which is difficulty to obtain good quality Sanger sequences from; instead, we switched the DWV-6F and B8 primer set, which generates an amplicon of 393 bases.Later, when we detected the replication of DWV, we used the DWV-F15 and B23 primer set based on the reference.This is a gene-specific primer set and generates an amplicon of 451 bases.We used the tagged DWV-F15 strand (gene-specific) primer for RT-cDNA and PCR with the tag and DWV-23B to determine the replication of DWV.We used the DWV-15F and B23 primer set to detect DWV from S. invicta.See Table 1 for the primer sequences and references. 
Sanger Sequencing The amplicons of 5 µL RT-PCR were electrophoresed in 2% agarose gel containing 0.5 µg/mL ethidium bromide and visualized under UV light.Then, the single-band (targeted) amplicons were purified using the ExoSAP-IT kit (Applied Biosystems™, Waltham, MA, USA).For the Sanger sequencing [18], we used the BigDye ® Terminator v3.1 Cycle Sequencing Kit (Applied Biosystems, Foster City, CA, USA).Sequencing reactions were performed in a 96-well plate cycling at 95 • C for 3 min, followed by 25 cycles of 95 • C for 30 s, 55 • C for 1 min, and finally, 68 • C for 2 min on a C 1000 Touch TM Thermal Cycler (Bio-Rad, Hercules, CA, USA).Post-sequencing reaction products were purified by ethanol/EDTA precipitation and injected into a 3730 XL DNA analyzer with Data collection 5.0 vision, Dye set Z (Applied Biosystems, Foster City, CA, USA).The positive control (extracted DWV from symptomatic worker honey bees) and negative control (NTC) were performed in each plate run.The sequencing results were analyzed using DNASTAR SeqMan Ultra vision 17.1 (Madison, WI, USA), and consensus and contig sequences were compared against known DWV viral sequences in GenBank by BLAST [19] to confirm viral identity. Detection of DWV Replication in Solenopsis invicta To determine the active replication of DWV in S. invicta, a modified two-step, strandspecific RT-PCR was performed [5].Five hundred nanograms of total RNA from each sample was used to synthesize cDNA using Maxima Reverse Transcriptase (Thermo Fisher Scientific, Denver, CO, USA).This thermostable reverse transcriptase was used to minimize nonspecific priming.We employed a gene-specific primer coupled to a 5 non-viral tag [20]; Insects 2023, 14, 788 4 of 12 see Table 1.In brief, the primer binding took place at 65 • C for 5 min, and then, the reverse transcriptase reaction temperature was 65 • C for 30 min to reduce the secondary structure and to improve specificity.Negative controls including no template (water), no primer, no transcriptase, and the positive control (DWV symptomatic honey worker bees) were included in each set of RT-PCR reactions.The subsequent PCR amplification was carried out using a primer pair consisting of the tag only together with a virus-specific upstream primer, DWV-R B23; see Table 1,reviewed in [21].This RT-PCR amplicon refers to the viral-RNA-dependent RNA polymerase.Briefly, 5 µL of cDNA from each sample was treated with ExoSAP-IT (Applied Biosystems, Waltham, MA, USA) to clean up the cDNA.PCR reactions were performed with Phusion High-Fidelity DNA Polymerase (New England BioLabs, Ipswich, MA, USA).PCR was briefly conducted at 98 • C for 30 s, followed by 35 cycles at 98 • C for 10 s, 55 • C for 30 s, and 72 • C for 30 s, with a final 72 • C extension for 10 min.The PCR products were visualized in 1.5% agarose gel containing 0.5 µg/mL of ethidium bromide.PCR products with a 451 bp size were purified with ExoSAP-IT (single band at 451 bp) or excised (with unspecific bands) and, subsequently, gel-purified with the QIAEX II Gel Extraction Kit (Qiagen, Germantown, MD, USA/Hilden, Germany).We also used a ten-times dilution of the template to minimize the chance of non-strand-specific cDNA via participation of the residual tagged cDNA primer [22].The Sanger sequencing process was the same as previously described. Videos of DW and NW S. 
Videos of DW and NW S. invicta Alates
Eight videos depicting both DW and NW male and female alates were captured from six lab colonies and two field colonies. All videos were captured using a Keyence VHX 5000 (Itasca, IL, USA), except for one, for which a Samsung Galaxy J7 Star mobile phone was used.

Symptoms of DW Alates of S. invicta
From November 2021 to January 2023, 29 colonies were collected in Washington Co., MS; 23 colonies produced alates, of which 17 (74%) produced alates with visible wing deformities. We sampled an additional 14 colonies (NBCL 1 to 14) with alates (larvae, pupae, and adults) from August 2022 to June 2023, 9 of which had DW alates present (65%) (see Table S1 in the Supplementary Materials for colony details). Wing deformity severity ranged from a single crumpled wing tip to fully crumpled wings for both male and female alates (Figures 1 and 2, respectively). Figure 1B represents the degree of wing deformity most often found in the population of DW male alates observed thus far, with a portion showing more severe wing deformity, as in Figure 2C, or only wing tip deformity, as seen in Figure 1C. Additionally, Figure 1D shows a melanized male alate displaying deformed wings (top) compared to a specimen with normal wings, both from the same colony collected at the same time. We observed DW as early as the non-melanized pupal stage in male alates (images not shown).

The number of alates displaying DW in individual laboratory colonies varied greatly, ranging from 1.4% to 15%. The percent DW for both male and female alates is based on a specific period and does not represent percentages for the lifetime of the lab colony. Daily observations of adult alates were made throughout the summer and autumn of 2022 and from April to June 2023. The percent DW data can be found in Table 2. Laboratory Colony 8 had the greatest percent change in DW alates: from 21 August 2022 to 27 September 2022, 606 female alates were collected, with 6 showing wing deformity (~1%), and 11 male alates, with 1 showing DW (9.1%). Then, interestingly, from 15 October 2022 to 29 November 2022, 118 male alates were collected, 39 with a wing deformity (33%), and no DW female alates were observed from Colony 8 during the latter period.

The above-mentioned Colony 8 and other colonies, including Colony 2021, Colony 10, Colony 12, Colony 14, and Colony 1B (see Table S1 in the Supplementary Materials for colony details), which were found to have the highest proportion of severely deformed wing male alates, died out before colonies where only a slight wing deformity was identified. For Colony 8, within a month after recording a high number of DW male alates, the colony died out except for a few hundred adult workers. In [15], it was found that DWV and Linepithema humile virus 1 were both replicating in L. humile, which suggests that they are not merely being vectored as viral particles but are potentially infecting their ant hosts, making these viruses candidates for the observed population declines seen in L. humile [23,24].
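For reference, the percent-DW values quoted above for Colony 8 follow directly from the raw counts; a trivial check, using only the counts reported in the text:

```python
# Reproduce the percent-DW figures reported for Colony 8 from the raw counts.
counts = [
    ("female alates, 21 Aug-27 Sep 2022", 6, 606),
    ("male alates,   21 Aug-27 Sep 2022", 1, 11),
    ("male alates,   15 Oct-29 Nov 2022", 39, 118),
]
for label, dw, total in counts:
    print(f"{label}: {100 * dw / total:.1f}% DW")
# -> ~1.0%, ~9.1%, ~33.1%, matching the values in the text
```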
In addition to wing deformity, we identified many male DW alates displaying impaired mobility, ranging from a slow, wobbly (ataxic) gait to the inability to walk or stand. Impaired mobility was observed in alates with varying levels of wing deformity. We identified six male alates with severe wing deformity (crumpled, stubby wings, <1 mm in length) that nevertheless displayed no impaired mobility. On the other hand, we observed several male alates with altered mobility presenting with deformity of only a single wingtip. Interestingly, many (but not all) DW male alates presented a noticeably slower gait compared to healthy NW alates reared in the same colonies, as determined by multiple observers. The slow, wobbly (ataxic) gait is plainly visible in the Supplementary Videos. Although the specimen number was small (38 specimens), female alates did not display noticeably impaired mobility. In addition, no workers were observed to have altered mobility or leg paralysis. In Colony 8 (see above), 14 of the 39 DW male alates had some level of altered mobility (legs), and a single alate was unable to move its wings. We identified four male alates that were able to move all legs but were unable to stand.

* All lab colonies were collected in the vicinity of (33.160 N, 90.920 W). All alates from each colony were removed and counted on multiple collection dates within the collection period.

Eight videos show both male and female alates displaying both wing and leg deformities concomitant with altered mobility. All videos are present in the Supplementary Materials, along with a detailed description of the contents of each video.

Identification of DWV in S. invicta DW Alates and Workers
Molecular analysis was performed to determine the presence of DWV and distinguish its variants in the S. invicta samples using DWV-specific primers, general primers [25], and primers specific to DWV-A [26]. We also used primers specific to DWV-B and were able to detect this subvariant in a single DW male alate from Colony 14, whereas Subvariant A was detected in multiple colonies. Since Subvariant A gave more consistent results (it was more easily detectable in S. invicta colonies) than Subvariant B, we decided, for the purposes of this study, to use Subvariant A as the focus of this paper. Ants from 13 colonies were analyzed. The results are summarized in Table 3. DWV can occur in alates of both sexes, whether or not the alates exhibit wing deformity. Workers were DWV-positive in 10 of 13 colonies. Workers from the two colonies started from new queens (G-1 and G-2) were among the positives, and the queen in Colony G-1 was also positive. We did not determine if DWV was present in every colony, nor did we look at the same castes from each colony. All Sanger sequencing results for DWV in S. invicta are contained in Supplementary Materials Dataset S1.

Wing deformity and crippling paralysis are linked to deformed wing virus (DWV) in numerous pollinator species [7,9,14]. DWV has been identified in at least 12 ant species [10,27], including S. invicta [5].
However, none of the typical deformities associated with DWV in honey bees have been documented in DWV-positive ant species. This is the first report of the co-occurrence of DWV-like symptoms and DWV in ants.

Identification of the Replicative Form of DWV in S. invicta DW Alates and Workers
To determine the presence of the replicative form of DWV in S. invicta, a modified two-step, strand-specific RT-PCR was performed [20,22] (see Section 2 and Dataset S1 in the Supplementary Materials). Viral replication of DWV can be present in workers and in male and female alates with and without the DW phenotype (Table 4). We did not examine other castes for virus replication. A representative DNA gel image (Figure 3) shows the detection of the replicative form of DWV. The Sanger sequencing results were then compared to known DWV viral sequences in GenBank by BLAST, which identified the presence of DWV in all samples sequenced; the replicative form of the virus was also detected in several samples, including DW male and female alates, NW male alates, DW non-melanized pupae, worker pupae, and adult workers (Supplementary Materials, Dataset S1). We prepared a phylogenetic tree (Figure S1) based on the sequencing data of the RNA-dependent RNA polymerase gene from the DWV replication data (16 February 2023; see Sanger sequencing data) of S. invicta.

The replicative form of DWV was also detected in NW alates in addition to DW alates. The finding that NW alates also carry the replicative form of DWV mirrors what other research groups have found in honey bees [28,29], the bumble bee Bombus terrestris [30], and the wasp Vespa crabro [31]. For the first time, the presence of replicative DWV in S. invicta has been confirmed.
Although the specimen number was small (38 specimens), we were unable to detect DWV replication in NW female alates. This finding, together with the absence of ataxia and leg paralysis in DW female alates and the lower percentage of female alates displaying deformed wings, suggests a possible sex-linked association. Additional work is needed to determine if this is the case.

Replication has been identified in only a few ant species, including Linepithema humile [10,15] and Myrmica rubra [16]. The only negative impact reported on the overall fitness of any ant species harboring DWV is differential expression of several immune response genes in L. humile [32]. To date, there have been no reports of DWV replication causing any outward morphological or behavioral changes in ants such as those previously described in other pollinators (e.g., deformed wings, ataxia, leg paralysis, or body discoloration).

We are not aware of any sex linkage between this virus and any other insect, including ants. With regard to honey bees, a recent study [33] (and the references therein) reported that emerging drones exhibited overt developmental deformities similar to those seen in the worker brood when injected with DWV as white-eyed pupae. They also reported that a percentage of drones without any outward deformities carried very high titer levels, equivalent to those of drones with outward deformities.

Conclusions
We observed the classic DWV-induced symptoms found in DWV-infected honey bees, specifically deformed wings and impaired mobility, in laboratory and field colonies of S. invicta and subsequently verified the presence of DWV (and its replicative form) in symptomatic and asymptomatic individuals using both RT-PCR and Sanger sequencing. This is the first report of DWV-like symptoms in ants co-occurring with replicating DWV. More research is required to understand the DW phenotype observed in the DWV-positive alates of S. invicta, namely to establish a direct causal link between DWV and the observed wing deformity. DW alates are unable to fly, a barrier to the nuptial flight that is necessary for S. invicta reproduction; therefore, wing deformity could potentially impact populations of S. invicta.

Supplementary Materials: The following Supporting Information can be downloaded at: https://www.mdpi.com/article/10.3390/insects14100788/s1. Table S1: Information on the S. invicta colonies used in this study [34]; Dataset S1A-H: Results of Sanger sequencing of S. invicta samples; Video S1: Description of eight videos depicting both DW and NW male and female alates from S. invicta lab and field colonies.

Funding: This research received no external funding.

Figure 1. Solenopsis invicta specimens displaying the normal wing (NW) and deformed wing (DW) phenotype. (A) Normal wing (NW) adult male alate collected from Colony 8. (B) DW male alate from Colony 2021, which was the first DW alate identified. (C) Wing tip deformity of a DW male alate collected from Colony 11. (D) Two melanized male pupae with DW (bottom) and NW (top), both collected from Colony NBCL-1 directly from the ground. All images were captured using a Keyence VHX 5000 (Itasca, IL, USA). Scale bars are 1 mm for all images except Image C, which represents 0.5 mm.
Figure 2. Solenopsis invicta specimens displaying the normal wing (NW) and deformed wing (DW) phenotype. (A) Normal wing (NW) adult female alate collected from Colony 8. (B) DW female alate with a moderate level of wing deformity, also collected from Colony 8. (C) Female DW alate with severe wing deformity, from Colony NBCL-14. (D) The same female alate as C under higher magnification. All images were captured using a Keyence VHX 5000 (Itasca, IL, USA). Scale bars are set at 1 mm for all images, except Image D, which represents 0.5 mm.

Table 2. Percentage of DW male and female alates in S. invicta laboratory colonies collected over a specific period during 2022-2023 *.

Table 3. DWV detection in various castes from colonies of S. invicta.

Table 4. Detection of DWV replication in various specimens from colonies of S. invicta.
6,272
2023-09-27T00:00:00.000
[ "Biology" ]
A prospective study of the influence of the skeleton on calcium mass transfer during hemodialysis

Background Calcium gradient, the difference between serum calcium and dialysate calcium (d[Ca]), is the main contributing factor influencing calcium transfer during hemodialysis. The impact of bone turnover on calcium mass transfer during hemodialysis, however, is still uncertain. Methods This prospective cross-sectional study included 10 patients on hemodialysis for 57.6±16.8 months, with severe hyperparathyroidism. Patients underwent 3 hemodialysis sessions using d[Ca] of 1.25, 1.5 and 1.75 mmol/L in three situations: pre-parathyroidectomy (pre-PTX), during hungry bone (early post-PTX), and after stabilization of clinical status (late post-PTX). Biochemical analysis and calcium mass transfer were evaluated, and serum bone-related proteins were quantified. Results Calcium mass transfer varied widely among patients in each study phase, with a median of -89.5, -76.8 and -3 mmol using d[Ca] 1.25 mmol/L, -106, -26.8 and 29.7 mmol using d[Ca] 1.50 mmol/L, and 12.8, -14.5 and 38 mmol using d[Ca] 1.75 mmol/L during pre-PTX, early post-PTX and late post-PTX, respectively, which was significantly different among d[Ca] (p = 0.0001) and among phases (p = 0.040). Ca gradient and delta of Ca also differed among d[Ca] and phases (p<0.05 for all comparisons), whereas ultrafiltration was similar. Serum osteocalcin decreased significantly in late post-PTX, whereas sclerostin increased earlier, in early post-PTX. Conclusions The skeleton plays a key role in Ca mass transfer during dialysis, either by determining pre-dialysis serum Ca or by controlling the exchangeable Ca pool. Knowing this could help us decide which d[Ca] should be chosen for a given patient.

Introduction
Disturbances in mineral and bone metabolism in chronic kidney disease patients (CKD-MBD) are highly prevalent and are a major cause of morbidity and mortality. Calcium (Ca) is an essential ion in the management of CKD-MBD. The serum and intracellular levels of Ca are critical to the maintenance of normal physiologic processes. Similarly, Ca balance (net intake minus output) in adults is important in ensuring no excess Ca load that may predispose to extracellular calcification. In patients with CKD not yet on dialysis, formal balance studies suggest that positive Ca balance occurs at intake levels of 800 mg/day [1,2]. Unfortunately, such studies cannot be done in patients on dialysis due to the Ca changes that occur acutely with dialysis. When patients reach the need for dialysis, Ca mass transfer from the dialysate may alter the overall balance, making it difficult to extrapolate what we know about Ca balance from pre-dialysis patients to dialysis patients. Currently, there is considerable controversy in the literature about the optimal Ca concentration in the dialysate (d[Ca]), and recommendations of a d[Ca] of 1.25, 1.5 or 1.75 mmol/L are mostly opinion-based. For those patients with suspected adynamic bone disease, some studies have shown benefits from using a d[Ca] of 1.25 mmol/L on bone and mineral parameters [3,4]. Most nephrologists and current guidelines believe that an ideal d[Ca] should provide a near-neutral Ca balance during dialysis [5][6][7]. Yet, only limited studies have evaluated the quantity of Ca that moves between patient and dialysate [8][9][10][11][12][13][14][15]. This is mainly due to technical difficulties in obtaining an accurate measurement of Ca in the spent dialysate.
Also, we must keep in mind that there are different pools of calcium in blood: the protein-bound fraction and the diffusible Ca, composed of ionized Ca (iCa) and Ca complexed with phosphate and citrate, which increases the difficulty of calculating the intradialytic Ca balance. Multiple factors are involved in Ca mass transfer during a dialysis treatment. First, the gradient between d[Ca] and serum Ca plays a major role in this process. Second, charged particles such as proteins can interfere with the transfer of calcium from the blood to the dialysate, a process known as the Gibbs-Donnan effect [16]. Third, different treatments may alter Ca levels and/or Ca balance, such as calcitriol, Ca-based phosphate binders, and calcimimetics. Finally, the skeleton also seems to impact intradialytic Ca balance: there is an exchangeable Ca pool on the bone surface composed of non-collagenous proteins with high Ca affinity that could act as a reservoir for rapid exchanges with extracellular Ca [17]. Our group has previously shown that Ca balance is dependent not only on the Ca gradient but also on bone turnover, as differences in mass transfer were observed in patients with high compared to low bone remodeling [8]. A limitation of that study was its cross-sectional design, which compared different patients at just one moment, such that specific individual factors such as age, gender, mobility, and the severity of bone disease were not considered and might have played a role in determining different Ca balances. Therefore, we designed a prospective study in dialysis patients with different states of bone turnover: pre- and post-PTX, where patients' bone remodeling goes from high to low turnover. These patients underwent consecutive dialysis sessions with different d[Ca] before PTX, during hungry bone syndrome (early post-PTX), and after stabilization of bone disease (late post-PTX).

a. Study design
This was a prospective cohort study, in which patients underwent 3 dialysis sessions with different d[Ca] in each of the 3 consecutive phases of the study:
1. Pre-PTX: severe secondary hyperparathyroidism (SHPT) while waiting for PTX (within 90 days pre-PTX).
2. Early post-PTX: during the "hungry bone syndrome", defined as the postoperative period after PTX in which there is severe hypocalcemia and hypophosphatemia requiring supplementation of Ca and calcitriol, with elevated alkaline phosphatase. All procedures in this phase were done within 14 days of PTX, and always after weaning from IV Ca infusion.
3. Late post-PTX: after stabilization of bone remodeling, defined as no need for Ca supplements or normalization of serum alkaline phosphatase.
During each phase, participants underwent 3 randomly assigned, consecutive bicarbonate-based hemodialysis sessions with different d[Ca]: 1.25 mmol/L, 1.5 mmol/L and 1.75 mmol/L. Each session lasted 4 hours, or a little longer to reach 240 min of effective dialysis time, with blood and dialysate flows of 350 and 800 mL/min, respectively. Ultrafiltration was adjusted according to each patient's dry weight.

b. Participants
Eighteen patients were included between July 2011 and July 2013. All subjects gave written informed consent to participate in the study, which was approved by our Institutional Review Board in accordance with the Declaration of Helsinki.
Inclusion criteria were: age 18 or older, CKD patients on dialysis for more than 3 months who attended the CKD-MBD clinic at Hospital das Clínicas da Universidade de São Paulo, and the presence of severe SHPT, defined as PTH over 800 pg/mL with clinical indication for PTX. All patients had the diagnosis of high bone remodeling confirmed by bone biopsy at the beginning of the study.

c. Blood and dialysate measurements
Blood samples were collected from the arterial dialysis tubing before and every 30 min during each dialysis session for biochemical analysis, which included total Ca (tCa), iCa, phosphate (P), urea, and intact PTH, using routine laboratory techniques. Blood samples for measurement of bone-related proteins were collected pre-dialysis, on the first dialysis session of each of the three phases of the study. These samples were centrifuged, aliquoted into Eppendorf tubes, and stored at -80 °C. Serum proteins were then quantified with the Multiplex Milliplex MAP kit-Human Bone Magnetic Bead Panel-HBNMAG-51K (EMD Millipore Corporation, MA, USA) assay, which quantified Dkk1, leptin, FGF-23, sclerostin, osteoprotegerin and total osteocalcin (OC). Carboxylated OC (GLA) and undercarboxylated OC (GLU) fractions were measured with an ELISA assay from Takara (Japan). Fresh dialysate samples were collected every 30 min to ensure the maintenance of Ca delivery in the dialysate. We used a partial spent dialysate collection method, which was shown to correlate very well with total spent dialysate collection [18]. With this technique, spent dialysate and ultrafiltrate were continuously sampled by a reversed automatic injection pump, located in the waste tubing just before the drain, throughout the complete dialysis procedure at a rate of 1 L/h, as previously described [18][19][20]. This system ensured sampling of a constant volume of fluid, ejecting it into a 5 L capacity container. Samples of the pooled fluid were analyzed every 30 min for total Ca measurement. At the end of the procedure, all diverted fluid was homogenized and three samples were collected for calculation of Ca mass transfer.

d. Ca mass transfer and Ca gradient
Ca dialysate mass transfer (the net amount of Ca put into or taken out of the dialysate) was calculated using the formula: Ca mass transfer = [final dialysate volume (L) × final dialysate total Ca (mmol/L)] - [dialysate volume (L) × pre-capillary fresh dialysate Ca (mmol/L)]; where: final dialysate volume = dialysate volume (L) + ultrafiltrate volume (L); dialysate volume = 4 h × 800 mL/min = 192 L (fixed); ultrafiltrate volume = adjusted according to the patient's dry weight; final dialysate Ca = average of the three total Ca measurements in the final homogenized diverted spent dialysate; pre-capillary fresh dialysate total Ca = Ca measured in the fresh dialysate in the pre-filter capillary. The Ca gradient, the difference between blood and dialysate calcium concentrations, was calculated with the following formulas: tCa gradient = total serum Ca pre-dialysis (mmol/L) - initial pre-capillary fresh dialysate Ca (mmol/L), and iCa gradient = ionized serum Ca pre-dialysis (mmol/L) - initial pre-capillary fresh dialysate Ca (mmol/L).
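For clarity, the mass-transfer and gradient definitions above can be written as a short calculation; the sketch below simply restates the formulas in the text, and the example numbers are placeholders, not study data.

```python
# Direct restatement of the Ca mass transfer and Ca gradient formulas in the text.
# Example values are illustrative placeholders, not data from the study.

def ca_mass_transfer(final_dialysate_ca, fresh_dialysate_ca,
                     ultrafiltrate_volume_l, dialysate_volume_l=192.0):
    """Net Ca appearing in the spent dialysate (mmol); a negative value means
    Ca moved from the dialysate to the patient."""
    final_volume = dialysate_volume_l + ultrafiltrate_volume_l
    return final_volume * final_dialysate_ca - dialysate_volume_l * fresh_dialysate_ca

def ca_gradient(serum_ca_pre, fresh_dialysate_ca):
    """Difference between pre-dialysis serum Ca and fresh dialysate Ca (mmol/L)."""
    return serum_ca_pre - fresh_dialysate_ca

print(ca_mass_transfer(final_dialysate_ca=1.30, fresh_dialysate_ca=1.25,
                       ultrafiltrate_volume_l=2.0))   # placeholder inputs
print(ca_gradient(serum_ca_pre=2.30, fresh_dialysate_ca=1.25))
```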
e. Statistical analysis
Continuous variables were expressed as mean ± standard deviation or median and percentiles (25; 75), according to the D'Agostino & Pearson omnibus normality test. Categorical variables were expressed as N and percentage. Analysis of variance (ANOVA) for repeated measures or the alternative nonparametric Friedman test was used to compare variables at the three different d[Ca] in each phase of the study. ANOVA or the alternative Kruskal-Wallis test was used to compare variables among the three phases. Post-tests were done as appropriate. The relationship between independent variables and Ca mass transfer was assessed by Spearman correlation. General linear model (GLM) repeated measures analyses were run to determine mean Ca mass transfer differences among the three d[Ca] over time, to examine differences between phases, and to test the interaction between factors. Statistical analysis was performed with GraphPad Prism 5.0 (CA, USA) and SPSS 21.0 (SPSS Inc., Chicago, IL). Significance was assigned at p values < 0.05.

Results
For study purposes, only the 10 patients who completed the 3 phases of the protocol were analyzed, as described in Fig 1. Table 1 shows the characteristics of the study population. They were relatively young, 60% were men, most of them had been on dialysis for more than 4 years, and they had clinical and laboratory manifestations of SHPT. Even though no patient had a history of bone fractures, more than half complained of bone pain. During the dialysis sessions, there were no serious adverse events.

a. Ca mass transfer during dialysis
Based on the changes in total and ionized calcium from pre- to post-hemodialysis (ΔtCa and ΔiCa, respectively), we would expect a neutral or negative Ca balance using a d[Ca] of 1.25 mmol/L in the pre-PTX and early post-PTX phases, and a positive balance in all other situations tested in our protocol. However, there was a wide variation in Ca mass transfer among patients (Fig 2A). The amplitude of variation occurred even at similar d[Ca] and changed through the study phases, confirming the hypothesis that it is very difficult to predict Ca mass transfer based exclusively on the d[Ca].
Table 2
The influence of bone remodeling on Ca mass transfer was established by GLM analysis, as during the late post-PTX phase there was a significantly higher Ca mass transfer compared to the pre-PTX and early post-PTX phases (Fig 3). Gathering all 90 dialysis sessions, we found that Ca mass transfer correlated with pre-dialysis iCa (r = -0.52; p < 0.

b. Ca gradient, iCa and PTH changes during dialysis
The variations in tCa, iCa and PTH from pre- to post-dialysis (ΔtCa, ΔiCa and ΔPTH, respectively) are shown in Table 2, Fig 4 and S2 Fig. ΔtCa and ΔiCa were significantly different

c. Serum protein analysis
Serum OC, as well as its fractions GLU and GLA, decreased only in the late post-PTX phase, as seen in Table 3. Conversely, serum sclerostin increased during the early post-PTX phase and remained elevated in the late post-PTX phase. No significant changes were observed in the serum concentration of any other protein.

Discussion
In the present study we evaluated the mass transfer of Ca from dialysis at different levels of d[Ca] during different states of bone remodeling. The results demonstrate that the blood Ca remains the most important factor in determining the Ca gradient, which in turn affects net Ca mass transfer. The Ca gradient is determined by the difference between serum Ca and d[Ca], and the large variation in Ca in the same patient during each treatment will thus alter the gradient and, in turn, the Ca mass transfer. Previous studies have shown that the Ca gradient is more accurate when based on total Ca, since it includes the complexed and diffusible fraction [15,18].
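The paired comparisons and the gradient-to-mass-transfer correlation behind these statements follow the nonparametric tests named in the statistical analysis section. A generic illustration of how such tests are run (with made-up numbers, not the study data) is sketched below.

```python
# Generic illustration of the nonparametric tests named in the statistical
# analysis section, run on made-up numbers (NOT the study data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Ca mass transfer for the same 10 patients at the three d[Ca] (paired design)
mt_125, mt_150, mt_175 = rng.normal(size=(3, 10))

# Friedman test: does mass transfer differ among the three dialysate Ca levels?
chi2, p_friedman = stats.friedmanchisquare(mt_125, mt_150, mt_175)

# Spearman correlation: Ca gradient vs. mass transfer pooled over all sessions
all_mt = np.concatenate([mt_125, mt_150, mt_175])
gradient = 0.1 * all_mt + rng.normal(scale=0.05, size=all_mt.size)  # fake, correlated values
rho, p_spearman = stats.spearmanr(gradient, all_mt)

print(f"Friedman: chi2 = {chi2:.2f}, p = {p_friedman:.3f}")
print(f"Spearman: rho = {rho:.2f}, p = {p_spearman:.3g}")
```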
Interestingly, the Ca gradient to mass transfer relationship (Fig 2B and S1 Fig) was similar whether ionized Ca or total Ca was used. This should make it easier for clinicians to individualize the approach to patients and optimize d[Ca] to avoid excess Ca mass transfer. The state of bone remodeling also plays a role in Ca mass transfer, although the relationship was far more complicated. This may be because it influences serum Ca, or more directly because it controls the Ca available to be dialyzed. We hypothesized that the skeleton has a surface compartment that "buffers" ionized Ca and provides acute buffering during dialysis, which would significantly impact the Ca mass transfer. However, we did not see large differences in Ca mass transfer among the three phases of bone remodeling (Fig 2A). Indeed, we were expecting a positive Ca mass transfer during the early post-PTX phase due to a continuous influx of Ca into bone driven by rapid bone formation/mineralization rather than bone resorption, which was not observed. In 1994, Kurz et al. [21] performed double radiolabeled Ca studies and found that acute Ca accretion (bone uptake) was greatest in patients with high-turnover bone disease compared to either mixed uremic osteodystrophy or low-turnover bone when studied on a non-dialysis day. However, the net Ca retention, i.e., the fraction of the intravenously administered Ca retained 4 weeks after injection, was not different among the patients in the different bone histology groups. The latter may imply that dialysis has more of an impact on net Ca mass transfer than the underlying bone histology, and therefore the ability to detect acute bone buffering may be limited during dialysis. Our findings are consistent with those of Sigrist et al. [11] in that the greatest predictor of Ca mass transfer during a dialysis session was the Ca gradient. However, based on our results, the role of bone, even if indirect, should not be neglected. Supporting our hypothesis, Talmage et al. [22] demonstrated in parathyroidectomized rats that bone was able to continuously supply calcium to the extracellular fluid at high rates during calcium-free peritoneal dialysis. The baseline tCa and iCa levels were similar in the pre-PTX and early post-PTX phases of the study. We hypothesize this may be explained by the different doses of Ca salts and calcitriol in each of the phases. In contrast, in the late post-PTX phase, when blood levels were likely more stable and required less medication, the total and ionized Ca levels were nearly 0.45 and 0.25 mmol/L lower, respectively. The determinants of the serum Ca level include not only Ca intake but also the ability of bone to regulate Ca levels. While bone remodeling may take weeks to months to change in response to PTH, there is a buffering capacity of bone due to surface proteins such as OC. In the present study, in the late post-PTX phase, we found much lower levels of OC (total, carboxylated, and undercarboxylated) and lower PTH. PTH may, in part, regulate the abundance of these surface proteins. Talmage et al. [17] have suggested that PTH, besides stimulating osteoclastic bone resorption, could act by changing the conformation or the amount of OC, or by removing interfering substances [13]. The importance of these surface proteins in taking up Ca was demonstrated in the study by Kurz et al. detailed above [21]. The authors found that the bone Ca accretion rate was tightly correlated with OC and alkaline phosphatase and less so with PTH.
Similarly, our group [8] has previously shown in cross-sectional analyses that Ca mass transfer demonstrated wide variability. However, multivariate analyses suggested that both OC levels and PTH levels could explain, at least partially, this variability. In the present study, each patient was studied with different d[Ca] and different levels of bone remodeling, and the results suggest that the important role of bone may have more to do with blood Ca levels than with acute fluxes during dialysis. In patients in the late post-PTX phase, there is a lower serum Ca level and a decrease in bone buffering capacity due to reduced bone surface proteins. This may explain why there was less variability in Ca mass transfer regardless of the d[Ca]. Thus, both long-term bone remodeling and acute bone buffering may be limited post-PTX, resulting in lower ambient Ca levels. The present study has some limitations. First, the sample size was relatively small. However, the study design could partially overcome this problem, as patients were their own controls. Also, we enrolled only patients with SHPT, and thus our results may not apply to patients with mild hyperparathyroid bone disease. In addition, we used standard thrice-weekly hemodialysis, and our results may not be applicable to other, more intensive dialysis regimens, where the ideal d[Ca] is still debatable [23,24]. We also did not enroll any diabetic patients, probably because these patients commonly have low, rather than high, turnover bone disease and rarely require PTX. Also, although both iCa and tCa correlated with Ca mass transfer, we are aware of the existence of different pools of calcium in blood and of the fact that diffusible Ca can be complexed with phosphate and citrate, which makes it difficult to obtain an accurate measurement in the spent dialysate. Our study also has strengths, as it provided new insights into the Ca mass transfer process during hemodialysis, describing not only the role of the Ca gradient but also showing the inter- and intra-patient variation according to bone turnover status. In summary, our results showed that Ca mass transfer during hemodialysis is highly variable but depends mostly on the given ionized (or total) Ca during the treatment. The latter, in turn, may depend on the bone's ability to regulate Ca levels. We believe our results suggest that d[Ca] should be determined based on the patient's serum Ca level and net intake of dietary Ca and Ca-containing phosphate binders. In patients receiving exogenous Ca from phosphate binders, the goal would be a negative Ca mass transfer in order to maintain a more neutral overall balance. This can only be accomplished if the dialysate Ca is less than the patient's serum level. Fortunately, our results were similar with total Ca and ionized Ca, making this approach more suitable for clinical practice. While not tested in this study, our findings also suggest that in patients receiving a calcimimetic, with lower serum Ca, the use of a higher (than serum) Ca dialysate may lead to greater net Ca mass transfer. Taken together, our study suggests that the standard practice of choosing only one d[Ca] concentration per dialysis unit, a "one size fits all" approach, does not work. Practitioners prescribe ultrafiltration, sodium, and bicarbonate based on an individual patient's weight and laboratory values, and we believe Ca should be similarly managed based on pre-dialysis Ca, d[Ca] and bone remodeling status.
4,627
2018-07-30T00:00:00.000
[ "Medicine", "Biology" ]
Thermo-Optoplasmonic Single-Molecule Sensing on Optical Microcavities

Whispering-gallery-mode (WGM) resonators are powerful instruments for single-molecule sensing in biological and biochemical investigations. WGM sensors leveraged by plasmonic nanostructures, known as optoplasmonic sensors, provide sensitivity down to single atomic ions. In this article, we describe that the response of optoplasmonic sensors upon the attachment of single protein molecules strongly depends on the intensity of the WGM. At low intensity, protein binding causes red shifts of WGM resonance wavelengths, known as the reactive sensing mechanism. By contrast, blue shifts are obtained at high intensities, which we explain as thermo-optoplasmonic (TOP) sensing, where molecules transform absorbed WGM radiation into heat. To support our conclusions, we experimentally investigated seven molecules and complexes; we observed blue shifts for dye molecules, amino acids, and anomalous absorption of enzymes in the near-infrared spectral region. As an example of an application, we propose a physical model of TOP sensing that can be used for the development of single-molecule absorption spectrometers.

INTRODUCTION
Photonic sensing of single molecules is becoming a well-established scientific direction, providing powerful instruments for biological and medical sciences. −7 Examples of different physical mechanisms for single-molecule studies include single-molecule imaging by optical absorption, 5,6 photothermal detection schemes, 8 plasmon-based photothermal spectroscopy, 9 light-scattering-based techniques, 10 manipulation with optical tweezers, 1 fluorescence microscopy, 11 and several others. A noticeable contribution to the optical detection of single molecules has been brought by whispering-gallery-mode (WGM) resonators. Such resonators were initially described by Lord Rayleigh in 1910; 12 to date, optical microresonators are known for having unsurpassed quality (Q) factors, up to 10¹⁰, 13 making them very sensitive to small environmental perturbations. Their basic sensing principles (resonant mode change evaluation) exploit the seminal theory proposed in the 1940s by Bethe and Schwinger. 14 One of the most common shapes of optical microresonators used for biosensing is the sphere, since spheres are relatively easy to fabricate from standard optical fibers and possess high Q-factors. 4 However, their effective mode volumes are relatively large: for high-Q spherical resonators of up to 100 μm in diameter, the mode volume at near-infrared probing wavelengths reaches ∼10³ μm³, which was a limiting factor for detecting molecules below the size of a monolayer. 15,16 To achieve better localization of the probing light, it was proposed to decorate WGM resonators with metal nanoparticles of ∼10 nm, supporting localized plasmonic oscillations. 17 WGM resonators coupled to plasmonic nanoparticles, known as optoplasmonic sensors, form the basis of new applications of WGMs in sensing. Optoplasmonic single-molecule sensing becomes feasible due to the proportional perturbation of the optical microcavity induced by polarizable molecules like proteins, in tandem with the near-field enhancement of a plasmonic nanoparticle such as a plasmonic nanorod. 2,3 Recent examples of optoplasmonic sensor applications have demonstrated the detection of molecular movements in solutions diluted to attomolar concentrations 18 and the study of single-molecule thermodynamics and conformational changes of proteins. 19,20
There are bright prospects for single-molecule studies with optoplasmonic WGMs, from advancing single-molecule investigations to specific spectral fingerprinting of molecules. 21 Notably, all-dielectric WGM microtoroidal resonators have already been used for single-particle photothermal absorption spectroscopy of nanoparticles 22 and single molecules. 23 Despite this, the findings regarding single molecules, as reported by Armani et al. in their publication, were subsequently subjected to additional scrutiny through theoretical calculations in refs 24 and 25. We propose single-molecule detection on the optoplasmonic WGM sensor using the thermo-optical effect initiated by single molecules binding to a plasmonic nanorod. −28 Instead, our approach demonstrates optical single-molecule sensing at comparatively higher power levels, leading to the discovery of the thermo-optoplasmonic (TOP) biosensing mechanism. Indeed, optoplasmonic sensing experiments have mostly been performed at low intensities of WGM excitation (1−100 μW). Nevertheless, it is important to highlight that optoplasmonic sensors typically exhibit a linear response to molecules binding to plasmonic nanoparticles. This indicates that the sizes, numbers, and optical properties of the objects being studied lead to proportional red (or blue) WGM resonance wavelength shifts based on their polarizability. Specifically, molecules such as DNA and proteins with an excess polarizability in water induce a red shift in the resonance wavelength, known as the reactive sensing mechanism. 29,30 Herein, we reveal that increased intensity of the WGM leads to disproportional and sign-changed resonance wavelength shifts in optoplasmonic single-molecule detection, which subsequently can be used to estimate the absorption cross-section of single molecules. For this, we built an optoplasmonic sensor with WGMs excited at a near-infrared wavelength and gold nanoparticles with near-infrared plasmon resonances to study single-protein attachment events. By changing the parameters of light coupling to the WGM resonator, the Q-factor of the sensor, and the exciting intensities, we achieve a high intensity of light that activates thermal hotspots when single proteins attach to the plasmonic nanoparticle, providing information about their absorption. The universality of this technique is confirmed directly via studying binding events for seven types of molecules and complexes: unadulterated proteins, Alexa Fluor 790 (Alexa) conjugated proteins, pure solution-based Alexa molecules, amino acid molecules, and solution-based IRDye 800CW (IRDye) molecules (Table 1).

RESULTS
Optoplasmonic Sensing of Proteins. Four protein samples were investigated at the first stage: 3-phosphoglycerate kinase (3PGK), adenylate kinase (Adk), and 3PGK and Adk conjugated with Alexa, respectively; protein labeling and other experimental protocols are provided in the Methods. In our experiments, we recorded the attachment events for each molecule type within the single-molecule regime of the optoplasmonic sensors. The confirmation of the single-molecule regime was established by examining survival plots, as detailed in the Supporting Information, similar to the approach described in ref 3.
The principal experimental scheme is illustrated in Figure 1a. Spherical WGM resonators were made by melting SMF-28 optical fibers with a CO2 laser. To achieve high Q-factors of the WGMs, the radii of the resonators were set to 45 ± 8 μm. WGMs were excited with the 780 nm emission of a cw tunable diode laser via a prism coupler over a wide range of power levels (0.01−5.5 mW) and coupling efficiencies (6−45%). The chamber, containing the WGM resonator and of ca. 300 μL volume, is formed of a polydimethylsiloxane (PDMS) polymer sandwiched between the prism and a microscope cover glass slide. The laser was connected to a fast data acquisition card (DAQ), which synchronizes the laser wavelength scanning (50 Hz) and a photodiode with a PC via a LabVIEW program when recording the transmission spectrum. WGM frequency shifts were tracked in a spectral range of several picometers around a resonance line, while changes in the resonance full width at half-maximum (fwhm) were also recorded. The system allows one to achieve ∼1 fm spectral resolution and 20 ms time resolution. We used an established, one-step wet-chemical procedure for attaching gold nanorods to the silica microsphere to assemble the optoplasmonic sensor, based on Baaske et al. 2 The attachment of ∼5 plasmonic gold nanoparticles, with a longitudinal LSPR (localized surface plasmon resonance) peak at 780 nm (Figure 1b), was detected from step signals in the sensor response, mediated by a low-pH HCl solution, as the first step of the experiment. The size of the nanorods was 10 nm × 38 nm, providing a negligible effect on WGM propagation; therefore, we did not observe reflected modes or mode splitting. 32 Nanorod binding increases fwhm values by roughly 100 fm (i.e., slightly reducing Q-factors), depending on the nanorod orientation upon attachment (Supporting Information), though it allows WGM sensors to be capable of detecting single molecules. In the next step of the experiment, analyte molecules (Table 1) chemically react with the attached nanorod by a thiol reaction with gold, except for 3PGK, which utilizes a nickel-NTA linker (see the Supporting Information).

3PGK and 3PGK−Alexa. Single-molecule biosensing is based on tracking WGM resonance changes, Δλ, under attachment events (Figure 1c). In our experiments, 3PGK molecules from Geobacillus stearothermophilus were selectively bound to the gold nanorods. Generally, under attachment events, WGM resonances are either red (Δλ > 0) or blue (Δλ < 0) shifted in relation to their initial positions, depending on the values of the nanoparticle or molecular polarizability. 33 However, our experiments revealed that wavelength shifts can be red or blue for the same molecules, where these shifts are mainly governed by changes of the local refractive indexes. Figure 2 shows the dependence of the sign of the wavelength shift on the intensity and represents an intensity-dependent diagram of single-molecule sensing with optoplasmonic sensors. For this diagram, the local evanescent intensity I at the tips of the nanorods (the location where the binding of single molecules can be detected 2,3) was calculated by considering the effective mode volumes of the TE equatorial modes, the power of the exciting beams, the coupling values, the WGM Q-factors, and the field enhancement around the plasmonic nanorods (see the Methods). The figure can be subdivided into three sections: reactive sensing (red shifts), near-zero shifts, and blue shifts.
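The full intensity calculation is given in the authors' Methods and is not reproduced here. As a rough, illustrative sketch only, with placeholder parameter values rather than the authors' calibration, the order of magnitude of the hotspot intensity can be estimated from the intracavity stored energy:

```python
import math

# Rough order-of-magnitude estimate of the evanescent intensity at a nanorod tip.
# All parameter values below are illustrative placeholders, not values from the paper.
c = 3.0e8            # speed of light, m/s
wavelength = 780e-9  # WGM wavelength, m
n_water = 1.33       # refractive index of the aqueous surroundings

def hotspot_intensity(power_in_W, coupling, Q, V_eff_m3, enhancement):
    """Estimate the local intensity (W/m^2) at the plasmonic hotspot.

    Assumes the energy stored in the resonator, U = coupling * P * Q / omega,
    fills the effective mode volume, and that the plasmonic nanorod boosts the
    local field intensity by a factor `enhancement`."""
    omega = 2 * math.pi * c / wavelength          # angular frequency, rad/s
    U = coupling * power_in_W * Q / omega         # stored energy, J
    energy_density = U / V_eff_m3                 # J/m^3
    return enhancement * (c / n_water) * energy_density

I = hotspot_intensity(power_in_W=1e-3, coupling=0.3, Q=1e6,
                      V_eff_m3=1e3 * 1e-18, enhancement=100)  # V_eff ~ 10^3 um^3
print(f"~{I * 1e-4 / 1e6:.0f} MW/cm^2")  # convert W/m^2 -> MW/cm^2
```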
i. Reactive Sensing Mechanism. The first section of the diagram (Figure 2a, i, and Figure 2b), ending at I ∼ 60 MW cm⁻², corresponds to the conventional reactive sensing mechanism, 2,34 in which binding events cause changes of the resonant wavelength: positive wavelength shifts, Δλ > 0, as presented in Figure 1c. During attachment events, 3PGK molecules cause a strong response, changing the polarizability in the evanescent field of the plasmonic nanorod coupled to the WGM resonator. At such intensity levels, 3PGK binding events demonstrate, independent of the evanescent intensity, wavelength shifts Δλ equal to 6 fm, with standard deviations of the signals (σ) within 1 fm (Figure 2c). Extracting signals from the WGM transmission spectra measured with optoplasmonic sensors is described in the Methods. Notably, the attachment events of molecules to nanorods at low intensities of the WGM do not significantly affect the spectral width (fwhm) of the resonances (Figure 3d). The variation of the step heights of the wavelength shifts seen for resonators of the same size is attributed to differences in the nanorod binding location with respect to the WGM field profile and in the binding orientation of the nanorod with respect to the WGM polarization. 2

ii. Near-Zero Wavelength Changes. Region ii of Figure 2a reflects significant changes in the mechanisms of sensing, where the wavelength shifts become near-zero (Figure 3a, b), or protein attachment becomes effectively undetectable by resonance wavelength changes. Near-zero wavelength changes occur at specific WGM intensities where the reactive sensing (positive effect on Δλ) and TOP sensing (negative effect on Δλ) regimes balance, canceling each other out and resulting in a net-zero effect on the WGM Δλ. However, the single-molecule attachment events were still observable via step-like changes of the fwhm (Figure 3a, e). The fwhm of the resonances becomes wider by 3 fm on average (Figure 3c), reflecting increased losses of the WGM resonator. Note that the fwhm at lower intensities was measured as nonresponsive to the binding events of 3PGK molecules (Figure 3d). The difference in fwhm behavior can be elucidated by considering the Q-factors of WGM resonators. These take into account several factors, most notably scattering, material losses, losses related to the resonator radii, and radiation losses. 20 Attachment of 3PGK to optoplasmonic sensors does not lead to changes in the resonator's geometry and therefore, at low intensities, does not contribute to resonator scattering, so the fwhm will not change during binding events (we presume that molecular scattering under attachment events could change the values of the fwhm, but within 3σ). Q-factors and fwhm are not usually related to absorption of WGM energy by the molecules under test. 20 However, in the present case, where fwhm changes occur at high WGM intensities when 3PGK binds (Figure 3f), absorption of WGM energy is the only factor that can be considered to cause the fwhm response. This demonstrates that thermo-optical sensing can therefore be used for direct estimation of absorption by the molecules.
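A complementary back-of-envelope relation (our sketch, not taken from the paper) connects the measured linewidth step to an absorption cross-section: the bound molecule adds a loss channel Δ(1/Q) = Δfwhm/λ and dissipates a power σ_abs·I_local, which, for a stored energy spread over V_eff and a hotspot enhancement f, gives σ_abs ≈ 2π n V_eff Δfwhm/(f λ²). The parameter values below are illustrative only.

```python
# Hedged back-of-envelope (not from the paper): relate the linewidth step to an
# absorption cross-section. The bound molecule adds a loss channel
# d(1/Q) = dFWHM / lambda and dissipates P = sigma_abs * I_local, which yields
# sigma_abs ~ 2*pi*n*V_eff*dFWHM / (f * lambda**2) for a hotspot enhancement f.
import math

def sigma_abs_from_linewidth(d_fwhm_m, V_eff_m3, enhancement,
                             wavelength=780e-9, n_medium=1.33):
    return 2 * math.pi * n_medium * V_eff_m3 * d_fwhm_m / (enhancement * wavelength**2)

# Illustrative numbers only: a 3 fm linewidth step, ~10^3 um^3 mode volume, f ~ 100
sigma = sigma_abs_from_linewidth(3e-15, 1e-15, 100)
print(f"sigma_abs ~ {sigma * 1e4:.1e} cm^2")   # m^2 -> cm^2
```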
iii. Blue Shift. Effectively, the second (ii) and third (iii) regions of Figure 2a represent the same mechanism of WGM resonance changes (i.e., the absorptive properties of molecules). The third region of the diagram (Figure 2a, iii) represents blue WGM resonance wavelength shifts observed for binding events of 3PGK molecules to Au nanorods at higher local intensity (I > 89.3 MW cm⁻²). The dependence of the sign of the wavelength shifts on the intensity reveals a mechanism of single-molecule detection with optoplasmonic sensors that is related to the strong mutual influence of the intensity of the WGMs and the molecules under test. Binding events of 3PGK molecules to the nanorod appear as blue shifts (Δλ < 0) accompanied by partial absorption of the optical energy by different amino acids of the 3PGK molecule (Δfwhm > 0); see Figure 3f. Under normal conditions, when intensities are relatively low, 3PGK molecules have a strong absorption band centered at 276 nm, which corresponds to absorption by the aromatic amino acids of 3PGK: tryptophan, phenylalanine, histidine, and tyrosine (see the Supporting Information). Our results reveal that, when a single 3PGK molecule is attached to a single plasmonic nanoparticle, it also demonstrates absorption bands in the near-infrared spectrum. One plausible mechanism for these new bands relies on the tryptophan residues. In far-field spectroscopy, tryptophan molecules have absorption bands in the UV range, around 280 nm. However, tryptophan molecules may demonstrate red-shifted bands under strong perturbation caused by Trp radical formation on a plasmonic nanoparticle (please see details in the Tryptamine section below). 35 In our model, after a 3PGK binding event, energy is absorbed by the Trp radical in the 3PGK molecule, which relaxes to the ground state by heat release, causing a local temperature increase of the water around the binding location, the time scale of which is unresolved because of the limited time resolution of the sensor (20 ms). Local heating of the water makes the local refractive index smaller (water has a negative dn/dT). Such localized heating of water results in blue resonance wavelength shifts. This reveals the sensing mechanism, which is caused by thermal changes initiated by the binding of molecules on higher-intensity optoplasmonic sensors, or TOP sensing. TOP sensing, despite employing a distinct plasmon-enhanced mechanism and resulting in the reported negative wavelength shifts, shares similarities with the thermo-optic mechanism proposed by Armani et al. 23,24 Theoretical confirmation of the single-molecule findings in the context of Armani et al.'s work is still ongoing. 24,25
Including the local thermal effect, the average value ⟨Δλ⟩ of the resonance wavelength shifts induced by single protein molecules binding within the plasmon-enhanced near field (aka plasmonic hotspot) of the optoplasmonic sensor is formulated as eq 1 (see the Methods), with the effective mode volume V_eff of the WGM, the excess polarizability α_ex and absorption cross-section σ_abs of the protein, the refractive index n_w(T) of water at temperature T, the water's thermal conductivity k_con (in units of W m⁻¹ K⁻¹), the effective volume of the heated water V_w, and the effective heat-transferring length ξ. It should be noted that, unlike the conventional definition of the mode volume of a WGM, 36 V_eff here is defined based on the local light intensity at the nanorod's hotspot (see the Methods), and it already includes the LSPR-induced enhancement of the local electric-field intensity of the gold nanorod. The left-hand side of eq 1 is related entirely to the microcavity (e.g., the mode volume and the resonance shift), and all environmental perturbations appear on the right-hand side of eq 1. Two facts contribute to the resonance shift ⟨Δλ⟩ of the WGM: (i) As usual, ⟨Δλ⟩ may arise from the excess polarizability α_ex of the protein changing the local refractive index of the WGM microcavity. (ii) The bound molecule absorbs the light energy, raising the local temperature and changing the refractive index of the microcavity's surrounding medium (i.e., the aqueous buffer), resulting in an extra resonance shift. Compared to the thermal effect of water, the thermo-optic effect of the microsphere is negligible because of the small rate of change of the refractive index of the microsphere with respect to temperature (see the Methods). In the low-intensity limit I ∼ 0, 2V_eff·⟨Δλ⟩/λ approaches α_ex and is positive. As I is enhanced, the thermal-effect-induced resonance shift component grows and the positive value of 2V_eff·⟨Δλ⟩/λ is reduced. For a large enough I, 2V_eff·⟨Δλ⟩/λ becomes negative. In general, the effective heat transfer length ξ depends on the local electric-field intensity I. Under the linear approximation, we express ξ as ξ = ξ_0 + βI, where the constant ξ_0 approximates the radius of the protein molecule (i.e., the heat-transferring distance cannot be smaller than the size of the heat source) and the parameter β may be derived from the curve fitting. Substituting the typical value of n_water ∼ 1.33, the curve fitting (Figure 2a) yields the absorption cross-section σ_abs of 3PGK at 780 nm and β = 0.045 nm/(MW/cm²), with a 95% confidence interval of (0.034, 0.056) nm/(MW/cm²). The large absorption cross-section for 3PGK at 780 nm is observed for the enzyme molecules attached to plasmonic nanorods, providing enhanced near fields of optoplasmonic microcavities at sufficient optical power, suggesting that molecular transitions are excited in the TOP sensing approach that are normally weak in standard absorption spectrometry (see Table 2). Additionally, it is worth noting that the effect of the plasmon resonance shift of the gold nanorods (resulting from the single-molecule-induced resonance shift) on single-molecule sensing is negligible (see the Supporting Information).
To confirm the mechanism explained above, a series of experiments was performed for 3PGK complexes, in which a dye (Alexa Fluor 790) with a molecular absorption peak spectrally close to the WGM wavelength, i.e., 780 nm, was selected to be covalently attached to 3PGK. The result of single-molecule 3PGK−Alexa binding is plotted in Figure 2a and b as green dots. The results obtained were similar to the experiments with unlabeled 3PGK, with positive wavelength shifts at lower intensities. They have the same value, 6 fm, when sensing by the reactive mechanism at low I, because the structure and molecular weight of 3PGK were changed insubstantially by the Alexa label (Table 1). The WGM wavelength changes switch to blue shifts at higher intensities of the WGMs, but with greater magnitude than for unlabeled 3PGK. In fact, 3PGK−Alexa molecules show blue shifts at even lower intensity than nonlabeled 3PGK molecules. The values of the resonance shifts demonstrate large variation, likely related to the different positions of the molecules on the nanorod; Figure 2b shows the maximal wavelength shifts, corresponding to the position of 3PGK at the tips. Under these conditions, the previously nonobservable optical transitions in tryptophan have values of wavelength shifts quite similar to their counterparts in Alexa Fluor 790. The greater magnitude of the negative shift and the onset of TOP sensing at lower I are due to the increased absorption of the 3PGK molecules upon conjugating Alexa, creating an additive effect that enhances the TOP mechanism. We exclude from our aforementioned analysis possible fluctuations of temperature due to nanorod heating effects, because the proteins are added to the chamber and detected when the sensor is in the steady-state temperature regime. Nonetheless, the increased intensity of the WGM causes increased background heating of the nanorods (see the Supporting Information), which could make the blue shift values upon protein binding smaller at higher intensities. We also consider that the increased temperature due to WGM radiation absorption may cause an increase in the temperature of the microsphere. However, using the thermal conductivities of water, k_w = 0.6 W m⁻¹ K⁻¹, and silica, k_s = 1.38 W m⁻¹ K⁻¹, and supposing that the local temperature increase in silica is the same as the local temperature increase in water, the change of the refractive index of water is roughly 10 times larger than that of silica. According to this, the effect of the silica temperature increase on the resonance shift, which is generally a competing process, is much smaller in comparison to the water temperature increase.
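As a quick sanity check on the stated factor of ten (using textbook thermo-optic coefficients, which are our assumed values rather than numbers quoted by the authors):

```latex
\left|\frac{\mathrm{d}n_{\mathrm{water}}/\mathrm{d}T}{\mathrm{d}n_{\mathrm{silica}}/\mathrm{d}T}\right|
\approx \frac{1.0\times10^{-4}\,\mathrm{K^{-1}}}{1.2\times10^{-5}\,\mathrm{K^{-1}}}
\approx 8\text{--}10
```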
Adk and Adk−Alexa. To provide more information about TOP sensing, we investigated another protein (Aquifex aeolicus Adk) and a protein−dye complex (Adk−Alexa). The results are summarized in Figure 4. In contrast to 3PGK, Adk binding causes smaller wavelength shifts of 3 fm, because Adk molecules are roughly half the size (44 kDa for 3PGK vs 24 kDa for Adk). This smaller size also causes a weaker response when Adk−Alexa complexes attach to the sensor, at both high and low WGM intensities. Indeed, the dominant mechanism of local heating is absorption by the Alexa molecules followed by heating of the Adk−Alexa complex: the Alexa-790 dye absorbs WGM radiation and transforms its energy into heat via nonradiative relaxation. The luminescence quantum yield of Alexa in solution is about 10%; however, protein−Alexa complexes bound to the sensor have their luminescence quenched. We therefore expect the quantum yield to be only a few percent, so that the energy absorbed from the WGM is released as heat. If we take the heat capacities of 3PGK and Adk to be roughly equal and assume that the absorption and labeling efficiency of the Alexa-labeled proteins reflect their molecular weight (see the Supporting Information), then the smaller mass of Adk directly implies smaller heat absorption; i.e., blue shifts for the 3PGK−Alexa and Adk−Alexa complexes will be observed at different intensities. Note that the value of ⟨Δλ⟩ for both 3PGK−Alexa and Adk−Alexa complexes is close to −5 fm, also suggesting that Alexa absorption is the dominant mechanism. An important observation is that Adk molecules themselves, without Alexa conjugation, have no absorption around 780 nm and do not demonstrate blue shifts at high intensities. The crucial difference between the absorption of 3PGK and Adk at 780 nm is related to the presence of tryptophan in the composition of 3PGK and its absence in Adk.

Tryptamine. In order to confirm tryptophan's involvement in TOP sensing, it was desirable to test the binding of the small molecule tryptophan to the surface of the gold nanorods during WGM excitation. However, competition between the amine and carboxyl groups (Figure 5) for binding to the nanorods yielded spike-like events rather than step-like binding events. To observe Δλ in a step-like manner, a similar small molecule with the same functional group and optical properties was used: tryptamine. At pH 10, the amine group is deprotonated and a lone pair of electrons is available for interactions with the nanorod surface.

Upon binding of tryptamine to the nanorods, we found a trend of the Δλ profile very similar to that of 3PGK: intensity-dependent changes of the sign of the resonant wavelength shifts upon binding. This suggests that the TOP sensing effects of 3PGK are likely due to the intrinsic tryptophan residues present in 3PGK, and that the absence of TOP sensing in Adk is due to its lack of tryptophan residues. This likely results from effects seen similarly in surface-enhanced resonant Raman scattering (SERRS), where electron transfer from excited plasmons in nanoparticles can form species with visible excitation bands. Sloan-Dennison et al.35 demonstrated the ability of tryptophan to undergo these chemical changes when bound to plasmonic nanoparticles during Raman spectroscopy, forming a Trp−• or Trp+• species with absorption bands that span the visible range, including 780 nm. Sloan-Dennison et al.
favor the formation of the former Trp−• species, describing an electron-capture event when the indole ring of tryptophan is in close proximity to an excited plasmonic nanorod. This previous investigation also demonstrates that electron capture is possible in Trp-containing proteins. We therefore propose that the mechanism behind the apparently forbidden transitions in this case is as follows: the ≈780 nm WGM excites plasmons in the nanorods, resulting in electron transfer from the nanorod to the indole ring of tryptophan/tryptamine. The same 780 nm WGM can then excite, and allow observation of, optical transitions in the newly formed Trp−• species, which upon relaxing to the ground state releases energy as heat, resulting in the characteristic blue shifts of TOP sensing. Similarly, the intensity dependence may reflect the requirement to provide sufficient energy to allow electron transfer, which may be most efficient around 70−130 MW cm−2 for proteins.40 The greater variability for tryptamine than for the larger protein molecules can also be explained by its small-molecule nature and is likely due to tryptamine's closer position to the nanorod (see Figure S2 in the Supporting Information): local heating effects and the evanescent intensity are more strongly enhanced, so that at the same apparent intensity the wavelength shifts can show both reactive and TOP sensing.

DISCUSSION
Thermal effects in WGM nanoparticle and molecular sensing have been mentioned previously in several research works. These works mostly use the operational principles of WGM sensors, including tracking mode shifts and changes of their fwhm when sensing particles, micelles, and viruses, and show different, nontrivial responses. For example, when sensing gold nanoparticles, depending on the wavelength of the WGM, they can demonstrate increasing fwhm and either blue- or red-shifting of the resonant wavelengths.44 This revealed that not only reactive sensing but also dissipative sensing regimes are possible; the latter can be observed for objects causing losses that may be slightly heated by the laser radiation. A similar approach was used in refs 22 and 45, where an object (a polymer molecule or a nanoparticle) placed on a WGM resonator was intentionally heated with one beam, while a second laser was used to monitor WGM resonance changes. TOP sensing is also related to photothermal microscopy. The basic principle of that technique is built on registering signals that arise from slight changes of the refractive index of a sample due to the absorption of a heating light beam; the refractive index changes are measured with a second probing beam, usually of a different color. Even though TOP sensing appears similar to photothermal microscopy and can be associated with the technique described by Goldsmith's group in 2016,22 these two techniques are different. The previously developed technique was used to demonstrate single plasmonic nanoparticle sensing, relying on detecting changes in signals resulting from altering the heating of a nanoparticle by using external lasers to modulate the heating. When applied to a single (or even several) molecule bound to the nanorod, this yields changes too minute to measure effectively, well within their measurement noise.
Our approach instead focuses on near steady-state heating of the nanorod, where the addition of single molecules can induce significant shifts in the WGM by heating the water at the nanorod's tip; only a minimal heat flux is required to effect a noticeable change in the temperature of the water and, with that, a detectable refractive index change. As we show, these refractive index changes are detected with very high sensitivity on the optoplasmonic platform, which uses the established method of plasmon-enhanced WGM sensing (reactive sensing) for detecting small polarizability changes of that order. Contrary to earlier work on WGM microresonators for photothermal spectroscopy, which achieves quantitative determination of absorption cross sections with comparison to literature values, our experiments attain single-molecule sensitivity. While the previous work accurately extracts absorption cross sections from bulk measurements with absorbing polymers,45 our approach leverages the plasmon-enhanced reactive sensing mechanism to achieve high sensitivity at the level of single molecules, albeit without the high accuracy of the previous WGM measurements.

The possibilities of thermo-optoplasmonic sensing can be developed further with appropriate instrumentation, e.g., the use of different input wavelengths, broadband scanning over different frequencies of whispering-gallery modes, or the use of frequency combs.53 Initially these tasks seem technically difficult; however, they may be implemented with WGM resonators made with planar architectures or sensor-on-a-chip technologies.54 This could be a natural progression of this research and the further development of our experiments. Another direction for this mechanism could be technical improvements of flow measurements in capillaries,55 analogous to flow cytometry. This may become an important step in molecular sensing and molecular discrimination by absorption spectra, realized in real-time measurements and with very small amounts of probe. Finally, our technique may be combined with other methods of analysis: recently, Yang's group presented a combination of WGM sensing with Raman spectroscopy.56 Such a combined approach allows comprehensive information about analytes to be collected per single probe. As mentioned, the current technique is applicable to a limited set of molecules, i.e., molecules with absorption bands close to the plasmonic resonance. The technique is also limited by the applied intensities, mostly because of the potential damage to the molecules under test and the difficulty of reaching higher intensities due to optothermal broadening/narrowing effects.57
CONCLUSIONS
We have demonstrated single-molecule sensing of proteins and of proteins labeled with organic dye molecules. It was shown that WGM sensing, enhanced with plasmonic nanoparticles, depends on the intensity of the WGM modes. At low intensity, sensing occurs through the reactive mechanism: red or blue shifts of the WGM resonances depending on the polarizability and refractive index of the molecules under test. We established that, at high intensity, sensing can occur through a different mechanism: absorption of energy by the molecules under test followed by heating of the surrounding solution, causing blue shifts of the WGM. We have coined this mechanism thermo-optoplasmonic (TOP) sensing. Most excitingly, our results show that, by using thermo-optoplasmonic sensing, it is possible to determine the absorption cross-section of single molecules through its relation to the WGM wavelength shift. Equation 1 establishes the relation between the absorption cross-section of molecules and the WGM wavelength shifts. This relation serves as a model showing that optoplasmonic sensors can be used as single-molecule spectrometers. Further work tracking multiple WGM resonances could contribute to studies of the photophysical properties of an enormous number of molecules at the single-molecule level.

METHODS
Enzyme Binding to an Optoplasmonic Sensor. The optoplasmonic sensor consists of two major components, the spherical silica microresonator and plasmonic gold nanorods.6−10 Cetrimonium bromide (CTAB)-coated gold nanorods (Nanopartz A12-10-780-CTAB) with a plasmon resonance at 780 nm were attached to the WGM microspheres in 0.02 M HCl, monitored by changes in the WGM resonance wavelength (Δλ) and full width at half-maximum (fwhm). Poly-L-lysine−polyethylene glycol (PLL-PEG) can be used at this stage to prevent nonspecific binding to the surface of the silica microsphere but was found not to be necessary in this study.

3PGK (from Geobacillus stearothermophilus; for the sequence and purification methods, see the Supporting Information) and 3PGK−Alexa immobilization was performed by modifying the gold-nanorod surface with thiolated nitrilotriacetic acid (NTA). A mixture of 50 μM dithiobis(C2-NTA) (Dojindo D550), 450 μM thiol-dPEG4-acid (Sigma-Aldrich QBD10247), and 250 μM TCEP−HCl (tris(2-carboxyethyl)phosphine−HCl) was incubated for 10 min before mixing in a 1:30 ratio with 50 mM citrate buffer and 1 M NaCl and submerging the microresonator in the solution for a further 20 min in the chamber. The chamber and resonator were washed with 50 mM HEPES. The NTA molecules on the surface of the nanorods were then charged with nickel ions by submerging the microresonator in 0.1 M nickel sulfate for 2 min, and the chamber was finally washed and filled with 50 mM 4-(2-hydroxyethyl)-1-piperazineethanesulfonic acid (HEPES). A 2 μL portion of 0.1 mg mL−1 3PGK or 3PGK−Alexa was then added to the chamber while monitoring the resonance shift Δλ of the WGMs, relying on the Ni-NTA to His-tag interaction for enzyme immobilization onto the nanorod surface.

Adk (from Aquifex aeolicus; for the sequence and purification methods, see the Supporting Information) and Adk−Alexa immobilization was performed by direct covalent attachment via gold−thiol interactions, relying on a C-terminal Cys residue. To do so, the chamber was filled with 50 mM TCEP and 50 mM HEPES, 2 μL of 0.5 mg mL−1 Adk or Adk−Alexa was added to the chamber, and immobilization was monitored by shifts in Δλ of the WGMs.
Tryptamine and Dye Binding to an Optoplasmonic Sensor. Tryptamine binding was performed via amine lone-pair interactions with the gold nanorod surfaces at pH 10 in a 50 mM bicarbonate buffer. A 2 μL portion of 1 μM tryptamine (Santa Cruz SC-206065) was added to the chamber in order to observe steps in the Δλ of the WGM. Alexa Fluor 790 (ThermoFisher A30051) and IRDye 800CW (LI-COR 929-70020) binding to the sensor was performed via interactions of their sulfate groups with the gold nanorods. This was performed at pH 7.5 in 50 mM HEPES at concentrations of 13.3−26.7 nM.

Labeling of 3PGK and Adk with Alexa Fluor 790. 3PGK and Adk molecules were labeled with Alexa Fluor 790 using the succinimidyl ester form (ThermoFisher A30051), which reacts with free amine groups on the protein surface. Alexa Fluor 790 was dissolved in DMSO to 10 mg mL−1 and mixed with a 3 mg/mL solution of protein in 50 mM HEPES and 0.01 M sodium bicarbonate to a final Alexa Fluor 790 concentration of 0.833 mg/mL. The mixture was incubated with shaking, protected from light, for 1 h. To separate the protein from free Alexa Fluor 790 and exchange it into an appropriate buffer, size exclusion chromatography (SEC) was performed using a HiLoad 16/600 Superdex 75 pg column (Cytiva 28-9893-33) with an elution buffer of 20 mM HEPES, 150 mM NaCl (pH 7.5) over 1.5 column volumes. Fractions were collected and selected on the basis of correlated absorption at 280 and 700 nm and confirmed by SDS-PAGE analysis. Selected fractions were pooled and concentrated using Vivaspin 20 3 kDa MWCO PES concentrators (Cytiva 28-9323-58) by centrifugation at 3,000g for 15 min, repeated until the volume was <500 μL.

Data Processing. A graphical user interface developed in MATLAB was used for processing the WGM time traces, similarly to the Supporting Information.19 A LabVIEW program was used to record and process the WGM spectra and track the WGM resonance position. Once the WGM time traces were obtained, the data were analyzed for steps using the MATLAB GUI. First, drift correction was applied to remove slow variations of the resonance traces caused by slow temperature changes: a first-order Savitzky−Golay filter with a window length depending on the sampling rate was applied to the signal. Second, the step-like wavelength traces were analyzed in the MATLAB program to find the resonance-shift values Δλ corresponding to protein binding. A binding signal can be close to the noise level but has a higher amplitude either in wavelength or in fwhm. We count as useful signals all steps with an amplitude higher than 3σ (the standard deviation of the background) of that same sample. The value of σ was evaluated by dividing the WGM time trace into windows of N points and evaluating the standard deviation of each N-point window. Typically, the value of σ is 0.4−0.5 fm, increasing up to three-fold with increased power. The diagrams in Figures 2 and 4 combine consolidated data of 451 signals from over 50 different experiments; Figure 5 consolidates 117 signals from 10 experiments.
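The step-finding procedure just described (drift correction with a first-order Savitzky−Golay filter, background σ estimated in N-point windows, and a 3σ acceptance threshold) can be sketched as follows. This is only an illustrative outline in Python rather than the MATLAB GUI used in the work, and the window lengths are assumptions that would depend on the sampling rate.

```python
import numpy as np
from scipy.signal import savgol_filter

def find_binding_steps(trace_fm, drift_window=101, sigma_window=200, n_sigma=3.0):
    """Illustrative step detection on a WGM resonance-shift time trace (in fm).

    1) estimate and remove slow drift with a first-order Savitzky-Golay filter,
    2) estimate the background noise sigma in windows of sigma_window points,
    3) keep point-to-point jumps whose amplitude exceeds n_sigma * sigma.
    Window lengths are placeholders that depend on the sampling rate.
    """
    trace_fm = np.asarray(trace_fm, dtype=float)
    drift = savgol_filter(trace_fm, window_length=drift_window, polyorder=1)
    detrended = trace_fm - drift

    # background sigma from N-point windows (median over windows for robustness)
    n = len(detrended) // sigma_window * sigma_window
    sigma = np.median(detrended[:n].reshape(-1, sigma_window).std(axis=1))

    jumps = np.diff(detrended)
    idx = np.where(np.abs(jumps) > n_sigma * sigma)[0]
    return idx, jumps[idx], sigma
```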
Evanescent Intensity Calculation. We use a semiempirical approach to evaluate the evanescent intensity I of the WGM near the nanorods. The intensities used here are limited to I < 800 MW cm−2 due to optothermal broadening/narrowing during the laser scanning process.57,58 The field distribution E(r) of a specific WGM in the microsphere can be numerically computed by using the formulas listed in ref 36. The effective mode volume of the WGM is then derived as

V_eff = ∫ ϵ(r) |E(r)|² d³r / [Λ ϵ(r₀) |E(r₀)|²]   (2)

where ϵ(r) denotes the spatial distribution of the relative permittivity and Λ accounts for the local-intensity enhancement factor that arises from the localized surface plasmon resonance (LSPR) of the gold nanorod at the position r₀. The typical value of Λ in this work is approximately 800. Greater I values could be generated by the use of plasmonic nanoparticles with larger near-field enhancement factors and by limiting optothermal broadening/narrowing effects.20,58 It should be noted that the definition of V_eff here, i.e., eq 2, differs from the conventional one based on the maximum light intensity inside the microsphere.36 An incident beam with power P pumps the WGM. At the steady state, the intracavity photon number reaches N_in = κ_in P / (ℏω κ²). Here, κ_in and κ are the coupling and total loss rates of the microsphere, respectively, ℏ is the reduced Planck constant, ω = 2πc/λ is the angular frequency of the light, and c is the speed of light. Thus, the light intensity at the position of the nanorods is given by I = ℏω N_in c / V_eff. For each experimental data point shown in the figures, we measured the corresponding microsphere radius R, input power P, mode wavelength λ, total line width κ, and prism−microsphere coupling efficiency S. The effective mode volume V_eff is numerically computed based on R, and the prism−microsphere coupling rate follows from the coupling efficiency as κ_in = (κ/2)(1 − √(1 − S)). The evanescent intensity I can then be evaluated accordingly. As an example, for IRDye800 we measured R = 46.5 μm, P = 0.19 mW, λ = 780.029083 nm, S = 22%, and κ = 503 fm in the experiment. V_eff and κ_in are computed to be V_eff = 5.9 × 10⁻¹⁸ m³ and κ_in/κ = 0.0584, and I is then evaluated to be I = 36.2 MW/cm².
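The worked example above can be reproduced numerically. The short script below follows the relations as reconstructed in this section (κ_in = (κ/2)(1 − √(1 − S)), N_in = κ_in·P/(ℏωκ²), I = ℏω·N_in·c/V_eff) and uses the IRDye800 example values quoted in the text; it is an illustration of the bookkeeping, not the original analysis code.

```python
import numpy as np

# Example values quoted in the text for the IRDye800 measurement.
lam       = 780.029083e-9   # mode wavelength, m
P         = 0.19e-3         # input power, W
S         = 0.22            # prism-microsphere coupling efficiency
kappa_lam = 503e-15         # total linewidth expressed in wavelength, m (503 fm)
V_eff     = 5.9e-18         # effective mode volume (hotspot definition), m^3

c, hbar = 2.998e8, 1.055e-34

omega    = 2 * np.pi * c / lam                   # angular frequency, rad/s
kappa    = 2 * np.pi * c * kappa_lam / lam**2    # total loss rate, rad/s
kappa_in = (kappa / 2) * (1 - np.sqrt(1 - S))    # coupling rate obtained from S

N_in = kappa_in * P / (hbar * omega * kappa**2)  # intracavity photon number
I    = hbar * omega * N_in * c / V_eff           # intensity at the nanorod, W/m^2

print("kappa_in / kappa = %.4f" % (kappa_in / kappa))   # ~0.058
print("I = %.1f MW/cm^2" % (I / 1e10))                  # ~36 MW/cm^2
```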
Theoretical Model and Fundamentals for Single Molecule Absorption Spectroscopy. The resonance wavelength shift (from λ to λ′) of the WGM induced by a dielectric variation is expressed as59

(λ′ − λ)/λ = ∫ [ϵ′(r) − ϵ(r)] |E(r)|² d³r / (2 ∫ ϵ(r) |E(r)|² d³r)   (3)

where ϵ(r) corresponds to the relative permittivity in the absence of the perturbation and ϵ′(r) denotes the relative permittivity in the presence of the perturbation. Specific to the experiment demonstrated here, the resonance shift is caused by (i) the change of the relative permittivity at the location of the ligand protein (approximately the position of the gold nanorod, r₀) and (ii) the change of the relative permittivities of the environment (HEPES or Tris buffer) and of the microcavity due to the temperature rise (from T to T′) caused by the protein heating the water and the microsphere. In addition, the LSPR effect has already been included in the relative permittivity ϵ′(r). Thus, the resonance shift Δλ = λ′ − λ may be rewritten (a more detailed derivation can be found in ref 59) as

2 V_eff Δλ/λ = α_ex + [ϵ_w(T′) − ϵ_w(T)] V_w + [ϵ_s(T′) − ϵ_s(T)] V_s   (4)

with the excess polarizability α_ex of the protein,60 the relative permittivities of the protein ϵ_p, water ϵ_w(T), and microsphere ϵ_s(T) at temperature T, the protein volume V_p, the effective volume of the heated water V_w, and the effective volume of the heated microsphere V_s. Note: the refractive index of proteins is on the order of n = 1.4−1.5, larger than that of water (n = 1.33).15 This means that α_ex for all proteins used in this study (Adk, 3PGK) is always positive when measured in aqueous solution via the reactive sensing regime: the reactive-regime response for protein binding will always be positive. The relative changes of the water and microsphere permittivities caused by the temperature variation ΔT = T′ − T are given by ϵ_i(T′) − ϵ_i(T) ≈ 2 n_i(T) (∂n_i(T)/∂T) ΔT, with the refractive index n_i(T) of water (i = w) or microsphere (i = s) at temperature T. Because the LSPR only enhances the local field within a small region (hotspot) around the gold nanorod, both effective volumes V_w,s of the heated water and microsphere are of the order of the hotspot volume. The thermo-optic response of silica is much smaller than that of water, whose negative ∂n_w(T)/∂T gives rise to the negative wavelength shifts.61 Thus, the thermo-optic term associated with the microsphere is negligible compared to that of water, and one obtains

2 V_eff Δλ/λ = α_ex + 2 V_w n_w(T) (∂n_w(T)/∂T) ΔT.

It is seen that the left side of this equation relates entirely to the microcavity (e.g., the effective mode volume and the resonance shift), while the right side contains all environmental perturbations (i.e., the ligand−receptor interaction and the thermal effects). Since V_w approximates the hotspot volume, we treat the heated water as a quasi-particle. Owing to the negative value of ∂n_w(T)/∂T, raising the local water temperature results in a blue shift of the WGM resonance wavelength. The heat absorbed by the protein per second is h = σ_abs I, with the absorption cross-section σ_abs of the protein and the light intensity I at the position of the protein; the LSPR enhancement is already taken into account in I. The temperature increment is then ΔT = σ_abs I / (4π k_con ξ), with the water's thermal conductivity k_con and the effective heat-transferring length ξ.25 Considering the average of the resonance shifts, we arrive at eq 1.

ASSOCIATED CONTENT
Supporting Information. The Supporting Information is available free of charge and includes: survivor functions of 3PGK and tryptamine binding wait times to the optoplasmonic sensor; a graphical representation of the binding regimes of all molecules; calculation of the absorption cross-sections of the molecules and of the temperatures of the "hot spot" regions; an explanation of the effect of the nanorod plasmon resonance shift on single-molecule sensing; absorption spectra of the molecules; protein sequence residue components; experimental results without detrending; experimental results at higher intensities; histograms of wavelength shifts per hot-spot intensity; and enzyme expression and purification methods (PDF).

Table 1 footnotes: (a) As predicted by the ExPASy ProtParam online tool.31 (b) Alexa Fluor 790 structure unpublished.
Figure 1. Optoplasmonic sensing. (a) Optoplasmonic single-molecule sensor scheme. A collimated 780 nm laser beam passes through a 90/10 beam splitter: 90% to a 50 mm focusing lens, 10% to a power meter (PM). The incident beam (≈6°) reflects from the back side of a prism, inducing evanescent waves that excite whispering gallery modes in a ∼90 μm diameter silica microsphere placed behind the prism. The reflected beam is focused on a detector (PD); WGMs are observed as dips in the transmission spectrum of the system. A PDMS chamber containing the analyte molecules is attached to the back side of the prism. The resonance wavelength shift (Δλ) and full width at half-maximum (Δfwhm) of the WGMs are tracked with the photodetector, connected to a data acquisition card (DAQ). Gold nanorods are attached to the microsphere surface. Protein samples (Adk or 3PGK; conjugated or not with Alexa Fluor 790) bind to these nanorods at the tips, within an enhanced electric field that can detect perturbations of polarizability and hence the presence of protein molecules. (b) Extinction spectrum of the gold nanorods used in the experiments. Inset: electric field distribution around the nanorod with the LSPR at 780 nm. (c) Examples of measured resonance wavelength traces showing red (Δλ > 0) and blue (Δλ < 0) wavelength shifts upon attachment of 3PGK molecules at low and high intensities of the WGM, respectively.

Figure 2. Single-molecule detection of 3PGK and 3PGK−Alexa. (a) Optoplasmonic sensing of 3PGK (mean, dark blue circles; raw data, light blue circles) and 3PGK−Alexa (mean, dark green diamonds; raw data, light green diamonds). Averaged values ⟨Δλ⟩ of WGM wavelength shifts depend on the evanescent intensity I of the WGM around the Au nanorods. Regions (i), (ii), and (iii) represent reactive sensing, near-zero shifts, and blue shifts, respectively. Inset: dependence of 2V_eff⟨Δλ⟩/λ, with the effective mode volume V_eff and wavelength λ = 780 nm, on I. Symbols correspond to the experimental data, while the line gives the curve fit based on eq 1 (R² = 0.6946). (b) Maximal wavelength shifts vs the evanescent intensity I. Maximal wavelength shifts indicate binding at the tips of the nanorods, where the nanorod longitudinal plasmonic modes are effectively excited. (c) Histograms of wavelength shifts Δλ. 3PGK shifts are grouped into the reactive sensing mechanism (red shifts, blue) and the thermo-optoplasmonic mechanism (blue shifts, cyan). The same for 3PGK−Alexa is shown below: red shifts (green) and blue shifts (lime). The gray area indicates significance levels of triple the standard deviation (3σ).

Figure 3. Sensing of single 3PGK molecules. (a) 3PGK optoplasmonic sensing at the evanescent intensity I = 89.3 MW cm−2. Near-zero resonance wavelength shifts Δλ (blue squares) are accompanied by changes of the full width at half-maximum (Δfwhm) (red circles). Changes of the resonance wavelength Δλ are below the noise level (i.e., triple the standard deviation, 3σ) and are considered to be effectively zero. (b) Histogram of the number of attachment events (counts) vs wavelength changes Δλ. The majority of shifts lie between −0.8 and −0.6 fm, well below the noise level (3σ = 1.5 fm). (c) Histogram of the number of attachment events (counts) detected via fwhm changes vs Δfwhm. The majority lie between 2.75 and 3 fm, greater than the noise level (3σ = 1.8 fm). (d−f) Examples of wavelength shifts and fwhm changes. (d) 3PGK reactive sensing with Δλ > 0 and Δfwhm ≈ 0. (e) Near-zero wavelength shift with Δλ ≈ 0 and Δfwhm > 0.
(f) Thermo-optoplasmonic sensing with Δλ < 0 and Δfwhm > 0.

Figure 4. Single-molecule detection of Adk and Adk−Alexa. (a) Optoplasmonic sensing of Adk (purple triangles) and Adk−Alexa (orange pentagons). Averaged values of WGM wavelength shifts ⟨Δλ⟩ depend on the evanescent intensity I of the WGM around the Au nanorods. (b) Maximal wavelength shifts at each intensity I of the WGM. Maximal wavelength shifts indicate binding at the tips of the nanorods. (c) Histograms of wavelength shifts. Histograms of Adk (upper) binding shifts by the reactive mechanism (purple). Adk−Alexa (lower) groups are reactive sensing (orange) and blue shifts (gold). The gray area indicates the noise level of triple the standard deviation, 3σ.

Figure 5. Single-molecule detection of small-molecule tryptamine. (a) Optoplasmonic sensing of single tryptamine molecules (blue circles). Averaged values of WGM wavelength shifts ⟨Δλ⟩ depend on the evanescent intensity I of the WGM around the Au nanorods. Inset: dependence of 2V_eff⟨Δλ⟩/λ, with the curve fit based on eq 1 (R² = 0.5613). (b) Maximal wavelength shifts at each intensity I of the WGM (by absolute value). Maximal wavelength shifts indicate binding at the tips of the nanorods. (c) Histogram of wavelength shifts over 117 individual data points. Both TOP sensing (light blue) and the reactive mechanism (dark blue) are observed, depending on the evanescent intensity. The gray area indicates the noise level of triple the standard deviation, 3σ. Tryptophan and tryptamine structures are shown additionally.

With further development of TOP sensing allowing isolation of the shift signals from specific reaction steps at high time resolution, thermo-optoplasmonic sensing may provide a real-time tool for observing transient absorbing states, offering access to sensing transient intermediates of a catalytic or photocatalytic cycle.

Pure Dye Molecules. Finally, the TOP sensing mechanism was tested for attachment events of pure Alexa790 and IRDye800 molecules at different intensities. Binding of these molecules to the sensor occurred via interactions of their sulfate groups with the gold nanorods (see the Methods).
Figure 6 summarizes the results of these control experiments. Upon binding of Alexa molecules, sign-changing behavior is observed: red-shifted resonance changes at intensities of up to 60 MW cm−2, followed by blue-shifted resonances at larger intensities, i.e., a sign-changing trend similar to that of 3PGK, 3PGK−Alexa, Adk−Alexa, and tryptamine. Binding of IRDye molecules also demonstrates features of the TOP sensing mechanism; the resonant wavelength shifts switch from red to blue at an even lower intensity. The presence of negative shift signals in the dye data sets was sufficient to confirm our hypothesis and demonstrates that absorption at the WGM wavelength by molecules attached to the plasmonic nanorods enables TOP sensing effects. The larger scatter in these data sets, which results in near-zero averaged dye wavelength shifts at high intensities, is likely due to several mechanisms, including the following: (1) multiple sulfate-mediated binding sites on the nanorod surface with lower affinity, allowing different transient binding orientations and hence varying nanorod−dipole interactions; (2) IRDye800 and Alexa790 are small molecules able to explore the surface roughness of the gold nanorods, similar to tryptamine; (3) excitation at high intensities results in photodecomposition of the dye molecules, allowing detection of nonabsorptive forms of the dye, so that both reactive and TOP sensing regimes occur.

Figure 6. Single-molecule detection of Alexa and IRDye. Resonant wavelength shifts and their averaged values. Alexa molecules (red circles) demonstrate sign-changing behavior: red-shifted resonances at intensities up to 60 MW cm−2 and blue shifts at larger intensities. IRDye molecules (black diamonds) demonstrate blue shifts even at lower intensities.

Table 1. Characteristic Values of the Tested Molecules and Complexes. Listed quantities include the extinction coefficient at 280 nm (cm−1 M−1).
10,875.2
2023-12-14T00:00:00.000
[ "Physics" ]
Gadolinium Nanoparticle Delivery for Low-Intensity Focused Ultrasound Diagnosis and Ablation in Thyroid Cancer Therapy

Chemotherapeutic efficacy can be significantly improved by nanotheranostic drug-delivery systems in tumor cells. In this work, we demonstrate self-assembled, C225-conjugated Gd-PFH nanoparticles (C-Gd-PFH-NPs) for low-intensity focused ultrasound diagnosis and ablation in thyroid cancer treatment. C-Gd-PFH-NPs showed excellent stability in PBS. Transmission electron microscopy (TEM) images also revealed the effective construction of C-Gd-PFH-NPs as largely spherical assemblies. Incubation of C643 thyroid carcinoma cells with C-Gd-PFH-NPs triggered apoptosis, which was confirmed by flow cytometry analysis. The C-Gd-PFH-NPs displayed potent antitumor efficacy in human C643 thyroid carcinoma xenografts, and histopathological results further confirmed these outcomes. Furthermore, we successfully examined the efficiency of C-Gd-PFH-NPs for low-intensity focused ultrasound (LIFUS) imaging of thyroid carcinoma in vivo. These findings suggest that LIFUS agents combining high imaging performance with therapeutic function will have broad potential for future biomedical applications.

Introduction
In recent times, triggerable drug-loaded nanocarriers responsive to various internal or external stimuli, such as pH, temperature, ultrasound, laser, and microwave radiation, have been extensively explored for personalized treatment to enable controlled release; they hold excellent potential to deliver an enhanced anticancer treatment effect with decreased systemic toxicity [1][2][3][4]. Low-intensity focused ultrasound (LIFUS) has been extensively investigated for tumour treatment and ultrasound imaging analysis as one of the most promising external triggers: it is non-invasive and displays significant tissue-penetrating capacity. In particular, it can significantly increase the efficacy of chemotherapy while avoiding harm to nearby cells and reducing adverse side effects [5][6][7][8]. However, the release of LIFUS-triggered drugs from nanocarriers and the resulting tumour therapy remain unsatisfactory, largely attributable to the comparatively low accumulation efficiency of drug-loaded nanocarriers at tumour sites. Numerous nanocarriers have been examined on this basis to enhance accumulation in tumors without causing side effects [9][10][11][12].

Anaplastic thyroid carcinoma (ATC) is one of the most malignant carcinomas; although comparatively rare, it is characterized by rapid proliferation, neck invasion, and distant metastasis. The severe prognosis of ATC is due to the tumors' rapid progression before diagnosis [13][14][15]. Current treatment, based on various combinations of chemotherapy and external beam radiation, has failed to improve survival, resulting in an average survival of 4 to 6 months and a 12-month survival rate of less than 20%. There are therefore compelling reasons to develop a new theranostic approach for the early detection and efficient treatment of ATC [16][17][18][19]. Several reports have shown that overexpression of the epidermal growth factor receptor (EGFR) is strongly associated with tumour progression, migration, and invasion, and EGFR overexpression is common in ATC. EGFR-targeted immunotherapy with antibodies or small molecules can significantly increase the therapeutic effect against ATC.
A human-murine chimeric EGFR-targeted monoclonal antibody, cetuximab (C225), has high affinity for the extracellular domain of human EGFR and inhibits epidermal growth factor signalling in cells by disrupting normal receptor function [20][21][22]. The Food and Drug Administration has approved cetuximab for the treatment of EGFR-expressing cancers, including head and neck carcinoma and colorectal carcinoma. C225 might therefore be a suitable targeting moiety for nanocarrier constructs aimed at improving the therapeutic outcome of ATC. Remarkably, some researchers have shown that, for a wide spectrum of cancers, combining C225 with CPT-11 analogues such as Gd-PFH-NPs has significant synergistic antitumor effects [23][24][25]. Hence, Gd-PFH-NPs in combination with C225 could enhance ATC theranostics. However, owing to the limited vascular dispersal of C225 and the hydrophobicity of the Gd-PFH-NPs, the penetrability of the NPs into the tumor and the quantity of NPs in the tumor area are inherently limited, which greatly weakens their anticancer efficacy [26,27]. Fortunately, these problems can be alleviated by incorporating Gd-PFH-NPs and C225 into a single nanocarrier to attain C225/Gd-PFH-NP combination chemotherapy while simultaneously providing targeting capability for the nanocarriers [24,25,28,29].

Furthermore, medical imaging is essential for early diagnosis and for monitoring tumor progression. Numerous researchers have proposed that LIFUS has the potential to achieve concurrent ultrasound imaging and drug delivery, meeting the present need for early detection and ATC therapy [30][31][32]. Conventional US agents such as microbubbles are outstanding for imaging, but their variability and large dimensions make them inappropriate for drug delivery within a tumor theranostic strategy. To avoid this problem, phase-changing NPs that can be activated by LIFUS have been intensively studied [33][34][35]. Phase-changing NPs provide important benefits in tumor theranostics for ultrasound imaging of tumors and ultrasound-triggered drug delivery. This new strategy offers the potential to improve malignancy treatment and to address the present theranostic needs of ATC significantly.

The objective of this work was to construct a C225-modified nanocarrier that accumulates precisely in ATC cancer cells, in addition to the EPR effect, through the strong tumor-homing properties of C225. Releasing the Gd-PFH-NP payload and LIFUS-triggered synergistic chemotherapy with C225 may substantially maximize therapeutic efficacy, improve ultrasound imaging, and diminish the side effects of chemotherapy, as shown in Fig. 1. Owing to its excellent biodegradability and biocompatibility, we used a perfluorohexane (PFH) core within the shell structure of the nanocarrier and synthesized phase-changing NPs with liquid perfluorohexane (boiling point 29 °C). Gd-PFH-NPs were loaded into the nanoparticles while C225 was conjugated onto the nanoparticle surface, affording C225-conjugated, Gd-PFH-loaded phase-change nanoparticles (C-Gd-PFH-NPs). To our knowledge, this is the first report of a LIFUS-mediated, C225-modified nanosystem that integrates tumor-targeted US imaging and US-activated drug delivery for ATC.

Experimental Section 2.1.
Characterization of C-Gd-PFH-NPs. Optical microscopy (CKX41; Olympus, Tokyo, Japan) and confocal laser scanning microscopy (CLSM; Nikon A1, Tokyo, Japan) were used to observe the morphology and particle distribution of the Gd-PFH-NPs and C-Gd-PFH-NPs. A dynamic light scattering (DLS) analyzer (Malvern Instruments, Malvern, UK) was used to determine the mean particle size and polydispersity index (PDI) of the NPs. Morphological characterization of the NPs was carried out using transmission electron microscopy (TEM; H-7500; Hitachi, Tokyo, Japan). To better assess the stability of the Gd-PFH-NPs and C-Gd-PFH-NPs, the mean particle size was determined by DLS repeatedly over 7 days.

Synthesis of C-Gd-PFH-NPs. Gd and PFH nanoparticles (Gd-PFH-NPs) were fabricated by a film hydration method coupled with a double emulsion method [36][37][38][39]. 100 mL of Gd solution (10 mg/mL) was added to the CHCl3 solution. Fluorescent nanoemulsions were obtained according to the same procedure except that DiI was blended into the lipid solution.

C225 Conjugation. Conjugation of C225 to the Gd-PFH-NP-loaded nanoparticles was performed using carbodiimide chemistry. Briefly, the prepared nanoparticles were dispersed in 5 mL of MES buffer solution (0.1 M, pH 5.5) together with a mixture of 3 mg of EDC and 10 mg of NHS and incubated for 1 h on a gentle shaker. The resulting solution was centrifuged and washed three times with PBS to remove unreacted EDC and NHS. The sediment was then redispersed in 5 mL of MES buffer solution (0.1 M, pH 8.0). Next, excess C225 was added dropwise to the above solution and stirred on a gentle shaker for another 2 h. After the reaction was completed, the nanoparticles were collected by centrifugation, washed three times with PBS to remove unconjugated C225, and stored at 4 °C before use. All of the aforementioned procedures were carried out in an ice bath. The C225-conjugated nanoparticles (C-Gd-PFH-NPs) were prepared using the same procedures.

Cell Culture and Nude Mice. A human anaplastic thyroid carcinoma cell line (C643) was obtained from the Cell Bank of the Chinese Academy of Sciences (Shanghai, China). The cells were grown in RPMI-1640 medium containing 10% FBS and 1% penicillin-streptomycin at 37 °C in humidified air with 5% CO2. Female BALB/c mice and nude mice (about 19 g, 25 days old) were purchased and housed. All animals used in our studies were obtained from the Laboratory Animal Center of the Ultrasound Department, The First Affiliated Hospital of Jinzhou Medical University, and were maintained in accordance with rules authorized by the First Affiliated Hospital of Jinzhou Medical University Animal Ethics Committee (Harbin, China). Furthermore, all animal experimental activities were strictly in line with the policy of Harbin Medical University's Institutional Animal Care and Use Committee (IACUC), and this study was endorsed by the IACUC. To establish an ATC model in nude mice, C643 cells were collected, washed three times with FBS-free RPMI-1640 medium, and subcutaneously inoculated into the left flank of each mouse (3 × 10⁷ C643 cells in 150 µL of FBS-free RPMI-1640 medium per mouse). A Vernier caliper was used to measure the length and width of the tumour, and the tumour volume was calculated as volume = (length × width²)/2.

In Vitro Intracellular Uptake of C-Gd-PFH-NPs. For CLSM, C643 cells were seeded in culture dishes at a density of 1 × 10⁶ cells per dish and grown at 37 °C in humidified air containing 5% CO2.
After 24 h of culture, the cells were split into four groups and treated with DiI-labeled C-Gd-PFH-NPs (1 mg/mL) for 10 and 15 min, respectively; after blocking, the cells were washed three times with PBS and then incubated with DiI-labeled C-Gd-PFH-NPs (1 mg/mL). The cells were washed with PBS three times after 2 h of incubation with the nanoparticles, fixed with 4% paraformaldehyde (200 µL) for 15 min, and then stained with DAPI (10 µg/mL, 200 µL) for 20 min. Finally, the dishes were imaged by CLSM.

In Vitro Cytotoxicity Assay. Cell viability was assessed with the CCK-8 assay [40][41][42]. C643 cells were seeded into 96-well plates (1 × 10³ cells per well, 100 µL). After 24 h of incubation, the cells were treated with Gd-PFH-NPs or C-Gd-PFH-NPs at concentrations of 10, 5, 2.5, 1.25, 0.625, and 0.312 µM for 24 h. Untreated C643 cells were used as the control. The in vitro cytotoxicity assay and calculations were performed according to the manufacturer's guidelines.

Apoptotic Staining. The morphological changes of the C643 cells were examined by biochemical staining, including acridine orange−ethidium bromide (AO-EB) and Hoechst 33344 staining [43,44]. The cells were seeded onto 48-well plates at a concentration of 1 × 10⁴ and, after incubating for 24 h, were treated with Gd-PFH-NPs or C-Gd-PFH-NPs at a 2.5 µM concentration for 24 h. On the following day, the staining solution was added. After incubating the plates with the staining solution, the plates were washed with PBS three times. Images were obtained using a fluorescence microscope (AccuScope EXI-310) at 20× magnification.

Flow Cytometry/Annexin V-PI Staining. Flow cytometry with a fluorescein isothiocyanate (FITC) Annexin V Apoptosis Detection Kit (Cell Signalling, China) was used to confirm the apoptotic ratio of the C643 cells. The cells were treated with Gd-PFH-NPs or C-Gd-PFH-NPs at 2.5 µM for 24 h, detached with trypsin, washed three times, and suspended in 1× binding buffer (500 µL) with FITC Annexin V (5 µL) and PI (10 µL). After 20 min of incubation, the samples were analysed by flow cytometry using a BD FACS Canto II flow cytometer.

Evaluation of the In Vivo Drug Toxicity. The in vivo drug toxicity was investigated in ICR mice (4−5 weeks old). Healthy ICR mice were randomly divided into 5 groups (n = 10 mice per group). Drugs were injected through the tail vein on days 0, 3, and 6: mice were injected with Gd-PFH-NPs (2.5 and 5 mg/kg, Gd-equivalent dose) or C-Gd-PFH-NPs (2.5 and 5 mg/kg), and saline was injected as a control. The body weights of the mice were recorded every three days.

Histologic Analysis. For histological analysis, the organs of the sacrificed mice were excised at the end of the treatments with the various drugs. After being fixed in 4% formaldehyde and embedded in paraffin, the tumor tissues and organs were sectioned into 5 µm slices for hematoxylin and eosin (H&E, Sigma) staining. The H&E-stained tissues were imaged by fluorescence microscopy (Olympus, IX71).

In Vivo Antitumor Activity. BALB/c nude mice (4−5 weeks old) were used for the evaluation of the antitumor activities of the nanotherapies. The human anaplastic thyroid carcinoma cell line C643 was grown to 80% confluence in 90 mm tissue culture dishes. After harvesting, the cells were resuspended in PBS at 4 °C to reach a final concentration of 2.5 × 10⁷ cells/mL.
The right flanks of the BALB/c nude mice were subcutaneously injected with 200 µL of a cell suspension containing 5 × 10⁶ cells. At 14 days after implantation, when the tumors reached approximately 60 mm³ in volume, the animals were randomly divided into five groups (n = 7 mice per group). Mice bearing C643 tumor xenografts were injected intravenously with the sample solutions (Gd-PFH-NPs at 5 mg/kg, C-Gd-PFH-NPs at 5 mg/kg) three times, on days 0, 3, and 6. Saline was also injected as a control. Tumor volumes and body weights were monitored and recorded for 33 days. The lengths (L) and widths (W) of the tumors were measured with calipers, and the tumor volume was calculated by the formula V = (L × W²)/2, where W is shorter than L. Mice were sacrificed by CO2 inhalation at the endpoint of the study [45][46][47].

Data Analysis. The data analysis of the different groups was conducted with one-way ANOVA in GraphPad Prism 5 software. Differences were considered significant at P < 0.05 and highly significant at P < 0.001. All data are presented as mean ± SD (unless otherwise stated, n = 3).
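As a small illustration of the quantitative workflow described above (caliper-based tumor volume, V = L × W²/2, followed by a one-way ANOVA across treatment groups), the sketch below uses made-up measurements; it mirrors the analysis described in the Data Analysis section rather than reproducing the authors' GraphPad Prism session, and the group names and numbers are hypothetical.

```python
import numpy as np
from scipy.stats import f_oneway

def tumor_volume(length_mm, width_mm):
    """Caliper-based tumor volume, V = (L * W^2) / 2, in mm^3 (W is the shorter axis)."""
    return length_mm * width_mm**2 / 2.0

# Hypothetical endpoint caliper measurements (mm) for three groups of mice.
saline   = tumor_volume(np.array([14.0, 15.0, 13.0, 16.0]), np.array([11.0, 12.0, 10.0, 12.0]))
gd_pfh   = tumor_volume(np.array([11.0, 12.0, 10.0, 11.0]), np.array([9.0, 9.0, 8.0, 9.0]))
c_gd_pfh = tumor_volume(np.array([7.0, 8.0, 7.0, 6.0]),     np.array([5.0, 6.0, 5.0, 5.0]))

# One-way ANOVA across the treatment groups (P < 0.05 taken as significant).
F, p = f_oneway(saline, gd_pfh, c_gd_pfh)
print("F = %.2f, p = %.4g" % (F, p))
```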
Description of C-Gd-PFH-NPs. With both compounds in hand, we examined the TEM images of Gd-PFH-NPs (Fig. 1A) and C-Gd-PFH-NPs (Fig. 1B) and next tested whether they are able to recapitulate their self-assembly behavior in aqueous solution. For this purpose, we dissolved the C-Gd-PFH-NP prodrugs in DMSO (10 mg/mL) and then rapidly injected them into deionized (DI) water under ultrasonication. The resulting solution was transparent and slightly bluish. Observation by electron microscopy revealed that the drug molecules self-assembled to form a spherical nanoparticle structure, and DLS showed a single-peak size distribution of the nanoparticles. The average hydrodynamic diameter (intensity-weighted) of the Gd-PFH-NPs was about 77.1 nm, and that of the C-Gd-PFH-NPs was about 100.0 nm (Fig. 1B and D). However, there is a certain adhesion between nanoparticles formed by the self-assembly of simple small-molecule drugs. Therefore, we combined the prodrug with an appropriate amount of C225 molecules, an approach that is also miscible with many hydrophobic drugs. Such nano-assemblies have been widely used for in vivo drug delivery, aiming to solve the adhesion problem and to optimize cancer-specific drug delivery. We then measured the stability of the C-Gd-PFH-NPs in PBS, which showed significantly stable sizes across the monitored parameters (Fig. 1E and F). Taken together, although the Gd-PFH-NPs can self-assemble into nanoparticles, they may not be stable enough on their own; therefore, the C225-conjugated nanoparticles loaded with Gd-PFH were investigated further to evaluate their anticancer efficacy in vitro.

In Vitro Intracellular Uptake. As illustrated in Fig. 3, the much stronger red fluorescence derived from DiI-labeled C-Gd-PFH-NPs was clearly concentrated around the cytomembrane of the C643 cells in the C-Gd-PFH-NPs group compared to the non-targeted and blocking groups. Furthermore, larger amounts of red fluorescence were noted after exposure to the C-Gd-PFH-NPs. These findings indicate that, through the elevated tumour-homing characteristics of C225, C-Gd-PFH-NPs can bind tightly to C643 cells and considerably enhance intracellular uptake by the C643 cells. In the blocking group, C-Gd-PFH-NPs lost the capacity to target the C643 cells because the receptors were occupied by surplus free C225, resulting in low levels of C-Gd-PFH-NPs around the cells and demonstrating that the preferential targeting of C-Gd-PFH-NPs is the outcome of EGFR-mediated targeting capacity.

In Vitro Cytotoxicity Assay. The CCK-8 assay assessed the cell viability for the different NP formulations at distinct concentrations and showed dose-dependent behavior. The viability of cells exposed to the carrier nanoparticles remained above 80% over the analyzed dose range, even at 10 mg/mL. This comparatively small, insignificant loss of viability suggests that the high biocompatibility of the phase-changing nanoparticles makes them appropriate for in vivo application. As expected, cell viability decreased considerably as the concentrations of Gd-PFH-NPs and C-Gd-PFH-NPs increased. In particular, the viability of cells treated with C-Gd-PFH-NPs was the lowest at the same concentration, implying that the C-Gd-PFH-NP combination can boost cytotoxicity synergistically. The remarkably improved cytotoxicity of C-Gd-PFH-NPs may result from the increased cell-membrane permeability caused by the cavitation effect and the improved accumulation of C-Gd-PFH-NPs at the target site, which significantly increased the inhibitory effect of C-Gd-PFH-NPs on cell growth.

Morphological Changes in C643 Cancer Cells. AO-EB dual staining is a qualitative technique used to identify live, early apoptotic, late apoptotic, and necrotic cancer cells from fluorescence images of morphological changes in the cell nucleus. AO permeates the intact membranes of normal and early apoptotic cells and binds to DNA, fluorescing uniformly green in normal cells and in patches in early apoptotic cells due to chromatin condensation. In contrast, EB penetrates only the compromised membranes of late apoptotic and necrotic cells, where it fluoresces as bright orange patches through its binding to DNA fragments or apoptotic bodies in late apoptotic cells, and as a uniform orange fluorescence in necrotic cells, which lack the nuclear morphology of viable cells. AO-EB-stained C643 cells were incubated with Gd-PFH-NPs and C-Gd-PFH-NPs for 24 h. As presented in Fig. 4, the appearance of orange-to-red fluorescence with chromatin fragmentation after treatment of the C643 cells suggested that the C-Gd-PFH-NPs largely induced apoptosis in C643 cells (Fig. 4C).

Apoptosis in C643 Cancer Cells. Apoptosis may be regarded as an important barrier preventing a damaged cell from becoming a malignant tumor. Since the complexes promote apoptosis induction in cancer cells, flow cytometry using annexin V-FITC/propidium iodide (PI) double staining was carried out for the quantitative discrimination of apoptotic cells. Phosphatidylserine (PS) is a signaling phospholipid located on the inner leaflet of the membrane of a healthy cell but is translocated to the outer membrane for recognition by neighboring cells at the time of apoptosis. Hence, the translocation of phosphatidylserine is a hallmark of apoptosis and can be detected by its binding to fluorescently labeled annexin V, which in turn is detected by flow cytometry.
Furthermore, the addition of PI to annexin V-stained cells is used to discriminate and concomitantly quantify live cells (lower left quadrant, annexin V(−)/PI(−)), early apoptotic cells (upper left quadrant, annexin V(+)/PI(−)), and late apoptotic cells (upper right quadrant, annexin V(+)/PI(+)) by FACS. As shown in Fig. 4B, incubation of Gd-PFH-NPs and C-Gd-PFH-NPs with C643 cells conspicuously induced apoptosis. It is worth noting that the complexes induce apoptosis even at very low concentrations, below their IC50. In comparison with the control, the cell population was higher (6−9%) in the annexin V(+)/PI(−) (upper left) quadrant, indicating the induction of early apoptosis (Fig. 4D). This effect was higher for C-Gd-PFH-NPs than for Gd-PFH-NPs, consistent with the results of the MTT and AO-EB staining assays. Overall, the test samples displayed comparatively strong apoptotic induction in C643 cells.

In Vitro Ultrasound Imaging. Based on the targeted accumulation capacity of C-Gd-PFH-NPs in tumour cells, we hypothesized that the phase-changing nanoparticles could serve as US contrast agents to improve ultrasound imaging (USI). Following administration of the various agents and before LIFUS irradiation, weak or anechoic, poorly contrast-enhanced US signals were noted in each group (Fig. 5). Six hours after administration of the various treatments, LIFUS was performed in all groups over the same time periods together with in vivo ultrasound imaging. In comparison with saline, significantly stronger spot-like echo signals gradually accumulated in both modes at the tumour sites in the treated group, while no evident changes were detected in the saline group and only negligible signals appeared in the non-targeted group. This outcome suggests that C225 facilitated targeted accumulation in tumour tissue, and that large quantities of microbubbles were produced when the phase-changing NPs underwent acoustic droplet vaporization (ADV) at the LIFUS-triggered tumour site, resulting in improved US imaging. However, owing to the absence of C225-mediated targeting capacity, the non-targeted Gd-PFH-NPs produced inadequate ADV and could not effectively improve the ultrasound imaging. Furthermore, no apparent enhancement was found without LIFUS irradiation: Gd-PFH-NPs and C-Gd-PFH-NPs alone could not improve the in vitro ultrasound imaging (Fig. 5B). These findings show that, because of their relative stability, C-Gd-PFH-NPs are appropriate as ultrasound imaging agents and efficient in vivo nanocarriers.

Histological Evaluation for Systemic Toxicity. The efficiency of anticancer chemotherapeutic drugs is mainly validated by their selective action towards cancer tissues, leaving the normal organs undamaged. After verification of low systemic toxicity in the mice injected with Gd-PFH-NPs (2.5 and 5 mg/kg) and C-Gd-PFH-NPs (2.5 and 5 mg/kg), histological analyses were carried out to identify structural changes in the tissues of the vital organs, including the heart, liver, spleen, lung, and kidney, of the mice treated with Gd-PFH-NPs and C-Gd-PFH-NPs compared with the saline-treated control mice. Figure 6 shows the histological sections of the heart, liver, spleen, lung, and kidney stained with hematoxylin and eosin (H&E). The photomicrographs of the liver and spleen of the control, Gd-PFH-NP-, and C-Gd-PFH-NP-treated groups displayed normal cellular morphology.
Under optical microscopy examination, the heart, lung, and kidney of the Gd-PFH-NP- and C-Gd-PFH-NP-treated animals showed normal cardiac muscle fibers, normal alveolar structure, and normal glomerular histological characteristics, respectively, with histological architecture similar to that of the control group and no treatment-related inflammatory response.

In Vivo Antitumor Efficacy in the C643 Xenograft Tumor Model. Considering the promising in vitro biological activity profiles, the in vivo pharmacological efficacy was further investigated in a C643 thyroid xenograft tumor model. During the experiments, the body weight of the animals in each group was stable, suggesting that the experimental doses in all groups were tolerable. As shown in Fig. 7A−C, we found an obvious retardation of tumor growth in animals treated with Gd-PFH-NPs and C-Gd-PFH-NPs compared to the control group. Specifically, C-Gd-PFH-NPs suppressed tumor growth more efficiently than Gd-PFH-NPs or saline (Fig. 7), consistent with greater delivery to the tumor site(s) via the EPR effect and C225 targeting. Moreover, the C-Gd-PFH-NPs did not significantly affect the body weights of the mice, indicating that the delivery materials and Gd-PFH-NPs have low systemic toxicity. Most importantly, treatment with C-Gd-PFH-NPs significantly enhanced the efficacy of chemotherapy, as evidenced by a more pronounced slow-down of tumor growth relative to Gd-PFH-NPs and saline (P < 0.05). On day 33, animals in the saline group exhibited a high average tumor weight of 1.58 g (Fig. 7D), while the treated animals exhibited lower mean tumor weights of 0.99 g, 0.55 g, and 0.13 g, respectively. A significantly lower mean tumor weight was evident for C-Gd-PFH-NPs compared to Gd-PFH-NPs and saline (P < 0.05). The results of the H&E, TUNEL, and Ki67 histopathology analyses were consistent with these therapeutic results, showing extensive intratumoral apoptosis and reduced cell proliferation caused by the nanoparticle treatments (Fig. 7E). In addition, H&E staining of the C643 thyroid tumor slices revealed highly aberrant histological structures. Compared with the Gd-PFH-NP tumor tissues, C-Gd-PFH-NP tumors presented a more abundant extracellular matrix with a more disordered cell distribution, which closely recapitulates tumors in thyroid cancer patients.

Conclusion. The data presented here highlight a rational strategy for concurrently improving the effectiveness and safety of Gd-PFH-NPs. As the synthetic Gd-PFH-NPs and C-Gd-PFH-NPs are fully biocompatible composites with minimal modifications, the safety risks can be minimized when considering their clinical translation. Furthermore, we expect that the cetuximab (C225)-conjugated C-Gd-PFH-NPs could have high value as an alternative therapeutic platform to treat patients with drug-resistant cancer. Lastly, we envision that, in addition to taxane agents, this C-Gd-PFH-NP-based approach could be a simple yet broadly applicable strategy for producing better-tolerated and more effective cytotoxic nanotherapeutics from other antitumor agents.

Competing interests. The authors declare that they have no competing interests.

Schematic illustration of the microstructure of C-Gd-PFH-NPs and the phase-transformation process under LIFUS irradiation, together with a schematic of the LIFUS ablation principle.
Figure 6: H&E staining of the major organs (kidney, liver, lung, spleen and heart) excised from the different treatment groups of mice. Scale bar: 100 μm. Figure 7: In vivo antitumor activity of saline, Gd-PFH-NPs, and C-Gd-PFH-NPs. C643 tumor xenograft-bearing BALB/c nude mice were administered the various drugs via intravenous injection on days 0, 3 and 6. A) Changes in tumor volumes. B) Body weights. C) Representative tumor photographs. D) Tumor weights; the data are presented as means ± SD (n = 7). E) Representative H&E staining, Ki67, and TUNEL histopathological analysis of the tumors.
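As an illustrative aid to the annexin V/PI quadrant analysis described for Fig. 4, the following Python sketch shows how quadrant fractions could be computed from exported flow-cytometry events; the thresholds, array names and function name are assumptions introduced for the example, not values or tools from this study.

```python
import numpy as np

def quadrant_fractions(annexin, pi, annexin_thresh, pi_thresh):
    """Classify FACS events into the four annexin V/PI quadrants.

    annexin, pi: 1D arrays of fluorescence intensities (one entry per cell).
    annexin_thresh, pi_thresh: hypothetical gating thresholds separating
    (-) from (+) populations, e.g. chosen from unstained controls.
    Returns the fraction of events in each quadrant.
    """
    annexin = np.asarray(annexin)
    pi = np.asarray(pi)

    live = np.mean((annexin < annexin_thresh) & (pi < pi_thresh))              # annexin V(-)/PI(-)
    early_apoptotic = np.mean((annexin >= annexin_thresh) & (pi < pi_thresh))  # annexin V(+)/PI(-)
    late_apoptotic = np.mean((annexin >= annexin_thresh) & (pi >= pi_thresh))  # annexin V(+)/PI(+)
    other = np.mean((annexin < annexin_thresh) & (pi >= pi_thresh))            # annexin V(-)/PI(+)

    return {"live": live, "early_apoptotic": early_apoptotic,
            "late_apoptotic": late_apoptotic, "other": other}
```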
5,926.2
2020-12-01T00:00:00.000
[ "Medicine", "Engineering" ]
Electromagnetic enhancement of ordered silver nanorod arrays evaluated by discrete dipole approximation The enhancement factor (EF) of surface-enhanced Raman scattering (SERS) from two-dimensional (2D) hexagonal silver nanorod (AgNR) arrays was investigated in terms of the electromagnetic (EM) mechanism by using the discrete dipole approximation (DDA) method. The dependence of EF on several parameters, i.e., structure, length, excitation wavelength, incident angle and polarization, and gap size was examined. "Hotspots" were found distributed in the gaps between adjacent nanorods. Simulations of AgNR arrays of different lengths revealed that increasing the rod length from 374 to 937 nm (aspect ratio from 2.0 to 5.0) generated more "hotspots" but did not necessarily increase the EF under both 514 and 532 nm excitation. A narrow lateral gap (in the incident plane) was found to result in a strong EF, while the dependence of EF on the diagonal gap (out of the incident plane) showed an oscillating behavior. The EF of the array was highly dependent on the angle and polarization of the incident light. The structure of the AgNR and the excitation wavelength were also found to affect the EF. The EF of random arrays was stronger than that of an ordered one with the same average gap of 21 nm, which could be explained by the exponential dependence of EF on the lateral gap size. Our results also suggested that absorption rather than extinction or scattering could be a good indicator of EM enhancement. It is expected that understanding the dependence of local field enhancement on the structure of the nanoarrays and the incident excitation will shed light on the optimal design of efficient SERS substrates and improved performance. Introduction Surface-enhanced Raman scattering (SERS) has attracted substantial interest over the past decades due to its potential applications in biological sensing and chemical analysis with molecular specificity and ultrahigh sensitivity, which can reach even the level of single molecules [1,2]. In addition, SERS can be a label-free spectroscopic tool with capabilities in real-time and multi-component analysis. Previous studies showed that Raman signals from molecules adsorbed on nanostructured metal surfaces, especially noble metals (e.g., Ag, Au), could be amplified by a factor of about 10^6 or even higher [3]. Although the underlying mechanism is still unclear, electromagnetic (EM) enhancement arising from the electric field in the vicinity of noble metal structures is considered the dominant mechanism for such a dramatic Raman enhancement in most cases [4]. Both theoretical and experimental studies have revealed that the "hotspot", the concentration of strong EM fields in nanometre-scale regions with high curvatures or in gaps/junctions between closely packed nanoparticles, plays a significant role in SERS enhancements [5,6]. As recently demonstrated by Fang et al., a very small number of molecules residing at the hotspots can dominate the overall SERS signals [7]. Significantly, a single hotspot as small as 15 nm has been directly measured by single-molecule imaging with an accuracy down to 1.2 nm [8]. Tremendous efforts have been devoted to creating efficient SERS substrates in recent years [9][10][11]. Among them, aligned Ag nanorod (AgNR) arrays fabricated by oblique angle deposition (OAD) were shown to be promising SERS substrates with enhancement factors of approximately 10^8 [12][13][14][15].
However, the uniformity and reproducibility of SERS substrates remains a major challenge for the applications of SERS. Recently, it has been demonstrated that highly ordered Ag and Cu nanorod arrays can be fabricated by a guided OAD method, which may circumvent the problems of gap-size and diameter control, leading to the reproducible fabrication of highly SERS-active substrates [16]. The SERS enhancement not only depends on the intrinsic properties and the dielectric environment of the metal nanoparticles, but also on their shape, size and spatial arrangement. The incident wavelength, angle and polarization were also proven to greatly affect the performance of an SERS substrate. Previously, Chaney et al. observed that the SERS intensity was dramatically enhanced when the nanorod length increased from 190 to 508 nm in the random AgNR arrays prepared by OAD method. The high aspect ratio and the lateral overlap between adjacent nanorods were considered as the main factors responsible for this phenomenon [12]. Later studies demonstrated that there was an optimal length for the SERS enhancement in the OAD AgNR array [13]. A zig-zag AgNR structure that could generate hotspots at sharp corners also showed potential in enhancing the SERS performance [17]. So far, the understanding of the SERS mechanism in OAD AgNR arrays is still limited. In addition to EM mechanism, surface effect and anisotropic absorbance of molecules were proposed to interpret the SERS enhancement from the AgNR array substrate [18]. Limited systematic studies on OAD AgNR array structures and different measurement conditions used in experimental studies hindered the direct comparison. Here, we took a systematic approach to investigate the SERS enhancements of the two-dimensional (2D) AgNR arrays from the perspective of EM enhancement mechanism by using the discrete dipole approximation (DDA) method [19]. We expect that the understanding of the dependence of local field enhancement on the structure of the nanoarrays and incident excitations will shine light on the optimal design of efficient SERS substrates and facilitate their applications in biomedical sensing and chemical analysis. Numerical calculations DDA method DDA is a powerful and flexible method for describing the farfield and near-field properties of targets with arbitrary geometries in a complex dielectric environment [19][20][21]. In DDA, the continuum target is represented by a finite cubic array of polarizable point dipoles, which is excited by an applied EM field. Each dipole interacts with both of the external field and the induced electric fields generated by all other dipoles in this array. The response of this array to the incident light is then solved self-consistently by using Maxwell's equations. Recently, an extension of DDA to periodic structures has been developed, allowing for the calculation of the optical properties of 1D and 2D arrays. The theoretical principle of the DDA for periodic targets has been described in more detail elsewhere [22]. Briefly, a "target unit cell" (TUC), repeated in single or double directions, is utilized to assemble the periodic array. In this case, each dipole interacts with the incident electric field and the electric fields scattered by all of the other dipoles in the TUC and the replicas of the TUC. The EM problem is then solved self-consistently through Maxwell's equations. In a recent work, Kim et al. 
showed that this generalized DDA method was an efficient and versatile numerical approach for calculations of the optical properties of AgNR arrays [23]. To investigate the SERS enhancement of AgNR arrays fabricated by the OAD method in terms of the EM mechanism, we simulated the local field enhancement of the nanoarrays in vacuum employing the open-source code DDSCAT 7.2 developed by Draine and Flatau [19], which has the capability of performing efficient "near-field" calculations in and around the target by using fast-Fourier transform (FFT) methods [21]. The cubic grid spacing was 3 nm in all calculations. The dielectric constants of Ag were obtained from the experimental data of Johnson and Christy [24]. The value of the interaction cut-off parameter γ was taken to be 0.01. Electromagnetic enhancement factor The electromagnetic enhancement factor (EF) is commonly approximated by the following formula [25]: EF(r_m, ω, ω′) = |E_loc(r_m, ω)|^2 |E_loc(r_m, ω′)|^2 (1), where r_m is the location of the molecule, E_loc is the enhancement of the local electric field (the ratio of the local field to the excitation field associated with the incident plane wave), and ω and ω′ are the incident and Stokes-shifted frequencies, respectively. Normally, the shift is small and can be neglected compared to the plasmonic resonance width in metal nanosystems, leading to a fourth-power dependence [26], EF(r_m, ω) ≈ |E_loc(r_m, ω)|^4 (2). To evaluate the EF of SERS for the nanostructures, we calculated the sum and the average of the EF within a unit cell of the periodic lattice over the available surface area except the bottom, which was connected with the supporting substrate, by using EF_sum = ∫|E_loc|^4 dS and EF_avg = ∫|E_loc|^4 dS / ∫dS, respectively [27]. Note that the value of |E_loc| was not calculated exactly at the particle surface, but half a grid point (i.e., 1.5 nm) away from each exposed cube surface. Models The models used here are similar to those published previously in [16]. Figure 1 illustrates the regular hexagonal pattern substrate and four different target units considered in the calculations with the parameters shown on the schematic, selected from possible nanorod array structures fabricated by the guided OAD method [16]. The nanorods were arranged in the hexagonal lattice with a centre-to-centre distance of 300 nm unless otherwise noted (Figure 1a). The orientation of the oblique nanorods was chosen to be along the y-direction, and the tilting angle was set to 42° relative to the y-direction [16]. The upper oblique parts of the nanorods were all modelled as tilted cylinders with a hemispherical cap at each end, in order to avoid the "lightning rod effect" at the top edges of the nanorods in the electrodynamics simulations. The gaps between adjacent nanorods along the y-direction were fixed to 21 nm unless specified otherwise, resulting in a cylinder with a diameter of 187 nm. When investigating the effect of different structures on the SERS enhancement, the volume of each target unit was kept constant. This was achieved by considering a factor of sin(42°) when designing the height of the vertical pillar base in S0:42 and S0:−42:42. The nominal aspect ratio (AR), defined as l_1/(187 nm), was used for all structures. For simplicity, the supporting substrates of the arrays were not considered in the simulations.
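On a discrete surface grid, the EF_sum and EF_avg definitions above reduce to a simple sum over sample points. The following Python sketch illustrates this, assuming the near-field enhancement values have already been exported (e.g. from a DDSCAT near-field run); the variable names and the uniform patch area are assumptions introduced for the example, not part of the original workflow.

```python
import numpy as np

def sers_enhancement(e_loc_ratio, patch_area):
    """Approximate EF_sum and EF_avg from sampled local-field enhancements.

    e_loc_ratio: 1D array of |E_loc| values (local field divided by the
        incident field) sampled half a grid spacing outside each exposed
        surface cube of the target.
    patch_area: surface area associated with each sample point (assumed
        uniform here, e.g. one 3 nm x 3 nm grid-cell face).
    """
    ef_local = np.asarray(e_loc_ratio) ** 4          # |E_loc|^4 approximation
    ef_sum = np.sum(ef_local * patch_area)           # EF_sum = ∫ |E_loc|^4 dS
    ef_avg = ef_sum / (ef_local.size * patch_area)   # EF_avg = EF_sum / ∫ dS
    return ef_sum, ef_avg
```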
Only the 2D AgNR array of S42 with AR = 3.5 and the excitation wavelength of 632.8 nm were investigated Results and Discussion Extinction for isolated nanorods and nanorod arrays Typically, metal nanoparticle with anisotropic structure shows multiple plasmon resonances associated with different modes under appropriate excitations [20]. For AgNRs much smaller than the wavelength of light, the extinction spectra usually exhibit a transverse mode centred at around 420 nm and a longitudinal mode in the range of 500-1100 nm depending on the AR [28,29]. These are considered to arise from the dipole plasmon resonances. For AgNRs of large sizes, however, higher order modes of plasmon resonances can be excited [20]. As the target units investigated in the arrays consist of tilted rods, it is expected that both transverse and longitudinal modes can be excited when they are illuminated under normal incidence. Here, the normal incidence is defined as the light with the propagation direction parallel to the surface normal of the substrate (perpendicular to the y-direction). Figure 2 shows typical extinction efficiency spectra of an isolated S42 AgNR of AR 3.5. Under normal incidence of p-polarization, the extinction spectrum has a broad band starting from 320 nm. A general trend of slow increase in the efficiency is apparent in the range of 400-800 nm, with some noticeable features at around 380, 440 and 680 nm. In order to identify the plasmon modes, the extinction efficiency spectra of the target under the s-polarized and the p-polarized excitations are also depicted in Figure 2, in which the propagation direction of the light is perpendicular to the long axis of the nanorod. A major plasmon resonance peak centred at 360 nm is found under the excitation of s-polarization, along with a broad shoulder at around 550 nm. These resonances can be assigned as dipole (550 nm) and quadrupole (360 nm) plasmon modes, respectively, as found in Ag nanoparticles of large sizes [20]. In the case of p-polarized excitation, the extinction spectra has a broad band ranging from 320 to 800 nm, with three distinguishable peaks located at around 400, 520 and 660 nm. Generally, the number of plasmon modes increases with the increasing of asymmetry. The resonance at 660 nm is ascribed to the dipole plasmon mode, while the resonances at 520 nm and 400 nm may be related to higher-order multipolar plasmon modes. Obviously, the extinction efficiency spectra of the tilted target unit under the normal incidence of p-polarization consist of both transverse and longitudinal modes. As the target units form 2D arrays, the optical properties change due to the coupling effect between neighbouring rods, as depicted in Figure 3. Interestingly, the extinction spectra of the S42 and the S−42:42 arrays are almost the same, so are those of the S0:42 and the S0:−42:42 arrays, although the optical spectra of corresponding individual target units are different from each other. This is probably due to the strong coupling effect resulted from the narrow gap between nanorods investigated here. Effects of structure and length As shown in the previous section, both transverse and longitudinal modes in the tilted nanorods can be excited simultaneously by the p-polarized light under normal incidence. The coupling of EM fields of neighboring rods greatly enhances the local fields, forming so-called "hotspots". Figure 4 shows the calculated contours of EF for AgNR 2D hexagonal arrays of different structures with AR = 3.5. 
Multiple hotspots are found distributed in the gaps; their number and distribution, however, depend on the structure. The corners/bends of the bent nanorods might be expected to give rise to intense fields for SERS due to the "lightning rod effect" [5,6,17]. However, the similar EFs of S42 and S−42:42, as well as of S0:−42:42 and S0:42, show that there is no significant contribution from near-field enhancement right at the corners/bends. This indicates that strong EM coupling in the narrow gap is the dominant factor for the near-field enhancement in these arrays. We further investigated the dependence of EF on the length of the AgNR in different structures. A range of aspect ratios from 2.0 to 5.0 was chosen for S42 arrays, while ARs ranging from 3.0 to 5.0 were applied to the other structures due to the constraints of the structure parameters investigated in this work. The number of "hotspots" between adjacent nanorods was found to increase in all four structures as their ARs increased. The EF_avg and EF_sum of each structure with varying ARs are shown in Figure 5a and Figure 5b, respectively. It is interesting to find that the EFs of S0:42 and S0:−42:42 exhibit a similar behavior as the AR increases: both have a general decreasing trend, but in an oscillating manner. The EF_avg of S42 reaches its maximum at AR = 2.5, more than four times that in the case of AR = 5.0. It is worth noting that the EFs of S42 and S−42:42 are comparable in the AR range between 3.0 and 5.0, and both of their EF_avg values decrease as the AR increases, consistent with the simulation results for Cu nanorod arrays in our previous work [16]. As an increase of surface area can result in an increased amount of molecular adsorbate and in turn an enhanced SERS intensity, here we take the surface area effect into account and compare the total SERS enhancement (EF_sum). As shown in Figure 5b, the surface effect is clearly visible at certain ARs and also seems to depend on the structure of the target units, although EF_sum shows a similar trend against AR as EF_avg does. Effect of the excitation wavelength Since the SERS effect is a near-field phenomenon related to the localized surface plasmon resonance (LSPR) of the nanostructures, it is expected to exhibit a behavior that depends on the excitation wavelength. Here, we calculated the EFs of the S42 AgNR arrays at the commonly used excitation wavelengths, i.e., 514, 532, 632.8 and 785 nm, as shown in Figure 6. As can be seen from Figure 6a, the excitation at 532 nm gives the most intense EF_avg at each AR except AR = 3.0. The EF_avg of the AR = 2.0 array illuminated at 532 nm is more than twice that at 632.8 nm. It is interesting that the differences in EF_avg between different excitation wavelengths become insignificant at large ARs. The EF_avg decreases under both 514 and 532 nm excitation as the AR increases, while the EF_avg shows an oscillating behavior at low ARs in the cases of 632.8 and 785 nm excitation. Notably, the array with AR = 2.0 excited at 532 nm exhibits the most intense EF_sum despite its relatively small surface area, as shown in Figure 6b. In order to understand the wavelength dependence of the EM enhancement, the extinction and absorption efficiency spectra of the S42 AgNR array with varying ARs were also calculated and are given in Figure 7. It is clear that there is no direct correlation between the extinction efficiency and the average EF or the total EF. However, the dependence of the absorption efficiency on the AR at each excitation wavelength shows a similar trend as the total EF.
Typical features, such as the oscillating behavior at low ARs in the cases of 632.8 and 785 nm excitation and the highest efficiency at AR = 2.0 under 532 nm excitation, are consistent with what was observed in Figure 6. This suggests that the absorption efficiency could be used as an indicator for SERS enhancement. Nevertheless, the connection between the absorption/extinction spectra and the enhancement in SERS is still not fully understood [30] and requires further investigation. Effect of incident angle As is evident in Figure 8, the EF_avg strongly depends on the incident angle. The incident angle is defined as the angle with respect to the surface normal, as illustrated in the inset of Figure 8. The most intense EF_avg is obtained when the array is illuminated at a positive angle of about 10° (38° towards the long axis of the nanorod). At this angle, the incident direction is neither parallel nor perpendicular to the long axis of the nanorods. The EF_avg decreases dramatically when the incident angle deviates from the optimum value. A similar asymmetric angular dependence of the SERS response was experimentally observed by Liu et al. in a tilted AgNR array with a tilting angle of ca. 17° [31], for which the maximum SERS intensity was obtained at an incident angle of about 45° off the surface normal. A modified Greenler's model was also proposed to interpret this phenomenon. In this model, the molecule adsorbed on the side of the nanorod is treated as a dipole perpendicular to the long axis of the nanorod, while the surface of the nanorod is considered as a planar surface. The SERS intensity was assumed to be proportional to the mean square of the total scattered field calculated by using classical electrodynamics. According to this model, the optimal incident angle increases as the tilting angle of the nanorod (with respect to the surface normal) decreases [32]. Therefore, it is not surprising that the optimal incident angle found in our simulation is smaller than that reported in [31]. In fact, an angular dependence of the near-field enhancement was also found in vertical AgNR arrays. It has been revealed that different modes of surface plasmon resonance can only be excited at certain angles of incidence, leading to different near-field enhancements [23,33]. Figure 9: The polarization-dependent EF_avg (a) and the corresponding absorption (black), scattering (red) and extinction (blue) efficiency factors (b) of the S42 AgNR 2D hexagonal array with AR = 3.5. The excitation wavelength is 632.8 nm. Effect of incident polarization Figure 9a shows the polarization dependence of EF_avg for the S42 AgNR hexagonal array with AR = 3.5. The excitation wavelength is 632.8 nm, and the wave vector is perpendicular to the substrate. The polarization angle is defined as the angle between the electric-field vector and the y-axis as shown in Figure 1a. The most intense EF_avg, 797, occurs at polarization angles of 0 and 180°. This is caused by the strongest EM coupling effect between adjacent nanorods when the exciting electric-field vector is polarized along the interparticle axis (y-axis), as is well known for the particle dimer system [34,35]. The EF_avg of the array is quite sensitive to the polarization. As the polarization deviates from 0 and 180°, the EF_avg rapidly decreases, reaching a minimum value of 44 at polarization angles of 90 and 270°. The polarization dependence of the optical cross sections corresponding to Figure 9a is shown in Figure 9b.
The efficiency factors of absorption, scattering and extinction are defined as the ratios of the total cross sections for absorption, scattering and extinction per TUC to the geometrical cross-section of the equal-volume sphere of one TUC, respectively [22]. It is found that the absorption shows a polarization dependence with maxima at 0 and 180°, opposite to scattering and extinction, which reach their maxima at polarization angles of 90 and 270° and their minima at 0 and 180°. Interestingly, the absorption follows the same polarization dependence as the EF_avg, while the scattering and extinction exhibit a different behavior. Previously, Zhao et al. observed that the anisotropy of the SERS polarization was different from that of the polarized UV-vis absorbance of a nonplanar AgNR array substrate [36]. Practically, the UV-vis absorption spectrum measured in the experiment is the sum of absorption and scattering, i.e., the extinction. So, the experimental observation is in line with this simulation result. The simulation result also suggests that the absorption rather than the extinction or scattering could be an indicator of EM enhancement in SERS performance, in line with the observation in Section "Effect of the excitation wavelength". Figure 10: The dependence of EF_avg on the gap size along the y-direction in the S42 AgNR 2D hexagonal array with AR = 3.5. The solid curve is an exponential fitting result. The excitation wavelength is 632.8 nm and the polarization is parallel to the y-direction. Effect of lateral gap size EF_avg is highly sensitive to the gap size, especially to small gap sizes below 15 nm, as shown in Figure 10. There is a dramatic decrease of EF_avg with the increase of the gap size from 9 to 18 nm, and a much slower decrease with a further increase of the gap size. The gap size has been a crucial parameter of SERS substrates because of the strong EM coupling effect at the nanometre scale [37][38][39]. Due to the challenges of fabricating ordered AgNR arrays by the OAD method, the effect of the gap size in those arrays has not yet been experimentally investigated. However, semi-ordered AgNR arrays were obtained by an OAD technique employing 2D Au nano-post arrays in a square lattice as seed patterns [40]. The SERS intensities were shown to increase monotonically with the decreasing separation of the AgNRs [40], which is consistent with our simulation results. Random vs ordered arrays Although the tilted AgNR arrays fabricated by the OAD method were shown to have SERS enhancement factors greater than 10^8, they were randomly distributed [12,13], which presents a challenge towards highly uniform and reproducible SERS substrates. Hence, efforts have been devoted to producing well-patterned AgNR arrays [16,40]. It is interesting to compare the EFs of random and ordered arrays through theoretical simulations. However, due to the complexity of the 2D arrays, it is difficult to model a truly random AgNR array. Here, target units consisting of six AgNRs arranged in the y-direction with different gap sizes are used to model the 2D random arrays. The averages of the gap sizes in the target units are 21 nm, and the gap sizes between the target units along the y-direction are set to 21 nm, so that the average gap size along the y-direction is the same as that of the 2D ordered array. The gap sizes and the standard deviations (STDEVs) are shown in Table 1. As shown in Figure 11, it is interesting that the EFs of the random arrays are all higher than that of the ordered array.
Moreover, the EF increases monotonically as the STDEV of the gap size increases. Remarkably, the random array with a gap-size STDEV of 10.6 nm shows a more than three times stronger EF than the ordered one. This indicates that random arrays with the same average gap size (21 nm) as the ordered one can show a better SERS performance, which is consistent with the exponential dependence of the EF on the gap size, as demonstrated in Figure 10. This again manifests the significance of hotspots in defining the total SERS intensity, as revealed experimentally by Fang et al. [6]. It is worth pointing out that the difference in EF between random and ordered arrays is less significant when the average gap size is large and the STDEV is small, because the gap sizes are then out of the region of rapidly changing EFs (Figure 10). Effect of diagonal periodicity We have shown that variations of the gap size in the y-direction have a strong influence on the EF of SERS. However, the EF of a 2D array depends not only on the periodicity in the y-direction (denoted as lateral periodicity) but also on the periodicity in the diagonal directions. Here, we fixed the lateral periodicity to 300 nm and investigated the dependence of EF on the diagonal periodicity. Figure 12 shows that a smaller diagonal periodicity, i.e., a smaller gap size, does not necessarily result in a stronger EF in the 2D arrays. In fact, the EF of the 2D array oscillates as the diagonal periodicity increases from 234 to 1239 nm (diagonal gap size varying from 21 to 1050 nm). The EF_avg of the ordered array arranged in a regular hexagonal pattern is more than three times lower than that of the ordered array with a diagonal periodicity of 463 nm. It is clear that the diagonal periodicity plays an important role in the SERS enhancement of the 2D array, but the dependence of EF on the diagonal gap is more complicated than that on the lateral gap, and the mechanism needs further investigation. Nevertheless, this simulation indicates a new dimension for designing OAD AgNR arrays with optimized SERS performance. Figure 12: The dependence of EF_avg and the extinction, absorption and scattering efficiency factors on the diagonal periodicity in the S42 AgNR 2D array with AR = 3.5. The excitation wavelength is 632.8 nm and the polarization is parallel to the y-direction. The dashed lines indicate the diagonal periodicity of the ordered array arranged in a regular hexagonal pattern. Also shown in Figure 12 are the absorption, scattering and extinction efficiency factors at the excitation wavelength of 632.8 nm. They clearly demonstrate that the absorption follows a similar trend as the EF_avg, different from the extinction or scattering. This becomes more obvious for a diagonal periodicity larger than 600 nm. Recently, Near et al. found that the field strength within a plasmon mode trends with the absorption in a silver nanocube [41], which is in line with this simulation. This result is also consistent with our simulations of the incident polarization effect (Figure 9), indicating that the absorption rather than the extinction or scattering may be a good indicator of the EM enhancement. Conclusion The enhancement factor of SERS of 2D hexagonal silver nanorod arrays was investigated by using the discrete dipole approximation method. The computational studies clearly showed that "hotspots" were distributed in the gaps between adjacent nanorods, and that narrow gaps resulted in strong EFs.
The excitation at 532 nm gives the most intense EF_avg at each AR except AR = 3.0, and the array with AR = 2.0 excited at 532 nm showed the most intense EF_sum despite having the smallest surface area. However, the influence of the excitation wavelength on the EF became insignificant when the AR exceeded 4.0. The EF was found to be strongly dependent on the polarization of the incident light. The most intense EF was obtained when the array was illuminated at an incident angle of about 10° off the surface normal. The simulations of AgNR arrays of different lengths revealed that increasing the rod length generated more "hotspots" but did not necessarily increase the EF. The EM enhancement of 2D random AgNR arrays was compared with that of an ordered array of the same average gap size. It was found that the average EF of the random arrays was stronger than that of an ordered one with the same average gap of 21 nm, which can be explained by the exponential dependence of the average EF on the lateral gap size. Although a narrow lateral gap results in a strong EF, the dependence of EF on the diagonal gap shows an oscillating behavior, which implies that the SERS substrates could be optimized by adjusting the diagonal/longitudinal periodicity. The simulation results also indicated that absorption rather than extinction or scattering could be a good indicator of EM enhancement.
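As a side note on the exponential gap-size dependence invoked in the conclusion, a fit of the form EF(gap) = a·exp(−gap/d) + c can be performed with SciPy. The numbers below are illustrative placeholders, not the simulated values behind Figure 10; only the functional form follows the text.

```python
import numpy as np
from scipy.optimize import curve_fit

def ef_model(gap, a, d, c):
    """Empirical model EF(gap) = a * exp(-gap / d) + c."""
    return a * np.exp(-gap / d) + c

# Placeholder (gap size, EF_avg) pairs; replace with actual simulated values.
gaps = np.array([9.0, 12.0, 15.0, 18.0, 24.0, 30.0])        # nm
ef_avg = np.array([2400.0, 1300.0, 800.0, 520.0, 300.0, 210.0])

popt, _ = curve_fit(ef_model, gaps, ef_avg, p0=(10000.0, 6.0, 100.0))
a, d, c = popt
print(f"fitted decay length d = {d:.1f} nm")   # characteristic gap scale of the EF decay
```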
6,647.2
2015-03-09T00:00:00.000
[ "Physics" ]
Scalable solutions for the Control Unit of the KM3NeT DAQ system. The neutrino telescopes of KM3NeT are being incrementally expanded, and will reach their final size in the coming years. New versions of optical modules running new versions of firmware and new instrumentation for calibration are being introduced in the originally repetitive lattice. The inner architecture and data flow of the Control Unit of the KM3NeT telescopes is described, with information about computational and architectural complexity. The current goal is to control two full blocks of the KM3NeT/ARCA detector, i.e. 4370 CLBs and 128340 photomultipliers for 230 detection units, with a single mid-range commercial server machine. The system is designed with software protections and fault tolerance for hardware failure. Introduction The KM3NeT Collaboration is incrementally building and operating two neutrino telescopes in the Mediterranean Sea: ARCA†, located off the shore of Sicily, is devoted to neutrino astronomy ([1]); ORCA‡, off the French shore, is optimised for the study of neutrino oscillations ([2], [3], [4]). Both are water Cherenkov detectors with 3D lattices of DOMs (Digital Optical Modules) [5], each hosting 31 PMTs (photomultipliers) plus a compass, tiltmeter and humidity and pressure sensors, all controlled by a CLB (Central Logic Board); DOMs are grouped in DUs (Detection Units), vertical strings a few hundred metres tall, anchored to the seabed, each carrying 18 regularly spaced DOMs. The spacing of DOMs is wider for ARCA than for ORCA, reflecting the different energy ranges of the neutrinos they are targeted at (up to the PeV scale for ARCA, the GeV scale for ORCA). In full configuration, ARCA will consist of two blocks of 115 DUs each, with in total 4140 DOMs, 128340 PMTs and 230 DU-base modules, instrumenting more than 1 km³ of water; ORCA will have 115 DUs, 2070 DOMs, 64170 PMTs and 115 DU-base modules, covering 0.007 km³ of water. Sea currents change the shape of the detectors; the position of the detector elements is measured by an acoustic system including piezo elements in the DOMs, LED beacons and external hydrophones. A Calibration Unit and an Instrumentation Unit monitor water properties. Optical data from the PMTs and acoustic data from the piezos and hydrophones are transmitted to the control station on shore and processed by a computing farm, whereas data from other instruments are used for monitoring and recorded in a remote relational Database. At the time of the RICAP 2022 conference, 19 DUs were already taking data in ARCA and 10 in ORCA. At a given time KM3NeT has indeed more than two detectors in operation: each DU in the test benches at the end of the integration and before deployment is handled as an independent detector, managed by the same system. The Control Unit With the telescopes in their final configurations, about 400,000 parameters (HV, temperature, power, humidity, compass, tiltmeter, etc.) need to be controlled every 60 seconds in ARCA and 200,000 in ORCA. Some are input parameters (e.g. HV), others would raise alarms (e.g. a humidity increase) if out of bounds. In addition, the CU (Control Unit) [6] checks the status of the TriDAS (Trigger and Data Acquisition System) and adjusts it according to the current task (calibration or physics data taking).
DOM CLBs run two flavours of firmware and operate according to a state machine; DU-base module CLBs also have two variants of firmware for power control; a similar state machine also defines the behaviour of the TriDAS processes. Performances In its full configuration, the ARCA detector will have more than 4000 CLBs needing supervision and communication with the DM, which is the most critical component of the CU from the point of view of performance. A lightweight UDP-based messaging protocol has been developed for this application. Data must frequently be sorted by time or source, via optimised sorting methods. Writing continuously to disk or DB is inefficient, so all CU processes have an internal buffer that batches writing tasks in bunches of 32 MB or 10 minutes. The monitoring information also needs to be presented in a GUI (Graphical User Interface), with hundreds of parameters represented in JSON and transported over HTTP to each subscribing client, the graphical rendering being left to the client's browser. Once the list of data to be extracted is defined, a direct-access path is created and no further sorting or searching occurs. A robust design must prevent thread starvation (i.e. thread pools exhausted or growing beyond the number of available cores) and deadlocks (both internal, due to data consistency requirements, and external, due to cross-process logical error loops). As a conservative choice, multiple sockets are allocated with properly sized buffers and are shared among CLBs in fixed, automatically defined groups; there is one readout thread per socket, which distributes data to CLB controller entities in the DM software, each with its own queue. This architecture eliminates the possibility of thread races. All data logging uses multiple memory slots to avoid thread races; logs are sorted at flush time, when data are copied to disk. Monitoring data from the CLBs are processed by threads that are allocated as specified by configuration parameters: worker threads visit the CLB controller entities in a round-robin fashion, to log data in their queues and perform the needed reactions and adjustments. As a result of all this care taken in the DM design, measurements taken during regular data-acquisition days in ARCA with 18 DUs show (Fig. 2) that the CPU load scales almost linearly and not faster than N log N (N being the number of DUs). Overload protections CU services are unlikely to cause CPU or memory overloads in normal conditions. On the other hand, TriDAS processes normally require a significant amount of resources and are already spread over several servers. Protections are set up against runaway CPU or memory consumption. DataFilters and DataWriters are single-thread applications, so each one cannot saturate more than one core. This allows for intrinsic protection, by avoiding instantiating too many processes. DataFilter memory buffers are statically allocated. On the other hand, DataQueue memory buffers tend to grow if there is insufficient computing power (which may happen in case of hardware failure of one or more DataFilter servers). In this case, the resident LAP of each server watches the overall memory consumption and stops/restarts all local processes if the available memory falls below a tunable limit. In case DataWriter memory buffers grow too much, the resident LAP takes similar action. Flexibility, code development and maintenance The CU drives very different entities, because of the timespan of construction of the KM3NeT detectors.
CLBs have two different variants and run two different versions of firmware. CLBs in base modules host a separate component for power functions that can have different version of firmware. Additional devices that need to be controlled by the CU are the Calibration Unit and the Instrumentation Unit, and more can appear in the future. The codebase must be flexible enough to allow evolution. The codebase is entirely in C# running on the Mono CLR and has only the Oracle Data Provider as external dependency to connect to the remote Database. Binary images are tagged and self-checked on startup, the fingerprint being added to all data. Loops, filters and sort operations are implemented in functional-programming fashion by means of LINQ, which makes the code expressive and readable. Behaviours are described with action tables. The source code for each device controller is kept in a single file. Event reactions are implemented as well-identified actions. Interprocess calls run on an HTTP library and also the graphical interface uses pure HTTP and Javascript. Static type checking and unit tests are used to actively prevent mistakes. Conclusions The Control Unit software for the KM3NeT Data Acquisition has been presented. It is a mature software suite, but evolving to support new versions of firmware for existing devices, new devices and adapting to changing scenarios. The code is produced to be clear and easy to maintain, with a mission to prevent, mitigate or eliminate risks of data loss. Performances are well under control and one commercial server should manage two blocks of 115 DUs. The architecture is fault tolerant and allows multiple resources standing by.
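The Control Unit codebase itself is written in C#, as described above. Purely as an illustrative sketch of the flush-on-size-or-age buffering policy mentioned in the Performances section, the batching logic can be summarised as follows in Python; the class and method names are invented for the example and are not part of the KM3NeT software.

```python
import time

class BufferedSink:
    """Accumulate records and flush them in bunches, mimicking the CU policy:
    write when the buffer reaches max_bytes (e.g. 32 MB) or max_age seconds
    (e.g. 10 minutes), whichever comes first."""

    def __init__(self, write_fn, max_bytes=32 * 1024 * 1024, max_age=600.0):
        self.write_fn = write_fn          # callable that persists a list of records
        self.max_bytes = max_bytes
        self.max_age = max_age
        self.records = []
        self.size = 0
        self.oldest = None                # timestamp of the oldest buffered record

    def append(self, record: bytes):
        if self.oldest is None:
            self.oldest = time.monotonic()
        self.records.append(record)
        self.size += len(record)
        if self.size >= self.max_bytes or time.monotonic() - self.oldest >= self.max_age:
            self.flush()

    def flush(self):
        if self.records:
            self.write_fn(self.records)   # single bulk write to disk or DB
            self.records, self.size, self.oldest = [], 0, None
```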
1,955.8
2023-01-01T00:00:00.000
[ "Physics", "Engineering" ]
Mathematical Analysis and Simulation of an Age-Structured Model of Two-Patch for Tuberculosis (TB) This paper studies an age-structured model of tuberculosis. A population divided into two patches was considered for the study, and each subpopulation is subjected to a vaccination program. Only vaccinated people were allowed to migrate between the two patches. After the determination of ℜ(ψ) and ℜ_0, the local and global stability of the disease-free equilibrium was studied, and the existence of three endemic equilibrium points was shown. The theoretical results are illustrated by a numerical simulation. Introduction Tuberculosis (TB) (short for tubercle bacillus) is a widespread infectious disease caused by various strains of mycobacteria, usually Mycobacterium tuberculosis (MTB). Tuberculosis typically attacks the lungs, but can also affect other parts of the body [1]. For infection to occur, bacilli must penetrate deep into the alveoli; the contagiousness of the disease is relatively low and depends on the immune system of the subject. Individuals at highest risk are young children, frail elderly adults, and people living in precarious socio-economic conditions, in nursing facilities, or with deficient immunity (AIDS, immunosuppressive therapy, ...) [2]. This is one of the oldest and most common infectious diseases [3] [4], with about two billion people currently infected. There are about nine million new cases of infection and two million deaths each year according to WHO estimates [3] [5]. Many authors have worked on the epidemiology of tuberculosis [1]- [3] [5]- [13]. In many developing countries in general, and in sub-Saharan Africa in particular, TB is the leading cause of death, accounting for about two million deaths and a quarter of avoidable adult deaths [11].
It is well known that factors such as the emergence of drug resistance against tuberculosis, the growth of the incidence of HIV in recent years, as well as other diseases favor the development of Koch bacillus in the body call for improved strategies to control this deadly disease [2] [10] [14].Last May, the World Health Assembly approved an ambitious strategy for 20 years (2016-2035) to put an end to World TB epidemic (World Day of fight against tuberculosis-March 24, 2015).In literature, several articles discussed about coinfection: TB-HIV/AIDS and the most recent is [2].Nowadays, it is not a secret for everyone that fighting against infectious diseases is also a fight against poverty.Humans are traditionally organized into well-defined social units, such as families, tribes, villages, cities, countries or regions are good examples of patches [11] [12].For this study, two subpopulations were considered and each was subjected to a vaccination program.However, only the vaccinated individuals can migrate from one patch to another.Despite that we have neglected the relapse rate, to avoid any risk of treated individuals' reactivation, any migration between patches was allowed.After proving that the problem is well defined and it has a unique solution if the initial condition is given, we are able to calculate the reproduction of numbers ( ) ψ ℜ and 0 ℜ .We have established the existence conditions for three en- demic equilibrium points, and the conditions of local and global stability of the equilibrium point without disease.Finally, numerical simulations illustrate clinical outcomes.This paper is organized as follows: Section 2 introduces the two-patch model structured in age to study the dynamics of TB transmission.The existence of positive and unique solutions is demonstrated in Section 3. The point of equilibrium without disease, reproductive numbers ( ) ψ ℜ and 0 ℜ are defined in the section 4 with the local and global stability of the disease-free equilibrium point.The existence of three endemic equilibrium points is proven in Section 5. Some numerical simulation results are given in Section 6.In Section 7, we have a discussion, conclusion and further work., i p a a′ is the probability that an infective individual of age a′ will have contact with and successfully infect a susceptible individual of age a, ( ) i c a is the age-specic per-capita contact/activity rate (all of these functions are assumed to be continuous and to be zero beyond some maximum age).A fraction i φ of newly infected individuals of the sub-population i is assumed to undergo a fast progression directly to the infectious class i I .Rates of migration, of susceptible passage to latent infectious state and treatment are respectively i ρ ; i k and i r .Risk reduction rates of treatment and vaccination are i σ and i δ respectively, ( ) Parameters and Mathematical Model Formulation with initial and boundary conditions: ,0 ,0 ,0 ,0 0 0, ; 0, ; 0, 0, ; 0, , assume that assume that ( ) ( ) ( ) (see Greenhalgh, 1988 [15] and Dietz Schenzle, 1985 [16]), and N t a S t a L t a I t a J t a V t a S t a L t a I t a J t a V t a By summing equations of system (1) and ( 2), we obtain the following equations for the total population ( ) where ( ) ( ) ( ) b a b a b a = + ; 1 a and 2 a are respectively the minimum and maximum age of procreation and a + is the maximum age of an individual, with a + < +∞ . 
Let The system (1) can be normalized as the following system: , 0 ; , 0 , 0 , 0 , 0 0 The problem is well-posedness, the methode of proof is the same used in [8]. Existence of Positive Solutions In this section we will prove that the system (5) has a unique positive solution, and to achieve this we will write the system (5) in compact form (abstract Cauchy problem). , u t s t l t i t j t v t s t l t i t j t v t = thus, we can rewrite the system (5) as an abstract Cauchy problem: s a l a i a j a v a s a l a i a j a v a = According to these results we have the following results (see [17]- [19]): Lemma 1.The operator F is continuously Fréchet differentiable on X. Lemma 2. The operator A generates a 0 C -semigroup of the bounded linear operators e tA and the space Ω is positively invariant by e tA . Theorem 1.For each 0 u X + ∈ there are a maximal interval of existence [ ) max 0,t and a unique continuous mild solution ( ) Proof.The proof of this theorem can be found in [18]- [20].  Determination of the Disease-Free Equilibrium A steady state , s a l a i a j a v a s a l a i a j a v a of sys- Therefore, we obtain the disease-free steady state ( ) Calculation of the Reproduction Numbers ( ) To study the stability of the disease-free steady state, we denote the perturbations of system by s t a s t a s a l t a l t a l a i t a i t a i a j t a j t a j a v t a v t a v a with boundary conditions: ,0 ,0 ,0 ,0 0, we consider the exponential solutions of system (16) of the form: The system (16) becomes: with boundary conditions: From Equation (18), we obtain: Hence, by Equations (( 20) and ( 21)) after changing order of integration, we obtain: Injecting (22) in the expression of i Γ , and dividing both sides the expression by i Γ (since 0 i Γ ≠ ), we get the characteristic equation: Denote the right-hand side of Equation ( 23) by ( ) ( ) ( ) We define the net reproductive number as ( ) ( ) We can obtain an expression for 0 i ℜ in a similar way as the derivation of ( ) called the basic reproductive number (when a purely susceptible population is considered) (see [8]). Local Stability of the Disease-Free Equilibrium Theorem 2. The infection-free steady-state ( 5) is locally asymptotically stable (l.a.s.) if We know that Equation (23) has a unique negative real solution * λ if, and only if, ( ) 23) has a unique positive (zero) real solution if ( ) To show that * λ is the dominant real part of roots of ( ) i G λ , we let x iy λ = + be an arbitrary complex solution to Equation (23). Note that ≤ .It follows that the infection-free steady state is l.a.s.if  In this corollary, we have the three cases of the unstability of the disease free equilibrium. Corollary 1. 1) whenever ( ) 1 ψ ℜ > , the disease free is locally asymptotically stable in the first patch and unstable in the second. 2) whenever ( ) 1 ψ ℜ < , the disease free is unstable in the first patch and locally asymptotically stable in the second. Global Stability of the Disease-Free Equilibrium The disease-free equilibrium of system ( 5) is globally asymptotically stable if 0 1 ℜ < and Proof.The proof consist to show that ( ) Integrating system (5) along characteristic lines we get Injecting ( 27) in (28), and changing order of integration, we obtain: Injecting (29) in , and changing order of integration, we obtain: ( ) ( ) For this disease can disappear without any form of intervention, according to these results we must ensure that there is no new infected and the infectious rate does not reach a certain spread. 
Existence of an Endemic State There exists three endemic steady state of system ( 5) whenever ( ) ψ ℜ . The First Boundary Endemic Equilibrium Theorem 4. A boundary endemic equilibrium of the form , , , , , ,0,0,0, E s a l a i a j a v a s a v a = whenever ( ) This means that the disease is endemic in the first sub-population and dies out in the second sub-population. Proof.The method commonly used to find an endemic steady state for age-structure models consists of obtaining explicit expressions for a time independent solution of system ( 5) with the initial conditions: Integrating system (31), we obtain: By injecting (37) in (34), we obtain: Injecting (40) in the expression of * 1 Γ , and dividing by * 1 Let 1 H , the function define by: ( , 0 0, and , so the net reproductive number is given by ( ) ( ) ( ) We now see that an endemic steady state exists if Equation (41) has a positive solution. The Second Boundary Endemic Equilibrium Theorem 5. A boundary endemic equilibrium of the form ,0 , a v a s a l a i a j a v a = whenever ( ) > .This means that the disease is dies out in the first subpopulation and is endemic in the second sub-population. Proof.(Ideas of proof) ( ,0 , a v a s a l a i a j a v a = satisfies the following equations: with the initial conditions: Integrating system (51), we obtain: Hence, by the similar method using in theorem 4, we obtain the result. The Interior Endemic Equilibrium Theorem 6.An interior endemic equilibrium of the form s a l a i a j a v a s a l a i a j a v a = whenever ( ) ℜ > , which corresponds to case when the disease persists in the two sub-populations. Proof. s a l a i a j a v a s a l a i a j a v a = By injecting (58) in (59), we obtain: Let i H , the function define by: ( ) which is defined implicitly).It follows that when ( ) there exists an endemic steady state distribution which is given by the unique solution of Equation (64) corresponding to * ˆi Γ . Simulation In this section, when ( ) ( ) 1 ψ ℜ > we will evaluate the impact of BCG vaccine and the birth rate of the population in the dynamics of spread of TB.Assuming that all parameters are the same in both patches except the vaccine rate, we observe an increase in the number of infected if the vaccination rate decreases (Figure 2). Also taking the same parameters except birth rates, we see an increased number of infected if the rate increases (Figure 3).), we have the evolution of the number of infectious individuals (Figure 4). Discussion, Conclusion and Future Work In this paper, an age structured model of two-patch for tuberculosis was analyzed and discussed.Each sub-population is subjected to a vaccination program.Apart from age; the vaccinated compartment, we introduced as a class of treated in the model proposed by Tewa J. Jules in [11] and allowed the migration of vaccinated population.The same result was found if the most susceptible migrated too.Although some studies have shown an ineffectiveness of BCG in the prevention of tuberculosis [21], our work demonstrated the contribution of BCG in the process of eradicating TB.The negative impact of the increase in the birth rate was shown.If we neglect the mortality death rate linked to the disease, we obtain the only usual condition of global stability to the disease free equilibrium i.e. 
ℜ_0 < 1. Many challenges remain, such as the analysis of the endemic equilibrium points of this model and of the one in [8]. As future work, in order to study the real impact of migration on the dynamics of tuberculosis spread, we will use this model and allow the migration of all individuals (i.e. susceptible, infected, infectious, vaccinated and treated). A two-patch age-structured model of tuberculosis was considered. The model splits the population into two subpopulations. Recruitment is only possible into the susceptible class, and the vaccinated individuals were able to migrate between the two subpopulations. Each subpopulation is divided into five classes according to epidemiological status: susceptible, vaccinated, latent, infectious or treated, and these subgroups are denoted accordingly. The birth rate of patch i is b_i(a); μ_i(a) and μ(a) denote the disease-related mortality rate in patch i and the natural mortality rate, respectively. The force of infection of subpopulation i depends on both time and age. Figure 1. Flow chart of the two-patch model for tuberculosis disease transmission. Figure 2. Evolution of the number of latent individuals. Figure 4. Evolution of the number of infectious individuals.
3,521
2016-09-12T00:00:00.000
[ "Mathematics", "Medicine" ]
KS@LTH at SemEval-2020 Task 12: Fine-tuning Multi- and Monolingual Transformer Models for Offensive Language Detection This paper describes the KS@LTH system for SemEval-2020 Task 12 OffensEval2: Multilingual Offensive Language Identification in Social Media. We compare mono- and multilingual models based on fine-tuning pre-trained transformer models for offensive language identification in Arabic, Greek, English and Turkish. For Danish, we explore the possibility of fine-tuning a model pre-trained on a similar language, Swedish, and additionally also cross-lingual training together with English. Introduction Offensive language is a prevalent phenomenon in many online communities and social media platforms. Due to the vast amount of content, it is often infeasible to manually moderate all user submitted content. Computational methods for identifying this type of content is one possible way to help mitigate the problem. Different aspects of the problem such as aggression (Kumar et al., 2018), cyber bulling (Sprugnoli et al., 2018) and hate speech (Malmasi and Zampieri, 2017) have been studied in recent work. OffensEval 2019 used a new three-level hierarchical annotation schema to capture multiple aspects of offensive language in one framework (Zampieri et al., 2019a). While much of the previous work is focused on English, offensive language detection is a multilingual problem. Apart from country specific communities, large social media platforms such as Facebook and Twitter have many users interacting in their native tongue. Recently, offensive language detection addressed different languages such as German (Wiegand et al., 2018), Arabic (Mulki et al., 2019), Italian (Sanguinetti et al., 2018), and Spanish (Fersini et al., 2018). In OffensEval 2020, the first level task of offensive language detection has been expanded to cover five languages, Arabic, Danish, English, Greek, and Turkish. Transfer learning is nothing new in NLP but over time, the pre-training has become more complex, incorporating more context. In recent years, language models based on the transformer architecture pre-trained on large amounts of unlabeled text and then fine-tuned on downstream tasks have been used to achieve state-of-the-art (SOTA) results on many natural language benchmarks (Devlin et al., 2018;Liu et al., 2019;Yang et al., 2019). In OffensEval 2019, seven of the top ten models used BERT in some way (Zampieri et al., 2019b). One of the advantages of transfer learning is that it can potentially reduce the amount of labeled data that is needed. The model can learn general features of language from a large unannotated corpus during pre-training. Task specific features can then be learned from a smaller annotated corpus. On some datasets, using a pre-trained language model has shown to match the results of models trained from scratch on ten times more data. Adding language model fine-tuning on unlabeled domain specific text can potentially reduce the need for labeled data even more (Howard and Ruder, 2018). One obstacle to using large transformer models is that the pre-training step is expensive. The Megatron-LM has 8.3 billion parameters and was trained over 9 days on 512 GPUs (Shoeybi et al., 2019). In comparison, the fine-tuning step is relatively inexpensive. This makes model sharing an important part of applying large transformer models to many tasks. 
The HuggingFace Transformers library provides a platform for sharing models developed by researchers and the community, and a unified API for using them (Wolf et al., 2019). One additional challenge with multilingual offensive language detection is low resource languages. Such languages might lack both unlabeled data for pre-training and labeled data for fine-tuning. One possible solution in such cases is to use multilingual models. Such models can achieve lower perplexity than monolingual models for language modeling of low resource languages (Conneau and Lample, 2019). In some contexts, multilingual models can even outperform monolingual models on downstream tasks . In the case of lacking labeled data, they have also shown to perform well on zero-shot cross-lingual classification tasks. This type of transfer works best between typologically similar languages. However, transfer is possible to some extent even between languages with different scripts (Pires et al., 2019). This paper describes our system for OffensEval 2020 . We participated in Sub-task A: Offensive language identification for all language tracks. Based on the recent success of the transformer architecture, we compared monolingual BERT models for Arabic, English, Greek, and Turkish with the XLM-R multilingual model . We found that the monolingual models outperform the multilingual models for all languages on the development data. We used models available through the HuggingFace Transformers library. Since no monolingual models were available for Danish, we initially compared a Swedish BERT model with multilingual XLM-R. We found that the Swedish model worked reasonably well on the development data, while XLM-R only predicted the majority class for most runs. We hypothesized that this is due to the small and imbalanced Danish dataset; similar high variance results have been seen for BERT in Devlin et al. (2018) and Phang et al. (2019). To get around the problem of the small dataset, we tried cross-lingual training of Danish and English using XLM-R which outperformed the Swedish BERT model. In Section 2 we give a short description of the task and data used. Section 3 presents our approach, describing data preprocessing, models and training approach. Section 4 shows our results on the test data for OffensEval 2020. Task and Data OffensEval 2020 uses a multilingual dataset of posts from Twitter, tweets, with annotations following the hierarchical annotation schema proposed by Zampieri et al. (2019a). Only the first level of annotation is provided for all languages. This level discriminates between two kinds of tweets: • Offensive (OFF): Tweets containing any form of offensive language. This includes insults, threats, and profanity. • Not Offensive (NOT): Tweets not containing any form of unacceptable language. The goal of the task is to distinguish between offensive and not offensive tweets. Macro-averaged F1-score is used as evaluation metric. Table 1 shows a summary of the labeled training datasets for each language. All the datasets are imbalanced to some extent, with the majority of tweets being labeled as not offensive. Danish is the most extreme in this regard, having only 13% of tweets labeled as offensive. We can also see that the size of the datasets varies quite a bit, with Turkish having about ten times as many labeled instances as Danish. Data Preprocessing A minimal amount of preprocessing was done. We applied only two operations to all languages: 1. 
Multiple consecutive user mentions were replaced with a single @User to reduce sequence length and noise. 2. All tweets were truncated or padded to a common length. This length was chosen separately for each language to be the smallest length longer than 95% of all tweets in the training set. Additional processing was done on the external datasets for English. We sampled about 10,000 additional tweets from Davidson et al. (2017). Samples were chosen such that the complete labeled tweet dataset became balanced. Tweets with at least 3 annotators labeling it as either offensive language or hate speech were labeled as OFF. Tweets with all annotators agreeing on the neither-class were labeled as NOT. A balanced dataset of 13,000 Wikipedia comments, from the Kaggle dataset, were also added. To be consistent with the Twitter data, all comments were at most 280 characters. Any comment having at least two of the labels toxic, severe toxic, obscene, threat, insult, or identity hate was labeled as OFF. Comments with no negative labels were labeled as NOT. For both datasets, we replaced URLs with a URL token, and for the tweet dataset, we replaced user mentions with @User. Additionally we sampled 400,000 tweets from the English silver standard data using confidence scores as weights. These were then filtered down further to the 40,000 tweets with highest confidence using our model as described in section 3.3. Models Vaswani et al. (2017) initially introduced the transformer architecture in the context of machine translation. While previous approaches relied on convolutional and recurrent neural networks, they showed that a relatively simple architecture based on feed-forward neural networks and attention mechanisms could provide better results while being more parallelizable and faster to train. Like previous sequence-tosequence models the transformer consists of two main components: an encoder component and a decoder component. Radford et al. (2018) trained a left-to-right language model, GPT, using only the decoder part of the transformer and fine-tuned it on multiple downstream tasks with minimal task specific changes. Devlin et al. (2018) showed the importance of bi-directional pre-training for certain types of tasks by obtaining new SOTA results on 11 NLP benchmarks, including an almost 8 point improvement on GLUE. Their model architecture, named BERT (Bidirectional Encoder Representations from Transformers), is the architecture we used for all monolingual models apart from English. Since the decoder component of the transformer already does masking of subsequent positions, it is a natural choice for the next word prediction language modeling task used by GPT. To be able to train a bidirectional language model, BERT instead uses the encoder part of the transformer. Apart from increasing the size, it is almost identical to the initial transformer implementation. BERT consists of a stack of encoders, 12 for BERT BASE and 24 for BERT LARGE , compared to 6 in the original transformer. Each encoder, in turn, consists of two main parts: a self-attention layer followed by a feed-forward neural network. Self-attention is the mechanism which allows the transformer to consider other words in the sequence when encoding the current word. BERT increases the number of attention heads from 8 in the original Transformer to 12 for BERT BASE and 16 for BERT LARGE . 
Finally the number of hidden units in the feed-forward neural networks is also increased from 512 to 758 and 1024 for BERT BASE and BERT LARGE , respectively. We used pre-trained BERT language models without changes to the base architecture. For the finetuning step, we followed the approach for single sentence classification suggested by Devlin et al. (2018). A single fully connected classification layer was added to the base model. A special [CLS] token was prepended to all inputs. The contextual representation of this token was used as an embedding for the complete sentence, and passed to the classification head. The complete base model was fine-tuned during training. Liu et al. (2019) showed that BERT is undertrained. Their model, RoBERTa, uses exactly the same architecture as BERT. RoBERTa outperforms BERT simply by training on more data, with larger batches, for a longer time. Some additional simple changes in the pre-training approach, such as removing one of the pre-training objectives and training on longer sequences, improved the results even further. This is the monolingual model we used for English. There were no pre-trained RoBERTa models available for the other languages. The fine-tuning approach is identical to the one used for BERT. Similarly, in the multilingual context, the XLM-RoBERTa (XLM-R) model we used achieves much of its improvement over previous multilingual models by using several orders of magnitude more data . also find that vocabulary size has a large impact when many languages are used. Again XLM-R uses the same model architecture as BERT. However, the increase of vocabulary size from 30K to 250K leads to an increase of the total number of parameters from 110M and 335M to 270M and 550M for the BASE and LARGE models, respectively. All five languages are present among the 100 languages used during pre-training of XLM-R. The fine-tuning approach is identical to the one used for the previous models. A summary of the different pre-trained models that we used for each language is provided below: Table 2: Mean and maximum F1 macro on the development sets for five random restarts on each language and model. Experiments We carried out the initial experimentation and the hyperparameter selection using the English data from OffensEval 2019. We followed the fine-tuning procedure recommended for BERT by Devlin et al. (2018). We tested the following parameters, where the best performing values are underlined: • Batch size: 16, 32 • Learning rate (Adam): 5e-5, 3e-5, 2e-5 • Epochs: 2, 3, 4 The dropout was kept constant at 0.1 for all layers. Overall we found that fine-tuning was relatively insensitive to batch size and learning rate. However, most random restarts seemed to overfit when using more than 2 epochs. The same hyperparameters were then used for all further experiments. For each language, 20% of the data was set aside as a development set and used for model selection. For each model, we ran five random restarts with different data shuffling and classifier head layer initialization. The model with the best macro-averaged F1-score on the development set was then used for submission. Table 2 summarizes the results we obtained. For English, the training was done in two steps. Initially, we trained the model using only the labeled data. We then used this model to label 400,000 samples from the silver standard data. We labeled the 20,000 instances with the highest scores as OFF and the 20,000 instances with the lowest scores as NOT. 
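A sketch of this self-labeling step is given below; `score_offensive` stands in for the initially trained model's probability of the OFF class, applied to the sampled silver-standard tweets.

```python
# Hedged sketch of the silver-standard labeling step described above:
# score the sampled silver tweets with the initial model, then keep the
# 20,000 highest-scoring as OFF and the 20,000 lowest-scoring as NOT.
# `score_offensive(texts)` is an assumed helper returning P(OFF) per tweet.
def label_silver(silver_tweets, score_offensive, k=20_000):
    scores = score_offensive(silver_tweets)
    ranked = sorted(zip(silver_tweets, scores), key=lambda pair: pair[1])
    negatives = [(tweet, "NOT") for tweet, _ in ranked[:k]]
    positives = [(tweet, "OFF") for tweet, _ in ranked[-k:]]
    return positives + negatives
```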
We finally added these 40,000 tweets to the training set used to train the final model. For Danish, we initially failed to train XLM-R to predict anything other than the majority class. Since XLM-R has shown promising cross-lingual transfer results, we tried training Danish together with English. We did this by shuffling the Danish training data with the English data from OffensEval 2019. We evaluated the models only on the Danish development dataset. Table 3 shows our results on the official test data. The figures are similar to those we obtained on the development dataset. Danish shows the largest drop in performance, going from 0.813 on the development dataset to 0.775 on the test dataset. Nonetheless, since the development set was rather small, it might be difficult to conclude on the generalization performance. Impact of external and silver standard data Previous work has shown that models for offensive language detection often generalize poorly to other datasets (Karan andŠnajder, 2018;Swamy et al., 2019;Arango et al., 2019). This is especially true when Table 4: Results on the English test set using different subsets of the training data. For the combination OffensEval19 + Silver, the silver standard data was processed using the approach described previously, but only using OffensEval19 for the initial training. evaluating across domains, e.g. between Twitter and Wikipedia, but also within the same domain. Some features are likely platform specific and some datasets focus on specific aspects of offensive language. The data collection process can also lead to some types of content being overrepresented. We tried to determine the impact of the different English datasets we used by retraining the model on different subsets of the data. The results on the test set are shown in table 4. All the labeled datasets perform reasonably well on their own. Surprisingly the sampled Wikipedia data performs just as well as the OffensEval 2019 data. The sampled data from (Davidson et al., 2017) performs worse. This might be due to it being smaller and oversampled to contain more offensive tweets. This hypothesis is also supported by the fact that when used with the OffensEval 2019 data, the results are comparable with the submitted model. Finally, the silver standard data seems to be most useful when the original labeled dataset is small. Error analysis To get a better understanding of the kind of mistakes the system makes we studied some of the misclassified instances. To get some indications of what words are important for the classification of a given sentence, we applied LIME (Ribeiro et al., 2016). In short, LIME estimates the importance of a word by: 1. Generating many distorted versions of the original tweet. 2. Applying the original classifiers to the distorted tweets. 3. Training a white-box model to predict the output of the original classifier given a version of the tweet. Table 5 shows five instances from the English OffensEval 2019 dataset, where the classifier assigned a high confidence to the wrong class. Examples 1 and 2 are very short and the profanity dominates the other words. Both examples look like reasonable classifications. However, the same thing seems to happen in Example 3. The word shit dominates the otherwise inoffensive sentence. Example 4 has no direct profanity. Looking at bigrams using LIME, stinking cute is correctly identified as inoffensive. Example 5 doesn't seem to have any offensive language. 
It is possible that it could be considered offensive given external knowledge about the people mentioned. Given only the tweet, the classification looks reasonable. # Tweet Prediction Label 1 Are you fucking + serious? URL OFF NOT 2 And dicks + . URL OFF NOT 3 #Room25 is actually incredible, Noname is the shit + , always has been, and I'm seein her in like 5 days in Melbourne. Life is good. Have a nice day. OFF NOT 4 @User Aw she is so stinkingcute + ! How old is she now? NOT OFF 5 #ChristineBlaseyFord is your #Kavanaugh accuser. #Liberals try this EVERY time. #ConfirmJudgeKavanaugh URL NOT OFF Table 5: Examples of misclassifications for English. Using LIME, we marked words that have a large impact on the classification. A + indicates agreement with the predicted label and a -indicates disagreement. Conclusions In the context of offensive language detection for multiple languages, we found that fine-tuning transformer models works well. Monolingual models outperform multilingual models for all languages studied. However, multilingual models can still be a viable alternative when no monolingual models are available. When the amount of labeled data is small, they can also be used for cross-lingual transfer. We showed the positive effect of cross lingual transfer when augmenting Danish with English.
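As a supplement to the error analysis above, the sketch below shows how word-importance estimates like those in Table 5 can be obtained with LIME. The `tokenizer` and `model` objects are assumed to be a fine-tuned classifier as in the earlier sketches; none of this is part of the released system.

```python
# Hedged sketch of the LIME procedure used in the error analysis above.
# LIME only needs a function mapping a list of texts to class-probability rows.
import torch
from lime.lime_text import LimeTextExplainer

def predict_proba(texts):
    enc = tokenizer(list(texts), truncation=True, padding=True,
                    return_tensors="pt")
    with torch.no_grad():
        logits = model(**enc).logits
    return torch.softmax(logits, dim=-1).numpy()

explainer = LimeTextExplainer(class_names=["NOT", "OFF"])
explanation = explainer.explain_instance("an example tweet", predict_proba,
                                         num_samples=1000, num_features=6)
print(explanation.as_list())  # (word, weight) pairs for the offensive class
```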
4,092.2
2020-01-01T00:00:00.000
[ "Computer Science", "Linguistics" ]
GH97 is a new family of glycoside hydrolases, which is related to the α-galactosidase superfamily Background As a rule, about 1% of genes in a given genome encode glycoside hydrolases and their homologues. On the basis of sequence similarity they have been grouped into more than ninety GH families during the last 15 years. The GH97 family has been established very recently and initially included only 18 bacterial proteins. However, the evolutionary relationship of the genes encoding proteins of this family remains unclear, as well as their distribution among main groups of the living organisms. Results The extensive search of the current databases allowed us to double the number of GH97 family proteins. Five subfamilies were distinguished on the basis of pairwise sequence comparison and phylogenetic analysis. Iterative sequence analysis revealed the relationship of the GH97 family with the GH27, GH31, and GH36 families of glycosidases, which belong to the α-galactosidase superfamily, as well as a more distant relationship with some other glycosidase families (GH13 and GH20). Conclusion The results of this study show an unexpected sequence similarity of GH97 family proteins with glycoside hydrolases from several other families, that have (β/α)8-barrel fold of the catalytic domain and a retaining mechanism of the glycoside bond hydrolysis. These data suggest a common evolutionary origin of glycosidases representing different families and clans. Background On the basis of sequence similarity, glycoside hydrolases (or glycosidases, EC3.2.1.-) have been grouped into 96 families (GH1-GH100, except GH21, GH40, GH41, and GH60) by the Carbohydrate-Active Enzymes (CAZy) classification [1,2]. In the case of poly-domain proteins each catalytic domain is considered separately. A family was initially defined as a group of at least two sequences displaying significant amino acid similarity and with no significant similarity with other families [1]. Later, some related families of glycosidases have been combined into clans [3,4]. According to its definition, a clan is a group of families that are thought to have a common ancestry and are recognized by significant similarities in tertiary structure together with conservation of the catalytic residues and a catalytic mechanism [3]. Glycosidases catalyze hydrolysis of the glycosidic bond of their substrates via two general mechanisms, leading to either inversion or overall retention of the anomeric configuration at the cleavage point [4][5][6]. Currently, 14 clans (GH-A-GH-N) are described, and in total they contain 46 families [2]. Families of four clans (GH-A, GH-D, GH-H, and GH-K), as well as several other families, that have not been assigned to any clan, contain proteins with a similar (β/ α) 8 -barrel fold of the catalytic domain [2]. Several glycosidases, that do not have any homologues, are included into a group of non-classified glycoside hydrolases [1,2]. In several instances, proteins from this group have been reclassified into new families when their homologues were found [7]. Two different clans have never been merged in the CAZy classification [2], even after their significant similarity has been established. Instead, related clans (and families) having statistically significant sequence similarity of the corresponding proteins were proposed to be grouped into superfamilies at a higher hierarchical level. 
For example, we have described the furanosidase (β-fructosidase) superfamily, that includes clans GH-F (inverting glycosidases) and GH-J (retaining glycosidases), as well as the GHLP (COG2152) family of enzymatically-uncharacterized proteins [8][9][10][11]. Nowadays, some families are very large. For example, GH13 family (clan GH-H) includes more than 2,000 representatives [2]. This large and poly-specific group of enzymes has been studied by many authors [12][13][14][15][16][17][18][19]. In particular, it was shown that splitting of this family into smaller subfamilies allowed to clarify the relationship of its members [12,13]. In this work we updated the GH97 family of glycosidases, performed its phylogenetic analysis, and established its evolutionary relationship with several other glycosidase families. Results and discussion Collecting sequences of family GH97 PSI-BLAST search of the non-redundant database with the Bacteroides thetaiotaomicron α-glucosidase SusB (97A1_BACTH, see Table I) as a query sequence yielded 32 protein sequences with the worst (the largest) E-value of 2 × 10 -20 during the first round. Among them we found 10 paralogous proteins from B. thetaiotaomicron ATCC29148 and their 22 homologues from other species. Among 32 obtained proteins were found all 24 members of the GH97 family listed at the CAZy server [2]. Genomic BLAST revealed 13 additional homologous sequences. Based on the sequence similarity, we propose to enlarge the GH97 family by including all known homologues of SusB. As a result, currently this family includes 45 proteins. The majority of them represent Eubacteria (16 different species). Three other sequences correspond to Archaea (Haloarcula marismortui) and two uncultured bacteria. Four sequences are annotated in the NCBI database as eukaryotic (Anopheles gambiae) genome fragments. Only five out of 45 protein sequences (from Anopheles and an uncultured bacterium) are short fragments (Table I). PSI-BLAST searches with a few randomly selected divergent representatives of the GH97 family used as a query sequence during the first round always yielded the same 32 protein sequences as with 97A1_BACTH. An analysis of the order of the sequence appearance during the first round of searches by PSI-BLAST, depending on the query, allows us to distinguish five subfamilies (97a-97e) in the GH97 family with at least two known members in each of them (Table I). The obtained pairwise alignments were used for generating the protein multiple sequence alignment of family GH97. The most conserved parts of the alignment are shown on Figure 2. The fragment of Leifsonia xyli CTCB07 genome [GenBank: NC_006087] revealed by Genomic BLAST has 2 stop codons in the region homologous to genes of GH97 family proteins. An analysis of the nucleic acid sequence allowed us to detect a frame shift (data not shown). The improved ORF encodes protein sequence (97C1_LEIXY), showing a significant sequence similarity with the other members of family GH97 along its whole length ( Figure 2). However, it was impossible to determine the very beginning of the protein sequence including the start codon. This protein is a divergent representative of the GH97 family and it could not be classified into any subfamily on the basis of pairwise sequence comparison. 97C1_LEIXY and its closest homologue 97D1_CAUCR (Evalue = 2 × 10 -54 ) have only 30% of sequence identity. A short gene fragment [GenBank: AY350337] from an uncultured bacterium was revealed by Genomic BLAST. 
It had been obtained and sequenced during PCR screening of human gut microflora [36]. The deduced protein sequence (97A2_UNBAC) corresponds to the C-terminal part of the others GH97 family proteins and has the highest similarity level with 97A1_BACTH (63% of sequence identity) and 97A1_TANFO (60%). It allows us to include this protein fragment into subfamily 97a (Table I). PSI-BLAST search of the non-redundant protein database yielded a unique eukaryotic protein fragment [GenPept: EAL42226] homologous to GH97 family proteins. Screen-Structure of Bacteroides thetaiotaomicron ATCC29148 genome fragment containing gene clusters for starch and hemicellulose utilization Figure 1 Structure of Bacteroides thetaiotaomicron ATCC29148 genome fragment containing gene clusters for starch and hemicellulose utilization. Arrows indicate the direction of gene transcription. Red arrows correspond to glycosidase (GH) and glycosyltransferase (GT) genes: family belonging is indicated. Yellow arrows correspond to genes coding outer membrane proteins involved in starch binding (susC-susF) and their homologues. Green arrows correspond to genes of the transcriptional activator SusR and predicted transcriptional regulators homologous to AraC. Cluster of the starch utilization genes sus Cluster of the hemicellulose utilization genes [37]. These 4 sequences were aligned for the identification of overlapping regions. AAAB01064948 sequence is homologous to the central part of AAAB01006165 sequence having 54% of identity at the protein level. The ends of AAAB01020110 sequence are respectively homologous to one end of AAAB01006165 and AAAB01068263 sequences: 65% and 69% sequence identity at the protein level. Thus, these 4 sequences correspond to at least two different genes. In total, they cover a complete bacterial gene encoding of a protein of family GH97. Taking into account i) a high similarity level of the 4 deduced protein sequences with bacterial proteins (50-71% identity with 97A1_BACFR, 97A2_BACTH, 97A1_TANFO, and 97A1_BACTH), ii) the intron-free gene structure, iii) an inability to map the genes on the mosquito chromosomes, and iv) absence of GH97 family proteins in any other eukaryotic organism, we suggest the bacterial origin of these four gene fragments. The bacterial origin could have resulted from a contamination of Anopheles gambiae tissue used for preparing of genome library by mosquito Bacteroides-like gut microflora. The evidence for such kind of contamination was obtained when testing the 35,575 clones from A. gambiae cDNA library [38]. It was found that at least 808 sequences appeared to be bacterial contaminants. In order to enlarge database of family GH97 we performed screening of the so-called "Environmental Sam-ples data" [39]. It revealed 60 nucleic acid sequences from the Sargasso Sea that are homologous to genes of GH97 family proteins. However, the majority of them encode only short protein fragments and many of them have a very high level of sequence similarity. Among them we found only 5 full-size or almost complete genes (each encodes a protein consisting of more than 650 amino acid residues). Three additional "gene" sequences were obtained by combining overlapping gene fragments with almost identical sequences (at least 95% of sequence identity at the protein level). Hypothetical proteins (97A1_ENSEQ-97A8_ENSEQ) encoded by these 8 genes should be placed in the 97a subfamily, on the basis of sequence similarity (Table I). 
Moreover, the majority of the incomplete genes encode protein fragments belonging to the same subfamily. Only four [GenPept: EAE76000, EAE67019, EAH16525, and EAH96685] and two [Gen-Pept: EAE21375 and EAG68085] protein fragments correspond to subfamilies 97b and 97c, respectively. One short fragment (137 amino acids; [GenPept: EAD85224]) cannot be unambiguously classified into any subfamily of the GH97 family. An analysis of the nucleic acid sequence encoding the latter protein fragment [GenBank: AACY01501371] allowed us to extend the protein fragment by using another start codon. The resulting protein sequence (97C1_ENSEQ; 218 amino acids) shows similarity with the sequences of the other members of family GH97 along its whole length. However, it was still impossible to include this protein fragment into any subfamily on the basis of pairwise sequence comparison. Phylogenetic analysis of family GH97 To check the actual relationships of proteins within the GH97 family we performed a phylogenetic analysis using [72]. Numbers of nucleic sequences are given (in parentheses) if the corresponding protein sequences have not been deposited. In some cases (asterisked), protein sequences were edited by changing the start codon. b Protein length was established as the number of amino acids in the corresponding precursor. Incomplete sequences (protein fragments) are asterisked. Portion of the multiple sequence alignment of the sequences analyzed Figure 2 Portion of the multiple sequence alignment of the sequences analyzed. Ten-letter name for each sequence is indicated in the leftmost column (for origin of the sequences see Table I). The alignment continuously spans three panels. Distances to the N-and C-termini and length of omitted fragments are indicated. Highly conserved residues are highlighted in sequences. Amino acid positions that are highly conserved within several subfamilies but varied in amino acid residues in different subfamilies are coloured. Subfamily belonging of sequences (for family GH97) are indicated in the most right. Amino acid residues, interacting with the substrate in the active center of GH27 and GH31 family glycosidases, are indicated by arrows at the bottom [50][51][52][53][54]. The arrow on the gray background corresponds to the Asp residue, playing the role of the nucleophile in glycosidases of families GH27 and GH31. Red asterisks over and under the alignment indicate three conserved positions (in red) probably corresponding to the nucleophile and proton donor in the glycosidases of family GH97 (see text). Alignment of GH27_ORYSA and GH31_ECOLI is structure-based. At the bottom of the figure, β-strands and α-helixes of the (β/α) 8 -barrel are indicated. The first part of the barrel (β1-β4) is shown according to the known structures of GH27 and GH31 family members [51,54]. The second part of the barrel (α4-α8) is based on generalization of predictions for several GH97 family proteins by 3D-PSSM, GOR IV, and nnpredict programs. Subfamily 97d Subfamily 97b Subfamily 97e Subfamily 97a the obtained multiple sequence alignment. It is well Phylogenetic trees of family GH97 Figure 3 Phylogenetic trees of family GH97. The trees were reconstructed by the PHYLIP package. Each node was tested using the bootstrap approach and the number of supporting pseudoiterations (out of 1000) is indicated for each internal knot. Subfamily belongings of sequences are indicated, the value of bootstrap support for each subfamily is coloured in yellow. 
Red arrows indicate to the enzymatically-characterized proteins 97A1_BACTH and 97A1_TANFO (see text). The origin of sequences is given in Table I B known that phylogeny is the best basis for verification of subfamily structure of a protein family. In many works, where composition of a glycosidase family has been analyzed, the monophyletic status was used as the main argument for a subfamily description. Among others [40][41][42][43][44], this method has been applied to GH13 [12,13], GH27 [23,24], and GH36 [24] families of glycoside hydrolases. In order to verify our subdivision of the GH97 family into subfamilies we checked the clustering of the family members in the phylogenetic tree. The maximum parsimony (MP; Figure 3A) and the neighbor-joining (NJ; Figure 3B) trees have very similar topology, suggesting the correct interpretation of the evolutionary events. When any subfamily of the GH97 family was considered as an outgroup, both MP and NJ trees showed that all other subfamilies appear to form monophyletic groups with a high bootstrap value (at least 95.4% of support at both trees). It should be noted that there is no pair of subfamilies that compose neighbor clusters on both trees with significant bootstrap support. This suggests approximately the same evolutionary distance between each pair of the subfamilies. The archaeal protein 97A1_HALMA is a clear outlayer in the cluster of subfamily 97a at MP and NJ trees ( Figure 3). The other members of this subfamily compose several subclusters, that include representatives either from Bacteroidetes or Proteobacteria phyla. Unclassified protein 97C1_LEIXY is the closest neighbor of subfamily 97c cluster at MP and NJ trees ( Figure 3) and therefore it can be considered as a divergent representative of this subfamily (Table I). Phylogenetic analysis of 97C1_ENSEQ protein fragment (data not shown) allowed us to place it into the same subfamily 97c. An analysis of the GH97 family multiple sequence alignment revealed a number of amino acid positions that are highly conserved within several subfamilies but varied in amino acid residues in different subfamilies ( Figure 2). Taken together, these signature sequence positions allow to predict the subfamily belonging of a protein sequence. Relationship of family GH97 with some other glycosidase families Depending on the GH97 query and the statistical significance threshold of E-value, during the second or third PSI-BLAST iterations, as a rule, we detected statistically significant similarities with α-galactosidases. They represent families GH27 and GH36 of clan GH-D (the α-galactosidase superfamily). More distant similarities were found with glycosidases of family GH31 (the α-galactosidase superfamily) and in some cases with enzymaticallyuncharacterized proteins from COG0535. COG0535 has been annotated as a family of predicted Fe-S oxidoreductases, like the closest COG0641 [45]. Our BLAST searches show, that both COG families are related to the radical SAM superfamily of Fe-S enzymes [46], having (β/α) 8 -barrel fold [PDB: 1R30]. When we used some representatives of subfamily 97a (for example, 97A1_BACTH) as a query and an E-value cut-off of 0.01, it was possible to reveal statistically significant similarity with glycosidases of family GH20 (clan GH-K). A similarity with proteins of this family was detected after the second PSI-BLAST iteration, while the next one or two iterations revealed a distant relationship with members of COG0296 (family GH13 of clan GH-H). 
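For illustration, a local equivalent of the iterative searches described above could be run as follows; the query file, database name, and thresholds are placeholders standing in for the NCBI web-server searches the authors actually performed.

```python
# Hedged sketch of a local PSI-BLAST run approximating the reported searches.
# Requires the NCBI BLAST+ psiblast binary; all paths and values are placeholders.
import subprocess

subprocess.run(
    [
        "psiblast",
        "-query", "seed_protein.faa",   # e.g. a GH97 sequence used as the query
        "-db", "nr",                    # non-redundant protein database
        "-num_iterations", "3",
        "-inclusion_ethresh", "1e-3",   # profile-inclusion E-value threshold
        "-matrix", "BLOSUM45",
        "-out", "psiblast_hits.txt",
    ],
    check=True,
)
```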
It should be noted that glycosidases from the clans GH-D, GH-H, and GH-K have a similar (β/α) 8 -barrel fold of their catalytic domain and the same molecular mechanism of the hydrolyzing reaction [2]. Thus, our results agree with the data of several authors [20,25,[47][48][49] showing the relationship of glycosidases from GH13, GH27, GH31, and GH36 families. More detail analysis of these families and their relationship was done by Rigden [26]. Using the α-galactosidases from rice (GH27_ORYSA, family GH27) and Lactobacillus plantarum (GH36_LACPL, family GH36) as a query sequence for PSI-BLAST searches we found their homology with some representatives of the GH97 family (for example, 97B1_BACFR and 97B2_BACTH) after two or three iterations. However, a statistically significant sequence similarity of GH97 family proteins with α-galactosidases is restricted to a fragment of about 100-150 amino acid residues ( Figure 2). This fragment corresponds to the N-terminal half of the catalytic (β/α) 8 -barrel domain of glycosidases from the αgalactosidase superfamily [50][51][52][53][54]. This half of the domain is known to be more conserved than the C-terminal half [26]. Therefore, we can assume that the catalytic domain of the GH97 family proteins also has a similar (β/ α) 8 -barrel fold. In order to check whether the whole (β/α) 8 -barrel domain is present in GH97 family proteins, we tried to reconstruct their secondary and tertiary structure. The SWISS-MODEL program failed to unambiguously predict the type of the tertiary structure. The 3D-PSSM, GOR IV, and nnpredict programs were used for prediction of the protein secondary structure. The results obtained suggest that the central part of the GH97 family protein sequences represents a typical and complete (β/α) 8 -barrel domain ( Figure 2). The N-and C-terminal parts of the sequences, mainly consisting of β-strands, most probably form two additional non-catalytic domains with an unknown function. However, different programs produce contradictory results regarding the number and exact location of the β-strands (data not shown). The non-catalytic domains of glycosi-dases from the α-galactosidase and α-glucosidase superfamilies are also predominantly composed of β-strands. At least some of these domains are involved in oligomerization and carbohydrate binding [2,54]. In all known glycosidases with the (β/α) 8 -barrel fold, the amino acid residues involved in the active center are located on the C-termini of the β-strands [61], a similar location of the active site was found in many other (β/α) 8barrel fold enzymes [60]. It is well known that two acidic groups (Asp and/or Glu) are almost always involved in the glycosidase active center, playing the roles of nucleophile and proton donor [4][5][6]. Their sequence location has been determined for several representatives of the GH27 and GH31 families [54,[62][63][64][65][66][67][68][69]. The Asp residue, playing the role of nucleophile, is located on the C-terminus of the fourth β-strand of the barrel. This residue is highly conserved among proteins of the αgalactosidase superfamily [23,26]. The homologous residue in the GH97 family proteins is more variable, being Asp in all members of three subfamilies (97b, 97c, and 97d) and Gly in the other proteins (subfamilies 97a and 97e), including 97A1_BACTH and 97A1_TANFO ( Figure 2). 
Since these two proteins display the α-glucosidase activity [29,30,70] we can conclude that a residue, set in another site, plays the role of nucleophile at least in some proteins of the GH97 family. It should be noted that we have found a residue on the C-terminus of the fifth βstrand in GH97 family sequences that is Gly in 97b, 97c, and 97d subfamilies, but Glu and Asp in subfamilies 97a and 97e respectively ( Figure 2). Therefore, this residue can be suggested as a possible nucleophile in glycosidases of 97a and 97e subfamilies. As a rule, the catalytically essential residues are highly conserved among enzymatically active members of a glycoside hydrolase family, being either Asp, or Glu. The distance between the carboxylic groups of the nucleophile and the proton donor should be similar in order to keep the catalytic machinery. Thus, the difference in the predicted nucleophile residue between 97a and 97e subfamilies is unexpected. However, this does not exclude the existence of a glycosidase activity in proteins with Asp residue at the fifth β-strand (subfamily 97e). To illustrate, in the GH32 family the Asp residue was experimentally shown to be the nucleophile, while several proteins of this family have Glu residue at the homologous position and at least some of them are catalytically active [10,11]. The proton donor of families GH27 and GH31 is located on the C-terminus of the sixth β-strand of the (β/α) 8 -barrel domain. It is outside of the N-terminal half of barrel, which can be unambiguously aligned with proteins of the GH97 family. However, on the C-terminus of the sixth βstrand of the predicted (β/α) 8 -barrel of the GH97 family there is an Asp residue, which is highly conserved in all subfamilies of the family (Figure 2). We suggest this residue as a possible proton donor. Taking into account another structure of the active center and significant sequence similarity of only a half of the catalytic domain, the current data do not support an inclusion of the GH97 family into the α-galactosidase superfamily. As far as we know, 97A1_BACTH and 97A1_TANFO are the only enzymatically-characterized proteins in the GH97 family [2]. All other members of this family have been found recently during genome projects and are encoded by ORFs. Genes of this family are represented only in a limited number of Eubacteria from phyla Actinobacteria (1 genus), Bacteroidetes (4 genera), Planctomycetes (1 genus), and Proteobacteria (3 and 4 genera from αand γ-classes, respectively), as well as in a unique Archaea (Haloarcula marismortui). However, many of these bacteria have several paralogous genes. The most interesting case is that of B. thetaiotaomicron ATCC29148, which has α-glucosidase SusB (97A1_BACTH) and 9 putative paralogues representing four GH97 subfamilies (Table I), at least two of the paralogues (97C1_BACTH and 97C2_BACTH) are also expressed in vivo [28]. This human commensal microorganism is known as a bacterium with the highest number of glycosidase and glycosyltransferase genes [27,71]. Taken together, these facts we can suggest that evolution of GH97 family proteins has been associated with multiple duplications, gene elimination, and horizontal transfer. Conclusion The results of the sequence analysis allow us to distinguish five subfamilies in the GH97 family of glycoside hydrolases. The experimental data on the enzymatic activity are available only for two representatives of the GH97 family: α-glucosidases 97A1_BACTH and 97A1_TANFO [29,30,70]. 
However, we suppose that the other members of this family may also possess some glycosidase activities. Our data suggest that proteins of this family have a com-mon evolutionary origin with glycosidases of the α-galactosidase superfamily. Many genes, encoding proteins of the GH97 family, are located in clusters with genes of glycoside hydrolases and other carbohydrate-active enzymes. For example, 97C1_BACTH and 97C2_BACTH (subfamily 97c) are encoded by genes of B. thetaiotaomicron located at a hemicellulose utilization locus together with eight other glycosidase genes (Figure 1). Taken together, these data support a recent suggestion to consider family GH97 (or GHX) as a new family of glycoside hydrolases [2,24]. The evolutionary relationship of GH97 proteins with glycosidases of the GH-D, GH-H, and GH-K (and probably GH-A) clans allows to extrapolate their common most important characteristics to glycoside hydrolases of the GH97 family. We can predict a similar (β/α) 8 -barrel fold of the catalytic domain and retaining mechanism of the glycoside bond hydrolysis for glycosidases of the GH97 family. Methods Protein and nucleic sequences were retrieved from the NCBI database [72]. All proteins analyzed in this work were designated by a ten-letter name (see Table I). The search for homologous proteins was done using the PSI-BLAST [73] and Genomic BLAST at the NCBI server. The statistical significance threshold for including a sequence in the model (E-value) used by PSI-BLAST in the next iteration was either 10 -2 or 10 -3 , BLOSUM45 was used as a substitution matrix. Multiple sequence alignment was prepared manually using the program BioEdit [74] on the basis of BLAST pairwise alignments. The multiple sequence alignment was used to implement classical phylogenetic inference programs, using either maximum parsimony or distance methods. Programs PROTPARS and NEIGHBOR from the PHYLIP package (version 3.6; [75]) were used. Moreover, programs SEQ-BOOT, PROTPARS, and CONSENSE and programs SEQ-BOOT, PROTDIST, NEIGHBOR, and CONSENSE were successively used to derive confidence limits, estimated by 1000 bootstrap replicates, for each node in the maximum parsimony and distance tree, respectively. The program TreeView Win32 (version 1.6.6; [76]) was used for drawing the trees. An analysis of the order of the display sequence during searches by PSI-BLAST [73] was used for a preliminary division of a family into subfamilies. The latter was defined as a group of proteins that are displayed at the top of the list in a PSI-BLAST query results. Depending on particular criteria of the protein similarity used, the algorithm can split a family into a larger or smaller number of groups of proteins. Like in some of our previous works [10,23,24,77], in this study we define a subfamily as a group of proteins that have at least 30% sequence identity. Phylogenetic analysis was used in order to verify the obtained subfamilies and to clarify their boundaries. The monophyletic status was used as a criterion for the final definition of a subfamily. The SWISS-MODEL modeling server [78] was used to predict the tertiary structure of proteins based on their amino acid sequences. The 3D-PSSM [79], GOR IV [80] and nnpredict [81] programs were used for prediction of the protein secondary structure. The 3D-PSSM program also was used to search the PDB database. Added in proof After submission of the manuscript, six new sequences of GH97 family proteins have been deposited at the NCBI database. 
Five of them (97A1_SHEBA, 97A1_SHEFR, 97A1_SHEDE, 97A1_SHEAM, and 97A1_SPHAL) belong to subfamily 97a (Table I). The sixth protein 97X1_SOLUS cannot be unambiguously classified into any subfamily of the GH97 family on the basis of pairwise sequence comparison, composition of the signature sequence positions, and phylogenetic analysis. Most probably it corresponds to a new subfamily.
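To make the subfamily criterion from the Methods concrete, the sketch below computes pairwise percent identity over an existing alignment and applies the 30% cutoff; the authors' exact identity calculation may differ, for example in how gap positions are counted.

```python
# Sketch of the pairwise-identity criterion used to define subfamilies
# (>= 30% sequence identity, see Methods). The two sequences are assumed to be
# already aligned, with '-' marking gaps.
def percent_identity(aligned_a: str, aligned_b: str) -> float:
    assert len(aligned_a) == len(aligned_b), "sequences must be aligned"
    pairs = [(x, y) for x, y in zip(aligned_a, aligned_b)
             if x != "-" and y != "-"]
    if not pairs:
        return 0.0
    matches = sum(1 for x, y in pairs if x == y)
    return 100.0 * matches / len(pairs)

def same_subfamily(aligned_a: str, aligned_b: str, cutoff: float = 30.0) -> bool:
    return percent_identity(aligned_a, aligned_b) >= cutoff
```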
5,983.6
2005-08-30T00:00:00.000
[ "Biology", "Chemistry" ]
Complete genome sequence of the novel duck hepatitis B virus strain SCP01 from Sichuan Cherry Valley duck Background The duck hepatitis B virus (DHBV) strain, designated SCP01, was isolated and identified from a Sichuan Cherry Valley duck in Southwestern China. To determine the origination and evolution of this isolated strain, we carried out complete genome sequencing of this strain. Findings Sequencing of the nucleotide sequence of DHBV strain SCP01 revealed a genome size of 3021 bp that contained three open reading frames, designated as C, S, and P, which were consistent with those of other duck hepatitis B viruses nucleotide sequences available in the GenBank of NCBI. Sequence comparisons based on the full-length genomic sequences showed that the DHBV SCP01 strain had the highest similarity (99.64 %) with the sequence of strain DHBV-XY, but had a lower similarity (90.04 %) with the sequence of strain DHBV CH5 isolated from Southwestern China. Phylogenetic analysis revealed that the DHBV-XY and DHBV SCP01 formed a branch that was clearly distinct from the other strains. Conclusion This study show that the DHBV SCP01 strain from Sichuan belonged to “Western” isolates, while the DHBV CH5 from Sichuan belonged to “Chinese” isolates. These data will promote further research into the evolutionary biology, epidemiology and pathobiology of hepadnavirus infections. In addition, continuing duck hepatitis B virus surveillance in poultry is critical to understand the patterns of DHBV infection, and to find further animal infection models to study HBV infection. Background Duck hepatitis B virus (DHBV), first discovered in Peking ducks in 1980 (Mason et al. 1980), and subsequently reported in Germany and other countries throughout the world (Mattes et al. 1990;Triyatni et al. 2001;Mangisa et al. 2004;Liu et al. 2014), is a member of the genus Avihepadnavirus, family Hepadnaviridae. The genome of DHBV, a complete minus and incomplete plus strand, is circular and approximately 3.0 kb in length (Cova et al. 2011). The genomic DNA is maintained in a circular conformation by a short cohesive overlap between the two DNA strands (Molnar-Kimber et al. 1984). DHBV is similar to hepatitis B virus (HBV) in terms of genetic organization and virus replication, and causes speciesspecific transient (acute) or persistent infection (Jilbert and Kotlarski 2000). There are approximately 240 million people worldwide that are chronically infected with HBV (Lavanchy and Kane 2016). Chronic HBV infection results in liver disease, including inflammation, fibrosis, cirrhosis and hepatocellular carcinoma (HCC) (Feng et al. 2010). DHBV does not lead to severe clinical disease in ducks or a drop in productivity, but serves as an animal infection model of human HBV, and has been used widely for comparative studies. Virus isolation Positive serum screened by PCR using the primers P1 and P2 (Table 1) was collected and filtered through a 0.22 µm filter. The 9-day-old duck embryonated eggs were inoculated with the filtered suspension (100 μL/embryo) into the allantoic cavity and then cultured in a 37 °C incubator and checked daily. The allantoic fluid was harvested at 4 days after inoculation and then for another round of inoculation. Nucleotide sequencing of the complete genome Viral DNA was extracted from 100 µL of allantoic fluid using the TIANamp Virus DNA/RNA Kit (TIANGEN BIOTECH, Beijing, China) according to the manufacturer's instructions. 
Afterwards, based on the multiple alignments of the complete genome of DHBV available in GenBank, two pairs of primers were designed using Primer Premier 5 software (Table 1; Fig. 1) (Günther et al. 1995). Primers P3 and P4 were designed to amplify the complete genome sequence of DHBV, the PCR amplification was performed in a 50 μL mixture containing 0.5 μL LAmp ™ DNA polymerase (5 U/μL), 1 μL dNTP Mix (10 Mm each), 5 μL 10 × LAmp ™ buffer (with 20 mM MgCl 2 ), 1 μL viral DNA template, 2 μL each of primers P3 and P4 (10 μM), and 38.5 μL ddH 2 O. The amplification procedure consisted of denaturation at 94 °C for 5 min followed by 32 cycles of denaturation at 94 °C for 30 s, annealing at 57 °C for 30 s, extension at 72 °C for 3 min 10 s, and then a final extension at 72 °C for 7 min. Primers P5 and P6 were used to amplify the region including the sequences of P3 and P4, which PCR conditions were similar to the above amplification. The amplified products were purified using the Universal DNA Purification Kit and cloned into the pGM-T vector (TIANGEN BIOTECH), and then sequenced using classical dideoxy Sanger sequencing (TSINGKE, Chengdu, China). The sequences were assembled using the Chromas software package (http://www.Technelysium.Com.au/chromas.html) to produce the final genome sequence. Genetic characterization To exhibit the genome features of the SCP01 strain, sequence analysis was conducted using the DNAMAN program. The open reading frames (ORFs) were identified according to the online tool ORF Finder (http://www. ncbi.nlm.nih.gov/gorf/gorf.html) in National Center for Biotechnology Information (NCBI). Phylogenetic analysis Phylogenetic analyses were performed using the maximum likelihood method with the use of Mega5.1 software (Tamura et al. 2011). Initial trees for the heuristic search were obtained by applying the neighbor-joining method to a matrix of pairwise distances estimated using the maximum composite likelihood approach. The bootstrap consensus trees inferred from 1000 replicates and branches corresponding to partitions reproduced in less than 70 % bootstrap replicates were collapsed. Results and discussion In June 2014, DHBV strain SCP01 was isolated from ducklings in a commercial Cherry Valley duck breeding company in Sichuan Province, Southwestern China. Subsequently, the nucleotide sequence of DHBV strain SCP01 was amplified by PCR using the primers P3 and P4, primers P5 and P6. Then the amplified products were purified and cloned into the pGM-T vector and then sequenced using classical dideoxy Sanger sequencing. Sequencing revealed that the PCR products through primers P3 and P4, primers P5 and P6 corresponded to the predicted length of 3027 and 526 bp in size, respectively, then the sequences were assembled using the Chromas software package to produce the final genome sequence of 3021 bp in length. The isolated virus was identified as DHBV and named SCP01. Sequence analysis revealed that the genome of DHBV SCP01 (GenBank accession number KM676220) was a double-strand circular DNA and had a size of 3021 bp, with a G + C content of 43.03 %. In addition, the coding region of DHBV SCP01 had three ORFs, designated ORF C, S, and P, which were predicted in the genome by comparison with the proposed structures of other duck hepatitis B viruses nucleotide sequences available in the GenBank of NCBI and identified by the following major criteria: an ATG start codon, a minimum length of 60 bp, and less than 60 % overlap with adjacent ORFs, using the online ORF Finder in NCBI. 
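The genome-level checks described above can be sketched as follows: G+C content of the assembled sequence and a simple scan for candidate ORFs with an ATG start codon and a minimum length of 60 bp. This is a plain forward-strand illustration, not the NCBI ORF Finder actually used, and it ignores the circular topology and the overlap criterion.

```python
# Hedged sketch of the genome-level checks: G+C content and a forward-strand
# scan for ATG-initiated ORFs of at least `min_len` nucleotides.
STOP_CODONS = {"TAA", "TAG", "TGA"}

def gc_content(seq: str) -> float:
    seq = seq.upper()
    return 100.0 * (seq.count("G") + seq.count("C")) / len(seq)

def candidate_orfs(seq: str, min_len: int = 60):
    """Yield (start, end) coordinates of ATG-initiated ORFs in all 3 frames."""
    seq = seq.upper()
    for frame in range(3):
        start = None
        for i in range(frame, len(seq) - 2, 3):
            codon = seq[i:i + 3]
            if start is None and codon == "ATG":
                start = i
            elif start is not None and codon in STOP_CODONS:
                if i + 3 - start >= min_len:
                    yield start, i + 3
                start = None
```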
ORFs S, C and P were predicted to encode the viral surface (preS/S protein: 36.2 kDa and S protein: 18.2 kDa), the core protein (preC/C protein: 35 kDa and C protein: 30.3 kDa), and the polymerase protein (P protein: 89.6 kDa), based on sequence similarities and the presence of conserved domains. The DHBV SCP01 sequence was aligned with 13 reference strains of DHBV and a Snow goose hepatitis B virus (SGHBV, GenBank accession number AF110998) sequence obtained from GenBank database. The percent nucleotide identity and amino acid identity of DHBV strain SCP01 with those of other avian hepadnaviruses are summarized in Table 2. The DHBV SCP01 genome is the same size as the Western isolates AY250902.1 (Mangisa et al. 2004 Sequence alignment between the reference strains and the DHBV SCP01 was performed using the Mega 5.1 software, and the neighbor-joining method (Clustal W) was applied. The corresponding phylogenetic tree was constructed using the sequence data of SGHBV, a member of the Avihepadnavirus, as the outgroup. The phylogenetic relationships are presented in Fig. 2. DHBV-XY (GenBank accession number HQ214130.1; origin, Xinyang, China) and DHBV SCP01 formed a branch that was clearly distinct from the other strains. We found that the DHBV SCP01 strain had the highest similarity (99.64 %) with the sequence of strain DHBV-XY, but had a lower similarity (90.04 %) with the sequence of strain DHBV CH5 (GenBank accession number EU429325.1; origin, Sichuan, China) isolated from Southwestern China. Comparing the DHBV SCP01 strain with other isolated DHBV strains, the ORF S, C, and P similarities were approximately 88.35-99.90, 86.60-99.78, and 89.62-99.62 %, respectively, and the complete genome sequence similarity was approximately 89.47-99.64 %. This report will aid our understanding of the epidemiology and molecular characteristics of DHBV from Cherry Valley ducks in Southwestern China. The phylogenetic analysis of the entire nucleotide sequence indicated that the DHBV strain SCP01 was clustered with the strains from Western countries, and it was more closely related to DHBV-XY from central China, than to strains isolated from other areas of China, France, Canada, India, South Africa, Australia, Germany, and the United States. The DHBV strain SCP01 and DHBV-XY belong to "Western" isolates, while others (especially DHBV CH5) belong to "Chinese" isolates. This might be because of the introduction of exotic ducks between countries in recent years during rapid economic development. Phylogenetic analysis of the individual ORFs did not alter the position of the DHBV SCP01 isolate in the trees. Upon translation of ORF P, the DHBV SCP01 isolates were found to share signature amino acids with the Western isolates, as opposed to those of the Chinese isolates (Table 3). The translated sequences of the DHBV SCP01 isolates had a few amino acid changes when compared with the sequences of Western isolates, especially between the DHBV-XY and the DHBV strain SCP01, which were mostly in the polymerase protein (three amino acid changes). Conclusion This study show that the DHBV SCP01 strain from Sichuan belonged to "Western" isolates, while the DHBV CH5 from Sichuan belonged to "Chinese" isolates. The complete molecular characterization of the DHBV SCP01 strain will contribute to further studies on molecular epidemiology and enable the development of better measures to control DHBV. 
To date, there are no effective preventive vaccines against DHBV in poultry, and the nature of circulating DHBV and HBV has remained largely elusive in China. Our hope in submitting this report is that these data will promote investigations by others in the virology community into the evolutionary biology, epidemiology and pathobiology of hepadnavirus infections.
2,382.8
2016-08-17T00:00:00.000
[ "Biology", "Environmental Science", "Medicine" ]
HINT3: Raising the bar for Intent Detection in the Wild Intent Detection systems in the real world are exposed to complexities of imbalanced datasets containing varying perception of intent, unintended correlations and domain-specific aberrations. To facilitate benchmarking which can reflect near real-world scenarios, we introduce 3 new datasets created from live chatbots in diverse domains. Unlike most existing datasets that are crowdsourced, our datasets contain real user queries received by the chatbots and facilitates penalising unwanted correlations grasped during the training process. We evaluate 4 NLU platforms and a BERT based classifier and find that performance saturates at inadequate levels on test sets because all systems latch on to unintended patterns in training data. Introduction Over the last few years, task-oriented dialogue systems have gained increasing traction for applications like personal assistants, automated customer support agents, etc. This has led to the availability of several commercialised and/or open conversational bot building platforms. Most popular systems today involve intent detection as a vital part of their Natural Language Understanding (NLU) pipeline. Recent advances in transfer learning (Howard and Ruder, 2018;Peters et al., 2018;Devlin et al., 2019) has enabled systems that perform quite well on existing benchmarking datasets (Larson et al., 2019;Casanueva et al., 2020). Definitions of intent often vary across users, tasks and domains. Perception of intent could range from a generic abstraction such as "Ordering a product" to extreme granularity such as "Enquiring for a discount on a specific product if ordered using a specific card". Additionally, factors such as imbalanced data distribution in the training set, assumptions during training data generation, diverse background of domain experts involved in defining the classes make this task more challenging. During inference, these systems may be deployed to users with diverse cultural backgrounds who might frame their queries differently even when communicating in the same language. Furthermore, during inference, apart from correctly identifying in-scope queries, the system is expected to accurately reject out-of-scope (Larson et al., 2019) queries, adding on to the challenge. Most existing datasets for intent detection are generated using crowdsourcing services. To accurately benchmark in real-world settings, we release 3 new single-domain datasets, each spanning multiple coarse and fine grain intents, with the test sets being drawn entirely from actual user queries on the live systems at scale instead of being crowdsourced. On these datasets, we find that the performance of existing systems saturates at unsatisfactory levels because they end up learning spurious patterns from the training dataset instead of generalising to the perceived meanings of intents. We evaluate 4 NLU platforms -Dialogflow 1 , LUIS 2 , Rasa NLU 3 , Haptik 45 and a BERT (Devlin et al., 2019) based classifier on all 3 datasets and highlight gaps in language understanding. We further probe into queries where all the current systems fail and question the efficacy of the current approach of learning. Additionally, we repeat all our experiments on the subset of training data and show a performance drop in all the systems despite retaining relevant and sufficient utterances in the training subset. 
We've made our datasets and code freely accessible on GitHub to promote Prior Work Despite intent detection being an important component of most dialogue systems, very few datasets have been collected from real users. Web Apps, Ask Ubuntu and Chatbot datasets from (Braun et al., 2017) contain a limited number of intents (<10), oversimplifying the task. More recent datasets like HWU64 from (Liu et al., 2019) and CLINC150 from (Larson et al., 2019) span a large number of intents in multiple domains but are generated using crowd sourcing services hence are limited in diversity in user expressions which arise from but not limited to domain specific presumptions, context from how and where the bot is made available, paraphrases emerging from cultural and ethnic diversity of user base, conversational slang, etc. Our work has some similarity with CLINC150, in that they also highlight the problem of out-ofscope intent detection and with BANKING77 from (Casanueva et al., 2020) Datasets We introduce HINT3, a collection of datasets shown in Table 2 -SOFMattress, Curekart and Powerplay11 each containing diverse set of intents in a single domain -mattress products retail, fitness Table 1 shows few example intents of varying granularity in HINT3 dataset, along with examples of training queries created by domain experts and in-scope, out-of-scope queries received from real users. Training Data Collection Training data is prepared by a team of domain experts trying to emulate real users after in-depth research of historical user queries. The experts do not create an explicit set of out of scope queries primarily because the universe of such queries is infinitely big. Training datasets show class imbalance, occurrence of domain specific words, acronyms 7 . All training data queries are in English. Dataset Variants In addition to Full training sets, we create Subset versions for each training set. For each class, after retaining the first query we iterate over the 7 github.com/hellohaptik/HINT3/tree/master/data exploration rest, discarding a query if it has an entailment score (Bowman et al., 2015) greater than 0.6 in both directions with any of the queries retained so far i.e. the subset version has the following property where I is the set of all intents,X i is the set of queries retained for class i, E(h, p) is the entailment scoring function with h as hypothesis and p as premise. We use ELMo model trained on SNLI (Peters et al., 2018;Parikh et al., 2016) 8 for E(h, p). These are intended to evaluate performance with only semantically different sentences in the training set as ideally systems should already understand semantically similar queries to the ones present in the training set. Test Data Collection and Annotation Our test sets contain the first message received by live systems from real users over a period of 15 days. Inter-annotator agreement was 75.8%, 80.0% and 73.4% for SOFMattress, Curekart and Power-play11 respectively and conflicts were resolved by domain experts. One major reason for low interannotator agreement was unclear criteria for defining an intent which sometimes lead to overlapping intents of different levels of granularity, even after we had made sure to manually merge any conflicting or highly similar intents in the training data. Directly coming from real users our test set queries also contain messaging slangs, acronyms, spelling mistakes, grammatical mistakes and usage of code-mixed languages 7 . 
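Returning to the Dataset Variants described earlier, the subset-construction rule can be sketched as follows; `entailment_score(hypothesis, premise)` stands in for the ELMo model trained on SNLI that the authors used.

```python
# Sketch of the subset-construction rule from the Dataset Variants subsection:
# for each intent, keep the first query and drop any later query that is
# mutually entailed (score > 0.6 in both directions) by a query already kept.
THRESHOLD = 0.6

def build_subset(queries, entailment_score):
    retained = [queries[0]]
    for query in queries[1:]:
        redundant = any(
            entailment_score(query, kept) > THRESHOLD
            and entailment_score(kept, query) > THRESHOLD
            for kept in retained
        )
        if not redundant:
            retained.append(query)
    return retained
```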
Queries in non-Latin script or code-mixed languages were marked as out of scope (labelled as NO NODES DETECTED). Since live chat systems do not cater to all queries related to a brand, our test set contains relevant out-of-scope queries received from users about that domain. Any identifiable information about users or brands was replaced with made-up values in both the train and test sets. Benchmark Evaluation We evaluated the platforms Dialogflow, LUIS, Rasa and Haptik, in addition to a BERT-based classifier, on our datasets. All layers of BERT were fine-tuned with a learning rate of 4e-5 for up to 50 epochs, with a warmup period of 0.1 and early stopping. Out-Of-Scope (OOS) prediction We use thresholds on the model's probability estimate for the task of predicting whether a query is OOS. We report performance at thresholds ranging from 0.1 to 0.9 in intervals of 0.1, to show the maximum performance a model can achieve irrespective of how the threshold is chosen. Metrics We consider Accuracy and the Matthews Correlation Coefficient as overall performance metrics for the systems. We use OOS recall (Larson et al., 2019) to evaluate performance on OOS queries and the accuracy on in-scope queries to evaluate performance on in-scope queries. Results Figure 1 presents results for all systems, for both the Full and Subset variants of the datasets. The best accuracy on all the datasets is in the early 70s. The best MCC for the datasets varies from 0.4 to 0.6, suggesting the systems are far from perfectly understanding natural language. In Table 3, we consider in-scope accuracy at a very low threshold of 0.1, to see what maximum in-scope accuracy current systems could achieve if false positives on OOS queries did not matter. Our results show that even with such a low threshold, the maximum in-scope accuracy which systems are able to achieve on the Full training set is quite low, unlike the 90+ in-scope accuracies these systems have reported on other public datasets like CLINC150 (Larson et al., 2019). The in-scope accuracy is even worse for the Subset of the training data. Table 5 shows the percentage drop in in-scope accuracy on the subset data across all systems as compared to the in-scope accuracy on the full data. The drop varies from 0.6% to 22.3% across datasets and platforms. In an ideal world, this drop should be close to 0 across all datasets: if a system understood the meaning of the queries in the training data, its performance should not be affected at all by removing training queries which are semantically similar to ones already present. Analyzing a few example queries which failed on all platforms (Table 4) suggests that these models aren't actually "understanding" language or capturing "meaning", but are instead capturing spurious patterns in the training data, as was also pointed out in (Bender and Koller, 2020). Predicting based on these spurious patterns, which the models latch on to during training, leads to models having high confidence even on OOS queries. Figure 2 shows this behaviour on the SOFMattress Full dataset: a significant percentage of OOS queries receive high confidence scores on all systems except LUIS, for which this comes at the cost of in-scope accuracy.
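To make the evaluation protocol above concrete, here is a minimal sketch under the assumption that each system returns a top intent and a confidence score per query: a query is predicted out-of-scope when its confidence falls below the threshold, and in-scope accuracy, OOS recall, overall accuracy and MCC are then computed on the resulting labels. The `NO_NODES_DETECTED` identifier is simply a code-level stand-in for the OOS label used above; the function names are illustrative.

```python
from sklearn.metrics import matthews_corrcoef

OOS = "NO_NODES_DETECTED"

def evaluate(gold, pred_intents, confidences, threshold):
    """gold/pred_intents: lists of intent names (gold uses OOS for out-of-scope queries);
    confidences: model confidence per prediction; threshold: OOS cut-off."""
    final = [p if c >= threshold else OOS for p, c in zip(pred_intents, confidences)]

    in_idx = [i for i, g in enumerate(gold) if g != OOS]
    oos_idx = [i for i, g in enumerate(gold) if g == OOS]

    in_scope_acc = sum(final[i] == gold[i] for i in in_idx) / max(len(in_idx), 1)
    oos_recall = sum(final[i] == OOS for i in oos_idx) / max(len(oos_idx), 1)
    accuracy = sum(f == g for f, g in zip(final, gold)) / len(gold)
    mcc = matthews_corrcoef(gold, final)
    return {"accuracy": accuracy, "mcc": mcc,
            "in_scope_accuracy": in_scope_acc, "oos_recall": oos_recall}

# Sweep thresholds 0.1 ... 0.9, as in the evaluation described above:
# results = {t / 10: evaluate(gold, preds, confs, t / 10) for t in range(1, 10)}
```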
NLU systems do not seem to be actually "understanding" language or capturing "meaning". We believe our analysis and datasets will help in developing better, more robust dialogue systems.
2,324
2020-09-29T00:00:00.000
[ "Computer Science" ]
Skin Wound Healing Rate in Fish Depends on Species and Microbiota The skin is a barrier between the body and the environment that protects the integrity of the body and houses a vast microbiota. By interacting with the host immune system, the microbiota improves wound healing in mammals. However, in fish, the evidence of the role of microbiota and the type of species on wound healing is scarce. We aimed to examine the wound healing rate in various fish species and evaluate the effect of antibiotics on the wound healing process. The wound healing rate was much faster in two of the seven fish species selected based on habitat and skin types. We also demonstrated that the composition of the microbiome plays a role in the wound healing rate. After antibiotic treatment, the wound healing rate improved in one species. Through 16S rRNA sequencing, we identified microbiome correlates of varying responses on wound healing after antibiotic treatment. These findings indicate that not only the species difference but also the microbiota play a significant role in wound healing in fish. Introduction The skin functions as a barrier between the body and the environment. It is important to keep the skin intact to maintain the animals' integrity. Although the basic function of the skin is very similar in most animals, its composition and organization vary between species and their habitats [1,2]. By using diverse model systems, the mechanisms of wound healing and strategies to improve the wound healing process have been widely studied [3][4][5]. Fish can serve as a good model to study wound healing as wound repair mechanisms in fish are very similar to those in mammals [6][7][8]. Although wound healing of diverse fish species was reportedly affected by the temperature that fish live in [9,10], the importance of host factors on wound healing in many species has not been well addressed. Fish skin is composed of epidermis and dermis [2,11], and wound healing mechanisms differ between wound types [6,7]. Deep wounds in fish take longer to heal than superficial and partial wounds, and recovery follows a similar process as in mammals [8]. At an initial stage, keratocytes derived from the intermediate layer of the epidermis move quickly to cover the wounded area, followed by inflammation. Neutrophils and macrophages are recruited to the wounded area to induce inflammation and activate growth factor signaling, which promotes cell proliferation and the formation of the granulation tissue. The granulation tissue forms along the wound borders and replaces the damaged tissue. The skin houses a vast microbiota [12,13] that can migrate to the wound bed upon injury, and the role of microbiota, especially commensal bacteria, in wound healing has been studied previously [14]. Although the utility of antibiotics in wound healing has been debated [15,16], it was recently identified that commensal microbiota plays a key role in wound healing in mice. Upon skin injury, commensal bacteria move to the dermis and recruit neutrophils that activate dendritic cells (pDCs), which secrete type I IFN. Type I IFN promotes the expression of growth factors from fibroblasts and macrophages [14]. As in the mouse study, we hypothesized that the commensal microbiota on the fish skin might play a positive role in wound healing. In our preliminary results, we noticed that Silurus microdorsalis, which is a scaleless fish species and lives in rocky habitats, has a fast wound healing rate. 
Catfish, a scaleless fish with a fast wound healing rate [17], contains a large amount of mucus on their skin that assists wound healing [18]; in contrast, scales seem to delay re-epithelialization after mechanical wounds [7,8]. However, few reports compare the wound healing rates of fish with and without scales in a similar environment. To test if skin microbiota plays a positive role in the wound healing rate in fish, we screened seven fish species that encompass fish both with and without scales and are locally available and identified two fish species that showed the fastest wound healing rates. A previous study had reported that rifampicin, a broad-spectrum antibiotic, induces a drastic change in the composition of microbiota on fish skin and gut [19]. Therefore, the role of microbiota in wound healing was tested in the presence of rifampicin. Of note, we found a positive effect of antibiotics on wound healing in one species. From 16S rRNA metagenome sequencing, we could identify several bacteria correlated with differential wound healing responses to antibiotic treatment. Korean Bullhead and Chinese Bleak Display Faster Wound Healing Rates To test the positive effect of microbiota on wound healing rates in fish, first, we tried to select fish that have fast wound healing rates. Seven locally available fish species were selected based on the differences in scales and habitats ( Figure 1A). The seven fish species consisted of two fish species of the order Siluriformes (Pseudobagrus fulvidraco, Silurus microdorsalis) without scales on their skin, two fish species of the family Cyprinidae (Aphyocypris chinensis, Rhynchocypris oxycephalus) with their scales exposed to the surface of the epidermis, and three fish species of the family Cobitidae (Misgurnus mizolepis, Niwaella multifasciata, Iksookimia koreensis) with their scales embedded in the dermis. These fish are known to have different upstream and downstream habitats. After wounding (3 mm in diameter; deep wound), all fish showed a similar wound healing process; wound size initially enlarged until four days post-wounding (dpw) and then contracted afterward; pigment recovery started around six dpw (Figure 1B-D and Figure S1A-C). The wound healing rate was quantified by the extent to which the wound size decreases and the pigment recovers ( Figure 1E,F). Korean bullhead (P. fulvidraco) and Chinese bleak (A. chinensis) showed the fastest wound healing rates among the seven species (average wound size/pigment recovery (aW/aP) [%] at ten dpw: Korean bullhead, 25.704/77.627; Chinese bleak, 0.000/97.450). Although most S. microdosalis died in the middle of the experiment, they had a similar wound healing rate to that of the Korean bullhead ( Figure S1, see Supplementary Materials). In addition to the differences between individual species, the type of scale was correlated with the wound healing rate. Wound healing was faster in two fish species of the family Cyprinidae, which have scales exposed to the surface of the epidermis, than in three fish species of the family Cobitidae, which have scales embedded in the dermis (aW/aP [%] at ten dpw: Cyprinidae, 28.299/73.806; Cobitidae, 101.503/9.759; Figure S1D,E). Rifampicin Treatment Induces Different Effects on the Wound Healing Rate We tested if skin microbiota has a positive effect on the wound healing rate. Two fish species (Korean bullhead and Chinese bleak) were selected for testing the effect of microbiota on wound healing since they have the fastest wound healing rate. 
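As a concrete illustration of how the healing metrics above can be computed, the sketch below turns per-day wound or pigment area measurements (e.g., exported from ImageJ) into the percentages used in the comparison: values are normalized to day 0, and the wound healing rate is read off at 10 dpw. The function and variable names are illustrative and not taken from the study's analysis scripts; the example measurements are made up.

```python
def percent_of_day0(areas_by_day):
    """areas_by_day: {dpw: measured area}; returns {dpw: area as a percentage of day 0}."""
    day0 = areas_by_day[0]
    return {dpw: 100.0 * area / day0 for dpw, area in areas_by_day.items()}

def healing_rate_at(areas_by_day, dpw=10):
    """Smaller wound-size percent (or larger pigment percent) at `dpw` means faster healing."""
    return percent_of_day0(areas_by_day)[dpw]

# Example with hypothetical measurements (mm^2) for one fish:
wound_areas = {0: 7.1, 2: 9.0, 4: 9.8, 6: 6.2, 8: 3.5, 10: 1.8}
print(healing_rate_at(wound_areas))  # wound size percent at 10 dpw
```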
Rifampicin was used to induce changes in the microbiota on fish skin. (Figure 1 caption fragment: changes in wound size percent and pigment percent, computed as (wound size or pigment at n dpw / value at 0 dpw) × 100; the wound healing rate was compared by the wound size and pigment percent at 10 dpw; sample sizes denoted next to the graph; scale bars 3 mm; ns: not significant; * p ≤ 0.05, ** p ≤ 0.01, *** p ≤ 0.001.) Before making a wound on the skin, fish were raised either in APW (group A) or APW + RIF (25 µg/mL; group R) for three days (Figure 2A) to induce changes in the microbiota. In Chinese bleaks, the initial expansion of the wound size appeared to be larger in group R than in group A (Figure 2B,C), and wound size reduction occurred faster in group A than in group R (Figure 2B,C,E; average dpw of 80% wound size in group A/R, 6.448/7.170). In Korean bullheads, there was no difference in the initial wound size expansion rate (1-4 dpw) between group A and group R. However, after the initial expansion, group R had a faster wound size reduction rate than group A (Figure 2B,C,E; average dpw of 80% wound size in group A/R, 9.043/7.835). While the pigmentation area of Chinese bleaks did not show any significant difference between group A and group R, pigment recovery of Korean bullheads was faster in group R than in group A (Figure 2B,D,F; average dpw of 80% pigmentation in group A/R, 11.974/9.358). Histological analysis was performed to confirm that the wound healing of each group was completed at 16 dpw (Figure S2). Similar to the control, which showed clear separation of the boundaries between epidermis, dermis, and red muscle, the wounded side also had distinguishable layers; however, the dermis thickened, and the skin structure was considerably disorganized. (Figure 2 caption fragment: quantification of wound size percent and pigment percent; the wound healing rate was compared by the average day at which wound size percent reaches 20% or pigment percent reaches 80%; sample sizes denoted next to the graph; ns: not significant; * p ≤ 0.05, ** p ≤ 0.01.) Microbiomes in Two Fish Species Respond Differently to Rifampicin Treatment To determine if changes in the fish skin microbiota after rifampicin treatment are associated with changes in the wound healing rate, metagenome analysis was performed on the mucus. Microbiomes were sampled from three fish in each group on the third day after rifampicin treatment and were analyzed using 16S rRNA metagenome sequencing (Figure 3A). The non-bacterial contamination was less than 1.8%. From the analysis of the 12 metagenome samples, 22,459 features and 336 bacteria were identified to the genus level. The compositions of the microbiomes of Korean bullheads and Chinese bleaks were different before the treatment, suggesting that host factors regulate the skin microbiota (Figure 3B). Rifampicin treatment changed the composition of the microbiomes in both Korean bullheads and Chinese bleaks (Figure 3B). Notably, the composition of the microbiome in the rifampicin-treated Korean bullheads was similar to that in the Chinese bleaks before the treatment. Although the composition of the microbiomes changed upon rifampicin treatment, the compositions of the top twenty dominant members of the microbiome from each group were quite similar (Figure S3A). This result may suggest that habitats and raising environments have a particularly important effect on the major commensal microbiota on the skin.
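The comparison of dominant genera in the next paragraph amounts to converting genus-level counts into relative abundances per group and ranking them. A minimal sketch, assuming a hypothetical input format (a dict mapping genus name to its summed count in one group), is given below.

```python
def top_dominant(counts_by_genus, n=20):
    """counts_by_genus: {genus_name: summed count for one group}.
    Returns the n most abundant genera with their relative abundance in percent."""
    total = sum(counts_by_genus.values())
    ranked = sorted(counts_by_genus.items(), key=lambda kv: kv[1], reverse=True)
    return [(genus, 100.0 * count / total) for genus, count in ranked[:n]]

# e.g. the share held by the five most abundant genera in group A vs. group R:
# share_a = sum(pct for _, pct in top_dominant(group_a_counts, n=5))
# share_r = sum(pct for _, pct in top_dominant(group_r_counts, n=5))
```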
The percentage of the top twenty dominant genera (*family), 1% Ab, and 0.1% Ab, were very similar between the groups ( Figure 3C). In all groups, the top five dominant genera or families* (class) are Muribaculaceae (Bacteroidia), Muribaculaceae* (Bacteroidia), Lactobacillus (Bacilli), Muribaculum (Bacteroidia), and Lachnospiraceae* (Clostridia), with differences in their ranks in each group ( Figure S3A). The composition of the top five dominant genera of Chinese bleak in group A decreased by~7.4% in group R (from 64.46% to 57.06%). In contrast, the top five dominant genera of Korean bullhead in group A increased by~8.8% in group R (from 53.74% to 62.54% of the total microbiome). Upon Rifampicin Treatment, the Microbiome Changes with Some Correlation with the Wound Healing Rate To identify correlations between changes in the wound healing rate after rifampicin treatment, we performed a differential abundance test on the microbiome of group A and group R. Bacteria showing more than a two-fold difference (FDR < 0.1) between groups were plotted at the genus level ( Figure 4A-D and Figure S3B,C). Only ten genera showed differences between groups in Chinese bleak compared with 45 in Korean bullhead, which implies that the antibiotic sensitivity of bacteria in Korean bullhead is higher than that in Chinese bleak. In Korean bullhead, bacteria with higher compositions (≥0.1%) included the genera (class) Elizabethkingia (bacteroidia), Niveispirillum (Alphaproteobacteria), and unclassified (Verrucomicrobiae) ( Figure 4C and Figure S3B). In particular, the genus Elizabethkingia (4.61%) was 5149 times more abundant in group A than in group R. Elizabethkingia is commonly found in nature and is known as a bacterium that lives in fish, causing infections [18]. Another harmful bacterium that causes infections, Shewanella (0.37%, 59.30 fold more abundant in group A), was found in Korean bullhead. This bacterium plays a role in the decaying of fish [19]. In contrast to Korean bullhead group A, only ten genera were abundant in group R, and their composition was minor (≤0.12%). None of these bacteria were known to be harmful. Taken together, in Korean bullhead, the number of bacteria with higher abundance in group A than in group R was large, and many of these were found to negatively affect the fitness of the fish. In contrast, the number of bacteria with higher abundance in group R than group A was relatively small, and these were not known to be harmful. The results suggest that the difference in wound healing rate between Korean bullhead in group A and group R could be attributed to the harmful bacteria in group A, which were reduced in composition upon rifampicin treatment. Discussion In this study, we identified differences in the wound healing rate between seven species that were raised in the same environment. Upon wounding, no general difference was observed between fish with and without scales. However, the wound healing rates differed significantly depending on the type of scales; the wound healing rate was faster in fish with exposed scales than in those with deeply embedded scales. We have no explanation for the actual mechanisms of how these two types of scales influence wound healing. Differences in the composition of skin between fish species may underlie the varied wound healing rate. The presence of club cells varies between fish species, and club cells were presumed to perform protective functions against external stressors, including mechanical wounds [20,21]. 
However, the fish with their scales embedded in the skin were also reported to have club cells on their skin [22,23], implying that simply the presence of the club cells could not support the difference in wound healing rates. Other factors, including the difference in the density of club cells, the response of club cells upon mechanical wounding, and the contents in the club cells, would play a role in varied wound healing rates [20]. Confirming the role of club cells in varied wound healing rates requires further research. Among seven local fish species, we identified two fish species (Korean bullhead and Chinese bleak) that showed the fastest wound healing rates. Since the seven fish were raised in similar environments, the differences in wound healing rate could be attributed to host factors, including genetic factors, the migration and functions of keratocytes and immune cells, and cell proliferation [6][7][8]. In addition, the commensal microbiota can affect wound healing mediated by type I IFN-dependent and IL-1β repair mechanisms [14,24]. However, in contrast to the recent reports, we identified a negative role of microbiota in wound healing of fish skin. Upon rifampicin treatment, only Korean bullhead showed a positive response to wound healing. The difference between the mouse study and our study could be attributed to the different environments in which the animals were raised. In the mouse study, mice were kept in a specific pathogen-free or relatively clean environment; therefore, after wounding, few environmental microbiotas may have had access to the wound bed [25]. In contrast, the fish in our experiment were exposed to water that could be a source of very diverse microbiota [17,26]. Moreover, from 16S rRNA metagenome sequencing, we found several harmful bacteria on the skin of Korean bullhead. Upon rifampicin treatment, the composition of those bacteria was severely reduced. Since the fish were raised in the same condition and showed no signs of infection, we infer that any difference caused by rifampicin treatment may have resulted from changes in the composition of commensal microbiota and not from infection by pathogens. Further research is needed to evaluate the effects of removing harmful bacteria that we found on healing wounds. Furthermore, we could not entirely rule out the possibility of some differences in the mechanisms of the wound healing process mediated by microbiota; therefore, it would be interesting to examine whether upstream and downstream signaling pathways of type I IFN-dependent and IL-1β-dependent repair mechanisms are conserved or divergent between mammals and fish upon antibiotic treatment. In summary, we have shown that wound healing rates vary between fish species and that the composition of microbiota had a correlation with wound healing rate. This suggests that there would be some host factors speeding up the wound healing process and that the presence of specific microbiota can facilitate or inhibit the wound healing rate. A large number of the recent microbiome and genome analyses performed on diverse vertebrate species should be harnessed to identify the host factors and the molecules mediating the interactions between hosts and microbiota during wound healing. 
Our study in fish, where the presence of host factors controlling wound healing rates and microbiome correlates of improved wounding healing rates were presented, offers a valuable resource for further research that focuses on improving the wound healing process in vertebrates including both fish and humans. Wound generation and antibiotic treatment. To introduce the wound, all fish were anesthetized using MS-222 solution (Sigma-Aldrich, St. Louis, MO, USA; 0.17 g/L APW). Wounds were introduced onto the skin with a reusable rapid punch kit (WPI, Sarasota, FL, USA; 3.0 mm tip). For A. chinensis and R. oxycephalus, wounds were introduced onto the left flank directly anterior to the anal and dorsal fins; for other fish, wounds were introduced at the point where the anterior and the posterior were divided in a 2:1 ratio considering the shape of the body. Rifampicin solution (KisanBio, Seoul, Korea; 25 µg/mL APW) was used, assuming that the steady-state internal concentration in fish becomes similar to the concentration in water [19]. The APW and APW with rifampicin (APW + RIF) in the aquarium were changed once a week to maintain antibiotic effectiveness. Imaging and quantification of wounds. To track the wound healing rate, each fish was anesthetized using MS-222 solution, and the images were taken using a digital camera connected to a stereo-microscope (SMZ 745T; Nikon, Tokyo, Japan). Wounds were imaged every day until six days post-wounding (dpw) for all fish species, followed by imaging every two days. In the experiment with rifampicin, wounds were imaged every day. In both experiments, seven fish from each species were imaged. Wound size and pigment were quantified using Image J (ver.1.53c). Each wound was characterized based on the color difference between the wounded and unwounded area. The wound size was estimated by identifying the edges of the wound and measuring the area. Microbiome sampling. Two species, P. fulvidraco and A. chinensis were divided into two groups of eight. One group was raised in APW and the other in APW + RIF. The microbiota were harvested according to methods described previously [19]. Briefly, the mucus of each fish was extracted by adding PBST (137 mM NaCl, 10 mM phosphate, and 0.1% Tween 20, pH 7.4) in a sterile 15 mL conical tube and then vortexed for 2 min, pausing every 20 s. The mucus extracted from the PBST solution was centrifuged for 2 min at 13,000 rpm at room temperature to remove the top layer. 16S rRNA metagenome sequencing. Total genomic DNA extraction was performed using a QIAamp DNA Stool Mini Kit (Qiagen, Germantown, MD, USA). The concentration of DNA was measured using a Qubit 3.0 Fluorometer (ThermoFisher, Waltham, MA, USA) to ensure that adequate amounts of high-quality genomic DNA had been extracted (>1 ng/µL). The V3-V4 hypervariable region of the bacterial 16S rRNA gene was amplified using PCR. PCR was performed using two primers: the forward primer (341F: 5 -TCGTCGGCAGCGTCAGATGTGTATAAGAGACAGCCTACGGGNGGCWGCAG-3 ) and reverse primer (806R: 5 -GTCTCGTGGGCTCGGAGATGTGTATAAGAGACAGGACTACH VGGGTATCTAATCC-3 ). A total of 5 ng DNA was used for doing PCR. The reaction was set up as follows: extracted genomic DNA 2.5 µL; amplicon PCR forward primer (1 µM) 5 µL; amplicon PCR reverse primer (1 µM) 5 µL; 2× KAPA HiFi Hot Start Ready Mix 12.5 µL (total 25 µL; Roche, Wilmington, MA, USA). 
PCR was performed in a T100 Thermal Cycler (Bio-Rad, Hercules, CA, USA) using the following program: 1 cycle of denaturing at 95 °C for 3 min, followed by 25 cycles of denaturing at 95 °C for 30 s, annealing at 55 °C for 30 s, and elongation at 72 °C for 30 s, and a final extension at 72 °C for 5 min. Under these PCR conditions, the non-template reaction produced no PCR products. AMPure XP beads (A63881; Beckman Coulter, Brea, CA, USA) were used to purify away the free primers and primer-dimer species in the amplicon products. To sequence the amplicon, dual indices and Illumina sequencing adapters were attached using the Nextera XT Index Kit (FC-131-2001; Illumina, San Diego, CA, USA), and the amplicon was purified again using AMPure XP beads. Before sequencing, the DNA concentration of each PCR product was determined using a Qubit 3.0 Fluorometer, and the quality of the amplicon was tested using a bioanalyzer (2100 Bioanalyzer; Agilent, Santa Clara, CA, USA). The amplicons from each reaction mixture were pooled in equimolar ratios based on their concentration. The sequencing was performed by Sanigen Inc. (Suwon, Korea) using the Illumina MiSeq system. The paired-end MiSeq Illumina reads (2 × 300 bp) were processed using QIIME2 (version 2020.8). Artificial sequences were removed using Trimmomatic (version 0.38). Denoising was performed using DADA2 in QIIME2. The SILVA 16S rRNA database and machine-learning classification were used to assign taxonomy, and 16S rRNA reads from chloroplasts and mitochondria were additionally removed. Statistical analysis. The statistical analysis of the sequencing data was performed using RStudio (ver. 4.0.4). Read counts were normalized using DESeq2 (ver. 1.30.1). After the normalization, a PCA plot was generated using the plotPCA function. Using ggplot2 (ver. 3.3.3), the top twenty dominant bacterial species present in each group were described as bar plots of taxonomic classification. Heatmaps were produced using heatmap.2 (gplots ver. 3.1.1). The statistical analysis of the wound healing rate was performed using the GraphPad program (ver. 9.1.0). Group comparisons were performed using the Mann-Whitney U test, and statistical significance was set at p-value < 0.05. Histological analysis. Hematoxylin and eosin (H&E) staining was performed as follows: after anesthetizing the fish in MS-222, P. fulvidraco was perfused with 10 mL of 1× PBS for 1 min, followed by perfusion with 10 mL of 4% PFA for 1 min. For A. chinensis, the tissue was quickly harvested without perfusion. The skin tissue of the fish was cut and post-fixed in 4% PFA overnight at 4 °C. After fixation, the skin was embedded in OCT (Sakura, Torrance, CA, USA) and cryosectioned at a thickness of 30 µm using a cryostat (CM3050S; Leica, Wetzlar, Germany). An H&E staining kit (Vector Lab, Burlingame, CA, USA) was used for the staining. The stained tissues were imaged using an optical microscope (DM500; Leica, Wetzlar, Germany) with a digital camera (ICC50E; Leica, Wetzlar, Germany).
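As a closing illustration of the differential abundance screen described in the Results (genus-level fold change of at least two between group A and group R at FDR < 0.1), the sketch below shows the general shape of such a screen. It is a simplified stand-in assuming normalized count tables are already available; the actual analysis used DESeq2, whose negative-binomial dispersion modelling is not reproduced here, and a per-genus Mann-Whitney test is used only as a placeholder.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

def differential_abundance(counts_a, counts_b, genera, min_fold=2.0, fdr=0.1):
    """counts_a, counts_b: arrays of shape (samples, genera) with normalized counts."""
    pseudo = 1.0  # avoid division by zero for genera absent in one group
    mean_a = counts_a.mean(axis=0) + pseudo
    mean_b = counts_b.mean(axis=0) + pseudo
    fold = mean_a / mean_b

    # Simple per-genus test as a stand-in for DESeq2's model.
    pvals = np.array([
        stats.mannwhitneyu(counts_a[:, j], counts_b[:, j],
                           alternative="two-sided").pvalue
        for j in range(counts_a.shape[1])
    ])
    reject, qvals, _, _ = multipletests(pvals, alpha=fdr, method="fdr_bh")

    hits = [(genera[j], fold[j], qvals[j])
            for j in range(len(genera))
            if reject[j] and (fold[j] >= min_fold or fold[j] <= 1.0 / min_fold)]
    return sorted(hits, key=lambda t: t[2])
```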
5,278
2021-07-21T00:00:00.000
[ "Environmental Science", "Biology", "Medicine" ]
Minimax hypothesis testing for curve registration This paper is concerned with the problem of goodness-of-fit for curve registration, and more precisely for the shifted curve model, whose application field reaches from computer vision and road traffic prediction to medicine. We give bounds for the asymptotic minimax separation rate, when the functions in the alternative lie in Sobolev balls and the separation from the null hypothesis is measured by the l2-norm. We use the generalized likelihood ratio to build a nonadaptive procedure depending on a tuning parameter, which we choose in an optimal way according to the smoothness of the ambient space. Then, a Bonferroni procedure is applied to give an adaptive test over a range of Sobolev balls. Both achieve the asymptotic minimax separation rates, up to possible logarithmic factors. Curve registration Our concern is the statistical problem of curve registration, which appears naturally in a large number of applications, when the available data consist of a set of noisy, distorted signals that possess a common structure or pattern.This pattern constitutes the essential information that we want to dig out from the observations.However, the deformations of the signals are generally nonlinear and relatively complex, which complicates the statistical task.Fortunately it is relevant in some cases to assume that the signals only differ from each other by a horizontal shift: we call this modeling the shifted curve model.For instance, it was successfully adopted for the interpretation of the ElectroCardioGramms: each deflection is considered as a repetition of the same signal starting at a random time.Isserles et al. [28] proposed an estimator of the common pattern.Interestingly, the assumptions on the deformations are in practice violated due to the baseline wandering, a periodic vertical perturbation of the potential, but the estimation of the structural pattern performs well yet. By contrast, SIFT descriptors (cf.Lowe [31]) in computer vision are an example where the specification of the deformations is essential: selected keypoints of an image are assigned with descriptors including a histogram of the local gradient.If the image is rotated, the histogram of each keypoint is simply shifted by the angle of the rotation.To match the keypoints of the two images, it is then sufficient to test the adequation of their histograms with the shifted curve model.So, testing the model is sometimes the main concern, and even when estimation matters, the adequation of the model may have to be tested, as the estimation techniques depend on the structure of the deformations. Shifted curve model This paper deals with the shifted curve model, which we will state in a Gaussian sequence form, but which originally relates on two 2π-periodic functions f and f # in L 2 .Expanding these functions in the complex Fourier basis, we get where c j (f ) = 1 2π 2π 0 f (t)e −ijt dt and c j (f # ) = 1 2π 2π 0 f # (t)e −ijt dt.With this notation, if f and f # only differ from each other by a shift, then the Fourier coefficients verify c j (f # ) = e ijτ c j (f ), for some real τ in [0, 2π] and all non-zero integers j.Hence, if we introduce the pseudo-distance d such that d 2 ((c 1 , . ..), (c # 1 , . ..)) and given that c j (f ) = c −j (f ) for every integer j, testing that f was shifted from f # amounts to testing if d(c, c # ) = 0 where c = (c 1 (f ), c 2 (f ), . ..) and c # = (c 1 (f # ), c 2 (f # ), . 
..).Now, if we assume that the observations are given by the white noise model dY (t) = f (t) dt + σ dW (t) and dY # (t) = f # (t) dt + σ dW # (t), where σ > 0 and W, W # are independent Wiener processes, we can state our model in a more convenient Gaussian sequence form: where • {ξ j , ξ # j ; j = 1, 2, . ..} is a family of independent complex random variables, whose real and imaginary parts are independent standard Gaussian variables, • σ is assumed to be known. Our problem amounts to testing H 0 against H 1 with where C is a positive constant and ρ σ is a sequence of positive real numbers. For reasons that we shall explain later, we assume that c and c # belong under the alternative to a Sobolev ball with s > 0. With this notation, we denote Θ 0 and Θ A detailed discussion of the model is deferred to Section 4, but before that, we point out that our choice of the Gaussian sequence model is not restrictive, since this model is equivalent in Le Cam's sense to many other models, including Gaussian white noise, density estimation (cf.Nussbaum [32]), nonparametric regression (cf.Brown and Low [8], in the case of random design in Reiß [34], in the case of nonGaussian noise in Grama and Nussbaum [22] and Grama and Nussbaum [23]), ergodic diffusion (cf.Dalalyan and Reiß [12]).On the other hand, the Gaussian noise is accepted in computer vision as a good approximation of the Poisson noise, that is more natural in this context. Minimax testing A randomized test in our model is a random variable taking values in [0, 1] and measurable with respect to the σ-algebra engendered by (Y , Y # ).In practice, the user simulates an independent random variable with a Bernoulli distribution of parameter the value of the test, which was computed from the data (Y , Y # ).The null hypothesis is accepted, respectively rejected, when the result of the simulation is 0 or 1.We say that a test is nonrandomized when it only takes the values 0 or 1. To measure the performance of a test ψ, we choose the minimax point of view, in which the errors of first and second kind are defined by Note that in the nonrandomized case, α(ψ, Θ 0 ) = sup Θ0 P c,c # (ψ = 1) and β(ψ, Θ 1 ) = sup Θ1 P c,c # (ψ = 0).We say that consistent testing in the asymptotic minimax sense is possible if for all α, β > 0, there exists a test ψ σ such that The distance between the null and the alternative hypotheses, Cρ σ , determines the existence of such tests.Indeed, if Cρ σ is too small, no testing procedure is asymptotically better than a blind guess, for which α(ψ, Θ 0 ) + β(ψ, Θ 1 ) = 1. For a fixed pair α, β, we call ρ * σ the asymptotic minimax separation rate if there are two positive constants C * and C * such that consistent testing is impossible for ρ σ = ρ * σ and C < C * , and possible for ρ σ = ρ * σ and C > C * .The best constants C * and C * satisfying these conditions are called exact separation constants.Conventionally, one applies the informal minimal writing length rule to avoid nonuniqueness of the minimax separation rate and of these constants.Moreover, a test which is consistent when ρ σ = ρ * σ and for some C > 0 is called asymptotically minimax rate optimal. 
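For reference, the testing problem introduced above can be written out explicitly. The display below is a reconstruction from the surrounding prose (observations in Gaussian sequence form, a Sobolev ball under the alternative, and a shift-invariant pseudo-distance d); the exact normalizations used in the paper may differ slightly.

```latex
% Gaussian sequence form of the shifted curve model
Y_j = c_j + \sigma \xi_j, \qquad Y^{\#}_j = c^{\#}_j + \sigma \xi^{\#}_j, \qquad j = 1, 2, \dots

% Shift-invariant pseudo-distance between the coefficient sequences
d^2(c, c^{\#}) = \inf_{\tau \in [0, 2\pi]} \sum_{j \ge 1} \bigl| c_j - e^{i j \tau} c^{\#}_j \bigr|^2

% Sobolev ball assumed under the alternative
\mathcal{F}_{s,L} = \Bigl\{ c : \sum_{j \ge 1} j^{2s} |c_j|^2 \le L^2 \Bigr\}

% Testing problem
H_0 : d(c, c^{\#}) = 0
\quad \text{vs.} \quad
H_1 : d(c, c^{\#}) \ge C \rho_\sigma, \;\; c, c^{\#} \in \mathcal{F}_{s,L}
```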
There is a vast literature on the subject of minimax testing: minimax separation rates were investigated in many models, including the Gaussian white noise model, the regression model, the Gaussian sequence model and the probability density model, for the greater part in signal detection, i.e., testing the hypothesis "f ≡ 0" against the alternative " f ≥ Cρ σ ".We present a selective overview of the papers that are the most relevant in the context of this work. Starting from Ingster [25], Ermakov [15] and Ermakov [16], where the minimax separation rate and the exact separation constants were obtained when the functions in the alternative lie in ellipsoids and the separation from 0 is measured by the l 2 -norm, various cases were considered: l p -bodies as well as Sobolev, Hölder and Besov classes.We refer to Ingster and Suslina [27] and Ingster [26] for a survey.The cases when the functions in the alternative set lie in Sobolev or Hölder classes and the separation from 0 is measured by the sup-norm or by their values at a fixed point were studied in Lepski and Tsybakov [30].Finally, the case of the L p -norm with p < 2 in Besov classes was considered in Lepski and Spokoiny [29].Now, all the previously cited results are asymptotic, in the sense that the noise level σ (in the white noise model) tends to 0. But from a practical point of view, it may be interesting to look at the problem from a nonasymptotic point of view.In the regression and Gaussian sequence models, Baraud [1] derived nonasymptotic minimax separation rates when the functions in the alternative lie in l p -bodies (0 < p ≤ 2) and the separation from 0 is measured by the l 2norm.Baraud, Huet, and Laurent [2,3] proposed procedures for testing linear or convex hypotheses in the regression model, and Fromont and Lévy-Leduc [18] inspected the improvement implied by a further hypothesis on the periodicity of the signal in the periodic Sobolev balls. Composite null hypothesis testing Up to here, we have reviewed results dealing mainly with a simple null hypothesis, namely in the case of signal detection: "f ≡ 0".In contrast, the testing problem in the shifted curve model deals with a composite null hypothesis.Here, we give a brief overview of the papers presenting hypothesis testing problems with composite null hypotheses. The series of papers Baraud [1], Baraud, Huet, and Laurent [2,3] tackled the case of a nonparametric null hypothesis, but their assumptions are not applicable in our set-up, since our null hypothesis as defined in (3) is neither linear nor convex.On the other hand, the test of a parametric model against a nonparametric one was studied in a substantial number of papers (cf.Horowitz and Spokoiny [24] and references therein), but only in Horowitz and Spokoiny [24] from a minimax point of view.The minimax separation rate that they obtained is the same as with a simple null hypothesis.This is due to the strong assumptions made on the behaviour of the estimator of the parameter characterizing the model under H 0 . On a related note, Gayraud and Pouet [20,21] treated a more general composite null hypothesis in the regression model, that is mainly characterized by its entropy.In fact, the set of functions in the null hypothesis can grow with the sample size, and so be nonparametric.Their rate is the same as in the case of a simple hypothesis.Finally, Butucea and Tribouley [9] also considers the case of a nonparametric null hypothesis, since H 0 is "f = g", where f and g are two density functions. 
Adaptive testing A limitation of the minimax approach is that the optimal tests depend on the smoothness class.This is not convenient from a practical point of view, because the choice of the smoothness seems to be unnatural and arbitrary.To obtain handier procedures, we need an adaptive definition for hypothesis testing. Prior to testing, some sets of smoothness parameters s, L must be chosen, over which adaptation is performed.Typically, these sets are taken as compact intervals [s 1 , s 2 ], [L 1 , L 2 ].To each couple of smoothness parameters (s, L), we associate the smoothness set F s,L , and we write Θ s,L 0 and Θ s,L 1 the corresponding null and alternative hypotheses.Note that, in our problem, Θ s,L 0 ≡ Θ 0 is independent of the smoothness parameters, and that Θ s,L 1 depends on (s, L), not only because c and c # are in F s,L , but also since ρ σ is allowed to be a function of s: as a matter of fact, Θ s,L 1 depends on the choice of the radius Cρ σ (s).The easiest way to achieve adaptation is to use the test corresponding to the most constraining smoothness (s 1 , L 2 ), but this entails a significant loss of efficiency if the tested parameters are in fact smoother. Thus, we prefer a more economical approach and we will say that consistent adaptive testing is possible uniformly over However, adaptive testing is not always possible without loss of efficiency, i.e., taking ρ σ (s) = ρ * σ (s) for each s.That is why it was suggested in Spokoiny [37] to replace σ by σd σ in the expression of ρ * σ (s), where d σ is a sequence of positive real numbers, which can be seen as a necessary payment regarding the intensity of the noise to achieve adaptivity.Now, we say that ρ * σdσ (s), s ∈ [s 1 , s 2 ] is the adaptive asymptotic minimax separation rate if there are two positive constants C * and C * such that adaptive consistent testing is impossible for ρ σ (s) = ρ * σdσ (s) and C < C * , and possible for ρ σ (s) = ρ * σdσ (s) and C > C * .Spokoiny [37] proved that the optimal asymptotic factor is (log log σ −1 ) 1/4 , for signal detection in Besov balls.Gayraud and Pouet [21] extended this result for Hölder classes in the regression model. Fan, Zhang, and Zhang [17] provided a generic tool to construct minimax and adaptive minimax tests: the generalized maximum likelihood, that we also use in the present work to build our procedures both in the nonadaptive and adaptive contexts. Our contribution The problem considered in the present work is qualitatively different from the aforementioned works on the minimax separation rate, since our null hypothesis is not only composite but also semiparametric.Furthermore, it seems that the finite-dimensional parameter cannot be uniformly consistently estimated, which contrasts with the situation of Horowitz and Spokoiny [24]. Nevertheless, we propose a testing procedure which is consistent when the separation rate is of order (σ 2 log σ −1 ) 2s/4s+1 .This rate is then proven to be minimax, up to a possible logarithmic factor.Indeed, no testing procedure is consistent for a separation rate smaller than σ 4s/4s+1 , which is the rate of signal detection in the Gaussian sequence model when the signal to be detected belongs to a Sobolev ball and the separation from 0 is measured by the l 2 -norm. Further, an adaptive test is proposed to circumvent the limitations of the nonadaptive approach.This test is minimax rate optimal, up to a possible logarithmic factor, uniformly over a family of Sobolev balls. 
Finally, there is a gap between our lower and upper bounds for the asymptotic minimax separation rate.It could be argued that the lower bound is suboptimal, and that the minimax separation rate for the shifted curve model does contain our logarithmic factor.Indeed, the problem of testing the goodness-of-fit of the shifted curve model can be regarded as an adaptation to the unknown shift parameter.As a matter of fact, if adaptation to the unknown smoothness typically entails a loglog-factor, other types of adaptation can bring simple logarithmic ones: it is proved in Lepski and Tsybakov [30] that the asymptotic minimax separation rate for signal detection when the signal to be detected belongs to a Sobolev or Hölder ball and the separation from 0 is measured by the sup-norm is (σ 2 log σ −1 ) s/2s+1 , while it is σ 2s/2s+1 when the separation from 0 is measured by the value of the signal at a fixed point.The logarithmic factor can be interpreted as a payment for the adaptation of the problem of testing at one point when this point is unknown.Furthermore, note that the same logarith-mic factor appears in Fromont and Lévy-Leduc [18], where upper bounds on the minimax separation rate are established in the problem of periodic signal detection with unknown period. Organization of the paper The rest of this paper is organized as follows: a nonadaptive procedure is proposed in Section 1, and adjusted in Section 2 to obtain an adaptive test.We also state their minimax performances, which Section 3 indicates to be at least nearly optimal in the minimax sense.The theorems are proved in Sections 5 to 7, and the lemmas used in their proofs are presented in Section 8.The model is discussed in Section 4. Nonadaptive testing procedure Here, we build a test which will be proven later to be minimax, up to a possible logarithmic factor.Indeed, the procedure achieves the rate (σ 2 log σ −1 ) 2s/4s+1 . Our proposal, which carries on the work presented in Collier and Dalalyan [11], is based on standardized versions λ σ (N ) of estimators of d(c, c # ): for N ∈ N * and q ∈ R. Put into words, the test ψ σ (N, q) rejects the null hypothesis when the statistic λ σ (N ) exceeds the threshold q and accepts it otherwise.The following theorem establishes the minimax properties of this testing procedure for a proper choice of the tuning parameters. with s and L are positive real numbers, defined in (8) with ] and q = q α , the quantile of order 1 − α of the standard Gaussian distribution.Then Remark 1.In the rest of this section and in the proof, we skip the dependence of N σ (s, L) in s and L when no confusion is possible.The proof of this result is given in Section 5. Let us now develop a brief heuristic describing how one could have guessed the optimal value of ρ σ . Heuristic for the performance of the nonadaptive procedure Our proof will show that, under H 0 , λ σ (N σ ) is bounded from above in probability.Thus, we decide to reject the null hypothesis when λ σ (N σ ) is larger than a constant to be chosen properly. On the other hand, we inspect the behaviour of the statistic under the alternative hypothesis and give a condition on ρ σ under which the test statistic is orders of magnitude larger than a constant, so that the procedure can have the desired power. We derive the lower bound Re e ijτ ξ j ξ # j + negligible terms. 
The proof will establish that the second term is bounded in probability, while the third, that we call perturbative, is of order √ log N σ .The first term, up to a 4 √ N σ σ 2 factor, is an approximation of the square of the pseudo-distance d(c, c # ).Since c and c # lie in F s,L , the remainder of the sum can be bounded from above, up to a constant factor, by N −2s σ .In a nutshell, we get the heuristical lower bound Consequently, the alternative is detected as soon as Heuristic for the constant C We may now ask how small the constant C can be without making our testing procedure inefficient.This constant is only optimized for our test, and we do not claim it to be optimal in the minimax sense. The previous optimization shows that the test achieves its best rate when N σ is of the order of ρ * σ −1/s .Now, denoting N σ = [cρ * σ −1/s ], a similar heuristic can give an optimized constant C in the definition of Θ 1 .Indeed, Lemma 6 gives the more precise lower bound (C 2 − 4L 2 c −2s )ρ * σ 2 for the sum in the first term, and we will prove the exact order of magnitude of the third to be 256 c 4s+1 log N σ .Thus and this leads to a minimization problem determining the choice of c (cf. Theorem 1). Adaptive testing procedure The procedure given in the previous section possesses asymptotic minimax optimality properties thanks to an appropriate choice of the tuning parameter N σ , but the practician needs to determine values of s and L to implement the test. As it seems arbitrary and nonintuitive to make assumptions on the smoothness of the signals, it is necessary to design testing procedures independent of s and L that are nearly as good, in the minimax sense, as the procedure proposed in the previous section. In this section, we only assume that an interval [s 1 , s 2 ] is available such that c, c # ∈ F s,L for some s ∈ [s 1 , s 2 ] and L ∈ [0, +∞[.We propose a testing procedure depending on s 1 and s 2 but independent of s and L, that achieves the same rate of separation, i.e., σ 2 log σ −1 2s/4s+1 , as the test based on the precise knowledge of s and L. Furthermore, this rate is achieved uniformly over the Sobolev classes F s,L with s ∈ [s 1 , s 2 ] and L belonging to any compact interval included in R + . Here is the idea of its construction.The nonadaptive testing procedure proposed above depends on s only via the tuning parameter N σ (s, L).In the followings, we will change the definition of N σ (s, L) to avoid the dependence on L and we will write N σ (s).Using a Bonferroni procedure like in Gayraud and Pouet [21] or Horowitz and Spokoiny [24], we consider the maximum of these tests for several values of N σ (s), more precisely, we consider tests of the form ψσ (q) = max N ∈N ψ σ (N, q).For this kind of test, the next proposition gives bounds for the first and second type errors: Proposition 1.Let N be a set of positive integers and denote ψσ (q) the test max N ∈N ψ σ (N, q), where ψ σ is defined in 8, then α( ψσ (q), Θ 0 ) ≤ N ∈N α(ψ σ (N, q), Θ 0 ) β( ψσ (q), Θ s,L 1 ) ≤ min N ∈N β(ψ σ (N, q), Θ s,L 1 ).Consequently, the set N has to be as small as possible (to control the first kind error), but rich enough to approximate the set of all N σ (s) for s ∈ [s 1 , s 2 ].We will show in the proof that each N ∈ N brings adaptation over all Sobolev balls of regularity s such that there is a S such that N = N σ (S) and S ≤ s ≤ S + 1/ log σ −1 .Hence, we introduce the following notation leading to a proper choice of N . 
For every Consider the test ψσ = max N ∈N (σ1,σ2) ψ σ N, 2 log log σ −1 , where ψ σ is defined in (8).Then, for the interval [s 1 , s 2 ] used in the construction of the test ψσ and for any interval Remark 2. In the statement of this theorem, one observes that the constants L 1 and L 2 are not used in the definition of the test, while L was, in the definition of the nonadaptive procedure.Indeed, we optimized the separation constant C and gave an expression depending on L, while this optimization was not our matter in the second theorem.Remark 3. The theorem claims that there exists a value of C for which the first and second type errors can be controlled.From the proof of the theorem, we see that it is sufficient that such a constant satisfies Heuristic for the performance of the adaptive procedure Here we explain why our adaptive procedure achieves the same rate as the nonadaptive one.The heuristic of the previous section roughly holds, with this difference that max N λ σ (N ) is of loglog-order under the null hypothesis.But this term is negligible in view of the perturbative term, so that the performances of the test do not deteriorate in the adaptive problem. Lower bound for the minimax rate After stating the performance of our tests, we prove in this section that they are at least nearly rate optimal.Indeed, we are able to establish a lower bound for our model, by proving that the detection of a signal lying in a Sobolev ball when the separation from 0 is measured by the l 2 -norm (cf.( 17) for a precise definition) is simpler than ours, in the sense that every lower bound result for this model is adaptable for our purpose.Let us first introduce the classical signal detection problem, for which the minimax separation rate, and even the exact separation constants, are known: For this model, we define the errors of first and second kind of a test ψ class by where we denote P c the probability engendered by Y = (Y 1 , Y 2 , . ..) when (c 1 , c 2 , . ..) = c. Theorem 3. Given the two models exposed in ( 2) and ( 17), we have where the infima are taken over all tests of level α respectively for our model and for the classical one. Thus, our model can benefit from every lower bound result on model (17).We choose to exploit the nonasymptotic results presented in Baraud [1], Proposition 3. The following theorem shows that the asymptotic minimax separation rate for our problem is not smaller than σ 4s/4s+1 . Corollary. Let α and β be in where the infimum is taken over all tests of level α for the shifted curve model. Remark 4. We can approximate ρ by computing sup Remark 5. Our proof shows that every lower bound result for adaptive testing could be used for our purpose as well, for instance Gayraud and Pouet [21]. Model The choice of our model was inspired by practical considerations, and we intend to apply it to a problem in computer vision: that of keypoint matching as briefly discussed in Collier and Dalalyan [11].Accordingly, it is necessary to justify the realism of model (2). 
Variance Although the theoretical analysis of this paper is carried out for the Gaussian sequence model, the procedure we propose admits a simple counterpart in the regression model, at least in the case of deterministic equidistant design.According to the theory on the asymptotic equivalence, our results hold true for this model as well, provided that s > 1/2 (cf.Rohde [36]).However, in the model of regression, it is not realistic to assume that the variance of noise is known in advance.Nevertheless, one can compute a consistent estimator of the variance (cf.Rice [35]) and plug this estimator in the testing procedure.In an analogous setup, it is proved in Gayraud and Pouet [20] for example, that this plug-in strategy preserves the rate-optimality of the testing procedure.We believe that a similar result can be deduced in our set-up as well. Symmetry of the model In our modelization, the two parts corresponding in the Gaussian white noise model to two different functions are treated symmetrically: the same model, with the same variance and the same noise, applies to both.But, in applications, the signals that we want to match with each other are thought to have the same nature.In addition, it seems that it is not meaningful to consider the case when the regularities of the Sobolev balls are different for the signals: under H 0 , the regularity has to be the same. Besides, one could want to normalize both equations to get the same variance for both sides.But, this would also change the functions, which would not only differ from each other by a shift, but also by a dilatation.Therefore, the application of our methodology to this case is not straightforward.However, a detailed inspection reveals that our results carry over to the case when we replace σ by max(σ, σ # ). Weighted estimator In Collier and Dalalyan [11], another estimator of d 2 (c, c # ) is used, stemming from a penalization of the log-likelihood ratio.This could be adapted in our context by considering the test statistic where w = (w 1 , w 2 , . ..) is a sequence of real numbers in [0, 1] depending on σ. Under some conditions on w, our study would undergo only few modifications, and only the optimal constants would be changed.For simplicity sake, we chose not to consider the weighted estimator. From classical signal detection to shift testing A first guess to try solving our problem could be to use an estimator τ of the shift and to apply the classical signal detection methods to the sequence (Y j − e ij τ Y # j ).But this approach fails, since it is not possible to get any consistent estimator of the shift.Indeed, for example, the shift may not be identifiable.Consequently, the study of the perturbative term (cf.first heuristic after Theorem 1) is unavoidable, in order to take into account every possible shift.We think that this uncertainty entails a price, i.e., a supplementary factor in the minimax separation rate.Once again, the problem is whether it is possible to estimate the dilatation parameter consistently. First kind error Here, we prove that the asymptotic first kind error of the test ψ σ does not exceed the prescribed level α.To this end, denote τ * a real number such that, under H 0 , ∀j ≥ 1, c # j = e ijτ * c j .We skip the dependence of τ * on c and c # .Using the inequality where Finally, using Berry-Esseen's inequality (cf.Theorem 5), we get and this gives the desired asymptotic level. Second kind error It remains to study the second kind error of the test, and to show that it tends to 0. 
Our proof is based on the heuristic given earlier in Section 1: we decompose λ σ (N σ ) into several terms, and make use of their respective orders of magnitude.The decomposition gives Re e ijτ ξ j ξ # j . For simplicity sake, we introduce some notation: which, combined with (22), leads to: In addition to c s,L , introduced in the definition of N σ , we will need the constant c ′ and ǫ, defined as . Separating the different terms to study them independently, we write • Let us first study sup s,L , Lemma 1 allows to apply Lemma 2 with x 0 = δρ σ and M = (c The choice of the parameters yields for σ small enough so that the second part of Lemma 2 holds: • Let us now turn to P σ 2 √ N σ A σ > ǫρ 2 σ .Prior to using Berry-Esseen's inequality (cf.Theorem 5), we derive log σ −1 into the formula of the theorem and using the bound 1 x √ 2π for every positive x, • Finally, it remains to control P 2σ 2 B σ > c ′ ρ 2 σ .We apply Lemma 3: 4s+1 + e −Nσ/2 → 0. Proposition 1 Let N be a set of positive integers and denote ψσ (q) = max N ∈N ψ σ (N, q), where ψ σ is defined in 8. Second kind error Finally, we study the second kind error and prove that it converges to 0. For s ∈ [s 1 , s 2 ], define S = max t ∈ Σ(s 1 , s 2 ) | t ≤ s , where we omit the dependence of S in s for simplicity sake.Note that 0 ≤ s − S ≤ 1 log σ −1 .S is an approximation of s which will be sufficient for our purpose according to Lemma 6. We introduce the notation j=1 Re e ijτ ξ j ξ # j . and computations similar to those of the previous section yield sup • Let us study sup s,L sup Θ s,L +1) 2 , Lemma 1 allows to apply Lemma 2 with x 0 = δρ * σ (s) and M = σ 2 32 N σ (S) log log σ −1 + C 2 ρ 2 σ (s).On the other hand, the choice of δ entails that for C large and σ small enough Hence, applying the second part of Lemma 5, we get an inequality where the right-hand side converges to 0 as σ tends to 0: • Consider the second term.Berry-Esseen's theorem (cf.Theorem 5) implies the following inequality, where the right-hand side converges to 0 as σ tends to 0: • Let us turn to the third term.We apply Lemma 3 and get an inequality where once again the right-hand side converges to 0 as σ tends to 0: Proof of Theorem 3 Consider a randomized test ψ in the shifted curve model.We will define a corresponding test in the classical model with smaller first and second kind errors, and it is sufficient to establish the result.First note that there is a measurable function f with respect to the σ-algebra engendered by the sequences Y and Y # and with values in [0, 1] such that ψ = f (Y , Y # ).Denoting ǫ a sequence of i.i.d random variables N (0, σ 2 ) independent from Y , we define ψ class = E ǫ f (Y , ǫ)|Y , where E ǫ is the integration with respect to the probability engendered by ǫ. ψ class is σ(Y )-measurable and thus constitutes a test for the classical model. This testing procedure can be interpreted as a test in the shifted curve model when A similar inequality holds concerning the second kind error. Proof of Lemma 4. We refer to Collier and Dalalyan [11], Lemma 3, for a proof of this lemma.First recall Berman's formula, that we will need in the proof. Theorem 4 (Berman [4]).Let N be a positive integer, a < b some real numbers and g j , j = 1, . . ., N be continuously differentiable functions on [a, b] satisfying N j=1 g j (t) 2 = 1 for all t ∈ R and η j , j = 1, . . ., N , some independent standard Gaussian variables.Then Finally, we recall here Berry-Esseen's inequality, in a simpler version than Theorem 5.4 of Petrov [33]. 
Theorem 5 (Berry–Esseen's inequality). Let N be a positive integer and let X_1, ..., X_N be i.i.d. random variables distributed as X, with E(X) = 0 and Var(X) = γ.

Future research

Our model is only a simple version of the curve registration problem. In further work, we could study what happens when the signals are both shifted and dilated, by considering the pseudo-distance d^2(c, c^#) = inf_{τ,a} Σ_{j=1}^{+∞} |c_j − a e^{ijτ} c_j^#|^2.
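To make the shape of such a shift-invariant statistic concrete, here is a small numerical sketch (our own illustration, not the authors' code): it evaluates a shift-minimised residual of the form min_τ Σ_{j≤N} (|Y_j − e^{ijτ} Y_j^#|² − 2σ²) on simulated Fourier coefficients, with the minimum over τ taken on a grid. The centring constant, the truncation level N and the noise convention are placeholders; the calibrated choices N_σ and ρ_σ from the paper are not reproduced.

```python
# Sketch of a shift-minimised test statistic (illustrative, not the paper's code).
import numpy as np

def shift_test_statistic(y, y_sharp, sigma, n_grid=2000):
    """y, y_sharp: complex Fourier coefficients Y_j, Y#_j (same length N)."""
    n = len(y)
    j = np.arange(1, n + 1)
    taus = np.linspace(0.0, 2 * np.pi, n_grid, endpoint=False)
    # residual energy for every candidate shift tau
    values = [np.sum(np.abs(y - np.exp(1j * j * tau) * y_sharp) ** 2) for tau in taus]
    # centring constant is a placeholder, not the paper's calibration
    return np.min(values) - 2 * n * sigma ** 2

# toy example: same smooth signal shifted by tau0 = 0.3, plus complex Gaussian noise
rng = np.random.default_rng(1)
n, sigma, tau0 = 64, 0.05, 0.3
c = rng.normal(size=n) / np.arange(1, n + 1) ** 1.5
noise = lambda: sigma * (rng.normal(size=n) + 1j * rng.normal(size=n))
y = c + noise()
y_sharp = np.exp(-1j * np.arange(1, n + 1) * tau0) * c + noise()
# statistic for a pure-shift pair; compare with a pair differing by more than a shift
print(shift_test_statistic(y, y_sharp, sigma))
```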
8,118.6
2011-09-06T00:00:00.000
[ "Mathematics" ]
Kinetic modeling of seeded nitrogen in an ITER baseline scenario

ITER as the next-level fusion device is intended to reliably produce more fusion power than is required for sustainably heating its plasma. Modeling has been an essential part of the ITER design and of the planning of future experimental campaigns. In a tokamak or stellarator plasma discharge, impurities play a significant role, especially in the edge region. Residual gases, eroded wall material, or even intentionally seeded gases all heavily influence the confinement and, thus, the overall fusion performance. Nitrogen is such a gas envisaged to be seeded into a discharge plasma. By modeling the impurities kinetically using the full three-dimensional Monte-Carlo code package EMC3-EIRENE, we analyze the charge-state-resolved distribution of nitrogen in a seeded ITER baseline scenario and draw conclusions for the hydrogen background plasma density. Lastly, we compare the influence on the impurity density of a more refined kinetic ion transport in EIRENE that includes additional physical effects.

Introduction

By completing the ITER tokamak [1], magnetic confinement plasma physics will reach the next step on its way to fusion energy. As the completion will still take some years, modeling via numerical simulation is the favored tool to preplan experiments and estimate ideal machine settings for optimized plasma discharge parameters [2,3,4,5]. Intentionally seeding an impurity of a chosen species, density, and energy is one tuning option in a fusion device. Ideally, such a trace gas allows for a distinct energy loss of the main plasma species [6], for setting the heat deposition on the target [7], for reaching detachment [8], and, overall, for improving the confinement [9]. The three-dimensional Monte-Carlo code EIRENE [10,11], usually used for kinetically modeling neutral particle transport in the plasma edge region, is a reliable and widely used simulation tool for mainly three reasons: it is coupled to broad databases of atomic, molecular, and other reactive data [12], it is applied to realistic large-scale geometries [13], and it is combined with all major European plasma edge codes [14,15,16]. The three-dimensional edge Monte-Carlo code EMC3 [17] is also coupled to EIRENE into the code package EMC3-EIRENE, allowing the study of plasma discharges in full 3D. Some charged particles may have a short lifetime compared to their collision time, τ_life ≲ τ_coll. Handling of such ions requires a kinetic treatment, as assuming local thermodynamic equilibrium is then deceptive. Recently, the capability of describing ions kinetically via EIRENE in the EMC3-EIRENE context has been extended [18,19,20] by first-order drift effects, anomalous cross-field transport, and the mirror force, whereas beforehand only field-line tracing and energy relaxation had been included. We take an ITER hydrogen plasma baseline scenario model from the 2008 database [1] and perturb it by adding a nitrogen gas puff; we perform a charge-state-resolved kinetic modeling of the nitrogen and analyze the seeding's impact on the main plasma. We compare the different physics models in EIRENE, namely the "classic" one, including only field-line tracing and energy relaxation due to the background conditions, to the "enhanced" one, which adds drifts, diffusion, and the mirror force. This is an exemplary case study which can easily be expanded to study different or additional impurity species. The paper is organized as follows.
In the following, we first introduce the simulation setup of the EMC3-EIRENE runs and explain how a kinetically simulated impurity species influences the main plasma. Afterwards, we present results on the aforementioned perturbed ITER baseline modeling. We then focus on the comparison between the different fidelities of the kinetic ion transport in EIRENE, before closing by summarizing the results and drawing conclusions for future endeavors.

EMC3-EIRENE Simulation Setup

EMC3 and EIRENE are coupled iteratively, meaning that at first the plasma background equilibrates in the magnetic structure. Afterwards, EIRENE particles are started on that background, and the probabilities for collisional events depend on the current plasma density and temperature. Neutrals of the same atomic species as the main plasma can change their character, which means that, e.g., in the case studied in this manuscript the kinetically treated neutral molecule H2 in EIRENE might switch towards a main plasma species fluid parcel of H+ in EMC3, which eventually could become an again kinetically treated minority ion H2+ in EIRENE. If an EIRENE particle is not of the main plasma species, interaction from EIRENE towards EMC3 happens only via the electron momentum balance and temperature equation. This iterative coupling between EMC3 and EIRENE is performed until the solution converges. For a more detailed explanation of the underlying physics equations, see [17,10,11,20]. Fig. 1 shows the simulation area, which consists of an axisymmetric toroidal section of 40° of the ITER tokamak with adequate boundary conditions. As an example, the main plasma species density n_H+ is plotted.

Figure 1: Schematic simulation setup of the ITER baseline case as described in the text. Shown is the simulation volume, the EMC3 brick structure, and, as an example, the proton density n_H+, the main plasma species.

The simulated ITER case 2297 (Ψ_N = 0.83) is a hydrogen plasma case from the 2008 baseline scenario [1], where EIRENE handles the hydrogen recycling at the target plates plus a constant molecular hydrogen gas puff in order to establish semi-detached plasma conditions. The different reactions for the hydrogen species H2, H, and H2+ in EIRENE comprise electron impact ionization, recombination, charge exchange, elastic collision, and dissociation. We add a monoenergetic molecular nitrogen gas puff with E_N2 = 0.026 eV at the hydrogen puffing location (cf. red arrow in Fig. 1), with a flux of Φ = 3.2 × 10^15 s^−1 and a total amount of puffed-in nitrogen of one tenth of the amount of puffed-in hydrogen. The molecular nitrogen can dissociate in three different ways [21], while the molecular ion has two implemented reactions [22]. For the atomic ions we add the possibility to ionize up and recombine down in their charge-state level, following the data provided in the ADAS database [23]; N7+ only has the opportunity to recombine. The simulation area splits into three zones: the confined core region, where field lines are closed, the scrape-off layer, where field lines enter the wall segments, and a private flux area below the divertor.

Kinetic Modeling of Nitrogen

In Fig. 2 one notices the molecular nitrogen N2 being present mainly in the private flux region below the divertor dome. As it is not charged, it can reach this area, and once it leaves the private flux area it gets ionized due to the background conditions, thus leaving no density behind in the scrape-off layer.
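As a purely schematic illustration of the iterative coupling pattern described above (not the EMC3-EIRENE code itself), the following toy Python loop alternates a "fluid" step, which produces a 1-D plasma density profile, with a "kinetic" Monte-Carlo step whose collision probabilities depend on that profile, and repeats until the profiles stop changing. All profiles, probabilities and grid sizes are invented for illustration; only the coupling pattern mirrors the text.

```python
# Toy fluid-kinetic coupling loop (schematic, not EMC3-EIRENE).
import numpy as np

rng = np.random.default_rng(0)
nx, n_particles = 50, 20000
x = np.linspace(0.0, 1.0, nx)

def fluid_step(source):
    """'EMC3-like' step: plasma density from a fixed base profile plus the
    ionisation source returned by the kinetic step."""
    return 1.0 + 0.5 * np.exp(-((x - 0.8) / 0.2) ** 2) + source

def kinetic_step(density):
    """'EIRENE-like' step: follow neutrals launched at x = 0; the collision
    probability per cell grows with the local plasma density."""
    deposition = np.zeros(nx)
    for _ in range(n_particles):
        for i in range(nx):
            if rng.random() < 0.05 * density[i]:   # collision event
                deposition[i] += 1.0
                break
    return deposition / n_particles

plasma = fluid_step(np.zeros(nx))
for iteration in range(30):
    source = kinetic_step(plasma)                  # kinetic particles on frozen background
    new_plasma = fluid_step(source)                # feed sources back into the fluid step
    change = np.max(np.abs(new_plasma - plasma)) / np.max(plasma)
    plasma = new_plasma
    if change < 1e-3:                              # converged coupling
        break
print(f"stopped after {iteration + 1} coupling iterations")
```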
For atomic nitrogen N, one notices a relatively large density at the divertor target plates which is due to neutralizing when catching a surface electron during a reflection. With peak densities of less than 10 11 cm −3 , both the density for N 2 and N are relatively low. One finds even lower densities for the molecular ion N + 2 and the singlely charged nitrogen atom N + in Fig. 3. Again, as for N, one finds the density peaked close to the target plates at the divertor, implying that the background plasma is energetic enough to immediately ionize both N + 2 and N + further up, as is also clear from the relatively low densities around 7 × 10 10 cm −3 . For N 2+ to N 6+ we find a certain trend (cf. Figs. 4-6), the density is steadily increasing and the distribution peak is approaching the separatrix, as for N 2+ we find the maximum close at the divertor we eventually find N 6+ being peaked directly at the separatrix. This is again due to the radial increase of density, temperature, and, ultimately, energy of the main plasma, opening up for higher ionization levels towards the center. For the density of N 7+ , as can be found in Fig. 6 on the right-hand-side, one finds the major part to be distributed in the confined area. Next, we analyze the effect the nitrogen gas puff has on the main plasma by comparing the densities for molecular hydrogen H 2 , atomic hydrogen H, molecular ionic hydrogen H + 2 , which is the minority ion, and protons H + in Figs. 7-10. Note that the first three species are handled by EIRENE, while the latter is the only species being dealt with on EMC3 side. These figures are structured in the same way; on the left-hand-side we show the density distribution with the nitrogen gas puff turned on, on the right-hand-side there is no puffed in nitrogen. We remark that the integrated absolute amount of hydrogen in the vessel is the same in both cases. For the molecular density, which is presented in Fig. 7, we notice a similar distribution, mainly in the private flux area. If nitrogen is puffed, however, that density is reduced by a factor of roughly 1.5. The atomic density exhibits a comparable effect (cf. Fig. 8), the density peaks close to the divertor targets and is slightly redistributed and reduced in case of nitrogen puffing. The minority ion H + 2 (Fig. 9) shows strong differences in case of a nitrogen gas puff being applied, however, we remark that the absolute density is significantly lower in this case at around ∼ 10 10 cm −3 . Now, most striking and most significant is the change in the plasma main species Fig. 10. While in case of no nitrogen being puffed, we find a peaked density at around n H + ≈ 10 14 cm −3 in the divertor region, almost at the separatrix. In case of nitrogen puffing, that rather distinct peaking at the separatrix broadens up widely into the divertor region, plus a significant increase of the overall amount of protons, which is clearly visible by a relatively large density of n H + > 1.5 × 10 14 cm −3 being directly applied at the divertor target plates. Enhanced Kinetic Ion Transport In this section we focus on the impact of the recently introduced changes in the physical model of the kinetic ion transport in EIRENE [20]. In this publication, the enhancements of first-order drift effects ∇B-and curvature-drift, magnetic mirror force, and anomalous crossfield diffusion accounting for turbulence effects have been introduced. Fig. 
11 compares the results of this enhanced kinetic ion transport to the aforementioned classic EIRENE version including only field-line tracing and energy relaxation. It is constructed in the following way: plotted on the x-axis is the length along the gas puff direction (cf. red arrow in Fig. 1), and on the y-axis is a logarithmic scale of the particle density in cm^−3. Shown are the densities of N5+ (blue), N6+ (red), and N7+ (black) obtained using the classic EIRENE ion transport (solid) and the enhanced one (dashed), respectively, where in the latter case we chose D_⊥ = 1 m²/s as the perpendicular diffusion coefficient. The vertical gray line marks the last closed flux surface (LCFS); hence, to the left of it is the scrape-off layer (SOL) and to the right is the confined area. For the N5+ distribution we note almost no difference whether or not the enhancements in the physical model are included, as both treatments give a relatively flat profile at n_N5+ ≈ 10^11 cm^−3 well into the SOL. For N6+, we notice a significant decrease in the peaking at the LCFS by more than an order of magnitude, whereas for N7+ we remark a drastic change from a pure peaking at the LCFS in the classic EIRENE treatment towards a smeared-out profile well into the confined area when the drift, diffusion, and mirror-force enhancements are included. While this flattening of the formerly highly peaked profile is mainly due to diffusion, this cannot be established as a general trend. Multiple investigations in which either drifts or diffusion have been turned off artificially have shown that neither of those two effects is mainly responsible for the simulated plasma profiles; indeed, both have to be regarded together.

Figure 11: Densities of kinetically simulated N5+ (blue), N6+ (red), and N7+ (black) in the puffed ITER baseline scenario as described in the text vs. length along the gas puff. The classical EIRENE physics model (solid) is compared to the one enhanced by drifts, diffusion, and mirror force (dashed).

Conclusions and Outlook

We presented ITER simulations perturbed by a nitrogen gas puff using kinetic ion transport simulations in EMC3-EIRENE, for the first time applying the physical enhancements in the transport description introduced in [20]. We showed the charge-state-resolved density distributions in the divertor region, which give important insight into the radiative properties of the plasma and, thus, the overall confinement performance. Afterwards, we summarized the influence of the external nitrogen gas puff on the main plasma species. Ultimately, we stressed the large influence that the physics enhancements of drifts, diffusion, and mirror force have on the simulated profiles. This case study may serve as a template for ongoing investigations including either different or additional impurity species, e.g. neon or hydrocarbons. EIRENE can host arbitrary species, as long as the necessary reactive data are provided.
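For readers who want to reproduce the layout of a comparison figure like Fig. 11 from exported 1-D profiles, the following matplotlib sketch follows the conventions described above (one colour per charge state, solid versus dashed lines for the classic and enhanced transport, logarithmic density axis, vertical LCFS marker); the profile arrays and the LCFS position are placeholders, not simulation output.

```python
# Sketch of a Fig.-11-style comparison plot from placeholder profiles.
import numpy as np
import matplotlib.pyplot as plt

length = np.linspace(0.0, 1.0, 200)    # distance along gas-puff direction (a.u.)
lcfs = 0.7                             # last closed flux surface (placeholder)
profiles = {                           # placeholder densities in cm^-3: (classic, enhanced)
    "N5+": (1.0e11 * np.ones_like(length), 9.0e10 * np.ones_like(length)),
    "N6+": (1.0e11 * np.exp(-80 * (length - lcfs) ** 2) + 1e9,
            8.0e9 * np.exp(-20 * (length - lcfs) ** 2) + 1e9),
    "N7+": (5.0e10 * np.exp(-120 * (length - lcfs) ** 2) + 1e9,
            2.0e10 * np.exp(-5 * (length - 0.85) ** 2) + 1e9),
}
colors = {"N5+": "blue", "N6+": "red", "N7+": "black"}

fig, ax = plt.subplots()
for species, (classic, enhanced) in profiles.items():
    ax.plot(length, classic, color=colors[species], linestyle="-", label=f"{species} classic")
    ax.plot(length, enhanced, color=colors[species], linestyle="--", label=f"{species} enhanced")
ax.axvline(lcfs, color="gray")         # SOL to the left, confined region to the right
ax.set_yscale("log")
ax.set_xlabel("length along gas puff (a.u.)")
ax.set_ylabel(r"density (cm$^{-3}$)")
ax.legend(fontsize=8)
plt.show()
```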
3,123.4
2019-12-19T00:00:00.000
[ "Physics" ]
Diurnal variation of mountain waves Mountain waves could be modified as the boundary layer varies between stable and convective. However case studies show mountain waves day and night, and above e.g. convective rolls with precipitation lines over mountains. VHF radar measurements of vertical wind (1990–2006) confirm a seasonal variation of mountain-wave amplitude, yet there is little diurnal variation of amplitude. Mountain-wave azimuth shows possible diurnal variation compared to wind rotation across the boundary layer. Introduction Information on diurnal variation of mountain waves could be useful since the effect of, for instance, diurnal convection is uncertain.Convection could disrupt stable airflow of mountain waves (Ludlam, 1952), add to the mountain peaks forcing waves (Wallington, 1977), or modify wave modes (Ralph et al., 1997) and amplitudes (Georgelin et al., 1996). Gravity waves above convection are usually categorised as convection waves, separate from mountain waves, and waves above orographic convection have also been interpreted as a type of convection wave (Rovesti, 1970;Bradbury, 1990;Hauf, 1993).However, waves above convective rolls over mountains (vertical wind tens of cm s −1 or more, on timescale of several hours, and disappearing with a turbulence layer for horizontal wind near zero) often appear typical of mountain waves (Worthington, 2002). Correspondence to: R.M. Worthington (rmw092001@yahoo.com)1.1 Definitions of mountain waves Mountain waves could be defined as in standard theoretical models of wavelike air flow over a ridge, with the lowest streamlines usually following the mountain surface; similar waves modified by convection could be excluded from this category of classic or idealised mountain wave.Waves linked to orographic convection could have been wrongly identified as mountain waves in many studies using e.g.aircraft or radar data in the troposphere and stratosphere, without also measuring the boundary-layer structure. However, terminology for waves above mountains is often less specific about the cause of the waves, and instead based on wave characteristics, e.g."standing wave" if remaining almost static, or "orographic wave" if associated with mountains.A typical definition of mountain wave as "an atmospheric gravity wave, formed when stable air flow passes over a mountain or mountain barrier" (American Meteorological Society, Glossary of Meteorology) does not exclude effects of convection, rotors and turbulence on mountain wave formation, or a wave launching height above the mountain surface (although "lee wave" could imply a mountain obstacle upwind, at the same height as the wave, and more directly causing the wave). There are other variations on standard mountain-wave theory, such as a stagnant boundary layer absorbing waves instead of reflection at the ground (Smith et al., 2002;Jiang et al., 2006).Also there can be separate categories of mountain wave, such as "evening wave" (Roper and Scorer, 1952) for a mountain wave formed as convection stops and a stable boundary layer develops, maybe linked to katabatic wind. 
This paper uses a definition of mountain wave as a standing gravity wave above mountains (excluding e.g.propagating gravity waves such as typical convection waves).The paper looks at mountain waves above convective rolls for two case studies in Sect.2, since extensive convective rolls could be expected to disrupt daytime mountain waves, and provide an example of diurnal variation linked to stable-convectiveresidual boundary layer development above mountains (e.g., Kalthoff et al., 1998).Section 2.1 also includes weather radar measurements of precipitation lines for comparison with e.g.Kirshbaum and Durran (2005a).Section 3 then uses thousands of hours of VHF radar data to check for diurnal and seasonal variations of mountain-wave amplitude, and Sect. 4 shows mountain-wave azimuth and compares VHF radar and satellite measurement methods. Convective rolls, precipitation lines, and mountain waves 2.1 Case study, 1 October 2001 Miniscloux et al. (2001), Cosma et al. (2002) and Kirshbaum and Durran (2005a) ("MCKD") show weather radar measurements of along-wind precipitation lines above mountains.Numerical models imply these precipitation lines can be caused by convergence lines and convective rolls, triggered by mountains and mountain waves.Weather radar in Fig. 3 shows precipitation often also in lines, in the region of cloud lines above mountains in Fig. 2 (e.g.north and south of label "Birmingham").Average rain distribution is similar to orographic rain increasing above high ground (e.g., Bonell and Sumner, 1992).Some southwest-north-east rain lines to the east at ∼12:00-15:00 UT advect with the wind instead of remaining above mountains, and orographic rain to north, ∼53-56 • N, 1-4 • W, is less linear.There is also deeper convection in the cloud and rain lines, with thunder ∼11:00-12:00 UT at Birmingham, from a heavy rain area appearing near mountains of south Wales a few hours earlier.Occurrence of rain lines allows compari-son with MCKD, using VHF radar to measure the wave field (e.g., Röttger, 2000), and with convective rain as another factor in any diurnal effect of the convective boundary layer on mountain waves. Figures 4a, c show bands of upward and downward vertical wind (W ) typical of mountain waves (e.g.Worthington, 2002, Figs. 8a, 11a), measured using a 46.5 MHz VHF radar near Aberystwyth (Fig. 1).Figures 4a, c, 7a use a vertical radar beam; symmetric 6 • beams show similar waves but are noisier above ∼16 km height.Vertical wavelength increases with jet wind speed as expected for mountain waves in Figs. 4a, b (e.g., Worthington et al., 2001).Therefore along-wind rain lines above mountains in Fig. 3 (MCKD) are occurring in a case study similar to other observations of mountain waves above convective rolls (Worthington, 2002(Worthington, , 2005)). Convective rolls could raise the effective surface of the mountains causing mountain waves, for sheared airflow over shallow convection (Sinha, 1966); alternatively vertical air motion in mountain waves could trigger convective rolls downwind within the lowest region of wave flow (Kirshbaum and Durran, 2005a,b).Mountain waves and convective rolls could be difficult to separate as cause and effect; whether mountain wave or convection starts further upwind could be significant.However cloud streets in Figs.2b-d already start slightly upwind of the VHF radar measuring mountain waves in Fig. 4. 
Figure 5 shows MODIS (Moderate-resolution Imaging Spectroradiometer) 250-m resolution images, with examples of wave cloud upwind of cloud streets near 62 • N, 7 • W (Fig. 5c); cloud streets possibly upwind of wave cloud then continuing over mountains higher than 1 km, 53 • N, 4 • W (Fig. 5f); and smooth cloud streets similar to lenticular wave cloud, 51.5 • N, 10 • W (Figs. 5a, b, d, e).In Figs.5a, b, d, e there also appear to be cloud streets above sea to south and west.Since convective rolls and mountain waves can often occur separately, or with either upwind, they could be described as separate processes which can coincide and interact, instead of one process causing the other. For diurnal variation, Figs.2-4 show mountain waves and also cloud and rain lines occur both night and day on 1 October 2001, e.g.Fig. 2a at 02:26 UT.Sunrise and sunset are ∼06:15 and 17:55 UT.However along-wind cloud lines in Figs.2a and 2b-e could be of different types such as streaks and rolls (e.g., Young et al., 2002;Shun et al., 2003). Case study, 19 February 2004 Figure 6 shows cloud streets starting upwind, continuing above and downwind, of mountains.There are variations in thickness of the cloud streets, with appearance similar to "knots in strings", above mountain areas of e.g.North York Moors (54.3 • N, 1 • W), Peak District (53 • N, 2 • W) and near Wales (52.5 • N, 3 • W) which are higher than e.g.Chiltern Hills in Tian et al. (2003)."Knot" spacing of ∼9 km is larger than individual cumulus clouds, not caused by satellite scan lines, and positions of "knots" are aligned perpendicular to the cloud streets, suggesting perhaps a wave pattern with phase lines perpendicular to cloud streets (Bradbury, 1990;Hindman et al., 2004). Vertical-beam spectral width corrected for beambroadening (Fig. 7c) shows a turbulent layer for over 10 h at 16-17 km height, where mountain waves disappear at a critical layer (e.g., Worthington and Thomas, 1996).High spectral width near 10 km is partly spurious, caused by low signal-noise ratio below the tropopause at ∼11 km, and problems of exactly removing a substantial beam-broadening component in >30 m s −1 horizontal wind.Since 16-17 km is above the regions of high jet-stream wind shear, Fig. 7 could show mountain-wave breaking not shear instability although horizontal wind speed is several m s −1 .Typical convection waves should propagate downwind, whereas the waves above cloud streets in Fig. 7a keep the same phase for hours above the VHF radar.One explanation could be if mountain waves can exist through the convective boundary layer (Winstead et al., 2002), keeping the wave pattern "anchored" to the mountains; then waves as in Fig. 7a not only look like mountain waves, but can be partly caused as in standard mountain wave theory, modified by convection. Satellite images at 02:35, 04:16 and 21:07 UT show wave cloud instead of the cloud streets in Fig. 6, yet Fig. 7 shows mountain waves and a turbulence layer for most of 19 February 2004.Sunrise and sunset are at ∼07:25 and 17:35 UT.Sections 2.1 and 2.2 therefore show a lack of diurnal variation of mountain waves, when variation could be expected. Diurnal and seasonal mountain-wave amplitude Figure 8 shows diurnal and seasonal variation of surface weather (Figs.8a-d), and magnitude of vertical wind |W | (Figs.8f-i) as a more direct measure of mountain-wave (Kuettner et al., 1987;Gage et al., 1989;Sato, 1992;Böhme et al., 2004) with periods of tens of minutes while retaining more static mountain waves. 
Surface weather is included in Fig. 8 since if solar radiation, temperature and wind show minimal diurnal variation in winter at the VHF radar location, then diurnal variations of mountain waves might also be minimal.Sorting data for time of year shows also if any diurnal effect follows seasonal variation of sunrise and sunset time; Fig. 8 uses 30 intervals of ∼12.2 days.Figures 8a-d show some diurnal vari-ation through winter.Surface wind differs in Figs.8c and d, since Fig. 8c is measured near the top of a low hill, and Fig. 8d in a valley, more sheltered from prevailing wind, except for increased afternoon wind speed from e.g.sea breeze channelled in valleys, slope and valley winds, and convective boundary-layer mixing. Wind speed higher in the boundary layer could be more correlated to mountain-wave amplitude than surface wind.Nastrom and Gage (1984) report |W | more correlated to 700 mB than 850 mB wind.Correlation of wind profiles is also possible to e.g.airglow (Sukhodoyev et al., 1989) or orographic rain (Neiman et al., 2002).Figure 9 shows correlation of wind profiles to |W | at 1.7-2.5 km height where mountain waves are immediately above their source region and below possible critical layers higher in the atmosphere.Correlation to |W | at e.g.3-8 or 12-15 km instead of 1.7-2.5 km is more constant above ∼1 km.Correlation in Fig. 9 is only ∼0.35 or less, because of e.g.variations in horizontal position of mountain waves, and wave structure; also correlation profiles are altered by e.g.UHF data quality decreasing with height, and the horizontal separation of VHF radar from radiosondes (typically tens of km; 50 km to their launch site at Aberporth).However, the height of maximum correlation is mostly ∼0.5-1 km, using all wind directions, or subsets as |W | increases with both westerly and easterly wind (Prichard et al., 1995).Figure 8e shows wind speed at 800 m height, with less diurnal variation, and faster wind speed in autumn and winter than Figs.8c, d. |W | increases in autumn and winter, similar to Fig. 8e and seasonal variation of mountain-wave clouds (Cruette, 1976;Lester, 1978).Diurnal variation of |W | is much less than seasonal variation and appears fairly random.Figure 8 can use subsets of wind speed, wind direction, and/or surface weather; Fig. 8i is for 2-km wind >10 m s −1 , to check for diurnal variation in summer with faster wind speed.However, there is a pattern similar to Figs. 8f-h for Fig. 8i, and also for: low-level wind from north-west-south 180 • segment mostly over the sea, or north-east-south 180 • segment over land (Fig. 1) with different surface heating and boundary layer; using e.g.maximum |W | at any height 3-8 km instead of mean; or using variance, W 2 .Also, W probability distribution is nearly constant with time of day. Despite lack of diurnal variation in Figs.8f-i, the boundary layer below mountain waves varies between stable and convective.However, even case studies in Figs. 4, 7 show little diurnal effect, above convective rolls.If mountain waves show almost no diurnal variation of amplitude, this could imply that mountain wave systems can have altered forcing mechanisms (stable and linear, or turbulent including convection) in the boundary layer without the wave field varying significantly above the boundary layer.(d, e) satellites, 1996-2006, and surface wind averaged from up to 26 sites (Fig. 
1). Green, red and blue lines show yearly medians, for 00:00–24:00 UT and other time intervals. Vertical lines in (a, b, d, e) show last data in Worthington (1999b, 2001). Dots in (c) are for every third data point. Mountain-wave azimuth is of horizontal wavevector for (a, b), and wave clouds for (d, e), measured clockwise from north.

Boundary-layer wind and wave azimuth

Another parameter to check for diurnal variation is azimuth of the mountain-wave horizontal wavevector, on average between the surface and tropospheric wind azimuths (Worthington, 1999b, 2001). Figures 10–12 are to check for diurnal variation in over 15 years of data, and compare results from VHF radar and satellites, using an improved method of measuring wave azimuth on satellite images. Other mountain-wave parameters could also be useful, such as any diurnal variation of horizontal phase speed from zero, for e.g. numerical models.

Figure 10 shows mountain-wave azimuth measured as in Worthington (1999a,b, 2001), on average clockwise from surface wind in Figs. 10a, d, and anticlockwise from ∼2 km (1.7–2.3 km) wind in Figs. 10b, e. Wave azimuth from VHF radar uses height-time intervals 3–8 km × 1 h, with |W| > 0.05 m s^−1 and azimuth error <20°. Data to the right of vertical lines in Figs. 10a, b, d, e are more recent than Worthington (1999b, 2001), to check that the wave and cloud azimuth results persist and do not disappear. Also Fig. 10c shows expected clockwise wind rotation with height in the boundary layer, to compare with Figs. 10a, b, d, e. Liziola and Balsley (1997, 1998) use an alternative 3-radar method for measuring wave azimuth, which may give better results for propagating convection waves than for mountain waves (Carter et al., 1989).

2. set all areas to zero except mountain-wave or convective-roll cloud lines, above or near land since surface wind measurements in Fig. 1 are above land, also so mountain waves are above their source region, instead of being downwind lee waves.

3. take 2-D autocorrelation of each image using Fast Fourier Transform (and optionally subtract smoothed autocorrelation).

4. find the azimuth of the autocorrelation pattern, by rotating in steps of 1° using cubic interpolation, and averaging north–south in the square region of Fig. 11. The north–south average shows maximum east–west variations from its median, when the autocorrelation pattern is rotated with its lines north–south.

5. cloud azimuths (∼5%) are discarded if there are problems from e.g. lines of cloud shadows, or multiple wave azimuths.

Figures 10d, e include cloud streets as in Worthington (2001), expected to be slightly clockwise from parallel to the surface wind, in checking if mountain-wave clouds are slightly clockwise from perpendicular. There are more data before 1999 in Figs. 10d, e than Worthington (2001), since the autocorrelation method can also measure azimuth of patchy cloud lines. An average image rotation of 5° clockwise from north is subtracted as in Worthington (2001).
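A minimal numerical sketch of steps 2–5 above (our own illustration, not the code used for Figs. 10–12) could look as follows; the mask construction, the size of the central autocorrelation square and the scoring of the rotated pattern are simplifying assumptions.

```python
# Sketch of the satellite cloud-azimuth method: masked 2-D autocorrelation
# plus a search over rotation angles for the orientation of the cloud lines.
import numpy as np
from scipy.ndimage import rotate

def cloud_azimuth(image, mask):
    """Estimate the azimuth of cloud lines (degrees, modulo 180).

    image : 2-D array of brightness values, north at the top.
    mask  : boolean array, True where cloud lines over/near land are kept;
            everything else is set to zero (step 2).
    """
    field = np.where(mask, image - image[mask].mean(), 0.0)

    # Step 3: 2-D autocorrelation via FFT (Wiener-Khinchin relation).
    spectrum = np.abs(np.fft.fft2(field)) ** 2
    acf = np.fft.fftshift(np.real(np.fft.ifft2(spectrum)))
    acf /= acf.max()

    # Central square region of the autocorrelation, as in Fig. 11.
    n = min(acf.shape) // 4
    cy, cx = np.array(acf.shape) // 2
    core = acf[cy - n:cy + n, cx - n:cx + n]

    # Step 4: rotate in 1-degree steps (cubic interpolation), average
    # north-south, and keep the angle with the largest east-west variation
    # of that average about its median.
    best_angle, best_score = 0, -np.inf
    for angle in range(180):
        rotated = rotate(core, angle, reshape=False, order=3)
        profile = rotated.mean(axis=0)              # north-south average
        score = np.abs(profile - np.median(profile)).max()
        if score > best_score:
            best_angle, best_score = angle, score
    # When the score peaks, the autocorrelation lines are north-south, so the
    # line azimuth is the applied rotation (up to sign convention, modulo 180).
    return best_angle % 180
```

Step 5 (rejecting cases with cloud shadows or multiple wave azimuths) would still need a manual or additional automated check on the autocorrelation pattern.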
Green, red and blue lines in Figs. 10a–c show yearly medians of the difference between wave and wind azimuth, for 00:00–24:00 UT and other time intervals. Red lines in Figs. 10a–c mostly show less positive or more negative azimuth differences than blue lines. Horizontal wind rotation from the surface to 2 km height is less for the daytime convective boundary layer (Fig. 8j), so red and blue lines in Fig. 10c are for 12:00–17:00 UT and 23:00–04:00 UT. Plots similar to Fig. 8j for the difference of mountain-wave and wind azimuth are more variable than Fig. 8j, but possibly show wave azimuth is nearer to the 2-km wind and further away from surface wind at e.g. 15:00–20:00 UT compared to 05:00–10:00 UT (blue, red lines in Figs. 10a, b). An explanation could be if, at the mountain-wave launching height (Shutts, 1997), the horizontal wavevector of mountain waves is on average parallel to the horizontal wind (Worthington, 1999b); in the afternoon, with a developed convective boundary layer, mountain-wave azimuth could be parallel to upper-boundary-layer wind; in the morning, with more stable residual-layer flow over a shallower convective boundary layer, mountain-wave azimuth could instead be parallel to lower-boundary-layer wind, with much variability from profiles of e.g. wind shear and temperature lapse rate. Occurrence of boundary-layer mountain-wave clouds at night, and convective clouds in daytime, could be consistent with a higher wave launching height instead of mountain waves ceasing in daytime. Measurements are in the same hour, or ±1, ±2 h to provide more data. VHF radar and satellite measurements of wave azimuth agree fairly well, with median difference <5° for Fig. 12, despite being limited to occurrence of VHF aspect sensitivity and wave cloud.

Conclusions

Mountain waves near 52.4° N, 4.0° W show seasonal variation of amplitude, but much less diurnal variation despite the effects of boundary-layer convection. This negative result is however useful, in studying the effects of boundary layers on mountain waves. Rain lines above mountains (Miniscloux et al., 2001; Cosma et al., 2002; Kirshbaum and Durran, 2005a,b) can occur in convective rolls beneath typical mountain waves observed by VHF radar. Convective rolls can start upwind of mountain waves, not only triggered downwind. Horizontal wavevector of mountain waves is between surface and tropospheric wind direction, both day and night, but possibly nearer to surface than 2-km wind azimuth in the morning, compared to evening.

Figures 1–4 show a case study on 1 October 2001 with mountain waves above lines of convection and rain. Figures 2a, c, d, e are from NOAA AVHRR (National Oceanic and Atmospheric Administration, Advanced Very High Resolution Radiometer), and Fig. 2b from Landsat. There are cloud lines south-west to north-east, near parallel to the south-westerly surface wind, above mountains ∼52–53° N, 3–4° W.

Fig. 11. Example satellite images of (a) mountain waves, (b) convective cloud streets, and autocorrelation of the unshaded areas of cloud. North is at the top. Diagonal lines show cloud azimuth from the region of autocorrelation marked by a square.

Fig. 12. Mountain-wave azimuths from VHF radar and satellite images, 1996–2006. Wave azimuth from satellite is perpendicular to cloud lines, pointing upwind. Time separation between radar and satellite data is 0, 1 or 2 h. The diagonal line is for equal radar and satellite azimuths.
4,377.8
2006-11-21T00:00:00.000
[ "Environmental Science", "Physics" ]
Effects of Propolis and Phenolic Acids on Triple-Negative Breast Cancer Cell Lines: Potential Involvement of Epigenetic Mechanisms Triple-negative breast cancer is an aggressive disease frequently associated with resistance to chemotherapy. Evidence supports that small molecules showing DNA methyltransferase inhibitory activity (DNMTi) are important to sensitize cancer cells to cytotoxic agents, in part, by reverting the acquired epigenetic changes associated with the resistance to therapy. The present study aimed to evaluate if chemical compounds derived from propolis could act as epigenetic drugs (epi-drugs). We selected three phenolic acids (caffeic, dihydrocinnamic, and p-coumaric) commonly detected in propolis and the (−)-epigallocatechin-3-gallate (EGCG) from green tea, which is a well-known DNA demethylating agent, for further analysis. The treatment with p-coumaric acid and EGCG significantly reduced the cell viability of four triple-negative breast cancer cell lines (BT-20, BT-549, MDA-MB-231, and MDA-MB-436). Computational predictions by molecular docking indicated that both chemicals could interact with the MTAse domain of the human DNMT1 and directly compete with its intrinsic inhibitor S-Adenosyl-l-homocysteine (SAH). Although the ethanolic extract of propolis (EEP) did not change the global DNA methylation content, by using MS-PCR (Methylation-Specific Polymerase Chain Reaction) we demonstrated that EEP and EGCG were able to partly demethylate the promoter region of RASSF1A in BT-549 cells. Also, in vitro treatment with EEP altered the RASSF1 protein expression levels. Our data indicated that some chemical compound present in the EEP has DNMTi activity and can revert the epigenetic silencing of the tumor suppressor RASSF1A. These findings suggest that propolis are a promising source for epi-drugs discovery. Introduction Propolis is a resinous mixture produced by honeybees and used in the construction and protection of the hive [1]. This natural origin product is derived from different botanical sources. Thus, the mixed composition of propolis depends on the geographical area and the local flora, which significantly contribute to its heterogeneous and complex chemical composition [2]. It is estimated that raw propolis contains hundreds of chemical compounds, whose extract shows a plethora of biological and pharmacological activities, for instance immunomodulatory [3], antitumoral [4], anti-inflammatory [5,6], antioxidant, and antibacterial, among others [7]. Results The MTT [3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide] test was used to evaluate the cytotoxic effect of the propolis, phenolic acids, and EGCG in the BT-20, BT-549, MDA-MB-231, and MDA-MB-436 triple-negative breast cancer cell lines. Propolis reduced cell viability in a dose-and-time dependent manner on BT-20 and BT-549 cells. After 72 h of in vitro exposure, the half maximal inhibitory concentration (IC 50 In order to investigate the potential of propolis as a source of novel DNMT inhibitors (DNMTi), we first investigated the effect of the EEP to change the global content of DNA methylation in the above-mentioned cell lines. After 96 h of in vitro treatment with propolis with a dose of approximately ½ IC50 calculated for the sensible cell lines, no differences were detected relative to the respective controls ( Figure 2A). 
Figure 2. (A) Analysis of global DNA methylation content after propolis treatment relative to the respective control. (B) Locus-specific DNA methylation analysis by MS-PCR. BT-549 cells are fully methylated at the RASSF1A promoter region (M = methylated and U = unmethylated alleles). The in vitro treatment with propolis and EGCG was able to partially demethylate this locus, as evidenced by detection of the PCR product with primers specific to the unmethylated DNA sequence. (C) RASSF1 protein expression levels in BT-549 cells after the in vitro treatment with propolis (10 µg/mL), p-coumaric acid, and EGCG (10 µM) during 96 h. * p < 0.05.

Based on the sensitivity of BT-549 cells to p-coumaric acid and EGCG, this cell line was selected for further experiments designed to evaluate the potential of p-coumaric acid as an epigenetic drug in comparison to EGCG, which is a well-known DNMTi. The RASSF1 locus was chosen because it is frequently hypermethylated in the promoter region of the RASSF1A alternative transcript. MS-PCR (Methylation-Specific Polymerase Chain Reaction) analysis confirmed that this promoter region is fully methylated in BT-549 cells. In addition, after 96 h of continuous exposure to EEP (10 µg/mL) or EGCG (10 µM) this promoter region was partially demethylated (Figure 2B). No changes in the DNA methylation pattern were observed after the treatment with p-coumaric acid. While the treatment with EEP reduced the intracellular levels of the RASSF1 protein, the exposure to p-coumaric acid or EGCG did not change the RASSF1 protein expression levels in this cell line (Figure 2C).

In parallel to the experiments described above, the strategy of molecular docking was used to evaluate the potential interactions of the three phenolic acids with the methyltransferase domain (MTase) of the human DNMT1 protein. The computational predictions by molecular docking of SAH (S-adenosyl-L-homocysteine) and the MTase domain reproduced the molecular model based on the respective co-crystallography. Thus, the molecules of SAH (the endogenous intrinsic inhibitor) and EGCG (a known DNMTi of natural origin) were used as references. As expected, the docking solutions of SAH and EGCG showed the best site occupancy and the greatest number of possible interactions with specific amino acid residues (Figure 3A,B,E) in the catalytic pocket of the DNMT1 protein. Although docking solutions were similar among the three phenolic acids, p-coumaric acid was the ligand showing the most interactions in common with SAH and EGCG. Table 1 summarizes the main results of the docking calculations based on the best site occupancy, lowest binding free energy, and amino acid interactions of each chemical compound.

[Table 1, partially recovered from the extracted text: best docking solutions with binding free energies of −8.3, −6.7 and −6.0, RMSD values of 4.547, 6.809 and 6.049, and interacting residues Met1169, Asp1190, Asn1578; Gly1149, Gly1150, Leu1151, Val1580; Gly1147; Glu1266, Arg1310; one row corresponds to hydrocinnamic acid (CID 107). Footnotes: CID, PubChem Compound ID number; (*) retrieved from ChemSpider (https://www.chemspider.com/Default.aspx); (**) RMSD, Root-Mean-Square Deviation.]

Discussion

The antitumoral effects of propolis towards human cancer cell lines have been well documented. The present study aimed to identify natural bioactive molecules derived from propolis that are able to inhibit DNA methyltransferases, leading to the reactivation of genes silenced by promoter hypermethylation. Thus, four triple-negative breast cancer cell lines and three phenolic acids (i.e., caffeic, dihydrocinnamic, and p-coumaric acids) present in the sample of Brazilian propolis were selected for in vitro and in silico analysis. Our data demonstrated that EEP decreased the viability of the BT-20 and BT-549 cell lines, but this effect was not detected in those of MDA-MB-231 and MDA-MB-436. Unlike Brazilian propolis, Cuban propolis presented a cytotoxic effect on MDA-MB-231 cells [31]. Specific differences in the chemical composition of the propolis samples and the heterogeneity of the genetic and epigenetic background of these cell lines could explain the differential response to propolis treatment in the breast cancer cell lines analyzed in the present study. Among the phenolic acids tested, only p-coumaric acid and EGCG showed cytotoxic effects in the four triple-negative breast cancer cell lines.
The pathways mediating the cytotoxic effects of EGCG have been described in the literature [32,33], while the cytotoxic effects of p-coumaric acid in breast cancer cells have been poorly investigated. A short-term preclinical model indicated the involvement of p-coumaric acid in the chemoprevention of colon cancer [34]. The antitumoral potential of p-coumaric acid has also been demonstrated by the down-regulation and inhibition of the EGFR active site in colon cancer cell lines [35,36]. The treatment with this chemical compound induced apoptosis in MCF-7 breast cells in a concentration-dependent manner [37]. Furthermore, this last study demonstrated that the treatment with p-coumaric acid was associated with increased acetylation of histone H3, suggesting its potential for HDAC inhibition [37]. Epigenetic factors, including DNA methylation and histone modifications, work together to regulate essential cellular processes such as developmental programs, genome integrity, gene expression, cell proliferation and survival, and death pathways [38].
Aberrant DNA methylation profiles, including global hypomethylation and gene-specific hypermethylation, contribute to the disruption of epigenetic mechanisms and are considered as a promising field for preventing cancer and therapeutic strategies [39]. The DNMT are a family of enzymes that is responsible for establishing and maintaining the DNA methylation patterns throughout mammalian genomes [40]. The enzymatic methylation reaction consists in the transfer of a methyl group from the substrate S-adenosylmethionine (SAM) to the fifth carbon position of the pyrimidine ring of cytosines located in dinucleotides cytosine-guanine (5 -CpG-3 ) [41]. As a result, SAM is converted into SAH. This normal byproduct of methyl donation act as a competitive inhibitor of DNMTs due to its binding in the MTase domain. Besides SAH, DNMT activity can be controlled by small molecules [42]. Therefore, to test the hypothesis that among the complex composition of propolis there are chemical compounds that are able to inhibit DNMTs, we first evaluated the effect of propolis treatment in the global DNA methylation in four breast cancer cells, but no differences were found in the relative methylation content between cells treated and the respective untreated control. Then, we used computational simulations based on docking to evaluate possible interactions between the phenolic acids and the MTase domain of the human DNMT1. Molecular docking is an in silico technique that is widely used in the ligand-protein simulation and has been used to identify new epigenetic inhibitors and to understand the mechanisms of action of known compounds as well as novel drugs for epigenetic therapy [43]. Overall, the docking simulations showed that the analyzed phenolic acids could interact with the MTase domain in a way similar to the intrinsic inhibitor SAH and EGCG, although with higher free binding energy. However, using an in vitro prokaryotic model with the recombinant methylase M.SssI, the docking predictions were not validated for p-coumaric acid and ECGC. Based on the molecular docking evidence, we further investigated if the treatment with propolis, p-coumaric acid, or EGCG could revert the locus-specific methylation and lead to gene re-expression. The RASSF1 gene has been considered a target gene for this kind of analysis [44]. This gene has several isoforms, but two of them, RASSF1A and RASSF1C, have been implicated in cancer origin and progression. These isoforms are transcribed from distinct promoters and each of them has an associated CpG island. However, RASSF1A and RASSF1C promoter regions show opposite DNA methylation patterns: while the CpG island of RASSF1A isoform is frequently hypermethylated in several cancer types, the CpG island of RASSF1C remains unmethylated. It has been suggested that the hypermethylation of RASSF1A may be a marker for early cancer detection and prognosis [29]. Since several natural compounds present in food and herbs can inhibit DNMT expression and the activity of RASSF1A, it has been also considered as a target to demethylating drugs for cancer therapy [44]. Here we demonstrated that EEP and EGCG, but nor p-coumaric acid, were able to partly demethylate RASSF1A in BT-459 cells. The effect of EGCG on the demethylation of RASSF1A or its reactivation has not been previously reported [44]. However, under the experimental conditions used in the present study, demethylation was not associated with an increase in the RASSF1 protein levels. 
In contrast, although the treatment of BT-549 cells with propolis did not change the methylation pattern of RASSF1A, it led to a reduced RASSF1 protein expression level. Histone modifications and DNA methylation are key epigenetic events leading to the silencing of RASSF1A. It has been suggested that the abrogation of RASSF1A can allow RASSF1C expression. In a previous study, we analyzed the expression level of these alternative transcripts of the RASSF1 gene by quantitative real-time RT-PCR [45]. We described that, while the RASSF1A transcript is silenced by hypermethylation in breast cancer cell lines, the mRNA of RASSF1C is overexpressed in BT-549 cells when compared with epithelial mammary cells [45]. The antibody used for protein quantification in the ELISA assay is unable to differentiate the 1A and 1C isoforms of the RASSF1 protein, limiting the interpretation of these results. Nevertheless, these data suggest that propolis exposure could reduce the expression of the RASSF1C isoform or disrupt the RASSF1A/RASSF1C ratio in cancer cells. This finding is relevant because, contrary to RASSF1A, some studies indicated that RASSF1C has oncogenic properties and could promote cell survival and proliferation [46]. Here, we demonstrate that some component of the propolis extract reverted the DNA methylation of an important tumor suppressor gene. New studies are clearly necessary to identify this/these compound(s). Epi-drugs targeting DNA methyltransferases are becoming a promising alternative to improve cancer therapy, since the combined use of DNMTi at low doses might revert resistance to cytotoxic agents, in part, by removing the acquired epigenetic alterations associated with resistance to therapy [47]. Materials and Methods The present study used in vitro and in silico approaches to investigate whether propolis-derived molecules can inhibit DNMTs. A detailed study design is given in Supplementary Figure S2. Cell Lines and Cell Culture Four triple-negative breast cancer cell lines (BT-20, BT-549, MDA-MB-231, and MDA-MB-436) were obtained from the Tissue Culture Shared Resource at the Lombardi Comprehensive Cancer Center, Georgetown University, Washington DC, USA. Before the experiments, genomic authentication was conducted; the culture conditions were described previously [50]. High Glucose Dulbecco's Modified Eagle's Medium (DMEM, LGC Biotecnologia, Cotia, SP, BR) supplemented with 10% fetal bovine serum (LGC Biotecnologia, Cotia, SP, BR) and 1% penicillin (10,000 U/mL)/streptomycin (10,000 µg/mL) (Thermo Fisher Scientific, Waltham, MA, USA) was used for all cell lines. For the BT-20 cells, the culture medium was supplemented with Gibco® MEM non-essential amino acids solution (Thermo Fisher Scientific, Waltham, MA, USA). Cell Viability Assay A colorimetric MTT assay was performed to assess cell metabolic activity through the ability of mitochondrial NAD(P)H-dependent oxidoreductase enzymes to reduce the soluble yellow tetrazolium salt [3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide] to insoluble purple formazan crystals. Cell lines in exponential in vitro growth were trypsinized with a 0.25% trypsin/EDTA solution (LGC Biotecnologia, Cotia, SP, BR). Afterwards, the cells were diluted in 1 mL of culture medium, counted in the Countess™ Automated Cell Counter (Invitrogen, Carlsbad, CA, USA), and seeded at a density of 2 × 10³ cells in 96-well plates.
After a period of 24 h for cell adherence, the cells were exposed to different concentrations of EEP (6.25, 12.5, 25, 50 and 100 µg/mL). Cells were also exposed to the following concentrations of caffeic acid, dihydrocinnamic acid, p-coumaric acid, and EGCG: 6.25, 12.5, 25, 50 and 100 µM, for 24, 48, and 72 h. These chemical compounds were diluted in either dimethyl sulfoxide (DMSO) or ethanol (Sigma Aldrich, St. Louis, MO, USA). Untreated control cells, exposed to the respective diluents, were used as references. After the treatment, the medium was aspirated and 100 µL of MTT solution (1 mg/mL) was added to each well. Cells were incubated for 4 h at 37 °C. Formazan crystals were dissolved in DMSO (100 µL). The absorbance was measured at 540 nm using the LX800 automated plate reader (BioTek®, Winooski, VT, USA). Corrected absorbance values were used to estimate cell viability, expressed by the ratio: (A540 average treated cells − A540 average blank)/(A540 average untreated control cells − A540 average blank). The assays were performed in triplicate. In Vitro Treatments and DNA Extraction The breast cell lines were seeded at a density of 1 × 10⁵ cells in 25 cm² culture flasks and incubated at 37 °C. After 24 h, the cells were treated with 10 µg/mL of EEP and 10 µM of either p-coumaric acid or EGCG for 96 h. The culture medium was replaced with fresh medium every 24 h. All experiments were performed in triplicate and a mock treatment was done with the diluents (control). Then, the cells were allowed to recover for 24 h prior to harvesting. After the treatment, the cells were trypsinized, centrifuged, and the pellet was frozen at −80 °C. The genomic DNA was obtained by standard proteinase K digestion, followed by phenol/chloroform extraction and ethanol precipitation. Fluorescent DNA quantification was performed with the QuantiFluor® dsDNA System and Quantus™ Fluorometer (Promega, Madison, WI, USA), according to the manufacturer's instructions. Global DNA Methylation Content The effect of propolis on global DNA methylation was investigated with the Imprint Methylated DNA Quantification Kit (Sigma Aldrich, St. Louis, MO, USA), following the manufacturer's instructions. The methylated DNA fraction was captured using a 5-methylcytosine antibody and was colorimetrically quantified. For each experimental condition, methylation analysis was performed in triplicate (100 ng of input DNA). Three independent biological replicates and a fully methylated control DNA were also included in this experiment. The relative DNA methylation content of propolis-treated versus control cells was determined from the absorbance (A) at 450 nm by the following formula: (A450 average propolis-treated cells − A450 average blank)/(A450 average untreated control cells − A450 average blank). Methylation-Specific Polymerase Chain Reaction of the RASSF1A Promoter Qualitative Methylation-Specific Polymerase Chain Reaction (MS-PCR) was performed to verify the effect of the propolis, p-coumaric acid, and EGCG treatments on the locus-specific methylation pattern of the breast cancer cell line BT-549. The genomic DNA (1 µg) was modified by the sodium bisulfite protocol with the EpiTect Bisulfite Kit (Qiagen, Hilden, Germany). After DNA modification, PCR conditions and amplification were conducted as described in a previous study by our group [51].
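For readers who want to reproduce the two ratio calculations above, a minimal Python sketch is given below; the absorbance values are hypothetical placeholders, and only the corrected-ratio arithmetic described in the text (blank subtraction, treated over untreated control) is implemented.

```python
# Minimal sketch of the corrected-absorbance ratios described above.
# The numbers are hypothetical placeholders, not measured data.

def corrected_ratio(treated_abs, control_abs, blank_abs):
    """Return (mean treated - mean blank) / (mean control - mean blank)."""
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(treated_abs) - mean(blank_abs)) / (mean(control_abs) - mean(blank_abs))

# MTT viability at A540 (triplicate wells, hypothetical values)
viability = corrected_ratio(
    treated_abs=[0.52, 0.55, 0.50],
    control_abs=[0.80, 0.78, 0.82],
    blank_abs=[0.05, 0.06, 0.05],
)

# Relative global DNA methylation at A450 (same formula, different readout)
relative_methylation = corrected_ratio(
    treated_abs=[0.40, 0.42, 0.41],
    control_abs=[0.43, 0.44, 0.42],
    blank_abs=[0.08, 0.07, 0.08],
)

print(f"cell viability (fraction of control): {viability:.2f}")
print(f"relative methylation (treated/control): {relative_methylation:.2f}")
```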
Expression of the RASSF1 Protein The expression levels of the RASSF1 protein were determined by an enzyme-linked immunosorbent assay using the human RASSF1 ELISA Kit (Aviva Systems Biology, San Diego, CA, USA). Initially, 1 × 10⁵ cells were exposed to 10 µg/mL of EEP and 10 µM of p-coumaric acid or EGCG for 96 h, as described above. Afterwards, the cells were collected by addition of a 0.25% trypsin/EDTA solution (LGC Biotecnologia, Cotia, SP, BR), centrifuged, and washed three times in cold 1× PBS (Phosphate-Buffered Saline). The cells were resuspended in 1× PBS, subjected to three freeze/thaw cycles at −20 °C for lysis, and centrifuged at 1500× g for 10 min at −8 °C to remove cellular debris. The protein concentration in the cell lysates was estimated with the NanoDrop 1000 spectrophotometer (Thermo Fisher Scientific, Waltham, MA, USA) and diluted with a standard diluent. The ELISA protocol followed the manufacturer's recommendations. The results were based on the relative optical density (OD) at 450 nm (OD450), as follows: (Relative OD450) = (well OD450) − (mean blank well OD450). The standard curve was generated by plotting the mean replicate relative OD450 of each standard serial dilution point versus the respective standard concentration (ranging from 10,000 to 156.25 pg/mL, dilution factor 1/2). The RASSF1 concentrations in the samples were interpolated by linear regression. Molecular Docking Docking calculations between the ligands and the methyltransferase domain of the human DNMT1 were performed with the AutoDock Vina software [52]. The three-dimensional structure was obtained from the Protein Data Bank (PDB ID 4WXX). The chemical structures of the probable ligands were retrieved from the PubChem database (pubchem.ncbi.nlm.nih.gov): caffeic acid (CID 689043), dihydrocinnamic acid (CID 107), p-coumaric acid (CID 637542), and EGCG (CID 65064). SAH (CID 439155) was used as a reference molecule in each step. The area of interest on the MTase domain was defined by establishing a cube at the geometric center of the co-crystallized SAH, with dimensions of 20 × 20 × 20 Å, covering the SAH binding site and employing a grid-point spacing of 1.0 Å. The x, y, and z coordinates for the center of the MTase domain were −45.55, 61.52, and 6.091, respectively. For each ligand tested, an exhaustiveness of 10 was used. The best docking solution for each ligand was selected based on the lowest free binding energy (kcal/mol), geometric position and residue contacts, analyzed with the AutoDockTools software [53]. The Root Mean Square Deviation (RMSD) values were calculated according to the default cutoff parameter of AutoDock Vina [52]. For protein surface rendering and image building, we used the UCSF Chimera visualization software (University of California, San Francisco, CA, USA) [54]. In Vitro Inhibition Assay with the CpG Methylase M.SssI The recombinant enzyme M.SssI methylates all cytosine residues in double-stranded DNA fragments at CpG dinucleotides. Initially, a fragment of 658 bp from the human gene MECP2 (chrX:154,030,181-154,030,838) was generated by Polymerase Chain Reaction (PCR). This amplicon contains 26 CpGs and one recognition site of the BstUI restriction enzyme (5′-CGCG-3′). The digestion of this PCR product with BstUI generates two fragments of 332 bp and 336 bp; however, the cleavage is inhibited by cytosine methylation. Thus, the purified fragment of 658 bp was used as substrate DNA for the in vitro methylation assay.
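To make the docking setup above easier to reproduce, a hedged sketch of a batch AutoDock Vina run is given below. The search box, spacing and exhaustiveness come from the text; the PDBQT file names are hypothetical, and receptor and ligand files would need to be prepared separately (e.g., with AutoDockTools) before this would run.

```python
# Sketch of the docking setup described above: AutoDock Vina, search box centered
# on the co-crystallized SAH of DNMT1 (20 x 20 x 20 A), exhaustiveness 10.
# File names are hypothetical placeholders, not files distributed with the paper.
import subprocess
from pathlib import Path

ligands = ["caffeic_acid", "dihydrocinnamic_acid", "p_coumaric_acid", "egcg", "sah"]

box = {
    "center_x": -45.55, "center_y": 61.52, "center_z": 6.091,
    "size_x": 20, "size_y": 20, "size_z": 20,
}

for name in ligands:
    conf = Path(f"{name}_conf.txt")
    lines = ["receptor = dnmt1_4wxx.pdbqt", f"ligand = {name}.pdbqt"]
    lines += [f"{k} = {v}" for k, v in box.items()]
    lines += ["exhaustiveness = 10", f"out = {name}_out.pdbqt"]
    conf.write_text("\n".join(lines) + "\n")
    # Run the vina executable; the lowest-energy mode written to <name>_out.pdbqt
    # is the pose to inspect for residue contacts in AutoDockTools.
    subprocess.run(["vina", "--config", str(conf)], check=True)
```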
The methylation reaction contained 400 ng of substrate DNA and 4 U of M.SssI methylase (New England Biolabs, Ipswich, MA, USA) in a final volume of 50 µL and was incubated at 37 °C overnight, as described by Brueckner et al. [55]. p-Coumaric acid and EGCG were tested at 25, 50, 100, 200, and 400 µM. Positive (without any test compound) and unmethylated (without the M.SssI methylase) controls were also included in the experiment. After completion, the reaction was inactivated at 65 °C for 15 min, followed by purification and digestion with BstUI (50 mM potassium acetate, 20 mM Tris-acetate, 10 mM magnesium acetate, and 100 µg/mL BSA, at 60 °C). The visualization of BstUI-digested fragments on 6% polyacrylamide gel electrophoresis is indicative of unmethylated restriction sites, due to inhibition of the methylase activity. Statistical Analysis The statistical significance of the experimental data compared with untreated controls was determined by a paired t-test or ANOVA, corrected by Dunnett's test for multiple comparisons. The significance level was 5% and the statistical tests were performed using the GraphPad Prism 8 software (GraphPad Software, Inc., San Diego, CA, USA). Conclusions In conclusion, the present study showed that propolis reduced the viability of BT-20 and BT-549 cells, while p-coumaric acid and EGCG showed cytotoxic effects in all analyzed triple-negative breast cancer cell lines. Molecular docking simulations indicated that the phenolic acids and EGCG can interact with the MTase domain of the DNMT1 enzyme. Moreover, the potential use of small molecules derived from propolis in the discovery of new epi-drugs is supported by the fact that propolis partially demethylated the promoter region of RASSF1A in BT-549 cells. Further studies are clearly necessary in order to characterize propolis-derived chemical compounds as new epi-drugs. Supplementary Materials: The following are available. Figure S1: Cell viability analysis after in vitro treatment with caffeic and dihydrocinnamic acids in triple-negative breast cancer cell lines. Figure S2: Workflow chart of this study.
Fuzzy Comprehensive Evaluation of Pilot Cadets' Flight Performance Based on G1 Method: In this paper, to better evaluate the flight performance of pilot cadets, a flight performance evaluation index system was constructed based on the task of the traffic pattern, the flight training manual, and interviews with instructors. The fuzzy comprehensive evaluation model established by the G1 method was used to evaluate the flight performance of pilot cadets. The flight data of 30 flight cadets were collected to verify the applicability of the fuzzy comprehensive evaluation model. The results showed that the index system established in this paper can meet the requirements of flight performance evaluation. In addition, the fuzzy comprehensive evaluation results were consistent with the evaluation results of experts. Therefore, the system is effective and feasible for the evaluation of pilot cadets' flight performance through the fuzzy comprehensive evaluation model established by the index system and the G1 method. Introduction Safety is critical to the development of various industries, particularly civil aviation. The civil aviation industry plays an important role in the modern economy. An efficient and safe civil aviation transport network can promote commercial development and economic growth. At present, air travel is one of the most important means of long-distance travel and international communication. Ensuring safe flights is essential to protect the lives and property of passengers. In addition, maintaining a good safety record and implementing proactive safety measures can increase passenger confidence and loyalty, thereby contributing to the long-term growth and sustainability of the business. Aviation safety is, therefore, of vital importance to passengers, airlines, the economy, national security and the international community. A safe, reliable and sustainable civil aviation system can provide people with a better travel experience. However, in current civil aviation transportation, there have been frequent unsafe flying incidents caused by crew factors. Statistical data show that the proportion of accidents caused by mechanical reasons is decreasing each year [1,2], whilst accidents caused by human factors account for over 70% of the total [3,4]. Pilot errors account for around 60% [5][6][7]. For example, the flight crew of Air China Flight 129 lost situational awareness during the flight and disregarded the height restrictions around the airport [8], while the captain of Asiana Airlines Flight 214 demonstrated poor manual flight skills and relied heavily on the autopilot system during the landing of a wide-body aircraft [9], which led to inadequate flight capabilities causing aviation accidents. These unsafe flight incidents reflect the insufficient knowledge and performance of existing pilots and the mismatch between pilot evaluation systems and industry development, and they highlight the importance of conducting evaluations of pilot ability. In recent research on pilot performance evaluations, researchers and scholars have primarily focused on methods and models. Hebbar and Pashilkar [10] performed a realistic approach and landing flight scenario using a reconfigurable flight simulator, and made subjective and quantitative measurements of pilot performance. Wojcik et al.
[11] adapted the Demand Resource Evaluation Scores (DRES) as a metacognitive indicator to assess pilot students' perceptions during simulated training of a novel manoeuvre. The research found that individual metacognitive evaluations of a stressful aviation manoeuvre might be important for progress in flight performance. Some researchers have also used quick access recorder (QAR) data to evaluate pilots' flight performance. Wang et al. [12] developed an evaluation method for pilot performance during the landing phase based on QAR flight data. A flight landing operation performance evaluation system (FLOPES) was set up based on the evaluation model. Chen and Huang [13] used a Bayesian Network to perform flight crew performance evaluation. Based on an analysis of 484 aviation accidents caused by human factors, a flight crew performance model was constructed. Zhang et al. [14] studied the relationship between QAR data and pilot performance, and put forward one-dimensional convolutional neural networks (1-D CNN) that consider QAR metrics in an integrated manner. An approach was then developed to evaluate the state of pilot performance. Regarding the method used in this research, fuzzy comprehensive evaluation refers to the use of fuzzy mathematics to assess fuzzy objects influenced by multiple factors against certain evaluation criteria [15][16][17]. The fuzzy comprehensive evaluation method has been widely applied in evaluation modelling in various fields and has achieved good results [18][19][20]. In weight calculation, subjective weighting methods like the analytic hierarchy process (AHP) method and the order relationship analysis method (G1 method) are commonly used. The AHP method is a simple and practical method for handling complex problems with multiple objectives, criteria and levels. The AHP involves experts assigning values in the range from 1 to 9, or their reciprocals, to express the relative importance between two indicators [21]. After these valuations, a consistency check is performed [22]. However, when dealing with a large number of indicators, errors in assessing their importance can occur, resulting in inconsistent judgements [23], such as A being more important than B, B being more important than C, and C being more important than A. Thus, determining the importance between A and C becomes problematic. In contrast, the G1 method is an improved index weighting method based on the AHP algorithm [24]. The G1 method avoids the consistency test required by the AHP algorithm by using the non-inferior order relationship between indicators, and it has the characteristics of simple operation and wide applicability [25]. The G1 method initially requires experts to rank the importance of the indicators according to their judgments and subsequently assign values to express the relative importance between adjacent indicators. This approach effectively circumvents the drawbacks of the AHP method. In the context of designing a flight trainee skill assessment, this study acknowledges that there may be a substantial number of indicators to consider. If the AHP method were employed, it could lead to consistency check failures. Moreover, repeatedly convening experts for the assessment of indicator importance is not very convenient. Consequently, the G1 method was chosen as the approach to calculating the indicator weights in this research.
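Although the formal equations are given later in the Method section, a minimal Python sketch of the standard G1 formulation may help fix ideas: each expert ranks the indicators from most to least important, assigns a ratio r_k between each pair of adjacent indicators, and the weights are recovered by back-substitution and averaged over experts. The example indicator names, rankings and ratios below are hypothetical, not the survey results of this study.

```python
# Minimal sketch of the G1 (order-relation) weighting procedure under the
# standard formulation. Rankings and ratios are hypothetical placeholders.

def g1_weights(ranking, ratios):
    """ranking: indicator names, most to least important.
    ratios: r_k for k = 2..n, i.e. importance of indicator k-1 relative to k."""
    n = len(ranking)
    assert len(ratios) == n - 1
    # w_n = 1 / (1 + sum over k of the product r_k * r_(k+1) * ... * r_n)
    products, prod = [], 1.0
    for r in reversed(ratios):        # r_n, r_(n-1), ..., r_2
        prod *= r
        products.append(prod)
    w = [0.0] * n
    w[-1] = 1.0 / (1.0 + sum(products))
    for k in range(n - 1, 0, -1):     # w_(k-1) = r_k * w_k
        w[k - 1] = ratios[k - 1] * w[k]
    return dict(zip(ranking, w))

expert_1 = g1_weights(["altitude deviation", "touchdown rate", "track deviation"], [1.2, 1.4])
expert_2 = g1_weights(["touchdown rate", "altitude deviation", "track deviation"], [1.1, 1.3])

# Average the per-expert weights for each indicator (the spirit of Equation (4)).
combined = {k: (expert_1[k] + expert_2[k]) / 2 for k in expert_1}
print(combined)   # each expert's weights sum to 1, and so does the average
```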
Some scholars have combined the G1 method with the fuzzy comprehensive evaluation method to construct evaluation models [26][27][28]. Thus, applying the G1 method [29] and the fuzzy evaluation method [30] to the assessment of pilot flight performance enables a comprehensive consideration of the various factors exhibited in pilot flight performance. Based on the importance and evaluation results of each factor, the original qualitative evaluation can be quantified to better handle the multifactorial, fuzzy, and subjective judgments in the assessment of pilot flight performance. Meanwhile, traffic pattern flight encompasses all flight processes, from take-off to landing, and flight training for pilot cadets primarily focuses on traffic pattern flight training [31]. Therefore, the detailed contributions and novelty of this paper are as follows: (1) Based on the characteristics of flight tasks in the traffic pattern and the flight training manual, a flight performance evaluation index system that can accurately assess the flight performance of cadets in traffic pattern flight is derived. (2) By determining the weight relations of the various indicators through the G1 method, an evaluation model of flight performance for pilot cadets using the fuzzy comprehensive evaluation method is established. (3) The flight data of pilot cadets are obtained through experiments. The evaluation results of the flight performance of pilot cadets are calculated by inputting the data into the evaluation model. The results show that the evaluation index system and the method proposed in this paper are suitable for a flight performance evaluation of pilot cadets. Flight Performance Evaluation Index System for Pilot Cadets The traffic pattern flight is a major course for pilot cadets in their flight training. They can acquire flight performance skills such as take-off, climb, turn, cruise, descent, and landing from the traffic pattern flight training. Standard thresholds and scoring rules for the monitoring indicators are set in advance to evaluate flight performance. Real-time monitoring of flight data and overshoot situations is performed during traffic pattern flight training for pilot cadets.
Flight Process of Traffic Pattern The traffic pattern flight is a flight subject that contains a complete flight process. The flight altitude of this subject generally does not exceed 1500 feet. For example, an airport with a runway magnetic heading of 359° is used for traffic pattern flight training. In this flight training, the maximum flight altitude is 1100 ft, and the pilot cadets are required to complete 11 flight phase tasks in sequence, including take-off, upwind flight, crosswind turn, crosswind flight, downwind turn, downwind flight, base turn, base flight, final turn, final flight and landing. Figure 1 shows the traffic pattern training process. There are a number of tasks to be completed during the traffic pattern flight. In the take-off phase, the pilot cadet's control of the aircraft's status is examined. The pilot cadet needs to control the aircraft to accelerate along the centerline of the runway and to pull the nose up when reaching the rotation speed. During the various lateral flights, the pilot cadet needs to manage the aircraft's track and energy, control the aircraft at the predetermined altitude and heading, and prevent the aircraft's speed from exceeding the threshold. At each turning point during the flight, the pilot cadet needs to maneuver the aircraft to form a proper turning attitude at the appropriate position, monitor the bank angle so that it does not exceed the limit, and promptly correct it before the turn is completed. The approach and landing phase is the most challenging phase for the pilot cadet. At the end of the final turn, the pilot cadet should position the aircraft on the extended centerline of the runway and parallel to the runway heading. During the approach, the pilot cadet controls the energy, heading, and altitude of the aircraft based on the position of the glide slope and reference points. Before touchdown, the pilot cadet needs to ensure they will land within the touchdown area on the runway. The pilot cadet then controls the aircraft to slow down and taxi along the centerline of the runway. Flight Performance Evaluation Index System The selection of evaluation indicators is a crucial step in the evaluation of flight performance, as it directly affects the evaluation results of pilot performance. In traffic pattern flight, the pilot cadet follows the training manual for flight operations, and each flight phase has corresponding operational requirements. The pilot cadet's performance can be intuitively reflected by the relevant flight parameters [32]. The selected indicators should reflect the accuracy of the pilot cadet's control of the aircraft during the traffic pattern flight and achieve comprehensive monitoring. By considering the importance of various flight parameters and their practical value in flight performance evaluations, and based on the characteristics of flight tasks in the various phases and the training manual, the evaluation indicators were selected. According to the principles of systematicity, scientific quality, comparability, and practicability [33], the pilot cadet's flight performance can be reflected by the control of the aircraft's energy, flight attitude, and flight trajectory during the different flight phases of the traffic pattern flight. Therefore, based on the pilot cadet's control over the aircraft's energy, flight attitude, and flight trajectory, a preliminary selection of evaluation indicators for the pilot cadet's flight performance was made.
In terms of flight energy control, the pilot cadet's assessment is based on parameters such as altitude, speed, and load factor. In terms of flight attitude control, the pilot cadet's assessment is based on parameters such as pitch angle and bank angle. For flight trajectory control, the pilot cadet's assessment is mainly based on trajectory parameters. Thus, for the flight energy control of the pilot cadet, nine evaluation indicators were selected, including altitude deviation in crosswind turn, altitude deviation in downwind, altitude deviation at runway threshold, rotation speed, maximum climb rate in upwind, maximum descent rate in approach, touchdown rate, touchdown overload, and maximum flight overload. For the flight attitude control of the pilot, six evaluation indicators were selected, including maximum pitch angle in upwind, maximum pitch angle in final, maximum bank in crosswind turn, maximum bank in downwind turn, maximum bank in turning base, and maximum bank in turning final. For the flight trajectory control of the pilot, six evaluation indicators were selected, including track deviation in upwind, track deviation in crosswind, track deviation in downwind, track deviation in approach, touchdown deviation, and taxiing deviation. Based on the characteristics of flight tasks in the different phases and the flight training manual, this study provisionally identified 21 evaluation indicators for evaluating the flight performance of pilot cadets. To make the indicators more rational and scientific, the Delphi method was used to interview 168 flight school instructors, airline captains, instructors, experts and academics in the field of pilot training. After two rounds of expert interviews, 90.5% of the initially selected indicators were established as the evaluation criteria for the traffic pattern flight performance of the pilot cadet. The statistics of the experts, including numbers and percentages for the interviews, are shown in Table 1. The statistics for the Delphi questionnaires are presented in Table 2. Through the collaborative analysis of expert opinions, two indicators of flight attitude control, maximum pitch angle in upwind and maximum pitch angle in final, were found to be redundant with two indicators of flight energy control, maximum climb rate in upwind and maximum descent rate in approach. The remaining 19 indicators had low redundancy and could effectively support the evaluation of flight performance for the pilot cadet's traffic pattern flight throughout the entire process. Based on the expert interviews, the two flight attitude control indicators, maximum pitch angle in upwind and maximum pitch angle in final, were removed. The evaluation indicators for the flight performance of pilot cadets were then optimized. As a result, a total of 19 evaluation indicators in three dimensions were finalized. The flight performance evaluation index system for pilot cadets is shown in Figure 2. Method The evaluation of pilot cadets' flight performance is a complex system, and the selection of appropriate evaluation methods and the establishment of a proper evaluation model will lead to a more accurate assessment of pilot cadets' traffic pattern flight performance. In this study, the G1 method and the fuzzy comprehensive evaluation method are used to establish an evaluation model of pilot cadets' traffic pattern flight performance and to evaluate their flight performance.
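As a compact way of seeing the three-dimensional structure of the finalized index system, the sketch below encodes the 19 indicators as a nested Python dictionary. The grouping follows the text above; the weights are left as placeholders to be filled with the G1 results reported later in Table 6.

```python
# The 19-indicator evaluation index system from the text, grouped by the three
# main criteria. Weights are placeholders (None) to be filled from Table 6;
# only the structure itself is taken from the paper.
flight_performance_index_system = {
    "flight energy control": [
        "altitude deviation in crosswind turn", "altitude deviation in downwind",
        "altitude deviation at runway threshold", "rotation speed",
        "maximum climb rate in upwind", "maximum descent rate in approach",
        "touchdown rate", "touchdown overload", "maximum flight overload",
    ],
    "flight attitude control": [
        "maximum bank in crosswind turn", "maximum bank in downwind turn",
        "maximum bank in turning base", "maximum bank in turning final",
    ],
    "flight trajectory control": [
        "track deviation in upwind", "track deviation in crosswind",
        "track deviation in downwind", "track deviation in approach",
        "touchdown deviation", "taxiing deviation",
    ],
}

weights = {name: None for group in flight_performance_index_system.values() for name in group}
assert len(weights) == 19   # 9 + 4 + 6 indicators after the Delphi screening
```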
Subjective Weight Calculation by G1 Method To divide a complex problem into a hierarchical structure model with interrelation and subordination, the complex problem is decomposed into several indicators, level by level. The weights can be obtained by determining the importance ranking and the degree of importance of each indicator in the hierarchical structure model. In this paper, W_i is used to represent the weight of the main criteria indicators' level and w_i is used to represent the weight of the sub-criteria indicators' level. The implementation of the G1 method was as follows: (1) Determine the order relationship of the indicators. For a set of flight performance evaluation indicators {u_1, u_2, ..., u_n}, experts ranked the importance of these evaluation indicators from high to low based on their personal understanding and experience. (2) Assign importance values to adjacent indicators. For two adjacent indicators u_(k-1) and u_k in the ranking, the relative importance between them is represented by the ratio r_k = w_(k-1)/w_k, where w_(k-1) and w_k are the weights of the (k-1)-th and the k-th indicators; the assigned values of r_k are shown in Table 3. (3) Calculate the weights. The weight of the least important indicator u_n is obtained as w_n = [1 + Σ_{k=2}^{n} (r_k · r_(k+1) · ... · r_n)]^(−1) (Equation (2)), and the weights of the remaining indicators can then be calculated step by step using Equation (3): w_(k-1) = r_k · w_k, for k = n, n−1, ..., 2. (4) An expert group determines the weight vector. If there are h experts involved in the process of determining the indicator weights, different experts may provide different order relationships and weight coefficients for the indicators. To solve this problem, Equation (4) is used to calculate the total weight coefficient w_i* for all experts regarding each indicator: w_i* = (1/h) Σ_{q=1}^{h} w_i^(q), where w_i^(q) represents the weight coefficient given by expert q to the evaluation indicator u_i. Grading Standard The grading standard of the evaluation indicators is the basis of the evaluation system for the flight performance of flight cadets. We divided the grading standard into four levels: excellent, good, medium, and bad. Therefore, the comment set V = {Excellent, Good, Medium, Bad} was established for the fuzzy comprehensive evaluation. By consulting the flight training manual for the maximum threshold requirements of flight control and flight parameters in the traffic pattern flight, the threshold relationship between the flight performance level of the pilot cadet and the corresponding evaluation indicators is derived. Table 4 shows the relationship between the grading standards of the 19 evaluation indicators and the flight performance levels. Each grading standard corresponds to a deviation value, which serves as the basis for the membership function. The fuzzy membership degrees of each factor are determined using membership functions. Figure 3 shows a graphical representation of the triangular fuzzy membership function (Figure 3a) and the trapezoidal fuzzy membership function (Figure 3b). As shown in Table 4, the deviation value thresholds corresponding to the different levels of the 19 criteria can be categorized into two scenarios. In the first scenario, the thresholds have a center-symmetric distribution around zero. In the second scenario, the thresholds are spread over a range in both directions. In this context, if the deviation value of the flight data falls to the left of the threshold of the Excellent level or to the right of the threshold of the Bad level, it is considered Excellent or Bad, respectively, with a membership degree of 1. If the deviation value of the flight data falls between the thresholds of two levels, this study postulates that the value has partial membership in the level to the left and simultaneously in the level to the right. Furthermore, when the deviation value is exactly at the threshold of a particular level, this research assumes that the value has the highest degree of membership in that particular level, with a membership degree of 1. Therefore, trapezoidal membership functions are used to deal with the boundary thresholds and triangular membership functions for the intermediate levels.
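As an illustration of the grading just described, the sketch below implements generic triangular and trapezoidal membership functions and combines them into a hybrid grader for one indicator. The four example thresholds are hypothetical stand-ins for a row of Table 4, not values taken from the paper.

```python
# Hybrid membership grading for one indicator: trapezoidal functions at the
# boundary levels, triangular functions for the intermediate levels.
# The thresholds below are hypothetical placeholders for a Table 4 row.
import numpy as np

def triangular(x, a, b, c):
    """Membership 1 at b, falling linearly to 0 at a and c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def trapezoid_left(x, b, c):
    """Boundary level on the best side: 1 for x <= b, linear to 0 at c."""
    if x <= b:
        return 1.0
    return max(0.0, (c - x) / (c - b))

def trapezoid_right(x, a, b):
    """Boundary level on the worst side: 0 below a, linear up to 1 at b, then 1."""
    if x >= b:
        return 1.0
    return max(0.0, (x - a) / (b - a))

thresholds = {"Excellent": 1.0, "Good": 2.0, "Medium": 3.0, "Bad": 4.0}

def memberships(deviation):
    t = thresholds
    return np.array([
        trapezoid_left(deviation, t["Excellent"], t["Good"]),            # Excellent
        triangular(deviation, t["Excellent"], t["Good"], t["Medium"]),   # Good
        triangular(deviation, t["Good"], t["Medium"], t["Bad"]),         # Medium
        trapezoid_right(deviation, t["Medium"], t["Bad"]),               # Bad
    ])

print(memberships(1.4))  # partial membership in Excellent and Good, zero elsewhere
```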
By using a combination of the triangular fuzzy membership function and the trapezoidal fuzzy membership function, a hybrid membership function is established to evaluate the flight performance of pilot cadets in the traffic pattern flight. The hybrid membership function is shown in Equation (5), and its graphical representation is shown in Figure 4. Evaluation Matrix Based on the experimental data that were obtained and the hybrid membership functions, the main criteria element evaluation matrix of each pilot cadet is calculated. For example, R_1 = [r_ij], i = 1, 2, ..., 9; j = 1, 2, ..., 4, where R_1 denotes the evaluation matrix of the main criteria element U_1. Each r_ij in R_1 is calculated based on the deviation data obtained from the experiment and the membership function; i refers to the index of the indicators of U_1 and j refers to the index of the grading standards in the comment set V. Fuzzy Comprehensive Evaluation The fuzzy comprehensive evaluation result B_i of a main criteria element is calculated by Equation (7): B_i = W_i ∘ R_i = [b_1, b_2, b_3, b_4], where the symbol ∘ is the fuzzy synthesis operator and b_j represents the membership degree corresponding to each grading level. By applying the maximum membership rule to B_i, the final fuzzy comprehensive evaluation result can be obtained. In addition, to better reflect the impact of the various indicators on the evaluation of the pilot cadet's traffic pattern flight performance, the weighted average fuzzy synthesis operator M(·,+) was chosen in this paper, for which b_j = Σ_i w_i · r_ij (Equation (8)). Equation (8) can comprehensively consider the evaluation indicators and the comprehensive evaluation results, which makes the fuzzy comprehensive evaluation results more practical. After obtaining the evaluation results B_i of the main criteria elements, these results need to be normalized if the membership degrees do not sum to one in the calculation result. A new fuzzy comprehensive evaluation matrix R is formed using the normalized results. Then, the weighted average fuzzy synthesis operator is used to recalculate and obtain the comprehensive evaluation result B of the pilot cadet's flight performance in the traffic pattern flight. Results and Discussions The pilot cadets evaluated were selected from a flight school in China that trains approximately 320 pilot cadets per year. In terms of training equipment, the school is equipped with a professional flight simulator training room that can accommodate up to 90 pilot cadets for simulator training. At present, the evaluation of the cadets' traffic pattern flight performance is mainly based on subjective scoring, which has low accuracy and is highly influenced by the evaluators' subjectivity. In order to obtain more accurate evaluation results, this study attempts to utilize the flight data collected during the cadets' training and apply a fuzzy comprehensive evaluation method to objectively evaluate the cadets' traffic pattern flight performance. Experimental Design and Data Pre-Processing In order to obtain flight data and expert judgements on the training performance of the pilot cadets, which were used to validate the flight performance evaluation model, a flight experiment was designed specifically for the pilot cadets in the traffic pattern flight. The specific experimental procedure is shown in Figure 5. A total of 30 male student pilots, aged between 18 and 20 years old, healthy and well-rested prior to the test, and three flight instructors with more than 5 years' experience and valid flight certificates participated in the experiment.
The flight experiment used the Cessna 172R flight simulator. During the experiment, control deviation data were recorded as the pilot cadets operated the flight simulator. After the experiment, the three instructors evaluated the flight performance of the cadets based on the maximum deviation values recorded during the flight experiments, using the evaluation threshold table for the flight evaluation indicators. The expert evaluation results were then obtained by calculating the average scores given by the three instructors. The maximum deviation values obtained for each indicator and the expert evaluation results were then verified, and the complete and accurate data were stored. Finally, the dataset of the maximum deviation values and the expert evaluation results for the 30 pilot cadets across the 19 evaluation indicators was obtained. The statistical analysis of the maximum deviation values for the 30 cadets across the 19 evaluation indicators is shown in Table 5. Calculation of Indicator Weights The weights of the 19 evaluation indicators were calculated using the G1 method. A total of 168 experts were invited to rate the importance of the indicators, and 131 survey responses were collected. Based on the collected survey results, the weights of each indicator were calculated and are presented in Table 6. Flight Performance Evaluation Based on the calculated weights, the evaluation results at the main criteria element level of the pilot cadets' traffic pattern flight performance are computed using the fuzzy evaluation model. Taking pilot cadet 1 as an example, a comprehensive evaluation of this pilot cadet's traffic pattern flight performance is conducted using the fuzzy comprehensive evaluation method. The maximum deviations of the first pilot cadet in the 19 evaluation indicators are shown in Table 7. According to Equation (7), the evaluation results B_i of pilot cadet 1 are calculated from the indicator weights and the membership matrices. Following the principle of maximum membership degree, the evaluation results of pilot cadet 1 at the level of the main criteria elements are shown in Table 8. By synthesizing the comprehensive evaluation matrix R with the weight matrix using the weighted average fuzzy synthesis operator, the comprehensive evaluation result of pilot cadet 1 is obtained. Based on the maximum membership degree in the calculated result, the flight performance evaluation of pilot cadet 1 is excellent. Then, by repeating the above fuzzy comprehensive evaluation steps, the traffic pattern flight performance evaluation results for all 30 pilot cadets can be calculated. The evaluation results of the flight performance for the 30 pilot cadets are shown in Table 9.
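To make the two-level synthesis concrete, the sketch below runs the weighted-average operator on hypothetical numbers. The membership rows and weights are illustrative placeholders rather than the values in Tables 6-8, but the arithmetic (B_i = W_i · R_i, normalization, second-level synthesis, maximum-membership rule) follows the procedure described above.

```python
# Two-level fuzzy comprehensive evaluation with the weighted-average operator
# M(.,+): b_j = sum_i w_i * r_ij. All numbers are hypothetical placeholders.
import numpy as np

grades = ["Excellent", "Good", "Medium", "Bad"]

def synthesize(weights, membership_matrix):
    """Weighted-average fuzzy synthesis: B = W . R, then normalize to sum to 1."""
    b = np.asarray(weights) @ np.asarray(membership_matrix)
    return b / b.sum()

# Sub-criteria level: one membership matrix per main criteria element
# (rows = indicators, columns = grades), with hypothetical indicator weights.
B_energy = synthesize([0.4, 0.35, 0.25],
                      [[0.7, 0.3, 0.0, 0.0],
                       [0.5, 0.5, 0.0, 0.0],
                       [0.2, 0.6, 0.2, 0.0]])
B_attitude = synthesize([0.6, 0.4],
                        [[0.4, 0.6, 0.0, 0.0],
                         [0.3, 0.5, 0.2, 0.0]])
B_trajectory = synthesize([0.5, 0.5],
                          [[0.8, 0.2, 0.0, 0.0],
                           [0.6, 0.3, 0.1, 0.0]])

# Main criteria level: stack the normalized B_i rows into matrix R and
# synthesize again with hypothetical main-criteria weights W.
R = np.vstack([B_energy, B_attitude, B_trajectory])
B = synthesize([0.45, 0.30, 0.25], R)

print(dict(zip(grades, B.round(3))))
print("overall grade:", grades[int(np.argmax(B))])   # maximum membership rule
```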
Results Analysis The evaluation results show that, among the 30 pilot cadets that were evaluated, 14 cadets' evaluation results are excellent, 2 are good, 9 are medium, and 5 are poor. By examining the flight theory learning and simulated flight training performance of the selected pilot cadets during their daily schooling, it is clear that the evaluation results have a strong correlation with their performance. The cadets who are rated as excellent have better theoretical learning performance and perform better in difficult simulated flight training tests, while the cadets with poor evaluation results have weak learning performance and poor training performance. Therefore, it can be verified that the evaluation model of pilot cadets' traffic pattern flight performance established by the fuzzy comprehensive evaluation method in this paper can evaluate pilot cadets' handling performance in traffic pattern flights. To verify the reliability of the model evaluation results and their consistency with the expert evaluation results, the model evaluation results were compared with the expert evaluation results. Figure 6 shows the evaluation results by the experts and by the method proposed in this paper. By comparing the model evaluation results with the expert evaluation results, it can be concluded that the model evaluation results obtained in this paper are basically consistent with the expert evaluation results, which proves the accuracy of the modelling method for evaluating the flight performance of pilot cadets in traffic pattern flight. It also confirms the good practical application effect of the fuzzy evaluation method used in this paper. Through the evaluation of pilot cadets' flight performance, flight instructors and flight schools can clearly identify the differences in aircraft energy control, flight attitude control, and flight trajectory control among different pilot cadets, as well as the overall level of pilot cadets' flight performance within the group. This provides an important theoretical basis for instructors to develop personalized training plans in a timely and accurate manner.
Conclusions In this paper, an evaluation model of pilot cadets' flight performance in traffic pattern flight was established using the Delphi method, the G1 method, and the fuzzy comprehensive evaluation method. For the evaluation of pilot cadets' flight performance, the traffic pattern flight was selected as the flight subject. Combined with the flight training manual and expert interviews with flight instructors, a total of 19 flight parameters were selected as evaluation indicators, forming a flight performance evaluation index system for pilot cadets. The weights of the evaluation indicators determined by the G1 method incorporate the judgments of numerous experts on the degree of impact of the indicators. The evaluation model established by the fuzzy comprehensive evaluation method avoids errors in the evaluation results caused by the subjective factors of the instructors. Then, the flight data of pilot cadets were obtained through simulated flight training experiments. By inputting the flight data into the evaluation model, the flight performance evaluation results were calculated for each pilot cadet. The results show that the evaluation index system established in this paper accurately reflects the flight performance of the pilot cadets. The evaluation model based on the G1 method and the fuzzy comprehensive evaluation method established in this paper produces evaluation results consistent with the expert results. Therefore, the evaluation model has good rationality, accuracy, and applicability in evaluating the performance of pilot cadets. The contribution of this study is to propose an evaluation model that assesses the flight performance of pilot cadets during their traffic pattern flight training. By using this evaluation model to assess their current level of flight proficiency, it could help them to identify weaknesses in their flight operations and provide scientific support for the development of personalized training plans. In addition, the evaluation of pilot cadets' flight performance will also help to promote safer and more sustainable civil aviation operations in the future.
Figure 2. Flight performance evaluation index system of the pilot cadet.
Figure 3. (a) Triangular fuzzy membership function; (b) Trapezoidal fuzzy membership function.
Figure 5. The experimental process of the traffic pattern flight.
Figure 6. Comparison between the expert evaluation results and the fuzzy comprehensive evaluation method results. Table 1. The statistics of the experts. Table 2. The statistics for the Delphi questionnaires. Table 3. Relative importance assignment (r_k) for adjacent indicators. Table 4. Flight performance evaluation threshold standards. Table 5. The statistical analysis of the maximum deviation values for the 30 cadets across the 19 evaluation indicators. Table 6. The weights of each indicator. Table 7. The maximum deviations of pilot cadet 1 in the 19 evaluation indicators. Table 8. The evaluation results of pilot cadet 1 at the main criteria elements' level. Table 9. The evaluation results of flight performance for the 30 pilot cadets.
Oxidative Stress Induced Inflammation Initiates Functional Decline of Tear Production Oxidative damage and inflammation are proposed to be involved in an age-related functional decline of exocrine glands. However, the molecular mechanism of how oxidative stress affects the secretory function of exocrine glands is unclear. We developed a novel mev-1 conditional transgenic mouse model (Tet-mev-1) using a modified tetracycline system (Tet-On/Off system). This mouse model demonstrated decreased tear production with morphological changes including leukocytic infiltration and fibrosis. We found that the mev-1 gene encodes Cyt-1, which is the cytochrome b560 large subunit of succinate-ubiquinone oxidoreductase in complex II of mitochondria (homologous to the succinate dehydrogenase C subunit (SDHC) in humans). The mev-1 gene induced excessive oxidative stress associated with ocular surface epithelial damage and a decrease in protein and aqueous secretory function. This new model provides evidence that mitochondrial oxidative damage in the lacrimal gland induces lacrimal dysfunction resulting in dry eye disease. Tear volume in Tet-mev-1 mice was lower than in wild type mice, and histopathological analyses showed the hallmarks of lacrimal gland inflammation, with intense mononuclear leukocytic infiltration and fibrosis in the lacrimal gland of Tet-mev-1 mice. These findings strongly suggest that oxidative stress can be a causative factor for the development of dry eye disease. Introduction Dry eye disease is a deficiency of tear stability, mainly induced by low tear production and a functional decline of the lacrimal gland caused by age-related chronic inflammation [1][2][3]. Such age-related chronic inflammation is consistent with the reported prevalence of dry eye disease [4][5][6][7][8]. However, the molecular mechanism of age-related lacrimal gland inflammation is unclear. The main cause of chronic inflammation is postulated to involve oxidative stress, and the main endogenous source of oxidative stress is the electron transport chain in mitochondria [9]. The mev-1 mutant of the nematode Caenorhabditis elegans has a genetic dysfunction in complex II of the mitochondrial electron transport chain [10] and overproduces superoxide anion (O2−) from the mitochondria [11]. The lifespan of this mev-1 mutant decreases dramatically as oxygen concentrations are increased from 1 to 60% [12]. In addition, a mev-1-like dominant negative SdhC (SdhC171E) increases oxidative stress and reduces the lifespan in Drosophila [13]. To determine whether mouse lacrimal gland functional decline is related to oxidative-stress-induced inflammation, a mev-1 conditional transgenic mouse (Tet-mev-1) was established with a modified tetracycline system (Tet-On/Off system) [14], which equilibrates transgene expression to endogenous levels [15]. Excessive oxidative stress induces mitochondrial respiratory chain dysfunction and results in excessive apoptosis, leading to low birth weight and growth retardation in Tet-mev-1 mice [14]. Using this mouse model, we found that the lacrimal gland of Tet-mev-1 mice produced more O2− and oxidized protein than the lacrimal gland of wild type mice. This new model provides evidence that mitochondrial oxidative damage in the lacrimal gland induces lacrimal dysfunction resulting in dry eye disease.
Animals and Materials C57BL/6J and Tet-mev-1 mice were bred and maintained under specific pathogen-free (SPF) conditions in the Center of Genetic Engineering for Human Disease (CGHED) (Tokai University School of Medicine, Kanagawa, Japan). Doxycycline was administered in the drinking water (dose: 2 mg/ml). All mice used in the analyses were 3 month old males. Histopathology Under the operating microscope, the lacrimal gland and submandibular salivary gland were surgically excised after death. A portion of each dissected specimen was immediately embedded in optimal cutting temperature (OCT) compound (Tissue-Tek; Miles Inc., Elkhart, IN, USA) and snap frozen in pre-cooled isopentane at −80 °C. The remainder of the tissues was analyzed after being fixed in 4% paraformaldehyde or 10% neutral buffered formalin and embedded in paraffin wax. HE staining and Azan staining. Five micrometer-thick paraffin embedded sections fixed in 4% paraformaldehyde were cut and stained with HE. Additionally, 5 µm-thick paraffin embedded sections fixed in 10% neutral buffered formalin underwent Azan staining to evaluate the severity of fibrosis in the lacrimal gland. Immunohistochemical analysis of DNA damage due to oxidative stress (8-OHdG). The 5 µm-thick paraffin embedded sections fixed in 4% paraformaldehyde were cut and stained with a mouse anti-8-OHdG monoclonal antibody (Japan Institute for the Control of Aging [JaICA], Shizuoka, Japan) to analyze DNA damage due to oxidative stress [16,17]. After removal of paraffin, the sections were placed in 10 mM citrate buffer solution and autoclaved at 121 °C for 10 min. After blocking with 10% normal goat serum (Vector Laboratories, Burlingame, CA), sections were first blocked with an Avidin/Biotin blocking reagent (Vector Labs) and then with a mouse-on-mouse blocking reagent (M.O.M.™). Blocking with the anti-mouse IgG blocking reagent (Vector Laboratories) was completed overnight at 4 °C. Sections were exposed to diluted mouse anti-8-OHdG monoclonal antibody (1:10). Antibody binding was detected with a horse anti-mouse IgG ABC kit (Vector Laboratories) according to the manufacturer's protocol. The bound antibodies were visualized by the addition of diaminobenzidine tetrahydrochloride. Analysis of the mononuclear cell fraction using histochemical staining (CD4, CD8, CD19 and F4/80). Immunohistochemical analysis was performed according to a standard protocol with a panel of mouse monoclonal antibodies specific for CD4, CD8, CD19 and F4/80 (eBioscience, San Jose, CA) [18,19]. Briefly, 8 µm-thick frozen sections were air dried, fixed in acetone for 20 min at room temperature, and rehydrated in phosphate-buffered saline (PBS). Nonspecific binding was inhibited by incubating the specimens with 5% goat serum in PBS for 30 min at room temperature. The sections were incubated with the optimally diluted primary antibody at room temperature for 2 h, followed by incubation with a peroxidase-conjugated rabbit anti-mouse IgG antibody (Histofine® Simple Stain Rat MAX PO (M)) (Nichirei Biosciences Inc, Tokyo, Japan) for 45 min. The bound antibodies were visualized by the addition of diaminobenzidine tetrahydrochloride. All steps were followed by three washes with PBS. Nuclei were counterstained with hematoxylin for 1 min [20]. Quantitative real-time RT-PCR RNA extraction. An acid guanidinium-phenol-chloroform method was used to isolate RNA from tissues and cultured cells. The following protocol describes the isolation of RNA from mouse lacrimal gland tissue.
Immediately after removal from the animal, the tissue was minced on ice and homogenized (at room temperature) with 0.85 ml of 4 M guanidinium thiocyanate (GTC) in a glass-Teflon homogenizer and subsequently transferred to a 15 ml polypropylene tube with 2 ml of 4 M GTC, 0.15 ml of 10% sarcosyl and 0.72 ml of 2-mercaptoethanol. A total of 0.3 ml of 2 M sodium acetate, pH 4, 3 ml of phenol (water saturated), and 0.6 ml of a chloroform-isoamyl alcohol mixture (24:1) were sequentially added to the homogenate, with thorough mixing by inversion after the addition of each reagent. The final suspension was shaken vigorously for 10 s and cooled on ice for 15 min. Samples were centrifuged at 7000 rpm for 20 min at 4 °C. After centrifugation, RNA was present in the aqueous phase, whereas DNA and proteins were present in the interphase and phenol phase. The aqueous phase was transferred to a fresh tube, mixed with 3 ml of isopropanol, and then placed at −20 °C for at least 2 h to precipitate the RNA. Centrifugation at 7000 rpm for 20 min at 4 °C was again performed and the resulting RNA pellet was washed in 3 ml of 70% ethanol and centrifuged at 7000 rpm for 20 min at 4 °C. After centrifugation, the RNA pellet was air-dried (1 h) at room temperature. After drying, 88 µl of 0.1% diethyl pyrocarbonate (DEPC) in distilled water was added to the pellet. The solution was transferred to a 2 ml Eppendorf tube with 2 µl DNase (20 U), 10 µl DNase buffer and 0.5 µl RNase inhibitor (Pharmacia) and was incubated for 30 min at 37 °C. After cooling on ice, the solution was added to 400 µl of a chloroform-phenol mixture (1:1) and 300 µl of 0.1% DEPC in distilled water. After 20 min on ice, the solution was centrifuged at 12000 rpm for 20 min at 4 °C. The aqueous phase was transferred to a fresh tube with 35 µl of 3 M sodium acetate and 1 ml of 100% ethanol. After mixing, this solution was placed at −20 °C for 30 min and centrifuged at 12000 rpm for 20 min at 4 °C. The sediment was washed with 400 µl of 70% ethanol and centrifuged at 12000 rpm for 5 min at 4 °C. The sediment was air-dried for 1 h at room temperature and 100 µl of 0.1% DEPC in distilled water was added. Isolation of mitochondria Mitochondria were isolated from mouse lacrimal glands using a standard procedure involving differential centrifugation [21,22]. After washing with ice-cold PBS, the lacrimal glands were minced in a volume of isolation buffer (210 mM mannitol, 70 mM sucrose, 0.1 mM EDTA, and 5 mM Tris-HCl, pH 7.4). The minced lacrimal glands were homogenized in isolation buffer at 800 rpm with 30 strokes using a Teflon homogenizer. The homogenate was centrifuged at 2000 rpm for 10 min at 4 °C. The supernatant was transferred to a fresh tube and centrifuged at 14000 rpm for 10 min at 4 °C. The mitochondria-containing pellet was suspended in TE buffer (50 mM Tris-HCl pH 7.4 and 0.1 mM EDTA). Measurement of activity of complexes I and II of the electron transport chain The activity of NADH-coenzyme Q oxidoreductase (complex I) and succinate-coenzyme Q oxidoreductase (complex II) in mitochondria was measured as previously described [22,23]. Tissues were homogenized in isolation buffer (10 mM HEPES, pH 7.4, 0.15 M NaCl). The resulting homogenate was centrifuged at 250× g for 10 min to remove debris. The supernatant was further centrifuged at 31,000× g for 20 min. The pellet was suspended in isolation buffer.
Complex I activity was assayed by measuring NADH-sensitive NADH-cytochrome c reductase activity at 37 °C in 200 µl of 0.1 M Tris-SO4 buffer at pH 7.4, containing 0.32 mg cytochrome c and 1 mM sodium cyanate. Complex II activity was assayed by measuring malonate-sensitive succinate-cytochrome c reductase activity. The reference cuvette contained 20 µl of 20% sodium malonate solution. Measurement of carbonylated protein Carbonylated protein, as an indicator of oxidized protein, was detected by an enzyme-linked immunosorbent assay (ELISA) [25]. Isolated mitochondrial proteins from the lacrimal gland were treated with 10 mM DNPH. A total of 250 ng of mitochondrial protein in 50 mM NaHCO3 was coated on an enhanced protein-binding ELISA plate (Caster) by incubating at 4 °C for 8 h. Nonspecific binding to the plate was minimized by blocking the wells with 100 µl blocking buffer (3% BSA and 0.1% NaN3 in PBS) at 37 °C for 1 h. After the supernatant was removed, 100 µl of anti-DNP antibody diluted with buffer G (0.1% BSA, 0.1% gelatin, 0.1% NaN3 and 1 mM MgCl2 in PBS) was added to each well and incubated at 37 °C for 1 h. After the supernatant was removed, the plate was washed four times with PBS and 100 µl of horseradish peroxidase-conjugated secondary antibody diluted with 0.05% Tween 20 in PBS was added, followed by incubation at 37 °C for 1 h. The plate was washed four times to remove the unbound secondary antibody. After 100 µl of ELISA coloring solution (0.0156 M C6H8O7, 0.1 M Na2HPO4·12H2O, 0.4 mg/ml o-phenylenediamine dihydrochloride and 0.2 µl/ml 30% H2O2) was added to each well, the reaction was terminated by the addition of 100 µl of 1 M H2SO4. The absorbance was measured using a computer-controlled spectrophotometric plate reader (Spectra Max 250; Molecular Devices) at a wavelength of 492 nm. Corneal fluorescein staining Corneal fluorescein staining was performed as described by Rashid et al. [26]. Sodium fluorescein (1%) was applied to the cornea of mice. Three minutes later, eyes were flushed with PBS to remove excess fluorescein, and corneal staining was evaluated with a hand slit lamp (Kowa, Tokyo, Japan) using cobalt blue light. Punctate staining was recorded using a standardized grading system of 0 to 3 for each of the three areas of the cornea [27][28][29]. Aqueous tear measurement Over 3 min, tears (0.5 µl) from each mouse were collected in a microcapillary tube. Tear volume was measured using the capillary length (mm). Tear volume was normalized against the body weight of each mouse and the experiments were performed three times to validate the tear measurement. Results Histopathology of the lacrimal glands revealed no inflammation in Tet-mev-1 mice without Dox (Tet-mev-1/Dox(−)) or in wild type mice (C57BL/6J) with Dox (WT/Dox(+)) or without Dox (WT/Dox(−)) at 3 months old. Tet-mev-1/Dox(+) mice typically had multifocal inflammation and fibrosis around acinar cells in the lacrimal gland (Fig. 1a, b). However, histopathology of the salivary glands showed no inflammation in any of the mice (Fig. 1c). Moreover, although the superoxide anion was overproduced in the whole body of Tet-mev-1/Dox(+) mice, the other main internal organs examined (i.e., liver, heart, kidney, lung and brain) did not show an inflammatory response (data not shown). To clarify the inflammatory status, we investigated immunostaining for cell surface antigens (CD4, CD8, CD19, and F4/80).
Various immunocytes, including cytotoxic T cells, helper T cells, activated B cells, and pan-macrophages, had infiltrated the inflammatory focus (Fig. 1d). This inflammation was not observed in WT/Dox(+) mice, which suggested that doxycycline administration did not cause inflammation in the lacrimal gland. In addition, quantitative real-time RT-PCR analysis of cytokines in the lacrimal gland showed an increase in inflammatory cytokines including TNF-α, IL-6 and IFN-γ, which may be related to the inflammatory reaction in the lacrimal gland of Tet-mev-1/Dox(+) mice. Expression of the anti-inflammatory cytokine IL-10 was also increased (Fig. 1e, f). Tet-mev-1 mice carry the SDHC V69E mutation, which is located within the functional ubiquinone (CoQ)-binding region of complex II [15,30,31]. Tet-mev-1 mice are conditional transgenic mice and were designed to have decreased affinity of CoQ for complex II in mitochondria, which would induce electron leakage and lead to an increase in production of superoxide anion from complex II in the presence of doxycycline. The activity of complexes I and II in mitochondria of the lacrimal gland was compared between WT/Dox(+) and Tet-mev-1/Dox(+) mice. In the mitochondria of the Tet-mev-1 mouse, only the activity of complex II was decreased, and thus reactive oxygen species (ROS) were overproduced from complex II in the presence of doxycycline. According to the intended design of the model, complex I activity of the lacrimal gland was not significantly different between WT/Dox(+) and Tet-mev-1/Dox(+) mice, whereas complex II activity in Tet-mev-1/Dox(+) mice was significantly lower than in WT/Dox(+) mice (p = 0.008, Fig. 2a). Complex II-driven superoxide anion (O2−) production in the lacrimal gland was significantly increased in Tet-mev-1/Dox(+) mice compared with the other types of mice (p = 0.014, Fig. 2b). We then measured carbonylated protein as a marker of oxidized proteins, which accumulate in the mitochondrial fractions of wild type mice during aging [25]. Our results showed that carbonylated protein amounts in the lacrimal gland of wild type mice were not significantly different between Dox(+) and Dox(−) mice. Therefore, doxycycline itself did not affect the quantity of carbonylated protein. Carbonylated protein content was determined by ELISA, and the ratio between WT/Dox(+) and Tet-mev-1/Dox(+) mice was three times higher than the ratio between WT/Dox(−) and Tet-mev-1/Dox(−) mice (p<0.01, Figure 2c). The compound 8-OHdG accumulates with aging [32], and accordingly, 8-OHdG was used as a marker of oxidative damage to DNA in our study. Immunohistological labeling intensity for 8-OHdG was higher in the lacrimal gland of Tet-mev-1/Dox(+) mice than in the other types of mice (Fig. 2d). Discussion It is well known that lacrimal and salivary gland functions decline with age in humans [33,34]. We first hypothesized that both lacrimal and salivary gland functions decline in Tet-mev-1/Dox(+) mice. However, the severe inflammation and fibrosis associated with functional decline occurred in the lacrimal gland, but not in the salivary gland. We therefore hypothesized that the inherent tissue responses to oxidative stress in the lacrimal and salivary glands are different. Pharmacological cholinergic blockade (subcutaneous injection of scopolamine hydrobromide) inhibits lacrimal gland function. It also stimulates inflammatory cytokine production and lymphocytic infiltration in the lacrimal gland.
This systemic cholinergic blockade does not induce nonspecific inflammation at three sites (conjunctival goblet cells, submandibular glands and small intestine) that receive cholinergic innervation [35]. These results suggest that the lacrimal gland, in contrast with the salivary gland, is subject to inflammation by various stimuli. Figure 1. Inflammation of the lacrimal gland in Tet-mev-1 mice with Dox. A, HE staining shows that Tet-mev-1 mice with Dox (Tet-mev-1/Dox(+)) typically have multifocal inflammation; the other types of mice (Tet-mev-1/Dox(−), WT/Dox(+) and WT/Dox(−)) have no inflammation in the lacrimal gland. Scale bar, approximately 100 μm. B, Azan staining was used to evaluate the severity of fibrosis in the lacrimal gland; only Tet-mev-1/Dox(+) shows fibrosis around acinar cells. Scale bar, approximately 40 μm. C, Histopathology of the salivary glands shows no inflammation in any type of mouse. Scale bar, approximately 100 μm. D, In lacrimal glands of Tet-mev-1/Dox(+) mice, CD4+ T cells, CD8+ T cells, CD19+ cells (B cells) and F4/80+ cells (pan-macrophages) were observed. Scale bar, approximately 100 μm. E, Proinflammatory cytokines were evaluated by real-time RT-PCR (ratio to WT/Dox(−)). Proinflammatory cytokines (TNF-α, IL-6, IL-1β, and IFN-γ) were increased in Tet-mev-1/Dox(+), especially IL-6 and IFN-γ, and IL-10 was also increased. F, Raw data for the proinflammatory cytokines evaluated by real-time RT-PCR are shown. doi:10.1371/journal.pone.0045805.g001 Mitochondria generate ATP through aerobic respiration, whereby glucose, pyruvate, and NADH are oxidized, generating ROS as a byproduct. In normal circumstances, the deleterious effects caused by the highly reactive nature of ROS are balanced by the presence of antioxidants. However, high levels of ROS are observed in chronic human diseases such as neurodegeneration [36], digestive organ inflammation [37], and cancer [38]. Recent work exploring the mechanisms linking ROS and inflammation suggests that ROS derived from mitochondria (mtROS) act as signal-transducing molecules that trigger proinflammatory cytokine production [39]. Cells from patients with TNFR1-associated periodic syndrome (TRAPS) demonstrate that increased mtROS levels influence the transcription of proinflammatory cytokines such as IL-6 and TNF. TRAPS, which is caused by mutations in TNFR1, manifests as episodes of fever and severe localized inflammation. Inhibition of mtROS production inhibited MAPK activation and production of IL-6 and TNF in cells from TRAPS patients [40]. The mtROS in Tet-mev-1/Dox(+) mice may likewise directly induce increased production of TNF-α and IL-6 and continuously drive inflammation in the lacrimal gland. Protein oxidation is a biomarker of oxidative stress, and many different types of protein oxidative modification can be induced directly by ROS or indirectly by reactions with secondary by-products of oxidative stress [41]. Lacrimal gland function has been reported to decrease gradually with aging, leading to reduced tear secretion and dry eye disease in the elderly [3,7]. Aging occurs, in part, as a result of the accumulation of oxidative stress caused by ROS that are generated continuously during the course of metabolic processes. Levels of 8-OHdG, a DNA oxidative stress marker, and 4-HNE, a by-product of lipid peroxidation, are higher and tear volume is decreased in middle-aged rats.
Caloric restriction prevents the decline in lacrimal gland function and the associated morphological changes, and this protection might be associated with a reduction in oxidative stress [42]. We confirmed that 8-OHdG immunohistological labeling intensity was higher in the lacrimal gland of Tet-mev-1/Dox(+) mice than in the other types of mice, and that the ratio of carbonylated protein content in mice with Dox was three times the ratio in mice without Dox. Collectively, mtROS production may damage DNA and induce the accumulation of carbonylated protein in the lacrimal gland. These biochemical and histochemical data suggest that superoxide anion overproduced from the mitochondria contributes, directly and/or indirectly, to oxidative damage and inflammation in the lacrimal gland. It is believed that chronic inflammation of the lacrimal gland is a major contributor to insufficient tear secretion. Chronic inflammation of the lacrimal gland occurs in several pathologic conditions such as autoimmune diseases (Sjögren syndrome, sarcoidosis, and diabetes) or simply as a result of aging [43]. The relationship between inflammation of the lacrimal gland and tear secretion deficiency has been described [44,45]. IL-1β induces a severe inflammatory response in the lacrimal gland, inhibits lacrimal gland secretion and leads to subsequent dry eye disease [44]. A single injection of interleukin-1 into the lacrimal glands induces reversible inflammation and leads to destruction of lacrimal gland acinar epithelial cells, which results in decreased tear production. However, these inflammatory responses subside and lacrimal gland secretion and tear production return to normal levels [45]. For the dry eye model, we first reported the accelerated oxidation of protein, lipid, and DNA of the ocular surface in the rat swing model [46,47]. In the present work, accumulated oxidative damage caused the functional decline of the lacrimal gland and dry eye disease in Tet-mev-1/Dox(+) mice. In the lacrimal gland, age-related chronic inflammation and age-related functional alterations, including decreased acetylcholine release and protein secretion, might be related to dry eye disease [48,49]. Our study clearly demonstrated that oxidative stress from mitochondria induced dry eye disease with morphological changes in the lacrimal gland of mice. In conclusion, reducing oxidative stress might be one possible treatment for age-related/ROS-induced dry eye disease.
4,902.8
2012-10-05T00:00:00.000
[ "Biology", "Environmental Science", "Medicine" ]
The effect of loading direction and Sn alloying on the deformation modes of Zr: An in-situ neutron diffraction study Deformation modes (slip and twinning) in a strongly textured model hcp alloy system (Zr-Sn) have been investigated using in-situ neutron diffraction during deformation, along with complementary electron microscopy. Analysis of the evolution of the intergranular strains and of the intensity of specific reflections from neutron diffraction shows a differential influence of Sn on the extent of twinning, depending on the deformation direction. While Sn displayed a very noticeable influence on twin activity when samples were compressed along a direction that predominantly activates prismatic slip, this effect was not seen when samples were compressed along other directions. These experimental observations were successfully simulated using a CPFE (crystal plasticity finite element) model that incorporates a composition-sensitive CRSS (critical resolved shear stress) for slip and a composition-insensitive CRSS for the activation of twinning. The success of the CPFE model in capturing the experimental observations with respect to twin evolution suggests that twinning in Zr is chiefly governed by the initial crystallographic texture and the associated intergranular stress state generated during plastic deformation. Introduction Slip and twinning are the two principal modes of plastic deformation in metallic materials [1,2]. It is well known that slip by dislocations contributes the majority of the observed deformation. Twinning, on the other hand, results in significant changes in crystal orientation and consequent crystallographic texture evolution [3][4][5]. Thus an understanding of both of these phenomena is essential for deciphering plastic deformation and the associated microstructural and textural evolutions. Although the existing knowledge on the mechanisms of slip in cubic materials is quite extensive, the same cannot be said for hexagonal close packed (hcp) materials. This is partly due to the interplay of multiple slip systems that makes the deformation a more complicated process [6,7]. The role of alloying elements on the deformation modes in these materials further complicates the picture. In addition to complications due to multiple slip modes, hcp materials are also known for their extensive tendency for twinning on account of the paucity of easy slip systems along certain crystallographic directions [1,[8][9][10]. Twinning in these materials plays a crucial role in textural evolution and low temperature ductility. Thus, our ability to develop predictive deformation models for hcp materials critically depends on understanding the role of alloying elements on these two different modes of deformation, i.e. slip and twinning. Due to significant differences in the core structures of dislocations belonging to different slip modes (e.g., ⟨a⟩ type and ⟨c+a⟩ type dislocations) [11,12], prima facie it appears that the influence of the alloying element should be different for these slip modes. However, such effects are not documented in the available literature. Further, the mechanism and criteria under which deformation twins form and grow are not fully understood. Again, the role of alloying elements in deformation twinning adds additional complexity. Presently, there is no concrete experimental evidence of the governing criteria for twin nucleation and previous work has relied chiefly on model assumptions.
Some studies have considered the formation of mechanical twins to be primarily stress driven [4,13], while others indicate that twin nucleation is governed by the density and type of dislocation structure. For instance, there is considerable evidence to show that twinning dislocations are a result of non-planar dissociation reactions from ⟨a⟩, ⟨c+a⟩, and ⟨c⟩ type dislocations [14][15][16][17][18]. Since the dislocation structure and its distribution are mainly influenced by the plastic strain in the grains, twinning can potentially depend on the strain undergone by the parent grains as well. This raises the fundamental question of whether twinning is controlled by stress level, amount of plastic strain, or a combination of both. This is an important consideration for being able to model the deformation behaviour of hcp materials. The present study aims at providing answers to these questions by using in-situ neutron diffraction experiments on the deformation of four different Zr-Sn binary alloys. By generating comparable starting microstructures and textures in those binary alloys and compressing them along the three principal directions of the rolled plate, it was possible to preferentially activate different deformation modes and monitor the related twin formation and evolution during in-situ compression using time-of-flight neutron diffraction. Due to the in-situ nature of the experiments, and the arrangement of the detector banks, it was possible to measure the elastic strains and observe trends in peak broadening of grain families that tend to twin. In addition, the change in intensity of specific reflections can be used to detect the onset and subsequent evolution of twinning. A comprehensive analysis of these data is presented together with results from a crystal plasticity finite element model (CPFEM). The CPFEM accounts for the various slip modes and invokes twinning based on a critical resolved shear stress criterion. The principal idea of using the CPFEM in the present work is to replicate the observations obtained during the in-situ loading experiments and determine to what extent a CRSS-based model can predict the onset and evolution of twinning measured experimentally. Although the material studied is a Zr alloy, our findings are relevant to the deformation of other hcp materials like Mg and Ti. It may be acknowledged that the role of an alloying element (Zn) on different deformation modes in another hcp metal, Mg, has been investigated by Stanford et al. [19]. However, it may be noted that that investigation employed the flow behaviour and pre- and post-deformation texture data for interpretation of the results. The present work, on the other hand, made use of intergranular strain evolutions across different grain families measured during in-situ deformation, coupled with a CPFE model incorporating twinning, to bring out the role of Sn in the deformation of Zr. Material and processing of samples Four model Zr-Sn binary alloys with nominal compositions of 0.15%, 0.23%, 0.33%, and 1.20% Sn (amounts are in weight percentage) were used in the present study (see Table 1 for the detailed compositions). The compositions were deliberately biased towards lower Sn contents, instead of using equal steps of Sn variation. All alloys were prepared using a single batch of Zr sponge to minimize variation in trace elements. A detailed description of the preparation and thermo-mechanical processing of the alloy samples can be found elsewhere [20].
For the sake of completeness, a brief account of the pre-processing of the samples is presented here. The cast ingots were subjected to hot extrusion (at 800°C) followed by a β quenching treatment (at 1050°C). While extrusion resulted in breaking up of the cast structure, the β treatment was helpful in improving chemical homogeneity. Subsequently, one more homogenization treatment at 550°C for 24 h, followed by slow cooling, was also performed. These homogenized samples were finally subjected to hot rolling to 65% reduction and an annealing treatment. Both of these treatments were performed at 550°C followed by slow cooling at 1°C/min in order to minimize the formation of residual stresses. The annealing time was 1 h. The reduction was achieved in 10 passes during which the initial plate thickness (23 mm) was reduced to a final thickness of 8 mm. Such a series of thermomechanical processing (TMP) steps is known to give rise to a strong crystallographic texture with a majority of basal poles (or ⟨c⟩ axes of the hexagonal unit cells) aligned towards the plate normal of the rolled samples [1]. Thus we have a set of binary alloys with well recrystallized microstructures and a strong crystallographic texture. Such a set is ideally suited for studying deformation along different directions to bring out texture- and composition-dependent deformation behaviour. In addition, such a texture and microstructure being very common in actual clad applications of Zr, the results of the present study are of immense practical use for the Zr industry. Compression samples along the three principal directions, viz., the rolling, transverse and normal directions (RD, TD, and ND), were extracted from the aforementioned TMP-processed samples. These were cylindrical samples of 12 mm in length and 8 mm in diameter for the deformation tests along RD and TD. Due to the thickness limitation of the rolled samples (8 mm), sample dimensions for the tests along ND were restricted to 8 mm in length and 6 mm in diameter. The time-of-flight neutron diffraction beam line, ENGIN-X, at ISIS, Rutherford Appleton Laboratory, UK, was used for the in-situ compression loading and diffraction experiments [21,22]. Diffraction spectra were acquired using two detectors (longitudinal and transverse), strategically positioned to capture the reflections from crystallographic planes lying parallel and perpendicular to the loading direction of the samples. Such an arrangement allows the detection of {101̄2}⟨101̄1̄⟩ twinning in hcp materials (the most common mode of twinning in these materials [3,23]) since the change in orientation due to twinning results in appreciable changes in the intensity of certain reflections. Compression tests were performed along all of RD, TD and ND. The compression tests were carried out in different control modes (regimes) during the in-situ experiments. Ideally, a constant strain rate would have been used throughout. However, to ensure enough data points were acquired and that the beam time was used efficiently, two control modes were used: Constant stress regime: the data points in the elastic regime (up to ∼50 MPa below the macroscopic yield point) were captured in this mode. This ensured enough points were captured, which would not have been possible if a constant strain rate had been used. Displacement controlled regime: data points in the plastic regime (up to 0.18 true strain) were captured in this mode.
In order to make effective use of beam time, the data points in the plastic regime were captured at two different strain rates: a slower rate of 7 × 10⁻⁶ /s until 0.025 strain, followed by 2.8 × 10⁻⁵ /s until 0.18 strain. The frequency of measurement points was increased around the yield point by the deliberate selection of the lower strain rate. It was found that an acquisition time of 5 min gave an acceptable signal-to-noise ratio. The microstructure of the materials was characterized using SEM-based Electron Back Scatter Diffraction (EBSD). EBSD samples were metallographically polished prior to electropolishing using a commercial system, Labopol. Electropolishing was done at 15 V and a temperature of 5°C in an electrolyte of 80% methanol + 20% perchloric acid. The EBSD measurements were carried out on a FEI Sirion FEG-SEM equipped with the HKL system. For the crystallographic texture measurement, low spatial resolution (step size of 50 μm) large-area maps were recorded covering an area of 70 mm². A step size of 0.3 μm, covering 500 μm by 200 μm, was used for the more detailed scans of the microstructural details. Model for prediction of twin evolution Crystal plasticity finite element modelling (CPFEM) was used to simulate the deformation of the material with emphasis on capturing the observed twinning behaviour. A rate-dependent three-dimensional CPFEM was employed for this purpose. Details of the original, two-dimensional version of the same can be found in Ref. [24]. In this model, plastic deformation is assumed to occur by slip and twinning according to a power-law relation of the form γ̇ = γ̇₀ (|τ|/τ₀)^(1/m) sgn(τ) (Eq. (1)), where γ̇ is the slip/twin shear rate, γ̇₀ is the nominal reference slip rate, τ is the resolved shear stress, τ₀ is the critical resolved shear stress (CRSS) for the given slip or twin system, and m represents the rate sensitivity. In this way, twinning was treated as directional pseudo-slip, the twinning shear rate being calculated identically to the slip rate. As the deformation rates employed in the actual in-situ experiments of the present study are rather low, effects due to rate sensitivity are not expected to be significant. Hence, the model was also made nearly rate insensitive by deliberately choosing a low m value of 0.02 for all slip/twin systems. Although it is arguable that twinning is more rate insensitive than slip, under the conditions employed in the present simulations the application of even lower values of m for twinning systems would not result in any noticeable changes in the simulation predictions (of slip activity, plastic spin, local stress, and twinning evolution). This is owing to the fact that a very low value of m makes the simulation results insensitive to minor local variations in strain rate. In the simulations we used a constant low work hardening rate of Θ = 100 MPa for all slip/twin systems. Isotropic latent hardening was applied, i.e., the incremental change in CRSS for any slip/twin system is a function only of the sum of all slip shear increments, irrespective of the combination of active slip/twin systems. Only the {101̄2}⟨101̄1̄⟩ twin system was accounted for in this model, in agreement with the experimental observations of the present study (see next section). Twinning pseudo-slip (or shear due to twinning) occurs according to Eq. (1) on all six twin variants of a given integration point (IP). When the accumulated shear on the most active of the six twin variants (hereafter denoted by γ_tw) reaches a value γ_lim, the IP is instantaneously reoriented by 85° according to the geometry of the most active twin system.
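As a concrete illustration of the flow rule above, the following Python sketch evaluates the power-law shear rate for a single slip or twin system. The function name, the default reference rate, and the sign handling for twinning as directional pseudo-slip are illustrative assumptions rather than the authors' implementation; the paper specifies only m = 0.02 and does not report the value of γ̇₀.

```python
import numpy as np

def shear_rate(tau, tau0, gamma0_dot=1.0e-3, m=0.02, is_twin=False):
    """Power-law slip/twin shear rate: gdot = gdot0 * (|tau|/tau0)^(1/m) * sgn(tau).

    tau        : resolved shear stress on the system (MPa)
    tau0       : CRSS of the system (MPa)
    gamma0_dot : nominal reference shear rate (assumed value; not given in the paper)
    m          : rate sensitivity (0.02 for all systems in the paper's simulations)
    is_twin    : twinning is directional pseudo-slip, so a negative resolved
                 shear stress produces no twin shear
    """
    if is_twin and tau <= 0.0:
        return 0.0
    return gamma0_dot * np.sign(tau) * (abs(tau) / tau0) ** (1.0 / m)

# Example: a prismatic slip system loaded slightly above its CRSS
print(shear_rate(tau=105.0, tau0=100.0))
```

With 1/m = 50, the shear rate rises extremely steeply once τ exceeds τ₀, which is what makes such a low m behave almost rate-insensitively.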
In this way the model takes into account the lattice reorientation due to twinning. After reorientation, a further (Γ − γ_tw) of twinning shear is allowed on the originally most active twin system, based on the geometry before reorientation, again according to Eq. (1) (Γ = 0.17 is the characteristic twinning shear for the {101̄2}⟨101̄1̄⟩ twin in Zr). The limiting case of γ_tw = Γ corresponds to a situation where the whole volume represented by the given IP has twinned. Due to the drastic hardening effect of twinning reorientation, it was found to be necessary to introduce disorder in the γ_lim values to achieve good fits to the experimental stress-strain curves. The γ_lim values were sampled from a uniform probability density function on the interval [0.0068, 0.119]. This procedure helped to smooth out the hardening contribution of twinning. This randomness is expected to account for the innumerable uncertainties of the real situation (such as the localized dislocation structure, its distribution, etc.). Earlier work has shown that good results can be achieved by treating all these factors as random fluctuations in the critical stress required for nucleating a twin [25]. The present model, on the other hand, introduces randomness in the reorientation part of the twinning process, but not in the CRSS of twinning. This makes twinning strictly governed by the stress state, while the actual occurrence of reorientation of the twin volume is subject to a random fluctuation in shear strain from the instance of nucleation of the twin. This approach has the effect of smoothing the stress-strain curves, making them closer to the experimentally observed ones, without affecting the twinning probability, which by design was made to depend solely on the value of the CRSS. The simulated system consisted of 15 × 15 × 15 twenty-node iso-parametric brick elements, each containing 8 integration points (IPs). One element corresponded to one grain, i.e., all IPs of an element were assigned the same orientation. The orientations were randomly sampled from the experimental EBSD-based texture data and the resulting texture was cross-checked to represent the experimental texture. Uni-axial compression simulations were performed by assigning the applied uni-axial compression strain tensor increment at every time step to all boundary IPs on the two opposing faces perpendicular to the loading direction. In general, the residual stresses present in the samples can have a significant bearing on the observed flow response of the materials [26,27]. However, in the present study, owing to the similarity of the processing conditions, the extent and distribution of the residual stresses are expected to be rather similar among all of the samples at the beginning of the in-situ deformation. The role of residual stresses, if any, was accounted for indirectly by the appropriate values of CRSS of the different slip systems in the present model. Results The initial microstructure and crystallographic texture of the samples used in the in-situ deformation studies are shown in Fig. 1. For the purpose of brevity, only results of the lowest (Fig. 1a) and highest (Fig. 1b) Sn alloys are included here. It is evident that the samples were fully recrystallized with the characteristic crystallographic texture. Further, both samples were similar in their microstructure and texture distribution (as represented by the respective IPF plots and pole figures).
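The per-integration-point twinning bookkeeping described above can be sketched as follows. This is an illustrative Python outline under the stated assumptions (six twin variants per IP, a uniformly sampled reorientation threshold γ_lim, characteristic twinning shear Γ = 0.17); the class and method names are hypothetical and this is not the authors' code.

```python
import numpy as np

GAMMA_CHAR = 0.17  # characteristic twinning shear for the {10-12} twin in Zr

class TwinIP:
    """Twinning state of one integration point (illustrative sketch only)."""

    def __init__(self, rng):
        self.gamma = np.zeros(6)                     # accumulated shear on 6 variants
        self.gamma_lim = rng.uniform(0.0068, 0.119)  # random reorientation threshold
        self.reoriented = False

    def update(self, d_gamma):
        """Add this step's shear increments and reorient the lattice if needed."""
        self.gamma += d_gamma
        gamma_tw = self.gamma.max()                  # most active variant
        if not self.reoriented and gamma_tw >= self.gamma_lim:
            self.reoriented = True                   # lattice rotated by ~85 deg here
        return min(gamma_tw, GAMMA_CHAR)

    def weights(self):
        """Twin/parent volume fractions used later when counting twinned IPs."""
        w_tw = min(self.gamma.max() / GAMMA_CHAR, 1.0)
        return w_tw, 1.0 - w_tw

rng = np.random.default_rng(42)
ip = TwinIP(rng)
ip.update(np.array([0.01, 0.0, 0.002, 0.0, 0.0, 0.0]))
print(ip.weights())
```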
It is important to note that the microstructures and textures of the other two intermediate compositions (0.23% and 0.33% Sn) were indeed similar to the ones presented in Fig. 1. As can be seen in Fig. 1, the starting texture has the basal poles aligned towards the ND of the sample, with a ∼±30° spread along TD. There was also a preferred alignment of the {112̄0} poles towards RD. The average grain size of these initial un-deformed samples was found to be 5.5 μm for Zr-0.15%Sn, 4.5 μm for Zr-0.23%Sn, 5 μm for Zr-0.33%Sn and 4.5 μm for Zr-1.2%Sn. Thus the four materials are comparable in their initial microstructure, texture, and grain size distribution, and the systematic variation of the measured properties must be associated with the differences in Sn content and loading direction only. Flow behaviour as a function of composition and deformation direction The flow behaviour of the Zr-Sn alloy samples is depicted in Fig. 2 as a function of Sn content and deformation direction. These curves are from the data collected during the in-situ diffraction experiments. The following key observations can be made from the figure: 1. There is a systematic increase in the flow stress with increasing Sn content for all directions (RD, TD and ND). This confirms the expected solid solution strengthening effect of Sn. As expected, the increase in flow stress was found to be non-linear with respect to Sn content. The flow data are summarized in Table 2, showing that the relative increase in the yield stress (Δσ_y) is higher at lower Sn values, as if the strengthening effect of Sn saturates at high Sn content. This was the reason for choosing three alloys with comparatively small differences at the low end of Sn content and only one with a significantly higher value. 2. The flow stress for a given Sn content increases in the order of deformation along RD, TD and ND, signifying a strong effect of texture on the deformation; see Fig. 2 and Table 2. 3. Along RD, the flow curves exhibit a comparatively low strain hardening regime (referred to hereafter as the 'flat response') during the initial stage of plastic deformation. This flat response is increasingly pronounced with increasing Sn content, see Fig. 2a. A magnified plot of this regime is included in the figure to bring out this effect more clearly. 4. There is a conspicuous absence of such a flat response regime in the compression tests along ND. The behaviour of the TD samples is closer to that of the ND samples, as no distinct 'flat response' regime was observed. 5. Along RD the strain hardening also increases with Sn content. This is not seen along TD or ND. In summary, the role of Sn on the flow behaviour was seen to be highly dependent on the deformation direction, that is, it is strongly affected by the texture. Evolution of intergranular elastic strains (diffraction elastic strains) as a function of composition and deformation direction The evolution of the intergranular elastic strains for three principal families of grains, 0002, 101̄0 and 101̄1, is shown in Fig. 3. The data represent the relative change in the elastic strains from the initial (unloaded) state and correspond to compressive strain along the loading direction. As expected, all the curves show an initial linear response corresponding to elastic deformation at low stress levels. The onset of plasticity is marked by a deviation from linearity and is a function of Sn content. An observation of particular interest is the behaviour of the 0002 reflection.
As can be seen, for loading along RD and TD, the 0002 elastic strain first increases at a constant applied stress, and then relaxes significantly. This is a clear signature of tensile twinning (i.e., {101̄2}⟨101̄1̄⟩). On the other hand, for samples loaded along ND, one can infer that there was no {101̄2}⟨101̄1̄⟩ type twinning. The decrease in the elastic strains of the grain families 101̄0 and 101̄1 indicates the occurrence of plastic deformation. This "unloading" was more pronounced at higher Sn contents. Onset and evolution of twinning In Zr and most other hcp metals, the most common twinning mode is the {101̄2}⟨101̄1̄⟩ twin. This twin induces a tensile strain along the ⟨c⟩ axis and is therefore particularly active during compression of rolled and recrystallized material along RD and TD. Under the measurement conditions used, the formation and evolution of these twins cause appreciable changes in the intensity of the 0002 reflection, decreasing in the transverse and increasing in the longitudinal detector. This is because of the 85° lattice rotation caused by twinning. Therefore, the integrated intensity of the 0002 reflection can be used to determine the critical macroscopic stress and strain needed for the onset of twinning and to monitor the evolution of twinning during deformation. The data collected by the longitudinal detector (i.e., along the deformation direction) are shown in Fig. 4. The following inferences can be drawn from the figure: 1. The extent of twinning (as measured by the change in integrated intensity) is highest for loading along RD, followed by loading along TD. Deformation involving compressive loading along ND, however, did not result in any detectable change in 0002 intensity, indicating the absence of {101̄2}⟨101̄1̄⟩ twinning. 2. In the RD case, there is a clear and strong effect of Sn content. Higher Sn content corresponded to a higher extent of twinning. In contrast, the extent of twinning for TD deformation appears to be only weakly dependent on Sn content. While the lowest Sn content alloy shows an increase of 0002 intensity by 15 times (nominal values) for a strain of 0.15 along RD, the sample with 1.2%Sn shows an increase of as much as 32 times for the same extent of deformation. In the TD case, however, the change in 0002 intensity increases only marginally with Sn content (Fig. 4). This suggests a weak dependence of the extent of twinning on the Sn content for deformation along TD. It may be emphasized here that although the extent of twinning in TD is lower than in RD, it was nevertheless quite appreciable, unlike during loading along ND. 3. The critical strain needed for the onset of twinning in both RD and TD deformations is fairly independent of the Sn content. On the other hand, the critical stress required increases with Sn content in both the RD and TD cases. The in-situ diffraction data can also be used to extract the intergranular strains of the parent grains which twin during deformation, as shown in Fig. 5. Diffraction elastic strains were calculated from the peak shifts of the 0002 reflection using the transverse detector. For the sake of clarity, only the results of the two extreme compositions for RD and TD deformations are included in the plots. The onset of twinning is marked on the plots using arrows. The samples were oriented in such a way that in both the RD and TD deformations, the transverse detector measured lattice strains along the samples' ND (plate normal).
As can be seen, the relative difference in the intergranular strains of the twinning grains of the alloys with different Sn contents is similar for both RD and TD deformations. Direct and conclusive evidence for twin formation and determination of the type of twins was obtained from microstructural characterization of the deformed microstructures using EBSD. Fig. 6 shows the microstructure of the four compositions used in the present study after 18% compression along RD (the sample with 0.15%Sn reached only 0.15 strain, as the in-situ neutron diffraction and deformation test had to be interrupted at this strain due to an unexpected issue with the machine; nevertheless, the amount of deformation is close enough to that of the other samples, which reached 0.18 strain, to allow a qualitative comparison of the corresponding microstructures for the extent of twinning). These maps show that the twins are indeed of the {101̄2}⟨101̄1̄⟩ type, as expected. Further, these microstructures are in full agreement with the in-situ loading and diffraction results presented in Fig. 4, in terms of twin volume fraction. These maps confirm that the volume fraction of twins increases with Sn content for RD loading. Another important observation from these micrographs is that the increase in volume fraction of twins at higher Sn contents is due to a larger number of twins rather than bigger twins. The effect of loading direction is illustrated in Fig. 7, where the TD- and ND-deformed microstructures of the highest Sn alloy are shown. These microstructures also confirm that the twinning propensity is indeed lower for loading along TD and negligible in ND deformation. In addition, no modes of twinning other than the {101̄2}⟨101̄1̄⟩ type were observed in these samples. The significant difference in the role of Sn on the twinning behaviour for compression along RD versus TD was further corroborated by the deformation textures. Fig. 8 illustrates the basal pole figures of the deformed samples of the two extreme compositions subjected to compression tests along RD and TD. Comparison of these pole figures with the ones shown in Fig. 1 reveals that there is a considerable increase in pole intensities along the RD axis of the pole figure for samples compressed along RD. This is essentially due to the activation of {101̄2}⟨101̄1̄⟩ twins. The extent of the difference in twinning between samples with different Sn contents is noticeable through the difference in the intensity levels of the texture component along the RD axis (highlighted by the circles). It is clear that the higher Sn sample had a higher intensity of this texture component. Compression along TD, on the other hand, resulted in insignificant differences in the distribution of basal pole intensities between the samples with different Sn contents. These observations are thus in direct agreement with the neutron diffraction based interpretation of twin evolution. (Figure caption fragment: the behaviour of the excluded alloys lay in between these two alloys. The data are computed from the peak shifts of the respective reflections as recorded by the longitudinal detector; thus the strains correspond to those along the loading direction of the sample. The measurement uncertainty in the lattice strain was ±50 με under the measurement conditions used [20] and is indicated as error bars on selected points (pointed out by the black arrows in the ND plots). Note that the stresses and strains in the figure are compressive in nature and the negative sign has been omitted.) Discussion There are three main discussion points. Firstly, the RD samples showed an anomalous flat response regime, the extent of which increased with increasing Sn content.
Secondly, while the extent of twinning strongly depended on Sn content in the RD case, it was only weakly dependent on Sn content in the TD case. Finally, whereas the critical macroscopic strain at the onset of twinning was independent of Sn content and loading direction (RD and TD), the stress level at the onset of twinning increased noticeably with Sn content, independently of loading direction. Role of Sn on slip systems The anomalous flat response in the stress-strain behaviour of the alloys was earlier attributed to the presence of thermal residual stresses in the material, caused by the large differences in the thermal expansion coefficients along the ⟨a⟩ and ⟨c⟩ axes of the hcp unit cell [28,29]. In this study, the extent of this flat response was seen to increase with Sn content. A potential reason for this effect could be as follows. The initial texture of the samples compressed along RD was such that a majority of the grains had a favourable orientation for prismatic slip, i.e., ⟨a⟩ type slip on prismatic planes. It has been shown previously that in the case of prismatic slip, the interaction among dislocations of different prismatic planes is in general low, resulting in low strain hardening [30], which correlates well with the flat regime observed. In addition, modelling work has shown that the ⟨a⟩ type dislocations in α-Zr undergo dissociation into partials creating stacking faults, which are stable in the prismatic plane, since the stacking fault energy in the basal plane is too high [12,31]. The addition of Sn to Zr is known to dramatically decrease the stacking fault energy of the system, as shown in Fig. 9, taken from Ref. [32]. This drop is more prominent at low Sn additions and saturates at high Sn contents. This correlates with the observed non-linear effect of Sn on the flow stress, discussed in Section 3. In addition, the yield point data presented in Fig. 9 are consistent with the classical understanding of many solution strengthening models, which predict a square root dependence of the yield point on the concentration of solute atoms. The drop in SFE should, in principle, increase the stacking fault width and thus promote planar slip. Thus, increasing the Sn content not only makes the initial barrier for dislocation motion higher, but is also likely to increase the planarity of slip, which could decrease the potential strain hardening, leading to the observed pronounced flat response regime in RD compression. (Fig. 7 caption: EBSD maps showing the twinning behaviour as a function of deformation direction for the alloy with 1.2%Sn at 18% compressive strain (TD and ND panels; scale bar 100 μm). Compare these microstructures with the last one of Fig. 6. These maps confirm that only TT1 twins form even in TD and no significant twins are seen in ND deformation.) The absence of a marked flat response in ND compression further corroborates this argument. In the ND case, very few grains are favourably oriented for the activation of ⟨a⟩ type slip and therefore no flat regime is observed [33]. Moreover, since very few grains are favourably oriented for ⟨a⟩ type slip, ⟨c+a⟩ slip can be one of the principal modes of deformation in this case.
The observed increase in yield stress with Sn content in this case signifies that the CRSS for slip by the ⟨c+a⟩ mode is also influenced by the Sn content. Twinning behaviour The lack of an effect of Sn content on the extent of twinning in TD loading, in stark contrast to the strong effect observed in RD loading, is rather intriguing. Before delving further into possible explanations, it is important to clarify that this observation is not due to a failure to capture subtle changes on account of poorer statistics from the lower twin volume fractions in the TD case. As pointed out in Section 3.3 (point 2), the extent of twinning is lower but still considerable in TD deformation. The assertion that Sn did not affect the extent of twinning in TD compression is also supported by the flow behaviour of the material. As can be seen in Fig. 2, the strain hardening is higher at higher Sn contents along RD, which can be attributed to twinning. The flow curves for TD compression do not show any difference in strain hardening. This signifies that the extent of twinning was rather similar in all alloys for compression along TD. This implies that the effect of Sn content on twinning depended on the loading direction. As far as the extent of twinning is concerned, the present results show that RD compression had the highest, followed by TD, with twinning virtually absent in ND compression. The fact that the {101̄2}⟨101̄1̄⟩ twin requires a tensile stress along the ⟨c⟩ axis can explain this observation, as the initial texture of the material (see Fig. 1) renders such a stress state more prevalent for RD than for TD (and ND) in a majority of grains. However, this explanation does not 'seem' to be sufficient to explain the apparently strong effect of Sn on the extent of twinning for compression along RD but the complete lack of it for compression along TD. To probe whether the alloys with different Sn contents in RD and TD compression had any other differences that might help explain the observed behaviour, the evolution of the diffraction peak broadening, a fingerprint of plastic activity, was analysed. However, even this could not delineate RD compression from TD compression as both have similar signatures of peak broadening evolution. Since the FWHM and intergranular strain analysis could not explain the loading-direction dependence of the effect of Sn content on twinning, crystal plasticity finite element modelling (CPFEM) was employed to test simple twinning criteria for the two different loading conditions. For the purpose of brevity, only the two extreme compositions were considered for the simulations. Initially, several simulations were run with varying CRSS values for the slip/twin systems to reproduce the observed flow behaviour. Fig. 10 represents the best-fit simulations, in which excellent agreement between the experimental and simulated flow curves (Fig. 10a) and intergranular strains for the important grain families (Fig. 10b) can be seen. Table 3 gives the CRSS values used for these simulations. It may be noted that while the CRSS values for slip were assumed to change with Sn content, the CRSS values for twinning were not. In fact, this difference in how Sn affects slip and twinning is responsible for the difference in twinning activity with Sn content for RD compression. It is clear that the model, apart from capturing the flow curves very well, could also simulate the observed evolution of the intergranular lattice strains fairly well, particularly the behaviour of the 0002 family of grains.
This family is of particular interest to the present study as this is the reflection that is most sensitive to the twinning behaviour of the samples. As can be seen, the difference in the observed stress level for twin initiation between the samples of different Sn contents (in spite of using the same twinning CRSS for both the high and low Sn alloys) was well reproduced. In addition, the abrupt elastic unloading of the intergranular strain (of the 0002 reflection) subsequent to twinning is captured well by the model. This is a significant improvement over previous works, in which the sudden relaxation due to twinning could not be simulated [4]. The extent of twinning predicted by the simulation is compared with the experimental observations in Fig. 11. Since the extent of twinning from experiments is known only in terms of the change in integrated intensity (in arbitrary units), a direct comparison with the volume fraction estimations from the simulation is not possible. For a meaningful comparison, we need to consider the experimental integrated intensity and the simulated volume fraction relative to the initial integrated intensity in the experiments and the initial volume fraction in the simulations, respectively, for both RD and TD. The respective experimental and simulated data are plotted using two y-axes in Fig. 11 for both RD and TD, as a function of applied strain (Fig. 11a) and applied stress (Fig. 11b). To achieve a meaningful comparison, as mentioned above, the ratio of the y-axis limits was chosen to be R = (E₀/S₀)·f, where E₀ is the initial experimental intensity, S₀ the initial volume fraction in the simulations and f is a correction factor. E₀ = 40.8 and S₀ = 0.00295 for RD, and E₀ = 197.9 and S₀ = 0.01085 for TD; f = 0.62 for both RD and TD compressions. The fact that f is fairly close to unity and independent of loading direction shows the quality and consistency of the model in terms of predicting the orientation changes due to twinning. It may be noted that the twin volume fractions from the simulations were calculated by considering the change in the integration points (IPs) with their c-axis within ±15° of the loading direction, which corresponds to the acceptance angle of the neutron diffraction detectors. To account for both the twinned and non-twinned (parent) orientations of an IP, the following weights were used in the calculations: w_tw = γ_tw/Γ for the twinned orientation and w_pa = (Γ − γ_tw)/Γ for the parent orientation, where w_tw and w_pa represent the volume fractions of twin and parent for the given IP. There is excellent agreement between the model predictions of the twinning extent and the experiments for both RD and TD, as revealed by Fig. 11. (Fig. 11 caption: comparison of the experimentally measured change in the 0002 integrated intensity (a measure of twinning extent) with the CPFEM-simulated weighted fraction of grains having their c-axis within 15° of the loading direction (i.e., the fraction of twinned grains) for RD and TD deformations of the two extreme compositions, as a function of (a) applied strain and (b) applied stress.) The agreement between the simulations and experiments can also be seen independently through the comparison of the corresponding textures. Fig. 12 depicts this aspect, wherein the similarity of the predicted textures with the experimentally determined ones is evident. The slight differences between them can be attributed to the symmetrisation of the initial textures used for the simulated textures.
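The axis scaling used to overlay the experimental intensity on the simulated twinned fraction is simple arithmetic; the short sketch below (with a hypothetical helper name) reproduces the implied ratios from the values quoted above.

```python
def axis_ratio(e0, s0, f=0.62):
    """Ratio of y-axis limits, R = (E0 / S0) * f, used to overlay the
    experimental 0002 intensity on the simulated twinned-grain fraction."""
    return e0 / s0 * f

print(round(axis_ratio(40.8, 0.00295)))    # RD: ~8575
print(round(axis_ratio(197.9, 0.01085)))   # TD: ~11309
```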
This was done to avoid the artificial biasing that can arise from either under- or over-sampling of certain orientations. The simulations predict that the extent of twinning is directly proportional to Sn content for RD compression but insensitive to it for compression along TD, in full agreement with the experiments. In the model, Sn affects the deformation only by changing the CRSS for slip. In other words, the increased extent of twinning in RD for the high Sn alloy is a direct consequence of the higher flow stress and ultimately higher intergranular strains once plastic deformation starts. It should be noted that twinning only starts after some level of slip, i.e., when the intergranular strains generated during deformation result in stresses, in grains orientated for twinning, that exceed the twinning CRSS value. Without slip this is not possible regardless of the Sn content. The fact that such an increase in twinning extent was not seen in the TD simulations (despite using the same higher CRSS values for the higher Sn alloy) establishes that the starting texture is crucially important, as it affects the intergranular strains/stresses evolving during the early stage of plasticity. The present CPFE model, assuming a CRSS criterion for twinning, demonstrates that compression along a direction that predominantly activates prismatic slip (c-axis predominantly perpendicular to the loading axis) generates intergranular strains that will more easily activate twinning than a compression direction with a greater mixture of prismatic and basal slip activity (c-axis predominantly 60° off the loading axis). The model highlights that simple stress considerations can, at least in the present case, explain significant variations in the twin activities of samples with different starting textures. Conclusions The present investigation aimed at identifying the relative roles of starting texture, stress and plastic strain on the deformation twinning behaviour of a binary Zr alloy by varying the alloy composition and crystallographic texture in a controlled manner. The major conclusions drawn from the study are as follows: Zr-Sn binary alloys with a characteristic split basal texture exhibit the highest extent of deformation twinning (tensile twins) when deformed along the original RD, followed by a relatively low level of twinning in TD deformation. In the case of ND, no deformation twinning was observed. Analysis of the flow behaviour as a function of deformation direction (or texture) showed that ⟨a⟩ type slip is significantly affected by Sn. The initial zero-hardening behaviour when compressing along RD, which becomes more pronounced with increasing Sn content, suggests that Sn enhances prismatic slip planarity. For RD compression, increasing the Sn content results in a higher extent of twinning. In TD deformation, however, there is no difference in the extent of twinning between alloys with different Sn contents. A CPFE model incorporating a composition-sensitive CRSS for slip and a composition-insensitive CRSS for twinning captured the experimental observations (twin volume fractions and intergranular strain evolutions). The model demonstrates the importance of the starting texture when comparing twin activities and suggests that the intergranular strains generated prior to the onset of twinning, which depend on Sn content and slip mode activities, greatly determine the twinning activity.
9,431.8
2016-01-05T00:00:00.000
[ "Materials Science" ]
DeepFat: Deep Learning Segmentation and Quantification Method for Assessing Epicardial Adipose Tissue in CT Calcium Score Scans Epicardial adipose tissue (EAT) volume has been linked to coronary artery disease and the risk of major adverse cardiac events. As manual quantification of EAT is time-consuming, requires specialized training, and is prone to human error, we developed a method (DeepFat) for the automatic assessment of EAT on non-contrast low-dose CT calcium score images using deep learning. We segmented the tissue enclosed by the pericardial sac on axial slices, using two innovations. First, we applied a HU-attention-window with a window/level of 350/40 HU to draw attention to the sac and reduce numerical errors. Second, we applied a look-ahead slab-of-slices with bisection ("bisect") in which we split the heart into halves and sequenced the lower half from bottom-to-middle and the upper half from top-to-middle, thereby presenting an always increasing curvature of the sac to the network. EAT volume was obtained by thresholding voxels within the sac in the fat window (-190/-30 HU). Compared to manual segmentation, our algorithm gave excellent results with volume Dice=88.52%±3.3, slice Dice=87.70%±7.5, EAT error=0.5%±8.1, and R=98.52% (p<0.001). The HU-attention-window and bisect improved Dice volume scores by 0.49% and 3.2% absolute, respectively. Extensive augmentation improved results. Variability between analysts was comparable to variability with DeepFat. Results compared favorably to those of previous publications. Introduction Epicardial and paracardial fat have been linked to increased risk of cardiovascular disease and diabetes. Epicardial adipose tissue (EAT) is a visceral fat deposit distributed between the pericardium and the heart. Several clinical studies have shown a significant association between EAT volume and abdominal visceral adiposity 1,2 . A 2018 meta-analysis using CT images with >41,000 participants over 70 studies showed an association between EAT volume and adverse cardiovascular risk 3 . Importantly, studies have shown a lack of (or weak) association between EAT and another widely used marker of risk, coronary artery calcium scoring 4,5 , suggesting that EAT volume may have additive value in risk stratification 6 . Emerging literature also suggests that EAT attenuation carries prognostic information [7][8][9] . Further, recent studies have shown that EAT is modifiable via pharmacologic treatment and may be a therapeutic target [10][11][12] . Manual EAT segmentation on non-contrast-enhanced CT images, however, is a time-consuming task, requires skilled expertise, and is prone to inter- and intra-observer variability 13 . For typical manual analysis, EAT is segmented by first delineating the pericardial sac and then thresholding voxels within the sac using the fat window (-190 HU to -30 HU). Yet, the thin layer of pericardium tissue can be difficult to distinguish in cardiac CT scans, with low contrast from surrounding tissues and blood 14 .
Recent publications have described the use of machine and deep learning approaches to segment EAT on non-contrast CT calcium score images 13 , contrast CT images 14 , and high-resolution CT angiography (CTA) images 15,16 . Some studies assessed both epicardial and paracardial (external to the pericardium) fat depots 17 , while others distinguished between epicardial and paracardial fat [13][14][15] . Some authors have used methods without learning, including a recent method by De Albuquerque et al. 16 , which used the floor-of-the-log clustering algorithm and a set of morphological operations. Deep learning is popular using 2D slice 13 and 3D patch 15 data. Zhao et al. 14 demonstrated a 2D Dense U-Net for automatically segmenting the epicardium in 14 contrast-enhanced CTA images, where the increased contrast facilitates segmentation. He et al. 18 proposed a 3D deep attention U-Net for segmenting the EAT in 40 CTA images. Their method achieved a Dice score of 85%. By extending their cohort to 200 CTA images 15 , the 3D deep attention U-Net approach reached an improved Dice score of 88.7%. For non-contrast, gated CT images, Zhang et al. 19 applied a dual U-Net framework on 2D image slices over a small cohort (n=20 image volumes). With non-contrast CT images, Commandeur et al. 20 proposed a fully automatic method that uses two Convolutional Neural Networks (CNNs) to segment EAT and thoracic adipose tissue (TAT). The first CNN detects the heart limits and performs segmentations, while the second combines a statistical shape model to detect the pericardium. Our work is influenced by a subsequent paper from Commandeur et al. 5 , where they used a single deep learning approach for two tasks. First, they trained a deep network to segment the region within the pericardial sac. Second, they extracted features from the same network with machine learning to classify image slices containing the heart. The input to the semantic segmentation network consisted of a slab of three image slices: the slice of interest (k), one prior (k − 1), and one post (k + 1). The output was a label image for the middle slice (k). The use of three consecutive slices improved results significantly. However, when we applied this 3-slice approach, we found errors particularly associated with the top and bottom slices, leading us to develop an alternative approach. Our goal was to perform an accurate, fully-automated EAT segmentation and quantification from CT calcium score images. CT calcium scoring is currently used to assess the cardiovascular health of patients, and large archives of thousands of CT calcium score images are available that enable population risk studies. However, CT calcium score images are challenging to analyze because the slices are thick (~2.5 mm) and no contrast agent is used to improve delineation of fat boundaries. Building on the work of Commandeur et al. 13 , we applied a HU-attention-window with a window/level of 350/40 HU to emphasize appropriate CT numbers. Second, we applied a look-ahead slab-of-slices with bisection (hereafter referred to as bisect) in which we split the heart into halves and sequenced the lower half from bottom-to-middle and the upper half from top-to-middle, thereby presenting an increasing curvature of the sac to the network. We then used a 3-slice slab approach, with the image of interest at k and the other images at k+1 and k+2. In addition, we introduced a slice-based analysis of results for detailed quantifications that may be helpful for optimizing algorithms.
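For orientation, a minimal Python/NumPy sketch of the two preprocessing ideas referenced here, the HU-attention-window and fat-window thresholding inside the sac, is given below. The function names and the voxel-volume handling are illustrative assumptions and not the published MATLAB implementation.

```python
import numpy as np

def hu_attention_window(slice_hu, window=350.0, level=40.0):
    """Clip a CT slice (in HU) to the window/level and rescale to 8 bits,
    so the pericardium contrast survives the network's 8-bit input."""
    lo, hi = level - window / 2.0, level + window / 2.0
    clipped = np.clip(slice_hu, lo, hi)
    return ((clipped - lo) / (hi - lo) * 255.0).astype(np.uint8)

def eat_volume_ml(volume_hu, sac_mask, voxel_mm3):
    """EAT volume = voxels inside the pericardial sac falling in the fat window."""
    fat = (volume_hu >= -190) & (volume_hu <= -30) & sac_mask
    return fat.sum() * voxel_mm3 / 1000.0  # mm^3 -> mL
```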
Manual Labeling of Image Data
All scans in this study were obtained as part of clinical care. The Institutional Review Board of the University Hospitals waived consent for all studies utilizing anonymized CT scans. The method was carried out in accordance with relevant guidelines and regulations. Expert analysts segmented the CT scans using 3D Slicer in a sequential slice-by-slice process (Figure 1). The top and bottom of the heart were identified. A standard window/level (350 HU/40 HU) was applied to the entire CT volume to achieve good contrast of the pericardium (Figure 1B). Analysts typically began the process in the middle of the heart. A closed region was manually drawn on every axial slice along the pericardium, as in Figure 1D. The anterior limit of the pericardium was determined by the appearance of the pericardium in both axial and sagittal views, as illustrated in Figure 1C (the sagittal view is not shown). If needed, axial slices above and below the current slice were examined to help determine the location of the pericardium. A median filter with a 3×3×3 mm kernel size was used to reduce noise. EAT was identified by thresholding in the standard fat range [-190 HU, -30 HU], and voxels within the pericardium were deemed EAT voxels, as shown in Figure 1E.
Manual segmentations were individually performed by three expert analysts for the 89 CT scan volumes. At the top and bottom regions of the heart, manual labeling became more difficult (as will be shown with the inter-reader variability results), and analysts used the sagittal view along with the axial view to enable precise labeling.
Algorithm for EAT segmentation
With experience from manual segmentation, we created preprocessing steps for our deep learning network (Figure 2). For the HU-attention-window, we applied a 350 HU/40 HU window/level operation to increase the contrast of the pericardium and encourage the deep learning network to capture pericardium structural features. As processing within the network is done on 8-bit data, this truncation operation ensured that when data are mapped to the network, the pericardium contrast is not lost due to numerical rounding. We applied the look-ahead slab-of-slices with bisection (bisect), whereby we presented the network with the slice of interest and the two upcoming slices on the side of increasing sac area. We divided the heart slices into two halves, where the lower half was sequenced from bottom-to-middle and the upper half from top-to-middle, thereby keeping an increasing curvature of the sac and presenting similar images to the network in training and testing. Once data were arranged in this fashion, each labeled image slice of interest was concatenated with its two consecutive slices (k+1 and k+2) to generate a 512×512×3 input voxel slab for deep learning segmentation.
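The two intensity operations above lend themselves to a short sketch. This is an illustration under our own assumptions (hypothetical function names, toy voxel spacing), not the authors' implementation:

```python
import numpy as np

def hu_attention_window(volume_hu, window=350.0, level=40.0):
    """Clip HU values to [level - window/2, level + window/2] and rescale to
    8-bit, so pericardium contrast survives quantization."""
    lo, hi = level - window / 2.0, level + window / 2.0
    clipped = np.clip(volume_hu, lo, hi)
    return ((clipped - lo) / (hi - lo) * 255.0).astype(np.uint8)

def eat_volume_cm3(volume_hu, sac_mask, spacing_mm=(2.5, 0.7, 0.7),
                   fat_range=(-190.0, -30.0)):
    """EAT = voxels inside the pericardial sac whose HU lie in the fat window.
    Volume is the voxel count times the voxel size (mm^3 converted to cm^3)."""
    fat = (volume_hu >= fat_range[0]) & (volume_hu <= fat_range[1])
    eat_voxels = np.count_nonzero(fat & sac_mask.astype(bool))
    voxel_mm3 = spacing_mm[0] * spacing_mm[1] * spacing_mm[2]
    return eat_voxels * voxel_mm3 / 1000.0

if __name__ == "__main__":
    vol = np.random.uniform(-1000, 1000, size=(29, 64, 64))
    mask = np.zeros_like(vol, dtype=bool)
    mask[:, 16:48, 16:48] = True  # toy pericardial sac mask
    print(hu_attention_window(vol).dtype, round(eat_volume_cm3(vol, mask), 2))
```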
We segmented the region interior to the pericardial sac using deep learning semantic segmentation and used thresholding to determine EAT. We used DeepLab-v3-plus 21 with transfer learning (i.e., the network was pre-trained on the ImageNet dataset), which uses ResNet-18 as a backbone. The deep network model is a CNN specifically designed for semantic segmentation tasks and is mainly composed of several important architectures: the backbone network, the Atrous convolution, the Atrous Spatial Pyramid Pooling (ASPP) network, and the decoder section, as shown in Figure S1.
Traditional deep CNNs tend to reduce the spatial resolution of the output feature map as the network goes deeper, and thus are not well suited to semantic segmentation tasks, which require detailed spatial information. In contrast, DeepLab-v3 plus applies Atrous convolution in the last few blocks of the backbone network, which can adjust the effective field of view of the convolution without reducing the size of the output feature map. Thus, Atrous convolution can extract denser features at multiple scales while preserving the spatial resolution, which is important for semantic segmentation.
The ASPP was used on top of the feature map to capture multi-scale object information by applying four parallel Atrous convolutions with different sampling rates. Batch normalization and image-level features were also incorporated into the ASPP by applying a global average pooling at the last feature map of the backbone and concatenating the corresponding results (containing multi-scale features) with batch normalization 16. The results were then passed through a 1×1 convolution with 256 filters to obtain the final output. To gradually recover the spatial information and capture more detailed boundary features, a decoder section was added by applying a few 3×3 convolutions to refine the output features obtained from the ASPP, with an upsampling factor of 4 21.
Deep learning experiments were performed using a Windows 10 computer with an AMD Ryzen 7 5800X 3.8 GHz CPU, 32 GB RAM, a 1 TB hard disk, and a GTX 3090 GPU with 24 GB of memory. We implemented the code using Matlab 2021a. The manual segmentations were performed on conventional computers using 3D Slicer software, version 4.11 22, and the manually labeled volumes were saved in DICOM files for easy association with the original CT volumes. For training, we used the Adam method for optimization and Dice as the loss function, as it is immune to the effects of prevalence. To enrich the training process, significant random augmentations were applied: random rotation (-5 to 5 degrees), scaling (0.9 to 1.1), and randomized Gaussian blurring with a standard deviation σ < 2. We duplicated each input image slice and applied random blurring augmentation to it. Then, with each new training epoch, input images were augmented with a random mixture of rotation and scaling augmentations, creating a wide range of image permutations to enhance the training process. As we typically used 30 epochs for the 50 image volumes, each with an average of 29 image slices, we presented the network with 1,446 input image slices, duplicated them with blur augmentation to 2,892, and generated up to 86,760 images following randomized augmentation. Using this level of augmentation improves training, especially with a limited dataset 23. We used a mini-batch strategy with a batch size of 20, the maximum number of epochs was set to 30, and the initial learning rate was set to 1e-3. Validation was performed at the end of each epoch to evaluate the performance of training and to check the stopping conditions. Training was stopped when changes in Dice reached a tolerance of 0.1e-4 or the maximum number of epochs was reached. We found that training reached acceptable convergence typically within 30 epochs.
The complete preprocessing, augmentation, and training pipeline is presented in Figure 2, while the internal structure of DeepLab-v3 plus is illustrated in Figure S1. As in manual segmentation, we applied noise reduction (a 3×3×3 mm median filter) to reduce artefacts in these low-dose CT images; we then applied standard fat thresholding [-190 HU, -30 HU] to identify EAT within the pericardial sac.
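As a rough illustration of the augmentation ranges quoted above (rotation of -5 to 5 degrees, scaling of 0.9 to 1.1, Gaussian blur with σ < 2), the following hedged sketch uses NumPy/SciPy on a single 2D slice and its mask; it is not the Matlab pipeline used in the study, and the helper names are made up:

```python
import numpy as np
from scipy import ndimage

def _crop_or_pad(arr, shape):
    """Crop or zero-pad (anchored at the top-left corner) to the requested shape."""
    out = np.zeros(shape, dtype=arr.dtype)
    h, w = min(arr.shape[0], shape[0]), min(arr.shape[1], shape[1])
    out[:h, :w] = arr[:h, :w]
    return out

def random_augment(image, label, rng=np.random.default_rng()):
    """One augmentation step matching the reported ranges: random rotation,
    random scaling, and random Gaussian blurring (image only)."""
    angle = rng.uniform(-5.0, 5.0)
    scale = rng.uniform(0.9, 1.1)
    sigma = rng.uniform(0.0, 2.0)

    # Rotate image and label together; nearest-neighbour keeps the label binary.
    img = ndimage.rotate(image, angle, reshape=False, order=1)
    lab = ndimage.rotate(label, angle, reshape=False, order=0)

    # Scale, then crop/pad back to the original size.
    img = _crop_or_pad(ndimage.zoom(img, scale, order=1), image.shape)
    lab = _crop_or_pad(ndimage.zoom(lab, scale, order=0), label.shape)

    # Blur only the image, never the ground-truth mask.
    img = ndimage.gaussian_filter(img, sigma=sigma)
    return img, lab
```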
Dataset and Evaluation Methods
This study included 93 non-contrast cardiac CT images, which were obtained from the University Hospital of Cleveland. Four of the 93 images were excluded due to abnormality in anatomical structure. For the remaining 89 images, the first and last slices of the thoracic CT volume were manually chosen by analysts to include the heart top and bottom slices, respectively. The axial slice thickness was 2.5 mm and the 2D slice dimensions were 512×512 pixels per axial slice, with pixel spacing ranging from 0.66 mm to 0.86 mm. A total of 1,446 axial slices were included in this study. The dataset was first randomly separated into two sets: training (n=50) and testing (n=39). The training set was further divided into two subsets: a training subset (n=40) and a validation subset (n=10). To determine the importance of the processing steps (HU-attention-window and bisect), we processed the images with and without these modifications.
We evaluated processing using Dice and Intersection Over Union (IOU) scores. The Dice score coefficient was calculated on a slice-by-slice basis for EAT between the automated output and the ground truth (manual segmentation) to evaluate the performance of the semantic segmentation. The Dice score ranges from 0 to 1 (0%-100%), with 0 meaning no overlap of segmentation and 1 meaning identical (completely overlapping) segmentation. We evaluated the Dice score using equation (1) and report measures as percentages:

Dice = 2|X ∩ Y| / (|X| + |Y|)    (1)

where X and Y represent the testing output and the ground truth pixels in a slice (or voxels in a volume), |X ∩ Y| is the number of overlapping pixels (or voxels) between the predicted EAT segmentation and the ground truth EAT images, and |X| + |Y| represents the total number of pixels (or voxels) in both images (or volumes). In our experiment, Dice score coefficients were calculated for both axial 2D slices and the whole 3D volume.
We also calculated the IOU score, also known as the Jaccard Index. Similar to the Dice score, a value of 0 indicates no overlapping segmentation and 1 represents identical segmentation:

IOU = |X ∩ Y| / |X ∪ Y|    (2)

In addition, to help identify any algorithm issues, we compared the automated EAT volumes to those from the analysts' manual segmentations. Scatter and Bland-Altman plots were created across volumes and image slices to evaluate the agreement between the predicted results and the manual ground truth. The correlation coefficient (R) and its corresponding p-value were calculated to assess the scatter plots.
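Equations (1) and (2) translate directly into a few lines of NumPy; the function names below are illustrative and the handling of empty masks is our own choice:

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Equation (1): Dice = 2|X ∩ Y| / (|X| + |Y|), returned as a percentage.
    Works for a 2D slice or a 3D volume of boolean masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 100.0  # both masks empty: define as perfect agreement
    return 200.0 * np.logical_and(pred, truth).sum() / denom

def iou_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Equation (2): IOU (Jaccard) = |X ∩ Y| / |X ∪ Y|, as a percentage."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    if union == 0:
        return 100.0
    return 100.0 * np.logical_and(pred, truth).sum() / union
```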
Results
DeepFat showed excellent segmentation of the pericardial sac and EAT. In Figure 3, we compare DeepFat EAT segmentations to the manually obtained gold-standard results in three held-out test volumes. There was good agreement and only small deviations in the marking of the pericardial sac (Figure 3E, J, and O). Dice scores for EAT for these images were 86.8%, 92.3%, and 92.4% (Figure 3A, F, and K, respectively).
We evaluated the contributions of our algorithm choices (e.g., HU-attention-window and bisect) in a radar graph (Figure 4). Comparing the results with HU-attention-window, volume Dice scores were much improved with bisect compared to without bisect. The addition of bisect resulted in the best Dice score in 95% of test volumes. Likewise, HU-attention-window improved results in the presence of bisect, with better Dice scores in 64% of tested volumes. Using both HU-attention-window and bisect was significantly advantageous, providing an improved Dice score in 100% of tested volumes, indicating the usefulness of these algorithm choices. Average Dice scores also showed the value of including both HU-attention-window and bisect (see the Figure 4 legend). Data augmentation was also found to be important, especially with image blurring, giving a 1.77% absolute improvement in Dice. When we investigated the use of different deep learning networks (Table S1), we found that DeepLab-v3 Plus outperformed the three other networks tested. Improvements were surprisingly substantial, with absolute improvements in Dice ranging from 2% to 25% and in IOU from 2% to 33%, depending on the network.
Good segmentations translated to good EAT volumes with DeepFat. With and without bisect, we compared automatically obtained DeepFat total EAT volumes to manually obtained results (Figure 5). The deep network with bisect gave superior total EAT volume estimation compared to the deep network without bisect, as shown in both the scatter and Bland-Altman plots (first two columns). R, slope, bias, and spread values all improved with bisect (see the Figure 5 legend). It should be noted that R alone is a weak assessment of measurement quality, as it does not indicate the quality of the y=mx fit. Assessments per slice allowed us to analyze and optimize the algorithm (Figures 5C and 5F). The slices of each test image were categorized into four equal regions based on their location in the total heart slice sequence, and the regions were color-coded. Image slices at the top and bottom of the heart tended to have the most error. This was reduced with the inclusion of bisect, due to the ability of bisect to capture the heart shape near the top and bottom of the heart. Plots such as those shown in Figures 5C and 5F helped us diagnose and optimize our DeepFat algorithm, leading to the creation of the bisect modification.
We analyzed the variation between analysts (inter-reader variability) (Figure 6) and compared it to the variation between analysts and the DeepFat automated method (Figure 7). The 50 training CT images were split into groups of 25, 12, and 13 images that were manually analyzed by analyst1, analyst2, and analyst3, respectively. In the inter-reader variability study, we compared the manual segmentations of analyst1 and analyst2 over the 39 images of the held-out testing set. Scatter and Bland-Altman plots between the two analysts are shown in Figure 6. There was reasonable agreement (R=0.9882, p<0.001) between the two analysts; however, the standard-deviation/bias from the Bland-Altman plot (8.16 cm³/1.91 cm³) indicated variability. The largest outlier showed a difference of +35 cm³ out of 169 cm³, a 20% difference. Inspection of this image showed that in some slices, the expert analysts disagreed on the placement of the pericardial sac.
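The inter-reader comparison above (bias, spread, and largest outlier from a Bland-Altman analysis) can be reproduced with standard statistics; the sketch below is a generic illustration with made-up volumes, not the study's analysis code:

```python
import numpy as np
from scipy import stats

def bland_altman(a: np.ndarray, b: np.ndarray) -> dict:
    """Bias, limits of agreement (bias ± 2 SD), Pearson R, and paired t-test
    p-value for two sets of EAT volume measurements (cm^3)."""
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    loa = (bias - 2 * sd, bias + 2 * sd)
    r, r_p = stats.pearsonr(a, b)
    _, t_p = stats.ttest_rel(a, b)   # tests whether the mean difference is zero
    return {"bias": bias, "loa": loa, "R": r, "pearson_p": r_p, "ttest_p": t_p}

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    analyst = rng.uniform(40, 170, size=39)             # toy volumes, cm^3
    deepfat = analyst + rng.normal(1.0, 8.0, size=39)   # toy automated volumes
    print(bland_altman(deepfat, analyst))
```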
Figure 7 shows scatter plots comparing the segmentations from the same two analysts against DeepFat for the same 39-image testing set. There was good agreement with both analyst1 and analyst2 (R=0.9852 and R=0.9731, respectively). Dice scores were 88.53% and 87.24% against analyst1 and analyst2, respectively. Interestingly, there was slightly better agreement with analyst1 than with analyst2, probably because analyst1 had labeled more volumes in the training set than analyst2. In both Figures 6 and 7, the Bland-Altman plots show increased differences at higher volumes, probably indicating that an error in the placement of the pericardial sac results in a larger volume difference. As there was relatively little difference between analyst1 and analyst2, we averaged their volumes for further analysis of DeepFat (Figures 7C and 7D); the scatter plot of DeepFat volumes against this average is discussed below.
Figure 6. Comparison of total EAT volumes manually analyzed by two different analysts. Good agreement is observed between analyst1 and analyst2 in both the scatter plot and the Bland-Altman plot. Bias is small (1.9 cm³), only 1-2% of the measured volumes. Nevertheless, there are substantive differences for some images, shown as outliers. For example, the largest negative outlier in the Bland-Altman plot has a difference of approximately -35 cm³, or a 20% difference. In such volumes, the pericardial sac is not clearly identified, likely due to motion or noise in larger patients.
Discussion
Our novel fully-automated DeepFat method for analysis of EAT in non-contrast CT images showed excellent results in terms of Dice score and measured EAT volumes. Automatically obtained volumes compared favorably to manually obtained values, with a percent difference of only 0.91%±10.1. When the gold standard is manual analysis, an exacting criterion for automated analysis is that it falls within the uncertainty between analysts. DeepFat met this criterion. When we plotted EAT volumes for analyst1 versus analyst2, the data clustered near the idealized line (Figure 6, R=98.8). We saw similar visual results when DeepFat volumes were plotted as a function of the average of the two analysts (Figure 7C, R=98.2), suggesting that the agreement of DeepFat with the analysts is about as good as the agreement of one analyst with another. The bias of DeepFat values was very small (0.9 cm³), considering that many fat volumes exceed 100 cm³. The percent difference for DeepFat (0.91%±10.1) compared favorably to the percent difference between analysts for EAT (1.91%±8.1). Paired t-tests of the mean differences gave p=0.29 for DeepFat/mean-of-analysts, comparable to p=0.07 for analyst1/analyst2, again indicating that DeepFat performed well compared to the analysts. Omitting the single outlier (the image with an average-automated difference of -32.4 cm³) gave even better results, with paired t-test p-values of 0.10 and 0.15 for DeepFat/mean-of-analysts and analyst1/analyst2, respectively. Altogether, these findings imply that the automated DeepFat algorithm performs as well as the analysts for measuring EAT.
We note some important aspects of the DeepFat algorithm revealed by our study. First, deep learning segmentation of the region inside the pericardial sac was superior to methods that try to identify the thin contour of the sac directly. Regional segmentation allowed us to use the Dice loss function and avoid the large class imbalance that we would see with contour segmentation.
Second, we determined that it is important to use the HU-attention-window; otherwise, small contrasts are lost in data preparation (e.g., when creating 8-bit data in the DeepLab-v3 plus implementation used by us, or in the numerical optimization of weights). Third, the bisect method greatly improved segmentations at the top and bottom of the heart. Essentially, using the look-ahead slab-of-slices allowed the deep learning algorithm to learn the curvature of the sac at the top and bottom of the heart. Adding the bisect step improved the Dice score from 85.3% to 88.5% (Figure 4). Fourth, augmentation played a key role, as it enriched the deep learning with variations of cases to train the network. In particular, we found that it was important to add the image blurring augmentation. Finally, DeepLab-v3 plus was found to be superior to the other networks tested for analyzing EAT (Table S1).
The slice-based plots, which to our knowledge had not been investigated previously, provided a detailed per-slice segmentation evaluation. Since the deep network learns the EAT per slice, this analysis revealed, via quartile-grouped slices, the regions where the network struggles. The detailed slice-by-slice plots made it possible to distinguish the deep learning difficulties in segmenting the upper and lower slices when the bisect method is not used, which underscores the need for our bisect method.
We compared our results to those in four recent publications (Table 1). DeepFat with bisect compared favorably to all methods despite differences in algorithms, cohorts, and imaging methods. DeepFat achieved the best R-value among all methods. It gave the best Dice score among publications using non-contrast CT, and was only slightly worse (0.18% absolute) than studies of high-resolution, contrast-enhanced CTA (e.g., He et al. 9). This difference is most likely insignificant given statistical variations. CTA has thinner slices (0.5 mm thickness), producing roughly five times as many slices as non-contrast CT images (2.5 mm thickness), and CTA uses a contrast agent that further improves the detection of fat.
In conclusion, our automated DeepFat EAT segmentation method, with the HU-attention-window and bisect improvements, outperformed methods reported in recent studies for quantifying EAT in CT images. The method appears to be appropriate for use in substantive population studies. Nevertheless, we plan to perform a manual review of the automated results to further investigate errors that are readily identified (e.g., the outlier described above). As we gather more training data, possibly from manual corrections of automated segmentations, we anticipate even better performance with DeepFat.
Figure 1. Manual segmentation of epicardial adipose tissue (EAT) on non-contrast CT images. Each 2D axial slice is displayed as in (A). The HU-attention-window with window/level of 350 HU/40 HU improved visualization of the pericardium (B). The pericardium (or pericardial sac) is marked with arrows in the inferior region (B), with the area bounded in red expanded in panel (C). In addition to axial views, we often rely on sagittal views (not shown) to help identify the pericardium when the location is unclear. The expert analyst draws contours to distinguish the pericardium, shown in green in (D). Finally, EAT is identified as interior voxels thresholded within the fat window [-190 HU, -30 HU], as shown in blue in (E).
Figure 2. The full structure of the automated EAT segmentation training process and the preprocessing steps. A HU-attention-window/level of 40 HU/350 HU is shown in (1). A look-ahead slab-of-slices with increasing size is presented to the network with the slice of interest and the two upcoming slices on the increasing side, as in (2). We divide the heart slices into two halves, where the lower half is sequenced from bottom-to-middle and the upper half from top-to-middle, thereby keeping an increasing curvature of the sac and presenting similar images to the network in training (bisect method), as in (2). Different data augmentations enrich the deep learning with variations of cases, shown in (3). Finally, the DeepLab-v3 plus network is trained with each of the three sequenced patches with a single corresponding mask slice, as in (4).
Figure 3. Automated segmentation of epicardial adipose tissue (EAT). Axial non-contrast CT images (A, F, and K), manual segmentation in blue (B, G, and L), and automated segmentation (using DeepFat with the bisect method) in red (C, H, and M). Combined manual and automated EAT segmentation is shown in D, I, and N, where red represents manual, blue represents automatic, and white is the overlapping area. Pericardial sac contours using the same color scheme are shown in E, J, and O. The total subject EAT Dice scores are 86.8%, 92.3%, and 92.4% in the rows with low (A-E), intermediate (F-J), and high fat (K-O). Errors tend to be at the edges of the pericardial sac.
Figure 4.
Comparison of Dice scores for DeepFat with and without HU-attention-window and bisect.Plot shows Dice scores for the 39 images in the held-out (testing) set.Dice was calculated against the manual ground truth.Average Dice scores for without bisect, without HU-attention-window (WL); without bisect, with HU-attention-window; with bisect, without HU-attention-window; and with bisect, with HU-attention-window, are 83.0%±4.5, 85.3%±3.6,88.0%±3.5, and 88.5%±3.4,respectively.See text for other analysis details. Figure 5 . Figure5.Impact of bisect on automated EAT volume analysis with DeepFat.With bisect, data points are clustered near the line of identity (D), giving much better results than without bisect (A).To evaluate the data, we computed R values and slopes from a fit of y=mx, which gives slopes (R) of 0.807 (0.9833) and 0.971 (0.9852), for A and D, respectively, again showing the value of the bisect modification.Comparing Bland-Altman plots (B and E), the bias and spread (limits of agreement or LOA corresponding to 2X standard deviation) are both reduced for bisect compared to without-bisect.Bias with bisect (E) is 1.5 cm 3 , on the order of only 1% of measured values.In the Bland-Altman plot (E), the single largest outlier for DeepFat at +40 cm 3 has an unusual automatic segmentation, which is easily identified and corrected.Panels C and F show results for image slice volumes, with slices color coded as to location in the heart.In general, image slices at the top and bottom of the heart have the most errors; this is improved with the bisect modification, resulting in better than 90% reduction in bias to only 0.04 cm 3 . Figure 7 . Figure 7.Comparison of total EAT volumes analyzed by DeepFat and the two analysts.A slightly higher correlation is found with Analyst1 (A) than with Analyst2 (B), indicated by R=0.9852 and R=0.9731, respectively.C and D compare DeepFat to volumes averaged for Analyst1 and Analyst2, giving R values only slightly inferior to that for Analyst1.Scatter plot of DeepFat volumes against the average of Analyst1 and Analyst 2 (C) is visually comparable to that between Analyst1 and Analyst2 in Fig.6, indicating that DeepFat performs well compared to analysts.Bland-Altman plot (D) compares favorably to that for Analyst1 versus Analyst2 in Fig.6.Average-DeepFat shows a 50% reduction in bias compared to Analyst2-Analyst1.The spread with average-DeepFat is only a little larger (20%) than that for Analyst2-Analyst1.Note that all results in Figs. 6 and 7 come from the same held out (testing) set of images. TABLE 1 COMPARISON OF DEEPFAT RESULTS TO RESULTS REPORTED IN PREVIOUS STUDIES CNN, Convolutional Neural Network; CT, thick-slice non-contrast CT; CTA, thin-slice with contrast agent.
6,967.8
2021-01-01T00:00:00.000
[ "Medicine", "Computer Science" ]
Impacts of radiative accelerations on solar-like oscillating main-sequence stars Chemical element transport processes are among the crucial physical processes needed for precise stellar modelling. Atomic diffusion by gravitational settling nowadays is usually taken into account, and is essential for helioseismic studies. On the other hand, radiative accelerations are rarely accounted for, act differently on the various chemical elements, and can strongly counteract gravity in some stellar mass domains. In this study we aim at determining whether radiative accelerations impact the structure of solar-like oscillating main-sequence stars observed by asteroseismic space missions. We implemented the calculation of radiative accelerations in the CESTAM code using the Single-Valued Parameter method. We built and compared several grids of stellar models including gravitational settling, but some with and others without radiative accelerations. We found that radiative accelerations may not be neglected for stellar masses larger than 1.1~M$_{\odot}$ at solar metallicity. The difference in age due to their inclusion in models can reach 9\% for the more massive stars of our grids. We estimated that the percentage of the PLATO core program stars whose modelling would require radiative accelerations ranges between 33 and 58\% depending on the precision of the seismic data. We conclude that, in the context of Kepler, TESS, and PLATO missions, which provide (or will provide) high quality seismic data, radiative accelerations can have a significant effect when inferring the properties of solar-like oscillators properly. This is particularly important for age inferences. However, the net effect for each individual star results from the competition between atomic diffusion including radiative accelerations and other internal transport processes. This will be investigated in a forthcoming companion paper. Introduction Understanding and modelling the transport of chemical elements inside stars still remain difficult challenges for the theory of stellar structure and evolution. Chemical abundances play an important role in determining the structure and evolution of stars. The internal distribution of chemical elements results from the competition of several transport processes within the star which are still barely understood and/or poorly modelled. Transport processes can be constrained using photospheric observations, but the impact on the internal structure can only be probed using stellar oscillations. The CoRoT (Baglin et al. 2013) and Kepler (Gilliland et al. 2010) space missions provided a wealth of high-quality photometric light curves. Seismic data derived from these observations improved the characterisation of the observed main-sequence stars and provide constraints on their internal structures (for reviews, see Chaplin et al. 2013;Deheuvels et al. 2016;Christensen-Dalsgaard 2016). The PLATO ESA mission (Rauer et al. 2014) will be launched in 2026 and offers a new perspective to constrain our stellar evolution models further. The objectives of the project are the detection and the full characterisation of Earth-like planets orbiting solar-like stars, and the study of the evolution of star-planet systems. While the detection of exoplanets requires very high signal-to-noise ratios and long observing times, the full characterisation of these detected objects requires the precise determination of the stellar parameters of the host-stars. 
The aim of the PLATO mission is to observe a large number of stars while combining two techniques: the detection by photometric transit and a ground-based follow up in radial velocity which will provide the planet-tohost star radius and mass ratios, respectively; asteroseismology analysis (coupled with spectroscopic observations) which will provide precise masses, radii, and more importantly ages of the host stars. The goal is to reach uncertainties of the order of or less than 3% in radius and 10% in mass for the planets. This translates into the need to reach uncertainties of the order of or less than 2% in radius and 15% in mass for the host-stars. A PLATO objective is also to reach an uncertainty as small as 10% for the age determination of a solar-like host-star. The current stellar models are still not able to provide such accuracy. The study of the competition between microscopic and macroscopic transport processes is a necessary step towards more accurate stellar models. Helioseismology showed the necessity of including atomic diffusion to properly model the Sun (Christensen-Dalsgaard et al. 1993). It is a microscopic process which occurs in every star due to the gradients of T , P, etc. This process was first discussed by Eddington (1926) and the importance of radiative accelerations was first recognised by Michaud (1970) and Watson (1971). The diffusion velocity of an element mainly depends on two forces (or accelerations): gravity, which makes the element migrate toward the centre of the star, and radiative accelerations, which generally push the element up toward the surface. The latter is due to the capability of ions to absorb photons (according to their atomic properties) and to acquire part of their momentum. Atomic diffusion principally results from the competition between these two forces. For G-, F-, and late A-type main-sequence stars (Population I and II), models including atomic diffusion may produce depletions or accumulations of chemical elements that are too large if no additional mixing other than convection is considered. This is the reason why models need to include additional macroscopic transport processes to reproduce the observed surface abundances (e.g. Korn et al. 2007). Atomic diffusion can then be used as a proxy to determine the efficiency of macroscopic transport processes or the rate of mass loss needed to reproduce observations and then predict which processes play a role (e.g. Talon et al. 2006;Michaud et al. 2004Michaud et al. , 2011. Atomic diffusion leads to local modifications of the abundance profiles, and thus to a modification of the Rosseland opacities. This has important structural effects in stars, for example the opacity-induced iron and nickel convection zone triggered by the local accumulation of these species around 200 000 K and where these elements are the main contributors to the opacity in F-and A-type stars (Richard et al. 2001;Théado et al. 2009;Deal et al. 2016). This opacity modification close to the bottom of the surface convection zone also causes an increase of the mass of the surface convection zone in F-type stars (Turcotte et al. 1998a). The local accumulation of elements may also lead to an inverse mean molecular weight gradient which triggers thermohaline (or fingering) convection in F-and A-type stars (Théado et al. 2009;Deal et al. 2016) and in B-type stars (Hui-Bon-Hoa & Vauclair 2018). 
It was shown that neglecting radiative accelerations in the modelling of 94 Ceti A (an F-type star showing solar-like oscillations) using asteroseismic data leads to a 4% age difference (Deal et al. 2017). Currently only a few evolution codes incorporate consistent computations of stellar models including the complete treatment of atomic diffusion. The Montreal/Montpellier code (Turcotte et al. 1998b) computes radiative accelerations using OPAL monochromatic data and the opacity sampling method (e.g. LeBlanc et al. 2000). The Toulouse Geneva Evolution Code (Hui-Bon-Hoa 2008; Théado et al. 2012) includes the OPCD package 1 from the Opacity Project calculations (Seaton 2005) for the opacities and computes radiative accelerations using the single-valued parameter (SVP) approximation proposed by Alecian & LeBlanc (2002) and LeBlanc & Alecian (2004). The SVP approximation allows very fast computations with no need for monochromatic data as they are tabulated within the method. The MESA code computes Rosseland mean opacities and radiative accelerations with the OPCD3 method (Paxton et al. 2018) optimised by the work of Hu et al. (2011). In the present paper, we add to the above list the CESTAM code (Marques et al. 2013) where we implemented the radiative accelerations within the framework of the SVP approximation while using the OPCD3 package for calculations of opacities. Atomic diffusion has an important impact on the structure of stars. The effects are detectable in the Sun. It has also been shown to play a role in several other types of pulsating stars (Charpinet et al. 1997;Turcotte et al. 2000;Alecian et al. 2009;Théado 2012). Our aim here is to determine whether atomic diffusion, including the effect of radiative accelerations, needs to be taken into account in the modelling of solar-like oscillating main-sequence stars. This is a prerequisite for an optimal interpretation of the data provided by CoRoT and Kepler and by future space missions such as TESS and PLATO. Macroscopic transport processes such as those induced by turbulent convection and/or rotation also play an important role, and the competition with atomic diffusion is not straightforward; several parameters come into play and the net result likely depends on the type of stars, if not on the specificities of each individual star. We have therefore started an in-depth study which should ultimately provide the net result of this competition on the transport of chemical elements and the associated consequences on the structure, the evolution of the star, and its solar-like oscillating properties. The present paper is the first step of this study. Our purpose here is a theoretical quantification of the sole impact of atomic diffusion -more specifically the radiative acceleration process -on the structure, surface abundances, and some basic seismic properties of stars. No macroscopic processes other than convection are taken into account. The results presented here may then be interpreted as the maximum impact of atomic diffusion including radiative accelerations. The inclusion of the competitive effect of rotationally induced mixing as allowed by our evolutionary code is in progress and will constitute the second paper of the series. The paper is organised as follows: we first detail the new developments of the CESTAM code in Sect. 2. 
Section 3 then presents the grids of stellar models which focus on low-mass main-sequence stars and the impact of the radiative accelerations on the stellar structure and chemical abundances by comparing models computed with and without radiative accelerations. Some seismic implications are presented in Sect. 4. The impact of the radiative acceleration on the surface iron abundance and thereby on the stellar characterisation are discussed in Sect. 5, while Sects. 6 and 7 are devoted to discussions and conclusions, respectively. Standard physics The stellar models are computed using the CESTAM code (Marques et al. 2013); it is based on the CESAM code (Morel 1997;Morel & Lebreton 2008), and it has a more detailed treatment of rotationally induced transport processes. Here we do not consider the effect of rotation. A second forthcoming paper will discuss the net results of the competition between atomic diffusion (including radiative accelerations) and rotationally induced transport of angular momentum and chemical elements. The CESTAM models can be computed using the opacities given by the OP (Seaton 2005) or OPAL (Iglesias & Rogers 1996) tables complemented at low temperature by the Wichita opacity data (Ferguson et al. 2005). The equation of state used is OPAL2005 (Rogers & Nayfonov 2002). The nuclear reactions are taken from the NACRE compilation (Angulo 1999) except for the 14 N(p, γ) 15 O reaction, for which we used the LUNA reaction rate given in Imbriani et al. (2004). The convection was treated following (Canuto et al.1996; hereafter CGM) with a mixing-length l = α CGM H P , where H P is the pressure scale height. We took into account the overshooting of the convective core, with an overshoot extent of 0.15 × min(H P , r cc ), where r cc is the radius of the Schwarzschild convective core. This choice is compatible with recent determinations of the overshooting extent based on the study of eclipsing binaries (Claret & Torres 2016) and on asteroseismology of solar-type stars (Deheuvels et al. 2015). The atmosphere is computed in the grey approximation and integrated up to an optical depth of τ = 10 −4 with no mass loss taken into account. We used the solar mixture of Asplund et al. (2009) with meteoritic abundances for refractory elements as recommended by Serenelli (2010). In CESTAM two formulations are available for atomic diffusion: the first is based on the work of (Michaud & Proffitt 1993;hereafter MP93) and the second on the Burgers equations (Burgers 1969). Here we used the MP93 formulation. The MP93 approximation used in the CESTAM code considers the diffusion of trace elements (with partial ionisation) in a fully ionised plasma of H and He. This is an approximation of the Burgers equations. Some comparisons were made with the full Burgers treatment for the Sun (Turcotte et al. 1998b), and in the framework of the Evolution and Seismic Tool Activity (ESTA) for the CoRoT mission for the effect of gravitational settling only (Thoul et al. 2007;Montalbán et al. 2007;Lebreton et al. 2008). The advantage of the MP93 method is that computational times are very short. Partial ionisation Partial ionisation, which is often not considered in evolution codes, is extremely important for atomic diffusion calculations (Montmerle & Michaud 1976;Michaud et al. 
2015), firstly because radiative acceleration depends on the atomic properties of ions, and secondly because the diffusion velocity is proportional to the diffusion coefficient (D_ip), which is proportional to Z_i^-2 (where Z_i is the electric charge of the ion in proton charge units). Hence, for instance, for two ions with respective charges Z_i of 5 and 6 undergoing the same resultant acceleration in the same stellar layer, the velocity of the ion with charge 6 is 30% lower than that of the ion with charge 5. Another example: assuming that iron is fully ionised in diffusion velocity calculations around the depth where the iron opacity bump occurs (log T ≈ 5.2) gives an erroneous velocity estimate by more than a factor of 10. The error made by assuming full ionisation in atomic diffusion velocity calculations is larger for stars with a small surface convection zone (larger T_eff), since ions have lower Z_i at its bottom (cooler layers). Therefore, neglecting partial ionisation in diffusion calculations of chemical elements leads to large underestimates of the diffusion velocities. In this study, partial ionisation of heavy elements is taken into account through an average electric charge Z̄_i (instead of Z_i) for each element. This significantly simplifies the numerical treatment of the diffusion equations (see Sect. 2.3) since individual ions do not need to be considered (the same approximation is used in Turcotte et al. 1998b). Hereafter, i represents an element whose atoms locally possess an average electric charge Z̄_i depending on the local plasma conditions.
Diffusion equation
The equation describing the evolution of the chemical composition reads
$$\rho \frac{\partial c_i}{\partial t} = \frac{1}{r^2}\frac{\partial}{\partial r}\left( r^2 \rho\, D_{\rm turb} \frac{\partial c_i}{\partial r} \right) - \frac{1}{r^2}\frac{\partial}{\partial r}\left( r^2 \rho\, c_i v_i \right) + \rho \lambda_i, \qquad (1)$$
where c_i is the concentration of element i, ρ is the density in the considered layer, D_turb is a turbulent diffusion coefficient, and λ_i is the nuclear reaction rate related to element i. In Eq. (1), v_i is the atomic diffusion velocity, which can be expressed in the case of a trace element i as
$$v_i = D_{ip}\left[ -\frac{\partial \ln c_i}{\partial r} + \frac{A_i m_{\rm P}}{k T}\left(g_{{\rm rad},i} - g\right) + \frac{(\bar{Z}_i + 1)\, m_{\rm P}\, g}{2 k T} + \kappa_T \frac{\partial \ln T}{\partial r} \right], \qquad (2)$$
where D_ip is the diffusion coefficient of element i relative to protons and A_i is its atomic mass. The variable g_rad,i is the radiative acceleration on element i, g is the local gravity, Z̄_i is the average charge (in proton charge units) of element i (roughly equal to the charge of the "dominant ion"), m_P is the mass of a proton, k is the Boltzmann constant, T is the temperature, and κ_T is the thermal diffusivity. It should be noted that Z̄_i is used when estimating D_ip. The competition between macroscopic transport processes and atomic diffusion is given by the first two terms on the right-hand side of Eq. (1).
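For illustration only, Eq. (2) as reconstructed above can be evaluated numerically as in the sketch below; the physical constants are standard cgs values, but the input numbers and the function name are entirely made up and this is not part of CESTAM:

```python
import numpy as np

K_B = 1.380649e-16     # Boltzmann constant, erg/K (cgs)
M_P = 1.67262192e-24   # proton mass, g

def diffusion_velocity(D_ip, dlnc_dr, A_i, Zbar_i, g_rad_i, g, T,
                       kappa_T=0.0, dlnT_dr=0.0):
    """Trace-element diffusion velocity following the reconstructed Eq. (2),
    in cgs units; positive values point outward (towards the surface)."""
    gravity_terms = (A_i * M_P / (K_B * T)) * (g_rad_i - g) \
                    + (Zbar_i + 1.0) * M_P * g / (2.0 * K_B * T)
    return D_ip * (-dlnc_dr + gravity_terms + kappa_T * dlnT_dr)

# Illustrative call: iron (A = 56) near the base of a surface convection zone,
# with a radiative acceleration comparable to gravity (all values invented).
v = diffusion_velocity(D_ip=10.0, dlnc_dr=0.0, A_i=56.0, Zbar_i=16.0,
                       g_rad_i=2.0e4, g=2.5e4, T=2.0e5)
print(f"{v:.3e} cm/s")
```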
Atomic diffusion
In some evolution codes including atomic diffusion, a mixture of hydrogen, helium, and a mean heavy element with respective mass fractions X, Y, and Z is considered (e.g. Thoul et al. 1994). This (X, Y, Z) mixture treatment of atomic diffusion gives acceptable results (depending on the required accuracy) for stars with masses close to that of the Sun, i.e. in stars where radiative accelerations are systematically weak compared to gravity (i.e. gravitational settling is dominant). However, this approximation is no longer valid for more massive stars, where radiative accelerations dominate gravity. In this case, the migration of chemical elements is often towards the surface, depending on the interaction of their ions with the radiation flux. The sign and intensity of the diffusion velocity of a given species depend on the atomic properties of the dominant ions, and on depth (or local physical conditions). This is why elements cannot be treated as a unique mean heavy element Z.
In its present version, CESTAM computes the evolution of the abundances of all the elements available in the OPCD3 package (Seaton 2005) and of some isotopes: H, 3He, 4He, 12C, 13C, 14N, 15N, 15O, 16O, 17O, 22Ne, 23Na, 24Mg, 27Al, 28Si, 31P (without radiative accelerations), 32S, 40Ca, and 56Fe. It also takes into account partial ionisation when computing diffusion velocities (see Eq. (2)), which is a major new development in the evolution code under consideration. It is shown in the next sections that modifications of the structure and surface abundances of stars occur when Z̄_i is used instead of the charge of the fully ionised element.
Radiative accelerations in CESTAM are computed using the SVP approximation proposed by Alecian & LeBlanc (2002) and LeBlanc & Alecian (2004). There are mainly three ways of computing radiative accelerations, and the SVP approximation is one of them (see Alecian 2018): (i) direct use of atomic data (the most accurate method, but the most computationally expensive to carry out); (ii) use of opacity tables with fixed frequency grids (less accurate, but numerically lighter); (iii) use of parametric approximations (less accurate than (ii), but numerically extremely fast). The first method is generally used to compute radiative accelerations in stellar atmospheres (Hui-Bon-Hoa et al. 2000; Alecian & Stift 2004; LeBlanc et al. 2009) and necessitates direct integration over atomic transition profiles. The second is valid for stellar interiors and is used in the Montreal/Montpellier code; it is also employed (with interpolation techniques) in the OPCD3 package (Seaton 1997, 2007). The third method corresponds to the SVP approximation and is only valid for stellar interiors.
The SVP method is based on a simplified form of the equations for radiative accelerations, obtained by separating the terms involving the atomic quantities from those describing the local plasma. The SVP method needs very small tables, contrary to the other methods. These small tables, which only provide six parameters per ion, are pre-calculated for various stellar masses, and the numerical routines interpolate these data to fit the mass of the considered star (some tables can be found on the website 2, and a larger set of tables is in preparation). This method is numerically efficient and is tailored for use in stellar evolution codes. The SVP method was implemented in the TGEC code, and we proceeded in the same way for its implementation in CESTAM, using the same set of tabulated parameters as for TGEC. In this study, radiative accelerations are computed for C, N, O, Ne, Na, Mg, Al, Si, S, Ca, and Fe. The SVP parameters were calculated with the use of the Opacity Project data (Seaton 1992; Cunto et al. 1993).
In order to avoid numerical instabilities due to the sharp abundance gradients produced by radiative accelerations, we added an ad hoc turbulent mixing coefficient, as done by Théado et al. (2009) and Deal et al. (2016). This turbulent coefficient is parametrised by D_bcz, its value at the bottom of the surface convection zone (at radius r_bcz), and a width ∆ (in units of the stellar radius) that sets how far the mixing extends below r_bcz. For the grids we chose D_bcz,1 = 500 cm² s⁻¹ with ∆_1 = 0.02, and D_bcz,2 = 200 cm² s⁻¹ with ∆_2 = 0.1. This turbulent mixing coefficient was chosen so as not to significantly affect the evolution of the star, and it has a negligible effect on the results presented below.
Opacity tables
In our models, atomic diffusion notably modifies the initial mixture of heavy elements in the outer layers, which implies that pre-computed Rosseland opacity tables cannot be used throughout the interior and all along the evolution. We therefore had to recompute the Rosseland mean opacity locally at each timestep in the layers where the mixture changes considerably. For this purpose, we implemented in CESTAM a dedicated routine (mx.f) which handles the monochromatic opacity tables from the OPCD3 package (Seaton 2005). Since running the mx.f routine is time-consuming, we recomputed the Rosseland opacity only in the outer layers, when log(T) ≲ 6.23. We note the following: at higher temperatures, to save computing time, we used the pre-computed Rosseland mean OP opacity tables described in Sect. 2.1; at low temperatures (T < 10^4 K), the OPCD3 opacities are still available, and for consistency we preferred to use them rather than the more complete Wichita tables (which provide Rosseland means including molecular lines for a given mixture, but are not available in the form of monochromatic opacities). The impact of using OPCD instead of Wichita opacities in the low-temperature domain is that we miss the molecular contribution to the opacity. This may have some impact on the stellar properties, especially for the colder stars. However, for these stars radiative accelerations are negligible, and since our goal is to perform a relative comparison, this should not significantly modify our conclusions.
Comparison and validation of the implementations
To verify the validity of the new developments presented in Sect. 2.4, we compared the results obtained with our new version of CESTAM to those obtained with the Montreal/Montpellier code. We chose a model of 1.4 M⊙ with the parameters listed in Table 1. Since the input physics of the models is not exactly the same (especially the equation of state and the opacity tables), the structures are slightly different, but close enough for our purpose. Figure 1 shows the abundance profiles of various elements. The agreement between the two codes is very satisfactory: the differences between them never exceed 3% for the surface abundances, and elements are depleted or accumulated in the same way. We have also compared models for more massive stars, and the agreement is at the same level. We are therefore confident in the use of this new version of the CESTAM code.
Effects of atomic diffusion on the internal structure
Our goal here is to evaluate the range of stellar mass and initial chemical composition for which radiative accelerations (hereafter g_rad) cannot be neglected when accurately computing the structure and evolution of solar-like oscillating main-sequence stars. This will allow us to determine the masses above which g_rad has to be taken into account to properly infer stellar parameters (age, mass, radius) from models. These are lower-limit masses because macroscopic transport processes (apart from convection) are not taken into account. This will also allow us to save computational time when the effects of g_rad are negligible. For that purpose we built two sets of stellar model grids, described below.
Our grids of models
We first define three grids of models listed in Table 2, each of them corresponding to a different metallicity. We have chosen masses in the range [0.9, 1.5] M⊙, a range for which g_rad is expected to have the most significant impact on the structure and evolution of solar-like oscillating main-sequence stars.
In order to cover the wide range of metallicities of the CoRoT, Kepler, and future TESS and PLATO targets, we have considered three values of the initial metallicity for grids 1-3, respectively [Fe/H]_ini = -0.35, +0.035, and +0.25 dex, with
$$[{\rm Fe/H}] = \log_{10}\!\left(\frac{X_{\rm Fe}}{X_{\rm H}}\right) - \log_{10}\!\left(\frac{X_{\rm Fe}}{X_{\rm H}}\right)_{\!\odot},$$
where X_H and X_Fe are the hydrogen and iron abundances in mass fraction. Models cover the whole main-sequence lifetime up to the stage where the central hydrogen content is X_C = 0.05. For each of these three grids, we have computed a first set of models including g_rad, and a second set without g_rad (gravitational settling only), with convection as the only macroscopic transport process. The values of the mixing-length parameter α_CGM and initial helium abundance Y_ini at solar metallicity were inferred from a solar model calibration. As g_rad is negligible in the Sun, the calibration was done with gravitational settling only. A solar calibration consists in adjusting the initial helium abundance Y_ini,⊙, metallicity (Z/X)_ini,⊙, and α_CGM of a 1 M⊙ model so that at the solar age it reaches the observed solar luminosity, radius, and photospheric metallicity (see Morel & Lebreton 2008). We obtained Y_ini,⊙ = 0.2578 and α_CGM = 0.68. From Y_ini,⊙ and a primordial helium abundance Y_BB = 0.247 (Peimbert et al. 2007), we obtained a helium-to-metal enrichment ratio ∆Y/∆Z = (Y_ini,⊙ - Y_BB)/Z_ini,⊙ = 0.9, which we used to set the initial helium abundance for models with other metallicities.
Evolutionary tracks
To characterise the differences in abundances and internal structure between models with and without g_rad, we computed the evolutionary tracks presented in Fig. 2 for the three grids of models described in Table 2. Atomic diffusion significantly modifies the abundance profiles only down to a limit layer, below which the diffusion timescale is greater than the age of the considered star. If this limit layer is too close to (or above) the bottom of the outer convection zone, there is not enough time for atomic diffusion to play a significant role during the lifetime of the star. This is why the effects of atomic diffusion at solar metallicity are greater for stars with solar mass (Turcotte et al. 1998b) or greater, i.e. for stars with a superficial convection zone that is not deeper than that of the Sun. However, it should be noted that significant effects for lower mass stars cannot be excluded, since the age of these stars may be old enough (see Dotter et al. 2017). Moreover, since at low metallicities surface convective zones are shallower, atomic diffusion may be efficient at lower masses (Richard et al. 2002).
In Fig. 2 the evolutionary tracks are shown for several initial metallicities (i.e. representative of the photosphere when abundances are still homogeneous outside the stellar core) and for masses ranging from 0.9 to 1.5 M⊙. For the lowest metallicity ([Fe/H]_ini = -0.35) the role of g_rad is evident for masses higher than 1.1 M⊙. This lower mass threshold is 1.3 M⊙ for [Fe/H]_ini = 0.035, and 1.45 M⊙ for [Fe/H]_ini = +0.25. The role of g_rad is stronger at low metallicity because g_rad values are higher for lower abundances. This is a radiation transfer effect, since the momentum transfer between the net radiation flux and the considered element is strongly dependent on the saturation of bound-bound atomic transitions (Alecian & LeBlanc 2000).
Abundance variations
Competition between gravity and g_rad leads to a migration of the chemical elements inside the stable zones of stars (when no mixing is at work).
When g_rad is not taken into account, all the elements (except hydrogen) migrate toward the centre of the star due to gravitational settling, and this may cause a strong depletion of metals at the surface. Taking g_rad into account generally prevents this abnormal superficial depletion (see Ne, Mg, and Ca in Fig. 3). In some cases g_rad is so high at the bottom of the surface convection zone that metals enter the convection zone and their superficial abundances increase (see Al and Fe in Fig. 3 for the 1.4 M⊙ case). These changes in the element distribution inside the star, iron in particular, explain the slightly different evolution of the models in Fig. 2. This shows that [Fe/H], an observable parameter characterising stars, may be affected by the inclusion of g_rad. When [Fe/H] is used as an observational constraint in stellar evolution calculations to determine unknown stellar parameters like age or mass, the error in that determination will likely be larger if the grid is computed without the effect of g_rad (see Sect. 5). In our three grids, the difference in [Fe/H] between the models with and without g_rad goes from 0 to 1.7 dex (see Fig. 4). As discussed previously, the effect of g_rad for the highest metallicity grid (grid 3) is lower than for the others, and this is visible in the difference in [Fe/H]. Despite this, the difference in [Fe/H] is larger for the 1.4 M⊙ model of grid 2 than for the 1.2 M⊙ model of grid 1, even though g_rad is more efficient for the models of grid 1. This is due to the deepening of the surface convection zone, which is larger at low metallicity and dilutes the accumulated iron more efficiently in the surface convective zone (see Sect. 3.4).
The surface abundances of some elements (He, C, N, and O for instance) in our computations are not representative of the values obtained from observations of G- and F-type stars (at least during a fraction of the evolution of the models). The maximum depletion observed for these elements is ≈0.4 dex for stars with solar metallicity (see Adibekyan et al. 2012; Bensby et al. 2014; Brewer et al. 2016). These elements are not supported (or only weakly supported) by radiative accelerations and are largely depleted in the models even when g_rad is taken into account. This result is expected because these models do not include additional mixing processes (e.g. induced by rotation), which should reduce these large depletions. The abundances of the present study can then be considered as upper limits of what can be obtained from more complete models including atomic diffusion and competing macroscopic processes.
Fig. 4. Evolution of the difference in [Fe/H] between models with and without g_rad for the three grids of models. X_C is the central hydrogen mass fraction. The solid lines show the differences for models including the effect of partial ionisation, while the dashed lines show the differences when this process is not taken into account.
Position of the bottom of the surface convection zone
In the mass range covered by our model sample, the main abundance differences between the two sets of models occur inside the convection zones, due to the diffusion flux of iron at their bottom. There is no significant accumulation of metals in the layers below the surface convection zone, where atomic diffusion processes are too slow to produce abundance stratifications, contrary to what happens in A- and B-type stars (Richard et al. 2001; Théado et al. 2009; Deal et al. 2016).
Here the structure of the models is modified only near the stellar surface. The accumulation of iron, aluminium (model 1.4 M⊙ of grid 2, see Fig. 3 for example), and calcium (model 1.2 M⊙ of grid 2; see Fig. 3), or the depletion of the other elements, has a direct influence on the Rosseland opacity. (Figure caption: Rosseland mean opacity profiles of 1.4 M⊙ models with and without g rad . The blue and red curves represent respectively the models without and with g rad . The solid, dashed, and dotted vertical lines represent the position of the bottom of the surface convection zone for the model without g rad for the same values of X C as the opacity profiles; for clarity, they are not represented for the model with g rad .) The opacity difference is more important close to the bottom of the surface convection zone (an increase of 65% at X C = 0.4), and this has a direct influence on the evolution of the star (i.e. on the structure) and on the surface abundances. As iron is one of the main contributors to the opacity in this region, its accumulation leads to a higher opacity than that obtained with gravitational settling alone. As a result, the bottom of the surface convection zone is always deeper when g rad is taken into account (see upper panels of Fig. 6), as was already shown by Turcotte et al. (1998a) for F-type stars. The more massive the star, the more important the deepening of the surface convection zone due to g rad . Once again this effect is larger for lower metallicity stars. The maximum difference, which can be obtained from models with and without g rad , reaches 120% for grid 1 and goes down to 65% and 5% for grids 2 and 3 for the more massive models of the three grids. We note that the deepening of the convection zone is smaller in our models than in the Turcotte et al. (1998a) models. We presume that this could be due to the fact that the radiative acceleration for Ni, which significantly contributes to the opacity, is presently missing in our calculations. The new SVP tables that are in preparation (Alecian & LeBlanc, in prep., priv. comm.) should improve our models in the near future. Variation of the stellar radius We have seen in previous sections that the accumulation of metals modifies superficial abundances, opacity profiles, and the size of the convection zone. Since the structure of the star is modified, so is the radius. Accurate knowledge of the radius is important in order to characterise exoplanets found by the transit method. If we compare the stellar radii computed without g rad to those computed with g rad (see lower panels of Fig. 6), models with g rad always give larger radii. The maximum difference which can be obtained from models with and without g rad never exceeds 2% and is at the level of the uncertainties requested for the PLATO objectives. The increase in radius in our g rad models is linked to a decrease in the mean density due to atomic diffusion including g rad ; the same effect (but smaller in magnitude) was found for the Sun by Turcotte et al. (1998b). Seismic implications Our study confirms that g rad may have non-negligible effects on stars, especially on the iron surface abundance and on the size of the surface convection zone. Can these changes have detectable effects on the seismic properties of the star? We consider here only the global seismic indices, leaving a more comprehensive study of individual frequencies and frequency combinations for a forthcoming paper.
The global asteroseismic indices are the frequency at maximum power, ν max , and the averaged large frequency separation, ∆ν 0 (Chaplin et al. 2013). Scaling relations relating these seismic indices to stellar mass, radius, and effective temperature are expressed for solar-like oscillating main-sequence stars as (Kjeldsen & Bedding 1995) ν max /ν max,⊙ = (M/M⊙)(R/R⊙)^−2 (T eff /T eff,⊙ )^−1/2 and ∆ν/∆ν ⊙ = (M/M⊙)^1/2 (R/R⊙)^−3/2 . We showed that g rad has an impact on T eff and on the radii of stars for a given mass (Sect. 3), so an effect should be visible in the ν max and ∆ν values. In order to be detectable, the seismic signatures of g rad must be larger than the uncertainties arising from the observations. The Kepler Legacy sample of solar-like oscillating stars includes stars in the mass range 0.8−1.6 M⊙ with [Fe/H] in the range [−1, +0.5] dex. For most of these stars, Lund et al. (2017) obtained uncertainties for ν max and ∆ν in the approximate ranges 6-50 µHz and 0.05-0.2 µHz, respectively, depending on the apparent magnitude (in the range 6-11 mag) and the observing time (between 12 months and more than four years). The PLATO mission aims to measure individual frequencies of a reference star (1 M⊙, 1 R⊙, 6000 K) with uncertainties no larger than 0.2 µHz at magnitude 10 (Rauer et al. 2014). The PLATO uncertainties for ν max and ∆ν are expected to lie in similar ranges to those of Kepler at a given magnitude, but PLATO will observe a larger number of bright stars, and therefore the expected uncertainties are on the lower side of the range. For the purpose of comparison, we considered two sets of uncertainties on ν max and ∆ν (see Table 3). The first set (A) is based on the uncertainties of the best Kepler Legacy data (Lund et al. 2017) and the bulk of bright PLATO target stars. The second set (B) considers more conservative uncertainties. In the following we compare, for both sets, the effects of g rad on ν max and ∆ν 0 . (Fig. 6 caption: Evolution of the difference of the mass of the surface convection zone (upper panels) and of the difference of the radius of the models (lower panels) with the frequency at maximum power ν max (see Sect. 4) for the three grids of models. Dashed lines are for the same models but without the effect of partial ionisation.) We find that the values of ν max and ∆ν 0 are always lower for models including g rad . For ν max , the impact of g rad never exceeds 15 µHz except for the most massive models. This is more than three times lower than uncertainty set B, but 2.5 times larger than uncertainty set A. We conclude that g rad needs to be very efficient in order to produce a significant signature in the ν max value. The effects on ∆ν 0 are more important. The inclusion of g rad leads to differences that reach 2.4 µHz (for the 1.4 M⊙ model at solar metallicity), which is much larger than any uncertainty derived from Kepler data or expected from PLATO data. Because ∆ν 0 is directly related to the mean density of the star, differences in radius as small as 2% can still induce large differences in ∆ν 0 . We can now define the mass limit M L as the stellar mass above which the change in ∆ν 0 due to g rad is larger than the uncertainties considered in sets A and B. For set A, the values of M L are 1.05, 1.25, and 1.4 M⊙ for grids 1, 2, and 3, respectively. In the case of set B, the values of M L are lower (0.9, 1.1, and 1.2 M⊙ for grids 1, 2, and 3, respectively). These values of M L are listed in Table 4 and serve as references; below these masses, g rad can be neglected. For masses higher than these limits, the effect of g rad will depend on the efficiency of other transport processes.
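As a rough illustration of the detectability test above (not a computation from this work), the sketch below evaluates the standard scaling relations for two hypothetical 1.4 M⊙ models and compares the resulting ∆ν shift with two assumed uncertainty levels; the solar reference values, the model parameters, and the placeholder values standing in for Table 3 are all assumptions.

```python
import numpy as np

# Illustrative sketch (not from the paper): the standard scaling relations
# (Kjeldsen & Bedding 1995) applied to two hypothetical models, and a comparison
# of the resulting Delta-nu shift with two assumed uncertainty levels.
NU_MAX_SUN = 3050.0   # muHz, assumed solar reference
DNU_SUN = 134.9       # muHz, assumed solar reference
TEFF_SUN = 5777.0     # K, assumed solar reference

def nu_max(m, r, teff):
    """nu_max from the scaling relation, in muHz (m, r in solar units)."""
    return NU_MAX_SUN * m / r**2 / np.sqrt(teff / TEFF_SUN)

def delta_nu(m, r):
    """Mean large frequency separation from the scaling relation, in muHz."""
    return DNU_SUN * np.sqrt(m / r**3)

# Hypothetical 1.4 Msun models; the small radius and Teff shifts stand in for the
# structural effect of g_rad and are made-up numbers.
no_grad = (1.4, 1.500, 6600.0)      # (M/Msun, R/Rsun, Teff)
with_grad = (1.4, 1.520, 6570.0)

d_numax = abs(nu_max(*no_grad) - nu_max(*with_grad))
d_dnu = abs(delta_nu(*no_grad[:2]) - delta_nu(*with_grad[:2]))

print(f"|d nu_max|   = {d_numax:.1f} muHz")
print(f"|d Delta_nu| = {d_dnu:.2f} muHz")
for label, sigma in (("set A", 0.1), ("set B", 0.2)):   # placeholders for Table 3
    print(f"{label}: {'detectable' if d_dnu > sigma else 'not detectable'} "
          f"(assumed sigma = {sigma} muHz)")
```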
g rad -induced uncertainties on seismic ages When modelling a star using seismic constraints, the impact of g rad on ν max and ∆ν 0 generates an uncertainty on the age of the star. An order of magnitude of the age uncertainty can be obtained, for instance, by comparing the ages of standard and g rad models at fixed mass, metallicity, central hydrogen abundance, and ∆ν 0 . In such a configuration, we find that the age of the model with g rad is always smaller than that of the standard model in this study. The maximum difference due to g rad at metallicity [Fe/H] ini = 0.035 (grid 2) is obtained for the 1.4 M⊙ model at X C = 0.4 and ∆ν 0 = 82.90 µHz. The ages of the corresponding standard and g rad models are respectively 1.546 Gyr and 1.386 Gyr, that is, they differ in age by about 9%. Similarly, for the most massive models of grids 1 and 3, we obtain age differences of about 6% and 5%. g rad therefore contributes significantly to the age error budget for the most massive main-sequence stars showing solar-like oscillations. (Fig. 7 caption: Evolution with the central hydrogen content of the differences of frequency at maximum power, ν max , between models without and with g rad for the three grids (upper panels). The same, but for the average large separation ∆ν 0 (lower panels). Each colour corresponds to a given mass. The dashed lines represent the same models but without the effect of partial ionisation. The horizontal black dash-dotted lines indicate the adopted A uncertainty set, and the horizontal black dashed lines indicate the adopted B uncertainty set on ν max (upper panels) and ∆ν 0 (lower panels).) Acoustic depths of the base of the convection zone In Sect. 3.4 we showed that the depth of the surface convection zone increases when g rad is included. The question then is whether the g rad -induced change of the size of the CZ is significant. Solar-like oscillations enable the measurement of the acoustic depth of the base of the convection zone, which is defined as τ CZ = ∫ (from r CZ to R * ) dr/c s , where r CZ is the radius of the bottom of the surface CZ, c s the sound speed, and R * the radius of the star (Mazumdar & Antia 2001, and references therein). We therefore computed the acoustic depths for our models and compared the resulting g rad -induced differences ∆τ CZ to the observational uncertainties of the seismically measured τ CZ,obs . From our models, we find that the maximum g rad -induced differences for the convective sizes roughly correspond to ∆τ CZ ∼ 300 s for the 1.4 M⊙ model of grid 2 and to 340 s for the 1.2 M⊙ model of grid 1 at fixed X C . This difference goes down to 160 s for the first case when comparing models with the same radius. Seismically measured τ CZ,obs were obtained by Verma et al. (2017) for stars from the Kepler Legacy sample. These authors found typical uncertainties on τ CZ,obs of the order of 150 s for stars with masses of about 1.4 M⊙ and of the order of 75 s for stars with masses of about 1.2 M⊙. Thus, we can conclude that g rad must be taken into account in the models in order to determine the properties at the base of the surface convection zone at the level of precision of these seismic measurements.
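To illustrate how an acoustic depth of the kind compared above can be evaluated, here is a minimal numerical sketch; the sound-speed profile and the position of the convection-zone base are toy values, not output of the models discussed in this work.

```python
import numpy as np

# Illustrative sketch (toy profile, not a model from this work): numerical evaluation
# of the acoustic depth of the base of the convection zone,
#   tau_CZ = integral from r_CZ to R_* of dr / c_s.

R_star = 6.96e8            # stellar radius in m (solar value, assumed)
r_cz = 0.72 * R_star       # assumed radius of the base of the convection zone

r = np.linspace(r_cz, 0.9995 * R_star, 5000)       # stop just below the surface
c_s = 2.3e5 * np.sqrt(1.0 - r / R_star) + 7.0e3    # toy sound-speed profile in m/s

# Trapezoidal rule for the integral of 1/c_s over r
tau_cz = np.sum(0.5 * (1.0 / c_s[1:] + 1.0 / c_s[:-1]) * np.diff(r))
print(f"tau_CZ ~ {tau_cz:.0f} s")
```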
Impact of g rad on [Fe/H] and on the stellar parameter determinations With CoRoT and Kepler high-quality seismic data it is possible to determine very precise stellar parameters such as masses, radii, and ages for solar-like oscillating dwarfs (Lebreton & Goupil 2014; Silva Aguirre et al. 2017; Reese et al. 2016). In that framework, one significant impact of g rad on the stellar parameter determination is its effect on the relation between the iron content and the metallicity. Today, a stellar parameter determination is usually achieved by means of an optimisation process. This method looks for the stellar model that best fits the observed oscillation frequencies and/or frequency combinations and additional spectroscopic constraints such as the effective temperature and/or log g. The stellar model computations involved in the best-fit search require the knowledge of the initial metallicity Z ini . However, the available spectroscopic constraint used for the best-fit search is the surface iron abundance of the star, [Fe/H], determined from observations. Assuming a scaled chemical mixture, the current surface metallicity Z s is derived from this value. However, this quantity can significantly differ from the initial metallicity Z ini of the star, due to internal transport processes occurring over time. In particular, g rad can lead to an accumulation of iron at the surface. This means that we must expect a lower initial iron abundance than the observed value. When only gravitational settling is taken into account, the effect is the opposite. In addition to these difficulties, we emphasise that atomic diffusion, especially g rad , acts differently on the different chemical elements. Thus, when iron accumulates at the surface of the star, it is no longer possible to approximate the surface metallicity Z s using the determination of [Fe/H] by spectroscopy. When considering only gravitational settling (blue curves), the two computation methods give similar evolutions of the profiles for the 1.4 M⊙ model. Nevertheless, there are differences up to 0.4 dex that are much larger than current observational uncertainties. As the elements are diffusing toward the centre but at different velocities, the scaling of the iron abundance with Z is not possible even in that case. The difference reaches 0.7 dex for the models including g rad (red curves), and the evolution is completely different as the iron is accumulated at the surface. In this case iron does not follow the behaviour of other heavy elements (namely CNO) for which gravitational settling dominates the diffusion. It is clear in this example that the [Fe/H] value needs to be computed with the actual values of the iron and hydrogen abundances. The differences between the two methods used to compute [Fe/H] are smaller for lower mass stars and/or when other transport processes are taken into account, since atomic diffusion is then less effective. This issue needs to be investigated, especially in the framework of optimisation methods, as evolution codes used to compute stellar models rarely follow the evolution of the iron abundance.
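To make the distinction between the two ways of computing [Fe/H] concrete, the following sketch contrasts the value obtained from the actual iron and hydrogen mass fractions with the value obtained by scaling the total metallicity; all abundances and reference values are invented for the illustration.

```python
import numpy as np

# Illustrative sketch (values invented): [Fe/H] computed from the actual iron and
# hydrogen mass fractions versus [Fe/H] approximated by scaling the metallicity Z.

# Assumed solar reference values (illustrative only, not from the text)
ZX_SUN = 0.0181                        # solar (Z/X)
FEH_SUN = np.log10(1.3e-3 / 0.74)      # assumed solar iron-to-hydrogen mass-fraction ratio

def feh_from_abundances(x_fe, x_h):
    """[Fe/H] from the actual surface iron and hydrogen mass fractions."""
    return np.log10(x_fe / x_h) - FEH_SUN

def feh_from_metallicity(z, x):
    """[Fe/H] approximated from Z/X, valid only for a scaled-solar mixture."""
    return np.log10((z / x) / ZX_SUN)

# Hypothetical surface composition of a model in which g_rad has accumulated iron
# while most other metals have settled (numbers are made up):
x_h, z, x_fe = 0.75, 0.010, 2.6e-3

print("from Fe and H abundances:", round(feh_from_abundances(x_fe, x_h), 2))
print("from scaled metallicity: ", round(feh_from_metallicity(z, x_h), 2))
```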
Impact of partial ionisation In all the comparisons we have made on the structural and seismic properties, we observe that neglecting partial ionisation strongly underestimates the impact of atomic diffusion, especially for the most massive stars of our grids. As shown in Figs. 4, 6, and 7, the impact is roughly doubled when partial ionisation is taken into account. This occurs because iron dominates the structure modifications and because it is, among the elements we consider, the one for which neglecting partial ionisation in estimating the mean electric charge induces the largest errors (it has the highest atomic number). It is clear from this study that partial ionisation must be taken into account in modelling main-sequence stars. Effect of the initial solar mixture We demonstrated how the initial metallicity is an important parameter in evolution models including g rad . To evaluate the impact of the adopted solar mixture, we compared models based on the solar mixture of AGSS09 to models based on Grevesse & Noels (1993; hereafter the GN93 mixture). We computed two 1.3 M⊙ models with the GN93 mixture, with and without g rad , in order to perform the same comparisons as in Sect. 4.1. In these two models (Z/X) ini = 0.0276 and α CGM = 0.678, as inferred from a solar calibration. The solar metallicity of the GN93 mixture is higher than the AGSS09 value. We showed in previous sections that g rad decreases when the metallicity increases for a given mass. Therefore, the effect of g rad is slightly smaller in models using the GN93 mixture, but is still non-negligible. With the GN93 mixture, the mass above which g rad has non-negligible effects on seismic predictions is only ≈0.05 M⊙ higher than the mass limit obtained with the AGSS09 mixture (Table 4). The difference for other solar mixtures (Grevesse & Sauval 1998; Asplund et al. 2005) is expected to be smaller because the metallicity difference with AGSS09 is smaller. Implications for the PLATO space mission In Sect. 4.1, we determined that g rad induces differences in ν max and ∆ν 0 that can be larger than their observational uncertainties when the stellar mass lies above a lower mass limit M L , which depends on the metallicity (Table 4). These lower masses can be used to determine whether g rad has to be taken into account to ensure a given accuracy on the inferred stellar parameters. We can estimate the number of stars of the PLATO core program which might be affected by g rad . For this purpose we use a stellar population synthesis computed with the Besançon Galaxy model (Robin et al. 2003; Czekaj et al. 2014; A. Robin, priv. comm.). The simulation is representative of one PLATO observation field. The mass limits of Table 4 are indicated by yellow points (uncertainty set B) and orange points (uncertainty set A) in Fig. 9. The number of stars with masses higher than the mass limits ranges from 33% up to 59% (depending on the uncertainty criteria) of the PLATO core program star sample and reaches 58%-75% for the total field. This number is an upper limit, but it nevertheless indicates that for a significant number of stars g rad may not be negligible, and the determination of their parameters will require some care if the PLATO accuracy requirements are to be met. (Fig. 9 caption: Metallicity as a function of mass for a population simulation of the PLATO (grey crosses) and Kepler (black crosses) core programme stars. The selected stars are from K7 to F5 with magnitudes in the range 4 < V < 11, effective temperatures in the range 4030 < T eff < 6650 K, and luminosity classes between IV and V. The blue and red points correspond to the models listed in Table 4, which represent the masses above which g rad needs to be taken into account.) Conclusion We improved the CESTAM code in order to compute models including the effects of radiative accelerations on the chemical element profiles and the resulting effects on opacities. The goal was to characterise the sole transport effect of atomic diffusion including radiative accelerations; therefore, no macroscopic transport apart from convection was assumed. We computed two sets of models at three metallicities for masses ranging between 0.9 and 1.5 M⊙. One set includes the effect of g rad and the other set does not.
The effects of radiative accelerations are higher at low metallicities and for the more massive stars considered here. The most obvious impact of radiative accelerations in stars is the modification of the surface abundances. For instance, this process is responsible for the surface abundances of chemically peculiar stars, and we show here that it also has an impact for low-mass oscillating main-sequence solar-like stars. The most important abundance to follow is iron, as it is one of the main contributors to the opacity, while the [Fe/H] value is an important input for the stellar modelling. We showed that when radiative accelerations on iron are non-negligible it is not correct to calculate the [Fe/H] of a model simply by considering a scaling of the metal content; the effect of radiative accelerations is selective, and even if iron accumulates at the surface, the surface metallicity decreases as most of the other elements are depleted. This may have an important impact on the stellar parameter determination, as [Fe/H] is an observational input. The difference in [Fe/H] between models with and without radiative accelerations reaches 1.7 dex for the more massive models of the grids. We showed that the accumulation of elements in the surface convection zone (mainly iron) induces structure modifications. This is mainly due to the local increase of the opacity at the bottom of the surface convection zone as elements accumulate in regions where they are main contributors to the opacity. This local increase in the opacity leads to an increase in the size of the surface convection zone which can reach up to 120% in mass. This represents an increase larger than 160 s when considering the position of the bottom of the surface convection zone in acoustic radius. This is larger than the uncertainties obtained for some F-type stars of the Kepler Legacy sample and has to be further investigated. The modification of the radius of the star induced by the effects of radiative accelerations can reach 2%. Using scaling relations we showed that the frequency at maximum power ν max of a model can be significantly affected by radiative accelerations for the more massive stars of our sample. Some models of our grid showed differences in the large frequency separation of pressure modes ∆ν 0 that were larger than the observational uncertainty. For masses higher than 0.9, 1.1, and 1.2 M⊙ (considering uncertainties of the Kepler Legacy sample), respectively for [Fe/H] ini = −0.35, +0.035, and +0.25, radiative accelerations may have an impact on the age, mass, and radius determinations exceeding the precision requested by the PLATO main objectives. These masses are slightly higher when considering more conservative uncertainties. This has consequences on the parameters to be determined from Kepler, and future TESS and PLATO, data. We estimated that radiative accelerations should be non-negligible for 33%-58% (depending on the considered uncertainties) of the core program stars of Kepler and PLATO. It is important to note that the impact of radiative accelerations might be reduced when other processes are efficient in transporting material within stars, such as mixing induced by rotation, turbulence, or internal gravity waves, to name a few. This is beyond the scope of this paper, but will be studied in a forthcoming work.
12,117
2018-06-27T00:00:00.000
[ "Physics" ]
Investigation of the effects of time periodic pressure and potential gradients on viscoelastic fluid flow in circular narrow confinements In this paper we present an in-depth analysis and analytical solution for time periodic hydrodynamic flow (driven by a time-dependent pressure gradient and electric field) of viscoelastic fluid through cylindrical micro- and nanochannels. In particular, we solve the linearized Poisson–Boltzmann equation, together with the incompressible Cauchy momentum equation under no-slip boundary conditions for viscoelastic fluid, in the case of a combination of time periodic pressure-driven and electro-osmotic flow. The resulting solutions allow us to predict the electrical current and solution flow rate. As expected from the assumption of linear viscoelasticity, the results satisfy the Onsager reciprocal relation, which is important since it enables an analogy between fluidic networks in this flow configuration and electric circuits. The results are especially of interest for micro- and nanofluidic energy conversion applications. We also found that time periodic electro-osmotic flow is in many cases much more strongly enhanced than time periodic pressure-driven flow when comparing the flow profiles of oscillating PDF and EOF in micro- and nanochannels. The findings advance our understanding of time periodic electrokinetic phenomena of viscoelastic fluids and provide insight into flow characteristics as well as assist the design of devices for lab-on-chip applications. Introduction Micro- and nanofluidic applications (e.g., on-chip bioanalysis, on-chip diagnostic devices, separation of DNA molecules, energy harvesting, and so on) require the transport of fluids to be driven by an external driving force, which can be either a pressure gradient [pressure-driven flow (PDF)], an external electric field [electro-osmotic flow (EOF)], or a combination of these two driving forces. Force application results in the coupled flow of matter and ionic current, so-called electrokinetic flow. Based on the physical problem of interest, these driving forces can be steady or time-dependent. The application of steady driving forces for Newtonian fluids, like aqueous electrolyte solutions, whose viscosity is constant, was extensively investigated in the past (Masliyah and Bhattacharjee 2006; Bruus 2008). Recently, the necessity of manipulating biofluids (for example blood and DNA solutions) and polymeric liquids in small confinements has triggered a renewed interest in the dynamics of non-Newtonian fluids. Berli theoretically studied the utilization of steady PDF (Berli 2010a), steady EOF (Olivares et al. 2009; Berli 2010b), and steady combined PDF-EOF (Berli and Olivares 2008) for inelastic non-Newtonian fluids using a power-law constitutive equation in both rectangular and cylindrical microchannels. Experiments carried out for steady PDF non-Newtonian flow in a rectangular microchannel, inspired by Berli's theory, were also reported
(Nguyen et al. 2013). Chakraborty and colleagues have theoretically studied the transport of non-Newtonian fluids (inelastic power-law fluids and, more recently, viscoelastic constitutive models), separately using steady PDF (Bandopadhyay and Chakraborty 2011), steady EOF (Chakraborty 2007; Ghosh and Chakraborty 2015), time periodic PDF (Bandopadhyay and Chakraborty 2012a, b; Bandopadhyay et al. 2014) and time periodic EOF (Bandopadhyay et al. 2013) in rectangular narrow confinements. Afonso et al. studied the combined steady PDF and EOF using two different viscoelastic fluid models, namely the Phan-Thien-Tanner (PTT) model and the finitely extensible nonlinear elastic model with a Peterlin approximation (FENE-P) (Afonso et al. 2009). Dhinakaran et al. (2010) studied the steady EOF of viscoelastic fluids using the PTT model and the nonlinearity of the Poisson-Boltzmann equation. Liu et al. studied time periodic EOF of viscoelastic fluid in rectangular (Liu et al. 2011a), cylindrical (Liu et al. 2012) and semicircular microchannels (Bao et al. 2013). However, so far no author has discussed the time-dependent combined PDF-EOF of viscoelastic flow in a narrow confinement (micro- and nanochannels). In this context, our work aims to fill this gap by investigating the theoretical relations between fluxes and forces for time periodic electrokinetic (mixed PDF-EOF) flow of viscoelastic fluid in narrow confinements. It is important to note that knowing the relationships between driving forces and conjugate fluxes in electrokinetics [which for a simple Newtonian fluid and steady mixed PDF-EOF can be described by transport equations and the Onsager relations of non-equilibrium thermodynamics (Masliyah and Bhattacharjee 2006)] is a crucial aspect for miniaturization and integration. It is thus relevant for the design and operation of micro- and nanochannels in fluidic networks (lab-on-chip platforms) as well as for understanding the underlying fundamental physics of fluids. The results are also of interest for energy conversion in micro- and nanofluidic systems. Theoretical model We consider the flow of a linearized Maxwell fluid in an infinitely long circular micro- or nanochannel (with channel radius R) under application of both an oscillating pressure gradient and an oscillating electric field, using a cylindrical coordinate system (Fig. 1). Potential distribution When the charged channel surface is in contact with a fluid with dissolved ions, electrical double layers (EDL) are formed at the channel walls.
The electrical potential (ψ) in the EDL is a function of r in cylindrical coordinate system and has the non-dimensional form as: in which the non-dimensional quantities are as follows: r = r R , ψ = ψ ζ , R = R or when converted back to dimensional quantities, in which κ = 1 and = ǫk B T 2n 0 z 2 e 2 · ζ is the zeta potential. n 0 is the bulk ionic density, k B is the Boltzmann constant, T is the operational temperature, e is the elementary charge, ϵ is the permittivity of the fluid, and z is the valency of the positively and negatively charged species (for a symmetric electrolyte, z + = −z − = z). This model is classical for electrical double layers when we do not consider finite ionic size effects. A detailed model description on the effects of finite ionic size and solvent polarization for electrical double layers is beyond the scope of this work but can be found in . It is noticed that the electrical potential causes by EDL is normal to the wall and the convection is parallel to the wall, so there is no disturbance of the EDL potential. Fluid velocity The flow is governed by the incompressible Cauchy's momentum equation. Considering the flow in z direction (unidimensional flow), the scalar momentum equation can be expressed as: with ρ, the fluid density; u(r, t), the fluid velocity; − ∂ ∂z p(z, t), the applied pressure gradient; τ(r, t), the stress tensor; and E(z, t) the externally applied electric field. (1) It is important to note that E(z, t) in Eq. (5) includes two components: (1) the induced electric field by the applied pressure gradient E S e −iωt (the streaming potential field) and (2) the applied electric field E A e −i(ωt+ϕ) . Here, ϕ is the phase difference between the applied pressure gradient and the applied electric field. We now define E 0 as: Viscoelastic behavior is presented using the linear Maxwell model. where t n is the liquid relaxation time and η is the liquid viscosity. Flow rate The flow rate q = ℜ(Qe (−iωt) ) in which the flow rate amplitude has the form: By integrating and taking − d dz P = P L and E 0 (z) = d dz Φ = �Φ L , the complex flow rate Q amplitude has the form: The flow rate amplitude as shown in Eq. (11) is composed of two parts. The first part is driven by the applied oscillating pressure, and the second part is driven by the applied oscillating electrical field. Ionic current The ionic current i cur = ℜ(Ie (−iωt) ) , in which the current amplitude has the form: Here, f is the Stokes-Einstein friction factor, f = k B T D and D is the diffusion coefficient, σ s is the conductivity of the Stern layer (Masliyah and Bhattacharjee 2006;Lee et al. 2012;Davidson and Xuan 2008). It is important to note that since we use a linear viscoelastic model, the f factor presented here is not dependent on the power law exponent [denoted as β in ] which is solely used for a power law (inelastic) fluid. For more discussion on the f factor in case of using an inelastic fluid, please refer to . Changing to the non-dimensional variable r, we obtain: By substituting the velocity given by Eq. (9) into Eq. (12) and integrating, we obtain the complex amplitude current: At this point, the velocity profiles expressed in Eq. (6) for both oscillating pressure-driven and electro-osmotic flows are fully determined. Equation (16) is used for plotting velocity amplitudes as shown in the following sections. Onsager's reciprocal relations The Maxwell model for viscoelastic fluid is restricted to small deformations so that the fluid responds linearly. 
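As a side illustration of what this linear response implies (a generic property of the linear Maxwell model, not a reproduction of the derivation in this work): for a harmonic forcing proportional to e^−iωt the constant viscosity is simply replaced by a complex, frequency-dependent effective viscosity, as sketched below with arbitrary parameter values.

```python
# Illustrative sketch (generic linear Maxwell property, arbitrary numbers):
# for tau + t_n * d(tau)/dt = eta * (du/dr) and fields proportional to exp(-i*omega*t),
# the stress obeys tau = eta_eff(omega) * (du/dr) with eta_eff = eta / (1 - i*omega*t_n).

eta = 1.0e-3   # viscosity, Pa s (arbitrary)
t_n = 1.0e-2   # relaxation time, s (arbitrary)

def eta_eff(omega):
    """Complex effective viscosity of the linear Maxwell model for exp(-i*omega*t) forcing."""
    return eta / (1.0 - 1j * omega * t_n)

for omega in (1.0, 1.0e2, 1.0e4):   # rad/s
    e = eta_eff(omega)
    print(f"omega = {omega:8.1f} rad/s : eta_eff = {e.real:.2e} {e.imag:+.2e}j Pa s, "
          f"|eta_eff| = {abs(e):.2e} Pa s")
# At low omega the response is viscous (eta_eff ~ eta); at high omega the magnitude
# falls off as 1/(omega*t_n), reflecting the increasingly elastic response.
```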
This phenomenon is known as linear viscoelasticity. Because of this linear relation, the Onsager relations are expected to be obeyed (Onsager 1931a, b;Lebon et al. 2008;Rajagopal 2008) and indeed, we find that the complex flow rate amplitude and complex ionic current amplitude in Eqs. (11) and (13) can be re-written as follows: The transport Eq. (17) shows that flow rate amplitude Q and ionic current amplitude I are linear with applied pressure and electric potential amplitudes. L ij in Eq. (17) are phenomenological coefficients. In particularly, L 11 characterizes the hydraulic conductance and L 22 characterizes the electric conductance. L 12 characterizes the electro-osmosis and L 21 characterizes the streaming potential effect. Onsager's reciprocal relation is complied with if L 12 = L 21 . We see that this relation is indeed Here, Du is the Dukhin number and Du = σ s Rσ b , σ b is the conductivity of the bulk solution. As with the flow rate amplitude, the current response of the system is caused by the oscillating pressure (the first term) and the oscillating electrical field (the second term). Consideration of streaming potential and applied electric field By substituting Eq. (7) into Eq. (9), the complex velocity amplitude can be written as: The velocity field therefore can be viewed as the superposition of the velocity fields caused by (1) the pressure gradient coupling with its streaming potential field [the first and the second terms on the right-hand side of Eq. (14)] and (2) the applied electric field [the third term on the right-hand side of Eq. (14)]. In this context, if one considers solely pressure-driven system, where no electric field is applied E A e −iϕ = 0, the streaming potential E S = E 0 . Since the total ionic current at maximal streaming potential is zero, this gives us the opportunity to extract the relation between U refP and U refE S from Eq. (12) as following (by taking I = 0): The velocity amplitude U(r) can therefore be expressed solely as a function of U refP and U refE A as: satisfied because from Eqs. (11), (13) and (17) it is obvious that: Equation (17) can be used to construct an analogy between micro-and nanofluidic channel networks and electric circuits because it describes the electrokinetic phenomenon as a generalization of Ohm's law where linear relations between currents (of mass or charges) and applied gradients (voltage or pressure) occur (Ajdari 2004;Campisi et al. 2006). In this context, it is interesting to apply our calculation results to examine the energy conversion efficiency of the streaming potential energy harvesting system in a manner comparable to the work of Bandopadhyay and Chakraborty (2012a). Streaming potential energy harvesting The electrokinetic energy conversion efficiency (Eff) in a microchannel for a Newtonian fluid under steady pressuredriven flow was theoretically predicted to be less than 1% (Morrison and Osterle 1965), while for an inelastic polymer it was predicted to be about 1% (Berli 2010a). In a nanochannel, for a Newtonian fluid under no-slip boundary conditions and based on a Poisson-Boltzmann charge distribution, the theoretical prediction of energy conversion efficiency is up to 12% (van der Heyden et al. 2006). Recently, Bandopadhyay and Chakraborty (2012a) gave a valuable contribution to the theory of electrokinetic energy conversion by taking into account the utilization of Maxwell viscoelastic fluid and oscillating pressure-driven flow in slit micro-and nanochannels. Bandopadhyay et al. 
showed that for a slit-type microchannel (H/λ = 500, with H the half channel height and λ the Debye length), the conversion efficiency can be of the order of 10%, and that for a nanochannel (H/λ = 10), without taking into account surface conductance, the conversion efficiency can be even larger than 95% [see Fig. 1 and S3 in ref. Bandopadhyay and Chakraborty (2012a)]. Our calculation results for a cylindrical geometry show that an efficiency of the same order can be obtained for the case of a microchannel and that the maximum efficiency can be larger than 95% for a nanochannel (Fig. 2). For the purpose of comparison, plots are constructed using the same input data as provided by the work of Bandopadhyay and Chakraborty (2012a) (i.e., ϑ = 10 −4 , ζ = −1, Ω = −10, Du = 0). It must be remarked that the maximal efficiencies shown in Fig. 2 and those predicted by Bandopadhyay et al. are thermodynamic efficiencies [Eff = I S ∆φ/(Q∆p)], i.e., for the case in which no power is delivered by the system. For practical purposes, the maximal conversion efficiency under the condition of maximal output power at a load resistor is more relevant (Olthuis et al. 2005), Eff max = (1/4) I S ∆φ/(Q∆p). Figure 3 shows that the maximum efficiencies at maximal output power are 24.3 and 7.7% for a cylindrical nanochannel (R = 15) and microchannel (R = 500), respectively. These values, though much lower than the thermodynamic efficiencies, are still much higher than the predictions for conventional systems using DC actuations and Newtonian fluids cited above, especially for microchannels. Understanding the mechanism In the work of Bandopadhyay and Chakraborty (2012a), the mechanism behind the massive enhancement of the energy conversion efficiency using viscoelastic fluid was not described in detail. Here, we provide a description of the mechanism that enhances the efficiency. Figure 4 shows the maximal thermodynamic energy conversion efficiency as a function of ω * and the inverse Deborah number ϑ [here, ϑ = ρR 2 /(ηt n ), Bandopadhyay and Chakraborty (2012a)] for a nanochannel at R = 5 [in this context, for the comparison with the work of Bandopadhyay et al., the Deborah number is defined as De = ηt n /(ρR 2 ). It is noted that some other authors have also defined De = ωt n (Bao et al. 2013)]. It is obvious from Fig. 4 that in the limit ϑ → 0 (high relaxation time, elastic dominant zone), the efficiencies are high, while at high ϑ (low relaxation time, viscous dominant zone), no efficiency peaks appear. This behavior can be explained from the linear Maxwell viscoelastic model, which represents the fluid as a serial connection between a spring (elastic behavior) and a dashpot (viscous behavior). The closer to the dominantly elastic zone (lower ϑ), the more the fluid responds as a Hookean solid (large relaxation time), resulting in a shift of the resonant peak toward higher ω * values. When ϑ → 0, at resonant frequencies, the fluid inside the channel exhibits an entirely elastic character and hence moves frictionlessly, as a result providing high conversion efficiencies. The peak locations at which maximal efficiencies are observed depend on the oscillation frequencies, which are also determined by the channel dimension. This can be seen when ϑ is kept constant (10 −4 ): the maximal efficiency peaks shift to smaller frequencies as the dimensionless channel radius R increases (shown in Fig. 5). This frequency shift was also observed in the work of Bandopadhyay and Chakraborty (2012a, b).
Furthermore, the peaks also split into two separate peaks so that they can be shifted to smaller frequencies when increasing the channel radii (for example the peak at ω * approximate 550 and R = 100 in Fig. 5). Oscillating pressure-driven flow profile For the sake of generality, all the plots are presented using the non-dimensional quantity: 4η , see Eq. (16) for U(r). Figure 6 shows the oscillating pressure-driven flow profile of viscoelastic fluid following ω * and channel radius r at R = 20, ϑ = 10 −4 , ζ = −1, Ω = −10, Du = 0. In order to compare with the case of oscillating electro-osmotic flow, the velocity amplitude is also plotted and shown in Fig. 7. It is important to stress that while the pressure gradient − ∂p ∂z and the velocity u(r, t) appear to have the same oscillatory form in the time variable t [see Eqs. (4) and (6)], this does not mean that they actually are in phase. The reason for this is that the other part of the velocity, namely, the U(r) or U(r) is a complex quantity. The product of this complex quantity with e −iωt as shown in Eq. (6) causes changes in the phases of the real and imaginary parts of the U(r) or U(r) and hence of the velocity u(r, t) so that a phase shift will occur with respect to the pressure gradient − ∂p ∂z . Complex and real velocity amplitude The velocity u has the form Since U is a complex number, we can express it as: Substituting Eq. (19) into Eq. (20) and isolating the Real part, we have: is the phase shift, and hence U c = |U| is the (real) velocity amplitude (Moyers-Gonzalez et al. 2009), see Fig. 7. Figure 8 shows the phase shift of the velocity following the dimensionless pressure frequency (ω * ) with two different values of ϑ. It can be seen that depending on the values of ϑ, the phases pass from negative (viscous zone) to positive (elastic zone) (Moyers-Gonzalez et al. 2009). The green line represents the phase for Newtonian dominant fluid (ϑ = 10 10 ) and stays in viscous zone (negative). As for ϑ = 10 −4 (the blue curve), the phase is in the elastic zone (positive) at low frequency. As the frequency increases, the fluid responds viscously indicating by the changing of the blue curve from positive to negative zone. When the frequency further increases and reaches the resonant frequency, the phase shifts back to the elastic zone (positive). At resonant frequency, the fluid behaves elastically and hence moves frictionlessly, as a result providing high energy conversion efficiencies as mentioned in previous section. Effectiveness of electro-osmotic flow compared to pressure-driven flow It can be seen from Figs. 6,7 and Figs. 9, 10 that at resonant frequencies, the maximal velocity in the case of oscillating EOF is much higher than in the case of oscillating PDF even though at low frequencies these flows have the same maximal velocities (see Fig. 11). In the textbook, for DC electrokinetic flow, the concept of effectiveness (B) of electro-osmotic flow as compared to pressure-driven flow is given by the ratio of volume flow rate, see page 244 of ref. Masliyah and Bhattacharjee (2006). In our case, for time periodic electrokinetics with a Maxwell fluid, the volume flow is expressed by Eq. (11). The effectiveness B therefore has the form: Figure 12 shows the frequency-dependent effectiveness of oscillating EOF over oscillating PDF. It is clear that at the resonant frequencies, the effectiveness of oscillating EOF is much higher than oscillating PDF, while at small frequencies, effectiveness is equal (as also evident from Fig. 11). 
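As a minimal sketch of how such an effectiveness ratio can be evaluated (the complex flow-rate amplitudes below are placeholders, not results of the present model), one may compare the magnitudes of the EOF- and PDF-driven flow-rate amplitudes at a given frequency:

```python
# Illustrative sketch (placeholder numbers, not results of the present model):
# effectiveness of oscillating EOF relative to oscillating PDF, taken here as the
# ratio of the magnitudes of the complex flow-rate amplitudes at a given frequency.

def effectiveness(q_eof, q_pdf):
    """|Q_EOF| / |Q_PDF| for complex flow-rate amplitudes."""
    return abs(q_eof) / abs(q_pdf)

# Hypothetical amplitudes at a low frequency and near a resonant frequency:
q_pdf_low, q_eof_low = 1.0 + 0.1j, 1.05 + 0.1j
q_pdf_res, q_eof_res = 0.4 - 0.3j, 6.0 - 2.5j

print("low frequency:      B =", round(effectiveness(q_eof_low, q_pdf_low), 2))
print("resonant frequency: B =", round(effectiveness(q_eof_res, q_pdf_res), 2))
```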
Furthermore, in nanochannels, the effectiveness is much more strongly increased than in microchannels. This observation could be explained by noticing that we have the like-standing waves in the channel (see Figs. 6,9). For oscillating PDF, the applied pressure force is exerted over the entire cross section of the channel. This flow behavior allows all energy to be coupled into the actuation in one direction (for example first harmonic, the peak around ω * = 250, see Fig. 6). For the first harmonic of oscillating EOF (see Fig. 9), also all energy is coupled in one direction; hence, both have equal effectiveness at low ω * . However, with the third harmonic (the peak around ω * = 500), the situation is quite different. As with oscillating PDF, the pressure force in the center of the channel is directed against the direction of the movement; hence, the center velocity is lower than in the first harmonic. With oscillating EOF, there is no force exerted in the center of the channel, but only in a thin layer at the wall. Hence the force exerted in the wide area close to the walls can be coupled to the much narrower area at the center. This concentration of energy in a small cross section (especially for nanochannel) causes strong increase in velocity in the center, hence much higher effectiveness than oscillating PDF. The question can be posed whether the high velocities generated will not disturb the electrical double layer composition. It is important to realize that our model concerns an infinitely long channel of constant fluid properties and homogeneous wall charge density. In this channel the potential and ionic composition in the electrical double layer only vary in the direction normal to the channel wall. Only when turbulence occurs, the double layer composition will hence be disturbed. The Reynolds number in our case is Re = ω * R2 2 ρ t n η (Jian et al. 2010;Liu et al. 2011b). For the optimal dimensionless parameter values as found in this work namely R = 10, ω * = 250 , and the practical values mentioned in the work of Bandopadhyay and Chakraborty (2012b), ρ = 10 3 kg/m 3 , t n = 10 −2 , η = 10 −3 Pa s, we find that Re = 2.5 × 10 11 2 . Since Debye length λ is always below 1 µm, turbulence is not expected. From practical point of view, in future experimental systems, the interfacing to an electrical system would need to be considered. This would involve electrode/solution interfaces with local storage and exchange of charge and possibly channel openings. At every interface where an inhomogeneity of flow or fixed charge concentration would occur, conservation of charge and matter would give rise to local gradients of electrical field, pressure and/or concentration. This would cause additional losses that would need to be considered in the design of such systems. One single aspect of the interfacing, namely the disturbances of the electrical double layer composition by advective fluxes can be estimated in isolation. By comparing the advective flux parallel to the wall, disturbing the electrical double layer composition, with the restoring diffusion flux normal to the wall, restoring equilibrium, we can estimate the severity of the disturbances in double layer composition. The ratio of the two fluxes provides a Péclet number, Pe = ω * R 2 Dt n . For R = 10, ω * = 250, D = 10 −9 m 2 /s and t n = 10 −2 s, we find Pe = 2.5 × 10 14 λ 2 . For λ < 60 nm, Pe < 1 and diffusional equilibration will be sufficiently rapid. 
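The order-of-magnitude estimates above can be reproduced in a few lines; the λ² prefactors are the ones quoted in the text for the stated parameter values, while the list of Debye lengths scanned below is an assumption made for the illustration.

```python
# Illustrative sketch: the order-of-magnitude Reynolds and Peclet estimates quoted above,
# evaluated for a few Debye lengths. The lambda^2 prefactors (2.5e11 and 2.5e14, SI units)
# are taken from the text for R = 10, omega* = 250, D = 1e-9 m^2/s, t_n = 1e-2 s,
# rho = 1e3 kg/m^3, eta = 1e-3 Pa s; the Debye lengths scanned are an assumption.

RE_PREFACTOR = 2.5e11   # Re = RE_PREFACTOR * lambda^2
PE_PREFACTOR = 2.5e14   # Pe = PE_PREFACTOR * lambda^2

for lam_nm in (1, 10, 60, 100):
    lam = lam_nm * 1e-9                      # Debye length in metres
    re = RE_PREFACTOR * lam**2
    pe = PE_PREFACTOR * lam**2
    status = "double layer stays equilibrated" if pe < 1 else "advective disturbance possible"
    print(f"lambda = {lam_nm:3d} nm: Re = {re:.1e}, Pe = {pe:.2f}  ({status})")
```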
Conclusions We report for the first time an analytical solution for time-dependent electrokinetic flow (mixed oscillating pressure gradient and electric field) for a linear Maxwell viscoelastic fluid in cylindrical micro- and nanochannels. The analytical solution is derived by solving the linearized Poisson-Boltzmann equation, together with the incompressible Cauchy momentum equation under no-slip boundary conditions, for the case of a combination of time periodic pressure-driven flow and electro-osmotic flow (PDF/EOF). The results show that the Onsager reciprocal relations are satisfied owing to the use of the linear Maxwell constitutive model. The validity of these Onsager relations is important for practical implementation since it enables the analogy between fluidic networks in this flow configuration and electric circuits. We applied our calculation results to energy conversion systems in cylindrical micro- and nanochannels and compared the results with the work of Bandopadhyay and Chakraborty (2012a), which was performed for slit micro- and nanochannels. It is shown that for both cases the enhancement is of the same order. We furthermore provided a mechanism to understand the massive efficiency enhancement. We also found that time periodic electro-osmotic flow is in many cases much more strongly enhanced than time periodic pressure-driven flow when comparing the flow profiles of oscillating PDF and EOF in micro- and nanochannels. The findings advance our understanding of time periodic electrokinetic phenomena of viscoelastic fluids and provide insight into flow characteristics as well as assist the design of devices for lab-on-chip applications.
5,899.6
2017-02-18T00:00:00.000
[ "Engineering", "Physics" ]
A Brief Review on Syntheses, Structures and Applications of Nanoscrolls Nanoscrolls are papyrus-like nanostructures which present unique properties due to their open ended morphology. These properties can be exploited in a plethora of technological applications, leading to the design of novel and interesting devices. During the past decade, significant advances in the synthesis and characterization of these structures have been made, but many challenges still remain. In this mini review we provide an overview on their history, experimental synthesis methods, basic properties and application perspectives. (Figure caption: Graphene is first mechanically exfoliated from graphite and deposited on SiO 2 ; then a droplet of isopropyl alcohol (IPA) and water is placed on the monolayer and evaporated. Both methods lead to well-formed nanoscrolls.) CNSs can have their diameter easily tuned (Shi et al. (2010b)) and can be easily intercalated (Mpourmpakis et al. (2007)). They offer a wide solvent-accessible surface area, while sharing some electronic and mechanical properties with MWCNTs (Zaeri and Ziaei-Rad (2014)) and preserving the high carrier mobility exhibited by graphene. III. SYNTHESIS Carbon scrolls were first reported as byproducts of arc discharge experiments using graphite electrodes (Bacon (1960)). In this kind of experiment the extremely high energies allow the formation of several different carbon structures besides nanoscrolls, such as nanotubes and fullerenes (Krätschmer et al. (1990); Ugarte (1992); Saito et al. (1993)). However, the high cost, low yield and non-selectivity of this method limit its wide use. The first method designed to produce CNSs at high yield, reaching over 80%, was developed only decades later (Kaner et al. (2002)). This process consists of three consecutive steps. First, high-quality graphite is intercalated with potassium metal; it is then exfoliated via a highly exothermic reaction with aqueous solvents. Lastly, the resulting dispersion of graphene sheets is sonicated, resulting in CNSs (Viculis et al. (2003)) - see fig. II(a). The strong deformations caused by the sonication process lead the solvated sheets to bend and, in case of overlapping layers, to scroll. As calculations pointed out (Braga et al. (2004)), once significant layer overlap occurs the scrolling process is spontaneous and driven by van der Waals forces. The efficiency of this method has led to its adoption in other studies (Roy et al. (2008)). A very similar method was developed shortly afterwards (Shioyama and Akita (2003)), the most significant difference being the absence of sonication. In this case longer times are necessary for graphene sheets in solution to scroll spontaneously. Both these chemical methods use donor-type intercalation compounds which are highly reactive, demanding the use of an inert atmosphere during the process. In order to avoid this limitation, a variation of Viculis et al.'s method was devised utilizing acceptor-type intercalation compounds, namely graphite nitrate, which is much more stable and thus eliminates the need for an inert atmosphere (Savoskin et al. (2007)). However, the most significant drawbacks of this chemical approach are the poor morphologies of the resulting nanoscrolls, the inability to control the number of scrolled graphene layers, and also the possibility of defects being introduced during the chemical process. In order to overcome these issues, a new method was later developed, offering higher control over the final product.
In this new method, graphite is mechanically exfoliated using the scotch tape method and the extracted graphene layers are then deposited over SiO 2 substrates. Then a drop of a solution of water and isopropyl alcohol is applied over the structures and the system is left to rest for a few minutes. After this, the system is dried out and spontaneously formed CNSs can be observed (Xie et al. (2009)) - see fig. II(b). It is believed that surface strain is induced on the graphene layer as a consequence of one side being in contact with the solution and the other being in direct contact with the substrate. Once this strain causes the edges to lift, solvent molecules can occupy the space between layer and substrate, further bending the graphene sheets. As some deformation causes overlap of the layers, the scrolling process becomes spontaneous. While this method offers higher control over the produced CNSs, it is difficult to scale and more sensitive to defects in the graphene layers. Synthesis of high-quality CNSs by microwave irradiation has also been reported (Zheng et al. (2011)). In this method, graphite flakes are immersed into liquid nitrogen and then heated under microwave radiation for a few seconds. As graphite presents very good microwave absorption, sparks are produced, which are believed to play a key role in the process, as their absence hinders high CNS yields. The resulting product is then sonicated and centrifuged, resulting in well-formed CNSs. More recently, a purely physical route to CNS synthesis was proposed on theoretical grounds (Xia et al. (2010)). In this method a carbon nanotube (CNT) is used to trigger the scrolling of a graphene monolayer. Due to van der Waals interactions, the sheet rolls itself around the CNT in order to lower the surface energy in a spontaneous process. The advantage of this method would be that it is a dry, non-chemical, room-temperature process. However, it has been shown that the presence of a substrate can significantly affect the efficiency of this method (Zhang and Li (2010)). In order to circumvent these limitations, simple changes in substrate morphology have been proposed (Perim et al. (2013)). The same principle has been used to propose a method for producing CNS-sheathed Si nanowires (Chu et al. (2011)). However, an experimental realization of such a process has yet to be reported. Even more recently, CNSs have been proposed to form from diamond nanowires upon heating (Sorkin and Su (2014)). In this case, it is believed that the magnetic interaction between the nitrogen defects and maghemite particles is the governing effect in this process. This is supported by the fact that removal of the γ-Fe 2 O 3 particles causes the scroll to unroll in a reversible process, different from what is observed for pure CNSs. IV. STRUCTURE From a topological point of view, scrolls can be considered as sheets rolled up into Archimedean spirals. Hence, the polar equation describing these spirals, r(φ) = r 0 + (h/2π)φ, can be used to determine the points r that belong to the scroll for a given core radius r 0 , interlayer spacing h, and number of turns N (φ varies from 0 to 2πN). See fig. 2(a). In addition, in order to fully determine the geometry of the scroll, the axis around which the scroll was wrapped must be given (see fig. 2(b)). Therefore, armchair, zigzag and chiral nanoscrolls exist, although the scroll type is not fixed during synthesis and interconversion can occur under mild conditions, due to the open ended topology (Braga et al. (2004)).
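As a concrete illustration of the Archimedean-spiral description above, the short sketch below generates the cross-section coordinates of a scroll; the values of r 0 , h, and N are typical graphene-like numbers chosen for the example, not values taken from the review.

```python
import numpy as np

# Illustrative sketch: cross-section of a nanoscroll described as an Archimedean spiral,
#   r(phi) = r_0 + (h / (2*pi)) * phi,  with phi in [0, 2*pi*N].
# r_0, h and N below are example (graphene-like) values, not taken from the review.

r0 = 1.2    # core radius, nm
h = 0.34    # interlayer spacing, nm
N = 5       # number of turns

phi = np.linspace(0.0, 2.0 * np.pi * N, 2000)
r = r0 + (h / (2.0 * np.pi)) * phi
x, y = r * np.cos(phi), r * np.sin(phi)                  # cross-section coordinates

outer_radius = r[-1]                                     # equals r0 + N*h
sheet_width = np.sum(np.hypot(np.diff(x), np.diff(y)))   # length of the unrolled sheet
print(f"outer radius: {outer_radius:.2f} nm, unrolled sheet width: {sheet_width:.1f} nm")
```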
Particular values of r 0 , h and N for the general scroll geometry described above will depend on the properties of the composing scroll material. As shown by Shi et al. (2011), the core radius r 0 can be determined from the interaction energy between layers (γ), the bending stiffness of the composing material (D), the interlayer separation (h), the length of the composing sheet (B), and the difference between the inner (p i ) and outer pressure (p e ), as described by the following equation: where R = Bh π + r 2 0 . Density Functional Theory (DFT) calculations, carried out without pressure difference, predicted that the minimum stable core diameter is 23Å (Chen et al. (2007)). The interlayer spacing, however, depends mostly on the interaction energy between layers, although several factors can alter its value, like the presence of defects (Tojo et al. (2013)). Given a core size and an interlayer distance, the number of turns can be obtained after fully wrapping a given sheet width. In order to form a nanoscroll, there is an elastic energy cost associated with bending the sheet and a van der Waals (vdW) energy gain associated with the creation of regions of sheet overlap. For graphene, in particular, if the core size and number of turns is appropriate, the vdW energy gain is large enough to make the final scrolled structure even more stable than its initial planar configuration (Braga et al. (2004)). There is, however, an energy barrier associated with the formation of scrolls, since the initial bending cost is not followed by energy gains. Various theoretical and experimental methods have been devised to overcome this barrier, as discussed in the previous section. V. MECHANICAL PROPERTIES In this and the next section we will restrict ourselves to the discussion of carbon-based scrolls. In section 6 non-carbon scrolls will be also addressed. So far we have discussed mainly equilibrium geometry properties, but there are several studies addressing the mechanical response of scrolls to applied agents and/or forces. The first of these studies was carried out by Zhang et al. (2012) using molecular mechanics, and studied the response of CNSs to axial compression, twisting and bending. With regard to compression, the authors found that the axial stiffness of CNTs and CNSs with similar diameters and number of layers is about the same, but that nanoscrolls buckle under a significantly smaller strain. The authors argued that the free ends of the CNSs tend to wrinkle and are vulnerable to further buckling, which then propagates inward from the ends. With relation to torsions, the paper reported that both the torsional rigidity and critical strain are much lower for CNSs when compared to CNTs, a result that was attributed again to the open topology. Finally, regarding bending, the authors found about the same response for size-similar CNTs and CNSs. To explain this result, the authors reasoned that bending buckling began in their simulations at localized kinks, and that therefore the global topology of the structure did not matter as much in this case. Also, they reported that increasing the number of turns increases the bending rigidity, but decreases the critical buckling strain. Song et al. (2013) also used molecular mechanics to study the response of CNSs to compressive stresses. Similarly to the previously discussed work, they also found that compressive buckling started at the free ends. 
By adding nickel nanoparticles to the ends, they managed to stabilize the dangling bonds, preventing the wrinkling of the edges. This resulted in a slight increase in the elastic modulus (from 950-970 to 1000-1025 GPa) and in a somewhat larger increase in the compression strength of the nanoscrolls (from 40-47 to 45-51 GPa). The values above are given as ranges because the modulus and strength were also found to depend slightly on the core radius and on the chirality. Also, note that the compression strength corresponds to the point in the stress-strain curve at which increasing the strain leads to a decrease in the stress. The authors also studied the influence of adding a CNT to the inside of the scroll, and found that it did not significantly influence the critical strain value or the deformation morphology. A third paper on the mechanical properties of CNSs, by Zaeri and Ziaei-Rad (2014), studied their response under tensile and torsional stresses. Unlike the previous studies, this one used a finite-element-based approach to describe the elastic properties. Calculations were performed with and without vdW interactions. Regarding the tensile studies, the authors reported a Young's modulus of about 1100/1040 GPa with/without vdW interactions. They also studied the influence of changing the chirality, core radius, number of turns and scroll length on the Young's modulus, but found only a small dependence in each case. Regarding the application of torsional stresses without explicitly taking vdW interactions into account, the authors found no dependence of the shear modulus on chirality and a moderate decrease of its value as the core radius increased (from 48 to 36 GPa). More importantly, the shear modulus greatly increased as the number of layers increased (from 20 to 100 GPa) and greatly decreased as the length increased (from 95 to 10 GPa). For the first effect, the authors argued that the inner and outer layers cannot resist torsion well due to the open edges, so the shear modulus increases as the number of torsion-resisting intermediate layers increases. To explain the second observation, the authors suggested that longer inner and outer edges increase the weakening effect, though no explanation was given as to why this should happen. Regarding the influence of vdW interactions, it was found that they increase the shear modulus roughly tenfold, from about 50 to 500 GPa. The vibrational properties of CNSs were studied by Shi et al. (2009). Using theoretical modeling and molecular dynamics simulations, they showed that the "breathing" (radial) oscillations can be described in terms of the interaction energy between layers (γ), the bending stiffness of the composing material (D), the interlayer separation (h), the length of the composing sheet (B), the density of the material (ρ), the internal radius (r_0) and the difference between the inner (p_i) and outer (p_e) pressure; the resulting expression for the vibrational frequency involves a geometric parameter α built from B, h and r_0. For a CNS of 10 nm length this expression leads to a frequency of almost 60 GHz. Much has yet to be done regarding the mechanical properties of carbon nanoscrolls. For instance, the ultimate tensile strength and strain and the fracture pattern of scrolls have yet to be reported.
Moreover, both studies on the application of compressive stress remark that the results might depend on the scroll length: it is possible that sufficiently long CNSs bend under compression. One last example is that it remains to be tested whether elements other than nickel could improve the mechanical properties of carbon nanoscrolls. VI. APPLICATIONS Xie et al. (2009) have built a CNS-based electronic device in which a nanoscroll was placed between two metallic contacts over a SiO2/Si substrate. The advantage of using a CNS instead of a CNT lies in the ability of nanoscrolls to carry current through all of their layers, while MWCNTs only carry current through the outermost layer, since the inner ones do not make direct contact. It was shown that a CNS was able to withstand a current density of up to 5×10⁷ A/cm², indicating its suitability for circuit interconnects. A detailed theoretical study of quantum electron transport in CNSs was carried out by Li et al. (2012), showing a strong dependence of the conductance on the nanoscroll radius as well as on the temperature. Another possible application of CNSs is as electroactuators (structures which respond mechanically to the addition or removal of charge) (Rurali et al. (2006)). The authors carried out their study by adding/removing charges (up to ±0.055e/atom) and then performing geometry optimization using density functional theory (DFT) methods. In the axial direction, the reported length variation for CNSs (∼0.4%) was comparable to those reported for CNTs (Verissimo-Alves et al. (2003)). The authors also showed that the extra charge accumulated at the edges and at the central part of the scroll. Since the authors considered little more than one turn of scroll, only charges at the edge contributed to the interlayer expansion, suggesting that even larger expansion could be possible for CNSs with more layers, in which the central charges would also play a role. This large radial actuation has been proposed as a method to control the flow rate in nanoscopic water channels, nanofilters and ion channels (Shi et al. (2010b)). Another possible way to produce a mechanical response from scrolls is by applying electric fields. Shi et al. (2010a) reported that the application of an electric field causes a decrease in the interaction between scroll layers, which in turn could be used to controllably roll/unroll CNSs. Another application that has received considerable attention is the use of CNSs for gas storage, particularly hydrogen. Coluci et al. (2007) were the first to investigate the use of carbon nanoscrolls as a medium to store hydrogen, using classical grand-canonical Monte Carlo simulations. The authors reported that the interlayer galleries of the scrolls were only available for H2 storage for interlayer spacings larger than 4.4 Å. For instance, at 150 K, the gravimetric storage of CNSs was predicted to increase from 0.9% to about 2.8% for crystal-packed scrolls and from 1.5% to 5.5% for scroll bundles (see fig. 2(c)) when the interlayer spacing increased from 3.4 Å to 6.4 Å. The authors used a fixed pressure of 1 MPa in all their calculations. One possible way to experimentally realize this increase in interlayer spacing is by intercalating the scroll layers with alkali atoms (fig. 2(d)), and Mpourmpakis et al. (2007) reported that this method indeed works well. Coluci et al. (2007) also reported that the gravimetric storage decreased greatly when increasing the temperature to 300 K, and Braga et al.
(2007) used molecular dynamics simulations to show that it is possible to cyclically absorb and release hydrogen from scrolls by decreasing and then increasing the temperature. Huang and Li (2013) also studied possible mechanisms for releasing hydrogen from the scrolls, and reported that other effective methods include twisting the scrolls and decreasing their interlayer distance. Finally, note that calculations performed by Peng et al. (2010) indicated that CNSs with expanded interlayer distances could also be used to store methane or trap carbon dioxide. It should be noted that recently the use of CNSs in a variety of experimental devices has been gaining momentum. For instance, scrolls have already been used as supercapacitors (Zeng et al. (2012); Yan et al. (2013)), in batteries (Tojo et al. (2013); Yan et al. (2013)), in catalysis (Zhao et al. (2014)) and in sensors (Li et al. (2013a)). When compared to similar planar-graphene-based devices, the CNS-based ones were found to present superior performance, a difference attributed to the open-ended carbon nanoscroll topology. VII. NANOSCROLLS FROM SOME OTHER MATERIALS The successful isolation of single-layer graphene (Novoselov et al. (2004)) has created a revolution in carbon-based materials. In part because of this, there is renewed interest in other two-dimensional materials, which has led to some very significant synthesis advances. Hexagonal boron nitride nanoscrolls (BNNSs) were theoretically predicted some years ago (Perim and Galvao (2009)) and have since been synthesized (Chen et al. (2013)). The first reported method consists in exposing hBN crystals to an intense solvent flow inside a spinning disc processor. The shear forces are believed to exfoliate hBN layers, which are then scrolled, forming BNNSs (Chen et al. (2013)). However, the yield of this method is considerably low (∼5%). A simpler method has been proposed, in which molten hydroxides are used to exfoliate hBN crystals (Li et al. (2013b)). A NaOH/KOH mixture was added to hBN and then thoroughly ground, subsequently being heated at 180 °C. The analysis of the product revealed the presence of BNNSs, however at even lower yields than the previous method. BNNSs have also been produced by the interaction between exfoliated hBN and lithocholic acid (Suh et al. (2014)), in a self-assembly process. The considerably lower yields of these methods compared to the ones used for producing CNSs indicate the greater difficulty of exfoliating hBN crystals, due to their stronger interlayer interactions. Carbon nitride nanoscrolls (CNNSs) have similarly been predicted to be stable (Perim and Galvao (2014)). Three different graphene-like carbon nitride structures have been successfully synthesized (Li et al. (2007)) with varying pore sizes, and simulations predict that all three of them should be able to form stable nanoscroll structures. The existence of pores in these structures reduces the contact area between overlapping layers, leading to weaker van der Waals interactions and thus less stable nanoscrolls. On the other hand, it also means a lower mass density and easier intercalation, which means these scrolls could be even better suited to hydrogen storage and similar applications. Up to now, no CNNS synthesis has been achieved. It should be stressed that, in principle, under favorable circumstances, any layered material should be able to form scrolls. Therefore, we should expect new forms of nanoscrolls to be reported in the coming years, opening the possibility for exciting novel technological applications.
VIII. SUMMARY In summary, nanoscrolls are unique nanostructures due to their open-ended morphology, which creates the possibility of many different technological applications. However, synthesis difficulties have long precluded these structures from being more widely investigated. Recent advances in synthesis techniques have changed this scenario: interest in nanoscrolls is re-emerging and practical applications are becoming a reality. The recent experimental realization of boron nitride nanoscrolls and other scroll materials opens new perspectives in the study of these nanostructures. Also, there are many other potential candidates for the formation of novel scrolled structures. In light of this, we can expect in the near future not only significant advances in the production and applications of carbon nanoscrolls, but also the emergence of nanoscrolls from different materials with their own unique properties that can be exploited as the basis of new applications. We hope the present work can stimulate further studies along these lines. We thank R. Rurali and D. Tomanek for many helpful discussions.
4,908.4
2014-12-23T00:00:00.000
[ "Materials Science", "Physics" ]
Exact out-of-equilibrium central spin dynamics from integrability We consider a Gaudin magnet (central spin model) with time-dependent exchange couplings. We explicitly show that the Schrödinger equation is analytically solvable in terms of generalized hypergeometric functions for particular choices of the time dependence of the coupling constants. Our method establishes a new link between this system and the Wess-Zumino-Witten model of conformal field theory. Introduction The problem of describing the coherent out-of-equilibrium evolution of driven many-body quantum systems has attracted a great deal of attention in recent years. This interest was spurred by recent advances in cold atoms and semiconductor physics, which made experimental observations possible. The attention of the community has mostly been devoted to the investigation of two limiting cases: the quench regime [1], where the variation of the parameters of the Hamiltonian is very fast with respect to all the other time scales of the problem, and the adiabatic regime, where it is slow (see [2] for a recent overview). Outside of these two extreme situations, very little is known, either analytically or numerically. This is unfortunate, because the driven regime has the most potential for novel physics. There are natural obstacles to direct studies of non-equilibrium quantum many-body systems: only a few solutions of the Schrödinger equation with time-dependent parameters are known even in the single-particle case. The situation is even worse in the many-body case. While integrable many-body systems provide considerable insight into equilibrium physics in one dimension, their non-equilibrium behavior is still difficult to analyze because of the complexity of their solution. Numerical treatments of time-dependent systems (e.g. by the time-dependent density-matrix renormalization group) are limited by the quantum entanglement which grows while the system evolves in time starting from some initial state [3][4][5][6]. To our knowledge there is only a single subclass of systems where the full time dependence of parameters can be kept to some extent and where the dynamics can be understood in its full complexity. The dynamics of these systems can be mapped onto the dynamics of different systems which have no explicit time dependence of parameters, by an appropriate transformation of the coordinates, time, and wave functions [7][8][9][10][11][12]. While this class of models is limited, it provides a clue about certain interesting and fundamental dynamical effects, like e.g. dynamical fermionization, and moreover it does not rely on integrability of the time-independent model. To go beyond this class of models some new ideas are needed. Here we make an effort in this direction by suggesting to use the fact that the wave functions of a broad class of many-body quantum-mechanical models can be represented in terms of the correlation functions of some field theories with known properties. These connections were discovered and used in the context of the quantum Hall effect, where the quantum wave functions can be related to conformal blocks of two-dimensional conformal field theories (CFTs) [13]. Interestingly, the wave functions of some integrable spin models can also be related to the correlators of certain CFTs [14,15]. Here we extend and use these observations further to study the non-equilibrium dynamics of those spin models.
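Even without any analytical structure, the kind of driven problem considered below can be checked by brute-force numerical integration when the number of spins is small. The sketch below does this for a central-spin-type Hamiltonian with time-dependent couplings; the isotropic Heisenberg form of the coupling and the specific time-dependence protocol are assumptions made for illustration only and are not taken from equation (1) of this work.

```python
import numpy as np
from scipy.linalg import expm

# Spin-1/2 operators (hbar = 1)
sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = 0.5 * np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)
spin_ops = [sx, sy, sz]

def embed(op, site, n):
    """Embed a single-site operator acting on `site` in an n-spin Hilbert space."""
    mats = [np.eye(2, dtype=complex)] * n
    mats[site] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def central_spin_hamiltonian(J, n):
    """H = sum_j J_j S_0 . S_j (assumed isotropic central-spin form)."""
    H = np.zeros((2**n, 2**n), dtype=complex)
    for j in range(1, n):
        for op in spin_ops:
            H += J[j - 1] * embed(op, 0, n) @ embed(op, j, n)
    return H

n = 4                                   # one central spin + three bath spins
def couplings(t):
    # Illustrative protocol: bath spins drifting away, dipolar-like 1/d^3 decay.
    d = np.array([1.0, 2.0, 3.0]) + 0.3 * t
    return 1.0 / d**3

# Simple product initial state |down, up, up, up> (central spin flipped)
psi0 = np.zeros(2**n, dtype=complex)
psi0[int("1000", 2)] = 1.0

dt, t_max = 0.05, 20.0
psi, survival = psi0.copy(), []
for t in np.arange(0.0, t_max, dt):
    survival.append(abs(np.vdot(psi0, psi))**2)
    H = central_spin_hamiltonian(couplings(t + 0.5 * dt), n)   # midpoint coupling
    psi = expm(-1j * H * dt) @ psi      # exact propagator over one small step

print("survival probability at t =", t_max, ":", survival[-1])
```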
Since these spin models belong to a broader class of a so-called Gaudin systems, our observations can be applied to that class as well. The central ingredient of our approach here is the fact that the conformal blocks of the 2D Wess-Zumino-Witten (WZW) model are solutions of the Knizhnik-Zamolodchikov (KZ) equation. For a broader class of CFTʼs (without internal symmetry) these equations should be replaced by the Belavin-Polyakov-Zamolodchikov system. We believe that the approach we explore here can be generalized further for systems with a more general matrix-product like structure of the wave functions. Implementing the above ideas concretely, we investigate a model of N spin −1/2 degrees of freedom coupled by time-dependent exchange parameters J t ( ) Label 0 refers to the 'central spin' which is coupled to the − N 1 other (mutually uncoupled) spins. For time-independent couplings, this model is known as the central spin model, a Gaudin magnet [16][17][18]. Crucially, this Hamiltonian is directly relevant to experiments in quantum dots [19,20] and nitrogen vacancy centers in diamond [21], in which time-dependent couplings are intrinsic to the experimental protocols (respectively via time-dependent gate voltages and external electromagnetic fields). The model (1) is only one of the broad class of models where dynamics can be treated using our method here. Other Gaudin-type models can be directly studied in a similar way. The aim of this article is therefore twofold. On the one hand, we identify a time-dependent protocol for which it is possible to obtain analytical information (i.e. the exact many-body wavefunction) for a class of Hamiltonians related to the (1). While the requirements of this protocol are restrictive, they nonetheless allow to go beyond the adiabatic or sudden approximation. On the other hand, our technique points to an intriguing link between the timedependent central spin Hamiltonian and the WZW model, a well-known CFT, opening the door to further applications of CFT techniques to driven nonequilibrium physics. We note that here we will restrict our interests to the dynamics of the total spin-singlet subspace, = S 0 2 . This is a good starting point because (see [22,23] and the recent review [24]) this subspace plays a crucial role in quantum information theory: the decoherence-free dynamics naturally occurs in this subspace, while the qubits can be encoded into its basis states. Main results We begin from the fact that the conformal blocks of the WZW model [25] satisfy the KZ equations [26]. For a ( ) is the N-point holomorphic conformal block of primary field φ, while k is a number known as the level of the Kac-Moody algebra. If k is a positive integer, the WZW model is a rational CFT. Interestingly, there exist integral representations of solutions to the KZ equations that can be analytically continued to any nonzero complex k [27,28]. Choosing = k iv where ∈ v  and considering the ansatz ψ for a many-body wavefunction, we see that ψ t ( ) N can in fact be reinterpreted as a timedependent Schrödinger equation ψ is chosen to be the sole time-dependent parameter, It is shown in appendix A that this choice of time-dependent parameters z t ( ) 1) is uniquely dictated by the form of (1). Notice that the hermiticity of the Hamiltonian forces all the z i to be on the same line in the complex plane (for example, we can take them to be all real). Let us emphasize the main features of our approach. 
First of all, the main ingredient for an explicit solution of the time-dependent Schrödinger equation is a solution of the KZ equations that can be analytically continued to imaginary k. For small systems, this can be done explicitly, using standard CFT techniques. For larger systems, we can rely on a class of integral representations. Quite interestingly, these representations rely crucially on the integrability of the time-independent Hamiltonian, i.e. the off-shell Bethe equations. Therefore, the solubility of the time-dependent Schrödinger equation seems to be a signature of the underlying integrability of the model that survives also when the couplings are time dependent. Indeed, this interpretation is confirmed by the fact that, as we will discuss later on, the solvability of the time-dependent Schrödinger equation is not a special feature of the central spin model: our arguments apply also to the broader class of XXZ Gaudin magnets. Moreover, our results establish a new connection between the ( ) SU 2 WZW model (admittedly, for the quite unusual imaginary k case) and the time-dependent central spin Hamiltonian. It is worth to note here that the ( ) SU 2 WZW model is known to be related to integrable [14] and nonintegrable [29,30] time-independent spin Hamiltonians. Finally, it is important to stress that-by constructionthis approach works only if the time dependence of the J t ( ) i is finely tuned: essentially, the time evolution is 'geometric', i.e. ∫ dtH t ( ) can be written as a curvilinear integral in the space of the z j . Our paper is organized as follows. First of all, in section 3 we provide a detailed analysis of a simple system of four spins. Thanks to the connection between the central spin Hamiltonian and the WZW model, we are able to analyze the time of evolution of the subspace of zero total spin in terms of hypergeometric functions. In this way, we can see our approach explicitly in action and understand some mathematical property of our solution (i.e. completeness and nontriviality). Therefore, in section 4, we move to a more general setting: a N particle XXZ Gaudin magnet with time-dependent couplings (4). Here, we take advantage of an integral representation of the solution of the (generalized) KZ equations to provide an integral representation for the time-dependent many-body wavefunction. While this representation is not (yet) amenable to an quantitative evaluation, it allows us to consider two interesting situations: the adiabatic and the semiclassical limit, thus gaining insight on the completeness of our solutions (section 4.1) . Finally, we present our conclusions in section 5, while some of the more technical details are discussed in the appendices. A simple example: a four spins system The class of Hamiltonians under consideration has a quite specific time-dependent coupling constant (4). Moreover, as we will see, in the general case, while it is possible to write down an integral representation for the wavefunction, it is not easy to extract physical predictions from it. The reader could wonder if this class of Hamiltonians can be solved only because their physics is trivial or if, instead, we can expect some interesting phenomenology that might motivate a further investigation of these systems. In this section, we want to address this point by studying one quite simple representative of this class of Hamiltonians: a central spin Hamiltonian with four constituents, Since the total spin is conserved by the time evolution, we can restrict ourselves to the . 
In the following, we would like to show that, indeed, the WZW correlators provide solutions that describe the whole zero spin subspace. The computation of the four point conformal blocks Ψ z z z z ( , , , 3 of the WZW model is a standard exercise of CFT (see [25]). The detailed calculation is reported in appendix B, where Ψ z z z z ( , , , ) on a basis of the = S 0 2 subspace given by the two states We thus can write , where c 1,2 are constants determined by the initial conditions, while i k k . Therefore, as discussed above, the wavefunction 3 (with = k iv) is a solution of the time-dependent Schrödinger equation for Hamiltonian (5). As an example, let us consider the following protocol. At time t = 0, the spins S j , = j 1, 2, 3, are at a distance j from the central spin S 0 . Their couplings J j are taken to be As an application, an interesting quantity to look at is the modulus square of the overlaps of the wavefunction with the basis vectors , which are simply computed. We can expect that, if these overlaps are almost constant in time, then the time evolution is essentially trivial. As a signature of the nontriviality of the time evolution, we look the crossing of a t ( ) > ′ t t : this means that for < ′ t t the state i is more important than the state j, while the opposite is true for > ′ t t . Two interesting examples are shown in figure 1 for the dipolar interaction (left) and for the shell model (right). In both cases, the initial condition is chosen in such a way that it is possible to observe one (dipole interaction) or two (shell model) crossings of the overlaps a t ( ) i . Another interesting quantity to understand the dynamics of the system is the fidelity (6)). General solution It this section, we would like to outline our strategy for getting an integral representation of the solution to the KZ equations (2) or, more precisely, a generalized version of these equations. Our arguments are a straightforward generalization to the ones of [15,27,32]. The XXZ Gaudin magnets are defined from the Gaudin algebra , while the λ i are complex numbers. Notice that X and Z are not arbitrarily functions, but they have to satisfy a set of quadratic equations that come from the Jacobi identities for the generators of the Gaudin algebra (8). The solutions to these equations are known, and the simplest one is the rational one λ λ (for a detailed discussion of Gaudin magnets, the reader is referred to [18]). For example, in order to describe a spin or fermionic system, the Here z i are a set of complex numbers (the disorder variables) that are directly linked to the coupling constants of the Hamiltonian, while S i are the familiar spin operators. Instead, a bosonic system is described by a ( ) where the K i satisfy a ( ) su 1, 1 algebra. Of course, mixed representations are also available, in order to describe a system where the spin degrees of freedom interact with a bosonic bath (i.e. the Dicke model and its generalizations). In the following, we will use the notation These H i are a family of commuting operators, and each of them can be considered as the Hamiltonian of a quantum system. Since H w z ( , ) can be diagonalized using the algebraic Bethe Ansatz, the Hamitonians H i are exactly solvable. In the rational case, the integrals of motion for a spin system reduce to satisfy the Schrödinger equation with Hamiltonian . Quite nicely, this integral representation of the solution is based on the integrability of the model. 
Let us introduce the Bethe state (λ λ λ The functions h i and α f can be derived from a Yang-Yang action: more precisely, Usually, one imposes the on-shell condition = α f 0 (Bethe equations), thus obtaining a basis of eigenstates of the Hamiltonian. Here instead, we take advantage of the existence of the action  and we define where the closed contour γ is chosen in such a way that the branch of the integrand at the end point of γ is the same as that at the initial point. It is quite easy to show that (20) is indeed a solution of (16). Notice that due to the multi-valuedness of the integrand, the path of integration is usually highly nontrivial, this being the major technical difficulty of our approach. In the rational case, these integrals represent multivariable hypergeometric functions [33,34], and our hope is that this connection could be exploited to evaluate explicitly (20). The k →0 limit and the completeness of the integral representation Unfortunately, a direct evaluation of (20) is beyond our present ability. However, in the → k 0 limit, the only contribution to the integral comes from the stationary points of the action (19), i.e. from the on-shell Bethe state [35]. Therefore, it is quite interesting to discuss the physical meaning of this limit for our time-dependent Schrödinger equation. The most natural interpretation of this limit is as an adiabatic one. As an example, let us consider the central spin limit with coupling constant (4). If we parametrize = k i v and Ω is the corresponding dynamical phase. Moreover, by choosing properly the contour γ, we can select any eigenstate, and therefore our solution is complete in the adiabatic limit. This is at least a strong hint that our solution is complete for any time dependency of the coupling constants. Quite interestingly, the → k 0 limit can be interpreted also as a semiclassical limit, if we use a different parametrization. Indeed, if where  0 has the dimensions of an action, we have a central spin model with coupling constants Conclusions In this article, we have studied a class of time-dependent Hamiltonians which possess manybody wavefunctions given by solutions to the associated KZ equations. The underlying timeindependent integrability and link with CFT allow us to provide an explicit integral representation of the solution to the time-dependent Schrödinger equation. For a small system, these solutions reduce to the familiar hypergeometric functions, allowing us to easily study the dynamics of the system, as we did for a four spins model. This specific example shows that the exact solubility of these time-dependent systems is not due to their triviality. Instead, their physics appears to be quite rich, as you could expect for a full time-dependent problem. Of course, from a practical point of view, the most interesting thing would be to solve a Nparticle model. In order to do so, we have to deal with the complicated integral (20). Unfortunately, we are not able to do it at the present time. However, in the rational case, this integral reduces to generalized hypergeometric functions, that have been extensively studied in the mathematical literature. Another possible line of investigation could be to compute the corrections to the adiabatic limit, that could teach us something about this complicated integral representation. 
While our construction works only if the time dependence of the Hamiltonian is finely tuned, it provides an intriguing starting point for understanding the consequences of quantum integrability in time-dependent physics. Here, we would like to elaborate more on this point. In particular, one could wonder whether there exists a more general wavefunction ψ that satisfies a Schrödinger equation with a Hamiltonian of this form. It turns out that such an identity indeed holds. It can be proved by a brute-force calculation (it is just a straightforward algebraic manipulation), but it is also possible to obtain it in a more elegant way. First of all, we notice that the left-hand side has poles at the points z_i; from the behaviour at these poles we get A = 1. Therefore, the KZ equation reduces to
4,187.4
2012-11-26T00:00:00.000
[ "Physics" ]
Polariton condensation in photonic crystals with high molecular orientation We study Frenkel exciton-polariton Bose-Einstein condensation in a two-dimensional defect-free triangular photonic crystal with an organic semiconductor active medium containing bound excitons with dipole moments oriented perpendicular to the layers. We find the photonic Bloch modes of the structure and consider their strong-coupling regime with the excitonic component. Using the Gross-Pitaevskii equation for exciton polaritons and the Boltzmann equation for the external exciton reservoir, we demonstrate the formation of a condensate at the points in reciprocal space where the photon group velocity equals zero. Further, we demonstrate condensation at non-zero momentum states for TM-polarized photons in the case of a system with incoherent pumping, and show that the condensation threshold varies for different points in reciprocal space, controlled by the detuning. Introduction The large exciton binding energy and oscillator strength of organic materials embedded in light-confining structures such as optical cavities make it possible to achieve the giant Rabi oscillation energies desired for room-temperature exciton-polariton (EP) condensation [1][2][3]. In this respect, two-dimensional (2D) photonic crystals (PCs), which can be easily integrated with organic materials [4,5], are a current area of focus. The low group velocity of the optical Bloch modes at the edge of the conduction band provides a long lifetime of the slow waves and thus seems promising for the realization of polariton condensation, similar to the enhancement of coherent emission in defect-free photonic crystals [6][7][8][9]. A considerable number of organic semiconductors, such as thiophene/phenylene co-oligomer single crystals, 1,4-bis(5-phenylthiophen-2-yl)benzene and 2,5-bis(4-biphenyl)thiophene [4,5], have transition dipole moments oriented along the vertical direction with respect to the main crystal face. For this reason, these organic crystals are inappropriate for strong interaction with the optical modes of a Fabry-Perot cavity, where the electric field is oriented perpendicular to the dipole moment. Instead, as we will show for the case of 2D PCs, transverse magnetic (TM) modes have the electric field component perpendicular to the plane of the crystal and can therefore be strongly coupled with the excitons. It can be noted that there exist other materials, such as the cyano-substituted compound 2,5-bis(cyano biphenyl-4-yl)thiophene, in which the transition dipole moment lies in the in-plane direction with respect to the crystal face. While such materials can be assumed to demonstrate strong coupling with Fabry-Perot cavities or the transverse electric (TE) modes of PCs [10], supporting Γ-point condensation in reciprocal space, this case is trivial and beyond the scope of our manuscript. It should be noted that, unlike the nanostructure proposed in [10], here we study structures whose fabrication is easier. In this manuscript, we consider a 2D PC represented by a triangular lattice of pillars supporting the emergence of band gaps for both TE and TM polarizations. In principle, 2D PCs provide two types of exciton-photon quasiparticles. The first type results from coupling between excitonic and photonic modes below the
light cone (free photon dispersion), with such modes called guided PC polaritons. The second type, called radiative polaritons, involves the excitons lying above the light cone. Polaritons of the latter type can be effectively analyzed by angle-resolved spectroscopy, under the condition that the exciton-photon coupling is much greater than the intrinsic photon linewidth [11]. These two modes can be employed differently; in particular, the radiative modes can be used as efficient reflectors [12], whereas the guided modes can be utilized for the realization of strong light-matter interaction in devices such as vertical-cavity surface-emitting lasers [13] and polariton lasers [14]. The main advantages of 2D PCs are a cheap and relatively simple fabrication technique, a scalable and stable technology, and relatively strong in-plane light confinement, which promotes strong light-matter interaction. The disadvantages of 2D PCs are the lower confinement compared to 3D PCs and possible losses along the z direction. Both types of polaritons can be made of photons representing slow Bloch modes (SBMs), confined in 2D PCs in the vicinity of the extremum points at the edge of the photonic band gap, where the group velocity approximately equals zero [6]. The slow velocity of the modes results in long optical paths, or in other words, nearly total light confinement. This effect has been used for mode synchronization in organic lasers [7], and has also proved to be efficient for lasing threshold reduction in vertical-cavity surface-emitting lasers [15] and in 2D PC lasers in the strong coupling regime. Notably, such lasers exhibit much lower threshold gains [9]. The lifetime of the Bloch modes is mostly determined by the lateral quality factor of the PC, which itself depends on the size and quality of the nanostructure. The vertical quality factor can be considered infinitely high for 2D PCs of sizes greater than a hundred microns. Photonic band structure Our system schematic is presented in figure 1. The structure consists of aluminum nitride (AlN) high-aspect-ratio micropillars of radius 450 nm forming the photonic crystal, with a lattice constant of 1 μm and a refractive index (n) of 2.15. The transparent layer consists of a polymer material with optical properties close to those of air. Generally, the substrate can provide optical confinement in the vertical direction and should therefore be chosen from materials with refractive indices lower than that of AlN. One can choose Si3N4 as the substrate material, which is compatible with AlN growth technology. We consider the system to be effectively 2D in our calculations and neglect the influence of the substrate on the optical properties of the system. Indeed, the design assumes the light momentum lies in the x-y plane, so effectively there is no light propagation in the substrate. For this reason we model the system as an effectively 2D PC. Further, in order to make the light-matter coupling effective and strong, the thin active layer is firmly set onto the PC. The thickness of the active organic layer is taken to be three orders of magnitude smaller than the photonic crystal thickness. The structure is exposed to an electromagnetic wave incident at the edge of the PC, nearly parallel to the layers of the structure.
It should be noted that normal incidence does not provide light confinement, which is necessary for a finite photon lifetime in the system, and is thus outside the focus of our study. We employ a standard Fourier method in order to decouple Maxwell's equations and find the independent behavior of the TE and TM polarizations, using separate equations for the magnetic and electric fields. Both the magnetic and electric fields are coplanar to the z axis. In the case of TE polarization, the time-harmonic field components can be written in the form of equations (1) and (2) [16,17], where r = (x, y) is the coordinate in the xy plane. Inserting (1) and (2) into Maxwell's equations in the frequency domain, we find equation (3). Further, we use the Bloch decomposition (equation (4)), where the summation is taken over the reciprocal lattice vectors G = n b_1 + m b_2, with n and m integer numbers. Inserting (4) into (3), we obtain the eigenvalue problem for the TE modes, and by substituting the corresponding ansatz for the other polarization we arrive at the analogous eigenvalue problem (equation (10)). It should be noted that the Fourier approach employed here is only valid for perfectly periodic photonic crystals. In reality, one should also account for the fact that the 2D PC is not infinitely large or lossless. For dielectric materials with small absorption coefficients, one can apply an effective Fourier approach in which the imaginary part of the permittivity is treated as a perturbation [6]. In order to account for the finite lifetime of photons, we use experimental data for the absorption coefficient [18], which gives an imaginary part of the permittivity of about 10⁻⁴. We then use the complex-valued permittivity and find complex eigenfrequencies, which give us the photon lifetime. The finite size of PCs and the imperfections of a real sample lead to a reduction in photon lifetime, an effect which is important for the Bloch waves under the light cone. For these modes, material losses do not reduce the photon lifetime, and imperfections of the geometry can be considered as the only source of dissipation. We include this mechanism of energy relaxation in our model phenomenologically, as discussed below. When we apply equation (10) to model our system (figure 1), we find that the structure favors two energy minima in the spectrum of the TM modes (see figure 2). It is important that the TM mode supports the emergence of the SBM, often associated with the emergence of a Van Hove singularity [19]. One of the minima is located at the high-symmetry M-point of the Brillouin zone, and the other lies above the light cone in the vicinity of the K-point. Such modes (lying above the light cone) are usually referred to as radiative or quasi-guided modes, since they can radiate into free space. The modes lying below the light cone (guided modes) are confined and can only radiate through imperfect sidewalls of the crystal or other disorder in the PC structure [14]. As shown in [10], fluctuations of the shapes and sizes of the basic elements (such as pillars, holes or rods) forming the PC lead to a variation of the photonic band gap edge. However, as was also shown there, a randomness of around 20 nm leads to a deviation of the PC band edge of less than 2%. With modern lithography and plasma etching technologies, one can easily achieve a resolution better than 20 nm, which allows us to disregard these fluctuations.
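For readers who want to reproduce this type of calculation, the sketch below sets up a minimal plane-wave-expansion eigenvalue problem for the modes with the electric field along z (the TM polarization in the convention used here) for a triangular lattice of dielectric pillars. It is a generic textbook formulation rather than the specific equations (1)-(10) of this work; the plane-wave cutoff and the sampled k-point are arbitrary choices, and only the lattice constant, pillar radius and refractive index follow the structure described above.

```python
import numpy as np
from scipy.linalg import eigh
from scipy.special import j1

# Structure parameters taken from the text: a = 1 um, pillar radius 450 nm, n = 2.15
a = 1.0
r_p = 0.45
eps_rod, eps_bg = 2.15**2, 1.0
f = 2 * np.pi * r_p**2 / (np.sqrt(3) * a**2)        # pillar filling fraction

# Reciprocal lattice vectors of the triangular lattice
b1 = (2 * np.pi / a) * np.array([1.0, -1.0 / np.sqrt(3)])
b2 = (2 * np.pi / a) * np.array([0.0, 2.0 / np.sqrt(3)])
N = 5                                                # cutoff: (2N+1)^2 plane waves
G = np.array([n1 * b1 + n2 * b2
              for n1 in range(-N, N + 1) for n2 in range(-N, N + 1)])

def eps_fourier(g_vec):
    """Fourier coefficient of the permittivity for circular rods."""
    g = np.linalg.norm(g_vec)
    if g < 1e-12:
        return f * eps_rod + (1 - f) * eps_bg
    return (eps_rod - eps_bg) * 2 * f * j1(g * r_p) / (g * r_p)

# Permittivity matrix eps(G - G')
E = np.array([[eps_fourier(G[i] - G[j]) for j in range(len(G))]
              for i in range(len(G))])

def tm_frequencies(k_vec, n_bands=6):
    """Lowest dimensionless frequencies (omega*a / 2*pi*c) at Bloch vector k_vec."""
    A = np.diag(np.sum((k_vec + G)**2, axis=1))      # |k+G|^2 on the diagonal
    w2 = eigh(A, E, eigvals_only=True)               # A v = (omega/c)^2 E v
    return np.sqrt(np.clip(w2, 0, None))[:n_bands] * a / (2 * np.pi)

# Example: lowest bands at an M point (midpoint of a reciprocal lattice vector)
print(tm_frequencies(0.5 * b1))
```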
Quality factor The main condition for the strong coupling regime, which provides for EP formation, can be roughly stated as Ω_R ≫ 1/τ_C, 1/τ_X, where Ω_R is the Rabi frequency, standing for the rate of energy exchange between the excitonic and photonic components, and τ_C and τ_X are the lifetimes of the photons and excitons, respectively. In the case of organic polaritons, based on Frenkel excitons with typically long lifetimes, the lifetime of the particles is mostly determined by the photonic component. By definition, the latter is set by the quality factor of the PC and the frequency, τ_C = 2πQ/ω_real. The relation between the optical pumping rate and the EP inverse lifetime determines the possibility of Bose-Einstein condensate (BEC) formation. Clearly, an increase of both the lifetime and the pumping rate allows one to reach the critical polariton concentration for BEC formation. In 2D PCs, the full quality factor Q is found from the inverse sum of the vertical and lateral quality factors, 1/Q = 1/Q_v + 1/Q_l. The lifetime of the Bloch modes which lie above the light cone and can couple to free-space modes is determined by the vertical quality factor Q_v, which characterizes the radiation losses in the perpendicular direction [20] and is expressed through the real (ω_real) and imaginary (ω_im) parts of the frequency of the Bloch modes [21]. We can estimate Q_v of a 2D PC of finite size L using the assumption that such a PC supports modes with a mean in-plane momentum k such that kL ≈ 1. It is known that if, approximately, L > 100 μm, then Q_v > 50 000 [20]. Thus the total Q is mainly determined by the lateral losses. In turn, Q_l depends on the band structure, size, and disorder of the nanostructure, and it can be estimated with a phenomenological model (equation (11)) [21] expressed in terms of the modal reflectivity R, the band curvature α in the vicinity of the minimum (the second derivative of the dispersion, so that the group velocity can be written as v_g = αk), an integer number p, and the phase f_r of the modal reflectivity at the edges of the 2D PC. The phase and momentum are connected via the expression kL = pπ − f_r, which allows equation (11) to be simplified; obviously, the resulting quality factor is determined by the size of the PC and the group velocity. Then, for R = 0.99 the lateral quality factor close to the minimum at the band edge can be estimated as Q_l ≈ 2000, and the polariton lifetime in this structure is τ_p ≈ 5 ps. Note that the typical quality factor of AlN hexagonal crystals is high enough for strong coupling, with such structures commonly used as microwire waveguides [13,22]. It is, however, insufficient to provide a thermal state for BEC, and therefore we will consider nonequilibrium condensation; thus, our system is described with a kinetics approach. Another beneficial property of AlN is that it has a small lattice mismatch with other nitride-based semiconductor alloys [23]; moreover, stress-free AlN layers can be easily grown on Si/SiC substrates [24]. Dispersion relation In organic active media, one can observe tightly bound Frenkel excitons with a typical size of one angstrom. This fact allows us to neglect the influence of the periodic structure of the 2D PC on the exciton wave function and to consider non-uniform electric fields only. EPs emerge as mixed modes of the electromagnetic field and the exciton resonance, with the dispersion of the coupled exciton-photon system given by equation (13) [10]. The molecular packing density entering this expression can be estimated as N/V ≈ 10⁻³ Å⁻³, with a transition dipole moment |μ| ≈ 25 Debye.
Estimates based on formula (13) give overestimated values of more than 1 eV, which do not comply with experimental data. This discrepancy is due to the fact that about 99% of the excitons remain uncoupled. For our simulations, we choose an experimentally demonstrated value of the Rabi energy typical for a planar geometry with an organic active layer, ħΩ_R ≈ 100 meV [4,25], which corresponds to 1% of coupled excitons. While the Rabi energy in typical Fabry-Perot cavities with organic active regions varies between 100 and 800 meV [26][27][28], in our case it takes the lower limit, since we have laminated the active layer on top of the PC. This configuration does not allow full overlap between the electric field and the dipole to be achieved. Such values of Ω_R, typical for organic microcavities, are much higher than in GaAs/InGaAs quantum-well-based microcavities, due to the extremely large transition dipole moment typical of organic materials [26][27][28]. In a Fabry-Perot microcavity the electric field is localized at the quantum well, which plays the role of a defect in a 1D PC. In the case of a defect-free 2D PC, in contrast, the active region covers the whole surface of the sample; therefore the overlap of the dipole and the photon field in each elementary cell is small, yet for the whole sample the Rabi constant is quite high. Figure 2(a) shows the results of the photonic band gap calculation: the red and blue lines correspond to TE and TM modes, respectively, and the black line shows the EP dispersion. The latter curve has two minima, which can be considered as traps for polaritons, where their condensation might take place. It should be noted that we do not describe the polaritons based on TE modes since, first, most organic active regions have dipole moments oriented perpendicular to the surface of lamination, and second, the band gap for TE modes is shallower than the trap for the polaritons, resulting in a nonzero group velocity and instability of the condensate. Condensation kinetics After finding the bare EP dispersion, we can describe the EP dynamics within the mean-field approximation, where the EP field operator Ψ̂(r, t) is averaged over the z-direction and treated as the classical variable ψ(r, t), with Fourier image ψ(k, t). The corresponding equation of motion, a driven-dissipative Gross-Pitaevskii equation (equation (14)) [30], is written in terms of the inverse Fourier transform (through which the polariton dispersion is applied in reciprocal space), a parameter α describing the strength of particle-particle interactions, and the polariton lifetime τ. The interaction constant can be estimated as α ≈ (10⁻²²/L) eV cm², where L is the thickness of the active layer [1]. We use a value which is three orders of magnitude smaller than what one usually has in GaAs microcavities [29]. On the one hand, such a small value of α does not lead to a significant blueshift; on the other hand, the main driver of condensation is still the cubic term in equation (14). The term −iψ/(2τ) accounts for the radiative decay of particles. Safely assuming that the exciton lifetime τ_X lies in the nanosecond range and is much greater than the photon lifetime, we consider that the polariton lifetime τ is determined mostly by the microcavity photon lifetime, τ_C = 1/Im[ω_C(k)], where τ_C is set by the material properties and geometry of the PC (see the discussion above). Now we model the evolution of the exciton density in the reservoir, n_X, using a rate equation (equation (15)) [30] involving the exciton lifetime τ_X, the incoherent pumping power P, and the rate γ of polariton formation fed by the excitonic reservoir.
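Since equations (14) and (15) themselves are not reproduced above, the following sketch should be read as an illustration only: it integrates a standard driven-dissipative Gross-Pitaevskii equation coupled to a reservoir rate equation, with the terms described in the text (a kinetic term applied through the Fourier transform, an interaction term α|ψ|², decay −iψ/(2τ), and gain from the reservoir with rate γ and pump P). The exact form of the equations and all numerical values are assumptions for the example, not the parameters of this work.

```python
import numpy as np

# Illustrative 1D split-step integration of a driven-dissipative
# Gross-Pitaevskii equation coupled to an incoherent exciton reservoir.
# hbar = 1; every parameter below is a placeholder value.
nx, box = 256, 50.0
x = np.linspace(-box / 2, box / 2, nx, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(nx, d=box / nx)

m_eff = 1.0                 # effective polariton mass (parabolic dispersion)
alpha = 0.005               # polariton-polariton interaction strength
tau, tau_x = 5.0, 100.0     # polariton and reservoir exciton lifetimes
gamma = 0.05                # scattering rate from reservoir into the condensate
P = 0.2                     # homogeneous incoherent pump
dt, nsteps = 0.02, 5000

E_k = k**2 / (2 * m_eff)                        # dispersion used in k-space
psi = 1e-3 * (np.random.randn(nx) + 1j * np.random.randn(nx))   # seed noise
n_x = np.zeros(nx)                              # reservoir exciton density

for _ in range(nsteps):
    # half kinetic step in reciprocal space
    psi = np.fft.ifft(np.exp(-1j * E_k * dt / 2) * np.fft.fft(psi))
    # local step: interaction, decay, and gain from the reservoir
    dens = np.abs(psi)**2
    local = -1j * alpha * dens + 0.5 * (gamma * n_x - 1.0 / tau)
    psi = psi * np.exp(local * dt)
    # reservoir rate equation: dn_x/dt = P - n_x/tau_x - gamma * n_x * |psi|^2
    n_x += dt * (P - n_x / tau_x - gamma * n_x * dens)
    # second half kinetic step
    psi = np.fft.ifft(np.exp(-1j * E_k * dt / 2) * np.fft.fft(psi))

print("final condensate density:", np.mean(np.abs(psi)**2))
print("final reservoir density :", np.mean(n_x))
```

Above a certain pump P the stimulated gain term overcomes the decay 1/τ and the condensate density grows to a steady value, which is the threshold-like behaviour discussed next.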
Using (14) and (15) we calculate polariton distribution in the reciprocal space ( figure 3). The colormaps demonstrate that EP condensation occurs at nonzero momenta states. Indeed, EPs condense at the minima, where the photon group velocity turns into zero. It can be seen that in both types of points, with one located at the M-point in kspace and the other located between Γ and K points (see figure 2), we observe a threshold-like behavior. At small, underthreshold pumping powers, the particles are thermally distributed at high energies above the ground state(s), as seen in figure 3(a). The minima remain nearly unoccupied. With an increase of pumping power ( figure 3(b)), the particles start to accumulate at the inflection points of the dispersion and we observe the bottleneck effect [31,32]. The last panel ( figure 3(c)) corresponds to the above-threshold pumping, when the particles start to Bose-condense at the minima. Figure 4 illustrates that the condensate formation varies for different points in k-space. We attribute this to the difference in the detunings of exciton energy and PC photon energy. Consequently, as expected, the lessdetuned M-point is more susceptible to condensation and exhibits a lower threshold. Discussion and conclusions We have demonstrated the formation of organic exciton polaritons in a triangular lattice of AlN pillars, forming a 2D photonic crystal, and shown that BEC can take place at the minima of the band diagram where photon group velocity equals zero. Such dispersion acts as a set of traps for particles, and it can be employed to achieve polariton condensation at non-zero momenta, which may be useful, for example, in valleytronics [33] and for spontaneous symmetry breaking. It should also be mentioned that one can replace our periodic (solid state) crystal with an optical lattice produced by crossed laser beams, as in cold atomic systems. Then it becomes easy to in situ vary the optical properties of the PC. In the framework of our model, we found different particle densities at different points in k-space, controlled by the varying exciton-photon detuning at different points. In contrast to BEC in conventional quantum wells based on inorganic semiconductors, here organic materials with high molecular orientation provide selective coupling with TM (as opposed to TE) polarized modes and produce strong coupling due to the giant magnitude of the dipole moment, as opposed to regular inorganic excitons. Exciton polaritons can be studied in various materials, such as traditional III-V semiconductors and 2D dichalcogenide monolayers [34,35]. However, organic polymers can be distinguished by relatively simple fabrication procedure (in contrast to expensive MBE technology). Moreover, polymers provide higher light and matter cross-section, which results in higher Rabi energies and relatively high operation temperatures. One of the drawbacks is that, as mentioned before, the number of coupled excitons is quite low in organic materials (1%). It reduces the actual value of the Rabi splitting. This drawback might be overcome in future if one realizes how to enforce the interaction between the molecular excitons and thus involve more molecules into the process of coupling with light.
4,671.8
2017-02-26T00:00:00.000
[ "Physics" ]
SUM: A Benchmark Dataset of Semantic Urban Meshes Recent developments in data acquisition technology allow us to collect 3D texture meshes quickly. Those can help us understand and analyse the urban environment, and as a consequence are useful for several applications like spatial analysis and urban planning. Semantic segmentation of texture meshes through deep learning methods can enhance this understanding, but it requires a lot of labelled data. The contributions of this work are threefold: (1) a new benchmark dataset of semantic urban meshes, (2) a novel semi-automatic annotation framework, and (3) an annotation tool for 3D meshes. In particular, our dataset covers about 4 km2 in Helsinki (Finland), with six classes, and we estimate that we save about 600 hours of labelling work using our annotation framework, which includes initial segmentation and interactive refinement. We also compare the performance of several state-of-theart 3D semantic segmentation methods on the new benchmark dataset. Other researchers can use our results to train their networks: the dataset is publicly available, and the annotation tool is released as open-source. Introduction Understanding the urban environment from 3D data (e.g. point clouds and 3D meshes) is a long-standing goal in photogrammetry and computer vision [1,2]. The fast recent developments in data acquisition technologies and processing pipelines have allowed us to collect a great number of datasets on our 3D urban environments. Prominent examples are Google Earth [3], texture meshes covering entire cities (e.g. Helsinki [4]), or point clouds covering entire countries (e.g., the Netherlands AHN [5]). These datasets have attracted interest because of their potential in several applications, for instance, urban planning [6,7], positioning and navigation [8,9,10], spatial analysis [11], environmental analysis [12], and urban fluid simulation [13]. To effectively understand the urban phenomena behind the data, a large amount of ground truth is typically required, especially when applying supervised learning-based techniques, such as a deep Convolutional Neural Network (CNN). The recent development of machine learning (especially deep learning) techniques has demonstrated promising performance in semantic segmentation of 3D point clouds [14,15,16]. Compared to point clouds, a surface representation (in the form of a 3D mesh, often with textures, see Figure 1 and 2 for an example) of the urban scene has multiple advantages: easy to acquire, compact storage, accurate, and with well-defined topological structures. This means that 3D meshes have the potential to serve as input for scene understanding. As a consequence, there is an urgent demand for large-scale urban mesh datasets that can be used as ground truth for both training and evaluating the 3D semantic segmentation workflows. In this paper, we aim to establish a benchmark dataset of large-scale urban meshes reconstructed from aerial oblique images. To achieve this goal, we propose a semi-automatic mesh annotation framework that includes two components: (1) an automatic process to generate intermediate labels from the raw 3D mesh; (2) manual semantic refinement of those labels. For the intermediate label generation step, we have developed a semantic mesh segmentation method that classifies each triangle into a pre-defined object class. 
This semantic initialization allows us to achieve an overall accuracy of 93.0% in the classification of the triangle faces in our dataset, saving significant manual labelling effort. Then, in the semantic refinement step, a mesh annotation tool (which we have developed) is used to refine the semantic labels of the pre-labelled data (at the triangle and segment levels). We have used our proposed framework to generate a semantic-rich urban mesh dataset consisting of 19 million triangles and covering about 4 km², with six object classes commonly found in an urban environment: terrain, high vegetation, building, water, vehicle, and boat (Figure 2 shows an example from our dataset). With our semi-automatic annotation framework, generating the ground truth took only about 400 hours; we estimate that manually labelling the triangles would have taken more than 1000 hours. The contributions of our work are: • a semantic-rich urban mesh dataset of six classes of common urban objects with texture information; • a semi-automatic mesh annotation framework consisting of two parts: a pipeline for semantic mesh segmentation and an annotation tool for semantic refinement; • a comprehensive evaluation and comparison of state-of-the-art semantic segmentation methods on the new dataset. The benchmark dataset is freely available, and the semantic mesh segmentation methods and the annotation software for 3D meshes are released as open source¹. Related Work Urban datasets can be captured with different sensors and reconstructed with different methods, and the resulting datasets will have different properties. Most benchmark urban datasets focus on point clouds, whereas our semantic urban benchmark dataset is based on textured triangular meshes. The input of the semantic labelling process can be raw or pre-labelled urban datasets, such as results generated automatically from over-segmentation or semantic segmentation (see Section 3.3). Regardless of the input data, it still needs to be manually checked and annotated with a labelling tool, which involves users selecting the correct semantic label from a predefined list for each triangle (or point, depending on the dataset). In addition, some interactive approaches can make the labelling process semi-manual. However, unlike our proposed approach, the labelling work of most 3D benchmark data does not take full advantage of over-segmentation and semantic segmentation on 3D data, or of interactive annotation in 3D space. We present in this section an overview of the publicly available semantic 3D urban benchmark datasets, categorised by sensor and reconstruction type (see Table 1). More specifically, we elaborate on the quality, scale, and labelling strategy of the existing urban datasets with respect to semantic segmentation. Campus3D [29] is, to our knowledge, the first aerial point cloud benchmark. The coarse labelling is conducted in 2D projected images with three views, and the fine-grained labels are refined in 3D with user-defined rotation angles. The dataset covers only the campus of the National University of Singapore and is thus not representative of a typical urban scene. SensatUrban [30] is another example of photogrammetric point clouds, covering various urban landscapes in two cities in the UK.
The semantic points are manually annotated via the off-the-shelf software tool CloudCompare [34], and the overall annotation is reported to have taken around 600 hours. The dataset also contains several areas without points, especially for water surfaces and regions with dense objects. The leading causes are the Lambertion surface assumption during the image matching and the inadequate image overlapping rate during the flight. Similarly, the Swiss3DCities [31] was recently released that covers three cities in Zurich but twice smaller than the SensatUrban. The annotation work was conducted on a simplified mesh in the software Blender [35], and then the semantics were transferred to the mesh vertices, which are regarded as point clouds, via the nearest neighbour search. The mesh simplification may result in the loss of small-scale objects such as building dormers and chimneys, and the automatic transfer of the labels could have introduced errors in the ground truth. Triangle Meshes To the best of our knowledge, the ETHZ RueMonge 2014 [28] is the first urban-related benchmark dataset available as surface meshes. The label for each triangle is obtained from projecting selected images that are manually labelled from over-segmented image sequences [27]. In fact, due to the error of multiview optimisation and the ambiguous object boundary within triangle faces, the datasets contain many misclassified labels, making them unsuitable for training and evaluating supervised-learning algorithms. Hessigheim 3D [32,33] is a small-scale semantic urban dataset consisting of highly dense LiDAR point clouds and high resolution texture meshes. Particularly, the mesh is generated from both LiDAR point cloud and oblique aerial images in a hybrid way. The labels of point clouds are manually annotated in CloudCompare [34], and the labels of the mesh are transferred from the point clouds by computing the majority votes per triangle. However, if the mesh triangle has no corresponding points, some faces may remain unlabelled which resulted in about 40% unlabelled area. In addition, this dataset contains non-manifold vertices, which makes it difficult to use directly. LiDAR Point Clouds Unlike photogrammetric point clouds, LiDAR point clouds usually do not contain colour information. To annotate them properly, additional information is often required, e.g. images or 2D maps. LiDAR point cloud benchmark datasets are more common than photogrammetric ones. Street-view Datasets The Oakland 3D [17] is one of the earliest mobile laser scanning (MLS) point cloud datasets, which was designed for the classification of outdoor scenes. It has five hand-labelled classes with 44 sub-classes, but without colour information and semantic categories like roof, canopy, or interior building block, which are typical for all street-view captured datasets. Compared to Oakland 3D, Paris-rue-Madame [18] is a relatively smaller dataset which used the 2D semantic segmentation results for 3D annotation. Specifically, the point clouds were projected onto images to extract the objects hierarchically with several unsupervised segmentation and classification algorithms. Although the 2D pre-labelled generation is fully automatic, different semantic categories require different segmentation algorithms resulting in difficulties in the classification of multiple classes. The iQmulus dataset [19] is a 10 km street dataset annotated based on projected images in the 2D space. 
Specifically, the user first needs to extract objects by editing the image with a polyline tool and then assigns labels to the extracted object regions. Some automatic functions are made for polyline editing in this framework, but the entire annotation pipeline is still complicated. Unlike other street view datasets, Semantic3D [2] is a dataset consisting of terrestrial laser scanning (TLS) point clouds (the scanner is not moving and scans are made from only a few viewpoints). It has eight classes and colours were obtained by projecting the points onto the original images. There are two annotation methods: (1) annotating in 3D with an iterative model-fitting approach on manually selected points; (2) annotating in a 2D view by separate background from a drawn polygon in CloudCompare [34]. Although it covers many urban scenes and includes RGB information, the acquired objects are incomplete because of the limited viewpoints and occlusions. Aerial-view Datasets As for ALS benchmark point clouds, representative datasets are ISPRS [23], DublinCity [24], and LASDU [26] covering various scales of city landscapes and were annotated manually with off-the-shelf software. Instead of fully manual annotation, the Dayton Annotated LiDAR Earth Scan (DALES) [25] used digital elevation models (DEM) to distinguish ground points with a certain threshold, the estimated normal to label the building points roughly, and satellite images to provide contextual information as references for annotators to check and label the rest of data. Similarly, the AHN3 dataset [5] was semimanually labelled by different companies with off-the-shelf software. Besides, since the ALS measurement is conducted in the top view direction, unlike oblique aerial cameras, the obtained point clouds often miss facade information to a certain degree. Dataset Specification We have used Helsinki's 3D texture meshes as input and annotated them as a benchmark dataset of semantic urban meshes. The Helsinki's raw dataset covers about 12 km 2 , and it was generated in 2017 from oblique aerial images that have about a 7.5 cm ground sampling distance (GSD) using an off-theshelf commercial software namely ContextCapture [36]. The source images have three colour channels (i.e., red, green, and blue) and are collected from an airplane with five cameras that have 80% length coverage and 60% side coverage. To recover the 3D water bodies that do not fulfil the Lambertian hypothesis, 2D vector maps and ortho-photos are used when performing the surface reconstruction. Furthermore, processing like aerial triangulation, dense image matching, and mesh surface reconstruction were all performed with ContextCapture. It should be noticed that the entire region of Helsinki is split into tiles, and each of them covers about 250 m 2 [37]. As shown in Figure 3, we have selected the central region of Helsinki as the study area, which includes 64 tiles and covers about 4 km 2 map area (8 km 2 surface area) in total. Object Classes We define the semantic categories for urban meshes by the most common objects in the urban environment with unambiguous geometry and texture appearance. Moreover, each triangle face is assigned to a label of one of the six semantic classes. Ambiguous regions (which account for about 2.6% of the total mesh surface area), such as shadowed regions or distorted surfaces, are labelled as unclassified (see Figure 4). 
The object classes we consider in the benchmark dataset are: • terrain: roads, bridges, grass fields, and impervious surfaces; • building: houses,high-rises, monuments, and security booths; • high vegetation: trees, shrubs, and bushes; • water: rivers, sea, and pools; • vehicle: cars, buses, and lorries; • boat: boats, ships, freighters, and sailboats; • unclassified: incomplete objects like buses and trains, distorted surfaces like tables, tents and facades, construction sites, underground walls. Semi-automatic Mesh Annotation Rather than manually labelling each triangle face of the raw meshes, we design a semi-automatic mesh labelling framework to accelerate the labelling process. Figure 5 shows the overall pipeline of our labelling workflow. Given the fact that urban environments consist of a large number of planar regions in the data, we opt to label the data at the segment level instead of individual triangle faces. Specifically, we over-segment the input meshes into a set of planar segments. These segments can enrich local contextual information for feature extraction and serve as the basic annotation unit to improve annotation efficiency. Instead of randomly choosing a mesh tile as input for annotation and refinement, which is insufficient for manual annotation progress, we favour picking a mesh tile that is more difficult to classify. Similar to active learning, we first compute the feature diversity (see Equation 1) to optimally select a mesh tile containing a variety of classes and objects at different scales and complexity. The feature diversity F m of tile m is computed as where f i represents each handcrafted feature which describe in Section 3.3.1, andf is mean value of a N f dimensional feature vector. To acquire the first ground truth data, we manually annotate the mesh (with segments) that is selected with the highest feature diversity. Then, we add the first labelled mesh into the training dataset for the supervised classification. Specifically, we use the segment-based features as input for the classifier, and the output is a prelabelled mesh dataset. Next, we use the mesh annotation tool to manually refine the pre-labelled mesh according to the feature diversity. Finally, the new refined mesh will be added to the training dataset to improve the automatic classification accuracy incrementally. Initial Segmentation To avoid redundant computations of numerous triangles, we first apply mesh over-segmentation (i.e., linear least-squares fitting of planes) based on region growing on the input data to group triangle faces into homogeneous regions [38]. Such grouped regions are beneficial for computing local contextual features. We then extract both geometric and radiometric features from those mesh segments as follows: • Eigen-based features are computed from the covariance matrix of the triangle vertices with respect to the average centre within each segment, which is beneficial for identifying urban objects with various surface distributions. The linearity = (λ 1 − λ 2 )/λ 1 , sphericity = λ 3 /λ 1 and change of curvature = λ 3 /(λ 1 + λ 2 + λ 3 ) are computed based on the three eigenvalues λ 1 ≥ λ 2 ≥ λ 3 ≥ 0. The local eigenvectors n i and the unit normal vector n z along Z-axis are used to compute the verticality = 1−|n i · n z | [39]. Note that many eigen-based features have been studied in literature [39,40,41], and some of them were designed for and tested on LiDAR point clouds. 
These eigen-based features are mostly computed per point based on its spherical neighbourhood, which often contains noise and does not form a surface. Our chosen eigen-based features are defined on a segment representing the surface of a mesh, and thus they can capture non-local geometric properties of an object. Additionally, in this work, we have tested all eigen-based features from the literature [39], and we only present the ones that are effective for texture meshes. • Elevation is divided into absolute elevation z a , relative elevation z r and multiscale elevations z m . Where z a is the average elevation of the segment; the relative elevation is computed as z r = z a − z rmin ; the multiscale elevation [42,43] z m = za−zmin zmax−zmin . And z rmin denotes the lowest elevation of the local largest ground segment computed within a cylindrical neighbourhood with 30 meters radius around the segment centre. z min and z max represent the local minimum and maximum elevation values of a cylindrical neighbourhood within the scale of 10 meters, 20 meters, and 40 meters. Such large cylindrical neighbourhoods allow to find the local ground considering the resilience to hilly environments, and the square root ensures that small relative height values (i.e., values smaller than 1 m) get a larger elevation attribute to enlarge elevation differences between small objects and the local ground (e.g., cars against the ground, boats against the water surfaces). More importantly, due to the influence of terrain fluctuations and various scales of urban objects, the elevation of these three categories can complement each other. • Segment area is computed as area(S k ) = N i=1 area(f i ), where f i denotes a triangle of the segment S k , and N denotes the total number of triangles in S k . • Triangle density is defined as density(S k ) = N area(S k ) , which reveals the object complexity, especially for adaptive urban meshes. • Interior radius of 3D medial axis transform (InMAT) [44,45] of a segment S k is formulated as r k = M i=1 ri M , where M denotes the total number of triangle vertices of S k , and r i denotes the interior radius of the shrinking ball that touches the vertex v i within the segment S k . It is designed to distinguish objects with different scales. • HSV colour-based features are derived from the RGB channel of the entire texture map. We use the HSV colour space since it can better differentiate different objects than RGB. We compute the average colour, the variance of the colour distribution of all pixels within each segment, and we further discretize it into a histogram that consists of 15 bins of the hue channel, five bins of the saturation channel, and five bins of the value channel. • Greenness a g is used to classify objects that are similar to green vegetation. Specifically, it is computed according to the averaged RGB colour of each segment via a g = G − 0.39 · R − 0.61 · B [46]. All the above features are concatenated into a 44-dimensional feature vector used by our random forest (RF) classifier in the initial segmentation. Annotation Tool for Refinement Because of the under-segmentation errors and the imperfect results of the semantic mesh segmentation process, we design a mesh annotation tool (see Figure 6) to manually correct the labelling errors. Our mesh annotation tool is developed based on the labelling tool of CGAL [47]. As shown in Table 2, it consists of three operation categories: view, selection, and annotation. 
The view operations provide essential functions for the user to manipulate the scene camera, such as translate, rotate, zoom, or set the new pivot for the scene. In addition, to use textures as a reference for labelling, we map texture and face colour with a certain degree of transparency, and we visualize the segment border to differentiate each segment. The selection operations allow the user to select or deselect either triangle faces (see Figure 7) or segments (see Figure 8) freely via a brush or a lasso. Specifically, the face selection operation is used to fix the under-segmentation errors and generate new segments, and the segment selection operation is to fix incorrect segment labels. We also allow the user to edit the selection of each individual segment with splitting functions (see Figure 9) and automatic extraction of the most planar region (see Figure 10). As for splitting, we first detect the potential planar and non-planar segments marked by user strokes, and then the non-planar one is split according to the vertex-to-plane distance. It allows generating candidate non-planar regions (with respect to the detected planar segment) for the user to edit, and it is useful to split a segment that covers large non-planar regions or contains more than one dominant planar area. To extract the most planar Categories Operations region, we apply the region growing algorithm [38] within the selected segment to automatically generate the candidate triangle faces with user-defined thresholds (i.e., the maximum distance to the plane, the maximum accepted angle, and the minimum region size). Such an operation allows the user to filter out some small bumpy regions of the selected segment. Besides, probability and area-based sliders and a progress bar are provided in the annotation panel to improve annotation efficiency and experience, respectively. Specifically, the probability slider is introduced for the user to visually inspect the segments that are most likely misclassified. Moreover, the user can further use it to inspect a specific class by switching the view to highlight a specific semantic class. The segment area slider is used to identify isolated tiny segments, which commonly appear as errors. The progress bar is used to indicate the estimated labelling progress during the annotation. After performing the selection, the user can easily assign the corresponding label to the selected area. Data Split To perform the semantic segmentation task, we randomly select 40 tiles from the annotated 64 tiles of Helsinki as training data, 12 tiles as test data, and 12 tiles as validation data (see Figure 11 (a)). For each of the six semantic categories, we compute the total area in the training and test dataset to show the class distribution. As shown in Figure 11 (b), some classes, like vehicles and boats, only account for less than 5% of the total area, while the building and terrain together comprise more than 70%. The unbalanced classes impose significant challenges for semantic segmentation based on supervised learning. Evaluation Metric Since the triangle faces in the meshes have different sizes, we compute the surface area for semantic evaluation instead of using the number of triangles. The performance of semantic mesh segmentation is measured in precision, recall, F1 score, and intersection over union (IoU) for each object class. The evaluation of the whole test area is applied with overall accuracy (OA), mean per-class accuracy (mAcc), and mean per-class intersection over union (mIoU). 
Evaluation of Initial Segmentation We have implemented the semantic mesh segmentation and annotation tool in C++ using the open-source libraries include CGAL [47], Easy3D [48], and ETHZ random forest [49]. Our proposed pipeline for initial segmentation only takes a few input parameters, which are shown in Table 3. The over-segmentation is intended to find all planar regions in the model, for which we set the distance threshold to 0.5 meters. This threshold value specifies the minimum geometric features we would like the over-segmentation method to identify. In other words, the region growing-based over-segmentation method will not be able to distinguish two parallel planes with a distance smaller than this threshold. We set the angle threshold to 90 degrees, which is large enough to cope with high levels of noise (e.g., the distance value is small, but the angle between the triangle normal and the plane normal is large). Moreover, the minimum area is set to zero to allow planar segments of any arbitrary size. As for the random forest classifier, we set the parameters initially to those of Rouhani et al. [43] followed by fine-tuning using the validation data. Specifically, using 100 trees is sufficient to guarantee the stability of the model, and using the depth of 30 is adequate to avoid over-fitting and under-fitting for training. Method Parameters Value Region Growing Minimum area 0 m 2 Distance to plane 0.5 m Accepted angle 90 • Random Forest Number of trees 100 Maximum depth 30 Rather than classifying about 19 million triangle faces (i.e., the entire dataset), we use 515,176 segments that are clustered during over-segmentation. Although both semantic segmentation and labelling refinement can benefit from mesh over-segmentation, the degree of the under-segmentation error cannot be avoided. Since our mesh over-segmentation does not intend to retrieve the individual objects and the purpose is to perform semantic segmentation, we measure the maximum achievable performance by calculating the IoU instead of using under-segmentation errors to evaluate it. The upper bound IoU of each class we could achieve for semantic segmentation is presented in Table 4, and the upper bound mean IoU (mIoU) over all classes is about 90.9% as shown in Table 5. In addition, the results of our experiment in Tables 4 and 5 are reported based on the average performance of ten times experiments with the same configuration. For semantic segmentation, a detailed evaluation of each class is listed in Table 4, and we achieve about 93.0% overall accuracy and 66.2% mIoU as shown in Table 5. The qualitative evaluation of it is shown in Figure 12. As shown in Figure 12 (e), most of the prediction errors occur at small-scale objects such as vehicles and boats due to fewer training samples and errors from oversegmentation. To better understand the relevance of the features, we measure the feature importance and perform ablation studies (see Table 5). We can observe that the radiometric features (which account for 62.8%) are more important than geometric ones (which account for 37.2%). Moreover, after removing individual feature vectors, the performance will decline, indicating each feature contributes to the best results. Evaluation of Competition Methods To the best of our knowledge, none of the state-of-the-art deep learning frameworks of 3D semantic segmentation can directly be used on large-scale texture meshes. 
Additionally, although the data structures of point clouds and meshes are different, the inherent properties of geometry in the 3D space of the urban environment are nearly identical. In other words, they can share the feature vectors within the same scenes. Consequently, we sample the mesh into coloured point clouds (see Figure 13) with a density of about 10 pts/m 2 as input for the competing deep learning methods. In particular, we use Montecarlo sampling [50] to generate randomly uniform dense samples, and we further prune these samples according to Poisson distributions [51] and assign the colour via searching the nearest neighbour from the textures. To evaluate and compare with the current state-of-the-art 3D deep learning methods that can be applied to a large-scale urban dataset, we select five representative approaches (i.e., PointNet [14], PointNet++ [52], SPG [15], KPConv [16], and RandLA-Net [53]). We perform all the experiments on an NVIDIA GEFORCE GTX 1080Ti GPU. Note that these deep learning-based methods downsample the input point clouds significantly as a pre-processing step. In our experiments, the point sampling density is limited by the GPU memory, and increasing or decreasing the sampling density within a reasonable range may lead to slightly different performance. It should be noted that no matter how dense the input point clouds are, almost all state-of-the-art deep learning architectures (such as PointNet, PointNet++, RandLaNet, KPConv, and SPG, etc.) downsample the input point clouds significantly, and they are still able to learn effective features for classification. Besides, different deep learning-based point cloud classification frameworks exploit different strategies for downsampling the input points. In addition, we also compare with the joint RF-MRF [43], which is the only competition method that directly takes the mesh as input and without using GPU for computation. The hyper-parameters of all the competing methods are tuned according to the validation data to achieve the best results we could acquire. Besides, the results of each competitive method (see Table 6) are demonstrated in average performance based on ten times experiments with the same setting. From the comparison results, as shown in Table 6, we found that our baseline method , mean IoU (mIoU, %) ± standard deviation, Overall Accuracy (OA, %) ± standard deviation, mean class Accuracy (mAcc, %) ± standard deviation, mean F1 score (mF1, %) ± standard deviation, and the time cost of training (t train , hours). The running times of SPG include both feature computation and graph construction, and RF-MRF and our baseline method include feature computation. We repeated the same experiment ten times and presented the mean performance. outperforms other methods except for KPConv. Specifically, our approach outperforms RF-MRF with a margin of 5.3% mIoU, and deep learning methods (not including KPConv) from 16.7% to 29.3% mIoU. Compared with the KPConv, the performance of our method is much more robust, which can be observed from Table 6 that the standard deviation of our method is close to zero (i.e., the standard deviation of mIoU of our method is about 0.024%). The reason is that in our method, we set 100 trees in the random forest to ensure the stability of the model, but in KPConv, the kernel point initialization strategy may not be able to select some parts of the point cloud, which leads to the instability of the results. 
Furthermore, compared with all deep learning pipelines, our method is conducted on a CPU and uses much less time for training (including feature computation). This can be explained by the fact that we have fewer input data (triangles versus points), and the time complexity of our handcrafted features computation is much lower than the features learned from deep learning. Evaluation of Annotation Refinement Following the proposed framework, a total of 19,080,325 triangle faces have been labelled, which took around 400 working hours. Compared with a trianglebased manual approach, we estimate that our framework saved us more than 600 hours of manual labour. Specifically, we have measured the labelling speed with these two different approaches on the same mesh tile consisting of 309,445 triangle faces and 8,033 segments. It took around 17 hours for manual labelling based on triangle faces, while with our segment-based semi-automatic approach, it took only 6.5 hours. We also evaluate the performance of semantic segmentation with different amounts of input training data on our baseline approach with the intention of understanding the required amount of data to obtain decent results. Specifically, we use ten sets of different training areas with ten times experiments with the same configuration of each set, and we linearly interpolate the results as shown in Figure 14. From Figures 14a, 14b, and 14c, we can observe that our initial Figure 14: Effect of the amount of training data on the performance of the initial segmentation method used in the semi-automatic annotation. We repeated the same experiment ten times for each set of training areas and presented the mean performance. segmentation method only requires about 10% (equal to about 0.325 km 2 ) of the total training area to achieve acceptable and stable results. In other words, using a small amount of ground truth data, our framework can provide robust pre-labelled results and significantly reduce the manually labelling efforts. Conclusion We have developed a semi-automatic mesh annotation framework to generate a large-scale semantic urban mesh benchmark dataset covering about 4 km 2 . In particular, we have first used a set of handcrafted features and a random forest classifier to generate the pre-labelled dataset, which saved us around 600 hours of manual labour. Then we have developed a mesh labelling tool that allows the users to interactively refining the labels at both the triangle face and the segment levels. We have further evaluated the current state-of-the-art semantic segmentation methods that can be applied to large-scale urban meshes, and as a result, we have found that our classification based on handcrafted features achieves 93.0% overall accuracy and 66.2% of mIoU. This outperforms the state-of-the-art machine learning and most deep learning-based methods that use point clouds as input. Despite this, there is still room for improvement, especially on the issues of imbalanced classes and object scalability. For future work, we plan to label more urban meshes of different cities and extend our Helsinki dataset to include parts of urban objects (such as roof, chimney, dormer, and facade). We will also investigate smart annotation operators (such as automatic boundary refinement and structure extraction), which involve more user interactivity and may help reduce further the manual labelling task.
7,582.4
2021-02-27T00:00:00.000
[ "Computer Science", "Geography", "Environmental Science" ]
Journeys towards sociomathematical norms in the Foundation Phase This article describes the normative behaviour of two groups of South African Grade 2 learners during grouped intervention that focused on improving their early number skills. The intervention was based on an adapted version of Wright, Martland and Stafford’s (2006) Mathematics Recovery (MR) programme. Children’s acquisition of early number knowledge is highly personalised and influenced by the context in which mathematics evolves (Shumway 2011). In every classroom, interaction between the teacher and learners, and between learners themselves, forms part of this context which is referred to as the ‘classroom culture’ or the ‘classroom ethos’ (Askew 2016; Yackel & Cobb 1996). Drawing on the work of Cobb, Yackel and colleagues (Yackel & Cobb 1996; Yackel & Rasmussen 2002), we refer to two distinct types of norms that were established within the microculture of each group, namely, social norms (SNs) or general classroom norms and sociomathematical norms (SMNs) that refer to norms specifically associated with classroom mathematical activity. Whilst the two intervention groups consisted of different participants with unique personalities and varying levels of mathematical ability, there were similar trends in how norms came to be established within each group. We use examples taken from our analysis of grouped intervention lessons to show that one particular SMN – ‘use the structure of 10’ – leveraged participants’ progression along the MR literature-based trajectory for additive reasoning, that is, progression from counting-based strategies like ‘count all’ and ‘count on’ to calculation strategies premised on the base-10 number structure. Introduction This article describes the normative behaviour of two groups of South African Grade 2 learners during grouped intervention that focused on improving their early number skills. The intervention was based on an adapted version of Mathematics Recovery (MR) programme. Children's acquisition of early number knowledge is highly personalised and influenced by the context in which mathematics evolves (Shumway 2011). In every classroom, interaction between the teacher and learners, and between learners themselves, forms part of this context which is referred to as the 'classroom culture' or the 'classroom ethos' (Askew 2016;. Drawing on the work of Cobb, Yackel and colleagues Yackel & Rasmussen 2002), we refer to two distinct types of norms that were established within the microculture of each group, namely, social norms (SNs) or general classroom norms and sociomathematical norms (SMNs) that refer to norms specifically associated with classroom mathematical activity. Whilst the two intervention groups consisted of different participants with unique personalities and varying levels of mathematical ability, there were similar trends in how norms came to be established within each group. We use examples taken from our analysis of grouped intervention lessons to show that one particular SMN -'use the structure of 10'leveraged participants' progression along the MR literature-based trajectory for additive reasoning, that is, progression from counting-based strategies like 'count all' and 'count on' to calculation strategies premised on the base-10 number structure. less consideration has been given to the influence that sociological aspects of classroom culture have on learning outcomes. 
Thus, the potential of viewing mathematics teaching and learning as a social enterprise (Boaler 1998) has been leveraged in limited ways. Broader evidence on the ground suggests a South African Foundation Phase mathematics pedagogy that is centred around whole-class teaching, where teacher talk dominates lessons and where 'group work' consists of 4-10 learners seated around a table with one piece of paper and a pencil which one learner uses whilst the others watch (Ensor et al. 2009). In the broader mathematics education field, links have been made between sociological and psychological perspectives of learning Yackel & Rasmussen 2002). Drawing on the work of Cobb and colleagues (McClain & Cobb 2001;, this article is based on the premise that the norms developed within the microculture of a mathematics classroom influence children's learning. The work of Cobb and colleagues (McClain & Cobb 2001; resonates with international research related to other sociological aspects of learning, such as the link between cultural beliefs about the role of women and girls' maths, science and reading skills (Rodríguez-Planas & Nollenberger 2018), and the development of SMNs through the use of visual learning aids (Widodo, Turmudi & Dahlan 2019). In local literature, research that links the sociological and psychological aspects of learning mathematics includes a focus on the role of teacher pedagogy in shaping learners' mathematical identities and the promotion of equity and access to mathematics (Gardee 2019a(Gardee , 2019b. Beyond mathematics education, local literature has also linked low learner performance with the absence of attention to values education and discipline in South African schools (Maphalala & Mpofu 2018;Segalo & Rambuda 2018;Solomon & Fataar 2011). As with other research that draws links between sociological and psychological perspectives on learning, we argue that a greater focus on the norms established within a mathematics classroom can provide affordances for learners to make sense of the mathematics offered. Background The background of this story is the first author's doctoral thesis (Morrison 2018) that reported on how small-scale intervention based on MR scaled up the early number skills of 10 second graders, eight of whom received grouped intervention (two groups of four), in a South African public school. This suburban school was identified by the district as 'underperforming' in mathematics relative to their quintile five status (Spaull et al. 2016). As part of the MR programme, individual video-recorded task-based interviews using MR assessments were used to determine participants' most advanced additive strategies before and immediately following the intervention . These individual task-based interviews served as pre-and post-tests. Also, part of MR is the Learning Framework in Number (LFIN) that sets out the trajectory for several aspects of number along which children usually progress in their learning of early number. Determining progression along the LFIN trajectory is based centrally on the sophistication of children's strategies for early arithmetical learning (SEAL), which forms the main part of the LFIN . Briefly, the different stages of the LFIN SEAL aspect are as follows: stage 0 -cannot count perceived items; stage 1 -counts perceived items using 'count all'; stage 2 -counts figurative items using 'count all'; stage 3 -uses curtailed counting strategies like 'count on'; stage 4 -uses the difference conception of subtraction (e.g. 
use 'count-down-to' to solve 18-16); and stage 5 -uses a range of non-count-by-one strategies premised on structuring number and known number facts. The LFIN descriptors for the different SEAL stages were used to code learners' responses to MR assessment tasks and determine their most advanced additive strategy . For the purposes of this article, we grouped SEAL stages into two broad categories, namely, 'calculation-by-counting' with pushes into reified use of number at the upper end and 'calculation-by-structuring' with pushes into purely mental calculations (Van den Heuvel-Panhuizen 2008). We grouped SEAL stages 1-3 under the broad category of 'calculation-by-counting' and SEAL stages 4 and 5 under 'calculation-by-structuring'. Table 1 presents our analysis of the eight group work participants' additive strategies across pre-and post-tests, using these broad categories. Table 1 indicates two completely different pictures of the additive strategies used by participants who received grouped intervention just before and immediately after intervention. Before intervention, seven of the eight participants used 'calculation-by-counting' strategies to solve additive tasks: two of these learners used 'count all' (SEAL 2) and five learners used 'count on' (SEAL 3), with only one learner able to use subtraction as difference (SEAL 4) that is linked to 'calculation-by-structuring'. Immediately after intervention, seven learners used 'calculation-bystructuring' strategies using the base-10 structure to move beyond counting in ones, as well as purely mental strategies like known or derived facts. Of these seven learners, three were at SEAL stage 5 and four learners were at SEAL stage 4. Only one learner in the post-test used the 'count on' strategy that falls under 'calculation-by-counting'. Thus, whilst only one out of the eight participants used efficient 'calculation-by-structuring' strategies to solve additive tasks before intervention, seven of the eight participants used efficient calculation strategies after intervention. The fact that this kind of change could be effected in small groups was an encouraging result, given Wright et al.'s (2006) developed country model of individual working with the MR model, and given the two-to four-grade lag that has TABLE 1: Participants' additive strategies in pre-and post-test. Calculation-by-structuring (SEAL stages 4-5) Number of learners in pre-test 7 1 Number of learners in post-test 1 7 SEAL, strategies for early arithmetical learning. been identified in 75% -80% of children in public schools in South Africa (Spaull 2013). Research aim and question Our aim in studying certain normative behaviour within each intervention group was to understand the connections between the establishment of norms and group members' opportunity to learn early number. Results from the first author's doctoral study showed that learners who received grouped intervention based on MR progressed in their early number learning (Morrison 2018). However, the normative behaviour within intervention groups was not a focus of that study -hence our interest in analysing the establishment of these norms and their influence on participants' early number learning. Taken together, our interest led to the following research question: in what ways did the establishment of norms during grouped intervention influence participants' learning of early number? 
Theoretical framing or definitions of key concepts Based on a social constructivist perspective, we view the relation between individual and collective learning as 'reflexive' . This means that as individual learners construct mathematical meaning through engagement in classroom activities, they simultaneously contribute to the shared culture or ethos of the classroom. In turn, the microculture of a mathematics classroom has the ability to shape certain aspects of individual and collective mathematical learning. A shared classroom culture that guides the nature of interactions can be created when the teacher coordinates the academic tasks given with a social participation structure (Erickson 1982). 'Revoicing' is one of the strategies that can be used to structure the social participation between members of a group (O'Connor & Michaels 1996). By revoicing group members' offers, the teacher structures the discursive patterns within the group as members talk about elements of the academic task. This strategy was implemented in the study being reported on as learners were encouraged to not only share their solutions to tasks but also to share their underlying reasoning -with the use of 'revoicing' where needed. Linguistic structure, or a shared way of talking, is not the only way in which social participation within a group can be structured -setting up classroom norms can also structure participation within a group. Sociomathematical norms Cobb and Yackel's notion of SMNs -normative classroom aspects specific to learners' mathematical activity -include understandings of what counts as 'mathematically different' or 'mathematically sophisticated' as well as what counts as an 'acceptable mathematical explanation and justification' (Yackel & Cobb 1996:461). According to these authors, SMNs are not predetermined criteria introduced into the small-group context from the outside, but are interactively set up by the learners and teachers as they develop a 'taken-as-shared' understanding of such norms. Research evidence suggests that in addition to regulating participation in small-group and class discussions, SMNs also provide learning opportunities that support learners' higher-order cognitive activities, like comparing solution strategies and judging the similarities and differences inherent therein . In related research conducted by Cobb (1995) a reflexive relationship between children's individual learning and their small-group interactions is reported. Using Cobb's work as a springboard, certain SMNs were set as 'beacons' to guide the nature of group interactions during intervention sessions. The framing of each norm was guided by participants' understanding of the mathematics being taught and the norms already in place within the broader culture of the school. Thus, during intervention, these norms were framed slightly differently from those commonly mentioned in the literature as Yackel and Cobb (1996:406) have noted: 'what becomes mathematically normative in a classroom is constrained by the current goals, beliefs, suppositions and assumptions of the classroom participants'. Because SMNs are interactively set up in any classroom or group, the frequency with which norms were initiated and/or enacted within each intervention group differed. A description of the norms established within the intervention space for both groups follows next, together with a literaturebased rationale for each. 
These are listed in order from the norm with the lowest frequency during intervention to the one with the highest. Do you know a different way? Rather than having group members trying to memorise one 'right way' of solving tasks, learners were encouraged to think flexibly (Anghileri 2006;McIntosh, Reys & Reys 1997). This norm was also used to keep participants engaged -that means learners could not anticipate that the instructor would move on to another task as soon as the correct answer was offered by someone in the group. Do you know a quicker way? 'Procedural fluency' is one of the strands of mathematical proficiency (Kilpatrick, Swafford & Findell 2001) which research shows is lacking in the South African terrain (Schollar 2008) -thus, this norm was put in place during intervention sessions. This norm ties in with the third norm (do not count in ones) but also goes further than that as some inefficient strategies (for certain calculations) do not involve counting-in-ones. For example, solving '201-198' using the column method is inefficient as it involves extensive and error-prone decomposition, with the difference conception of subtraction using 'count-upto' being more efficient. Do not count in ones When prompted, most learners were able to use the 'count-on' strategy to solve simple additive tasks at the outset of intervention -which is more sophisticated than the inefficient 'count all' strategy prevalent on the ground (Schollar 2008), but still involves counting in ones. Thus, a goal during intervention was to move learners from counting-in-ones to more efficient mental strategies premised on base-10 -for example, by working questions like 7+5 as 7+3+2 (bridgingthrough-10), or questions like 35+12 as 35+10+2 (jumping in 10s). Use the structure of 5 Using the structure of five also underpins a range of mental calculation strategies available to children. For example, the child who sees eight as '5 and 3 more' will be able to draw on this knowledge when using a strategy such as 'bridging-through-10' to solve 35+8 as 35+5 = 40 and 40+3 = 43 (Ellemor-Collins & Wright 2009). Multiple representations can be used for showing or doing the same thing Previous research has pointed to an over-reliance on concrete resources used to enact calculation-by-counting strategies within South African Foundation Phase classrooms (Ensor et al. 2009). This norm encouraged learners to think flexibly and to work within increasingly more abstract settings that engendered mental calculation strategies. Using multiple representations ties into flexible ways of thinking about mathematics (Anghileri 2006;McIntosh et al. 1997) and can link to the norm 'Do you know a different way?'. Be ready to agree with or refute another's offer, giving reasons This norm arises from the premise that mathematics is a social enterprise (NCTM 2000) and that children's acquisition of early number knowledge does not only come from their own working with number but from the group's varied experiences with number. The initial intention was to build this norm to a place where learners developed skills of mathematical argumentation , but because most learners struggled to simply express their thinking using 'math talk' (in a broader culture where highly authoritarian modes of classroom interaction have been noted as widely prevalent), this norm came to be diluted to the point where learners were often only asked if they agreed or disagreed with the offering of a peer. 
This norm was thus enacted as a disciplinary measure (to get learners to attend to what their peers were saying) and was thus more akin to a social norm. Distancing the setting One of the tenets of the MR programme is 'distancing the setting'. Tasks initially posed using concrete settings like counters or bundling sticks were later posed with the same settings that were flashed and screened (but still available if needed); and finally, settings were removed altogether. In this way, learners were encouraged to become less dependent on concrete materials and move to developing visualisation skills and the use of mental strategies (Wright, Ellemor-Collins & Tabor 2012). Explain or justify your solutions or offers to the group From the outset, learners were expected to not only give an answer to a task posed during intervention but also to explain their reasoning. This norm was instituted to illuminate the strategies available to the learner at that time so that other group members would gain insight into his or her mathematical reasoning and learn from it (if they did not know how to solve the task) or to compare it with their own reasoning (if they solved the task in another way). Use the structure of 10 Literature in the field of early number learning shows that learners who use the structure of 10 (or 10 as a benchmark) have a better sense of the size of numbers, can reason in multiples of 10 and also make use of estimation to gauge whether a result makes sense or not (Ellemor-Collins & Wright 2009). For example, a learner who is able to use the structure of 10 will easily be aware of his or her error if he or she gets '316' as an answer to calculating '29+17' using the column method, because he or she will reason that 29 is close to 30 and 17 is close to 20, so the answer should be close to 50. Also, a learner who can reason in tens will be able to solve 48-23 as 40-20 = 20, then 8-3 = 5 and 20+5 = 25 (a method that splits the tens and units) or as 48-20 = 28 → 28-3 = 25 (a method based on subtracting the tens in the subtrahend from the first number and then subtracting the units in the subtrahend from the result). Social norms Social norms differ from SMNs in that the former are not specifically linked to mathematical activity that takes place within a classroom setting. Social norms refer to normative behaviour that takes place in any classroom or learning space. When learners in an art class stand to greet another teacher entering their classroom, this is a social norm as the same behaviour would be expected from those learners if they were in a technology class. The SNs that were established in both groups during the intervention period were: 1. Do not laugh if another group member makes a mistake. 2. Work as a team, take turns, no competition. 3. Work quietly, do not disrupt other participants. 4. No 'popcorn' offers. 5. Be attentive to your peers' offers. 6. Pay attention and participate in the tasks posed. Whilst the above SNs are largely self-explanatory, we point out the difference between norms that seem similar. Social norms 5 and 6 are very similar -but a useful distinction for us was that the latter was applicable when a learner was being inattentive to what the teacher was doing or saying, whilst the former referred to the same learner action, but this time in response to what a peer was saying or doing. 
Social norm 4, no 'popcorn' offers, related to learners' 'popping' their hands up as an indication that they wanted to answer even though the task was posed to another learner. From the onset of intervention, the intervention teacher said the name of the intended respondent before posing a task in order to ensure broad and directed participation across all learners in the group. Despite this, learners often bounced up and down on their seats with their hands raised or shouted out their offer in excitement. One teacher at the school remarked how learners were like popcorn jumping around in a pot (when they were eager to offer an answer) -thus this norm was framed as 'no popcorn offers'. Methods Eight middle-attaining Grade 2 learners (two groups of four) from a public school in the north of Johannesburg had two 40min intervention sessions based on MR per week for 9 weeks. Intervention, in the form of a teaching experiment (Steffe & Thompson 2000), was topped and tailed by individual taskbased interviews from the MR programme which served as pre-and post-tests. Intervention sessions were video-recorded to capture patterns of interaction between group members (this included their gestures, talk, inscriptions and artefact use); these recordings were then transcribed. Methodologically, normative behaviour is inferred by identifying regularities in patterns of social interaction in the classroom (Yackel & Cobb 1996:406). Patterns of interaction that were identified as social or SMNs for each group were those that appeared at least three times in more than one intervention session. Using full transcriptions of each intervention lesson that included details of gestures, talk, inscriptions and artefact use, we determined which patterns of interaction could be regarded as a 'regularity' within the group's intervention space, and we took these to be norms and outlined indicators for each. Thereafter we reread all the transcripts, re-watched video data where clarity was needed and noted every instance where a norm was initiated or enacted within the intervention space. The information noted was the following: the sequence number of the lesson, the group member initiating the norm (and the person to whom the exchange was directed), the task posed and the response given. After doing the above for both groups, the frequency of norms established within each intervention space was tallied and recorded in tables from the highest frequency to the lowest. We grouped the first half of intervention lessons (i.e. lessons 2-9) in one table and the second half (i.e. lessons 10-17) in another table as this allowed us to compare the frequency of norms between individual lessons and across the first and second half of the intervention sequence. Analysis of social and sociomathematical norms Using transcripts of each intervention session, we looked for regularities or patterns within the interaction between members of a group and developed codes related to literature as well as codes that emerged in a 'grounded' way. At the onset of intervention, participants often laughed at a peer who made an incorrect offer or who struggled with a task. Thus, the social norm 'do not laugh at a peer' was set up to address what was happening on the ground. When a learner answered a task, the follow-up question often posed was, 'How do you know' or 'Explain to us how you worked that out'. 
This is an example of interaction that was coded as a sociomathematical norm that relates closely to literature, namely 'explain or justify your solution or offer to the group'. Altogether we identified nine SMNs and six SNs that were established within the microculture of both groups. As already noted, some of the SMNs related closely to one another, for example, 'do not count in ones', was closely linked to 'do you know a quicker way?' as learners who counted in ones often took a long time to complete the calculation. In some cases, one instance could be coded under multiple norms -but here the norm that was foregrounded guided the coding. For example, in the extract that follows, in line 3, Julie answers the teachers' question saying, 'There are twelve people on the bus: ten people on the bottom and two on top'. Julie's response could be coded as SMN #9 'use the structure of ten' because Julie used the 10-structure in her reasoning. However, the artefact used here is a bus with two rows of 10 -the bottom row with 10 counters and the top row with two counters -that was flashed and screened. Thus, it was expected that learners would use the 10-structure inherent in the artefact to subitise the total number of people on the bus. For Julie -one of the stronger attainers who previously showed facility with using base-10 -the significance of her response is that she offered a justification for her answer without prompting, thus the code: SMN #8. Examples of coding Next, we share a transcription extract from lesson 12 (group 1) to show how certain interactions were coded using a few of the SMNs previously outlined: #8 (explain your offer), #6 (agree with or refute a peer's offer), #1 (do you know a different way?) and #7 (distancing the setting) as well as SNs: #5 (be attentive to your peers' offerings) and #4 (no 'popcorn offers'): L1 The what's on top and then take away the rest from the bottom. L32 Khosi: You can also know the answer using addition because eight plus four is twelve. (SMN #1) In the above extract, the teacher's action of screening the bus (in lines 4-6) is coded SMN #7: distancing the setting. The next time the same code is used (line 30) when the teacher encouraged the group to use their visualisation skills when solving the task. These interactions get the same code because the message conveyed is the same, that is, do not rely on concrete materials, rather use your visualisation skills to help you solve the task mentally. Further, whilst presentation of the task (12-4) ran across three lines (lines 4-6) and the word 'screen/ed' was used three times, this is coded as one instance because the entire exchange refers to the same task. Similarly, in lines 10 and 20, the teacher's words 'How do you know, Kgomo?' and 'Why not?' (directed at Kyle) have been coded SMN #8 because they have the same purpose: explain your offer to the group. In both cases, the learner to whom the norm is directed did not answer the initial task, but when they agreed with (in Kgomo's case) or refuted (Kyle) their peer's offer, they were expected to have a reason for doing so. SN #4: no popcorn offer is coded twice in this extract (lines 26 and 28). In the first instance, the teacher told Khosi to put her hand down as the task was posed to another learner. In the second case, the teacher conveyed the same message to Kgomo by giving him a stern look after he shouted out the answer. Here, the teacher used different modes of communication but the message was the same. 
The teacher initiated SMN #6: agree with or refute another's offer in lines 8 and 18 when she asked Kgomo and Kyle whether they agreed with Julie's offer or not. SN #5: be attentive to your peers' offerings was established in lines 15 and 16 when the teacher reprimanded Kgomo for not listening to what Julie had said. We next present our analysis of the frequency of norms established in both groups over the course of intervention in tabular form and then discuss these further. Table 2 shows that 189 SNs were set up in group 1 for lessons 2-9 and 79 SNs in the same group for lessons 10-17. Across the same intervention period for group 2, 98 SNs were set up during lessons 2-9 and 58 during lessons 10-17. In both groups, more SNs were established in the intervention space during the first half of intervention than the second half. What we infer from this result is that more had to be done to establish a shared way of working between group members during the first half of intervention compared to the second half. Also, in group 1, 460 SMNs were established during lessons 2-9 and 415 SMNs across lessons 10-17. During the same period, group 2 established 553 SMNs for lessons 2-9 and 602 SMNs for lessons 10-17. Thus, more SMNs were established in both groups across the first and second half of intervention compared to SNs for the same period. Although the percentage of SMNs setup in each group was higher than that of the SNs, we also see that the difference in this proportion increased over time -that is, from 42% in the first half of intervention to 68% in the second half for group 1 and from 70% to 82% for the same period for group 2. The greater difference in proportion indicates that a greater number of all the norms established in both groups over time were SMNs, pointing to an emphasis in the intervention on mathematical learning, but in ways that were supported by attention to the SNs of working in the small group environments. Results and discussion The first author's reflective notes on implementation of lessons -that group 1 struggled more than group 2 to work collaboratively during intervention -can be corroborated by the results shown in Table 3. That is, more SNs were set up in group 1 compared to group 2, that is, 63% compared to 37% of all SNs established by both groups over the course of intervention. Also, 43% of the total SMNs established in both groups across intervention came from group 1 and 57% came from group 2. So, whilst group 1 had more SNs established within its microculture, this group also had less SMNs established during intervention than group 2. This result adds to the South African evidence of disruptive behaviour affecting the extent of openings to establish SMNs and mathematical learning as a consequence -a problem that has been highlighted in prior early grades' work (Roberts & Venkat 2016). Table 4 indicates the number of times sociomathematical norm #9 'use the structure of 10' was initiated and enacted in each group across the intervention period compared to SMNs #1 -#8. From the table, it is clear that the number of times SMN #9 was established in each group outweighed the total number of all other SMNs established in each group over the same period. The fact that SMN #9: 'use the structure of ten' was initiated and enacted more times in both groups over the course of intervention than all other SMNs combined attests to the fact that using 'base-10 thinking' became the overarching goal during intervention. 
When comparing the number of times SMN #9 was established in relation to the total number of SMNs established in each group across intervention, we see a difference of 12% for group 1 and 24% for group 2. So proportionally, a greater percentage of all SMNs initiated and enacted in group 2 were SMN #9 'use the structure of 10', which is double the frequency with which the same norm was established in group 1 when compared to all other SMNs. Conclusion By looking at the pattern of norms established within the microculture of each group over the intervention period, we have been able to corroborate earlier evidence from local studies - that learning mathematics is often not experienced as a social endeavour on the ground (Ensor et al. 2009). Hence, more focus had to be placed on initiating and enacting SNs in both groups during earlier intervention lessons to get to a place of shared meaning making (Figure 1). More importantly, our data goes a step further to show that a greater proportion of all norms established over time in each group were SMNs (Figure 1). So, early number learning came more sharply into focus during the second half of intervention as groups got better at working collaboratively. Finally, what became very clear from our analysis of normative behaviour within groups is that the sociomathematical norm 'use the structure of 10' became a focal point during intervention. We believe that putting participants' use of base-10 structure at the foreground of intervention is what leveraged their early number learning gains (Morrison 2018, 2020) and thus enabled their progression from less-efficient additive strategies based on counting to more sophisticated strategies premised on the base-10 structure. Whilst the relatively small sample size does not allow us to make broad claims about these findings, we believe that the outcomes point to the usefulness of closer attention to the affordances to learning early number skills that are linked to the establishment of social and SMNs in collaborative settings. Funding information The work reported in this article is located within the South African Numeracy Chair Wits Maths Connect-Primary project at the University of the Witwatersrand. It is generously supported by the FirstRand Foundation.
FIGURE 1: A pictorial view of changes in learner strategies and the type of norms in focus across the intervention period. (Start of intervention: most learners enacting calculation-as-counting to solve additive tasks; greater focus on social norms within group work. End of intervention: most learners enacting calculation-as-structuring to solve additive tasks; greater focus on sociomathematical norms within group work - especially 'use the structure of 10'.)
A Template-based Method for Constrained Neural Machine Translation Machine translation systems are expected to cope with various types of constraints in many practical scenarios. While neural machine translation (NMT) has achieved strong performance in unconstrained cases, it is non-trivial to impose pre-specified constraints on the translation process of NMT models. Although many approaches have been proposed to address this issue, most existing methods cannot satisfy the following three desiderata at the same time: (1) high translation quality, (2) high match accuracy, and (3) low latency. In this work, we propose a template-based method that can yield results with high translation quality and match accuracy, and the inference speed of our method is comparable with that of unconstrained NMT models. Our basic idea is to rearrange the generation of constrained and unconstrained tokens through a template. Our method does not require any changes in the model architecture and the decoding algorithm. Experimental results show that the proposed template-based approach can outperform several representative baselines in both lexically and structurally constrained translation tasks. Introduction Constrained machine translation is of important value for a wide range of practical applications, such as interactive translation with user-specified lexical constraints (Koehn, 2009; Jon et al., 2021), domain adaptation with in-domain dictionaries (Michon et al., 2020; Niehues, 2021), and webpage translation with markup tags as structural constraints (Hashimoto et al., 2019; Hanneman and Dinu, 2020). Developing constrained neural machine translation (NMT) approaches can make NMT models applicable to more real-world scenarios (Bergmanis and Pinnis, 2021). However, it is challenging to directly impose constraints on NMT models due to their end-to-end nature (Post and Vilar, 2018). To address this problem, one branch of studies modifies the decoding algorithm to take the constraints into account when selecting candidates (Hokamp and Liu, 2017; Hasler et al., 2018; Post and Vilar, 2018; Hu et al., 2019; Hashimoto et al., 2019). Although constrained decoding algorithms can guarantee the presence of constrained tokens, they can significantly slow down the translation process (Wang et al., 2022) and can sometimes result in poor translation quality (Zhang et al., 2021). Another branch of work constructs synthetic data to help NMT models acquire the ability to translate with constraints (Song et al., 2019; Dinu et al., 2019; Michon et al., 2020). For instance, Hanneman and Dinu (2020) propose to inject markup tags into plain parallel texts to learn structurally constrained NMT models. The major drawback of data augmentation based methods is that they sometimes violate the constraints (Hanneman and Dinu, 2020; Chen et al., 2021), limiting their application in constraint-critical situations. In this work, we use free tokens to denote the tokens that are not covered by the provided constraints. Our motivation is to decompose the whole constrained translation task into the arrangement of constraints and the generation of free tokens. The constraints can be of many types, ranging from phrases in lexically constrained translation to markup tags in structurally constrained translation. Intuitively, arranging only the provided constraints into the proper order is much easier than generating the whole sentence.
Therefore, we build a template by abstracting free token fragments into nonterminals, which are used to record the relative position of all the involved fragments. The template can be treated as a plan of the original sentence. The arrangement of constraints can be learned through a template generation sub-task. Once the template is generated, we need some derivation rules to convert the nonterminals mentioned above into free tokens. Each derivation rule shows the correspondence between a nonterminal and a free token fragment. These rules can be learned by the NMT model through semi-structured data. We call this sub-task template derivation. During inference, the model firstly generates the template and then extends each nonterminal in the template into natural language text. Note that the two proposed sub-tasks can be accomplished through a single decoding pass. Thus the decoding speed of our method is comparable with unconstrained NMT systems. By designing template format, our approach can cope with different types of constraints, such as lexical constraints, XML structural constraints, or Markdown constraints. Contributions In summary, the contributions of this work can be listed as follows: • We propose a novel template-based constrained translation framework to disentangle the generation of constraints and free tokens. • We instantiate the proposed framework with both lexical and structural constraints, demonstrating the flexibility of this framework. • Experiments show that our method can outperform several strong baselines, achieving high translation quality and match accuracy while maintaining the inference speed. 2 Related Work Lexically Constrained Translation Several researchers direct their attention to modifying the decoding algorithm to impose lexical constraints (Hasler et al., 2018). For instance, Hokamp and Liu (2017) propose grid beam search (GBS) that organizes candidates in a grid, which enumerates the provided constrained tokens at each decoding step. However, the computation complexity of GBS scales linearly with the number of constrained tokens. To reduce the runtime complexity, Post and Vilar (2018) propose dynamic beam allocation (DBA), which divides a fixed size of beam for candidates having met the same number of constraints. Hu et al. (2019) propose to vectorize DBA further. The resulting VDBA algorithm is still significantly slower compared with the vanilla beam search algorithm (Wang et al., 2022). Another line of studies trains the model to copy the constraints through data augmentation. Song et al. (2019) propose to replace the corresponding source phrases with the target constraints, and Dinu et al. (2019) propose to insert target constraints as inline annotations. Some other works propose to append target constraints to the whole source sentence as side constraints (Chen et al., 2020;Niehues, 2021;Jon et al., 2021). Although these methods introduce little additional computational overhead at inference time, they can not guarantee the appearance of the constraints (Chen et al., 2021). Xiao et al. (2022) transform constrained translation into a bilingual text-infilling task. A limitation of text-infilling is that it can not reorder the constraints, which may negatively affect the translation quality for distinct language pairs. Recently, some researchers have tried to adapt the architecture of NMT models for this task. Susanto et al. (2020) adopt non-autoregressive translation models (Gu et al., 2019) to insert target constraints. Wang et al. 
(2022) prepend vectorized keys and values to the attention modules (Vaswani et al., 2017) to integrate constraints. However, their model may still suffer from low match accuracy when decoding without VDBA. In this work, our method can achieve high translation quality and match accuracy without significantly increasing the inference overhead. Structurally Constrained Translation Structurally constrained translation is useful since text data is often wrapped with markup tags on the Web (Hashimoto et al., 2019), which is an essential source of information for humans. Compared with lexically constrained translation, structurally constrained translation is relatively unexplored. Joanis et al. (2013) examine a two-stage method for statistical machine translation systems, which firstly translates the plain text and then injects the tags based on phrase alignments and some carefully designed rules. Moving to the NMT paradigm, large-scale parallel corpora with structurally aligned markup tags are scarce. Hanneman and Dinu (2020) propose to inject tags into plain text to create synthetic data. Hashimoto et al. (2019) collect a parallel dataset consisting of structured text translated by human experts. Zhang et al. (2021) propose a constrained decoding algorithm to translate structured text. However, their method significantly slows down the translation process. In this work, our approach can be easily extended for structural constraints, leaving the decoding algorithm unchanged. The template in our approach can be seen as an intermediate plan, which has been investigated in the field of data-to-text generation (Moryossef et al., 2019). Prior work has also explored the idea of disentangling different parts of a sentence using special tokens. Template-based Machine Translation Given a source-language sentence x = x_1 ··· x_I and a target-language sentence y = y_1 ··· y_J, an NMT model is trained to estimate the conditional probability P(y|x; θ), which can be given by
$$P(y \mid x; \theta) = \prod_{j=1}^{J} P(y_j \mid x, y_{<j}; \theta), \quad (1)$$
where θ is the set of parameters to optimize and y_{<j} is the partial translation at the j-th step. In this work, we firstly build a template to simplify the whole sentence. Formally, we use s and t to represent the source- and target-side templates, respectively. In the template, free token fragments are abstracted into nonterminals. We use e and f to denote the derivation rules of the nonterminals for the source and target template, respectively. The model is trained on two sub-tasks. Firstly, the model learns to generate the target template t:
$$P(t \mid s, e; \theta) = \prod_{j=1}^{T} P(t_j \mid s, e, t_{<j}; \theta). \quad (2)$$
Secondly, we train the same model to estimate the conditional probability of f:
$$P(f \mid s, e, t; \theta) = \prod_{j=1}^{|f|} P(f_j \mid s, e, t, f_{<j}; \theta). \quad (3)$$
The target sentence y can be reconstructed by extending each nonterminal in t using the corresponding derivation rule in f. We can jointly learn the two sub-tasks in one pass to improve both the training and inference efficiency. Formally, the model is trained to maximize the following joint probability of t and f in practice:
$$P(t, f \mid s, e; \theta) = P(t \mid s, e; \theta) \times P(f \mid s, e, t; \theta). \quad (4)$$
Template for Lexical Constraints In lexically constrained translation, some source phrases in the input sentence are required to be translated into pre-specified target phrases. For a source sentence x, we use {⟨u^(n), v^(n)⟩}_{n=1}^{N} to denote the given constraint pairs, where u^(n) is the n-th source constraint and v^(n) is the corresponding target constraint. All the N source constraints divide x into 2N + 1 fragments:
$$x = p^{(0)}\, u^{(1)}\, p^{(1)}\, u^{(2)} \cdots u^{(N)}\, p^{(N)},$$
where p^(n) is the n-th free token fragment.
We can set p^(0) to an empty string to represent sentences that start with a constraint, and set p^(N) to an empty string for sentences that end with a constraint. We can also set p^(n) to an empty string for the cases where u^(n) and u^(n+1) are adjacent in x. Similarly, the target sentence can be represented by
$$y = q^{(0)}\, v^{(i_1)}\, q^{(1)}\, v^{(i_2)} \cdots v^{(i_N)}\, q^{(N)},$$
where q^(n) is the n-th free token fragment in the target sentence y. We use i_1, ···, i_N to denote the order of the constraints in y. The n-th index i_n is not necessarily equal to n, since the order of the constraints in the target sentence y is often different from that in the source sentence x. We then abstract each fragment of text into nonterminals to build the template for lexically constrained translation. Concretely, the n-th free token fragment in the source sentence x is abstracted into X_n, for each n ∈ {0, ···, N}. The n-th free token fragment in the target sentence is abstracted into Y_n, for each n ∈ {0, ···, N}. In order to indicate the alignment between corresponding source and target constraints, we abstract u^(n) and v^(n) into the same nonterminal C_n. Note that X_n and Y_n are not linked nonterminals, since fragments of free tokens are not bilingually aligned. The resulting source- and target-side templates are given by
$$s = X_0\, C_1\, X_1\, C_2 \cdots C_N\, X_N \quad \text{and} \quad t = Y_0\, C_{i_1}\, Y_1\, C_{i_2} \cdots C_{i_N}\, Y_N.$$
We need to define some derivation rules to convert the template into a natural language sentence. The derivation of nonterminals can be seen as the inverse of the abstraction process. Thus the derivation of the target-side template t consists of the rules Y_n → q^(n) for each n ∈ {0, ···, N} and C_n → v^(n) for each n ∈ {1, ···, N}. The derivation of the source-side template s can be defined similarly. Note that C_n produces the n-th source constraint u^(n) at the source side while producing the target constraint v^(n) at the target side.
Figure 1: Example for lexically constrained translation (source x: 歌曲 七里香 的演唱者是 周杰伦 。; target y: Jay Chou sang the song Orange Jasmine .). The constraints are ⟨周杰伦, Jay Chou⟩ and ⟨七里香, Orange Jasmine⟩. Note that X_n and Y_n are not linked nonterminals, since the source and target free token fragments are not necessarily aligned. The derivation rule X_0 → 歌曲 is learned through the concatenation of X_0 and 歌曲 (i.e., X_0 歌曲). "ϕ" denotes an empty string. See Section 3.2 for more details.
In order to make the derivation rules learnable by NMT models, we propose to use the concatenation of the nonterminal and the corresponding sequence of terminals to denote each derivation rule. For example, we use Y_n q^(n) to represent Y_n → q^(n). We use d and f to denote the derivation of constraints and free tokens at the target side, respectively:
$$d = C_1\, v^{(1)} \cdots C_N\, v^{(N)} \quad \text{and} \quad f = Y_0\, q^{(0)}\, Y_1\, q^{(1)} \cdots Y_N\, q^{(N)}.$$
At the source side, we use c and e to denote the derivation of constraints and free tokens, respectively; c and e can be defined similarly. Since the constraints are pre-specified by the users, the model only needs to learn the derivation of free tokens. To this end, we place the derivation of constraint-related nonterminals before the template as a conditional prefix. Then the model learns the generation of the template and the derivation of free tokens, step by step. The final format of the input and output sequences at training time can be given by x′ = c <sep> s <sep> e and y′ = d <sep> t <sep> f, respectively. We use the delimiter <sep> to separate the template and the derivations. Figure 1 gives an example of both x′ and y′. At inference time, we feed x′ to the encoder, and provide "d <sep>" to the decoder as the constrained prefix. Then the model generates the remaining part of y′ (i.e., "t <sep> f").
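To make the sequence formats above concrete, here is a minimal sketch - our illustration, not the authors' released code - of how x′ and y′ could be assembled for a lexically constrained sentence pair, and of how a decoded y′ could be turned back into a plain target sentence (the conversion that Figure 2 below depicts). Whitespace tokenisation and the helper names build_sequences and expand_output are assumptions made for this example.

```python
# Minimal sketch (our illustration, not the authors' released code) of building the
# training sequences x' and y' for lexical constraints, and of converting a decoded
# y' back into a plain sentence. Tokens are handled as whitespace-separated strings.

SEP = "<sep>"

def build_sequences(src_fragments, tgt_fragments, constraints, tgt_order):
    """src_fragments: free fragments p(0)..p(N); tgt_fragments: q(0)..q(N);
    constraints: list of (u_n, v_n) phrase pairs; tgt_order: indices i_1..i_N (1-based)."""
    n = len(constraints)
    # c / d: derivations of the constraint nonterminals (source / target side)
    c = " ".join(f"C{k + 1} {constraints[k][0]}" for k in range(n))
    d = " ".join(f"C{k + 1} {constraints[k][1]}" for k in range(n))
    # s / t: templates with every free fragment abstracted into X_n / Y_n
    s_toks, t_toks = [], []
    for k in range(n):
        s_toks += [f"X{k}", f"C{k + 1}"]
        t_toks += [f"Y{k}", f"C{tgt_order[k]}"]
    s = " ".join(s_toks + [f"X{n}"])
    t = " ".join(t_toks + [f"Y{n}"])
    # e / f: derivations of the free-token nonterminals (nonterminal + fragment)
    e = " ".join(f"X{k} {frag}".strip() for k, frag in enumerate(src_fragments))
    f = " ".join(f"Y{k} {frag}".strip() for k, frag in enumerate(tgt_fragments))
    x_prime = f"{c} {SEP} {s} {SEP} {e}".strip()
    y_prime = f"{d} {SEP} {t} {SEP} {f}".strip()
    return x_prime, y_prime

def expand_output(y_prime):
    """Convert a decoded y' (= d <sep> t <sep> f) into a plain target sentence."""
    d, t, f = (part.strip() for part in y_prime.split(SEP))
    rules = {}
    # read derivation rules such as "C1 Jay Chou C2 Orange Jasmine" and "Y0 ... Y1 ..."
    for deriv in (d, f):
        key = None
        for tok in deriv.split():
            if tok[0] in "CY" and tok[1:].isdigit():
                key, rules[tok] = tok, []
            elif key is not None:
                rules[key].append(tok)
    # replace each nonterminal in the template with its derived tokens
    out = []
    for tok in t.split():
        out.extend(rules.get(tok, []))  # an omitted nonterminal expands to nothing
    return " ".join(out)
```

In this sketch, any nonterminal that the model fails to produce in f simply expands to the empty string, which mirrors how omitted nonterminals are handled when reconstructing output sentences later in the paper.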
Figure 2: The template can be converted into a natural language sentence by replacing the nonterminals according to the corresponding derivation rules (e.g., "Jay Chou sang the song C2 Y2" → "Jay Chou sang the song Orange Jasmine Y2" → "Jay Chou sang the song Orange Jasmine .").
Figure 2 explains the way we convert the output sequence into a natural language sentence. The conversion from the template to the target-language sentence can be done through a simple script, and the computational cost caused by the conversion is negligible compared with the model inference. Note that we also abstract the constraints when building the template. The reason is that, in this way, the model only needs to generate the order of the constraints rather than copy all the specific tokens, which may suffer from copy failure (Chen et al., 2021). The formal representation of our lexically constrained model is slightly different from that defined in Eq. (4), which should be changed into
$$P(t, f \mid c, s, e, d; \theta) = P(t \mid c, s, e, d; \theta) \times P(f \mid c, s, e, d, t; \theta). \quad (11)$$
Figure 3: Example for structurally constrained translation (source x: 歌曲 <i> 七里香 </i> 的演唱者是 <b> 周杰伦 </b> 。; target y: <b> Jay Chou </b> sang the song <i> Orange Jasmine </i> .). The markup tags are reserved in the template, while free tokens are abstracted. Note that X_n and Y_n are not linked nonterminals. See Section 3.3 for more details.
Template for Structural Constraints The major challenge of structured text translation is to maintain the correctness of the structure, which is often indicated by markup tags (Hashimoto et al., 2019). The proposed framework can also deal with structurally constrained translation. Similarly, we replace free token fragments with nonterminals to build the template, where the markup tags are reserved. Figure 3 shows an example. Formally, given a sentence pair ⟨x, y⟩ with N markup tags, the source- and target-side templates keep the tags as terminals and abstract the free fragments, i.e.,
$$s = X_0\, \text{tag}_1\, X_1\, \text{tag}_2 \cdots \text{tag}_N\, X_N \quad \text{and} \quad t = Y_0\, \text{tag}_{i_1}\, Y_1\, \text{tag}_{i_2} \cdots \text{tag}_{i_N}\, Y_N,$$
respectively, where tag_n denotes the n-th markup tag. The order of markup tags at the target side (i.e., i_1 ··· i_N) may be different from that at the source side (i.e., 1 ··· N). For each n ∈ {0, ···, N}, X_n can be derived into the n-th source-side free token fragment p^(n), and Y_n can be extended into the target-side free token fragment q^(n). X_n and Y_n are not linked. The derivation sequences can be defined as
$$e = X_0\, p^{(0)}\, X_1\, p^{(1)} \cdots X_N\, p^{(N)} \quad \text{and} \quad f = Y_0\, q^{(0)}\, Y_1\, q^{(1)} \cdots Y_N\, q^{(N)}.$$
The format of the input and output would be x′ = s <sep> e and y′ = t <sep> f, respectively. Figure 3 illustrates an example of both x′ and y′. The formal representation of our structurally constrained model is the same as Eq. (4). The model arranges the markup tags when generating t and completes the whole sentence when generating f, which is consistent with our motivation to decompose the whole task into constraint arrangement and free token generation. Setup Parallel Data We conduct experiments on two language pairs, including English-Chinese and English-German. For English-Chinese, we use the dataset of WMT17 as the training corpus, consisting of 20.6M sentence pairs. For English-German, the training data is from WMT20, containing 41.0M sentence pairs. We provide more details of data preprocessing in the Appendix. Following recent studies on lexically constrained translation (Chen et al., 2021; Wang et al., 2022), we evaluate our method on human-annotated alignment test sets. For English-Chinese, both the validation and test sets are from Liu et al. (2005). For English-German, the test set is from Zenkel et al. (2020).
We use newstest2013 as the validation set, whose word alignment is annotated by fast-align. The training sets are filtered to exclude test and validation sentences. Lexical Constraints Following some recent works (Song et al., 2019; Chen et al., 2020, 2021; Wang et al., 2022), we simulate real-world lexically constrained translation scenarios by sampling constraints from a phrase table that is extracted from the parallel sentence pairs based on word alignment. The script used to create the constraints is publicly available. Specifically, the number of constraints for each sentence pair ranges between 0 and 3, and the length of each constraint ranges between 1 and 3 tokens. We use fast-align to build the alignment of the training data. Model Configuration We adopt Transformer (Vaswani et al., 2017) as our NMT model, which is optimized by Adam (Kingma and Ba, 2015) with β1 = 0.9, β2 = 0.98 and ϵ = 10^-9. Please refer to the Appendix for more details on the model configuration and the training process. Evaluation Metrics We follow Alam et al. (2021a) to use the following four metrics to make a thorough comparison of the involved methods: • BLEU (Papineni et al., 2001): measuring the translation quality of the whole sentence; • Exact Match: the proportion of source constraints in the input sentences that are translated into the provided target constraints; • Window Overlap: quantifying the overlap ratio between the hypothesis and the reference windows for each matched target constraint, indicating whether this constraint is placed in a suitable context. The window size is set to 2. • 1-TERm: modifying TER (Snover et al., 2006) by setting the edit cost of constrained tokens to 2 and the cost of free tokens to 1. Main Results Template Accuracy We firstly examine the performance of the model in the template generation sub-task before investigating the translation performance. We compare the target-side template extracted from the reference sentence and the one generated by the model to calculate the accuracy of template generation. Formally, a generated template is counted as correct if it contains all the nonterminals that appear in the reference template t. In other words, the model must generate all the nonterminals to guarantee the presence of the provided constraints. However, the order of constraint-related nonterminals can be flexible, since there often exist various suitable orders for the provided constraints. In both English-Chinese and English-German, the template accuracy of our model is 100%. An interesting finding is that our model learns to reorder the constraints according to the style of the target language. We provide an example of constraint reordering in Table 1. When generating the free token derivation f, the model can recall all the nonterminals (i.e., Y_n) present in the template t in English-Chinese. In English-German, however, the model omits one free token nonterminal, with a frequency of 0.2%. We use empty strings for the omitted nonterminals when reconstructing the output sentence. Translation Performance Table 2 shows the results of lexically constrained translation, demonstrating that all the investigated methods can recall more provided constraints than the unconstrained Transformer model. Our approach can improve the BLEU score over the involved baselines. This improvement potentially comes from two aspects: (1) our system outputs can match more pre-specified constraints compared to some baselines, such as AttnVector (Wang et al., 2022) (100% vs.
93.8%); (2) our method can place more constraints in an appropriate context, which can be measured by window overlap. The exact match accuracy of VDBA (Hu et al., 2019) is lower than 100% due to the out-of-vocabulary problem in English-Chinese. TextInfill (Xiao et al., 2022) and our approach can achieve 100% exact match accuracy in both language pairs. However, TextInfill can only place the constraints in the pre-specified order, while our approach can automatically reorder the constraints. As a result, the window overlap score of our approach is higher than that of TextInfill.
Table 1: An example of our method. Constraints: ⟨slowing down, 减弱⟩; ⟨price hike, 价格上涨⟩. Source: Analysts are concerned that since there is no sign yet of any slowing down of this price hike , the prospect of the British real estate market as where it is heading now is far from optimistic. Reference: 分析家担心, 由于目前还看不见 价格上涨 趋势有 减弱 的迹象, 照此发展下去, 英国房地产市场前景堪忧。 Input (enc): C1 slowing down C2 price hike <sep> X0 C1 X1 C2 X2 <sep> X0 Analysts are concerned that since there is no sign yet of any X1 of this X2 , the prospect of the British real estate market as where it is heading now is far from optimistic. We replace the nonterminals in the template using the derivation rules to reconstruct the final result (i.e., "Result"). Surprisingly, we find that our model can automatically sort the provided constraints when generating the template. In this example, C1 is before C2 in the source-side template. But in the target-side template generated by our model, C2 is before C1, which is more suitable for the target language.
Table 2: Results of the lexically constrained translation task for both English-Chinese and English-German. For clarity, we highlight the highest score in bold and the second-highest score with underlines.
Please refer to Table 8 in the Appendix for more translation examples of both our method and some baselines. Unconstrained Translation A concern for lexically constrained translation methods is that they may cause poor translation quality in unconstrained translation scenarios. We thus evaluate our approach in the standard translation task, where the model is only provided with the source sentence x. Under this circumstance, the input and output take the same format as above with an empty constraint set, so that the whole source sentence forms a single free fragment on the source side and the whole target sentence a single free fragment on the target side. The BLEU scores of our method are 42.6 and 25.0 for English-Chinese and English-German, respectively. The performance of our method is comparable with the vanilla model, which can dispel the concern that our approach may worsen the unconstrained translation quality. Table 3 shows the decoding speed. Since we did not change the model architecture and the decoding algorithm, the speed of our method is close to that of the vanilla Transformer model (Vaswani et al., 2017). Although our speed is almost the same as the vanilla model, our inference time is a bit longer, given the fact that the output sequence y′ is longer than the original target-language sentence y.
Table 4: Results of the structurally constrained translation task. We highlight the highest score in bold and the second-highest score with underlines.
We vary the amounts of training data to investigate the effect of data scale on our approach. Figure 4 shows the results. The BLEU score increases with the data size, while the window overlap score reaches the highest value when using 10.0M training examples. When using all the training data, the 1-TERm metric achieves the best value.
We find that the exact match accuracy of our method is maintained at 100%, even with only 0.6M training examples. This trend implies that our method can be applied in some low-resource scenarios. More Analysis Due to space limitations, we place a more detailed analysis of our approach in the Appendix, including the effect of the alignment model, the performance on more language pairs, and the domain robustness of our model, which is evaluated on the WMT21 terminology translation task (Alam et al., 2021b) that lies in the COVID-19 domain. Setup Data We conduct our experiments on the dataset released by Hashimoto et al. (2019), which supports the translation from English to seven other languages. We select four languages, including French, Russian, Chinese, and German. For each language pair, the training set contains roughly 100K sentence pairs. We report the results on the validation sets since the test sets are not open-sourced. We follow Hashimoto et al. (2019) to use SentencePiece to preprocess the data, which supports user-defined special symbols. The model type of SentencePiece is set to unigram, and the vocabulary size is set to 9000. For English-Chinese, we over-sample the English sentences when learning the joint tokenizer, since Chinese has more unique characters than English (Hashimoto et al., 2019). We did not perform over-sampling for other language pairs. We register the XML tags and URL placeholders as user-defined special symbols. In addition, we also register &amp;, &lt;, and &gt; as special tokens, following Hashimoto et al. (2019). Model Configuration Since the data scale for structurally constrained translation is much smaller than that for lexically constrained translation, we follow Hashimoto et al. (2019) to set the width of the model to 256 and the depth of the model to 6. See Section B.1 in the Appendix for more details. Baselines We compare our approach with the following three baselines: • Remove: removing the markup tags and only translating the plain text; • Split-Inject (Al-Anzi et al., 1997): splitting the input sentence based on the markup tags, then translating each text fragment independently, and finally injecting the tags; • XML (Hashimoto et al., 2019): directly learning the NMT model end-to-end using parallel sentences with XML tags. Evaluation Metrics We follow Hashimoto et al. (2019) to use the following metrics: • BLEU: considering the structure when estimating the BLEU score (Papineni et al., 2001); • Structure Accuracy: utilizing the etree package to check whether the system output is a valid XML structure (i.e., Correct), and whether the output structure exactly matches the structure of the given reference (i.e., Match). All the metrics are calculated using the evaluation script released by Hashimoto et al. (2019). Main Results Template Accuracy We firstly examine the accuracy of the generated templates. A generated template is correct if • the template is a valid XML structure; • the template recalls all the markup tags of the input sentence. The template accuracy of our method is 100% in all four language pairs. Similar to lexically constrained translation, the model may omit some free token nonterminals (i.e., Y_n) when generating the derivation f, with omission rates of 0.4%, 0.6%, 0.1%, and 0.9% in English-French, English-Russian, English-Chinese, and English-German, respectively. We use empty strings for the omitted nonterminals when reconstructing the output sentence. Table 4 shows the results of all the involved methods.
Our approach can improve the BLEU score over the three baselines, and the structure correctness is 100%. Although Split-Inject can also guarantee the correctness of the output, its BLEU score is much lower, which is potentially caused by the reason that some fragments are translated without essential context. The structure match accuracy with respect to the given reference is not necessarily 100%, since the order of markup tags can be diverse due to the variety of natural language. See Table 9 in Appendix for some translation examples. Conclusion In this work, we propose a template-based framework for constrained translation and apply the framework to two specific tasks, which are lexically and structurally constrained translation. Our motivation is to decompose the generation of the whole sequence into the arrangement of constraints and the generation of free tokens, which can be learned through a sequence-to-sequence framework. Experiments demonstrate that the proposed method can achieve high translation quality and match accuracy simultaneously and our inference speed is comparable with unconstrained NMT baselines. Limitations A limitation of this work is that our method can not cope with one-to-many constraints (e.g., ⟨bank, 河岸|银行⟩). Moreover, we only validate the proposed template-based framework in machine translation tasks. However, constrained sequence generation is vital in many other NLP tasks, such as table-to-text generation (Parikh et al., 2020), text summarization (Liu et al., 2018), and text generation (Dathathri et al., 2020). In the future, we will apply the proposed method to more constrained sequence generation tasks. A.1 More Details on Data For the lexically constrained translation task, Chinese sentences are segmented by Jieba 6 , while English and German sentences are tokenized using Moses (Koehn et al., 2007). The tokenized sentences are then processed by BPE (Sennrich et al., 2016) with 32K merge operations for both the two language pairs. We detokenize the model outputs before calculating the sacreBLEU. A.2 More Details on Model We adopt Transformer (Vaswani et al., 2017) as our NMT model. For English-Chinese, we use the base model, whose depth is 6, and the width is 512. For English-German, we use the big model, whose depth is 6, and the width is 1024. The base and big models are optimized using the corresponding learning schedules introduced in Vaswani et al. (2017). We train base models for 200K iterations using 4 NVIDIA V100 GPUs and train big models for 300K iterations using 8 NVIDIA V100 GPUs. Each mini-batch contains approximately 32K tokens in total. All the models are optimized using Adam (Kingma and Ba, 2015), with β 1 = 0.9, β 2 = 0.98 and ϵ = 10 −9 . In all experiments, both the dropout rate and the label smoothing penalty are set to 0.1. The beam size is set to 4. A.3 Effect of Alignment Model In this work, we use an alignment model to produce word alignments for the training set, which is then used for phrase table extraction. By default, we use all the parallel data in the training set to train the alignment model, using the fast-align toolkit. To better understand the effect of the alignment model, we replace the default alignment model with a weaker one that is trained using only 0.1M sentence pairs. Table 5 shows the result, from which we find that using the weaker word alignment can negatively affect the BLEU score. However, the exact match accuracy is still 100%, and changes in the other two metrics are modest. 
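As a concrete illustration of the constraint creation process described in Section 4.1, and of the alignment dependence examined in A.3 above, the following is a rough sketch of how phrase-pair constraints might be sampled from a word-aligned sentence pair. It is not the publicly released script; the bounds follow the setup in the paper (at most 3 constraints of 1-3 tokens each), while the function names and the uniform, source-side-only non-overlap sampling are assumptions made for the example.

```python
import random

def consistent_phrase_pairs(src, tgt, align, max_len=3):
    """Enumerate phrase pairs consistent with the word alignment: no alignment
    link connects a word inside the pair to a word outside it on either side."""
    pairs = []
    for i1 in range(len(src)):
        for i2 in range(i1, min(i1 + max_len, len(src))):
            tgt_pos = [j for (i, j) in align if i1 <= i <= i2]
            if not tgt_pos:
                continue  # skip unaligned source spans
            j1, j2 = min(tgt_pos), max(tgt_pos)
            if j2 - j1 + 1 > max_len:
                continue
            # reject if any word in the target span aligns outside the source span
            if any(j1 <= j <= j2 and not (i1 <= i <= i2) for (i, j) in align):
                continue
            pairs.append(((i1, i2), (j1, j2)))
    return pairs

def sample_constraints(src, tgt, align, max_constraints=3, seed=0):
    """Sample up to `max_constraints` phrase-pair constraints; overlap is only
    checked on the source side for brevity."""
    rng = random.Random(seed)
    pairs = consistent_phrase_pairs(src, tgt, align)
    rng.shuffle(pairs)
    chosen, used_src = [], set()
    for (i1, i2), (j1, j2) in pairs:
        if used_src & set(range(i1, i2 + 1)):
            continue
        chosen.append((" ".join(src[i1:i2 + 1]), " ".join(tgt[j1:j2 + 1])))
        used_src.update(range(i1, i2 + 1))
        if len(chosen) == max_constraints:
            break
    return chosen
```

Under this view, the weaker alignment model examined in A.3 simply yields noisier alignment sets and hence noisier sampled phrase pairs, which is consistent with the observed drop in BLEU while exact match accuracy remains at 100%.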
A.4 Domain Robustness Domain robustness is about the generalization of machine learning models to unseen test domains (Müller et al., 2020). In our experiments, all the involved models are trained in the news domain. We evaluate the domain robustness of these methods on the WMT21 terminology translation task (Alam et al., 2021b), which lies in the COVID-19 domain. Since this task does not support English-German translation, we only conduct this experiment on English-Chinese. In this test set, the maximum number of constraints is 12. We thus modify the phrase extraction script to increase the maximum number of constraints from 3 to 12, and then re-train both the baselines and our models. Note that we only change the number of constraints, while the training domain is still news. Since the open-sourced implementation of AttnVector (Wang et al., 2022) does not support more than 3 constraints, we omit this baseline in this experiment. The test set of the WMT21 terminology translation task also contains some constraints that consist of more than one target term (i.e., one-to-many constraints). We only select the one that appears in the reference as our constraint. We leave it to future work to extend the current framework to one-to-many constraints. Several of the baselines achieve much lower exact match accuracy due to the domain shift. However, the BLEU score of VDBA is lower than that of the other constrained translation approaches, while our method can also achieve the best BLEU score. The exact match accuracy of TextInfill (Xiao et al., 2022) is lower than 100% because sometimes the model cannot generate all the slots within the length limitation. The results indicate that our approach can better cope with constraints coming from unseen domains. A.5 X-English Translation We also conduct experiments on X-English translation directions (i.e., Chinese-English and German-English). Due to the limitation of computational resources, we only train the two most recent baselines, AttnVector (Wang et al., 2022) and TextInfill (Xiao et al., 2022), which achieve the best BLEU score and the highest exact match accuracy among the baselines, respectively. As shown in Table 7, we find that our approach performs well in both Chinese-English and German-English, achieving 100% exact match accuracy and a better BLEU score. A.6 Case Study As mentioned in Section 4.2, our approach outperforms the baselines in the lexically constrained translation task. To better understand the difference between our approach and some representative baselines, we list some examples in Table 8. B.1 More Details on Model All the models are trained for 40K iterations in all four translation directions. We adopt the cosine learning rate schedule presented in Wu et al. (2019), but we set the maximum learning rate to 7 × 10^-4 and the warmup step to 8K. The period of the cosine function is set to 32K, which means that the learning rate decays to its minimum value at the end of training. Both the dropout rate and the label smoothing penalty are set to 0.2. Each mini-batch consists of approximately 32K tokens in total. We use Adam (Kingma and Ba, 2015) for model optimization, with β1 = 0.9, β2 = 0.98 and ϵ = 10^-9. We also set the weight decay coefficient to 10^-3. Both the baseline models and our models are trained using the same hyperparameters. B.2 Case Study We list some translation examples in Table 9 to provide a detailed understanding of our work. The examples demonstrate that our approach can effectively cope with structured inputs.
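The structured-text examples in Table 9 are scored with the structure metrics from Section 5.1. The sketch below is our rough approximation of those metrics, not the evaluation script released by Hashimoto et al. (2019): "Correct" checks well-formedness and "Match" compares tag skeletons; wrapping outputs in a dummy root element and using the standard-library ElementTree are assumptions.

```python
# Rough sketch (not the released evaluation script) of the "Correct" and "Match"
# structure metrics: Correct = output parses as well-formed XML; Match = its tag
# structure equals that of the reference. References are assumed well-formed.
import xml.etree.ElementTree as ET

def is_correct(output: str) -> bool:
    """True if the output parses as well-formed XML (plain text is ignored)."""
    try:
        ET.fromstring(f"<root>{output}</root>")
        return True
    except ET.ParseError:
        return False

def tag_skeleton(text: str):
    """Return the nested tag structure, discarding all character data."""
    root = ET.fromstring(f"<root>{text}</root>")
    def walk(elem):
        return (elem.tag, [walk(child) for child in elem])
    return walk(root)

def is_match(output: str, reference: str) -> bool:
    """True if the output is well-formed and its tag structure equals the reference's."""
    return is_correct(output) and tag_skeleton(output) == tag_skeleton(reference)
```

In this reading, an output with unbalanced or broken tags fails the Correct check, while a well-formed output that orders its tags differently from the reference fails only the Match check, which is why Match is not necessarily 100% even when all outputs are structurally correct.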
Constraints: ⟨guests, 来宾⟩; ⟨culinary culture, 食品文化⟩; ⟨Chinese-style, 中式⟩
Source: Wang Kaiwen , Chinese ambassador to Latvia , introduced to the guests a few major styles of cooking in Chinese gourmet foods and expressed his hope that through tasting Chinese-style gourmet foods more will be learned about China and Chinese culinary culture.
Table 8: Translation examples from the lexically constrained translation task. Among the baselines, we show AttnVector (Wang et al., 2022) and TextInfill (Xiao et al., 2022), since they achieve the best BLEU score and the highest exact match accuracy, respectively, excluding our approach. In the first example, AttnVector omits the target constraint 食品文化 in its output, while both TextInfill and our approach can generate all three constraints. In the second example, TextInfill places the constraint 吉曾柯 in the wrong context, while our approach outputs a better result.
Mixing Languages during Learning? Testing the One Subject—One Language Rule In bilingual communities, mixing languages is avoided in formal schooling: even if two languages are used on a daily basis for teaching, only one language is used to teach each given academic subject. This tenet known as the one subject-one language rule avoids mixing languages in formal schooling because it may hinder learning. The aim of this study was to test the scientific ground of this assumption by investigating the consequences of acquiring new concepts using a method in which two languages are mixed as compared to a purely monolingual method. Native balanced bilingual speakers of Basque and Spanish—adults (Experiment 1) and children (Experiment 2)—learnt new concepts by associating two different features to novel objects. Half of the participants completed the learning process in a multilingual context (one feature was described in Basque and the other one in Spanish); while the other half completed the learning phase in a purely monolingual context (both features were described in Spanish). Different measures of learning were taken, as well as direct and indirect indicators of concept consolidation. We found no evidence in favor of the non-mixing method when comparing the results of two groups in either experiment, and thus failed to give scientific support for the educational premise of the one subject—one language rule. Introduction Although some of the positive consequences of bilingualism in domain-general cognition [1][2][3] remain debated on the basis of data showing similar performance in bilinguals and monolinguals in executive control tasks [4][5][6], benefits of bilingualism at a linguistic level seem to be less controversial and appear generalizable. For instance, bilinguals have been shown to outperform monolinguals in phonetic awareness tasks [7] or new vocabulary acquisition [8]. The positive-linguistic-consequences of bilingualism are well-accepted, and the negative impact of early bilingual immersion is at the very least debatable, considering that bilingual children have been shown to reach the same linguistic milestones as monolinguals over the same developmental periods [9][10]. It is a widely held view in bilingual education that introducing more than one language "too early" in life may be detrimental to learning by delaying language acquisition or even triggering confusion between languages in children. However, scientific observations show that children can learn more than one language in a naturalistic context in a seemingly effortless way, and there is little evidence to date of a detrimental effect produced by bilingual education (see [11] for a detailed description of this "bilingual paradox"). In fact, a number of studies have reported an advantage in bilinguals who are exposed to (and use) two or more languages from birth as compared to late bilinguals, since early bilinguals usually show greater fluency and mastery in almost every aspect of their second language (L2) [12][13][14][15]. More importantly for the purposes of the current study, it has been shown that children immersed in a bilingual educational context learn new words better than children immersed in a monolingual context [16]. 
Given the prevalence of bilingualism in modern societies and the multiplication of policies advocating the protection of minority languages [17], the inclusion of bilingualism in education is a key issue in regions where two or more languages have equal official status (e.g., Catalonia or the Basque Country, which hold Catalan or Basque, respectively, to an equal status as Spanish, or Wales, where Welsh is the official language on a par with English). This also happens in places where a new language is progressively developing (as indexed by the increasing number of speakers) as is the case for Spanish in the United States [18]. In these circumstances, the two languages of a bilingual community tend to be represented in the educational system. While there are different ways in which bilingual education can be implemented, one of the most widespread methods is the Two-Way Immersion program (TWI) [19,20]. The TWI promotes the use of the two languages as vehicular languages, and it has been adopted in most countries with strong bilingual communities. This method has been implemented either on the basis of 50/50 exposure (i.e., children receive instruction and tuition half of the time in one language and the other half in the other), or on the basis of 90/10 exposure (i.e., children initially receive most of the tuition in the "new/incoming language" and get increasingly exposed to the strongest language, generally aiming to reaching the 50/50 exposure ratio by grade 5) [21]. This being said, it does not seem to matter which method of immersion is employed by a given bilingual school, a core principle prevails: the one language-one subject rule. In the vast majority of bilingual schools throughout the world, each subject is taught in a unique language during the whole academic year, and language mixing is avoided within the context of a subject because it is taken for granted that mixing languages would lead to confusion and hinder learning. For illustration purposes, considering a Spanish-English bilingual school, if a given group of students is taught Geography in English and Mathematics in Spanish, English would not be used or allowed during the Mathematics lessons, and Spanish would not be used during the Geography lessons. However, such a radical division is rather unrealistic when taking into account bilingual exposure outside the classroom, given that switching from one language to the other is a highly common behavior in bilingual societies [22][23][24][25], and that language switching spontaneously occur from early childhood [26]. Hence, bilinguals receive and transmit information in a language-mixed fashion without effort, but in sharp contrast, it is the singlelanguage context instead of a dual-language context that bilinguals encounter during formal schooling in bilingual schools. The reason behind this one subject-one language rule seems to stem from fears of the detrimental consequences of mixing languages (i.e., the worry that it may lead to confusion when acquiring new concepts and therefore to deteriorate concept acquisition or learning). To the best of our knowledge, however, this commonly held view has not yet received any scientific validation or support. On the contrary, it has been suggested that the consequences of being immersed in a bilingual learning context are potentially beneficial instead of detrimental. 
In a study with a large sample of Spanish-speaking English learners, Baker and colleagues [27] investigated how participants differed in their English reading achievement depending on the reading teaching methods. They contrasted a singlelanguage (English-only) program and a mixed-language (bilingual) program. The authors found that participants following the mixed-language bilingual approach showed highly similar reading achievement as participants in the single-language group, and that the differences between groups, if any, were in favor of the mixed-language context. Here, we address this question directly: Is language-mixing during a learning procedure detrimental to learning? In other words, is learning in a mixed-language context less efficient than in a single-language context? Learning, defined here as the acquisition, understanding and retention of new information, occurs spontaneously and very early on in life. But, once children acquire the ability to use language (comprehend and produce utterances), and especially when they start conventional education, learning shifts toward concept acquisition mediated by language. For instance, when encountering the biological definition of 'heart', a student may construct her concept from "something inside that makes you live and love" or "a hollow muscular organ that pumps the blood through the circulatory system by rhythmic contraction and dilation" (from the Oxford dictionary). Concept learning in monolingual contexts (e.g., how new concepts are recognized, assigned meaning, and consolidated either in L1 or in L2 without language mixing) has been extensively studied over the past decade [28]. Language-mediated learning can be investigated in many different ways, ranging from experimental methods that emulate the moment in which a word is encountered for the first time and its meaning needs to be inferred from context [29] to methods that are based on providing the exact meaning of a new word through exposure to its definition(s) [30]. In the current study, we thus chose to use the inferential learning method (i.e., provide features that characterize a concept instead of merely mapping a name to a particular concept) in order to test whether semantic representations acquired in a mixed-language context differ in quality from those acquired in a monolingual context. The selection of the inferential learning method relies on recent evidence that this method allows to generalize and acquire more stable semantic representations as compared to alternative mapping methods [31]. We investigated whether concepts learnt in a single-language context are better acquired and consolidated than concepts learnt in a mixed-language (i.e., bilingual) context, or-alternatively and in contrast to common belief-whether there is no learning deficit associated with a bilingual learning context. In a mixed-language context, information needs to be decoded in two languages before it is integrated at a common semantic level. Under these conditions, the learning process may be expected to suffer given the additional effort required to switch between languages. 
However, fluent bilinguals have been shown to spontaneously and unconsciously translate input from one language into their other language [32][33][34][35][36][37][38], and several studies have shown that the cost associated with implicit translation is minimal for relatively balanced bilinguals [39][40][41]; note that this is also the case for unbalanced bilinguals, who manifest sizeable translation priming effects from L2 to L1; [42]. Thus, it could be envisaged that language mixing does not affect learning significantly, given that inputs from the two languages are automatically translated into the other language thus favoring parallel semantic access in highly proficient or balanced bilinguals [43][44][45]. In Experiment 1, two groups of adult balanced bilinguals were exposed to a concept learning phase either in a single-(monolingual) or in a mixed-(bilingual) language context. We opted for naturalistic learning involving the association of semantic features with a novel unknown visual object (i.e., the inferential learning method; see [31]). One group of participants learnt these concepts in a single-language context in which two features of the object were provided in the same language. The other group of participants acquired these concepts in a mixedlanguage context, with the two definitions presented in different languages. After the learning phase, participants were tested in a series of experimental tests aimed at quantifying the extent to which semantic acquisition and representation differed across groups. Both direct and indirect measures of concept acquisition were obtained. In Experiment 2, two groups of bilingual children attending a bilingual school were tested using the same experimental paradigm in order to test the extent to which the results in adults would apply to an educational context. If the one subject-one language rule has any grounding, learning should be better established in the single-language context (SLC) than in the mixed-language context (MLC), given the possible confusion caused by language mixing. If so, enhanced consolidation in the singlelanguage context should be reflected by better performance in tasks directly or indirectly measuring learning and consolidation. If, on the contrary, participants in the single-language context do not outperform mixed-language context learners, then it would be reasonable to call into question the one subject-one language rule, and maybe think of it as a prejudice that has developed on the basis of ill-formed intuition surrounding the bilingual paradox. Methods Ethics Statement. All the participants signed informed consent forms before the experiment and were appropriately informed regarding the basic procedure of the experiment, according to the ethical commitments established by the BCBL Scientific Committee and by the BCBL Ethics Committee that approved the experiment (Approval date: 19/03/2014; Approval reference: 19314J). Participants. Fifty young adults (28 females, mean age of 22.96 years) took part in the experiment. All of them were Basque-Spanish balanced bilinguals who acquired both their languages before the age of 6. Their language proficiency in Basque and Spanish was assessed in two ways. First, participants were asked to name a set of 77 common objects in the two languages (see [46] for a similar approach), which showed good vocabulary knowledge in both languages (74.3, SD = 0.82, in Basque and 76.54, SD = 0.08, in Spanish). 
Second, all participants were individually interviewed by a native Basque-Spanish bilingual linguist in order to assess their communicative skills in each language. The interview started by asking participants to provide basic sociodemographic information, continued with questions related to participants' personal interests, and ended with questions about how they got to know the research center. The language of the interactions was changed from one question to another so that the two critical languages could be assessed in detail. After each interview, the linguist rated the participant based on his/her performance following a 1-to-5 scale (where 5 represents native-like competence and 1 corresponds to an extremely basic or no knowledge of the language). All participants got scores of 5 in both languages. Participants were assigned to two context groups: the single-language context (SLC) or the mixed-language context (MLC). To control for between-group homogeneity, we made sure that the participants in the two context groups were matched for age, gender, age of acquisition, and proficiency in both Basque and Spanish (all ps>.12; see Table 1). In order to ensure that participants in both groups did not significantly differ in terms of domain-general cognitive abilities, three experimental tasks were designed for matching purposes. The first task comprised an assessment of participants' non-verbal IQ obtained from an abridged version of the Kaufman Brief Intelligence Test, K-BIT [47]. Participants had a maximum of 6 minutes to correctly respond to as many trials as they could from the original set of 34 multiple-choice items. The second task was a classic flanker task [48] consisting of a total of 48 trials, which could be congruent, neutral or incongruent (16 items each). The third task was a Simon task [49], which was also made of 48 congruent, incongruent, or neutral trials (16 items in each condition). These two latter tasks were used to measure participants' inhibitory skills and to minimize any potential influence of executive control differences on the critical experiments. Pairwise comparisons revealed no significant differences between groups in all three tasks, and the classic indices associated to the flanker and Simon tasks did not differ across the SLC and MLC (all ps.>.20, see Table 1). Materials. A set of 40 pictures of unfamiliar tools was selected. These were the unknown objects participants had to learn. Each object was paired with two definitions of well-known daily-life objects (e.g., a key). For instance, the definitions "it is kept in the pocket" and "it unlocks doors' locks" referring to the common object "key" were associated with one of the novel objects to be learnt (see [31]). In a norming test run during the material creation phase, both definitions were rated for their informativeness (i.e., how well each of the definitions matched the real object they were derived from) and results showed that the definitions were highly informative, with a mean rating of 4.16 out of 5 (SD = 0.91). Also, we avoided prevalence of one definition over the other and we made sure that each definition of a pair was equally informative about the object (p>.81). For the MLC, one definition in each pair was translated to Basque (see S1 Appendix for the complete set of definitions). Informativeness of the definitions was also rated as being highly similar across the two languages in a norming study. 
Basque definitions had a mean informativeness rate of 4.10 with a SD of 0.98, while Spanish definitions had a mean informativeness rate of 4.22 and an SD of 0.85 (p>.53). Procedure. The whole experimental session lasted for about one hour in total (see Fig 1 for a schematic summary of the procedure). After the three short control tasks used to match the groups (IQ test, flanker task and Simon task), the learning phase started. Participants learnt the new objects in blocks of four. The pictures were presented one-by-one in the middle of a screen with two features written below them. Learning was self-paced: When a participant thought he/she had learnt the object and its features, he/she could move to the next trial by pressing the spacebar. After every block of four trials, they were tested on the items of that block in order to get an estimate of their immediate learning (Test A). In Test A, one of the learnt pictures appeared in the center of the screen, surrounded by 4 written feature pairs. One of the feature pairs was the correct one, while the others were distractor pairs corresponding to other objects learnt in the same session. If participants failed any of the 4 trials, they had to repeat the whole block of trials and retake Test A, until they succeeded in all 4. Once they met this criterion, the learning session moved to the next block of 4 items. Thus, they went through 40 items in total. Each item appeared twice over the entire learning session (to counterbalance definition order). Both participant groups learnt exactly the same objects, either with the two definitions in Spanish (SLC group), or with one definition in Spanish and the other in Basque (MLC group). After the learning phase, a short association test started (Test B). In each trial of Test B, participants read a feature pair (i.e., the definitions) displayed on the middle of the screen and were instructed to select the corresponding object from 4 pictures presented on the screen. In Test B, participants responded to the 40 items in a row, no feedback was given on the response accuracy and errors did not trigger a test repeat. This test was used to assess immediate recall after the initial learning phase, and taken as an index of learning. After completion of the learning phase (Test A and Test B), participants completed an Old-New judgment task. After a fixation point (centrally displayed for 500 ms), participants were presented with a target picture in the middle of the screen for a maximum of 3000 ms or until a response was given. Targets consisted of the 40 learnt unfamiliar objects (the Unfamiliar Old items) intermixed with 40 unfamiliar objects they had not learnt (Unfamiliar New items), 40 familiar objects (Familiar New items), and the 40 familiar objects from which the definitions of the learnt objects were derived (Familiar Related items; e.g., the picture of a real key). All the images are presented in S2 Appendix. Eighteen participants who did not take part in the experiment (9 females) rated the 160 items for their familiarity on a scale from 1 to 7 (1 = highly unfamiliar; 7 = highly familiar). This was done in order to ascertain that the objects in the Unfamiliar New and Unfamiliar Old conditions did not differ from each other in terms of familiarity, and that the objects used in the Familiar New and Familiar Related conditions were also equally familiar. An unifactorial ANOVA was run on the results, showing significant differences across conditions (F(3,51) = 70.43, p<.01). 
Pairwise comparisons demonstrated that the items in the two Unfamiliar conditions did not differ from each other (Unfamiliar New = 2, Unfamiliar Old = 1.95; t(34) = -.17, p>.86). Similarly, the objects in the two Familiar conditions did not differ from each other (Familiar New = 6.12, Familiar Related = 6.24; t(34) = .25, p>.80). As expected, the Familiar conditions significantly differed from the Unfamiliar conditions (all ts>10 and ps<.01). Participants were asked to respond as fast and as accurately as possible by pressing one out of two buttons on a response box to determine whether the displayed objects corresponded to the learnt objects ("Old" items; Unfamiliar Old condition) or to any other object not displayed during the learning phase ("New" items; Unfamiliar New, Familiar New and Familiar Related conditions). No feedback was provided to participants during the task. Accuracy rates as well as reaction times were collected. Familiar New and Familiar Related items were included in order to have a direct measure of false memory effects. The false memory effect is a well-studied phenomenon consisting of participants showing impoverished performance in identifying that they have not previously seen a specific item (for example, the word "sleep") when there is a close semantic relationship between this item and others from the study set (for example, "bed", "night", "dream"; see, among many others [50][51][52]). Hence, the false memory effect is calculated by contrasting the RTs and error rates in the Familiar New and Familiar Related conditions. Unfamiliar New items were included in order to avoid any possible response bias due to a different proportion of Familiar and Unfamiliar materials. As a result, the inclusion of the necessary control conditions (Familiar New and Unfamiliar New) makes the proportion of expected "Old" and "New" responses different from the 50%-50% ratio used in some paradigms. However, it should be considered that this relative imbalance is rather common in the memory literature on the false memory effect [52,53].
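For illustration, the false-memory contrast described above can be computed per participant and then compared across learning contexts. The sketch below is a minimal example in R; the data frame and its column names (subject, group, condition, rt, correct) are hypothetical placeholders, not the study's actual files.

```r
# Minimal sketch (R): per-participant false memory effect and group comparison.
# Assumes a per-trial data frame `trials` with columns: subject, group ("SLC"/"MLC"),
# condition ("FamiliarNew"/"FamiliarRelated"/...), rt (ms), correct (0/1).
library(dplyr)
library(tidyr)

false_memory <- trials %>%
  filter(condition %in% c("FamiliarNew", "FamiliarRelated")) %>%
  group_by(subject, group, condition) %>%
  summarise(mean_rt = mean(rt[correct == 1]),      # correct-trial RTs only
            error_rate = 1 - mean(correct),
            .groups = "drop") %>%
  pivot_wider(names_from = condition,
              values_from = c(mean_rt, error_rate)) %>%
  mutate(fm_rt  = mean_rt_FamiliarRelated  - mean_rt_FamiliarNew,     # RT cost
         fm_err = error_rate_FamiliarRelated - error_rate_FamiliarNew)

t.test(false_memory$fm_rt, mu = 0)                         # is the RT cost reliable?
t.test(fm_rt ~ group, data = false_memory, var.equal = TRUE)  # SLC vs. MLC magnitude
```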
A final Matching task was also administered to the participants in order to explicitly measure the association strength between the learnt objects and their familiar associates (e.g., between the learnt object that corresponded to a tool that can be kept in the pocket and that is used to unlock doors, and a real key). Participants were presented with the items used in the Familiar Related condition from the Old-New judgment task (e.g., the picture of a key) for 1000 ms, followed by the presentation of 4 different objects from the learning phase (i.e., the Unfamiliar Old items) on the upper part of the screen. From left to right, each of the 4 target objects was associated with a specific button from a response box, and participants had to indicate as accurately as possible which of the objects was the closest in meaning to the reference stimulus (e.g., identify which of the Unfamiliar Old objects was conceptually similar to a real key). Items remained on the screen until a response was given and there was no time limit. Considering that participants needed to simultaneously recall the definitions associated with the 4 learnt items displayed and check which of them best matched the real object presented, only accuracy was used as a dependent measure in this task. Learning (Test A and Test B) The two groups of participants displayed a similar learning trend (see Table 2 for detailed results of all the tasks). Error rates did not differ between groups in Test A (mean error rates of 2.4% and 2.9% for the MLC and SLC groups, respectively; t(48) = -.47, p>.64), suggesting that immediate learning did not differ between the SLC group and the MLC group. Similarly, the two groups saw a similar number of items (including block repetitions after a given error; t(48) = -.47, p>.64). In Test B, SLC and MLC groups did not differ in the number of incorrect responses (t(48) = .46, p>.65), again showing a highly similar performance during the learning process and immediate recall. Old-New judgment task. In the critical Old-New judgment task, we first explored whether SLC and MLC groups differed in their identification of the learnt objects (Unfamiliar Old condition). Response times associated with correct responses did not differ as a function of the context to which participants were assigned (t(48) = -.17, p>.86). Similarly, the two groups did not differ in their accuracy in identifying Unfamiliar Old items (t(48) = -.50, p>.62). Next, ANOVAs were performed on the response times and error rates associated with each of the three conditions requiring a "New" response (i.e., Familiar New, Unfamiliar New and Familiar Related conditions) following a 3 × 2 design in which the three-level factor Condition was a within-participants factor and the two-level factor Learning Context was a between-participants factor. The ANOVA on the RTs showed a main effect of Condition (F(2,96) = 65.43, p<.01) but no effect of Learning Context, nor an interaction between the two factors (all Fs<1). The ANOVA on the error data also showed a significant main effect of Condition (F(2,96) = 10.19, p<.01), but no effect of Learning Context nor an interaction between the two factors (all Fs<1). In order to assess concept consolidation, we looked at the false memory effect (i.e., the difference between the Familiar New and Familiar Related conditions). As shown by the results of the t-test on the RTs, Familiar Related items were responded to significantly more slowly than Familiar New items (612 ms vs. 582 ms, respectively; t(49) = 9.53, p<.01). This 30-ms difference corresponds to the false memory effect elicited by the conceptual overlap between the Familiar Related items and the Unfamiliar Old items. Critically, the magnitude of this effect was similar in the SLC and MLC groups (effects of 32 ms and 28 ms, respectively; t(48) = -.69, p>.50; see Fig 2). When looking at the false memory effect on the error data, we also found a significant effect of Condition (t(49) = 3.61, p<.01), such that participants made more errors in the Familiar Related condition than in the Familiar New condition (a 1.1% difference). Participants had difficulty in identifying as "New" the items in the Familiar Related condition, given the shared conceptual features with the Unfamiliar Old items. As in the RT data, the false memory effect in accuracy was indistinguishable between the SLC and MLC groups (1.3% and 0.9%, respectively; t(48) = -.65, p>.52). Mean RTs associated with the Unfamiliar New items were compared with those associated with the Familiar New and Familiar Related items in order to obtain an estimate of the familiarity effect. Matching task. In the last task participants were asked about the pseudo-objects they learnt and their conceptual association with familiar objects. Both groups were very accurate in identifying which real (Familiar Related) objects matched the learnt (Unfamiliar Old) items (an average of 1.76 errors over 40), whilst variance remained small (SD = 0.88).
No significant differences in error rates were found between the SLC group and the MLC group (t(48) = .78, p>.44). Hence, the learning performance and the identification of meaning-related real-life objects were not hindered (or improved) by language context of the learning phase (see Fig 3). Interim summary We tested the potential differences in learning, consolidation and integration of new information in adult balanced bilinguals that could be caused by language mixing during new information acquisition as compared to a monolingual learning context in which a single language is used. None of the measures obtained supported the idea that participants in a single-language learning context outperform those in a mixed-language context. No significant differences were observed in the measures associated to the learning phases or in the subsequently obtained direct and indirect indices of memory consolidation. Given that the motivation of this study was to test a situation occurring in formal schooling, in Experiment 2 two groups of bilingual children attending a bilingual school were tested. Based on the findings from Experiment 1, and considering that language switching has been shown to occur spontaneously in very young children too (e.g., [26]) as well as recent evidence suggesting that the acquisition of a new skill is relatively similar for children acquiring it in a single-language and in multilingual learning contexts (e.g., [27]), we did not expect any specific advantage for the SLC as compared to the MLC group. Experiment 2: Children Methods Ethics Statement. All the children enrolled in the experiment provided parental written informed consent. Prior to the experiment, parents received an informative letter and they signed the written informed consent form. They were appropriately informed regarding the basic procedure of the experiment, according to the ethical commitments established by the BCBL Scientific Committee and by the BCBL Ethics Committee that approved the experiment (Approval date: 19/03/2014; Approval reference: 19314J). Participants. Fifty children (mean age 11.44, 31 females) who were attending a Basque-Spanish bilingual-immersion program on a 50-50% exposure basis since age 3 took part in the experiment. Children were randomly assigned to the SLC and MLC groups, and mimicking the procedure used with adult participants in Experiment 1, a series of control tasks was employed to validate the between-group matching. Pairwise comparisons showed that children who were randomly assigned to the SLC and MLC groups did not differ in age, gender, IQ, or the magnitude of the flanker and Simon effect. Also, their proficiency in Basque and Spanish evaluated on the basis of a 30-item multilingual picture naming test did not differ significantly (see [39]). In sum, the two groups did not differ in any of the dimensions tested (all ps.>.26, see Table 3.). Materials and Procedure. Materials and procedures were identical to those used in Experiment 1. Learning (Test A and Test B) In Test A, participants' error rates did not differ between language contexts (t(48) = -.93, p>.36), showing similar learning curves across groups. In a similar vein, the two groups did not differ in the number of items seen during the learning phase (including the number of repeated items due to the repetition of a block after a given mistake) (t(48) = -.88, p>.38; see Table 4). In Test B, no significant differences between the SLC and the MLC groups were found (t(48) = .24, p>.24). Old-New judgment task. 
Following the same rationale as in Experiment 1, we first compared groups according to their responses to the Unfamiliar Old items (i.e., the items requiring an "Old" response). Response times did not differ between learning contexts (t(48) = -.42, p>.68), and in the same vein, the two groups did not differ in accuracy in the Unfamiliar Old condition (t(48) = -.23, p>.82). Then, an ANOVA was conducted including all the conditions requiring a "New" response (i.e., Unfamiliar New, Familiar New and Familiar Related), in the same way as in Experiment 1, following a 3 × 2 (Condition × Learning Context) design (see Fig 4). The ANOVA on the RT data showed a main effect of Condition (F(2,96) = 13.50, p<.01), but just as in Experiment 1, the main effect of Learning Context was not significant (F(1,48) = .14, p>.71) and the interaction between the two factors was not significant (F(2,96) = .65, p>.53). The ANOVA on the error data revealed that the main effect of Condition was not significant (F(2,98) = 1.52, p>.22). The main effect of Learning Context was not significant (F(1,48) = 2.90, p>.1), nor was the interaction between the two factors (F<1). Next, we focused on the difference between responses in the Familiar Related and Familiar New conditions (i.e., the false memory effect). RTs were significantly shorter in the Familiar New than in the Familiar Related condition (720 ms vs. 748 ms, respectively; t(49) = 4.88, p<.01). Importantly, the magnitude of the false memory effect was highly similar in the SLC and MLC groups (an effect of 27 ms in both groups; t<1). Finally, as in Experiment 1, mean RTs associated with the Unfamiliar New items and those associated with the Familiar New and Familiar Related items were compared, testing for a familiarity effect. Unfamiliar New items were responded to significantly more slowly than Familiar New items (754 ms and 720 ms, respectively; t(49) = 4.22, p<.01), but did not differ significantly from Familiar Related items (748 ms; t<1). Matching task. Both SLC and MLC groups showed high accuracy rates (see Fig 5), with a mean error rate of 11% (SD = 7.71), corresponding to an average of 35.6 hits out of 40. There was no significant difference in accuracy between Learning Contexts (t(48) = .82, p>.42). Interim summary Results from Experiment 2 fully replicated those obtained in Experiment 1. There were no significant differences between the single-language and the mixed-language learning context in the learning trends or in direct and indirect measurements of learning and consolidation. These results show that bilingual children acquire concepts equally efficiently irrespective of the language context (separate or mixed) used for tuition. Thus, these results provide evidence against the one subject-one language rule commonly applied in bilingual educational contexts. General Discussion The aim of this study was to test whether mixing languages during the process of learning new concepts hinders concept acquisition and the consolidation in semantic memory of the learnt concepts. Bilingualism has long been considered a delaying factor in child development, and its possible detrimental impact in contexts such as schooling and parenting has been feared. Thanks in part to scientific evidence, such misconceptions have gradually been corrected.
In contrast to earlier studies showing significant differences in vocabulary size, word production and comprehension between bilinguals and monolinguals [54][55], recent studies have suggested that these differences are not reliable and that they do not point to a 'bilingual disadvantage' [56][57]. Hence, bilingual children appear to reach developmental and linguistic milestones in their two languages at a similar pace, and their developmental trajectory does not dramatically differ from that of monolingual children. Unfortunately, the belief that mixing languages during learning may be detrimental to the learning process is a misconception that still needs to be dispelled on the basis of solid scientific data. Here we simulated essential stages of the process involved in learning new concepts based on the presentation of novel objects paired with definitions while manipulating the number of languages used during the learning phase (mixed-language or single-language learning contexts). In Experiment 1, adult balanced bilinguals were tested in a series of experimental paradigms aimed at exploring 1) differences in the learning phase depending on the number of languages used during the process, and 2) differences in the consolidation and integration of the learnt concepts in semantic memory as a function of the number of languages used during learning. Importantly, none of the indices obtained favored the single-language over the mixed-language context. Hence, we found negligible differences between the two contexts during concept acquisition, showing that language mixing does not represent any additional difficulty for concept learning in balanced bilingual adults. More importantly, indirect measures of learning and integration in semantic memory, namely the false memory effects evoked by the familiar objects that were semantically related to the learnt objects, also showed parallel (and successful) integration in the two learning contexts. Finally, a direct measure of learning and integration, based on an explicit association between the learnt novel objects and existing known objects overlapping with them in features and use, also showed no differences between groups. Thus, results from Experiment 1 suggest that a multilingual learning context in which two languages are mixed during instruction does not have a negative impact on the learning process itself, nor does it hinder the connection of learnt concepts with pre-existing semantic representations. Together, these data provide evidence against assumptions in support of the one subject-one language rule in formal schooling. In Experiment 2, we directly explored whether the same conclusions would stand in the case of bilingual children attending a bilingual school where two languages are used on a daily basis (but only one language is used for instruction in a given subject). Overall, results from the child samples tested in Experiment 2 closely replicated the outcomes of Experiment 1 (adults). There were no significant differences in the learning trends as a function of the number of languages used during concept acquisition, and both groups of children (SLC and MLC) performed equally well in the experimental paradigms designed to explore the extent to which the learnt concepts had been integrated in semantic memory.
Taken together, results from Experiment 2 demonstrate that the simultaneous use of two languages rather than one during concept learning does not increase learning difficulty in balanced bilingual children attending bilingual schools. The absence of differences in the acquisition of new concepts by language-mixing and single-language methodologies could plausibly be interpreted as a consequence of automatic and effortless mental translation processes. The transition from one language to another takes place within a few tens of milliseconds in the case of balanced bilinguals, and there is now tangible evidence that the two languages of a bilingual individual are active even if only one of them is required for the task at hand [32][33][34][35][36][37][38]. According to this view, one could tentatively suggest that learners in both mixed-language and single-language contexts activate the lexical representations from their two languages in parallel, irrespective of the number of languages involved in the learning process, thus leading to highly similar effects in the two learning contexts (given that access to semantic representations is equally effective in the two languages). In the same vein, models of bilingual lexico-semantic organization such as the Revised Hierarchical Model (RHM; see [44,45,58]) suggest that at sufficiently high levels of proficiency, bilingual individuals access language-independent semantic representations efficiently regardless of input language. To date, the very few studies systematically exploring the impact of monolingual vs. bilingual education have mainly focused on the differences between bilingual schooling programs (i.e., bilingual education) and fully monolingual schooling programs (see [59]). Several meta-analyses have shown that bilingual education is consistently superior to fully monolingual approaches for second language learning (e.g., [60]). However, these studies exclusively focused on the benefits associated with bilingual schooling programs in which the two languages are not intermixed within a single-subject context (i.e., programs following the one subject-one language rule), and little was known about the differential impact of using one vs. two languages simultaneously for tuition in bilingual schools. The current study is, to the best of our knowledge, the first to investigate this issue, and results support the view that young and older balanced bilinguals learn in a similar manner when immersed in single-language and mixed-language learning contexts. These results invalidate the premises of the one subject-one language rule and indicate that comparable concept acquisition and integration can be achieved by balanced bilingual learners irrespective of the number of languages used during tuition. It is worth noting that a mixed-language learning context is more akin than a single-language learning context to the linguistic reality of multilingual societies, where more than one language is used on a regular basis (i.e., spontaneous language switching). Besides, the simultaneous use of two languages during learning increases the likelihood of balanced exposure, promoting parallel development of linguistic abilities. (Note that this may indeed represent a benefit for children who are still developing their linguistic skills). The current study only focused on balanced bilingual adults and children.
Future research will determine whether the same results can be obtained with samples of non-balanced bilinguals who are dominant in one of their languages, given that the cognitive effort associated with mental translation is different in balanced and imbalanced bilinguals (e.g., [40,61]). We acknowledge that future studies should also test whether parallel results could be obtained with different language combinations. Basque and Spanish are markedly different at the lexical and syntactic level, but they share phonology and orthography to a large extent (i.e., they are both alphabetic languages with similar grapheme-to-phoneme mappings). Hence, it would be interesting to explore the consequences of language mixing during learning in language combinations with different degrees of linguistic distance. Furthermore, and in contrast to naturalistic bilingual code-switching, where words of the two languages can be blended together during natural speech in a single sentence, for the sake of simplicity we constrained our materials so that language mixing occurred at the whole-sentence level. Further research is needed in order to elucidate whether similar results could be obtained using different forms of code-switching (e.g., within-sentence). Finally, it is worth highlighting that the type of information underpinning learning in the current experiment was (a) simple (so that it would be compatible with learning by school children), and (b) exclusively based on semantic extension (i.e., based on the functional connection of novel objects with previously known ones). Follow-up studies are required to examine whether the acquisition of expert knowledge and/or of concepts disconnected from pre-established representations is likewise unaffected by language mixing, or perhaps even improved by it. In sum, this study shows that there is currently no evidence supporting educational practices that exclude mixing languages during the learning of new concepts. These results establish that the final stage of semantic integration (which we consider to be the goal of explicit concept learning) is similarly achieved irrespective of the number of languages used to present or define the new concepts. In other words, and at least in a context where there is good and equal mastery of two languages, language mixing is not detrimental to the learning process and does not incur quantifiable learning or integration costs as compared to single-language learning. In the absence of any negative effect of language mixing, such as a delay or poorer learning performance, only its positive outcomes remain to be weighed. Firstly, parallel exposure to and use of two languages represent new opportunities for the development of linguistic competence. Secondly, the simultaneous use of two languages makes the learning experience more ecological with respect to the way in which bilingual societies naturally function. Supporting Information S1 Appendix. Definitions in Spanish (red) and in Basque (green) used in the learning phase for the SLC and MLC groups. The items are numbered in the same order as in S2 Appendix. (PDF) S2 Appendix. Pictures used in Experiments 1 and 2. The items are numbered in the same order as in S1 Appendix. (PDF) European Research Council. G.T. was partially supported by a Mid-Career Fellowship from the British Academy.
9,556.4
2015-06-24T00:00:00.000
[ "Linguistics", "Education" ]
Ultralow voltage operation of biologically assembled all carbon nanotube nanomesh transistors with ion-gel gate dielectrics The demonstration of field-effect transistors (FETs) based entirely on single-walled carbon nanotubes (SWNTs) would enable the fabrication of high-on-current, flexible, transparent and stretchable devices owing to the excellent electrical, optical, and mechanical properties of SWNTs. Fabricating all-SWNT-based FETs via simple solution process, at room temperature and without using lithography and vacuum process could further broaden the applicability of all-SWNT-FETs. In this work, we report on biologically assembled all SWNT-based transistors and demonstrate that ion-gel-gated network structures of unsorted SWNTs assembled using a biological template material enabled operation of SWNT-based transistors at a very low voltage. The compatibility of the biologically assembled SWNT networks with ion gel dielectrics and the large capacitance of both the three-dimensional channel networks and the ion gel allowed an ultralow operation voltage. The all-SWNT-based FETs showed an Ion/Ioff value of >102, an on-current density per channel width of 2.16 × 10−4 A/mm at VDS = 0.4 V, and a field-effect hole mobility of 1.12 cm2/V · s in addition to the low operation voltage of <−0.5 V. We envision that our work suggests a solution-based simple and low-cost approach to realizing all-carbon-based FETs for low voltage operation and flexible applications. Single-walled carbon nanotubes (SWNTs) are one of the most promising electronic materials for high-performance field-effect transistors (FETs) 1,2 . Field effect transistors (FETs) based entirely on carbon nanotubes (CNTs) have several advantages such as simple device design, improved contact at the channel-to source(S)/ drain(D) interface, good optical transmittance and excellent mechanical flexibility [3][4][5][6][7][8][9] . The use of CNTs as electrodes instead of metallic electrodes provides not only mechanical flexibility and transparency but also good electrical contacts to an CNT channel due to ideally the similar work function of the CNT electrodes and channel 7,10 . Improved electrical contact at the channel-to-S/D interface results in a small contact resistance, which could lead to high on-current 3 . Various approaches such as transferring, printing and self-assembly process have been developed to fabricate high performance CNT-based transistors 4,5,[7][8][9][11][12][13][14][15][16] . Many of the high-performance all CNT-based FETs have been demonstrated by employing semiconducting-enriched CNTs for the semiconducting channel 3,4,6,8,9 . Recently, unsorted CNTs have been also successfully employed for the fabrication of high-performance all CNT-based FETs showing I on /I off > 10 5 by controlling the network density of CNT channels around the percolating threshold 5,16 . However, previously reported approaches require appropriate surface treatments of CNTs such as acid treatment and heat treatments to enhance electrical conductivity of CNTs or remove residual surfactants. Fabricating all-CNT-based FETs via simple solution process, at room temperature and without using lithography and vacuum process could further broaden the applicability of all CNT-based FETs. Previously, our group showed that aSWNT-network film could be successfully assembled in an aqueous solution using a biological template material under a hydrodynamic process, such as dialysis, to produce so called a SWNT nanomesh 17 . 
Such hydrodynamic process produced a free-standing SNWT-nanomesh film by releasing the nanomesh from the dialysis membrane 17 . Simple transfer of the free-standing nanomesh of unsorted SWNTs (U-SWNTs) using pre-patterned mask successfully produced channels for FETs 17,18 . In this scheme, no chemical or heat treatment is required either to remove surfactants used for dispersing CNTs or to dissolve the supporting structure onto which the CNT film was deposited. However, the utilization of oxide gate dielectric layers such as SiO 2 and HfO 2 in previous works limited the operation voltage and/or required the vacuum process. Moreover, only metallic electrodes were employed and thus the possibility of fabricating all SWNT-nanomesh FET has not been demonstrated. As a proof-of-concept for fabricating all SWNT-FET via simple solution process at room temperature, ion-gel could be employed as gate dielectric since polymer/ionic liquid composite gels are attractive dielectric materials because of their high specific capacitance, excellent ionic conductivity, printability, and flexibility 11,12,15,[19][20][21][22][23] . Here, we report on biologically assembled all SWNT-based transistors and demonstrate that the ion gel-gated all nanomesh-FETs operate at ultralow saturation voltages and show decent I ON /I OFF and on-current density values. Network structures of unsorted SWNTs with tunable sheet resistance were assembled using a biological template material and employed for contact electrodes and channels depending on their resistance. The ion-gel dielectric layer was compatible with the biologically assembled SWNT networks in terms of wettability and chemical stability and the ion-gel-gated SWNT channels exhibited a very high total capacitance. The three-dimensional network structures of the SWNT channels and their good wettability with the ion gel were responsible for the high total capacitance, thus enabling transistor operation at ultralow saturation voltages. We also show that an amine-rich polymer layer can be used to further shift the threshold voltage of the SWNT channel. Moreover, using SWNTs-nanomesh as S/D electrodes to contact nanomesh channels was found to increase the on-current value about 15 fold on average compared to using Au electrodes. The transistors based entirely on the SWNTs showed a I ON /I OFF value of >10 2 , an on-current value per channel width of 2.16 × 10 −4 A/mm at V DS = 0.4 V, a field-effect hole mobility of 1.12 cm 2 /V · s, and an ultralow operation voltage of <−0.5 V. We envision that our simple and low-cost method to fabricate all-SWNT-based high-performance FETs will provide a valuable route to future electronic devices that require low-voltage operation, mechanical flexibility and transparency. Results Ion-gel-gating of biologically assembled SWNT network channel-based FETs. Figure 1 schematically illustrates the process to fabricate the all-SWNT-based ion-gel-gated FETs. The samples were fabricated on a transparent and flexible PET substrate. A nanomesh of SWNTs and M13 phage assembled in an aqueous solution were transferred using a pre-patterned stencil mask to form channels and source, drain, and gate electrodes. The M13 phage used is a filamentous biological material that has strong binding affinity toward SWNTs on its body surface and the SWNTs and the M13 phage bind each other along their lengths 17 . 
The ion-gel solution was dropcast onto the device with the channel region and all of the electrodes having been patterned, and dried to form a gate dielectric layer. The source and drain electrodes were passivated using the stencil mask before forming the ion-gel film to minimize leakage current. A photograph of the final device and a scanning electron micrograph of the channel region (circled red in the photograph) are shown in Fig. 1b. It is noted that the channel is transparent (Fig. S1). The length and width of the channel were designed to be 200 μm and 400 μm, respectively, and the SEM image confirmed that the size of the channel was consistent with the mask design. To fabricate the all-SWNT-based FETs, we chose U-SWNT:p8GB#1 molar ratios of 2:16 and 32:4 for the channel and S/D electrodes, respectively. Low content of the SWNTs in the nanomesh was previously shown to be essential to provide the channel with high on/off ratio, while high concentration was required to achieve low electrical resistance of the electrodes 17,18 . A scanning electron micrograph of each nanomesh is shown in Fig. 2 along with its sheet resistance (R S ) value. The sharp and high-contrast lines correspond to the SWNTs (Fig. S2). Note that the insulating and soft biological material is not clearly visualized in the SEM image. The SEM images clearly show the SWNTs to be well dispersed in the nanomesh film. The SWNTs are less dense in the nanomesh channel than those in the nanomesh electrode. The sheet resistances of the channel nanomesh and electrode nanomesh were 4.67 × 10 4 and 96.98 Ω/sq, respectively. The higher resistance of the channel nanomesh was attributed mainly to its lower density of SWNTs dispersed in insulating biological template material. The thickness of the channel nanomesh and electrode nanomesh were 300 nm and 800 nm, respectively. The large thickness of the nanomesh suggests the three-dimensional nature of the SWNT network. The structure of the ion gel-based gate dielectric employed in this study is shown in Fig. 3a. A triblock copolymer, poly(styrene-block-ethylene oxide-block-styrene) (PS-PEO-PS), was dissolved in an ionic liquid, 1-ethyl-3-methylimidazolium bis(trifluoromethylsulfonyl)imide ([EMIM][TFSI]). The PS-PEO-PS triblock copolymers formed well-defined physical gels through non-covalent association of the PS blocks 22,24,25 . An ion-gel film was readily formed by drop-casting an acetonitrile solution containing the [EMIM][TFSI] ionic liquid and PS-PEO-PS copolymer onto target substrates. We prepared two ion gel solutions containing different concentration of PS-PEO-PS triblock copolymer, 4% w/v and 7% w/v. The capacitance of the ion gel was measured in metal-insulator-semiconductor (MIS) structures as shown in Fig. 3b. It was measured as a function of frequency (from 10 to 10 6 Hz) using an electrochemical impedance analyzer (Versastat, Princeton Applied Research). The gel thickness was approximately 300 μm. The low concentration of PS-PEO-PS triblock copolymer resulted in a higher capacitance, presumably due to the more effective formation of electrical double layer by the ionic liquid (Fig. S3). The capacitance of the ion gel layer having PS-PEO-PS at 4% w/v was measured to be ~81.90 μF cm −2 at 10 Hz. This capacitance value was comparable to or slightly higher than previously reported values 22,24,25 , suggesting the high quality of the synthesized ion gel. 
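The paper reports capacitance values obtained with an impedance analyzer but does not spell out the conversion from impedance to areal capacitance. A common extraction, assuming a simple series-RC interpretation of the electrical double layer, is sketched below; the function and variable names, as well as the numerical example, are illustrative assumptions rather than the authors' procedure.

```r
# Sketch (R): areal capacitance from impedance data, assuming a series-RC model.
# f: frequency (Hz), z_im: imaginary part of the measured impedance (ohm),
# area_cm2: electrode area (cm^2). Names and numbers are hypothetical.
capacitance_uF_per_cm2 <- function(f, z_im, area_cm2) {
  c_farad <- 1 / (2 * pi * f * abs(z_im))   # C = 1 / (2*pi*f*|Z''|)
  1e6 * c_farad / area_cm2                  # convert F to uF, normalise by area
}

# Example: |Z''| ~ 19,400 ohm at 10 Hz over a 0.01 cm^2 pad gives ~82 uF/cm^2,
# close to the ion-gel value quoted in the text (illustrative numbers only).
capacitance_uF_per_cm2(f = 10, z_im = -19400, area_cm2 = 0.01)
```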
The effect of the ion -gel-based dielectric layer on the performance of SWNT-FETs was investigated with metallic S/D/G electrodes (Au) first. The output characteristics (I DS vs. V DS ) and the transfer characteristics (I DS vs. V G ) of a representative ion-gel-gated FET from the 2:16 nanomesh channel with Au electrode are shown in Fig. 4a,b. The highest I ON /I OFF value was found to be ~3.94 × 10 2 at V DS = 0.2 V. The I ON and I OFF values at this voltage were observed to be 2.90 × 10 −6 A and 7.35 × 10 −9 A, respectively. The threshold voltage, V th , was 0.2 V (Fig. S4). In our previous studies, the operation voltage of the local bottom-gated nanomesh FET with a HfO 2 dielectric layer (0.44 μF cm −2 ) was <5 V and this value was still a greatly improved one compared to the ~60 V value of the bulk SiO 2 dielectric layer (0.012 μF cm −2 )-gated nanomesh FETs 17,18 . The FET showed an anticlockwise hysteresis as indicated by arrows in Fig. 4b. The voltage difference between gate voltages needed to induce an average of the maximum and minimum drain current for the forward and reverse sweep directions was estimated to be 0.28 V at V DS = 1.0 (Fig. S5). The noticeable hysteresis is presumably due to the adsorbed water and oxygen molecules by the hydrophilic biological material in contact with the SWNTs since the nanomesh was assembled in aqueous solution 26,27 . The ultralow threshold voltage of ~0.2 V of the ion-gel-gated nanomesh FETs suggested that the capacitance of the nanomesh channel of the biologically assembled SWNTs was large enough to effectively exploit the large capacitance of the ion-gel dielectric and that the nanomesh channel was highly compatible with the ion-gel dielectric. To confirm this, the total capacitance of the nanomesh-based ion-gel gated transistor was measured and shown in Fig. 4c. The total capacitance was measured to be ~48.70 μF cm −2 at 10 Hz (Fig. 4c) and the capacitance of the SWNT nanomesh was calculated to be ~120.14 μF cm −2 (see Method section), confirming the large capacitance of the SWNT nanomesh channel. The wetting angle of the ion-gel on the nanomesh channel was measured to be 35.72 degree (Fig. S6), implying the good wettability of the ion-gel with the nanomesh. In order to examine the chemical compatibility of the nanomesh in contact with the ion-gel, the on-current and off-current levels of the ion-gel gated nanomesh-FET was compared with those obtained from devices stored in ambient condition for ten days (Table S1). The current levels did not notably change, suggesting the chemical compatibility of the nanomesh with the ion-gel. The three-dimensional network structure of the nanomesh channel and the compatibility with the ion gel were presumably the main factors responsible for the large channel capacitance and thus enabled highly effective gating of the SWNT-FETs. It is noted that the I ON /I OFF value of the ion-gel-gated FET was also much higher than that of the local bottom-gated FETs or back-gated FETs 17, 18 . Shift of the threshold voltage. The threshold voltage was further shifted by introducing an electron donating polymer layer into the SWNT nanomesh channel. Polyethyleneimine (PEI) was selected in this work since a PEI layer has been demonstrated to serve as an electron-donating agent for SWNTs. The PEI used in experiments was a highly branched polymer with an average molecular weight of about 800, with about 25%, 50%, and 25% of its amino groups being primary, secondary, and tertiary amines, respectively 28 . 
Figure 5 compares the typical transfer characteristics of the nanomesh-FETs with Au electrodes at V DS = 0.2 V before and after PEI functionalization of the nanomesh channel. The transfer curves clearly showed that the threshold voltage was shifted toward the more negative voltage direction as a result of the functionalization, with a fully coated nanotube showing a shift of approximately −1.40 V. The shift in the threshold voltage of the transistors indicated that the transition of the majority carrier type from hole to electron became more facile, confirming the electron donation effect of the PEI layer 28,29 . Also note that the electron drive current, I ON , improved by one order of magnitude. These results could be also attributed to the reduced Schottky barrier (SB) between the electrodes and the semiconducting channels via interface dipole moment enabled by the electron transfer from the dopants to the channels 29,30 . The insets of Fig. 5 schematically depict the band diagram of the SWNT-FETs before (pristine) and after PEI functionalization of the device. The insets show the reduced SB for electron injection, an increased SB for hole injection, and band bending due to the electrons transferred from the dopants to the SWNTs. These results highlight the compatibility of the SWNT nanomesh with a variety of functional polymers that has been utilized for tuning bare SWNTs. Ultralow voltage operation of all-SWNT-based FETs. Ion-gel-gated FETs based on entirely on the nanomeshes were fabricated according to the processes illustrated in Fig. 1. Nanomeshes with different compositions were employed. In particular, a highly conductive nanomesh (with a U-SWNT:p8GB#1 molar ratio of 32:4) was employed as the S/D/G electrodes. The transfer characteristics and output characteristics of a representative all-nanomesh-based ion-gel-gated FET are shown in Fig. 6a,b. The average I on /I off value from three different devices was found to be ~(1.60 ± 1.06) × 10 2 at V DS = 0.4 V. The average I on and I off values were observed to be (8.64 ± 4) × 10 −5 A and (7.4 ± 3.3) × 10 −7 A, respectively. The on-current here was larger than that for the Au electrodes at V DS = 0.4 by ~15 folds on average (Table S2) while exhibiting a similar I on /I off value. The large increase in the on-current is ascribed to the improved contact resistance by the SWNT-nanomesh S/D electrodes compared to the Au electrodes 3,31 . Since the resistance of the nanomesh channel is relatively low due to the relatively high network density (transmittance at 550 nm is ~80%, Fig. S1) compared to sparse-SWNT channel used for high-performance all CNT FETs 5, 16 , the reduction of the contact resistance could readily increase the on-current. The increased off-current level is presumably due to the increased leakage current since it is possible that the thick SWNT network electrodes could not be completely blocked by the passivation. The hole mobility, μ h , was estimated to be 1.12 cm 2 /Vs at V DS = 0.4 V (Supplementary equation 1). This mobility value is much lower than the one of the state-of-the-art CNT FETs (~1 057 cm 2 /Vs) 5 but at the same order as printed CNT FETs using unsorted CNTs 16 . The low mobility of our device could be ascribed to the phage present in the channel as in the hybrid CNT channel 32,33 . 
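The hole mobility above is derived from the paper's Supplementary equation 1, which is not reproduced in this text. As a hedged stand-in, the sketch below uses the standard linear-regime FET expression, mobility = g_m·L / (W·C_i·V_DS), with the channel geometry and capacitance quoted in the text; the transconductance value is a made-up placeholder chosen only so the output lands near the reported ~1.1 cm2/Vs.

```r
# Sketch (R): linear-regime field-effect mobility estimate (standard textbook form,
# used here as a stand-in for the paper's Supplementary equation 1).
# gm: transconductance dI_DS/dV_G (A/V), L_cm, W_cm: channel length/width (cm),
# Ci: gate capacitance per area (F/cm^2), Vds: drain-source voltage (V).
fet_mobility <- function(gm, L_cm, W_cm, Ci, Vds) {
  gm * L_cm / (W_cm * Ci * Vds)             # mobility in cm^2/(V s)
}

# Geometry and capacitance from the text (L = 200 um, W = 400 um,
# C_total ~ 48.7 uF/cm^2, V_DS = 0.4 V); gm = 4.4e-5 A/V is a placeholder.
fet_mobility(gm = 4.4e-5, L_cm = 200e-4, W_cm = 400e-4, Ci = 48.7e-6, Vds = 0.4)
```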
The high on-current density per channel width of 2.16 × 10 −4 A/mm even at the low mobility could be ascribed to the better S/D contact with channel, high SWNT network density of the nanomesh channel and the high gate capacitance 3,5 . The threshold voltage, V th , was estimated to be ~0.44 V (Fig. S7). The all-SWNT-FET also showed an anticlockwise hysteresis as indicated by arrows. The larger hysteresis (Fig. S8) compared to Fig. 4(b) could be due to the more phage in the nanomesh electrodes in contact with the channel and/or the wider scanning voltage range 3 . Figure 7 summarizes a comparison of the performance of the all SWNT nanomesh-based FETs with other all-CNT-based FETs in terms of the I on /I off value and the saturation voltage 3-6, 8, 9, 15 . Due to the limited number of reported works on FETs entirely based on unsorted CNTs, devices fabricated using semiconducting-enriched channels have been also included. The saturation voltage was estimated from the inflection point in the transfer characteristics plotted in logarithmic scale. Relative to these other FETs, the device reported in this work exhibited the lowest saturation voltage and a decent I on /I off value. It is worth noting that our device was fabricated without relying on lithographical method and chemical and heat treatment. Discussion We have demonstrated all-SWNT-nanomesh-based FETs by employing SWNT nanomeshes of different compositions for channels and electrodes. The ion gel-gated nanomesh-FETs exhibited an ultralow saturation voltage (|V sat | < |−0.5 V|). The biologically assembled nanomesh of SWNTs possessed a very high specific capacitance, being comparable to that of ion-gel dielectrics. Thus, the field effect was effectively applied onto the channel of the FETs and accordingly enabled ultralow voltage operation. The employment of SWNT-nanomesh S/D electrodes instead of Au electrodes increased the on-current by ~15 folds on average, producing on-current density per channel width of 2.16 × 10 −4 A/mm. The all-SWNT-based FETs showed an I ON /I OFF value of >10 2 and a low operating voltage of less than −0.5 V. The solution-based and room-temperature method to fabricate all-SWNT-based FETs will provide a facile route to realizing electronic devices that require low-voltage operation, mechanical flexibility and transparency. Other types of gate dielectric materials could be further explored for high-speed or low-power device applications. Fabrication of the all-nanomesh-based ion-gel-gated FETs. The nanomesh made up of single-walled carbon nanotubes (SWNTs) was assembled according to the previously reported method 17 . Briefly, an as-received unsorted SWNT solution (Superpure, from NanoIntegris Inc.) was surfactant-exchanged by a sodium cholate solution (anionic surfactant, 2% w/v in deionized water). The SWNTs stabilized by the sodium cholate surfactant were mixed with the p8GB#1 M13 phage showing strong binding affinity toward SWNTs on its body surface. Various SWNT:p8GB#1 molar ratios were used, in particular 32:4 and 2:16 for the assembly of source/drain electrodes and the channel, respectively. The mixed solution was then put into a dialysis membrane and dialyzed against deionized water with frequent changing of the dialyzing solution. After about 24 h, the dialyzed membrane bag was taken into a container filled with water and then the dialysis membrane was removed to produce a large-area nanomesh film floating in water. 
The nanomesh made using the 2:16 SWNT:p8GB#1 molar ratio was transferred onto a transparent flexible poly(ethyleneterephthalate) (PET) substrate using a pre-patterned stencil mask. The transferred nanomesh was left to dry in air, and then the stencil mask was lifted off to produce channels. Then, an additional nanomesh made using the 32:4 SWNT:p8GB#1 molar ratio was deposited using the stencil mask to form the source/drain (S/D) contact electrodes and the gate (G) electrode. The length and width of the channel were set to 200 μm and 400 μm, respectively. The surface of the S/D electrodes was passivated using a cyanoacrylate adhesive. For the PEI-functionalized FETs, 5 μL of the PEI solution (average molecular weight ~800, density ~1.050 g/mL, Sigma-Aldrich) was drop-cast onto the SWNT channel and air-dried for 24 h. Then, the ion-gel solution was drop-cast on the channel and the exposed gate region, and baked at 70 °C for 5 min. Preparation of the ion gel dielectric film. Characterizations. The sheet resistance of the nanomesh was measured using the van der Pauw method (Hall effect measurement system, HMS-3000). The nanostructures of the nanomesh were examined using scanning electron microscopy (SEM) and transmission electron microscopy (TEM). For SEM, samples were imaged in their native condition (no conductive coating applied) at an acceleration voltage of 20 kV using a JSM-6500F field emission scanning electron microscope. For TEM, the nanomeshes were transferred to a TEM grid (QUANTIFOIL 2 μm circular holes, TedPella Inc.) and dried at room temperature. TEM was performed using a Quantum 966 of FEI Titan, operated at 300 kV. For the contact angle measurement of the ion-gel on the nanomesh channel (2:16), 4 μL of the ion-gel was dropped onto the nanomesh film in ambient condition and analyzed using a contact angle measurement system (Phoenix 300, SEO Co. Ltd.). The capacitance of the ion gel film and the ion gel-gated transistors was measured using an electrochemical impedance analyzer (Versastat, Princeton Applied Research). Impedance measurements were performed using a two-electrode system. For the ion-gel-gated FETs, the source electrode was designated as the working electrode (WE) and the gate electrode was designated as the counter electrode (CE). Impedance spectra were recorded over a frequency range from 1 MHz to 0.1 Hz, at zero dc potential, with an ac amplitude of 10 mV. The transfer characteristics (the drain current (I DS ) vs. gate voltage) of the ion-gel-gated transistors were measured by biasing from V G = −3 V to 3 V at a scanning speed of 70 mV/s using an Agilent 4156 C at room temperature. The output characteristics (I DS vs. the source-drain voltage (V DS )) were measured by scanning V DS from 0 V to 1 V at a scanning speed of 70 mV/s. Calculation of the nanomesh capacitance. The ion-gel capacitance and the nanomesh (network-structured composite of the SWNTs and M13 phage) capacitance were connected in series. Therefore, 1/C_total = 1/C_nanomesh + 1/C_ion-gel (1). Note that, according to this equation, the component (i.e., nanomesh or ion gel) having the smaller capacitance dominates the overall capacitance, C_total. However, when the nanomesh and ion gel have similar capacitance values, both values should be considered. The overall capacitance and the ion gel capacitance were measured to be ~48.70 μF/cm 2 and ~81.90 μF/cm 2 at 10 Hz, respectively. The nanomesh capacitance was estimated from these values using equation (1).
The capacitance of the biologically assembled nanomesh with a U-SWNT:p8GB#1 molar ratio of 2:16 was calculated to be ~120.14 μF/cm 2 .
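The series-capacitance relation in equation (1) can be checked directly with the reported values; the short sketch below reproduces the ~120 μF/cm2 estimate for the nanomesh from the two measured capacitances.

```r
# Series capacitors: 1/C_total = 1/C_nanomesh + 1/C_iongel  (equation (1) in the text).
# Solve for the nanomesh capacitance from the two measured values (uF/cm^2, at 10 Hz).
nanomesh_capacitance <- function(c_total, c_iongel) {
  1 / (1 / c_total - 1 / c_iongel)
}

nanomesh_capacitance(c_total = 48.70, c_iongel = 81.90)
# -> ~120.1 uF/cm^2, matching the ~120.14 uF/cm^2 reported above
```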
5,343.8
2017-07-20T00:00:00.000
[ "Materials Science", "Engineering", "Physics" ]
Towards Predicting Post-editing Effort with Source Text Readability: An Investigation for English-Chinese Machine Translation This paper investigates the impact of source text readability on the effort of post-editing English-Chinese Neural Machine Translation (NMT) output. Six readability formulas, including both traditional and newer ones, were employed to measure readability, and their predictive power for post-editing effort was evaluated. Keystroke logging, self-report questionnaires, and retrospective protocols were applied to collect post-editing data for a general text type from thirty-four student translators. The results reveal that: 1) readability has a significant yet weak effect on cognitive effort, while its impact on temporal and technical effort is less pronounced; 2) high NMT quality may alleviate the effect of readability; 3) readability formulas have the ability to predict post-editing effort to a certain extent, and newer formulas such as the Crowdsourced Algorithm of Reading Comprehension (CAREC) outperformed traditional formulas in most cases. Apart from readability formulas, the study shows that some fine-grained reading-related linguistic features are good predictors of post-editing time. Finally, this paper provides implications for automatic effort estimation in the translation industry. Introduction In the translation industry, machine translation post-editing (MTPE) has not only become feasible but also essential thanks to the development of neural machine translation (NMT) and the emerging demand for language services. In academic settings, MTPE has also been recognised as a cost-efficient workflow. Several studies on given language pairs and contexts showed that MTPE generally makes translation faster (O'Brien 2007; Lu and Sun 2018), and that the final product quality is equivalent or even better compared with from-scratch translation (Green et al. 2013; Jia et al. 2019a). However, certain issues, such as the development of the pricing model and the evaluation of the cost-effectiveness of MTPE, remain to be addressed. While MT quality is considered as evidence, the amount of MTPE effort, "not only the ratio of quantity and quality to time but also the cognitive effort expended" (O'Brien 2011: 198), should also be a prime concern, since it focuses more on the interaction between translators and MT output (Herbig et al. 2019). Although various factors have been found to be more or less correlated with MTPE effort, the effects of different factors, particularly source text characteristics and MT quality, have often been conflated, since many experimental settings did not control one of the factors while the other was under investigation. Accordingly, it was hard to disentangle their separate contributions. Moreover, most of the previous studies have been carried out in the context of Statistical Machine Translation (SMT), which differs substantially from the currently prevalent NMT (Jia and Zheng 2022). Although NMT achieves state-of-the-art results, it is accompanied by fluent but inadequate errors, which may be overlooked by translators and pose new challenges for MTPE (Castilho et al. 2017; Popović 2020; Dai and Liu 2023). The task of MTPE is to identify and modify errors in the MT output, which is mostly achieved by cross-checking source text and MT. Arguably, it is mostly a reading rather than a writing process, so the focal point of MTPE research should be directed towards reading-related aspects (Koponen et al. 2020: 17).
As an index of reading difficulty, readability scores have a potential link with MTPE effort. However, readability is mostly discussed in the context of human translation difficulty (Sun 2019: 144), while the extent to which source text readability affects MTPE effort is still to be explored. Meanwhile, it is of vital importance to investigate whether such source text features can be used to automatically predict effort, since letting translators themselves evaluate the cost-effectiveness of MTPE would require more time and effort (Daems et al. 2017). Given the aforementioned reasons, this study explores the impact of source text readability on the effort of post-editing English-Chinese NMT output. We also endeavour to predict MTPE time with some reading-related linguistic features. This study is expected to provide evidence for the MTPE pricing model, shed new light on the development of automatic effort estimation and ultimately improve the productivity of MTPE. In particular, it addresses three research questions: 1. How does source text readability impact post-editing effort? 2. What is the predictive power of different readability formulas, including both traditional and newer ones, for post-editing effort? 3. Is it possible to predict post-editing time based on fine-grained reading-related linguistic features? Related research The three-fold division of MTPE effort, as proposed by Krings (2001), distinguishes temporal, technical and cognitive effort. While many scholarly endeavours have been devoted to evaluating the impact of MT output, source text features are little explored. In contrast, they have been widely discussed in the human translation setting (Campbell 1999; Hvelplund 2011; Sun 2019). One of the reasons for this imbalanced distribution of research may be that the dominant role of the source text has changed in the context of MTPE, and that translators seem to pay more attention to the MT than to the source text (Koglin 2015; Lu and Sun 2018). There is even a view that translators can do MTPE without access to the source text (Koponen and Salmi 2015; Li 2021). Nevertheless, more evidence is needed to draw more robust conclusions about whether such monolingual MTPE works with the less visible errors recurring in NMT. Meanwhile, it should be noted that the impact of the source text is not only confined to the allocation of cognitive resources, but could be extended to the resources allocated to the MT output, since source text features are mirrored in the MT to a certain extent. Therefore, it is still of vital importance to explore the relationship between source text features and MTPE effort. There has been some relevant research on source text features and MTPE effort, but the results seem to be far from conclusive. O'Brien (2005) discovered that while some source text items that were recognised as negative translatability indicators (NTIs) led to increased effort, the non-NTIs could also increase cognitive processing. Tatsumi and Roturier (2010) suggested that a complexity score, based on a series of source text features such as sentence length, strongly correlated with technical effort. However, in Aziz et al. (2014), the correlation between sentence length and temporal effort was not strong, and Jia et al.
(2019b) concluded that source text complexity, measured by human ratings, readability scores, word frequency, and non-literalness, did not necessarily affect MTPE effort. The reason for such inconsistencies might be that these studies have chosen different source text features and adopted different ways of evaluating MTPE effort, and that the results were mixed with the effect of MT quality when comparing the effort of post-editing texts of different complexities. In a study with a more rigorous research design, Jia and Zheng (2022) investigated the interaction effect between source text complexity and MT quality. They found that source text complexity had a significant impact on the effort of post-editing low-quality MT. While this study combined four sets of measurements to identify source text complexity, namely readability scores, word frequency, syntactic complexity, and subjective evaluation, the relationships between each dimension of source text complexity and MTPE effort have not been elucidated. Of note, the measurements in this study have a different focus. Readability scores, for instance, focus on reading difficulties, while subjective evaluation concerns translation difficulties. Accordingly, investigating these features separately may provide a more nuanced understanding of the impact of the source text on MTPE effort. Since reading is a significant component of the MTPE process, it is necessary to investigate whether readability, the ease of understanding and processing a text (Nahatame 2021), has an impact on MTPE effort. In previous relevant studies, source text readability was mainly measured using traditional readability formulas including Flesch Reading Ease (Flesch 1948). However, such formulas only examine surface-level linguistic features and fail to give a more in-depth look at text comprehensibility (Graesser et al. 2011). As mentioned, MTPE research has been mainly concerned with MT quality, while source text features were largely neglected. Therefore, the current study focuses on the effect of the source text, specifically source text readability, on MTPE effort. Source text readability is evaluated via both traditional and newer readability formulas, while MTPE effort is measured using a combination of keystroke logging, self-report questionnaires and retrospective protocols. Models based on reading-related linguistic features for predicting MTPE time were also developed. Participants Thirty-four first-year Master in Translation and Interpreting (MTI) students (2 males, 32 females; Chinese as their L1 and English as L2), aged 21 to 25 years old, participated in this study. They had a similar level of English proficiency, and an average LexTALE test score of 79 (SD = 9) indicated that they were advanced English learners (Lemhöfer and Broersma 2012). In addition, they passed the Test for English Majors at Band 4 (TEM4) and the China Accreditation Test for Translators and Interpreters (CATTI) Level 3 (translator). Although none of them had worked as professional translators and they had limited MTPE experience, the translation qualification that the participants obtained indicates that they were able to accomplish general translation work. Therefore, the results produced by participants in this study provide implications particularly for novice translators who are new to MTPE. Finally, all participants signed a consent form and were rewarded with 50 yuan for their work.
Readability measurement Three traditional readability formulas and three newer ones were adopted to evaluate the readability scores of the source texts.The traditional formulas are the Flesch Reading Ease (RDFRE) formula (Flesch 1948) Source texts selection Six English news texts from the general domain were selected for the study.ST1, ST3, ST5 and ST6 were from newsela.com, a website which provides various adaptations of authentic English news.ST2 and ST4 were from the multiLing set of the CRITT TPR-DB (Carl et al. 2016).All texts were selfcontained and required no specialist knowledge to be post-edited.Under the premise that semantic coherence is preserved, the texts were shortened to 139-150 words.An English native speaker was invited to read them to ensure the comprehensibility of texts.After that, the readability scores were measured, see Table 1. Table 1.Source text readability scores MT quality assessment Two second-year MTI students and two second-year MA students in translation were recruited to evaluate the MT outputs.They all had experience in MT error annotation and have passed the CATTI Level 2 (translator).The MT quality evaluation was conducted with TAUS (2019)'s adequacy and fluency approaches.Specifically, the extent to which the source text meaning is expressed in the MT output and the well-formedness of the MT output were rated separately on a 4-point scale, where "1" represents none/incomprehensible and "4" represents everything/flawless.All the evaluators had training using the scoring rubric.To prevent bias, they were not informed about which MT system was being rated. Since the focus of the study is to investigate the impact of source text readability on MTPE effort, we believe it is necessary to control the impact of MT outputs.In other words, MT outputs should be of similar quality (Jia and Zheng 2022), so that the difference between the effort spent on editing different texts could be better attributed to readability Experimental procedures The MTPE tasks were done within the Translog-II interface (Carl 2012) in November 2022.Before the experiment started, all participants filled in a questionnaire regarding their language, translation and MTPE background. They were asked to do "full post-editing" according to ISO 18587 (2017) and were informed that no external resources such as dictionaries were allowed.There was no time constraint, but participants were required to finish the tasks as soon as possible.They were notified about the layout of the interface, which shows the whole source text in the upper part and the corresponding MT in the lower part (see Figure 1). Figure 1. Screenshot of the Translog-II user interface Each participant finished six tasks and the order of the tasks was balanced across participants in a Latin square design.In order to minimise the impact of fatigue, three tasks were done in the morning session and the other three were done in the afternoon session (Daems et al. 
2017).All tasks were conducted in the same classroom and took 66 minutes on average (SD=17.413).In the beginning of the morning session, participants were asked to do a warm-up task to get familiar with the interface.After that, they were immediately shown the screen recording of their MTPE process via the "replay" function of Translog-II and were invited to do a retrospective verbal report concurrently.The recording was played in fast forward mode (two or five times, according to participants' preference) due to time constraints.The participants could freely pause the video or adjust the speed as they commented.There was an outline for the report, based on which participants freely talked about their MTPE patterns, comments on source text and the MT, the difficulties encountered, and the reasons for edits.After the retrospective report, participants rated their subjective cognitive effort.Finally, they took a 5-minute break and proceeded to the main task.All the main tasks followed the same process as the warm-up task did, and there was always a 5-minute break between each task.In the end of the afternoon session, students were asked to take the LexTale test. Data processing and statistical analysis In total, 204 Translog-II xml files that contain the data logged from the MTPE tasks were collected for the study.All the files were uploaded to the CRITT TPR-DB, which can generate different tables and features regarding the MTPE behaviours (Carl et al. 2016). The data analysis was conducted at the textual level with the statistical software R (R Core Team 2022).Specifically, the study adopted Linear Mixed Effects Regression (LMER) models to examine the relationship between readability and MTPE effort.Six null models without a predictor were built via the lme4 package (Bates et al. 2015), each with one effort indicator as the dependent variable: 1) total time, 2) the number of keystrokes, 3) average pause time, 4) pause to word ratio, 5) initial pause, 6) subjective cognitive effort.For each dependent variable, we then built six full models separately, each with source text readability (measured by six different formulas respectively) as fixed effect.Forty-two models were built in total, and all of them included participants and texts as random effects. Before fitting the models, readability scores were z-standardised, and the dependent variables that did not follow a normal distribution were transformed via the powerTransform function in the car package (Fox and Weisberg 2019).Subsequently, the processed variables were entered into the models.We then checked whether the residuals of the models were normally distributed.If not, outliers with standardised residuals over 2.5 standard deviations would be removed and the models were refitted (Wu and Ma 2020). In order to examine whether there is an effect from readability on MTPE effort, we use a log-likelihood ratio test to compare the null models and the full models.Akaike's Information Criterion (AIC) values were adopted to determine the best-fitting full models, with a lower value indicating better performance.Finally, to assess the predictive power of readability, we employed the lmerTest package (Kuznetsova et al. 2017) and the MuMIn package (Bartoń 2023) to measure the significance of the fixed effect and the effect size. 
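To make this modelling pipeline concrete, the sketch below reproduces the null-versus-full comparison for one effort indicator and one readability formula. The synthetic data frame and its readability values are invented placeholders rather than the actual CRITT TPR-DB tables, but the calls follow the packages cited above (lme4, car, lmerTest, MuMIn).

library(lme4)      # lmer() for mixed-effects models
library(lmerTest)  # adds p-values for fixed effects
library(MuMIn)     # r.squaredGLMM() as an effect-size measure
library(car)       # powerTransform()/bcPower() for normalising the dependent variable

# Invented data: 34 participants x 6 texts, one readability score per text.
set.seed(1)
d <- expand.grid(participant = factor(1:34), text = factor(1:6))
d$readability <- c(62, 55, 48, 70, 40, 45)[as.integer(d$text)]   # placeholder scores
d$total_time  <- rnorm(nrow(d), mean = 5000 - 10 * d$readability, sd = 400)

d$readability_z <- scale(d$readability)            # z-standardise the predictor
lambda <- coef(powerTransform(d$total_time))       # Box-Cox lambda for the DV
d$total_time_t <- bcPower(d$total_time, lambda)    # transformed dependent variable

null <- lmer(total_time_t ~ 1 + (1 | participant) + (1 | text), data = d, REML = FALSE)
full <- lmer(total_time_t ~ readability_z + (1 | participant) + (1 | text),
             data = d, REML = FALSE)

anova(null, full)     # log-likelihood ratio test for the readability effect
AIC(null, full)       # lower AIC indicates the better-fitting model
summary(full)         # t and p values for the fixed effect (via lmerTest)
r.squaredGLMM(full)   # marginal/conditional R^2 as effect size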
Qualitative data supplements the quantitative data in the current study.204 retrospective reports were transcribed and coded.Although participants generally talked about every point in the given outline, only the data pertaining to encountered difficulties and comments on the source text and MT were coded, given the research focus and effort constraints. Total time Total time is the total task duration (in millisecond), normalised by the number of words in the source text.According to Table 4, only the difference between the null model and the RDFRE-included model approached significance (²=3.810,p<0.1).The RDFRE performed the best fit to the total time (AIC=1629.7)and showed a marginally significant negative effect (t=-2.357, p<0.1) on total time, which suggests that the more readable the text, the less temporal effort it takes to post-edit.The results above reveal that readability might not be an accurate predictor of total time.Although the RDFRE demonstrated a marginally significant prediction, it was not sufficiently reliable.Possible explanations for this phenomenon are related to the MT quality.Firstly, the adopted MT output was of relatively high quality.In this case, according to Jia and Zheng (2022), reading a source text does not generally cause deep cognitive processing.Similarly, retrospective protocols suggest that the MT output alleviated the effect of readability.For example, P32 mentioned that she read the MT first and thought the quality was good, so she only referred to the source text when the MT seemed wrong.P34 commented that MT helped her figure out the meaning of certain words.Meanwhile, the general MT quality was controlled since the study has a focus on the impact of readability.While the total time involves reading time, the editing time is also considered, which is closely connected with the MT quality.Therefore, the similar quality of MT might lead to similar editing time, contributing to the insignificant difference between total time regarding different texts. Total number of keystrokes The total number of keystrokes includes the number of insertions and deletions, which was normalised by the number of characters in the target text.As presented in Table 5, the differences between the null model and the full models were mostly insignificant.Only the CAREC-included model showed a marginally significant difference (²=2.772,p<0.1), and the CAREC performed the best prediction of the number of keystrokes (AIC=211.3).However, the fixed effect of readability was neither significant nor approached significance in any models.The results suggest that readability may not be a good predictor of technical effort.Although the materials varied in terms of reading difficulties, the extent to which the corresponding MT was edited did not differ substantially, indicating a limited impact of readability.Meanwhile, given that the general MT quality was comparable in the current study, it can be speculated that the technical effort is more concerned with the MT quality. 
Nevertheless, an observation of estimated coefficients (b) reveals a potential negative relationship between readability and technical effort.To elaborate, the number of keystrokes is likely to decrease when the text becomes more difficult to read.Our results can partially support Jia and Zheng ( 2022), who consolidated a significant negative impact of source text complexity on the number of editing operations with regards to high-quality MT.We assume that readability has an indirect impact on technical effort, in that lower readability could lead to increased uncertainties in the MTPE process, and subsequently more restrained editing and reduced number of edits.Additionally, retrospective protocols suggest that such an effect of readability may be modulated by the MT quality.For instance, with regards to ST 5 and ST 6, two texts of relatively lower readability, P23, P28 and P34 all commented that since the MT quality was good in general, they chose to trust and keep the MT when they encountered the things that they were not sure about.However, since the participants had no access to external resources during the tasks, whether this negative relationship still exists without such a restriction requires further investigation. Average pause time Average pause time (APT) is the average time per pause in a session.In line with the previous studies which also investigated the impact of source text features on MTPE cognitive effort, 1000ms was considered as the pause threshold (O'Brien 2006;Jia et al. 2019b;Jia and Zheng 2022).As shown in Table 6, only the CAREC-included model differed significantly from the null model (²=5.173,p<0.05) and outperformed other full models in predicting APT (AIC=-1012.7).In addition, the fixed effect of readability was significant in the CAREC-included model (t=2.897,p<0.05).The significant positive relationship between CAREC and APT suggests that participants paused for longer time as the text became harder to read.Vieira (2017) identified three modes of reading involved during MTPE: the first one puts text into working memory for mental processing, the second one concerns specific editing issues, and the third one for revision.Similarly, we believe these three modes can account for the pauses in MTPE.Although the data elicited in the current study has yet to determine which type or types of pauses were prolonged by readability, we assume the impact of readability permeates all three modes of pauses, and particularly the first one, since text processing is one key component of readability (Nahatame 2021).The findings above also indicate that longer APTs are linked with higher cognitive effort.This is somewhat contradictory to Lacruz and Shreve (2014), who claim that APT decreases as cognitive effort increases.However, it should be noted that cognitive effort is itself a complex construct.In Lacruz and Shreve (2014), cognitive effort was indicated by the number of complete editing events, while in this section, the cognitive effort concerns the mental resources for the text understanding and processing. 
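As a small illustration of how pause-based indicators can be derived, the sketch below extracts pauses from a made-up vector of keystroke timestamps using the 1000 ms threshold adopted above; it is a simplified stand-in for the CRITT TPR-DB processing, not a reproduction of it.

# Hypothetical keystroke timestamps (ms from task start) for one MTPE session.
keystrokes <- c(5200, 5350, 5420, 8100, 8190, 8300, 12700, 12850, 13000)
threshold  <- 1000   # pause threshold in milliseconds

gaps   <- diff(keystrokes)          # intervals between consecutive keystrokes
pauses <- gaps[gaps >= threshold]   # intervals long enough to count as pauses

apt <- mean(pauses)                 # average pause time (APT) for the session
apt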
Pause-to-word ratio Pause-to-word ratio (PWR) is calculated by dividing the number of pauses by the number of words in the source text (Lacruz and Shreve 2014).As demonstrated in Table 7, only the difference between the null model and the CAREC-included model approached significance (²=3.091,p<0.1).A marginally significant effect of readability could also be observed in this model (t=-2.02,p<0.1).Meanwhile, it performed a superior prediction of PWR than other formulas (AIC=41.7).The results reveal that the predictive power of readability towards PWR is limited.This is not surprising since the pause is only identified between edits, and the number of edits is not mainly decided by the source text readability, as discussed in section 4.2.However, the marginally significant effect of CAREC indicates a potential indirect impact of readability on PWR. The explanation for such an impact is also similar to that in Section 4.2: lower readability may have refrained the subjects from editing, and less edits are linked with lower pause density.Again, this finding is partially consistent with Jia and Zheng (2022), who observed similar but more significant results in the context of high-quality MT. Initial Pause Initial pause (IP) is the pause time before the first edit of the task, which can be considered as the time that translators spent on understanding the text and detecting the mistakes (Cumbreño and Aranberri 2021: 64).Table 8 shows that most full models, except the ones with SBERT and CAREC as fixed effect, differ significantly from the null model.The fixed effect of readability was significant, as assessed by both traditional formulas, namely the RDFRE (t=-3.952,p<0.01) and the RDFKGL (t=2.906,p<0.05), and newer formula, namely the CML2RI (t=-4.644,p<0.01).In the DC-included model, readability had a marginally significant effect (t=2.394, p<0.1).Among the formulas, the CML2RI showed the best performance (AIC=1091.7).The results demonstrate that participants had significantly longer IP for more difficult texts.Since IP can be largely considered as the first mode of reading according to Vieira (2017)'s classification, it can be concluded that readability has a particular impact on the reading for mentally processing the texts, which is consistent with the assumption proposed in Section 4.3. 
Of note, shorter IP indicates that participants spent less time on preliminary text processing and error detection, which may lead to the ignorance of MT errors.Although it is not the primary focus of the current study, we would like to provide an example to highlight this issue.Some participants ignored an obvious mistake in ST4, which took the third shortest IP on average (834 ms, normalised by the number of words in source text): ST: Families Hit with Increase in Cost of Living MT:美国家庭生活成本上升 ST4 addresses the economic conditions in Britain, while the MT mistakenly added the adjective "美国 (American)" before "家庭 (Families)".Although the following sentence in the source text clearly refers to "British families", rendering the mistake highly evident, 29% of the students failed to identify this mistranslation.When asked why they did not edit it, participants were surprised to find that they overlooked this error, reporting that they have relaxed their vigilance since the text was not hard to understand and the MT quality was quite good.These findings suggest that translators should exercise caution even when the source text and MT appear to be easily understandable and of good quality, especially as the fluently inadequate errors produced by NMT can still evade detection. Subjective cognitive effort This study applied the adapted NASA Task Load Index (NASA-TLX) (Sun 2012), a multidimensional scale for measuring translation difficulty.We changed the context from human translation to MTPE, and subjects were invited to rate in terms of effort and other five subscales on a 20-point scale. However, for the focus of the current study, only the ratings regarding effort were analysed.As shown in Figure 2, the higher the score, the higher level of effort that participants believed they had exerted.Table 9 suggests that the differences between null model and full models were mostly significant or marginally significant, except the CAREC-included model.The fixed effect of readability was significant in the DC-included model (t=2.977,p<0.01), the CML2RI-included model (t=-2.686,p<0.05), and the SBERT-included model (t=-2.57,p<0.05).A marginally significant effect of readability can also be observed when assessed by RDFRE (t=-2.5, p<0.1) and RDFKGL (t=2.365,p<0.1).The DC performed the best prediction of subjective cognitive effort (AIC=692.7).The results indicate that participants reported higher cognitive effort with more difficult texts.Among the formulas which significantly predicted subjective cognitive effort, two formulas, namely DC and CML2RI, comprise a similar feature.DC focuses on the percentage of less common words, and CML2RI considers word frequency as one of its major components.Therefore, we assume that infrequent word is one of the key factors that influences participants' perception of MTPE effort.This assumption is in accordance with the retrospective protocols, in which all the participants mentioned unfamiliar words regarding their difficulties during MTPE.Therefore, it can be concluded that readability is a good predictor of subjective cognitive effort, especially when it considers infrequent words.Finally, a summary of the fixed effect of readability, measured by different formulas, on MTPE effort is presented in table 10.The formula which shows the best performance in predicting each effort indicator is also marked. 5. 
Predicting post-editing time with reading-related source text linguistic features MTPE time has proved to be an economical and convenient effort indicator in the translation industry (Koponen et al. 2012).However, our results suggest that the ability of readability formulas in predicting total MTPE time is rather limited.Since previous findings indicate that some linguistic features of the source text have a high correlation with MTPE time (Specia 2011;Green et al. 2013;Vieira 2014), the study also explored the predictive power of fine-grained reading-related linguistic features.Specifically, we investigated the relationship between twelve referential cohesion indices in Coh-Metrix and total time by fitting twelve LMER models separately, in which each index being the independent variable, total time the dependent variable, and participants and texts the random effects.Three indices, namely CRFAOa (t=-2.626,p<0.05),CRFCWO1 (t=-4.617,p<0.01) and CRFCWOa (t=-2.749,p<0.05), significantly predict the time. The CRFAOa represents the global argument overlap, which is the proportion of all possible sentence pairs that share one or more common nouns or pronouns.The CRFCWO1 and CRFCWOa measure content word overlap locally and globally.In other words, the former estimates the proportion of content words that are the same between adjacent sentences, while the latter assesses the overlap between all possible pairs of sentences (Graesser et al. 2011).The more words overlap, the higher the indices, and the easier the text is likely to be to understand.The CRFAOa, CRFCWO1 and CRFCWOa of 6 texts are listed in Table 11.The LMER models suggest that these three indices have a significant negative impact on total time, i.e. total time decreases with the increase of argument/content word overlap.We assume that the repetition of words has facilitated the information processing stage.Our results contradict those of Vieira (2014), who reported that higher type-token ratio, i.e. less repetition of words, led to lower cognitive effort.However, it should be stressed that our results are confined to high-quality MT, while Vieira (2014)'s study covered MT of various quality.Accordingly, the "lack of fluency" problems arising from words repetition might have been reduced in the current study. Conclusion Given that reading is a crucial aspect of MTPE, the study investigated the impact of source text readability on MTPE effort.The quantitative and qualitative data show that readability has a significant effect on cognitive effort, particularly on IP and subjective cognitive effort.The impact of readability on temporal effort and technical effort appears to be limited and indirect, possibly due to the assistance of high-quality NMT.Of note, while the impact of readability on MTPE effort can be statistically significant, the effect sizes of the models suggest that this impact may be relatively weak. 
Regarding the predictive power of readability formulas, the results indicate that they can predict MTPE effort to a certain degree. Nevertheless, no single formula was able to predict all the effort indicators, highlighting the need to combine different formulas in effort prediction. Newer formulas, particularly the CAREC, outperformed traditional formulas in most instances, which may be explained by the fact that the former consider deeper linguistic features. Our findings also suggest that it is promising to adopt formulas which incorporate translation-related linguistic features, such as translation entropy, to automatically predict MTPE effort in the future.

In addition to the readability formulas, the study also applied fine-grained reading-related linguistic features to predict MTPE time. Referential cohesion indices, such as content word overlap, were confirmed to be effective predictors. Therefore, it is recommended that translators utilise these automatically generated features to obtain an estimation of MTPE time, and that future QE models take such features into account.

Some limitations exist in the current study. Firstly, the number and variety of subjects and texts could be expanded to provide more evidence for the translation industry. Secondly, eye-tracking data would enable a finer-grained analysis of the impact of readability on MTPE effort. Thirdly, controlling the MT quality may have limited the effect of readability on MTPE effort; combinations of multiple MT quality levels and readability scores would provide a more comprehensive picture in future research. Finally, the effectiveness of the prediction models developed in the study should be tested in real MTPE settings. These limitations and suggestions will be taken into account to yield more comprehensive and generalisable results in future studies.

• Daems, Joke, Sonia Vandepitte, Robert J. Hartsuiker and Lieve Macken (2017). "Identifying the machine translation error types with the greatest impact on post-editing effort." Frontiers in Psychology 8, Article 1282.
• Fox, John and Sanford Weisberg (2019). An R Companion to Applied Regression, Third edition. Thousand Oaks, CA: Sage.
• Green, Spence, Jeffrey Heer and Christopher D. Manning (2013). "The efficacy of human post-editing for language translation." Proceedings of the SIGCHI Conference on Human Factors in Computing Systems.

Krings's (2001) three-fold division comprises temporal, technical, and cognitive effort and is widely used in MTPE research. Temporal effort refers to the time spent during the MTPE process, which is captured by measurements such as processing speed (O'Brien 2011). Technical effort involves a series of manual corrections of the MT output, calculated from keystroke logs (Jia et al. 2019b). Finally, cognitive effort involves "the type and extent of cognitive processes triggered by the post-editing task" (Krings 2001: 182). The quality of MT has been considered a key variable (Jia and Zheng 2022), but research results are mixed. Krings (2001) found that the correlation between MT quality and MTPE effort is not necessarily linear, while other studies suggested that MT quality is negatively correlated with MTPE effort (Tatsumi 2009; O'Brien 2011; Vieira 2014). Apart from viewing general MT quality, MT error classification provides a finer-grained perspective. For instance, errors regarding word order, omission/addition, style, coherence and so forth were found to be strongly correlated with MTPE effort (Popović et al. 2014; Daems et al. 2017; Qian et al. 2022).

Based on Krings (2001)'s classification of MTPE effort, previous studies have investigated which factors influence temporal, technical and cognitive effort from mainly two aspects: textual features (O'Brien 2005; Tatsumi and Roturier 2010; Koponen et al. 2012) and translators' characteristics (Vieira 2014; Daems et al. 2017). While temporal and technical effort are easier to measure and are used more often in the translation industry, cognitive effort cannot be observed directly and is largely confined to academic research. Multiple methods such as think-aloud protocols (Krings 2001), choice network analysis (O'Brien 2005), pause analysis (Toral et al. 2018), eye-tracking (Daems et al. 2017) and subjective ratings (Vieira 2014) have been introduced to evaluate cognitive effort.

The prediction of MTPE effort is usually related to quality estimation (QE), which involves textual feature extraction, annotated scores of MT quality and machine learning algorithms (Specia and Shah 2018: 203). However, QE's relation to actual MTPE effort has yet to be attested (O'Brien 2011; Tezcan et al. 2019). Another concern regarding QE is that the interpretation of such complex models remains "cryptic" to translators (Marg 2016). If translators are guided and paid according to information that they do not really understand, the productivity of MTPE may not necessarily be improved. Conversely, adopting simple linguistic features that have predictive power for MTPE effort should be more comprehensible to translators. Moreover, it was previously suggested that presenting scores on source text characteristics may be helpful for translators to estimate MTPE time (Tatsumi and Roturier 2010).

Alongside the Flesch Reading Ease, the traditional formulas include the Flesch-Kincaid Grade Level (RDFKGL) formula (Kincaid et al. 1975) and the Dale-Chall (DC) formula (Dale and Chall 1948). The RDFRE and RDFKGL measure readability based on word length and sentence length, while the DC relies on the ratio of difficult words. The newer formulas, namely the Coh-Metrix L2 Reading Index (CML2RI) (Crossley et al. 2008), the Crowdsourced Algorithm of Reading Comprehension (CAREC) (Crossley et al. 2019), and the Sentence BERT Readability Model (SBERT) (Reimers and Gurevych 2019), comprise richer linguistic features such as word overlap and syntactic similarity (see Choi and Crossley (2022) for more detailed information). The Coh-Metrix Desktop Tool (McNamara et al. 2014) was used to obtain the RDFRE, the RDFKGL and the CML2RI scores, while the DC, the CAREC and the SBERT scores were acquired via the Automatic Readability Tool for English (ARTE; Choi and Crossley 2022). The RDFKGL, the DC and the CAREC scores indicate higher text complexity as they increase, while the RDFRE, the CML2RI and the SBERT scores suggest lower text complexity as they increase.

A pilot MT evaluation was first conducted on four widely-used NMT engines (Google Translate, DeepL Translate, Youdao Translate and Baidu Translate) translating ST1, ST3 and ST6. As shown in Table 2, Youdao Translate showed the most consistent performance in translating different texts (especially in terms of ST6). Therefore, Youdao Translate was selected for the experiments.
Table 3 presents the evaluation results for the MT produced by Youdao Translate. The inter-rater agreement was strong and significant for both fluency (Kendall's W = 0.739, p < 0.05) and accuracy (Kendall's W = 0.659, p < 0.05). According to the one-way ANOVA pairwise comparison, the six texts scored similarly in terms of both fluency (F = 1.105, p > 0.05) and accuracy (F = 1.044, p > 0.05), with no significant difference, indicating that all texts were of comparable MT quality.
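The agreement and comparability checks reported above can be reproduced along the following lines. The rating matrix and per-text scores are invented for illustration; Kendall's W is computed here with the irr package (one common option, since the original tooling is not specified beyond the reported statistics), and Tukey's HSD is shown as one possible way of carrying out the pairwise comparison after the one-way ANOVA.

library(irr)   # kendall() for Kendall's coefficient of concordance (W)

# Invented fluency ratings: rows = evaluated segments, columns = four evaluators.
fluency <- matrix(c(4, 3, 4, 4,
                    3, 3, 3, 4,
                    4, 4, 4, 4,
                    2, 3, 2, 3), ncol = 4, byrow = TRUE)
kendall(fluency)         # inter-rater agreement across the four evaluators

# Invented per-segment scores grouped by source text, for the comparability check.
set.seed(2)
d <- data.frame(text  = factor(rep(paste0("ST", 1:6), each = 10)),
                score = sample(2:4, 60, replace = TRUE))
fit <- aov(score ~ text, data = d)   # one-way ANOVA across the six texts
summary(fit)
TukeyHSD(fit)                        # pairwise comparisons between texts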
7,924
2024-01-30T00:00:00.000
[ "Computer Science", "Linguistics" ]
FGF–2 is required to prevent astrogliosis in the facial nucleus after facial nerve injury and mechanical stimulation of denervated vibrissal muscles Abstract Recently, we have shown that manual stimulation of paralyzed vibrissal muscles after facial-facial anastomosis reduced the poly-innervation of neuromuscular junctions and restored vibrissal whisking. Using gene knock outs, we found a differential dependence of manual stimulation effects on growth factors. Thus, insulin-like growth factor-1 and brain-derived neurotrophic factor are required to underpin manual stimulation-mediated improvements, whereas FGF-2 is not. The lack of dependence on FGF-2 in mediating these peripheral effects prompted us to look centrally, i.e. within the facial nucleus where increased astrogliosis after facial-facial anastomosis follows "synaptic stripping". We measured the intensity of Cy3-fluorescence after immunostaining for glial fibrillary acidic protein (GFAP) as an indirect indicator of synaptic coverage of axotomized neurons in the facial nucleus of mice lacking FGF-2 (FGF-2-/- mice). There was no difference in GFAP-Cy3-fluorescence (pixel number, gray value range 17–103) between intact wildtype mice (2.12± 0.37×107) and their intact FGF-2-/- counterparts (2.12± 0.27×107) nor after facial-facial anastomosis +handling (wildtype: 4.06± 0.32×107; FGF-2-/-: 4.39±0.17×107). However, after facial-facial anastomosis, GFAP-Cy3-fluorescence remained elevated in FGF-2-/--animals (4.54±0.12×107), whereas manual stimulation reduced the intensity of GFAP-immunofluorescence in wild type mice to values that were not significantly different from intact mice (2.63± 0.39×10 ). We conclude that FGF-2 is not required to underpin the beneficial effects of manual stimulation at the neuro-muscular junction, but it is required to minimize astrogliosis in the brainstem and, by implication, restore synaptic coverage of recovering facial motoneurons. Introduction Restoration of function after transection of peripheral nerves is poor. Occurrence of "post-paralytic syndromes" such as paresis, synkinesis and dysreflexia are inevitable. Although axonal regrowth is robust, a large body of evidence points to poor recovery being attributable, at least in part, to extensive sprouting and therefore inaccurate reinnervation of target muscles . Indeed, axonal sprouting occurs at a number of locations en route along the axis of the facial nucleus -facial-nerve trunk -facial nerve fascicles -facial, [2] muscles . The quality of peripheral nerve regeneration, both within the nerve and at the motor end-plate/terminal Schwann cell complex, can be improved by various non-invasive therapies. Muscles with flaccid paralysis can be stimulated electrically or by exercise, procedures which inhibit intramuscular axonal sprouting and diminish motor-end-plate polyinnervation, thereby improving reinnervation quality [3] . With respect to functional improvements, we have recently shown that after facial nerve injury, manual stimulation (MS) of denervated whisker pads reduces the proportion of polyinnervated neuro-muscular junctions (NMJ). Furthermore, the shift towards the normal monoinnervated state is associated with improved whisking function and blink reflexes . Factors contributing to this beneficial effect could involve the denervated muscles themselves, which produce numerous short-range diffusible sprouting with various neurotrophic factors being identified as possible candidates stimuli [6][7] . 
We have also recently showed that after facial nerve injury and MS [(facial-facial anastomosis (FFA)+MS] in mice deficient in insulin-like growth factor-1 (IGF-1 +/-) or brain-derived neurotrophic factor (BDNF +/-) and its Trk-B receptor, reinnervation of NMJ was highly inaccurate and vibrissal whisking was poor . We thus concluded that the deficiency of both growth factors was involved in lack of functional recovery. By contrast, mice lacking fibroblast growth factor-2 (FGF-2 -/-) recovered well after FFA+MS (but not after FFA only), indicating that FGF-2 may not be required to underpin the beneficial "peripheral" effects of MS [10] . We therefore decided to explore whether there were any central effects of MS by examining the facial motoneurons in the brainstem. After facial nerve injury, activated (GFAP + ) astrogliocytes reversibly displace perisomatic synapses from the neuronal surface and induce "synaptic stripping" [11] . Hence, the amount of GFAP-expressing astroglia provides indirect information about the status of synaptic coverage of facial neurons. In the present study, we therefore quantified the intensity of Cy3 fluorescence after immunostaining for GFAP as a measure of the total amount of activated astrocytes in the facial nucleus. We examined FGF-2 -/mice subjected to facial nerve injury and subsequent MS of the vibrissal muscles or handling alone (i.e. no treatment). Wildtype (WT) littermates were used as controls. Experimental procedures All experiments were conducted in accordance with the German Law on the Protection of Animals and procedures were approved by the local Animal Care Committee, University of Cologne. Guidelines were identical to those of the National Institute of Health Guide for the Care and Use of Laboratory Animals (NIH Publications No. 80-23) revised 1996, the UK Animals (Scientific Procedures) Act 1986 and associated guidelines and the European Communities Council Directive of 24 November 1986 (86/609/ EEC). Before and after surgery, animals were fed standard laboratory food (Ssniff, Soest, Germany), provided tap water ad libitum and kept in an artificial light-dark cycle of 12 hours light on and 12 hours off. The transgene allele yielded an amplicon of 750 bp and the wildtype 344 bp.Thirty-six mice were divided into six groups with each group consisting of six animals ( Table 1). Groups 1 and 2 were intact WT or homozygous knockouts (FGF-2'). Groups 3-6 comprised WT (Groups 3 and 4) or FGF-2 -/-(Groups 5 and 6) mice which underwent FFA. Following FFA, animals either received MS (see below, Groups 4 and 6) or served as "handling" controls (see below; Groups 3 and 5). Surgical procedures FFA involved transection and end-to-end suture of the right facial nerve under surgical anesthesia (Ketamin/ Xylazin; 100 mg Ketanest ® , Parke-Davis/ Pfizer, Karlsruhe, Germany, and 5 mg Rompun ® , Bayer, Leverkusen, Germany, per kg body weight; i.p.) and was undertaken by a trained surgeon (M. Grosheva). The trunk of the facial nerve was exposed and transected close to its emergence from the foramen stylomastoideum. The proximal and distal stumps were immediately reconnected using two 11-0 atraumatic sutures (Ethicon, Norderstedt, Germany). Manual stimulation of vibrissal muscles and handling controls At one day after surgery and continuing for two months, mice receiving MS (groups 4 and 6) were subjected to gentle rhythmical stroking of the right whisker pad for 5 minutes per day. 
Handling controls (groups 3 and 5) were carefully removed from the cage by an investigator and held for 5 minutes as if they were to receive MS [14].

Analysis of vibrissal motor performance

We used our established technique of video-based motion analysis of explorative vibrissal motor performance. At the conclusion of the experiment (2 months after surgery), animals were videotaped for 35 minutes during active exploration (digital camcorder: Panasonic NV DX-110 EG; 50 Hz; 50 fields per second; shutter open for 4 ms per cycle). Selected sequences (1.5 sec) containing the most pronounced whisking (vibrissal bouts) were captured by a 2D/Manual Advanced Video System (PEAK Motus 2000, PEAK Performance Technologies, Inc., Englewood, CO, USA). Bouts on both the operated (ipsilateral to surgery) and intact (contralateral to surgery) sides were subjected to motion analysis using specific reference points to evaluate: (i) whisking frequency: cycles of protraction (forward movement) and retraction (backward movement) per second, (ii) angle at maximal protraction, (iii) amplitude: the difference between maximal retraction and maximal protraction, (iv) angular velocity during protraction, and (v) angular acceleration during protraction. Measurements were performed by two observers (Mark Seitz and Srebrina Angelova) blinded to the treatment.

Fixation and tissue preparation

Two months after surgery, all animals were anaesthetized and transcardially perfused (4% paraformaldehyde in phosphate buffered saline, pH 7.4). Brainstems were sectioned coronally at 50 µm using a vibratome. Immunohistochemical staining for GFAP was performed on every second section through the facial nucleus (according to the fractionator sampling strategy, the facial nucleus consisted of 23-25 sections) in one incubation batch.

Fluorescence microscopy

A Zeiss microscope equipped with a CCD video camera system combined with the analyzing software Image-Pro Plus 6.2 (Media Cybernetics, Silver Spring, MD, USA) was used to quantify fluorescence intensity. Fluorescent images captured via the rhodamine filter were compared using the 8 BPP gray scale format, whereby each pixel contains 8 bits of information encoding brightness, with a range of 0 to 255. The scale for pixel brightness, or pixel gray value, is constructed so that higher numbers indicate greater pixel brightness. Digital images were captured with a slow scan CCD camera (Spot RT, Diagnostic Instruments, Scientific Instrument Company, Inc., Campbell, CA, USA). For quantification of pixel brightness, images were captured using a ×25 objective and Image-Pro Plus Software Version 6.2 (Media Cybernetics, Rockville, MD, USA). Exposure time was optimized to ensure that only few pixels were saturated at a gray value of 255. However, all images representing the same labeling were taken under the same conditions of exposure (duration). The lateral facial subnucleus was selected as an area of interest that could be unambiguously and consistently identified between animals (AOI; white ellipse in Fig. 1). Using a magnification of ×25, we positioned the long diameter of the ellipse parallel to the readily identifiable ventral margin of the brainstem and then placed the ellipse immediately adjacent to the lateral edge (transition zone ventral to lateral, indicated in Fig. 1 by an asterisk). An interactive threshold was used to detect the pixel brightness of the minimum fluorescence. Threshold values ensured the inclusion of the entire signal range in the sample.
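A minimal sketch of this pixel-counting approach, using a simulated 8-bit image and an elliptical area of interest, is given below. The image size and ellipse geometry are invented; the gray-value window of 17-103 corresponds to the range used for the group comparisons reported in the Results, and none of the Image-Pro Plus internals are reproduced here.

# Simulated 8-bit fluorescence image (gray values 0-255), 512 x 512 pixels.
set.seed(42)
img <- matrix(sample(0:255, 512 * 512, replace = TRUE), nrow = 512)

# Elliptical area of interest (AOI) standing in for the lateral facial subnucleus;
# centre and radii are arbitrary values chosen for this illustration.
rows <- row(img)
cols <- col(img)
aoi  <- ((rows - 256) / 120)^2 + ((cols - 256) / 200)^2 <= 1

# Count pixels whose brightness falls within the analysed gray-value window (17-103).
in_window   <- img >= 17 & img <= 103
pixel_count <- sum(in_window & aoi)
pixel_count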
This value was further used to extract and compare the pixel number between animals of the same group and between experimental groups. A scale with a minimum of 17 and maximum of 103 was used for graphical representation of the results. Using this scale, sections without fluorescence received 17 points and those with very strong fluorescence received 103 points. Statistical analysis Data were analyzed using one-way ANOVA with post-hoc Tukey test and a significance level of 0.05 (Statistica 6.0 software; StatSoft, Tulsa, OK, USA). Intact mice In intact WT and FGF-2 -/animals, whisking involves active exploration whereby mystacial vibrissae sweep back and forth with a frequency of about 5-6 Hz and a maximal protraction (a rostrally open angle between the vibrissal shaft and the sagittal plane) of about 55°. Protraction is mediated by striated muscle fibers that form a sling around the rostral aspect of each hair follicle; contraction pulls the base of the follicle caudally and moves the distal whisker tips forward. The mean amplitude was approximately 55° (WT: 52.2 ± 7.4°; FGF-2 -/-: 56.3 ± 6.7°), the angular velocity about 800° and the angular acceleration about 60,000°/sec 2 . Mice subjected to FFA and subsequent handling Following FFA, both WT and FGF-2 -/mice receiving handling only had large functional deficits compared to intact mice. The main functional deficiency was seen in the significantly smaller amplitudes of vibrissal movements (WT: 23.4 ± 4.1°; FGF-2 -/-12.8 ± 3.2°; P<0.05 compared to intact counterparts). Moreover, the mean amplitude measured in the FGF-2 -/animals was significantly smaller compared to WT-mice (FGF-2 -/-: 12.8 ± 3.2° compared to WT: 23.4 ± 4.1°; P<0.05). The reduced whisking amplitude in FGF-2 -/animals suggests that this growth factor is required to promote functional recovery, at least in the absence of any other intervention such as MS. Mice subjected to FFA and manual stimulation Compared to their respective counterparts subjected only to handling, MS appeared to be effective in both WT and FGF-2 -/mice. Thus, whisking amplitude was significantly increased in WT mice (MS: 36.3 ± 6.4° compared to handling only: 23.4 ± 4.1°; P<0.05). Similarly, despite the deficiency of FGF-2, whisking amplitude was significantly increased by MS in the FGF-2 -/mice (MS: 35.3 ± 5.1° compared to handling only: 12.8 ± 3.2°: P<0.05). Discussion Our finding that recovery of target muscle reinnervation after facial nerve injury is poor in FGF-2 deficient mice supports other work on the pivotal role of this growth factor in peripheral nerve development as well as regeneration. Thus, FGF-2 supports survival of Schwann cell precursors in embryonic mice [13] and in rats, induces proliferation of Schwann cells in vitro [14] . FGF-2 is also expressed in motor and sensory neurons as well as in Schwann cells in both the developing and adult peripheral nervous system. In agreement with these observations, FGF-2 stimulates axon regrowth in vivo and contributes to the enlargement and maintenance of axon caliber [15] . Following injury to peripheral nerves, such as the facial and hypoglossal nerves, reactive gliosis occurs remotely within central nuclei [16] as well as in transynaptically linked regions such as the motor cortex [17] . Within motor nuclei, reactive astrogliosis has been shown to reversibly displace perisomatic synapses from the neuronal surface ("synaptic stripping") [11,18] . 
Such synaptic stripping presumably also underlies the lack of functional recovery observed after peripheral nerve injury [12] . The discrepancy between persistent astrogliosis in the facial nucleus of the operated FGF-2 -/mice and the good recovery of whisking function cannot be explained easily. There should be no doubt that all facial perikarya underwent ("synaptic stripping") after facial nerve injury. In response to transection of the facial nerve, the resident microglia show a dramatic increase in mitotic activity, rapidly migrate towards the neuronal cell surface and displace the afferent synaptic terminals. The axotomized motoneurons "respond" to their deafferentation with a decrease in the synthesis of transmitter-related compounds, e.g. muscarinic and glycine receptors and a decrease in activity of enzymes involved in the biosynthesis of transmitters, e.g. dopamine-β-hydroxylase, tyrosinehydroxylase, cytochromeoxidase and acetylcholinesterase. These changes correspond to the electrophysiological status of regenerating neurons: increased excitability with preserved integrity of the dendritic input [3] . This post-traumatic deafferentation is reversible if target reinnervation occurs. Quantitative electron microscopic analysis of regenerated cat gastrocnemius motoneurons has, however, revealed that restoration of synaptic inputs is incomplete in several respects. Thus, for example, total synaptic frequency (number of synapses per unit membrane length) and total synaptic coverage (percent of membrane length covered by synapses) estimated for motoneuron cell somata and proximal, intermediate and distal dendritic segments recover to 60% -81% and 28% -48% of normal, respectively. Finally, reinnervation of motor targets (with accompanying recovery of the perisomatic synaptic density) does not automatically mean recovery of function. Voluminous work (also from our laboratory) has shown that polyinnervation of the NMJ is a major critical factor for recovery of coordinated motor performance [4-5-,8-9] . FGF-2 is a member of a large family of small peptide growth factors found in neurons and glia and has multiple roles. For neurons, FGF-2 is involved in maintaining neurogenic niches in vivo [19] and in neuroprotection following nervous system injury [20] . Similarly, FGF-2 appears to be important for glia [21] . Several studies have highlighted a role for FGFs in region-specific regulation of glial differentiation. Thus, using the FGF-2 null mouse, Irmady et al. [22] demonstrated that FGF-2 is critical for cortical astrocyte differentiation. Additionally, following nervous system trauma, reactive astrocytes show increased FGF-2 immunoreactivity [23,24] . Following facial nerve transection and repair, increased GFAP immunoreactivity was observed within 2-3 days after axotomy in the facial nucleus on the lesioned side [16] . This reactive change lasted longer (up to 1 year) when axon regeneration was prevented or delayed by placing a metal clip on the proximal nerve stump. Although GFAP immunoreactivity was also studied in shorter times (7-11 days) after nerve lesion, we evaluated GFAP immunoreactivity after two months to observe its distribution in the facial nucleus after injury. Facial nerve injury was evaluated after two months since the same time was used to evaluate facial nerve recovery in our previous studies. Similarly, and not unexpectedly, we observed reactive gliosis (as evidenced by elevated GFAP-Cy3-fluorescence) in WT mice following FFA + Handling. 
However, somewhat surprisingly, given that FGF-2 appears to be involved in astrogliosis, we also observed elevated GFAP-Cy3fluorescence (i.e. astrogliosis) in FGF-2 -/mice following FFA + Handling. Our data therefore suggest that factors in addition to FGF-2 must also be involved in astrogliosis. Indeed, ATP activation of P2X receptors appears to be an alternative pathway to FGF-2 in mediating astrocyte proliferation [27] . Another alternative to FGF-2 is the PKCepsilon protein kinase which regulates morphological stellation as well as multiple astrocytic signalling pathways. Earlier studies have demonstrated FGF-2 immunoreactivity in the rat facial nucleus, which gets upregulated after facial nerve injury [25] . We were unable to find direct information about the different isoforms (18, 20.5, 21, 23kD) that may be present in the murine facial nucleus. Anyway, based on earlier work by Allodi et al. showing that 18-kDa-FGF-2 mediates neuritogenesis though with inhibitory effects on the myelination and that 21-/ 23-kDa-FGF-2 mediates long distance myelination of regenerating axons and early recovery of functions, we may assume that all FGF-2 isoforms are present in the murine facial nucleus, based on the successful recovery of vibrissal whisking. A partial support to this assumption was found in the results of Dono et al. [12] demonstrating the presence of all 3 isoforms in the brain of newborn mice. In previous studies, we showed that MS provided functional recovery of vibrissal muscles and reduced the degree of polyinnervation following facial nerveinjury [4] . Earlier work has shown that motoneurons are dependent on growth factors for their survival both normally and after axotomy [1][2] . Indeed, MS of denervated vibrissal muscles was ineffective in mice deficient in IGF-1 and BDNF [8][9] . By contrast, and surprisingly, following facial nerve transection and immediate repair, MS in FGF-2 -/mice was effective in improving both whisking function and accuracy of target muscle reinnervation [10] . However, as we show here, MS in FGF-2 -/mice failed to prevent gliosis in the facial nucleus. At first sight, our results seem to contradict earlier work which indicates that cerebral injections of bFGF activated the astroglial reaction. The absence of bFGF in our experimental animals should have impeded the upregulation of GFAP in FGF-2 -/mice. While appreciating the results in this report, we identified four important differences from our present study. First, they used rats and we used mice. Second, they performed direct injury to the central nervous system with a breakdown of the blood-brain barrier, while in our experiments injury to the peripheral facial nerve was followed by the indirect retrograde axon reaction and "chromatolysis" in facial motoneurons. Third, the areas in which Eclancher et al. injected bFGF included the cortex, striatum, hippocampus and corpus callosum; while we studied the facial nucleus in the brainstem. Fourth, following bFGF injections, Eclancher et al. let the animals survive 3-20 days, while our mice lived two months after facial nerve injury. Finally, we may also suppose that FGF-2 is not the only trophic factor responsible for the upregulation of GFAP in the activated astrocytes. In conclusion, a lack of FGF-2 results in inaccurate target re-innervation and poor functional recovery. In the periphery, but even in the absence of FGF-2, MS can improve the accuracy of reinnervation and restore function [10] . 
However, centrally, a lack of FGF-2 leads to astrogliosis that cannot be prevented by MS. Nevertheless, sustained astrogliosis in the facial nucleus resulting from a lack of FGF-2 does not prevent MS from conferring its functional benefit. We therefore conclude that the benefits of MS are not underpinned by FGF-2 in the periphery and that central gliosis resulting from a lack of FGF-2 does not impact on accuracy of reinnervation or functional recovery.
4,443.6
2016-03-01T00:00:00.000
[ "Biology", "Medicine" ]
How Dietary Fibre, Acting via the Gut Microbiome, Lowers Blood Pressure Purpose of Review To discuss the interplay behind how a high-fibre diet leads to lower blood pressure (BP) via the gut microbiome. Recent Findings Compelling evidence from meta-analyses support dietary fibre prevents the development of cardiovascular disease and reduces BP. This relation is due to gut microbial metabolites, called short-chain fatty acids (SCFAs), derived from fibre fermentation. The SCFAs acetate, propionate and butyrate lower BP in independent hypertensive models. Mechanisms are diverse but still not fully understood—for example, they include G protein-coupled receptors, epigenetics, immune cells, the renin-angiotensin system and vasculature changes. Lack of dietary fibre leads to changes to the gut microbiota that drive an increase in BP. The mechanisms involved are unknown. Summary The intricate interplay between fibre, the gut microbiota and SCFAs may represent novel therapeutic approaches for high BP. Other gut microbiota-derived metabolites, produced when fibre intake is low, may hold potential therapeutic applications. Further translational evidence is needed. Introduction High blood pressure (BP), also known as hypertension, affects one in every three adults globally [1,2]. The BP of two-thirds of hypertensive patients remains uncontrolled, especially in low-and middle-income countries [1]. According to the Global Burden of Disease study, high systolic BP is the leading risk for attributable deaths [3]. Thus, understanding the reasons why high BP remains highly prevalent and uncontrolled is crucial.A well-known risk factor for hypertension, and one of the first lines of intervention according to recent guidelines, is diet [4]. Alarmingly, in 2017, the intake of most healthy foods was suboptimal [5•]. In the same year, dietary risks were estimated to have contributed to 11 million deaths and 255 million disability-adjusted life-years (DALYs) in adults [5•]. The main cause of diet-related deaths and DALYs was cardiovascular disease (CVD) [5•]. Diet-related deaths were attributed to high sodium intake, followed by low intake of whole grains, fruits, nuts, seeds and vegetables, while DALYs were primarily attributed to low intake of whole grains [5•]. Overall, foods high in whole grains, fruits, nuts, seeds and vegetables are high in fibre. The first evidence we could identify reporting that dietary fibre lowers BP is a small clinical trial that dates from 1979 [6]. Four decades later, the evidence that overall fibre intake is associated with a lower incidence of CVD and lower BP is robust [7••, 8]. Until recently, however, we did not understand how this happened and if this was an association or indeed dietary fibre was involved in BP regulation. Since 2017, a growing body of evidence suggests this occurs via the gut microbiota, the microorganisms that inhabit the intestine [9••, 10]. In this review, we summarize the complex interplay between fibre, the gut microbiota, microbial metabolites and their molecular mechanisms, and the associated changes in BP. We review the most recent literature supporting that manipulation of the gut microbiota and/or their metabolites produced after fibre intake might be a novel therapeutic approach for hypertension. Dietary Fibre and Lower Incidence of CVD: the Latest Evidence Over the past decades, epidemiological studies and clinical trials revealed a strong association between dietary patterns and CVD (Fig. 1). 
A recent systematic review and meta-analysis analysed 10 randomized clinical trials (RCTs) that employed the modified Dietary Approaches to Stop Hypertension (DASH) diet, characterized by a diet low in sodium and enriched in fruits, grains, vegetables and low-fat dairy foods [11]. This showed the modified DASH diet reduced systolic BP by 3.3 mmHg and diastolic BP by 2.1 mmHg [11]. While sodium has been the focus of most studies in dietary interventions to treat hypertension, evidence supports that the DASH diet lowers BP even when sodium intake is high [12]. This reinforces the concept that improvements in BP are not only dependent on sodium [13]. Indeed, a systematic review and meta-analysis of 6 clinical trials focused on the Mediterranean diet and BP showed a small decrease in systolic (−1.4 mmHg) and diastolic (−0.7 mmHg) BP [14]. Furthermore, a recent RCT reported that both the Mediterranean diet and its improved version, the Green-Mediterranean diet, significantly reduced BP [15]. Apart from the DASH and Mediterranean diets, a meta-analysis of 185 prospective studies and a total of 58 RCTs, equivalent to ~135 million person-years, determined that higher fibre intake reduced overall and cardiovascular mortality by 15-30%. A diet high in fibre was also associated with a lower risk of CVD [7••]. Analysis of 15 RCTs, including 1064 intervention and 988 control participants, reported that fibre reduced systolic BP by 1.27 mmHg [7••]. A more recent meta-analysis by the same authors included 12 RCTs of 878 patients with CVD or hypertension [8]. This study provided high-certainty evidence showing fibre reduces systolic BP by 4.3 mmHg [8]. An additional 5 g per day of fibre was sufficient to reduce systolic and diastolic BP by 2.8 mmHg and 2.1 mmHg, respectively [8]. This is robust evidence that dietary fibre lowers BP, even without sodium interventions. A diet rich in fibre sources has been associated with beneficial health outcomes. Dietary fibre comprises all carbohydrates that resist digestion or absorption in the small intestine and have a degree of polymerization of at least ten monomers [16•, 17•]. There are two major types of dietary fibre, non-starch polysaccharides and resistant starches (RS). Non-starch polysaccharides, the main component of plant cell walls, include soluble fibre, which is capable of dissolving in water, and insoluble fibre, which is unable to be dissolved in water [16•, 17•]. RS range from type one to five and are the energy reserve for plants and a major dietary carbohydrate source for humans [16•]. Thus, the different types of fibre are diverse, and their physicochemical characteristics, including solubility, viscosity and fermentability, can vary with food processing methods and individual health conditions [16•]. While no type of fibre is digested by mammalian enzymes, and all reach the large intestine intact, their degree of fermentation is variable. For example, certain types of soluble fibre (e.g. inulin, galactooligosaccharides, pectins) and RS are highly fermentable, while some types of insoluble fibre (e.g. cellulose and lignins present in the cell walls) have lower fermentability [16•]. Research is largely lacking on the effect of different types of fibre on BP. In particular, RS are remarkably difficult to study and quantify, as their levels vary depending on how foods are cooked and ingested. The heterogeneity of trials poses a large limitation to the direct use of these types of fibre in clinical practice. Combined with a lack of information about fibre intake in hypertensive guidelines [4], overall diets aimed at increasing the intake of foods high in fibre and potassium and lower in sodium, such as the DASH or Mediterranean diets, are still the best approach, at least for now.

Fig. 1 Dietary fibre, acting via the gut microbiota, lowers blood pressure. Diets high in fibre are associated with lower blood pressure (BP) and risk of cardiovascular disease (CVD). Fibres reach the colon intact, as they resist being digested or absorbed in the upper intestine. In the colon, the gut microbiota utilizes them as fuel sources and produces short-chain fatty acids (SCFAs) as by-products. These microbial metabolites have different routes to cross the intestinal epithelium: binding G protein-coupled receptors (GPCR), through transporters such as MCT1 or SMCT1, or passive diffusion. SCFAs become intracellular or available in the circulation, especially acetate, through which they communicate with distal organs and exert their effects. Legend: DASH, dietary approaches to stop hypertension; MCT1, monocarboxylate transporter 1; MED, Mediterranean; OLFR, olfactory receptor; SMCT1, sodium-coupled monocarboxylate transporter. Created with BioRender.

Fibre Digestion by the Gut Microbiota Fibre fermentation in the large intestine is driven by the gut microbiota [16•], the living microorganisms that inhabit the intestinal ecosystem [18•]. Thus, fibre intake not only modulates the gut microbiome, the microbiota plus their nucleic acids, but also microbial structural elements and microbial metabolites [18•, 19••]. The latest estimation suggests a 'reference man' has a similar number of human and bacterial cells in the body (~3.8 × 10^13 each) [20]. However, a 'reference woman', infants and the elderly were estimated to have 1.7-2.2 times more bacterial than human cells in the body [20]. While the vast majority of these bacterial cells inhabit the large intestine [20], the number of other microorganisms (e.g. viruses, fungi) remains unaccounted for. Two recent crossover trials investigated the effect of two purified fibres, arabinoxylan and inulin; a mixture of five types of fibre; and RS on the microbiota [19••, 21]. These studies independently identified that each type of fibre was associated with distinct microbial responses [19••, 21]. Likewise, small chemical structural changes in type 4 (chemically modified) RS drove different effects on the gut microbiota and production of their metabolites in humans [22•]. However, a high inter-individual response is regularly observed in such interventions, highlighting the need for a precision approach to nutrition and microbiome interventions, as well as a better understanding of the individual baseline microbiome [23]. The microbiota inhabits the gut and gut mucosal barrier, and supports the maintenance of a healthy gut epithelial barrier via metabolite production, further discussed below [24]. This physical barrier prevents pathogenic colonization and invasion. In fibre-rich diets, there is a proliferation of gut microbiota that digests fibre, supporting the maintenance of the gut epithelial barrier [25]. In fibre-free diets, there is a shift in the gut microbiota composition, leading to the proliferation of bacteria that digest the intestinal mucus layer instead [25].
This contributes to the breakdown of the gut epithelial barrier; the entrance of undesirable microbes and their substances into the host's systemic circulation; and the subsequent activation of a chronic inflammatory state [25]. A similar chronic inflammatory state is observed in CVD and high BP [26•]-this observation suggests gut dysbiosis and breakdown of the gut epithelial barrier may be involved in the development of these diseases. Short-Chain Fatty Acids: the Microbial Products Derived from Fibre Fermentation In the large intestine, dietary fibre fermentation by the gut microbiota leads to the generation of SCFAs as by-products (Fig. 1) [27•]. Several bacteria are involved in this process via distinct biochemical pathways, summarized in Table 1. However, we still do not completely understand the enzymatic machinery necessary to degrade certain types of fibre, such as RS [23]. The three major SCFAs derived from microbial metabolism are acetate, propionate and butyrate, previously reported to be in a ratio of approximately 60:20:20 in the colon of sudden death victims [28]. We analysed faecal levels of SCFAs in a multi-site cohort study and found that acetate corresponded to 55% of total SCFAs, while propionate and butyrate were 17% each, with the remaining 11% being accounted for iso-butyric, iso-valeric, valeric and caproic acids [29••]. SCFAs have 1-6 carbon-based anions, with acetate having 2, propionate 3 and butyrate 4 carbons [27•]. Although SCFAs can be ingested or produced by other metabolic processes, bacterial fermentation of fibre is the major source of SCFA production in the human body [27•]. Different types of fibre are also fermented in distinct regions of the colon [17•]. Rapidly fermented fibres, such as inulin, are fermented in the proximal region, while moderate-and slow-fermented fibres, such as RS type 2, are fermented in the proximal and transverse regions [17•]. This means that levels of SCFAs vary along the colon, with distal regions having lower levels due to the depletion of fermentable fibres, leading to protein fermentation instead [17•]. This is also reflected in changes in pH in the different intestinal regions, with the proximal region having the lowest and the distal region having the highest pH [17•]. While SCFAs were measured in faecal and blood samples in most of the human studies, these may not reflect the levels produced inside the intestine, particularly, in different colonic regions. SCFAs, in particular butyrate, are absorbed by intestinal epithelial cells by the monocarboxylate transporter 1 (MCT1, encoded by the gene SLC16A1) and sodium-coupled monocarboxylate transporter (SMCT1, gene SLC5A8), promoting cellular metabolism [27•]. Butyrate, as a major source of 1 3 ATP for colonocytes, leads to the maintenance of the gut epithelial barrier [27•]. It also depletes intracellular oxygen which leads to the stabilization of the transcription factor hypoxia-inducible factor 1 (HIF1), which coordinates the expression of tight junction genes in the intestinal epithelial barrier [30]. Although all SCFAs inhibit histone deacetylases (HDACs), butyrate is the most potent [27 •]. Moreover, SCFAs act via signalling cascades when they bind to the G protein-coupled receptors (GPCRs)-GPR41, GPR43 and GPR109A (Table 1) [27 •]. These receptors are mostly expressed on the surface of immune and gut epithelial cells [31]. Their function in hypertension is further discussed below. 
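The faecal proportions quoted above (roughly 55% acetate and 17% each for propionate and butyrate) are simply the normalized composition of the measured concentrations. The minimal Python sketch below illustrates that normalization step; the concentrations used are hypothetical placeholders chosen to resemble the reported proportions and are not values from the cited cohort [29••].

    # Illustrative only: hypothetical faecal SCFA concentrations (umol per g of stool).
    # These numbers are NOT taken from the cited cohort; they are placeholders chosen
    # so that the resulting percentages resemble the proportions quoted in the text.
    concentrations = {
        "acetate": 33.0,
        "propionate": 10.2,
        "butyrate": 10.2,
        "minor SCFAs (iso-butyrate, iso-valerate, valerate, caproate)": 6.6,
    }

    def composition_percent(conc):
        """Normalize absolute concentrations to a percentage composition."""
        total = sum(conc.values())
        return {name: 100.0 * value / total for name, value in conc.items()}

    for name, pct in composition_percent(concentrations).items():
        print(f"{name}: {pct:.1f}%")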
The majority of SCFAs diffuse through the intestinal epithelium to the lamina propria, entering the circulation via the portal vein [27•]. SCFAs can be utilized by different cell types, including enteroendocrine L-cells, beta cells in the pancreas and immune cells [32,33]. While propionate is preferentially metabolized by hepatocytes, acetate is the only SCFA that is usually detected at physiological concentrations in the host's systemic circulation [27•]. In our studies, acetate was the main SCFA detected in plasma (94%), while propionate and butyrate corresponded to ~ 3% each [29••]reflecting that only a minority of these SCFAs become systemically available. However, acetate can act as a substrate and be converted into fellow SCFAs [27 •]. Nevertheless, the amount of SCFAs in the circulation and their turnover rate are also tightly regulated by the endogenous energy level, such as glucose, fatty acids and ketone bodies [34]. SCFAs Mediate Downstream Effects Outside the Intestine It is estimated that 60% of colonic SCFAs diffuse from the lumen to the lamina propria with the remaining portion taken up directly by MCT1 and SMCT1 transporters in the epithelial cells [35]. As mentioned, SCFAs can bind [95] to GPR41, GPR43 and GPR109A expressed on diverse cell types, including gut epithelial cells, adipocytes, enteroendocrine L-cells, innate immune cells and neurons [36, 37•]. Intracellular SCFAs can regulate epigenetic genes by HDAC inhibition [38], where butyrate may act as a competitive inhibitor and might occupy the hydrophobic binding cleft of the active site [39]. Moreover, mainly in the liver, intracellular SCFAs are essential substrates for β-oxidation and the Krebs cycle. A study investigated the roles of SCFAs in cell metabolism, in which mice were infused with physiological quantities of isotope labelled SCFAs into the cecum [34]. It identified butyrate as the main substrate for lipogenesis, propionate for gluconeogenesis and a minor proportion of acetate and butyrate for cholesterol synthesis [34]. At the epigenetic level, acetyl-CoA derived from β-oxidation, glycolysis and lipid metabolism can modulate histone acetyltransferase, the antagonistic enzyme of HDAC, activity in the nucleus [40]. The several downstream mechanisms involved in the actions of SCFAs that may impact BP are summarized in Fig. 2. SCFAs and other gut microbiota-derived metabolites are key in microbiota-host communication as they can modulate distal organ physiological and molecular functions. Indeed, using in vivo carbon-11 acetate and positron emission tomography, i.v. and colonic acetate were mostly absorbed by the brain, heart and liver [41]. Moreover, transcriptomic analyses of 3-week administration of a high-RS diet and acetate in the drinking water showed downregulation of the renin-aldosterone-angiotensin system (RAAS) and interleukin (IL)-1β in the kidney, and downregulation of mitogen-activated protein kinases (MAPK) and transformation of growth factor β (TGFβ) signalling in the heart, providing evidence for a gut-cardiorenal communication [9••]. Intervention with high RS and acetate increased the mRNA Fig. 2 Known molecular mechanisms of action of short-chain fatty acids and how they may lower blood pressure. The three main shortchain fatty acids (SCFAs), acetate, propionate and butyrate, have multifaceted actions via G protein-coupled receptors (GPCR), epigenetic, immune-dependent and immune-independent mechanisms that together may lower blood pressure and elicit a cardiorenal protective effect. 
Legend: Ac, acetyl group; GPCRs/GPR, G protein-coupled receptors; HAT, histone acetyltransferase; HDAC, histone deacetylases; IL, interleukin; IFN, interferon; MAPK, mitogen-activated protein kinases; NLRP3, NOD-, LRR-and pyrin domain-containing protein 3; OLFR, olfactory receptor; RAAS, renin-aldosterone-angiotensin system; TGF, transformation of growth factor; Th, helper T; T reg , regulatory T. Created with BioRender and protein levels of renal angiotensin-converting enzyme 2 (ACE2) via GPR41/43/109A signalling [42]. Recent evidence showed even maternal dietary fibre modulated the molecular and cellular composition of the adult offspring's heart [43]. These demonstrate compelling evidence that SCFAs have important roles outside the intestine that may impact BP and CVD. SCFAs and BP in Experimental Hypertension Gut dysbiosis is characterized by changes to the structure of the gut microbiota and a compromised gut epithelial barrier. An important component of hypertensive states may be changes in the capacity of the microbiota to produce SCFAs, which may lead to the breakdown of the gut epithelial barrier. Indeed, lower SCFA-producing bacteria and increased intestinal permeability were reported in both hypertensive models (angiotensin II, DOCA/salt mice and spontaneously hypertensive rats (SHR)) and human hypertensive patients 46,47]. Early studies using acute administration of SCFAs suggested these metabolites may have a BP-lowering effect: SCFAs caused vasodilation in dogs [48,49], rat caudal arteries [50] and human colonic arteries from 6 donors [51]. More recently, acute delivery of propionate resulted in a dose-dependent reduction in BP via GPR41 signalling [52•]. Furthermore, acute administration of acetate reduced heart rate and mean arterial pressure-the use of atenolol to block sympathetic tone abolished the effect on heart rate, but the BP-lowering effect persisted [53]. The long-term effects of SCFAs have only been determined more recently, with a growing number of studies demonstrating the three main SCFAs were able to reduce BP and improve cardiac performance in independent studies (Table 1). Similarly, to a high-RS diet, we reported that magnesium acetate supplementation in the drinking water reduced BP and cardiorenal fibrosis in the DOCA-salt model [9••]. This was followed by further validations of a BP-and fibrotic-lowering effect of magnesium acetate, sodium propionate and sodium butyrate, as well as a combination of all three in the Ang II model, even in combination with a lowfibre diet [54••]. Acetate led to a decrease in the calculated total peripheral resistance and sodium to potassium excretion, but no changes were observed in cardiac output, stroke volume or plasma noradrenaline [54••]. BP-lowering effect induced by SCFAs has been independently validated by others: butyrate supplementation in Ang II mice reduced their BP [55], and propionate supplementation in Ang II-infused apolipoprotein E knockout (Apoe −/− ) mice ameliorated cardiac hypertrophy, fibrosis and vascular dysfunction [56••]. Unpublished data from our team has compared the effect of magnesium and sodium acetate, which determined that the magnesium version had a larger BP-lowering effect than the sodium one. Unfortunately, butyrate and propionate are usually only available in sodium forms. This represents a barrier to their direct clinical use. 
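Several of the animal studies cited above deliver SCFAs as salts dissolved in the drinking water. As a back-of-envelope illustration of how a drinking-water concentration translates into a daily dose, the sketch below uses entirely hypothetical values for the concentration, water intake and body weight; none of these numbers are taken from the cited studies.

    # Back-of-envelope sketch (hypothetical values throughout): converting a
    # drinking-water SCFA concentration into an approximate daily dose.
    concentration_mM = 200.0          # SCFA salt concentration in drinking water (assumed)
    molar_mass_g_per_mol = 82.0       # e.g. sodium acetate is roughly 82 g/mol
    daily_water_intake_mL = 5.0       # typical mouse water intake, assumed
    body_weight_kg = 0.025            # 25 g mouse, assumed

    grams_per_day = (concentration_mM / 1000.0) * (daily_water_intake_mL / 1000.0) * molar_mass_g_per_mol
    dose_mg_per_kg_per_day = grams_per_day * 1000.0 / body_weight_kg
    print(f"approximate dose: {dose_mg_per_kg_per_day:.0f} mg/kg/day")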
Consistently, butyrate intervention was shown to reduce BP in both hypertensive (SHR [57], Ang II-infused Sprague Dawley rats [58]) and normotensive (Wistar Kyoto [59]) rats. Sodium butyrate decreased the level of an endotoxin, lipopolysaccharide (LPS), in the plasma and associated expression of genes for the interleukin Il1β [57], the inflammasome-component Nlrp3, and the chemokine Mcp1 in cardiac tissue via COX2/PGE2 pathway inhibition [58]. In another relevant study, Apoe −/− mice fed with a high-fat diet as a model of atherosclerosis, treatment with propionate reduced intestinal cholesterol and blood low-density lipoprotein (LDL) levels that ameliorated the disease phenotype [60]. The molecular mechanisms of SCFAs identified so far are discussed below. Olfactory receptor 78 (OLFR78, encoded by the gene Or51e2) is another GPCR that responds to SCFAs, particularly acetate and propionate [52 •]. OLFR78 is expressed in the vascular smooth muscle and renal juxtaglomerular apparatus, where it was detected to modulate renin secretion [52•]. An acute propionate (10 mM) administration was assessed in Olfr78 −/− mice. Due to the lack of OLFR78, the renin response was abolished and, thus, an acute drop in BP was observed, confirming that OLFR78 raised BP and antagonized the hypotensive effects of propionate [52•]. In a recent study, OLFR78 was investigated in chronic BP regulation, showing that Olfr78 −/− mice had lower renin levels but no differences in baseline BP compared to their WT counterparts [61]. Furthermore, evidence supports that propionate has a hypotensive effect via GPR41. Acute propionate administration caused a minimal reduction in BP response in Gpr41 ± heterozygotes and a modest increased BP response in Gpr41 −/− animals [52•]. This demonstrated that, with the lack of GPR41, there is a reduction in the number of receptors for propionate and, thus, their signalling that impacts BP responses. In addition, Gpr41 −/− mice were reported to have higher systolic hypertension compared to WT animals [62•]. When comparing 3-month versus 6-month old Gpr41 −/− mice, the older group was found with elevated pulse wave velocity, but no increase in ex vivo aorta stiffness, suggesting that endothelial GPR41 lowers baseline BP by decreasing the vascular contractile activity without altering vascular characteristics [62•]. Moreover, one study compared the phenotype of naïve single GPR41, GPR43, GPR109A knockout and GPR43/109A double knockout mice [54••]. At 10 weeks of age, these animals showed no changes in BP, but all presented differences in cardiac function and fibrosis [54••]. Interestingly, the GPR43/109A double knockout mice had a more severe phenotype than individual GPCR knockouts [54••]. Hence, the role of SCFAs-sensing receptors seems intricate-since these receptors act on similar pathways [37•], deletion of only one or two receptors might trigger compensatory mechanisms via the other(s). More comprehensive studies assessing the function of these receptors as well as MCT1 and SMCT1 in hypertension are needed. SCFAs and BP in Essential Hypertension A non-placebo controlled RCT showed healthy participants with 20-g supplementation of dietary fibre, inulin, for 6 weeks had a significant increase in serum butyrate and reduced systolic (− 6.3 mmHg) and diastolic (− 3.1 mmHg) BP [63•]. Levels of pro-inflammatory cytokines IL-4, IL-8 and TNFα were also reduced [63•]. This provides some translational evidence that SCFAs may lower BP in essential hypertension. 
However, clinical studies that assessed the levels of SCFAs in hypertensive patients have had inconsistent results (summarized in Table 2). On the one hand, untreated hypertensive patients, diagnosed by ambulatory BP monitoring, had higher plasma acetate and butyrate that positively correlated with systolic and diastolic BP [29••]. The bacterial pathway acetate-CoA ligase (ADP-forming), which converts ATP, CoA and acetate into ADP, acetyl-CoA and phosphate, was also upregulated in essential hypertension [29••]. BP variability, measured as morning BP surge, was negatively associated with total plasma SCFAs and, in particular, acetate [64]. Similarly, a higher level of circulating butyrate was found to be positively associated with the ambulatory arterial stiffness index, a critical indicator of arterial function in cardiovascular diseases [65]. A possible explanation is that the sensing and uptake of SCFAs from the circulation into relevant cells are defective. This could be explained by the observed reduced levels of GPR43 mRNA in hypertensive patients, and the negative association between both GPR41 and GPR43 mRNA and arterial stiffness [29••, 65]. On the other hand, acetate and butyrate levels were lower in plasma from hypertensive patients, both untreated and patients taking anti-hypertensive drugs [66, 67•]. Furthermore, hypertensive subjects had a higher level of acetate, butyrate and propionate in their stool samples [66, 68]. The detection of SCFAs in the faecal samples might indicate that their absorption efficacy in hypertension has been decreased, as less than 5% of these metabolites are expected to be excreted in faeces. Further studies in larger cohorts with well-characterized BP are needed to clarify the direction of the association between SCFAs and essential hypertension.

Table 2 Cross-sectional clinical studies that assessed the levels of the three main short-chain fatty acids (acetate, butyrate and propionate) in hypertension. DBP diastolic blood pressure, HTN hypertensive patients, NT normotensive participants, SBP systolic blood pressure, SCFAs short-chain fatty acids [68]

The Effects of SCFAs on a Broad Range of Immune Cells Important for Hypertension SCFAs have anti-inflammatory effects on several immune cells [27•], which are also associated with the development of hypertension [26•]. Cytokines such as IL-17 and IFN-γ were reported to promote the development of hypertension, whereas IL-10 attenuated the disease [26•]. A direct link between the anti-inflammatory actions of SCFAs and the lowering of BP is still missing. In patients with ulcerative colitis, butyrate decreased the number of macrophages and neutrophils in the plasma and intestinal lamina propria via inhibition of NF-κB nuclear translocation [69]. Lower levels of the pro-inflammatory cytokines IL-6 and IL-12 were identified in intestinal macrophages and bone marrow-derived macrophages treated with butyrate via an HDAC-dependent mechanism [70]. Similarly, through HDAC inhibition, propionate and acetate increased acetylation of the mTOR pathway that blocks T helper 17 (Th17) and T helper type 1 (Th1) differentiation [69]. As a result, these cells secrete fewer cytokines, including IL-17, interferon (IFN)-γ and IL-10 [71]. SCFAs may also have a direct anti-inflammatory role via differentiation of naïve T cells into regulatory T cells (T regs), increasing Foxp3 expression via GPR43 [72].
In mice, a 3-week intervention with RS or acetate increased the number of T regs and upregulated methylation of genes associated with T regs function in splenocytes [54••]. A group of SCFA-producing strains of Clostridia isolated from a healthy human faecal sample, enriched in T regs -inducing species, was transferred into germ-free mice. This cluster of bacteria generated a TGF-β-rich environment which favoured the differentiation of colonic T regs [73]. In humans, however, a short (5 days) intervention that increased the systemic levels of acetate and propionate did not change the levels of T regs [74]. This suggests issues with the translation or that longer term interventions may be needed in humans. Overall, these studies showed that SCFAs have a direct effect on a broad range of immune cells, which in turn may either promote or attenuate hypertension. It remains unclear why SCFAs have different preferences for receptor activation and/or HDAC inhibition within different cell types. What Happens to BP when Fibre Intake Is Low Back in 1979, a study demonstrated participants with a lowfibre intake diet had higher systolic and diastolic BP [6]. In the same study, 11 participants, who routinely were on a high-fibre diet, decreased their total dietary fibre intake by 55% for 4 weeks, resulting in an increase in their mean systolic and diastolic BP by 6.8% and 3.8%, respectively [6]. Now that we understand the importance of the gut microbiota for fibre fermentation and that the gut microbiota changes rapidly, it is important to differentiate association from causation in the change in BP. Germ-free mice, which do not possess any microbiota, are a very powerful tool to address this question [75]. Faecal microbiota transplantation (FMT) from low-and high-fibre fed mice into germfree animals demonstrated that a low-fibre diet is not merely associated with a higher incidence of high BP [54••]. The gut microbiota resulting from long-term low-fibre intake triggered and promoted the genesis of higher systolic (+ 17 mmHg) and diastolic (+ 14 mmHg) BP and cardiorenal hypertrophy in mice, showing that this microbiota is hypertensinogenic [54••]. By supplementing acetate, propionate and/or butyrate in the water of conventional Ang II mice fed with a low-fibre diet, this hypertensinogenic effect was ameliorated [54••]. Furthermore, patients with an advanced stage of chronic kidney disease (CKD) with a low-fibre intake (< 25 g/ day) had a lower estimated glomerular filtration rate and a higher level of C-reactive protein, IL-6 and the uremic toxin indoxyl sulphate, indicating reduced renal function and increased inflammatory markers [76]. Similarly, in children with CKD, an inverse association was observed between fibre consumption and serum concentration of protein-bound uraemic toxins, such as indoxyl sulphate, p-Cresol sulphate, p-Cresol glucuronide and indole acetic acid [77]. This correlation was dose-dependent: for every gram/day increase in fibre consumption, there was a small decrease in particular metabolites, which ameliorated their accumulation in the kidney [77]. Therefore, a diet lacking sufficient fibre may play a role in hypertension and CVD pathogenesis. A possible explanation is a deficiency in fibre fermentation and, thus, SCFAs in the proximal and transverse colon, resulting in lower antiinflammatory effects in the interstitial epithelial cells and the systemic circulation. 
In return, increased protein fermentation takes place earlier in the colon, which might lead to exposure of the mucosal layer to potential harmful metabolites, such as phenols and hydrogen sulphide [17 •]. However, the specific processes that happen inside the intestine and over-flow to the systemic circulation when fibre intake is low are yet to be determined. Challenges in the Field There has been an over-reliance on the abundance of microorganisms instead of their function. As the pathogenesis of hypertension is a complex interplay between several systemic systems, a similar approach regarding the microbiome needs to be considered in this setting. It is ideal to integrate multi-omics studies, such as metagenomics, metatranscriptomics, metaproteomics and metabolomics, which will provide a more comprehensive understanding of BP regulation from a microbiome perspective. There is evidence that SCFA producers such as Ruminococcus spp. are less prevalent in essential hypertension, and that there is a significant shift in the gene pathways of the human hypertensive microbiome [29••]. However, metatranscriptomic or metaproteomic studies, showing a shift in the expression and function of microbial SCFA-producing genes to determine a cause-effect relationship, for example, are still absent in hypertension. Sometimes, in vitro models cannot recapitulate in vivo, especially when assessing complex microbial ecosystems such as the one found in the human intestine [17•]. This complexity can be demonstrated by the findings of an RCT aimed at reducing sodium which resulted in an increase in the levels of plasma butyrate in women [78]. In the last decade, we have seen an expansion of studies investigating gut microbiota-derived metabolites other than SCFAs in CVD-an example being trimethylamine N-oxide (TMAO) [79]. By combining convention and germ-free animals, a study identified four upregulated but under-studied metabolites in plasma samples of conventional Ang II mice [80]. This included 4-ethylphenyl sulphate and p-Cresol sulphate, with another eight metabolites downregulated [80]. In faecal samples, 25 metabolites, including choline phosphate and taurohyodeoxycholic acid, were upregulated, while 71 were downregulated [80]. Additionally, β-hydroxybutyrate, a metabolite derived from the liver, was decreased in the circulation with a high-salt diet in hypertensive rats [81]. This downregulation was associated with increased activation of the inflammasome, which in turn increased the risk of hypertension [81]. There are several challenges in the identification of novel metabolites and their roles, as metabolomics tools are still considered emerging. These include a lack of validation of some putative metabolites or tools for absolute quantification, a large array of synonymous names for the same metabolites and a requirement to use different analysis tools for different metabolites (e.g. SCFAs vs other metabolites), amongst others [82]. Leveraging Fibre as Future Therapeutic Approaches for Hypertension Lifestyle changes remain one of the first lines of intervention in hypertension [4]; however, they fail to promote an increase in the quantity and quality of fibre. Guidelines on the use of prebiotic foods, which selectively stimulate the growth of health-promoting bacteria, are needed. These foods include, for example, highly fermentable fibre such as inulin, sugar gum and pectin. Future interventions involve designing and developing probiotics (i.e. 
live bacteria) that assist in fibre digestion and SCFA production. This will also require individuals to sustain a fibre-rich diet as a food supply for the microbes to survive and populate the gut. Finally, there is also an opportunity for direct administration of SCFAs as a postbiotic therapy. One RCT aimed at assessing the direct effect of the SCFAs acetate and butyrate to lower BP in human hypertension is in progress [83]. Nevertheless, this might not be a suitable approach for all patients if patients have lower expression of GPR41 or GPR43, making them less responsive to SCFAs. Other potential approach includes FMTs from healthy donors with enriched SCFA-producing bacteria or T regs -inducing bacteria. An RCT on the potential of FMTs to lower BP has been described [84], but the results are yet to be available. Moreover, interkingdom interactions within the gut could be leveraged: bacteriophages could be used to target and kill specific bacteria that produce detrimental metabolites from a low-fibre diet. Another approach could include the development of inhibitors for bacterial genes that produce detrimental metabolites, once these are identified, such as the one developed for TMAO's precursor [85]. Nonetheless, all the above should be adjunctive therapies that complement other types of treatment or management, and it will require extensive RCTs to confirm these promising therapies. Conclusions Evidence from the last four decades supports that dietary fibre lowers BP and decreases cardiovascular and all-cause mortality. The mechanisms involved have only become evident recently, supporting the gut microbiota has a key role in this process via the production of SCFAs. These metabolites have multi-faceted actions via GPCRs, epigenetic, immunedependent and immune-independent mechanisms that together may elicit changes to BP and cardiorenal function. Alternatively, a lack of dietary fibre fosters a gut microbiota that also seems detrimental to cardiovascular health, leading to higher BP. The specific metabolites and mechanisms driving this are, however, unknown. Translational evidence for the direct use of SCFAs to lower BP in hypertensive patients is warranted, together with identification and selective inhibition of the production of detrimental metabolites associated with low-fibre intake. Compliance with Ethical Standards Conflict of Interest The authors declare no competing interests. Human and Animal Rights and Informed Consent This article does not contain any studies with human or animal subjects performed by any of the authors. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http:// creat iveco mmons. org/ licen ses/ by/4. 0/.
Periodic Plasmonic Nanoantennas in a Piecewise Homogeneous Background Optical nanoantennas have raised much interest during the past decade for their vast potential in photonics applications. This thesis investigates the response of periodic arrays of nanomonopoles and nanodipoles on a silicon substrate, covered by water, to variations of antenna dimensions. These arrays are illuminated by a plane wave source located inside the silicon substrate. Modal analysis was performed and the mode in the nanoantennas was identified. By characterizing the properties of this mode certain response behaviours of the system were explained. Expressions are offered to predict approximately the resonant length of nanomonopoles and nanodipoles, by accounting for the fringing fields at the antenna ends and the effects of the gap in dipoles. These expressions enable one to predict the resonant length of nanomonopoles within 20% and nanodipoles within 10% error, which significantly facilitates the design of such antennas for specific applications. ii Acknowledgments First and foremost I wish to express my gratitude to my supervisors Dr. Pierre Berini and Dr. Derek McNamara for giving me the opportunity to be a part of this research, for their invaluable advice, and for their endless patience and encouragements. This work would not have been possible without their help and support. I am deeply grateful to my family, specially my dearest parents, who made all this possible by their unconditional love and support. Bibliography Chapter 1 Introduction This chapter presents a general introduction to the subject of optical nanoantennas, where the primary focus would be on the rules governing the design of nanomonopoles and nanodipoles, and their agreement or disagreement with classical antenna design rules.The motivation for studying nanoantennas, in general, and nanodipoles, in particular, is addressed and some of the related literature is reviewed.At the end, the scope and organization of this thesis are introduced. Plasmonic Nanoantennas Although the concept of nanoantenna has been around since 1985 [1], it is only in the past decade that (due to the progress made in nanoscale fabrication methods) manufacturing of structures employing nanoantennas seems feasible.Since then, many applications have been found for nanoantennas in e.g.imaging and microscopy, spectroscopy, biosensing, and photovoltaics [1].This growing range of applications has motivated various theoretical, numerical and experimental studies of nanoantennas.A variety of nanoantenna types have been considered ranging from nanoshells to monopoles, dipoles and bowties to Yagi-Uda nanoantennas.While a great number of these studies have concentrated on specific applications of nanoantennas, some have considered the physics behind these plasmonic structures and formed a fundamental understanding of their response behaviour. Developing design rules for monopole, dipole, and bowtie nanoantennas has been of particular interest.Easy-to-use, yet precise rules may improve the efficiency of nanoantennas and improve their applications.Some concepts around nanoantennas have been built up in analogy to classical (RF) antennas. However, fundamental differences exist between classical antennas and nanoantennas due to the properties of metals at IR and visible frequencies. 
Metals are no longer perfect electric conductors at optical frequencies but rather resemble a cold plasma [2].Consequently they support surface plasmon polaritons (SPPs) at their surfaces.SPPs are inherently excited on nanoantennas. This thesis was motivated by an idea of employing nanoantennas for biosensing.A periodic array of nanoantennas was considered for this application, as an isolated nanoantenna is too small comparing to the wavelength, and thus inefficient.To be able to determine the optimum nanoantenna design for the intended application, understanding the behaviour of nanoantennas, and developing appropriate design rules are important.To this end, the work in this thesis was started with a full parametric study of a periodic structure of gold nanomonopoles and nanodipoles in a piecewise homogeneous background, consisted of a silicon substrate in an aqueous medium.This background was chosen in anticipation of the biosensing application, where the sensing medium is aqueous.It was then decided that nanodipoles are more suited to the application of biosensors, because of their highly sensitive gap region which offers significant field enhancement, and their narrower spectral response compared to nanomonopoles or nanobowties, both of which improve photo detection.A schematic of a nanodipole under study is depicted in Fig. 1. The plasmonic mode resonating in a nanomonopole was identified by comparing the electric field components in a yz cross-section of the nanomonopole, illuminated by a plane wave source from below, to the modal electric field components resonating in a plasmonic waveguide of the same cross-section.By characterizing the properties of this mode we were able to explain the trends in the response of the nanomonopoles and nanodipoles.The effective length of a nanomonopole is evaluated from 3D FDTD results, and then an estimate of this effective length is offered by employing a length scaling factor obtained from the weighted average of the decay length of the transverse modal electric field in silicon and in water.A transmission line model is also developed to estimate the effective length of a nanodipole by considering the gap as a parallel plate capacitor.Using this model one can design dipoles of desired dimensions and material that are resonant at a required wavelength. Literature Review This section intends to review some of the previous theoretical and experimental work on properties of nanoantennas, on new nanoantenna designs, and on developing practical rules to simplify their design. Different studies have attempted to achieve these goals by first understanding the properties of nanoantennas and developing theoretical models or equivalent circuits that closely reflect these properties.Among studies that focused on nanoparticles and their properties [3][4][5] Rechberger et al investigated the interactions between two gold nanoparticles [3]. Experimental results are reported for arrays of spherical nanoparticles with different inter-particle distance under perpendicular and parallel illumination. 
The study finds that under parallel illumination reducing the center-to-center distance of every pair of nanoparticles, such that the array eventually looks like an array of nanodipoles, causes a red-shift in the surface plasmon extinction peak (Fig. 2). However, for perpendicular incidence, where the incident field is perpendicular to the long axis of the nanoparticle pairs, a small blue-shift is observed in the extinction peak as inter-particle distances are reduced. These observations are, first, qualitatively explained using a simple dipole-dipole interaction model based on the repulsive and attractive forces between the electric field of the incident light and the plasma electrons in the nanoparticles. Then a 2D quantitative study is performed based on a dipole-pair model, where the interactions of particles in the normal direction are sufficiently small to be ignored. Finally, it is shown that the experimental results are in good agreement with the results of dipole-pair model calculations. A subsequent study mapped the near-field and far-field cross section responses of Au nanomonopoles [6]. By mapping the plasmon dipole resonances in these nanomonopoles, it is demonstrated that nanomonopoles do not obey the rules of the quasistatic regime, unless they are almost spherical in shape. In other words, the spectral response of a non-spherical nanoantenna is not solely dependent on the aspect ratio of the antenna, as is suggested in the quasistatic limit. Rather, the antenna response is highly sensitive to any change in the length and radius of the nanomonopoles, even if the aspect ratio remains constant. Numerical results demonstrated in this study suggest a linear increase in the resonant wavelength of the nanomonopoles as the antenna length is increased, while the radius remains fixed. Looking at the rate of shift in the resonant wavelength of antennas of different radii, one observes a more significant red-shift for larger radii. On the other hand, for a fixed antenna length and increasing radius, the resonance first blue-shifts in agreement with the quasistatic predictions, and then red-shifts as the radius becomes larger. The value of the radius at which this shift happens is different for different antenna lengths. One must note that the resonance of larger-radii monopoles obtained from the near-field results is substantially red-shifted with respect to the resonance of the far-field results. This red-shift is due to "retardation from the resonance in far-field scattering" [6]. Numerical results are then compared to the results obtained from a simple dispersion relation for the cylindrically symmetric mode of an infinite cylinder. Results are somewhat different since the dispersion relations do not consider the end effects of the nanomonopoles, which are crucial for evaluating the correct resonant length of the nanomonopoles. Bowtie nanoantennas are investigated theoretically and experimentally [11][12][13][14][15]. Ding et al. measure the intensity extinction of nanobowties [11], defined as 1 − T/T0, where T (T0) is the intensity of zero-order transmitted light in the presence (absence) of the nanoantennas, as well as the normalized field distributions. Several extinction peaks are identified and associated with fundamental and higher-order resonances according to their field distributions.
It is observed and explained through local mode theory and the propagation of isolated SPPs along the prism edges that increasing the bowtie angle causes the fundamental peak to blue-shift and then red-shift. These analyses are also confirmed by considering the magnitude and phase distributions of the normalized electric field parallel to the long axis of the nanobowties. Electron beam lithography is used to build periodic arrays of silver bowtie nanoantennas. The experimental results confirm the shifts of the fundamental resonance observed in simulations, although the blue-shift is not significant in the experimental results. The first higher-order resonance is evident from the measurements; however, the second higher-order resonance falls outside the acquisition range of this experiment. Fischer and Martin have comparatively studied dipole and bowtie nanoantennas [16]. In this study the spectral response of dipole and bowtie nanoantennas was investigated by varying their length, gap width, substrate index n_s, the index of their surrounding environment n_env, and the bowtie angle. Increasing the antenna length increases the field enhancement in the gap; the authors suggest that this increase in the field enhancement is a result of stronger coupling between the two antenna arms. As the antenna is elongated, its resonance wavelength becomes larger, and hence the effective length of the gap becomes shorter, which leads to stronger coupling between the two arms. Another dimension of the antenna that is examined in this study is the gap width. Decreasing the gap width of the dipole antenna shifts its resonance to longer wavelengths, while the position of the bowtie main resonance remains almost unchanged. The field enhancement is also increased by shortening the gap in both antennas. This increase is stronger in the case of dipoles. The authors suggest that "the spectrum of the bowtie antenna appears to be more determined by the resonance of the two triangular arms, rather than by the coupling between them" [16]. As for n_s and n_env, increasing the indices results in a linear increase of the resonance wavelength. A theoretical wavelength scaling rule for nanomonopoles is introduced by Novotny [18]. It assumes that the nanomonopole is made of linear cylindrical segments of a given radius and that the metal properties are described by the Drude model. A nanorod of known geometry and dielectric constant, located in a dielectric medium, is considered for this study. Taking into account the charge density wave propagating along the rod and the reactance of the rod ends, which increases the antenna length, the effective wavelength is calculated from the incident wavelength, the free-space wave number, and the propagation constant of the charge density wave, which is determined by solving for the TM0 modes of a cylindrical waveguide (the explicit expression is given in [18]). Effective wavelengths corresponding to various incident wavelengths are calculated for nanomonopoles of different radii made of Ag, Au, and Al. The effective wavelength obtained numerically by using the proposed model is then compared to the results of experiments from other studies, and a good agreement between the results is evident. This study importantly states that the effective wavelength of nanoantennas is shorter than the free-space wavelength. However, it does not give a general rule for designing antennas to be resonant at a particular free-space wavelength, as would be required in practical antenna design work.
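For a given rod radius, metal and embedding medium, Novotny's rule reduces to a linear relation of the form λ_eff = n1 + n2 (λ/λp), where λp is the plasma wavelength of the metal [18]. The Python sketch below assumes this linear form only; the coefficients n1 and n2 and the plasma wavelength are placeholder values for illustration and are not taken from [18] or from this thesis. In practice the coefficients would be fitted to modal or experimental data for the radius and background of interest.

    # Minimal sketch of the linear effective-wavelength scaling reported by Novotny [18]:
    # lambda_eff = n1 + n2 * (lambda_0 / lambda_p). The coefficients n1 and n2 below are
    # PLACEHOLDERS; they depend on the rod radius and surrounding medium and must be
    # obtained from the reference (or fitted) for a real design.
    def effective_wavelength(lambda_0_nm, lambda_p_nm, n1_nm, n2_nm):
        """Effective wavelength (nm) under the assumed linear scaling rule."""
        return n1_nm + n2_nm * (lambda_0_nm / lambda_p_nm)

    # Example with hypothetical coefficients for a thin gold rod.
    lam_eff = effective_wavelength(lambda_0_nm=1000.0, lambda_p_nm=138.0,
                                   n1_nm=-100.0, n2_nm=70.0)
    print(f"lambda_eff ~ {lam_eff:.0f} nm, half-wave resonant length ~ {lam_eff / 2:.0f} nm")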
Ding et al. proposed an analytical model of nanoantennas based on a microcavity model [19]. Using this model, expressions are offered for the normal and longitudinal components of the electric field of the SPP mode propagating in the antenna, as well as its propagation constant, and charge and current densities. A silver hybrid dimer nanoantenna design is presented (Fig. 3), which consists of a bowtie-like gap and two rod-like shafts. This nanoantenna is compared to a nanobowtie and a nanodipole of the same dimensions with regard to its near- and far-field properties. 2D simulations are also done for all of the above-mentioned nanoantennas using the boundary element method (BEM). Based on their numerical results the hybrid dimer antenna offers higher near- and far-field enhancements and Q factor than bowtie and dipole nanoantennas. In fact the hybrid dimer nanoantenna possesses benefits originating from the long shafts of the nanodipole, which introduce strong far-field enhancements, as well as the sharp tips of the nanobowtie, which significantly improve the near-field enhancement because of high charge accumulation around the tips. 3D simulations are also done for dipole and hybrid dimer nanoantennas using the finite-difference time-domain method. The trends of the 3D simulation results agree well with those of the 2D simulations. It is also noticed that by modifying the gap size of a dimer nanoantenna one can tune its resonant wavelength. The results suggest that decreasing the size of the gap red-shifts the resonance. Cubukcu and Capasso modeled nanomonopoles as 1D Fabry-Perot resonators for surface plasmons [20]. Through numerical analysis, it is shown that the resonances of nanomonopoles occur at integer multiples of half the effective wavelength of the SPP mode excited in the nanomonopole, rather than at half of the free-space wavelength. Modal analysis of an infinitely long nanowire of the same cross section as the nanomonopole yields the effective mode index n_eff of the nanoantenna. Then the resonance condition is described by l + 2δ = m λ0 / (2 n_eff) (Eq. 2), where m is an integer indicating the order of resonance and δ is the field penetration in vacuum "corresponding to the phase shift acquired upon reflection of the SP mode from the antenna ends. In other words, δ corresponds to the decay length of the displacement current in vacuum increasing the effective antenna length" [20]. The authors suggest that "δ is comparable to the 1/e decay length of the one-dimensional SP mode in the radial direction". Thus, by obtaining n_eff and δ from finite element method modal calculations, one can determine the resonant antenna length using Eq. 2. Results of these calculations are shown for various wavelengths and are compared with the results of finite integration method simulations, where the resonant wavelength is found for a fixed antenna length. This comparison confirms the credibility of the expression suggested in Eq. 2.
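To make the Fabry-Perot picture concrete, the following is a minimal Python sketch of the resonance condition as reconstructed above (Eq. 2, l + 2δ = m λ0/(2 n_eff)). The values of n_eff and δ are illustrative placeholders, not results from [20] or from this thesis; in a real design they would come from a modal calculation at the wavelength of interest.

    # Sketch of the Fabry-Perot model of a nanomonopole [20], using the resonance
    # condition as reconstructed above (Eq. 2): l + 2*delta = m * lambda_0 / (2 * n_eff).
    # n_eff and delta below are illustrative placeholders.
    def resonant_length(lambda_0_nm, n_eff, delta_nm, m=1):
        """Resonant antenna length (nm) of order m under the assumed condition."""
        return m * lambda_0_nm / (2.0 * n_eff) - 2.0 * delta_nm

    # Example: first-order resonance at a 1310 nm free-space wavelength.
    print(f"l_res ~ {resonant_length(lambda_0_nm=1310.0, n_eff=2.4, delta_nm=20.0):.0f} nm")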
Alù and Engheta studied nanoparticles as lumped nanocircuit elements, used for loading and tuning the frequency response of nanodipoles [21]. The optical impedance of these nanoloads, which may be metal or dielectric nanodisks, is evaluated by Z_load = t / (−iωεπa²) (Eq. 3), where t, 2a and ε are, respectively, the height, diameter and permittivity of the nanodisk, and ω is the frequency of the excitation electric field. In this expression the excitation electric field is assumed to be parallel to the nanodisk's axis. The input impedance of the nanodipole, Z_in, is evaluated as the ratio between the applied voltage across the gap and the displacement current that flows through the dipole terminals. The authors suggest that Z_in consists of a parallel combination of the gap impedance, Z_gap, and Z_dip. Thus, by evaluating Z_gap from Eq. 3 and then removing it from Z_in one can obtain Z_dip. For tuning the nanodipole at a particular frequency, one needs to design the load such that Z_gap cancels the reactance X_dip of the dipole at the desired frequency. This can be done by changing the geometry or permittivity of the nanodisk filling the gap. Nanodipole and nanocross plasmonic antennas were studied by Hecht et al. [22]. For nanodipoles of different lengths, the amplitude and phase of the electric field component localized in the gap of a dipole antenna at the wavelength of illumination is calculated with respect to the driving field in the absence of the antenna. It is shown that the enhanced field peaks for a dipole of a certain length, which is the resonant length of the nanodipole for the incident wavelength. The phase of the field component in the gap shifts from 0° to 180°. Therefore it is possible to choose the appropriate length of a dipole to achieve desired amplitude and phase values. An asymmetric cross antenna is then proposed to convert linearly polarized propagating waves to circularly polarized fields that are confined in the common feed gap area of the antenna. By properly choosing the length of the antenna arms, one can shape the polarization of the field component inside the gap as desired. Combining known antenna designs to produce higher field enhancement and more sensitivity, Verellen et al. introduced the nano-cross (XI) geometry shown in Fig. 4 for high-sensitivity refractive index sensing [23]. In this geometry the dipole modes in the bar (DI) and cross (Dx) and the higher-order quadrupole mode in the cross (Qx) couple and produce a higher-energy super-radiant anti-bonding mode together with sub-radiant Fano modes, and figures of merit (FoMs) for refractive index sensing are reported for these resonances. These FoMs are the highest reported to date (according to this article) for arrays of particles (higher values are reported for single-particle measurements). The strong spectral shift is attributed to the long resonant wavelengths, which are just below the water absorption band at 1900 nm, and the reduced substrate effect, due to partial removal of the substrate. Simulations show a 30% increase in peak shift sensitivity for the BDD and 45% for the BQD Fano modes upon etching. Thesis objectives and outline This thesis intends to investigate the response of a periodic array of linear nanoantennas, namely nanomonopoles and nanodipoles, in a piecewise homogeneous medium to variations of antenna dimensions, such as length, gap size, width, thickness, and the pitch. The response of the system is defined as far-field transmittance, reflectance and absorptance. This thesis also aims at finding expressions that approximately determine the resonant length of nanomonopoles and nanodipoles.
The bulk of this thesis, chapter 2, consists of a scientific article which is submitted for publication. Chapter 2 reports a parametric study of the response of periodic arrays of nanomonopoles and nanodipoles. An in-depth analysis of the physics of SPPs in nanomonopoles and nanodipoles is also provided by considering these nanoantennas as plasmonic metal stripe waveguides. Theoretical and equivalent circuit models are developed to predict the resonant length of nanomonopoles and nanodipoles, respectively. Chapter 3 presents conclusions, lists the thesis contributions and makes suggestions for future work. Chapter 2 Periodic Plasmonic Nanoantennas in a Piecewise Homogeneous Background Summary A parametric study of periodic nanomonopoles and nanodipoles in an inhomogeneous background consisting of a silicon substrate and an aqueous medium on top is presented. By viewing the nanoantennas as plasmonic waveguides and performing modal analysis, the response characteristics of the system were determined and theoretical models for predicting the resonant length of nanomonopoles and nanodipoles were offered. These models rely on the results of the modal analysis. Author Contribution The results provided in this section are to be submitted to the journal Optics Express. I performed the simulations, generated and interpreted the results, formulated the theoretical model, and wrote the manuscript. Prof. Berini and Prof. McNamara contributed to the theoretical formulation and the interpretation of the results, and revised the manuscript. Article The manuscript submitted follows. Periodic plasmonic nanoantennas in a piecewise homogeneous background Saba Siadat Mousavi, Pierre Berini, and Derek McNamara. Periodic arrays of gold nanomonopoles and nanodipoles on a silicon substrate covered by water are studied, viewing the nanoantennas as plasmonic nanowire waveguides and identifying the mode they support. We determine the propagation characteristics of this mode as a function of nanowire cross-section and wavelength, and we relate the modal results to the performance of the nanoantennas. An approximate expression resting on modal results is proposed for the resonant length of nanomonopoles, and a simple equivalent circuit, also resting on modal results, but involving transmission lines and a capacitor (modelling the gap), is proposed to determine the resonant wavelength of nanodipoles. The expression and the circuit yield results that are in good agreement with the full computations, and thus will prove useful in the design of nanoantennas. Introduction Nanoantennas have been widely investigated, both experimentally and numerically, during the past decade. Nanomonopoles, nanodipoles and nanobowties have been of particular interest. Although there are some conceptual similarities between optical nanoantennas and classical microwave antennas, the physical properties of metals at optical frequencies dictate applying a different scaling scheme. Moreover, feeding procedures are very different between classical and optical nanoantennas, since driving nanomonopole, nanodipole, and nanobowtie nanoantennas using galvanic transmission lines is not an option, due to their small size. Instead, localised oscillators or incident beams are often used to illuminate nanoantennas [1].
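As a brief aside on the point about metal properties at optical frequencies, the Python sketch below evaluates a Drude-model permittivity. The parameters are representative, textbook-style values for gold and are assumptions for illustration only; the simulations in this work rely on tabulated optical constants (Palik [14]) rather than a Drude fit. The large negative real part at near-infrared wavelengths illustrates why gold behaves as a cold plasma supporting SPPs rather than as a perfect electric conductor.

    import numpy as np

    # Minimal sketch of a Drude permittivity: eps(omega) = eps_inf - wp^2 / (omega^2 + 1j*gamma*omega).
    # The parameters below are representative textbook-style values for gold (ASSUMED),
    # not the optical constants used in this work.
    eps_inf = 9.0      # background permittivity (assumed)
    wp = 1.37e16       # plasma frequency, rad/s (assumed)
    gamma = 1.0e14     # collision rate, rad/s (assumed)
    c = 3.0e8          # speed of light, m/s

    def drude_eps(wavelength_m):
        omega = 2.0 * np.pi * c / wavelength_m
        return eps_inf - wp**2 / (omega**2 + 1j * gamma * omega)

    for lam_nm in (600, 800, 1310, 1550):
        eps = drude_eps(lam_nm * 1e-9)
        print(f"{lam_nm} nm: eps = {eps.real:.1f} + {eps.imag:.2f}j")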
One of the primary differences between classical monopoles and dipoles and their optical counterparts is their resonant length which is considerably shorter than /2 [2], where is the free-space incident wavelength.In [2] Novotny introduces useful analytical expressions for wavelength scaling of free-standing cylindrical nanomonopoles of different radii surrounded by a dielectric medium.The results of this study, however, are not easily applicable to other nanoantenna geometries such as dipoles and bowties, or non-cylindrical nanoantennas.Adding a substrate, which is often required in practice, also necessitates some modifications to wavelength scaling expressions. As in classical antennas, the spectral position of the resonance in the optical regime depends strongly on the geometry of the antennas.Antenna length has been investigated as a crucial tuning parameter in nanomonopoles, nanodipoles, and nanobowties [3][4][5][6].Capasso and Cubukcu proposed a resonant length scaling model for free-standing cylindrical nanomonopoles by using the decay length of the surface plasmon mode excited in the corresponding plasmonic waveguide as the scaling factor [4].In the case of dipoles and bowties the gap size and gap loading play an important role in determining the position of resonance [3,[5][6][7][8][9][10][11].Alu and Engheta have looked at nanoantennas as lumped nanocircuit elements and investigated some of their properties such as optical input impedance, optical radiation resistance, and impedance matching [7,10].In these studies the gap is considered as a lumped capacitor, which is connected in parallel to the nanodipole.This model identifies the gap length and gap loading as additional tuning parameters of nanodipoles.Fischer and Martin show that in nanodipoles, decreasing the gap shifts the resonance towards the red region of the spectrum, whereas in the case of bowties the resonance hardly shifts as a result of changing the gap [5].High intensity fields in the dipole and bowtie gaps are strongly sensitive to the index of the material inside the gap [5,7].Also the effects of variations in the bow angle of a bowtie antenna on its spectral response have been investigated numerically and experimentally [5,9], showing that the bow angle can be used as a tuning parameter. Semi-analytical investigations have been done on nanodipole, nanobowtie, and hybrid dimer nanoantennas based on a microcavity model [11], and analytical expressions were suggested for surface and charge densities corresponding to the SPP mode propagating in the nanoantenna, which in turn represent near-and far-field properties of these nanoantennas. Nanoantennas, in general, and the three above-mentioned types in particular, have found many applications in nanoscale imaging and spectroscopy [1], photovoltaics [1], and biosensing [12].With a growing range of applications, developing precise, yet practical design rules for nanoantennas seems essential. 
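To make the lumped-capacitor view of the gap concrete, the sketch below gives a parallel-plate estimate of the gap capacitance, in the spirit of the equivalent circuit developed later in this paper; the dimensions and the gap permittivity are placeholders, not the parameters of the simulated arrays. In this picture the corresponding load impedance is Z_gap = 1/(−iωC), so increasing the permittivity of the material in the gap, or reducing g, increases C, consistent with the gap acting as an additional tuning parameter.

    # Rough parallel-plate estimate of the gap capacitance of a nanodipole,
    # as in the lumped-element picture discussed above. All values are
    # illustrative placeholders, not the dimensions used in the simulations.
    eps0 = 8.854e-12  # F/m

    def gap_capacitance(width_m, thickness_m, gap_m, eps_rel):
        """Parallel-plate estimate: C = eps_rel * eps0 * (w * t) / g, in farads."""
        return eps_rel * eps0 * (width_m * thickness_m) / gap_m

    # Example: w = 50 nm, t = 30 nm, g = 20 nm, gap filled with water
    # (optical-frequency relative permittivity taken as ~1.77, an assumption).
    c_gap = gap_capacitance(50e-9, 30e-9, 20e-9, 1.77)
    print(f"C_gap ~ {c_gap * 1e18:.2f} aF")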
Although many theoretical and experimental studies have been carried out on various aspects of nanoantennas and their applications, a systematic study of their spectral response to variations in design parameters is lacking in the literature.In this paper, we present a full parametric study of the spectral response of infinite arrays of rectangular gold nanomonopoles and nanodipoles on a silicon substrate covered by water.(The materials were selected in anticipation of an eventual biosensing application to be described elsewhere; however, the study remains otherwise generic.)We vary the nanoantenna length (l), width (w), thickness (t), and, in the nanodipole case, the gap length (g), as well as the vertical and horizontal distance (p, q) between any two adjacent nanoantennas in an infinite array.Physical insight into the resonant response of arrays of nanoantennas is then provided through modal analysis of the corresponding plasmonic nanowire waveguides.A simple rule is proposed to determine the effective length of a nanomonopole in a piecewise homogeneous background from the modal properties of the corresponding nanowire.An equivalent circuit using transmission lines and a capacitor is proposed for the nanodipoles, with the capacitor taking into account the effects of the gap.This simple rule and model should become helpful aids in the design of such nanoantennas.(In the remainder of this paper we refer to nanoantenna, nanomonopole and nanodipole simply as antenna, monopole and dipole.) The antenna geometry and the method used in its study are discussed in Section 2. The parametric study of monopole and dipole arrays is presented in Section 3. Section 4 discusses the operation of the antennas from a modal viewpoint and gives expressions for the resonant length of monopoles and the equivalent circuit of dipoles.Section 5 gives our conclusions. Geometry and Methods Figure 1 gives a sketch of the dipole geometry under study.The array cell is symmetric about the x and y axes.An infinite array is constructed by repeating the cell along x and y with pitch dimensions p and q (respectively).A plane wave source having an electric field magnitude of 1 V/m, located in the silicon substrate, illuminates the array from below at normal incidence. The finite difference time domain (FDTD) method [13], with a 0.5×0.5×0.5 nm 3 mesh in the region around the antenna, was used for all simulations.Palik's material data [14] were used for gold and silicon, and Segelstein's data [15] for water.Transmittance and reflectance reference planes were located 2.5 µm above and below the silicon-water interface, respectively, parallel to the interface.(A convergence analysis was performed where the resonant wavelength of an array was tracked as a function of mesh dimensions in the neighborhood of the antenna.Mesh dimensions were halved successively, starting from a 2×2×2 nm 3 cubic mesh to a 0.25×0.25×0.25 nm 3 cubic mesh, over which the resonant wavelength was observed to trend monotonically.The wavelength of resonance for mesh dimensions of zero (infinitely dense) could thus be extrapolated using Richardson's extrapolation formula [16].Comparing this extrapolated wavelength to the wavelength obtained for a finite mesh of 0.5×0.5×0.5 nm 3 reveals a ~2% error, which considering the broad spectral response of the structures of interest, was deemed acceptable.) 
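The mesh-convergence procedure described above can be summarized in a few lines: the resonant wavelength is computed on successively halved meshes and extrapolated to zero mesh size. The sketch below uses hypothetical resonant wavelengths rather than the values obtained in this work; it estimates the apparent order of convergence from three mesh levels and then applies the standard Richardson formula to the two finest levels.

    import math

    # Richardson extrapolation of a quantity computed on meshes of size h, h/2 and h/4.
    def richardson_extrapolate(f_h, f_h2, f_h4):
        """Return (extrapolated value at zero mesh size, apparent order of convergence)."""
        # Apparent order p from the three values, assuming error ~ C * h^p.
        p = math.log2(abs(f_h - f_h2) / abs(f_h2 - f_h4))
        return f_h4 + (f_h4 - f_h2) / (2.0**p - 1.0), p

    # Hypothetical resonant wavelengths (nm) for 2 nm, 1 nm and 0.5 nm meshes;
    # these are NOT the values computed in this work.
    lam_extrap, order = richardson_extrapolate(1225.0, 1205.0, 1195.0)
    print(f"extrapolated resonance ~ {lam_extrap:.1f} nm (apparent order p ~ {order:.1f})")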
The transmittance T was calculated as a function of frequency (wavelength) using

T(f) = ∫S Re{Pm(f)}·dS / ∫S Re{Ps(f)}·dS, (1)

where Pm,s is the Poynting vector at the monitor and source locations, f is the frequency and S is the surface of the reference plane where the transmittance is computed [13]. Eq. (1) was also used to compute the reflectance R of the system by changing S to the appropriate reference plane. The absorptance is then determined as A = 1 - T - R. Throughout this paper the resonant wavelength (λres) refers to the free-space wavelength at which the transmittance curve reaches its minimum value. Reflectance resonance and absorptance resonance refer to the wavelengths at which reflectance and absorptance reach their minima, respectively (in general these three resonant wavelengths are different).

Parametric Study of an Array of Antennas

A rigorous analysis of the design parameters of the two types of antennas is carried out by varying one design parameter at a time and monitoring the response of the system. Results are presented for monopole and dipole antennas in Sec. 3.1 and Sec. 3.2, respectively. These results will be useful as a guideline for design and to relate the antenna performance to its geometry. The minimum values of g, t and w reflect approximate limitations of an eventual fabrication process.

Periodic Array of Monopoles

An array of monopoles (g = 0) with fixed pitch is a relatively simple, yet effective resonant structure. Here, we investigate such an array by determining the influence of changing each design parameter (independently) on the system response, while keeping the other parameters fixed, including the pitch (p×q), which is maintained at 300×300 nm². We consider variations of length l, width w and thickness t of the monopoles.

Length (l)

Length is one of the main design parameters of antennas. As shown in Fig. 2, increasing the length of a monopole shifts its transmittance, reflectance, and absorptance resonances to longer wavelengths. The red shift is expected by analogy to classical antennas, where resonance occurs when the antenna length is roughly half a wavelength. Increasing the length thus increases the wavelength at which the antenna is resonant. Increasing the length also decreases slightly the absorptance in the monopoles and broadens the absorptance response.

Field enhancements (not shown here) very much depend on the location along the monopoles; the only regions with field enhancement are the ends of the monopole. In contrast, as discussed in Sec. 3.2, the gap region of dipoles generates highly enhanced fields, making dipoles very sensitive to changes in the gap region.

Width (w)

Monopole width is another design parameter. Considering the system response to changes in width w, shown in Fig. 3, we note that increasing the latter blue-shifts the transmittance, reflectance and absorptance resonances. We also note that the amount of shift decreases as Δw/w decreases. The reasons for this behaviour will become clear in Sec. 4, where we examine the modal characteristics of the corresponding nanowire waveguides.

The absorptance level, as shown in Fig. 3(c), does not follow a linear trend with increasing monopole width. A maximum value of absorptance is evident, unlike the linearly decreasing trend that we observed as a result of increasing the length of the monopoles.

Thickness (t)

The response of the system to variations in thickness t is shown in Fig. 4.
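As a small illustration of how the quantities defined by Eq. (1) are post-processed, the Python sketch below builds the absorptance from T and R and locates the three resonant wavelengths; the spectra are synthetic stand-ins, not simulation output.

```python
import numpy as np

# Illustrative wavelength grid (nm) and made-up transmittance/reflectance
# spectra; in practice these come from the FDTD reference-plane monitors.
wavelength = np.linspace(1500.0, 3000.0, 601)
T = 0.75 - 0.30 * np.exp(-((wavelength - 2268.0) / 180.0) ** 2)
R = 0.20 - 0.05 * np.exp(-((wavelength - 2300.0) / 220.0) ** 2)

A = 1.0 - T - R  # absorptance, as defined in the text

lam_res_T = wavelength[np.argmin(T)]   # transmittance resonance (minimum of T)
lam_res_R = wavelength[np.argmin(R)]   # reflectance extremum
lam_res_A = wavelength[np.argmax(A)]   # wavelength of peak absorptance

print(f"transmittance resonance ~ {lam_res_T:.0f} nm")
print(f"reflectance extremum    ~ {lam_res_R:.0f} nm")
print(f"peak absorptance at     ~ {lam_res_A:.0f} nm")
```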
As with the width, increasing the thickness causes a blue-shift in the resonant wavelengths.The absorptance peaks at t = 30 nm, while the amount of shift of the resonant wavelength decreases as Δt/t decreases.This behavior will also be explained in Sec. 4 where we discuss the modal characteristics of the corresponding nanowire waveguides. Periodic Array of Dipoles A region of highly localised, enhanced fields is one of the main attractions of dimer antennas, such as dipoles and bowties.Here we chose to study a periodic array of dipoles not only because of its similarities to an array of monopoles, but also for its advantage over monopoles, namely, having a gap region with highly concentrated fields, and its sharper wavelength response compared to bowties and monopoles.In this section we study the response of an array of dipoles to variations in individual dipole length, gap, width and thickness. Length (l) Figure 5 shows the response of a periodic array of dipoles to changes in the length from l=190 to 280 nm in steps of 10 nm, while w, t, g, p and q remain fixed.As is evident from Fig. 5(a), increasing the length of the dipole red-shifts the resonant wavelength, decreases the amount of power transmitted at resonance, and increases the level of reflectance of the system, but the absorptance remains almost unchanged. Elongating the dipole while keeping the pitch constant means increasing the proportion of gold surface area covering the silicon-water interface (and thus intercepting a greater fraction of the incident wave) resulting in more reflection and less transmission.The electric field enhancement on resonance is calculated at the center of the gap, 3 nm above the silicon-water interface (a representative location in the gap in H 2 O) with respect to the electric field at the same location in the absence of the antenna, and is shown in Fig. 5(d).The electric fields in the absence of the antenna are computed at the  res of each corresponding dipole.The difference between the electric fields at different wavelengths in the absence of the dipole is, however, negligible, as expected.The field enhancement, although slightly decreasing as l increases, is relatively constant over the range of lengths, which implies the electric field distribution in the gap does not change much by increasing the length of the dipoles.However, as the antenna length increases, the coupling between any two adjacent antennas (along the x-axis) increases.This means higher field localisation at antenna ends, which leads to slightly less field localisation in the gap as the antenna length increases, explaining the trend of Fig. 5(d).This is also confirmed in Fig. 6 which shows the magnitude of the xcomponent of the electric field along the antenna length, taken at the silicon-gold interface at y = 0 at the resonant wavelength for each case.This figure clearly shows coupling between two neighbouring dipoles for this pitch p.As the length increases the distance between the ends of any two adjacent dipoles becomes smaller.When the dipole is long enough, such that the distance between two neighbouring antennas is the same as the gap length, the electric fields at the gap are equal to those at the ends. 
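The field-enhancement figure of merit used in Figs. 5(d), 8(d), 10(d) and 13(d) is the ratio of the gap field to the incident field at the same probe point, quoted as 10log(|Ex|/|Einc|). A minimal Python sketch of that bookkeeping, with invented field values, is given below.

```python
import numpy as np

# Placeholder complex field values at the gap-centre probe point
# (x = 0, y = 0, z = 3 nm): with the antenna present and with it removed.
Ex_with_antenna = 45.0 + 12.0j   # V/m, illustrative
Ex_incident = 0.9 + 0.1j         # V/m, illustrative

enhancement_db = 10.0 * np.log10(abs(Ex_with_antenna) / abs(Ex_incident))
print(f"field enhancement ~ {enhancement_db:.1f} dB")
```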
Gap (g) Figures 8(a)-(c) show the transmittance, reflectance and absorptance of the system as a function of wavelength for different antenna gap lengths g.While the transmittance increases with increasing gap length, the reflectance decreases, as the proportion of gold covering the surface becomes smaller.Increasing the gap of an array of dipoles moves the response from the limit g = 0, corresponding to an array of monopoles of length l, to the limit g = p -l, corresponding to an array of monopoles of length (l -g)/2.At this limit, the length of the antennas are reduced to less than a half of their original value, which according to classical antenna theory implies a blue-shift in the resonance.This is indeed observed in the results of Fig. 8.There is also a capacitance associated with the gap that explains in part the wavelength shift, as discussed in Sec 4.4. The field enhancements, shown in Fig. 8(d), are calculated as described in Sec.3.2.1.A steep decrease in the field enhancement is evident as the gap increases, which is corroborated by the field distributions of Fig.We note from Fig. 9 that in dipoles with a small gap, fields are appreciably larger in the gap than at the ends.However, as the gap gets larger the fields are almost equally distributed at both ends of a single arm of the dipole.Localised fields in the gap region of dipoles make small-gap antennas highly sensitive to changes in the gap region. Width (w) Figure 10 shows the transmittance, reflectance and absorptance of the array of dipoles as a function of wavelength for different antenna widths w.From Fig. 10(a) one can clearly see that increasing the width of the antenna from 4 to 60 nm blue-shifts the position of the resonance and lowers the level of transmittance at resonance.Figs.10(b) and (c) show a similar shift in the reflectance and absorptance, respectively.The amount of shift decreases as w/w decreases.This property is explained in terms of the characteristics of the mode excited in the dipole arms, as will be discussed in Sec.4.Fig. 10(d) shows field enhancements calculated at the center of the dipole gap, 3 nm above the silicon-water interface.One can clearly see that w=20 nm gives the maximum enhancement of the electric field, while w=4 nm yields the minimum enhancement. From Fig. 11, which shows the total electric field over the x-y cross-section of the antenna where the field enhancements are calculated, one can clearly see that at w=20 nm the fields at the center of the gap reach their largest value, resulting in the strongest field enhancement.However, for very small and very large widths, although fields are strongly localised at the extremities, the non-uniform distribution of fields over the gap yields smaller field values at the center of the gap.In Fig. 12 the total electric field is shown over a y-z cross-sectional plane taken at the middle of a dipole antenna arm.As the antenna gets wider, the field becomes less intense across the y-z plane, and less coupling occurs between the fields localised at the left and right edges.This establishes two separate localised field regions (Fig. 12(c)), as opposed to one high intensity region that exists in narrow-width dipoles (Fig. 12(a)).Clearly, the antenna fields are strongly dependant on w. Thickness (t) A similar study was performed to understand the effects of changing the thickness t of the dipole.Increasing the thickness causes a blue-shift in the resonances, as shown in Fig. 
13. The amount of shift decreases for decreasing Δt/t. This property is explained in terms of the characteristics of the mode excited in dipole arms, as will be discussed in Sec. 4. Fig. 13(d) shows field enhancements calculated at the center of the dipole gap, 3 nm above the silicon-water interface. One notes that the field enhancement does not depend strongly on thickness. In Fig. 14 the total electric field is shown over a y-z cross-sectional plane taken at the middle of a dipole antenna arm. As the antenna gets thicker, the field becomes more localised near the silicon-water interface but does not change appreciably in character (compared to changes in width - Fig. 12).

Alignment of Transmittance, Reflectance and Absorptance Extrema

From Figs. 2-5, 8, 10 and 13, we note that the wavelength corresponding to the minimum of the transmittance curve does not line up with extrema in the reflectance or the absorptance curves for a given array of dipoles. To determine the reason, the imaginary parts of the permittivity of gold and water were made zero, one at a time. Results are shown in Fig. 15, where one can clearly see that the positions of the minima in the transmittance and reflectance curves are aligned if the permittivity of gold is purely real, which implies that the misalignment is due to the absorption of gold. As the imaginary part of the permittivity of gold is forced to zero, no energy is absorbed by the antennas, which makes the absorptance close to zero (small losses remain in water); thus the illuminating beam is partially transmitted and reflected. This also shows that most of the energy in the system is absorbed by the gold and not by the water. In fact, water absorption is negligible over most of the wavelength range shown (for the reference planes adopted). Fig. 15 shows the transmittance, reflectance and absorptance curves on the same scale, which clearly demonstrates their relative values.

Full Width at Half Maximum

In this section we consider the full-width-at-half-maximum (FWHM) of the absorptance response of the arrays. We determine the FWHM by finding the difference between the two wavelengths corresponding to the half value of the peak of each response curve. These results are shown on the left vertical axes (Δλ) in Figs. 16(a)-(d) as a function of each design parameter. We convert the Δλ values to the frequency domain using

Δν = (c/λres²)Δλ, (2)

where c is the speed of light in free space and ν = ω/(2π) is the frequency, and we plot this on the right axis of each figure. Fig. 16(a) shows an increasing FWHM as the length of the dipoles increases. This is caused by an increase in the loss of the antennas due to their longer length, resulting in broadening (see also Sec. 4.2). The same argument holds for the change in FWHM as a function of gap length: by increasing the length of the gap in a dipole of fixed length, we effectively make the dipole arms shorter, thus decreasing the loss and the FWHM, as shown in Fig. 16(b). The FWHM is shown in Figs. 16(c) and (d) as a function of dipole width and thickness. Here, as the length of the dipoles is fixed, the only contributing factor is the change in the attenuation of the mode resonating in the antenna - it increases as λ decreases (see Sec. 4.2). Thus, by increasing the width and thickness of dipoles, their FWHM (Δν) decreases.
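A minimal Python sketch of the FWHM bookkeeping described above follows; it locates the two half-maximum wavelengths of a synthetic absorptance peak and converts the width to frequency using the differential relation Δν ≈ cΔλ/λres², with all numerical values invented for the example.

```python
import numpy as np

c = 2.998e8  # speed of light (m/s)

# Made-up absorptance spectrum with a single peak (wavelengths in nm).
lam = np.linspace(1500.0, 3000.0, 3001)
A = 0.4 * np.exp(-((lam - 2268.0) / 120.0) ** 2) + 0.02

above = lam[A >= A.max() / 2.0]        # wavelengths above half maximum
dlam = above.max() - above.min()       # FWHM in wavelength (nm)
lam_res = lam[np.argmax(A)]

# Convert to a frequency width (THz) using dnu ~ c*dlam/lam_res^2.
dnu_THz = c * (dlam * 1e-9) / (lam_res * 1e-9) ** 2 / 1e12
print(f"FWHM: {dlam:.0f} nm  (~{dnu_THz:.0f} THz at {lam_res:.0f} nm)")
```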
Field Decay from the Monopole Ends - Effective Length Leff

Generally, resonance depends on a balance of stored electric and magnetic energies. Energy is stored not only over the physical length of a monopole but also in the fields decaying at its ends (as in microstrip resonators at microwave frequencies, viz. the end fringing fields [17]). Thus, a monopole appears to have an effective length (Leff) which is greater than its physical length. The longitudinal electric field component, Ex, is used to measure the 1/e field decay beyond the antenna ends. Fig. 17 shows Ex along the x-axis at y = 0 for several heights (z). The average of these decay lengths is denoted by δa (also termed the length correction factor) and used to determine the effective monopole length Leff as

Leff = l + 2δa. (3)

We could also take the electric displacement Dx or the polarization density Px to measure the length correction factor. Not surprisingly, the measures yield essentially the same value regardless of their definition. The results show that the antenna fields (which can be thought of as equivalent current densities) penetrate the background a non-negligible distance (~20 nm) beyond the ends of the monopole. In Sec. 4, we propose an alternative method to estimate δa, which does not require FDTD modelling.

Surface Plasmon Mode of the Antennas

The antennas investigated in the previous section are formed from rectangular cross-section Au nanowires in a piecewise homogeneous background. It is known that a thin, wide metal stripe in a symmetric [18] or asymmetric [19] background supports several surface plasmon modes. The nanowire comprising the antennas is very similar in structure to the asymmetric stripe [19]. It is thus surmised that it operates (and resonates) in a surface plasmon mode of the nanowire, compatible with the geometry and the excitation scheme (an x-polarised plane wave). In this section we identify the mode of operation of the antenna, we relate its propagation characteristics on the nanowire to the performance of the antennas, and we propose design models resting on modal results to predict the performance of the antennas.

Modal identification

The surface plasmon mode that is excited on the antennas must first be identified. To this end, we found the modes and their fields on a nanowire waveguide of the same cross-sectional configuration as one of the monopoles analysed in Sec. 3.1.1 (l = 210 nm, w = 20 nm, t = 40 nm). The wavelength for the modal analysis was set to λ = λres = 2268 nm, which is the resonant wavelength of the aforementioned monopole. In order to remain consistent with the FDTD computations, the finite-difference mode solver in Lumerical was used and the same mesh as in the antenna cross-sectional plane (y-z plane, 0.5 nm mesh) was adopted. The same material properties for gold, silicon and water were retained.

Figures 18(a)-(b) show the real part of the transverse electric fields (which are at least 10× larger than their corresponding imaginary parts) and Fig.
18(c) shows the imaginary part of the longitudinal electric field (which is significantly larger than its real part) of the nanowire mode of interest computed using the mode solver. These field components are compared to the corresponding electric field components distributed over a y-z cross-section taken near the centre of the monopole antenna in Figs. 18(d)-(f), computed using the FDTD method. A very close resemblance between all corresponding field distributions is apparent from the results, suggesting that the mode of operation is correctly identified and that the antenna operates in only one surface plasmon mode (monomode operation). Based on the distribution of Ez we identify the mode as the sa_b^0 mode [18,19]. The longitudinal (Ex) field component of the monopole (Fig. 18(f)) has a large background level because it consists of the sum of the surface plasmon mode field and the incident (plane wave) field.

In general, the surface plasmon modes that are excited on an antenna depend on the polarisation and orientation of the source. Modes that share the same symmetry as the source, and that overlap spatially and in polarisation with the latter, can be excited.

Effective Index and Attenuation

Now that we have successfully identified the mode resonating in the monopoles, or each arm of the dipoles, we can evaluate the effective refractive index neff and the attenuation α of this mode as a function of the nanowire cross-section. For this purpose, the incident wavelength was arbitrarily fixed to λ0 = 1400 nm and w and t were changed, one at a time, to determine their influence on neff and α. From these results we then explain some of the trends observed in the parametric study of Sec. 3. Figures 19(a) and (b) give the computed results. Evidently, the effective index and the attenuation decrease with increasing nanowire width and thickness. Therefore, increasing the width or thickness of an antenna, while keeping its length fixed, results in a blue-shift of its resonant wavelength, because

λres ≈ 2neff Leff. (4)

Effective Length of a Monopole Based on Modal Analysis

We wish to use the results of the modal analysis to estimate δa and the physical length l of the monopole required for resonance at a desired λres. An estimate, inspired from RF antenna theory, of the required physical length would be l = λres/2neff, where neff is obtained from modal analysis at the desired λres. However, this is incorrect because we know from the parametric study (Section 3) that the monopole operation is such that it appears longer than its physical length due to fields extending beyond its ends (Fig. 17), i.e., Leff > l. We thus propose the following alternative relation:

Leff = λres/(2neff). (5)

The nanowire of Sec. 4.1 (w = 20 nm, t = 40 nm) was analysed at different wavelengths, corresponding to the resonance wavelengths λres of monopoles of length l (Fig. 2) in a pitch large enough to eliminate the coupling effects between neighboring monopoles (p = q = 700 nm). (In fact, the values of neff were obtained from the data of Fig.
20 by interpolation at the required wavelengths.) The decay of |Ez| of the mode away from the nanowire along the z-axis was evaluated. In the positive z-direction (into the H2O) |Ez| falls to 1/e of its value a distance δw from the nanowire surface. In the negative z-direction (into the Si) |Ez| falls to 1/e of its value a distance δs from the nanowire surface. The decays thus obtained from |Ez| of the mode along with neff are summarised in Table 1. (In general δs ≠ δw, in which case we would use a weighted decay length correction factor δm = (1-τ)δw + τδs, where τ/(1-τ) is defined as the ratio of |Ez| at the Au-Si interface to |Ez| at the Au-H2O interface.) Using δm from Table 1, we find the estimated physical length of the monopoles lest as summarised in Table 2. Our estimated lengths are all within 20% of the physical length, which is quite acceptable considering the relatively broad response of the monopoles. Part of this error may be caused by numerical inaccuracies due to the finite mesh size used in the FDTD and modal analyses.

Transmission Line Model of Dipoles

A transmission line model with a lumped element as shown in Fig. 21 is proposed to account for the effect of the gap on the position of the resonant wavelength of dipoles observed in Fig. 8. Fig. 22 shows the real part of Zw over the y-z cross-section (w = 20 nm, t = 40 nm) of the nanowire, computed using Eq. (10). The nanowire waveguide has an inhomogeneous cross-section, so the mode does not have a unique Zw. By averaging Zw over a 300×300 nm² cross-section we obtain Zw = 275 Ω. Note that the small region inside the metal where values of Zw become large (z ~ 25 nm, y = 0) is due to the denominator of Eq. (10) becoming close to zero - Zw is non-physical here, so this region was removed from the averaging calculations. Going back to Eq. (9), we now solve for λres and plot the results in Fig. 23 as a function of gap length. We also plot the values obtained from the FDTD analysis of dipoles of l = 210 nm, w = 20 nm, t = 40 nm and variable gap lengths. The pitch was set to 700×700 nm² to eliminate coupling effects between antennas in the FDTD analysis, thus making the results directly comparable to the results of modal analysis. Very good agreement is noted.

Concluding Remarks

We performed a full parametric study of periodic plasmonic monopoles and dipoles in a piecewise homogeneous environment consisting of a silicon substrate and an aqueous cover. The study considered three system responses: transmittance, reflectance and absorptance. The responses were evaluated numerically and the results interpreted. Increasing the length redshifts the resonance of monopoles and dipoles, whereas increasing the width, thickness and gap causes a blue-shift in their responses. We show that such trends are expected by identifying the surface plasmon mode that is excited in the antennas (the sa_b^0 mode [18,19]) and computing its effective index and attenuation as a function of geometry and wavelength. The field enhancement (|Ex|/|Einc|) and FWHM of dipoles were also computed, yielding values of up to ~100 in g = 4 nm gaps, and 30 to 40 THz, respectively.
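Assuming the relations reconstructed above (Eqs. (3) and (5)) and the weighted decay-length correction δm, the monopole design recipe can be condensed into a few lines of Python; the input values below are illustrative placeholders rather than the tabulated results of this study.

```python
def estimate_monopole_length(lam_res_nm, n_eff, delta_w_nm, delta_s_nm, tau):
    """Estimate the physical monopole length needed to resonate at lam_res_nm:
    L_eff = lam_res / (2 n_eff), corrected at each end by the weighted 1/e
    decay length delta_m = (1 - tau)*delta_w + tau*delta_s."""
    L_eff = lam_res_nm / (2.0 * n_eff)
    delta_m = (1.0 - tau) * delta_w_nm + tau * delta_s_nm
    return L_eff - 2.0 * delta_m

# Illustrative inputs: target resonance, modal effective index, 1/e decay
# lengths into water and silicon, and the weighting factor tau.
l_est = estimate_monopole_length(lam_res_nm=2268.0, n_eff=4.8,
                                 delta_w_nm=18.0, delta_s_nm=22.0, tau=0.6)
print(f"estimated physical length ~ {l_est:.0f} nm")
```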
We proposed an expression resting on modal results (for the surface plasmon mode excited in the antennas) to predict the resonant length of a monopole given its cross sectional dimensions and the required resonant wavelength.The expression, which takes into account field extension beyond the antenna ends, estimates the physical length of monopoles to within ~20% when compared with the FDTD results.Finally, we proposed a simple equivalent circuit, also resting on modal results, but involving transmission lines and a capacitor which models the gap, to determine the resonant wavelength of dipoles.This circuit successfully estimates the resonant wavelength of a dipole to within ~10% when compared to the FDTD results.The expression and the equivalent circuit should prove useful as design guidelines for optical monopole and dipole antennas. Convergence Analysis Convergence analysis was done by tracking the resonant wavelength of a nanoantenna for dx=dy=dz=0.25,0.5, 1, and 2 nm, where dx, dy, and dz are the dimension of mesh cells around the antenna.The results are shown in Fig. 3, as well as the expected convergence value at zero, calculated using Richardson Extrapolation Formula.The 0.5×0.5×0.5 nm 3 mesh that is used throughout this study gives 2.1% numerical error. Effects of Pitch Changing the pitch, while keeping antenna length and width constant varies the coupling between adjacent antennas and hence the response of the system.Figs. 4 and 5 show the system response as a function of square and non-square pitch, respectively.For a square pitch, where p=q, a slight blue-shift, followed by a red-shift can be seen in the system response as p and q increase.At p=q=400 and 450 nm the response of the system is not as smooth as it is for smaller pitch.This issue, as well as the trends of the system response as a function of pitch need to be further investigated.For a non-square pitch, where p=300 nm and q varies from 150 to 350 nm, as shown in Fig. 5, a red-shift is clear in T, R and A. Further investigation is required to determine the cause of this red-shift. Considering the FWHM of the system response, as well as the level of absorptance at p=q=300 nm, a relatively narrow response with high level of absorptance is observed.Thus 300 nm was chosen as the cell dimensions throughout this study. Suggestions for Future Work Experimental work is required to confirm the numerical results obtained. Designs need to be further optimized in terms of bulk and surface sensitivities to get the best response for the intended application.Achieving sharp and intense responses by adjusting the antenna parameters would result in better detection of the spectral response of the system in the experimental work.The array factor could be analysed as another design parameter, which may have a significant effect on the intensity of response.It may be worthwhile to look at different antenna designs to achieve higher order resonances, as well as the Fano resonance. Fig. 1 . Fig.1.Schematic of a gold nanodipole on silicon, covered by water. Fig. 2 . Fig.2.SEM images of particle pair samples with varying inter-particle distance (centerto-center) of (a) 450 nm, (b) 300 nm and (c) 150 nm.The particle diameter is 150 nm, the particle height is 17 nm.(d) shows spectral position of the extinction maximum vs. 
interparticle distance for the full set of seven samples for the polarization directions parallel (circles) and normal (rhombs) to the long particle pair axis.Full lines show the corresponding results of the dipole-pair model calculations.(Reprinted with permission from [3] © 2003 Elsevier Science B.V.) in the case of the nanobowties.While dipole antennas support only one mode, multiple modes are excited in a bowtie nanoantenna.Although changing the bowtie angle shifts the spectral position of the resonances, it does not change the field enhancement in a special trend.The field intensity is shown to be stronger in the gap of a dipole than a bowtie.The dimensions of the antenna significantly affect the spectral position of the resonance.Increasing the length of the nanoantenna redshifts the resonance, and increases the relative field enhancement in the gap for the case of dipoles.However, for bowties, where multiple resonances are present in their spectrum, each resonance must be considered individually in order to observe a clear trend.Doing so, one finds an almost linear redshift of the resonance position as a result of increasing the length of bowtie.The relative field enhancement in the gap is smaller than in the dipole, still increases by increasing the length.The authors Fig. 3 . Fig.3.From right to left: schematics of the rod dimer antenna, the bowtie antenna, and the hybrid dimer antenna.(Reprinted with permission from [19] © 2009 OSA.) 1 ( bonding dipole-dipole mode (ADD) with a broad line width.A lower energy sub-radiant bonding dipole-dipole mode (BDD) with a narrower line width is also produced in the XI geometry.Furthermore, coupling of DI and Qx modes results in a third hybridized mode, which is a narrow bonding quadrupoledipole mode (BQD).Spectral overlap of the ADD and BQD modes gives rise to a destructive Fano interference, which reduces the radiation losses and produces a narrow anti-resonant dip around the Qx resonance.Next, a Fano interference fitting model is introduced for the far field extinction spectra E(ω) = 1-T(ω) of the nano-cross.In a series of bulk localized surface plasmon resonance (LSPR) refractive index (RI) sensing measurements sensitivities of different plasmon modes are obtained.A figure of merit (FoM) (δ δ ) is then determined.For this geometry FoM values 4.6 (BDD) and 4.BQD) are obtained using the line widths found by a full Fano model fit. Fig. 1 . Fig.1.Geometry of a unit cell of the system under study: a Au rectangular dipole antenna on a silicon substrate covered by water.A plane wave source illuminates the antenna in the z-direction from within the substrate. Fig. 5 .Fig. 6 . Fig. 5. (a) Transmittance, (b) reflectance and (c) absorptance vs wavelength for a dipole of g=20 nm, w=10 nm, t=40 nm, p=q=300 nm and variable l (given in legend inset to (a)).Part (d) shows 10log(|Ex|/|Einc|) where Ex is taken at x=0, y=0, z=3 nm on resonance and Einc is the incident field at the same location and wavelength in the absence of the antenna. Figure 7 Figure 7 shows the electric field distribution of dipole over the x-y cross-section close to the silicon-gold interface, slightly inside the gold.Fig. 7(b) shows the field distribution on resonance, while Figs.7(a) and (b) show the fields at wavelengths below and above  res .The magnitude of the electric field is clearly enhanced on resonance. 
Fig. 8. (a) Transmittance, (b) reflectance and (c) absorptance vs wavelength for dipole antennas of l=210 nm, w=20 nm, t=40 nm, p=q=300 nm and variable g (given in legend inset to (a)). Part (d) plots 10log(|Ex|/|Einc|) where Ex is taken at x=0, y=0, z=3 nm on resonance and Einc is the incident field at the same location and wavelength but with the antenna removed.

Fig. 13. (a) Transmittance, (b) reflectance, and (c) absorptance versus wavelength for dipole antennas of l=210 nm, w=20 nm, g=20 nm, p=q=300 nm and variable t (given in legend inset to (a)). Part (d) plots 10log(|Ex|/|Einc|), where Ex is taken at x=0, y=0, z=3 nm on resonance and Einc is the incident field at the same location and wavelength in the absence of the antenna.

Fig. 15. Transmittance, reflectance and absorptance of an array of dipoles, using (a) the full material properties of silicon, gold and water, and (b) forcing the imaginary part of the permittivity of gold to zero.

Fig. 16. FWHM of the absorptance response as a function of (a) length l, (b) gap g, (c) width w, and (d) thickness t. The left axis shows the FWHM calculated as Δλ and the right axis shows the corresponding Δν.
Fig. 17. Electric field Ex along the x-axis of a monopole of l=210 nm, w=20 nm, t=40 nm and p=q=300 nm, at y=0 for several heights z (indicated in the legend). The average of Ex at the z locations of this figure (and for z=15 and 25 nm) is also shown. The inset shows Ex at the physical end of the antenna, where the field is discontinuous.

Fig. 18. (a)-(c) Electric field distribution of a surface plasmon mode plotted over the cross-section of a nanowire waveguide (w=20 nm, t=40 nm and λres = 2268 nm) computed using a mode solver. (d)-(f) Electric field distribution over a cross-section of the corresponding monopole antenna computed using the FDTD. (a) and (d) Re{Ey}, (b) and (e) Re{Ez}, (c) and (f) Im{Ex}.

observed in our FDTD computations in Figs. 3 and 10, and in Figs. 4 and 13. It is worth noting here that the rates of change of neff and α decrease as w and t increase. Applying this observation to Eq. (4), we expect a smaller shift in the position of resonance as w and t increase. This is also observed in the trends of Figs. 3, 4, 10 and 13.

Fig. 19. Effective refractive index and attenuation of the mode resonating in the antennas as a function of (a) width using t=40 nm and (b) thickness using w=20 nm, at λ = 1400 nm.

Next, w and t were fixed to representative values (w = 20 nm, t = 40 nm) and the incident wavelength was varied to determine its influence on neff and α. As is observed from Figs. 20(a) and (b), neff and α decrease with increasing wavelength. Returning to Figs. 16(c) and (d), for a fixed dipole arm length d = (l-g)/2, the FWHM decreases as w and t decrease because α decreases with the latter (recall that λres red-shifts with decreasing w and t - see Figs. 10 and 13 or the previous paragraph). However, if d varies, one needs to consider the product dα as the total loss. Given the slow rate of change of α with wavelength (Fig. 20(b)), the total loss changes mostly with d. Increasing the length of the dipole while the gap is fixed thus increases the FWHM, as shown in Fig. 16(a) (d increases). However, increasing the gap while the antenna length is fixed decreases the FWHM, as shown in Fig. 16(b) (d decreases). These observations also explain the results of Fig. 8 (broader response for smaller gaps).

Fig. 20. (a) Effective index and (b) attenuation as a function of wavelength calculated from modal analysis for a nanowire waveguide of cross-section w = 20 nm by t = 40 nm.

Fig. 22. Wave impedance over a y-z cross-section of the nanowire.

Fig. 23. λres obtained from FDTD analysis and from the transmission line model for a dipole of l=210 nm, w=20 nm, t=40 nm and variable g.

Table 1. Results of the Modal Analysis.

Substituting neff and λres corresponding to every value of physical length l from Table 1 into Eq. 5 yields Leff. One can then estimate the physical length of the monopole lest by analogy with Eq. (3) as lest = Leff - 2δm.
14,231
2012-07-30T00:00:00.000
[ "Physics", "Materials Science" ]
Identification and characterization of potent and selective aquaporin-3 and aquaporin-7 inhibitors

The aquaglyceroporins are a subfamily of aquaporins that conduct both water and glycerol. Aquaporin-3 (AQP3) has an important physiological function in renal water reabsorption, and AQP3-mediated hydrogen peroxide (H2O2) permeability can enhance cytokine signaling in several cell types. The related aquaglyceroporin AQP7 is required for dendritic cell chemokine responses and antigen uptake. Selective small-molecule inhibitors are desirable tools for investigating the biological and pathological roles of these and other AQP isoforms. Here, using a calcein fluorescence quenching assay, we screened a library of 7360 drug-like small molecules for inhibition of mouse AQP3 water permeability. Hit confirmation and expansion with commercially available substances identified the ortho-chloride–containing compound DFP00173, which inhibited mouse and human AQP3 with an IC50 of ∼0.1–0.4 μM but had low efficacy toward mouse AQP7 and AQP9. Surprisingly, inhibitor specificity testing revealed that the methylurea-linked compound Z433927330, a partial AQP3 inhibitor (IC50, ∼0.7–0.9 μM), is a potent and efficacious inhibitor of mouse AQP7 water permeability (IC50, ∼0.2 μM). Stopped-flow light scattering measurements confirmed that DFP00173 and Z433927330 inhibit AQP3 glycerol permeability in human erythrocytes. Moreover, DFP00173, Z433927330, and the previously identified AQP9 inhibitor RF03176 blocked aquaglyceroporin H2O2 permeability. Molecular docking to AQP3, AQP7, and AQP9 homology models suggested interactions between these inhibitors and aquaglyceroporins at similar binding sites. DFP00173 and Z433927330 constitute selective and potent AQP3 and AQP7 inhibitors, respectively, and contribute to a set of isoform-specific aquaglyceroporin inhibitors that will facilitate the evaluation of these AQP isoforms as drug targets.

conduct water, glycerol, as well as hydrogen peroxide (H2O2) (1,2). Two other AQP isoforms of the so-called aquaglyceroporin subfamily, AQP7 and AQP9, are expressed in the mouse. AQP3 expression sites in the body include the kidneys (3,4), gastrointestinal tract (5,6), airway epithelia (7), conjunctival epithelium of the eye (5), urinary bladder (8), skin keratinocytes (9), and human erythrocytes (10). AQP3 gene deletion in mice results in extreme polyuria (11). Pharmacological inhibition of AQP3 has thus been suggested as an approach to induce aquaresis (12), selective salt-sparing removal of excess body water that may be desirable for some clinical conditions such as heart failure and hyponatremia. Surprisingly, the extremely rare AQP3-deficient humans do not show any clinical manifestations, including no polyuria (13). However, it cannot be excluded that this observation is due to compensatory adaptations in AQP3-deficient individuals. Consequently, AQP3 inhibitors could still have potential for treating human disorders of water retention. Beside uses for aquaporin inhibitors in water imbalance disorders, several newer AQP3−/− knockout and knockdown animal studies suggest a number of applications for potent, selective, and nontoxic AQP3 inhibitors. This includes mouse models of multistage skin tumorigenesis, non-small-cell lung cancer, and breast cancer (14-17). Moreover, AQP3-mediated H2O2 conductance confers sensitivity to cytokines in T cells, alveolar macrophages, and keratinocytes.
Thus, AQP3−/− knockout mice are largely protected from hapten-induced contact hypersensitivity, psoriasis, and allergic airway inflammation (18-20). A second murine aquaglyceroporin, AQP7, is expressed in adipose tissue, renal proximal tubules, muscle, and pancreatic β-cells (21). Knockout mouse phenotypes suggest associated functions in adipocyte glycerol release during lipolysis, glycerol reabsorption in proximal tubules, energy supply to heart muscle, as well as regulation of β-cell glycerol content (21)(22)(23)(24)(25)(26). More recently, AQP7 was also detected in skin dendritic cells. Similar to Aqp3−/− knockout mice, Aqp7−/− knockout mice were protected from hapten-induced contact hypersensitivity (27). Up until now, it has not been clarified whether AQP7 conducts H2O2 and has a role in cytokine signaling. However, it is conceivable that combined inhibition of AQP3 and AQP7 could be useful in the treatment of atopic dermatitis.

At present, only a few studies have described AQP3 inhibitors (28). Cu2+ as well as Ni2+ ions bind to extracellular loop sites of human AQP3 (29,30). Inhibitor potency was not fully clarified in these studies because it varied with the utilized buffer system. When bicarbonate buffer was used, copper (IC50, ~5 μM) appeared to potently and selectively inhibit the water and glycerol permeability of AQP3 but not of the homologous AQP7. On the other hand, in the presence of chelators such as Tricine and HEPES buffers, the apparent Cu2+ potency was in the millimolar range (29,30). It should be noted that normal human plasma Cu2+ levels range from 1 to 15 μM (http://www.hmdb.ca/ (31)). More recently, the gold(III) bipyridyl compound Auphen was described as an AQP3 inhibitor. Auphen inhibition of AQP3 conductance to glycerol was potent, with an IC50 of ~0.8 μM (32) in human erythrocytes. Besides AQP3, Auphen also blocked AQP7 (IC50, ~6.5 μM) in an adipocyte cell line (33).

Because of the apparent clinical potential of AQP3 inhibitors as well as to obtain new tools for experimental research, we decided to identify novel AQP3 inhibitors. We previously identified inhibitors of the third murine aquaglyceroporin, AQP9, by screening a commercially available library of small molecules (34). The library contains chemically stable, diverse, druglike molecules with beneficial properties for further development. Here we describe the identification of a potent and selective mouse (m) and human (h) AQP3 inhibitor. Structure-activity relationship analysis and AQP isoform specificity analysis have additionally resulted in the serendipitous discovery of a potent AQP7 inhibitor.

AQP3 inhibitor screen

Chinese hamster ovary (CHO) cells expressing mAQP3 under the control of a tetracycline promoter were grown in 96-well plates for identifying AQP3 inhibitors in a calcein quenching-based assay of cell water permeability (34). Plates 1-23 of the Maybridge Hitfinder Library version 8, comprising a total of 7360 different molecules, were screened at a final concentration of 100 μM in assay buffer containing 1% DMSO. HTS06792 was identified previously as a weak mAQP9 inhibitor (34). Further analyses suggested partial inhibition of mAQP3. HTS06792 was thus used as a positive control in each screening plate, whereas DMSO-containing wells were used as negative controls. Hits were arbitrarily defined as substances that induced an increase in t½ of cell shrinking time of more than 2.5 s (compared with ~1 s for HTS06792).
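The hit criterion is expressed in terms of the half-time of the calcein-quenching shrinkage trace. As a rough illustration only (not the authors' analysis pipeline), the Python sketch below fits a single-exponential decay to a simulated fluorescence trace and reports t½; all names and values are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def mono_exp(t, f_inf, df, k):
    """Single-exponential model of the post-challenge fluorescence decay."""
    return f_inf + df * np.exp(-k * t)

# Made-up trace: time (s) after the osmotic challenge and fluorescence (a.u.).
t = np.linspace(0.0, 10.0, 101)
rng = np.random.default_rng(0)
f = mono_exp(t, 0.70, 0.30, np.log(2) / 1.0) + rng.normal(0.0, 0.005, t.size)

popt, _ = curve_fit(mono_exp, t, f, p0=(0.7, 0.3, 1.0))
t_half = np.log(2) / popt[2]  # cell-shrinking half-time in seconds
print(f"t1/2 of cell shrinking ~ {t_half:.2f} s")
```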
Ten substances fitting this criterion were subsequently repurchased and subjected to dose-response analyses for hit confirmation. Inhibition of mAQP3 could be confirmed for five compounds. In addition, JM00015 showed mAQP3 inhibition at the highest tested concentration but at a lower efficacy than expected (Fig. 1). All confirmed active compounds appeared to be of relatively linear structure, with exception of the weak active JM00015. Besides the positive control, hits could broadly be categorized into the single hit BTB09519 and a group of substances described by a urea-containing linker (DFP00176, GK00877, or SEW00832) or acetamide linker (BTB14129); a left-hand 2-nitrothiophene, ethyl-benzoate, or chlorobenzene group; and a more variable right-hand side, as intuitively depicted (Fig. 1). These substances shared structural elements that are similar to the previously identified benzothiadiazole urea-containing AQP9 inhibitors RF03176 and HTS13772 (34,35). Indeed, AQP3 inhibition by RF03176 was identified previously (34), and this substance was the seventh best hit in the current screen. We did initially not pursue RF03176 further because of its previously described low potency against AQP3 (34). Structure-activity relationship To explore this putative hit series further, a set of 12 commercially available, structurally similar compounds was purchased. These comprised six 1-(5-nitrothiophene-3-yl)urea compounds, a 1-(1H-indole-3-yl)urea compound, four ethyl 4-(carbamoylamino)benzoate compounds, and a methyl 4-(carbamoylamino)benzoate compound (Fig. 2, top left to bottom right). Potency and efficacy were tested on both mouse and human AQP3 isoforms, respectively. Two of these compounds contained a urea linker, and ten compounds comprised a methylurea linker. The most potent AQP3 inhibitor, DFP00173, was identified among the two compounds comprising a urea linker. Between these compounds, a 2,6-dichlorophenyl right-hand side in DFP00173 was clearly preferred to 4-chlorophenyl in DFP00172. Comparison of several methylurea linker compounds (SEW00835, SEW00833, and SEW00832 versus 9016645) suggests that 2-nitrothiophene and ethyl benzoate left-hand sides result in similar potency. However, direct comparisons are not possible because of the lack of commercially available molecules with identical right-hand sides. Considering right-hand sides, 3,4-chlorophenyl (7791389) appears to decrease efficacy and potency compared with phenyl (9016645) and methoxyphenyl groups (9053871), respectively. Interestingly, a right-hand phenylpyrazole (Z433927330) group resulted in a potent but moderately efficacious compound. Overall, no clear potency differences were observed between the inhibition of mAQP3 and hAQP3. AQP inhibitor specificity We selected the two most potent compounds, DFP00173 and Z433927330, as well as compound 9016645 for further characterization of inhibitor specificity toward mouse aquaglyceroporins. Of these, 9016645 showed moderate selectivity for AQP3, whereas DFP00173 selectively inhibited AQP3, with only minor inhibition of AQP7 and AQP9. Z433927330 inhibited AQP7 potently and with good efficacy, whereas inhibition of AQP3 and AQP9 was lower and seemed to be less potent (Fig. 3, A-C). Comparison of 9016645 and Z433927330 inhibition profiles suggested that a shift in isoform specificity and potency was caused by the additional right-hand pyrazole. 
To investigate potential cytotoxic or anti-proliferative effects, we incubated proliferating CHO cells for 48 h in the presence of the inhibitors DFP00173 and Z433927330 and the previously described AQP9 inhibitor RF03176 (34) before loading cells with Calcein-AM. A 5.8-fold increase in cell number was expected during this incubation period. Furthermore, the fluorophore can only be retained in viable cells (36). We found no negative effect of either substance on calcein fluorescence, indicating no apparent negative effects of inhibitors on CHO cell proliferation or vitality in the tested concentration range up to 25 μM (Fig. 3D). However, incubation with 25 μM DFP00173 and Z433927330 but not 25 μM RF03176 resulted in higher fluorescence, indicating increased proliferation or dye retention or a combination thereof.

Hydrogen peroxide permeability

AQP3 and AQP9 are known to function as hydrogen peroxide channels in chemokine signaling in T cells and neutrophils (18,37,38). H2O2 permeability of AQP7 has currently not been demonstrated. However, AQP7 is required for chemokine responses in skin dendritic cells (27). Consequently, we tested whether AQP inhibitors are suitable to block AQP-mediated H2O2 permeability. AQP3, AQP7, and AQP9 were stably expressed in CHO cells along with the H2O2 sensor HyPer-3 (39). Addition of extracellular H2O2 resulted in an increase in 490/420-nm fluorescence intensity ratio in all HyPer-3-expressing cell lines (Fig. 4). Faster changes were observed in AQP3-, AQP7-, and AQP9-expressing cell lines compared with HyPer-3-expressing cells without ectopic AQP expression. H2O2 permeability in AQP-expressing cell lines was reduced by isoform cognate inhibitors. The inhibitors did not show clear effects on H2O2 permeability in CHO-HyPer-3 cells without ectopic AQP expression.

Glycerol permeability of human erythrocytes

Human erythrocytes express water-permeable AQP1 as well as glycerol and water-permeable AQP3 (10). To confirm AQP3 inhibition in a second assay as well as to test inhibition of AQP3 glycerol permeability, we isolated whole human erythrocytes that were exposed to inward-directed glycerol gradients (Fig. 5, A and B). Changes in scattered light intensity were recorded over time in a stopped-flow apparatus. Treatment with DFP00173 and Z433927330, respectively, resulted in inhibition of erythrocyte glycerol permeability as well as inhibition of initial water permeability (Fig. 5A). The observed potencies for DFP00173 inhibition of glycerol permeability in erythrocytes (IC50, ~0.2 μM) and Z433927330 (IC50, ~0.6 μM) agreed well with the observed inhibition of water permeability in CHO cells.

Figure 1. Dose-response curves for the ten most potent screening hits as well as for the positive control HTS06792. Shown is t½ of cell shrinking, indicating that water permeability was measured in calcein-loaded, mAQP3-expressing CHO cells. Screening hits were repurchased before analysis to confirm hit identity. Supplier product codes are indicated above each substance. Compounds MWP00821 and S04288 were not sufficiently soluble in DMSO and were thus tested at lower concentrations than the other compounds. AQP3 inhibition could be confirmed for five hit compounds. In addition, JM00015 showed AQP3 inhibition at the highest tested concentration and at lower efficacy than expected.
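Dose-response data of the kind summarized in Fig. 1 are commonly reduced to an IC50 by fitting a four-parameter logistic (Hill) curve. The sketch below is a generic Python illustration with fabricated data points, not the fitting procedure used in this study.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, bottom, top, ic50, n):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** n)

# Fabricated example: inhibitor concentrations (uM) and normalized responses
# (1 = uninhibited water permeability, 0 = fully inhibited).
conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0, 30.0])
resp = np.array([0.98, 0.95, 0.80, 0.55, 0.30, 0.15, 0.08, 0.05])

popt, _ = curve_fit(hill, conc, resp, p0=(0.05, 1.0, 0.3, 1.0))
print(f"fitted IC50 ~ {popt[2]:.2f} uM, Hill slope ~ {popt[3]:.1f}")
```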
However, inhibition of glycerol permeability by Z433927330 appeared to be more efficacious than inhibition of CHO cell water permeability, causing similar levels of reduced permeability as DFP00173 (Fig. 5B). Furthermore, we tested the inhibition of AQP3 and AQP7 glycerol permeability by these two compounds in CHO cells (Fig. 5, C and D). We found highly specific inhibition of AQP3 glycerol permeability by DFP00173, which did not apparently inhibit AQP7 glycerol permeability. On the other hand, we found moderate selectivity of Z433927330 for inhibition of AQP7 glycerol permeability compared with inhibition of AQP3 permeability. We note that inhibition efficacy has to be compared cautiously between cell lines because we observed significantly higher glycerol permeability in AQP7-expressing cells compared with AQP3-expressing cells. Homology modeling and molecular docking We previously identified a putative binding site for RF03176 and similar compounds on hAQP9 at the cytoplasmic side of the pore. This binding site was examined by site-specific muta-tions in hAQP9 (35). Based on calculated logP values (RF03176, 2.80; DFP00173, 3.97; Z433927330, 3.72), we reasoned that inhibitors could diffuse across the lipid bilayer to reach the cytoplasmic side of the pore. This assumption was tested in epithelial barrier assays using mouse cortical collecting duct mpkCCD cells (40), which express very little AQP2 under basal culture conditions (41). Moreover, AQP2 is not expected to reach the apical membrane in the absence of vasopressin. Indeed, we found that all three substances crossed an electrically tight mpkCCD cell monolayer at a rate comparable with DMSO from the basal to the apical side, whereas cellimpermeable TO-PRO-3 remained undetectable at the apical side (Fig. S2). Because of the chemical similarity between RF03176 and the currently identified hit series, we hypothesized similar binding of the newly identified compounds to mouse aquaglyceroporins at the cytoplasmic pore side. To investigate this, we generated homology models of the three mouse aquaglyceroporins mAQP3, mAQP7, and mAQP9. Initial calculations of the pore dimensions suggested that the pore appears to be too narrow for inhibitor binding at the so-called aromatic/arginine region and too shallow for strong inhibitor binding in its remaining Aquaporin-3 and aquaporin-7 inhibitors extracellular segment (Fig. S1). Subsequent molecular docking, utilizing the entire pore of these models as a potential interaction surface, confirmed this hypothesis. Analysis of the top 50 poses for the newly identified ligands DFP00173, Z433927330 and 9016645, respectively, displayed a predicted binding site similar to RF03176 at the cytoplasmic side of the pore of all three aquaglyceroporins. Hydrogen bonds formed between the inhibitor urea linker and backbone carbonyls of loop B are a common motif (Fig. 6). Importantly, this binding prevents carbonyl-water or carbonyl-solute interactions, which are crucial for channel permeability. Further hydrogen bonds with one or both of the NPA box asparagines are formed in some AQP-inhibitor combinations. These asparagines of the two NPA boxes are well conserved among aquaporins and appear to be critical for selective channel permeability (42). 
To exemplify inhibitor-AQP interactions, DFP00173 interacts in a similar fashion with all three mouse aquaporins, with the dichlorophenyl group at the cytoplasmic opening of the pore and hydrogen bonding between the urea backbone of the ligand and loop B of the aquaporins, as well as hydrogen bonding between asparagines and the nitro group (Fig. 6A). Compound Z433927330 differs from 9016645 by a pyrazole group (Figs. 2 and 3). Molecular docking suggests that the pyrazole does not grossly influence the position of these molecules in aquaglyceroporins (Fig. 6B). Besides the general interaction of loop B carbonyls with the urea linker, oxygens from the ester functional group act as hydrogen bond acceptors for asparagine hydrogens here. Docking calculations suggest that the hydrophobic phenyl group in RF03176 is located close to the loop B asparagine, creating a tight constriction at this site. A phenylalanine (Phe-180) at the cytosolic entrance in m/hAQP9 is located proximal to the RF03176 benzothiadiazole. Interactions between nitrogen lone pairs of the thiadiazole group and the ␦-positive edge of Phe-180 are conceivable (Fig. 6C). An identical interaction is seen in hAQP9 (35). The homology models suggest interesting differences between mouse aquaglyceroporins. In line with similar channel substrate permeability, pore-lining residues are highly conserved. Differences that may explain inhibitor specificity are, however, observed in residues at the cytoplasmic pore entrance. Specifically, a Phe-180 in mAQP9 is replaced by Val-179 in mAQP3 and Thr-159 in mAQP7. Furthermore, the chemical environment surrounding the conserved histidine in loop B (His-81 in mAQP3) is noticeably different. In mAQP3, a glutamate (Glu-96) is involved in a hydrogen bond to the ⑀ nitrogen of the histidine, which reduces its rotameric freedom. In mAQP7 the unprotonated ␦ nitrogen of the corresponding histidine (His-61) constrains freedom of rotation as a hydrogen bond acceptor to an asparagine (Asn-70). The latter is in a position accessible for pore solutes. In contrast, the residues at these two positions are both methionines in mAQP9 (Met-91 and Met-97, respectively). This allows the histidine (His-82) to adopt a different configuration that makes the pore narrower. Discussion Here we describe the identification of novel mAQP3 and mAQP7 inhibitors. Together with previously identified specific mAQP9 inhibitors, this provides a complete set of potent mouse aquaglyceroporin inhibitors, in part with high isoform specificity suitable for dissecting the function of these channels in cell-based assays. These are not the first aquaporin inhibitors described in the literature. However, as raised by Verkman et al. (43), several previously described AQP inhibitors were not sufficiently validated upon discovery. Consequently, the effect of some AQP inhibitors could not be reproduced by independent laboratories (44 -46). Here as well as in a previous study, we tested inhibitor specificity among related and less related AQP isoforms (34). We consider the observed inhibitor specificity as a strong argument for fidelity of the inhibitors described in these studies. In further support for this class of inhibitors, we previously described the effect of hAQP9 point mutations on inhibition by RF03176 and structurally similar molecules (35). DFP00173 and Z433927330 share structural similarities with RF03176. 
Moreover, we confirmed inhibition of hAQP3 glycerol permeability by some of the described inhibitors in human erythrocytes, utilizing stopped-flow scattered light intensity recordings, a method considered a gold standard in inhibitor validation (43). Relative inhibitor potency measurements suggested good agreement of inhibitor potency between CHO cell water permeability and erythrocyte glycerol permeability assays. Similarly, AQP inhibitors reduced water permeability in AQP-expressing cell lines only (cell shrinking in response to sucrose addition 3.6 s into each read). The same HyPer-3-expressing cell lines were loaded with calcein before water permeability measurements. We note that the calcein fluorescence intensity is about 10-fold higher than HyPer-3 fluorescence under the conditions used. HyPer-3 did not seem to interfere with water permeability measurements. Furthermore, we noted a small baseline fluorescence decrease induced by DFP00173 and RF03176 treatment that was present before H 2 O 2 addition. The reason for this effect is unknown. Means of four recordings are shown. Aquaporin-3 and aquaporin-7 inhibitors A number of aquaporins, including AQP3, AQP8, and AQP9, have recently been described to function as H 2 O 2 channels in cytokine and growth factor signaling, primarily in leukocytes (18,19,37,38,(47)(48)(49)(50)(51). Similarly, altered cytokine responses have been described for AQP7-deficient dendritic cells (27). However, altered water and/or glycerol permeability of AQP7 deficient cells was proposed as an explanation. To our knowledge, this is the first study that describes AQP7-mediated H 2 O 2 permeability, providing an alternative explanation for reduced cytokine responses in AQP7-deficient dendritic cells. Verkman et al. (43) have previously branded aquaporins as "important but elusive drug targets." The elusive part of the assessment was based in part on a low hit discovery rate in an unpublished screen for AQP1 inhibitors conducted by the authors. Only a few weak AQP1 inhibitors could be discovered in a screen of 100,000 small molecules (43). As an explanation, narrow AQP pore geometry as well as the necessity to find molecules binding deeply in this pore to inhibit water (and solute) transport has been given. In agreement with this view, we found that the pore diameter at the aromatic/arginine region seems to be too narrow to contain the AQP inhibitors discovered in this study. This results in a very limited surface that remains available for potential inhibitor binding at the extracellular orifice of mouse aquaglyceroporins. In contrast, we found that the aquaglycerol channel pore on the cytoplasmic side of the aromatic/arginine constriction can ideally fit linear aromatic molecules. It may well be confirmed that the discovery of inhibitors for pure water channels such as AQP1 is more challenging. This study identified at least one hit series containing potent as well as specific AQP3 and AQP7 inhibitors, respectively, in a medium-throughput screen of 7360 molecules. Moreover, in a previous screen, we identified two hit series containing potent and specific AQP9 inhibitors in a screen of only 1920 molecules (34). These outcomes suggest that the pore geometry of aquaglyceroporins permits inhibitor discovery. We have previously characterized the binding of several hAQP9 inhibitors in a pore region facing the cytoplasm using computational and experimental methods (35). 
Our analyses in this study suggest that inhibition results cooperatively from steric hindrance of the pore by bulky functional groups and disruption of the pore-water hydrogen-bonding network. The role of steric hindrance is exemplified by the differences between DFP00173, comprising two ortho-chlorides, and the closely related but less potent DFP00172 (para-chloride) and SEW00833 (meta-chloride), respectively. Aquaporin water and solute permeability require backbone carbonyls in loops B and E as hydrogen bond acceptors and NPA asparagines as hydrogen bond donors. All of our described inhibitors form hydrogen bonds to backbone carbonyls, thus mimicking channel-substrate interactions. Indeed, AQP7 and AQP9 are permeable to urea (34, 52), which is a commonly observed linker in the aquaglyceroporin inhibitors we identified. In light of this common inhibitor binding, the experimentally observed inhibitor specificity requires further consideration. Molecular docking analyses did not provide clear explanations for this selectivity. Interestingly, although occupying carbonyls of the channel polypeptide backbone, the inhibitor urea linker offers its own carbonyl as a potential substitute. At physiologically relevant temperatures, inhibitors and channels are dynamic. It is thus conceivable that alternative hydrogen bonding involving the urea carbonyl facilitates permeability through a residual pore. Thus, overall channel diameter as well as local inhibitor-bound isoform-specific pore constrictions may explain the most striking experimentally observed inhibitor specificities. Notably, among our homology models, mAQP3 is narrowest at the NPA region. The nitro group in DFP00173 is predicted to form a very strong interaction with the asparagine of the first NPA box. Furthermore, Ile-71 restricts alternative backbone inhibitor interactions, locking DFP00173 in a position that does not allow residual pore permeability. The analogous Val-51 provides more room for alternative positioning in mAQP7. Similarly, favorable positioning seems to be hindered in mAQP9 by Phe-180, as ligands can only take up discrete positions along the carbonyl backbone. Residual pore permeability may also provide an explanation for the previously observed RF03176 specificity to m/hAQP9 (34). AQP9-specific Met-91 and Met-97 are positioned in such a way at the cytoplasmic pore entrance that access to the underlying loop B backbone carbonyl of His-82 may be obscured when RF03176 is bound. In support of this mechanism, we have previously mutated Met-91 and His-82 of hAQP9 (35). An M91N mutation could conceivably have provided better access to His-82 and did result in almost complete loss of inhibition. An H82A mutation may also have provided better access to the backbone carbonyl at this position and did result in lower inhibitor potency.

Figure 6. Representative AQP-inhibitor binding sites. A-C, the calculated diameter of the AQP pore is shown in transparent surface view, colored from blue (narrowest) to red (widest). Docked ligands and select amino acids are shown in stick representation. Hydrogen bonds form between the inhibitor urea linker and backbone carbonyls of each AQP isoform. Asparagines of the NPA boxes form interactions between AQP3 and DFP00173 (A) and between AQP7 and Z433927330 (B). Phe-180 is unique to AQP9 and may form a positive edge interaction with an RF03176 nitrogen lone pair, thereby stabilizing a conformation that constricts the entrance to the pore (C).
For mAQP3 and mAQP7, similar shielding of the corresponding histidine carbonyl does not seem to occur. For Z433927330, molecular docking interactions are almost identical to those of 9016645, whereas the additional pyrazole in Z433927330 resulted in higher experimentally observed potency. A simple explanation is that the additional pyrazole group in Z433927330 offers additional hydrogen bonding possibilities. However, docking calculations did not correlate well with the experimental observations. Further analyses will be required to find explanations for these effects. In conclusion, we identified a complete set of aquaglyceroporin inhibitors. All described compounds are commercially available for independent verification. We confirmed by MS the identity and purity of one batch of the three currently most useful substances (DFP00173, Z433927330, and the previously described HTS13286 (34)) obtained from the described sources. The substances should be useful in future cell-based experiments, e.g., investigation of AQPs in cytokine signaling. All three substances provide useful starting points for further development of substances that are suited for in vivo experiments and potentially for drug development.

CHO cell experiments

Generation of the utilized CHO cells with tetracycline-inducible ectopic AQP expression has been described previously (34, 35). For H2O2 permeability assays, CHO cells with a stably integrated tetracycline repressor were stably transfected with VspI-linearized pC1-HyPer-3 (39) and pC1-SypHer (53) plasmids, respectively, before selection of transfected cells with G418 (800 μg/ml, Thermo Fisher Scientific) for 2 weeks and subsequent sorting of a pool of YFP-fluorescent cells on a BD FACSAria II flow cytometer (BD Biosciences). Human AQP3 was ligated from IMAGE clone ID 3877096 (Source BioScience) into pcDNA5/FRT/TO (Thermo Fisher Scientific) following KpnI/NotI digestion of the parent vectors. These DNA recombinations were designed with the help of AiO software (54). The use of a plate reader to measure water permeability has been described previously (55) and was adopted here with modifications. CHO cells were cultured in DMEM/F12/10% donor bovine serum, 250 μg/ml hygromycin, and 10 μg/ml blasticidin (all from Thermo Fisher Scientific) in a humidified incubator at 37 °C and 5% CO2. Three days before water permeability or H2O2 permeability assays, 6000 cells/well were seeded in black, clear-bottom, polylysine-coated 96-well plates (Greiner, 655 946). Twenty-four hours after seeding, AQP expression was induced with tetracycline; in water permeability assays, tetracycline was titrated for each cell line to reach an uninhibited cell-shrinking t1/2 of ~1 s. For H2O2 permeability assays, all cell lines were induced with 5 ng/ml tetracycline, which leads to a maximal increase in water permeability in the tested cell lines. Cells were grown for an additional 48 h. Before water permeability assays, cells were incubated for 45 min in fresh medium containing 5 mM probenecid and 2.5 μM calcein O,O′-diacetate tetrakis(acetoxymethyl) ester (Calcein-AM, BD Biosciences). The plates were transferred to a Fluostar Optima plate reader, and fluorescence intensity was recorded for 30 s (water permeability) and 79 s (H2O2 permeability). Optical filters were 485BP12/520 for calcein and 420BP10/490BP10 excitation with 520LP emission for HyPer-3 recordings. Osmotic cell shrinkage was induced 3.6 s into the reads by addition of 1 volume of 500 mM sucrose in assay buffer.
The other assay buffer components were as follows: MgSO4, 0.8 mM; KCl, 5.0 mM; CaCl2, 1.8 mM; NaHEPES, 25 mM; NaCl, 111.5 mM; and probenecid, 5.0 mM (pH 7.4). Similarly, 1 volume of 150 μM H2O2 was added 3.6 s into HyPer-3 recordings. All assay buffers contained 1% DMSO to keep DMSO concentrations of inhibitor-treated wells constant during volume additions (44). For cytotoxicity assays, CHO cells were grown in the presence of the indicated inhibitor concentrations for 48 h, followed by cell loading with Calcein-AM as for water permeability assays and similar fluorescence recording at a single time point.

Erythrocyte isolations and stopped-flow light scattering

Freshly collected human whole blood was washed three times in PBS (spinning at 800 × g) to remove serum and the cellular buffy coat. Stopped-flow light scattering experiments were carried out on a BioLogic MPS-200 stopped-flow reaction analyzer (BioLogic, Claix, France) as described previously (56). For measurement of glycerol permeability, suspensions of erythrocytes (~1% hematocrit) in PBS were subjected to a 100 mM inwardly directed gradient of glycerol. The erythrocyte volume changes were recorded as the kinetics of the scattered light intensity at 20 °C at a wavelength of 530 nm, with a dead time of 1.6 ms and a mixing efficiency of 99% in less than 1 ms. Data were fit to a single exponential function, and the related half-life time (t1/2) of the cell swelling phase during entry of glycerol and water into the erythrocytes was measured. During this phase, the t1/2 values represent an index of the glycerol permeability of the analyzed erythrocytes. A series of stopped-flow light scattering experiments was done for each inhibitor compound, testing the effects of concentrations obtained by serial dilution. All compounds were dissolved in DMSO and applied in 1% DMSO for 10 min at 20 °C prior to the light scattering measurement. Erythrocyte suspensions containing 1% DMSO (10 min, 20 °C) represented the control condition.

Homology modeling

To rationalize AQP inhibitor isoform selectivity, we generated homology models of murine AQP3, AQP7, and AQP9. The sequences were aligned to the Escherichia coli glycerol transporter GlpF using ClustalW version 2.1 (57, 58) and inspected manually. The 2.2-Å X-ray structure of GlpF (PDB code 1FX8) (59) was used as a template for homology modeling, and all three input files were submitted to the I-TASSER server (60). The geometry of the resulting homology models was optimized by fragment-guided molecular dynamics simulation using the FG-MD server (61), followed by manual correction of a few pore residues using the Coot software (62). Noteworthy is phenylalanine 64 in AQP9, which was further into the pore than expected, probably because of the lack of pore waters in the generation of the models. Pore radii were calculated utilizing the program HOLE (63).

Molecular docking

The ligands DFP00172, RF03176, Z433927330, and 9016645 were docked into the pores of AQP3, AQP7, and AQP9 using the LeadIT molecular docking software. To avoid bias, the pore in its entirety was defined as a binding site. A library containing the four ligands of interest was created, and docking was performed using standard settings. The results were optimized using Hyde, a program in the LeadIT software package (version 2.3.2, BioSolveIT GmbH, Sankt Augustin, Germany).
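Both the plate-reader shrinkage traces and the stopped-flow swelling traces described above are reduced to a half-time by a single-exponential fit. The following is a minimal sketch of that step with simulated data and hypothetical parameter values; it is not the authors' analysis code.

```python
# Minimal sketch: extract a half-time from a shrinkage/swelling trace by
# fitting a single exponential (simulated trace, hypothetical parameters).
import numpy as np
from scipy.optimize import curve_fit

def mono_exp(t, span, k, offset):
    """Single-exponential relaxation: signal decays from offset+span to offset."""
    return offset + span * np.exp(-k * t)

# Simulated trace: fluorescence (or scattered light) sampled for 30 s,
# true rate constant 0.7 1/s, 2% noise.
t = np.linspace(0.0, 30.0, 300)
rng = np.random.default_rng(1)
trace = mono_exp(t, span=1.0, k=0.7, offset=0.4) + rng.normal(0, 0.02, t.size)

popt, _ = curve_fit(mono_exp, t, trace, p0=(1.0, 1.0, 0.5))
t_half = np.log(2.0) / popt[1]           # half-time of the volume-change phase
print(f"fitted t1/2 = {t_half:.2f} s")    # ~1 s matches the titration target
```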
Statistical analysis

Dose-response curves were compared between AQP isoforms and substances by two-way ANOVA in GraphPad Prism 5.0.
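The Prism analysis itself is not reproduced here. As a rough, hypothetical illustration of a two-way ANOVA on dose-response readouts (simulated data, invented isoform potencies; only the analysis structure mirrors the description above), one could write:

```python
# Sketch: two-way ANOVA (factors: isoform and inhibitor concentration) on
# simulated dose-response data; not the GraphPad Prism analysis of the study.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(0)
concs = [0.1, 0.3, 1, 3, 10]                          # hypothetical concentrations
rows = []
for iso, ic50 in (("AQP3", 0.4), ("AQP7", 2.0)):      # hypothetical potencies
    for c in concs:
        inhib = c / (c + ic50)                        # simple one-site binding curve
        for _ in range(4):                            # quadruplicate wells
            rows.append({"isoform": iso, "conc": c,
                         "inhibition": inhib + rng.normal(0, 0.05)})
df = pd.DataFrame(rows)

model = ols("inhibition ~ C(isoform) * C(conc)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))                # isoform, conc, interaction
```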
7,103.6
2019-03-11T00:00:00.000
[ "Biology", "Chemistry" ]
Ten-year helium anomaly prior to the 2014 Mt Ontake eruption

Mt Ontake in central Japan suddenly erupted on 27th September 2014, killing 57 people with 6 still missing. It was a hydro-volcanic eruption and new magmatic material was not detected. There were no precursor signals such as seismicity and edifice inflation. It is difficult to predict hydro-volcanic eruptions because they are local phenomena that only affect a limited area surrounding the explosive vent. Here we report a long-term helium anomaly measured in hot springs close to the central cone. Helium-3 is the most sensitive tracer of magmatic volatiles. We have conducted spatial surveys around the volcano once every few years since November 1981. The 3He/4He ratios of the closest site to the cone stayed constant until June 2000 and increased significantly from June 2003 to November 2014, while those of distant sites showed no appreciable change. These observations suggest a recent re-activation of Mt Ontake and that the helium-3 enhancement may have been a precursor of the 2014 eruption. We show that the eruption was ultimately caused by the increased input of magmatic volatiles over a ten-year period, which resulted in the slow pressurization of the volcanic conduit, leading to the hydro-volcanic event in September 2014.

Anomalies of helium isotopes were reported also in fumarolic gases during the 2002-2003 eruption of Stromboli volcano, Italy 13 . All above helium isotopic anomalies were related to magmatic eruptions; none has yet been reported for hydro-volcanic eruptions. Here we show a ten-year helium anomaly related to the 2014 Mt Ontake eruption. A hydrodynamic dispersion model applied to the data provides an explanation for the temporal variation of the helium-3 flux at the conduit. The helium-3 flux can be converted into a magmatic volatile flux, which may have led to the accumulation of steam in the volcanic edifice and the hydro-volcanic eruption.

Results

Helium isotopes and helium/neon ratios of gas samples. We measured helium isotopes and helium/neon ratios of 92 gas samples in seven bubbling hot and mineral springs around Mt Ontake (Fig. 1). Samples were collected once every few years since November 1981 14 (STable 1) and 12 samples were collected after the 2014 eruption. The 3He/4He and 4He/20Ne ratios vary significantly from 1.25 Ra to 7.38 Ra (where Ra is the atmospheric ratio 15 of 1.382 × 10⁻⁶) and from 0.34 to 285, respectively. All helium isotopic ratios are higher than the air value, suggesting the influence of a mantle signature typical for arc volcanoes (7.4 ± 1.3 Ra 9 ). Observed 3He/4He ratios are corrected for atmospheric contamination using helium/neon ratios 11 . Hereafter we use only corrected values; we identified five samples collected between 1993 and 2007 with significant air contamination. During the whole observation period, the 3He/4He ratio generally decreases with increasing distance from the central cone to the sampling site (SFig. 1), suggesting that the most primitive magmatic 3He is carried with fluid flowing through the volcanic conduit 14 . As helium moves from the volcanic conduit through fissures and permeable channels to surrounding hot and mineral springs, the magmatic helium is diluted by radiogenic helium (0.02 Ra 16 ) produced in aquifer rocks. This process results in lower 3He/4He ratios at more distant sites.
However, monitoring of distant mineral springs still provides data that are, to a large extent, the direct result of variations of 3He/4He ratios in the volcanic conduit 10,11,12 . Secular variations of helium isotopes. Figure 2 shows secular variations of helium isotopes in seven natural springs, where Fig. 2a indicates those in the northwest section of Mt Ontake and Fig. 2b those in the southeast. These data cover 3He/4He ratios of bubbling gas samples collected for 34 years since November 1981, comprising the longest record of hydrothermal helium isotope data in the noble gas literature 8,9 . In the northwest sites, 3He/4He ratios were generally constant within 2σ error from November 1981 to June 2000. Then the ratios increased significantly from June 2003 to November 2014 at Nigorigo hot spring, the closest site to the central cone. In contrast, the ratios at the more distant Akigami and Yuya mineral springs stayed constant during the same period (Fig. 2a). In the southeast sites, 3He/4He ratios were mostly variable (Fig. 2b). At the Kanose site close to the cone, the ratio increased gradually and at a constant rate from November 1981 to November 2014. On the other hand, there are two step changes of helium isotope values at Shirakawa, Kakehashi and Shojima, sites located relatively distant from the cone. The 3He/4He ratios increased significantly from November 1981 to June 2003 and then suddenly decreased and remained at a constant value until after the 2014 eruption. In summary, the spatial and secular variations of helium isotopes are complex and there is no simple relationship, except for the recent increases of helium isotopes at the Nigorigo site closest to the cone.

Discussion

In order to study how the activation of Ontake volcano led to the fatal hydro-volcanic eruption, precise data analysis and hydro-geochemical modeling are necessary. In addition, the recent history of geotectonic events reported in the region is important for the interpretation of helium isotopes. These events are summarized as follows: The last magmatic activity was estimated to have occurred about 23,000 years ago 17 and the volcano had been believed to be dormant, even though weak fumarolic activity was observed at the southwestern flank of the central cone. The first historical hydro-volcanic eruption occurred on 28th October 1979, forming several new craters and ejecting large amounts of volcanic ash, rock and steam 18 . Five years later, a large earthquake (M6.8; the 1984 Western Nagano Earthquake) at shallow depth (2 km 19 ) occurred about 10 km southeast of Mt Ontake on 14th September 1984. Immediately after the earthquake, a large-scale landslide took place near the top of the volcano, killing 29 people on the southern slope. On 12th November 1992, seismic activity occurred beneath the summit, followed by a white plume rising to 100 m above the crater 20 . [...] Subsequent emission of mantle helium had ceased by the time of the ground uplift, probably due to exhaustion of mantle volatiles in the small magma volume. The constant increase of the 3He/4He ratio at the Kanose site (Fig. 2b) may be due to a switch of the source of mantle helium from the diapiric magma in the southeast flank to the central cone plumbing system between 2002 and 2004. The different patterns at the Kanose and Shirakawa sites may be due to the distance from the fault (SFig. 2). Kanose is located further away than Shirakawa, and the influence of the diapiric magma may be smaller there.
Figure 3 indicates the relationship between the distance from the central cone of Mt Ontake and the TROC of helium isotopes after June 2003. There is a negative correlation between the distance and the TROC, suggesting that the source of excess mantle helium is attributable to reactivated magma beneath the central crater. A decrease of the crustal helium contribution to the natural springs by aquifer rock dilatancy 26 is unlikely because there is no significant seismic activity in the northwest section. Therefore the increases in 3He/4He ratios over the recent ten years at the Nigorigo and Kanose sites (Fig. 2) are mostly related to the central magma source of Mt Ontake, which may be related to the hydro-volcanic eruption. There are two types of hydro-volcanic eruptions 5 : explosions of confined geothermal systems with or without the direct influence of magmatic fluids, and those caused by the vaporization of surface fluids percolating into the temporarily plugged hot conduit of the volcano. The most likely cause of the 2014 eruption could be the former type of explosion because there was no plugged hot conduit. Heating of shallow groundwater may have occurred during the magma rise, which may have increased the volatile pressure in the volcanic edifice. The ten-year increase of helium isotopes at the Nigorigo and Kanose sites suggests that the eruption process was slow (Fig. 4), caused by the gradual accumulation of mantle volatiles rather than by a rapid increase of volatile pressure produced by groundwater contact with the magma. Prior to the small hydro-volcanic eruption in March 2007, a very-long-period (VLP) volcanic event was detected by seismic observation 27 . The VLP event was explained as the response of a hydrothermal system to magma intrusion about 3 km beneath the summit of Mt Ontake. Therefore the accumulation of volatile pressure was ongoing since at least 2007, which corresponds to the increase of helium isotope ratios at Nigorigo (Fig. 2). To evaluate the risk of a possible hydro-volcanic eruption, it is important to study the rate of volatile input into the volcanic edifice. Monitoring of the volcanic SO2 flux may be useful to estimate this rate, but it was not conducted at the central cone of Mt Ontake before the 2014 eruption. Using our data it is possible to estimate the helium-3 flux at the conduit by a hydrodynamic dispersion model applied to the spatial variation of the helium isotopes in a given year 28 (see Methods). Assuming that the fringe of the conduit is 1 km away from the center (the same size as in the dike model 23 ), that the depth of the aquifer is 30 m with an uncertainty of a factor of three, and using the volcanic conduit diameter of 2 km, the hypothetical area of helium emission is 1.9 × 10⁵ m² and the total helium-3 flux from the conduit of Mt Ontake before June 2003 is 78 nmol/day. The magmatic CO2/3He and H2O/CO2 ratios of high-temperature subduction zone volcanic gases are well documented and summarized as 1 × 10¹⁰ and 100, respectively 29 . Using these values, the magmatic water flux is calculated as 1.4 tons/day based on the helium-3 flux and the H2O/3He ratio. The magmatic water flux increased to 1.7 tons/day in June 2005 as the helium-3 flux was enhanced. This excess water supply of 0.3 tons/day, which likely continued over the last 10 years, led to an accumulated water amount of 1000 tons (a back-of-the-envelope check of these conversions is sketched below).
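The conversion from helium-3 flux to magmatic water flux quoted above is simple ratio arithmetic; the following sketch reproduces it using only the values given in the text.

```python
# Back-of-the-envelope check of the flux conversion described above
# (values are the literature ratios and fluxes quoted in the text).
HE3_FLUX = 78e-9          # mol 3He/day at the conduit before June 2003
CO2_PER_HE3 = 1e10        # magmatic CO2/3He molar ratio (arc volcanic gases)
H2O_PER_CO2 = 100         # magmatic H2O/CO2 molar ratio
M_H2O = 18.0              # g/mol

water_flux_t = HE3_FLUX * CO2_PER_HE3 * H2O_PER_CO2 * M_H2O / 1e6  # tonnes/day
print(f"magmatic water flux ~ {water_flux_t:.1f} t/day")            # ~1.4 t/day

excess = 0.3              # t/day of extra water supply after the 3He increase
print(f"accumulated over 10 years ~ {excess * 365 * 10:.0f} t")     # ~1000 t
```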
This amount of water was introduced into the surrounding hydrothermal system, and excess water vapor may have been trapped in the conduit just beneath the central cone (Fig. 4). This excess water vapor could have provided the driving force for the 2014 eruption. In summary, we have observed a clear helium isotope increase at the hot spring close to Mt Ontake since June 2003, ten years before the fatal 2014 eruption. There was no consistent change at the distant sites. The helium anomaly is likely related to the recent activation of magma and is valuable for the mitigation of volcanic hazards in the future.

Methods

Sampling, analysis and data reduction. Hot and mineral spring gases were collected by the water displacement method using an inverted funnel, a manual pump and a lead glass container 9 . All sampling sites are natural springs and we did not use any lifting pump system. A portion of each gas sample was introduced into a metallic high-vacuum line in the laboratory, where helium and neon were purified by hot Ti getters and charcoal traps at liquid nitrogen temperature. Then the 4He/20Ne ratios were measured by a quadrupole mass spectrometer and helium was separated from neon by a cryogenic charcoal trap. Samples before 1990 and in 2003 were measured by a Nuclide noble gas mass spectrometer without separating helium from neon 30 , while those after 1990, except for 2003, were analyzed by a VG5400 mass spectrometer 31 . There is an experimental bias of about 9% between the two systems. However, the difference was well corrected by careful treatment 32,33 . Samples collected after the 2014 eruption were measured by the same system as the 1993-2009 samples. Therefore no bias is expected among them. Correction of the 3He/4He ratio for air contamination was made based on the 4He/20Ne ratio. If the 4He/20Ne ratio is close to the air value, the correction could be significantly erroneous 11 . Therefore we masked five samples with low 4He/20Ne ratios (STable 1).

Hydrodynamic dispersion model. In order to explain the observed helium isotope trend around the volcano (SFig. 1), a hydrodynamic dispersion model was developed 28 . Assuming that thermal fluids are supplied from a magma reservoir to the conduit at a constant rate, and that the boundary conditions are such that the height of the piezometric head has the same distribution in any vertical section through the axis of the conduit, it is possible to estimate the fluid flow and thus the helium isotopes based on the dispersion model. The equation governing the helium isotopes at a distance r under steady-state, homogeneous and isotropic conditions relates the nucleogenic and radiogenic production rates of 3He and 4He (3P and 4P) to the hypothetical concentrations of 3He and 4He at the conduit (3C and 4C). Assuming typical sedimentary material composing the aquifer, 3P and 4P are 3 × 10⁻² and 1.5 × 10⁶ atoms m⁻³ s⁻¹, respectively. It is possible to calculate the 3C and 4C values by fitting the observed helium isotope distribution to this dispersion equation by the least-squares method. Despite the model being simplistic, it reproduced well the spatial distribution of helium isotopes at several volcanoes (Mt Nevado del Ruiz, Mt Hakone, Mt Kusatsu and Mt Unzen) 9 . The method is applied to the spatial data sets of the years 1981, 1984, 1985, 1991, 1993, 1996, 1998, 2003, 2005, 2007, 2009, and 2014. It was difficult to fit the data of 2000 because the number of data points is too small.
The helium-3 flux at the conduit is estimated from the term 3C/r of the dispersion equation for each year. The secular variation of the helium-3 flux is plotted in SFig. 3, where the error is 2 sigma as obtained by the least-squares method.
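The air-contamination correction mentioned in the Methods is, in most helium-geochemistry work, the standard 4He/20Ne-based mixing correction. The sketch below assumes that standard form; it is not necessarily the exact implementation used in this study.

```python
# Sketch of the standard atmospheric-contamination correction used in helium
# geochemistry (assumed form; not taken verbatim from this study).
R_AIR = 1.382e-6            # atmospheric 3He/4He (Ra), as quoted in the text
HE_NE_AIR = 0.318           # atmospheric 4He/20Ne ratio

def corrected_ratio(r_obs, he_ne_obs):
    """Remove the air-derived helium component using the 4He/20Ne ratio."""
    x = he_ne_obs / HE_NE_AIR               # enrichment over air
    return (r_obs * x - R_AIR) / (x - 1.0)  # air-corrected 3He/4He

# Example: a sample at 6.5 Ra with 4He/20Ne = 5 is only mildly corrected,
# whereas samples with 4He/20Ne close to the air value are corrected strongly
# and are masked in this study as unreliable.
print(corrected_ratio(6.5 * R_AIR, 5.0) / R_AIR)   # ~6.9 Ra
```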
3,339
2015-08-19T00:00:00.000
[ "Geology" ]
p-p, p-$\Lambda$ and $\Lambda$-$\Lambda$ correlations studied via femtoscopy in pp reactions at $\sqrt{s}$ = 7 TeV We report on the first femtoscopic measurement of baryon pairs, such as p-p, p-$\Lambda$ and $\Lambda$-$\Lambda$, measured by ALICE at the Large Hadron Collider (LHC) in proton-proton collisions at $\sqrt{s}$ = 7 TeV. This study demonstrates the feasibility of such measurements in pp collisions at ultrarelativistic energies. The femtoscopy method is employed to constrain the hyperon-nucleon and hyperon-hyperon interactions, which are still rather poorly understood. A new method to evaluate the influence of residual correlations induced by the decays of resonances and experimental impurities is hereby presented. The p-p, p-$\Lambda$ and $\Lambda$-$\Lambda$ correlation functions were fitted simultaneously with the help of a new tool developed specifically for the femtoscopy analysis in small colliding systems 'Correlation Analysis Tool using the Schr\"odinger Equation' (CATS). Within the assumption that in pp collisions the three particle pairs originate from a common source, its radius is found to be equal to $r_{0} = 1.144\pm0.019$ (stat) $^{+0.069}_{-0.012}$ (syst) fm. The sensitivity of the measured p-$\Lambda$ correlation is tested against different scattering parameters which are defined by the interaction among the two particles, but the statistics is not sufficient yet to discriminate among different models. The measurement of the $\Lambda$-$\Lambda$ correlation function constrains the phase space spanned by the effective range and scattering length of the strong interaction. Discrepancies between the measured scattering parameters and the resulting correlation functions at LHC and RHIC energies are discussed in the context of various models. Introduction Traditionally femtoscopy is used in heavy-ion collisions at ultrarelativistic energies to investigate the spatial-temporal evolution of the particle emitting source created during the collision [1,2]. Assuming that the interaction for the employed particles is known, a detailed study of the geometrical extension of the emission region becomes possible [3][4][5][6][7][8][9][10]. If one considers smaller colliding systems such as proton-proton (pp) at TeV energies and assumes that the particle emitting source does not show a strong time dependence, one can reverse the paradigm and exploit femtoscopy to study the final state interaction (FSI). This is especially interesting in the case where the interaction strength is not well known as for hyperon-nucleon (Y-N) and hyperon-hyperon (Y-Y) pairs [11][12][13][14][15][16][17][18]. Hyperon-nucleon and hyperon-hyperon interactions are still rather poorly experimentally constrained and a detailed knowledge of these interactions is necessary to understand quantitatively the strangeness sector in the low-energy regime of Quantum-Chromodynamics (QCD) [19]. Hyperon-nucleon (p-Λ and p-Σ) scattering experiments have been carried out in the sixties [20][21][22] and the measured cross sections have been used to extract scattering lengths and effective ranges for the strong nuclear potential by means of effective models such as the Extended-Soft-Core (ESC08) baryon-baryon model [23] or by means of chiral effective field theory (χEFT) approaches at leading order (LO) [24] and next-to-leading order (NLO) [25]. The results obtained from the above-mentioned models are rather different, but all confirm the attractiveness of the Λ -nucleon (Λ-N) interaction for low hyperon momenta. 
In contrast to the LO results, the NLO solution claims the presence of a negative phase shift in the p-Λ spin singlet channel for Λ momenta p_Λ > 600 MeV/c. This translates into a repulsive core for the strong interaction evident at small relative distances. The same repulsive interaction is obtained in the p-wave channel within the ESC08 model [23]. The existence of hypernuclei [26] confirms that the N-Λ interaction is attractive within nuclear matter for densities below nuclear saturation ρ0 = 0.16 fm⁻³. An average value of U(ρ = ρ0, k = 0) ≈ −30 MeV [26], with k the hyperon momentum in the laboratory reference system, is extracted from hypernuclear data on the basis of a dispersion relation for hyperons in a baryonic medium at ρ0. The situation for the Σ hyperon is currently rather unclear. There are some experimental indications for the formation of Σ-hypernuclei [27,28], but different theoretical approaches predict both attractive and repulsive interactions depending on the isospin state and partial wave [23,25,29]. The scarce experimental data for this hyperon prevent any validation of the models. A Ξ-hypernucleus candidate was detected [30] and ongoing measurements suggest that the N-Ξ interaction is weakly attractive [31]. A recent work by the Lattice HAL-QCD Collaboration [32] shows how this attractive interaction could be visible in a p-Ξ femtoscopy analysis, in particular by comparing correlation functions for different static source sizes. This further motivates the extension of femtoscopic studies from heavy ions to pp collisions, since in the latter case the source size decreases by about a factor of three at LHC energies, leading to an increase in the strength of the correlation signal [33]. If one considers hyperon-hyperon interactions, the most prominent example is the Λ-Λ case. The H-dibaryon Λ-Λ bound state was predicted [34] and later a double-Λ hypernucleus was observed [35]. From this single measurement a shallow Λ-Λ binding energy of a few MeV was extracted, but the H-dibaryon state was never observed. Also recent lattice calculations [36] obtain a rather shallow attraction for the Λ-Λ state. The femtoscopy technique was employed by the STAR collaboration to study Λ-Λ correlations in Au-Au collisions at √s_NN = 200 GeV [15]. First, a shallow repulsive interaction was reported for the Λ-Λ system, but in an alternative analysis, where the residual correlations were treated more accurately [37], a shallow attractive interaction was found. These analyses demonstrate the limitations of such measurements in heavy-ion collisions, where the source parameters are time-dependent and the emission time might not be the same for all hadron species. The need for more experimental data to study the hyperon-nucleon, hyperon-hyperon and even the [...]

Data analysis

In this paper we present results from studies of the p-p, p-Λ and Λ-Λ correlations in pp collisions at √s = 7 TeV employing the data collected by ALICE in 2010 during the LHC Run 1. Approximately 3.4 × 10⁸ minimum-bias events have been used for the analysis, before event and track selection. A detailed description of the ALICE detector and its performance in the LHC Run 1 (2009-2013) is given in [53,54]. The inner tracking system (ITS) [53] consists of six cylindrical layers of high-resolution silicon detectors placed radially between 3.9 and 43 cm around the beam pipe. The two innermost layers are silicon pixel detectors (SPD) and cover the pseudorapidity range |η| < 2.
The time projection chamber (TPC) [55] provides full azimuthal coverage and allows charged-particle reconstruction and identification (PID) via the measurement of the specific ionization energy loss dE/dx in the pseudorapidity range |η| < 0.9. The Time-Of-Flight (TOF) [56] detector consists of Multigap Resistive Plate Chambers covering the full azimuthal angle in |η| < 0.9. The PID is obtained by measuring the particle's velocity β. The above-mentioned detectors are immersed in a B = 0.5 T solenoidal magnetic field directed along the beam axis. The V0 detectors are small-angle plastic scintillators used for triggering, placed on either side of the collision vertex along the beam line at +3.3 m and −0.9 m from the nominal interaction point and covering the pseudorapidity ranges 2.8 < η < 5.1 (V0-A) and −3.7 < η < −1.7 (V0-C).

Event selection

The minimum-bias interaction trigger requires at least two out of the following three conditions: two pixel chips hit in the outer layer of the silicon pixel detectors, a signal in V0-A, a signal in V0-C [54]. Reconstructed events are required to have at least two associated tracks, and the distance along the beam axis between the reconstructed primary vertex and the nominal interaction point should be smaller than 10 cm. Events with multiple reconstructed SPD vertices are considered as pile-up. In addition, background events are rejected using the correlation between the number of SPD clusters and the tracklet multiplicity. The tracklets are constrained to the primary vertex, and hence a typical background event is characterized by a large number of SPD clusters but only a few tracklets, while a pile-up event contains a larger number of clusters at the same tracklet multiplicity. After application of these selection criteria, about 2.5 × 10⁸ events are available for the analysis.

Proton candidate selection

To ensure a high-purity sample of protons, strict selection criteria are imposed on the tracks. Only particle tracks reconstructed with the TPC, without additional matching with hits in the ITS, are considered in the analysis in order to avoid biases introduced by the non-uniform acceptance in the ITS. However, the track fitting is constrained by the independently reconstructed primary vertex. Hence, the obtained momentum resolution is comparable to that of globally reconstructed tracks, as demonstrated in [54]. The selection criteria for the proton candidates are summarized in Tab. 1. The selection on the number of reconstructed TPC clusters serves to ensure the quality of the track, to assure a good pT resolution at large momenta and to remove fake tracks from the sample. To enhance the number of protons produced at the primary vertex, a selection is imposed on the distance of closest approach (DCA) in both the beam (z) and transverse (xy) directions. In order to minimize the fraction of protons originating from the interaction of primary particles with the detector material, a low transverse-momentum cutoff is applied [57]. At high pT a cutoff is introduced to ensure the purity of the proton sample, as the purity drops below 80 % for larger pT due to the decreasing separation power of the combined TPC and TOF particle identification. For particle identification both the TPC and the TOF detectors are employed.
For low momenta (p < 0.75 GeV/c) only the PID selection from the TPC is applied, while for larger momenta the information of both detectors is combined, since the TPC does not provide a sufficient separation power in this momentum region. The combination of TPC and TOF signals is done by employing a circular selection criterion n_σ,combined ≡ √((n_σ,TPC)² + (n_σ,TOF)²), where n_σ is the number of standard deviations of the measured from the expected signal at a given momentum. The expected signal is computed in the case of the TPC from a parametrized Bethe-Bloch curve, and in the case of the TOF from the expected β of a particle with a mass hypothesis m. In order to further enhance the purity of the proton sample, the n_σ is computed assuming different particle hypotheses (kaons, electrons and pions), and if the corresponding hypothesis is found to be more favorable, i.e., the n_σ value is found to be smaller, the proton hypothesis and thus the track are rejected. With these selection criteria a pT-averaged proton purity of 99 % is achieved. The purity remains above 99 % for pT < 2 GeV/c and then decreases to 80 % at the momentum cutoff of 4.05 GeV/c.

Lambda candidate selection

The weak decay Λ → pπ− (BR = 63.9 %, cτ = 7.3 cm [58]) is exploited for the reconstruction of the Λ candidates, and accordingly the charge-conjugate decay for the Λ̄ identification. The reconstruction method forms so-called V0 decay candidates from two charged-particle tracks using a procedure described in [59]. The selection criteria for the Λ candidates are summarized in Tab. 1. The V0 daughter tracks are globally reconstructed tracks and, in order to maximize the efficiency, are selected by a broad particle identification cut employing the TPC information only. Additionally, the daughter tracks are selected by requiring a minimum impact parameter of the tracks with respect to the primary vertex. After the selection, all positively charged daughter tracks are combined with a negatively charged partner to form a pair. The resulting Λ decay vertex is then defined as the point of closest approach between the two daughter tracks. The distance of closest approach of the two daughter tracks with respect to the Λ decay vertex, DCA(p, π), is used as an additional quality criterion of the Λ candidate. The Λ momentum is calculated as the sum of the daughter momenta. A minimum transverse-momentum requirement on the Λ candidate is applied to reduce the contribution of fake candidates. Finally, a selection is applied on the opening angle α between the Λ momentum and the vector pointing from the primary vertex to the secondary V0 decay vertex. The rather broad PID selection of the daughter tracks introduces a residual pion contamination of the proton daughter sample that, in combination with the charge-conjugate pion of the V0, leads to the misidentification of K0S as Λ candidates. These K0S candidates are removed by a selection on the π+π− invariant mass. The reconstructed invariant mass, its resolution and the purity are determined by fitting eight spectra of equal size in pT ∈ [0.3, 4.3] GeV/c with the sum of two Gaussian functions describing the signal and a second-order polynomial to emulate the combinatorial background. The obtained values for the mean and variance of the two Gaussian functions are combined with an arithmetic average. The determined mass is in agreement with the PDG value for the Λ and Λ̄ particles [58].
A total of 5.9 × 10⁶ and 5.5 × 10⁶ candidates, with a signal-to-background ratio of 20 and 25 and a pT-averaged purity of 96 % and 97 %, is obtained for Λ and Λ̄, respectively. It should be noted that the Λ purity is constant within the investigated pT range. Finally, a selection on the pπ− (p̄π+) invariant mass is applied. To avoid any contribution from auto-correlations, all Λ candidates are checked for shared daughter tracks. If this condition is found to be true, the Λ candidate with the smaller cosine of the pointing angle is removed from the sample. If a primary proton is also used as a daughter track of a Λ candidate, the latter is rejected. Figure 1 shows the pT-integrated invariant mass of the Λ and Λ̄ candidates.

The correlation function

The observable of interest in femtoscopy is the two-particle correlation function, which is defined as the probability to find simultaneously two particles with momenta p1 and p2 divided by the product of the corresponding single-particle probabilities,

C(p1, p2) = P(p1, p2) / (P(p1) P(p2)) .    (1)

These probabilities are directly related to the inclusive Lorentz-invariant spectra P(p1, p2), P(p1) and P(p2). In absence of a correlation signal the value of C(p1, p2) equals unity. By approximating the emission process and measuring the momenta of the particles, the size of the particle-emitting source can be studied. Following [2], Eq. (1) can then be rewritten as

C(k*) = ∫ S(r*) |ψ(r*, k*)|² d³r* ,    (2)

where k* is the relative momentum of the pair defined as k* = ½ |p*1 − p*2|, with p*1 and p*2 the momenta of the two particles in the pair rest frame (PRF, denoted by the *), S(r*) the distribution of the relative distance of particle pairs in the PRF, the so-called source function, and ψ(r*, k*) the relative wave function of the particle pair. The latter contains the particle interaction term and determines the shape of the correlation function. In this work, the p-p correlation function, which is theoretically well understood, is employed to obtain the required information about the source function, and this information is then used to study the p-Λ and Λ-Λ interactions. In order to relate the correlation function to experimentally accessible quantities, Eq. (1) is reformulated [2] as

C(k*) = N A(k*) / B(k*) ,    (3)

where the distribution of particle pairs from the same event is denoted by A(k*) and B(k*) is a reference sample of uncorrelated pairs. The latter is obtained using event-mixing techniques, in which the particle pairs of interest are combined from single particles from different events. To avoid acceptance effects of the detector system, the mixing procedure is conducted only between particle pairs stemming from events with similar z position of the primary vertex and similar multiplicity [2]. The normalization parameter N for the mixed- and same-event yields is chosen such that the mean value of the correlation function equals unity for k* ∈ [0.2, 0.4] GeV/c. As the correlation functions of all studied baryon-baryon pairs, i.e., p-p, p-Λ and Λ-Λ, exhibit identical behavior compared to those of their respective anti-baryon-anti-baryon pairs, the corresponding samples are combined to enhance the statistical significance. Therefore, in the following p-p denotes the combination p-p ⊕ p̄-p̄, and accordingly for p-Λ and Λ-Λ.
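To make the construction in Eq. (3) concrete, the following toy sketch builds a correlation function from simulated same-event and mixed-event pair samples and applies the normalization in k* ∈ [0.2, 0.4] GeV/c. It is an illustration only, not ALICE analysis code, and the low-k* enhancement is put in by hand.

```python
# Toy construction of C(k*) = N * A(k*)/B(k*) from simulated pair samples.
import numpy as np

edges = np.linspace(0.0, 0.5, 51)                       # k* binning, GeV/c
rng = np.random.default_rng(2)

# Mixed-event reference B(k*): phase-space-like, no correlation.
kstar_mixed = rng.triangular(0.0, 0.5, 0.5, size=200_000)

# Same-event sample A(k*): resample the reference with a low-k* enhancement
# standing in for the final-state interaction.
w = 1.0 + 0.8 * np.exp(-(kstar_mixed / 0.05) ** 2)
kstar_same = rng.choice(kstar_mixed, size=50_000, p=w / w.sum())

A, _ = np.histogram(kstar_same, bins=edges)
B, _ = np.histogram(kstar_mixed, bins=edges)

# Normalize so that <C> = 1 in k* in [0.2, 0.4] GeV/c, as in the text.
centers = 0.5 * (edges[:-1] + edges[1:])
norm_bins = (centers > 0.2) & (centers < 0.4)
ratio = A / B
N = 1.0 / np.mean(ratio[norm_bins])
C = N * ratio
print(C[:5])   # enhancement above unity at low k*
```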
Decomposition of the correlation function

The experimental determination of the correlation function is distorted by two distinct mechanisms. The sample of genuine particle pairs includes misidentified particles and feed-down particles from strong and weak decays. In this work a new method to separate all the individual components contributing to a measured correlation signal is proposed. The correlation functions arising from resonances or impurities of the sample are weighted with the so-called λ parameters and in this way are taken into account in the total correlation function of interest, where the indices i, j run over all possible impurity and feed-down contributions. These λ parameters can be obtained employing exclusively single-particle properties such as the purity and the feed-down probability. The underlying mathematical formalism is outlined in App. A. For the case of the p-p correlation the following contributions must be taken into account, where X̃ refers to misidentified particles of species X. p_Λ and p_Σ+ correspond to protons stemming from the weak decay of the corresponding hyperons. The Ξ → Λπ → pππ decays are explicitly considered in the feed-down contribution of the p-Λ correlation and hence are omitted in Eq. (5) to avoid double counting. As shown in App. A, the fraction of primary protons and their feed-down fractions are required to calculate the λ parameters of the different contributions to Eq. (5). The information about the origin of the protons, i.e., whether the particles are of primary origin, originate from feed-down or stem from interactions with the detector material, is obtained by fitting Monte Carlo (MC) templates to the experimental distributions of the distance of closest approach of the track to the primary vertex. The MC templates and the purity are extracted from Pythia [60] simulations using the Perugia 2011 tune [61], which were filtered through the ALICE detector and the reconstruction algorithm [53]. The pT averages are then calculated by weighting the quantities of interest by the respective particle yields dN/dpT. The resulting fraction of primary protons averaged over pT is 87 %, with the other 13 % of the total yield associated with weak decays, while the contribution from the detector material is found to be negligible. The feed-down from weakly decaying particles is evaluated by using cross sections from Pythia and for the proton sample consists of the Λ (70 %) and Σ+ (30 %) contributions. The individual contributions to the total correlation function are presented in Tab. 2. The decomposition of the p-Λ correlation function is conducted in a similar manner as for the p-p pair, however considering the purities and feed-down fractions of both particles. The Λ purity is obtained from fits to the invariant mass spectra in eight bins of pT and defined as S/(S + B), where S denotes the actual signal and B the background. The feed-down contribution is determined from MC template fits of the experimental distributions of the cosine of the pointing angle, in which a total of four templates are considered corresponding to direct, feed-down, material and impurity contributions. The production probability dN/dpT is employed in order to obtain pT-weighted average values. Around 73 % of the Λs are directly produced in the primary interaction and 23 % originate from weakly decaying resonances, which is in line with the values quoted in [62]. The remaining yield is associated with combinatorial background and Λs produced in the detector material.
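Assuming, as the formalism sketched in App. A suggests, that each pair-level λ parameter is a product of single-particle fractions, the proton numbers quoted above translate into pair weights roughly as in the following illustrative sketch. This is our own illustration of the idea, not a reproduction of Tab. 2.

```python
# Illustration (our sketch of the App. A idea): pair-level lambda weights built
# as products of single-particle fractions, using the proton numbers quoted in
# the text (purity 99%, 87% primaries, feed-down split 70% Lambda / 30% Sigma+).
purity = 0.99
f_primary, f_from_lambda, f_from_sigma = 0.87, 0.13 * 0.70, 0.13 * 0.30

fractions = {
    "p":        purity * f_primary,      # genuine primary protons
    "p_Lambda": purity * f_from_lambda,  # protons from Lambda decays
    "p_Sigma+": purity * f_from_sigma,   # protons from Sigma+ decays
    "fake":     1.0 - purity,            # misidentified tracks
}

# Genuine p-p pairs and the p-(p from Lambda) cross term (factor 2: either leg).
lam_pp = fractions["p"] ** 2
lam_p_pLambda = 2 * fractions["p"] * fractions["p_Lambda"]
print(f"lambda(p-p genuine) ~ {lam_pp:.2f}")          # roughly 0.74
print(f"lambda(p-p_Lambda)  ~ {lam_p_pLambda:.2f}")   # roughly 0.16
```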
The main contribution to the feed-down fraction is expected to originate from the Ξ states, with no preference for the neutral or the charged state. This hypothesis is supported by Pythia simulations, where the secondary Λ particles arise from the weak decays of the Ξ0 (48 %) and Ξ± (49 %). The remaining contribution in the simulation arises from the Σ0, which however is treated separately. Since the latter decays electromagnetically almost exclusively into Λγ [58], it has a very short lifetime and cannot be experimentally differentiated from the sample of primary Λs. Measurements of the ratio R_Σ0/Λ = σ_Σ0/σ_Λ have obtained values around 1/3 [63-66], however with large uncertainties for hadronic collisions at high energies. For lack of better estimates the value of 1/3 is used in the following. The resulting λ parameters for the p-Λ pair are shown in Tab. 2. For the Λ-Λ correlation function the following pair contributions are taken into account. The resulting λ parameters are shown in Tab. 2. Notably, the actual pair of interest contributes only about one third of the signal, while pair fractions involving in particular Σ0 and Ξ give a significant contribution. The statistical uncertainties of these parameters are negligible and their influence on the systematic uncertainties will be evaluated in Sec. 4.

Detector effects

The shape of the experimentally determined correlation function is affected by the finite momentum resolution. This is taken into account when the experimental data are compared to model calculations in the fitting procedure by transforming the modeled correlation function, see Eq. (15), to the reconstructed momentum basis. When the tracks of particle pairs entering the correlation function are almost collinear, i.e., have a low k*, detector effects can affect the measurement. No hint of track merging or splitting is found and therefore no explicit selection criteria are introduced.

Non-femtoscopic background

For sufficiently large relative momenta (k* > 200 MeV/c) and increasing separation distance, the FSI among the particles is suppressed and hence the correlation function should approach unity. As shown in Fig. 2, however, the measured correlation functions for p-p and p-Λ exhibit an increase for k* larger than about 200 MeV/c for the two systems. Such non-femtoscopic effects, probably due to energy-momentum conservation, are in general more pronounced in small colliding systems where the average particle multiplicity is low [2]. In the case of meson-meson correlations at ultra-relativistic energies, the appearance of long-range structures in the correlation functions for moderately small k* (k* < 200 MeV/c) is typically interpreted as originating from mini-jet-like structures [49,67]. Pythia also shows the same non-femtoscopic correlation for larger k* but fails to reproduce quantitatively the behavior shown in Fig. 2, as already observed for the angular correlation of baryon-baryon and anti-baryon-anti-baryon pairs [57]. Energy-momentum conservation leads to a contribution to the signal which can be reproduced with a formalism described in [68] and accordingly is also considered in this work. Therefore, a linear function C(k*)_non-femto = a k* + b, where a and b are fit parameters, is included in the global fit as C(k*) = C(k*)_femto × C(k*)_non-femto to improve the description of the signal by the femtoscopic model.
The fit parameters of the baseline function are obtained in k* ∈ [0.3, 0.5] GeV/c for the p-p and p-Λ pairs. For the case of the Λ-Λ correlation function, the uncertainties of the data do not allow a baseline to be added, which is therefore omitted in the femtoscopic fit.

Genuine correlation function

For the p-p correlation function the Coulomb and the strong interaction as well as the antisymmetrization of the wave functions are considered [69]. The strong-interaction part of the potential is modeled employing the Argonne v18 potential [51], considering the s and p waves. The source is assumed to be isotropic with a Gaussian profile of radius r0. The resulting Schrödinger equation is then solved with CATS [52]. In the case of p-Λ and Λ-Λ we employ the Lednický and Lyuboshitz analytical model [70] to describe these correlation functions. This model is based on the assumption of an isotropic source with a Gaussian profile, where r0 is the size of the source. Additionally, the complex scattering amplitude is evaluated by means of the effective-range approximation with the scattering length f0^S and the effective range d0^S, with S denoting the total spin of the particle pair. In the following the usual sign convention of femtoscopy is employed, where an attractive interaction leads to a positive scattering length. With these assumptions the analytical description of the correlation function for uncharged particles [70] is obtained, where ℜf^S(k*) (ℑf^S(k*)) denotes the real (imaginary) part of the complex scattering amplitude. The F1(Q_inv r0) and F2(Q_inv r0) are analytical functions resulting from the approximation of isotropic emission with a Gaussian source, and the factor ρ_S contains the pair fraction emitted into a certain spin state S. For the p-Λ pair unpolarized emission is assumed. The Λ-Λ pair is composed of identical particles and hence quantum statistics additionally needs to be considered, which leads to the introduction of an additional term in the Lednický model, as employed, e.g., in [15]. While the CATS framework can provide an exact solution for any source and local interaction potential, the Lednický-Lyuboshitz approach uses the known analytical solution outside the range of the strong-interaction potential and takes into account its modification in the inner region in an approximate way only. That is why this approach may not be valid for small systems.

Residual correlations

Table 2 demonstrates that a significant admixture of residuals is present in the experimental sample of particle pairs. A first theoretical investigation of these so-called residual correlations was conducted in [71]. This analysis relies on the procedure established in [18], where the initial correlation function of the residual is calculated and then transformed to the new momentum basis after the decay. For the p-p channel only the feed-down from the p-Λ correlation function is considered, which is obtained by fitting the p-Λ experimental correlation function and then transforming it to the p-p momentum basis. All contributions are weighted by the corresponding λ parameters to obtain the modeled correlation function C_model,p-p(k*) for this pair. All other residual correlations are assumed to be flat.
For the p-Λ pair, residual correlations from the p-Σ0, p-Ξ and Λ-Λ pairs are taken into account. As the Λ-Λ correlation function is rather flat, no further transformation is applied. The p-Σ0 correlation function is obtained using predictions from [72]. As the decay products of the reaction Ξ → Λπ are charged and therefore accessible by ALICE, we measure the p-Ξ correlation function. The experimental data are parametrized with a phenomenological function, where the parameter a_Ξ is employed to scale the function to the data and has no physical meaning. Its value is found to be a_Ξ = 3.88 fm. The modeled correlation function C_model,p-Λ(k*) for the pair is obtained accordingly. As the present knowledge of the hyperon-hyperon interaction is scarce, in particular regarding the interaction of the Λ with other hyperons, all residual correlations feeding into the Λ-Λ correlation function are considered to be consistent with unity. It should be noted that the residual correlation functions, after weighting with the corresponding λ parameter, transformation to the momentum basis of the correlation of interest, and taking into account the finite momentum resolution, only barely contribute to the total fit function.

Total correlation function model

The correlation function modeled according to the considerations discussed above is then multiplied by a linear function to correct for the baseline, as discussed in Sec. 3.3, and weighted with a normalization parameter N, where C_model(k*) incorporates all considered theoretical correlation functions, weighted with the corresponding λ parameters as discussed in Sec. 3.1 and 3.4. The inclusion of a baseline is further motivated by the presence of a linear but non-flat correlation observed in the data outside the femtoscopic region (see Fig. 2 for k* ∈ [0.3, 0.5] GeV/c). When attempting to use a higher-order polynomial to model the background, the resulting curves are still compatible with a linear function, while their interpolation into the lower k* region leads to an overall poorer fit quality.

Table 3: Selection parameter variation and the resulting relative systematic uncertainty on the p-p, p-Λ and Λ-Λ correlation functions.

Systematic uncertainties

4.1 Correlation function

The systematic uncertainties of the correlation functions are extracted by varying the proton and Λ candidate selection criteria according to Tab. 3. Due to the low number of particle pairs, in particular at low k*, the resulting variations of the correlation functions are in general much smaller than the statistical uncertainties. In order to still estimate the systematic uncertainties, the data are rebinned by a factor of 10. The systematic uncertainty on the correlation function is obtained by computing the ratio of the default correlation function to the one obtained with the respective cut variation. Whenever this results in two systematic uncertainties, i.e., from a variation upwards and downwards, the average is taken into account. Then all systematic uncertainties from the cut variations are summed in quadrature. This is then extrapolated to the finer binning of the correlation function by fitting a polynomial of second order. The obtained systematic uncertainties are found to be largest in the lowest k* bin. The individual contributions in that bin are summarized in Tab. 3 and the resulting total systematic uncertainty amounts to about 4 % for p-p, 1 % for p-Λ and 2.5 % for Λ-Λ.
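A schematic version of this cut-variation procedure, with simulated inputs standing in for the real varied correlation functions, could look as follows; it only mirrors the bookkeeping described above, not the actual ALICE implementation.

```python
# Sketch of the cut-variation systematics: relative deviations per variation,
# quadratic sum, then a second-order polynomial carries the envelope from the
# rebinned k* axis to the finer binning (all inputs simulated).
import numpy as np

kstar_coarse = np.linspace(0.02, 0.48, 12)     # rebinned by ~10, GeV/c
rng = np.random.default_rng(3)

# Relative deviations |C_varied/C_default - 1| for a few hypothetical variations,
# larger at low k* where the signal sits.
variations = [np.abs(rng.normal(0.0, 0.006 * (1 + 2 * np.exp(-kstar_coarse / 0.1)),
                                kstar_coarse.size)) for _ in range(6)]
total_rel = np.sqrt(np.sum(np.square(variations), axis=0))  # quadratic sum

coeffs = np.polyfit(kstar_coarse, total_rel, deg=2)         # 2nd-order polynomial
kstar_fine = np.linspace(0.0, 0.5, 120)
syst_fine = np.polyval(coeffs, kstar_fine)                  # per-bin systematic
print(f"relative systematic in the lowest k* bin ~ {syst_fine[0]:.1%}")
```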
Variations of the proton DCA selection are not taken into account for the computation of the systematic uncertainty, since they dilute (enhance) the correlation signal by introducing more (fewer) secondaries into the sample. This effect is captured by a change in the λ parameters.

Femtoscopic fit

To evaluate the systematic uncertainty of the femtoscopic fit, and hence of the measurement of the radius r0, the fit is performed applying the following variations. Instead of the common fit, the radius is determined separately from the p-p and p-Λ correlation functions. Λ-Λ is excluded because it imposes only a shallow constraint on the radius, in particular since the scattering parameters are left unconstrained in the fit. Furthermore, the inputs to the λ parameters are varied by 25 %, while keeping the purity and the fractions of primaries and secondaries constant, since varying those would correspond to a variation of the particle selection and thus would require a different experimental sample, as discussed above. Additionally, all fit ranges of both the femtoscopic and the baseline fits are varied individually by up to 50 % and 10 %, respectively. The lower bound of the femtoscopic fit is always left at its default value. For the p-Λ correlation function the dependence on the fit model is studied by replacing the Lednický and Lyuboshitz analytical model with the potential introduced by Bodmer, Usmani, and Carlson [73], for which the Schrödinger equation is explicitly solved using CATS. Additionally, the fits for the p-p and p-Λ correlation functions are performed without the linear baseline. The radius is determined for 2000 random combinations of the above-mentioned variations. The resulting distribution of radii is not symmetric, and the systematic uncertainty is therefore extracted as the boundaries of the 68 % confidence interval around the median of the distribution; it amounts to about 4 % of the determined radius.

Results

The obtained p-p, p-Λ and Λ-Λ correlation functions are shown in Fig. 3. For each of the correlation functions we do not observe any mini-jet background in the low k* region, as observed in the case of neutral [74] and charged [50] kaon pairs and charged pion pairs [49]. This demonstrates that the femtoscopic signal in baryon-baryon correlations is dominant in ultrarelativistic pp collisions. The signal amplitudes for the p-p and p-Λ correlations are much larger than those observed in analogous studies in heavy-ion collisions [1,11,12,14], due to the small particle-emitting source formed in pp collisions, allowing a higher sensitivity to the FSI. In absence of residual contributions and any FSI, the Λ-Λ correlation function is expected to approach 0.5 as k* → 0. The statistics of the presented sample is limited, but the Λ-Λ correlation exceeds the value expected from quantum-statistics effects alone, which is likely due to the attractive FSI of the Λ-Λ system [26,37]. The experimental data are fitted using CATS, and hence the exact solution of the Schrödinger equation, for the p-p correlation and using the Lednický model for the p-Λ and Λ-Λ correlations. The three fits are performed simultaneously; in this way the source radius is extracted and different scattering parameters for the p-Λ and Λ-Λ interactions can be tested.
While in the case of the p-p and p-Λ correlation functions the existence of a baseline is clearly visible in the data, the low number of pairs in the Λ-Λ channel does not allow for such a conclusion. Therefore, the baseline is not included in the model for the Λ-Λ correlation function. The simultaneous fit is carried out using a combined χ2, with the radius as a free parameter common to all correlation functions. The fit range is k* ∈ [0, 0.16] GeV/c for p-p and k* ∈ [0, 0.22] GeV/c for p-Λ and Λ-Λ. Hereafter we adopt the convention of positive scattering lengths for attractive interactions and negative scattering lengths for repulsive interactions. The p-Λ strong interaction is modeled employing scattering parameters obtained using the next-to-leading-order expansion of a chiral effective field theory at a cutoff scale of Λ = 600 MeV [25]. Table 4: Scattering parameters for the p-Λ system from various theoretical calculations [24,25,75-81] and the corresponding degree of consistency with the experimentally determined correlation function, expressed in numbers of standard deviations nσ. The χEFT scattering parameters are obtained at a cutoff scale Λ = 600 MeV. The usual sign convention of femtoscopy is employed, where an attractive interaction leads to a positive scattering length. The simultaneous fit of the p-p, p-Λ and Λ-Λ correlation functions yields a common radius of r0 = 1.144 ± 0.019 (stat) +0.069 −0.012 (syst) fm. The blue line in the left panel of Fig. 3 shows the result of the femtoscopic fit to the p-p correlation function using the Argonne v18 potential, which describes the experimental data in a satisfactory way. The red curve in the central panel shows the result of the NLO calculation for p-Λ. In the case of Λ-Λ (right panel), the yellow curve represents the femtoscopic fit with free scattering parameters. The width of the femtoscopic fits corresponds to the systematic uncertainty of the correlation function discussed in Sec. 4. After the fit with the NLO scattering parameters has converged, the p-Λ correlation function for the same source size is compared to the data using various theoretically obtained scattering parameters [24,25,75-81], as summarized in Tab. 4. The degree of consistency is expressed in the number of standard deviations nσ. The employed models include several versions of meson exchange models, such as the Nijmegen model D (ND) [75], model F (NF) [76], soft core (NSC89 and NSC97) [77,78] and extended soft core (ESC08) [79]. Additionally, models considering contributions from one- and two-pseudoscalar-meson exchange diagrams and from four-baryon contact terms in χEFT at leading [24] and next-to-leading order [25] are employed, together with the first version of the Jülich Y-N meson exchange model [80], which in a later version [81] also features one-boson exchange. All employed models describe the data equally well, and hence the available data do not yet allow for a discrimination among them. As an example, we show in the central panel of Fig. 3 how employing scattering parameters different from the NLO ones is reflected in the p-Λ correlation function. The green curve corresponds to the results obtained employing LO scattering parameters; the theoretical correlation function is clearly sensitive to the input parameters as k* → 0.
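The degree of consistency quoted in Tab. 4 can be computed along the following lines: for each set of scattering parameters the modeled correlation function is compared to the data via a χ2, whose p-value is then translated into a number of standard deviations. The p-value-to-nσ convention shown here (a two-sided normal quantile) and the interface of the model function are assumptions for illustration only.

import numpy as np
from scipy.stats import chi2 as chi2_dist, norm

def consistency_nsigma(kstar, data, errors, model_curve):
    """Translate the data-model chi^2 into an equivalent number of standard deviations (sketch)."""
    chi2_value = np.sum(((data - model_curve) / errors) ** 2)
    ndf = len(kstar)                            # no parameters are fitted at this stage
    p_value = chi2_dist.sf(chi2_value, ndf)
    return norm.isf(p_value / 2.0)              # two-sided conversion to n_sigma

# For a Table-4-like comparison one would loop over the parameter sets, e.g.:
# for name, f0, d0 in parameter_sets:
#     curve = plam_model(kstar, r0, f0, d0)     # hypothetical model function
#     print(name, consistency_nsigma(kstar, data, errors, curve))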
In order to probe which scattering parameters are compatible with the measured Λ-Λ correlation function, the effective range and the scattering length of the potential are varied within d0 ∈ [0, 18] fm and 1/f0 ∈ [−2, 5] 1/fm, while keeping the normalization constant N as the only free fit parameter. It should be noted that the resulting variations of N are on the percent level. The resulting correlation functions, obtained by employing the Lednický and Lyuboshitz analytical model [70] and considering also the contributions from secondaries and impurities, are compared to the data. The degree of consistency is expressed in the number of standard deviations nσ, as displayed in Fig. 4 together with an overview of the present knowledge about the Λ-Λ interaction. For a detailed overview of the currently available models see e.g. [37], from which we have obtained the collection of scattering parameters. In addition to the Nijmegen meson exchange models mentioned above, the data are compared to various other theoretical calculations. An exemplary boson-exchange potential is Ehime [82,83], whose strength is fitted to the outdated double-hypernuclear binding energy ∆BΛΛ = 4 MeV [84] and which is accordingly known to be too attractive. As an exemplary quark model including baryon-baryon interactions with meson exchange effects, the fss2 model [85,86] is used. Moreover, the potentials by Filikhin and Gal (FG) [87] and by Hiyama, Kamimura, Motoba, Yamada, and Yamamoto (HKMYY) [88], which are capable of describing the NAGARA event [89], are employed. In contrast to the p-Λ case, the agreement with the data increases with every revision of the Nijmegen potential, while the introduction of the extended soft core slightly increases the deviation. In particular, the solution NSC97f yields the overall best agreement with the data. The correlation function modeled using the scattering parameters of the Ehime model, which is known to be too attractive, deviates by about 2 standard deviations from the data. For an attractive interaction (positive f0) the correlation function is pushed from the quantum statistics distribution for two fermions (correlation function equal to 0.5 at k* = 0) towards unity. As a result, within the current uncertainties the Λ-Λ correlation function is rather flat and close to 1, and this lack of structure makes it impossible to extract the two scattering parameters with a reasonable uncertainty. This means that even with a tenfold increase of the data sample, as expected from the Run 2 data, it will be very difficult to constrain precisely the region f0 > 0. The region of negative scattering length f0 is connected in scattering theory either to a repulsive interaction or to the existence of a bound state close to threshold, accompanied by a change in the sign of the scattering length. Since the Λ-Λ interaction is known to be slightly attractive above threshold [35], the measurement of a negative scattering length would strongly support the existence of the H-dibaryon. Notably, the correlation function modeled employing the scattering parameters obtained by the STAR collaboration in Au-Au collisions at √sNN = 200 GeV [15], together with all the contributions from secondaries and impurities, deviates by 6.8 standard deviations from the data. This is also shown by the cyan curve displayed in the right panel of Fig. 3, which is obtained using the source radius and the λ parameters from this analysis and the scattering parameters from [15].
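The scan of the Λ-Λ scattering-parameter plane described at the beginning of this section can be organized as a simple double loop over the tested d0 and 1/f0 values. The function that builds the total modeled correlation function (the genuine Lednický term plus the flat secondary and impurity contributions, with the normalization refitted) is treated as a user-supplied black box here, and the p-value-to-nσ conversion is one common convention, not necessarily the one used in the analysis.

import numpy as np
from scipy.stats import chi2 as chi2_dist, norm

def lambda_lambda_scan(kstar, data, errors, build_total_model, d0_values, inv_f0_values):
    """Map of the data-model consistency (in n_sigma) over the (d0, 1/f0) plane (sketch)."""
    nsigma = np.zeros((len(d0_values), len(inv_f0_values)))
    ndf = len(kstar) - 1                                   # one free parameter: the normalization N
    for i, d0 in enumerate(d0_values):
        for j, inv_f0 in enumerate(inv_f0_values):
            model = build_total_model(kstar, inv_f0, d0)   # includes refitted N and feed-down terms
            chi2_value = np.sum(((data - model) / errors) ** 2)
            p_value = chi2_dist.sf(chi2_value, ndf)
            nsigma[i, j] = norm.isf(p_value / 2.0)
    return nsigma

# d0_grid = np.linspace(0.0, 18.0, 37); inv_f0_grid = np.linspace(-2.0, 5.0, 36)
# consistency_map = lambda_lambda_scan(kstar, data, errors, build_total_model, d0_grid, inv_f0_grid)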
On the other hand, these STAR scattering parameters and all those corresponding to the gray-shaded area in Fig. 4 lead to a negative genuine Λ-Λ correlation function if the Lednický model is employed. The total correlation function that is compared to the experimental data is not negative, because the impurity and secondary contributions lead to a total correlation function that is always positive. This means that the combination of large effective ranges and negative scattering lengths translates into unphysical correlation functions for small colliding systems such as pp. This effect is not immediately visible in larger colliding systems such as Au-Au at √sNN = 200 GeV measured by STAR, where the obtained correlation function does not become negative. This demonstrates that these scattering-parameter intervals, combined with the Lednický model, are not suited to describe the correlation functions measured in small systems. One could test the corresponding local potentials with the help of CATS [52], since the latter does not suffer from the limitations of the Lednický model arising from the use of the asymptotic solution. We have directly compared the correlation functions obtained employing CATS and the Λ-Λ local potentials reported in [37] with the correlation functions obtained using the corresponding scattering parameters and the Lednický model. For the typical source radius of 1.3 fm the deviations are within 10%. This disfavours the region of negative scattering lengths and large effective ranges for the Λ-Λ correlation. This study is the first measurement with baryon pairs in pp collisions at √s = 7 TeV, while other femtoscopic analyses were conducted with neutral [74] and charged [50] kaon pairs and charged pion pairs [49] with the ALICE experiment. The radius obtained from baryon pairs is found to be slightly larger than that measured from meson-meson pairs at comparable transverse mass, as shown in Fig. 5. Fig. 5: (Color online) Comparison of radii obtained for different charged-particle multiplicity intervals in the pp collision system at √s = 7 TeV [49,50,74]. The error bars correspond to the statistical and the shaded regions to the systematic uncertainties. The black point is the radius obtained in this analysis with p-p, p-Λ and Λ-Λ pairs, while the gray bar corresponds to the range of mT covered in this analysis. Summary This paper presents the first femtoscopic measurement of p-p, p-Λ and Λ-Λ pairs in pp collisions at √s = 7 TeV. No evidence for the presence of a mini-jet background is found, and it is demonstrated that this kind of study with baryon-baryon and antibaryon-antibaryon pairs is feasible. With a newly developed method to compute the contributions to the correlation function arising from impurities and weakly decaying resonances from single-particle quantities only, the genuine correlation functions of interest can be extracted from the signal. These correlation functions contribute 74% for p-p, 47% for p-Λ and 30% for Λ-Λ to the total signal. A simultaneous fit of all correlation functions with a femtoscopic model featuring residual correlations stemming from the above-mentioned effects yields a radius of the particle-emitting source of r0 = 1.144 ± 0.019 (stat) +0.069 −0.012 (syst) fm.
For the first time, the Argonne v18 NN potential with the s and p waves was used to successfully describe the p-p correlation and thereby obtain a solid benchmark for our investigation. For the p-Λ correlation function, the NLO parameter set obtained within the framework of chiral effective field theory is consistent with the data, but other models are also found to be in agreement with the data. The present pair sample in the Λ-Λ channel allows us to constrain the available scattering-parameter space. Large effective ranges d0 in combination with negative scattering lengths lead to unphysical correlation functions if the Lednický model is employed to compute the correlation function. This also holds true for the average values published by the STAR collaboration in Au-Au collisions at √sNN = 200 GeV, which are found to be incompatible with the measurement in pp collisions within the Lednický model. The larger data samples of the LHC Run 2 and Run 3, where we expect up to a factor of ten and one hundred more data, respectively, will enable us to extend the method also to Σ, Ξ and Ω hyperons and thus further constrain the hyperon-nucleon interaction. A Derivation of the λ parameters Let 'X' be a specific particle type and let X denote the number of particles of that species. For each particle, different subsets X_i are defined, each representing a unique origin of the particle, where i = 0 corresponds to the case of a primary particle; the rest are either particles originating from feed-down or misidentified particles. In particular, the indices 1 ≤ i ≤ N_F are associated with feed-down contributions and N_F + 1 ≤ i ≤ N_F + N_M with impurities, where N_F is the number of feed-down channels and N_M the number of impurity channels. In the present work we assume that all impurity channels contribute with a flat distribution to the total correlation; therefore we do not study differentially the origin of the impurities and combine them into a single channel, i.e. N_M = 1. Further, we define X_F as the total number of particles that stem from feed-down and X_M as the total number of particles that were misidentified (i.e. impurities). X_0 is the number of correctly identified primary particles that are of interest for the femtoscopic analysis. The purity P is the fraction of correctly identified particles, not necessarily primary, relative to the total number of particles in the sample, P(X) = (X_0 + X_F)/X (A.3). For the later discussion it is beneficial to combine the two definitions and refer to the purity as P(X_i) = (X_0 + X_F)/X for i ≤ N_F and P(X_i) = X_M/X otherwise (A.5). Another quantity of interest is the channel fraction f_i, which is defined as the fraction of particles originating from the i-th channel relative to the total number of either correctly identified or misidentified particles. As discussed in the main body of the paper, both the purity and the channel fractions can be obtained either from MC simulations or from MC template fits. The product of the two reads P(X_i) f(X_i) = X_i/X. (A.7) Next we relate P(X_i) and f(X_i) to the correlation function between particle pairs, which is defined as the ratio of the same-event to the mixed-event yield, C(XY) = N(XY)/M(XY), where N and M are the yields of an 'XY' particle pair in same and mixed events, respectively.
Note that this is a raw correlation function which is not properly normalized. The normalization is discussed in the main body of the paper, but it is irrelevant to the current discussion and will be omitted. Both N and M are yields which can be decomposed into the sum of their contributions. Using the previously discussed notion of different channels of origin, the yields can be written as sums over the individual channel combinations, N(XY) = Σ_{i,j} N_{i,j}(XY) and M(XY) = Σ_{i,j} M_{i,j}(XY). Hence the total correlation function becomes C(XY) = Σ_{i,j} λ_{i,j}(XY) C_{i,j}(XY), where C_{i,j}(XY) is the contribution to the total correlation of the i,j-th channel of origin of the particles 'X,Y' and λ_{i,j}(XY) is the corresponding weight coefficient. How to obtain the individual functions C_{i,j}(XY) is discussed in the main body of the paper. The weights λ_{i,j} can be derived from the purities and channel fractions of the particles 'X' and 'Y'. This is possible since λ_{i,j} depends only on the mixed-event sample, for which the underlying assumption is that the particles are not correlated. In that case the two-particle yield M(XY) can be factorized, and according to Eq. (A.11) the λ coefficients can be expressed as λ_{i,j}(XY) = M_{i,j}(XY)/M(XY) = (X_i/X)(Y_j/Y). The last step follows directly from Eq. (A.7) applied to the mixed-event samples of 'X' and 'Y'. Since Eq. (A.7) relates X_i/X to the known quantities P and f, the λ coefficients can be rewritten as λ_{i,j}(XY) = P(X_i) f(X_i) P(Y_j) f(Y_j). (A.14) We would like to point out that, due to the definition of P(X_i), the sum of all λ parameters is automatically normalized to unity.
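A minimal numerical sketch of Eq. (A.14), assuming the single combined impurity channel (N_M = 1) used in this work; the function names and the exact data layout are illustrative.

import numpy as np

def single_particle_probabilities(purity, fractions):
    """P(X_i)*f(X_i) for each channel of origin of one particle species (sketch).

    purity    : P(X), fraction of correctly identified candidates in the sample
    fractions : f(X_i) for the primary and feed-down channels, summing to unity
    The misidentified candidates are combined into one flat channel, so its
    channel fraction is 1 by construction and its weight is (1 - purity).
    """
    probs = [purity * f for f in fractions]   # correctly identified channels
    probs.append(1.0 - purity)                # combined impurity channel
    return np.array(probs)

def lambda_parameters(purity_x, fractions_x, purity_y, fractions_y):
    """lambda_{i,j}(XY) = P(X_i) f(X_i) * P(Y_j) f(Y_j), cf. Eq. (A.14) (sketch)."""
    px = single_particle_probabilities(purity_x, fractions_x)
    py = single_particle_probabilities(purity_y, fractions_y)
    lambdas = np.outer(px, py)
    # the weights sum to unity automatically, as noted in the text
    assert np.isclose(lambdas.sum(), 1.0)
    return lambdas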
11,666.2
2018-05-31T00:00:00.000
[ "Physics" ]
Natural Hazards and Earth System Sciences Investigating the earthquake catalog of the National Observatory of Athens The earthquake catalog of the National Observatory of Athens (NOA) since the beginning of the Greek National Seismological Network development in 1964 is compiled and analyzed in this study. The b-value and the spatial and temporal variability of the magnitude of completeness of the catalog are determined, together with the times of significant seismicity rate changes. It is well known that man-made inhomogeneities and artifacts exist in earthquake catalogs produced by changing seismological networks, and in this study the chronological order of periods of network expansion, instrumental upgrades, and changes in practice and procedures at NOA is reported. The earthquake catalog of NOA is the most detailed data set available for the Greek area, and the results of this study may be employed for the selection of trustworthy parts of the data in earthquake prediction research. Introduction Earthquake catalogs are a valuable result of fundamental seismological practice, and they form the basis for seismicity, seismotectonic, seismic risk and hazard investigations. Before proceeding with such investigations it is essential to examine and report on the spatial and temporal homogeneity and completeness of the catalog. This is because earthquake catalogs are produced by the recording of seismic waves in seismological networks that change in time and space with varying operational practices and procedures. Greece has the highest seismicity in Europe, and the first "Agamemnon"-type seismograph was installed at the National Observatory of Athens (NOA) in 1898. In spite of this, a Greek seismographic network did not begin its operation until 1964, and at this time the seismological bulletin production and the network earthquake catalog also began. This catalog spans over four decades of uninterrupted seismological network data and contains more than 75 000 events, so it is arguably the most detailed recent instrumental earthquake catalog of Greece. A number of scientific papers have used the entire NOA catalog or parts of it; however, little work has been reported concerning artifacts in the homogeneity and completeness of the data set. Significant network expansions, infrastructure upgrades and staff changes have taken place in the last four decades of catalog production. Knowledge of the chronological order of the steps towards the improvement in the detectability of NOA's network, and of the way that earthquake parameters have been reported, is of great importance in order to detect more accurately possible seismicity anomalies related to earthquake preparatory processes. It is the purpose of this investigation to report on such inhomogeneities, artifacts and biases which may distort the earthquake catalog of NOA, and this may provide further insight when examining in detail the seismicity patterns of Greece.
History of network development at NOA The historical development of the permanent seismological station installations by NOA in Greece is reviewed by Bath (1983) and more recently by Papazachos and Papazachou (2003). The first traditional WWSSN station in Greece began operating in Athens at the Institute of Geodynamics (NOA) in 1962, and a year later the second station was installed in Patras, to be followed by the installation of four more stations on the Greek islands: Crete, Kefalonia, Lesvos and Rodos. In 1964 a Wood-Anderson seismograph was installed in Athens, and the respective local magnitude has been used ever since. The seismological network expanded to 13 stations by 1973, and the determination of the source parameters was carried out manually by using appropriate travel-time curves. Advances in the telecommunication infrastructure technology in Greece allowed for data transmission from the remote stations to Athens via leased telephone lines in real time by 1981. Around this time the rapid increase in seismic data flow required a change of the operating practices at NOA, and a mainframe computer with the Hypo71 computer program (Lee and Lahr, 1975) was employed for the daily analysis and bulletin production procedure. The second period of station expansion began in 1988, with the addition of 14 seismological stations by 1990. At this time portable computers were introduced at NOA, and the analysis, bulletin production and archiving gradually changed from analog to digital. More time was spent on signal detection and less on routine analysis; this improved the detectability and led to an increase in reporting. The third station expansion occurred around the end of 1994, with the installation of the first digital seismographic network (Chouliaras and Stavrakakis, 1997). A three-component, short-period seismographic network comprised of nine remote stations was operated by a dial-up telemetry server at NOA by 1995. Simultaneously with the analog-to-digital instrumentation transition at NOA in 1995, digital signal analysis practices and procedures were introduced. During the next few years an increase in the research staff occurred, and upgrades in the station infrastructure of the NOA network continued with 12 new three-component digital seismic stations that began their operation around 1998. After the catastrophic earthquake of 7 September 1999, and in view of the preparations for the 2004 Olympic Games, the NOA digital seismographic network rapidly expanded around Athens. At the same time NOA participated in various projects coordinated by ORFEUS and EMSC that concerned the real-time exchange of broad-band waveform data and parametric results for the rapid determination of epicenters (Stavrakakis et al., 1999; Van Eck et al., 2002). The transition from short-period to broad-band sensors and the related software changes lasted until 2004, and this was accompanied by the largest staff increase in NOA's history in 2005 in order to handle the large influx of seismological data.
Seismological data The monthly bulletins of NOA from 1964 to 2009 have been used to compile the NOA network earthquake catalog for the region 34°-42° N and 19°-29° E, as described recently by Chouliaras (2009a, b). The seismicity analysis is performed using the ZMAP software, a set of tools written in Matlab®, a Mathworks commercial software language (http://www.mathworks.com), with open code and driven by a graphical user interface (GUI) (Wiemer, 2001). The epicentral distribution of the earthquakes in the NOA network catalog delineates three areas of dense seismicity, as seen in Fig. 1: the Hellenic Arc subduction zone to the south and its northwestern extension towards the Ionian islands, the North Anatolian Fault extension into the northeastern Aegean as far as Central Greece, and the Gulf of Corinth. Figures 2 through 5 show the respective time, hour, magnitude and depth histograms of the data set. From these results the following are pointed out: (a) the significant increases in the registered earthquakes around 1968, 1981, 1995 and 2004; (b) the 20-25% decrease in the registered earthquakes during the daylight hours due to the increased noise at the recording stations; (c) the asymmetrical peak around M=3.1 due to the insufficient detectability of small events; and (d) the depth of the vast majority of events in the catalog is shallower than 50 km, with the most frequent depth range being 5 to 15 km. The time variation of the reported magnitudes in the catalog, as seen in Fig. 6, shows periodic "bursts" of large earthquakes around the same periods mentioned earlier for Fig. 2, and the same time periods show up as sudden increases in the cumulative seismicity curve of the entire catalog shown in Fig. 7 (blue line), from a compilation of 75 449 seismic events. Large crustal earthquakes usually produce aftershock sequences that contain hundreds to thousands of events, thus introducing bias in rate-change investigations. Following Chouliaras (2009a), the removal of aftershock clusters from NOA's catalog using the declustering method of Reasenberg (1985) eliminates these sudden increases, as shown by the declustered curve (red line) in Fig. 7, which will be used further on in this study. In order to assess the quality of the earthquake catalog, the criterion of the magnitude of completeness (Mc) is employed. Mc is defined as the lowest magnitude of the catalog at which all of the events are detected in space and in time (Rydelek and Sacks, 1989; Taylor et al., 1990; Wiemer and Wyss, 2000). Different approaches and methodologies exist for determining Mc, as described by Schorlemmer and Woessner (2008); the assumption of self-similarity of the earthquake process, implying that for a given volume a simple power law can approximate the frequency-magnitude distribution (FMD), will be adopted here. The FMD describes the relationship between the frequency of occurrence and the magnitude of earthquakes (Ishimoto and Iida, 1939; Gutenberg and Richter, 1944): log10 N = a − b M, (1) where N is the cumulative number of earthquakes having magnitudes larger than M, and a and b are constants.
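As a minimal illustration of Eq. (1), the cumulative FMD can be built directly from a list of catalog magnitudes; in the Gutenberg-Richter regime the logarithm of the cumulative count falls on a straight line with slope −b. The binning width and variable names are illustrative assumptions.

import numpy as np

def cumulative_fmd(magnitudes, m_bin=0.1):
    """Cumulative frequency-magnitude distribution N(>=M), cf. Eq. (1) (sketch)."""
    mags = np.asarray(magnitudes, dtype=float)
    thresholds = np.arange(mags.min(), mags.max() + m_bin, m_bin)
    counts = np.array([(mags >= m).sum() for m in thresholds])
    return thresholds, counts

# thresholds, counts = cumulative_fmd(catalog_magnitudes)
# np.log10(counts) plotted against thresholds is approximately linear above Mc,
# with intercept a and slope -b.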
The b-value describes the relative size distribution of the events, and it is determined by linear least-squares regression or by the maximum-likelihood technique, b = log10(e)/[<M> − (Mc − Mbin/2)], (2) where <M> is the mean magnitude of the sample and Mbin is the binning width of the catalog (Aki, 1965; Bender, 1983; Utsu, 1999). Two different Wiemer and Wyss (2000) methods will be used to determine Mc in this investigation: 1. the Maximum Curvature method (MAXC), which simply takes the maximum value of the first derivative of the FMD curve; 2. the Goodness of Fit Test (GFT), which compares the observed FMD curve with a synthetic one; a model is sought for which a predefined percentage (90% or 95%) of the observed data is modeled by a straight line. The results of the MAXC method in Fig. 8 show that the cumulative frequency-magnitude distribution and its first derivative indicate a magnitude of completeness at M=3.1, with a b-value of 1.14. In the previous section of this study a historical account of the upgrading of NOA's seismological network, as well as of the related changes in analysis procedures and practices, was given, since these may introduce biases and inhomogeneities in NOA's earthquake catalog. The identification of artificial or man-made seismicity anomalies in earthquake catalogs has been discussed in several studies (Habermann, 1982, 1983; Habermann and Wyss, 1984; Wyss and Bufford, 1985; Wyss, 1991; Zuniga, 1989; Zuniga and Wyss, 1995; Zuniga et al., 2000, 2005). The Genas algorithm (Habermann, 1983, 1987) is the appropriate tool to investigate such artificial rate changes, and this task is performed on declustered catalogs in order to avoid false alarms from rate changes due to aftershock sequences and clusters. This algorithm identifies significant changes in seismicity rate (the number of events larger and smaller than a given magnitude with respect to time) by comparing the mean rate before the time under study to that of the period which follows. The procedure is repeated for increasing values up to the end of the seismicity record. Every time a significant change is found, the catalog is marked and split into two segments which are iteratively analyzed in the same fashion. The result provides the times which stand out as the beginning of periods where increases and/or decreases of seismicity are detected, as well as the magnitude range affected by these changes. Figure 11 shows the output of the Genas algorithm as times of significant rate changes, where circles indicate rate increases and crosses rate decreases. Since the network catalog was in its build-up phase around 1966 and 1967, this period may be regarded as inferior to the rest and ignored; however, significant rate changes around 1973, 1990, 1995 and 2000-2004 are observed. These periods were also mentioned previously as periods of seismological network expansion and upgrading, and for this reason we see their imprints in the seismicity catalog as rate changes. These rate changes may or may not be accompanied by changes in the reporting of magnitudes, namely "magnitude shifts" (Zuniga and Wyss, 1995), which are crucial in determining the homogeneity and completeness of the catalog.
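A compact sketch of the two estimates used above: the maximum-curvature Mc is taken at the most populated magnitude bin of the non-cumulative FMD, and the b-value follows from the maximum-likelihood expression of Eq. (2) applied to the complete part of the catalog. Binning and variable names are illustrative, and MAXC is known to possibly underestimate Mc, as discussed further below.

import numpy as np

def maxc_mc_and_b(magnitudes, m_bin=0.1):
    """Mc from the maximum-curvature method and the maximum-likelihood b-value of Eq. (2) (sketch)."""
    mags = np.asarray(magnitudes, dtype=float)
    edges = np.arange(mags.min(), mags.max() + 2 * m_bin, m_bin)
    counts, _ = np.histogram(mags, bins=edges)
    # MAXC: the magnitude of completeness is taken at the most populated bin
    # of the non-cumulative frequency-magnitude distribution
    mc = edges[np.argmax(counts)] + 0.5 * m_bin
    # maximum-likelihood b-value computed from the events above Mc
    complete = mags[mags >= mc]
    b_value = np.log10(np.e) / (complete.mean() - (mc - m_bin / 2.0))
    return mc, b_value

# mc, b = maxc_mc_and_b(catalog_magnitudes)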
The rate change from the "clean periods" 1973.5 to 1977 is compared to that from 1983 to 1987 in Fig. 12. The cumulative and non-cumulative FMDs show a small (11.54%) increase in the seismicity rate and no magnitude shift in the data set, indicating homogeneous magnitude reporting for the entire period. In a similar fashion the periods before and after the significant increases in the seismicity rate indicated by Genas were successively compared, and even though significant rate increases were observed, no apparent magnitude shifts existed. The seismological network of NOA has gone through periods of station expansion, instrumentation and staff changes, as well as changes in the procedures and practices of data analysis. As described earlier in this study, these factors usually cause inhomogeneities, artifacts or biases in earthquake catalogs and may be misinterpreted as natural processes. The periods of rate increases around 1973, 1983, 1990, 1995 and 2000-2004 as seen by Genas coincide with periods of significant station increases in NOA's seismological network. These rate increases are also attributed to the instrumentation and software upgrading, as well as to the staff increases of the NOA network. The results show that such changes affect the seismicity rate of the database equally in all magnitude ranges. In a recent investigation Uyeda et al. (2009) used the Japan Meteorological Agency earthquake catalog to investigate the idea of "natural time", as first presented by Varotsos et al. (2005a, b), in order to forecast the occurrence time of an impending earthquake. The high density of seismological network stations in Japan allowed that investigation to use a relatively low threshold magnitude of M=2 in the catalog analysis. The results obtained after considering the seismicity subsequent to a Seismic Electric Signals activity they recorded (which was similar to the ones recorded in Greece, e.g., see Varotsos et al., 2003) succeeded in predicting the occurrence of an M=6.0 event within a time window of a few days. The latter is a characteristic example showing that the detectability of the seismological network is an important factor in searching for critical-stage seismicity. In view of this fact, and since NOA's earthquake catalog for Greece is also used extensively in ongoing earthquake prediction research, the properties of this catalog were investigated. Among others, the chronological order of changes in the network station infrastructure was studied, and it is found that they influence the results of the analysis procedures. Thus the present study enables the selection of appropriate parts of the data set for trustworthy analysis. Edited by: M. E. Contadakis Reviewed by: two anonymous referees Fig. 1. Earthquake epicenters (red) between 1964 and 2009 from the earthquake catalog of NOA. Fig. 2. Time histogram of NOA's earthquake catalog. Arrows indicate times of increased seismic activity. Fig. 3. Hourly variation of the seismicity in NOA's earthquake catalog. Arrow indicates maximum day/night variation. Fig. 5. Depth histogram of NOA's earthquake catalog. Fig. 7. Cumulative seismicity curves. The blue curve is the NOA earthquake catalog and the red curve is its declustered equivalent.
Fig. 8. Cumulative frequency-magnitude distribution and its first derivative (green line) for NOA's earthquake catalog. The crosshair is at a value of Mc=3.1. The b-value of 1.14 and its error (±0.03) are determined with the weighted least-squares method. Fig. 9. Results of the Goodness of Fit Test (GFT) for NOA's earthquake catalog. The arrow indicates a 90% confidence for a magnitude of completeness (Mc) of 3.1. Fig. 10. Map of the spatial distribution of the magnitude of completeness (Mc) of NOA's earthquake catalog. Figure 9 shows the GFT results, which indicate that Mc has a 90% and a 95% confidence for the values of 3.1 and 3.5, respectively. These two results in combination show that a value of Mc=3.1 may be chosen as representative of the investigated data set; however, we must consider the study of Woessner and Wiemer (2005), which indicates that the two methods used in this study may underestimate Mc by about 0.2. The difference in Mc as a function of space is influenced by the seismological network configuration, as shown by Chouliaras (2009a, b), and the spatial mapping of Mc may identify regions in the outer margins of the network that give radically different reporting and should not be used in quantitative studies. This methodology is applied to the NOA network catalog data, and Fig. 10 shows the spatial variability of Mc in Greece. The lowest Mc region is that around Athens, with an Mc around 2, and this value increases outwards, with Mc values around 3 for Central Greece, the Peloponnese and Northern Greece, and values around 4 or more in the bordering regions all around Greece where NOA's seismic network is sparse (http://www.gein.noa.gr/services/net figure.gif). Fig. 11. Genas algorithm result for NOA's declustered earthquake catalog. Circles indicate rate increases and crosses rate decreases. Fig. 12. A comparison of the cumulative and non-cumulative frequency-magnitude distributions from two successive periods of NOA's earthquake catalog. The period 1973.5-1977 is compared to 1983-1987. Fig. 13. The temporal variation of the magnitude of completeness for NOA's earthquake catalog. The comparison of 1973.5 to 1977 with 2005 to 2009 gives a rate increase of 550%. The temporal variation of the magnitude of completeness Mc for the data set of the NOA network is seen in Fig. 13. It is expected that Mc changes with time in data sets from expanding networks with inhomogeneous reporting practices and procedures, and this is also observed here, where Mc decreases from 3.8 to 3.0 around 1970 and also from 3.2 to 3.0 around 1982. This result is also confirmed by the Genas results of Fig.
11 and is attributed to significant increases (almost a doubling) in the number of seismological stations comprising the NOA network. The unexpected increase in the Mc value from 3.0 to 3.2 around the year 2000, in a period of station expansion and software upgrades, will be explained in a separate study regarding the detectability of NOA's network. 3 Discussion and conclusion The earthquake catalog of NOA for the Greek area since the beginning of its seismological network installation in 1964 is analyzed in this study. The uninterrupted operation and fundamental seismological practice at NOA during the last four decades provided a database of more than 75 000 seismic events until 2009. Statistical analysis of this catalog using two different methods indicates a magnitude of completeness of Mc=3.1 as an indicator of the detectability of the network, and a frequency-magnitude relation with a b-value of 1.14. These values are shown to vary spatially in Greece and are strongly influenced by the seismological network configuration as well as by the local seismicity.
4,501
2009-06-23T00:00:00.000
[ "Geology" ]
Experimental Characterization of Laser Trepanned Microholes in Superalloy GH4220 with Water-Based Assistance An experiment on water-assisted millisecond laser trepanning of superalloy GH4220 was carried out, and the effects of pulse energy on the hole entrance morphology, diameter, roundness, cross-section morphology, taper angle, sidewall roughness, and recast layer in air and with water-based assistance were compared and analyzed. The results show that, compared with the air condition, water-based assistance improved the material removal rate and hole quality, increased the diameter of the hole entrance and exit, increased the hole roundness, decreased the hole taper angle, decreased the hole sidewall roughness, and reduced the recast layer thickness. In addition, under the combined action of water and steam inside the hole, the sidewall surface morphology quality was improved. Compared with the air condition, the spatter around the hole entrance was reduced, but the oxidation phenomenon formed by the thermal effect surrounding the hole entrance with water-based assistance was more obvious. The research provides technical support for the industrial application of millisecond laser drilling. Introduction Laser drilling has the advantages of no tool wear, high machining precision, low cost, high drilling efficiency, etc. It can be used to drill high-aspect-ratio holes in a wide variety of materials. Nowadays, laser drilling is widely used in the aerospace, aircraft, medical device, and automobile industries, among others [1-6]. Millisecond laser drilling is widely used in industrial applications. However, some defects, such as microcracks, heat-affected zones, and recast layers, are generated during millisecond laser drilling [7-10]. Scholars have found that water assistance can improve the quality of laser drilling. Zhu et al. [11] improved the quality of picosecond laser drilling using a water-assisted method on the back of the workpiece, and a large number of holes with an exit diameter of 55 µm were drilled in a 60-µm-thick stainless steel sheet. After the sheet was drilled through, water entered the hole through the capillary phenomenon, and the laser was reflected at the gas-liquid interface. After the water medium was irradiated, mechanical and cavitation effects were generated; combined with the cooling effect of the water, the quality of the hole was improved. At the same time, the water-assisted method on the back of the workpiece could also reduce the hole taper angle, the recast layer, and the heat-affected zone generated in laser drilling. Behera et al. [12] investigated water-assisted laser drilling using different pulse durations (millisecond, nanosecond, and femtosecond) and compared it with the traditional laser drilling method, demonstrating the superiority of water-assisted laser drilling; the shorter the pulse duration, the better the drilling quality. They found that the mechanism of water-assisted laser drilling mainly includes light transmission (absorption) in the liquid, liquid heating and vaporization, bubble evolution (formation, growth, and collapse), and material evolution (heating, melting, and vaporization). Chen et al. [13] conducted nanosecond laser drilling of silicon nitride ceramics underwater and compared it with the air environment. It was found that the underwater environment had a more obvious effect on the hole taper. When the scanning speed was constant, the deeper the hole, the smaller the taper.
Under the same hole depth, the taper decreased with decreasing scanning speed. At the same time, underwater laser drilling could obtain a better hole sidewall. Feng et al. [14] carried out picosecond laser drilling of zirconia underwater to study the change in the hole geometric quality after optimizing the process parameters. The experimental results showed that the cracks on the hole sidewall were reduced, and the hole sidewall surface roughness was reduced, in the water environment. Wang et al. [15] selected a single-crystal silicon carbide wafer (4H-SiC) with a thickness of 500 µm as the experimental material to study the effect of water assistance on femtosecond laser drilling. It was found that phenomena such as inlet debris redeposition, cracks, surface material falling off, heat-affected zones, and recast layers could be eliminated by water scouring and diffusion in the hole. In addition, the water-layer thickness and the pulse repetition rate had a great impact on the drilling efficiency and hole quality. Above all, although research on water-assisted laser drilling has been carried out, there is a lack of systematic research and analysis on water-assisted laser drilling, and there have been no relevant reports on water-assisted millisecond laser trepanning of superalloy GH4220. In order to further improve the quality of millisecond laser drilling, the water-based assisted laser trepanning method was used to carry out laser drilling experiments. The effects of air and water-based assistance on the hole morphology, diameter, roundness, cross-sectional morphology, taper angle, sidewall roughness, and recast layer thickness were compared and analyzed. This research could provide technical support for the industrial application of millisecond laser drilling. Materials and Methods The water-assisted laser trepanning device is shown in Figure 1. It mainly included a machining head, workpiece, tank, fixture, and motion platform. The laser used was an Nd:YAG laser; the laser parameters are given in reference [16]. In millisecond laser trepanning, the beam was applied perpendicularly to the workpiece, and the laser focus was located on the upper surface of the workpiece. The relative circular motion between the laser beam and the workpiece surface was formed by controlling the movement trajectory of the workpiece in the horizontal plane, and the diameter of the cutting path was 400 µm, as shown in Figure 1. The upper surface of the workpiece was exposed to air, and the lower surface was submerged in water, with a submerged depth of about 1.6 mm (the workpiece thickness). In order to reduce the influence of the assist gas on the water, the lowest air pressure allowed by the equipment was used. The assist gas used was argon at a pressure of 0.1 MPa (the assist gas was applied by coaxial blowing). The parameters used are shown in Table 1; the number of circles indicates how many circles the workpiece moved.
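The relative circular motion described above can be illustrated with a few lines that generate the waypoints of the 400 µm cutting path followed by the workpiece; the angular step, the number of points and the function name are illustrative assumptions, not the actual motion-controller program.

import numpy as np

def trepanning_waypoints(path_diameter_um=400.0, points_per_circle=360, n_circles=1):
    """(x, y) waypoints, in micrometres, of the circular trepanning path (sketch)."""
    radius = path_diameter_um / 2.0
    theta = np.linspace(0.0, 2.0 * np.pi * n_circles,
                        points_per_circle * n_circles, endpoint=False)
    # the workpiece is moved so that the stationary, focused beam traces this circle
    return np.column_stack((radius * np.cos(theta), radius * np.sin(theta)))

# waypoints = trepanning_waypoints(n_circles=3)   # e.g. three circles, cf. Table 1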
The nickel-based superalloy GH4220 (manufacturer: Dongguan Tengfeng Metal Materials Co., Ltd.; Dongguan, China) used in this experiment needed to be prepared before the laser drilling. The purchased superalloy bar (with a diameter of 30 mm) was cut into sheets of the same thickness by wire cutting, and a polishing allowance of 0.1 mm was reserved. Detergent was used to remove the oil stains remaining on the workpiece surface from wire cutting in order to facilitate subsequent polishing. Afterward, the scratches left by the wire cutting were removed using a metallographic sander and water abrasive paper, the polished workpiece was placed into a beaker and cleaned with an ultrasonic cleaner for 5 min (the cleaning solution was absolute ethanol), and the debris and dirt generated during the grinding process on the workpiece surface were removed. The changes to the workpiece at the various stages are shown in Figure 2. The thickness of the workpiece used in the experiment after pretreatment was 1.6 mm. After the experiment, the workpiece needed to be processed in multiple steps to obtain more experimental data. Grinding was used to remove the spatter around the hole entrance and exit, but in order to prevent the molten material from blocking the hole when grinding the hole cross-section, a large amount of clean water could be added. After grinding, the hole sidewall must be polished with a metallographic polisher to prepare for the subsequent corrosion of the hole cross-section. In this experiment, in order to reduce the uncertainty and measurement error, each group of experiments was repeated three times, and the average value was taken for analysis. After the experiment, the hole diameter was measured several times from different angles, and the average value was taken to reduce the measurement error. The angle difference between adjacent measured diameters was 30°, as shown in Figure 3. The formula for calculating the hole diameter is given in Equation (1). The hole's roundness was described by the hole circularity deviation (the deviation between the hole's maximum and minimum radius): the smaller the circularity deviation, the better the hole's roundness. The hole circularity deviation (∆r) was calculated using the following Equation (2).
∆r = rmax − rmin (2) The through-hole taper angle α was calculated following reference [17] from the entrance diameter d1, the exit diameter d2, and the workpiece thickness h. The hole sidewall roughness (Sa) was measured using the KEYENCE confocal laser scanning microscopy (CLSM) analysis software (MultiFileAnalyzer 1.3.1.120).
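The quantities defined in this section can be evaluated as sketched below. The averaging of the angular diameter measurements and the circularity deviation follow Equations (1) and (2); the taper-angle expression written here is the standard geometric form in terms of d1, d2 and h, assumed because the formula of [17] is not reproduced in the text, and the commented example values are purely illustrative.

import numpy as np

def hole_metrics(entrance_diameters_um, exit_diameters_um, thickness_um):
    """Mean diameters, circularity deviation and taper angle of a through hole (sketch)."""
    d1 = float(np.mean(entrance_diameters_um))    # averaged entrance diameter, cf. Eq. (1)
    d2 = float(np.mean(exit_diameters_um))        # averaged exit diameter
    # circularity deviation, Eq. (2): maximum minus minimum radius at the entrance
    delta_r = (np.max(entrance_diameters_um) - np.min(entrance_diameters_um)) / 2.0
    # taper angle from entrance/exit diameters and workpiece thickness h
    # (standard geometric relation, assumed here in place of the formula from [17])
    alpha_deg = np.degrees(np.arctan((d1 - d2) / (2.0 * thickness_um)))
    return d1, d2, delta_r, alpha_deg

# Example with illustrative numbers: six diameters measured at 30-degree steps, h = 1600 um
# d1, d2, dr, alpha = hole_metrics([412, 409, 415, 411, 408, 414],
#                                  [365, 368, 362, 366, 364, 367], 1600.0)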
Spatter around the Hole Entrance The spatter around the hole entrance in air and with water-based assistance is shown in Figure 4. It was found that the spatter around the hole entrance with water-based assistance was less than that in air, which indicated that the recoil pressure generated by the water entering the hole after the hole was drilled through enhanced the removal of molten material and debris. In addition, both in the case of air and with water-based assistance, there was a thermal-effect area (oxidation area) around the hole. This oxidation area was offset to one side of the hole. It was mainly caused by the laser trepanning method used in this experiment. With the movement of the laser beam, a blind hole was first formed in the action area during the initial period of the laser beam movement. With the continued movement of the laser beam, under the action of the assist gas, plasma and spatter were removed from the formed blind hole. Because the previously formed blind hole was located near the laser beam motion path and deviated from the center of the final hole, more molten material, spatter, and plasma were removed from one side of the hole entrance (the position of the blind hole formed during the initial period of laser beam movement), resulting in a more obvious thermal effect on this side. Furthermore, it was found that the oxidation phenomenon formed by the thermal effect surrounding the hole entrance with water-based assistance was more obvious. This was because the water flowed into the hole after the workpiece was drilled through. The water medium promoted the removal of the material. Part of the water was heated and evaporated under the action of the laser. The water mixed with melt and material debris was removed from the hole under the power of the steam. The material removal process was more intense than that in air, resulting in a more obvious thermal effect. With the increase in pulse energy, the spatter around the hole entrance in air and with water-based assistance decreased. With the increase in pulse energy, the material was easier to drill through, and the molten material and debris could be ejected from the hole exit in time along with the assist gas. At the same time, because the hole exit diameter increased with the increase in pulse energy, more molten material and debris could pass through and be removed, and the spatter around the hole entrance was reduced significantly. Due to the effect of the assist gas, the spatter accumulation around the hole exit increased; as it was difficult to photograph, the spatter around the hole exit was not analyzed. Figure 4. Effect of pulse energy on spatter around the hole entrance: (a) in air, (b) water-based assistance. Hole Entrance/Exit Diameter, Roundness Figure 5 shows the morphology of the hole entrance and exit after grinding. The effect of water-based assistance on the hole morphology was not obvious, which was mainly because the water had little influence on the laser drilling process before the hole was drilled through (there was no water above the sample before laser drilling).
After the hole was drilled through, some water entered the hole, which promoted the removal of molten material and ultimately led to an increase in material removal. However, due to the limited increase in material removal, the observed change in hole morphology was not obvious. In addition, due to the influence of the assist gas, the promotion effect of the water medium was reduced to a certain extent. Figure 6 shows the effect of pulse energy on the hole entrance and exit diameters. With the increase in pulse energy, the hole entrance and exit diameters increased. The reason was that with the increase in pulse energy, the workpiece absorbed more energy, and the material removal rate was improved. In the case of water-based assistance, the hole entrance and exit diameters were slightly larger than those in the case of air. Once the workpiece was drilled through, the water entered the hole from the hole exit; the water was then heated and evaporated to steam under the action of the laser. The steam mixed with water droplets generated an impact force, which promoted the removal of molten material and debris from the hole, thus improving the material removal rate. At the same time, under the action of the laser, bubbles were also generated in the water, and the bubbles rose and collapsed, generating shock waves and micro-jets, promoting the removal of molten material and debris from the hole and further improving the material removal rate. In addition, the water suppressed the expansion of the plasma during drilling, the plasma shielding effect was weakened, and more material was removed.
Figure 7 shows the hole entrance and exit circularity deviation in air and with water-based assistance. It was found that the hole entrance and exit circularity deviation with water-based assistance was smaller than that in air. This was because, after the workpiece was drilled through, the water promoted the removal of molten material and debris, suppressed the expansion of plasma during drilling, and caused the absorption of laser energy by the hole's sidewall to be more uniform [12,17-19]. Therefore, the hole roundness was better than that in air. With the increase in pulse energy, the change in the hole entrance and exit circularity deviation under these two conditions was not obvious.

Hole Cross-Section Morphology, Taper Angle

The cross-sectional morphology of the hole in air and with water-based assistance is shown in Figure 8. The calculated taper angle is shown in Figure 9. It was found that with the increase in pulse energy, the material removal rate was improved. With the increase in pulse energy, the taper angle decreased continuously, but when the pulse energy increased beyond a certain range, the taper angle tended to become larger. This was because, with the increase in pulse energy, the hole was drilled through faster, and a large amount of molten material could be removed from the hole in time. However, when the laser energy increased beyond a certain range, a large amount of energy accumulated on the upper surface of the workpiece, and the material removal rate at the hole entrance increased obviously, resulting in a larger taper angle. When the pulse energy was 1.9 J, the hole taper angle was larger than that at 1.6 J.
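The taper angle in Figure 9 is a quantity derived from the measured hole geometry; the text does not spell out the formula, so the sketch below uses the common through-hole definition based on entrance diameter, exit diameter, and workpiece thickness (the numerical values are placeholders, not measurements from this study).

import math

def taper_angle_deg(d_entrance_um, d_exit_um, thickness_um):
    # Common through-hole definition: tan(theta) = (D_in - D_out) / (2 * t).
    return math.degrees(math.atan((d_entrance_um - d_exit_um) / (2.0 * thickness_um)))

# Placeholder geometry, for illustration only.
print(taper_angle_deg(d_entrance_um=520.0, d_exit_um=460.0, thickness_um=2000.0))  # ~0.86 degrees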
Hole Sidewall Morphology and Roughness

Figure 10 shows the 2D morphology of the hole sidewall near the hole entrance, middle, and exit in air and with water-based assistance; the pulse energy was 0.7 J. It was found that the molten material on the hole sidewall in the air condition was in the form of liquid droplets, and the number of droplets was large. The morphology of the hole sidewall with water-based assistance was relatively smooth.

Figure 11 shows the 3D morphology corresponding to Figure 10. It was found that the hole sidewall roughness with water-based assistance was less than that in the air condition. When the pulse energy was 0.7 J, the hole sidewall roughness at the hole entrance, middle, and exit with water-based assistance was reduced by 41%, 10%, and 19%, respectively, compared with that in air.
When the laser irradiated the water, the steam generated by the heating of the water continuously washed the hole sidewall, making the distribution of the residual molten material on the sidewall more uniform. At the same time, after the workpiece was drilled through, bubbles were generated under the action of the laser, and the sidewall material was removed more evenly due to the impact force and micro-jets caused by the bubbles [17]. As a result, more material was removed from the hole, and the hole sidewall quality was improved. The hole sidewall roughness increased from the entrance to the exit. With the increase in hole depth, the laser energy was continuously consumed, and the energy absorbed by the material near the hole exit was reduced, which led to a decrease in the material removal rate near the hole exit and uneven removal of the sidewall material. After the hole was drilled through, under the action of the assist gas and gravity, more molten material and debris were ejected from the hole exit, resulting in more molten material remaining on the sidewall near the hole exit. Therefore, the hole sidewall roughness near the hole exit was larger than that of the other parts, both in air and with water-based assistance.

Figure 11. 3D morphology at different locations of the hole sidewall: (a) in air, (b) water-based assistance.

Figure 12 shows the effect of pulse energy on the sidewall roughness of the different parts of the hole. It was found that with the increase in pulse energy, the roughness of each part decreased and finally stabilized within a certain range. This was possibly because, when the laser energy increased, the material removal on the hole sidewall was more uniform. When the pulse energy reached a certain range, the influence of further changes in pulse energy on the sidewall roughness was reduced. When the laser pulse energy was 1.3-1.9 J, the sidewall roughness changed little. Water-based assistance could reduce the hole sidewall roughness and improve the hole sidewall quality.
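The percentage reductions quoted here (and the recast-layer reductions reported below) are presumably relative differences between the air and water-assisted values; a minimal sketch with placeholder roughness values, chosen only so that the output matches the reported 41%, 10%, and 19%, illustrates the calculation.

def percent_reduction(value_air, value_water):
    # Relative reduction of the water-assisted value with respect to the air value.
    return 100.0 * (value_air - value_water) / value_air

# Placeholder Ra values (micrometres); the actual measurements are not given in the text.
for label, ra_air, ra_water in [("entrance", 3.4, 2.0), ("middle", 2.0, 1.8), ("exit", 4.2, 3.4)]:
    print(label, round(percent_reduction(ra_air, ra_water), 1), "%")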
Figure 13 shows the thickness of the recast layer at the hole entrance, middle, and exit in air and with water-based assistance at a pulse energy of 0.7 J. Figure 14 shows the change in the recast layer thickness at the hole entrance, middle, and exit under different pulse energies and different environments (in air and with water-based assistance). It was found that the recast layer thickness (at the hole entrance, middle, and exit) of the hole drilled with water-based assistance was smaller than that of the hole drilled in air. When the pulse energy was 0.7 J, the recast layer thickness at the hole entrance, middle, and exit with water-based assistance was reduced by 32%, 17%, and 23%, respectively, compared with that in air. The reason was that after the workpiece was drilled through, under the combined action of the water and the steam generated by heating of the water, the molten material and spatter could be effectively removed, and the residual molten material on the hole sidewall was reduced, which led to the reduction in the recast layer thickness [17]. In addition, it was also found that the distribution of the recast layer near the hole middle was more uniform with water-based assistance.

It was also found, as shown in Figure 14, that when the single pulse energy was 0.7 J, the recast layer thickness on the sidewall at the hole entrance was large in both environments. This was because, when the pulse energy was low, it took a long time to drill through the workpiece. Before the workpiece was drilled through, a large amount of molten material was ejected from the hole entrance under the combined action of the assist gas and the recoil pressure generated by the evaporation of molten material, resulting in a large amount of molten material remaining on the hole sidewall at the hole entrance.

Conclusions

Experimental research on millisecond laser trepanning of the superalloy GH4220 with water-based assistance was carried out, and the effects of different pulse energies on the spatter, hole diameter, roundness, taper angle, sidewall morphology and roughness, and the distribution of the recast layer in air and with water-based assistance were studied. This research has industrial application prospects. The main research results were as follows:

(1) With the increase in pulse energy, the spatter around the hole entrance and exit decreased in air and with water-based assistance.
The spatter around the hole entrance with water-based assistance was less than that in air.

(2) With the increase in pulse energy, the hole entrance and exit diameter increased because the material removal rate increased with the pulse energy. Under the water-based assistance condition, the hole roundness was better than under the air condition. This was because, after the workpiece was drilled through, the water promoted the removal of molten material and suppressed the shielding effect of plasma during laser drilling. Moreover, the absorption of laser energy by the material on the hole sidewall was more uniform, resulting in better roundness than under the air condition. With the increase in pulse energy, the roundness of the hole entrance and exit in these two environments did not change significantly.

(3) With the increase in pulse energy, the material removal rate increased and the hole taper angle decreased. However, when the pulse energy increased beyond a certain range, the hole taper angle tended to become larger, and the hole taper angle at 1.9 J was larger than that at 1.6 J. Water-based assistance could reduce the hole taper angle.

(4) The hole sidewall morphology under the water-based assistance condition was better than that in air, and the hole sidewall roughness was lower than that in air. When the pulse energy was 0.7 J, the hole sidewall roughness at the hole entrance, middle, and exit with water-based assistance was reduced by 41%, 10%, and 19%, respectively, compared with that in air. The sidewall roughness at the hole exit was larger than that of the other parts.

(5) The recast layer thickness of the hole drilled with water-based assistance was smaller than that in air. This was mainly because, after the workpiece was drilled through, the molten material produced during laser drilling could be effectively removed under the combined action of the water and steam, the residual amount of molten material on the hole sidewall was reduced, and the thickness of the recast layer was reduced. When the pulse energy was 0.7 J, the recast layer thickness at the hole entrance, middle, and exit with water-based assistance was reduced by 32%, 17%, and 23%, respectively, compared with that in air.

(6) Compared with the air condition, water-based assistance could improve the material removal rate and the hole quality: the hole entrance and exit diameter increased, the roundness improved, the taper angle decreased, the hole sidewall roughness decreased, and the recast layer thickness decreased. At the same time, the hole sidewall surface morphology was better with water-based assistance. Compared with the air condition, the spatter around the hole entrance was reduced, but the oxidation formed by the thermal effect surrounding the hole entrance with water-based assistance was more obvious.

Data Availability Statement: The data supporting the findings of this work are available from the corresponding author upon reasonable request.

Conflicts of Interest: The authors declare no conflict of interest.
8,666
2022-12-01T00:00:00.000
[ "Materials Science" ]
“Neural Efficiency” of Athletes' Brain during Visuo-Spatial Task: An fMRI Study on Table Tennis Players

Long-term training leads experts to develop a focused and efficient organization of task-related neural networks. The “neural efficiency” hypothesis posits that neural activity is reduced in experts. Here we tested the following working hypothesis: compared to non-athletes, athletes show lower cortical activation in task-sensitive brain areas during the processing of sports related and sports unrelated visuo-spatial tasks. To address this issue, cortical activation was examined with fMRI in 14 table tennis athletes and 14 non-athletes while performing the visuo-spatial tasks. Behavioral results showed that athletes reacted faster than non-athletes during both types of tasks, and no accuracy difference was found between athletes and non-athletes. fMRI data showed that athletes exhibited less brain activation than non-athletes in the bilateral middle frontal gyrus, right middle orbitofrontal area, right supplementary motor area, right paracentral lobule, right precuneus, left supramarginal gyrus, right angular gyrus, left inferior temporal gyrus, left middle temporal gyrus, bilateral lingual gyrus and left cerebellum crus. No region was significantly more activated in the athletes than in the non-athletes. These findings possibly suggest that long-standing training prompts athletes to develop a focused and efficient organization of task-related neural networks, as a possible index of “neural efficiency” in athletes engaged in visuo-spatial tasks, and that this functional reorganization is possibly task-specific.

INTRODUCTION

Extensive practice over a long period of time leads expert athletes to develop a focused and efficient organization of task-related neural networks (Milton et al., 2007), and the functional reorganization is task-specific rather than general in terms of improved motor abilities (Schwenkreis et al., 2007). The ''neural efficiency'' hypothesis posits that neural activity is reduced in experts (Del Percio et al., 2009a). Existing studies investigating expert athletes' specific brain activation are somewhat inconsistent. Numerous previous studies showed that, compared to novices/non-athletes, expert athletes have less brain activation during the resting state or while performing cognitive/motor tasks. For example, in the resting state, karate athletes exhibited less cortical activation over frontal, central, parietal or occipital areas than non-athletes (Babiloni et al., 2010a; Del Percio et al., 2011b). During viewing of pictures/videos of real competition performances, alpha event-related desynchronization (ERD) was lower in the mirror system in athletes than in non-athletes (Babiloni et al., 2009, 2010b). During the 6 s pre-shot period, athletes exhibited greater alpha power than novices in occipital areas (Loze et al., 2001), parietal areas (Baumeister et al., 2008) and over the whole scalp (Del Percio et al., 2009b). Besides, compared to non-athletes and skilled athletes, elite athletes showed lower coherence values, which implies a refinement of cortical networks in experts and differences in strategic planning related to memory processes and executive influence over visual-spatial cues (Deeny et al., 2009). During the execution of upright standing, less alpha ERD was observed in frontal, central and parietal areas in athletes (Del Percio et al., 2009a).
Similar results were observed in the primary motor area and in the lateral and medial premotor areas in athletes while performing a wrist extension task (Del Percio et al., 2010). However, many other studies reported more, or partly more, cortical activation in expert athletes than in non-athletes. For instance, alpha power in athletes was reduced significantly (more cortical activation) while they observed sports videos, which was not found in novices (Orgs et al., 2008). Besides, a TMS study observed greater activation in the frontal mirror system in athletes than in novices during observation of sports videos (Aglioti et al., 2008), and two fMRI studies also observed greater activation in task-related brain areas in athletes than in novices/non-athletes while they observed sports videos (Wright et al., 2011) or judged line orientation (Seo et al., 2012). In addition, while preparing or executing a motor task, athletes exhibited higher alpha coherence values in parietal, temporal and occipital areas (Del Percio et al., 2011a) or more alpha ERD in the ventral centro-parietal pathway than novices (Del Percio et al., 2007a). It is worth noting that a few fMRI studies examined the effect of task familiarity on athletes' brain activation and found greater cortical activation in task-sensitive areas (e.g., the mirror system, motor areas) in athletes while performing familiar tasks than less familiar tasks (Calvo-Merino et al., 2005; Lyons et al., 2010; Woods et al., 2014). These different findings might be related to practice-related decreases (mainly in frontal cortex areas), increases (mainly in task-relevant brain areas), redistribution and reorganization of regional activation of cognitive and sensorimotor processes (Kelly and Garavan, 2005; Babiloni et al., 2010b; Hardwick et al., 2013).

Considering the inconsistent results on brain activation in athletes, and that most of these studies employed motor or motor-related tasks while few adopted cognitive tasks, the present fMRI study contributes to the debate on whether athletes show more or less brain activation during cognitive tasks. Cortical activation was examined while athletes and non-athletes performed visuo-spatial tasks. Based on the ''Type Token Model'' (Zimmer and Ecker, 2010) and the item characteristics of table tennis, we used a visuo-spatial task that included a sports related condition and a sports unrelated condition, in which participants were asked to recognize the figure (circle or cross-star) with a notch angle of 135°. The following hypothesis was tested in the present study: athletes exhibit lower cortical activation in task-sensitive brain areas than non-athletes during the processing of sports related and sports unrelated visuo-spatial tasks. The ventral and dorsal cortical visual pathways were considered, as they are respectively involved in the recognition of objects (Braddick and Atkinson, 2007) and the analysis of visual space (Rolls and Stringer, 2006). In addition, after reviewing studies from functional and structural neuroimaging paradigms, Jung and Haier (2007) report a striking consensus suggesting that variations in a distributed network predict individual differences found on intelligence and reasoning tasks, and they describe this network as the Parieto-Frontal Integration Theory (P-FIT). According to the P-FIT, the extrastriate cortex, fusiform gyrus, supramarginal, superior parietal and angular gyri, frontal regions and anterior cingulate are the critical brain areas in solving a given problem (Jung and Haier, 2007).
Statistical analysis of the present study focused on the following brain areas: extrastriate cortex, fusiform gyrus, supramarginal, superior parietal and angular gyri, cingulate cortex, frontal regions and cerebellum.

Participants

A total of 28 right-handed male subjects, 14 table tennis players (mean age, 19.64 ± 1.50 years) and 14 non-athletes (mean age, 21.50 ± 1.83 years), participated in the experiment. None of the non-athletes had any formal table tennis training experience. All of the table tennis players were above the 2nd level of the national standard and had been practicing table tennis for more than 8 years, at least five times a week. All subjects reported normal or corrected vision and no history of mental disorders. This study was approved by the Ethics Committee of Scientific Research of Shanghai University of Sport (no. 2014066) and carried out strictly in accordance with the approved guidelines. All participants gave informed written consent.

Experiment Task

The experimental task was a go/no-go visuo-spatial task. The ''Type Token'' model, a theoretical model of long-term object memory, suggests that perceptual priming and episodic recognition are phenomena based on distinct kinds of representations, i.e., types and tokens. Types are prototypical representations needed for object identification and mainly include outline and three-dimensional information. Tokens support episodic recognition, mainly store orientation and color information, and tokens can be bound to and preserved with types. Individuals can simplify the types and tokens to form a special bundled representation after long-term contact with certain objects (Zimmer and Ecker, 2010). Based on the ''Type Token'' model and the item characteristics of table tennis, a circle with a notch angle at 45°, 135°, 225° or 315° was employed as the sport related stimulus for its similarity to the ball and the hitting point (Zhang, 2014). A cross-star with a notch angle at 45°, 135°, 225° or 315° was employed as the sport unrelated stimulus because its shape is unfamiliar in table tennis. The target stimulus was the shape with a notch angle at 135° and only appeared at one location in the picture (there were four shapes in one picture). Target and non-target stimuli each accounted for 50% of the trials. Participants were asked to press the left key with the right index finger when the circle target stimulus was displayed, to press the right key with the right third finger when the cross-star target stimulus was displayed, and not to press any key when a non-target stimulus was displayed. All stimuli appeared in a pseudorandom order. The total number of trials was 256: 60 go trials each for the circle and the cross-star, 60 no-go trials each for the circle and the cross-star, and 16 no-go trials with a blank screen as baseline. A schematic illustration of the stimuli for one trial is shown in Figure 1.

FIGURE 1 | Schematic illustration of the stimuli for one trial. Each trial starts with a 500 ms fixation cross on a gray background. At the end of the fixation, a jitter of 500 ms/1000 ms/1500 ms appears, followed by the 500 ms probe stimulus. After the probe stimulus, a gray screen is presented for 1000 ms for the subject's response.
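As a concrete illustration of the trial composition described above, the following sketch assembles a pseudorandom trial list with the stated counts (the function and condition labels are illustrative, not taken from the original experiment code):

import random

def build_trial_list(seed=0):
    # 60 go + 60 no-go trials for each shape, plus 16 blank baseline trials = 256 trials.
    trials = (
        ["circle_go"] * 60 + ["circle_nogo"] * 60 +
        ["cross_star_go"] * 60 + ["cross_star_nogo"] * 60 +
        ["blank_baseline"] * 16
    )
    random.Random(seed).shuffle(trials)  # pseudorandom order, as in the task description
    return trials

trials = build_trial_list()
assert len(trials) == 256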
Image Acquisition/Scanning Parameters

fMRI scanning was conducted using a Siemens Magnetom Verio 3T MRI scanner and a 32-channel head coil. Functional data consisted of 384 volumes acquired with a T2-weighted echo planar imaging sequence with 33 contiguous sagittal slices covering the whole brain. The data were acquired with an FOV of 220 × 220 mm, a flip angle of 90°, a TR of 2000 ms, a TE of 30 ms and a slice thickness of 3 mm. The resulting voxel resolution was 3.4 × 3.4 × 3.0 mm. Participants indicated their judgment by pressing one of two buttons of an MRI-compatible response device held in the right hand (left button for sport related go stimuli and right button for sport unrelated go stimuli).

Preprocessing included realignment, slice-time correction and normalization to the standard space of the Montreal Neurological Institute (MNI) brain. Smoothing was conducted with an isotropic three-dimensional Gaussian filter with a full-width-at-half-maximum (FWHM) kernel of 6 mm. The functional images were corrected for sequential slice timing, and all images were realigned to the middle image to correct for head movement between scans. The realigned images were then mean-adjusted by proportional scaling and spatially normalized into standard stereotactic space to fit an MNI template based on the standard coordinate system. The pre-processed fMRI data were then entered into a first-level individual analysis by comparing fMRI activity during the target stimulus conditions (sport related and sport unrelated conditions) with that during the blank condition (baseline). In the second-level analysis, contrast images from the individual-subject analyses were analyzed with a 2 (Group: Athletes, Non-athletes) × 2 (Stimulus Type: Sports related, Sports unrelated) ANOVA (with Group as a between-subjects factor and Stimulus Type as a within-subjects factor). Regions showing a significant interaction were identified using an initial uncorrected voxel-wise threshold of F(1,52) = 12.164, p < 0.001.

FIGURE 2 | Brain regions activated in "sports related condition" from the between-group analysis (p < 0.001, uncorrected, cluster size of 15).

Analysis of Behavioral Data

A repeated-measures ANOVA was used to check reaction time and accuracy differences between athletes and non-athletes across the sports related and sports unrelated stimuli.

Behavioral Results

The behavioral outcomes (task accuracy and response time) are shown in Table 1. A 2 × 2 repeated-measures ANOVA was used to determine group differences for the behavioral outcomes, employing SPSS software. Statistical significance was defined at p < 0.05. The ANOVA of the accuracy variable showed no statistically significant main effects or interaction between the factors Group (athletes, non-athletes) and Condition (sports related, sports unrelated; p > 0.05). The ANOVA of the reaction time showed no statistically significant interaction between the factors Group (athletes, non-athletes) and Condition (sports related, sports unrelated; p > 0.05), but displayed a significant main effect of group (F(1,52) = 10.05, p = 0.004, η² = 0.279). Compared with non-athletes, athletes needed much less time to recognize the target stimulus during both the sports related and the sports unrelated tasks.
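The behavioral analysis above is a 2 × 2 mixed-design ANOVA run in SPSS; a hedged Python equivalent is sketched below (the column names, file name, and data layout are assumptions for illustration, not the authors' actual analysis files):

import pandas as pd
import pingouin as pg

# Assumed long-format table: one row per subject x condition, with columns
# subject, group (athlete / non-athlete), condition (sports_related / sports_unrelated),
# and rt (mean reaction time in ms).
df = pd.read_csv("behavioral_rt.csv")  # hypothetical file name

# Group is a between-subjects factor, Condition a within-subjects factor.
aov = pg.mixed_anova(data=df, dv="rt", within="condition",
                     subject="subject", between="group")
print(aov.round(3))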
Group Effect under Sports Related Stimulus Condition

Significant brain regions of the group effect under the sports related stimulus condition are shown in Figure 2 and Table 2.

FIGURE 3 | Brain regions activated during "sports related condition" from the between-group analysis (p < 0.001, uncorrected, cluster size of 15).

Athletes exhibited less activation than non-athletes in the left middle frontal gyrus, right middle orbitofrontal area, right angular gyrus and left cerebellum crus. No region was significantly more activated in the athletes than in the non-athletes.

Group Effect under Sports Unrelated Stimulus Condition

Significant brain regions of the group effect under the sports unrelated stimulus condition are shown in Figure 3 and Table 3. Athletes exhibited less activation than non-athletes in the bilateral middle frontal gyrus, right middle orbitofrontal area, right supplementary motor area, right paracentral lobule, right precuneus, left supramarginal gyrus, right angular gyrus, left inferior temporal gyrus, left middle temporal gyrus, bilateral lingual gyrus and left cerebellum crus. No region was significantly more activated in the athletes than in the non-athletes.

Stimulus Type Effect under Athlete Condition

Significant brain regions of the stimulus type effect in athletes are shown in Figure 4 and Table 4. The left middle frontal gyrus and the pars opercularis of the inferior frontal gyrus exhibited less activation under the sports related condition than under the sports unrelated condition in athletes, but the precuneus exhibited more activation under the sports related condition than under the sports unrelated condition in athletes.

Stimulus Type Effect under Non-Athlete Condition

Significant brain regions of the stimulus type effect in non-athletes are shown in Figure 5 and Table 5. A few brain areas exhibited less activation under the sports related condition than under the sports unrelated condition, including the superior frontal gyrus, middle frontal gyrus, occipital lobe, inferior parietal lobule, supramarginal gyrus, lingual gyrus, middle occipital lobe and middle temporal gyrus. No region was significantly more activated under the sports related condition than under the sports unrelated condition.

FIGURE 5 | Brain regions activated during "non-athlete condition" from the between-stimulus-type analysis (p < 0.0001, uncorrected, cluster size of 15).

DISCUSSION

This study used fMRI to investigate brain activation in athletes and non-athletes during a figure recognition task. Our hypothesis was based on research demonstrating that athletes seem to develop a focused and efficient organization of task-related neural networks (Milton et al., 2007), that the functional reorganization is task-specific rather than general in terms of improved motor abilities (Schwenkreis et al., 2007), and on the ''neural efficiency'' hypothesis about experts (Del Percio et al., 2009a). More specifically, it was tested whether there was less cortical activity in athletes than in non-athletes during the sports related and sports unrelated visual-spatial tasks. Behaviorally, we found that athletes showed shorter reaction times during both tasks than non-athletes. This result is supported by previous findings that athletes respond faster than non-athletes during reaction time tasks, and the faster stimulus discrimination and response selection are possibly due to athletes' enhanced attention and inhibitory control (Hung et al., 2004; Di Russo et al., 2006; Mori, 2008, 2012; Muraskin et al., 2015). Regarding the group effect, the neuroimaging data demonstrated less brain activation in numerous areas in athletes than in non-athletes during the visuo-spatial tasks; no brain area showed more activation in athletes than in non-athletes during either of the tasks.
The areas showing less brain activation in athletes than in non-athletes included the bilateral middle frontal gyrus (BA 6), right middle orbitofrontal area (BA 10), right supplementary motor area (BA 6), right paracentral lobule (BA 31), right precuneus (BA 7), left supramarginal gyrus (BA 40), right angular gyrus (BA 17), left inferior temporal gyrus (BA 20), left middle temporal gyrus (BA 21), bilateral lingual gyrus (BA 18) and left cerebellum crus. These results are in line with the findings of previous research that athletes exhibited less cortical activation during social cognition tasks. The activation in occipital areas decreased across non-athletes, amateur karate athletes and elite karate athletes during the observation of pictures with basket and karate attacks (Del Percio et al., 2007b). Low- and high-frequency alpha ERD was lower in amplitude in elite rhythmic gymnasts compared to non-gymnasts in occipital and temporal areas (ventral pathway) and in the dorsal pathway; these results globally suggest that the judgment of observed sporting actions is related to a low amplitude of alpha ERD, as a possible index of spatially selective cortical activation (''neural efficiency''; Babiloni et al., 2009). Low- and high-frequency alpha ERD was less pronounced in the dorsal and ''mirror'' pathways in elite karate athletes than in non-athletes during the judgment of karate actions, and the researchers concluded that less pronounced alpha ERD in athletes hints at ''neural efficiency'' in experts engaged in social cognition (Babiloni et al., 2010b). In addition, extensive practice over a long period of time leads experts to develop a focused and efficient organization of task-related neural networks (Milton et al., 2007). It appears that the involvement of the executive functions associated with frontal pathways decreases while the role of specialized posterior brain regions becomes more important when individuals are sufficiently trained in a cognitive task (Neubauer and Fink, 2009). The lower brain activation in athletes in the present study may indicate that athletes have developed a focused and efficient organization of task-related neural networks and needed less supervisory control while processing visuo-spatial information, and therefore exhibited ''neural efficiency'' during the sports related and sports unrelated visuo-spatial tasks. In addition, this functional reorganization may hold not only for task-specific but also for general cognitive tasks.

According to the P-FIT, visual information is first processed in the temporal and occipital lobes (mainly BAs 18, 19 and 37), including recognition and subsequent imagery and/or elaboration of the visual input; this basic sensory/perceptual processing is then fed forward to the parietal cortex (mainly BAs 40, 7 and 39), wherein structural symbolism, abstraction and elaboration emerge, and at the same time the parietal cortex interacts with frontal regions (mainly BAs 6, 9, 10 and 45-47), which serve to generate various solutions to a given problem. Once the best solution is arrived upon, the anterior cingulate (BA 32) is engaged to constrain response selection and inhibit other competing responses (Jung and Haier, 2007).
Less brain activation in areas including BAs 17, 18, 20 and 21, BAs 7, 31 and 40, and BAs 6 and 10 in athletes than in non-athletes during the visual-spatial tasks may suggest that athletes showed ''neural efficiency'' throughout the whole information processing flow, including the early processing of sensory information, the subsequent information integration, the information matching and identification, and the final response selection procedure during these tasks.

Regarding the stimulus type effect, the neuroimaging data demonstrated less brain activation under the sports related stimulus condition than under the sports unrelated stimulus condition in both athletes and non-athletes, except for the precuneus, which showed more activation under the sports related condition than under the sports unrelated condition in athletes. The precuneus is involved in the integration of external and internal information and can extract information from internal memory storage according to external stimuli (Ren, 2010); the increased precuneus activation in athletes during the sports related stimulus task possibly suggests that the processing of sports related stimulus information was based more on the athletes' sports experience compared to non-athletes.

Analyses of the combined data show that the results support our hypothesis. Athletes showed less brain activation during both the sports related and sports unrelated tasks. These findings are in accordance with previous studies reporting ''neural efficiency'' in athletes (Del Percio et al., 2008; Babiloni et al., 2010b), and this ''neural efficiency'' may stem from the long-term training which enabled athletes to develop a focused and efficient organization of task-related neural networks; this functional reorganization is possibly task-specific. However, it should be noted that we conducted a cross-sectional study and that our entire conclusion was based on comparing outcomes for the two groups of subjects; for this reason, we cannot exclude that some differences already existed before sports practice. It is possible that young people having certain basic perceptual-motor skills received positive feedback during their first attempts to practice sports and became ''athletes'', while those who were less skilled gave up when they were young and became ''non-athletes''. Thus, probably both nature and experience contributed to the differences found by our research. One way to address this problem may be to carry out a longitudinal study. Instead of studying the effects of long-term field training, other specific kinds of training can be relatively easily manipulated, such as perceptual training. Previous research has shown that perceptual training can be effective, from the behavioral point of view, for both non-athletes (Savelsbergh et al., 2010; Ryu et al., 2013) and athletes (Farrow and Abernethy, 2002; Murgia et al., 2014) in a shorter time, i.e., weeks/months. Therefore, in order to explore the exact effect of training on the performance and brain activation patterns of athletes during cognitive tasks, future studies could compare the performance and the brain activation pattern of a group (of either non-athletes or athletes) before and after a period of perceptual training with those of a matched control group.

CONCLUSION

In summary, we used fMRI to investigate possible brain activation differences between athletes and non-athletes in visual-spatial tasks. We found that athletes reacted faster than non-athletes during both the sports related and sports unrelated visuo-spatial tasks.
Athletes showed decreased activation in cortical regions important for the early processing of sensory information, the subsequent information integration, the information matching and identification, and the final response selection. Taken together, our findings suggest that there is neural efficiency in athletes during visuo-spatial tasks, and this ''neural efficiency'' may stem from long-term training, which prompts athletes to develop a focused and efficient organization of task-related neural networks; this functional reorganization is possibly task-specific.

AUTHOR CONTRIBUTIONS

ZG: literature research, study design, data acquisition/analysis/interpretation, manuscript preparation/editing/revision. AL: guarantor of integrity of the entire study, manuscript final version approval. LY: literature research, statistical analysis, manuscript editing.
5,096
2017-04-26T00:00:00.000
[ "Biology", "Psychology" ]
Baryon and lepton number intricacies in axion models

Because the Peccei-Quinn (PQ) symmetry has to be anomalous to solve the strong CP puzzle, some colored and chiral fermions have to transform non-trivially under this symmetry. But when the SM fermions are charged, as in the PQ or DFSZ models, this symmetry ends up entangled with the SM global symmetries, baryon (B) and lepton (L) numbers. This raises several questions addressed in this paper. First, the compatibility of axion models with some explicit B and/or L violating effects is analyzed, including those arising from seesaw mechanisms, electroweak instanton interactions, or explicit B and L violating effective operators. Second, how many of these effects can be simultaneously present is quantified, along with the consequences for the axion mass and vacuum alignment if too many of them are introduced. Finally, large classes of B and/or L violating interactions without impact on axion phenomenology are identified, like for example the various implementations of the type I and II seesaw mechanisms in the DFSZ context.

Introduction

Even if the simplest axion models introduced more than forty years ago have been ruled out, axions still remain one of the best solutions for the strong CP problem of the Standard Model (SM). This problem originates from the observation that the QCD and the electroweak sectors, by construction secluded, must somehow conspire to cancel each other's sources of CP violation. Indeed, while individually their contributions to the θ term of QCD are a priori both of O(1), the as yet unobserved electric dipole moment of the neutron [1] requires their sum to be tiny, θ_eff ≡ θ_QCD + θ_Yukawa ≲ 10^-10. Axions come under many guises, but the basic recipe is always the same: design a global U(1) symmetry and assign charges to some colored chiral fermions [2]. This ensures that U(1) rotations act on the strong CP phase, since the associated current is anomalous. This is not sufficient yet to dispose of the θ term, since fermion masses explicitly break this U(1) symmetry. To force θ_eff to zero, the trick proceeds in two steps [2]. First, this U(1) symmetry is spontaneously broken, so that its associated Goldstone boson, the axion [3,4], has a direct coupling to gluons. Second, non-perturbative QCD effects create an effective potential for the axion field, whose minimum is attained precisely when the θ term is rotated away. In the process, the axion acquires a small QCD-induced mass, typically well below the eV scale [5,6]. Both the mass and the couplings of the QCD axion are thus controlled by a single scale: that of spontaneous symmetry breaking, usually dubbed f_a.

To solve the strong CP puzzle, the axion needs to be coupled to colored fermions, and this gives rise to two broad classes of models. Those of the KSVZ type [7] introduce new very heavy fermions, vector-like under the SM gauge interactions, while those of the PQ [2] and DFSZ [8] types make use of the SM chiral quarks. In the latter case, the axion must arise from the very same Higgs bosons that give the quarks their masses, and thus only emerges after the electroweak symmetry is broken. In a previous study [9], we described that, for this class of models, the fermion charges are necessarily ambiguous because of the presence of the accidental U(1) symmetries of the SM, corresponding to the conserved baryon (B) and lepton (L) numbers.
Though this ambiguity was found to have no impact on the low-energy phenomenology, it raises several questions that we want to address in the present paper. Specifically:

• Since the ambiguities arise from the SM accidental symmetries, the main question is to study what happens in the presence of explicit B and/or L breaking terms. There are some conflicting conclusions regarding the capability of DFSZ models to accommodate such violations. We will see that some limited violation is possible, characterise it, and study the consequences when this limit is overstepped.

• A second question is to what extent it is possible to fix the ambiguities or, said differently, whether there are naturally some B and/or L components embedded in the axion U(1) symmetry. Of course, those components are projected out when the symmetry is spontaneously broken, but finding the optimal representation for the U(1) symmetry could simplify the form of the axion effective Lagrangian. We will see that in most cases, neutrino masses and electroweak instanton effects hold the key to identifying the U(1) symmetry unambiguously.

• Finally, since these ambiguities have no phenomenological consequence, it is worth investigating whether they can be used to relate seemingly different models. We will see that the fermion charges for all PQ and DFSZ-like models based on the same Yukawa couplings, whether with a seesaw mechanism of type I or II, or with some (limited) B violation, are actually equivalent. Thus, despite their very different appearance in terms of effective interactions, those models cannot be distinguished at low energy.

The paper is organized as follows. To set the stage, we start in the next section by presenting the PQ axion model and the DFSZ axion model. Then, in Section 3, we study the compatibility of these models with lepton number violation by introducing various mechanisms to generate neutrino masses. In Section 4, we investigate the impact of baryon number violation on axion models and explore what would happen if further explicit B and/or L violating interactions were introduced in the theory. Finally, our results are summarized in Section 5.

Fermion charge ambiguities in axion models

In this section, the simplest axion models are briefly reviewed. We focus on the precise identification of the global and local U(1) symmetries at play, and on their breaking pattern. In this way, it will be immediately obvious that when the scalars giving masses to the SM fermions are charged under the PQ symmetry, there remains an ambiguity in the PQ charges of the fermions, and that this ambiguity is related to the invariance of the Yukawa couplings under B and L. In the next sections, this freedom will play a central role, as it will be used to accommodate the possibility of B and/or L violation in axion models.

Axion in the PQ model

The starting point is a two-Higgs-doublet model (THDM). Provided a consistent Spontaneous Symmetry Breaking (SSB) occurs, the mass spectrum is made of two neutral scalar Higgs bosons h0 and H0, a pseudoscalar A0, and a pair of charged Higgs bosons H±. The scalar potential is invariant under the independent rephasing of the two Higgs doublets, corresponding to a global U(1)_1 ⊗ U(1)_2 symmetry. Actually, a linear combination of these U(1) charges is nothing but the gauged hypercharge.
Note that this U(1)_1 ⊗ U(1)_2 symmetry is truly active at the level of the whole THDM; in particular, assuming Yukawa couplings of Type II, it requires that the fermions are assigned appropriate U(1)_1 ⊗ U(1)_2 charges. Besides, these Yukawa couplings are also invariant under the global baryon and lepton number symmetries, U(1)_B and U(1)_L. Those must be left untouched by the Electroweak Symmetry Breaking (EWSB). So, all in all, the pattern of symmetry breaking is that of Eq. (3): when the doublets acquire Vacuum Expectation Values (VEVs), U(1)_1 ⊗ U(1)_2 ⊗ SU(2)_L is broken down to U(1)_em. There are thus two Goldstone bosons: one is the would-be Goldstone (WBG) eaten by the Z0, and the other is truly present in the spectrum and is the massless axion.

In the breaking chain, it must be stressed that we wrote U(1)_X and not U(1)_PQ for the part of U(1)_1 ⊗ U(1)_2 not aligned with U(1)_Y. Indeed, strictly speaking, the U(1)_PQ symmetry is only defined after the doublets acquire their VEVs, from the orthogonality of the axion with the WBG of the Z0. Further, if we denote the VEVs by v_1 and v_2, with v_2/v_1 ≡ x ≡ 1/tan β, both these fields are v_i-dependent linear combinations of Im Φ1^0 and Im Φ2^0, and consequently the PQ charges of the doublets are functions of the v_i. They are only defined once U(1)_Y is broken. Specifically, adopting a polar representation for the pseudoscalar Goldstone bosons, the Higgs doublets are written in the broken phase as in Eq. (4). The Goldstone bosons associated with the U(1)_1 and U(1)_2 symmetries, η_1 and η_2, are related to the physical Goldstone bosons a0 and G0 by an orthogonal rotation (Eq. (5)). Plugging this into Eq. (4), the PQ charge of each doublet can be read off from its phase variation under a shift of the associated Goldstone boson, a0 → a0 + vθ; this gives the charges of Eq. (6). This also shows explicitly how misleading any idea of orthogonality of the U(1) charges could be. We started with U(1)_1 ⊗ U(1)_2, under which the pair (Φ_1, Φ_2) has the seemingly orthogonal charge assignment (v/v_1, 0) ⊗ (0, v/v_2). But once U(1)_1 ⊗ U(1)_2 is broken and the associated Goldstone bosons are compelled to be orthogonal, we end up with the U(1)_Y ⊗ U(1)_PQ charges (1, 1) ⊗ (x, −1/x) for the pair (Φ_1, Φ_2).

Once these charges are fixed, those of the fermions can be derived by requiring the Yukawa Lagrangian to be invariant under U(1)_PQ. Since those couplings are also necessarily invariant under B and L, these charges are defined only up to a two-parameter ambiguity [9], which we denote α and β (Eq. (7)). At this stage, there is no way to fix α and β, essentially because neither B nor L have associated dynamical fields. Further, as discussed for the pair (Φ_1, Φ_2), there is no viable concept of orthogonality for the U(1) charges in the fermion sector either. Actually, it should be remarked that the B, L, and hypercharge assignments are not orthogonal among themselves to begin with, so there is no reason to expect the PQ charges to be any different.

The freedom in the PQ charges of the SM fermions has no observable consequence. The simplest way to see this is to adopt the usual linear parametrization for the THDM. Since the ambiguity in the fermion PQ charges appears nowhere in the Lagrangian, all the Feynman rules are independent of α and β, and so are the physical observables.
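Since the displayed equations did not survive extraction, it may help to collect the charge assignment described in the surrounding text. The scalar and quark entries below are quoted from the text; the lepton entries are a sketch assuming the type II structure (e_R coupling to the same doublet as d_R) and may differ from the original Eq. (7) by convention:

\begin{aligned}
&U(1)_Y \otimes U(1)_{PQ}:\qquad PQ(\Phi_1)=x,\qquad PQ(\Phi_2)=-\frac{1}{x},\qquad x\equiv\frac{v_2}{v_1}=\frac{1}{\tan\beta},\\
&PQ(q_L)=\alpha,\qquad PQ(u_R)=\alpha+x,\qquad PQ(d_R)=\alpha+\frac{1}{x},\\
&PQ(\ell_L)=\beta,\qquad PQ(e_R)=\beta+\frac{1}{x}\quad(\text{assumed}),
\end{aligned}

with α and β the free parameters reflecting the B and L invariance of the Yukawa couplings.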
Using the polar representation of Eq. (4), the situation is a bit more involved. Though initially the Lagrangian is again independent of α and β, and so are all the Feynman rules, it is customary to perform a reparametrization of the fermion fields to remove the axion field from the Yukawa couplings. In full generality, this reparametrization is α- and β-dependent because the fermion rephasings are tuned by their PQ charges, as in Eq. (9). In this way, a dependence on α and β is spuriously introduced in the Lagrangian: first, because the non-invariance of the fermion kinetic terms generates the derivative couplings δL_Der, and second, because the non-invariance of the fermionic path-integral measure generates the anomalous interactions δL_Jac of Eq. (11), where d_{C,L}(ψ) and C_{C,L}(ψ) are the SU(3)_C and SU(2)_L dimensions and quadratic Casimir invariants of the representation carried by the field ψ, respectively, and, by extension, C_Y(ψ) = Y(ψ)²/4 with the hypercharges given in Eq. (8c). Yet, even if both δL_Der and δL_Jac depend on α and β, these parameters cancel out systematically in all physical observables, as shown explicitly in Ref. [9].

Nevertheless, some theoretical quantities inevitably depend on α and β. Besides the above interactions, another particular example is the divergence of the PQ current, since it is related to the anomalous interactions via δL_Jac = a0 ∂_µ J^µ_PQ. Since the two-photon coupling arises as N_L + N_Y = N_em in Eq. (11), both the QED and QCD terms in ∂_µ J^µ_PQ are independent of α and β and immediately physical, but the electroweak term is always ambiguous. This is of course expected in view of the B and L origins of the α and β parameters: if one remembers that these currents also have anomalous divergences, one can immediately understand how α and β enter in Eq. (12). Yet, one should not conclude too quickly that α and β represent a spurious B and L component of the PQ current and should be set to zero. Indeed, this would entirely remove the electroweak W^i_µν W̃^{i,µν} term of ∂_µ J^µ_PQ, but there is no reason for a (hypothetical) B- and L-free PQ current to have no electroweak component. Besides, one should realize that the final form of N_L reflects the specific choice made in parametrizing the two-parameter freedom in the fermion PQ charges. To bring Eq. (7) to a simple form, we made the choice of fixing PQ(q_L) ≡ α and PQ(ℓ_L) ≡ β. So, setting α = β = 0 would simply remove the left-handed fields from the PQ currents, but this is hardly natural since the axion is coupled to left-handed fields, as can be confirmed by adopting the usual linear representation for the THDM scalar fields.

Axion in the DFSZ model

When the axion is embedded as one of the pseudoscalar degrees of freedom of the THDM, its couplings end up tuned by the electroweak VEV and are far too large given the experimental constraints. The DFSZ axion model [8] circumvents this problem by moving most of the axion field into a new field whose dynamics take place at a much higher scale. Specifically, the THDM is extended by a gauge-singlet complex scalar field φ, with the scalar potential V_DFSZ. This potential is invariant under the same U(1)_1 ⊗ U(1)_2 symmetry as in the PQ realization of the previous section, provided φ is charged under both U(1)s. Concerning the fermions, the same type II Yukawa couplings as in Eq. (2) are allowed, while φ cannot directly couple to the fermions because of its U(1) charges. The symmetry-breaking scale v_s of the singlet is assumed to be far above the electroweak scale. To leading order in v/v_s, the VEV of Re φ breaks U(1)_1 ⊗ U(1)_2 → U(1)_Y, and its associated Goldstone boson is the axion.
Indeed, at the v scale, the λ_12 φ² Φ1† Φ2 term ensures that the pseudoscalar state of the THDM is massive. In this leading-order approximation, the axion is not coupled to the fermions since it is fully embedded in φ. The interesting physics takes place at O(v/v_s), where the V_φPQ coupling generates an O(v/v_s) mass for Im φ tuned by λ_12 v_1 v_2. Neither Im φ nor Im Φ_{1,2} remain massless, but a linear combination of these states does. The axion is thus a0 = O(1) Im φ + O(v/v_s) Im Φ_{1,2}, and since all the couplings to SM particles stem from its Im Φ_{1,2} components, the axion essentially, but not totally, decouples. Yet, it is still able to solve the strong CP problem since this ensures its coupling to G_µν G̃^µν.

To be more quantitative, this picture is easily confirmed by adopting a polar representation for the scalar fields. Plugging Eq. (4), together with the polar representation of φ (Eq. (15)), into V_DFSZ and setting all fields but η_{1,2,s} to zero, only the λ_12 φ² Φ1† Φ2 coupling contributes, since all the other terms involve the hermitian combinations Φi† Φi and/or φ† φ. Restricted to the pseudoscalar states, the potential collapses to a single cosine term. By expanding the cosine function and diagonalizing the quadratic term, the mass eigenstates are easily found (Eq. (17)), with δ_s = v/v_s and ω^{-2} = 1 + δ_s² sin² 2β. The interest of this form is that we can read off the PQ charges of η_1, η_2 and η_s from their reaction to a shift a0 → a0 + v_s ω^{-1} θ; rescaling these charges by 2x/(x² + 1) gives the assignment of Eq. (19). We thus recover the same charges as in the PQ model, Eq. (6), so those of the fermions also stay the same, Eq. (7), including the α and β ambiguities related to baryon and lepton numbers. Note that the final form of the mixing matrix is compatible with the cosine potential, in the sense that the massive π0 state is precisely the combination of states occurring as the argument of the cosine function. The potential V_DFSZ(η_{1,2,s}) is necessarily flat in the other two orthogonal directions, corresponding to the two Goldstone bosons (the G0 eaten by the Z0, and the a0). Finally, remark that if the φ² Φ1† Φ2 coupling is replaced by φ Φ1† Φ2, everything stays the same except for PQ(φ). This has no phenomenological impact since the axion couplings to SM fields are unchanged.

Axions and lepton number violation

Up to now, neutrinos have been kept massless. To account for the very light neutrino masses in a natural way, the standard approach is to implement a seesaw mechanism. Generically, these mechanisms assume the observed left-handed neutrinos have a Majorana mass term, typically via the dimension-five operator of Eq. (21), where c is understood as a matrix in flavor space, and flavor indices are understood. Neutrino masses are then m_ν = c v_i²/Λ. The scale Λ represents that at which lepton number is broken, either explicitly or spontaneously. Obviously, neutrinos end up very light when Λ is sufficiently high.
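For orientation only (an illustrative estimate, not a number from the paper), the quoted relation m_ν = c v_i²/Λ reproduces sub-eV neutrino masses for Λ in the usual seesaw range:

m_\nu \;=\; c\,\frac{v_i^2}{\Lambda} \;\simeq\; 0.1~\mathrm{eV}\times\left(\frac{c}{1}\right)\left(\frac{v_i}{100~\mathrm{GeV}}\right)^{2}\left(\frac{10^{14}~\mathrm{GeV}}{\Lambda}\right),

so that Λ of order 10^{13}-10^{15} GeV corresponds to m_ν of order 1-0.01 eV.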
First, we supplement the PQ and DFSZ model with a seesaw mechanism of type I [14]. Then, we consider the νDFSZ model of Ref. [15], where the DFSZ singlet is made responsible for the breaking of lepton number. Finally, we consider the type II seesaw mechanism [16], realized eitherà la PQ or DFSZ [17,18]. Other DFSZ-like realizations are possible, see for example Refs. [19][20][21][22], but those described here are the simplest. Also, we do not consider the proposal of Ref. [23,24] in which the PQ and B − L currents are identified, with a non-local majoron gluonic coupling arising through complicated multiloop processes. PQ and DFSZ with a type I seesaw mechanism A first strategy to account for neutrino masses is to add to the PQ or DFSZ model a type I seesaw mechanism. Specifically, we add right-handed neutrinos ν R to the model. Since those are singlet under the gauge symmetry, the only new allowed couplings are with i = 1 or 2. Lepton number no longer emerges as an accidental symmetry because the Majorana mass term M R breaks L by two units. It is also presumably very large, so integrating out the ν R fields generates the dimension-five operator in Eq. The PQ charge of the right handed neutrinos has to vanish to allow the presence of the Majorana mass term. Given the PQ charge in Eqs. (7) and (6) or (19), this implies that β must be non-zero sinceν These equations must be interpreted in the right way. This is not a choice for β. Rather, in the presence of M R , U (1) L is removed from the symmetry breaking chain of Eq. (3), and the corresponding ambiguity is simply not there to start with. In other words, it would make no sense to set β to any other value and discuss the impact of the PQ breaking induced by M R , since this breaking is spuriously introduced by an inappropriate choice of PQ charges. Yet, remarkably, the PQ symmetry does not forbid either the Majorana mass term in Eq. (22) or the effective operator Eq. (21), contrary to the claim made for example in Refs. [25,26]. The presence of the seesaw mechanism does not significantly alter the axion phenomenology. This is most clearly seen adopting the linear parametrization for the scalar fields, since then all the axion couplings to SM fermions are proportional to their masses. When ν R have been integrated out, that of the axion to light neutrinos will arise from Eq. (21), and thus be tiny. In a polar representation for the pseudoscalar fields, first note that with β fixed as in Eq. (23), the seesaw operator of Eq. (21) becomes invariant under the PQ symmetry. It does not prevent the fermion reparametrization of Eq. (9), which proceeds exactly as in the absence of neutrino masses. Except that β is fixed, the effective derivative and anomalous interactions stay the same. Since we proved in Ref. [9] that β cancels out in physical observables anyway, the phenomenology is unchanged, except for the tiny kinematical impact of the now finite neutrino masses (for example, the a 0 W + W − loop amplitude depends on the mass of the virtual fermions, including neutrinos). Merging DFSZ with a type I seesaw mechanism Instead of adding a Majorana mass term for the right-handed neutrinos, we can use the singlet field and set This model, dubbed the νDFSZ, was first proposed in Ref. [15]. Let us see how this merging of the DFSZ model with a type I seesaw mechanism can be understood from the point of view of the U (1)s. 
Since φ cannot be neutral under U (1) 1 ⊗ U (1) 2 , the right-handed neutrinos do have charges, and no Majorana mass term is allowed. Basically, what we are doing is to embed lepton number inside the global symmetries, Since the VEV of φ breaks both U (1) 1 and U (1) 2 , it also breaks U (1) L , and then the Goldstone boson can be viewed as a majoron. Note, though, that Φ 1 and Φ 2 as well as quarks are charged under U (1) 1 ⊗ U (1) 2 , since the assignments arē To we must set α 1 + α 2 = 1/3, and the remaining oneparameter freedom originates in the B invariance of the Yukawa couplings. Yet, clearly, no linear combination of the U (1) 1 and U (1) 2 charges can make the Higgs doublets and the quarks neutral. The symmetry breaking proceeds as in the DFSZ model since the scalar potential stays the same. This fixes the PQ charge of the scalar fields to the same values, Eq. (19). The fermions then have the same charge as in Eq. (7), but with β fixed so that P Q(ν R ) = −P Q(φ)/2: together with P Q(q L , u R , d R ) = (α, α + x, α + 1/x), as before. In some sense, U (1) L never occurs at low energy. Instead, it is embedded into U (1) P Q via the specific value of β imposed by thē ν C R Y R ν R φ coupling. So, in this model, the axion and majoron are really one and the same particle. Further, the "axion = majoron" is automatically coupled to quarks and to G a µνG a,µν , hence can solve the strong CP puzzle via the same mechanism as in the DFSZ model. Once φ acquires its vacuum expectation value, ν R has a Majorana mass term, so it may seem this contradicts the fact that P Q(ν R ) = 0. But actually, plugging Eq. (15) in L ν R of Eq. (24) and using Eq. (17), we find where Thus, we now see why ν R must have a non-zero PQ charge. Because M R orginates from the φ field, it is always accompanied by the axion field. Then, under a U (1) P Q transformation, a → a + v s θ must be compensated by the phase shift ν R → ν R exp(iP Q(ν R )θ). Also, thanks to this, the fermion field reparametrization ψ → ψ exp(iP Q(ψ)a/v s ) are still able to entirely move the axion field out of the fermion mass terms. One point must be stressed though. The axion couples to SM fermions via its suppressed components η 1,2 , but it couples directly to ν R via its dominant η s component. As a result, the couplings to SM fermions are O(v/v s ), but that to ν R is O(1), as evident in Eq. (28). Yet, since v s is assumed to be well above the electroweak scale, ν R should be integrated out before performing the fermion reparametrization. In that case, we find (assuming Then, performing L → L exp(iP Q( L )a/v s ) moves the axion field entirely into the same effective derivative and anomalous interactions as in Eq. (11), but with β now fixed as in Eq. (27a) or (27b). Again, the phenomenology is unaffected since β cancels out of physical observables. Thus, the O(1) axion coupling to ν R has no consequences at low energy. PQ and DFSZ with a type II seesaw mechanism In the previous sections, we have seen two ways to incorporate neutrino masses in the DFSZ model. For the first, one simply adds right handed neutrinos ν R with a Majorana mass term. The PQ symmetry stays the same, though a specific value of β is required, Eq.(23), to ensure P Q(ν R ) = 0. Also, this makes sure the explicit breaking of the lepton number does not spill over to the PQ symmetry. A second way to proceed, in the νDFSZ model, is again to add right-handed neutrinos but ask to the heavy singlet field to induce their Majorana mass term. 
In that case, P Q(ν R ) = 0, but the lepton number symmetry ceased to exist. Actually, it is replaced by the PQ symmetry. A third realization is provided by the type II seesaw mechanism [16]. Instead of right-handed neutrinos, let us add to the THDM model three complex Higgs fields ∆ transforming as a SU (2) L triplet with hypercharge 2. For the couplings to fermions, in addition to the THDM Yukawas, we add to L Yukawa of Eq. (2) the term where, as indicated, C acts in both Lorentz and SU (2) L spaces. For the scalar potential, we introduce one new coupling to entangle the U (1) 1 ⊗ U (1) 2 charges of ∆ with those of the doublets, withΦ i = iσ 2 Φ i . A factor µ ∆ is introduced to make λ ∆12 dimensionless. Even if µ 2 ∆ is large and positive, the λ ∆12 coupling generates a tadpole for Re ∆ 0 and this field has to be shifted. In effect, this induces a VEV for the ∆ field, v ∆ ∼ λ ∆12 v 1 v 2 /µ ∆ . To preserve the electroweak custodial symmetry, µ ∆ v 1,2 so that v ∆ v 1,2 . Yet, this shift generates a small Majorana mass term for the neutrinos, m ν = v ∆ Y ∆ . This is the characteristic linear suppression of the neutrino masses of the type II seesaw mechanism. We have not identified the axion field yet. To that end, we adopt again the polar parametrization, Eq. (4) together with Restricted to the pseudoscalar states, only the λ ∆12 coupling survives and The mass eigenstates are easily found. First, the G 0 state has to be aligned with since this ensures it can be removed by a U (1) Y transformation and Y (∆) = 2Y (Φ 1,2 ) = 2. Second, the single massive state, denoted π 0 , is aligned with the combination of fields occurring in the argument of the cosine in Eq. (33). The axion is then the only state orthogonal to both G 0 and π 0 , and a simple cross product permits to construct the mixing matrix: with The PQ charges of the three fields can be read off the second line of this matrix, and upon adopting a convenient normalization: Note that P Q(Φ 1 ) + P Q(Φ 2 ) − P Q(∆) = 0, as it should, but those can be expressed as simple function of x = 1/ tan β only to leading order in δ ∆ . Once the PQ charge of the scalars is set, that of the fermions can be derived and we find Apart from the small shifts induced by x ∆ , this corresponds to the PQ current of the THDM with β = −P Q(∆)/2. Since the η 1,2 components of a 0 are of O(1), the PQ scale stays at v, and the axion ends up too strongly coupled to SM fermions. To cure for this, the same strategy as in the DFSZ model can be used, that is, an additional complex singlet field is introduced [17,18]. To study this situation, let us take the scalar potential The coupling b φ∆ gives a large O(v s ) mass to the triplet states, while those in V ∆THDM generate small O(v) splittings among the three ∆ states. Of particular interests are the λ νi couplings since they entangle the scalar states. First, the λ ν2 and λ ν3 couplings creates ∆ tadpoles that need to be removed by shifting the ∆ field Note that if µ ∆ v s , the bulk of the ∆ mass comes from the singlet, and the µ 2 ∆ can be neglected in these expressions. Plugging in the polar representations of the scalar fields, the scalar potential for the pseudoscalar states is: If the three λ νi couplings are present, there are three massive pseudoscalar states corresponding to the linear combinations appearing in the cosine functions. Those are linearly independent. Together with G 0 which stays of course massless, there is no room for the axion. 
This was evident from the start, since with all three λ νi couplings, no U (1) 1 ⊗ U (1) 2 symmetry can be defined. If any one of these couplings is removed, a second Goldstone boson appears and can be identified with the axion. Given that the G 0 state stays the same as without the φ, see Eq. (34), we directly find the a 0 state by its orthogonality with G 0 and the massive states in the cosine functions, The first scenario collapses to that without φ, and is ruled out since the axion scale remains at v. The other two are viable, with the axion scale set by v s . The PQ assignments can be read off the coefficients of the v i η i terms above, and upon adopting a convenient normalization, with x ∆ given in Eq. (36). The λ ν2 = 0 scenario is the simple DFSZ generalization of the THDM with a type II seesaw, and the PQ charges stay the same, see Eq. (36). Consequently, the fermions have the charges in Eq. (37). For the λ ν3 = 0 scenario, corresponding to that discussed in Refs. [17,18], the PQ charges of the leptons are shifted since that of ∆ is different: Again, apart from the small shifts induced by v ∆ , the PQ charges in these two scenarios correspond to those of the THDM in Eq. (7) with specific values of β: The electroweak terms in the divergence of the PQ current are thus different in both scenarios. Yet, phenomenologically, the axion couplings are independent of β, and apart from negligible corrections brought in by v ∆ , these scenarios cannot be distinguished at low energy. Axions and baryon number violation Up to now, we have seen that the violation of lepton number, through insertion of Majorana neutrino masses, fixes one of the two ambiguities in the PQ charges of the SM fermions, that parametrized by β in Eq. (7). We will now concentrate on the remaining ambiguity, α, which originates in the conserved baryon number current. In the first subsection, we will discuss two frameworks in which α is automatically fixed, for dynamical reasons. Then, in the second subsection, the impact of explicit B-violating operators will be discussed. Finally, in the last subsection, the situation in which too many B- and/or L-violating effects are introduced, preventing the PQ symmetry from arising, will be described. Dynamical B violation Even without explicit B violation, U (1) B is not a true symmetry at the quantum level because electroweak instantons are known to induce B + L transitions [27,28]. This takes the form of an effective interaction involving antisymmetric flavor contractions of three lepton weak doublets and nine quark weak doublets: At zero temperature, c inst is tuned by exp(−4π/g 2 ) and these effects are totally negligible. Yet, even so, these interactions are present, and following the same philosophy as for β, they prevent the emergence of the parametric freedom to choose α and β separately. Specifically, the PQ symmetry necessarily settles with ∆B = ∆L = 3 ⇒ 3α + β = 0 . Setting this combination to zero also kills off the W i µνW i,µν term in the PQ current (see Eq. (11)). In some sense, this requirement removes a B + L component in U (1) P Q . Since B − L is anomaly-free, there is then nothing remaining to generate the W i µνW i,µν term. Yet, there remain couplings of the axion to left-handed fields in the effective non-linear Lagrangian since neither α nor β vanishes when Majorana neutrino masses are present.
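The origin of the condition 3α + β = 0 quoted above can be made fully explicit (a one-line check using only the parametrization already introduced, PQ(q_L) = α and PQ(ℓ_L) = β): the 't Hooft vertex built from nine quark doublets and three lepton doublets carries the PQ charge
\[ PQ\big(q_L^9\, \ell_L^3\big) = 9\alpha + 3\beta = 3\,(3\alpha + \beta)\,, \]
so requiring it to be PQ invariant is precisely the statement 3α + β = 0.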
It should be mentioned also that once the electroweak instanton interaction fixes 3α + β = 0, the axion decouples from electroweak anomalous effects, including also the sphaleron interactions. Mechanisms to generate the baryon asymmetry from the rotation of an axion field (see e.g. Ref. [29]) which rely on those interactions cannot be active in the present simple axion models. Some additional constraints must force the PQ symmetry to be realized differently. At the very least, they must fix α to some value not compatible with the β value imposed by the neutrino sector, in order to induce 3α + β ≠ 0. So, generically, electroweak instantons prevent the emergence of one of the ambiguities in the fermionic PQ charges. However, this supposes the ambiguity is not removed first at a yet higher scale. A generic class of models where this occurs is that of GUT scenarios. Indeed, in that case, gauge interactions can break B and L. For example, in SU (5), B − L is conserved but not B + L, and one automatically has Indeed, this is the unique value for which all three anomalous terms in Eq. (12) coincide when taking into account the SU (5) normalization of the hypercharge, N C = N L = 5/3N Y . It is quite remarkable that this value is not compatible with the instanton value in Eq. (50). Further investigation of the fermion charge ambiguities in a GUT context is deferred to future work [36]. Effective B violation The basis of effective, gauge invariant operators violating B and/or L is well-known. It starts at the dimension-five level with the ∆L = 2 Majorana mass operator of Eq. (21). Then, at the dimension-six level, all the operators are ∆B = ∆L = 1 (see Ref. [30]): Adequate contractions of the Lorentz and SU (2) L spinors are understood, as well as Wilson coefficients and flavor indices. Beyond that level, other patterns of ∆B and ∆L can occur at the dimension-seven level, thanks to additional Higgs insertions. With only SM fermions, the next series of operators arises at the dimension-nine level: The first two induce ∆B = 1, ∆L = 3 transitions, and the last three ∆B = 2, ∆L = 0 ones. These operators are peculiar because, provided the flavor indices are antisymmetrically contracted, they break only U (1) L and U (1) B and not the flavor SU (3)s [31]. Given the charges in Eq. (7), none of these operators is invariant under U (1) P Q ; they carry instead The way in which α and β enter reflects the ∆B and ∆L properties of the corresponding operators, with n × α ⇔ ∆B = n/3 and m × β ⇔ ∆L = m. Yet, remarkably, the PQ charges of the operators are not aligned with their ∆B and ∆L contents. For example, all the dimension-six operators are ∆B = ∆L = 1, but they do have different PQ charges. Among the dimension-six operators, it is also interesting to remark that only the first is compatible with electroweak instantons, Eq. (50), while only the last two are compatible with GUTs, Eq. (51). This could have been expected since those are the operators arising from SU (5) gauge boson exchanges. Another noticeable feature is that the misalignment between two operators carrying the same ∆B and ∆L always appears as a multiple of x + 1/x. This means that even if the PQ symmetry does not exist when all types of operators are simultaneously present, large classes of operators can nevertheless be allowed, but at the cost of a further scaling in their dimensions. Consider for instance the DFSZ model where P Q(Φ † 2 Φ 1 ) = P Q(φ † ) = x + 1/x. Misalignments can always be compensated by scalar singlet insertions.
For instance, if one assumes Eq. (50) holds, then the ∆B = ∆L = 1 operators must be The PQ charge of all the operators is now aligned in the direction of L q 3 L , with P Q( L q 3 L ) = 0 when 3α + β = 0. Operators involving insertions of the Higgs doublet combination Φ † 1 Φ 2 need not be included since they are comparatively very suppressed, both dimensionally and because v 1,2 v s . Phenomenologically, given the bounds on Λ from proton decay and provided v s < Λ, only the leading operator is expected to play any role. The same holds for other series of operators, though there is then no clear reason to select one operator against another as leading. For example, in the ∆B = 2 class, assuming d 4 R u 2 R is leading, the effective operators must be They are all neutral provided 3α = −x − 2/x, and thus there remains enough room for the PQ symmetry to exist. Vacuum realignments In the previous section, we have seen that the PQ symmetry can accommodate for limited B and/or L breaking. Our goal here is to work out the consequences when too much B and/or L violation is introduced. Indeed, if there are too many misaligned ∆B and ∆L operators, the U (1) 1 ⊗ U (1) 2 symmetry cannot be exact and the axion cannot be massless. To analyze this, we first remark that these breaking effects have to be tiny given the experimental constraints on ∆B and ∆L transitions. Thus, the U (1) 1 ⊗ U (1) 2 symmetry is at most only very slightly broken, and the leading dynamics remain that of Goldstone bosons. The pseudoscalar degrees of freedom can still be parametrized using the polar representation. Of course, the axion will no longer be truly massless, it becomes a pseudo-Goldstone boson. Naively, if this mass is too large compared to the QCD-induced mass, then the axion fails to solve the strong CP puzzle. This failure can also be viewed in terms of the vacuum of the theory. In the presence of the ∆B and ∆L breaking terms, the shift symmetry is no longer active. All the vacua are no longer equivalent, and one direction is prefered. At the low-scale, QCD effects also require a realignment of the vacuum, and the CP-puzzle can be solved only when the QCD requirement is stronger than that coming from the ∆B and ∆L effects. In the next section, the axion mass arising from various combinations of ∆B and ∆L operators are analyzed semi-quantitatively, from the point of view of the effective scalar potential. Then, in the next section, we perform a more detailed analysis of the vacuum realignment mechanism induced by the ∆B and ∆L operators, in the spirit of Dashen theorem [32]. Effective potential approach To estimate the mass of the axion, the simplest strategy is to start at the level of the scalar potential before the electroweak SSB. Indeed, at tree level, the U (1) 1 ⊗ U (1) 2 symmetry is still active there since it is broken explicitly in the fermion sector only. Thus, at tree-level, the axion remains as a massless Goldstone boson. To go beyond that, we must consider the effective scalar potential, and in particular look for the leading symmetry breaking terms induced by fermion loops. Clearly, such loops must include all the misaligned ∆B and/or ∆L interactions simultaneously, in such a way that the process is ∆B = ∆L = 0 overall since scalar fields have B = L = 0. Scenario I: Weinberg dimension-six operators and the axion mass. As a first situation, we consider the case where several operators inducing the same ∆B and ∆L transitions are introduced simultaneously. 
As discussed before, such a set of operators can be organized into classes according to their PQ charges, with the PQ charges of two classes differing by some multiple of x + 1/x. Because this is precisely the charge of the Φ † 2 Φ 1 combination, the combined presence of two operators whose PQ charge differ by n × (x + 1/x) generates the correction in the effective scalar potential, where Λ B,L is the scale of the ∆B and ∆L physics, λ n a complicated combination of the Wilson coefficients, Yukawa couplings, and loop factors, and the factor 2 2n−3 is introduced for convenience. From there, the mass of the axion can be estimated as m 2 B,L ) in the PQ model. This correction to the scalar potential is also valid for the DFSZ model, since the singlet does not couple directly to fermions. The only way in which the breaking of U (1) 1 ⊗ U (1) 2 can be communicated to φ is via the mixing term φ 2 Φ † 1 Φ 2 . To incorporate this effect, the full mass matrix for the pseudoscalar states has to be diagonalized. To that end, consider the effective potential restricted to pseudoscalar states. It now contains a second cosine function: Figure 1: Fermion loops involving the Weinberg operators Q 1 ≡ L q 3 L , Q 2 ≡ e R u R q 2 L , and Q 3 ≡ e R u 2 R d R , and inducing symmetry-breaking effective potential terms. Diagonalizing the mass matrix, one pseudoscalar state remains at the v s scale while the other has a mass Thus, in the DFSZ model, the lightest pseudoscalar mass is suppressed by a v/v s factor compared to the PQ model. Still, this factor does not really help to make a scenario viable because the QCD contribution to the axion mass also scales as 1/v s . For instance, with m a 0 | QCD ∼ m 2 π /v s , n should be strictly greater than two if λ n is O(1) and Λ B,L ≈ 10 16 GeV. To illustrate this discussion, let us take the Weinberg operators of Eq. (54). If both the Q 1 ≡ L q 3 L and Q 2 ≡ e R u R q 2 L operators are simultaneously present, given that their mismatch is simply x + 1/x, the induced axion mass corresponds to Eq. (60) with n = 1, and is thus way too large at m a 0 ∼ O(Λ B,L × v/v s ). This can be understood qualitatively from the process depicted on the left in Fig. 1, corresponding schematically to the symmetry-breaking terms where c i are the Wilson coefficients of Q i , and summation over the flavor indices are understood. The scale Λ reg denotes that at which the loop diagrams are regulated. In all UV scenarios we could think of, this scale corresponds to that of the operators, Λ reg ≈ Λ B,L . Indeed, if some new dynamics is introduced that break the U (1) 1 ⊗ U (1) 2 symmetry, there is no reason not to expect the same dynamics to induce corresponding breaking terms in the scalar sector. If, instead of Q 2 , one takes Q 1 together with Q 3 ≡ e R u 2 R d R , the mismatch in PQ charges is 2 × (x + 1/x), and the mass is m a 0 ∼ O(v × v/v s ) from Eq. (60) with n = 2. Again, this picture can be understood from the diagram on the right in Fig. 1, with the corresponding dimension-four breaking term in the effective potential: Thus, when Λ reg ≈ Λ B,L , the axion mass becomes insensitive to the very high energy scale. Yet, it is still tuned by the electroweak scale, and is thus far too large to solve the strong CP puzzle. L interaction together with the dimension-six Q 2 = e R u R q 2 L operator. Scenario II: A viable scenario with many ∆B and ∆L operators. The axion mass is too large for any combination of Weinberg operators carrying different PQ charges. 
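Before turning to that scenario, it helps to put rough numbers on the scalings just obtained. This is an order-of-magnitude illustration only, taking λ n = O(1) and Λ B,L ≈ 10^16 GeV as in the text, and assuming v s ≈ 10^10 GeV purely for definiteness:
\[ n=1:\quad m_{a^0} \sim \Lambda_{B,L}\,\frac{v}{v_s} \approx 10^{16}\ \text{GeV}\times\frac{246}{10^{10}} \approx 2\times 10^{8}\ \text{GeV}\,, \]
\[ n=2:\quad m_{a^0} \sim \frac{v^2}{v_s} \approx \frac{(246\ \text{GeV})^2}{10^{10}\ \text{GeV}} \approx 6\ \text{keV}\,, \]
\[ m_{a^0}\big|_{\rm QCD} \sim \frac{m_\pi^2}{v_s} \approx \frac{(0.14\ \text{GeV})^2}{10^{10}\ \text{GeV}} \approx 2\times 10^{-3}\ \text{eV}\,. \]
Both n = 1 and n = 2 thus overshoot the QCD-induced mass by many orders of magnitude, in line with the requirement n > 2 stated above.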
To get a viable scenario, we have to allow for operators inducing different ∆B and ∆L patterns, so that the effective potential term is forced to be of higher dimension. For example, consider that instead of a dimension-six operator, Q 1 is accompanied by the ∆B = 2 dimension-nine operator Q 4 ≡ d 4 R u 2 R . Alone, Q 1 and Q 4 do not break the U (1) 1 ⊗ U (1) 2 symmetry, since they have vanishing PQ charge for some value of α and β. But if neutrinos have a Majorana mass term, say Q ν ≡ 2 L Φ 2 i of Eq. (21), then not all the ∆B and ∆L operators can be simultaneously neutral. Thus, together, Q 1 , Q 4 , and Q ν introduce too much ∆B and ∆L violation for the axion to remain massless. Yet, the combined presence of these ∆B and ∆L effects break U (1) 1 ⊗ U (1) 2 in a direction that can be matched in the effective potential only at the cost of many Higgs doublets. This combination of doublets need not be a power of Φ † 2 Φ 1 anymore, and actually corresponds to the dimension-eight coupling (see Fig. 2) where c ν is the Wilson coefficient of Q ν . Also, we have suppressed flavor indices and identified the scale of all the operators to Λ B,L for simplicity. Clearly, the axion mass is negligible in this case, Even though θ ef f will not be entirely disposed off, it is tiny and the strong CP puzzle is still solved. Note that this estimate remain valid in the νDFSZ model even though Q ν is replaced by the singlet coupling to ν R . Indeed, the leading term in the effective potential is then φΦ 2 i Φ 2 1 Φ †4 2 , as can be seen from Fig. 2 by splitting the and one can check that this leads to the same estimate for the axion mass when v s ∼ Λ B,L . Scenario III: Electroweak instantons and the axion mass. As a final example, imagine now that only Q 2 and Q ν are present. At first sight, the U (1) 1 ⊗U (1) 2 symmetry is preserved. However, one still has to account for the electroweak instanton effects. Since Eq. (50) is not compatible with the presence of Q 2 , the axion cannot be truly massless. It is a bit more tricky to estimate its mass in this case because the electroweak instanton effects are not truly local. But to get an idea of the induced mass, let us nevertheless use the same strategy as above with Q inst = 3 L q 9 L . There will then be a new term in the effective potential In this estimate, we consider that the UV regularization needs only to compensate for the scale of the dimension-six operators, Λ B,L , and not for the dimension-18 instanton effect. So, this should be understood as nothing more than a rough estimate of the maximal impact this combination of operators could have on the axion. In any case, when Λ reg ≈ Λ B,L , the axion mass is completely negligible because it is suppressed by the Λ B,L scale, see Eq. (60), because instanton effects are tiny, c inst ∼ exp(−4π/g 2 ), and because of the flavor structure. Indeed, Q inst is fully antisymmetric in flavor space, so first and second generation fermions circulate in the loop and there will be many small Yukawa couplings. Actually, additional gauge interactions may be needed to prevent the leading flavor contraction from vanishing, in a way similar to what happens for the electroweak contribution to the EDMs, see e.g. the discussion in Ref. [33]. Yet, even if tiny, this shows that the axion would not be strictly massless in this case. Further, at high temperature, when the QCD chiral symmetry is restored, these effects would be dominant and force a specific alignment of the vacuum. 
A very similar conclusion is encountered when non-perturbative quantum gravity, which is expected to violate global symmetries, is taken into account by adding non-local higher dimensional operators in the low energy effective action [34,35]. Terms such as in Eq. (58) are introduced with a Planck scale cut-off, M P ∼ 10 19 GeV, implying a lower limit on their dimension (2n + 4) in order not to impose permanently an alignment of the vacuum away from the strong CP solution. Dashen theorem approach The effective potential approach of the previous section is rather simple, but it does not clearly show how the presence of too much B and/or L violation imposes a realignment of the vacuum. This will be described here, by performing the analysis directly in the broken phase. Once the axion is introduced as the degree of freedom spanning the vacuum, the fact that the symmetry is explicitly broken manifests itself via non-zero matrix elements 0|L B,L |a 0 and a 0 |L B,L |a 0 . The latter corresponds to a mass term for the axion, and the former asks for a realignment of the vacuum. Indeed, in the presence of the perturbation, the vacuum is no longer degenerate and the theory is unstable. It is only once at the true vacuum, |Ω , that the perturbation stops being able to shift the vacuum and Ω|L B,L |a 0 = 0. This condition on |Ω is equivalent to Dashen's theorem [32], which states that the true vacuum is that for which Ω|L B,L |Ω is minimal. Let us compute these matrix elements, and thereby the axion mass and true vacuum |Ω , for the specific case of the Weinberg operators, L B,L = L dim 6 B,L . First, let us move to a more convenient basis. After the reparametrization ψ → exp(iP Q(ψ)a 0 /v)ψ of the fermion fields, the axion is removed from the Yukawa couplings. As detailed in Ref. [9], this generates derivative couplings −∂ µ a 0 /v × |J µ P Q | f ermions from the fermion kinetic terms, and anomalous non-derivative couplings a 0 /v × ∂ µ J µ P Q with ∂ µ J µ P Q given in Eq. (11) from the non-invariance of the fermionic path integral measure. In addition, since L B,L is not invariant under U (1) P Q , each operator gets transformed Forgetting for now the anomalous couplings, the only couplings surviving in the static limit are the non-derivative couplings with the ∆(B + L) interactions, When taken alone, none of these operators is able to induce Ω|L dim 6 B,L |a 0 or a 0 |L dim 6 B,L |a 0 . For example, with only Q 1 , the simplest ∆(B+L) = 0 matrix element arises from a Q † 1 ⊗Q 1 combination, and the axion field disappears. Some interference between two or more operators with different phases is needed. Let us consider that arising from Q 1 and Q 2 . We have two contributions, Q † 1 ⊗Q 2 and Q † 2 ⊗ Q 1 . Since we are only after ∆(B + L) = 0 matrix elements with external axion fields, we can consider the generating function where δ 12 denotes the phase of Ω|Q 1 ⊗ Q † 2 |Ω . In the last line, we use the fact that the vacuum space is spanned by the axion, i.e., any two vacua are related by shifts in the axion field. This allows us to trade |Ω for the free parameter ω. Expanding the cosine function up to second order, the axion mass is found to be consistent with the previous estimate, Eq. (61), since 0|Q 1 ⊗ Q † 2 |0 corresponds to the diagrams of Fig. 1 with the external Higgs fields replaced by their vacuum expectation values.
Concerning the vacuum, Ω|L dim 6 B,L |a 0 is obtained from ∂V dim 6 B,L /∂a 0 at a 0 = 0, and thus vanishes when ω satisfies The fact that the preferred direction is set by the phase of Ω|Q 1 ⊗ Q † 2 |Ω can be understood as follows. In the absence of L dim 6 B,L , thanks to the still exact U (1) P Q symmetry, one can remove any phase occurring in the fermion mass terms as well as take real VEVs, v 1,2 , for the two Higgs doublets (see Eq. (4)). But it is no longer possible to keep both VEVs real once U (1) P Q is broken by L dim 6 B,L , and the specific choice in Eq. (67) becomes compulsory. In some sense, we can also understand V dim 6 B,L as a contribution to the effective potential of the axion. With this picture, bringing back the anomalous couplings and turning on the QCD effects, the full axion potential looks like Thus, the strong CP puzzle is solved only if V QCD dominates and forces the vacuum to align itself to kill θ QCD . In the present case, given that the V dim 6 B,L -induced axion mass is much larger than that induced by V QCD , the constraint from V dim 6 B,L is stronger and the vacuum is rather aligned in the direction of Eq. (67), leaving the strong CP puzzle open. Conclusions Axion models are based on the spontaneous breaking of an extra U (1) symmetry. When this symmetry has a strong anomaly, the associated Goldstone boson, the axion, ends up coupled to gluons, and this ensures the strong CP violation relaxes to zero in the non-perturbative regime. In this paper, we analyzed more specifically the PQ and DFSZ axion models, where SM fermions as well as the Higgs fields responsible for the electroweak symmetry breaking are charged under the additional U (1) symmetry. A characteristic feature of these models is that the true U (1) P Q symmetry corresponding to the axion is not trivial to identify, because of the presence of three other U (1) symmetries acting on the same fields: baryon number B, lepton number L, and weak hypercharge. As a consequence, in general, the PQ charges can be defined only after U (1) Y is spontaneously broken, and even then, those of the fermions remain ambiguous whenever baryon or lepton number is conserved. Our purpose was to study this ambiguity, to see when it can be lifted, and to show how it leaves the axion phenomenology intact. Our main results are: • The ambiguities in the PQ charges of the fermions, here parametrized by α and β, are well-known, but they are often interpreted as a freedom. One seems free to fix α and β as one wishes. Doing this, however, prevents any further analysis of B and L violation. For example, if one chooses to assign PQ charges only to right-handed fermions, a ∆B = 2 operator like (u R d R d R ) 2 would be forbidden. Yet, this is merely a consequence of the choice made for the PQ charges. What we showed here is that it is compulsory to keep the fermion charge ambiguity explicit to leave the theory the necessary room to adapt to the presence of B and/or L violation. Indeed, in the presence of such interactions, these ambiguities automatically disappear, and the corresponding parameters α and β are fixed to specific values, when the U (1) P Q symmetry aligns itself with the remaining U (1) symmetry of the Lagrangian. This proves that such violations of B and L can be compatible with the PQ symmetry. • Since there are two parameters, reflecting the two accidental symmetries U (1) B ⊗U (1) L , axion models can accommodate breaking terms in two independent directions.
This means for example that adding a ∆L = 2 Majorana mass for the neutrinos as well as some B + L violating operators, say e R u R q 2 L and L q L d R u R , preserves the axion solution to the strong CP puzzle. Yet, this compatibility is delicate and needs to be checked in detail. For example, adding both the operators L q 3 L and e R u R q 2 L spoils the axion solution completely, even though these operators have the same B and L quantum numbers. When they are both present, there is simply not enough room for the U (1) P Q symmetry to remain active. • In many cases, the capability of axion models to accommodate B and L violation is saturated from the start. Indeed, first, these models should be compatible with neutrino masses, and seesaw mechanisms being the most natural, some ∆L = 2 effects are present. Second, electroweak instantons generate B + L violating effects, and even if negligible at low energy, their mere existence forces the PQ symmetry to be realized in a specific way. In this case, there remains not much room for other B and/or L violating effects. For example, the axion cannot remain a true Goldstone boson in the presence of, say, the e R u R q 2 L or (u R d R d R ) 2 operator. Yet, the instanton interaction is so small that the induced mass of the axion is well below the QCD mass, and the strong CP puzzle is still solved. Obviously, the situation changes at high temperature. If the electroweak contribution to the axion mass becomes larger than the QCD contribution, the axion is initially not aligned in the CP-conserving direction, and only becomes so at a later time. Such a situation could have important cosmological consequences. • Usually, axion models are specified in a particular representation, in which the axion has only derivative couplings to SM fermions, and anomalous couplings to gauge field strengths. Because these effective couplings arise from chiral rotations of the fermion fields, tuned by their PQ charges, some dependences on α and β are introduced (explicitly or implicitly) in the Lagrangian. At the same time, we have shown that α and β take on various very different values, depending on the ∆B and/or ∆L effects present. So, the axion effective interactions are strongly dependent on the presence of these ∆B and/or ∆L interactions, whatever their intrinsic size. In this respect, the electroweak coupling a 0 W i µνW i,µν , i = 1, 2, 3, is extreme in that the theory turns it off automatically whenever the PQ current has to circumvent the tiny electroweak instanton interactions. Of course, these dependences on α and β are spurious. As we demonstrated in Ref. [9], the α and β terms occurring in the derivative interactions always cancel out exactly with those of the anomalous interactions, and the physical axion to fermion or gauge boson amplitudes are independent of α and β. In particular, the a 0 W + W − coupling is non-zero even when the anomalous a 0 W i µνW i,µν term is forced out of the axion effective Lagrangian by electroweak instantons. • Several scenarios were discussed: the PQ and DFSZ axion with massless neutrinos, with a seesaw mechanism of type I and of type II, and the νDFSZ where the singlet also plays the role of the majoron. Then, additional requirements were discussed, arising from the electroweak instantons, a GUT constraint, or various B violating operators.
Despite their variety, for all those settings, the PQ charges of the two Higgs doublets and the fermions are the same, up to specific values for α and β, and up to negligible corrections in the type II seesaw. Though this can be understood from the fact that the orthogonality condition among the Goldstone bosons stays essentially the same and the Yukawa couplings are always those of Eq. (2), it is often obscured by the normalization of the PQ charges. Yet, this is remarkable because it means the low-energy phenomenology of the axion is the same in all these models, since it is independent of α and β. This is most evident when adopting a linear parametrization for the two Higgs doublets, since the axion then does not couple directly to gauge bosons, while its coupling to each fermion is simply proportional to the fermion mass times the PQ charge of the doublet to which it couples [9]. The results of this paper should have implications in other settings where B and/or L violation occurs, most notably in supersymmetry if R-parity is not conserved and in Grand Unified Theories. While embedding the axion in those models has already been proposed, further work to identify the most promising scenario is required [36]. In this respect, the connection with cosmology, either via the axion relic density or its possible impact on baryogenesis, could provide invaluable information.
14,798
2020-06-11T00:00:00.000
[ "Physics" ]
A paper/polymer hybrid microfluidic microplate for rapid quantitative detection of multiple disease biomarkers Enzyme linked immunosorbent assay (ELISA) is one of the most widely used laboratory disease diagnosis methods. However, performing ELISA in low-resource settings is limited by long incubation time, large volumes of precious reagents, and well-equipped laboratories. Herein, we developed a simple, miniaturized paper/PMMA (poly(methyl methacrylate)) hybrid microfluidic microplate for low-cost, high throughput, and point-of-care (POC) infectious disease diagnosis. The novel use of porous paper in flow-through microwells facilitates rapid antibody/antigen immobilization and efficient washing, avoiding complicated surface modifications. The top reagent delivery channels can simply transfer reagents to multiple microwells thus avoiding repeated manual pipetting and costly robots. Results of colorimetric ELISA can be observed within an hour by the naked eye. Quantitative analysis was achieved by calculating the brightness of images scanned by an office scanner. Immunoglobulin G (IgG) and Hepatitis B surface Antigen (HBsAg) were quantitatively analyzed with good reliability in human serum samples. Without using any specialized equipment, the limits of detection of 1.6 ng/mL for IgG and 1.3 ng/mL for HBsAg were achieved, which were comparable to commercial ELISA kits using specialized equipment. We envisage that this simple POC hybrid microplate can have broad applications in various bioassays, especially in resource-limited settings. Microfluidic lab-on-a-chip (LOC) devices, mostly produced by the microfabrication technique, possess astonishing features for low-cost, simple, and rapid bioanalysis. Microfluidic immunoassay devices possess remarkable features such as high surface-to-volume ratio and microliter volumes of microchannels, which leads to significant decrease in analysis time from hours to minutes and minimal reagent consumption as compared to regular ELISA. These highly portable microfluidic devices with integrated processing can analyze complex biological fluids including serum, urine 9 , cells, and cell lysates for various applications, such as detection of diseases [10][11][12] , single cell analysis [13][14][15] , 3D cell culture for tissue-based bioassays 16 , forensic analysis 17 , and a wide range of other fields [18][19][20][21][22] . To address issues from conventional microplates, a few microplate-format microfluidic devices have been developed for immunoassays. For instance, Kai et al. 8 developed a 96-well microfluidic microplate for ELISA with improved sensitivity and reduced sample volumes. The microplate was fabricated with clear polystyrene through injection molding. Each well was connected to a microfluidic channel on the opposing face of the substrate via a through reservoir at bottom of the well. The ELISA on this microfluidic microplate took less time and consumed less reagents as compared to conventional ELISA, but it still required a fluorescence microplate reader. Sapsford et al. 23 developed a miniaturized 96-well microfluidic chip for portable ELISAs with colorimetric detection. The 96-well ELISA chip was micro-machined using clear acrylic and polycarbonate (PC) bound together by double sided tape. Although the reagent consumption was less than conventional ELISA and a portable detector (electroluminescence semiconductor strip with a charge coupled device (CCD)) was used, overnight incubation and manual fluid handling was required. Similarly, Sun et al. 
24 fabricated a miniaturized 96-well device for immunological detection, assembling six layers of poly(methyl methacrylate (PMMA) core and five PC layers. They performed electrochemiluminescence ELISA of staphylococcal enterotoxin B (SEB) using a CCD detector. The microfluidic device required a complicated functionalization and device assembling steps along with long incubation time to complete the assay. Overall, all these devices require either long incubation time, surface functionalization or complicated detection systems. With the emergence of paper-based devices in recent years, various POC analyses, including paper-based ELISAs have been developed [25][26][27] . Paper-based devices, which don't require clean room for fabrication, can transport liquid via capillary effect and don't require external force. Another significant feature of paper is the high surface to volume ratio of the micro-porous structure, which improves the immobilization of protein and other biological agents. Paper-based ELISA takes advantage of high specificity of ELISA and low cost, easy-to-use paper-based devices. Whitesides and his colleagues performed ELISA in a 96-microzone plate fabricated in paper 28 . Although it was faster and less expensive, it was less sensitive than conventional ELISA. Murdock et al. used 96-well paper-based ELISA for the assay of human performance biomarker 29 . They used complicated and time-consuming conjugation steps to perform enzyme-free ELISA using gold nanoparticles. Wang et al. performed chemiluminescence ELISA of tumor markers on a paper-based device (6 × 3 zones). Chitosan coating and glutaraldehyde cross-linking were required to covalently immobilize antibodies to perform bioassay for tumor markers 30 . In addition, Lei et al. performed paper-based immunoassay (8 × 6 zones) for detection of influenza 31 . The limitations in paper-based ELISA include low-performance in flow control and the need of repeated micropipetting for adding reagents and washing all the zones, which is really time-consuming and limits its application for high-throughput detection especially in low-resource settings. In addition, we observed that the repeated washing steps in the micro-zones leads to spreading of the reagents over the hydrophobic areas, which is one of the serious problems faced in paper-based devices. Along with paper, some polymers such as PMMA have also been widely used for the fabrication of microfluidic devices. Each substrate has its own advantages and disadvantages. PMMA is transparent, rigid and rapidly delivers reagents to different regions. However, polymers such as PMMA require complicated surface modification procedures to immobilize biosensors and other biomolecules such as antibodies and enzymes. For instance, ELISA has been reported in PMMA devices but they require complicated surface modifications including poly(ethyleneimine) (PEI) treatment 32,33 , (3-aminopropyl)triethoxy silane (APTES) treatment 34 , and carbon nanotube (CNT) functionalization 24 . In addition, they require detectors like fluorescence microscopy 33,34 . On the contrary, paper-based devices can rapidly immobilize biosensors and other biomolecules but do not offer high performance in flow control especially over a fairly long distance. Hybrid devices can take advantages of various substrates, while eliminating some limitations of certain substrates. Recently, hybrid devices have been used for various applications. 
Our group developed a polydimethylsiloxane (PDMS)/paper hybrid microfluidic biochip integrated with aptasensors for one-step multiplexed pathogen detection 35 . Paper in this hybrid device acted as the substrate for facile immobilization of aptamer-functionalized nano-biosensors without complicated surface modification. Recently, Dou et al. fabricated another PDMS/paper hybrid microfluidic platform integrated with loop-mediated isothermal amplification for detection of meningitis-causing bacteria 36 . It was interesting that they found the hybrid device provided more stable performance than non-hybrid devices over a period of 2 months. These types of hybrid devices have been used for various applications including infectious diseases diagnosis [35][36][37] . Herein, we have developed a simple miniaturized 56-microwell paper/PMMA hybrid microfluidic ELISA microplate for rapid and high-throughput detection of infectious diseases. A series of novel funnel-shaped PMMA microwells have been created by laser ablation of PMMA, wherein a paper substrate can be placed to complete ELISA within an hour. The introduction of 3D micro-porous paper with high surface-to-volume ratio in microwells of this hybrid microplate facilitated rapid immobilization of antibody/antigen and also avoided complicated surface modifications. The top reagent delivery channels along with the vertical flow-through microwells in the middle PMMA layer can simply transfer reagents to multiple microwells, thus avoiding repeated manual pipetting and washing steps into each well in conventional ELISA or the use of costly robots. All the reagents/ analytes pass through the 3D matrix of the paper surface from the funnel-shaped microwells. This design not only provides efficient washing, but also increases the opportunities of analytes to be rapidly and efficiently captured, thus resulting in higher detection sensitivity. ELISA of Immunoglobulin G (IgG) and Hepatitis B surface antigen (HBsAg) were performed in the hybrid device and limits of detection (LODs) comparable to commercial ELISA kits were obtained by using an office scanner, without the use of any specialized instruments like a microplate reader. Results and Discussions Hybrid microfluidic microplates. ELISA of multiple disease biomarkers was performed using the hybrid device. As shown in Fig. 1a, the microfluidic device consists of three different layers. The topmost layer, the fluid delivery layer, is used to deliver all the assay reagents and also forms the cover for the microwells in the assay plate (middle layer). Each of the channels connected to different inlet reservoirs of upper layer delivers reagents to 7 microwells in the middle layer. Pieces of chromatography paper were placed inside each microwells. The middle layer contains funnel-shaped ( Fig. 1c) microwells with an upper diameter of 2 mm and a lower diameter of 0.3 mm, wherein a paper disk can be placed, as shown in the cross section view of the device from Fig. 1b. The 0.3 mm diameter-lower microwells are placed just below the upper microwells of the middle layer and helps to hold the paper in place and minimize the chances of backflow of the reagents. Just underneath the bottom of the assay microwells, is attached the outlet system. The outlet channels are connected to a single outlet microwell, which acts as an outlet reservoir once a negative pressure is applied. Arrows in Fig. 1b shows the direction of the flow of reagents. 
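As a rough consistency check on the liquid volumes such a well can hold, the funnel geometry given above (2 mm upper and 0.3 mm lower diameter) can be treated as a truncated cone. A minimal sketch follows; the well depth is not stated in this excerpt, so the 1.5 mm used below is an assumed value for illustration only.

import math

def frustum_volume_uL(d_top_mm, d_bottom_mm, depth_mm):
    # Truncated-cone volume; 1 mm^3 corresponds to 1 microliter.
    r1, r2 = d_top_mm / 2.0, d_bottom_mm / 2.0
    return math.pi * depth_mm * (r1**2 + r1 * r2 + r2**2) / 3.0

print(f"~{frustum_volume_uL(2.0, 0.3, 1.5):.1f} uL per well (assumed 1.5 mm depth)")

With these assumed dimensions each well holds on the order of 2 uL, consistent with the few-microliter sample volumes reported later in the text.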
Figure 1d shows the photograph of the fully assembled microfluidic chip filled with different food dyes. Since the funnel-shaped microwells involve different depths, multi-level fabrication is needed. Although photolithography is one of the most widely used fabrication techniques to fabricate microfluidic devices, it is difficult, expensive and complicated to create a microfluidic device with different depths. It requires multi-level microfabrication, alignment and multiple photomasks [38][39][40] . On the contrary, laser ablation is a rapid prototyping method for the fabrication of microfluidic devices. It uses high intensity laser beams to evaporate polymers at the focal point. The evaporation is due to photo-degradation or thermal-degradation or the combination of both 11,41 . By applying different intensities, microstructures with different depths can be readily fabricated. Therefore, we developed a simple laser ablation method to create the funnel-shaped microwells, which can be completed within minutes, without using any photomask (see Supplementary Fig. S1 and Supplementary Information online for more details). After the assembly of the hybrid microfluidic microplate, cross-contamination test was done as microwells were connected through channels at the bottom layer. For cross-contaminations test, fluorescein isothiocyanate (FITC) was added to the alternate columns of the device, while Milli Q water was added in the adjacent columns. As seen from the fluorescent image (see Supplementary Fig. S2), high fluorescent intensity was only observed in the alternate column (a, c, e, and g) where FITC was added; there was no fluorescence in adjacent columns (b, d, f, and h). The result shows that there was no cross contamination or leakage within different columns. To confirm this, different colored dyes were similarly passed into the alternate columns, with water in the adjacent columns. Similar results were obtained with colours showing up only in the alternate columns and clear background in the adjacent columns (Fig. 1d). This further confirmed that there was no cross-contamination between the adjacent columns. Because of the novel introduction of paper in this hybrid microfluidic microplate, antigen/antibody can be quickly immobilized within 10 minutes as compared to overnight incubation in conventional microplates 28 . Cy3-labeled IgG was used to assess rapid immobilization of antibody on paper surface. Different concentrations of Cy3-labeled IgG (100, 50, 25, and 12.5 μ g/mL) were introduced into alternate columns of the device, and PBS in the adjacent columns for 10 minutes. After washing, from Fig. 2a, we can see the decreasing intensity of fluorescence in the alternate columns (from left to right), with the decrease in concentration of IgG. There was no fluorescence in the adjacent columns where PBS was added. Yet, in another experiment the blocking buffer (4% BSA + 0.05% Tween 20) was added to one column and PBS to another to test the effectiveness of the blocking buffer. After 10 minutes, Cy3-labeled IgG was added to both and incubated for 10 more minutes followed by washing three times with PBST. Figure 2b shows that blocking buffer can be used to effectively block the paper surface. Minimal fluorescence can be seen in the column where blocking buffer was added before the addition of Cy3-labeled IgG. The hybrid microfluidic device has several important features. 
First, the reagents can be easily transferred to all the microwells, minimizing time-consuming repeated micropipetting to add reagents. PMMA acts as the support for paper and provides channels for reagent delivery. As such, it can overcome the slow flow issue on paper. Flow of reagents can be controlled in a better way with the hybrid device. Second, due to the novel introduction of paper in this hybrid microplate, antigen/antibody can be rapidly immobilized in the paper surface and does not require the complicated surface modification of PMMA. Additionally, due to the flow-through funnel-shaped microwells, the entire reagent passes through the micro-porous paper, which results in not only more efficient and rapid antigen/antibody immobilization, but also more efficient washing. Efficient washing is also very important to decrease the background especially in paper-based ELISA. Optimization of incubation time for BCIP/NBT. It was observed that once BCIP/NBT was added to the chip, the substrate started to produce an insoluble diformazan end product, which was purple in colour and could be observed visually. As average brightness was measured after the assay, we can observe that the higher concentration of the analyte produces dark purple colour resulting in lower brightness and vice-versa. The colour intensity significantly increased to a certain time and then started fading away. To optimize the optimal incubation, on-chip ELISA of IgG (from 1 ng/mL-1 μ g/mL) was performed. The chip was scanned every 5 minutes, starting from 10 minutes after the addition of BCIP/NBT. As seen from Fig. 3, the purple colour started fading away after 30 minutes, which leads to increase in average brightness value. In addition, lower signal/noise ratios (the noise was derived from the column with PBS only) were observed starting at 30 minutes. Colour intensity of 20-minute incubation was almost similar to that of 25-minute incubation, but 20-minute incubation had higher background (lower brightness of PBS). It can also be noticed that the deviation started increasing slightly after 25 minutes. Therefore, considering the signal/noise ratio, deviation, and the time required, 25 minutes incubation time was considered optimum and was used in subsequent experiments. Rapid Quantitative Detection of IgG. IgG is the most common type of antibody found in the human circulation (75% of serum antibodies). The measurement of IgG can be a diagnostic tool for conditions like autoimmune hepatitis 42 . IgG levels are indicative of immune status of diseases such as measles, mumps, and rubella (MMR), hepatitis B virus, and varicella 43 . In addition, IgG can serve as a specific marker for Neuromyelitis optica, an inflammatory demyelinating disease 44 . Thus, we first demonstrated the application of our hybrid microfluidic plate for rapid detection of IgG (0.1 ng/mL-100 μ g/mL). For the on-chip ELISA, all reagents were loaded sequentially from the inlets in the upper layer of the PMMA chip, the reagent delivery system. No external power or device was used for the addition of reagents, except a micropipette. After ELISA was completed, the result could be viewed by the naked eye, or a portable office scanner can be used to scan the device. Figure 4a shows a scanned image for IgG detection from an office scanner. It was found that the colour intensity increased as the IgG concentration increased from 0.1 ng/mL to 100 μ g/mL (from right to left) with the blank in the rightmost column. 
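The quantification route described next (well-averaged intensity from the scanned image, a log-linear calibration, and an LOD defined at three standard deviations above the blank) could be scripted along the following lines. This is a minimal sketch, not the authors' actual workflow: the file name, well coordinates, and the use of 255 minus the mean gray value as the "signal" are all assumptions made for illustration.

import numpy as np
from PIL import Image

def well_signal(img_gray, cx, cy, r=15):
    # Mean "darkness" inside a circular well ROI: darker purple -> larger signal.
    h, w = img_gray.shape
    yy, xx = np.ogrid[:h, :w]
    mask = (xx - cx) ** 2 + (yy - cy) ** 2 <= r ** 2
    return 255.0 - img_gray[mask].mean()

img = np.asarray(Image.open("scanned_chip.png").convert("L"), dtype=float)

# Hypothetical ROI centers for the calibration columns (pg/mL -> pixel coordinates).
calibration = {1e3: (100, 80), 1e4: (150, 80), 1e5: (200, 80),
               1e6: (250, 80), 1e7: (300, 80)}
blank_rois = [(350, 80), (350, 120), (350, 160)]

conc = sorted(calibration)
signal = np.array([well_signal(img, *calibration[c]) for c in conc])

# Log-linear calibration, y = a*log10(x) + b, as in Fig. 4b.
a, b = np.polyfit(np.log10(conc), signal, 1)

# LOD: concentration whose predicted signal equals blank mean + 3*SD of the blanks.
blank = np.array([well_signal(img, *roi) for roi in blank_rois])
lod = 10 ** ((blank.mean() + 3 * blank.std(ddof=1) - b) / a)
print(f"y = {a:.2f} log10(x) + {b:.2f};  LOD ~ {lod:.0f} pg/mL")

Run on an actual scan, the fitted slope and intercept would play the role of the regressions reported below (e.g. y = 18.35 log(x) + 48.74 for IgG).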
Signal intensities of the scanned images were calculated using ImageJ. Figure 4b shows the calibration curve of IgG over a concentration range of 1 × 10² pg/mL to 1 × 10⁸ pg/mL. A sigmoidal curve (Fig. 4b) was observed over the whole concentration range, while the linearity lies between 1 × 10³ pg/mL and 1 × 10⁷ pg/mL (inset Fig. 4b) with a regression curve of y = 18.35 log (x) + 48.74 (r² = 0.99), which illustrates a typical immunoassay characteristic. The LOD for IgG was calculated to be 1.6 ng/mL, based on three standard deviations (SD) above the blank value, which is comparable to commercial 96-well microplate ELISA (LOD, 1.6-6.25 ng/mL) 45. The conventional 96-well microplate ELISA not only consumes more reagents (50-100 μL) and requires overnight incubation, but also relies on specialized instruments like a microplate reader. In contrast, our method only needs 5 μL samples and 1 h to complete the whole assay, without using any specialized instruments. A more detailed comparison is listed in Table 1. As to PMMA devices, they require complicated surface modification with APTES and long incubation times (i.e., 12 hours), and the LOD was only 0.12 μg/mL even with a fluorescence microscope 34. Although 96-zone paper-based ELISA did not require surface modification 28, it required time-consuming repeated micropipetting, making it less user-friendly and incapable of high-throughput detection. Additionally, the LOD of paper-based ELISA was 54 fmol/zone, much higher than that of our hybrid system (53.6 amol/zone), indicating the high sensitivity of our method, which might be attributed to the efficient washing in our hybrid system. Rapid Quantitative Detection of HBsAg. HBsAg, a serological biomarker for HBV infection 46, can be used to diagnose acute and chronic hepatitis B virus infection [47][48][49]. The titer of serum HBsAg indicates the level of infection and the severity of the disease 49,50. Slightly different from the IgG detection, the ELISA for the detection of HBsAg was based on a sandwich-type immunoassay. As illustrated in the inset of Fig. 5b, the antigen HBsAg was first immobilized on the paper surface in the hybrid microfluidic microplate, followed by reactions with the primary antibody (i.e., rabbit anti-HBsAg) and the secondary antibody, goat anti-rabbit IgG conjugated with ALP. After the formation of the sandwich structure, the enzymatic reaction between ALP and the colorimetric substrate BCIP/NBT produces the purple colour, similar to the IgG detection. Different concentrations of HBsAg ranging from 0.34 ng/mL to 340 μg/mL were analyzed by the hybrid microfluidic microplate. Figure 5a shows a scanned image for HBsAg detection from an office scanner. The purple colour intensity increased with increasing concentrations from 0.34 ng/mL to 340 μg/mL (from right to left), with the blank in the rightmost column. Figure 5b shows the calibration curve of HBsAg over a concentration range from 3.4 × 10² pg/mL to 3.4 × 10⁸ pg/mL. A sigmoidal curve (Fig. 5b) was observed over the whole detected concentration range, as for IgG. For HBsAg, the linear range was between 3.4 × 10² pg/mL and 3.4 × 10⁷ pg/mL (inset Fig. 5b) with a regression curve of y = 17.37 log (x) + 56.71 (r² = 0.99). The LOD for HBsAg was found to be 1.3 ng/mL, comparable to commercial ELISA kits 51. Rapid Quantitative Detection of HBsAg in human serum samples. To validate the analytical accuracy and determine the feasibility of detecting real human samples, normal human serum was spiked with different concentrations of standard HBsAg. 
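As a minimal illustration of how such a calibration could be processed (not the authors' actual analysis; the intensity and blank values below are hypothetical, chosen only to resemble the reported regression), the following sketch fits the log-linear portion of the curve and estimates the LOD as the concentration whose predicted signal equals the blank mean plus three standard deviations:

import numpy as np

# Hypothetical mean colour intensities (arbitrary units) for IgG standards in the
# linear range, and repeated blank (PBS) measurements.
conc_pg_ml = np.array([1e3, 1e4, 1e5, 1e6, 1e7])
intensity  = np.array([103.8, 122.1, 140.3, 158.9, 177.2])
blank      = np.array([98.0, 99.1, 97.5, 98.6])

# Fit the reported form y = a*log10(x) + b over the linear range.
a, b = np.polyfit(np.log10(conc_pg_ml), intensity, 1)
print(f"calibration: y = {a:.2f} log10(x) + {b:.2f}")

# LOD: concentration whose predicted signal equals blank mean + 3*SD.
y_lod = blank.mean() + 3 * blank.std(ddof=1)
lod_pg_ml = 10 ** ((y_lod - b) / a)
print(f"LOD estimate ~ {lod_pg_ml:.0f} pg/mL ({lod_pg_ml / 1000:.2f} ng/mL)")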
Four different concentrations of HBsAg (3.4 ng/mL, 34 ng/mL, 0.34 μg/mL, and 3.4 μg/mL), within the range of linearity and above the LOD, were chosen for spiking and recovery tests. As can be seen from Supplementary Table S1, the intensity of the purple colour increased from lower to higher concentrations of the spiked human serum samples, consistent with the ELISA results using standard HBsAg (Fig. 5a). Analytical recoveries of 91.1-109.1% were obtained for the serum samples, which were within the acceptable criteria for bio-analytical validation 52,53. Discussion We have developed a simple, portable, POC paper/PMMA hybrid microfluidic microplate for rapid and sensitive detection of infectious diseases and other bio-analytes. To the best of our knowledge, this is the first report of a paper/PMMA hybrid microfluidic device, which draws on the benefits of both substrates. The innovative use of 3D micro-porous paper in the funnel-shaped microwells of this hybrid microplate facilitated rapid immobilization of antibody/antigen and also avoided complicated surface modifications. ELISA assays can be completed within one hour, and results can be observed by the naked eye or scanned by an office scanner for quantitative analysis. In addition, smartphone cameras can also be used to capture the image, and the signals can be processed using different applications or cloud-based systems 54. Although the basic system shown here can only perform eight assays with seven replicates each (7 × 8 microwells), the design can be simply modified to perform as many experiments and repeats as desired. For instance, the hybrid microfluidic microplate can be expanded to 96 wells or 384 wells according to different needs simply by increasing the number of wells and channels, while the basic architecture remains the same. Without using any specialized laboratory equipment, an LOD of 1.6 ng/mL for IgG was achieved, which is comparable to that of commercial ELISA kits using spectrometers or microplate readers. The hybrid microfluidic microplate significantly reduces the sample and reagent volume compared to commercial ELISA and shows great promise as a POC device for rapid, sensitive and quantitative detection of biomarkers, especially in low-resource settings, such as small clinics, rural areas, border regions and developing nations. Because ELISA and microplates are widely used, this hybrid paper/PMMA microfluidic microplate will have broad applications from biology and clinical diagnosis to various biochemical analyses. Methods Microfluidic platform fabrication. The chip used in this study was designed using Adobe Illustrator CS5 and micro-machined using a laser cutter (Epilog Zing 16, Golden, CO). In the mask-less laser ablation, the PMMA substrate was placed on a stage and the focused laser beam was moved across in the x and y directions as defined in the designed pattern. Pieces of chromatography paper were cut using the laser cutter and placed inside each microwell as a 3D surface for ELISA. The chromatography paper can also be placed just over the middle layer, so that the paper pieces fall directly into the microwells in the middle layer during laser cutting, removing the need to place the paper into all the wells manually. To assemble the device, the different PMMA layers were clamped together and kept in an oven at 115-120 °C for 35 minutes. The chip could be used once it cooled down to room temperature. 
The different PMMA layers could be separated after an assay by applying slight pressure at the joints, so that the device can be reused after cleaning. IgG and HBsAg detection using the hybrid device. The hybrid device can be used for a wide range of bioassays. Figure 6 illustrates the main steps of IgG detection by on-chip ELISA using the hybrid device. The primary antibody IgG (0.1 ng/mL-100 μg/mL in 10 mM, pH 8.0 PBS (phosphate-buffered saline)) was introduced to the chip from the different inlet reservoirs in the first layer of the chip. After the chip was incubated with the primary antibody for 10 minutes, the unreacted paper surface was blocked with a blocking buffer (4% BSA w/v in PBS + 0.05% Tween 20) for another 10 minutes. After washing with PBST (10 mM, pH 7.4 PBS + 0.05% Tween 20), anti-rabbit IgG-alkaline phosphatase (6 μg/mL) was added for another 7 min. The chip was then washed three times with the washing buffer. Finally, the substrate for the alkaline phosphatase, i.e., BCIP/NBT (nitroblue tetrazolium + 5-bromo-4-chloro-3-indolyl phosphate), was added. NBT is often used together with the alkaline phosphatase substrate BCIP in western blotting, immunohistological staining, and immunoassay procedures. These substrate systems produce an insoluble NBT diformazan, which changes the colour of the solution from light yellow to purple and can be observed visually. After a 10-minute incubation, the different layers of the chip were separated and the middle layer was scanned with a scanner after another 15 minutes. Regarding HBsAg detection, a similar assay procedure was followed. The main difference was that the first step was to immobilize the antigen, i.e., HBsAg, followed by the addition of anti-HBsAg, and finally the formation of a sandwich-structure immunoassay by the addition of ALP-labelled anti-rabbit IgG. A volume of 35 μL of sample/reagent was used for each channel of the microfluidic platform. The 35 μL of reagent added to each inlet microwell travels from the upstream delivery channel to the downstream waste channel through the different microwells (7 microwells in each channel). Hence, the average volume per well of this platform is 35/7 = 5 μL per microwell.
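As a small sanity check of the roughly one-hour total assay time quoted above, the sketch below sums the steps of the IgG protocol just described (the washing durations are an assumption, since the text does not give them explicitly; all other values come from the protocol):

# Approximate time budget for the on-chip IgG ELISA described above.
steps_min = {
    "primary antibody incubation": 10,
    "blocking": 10,
    "wash (PBST)": 2,            # assumption
    "ALP-conjugated antibody": 7,
    "final wash (x3)": 3 * 2,    # assumption
    "BCIP/NBT incubation": 10,
    "wait before scanning": 15,
}

total = sum(steps_min.values())
for step, minutes in steps_min.items():
    print(f"{step:<30} {minutes:>3} min")
print(f"{'total':<30} {total:>3} min  (about 1 h, consistent with the reported assay time)")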
5,668.6
2016-07-26T00:00:00.000
[ "Engineering", "Medicine" ]
High-Performance Computing Storage Performance and Design Patterns—Btrfs and ZFS Performance for Different Use Cases : Filesystems are essential components in contemporary computer systems that organize and manage data. Their performance is crucial in various applications, from web servers to data storage systems. This paper helps to pick the suitable filesystem by comparing btrfs with ZFS by considering multiple situations and applications, ranging from sequential and random performance in the most common use cases to extreme use cases like high-performance computing (HPC). It showcases each option’s benefits and drawbacks, considering different usage scenarios. The performance of btrfs and ZFS will be evaluated through rigorous testing. They will assess their capabilities in handling huge files, managing numerous small files, and the speed of data read and write across varied usage levels. The analysis indicates no definitive answer; the selection of the optimal filesystem is contingent upon individual data-access requirements. Introduction Efficient and reliable information systems rely on excellent data management.Given the rapid increase in data volumes, there is an urgent requirement for sophisticated filesystems that can effectively manage, store, and protect data.Linux, a dominant operating system in server and cloud environments, provides support for various filesystems, including ext4, XFS, btrfs, and ZFS.Btrfs (B-tree file system) and ZFS (zettabyte file system) are known for their advanced capabilities, including efficient management of enormous data volumes and the capacity to recover from errors automatically.These characteristics make them essential for constructing dependable, expandable, and secure information infrastructures, garnering substantial attention from IT professionals and academics.Also, the flexibility to expand and shrink the storage capacity is one of the essential elements that should be considered [1].There are other requirements, including that the storage must execute many other CPU-intensive tasks, such as compression, data deduplication, checksum calculations, snapshots, and data cloning [2]. Although btrfs and ZFS are acknowledged for their groundbreaking features, comprehensive comparisons that examine their performance in various usage circumstances are needed.Real-world data management encompasses intricate activities, such as state imaging, replication, and dynamic resource allocation, necessitating a comprehensive comprehension of how multiple filesystems accomplish these duties.Storage can employ a physical resource like multiple virtual resources or pool several physical resources together to execute as a virtual resource [3].In that regard, this research aims to provide an impartial analysis that will assist in comprehending the merits and drawbacks of each system.This will enable people to make well-informed decisions when choosing a filesystem for specific applications. The purpose of this study is to achieve multiple objectives.The primary aim is to comprehensively examine the btrfs and ZFS filesystems, emphasizing their prominent characteristics.It also aims to test performance on both platforms across different usage • Performance optimization: It is an entirely different scenario if we use storage for storing users' files (file server) versus virtualization, versus cloud, and versus HPC. 
The filesystem plays a critical role, as it determines how fast we can read from and write to it based on the scenario requirements; • Scalability and flexibility: As much as file servers do not necessarily have to be able to scale quickly, the same is not valid for virtualization, cloud, and HPC-these scenarios need to be easily scalable-both for growing datasets and increased computational loads; • Data integrity and reliability: If we lose a file or two on a general file server, it usually is not the end of the world for the company.On the other hand, if we lose a file in HPC environments, that corrupts research and results.That is why the filesystem has to be a building block that ensures that no corruption happens during the store or retrieve processes, especially when thinking about design for failures that will occur (it is not a question of "if"; it is a question of "when").We also need to consider that different filesystems treat inevitable failures differently.For example, ext4 and XFS react differently to fsync failures (after the fsync system call has been initiated, it is expected that the application's buffer is going to be flushed to the storage device-if this does not complete successfully, we have a fsync failure) [4]; • Advanced features: Snapshots, deduplication, caching, and similar features are paramount when dealing with large-scale data storage, especially in cloud and HPC environments.These features can have a significant impact on storage efficiency and data-management tasks; • Overhead management: In large-scale environments like cloud and HPC, we must use all available resources efficiently.We do not want to waste unnecessary computational power on storage I/O or storage I/O bottlenecks when discussing storage. If the storage system does not meet these requirements, the long-term effects are going to create problems in terms of: • Performance degradation: The wrong filesystem and storage choice will create I/O bottlenecks, which will, in turn, slow down our computation and reduce system efficiency; • Scalability problems: If we cannot scale our storage, large-scale environments like cloud and HPC are going to hinder our growth; • Data corruption and loss: A filesystem that is not robust enough for our data-storage requirements will eventually lead to data corruption or loss, especially if we do not plan for failures or crashes.Having in mind that data integrity in scientific research is crucial, this might lead to severe problems concerning projects and credibility; • Inability to efficiently operate storage: If we cannot use snapshots while running virtual machines, that is a significant operational problem.If we cannot use data deduplication, there is a substantial chance that we are wasting available storage space. As a result, we determined that choosing the proper filesystem, especially in HPC, is one of the most critical topics when designing an HPC environment.This is why we skipped some other common types of filesystems (for example, ext4, FAT, and NTFS) in terms of testing, as it would be impractical, at times irresponsible, and downright counterproductive to use these filesystems for large virtualized, cloud, or HPC environments.We explore these key considerations and slowly build our test suite out-from some less intensive scenarios to running an HPC app.This approach should give us a good overview of how to design our storage environment properly so that we do not end up in situations where we have to re-design, as this is a very costly exercise. 
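To make the fsync-failure point raised above concrete, here is a minimal sketch (not tied to any particular filesystem; the file path is hypothetical) of how an application detects that its buffered data may not have reached stable storage:

import os

def write_durably(path: str, data: bytes) -> None:
    """Write data and force it to stable storage, surfacing fsync failures."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        os.write(fd, data)
        # If fsync raises (e.g., OSError with EIO), the kernel could not guarantee
        # that the buffered data reached the device; how the filesystem behaves
        # after such a failure differs between ext4, XFS, btrfs and ZFS.
        os.fsync(fd)
    except OSError as exc:
        # After a failed fsync, the only safe assumption is that the write may be
        # lost; the application should re-create the data rather than retry fsync.
        raise RuntimeError(f"fsync failure on {path}: {exc}") from exc
    finally:
        os.close(fd)

write_durably("/tmp/example.dat", b"checkpoint state")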
Overview of Currently Used Filesystems Developing filesystems is a continuous endeavor to provide technologies for more effective data administration and storage. During the initial stages of computing, filesystems were very uncomplicated, specifically created to manage tiny data collections within systems. Nevertheless, with the progression of technology and the exponential growth of data volumes, it became clear that there was a requirement for more robust filesystems. Early filesystems, such as FAT (file allocation table) and NTFS (new technology file system), were developed to manage data efficiently. FAT was initially launched with MS-DOS and later chosen as the standard for Windows operating systems, serving as the basis for organizing and retrieving data. If we are running a Windows-only environment, NTFS is the best choice of available filesystems [5]. Considering that most virtualized, cloud, and HPC environments are not Windows based, we must survey open-source filesystems for those scenarios. File Allocation Table (FAT) Filesystems The file allocation table (FAT) filesystem, created by Microsoft (Albuquerque, NM, USA) in 1977, is one of the most uncomplicated and widely utilized filesystems across various computing platforms. It has several versions, including FAT12, FAT16, FAT32, and exFAT, each with distinct features and limitations. The FAT filesystem's architecture is straightforward, comprising the boot sector, the FAT region, the root directory, and the data region. The boot sector contains essential metadata about the filesystem, such as type, size, and layout. The FAT region includes one or more copies of the file allocation table, an array of entries mapping the clusters (the smallest allocable units of disk space) used by files. The root directory stores entries for files and directories, while the data region holds the actual file and directory contents. FAT has four different types: FAT12, FAT16, FAT32, and exFAT. In terms of advantages, the FAT family of filesystems is very portable, easy to implement, and, therefore, compatible; it can be used with Windows, Linux, and Mac computers. However, it has no security features, is heavily prone to fragmentation, and lacks journaling capabilities, something that NTFS solved in the Microsoft-based world. 
NTFS The new technology file system (NTFS), created by Microsoft, is widely used in Windows operating systems due to its strong performance and extensive features.It was introduced in 1993 as a collaboration with IBM [6].One of NTFS's primary benefits is its ability to support large file sizes and disk volumes, making it ideal for modern storage needs, as data continues to expand.NTFS effectively manages disk space using advanced data structures like the master file table (MFT), which facilitates quick access and efficient file management [6].Additionally, NTFS enhances security through Access Control Lists (ACLs), allowing administrators to set detailed permissions for individual files and directories [6].A notable feature of NTFS is its journaling capability, which improves reliability and data recovery after system crashes.By maintaining a log of changes, NTFS ensures data integrity and speeds up recovery from unexpected shutdowns, thereby minimizing data loss.This feature is precious in environments where data integrity is paramount.NTFS also supports file compression, which saves disk space by reducing file sizes without needing third-party software.Its support for symbolic links, hard links, and mount points adds flexibility in managing complex directory structures.However, NTFS has its disadvantages.One major drawback is its limited compatibility.NTFS is optimized for Windows and lacks native support in many non-Windows systems, such as various Linux distributions and macOS [6].This can be problematic in mixed computing environments where cross-platform file sharing is required.While third-party tools and drivers can provide some level of compatibility, they often do not perform as well as native support.Performance issues can also arise with NTFS under certain conditions.Although it performs well, NTFS can suffer from fragmentation over time, where files are broken into pieces and scattered across the disk.While NTFS includes a defragmentation utility, maintaining optimal performance requires regular maintenance, which can be cumbersome in large-scale environments.The overhead from features like ACLs and journaling can also lead to higher CPU and memory usage compared to simpler filesystems, impacting overall system performance in resource-constrained settings.In performance comparisons, such as a study evaluating EXT4 and NTFS on SSDs, NTFS often falls behind EXT4 in various benchmarks [7].This performance gap highlights NTFS's limitations in high-performance scenarios, especially with the intensive read-write operations typical in modern applications.Additionally, NTFS's handling of fsync failures-which ensure data are physically written to disk-can lead to data corruption and loss, posing significant risks to data integrity [4].This issue underscores the need to understand NTFS's specific limitations when using it in critical environments.In forensic investigations, NTFS's detailed metadata can be advantageous for data recovery and analysis.However, the complexity of NTFS structures can also make forensic analysis more challenging compared to simpler filesystems like FAT32 [6].NTFS is a robust and feature-rich filesystem suitable for various use cases, particularly within Windows environments, where it's often used as a part of DFS (Distributed File System), a vital component of any cloud-scale data processing middleware [8].Its strengths in security, reliability, and support for large files and volumes are balanced by its compatibility issues, potential performance overhead, 
and the need for regular maintenance to prevent fragmentation.The decision to use NTFS should be based on the specific requirements and constraints of the deployment environment. Both FAT and NTFS filesystems are widely available on Windows operating systems.Still, some subtle differences exist when using them with other operating systems, such as, for example, Linux (and macOS, but this is a bit less relevant for our paper).NTFS filesystems are not supported in most Linux distributions out of the box, especially when writing on them, and they require additional modules to be installed and configured. We choose not to use NTFS in our tests for various reasons.To use a filesystem like NTFS in a virtualized, cloud, or HPC environment, we would have to create a file share, most probably based on SMB (server message block) as NFS (network file system) performance on the Windows Server is very bad.Then, we would have to mount this SMB storage, which will add another layer of abstraction that will, in turn, have additional performance penalties without even considering the high availability.We would have to create a SOFS (scale-out file server) or storage spaces directly to make it highly available, or, more realistically, a file server failover cluster with CSV (cluster shared volume).This significantly impacts how we design our environments and day-to-day operations, as these features have very complicated setups and do not necessarily offer the level of performance we are after.Considering our requirements, we selected btrfs, ZFS, and XFS as part of the test suite. Ext4 Filesystem The evolution of filesystem development followed a distinct path in Unix and Linux.The ext (extended file system) and its subsequent versions, ext2, ext3, and ext4, brought about notable advancements by enhancing efficiency, enabling support for larger files and systems and adding new capabilities, such as journaling and extensible metadata.The latest filesystem from the ext family is ext4, a widely used filesystem in the Linux environment, developed to resolve its predecessor's capacity and scalability issues [9].It has a bulk of new features when compared to its older versions: • Extents: Ext4 uses extents instead of the traditional block mapping scheme in ext2 and ext3 [10].An extent is a contiguous block of storage, which enhances performance and reduces fragmentation by enabling more efficient management of large files; There are other problems with ext4 for the scenarios that we are covering in this paper.Ext4 lacks a lot of features that we are after in this paper, including no caching, no direct data deduplication, and pooling implemented via LVM (logical volume manager), which means that LVM needs to be used from the start (otherwise, we have to re-format the disk).This is why we do not want to use ext4 to store our virtual machines in virtualized environments, cloud environments, or, even worse, HPC environments. In the future, filesystems in Linux contexts will face new issues, including cloud scalability, data security in a more sensitive cyber landscape, and effective administration of the massive data generated by IoT devices and huge data centers.Further integration of technology, such as artificial intelligence and machine learning, might provide more opportunities for automating and optimizing data-management processes. 
Linux also supports various more sophisticated filesystems in addition to the ext filesystems, which are specifically designed to cater to contemporary computer systems' unique requirements.With the increasing demand for handling significant amounts of data and fulfilling more intricate security and reliability standards, sophisticated filesystems such as XFS, btrfs, and ZFS were developed.These systems were specifically engineered to tackle the complexities of contemporary computing environments.Both have sophisticated functionalities, such as integrated volume management, dynamic expansion, snapshots, and self-recovery, making them robust solutions for data management in challenging circumstances. XFS XFS, created by Silicon Graphics, Inc. (SGI, Mountain View, California, USA) in 1993, is a filesystem known for its strong performance, ability to handle massive amounts of data, and efficient operation, especially in large-scale settings.XFS is a 64-bit filesystem that can handle huge files and volumes.It supports filesystems up to eight exabytes (EB) and individual files up to 8 EB, making it well-suited for enterprise-level storage requirements.The system employs an extent-based allocation approach to optimize speed and minimize fragmentation by organizing files into contiguous blocks [9]; B+ trees index file information and directory entries, enabling fast access to files and directories.XFS differs from standard filesystems in that it dynamically allocates inodes as required rather than statically allocating them at creation time.This approach prevents shortages of inodes and provides increased flexibility.The journaling method records changes before finalizing them in the filesystem, guaranteeing data integrity and facilitating rapid recovery from system crashes.XFS is a massively scalable filesystem [13] designed to efficiently handle many input/output activities simultaneously, especially when dealing with small files [11].It incorporates advanced allocation techniques, such as delayed allocation, which delays the allocation of disk blocks until data are written [10].This helps optimize the arrangement of files and enhances writing performance.It effectively manages sparse files by avoiding the allocation of disk space for blocks that contain only zeros, resulting in space savings and improved speed.In addition, XFS offers the capability of conducting online defragmentation and scaling.This means administrators can defragment and expand the filesystem without unmounting it, resulting in more flexibility and reduced downtime. 
Although XFS offers notable advantages, it also has various restrictions.The filesystem does not possess sophisticated capabilities in more recent filesystems like BTRFS and ZFS, such as integrated data deduplication and compression.XFS lacks built-in snapshot functionality, crucial for producing filesystem copies at certain times.As a result, it may not be the most suitable choice for regular backup and recovery requirements.In addition, it does not have integrated RAID capabilities, which means additional RAID solutions are necessary to achieve redundancy and improve performance.Administering XFS can be intricate, particularly in extensive or extremely dynamic settings, resulting in a higher administrative burden than more unified systems such as ZFS.Although XFS has superior fragmentation management compared to other filesystems, it is nevertheless susceptible to fragmentation over time, especially when subjected to heavy usage or many small files.This can lead to a decline in performance and require defragmentation.Unlike ZFS, XFS lacks inherent tools for automated identification and rectification of data corruption; instead, it depends more heavily on user intervention.The delayed allocation function, although enhancing performance, might occasionally result in unforeseen data loss in the event of system crashes or power shortages.While XFS may be able to accommodate sparse files, its performance in managing them may not be on par with more recent filesystems that are specifically designed to optimize for modern usage patterns.XFS is a robust filesystem wellsuited for large-scale environments due to its high performance, scalability, and efficient metadata handling.However, it may not be the best choice for specific high-demand environments that require advanced features, snapshot capabilities, RAID support, and data integrity checking.In such cases, filesystems like BTRFS and ZFS, which offer more extensive features, are more suitable. Btrfs Btrfs, often known as "Butter FS" or "B-tree FS", is a contemporary filesystem designed to meet the increasing demand for data management in Linux environments.Its history and development demonstrate the endeavor to overcome current limitations and offer the sophisticated capabilities required by contemporary computer systems.Btrfs is a modern copy-on-write filesystem primarily used in the Linux operating system [14]. Btrfs was initiated in 2007 by Chris Mason, who was employed at Oracle Corporation then.The primary objective behind the development of btrfs was to design a filesystem that could meet the increasing demands for data storage by providing enhanced performance, improved scalability, and sophisticated functionalities, such as state snapshots and integrated multi-disk management.Btrfs was developed to address the constraints of current filesystems, particularly their ability to scale and adapt. 
The development of btrfs was driven by several crucial aims from the beginning. One of the main goals was to enhance scalability, allowing for managing enormous amounts of data and the growth of capacity as needed. Furthermore, there has been a notable emphasis on developing sophisticated data-management capabilities. This encompasses snapshots, cloning, and integrated volume management, giving users increased flexibility and control over their data. A further objective was to guarantee enhanced dependability and the ability to recover autonomously; btrfs ensures strong data integrity by implementing data-corruption detection and automatic correction mechanisms. Transparent data compression has also been incorporated, decreasing the overall storage capacity needed and enhancing the efficiency of btrfs in disk utilization. Btrfs garnered swift attention from the Linux community due to its intriguing features and potential. Throughout the years, efforts have been directed towards enhancing stability, performance, and functionality, leading to the integration of btrfs into the mainline Linux kernel in 2009 (Linux kernel 2.6.29). Since its inception, btrfs has undergone continuous development, making it one of the most sophisticated filesystems accessible in a Linux context. Btrfs was jointly developed, with contributions from multiple developers and corporations, including Oracle, Red Hat, Facebook, and others. This extensive backing has ensured the ongoing expansion and development of btrfs, effectively meeting the requirements of users and organizations. 
Btrfs architecture primarily relies on B-tree (balanced tree) structures for storing metadata and some data types. B-trees facilitate efficient operations, such as reading, writing, and data search, in extensive filesystems due to their capacity to uphold a balanced multi-level tree structure. This structure is crucial for the optimal functioning of btrfs, since it facilitates rapid indexing and data retrieval. The offset and size fields in a btrfs item indicate where in the leaf the corresponding data can be found (a simplified sketch of this layout is given at the end of this subsection). As shown in Figure 1, the btrfs block header includes a checksum for the block content, the filesystem UUID for the block owner, and its block number. Btrfs employs the copy-on-write (CoW) technique, which guarantees that when altering data, modified blocks are initially duplicated rather than directly modifying the originals. This method enhances data security and integrity by eliminating partial updates and destruction. Additionally, it enables advanced functionalities like state snapshots and cloning without considerably increasing the required storage space. Btrfs provides built-in volume management and RAID capabilities, allowing users to create and manage numerous data volumes within a single filesystem. It supports multiple RAID configurations, including RAID 0, 1, 10, 5*, and 6*, allowing users to manage redundancy and performance effectively. Btrfs utilizes checksums for metadata and data, enabling it to identify and automatically rectify data corruption, guaranteeing a superior level of data integrity. This functionality, together with RAID support, offers a resilient technique for safeguarding data against hardware failures and corruption. Btrfs enables seamless data compression on disk, facilitating optimal storage-capacity utilization. Compression can be applied to the entire filesystem or to specific files and directories, allowing users to enhance storage efficiency. One distinctive aspect of btrfs is its ability to create subvolumes, conceptually distinct sections inside the same filesystem that may be mounted and managed separately. This allows for sophisticated space and access-rule management and streamlined maintenance of backups and snapshots. The btrfs filesystem is architecturally designed to offer exceptional performance, adaptability, and dependability for contemporary computer systems. Btrfs is an advanced filesystem incorporating B-trees, CoW, integrated volume management, and many capabilities like snapshots, cloning, self-recovery, and transparent compression. These advancements enable btrfs to handle modern computing and data-management requirements effectively. 
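As a rough conceptual illustration of the offset and size fields mentioned above (a simplified model, not btrfs's actual on-disk structures; field names, widths and the toy leaf contents are approximations), the sketch below shows how an item header can point at its payload inside a leaf block:

from dataclasses import dataclass

@dataclass
class Item:
    """Simplified btrfs-style item header: a key plus the location of its payload."""
    key: str      # stands in for the (objectid, type, offset) key triple
    offset: int   # byte offset of the payload inside the leaf
    size: int     # payload length in bytes

def read_payload(leaf_data: bytes, item: Item) -> bytes:
    # Item headers grow from the front of the leaf, payloads from the back;
    # offset/size locate the payload without scanning the whole block.
    return leaf_data[item.offset:item.offset + item.size]

# A toy 64-byte "leaf": payloads packed at the end, headers at the front (not
# modelled byte-for-byte here).
leaf = bytearray(64)
leaf[40:52] = b"inode item A"
leaf[52:61] = b"dir entry"

items = [Item(key="(256 INODE_ITEM 0)", offset=40, size=12),
         Item(key="(256 DIR_ITEM 123)", offset=52, size=9)]

for it in items:
    print(it.key, "->", read_payload(bytes(leaf), it))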
ZFS ZFS, often known as the zettabyte filesystem, is a robust and scalable data-management solution that delivers exceptional performance. It was developed by Sun Microsystems, now part of Oracle Corporation. Development began in 2001, and the initial open-source iteration was launched in 2005 as a component of the OpenSolaris operating system. The project's objective was to develop a filesystem that could address the issues of scalability, data integrity, and ease of management in the filesystems of that era. ZFS was designed to achieve extensive scalability, allowing it to handle enormous volumes of data; this ability to scale permits systems to expand seemingly without limitations. ZFS is not only scalable, but it also prioritizes data integrity as a crucial aspect. ZFS guarantees data accuracy by implementing thorough integrity checks and automatic error-correction procedures, which remain effective despite physical disk damage or other potential error sources. Furthermore, ZFS is characterized by its inherent capability for effortless administration. The system incorporates sophisticated features, such as dynamic partitioning and space management, significantly streamlining administration and maintenance tasks. ZFS enables system administrators to efficiently manage resources and store data more effectively by utilizing straightforward commands and automating daily operations. As shown in Figure 2, the ZFS architecture is built on top of the idea of storage pooling: a disk or a partition provides capacity that is pooled into the overall ZFS capacity. ZFS rapidly gained popularity due to its pioneering features and resilient architecture. ZFS integrates filesystem and volume-management (LVM) capabilities into a unified solution. Using this integrated approach, ZFS efficiently manages storage and data, eliminating the need for distinct layers and tools. ZFS is characterized by its 128-bit address space, equivalent to two raised to the power of 128. This extensive scalability guarantees that it can accommodate future storage requirements without imposing restrictions on the size of filesystems or individual files. ZFS employs the copy-on-write method, which avoids directly overwriting the original data when making changes. Instead, the changed data are first written to a different location, and only after the operation has completed successfully do they replace the original data. 
ZFS stores a distinct checksum for each block of data, which is saved independently from the data itself.When reading data, ZFS performs an automated checksum verification, guaranteeing its integrity and accuracy.If an error is detected, ZFS can automatically restore the data by utilizing redundant copies.In ZFS, the primary storage unit is a "storage pool" or zpool [15], instead of conventional partitioning and space allocation methods.Users can augment the pool by including additional drives, and ZFS autonomously regulates the storage capacity within the pool, facilitating effortless capacity extension with the addition of new disks.The devices added to the pool are immediately available for use/storage, which occurs transparently to the user [16].There are also QoS features in ZFS.For example, we can change the configuration of ZFS to prioritize application I/O during RAID recovery, which can mitigate the performance degradation of a declustered RAID [17]. Thanks to its copy-on-write technique, ZFS enables the creation of snapshots and clones without substantial additional storage usage.Snapshots are immutable, whereas copied data are editable, providing efficient choices for backup, archiving, and testing purposes.ZFS enables block-level deduplication and compression, resulting in space efficiency through the storing of distinct data copies and real-time data compression.These qualities are helpful in contexts with a significant amount of repetitive data. ZFS introduces RAID-Z, an enhanced iteration of conventional RAID setups that addresses some RAID limitations, such as the "RAID hole" issue.RAID-Z provides a significant level of redundancy and ensures data integrity while maintaining optimal performance. ZFS allows for the transmission and reception of snapshots between ZFS pools, even across remote systems.This capability enhances data replication efficiency and simplifies backup and restore procedures. ZFS utilizes the ZFS intent log (ZIL) to enhance the efficiency of transaction workloads.It offers a secure means of storing unconfirmed transactions in case of system failure, enabling fast recovery and minimal data loss. ZFS's architecture embodies outstanding stability, scalability, and efficiency in data management.It sets a high standard for filesystems by combining filesystem and volume management, along with advanced features like CoW, data integrity, snapshots, deduplication, and RAID-Z.It provides a solid foundation for various applications, from data centers to cloud infrastructure and multimedia services.Furthermore, because of its architecture, the added flexibility and security might be more critical in your environment than pure performance [18].Regarding data protection, data resilvering in ZFS is reactive; data and parity blocks are read, regenerated, and stored after failures are detected [19].In ZFS terms, resilvering is copying data and calculating its parity between the hard drive of interest and another in a RAID group when such a hard drive has been replaced [20]. Comparison of Key Features between btrfs, ZFS, and XFS The critical characteristics of filesystems can be classified based on various criteria, including performance, reliability, scalability, and data management.Let us systematically evaluate criteria and categorize btrfs, ZFS, and XFS accordingly. 
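Before comparing the three filesystems feature by feature, the following conceptual sketch illustrates the checksum-on-read and self-healing behaviour just described (a sketch only, using a generic SHA-256 hash rather than ZFS's actual fletcher/SHA checksums or on-disk layout, and a plain list standing in for mirrored replicas):

import hashlib

def checksum(block: bytes) -> str:
    return hashlib.sha256(block).hexdigest()

def read_with_self_heal(copies, expected_checksum):
    """Return the first replica whose checksum matches, repairing bad copies.

    Mirrors the idea that the checksum is stored separately from the data and,
    on a mismatch, the block is reconstructed from redundancy (mirror/RAID-Z).
    """
    good = None
    for block in copies:
        if checksum(block) == expected_checksum:
            good = block
            break
    if good is None:
        raise IOError("all replicas failed checksum verification")
    # "Resilver" any damaged replica from the known-good copy.
    for i, block in enumerate(copies):
        if checksum(block) != expected_checksum:
            copies[i] = good
    return good

original = b"important block"
stored_checksum = checksum(original)
replicas = [b"imp0rtant block", bytes(original)]  # first copy is corrupted

data = read_with_self_heal(replicas, stored_checksum)
print(data, replicas[0] == replicas[1])  # the corrupted copy has been repaired in place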
Performance When evaluating the performance of the btrfs, ZFS, and XFS filesystems, it is crucial to consider multiple factors influencing speed and efficiency. These factors include handling large files, write and read speed, and the effects of advanced features like deduplication, compression, and snapshot management. Working with Large Files Because of its copy-on-write architecture, btrfs is highly versatile and can efficiently manage large files; however, copy-on-write can lead to fragmentation, especially under intensive workloads, ultimately harming performance over time. XFS also copes well with large files thanks to its extent-based allocation. ZFS, which likewise utilizes copy-on-write (CoW), excels in managing large files due to its use of RAID-Z and superior data-management algorithms, with an additional layer of read-and-write caching technologies at its disposal. ZFS has a significant advantage in efficiently managing large volumes of data because of its broad scalability. Write and Read Speed Btrfs demonstrates excellent performance in various situations, particularly on SSDs, due to its implementation of TRIM and its speedy copy-on-write (CoW) technique. Dynamic space management enables the customization of resource allocation, leading to enhanced write and read speeds under specific circumstances. ZFS is designed to achieve optimal performance in challenging environments, offering exceptional write and read speeds through its integrated cache (ARC and L2ARC) and log (ZIL) support. ZFS enhances data access with advanced techniques such as prefetching. XFS, known for its robustness and scalability, excels in large-scale environments with its extent-based allocation scheme that reduces fragmentation and improves performance. Its use of B+ trees for indexing metadata allows for rapid access to files and directories, and its journaling mechanism ensures data integrity and quick recovery from crashes. XFS also supports dynamic inode allocation, which prevents inode shortages and offers greater flexibility. Furthermore, XFS's ability to handle parallel I/O operations and its support for online defragmentation and resizing provide additional performance benefits and management flexibility. Impact of Deduplication and Compression Btrfs offers transparent compression, which can enhance write performance in certain situations by lowering the volume of data that needs to be written to the disk. Nevertheless, btrfs lacks inline filesystem-level deduplication, which can be a constraint in environments with substantial redundant data. ZFS provides both deduplication and compression, substantially reducing storage space and improving efficiency. Nevertheless, deduplication in ZFS can be demanding in terms of resources and requires a substantial amount of RAM to achieve ideal efficiency, impacting overall system performance in some setups. XFS, on the other hand, does not support native deduplication or compression. However, it compensates with its advanced extent-based allocation scheme, which minimizes fragmentation and optimizes performance. XFS also excels in environments requiring robust scalability and high performance, especially with large files and volumes. Its journaling mechanism ensures data integrity and swift recovery from crashes, and it supports dynamic inode allocation, preventing inode shortages and enhancing flexibility. Additionally, XFS offers features like online defragmentation and resizing, but it lacks transparent compression. 
Experimental Setup and Study Methodology Our testing setup consists of multiple HP ProLiant DL380 Gen10 servers (24-core Xeon CPU, 256 GB of memory). We specifically selected 2U servers, as they offer the best price-performance ratio regarding the potential to add future expansions, such as hard disk or SSD (solid state drive) storage or PCI Express cards. Regarding the storage subsystem, we used a stack of 10,000 rpm Seagate Savio 900 GB disks. The OS disk was on a separate 240 GB SATA SSD. FIO (flexible IO tester) was used for many of these tests, and configuration files were created for all test scenarios. Here is an excerpt from the FIO configuration file:
[global]
ioengine=libaio  ; use the libaio engine for asynchronous input/output
direct=1         ; bypass the operating system cache for I/O operations
The test suite includes sequential read-write tests, random read-write tests, mixed-workload tests, web-server workload tests, database workload tests, and HPC workload tests. Let us start with sequential read-write tests for bandwidth, IOPS (input/output operations per second), and CPU usage. Sequential Performance Tests Let us first discuss sequential performance tests for bandwidth, IOPS, and CPU usage to see if we can establish some patterns. The term "sequential" usually refers to when a process retrieves data from storage, typically used for loading huge files like streaming videos, etc. In these tests, as shown in Figure 3, it is visible that ZFS is miles ahead in reading performance while being in the top three in terms of writing performance. Unlike other filesystems, its performance can be improved further by using more memory and faster/lower-latency NVMe (non-volatile memory express) SSDs. XFS and btrfs have excellent performance in writing data but are nowhere near the read performance of ZFS. This is especially relevant when discussing that RAIDZ topologies are the most common ZFS use cases, as they offer excellent performance while being sensible with raw capacity. Therefore, if we are looking for the best solution for a heavily read-prone use case with large files, ZFS is a clear winner. There are benefits to using btrfs and XFS as storage solutions regarding CPU usage. But that comes at a price, as Figure 5 explains. As seen in the previous two figures, both are much slower than ZFS and have no room to increase their performance, while the ZFS scores could still be improved. 
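Since every scenario below reports bandwidth, IOPS, and CPU usage, a small helper along the following lines could pull those numbers out of fio's results (a sketch under the assumption that fio is run with --output-format=json; the key names follow fio's JSON schema but may differ slightly between versions, and the file names are hypothetical):

import json

def summarize_fio(json_path: str) -> dict:
    """Extract read/write bandwidth (KiB/s), IOPS, and CPU usage from a fio JSON report."""
    with open(json_path) as fh:
        report = json.load(fh)
    job = report["jobs"][0]
    return {
        "read_bw_kib_s": job["read"]["bw"],
        "read_iops": job["read"]["iops"],
        "write_bw_kib_s": job["write"]["bw"],
        "write_iops": job["write"]["iops"],
        "cpu_user_pct": job["usr_cpu"],
        "cpu_sys_pct": job["sys_cpu"],
    }

# Example usage (hypothetical result files, one per filesystem):
# for fs in ("zfs", "btrfs", "xfs"):
#     print(fs, summarize_fio(f"results/seq-{fs}.json"))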
Random Performance Tests Random access is a scenario in which data on a disk are accessed non-sequentially. It is present in numerous applications where rapid retrieval of small files can significantly impact performance. So, let us now discuss random performance tests for bandwidth, IOPS, and CPU usage to see if we can learn more about the patterns we noticed while doing the sequential tests. As shown in Figure 6, ZFS demonstrates outstanding performance regarding available bandwidth for random reads and writes. This further highlights the importance of an effective caching methodology when searching for top-level storage performance. Even more impressive is that it displays the same read and write performance. The next test we need to conduct is to check the IOPS read and write averages for random read and write workloads. Again, ZFS completely dominates the test, posting five times better scores than the first real competitor, as shown in Figure 7. The next round of tests relates to CPU usage when dealing with random workloads, so let us check those results now. As we can see in Figure 8, we have a similar situation to the sequential scenario. While the CPU usage is higher, so is the performance. This is partly because RAIDZ uses an erasure-coding methodology (RAIDZ is very similar to RAID level 5). Let us see what happens when we test our filesystems using a mixed-load performance test. 
Mixed-Workloads Performance Tests A mixed load is a complex procedure that combines sequential and random read-write operations. It is particularly challenging, as it assesses a filesystem's capacity to handle various tasks efficiently. Let us first discuss the results for the bandwidth of our mixed-workload performance test. Figure 9 shows that ZFS is a clear winner, as this scenario combines sequential and random workloads, which proved to be ZFS's strong suit. It is surprising how vast this gap is, as we still see a massive gap between ZFS and its first competitor. We still see the same pattern in Figure 11; ZFS keeps using more CPU, which is expected, but provides significantly better performance. We look forward to the web-server workload tests to see if something changes, as clear patterns emerge from our tests. Web Performance Tests Web-server load refers to obtaining data that involves multiple concurrent requests for small files. In our scenario, we have set the test to simulate reading and writing to a 10 GB file in 120 s runs or a 100 GB file for a 600 s run, with an 80/20 read-write ratio and a depth of 64. It is a critical aspect of filesystems in online hosting setups. Let us first check the bandwidth test for our web-server workload. 
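As an illustration of how the web-server scenario described above maps onto a fio job (a sketch based only on the parameters quoted in the text, i.e. the 80/20 mix, queue depth 64, 10 GB file and 120 s run; the 4 KiB block size and the file path are assumptions, and this is not the authors' exact job file), a helper like the following could generate it:

def web_server_job(path="/mnt/testfs/webfile", size="10G", runtime_s=120):
    """Render a fio job file approximating the web-server workload described above."""
    return f"""[global]
ioengine=libaio
direct=1
time_based=1
runtime={runtime_s}

[web-server]
filename={path}
size={size}
rw=randrw
rwmixread=80
iodepth=64
bs=4k
"""

with open("web-server.fio", "w") as fh:
    fh.write(web_server_job())
# Then run, for example:  fio web-server.fio --output-format=json --output=web-zfs.json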
ZFS still keeps way ahead, although XFS jumped in performance and came close in this scenario, as is visible in Figure 12. Still, it is not enough to overtake ZFS's performance. Let us now check the IOPS test for the web workload. Figure 13 shows that in the web-server IOPS test, ZFS is even further ahead of the rest of our tested filesystems, maintaining a healthy 6x-plus performance lead. Let us check the CPU usage scores.
When equipped with an improved cache, ZFS delivers markedly superior performance in all tests barring this one, guaranteeing a swift response even during periods of heavy usage; Figure 14 clearly shows this dominance. These findings indicate that ZFS is the most suitable option for web servers that need rapid response times. If we are worried about CPU usage, then btrfs and ext4 seem like viable options.
The next round of tests concerns database performance, as we expand the tasks we require our storage to perform. Databases are notoriously sensitive to write latency and performance jitter. Let us check which filesystem performs best.
DB Performance Tests
The database-server load is characteristic of conventional database servers, where rapid access and data processing are crucial. Let us start with the usual bandwidth test.
As we can conclude from Figure 15, ZFS continues to have the measure of the field, performing roughly twice as fast in terms of read and write bandwidth. Let us check if the same story continues with the IOPS measurement. As Figure 16 shows, ZFS continues to have a significant advantage over every other filesystem. Before we move on to the most critical test, related to HPC application performance, let us briefly check the CPU usage characteristics for the DB workload. ZFS is still the most CPU-hungry but also delivers the best performance, as Figure 17 clearly shows. This is just a continuation of the trend we noticed at the start of our tests.
Before we move on to the final round of tests, let us summarize what we have found so far. ZFS was a clear winner in all the tests, sometimes 2× and sometimes up to 6× ahead of the competition. If our article's premise is correct, we suspect something similar should happen with the HPC application performance test. Let us check that scenario now.
HPC Performance Tests
In HPC, the efficiency of each system component is critical for achieving optimal performance. This is particularly true for storage systems within Kubernetes-managed environments. This part of our paper provides an in-depth analysis of storage performance across different filesystems, such as btrfs and ZFS, in the context of a Kubernetes cluster running an HPC application.
The setup includes a Kubernetes cluster spread across 16 physical nodes. Each node contributes substantial computing resources, collectively providing 256 physical cores and 1 TB of memory. Such a configuration is essential for supporting the application's intensive computational demands and for using parallel processing frameworks like OpenMP and MPI. In terms of storage, the data are read and written using the same storage setup as in the previous tests discussed earlier in this paper.
In this Kubernetes environment, the horizontal pod autoscaling (HPA) feature dynamically scales the number of containers based on computational demand, with some spare computing power reserved for regular OS and Kubernetes operations.The system is configured to scale up to 16 replicas of custom-configured pods, each containing segments of the HPC application.This dynamic scaling is crucial for adapting to the varying load and ensuring efficient resource utilization throughout the performance measurement process.Here are the results for our tests using the same testing parameters as earlier in the paper. The analysis demonstrates a hierarchy in filesystem performance, with ZFS leading, followed by btrfs and XFS in different configurations, as shown in Figure 18.Let us now check what happens when we conduct a round of IOPS tests for this workload. HPC Performance Tests In HPC, the efficiency of each system component is critical for achieving optimal performance.This is particularly true for storage systems within Kubernetes-managed environments.This part of our paper provides an in-depth analysis of storage performance across different filesystems, such as btrfs and ZFS, in the context of a Kubernetes cluster running an HPC application. The setup includes a Kubernetes cluster spread across 16 physical nodes.Each node contributes to substantial computing resources, collectively providing 256 physical cores and 1 TB of memory.Such a configuration is essential for supporting the application's intensive computational demands and using parallel processing frameworks like OpenMP and MPI.In terms of storage, the data are read and written by using the same storage setup as in the previous tests discussed earlier in this paper. In this Kubernetes environment, the horizontal pod autoscaling (HPA) feature dynamically scales the number of containers based on computational demand, with some spare computing power reserved for regular OS and Kubernetes operations.The system is configured to scale up to 16 replicas of custom-configured pods, each containing segments of the HPC application.This dynamic scaling is crucial for adapting to the varying load and ensuring efficient resource utilization throughout the performance measurement process. Here are the results for our tests using the same testing parameters as earlier in the paper. The analysis demonstrates a hierarchy in filesystem performance, with ZFS leading, followed by btrfs and XFS in different configurations, as shown in Figure 18.Let us now check what happens when we conduct a round of IOPS tests for this workload. ZFS is again setting the standard for how a filesystem should perform by a 2x margin or more.Looking at Figure 19, it's very clear why ZFS would be a great filesystem choice for applications that produce a lot of IOPS. 
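For reference, the autoscaling behaviour described above (scaling up to 16 replicas) can be set up with a single kubectl command. The sketch below wraps that command in Python; the deployment name "hpc-app" and the 80% CPU-utilization target are assumptions for illustration and are not values taken from the text:

import subprocess

def configure_hpa(deployment: str = "hpc-app", namespace: str = "default",
                  max_replicas: int = 16, cpu_percent: int = 80):
    """Create a horizontal pod autoscaler for the HPC deployment (hypothetical names)."""
    cmd = [
        "kubectl", "autoscale", "deployment", deployment,
        f"--namespace={namespace}",
        "--min=1",
        f"--max={max_replicas}",      # the 16-replica ceiling from the text
        f"--cpu-percent={cpu_percent}",
    ]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    configure_hpa()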
The last round of tests is related to the HPC application workload CPU usage. Let us check what our tests tell us. This is quite surprising; suddenly, the CPU usage characteristics of our filesystems are much closer, especially for the more advanced configurations. But, again, as shown in Figure 20, ZFS uses more CPU, and XFS is very near it (and we do not mean that positively). Still, the performance difference between ZFS and everything else justifies the extra usage.
Which Filesystem to Use for an HPC Application Managed by Kubernetes
The filesystem selection should be meticulously matched with the needs and prerequisites of the applications it caters to. This research provides a foundation for making educated choices by emphasizing each filesystem's significant performance attributes, benefits, and constraints regarding performance and functionality. It offers users and system administrators the essential information to determine the most suitable option for their requirements.
In Kubernetes deployments that leverage HPC applications, selecting the appropriate filesystem is pivotal for optimizing data throughput, processing speed, and latency, depending on the application. The scalability and robustness of ZFS make it an exceptional choice for applications that demand peak performance and require reliability and data integrity. These qualities are vital in fields that depend on precise scientific calculations and data analysis, where accuracy is paramount.
As mentioned, choosing the proper filesystem for HPC environments should consider data integrity, performance, and scalability factors. For systems like the one described here, ZFS offers superior performance and reliability, making it the preferred choice for demanding HPC applications. Continuous performance monitoring and adjustments, informed by evolving workloads and Kubernetes enhancements, are crucial for maintaining an optimized computational environment. This sort of analysis can have a long-term impact on how we design HPC environments, especially considering that we can build different types of open-source storage systems for free and scale them to incredible capacities while maintaining high storage performance.
If we want to go further towards SDS (software-defined storage), the most viable solution seems to be Ceph, as it enables us to implement distributed, highly available storage that can host object, file, and block-based storage devices on top of it. We need to be aware that the nature of Ceph + ZFS means that both storage subsystems will use CoW, which is not the best approach in terms of performance, and that network-based replication will always be the primary latency bottleneck. Depending on the architecture and how Ceph is used, parts of the latency problem can be solved by employing faster network standards and RDMA (remote direct memory access) technologies like RoCE (RDMA over Converged Ethernet) and by using offload technologies like DPDK (Intel's Data Plane Development Kit). At this point, RDMA is a community-supported feature, while DPDK support should become available when the next Ceph release, based on the multi-core Crimson project, comes out. Furthermore, other work has shown that there are alternative approaches combining thread scheduling mechanisms and run-time CPU load monitoring that can improve I/O performance dynamically [21]. There are some known problems with the interoperability of ZFS with Linux services, such as NSCD (name server caching daemon) [22]. Also, over time, multiple advancements have been made in the scientific community regarding data integrity, for example, the concept of flexible end-to-end data integrity [23]. Overall, operational gains can be achieved using that type of architecture, but we will also face different design challenges if we go the Ceph route.
Let us discuss operational gains first:
• Ceph allows us to have an all-in-one solution covering block-based, file-based, and object-based storage, some of which might be required for specific workloads;
• Ceph is easily scalable, offers excellent performance when configured correctly, and can be used cheaply because it works with regular commodity hardware;
• Ceph offers a wide range of options for high availability and fault tolerance, which might be required for many scenarios where resilience is a significant design factor.
In terms of design challenges that might have significant implications for how we design HPC environments:
• Using Ceph might also require different calculations when capacity planning, as having multiple copies of data or objects requires more disks, servers, or both;
• This might, in turn, significantly affect the overall HPC data-center design, as it is an entirely different design scenario when storage high availability and fault tolerance need to be part of the design equation.
One of the most important conclusions of our paper is related to the design aspect of an HPC data center from a storage perspective when running ZFS on Ceph-based infrastructure compared to running ZFS natively. Because Ceph-based infrastructure has replication capability, which inherently means higher availability and much better data resilience, the data ingestion process for our applications does not have to be a two-stage process, leading to a considerably smaller infrastructure size and cost. On the other hand, with native ZFS, we have to pay attention to the fact that we do not have as much availability and resilience, and there will be situations where that might come into question. It is not impossible to run ZFS in highly available configurations, but doing so is usually burdened by additional license costs and quite a bit more configuration. This scenario's initial operational and cost overhead might make it worthwhile to investigate Ceph-based storage infrastructure, albeit with some added latency and a performance penalty. We will delve into this in future research, as there will be many additional parameters to tune in this scenario, a process that is not necessarily well documented or researched.
Discussion
The filesystem selection should be meticulously matched with the needs and prerequisites of the applications it caters to. This research provides a foundation for making educated choices by emphasizing each filesystem's significant performance attributes, benefits, and constraints regarding performance and functionality. It offers users and system administrators the essential information to determine the most suitable option for their requirements.
Based on the testing, ZFS demonstrated superior performance in multiple tests when configured with sufficient cache memory. Simultaneously, btrfs showed its capacity to offer similar functionalities while consuming substantially fewer resources. ZFS is notable for its high reliability and innovative features, including built-in compression and RAIDZ. Although btrfs is still in development and may not yet be as stable, it presents a promising option due to its lower resource consumption. With the cache capabilities of ZFS, however, adding RAM, faster NVMe SSDs for the L2ARC (essentially a second-level random read cache), and a dedicated ZIL device (synchronous write cache) offers the possibility of a huge performance increase.
Conclusions
ZFS has several benefits, including its durability, strong resilience, and extensive set of sophisticated features. Nevertheless, these advantages are accompanied by increased resource utilization. Btrfs exhibits reduced resource use yet encounters challenges related to stability and long-term advancement.
When evaluating their utilization, ZFS is the superior option for high-criticality systems where the utmost durability and robustness are necessary. Btrfs is a favorable choice for less critical systems that prioritize the efficient use of resources.
This research guides the selection between btrfs and ZFS, emphasizing the importance of comprehending their distinct characteristics and constraints to meet the diverse requirements of the IT environment. Choosing a filesystem necessitates a well-rounded evaluation of technical and operational factors to maximize performance and efficiency. This study affirms the significance of conducting a thorough assessment and customization of technical solutions to meet individual requirements, confirming that making the appropriate selection can significantly enhance the success and reliability of IT systems. It also puts the onus on the correct design process, a part of the overall process that has often been conducted the wrong way, which, in turn, means that we do not fully exploit the capabilities of our infrastructure. We will delve into this topic in future research, as we see a real possibility for significant changes to the design of future HPC data centers on the horizon.
Figure 3. Sequential read-write results for bandwidth.
Figure 4. Sequential write patterns for bandwidth and IOPS-read, write, and CPU usage.
Figure 9. Mixed-workload bandwidth results.
Figure 14. Web performance results in terms of CPU usage.
Figure 17. CPU usage for DB performance test.
Figure 18. HPC application bandwidth performance measurement across all storage configurations.
Figure 19. HPC application IOPS performance across filesystems.
15,233.4
2024-06-03T00:00:00.000
[ "Computer Science", "Engineering" ]
Estimation of Airspeed, Angle of Attack, and Sideslip for Small Unmanned Aerial Vehicles (UAVs) Using a Micro-Pitot Tube Fixed and rotary-wing unmanned aircraft systems (UASs), originally developed for military purposes, have widely spread in scientific, civilian, commercial, and recreational applications. Among the most interesting and challenging aspects of small UAS technology are endurance enhancement and autonomous flight; i.e., mission management and control. This paper proposes a practical method for estimation of true and calibrated airspeed, Angle of Attack (AOA), and Angle of Sideslip (AOS) for small unmanned aerial vehicles (UAVs, up to 20 kg mass, 1200 ft altitude above ground level, and airspeed of up to 100 knots) or light aircraft, for which weight, size, cost, and power-consumption requirements do not allow solutions used in large airplanes (typically, arrays of multi-hole Pitot probes). The sensors used in this research were a static and dynamic pressure sensor ("micro-Pitot tube" MPX2010DP differential pressure sensor) and a 10 degrees of freedom (DoF) inertial measurement unit (IMU) for attitude determination. Kalman and complementary filtering were applied for measurement noise removal and data fusion, respectively, achieving global exponential stability of the estimation error. The methodology was tested using experimental data from a prototype of the devised sensor suite, in various indoor-acquisition campaigns and laboratory tests under controlled conditions. AOA and AOS estimates were validated via correlation between the AOA measured by the micro-Pitot and vertical accelerometer measurements, since lift force can be modeled as a linear function of AOA in normal flight. The results confirmed the validity of the proposed approach, which could have interesting applications in energy-harvesting techniques.
Introduction
Small unmanned aerial vehicles (UAVs), with maximum gross takeoff mass <10 kg, normal operating altitude <1200 ft above ground level (AGL) and airspeed <100 knots according to the U.S. Department of Defense (DoD) classification [1] (p. 12), are an easy-to-use and economical way to perform tasks that can be fulfilled without human involvement, or for flights in unconventional missions or constrained space. UAVs can also be an optimal solution as a test bench for new sensor systems or embedded flight management systems. When subsystems are integrated to improve characteristics such as estimation of the vehicle's state vector, autonomy, and guidance, navigation, and control (GNC) capabilities, we categorize unmanned aircraft systems (UASs) as semiautonomous, remotely operated, and fully autonomous [2][3][4][5]. A UAS comprises several subsystems that include the aircraft, its payload, the control station(s) (and, often, other remote stations known as ground stations (GSs)), aircraft launch and recovery subsystems where applicable, support subsystems, communication subsystems, and transport subsystems. Recent improvements of technologies such as global navigation satellite systems (GNSSs), inertial measurement units (IMUs), light detection and ranging systems (LiDAR), microcontrollers, and imaging sensors have made such systems increasingly capable. In Section 3, the sensors used are characterized. Section 4 reports the experimental activity (pressure-sensor calibration, estimation of velocity, angle of attack, angle of sideslip, and attitude), describing the tests performed and analyzing the numerical results. Final considerations and future work (Section 5) conclude the paper.
Kinematic Model
We begin with the aircraft kinematics [27][28][29], assuming a rigid-body model, and referencing Figure 1. Let $\mathbf{v}_{ac}^{B} = [u\ v\ w]^{T}$ denote the velocity vector (ground speed, relative to Earth) in the aircraft's body coordinate frame (CF), with $T$ denoting transpose. Let $\mathbf{v}_{ac}^{N} = [u_g\ v_g\ w_g]^{T}$ denote the velocity vector with components referred to an Earth-fixed north-east-down (NED) CF. The UAV kinematics are given by Equation (1), where $\mathbf{a} = [a_x\ a_y\ a_z]^{T}$ is the acceleration vector, decomposed in the body CF, and $p$, $q$, $r$ are the body-frame angular rates. The wind velocity vector relative to the Earth, decomposed in the NED CF, is $\mathbf{v}_{w}^{N} = [u_w\ v_w\ w_w]^{T}$. The relation among the airspeed (velocity with respect to the surrounding air), the ground speed (velocity with respect to the Earth frame), and the wind velocity (with respect to the Earth frame) is given by Equation (2).
The rotation matrix for moving from the vehicle-carried NED frame to the body frame, $R_b^N$, is defined by roll ($\phi$, positive up), pitch ($\theta$, positive right), and yaw ($\psi$, positive clockwise) angles [27] (Chap. 2):
$R_b^N = \begin{bmatrix} \cos\theta\cos\psi & \cos\theta\sin\psi & -\sin\theta \\ \sin\phi\sin\theta\cos\psi-\cos\phi\sin\psi & \sin\phi\sin\theta\sin\psi+\cos\phi\cos\psi & \sin\phi\cos\theta \\ \cos\phi\sin\theta\cos\psi+\sin\phi\sin\psi & \cos\phi\sin\theta\sin\psi-\sin\phi\cos\psi & \cos\phi\cos\theta \end{bmatrix}$ (3)
Therefore, the relative velocity (airspeed) vector $\mathbf{v}_r = [u_r\ v_r\ w_r]^{T}$ in the body CF follows from Equation (4). According to [27], the airspeed vector body-frame components can be expressed in terms of the airspeed magnitude, angle of attack, and sideslip angle as
$u_r = V_{TAS}\cos\alpha\cos\beta$, $v_r = V_{TAS}\sin\beta$, $w_r = V_{TAS}\sin\alpha\cos\beta$ (5)
The AOA, $\alpha$, is defined as the angle between the longitudinal (X) axis of the airframe and the freestream velocity (or relative wind), measured in the X-Z plane of the body CF, and is positive when there is uplift (pitch-up), whereas the AOS, $\beta$, is measured between the X-body axis of the airframe and the relative wind velocity vector, and is positive for wind coming from starboard (right side). Inverting Equation (5), the angles $\alpha$, $\beta$ and the true airspeed $V_{TAS}$ are given by
$\alpha = \tan^{-1}(w_r/u_r)$ (6), $\quad \beta = \sin^{-1}(v_r/V_{TAS})$, with $V_{TAS} = \sqrt{u_r^2 + v_r^2 + w_r^2}$ (7)
The calibrated airspeed $V_{CAS}$ is derived from [30] (Chap. 3) as a function of the measured differential pressure $\Delta P$ (Equation (9)), where $\rho_{sl}$ and $P_{sl}$ are the sea-level standard atmospheric values of air density and pressure, respectively ($P_{sl}$ = 101.325 kPa, $\rho_{sl}$ = 1.225 kg/m³ at 15 °C, or 288.15 K [31]). When Mach numbers are small (less than 0.3), Equation (8) is related to Equation (9) by Equation (10), where $\sigma$ is the relative density; i.e., the ratio between the actual air density and the standard air density at sea level. For low-level flights and small velocities (typical of small UAV mission scenarios), $V_{CAS}$ can be assumed to be equal to $V_{TAS}$.
Error Analysis
Assuming statistically independent observations, the variance of the calculated value of $\alpha$ (Equation (6)), $\sigma^2_\alpha$, can be evaluated using the special law of propagation of variances (SLOPOV) [32] (Chapter 6), as in Equation (11), where $\sigma^2_{w_r}$ and $\sigma^2_{u_r}$ are the variances of the measured quantities $w_r$ and $u_r$, respectively. Equation (11) can easily be rearranged into Equation (12). As far as $\sigma^2_\beta$ is concerned, using Equation (7) and the SLOPOV, it can be shown that Equation (13) holds, where $A = u_r^2 + w_r^2$ and $B = u_r^2 + v_r^2 + w_r^2 = V_{TAS}^2$.
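A small numerical sketch of the inversion in Equations (5)-(7) may help. The airspeed-from-pressure step below uses the standard incompressible Pitot relation V = sqrt(2*dP/rho_sl); this is our reading of Equation (9) for the low-Mach case, not a formula quoted verbatim from the paper:

import math

RHO_SL = 1.225  # sea-level standard air density, kg/m^3

def airflow_angles(u_r: float, v_r: float, w_r: float):
    """Invert Equation (5): airflow angles (deg) and true airspeed (m/s)
    from the body-frame relative-velocity components (m/s)."""
    v_tas = math.sqrt(u_r**2 + v_r**2 + w_r**2)
    alpha = math.atan2(w_r, u_r)      # angle of attack
    beta = math.asin(v_r / v_tas)     # sideslip angle
    return math.degrees(alpha), math.degrees(beta), v_tas

def cas_from_differential_pressure(delta_p_pa: float) -> float:
    """Calibrated airspeed from Pitot differential pressure, assuming the
    incompressible relation V = sqrt(2*dP/rho_sl) (valid for low Mach)."""
    return math.sqrt(2.0 * delta_p_pa / RHO_SL)

# Example: u_r = 10, v_r = 0.5, w_r = 1.0 m/s
# -> alpha ~ 5.7 deg, beta ~ 2.8 deg, V_TAS ~ 10.1 m/s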
Assuming that the vertical winds are zero (usually correct for a nonturbulent atmosphere), and measuring the sideslip angle in the X-Y plane of the body CF to maintain independence (i.e., decoupling) between $\alpha$ and $\beta$, then $A = u_r^2$, $B = u_r^2 + v_r^2$, and Equation (13) can be expressed in a form similar to Equation (12), namely Equation (14). From Equations (12) and (14) it can be seen, by finding the maxima of the functions $1/(u_r/w_r + w_r/u_r)$ and $1/(u_r/v_r + v_r/u_r)$, that the errors in $\alpha$ and $\beta$ are maximized when the velocity ratios $u_r/w_r$ or $w_r/u_r$ (for $\beta$, $u_r/v_r$ or $v_r/u_r$) are equal to 1, and that $\sigma^2_\alpha$ and $\sigma^2_\beta$ increase as the velocity components become small; i.e., as the vehicle approaches a hovering flight (in which knowledge of AOA and AOS becomes less important to the control strategy). Figure 2a, as an example, shows $\sigma_\alpha$ in degrees as a function of $u_r$ and $w_r$, assuming a typical value of 0.2 m/s for the standard deviations of the measured velocity components ($\sigma_{u_r}$ and $\sigma_{w_r}$). Finally, using Equation (9), the variance of the calibrated airspeed is found as in Equation (15), where $\sigma^2_{\Delta P}$ is the variance of the differential pressure measurements ($\sigma_{\Delta P}$ set to 0.1 kPa; see Table 2, Section 3). Figure 2b shows the propagated error on $V_{CAS}$ (i.e., $\sigma_{V_{CAS}}$) as a function of $\Delta P$, with the operating pressure range of the sensor from 0 to 10 kPa, as from Table 2.
Measurement Noise Estimation via Kalman Filtering
We applied Kalman filtering [33] for measurement noise estimation and removal, using a simple 1D formulation. The measurement process was modeled as in Equation (16), where $x_k$ is the k-th measurement; $w_k$ is the (Gaussian, zero-mean) model noise with variance Q (initialized to 0); $v_k$ is the (Gaussian, zero-mean) measurement noise, with variance R, derived from the sensors' technical datasheets (see Section 3); and A and C are scalar quantities equal to 1. The (scalar) variance of the estimation error is P. The predictor-corrector sequence (Kalman filtering) used in our work was implemented in the Arduino Integrated Development Environment (IDE) as a function (KF_nr, where "nr" stands for "noise removal") called whenever a new measurement (Data) was available from the sensor. The equations for the a priori estimation error, the Kalman gain $K_k$, the updated estimate $\hat{x}_{k+1|k+1}$ (with current data $y_{k+1}$), and the minimum-square a posteriori estimation error $P_{k+1|k+1}$ are given, respectively, by Equations (17)-(20). The Arduino code translated these equations (using the compound addition "+=") as shown in Table 1.
float KF_nr(float Data) {
  P += Q;                 // a priori error variance
  K = P / (P + R);        // Kalman gain
  X += K * (Data - X);    // update the estimate with the new measurement
  P = (1 - K) * P;        // a posteriori error variance
  return X;
}
Sensor System
The system used for velocity, AOA, AOS, and attitude estimation is composed of the following:
Differential Pressure Sensor (MPX2010DP)
Estimating the CAS (Equation (9)) requires evaluation of the dynamic pressure. The Freescale Semiconductor, Inc. MPX2010DP (Figure 3) silicon piezoresistive pressure sensor [34] provides a very accurate and linear voltage output directly proportional to the applied differential pressure, in the range of 0-10 kPa. The output voltage of the differential gauge sensor increases with increasing pressure applied to the positive pressure side (port P1 in Figure 3) relative to the vacuum side (port P2). The sensor is designed to operate with positive differential pressure applied (P1 > P2). Table 2 shows some technical specifications at 25 °C. The sensor housed a single monolithic silicon die with the strain gauge and thin-film resistor network integrated.
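Returning to the error analysis above, the growth of the angle errors at low speed can be checked numerically. The expressions below are our reconstruction of the first-order variance propagation for alpha = atan(w_r/u_r) and beta = atan(v_r/u_r), not formulas copied from the paper; the 0.2 m/s standard deviation matches the value used for Figure 2a:

import math

def sigma_alpha(u_r, w_r, s_u=0.2, s_w=0.2):
    """Propagated standard deviation (deg) of alpha = atan(w_r/u_r)."""
    var = (w_r**2 * s_u**2 + u_r**2 * s_w**2) / (u_r**2 + w_r**2) ** 2
    return math.degrees(math.sqrt(var))

def sigma_beta(u_r, v_r, s_u=0.2, s_v=0.2):
    """Propagated standard deviation (deg) of beta = atan(v_r/u_r), vertical wind neglected."""
    var = (v_r**2 * s_u**2 + u_r**2 * s_v**2) / (u_r**2 + v_r**2) ** 2
    return math.degrees(math.sqrt(var))

# The error grows as both components become small (near-hover):
print(sigma_alpha(7.0, 0.7))   # ~1.6 deg at roughly 7 m/s
print(sigma_alpha(1.0, 1.0))   # ~8.1 deg near hover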
The chip was laser-trimmed for precise span, offset calibration, and temperature compensation, allowing optimal linearity, low pressure and temperature hysteresis (±0.1% VFSS from 0 to 10 kPa and ±0.5% VFSS from −40 °C to 125 °C, respectively), and an excellent response time. It fulfilled the in-flight requirements.
Digital Pressure Sensor (BMP180)
The static pressure was also acquired by the BMP180 digital pressure sensor (Figure 4) in order to obtain redundant measurements and more accurate estimates. The BMP180, produced by Bosch Sensortec, consists of a piezoresistive sensor, an analog-to-digital converter, a control unit with E²PROM, and a serial I²C interface, which allowed for easy system integration by direct connection to commercial microcontrollers [35]. The 16-bit (or 19-bit in high-resolution mode) pressure data, in the range of 300-1100 hPa (from +9000 m to −500 m relative to sea level), and 16-bit temperature data were compensated by the calibration data stored in the embedded E²PROM. Detailed sensor features are reported in Table 3.
10 DOF IMU
The DFRobot 10 DOF IMU sensor [36] (Figure 5) is a low-power (10 mW), compact-size (26 × 18 mm) board, fully compatible with the Arduino microcontroller family, and integrates an Analog Devices 10-bit ADXL345 accelerometer with up to ±16 g dynamic range, 0.312 × 10⁻⁵ g sensitivity, and ±40 mg drift [37]; a Honeywell HMC5883L magnetometer [38]; a 16-bit ITG-3205 gyro with ±2000°/s full-scale range, 0.014°/s sensitivity, and ±1°/s drift [39]; and a Bosch BMP085 pressure sensor. The IMU is a polysilicon surface-micromachined structure built on top of a silicon wafer. Polysilicon springs suspend the structure over the surface of the wafer and provide resistance against acceleration forces. IMU measurements were used in this research to estimate the acceleration components and the angular rates in Equation (1), and to check the effectiveness of the micro-Pitot approach for velocity and attitude determination.
Microcontroller
The Arduino Mega 2560 board (102 × 53 mm, weight 37 g) is a microcontroller board based on the ATmega2560. It has 54 digital input/output pins (of which 14 can be used as PWM outputs), 16 analog inputs, 4 UARTs (hardware serial ports), a 16 MHz crystal oscillator, a USB connection, a power jack, an ICSP header, and a 5 V voltage supply. The board has 250 mA current absorption (1.25 W power consumption) when running code and providing power to external sensors. The board contains everything needed to support the microcontroller; simply connecting it to a computer with a USB cable, or powering it with an AC-to-DC adapter or battery, allows the board to be used and to work in an integrated development environment (IDE) based on the Processing language project.
Simulations, Results, and Performance Evaluation
Schematics and a prototype of the acquisition system are shown in Figures 6 and 7, respectively. The indoor experimental campaigns were performed in the PFDL of the University of Naples "Parthenope", Naples, Italy. The system was tested using a fan as an airflow generator while acquiring the differential pressures from the MPX2010DP sensor. The sensor's measurement range is 0-1023 (10 bits). Data were collected in 3-min acquisitions at a sampling frequency of 10 Hz. A digital anemometer (Proster PST-TL017 Handheld Anemometer [40]) was used for reference measurement of the airstream. Table 4 reports some technical specifications of the PST-TL017 device, shown in Figure 8.
Pressure-Sensor Calibration
To reduce measurement noise, 1D Kalman filtering was applied to the raw data, post-processed in the Matlab® software environment; the results are shown in Figure 9. Preliminarily, for sensor bias estimation, digital pressure data (10-bit strings) were collected in quiet airflow ($v_w = 0$). The MPX2010DP sensor used the raw Arduino 5 V voltage source, and the output voltage $V_{out}$ was amplified by a two-stage differential op-amp circuit with default gains of 101 (first stage) and 6 (second stage), to obtain a signal in the range of 0-5 V. Inherent in the MPX2010 family of pressure sensors is a zero-pressure offset voltage, typically up to 1 mV. At a 5 V supply voltage, the zero-pressure offset was 0.5 mV, which corresponded to a 0.39 V offset voltage in the first op-amp stage. This offset was amplified by the second stage and appeared as a DC offset at $V_{out}$ with no differential pressure applied. Using the design considerations described in [41], at zero pressure we expected a theoretical voltage value after the second gain stage of 2.34 V, and a pressure range of 0-2 kPa. Since the supply voltage was 5 V, the available signal for the actual pressure was 5 − 2.34 = 2.66 V. When $v_w = 0$, the sensor's DN (digital number) average output was 477, which corresponded to $V_{out} = V_{bias} = 2.33$ V (very close to the expected value), according to
$V_{out} = (DN/1023) \times 5\ \mathrm{V}$ (21)
The average bias voltage acquired at $v_w = 0$ (Figure 10), $V_{bias}$, was subtracted from measurements with $v_w \neq 0$ to estimate the differential pressure and velocity. To convert voltage into differential pressure ($\Delta P$, in kPa) and evaluate the velocity from Equation (9), the following linear relation was applied:
$\Delta P = \dfrac{V_{out} - V_{bias}}{V_{max} - V_{bias}} \times 2\ \mathrm{kPa}$ (22)
where $V_{max}$ = 5 V, $V_{bias}$ = 2.33 V, and 2 is the full scale of the sensor (2 kPa).
Indoor Tests-Velocity Estimation
The indoor experiments were performed by mounting the equipment on a test bench on which the airflow was generated by a domestic fan, placed at 0.5 m from the micro-Pitot tube. The environmental conditions were 33% relative humidity at 27 °C, as collected by the DHT11 sensor, a low-cost, small-size (12 mm × 15.5 mm, mass < 5 g), low-power (12.5 mW power consumption when operating at 5 V, 2.5 mA) temperature and humidity sensor with a calibrated 16-bit digital signal output on a single-wire serial interface, a 20-80% relative humidity range (accuracy 5%), and a 0-50 °C temperature range with ±2 °C accuracy [42]. Data were acquired at a 10 Hz sampling rate. Bias estimation and sensor calibration (Section 4.1) were performed in the "Speed 0" condition. Figures 11 and 12 show the acquired raw (unfiltered) and filtered data in the "Speed 1" and "Speed 2" (random wind speed) conditions, respectively. As a check of the effectiveness of the calibration strategy, the average estimated velocity value in the "Speed 1" test condition (Figure 11) was found to be 2.56 m/s; this corresponded to a relative error equal to (2.56 − 2.5)/2.56 = 2.3%. Raw data smoothing was performed, as shown in Figures 11a and 12a, with a moving average filter (the Matlab function smoothdata) in the time domain, comparable (but not as effective) to low-pass filtering in the frequency domain, while Figures 11b and 12b show the effect of 1D KF on the processed data.
Indoor Tests-AOA, AOS, and Attitude Estimation
AOA and AOS were calculated according to Equations (5) and (6). The entire system was successively mounted on a movable structure (Figure 13) to simulate various attitude changes of the UAV during the indoor tests.
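For clarity, the conversion chain of Equations (21) and (22), followed by the airspeed estimate, can be sketched as follows; the final velocity step again assumes the incompressible relation with sea-level density, which is our reading of Equation (9):

import math

V_MAX = 5.0      # supply / ADC reference voltage, V
V_BIAS = 2.33    # amplified zero-pressure offset measured at v_w = 0, V
FULL_SCALE_KPA = 2.0
RHO_SL = 1.225   # kg/m^3

def dn_to_voltage(dn: int) -> float:
    """Equation (21): 10-bit Arduino reading to output voltage."""
    return V_MAX * dn / 1023.0

def voltage_to_delta_p_kpa(v_out: float) -> float:
    """Equation (22): amplified output voltage to differential pressure (kPa)."""
    return (v_out - V_BIAS) / (V_MAX - V_BIAS) * FULL_SCALE_KPA

def airspeed_from_dn(dn: int) -> float:
    """Airspeed (m/s) from a raw reading, assuming V = sqrt(2*dP/rho_sl)."""
    dp_pa = max(voltage_to_delta_p_kpa(dn_to_voltage(dn)), 0.0) * 1000.0
    return math.sqrt(2.0 * dp_pa / RHO_SL)

# The zero-flow average reading of 477 maps back to ~2.33 V and ~0 m/s.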
The following assumptions were made:
• Static data acquisition: the system was fixed and simply surrounded by the constant airflow coming from the fan;
• Pitot tube aligned with the airframe longitudinal axis: the relative velocity was equal to the airflow velocity, following Equation (4).
Several test cases were set up, as shown in Table 5, in which the actual wind velocity ($|v_w|$) was measured by the digital anemometer, and the controlled ("true") attitude angles were selected by using a graduated scale mounted on the structure. For example, in test case 1, the system was set in a horizontal position, with the X-Y plane parallel to the ground (yaw, roll, and pitch angles equal to 0°, calculated by the IMU), and the "true" wind velocity was set to 7 m/s. To estimate the roll and pitch angles from the IMU, a complementary filter was applied, fusing accelerometer and gyroscope data to reduce drift and noise errors [43,44]:
$\hat{\phi}_k = \gamma\,(\hat{\phi}_{k-1} + \dot{\phi}\,\Delta t) + (1-\gamma)\,\phi_{acc}$ (and analogously for $\hat{\theta}_k$) (23)
where $\hat{\phi}_k$, $\hat{\theta}_k$, $\hat{\psi}_k$ are the estimates of the roll, pitch, and yaw angles at the instant k; $\gamma$ is the filter coefficient; $[\dot{\phi}\ \dot{\theta}\ \dot{\psi}]^T$ is the angular rate vector derived from gyroscopic measurements [45]; and $\phi_{acc}$, $\theta_{acc}$ are the angles derived from the accelerometer data vector a. These can be derived from a triple-axis tilt calculation, which evaluates the angle $\phi$ between the gravity vector and the accelerometer's z-axis (with positive direction opposite to gravity); $\theta$ between the horizon (initially coincident with the accelerometer xy-plane) and the x-axis, coincident with the body longitudinal axis; and $\psi$ between the horizon and the y-axis of the accelerometer [46]. The roll angle (for $a_x = 0$) is given by $\tan^{-1}(a_y/a_z)$, and the pitch (for $a_y = 0$) by $\tan^{-1}(a_x/a_z)$. The yaw angle was estimated by simply integrating the gyroscopic yaw rate. The filter coefficient was determined by $\gamma = \tau/(\tau + \Delta t)$, where $\tau$ is the time constant of the filter. Figure 14 shows the KF velocity estimation and the Euler angles with the complementary filter during data collection in static conditions (test case 1).
Following Equation (4), the relative velocity $\mathbf{v}_r$ and its components $[u_r\ v_r\ w_r]^T$ were evaluated, whereas Equations (6) and (7) were used to estimate the AOA ($\alpha$) and AOS ($\beta$). Table 6 shows the average values of the estimates and the experimental results for the seven test cases devised, and Figures 15-17 show the collected data and the estimates during test conditions 3, 6, and 7, respectively. A simple statistical analysis was conducted on the estimates of airspeed, AOA, and AOS to evaluate the system's performance. Table 7 shows the standard deviations $\sigma_{V_{CAS}}$, $\sigma_\alpha$, $\sigma_\beta$ of the estimated values ($\sigma_\alpha$ and $\sigma_\beta$ before and after filtering), while Figure 18 shows a plot of $\sigma_\alpha$ and $\sigma_\beta$ as a function of the velocity magnitude, together with the relative least-squares fits. The experimental data were in agreement with the growth in errors as the velocities approached zero, according to the developed error analysis (Section 2.2).
Conclusions and Further Work
This paper presented a kinematic model for estimation of airspeed, angle of attack $\alpha$, and sideslip angle $\beta$ for small UAVs equipped with low-cost, off-the-shelf commercial navigation sensors. The devised system used a miniaturized differential pressure sensor (micro-Pitot tube) and an IMU for attitude determination, managed by a microcontroller. The calibration technique used for the pressure sensor returned estimation errors of less than 3%.
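A compact sketch of the complementary filter and accelerometer tilt equations used above follows; the update form is the standard one implied by the description, and the 0.5 s time constant is an assumption for illustration:

import math

def accel_tilt(ax: float, ay: float, az: float):
    """Roll and pitch (rad) from the accelerometer, as in the tilt equations above."""
    roll_acc = math.atan2(ay, az)
    pitch_acc = math.atan2(ax, az)
    return roll_acc, pitch_acc

def complementary_update(angle: float, rate: float, angle_acc: float,
                         dt: float, tau: float = 0.5) -> float:
    """One complementary-filter step: gyro integration blended with the
    accelerometer angle, with gamma = tau / (tau + dt)."""
    gamma = tau / (tau + dt)
    return gamma * (angle + rate * dt) + (1.0 - gamma) * angle_acc

# Example at 100 Hz: dt = 0.01 s gives gamma ~ 0.98, so the estimate follows
# the gyro in the short term and is slowly pulled toward the accelerometer angle.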
The system performance was evaluated using experimental results from indoor benchmarking tests that emulated some basic dynamics of typical UAV missions. As shown in Table 6, the relative error that affected the airspeed measurements was found to be less than 9% in all the test scenarios considered, or less than 5% if we exclude test case 4 in Table 6, which corresponds to a nonrealistic value of the AOA (which is typically in the range of −2° to 15°). The estimation errors of $\alpha$, $\beta$, and $V_{CAS}$ were found to be within 1.7° (excluding the nonrealistic case $\alpha$ = 45°), 0.5°, and 1.4 m/s, respectively, in good agreement with the theoretical values derived from the law of error propagation, and consistent with other authors' work [15,19,20,26]. The proposed approach showed promising potential for the implementation of real-time control laws to increase the flight envelope by exploiting attitude measurements and direct knowledge of $\alpha$ and $\beta$. Using AOA and AOS estimates as in-flight feedback inputs to the autopilot control loop could also help to improve the endurance of small aircraft (typically in the range of 15-45 min) by implementing specific flight strategies according to the wind conditions, or even by optimizing the trajectory to gain power through energy-harvesting techniques. There are, however, some limitations of the proposed methodology:
• The alignment of the micro-Pitot tube (differential pressure sensor) to the longitudinal axis of the UAV must be performed very precisely in order to avoid biases in the estimations of the AOA and AOS. Moreover, the sensor must be located reasonably far from the rotary wings (considering a quadcopter or a multirotor VTOL UAV) to avoid turbulent airflow added by the rotors.
• High velocities (>20 m/s) create differential pressure values out of the available sensor range (0-10 kPa at 10 V supply, 0-2 kPa at 5 V supply). This was not an issue for the micro-UAV applications devised by the authors (the system will be installed on a quadcopter with a maximum velocity on the order of 10 m/s), but could be a problem for larger aircraft.
• The mass and size requirements of our system (<150 g, typical dimensions of the boxed prototype of 120 × 60 × 30 mm) fit typical mini- and micro-UAV payload constraints, but the power consumption of the system (in the range of 1-2 W) could significantly reduce the aircraft's endurance (which is on the order of 15-30 min for typical small UAVs). Therefore, careful engineering considerations must be devised to reduce the impact of the system on flight mission duration.
Future work will involve the realization of a minimum-size, minimum-weight version of the sensor suite, to be strapdown-mounted on a small UAV with a 2 kg maximum takeoff weight (MTOW); the implementation of outdoor tests in typical flight and atmospheric conditions; and further developments of the error models and the attitude kinematics, to improve the accuracy of airflow angle and sideslip estimates in various flight and wind conditions. Author Contributions: Conceptualization, methodology, G.A., S.P., and U.P.; software and validation, G.A., S.P., and U.P.; formal analysis, investigation, data curation, G.A., S.P., and U.P.; resources, G.D.C.; writing-original draft preparation, G.A., U.P., and S.P.; writing-review and editing, S.P.; supervision, funding acquisition, G.D.C. All authors have read and agreed to the published version of the manuscript.
5,670
2021-09-22T00:00:00.000
[ "Engineering" ]
Visual Sensing and Depth Perception for Welding Robots and Their Industrial Applications With the rapid development of vision sensing, artificial intelligence, and robotics technology, one of the challenges we face is installing more advanced vision sensors on welding robots to achieve intelligent welding manufacturing and obtain high-quality welding components. Depth perception is one of the bottlenecks in the development of welding sensors. This review provides an assessment of active and passive sensing methods for depth perception and classifies and elaborates on the depth perception mechanisms based on monocular vision, binocular vision, and multi-view vision. It explores the principles and means of using deep learning for depth perception in robotic welding processes. Further, the application of welding robot visual perception in different industrial scenarios is summarized. Finally, the problems and countermeasures of welding robot visual perception technology are analyzed, and developments for the future are proposed. This review has analyzed a total of 2662 articles and cited 152 as references. The potential future research topics are suggested to include deep learning for object detection and recognition, transfer deep learning for welding robot adaptation, developing multi-modal sensor fusion, integrating models and hardware, and performing a comprehensive requirement analysis and system evaluation in collaboration with welding experts to design a multi-modal sensor fusion architecture.
Introduction
The interaction between cameras and welding lies in the integration of technology, vision, and field plots for controlling the welding process [1,2]. As we embrace the rapid development of artificial intelligence [3], the prospects for research and development in the automation and intelligence of robotic welding have never been more promising [4][5][6]. Scientists, engineers, and welders have been exploring new methods for automated welding. Over the past few decades, as shown in Figure 1, numerous sensors have been developed for welding, including infrared sensors [7], vision sensors [8,9], temperature sensors [10], acoustic sensors [11], arc sensors [12], and force sensors [13].
The vision sensor stands out as one of the sensors with immense development potential. This device leverages optical principles and employs image processing algorithms to capture images while distinguishing foreground objects from the background. Essentially, it amalgamates the functionalities of a camera with sophisticated image processing algorithms to extract valuable signals from images [14].
Vision sensors find widespread application in industrial automation and robotics, serving various purposes including inspection, measurement, object detection, quality control, and navigation [15]. These versatile tools are employed across industries such as manufacturing, food safety [16], automotive, electronics, pharmaceuticals, logistics, and unmanned aerial vehicles [17]. Their utilization significantly enhances efficiency, accuracy, and productivity by automating visual inspection and control processes.
A vision sensor may also include other features such as lighting systems to enhance image quality, communication interfaces for data exchange, and integration with control systems or robots. It works in a variety of lighting conditions for detecting complex patterns, colors, shapes, and textures. Vision sensors can process visual information in real time, allowing automated systems to make decisions and take actions.
Vision sensors for welding have the characteristics of non-contact measurement, versatility, high precision, and real-time sensing [18], providing powerful information for the automated control of welding [19]. However, extracting depth information is challenging in the application of vision sensors. Depth perception is the ability to perceive the three-dimensional (3D) world through measuring the distance to objects [20,21] by using a visual system [22][23][24] mimicking human stereoscopic vision and the accommodative mechanism of the human eye [25][26][27][28]. Depth perception has a wide range of applications [29,30], such as intelligent robots [31,32], facial recognition [33,34], medical imaging [35], food delivery robots [36], intelligent healthcare [37], autonomous driving [38], virtual reality and augmented reality [39], object detection and tracking [40], human-computer interaction [41], 3D reconstruction [42], and welding robots [43][44][45].
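To make the stereoscopic mechanism mentioned above concrete, depth in a rectified binocular setup follows the standard triangulation relation Z = f*B/d, where f is the focal length in pixels, B is the camera baseline, and d is the disparity; this formula is textbook material rather than something quoted from the cited reviews. A minimal sketch:

def stereo_depth(f_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth (m) from a rectified stereo pair via the standard triangulation relation.

    f_px:         focal length in pixels
    baseline_m:   distance between the two camera centers in meters
    disparity_px: horizontal pixel offset of the same point in both images
    """
    if disparity_px <= 0:
        raise ValueError("Disparity must be positive for a finite depth.")
    return f_px * baseline_m / disparity_px

# Example: f = 800 px, baseline = 0.12 m, disparity = 16 px -> Z = 6.0 m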
The goal of this review is to summarize and interpret the research in depth perception and its application to welding vision sensors, and to evaluate some examples of robotic welding based on vision sensors. Review [46] focuses on structured light sensors for intelligent welding robots. Review [47] focuses on vision-aided robotic welding, including the detection of various groove and joint types using active and passive visual sensing methods. Review [48] focuses on visual perception for different forms of industry intelligence. Review [49] focuses on deep learning methods for vision systems intended for Construction 4.0. The difference our review provides is a comprehensive analysis of visual sensing and depth perception. We contribute to visual sensor technology, welding robot sensors, computer vision-based depth perception methods, and the industrial applications of perception to welding robots.
Research Method
This article focuses on visual sensing and depth perception for welding robots, as well as their industrial applications. We conducted a literature review and evaluated it from several perspectives, including welding robot sensors, machine vision-based depth perception methods, and the welding robot sensors used in industry.
We searched for relevant literature in the Web of Science database using the search term "Welding Sensors". A total of 2662 articles were retrieved. As shown in Figure 2, these articles were categorized into subfields, and the top 10 fields and their respective numbers of articles were plotted. From each subfield, we selected representative articles and reviewed them further. Valuable references from their bibliographies were subsequently collected.
In total, we selected 152 articles as references for this review. Our criterion for literature selection was the quality of the articles, specifically focusing on the following:
1. Relevance to technologies of visual sensors for welding robots.
2. Sensors used in the welding process.
3. Depth perception methods based on computer vision.
4. Welding robot sensors used in industry.
Sensors for Welding Process
Figure 3 shows a typical laser vision sensor used for a welding process. If there are changes in the joint positions, the sensors used for searching the welding seam will provide real-time information to the robot controller. Commonly used welding sensors include thru-arc seam tracking (TAST) sensors, arc voltage control (AVC) sensors, touch sensors, electromagnetic sensors, supersonic sensors, laser vision sensors, etc.
Thru-Arc Seam Tracking (TAST) Sensors
In 1990, Siores [50] achieved weld seam tracking and the control of weld pool geometry using the arc itself as a sensor. The signal detection point is the welding arc, which eliminates sensor positioning errors and is unaffected by arc spatter, smoke, or arc glare, making it a cost-effective solution. Comprehensive mathematical models [51,52] have been developed and successfully applied to automatic weld seam tracking in arc welding robots and automated welding equipment. Commercial robot companies have equipped their robots with such sensing devices [53].

Arc sensor weld seam tracking uses the arc as a sensor to detect changes in the welding current caused by variations in the arc length [54]. The sensing principle is that when the arc position changes, primarily the distance between the welding nozzle and the workpiece surface, the electrical parameters of the arc change accordingly. From this, the relative position deviation between the welding gun and the weld seam can be derived from the arc oscillation pattern. In many cases, the typical thru-arc seam tracking (TAST) control method can optimize the weld seam tracking performance by adjusting various variables.

The main advantage of TAST as a weld seam tracking method is its low cost, as it only requires a welding current sensor as hardware. However, it requires the construction of a weld seam tracking control model, in which the robot adjusts the torch position in response to the welding current feedback.
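As a concrete sketch of how such a control model can use the weaving current signal, the snippet below compares the mean current on the two halves of a weave cycle to estimate the lateral seam offset. This is a minimal illustration of the through-arc principle under simplifying assumptions (synthetic data and an assumed millimetre-per-ampere gain), not the control model used in the cited studies.

```python
import numpy as np

def estimate_seam_offset(weave_pos, current, gain_mm_per_amp=0.05):
    """Estimate lateral seam offset from through-arc (TAST) current samples.

    weave_pos : torch positions across one weave cycle (mm);
                negative values are the left half, positive the right half.
    current   : welding current samples (A) taken at those positions.
    gain_mm_per_amp : assumed empirical correction factor.

    With the torch centred on the seam, the mean current on the two halves
    of the weave is equal. A higher mean current on one side means the
    stick-out is shorter there, i.e. the torch has drifted toward that
    side, so the correction points the other way.
    """
    weave_pos = np.asarray(weave_pos, dtype=float)
    current = np.asarray(current, dtype=float)
    left_mean = current[weave_pos < 0].mean()
    right_mean = current[weave_pos > 0].mean()
    # Positive value: move the torch to the right; negative: to the left.
    return gain_mm_per_amp * (left_mean - right_mean)

# Synthetic example: the current is higher on the left half of the weave,
# so the estimator suggests correcting the torch toward the right.
pos = np.linspace(-2.0, 2.0, 41)
cur = 200.0 + np.where(pos < 0, 5.0, -5.0)
print(f"suggested lateral correction: {estimate_seam_offset(pos, cur):+.2f} mm")
```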
Arc Voltage Control (AVC) Sensors
In gas tungsten arc welding (GTAW), there is a proportional relationship between the arc voltage and the arc length. AVC sensors monitor changes in the arc voltage caused by variations in the arc length, providing feedback to control the torch height [55]. Because of their sensitivity characteristics with respect to arc length signals, AVC sensors are primarily used for vertical tracking and, less frequently, for horizontal weld seam tracking. The establishment of an AVC sensing model is relatively simple, and the approach can be used in both pulsed current welding and constant current welding.

Laser Sensors
Due to material or process limitations, certain welding processes, such as thin plate welding, cannot utilize arc sensors for weld seam tracking. Additional sensors on the robotic system are required; laser sensors are a popular choice.

Laser sensors do not require an arc model and can determine the welding joint position before welding begins. When there are changes in the joint, the robot dynamically adjusts the welding parameters or corrects the welding path deviations in real time [56]. Laser sensor systems are relatively complex and have stringent requirements for the welding environment. Since the laser sensor is installed on the welding torch, it may limit the accessibility of the torch to the welding joint. An associated issue is the offset between the position of the laser sensor's detection point and the welding point, known as sensor positioning lead error.

Contact Sensing
Contact sensors do not require any weld seam tracking control functions. Instead, they find the weld seam before initiating the arc and continuously adjust the position deviation along the entire path. The robot operates in a search mode, using contact to gather the three-dimensional positional information of the weld seam. The compensation for the detected deviation is then transmitted to the robot controller.

Typical contact-based weld seam tracking sensors rely on probes that roll or slide within the groove to reflect the positional deviation between the welding torch and the weld seam [57]. They utilize microswitches installed within the sensor to determine the polarity of the deviation, enabling weld seam tracking. Contact sensors are suitable for X- and Y-shaped grooves, narrow gap welds, and fillet welds. Contact sensors are widely used in seam tracking because of their simple system structure, easy operation, low cost, and the fact that they are not affected by arc smoke or spatter. However, they have some drawbacks: different groove types require different probes, and the probes can wear significantly and deform easily, making them unsuitable for high-speed welding processes.
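Because the arc voltage in GTAW scales roughly with arc length, the AVC feedback described at the beginning of this subsection reduces, in its simplest form, to a proportional height correction. The sketch below uses an illustrative setpoint and gain that are assumptions, not values taken from the cited work.

```python
def avc_height_correction(measured_voltage, target_voltage, kp_mm_per_volt=0.4):
    """Proportional arc-voltage control (AVC) of torch height.

    A voltage above the setpoint indicates an arc that is too long, so the
    torch should move down; a voltage below the setpoint indicates an arc
    that is too short, so the torch should move up.
    Returns the vertical correction in mm (negative = lower the torch).
    """
    return -kp_mm_per_volt * (measured_voltage - target_voltage)

# Example: the arc voltage reads 0.5 V above an 11.0 V setpoint,
# so the controller requests a small downward correction of the torch.
print(avc_height_correction(11.5, 11.0))  # -> -0.2
```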
Ultrasonic Sensing
The detection principle of ultrasonic weld seam tracking sensors is as follows: ultrasonic waves are emitted by the sensor and, when they reach the surface of the welded workpiece, they are reflected back and received by the ultrasonic sensor. By calculating the time interval between the emission and reception of the ultrasonic waves, the distance between the sensor and the workpiece can be determined. For weld seam tracking, the edge-finding method is used to detect the left and right edge deviations of the weld seam. Ultrasonic sensing can be applied in welding methods such as GTAW and submerged arc welding (SAW) and enables the automatic recognition of the welding workpiece [58,59]. Ultrasonic sensing offers significant advantages in the field of welding, including non-contact measurement, high precision, real-time monitoring, and wide frequency adaptability. By eliminating interference with the welding workpiece and reducing sensor wear, it ensures the accuracy and consistency of weld joints. Furthermore, ultrasonic sensors enable the prompt detection of issues and defects, empowering operators to take timely actions and ensure welding quality. However, there are limitations to ultrasonic sensing, such as high costs, stringent environmental requirements, material restrictions, near-field detection sensitivity, and operational complexity. Therefore, when implementing ultrasonic sensing, a comprehensive assessment of specific requirements, costs, and technological considerations is essential.

Electromagnetic Sensing
Electromagnetic sensors utilize the changes in the currents induced in their sensing coils, which are caused by variations in the currents induced in the surrounding metal near the sensor. This allows the sensor to perceive the position deviations of the welding joint. Dual electromagnetic sensors can detect the offset of the weld seam from the center position of the sensor [60,61]. They are particularly suitable for butt welding processes of structural profiles, especially for detecting position deviations in welding joints with painted surfaces, markings, and scratches. They can also achieve the automatic recognition of gapless welding joint positions. Kim et al. [62] developed dual electromagnetic sensors for the arc welding process of I-shaped butt joints in structural welding. They performed weld seam tracking by continuously correcting the offset of the sensor's position in real time.

Vision Sensor
Vision sensing systems can be divided into active vision sensors and passive vision sensors according to the imaging light source in the vision system. Passive vision sensors are mainly used for extracting welding pool information, analyzing the transfer of molten droplets, recognizing weld seam shapes, and weld seam tracking. In [63], a passive optical image sensing system with secondary filtering capability for the intelligent extraction of aluminum alloy welding pool images was proposed based on spectral analysis, which obtained clear images of aluminum alloy welding pools.
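Returning briefly to the pulse-echo principle from the ultrasonic sensing subsection above, the sensor-to-workpiece distance is simply half of the round-trip path. A minimal numerical sketch, assuming a speed of sound in air of about 343 m/s and an illustrative echo time:

```python
def pulse_echo_distance(round_trip_time_s, speed_of_sound_m_s=343.0):
    """Distance from an ultrasonic pulse-echo measurement.

    The wave travels to the workpiece and back, so the one-way distance
    is half of the speed of sound multiplied by the round-trip time.
    """
    return speed_of_sound_m_s * round_trip_time_s / 2.0

# Example: a 580 microsecond echo corresponds to roughly 0.1 m.
print(f"{pulse_echo_distance(580e-6):.3f} m")
```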
Active vision sensors utilize additional imaging light sources, typically lasers. The principle is to use a laser diode and a CCD camera to form a vision sensor. The red light emitted by the laser diode is reflected from the welding area and enters the CCD camera. The relative position of the laser beam in the image is used to determine the three-dimensional information of the weld seam [64][65][66]. To prevent interference from the complex spectral composition of the welding arc, and to improve the imaging quality, lasers of specific wavelengths can be used to isolate the arc light. Depth calculation methods include Fourier transform, phase measurement, Moiré contouring, and optical triangulation. Essentially, they analyze the spatial light field modulated by the surface of the object to obtain the three-dimensional information of the welded workpiece.

Both passive and active vision sensing systems can achieve two-dimensional or three-dimensional vision for welding control. Two-dimensional sensing is mainly used for weld seam shape recognition and monitoring of the welding pool. Three-dimensional sensing can construct models of important depth information for machine vision [67,68].

Depth Perception Method Based on Computer Vision
Currently, 3D reconstruction has been widely applied in robotics [69], localization and navigation [70], and industrial manufacturing [71]. Figure 4 illustrates the two categories of methods for depth computation. Traditional 3D reconstruction algorithms are based on multi-view geometry. These algorithms utilize image or video data captured from multiple viewpoints and employ geometric calculations and disparity analysis to reconstruct the geometric shape and depth information of objects in 3D space. Methods based on multi-view geometry typically involve camera calibration, image matching, triangulation, and voxel filling steps to achieve high-quality 3D reconstructions.

Figure 5 describes the visual perception for welding robots based on deep learning, including 3D reconstruction. Deep learning algorithms leverage convolutional neural networks (CNNs) to tackle the problem of 3D reconstruction. By applying deep learning models to image or video data, these algorithms can acquire the 3D structure and depth information of objects through learning and inference. Through end-to-end training and automatic feature learning, these algorithms can overcome the limitations of traditional approaches and achieve better performance in 3D reconstruction.

Traditional Methods for 3D Reconstruction Algorithms
Traditional 3D reconstruction algorithms can be classified into two categories according to whether the sensor actively illuminates the objects or not [72]. The active methods emit laser, sound, or electromagnetic waves toward the target objects and receive the reflected waves. The passive methods rely on cameras capturing the reflection of the ambient environment (e.g., natural light), and on specific algorithms to calculate the 3D spatial information of the objects.

In the active methods, by measuring the changes in the properties of the returned light waves, sound waves, or electromagnetic waves, the depth information of the objects can be inferred. The precise calibration and synchronization of hardware devices and sensors are required to ensure accuracy and reliability.
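Optical triangulation, listed above among the depth calculation methods for active laser-camera sensors, reduces to elementary trigonometry once the laser-to-camera baseline and the two viewing angles are known. The following sketch uses an assumed textbook geometry rather than any specific sensor's calibration.

```python
import math

def triangulation_depth(baseline_mm, laser_angle_deg, camera_angle_deg):
    """Depth of a laser spot by triangulation.

    baseline_mm      : distance between the laser emitter and the camera.
    laser_angle_deg  : angle of the projected laser ray, measured from the baseline.
    camera_angle_deg : angle under which the camera sees the laser spot,
                       measured from the baseline on the other side.

    The spot, the laser and the camera form a triangle; the law of sines
    gives the spot's range and its perpendicular depth from the baseline.
    """
    a = math.radians(laser_angle_deg)
    b = math.radians(camera_angle_deg)
    # Distance from the camera to the spot along its viewing ray ...
    range_cam = baseline_mm * math.sin(a) / math.sin(a + b)
    # ... and the spot's perpendicular depth relative to the baseline.
    return range_cam * math.sin(b)

# Example: a 60 mm baseline with both rays at 70 degrees to the baseline.
print(f"{triangulation_depth(60.0, 70.0, 70.0):.1f} mm")
```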
In contrast, for the passive methods, the captured images are processed by algorithms to obtain the objects' 3D spatial information [73,74]. These algorithms typically involve feature extraction, matching, and triangulation to infer the depth and shape information of the objects in the images.

Active Methods
Figure 6 shows schematic diagrams of several active methods. Table 1 summarizes the relevant literature on the active methods.

Table 1. Active approaches in the selected papers.
Method | Description | Reference
Structured light | A parallel CNN transformer network is proposed to achieve improved depth estimation for structured light images in complex scenes. | [79]
Time-of-Flight (TOF) | DELTAR is proposed to enable lightweight Time-of-Flight sensors to measure high-resolution and accurate depth by collaborating with color images. | [80]
Time-of-Flight (TOF) | Based on the principle and imaging characteristics of TOF cameras, a single pixel is considered as a continuous Gaussian source, and its differential entropy is proposed as an evaluation parameter. | [81]
Time-of-Flight (TOF) | Time-of-Flight cameras are presented and common acquisition errors are described. | [82]
Triangulation | A universal framework is proposed based on the principle of triangulation to address various depth recovery problems. | [83]
Triangulation | Laser power is controlled via a triangulation camera in a remote laser welding system. | [84]
Triangulation | A data acquisition system is assembled based on a differential laser triangulation method. | [85]
Laser scanning | The accuracy of monocular depth estimation is improved by introducing 2D plane observations from the remaining laser rangefinder without any additional cost. | [86]
Laser scanning | An online melt pool depth estimation technique is developed for the directed energy deposition (DED) process using a coaxial infrared (IR) camera, laser line scanner, and artificial neural network (ANN). | [87]
Laser scanning | An automatic crack depth measurement method using image processing and laser methods is developed. | [88]
Structured light-a technique that utilizes a projector to project encoded structured light onto the object being captured, which is then recorded by a camera [75]. This method relies on the differences in the distance and direction between the different regions of the object relative to the camera, resulting in variations in the size and shape of the projected pattern. These variations can be captured by the camera and processed by a computational unit to convert them into depth information, thus acquiring the three-dimensional contour of the object [76]. However, structured light has some drawbacks, such as susceptibility to interference from ambient light, leading to poor performance in outdoor environments. Additionally, as the detection distance increases, the accuracy of structured light decreases. To address these issues, current research efforts have employed strategies such as increasing power and changing coding methods [77][78][79].

Time-of-Flight (TOF)-a method that utilizes continuous light pulses and measures the time or phase difference of the received light to calculate the distance to the target [80][81][82]. However, this method requires highly accurate time measurement modules to achieve sufficient ranging precision, making it relatively expensive. Nevertheless, TOF is able to measure long distances with minimal ambient light interference. Current research efforts focus on reducing the cost of time measurement modules while improving algorithm performance. The goal is to lower the cost by improving the manufacturing process of the time measurement module and to enhance the ranging performance through algorithm optimization.

Triangulation method-a distance measurement technique based on the principles of triangulation. Unlike other methods that require precise sensors, it has a lower overall cost [83][84][85]. At short distances, the triangulation method can provide high accuracy, making it widely used in consumer and commercial products such as robotic vacuum cleaners. However, the measurement error of the triangulation method is related to the measurement distance: as the measurement distance increases, the measurement error also gradually increases. This is inherent to the principles of triangulation and cannot be completely avoided.

Laser scanning method-an active visual 3D reconstruction method that utilizes the interaction between a laser beam emitted by a laser device and the target surface to obtain the object's three-dimensional information. This method employs laser projection and laser ranging techniques to capture the position of laser points or lines and calculate their three-dimensional coordinates, enabling accurate 3D reconstruction. Laser scanning offers advantages such as high precision, adaptability to different lighting conditions, and real-time data acquisition, making it suitable for complex shape and detail reconstruction [82]. However, this method has longer scanning times for large objects, higher equipment costs, and challenges in dealing with transparent, reflective, or multiply scattering surfaces. With further technological advancements, laser scanning holds vast application potential in engineering, architecture, cultural heritage preservation, and other fields. However, limitations still need to be addressed, including time, cost, and adaptability to special surfaces [86][87][88].

Passive Methods
Figure 7 displays schematic diagrams of several passive methods. Table 2 summarizes the relevant literature on passive methods.
Monocular vision-a visual depth recovery technique that uses a single camera as the capturing device. It is advantageous due to its low cost and ease of deployment. Monocular vision reconstructs the 3D environment using the disparity in a sequence of continuous images. Monocular vision depth recovery techniques include photometric stereo [89], texture recovery [90], shading recovery [91], defocus recovery [92], and concentric mosaic recovery [93]. These methods utilize variations in lighting, texture patterns, brightness gradients, focus information, and concentric mosaics to infer the depth information of objects. To improve the accuracy and stability of depth estimation, some algorithms [94,95] employ depth regularization and convolutional neural networks for monocular depth estimation. However, using monocular vision for depth estimation and 3D reconstruction has inherent challenges. A single image may correspond to multiple real-world physical scenes, making it difficult to estimate depth and achieve 3D reconstruction solely based on monocular vision methods.

Binocular/Multi-view Vision-an advanced technique based on the principles of stereo geometry. It utilizes the images captured by the left and right cameras, after rectification, to find corresponding pixels and recover the 3D structural information of the environment [96]. However, this method faces the challenge of matching the images from the left and right cameras, as inaccurate matching can significantly affect the final imaging results of the algorithm. To improve the accuracy of matching, multi-view vision introduces a configuration of three or more cameras to further enhance the precision of matching [97]. This method has notable disadvantages, including longer computation time and poorer real-time performance [98].

RGB-D Camera-Based-in recent years, many researchers have focused on utilizing consumer-grade RGB-D cameras for 3D reconstruction. For example, Microsoft's Kinect V1 and V2 products have made significant contributions in this area. The KinectFusion algorithm, proposed by Izadi et al. [99] in 2011, was a milestone in achieving real-time 3D reconstruction with RGB-D cameras. Subsequently, algorithms such as DynamicFusion [100], ReFusion [101], and BundleFusion [102] have emerged, further advancing the field [103]. These algorithms have provided new directions and methods for using RGB-D cameras.
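For the rectified binocular setup described above, the depth of a matched pixel follows from the standard pinhole-stereo relation Z = f * B / d. A minimal sketch with assumed example values for focal length, baseline and disparity:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth of a point from rectified stereo images.

    focal_px     : focal length expressed in pixels.
    baseline_m   : distance between the two camera centres in metres.
    disparity_px : horizontal shift of the corresponding pixel between
                   the left and right images, in pixels.

    For rectified cameras the depth along the optical axis is
    Z = focal * baseline / disparity.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# Example: 800 px focal length, 12 cm baseline, 32 px disparity -> 3 m.
print(depth_from_disparity(800.0, 0.12, 32.0))
```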
Deep Learning-Based 3D Reconstruction Algorithms
In the context of deep learning, image-based 3D reconstruction methods leverage large-scale data to establish prior knowledge and transform the problem of 3D reconstruction into an encoding and decoding problem. With the increasing availability of 3D datasets and improvements in computational power, deep learning 3D reconstruction methods can reconstruct the 3D models of objects from single or multiple 2D images without the need for complex camera calibration. This approach utilizes the powerful representation capabilities and data-driven learning of deep learning, bringing significant advancements and new possibilities to the field of image-based 3D reconstruction. Figure 8 illustrates schematic diagrams of several deep learning-based methods.

In 3D reconstruction, there are primarily four types of data formats: (1) The depth map is a two-dimensional image that records the distance from the viewpoint to the object for each pixel. The data is represented as a grayscale image, where darker areas correspond to closer regions. (2) Voxels are the 3D analogue of 2D pixels and are used to represent volume elements in 3D space. Each voxel can contain 3D coordinate information as well as other properties such as color and reflectance intensity. (3) Point clouds are composed of discrete points, where each point carries 3D coordinates and additional information such as color and reflectance intensity. (4) Meshes are two-dimensional structures composed of polygons and are used to represent the surface of 3D objects. Mesh models have the advantage of convenient computation and can undergo various geometric operations and transformations.

Table 3. Approaches based on deep learning in the selected papers.
Year | Data format | Description | Reference
n/a | Point cloud | A novel point cloud-based multi-view stereo network is proposed, which directly processes the target scene as a point cloud, providing a more efficient representation, especially in high-resolution scenarios. | [111]
2019 | Point cloud | 3D-LMNet is proposed as a latent embedding matching method for 3D reconstruction. | [114]
2023 | Point cloud | A learning-based method called GeoUDF is proposed to address the long-standing and challenging problem of reconstructing discrete surfaces from sparse point clouds. | [115]
2018 | Mesh | Using 2D supervision to perform gradient-based 3D mesh editing operations. | [116]
2018 | Mesh | The state-of-the-art incremental manifold mesh algorithm proposed by Litvinov and Lhuillier has been improved and extended by Romanoni and Matteucci. | [117]
2019 | Mesh | A passive translation-based method is proposed for single-view mesh reconstruction, which can generate high-quality meshes with complex topological structures from a single template mesh with zero genus. | [118]
2020 | Mesh | Pose2Mesh is proposed as a novel system based on graph convolutional neural networks, which can directly estimate the 3D coordinates of human body mesh vertices from 2D human pose estimation. | [119]
2020 | Mesh | By employing different mesh parameterizations, useful modeling priors such as smoothness or composition from primitives can be incorporated. | [120]
2021 | Mesh | A novel end-to-end deep learning architecture is proposed that generates 3D shapes from a single color image. The architecture represents the 3D mesh in graph neural networks and generates accurate geometries by progressively deforming ellipsoids. | [121]
2021 | Mesh | A deep learning method based on network self-priors is proposed to recover complete 3D models consisting of triangulated meshes and texture maps from colored 3D point clouds. | [122]
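The first and third data formats listed above are directly related: a depth map can be back-projected into a point cloud once the camera intrinsics are known. A minimal sketch under the pinhole camera model, with assumed focal lengths and principal point purely for illustration:

```python
import numpy as np

def depth_map_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth map (metres per pixel) into an N x 3 point cloud.

    Uses the pinhole camera model: for a pixel (u, v) with depth Z,
    X = (u - cx) * Z / fx and Y = (v - cy) * Z / fy.
    Pixels with zero depth (no measurement) are dropped.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]

# Example with a tiny synthetic 2 x 2 depth map and assumed intrinsics.
depth = np.array([[1.0, 1.2],
                  [0.0, 1.1]])
cloud = depth_map_to_point_cloud(depth, fx=500.0, fy=500.0, cx=1.0, cy=1.0)
print(cloud.shape)  # (3, 3): three valid pixels, each an (X, Y, Z) point
```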
Figure 8. Deep learning methods based on point clouds [112]. Reprinted with permission from [112].

Voxel-Based 3D Reconstruction
Voxels are an extension of pixels to three-dimensional space and, similar to 2D pixels, voxel representations in 3D space also exhibit a regular structure. It has been demonstrated that various neural network architectures commonly used in the field of 2D image analysis can be easily extended to work on voxel representations. Therefore, when tackling problems related to 3D scene reconstruction and semantic understanding, we can leverage pixel-based representations for research. In this regard, we categorize voxel representations into dense voxel representations, sparse voxel representations, and voxel representations obtained through the conversion of point clouds.

Point Cloud-Based 3D Reconstruction
Traditional deep learning frameworks are built upon 2D convolutional structures, which efficiently handle regularized data structures with the support of modern parallel computing hardware. However, for images lacking depth information, especially under extreme lighting or specific optical conditions, semantic ambiguity often arises. As an extension to 3D data, 3D convolution has emerged to naturally handle regularized voxel data. However, compared to 2D images, the computational resources required for processing voxel representations grow exponentially. Additionally, 3D structures exhibit sparsity, resulting in significant resource waste when using voxel representations. Therefore, voxel representations are no longer suitable for large-scale scene analysis tasks. On the contrary, point clouds, as an irregular representation, can straightforwardly and effectively capture sparse 3D data structures, playing a crucial role in 3D scene understanding tasks. Consequently, point cloud feature extraction has become a vital step in the pipeline of 3D scene analysis and has achieved unprecedented development.

Mesh-Based 3D Reconstruction
Mesh-based 3D reconstruction methods are techniques used for reconstructing three-dimensional shapes. This approach utilizes a mesh structure to describe the geometric shape and topological relationships of objects, enabling the accurate modeling of the objects. In mesh-based 3D reconstruction, the first step is to acquire the surface point cloud data of the object. Then, through a series of operations, the point cloud data is converted into a mesh representation. These operations include mesh topology construction, vertex position adjustment, and boundary smoothing. Finally, by optimizing and refining the mesh, an accurate and smooth 3D object model can be obtained.
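Conversely, the voxel representations discussed above can be obtained from a point cloud by quantizing coordinates into a regular grid, which is the kind of input that voxel-based reconstruction networks consume. A minimal sketch of that conversion, with an assumed grid origin, resolution and extent:

```python
import numpy as np

def voxelize(points, voxel_size, grid_shape, origin=(0.0, 0.0, 0.0)):
    """Convert an N x 3 point cloud into a dense boolean occupancy grid.

    Each point is assigned to the voxel containing it; points that fall
    outside the grid are ignored.
    """
    idx = np.floor((points - np.asarray(origin)) / voxel_size).astype(int)
    inside = np.all((idx >= 0) & (idx < np.asarray(grid_shape)), axis=1)
    grid = np.zeros(grid_shape, dtype=bool)
    grid[tuple(idx[inside].T)] = True
    return grid

# Example: two points, 0.1 m voxels, a 4 x 4 x 4 grid.
pts = np.array([[0.05, 0.05, 0.05],
                [0.35, 0.15, 0.25]])
occ = voxelize(pts, voxel_size=0.1, grid_shape=(4, 4, 4))
print(int(occ.sum()))  # -> 2 occupied voxels
```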
Mesh-based 3D reconstruction methods offer several advantages. The mesh structure preserves the shape details of objects, resulting in higher accuracy in the reconstruction results. The adjacency relationships within the mesh provide rich information for further geometric analysis and processing. Additionally, mesh-based methods can be combined with deep learning techniques such as graph convolutional neural networks, enabling advanced 3D shape analysis and understanding.

Robotic Welding Sensors in Industrial Applications
The development of robotic welding sensors has been rapid in recent years, and their application in various industries has become increasingly widespread [123][124][125]. These sensors are designed to detect and measure various parameters such as temperature, pressure, speed, and position, which are crucial for ensuring consistent and high-quality welds. The combination of various sensors enables robotic welding machines to better perceive the welding object and control the robot to reach places that are difficult or dangerous for humans to access. As a result, robotic welding machines have been widely applied in various industries, including shipbuilding, automotive, mechanical manufacturing, aerospace, railroad, nuclear, PCB, construction, and medical equipment, due to their ability to improve the efficiency, accuracy, and safety of the welding process. Table 4 summarizes the typical applications of welding robot vision sensors in different fields.

Table 4. Research on sensor technologies for welding robots in different industrial fields.
Area | Key Technology | Description | References
Shipyard | - | Human-machine interaction mobile welding robots successfully remotely produced welds. | [126]
Shipyard | Ship welding robot system | A ship welding robot system was developed for welding process technology. | [127]
Shipyard | Super flexible welding robot | A super flexible welding robot module with 9 degrees of freedom was developed. | [128]
Shipyard | Welding vehicle and six-axis robotic arm | A new type of welding robot system was developed. | [129]
Automobile | Multi-robot welding system | An extended formulation of the design and motion planning problems for a multi-robot welding system was proposed. | [130]
Automobile | Robot-guided friction stir welding gun | A new type of robot-guided friction stir welding gun technology was developed. | [131]
Automobile | Friction welding robot | A redundant 2UPR-2RPU parallel robotic system for friction stir welding was proposed. | [132]
Automobile | Arc welding robot | A motion navigation method based on feature mapping in a simulated environment was proposed. The method includes initial position guidance and weld seam tracking. | [133]
Machinery | Visual system calibration program | A visual system's calibration program was proposed and the position relationship between the camera and the robot was obtained. | [134]
Machinery | Robot system for welding seawater desalination pipes | A robotic system for welding and cutting seawater desalination pipes was introduced. | [135]
Aerospace | Aerospace friction stir welding robot | By analyzing the system composition and configuration of the robot, the loading conditions of the robot's arm during the welding process were accurately simulated, and the simulation results were used for strength and fatigue checks. | [136]
Aerospace | New type of friction stir welding robot | An iterative closest point algorithm was used to plan the welding trajectory for the most complex petal welding conditions. | [137]
Aerospace | Industrial robot | Using industrial robots for the friction stir welding (FSW) of metal structures, with a focus on the assembly of aircraft parts made of aluminum alloy. | [138]
Railway | Industrial robot | The system was developed and implemented based on a three-axis motion device and a visual system composed of a camera, a laser head, and a band-pass filter. | [139]
Railway | Rail welding path grinding robot | A method for measuring and reconstructing a steel rail welding model was proposed. | [140]
Railway | Industrial robot | Automation in welding production for manufacturing railroad car bodies was introduced, involving friction stir welding, laser welding, and other advanced welding techniques. | [141]
Nuclear | New type of underwater welding robot | An underwater robot for the underwater welding of cracks in nuclear power plants and other underwater scenarios was developed. | [142]
Nuclear | Robot TIG welding | Manual and robotic TIG welding used in key nuclear industry manufacturing was compared. | [143]
PCB | Flexible PCB welding robot | A deep learning-based automatic welding operation scheme for flexible PCBs was proposed. | [144]
PCB | - | The optimized PCB welding sequence was crucial for improving the welding speed and safety of robots. | [145]
Construction | Steel frame structure welding robot | Two welding robot systems were developed to rationalize the welding of steel frame structures (1997). | [146]
Construction | Steel frame structure welding robot | The adaptive tool path of the robot system enabled the robot to generate welds at complex approach angles, thereby increasing the potential of the process (2020). | [147]
Medical equipment | Surgical robot performing remote welding | The various challenges of using surgical robots equipped with digital cameras for remote welding, used to observe welding areas, especially the difficulty of detecting weld pool boundaries, were described (2020). | [148]
Medical equipment | Intelligent welding system for human soft tissue | By combining manual welding machines with automatic welding systems, intelligent welding systems for human soft tissue welding could be developed in medicine (2020). | [149]

In the shipbuilding and automotive industries, robotic welding vision sensors play a crucial role in ensuring the quality and accuracy of welding processes [126][127][128][129][130][131][132][133]. These sensors are designed to detect various parameters such as the thickness and shape of steel plates, the position and orientation of car parts, and the consistency of welds. By using robotic welding vision sensors, manufacturers can improve the efficiency and accuracy of their welding processes, reduce the need for manual labor, and ensure that their products meet the required safety and quality standards. Figure 9 shows the application of welding robots in shipyards. Figure 10 shows the application of welding robots in automobile factories.

Figure 10. Welding robot for automobile door production [133]. Reprinted with permission from [133].
In other fields, robotic welding vision sensors can address complex, difficult-to-reach, and hazardous welding scenarios through visual perception [134][135][136][137][138][139][140][141][142][143][144][145][146][147][148][149]. By accurately detecting, recognizing, and modeling the object to be welded, the sensors can comprehensively grasp the structure, spatial relationships, and positioning of the object, facilitating the precise control of the welding torch and ensuring optimal welding results. The versatility of robotic welding vision sensors enables them to adapt to various environmental conditions, such as changing lighting conditions, temperatures, and distances. They can also be integrated with other sensors and systems to enhance their performance and functionality.

The use of robotic welding vision sensors offers several advantages over traditional manual inspection methods. Firstly, they can detect defects and inconsistencies in real time, allowing immediate corrective action to be taken, which reduces the likelihood of defects and improves the overall quality of the welds. Secondly, they can inspect areas that are difficult or impossible for human inspectors to access, such as the inside of pipes or the underside of car bodies, ensuring that all welds meet the required standards, regardless of their location. Furthermore, robotic welding vision sensors can inspect welds at a faster rate than manual inspection methods, allowing increased productivity and efficiency [150]. They also reduce the need for manual labor, which can be time-consuming and costly. Additionally, the use of robotic welding vision sensors can help to improve worker safety by reducing the need for workers to operate in hazardous environments [151].

We have analyzed the experimental results reported in the literature for actual work environments. In reference [144], the weighting function of the position error in the image space transitioned from 0 to 1, and after active control, the manipulation error was reduced to less than 2 pixels. Reference [147] utilized tool path adaptation and adaptive strategies in a robotic system to compensate for inaccuracies caused by the welding process. Experiments have demonstrated that robotic systems can operate within a certain range of outward angles, in addition to multiple approach angles of up to 50 degrees. This adaptive technique has enhanced existing structures and repair technologies through incremental spot welding.
In summary, robotic welding vision sensors play a crucial role in assisting robotic welding systems to accurately detect and recognize the objects to be welded, and then guide the welding process to ensure optimal results. These sensors utilize advanced visual technologies such as cameras, lasers, and computer algorithms to detect and analyze the object's shape, size, material, and other relevant features. They can be integrated into the robotic welding system in various ways, such as by mounting them on the robot's arm or integrating them into the welding torch itself. The sensors provide real-time information to the robotic system, enabling it to adjust welding parameters such as speed, pressure, and heat input to optimize weld quality and consistency [152]. Customized approaches are crucial when applying welding robots across different industries. The automotive, aerospace, and shipbuilding sectors face unique welding challenges that require tailored solutions. Customized robot designs, specialized parameters, and quality control should be considered to ensure industry-specific needs are met.

Existing Issues, Proposed Solutions, and Possible Future Work
Visual perception in welding robots encounters a myriad of challenges, encompassing variability in object appearance, intricate welding processes, restricted visibility, sensor interference, processing limitations, knowledge gaps, and safety considerations. Overcoming these hurdles requires the implementation of cutting-edge sensing and perception technologies, intricate software algorithms, and meticulous system integration. This section discusses the current issues, potential solutions, and future prospects of visual perception within the field of welding robotics.

In the exploration of deep learning and convolutional neural networks (CNNs) within the realm of robot welding vision systems, it is crucial to recognize the potential of alternative methodologies and assess their suitability in specific contexts. Beyond deep learning, traditional machine learning algorithms can be efficiently deployed in robot welding vision systems. Support vector machines (SVMs) and random forests, for example, emerge as viable choices for defect classification and detection in welding processes. These algorithms typically have a lower computational complexity and can exhibit commendable performance on specific datasets.
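As a sketch of the traditional machine learning option just mentioned, a support vector machine operating on a few hand-crafted image features can act as a lightweight weld defect classifier. The feature names and data below are synthetic placeholders chosen only to show the shape of such a pipeline; scikit-learn is assumed to be available.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for hand-crafted features extracted from weld images,
# e.g. mean intensity, intensity variance and an edge-density measure.
rng = np.random.default_rng(0)
good = rng.normal(loc=[0.6, 0.05, 0.20], scale=0.05, size=(50, 3))
defective = rng.normal(loc=[0.4, 0.15, 0.45], scale=0.05, size=(50, 3))

X = np.vstack([good, defective])
y = np.array([0] * 50 + [1] * 50)  # 0 = acceptable weld, 1 = defect

# A scaler plus an RBF-kernel SVM is a common low-complexity baseline.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X, y)

# Classify the features of a new, unseen weld image.
print(clf.predict([[0.42, 0.14, 0.40]]))  # likely -> [1] (defect)
```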
Rule-based systems can serve as cost-effective and interpretable alternatives for certain welding tasks. Leveraging predefined rules and logical reasoning, these systems process image data to make informed decisions. Traditional computer vision techniques, including thresholding, edge detection, and shape analysis, prove useful for the precise detection of weld seam positions and shapes. Besides CNNs, a multitude of classical computer vision techniques can find applications in robot welding vision systems. For instance, template matching can ensure the accurate identification and localization of weld seams, while optical flow methods facilitate motion detection during the welding process. These techniques often require less annotated data and can demonstrate robustness in specific scenarios. Hybrid models that amalgamate the strengths of different methodologies can provide comprehensive solutions. Integrating traditional computer vision techniques with deep learning allows for the utilization of deep learning-derived features for classification or detection tasks. Such hybrid models prove particularly valuable in environments with limited data availability or high interpretability requirements.

The primary challenges encountered by robotic welding vision systems include the following:
1. Adaptation to changing environmental conditions: robotic welding vision systems often struggle to swiftly adjust to varying lighting, camera angles, and other environmental factors that impact the welding process.
2. Limited detection and recognition capabilities: conventional computer vision techniques used in these systems have restricted abilities to detect and recognize objects, causing errors during welding.
3. Vulnerability to noise and interference: robotic welding vision systems are prone to sensitivity issues concerning noise and interference, stemming from sources such as the welding process, robotic movement, and external factors like dust and smoke.
4. Challenges in depth estimation and 3D reconstruction: variations in material properties and welding techniques contribute to discrepancies in the welding process, leading to difficulties in accurately estimating depth and achieving precise 3D reconstruction.
5. The existing welding setup is intricately interconnected and often space-limited, and the integration of a multimodal sensor fusion system necessitates modifications to accommodate new demands. Effectively handling voluminous data and extracting pertinent information present challenges, requiring preprocessing and fusion algorithms. Integration entails comprehensive system integration and calibration, ensuring seamless hardware and software dialogue for the accuracy and reliability of data.

To tackle these challenges, the following solutions are proposed for consideration:
1. Develop deep learning for object detection and recognition: The integration of deep learning techniques, like convolutional neural networks (CNNs), can significantly enhance the detection and recognition capabilities of robotic welding vision systems. This empowers them to accurately identify objects and adapt to dynamic environmental conditions.
2. Transfer deep learning for welding robot adaptation: leveraging pre-trained deep learning models and customizing them to the specifics of robotic welding enables the vision system to learn and recognize welding-related objects and features, elevating its performance and resilience.
3. Develop multi-modal sensor fusion: The fusion of visual data from cameras with other sensors such as laser radar and ultrasonic sensors creates a more comprehensive understanding of the welding environment. This synthesis improves the accuracy and reliability of the vision system.
4. Integrate models and hardware: Utilizing diverse sensors to gather depth information and integrating this data into a welding-specific model enhances the precision of depth estimation and 3D reconstruction.
5. Perform a comprehensive requirements analysis and system evaluation in collaboration with welding experts to design a multi-modal sensor fusion architecture. Select appropriate algorithms for data extraction and fusion to ensure accurate and reliable results. Conduct data calibration and system integration, including hardware configuration and software interface design. Calibrate the sensors and assess the system performance to ensure stable and reliable welding operations.

Potential future advancements encompass the following:
1. Enhancing robustness in deep learning models: advancing deep learning models to withstand noise and interference will broaden the operational scope of robotic welding vision systems across diverse environmental conditions.
2. Infusing domain knowledge into deep learning models: integrating welding-specific expertise into deep learning models can elevate their performance and adaptability within robotic welding applications.
3. Real-time processing and feedback: developing mechanisms for real-time processing and feedback empowers robotic welding vision systems to promptly respond to welding environment changes, enhancing weld quality and consistency.
4. Autonomous welding systems: integrating deep learning with robotic welding vision systems paves the way for autonomous welding systems capable of executing complex welding tasks without human intervention.
5. Multi-modal fusion for robotic welding: merging visual and acoustic signals with welding process parameters can provide a comprehensive understanding of the welding process, enabling the robotic welding system to make more precise decisions and improve weld quality.
6. Establishing a welding knowledge base: creating a repository of diverse welding methods and materials enables robotic welding systems to learn from this knowledge base and enhance their welding performance and adaptability.

Conclusions
The rapid advancement of sensor intelligence and artificial intelligence has ushered in a new era where emerging technologies like deep learning, computer vision, and large language models are making significant inroads across various industries. Among these cutting-edge innovations, welding robot vision perception stands out as a cross-disciplinary technology, seamlessly blending welding, robotics, sensors, and computer vision. This integration offers fresh avenues for achieving the intelligence of welding robots, propelling this field into the forefront of technological progress.
A welding robot with advanced visual perception should have the following characteristics: accurate positioning and detection capabilities, fast response speed and real-time control, the ability to work in complex scenarios, the ability to cope with different welding materials, and a high degree of human-machine collaboration. Specifically, the visual perception system of the welding robot requires highly accurate image processing and positioning capabilities to accurately detect the position and shape of the welded joint. At the same time, the visual perception system needs fast image processing and analysis capabilities, so that it can perceive and judge the welding scene in real time within a short period and respond correctly and promptly to abnormal situations. Actual welding is usually carried out in a complex environment, with interference factors such as lighting changes, smoke, and sparks. A good visually perceptive welding robot should have a strong ability to adapt to the environment and be able to achieve accurate recognition in complex environments. At the same time, the visual perception system of the welding robot needs to support multi-material welding and adapt to the welding needs of different materials. Finally, with the development of smart factories, the visual perception system of welding robots needs the ability of human-computer interaction and collaboration.

At present, the most commonly used welding robot vision perception solution is based on the combination of a vision sensor and a deep learning model, using depth estimation and three-dimensional reconstruction methods to perceive the depth of the welding structure and obtain its three-dimensional information. Deep learning-based approaches typically use models such as convolutional neural networks (CNNs) to learn depth features in images. By training on a large amount of image data, these networks learn the relationship between parallax, texture, edges, and other image features and depth. From the image collected by the vision sensor, the depth estimation model can output the depth information of the corresponding spatial position in the image. Such a depth model can address the welding robot's need for accurate spatial positioning, so that the attitude and motion trajectory of the welding robot can be controlled.

In conclusion, in the pursuit of research on robot welding vision systems, a balanced consideration of diverse methodologies is essential, with the selection of appropriate methods based on specific task requirements. While deep learning and CNNs wield immense power, their universal applicability is not guaranteed. Emerging or traditional methods may offer more cost-effective or interpretable solutions. Therefore, a comprehensive understanding of the strengths and limitations of different methodologies is imperative, and a holistic approach should be adopted when considering their applications.

Figure 1. A classification of depth perception for welding robots.
Figure 2. Top ten fields and the number of papers in each field. The number of retrieved papers was 2662.
Figure 3. (a) A typical laser vision sensor setup for an arc welding process; (b) a video camera as a vision sensor; (c) a vision sensor with multiple lenses.
Figure 4. A classification of depth computation, which can be broadly divided into traditional methods and deep learning methods.
Figure 5. A schematic of the processing sequence of welding robot vision perception. The welding robot obtains welding images from the vision sensor, processes various welding information through the neural network, and then evaluates and feeds back to correct the welding operation and improve accuracy.
Figure 6. Depth perception based on a laser line scanner and coaxial infrared camera for the directed energy deposition (DED) process. Additional explanations for the symbols and color fields can be found in [87]. Reprinted with permission from [87].
Table 2. Passive approaches in the selected papers.
12,213.6
2023-12-01T00:00:00.000
[ "Engineering", "Computer Science" ]
Gastric Cancer: Epidemiology, Risk Factors, Classification, Genomic Characteristics and Treatment Strategies
Gastric cancer (GC) is one of the most common malignancies worldwide and it is the fourth leading cause of cancer-related death. GC is a multifactorial disease, where both environmental and genetic factors can have an impact on its occurrence and development. The incidence rate of GC rises progressively with age; the median age at diagnosis is 70 years. However, approximately 10% of gastric carcinomas are detected at the age of 45 or younger. Early-onset gastric cancer is a good model to study genetic alterations related to the carcinogenesis process, as young patients are less exposed to environmental carcinogens. Carcinogenesis is a multistage disease process specified by the progressive development of mutations and epigenetic alterations in the expression of various genes, which are responsible for the occurrence of the disease.

Introduction
The cancer development process is driven by both genetic and environmental influences. Around 50% of cancer incidents may be provoked by environmental agents, mostly dietary habits and social behavior. The development and progression of tumors is a multiannual and multistage process. Cancer usually occurs after 20-30 years of exposure to damaging carcinogenic agents. Modern medicine allows better recognition of most cancers, even in their advanced stages, where radical resection enables recovery in around 50% of cases.

Gastric cancer (GC) is a multifactorial disease, where many factors, both environmental and genetic, can influence its development [1]. Current statistics show GC as the fourth leading cause of cancer deaths worldwide, with a median survival of less than 12 months for the advanced stage [2]. Gastric carcinoma is a highly aggressive malignancy of a heterogeneous nature and still constitutes a global health problem [3]. That is why prevention, understood as a proper diet, early diagnosis and appropriate follow-up treatment, leads to a reduction in recorded incidence [4]. GC is rather rare and not prevalent in the young population (under 45 years of age), where no more than 10% of patients develop the disease [5][6][7][8][9].

A family history of GC is also one of the most crucial risk factors [23]. However, GCs are mostly sporadic; around 10% display a familial aggregation [24]. Inherited GCs with a Mendelian inheritance pattern encompass less than 3% of all gastric carcinomas [25]. Hereditary diffuse gastric cancer (HDGC) is the most recognizable familial GC, which is caused by cadherin 1 gene (CDH1) alterations. The risk of gastric carcinoma in patients with a family history is around three-fold higher than among individuals without such a history [26]. The number of available studies on GC incidence and family history is rather low; a family history has been noted for around 11% of individuals undergoing health check-ups [27]. The ratio of GC with a family history is greater in Asian regions than in Europe and North America; however, the frequency of HDGC, in comparison to the incidence of familial gastric carcinogenesis in Asia, is rather low [28].
Therefore, environmental agents, more than genetic alterations, can affect the development of familial GC in countries with an increased incidence of the disease.

The correlation between dietary factors and the risk of GC development has been broadly studied. The World Cancer Research Fund/American Institute for Cancer Research (WCRF/AICR) summarized that fruit and vegetables are protective against GC development, whereas broiled and charbroiled animal meats, salt-preserved foods and smoked foods probably enhance GC progression [29]. Food carcinogens might interact with gastric epithelial cells and provoke changes in genes and their expression. Interestingly, a high intake of sodium chloride was described as devastating the gastric mucosa, promoting cell death and regenerative cell proliferation in animal models [30]. Dietary and endogenous N-nitroso compounds have been shown to significantly increase gastrointestinal cancer risk, mostly among non-cardia GCs [31].

Among the variety of habits which play a role in GC development, the impact of smoking and alcohol intake has been considered. Studies show that, among non-drinkers, smokers display around an 80% increase in the risk of GC development. Additionally, heavy drinkers show a higher risk of GC; in the group of smokers, the increase in risk is estimated to be 80% [32]. In the European prospective nutrition cohort study, 444 cases of GC were examined; heavy alcohol intake at baseline was positively correlated with GC risk, whereas a decreased intake was not [33]. Intestinal non-cardia carcinoma was accompanied by heavy alcohol consumption.
The dependence between alcohol intake and the risk of GC development was studied in a Korean population showing the ALDH2 genotype [34]. Among a group of patients with ALDH2*1/*2 carriers, current/ex-drinkers displayed a higher probability for cancer development in comparison to the group of never/rare drinkers. The study showed the association for alcohol consumption and GC development among a group of patients with ALDH2 polymorphisms and the ALDH2*1/*2 genotype. Helicobacter pylori (H. pylori) is a Gram-negative bacterium that has been described as a class I carcinogen of GC development by the World Health Organization since 1994 [35]. The effect of H. pylori on the oncogenesis process has been described by two main mechanisms: an indirect inflammatory reaction to H. pylori infection on the gastric mucosa and a direct epigenetic outcome of H. pylori on gastric epithelial cells [36]. Several virulence factors of H. pylori, like CagA or VacA, are noted to increase the risk of GC development [37]. H. pylori with cagA and vacA relate to a higher risk of developing both intense tissue responses and premalignant and malignant lesions in the distal stomach [38]. Multiple epidemiological studies have shown that H. pylori infection is one of the risk factors of GC development. Besides, H. pylori infection impairs the gastric tissue microenvironment, promoting epithelial-mesenchymal transition (EMT) and further GC progression [39,40]. Apart from H. pylori infection, the second factor associated with GC development is the Epstein-Barr virus (EBV). EBV is a ubiquitous infectious factor. The EBV genome subsists in the tumor cells and transforming EBV proteins are expressed among them [41]. About 10% of GCs have been described to be EBV-positive, but there is not enough evidence for a distinct etiological role of EBV in GC development [42]. EBV-positive gastric carcinomas differ due to patients' characteristics, like sex, age or anatomic subsite, and decrease with age among males [43]. Classification Systems in Gastric Cancer In 1965 the Lauren classification of GC was established, and nowadays it is the most frequently used, compared to other available GC classifications [10]. According to the Lauren division, two histological subtypes of GC can be distinguished-intestinal and diffuse; later the indeterminate type was also included to characterize infrequent histology. Signet ring cell carcinoma is assigned to the diffuse subtype. Multiple studies have shown that the intestinal type is the most common, the second is diffuse and ending with the indeterminate type [10]. Intestinal carcinoma is characterized by visible glands and cohesion between tumor cells. The diffuse subtype encompasses poorly cohesive cells, diffusely infiltrating the gastric wall with little or no gland formation. The cells are usually small and round, also with a signet ring cell formation. There is evidence that the intestinal subtype is associated with intestinal metaplasia of the gastric mucosa and the occurrence of H. pylori infection. Some studies also revealed that the incidence of the diffuse GC subtype is higher among females and younger patients, and that this type of GC originates from the normal gastric mucosa [44]. The World Health Organization (WHO) classification issued in 2010 is perceived to be the most detailed among all classification systems. The WHO classification, apart from stomach adenocarcinomas, also describes other types of gastric tumors with decreased attendance [45]. 
The gastric adenocarcinoma type includes multiple subgroups, like tubular, mucinous, papillary and mixed carcinoma, which are similar to the indeterminate type according to the Lauren classification system. The poorly cohesive carcinoma type contains the signet ring cell carcinoma. The remainder of the classified gastric adenocarcinomas are described as uncommon, mainly because of their low clinical importance. Following the WHO classification, the most common GC subtype is tubular adenocarcinoma, followed by the papillary and mucinous types. The signet ring cell carcinoma encompasses around 10% of GCs and is defined by the occurrence of signet ring cells in over 50% of the tumor [44][45][46][47]. GC development onsets are presented in Figure 2, where the percentage of each carcinoma is displayed. Conventional Gastric Cancer Gastric carcinomas that appear sporadically mostly occur among the older population, at over 45 years of age, and are so-called "conventional gastric cancers". The genetic factors that cause cancer development are less important in this type of cancer, where environmental agents are prevalent [48]. Patients are diagnosed between 60 and 80 years of age. These gastric carcinomas affect mostly men, who are two times more likely to develop them than women [49,50]. Early-Onset Gastric Cancer Early-onset gastric cancer (EOGC) is described as a GC occurring at the age of 45 years or younger. Around 10% of GCs are categorized as EOGCs; however, reported rates differ between 2.7% and 15%, depending on the cohort study [14]. In the young population, diffuse lesions are more frequent and they are related to a background of histologically "normal" gastric mucosa. Young patients are less exposed to environmental carcinogens; therefore, EOGC is a good model to study genetic alterations in the gastric carcinogenesis process [51]. H. pylori infection is important for the development of tumors in EOGC patients; however, there is no statistically significant difference in the distribution of IL1β polymorphisms between young and old patients [9]. EBV infection is observed to be markedly decreased or absent in EOGCs [52]. It is postulated that around 10% of EOGCs have a positive family history [53]. It has been revealed that the early-onset type has different clinicopathological characteristics compared to the conventional subtype, which suggests that they represent separate models of gastric carcinogenesis, a view supported by molecular patterns [54]. Gastric Stump Cancer Gastric stump cancer (GSC) is described as a carcinoma in the gastric remnant after partial gastric resection, usually due to peptic ulcer disease (PUD). The incidence of GSC varies from 1% to 8% [55]. The major pathogenesis of GSC is biliary pancreatic reflux, provoking chronic inflammation of the mucosa, followed by atrophic gastritis, intestinal metaplasia and dysplasia. Other possible causes are achlorhydria and bacterial overgrowth, with H. pylori appearing to be the main agent involved in the etiopathogenesis of GSC [56]. 
The surveillance of these patients with endoscopy and biopsies might allow for the early diagnosis of these patients, however, the benefit to cost ratio is still to be considered. Viste et al. (1986), made a comparison of GSC patients with other GC patients and discovered relevant differences in gender, age, staging, resectability rates and operative procedures, however, the postoperative mortality and survival rates were approximate [57]. Hereditary Diffuse Gastric Cancer Hereditary diffuse gastric cancer (HDGC) is an autosomal dominant susceptibility for diffuse GC, a weakly differentiated adenocarcinoma that penetrates into the stomach wall leading to the thickening of the wall, usually without producing an explicit mass. The median age of HDGC onset is around 38 years, with a range of 14-69 years [58]. HDGC should be considered for screening with several important symptoms, like two or more documented cases of diffuse GC in first-or second-degree relatives, with at least one diagnosed before the age of 50, or three or more cases of documented diffuse gastric cancer in first/second-degree relatives, independent of the age of onset [59]. When clinical characteristics and family history are insufficient, the identification of a heterozygous germline CDH1 pathogenic variant using screening with available genetic tests checks out the diagnosis and enables for family research [60,61]. Among CDH1 mutation-negative patients within HDGC families, there were displayed candidate mutations within genes of high and moderate penetrance, like: BRCA2, STK11, ATM, SDHB, PRSS1, MSR1, CTNNA1 and PALB2 [62]. Therefore, in HDGC families, with no detected alterations in the CDH1 gene, the clinical importance of other tumor suppressor genes, like CTNNA1, should be considered. CTNNA1 is concerned in intercellular adhesion and is a questionable tumor suppressor gene for HDGC. The group discovered a novel variant (N1287fs) in the BRCA2 gene, which is the first report of the occurrence of a truncating BRCA2 variant among HDGC families. That is why it is important to consider HDGC syndrome as associated to CDH1 mutations and closely related genes, then consider the clinical criteria of families with heterogeneous susceptibility profiles. Genomic Characteristics of Gastric Cancer Development Many studies on the molecular biomarkers of GC have been broadly investigated to reveal the wide spectrum of recognition patterns in this field. The main signatures for GC disease development encompass the modules of HER2 expression, factors that regulate apoptosis, cell cycle regulators, factors that influence cell membrane properties, multidrug resistance proteins and microsatellite instability [63], which are presented in Table 1. Table 1. Molecular biomarkers in gastric cancer development. HER2 -Amplification and overexpression in GC, the positive cases range from 6% to 30%. -HER2/neu amplification is higher in the intestinal histologic subtype of GC, compared to the diffuse subtype, and is not associated with gender and age, but with the poor survival of GC patients. [64,65] p53 -Mutations in the p53 gene occur in the early stages of gastric carcinoma, and their frequency is increased in advanced stages of cancer development. -TP53-positive patients are also classified as one of the GC subtypes. [66,67] PD1 -The expression of PDL1 is significantly increased in cases with PCNA and C-met expression, EBV-positive, and without metastasis; a better outcome is associated with increased PD-L1/PD-1 expression. 
[68] p73 -The p73 gene is not an object of genetic modification in gastric carcinogenesis, wild-type p73 is quite often highly expressed in GC tissues by transcriptional induction of an active allele or the activation of a silent allele. [69] mdm2 The expression level of the MDM2 protein is importantly increased in intestinal metaplasia and gastric carcinomas in comparison to simple intestinal metaplasia and chronic gastritis. [70] Bcl-2 Lymph node metastases, depth of invasion and the negative expression of Bcl-2 are associated with an increased chance of cancer recurrence. [71] pRb CCND1 -Cyclin D1 is a positive regulator of the cell cycle process; retinoblastoma protein (pRb) acts as cell cycle repressor, it promotes G1/S arrest and growth restriction through the inhibition of the E2F transcription factors; their higher expression is merged with cell overgrowth and cancer development. -The expression of pRb and cyclin D1 might be present in the early stages of gastric carcinogenesis, with the higher expression of Rb and cyclin D1 among nonneoplastic mucosa comprising dysplasia, intestinal metaplasia, atrophy and gastritis to carcinoma. [72,73] p16 The p16 gene plays a main role as a tumor suppressor gene, the deletion of the p16 gene is associated with the carcinogenesis process, as well as the progression of gastric carcinoma. [74] p27 Kip1 Cyclin-dependent kinase inhibitor 1B, called p27 Kip1 with low protein expression in GC, is assigned to advanced tumors, it is importantly higher in weakly differentiated cases and is described as a negative prognostic factor for the survival of patients. [75] MUC Mucins are a group of extracellular, huge molecular weight, strongly glycosylated proteins; they have significant characteristics assigned to cell signalling, the creation of chemical barriers, facilities to create a gel, a major function related to lubrication. One of their main roles is also as an inhibitory function, and the high expression of mucin proteins, like MUC1, MUC2, MUC5AC and MUC6 is associated with gastric carcinogenesis process. [76,77] MRP2 The overexpression of MRP2 is significant in the initial absence of reaction to chemotherapy treatments of tumors, which allow us to consider it as an important biomarker for chemotherapy response. [78] MDR1 MDR1 is a very significant candidate gene in the progress of GC susceptibility, as well as displaying an important impact on drug resistance response, and the knockdown of MDR1 might reverse this phenotype among GC cells. [79,80] GST-P The expression of GST-P is visibly increased in tumors that are chemically induced, it is also associated with tumor invasion and recurrence, as well as poor prognosis. Possible Biomarkers of Gastric Cancer Carbohydrate antigen 19-9 (CA 19-9) is the serum tumor marker most commonly used in cases of pancreatic cancer diagnosis or therapy monitoring. Physiologically, the serum concentration of CA 19-9 is small (less than 37 U/mL), being overexpressed in inflammatory conditions (e.g., pancreatitis) or other gastrointestinal diseases (esophageal, gastric or biliary cancers) [85]. The utility of CA 19-9 as a diagnostic biomarker of GC is slightly controversial and the results of the studies usually remain contradictory. Feng et al. reported that increased levels of CA 19-9 are associated with female gender and the presence of lymph node metastasis [86]. CA 19-9 might be associated with the tumor depth, tumor stage and lymph node metastasis in GC patients [87,88]. 
Besides, serum CA 19-9 levels are of greater diagnostic importance than CEA regarding the estimation of tumor size [89]. Serum levels of CA 19-9 are higher in GC patients compared to those with benign gastric diseases [90]. Increased CA 19-9 concentrations can also constitute a marker of early recurrence after curative gastrectomy for GC, as well as of possible peritoneal dissemination [91,92]. Increased serum CA 19-9 and CA 72-4 levels are associated with an increased mortality rate among GC patients [93]. Song et al. reported that increased CA 19-9 levels are primarily observed in cases of stage III/IV GC relative to stage I/II [94]. Usually, single tumor markers are not sufficiently sensitive and specific; therefore, the combined detection of several markers is necessary. In cases of GC, serum CA 19-9, carcinoembryonic antigen (CEA), carbohydrate antigen 72-4 (CA 72-4) and carbohydrate antigen 15-3 (CA 15-3) are important for early GC diagnosis and therapy monitoring [95,96]. Prevention Strategies for Gastric Cancer The two main primary prevention activities for gastric carcinoma at a population level could encompass better dietary habits and a lowering of the occurrence of H. pylori infection, the major cause of GC. The secondary prevention strategy is early detection using available resources, mainly the endoscopic method, as a gold standard. Improvement in Diet Prevention through dietary intervention might be possible through a higher intake of fresh fruit and vegetables and the restricted consumption of salt and salt-preserved food. Lifestyle modifications, including a higher level of physical activity and smoking limitation, could also reduce the risk of the disease. Fruit and vegetables are rich sources of folate, carotenoids, vitamin C and phytochemicals, which might have a protective role in the carcinogenesis process [97]. In the European Prospective Investigation into Cancer and Nutrition, 330 GC patients, both men and women, were examined [98]. A preventive role of vegetable consumption was demonstrated, mostly for the intestinal type of GC. Citrus fruit intake could play a role in protection against gastric cardia cancer. A subsequent report by the International Agency for Research on Cancer (IARC) described that the increased consumption of fruit "probably", and a higher intake of vegetables "possibly", reduces the risk of GCs [99]. Helicobacter pylori Eradication The prevention of GC development through H. pylori eradication is another approach. The recognition of the bacterium as a disease-causing factor led some authors, by 2005, to call for programs to eradicate the infection in the population as a way to limit disease development [100]. A meta-analysis conducted by Ford et al. (2014) provides limited, moderate-quality evidence that H. pylori eradication reduces the incidence of GC in healthy, asymptomatic, infected Asian individuals; however, these results cannot necessarily be extrapolated to other populations [101]. In the Shandong Intervention Trial, after two weeks of antibiotic dosing for H. pylori, the prevalence of precancerous gastric lesions decreased, while 7.3 years of oral supplementation with garlic extract, selenium and vitamins C and E did not [102]. In a prospective trial, the eradication of H. pylori after the endoscopic resection of GCs did not lower the incidence of metachronous gastric carcinoma [103]. Fukase et al. (2008) checked the prophylactic effect of H. 
pylori eradication on the development of metachronous gastric carcinoma after the endoscopic resection of early GC [104]. The study confirmed that the prophylactic eradication of H. pylori after the endoscopic resection of early GC should be used to prevent the development of metachronous gastric carcinoma. Although the randomized trials showed that H. pylori treatment might decrease GC incidence by 30-40%, there are still significant limitations to the available data [104]. Early Detection Importance The early detection of GC requires financial and population support, as well as available health services. Several tests are recommended and have been used in various countries for GC screening. In Japan, mass screening for gastric carcinoma with a photofluorography method was started in 1960. Currently, over 6 million people are examined each year. The sensitivity and specificity of photofluorography are 70-90% and 80-90%, respectively. The five-year survival rate is 15-30% better among screen-detected cases than in symptom-diagnosed patients [105]. Additionally, endoscopic examination for gastric carcinoma has a higher sensitivity than the radiographic method [106]. In a population study, the sensitivity of the endoscopic method was higher for the detection of distant or regional GC than for localized GC [106]. Upper gastrointestinal endoscopy has been established as the gold standard for the diagnosis of gastric carcinoma [107]. It is also performed for the minimally invasive treatment of early GC by endoscopic mucosal resection and endoscopic submucosal dissection. Matsumoto et al. (2013) evaluated the efficacy of radiographic and endoscopic examination for GC patients and suggested that both screening methods can help to avoid the development of gastric carcinoma [108]. Hamashima et al. (2013) evaluated the reduction of GC mortality achieved by endoscopic examination. The results showed a 30% reduction in GC mortality with endoscopic screening, in comparison to the non-screened control group, within 36 months before the date of diagnosis of GC [109]. Treatment Strategies for Gastric Cancer: Surgical Resection Surgery plays a crucial role as a strategy in the treatment of GC [110]. The best time for surgery is when a tumor is most sensitive to chemotherapy. The development of two new methods, endoscopic resection and minimally invasive access, has had an important impact on treatment strategies in the last few decades [111]. Nevertheless, vertical and horizontal margin invasion and the chance of nodal involvement should also be taken under serious consideration to prevent oncological failures. The standard treatments are endoscopic mucosal resection or, even better, endoscopic submucosal dissection (ESD) for differentiated types of gastric adenocarcinoma without ulcerative findings [112]. Both endoscopic mucosal resection and ESD provide favorable long-term outcomes. Laparoscopic surgery of GCs, as a minimally invasive method, was originally limited to the treatment of distal early GCs, with no necessity for complete gastrectomy or extended lymphadenectomy [113]. Both laparoscopic and robotic-assisted gastrectomies are considered to provide positive clinical outcomes, equivalent to those of open surgeries. Furthermore, compared to open surgeries, minimally invasive techniques have even lower rates of postoperative complications, such as incisional hernias or bowel obstructions [114][115][116]. 
Limited surgical approaches-pylorus-preserving gastrectomy, proximal gastrectomy and local resection-significantly reduce the resection area of the stomach, as well as the extent of nodal dissection [117]. Conversion therapy in GC is an application of either chemotherapy or radiotherapy followed by surgical treatment in cases of originally unresectable or marginally resectable GCs, the application of which might be of great importance, especially in cases of stage IV GCs [118]. Comprehensive surgical resection with lymphadenectomy D2 still constitutes the major treatment strategy aimed at cure for GC. The continuation of chemotherapy is usually crucial after the resection, preventing adverse events. Several reconstruction methods, such as Billroth I gastroduodenostomy, Billroth II gastrojejunostomy, casual/uncut Roux-en-Y gastrojejunostomy and jejunal interposition are often employed after the subtotal gastrectomy [119]. Adjuvant Chemotherapy In the last few decades, multiple phase III trials have been undertaken to consider the potential of adjuvant chemotherapy versus surgery, however, no consistent outcomes have been observed [120][121][122][123]. The observations might be explicated by several important factors, like the huge heterogeneity of the study cohort, a low number of performed series, various levels of surgical precision and dissimilar chemotherapy regimens. A meta-analysis study, performed by the GASTRIC group in 2010, showed that postoperative adjuvant chemotherapy based on fluorouracil regimens significantly reduces the mortality rate of GC patients in comparison to surgery alone [124]. Adjuvant chemotherapy was correlated with a statistically important benefit in terms of overall survival and disease-free survival. There was no distinct heterogeneity for overall survival across randomized clinical trials. Five-year overall survival increased from 49.6% to 55.3% with chemotherapy. An application of oral fluoropyrimidine might also be effective in cases of advanced GCs [125,126]. Likewise, other phase III trials, including the CLASSIC or the ACTS-GC, proved that postoperative adjuvant therapy following D2 gastrectomy is a highly effective treatment strategy [127,128]. An activity of pembrolizumab in the neoadjuvant setting provides a rationale for its application in combination with chemotherapy in patients with resectable GCs [129]. The systematic review and meta-analysis performed by Yan et al. (2007) was undertaken to check the efficiency and safety of adjuvant intraperitoneal chemotherapy for patients with locally advanced resectable GC [130]. The study displayed that hyperthermic intraoperative intraperitoneal chemotherapy (HIIC), with or without early postoperative intraperitoneal chemotherapy (EPIC) after the resection of advanced gastric primary cancer, is assigned to increase the overall survival rate. Unfortunately, higher risks of intra-abdominal abscess and neutropenia are also displayed. Adjuvant XELOX might be a valid approach in curable gastric carcinomas among Asian patients. Nowadays, it is clear that adjuvant chemotherapy brings a survival benefit in radically resected GC for stage ≥ T2 or N+ [131,132]. Neoadjuvant chemotherapy followed by surgery is also highly recommended in cases of limited metastatic GCs [133]. What is also crucial while applying neoadjuvant chemotherapy is the genotype of the GC, which might additionally constitute a prognostic or predictive factor of the clinical outcome. 
Neo-Adjuvant Chemotherapy The importance of neoadjuvant chemotherapy in GC, gastroesophageal junction and lower esophageal adenocarcinoma has been highlighted over the past few decades. In the first Dutch randomized controlled trial of neoadjuvant chemotherapy, patients with proven adenocarcinoma of the stomach were randomized to obtain four series of chemotherapy with 5-fluorouracil, doxorubicin and methotrexate (FAMTX) prior to surgery or to undergo surgery alone. With a median follow-up of 83 months, the median survival after randomization was 18 months in the FAMTX group, versus 30 months in the surgery alone group [134]. In European regions, perioperative chemotherapy has been advertised based on the MAGIC [135] and FFCD9703 [136] randomized trials. In the first trial, Cunningham et al. (2006) investigated tests with epirubicin, cisplatin and infused fluorouracil (ECF) on patients' survival with incurable locally advanced or metastatic gastric adenocarcinomas. Among a group of patients with operable gastric or lower esophageal adenocarcinomas, a perioperative regimen of ECF caused a lowering in tumor size, stage and importantly benefited progression-free and overall survival [135]. Boige et al. (2007) used the combination of 5-Fluorouracil (5FU) in a continuous infusion and cisplatin (FP) as one of the important approaches for advanced adenocarcinoma of the stomach and lower esophagus (ASLE). Preoperative chemotherapy using 5-fluorouracil/cisplatin improved the disease-free and overall survival of patients with ASLE [136]. Radiation therapy uses high-energy rays or particles to kill cancer cells. It is sometimes applied to treat stomach cancer. In the majority of cases, radiation therapy is given with chemotherapy (chemoradiation). Both neo-adjuvant chemoradiation therapy and neo-adjuvant chemotherapy significantly improve the clinical outcomes of patients with resectable GC with a similar efficiency [137]. Targeted Therapy The major therapeutic options, based on the molecular characteristics of the gastric tumor, are ramucirumab and trastuzumab (targeting VEGFR2 and HER2, respectively) [138]. Gastric cancer often displays heterogeneity of the HER2 genotype and phenotype, which might be partly accountable for testing inaccuracy. Phase II trials studied trastuzumab plus chemotherapy (cisplatin, capecitabine) versus chemotherapy alone in HER2+ advanced gastric patients and underlined that trastuzumab is the most appropriate therapeutic approach for strongly HER2+ patients [139,140]. Other studies suggested that lapatinib, as a single targeted therapy, is weakly effective against gastric cancer, which might be explained by the contribution of antibody-dependent cell-mediated cytotoxicity (ADCC), which is lacking in the small molecule therapeutic approach [141]. Pertuzumab is another HER2 monoclonal antibody that interacts with HER2 heterodimerization with different members of the EGFR family [142]. The epidermal growth factor receptor (EGFR) is amplified in approximately 5% of gastric cancers, specified by poor prognosis. Experiments have displayed a positive correlation between EGFR overexpression and cetuximab response [143]. A phase II trial assessing cetuximab plus oxaliplatin/leucovorin/5-fluorouracil displayed a dependence between a higher EGFR copy number and overall survival [144]. VEGF/VEGFR2-dependent signaling is significant in tumor angiogenesis. It has been noted that among GC cases, VEGF status and serum levels correlated with advanced stage and poor prognosis [145]. 
The role of ramucirumab, a VEGFR-2 mAb, was evaluated in the REGARD study, as a second line therapy after disease progression on a first line chemotherapy regimen, among cases with unresectable, advanced gastroesophageal tumors [146]. A phase III study (RAINBOW) tested this antibody, in combination with paclitaxel, as a second line treatment among cases with metastatic GC who progressed after a first line chemotherapy [147]. Overall survival was importantly increased in the paclitaxel plus ramucirumab group in comparison to the placebo. The fibroblast growth factor 2 receptor tyrosine kinase (FGFR-2) is overexpressed among approximately 10% of gastric tumors and its amplification is related to lymphatic invasion and poor prognosis [148]. Clinical trials in which patients picked for FGFR2 amplification are treated with inhibitors, such as dovitinib or AZD4547, are ongoing [149]. The activation of the PI3K/AKT/mTOR pathway is often among GC tumors. A phase III clinical study investigated the mTOR inhibitor (everolimus) in patients with advanced gastric cancer, and the results showed no improvement in the overall survival [150]. Additionally, a phase II study of MK-2206, an inhibitor of AKT, displayed no positive results [151]. Imaging Strategies Gastric cancer requires multimodal staging approaches, in which computed tomography (CT) is the first staging modality, mostly because of its broad availability and proper accuracy [152]. This method is very often used to assess local tumor invasion. It allows for poor soft tissue contrast; the intravenous contrast material and exposure to radiation is needed. Computed tomography for overall T-staging displayed a diagnostic accuracy between around 77% and 89% [153]. CT is frequently applied to image the occurrence of lymph node metastases among GC patients. The sensitivity was assessed as being between 63-92% and the specificity between 50-88%, according to a systematic review covering 10 studies [154]. The method of choice for M-staging is a CT of the abdomen and pelvis [155]. The sensitivity for the imaging of M1 disease using CT is approximately between 14-59%, and the specificity is between 93-100% [156]. Magnetic resonance imaging (MRI) is an auspicious method for depicting various gastric wall layers and the differentiation of tumor tissue from fibrosis [157]. The accuracy for the proper evaluation of the T-stage is between 64-88% [158]. MRI in T-staging was compared with CT, and the accuracy was rather higher for MRI, however, this difference was only proven to be statistically significant in two studies: 73% for MRI versus 67% for helical CT [159] and 81% for MRI versus 73% for spiral CT [160]. The precision of MRI for the correct distinction between node-negative and node-positive cases with GC varied between 65% and 100%, sensitivities and specificities ranged between 72-100%, 20-100%, 69-100% and 40-100%, respectively [161]. MRI is broadly applied to the diagnosis of liver metastases, as well as displaying capability for the diagnosis of peritoneal seeding [162]. The treatment response evaluation and the detection of lymph node metastases could take advantage of imaging biomarkers derived from functional MRI in the future [163]. Positron emission tomography (PET) imaging is not the best option for the evaluation of the T-stage. The resolution of PET is limited by the volume averaging of the metabolic signal, with prominent uptake averaged across several millimeters [164]. 
PET might be a very good method to detect anatomically small and metabolically active foci of metastatic disease. The comparatively poor spatial resolution of PET reduces its ability to differentiate compartment I and II nodes from the primary tumor itself [165]. PET is probably most useful for the detection of distant solid organ metastases. Kinkel et al. (2002) performed a meta-analysis and highlighted PET as the most sensitive noninvasive imaging strategy in this field [166]. PET may be a useful tool to predict the response to preoperative chemotherapy in GC cases. Conclusions In this review, we described GC characteristics, considering the epidemiology, risk factors, classification and molecular and genomic markers, as well as treatment strategies. We characterized the incidence of GC, which varies with geography and sex. We showed that a decline in the sporadic intestinal type of GC is present, whereas the prevalence of the diffuse type is increasing, and the prevalence of proximal GC is higher than that of distal GC. Several risk factors with an important impact on developing GC are mentioned, including family history, diet, alcohol consumption and smoking, as well as Helicobacter pylori and Epstein-Barr virus infection. The two main classifications of GC are described: Lauren, which is the most commonly used, and WHO, which is perceived to be the most detailed among all of the pathohistological classification systems. The signatures described are based on the current literature and research performed on this topic and encompass the module of HER2 expression, factors that regulate apoptosis, cell cycle regulators, factors that influence cell membrane properties, multidrug resistance proteins and microsatellite instability. We highlighted the two main primary prevention strategies for gastric carcinoma, which are better dietary habits and a lowering of the occurrence of H. pylori infection, and the secondary prevention approach, which is early detection using the endoscopic method as a gold standard. Different treatment strategies are also presented, including surgical resection, adjuvant and neo-adjuvant chemotherapy, radiation therapy, hyperthermic intraperitoneal chemotherapy (HIPEC) and pressurized intraperitoneal aerosol chemotherapy (PIPAC). Author Contributions: The authors contributed equally to this work. All authors have read and agreed to the published version of the manuscript. Funding: This research received no external funding. Conflicts of Interest: The authors declare no conflict of interest.
8,513.4
2020-06-01T00:00:00.000
[ "Biology", "Medicine" ]
MIN-MAX OPTIMIZATION OF EMERGENCY SERVICE SYSTEM BY EXPOSING CONSTRAINTS This paper deals with the fair public service system design using the weighted p-median problem formulation. Studied generalized system disutility follows the idea that the individual user's disutility comes from more than one located service center and the contributions from relevant centers are weighted by some coefficients. To achieve fairness in such systems, various schemes may be applied. The strongest criterion consists in the process when the disutility of the worst situated users is minimized first, and then the disutility of better located users is optimized under the condition that the disutility of the worst situated users does not worsen. Hereby, we focus on the first step and try to find an effective solving method based on the radial formulation. The main goal of this study is to show how suitable solving method for the min-max optimal system design can save computational time and bring precise results. Introduction The public service system design problem is a challenging task for both system designer and operational researcher. As the first one searches for a tool which enables to obtain service center deployment satisfying future demands of system users, the second one faces the necessity of completing the associated solving tool. The family of public service systems includes medical emergency system, public administration system and many others where the quality criterion of the design takes into account some evaluation of users' discomfort [1], [2], [3] and [4]. Thus designing of a public service system includes determination of limited number of center locations from which the service is distributed to all users. The associated objective in the standard formulation is to minimize some form of disutility which is proportional to the distance between served users and the nearest service centers [5] and [6]. This paper is focused on such methods of the public service system design where the generalized disutility is considered instead of common distance. It follows the idea of random occurrence of the demand for service and limited capacity of the service centers in real emergency rescue systems [6] and [7]. At the time of the current demand for service, the nearest service center may be occupied by some other user for which this service center is also the nearest one. When such situation occurs, the current demand is usually served from the second nearest center or from the third nearest center if the second one is also occupied. 
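To make this occupied-center mechanism concrete before the formal model is introduced, the following Python sketch computes a user's generalized disutility as a weighted sum of the contributions of the r nearest located centers, with the largest weight applied to the nearest (most likely available) center. The function name, the reduction coefficients and the distances are illustrative assumptions for the example only and are not taken from the paper.

```python
import numpy as np

def generalized_disutility(d_to_located, q):
    """Weighted sum of the r smallest disutility contributions from the located
    centers to one user; q must be non-increasing (q_1 >= q_2 >= ... >= q_r),
    so the largest weight multiplies the nearest, most likely available center."""
    nearest = np.sort(np.asarray(d_to_located, dtype=float))[:len(q)]
    return float(np.dot(q, nearest))

# Example: three located centers at network distances 4, 7 and 12 from a user, r = 3.
print(generalized_disutility([12, 4, 7], q=[1.0, 0.3, 0.1]))  # 1.0*4 + 0.3*7 + 0.1*12 = 7.3
```

The min-max objective studied in the paper is then the maximum of this quantity over all user locations.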
Thus we assume that the service is generally provided from more located service centers and the individual contributions from relevant centers may be weighted by reduction coefficients depending on a center order [6] and [8]. This approach constitutes an extension of previously developed methods where only one nearest center was taken as a source of individual user's disutility. Furthermore, we pay attention to the quality criterion of the design. On the contrary to our previous research where the average user´s disutility was minimized, here we focus on the fair-optimal public service system design. Fairness in general emerges whenever limited resources are to be fairly distributed among participants [9], [10], [11] and [12]. The strongest scheme is so-called lexicographic min-max criterion. Applying this scheme, the disutility perceived by the worst situated user is minimized first and then the disutility of the second worst situated user is minimized unless the previously achieved disutility of the worst situated users is worsened. This approach is applied step by step for the remaining users [13]. Complexity of previously described problems has led to searching for a suitable algorithm which complies with the task. It was found that the radial formulation of the problem can considerably accelerate the associated solving process [14]. Simultaneously with the above mentioned research, an attention was paid to so-called approximate approaches which make use of commercial IP-solvers and the radial formulation with homogenous system of radii concerning individual users [15] and constant a ij s is equal to 1 if and only if the disutility contribution d ij for a user from location j from the possible center location i is less or equal to d s , otherwise a ij s is equal to 0. Let the location variable y i ! {0, 1} model the decision of service center location at the location i ! I by the value of 1. Further, we introduce auxiliary zero-one variables x jsk for j ! J, s ! [0 ... v], k ! [1...r] to model the disutility contribution value of the k-th nearest service center to the user j. The variable x jsk takes the value of 1 if the k-th smallest disutility contribution for the customer j ! J is greater than d s and it takes the value of 0 otherwise. Then the expression e 0 x j0k + e 1 x j1k + e 2 x j2k + … + e v x jvk constitutes the k-th smallest disutility contribution d k j* for customer located at j. Under mentioned preconditions, we can describe the min-max optimal public service system design problem using the following variables and other denotations. Subject to ,..., , ,..., j J s vk r 0 1 ! = = (6) h 0 $ The constraint (2) puts a limit p on the number of located centers. The constraints (3) ensure that the sum of variables x jsk over k ! [1...r] expresses the number of the service centers outside the radius s from the user location j, which remains to the number r. The link-up constraints (4) ensure that each perceived disutility is less than or equal to the upper bound h. Validity of the assertion that the expression on the left-hand-side of (4) expresses the sum q 1 d i1,j + q 2 d i2,j + … + q r d ir,j of weighted relevant disutility values from the r nearest service centers i1, i2, …, ir to the user located at j follows from the next reasoning. It can be easily found that minimal sum of the variables x jsk over k ! [1...r] completes the number of located service centers in the radius s from user location j to the number r. 
This way, the sum gives the number t of the nearest service centers whose disutility contribution is greater than or equal to the value d s . As the sequence of q k decreases, only x jsk for k = r-t+1, r-t+2 … r must be equal to one for the given j and s. It causes that the biggest disutility contribution is assigned by the smallest value of q k . The left-hand-side of (4) is pushed down by some optimization process and then the constraints x jsk ≤ x js-1,k for s = 1…v must hold due to construction of a ij s and constraints (3) and further also the constraints x jsk ≤ x jsk+1 for k = 1… r-1 must hold due to convexity given by decreasing sequence of q k . [16]. These approaches are called approximate not due to the solving tool, but for some small impreciseness connected with rounding the disutility values up to values from the set of so-called dividing points. In this paper, we focus on the first step of the lexicographic approach which consists in solving the min-max optimal public service system design problem where the disutility of the worst situated user is minimized. We study and compare two different approaches from the point of their impact on the solution accuracy and saved computational time. The remainder of the paper is organized as follows. Section 2 introduces the generalized model of individual user's disutility concerning more than one contributing center and provides the mathematical formulation of the problem based on the radial formulation. Section 3 contains the description of suggested approximate bisection search for exposing structure and gives the resulting algorithm for the min-max location problem solution. Section 4 contains numerical experiments, comparison of the suggested approaches and Sections 5 gives final conclusions. Generalized system disutility To formulate a mathematical model of the min-max optimal public service system design problem, we denote the set of user locations by J and the set of possible service center locations will be denoted by I. The basic decisions in any solving process of the problem concern location of given number p of centers at the possible locations from the set I. The system disutility for the user located at j ! J provided by a center located at i ! I is denoted by d ij . The randomly restricted capacity of a service center can be generalized so that the r nearest located centers influence the total disutility perceived by any user. In this paper, the generalized disutility for any user is modeled by a sum of weighted disutility contributions from the r nearest centers. The weights q k for k = 1 ... r are positive real values which meet the following inequalities q 1 ≥ q 2 ≥ … ≥ q r . The k-th weight can be proportional to the probability of the case that the k-1 nearest located centers are occupied and the k-th nearest center is available [17]. Radial formulation We assume that the disutility contribution value ranges only over non-negative integers from the range [d 0 , .u] such that G(w * (w)) ≥ G(w) and S(w * (w)) ≤ S(w). Proof: We perform the proof by contradiction assuming that there exists a subscript w such that either G(w) < G(w) or S(w) > S(w) holds for each w ! [1..u]. As S(w) < S(w+1) and G(w) < G(w+1) follow from definition of the exposing structure and S(1) ≤ S(1) follows the structure domination, then maximal w' exists such that S(w') ≤ S(w) and G(w') < G(w). It follows that the range [G(w')+1.. From the inequalities (12) it follows that y also satisfies (9) according to [u, S, G]. 3. 
The approximate bisection search for exposing structure Exposing structure for the radial formulation Let us consider the radial formulation (1) - (7) of the generalized p-center problem with the zero-one coefficients a ij s defined for i ! I, j ! J and s ! [0…v] where r nearest located centers influence the disutility perceived by a user. The coefficients are derived from the disutility contribution values which range only over non-negative integers of all possible disutility values d 0 < d 1 < … < d m from the matrix {d ij }. The triple [u, S, G] is denoted as an exposing structure, if its components satisfy the following rules. The first component u is a positive integer less than or equal to r. then the structure is denoted as complete structure. Using the above introduced location variables y i ! {0,1} for i ! I, the following set of constraints can be formulated for the exposing structure [u, S, G]. If a feasible solution y of the constraints (8) -(10) structure exists for a complete [u, S, G], then each user location j must lie at least in the radius d S(1) from G(1) located service centers and in the radius d S (2) from G(2)-G(1) additional service centers and so on till up to the radius d S(u) from the G(u)-G(u -1) service centers. It means that the worst situated user perceives the generalized disutility less than or equal to the value of (11). An exposing structure is called valid if there is at least one feasible solution of the problem (8) -(10) formulated for the structure. Example 2 This example is defined for the matrix of disutility contributions d ij in Table 3 and for the associated sequence of the different values d 0 < d 1 < … < d m from the matrix {d ij } in the form 1< 2 <5. The associate matrices of a s ij for s=0, 1, 2 are depicted in Table 4. Potential disutility contributions d ij from the center locations to the user locations. Matrices of a s ij for s=0, 1, 2. Table 4 s: Lexicographic maximal completion of valid exposing structure The process of valid structure completion starts with an incomplete valid structure [u, S, G] where G(u) < r. As we assume that the structure is valid, at least one feasible solution of (8) - (11) exists. The lexicographic process begins with attempt Proposition 4 The ordering defined on the set of all complete exposing structures by relation of dominance is not complete ordering in general. Proof: The proof is performed by the construction of two examples where each of them includes two complete structures. None of the structures dominates the other and, furthermore, there is no further structure which dominates any of the two ones. In addition, the pair of examples shows that mutual positions of the lowest subscripts S(1) and S(1) do not decide which structure takes the lowest value of (11). Both of the following examples are defined on the network where the set J of users' locations contains only two elements J={1, 2} and the set of possible center locations I consists of four elements I={1, 2, 3, 4}. It is necessary to locate p=2 centers so that the generalized disutility for the worst situated user is minimal. The generalized disutility is defined here for r=2 and for reduction coefficients q 1 =1 and q 2 =0.5. Example 1 This example is defined for the matrix of disutility contributions d ij in Table 1 Step 1. Repeat the following steps for k=1… r. Step .sM]. The limit sm is set at zero for k=1 and it equals to s * at the next steps for the k-1. The limit sM is specified using the value bH. Step 3. 
Apply the procedure Complete on the structure [u, S, G] and if a valid complete exposing structure is found and H [u, S, G] < bH holds, then update the exposing structure [bu, bS, bG] with value of bH by the newly found structure. Computational study The main goal of this study is to verify the usefulness of suggested approximate algorithm for the min-max location problem with generalized system disutility. This problem represents the first step of the lexicographic optimization process [13]. It was found that this important first step is the most time-consuming part of the whole algorithm and therefore it is necessary to develop an effective solving method for this min-max problem. Within this paper, we try to answer the question whether the suggested algorithm based on radial formulation and exposing constraints considerably accelerates the solving process of the p-center problem. Therefore, we compare the basic radial approach based on the formulation (1) -(7) to the suggested method described in the previous section. The results are compared from the viewpoint of computational time and solution accuracy. All reported experiments were performed using the optimization software FICO Xpress 7.3 (64-bit, release 2012) for both studied approaches. The associated code was run on a PC equipped with the Intel® Core™ i7 2630QM processor with parameters: 2.0 GHz and 8 GB RAM. Particular approaches were tested on the pool of benchmarks obtained from the road network of the Slovak Republic. The instances are organized so that they correspond to the administrative organization of Slovakia. For each self-governing region (Bratislava -BA, Banska Bystrica -BB, Kosice -KE, Nitra -NR, Presov -PO, Trencin -TN, Trnava -TT and Zilina -ZA) all to increase the value of G(u) as much as possible keeping the augmented structure valid. As the possible increase of G(u) is limited by r, only finite and small number of tests of solution existence is necessary to determine the highest value of G(u). The associated algorithm will be called "AugmentG". Further, let us consider incomplete structure which has been processed by the algorithm AugmentG. The lexicographical process continues with attempt to add next subscript S(u+1) to the structure as the lowest subscript from the range [S(u)+1..m]. The associated value of G(u+1) is set at the lowest possible value, i.e. G(u)+1. The searching process can be finished quickly even for big value of m when bisection is used. If the value of m is used as upper bound of the searched subscript, the algorithm ever succeeds in the search. If the searched range is reduced, then addition may fail. The associated algorithm will be called "AugmentS". After successful run of the algorithm, the structure is enlarged by one element of the array S and G and u is increased by one. The process of the structure completion can be performed by the following steps. Step 0. Initialize the starting valid incomplete structure [u, S, G]. {Comment: G(u)<r holds for the incomplete structure.} Step 1. Repeat the following two steps until G(u)=r. Step 2. Apply AugmentG on [u, S, G]. {Comment: The previous step may increase G(u)so that structure stay valid.} Step 3. If G(u)<r, apply AugmentS on [u, S, G]. 
{Comment: The previous step increases u to u+1, adds elements S(u+1) and G(u+1) to the u-tuples S and G respectively so that the augmented structure stays valid and S(u+1)>S(u) and G(u+1)= G(u)+1.} If the search included in the algorithm AugmentS is performed over the range [S(u)+1..m], then the resulting structure of is a complete valid exposing structure and in addition, any other valid structure containing the starting structure cannot dominate the resulting complete structure. If the search is restricted on some smaller range, e.g. not to produce complete structures with the value H [u, S, G] higher than a given upper bound UB, then the process can be prematurely stopped unless valid complete structure is produced. The above process will be called "Complete" in the remainder of the paper. An approximate algorithm for min-max location problem solution The suggested approximate algorithm is based on the partial search over set of non-dominated complete valid exposing |J| in all solved instances. The road network distance from a user located at j to the center located at i was taken as an individual user´s disutility d ij . The value of parameter p limiting the number cities and villages were taken as possible service center locations and also as the user locations. Thus the number of possible service center locations |I| is the same as the number of user locations The results of numerical experiments for r = 3 and the sets of coefficients q 1 and q 2 . Table 5 Region The results of numerical experiments for r = 3 and the sets of coefficients q 3 and q 4 . Table 7 which follows the same denotation as the previous tables. The reported results indicate that the suggested algorithm based on exposing constraints gives more precise results in considerably shorter time in comparison to the exact method. We presume that the link-up constraints for the upper bound definition significantly spoil the convergence of the computational process based on the branch and bound principle. Therefore, we can conclude that our algorithm constitutes an effective solving tool for the min-max optimal public service system design problem with generalized system disutility. The instances where our suggested algorithm lost in comparison to the prematurely terminated branch and bound approach as concerns the solution quality will become a topic of our future research. Conclusions The main goal of this study was to introduce and compare different approaches to the min-max optimal public service system design problem as the initial step of the lexicographic optimization process. Within this paper, the generalized system disutility was studied. The model of generalized disutility impacts the of located service centers was set in such a way that the ratio of |I| to p equals 5 and 10 respectively. In the benchmarks, the generalized disutility perceived by any user sharing given location j was defined by the sum of r = 3 distances from the user's location to the three nearest located service centers. Particular disutility values are multiplied by the reduction coefficients q k for k = 1… r so that the biggest coefficient multiplies the smallest distance etc. The four triples q 1 , q 2 , q 3 , q 4 of the reduction coefficients define the individual benchmarks and these symbols of the triples are used for distinguishing the results obtained by individual approaches applied on the benchmarks. The used triples were q 1 = [1, 0. 
Table 5 and Table 6 where the basic radial exact approach based on the model (1) - (7) is denoted by RA_EX and the radial approach with exposing constraints is denoted by RA_ EC. The computational time in seconds is given in the columns denoted by CT and the symbol G * denotes the best found value of the generalized disutility, which corresponds to the maximal disutility perceived by the most exposed users of the designed public service system. Since our preliminary experiments showed that the used IP-solver needs unpredictable computational time when the middle-size integer programming problem with the min-max criterion is solved to optimality, we decided to test each method in the maximal time of one hour and we report the best achieved results at that time. To make the comparison of presented approaches more precise, we enriched the pool of benchmarks in such a way that The results of numerical experiments for r = 5 and the sets of coefficients q 5 and q 6 . software tool. Thus, we can conclude that we have constructed a very useful solving tool for the middle-sized min-max optimal public service system design problem.
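As a schematic illustration of the structure-completion process described in Section 3 (the procedures AugmentG, AugmentS with bisection, and Complete), the following Python sketch reproduces the control flow under the assumption of an external oracle is_valid(S, G) that reports whether the feasibility system (8)-(10) has a solution for the exposing structure [u, S, G], for example by calling an IP solver. The function names and the toy oracle used in the demonstration are illustrative assumptions, not the authors' implementation.

```python
# Schematic control flow of the structure-completion process, assuming an external
# oracle is_valid(S, G) for the feasibility system (8)-(10). All names are illustrative.

def augment_g(S, G, r, is_valid):
    """AugmentG: raise the last component G[-1] as far as possible (up to r)
    while the exposing structure [u, S, G] stays valid."""
    while G[-1] < r and is_valid(S, G[:-1] + [G[-1] + 1]):
        G = G[:-1] + [G[-1] + 1]
    return G

def augment_s(S, G, m, is_valid):
    """AugmentS: append the smallest subscript S(u+1) in (S(u), m] that keeps the
    structure valid with G(u+1) = G(u) + 1; bisection keeps the number of oracle
    calls logarithmic in m. Returns None if no valid extension exists."""
    lo, hi, best = S[-1] + 1, m, None
    target_G = G + [G[-1] + 1]
    while lo <= hi:
        mid = (lo + hi) // 2
        if is_valid(S + [mid], target_G):
            best, hi = mid, mid - 1       # valid: try an even smaller radius index
        else:
            lo = mid + 1                  # invalid: the radius index must grow
    return None if best is None else (S + [best], target_G)

def complete(S, G, r, m, is_valid):
    """Procedure Complete: lexicographically extend a valid incomplete structure
    until G(u) = r, alternating AugmentG and AugmentS."""
    G = augment_g(S, G, r, is_valid)
    while G[-1] < r:
        step = augment_s(S, G, m, is_valid)
        if step is None:
            return None                   # premature stop: no valid complete structure
        S, G = step
        G = augment_g(S, G, r, is_valid)
    return S, G

if __name__ == "__main__":
    # Toy oracle: pretend the best reachable k-th nearest dividing-point indices for
    # the worst situated user are 0, 1 and 2 (r = 3 contributing centers, m = 2).
    kth = [0, 1, 2]
    def is_valid(S, G):
        return all(sum(1 for x in kth if x <= s) >= g for s, g in zip(S, G))
    print(complete([0], [1], r=3, m=2, is_valid=is_valid))   # -> ([0, 1, 2], [1, 2, 3])
```

In the actual algorithm the validity oracle is the expensive part, which is why AugmentS relies on bisection over the dividing points to keep the number of feasibility checks small.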
5,191.2
2015-05-31T00:00:00.000
[ "Engineering", "Computer Science" ]
Analysis of Switchable Spin Torque Oscillator for Microwave Assisted Magnetic Recording A switchable spin torque oscillator (STO) with a negative magnetic anisotropy oscillation layer for microwave assisted magnetic recording is analyzed theoretically and numerically. The equations for finding the STO frequency and oscillation angle are derived from the Landau-Lifshitz-Gilbert (LLG) equation with the spin torque term in spherical coordinates. The theoretical analysis shows that the STO oscillating frequency remains the same and the oscillation direction reverses after the switching of the magnetization of the spin polarization layer under an applied alternating magnetic field. Numerical analysis based on the derived equations shows that the oscillation angle increases with the increase of the negative anisotropy energy density (absolute value) but decreases with the increase of spin current, the polarization of conduction electrons, the saturation magnetization, and the total applied magnetic field in the z direction. The STO frequency increases with the increase of spin current, the polarization of conduction electrons, and the negative anisotropy energy density (absolute value) but decreases with the increase of the saturation magnetization and the total applied magnetic field in the z direction. Introduction Microwave assisted magnetic recording (MAMR) is one potential technology to overcome the superparamagnetic effect of perpendicular magnetic recording in the hard disk drive. A microwave field matching the ferromagnetic resonance of the recording media excites a large angle precession of magnetization, resulting in a significant reduction in switching field. Using microwave-assisted magnetic switching, it is possible to write data into high magnetocrystalline anisotropy recording media, such as FePt and CoPt, which have sufficient thermal stability at very small grain size. The angular momentum carried by the spin-polarized current applies a torque on the magnetization vector leading to either precession or reversal through the spin-transfer-torque effect [1,2]. The current-induced magnetization precession enables a magnetic nanostructure to act as a tunable high-frequency spin-torque oscillator (STO) [3]. The high-frequency magnetization precession in the STO can generate a localized microwave field suitable for the application of MAMR, as proposed in [4,5]. Furthermore, the fabrication processes of the STO are compatible with the current thin film perpendicular magnetic recording head and are easy to integrate with the current recording technology. For the real application of the STO for MAMR, the STO should be near the writing pole to avoid field decay with the distance away from the STO, as shown in the thin film magnetic head in Figure 1. The STO basically consists of a spin polarization layer, a spacer, and an oscillation layer. The STO is located between the writing pole and trailing shield. The microwave generated by the STO can assist the magnetic field from the writing pole to switch the media. 
There is a very strong magnetic field in the gap between the writing pole and the trailing shield; the STO with the negative magnetic anisotropy oscillation layer can oscillate stably under a very wide range of applied fields and injected spin currents [6]. Therefore, an STO with a negative magnetic anisotropy oscillation layer is preferred. The oscillation frequency and oscillation angle of the switchable STO, which, together with the saturation magnetization of the oscillation layer, determine the microwave frequency and amplitude (critical for the MAMR application), are investigated in detail in this paper. Theoretical Analysis of the Switchable Spin Torque Oscillator Theoretical Analysis of the STO Frequency and Oscillation Angle. The basic structure of the STO and the coordinates used for the analysis in this paper are shown in Figure 2. A simple approach to describe the current-induced magnetization oscillation of the oscillation layer is to fix the magnetization of the spin polarization layer and consider the oscillation layer magnetization as a uniform macrospin. The dynamics of the oscillation layer magnetization follow the Landau-Lifshitz-Gilbert (LLG) equation with Slonczewski's spin torque term (1), where γ₀ is the gyromagnetic factor, H⃗_eff is the effective magnetic field, α is the damping constant, M₀ is the saturation magnetization, I is the current passing through the STO, ℏ is the reduced Planck constant, μ₀ is the permeability of free space, V is the volume of the oscillation layer, e is the charge of an electron (−1.60 × 10⁻¹⁹ C), p⃗ is the current polarization, and g is the spin transfer efficiency function given by g = [−4 + (1 + P)^3 (3 + p̂·m̂)/(4P^(3/2))]^(−1), where m̂ = M⃗/M₀ and P is the polarization of conduction electrons. If the spin torque term is included in the effective magnetic field, (1) can be rewritten in a form that is the same as the traditional LLG equation. The LLG equation given in spherical coordinates can then be expressed in terms of h_θ,eff and h_φ,eff, the normalized total effective fields along the θ̂ and φ̂ directions. Here the following conditions are assumed: (1) the uniaxial magnetic anisotropy of the spin polarization layer is along the z axis, (2) the magnetization of the spin polarization layer is fixed along the +z axis, (3) the dimensions of the STO in the x and y directions are the same, and (4) the magnetic field is only applied along the z axis. The effective magnetic field is calculated from the variation of the energy with magnetization; h_θ,eff and h_φ,eff can then be written in terms of H, the total applied magnetic field on the oscillation layer (which includes the external applied field and the demagnetizing field from the spin polarization layer), K_u, the anisotropy energy density, and the demagnetizing factors of the oscillation layer. Thus (3a) and (3b) can be rewritten with these effective fields. The oscillation frequency of the STO can be expressed through (7a) and (7b), where the angle θ₀ is the solution of (6); we define A = (1 + P)^3/(4P^(3/2)) and B = −4 + 3A, so that g = 1/(B + A cos θ), together with a normalized spin-torque field amplitude built from ℏI/(2eμ₀M₀V) and an effective field combining the anisotropy field 2K_u/(μ₀M₀) with the shape-anisotropy contribution of the demagnetizing factors, and (6) can then be expressed as (8). It is not difficult to find the closed-form solution of (8) for cos θ₀. For stable oscillation, the solution for cos θ₀ is valid only when it lies between −1 and +1. Inserting the angle θ₀ into (7a) or (7b), the oscillation frequency of the STO can be found.
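The macrospin picture described above can also be explored numerically. The following is a minimal, self-contained sketch that integrates the explicit (Landau-Lifshitz) form of the LLG equation with a Slonczewski-like anti-damping torque and reads off a steady-state cos θ and precession frequency. It does not implement the paper's closed-form equations (6)-(9); the gyromagnetic constant, the simple uniaxial effective field (applied field plus an effective, possibly negative, anisotropy term along z), the spin-torque field amplitude a_j, and all parameter values are illustrative assumptions.

import numpy as np

GAMMA0 = 2.211e5          # gyromagnetic ratio in m/(A*s), SI convention (assumed)

def llg_rhs(m, h_app, h_k, a_j, p_hat, alpha):
    """Time derivative of the unit magnetization m (fields in A/m)."""
    # effective field: applied field along z plus a (possibly negative) uniaxial term
    h_eff = np.array([0.0, 0.0, h_app + h_k * m[2]])
    precession = np.cross(m, h_eff)
    damping = np.cross(m, precession)
    torque = np.cross(m, np.cross(m, p_hat))      # Slonczewski-like (anti-)damping torque
    fieldlike = np.cross(m, p_hat)                # small field-like correction term
    pre = -GAMMA0 / (1.0 + alpha**2)
    return pre * (precession + alpha * damping + a_j * torque - alpha * a_j * fieldlike)

def run_sto(h_app=8.0e5, h_k=-6.0e5, a_j=-2.0e4, alpha=0.02,
            dt=1e-13, n_steps=100_000):
    """Integrate with RK4; return steady-state cos(theta) and precession frequency."""
    m = np.array([0.05, 0.0, 0.999]); m /= np.linalg.norm(m)
    p_hat = np.array([0.0, 0.0, 1.0])             # polarizer fixed along +z
    phase, prev_phi, cos_theta = 0.0, np.arctan2(m[1], m[0]), []
    for step in range(n_steps):
        k1 = llg_rhs(m, h_app, h_k, a_j, p_hat, alpha)
        k2 = llg_rhs(m + 0.5*dt*k1, h_app, h_k, a_j, p_hat, alpha)
        k3 = llg_rhs(m + 0.5*dt*k2, h_app, h_k, a_j, p_hat, alpha)
        k4 = llg_rhs(m + dt*k3, h_app, h_k, a_j, p_hat, alpha)
        m = m + dt/6.0 * (k1 + 2*k2 + 2*k3 + k4)
        m /= np.linalg.norm(m)                    # keep |m| = 1
        phi = np.arctan2(m[1], m[0])
        dphi = np.unwrap([prev_phi, phi])[1] - prev_phi
        prev_phi = phi
        if step > n_steps // 2:                   # discard the transient
            phase += dphi
            cos_theta.append(m[2])
    freq = abs(phase) / (2*np.pi) / (dt * (n_steps - n_steps // 2))
    return float(np.mean(cos_theta)), freq

if __name__ == "__main__":
    cos_t, f = run_sto()
    print(f"steady-state cos(theta) ~ {cos_t:.3f}, precession frequency ~ {f/1e9:.2f} GHz")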
STO Frequency and Oscillation Angle after Switch. If the external magnetic field along the z axis is strong enough, then when its direction is changed from +z to −z, the magnetization of the spin polarization layer will also change from +z to −z (switchable). The demagnetizing field of the spin polarization layer also reverses its direction. Therefore, H becomes −H. The reversal of the spin polarization layer magnetization also causes the current polarization to be reversed. Equation (6) then becomes (10). Equation (10) can be rewritten, and it is obvious that its solution is cos θ₀′ = −cos θ₀. Thus θ₀′ is equal to (π − θ₀), and the corresponding oscillating frequency follows accordingly. Therefore, the oscillating frequency remains the same, but the oscillating direction reverses after the switching of the external magnetic field and the magnetization of the spin polarization layer. In the application of the STO for MAMR, the magnetic field from the writing pole is opposite when 0 or 1 is written. The reversal of the oscillating direction and the unchanged frequency match the needs of microwave-assisted magnetic switching when 0 or 1 is written. If the STO is not switchable, the external magnetic field applied to the STO is different for the writing of 0 and 1, which causes a shift of the STO oscillation frequency (as shown in the next paragraph) and a mismatch between the STO frequency and the recording media switching frequency, resulting in write-in failures, which are the main source of MAMR noise. For the STO studied in this paper, there is no pinning layer in the polarization layer. However, there is a very strong magnetic field of 5000-8000 Oe along the ±z direction between the main pole and the trailing shield, where the spin torque oscillator (STO) is placed (Figure 1). This field acts on the STO and makes the polarization layer robust enough against other influential forces such as the dipole field from the oscillation layer (which is 100-500 Oe depending on the thickness and magnetization of the free layer), the field from the magnetic recording grains (which is about 200-400 Oe at a flying height of 3-5 nm), and the spin torque it experiences when a current passes through it (the equivalent spin torque field is about 100-200 Oe). Numerical Analysis of STO Frequency and Oscillation Angle For microwave assisted magnetic recording (MAMR), the STO generates the microwave field that is used to reduce the switching field of the recording media during the writing process. In order to sufficiently reduce the media switching field, the microwave frequency should be tuneable to match the natural precession frequency of the media magnetization, and the oscillating amplitude of the microwave (i.e., the AC magnetic field) should be large enough (about 10% of the media switching field). Therefore, the microwave frequency and the AC magnetic field are two key parameters for MAMR. In our STO design, the AC magnetic field strength is determined by the STO oscillation angle. Therefore, the discussion of the STO oscillation angle is critical for the application of the STO in microwave assisted magnetic recording. Based on the equations above, the relationship between the STO oscillation angle/frequency and the relevant parameters is numerically analysed below. The dimension of the oscillation layer of the STO is 40 nm × 40 nm × 10 nm. Injected Spin Current.
In our simulation we assume that the saturation magnetization M₀ is 800 kA/m, the anisotropy energy density K_u is −8 × 10⁵ J/m³, and the damping constant α is 0.02 for the oscillation layer. The applied magnetic field (including the demagnetizing field from the spin polarization layer) is 10000 Oe, and the polarization of the conduction electrons is 0.35 [7]. We vary the current density from 0 to 1.25 × 10⁸ A/cm². The numerically calculated results of cos θ and frequency are shown in Figure 3. The increase of the spin current results in a decrease in the oscillation angle and an increase in the oscillation frequency. This trend is easily understandable because a larger current injects more spin torque into the oscillation layer and makes the oscillation layer oscillate faster. Polarization of the Conduction Electrons. The simulation parameters are the same as those in Section 3.1, except for a fixed current density of 1 × 10⁸ A/cm² and a polarization of the conduction electrons varied from 0.2 to 0.5. The numerically calculated results of cos θ and frequency are shown in Figure 4. Similar to the spin current, an increase in the polarization of conduction electrons results in a decrease in the oscillation angle and an increase in the oscillation frequency because more spin torque is injected into the oscillation layer. Saturation Magnetization. The simulation parameters are the same as those in Section 3.1, except for a fixed current density of 1 × 10⁸ A/cm² and a saturation magnetization M₀ varied from 400 kA/m to 800 kA/m. A high saturation magnetization results in a low magnetic anisotropy field and a high demagnetizing field, which result in a low oscillation angle (high cos θ) and a low value of the spin transfer efficiency function g. Besides g, the STO frequency is inversely proportional to M₀ as shown in (7a); thus the oscillation frequency decreases with the increase of the saturation magnetization, as shown in the numerically calculated results in Figure 6. Total Applied Magnetic Field. The simulation parameters are the same as those in Section 3.1, except for a fixed current density of 1 × 10⁸ A/cm² and a total applied magnetic field in the +z direction varied from 0 to 10000 Oe. A high H results in a low oscillation angle and a low value of the spin transfer efficiency function g. Therefore, the oscillation frequency decreases with the increase of H, as shown in the numerically calculated results in Figure 7. Conclusions Using the modified LLG equation, we derived formulas to solve for the oscillation frequency and oscillation angle of the switchable spin torque oscillator (STO) with a negative magnetic anisotropy oscillation layer. The STO keeps the same oscillation frequency, while its oscillation direction reverses after the flip of the applied external field in the z direction. The oscillation angle increases with the increase of the negative anisotropy energy density (absolute value) but decreases with the increase of spin current, the polarization of conduction electrons, the saturation magnetization, and the total applied magnetic field in the z direction. The STO frequency increases with the increase of spin current, the polarization of conduction electrons, and the negative anisotropy energy density (absolute value) but decreases with the increase of the saturation magnetization and the total applied magnetic field in the z direction. The findings in this paper offer guidelines for the design of switchable STOs for microwave assisted magnetic recording. (Figure 2: Spin torque oscillator with a negative magnetic anisotropy oscillation layer.)
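As a follow-up to the macrospin sketch given earlier, the short loop below scripts the kind of parameter study described in the numerical-analysis section: it sweeps the spin-torque amplitude (a stand-in for the injected current density) and tabulates the steady-state cos θ and frequency. It assumes the run_sto() helper defined in that sketch is available; the values and the resulting trends belong to that toy geometry only and are not reproductions of the paper's Figures 3-7.

# sweep the spin-torque field amplitude a_j (negative sign = anti-damping polarity)
for a_j in (-0.5e4, -1.0e4, -1.5e4, -2.0e4, -2.5e4):
    cos_t, f = run_sto(a_j=a_j)
    print(f"a_j = {a_j:9.0f} A/m  ->  cos(theta) ~ {cos_t:+.3f},  f ~ {f/1e9:5.2f} GHz")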
Figure 4: Effects of polarization of conduction electrons on STO. Figure 6: Effects of saturation magnetization on STO. Figure 7: Effects of applied magnetic field on STO.
2,926
2015-03-10T00:00:00.000
[ "Physics", "Engineering" ]
Domain and Challenges of Big Data and Archaeological Photogrammetry With Blockchain With the gigantic growth of the data volume that is moved across web links today, a gigantic measure of complex information has been produced. Extremely huge sets of data arise in universities, organizational frameworks, institutions, the gas and petroleum sector, photogrammetry, healthcare, and archaeology, and this information is enormous and complex, with increasingly varied structure. The major challenge is how to handle this significant volume of data, also in archaeological photogrammetry, which is referred to as Big Data. Big data also has to be securely transferred and conveyed through the internet. It cannot be controlled with regular conventional methods, which fail to handle it, so there is a need for newer, more developed tools. Big data has frequently been divided into V characteristics, beginning from three V's: volume, velocity and variety. The initial three V's have been stretched out over time through research to arrive at 56 V's to date. Among them are three newly found by the author, which implies the number has multiplied nearly twenty times. The researcher had to dive into many studies to search for all of these characteristics, detect them, and build comparisons in order to answer the old, current, and restated essential inquiry: how many V aspects (characteristics) are there in big data with archaeological photogrammetry and blockchain? This paper provides a comprehensive overview of all secured big data V's (characteristics) as well as their strengths and limitations with archaeological photogrammetry and blockchain. An estimated 59 zettabytes of data will be generated and processed in 2021 alone. By 2025, the International Data Corporation (IDC) predicts that the amount of data saved will have increased to 163 zettabytes, as shown in Figure 1. As a result, data storage capacity has risen from megabytes to exabytes, with zettabytes per year predicted in the coming years [1]. The amount of data that will be generated during the next three years will be more than that generated during the last thirty years. The amount of data produced in the next five years will be three times that produced in the previous five years. The task of handling and managing continuously increasing data is becoming a problem. Another issue with data is that it is being generated in new formats and in unstructured forms, such as photographs, audio, tweets, text messages, server logs, and so on. The petabyte era is coming to an end, leaving us at the threshold of the exabyte era. The technological revolution has aided billions of people by generating massive amounts of data, which has been dubbed ''Big Data'' [2]. According to research [3], Big Data (BD) basically meant the amount of data that could not be processed in an efficient manner by traditional database tools and methods. Every time a new medium of storage was devised, the amount of data that could be retrieved became larger because it was now easier to do so. The first concept of BD centred on organised data, but many academics and practitioners noticed that the vast majority of information on the planet is unstructured. The application of BD will be a critical component of individual company growth and rivalry.
Every organisation 86 should take BD seriously from the standpoint of competition 87 and potential value extraction. Established organisations and 88 new entrants in every field will use the most up-to-date data 89 gathering methods to innovate, compete and capture value 90 gathered and also the real-time data. Every field we looked 91 at has examples of this type of data utilisation. 92 It want to execute an operation with BD, its vast volume 93 offers a challenge. However, how will we know if the opera-94 tion was successful? And do you know if it's correct or not? 95 The truth or validity of Big Data is a major issue because it 96 is nearly hard to check spelling, slang, and vocabulary with 97 such a large amount of data. If the information isn't accurate, 98 it's useless. 99 Modern data-driven technologies, as well as an increase 100 in processing and data storage capacities, have greatly aided 101 the growth of the BD industry. Companies like Google, 102 Microsoft, Amazon, and Yahoo are collecting and maintain-103 ing data that can be measured in proportional greater than 104 exabytes. Furthermore, social media sites such as YouTube, 105 Facebook, Twitter, and Instagram have billions of users who 106 produce massive amounts of data every second of the day. 107 Various organisations have invested in the product's develop-108 ment and research. BD Analytics is a prominent topic in data 109 science research since several firms have invested in building 110 products to handle their monitoring, testing, data analysis, 111 simulations, and other knowledge and business demands. 112 The core of the Big Data analytics is the processing and 113 generation of meaningful patterns for making inferences, 114 predictions and decision. There are also other challenges 115 that BD analytics need to overcome for data analysis and 116 machine learning. Variation in raw data format, speed of 117 streaming data, data analysis reliability, vast and distributed 118 input sources, noisy and low quality data, scalability of algo-119 rithms, increased dimensionality of data, uncategorized data, 120 tent. For assessment of huge data, complex tools for rapidly 156 A massive benefit of latest technology is the pretty growing 157 skills and user friendliness over price ratio, which inspires 158 archaeologists to go into the rising realm of Digital Archaeol-159 ogy. For any metric to be widely accepted in the archaeologi-160 cal community as a benchmark evaluation tool for contrasting 161 various archaeological item detection procedures, this is a 162 crucial need. The required archaeological data for additional 163 (field) investigation is provided by the centroid-based and 164 pixel-based measurements. We anticipate that from now on, 165 the community will view these two metrics as a common per-166 formance evaluation tool [12]. Over time, archaeological pho-167 tography has undergone intense scrutiny and been improved. 168 Methodological and technical advancements in the form of 169 equipment development and digital control of photographic 170 products and environments are significant advancements in 171 archaeological photography [13]. 172 With the emergence of ''big'' data projects, it is important 173 to think about how these new data scales and perspectives 174 on historic sites and landscapes might complement or con-175 flict with local residents' modes of knowing. 
Big data has 176 a lot to offer the archaeological discipline, allowing for the 177 use of never-before-seen scales of data to ask questions and 178 observe sites from novel perspectives, as this issue of JFA 179 demonstrates [14]. Heritage sites now face both new poten-180 tial and difficulties as the big data era begins. Big data has 181 enormous commercial value, particularly in the application 182 area. However, the market demand cannot be satisfied by 183 the current domestic cultural site development.It is challeng-184 ing to implement innovative cultural tour service models 185 because the majority of historic site tourism service modes 186 that have been done in a similar area in the past in Figure 4 dis-243 play the article's structure; Sections III and IV discuss the step 244 by step way for conducting a Literature Review, including 245 the some RQs, strings to search, IE(Inclusion/exclusion) cri-246 teria, QA, and conclusion. Section V discusses the proposed 247 taxonomy, including main findings and open challenges; and 248 Section V discusses the obtained results. Finally, Section VI 249 brings the article to a close. The term ''data volume'' alludes to the massive amount 253 of data derived from science and technology, as well as 254 organizations, innovation, and people collaboration records. 255 Volume alludes to the amount of data extracted from various 256 sources such as sound, video,text, research work, long-range 257 interpersonal communication, space images, clinical data, 258 climate forecasting, wrongdoing reports, and catastrophic 259 events, among others. 260 Regardless, data volume takes up a significant amount of 261 time and effort to manage [20]. Although, because of the 262 speed with which capacity innovations are created on the 263 one hand and the capacity cost is reduced on the other, the 264 capacity limit poses less of a challenge in terms of handling. 265 As a result, cost-effective data storage arrangements, Cloud 266 advancements, and now Edge developments provide organ-267 isations with more options for data storage. In any event, 268 data volume has an impact on executives' data handling and 269 dynamic data [21], [22]. It controls the rate where data flows in diverse sources 272 such as corporations, machinery, human communication, and 273 online media destinations.The growth of data might be enor-274 mous or nonstop. Importing data can be done in one of two 275 techniques: 1st is batch data and 2nd is streaming data. It is 276 critical when selecting a BD examination stage since constant 277 cycle frequently is time-delicate and requests quicker and 278 close moment investigation results. 279 The speed of Hadoop is ideal for batch processing 280 of archive data, on the other hand the performance of 281 Apache Spark is excellent for interactive task and real time 282 analysis [23]. 283 In some cases, 5 seconds is past the point of no return. 284 For time-touchy cycles/processes such as detecting fraud, 285 BD should be used as it flows into the attempt to increase its 286 value. 5,000,000 exchange occasions and activities are inves-287 tigated to discover potential extortion every day The degree of data arrangement is referred to as data vari-290 ety. Unstructured data lacks sufficient organisation, whereas 291 structured data has a high degree of organisation [21]. The 292 diversity and fruitfulness of data representations in text, 293 audio, video, pictures, and other formats are measured by data 294 variety. 
295 From an analytic standpoint, it is most likely the most 296 significant impediment to properly utilising large amounts of 297 data. The fact that Data appears in a variety of shapes adds 298 to the overall complexity. Unstructured and semi-structured 299 data, on the other hand, are more difficult to analyse and make 300 judgments with. Due to data inconsistency, incompleteness, ambiguity, 314 delay, deception, and approximations, data is graded as good, 315 horrible, or undefined [28]. rigorous study of precise data. 331 BD is a massive information asset that necessitates 332 cost-effective and innovative data processing in order to 333 improve decision-making insight [33]. Although this defini-334 tion isn't perfect, it does provide us with a clear differen-335 tiation. We cannot retrieve the data of a dataset using this 336 definition. from [36] in terms of profitability and productivity, data-347 driven decision-making has been shown to outperform other 348 decision-making strategies. 349 A number of researchers [37] have underlined the chal-350 lenges in extracting and obtaining business value from BD 351 analytics.Some firms may afford to pay a higher price for 352 storage associated with higher tiers since the security is better 353 at those levels, resulting in a better value and cost ratio [38]. VALIDITY: Governance, Understandability, Excellency 355 Ideas for data validity and data truthfulness may be com-356 parable. However, they do not share the same ideas and 357 theories. Data should be legitimate when it transitions from 358 exploratory to actionable stage. To put it another way, a data 359 collection may not have problems with veracity, yet it may 360 not be legitimate and is not properly accepted or understood. 361 Validity of BD is necessitated by occurrence of some hidden 362 connections among pieces within large number of BD gener-363 ating sources. 364 As [30] the terms ''validity of data'' and ''veracity of data'' 365 are often used interchangeably. They are not the same notion, 366 yet they are similar. Validity refers to the data's correctness 367 and accuracy in relation to its intended use. To put it another 368 way, data may not have any concerns with truthfulness, but it 369 may not be legitimate if it is not correctly understood. 370 Importantly, the same collection of data may be appropriate 371 for one application or even use but not really for another. 372 Despite the fact that we are working with the information 373 where connections may not be distinct or in beginning phases, 374 it is basic to confirm connections between parts of informa-375 tion to some even out to validate it against utilization. In BD it defines as the length of time in which data is 378 valid [24]. We need to figure out when real-time data is no 379 longer effective and applicable for present research in this 380 field. The data should always be present in some sources, but 381 this may not be the case in others. As a result, it is neces-382 sary to comprehend the data's requirements, availability, and 383 longevity. 384 Data is retained for decades in a data standard context to 385 develop a knowledge of the value of data [30].We can readily 386 recall the structured data retention policy that we employ 387 every day in our organisations when it comes to the volatility 388 of large data. We may easily destroy it once the retention term 389 has expired. 390 This guideline and policy in real-world data storage apply 391 equally to BD. 
Such a problem is amplified in the BD world, 392 and it's not as simple to solve as it is in the traditional data 393 world. The retention time for BD may be exceeded, and 394 storage and security may become prohibitively expensive to 395 execute. Because of the variety, volume, and velocity of data, 396 volatility becomes significant. 398 BD ought to be able to stay alive and active indefinitely, 399 as well as evolve and produce additional data as needed. 400 However, researcher must do more to examine large data sets 401 instantaneously, which necessitates thorough evaluation of 402 the traits and aspects most likely to predict critical business 403 effects [39]. It collect multidimensional data using Big Data, 404 which encompasses a growing number of factors rather than 405 just a big number of records. BD is prevalent in academic study, spanning the full spec-473 trum.We will almost likely come across a vast amount of 474 data; this is due to current technology, which permit us gather, 475 analyze, and sample massive amounts of data. 476 The challenge is converting BD into useful, meaningful 477 and actionable information. This demands a wide range of 478 mathematical, statistical, and computer science tools, as well 479 as approaches that can be intimidating to the uninitiated. 480 All metadata shapes that explain the data's structure, syn-481 tax, content, and origin, such as data models, schema, seman-482 tics, ontologies, taxonomies, and other contents [47]. Geo-tag real-time location data will soon be included in 485 Online Social Networks (OSN) data, in addition to OSN 486 interaction [48]. Data based on location will soon extend 487 beyond landscape. 488 The gauntlet of prime types of technology for 3D inter-489 action and also volume rendering technology based on GPU 490 technology is addressed in one study. 491 This project investigates data-oriented and visual s/ware 492 for the hydrological environment. It also generates surface 493 contour mapping, dynamic simulations and element field 494 mapping of existing fields [49], [50]. Smart city, for example, detectors/sensors can be used 501 to track movement of vehicles in order to determine traffic 502 volumes and trends [51]. This data can then be combined with 503 information from vehicle owners to identify the correlations 504 between trip times, age groups, and places. This data can be 505 used to improve planning [52]. 507 Writing codes is a part of both data science and software 508 development. Data science is more iterative and cyclical, with 509 each cycle beginning with a basic comprehension of the data. 510 The data is collected, explored, cleaned, and trans-511 formed, and then machine-learning models are built, vali-512 dated, and deployed. Researchers and ''data science'' teams 513 aim to gather, analyse, and cooperate on large datasets in 514 order to extract meaningful insights or condense scientific 515 knowledge. 516 This type of collaborative data analysis is frequently ad 517 hoc, including a lot of back-and-forth among team members 518 as well as trial-and-error to find the correct analytic tools, 519 programmes, and parameters. The ability to keep track of and 520 reason about the datasets that are being used is required for 521 this form of collaborative study. In essence, a system to track 522 dataset versions throughout time is required [53]. Varmint is defined as the rate at which bugs age in software 556 when the BD grows massively at a rapid rate [43]. 
The capability of Big Data to reveal insight into con-593 founded and immense issues in the Data science. The amorousness, dynamic, strong, active, and sparkling 596 practices of BD come through loud and clear. These features 597 provide us with experiences, thoughts, and provision in many 598 features of our data science endeavors. 599 VICTUAL: Fuels, Nutrition, Nourishment Victual denotes 600 supplies of information to data science shape of BD. By adding evaluation for each question, we were able to 761 provide an overall score for each article (ranging from 0 to 5). 762 763 The goal is to get favorable perceptions to the presented 764 questions. Q1. To avoid publication drift, articles must be categorized 766 according to the year they were published. Q2. It is essential to determine the printing media and basis 768 for these questions (RQ). 787 Q4. The main RQ of research is apprehension incumbent 788 study in the direction of big data and Vs. We are sure in 789 giving a generic understanding of big data that is also tract the 790 current study trends after compiling all relevant investigations 791 from scientific sources. 792 This research will enhance current studies and practical 793 information on existing research challenges, assisting in the 794 process of increasing the number of Vs in big data. In the 795 828 This section specifies the results relating to the RQs defined 829 in the specified Table 1. For each RQ's results, a number of 830 publications are picked to pretence the model. We predicted 831 that they are critical and represent a significant undertaking 832 for BD domains. [3]. The problem of assisting information in DB 883 quickly and securely for the next Vs era has been overcome. 884 Supported approaches vs. The growing use of social media, 885 which is the key origin of the rise in information loads, has 886 put this property to the test [45]. We have evaluated 29 research papers after an extensive 890 analysis of 340 papers. We found only 6 conference papers 891 on the Big data domain and characteristics, none of them is 892 published in any well-reputed conferences. On the other hand, 893 we have found 23 journals,5 conferences and only 1 book is 894 published with good ranking and we found 11 journals that 895 are published without ranking. The QA scores are given in the table 4. Almost 25% articles 918 are having an average, 59% have standard, and 16% papers 919 are without any score. QA can helps to choose sited articles 920 with defined asserted. 922 In this SLR Vs in Big Data are hypothetically answer the 923 given RQ's proffered by investigation. It display that Vs have 924 been examined over years. 926 The term ''Big Data'' was originally used in a paper by 927 Diebold in the year 2000. The Big Data age has brought 928 with it a slew of new potential for promoting economic 929 growth, improving education, advancing science, strengthen-930 ing health care, and mounting social collaboration and enter-931 tainment options. However, while big data has its assistances, 932 it also has its drawbacks in form of challenges and issues. 933 Since then, the number of Vs has steadily increased. 934 In 2001, it is accredited with devising the three big Vs of 935 BD: variety, volume, and velocity. Many individuals began 936 to add up more number of V's to the characterization of BD 937 when it gained a lot of attention. Other authors referred to the 938 characteristics of big data as pillars. 939 Varacity is included to the V family as the 4th V in 2012. 
So many studies have classified the evolution of V's over time in different ways, but we sought to find a pattern of V's growth through time in this work. In 2013, Big Data became increasingly popular, and individuals began to discuss it. Different researchers added 12 new pillars of big data to the list in 2013. The count grew to 56 Vs by 2020 or more. Many researchers believe that in future years it will reach 100 V's. Figure 7 depicts the evolution of V's over time. A taxonomy has also been proposed in Figure 11. These features deliver research prospects to scholars and practitioners in order to efficiently accomplish BD. The complete study in BD circles around these features. Additionally, it can resolve a lot of problems related to BD. It also helps to differentiate the BD nature. • The majority of studies and research cover not more than four to five V's. We have not been able to locate a huge quantity of V's. In order to achieve this, research and studies on Vs in BD have to be built upon by establishing a certain standard. Big data of the Vs with archaeological photogrammetry and blockchain is discussed in depth by analyzing 29 different articles. After a thorough examination of past research, it is determined that the majority of Vs are not covered. The major goal of the research was to search the already available data and condense it into as many Vs as possible. From 2011 to 2022, 340 publications were selected from an initial list of 80699 studies, and 29 were characterized according to the intent criteria: research and contribution kind, number of Vs, issues investigated, and techniques. Vs in BD are thought to have received little attention until 2022. The majority of the chosen studies were published in various journals, although some mature publications came from conferences as well. There are two types of enquiries: experimental solution and suggestion of solution. The articles in this mapping study did not contain the design and implementations of Vs. In this study, three more Vs have been presented. In addition, this study includes a taxonomy that can assist other specialists in identifying numerous approaches that can improve the study's performance. On the other hand, evolution research must be regulated in order to assess existing strategies. As a result, the current study has deeply explored Big Data with archaeological photogrammetry and blockchain, along with the associated Vs and their brief explanations. Open source software can also be used in archaeological photogrammetry where data can be securely saved as big data. The Big Data and archaeological photogrammetry with blockchain discipline is emerging around Vs.
5,475.4
2022-01-01T00:00:00.000
[ "Computer Science" ]
Multiscale effects of habitat and surrounding matrices on waterbird diversity in the Yangtze River Floodplain With the expansion in urbanization, understanding how biodiversity responds to the altered landscape becomes a major concern. Most studies focus on habitat effects on biodiversity, yet much less attention has been paid to surrounding landscape matrices and their joint effects. We investigated how habitat and landscape matrices affect waterbird diversity across scales in the Yangtze River Floodplain, a typical area with high biodiversity and severe human-wildlife conflict. The compositional and structural features of the landscape were calculated at fine and coarse scales. The ordinary least squares regression model was adopted, following a test showing no significant spatial autocorrelation in the spatial lag and spatial error models, to estimate the relationship between landscape metrics and waterbird diversity. Well-connected grassland and shrub surrounded by isolated and regular-shaped developed area maintained higher waterbird diversity at fine scales. Regular-shaped developed area and cropland, irregular-shaped forest, and aggregated distribution of wetland and shrub positively affected waterbird diversity at coarse scales. Habitat and landscape matrices jointly affected waterbird diversity. Regular-shaped developed area facilitated higher waterbird diversity and showed the most pronounced effect at coarse scales. The conservation efforts should not only focus on habitat quality and capacity, but also habitat connectivity and complexity when formulating development plans. We suggest planners minimize the expansion of the developed area into critical habitats and leave buffers to maintain habitat connectivity and shape complexity to reduce the disturbance to birds. Our findings provide important insights and practical measures to protect biodiversity in human-dominated landscapes. connectivity Á Shape complexity Á Urban and rural planning Introduction Anthropogenic landscape modification is the major cause of biodiversity loss (Fischer and Lindenmayer 2007;Guadagnin and Maltchik 2007), and is one of the most pressing challenges for ecologists and conservation biologists. Globally, urban and rural areas are developing rapidly (Andrade et al. 2018), vastly altering the landscape composition and structure of wildlife habitats and their surroundings. However, the influence on urban development is not ubiquitous for biodiversity and is instead dependent on landscape composition and configuration at local and regional scales (Andrade et al. 2018). Wetlands, as important biodiversity hotspots, maintain high biodiversity and biological productivity (Forbes 2000;Dudgeon et al. 2006;Green et al. 2017), and offer habitat for many threatened species (Green 1996;Dudgeon et al. 2006). Though some wetlands are under protection, human activities remain a threat to wetland biodiversity, resulting in degraded ecosystem services (Green 1996;Nassauer 2004;Galewski et al. 2011;Martínez-Abraín et al. 2016). For example, due to dryland development, such as for agriculture and urban construction, large numbers of natural wetlands are deteriorated (Nilsson et al. 2005;Niu et al. 2012). Waterbirds (e.g. swans, geese, ducks, and herons), that rely on wetland habitats are sensitive to the environmental change and are often regarded as important indicators of ecosystem health (Ogden et al. 2014). 
Nevertheless, populations of such important bird groups are declining globally, which calls for new strategies for conservation of both waterbirds and wetlands (Amano et al. 2018). Habitat characteristics influence bird distribution, abundance and diversity (Paracuellos and Telleria 2004;Beatty et al. 2014). For example, Zhang et al. (2018) found that waterfowl prefer areas with wellconnected waterbodies and wetlands. Neotropical migrants are more abundant in landscapes with a greater proportion of forest and wetland (Flather and Sauer 1996). Shorebird abundance is positively affected by wetland area and number of wetlands (Webb et al. 2010). Moreover, greater habitat patch size, core area, edge and connectivity positively influence bird diversity (Wu et al. 2011). Nevertheless, the suitability of an area for birds depends on the condition of both habitat and the surrounding landscape matrix (Saab 1999;Guadagnin and Maltchik 2007;Elphick 2008;Perez-Garcia et al. 2014). For example, Morimoto et al. (2006) found that two woodland bird species prefer woodlands surrounded by agricultural areas over those surrounded by urban areas. Francesiaz et al. (2017) found that gulls prefer ponds surrounded by meadow and fallow land rather than woodland. Dallimer et al. (2010) found that the size of urban area and the amount of grassland patches affect the richness of moorland bird species in northern England. Nevertheless, studies investigating the effect of the landscape matrix have mainly considered the distance of the landscape matrix to habitats (Debinski et al. 2001;Summers et al. 2011), or the size and amount of the matrix (Guadagnin et al. 2009;Dallimer et al. 2010;Egerer et al. 2016). Thus, the effect of detailed characters (such as shape complexity and connectivity) of the surrounding landscape matrix on bird diversity are largely unknown. Landscape metrics are frequently used to evaluate landscape pattern change (Riitters et al. 1995;Lausch and Herzog 2002), habitat characters (Mcalpine and Eyre 2002;Bailey et al. 2007), and linked to biodiversity (Bailey et al. 2007;Walz 2011;García-Llamas et al. 2018). Landscape metrics can be used to assess biodiversity at a higher and integrated level (Walz 2011) as higher environmental diversity leads to higher species diversity (Ricotta et al. 2003). These metrics can also capture biotic processes, such as immigration (Honnay et al. 2003) and biotic interactions (Simmonds et al. 2019). Numerous metrics have been proposed to quantify landscape composition, configuration and connectivity (Šímová and Gdulová 2012; Sklenicka et al. 2014), covering the patch size, dominance, shape complexity, fragmentation, connectivity, landscape diversity, contagion and aggregation (Mcgarigal and Marks 1995). We used these metrics to quantify the character of habitat and surrounding landscape matrices to investigate their effects on waterbird diversity. Moreover, birds respond to their environment differently at different spatial scales and hence different conservation plans are needed across scales (Wiens 1989;Zhang et al. 2018). The surrounding environment tend to play a more important role at coarser scales as birds avoid areas highly disturbed by human activities (Si et al. 2020), which often are a large component of landscape matrices (Herbert et al. 2018;Souza et al. 2019). However, the understanding of how landscape matrices affect bird diversity across spatial scales, in particular at coarse scales, is rather limited. Previous studies (Chan et al. 
2007;Guadagnin and Maltchik 2007;De Camargo et al. 2018) investigating the effect of habitat and the surroundings on bird communities mainly focus on fine scales (500 m to 10 km). Considering that the maximum mean foraging flight distances of ducks and geese is 32. 5 km (Johnson et al. 2014) and is generally \ 50 km (Ackerman et al. 2006;Si et al. 2011;Johnson et al. 2014), we chose the spatial scale [ 10 km and \ 50 km as the coarse scales to further investigate how the landscape features influence waterbird diversity. This study investigates how habitat and landscape matrices affect waterbird diversity in the Yangtze River Floodplain across spatial scales using spatial and ordinary least squares regression models. We hypothesize that (1) habitat and landscape matrices jointly affect waterbird diversity, and (2) the effect of landscape matrices outweighs that of habitats at coarse scales. Study area The Yangtze River Floodplain (thereafter YRF, 28.3°-33.6°N, 112.2°-122.5°E; Fig. 1) is located in the humid subtropical climate zone. The annual average temperature ranges from 14°C to 18°C and average annual rainfall is from 1, 000 mm to 1, 400 mm (Xie et al. 2017;Wei et al. 2019). In this region, 11 Ramsar sites (wetlands of international importance, designated under the Ramsar Convention; http://www.ramsar. org) and 31 wetlands (including 10 national and 21 provincial-level wetlands) are designated as protected areas. A seasonal flood-drought cycle results in high water levels in spring and summer, followed by low water level in autumn and winter (Wei et al. 2019). Flooding brings nutrients and organic matter into the wetlands, during drought cycles as water levels decline, the large number of wetlands provide abundant feeding areas for waterbirds (Xu et al. 2017;Wei et al. 2019). YRF, as an important wintering area for local and migratory birds along the East Asian-Australasian Flyway, is composed of variable types of wetlands such as flooded wetlands, inland marshes, swamps and mudflats. YRF is one of the Global 200 priority ecoregions for conservation identified by the World Wide Fund for Nature (Olson et al. 1998), and it provides habitat for about one million wintering waterbirds (Wang et al. 2017). Meanwhile, YRF, flowing through Shanghai and Hunan, Hubei, Jiangxi, Anhui and Jiangsu provinces, plays an important role in Chinese economy, agriculture and industry (Hollert 2013), support 29% of China's population (about 400 million) and produces more than 40% of the national GDP (Wang et al. 2017). Intensive human activities (such as agriculture, urbanization, land reclamation and conversion, etc.) in this region makes YRF one of the most critical and endangered ecoregions in the world (Olson and Dinerstein 2002). Thus, YRF is an appropriate region to explore how species diversity responds to the altered landscape patterns. There is an urgent need to generate sustainable development plans to solve the conflicts between economic development and biodiversity conservation in YRF. Waterbird survey data We obtained the waterbird survey data for 101 sites along YRF from The World Wide Fund for Nature (WWF; survey was carried out from 9 to 13 January 2011). This time of year was chosen because the distribution of wintering birds is relatively stable and concentrated. The survey sites where bird congregate were identified based on expert knowledge. Various methods were used to approach the survey sites. The survey team usually drove as close as possible and then walked on foot. 
Birds were counted by experienced field ornithologists from early morning and through the day using telescopes, in at least two locations of each surveyed wetland. A total of 136 waterbird species were recorded during the survey. In some regions, only data at the county level were summarized and the counts corresponding to specific wetlands were not available. For example, the count in Xingzi County (Jiangxi Province, China) is the sum of three wetlands. We excluded these records and only used data for sites with accurate geographical locations of a specific wetland and corresponding bird counts for further analyses (Fig. 1). Land cover map We used the aggregation land cover map of the finer resolution observation and monitoring of global land cover in 2010 (FROM-GLC-agg; http://data.ess.tsinghua.edu.cn; Yu et al. 2014) to calculate landscape metrics. According to the classification scheme of Li et al. (2016), we reclassified the land cover map into nine types: cropland, forest, grassland, shrub, wetland, water, developed area and bareland. As wetlands are difficult to characterize by automatic classification, we replaced the water and wetland classifications in the FROM-GLC map with a 2008 wetland map generated based on human interpretation and multi-temporal imagery (Niu et al. 2012). Specifically, with the wetland map, 'water' is composed of recreational waters, artificial channels and fish farms, and 'wetland' includes shallow beaches, coastal marshes, estuary deltas, flooded wetlands and inland marshes. We then categorized land-use types into waterbird habitat (wetland, water, grassland, and shrub) and the surrounding landscape matrix (cropland, forest, bareland, and developed area). Grassland and shrub were included as habitat because grass is a potential food resource for some waterbirds and shrub could be used for resting or roosting. Cropland was classified as the landscape matrix due to a limited number of observed waterbird species in this land cover type (12/136 species). Waterbird diversity The Shannon-Wiener index has been frequently used to measure species diversity (Macarthur 1955; Lin et al. 2011; Dronova et al. 2016). It combines richness and evenness and can be used to compare the species diversity among different sites (Payne et al. 2005; Lin et al. 2011). The index (Hill 1973) is calculated for each site by Eq. (1): H' = −Σ_{i=1}^{s} P_i ln P_i, where s is the total number of species and P_i is the proportion of individuals of species i to the total individuals of all species. Landscape metrics at fine and coarse spatial scales To quantify the habitat features and landscape matrices, we generated circular buffers around the locations of sites at different spatial scales, i.e., 5 km, 10 km, 20 km, 25 km, 40 km and 50 km, as the radii. We defined the 5 km and 10 km scales as the fine scales (Forcey et al. 2011; Morelli et al. 2013), and scales larger than the 10 km scale as the coarse scales. Landscape metrics were selected based on the life-history and ecological characteristics of waterbirds (Madsen 1985; Si et al. 2011; Li et al. 2017; Zhang et al. 2018). Table 1 lists the selected metrics covering multiple forms of patch size, dominance, shape complexity, fragmentation, connectivity, landscape diversity, contagion and aggregation (Mcgarigal and Marks 1995). For patch size and shape complexity, we also calculated their mean, minimum, maximum and standard deviation. Patch size includes patch area (PA) and patch core area (PCO), with a higher value indicating a larger patch.
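To make Eq. (1) concrete, the following is a minimal sketch of computing the Shannon-Wiener index per survey site from count data. The column names and the tiny example table are illustrative assumptions, not the WWF survey's actual field names or counts.

import numpy as np
import pandas as pd

def shannon_index(counts):
    """H' = -sum(p_i * ln p_i) over species with non-zero counts."""
    counts = np.asarray(counts, dtype=float)
    counts = counts[counts > 0]
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum())

surveys = pd.DataFrame({
    "site":    ["Poyang", "Poyang", "Poyang", "Chen Lake", "Chen Lake"],
    "species": ["swan goose", "tundra swan", "mallard", "mallard", "little egret"],
    "count":   [1200, 300, 150, 80, 40],
})

diversity = (surveys.groupby("site")["count"]
             .apply(shannon_index)
             .rename("shannon_wiener"))
print(diversity)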
The core area represents the interior area of a patch after a user-specified edge buffer is eliminated. Smaller patches with greater shape complexity have a smaller PCO (Mcgarigal and Marks 1995;De Smith et al. 2007). Metrics for shape complexity include perimeter area ratio (PAR), shape index (SI) of each land cover type. Higher PAR and SI indicate greater shape complexity or greater deviation from regular geometry. Patch density (PD) and splitting index (SPI) (Green et al. 2017) represent the fragmentation level, while patch cohesion index (PCI) (Concepcion et al. 2016) indicates the connectivity level. Higher values of PD and SPI indicate more isolated patches, whereas higher PCI indicates more connected patches. Landscape Shannon index (LSHD) indicates the level of landscape diversity, with a higher value representing higher heterogeneity of patches in the landscape. Contagion index (CI) and aggregation index (AI; Li and Reynolds 1993) measure the extent of aggregation of patches for one particular land cover type. CI and AI increase if a landscape is dominated by large and well-connected patches. Landscape metrics were calculated in R 3.3.3 using the package 'SDMTools'. All metrics were standardized using z-score normalization transformation for the further analyses. Statistical analyses We first tested the influence of each landscape metric on waterbird diversity using univariate linear regression. Only significant metrics (p value \ 0.05) were included (Forcey et al. 2011). A preselection was then carried out to exclude metrics with relatively high autocorrelation or high collinearity. Specifically, we used Moran's I to detect autocorrelation and metrics with a Moran's I larger than 0.5 or smaller than -0.5 were removed. We then use Variance Inflation Factors (VIF; Marquardt 1970) to diagnose collinearity. VIF measures the amount of multicollinearity in a set of multiple regression variables and tests the multiple correlation coefficient between one variable and the rest of variables. Specifically, we dropped the metric with relatively less impact (based on the result of the univariate linear regression), and repeated this process until VIFs of each variable were\ 10. Considering the potential spatial dependency among survey sites, we used both spatial regressions (the spatial lag model SLM and the spatial error model SEM) and the Ordinary Partial Least Squares (OLS) regression. The non-significant metrics were removed, and variables kept in the final model were considered as key landscape metrics. Two spatial autoregressive models were used to detect the level of spatial autocorrelation. A matrix of spatial weights W was calculated based on Euclidean distances between survey sites. The one is the spatial lag model (SLM) that adds a lag term of the dependent variable y into the OLS model. This model explains the spatial interaction between survey sites based on their proximity, as given by Eq. (2): where b is the correlation coefficient of the independent variable X, W is a spatial weights matrix indicating distance relationship between pairs of survey sites. q is the coefficient of the spatially lagged variable Wy on the matrix of weight W applied to response values from spatial neighbors of each survey site, and e is the random error. The other model is the spatial error model (SEM) that estimates the spatial autocorrelation existing in the regression residuals of the neighboring location (i.e. the spatial error) of the OLS model, as given by Eq. 
(3), where k is the spatial autoregressive coefficient for the spatial error variable W and l is the random factor of the disturbances. We fitted in total seven models for the fine (two models) and the coarse (five models) scales. The performance of the OLS and spatial auto-regression models was compared using the Akaike Information Criterion (AIC). (In Table 1, g_ii is the number of like adjacencies between pixels of patch i based on the single-count method and max g_ii is the corresponding maximum; the value of AI ranges from 0 to 1, and a high AI means more aggregated patches.) AIC, as a model selection criterion, has a sound likelihood framework, is based on the Kullback-Leibler information loss between the estimates of the model and the actual values, and allows comparisons among models (Burnham and Anderson 2004). A lower AIC value means a better fit of the model; thus the model with the lowest AIC value is deemed the best model. Spatial regressions were carried out in GeoDa and the other analyses in R 3.3.3 software. Results Waterbird diversity of the survey sites in the Yangtze River Floodplain measured by the Shannon-Wiener index is shown in Table S1. The Shannon-Wiener index values vary between 0 and 2.6877 (mean = 1.32 ± 0.69 SD). The highest waterbird diversity was found in the Poyang Lake Nature Reserve in Jiangxi Province, followed by Chen Lake and Liangzi Lake in Hubei province, while relatively lower Shannon-Wiener values occurred in Ge Lake in Jiangsu province, the Aquafarm of Jieshou Town in Anhui province and West Yangcheng Lake in Jiangsu province (Table S1). At both fine and coarse scales, the p-value of q in the SLM and that of k in the SEM were higher than 0.05, which indicated that no strong spatial autocorrelation was observed among survey sites. Thus, we retained the OLS models to estimate the influence of landscape features on waterbird diversity (Table 2). According to the coefficient of each significant metric (Table 2; Fig. 2), we found waterbird diversity was strongly associated with the surrounding landscape matrix at both fine and coarse scales, and the effect was stronger at the coarse scales. At fine scales, a higher waterbird diversity was associated with a lower connectivity of developed area (i.e., lower PCI, a negative effect). At coarse scales, developed area showed the most pronounced effect on waterbird diversity, i.e., habitats surrounded by developed area of regular shapes (i.e., higher LSI, a positive effect) tended to have a higher waterbird diversity (Fig. 2). In addition, regular-shaped croplands (i.e. higher LSI, Mean SI and SD SI; positive effects) and larger irregular-shaped forest patches (i.e. higher Min SI and Mean PCA; positive effects) facilitated a higher waterbird diversity. Significant relationships between habitat features and waterbird diversity were found at both fine and coarse scales (Table 2). At fine scales, the important variables included the patch density (PD) of grassland and the SD shape index (SD SI) of shrub. Waterbird diversity was significantly higher in more connected grassland (i.e. lower PD, a negative effect) and more irregular-shaped shrub (i.e. higher SD SI, a positive effect). At coarse scales, the important variables were the landscape shape index (LSI) and splitting index of shrub, and the Mean shape index (Mean SI) and aggregation index (AI) of wetland. Irregular-shaped and well-connected wetland (i.e. higher Mean SI and AI, a positive effect), as well as irregular-shaped shrub (i.e.
higher LSI, a positive effect) contributed to a high waterbird diversity whereas the isolated shrub (i.e. higher SI, a negative effect) resulted in a low waterbird diversity. Discussion This study investigated the impact of habitat features and landscape matrices on waterbird diversity across spatial scales. At fine scales, well-connected habitats (grassland and shrub) surrounded by isolated and regular-shaped developed area helped maintain high waterbird diversity. At coarse scales, waterbird diversity was higher in areas where aggregated wetlands were surrounded by regular-shaped developed area and croplands, and large irregular-shaped forests. Developed areas consistently influenced waterbird diversity and showed the most pronounced effect at coarse scales. The landscape matrix in which wildlife habitat is embedded should be managed wherever possible (Prugh et al. 2008;Franklin and Lindenmayer 2009), especially when expanding the developed area. Waterbird diversity was negatively correlated with fragmentated habitats (i.e., isolated grassland, regular and isolated shrub and unconnected wetland with regular boundaries). Well-connected grassland, shrub and wetland habitat provide important foraging and resting area for waterbirds (Stafford et al. 2009;Pearse et al. 2012). Connectivity, at both fine and coarse scales, is important for waterbird aggregation (Guadagnin and Maltchik 2007). At finer scales, wellconnected habitats facilitate the movement of waterbirds between feeding and roosting sites (Elphick 2008), which can reduce the costs due to shorter foraging flight distances. In addition, we found that waterbird diversity was lower in sites with regularshaped shrub and wetland patches at coarser scales. In general, the regular and less complex patches are often associated with intensive human influnce (Mcgarigal and Marks 1995;Cunningham and Johnson 2011), whereas less disturbed patches are more complex (Krauss and Klein 2004). Furthermore, habitat patches with a higher shape complexity tended to have increased foraging resources (Andrade et al. 2018). Therefore, irregular-shaped shrub and wetland habitat helped to maintain a higher waterbird diversity due to the lower level of human disturbance and the higher level of potential food resources. Developed area was the most critical factor influencing waterbird diversity, particularly at coarse scales. Though a previous study found that the presence of developed area negatively influenced waterbird richness (Rosa et al. 2003), we suggest that habitat surrounded by isolated or regular-shaped developed area can help to maintain higher waterbird diversity. Isolated developed area indicated a lower level of connectivity of surrounding patches, resulting in a higher connectivity of waterbird habitat patches (Pearce et al. 2007;Larsonab and Perrings 2013). In other words, well-connected surrounding landscape patches (i.e. developed area) indicated higher habitat degradation and fragmentation, which leads to a lower waterbird diversity. In particular, the effect of shape complexity of developed area was more prominent. Waterbird diversity decreased as the shape complexity of surrounding developed area increased. Surrounding developed patches with a more complex shape tended to have a longer border with the adjacent natural habitats, indicating a higher level of human disturbance (Gyenizse et al. 2014). Regular-shaped developed patches resulted in less disturbance to the habitat and hence support higher waterbird diversity. 
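The metric-screening and model-comparison steps described in the Methods above (z-score normalization, iterative removal of collinear metrics until all VIFs fall below 10, and ranking of candidate regression models by AIC) can be illustrated with the short sketch below. The synthetic data, metric names, and the assumed "importance" order are placeholders, not the paper's survey data or its GeoDa/R workflow, and only ordinary least squares models are fitted here; the spatial lag and error models would be fitted analogously and compared on the same criterion.

import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(0)
n_sites = 101
metrics = pd.DataFrame(rng.normal(size=(n_sites, 4)),
                       columns=["PCI_dev", "PD_grass", "SDSI_shrub", "LSI_crop"])
metrics["LSI_crop"] = 3.0 * metrics["PCI_dev"] + 0.3 * rng.normal(size=n_sites)
shannon = (0.5 * metrics["PCI_dev"] - 0.3 * metrics["PD_grass"]
           + rng.normal(scale=0.4, size=n_sites))

z = (metrics - metrics.mean()) / metrics.std(ddof=0)   # z-score normalization

# assumed importance order (weakest first), e.g. from the univariate regressions
weakest_first = ["LSI_crop", "SDSI_shrub", "PD_grass", "PCI_dev"]

def drop_until_vif_ok(df, weakest_first, threshold=10.0):
    """Iteratively drop the weakest collinear metric until all VIFs < threshold."""
    cols = list(df.columns)
    while len(cols) > 1:
        vifs = {c: variance_inflation_factor(df[cols].values, i)
                for i, c in enumerate(cols)}
        if max(vifs.values()) < threshold:
            return cols, vifs
        drop = next((c for c in weakest_first if c in cols and vifs[c] >= threshold),
                    max(vifs, key=vifs.get))
        cols.remove(drop)
    return cols, {}

kept, vifs = drop_until_vif_ok(z, weakest_first)
print("metrics retained after VIF screening:", kept)

# candidate OLS models on the retained metrics, compared by AIC (lower = better)
candidates = {f"first {k} metric(s)": kept[:k] for k in range(1, len(kept) + 1)}
fits = {name: sm.OLS(shannon, sm.add_constant(z[cols])).fit()
        for name, cols in candidates.items()}
for name, fit in sorted(fits.items(), key=lambda kv: kv[1].aic):
    print(f"{name:20s} AIC = {fit.aic:7.2f}")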
Other landscape matrices, such as cropland and forest, also affected waterbird diversity. Regular-shaped cropland and larger irregular-shaped forest tended to facilitate a higher waterbird diversity. Similar to the developed area, regular-shaped cropland indicated a lower level of habitat invasion and disturbance. (Table 2 note: the credible interval of each estimate is 95%; *P < 0.05, **P < 0.01, ***P < 0.001, two-sided tests. Landscape metrics: PCI patch cohesion index, PD patch density, SD SI standard deviation of shape index, LSI landscape shape index, Mean SI mean shape index, AI aggregation index, Mean PCA mean patch core area, Min SI minimum shape index, SPI splitting index.) Habitats surrounded by natural land tended to support more species due to relatively low human disturbance (Vandermeer and Carvajal 2001). Larger irregular-shaped forest patches could act as a buffer insulating core habitats from intensive human activities such as urban-rural development and agricultural expansion (Findlay and Houlahan 1997), thus facilitating a higher waterbird diversity. We found that both habitat features and the surrounding landscape matrices influenced waterbird diversity at fine scales, whereas at coarse scales the effect of the landscape matrix outweighed that of the habitat. At fine scales, waterbird diversity was facilitated by well-connected habitats surrounded by regular-shaped developed area, whereas at coarse scales the surrounding matrices (with the shape of developed area outperforming the others) played the most important role in determining species diversity. The reason might be that initial habitat selection is mainly based on the appearance of the landscape (Moore and Aborn 2000), and birds tend to avoid regions where habitat is surrounded by well-connected landscape matrices. This kind of landscape tends to have more fragmented habitat patches and relatively higher human disturbance. Among the different types of landscape matrices, developed area had the most pronounced negative effect on waterbird diversity, probably because the intensity of human activity is highest in developed area compared with the other landscape matrices. We acknowledge that imperfect detection during surveys (false absences or false presences of species) might negatively impact data quality and interpretation. We suggest increasing the number of surveys for each location in the future to further validate our findings. (Fig. 2 abbreviations: S-LSI, LSI of shrub; C-Mean SI, Mean SI of cropland (C); W-Mean SI, Mean SI of wetland (W); C-SD SI, SD SI of cropland; W-AI, AI of wetland; D-Mean SI, Mean SI of developed area; F-Min SI, Min SI of forest (F); F-Mean PCA, Mean PCA of forest; S-SPI, SPI of shrub.)

Conclusion

Habitat features and landscape matrices jointly affected waterbird diversity, and the effect of the landscape matrix was more pronounced at coarse scales. Well-connected habitats (e.g. wetland, shrub and grassland) surrounded by isolated regular-shaped developed area and cropland, and large irregular-shaped forest, helped maintain a higher waterbird diversity. Regular-shaped developed area was a critical factor that consistently facilitated a higher waterbird diversity across scales.
Wetland managers should maintain well-connected habitats (wetland, grassland and shrub), and urban and rural landscape planners should minimize the expansion of developed areas into critical habitats and leave sufficient buffer to maintain habitat connectivity and shape complexity in order to reduce the disturbance to birds. Our findings provide insights into understanding how waterbirds respond to altered landscapes and offer practical measures to help mitigate human-bird conflicts in biodiversity hotspot areas.

Acknowledgements This study was supported by ... of China (No. 41471347), donations from Delos Living LLC and the Cyrus Tang Foundation to Tsinghua University, and the China Scholarship Council (201806210038). We thank the World Wide Fund for Nature (WWF) for providing bird observation data, and Y. Zheng for providing the boundary of the Yangtze River basins.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
6,237
2020-09-29T00:00:00.000
[ "Environmental Science", "Biology" ]
A CREATIVE PARTNERSHIP AND AN INNOVATIVE AND EFFECTIVE MULTIMEDIA ENGLISH LANGUAGE LEARNING AND TEACHING WEBSITE

"in2english" as an innovative and effective multimedia English language learning and teaching website came about through a creative partnership between the British Council (BC), the British Broadcasting Corporation (BBC), China Central Radio and Television University (CCRTVU) and CRTVU-Online Distance Educational Technology Limited (CRTVU-Online). As a cooperative endeavor it combines British expertise in English teaching and development of online materials with CCRTVU's knowledge and experience of the English learning environment in China as well as expertise in ICT and development of online materials. At its inception many of the website's features were innovative and cutting edge. The free website goes beyond text and makes extensive use of multimedia. Since its launch on 5th November 2002 the website has welcomed nearly 10 million unique visitors, who have visited it 18 million times, from its target audience of business people, English teachers, young professionals and their children. During five years of development "in2english" has evolved into a highly interactive community. "myin2english", a personalized feature that helps foster learners' participation, and ventures into the world of mobile learning highlight its continuing commitment to innovation. However, it is also a time to reflect on the lessons learned about Chinese learners' attitude to online learning. A retrospective and critical approach is needed to see how far we have come, how creative and innovative we are now and how we should proceed in the future.

"in2english.com.cn" as an innovative and effective multimedia English language learning and teaching website came about through a creative partnership between the British Council (BC), the British Broadcasting Corporation (BBC), China Central Radio and Television University (CCRTVU) and CRTVU-Online Distance Educational Technology Limited (CRTVU-Online) in 2002. As a cooperative endeavor it combines British expertise in English teaching and development of online materials with CCRTVU's knowledge and experience of the English learning environment in China as well as expertise in ICT and development of online materials. The common ground strategy of the four partners brought about the establishment of the site and this creative partnership has been maintained throughout its development. At its inception many of the website's features were innovative and cutting edge. As a free website it went beyond text and made extensive use of Flash and multimedia. Since its launch on 5th November 2002 the website has welcomed nearly 10 million unique visitors, who have visited it over 18 million times, from its target audience of business people, English teachers, young professionals and their children throughout China. During five years of development "in2english.com.cn" has evolved into a highly interactive community. A personalized feature known as "myin2english" helps foster the learners' participation, and the site ventures into the world of mobile learning. This highlights its continuing commitment to innovation. However, it is also a time to reflect on the lessons learned about Chinese learners' attitude to online learning and also the international cooperation between partners. A retrospective and critical approach is needed to see how far we have come, and how we should proceed in the future.
BACKGROUND OF THE FOUR PARTNERS "in2english".com.cn is a joint British-Chinese English language learning project.Collectively known as the Multimedia English Language Learning Initiative (MELLI), four partners have been working together to develop and deliver a comprehensive online English language learning website for China.The parties investing in this unique Britain-China partnership are: The British Council The British Council -the Cultural & Education Section of the British Embassy in China -is well respected for the quality of its English language teaching projects.With "in2english".com.cn, the British Council wants to deliver a quality English language-teaching product that concurrently presents positive aspects of the UK and China in this important Britain-China project. CCRTVU CCRTVU has high brand awareness and a long-standing history in adult education and distance learning.About 2.6 million students currently study through CCRTVU.Most are required to study English as part of their course and some are English majors training to be English teachers, at both primary and secondary level.CCRTVU's experience offers unique access to the market of distance learners and gives the project invaluable physical association with China, ensuring local relevance, and the potential to reach over a million learners directly. BBC BBC has been teaching English for 60 years, reaching learners that are situated around the globe, in the full range of linguistic markets.Over six decades it has become well known for its radio English learning programs, particularly in China, and more recently is increasingly recognized worldwide for the quality online learning it offers. In a market where there is a gap in quality content provision the BBC brand can be maximized as a known and trusted content provider, to communicate a message of quality, reliable content on the web. CRTVU-Online The site is technically supported by a complementary new media service, CRTVU-Online, a joint venture from the established CCRTVU brand and one of China's leading local brand electrical and electronics companies, TCL.CRTVU-Online brings its experience as part of the new era of media delivery in the Chinese educational market to the project.It has a strong commitment to helping China achieve its educational potential.We can see that the four partners have their own strengths with different aims and targets.As partners working on the same project it is essential to find the common ground to benefit all the partners so as to build an English language learning and teaching website for a selected target audience in China. MEETING THE NEEDS OF THE FOUR PARTNERS The different partners clearly have different needs, from public service to commercial, and want to reach different consumers, from teachers to business professionals to the wider ELT market.The challenge facing the cooperation is to make a strategic fit between the different partners and their different needs.In this way, the joint venture would be able to take advantage of all partners.The individual strengths of the partners behind the "in2english".com.cnsite, as existing English language teaching and distance-learning providers, offer a synergy of brand strengths. 
By analyzing the areas of overlap in the strategic fit of all partners, in conjunction with market data, we were trying to find a clearer set of business aims and objectives for the joint venture, and to define primary and secondary targets. We needed to focus our efforts and activity around the primary targets and also discover what each partner could offer.
• Deliver quality, innovative ELT to key target groups in China.
• Strengthen brand values and secure market share.
• Introduce course-based, structured learning to help learners achieve goals.
• Develop the existing and new online product portfolio.
Primary targets:
• Support teachers and teacher training for primary education.
• Reach the business sector and professionals in key sectors.
Secondary targets:
• Younger urban professionals and the wider ELT public.
• Kids of the above professional groups.
• Communicate with a tertiary target group of influencers, opinion formers and business trend-setters.

Core Business Objectives
The redefined joint aims of the project are the following core business objectives: the loyalty program, the reaching teachers program and evaluation. Loyalty program - Invest in the "in2english.com.cn" online brand, develop the open access site, introduce learning paths, strengthen brand values and secure market share in key target consumer groups of business and teaching professionals. Reaching teachers program - Work closer with key partners, introduce learning paths for primary teachers in conjunction with partner needs and activities. Evaluate - Monitor activity, measure achievements and feed back into site content, business and product development, and communications initiatives.

Core Business Targets
Focusing our limited resources, we must drill down to identify and define our target end consumer groups within key professional sectors, further develop our product proposition in line with market and partner needs, and tailor our message to reach each target group. In order to move forward together, we needed to agree on our objectives and targets. And we should agree that we would focus our efforts and activity around the primary targets. After analyzing the differing needs of the different partners, from public service to commercial, and their different potential customers, from teachers to business professionals, an area of overlap was found. It was in this area of overlap that "in2english.com.cn" would operate.

MAKING THE MOST OF THE FOUR PARTNERS AND MEETING THE USERS' NEEDS BROUGHT THE SITE SUCCESS
Given sufficient goodwill on all sides, we tried to fit the strategy of the four partners and at the same time take advantage of all partners.
Site Content Architecture Having identified our target users, "in2english".com.cnsets out to design each zone according to the specific needs of each group.The two primary targets, teachers and business professionals were supported by two separate zones.The Teaching English zone was targeted to Primary school teachers whose own level of English ranges from lower to upper intermediate.Working English was for business people who are interested in learning English for work and whose level is intermediate and upper intermediate.Another target was young urban professionals and the wider ELT public.The Living English zone was created to cater for this group of people who are interested in learning English to enhance their job prospects, and increase travel and educational opportunities.Finally the children of these key group people had a zone of their own For Your Kids although it was hoped some of the parents would find this a fun learning zone as well.In 2004 a community section was added which has become hugely popular with users.They see this resource as a way of improving their English through communicating with users and editors and people's replies and assistance as a motivation to their English language learning interest.The community grew and became as one big friendly family. As the website developed, what became increasingly clear was that many desired to pass exams such as IELTS and from 2003 an additional zone English for Tests was added which featured examination practice for IELTS and later for CET 4 and BULATS.The target group language level was intermediate or post-intermediate and the target group members were typically non-English majors at university or college who had achieved Band 4 in English and were recent or soon-to-be university or college graduates. Innovative and Interactive Content At its inception "in2english".com.cn offered many innovative and cutting edge features and to this day user feedback has still stressed their appreciation of the resources of flash exercises. In an analysis of our strengths and weaknesses this ability to deliver rich interactive content was one of our major strengths.Another important strength lay in our innovative international partnership with "in2english".com.cnrepresenting a quality brand of English.Other innovative features of the website include: interactive content, personalized learning paths, as well as functionality for users to monitor personal progress and achievements. 
The aim of the zones was to offer users a choice of creative and innovative materials appropriate to their interests and needs. The key qualities of the zones can be described as: innovative, creative, engaging, attractive, practical, easy to use and appropriate to the target audience. These features took full advantage of what the web could offer e-learning. The zones are creative and attractive, subjects are dealt with in an original fashion and the layout, colours and graphics used are attractive. The zones look professional and serious and were very attractive for serious and motivated users. Taking the Teaching English Zone of "in2english.com.cn" as an example, it was set up in order to provide innovative and creative multimedia English Language Teaching support to a target audience of Chinese teachers of primary EFL. The target users were very different, however. The teachers in this target audience typically possess a low level of English ability and often have little to no formal or recognized training or certification in primary EFL teaching. In addition, primary EFL teachers in many schools in China have very few resources available to them outside a textbook. The Teaching English Zone aimed to provide resources for this group of teachers to coincide with the Basic Requirements for Primary School English outlined by the Ministry of Education in China. It also aimed to provide basic instruction about pedagogy and the EFL profession to support those teachers who were looking to improve their skills and those looking to take the Teaching Knowledge Test or some other recognized initial EFL teaching certification. An underlying aim of the site is one of helping to improve the language level of the regular users of the Teaching Zone.

How engaging the site was became an important issue. We felt there was a good deal of support to help independent users, such as glossaries and feedback. There was also a lot of material which would again encourage motivated users. However, some users found it difficult to find materials that they wanted to try, and so on pages within the various sections we suggested activities that users could try. Another need that cropped up when the site was up and running was that serious users might like to have records, and so "myin2english" was started in 2004 as a personalized area where users could record their progress.

Site Promotion
In 2001, China was successful in its bid to host the 2008 Olympics and in the same year China also became a member of the World Trade Organization. The market became flooded with low-standard English language materials. The challenge was to create quality e-educational content and relevant English language learning materials. Next, although the partners all had established ELT credentials, it was important to develop loyalty and awareness of the "in2english.com.cn" brand. Each of the partners committed themselves to promoting the site in various ways.
The British Council gave press and PR support, including news distribution and media relations, specifically reaching general and educational media. They also encouraged brand exposure through tied-in activities around British Council projects that reached teachers and teacher training sectors. When it came to brand exposure the BBC were particularly efficient. They developed 30 to 60 second radio slots of bite-sized quality English language learning content, with an "in2english.com.cn" sponsorship line to help expose the brand on the BBC's own airwaves. They featured "in2english.com.cn" in the BBC's Learning English and the BBC/BC's joint Teaching English online and offline promotional and educational support materials. Through tied-in activities around BBC projects in China, they reached trade and business sectors, the educational sector, and the wider public with an interest in education and English language learning. On the Chinese side, CCRTVU provided details of useful content and subject areas in their primary teacher training courses to help "in2english.com.cn" tailor teaching materials on the site towards specific teaching needs. They introduced some content from "in2english.com.cn" into their own courses and course materials to create a cross-promotional environment. CCRTVU's TV presenters promoted the site and "in2english.com.cn" was presented in their events, conferences, presentations and grass-roots activities amongst their target groups, including the wider public. Finally, CRTVU-Online's marketing department provided relevant information for a more market-focused product as well as tied-in activities around relevant media relations and publicity events organized by CRTVU-Online, to help expose the brand. They explored the possibilities of reaching beyond the CCRTVU market, to the wider traditional educational market with online access that had an interest in English language learning.

Site Evaluation
According to the survey and evaluation of the site, "in2english.com.cn" has been a great success. The following are the assessment and evaluation statements. The survey by Synovate shows user satisfaction with the website. Kim Ashmore, Manager of LearnEnglish Kids, ELT group, British Council, evaluated the Living English zone of the site: The zone goes beyond text and makes extensive use of multimedia: there are streaming videos, many sound files, and links from the zone to community features. These features (multimedia, community features) take advantage of what the web can offer e-learning. The zone makes extensive use of Flash for its activities - the Flash routines look sophisticated and use the latest technology. The site is innovative in a field (e-learning) where most free sites offer little more than text and links.
Zhang Shaogang, assistant President of CCRTVU, expressed his comments on "in2english" saying: This is my first experience of working on a Steering Group with international partners and I have been impressed especially by the working practices, for example all the planning and arrangements before meetings, having an agenda, sticking to an agenda at a meeting which makes them more efficient and focused. I think this is a good way of working with partners. In international co-operation, I always use in2english as a reference point as it is our most successful international project. I bring up in2english and use it as the model of success for all types of high level meeting. Also when I negotiate other international projects, for example with French or German partners, I always refer to in2english as the standard or benchmark for international co-operation.

John Whitehead and Michael Houten from the British Council in London came to the "in2english" project team for evaluation and concluded: "in2english.com.cn" is clearly a very successful project and this is a great tribute to the China team and, in particular, the "in2english" operational team. Partners and users alike acknowledge its quality and effectiveness. It has delivered strongly on the defined objectives and has provided the organization with many useful learning points to contribute towards the development of future e-English learning products. Users' statistics from 2003 to 2007 showed the successful achievement of the program. Therefore, based on "in2english.com.cn", the BC established another two similar sites, in Egypt and Mexico, copying the same model. Furthermore, many schools and universities use resources of "in2english.com.cn" to go with their courses. In fact, four students used "in2english.com.cn" as a successful ELT site to write their dissertations for their master's degrees.

DISCUSSION OF PROBLEMS OF SITE DEVELOPMENT AND PARTNERS' COOPERATION
Although the site proved to be a great success, problems had to be solved on the way to this success, both in the establishment and development of the site and in the cooperation and communication of the four partners.

Site Update
Once the site was launched, despite an overall initially good reaction, various problems emerged according to the survey we made. The first focus was "who was the site for?" Generally speaking, the home page design was well received by most adults, but some younger people gave the impression that they found the site to be somewhat too 'adult'. On the other hand, those who liked the design said that it was suitable for an English teaching website due to its simple design, clarity and clear colour. So the fundamental question became: Is this site for learners or teachers? For adults or children? For some people the two pairs of target groups are not easily compatible and this is a problem we will have to address as we move forward. If we have something for everyone, does this mean not much real material for anyone in particular? We need to be careful not to do too much with little depth or we risk becoming a jack of all trades and master of none.
The most important way that we tried to address the issues of interactivity was through the setting up of a community zone.This together with the yin2english which had been initiated the year before increased the level of user participation.In fact to begin with there were three communities, teachers, kids and a combined business and living community.It was found that parents were not paying attention to their children while online for security reasons and we removed that option. In 2005 Learning English was averaging 40 messages a day with the users of the message board posting messages on a variety of topics, from language queries to general chat.However, unfortunately, the Teaching English community averaged only 0-3 messages a day and the feedback from the teachers was that they did not find it particularly rewarding.Therefore it was decided to merge the two into one overall community.This proved a great success and various activities were employed to make it more attractive.For instance the message board was used to ask competition questions with prizes in order to encourage loyalty.Teachers took part in online forums and loyal users were invited to appear in the 'People in the Spotlight' area in the Living English area. Many users started their own web log, which created debate and interaction between users.A list of top 10, later amended to top 20, users was started, again to encourage loyalty.In short it was found that the site needed to be updated constantly in order to meet the needs of the target groups. Site Promotion After the establishment of the site, we managed to promote the site mainly through some inhouse activities and conferences with the four partners.But it was not very effective.Later we focused on online promotion, such as online syndication, link exchange and newsletters.Our user figures increased dramatically. One problem is brand recognition; that the website was a creation of partners did not seem to be very clear to our users.People were aware of the four logos, however they knew little about them and didn't have a clear idea about the role of the four institutions.The BBC was generally recognized but only when users were told about the BC did they reason that this would give the website an authoritative feel, and therefore it must teach authentic English.Young people don't know a lot about CCRTVU and not so many people know CRTVU-Online because it is a newly established company.A clear strategy for promotion of brand recognition must be well planned to achieve the satisfying effects.The result of the survey is stated in Table 4. Generation of Income As a free website which has run for over 5 years, we do hope to generate some income to sustain the maintenance of the site.The VIP section and, later mobile learning, were attempts to make money as the budget for the site was reduced drastically after 2005.Both, unfortunately, had very limited success due to these resources being available free elsewhere and, in the case of the chat room, of being only of interest to those taking IELTS.Another reason is that people still think that face-to-face study is the most effective method of learning English. 
Project Management Structure
When we first started the project, the first problem was how to manage the project in areas such as site design, resources, research, promotion, personnel, finance and business development. Different partners have different aims and policies and the members of the project team come from different organizations. We solved the problems by having a three-level management structure: Project Team, Management Group and Steering Group.
• The Project Team is responsible for the daily work such as site design and updates, resources development, and planning for promotion and the market.
• The Management Group is in charge of cooperation with the different departments of the four partners and of trying to solve the problems that occur in the implementation of the team's work.
• The Steering Group makes decisions on the strategy and orientation of the site and also gives approval of important proposals, such as business plans, personnel and finance.

Partners' Strategy Modification
The four partners modified their strategies and policies during the five-year cooperation, especially the BBC and the BC in China. The BBC reduced its investment in resources and personnel due to its greater focus on radio program development with Chinese stations in 2005. The BC, which had given important financial support in the first three years, reduced its investment and involvement of resources and personnel owing to its new establishment of the Global English project in 2007.

CONCLUSION
In conclusion, it is in the common ground of mutual benefits and in the combination of the four partners' strengths that "in2english"'s success lies. Partners respect each other and try to solve the problems that occur with equal commitment to cooperation. On the other hand, they sign detailed agreements to state clearly the project strategy, management, budget, personnel, copyright, and so on. In this way all the partners have a healthy and harmonious cooperation and agree on the overall aims of the site whilst keeping the partnership creative and innovative in the interests of developing international cooperation.

Table 1. Analysis of Overlap Areas in the Strategic Fit of All Partners: support their association with Chinese educational and e-learning ventures, to deliver quality ELT; build the brand and establish market share; develop new online/offline products and support materials, towards revenue generation.
Table 2. Users' Satisfaction of the Website
Table 4. Brand Recognition Survey Online (275 people were involved)
5,551
2008-03-01T00:00:00.000
[ "Computer Science", "Education" ]
High-Brightness Self-seeded X-ray Free Electron Laser to Precisely Map Macromolecular Structure

We demonstrate a hard-X-ray self-seeded (HXRSS) free-electron laser (FEL) at Pohang Accelerator Laboratory with an unprecedented peak brightness (3.2 × 10^35 photons/(s·mm^2·mrad^2·0.1%BW)). The self-seeded FEL generates hard X-ray pulses with improved spectral purity; the average pulse energy was 0.85 mJ at 9.7 keV, almost as high as in SASE mode; the bandwidth (0.19 eV) is about 1/70 as wide, the peak spectral brightness is 40 times higher than in self-amplified spontaneous emission (SASE) mode, and the stability is excellent, with > 94% of shots exceeding the average SASE intensity. Using this self-seeded XFEL, we conducted serial femtosecond crystallography (SFX) experiments to map the structure of lysozyme protein; data-quality metrics such as Rsplit, multiplicity, and signal-to-noise ratio for the SFX were substantially increased. We precisely map out the structure of lysozyme protein with substantially better statistics for the diffraction data and significantly sharper electron density maps compared to maps obtained using SASE mode.

Introduction
The extreme peak brightness and ultrashort pulses provided by X-ray free-electron lasers (XFEL) [1][2][3][4][5] allow data collection from micrometer-sized protein crystals at room temperature (the functional temperature of their constituent molecules) while outrunning radiation damage. This 'diffraction-before-destruction' approach has been applied in serial femtosecond crystallography (SFX), which has revolutionized X-ray crystallography and has been considered an important tool to determine the structure of proteins that are difficult to crystallize [6][7][8][9]. XFELs have noisy and spiky spectra because the devices exploit the self-amplified spontaneous emission (SASE) that starts from the electron beam shot noise. The fluctuation of the noisy and spiky spectra of the XFEL can limit the data quality of SFX. Self-seeding is a promising approach to overcome the deficiencies of XFELs and to realize bright, fully coherent FEL sources in the hard X-ray domain. The use of seeded FEL pulses, with their higher reproducibility and 'cleaner' spectrum than SASE, might accelerate convergence of the merged reflection intensities of the SFX data 10. However, existing hard X-ray self-seeded (HXRSS) FELs have limited radiation pulse energy and spectral brightness, and inadequate stability. A previous study 11 of the self-seeded XFEL for SFX did not show any improvement in the data-quality metrics of the SFX compared to SASE, in contrast to the expectation that the use of self-seeded pulses might result in SFX data of a superior quality to that collected using SASE pulses. The idea of self-seeding 12 has been proposed to overcome the limitation of the SASE FEL, and later a self-seeding scheme with a four-crystal monochromator in Bragg reflection geometry was proposed 13. A more-compact self-seeding scheme 14 that uses a single-crystal monochromator was proposed for the hard X-ray region; this design exploits the phenomenon that the forward Bragg-diffracted monochromatic beam has a small delay time, which allows a very short (< 5 m) chicane. The forward Bragg diffraction (FBD) through a thin diamond crystal produces a train of monochromatic wakes that trail the main X-ray pulse by a few tens of femtoseconds 15,16.
By using a magnetic chicane to detour the electron bunch (e-bunch) so that it and the wake overlap in time, the monochromatic seed signal can be amplified in the downstream undulators. The first successful demonstration of an HXRSS FEL using an FBD monochromator was performed at the Linac Coherent Light Source (LCLS) and produced 8.3-keV X-ray pulses with a bandwidth of 0.4-0.5 eV, about 1/40-1/50 as wide as the SASE bandwidth 17. The average pulse intensity of the HXRSS FEL pulses at the LCLS was 573 ± 290 µJ at 5.5 keV, and the intensity fluctuation was ~50% 18. The average peak spectral intensity was 1.7 times larger than in SASE mode. In the self-seeding experiments at LCLS, including a soft X-ray self-seeding, the radiation spectrum often showed a pedestal-like distribution around the seeded frequency; this distribution limits spectral brightness 17,19,20. The pedestals originate in longitudinal phase space modulations produced by microbunching instability (MBI) upstream of the undulators. The problem of the large delays in Bragg-reflection monochromators 13 was overcome, and self-seeding using Bragg reflections was demonstrated at the SPring-8 Angstrom Compact Free Electron Laser (SACLA); the design used a channel-cut Si crystal monochromator with a tiny gap of 90 µm. 21 The seed, with a bandwidth of 1.3 eV (full-width at half-maximum, FWHM) at 9.85 keV, was filtered from the SASE radiation by the 111 Bragg reflections from the channel-cut crystal. The average pulse energy of the self-seeded XFEL at 9.85 keV was 450 µJ, vs. 780 µJ in SASE mode. The peak spectral intensity of the self-seeded XFEL pulses was six times higher than in SASE mode. However, the X-ray pulses had a relatively large bandwidth of 3 eV (FWHM) because of the large bandwidth of the seed signal as well as an energy chirp in the e-bunch. Using the Si-220 Bragg reflections from the channel-cut crystal, the bandwidth was reduced to 0.6 eV (FWHM) at 9.0 keV with an average pulse energy of ~250 µJ 22. At PAL-XFEL we demonstrated an HXRSS FEL using an FBD monochromator (Fig. 1) that has favorable source features compared to SASE mode: an average peak spectral intensity that exceeded that of SASE by a factor of 12 in self-seeded mode; a pulse energy of 0.85 mJ at 9.7 keV; substantial improvements in the stability of self-seeding; and substantially suppressed pedestal effects. We also demonstrated that the self-seeded XFEL, with its unprecedented peak brightness and high stability, can substantially increase data-quality metrics of the SFX, such as R split, multiplicity, and signal-to-noise ratio (SNR), and yield significantly sharper electron density maps than those obtained using SASE mode.

Improving The Spectral Brightness Of Self-seeded Xfel
To increase spectral brightness, we used e-bunches with a higher charge (180 pC) and a longer duration (42 fs FWHM) than in the previous study 17,24. The e-bunch energy is 8.538 GeV; the undulator parameter is 1.87; and the duration of the SASE FEL radiation pulse is 20 fs (FWHM), as measured using the cross-correlation method 23,24 (a consistency check of these parameters is sketched below). A laser heater (LH) was used to suppress MBI (Methods Section). One prerequisite to generate X-rays that have narrow bandwidth (narrowband) and high spectral brightness for the HXRSS FELs is a narrowband seed 25. For efficient seeding, the bandwidth of the monochromatic seed must match the FEL bandwidth, and the duration of the monochromatic wake should be comparable to or longer than that of the SASE signal for seeding.
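As a rough cross-check of the quoted beam parameters, the sketch below evaluates the standard planar-undulator resonance condition. The undulator period used here is an assumed typical value for hard X-ray undulators and is not stated in this excerpt, so the numbers are illustrative only.

```python
# Planar-undulator resonance condition (on-axis fundamental):
#   lambda_photon = (lambda_u / (2 * gamma**2)) * (1 + K**2 / 2)
E_beam_GeV = 8.538      # electron beam energy, from the text
K = 1.87                # undulator parameter, from the text
lambda_u_m = 0.026      # undulator period: ASSUMED value, not given in this excerpt

gamma = E_beam_GeV * 1e3 / 0.511                          # Lorentz factor
lam = lambda_u_m / (2.0 * gamma**2) * (1.0 + K**2 / 2.0)  # radiation wavelength [m]
E_photon_keV = 12.398419 / (lam * 1e10)                   # hc ~= 12.398 keV*Angstrom
print(f"lambda ~ {lam * 1e10:.2f} Angstrom -> E ~ {E_photon_keV:.2f} keV")  # close to 9.7 keV
```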
To generate such a narrowband seed, we use high-index Bragg reflections (i.e., 33-3 or 115) with an FBD bandwidth of ~0.1 eV, instead of the 004 reflection with a ~0.3-eV bandwidth, as typically used in short-pulse modes 17,24. Unlike the self-seeding in reflection geometry 21,22, the path length delay of the seed pulse does not depend on the crystal index, so the high-index Bragg reflections, like 33-3 or 115, can be used to generate a seed bandwidth < 0.1 eV. For a better overlap with the long e-bunch and a higher seed intensity, we use the 0th-order wake of the FBD signal as the monochromatic seed instead of the 1st-order wake that has been used in previous experiments (Fig. 1f) (details in ref. [25] and Supplementary Fig. 1). The duration of the 0th-order wake, in this case, is sufficiently long (~65 fs) to allow for sufficient delay of the e-bunch with respect to the SASE pulse, and full separation of the seed signal from the SASE background (Supplementary Fig. 2). Another prerequisite to increase spectral brightness is to suppress the pedestal-like distribution around the central seed frequency. This effect originates in the MBI induced by bunch compression, which creates detrimental sideband modulation of the e-bunch. Once the sidebands are generated, the electron oscillations are driven by the multiple-frequency ponderomotive potential. As a result, the efficiency of FEL generation at the carrier frequency is reduced, and the spectral quality is degraded by diversion of radiation power into sideband frequencies 26,27. A laser heater can efficiently suppress MBI in both SXRSS 20 and SASE mode; 28-31 however, for the HXRSS, the improvement of spectral brightness by using a laser heater has not been investigated experimentally. The LH can suppress the MBI and increase the peak spectral intensity in self-seeded mode (Figure 2). The pedestal around the center peak is significantly reduced as the slice energy spread increases, so the peak intensity increases (Fig. 2a). These results show that the sideband amplification due to the MBI is effectively suppressed, so the main peak is solely amplified. The fraction of FEL intensity enclosed within the bandwidth shows that spectral purity is significantly increased using the LH (Fig. 2b). As the slice energy spread increases, the peak intensity (solid red line) also increases (Fig. 2c); it reaches its maximum when the slice energy spread is ~27 keV. The optimal slice energy spread for self-seeded mode is about 5 keV higher than for SASE. 32 To suppress pedestal effects due to microbunching instability substantially, a calm longitudinal phase space with further-suppressed energy modulation is required (Fig. 1e). However, the total sum of the spectrum (blue dotted line in Fig. 2c) remains almost constant until the LH-induced energy spread reaches 27 keV; this result supports the hypothesis that unsuppressed MBIs channel the radiation power into the sidebands. Single-shot spectra maps (Fig. 3a) were obtained using a 0.26-eV-resolution Si(333) curved-crystal single-shot spectrometer 33 (Supplementary Fig. 3) for 9.7-keV X-rays in SASE and self-seeded modes. The measured bandwidths of the X-rays were 13.0 ± 0.1 eV in SASE and 0.35 ± 0.01 eV in self-seeded mode (Fig. 3b); the latter dropped to 0.24 eV after deconvolution from the spectrometer resolution. More-accurate measurements than these were obtained using a 0.09-eV-resolution Si(333) flat-crystal scanning spectrometer; they reveal a time-averaged bandwidth of 0.21 ± 0.01 eV in self-seeded mode, which drops to 0.19 eV after deconvolution (Fig. 3c).
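The deconvolved values quoted above are consistent with removing the spectrometer resolution in quadrature; the minimal sketch below reproduces them under the assumption that both the FEL line and the spectrometer response are Gaussian.

```python
import math

def deconvolve_fwhm(measured_eV, resolution_eV):
    """Subtract the spectrometer resolution in quadrature (Gaussian approximation)."""
    return math.sqrt(measured_eV**2 - resolution_eV**2)

# Single-shot spectrometer: 0.35 eV measured with 0.26 eV resolution.
print(round(deconvolve_fwhm(0.35, 0.26), 2))  # ~0.24 eV
# Flat-crystal scanning spectrometer: 0.21 eV measured with 0.09 eV resolution.
print(round(deconvolve_fwhm(0.21, 0.09), 2))  # ~0.19 eV
```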
The FBD seed bandwidth in the 33-3 Bragg reflection from the diamond crystal is 0.06 eV (Supplementary Table 1), but the resultant bandwidth of the self-seeded XFEL increased to 0.19 eV because of the energy chirp of the e-bunch. Assuming the same pulse duration in the self-seeded mode as the SASE FEL radiation pulse (20 fs), and a Gaussian pulse shape, the Fourier-transform-limited HXRSS FEL radiation bandwidth should be ~0.1 eV. The single-shot pulse bandwidth is definitely smaller than the 0.19-eV averaged bandwidth, so the PAL-XFEL HXRSS pulses are less than a factor of two larger than the Fourier-transform limit. Self-seeded mode had a 12 times higher average peak spectral intensity than SASE mode (Fig. 3b), but this number is limited by the spectral resolution of the single-shot spectrometer. The average pulse energy of the HXRSS at PAL-XFEL is 0.85 mJ, or 57% of the 1.5-mJ average pulse energy in SASE mode, as measured by the electron energy loss scan 34. Appropriate undulator tapering was applied for 20 undulators (Supplementary Fig. 4). The ratio of the integrated spectral area of the single-shot spectrometer for SASE to self-seeded mode is 1.64; this ratio is consistent with the XFEL intensity ratio of 1.76 (1.5 mJ to 0.85 mJ) that was measured by the electron energy loss scan. Overall, the peak brightness of the PAL-XFEL HXRSS FEL is calculated to be 3.2 × 10^35 photons/(s·mm^2·mrad^2·0.1%BW), which is 40 times higher than that of SASE and the highest achieved to date. The radiation pulse energy of the PAL-XFEL HXRSS is both high and very stable. The self-seeded mode has a consistently higher intensity than SASE mode (Fig. 3d). In self-seeded mode, > 94% of the shots have an intensity higher than the average SASE intensity (i.e., > 1 a.u.). Such seeding stability is mainly due to the stability of the PAL-XFEL, which has a very small shot-to-shot electron-energy jitter of 0.012% (r.m.s.) 24,31. The resultant shot-to-shot fluctuation of the central radiation wavelength of the SASE FEL was measured to be 0.025% (r.m.s.), which is one-half the Pierce parameter ρ = 5 × 10^-4 (relative SASE bandwidth, ~5.6 × 10^-4), so self-seeded pulses are almost always amplified.

Serial Femtosecond Crystallography With A Self-seeded Xfel
To solve a structure for the SFX, the necessary number of indexed snapshot patterns of crystals depends on the SNR of the individual patterns, the symmetry of the crystal, and the variability of parameters on which the diffraction depends from shot to shot (such as the chaotic spectrum of FEL pulses) 6. These factors influence the final accuracy of the merged data. To determine de novo the structure of a protein for which no homologous structures exist, experimental phasing of the SFX data is needed, and the data must have high resolution and a very high multiplicity of data sets for phase determination 7,10,35-38. The large shot-by-shot variations in X-ray intensity and photon energy may make experimental phasing of XFEL data very challenging. We conducted a test of the self-seeded XFEL for SFX because the self-seeded XFEL that we achieved performs extremely well. A previous study did not show any difference in the data quality metrics of the SFX compared to SASE 11, but the peak spectral brightness of our XFEL is about ten times higher compared to the XFEL used previously and 40 times higher than SASE, with excellent stability.
We expect that the reduction in the relative bandwidth from ΔE/E = 1.3 × 10^-3 (SASE) to ΔE/E = 1.9 × 10^-5 (SS) will sharpen diffraction patterns, especially those collected at large scattering angles, which are responsible for increasing the resolution. Also, we expect an increase in the filtration rate of raw data owing to the higher spectral intensity of the self-seeded XFEL compared to SASE. We performed a demonstration experiment by mapping out the three-dimensional structure of lysozyme from chicken egg white and performing a comparative analysis of the results obtained using the narrowband HXRSS FEL and the broadband SASE FEL (see Methods for the crystal preparation and experimental conditions). We collected and processed three data sets that had different numbers of images for both self-seeded and SASE modes: SS1/SASE1 (111,467/101,443), SS2/SASE2 (38,510/38,686), and SS3/SASE3 (20,209/20,530). The indexing rates were substantial in all cases. For example, for SS1, 70,656 crystal diffraction patterns (63.4%) were identified as crystal hits, and 33,663 of them were indexed (47.6%). The indexing rates of the self-seeding data sets were higher than those of the SASE data sets (Table 1). SFX data quality metrics such as SNR [or I/σ], multiplicity, R split (i.e., the consistency of merged intensity distributions between two half-datasets separated from the full dataset), and correlation coefficient [CC*] strongly depend on the number of images, as is known (Fig. 4, Supplementary Table 2). However, the self-seeding data show superior metrics to the SASE data at high resolutions, unlike a previous report 11. Remarkably, the self-seeding data sets had twice the multiplicity of the SASE data sets at all resolutions (Fig. 4b), so the final accuracy of the merged data is improved, even with the same number of hit images (see Methods for SFX data processing). We determined the structure by molecular replacement using a lysozyme model (Protein Data Bank code 1VDS) as a search model, then conducted atomic model refinement using phenix.refine, and then inspected (mFo-DFc) omit maps 40 (see Methods for structure determination, refinement, and analysis). To compare and analyze the structures and their electron density maps without bias or error, we performed structural determination using the same numbers of hit images for the self-seeding and SASE data sets (SS1/SASE1, SS2/SASE2, and SS3/SASE3). After refinement, when we compared the models with their structure maps (SS1 and SASE1), we found apparent improvements in the 2mFo-DFc maps of the self-seeded mode (Fig. 5a), even though lysozyme is a globular protein and has some buried residues that strongly interact with other residues. To get a much better view, we obtained bias-free mFo-DFc omit maps by sorting out the residues (Fig. 5b). Comparison of the mFo-DFc omit maps at 1.75-Å resolution (Fig. 5b) clearly shows that the maps of the ten residues (Phe21/Ala28/Tyr41/Trp46/Phe52/Asn62/Tyr71/Trp81/Trp126/Trp141) are not blurred in self-seeded mode; the maps, including the side chains and the main chains (carboxyl groups, nitrogens on the peptide backbones, and α-carbons), are sharper than those obtained in SASE mode. For instance, in the Phe21 and Asn62 maps, β-carbons and side chains are revealed clearly only in self-seeded mode. Refined models without a specific residue were generated by deleting that residue from the original structure (Supplementary Table 3). Comparative analysis of the mFo-DFc electron density maps of the ten residues reveals the superiority of the self-seeded data set over the SASE mode data sets (Table 2, Supplementary Fig. 5).
For example, even though the data-quality metrics of the SS3 data are inferior to those of SASE1 (the SS3 dataset has one-fourth as many indexed images as SASE1), the omit maps of the ten residues from the SS3 data are better than those from the SASE data. B-factors are the crystallographic parameters that explain this large difference. The average B-factors 41 of both the protein and the solvent-water models are relatively lower in the models from self-seeded mode than in those from SASE mode, and the average B-factors are independent of the number of indexed images (Table 1: Model refinement). These traits indicate that the atomic displacement fluctuations are relatively weaker when a narrowband self-seeded FEL is used than when a broadband SASE FEL is used. The reduced fluctuations might help improve the refinement of the model with sharpened electron density maps. The overall sharpening of the omit maps obtained from the self-seeding data resulted from phasing-quality data with fewer patterns. The high quality of data obtained in self-seeded mode is a result of the use of recurrent shots from a highly stable self-seeded XFEL.

Conclusion
The PAL-XFEL HXRSS successfully demonstrated a forward Bragg diffraction self-seeded XFEL with unprecedented peak brightness (3.2 × 10^35 photons/(s·mm^2·mrad^2·0.1%BW)) and stability; the average pulse energy is 0.85 mJ at 9.7 keV, the bandwidth (0.19 eV) is about 1/70 as wide, the peak spectral brightness is 40 times higher, and the stability is excellent, with > 94% of shots exceeding the average SASE intensity. We used high-index Bragg reflections (33-3 or 115) to exploit a narrow seed bandwidth < 0.1 eV and a long wake duration of ~65 fs. A calm longitudinal phase space with further-suppressed energy modulation is required to suppress the pedestal effects due to microbunching instability substantially. We demonstrated that a high-spectral-intensity and high-stability self-seeded XFEL improves the data-quality metrics of SFX: it achieves outstanding quality in the SNR, multiplicity, CC*, and R split compared to the large-bandwidth SASE. The high multiplicity of the self-seeding data sets yields phasing-quality data with fewer patterns than in SASE datasets and improves the refinement of the model with sharpened electron density maps. The self-seeded data set achieves superior electron-density map quality over the SASE mode data sets. Even with one-fourth of the indexed images of the SASE data set, the self-seeded data set shows better or similar electron density maps for the residues. The improved structure map from the self-seeded XFEL indicates that a high-brightness narrowband XFEL increases the resolution of signal collection and helps to solve three-dimensional macromolecular structures with high resolution, especially for very small crystals.

Methods
Laser heater to suppress MBI. The laser heater adds a slice energy spread to the 150-MeV e-bunch. The IR laser beam size is comparable to that of the electron bunch, so the energy spread distribution assumes a super-Gaussian profile that can effectively suppress the MBI. The induced slice-energy spread of the e-bunch at the LH as a function of the IR laser energy was measured using a transverse deflector and an energy spectrometer located after the first bunch compressor. The accelerating sections (L1 and XLIN) and the bunch compressor BC1 downstream of the laser heater were all turned off (Fig. 1).
The longitudinal phase space of the e-bunch was simulated for three laser heater conditions: no laser heater (top), optimized for SASE (middle), and optimized for self-seeding (bottom) (Fig. 1e). The spectral purity of the self-seeded FEL is very sensitive to the energy modulation of the e-bunch, so the optimal condition of the laser heater for self-seeding is different from that for SASE. The CSR (green dotted line in Fig. 2c) measured at the third bunch compressor using a visible CCD camera 42 is due to the MBI; this result shows that the MBI should be suppressed more strongly by the laser heater for self-seeding than for SASE. In both self-seeded FEL and SASE modes, the X-ray pulse was focused to a beam size of 2.5 μm (horizontal) × 2.5 μm (vertical) (FWHM) using a Kirkpatrick-Baez mirror 44. The diffraction data were collected using an MX225-HS detector with a 4×4 binning mode (pixel size: 156 μm × 156 μm) (Rayonix, LLC, Evanston, IL, USA) at room temperature and monitored by OnDA 45. The 4×4 binning mode was used to match the XFEL repetition rate of 30 Hz. The distance between the sample position and the detector was 111 mm and was validated by comparing the index rate of each data set. Aside from using the self-seeded FEL or SASE mode, all other conditions were identical in both experiments. We collected six data sets: three in self-seeded mode (SS1, SS2, and SS3) and three in SASE mode (SASE1, SASE2, and SASE3).

SFX data processing. After data collection, the hit images were filtered using Cheetah (version 8). 46 The parameters for peak detection were optimized for Cheetah, including a min-snr of 4.0. The pre-processed images were further indexed, integrated, merged, and post-refined using CrystFEL (version 0.6.3) 47,48. The experimental geometry was also refined for CrystFEL. Indexing was performed using DirAx (version 1.17) 49 with peak integration parameters of int-radius = 3, 4, 5. The measured diffraction intensities were merged with process_hkl in the CrystFEL suite 47,48. To investigate the data statistics from the two modes (self-seeded and SASE) carefully, we processed the data with the same methods and the same parameters for consistency.

Structure determination, refinement, and analysis. The structure of lysozyme from chicken egg white was determined (Table 1) by the molecular replacement method using Phaser-MR in PHENIX (version 1.14-3260), 40 using a model of lysozyme (Protein Data Bank code 1VDS) as a search model. During the calculation of molecular replacement, we excluded water molecules from the template model to avoid model bias. Water molecules were inspected and added manually using Coot (version 0.8.9), 50 by reference to mFo−DFc maps. Water molecules were placed in correct positions depending on the density map, where positive peaks higher than 1.5σ and 3.0σ occurred in the 2mFo-DFc map and mFo-DFc map, respectively. The molecular replacement model was first refined with a rigid-body protocol and Cartesian simulated annealing (starting at 5,000 K) using phenix.refine to reduce model bias. After five cycles of restrained refinement, the model was evaluated by MolProbity (version 4.4) 51. The data of lysozyme crystals from both self-seeded and SASE modes belonged to the tetragonal space group P43212, with unit cell parameters of a = b = 77.56~77.88 Å, c = 37.32 Å, α = β = γ = 90°. To inspect effects on map quality using the self-seeded mode, we made all of the omit maps on residues of the lysozyme model excluding glycine, which cannot present meaningful maps.
Therefore, we manually deleted each residue from the lysozyme model and performed phenix.refine in PHENIX 40 to generate an mFo-DFc map for each residue (Supplementary Table 2). Omit maps were generated to reduce the possible effect of model bias. Six models from the self-seeded and SASE modes were calculated in the same manner for a fair comparison.

Data availability
The coordinates and structural factors have been deposited in the Research Collaboratory for Structural Bioinformatics (RCSB) under the accession codes 7BYO/7D01/7D04 (for lysozyme from self-seeded mode) and 7BYP/7D02/7D05 (for lysozyme from SASE mode).

Declarations

Figure 2 Suppression of microbunching instability by a laser heater (LH) centered at 9.7 keV. a, Spectra of the self-seeded XFEL as a function of the energy spread induced by the LH. b, Fraction of FEL intensity enclosed within the bandwidth for four different LH-induced energy spreads. c, Peak intensity of the self-seeded FEL (solid red line), total sum of the spectrum in a (dotted blue line), and fraction of FEL intensity enclosed within ±0.5 eV (magenta dotted line) as a function of the energy spread induced by the LH. Green dotted line: coherent synchrotron radiation (CSR) due to the MBI, as measured at the third bunch compressor using a CCD visible-light camera. The optimized LH conditions for self-seeding and SASE are different; the induced energy spread is about 5 keV higher for self-seeding than for SASE.

Figure 3 Spectral intensity of self-seeded vs. SASE XFEL. a, Color maps of 1,000 SASE and self-seeded FEL spectra measured by a single-shot spectrometer. b, SASE and self-seeded FEL spectra averaged over 1,000 shots, with the peak value of the SASE spectrum set to 1, and Ec = 9.7 keV. c, Crystal angle-scanning spectrum measured with the Si-333 flat-crystal scanning spectrometer. Each data point is an average of 150 shots. The vertical axis represents the photo-diode current normalized by a quadrant beam position monitor (QBPM) for the FEL intensity measurement. The step of the crystal angle scan is 0.0001°, which corresponds to 0.022 eV. d, Histogram of radiation intensity (for a 1-eV bandwidth around the peak) expressed relative to the average SASE intensity (1 a.u.) for SASE and self-seeded (SS) modes. The histogram data represent the 1,000 single-shot spectra measurements in Fig. 3a. The 33-3 Bragg reflection is used in the diamond FBD monochromator. The e-bunch delay in the magnetic chicane is 30 fs.

Figure 4 Data quality indicators as a function of resolution. a, signal-to-noise ratio (SNR or I/σ), b, multiplicity, c, Rsplit, and d, correlation coefficient (CC*) derived from three HXRSS and three SASE data sets. The sets SS1/SASE1, SS2/SASE2, and SS3/SASE3 are calculated from 70,656, 27,926, and 12,377 total hit images, respectively. The resolution scale (x-axis) of each figure ranges from 1.75 Å to 3.0 Å to show differences between the self-seeded and SASE modes clearly (specific values, Supplementary Table 3). CC* represents a direct comparison of crystallographic model quality and data quality on the same scale, especially for multiply measured data.
6,174
2020-10-02T00:00:00.000
[ "Physics", "Chemistry" ]
The quantum oscillations in different probe configurations in the BiSbTe3 topological insulator macroflake. We demonstrate quantum oscillations in BiSbTe3 topological insulator macroflakes in different probe configurations. The oscillation period in the local configuration is twice that in the non-local configuration. The Aharonov–Bohm-like (AB-like) oscillation dominates the transport in the local configuration, and the Altshuler–Aronov–Spivak-like (AAS-like) oscillation dominates the transport in the non-local configuration. The AB-like oscillation period is 0.21 T, and the corresponding loop diameter of 156 nm is consistent with the phase coherence length reported for topological insulators. The Shubnikov–de Haas oscillation frequency is the same in both configurations, but the oscillation peaks reveal a π phase shift between the local and non-local configurations. The Berry phase is π in the local configuration and 0 in the non-local configuration. Energy-dispersive X-ray spectroscopy (EDS) confirmed the stoichiometric ratio of the crystal to be Bi:Sb:Te = 1:1:3, while the XRD spectrum confirmed a crystal structure consistent with the BiSbTe3 reference data. Cleaved BiSbTe3 single-crystal flakes were obtained using the scotch-tape method. The cleaved flake is roughly 3 mm long, 2 mm wide, and 170 µm thick. Gold wires were electrically attached to the cleaved crystal surface using silver paste. The Raman and EDS spectra support that the crystal is BiSbTe3. Magnetotransport measurements were performed using the standard four-probe technique in a commercial apparatus (Quantum Design PPMS) with magnetic fields up to 14 T. The field B was applied perpendicular to the large cleaved surface. Between 6 and 14 T, data points were taken every 100 Gauss in the steady-field mode rather than the field-sweeping mode, each point being recorded after the field had been stable at its set value for 1 minute. In this work, we probe the transport characteristics of BiSbTe3 in two different probe configurations, local and non-local, both of which are four-probe configurations. As shown in the bottom-right inset of Fig. 1, in the local configuration the applied current I14 flows through electrodes 1 and 4, and the voltage difference V23 is detected between electrodes 2 and 3.
In the non-local configuration, the applied current I21 flows from electrode 2 to electrode 1, and the voltage difference V34 is detected between electrodes 3 and 4. In both configurations, the resistance R is determined as the ratio of the detected voltage difference to the applied current. To avoid signal interference from frequent electrode switching between the two probe configurations, the non-local measurements were performed only after all data at the various magnetic fields and temperatures had been taken in the local configuration. The non-local probe configuration is widely used to detect carrier characteristics in the diffusion process in various kinds of materials and systems. Results and discussion. The top-left inset of Fig. 1 shows the XRD spectrum, which reveals extremely sharp peaks; this supports that the BiSbTe3 is highly crystalline. Figure 1 shows the temperature-dependent resistances in the local and non-local measurement configurations, both of which show metallic behavior. Owing to the different transport mechanisms, the measured resistance in the local configuration is two orders of magnitude higher than that in the non-local configuration. The measured resistances in the two configurations follow the same temperature dependence from 300 to 2 K. The residual resistance ratio, R(2 K)/R(300 K), reaches 0.07 in both configurations, lower than most values reported for topological insulators. These observations support the high quality and uniformity of our BiSbTe3. The magnetic field B shifts the phase of the carrier wavefunction around a loop traced by two partial waves circulating in opposite directions, which leads to AB and AAS interference as shown in Fig. 2. The AB interference leads to a periodic conductance oscillation set by the magnetic flux quantum Φ0 = h/e, where h and e are Planck's constant and the electron charge, respectively. The AB oscillation period is ΔB = Φ0/A, where A is the loop area 10. The AAS interference originates from a pair of time-reversed loops and is set by the flux quantum Φ0 = h/2e; it is similar to the AB interference, but the oscillation period is half that of the AB oscillation. It is worth noting that in the AB oscillation the carriers travel half loops and interfere at the opposite side of the loop, whereas in the AAS oscillation the carriers travel a whole loop and interfere at the original position. Magnetoresistance was measured in the local and non-local configurations. Figure 3 shows the derivative of the resistance with respect to the applied magnetic field, dR/dB, as a function of magnetic field in both the local and non-local configurations, with temperature and magnetic field as tuning parameters. Periodic oscillations appear in both configurations. The oscillation period in the local configuration is double that in the non-local configuration, and the dR/dB oscillation amplitude in the local configuration is larger than that in the non-local configuration. To confirm and identify the intrinsic mechanism behind the different oscillation periods in the two configurations, a fast Fourier transform was performed. As shown in the inset of Fig. 3, there are two oscillation peaks in the local configuration and only one peak in the non-local configuration. The second oscillation frequency in the local configuration is double the first.
The oscillation period is about 0.21 T in the local configuration, and the corresponding loop area is 1.92 × 10⁻¹⁴ m². The corresponding diameter of 156 nm is consistent with the phase coherence length reported for topological insulators. This supports that the first oscillation peak, at 5 T⁻¹ in the local configuration, originates from AB interference. Unlike the conventional AB oscillation in patterned nanostructures or nanowires, the observed AB-like oscillation is speculated to originate from a series of connected elastic-scattering trajectory loops in a macroflake. These oscillations correspond to AB-like (h/e) and AAS-like (h/2e) interference. The oscillation peak in the non-local configuration coincides with the second oscillation peak in the local configuration: the AB-like oscillation is diminished and only the AAS-like oscillation is observed in the non-local configuration. The weak AAS signal is usually masked by the AB signal, and the AAS signal alone is rarely detected in the conventional probe configuration. Our result shows that the AB-like and AAS-like interference can be detected individually using different probe configurations. Theoretical calculations support that smearing by interference from loops of different sizes can be ruled out, because loops of different sizes contribute with very different weights 20. The observed single peak originates from the largest loop size, which is set by the carrier coherence length. It has been argued theoretically that AAS-like interference should be suppressed for spin-helical carriers with opposite spins in topological insulators 8,10. Our experiment reveals clear AAS-like oscillations from surface-state carriers, which supports that carriers with a spin-helical texture do not eliminate the AAS-like interference. One question arises: why is the AB-like interference suppressed while the AAS-like interference survives in the non-local configuration? Figure 4 shows cartoons of AB-like and AAS-like interference in a macroflake in the two configurations. Carriers travel half loops and interfere on the other side of the loop in the AB-like oscillation; carriers travel a whole loop and interfere at the original position in the AAS-like oscillation. The sample in this work is of order mm in size, much longer than the carrier phase coherence length. As shown in Fig. 4, the carrier trajectory forms a series of connected AB-like interference loops, and the effective quantum oscillation signal is directly related to the combination of these loops. Without an external voltage, carriers form randomly connected AB-like loops in the non-local configuration, and the effective loop number greatly increases in the diffusion process. Following the Landauer–Büttiker formula, the detected AB-like signal is then greatly suppressed 21. On the other hand, AAS-like interference originates from a pair of time-reversed loops; the carrier phase shift acquired from scattering is the same for the two reversed loops, so the AAS-like signal survives the environmental scattering 22. The AAS-like contribution is predicted to be dominant in systems with strong disorder and to depend solely on the phase coherence length 23. These AAS-like loops can exist individually in a mesoscopic system, so the AAS-like signal is tolerant to the loop-number effect.
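To make the connection between the oscillation period and the loop geometry concrete, the short sketch below simply inverts ΔB = Φ0/A for an equivalent circular loop. It uses only the flux quanta h/e and h/2e and the 0.21 T period quoted above; it is an illustrative back-of-the-envelope check, not an analysis of the measured data.

```python
import numpy as np

H = 6.62607015e-34      # Planck constant (J s)
E = 1.602176634e-19     # elementary charge (C)

def loop_diameter(period_T, flux_quantum=H / E):
    """Diameter of an equivalent circular loop from an oscillation period via dB = Phi0 / A."""
    area = flux_quantum / period_T              # loop area in m^2
    return 2.0 * np.sqrt(area / np.pi)

d_ab = loop_diameter(0.21)                                   # h/e loop: ~1.6e-7 m, i.e. roughly the 156 nm quoted
period_aas = (H / (2.0 * E)) / (np.pi * (d_ab / 2.0) ** 2)   # same loop with h/2e: ~0.105 T
print(f"AB-like loop diameter ~ {d_ab * 1e9:.0f} nm; AAS-like period ~ {period_aas:.3f} T")
```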
A previous experiment showed that the AB oscillation frequencies were consistent between the local and non-local configurations in an asymmetric quantum ring 5. The ring dimension there is 2 µm, which is close to the carrier elastic scattering length. A similar observation has been reported in patterned nano-circuits 6. As discussed above, the signal reduction originates from the large number of randomly connected quantum loops. Those reports focus on patterned geometries that confine the carrier transport trajectory, which may weaken the loop-number effect in the diffusion process. It is worth emphasizing that no obvious AAS interference signal was observed in either configuration in the asymmetric rings; this may originate from the asymmetric patterned structure and/or a weak AAS oscillation signal in that system. The bottom-right inset of Fig. 5 shows the extracted magnetoresistance as a function of 1/B in the local and non-local configurations. It reveals periodic oscillations in both configurations, known as Shubnikov–de Haas (SdH) oscillations. Interestingly, the oscillation peaks reveal a π phase shift between the local and non-local configurations. The SdH oscillation arises from the successive emptying of Landau levels with increasing B and can be expressed as 1/B = (2πe/ħA_F)(N + β), where A_F = πk_F² is the cross-sectional area of the Fermi surface, k_F is the Fermi wave vector, N is the Landau level index, and β is the Berry phase 24,25. The top-left inset shows the fast Fourier transform of the SdH oscillations, with a sharp peak at 52 T⁻¹ in both configurations 26. Following the Onsager relation F = (ħ/2πe)A_F, where F is the SdH oscillation frequency, the corresponding Fermi wavevector k_F is 3.9 nm⁻¹, consistent with the reported k_F of the surface state in BiSbTe3 topological insulators 26. This supports that the SdH oscillations originate from surface-state carriers in our BiSbTe3 topological insulator. The Berry phase β can be inferred from the Landau level fan diagram. Figure 5 shows the Landau level fan diagram, with magnetoresistance peaks and dips assigned to Landau levels N and N + 0.5, respectively. The intercept is 0.5 in the local configuration, indicating that β is π, while the intercept is 0 in the non-local configuration, indicating that β is 0. The topological insulator surface-state carrier is a Dirac fermion with a Berry phase of π. Our observation reveals that the probe configuration can obscure the transport signature of the Berry phase. Similar behavior has been observed in Dirac semimetal Cd3As2 nanoplates 27, topological insulator nanoribbons 28, and AlGaAs/GaAs heterostructures 5,29, and may originate from the diffusive character of transport in the non-local configuration. Further investigation is needed to clarify the detailed mechanism of the π phase shift between probe configurations. Conclusion. We demonstrate quantum oscillations in BiSbTe3 topological insulator macroflakes in different probe configurations. The oscillation period in the local configuration is double that in the non-local configuration. The Aharonov–Bohm-like (AB-like) oscillation dominates the transport in the local configuration and the Altshuler–Aronov–Spivak-like (AAS-like) oscillation dominates the transport in the non-local configuration. The AB-like oscillation period is 0.21 T and the corresponding loop diameter of 156 nm is consistent with the phase coherence length reported for topological insulators.
The Shubnikov–de Haas oscillation frequency is the same in both configurations, but the oscillation peaks reveal a π phase shift between the local and non-local configurations.
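The Berry-phase assignment described above comes from the intercept of a linear fit to the Landau fan diagram. A minimal sketch of that fit is shown below; the frequency and indices are made-up placeholders rather than the measured values, and the sign and offset conventions for assigning integer indices to resistance extrema vary between studies.

```python
import numpy as np

def landau_fan_fit(inv_b, index):
    """Linear fit N = F*(1/B) + n0 of a Landau fan; returns (frequency F, intercept n0)."""
    F, n0 = np.polyfit(inv_b, index, 1)
    return F, n0

# illustrative fan data: assumed frequency 50 and intercept 0.5 (read as Berry phase pi)
F_true, n0_true = 50.0, 0.5
N = np.arange(4, 12, dtype=float)        # Landau indices assigned to magnetoresistance extrema
inv_b = (N - n0_true) / F_true           # corresponding 1/B positions of those extrema
F_fit, n0_fit = landau_fan_fit(inv_b, N)
print(f"F = {F_fit:.1f}, intercept = {n0_fit:.2f}  (0.5 -> beta = pi; 0 -> beta = 0)")
```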
2,889.6
2022-03-25T00:00:00.000
[ "Physics" ]
The use of a slotted-displacement ceiling diffuser in rooms with stationary computer workstations. This paper characterizes premises with stationary workstations. An analysis of the thermal loads occurring in public-utility rooms equipped with computer, electronic and multimedia equipment was carried out. Attention is drawn to the year-round occurrence of positive heat balances at occupied workstations and to heat losses in winter in unoccupied premises. For air distribution, a slotted-displacement ceiling diffuser is proposed, used for mixing ventilation (MV) in an up-up type of air exchange in the room. The results of measurements, in the form of air flows in the area of its operation, are provided. The graphs show the distribution of air velocities and temperatures in the vertical plane passing through the transverse axis of the diffuser. The study focused on one representative supply airflow, and the behaviour of the air stream during heating and cooling is presented. Introduction. Nowadays, specialised workstations fitted with computer equipment are often designed. Such workplaces are located in educational buildings (computer rooms at universities, schools and training centres), in offices (training rooms in banks and telecommunications companies), in entertainment venues (gaming rooms, internet cafes) and even in production facilities (rooms for the regulation and control of technological processes or for monitoring and supervision). Such computer rooms require efficiently operating ventilation or air-conditioning installations [1]. The presence of a large number of people and computers causes a significant deterioration of air quality and, in the absence of efficient air conditioning, contributes to reduced user concentration and performance [2]. Favourable microclimate conditions, by contrast, positively affect well-being, improve the reception of information and, as a result, increase work efficiency [3,4]. Creating conditions in which a person feels thermally comfortable is the main goal of installing air-conditioning devices [2]. The most important factors of the room's microclimate are the thermodynamic air parameters and the air velocity in the occupied zone [5]. In workplaces, both the temperature and the relative humidity of the air should be controlled; in HVAC systems this means that the air should be heated and humidified in winter and cooled and dehumidified in summer. The well-being of people in public premises with mechanical ventilation or air conditioning is determined by the correct calculation of the ventilating air stream and a proper solution for distributing this air in the room [6]. With an improperly designed flow, a significant part of the supplied air may not take part in the assimilation of unwanted heat gains. The result is the creation of so-called dead air zones or areas with uneven temperature distribution in the room. The current state of air-conditioning technology allows air of any parameters to be prepared, the most complicated installation systems to be designed, and these to be linked with automatic control devices and control-and-measurement apparatus; however, it remains difficult to obtain a proper and effective distribution of air in the room [1,7]. In computer rooms, the most reasonable approach is to bring fresh air directly into the occupied zone [8-10].
This solution is related to a down-up type of air exchange in rooms (displacement ventilation, DV, or stratum ventilation, SV) [11]. In computer rooms, however, it can be difficult to implement. This applies in particular to architectural and construction limitations related to the routing of air installations and the mounting of supply elements. Often, several computers are installed in a relatively small area where the room height does not exceed 3.0-3.5 m. In such a situation, using part of the height to install a raised floor, in order to create space for air ducts, may be unacceptable. In addition, keeping the floor clean, so that the supply air does not entrain dust particles, is often difficult, especially when the air-conditioned room is in an academic building [12,13]: students change every 45 or 90 minutes and often enter the room almost straight from the street. Air flow from top to bottom is unfavourable when the supplied air mixes with the convectively driven streams of used air (air that has assimilated heat and moisture from people and other sources) [14]. However, it is possible to arrange the supply elements, diffusers and exhausts in the room so that the convection currents from occupants and electronic equipment are exploited and the thermal sensations of the users remain correct. This paper describes a ceiling diffuser with a limited and properly directed air stream, intended for the organisation of up-up air exchange [15]. Analysis of rooms with stationary computer workstations. The thermal loads of sample computer rooms located on the top floor of one of the Wroclaw University of Science and Technology buildings were analysed [16]. Each room contains 12 computer workstations in an area of 30 m² (Fig. 1), i.e. 2.5 m² per workstation, with a spacing of 1.25 m between workstations. Owing to the thermal loads resulting from the location of the external walls, the rooms were divided into two zones: with windows on the south-east (SE) side and on the north-west (NW) side. The share of window area in the total usable floor area of the room is 15%. A detailed analysis of all annual thermal loads, depending on the internal conditions, is an important factor influencing the selection of the air-conditioning system [17]. The heat balance of these rooms changes during use and depends on internal conditions (the number of people, and the equipment and lighting in use) as well as external ones (season and time of day, exterior wall orientation, window type and area, etc.). In premises with external walls one should expect both heat gains from solar radiation in summer and heat losses in winter. The architectural and construction solutions of the entire building, as well as of individual rooms, have a significant impact on the heat gains [18]. Of particular importance are the thermal mass of the building and the type and orientation of the glazing [19]. The graphs below show the course of heat gains from individual sources during use of the room in summer (Fig. 2), and the course of heat gains and losses in winter (Fig. 3). When all computers are in use, the heat gains clearly exceed the losses and must be assimilated by air conditioning.
However, during breaks in the use of the rooms, when there are no heat gains, the heat losses that occur must be compensated. The analysis of thermal loads shows that, at maximum heat gain under steady operating conditions, the basic heat sources are: computers, about 60% of the total heat gain; users, about 20%; solar radiation through the glazing, about 10%; and others, such as electric lighting and opaque partitions, about 10%. These shares are similar for computer rooms with a similar floor area per workstation. It should be noted that the heat gains from electronic devices and people vary only slightly, whereas the heat gains from the other sources are variable. The maximum heat gain in each room is 4.5 kW. Its time of occurrence is similar for rooms within a given zone, while for rooms in different zones it differs. In winter, in unused rooms, at the design outdoor air temperature for the winter period (-18 °C), the heat balance may be negative, amounting to -0.75 kW; the heat losses must then be covered by the air systems or by central heating. When the computer room is in use (at the same time and under the same conditions), heat gains clearly exceed the losses, and the heat balance amounts to 50-70% of the summer value. Note that the heat gain from solar radiation occurs only on sunny days; on cloudy days it does not affect the heat balance. For the analysed computer room, the volume of air supplied from the ceiling should be between 1650 m³/h and 2200 m³/h, and from the floor not less than 4500 m³/h. This follows from the accepted temperature rises (the difference between exhaust and supply air temperature) in the occupied zone, required to meet thermal-comfort conditions and avoid draughts. In each case this implies very high air change rates. The flow rates directly affect the dimensions of the distribution ducts and of the ventilation or air handling unit (AHU). If the additional temperature gradient that forms above the heat sources in tall rooms were taken into account in the calculations, the required air flow rate for ceiling-side supply could be reduced to 2700 m³/h per room. Using the convection currents from people and electronic equipment in an up-up type of air exchange with a slotted-displacement ceiling diffuser, the air flow rate could be reduced further, to 1350 m³/h. Measurement station with the slotted-displacement diffuser. The design of the prototype slotted-displacement ceiling diffuser was discussed, among others, in [16]. Fig. 4 includes a construction drawing and a view of the diffuser, from which air flows through the centre of a perforated surface and through two slots that pre-limit and direct the supply air. The air stream supplied from the central part of the diffuser is characterized by high induction and marked turbulence. On the outside, it is limited by the slot jets, which prevent it from spreading towards the users' heads. By reducing the intense mixing, the air stream is brought down towards the floor without spreading sideways, so only a small amount of room air is entrained. The air then spreads along the floor and, on meeting heat sources, assimilates their excess heat and rises with the convective streams straight to the exhaust installation.
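As a rough cross-check of the supply airflow figures quoted in the thermal-load analysis above, the sketch below applies the sensible-heat balance V = Q/(ρ·cp·ΔT) to the 4.5 kW maximum gain. The air properties are standard assumed values and the exhaust-supply temperature differences are illustrative choices, not figures taken from the paper.

```python
RHO_AIR = 1.2     # kg/m^3, assumed indoor air density
CP_AIR = 1005.0   # J/(kg K), specific heat of air

def supply_airflow_m3h(heat_gain_w, delta_t_k):
    """Volumetric supply airflow (m^3/h) needed to remove heat_gain_w at exhaust-supply difference delta_t_k."""
    return heat_gain_w / (RHO_AIR * CP_AIR * delta_t_k) * 3600.0

# 4.5 kW maximum heat gain per room; delta-T values chosen only to bracket the quoted flow rates
for dt in (6.0, 8.0, 10.0):
    print(f"dT = {dt:4.1f} K -> {supply_airflow_m3h(4500.0, dt):6.0f} m3/h")
```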
This behaviour of the diffuser has been validated by research under real conditions in a full-scale measurement room in which the real processes were simulated. The advantages of displacement ventilation and of the down-up type of air exchange are exploited by supplying air to the room zone with the smallest unit heat gains and exhausting it above the heat sources. The schematic diagram of the measurement station is shown in Fig. 5. The ventilation installation of the investigated workstation consists of a supply part and an exhaust part. The external (outdoor) air, or a mixture of external and recirculated air (from the room), is conditioned in the AHU, transported to the computer room and supplied through a single slotted-displacement diffuser. Depending on the needs, the air is heated in an electric heater, cooled in a glycol cooler, or introduced into the supply duct without heat treatment. The air from the room is removed through grilles located in the false ceiling. Air velocity and temperature measurement results. The study used experimental measurements in a full-scale test room of 4 m width x 3 m length x 2.85 m height with an HVAC control system. The air velocity and temperature measurements were carried out at different distances from the supply diffuser at the measurement station. Sixteen measurement levels were defined at sixteen heights above the floor, as presented in Fig. 9a (one measurement session covered eight heights, left panel; the second session was carried out after moving the tripod core, with fixed probes, 0.15 m down, right panel). The measurements were carried out mainly along the longitudinal and transverse axes of the diffuser, on 29 vertical measuring lines, with grid points spaced at intervals of 0.05 or 0.1 m. During the tests, the air flow rate (from 325 to 625 m³/h in steps of 75 m³/h), the width of the stream-limiting slot (from 0 to 45 mm in steps of 5 mm) and the degree of perforation of the outer plate and of the internal flow-levelling plates (from 20 to 50% in steps of 10%) were varied. For airflows larger than 625 m³/h, the velocity near the floor around the occupants (occupied zone) exceeded 0.5 m/s. The share of air flowing through the slots, relative to the total air flow, varied from 30% to 60%. The difference between the supply air temperature and the room temperature did not exceed 1 K. Such conditions are in fact rare: at very low outside air temperatures, the ventilation/air-conditioning unit may also fully or partially take over the duty of space heating, and the supply air must then have a higher temperature [20]. For most of the year, however, it is necessary to assimilate the heat generated by the electronic equipment and the users, so the supply air temperature must be lower than the room temperature [21]. The measurement results, in the form of graphical air velocity and temperature distributions (in vertical planes passing through the transverse axis of the diffuser) for an air flow rate of 550 m³/h, are depicted in the diagrams (Figs. 7 and 8). The velocity distribution for the transverse axis was chosen because it represents the air flow towards the heads, necks and backs of the occupants. Spreading of the stream in the longitudinal direction is less important; it can only cause an increase in velocity in the transition zone when the diffusers are placed too close to each other.
The optimal spacing of the diffusers, resulting from the width of two computer stations, is 2.5 m, which guarantees that the velocity in the occupied zone stays below 0.3 m/s. The measurements were conducted with simulation systems for the heat gains from people and computers attached. The heat sources were placed near the outer walls, away from the diffuser, as seen in Fig. 9b. The diagrams do not show a cross-section through the entire room; the focus was only on the air stream supplied from the diffuser. The air velocity and temperature distributions above the heat sources, and the local convective flows arising there, are not shown because they were outside the scope of the research. The extent of the measurements performed is shown schematically in Fig. 9b. The average supply air velocity from the central perforated part was in the range of 0.45 to 0.85 m/s, while that from the limiting slots was from 1.65 to 3.25 m/s, over the tested air flow rates of 325-625 m³/h and with a constant slot width of 20 mm. A velocity of 0.4-0.8 m/s was noted under the diffuser. Outside the area of operation of the diffuser, i.e. where people are continuously present, the velocity did not exceed 0.3 m/s. A range of increased velocities can be observed in the passage between computer stations, where a person stays and moves only temporarily; such brief exposure to a higher air flow should be perceived by users as refreshing [22]. The zone of increased velocity, the so-called zone of influence of the diffuser, extended from 0.3 to 0.6 m. The slotted-displacement diffuser will therefore be useful in rooms with clearly marked passageways, where there is no stationary workstation directly under the diffuser. Conclusion. In currently constructed buildings, one often sees premises saturated with electronic, computer and multimedia equipment, which means that heat gains occur throughout the year. In addition, energy-efficient construction technologies and good insulation of the external partitions (walls, roof) significantly limit heat loss and heat gain by transmission. Heat deficits can occur only under winter design conditions, in rooms where the internal heat sources (computers, electric lighting) are switched off and where there are no users. Consequently, in computer rooms used 24/7, especially those without external partitions, the overall heat balance changes only slightly throughout the year. In premises with insufficiently insulated external walls, both the heat gains from solar radiation in summer and the heat losses in winter vary substantially. It should be noted that the heat gains from electronic devices and people change only slightly and are almost constant, whereas the heat gains from other sources are variable and depend on external conditions. It is proposed to use an air-exchange system with slotted-displacement ceiling diffusers of the original design as an alternative to the most commonly recommended systems with floor-level supply. This paper presents selected measurement results for the diffuser under non-isothermal conditions. The temperature difference between the supply and room air in both cases (heating and cooling) was about 5 K. The graphical distributions of velocity and air temperature show a steady and confined air distribution.
Although the air flow from the central part of the diffuser is characterised by high induction and marked turbulence, it is limited by the supply air slots, which prevent it from spreading towards the users' heads. The advantages of displacement ventilation and of the down-up type of air exchange can thus be exploited by supplying air into the room zone with the smallest unit heat gains and exhausting it above the concentrated heat sources. Such air movement gives good results in shaping the microclimate of premises with variable and high thermal loads. The investigated diffuser allows heat assimilation similar to that of overhead mixing ventilation (MV) while retaining the advantages of displacement ventilation (DV) in the area of workplaces and continuous occupancy. The work was supported by the Faculty of Environmental Engineering, Wroclaw University of Science and Technology, Poland, No. 0401/0055/18.
4,166.4
2019-01-01T00:00:00.000
[ "Engineering", "Environmental Science", "Computer Science" ]
The Effect of Comparison of Soybeans and Coconut Water on Bio-Battery Electrical Power for Education. Indonesian Journal of Multidisciplinary Research 1(1) (2021) 49-54. Keywords: alternative energy, bio-battery, coconut water, education, electrolyte paste, soybeans. INTRODUCTION. The world is currently facing an energy crisis. In addition, the large amount of battery waste is dangerous and underutilized. A bio-battery is a battery whose paste is made of natural materials (Sumajaya et al., 2019). The electrolyte paste is one of the most important parts of a bio-battery, because a good electrolyte paste gives good durability. Researchers have therefore developed bio-batteries derived from environmentally friendly organic materials. Bio-batteries offer an alternative to conventional batteries that does not damage the environment and address public concern about the impact of battery waste, which is very dangerous for the environment (Siddiqui & Pathrikar, 2013). In making bio-batteries, natural materials can be used, including banana peels (Pulungan et al., 2017), durian peels (Khairiah & Destini, 2017), orange peels (Salafa et al., 2020), pineapple peels (Atina, 2015), and cassava peels. However, no bio-battery has yet been made using soybeans and coconut water. Soybeans contain minerals such as potassium, calcium and phosphorus, which are electrolytes capable of ionizing and conducting electricity (Suryaningsih, 2016), whereas coconut water contains natural electrolytes such as nitrogen, phosphorus, potassium, chlorine, sulfur, iron, calcium and magnesium; the potassium content of young coconut water is the highest (Parsena et al., 2016). This research was conducted to create alternative energy by utilizing biomass that is abundant in nature. In this study, an electrolyte paste was made from soybeans (SBs) and coconut water (CWs) at ratios of 7/1, 6/2, 5/3, 4/4, 3/5, 2/6 and 1/7. To support the analysis, an electrical voltage test and a battery endurance test with a wall clock were carried out. The main novelties of this study are (1) the use of soybean biomass with coconut water as an electrolyte paste, (2) the testing of bio-battery endurance with a wall clock, and (3) the comparison of the compositions of the two materials. Bio-battery manufacture. The flow chart of the experimental method is shown in Figure 1. Soybeans and young coconuts were obtained from Bandung, Indonesia. The soybeans were mashed using a blender. To determine the effect of the composition on the characteristics of the bio-battery, soybeans (SBs) and coconut water (CWs) were mixed at ratios of 7/1, 6/2, 5/3, 4/4, 3/5, 2/6 and 1/7. The electrolyte paste in a used 1.5-volt battery was then removed until the battery casing was clean. The empty casing was filled with the electrolyte paste made from soybeans and coconut water, and the battery was then closed. Bio-battery testing. 2.2.1 Electrical test. Batteries made from soybeans and coconut water were tested with a voltmeter to determine the voltage generated.
The voltage was recorded and then compared with that of conventional batteries. Endurance test. A bio-battery with an electrolyte paste derived from soybeans and coconut water was tested in a wall clock, and the length of time the clock kept running was recorded (in minutes). Bio-battery electric voltage. The bio-battery voltage is shown in Figure 2. It can be seen that a higher proportion of coconut water in the bio-battery gives a higher battery voltage. This is because the electrolytes in coconut water are present as minerals; coconut water contains, among others, potassium and chloride (Parsena et al., 2016). The reaction between potassium and chloride produces the salt potassium chloride, which conducts electricity in water because it ionizes. The ionization reaction is as follows (Singgih & Ikhwan, 2018): KCl → K⁺ + Cl⁻. Thus, the more ions produced, the greater the electric current and, as a result, the greater the conductivity of the electrolyte solution. Conversely, if fewer ions are produced, the electric current and the electrical conductivity are smaller (Purnomo, 2010). Bio-battery electrical life. The duration for which the bio-battery moved the clock hands is shown in Table 1. At the ratios 7/1, 6/2 and 5/3 (SBs/CWs) the battery was unable to move the clock hands, because the electric current generated was not sufficient. As shown in Section 3.1, the lower the coconut water content in the paste, the smaller the electric current produced by the battery. At the ratio 3/5 the battery moved the clock hands for the longest time, 328 minutes. This shows that the soybean content in the paste affects how long the battery lasts: with enough coconut water to conduct electricity, more soybeans in the paste give a longer battery life. This is because soybeans contain minerals such as potassium, calcium and phosphorus, as well as vitamins A and B; potassium, calcium and phosphate ions can conduct electric current (Suari, 2019). 3.3 Comparison of bio-battery samples with conventional batteries. A conventional battery has a voltage of 1.5 volts; a battery of the same size using soybeans and coconut water at a ratio of 1/7 produces a voltage of 1.2 volts (Figure 2). This shows that this bio-battery can be used as renewable alternative energy. The bio-battery life, however, still needs to be improved, because the bio-battery with electrolyte paste was able to move the wall clock hands for at most 328 minutes. CONCLUSION. The effect of the ratio of soybeans to coconut water on the quality of the bio-battery was tested. The coconut water in the paste functions not only as the electrolyte solution itself but also helps to ionize the soybean minerals. The higher the coconut water content in the electrolyte paste, the greater the bio-battery voltage, while the soybean content affects the bio-battery life: the more soybeans at the right coconut water content, the longer the battery lasts.
1,453.2
2021-04-20T00:00:00.000
[ "Agricultural And Food Sciences", "Engineering" ]
Identifying, and constructing, complex magnon band topology Magnetically ordered materials tend to support bands of coherent propagating spin wave, or magnon, excitations. Topologically protected surface states of magnons offer a new path towards coherent spin transport for spintronics applications. In this work we explore the variety of topological magnon band structures and provide insight into how to efficiently identify topological magnon bands in materials. We do this by adapting the topological quantum chemistry approach that has used constraints imposed by time reversal and crystalline symmetries to enumerate a large class of topological electronic bands. We show how to identify physically relevant models of gapped magnon band topology by using so-called decomposable elementary band representations, and in turn discuss how to use symmetry data to infer the presence of exotic symmetry enforced nodal topology. Introduction − There have been considerable efforts in the last few years to provide a taxonomy of nontrivial topological band structures enforced or allowed by time reversal and crystalline symmetries [1][2][3][4][5][6][7][8][9][10][11][12][13]. This work has brought powerful new concepts that tie crystal and magnetic structures to band topology. At the same time these ideas provide efficient methods to efficiently search for topological materials resulting in a vast database of ab initio driven predictions of new electronic topological materials [14,15]. Such materials include gapless and gapped bulk topological matter with protected boundary states and anomalous transport properties. The culmination of these efforts to classify band topology based on symmetry and to use symmetry data to diagnose topological bands is called topological quantum chemistry (TQC) [6,12]. In this paper, we show that the TQC approach can be adapted to magnon band topology, providing a classification of symmetry-determined topological bands in spin wave Hamiltonians. The ideas can be used to diagnose magnon topology on one hand, and on the other to build models and identify candidate topological magnon materials. The physical foundation for this work is that topological bands by definition cannot be built from a Wannier basis while preserving all underlying symmetries. Topological quantum chemistry rests on an enumeration of all possible Wannierizable band structures through socalled elementary band representations (EBRs), to be described in more detail below so that, essentially by elimination, one may establish whether some set of bands is topologically nontrivial. Ab initio methods are central to TQC. The closest analogue in widespread use to study magnetic excitations is linear spin wave theory which is based on an expansion, to quadratic order, of the spins in fluctuations around some magnetic structure. The goal of this paper is to show how to pass from elementary symmetry information − the crystal structure and the magnetic order − to linear spin wave models with nontrivial topology. Our starting point is to establish how crystal and time reversal symmetries are implemented within linear spin wave theory. In contrast to electronic systems, the band structures of interest emerge from an effective exchange Hamiltonian. We describe how this Hamiltonian, in conjunction with the minimal energy magnetic structure, fixes the symmetries of the problem. These are encoded in some magnetic space group. 
We then outline how to build band representations for magnons starting from the local moments on each magnetic site giving a complete table of all site symmetry groups compatible with magnetic order. Band representations minimally encode symmetry information in the magnon band structure. With these ingredients, we are in a position to identify constraints that magnons place on the possible symmetry data and hence on the possible topological bands. In particular, it turns out that magnons in systems with significant spin-orbit coupling form a subset of all electronic topological bands. With these foundations, we then show, first in general and then through a series of examples, how to use symmetry information alone to build exchange models whose elementary excitations have nontrivial gapped and nodal magnon topology and to identify candidate materials. Examples include Chern bands, antiferromagnetic topological insulators, and three-fold and six-fold nodal points. Crucially, our workflow can be straightforwardly reversed, to diagnose nontrivial topology from spin wave fits to experimental data. EBRs and Topology − Before getting into the specifics for magnons, we give a lightning introductory review of TQC. We refer the reader to the supplementary section [54] for more technical details that will not, however, be necessary to appreciate the remainder of this paper. The essential symmetry ingredients of TQC are nothing more than the symmetry group G M of the magnetic structure and the Wyckoff positions of the magnetic ions that appear in any structural refinement of a magnetic material. The group G M is generally one of the magnetic space groups that encodes combinations of crystallographic point group symmetries, lattice translations, time reversal symmetry and perhaps non-symmorphic elements. To each Wyckoff position q, we may assign a site symmetry group (SSG) G q defined as the subgroup of G M that leaves the site invariant. This is generally isomorphic to a magnetic point group. We then need to include some information about the underlying lattice degrees of freedom − the nature of the atomic orbitals. These necessarily transform under some representation of G q . Following Zak, from these representations of the magnetic SSG we may arrive at a representation of the full G M group by the standard process of induction [55]. The result is a so-called band representation (BR). The BR is a momentum space representation of all elements of G M that contains information about the connectivity of the bands and the topology. To connect to topology we define elementary band representations (EBRs) to be BRs that are not unitarily equivalent to a direct sum of two or more BRs. These hold a distinguished place in relation to topology because they are the elementary units from which all Wannierizable band structures can be built for a given symmetry group. Any set of bands that cannot be built from EBRs is necessarily topological overall. All EBRs for all magnetic space groups have been tabulated − each one characterized by eigenvalues of all symmetry operations at high symmetry momenta. For all 1, 651 magnetic space groups, there are roughly 20, 000 EBRs. In order to diagnose topological bands, one should in principle determine whether each energetically isolated set of bands can be written as a direct sum of EBRs with non-negative integer coefficients. If so, the bands are trivial. If not, they are symmetry-determined topological bands. 
A more fine-grained determination of the nature of the topology then requires further analysis. Symmetry-enforced nodal topological bands can be read off directly from the irreducible representations of dimension greater than one at high-symmetry points, lines and planes. Magnons and Symmetry − Building on the principles behind TQC we now discuss the ideas in relation to magnons. In this work we are mainly interested in crystalline solids with localized magnetic moments and a nonvanishing local dipolar order parameter S^α_i for site i and component α. The magnon or spin wave excitations are the transverse fluctuations of the local ordered moments. We restrict our attention to the typical case where these form coherent propagating bands. This means we neglect the role of multi-magnon states and possible interesting questions of novel topology [56] and fragility that arise from such states. We also neglect magnetic excitations beyond the ground state multiplet, which could be handled within a multi-boson formalism (see e.g. [36]) to which TQC ideas may also be applied. The symmetries of the magnon bands are descended from those of the magnetic Hamiltonian H_M, considered to be composed of exchange couplings, dipolar couplings, single-ion anisotropies and perhaps an external magnetic field. The magnetic order breaks the symmetries of the magnetic Hamiltonian down to a subgroup. It is important to note that the relevant symmetry groups for magnons are single-valued because the bands are spinless or bosonic. These are the groups that are relevant to weakly spin-orbit coupled electronic systems. However, in the context of magnons, these groups are relevant to the case where the moments and the spatial transformations are locked, which can only happen when spin-orbit coupling at the microscopic level is significant. The spin-orbit coupling is reflected in the appearance of anisotropies in the magnetic Hamiltonian. As is well known, there are many cases where the magnetic Hamiltonian has discrete or continuous rotation symmetries. In such cases, magnetic order may lead to residual symmetries described by the spin-space groups [57-60]. Topological quantum chemistry applied to such groups is beyond the scope of this work. We consider the case where these residual symmetries are those of a magnetic space group G_M with n_S sublattices in the magnetic primitive cell, leading to n_S bands computed from linear spin wave theory based on a Hamiltonian of Bogoliubov-de Gennes form, H = (1/2) Σ_k Υ†(k) M(k) Υ(k), where the transformation properties of the 2n_S-component vector Υ(k) can be inferred from the transformations of the transverse spin components S^±_i in a frame where S^z is the direction of the ordered moment. For reference, explicit formulas are given in the Supplementary Section [54]. To build band representations, we must first identify the SSG from that of the Wyckoff position of the magnetic ions by requiring that the on-site S^z transforms as the totally symmetric irrep of the SSG. This constraint reduces the possible 122 magnetic point groups to a set of 31 groups isomorphic to SSGs. The relevant orbital content is given by the local-frame transverse spin components S^±_i. We give a complete list of the magnetic SSGs in the Supplementary Section, together with the irreducible representations of the SSG for which S^±_i form a basis [54]. Given this information, one may build a band representation for magnons; again, explicit formulas are given in the Supplementary Section [54].
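To illustrate the kind of linear spin wave calculation referred to here, the sketch below treats the simplest possible case: a nearest-neighbour ferromagnetic Heisenberg model on the honeycomb lattice, for which no anomalous Bogoliubov terms appear and the Bloch matrix can be diagonalized directly. The couplings and conventions are illustrative, not those of any specific model in this paper; adding the symmetry-allowed anisotropies (such as the Kitaev term discussed below) is what gaps the two bands and gives them nonzero Chern numbers.

```python
import numpy as np

# Nearest-neighbour vectors of the honeycomb lattice (lattice constant a = 1)
DELTAS = np.array([[0.0, 1.0 / np.sqrt(3.0)],
                   [0.5, -0.5 / np.sqrt(3.0)],
                   [-0.5, -0.5 / np.sqrt(3.0)]])

def magnon_bands(kx, ky, J=1.0, S=0.5):
    """Linear-spin-wave energies of a nearest-neighbour ferromagnetic honeycomb Heisenberg model."""
    k = np.array([kx, ky])
    f = np.sum(np.exp(1j * DELTAS @ k))                       # structure factor over the three bonds
    h = J * S * np.array([[3.0, -f], [-np.conj(f), 3.0]])     # 2x2 Bloch matrix (no anomalous terms)
    return np.linalg.eigvalsh(h)                              # two magnon branches

print(magnon_bands(0.0, 0.0))   # ~[0, 3] = [0, 6JS] at k = 0: Goldstone mode plus optical branch
```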
Given an energetically isolated set of magnon bands, one may then ask whether it decomposes into EBRs. The EBRs relevant to magnons, corresponding to all magnetic structures and significant spin-orbit exchange, are tabulated. In the remainder of this paper we give concrete examples of how to use the tabulated EBRs to build models of topological magnons. We take two main routes. The first is to focus on cases where the symmetry information about band connectivity allows EBRs to split into disconnected bands; by definition, at least one of the resulting bands must be topological. Our second focus is nodal topology. Several models are known with Dirac and Weyl magnon touching points [16], but symmetry can enforce higher-order degeneracies (3-, 4- and 6-fold), and we show how to build models with such degeneracies. Magnon topology from decomposable EBRs − To build models of decomposable EBRs we focus on cases where the magnetic ions live on maximal Wyckoff positions, i.e. positions of maximal magnetic point group symmetry for a given G_M. These are distinguished by the fact that BRs induced from such sites are themselves EBRs and not composites of EBRs (apart from some well-understood exceptional cases). We give a complete table of decomposable EBRs that can be obtained from maximal Wyckoff positions and the allowed SSGs, organized by magnetic space group and Wyckoff position [54]. The utility of this table is that one may couple moments living on such Wyckoff positions and be sure that there will be nontrivial topology in the resulting magnon bands, provided the free parameters are tuned to avoid accidental degeneracies and provided the number of free parameters is adequate to reduce the symmetries to the required G_M. This approach is a highly efficient means of building models of magnon topology and contrasts with generic cases of nontrivial topology where, in practice, one should compute so-called symmetry indicators as a function of the free couplings to diagnose the topology. We take an example to illustrate the main ideas − the well-established case of Chern magnon bands in the Kitaev-Heisenberg honeycomb model with [111] polarized moments [31,61]. We reverse the usual logic to show how the model might have been inferred from the tabulated decomposable EBRs. Let us consider magnetic space group P-31m' (#162.77 in the BNS convention) and Wyckoff position 2c, corresponding to honeycomb layers. The magnetic site symmetry group is 32 and the moments are perpendicular to the honeycomb planes. The orbital basis on the 2c positions, (J^+_q, J^-_q), transforms under the 1E + 2E irreps of the SSG. Consultation of the tables in the Supplementary Section [54] or on the Bilbao crystallographic server [62,63] reveals that induction to the full space group yields a single EBR that is decomposable into two bands. From symmetry alone we have therefore inferred the presence of nontrivial magnon band topology. A guide to using the Bilbao tables is given in the Supplementary Section [54]. With this established, we may now build a model hosting the decomposable EBR and further characterize the nature of the topology. To do this, one should write down couplings between the magnetic moments that both stabilize the required magnetic structure and respect the resulting magnetic space group symmetries. Both conditions are important.
For example, it is straightforward to stabilize the structure with ferromagnetic Heisenberg exchange, but the resulting model has higher symmetry than P-31m' owing to a spin-space symmetry coming from the spin rotation symmetry of the underlying Hamiltonian. One may systematically compute all exchange couplings allowed by symmetry: to nearest neighbor these are the Heisenberg, Kitaev, Γ and Γ′ terms [31,64]. Kitaev and Heisenberg are sufficient to respect P-31m', and a magnetic field may be applied along [111] to stabilize the structure if necessary. A linear spin wave calculation then reveals two propagating magnon bands with a gap between them. For decomposable EBRs the topology is not necessarily symmetry indicated, but it turns out that the C3 symmetry indicator formula [1] for the Chern number characterizes the topology in this case: exp(2πiC/3) = Π_n Θ_n(Γ) Θ_n(K) Θ_n(K′), where the product is over the n bands and Θ_n(k) is the eigenvalue of C3 at wavevector k in band n. This reveals that the model has two magnon bands with Chern numbers ±1, the order depending on the sign of the Kitaev exchange. We now sketch another example of gapped band topology working from the table of decomposable EBRs, this time without reference to an example already in the literature. Consider space group P4 (#75.1, a type I MSG) with Wyckoff position 2c and irreps 2B for the transverse spin components. This again leads to a single decomposable EBR, now with SSG C2, compatible with ferromagnetic [001] magnetic order. The lattice is tetragonal with a basis (0, 1/2, 0) and (1/2, 0, 0). We compute all symmetry-allowed exchange couplings from first up to fourth nearest neighbors and choose a set of couplings that stabilizes the required magnetic structure. The linear spin wave spectrum has two dispersive gapped bands, and the Chern number can, once again, be computed from a symmetry indicator formula, exp(iπC/2) = Π_n ξ_n(Γ) ξ_n(M) ζ_n(X), where C is the Chern number of the n band(s), while ξ(k) and ζ(k) are the eigenvalues of C4 and C2, respectively. Fig. 1 shows the lattice structure and the band structure with the eigenvalues indicated. The computed Chern numbers are ±1. The method is not restricted to diagnosing Chern bands, as we now show with a third example. We take space group Pc6/mcc (#192.252) and Wyckoff position 4c, which has SSG -6m2. This corresponds to an AA-stacked honeycomb lattice with moments perpendicular to the plane that are ferromagnetically ordered within each plane and antiferromagnetically aligned between planes. Crucially, this system is symmetric under time reversal combined with a translation that maps one layer to the next. The two magnon bands within each layer each carry a net Chern number, which reverses between layers. One may show [54] that the coupled four magnon bands correspond to a single EBR that is decomposable. The result is an antiferromagnetic topological insulator that can be realized with an anisotropic in-plane exchange model together with Heisenberg exchange between the layers. An explicit calculation of the band structure is provided for reference [54] (see also [65]). Symmetry enforced nodal topology − In this part, we turn our attention to nodal topology, focussing on exotic degeneracies that are enforced by symmetry: magnonic analogues of multifold fermion degeneracies [66,67]. In the supplementary section we show how to use the Bilbao tables [62,63] to establish symmetry-enforced degeneracies and give extensive tables of such degeneracies for magnons [54]. Here we show how to build models based on the symmetry information.
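The rotation-eigenvalue indicators above determine the Chern numbers only modulo the order of the rotation, so a direct numerical evaluation on a discretized Brillouin zone (the Fukui-Hatsugai construction) is a common cross-check for the gapped examples just described. The sketch below applies it to a generic two-band toy Bloch Hamiltonian with a known Chern number, not to a spin wave model from the text; for genuine magnon problems with anomalous (Bogoliubov) terms the eigenvectors would additionally need to be normalized with the bosonic para-unitary metric.

```python
import numpy as np

def bloch_h(kx, ky, m=1.0):
    """Toy two-band Bloch Hamiltonian h(k) = d(k).sigma with a known Chern number."""
    d = np.array([np.sin(kx), np.sin(ky), m + np.cos(kx) + np.cos(ky)])
    sx = np.array([[0, 1], [1, 0]], complex)
    sy = np.array([[0, -1j], [1j, 0]], complex)
    sz = np.array([[1, 0], [0, -1]], complex)
    return d[0] * sx + d[1] * sy + d[2] * sz

def chern_number(h, band=0, n=60):
    """Fukui-Hatsugai lattice Chern number of the selected band of h(kx, ky)."""
    ks = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
    u = np.empty((n, n, 2), complex)
    for i, kx in enumerate(ks):
        for j, ky in enumerate(ks):
            _, vecs = np.linalg.eigh(h(kx, ky))
            u[i, j] = vecs[:, band]
    total = 0.0
    for i in range(n):
        for j in range(n):
            u1, u2 = u[i, j], u[(i + 1) % n, j]
            u3, u4 = u[(i + 1) % n, (j + 1) % n], u[i, (j + 1) % n]
            plaquette = (np.vdot(u1, u2) * np.vdot(u2, u3)
                         * np.vdot(u3, u4) * np.vdot(u4, u1))
            total += np.angle(plaquette)
    return total / (2 * np.pi)

print(round(chern_number(bloch_h)))   # |C| = 1 expected for 0 < m < 2; the sign depends on conventions
```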
The first example is the all-in-all-out (AIAO) magnetic structure on the pyrochlore lattice [76]. The magnetic structure has a magnetic 2-fold screw and a magnetic S4 symmetry. The single-valued symmetry group enforces a 3-fold degenerate point at Γ [62,63]. We may establish this fact directly from a simple model for the magnons consisting of antiferromagnetic Heisenberg coupling with a weak [111] Ising anisotropy in the exchange that lifts the considerable degeneracy of the Heisenberg model [77] in favor of the AIAO structure. A linear spin wave calculation based on this model [54] reveals four dispersive modes with a spectral gap and the three-fold degenerate point at Γ. The existence of this quadratically dispersing three-fold point has previously been noted in Ref. [78] as a parent state for Weyl fermions upon symmetry breaking with strain or an applied magnetic field. Our next example has both three-fold and six-fold degenerate magnons. Inspection of the table of degeneracies [54] reveals six-fold degeneracies for magnetic space group 230.148 and Wyckoff position 24c. The nearest-neighbor exchange leads to two decoupled magnetic sublattices of corner-sharing triangles. This is the hyperkagome structure that arises on the R sites of garnets with chemical formula R3M5O12. The magnetic structure compatible with 230.148 is shown in Fig. 2. The moments are oriented along three cubic directions on each triangular face. This structure is observed in the material Dy3M5O12 (M = Al, Ga) [79-81]. The 24 Wyckoff sites are composed of 12 magnetic sublattices plus a translation through (1/2, 1/2, 1/2), as the lattice is bcc. We therefore expect 12 magnon modes. We compute the symmetry-allowed exchange couplings to nearest neighbor. There are six such couplings, and one of these is an effective Ising exchange with easy axes along the cubic directions on the different sublattices in the pattern required to stabilize the magnetic structure. With this as the dominant coupling, we consider a model with all six nearest-neighbor couplings included and with antiferromagnetic Heisenberg exchange coupling the two hyperkagome sublattices. A sample spin wave spectrum is shown in Fig. 2. This has several multifold bosonic points, including four 3-fold points at Γ with quadratic dispersion and one 6-fold point at H on the zone boundary with linear dispersion that is a doubled spin-1 Weyl point. All the degeneracies in the spectrum are compatible with the group theory analysis. Discussion − The classification of topological materials based on crystalline and time reversal symmetries is at a mature stage. In the foregoing we have connected the symmetry-based classification scheme based on elementary band representations to topological magnons. To do this, we showed how symmetries are inherited by magnons from those of the underlying exchange Hamiltonian and indicated how to build band representations for magnons. We have given conditions for the existing tables of EBRs to be applicable to topological magnons. We have shown through several examples that one can use the computed decomposable elementary band representations for single-valued magnetic space groups to build realistic, non-fine-tuned models of topological magnon band structures. We have also used tabulated symmetry-enforced degeneracies as a guide to building exchange models of exotic nodal topology such as sixfold degenerate touching points. Magnons provide an excellent platform to explore the interplay of magnetic symmetries and topology in conjunction with inelastic neutron scattering.
In addition to model-building and experimental discovery within the framework laid out here, important open avenues are to explore magnon topology beyond the decomposable EBR paradigm within the TQC framework and to extend TQC to the spin-space groups that are applicable to Heisenberg models among other systems. PM acknowledges useful discussions with Alexei Andreanov on magnetism in the garnets. This work was in part supported by the Deutsche Forschungsgemeinschaft under grants SFB 1143 (project-id 247310070) and the cluster of excellence ct.qmat (EXC 2147, project-id 390858490). Abstract This section contains supporting information for the paper "Identifying, and constructing, complex magnon band topology". Section I briefly introduces magnetic space groups and their band representations; EBRs are introduced, together with their role in accounting for topological bands. Section II discusses band representations in relation to magnons, enumerates all relevant site symmetry groups, and reviews linear spin wave theory, the Berry phase for bosons and the implementation of magnetic symmetries within this formalism. Subsection D also describes how to use the Bilbao tables to extract information about EBRs and nodal topology. Section III describes, in detail, several examples of decomposable EBRs for magnons. Finally, Section IV discusses nodal topology originating from EBRs. I. SPACE GROUPS AND BAND REPRESENTATIONS We have described briefly the essential ideas behind the EBR approach to topological band structures. We now make these ideas more precise by first reviewing aspects of the theory of space groups and band representations of these groups. A. Basic definitions and properties of space groups A space group G is a group of crystal lattice symmetries. There are 230 such groups in three dimensions, each of which has a coset decomposition $G = \bigcup_\alpha \{g_\alpha | t_\alpha\}\, T$, where T is the group of primitive lattice translations (forming a normal subgroup) and the coset representatives are elements of the form {g|t} combining a point group element g and a non-Bravais translation t. The combination rule is $\{g_1|t_1\}\{g_2|t_2\} = \{g_1 g_2 | t_1 + g_1 t_2\}$. A site symmetry group of a real space point q, G_q, is the finite subgroup of G that leaves the point invariant. G_q is isomorphic to a point group. A Wyckoff position is the set of points inside the primitive cell whose site symmetry groups are the same (or, more precisely, in the same conjugacy class). A random point will tend to have only the identity as its site symmetry group and it is then labelled as a general position. Each point q has an orbit, which is the set of points reached from q through elements g of the space group. Each Wyckoff position has a multiplicity that counts the number of points in the orbit of the position that live in the same cell. The above definitions refer to the crystal in real space. B. Band representations For constructing representations of the space group, going to momentum space provides a basis of states on which the band representation can act. In the induced band representation, the space group element g and the site symmetry group element h are related by $g g_\alpha = \{E|t_{\beta\alpha}\}\, g_\beta\, h$, where $t_{\beta\alpha} = g q_\alpha - q_\beta$ and R is the point group element in g. So the α and β coset representatives are fixed given g and h. Evidently the band representation links k and Rk. In the case where Rk is the same as k up to a reciprocal lattice vector, the corresponding block in the band representation is a representation of the little group at k. However, the band representation has off-diagonal blocks that contain information about how different points in the zone are connected.
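The combination rule and the notions of orbit and multiplicity just defined are easy to make concrete in code. The following is a minimal sketch; the group elements used at the end are a hypothetical set chosen for illustration, not one of the space groups discussed in this work.

```python
import numpy as np

class Seitz:
    """Space-group element {R|t} acting on fractional coordinates as r -> R r + t."""
    def __init__(self, R, t):
        self.R = np.asarray(R, dtype=float)
        self.t = np.asarray(t, dtype=float)

    def __mul__(self, other):
        # Combination rule {R1|t1}{R2|t2} = {R1 R2 | t1 + R1 t2}
        return Seitz(self.R @ other.R, self.t + self.R @ other.t)

    def act(self, r):
        return self.R @ np.asarray(r, dtype=float) + self.t

def orbit(point, elements, tol=1e-8):
    """Orbit of a point within one cell (positions identified modulo lattice translations)."""
    pts = []
    for g in elements:
        q = g.act(point) % 1.0
        if not any(np.allclose(q, p, atol=tol) for p in pts):
            pts.append(q)
    return pts

# Hypothetical example: identity, C2 about z, inversion, and their product.
E   = Seitz(np.eye(3), [0, 0, 0])
C2z = Seitz(np.diag([-1, -1, 1]), [0, 0, 0])
inv = Seitz(-np.eye(3), [0, 0, 0])
group = [E, C2z, inv, C2z * inv]

# A general position has the full multiplicity (here 4 points in the orbit).
print([p.tolist() for p in orbit([0.25, 0.0, 0.1], group)])
```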
C. Elementary Band Representations A band representation constructed via the method detailed in the previous section may be decomposable into the direct sum of two or more band representations. If this is the case it is called composite, and otherwise elementary. More precisely, we first define an equivalence between two band representations ρ¹_G and ρ²_G: they are equivalent if it is possible to find a unitary S(λ; h, k), with λ ∈ [0, 1], that tunes smoothly from one of the two band representations to the other. Such a function preserves the quantization of any Wilson loops in momentum space. This notion of equivalence is explicitly realized by inducing a BR from two distinct sites q1, q2 with respective site symmetry groups G_{q1} and G_{q2}. A line between the two points is associated with the SSG G_{q1} ∩ G_{q2}, and by moving along this line the induced band representation defines S(λ; h, k). It follows that equivalence of band representations amounts to being able to find a site that interpolates between the SSGs of the endpoint BRs. With this notion of equivalence, we now define composite BRs to be those that are equivalent to direct sums of BRs. An EBR can be characterized by the multiplicity of irreps at all high symmetry momenta. Elementary band representations (EBRs), thus defined, are the fundamental symmetry-derived bands built from localized orbitals. In contrast, as we noted in the main text, the key distinguishing feature of topological bands is that they are not Wannier localizable. The foundation of TQC is a complete enumeration of the EBRs for all 1651 magnetic space groups together with the compatibility relations that constrain how little groups at particular momenta are connected. This task, while considerable, is possible at all because the number of EBRs is finite, bounded by the number of irreps of SSGs at all Wyckoff positions of all magnetic space groups. In addition, many BRs induced in this way are actually composite. It turns out that to capture all EBRs it suffices (modulo some carefully characterized exceptions) to consider only the irreps of so-called maximal SSGs. Maximal SSGs are defined as SSGs G_q such that there is no finite group H with G_q ⊂ H ⊂ G. Given each EBR, one may further ask whether it is decomposable or not by computing the compatibility relations for the constituent bands. The result is that there are 20206 magnetic EBRs belonging to the 1651 (single-valued or spinless) magnetic space groups, of which 1907 are decomposable. For our purposes, these are the relevant magnetic space groups. A similar enumeration has been carried out also for the double-valued (or spinful) magnetic space groups. A complete tabulation of these EBRs organized by magnetic space group may be found on the Bilbao Crystallographic Server [1,2]. Given a set of bands that are energetically isolated, one may then assess whether they can be decomposed, on their own, into a combination of EBRs with non-negative integer coefficients. If so, the bands are topologically trivial (or at most fragile). If not, they are topologically non-trivial: in this situation, should the coefficients be integer-valued, including negative integers, the topology is fragile, and otherwise it is stable. In addition, single EBRs may be composed of multiple bands that are not forced to be connected by compatibility relations. In the main text we assign particular importance to such cases. These decomposable EBRs have the property that at least one of the disconnected component sets of bands must be topological.
In cases where one component is trivial, the decomposable EBR is a self-contained case of fragile topology. In the examples we have explored, the two disconnected components are both topological. EBRs are also useful for assessing the existence of topological semimetals. These arise from connected EBRs where, in electronic systems, the bands are filled up to touching points or lines within the EBR. All these insights have been put to use diagnosing band topology in the electronic band structures of materials. Given the symmetry group of a crystalline material and the Wyckoff positions and orbitals of the constituent ions, symmetry places strong constraints on the EBRs that may occur in the band structure. From the computed band structure (usually performed along high symmetry directions in momentum space), one may compute the multiplicities of the irreps at these momenta. From the identities of the tabulated EBRs one may then make the assessment of whether some given set of bands is reducible into EBRs. This approach massively generalizes the Fu-Kane criterion for two-dimensional topological insulators that, in the original formulation, allows one to compute the Z2 invariant purely from discrete parity data at high symmetry momenta. By now, analogous formulas, called symmetry indicator formulas, are known for all space groups and all (double-valued) magnetic space groups, each of which allows one to diagnose directly from irrep multiplicities whether the band or group of bands is trivial or not. II. MAGNETIC SYMMETRY AND MAGNONS In this section, we briefly review the essential facts about magnons and their symmetries. Magnons are to be understood as coherent magnetic excitations about some spontaneous or field-induced magnetic structure. We focus our attention on commensurate magnetic order characterized by some periodic arrangement of moments with nonzero vacuum expectation value ⟨J_i^α⟩ for sites i and components α. To understand magnon symmetries it is helpful to begin with the magnetic Hamiltonian describing coupled magnetic moments on a lattice. The lattice itself has symmetries specified by one of the 230 space groups. The Hamiltonian may have higher symmetry however: spin rotation symmetry for Heisenberg couplings, or time reversal symmetry when the couplings are of even degree in the moments. In the most general case, the magnetic Hamiltonian has a spin-space symmetry composed of elements in which spatial operations and spin rotations are partially decoupled. However, in this paper, we restrict our attention to the case of strongly spin-orbit-coupled moments, so that the moments are locked to spatial transformations. Under this assumption, the magnetic space groups are adequate to describe all the relevant symmetries. An important implication of this assumption is that we are explicitly or implicitly considering the case where the magnetic interactions of all types allowed by spin-space-locked symmetry are present and significant. In other words, the exchange is considered to be maximally anisotropic. From the point of view of materials, the restriction to magnetic space groups is strictly speaking correct, as spin-orbit coupling is always present even when the orbital moment is quenched at the single ion level. Anisotropies in the exchange will be present even in such instances.
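The diagnosis just outlined, testing whether the irrep content of an isolated set of bands can be written as a combination of EBRs with non-negative integer, general integer, or no integer coefficients, can be phrased as a small integer linear-algebra problem. The following is a minimal brute-force sketch using hypothetical symmetry-data vectors; it deliberately ignores the caveat noted above that symmetry data alone cannot always exclude fragile topology.

```python
import itertools
import numpy as np

def classify_bands(band_vec, ebr_vecs, max_coeff=3):
    """Classify a symmetry-data vector of a band set against a list of EBR vectors.

    band_vec: irrep multiplicities of the bands at the high-symmetry momenta.
    ebr_vecs: rows are the corresponding multiplicity vectors of the EBRs.
    Brute-forces integer coefficients in [-max_coeff, max_coeff]: 'trivial' if a
    non-negative solution exists, 'fragile' if only solutions with negative
    entries exist, otherwise 'stable (or no solution in the searched range)'.
    """
    band_vec = np.asarray(band_vec)
    ebr_vecs = np.asarray(ebr_vecs)
    found_integer, found_nonneg = False, False
    for coeffs in itertools.product(range(-max_coeff, max_coeff + 1), repeat=len(ebr_vecs)):
        if np.array_equal(np.asarray(coeffs) @ ebr_vecs, band_vec):
            found_integer = True
            if min(coeffs) >= 0:
                found_nonneg = True
                break
    if found_nonneg:
        return "trivial"
    if found_integer:
        return "fragile"
    return "stable (or no solution in the searched range)"

# Hypothetical 3-component symmetry-data vectors for two EBRs and three band sets.
ebrs = [[1, 1, 2], [1, 0, 1]]
print(classify_bands([2, 1, 3], ebrs))   # EBR1 + EBR2 -> trivial
print(classify_bands([0, 1, 1], ebrs))   # EBR1 - EBR2 -> fragile
print(classify_bands([0, 0, 1], ebrs))   # no integer combination -> stable
```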
However, for practical purposes, this assumption is too severe, as there are many materials where the interactions are experimentally indistinguishable from the Heisenberg limit or where the spin-orbit coupling is weak enough that a residual spin-space symmetry remains. Such cases are discussed in greater detail in Ref. 3. For such cases, we stress that the techniques we employ can be used straightforwardly to study spin-space groups also. But since these have not yet been tabulated, we leave a systematic study of their topological properties as a task for the future. As discussed above, magnetic space groups can be classified into four different types. The magnetic space group symmetries of the magnetic Hamiltonian fall into classes I or II. Respectively, these are the ordinary space groups (I) and the grey groups of the form G + T G, where T is the time reversal operation (II). To build a band representation we require the magnetic site symmetry groups. These can be viewed as the set of elements of G_M that both leave the site invariant up to a primitive translation and leave the magnetic order invariant. Thus, given q1 in the orbit of the Wyckoff position, the condition is that applying the elements of the magnetic site symmetry group G_{q1} leaves the magnetic order invariant. Expressed another way, J^z must transform like the totally symmetric irrep of G_{q1}. Those Wyckoff positions that do not satisfy this constraint are not compatible with order and must reduce to Wyckoff positions of a less symmetric magnetic space group. Using this constraint, we recover the magnetic structures compatible with the magnetic space group by listing the Wyckoff positions compatible with order and generating the moment on each site of the orbit: the moment at q_i, the site of the orbit reached by the coset representative g_i, is obtained by applying to J^z the rotation matrix R_i^{zα} associated with g_i. None of the grey groups are possible, since time reversal does not preserve the magnetic order. In total, 31 out of the 122 magnetic point groups are possible magnetic site symmetry groups. These are listed in Table I. We are now in a position to discuss magnons. These are transverse modes built from the J±_i components. In order to build up band representations for magnons, the starting point is the set of site symmetry groups for which J^z transforms as the totally symmetric representation, listed in Table I. Given these groups, we may establish how the transverse spin components transform, and this information completely fixes the allowed representations of the SSG for the purposes of building the band representation. These SSG representations are also given in Table I. From the table we see that J±_i will, in general, induce a pair of EBRs, which are the same if real, or complex conjugates of one another if complex. This fact reflects the distinctive paraunitarity of the bosonic Bogoliubov diagonalization, which produces two sets of bands at positive and negative energies with complex conjugated eigenvectors, thus redundantly encoding information about the band structure. It is important to note that the Herring criterion forbids these complex conjugated EBRs from pairing within a single EBR when the anti-unitary elements are considered. This is a result of the magnetic order constraint preserving J^z, which translates into the prohibition of mixing J+ and J−. What distinguishes band representations induced from these SSG representations? We have seen that the interesting band representations are the EBRs obtained from the maximal Wyckoff positions, since all the other BRs can be seen as composites of these.
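The compatibility condition just stated, namely that every element of the magnetic site symmetry group must leave the moment direction invariant (with moments transforming as pseudovectors, and time reversal flipping them), can be checked numerically. The sketch below is a minimal illustration with hypothetical operations; it is not a reproduction of the group-theoretical enumeration behind Table I.

```python
import numpy as np

def pseudovector_rep(R):
    """Transformation of an axial vector (magnetic moment) under the orthogonal
    matrix R: moments pick up an extra factor det(R) relative to polar vectors."""
    R = np.asarray(R, dtype=float)
    return np.linalg.det(R) * R

def compatible_with_moment(site_group, n=(0, 0, 1), antiunitary=None, tol=1e-8):
    """True if every operation leaves the moment direction n invariant.

    site_group: list of 3x3 orthogonal matrices (unitary operations).
    antiunitary: optional list of matrices combined with time reversal,
    which additionally flips the moment (J -> -J).
    """
    n = np.asarray(n, dtype=float)
    ok = all(np.allclose(pseudovector_rep(R) @ n, n, atol=tol) for R in site_group)
    if antiunitary:
        ok = ok and all(np.allclose(-pseudovector_rep(R) @ n, n, atol=tol) for R in antiunitary)
    return ok

# Hypothetical checks for a moment along z.
C3z = np.array([[np.cos(2 * np.pi / 3), -np.sin(2 * np.pi / 3), 0],
                [np.sin(2 * np.pi / 3),  np.cos(2 * np.pi / 3), 0],
                [0, 0, 1]])
mirror_z = np.diag([1, 1, -1])   # horizontal mirror
C2x = np.diag([1, -1, -1])       # two-fold rotation about x

print(compatible_with_moment([np.eye(3), C3z]))                 # True: rotation about the moment
print(compatible_with_moment([mirror_z]))                       # True: det = -1 compensates the flip
print(compatible_with_moment([C2x]))                            # False: flips the moment
print(compatible_with_moment([np.eye(3)], antiunitary=[C2x]))   # True: C2x combined with T is allowed
```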
Therefore the EBRs can be divided into two groups for magnons. One group is induced from maximal Wyckoff positions compatible with magnetic order − they can be induced directly starting from the orbitals J±_q on the maximal Wyckoff positions q themselves. The second group has maximal Wyckoff positions that are not compatible with magnetic order, or comes from representations describing different orbitals than J±_q. These can only be induced as part of a composite representation from orbitals J±_{qL} on a less symmetric Wyckoff position qL. In practice, the first group allows one to construct magnon spectra composed of a single EBR, with straightforward topological identification once a gap is present. All the decomposable EBRs which can appear among this kind of single EBR in magnon band structures are listed in Tab. X. A. Linear Spin Wave Theory So far we have discussed the transformation properties of magnons at a relatively abstract level. To connect to the magnon band structures of materials we use the standard Holstein-Primakoff transformation, $\hat{J}^z = S - \hat{a}^\dagger\hat{a}$, $\hat{J}^+ = \sqrt{2S - \hat{a}^\dagger\hat{a}}\,\hat{a}$, $\hat{J}^- = \hat{a}^\dagger\sqrt{2S - \hat{a}^\dagger\hat{a}}$, where the spins are of length S and the bosons a, a† satisfy the usual commutation relation [â, â†] = 1. Linearizing these gives $\hat{J}^+ \approx \sqrt{2S}\,\hat{a}$ and $\hat{J}^- \approx \sqrt{2S}\,\hat{a}^\dagger$. Using this bosonic representation for the spins and expanding the magnetic Hamiltonian around the mean field ground state leads to the quadratic Hamiltonian (Eqs. 15 and 16) of Bogoliubov–de Gennes form, $H_{SW} = \tfrac{1}{2}\sum_k \Psi^\dagger_k M(k)\Psi_k$ with $\Psi_k = (\hat{a}_{k1},\ldots,\hat{a}_{km},\hat{a}^\dagger_{-k1},\ldots,\hat{a}^\dagger_{-km})^T$, where the blocks $A_{ab}(k)$ and $B_{ab}(k)$ of M(k) (Eqs. 17 and 18) depend on the exchange couplings in the local quantization frame. These expressions, which carry a factor of one-half, define the couplings $J^{\alpha\beta}_{ab}$ for α, β = ±. The diagonalizing transformation V applied to Eq. 15 to find the spin wave spectrum must preserve the bosonic commutation relations, which are encoded in the matrix η with η_{ab} = 1 if a = b ≤ m, η_{ab} = −1 if a = b ≥ m + 1, and zero otherwise. In order for both conditions to be satisfied, V is not unitary in general, as would be the case for fermions, but paraunitary, meaning that $V^\dagger \eta V = \eta$. The transformation is unitary only in the case where the number non-conserving terms in the Hamiltonian vanish. B. Berry Phase and Berry Curvature The Berry phase is of central importance to band topology. For bosonic systems the Berry phase for band n is constructed from the Berry connection $A^{(n)}_\mu(k)$ built from the paraunitary eigenvectors, with $P_n(k)$ the projector onto the nth band. From this one may compute the Berry curvature for the nth band, $\Omega^{(n)}_{\mu\nu}(k) = \partial_\mu A^{(n)}_\nu(k) - \partial_\nu A^{(n)}_\mu(k)$. The integral of this curvature over a 2D slice through the Brillouin zone is quantized and deformable only by closing a band gap. C. Symmetries In this section, we show how to build a representation of the group elements for magnons. Leaving time reversal aside for now, a group element takes the form S = {g|t}, acting on a lattice site $R_i + r_a$ as $R_i + r_a \mapsto g(R_i + r_a) + t$. Under this transformation, local moments are mapped from one lattice position to another, preserving the moment orientation and, in general, rotating the transverse components. Under a C_n rotation about the moment orientation the transverse components acquire opposite phases; we further note that inversion leaves moments invariant, as they are pseudovectors, and that reflections are equivalent to an inversion times a C2 with axis perpendicular to the mirror plane. The space group element S acting on $\hat{J}^\pm_{ia}$ therefore produces the transverse operators on the image sites multiplied by matrices $[U^\pm_g]_{ab}$ that permute sublattices and carry out local rotations. We then Fourier transform as $\hat{J}^\pm_{ka} \propto \sum_i e^{ik\cdot(R_i + \nu r_a)}\,\hat{J}^\pm_{ia}$, where ν = 0, 1 keeps track of both standard conventions.
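A minimal numerical sketch of the paraunitary diagonalization just described is given below, following Colpa's Cholesky-based construction for a positive-definite bosonic Bogoliubov–de Gennes matrix. The single-mode matrix at the end is a hypothetical example, not one of the models treated in this work.

```python
import numpy as np

def colpa_diagonalize(M):
    """Paraunitary diagonalization of a positive-definite bosonic BdG matrix M(k)
    of size 2m x 2m, following Colpa's Cholesky method. Returns the energies E,
    the paraunitary matrix T with T^dag M T = diag(E) and T^dag eta T = eta,
    and the metric eta."""
    M = np.asarray(M, dtype=complex)
    m = M.shape[0] // 2
    eta = np.diag([1.0] * m + [-1.0] * m)
    K = np.linalg.cholesky(M).conj().T       # M = K^dag K, K upper triangular
    W = K @ eta @ K.conj().T
    lam, U = np.linalg.eigh(W)
    order = np.argsort(lam)[::-1]            # positive eigenvalues first
    lam, U = lam[order], U[:, order]
    E = eta @ np.diag(lam)                   # positive diagonal matrix of energies
    T = np.linalg.solve(K, U) @ np.sqrt(E)
    return np.diag(E).real, T, eta

# Hypothetical single-mode example: H = A a^dag a + (B/2)(a^dag a^dag + a a).
A, B = 2.0, 0.5
M = np.array([[A, B],
              [B, A]])
E, T, eta = colpa_diagonalize(M)
print(E)                                            # both entries ~ sqrt(A^2 - B^2)
print(np.allclose(T.conj().T @ eta @ T, eta))       # paraunitarity check -> True
print(np.allclose(T.conj().T @ M @ T, np.diag(E)))  # diagonalization check -> True
```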
A short calculation reveals the corresponding transformation law for the Fourier-transformed operators. On the basis of this transformation law and the invariance of H_SW, one may show that the Bloch matrices M(k) and M(Rk) are related by the representation matrices of the space group element. Including time reversal, which acts antiunitarily on the transverse components, completes the set of transformations on the transverse components of the magnetic moment, allowing one to construct a representation on the basis of $\hat{a}_{ka}$ and $\hat{a}^\dagger_{-ka}$. D. Use of the Bilbao Crystallographic Server MBANDREP tool for magnons Here we explain briefly how to make practical use of the magnetic band representation tool MBANDREP on the Bilbao Crystallographic Server for magnon systems [5,6]. This tool is very useful for topological magnons for two main purposes: identifying decomposable EBRs and identifying symmetry-enforced degeneracies. Consider, as a representative case, the output for the honeycomb example of Sec. III A below. Of the six possible band representations there, we can immediately neglect the double-valued ones (the last three columns, with barred irreps) and focus on the single-valued ones, as is appropriate for spin waves. In particular, the orbitals J±_q transform under the unitary subgroup, 3, of the site-symmetry group as the irreps 1E and 2E, as we can see in Tab. I. Therefore we are interested in the EBRs 1E ↑ G(2) and 2E ↑ G(2) (circled in red in the MBANDREP output), where the number in parentheses indicates the number of bands given by this Wyckoff position (here Wyckoff position 2c is a two-sublattice basis producing two bands). One of the irreps 1E or 2E is associated with the positive energy bands, while the complex conjugated one is associated with the negative energy bands produced by the Bogoliubov diagonalization. We therefore see immediately that the orbitals J±_q induce a decomposable EBR, meaning that a gap is topological. A list of all possible single decomposable EBRs for magnons can be found in Tab. X. III. EXAMPLES OF DECOMPOSABLE EBRS FOR MAGNONS A. Honeycomb Heisenberg-Kitaev FM [111] model For simplicity, the first case considered is the two-dimensional honeycomb Heisenberg-Kitaev ferromagnet. It has already been shown in Refs. [7,8] that this model can host topological magnon bands; for certain moment directions, however, an emergent spin-space symmetry enforces the gap closure [3]. In this section, we focus on the case where the moment is along [111] and show that the nontrivial topology can be understood from the perspective of a decomposable EBR. Crystal structure The crystal structure we consider is a honeycomb lattice with an edge-shared octahedral environment around the sites, so that Kitaev couplings are allowed by symmetry. The magnetic moments are polarized perpendicular to the honeycomb plane. This magnetic structure is described by the magnetic group P-31m' (#162.77 in the BNS setting) with Wyckoff position 2c, which has site symmetry group 32'. The lattice primitive vectors and the two-site basis coming from Wyckoff position 2c (origin at the center of the hexagon) are the standard ones for the honeycomb lattice, and the generators of the group are expressed in a frame in which z corresponds to [111]. Also, we define for later use in the exchange coupling matrix the bonds joining nearest neighbors, namely the x, y and z bonds of Eq. 42. Exchange Hamiltonian The nearest neighbor model on this lattice has, as symmetry-allowed exchange terms, the Heisenberg J, Kitaev K, Γ and Γ′ couplings on the x, y, z bonds in Eq. 42. In addition, we allow for a magnetic field of magnitude h in the [111] direction. The linear spin wave Hamiltonian contains only the J, K and h couplings: the Γ and Γ′ terms merely renormalize the J, K, h model, so we have omitted them for simplicity. Band topology The orbital basis (J+_q, J−_q) lives on the Wyckoff position 2c with its associated site symmetry group generators (here we use the primitive lattice basis), and the orbit transformations g1 = {E|0} and g2 = {−1|1, 1, 0} form the coset decomposition $G = \bigcup_\alpha g_\alpha (G_{q_1^{2c}} T)$.
The group $G_{q_1^{2c}}$ is therefore isomorphic to 32' and the orbitals transform under the irreps 1E and 2E of its unitary subgroup, which induce a two-band decomposable elementary band representation (see Table X). The two bands, once split, produce a topological gap with chiral surface states. The system is a Chern insulator with a bulk invariant associated with a non-trivial Wilson loop that can be computed through the symmetry indicator formula of the point group C3, $e^{2\pi i C/3} = \prod_n \Theta_n(\Gamma)\,\Theta_n(K)\,\Theta_n(K')$, where C is the Chern number of the n band(s) and Θ(k) are the eigenvalues of C3. The eigenvalues and irreps of the two bands correspond to the induced 2E ↑ G band representation (while 1E ↑ G is found for the negative energies). B. Honeycomb XYZ-DM FM [001] model The honeycomb lattice offers another famous topological gapped model, the Haldane model, which we study here in the context of EBRs. The isotropic Heisenberg model on the honeycomb lattice has Dirac cones, protected by PT symmetry and pinned at K by C3. To lift this degeneracy, the spin-space time-reversal symmetry present in the Heisenberg model needs to be broken. This can be achieved either by making the Heisenberg exchange anisotropic (the XYZ model) or by introducing a next nearest neighbour DM interaction together with out-of-plane magnetic order (the spin wave analog of the Haldane model). In both cases a gap with non-trivial Chern number will arise. For completeness, here we analyze the full XYZ-DM model from an EBR perspective. Crystal structure The crystal structure we consider is a honeycomb lattice with magnetic moments polarized along [001]. The primitive lattice and basis are the same as in Sec. III A. Here we define additionally, for later use, the bonds joining the next nearest neighbors, namely the 2x, 2y and 2z bonds of Eq. 53. Exchange Hamiltonian The nearest neighbor anisotropic Heisenberg interaction respects the symmetry of the honeycomb lattice and is defined on the x, y, z bonds in Eq. 42. The next nearest neighbor DM interaction has D = Dẑ and its exchange Hamiltonian is defined on the 2x, 2y, 2z bonds in Eq. 53. The linear spin wave Hamiltonian then follows in the same way as before. Band topology The orbital basis (J+_q, J−_q) lives on the Wyckoff position 2c with its associated site symmetry group generators (here we use the primitive lattice basis), and the orbit transformations g1 = {E|0} and g2 = {−1|1, 1, 0} form the coset decomposition $G = \bigcup_\alpha g_\alpha (G_{q_1^{2c}} T)$. The group $G_{q_1^{2c}}$ is therefore isomorphic to -6m2 and the orbitals transform under a representation which induces a two-band decomposable elementary band representation (see Table X). The two bands, once split, produce a topological gap with chiral surface states. The system is a Chern insulator whose invariant can be computed through the symmetry indicator formula of the point group C6, $e^{i\pi C/3} = \prod_n \eta_n(\Gamma)\,\Theta_n(K)\,\zeta_n(M)$, where C is the Chern number of the n band(s) and η(k), Θ(k), ζ(k) are the eigenvalues of C6, C3, C2. The eigenvalues and irreps of the two bands correspond to the induced 2E ↑ G band representation (while 1E ↑ G is found for the negative energies). C. Stacked honeycomb AFM topological insulator We now consider a system with AA stacked honeycomb planes with anisotropic couplings within each layer and AFM Heisenberg exchange between layers. This results in a magnonic topological crystalline insulator, as noted in [9]. The in-plane model (with decoupled layers) is, from a symmetry perspective, identical to that studied in Sec. III B. The antiunitary symmetry combining time reversal with the translation between adjacent layers imposes a Kramers degeneracy in the plane kz = π, which protects the hybridized surface states from gapping, leading to a topological insulator with a Z2 invariant [9].
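The spin-wave analogue of the Haldane model just described is simple enough to be reproduced numerically. The sketch below assumes a collinear ferromagnet with moments along [001], nearest-neighbour Heisenberg exchange J and a second-neighbour DM vector along ẑ; in this collinear case the anomalous (B) terms vanish, so a plain 2×2 particle-conserving Bloch matrix suffices, and the Chern numbers can be evaluated with the standard plaquette (Fukui–Hatsugai–Suzuki) method. The lattice conventions and parameter values are illustrative, and the overall sign of the Chern numbers depends on the chosen bond orientation.

```python
import numpy as np

# Lattice vectors and the three second-neighbour vectors entering the DM sum.
a1, a2 = np.array([1.0, 0.0]), np.array([0.5, np.sqrt(3) / 2])
nnn = [a1, a2 - a1, -a2]   # closed triangle; orientation fixes the sign of C

def bloch_h(k, J=1.0, D=0.1, h=0.0, S=1.0):
    """2x2 magnon Bloch Hamiltonian (periodic gauge) of a honeycomb FM with
    nearest-neighbour Heisenberg J and second-neighbour DM D along z."""
    f = 1 + np.exp(-1j * k @ a1) + np.exp(-1j * k @ a2)
    g = sum(np.sin(k @ d) for d in nnn)
    diag = 3 * J + h
    return S * np.array([[diag + 2 * D * g, -J * f],
                         [-J * np.conj(f), diag - 2 * D * g]])

def chern_numbers(nk=60, **params):
    """Chern numbers of the two magnon bands via the Fukui-Hatsugai-Suzuki method."""
    b1 = 2 * np.pi * np.array([1.0, -1 / np.sqrt(3)])
    b2 = 2 * np.pi * np.array([0.0, 2 / np.sqrt(3)])
    vecs = np.empty((nk, nk, 2, 2), dtype=complex)
    for i in range(nk):
        for j in range(nk):
            k = (i / nk) * b1 + (j / nk) * b2
            _, v = np.linalg.eigh(bloch_h(k, **params))
            vecs[i, j] = v                      # columns = bands, sorted by energy
    C = np.zeros(2)
    for n in range(2):
        u = vecs[:, :, :, n]
        ux = np.einsum('ijk,ijk->ij', u.conj(), np.roll(u, -1, axis=0))
        uy = np.einsum('ijk,ijk->ij', u.conj(), np.roll(u, -1, axis=1))
        plaq = ux * np.roll(uy, -1, axis=0) * np.conj(np.roll(ux, -1, axis=1)) * np.conj(uy)
        C[n] = np.sum(np.angle(plaq)) / (2 * np.pi)
    return np.round(C).astype(int)

print(chern_numbers(J=1.0, D=0.1))   # e.g. [ 1 -1]; which band carries +1 follows the conventions
```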
Crystal structure The crystal structure is a stacked honeycomb lattice with magnetic moments along [001], anti-aligned between layers. In the previous section we have shown how the single layer corresponds to the group P6/mm'm'; here we have an additional black and white translation between the two AFM layers, giving the type IV magnetic space group Pc6/mcc (#192.252). The Wyckoff position is 4c with site-symmetry group -6m2. The generators of the group, the lattice primitive vectors and the basis coming from Wyckoff position 4c (origin at the center of the hexagon) follow the conventions of the previous sections. Exchange Hamiltonian The model we consider has exchange defined on the honeycomb bond aligned with y, with the components referring to the crystallographic frame with ẑ perpendicular to the honeycomb layers. We then tile all bonds using C3 and translations exactly as in Eq. 54. We couple the AA stacked layers with a simple AFM Heisenberg Jc coupling. We also include the second-neighbor in-plane Dzyaloshinskii-Moriya coupling considered in Sec. III B, even if it is not strictly necessary for the non-trivial topology here. The linear spin wave Hamiltonian is constructed as before, where the x, y, z and 2x, 2y, 2z bonds are the same as in Eq. 42 and Eq. 53, repeated on all honeycomb layers. Band topology The orbital basis (J+_q, J−_q) lives on the Wyckoff position 4c with its associated site symmetry group generators (here we use the primitive lattice basis). The group $G_{q_1^{4c}}$ is therefore isomorphic to -6m2 and the orbitals transform under a representation which induces a four-band decomposable elementary band representation (see Table II). The subduced irreps in reciprocal space of the two branches (2 bands each) correspond to the induced 2E ↑ G(4) band representation (1E ↑ G(4) for the negative energies). The model has a nontrivial Z2 invariant linked to the black and white translation {1|0, 0, 1/2} [9]. This is indeed reflected by the EBR picture. When the two layers are decoupled, there are two decomposable EBRs, 2E ↑ G(2) (layer spin up) and 1E ↑ G(2) (layer spin down), of the kind in Eq. 62, which produce bands with opposite Chern number for the opposite layers. When the two layers couple, the black and white translation {1|0, 0, 1/2} pairs the two EBRs into a new single EBR which is nevertheless decomposable and therefore topological. Finally, the band representation, besides the topological gap, also predicts the nodal plane E1E2(2) between each pair of bands. We have established the topological character of the magnon bands based on symmetry. A more refined analysis reveals that this model has a nontrivial Z2 invariant that can be computed from the Berry phase $A^{(n)}_\mu(k)$ of the pairs of bands joined by Kramers degeneracies. Thus, for bands n = 1, 2, 3, 4, where n = 1, 2 form the lower energy pair, Ref. [9] shows that the invariant is obtained by integrating the Berry connection and curvature over half of the Brillouin zone (HBZ), chosen such that the remainder is covered by k → −k. D. Tetragonal P4 FM [001] model Crystal structure Here we consider the space group P4 (#75.1) with Wyckoff position 2c. The system is described by a single decomposable EBR with a site-symmetry group C2 compatible with FM [001] magnetic order, and must host a topological gap once the EBR is split. The lattice is simple tetragonal; the basis coming from Wyckoff position 2c is (0, 1/2, 0) and (1/2, 0, 0). The lattice and the first Brillouin zone are shown in Fig. 7.
There are four symmetries in the group, all rotations about the axial z direction. Also, we define for later use in the exchange coupling matrix the additional lattice points entering the couplings up to fourth neighbors. Exchange Hamiltonian The model J1 + J2 + J3 + J4 on this lattice consists of 10 different bond types. Applying the symmetries in Eq. 80, the exchange terms are constrained to 26 possible coupling parameters. We now consider a field-polarized state in the [001] direction and apply the LSW approximation, obtaining M(k) with the blocks of Eqs. 17 and 18. Band topology The orbital basis (J+_q, J−_q) sits here on the Wyckoff position 2c with its associated site symmetry group, and the orbit transformations g1 = {E|0} and g2 = {4−_001|0} form the coset decomposition $G = \bigcup_\alpha g_\alpha (G_{q_1^{2c}} T)$. The group $G_{q_1^{2c}}$ is therefore isomorphic to C2 and the orbitals transform under the irrep B, which induces a two-band decomposable elementary band representation (see Table III). The two bands, once split, produce a topological gap with chiral surface states. The Chern number follows from the symmetry indicator formula of the point group C4, $e^{i\pi C/2} = \prod_n \xi_n(\Gamma)\,\xi_n(M)\,\zeta_n(X)$, where C is the Chern number of the n band(s), while ξ(k) and ζ(k) are the eigenvalues respectively of C4 and C2. The eigenvalues and irreps of the two bands correspond to the induced B ↑ G band representation. IV. NODAL TOPOLOGY IN MAGNONS EBRs are also useful in determining the nodal topology of a given magnetic space group. While different EBRs can accidentally cross each other, degeneracies cannot be enforced between them. The only symmetry enforced degeneracies are inside EBRs themselves. For magnons, all the single-valued EBRs are relevant, although only a subset can be directly induced as single EBRs (due to compatibility with magnetic order). Those that cannot be induced in this way must be induced from lower symmetry Wyckoff positions as components of a composite of EBRs. In Tab. IV the total number of single-valued EBRs with enforced degeneracies is given. The complete set of symmetry data required to obtain the various types of nodal topology can be found on the Bilbao Crystallographic Server [1,2]. For convenience we provide a tabulation of the magnetic space groups and Wyckoff positions relevant to magnons that enforce the more exotic nodal features: • 6-fold nodal points in Tab. V. Here we mention that if we relax the locking between spin and space and deal with spin-space groups, the degeneracies present in the system are generally much higher, producing more exotic nodal features which are not possible in magnetic space groups. As an example of a 3-fold degeneracy we consider the case of the pyrochlore antiferromagnet with all-in/all-out (AIAO) magnetic order. The pyrochlore lattice is a lattice of corner-sharing tetrahedra and the AIAO order has propagation vector k = 0 with moments pointing into or out from the tetrahedral centers. A simple model leading to this magnetic order is the nearest neighbor antiferromagnetic Heisenberg coupling plus the symmetry-allowed nearest neighbor Ising exchange, $H = J\sum_{\langle ia,jb \rangle} \mathbf{J}_{ia}\cdot\mathbf{J}_{jb} + K \sum_{\langle ia,jb \rangle} (\mathbf{J}_{ia}\cdot \hat z_a)(\mathbf{J}_{jb}\cdot \hat z_b)$, where i, j are primitive fcc sites and a, b are tetrahedral sublattice indices, ẑ_a is the local 111 direction on sublattice a, and J > 0, K < 0. The magnetic order breaks down the paramagnetic symmetries to a type III magnetic space group. One may show that there are six allowed couplings to nearest neighbor.
Engineering better biomass-degrading ability into a GH11 xylanase using a directed evolution strategy Background Improving the hydrolytic performance of hemicellulases on lignocellulosic biomass is of considerable importance for second-generation biorefining. To address this problem, and also to gain greater understanding of structure-function relationships, especially related to xylanase action on complex biomass, we have implemented a combinatorial strategy to engineer the GH11 xylanase from Thermobacillus xylanilyticus (Tx-Xyn). Results Following in vitro enzyme evolution and screening on wheat straw, nine best-performing clones were identified, which display mutations at positions 3, 6, 27 and 111. All of these mutants showed increased hydrolytic activity on wheat straw, and solubilized arabinoxylans that were not modified by the parental enzyme. The most active mutants, S27T and Y111T, increased the solubilization of arabinoxylans from depleted wheat straw 2.3-fold and 2.1-fold, respectively, in comparison to the wild-type enzyme. In addition, five mutants, S27T, Y111H, Y111S, Y111T and S27T-Y111H increased total hemicellulose conversion of intact wheat straw from 16.7%tot. xyl (wild-type Tx-Xyn) to 18.6% to 20.4%tot. xyl. Also, all five mutant enzymes exhibited a better ability to act in synergy with a cellulase cocktail (Accellerase 1500), thus procuring increases in overall wheat straw hydrolysis. Conclusions Analysis of the results allows us to hypothesize that the increased hydrolytic ability of the mutants is linked to (i) improved ligand binding in a putative secondary binding site, (ii) the diminution of surface hydrophobicity, and/or (iii) the modification of thumb flexibility, induced by mutations at position 111. Nevertheless, the relatively modest improvements that were observed also underline the fact that enzyme engineering alone cannot overcome the limits imposed by the complex organization of the plant cell wall and the lignin barrier. Background Wheat straw is an abundant coproduct of the agri-food industry that is currently considered to be a primary source of lignocellulosic biomass for second-generation biorefining. The composition of wheat straw is typical of graminaceous species, containing arabinoxylan (20% to 25% dry weight (DW)), cellulose (35% to 45% DW) and lignins (15% to 20% DW) in variable proportions that are determined by both cultivar characteristics and pedoclimatic differences [1,2]. Regarding the ultrastructure of wheat straw, the internode regions, which in DW terms represent the majority of wheat straw, are characterized by different tissue types, which notably display different levels of lignification. The central cavity, or lumen, of straw is lined by pith that covers parenchyma cells and that possesses mainly primary cell walls. Moving further outwards to the external part of wheat straw, one can identify sclerenchyma cells, xylem tissue and finally the outer epidermis, all of which possess lignified secondary cell walls [3,4]. Endo-β-1,4-xylanases (EC 3.2.1.8, xylanase) randomly depolymerize the backbone of β-1,4-linked xylans [5], including arabinoxylans such as those found in wheat straw. Current commercial uses for xylanases mainly focus on the paper, food and animal feed industries [6,7], but it is increasingly recognized that these will also be important for biorefining of lignocellulosic biomass [8,9]. 
Indeed, recent studies have shown that xylanases are needed in cellulase cocktails in order to alleviate the inhibition of various cellulose-degrading enzymes by xylo-oligosaccharides [10]. In addition, the development of ambitious approaches such as consolidated bioprocesses [11], which require the use of microorganisms possessing the dual ability to degrade complex biomass and convert the fermentable sugars into useful products, will also create new demands for highly efficient xylanolytic systems. To date, most industrial processes that employ xylanases use enzymes that belong to the glycoside hydrolase family GH11 [12]. Bacterial GH11 xylanases are mostly single domain enzymes that exclusively act on β-1,4 links between xylosyl units in xylans and display a β-jelly roll structure that has been likened to a partially folded human right hand (Figure 1) [13]. Likewise, the prominent elements of the GH11 three-dimensional structure, which is composed mainly of two β sheets and one α helix, have been identified using terms such as 'thumb', which describes a large mobile loop that is located above the active site cleft, 'palm', whose half-folded structure forms the active site cleft, and 'fingers', which constitute one side of the active site cleft and whose 'knuckles' bear a secondary substrate binding motif [14,15]. Despite the fact that xylanases will be necessary for biorefining operations, very little R&D has so far been focused on the improvement of xylanases specifically for biorefining purposes, and in particular for increased activity on complex biomass. This is partly because a lot of effort has been focused on cellulase engineering, and also because presently it is unclear on what basis improvements could be achieved. Regarding the action of xylanases on lignocellulosic biomass that has not been subjected to prior pretreatment, very little is known, though some studies of the action of the GH11 xylanase from Thermobacillus xylanilyticus (designated Tx-Xyn) on wheat bran and straw have provided insight into the factors that might determine overall enzyme efficiency. Nevertheless, the available information is still sparse, making the prospect of rational engineering rather haphazard. Alternatively, random approaches coupled to in vitro enzyme evolution could be a suitable way to tackle xylanase engineering. So far, the use of such techniques on xylanases has been limited to the improvement of thermostability [16][17][18][19][20] and alkaliphilicity [21][22][23]. In these studies, screening methods relied on the use of isolated xylans, such as Remazol Brilliant Blue (RBB)-xylan and birchwood xylan. However, in a recent study we have developed a new microtiter plate-based screening method that is far more suitable for the study of xylanase action on complex biomass [24]. Therefore, in this paper, we describe the use of this screening procedure in an enzyme engineering project that has focused on the moderately thermostable Tx-Xyn. This enzyme was selected because it has already been extensively studied, notably with regard to its activity on insoluble complex substrates such as wheat bran and straw, which is not the case for other GH11 xylanases [25][26][27][28]. Using a combination of random mutagenesis and DNA shuffling, we have isolated several Tx-Xyn variants that showed increased activity on wheat straw and improved synergistic action when used in combination with a commercial cellulase preparation.
Screening of randomly mutagenized xylanase libraries The different steps of the engineering strategy are summarized in Figure 2. The initial phase of this work involved the use of error-prone PCR (epPCR) to generate random biodiversity. In preliminary work, we observed that more than 10 base mutations/kb produced >70% inactive clones. Therefore, a progressive strategy employing three successive rounds of epPCR was preferred, with a moderate mutational load (5 to 7 base mutations/kb) at each stage.
Figure 1. Ribbon representation of the Thermobacillus xylanilyticus xylanase (Tx-Xyn) three-dimensional structure. The schematic protein is 'color-ramped' from the N-terminus (blue, N-ter) to the C-terminus (red, C-ter). The relevant regions of 'thumb', 'palm' and 'fingers' are highlighted in frames, and the 'knuckles' in the fingers region are indicated by an arrow.
The results of activity screening (where activity can generally be considered to be the product of both expression levels and specific activity) at each round are summarized in Table 1. Regarding the first round of screening, this work has already been reported by Song et al. [24]. Although the best mutant from this first round, designated Tx-Xyn-AF7, displays a wild-type amino acid sequence, its DNA sequence contains two mutations (at nucleotide positions 27 and 516) that cause approximately twofold higher expression of the recombinant enzyme. Therefore, the sequence encoding Tx-Xyn-AF7 was used as the template for the second round of epPCR. DNA sequence analysis of ten library clones, taken from the second-generation library, revealed an average mutation rate of 5.4 base substitutions/kb and a transition/transversion ratio of approximately 1.4, indicating that the mutations were relatively unbiased in this respect. A total of 4,333 clones were screened on intact wheat straw (In-WS), and the 4 most active clones (>4 CV) were selected, using the activity of Tx-Xyn-AF7-bearing clones as the base case for comparison. DNA sequencing revealed that all four clones were characterized by single amino acid changes. Two clones were mutated at position 3 (Y3L and Y3H), while two others were mutated at independent, but neighboring, locations (W109R and Y111H). Examination of the three-dimensional structure of Tx-Xyn revealed that Y3 lies in the distal glycon part of the active site cleft, while W109 and Y111 are situated nearby and in the thumb region, respectively; thus all three residues are potentially important for enzyme function. For this reason, at this stage in the experiment it was decided to focus on these mutations for the creation of further mutant libraries. However, to ensure that all of the possible permutations would be present in the third generation, recombination was achieved using site-directed mutagenesis. Consequently, five double mutants (Y3L-W109R, Y3L-Y111H, Y3H-W109R, Y3H-Y111H and W109R-Y111H) and two triple mutants (Y3L-W109R-Y111H and Y3H-W109R-Y111H) were created. Together with the other four original single mutants, these were used as parental templates for the next round of epPCR, which led to the creation of a fourth generation (Figure 2). To efficiently challenge clones present in the fourth library, the microtiter plate assay was modified by replacing In-WS with xylanase-depleted wheat straw (Dpl-WS). The principle behind this was to select clones that produce enzymes that can actually hydrolyze arabinoxylans that are inaccessible or resistant to the wild-type xylanase.
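One plausible reading of the '>4 CV' hit criterion described above is a threshold set at the control mean plus a given number of coefficients of variation (CV = standard deviation divided by mean). The sketch below implements that reading on hypothetical plate readings; the well values, their number and the exact thresholding rule are illustrative assumptions, not the actual screen layout used in this work.

```python
import numpy as np

def call_hits(control_wells, library_wells, n_cv=4.0):
    """Flag library wells whose activity exceeds the control mean by more than
    n_cv coefficients of variation, mirroring a '>4 CV' hit criterion."""
    control = np.asarray(control_wells, dtype=float)
    mu, sigma = control.mean(), control.std(ddof=1)
    cv = sigma / mu
    threshold = mu * (1 + n_cv * cv)      # equivalently mu + n_cv * sigma
    hits = [i for i, x in enumerate(library_wells) if x > threshold]
    return threshold, hits

# Hypothetical absorbance-like readings (arbitrary units).
controls = [0.52, 0.47, 0.50, 0.55, 0.49, 0.51]
library  = [0.48, 0.61, 0.75, 0.50, 0.66]
thr, hits = call_hits(controls, library, n_cv=4.0)
print(round(thr, 3), hits)   # threshold and the indices of the flagged wells
```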
The key features and performance descriptors of this modified assay are summarized in Table 2. Overall, the CV value for individual wells of the Tx-Xyn-AF7 control varied between 8% and 11%, indicating that this screen was sufficiently reliable for library screening. DNA sequence analysis of a randomly picked sample of fourth-generation library clones revealed an average mutation rate of 7.2 nucleotide substitutions/kb. Likewise, functional screening using the modified Dpl-WS assay indicated that 0.6% of screened clones presented activities that were significantly higher (>5 CV) than the mean value of the activity of Tx-Xyn-AF7 clones. Therefore, the top 30 clones were isolated and used for subsequent rounds of DNA recombination. Optimization of mutant xylanases using DNA recombination To further increase the functional fitness of the enzymes expressed by the candidate clones obtained from random mutagenesis, the staggered extension process (StEP) DNA shuffling approach was adopted, because it offers a much simpler procedure than classical DNA shuffling [29,30]. This method was used to successively create fifth, sixth and seventh-generation libraries. To appreciate the impact of the iterative use of StEP on overall library fitness, Figure 3 shows the relative performance of the fourth-generation to sixth-generation libraries. At each generational increment, library fitness increased in accordance with expectations [30][31][32]. The results of statistical analyses performed on the three successive libraries (fifth, sixth and seventh generations) that were created using this method are summarized in Table 3. For the initial round of DNA shuffling, 30 clones were used as parental input. After DNA shuffling, the library was submitted to screening using the modified Dpl-WS assay. This step allowed the selection of seven hits whose activities were significantly higher (>7 CV) than the mean value of the activity of Tx-Xyn-AF7 clones. DNA sequencing revealed that these 7 clones contained 11 point mutations, including Y111H and some new amino acid substitutions (Figure 2). As before, the seven mutants were used as parental input for two further rounds (sixth and seventh) of DNA shuffling. After the creation of the seventh-generation library, the experiment was stopped, because DNA sequencing of the highest performing seventh-generation clones showed that five mutational combinations out of a total of seven had already been identified in the sixth generation (Figure 2). This observation suggested that the evolutionary itinerary had almost reached an end, with very little new biodiversity being introduced. Among the seven best performing seventh-generation clones, Y6H-Y111H and Y6H-S27T-Y111H displayed the highest activity increase (>8 CV) in the screening, compared to that of the wild-type control (Tx-Xyn-AF7). In addition, among the six amino acid substitutions that were detected in clones obtained from DNA shuffling, Y111H was present in every template and the frequency of Y6H and S27T increased from the fifth generation to the seventh generation (Table 3). Consequently, we decided to focus on clones containing these three amino acid changes for enzyme production and characterization. Overall, the mutants that were retained for characterization included Y6H-Y111H, S27T-Y111H and Y6H-S27T-Y111H from the seventh-generation screening and the single mutants Y111H, Y6H and S27T.
Site-saturation mutagenesis (SSM) at positions 3 and 111 Among the second-generation clones, selected for higher activity on In-WS, two amino acid positions, 3 and 111, were pinpointed as potentially interesting locations. Therefore, in addition to the use of Y3H and Y111H as parental templates for further random mutagenesis and DNA shuffling, SSM was performed to investigate the importance of these two residues with respect to enzyme activity on recalcitrant arabinoxylan (AX) in wheat straw (that is, Dpl-WS). In each case a library was created and 288 clones were screened using the modified Dpl-WS assay. This number of clones was sufficient to ensure a 99.87% probability that all possible amino acid variants were present [33]. Additionally, a random sample of each library was submitted to DNA sequence analysis in order to check the success of the experiment. Figure 4 shows the results of the screening of the two site-saturation libraries. Overall, the Y111N library (N represents any amino acid) provides a larger population of improved clones, though both libraries contain a small minority of clones that display activities above the value of μ + 4σ of the wild-type control (where σ is the standard deviation and μ is the mean value). The three highest performing clones were selected from each library and analyzed by DNA sequencing. All three clones from the Y3N library displayed the same Y3W mutation, whereas two clones from the Y111N library were phenotypically and genotypically identical (encoding the mutation Y111S) and one displayed a Y111T mutation. In view of these results, three individual clones encoding Y3W, Y111S and Y111T were retained for further characterization. Characterization of key properties of the Tx-Xyn mutants Since the screening of mutant enzyme libraries obeys the maxim 'you get what you screen for', the mutants selected in this work were only improved with respect to the hydrolysis of wheat straw. Hence, other important properties such as thermostability could have been negatively affected. Consequently, the thermostability of each mutant was assessed (Table 4). Although the thermostability of some mutants at 60°C was clearly affected (for example, that of Y6H and Y6H-Y111H), all of the enzymes were sufficiently stable to enable the measurement of kinetic properties without any major modifications to the protocols that were routinely used to characterize wild-type Tx-Xyn. It is also noteworthy that all of the mutants were highly stable at 50°C, since measured activity remained stable over a 6 h incubation period. Each of the mutants was characterized with regard to its ability to hydrolyze birchwood xylan (BWX) and low-viscosity wheat arabinoxylan (LVWAX). According to our findings (data not shown), BWX is devoid of α-L-arabinosyl substitutions, and LVWAX displays an A/X ratio of 0.54. Concerning wild-type Tx-Xyn, its turnover number and performance constant were higher for LVWAX, though the apparent K M value was lower on BWX. This tendency was also displayed by the majority of the mutants (Table 5). Regarding the apparent values of K M , all of the mutants displayed improved affinity for BWX, but this was not the case for LVWAX. Notably, Y111H was the mutant that displayed the best affinity for BWX, while its affinity for LVWAX was unaltered. However, the rate constant for Y111H-mediated hydrolysis of BWX was lowered when compared to that of the wild-type enzyme, but was improved on LVWAX.
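The 99.87% completeness figure quoted above is the kind of number produced by a standard inclusion-exclusion (coupon-collector) calculation over the degenerate codons of a saturation library; see Ref. [33] for the treatment actually used. The sketch below assumes an NNK codon set (32 codons encoding 20 amino acids plus one stop, with degeneracies 3, 2 and 1) sampled uniformly, which is an assumption of this illustration rather than a statement about the cited method; with 288 clones it evaluates to roughly 0.9987.

```python
from math import comb

def prob_all_amino_acids(n_clones, codons_total=32, groups=((3, 3), (5, 2), (12, 1))):
    """Probability that all amino acids are seen at least once among n_clones drawn
    uniformly from an NNK-type codon set. groups = (number of amino acids, codons
    per amino acid); the remaining codon (the stop) does not need to be covered."""
    p = 0.0
    for i in range(groups[0][0] + 1):          # missed 3-codon amino acids
        for j in range(groups[1][0] + 1):      # missed 2-codon amino acids
            for k in range(groups[2][0] + 1):  # missed 1-codon amino acids
                missing = i * groups[0][1] + j * groups[1][1] + k * groups[2][1]
                sign = (-1) ** (i + j + k)
                weight = comb(groups[0][0], i) * comb(groups[1][0], j) * comb(groups[2][0], k)
                p += sign * weight * ((codons_total - missing) / codons_total) ** n_clones
    return p

print(round(prob_all_amino_acids(288), 4))   # -> about 0.9987 for 288 clones
```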
Intriguingly, the opposite was true for Y111T, for which the value of k cat was 48% greater than that of Tx-Xyn on BWX, but identical to that of Tx-Xyn on LVWAX. When Y111H was combined with other mutations (for example, S27T-Y111H or Y6H-Y111H), its influence on the performance constant appeared to be dominant, annulling the improved activity on BWX displayed by the single mutants S27T and Y6H.
Table 5. Kinetic parameters of Thermobacillus xylanilyticus xylanase (Tx-Xyn) and mutants for hydrolyses involving either birchwood xylan (BWX) or LVWAX. The melting temperature (T m) was determined using differential scanning fluorimetry (DSF) and the half-life (t 1/2) was defined as the period necessary for the initial activity to be reduced by 50% at 60°C.
Assessment of the impact of Tx-Xyn mutants on wheat straw To further evaluate the altered properties of the different mutants, their activities on the original wheat straw samples (In-WS and Dpl-WS) were examined. Reactions were performed using pure preparations of wild-type and mutant xylanases, either alone or in combination with Accellerase 1500 (a cellulase cocktail). The results of HPAEC-PAD analyses performed on the reaction supernatants are shown in Figure 5A,B, which show the conversion of total xylose and glucose (that is, %tot. xyl and %tot. glu, w/w) in the straw residues. The soluble sugar yields are summarized in Additional files 1 and 2. The hydrolysis of Dpl-WS revealed that all of the mutants could release further amounts of soluble xylose equivalents and that their performance was superior to that of wild-type Tx-Xyn. The mutants S27T and Y111T produced the most outstanding results, because these could release 2.3-fold and 2.1-fold more xylose equivalents from Dpl-WS than Tx-Xyn. The lowest performers were Y111H and Y3W, which yielded 35% and 46% more xylose equivalents, respectively (Figure 5A). However, it should be noted that even the best variant S27T could only release 2.5%tot. xyl of Dpl-WS (5.5 g xylose per kg wheat straw), which is evidence of the recalcitrance of this substrate. For the hydrolysis of In-WS (pH 5.8), wild-type Tx-Xyn released 43.7 g equivalent xylose per kg wheat straw. This represents 4.4% of the dry weight and 16.7% of total xylan (16.7%tot. xyl) content. Similar results were obtained for the mutants Y6H, Y6H-Y111H, Y6H-S27T-Y111H and Y3W, but five other mutants yielded higher amounts (18.6% to 20.4%tot. xyl) of soluble xylose equivalents, with the best mutant being Y111T (Figure 5A). The five mutants displaying improved activity on In-WS were further selected to investigate synergy with cellulases on In-WS, operating at the optimum pH for Accellerase (pH 5.0). Likewise, suitable control reactions at pH 5.0 were performed using only mutant xylanases, or wild-type Tx-Xyn. All controls revealed that the different xylanases displayed reduced hydrolytic capacity compared to their activity at pH 5.8 (Figure 5A). According to its manufacturer, Accellerase 1500 principally contains endoglucanase and β-glucosidase activities. In our trials, Accellerase alone was able to solubilize 7.3%tot. xyl and 18.9%tot. glu of In-WS (Figure 5B). However, in combination with xylanases, higher yields of xylose and glucose were measured, which were greater than the sum of the yields of Accellerase and xylanase alone, clearly revealing synergistic interactions between the enzyme participants. The mixture of wild-type Tx-Xyn and Accellerase solubilized 24.5%tot. xyl and 23.6%tot. glu of In-WS (Figure 5B). However, significantly, the different mutants were able to improve on this performance, solubilizing 27.4 to 29.0%tot. xyl and 24.9 to 26.4%tot. glu from In-WS. Discussion Is enzyme engineering a useful strategy to improve biomass deconstruction? Artificial enzyme evolution, relying on in vitro random mutagenesis and DNA recombination techniques, is a powerful strategy to pinpoint functional determinants and to rapidly improve enzyme fitness with regard to a variety of physical or biochemical properties [34][35][36]. However, the need for an appropriate screen is vital. In this work, we relied on a previously described screening method, which allowed us to address a highly ambitious target: the isolation of enzymes that display higher activity on raw biomass. To our knowledge, no such enzyme engineering has yet been attempted, mainly because biomass-degrading enzymes are usually improved for their activity on isolated substrates or pretreated biomass, wherein the notion of chemical and structural complexity is totally absent or mainly cellulose is present, with lignin and hemicelluloses being very minor components [37][38][39]. Therefore, the underlying rationale of our approach was to investigate to what extent the fitness of a xylanase, or for that matter any other biomass-degrading enzyme, can be independently improved for hydrolysis of complex biomass, without interfering with the structural and chemical complexity of the substrate. Likewise, we hoped to provide a novel angle on the understanding of the factors that govern the enzymatic deconstruction of raw biomass. Our previous study revealed that the Tx-Xyn-mediated hydrolysis of wheat straw is a complex reaction that cannot be modeled using Michaelis-Menten kinetics and does not reach completion even at high enzyme loading and after long time periods [24,28]. Achieving the first phase of the reaction requires quite long incubation times (approximately 8 h); thus screening using raw wheat straw (that is, In-WS) provides a means to find variants that display improved initial catalytic rates, which can result either from the improvement of intrinsic catalytic properties of the xylanase, or from an increase in enzyme production. However, the use of In-WS is not appropriate to isolate xylanases that will surpass the sugar solubilization yield of the wild-type Tx-Xyn. For this purpose, it is more appropriate to use Tx-Xyn-pretreated wheat straw (that is, Dpl-WS), which should provide a means to identify enzyme variants that can accelerate the latter phase of the reaction and better surmount the obstacles that prevent further action by Tx-Xyn. Therefore, in the strategy developed here, both screening approaches were applied, first in an attempt to accelerate the reaction and second to improve the overall impact of xylanase action on wheat straw. Overall, all of the qualitative indicators that are presented here show that the enzyme evolution approach was successful in increasing the fitness of Tx-Xyn for biomass hydrolysis. At each step, clones with ever increasing activity could be selected, and the ultimate analysis of the best clones revealed that several could actually better hydrolyze wheat straw, especially when their action was coupled to a cellulase cocktail. Nevertheless, unsurprisingly, the overall impact of the improvements was modest, but these results need to be considered in the light of current knowledge.
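The synergy statement above is often quantified as a degree of synergy, the ratio of the yield of the enzyme mixture to the sum of the yields of the enzymes acting alone. The toy calculation below uses the conversion figures quoted above; note that the wild-type xylanase value measured at pH 5.8 is used in place of the (lower) pH 5.0 control, so the true degree of synergy is somewhat underestimated.

```python
def degree_of_synergy(combined, individual_yields):
    """Ratio of the yield of the enzyme mixture to the sum of the yields of the
    enzymes acting separately; values above 1 indicate synergy."""
    return combined / sum(individual_yields)

# Xylose conversion (% of total xylan): Accellerase alone 7.3, wild-type Tx-Xyn
# alone ~16.7 (pH 5.8 value used as a stand-in for the pH 5.0 control), mixture 24.5.
print(round(degree_of_synergy(24.5, [7.3, 16.7]), 2))   # -> 1.02
```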
Two recent studies [3,40] have attempted to relate enzyme action on wheat straw to changes at the ultrastructural level. These authors have shown that a mild hydrothermal pretreatment (185°C, 10 min) releases approximately 34% of available xylans (that is, approximately 8.2% of the initial DW), which appear to come from the pith that lines the central lumen of wheat straw. Further treatment of the sample with a cellulase cocktail released glucose and xylose from cellulose microfibrils and xylans, respectively, apparently present in the parenchyma cells that form the cortex. However, enzymatic degradation was ineffective on lignified cells (for example, sclerenchyma cells). In our experiments, total xylans in wheat straw represent approximately 26% DW and Tx-Xyn can release 16.7% of these (that is, 4.4% DW). The mutant Y111T is able to solubilize approximately 21.9%tot. xyl or 5.3% DW over a 24-h period. Taken together, our results reveal that the hydrolysis of wheat straw using Tx-Xyn variants procures solubilization yields that are inferior, but not dissimilar, to those obtained using mild hydrothermal treatment, and thus it is tempting to suggest that Tx-Xyn also preferentially hydrolyzes pith and parenchyma cells. The failure of Tx-Xyn, or variants thereof, to further solubilize xylans is probably not linked to intrinsic catalytic potency or to substrate selectivity of Tx-Xyn and its mutants, but rather to the inaccessibility of the substrate. Indeed, coupling of wild-type Tx-Xyn to a cellulase cocktail clearly revealed a certain degree of synergy, releasing approximately 24% of the theoretical yield of sugars. Significantly, mutants generated in this work amplified this synergy and achieved higher levels of sugar solubilization, indicating that the enzymatic removal of cellulose exposes xylan and vice versa. Possibly, the improved action of the mutants allows a slightly more profound degradation of the parenchyma cells that form the cortex of wheat straw. However, the results of this study indicate that enzyme engineering alone cannot overcome the limits imposed by the lignin barrier, which is progressively exposed by the peeling action of the xylanase/cellulase cocktail. Structure-function relationships revealed in this study One of the remarkable findings in this study is the identification of a relatively small number of mutations. After six rounds of combined mutagenesis and DNA shuffling, seven mutants possessing a total of six point mutations were identified. Among these mutations, three (at amino acids 6, 27 and 111) emerged as important positions because of their recurrence in the seven mutants. In addition, another three mutants (Y3W, Y111S and Y111T) were isolated from SSM libraries in which amino acids 3 and 111, respectively, were targeted. Tyr3 and Tyr6 are located on the B2 β strand in the N-terminal region of Tx-Xyn, whereas Ser27 forms part of the 'knuckles' region of the fingers and Tyr111 is located on the thumb (Figure 6A). The examination of the different combinations that were obtained reveals that generally these mutations did not provide additive benefits. For example, regarding the mutants Y6H-Y111H, S27T-Y111H and Y6H-S27T-Y111H, the two point-mutation variants Y6H and S27T displayed greater hydrolytic potency on Dpl-WS than any of these combinations. Similarly, S27T displayed the highest catalytic efficiency towards the two soluble xylan substrates, BWX and LVWAX. Therefore, it appears legitimate to consider the impacts of the different mutations independently.
The findings presented here concerning the reduced thermostability of mutants displaying substitutions at positions 6 (Y6H) and/or 111 (Y111H) clearly provide support for the existence of hydrophobic patches that might mediate the oligomerization, and thus the thermostabilization, of Tx-Xyn in solution. According to Harris et al. [41], Tyr6 and Tyr111 are surface-exposed aromatic amino acids that, along with nine other aromatic residues, participate in the formation of intermolecular 'sticky patches' that form the basis for thermostability in Tx-Xyn. Nevertheless, it is also important to note that not all mutations at position 111 produced the same effect. Notably, the mutant Y111T displayed thermostability very close to that of the wild-type Tx-Xyn. Interestingly, the mutant S27T actually increased thermostability, which agrees with a trend among certain proteins, including GH11 xylanases, that correlates thermostability with an increased Thr:Ser ratio [42,43]. Among the six mutants bearing single substitutions, S27T, Y111H, Y111S and Y111T displayed improved hydrolysis of In-WS and synergy with the cellulase cocktail. However, the selection of the mutants Y6H and Y3W in our assay was more surprising, because these single mutants did not appear to improve wheat straw hydrolysis, although their specificity towards BWX was clearly altered and Y6H displayed the highest kcat value on both BWX and LVWAX. The mutants S27T, Y111S and Y111T also showed increased specificity towards BWX, indicating that all single-site mutants selected in our assay had acquired an improved ability to hydrolyze less substituted xylans, displaying an Ara:Xyl ratio that is comparable to that of wheat straw xylan (Ara:Xyl ratio of 0.091). Curiously, the only exception to this trend was the double mutant Y6H-Y111H, which displayed unaltered specificity on In-WS when compared to wild-type Tx-Xyn. The amino acid Ser27 is located in a region that has been identified as a secondary binding site (SBS) in the GH11 xylanases from Bacillus circulans [14] and Bacillus subtilis [44,45]. Tx-Xyn shares 73% amino acid identity with the B. circulans xylanase, and this figure increases to 81% when one considers only the SBS determinants, suggesting that a functional SBS is also present in Tx-Xyn (Figure 6B). In this context, it is noteworthy that Ser27 is located in a relatively deep part of a surface groove in Tx-Xyn that is linked to a shallower region via Ser25, and that surface grooves are potential ligand binding sites [46]. Therefore, one can speculate that Ser27 constitutes an element of an SBS in Tx-Xyn. Functionally, it is proposed that the SBS in certain GH11 xylanases interacts with three or four xylosyl units via hydrogen bonds and van der Waals interactions, and possibly improves binding of xylan polymers in the active site cleft [14]. The mutation of Ser27 to Thr certainly leads to a localized increase in hydrophobicity, which is probably favorable for xylan binding to the putative SBS. Indeed, experimental evidence supports this, because the mutant S27T significantly reduced the Michaelis constant for the hydrolysis of BWX and, to a lesser extent, for LVWAX. In this respect, it is also noteworthy that, among the other mutations identified during the directed evolution process (Table 3), S29N, N30D and V139A are also located in the vicinity of the putative SBS region in Tx-Xyn.
Therefore, a complementary study of these mutations could be an interesting way forward to better define the Tx-Xyn SBS and understand its effect on enzyme activity. The thumb loop is known to be of prime functional importance in GH11 xylanases. The opening and closing of this loop almost certainly plays a key role in substrate selectivity, binding [47][48][49] and product release [50]. Regarding substrate binding, the conserved tip of the thumb, composed of the motif Pro-Ser-Ile (positions 114 to 116 in Tx-Xyn), is involved in binding of xylosyl residues at the -1 and -2 subsites via hydrogen bonds [45,51,52]. Tyr111 and its opposing neighbor Thr121 are located at the base of the loop, where they control the movement of this structure [50,53]. The mutation of Tyr111 to either His, Ser or Thr reduces the spatial occupancy at position 111 (Figure 6C), although this is less so for His, and thus probably renders the loop more mobile and more inclined to fold downwards and inwards towards the -1 and -2 subsites. The overall effects of these changes would be improved catalytic turnover and possibly improved binding affinity, both of which are observed for the mutants Y111S and Y111T. Regarding the loop movement, the mutation of Tyr6 is also worth considering. The relatively conservative substitution of this residue by a slightly less bulky histidine clearly improved the enzyme turnover on both BWX and LVWAX, but had a slightly negative effect on substrate affinity in the case of LVWAX. This implies that Tyr6 might influence the movement of the loop, although a direct interaction is impossible. Nevertheless, Trp7 forms part of the -2 subsite and faces Pro114 and Ile116, which form the thumb tip. Slight adjustments in the position of Trp7 could facilitate the open-close movement of the thumb loop, with the risk of disturbing the high-energy interaction between this residue and the -2 xylosyl moiety. Finally, it is noteworthy that many of the mutations that were identified in this study involved the loss of aromatic side chains. Non-productive binding to lignin is often cited as a major cause of enzyme inefficiency on lignocellulosic biomass [54][55][56][57]. In an earlier study, it was shown that wild-type Tx-Xyn was strongly adsorbed onto both wheat straw and isolated wheat straw lignin [28]. In a more recent study [58], it has been shown that phenolic acids can act as non-competitive multisite inhibitors of Tx-Xyn that might provoke conformational alterations of the enzyme. Therefore, it is tempting to speculate that the elimination of surface-exposed aromatic amino acid side chains might lower such inhibitory effects.
Conclusions
Using a random mutagenesis and directed evolution approach, we have been able to generate a number of mutants whose behavior is globally coherent with the screening assay that was employed. Several mutants display improved hydrolytic activity on wheat straw and show increased synergy with cellulase, though none are sufficiently potent to overcome the accessibility barrier, which inevitably blocks the way to further hydrolysis of polysaccharides.
General materials and reagents
Unless otherwise stated, all chemicals were of analytical grade and purchased from Sigma-Aldrich (St Louis, MO, USA). The T7-promoter-based vector pRSETa was purchased from Invitrogen (Cergy Pontoise, France), and the Escherichia coli host strains Novablue(DE3) and JM109(DE3) were obtained from Stratagene (La Jolla, CA, USA) and Novagen (Darmstadt, Germany), respectively.
All restriction enzymes, T4 DNA ligase, Taq DNA polymerase and their corresponding buffers were purchased from New England Biolabs (Beverly, MA, USA). Oligonucleotide primers were synthesized by Eurogentec (Angers, France), and the DNA sequencing was performed by GATC Inc. (Marseille, France). Sterile 96-well cell culture microtiter plates and sealing tapes were purchased from Corning Corp. (NY, USA), and other polypropylene microtiter plates were from Evergreen Scientific (Los Angeles, CA, USA). The low viscosity wheat flour arabinoxylan (LVWAX) was obtained from Megazyme (Wicklow, Ireland), and the birchwood xylan (BWX) was purchased from Sigma-Aldrich.
Mutagenesis procedure and library construction
Random mutagenesis was carried out by epPCR using an established protocol [31]. The template was (first round only) the DNA encoding Tx-Xyn (Swiss-Prot accession number Q14RS0, bearing the substitution N1A) or (in subsequent rounds) Tx-Xyn-AF7 described by Song et al. [24]. Briefly, the PCR reaction mixture (50 μl) contained 5 ng of template DNA, 0.3 μM of primers epF and epR (see below), 0.2 mM dGTP/ATP (equimolar mixture) and 1 mM dCTP/TTP (equimolar mixture), 7 mM MgCl2, 5 IU Taq polymerase and (in the third round of epPCR only) 0.05 mM of MnSO4. Reactions were conducted using the following sequence: 1 cycle at 94°C for 2 min, 30 cycles at 94°C for 1 min, 1 cycle at 42°C for 1 min and 1 cycle at 72°C for 1 min, and finally 1 cycle at 72°C for 5 min. The amplicons were purified using the QIAquick PCR Purification Kit (Qiagen, Courtaboeuf, France), digested with EcoRI and NdeI and inserted into a similarly digested pRSETa vector. The ligation mixture was used to transform competent E. coli Novablue (DE3) cells. epF: 5'-GGAGATATACATATGGCCACG-3'; epR: 5'-GGATCAAGCTTCGAATTCTTACC-3'. DNA recombination was carried out using an adapted StEP method [30,32]. The PCR reaction (50 μl) contained 5 ng of total template DNA (equimolar mixture of each parental gene), 0.3 μM of each primer, 0.2 mM of each dNTP, and 5 IU Taq polymerase. Reactions were conducted using the following sequence: 1 cycle at 94°C for 2 min; 40 cycles comprising a step at 94°C for 30 s and 1 step at 58°C for 2 s; followed by 40 cycles with 1 step at 94°C for 30 s and 1 step at 56°C for 2 s. Afterwards, 20 IU of DpnI was added to the PCR reaction, which was incubated at 37°C for 1 h, before amplicon purification and digestion with EcoRI and NdeI. Finally, the mutant library was generated by ligating the digested amplicons to EcoRI/NdeI-digested pRSET plasmid DNA and transforming the resultant products into competent E. coli Novablue(DE3) cells. The mutational combinations W109R-Y111H, Y3H-W109R-Y111H, Y3L-W109R-Y111H, S27T, and Y6H were created through site-directed mutagenesis. This was achieved using the QuikChange site-directed mutagenesis kit, according to the manufacturer's instructions. The oligonucleotide primers employed in PCRs are listed in Additional file 3.
Library screening on intact and xylanase-treated wheat straw
Wheat straw (Triticum aestivum, cv. Apache) harvested (2007) in France was milled using a blade grinder that produced a fine powder having an average particle size of 0.5 mm. Afterwards, the wheat straw powder, designated In-WS, was washed with distilled water (10 volumes), filtered using a Büchner funnel equipped with Whatman No.4 filter paper (pore size: 20 to 25 μm), dried in an oven at 45°C and then sterilized by autoclaving.
To prepare xylanase-treated wheat straw (designated Dpl-WS), 20 g In-WS were suspended in 50 mM sodium acetate buffer, pH 5.8 (containing 0.02% NaN3) containing Tx-Xyn (150 BWX U/g biomass) and incubated at 60°C for 70 h. Afterwards, the reaction mixture was heated at 95°C for 5 min to inactivate the enzyme. The solid residues were recovered by filtration (see above) and dried as before. The sugar composition of both wheat straw substrates (Table 2) was analyzed according to an established protocol [60]. Microtiter plate-based screening of mutant libraries was performed according to the method described previously [24]. Briefly, individual E. coli transformants were grown in the wells of 96-well microtiter plates and then cells were recovered and lysed using the combined effect of lysozyme (0.5 g/l) and freeze-thaw cycling (-80°C and 37°C). The screening of xylanase activity was then achieved using a four-step protocol, which involved (1) substrate delivery into microtiter plates, (2) addition of xylanase-containing cell lysates, (3) incubation, and (4) measurement of solubilized reducing sugar using a micro-DNS assay. The important experimental details of these steps are summarized in Table 1. When Dpl-WS was employed in place of In-WS, the incubation time was extended to 16 h and, consequently, microtiter plates were thermosealed using polypropylene film to reduce evaporation. In all microtiter plate screening, wells containing transformants expressing wild-type Tx-Xyn were included as internal controls. These were used to calculate a coefficient of variation (CV = σ/μ × 100%) of Tx-Xyn activity, which was employed to assess the activity of mutant variants.
Xylanase expression and purification
The production in E. coli JM109(DE3) cells and purification of Tx-Xyn and variants thereof were performed according to the previously described procedure [61]. Briefly, purification followed a two-step protocol involving ion-exchange chromatography (Q Sepharose FF) and then hydrophobic interaction chromatography (Phenyl Sepharose), operating on an ÄKTA purification system (GE Healthcare, Uppsala, Sweden). Enzyme conformity and purity were assessed using SDS-PAGE, and theoretical extinction coefficients were computed using the ProtParam server [62]. The concentration of xylanase solutions was determined by measuring UV absorbance at 280 nm and then applying the Beer-Lambert equation.
Evaluation of xylanase-mediated hydrolysis on Dpl-WS and In-WS
To measure xylanase activity using In-WS or Dpl-WS as substrates, a reaction mixture in 50 mM sodium acetate buffer, pH 5.8 was prepared that contained 2% (w/v) biomass, 0.1% (w/v) bovine serum albumin (BSA), 0.02% (w/v) NaN3 and an aliquot (final concentration of 10 nmol enzyme/g biomass) of Tx-Xyn or a mutant thereof. To analyze the combined effect of xylanase and cellulase on In-WS, reactions were conducted as described above, except that Accellerase 1500 (Genencor, Rochester, NY, USA) (0.2 ml cocktail per g biomass) was added to the reaction mixture and reactions were buffered at pH 5.0. To assess the action of Accellerase 1500 alone, xylanase was omitted. All hydrolyses were performed at 50°C for 24 h with continuous stirring (250 rpm) in screw-capped glass tubes, and then stopped by heating at 95°C for 5 min. For analysis, the reaction mixture was centrifuged (10,000 × g for 2 min) and then the supernatant was filtered (polytetrafluoroethylene, 0.22 μm), before injection onto a high performance anion exchange chromatography system with pulsed amperometric detection (HPAEC-PAD).
For monosaccharide analysis, separation was achieved at 30°C over 25 min on a Dionex CarboPac PA-1 column (4 × 200 mm), equipped with its corresponding guard column, equilibrated in 4.5 mM NaOH and running at a flow rate of 1 ml/min. For the analysis of xylo-oligosaccharides (XOS), a Dionex CarboPac PA-100 column (4 × 200 mm), equipped with its corresponding guard column and equilibrated in 4.5 mM NaOH, was employed. Separation of various XOS was achieved by applying a gradient of NaOAc (5 to 85 mM) in 150 mM NaOH over 30 min at 30°C, using a flow rate of 1 ml/min. Appropriate standards (monosaccharides such as L-arabinose, D-xylose, D-glucose and D-galactose, and various XOS displaying a degree of polymerization of 2 to 6) at various concentrations (2 to 25 mg/l) were used to provide quantitative analyses. Finally, the quantitative results from HPAEC analysis (monomeric and oligomeric sugars) were converted into the amount of soluble monosaccharide equivalents (designated 'average solubilized weight'), and the percentage conversion was calculated as follows, either in terms of xylose or glucose:
Conversion (% tot. N) = (average solubilized N / theoretical total N) × 100% (w/w)
where 'N' represents xylose or glucose, and the 'theoretical total N' is the total amount of sugar N present in the initial straw sample (Table 1).
Determination of kinetic parameters
To measure the kinetic parameters of Tx-Xyn and its mutants, BWX and LVWAX were used as substrates at eight different concentrations (0 to 12 g/l). Hydrolysis reactions (1 ml) were performed at 60°C in NaOAc, pH 5.8, using approximately 4.5 and 3.5 nM of xylanase for BWX and LVWAX assays, respectively. During the course of the reaction, aliquots (100 μl) were removed at 3-min intervals, and immediately mixed with an equal volume of 3,5-dinitrosalicylic acid (DNS) reagent to stop the reaction. The quantity of solubilized reducing sugars present in samples was assessed by the DNS assay [63]. Finally, results were analyzed using SigmaPlot V10.0, which generated values for kcat and KM. Taking into account the heterogeneous nature of the substrates, computed KM values are apparent values having units of g/l.
Thermostability assay
To measure the thermostability of the xylanases used in this study, enzyme solutions (100 mM in 10 mM Tris-HCl buffer, pH 8.0) were incubated at 50°C and 60°C for up to 6 h. At intervals, aliquots were removed and used to measure residual xylanase activity on BWX (at 5 g/l) at 60°C using the DNS method to quantify solubilized reducing sugars. One unit (1 U BWX) of xylanase activity was defined as the amount of xylanase required to release 1 μmol of equivalent xylose per minute from BWX. Enzyme half-life (t1/2) was deduced by fitting ln(residual activity) = k·t, where t is the time and k is the slope, and t1/2 = ln(0.5)/k [16].
Determination of melting temperature by differential scanning fluorimetry (DSF)
A sample in 20 mM Tris-HCl buffer, pH 8.0 was prepared that contained 100 mM NaCl, SYPRO Orange (Invitrogen, final concentration 10×), and an aliquot (final concentration of 6.75 μM) of Tx-Xyn or mutant xylanases thereof. Negative controls containing either SYPRO Orange or xylanase alone were analyzed in parallel. A CFX96 Real-Time PCR Detection System (Bio-Rad) was used as a thermal cycler and the fluorescence emission was detected using the Texas Red channel (λexc = 560 to 590 nm, λem = 675 to 690 nm).
The PCR plate containing the test samples (20 μl per well) was subjected to a temperature range from 20°C to 99.5°C with increments of 0.3°C every 3 s. The apparent melting temperature (T m ) was calculated by the Bio-Rad CFX Manager software.
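The quantitative read-outs described in these methods (percentage conversion, thermal half-life, and apparent melting temperature) were obtained with SigmaPlot and the Bio-Rad CFX Manager software. The following minimal Python sketch only illustrates the underlying formulas; the function names, input variables and the closing example are ours and are not part of the original analysis.

```python
import numpy as np

def conversion_percent(average_solubilized_n, theoretical_total_n):
    """Percentage conversion (% tot. N, w/w) for sugar N (xylose or glucose):
    solubilized amount over the theoretical total present in the straw sample."""
    return 100.0 * average_solubilized_n / theoretical_total_n

def half_life_hours(time_h, residual_activity):
    """Half-life from first-order inactivation: fit ln(residual activity) = k*t
    and return t1/2 = ln(0.5)/k (k is negative for a decaying activity)."""
    k, _intercept = np.polyfit(time_h, np.log(residual_activity), 1)
    return np.log(0.5) / k

def melting_temperature(temperature_c, fluorescence):
    """Apparent Tm from a DSF melt curve, taken here as the temperature of the
    steepest fluorescence increase (maximum of dF/dT)."""
    dF_dT = np.gradient(fluorescence, temperature_c)
    return temperature_c[np.argmax(dF_dT)]

# Illustrative check of the yields quoted in the Discussion: releasing 16.7% of a
# total xylan content of ~26% DW corresponds to ~4.3% of the initial dry weight.
print(conversion_percent(0.167 * 26.0, 100.0))  # -> ~4.3
```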
9,760.8
2012-01-13T00:00:00.000
[ "Biology", "Engineering", "Environmental Science" ]
Competing Anisotropy-Tunneling Correlation of the CoFeB/MgO Perpendicular Magnetic Tunnel Junction: An Electronic Approach We intensively investigate the physical principles regulating the tunneling magneto-resistance (TMR) and perpendicular magnetic anisotropy (PMA) of the CoFeB/MgO magnetic tunnel junction (MTJ) by means of angle-resolved x-ray magnetic spectroscopy. The angle-resolved capability was easily achieved, and it provided greater sensitivity to symmetry-related d-band occupation compared to traditional x-ray spectroscopy. This added degree of freedom successfully solved the unclear mechanism of this MTJ system renowned for controllable PMA and excellent TMR. As a surprising discovery, these two physical characteristics interact in a competing manner because of opposite band-filling preference in space-correlated symmetry of the 3d-orbital. An overlooked but harmful superparamagnetic phase resulting from magnetic inhomogeneity was also observed. This important finding reveals that simultaneously achieving fast switching and a high tunneling efficiency at an ultimate level is improbable for this MTJ system owing to its fundamental limit in physics. We suggest that the development of independent TMR and PMA mechanisms is critical towards a complementary relationship between the two physical characteristics, as well as the realization of superior performance, of this perpendicular MTJ. Furthermore, this study provides an easy approach to evaluate the futurity of any emerging spintronic candidates by electronically examining the relationship between their magnetic anisotropy and transport. Bloch theory 7 . These independent studies imply that though PMA and TMR share common characteristics in the orbital symmetry, a competing relationship may exist between the two physical characteristics owing to their opposite preferences in electronic occupation. We notice that in spite of continuous efforts devoted to the contact engineering of the CoFeB/MgO MTJ, its recent progress had come to a standstill after several groundbreaking works. This is because the two characteristics were often investigated independently [8][9][10][11][12] and rarely considered in the same research setting. In this paper, we utilized angle-resolved x-ray magnetic spectroscopy to explore how the orbital symmetry electronically regulates TMR and PMA, following which a strict investigation of their cross interactions near the CoFeB/MgO hetero-junction was performed. This approach exclusively solved the unclear mechanism of the strong thickness-anisotropy sensitivity of CoFeB/MgO and successfully correlated it with TMR. To some extent, this approach is also novel in simultaneously predicting the MTJ's TMR/PMA from an electronic perspective. We also discovered an overlooked superparamagnetic (SPM) phase hidden behind the MTJ's signature PMA, which could be a fatal cause for the limited TMR. With a broader vision, the finding is beyond MTJ and equally important to heterostructured spintronic systems from the viewpoint of interfacial electronic effects in relation to the magnetic anisotropy and transport property. Figure 1(a) presents the magnetic moment (blue-ball curve) as a function of CoFeB thickness for the Ta(30 Å)/CoFeB (9, 10.5, 12, 13, 14, and 15 Å)/MgO(11 Å)/Ta(100 Å)/Si multilayer sample. The CoFeB moment monotonically increases with increasing thickness, while a dead layer of 9 Å is found because of the zero moment obtained in the CoFeB = 9 Å sample. 
Anisotropy switching (indicated by the sign change of K u ) from the out-of-plane to in-plane occurs with CoFeB thicknesses exceeding 13 Å, as supported by the thickness-dependent magnetic-hysteresis (M-H) curves superimposed on the top of Fig. 1(a). The critical thickness for anisotropy switching is consistent with classic references 8,9 , hence confirming our fabrication reliability. As revealed by the red-ball curve of Fig. 1(a), the anisotropy switching arises from a weakening of the out-of-plane anisotropy constant K u upon increasing the thickness. Interestingly, despite the positive value of K u , the out-of-plane coercivity (H c ) vanishes on reducing CoFeB to 10.5 Å, which suggests the loss of magnetic stability. A similar phenomenon has been reported by Cheng et al. 13 , wherein the same stacking structure exhibits a sharp drop of H c around 10 Å. The zero-field-cooling (ZFC) and field-cooling (FC) measurements in Fig. 1(b) reveal that all the investigated samples intrinsically exhibit a blocking temperature (T B ). This indicates the existence of a superparamagnetic (SPM) phase featuring short-range magnetic ordering. The SPM phase is ascribed to the 9-Å dead layer as a consequence of the inter-diffusion effect from the capping layer of Ta 14,15 . Upon annealing, Ta would penetrate CoFeB, causing demagnetization by destroying CoFe's BCC structure. The penetrated Ta magnetically isolates CoFeB and causes it to behave as a uni-axial, single-domain Stoner-Wohlfarth particle on MgO's surface 16 , which enables the SPM phase 17,18 . An illustration of the origin of SPM is given in Fig. 1(c), which refers to the coexistence of a major dead (~9 Å) and a minor alive CoFeB phase on MgO, at which the fraction of the alive phase is proportional to the CoFeB thickness exceeding 9 Å. This phenomenon of magnetic inhomogeneity (i.e., the coexistence of the dead and alive CoFeB) is validated by temperature-dependent H c and x-ray magnetic spectroscopic sum-rule analysis (S- Fig. 1 of Supplementary Materials). Figure 1(c), in fact, rationalizes the irregular moment enhancement of CFB10.5 as a kind of magnetic instability because of the sudden formation of discontinuous but magnetically alive CoFeB phase spread over the 9-Å dead layer, which abruptly boosts the magnetization. Further growth of CoFeB magnetically stabilizes the film, as characterized by a perfectly linear momentthickness dependency ( Fig. 1(a)). Results Despite better magnetic stability, SPM appears to be an inherent property because of the presence of T B even if CoFeB exceeds 10.5 Å. This is essential to the MTJ technology, which relies on the CoFeB-MgO combination. In particular, this combination appears to have reached the TMR limitation in recent years, perhaps because of the harmful magnetic inhomogeneity. In fact, this phenomenon is easy to be overlooked by the CoFeB/MgO's signature property of PMA. This is because magnetic anisotropy refers to a spatial moment-aligning preference, which is physically opposed to the SPM characterized by random flipping. To confirm this dual character, we performed temperature-dependent measurements on CFB10.5, the sample carrying the most significant SPM. In Fig. 2(a), the sample exhibits notable H c at low temperature while gradually losing its magnetic stability upon warming. The saturation magnetization (M s ) decreases by 10% from 50 K to 200 K (inset of Fig. 2(a)). This suggests a ferromagnetic (FM) → SPM transition due to thermal fluctuation. 
However, this magnetic transition is anisotropic because it only occurs out-of-plane, as indicated in Fig. 2(b). In addition, as an important indicator to magnetic anisotropy, the magnetic squareness (M r /M s , Fig. 2(b)) sharply drops out-of-plane for T > 200 K . This temperature coincides with CFB10.5's T B (200 K), at which SPM emerges ( Fig. 1(b)). This implies that PMA is an overwhelming mechanism of CoFeB/MgO, which constrains the magnetic disordering effect (i.e., the SPM phase) to occur uni-axially. Angle-resolved XAS/XMCD was then performed on the samples with three incident x-ray angles of 5°, 45°, and 90° with respect to the film's in-plane direction, as depicted in Fig. 3. This is to explore the complex electronic effects that regulate the PMA. The collected angle-dependent Fe (left figure) and Co (right figure) L 2 /L 3 XAS of CFB10.5 are presented in Fig. 4 Fig. 3, respectively. It is noteworthy that the Fe XAS intensity significantly decreases on rotating the film from the in-plane direction ( [100]) to the out-of-plane direction ([001]). Since the XAS intensity is proportional to the number of unoccupied electronic states, this suggests that the electrons prefer to occupy the degeneracy-lift states of the out-of-plane d-band orbital (defined as d[001] henceforth). The angle-dependent sum-rule analysis from Fe/Co XAS/XMCD is presented in the inset of Fig. 4, where data of orbital-to-spin (L z /S z ) moment ratio are given for the consideration of electronic-configuration calibration. We find that the d[001] occupation preference results in an increase of L z . An identical trend is observed in the angle-dependent Co L 2 /L 3 XAS. The coherent enhancements of Co/Fe L z as a result of the electronically filled d[001] could indicate the origin of PMA. In fact, theoretical studies 5,6 have suggested that the PMA is driven by the crystal-field effect arising from the d-band degeneracy due to CoFeB/MgO's broken symmetry at the interface; our result is the first that validates this hypothesis. For further validation, in Fig. 5(a), we collected helicity-dependent Fe and Co L 2 /L 3 XAS of CFB12 and CFB13, in addition to CFB10.5, by fixing the x-ray to [001]. We define μ(+) and μ(-) as the XAS spectra with positive and negative helicities generated from circularly polarized x-rays, respectively. Therefore, μ(+) and μ(-) intensities are inversely proportional to the occupations of majority and minority spin states of d[001], respectively, based upon a spin-dependent photo-excitation process (middle inset of Fig. 5(a)). The minority states (μ(-) intensity) of Co and Fe appear to decrease coherently with increasing thickness. However, the majority states (μ(+) intensity) are independent of the thickness change, as a localized electronic characteristic of hard magnets 19 . On correlating with Fig. 2(a), where K u is modified but the PMA is persistent in this thickness regime (10.5-13 Å, gray area of the figure), Fig. 5(a) suggests that the CoFeB/MgO electronically drives the PMA by coherently populating electrons in the d[001] minority states of the two magnetic elements. This substantiates the PMA-d[001] correlation from the viewpoint of the electronic spin state and thus assigns this correlation as the dominant mechanism for PMA. By thinning CFB13 to CFB10.5, the heterojunction's broken-symmetry effect is intensified, leading to a more electronically filled d[001] that is responsible for the pronounced PMA. 
This finding constitutes the first experimental evidence resolving the elusive mechanism responsible for the strong thickness-anisotropy sensitivity of CoFeB/MgO, despite its extensive applications in MTJs over the years. Therefore, the controversial SPM-anisotropy phenomenon can be understood as a coexistence in which the d[001] occupation electronically/energetically forces CoFeB/MgO to align perpendicularly in spite of a secondary, short-range ordering effect arising from a magnetically inhomogeneous interface underneath (Fig. 2(c)). Figure 5(a) also sheds light on CoFeB's spin-polarized states through the Fe and Co L3 XMCD spectra probed along [001] as a function of thickness. The XMCD signal, originating from the difference between majority and minority occupations, is an indicator of the level of spin polarization of the chosen element. Given that both Co's and Fe's XMCD signals are stronger in CFB13 than in the other two samples, we understand that increasing the thickness (reducing broken symmetry) would enhance the spin polarization by unequally populating the d[001] spin states of Co and Fe. TMR, which is physically supported by the different tunneling probabilities for majority/minority electrons, is therefore expected to be reflected by XMCD. In other words, the XMCD intensity (i.e., spin polarization) of the d[001] would reveal TMR information, because the spin-polarized d[001] state shares electronic similarities with the specific tunneling mechanism of CoFeB/MgO 2,3,7 . Following this principle, a higher TMR ratio should be achieved by increasing the CoFeB thickness, because of the enhanced spin polarization of the d[001] states, on the basis of the XMCD results. This hypothesis is directly supported by the thickness-dependent TMR data in Fig. 5(b), where a minute thickness increment of 0.5 Å in the CoFeB free layer can lead to a notable TMR enhancement, and this TMR-enhancing trend persists up to 13 Å, where the PMA ends (Fig. 1(a)). This points to a strong sensitivity of the TMR to the d[001] spin polarization, which can actually be probed/predicted by XAS/XMCD.
Discussion
To summarize the PMA-TMR relationship from the XAS/XMCD results, we provide a spin-dependent electronic diagram for CoFeB/MgO in Fig. 6, which is specifically constructed from the d[001] occupation. The two important physical characteristics, PMA and TMR, are both influenced by the minority state of the d[001] occupation, as marked by the yellow part of the diagram. The d[001] minority states become more filled by thinning the CoFeB, and stronger PMA is enabled by Lz stabilization. In contrast, upon increasing the thickness, the lifting of the d[001] minority states decreases their electron occupation, which enhances the spin polarization by enlarging the difference between the minority (yellow) and immovable majority (blue) states. This polarization enhancement therefore gives rise to an effective regulation of TMR. Nevertheless, it is essential to note that although PMA and TMR share the d[001] occupation, the two physical characteristics compete with each other because of the opposite thickness/broken-symmetry effects.
(Figure 5 caption fragment: the corresponding XMCD, obtained from the difference between the two XAS helicities, is shown below the XAS spectra together with its thickness dependency; μ(+) and μ(-) denote the XAS probed by left and right x-ray helicities, whose intensities directly reflect the numbers of unoccupied majority and minority states, as illustrated in the middle inset of the figure. (b) The TMR (in %) for the MTJ with a CoFeB free layer of 10 Å, 10.5 Å, 11 Å, and 13 Å (bottom pinned CoFeB layer fixed at 9 Å), measured over a field range of +/− 1.5 kOe; positive and zero TMR values refer to the antiparallel and parallel states of the two CoFeB electrodes. (d) Comparison of the TMR and the Fe/Co XMCD with respect to CoFeB thickness; the left y-axis of (c) shares the TMR scale of (b).)
This is confirmed by both macroscopic (Figs 1(a) and 5(b-d)) and microscopic (Fig. 5(a)) approaches. This actually explains the stagnant status of current MTJ technology: on promoting PMA for faster magnetization switching by reducing the thickness, TMR is inevitably sacrificed owing to the submerged minority tunneling channel. This is why TMR appears much larger in an MTJ with in-plane anisotropy (IMA) than in one with PMA 10-12 . Therefore, recent research efforts devoted to improving the performance of the CoFeB/MgO MTJ may still be operating within a compromise imposed by the PMA-TMR competition. The concept of creating a sharp interface at the capping-layer/CoFeB junction to prevent inter-diffusion is essential, but by itself insufficient, to improve the MTJ fundamentally. We suggest that the development of independent mechanisms of TMR and PMA is key to meeting these demanded but contradictory requirements by turning the two physical characteristics into a complementary relationship. One possible alternative is to search for capping materials that can electronically/structurally drive CoFeB's PMA independently from MgO through a crystallization process, and that combine good contact resistance with poor miscibility with CoFeB so as to prevent SPM formation. This also implies that current contact engineering in this particular material combination needs to be revisited. Through this finding, we hope to place future MTJ research on a more scientific footing and that hard work in contact engineering in this or similar MTJ systems will not go unrewarded. This study is also beneficial to the growing spintronic technology that requires a ferromagnet/semiconductor combination 20,21 , in which the interplay between the ferromagnetic layer's anisotropy and spin polarization is critical to the device's functionality. Subtle modifications of these inner workings can be clearly resolved, as in the work presented here.
Methods
Stacks consisting of Ta (30 Å)/CoFeB (t = 9, 10.5, 12, 13, 14, and 15 Å)/MgO (11 Å)/Ta (100 Å)/Si were deposited in a custom vacuum chamber with a base pressure of 10⁻⁹ Torr. Samples were denoted CFBt to correspond to the CoFeB film with a specific thickness (t). Metallic layers were deposited by dc magnetron sputtering (ANELVA C-7100) in an Ar atmosphere of 5 × 10⁻³ Torr with deposition rates of 1.25 Å/s and 0.5 Å/s for Ta and CoFeB, respectively. MgO layers were deposited by rf magnetron sputtering in a 5 × 10⁻³ Torr Ar atmosphere with a deposition rate of 0.089 Å/s. After deposition, the stacks were subjected to annealing at 300°C for 30 min. A transmission electron microscope (TEM) was used to probe the layer thicknesses and to confirm the [001]-textured MgO.
Magnetic properties (hysteresis) were analyzed using a vibrating sample magnetometer (VSM) at the desired temperatures. The TMR measurements were performed on MTJ cells with a CoFeB free layer of 10, 10.5, 11, and 13 Å, where the bottom CoFeB electrode was fixed at 9 Å.
(Figure 6 caption fragment: the diagrams are constructed from the d[001] occupation probed by the angle-resolved XAS/XMCD; yellow and blue parts of the electronic diagrams represent the occupied minority and majority states, respectively; the transparent green bar linking the two diagrams corresponds to the change of minority occupation (while the majority is unchanged) that determines the TMR strength, as reflected by XMCD; gradient bars for PMA (red) and TMR (blue) indicate how their strengths change with thickness and describe the opposite, competing correlation between the two physical characteristics.)
The MTJ cell was 180 nm in diameter and was microfabricated by photolithography and Ar ion milling, and TMR was collected using a Princeton Measurements Corporation MicroMag 3900 transport measurement system, supported by a Keithley 2400 meter and a Kepco BOP 12/36 power supply. TMR was measured using a four-point probe over a magnetic-field range of +/− 1500 Oe. Sample fabrication and transport measurements were performed at the Electronics and Optoelectronics Research Laboratories, Industrial Technology Research Institute, Taiwan. For the x-ray characterizations, x-ray absorption spectra (XAS) and x-ray magnetic circular dichroism (XMCD) were collected over the Co/Fe L2/L3 edges to provide element-specific, spin-dependent electronic information. All the presented XAS and XMCD data were normalized to the post-edge jump and to the XAS integration, respectively, which ensured a reliable quantitative comparison by regularizing the data with respect to variations in absorber concentration and any other aspects of the measurement. Sum-rule analyses 22 were applied to the XAS/XMCD spectra to obtain atomic spin (Sz) and orbital (Lz) moments. All synchrotron data were collected at the National Synchrotron Radiation Research Center (NSRRC), Taiwan, under a magnetic field of 1 T.
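For readers unfamiliar with how a sum-rule analysis converts integrated XAS/XMCD intensities into Sz and Lz, a schematic Python sketch is given below. It follows the standard L2,3 sum rules cited as reference 22, neglects the magnetic dipole term and any degree-of-polarization correction, and is not the analysis code used in this work; the function and variable names, and the assumption of background-subtracted inputs, are ours.

```python
import numpy as np

def sum_rule_moments(energy, mu_plus, mu_minus, n_holes, l3_mask):
    """Schematic L2,3 sum-rule estimate of orbital and effective spin moments
    (Bohr magnetons per atom). mu_plus/mu_minus are background-subtracted XAS
    for the two helicities; n_holes is the assumed number of 3d holes; l3_mask
    selects the L3 energy window. The <Tz> term is neglected."""
    xmcd = mu_plus - mu_minus
    xas = mu_plus + mu_minus
    p = np.trapz(xmcd[l3_mask], energy[l3_mask])   # XMCD integral over L3
    q = np.trapz(xmcd, energy)                     # XMCD integral over L3 + L2
    r = np.trapz(xas, energy)                      # summed XAS integral over both edges
    m_orb = -4.0 * q * n_holes / (3.0 * r)
    m_spin_eff = -(6.0 * p - 4.0 * q) * n_holes / r
    return m_orb, m_spin_eff, m_orb / m_spin_eff   # the ratio reduces to 2q / (9p - 6q)
```

The orbital-to-spin ratio in the last line does not depend on the assumed hole count, which is why ratios such as Lz/Sz are often quoted when the exact 3d occupation is uncertain.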
4,132.8
2015-11-24T00:00:00.000
[ "Physics" ]
Inter-urban mobility via cellular position tracking in the southeast Songliao Basin, Northeast China Position tracking using cellular phones can provide fine-grained traveling data between and within cities on hourly and daily scales, giving us a feasible way to explore human mobility. However, such fine-grained data are traditionally owned by private companies and are rarely publicly available, even for a single city. Here, we present, to the best of our knowledge, the largest inter-city movement dataset based on cellular phone logs. Specifically, our data set captures 3 million cellular devices and includes 70 million movements. These movements are measured at hourly intervals and span a week-long duration. Our measurements are from the southeast Songliao Basin, Northeast China, an area that spans three cities and one county with a collective population of 8 million people. The dynamic, weighted and directed mobility network of inter-urban divisions is released in simple formats, together with the divisions' GPS coordinates, to motivate studies of human interactions within and between cities.
Background & Summary
Popular use of cellular phones enables measurements of large-scale human mobility traces, which have become readily available and have served as a proxy for human mobility. The underlying interactions of meta-populations within and between cities have been extensively studied both in applied work (e.g., inter-urban mobility 1 , urban activities 2 , urban evolution 3 , heterogeneous responses during extreme events 4 ) and in epidemiological studies of mobility networks 5,6 . To study human movements, especially among cities, the analytic framework of mobility networks provides a useful way to characterize interactions among people in different sites. Although transportation and interaction patterns between locations change at hourly and daily scales, many studies of human mobility assume they are static [7][8][9] , neglecting the nature of mobility dynamics. This is, arguably, due to the lack of fine-grained public datasets that could describe the mobility dynamics between cities. There are some open-access datasets covering small geographical areas that take into account the time ordering of interactions, such as networks of wifi hotspots within a city 10 and networks of students on a university campus 11 . However, fine-grained movement datasets covering large geographical regions, including multiple cities with large populations, are still missing from the open-access landscape. In this paper, we curate and amass a fine-grained dataset of mobility to study inter-urban interactions. We capture cellular position tracking of millions of mobile phone users from an open-data program in Changchun City. Each location in our dataset represents a group of cellular stations in an official administrative division. We assume that an individual stays at a location if her recorded location is the same for at least half an hour within an hour-long time interval. A directed movement of an individual from a source location O to a destination location D denotes a change of location for the corresponding individual. We record the time of the directed movement as the time of arriving at D in our dataset. The overall directed mobility network of locations is finally compiled by sequentially processing the directed movements of all individuals. In the network, a node represents a location. A weighted edge represents the total number of users' movements between a pair of locations in each hour.
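To make the construction just described concrete, the sketch below shows how hourly user-location records could be turned into the weighted, directed hourly edges of the mobility network. It is only an illustration: the authors' released analysis code is in Matlab (Supplemental File 1), and the table layout, column names and division codes used here are invented rather than taken from the release.

```python
from collections import Counter
import pandas as pd

# Hypothetical per-user hourly records: the administrative division in which the
# user spent at least 30 minutes of each hour (all names and codes are invented).
stays = pd.DataFrame({
    "user":     ["u1", "u1", "u1", "u2", "u2"],
    "hour":     pd.to_datetime(["2017-08-07 08:00", "2017-08-07 09:00",
                                "2017-08-07 10:00", "2017-08-07 08:00",
                                "2017-08-07 09:00"]),
    "division": [220104, 220104, 220112, 220202, 220203],
})

edges = Counter()  # (origin O, destination D, arrival hour) -> number of movements
for user, track in stays.sort_values("hour").groupby("user"):
    rows = list(track.itertuples(index=False))
    for prev, cur in zip(rows, rows[1:]):
        if prev.division != cur.division:            # a change of location is one directed movement
            edges[(prev.division, cur.division, cur.hour)] += 1  # stamped with the arrival time

# Each key is a directed node pair of the hourly network; the count is the edge weight.
for (origin, destination, hour), weight in sorted(edges.items(), key=lambda kv: kv[0][2]):
    print(origin, destination, weight, hour)
```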
The dataset contains movements of near 3-million anonymized cellular phone users among 167 divisions (henceforth locations), covering 4 geographically adjacent areas (Changchun City, Dehui City, Yushu City, and Nong'an County) for a one-week period starting on August 7, 2017. This total geographic area, located in the southeast Songliao Basin in the center of the Northeast China Plain, Northeast China, covers more than 20 square kilometers and, in 2017, had a population of nearly 8 million. To facilitate the use of the open data, we process the above raw dataset to extract a dynamic and directed mobility network of locations. We make these networks available through files in CSV file format, separated by commas. There are 2 files released in 2 folders. The first file denotes the mobility network with four columns ordered by origin location, destination location, their weight and time. For spatial analysis applications, we also provide a geospatial file denoting the GPS information for each location, containing three columns ordered by location associated with its latitude and longitude. Although this described dataset is a major step towards enabling research about human mobility, it has several limitations. First, despite the fact that the dataset covers a cohort of millions of movements, it is only for a one-week period in summer time. Depending on the application, longer periods of time intervals might be needed. Second, we define a user has movement only when s/he stays in a new location at least half an hour. This may also induce bias as it ignores quick movements. Third, the individual's destination position is the last known recorded location of the individual. This recording might cause bias. The individual might actually already be in D during the whole period of t and t − 1. Fourth, the individual's original position might have been unrecorded at an earlier time (e.g., an hour or a day) than her/his recorded arrival time t, since it depends on the last time that the user used her/his phone. We caveat the researchers to be careful about their conclusions when using these data. Methods Original data sources. Our data consist of location records of millions of anonymized cellular phone users for one week starting from August 7, 2017. These locations include 4 geographically neighboring areas (i.e., Changchun City, Dehui City, Yushu City, and Nong'an County). A cellular phone is assumed to be located at the location of the closest cellular base station that it interacts through sending or receiving signals. In the raw movement data each base station is a unique unit. Note that a set of cellular base stations can serve a metapopulation to provide services together. There are over 12,000 cellular stations with their exact GPS location information. Using the input of GPS positions associated with cellular stations, we can get their official administrative division codes in 2017 version using the Amap APIs (https://lbs.amap.com/api/webservice/guide/api/georegeo), as well as the GPS information of each administrative division. In total, these cellular stations located in 167 divisions, with 100 in Changchun, 27 in Yushu, 18 in Dehui, and 22 in Nong'an. Each division includes 72 stations on average with a standard deviation of 65 stations. We group together a set of base stations as one location if they are within the same division. There are nearly 3-million phone users in this study. Most of these users are active with enough credits left in their accounts. 
Accounts with no credit automatically stop receiving signals within a few days, as enforced by the company's system. For each user, we aggregate the corresponding location records into hourly movements. Specifically, we assume an individual stays in a location at least half an hour to be considered in that particular location. If a user spends less than 30 minutes in a location, we assume s/he does not visit the corresponding location during the corresponding hour. Some trips may have large time intervals, perhaps due to phones being out of battery power. As such, we do not consider trips whose duration is more than 12 hours (less than 0.3% of the total trips). And accordingly, each
Defining the mobility network
Considering each place (a city or a county) as multiple metapopulations in different locations, we construct the directed mobility network for each hour of the week. Each location is represented as a node in our network. Edges are directed, connecting nodes where users move from origins to destinations, and weighted by the total number of users' movements in each hour. An individual directed movement from location i to location j at time t denotes that, in a user's trajectory, location j follows the previous location i at time t.
Data records
This dataset is released as 2 comma-separated values (CSV) files, each in a folder, including more than 70 million movements 12 . The first file includes the hourly mobility network with four columns ordered by origin location, destination location, edge weight, and arriving time. The weight is the number of movements per hour between the origin location and the destination location. The second file includes the GPS information for each location, containing three columns ordered by location identifier and the corresponding latitude and longitude.
(Figure 2(c) caption fragment: degree distributions of the released mobility network (blue) compared with that of Shenzhen taxi passengers (black); the x-axis is the logarithmic degree and the y-axis the kernel density estimate of the probability density. For the mobility network, the degree of a node is the total number of hourly movements starting or ending at this location across the 168 hours of the week; the Shenzhen network is a static mobility network with zones as nodes and passenger flows as edges, aggregating 2,338,576 trips by 13,798 taxis over 1634 zones from 18 to 26 April 2011 7 . Both datasets are fitted by Gamma distributions, with fit summaries shown next to each plot.)
Finally, two folders are used to group these files 12 . The first folder (Week-Mobility-Network) includes Mobility.txt, the file of the hourly mobility network for the entire week. The second folder (GPS-Location) includes GPS.txt, the file of latitude and longitude information for each location in the mobility network.
(1) Mobility.txt In the mobility network, each row represents the total number of hourly movements by people from location i to location j in the corresponding day. There are four columns ordered by origin location, destination location, their weight, and time. The format for this file is the following.
(2) GPS.txt The GPS information for each location. The format of this file is organized as three columns ordered by location identifier and the corresponding latitude and longitude information.
• Location: numerical administrative division code for each location;
• Latitude: numerical value for the latitude of the corresponding location;
• Longitude: numerical value for the longitude of the corresponding location.
Technical Validation
The reliability of the location and time information of users' movements in the network data largely depends on the reliability of the underlying source data. We verify the consistency via the geographically explicit distribution of locations. We visualize 400 locations on a geographic map, as shown in Fig. 1.
(Figure 3 caption: community structures over days. Daily mobility networks are built by summing the edge weights of the 24 hourly networks, and the Louvain community detection algorithm 13 is applied to each day of the week (subgraphs a to g), with colors denoting communities; an inter-urban community contains nodes belonging to different locations. Subtable h reports R, the percentage of nodes in inter-urban communities over all nodes; M, the mean number of nodes in a community; and N, the number of communities with more than 10 nodes. Sunday stands out with the highest R and the lowest N, bridging weekday and weekend inter-urban mobility patterns and connecting otherwise disconnected inter-urban locations.)
Mobility network. In the mobility network, nodes are defined as locations, and edges are weighted by the mobility flows between nodes. We verify the consistency of the mobility network with people's daily life using the hourly movement flows over the seven days of the week, as shown in Fig. 2a. A movement denotes an individual movement whose origin node is different from its destination node. For each hour, we count the number of movements between locations as the hourly movement flow. The hourly movement flows of all working days show two traffic peaks (morning and evening). The morning period starts at 9:00 and the evening period at 17:00. Both are approximately 4 hours long, similar to the mobility flows reported in the literature for another Chinese city, Shanghai, with the morning period starting at 9 am and the evening period starting at 4 pm 2 . As for weekends, traffic peaks are slightly lower and especially weak in the afternoon. Figure 2b shows the trip durations over 24 hours. The y-axis denotes the proportion of trips across trip durations. We can observe that trips of less than 12 hours account for over 99.7% of the total trips. The degree of a node denotes the total number of hourly movements passing through the corresponding node during the 168 hours of the week. Figure 2c shows the degree distribution compared with the degree distribution of another mobility network for another Chinese city (i.e., Shenzhen) 7 . We can observe that the part of the log-degree distribution for high degree values follows a Gamma distribution with a mean value of 10.9387. In contrast, the reported log-degree distribution of the mobility network for Shenzhen 7 shows a quite different Gamma distribution with a mean value of 5.5516.
Network structure analysis. Additionally, we analyze the community structure of the mobility network using the Louvain community detection algorithm 13 .
In each day, the inter-urban mobility network often consists of communities, that is, groups of metapopulations in locations that are highly intra-connected but only loosely interconnected 14,15 . Figure 3 shows the community structures for each day, with colors denoting the different detected communities. To explore the interactions of inter-urban mobility, we consider inter-urban communities, i.e., communities whose nodes belong to different locations. We consider three community-based measures. Specifically, R is defined as the percentage of all nodes that fall in inter-urban communities. A high R denotes strong movement between locations, resulting in multiple inter-urban locations ending up in the same community. M denotes the mean number of nodes in a community. A high M denotes a large average number of locations per local affiliation. N represents the number of communities with more than 10 nodes. A high N denotes high variability in mobility, with more local affiliations. We can observe that Sunday is special, with the highest R and the lowest N, bridging weekday and weekend inter-urban mobility patterns and connecting otherwise disconnected inter-urban locations.
Code availability
Matlab code for the data analysis of location correction and mobility network construction can be obtained freely from Supplemental File 1, with no restrictions on access.
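A minimal Python sketch of these community-based measures is given below. It is only illustrative: the authors' released analysis code is in Matlab (Supplemental File 1), the Louvain implementation used here is the one shipped with networkx rather than the original algorithm of reference 13, the network is treated as undirected for community detection, and the file layout and the division-to-city lookup are assumptions.

```python
import networkx as nx
import pandas as pd

def daily_measures(day_edges, division_to_city):
    """Community measures R, M and N for one daily mobility network.
    `day_edges` holds the hourly edges of a single day (origin, destination,
    weight); `division_to_city` maps a division code to its city or county and
    is an assumed input, not part of the released files."""
    g = nx.Graph()  # daily network: hourly weights summed; undirected for Louvain
    for row in day_edges.itertuples(index=False):
        if g.has_edge(row.origin, row.destination):
            g[row.origin][row.destination]["weight"] += row.weight
        else:
            g.add_edge(row.origin, row.destination, weight=row.weight)
    communities = nx.community.louvain_communities(g, weight="weight", seed=0)
    inter_urban = [c for c in communities if len({division_to_city[n] for n in c}) > 1]
    r = sum(len(c) for c in inter_urban) / g.number_of_nodes()  # share of nodes in inter-urban communities
    m = g.number_of_nodes() / len(communities)                  # mean community size
    n = sum(1 for c in communities if len(c) > 10)              # communities with more than 10 nodes
    return r, m, n

# Assumed usage, with Mobility.txt as released (origin, destination, weight, time per row,
# no header) in the working directory and a division-to-city lookup supplied by the user.
mobility = pd.read_csv("Mobility.txt", names=["origin", "destination", "weight", "time"])
mobility["day"] = pd.to_datetime(mobility["time"]).dt.date
# division_to_city = {...}
# for day, edges in mobility.groupby("day"):
#     print(day, daily_measures(edges, division_to_city))
```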
3,367
2019-05-23T00:00:00.000
[ "Geography", "Computer Science" ]
THE INFLUENCE OF COSPLAYING IN INCREASING JAPANESE LANGUAGE AND CULTURE LEARNING AT BINUS UNIVERSITY This research was intended to see the prospect of whether the significance of cosplay subculture had given certain influence towards these colorful communities. The target population was Bina Nusantara University students who were either in Japanese Department or those who were interested in Japanese culture. By utilizing the online questionnaire, this research used qualitative random sampling approach and expected to be used as a reference in designing the future curriculum since it was quite applicable and relevant to the condition in Indonesia. Based on the findings, cosplaying has been proven to be a beneficial activity that may help and motivate the learners to understand Japanese language and culture. Finally, the researchers suggest building Japanese language curricula based on the cosplaying activity that may be implemented soon to attract Japanese enthusiasts. INTRODUCTION According to Matsuura and Okabe (2014), cosplay is a term in Japanese culture that refers to the combination of costume and play as part of expressing the affection for anime and manga's story and characters.The difference between the western countries and Japanese is the Japanese government quickly embraces the pop culture as part of the national culture to promote the tourism industry of the country.There are no surprise people can easily find the cosplayers, which are mostly dominated by female high school and college students in places such as Akihabara and Harajuku in Tokyo, Japan. The exact presence of cosplaying in Indonesia is unknown.The researchers predict that this culture comes through from the comic books and movies.The private TV stations in the 80s also create a new fan club of certain anime or manga characters.This Japanese popular culture phenomenon is captured by the printing industries by translating the popular pocket-sized Japanese comic books into Bahasa.Gradually, the readers start to idolize the characters and its costumes. There is any difference between Indonesian and the Japanese cosplayers.There have been many Indonesian cosplayers are copying this Japanese sub-culture.The Indonesian cosplayers still think that the Japanese cosplayers are still the best because it is very difficult and expensive to make.However, albeit its limitations, the Indonesian cosplayers are capable of holding annual conventions where they can show their passion in dressing like their idols.This fandom phenomenon is like the Doujinshi convention, which is held regularly in Japan.In various articles, Chen (2015); Ito and Crutcher (2014); and Lamerichs (2011) have documented that the cosplayers do not only exhibit themselves as their favorite character, but also imitates the gesture, the way they speak, the words that the character mostly used, act in the character's ways, think the character's thoughts, and even assume the character's soul.In other words, they are expected to bring their favorite anime/manga characters into life. 
In relation to the hugely popular anime/manga characters, BINUS University also experiences similar phenomenon as the university is located in Jakarta.This existence of cosplayers, although not officially recorded, it can be caused by several reasons.For example; (1) Most of the students are at the undergraduate level with the range of the twenties.(2) Most BINUS University students come from Jakarta, and most of them are exposed to Japanese popular culture which makes them highly interested in Japanese culture.(3) BINUS University has Japanese Department, and it has courses that discuss Japanese popular culture.(4) Japanese culture is taught and endorsed through classes and courses (at Japanese Department), the student association or Unit Kegiatan Mahasiswa (UKM), and the major student community or Himpunan Mahasiswa Jurusan (HMJ).The students from different background surprisingly dominate this UKM.They join the club simply because they are interested in it.(5) The HMJ is a club endorsed by the Japanese Department.This club is quite homogenous in terms of its population.Therefore, the students that involved in this club are more similar in its activities and programs. At BINUS University, cosplaying itself is completely an independent course without endorsement from the university.The cosplayers are designed and make their costumes and accessories.According to Matsuura and Okabe (2014), this process of the self-making costume is the actual spirit of cosplaying.The cosplayers are making their costume following their favorite popular Japanese characters have become a standard in the Cosplay community.By looking at this trend, it is worth to analyze the nature of cosplaying at BINUS University to find out the advantages or disadvantages of this activity.With almost similar topic, Ito and Crutcher (2014) has studied the possibilities of learning the Japanese culture based on the fan culture in the US, which stated that the Cosplay community is characterized as an interest-driven, peer-based reciprocal learning environment. Based on the empirical and experience observation within the domain of BINUS University, these cosplayers do exist among these students.Although the number is not recorded accurately, these groups of fans have an intense relationship inside and outside the university.Some members of the group who join the Nippon Club are interested in Japanese culture that endorsed by the Japanese Department.Although the Department does not officially support the students to do Cosplaying, there is a tendency to perceive that the students who do Cosplaying have a better understanding about the Japanese culture compared to those who do not.From time to time, the non-Japanese Department students also show the interest towards the Japanese popular culture.The Nippon Club organizes and conducts their events independently.Because of this phenomenon, the researchers believe that it is part of the self-immersion that strongly motivates the Cosplayers to learn about Japanese culture.Based on these notions, it is conceivable to assume that BINUS University is a perfect place since the students have similar interests and characteristics. 
The researchers use the term manga in this article not only for comic books. In Japan, manga is loved by people regardless of age, sex, social class, occupation, and educational level. In the broader sense, manga can include caricature, cartoons (editorial, political, and sports), syndicated panels, and comic strips. The term manga, according to Ito and Crutcher (2014), generally refers to a story that reads like a graphic novel and is well known for its sets of frames and speech balloons. Some series have run for more than a few decades and contain tens of thousands of pages. Since 1990, the Japanese government has endorsed this culture by giving annual awards to manga artists, which inspires many young and adult artists to pursue the profession. Manga's visual texts should be understood as signs that represent social and cultural reality. Unlike American superhero comics, whose protagonists truly have superpowers, many protagonists in Japanese manga are ordinary people with unremarkable occupations. Otmazgin (2014) has stated that as readers follow these protagonists, they also learn about occupations, vocational jargon, unique events, the professional and social situations the protagonists encounter, and the ways those occupations function in society. The protagonists also represent their social class and the status, prestige, or honor accorded to them by society. Lamerichs (2011) has stated that fan culture such as manga, anime, and cosplaying is like religion or food in its capacity to facilitate empathy: many readers feel that they share the same experiences as the heroes and heroines. Reading manga can create groups of devotees who share the same values, experiences, and dreams. According to Ito and Crutcher (2014), these activities create a bond between readers that is later expressed in the form of cosplay. They also note that cosplay is a combination of the English words costume and play, a term coined in the 1980s by the game designer Takahashi Nobuyuki. Cosplaying and public performance in Japan are related to a long tradition of Kabuki theater dating back to feudal times, in which male actors in make-up portrayed both men and women. Ito and Crutcher (2014) mention that cross-dressing among performers is also common in the Takarazuka Theater Group, founded in 1914. The most notable cosplay group is the Harajuku girls, the large community of cosplayers found in Harajuku, Tokyo. Like Akihabara, Harajuku is a home of exotic, carefree public performance and cosplay expression; it is the center of fashion in Tokyo, from high fashion to the eccentric and personal styles displayed on the streets.
How cosplaying formed and became part of urban culture in big cities is explained by Smith (in Chen 2015) through the definition of fan culture: a body of people who are fans of a pop culture phenomenon and who form active groups in many postmodern societies. According to Chen (2015), even though such groups clearly exist, their genres and dynamics are difficult to define. The phenomenon is often called fandom, which is in fact a multicultural territory where each fan community subscribes to its own media substance, values, and contexts. Young people have their own fan cultures as part of youth subculture, many of which are little known or completely unknown to most of us. Fan culture has become a significant arena for communication studies because anime and manga fandom, as a subculture, has an enormous influence on young people. According to Chen (2015), in Japanese culture fans are deeply involved in making manga doujinshi (self-published comic fanzines), cosplaying (costume play), and participating in fan activities and conventions. This research is expected to evaluate the possibilities of cosplaying and how it may benefit the people who engage in this type of subculture. By extension, the research hopefully opens new insights into exposing cosplaying as one of many alternative ways of learning the Japanese language. The significance of the research can be formulated into four purposes: (1) to propose a different approach for practitioners in teaching Japanese language and culture, (2) to enrich the Japanese Department curriculum, (3) to provide learning alternatives for Japanese language students or enthusiasts, and (4) to serve as a model for language practitioners or teachers. In the larger context, this research is expected to provide a new approach to teaching foreign languages, especially Japanese. The findings may be relevant to teachers as well as students, and the output is expected to be used as a reference in designing future curricula, since it is applicable and relevant to conditions in Indonesia. METHODS This research uses a qualitative random sampling approach in which the samples are BINUS University students. Thirty-one undergraduate students were selected regardless of age, gender, or department; however, an equal number of students was selected from the Japanese Department and from other departments to ensure a balanced analysis. The researchers took the opportunity to talk to these cosplayer students. From these conversations, information was collected, documented, and recorded, especially concerning social interaction at conventions, and fan artists were identified for further formal interviews. Additional informants were selected if they were more than sixteen years old, had considerable experience in making doujinshi or cosplay, and were willing to be interviewed. The formal interviews were semi-structured, with a few questions designed to guide the main course of the interview and additional questions drawn up according to the responses given during the interview. Each interview lasted at least one hour and was recorded.
To interpret the meaning of the data, the researchers apply qualitative research. Patton (2002), as quoted by Suri (2008), states that qualitative research is concerned with developing explanations of social phenomena. For the quantitative information, the researchers organize, summarize, and describe the survey responses. These techniques constitute descriptive nonexperimental research. According to Johnson and Christensen (2014), the primary purpose of this type of research is to provide an accurate description or picture of the status or characteristics of a situation or phenomenon. Commonly, researchers follow three steps: first, randomly select a sample from a defined population; next, determine the characteristics of the sample; and last, infer the characteristics of the population from the sample. In the first stage, the researchers determined the research subjects and methods. In the second and third stages, they developed the questionnaires and the lists of questions to be distributed during cosplay conventions and events in Jakarta. In the next stage, the researchers documented the students' answers and grouped them into several categories, which were later analyzed and interpreted using the theory. The final stage was arranging the outcomes and interpreting the results into meaningful information. RESULTS AND DISCUSSIONS This research involves BINUS University students. The informants selected from different departments were treated as the main sample. Although the study initially focused on twenty informants, the sample eventually grew to 31 active respondents. Johnson and Christensen (2014) note that this snowball sampling commonly happens in qualitative studies and is often used to maximize rapport between respondents and interviewer during interviews and field observations. During field observations at cosplay events, the researchers used digital cameras, camcorders, digital voice recorders, and field notes to record the flow of events, and they recorded informal conversations after the events. The interviews focused on the informants' experiences from the first time they participated in the cosplay community. Although the researchers had already prepared a set of interview questions, these questions were intended to facilitate the informants' narratives, and deviations were not corrected when they occurred. The interview guidelines covered three categories of topics. The first concerned how respondents perceived changes in themselves as they gained online or offline cosplay experience. The second concerned their honest opinions about certain behaviors at cosplay events. The last concerned the types or characteristics of a popular cosplayer within the cosplayer community.
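For readers who wish to reproduce this kind of descriptive, nonexperimental summary, the following minimal Python sketch shows how categorical questionnaire answers could be tallied into the percentage breakdowns reported in the next section. The category labels and counts used here are hypothetical placeholders chosen only to illustrate the calculation; they are not the study's actual data.

from collections import Counter

def summarize_responses(responses):
    """Tally categorical answers and express each category as a percentage."""
    counts = Counter(responses)
    total = len(responses)
    return {category: round(100 * n / total, 1) for category, n in counts.items()}

# Hypothetical answers to the "cosplaying category" question for 31 respondents
# (placeholder data, not the real responses).
answers = ["momentum"] * 23 + ["passive"] * 6 + ["semi-absolute", "permanent"]
print(summarize_responses(answers))
# e.g. {'momentum': 74.2, 'passive': 19.4, 'semi-absolute': 3.2, 'permanent': 3.2}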
Based on the first question of the questionnaire and the interviews, most of the 31 respondents reported that their reasons for cosplaying were that they were interested in Japanese culture, wanted to become the characters they admired, were attracted to modeling and costume-making, or did it for personal reasons (satisfaction, enjoyment, and curiosity). In addition, some respondents noted that through cosplaying they learned new skills such as sewing, make-up, hair/wig styling, and prop making. For the second question, which asked about the type of cosplaying they prefer, the 31 respondents gave varied answers. Interesting findings emerged across several variables: cosplaying type (original, animation, manga, or other), cosplaying category (permanent or absolute, semi-absolute, momentum, or passive), cosplaying frequency, and cosplaying expenses. Although there are many types of cosplaying that Japanese enthusiasts could adopt, most respondents chose to cosplay animation characters, followed by smaller numbers choosing original, game, and manga characters. Regarding category, the respondents were mostly momentum cosplayers (74.2%), that is, cosplaying a certain character while it is trending, followed by passive cosplayers (19.4%), with semi-absolute and permanent/absolute cosplaying the least common. Regarding how often the respondents cosplay, approximately 87.1% said that it depends on whether there is an event, a few said once a week or once a month, and none answered every day. Finally, for cosplaying expenses, most respondents spend either below Rp500.000 (29%) or between Rp500.000 and Rp2.500.000 (64.5%), while the rest tend to spend more than Rp2.500.000. In relation to expenses, it is worth noting that most respondents (96.8%) said their cosplaying budget comes from their allowance rather than a salary or other sources. In relation to language and cultural proficiency, addressed by the third question of the questionnaire, it is interesting that most of the cosplayers have little Japanese language proficiency (58.1%) or are Japanese-illiterate (22.6%), compared with those above JLPT (Japanese Language Proficiency Test) N3 (16.1%). It is also notable that 100% of the respondents stated they would be willing to learn a character's personality, language, and body gestures to achieve their maximum cosplaying potential. In line with this, most of the cosplayers believe that cosplaying indeed helps them learn about Japan, including its characters, culture, and language (87.1%), while the others do not (12.9%). Finally, after trying cosplay, more than half of the respondents (at different JLPT N levels) were willing to learn more Japanese, while the rest (22.6%) said otherwise.
From these data, the researchers were able to analyze the social impact within this vibrant slice of Japanese culture. First, there is the question of how cosplayers feel when they cosplay. Most respondents stated that, through cosplaying, they learn new soft skills such as sewing, hair and make-up styling, and even prop making. They also believe that cosplaying gives them a degree of satisfaction, enjoyment, and curiosity. These outcomes are valuable because students are typically motivated and believe that anything is possible. Most of the 31 respondents believe that cosplaying eventually teaches them about the Japanese language, since they are required to understand the character they are playing (through oral expression, body gesture, and behavior). Cosplaying can thus be seen as a mixture of implicit and autonomous learning experiences. CONCLUSIONS Based on this research, cosplaying proved to be a beneficial activity that may help and motivate learners to understand Japanese language and culture. Most respondents strive for perfection in playing their characters; it is not just about the costume and gimmicks, but about being able to reflect the character the cosplayer desires to be and being acknowledged as capable of becoming that character. Most of them (96.8%) state that they would be willing to join a free Japanese course if one were offered. Based on their reported cosplaying budgets, it can also be stated that money is not the main obstacle. In conclusion, this research has demonstrated the usefulness of cosplaying for the respondents and, in a broader perspective, its potential to foster learners' engagement with Japanese language, art, and culture. These cosplayers have stated their willingness to spend their energy, time, and money on their fandom activity. To answer the research question, cosplaying does have pedagogical implications, albeit not fully developed ones, for Japanese mastery. In addition, given this positive impact, it is reasonable to suggest constructing curricula based on cosplaying activities. This could be implemented soon to attract Japanese-culture enthusiasts in both the Japanese and non-Japanese departments.
4,379.2
2017-10-31T00:00:00.000
[ "Education", "Linguistics", "Sociology" ]
A new species of Trismegistomya Reinhard (Diptera: Tachinidae) from Area de Conservación Guanacaste in northwestern Costa Rica Abstract Background The New World genus Trismegistomya Reinhard, 1967b (Diptera: Tachinidae) previously included only the type species Trismegistomya pumilis (Reinhard, 1967a) from Arizona, U.S.A. New information We describe a new species of Trismegistomya, Trismegistomya jimoharai Fleming & Wood sp. n., from Area de Conservación Guanacaste (ACG) in northwestern Costa Rica, reared from wild-caught caterpillars of Melipotis januaris (Guenée, 1852) (Lepidoptera: Erebidae). Our study provides a concise description of the new species using morphology, life history, molecular data and photographic documentation. In addition to the new species description, we provide a redescription of the genus, as well as of its type species Trismegistomya pumilis. Introduction The monotypic genus Trismegistomya Reinhard, 1967b (Dexiinae: Voriini) was initially erected and described under the name Trismegistus Reinhard, 1967a from a single female specimen collected in Portal, Arizona; however, as this name was preoccupied by Trismegistus Johnson & Snyder, 1904, it was replaced in that same year by Trismegistomya Reinhard, 1967b. In his original description, the author compared the type species, Trismegistus pumilis Reinhard, 1967a, to Myiophasia Brauer & Bergenstamm, 1891 (a parasitoid of Coleoptera and now a synonym of Gnadochaeta Macquart, 1850), but mentioned that they had "decisively different cephalic characters." Trismegistomya belongs to the tribe Voriini within the subfamily Dexiinae (O'Hara and Wood 2004). The Voriini are a cosmopolitan assemblage of genera, with a strong representation in the Neotropics. Voriini are a quite well-studied tribe; one of the more recent papers, on the Voriini of Chile, provided information on their biology, hosts and distribution (Cortés and González 1989). The tribe can generally be characterized by the following combination of character states: conical head profile (longer at level of pedicel than at vibrissa); proclinate, divergent and well developed ocellar setae; frons wide; proclinate and reclinate orbital setae present in both sexes; facial ridge bare; prosternum bare; anepimeral seta absent or poorly developed so as to appear hair-like; infrasquamal setae present; apical scutellar setae strong and decussate; dm-cu crossvein oblique, making posterior section of CuA equal to anterior section; R setulose at least to crossvein r-m and sometimes beyond; middorsal depression of ST1+2 reaching posterior margin; and aedeagus elongate and frequently ribbon-like (Cortés and González 1989). Voriini parasitize larvae of Lepidoptera, primarily belonging to families of Noctuoidea (Guimarães 1977), by laying flattened membranous incubated eggs directly on the cuticle of the host (Herting 1957). To date, there has been no other work on Trismegistomya since its original description. This work aims to build on the knowledge of the genus by adding a new species based on differences in external morphology and by providing COI (coxI or cytochrome c oxidase I) gene sequences. We also add a description of the previously unknown male of Trismegistomya pumilis (Reinhard, 1967b). This paper is part of a broader effort to name and catalog all of the tachinid species collected from the ACG inventory , Fleming et al. 2015b, Fleming et al. 2015c, Fleming et al. 2015a, Fleming et al. 2015, Fleming et al. 2016a, Fleming et al. 2016b, Fleming et al. 2017). 
This series of taxonomic papers will represent a foundation for later, detailed ecological and behavioral accounts and studies extending across ACG ecological groups, whole ecosystems and taxonomic assemblages much larger than a genus. Project aims and rearing intensity All reared specimens were obtained from host caterpillars collected in Area de Conservación Guanacaste (ACG) (Janzen et al. 2011, Janzen and Hallwachs 2011, Fernandez-Triana et al. 2014, Janzen and Hallwachs 2016). ACG's 125,000+ terrestrial hectares cover portions of the provinces of Alajuela and Guanacaste, inclusive of the dry forested northwestern coast of Costa Rica and, inland, of the Caribbean lowland rainforest. ACG comprises three different ecosystems and their intergrades, ranging from sea level to 2000 m. The tachinid rearing methods are described at http://janzen.bio.upenn.edu/caterpillars/methodology/how/parasitoid_husbandry.htm. Since its inception, this inventory has reared over 750,000 wild-caught ACG caterpillars. Any frequencies of parasitism reported here need to be considered against this background inventory (Smith et al. 2005, Smith et al. 2006, Smith et al. 2008, Janzen et al. 2011, Janzen and Hallwachs 2011, Rodriguez et al. 2012, Janzen and Hallwachs 2016). Comparative details of the parasitoid ecology of these flies will be treated separately in later papers, once the alphataxonomy of ACG caterpillar-attacking tachinids is more complete. Voucher specimen management Voucher specimen management follows the methods first outlined in . All caterpillars reared from the ACG efforts receive a unique voucher code in the format yy-SRNP-xxxxx. Any parasitoid emerging from a caterpillar receives the same voucher code as a record of the rearing event. If and when the parasitoid is later dealt with individually, it receives a second voucher code unique to it, in the format DHJPARxxxxxxx. These voucher codes assigned to both host and parasitoids may be used to obtain the individual rearing record at http://janzen.bio.upenn.edu/caterpillars/database.lasso. To date, all DHJPARxxxxxx-coded tachinids have had one leg removed for DNA barcoding at the Biodiversity Institute of Ontario (BIO) in Guelph, ON, Canada. All successful barcodes and collateral data are first deposited in the Barcode of Life Data System (BOLD, www.boldsystems.org) (Ratnasingham and Hebert 2007) and later migrated to GenBank. Each barcoded specimen is also assigned unique accession codes by BOLD and by GenBank. Inventoried Tachinidae were collected under Costa Rican government research permits issued to DHJ and exported from Costa Rica to Philadelphia, en route to their final depository in the Canadian National Insect Collection in Ottawa, Canada (CNC). Tachinid identifications for the inventory were conducted by DHJ in coordination with a) morphological inspection by AJF and DMW, b) DNA barcode sequence examination by MAS and DHJ, and c) correlation with host caterpillar identifications by DHJ and WH through the inventory itself. Dates of collection cited for each ACG specimen are the dates of eclosion of the fly, not the date of capture of the caterpillar, since the fly eclosion date is much more representative of the time when that fly species is on the wing than the time of capture of the host caterpillar. The collector listed on the label is the parataxonomist who found the caterpillar, rather than the person who retrieved the newly eclosed fly from its rearing container.
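As an aside for readers handling such records programmatically, the short Python sketch below illustrates one way to check whether a string follows the two voucher code conventions just described. The exact digit counts are placeholders inferred from the "x" patterns in the text, not an official specification of the inventory's database.

import re

# Hypothetical validators for the two voucher code formats described above:
# rearing events ("yy-SRNP-xxxxx") and individual parasitoids ("DHJPARxxxxxxx").
REARING_CODE = re.compile(r"^\d{2}-SRNP-\d{1,6}$")
PARASITOID_CODE = re.compile(r"^DHJPAR\d{7}$")

def classify_voucher(code: str) -> str:
    """Return which voucher convention a code appears to follow, or 'unknown'."""
    if REARING_CODE.match(code):
        return "rearing event (yy-SRNP-xxxxx)"
    if PARASITOID_CODE.match(code):
        return "parasitoid (DHJPARxxxxxxx)"
    return "unknown"

print(classify_voucher("09-SRNP-12345"))   # rearing event (illustrative code)
print(classify_voucher("DHJPAR0012345"))   # parasitoid (illustrative code)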
The holotypes of the species newly described herein are all deposited at CNC. Acronyms for depositories CNC Canadian National Collection of Insects, Arachnids and Nematodes, Ottawa, Canada Descriptions and imaging The description of the new species presented here is complemented with a series of color photos, used to illustrate the morphological differences with already known species. Imaging was carried out using the methods outlined in . The morphological terminology used follows Cumming and Wood (2009). Measurements and examples of anatomical landmarks discussed herein are illustrated in Fig. 1. Male terminalia were not examined, as the material was scarce. DNA Barcoding The DNA barcode region (5' cytochrome c oxidase I (CO1) gene, Hebert et al. 2003) was examined from two specimens of ACG Trismegistomya jimoharai sp. n. We obtained DNA extracts from a single leg using a standard glass fiber protocol (Ivanova et al. 2006). We amplified the 658 bp region near the 5' terminus of the CO1 gene using standard primers (LepF1-LepR1), following established protocols for production and quality control (Smith et al. 2006, Smith et al. 2008. Interested readers may consult the Barcode of Life Data System (BOLD) (Ratnasingham and Hebert 2007) for information associated with each sequence (including GenBank accessions), using the persistent DOI: dx.doi.org/10.5883/DS-ASTRIS. Description Male. Head: strongly conical in shape, 1.3× wider than tall in frontal view, in profile 1.35× wider at level of pedicel than at level of vibrissa; fronto-orbital plate and parafacial dull grey-silver tomentose; frontal vitta of reddish-brown color, 1.2× width of fronto-orbital plate; eye bare, occupying 0.75× height of head; postpedicel 1.18× pedicel; pedicel brilliant orange; arista slightly longer than postpedicel, abruptly tapered apically; antennal insertion level with middle of eye; gena almost 0.25× height of eye, strong genal groove of deep dull red color, contrasting with silver gena; two pairs of vertical setae, inner vertical setae convergent, almost 2x as long as outer vertical setae, which are strongly divergent; ocellar setae weak but present and strongly divergent; fronto-orbital plate with single row of frontal setae, at most one frontal seta below upper margin of pedicel and two rows of short setulae outside of frontal setae; parafacial bearing one row of weak proclinate parafacial setae directly adjacent to facial ridge (so close that the facial ridge appears as setulose), appearing as a continuation of frontal setae; one row of scattered setulae on remainder of parafacial, extending to lower proclinate orbital seta; two pairs of proclinate orbital setae, one pair of lateraloclinate or reclinate upper orbital setae; palpus yellow-orange, only slightly haired. Thorax: entirely black; dorsally with a very slight grey tomentum presuturally, only visible under certain angles of light, otherwise appearing as glabrous black; chaetotaxy: three postpronotal setae arranged in a straight line; two notopleural setae; three postsutural acrostichal setae; 3-4 postsutural dorsocentral setae; 2-3 postsutural intra-alar setae; three postsutural supra-alar setae; 2-3 katepisternal setae; anepimeron bare, with 2-4 short and stout hair-like setae, lacking any strong elongate anepimeral setae; scutellum with 1-2 pairs oflateral setae; one pair of apical setae; discal scutellar setae ranging from one pair to absent. 
Wings: hyaline with a slight yellow tinge; bend of vein M obtuse, ending at wing margin; crossvein dm-cu slightly oblique; wing vein R bearing 4-6 setulae dorsally, extending 0.75× distance from node to crossvein r-m. Legs: short and stout; entirely glabrous black, densely covered in appressed short setulae. Abdomen: ground color appearing glabrous reddish-black; very slight silver tomentum visible on anterior margins of tergites when viewed under different angles of light; mid-dorsal depression of ST1+2 extending only halfway across syntergite, not reaching tergal margin; marginals present only as a complete row of setae on both T4 and T5; one row of discal setae on T5. Females differ from males only in their terminalia. Diagnosis Trismegistomya is distinguished from other voriines by the following combination of characters: small, 3-3.5 mm long; habitus glabrous black, with only light tomentum presuturally only evident under certain angles of light; conical head profile with axis of pedicel subequal to head height; deeply excavated clypeus; vibrissa inserted above lower margin of face; yellowish wing reaching beyond tip of abdomen; abdomen ovate with mid-dorsal depression of ST1+2 not reaching tergal margin, possessing median marginal setae on T4 and T5 and median discal setae on T5 only. Description Male (Fig. 2). Length: 3 mm. Head (Fig. 2b): postpedicel orange basally, directly adjacent to pedicel; arista orange on apical half; ocellar setae well-developed, proclinate and divergent, arising behind anterior ocellus; fronto-orbital plate with two regular rows of short setulae outside of frontal setae; frontal setae appearing as continuous with parafacial setulae; two pairs of proclinate orbital setae, one pair of reclinate upper orbital setae. Thorax (Fig. 2a, c): three postsutural supra-alar setae, middle seta twice as long as outer two; two postsutural intra-alar setae; four postsutural dorsocentral setae; scutellum with one pair of basal scutellar setae, two pairs of lateral scutellars and one pair of weaker divergent apical scutellar setae; scutellum bearing one pair of weak but evident discal setae slightly wider than apical scutellars. Legs (Fig. 2c): entirely glabrous black, as in generic description. Wing (Fig. 2a): hyaline, very slightly darkening basally; basicosta dark brownr; wing vein R bearing 5-6 setulae extending 0.75× distance from node to crossvein r-m on dorsal surface; ventral surface bearing at most 0-2 setulae; calypters pale white translucent. Abdomen (Fig. 2a, c): ground color of abdomen glabrous maroon or reddish-black; very slight silver tomentum only visible when observed under different angles of light, appearing as a silver sheen on dorsal surface of tergites; complete row of marginal setae present on both T4 and T5; median discal setae absent on all tergites except row on T5. Female: unknown at this time, assumed to be similar to male as is the case with the type species. Diagnosis Trismegistomya jimoharai sp. n. is easily differentiated from its only congener, the type species T. pumilis Reinhard, by the following combination of traits: basal portion of postpedicel distinctly orange; arista orange apically; and four postsutural dorsocentral setae. Etymology Trismegistomya jimoharai sp. n. is named in honor of Dr. 
James O'Hara of Ottawa, Canada, in recognition of his many years of support for the curation, taxonomy and administrative logistics of the ongoing effort to inventory the caterpillar-attacking species of Tachinidae of ACG and to secure the residence of their voucher specimens in the Canadian National Collection. Ecology T. jimoharai has been reared twice, from only four larvae of Melipotis januaris (Guenée, 1852) (Lepidoptera: Erebidae) reared by the inventory to date, collected while feeding on Pithecellobium oblongum Benth. (Fabaceae) in ACG dry forest. It should be noted that Trismegistomya has not emerged from any of the other 808 rearings of Melipotis spp. by the inventory. Male: as female, except for terminalia. Diagnosis Trismegistomya pumilis can be differentiated from its only congener, T. jimoharai sp. n., by the following distinctive combination of traits: basal portion of postpedicel not distinctly orange; arista concolorous with postpedicel; and three postsutural dorsocentral setae instead of four. Ecology Unknown; specimens of T. pumilis were collected via Malaise traps and sweeping.
3,108
2019-04-10T00:00:00.000
[ "Biology", "Environmental Science" ]
Congestion Management Based on Optimal Rescheduling of Generators and Load Demands Using Swarm Intelligent Techniques. This paper presents Congestion Management (CM) methodologies and how they are modified in the new competitive framework of electricity power markets. When the load on the system is increased, or when some contingency occurs, some of the lines may become overloaded. The loadability of the system should therefore be increased by generating and dispatching power optimally for the secure operation of the power system. In this paper, the CM problem is solved by optimally rescheduling generating units and load demands, and swarm intelligent techniques are used to handle this problem. Specifically, the CM problem is solved using Particle Swarm Optimization (PSO), Fitness Distance Ratio PSO (FDR-PSO) and Fuzzy Adaptive PSO (FA-PSO). First, the generating units are selected based on their sensitivity to the overloaded transmission line, and then these generators are rescheduled to remove the congestion in the line. The paper also utilizes demand response offers to solve the CM problem. The effectiveness of the proposed CM methodology is examined on the IEEE 30 bus and Indian 75 bus test systems. Introduction In the deregulated power system, the challenge of Congestion Management (CM) for the transmission system operator is to create a set of rules that ensure sufficient control over producers and consumers (generators and loads) to maintain an acceptable level of power system security and reliability in both the short term (real-time operation) and the long term (transmission and generation construction), while maximizing market efficiency. The system is said to be congested when the producers and consumers of electric energy desire to produce and consume in amounts that would cause the transmission system to operate beyond one or more transfer limits. Congestion has a direct impact on the security and reliability of the system. Discrete changes in system configuration due to some contingency or outage may render the system insecure and cause other lines to become congested as well, resulting in dynamic congestion. Congestion Management (CM), that is, controlling the transmission system so that the transfer limits are observed, is perhaps the fundamental transmission management problem [1]. When a generator is a price taker, it can be shown that maximizing its profit requires bidding its incremental cost. When a generator bids other than its incremental costs in an effort to exploit imperfections in the market to increase profits, its behavior is called strategic bidding. If the generator can successfully increase its profits by strategic bidding or by any means other than lowering costs, it is said to have market power, and one of the main causes of market power is congestion. Various approaches have been presented in the literature for solving the CM problem. The general methods adopted to relieve congestion involve rescheduling generator power outputs, providing reactive power support, and curtailing load demands/transactions. Optimal Power Flow (OPF) based CM techniques are widely available in the literature [2]. A new Particle Swarm Optimization (PSO) technique to relieve line congestion with minimum generator rescheduling cost is proposed in [3].
A new CM strategy based on generator rescheduling using the Cuckoo Search algorithm is proposed in [4] to minimize the rescheduling cost of generators. A CM approach in a deregulated electricity market using improved inertia weight PSO has been proposed in [5]. Reference [6] proposes a technique for the optimum selection of participating generators using generator sensitivities to the power flow on congested lines, and minimizes the deviations of the rescheduled generator power outputs from the scheduled levels. Reference [7] proposes an Artificial Bee Colony algorithm, inspired by the intelligent foraging behavior of a honeybee swarm, to solve the CM problem based on generator rescheduling. Reference [8] proposes a CM approach using generator rescheduling and a genetic algorithm to identify the minimum rescheduling cost. Reference [9] proposes an approach to alleviate transmission congestion by rescheduling the active and reactive power output of generators. Reference [10] proposes a transmission CM approach in a restructured market environment using a combination of demand response and Flexible Alternating Current Transmission System (FACTS) devices. A new CM framework considering the dynamic voltage stability boundary of the power system is proposed in [11]. Reference [12] presents an exhaustive and critical review on the topic of CM, focusing on conventional CM methods. A multi-objective CM framework that simultaneously optimizes the competing objective functions of CM cost, voltage security, and dynamic security is proposed in [13]. A rescheduling-based CM strategy in a hybrid electricity market structure for a combination of hydro and thermal units is proposed in [14]. Reference [15] proposes a one-step methodology for CM of a hybrid power market that consists of a power pool and bilateral contracts between the market participants. A new CM method based on voltage stability margin sensitivities is proposed in [16]. Reference [17] presents a price volatility optimization methodology capable of assessing demand response and the willingness-to-pay factor in real time by tracing each load for its ability to retain its place in the market without curtailment or with optimized curtailment. A methodology for real-time CM of MV/LV transformers is proposed in [18]. Reference [19] proposes a generation rescheduling-based approach for CM in the electricity market using a novel ant lion optimizer algorithm. Reference [20] investigates how demand management contracts can help the electricity sector in both regulated and deregulated environments. A demand-side based CM approach for managing transmission line congestion has been proposed in [21] for a pool-based electricity market model. From the above literature, it is clear that the operational aspects of power systems pose some of the major challenging problems encountered in the restructured power industry. The present paper focuses on the CM problem within an OPF framework in the restructured power market scenario. The objective of the conventional OPF problem is changed to include a mechanism that enables the electricity market players to compete and trade, while ensuring system operation within the security limits. The motivation of this paper is to solve the CM problem by using the optimum rescheduling of generators and load demands.
The participating generators are selected based on their sensitivity to the overloaded line, and then these generators are rescheduled to relieve the congestion in the transmission lines. The proposed CM problem is solved using the PSO, Fuzzy Adaptive PSO (FA-PSO) and Fitness Distance Ratio PSO (FDR-PSO) algorithms. The simulations are performed on the IEEE 30 bus and practical Indian 75 bus test systems. The rest of the paper is organized as follows. Section 2 presents the Congestion Management (CM) problem formulation. The description of the swarm intelligent techniques (PSO, FA-PSO and FDR-PSO) is presented in Sec. 3. Section 4 presents the simulation results and discussion. Section 5 presents the contributions with concluding remarks. Congestion Management (CM): Problem Formulation There are two broad paradigms that may be employed to alleviate congestion in the system: the cost-free and the not-cost-free paradigms [22]. The former includes actions like outaging congested lines or operating phase shifters, transformer taps, or FACTS devices. These means are termed cost-free only because the marginal costs involved in their usage are nominal. The not-cost-free paradigm includes: • Generation rescheduling: this leads to generation operating at an equilibrium point away from the one determined by equal incremental costs. Mathematical models of pricing tools may be incorporated in the dispatch framework and the corresponding cost signals obtained. These cost signals may be used for congestion pricing and as indicators for the market participants to rearrange their power injections/extractions in order to avoid such congestion. • Prioritization and curtailment of loads/transactions: a parameter termed willingness-to-pay-to-avoid-curtailment was introduced in Reference [23]. This can be an effective instrument in setting the transaction curtailment strategies, which may then be incorporated in the OPF framework. These models can be used as part of a real-time open access system dispatch module [24]. The function of this module is to modify the system dispatch to ensure secure and efficient system operation based on the existing operating condition. It would use the schedulable resources and controls subject to their limits and determine the required curtailment of transactions to ensure uncongested operation of the power system [24]. Each generator in the power system has a different sensitivity to the power flow in an overloaded/congested branch. The Generator Sensitivity (GS) to a line is defined as the ratio of the change in active power flow in the k-th transmission line connected between buses i and j (i.e., ∆P_ij) to the change in power generation of the g-th generator (i.e., ∆P_g) [6], [7], [8], and it is represented as GS_g = ∆P_ij / ∆P_g. The generator sensitivity values are calculated considering the slack bus as reference. Therefore, the sensitivity of the slack bus/slack generator to any congested transmission line in the system is always zero. Generators having large and non-uniform sensitivity values are selected to participate in the CM by rescheduling their generation outputs. The basic power flow equation on the congested line can be written as [6]: Neglecting the P-V coupling, Eq.
(1) can be expressed as: In this paper, the CM problem is solved by considering the rescheduling of generating units, and also by using demand response offers provided by the load demands. The CM problem using only the rescheduling of generating units is formulated as [6]: where g = 1, 2, 3, ..., N_g. C_g represents the incremental and decremental price bids submitted by the generating units; these are the electricity prices at which the generating units are willing to adjust their active power outputs. The CM problem using the rescheduling of generating units together with demand response offers is formulated as: where k = 1, 2, ..., N_D. The first term in the above objective function is the rescheduling cost of the participating generators, and it is expressed as [25]: where a_i, b_i, c_i are the fuel cost coefficients of the i-th generating unit. The second term is the demand response cost associated with the demand response offers provided by the load demands, and it is expressed as [25]: where a_k, b_k, c_k are the cost coefficients of the demand response offers provided by the k-th load demand. The above objective functions (i.e., Eq. (4) and Eq. (5)) are solved subject to the following constraints: where F_l^0 is the power flow caused by all contracts requesting the transmission line service, and F_l^max is the maximum line flow limit of the transmission line connected between buses i and j; where g = 1, 2, 3, ..., N_g, and P_g^max and P_g^min are the maximum and minimum limits of the generator power outputs. The power balance equation is expressed as: The demand response offers provided by the k-th load demand are restricted by [25]: that is [26], As mentioned earlier, in this paper the proposed CM optimization problem is solved using swarm intelligent techniques. The description of these techniques is presented below. Swarm Intelligent Techniques In this paper, Particle Swarm Optimization (PSO), FDR-PSO and FA-PSO are used to solve the CM problem. PSO is a population-based stochastic optimization technique developed by Eberhart and Kennedy in 1995, inspired by the social behavior of bird flocking or fish schooling [27]. PSO shares many similarities with evolutionary computation techniques such as Genetic Algorithms (GAs). The system is initialized with a population of random solutions and searches for the optimum by updating generations. In PSO, the particles fly through the problem space by following the current optimum particles [28]. Particle Swarm Optimization (PSO) PSO is initialized with a group of particles (solutions) and then searches for the optimum through a number of generations. In each generation, each particle is updated by following two best stored values. The first is the best value the particle has found so far, called p_best. The other best value tracked by the particle swarm optimizer is the best value obtained so far by any particle in the population, called g_best. The PSO procedure based on this concept can be described as follows. Each particle tries to modify its position using the current velocity and the distance from p_best and g_best. The modification is represented by the concept of velocity. The velocity of each particle can be modified by [29]: Using the above equation, a velocity that gradually approaches p_best and g_best can be calculated. The current position can then be modified using: This search procedure is called Classical PSO.
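Because the update equations are not reproduced in this excerpt, the following minimal Python sketch illustrates the Classical PSO loop just described, applied to an illustrative congestion-relief fitness. The rescheduling-cost term and the overload penalty are stand-in assumptions rather than the paper's exact Eq. (4), and the parameter values only mirror those reported later for the test systems (population 60, c1 = c2 = 2, inertia weight decreasing from 0.9 to 0.1, 500 generations).

import numpy as np

rng = np.random.default_rng(0)

def cm_fitness(delta_pg, price_bids, line_overload):
    """Illustrative CM score: rescheduling cost plus a penalty on residual
    line overload (flow above the limit). Not the paper's exact Eq. (4)."""
    resched_cost = np.sum(price_bids * np.abs(delta_pg))
    return resched_cost + 1e4 * max(line_overload(delta_pg), 0.0)

def classical_pso(cost, dim, lb, ub, n_particles=60, iters=500,
                  c1=2.0, c2=2.0, w_max=0.9, w_min=0.1):
    """Classical PSO with a linearly decreasing inertia weight."""
    x = rng.uniform(lb, ub, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([cost(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)].copy()
    for it in range(iters):
        w = w_max - (w_max - w_min) * it / iters
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lb, ub)
        f = np.array([cost(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, pbest_f.min()

In practice, cost would wrap cm_fitness together with a power-flow evaluation of the rescheduled operating point; here it is left abstract on purpose.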
The operation of PSO is illustrated by the flow chart in Fig. 1 [30] (initialize the swarm, assign a fitness to each particle, evaluate p_best and g_best, update particle velocities and positions, and repeat until the convergence criterion is reached). An inertia weight ω is introduced into the velocity update expression, and it is shown below: The typical values of ω are in the range [0.9, 1.2]. Equation (15) consists of three terms [27]. The first term is the inertia velocity of the particle, which reflects the memory behavior of the particle; the second and third parts are utilized to change the velocity of the particle. Particle velocities in each dimension are limited to a maximum velocity V_max whenever the particle velocity exceeds V_max; V_max is usually specified by the user. Fitness Distance Ratio PSO (FDR-PSO) In the original PSO, each particle learns from its own experience and from the experience of the most successful particle. From the literature, it has been shown that the particle positions in PSO oscillate in damped sinusoidal waves until they converge to points between their previous p_best and g_best positions. During this oscillation, if a particle reaches a point with better fitness than its previous best position, the particle continues to move towards the global best position discovered so far. All particles follow the same behavior and thus converge quickly to a good local optimum of the problem. In the FDR-PSO algorithm, in addition to the socio-cognitive learning processes, each particle also learns from the experience of neighboring particles that have a better fitness than itself. This approach changes the velocity update equation, although the position update equation remains unchanged. The algorithm outperforms PSO and many of the recent improvements of the PSO algorithm on many benchmark problems, while being less susceptible to premature convergence [31]. It selects only one other particle at a time when updating each velocity dimension, and that particle is chosen to satisfy the following two criteria: • It must be near the current particle. • It should have visited a position of higher fitness. The simplest way to select a nearby particle that satisfies these two criteria is to maximize the ratio of the fitness difference to the one-dimensional distance. In other words, the d-th dimension of the i-th particle's velocity is updated using a particle called the n_best, with prior best position P_j. It is necessary to maximize the Fitness Distance Ratio (FDR), which is expressed as [31]: In the FDR-PSO algorithm, the particle's velocity update is influenced by the following three factors: • the previous best experience, i.e., p_best of the particle; • the best global experience, i.e., g_best, considering the best p_best of all particles; • the previous best experience of the "best nearest" neighbor, i.e., n_best. Hence, the new velocity update equation becomes: The position update equation remains the same as in Eq. (14).
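The FDR-based velocity update can be sketched as follows in Python. Since Eq. (16)-(17) are not reproduced above, this is a minimal interpretation under stated assumptions: minimization of the cost, one n_best chosen per dimension by maximizing (fitness improvement)/|distance|, and illustrative acceleration coefficients (the c1 = c2 = 1, c3 = 2 choice is an assumption of this sketch, not a value quoted from the paper).

import numpy as np

def fdr_velocity_update(x, v, pbest, fit_pbest, gbest, w=0.9,
                        c1=1.0, c2=1.0, c3=2.0, rng=np.random.default_rng()):
    """One FDR-PSO velocity update: besides the p_best and g_best pulls, each
    dimension is attracted toward the p_best of the neighbour maximizing the
    fitness-distance ratio (for a minimization problem)."""
    n, dim = x.shape
    v_new = np.empty_like(v)
    for i in range(n):
        for d in range(dim):
            # Fitness-Distance Ratio w.r.t. every other particle's p_best.
            dist = np.abs(pbest[:, d] - x[i, d]) + 1e-12
            fdr = (fit_pbest[i] - fit_pbest) / dist   # positive if neighbour is better
            fdr[i] = -np.inf                          # exclude the particle itself
            nbest = pbest[np.argmax(fdr), d]
            r1, r2, r3 = rng.random(3)
            v_new[i, d] = (w * v[i, d]
                           + c1 * r1 * (pbest[i, d] - x[i, d])
                           + c2 * r2 * (gbest[d] - x[i, d])
                           + c3 * r3 * (nbest - x[i, d]))
    return v_new

The position update is then x = x + v_new, exactly as in the classical PSO sketch shown earlier.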
Fuzzy Adaptive PSO (FA-PSO) Fuzzy Adaptive PSO (FA-PSO) [32] designs a fuzzy system to dynamically adapt the inertia weight (ω) for the CM problem. To obtain a better ω under a fuzzy environment, the inputs (the fitness of the current inertia weight and the current location, i.e., solution) and the output (the correction of the inertia weight, ∆ω) must be represented in fuzzy set notation. All membership functions are triangular in shape and are expressed with three linguistic variables (S, M and L) for 'Small', 'Medium' and 'Large', respectively, as shown in Fig. 2. The values of XS, XM and XL are selected from previous experience. As mentioned earlier, it is very difficult to form a crisp mathematical model for the adaptive PSO to change the inertia dynamically; therefore, simple IF/THEN rules are suitable for determining ∆ω in the FA-PSO process. Normalized Fitness: The fitness of the current solution (i.e., location) is the most important input for predicting ω for the right choice of velocity. Here, a Normalized Fitness (NFIT) value is used as an input, bounded between 0 and 1. Normalized Fitness (NFIT) is defined as: In minimization problems, a smaller value of NFIT denotes a better solution. In the FA-PSO algorithm, the Total Cost (TC) from Eq. (4) and Eq. (5) at the first iteration may be used as TC_max for subsequent iterations. Only the least-cost generator, with unlimited power limits and without considering the constraints, is used to determine the TC_min that satisfies the demand. This range is fitted to the shape of the triangular membership function. Current Inertia Weight Correction (∆ω): Both negative and positive corrections of ω are required. The change in inertia weight (∆ω) is described with three linguistic variables (NE, ZE and PE) for 'Negative', 'Zero' and 'Positive' corrections instead of (S, M and L). The selected range for ∆ω is between −0.1 and 0.1. IF/THEN Rules and Defuzzification: The simple IF/THEN rules for the inertia weight correction are presented in Tab. 1. There are 9 rules (3 × 3) for the 2 input variables, with 3 linguistic variables per input variable. Generally, the fuzzy control inputs are crisp. Using the arithmetic product, the degrees of fulfillment (DOF) of the rules activated in Tab. 1 are determined. For every rule, the output (fuzzy ∆ω) is scaled in accordance with the DOF [32]. The final output value is the arithmetic sum of the results obtained from the activated rules. Using the centroid method, the final output is defuzzified to a crisp value (∆ω). Binding Fitness: Fitness is an index used to determine the superiority of an individual in the swarm. Generally, the objective function is taken as the fitness function, and inequality constraints are converted into penalty terms added to the objective function. The main drawback of this approach is that the best particle/individual can be misjudged as unsuitable because of the penalty factors, which are usually assigned by an empirical method and are deeply affected by the problem model.
To overcome this drawback, a binary fitness has been used; one for the optimum objective function and the other for binding constraints. Optimal objective fitness is equal to the value of Eq. (4) or Eq. (5), which describes the active power rescheduling cost, therefore the congestion cost, i.e., the cost to relieve the congestion. The fitness value of binding constraints is used to scale the level of violation, and is calculated using: where Z is value of inequality constraint. Z min and Z max are minimum and maximum limits of inequality constraints. Results and Discussion In this paper, IEEE 30 [33] and practical Indian 75 bus [34] test systems are selected to show the effectiveness and the suitability of the proposed algorithms applied to solve the CM problem. The parameters selected for PSO are: swarm/population size is 60, acceleration constants C 1 and C 2 are 2, maximum number of generations is 500, maximum and minimum limits of inertia weight are 0.9 and 0.1, respectively. Simulation Results on IEEE 30 Bus Test System IEEE 30 bus test system [33] consists of 6 generating units, 24 load demands and 41 branches. It should be noted, that the obtained generator sensitivity values are the reference with respect to the slack bus. Therefore, the sensitivity of the slack bus generator to any congested line in the system is always zero. Here, two case studies are performed, one is based on only generation rescheduling, and the other one is based on both generators and load demands rescheduling. These case studies are presented further: 1) Case 1: CM Based on Optimal Rescheduling of Generators In this case, only the participating generators are considered to be rescheduled to alleviate the congestion in the system. The generator sensitivity factors for this system are depicted in Fig. 3. Generators which are participating in the CM are selected depending upon their sensitivity to the overloaded/congested line. In IEEE 30 bus system, all the generating units have nearly same sensitivity to the line because it is a small system. Here, the slack bus is selected by default as it is the reference bus. In this case study, the system is over-loaded by 19 % to perform the CM. The scheduled power generation at each selected bus before and after the CM is presented in Tab The above simulation results clearly indicate that FA-PSO provides us with best solution since the costs and losses in the system are lower when compared to PSO and FDR-PSO. 2) Case 2: CM Based on Optimal Rescheduling of Generators and Demand Response Offers As mentioned earlier, in this sub-section both generators and load demands are rescheduled to alleviate the congestion in the system. In this case study, the system load has been increased to 145 % to create the congestion in the system. IEEE 30 bus system has 6 generating units and 21 load demands. In this paper, it is assumed that the System Operator (SO) receives the generator bids and demand response/load shedding offers from customers to perform the CM analysis [26]. In this case study, it is assumed that the amount of load shed at i th bus cannot be more than 30 % of load demand at that bus. Table 6 presents the optimal generation schedules before and after the CM analysis for Case 2 using PSO, FA-PSO and FDR-PSO algorithms. The line flows in the congested lines before and after the CM are presented in Tab. 7. In this case, the transmission lines 1, 10 and 18 are overloaded/congested, and their power flows are 143.13 MVA, 43.82 MVA and 36.08 MVA, respectively. 
The maximum power flow limits of lines 1, 10 and 18, however, are 130 MVA, 32 MVA and 32 MVA, respectively. To overcome this situation, generation rescheduling and demand response are used; the congestion in the system is thereby removed, and the power flows after the CM are presented in Tab. 7. As mentioned earlier, the CM has been performed using the PSO, FDR-PSO and FA-PSO algorithms. The optimum total cost and losses obtained after the CM are presented in Tab. Simulation Results on Practical Indian 75 Bus Test System The Indian 75 bus system [34] has 97 branches, 15 generators, 24 transformers and 12 shunt reactors. The generator sensitivity factors for this system are depicted in Fig. 5. The generating units that participate in the CM are selected depending upon their sensitivity to the overloaded line. In the practical Indian 75 bus system, 11 out of 15 generators are selected for participation in the CM problem. Here, the slack bus is selected by default as it is the reference bus. The power generated at each selected bus before and after the CM is shown in Tab. 9 (Tab. 9: Active power output before and after the congestion management for the Indian 75 bus system). From the above simulation results on the IEEE 30 bus and Indian 75 bus systems, it can be observed that the FA-PSO algorithm provides the best solution, since the costs and losses in the system are lower than with PSO and FDR-PSO. However, the FDR-PSO algorithm takes less time to find a solution; FA-PSO takes more time, as it has to evaluate the fuzzy logic block in each iteration. Conclusion Congestion Management (CM) using the optimal rescheduling of active power generation and load demands/demand response with Particle Swarm Optimization (PSO), Fitness Distance Ratio PSO (FDR-PSO) and Fuzzy Adaptive PSO (FA-PSO) algorithms has been presented in this paper. The generator rescheduling is performed by minimizing the generation rescheduling cost while satisfying all line flow limits. Generators are selected in accordance with their sensitivity to the overloaded transmission line. The CM problem is modelled as an optimization problem and solved using PSO, FDR-PSO and FA-PSO. The proposed CM methodology is implemented on the IEEE 30 bus and practical Indian 75 bus test systems. The simulation results show that FA-PSO gives the lowest cost compared to the other swarm intelligent algorithms, whereas FDR-PSO requires less computational time to solve the CM problem.
6,080.2
2018-01-14T00:00:00.000
[ "Computer Science" ]
Data-Driven Critical Tract Variable Determination for European Portuguese: Technologies such as real-time magnetic resonance imaging (RT-MRI) can provide valuable information to evolve our understanding of the static and dynamic aspects of speech by contributing to the determination of which articulators are essential (critical) in producing specific sounds and how (gestures). While a visual analysis and comparison of imaging data or vocal tract profiles can already provide relevant findings, the sheer amount of available data demands, and can strongly profit from, unsupervised data-driven approaches. Recent work in this regard has asserted the possibility of determining critical articulators from RT-MRI data by considering a representation of vocal tract configurations based on landmarks placed on the tongue, lips, and velum, yielding meaningful results for European Portuguese (EP). Advancing this previous work to obtain a characterization of EP sounds grounded in Articulatory Phonology, important for exploring critical gestures and advancing, for example, articulatory speech synthesis, entails the consideration of a novel set of tract variables. To this end, this article explores critical variable determination considering a vocal tract representation aligned with Articulatory Phonology and the Task Dynamics framework. The overall results, obtained considering data for three EP speakers, show the applicability of this approach and are consistent with existing descriptions of EP sounds. Introduction Major advances in the phonetic sciences in recent decades have contributed to better descriptions of the variety of speech sounds in the world's languages and to the extension of new methodologies to less-studied languages and varieties, contributing to a better understanding of spoken language in general. Speech sounds are neither sequential nor isolated: sequences of consonants and vowels are produced in a temporally overlapping way, with coarticulatory differences in timing being language specific and varying according to syllable type (simplex, complex), syllable position (Marin and Pouplier [1] for timing in English, Cunha [2,3] for European Portuguese), and many other factors. Because of coarticulation with adjacent sounds, articulators that are not relevant (i.e., noncritical) for the production of a given sound may nevertheless be in motion. Background: Articulatory Phonology, Gestures and Critical Tract Variables Speech sounds are not clearly defined static target configurations; their production involves complex tempo-spatial trajectories of the vocal tract articulators responsible for them, from the start of the movement to the release and back (e.g., for the bilabial /p/, both lips move until the closure that produces the bilabial, and then the lips open again). This entire movement is a so-called articulatory gesture. Instead of phonological features, dynamic gestures are the units of speech in Articulatory Phonology [27,28] and define each particular sound. Gestures are thus, on the one hand, the physically tractable movements of articulators, which are highly variable depending, for example, on context and speaking rate, and, on the other hand, the invariant representations of motor commands for individual phones in the minds of speakers. In other words, they are both instructions to achieve the formation (and release) of a constriction at some place in the vocal tract (for example, an opening of the lips) and abstract phonological units with a distinctive function [27].
Since the vocal tract is contiguous, more articulators are activated simultaneously than the intended ones. Consequently, it is important to differentiate between the actively controlled (critical) articulators and the less activated or passive ones. For example, in the production of alveolar sounds such as /t/ or /l/, the tongue tip needs to move up into the alveolar region (critical articulator) while, simultaneously, the tongue back and tongue body also show some movement, since they are all connected. For laterals, for example, the tongue body may also have a secondary role in their production [29,30]. Some segments can be defined based on only one or two gestures: bilabials are defined based on the lip trajectories, whereas laterals, as mentioned before, are more complex and may include tongue tip and tongue body gestures. Gestures are spatio-temporal entities, structured with a duration and a cycle. The cycle begins with the movement's onset, continues with the movement toward the target (which may or may not be reached), then proceeds to the release, where the movement away from the constriction begins, and ends with the offset, where the articulators cease to be under the active control of the gesture. Individual gestures are combined to form segments, consonant clusters, syllables, and words. Gestures are specified by a set of tract variables and their constriction location and degree. Tract variables are related to the articulators and include: Lips (LIPS), Tongue Tip (TT), Tongue Body (TB), Velum (VEL) and Glottis (GLO). Constriction location specifies the place of the constriction in the vocal tract and can assume the values labial, dental, alveolar, postalveolar, palatal, velar, uvular and pharyngeal; constriction degree includes closed (for stops), critical (for fricatives), and narrow, mid and wide (approximants and vowels). For example, a possible specification for the alveolar stop /t/ in terms of gestures is Tongue Tip [constriction degree: closed, constriction location: alveolar] [31]. The tract variables involved in the critical gestures are considered critical tract variables, and the involved articulators the critical articulators. The articulatory phonology approach has been incorporated into a computational model by Haskins Laboratories researchers [32,33]. It is composed of three main processes, in sequence: (1) Linguistic Gestural Model, responsible for transforming the input into a gestural score (a set of discrete, concurrently active gestures); (2) Task Dynamic Model [32,33], which calculates the articulatory trajectories given the gestural score; and (3) Articulatory Synthesizer, capable of obtaining, based on the articulators' trajectories, the global vocal tract shape and, ultimately, the speech waveform. Related Work The following sections provide a summary of existing descriptions for European Portuguese sounds and overview previous work on using data-driven methods to determine critical articulators, deemed relevant to contextualize the work presented in this article. Gestural Descriptions of (European) Portuguese Essentially manual analyses of a limited set of MRI images and contours made possible the first description of European Portuguese adopting the principles of Articulatory Phonology [31,34].
In a very summarized form, the aspects deemed relevant as background for the present work are those reported in References [31,34]. Computational Approaches to Critical Gesture Determination As previously mentioned, several authors have proposed data-driven approaches to harness large amounts of articulatory data to advance our knowledge regarding speech production [21,22] and, in particular, critical articulators, for a wide range of contexts, such as emotional speech [17], and exploring different techniques, for example, time-frequency features [18]. In a notable example, Jackson and Singampalli [19] consider a large set of articulatory data from EMA to build statistical models for the movement of each articulator. This is performed by selecting data samples, at the midpoint of each phone, and computing statistics describing: (1) the whole articulator data (the grand statistics), used to build the models for each articulator; and (2) the data for each phone (phone statistics). The critical articulators, for each phone, are determined by analysing the distances between the grand and phone probability distributions. By considering a static tract configuration for each of the phones, the dynamic nature of the data is not explored. Nevertheless, by doing so, Jackson and Singampalli present a method that is quite useful in providing clear and interpretable insights regarding articulation and enabling a comparison with existing phonological descriptions. For European Portuguese (EP), the authors have been exploring approaches based on articulatory data extracted from midsagittal RT-MRI images of the vocal tract to automatically determine the critical articulators for each EP sound. While RT-MRI offers a large amount of data of the whole vocal tract, over time, the main goal, in a first stage, was to leave the dynamic aspects out and pursue a data-driven method that could provide easily interpretable results for an automated phonological description. This entailed privileging approaches with a specific consideration of the anatomical regions of interest, rather than methods dealing with the overall shape of the vocal tract. In a first exploratory work [23], the authors asserted that critical articulator identification could be performed by extending the applicability of the method proposed for EMA data by Jackson and Singampalli [19]. Those first results were obtained for RT-MRI at 14 Hz and, since the original method worked with 100 Hz EMA, the question remained regarding whether higher RT-MRI frame rates, along with a larger dataset, could have a positive impact on the outcomes. Since one frame is used as representative of the vocal tract configuration for each sound, a higher frame rate might enable capturing key moments of articulation, for example, alveolar contact for the /n/. To address these aspects, in Silva et al. [24], the authors explored 50 Hz RT-MRI data, showing both the applicability of the methods to the novel data and a noticeable improvement of the results. At this point, the representation of the vocal tract configurations was still being performed based on landmarks placed on the lips, tongue surface, and velum, to establish conditions similar to those of the method proposed for EMA. However, the tract data available from RT-MRI make it possible to consider different tract variables, moving beyond simple landmarks.
In this regard, the authors have explored tract variables aligned with the Task Dynamics framework [25], adopting the concept of constrictions (except for the velum), and showed that these provided an alternative that was not only more compact (fewer variables involved) but also provided interesting critical articulator results, with the benefit of having a more direct relation to existing Articulatory Phonology descriptions, supporting an easier comparison with the literature. Nevertheless, the authors identified a few points deserving immediate attention to further assess and improve the tested approach: (1) enlarge the amount of data considered per speaker; (2) consider additional speakers; and (3) explore a novel representation for the velum, to completely avoid landmarks. These aspects are addressed in the present work and described in what follows. Methods The determination of critical articulators is performed from articulatory data extracted from real-time magnetic resonance imaging (RT-MRI). The overall pipeline is presented in Figure 1 and its main aspects are detailed in what follows. Figure 1. Overall steps of the method to determine the critical articulators from real-time MRI (RT-MRI) images of the vocal tract. After MRI acquisition and audio annotation, the data is uploaded to our speech studies platform, under development [35], and its processing and analysis are carried out, resulting in a list of critical tract variables per phone. Refer to the text for additional details. Materials and Participants The corpus consisted of lexical words containing all oral [i, e, E, a, o, O, u, 5] and nasal vowels [5, e, ĩ, õ, ũ] in one- and two-syllable words. Oral and nasal diphthongs, as well as additional materials including alternations of nasal monophthongs and diphthongs as in 'som' ('sound') and 'são' ('they are') or 'cantaram' ('they sang') and 'cantarão' ('they will sing'), were recorded for further investigation of variability in the production of nasality. Due to the strong research question on nasality behind these recordings, the occurrence of single segments is strongly unbalanced. Unstressed oral and nasal vowels were added to the corpus after the third recording. All words were randomized and repeated in two prosodic conditions embedded in one of three carrier sentences, alternating the verb ('Diga'-'Say'; 'ouvi'-'I heard'; or 'leio'-'I read') as in 'Diga pote, diga pote baixinho' ('Say pot, Say pot gently'). The sentences were presented on a screen in randomized order with three repetitions. So far, this corpus has been recorded for sixteen native speakers (8 m, 8 f) of EP. The tokens were presented from a timed slide presentation with blocks of 13 stimuli each. Each stimulus could be seen for 3 s, and there was a pause of about 60 s after each block of 13 stimuli. The first three participants read 7 blocks, for a total of 91 stimuli, and the remaining nine participants read 9 blocks of 13 stimuli (a total of 117 tokens). The participants were previously familiarized with the corpus and the task, reading the corpus with 2 or 3 repetitions in a noise-reduced environment. During the RT-MRI experiment, they were lying down, in a comfortable position, and were instructed to read the required sentences projected in front of them. All volunteers provided informed written consent and filled in an MRI screening form in agreement with institutional rules prior to enrollment in the study.
They were compensated for their participation and none of them reported any known language, speech or hearing impairment. Image Acquisition and Corpus Real time MRI acquisition was performed at the Max Planck Institute for Biophysical Chemistry, Göttingen, Germany, using a 3T Siemens Magnetom Prisma Fit MRI System with high performance gradients (Max ampl = 80 mT/m; slew rate = 200 T/m/s). A standard 64-channel head coil was used with a mirror mounted on top of the coil. Real-time MRI measurements were based on a recently developed method, where highly undersampled radial FLASH acquisitions are combined with nonlinear inverse reconstruction (NLINV) providing images at high spatial and temporal resolutions [36]. Acquisitions were made at 50 fps, resulting in images as the ones presented in Figure 2. Speech was synchronously recorded using an optical microphone (Dual Channel-FOMRI, Optoacoustics, Or Yehuda, Israel), fixed on the head coil, with the protective pop-screen placed directly against the speaker's mouth. Figure 2 presents some illustrative examples of MRI images for different speakers and sounds and Figure 3 shows the image sequence corresponding to the articulation of /p/ as in [p5net5]. After the audio has been annotated all the data concerning a speaker (images, audio, and annotations) is uploaded to a speech studies platform in development by the authors [35] where the following steps take place. Vocal Tract Segmentation and Revision The RT-MRI sequences were processed considering the method presented in Reference [11] to extract the vocal tract profiles. Based on a small set of manually annotated images, typically one for each of the phones present in the corpus, to initialize the method, the RT-MRI image sequences are automatically segmented to identify the contour of the vocal tract. One aspect that was considered paramount, at the current stage of the research, was to ensure that the vocal tract data considered for the analysis had been segmented properly, that is, that all relevant structures had no segmentation errors. A few of these errors, for example, a wrongly segmented tongue tip, could disturb the tract variable data distributions and have unpredictable influences over the statistics affecting critical tract variable determination. Such effects could, then, hinder a correct assessment of the capabilities of the method. Therefore, all the segmentations considered were checked by human observers and, when required, revised to perform, for example, fine adjustment of the segmentation at the tongue tip, velum or lips. The revisions were performed by five observers, who revised the segmentations for different images, using the same revision tool, and care was taken so that each observer would only revise a subset of any sound/context to avoid observer bias effect as a factor potentially influencing the outcomes of the critical articulator analysis. Finally, one aspect that was observed for the revised segmentations was that the hard palate could be prone to slight variations of its shape due to a difficulty of the automatic method in establishing its precise location. This was mostly due to the fact that it is a region that is not imaged very clearly with MRI, which also made its manual revision difficult. 
While these differences were not large, overall, they could impact, for example, the determination of the location of the highest constriction of the tongue body, potentially causing differences of this variable among occurrences of the same sound/context (e.g., with location transitions between tongue back and tongue blade). Therefore, for all the computations presented, the hard palate for each sample was replaced by the mean hard palate across all considered contours, for each speaker. Tract Variables Computation Differently from the original application of the method to EMA [19] and its subsequent extension to RT-MRI data by our team [24], for this work the considered variables are not landmarks placed over the vocal tract but tract variables aligned with the Task Dynamics framework [27,37], that is, mostly based on the concept of constrictions defined by their location and degree (distance). After preliminary work by the authors [25], which provided interesting results and showed the applicability of the critical articulator method to this new set of variables, this work further extends this approach by also abandoning the landmark representation for the velum, identified as a limitation in our previous studies. Choice and Computation of Tract Variables Aligned with Articulatory Phonology, we considered the variables depicted in Figure 5. With this set of variables we move away from choosing fixed points over the tongue, as in previous work. Instead, maximal constrictions are determined between different tract segments (e.g., tongue tip and hard palate, identified based on the segmentation data [16]) as follows: • The tongue tip constriction (TTC) is determined as the minimal distance between the tongue tip region and the hard palate. A small segment of the tongue contour, in the neighborhood of the tongue tip, is selected and the distances from each point to the hard palate contour segment points are computed. Of those, the minimal distance is determined and the constriction distance (TTCd) and location (TTCl) are obtained; • The tongue body constriction (TBC) is determined as the minimal distance between the tongue body and the pharyngeal wall or hard palate (not including the velar region). The distance between all points of the tongue contour segment (minus those considered for the tongue tip neighborhood) and all the points in the pharyngeal and palatal segments is computed. The smallest distance obtained corresponds to the point of maximal tongue body constriction. The constriction distance (TBCd) and location (TBCl) are thus obtained. In the example presented in Figure 5, the smallest distance was found between a point located in the tongue back and a point in the pharyngeal wall; • The velar configuration (V) is determined by obtaining the constriction distance between the velum and the pharyngeal wall contour segments (velopharyngeal port, Vp) and between the velum and the tongue body (oral port, Vt); • The lips (LIPS) configuration is characterized by their aperture (LIPa), computed as the minimum distance between the contour segments of the upper and lower lips, and protrusion (LIPp), computed as the horizontal distance from the leftmost point of the lips to the reference point p_ref. While this does not provide just the lip protrusion distance (having rest as a reference), it is suitable for the intended analysis without having to determine minimum lip protrusion beforehand.
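As a brief illustration of how a constriction-based tract variable can be computed from two segmented contours (e.g., the tongue tip neighbourhood against the hard palate), the sketch below finds the pair of closest points, returns the minimal distance as the constriction degree, and measures a location angle from a reference point. The contour arrays, the angle convention and p_ref below are simplified assumptions for illustration, not the authors' exact implementation.

```python
import numpy as np

def constriction(segment_a, segment_b, p_ref):
    """Minimal sketch: constriction degree and location between two contours.

    segment_a, segment_b : (N, 2) arrays of (x, y) contour points.
    p_ref : reference point from which the location angle is measured
            (the angle convention here is an assumption).
    """
    # Pairwise distances between all points of the two segments.
    diff = segment_a[:, None, :] - segment_b[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    i, j = np.unravel_index(np.argmin(dist), dist.shape)
    degree = dist[i, j]                                   # constriction degree (e.g., TTCd)
    midpoint = 0.5 * (segment_a[i] + segment_b[j])
    v = midpoint - p_ref
    location = np.degrees(np.arctan2(v[1], v[0]))         # constriction location (e.g., TTCl)
    return degree, location, (segment_a[i], segment_b[j])

# Toy usage with synthetic contours (illustrative only).
tongue_tip = np.array([[1.0, 2.0], [1.2, 2.3], [1.4, 2.5]])
hard_palate = np.array([[1.0, 3.0], [1.5, 2.9], [2.0, 2.7]])
p_ref = np.array([3.0, 4.0])
print(constriction(tongue_tip, hard_palate, p_ref))
```

The same routine can serve TTC, TBC and the velar ports by changing which contour segments are passed in; the lip variables reduce to a single pairwise-distance or horizontal-offset computation.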
In previous work [25], the authors have shown that a change from the vocal tract representation based on landmarks (mimicking the work done for EMA [19]) to an approach more aligned with the Task Dynamics framework reduced the correlation between variable components (e.g., the x and y coordinates of a landmark) and among tract variables. This potentially entails that the data provided by each component and variable are, overall, more independent and might offer added insight regarding what is critical. One of the variables that remained as a landmark concerned the velum, represented by the x and y coordinates of a point placed at its back (see Reference [23] for details). To move towards an approach more consistent with Articulatory Phonology, a novel representation was adopted, describing the velum configuration based on the velopharyngeal and orovelar constriction distances. Determining Reference Point for Constriction Location The first step for the determination of the constriction location is the definition of a referential to measure the location angle (see Figure 5). For each speaker, the highest point of the hard palate is determined (p_HardPalate) and the point of intersection between a tangent passing through this point and the orientation of the pharyngeal wall is computed (p_pharynx). The reference point (p_ref) is then obtained as the intersection of a vertical line passing through p_HardPalate and a line passing through p_pharynx at a 45° angle. Data Selection The critical articulator determination requires that a representative frame is selected for each occurrence of each phone. Table 1 shows the phones considered for analysis taking into account the contents of the corpus. Since different sounds have a different progression over time, the frame considered to represent the vocal tract configuration is selected from the annotated interval using slightly different criteria, as explained in Table 1. For instance, for /p/, the frame with the minimum inter-lip distance is selected, while for oral vowels the middle frame is considered. Similarly to Silva et al. [24,25], and considering the dynamic nature of nasal vowels [8,34,38,39], we wanted to have some additional information to assess whether, for different time points, the determination of the critical articulators would highlight any relevant differences in behavior. Therefore, each nasal vowel was represented by three "pseudo-phones", focusing on the starting, middle and final frames of the annotated interval and named, respectively, [vowel]_start, [vowel]_mid and [vowel], as in: ã_start, ã_mid, and ã. Determination of Critical Articulators Critical articulator identification requires the grand statistics, characterizing the distribution, for each variable, over the whole data, and the phone statistics, representing the distribution of the variable, for each phone, considering the phone data selection. Table 2, to the right, summarizes the different statistics computed to initialize the method, adopting the notation of Jackson and Singampalli [19]. Critical articulator identification was performed either taking articulator properties (e.g., d, degree, and l, location, for the constrictions) independently (the 1D case), for example, TBd for the constriction degree at the tongue body, or combining them (the 2D case). Table 2. Summary of the computed statistics for each landmark and corresponding notation as in Reference [19].
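As a rough illustration of how these grand and phone statistics can drive the identification, the sketch below models each tract variable with univariate Gaussian statistics, ranks variables for a phone by the divergence between the phone and grand distributions, and applies the "highest threshold that still leaves every phone with at least one critical articulator" rule discussed below. The univariate-Gaussian model, the KL-divergence choice and all sample values are simplifying assumptions standing in for the statistics of Reference [19].

```python
import numpy as np

def gauss_kl(mu_p, sd_p, mu_g, sd_g):
    """KL divergence between two 1-D Gaussians (phone vs. grand distribution)."""
    return np.log(sd_g / sd_p) + (sd_p**2 + (mu_p - mu_g)**2) / (2 * sd_g**2) - 0.5

def rank_critical(phone_data, grand_data, threshold):
    """Rank variables for one phone; keep those whose divergence exceeds threshold.

    phone_data, grand_data : dicts mapping a variable name (e.g., 'TTCd') to a
    1-D array of samples. A simplified stand-in for the method in Reference [19].
    """
    scores = {}
    for name, x in phone_data.items():
        g = grand_data[name]
        scores[name] = gauss_kl(x.mean(), x.std() + 1e-9, g.mean(), g.std() + 1e-9)
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return [name for name, s in ranked if s > threshold]

def highest_safe_threshold(all_phone_data, grand_data, candidates):
    """Largest threshold for which every phone keeps at least one critical variable."""
    for th in sorted(candidates, reverse=True):
        if all(rank_critical(pd, grand_data, th) for pd in all_phone_data.values()):
            return th
    return min(candidates)

# Toy usage with synthetic samples (illustrative only).
rng = np.random.default_rng(1)
grand = {"TTCd": rng.normal(8, 3, 500), "LIPa": rng.normal(10, 4, 500)}
phones = {"t": {"TTCd": rng.normal(1, 0.5, 40), "LIPa": rng.normal(10, 4, 40)},
          "a": {"TTCd": rng.normal(9, 2, 40), "LIPa": rng.normal(9, 3, 40)}}
th = highest_safe_threshold(phones, grand, candidates=np.linspace(0.1, 5.0, 25))
for ph, data in phones.items():
    print(ph, rank_critical(data, grand, th))
```

The per-speaker normalization mentioned below would simply rescale each variable by its observed variation range before pooling speakers into the "normalized" dataset.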
The 1D correlation matrices for the articulators (e.g., considering TBl and TTd), given the size of our data set, were computed considering correntropy, as proposed in Rao et al. [40]. Bivariate correlations (i.e., taking both properties of each articulator together) were computed through canonical correlation analysis [19,41]. For the grand correlation matrices, adopting the criteria proposed in Reference [19], only statistically significant (α = 0.05) correlation values above 0.2 were kept, reducing the remaining ones to zero. The computed data statistics were used to initialize the critical articulator analysis method, and 1D and 2D analyses were performed, for each speaker, returning a list of critical articulators per phone. Additionally, we wanted to assess how the method would work by gathering the data for the three speakers to build a "normalized" speaker. This would enable the consideration of a larger dataset, potentially providing a clearer picture of the overall trends for the critical articulators of an EP speaker, disentangled from specific speaker characteristics. To that effect, we normalized the articulator data, for each speaker, based on the variation ranges, for each variable component, computed over all selected data samples, and considered this gathered data as a new speaker following a similar analysis methodology. This is, naturally, a very simple normalization method, but considering the overall results obtained in previous work [25], it was deemed appropriate for this first assessment including three speakers. To determine the list of critical articulators, the analysis method requires establishing a stopping threshold, Θ_C. If the threshold is low, a large number of critical articulators will potentially appear for each sound. Higher thresholds will potentially result in shorter (possibly empty) lists being determined for some of the sounds. Changing the threshold does not affect all the sounds in the same way; its impact relates to the amount of data available and to its diversity, i.e., a certain threshold may yield an empty list for some sounds and a long list for others (i.e., one including articulators of lesser importance). The founding work of Jackson and Singampalli [26], serving as grounds for what we present here, has described this aspect and, as we have also observed in previous works, this threshold varies among speakers. As in our previous work, we defined a stopping threshold, Θ_C, for each of the speakers (including the normalized speaker), as the highest possible value that would ensure that each phone had, at least, one critical articulator. This resulted in the inclusion of less important articulators for some of the phones, but avoided phones with a smaller amount of data having no results. Results The conducted analysis has two main outcomes: the correlation between tract variables and the determination of their criticality for the production of different sounds. Additionally, each of these aspects can be analysed from the 1D perspective, where each dimension of the variables is taken independently (for example, lip protrusion and aperture are considered two variables), or the 2D perspective, with each variable as a bidimensional entity. In relation to previous work, particularly Silva et al.
[25], these results are obtained for a larger number of data samples, consider a different definition of the tract variable for the velum, and include data for a new speaker, also influencing the normalized speaker data. Figure 6 shows the 1D correlation matrix for the different tract variable components, for the three speakers and the normalized speaker (All). Overall, two sets of mildly correlated variable dimensions appear, although differently for the considered speakers: LIP protrusion and aperture; and TT constriction degree and location. As expected, changing how the velar tract variable is represented, moving from the x and y coordinates of a landmark (as tested by the authors in Reference [25]) to the velopharyngeal and orovelar passages, has made the correlation between them almost disappear. Tract Variable Correlation The consideration of additional data, including more contexts for each phone, had an effect on the results for speaker 8460, when compared to what was previously observed [25], since the mild correlation between the TB and the LIPS and TT has disappeared. Interestingly, speaker 8545 (along with the normalized speaker) does not show any relevant correlation between any of the variable dimensions. Table 3 shows the canonical correlation [19,41] computed among the different tract variables. Overall, this table provides information about how the different tract variables correlate and further confirms what was observed for the 1D analysis. While the observed correlations are small/mild, a corpus with greater phonetic richness would lower them further. For instance, if the corpus included the lateral /l/, this would probably further emphasize the independence of the TT from the TB. Table 4 shows the 1D analysis of critical articulators, for each phone, speaker, and the normalized speaker. Please note that, for the sake of space economy, the name of the tract variables is simplified by removing the T and C; for example, TBCl becomes Bl. Since each variable dimension is treated independently, it provides a finer grasp of which particular aspect of the variable is most critical, for example, whether it is the constriction degree or its location. For the sake of space, for those phones with a list of critical articulators longer than four, only the first four are presented. Considering that the order of the articulators is important, the remaining elements of the list were judged of lesser importance and unlikely to provide further elements for discussion. Critical Articulators Regarding the determination of critical tract variables, Table 5 presents the results obtained for each speaker (columns 8458, 8460 and 8545) and for the normalized data (column ALL). As with the 1D case, the analysis was performed considering a conservative stopping threshold, Θ_C, to avoid the appearance of phones without, at least, one critical articulator. Note, nevertheless, that the order of the articulators is meaningful, starting from the one more strongly detected as critical. The rightmost column shows the characterization of EP sounds based on the principles of Articulatory Phonology as reported by Oliveira [7], to be considered as a reference for the analysis of the obtained results. Table 5. Critical articulators for the different phones and speakers. Each tract variable is considered as an articulator (2D analysis). The order of the different articulators, for each phone, reflects their importance.
The two rightmost columns present the determined critical articulators gathering the normalized data for all speakers (spk All) and a characterization of EP sounds based on the principles of Articulatory Phonology as found in Oliveira [7]. For the sake of space economy, in the tract variable listing the T and C were omitted; for example, TBC became B. Discussion When comparing our preliminary work [25] and the results presented here, several aspects are worth noting. First, a novel speaker was considered (8545, in the third column of Tables 4 and 5) and the obtained results are consistent with those for the previously analysed speakers; second, the larger number of data samples considered for speaker 8460, entailing a larger number of samples per phone and including more phonetic contexts, made some of the results more consistent with those of speaker 8458, as previously hypothesized [25]; and third, the consideration of one additional speaker for the normalized speaker did not disrupt the overall previous findings for the critical variable (articulator) analysis. Concerning the 1D correlation among the different variable dimensions (see Figure 6), the variables are, overall, more decorrelated than in previous approaches considering landmarks over the vocal tract (e.g., see Silva et al. [24]), and this has been further improved by the novel representation for the velar data considered in this work. The larger amount of data, in comparison with our first test of the tract variables aligned with Articulatory Phonology [25], resulted in an even smaller number of correlations. Speaker 8545, along with the normalized speaker, does not show any correlation worth noting. The mild/weak correlations observed for the lips (protrusion vs. aperture) and tongue tip constriction (location vs. degree) are probably due to a bias introduced by the characteristics of the considered corpus. Regarding the tongue tip, mild correlations between TTCl and TTCd may appear due to the fact that the strongest constrictions typically happen at the highest location angle. The correlations observed in our previous work (refer to Figure 7), for speaker 8460, between the lips and the tongue body and tongue tip have disappeared with the larger number of data samples considered, as hypothesized [25]. Individual Tract Variable Components The analysis of critical articulators treating each tract variable dimension as an independent variable is much more prone to being affected by the amount of data samples considered [19]. Therefore, while a few interesting results can be observed for some phones and speakers, some notable trends are not phonologically meaningful, such as the tongue tip constriction location (Tl) appearing prominently for the nasal vowels. Because the normalized speaker considers more data, some improvements are expected here when compared to the individual speakers and, indeed, it shows several promising results. Therefore, our discussion will mostly concern the normalized data. At first glance, the tongue body (Bl and Bd) appears as critical in a prominent position for many of the vowels, as expected. The lip aperture (La) appears as critical for all bilabial segments (/p/, /b/, and /m/). The tongue tip constriction degree (Td) appears for the alveolars /n/, /t/ (with Vt) and /d/, the latter also with Tl, which seems to indicate tighter conditions on tongue tip positioning for /d/.
The velopharyngeal passage (Vp) appears as critical for the velar sounds /k/ (with Bd) and /g/ (with Bl, Vt, and Bd), probably because of some readjustments in the soft palate region preceding the velum. It also appears for the labiodental /v/ (with La) and for M, which makes sense, since the latter concerns the nasal tail. Concerning the lips, it is solely lip aperture (La) that appears as critical for /u/ and its nasal congener, and lip protrusion (Lp) appears across several of the vowels. This might be a similar effect to what we have previously observed for the velum: an articulator may appear as critical in those cases in which it is in a more fixed position during the articulation of a sound. The velum, for instance, tends to appear more prominent for oral vowels since, at the middle of their production, it is closed, while, at the end of a nasal vowel, it can be open to different extents. Therefore, Lp may appear as critical not because the sound entails protrusion, but because the amount of observed protrusion throughout the different occurrences does not vary much. Given the restricted number of speakers and occurrences, one aspect that seems interesting, and should foster further analysis, is the appearance of the orovelar (Vt), and not the velopharyngeal (Vp), passage as critical for nasal vowels. This does not diminish the role of the velum opening, but points out that the extent of the orovelar passage is more stable across occurrences and, hence, more critical. Additionally, it is also relevant to note that for /ũ/ and its oral congener, the tongue body constriction does not appear as critical, as happens, for example, with /õ/. Since the velopharyngeal passage and the tongue body constriction do not appear as critical (only the orovelar passage does), this may hint that any variation of velar aperture across occurrences is compensated by tongue adjustments to keep the oral passage [42,43]. Also of note is the absence of Vt for the more fronted vowel /ĩ/ and its oral congener. Given the fronted position of the tongue, Vt is large and more variable, since its variation is not limited by velar opening as much as for the back vowels. One example that shows a different behavior between the tongue and velum is /g/, where both Vp and Vt are determined as critical and coincide with Bl and Bd, hinting that both the velum and tongue body are in a very fixed position across the occurrences of /g/. A similar result can also notably be observed for /k/. Overall, Tl is still widely present (as with the individual speakers), mostly not agreeing with current phonological descriptions for EP, and should motivate further analysis considering more data (speakers and phonetic contexts) and different alternatives for the computation of the tongue tip constriction. Critical Tract Variables Overall, and considering that the corpus is prone to strong coarticulation effects, the obtained results strongly follow our preliminary results presented in Silva et al. [25] and are mostly in accordance with previous descriptions considering Articulatory Phonology [7]. The TB is determined as the most critical articulator for most vowels, in accordance with the descriptions available in the literature. The appearance of V as a critical articulator for some oral vowels, earlier than for nasals, is aligned with previous outcomes of the method [19,23,24].
This is probably due to a more stable position of V at the middle of oral vowels (the selected frame) than at the different stages selected for the nasal vowels, for which it appears mostly in the fourth place, possibly due to the adopted conservative stopping criterion, which avoids phones without any reported critical articulator. It is also relevant to note that, for instance, if some of the nasal vowels are preceded by a nasal consonant, this affects the velum position during the initial phase of the vowel, which will show an incomplete movement towards closure [44]. This might explain why V does not appear as critical in the first frame (start) of some nasal vowels (typically referred to as the oral stage [45]), since the velum is not in a stable position. The lips correctly appear with some prominence for the back rounded vowels /u/ and /o/ and their nasal congeners, but the appearance of this articulator for unrounded low vowels, probably due to the limitations of the corpus, does not allow any conclusion. Regarding consonants, for /d/, /t/, /s/ and /r/, as expected, T is identified as the most critical articulator, although, for /s/, it disappears in the normalized speaker. For bilabials, /p/, /b/ and /m/ correctly present L as the most critical articulator, and this is also observed for /v/, along with the expected prominence of V, except for speaker 8460. For /m/, V also appears, along with L, as expected. For /p/, the tongue tip appears as critical for two of the speakers, probably due to coarticulatory reasons, but disappears in the normalized speaker, which exhibits L and V, as expected. For /k/, V and TB are identified as the most critical articulators. Finally, for M, which denotes the nasal tail, it makes sense to have V as critical. The appearance of L in the normalized speaker is unexpected, since it does not appear for any of the individual speakers. By gathering the normalized data for the three speakers in speaker ALL, the method provided lists of critical articulators that are, overall, more succinct, cleaner, and closer to the expected outcomes, when compared to the literature [7], even considering a simple normalization method. This seems to point out that the amount of considered data has a relevant impact on the outcomes. While this is to be expected, the amount of data seems to have a stronger effect than in previous approaches using more variables [24], probably due to the smaller number of dimensions representing the configuration for each phone. These good results, obtained with a very simple normalization approach gathering the data for three speakers, may hint that the chosen tract variables are not strongly prone to the influence of articulator shape differences among speakers, as was the case when we considered landmarks over the tongue. Instead, they depict the outcomes of the relation between parts of the vocal tract, for example, the tongue and hard palate (constriction). Nevertheless, some cases where the normalized speaker failed to follow the trend observed for the individual speakers, as alluded to above, for example, for M, hint at the need to further improve the data normalization method. Conclusions Continuing the quest for data-driven methods to enable the determination of critical gestures for EP, this paper adopts a vocal tract configuration description aligned with Articulatory Phonology and, considering tract data obtained from midsagittal RT-MRI, presents the analysis of tract variable criticality for EP sounds.
Overall, taking into consideration that the corpus was not specifically designed for the analysis of articulator criticality, since, for instance, some EP sounds and contexts are not present, the obtained results are already very interesting. Following the results presented here and the experience gathered throughout, several aspects deserve further attention. First of all, during the preparation of the methods considered in this work for the computation of the tract variables, some informal experiments revealed that slight variations, for example, in how lip protrusion is computed (such as the leftmost point of both lips versus the middle point of maximum lip constriction), can result in slight variations in how the lips appear for some sounds and speakers. This would entail small improvements for some phones and/or speakers and worse results for others. While we kept the method considering the leftmost point of both lips, to enable a direct comparison with previous work [25], and since it already presents good results, this aspect is worth a more systematic exploration. As already mentioned, another aspect that requires further research is the method used to perform speaker data normalization so it can be considered for the normalized speaker. Since some critical articulator results aligned with known descriptions of EP sounds and observed for all speakers disappeared in the normalized speaker, it is paramount to gather further understanding regarding the causes and improve the method accordingly. One aspect that can improve the results and how they are interpreted is to have a full report regarding the considered contexts, for each sound and speaker. This would enable a clearer idea about speaker idiosyncrasies versus those aspects influenced by different amounts of data for a particular context, for example, among speakers. Part of this effort was already put in place for the work presented here, for example, to obtain all available contexts for each phone, but a more automated analysis and reporting of these aspects is needed. Finally, the method adopted here pertains to a static analysis of critical articulators, i.e., it is based on the selection of a representative frame for each sound (e.g., the middle frame for oral vowels). For the nasal vowels, we split them into three key stages to understand whether any notable differences arise among them, but it seems relevant to explore this further, for all sounds, to achieve an analysis of criticality over time. In this context, the audio signal, which so far has only been considered for annotating the data, can be an important asset to explore a multimodal approach to these matters [46]. Author Contributions: Conceptualization, S.S. and A.T.; methodology, S.S. and A.T.; software, S.S. and N.A.; validation, S.S., N.A. and C.C.; formal analysis, S.S.; investigation, S.S., C.C., N.A., A.T., J.F. and A.J.; resources, A.T., C.C., S.S., J.F. and A.J.; data curation, S.S., C.C. and N.A.; writing-original draft preparation, S.S., A.T. and C.C.; writing-review and editing, A.T., S.S. and C.C.; visualization, S.S. and N.A.; supervision, A.T. and S.S.; project administration, S.S. and A.T.; funding acquisition, A.T., S.S. and C.C. All authors have read and agreed to the published version of the manuscript.
9,536.8
2020-10-21T00:00:00.000
[ "Physics" ]
Characterization of Traveling Waves Solutions to an Heterogeneous Diffusion Coupled System with Weak Advection: The aim of this work is to characterize Traveling Wave (TW) solutions for a coupled system with KPP-Fisher nonlinearity and weak advection. The heterogeneous diffusion introduces certain instabilities in the TW heteroclinic connections, which are explored. In addition, a weak advection reflects the existence of a critical combined TW speed for which solutions are purely monotone. This study follows purely analytical techniques together with numerical exercises used to validate or extend the analytical principles. The main concepts treated are related to positivity conditions, the TW propagation speed and homotopy representations to characterize the TW asymptotic behaviour. Problem Description and Objectives Typically, models involving spatial diffusion have been derived from simple physical principles. This is the case of Fick's law, which establishes a relation between the flux of a variable in a medium and the gradient of its concentration. The application of this law leads to the classical Gaussian, order-two diffusion. Nonetheless, in applied areas such as biology, optics, structures or materials, the Gaussian diffusion has been extended to account for new ways of modeling, introducing high-order diffusion operators. Such operators are currently the subject of intensive research; as an example, Bonheure and Hamel have shown De Giorgi's conjecture for a fourth-order Allen-Cahn equation, together with bounds on classical solutions [1]. The Allen-Cahn elliptic equation is used to model stationary bi-stable systems in physics, chemistry or biology. In some practical cases, the fourth-order operators emerge from already known order-two diffusion. As an example, the classical Fisher-Kolmogorov equation (1) was proposed to study the interaction of different populations in a biological environment. An onset of instabilities has been observed near degenerate points of expression (1) ([2] and references listed there), which led to the proposal of the Extended Fisher-Kolmogorov equation to model the behaviour of bi-stable systems. These systems can be defined as those with only two uniform states and a solution "traveling" from one stable solution to the other, forming either a heteroclinic or a homoclinic orbit [3]. In [4,5], Peletier and Troy, on the one hand, and Bonheure, on the other, showed the existence of oscillatory spatial patterns for the Extended Fisher-Kolmogorov equation. Additionally, they exhibited examples of oscillating heteroclinic orbits (also called kinks) and homoclinic orbits (pulses) in the spatial domain. The instabilities were found to be permanent oscillations, suggesting that there may be evolution flows hidden by the regularity of the second-order diffusion. Therefore, the original Fisher-Kolmogorov equation was perturbed with a fourth-order spatial derivative, leading to the Extended Fisher-Kolmogorov equation, where Δ² = ∂⁴/∂x⁴. In the classical sense, the Extended Fisher-Kolmogorov equation requires solutions to have continuous derivatives up to the fourth order. One can think, preliminarily, that oscillating functions (such as sine, cosine or a combination of both) may be appropriate candidates for solutions. Peletier and Troy showed the existence of oscillating solutions [5].
In addition, making use of a development in the exponential bundle of solutions, Rottschäfer and Doelman showed the nature of such oscillations [2]. A previous work [6] developed a set of analyses about the existence of minimal heteroclinic orbits for a class of fourth-order ODE systems (not necessarily cooperative) with variational structure. In the present analysis, we develop heteroclinic orbits for a cooperative system within PDE theory, making use of analytical and numerical evidence to account for solution profiles in the Traveling Waves domain. In addition, the analysis focuses on the construction of exponential bundles of solutions that are represented through homotopy graphs. To illustrate the relevance of the exponential bundles of solutions, recently, Dang [7] has provided a general method on the complex plane to analyze exponential solutions (as well as rational and elliptic ones) for the (2 + 1)-dimensional and (3 + 1)-dimensional Boiti-Leon-Manna-Pempinelli equations and the (2 + 1)-dimensional Kundu-Mukherjee-Naskar equation. In addition, in [8] the author studies traveling wave solutions to the non-local Fisher-KPP equation, considering only the kernels for which the spatially uniform steady state u = 1 is stable. Further, the search for wave propagation in a damped wave equation has been explored in [9] for a fractional Laplacian with a nonlinear source. The existence of solutions is assessed based on Galerkin approximations combined with potential well theory to show the decay behaviour of solutions. Throughout the present analysis, such decay behaviour appears within the Traveling Wave solutions and exponential bundles. In addition, the intention is to search for the most appropriate TW solution with positivity and homogeneous convergence towards stationary solutions. In this sense, some previous works shall be mentioned. In [10], the authors develop multiscale methods and asymptotic analysis to understand the homogenization in domains with heterogeneous strips. With the same intention, an analysis in [11] aims to characterize homogeneous processes in heterogeneous reaction-diffusion environments. In [12], compactness criteria are employed to characterize the homogenization of a diffusion-convection equation with divergence-free velocities. Further, some applications of homogenization techniques in heterogeneous porous media to other disciplines can be consulted in [13,14], which illustrate the relevance of the topic. Fourth-order operator equations have been assessed by Lyapunov stability approaches [15], in which the existence of bifurcation branches for even, periodic solutions in both the Swift-Hohenberg and extended Fisher-Kolmogorov equations has been considered. In the present analysis, we use homotopy analysis instead of pure Lyapunov methods. The high-order operator induces a set of instabilities in the proximity of the stationary solutions. Recently, Díaz and Naranjo [16] have shown the oscillating behaviour of self-similar solutions and have characterized regions of positivity for a class of high-order cooperative systems with no advection. This work permits introducing insights into the instabilities shown by spatially inhomogeneous structures when they are modeled by diffusion (see [17] and references therein for further details). In particular, the intention is to characterize the propagation features of the heteroclinic orbits connecting the two spatially homogeneous solutions anticipated by the cooperative system formulation.
Note that cooperativeness shall be understood as the synergistic collaboration between species to prosper and grow in a territory. Our observations will be made in the traveling wave domain and, mainly, in the traveling wave fronts and tips, where the transition involving exponential bundles of solutions happens. The set of equations can be summarized as in problem Q (3), where ε ∈ R is sufficiently small for our purposes, together with the corresponding initial conditions. The minus sign before the fourth-order spatial derivative is set to account for a regular, asymptotically stable system. As discussed, such degenerate diffusion aims to introduce oscillatory patterns close to the equilibrium, so as to model a center-manifold (in the sense of oscillatory) behaviour. The terms v and u in the u-equation and v-equation forcing terms, respectively, account for the coupled cooperation between species. The advection is introduced to account for a certain preferred direction in space, for instance, a direction of food and/or shelter. It is to be noted that the terms u(a − u) and v(b − v) were dealt with previously, for a single equation with order-two diffusion, by Kolmogorov, Petrovskii and Piskunov, leading to the classical, so-called KPP problem of order two [18]. The KPP term is typical in biological systems (also called the Allee effect) to model birth, growth and death in species. This analysis intends to go beyond existence and uniqueness, providing some solution profiles obtained by a combination of numerical and analytical methods. Furthermore, this study provides a stability analysis in the Traveling Wave domain via homotopy representations. Existence and Traveling Waves Structure The TW formulation for problem Q (3) consists in operating in the TW variable y and the profiles (f, g), given by f(y) = u(x, t) and g(y) = v(x, t) with y = x − λ_0 t [19], where λ_0 is the propagation speed. The system (Equation (3)) then reads as Equation (6). Call λ = λ_0 + ε. The TW formulation is subject to the pseudo-boundary conditions (7). First, and aiming to understand the basic features of the TW profiles, we enunciate Lemma 1: Lemma 1. The traveling wave moves from y → −∞ to y → ∞. In other words, the wave speed λ is positive. Proof. Multiplying the first equation in Equation (6) by f, the second one by g, and integrating by parts (note that the calculations are presented only for the first equation), we obtain the following. Each of the involved integrals needs to be assessed with certain conditions at −∞ and ∞ to be specified. The first integral is assessed between −∞ and +∞, where we admit appropriate approximations for the profiles at infinity; the remaining integrals are assessed in the same manner. In the assessment of the last of these integrals, we are interested in determining the sign rather than a precise value. The cooperative state makes the solutions evolve closely together upon selection of an appropriate value of the TW-speed (to this end, refer to the analysis shown afterwards in Figures 1 to 5). Then, assume that, close to the equilibrium conditions at −∞ and ∞, the following holds: f g ∼ f g. Returning to Equation (15), we finally repeat the same integration by parts for the last of the integrals involved; note that the integral on the right has already been assessed in Equation (14). The compilation of the assessed integrals then yields an expression from which, considering a > 0, it follows immediately that λ > 0. The same process can be followed for the second equation of Equation (6); moreover, considering b > 0, λ > 0 as well.
To further characterize the TW existence, the system in Equation (6) is converted into a first-order system by the standard change of variables, which can be expressed as in Equation (23). Note that the partial derivatives with respect to f_i and g_i, for i = 1, 2, 3, 4, of each component of the right-hand vectorial function in Equation (23) are continuous, which is needed to ensure the Lipschitz condition. In addition, and to support the global existence of TW profiles f, g, the cooperative system in Equation (6) is solved with a numerical algorithm. One of the key questions to answer is whether there exists a minimal TW speed, λ, such that solutions are stable in the proximity of the stationary solutions. The existence of a minimal speed, for which non-oscillating behaviour is obtained, is common to the KPP order-two problems [18]. Nonetheless, for higher-order systems, it is not possible to ensure the existence of a minimal speed with a monotone decay at infinity, due to the existence of oscillating behaviour. Following the KPP order-two philosophy [18], this minimal speed is the critical speed at which any oscillation in the solution vanishes. Nonetheless, in this case, as we are dealing with fourth-order operators, this is not guaranteed, which is a notable difference compared to the KPP models involving a second-order parabolic operator. Thus, if a minimal speed is not guaranteed, we can search for another property that the TW profiles share with the KPP order-two problems. In particular, when the TW moves at the minimal-critical speed, the solution is positive everywhere for all y ∈ R. Our target can be translated into finding a suitable value of the TW-speed for which the first minimum in y > 0 is positive. Nonetheless, we cannot ensure the positivity of the solution for all y ∈ R, as the natural instabilities of the high-order operator impede the possibility of a maximum principle. The estimation of a TW-speed at which the first minimum in y > 0 is positive in Equation (6) is done via a numerical algorithm. The numerical analysis has been done over a sufficiently large y interval [−1000, 1000] to avoid the influence of the pseudo-boundary conditions (Equation (7)) in the integration domain. The relative and absolute errors for each iteration have been set to 10⁻⁶, and the number of nodes varies from 10⁴ to 10⁵. Additionally, the numerical results provide evidence related to the global structure of both f and g for different values of the TW speed λ. First, the TW-speed λ is assumed to be equal for both solutions f and g, and the condition a = b is considered. Prior to any formal proof, Figures 1 and 2 suggest that the oscillatory character of the TW decreases for increasing values of λ. This property is shared with the KPP-2 problems [18]. Conjecture 1. Consider a = b in the set of Equation (6) and assume that the TW-speed is common to both solutions f and g. Let the set M represent the location of the TW front along the y axis, and let m be a set whose elements locate the minimum points of the solution f(y) beyond the TW-front (i.e., in the TW-tail). Under these conditions, there exists a value of λ for which f(min(m)) > 0. This value of λ, represented by λ_{f(min(m))>0}, has been sharply estimated to be λ_{f(min(m))>0} = 2.394. Note that the conjecture is postulated based on the numerical evidence presented in Figure 3.
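To illustrate the numerical procedure behind Conjecture 1, the sketch below shows the standard reduction of the two fourth-order TW equations to an eight-dimensional first-order system and a check of the sign of the first minimum beyond the front. Since the exact right-hand sides of Equation (6) are not reproduced in this excerpt, they are passed in as user-supplied callables, and the profile used in the final check is a synthetic damped oscillation rather than a computed TW.

```python
import numpy as np

def to_first_order(rhs4_f, rhs4_g):
    """Standard change of variables: write the two fourth-order TW equations as
    an eight-dimensional first-order system z' = S(y, z).

    rhs4_f and rhs4_g are callables returning f'''' and g'''' from the state;
    the exact right-hand sides of Equation (6) are not reproduced here, so they
    must be supplied by the caller.
    """
    def system(y, z):
        f, f1, f2, f3, g, g1, g2, g3 = z
        return [f1, f2, f3, rhs4_f(f, f1, f2, f3, g),
                g1, g2, g3, rhs4_g(g, g1, g2, g3, f)]
    return system

def first_tail_minimum(y, f, front_loc):
    """Value of f at its first local minimum beyond the TW front, i.e. the
    quantity f(min(m)) whose positivity defines lambda_{f(min(m))>0}."""
    tail = f[y > front_loc]
    for k in range(1, len(tail) - 1):
        if tail[k] < tail[k - 1] and tail[k] <= tail[k + 1]:
            return tail[k]
    return np.nan  # no interior minimum in the examined window

# Evaluate the reduced vector field once with placeholder right-hand sides.
system = to_first_order(lambda f, f1, f2, f3, g: -2.394 * f1 - f * (1 - f) - g,   # hypothetical RHS
                        lambda g, g1, g2, g3, f: -2.394 * g1 - g * (1 - g) - f)   # hypothetical RHS
print(system(0.0, [1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0]))

# Check the sign of the first tail minimum on a synthetic, damped-oscillation profile.
y = np.linspace(0.0, 50.0, 2001)
f = 0.2 + 0.1 * np.exp(-0.3 * y) * np.cos(1.5 * y)   # toy profile, not a computed TW
print(first_tail_minimum(y, f, front_loc=2.0))        # positive here: first minimum above zero
```

In practice, one would sweep or bisect λ, recompute the profile with a boundary-value or stiff solver over a large window, and retain the smallest λ for which the returned value is positive.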
Conjecture 1 is particularly relevant and expresses some analogy with the minimal TW-speed (λ_min) typical of the TW associated with the KPP problems of second order [18]. In the high-order case, we cannot directly consider the minimal TW-speed; nonetheless, we have shown the existence of an equivalent λ_{f(min(m))>0} in the cooperative system. Indeed, the most remarkable difference between the KPP-2 (KPP order two) TW and the cooperative system with a high-order operator relies on the fact that we replace the concept of finding a λ_min by that of finding a λ_{f(min(m))>0}. For a KPP-2 TW moving at λ_min, the profile does not oscillate, as the second-order profile changes from the sub-critical solution to the critical one [18]. In the fourth-order cooperative system, we use the concept of λ_{f(min(m))>0}, for which we know that oscillatory behavior is observed in the TW-tail when y >> 1 (see Figure 3). This behavior is common to all high-order parabolic operators. Thus, we have found a λ_{f(min(m))>0} for which, in an appropriate inner region (inner compared to y >> 1), it is possible to express the high-order TW profile in a manner similar to the KPP-2 problem. Note that the results obtained up to now consider that both TW profiles f and g move with the same speed λ. We now consider the possibility of different TW-speeds, resulting in a cooperative system with two speeds, λ_1 and λ_2. We keep the philosophy of finding suitable TW speeds (in this case different for f and g) for which the conditions f(min(m)) > 0 and g(min(m)) > 0 hold. The property of finding λ_{f(min(m))>0} and λ_{g(min(m))>0} for each TW profile (f, g) is now more subtle. We start by assuming that λ_1 = 2.394 and try to answer the following question: is there an interval in λ_2 for which there exist λ_{1,f(min(m))>0} and λ_{2,g(min(m))>0}? To answer this question, the cooperative system has been numerically modeled. The following proposition, in the form of a conjecture, compiles the numerical evidence. Admit the following discussion to support the conjecture's enunciation. Consider that a = b in the set of Equation (6). The proof of this proposition is given in Figure 4 (left). It is convenient to highlight that there exist TW profiles for other λ_2 > λ_{2,g(min(m))>0} = 2.437 and λ_1 > λ_{1,f(min(m))>0} = 2.394 (see Figure 4 (right) and Figure 5 as examples); nonetheless, these profiles are oscillatory from the first minimum onward. Now, the intention is to explore the effect of the advection term on the positivity regions in the TW characterization. To this end, admit that ε ∼ λ_0, i.e., the advection is considered to interact significantly with the TW propagation. As a consequence, consider λ_1 = λ_{0,1} + ε and λ_2 = λ_{0,2} + ε, while keeping positivity of the first minimum as described in Figure 4. Traveling Waves Homotopy Discussion In the asymptotic case with y → ∞, the cooperative system (Equation (6)) can be linearized, leading to Equation (28). First, to simplify the operations involved and without loss of generality, we consider a = b = 1. The linearization exercise permits representing the system (Equation (28)) by making use of the autonomous matrix A of the first-order system (Equation (29)), or an equivalent formulation. The parametric analysis of the matrix A is very complex, as it involves four parameters over an eighth-order matrix. Thus, we will consider some specific values for the parameters involved, so that we account for a sufficient set of evidence to determine the asymptotic behaviour as y → ∞.
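Tracing the eigenvalues of A as the TW speed varies is what the homotopy graphs display, and it can be sketched numerically by tracking the roots of the characteristic polynomial for a sweep of λ. The coefficient function below is a placeholder: the paper's characteristic polynomial (e.g., Equation (33)) is not reproduced in this excerpt, so the coefficients shown are illustrative only.

```python
import numpy as np

def char_coeffs(lam, a=1.0, b=1.0):
    """Coefficients (highest degree first) of an eighth-degree characteristic
    polynomial in mu, parameterised by the TW speed lam. These coefficients are
    placeholders standing in for the actual polynomial of the matrix A."""
    return [1.0, 0.0, 0.0, 0.0, 2.0 * lam, 0.0, 0.0, 0.0, -(a * b - 1.0)]

def homotopy_roots(lams, a=1.0, b=1.0):
    """Trace the eight roots as lam varies: the information shown in the
    homotopy graphs (e.g., Figures 6-8)."""
    return {lam: np.sort_complex(np.roots(char_coeffs(lam, a, b))) for lam in lams}

for lam, roots in homotopy_roots([0.0, 2.0, 60.0, 1000.0]).items():
    n_stable = int(sum(r.real < 0 for r in roots))
    print(f"lam = {lam:7.1f}: {n_stable} roots with Re < 0")
```

Equivalently, one could assemble the 8x8 companion (autonomous) matrix for each λ and call np.linalg.eigvals; either route yields the eigenvalue trajectories that the homotopy diagrams summarize.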
Additionally, the fact of having linearized the system, resulting in the matrix A, permits us to proceed via homotopy graphs to further determine the behaviour of the exponential bundles associated with the eigenvalues of A. For this case, the characteristic polynomial of A for different values of λ reads as For a generic λ, the characteristic polynomial of the matrix A reads We now represent the polynomial roots for different values of λ to check the homotopy evolution starting from λ = 0. The results are summarized in Lemma 2. Lemma 2. (A) For any λ > 0, the linearized system (Equation (28)) (obtained in the limit y → ∞) with a = b = 1 and λ_1 = λ_2 = λ presents a 6-D stable family of solutions (corresponding to two pairs of complex conjugates, one constant and one real negative eigenvalue) and a 2-D unstable family of solutions (corresponding to two different real solutions). (B) The eigenvalues tend to accumulate into four different clusters, with increasing distance among them, as λ → ∞. One cluster is formed by the two eigenvalues with Re < 0, Im > 0, another cluster by those with Re < 0, Im < 0, and another by those with Re > 0, Im = 0 (see Figure 6; for increasing values of λ, see Figure 7 with λ = 60 and Figure 8 with λ = 100 and λ = 1000). (C) The null eigenvalue exists for any value of λ. Proof. The determination of the eigenvalues of the matrix (Equation (29)) establishes the existence of complex eigenvalues, which introduce oscillatory bundles in the proximity of the null critical point. This feature is common to all higher-order operators (see [20] and references therein) and expresses the difficulty of determining a purely monotone TW at any suitable speed λ ≠ 0. Under Lemma 2, it is possible to check that the oscillatory solutions cannot be avoided for any suitable value of the TW speed λ. Even further, one can check, by a careful look at Figures 6-8, that the imaginary parts of the complex eigenvalues increase when the TW speed increases. The characteristic polynomial of the matrix A adopts the following structure: In this case, with ab ≠ 1, the null eigenvalue is not a solution of Equation (33), unlike the case ab = 1. Again, the asymptotic behaviour of the solutions is obtained for particular values of the parameters a, b and λ, owing to the difficulty of performing a parametric root analysis of Equation (33). Note that, with the aim of understanding the different solution bundles for different actual values of a, b and λ, we first analyse whether Equation (33) has a pure imaginary root of the form µ = ki, for λ = 0 and k ≠ 0, k ∈ R: rearranging terms: From the imaginary part, the two real solutions k = ((a + b)/2)^{1/4} and k = −((a + b)/2)^{1/4} are obtained. Substituting the first root into the real part, we arrive at the following equation: In the limit a → 0 the following equation holds which has no positive real roots (b > 0). One can check, after taking the first derivative, that Equation (38) is always positive for b > 0. At the opposite extreme, if we consider values with both a → ∞ and b → ∞, the leading term in Equation (37) is the product ab, and therefore the function J(a, b) evaluated under these circumstances is negative. Given the positivity of the function J(a, b) for sufficiently small values of a and b, the negativity of the same function for sufficiently large values of a and b, and the continuity properties, the existence of a combination of a, b satisfying the condition J(a, b) = 0 can be concluded.
Given such values for (a, b), the imaginary solution is given by The existence of such a pure imaginary solution of Equation (37) can be further characterized by representing the homotopy graph for a fixed value of the TW speed (taken to be λ = 2). Figures 9-11 confirm, indeed, the existence of such a pure imaginary root of Equation (37) for a → ∞ and b → ∞. In addition, for λ = 0 the homotopy graph presents pure imaginary solutions; see Figure 12. The oscillatory character of the solutions f, g increases as λ → 0. Figure 13 represents the evolution of a TW for a sufficiently small value of λ together with the homotopy diagram. The eigenvalues satisfying Re < 0 have Re → 0. Figures 14 and 15 give the evolution of the TW and the homotopy graph for different values of the TW speed. For λ → ∞ and a ≠ b of the same order (a ∼ b), the homotopy graph tends to densify and converges to four eigenvalue clusters with the following properties: • Two eigenvalues condensing to a homotopy point with Re < 0 and Im > 0. • Two eigenvalues condensing to a homotopy point with Re < 0 and Im < 0. • Two eigenvalues condensing to a homotopy point with Re > 0 and Im = 0. • Two eigenvalues condensing to a homotopy point with Re → 0 and Im → 0. The analysis considers the case a = 1, b = 2 and different values of λ_1 = λ_2, both within the same order of magnitude and with substantially different orders of magnitude. The case a = 1 and b = 2 is analysed for two TW-speed configurations: the case λ_1 = 0.1 and λ_2 = 100 is represented in Figure 16, and the case λ_1 = 100 and λ_2 = 0.1 in Figure 17. The homotopy graphs, for this third case, are formed by two eigenvalues with Re > 0 and Im = 0 (unstable) and six eigenvalues with Re < 0. Conclusions The objectives set out in relation to the search for, determination and characterization of TW profiles have been addressed on the basis of analytical and numerical evidence. The main question, concerning the existence of TW speeds for which the natural instabilities of the fourth-order operator are minimized, has been answered. This has made it possible to extend the classical concept of TW speed for second-order diffusion, as used in the KPP models [18]. The weak advection term has been shown to be bounded by the TW propagation speeds determined so as to keep positivity at the first minimum. In addition, the homotopy representation has made it possible to characterize the TW tail in the proximity of the stationary solutions of the cooperative system Q and to determine the asymptotic behaviour of TW solutions via homotopy analysis. Funding: This research has been supported by the Universidad Francisco de Vitoria Research Directorate. Institutional Review Board Statement: Not applicable. Informed Consent Statement: Not applicable. Data Availability Statement: Data are made available upon request. Conflicts of Interest: The author declares no conflicts of interest.
5,580.4
2021-09-17T00:00:00.000
[ "Mathematics" ]
Determining and Prioritizing the Evaluation Criteria of Humanities Scientific Outputs: A Case Study of Language and Literature Fields With regard to the specific nature and variety of the humanities fields and disciplines and the need to evaluate the humanities research outputs according to their nature and intrinsic characteristics, two questions has been posed and answered in this study as follows: “What are the criteria and indicators for evaluating the research outputs of humanities?” and “What is the prioritizing of the evaluation criteria according to the research approaches and goals in humanities?” Considering the differences in the fields of humanities, a case study of language and literature was conducted. This research was done with a mixed method (qualitative and quantitative stages). The first stage was carried out using a library research method to extract the criteria and indicators for the evaluation of the research outputs in the fields of language and literature. In the second stage, in order to finalize and prioritize the criteria, a questionnaire was designed and distributed among a number of experts in the fields of language and literature in two rounds of fuzzy Delphi. In the first stage, 42 indicators were identified and divided into 8 categories of criteria: 1) platform for creation, presentation and publication, 2) writing structure, 3) content, 4) impact in online environment, 5) scientific impact, 6) social impact, 7) economic impact, and 8) cultural impact. The prioritizing of the criteria was also based on their average obtained in the second round of fuzzy Delphi, which shows the impact of research approaches and goals on the priority of using the criteria. INTRODUCTION With the introduction of the information world, the examples of power have changed, and competition over physical resources has given its place to the competition for information, knowledge and innovation.In this context, different sciences, from engineering and basic sciences to Humanities and Social Sciences (HSS), have adopted different ways of affecting and interacting with the information society. [1]As such, humanities can be considered the software and the soul of knowledge, and if all human knowledge is assumed as a system, humanities will be the core and the center of gravitation of the system, and how it is formed and oriented will have a direct effect on the formation and direction of micro and macro social systems. [2]As Lanzillo [3] says, due to their versatile functions as broad as the subject of man and society, humanities are likely to facilitate the application of other sciences, provide the basis for scientific development, and pursue the social-cultural maturity of man as their main goal and their obvious and hidden concern.In practice, humanities are intertwined with different dimensions of individual and social life of people and can be very efficient and effective than other sciences. [4]spite the great importance and function of the humanities, based on the evaluations, it is commonly accepted that humanities research is not effective in society and has a small share of science in different countries of the world. [5]Several reasons have been stated for the underdevelopment of humanities in society, [6][7] which are all valid and can be discussed, but we can also think about the problem from the perspective of how to evaluate research outputs -an issue that has been less attended despite its great importance. 
[8]Evaluation of the research and the researcher has many functions, such as creating an opportunity for the researcher to defend his/her performance and compensate for his/her weaknesses, gathering information needed for decision-making, policy-making and planning in research, [9] improving the performance and quality of research by determining strengths and weaknesses, optimizing the allocation of research financial resources, [10] providing guidance for designing and modifying evaluation criteria, [11] determining research priorities and designing relevant policies, [12] directing researches in line with the real demands of society, Evaluating different dimensions of research effectiveness, identifying effective research projects for commercialization, determining the quality of programs developed for research activities, and deciding on how to improve the quality of research activities. [13]These can be accomplished if we evaluate research outputs in proportion to each scientific field and use criteria and indicators that are compatible with the nature of these fields. However, humanities research outputs that are part of soft sciences are often evaluated with criteria and indicators designed for hard sciences, including basic, natural and engineering sciences, [14] which do not have the priority and competence to be used in humanities and do not indicate the real position of the humanities research outputs.Accordingly, by comparing two different phenomena with the same criterion, which is not acceptable, the prioritizing of humanities is low compared to other sciences.Although humanities and other sciences are similar in being science, their inherent differences and the specific language of each should be considered in evaluations and the use of a similar model for all fields of science should be avoided. [15]These differences can be found in the four dimensions of research subject and objectives, research method, citation behavior, and science coverage in citation databases.Maybe if the evaluation is done in accordance with the nature and characteristics of humanities, the progress and current position of this field will be better displayed and the level of its research will increase. [16,6]If the evaluation is done with respect to the research outputs of the humanities and their fields, many researches that have been evaluated before without considering the specific characteristics of each field can be evaluated again and appropriately.By using appropriate criteria and indicators in the evaluation of humanities research outputs, the accuracy and validity of the evaluations will increase, their real position will be demonstrated, and stakeholders can reliably use the evaluation results in the prioritizing, planning, policy-making and decision-making in the fields of humanities. 
Humanities consist of various fields and disciplines, each of which has a significant contribution to the development and progress of society.Since it is not possible to examine all the fields of humanities at once, this study has focused on the evaluation of language and literature due to its importance, position, and diverse outputs considering its range of audiences, which can be regarded as an introduction to the flow of specialized evaluation of humanities.The fields of language and literature deal with different aspects of life and most importantly with the identity and intellectual and cultural maturity of a nation and are considered as a support for the richness of a country's civilization.These fields are entertaining and pleasing and are also a reflection of society and present an image of what people think, say and do in society. These disciplines are a valuable treasure for understanding the values, customs and historical background as well as the future developments of the society, which explain and teach the do's and do not's in the form of a mixture of knowledge and art.Literature is the mirror of every age; it shows the turning point of every period and helps the sociology and anthropology of that time.Based on the reflection theory, literature can contain information about social behavior and values and document the social world for the reader in a transparent way. [17]It is possible to achieve the goals of language and literature fields through the publication of research results in various research outputs such as journal article, conference article, book, research project, dissertations/ thesis, and literary creations/creative literature.Ebrahimi Darcheh et al. [18] discussed the harms and strategies for evaluating the research outputs of humanities, particularly language and literature, and presented three approaches and goals for research in these fields: 1) production of science and promotion of knowledge foundations, 2) applicability and responsiveness to society's problems, and 3) literary creation/creative literature. In evaluations, the criteria and indicators that fit the research outputs under study are usually chosen and applied.However, it is here suggested that the evaluation and application of criteria and indicators should be done according to the researcher's approach and goal of research and publication of research output, since it is by research goal that the most suitable research output for results publication is selected.On the other hand, it seems that when the research goal changes, the importance of the evaluation criteria changes accordingly.By using the evaluation criteria, it is possible to determine to what extent the desired goal has been achieved.Since the objectives of each discipline are defined according to the discipline itself, using an objective-based evaluation, it can be ensured that more attention will be paid to the characteristics of each discipline.Evaluation based on approaches and goals can be considered as the main innovation of this research.Based upon the stated approaches and goals for language and literature fields, the present study is an attempt to answer the following questions: What are the criteria and indicators for evaluating the research outputs of humanities, especially in the field of language and literature? What is the prioritizing of the evaluation criteria according to the research approaches and goals in the fields of language and literature? 
LITERATURE REVIEW By the growing production of scholarly works in different fields of science, further importance has also been attached to the evaluation of research and research on research.Based on the recorded literature and the sources in the databases, scientific research evaluation dates back to the 1970s.A review of the literature shows that it has not been a long time since the evaluation of humanities research became part of the issues and concerns of experts in the field of research evaluation.In what follows, some recent studies on the humanities research evaluation are reviewed. Despite the importance and necessity [19] and the application and effectiveness of the humanities, [20] there are challenges regarding the evaluation of the research outputs of these sciences, the most obvious of which is the different nature of the humanities from other sciences and the need to conduct evaluation in proportion to the nature of the humanities. [21,18]e to the humanities differences, compared to other sciences, in terms of research subject, [22] diversity of audience, [23] methodology and data collection approach, [24] dependence on the native language, [25] platform and channel of publication of findings, [26] national geography of publications, [27] number of authors, [23] citation behavior, [28,29] coverage in database citations, [30] and method of affecting society, [31][32][33] special attention should be paid to the evaluation of the research outputs in the fields of humanities.As the need is strongly felt for the evaluation of the humanities research output, researchers have sought to compare the humanities with other sciences and show the specific features of the fields of humanities.The number of such studies is relatively large and they have generally come to the conclusion that the evaluation of the humanities through the technical-engineering style criteria and indicators is not suitable and do not show what it should.These studies highlighted the importance and necessity of a specific evaluation of the humanities fields and disciplines, which is in agreement with the research problem in the present study. A review of the literature shows that studies on the evaluation of humanities research have adopted a descriptive approach to investigate issues such as: publishing behavior of researchers, research and citation databases analysis, [34] citation analysis, [35,36] core sources, [37] thematic trend of research, [38] drawing of a map of science, [39] review of journals, articles, books, and dissertations, [40] metadata analysis of information resources, [41] performance of researchers and faculty members, [42] research visibility, [43] and creativity measurement. [44]In the aforesaid works, there are deficiencies in the evaluation of humanities research, as humanities are generally not differentiated from other sciences and a similar approach has been used to evaluate and compare all the fields disregarding their specific characteristics and nature.Some studies have pointed to the special evaluation of the humanities research, but have not elaborated any further on the way of this evaluation.In these studies, the proposed criteria and indicators are often quantitative, and qualitative criteria and indicators are rarely used.Nevertheless, the problem of the incompatibility of the indicators with humanities remains.Ochsner et al. 
[45] have also looked for agreed-upon concepts of quality in humanities and believe that a research assessment by means of quality criteria presents opportunities to make visible and evaluate humanities research, while a quantitative assessment by means of indicators is very limited and is not accepted by scholars.However, indicators that are linked to the humanities scholars' notions of quality can be used to support peers in the evaluation process (i.e.informed peer review). Concerning the fields of humanities, suggestions have been made for evaluating the research performance of faculty members, [14] educational departments, [46] and researchers. [47][59][60][61][62][63][64][65] The purpose of Thelwall and Delgado [66] research was to make an explicit case for the use of data with contextual information as evidence in humanities research evaluations rather than systematic metrics; Data are already used as impact evidence in the arts and humanities, but this practice should become more widespread.Humanities researchers should be encouraged to think creatively about the kinds of data that they may be able to generate in support of the value of their research and should not rely upon standardized metrics. Despite the large number of the aforesaid studies, scant attention has already been paid to the design and investigation of evaluation criteria and indicators related to a specific aspect or field of humanities.Such studies are limited and usually not up to date.With respect to the language and literature fields, Hug, Ochsner and Daniel [67] proposed criteria to evaluate research quality in three fields: German Literature Studies (GLS), English Literature Studies (ELS), and Art History.Ochsner, Hug and Daniel [68] ranked the criteria obtained from the previous research.D'Souza [55] investigated the characteristics and method of evaluation of creativity in story writing.In the present study, different criteria and the Priority and position of each of these criteria in the fields of language and literature are identified.The prioritization of the criteria is based on the research approaches and goals in these fields, which is a distinct feature in the present study. METHODOLOGY The current research has been conducted in a mixed method (qualitative and quantitative stages), as described below: Review of documents In order to obtain criteria and indicators for evaluating the research outputs of language and literature fields, documents and studies on the subject were analyzed using the library research method.Purposeful sampling of documents at this stage was done, and studies that were compatible with the research subject and question were selected.To this end, sources were examined from the specific to the general: first, studies on the language and literature, then studies on the humanities, and finally in the general dimension, those resources related to other scientific fields were explored.To review the documents, an advanced search of articles (journal and conference articles) was conducted in databases using the search strategy of Table 1.To increase the comprehensiveness and precision of the search, synonyms and related words, Boolean operators, truncation, and phrase searching were included. 
By entering a search query in each database and retrieving sources, duplicate records (in terms of title and subject) or items with irrelevant titles were removed.Among the sources that had similar results, the most appropriate and up-to-date ones were selected.Then, the records that had related titles (377 records) were reviewed in terms of abstracts, and in this step, unrelated items were removed.In the next step, sources with relevant abstracts (82 records) were reviewed in full text.Finally, among 35 related and suitable records, the evaluation criteria and indicators of research outputs were extracted for use in the next stage.The databases searched and the number of documents retrieved and used are given in Table 2.In the library search, note-taking was used as the data collection tool.MaxQDA, version 2020, software was used for documents analysis. Experts' panel creation The Delphi method is a search method with the characteristics of iterating different rounds and controlled feedback (data analysis in each round) based upon the anonymous statistical group response of experts. [69]To get closer to the real-world and overcome the problem of ambiguity and uncertainty in the judgment of decision makers, the classic Delphi is replaced by the fuzzy Delphi method. In order to finalize the criteria and indicators obtained in the previous stage based on the opinion of experts (adding, subtracting or changing categories) and prioritize them, a researcher-made Delphi questionnaire with a seven-point Likert scale (including verbal expressions) was designed.The questionnaire had 4 open and closed questions and included various items to get the experts' opinion about the importance of the criteria and their suggestions in this regard.The initial questionnaire was revised and finalized using the opinions of 3 experts (in the fields of language and literature, and library and information science), and its face and content validity was confirmed.In order to determine the reliability, the questionnaire was answered by 22 researchers of language and literature fields (English, French, German, Arabic, and Persian) from different universities in Iran. By analyzing the answers in SPSS software, it was found that the Cronbach's alpha of the questionnaire is 0.95, which indicates the homogeneity and internal consistency of the questions and their items and the reliability of the questionnaire. The statistical population of the research is the faculty members of the language and literature fields of Iranian universities.According to Hogarth, [70] six to twelve members are ideal for the Delphi method.Clayton [71] also posited that if the respondents are a combination of experts with different specialties, between five and ten members are sufficient.Usually, in different sources, at least 10 people are considered as a suitable number.The research sample (17 people) was selected purposefully and by snowball method from among the expert faculty members in the fields of language and literature (English, French, German, Arabic, and Persian) from 8 universities in Iran.Criteria such as experience or involvement in the subject area of research (authoring and translating books and publishing articles in the field of research evaluation, or membership in associations and working groups related to the fields of language and literature in research institutions at different levels) and having diverse outputs in the fields of language and literature were effective in selecting the sample. 
Excel software was used to analyze the data collected from the questionnaires.After collecting the questionnaires of the first round, the fuzzification of the verbal expressions was done based on the Triangular Fuzzy Numbers (TFN).Each verbal expression was assigned a triangular fuzzy number consisting of three values of the lower limit or the minimum (l), the middle limit or the most probable value (m), and the upper limit or the maximum (u), the details of which are given in Table 3. Fuzzification allows answers to be defined qualitatively, which is the advantage of the fuzzy Delphi method over the classic Delphi method.In the next step, using the assigned fuzzy numbers, the fuzzy average of each boundary was calculated (Formula 1).Then, the de-fuzzification of the values was performed, which means calculating the average of the fuzzy averages of the limits for each criterion (Formula 2).The de-fuzzified number is between zero and one.In the comparison that was made between the de-fuzzified number of the criteria and the threshold limit of 0.7 (suitable for the seven-point scale) [72] all the criteria, except four, scored above the threshold.In view of this, the second round of Delphi was also conducted, although no new criterion and indicator had been proposed in the first round, and the Coefficient of Variation (CV) of all criteria was acceptable (between 0 and 0.5) based on Table 4, which indicated the consensus/agreement of the opinions presented. [73]The coefficient of variation is obtained by dividing the standard deviation by the mean. Databases The second-round questionnaire was also given to the experts (11 people of the first round) with the same criteria and indicators, with the difference that in this round, a column containing the de-fuzzified numbers of each criterion in the first round was added to the questionnaire tables.Respondents were asked to state their opinions on the importance of each criterion and, if necessary, change or modify their first-round answers according to the notified numbers. In the second round, fuzzification, de-fuzzification, and comparison of the obtained average with the threshold limit were performed.In this round, only five criteria (four items like the first round and one new item) scored below the threshold of 0.7.The result of comparing the averages of the two rounds showed that the opinions did not differ much and there was consistency in the answers of the two rounds.These two cases show the stability of the answers, which is a key factor in deciding to stop iterating the Delphi rounds.Another necessary factor in deciding whether to continue or stop rounds is the degree of consensus and solidarity of opinions.Various mechanisms such as subjective criteria, descriptive statistics and inferential statistics are used to check the consensus, which Von der Gracht [69] collected in a review study. 
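As a compact illustration of the fuzzification and de-fuzzification steps described above (a sketch only: the exact seven-point assignment of Table 3 is not reproduced here, so an evenly spaced triangular scale is assumed), the following Python fragment computes the fuzzy average of each boundary (Formula 1) and the de-fuzzified score (Formula 2) for one criterion, and compares the result with the 0.7 threshold.

import numpy as np

# Hypothetical mapping from the seven-point Likert scale to triangular fuzzy
# numbers (l, m, u) on [0, 1]; the real assignment is given in Table 3.
TFN = {
    1: (0.0, 0.0, 1/6), 2: (0.0, 1/6, 2/6), 3: (1/6, 2/6, 3/6),
    4: (2/6, 3/6, 4/6), 5: (3/6, 4/6, 5/6), 6: (4/6, 5/6, 1.0),
    7: (5/6, 1.0, 1.0),
}

def defuzzified_score(responses):
    """Fuzzy average of each boundary, then the mean of the three averages."""
    fuzzy = np.array([TFN[r] for r in responses], dtype=float)
    l_avg, m_avg, u_avg = fuzzy.mean(axis=0)      # Formula 1 (per boundary)
    return (l_avg + m_avg + u_avg) / 3.0          # Formula 2 (de-fuzzification)

answers = [6, 7, 5, 6, 7, 6, 4, 7, 6, 5, 7]       # made-up expert ratings
score = defuzzified_score(answers)
print(f"de-fuzzified score = {score:.3f}",
      "(above 0.7 threshold)" if score >= 0.7 else "(below 0.7 threshold)")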
In the present study, three methods were used to determine whether the experts reached a consensus: 1) the difference between the averages of the two rounds for all criteria, computed with Formula 3, was smaller than 0.1, which indicates consensus among the experts' opinions; 2) Kendall's W coefficient of the second round (0.351), compared with the first round (0.280), both with a significance level lower than 0.01, did not grow much, which indicates that opinions did not change significantly (for this coefficient, a value of 0.1 indicates very weak agreement, while a value of 0.7 indicates very strong agreement); and 3) the coefficient of variation of all criteria was less than 0.5 in the second round, which is acceptable according to Table 4. Finally, based on the stability and consensus reached, it was decided to stop distributing the questionnaire after the second round. Since the purpose of the fuzzy Delphi method at this stage of the research was to determine the importance and prioritizing of the evaluation criteria of the research outputs of the language and literature fields, no criterion was removed. Rather, using the averages of the second round, the prioritizing of the criteria was carried out. The closer the average of a criterion is to 1, the higher its importance and priority according to the experts. What are the criteria and indicators for evaluating the research outputs of humanities, especially in the field of language and literature? The items that were mentioned in the documents and resources for the evaluation of research outputs were divided into 8 categories of criteria (components) and 42 indicators (subcomponents) according to their consistency with the fields of language and literature, as shown in Table 5. In the questionnaire distributed based on the fuzzy Delphi method, the respondents were also asked, through an open-ended question, to state their suggestions for adding, removing or moving criteria and indicators. [Table 3 column headings: definite number; verbal expression (importance level); fuzzy number F = (l, m, u). Table 4 decision rule: 0 < V ≤ 0.5, good degree of consensus, no extra rounds required; 0.5 < V ≤ 0.8, less than satisfactory degree of consensus, possible need for an additional round; V > 0.8, poor degree of consensus, definite need for an additional round.] What is the prioritizing of the evaluation criteria according to the research approaches and goals in the fields of language and literature? Tables 6-8 show the averages (de-fuzzified numbers) of the first and second rounds of fuzzy Delphi, their difference and their coefficients of variation. Then, based on these data, Table 9 presents the priority of each of the criteria for evaluating the research outputs of the language and literature fields according to the approach and goal of the research. The purpose of the implementation of fuzzy Delphi was not to screen the criteria, but to determine the importance and priority of each one in the evaluation according to the goal of the research. The goals of research in the fields of language and literature are: 1) production of science and promotion of knowledge foundations, 2) applicability and responsiveness to society's problems, and 3) literary creation/creative literature. According to Table 6, when the goal of research is production of science and promotion of knowledge foundations, criteria such as scientific impact and the content of research outputs have a higher priority for experts, because such research is supposed to help expand the theoretical scope of science. Accordingly, as the criteria go beyond the limits of the scientific environment and become influential in society, their weight decreases.
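A short Python sketch of the three consensus checks, applied to made-up two-round data rather than the study's own scores, is given below; ties between ratings are ignored in the rank computation for Kendall's W.

import numpy as np

round1 = np.array([0.82, 0.74, 0.91, 0.66, 0.78])   # de-fuzzified, round 1
round2 = np.array([0.84, 0.75, 0.90, 0.68, 0.79])   # de-fuzzified, round 2

# 1) Stability: per-criterion difference of the two rounds' averages below 0.1.
print("stable:", np.all(np.abs(round2 - round1) < 0.1))

# 2) Kendall's W from a raters x criteria rating matrix (illustrative data).
ratings = np.array([[6, 5, 7, 4, 5],
                    [7, 5, 6, 4, 6],
                    [6, 4, 7, 5, 5]])                # 3 raters, 5 criteria
ranks = ratings.argsort(axis=1).argsort(axis=1) + 1  # rank within each rater
m, n = ranks.shape
S = np.sum((ranks.sum(axis=0) - m * (n + 1) / 2.0) ** 2)
W = 12.0 * S / (m ** 2 * (n ** 3 - n))
print("Kendall's W =", round(W, 3))

# 3) Coefficient of variation of the second-round answers per criterion.
cv = ratings.std(axis=0) / ratings.mean(axis=0)
print("all CV below 0.5:", np.all(cv < 0.5))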
Based on the experts' opinion, as shown in Table 7, when the goal of the research is applicability and responsiveness to society's problems, social impact is more important than other evaluation criteria.Perhaps because in this goal, the researcher has more connection with the society, the field effect of his/her activities in the form of research outputs is considered and evaluated more important than the scientific impact. According to Table 8, when "literary creation/creative literature" is the main goal of a language and literature specialist's research, cultural impact and its evaluation through content validity criteria are more important, because such works deal with the cultural identity of the society. According to Table 9, the priority obtained for the criteria and their differences in approaches and goals indicate the importance and dominance of targeting in research.The priority of all criteria except two fluctuates with respect to different objectives.The content of the research outputs has been assigned the second priority in all cases, which indicates the high importance of the content of the outputs.Besides, the economic impact of research outputs in all goals has the lowest priority, which can be due to the less inclination of the field of humanities and especially the fields of language and literature towards the economic fields. The findings show that the goal of the research both determines the audience and is effective on the rating of the evaluation criteria.Academic and non-academic audiences have different needs, and therefore the research outputs of each should be evaluated differently.In Figure 1, the prioritizing of the criteria is shown schematically.In each goal, as we approach from the center of the diagram to its margins, the priority and importance of the evaluation criterion decrease. DISCUSSION AND CONCLUSION In this study, first the criteria and indicators for evaluating the research outputs of language and literature fields were determined and prioritized using references and experts' opinions.The literature review shows that the use or non-use of each of the criteria and indicators and the priority of each one is different in various fields. [67]This difference in the current research can be due to the nature of language and literature fields, on the one hand, and their classification based on the goal of the research, on the other hand.Therefore, it is highly emphasized to plan for research evaluation based on the nature of the fields and the research goal. From the point of view of research evaluation, the main stages of research from the beginning to fruition consist of input, process, output, outcome and impact.As the findings of the current research show, according to the goal of the research, the evaluation method of each of these stages can be changed.When the goal changes, the expectations of the stakeholders also change, which in turn requires an appropriate evaluation method.For example, when working in a scientific environment with a specific audience, citation-based indicators are more important, but in a public space with a general audience, the use of research output in cultural centers and acceptance by mass media is more important. 
When the approach, goal and audience of scientific research and study change, the evaluation criteria will also change. To publish and share the findings of any research there are various outputs, and these should not all be evaluated in the same way. These three aspects (goal, output, criteria) must be consistent in order to achieve effectiveness. Currently, researchers look for the impact of their research in areas that are easier to document; however, if the evaluation is carried out from different aspects and prioritizing is applied, it will encourage and guide researchers towards research and scientific studies that interest them and are diverse, because it ensures that their efforts will not be in vain and that they count in the evaluation for promotion. The approach and goal of production of science and promotion of knowledge foundations includes theoretical studies, basic and fundamental research for a better understanding of subjects, idea generation, innovation, freedom of thought, thinking and presenting original thought, creativity, discourse formation and realm creation, and cultivating or formulating theory. These are the factors of scientific progress and expanding the boundaries of knowledge that can lead to the increase of science and knowledge and the reduction of consumerism. If such a goal is considered for research, the evaluation of research outputs from the perspective of scientific impact is a priority. The obtained priorities not only show the importance of each criterion, but also indicate their sequence. The research output, in any format, should have the indicators of suitable content in order to be scientifically effective. If the output has high-quality content, it can be provided on a suitable platform for social and cultural use. A writing structure suitable for the audience makes the research output acceptable and allows it to be used in the online environment and to create its economic impact. [Table 9: The prioritizing of the evaluation criteria of the research outputs of the fields of language and literature; priorities are based on the average of each criterion in the second round of fuzzy Delphi, per goal (production of science / applicability and responsiveness / literary creation). Clearly recoverable rows: scientific impact 1/5/7, social impact 4/1/4, economic impact 8/8/8, cultural impact 5/3/1; the remaining rows cover the platform for creation, presentation and publication, the writing structure, the content, and the impact in the online environment.] The approach and goal of applicability and responsiveness to society's problems includes applied research, focusing on solving a specific problem and its practical aspect in economic, scientific, cultural, social and political fields, applying science and knowledge in line with social responsibility to solve the problems and needs of society, the use of language and literature in the real world and the environmental application of scientific awareness, and activism in serving society. If the subject of the research output is to solve a social concern, one should expect a social impact from it and consider evaluation in this regard as a priority. The publication of popular and culturally effective content in the online environment leads to the generalization and promotion of research findings. The publication of applied research with the desired writing structure on a reliable and accessible platform will provide the basis for its entry into the economic arena and generate income.
Literary creation/creative literature means to be inspired by artistic and literary feelings and emotions in interpreting the human and social world to meet aesthetic needs and promote values.Unlike the other two approaches and goals, which have a more general aspect and are applicable in all fields of humanities, the approach of literary creation/creative literature is specific to the fields of language and literature and can be a support for the first and second approaches.According to the experts, literary creations/creative literature, which can be in the form of poetry, prose (fiction and non-fiction), and dramatic literature, should first be evaluated in terms of cultural impact.In the next priority, the content and platform of their creation, presentation and publication is considered important and can have a social impact.Proper writing and sharing and publishing research in the online environment can bring scientific and economic impact. The obtained criteria and indicators are in harmony and similarity with the related items in the background section of the research and can generally be divided into two groups: factors related to the academic environment and general factors.Among them are indicators of academic value, innovation, and social value for humanities papers. [48]Besides, criteria such as innovation and originality, impact on the research community, productivity, relation to and impact on society, connection between research and teaching, fostering cultural memory, and connection to other research can be mentioned in the fields of German literature, English literature and art history. [68]At a more specific level, factors such as meaning and relevance, reader's immersive experience, development and control, distinctiveness, voice and originality in the evaluation of the story [55] can be mentioned.As can be seen from the findings, the quality of research is a multidimensional concept; the criteria are influenced by each other and complement each other.Each criterion does not consider all aspects of the research, so one criterion should not be used in isolation, but several criteria should be used in a complementary manner. [74]owever, what is necessary and worthy of sufficient attention is the consistent and quality content of the research output, which if not observed, will disrupt the process of research use and effectiveness. 
Although Vanholsbeeck and Lendák-Kabók [64] proposed that effectiveness is related to the concept of accountability, this concept is currently not much considered and dominant in the university evaluation culture; however, in the present study, it has taken a significant part.Hinrichs-Krapels and Grant [60] posed some questions as follows: whether the research has been effective or not?Does research input lead to output, outcome and impact?Has it had sufficient efficiency?To what extent the research input lead to the research output?Has the research achieved its desired goals?These questions show the importance of paying attention to research effectiveness, which is mentioned in various sources.For example, Gibson and Hazelkorn [61] focused on the prioritization of arts and humanities research based on their potential for production, economic growth, and job creation to overcome the economic crisis.In this regard, researches that are interdisciplinary and related to social issues are preferred.Paying attention to the knowledge economy helps redefine the goal of higher education and emphasizes the need to design policies for the future of research.Oancea, Florez Petour and Atkinson [51] emphasized the need for qualitative criteria and indicators to configure the cultural value and impact of research.Sörlin [62] pointed to the humanities of transformation, suggesting that research policies should be changed to expand the values of humanities and increase research effectiveness.This can be enabled by integrative humanities using interdisciplinary research. The findings confirm that the impact of humanities has gone beyond the scientific dimension and has been directed to various economic, social, cultural and political aspects.From the obtained criteria and indicators, it is clear that the evaluation of research outputs in humanities has gone beyond the dependence on the academic environment and has tended to influence in wider dimensions.This lays the groundwork for the application of humanities in society and increases the need for evaluation in order to obtain the rate of progress and transformation of research.In other words, it is not only the publication of the research that is important, but its use and effectiveness should also be considered in order to bring about the well-being of the target society.Therefore, according to the nature of each field and the goal of research, it is necessary to choose the appropriate criteria for evaluating research outputs. Among the beneficiaries of this study are policymakers in the field of research, who can adopt a new approach in decision-making and establishing policies for the evaluation of research outputs of humanities and related fields according to the results.Giving importance to impact as an evaluation component has created a major evolution in research evaluation systems.Creating an impact discourse and understanding its importance requires creating an impact infrastructure in the research process, on the one hand, and stabilizing the impact position through applying and updating the impact and eliminating weak points, on the other hand. [32]Understanding the concept of research value from the perspective of its stakeholders is also effective in evaluation and clarifies the nature of expected effects. 
[75]The criteria for research effectiveness make the researcher responsible in different fields.Paying attention to the impact is the stimulus and reinforcement of research, and following the quantitative and qualitative increase of research, its impact and efficiency will increase.If the problem of impact and impact evaluation is to remain stable, a wider range of internal motivations should be considered and the ability of academics to coordinate with them should also be improved. [76]Criteria and indicators act like a filter; not every research can adapt to them, and therefore low-quality studies are excluded from the cycle.Over time, this mechanism is hoped to make humanities research more targeted and more efficient.In order to make this planning more practical, it is suggested to examine the research outputs according to each field, to assign suitable criteria and indicators for each research output (these two issues are under study for the fields of language and literature), to determine the maximum and minimum standard limits for each indicator, and to localize the indicators according to the conditions of each research center. The movement of universities from the first generation to the fourth generation requires providing its infrastructure and ecosystem in each country.Changing the approach of evaluation criteria and indicators and their compatibility with the paradigms of education, research, entrepreneurship, and influencing society are among the things affected by this generational change in universities.This change of paradigms should be considered in all fields, including humanities, which play a decisive role in society.The criteria and indicators examined in this research tend to the characteristics of the fourth generation of universities and evaluation in it, which can be a suitable suggested model for those involved in the evaluation of humanities and especially language and literature fields.According to Sivertsen, [26] publication patterns are different between humanities fields, while these patterns are similar for each field in different countries.This is a positive point for using the results of this study in the fields of language and literature in other regions. In short, according to the opinion of experts in language and literature fields, what should be considered in the policy of evaluating the scientific outputs of these fields is the difference of scientific outputs and their evaluation criteria according to the nature of the field and its research approaches and goals.The diversity of approaches and goals, which is the starting point of the method proposed in this study for evaluation, will cause the expansion of research to the environment outside the university, the application of research and scientific studies in society, and be useful to the public.This research aimed to introduce the evaluation criteria of scientific outputs in different dimensions in order to take an effective step in changing the view of language and literature and improving the status of this discipline and humanities.The realization of such a view on evaluation in humanities requires an effort to spread this style of evaluation and provide the necessary infrastructure for it at micro and macro levels.It is hoped that the conducted study will be an introduction to this matter. Figure 1 : Figure 1: The prioritizing of the evaluation criteria of the research outputs of the fields of language and literature. Table 4 : Coefficient of variation and consensus. 
The issues raised by the respondents in the first round were either already covered by the criteria and indicators obtained from the review of documents or were unrelated to the research topic. No new items were proposed in the second round. Therefore, what was obtained from the review of the documents was used as the criteria and indicators for evaluating the research outputs of the fields of language and literature.
9,086
2024-04-15T00:00:00.000
[ "Linguistics" ]
Hawking radiation and black hole gravitational back reaction - A quantum geometrodynamical simplified model The purpose of this paper is to analyse the back reaction problem, between Hawking radiation and the black hole, in a simplified model for the black hole evaporation in the quantum geometrodynamics context. The idea is to transcribe the most important characteristics of the Wheeler-DeWitt equation into a Schr\"odinger's type of equation. Subsequently, we consider Hawking radiation and black hole quantum states evolution under the influence of a potential that includes back reaction. Finally, entropy is estimated as a measure of the entanglement between the black hole and Hawking radiation states in this model. I. INTRODUCTION Since the discovery that black holes would have to emit radiation, there have been proposals to explain the loss of information associated with the apparent conversion of pure to mixed quantum states. From the beginning, this information loss was proposed to be fundamental and, the non unitary evolution of pure to mixed quantum states constituted a hypothesis to solve the problem associated with this loss. For example, Steven Hawking own proposal of the non unitary evolution is represented by the "dollar matrix" S / [1] ρ final = S /ρ initial , which allows the evolution of pure quantum states, characterised by the density matrix ρ initial , into mixed states ρ final . The black hole evaporation mechanism and the problem of information loss, collected behind the event horizon, constituted a privileged arena for quantum gravity theories candidates (namely, quantum geometrodynamics [2], string theory [3][4][5][6][7] and loop quantum gravity [8][9][10][11]) to establish themselves beyond General Relativity. However, the scientific community was reluctant to give up unitarity, a crucial feature of Quantum Mechanics, and the hypothesis of a new principle of complementarity, between the points of views of an infinitely distant observer and a free falling observer near the event horizon, was raised [12]. Following a similar approach, it has been emphasised over the time the role of the gravitational back reaction effect [13,14] of Hawking radiation on the event horizon as a way to allow the information accumulated within the black hole to be encoded in the outgoing radiation. In this way, the emergence of a mechanism in which all the black hole information (a four dimensional object in General Relativity) would be accessible at the event horizon (which can be described as a membrane with one dimension less than the black hole), is somehow similar to what happens with a hologram [15]. This new holographic principle was simultaneously proposed and clarified [16,17] in order to incorporate the aforementioned principle of complementarity. The next step happened when it was conjectured the correspondence between classes of quantum gravity theories (5-dimensional anti-De Sitter solutions in string theories) and conformal field theory (CFT -conformal field theory -4-dimensional boundary of the 5-dimensional solutions), the so called AdS/CFT conjecture [18][19][20][21]. This discovery was extremely important to ensure the possibility of a correspondence between the physics that describes the interior of the black hole (supposedly quantum gravity), and the existence of a quantum field theory at its boundary (the surface that defines the event horizon) that would allow to save the unitarity. 
In 2012, in an effort to analyse important assumptions, such as: 1) the principle of complementarity proposed by Susskind and its colaborators, 2) the AdS/CFT correspondence and, 3) the equivalence principle of General Relativity, in the way that Hawking radiation could encode the information stored in the black hole, a new paradox was discovered [22]. In simple terms, the impossibility of having the particle, which leaves the black hole, in an maximally entangled state (or non factored state) with two systems simultaneously (the pair disappearing beyond the horizon and all the Hawking radiation emitted in the past, a problem related to the so-called monogamy of entanglement), leads to postulate the existence of a firewall that would destroy any free falling observer trying to cross the event horizon. The firewall existence is incompatible with Einstein's equivalence principle. However, if there is no firewall, and the principle of equivalence is respected, according to these authors, unitarity is lost and information loss is inevitable. Apparently, the situation is such that either General Relativity principles or Quantum Mechanics principles need to be reviewed [23,24]. This is an open problem and the role of gravitational back reaction, between Hawking radiation and the black hole, persists as an unknown and potentially enlightening mechanism on how to correctly formulate a quantum theory of the gravitational field. In an attempt to study the possible gravitational back reaction, between Hawking radiation and the black hole, from the quantum geometrodynamics point of view, a toy model was proposed [25]. It was shown and discussed the conditions under which the Wheeler-DeWitt equation could be used to describe a quantum black hole. In particular, a simple model for the black hole evaporation was studied using a Schrödinger type of equation and, the cases for initial squeezed ground states and coherent states were taken to represent the initial black hole quantum state. One can ask, how can a complex equation such as the Wheeler-DeWitt be approximated by a Schrödinger type of equation? In the cosmological context, several formal derivations were carried [2,[26][27][28][29] with the purpose of enabling to use the limit of a quantum field theory in an external space-time for the full quantum gravity theory. Such approaches usually involve procedures like the Born-Oppenheimer or Wentzel-Kramers-Brillouin (WKB) approximations. In this work we review this toy model. It is important to notice that, even though, a full study of the time evolution of the Hawking radiation and black hole quantum states was performed when a simple back reaction term is introduced, an important part of the discussion about the time evolution of the resulting entangled state was left incomplete. In fact, it is exactly the motivation of this paper to address the problem of explicitly describe the time evolution of the degree of entanglement of this quantum system. In addition, another important goal is to get an approximate estimate of the Von Neumann entropy and check is the back reaction can induce a release of the quantum information in the Hawking radiation. These results can be interesting, in the quantum geometrodynamics context, as a simple starting point to more robustly address the black hole information paradox in a canonical quantization of gravity program. This paper is organized as follow. 
In sections II and III we present an introduction to quantum geometrodynamics and consider a semiclassical approximation of the Wheeler-DeWitt equation. In sections IV and V we derive a simple model of the back reaction between the Hawking radiation and the black hole quantum states, in which the dynamics is governed by a Schrödinger type of equation. Finally, in sections VI and VII, we obtain and discuss the main result of this paper, namely the time evolution of the entanglement entropy and the behaviour of the quantum information of the Hawking radiation state. II. QUANTUM GEOMETRODYNAMICS AND THE SEMICLASSICAL APPROXIMATION In the following, we give a brief description of the basis of J.A. Wheeler's geometrodynamics, which consists of a 3+1 spacetime decomposition (the ADM decomposition of R. Arnowitt, S. Deser and C.W. Misner [30]), and obtain the General Relativity field equations in that context. The field equations obtained in this procedure exhibit the evolution of a pair of dynamical variables (h_ab, K_ab) — the 3-dimensional metric h_ab (induced metric) and the extrinsic curvature K_ab — on a Cauchy hypersurface Σ_t (a three-dimensional surface). General Relativity, defined by the Einstein-Hilbert action (taken here without the cosmological constant), can be expressed in the hamiltonian formalism. For this purpose, a 3+1 decomposition of the spacetime (M, g) may be considered, where M is a smooth manifold and g a lorentzian metric on M; we assume that this spacetime is globally hyperbolic, so that it can be foliated into Cauchy hypersurfaces. This decomposition consists of the foliation of the 4-dimensional spacetime into a continuous sequence of Cauchy hypersurfaces Σ_t, parameterised by a global time variable t, for which a flow of 'time' is perceived as an observer's world line crosses a sequence of Cauchy hypersurfaces. General Relativity covariance is maintained in this procedure by considering all possible ways of carrying out this foliation. When we consider the hamiltonian formalism we need to define a pair of canonical variables; however, we can initially identify a pair of dynamical variables constituted, on the one hand, by the 3-dimensional metric induced on Σ_t by the spacetime metric, where n_µ is a vector orthogonal to Σ_t. In this way we can separate the metric g into its temporal and spatial components, according to the following expressions, or, in a more suitable compact form. In the previous equation, N is called the lapse function, whereas N^a is the shift vector. The other canonical variable, on the other hand, is the extrinsic curvature. Hence, the pair of dynamical variables (h_ab, K_ab) (with Latin indexes, denoting 3-dimensional tensor fields) enables us to rewrite the Einstein-Hilbert action (1) as the action (6). We notice that the lapse function and the shift vector are Lagrange multipliers (since ∂L/∂Ṅ = 0 and ∂L/∂Ṅ^a = 0) and, following Dirac [31], we can establish the existence of primary constraints, which allow the action (6) to be written as in Equation (7), with p^ab = ∂L/∂ḣ_ab (the conjugate momentum of the dynamical variable h_ab) and where G_abcd is the DeWitt metric and D_b is the covariant derivative. We can define the hamiltonian constraint and the diffeomorphism constraint through the variation of the action (7) with respect to N and N^a. Physically, constraints (9)-(10) express the freedom to choose any coordinate system in General Relativity.
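Since the displayed expressions did not survive extraction, the following LaTeX fragment sketches the standard ADM forms the passage refers to — the lapse-shift decomposition of the metric, the extrinsic curvature, and the two constraints with the DeWitt metric. Signs, prefactors and the placement of matter terms follow common conventions and are assumed rather than taken from the paper.

% Standard ADM decomposition and constraints (conventions assumed):
ds^2 = -\left(N^2 - N_a N^a\right) dt^2 + 2 N_a\, dx^a\, dt + h_{ab}\, dx^a dx^b,
\qquad
K_{ab} = \frac{1}{2N}\left(\dot h_{ab} - D_a N_b - D_b N_a\right),

\mathcal H_\perp = 16\pi G\, G_{abcd}\, p^{ab} p^{cd}
 - \frac{\sqrt{h}}{16\pi G}\, {}^{(3)}R \approx 0,
\qquad
\mathcal H_a = -2\, D_b\, p^{b}{}_{a} \approx 0,

% with the DeWitt metric
G_{abcd} = \frac{1}{2\sqrt{h}}\left(h_{ac}h_{bd} + h_{ad}h_{bc} - h_{ab}h_{cd}\right).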
More precisely, the choice of a particular foliation Σ_t is equivalent to choosing the lapse function N, and the choice of the spatial coordinates x^i is equivalent to choosing a particular shift vector N^a. It is important to emphasise that, owing to the definition of the DeWitt metric G_abcd, the kinetic term in equation (9) is indefinite, since not all kinetic functional operators in Eq. (8) share the same sign. This property will persist beyond the quantisation procedure and will play a crucial role in the semiclassical approach to the black hole evaporation process, where it will give rise to a negative kinetic term. III. CANONICAL VARIABLES QUANTISATION AND WHEELER-DEWITT EQUATION The canonical quantisation programme, according to P.A.M. Dirac's prescription, demands the transition from classical to quantum canonical variables, (h_ab, p^ab) → (ĥ_ab, −i δ/δh_ab), and also promotes Poisson brackets to commutators. We have to define a wave state functional Ψ(h_ab) on Riem Σ, the space of all 3-dimensional metrics. Nevertheless, there are important issues related to: 1. the correct factor ordering in building quantum observables from the fundamental variables ĥ_ab, −i δ/δh_ab; 2. the interpretation of quantum observables as operators acting on the wave functional Ψ(h_ab) and the adequate definition of a Hilbert space; 3. the conversion of the classical constraints (9)-(10) to their quantum counterparts and the interpretation of the latter; 4. the lack of time evolution in these quantum constraints. These questions, together with possible solutions and problems that remain open to the present day, are thoroughly discussed in [2,32]. Among the previously mentioned issues, the problem related to the lack of time evolution seems to stand as an essential feature in the formulation of a quantum theory of the gravitational field. If we assume that the evolution of the wave functional over time depends on a concept of time defined after the canonical quantisation, then the time parameter t will be an emergent quantity [33]. In order to address the black hole evaporation problem and to explore how information is eventually encoded in Hawking radiation, it would be important to obtain the time evolution of the entropy as a measure of the degree of quantum entanglement between radiation and black hole states. Since the quantum version of the hamiltonian constraint (9), known as the Wheeler-DeWitt equation, and the quantum diffeomorphism constraint are both time independent, the wave functional describes a purely quantum and closed gravitational system. In the case of a black hole evaporation phase, equations (12)-(13) describe a quantum black hole in the context of a purely quantum universe. This situation is not suitable if we consider that we must have several classical observers measuring and depicting the time evolution of the black hole outgoing radiation. These classical observers experience and describe physical phenomena in a classical language that needs a time parameter. Hence, we need to consider a quantum black hole in a semiclassical universe where time appears as an emergent quantity. Time is the product of an approximation which aims to extract, from the Wheeler-DeWitt equation, an external, semiclassical stage in which the black hole and Hawking radiation quantum states evolve. In reference [2] (section 5.4) we can find a derivation, from equations (12)-(13), of a Schrödinger functional equation. In the following, we highlight some important details of this derivation.
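For reference, in a common convention (for instance that of [2], with m_Pl² = (32πG)⁻¹ and ℏ = c = 1) the quantum constraints referred to as (12)-(13) take the Wheeler-DeWitt form sketched below; this is our sketch, with factor ordering left open, and the paper's exact notation may differ:

```latex
\hat{\mathcal H}_\perp \Psi \;=\; \left( -\frac{1}{2 m_{\rm Pl}^2}\, G_{abcd}\, \frac{\delta^2}{\delta h_{ab}\, \delta h_{cd}} \;-\; 2\, m_{\rm Pl}^2 \sqrt{h}\; {}^{(3)}\!R \;+\; \hat{\mathcal H}^{m}_{\perp} \right) \Psi = 0 ,
\qquad
\hat{\mathcal H}_a \Psi \;=\; \left( -2\, D_b\, \frac{1}{i}\, \frac{\delta}{\delta h_{ab}} \;+\; \hat{\mathcal H}^{m}_{a} \right) \Psi = 0 .
```

The indefinite DeWitt metric G_abcd in the second functional derivative is the origin of the negative kinetic term exploited later in the simplified model.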
Let us start by writing the wave functional in WKB form, where S[h_ab] is a solution of the vacuum Einstein-Hamilton-Jacobi equation [34], since the WKB approximation enables us to extract, at the higher orders of the expansion, a Hamilton-Jacobi equation. In addition, S[h_ab] is also a solution of the Hamilton-Jacobi version of (12)-(13), with the definitions m_Pl² = (32πG)⁻¹ and ℏ = c = 1, and where Ĥ^m_⊥ and Ĥ^m_a are the contributions from the non-gravitational fields. Having the solution S[h_ab], we can now evaluate |ψ(h_ab)⟩ along a solution h_ab(x, t) of the classical Einstein equations. In fact, this solution is obtained from the classical evolution equation for ḣ_ab once a choice of the lapse and shift functions has been made. At this point, we can define the evolution equation for the quantum state |ψ(h_ab)⟩, which, since ḣ_ab depends on the DeWitt metric G_abcd, will have differential operators with the wrong sign on its right-hand side. Finally, we are in a position to define a functional Schrödinger equation (19) for quantized matter fields in an external classical gravitational field. Notice that the matter hamiltonian Ĥ^m depends parametrically on the metric coefficients of the curved space-time background and contains indefinite kinetic terms. This derivation assumes a separation of the complete system (whose state obeys the Wheeler-DeWitt equation and the quantum diffeomorphism invariance) into two parts, in complete correspondence with the way a Born-Oppenheimer approximation is implemented. The separation of the physical system into two parts, one purely quantum and the other semiclassical, is essentially achieved by separating the gravitational from the non-gravitational degrees of freedom through an expansion of the constraints (12)-(13) with respect to the Planck mass m_Pl. We notice, however, that there are gravitational degrees of freedom that can be included in the purely quantum part (quantum density fluctuations of gravitational origin, for example). Equation (19), formally similar to the Schrödinger equation, is an equation with functional derivatives, in which the variable x is related to the 3-dimensional metric h_ab. As previously mentioned, we recall that, due to the definition of the DeWitt metric G_abcd, a negative kinetic term emerges from the conjugate momentum p^ab. In the following section, let us develop a simple model of the black hole evaporation stage [25], which incorporates one interesting feature of the Wheeler-DeWitt equation, namely the indefinite kinetic term, and study some of its consequences. The main objective here is to estimate the degree of entanglement between the Hawking radiation and black hole quantum states when we take into account a simple form of back reaction between the two. IV. SIMPLIFIED MODEL WITH A SCHRÖDINGER TYPE OF EQUATION Equation (19) is a functional differential equation whose wave functional solution depends on the 3-metric h_ab describing the black hole and on the matter fields. Finding solutions of that equation is, in general, an almost impossible task. However, we can consider a simpler model, assuming a Schrödinger type of equation, which was first considered in [2]. In that work it was argued that, in order to study the effect of the indefinite kinetic term in (19) as a first approach, and since we are dealing with an equation which is formally a Schrödinger equation, we could restrict our attention to a finite number of degrees of freedom.
This first approach has been successful in cosmology, allowing one to solve the Wheeler-DeWitt equation in minisuperspace, which reduces a functional differential equation to an ordinary differential equation. We do not claim that we are doing exactly the same thing, but rather that a reduction of the physical system to a finite number of degrees of freedom could retain some aspects of quantum gravity that can be studied using much simpler equations. One may reasonably worry that approximating a functional differential equation by a Schrödinger type of ordinary differential equation is an oversimplification. Nevertheless, it is also reasonable to think that some physical insight can be obtained by assuming that the indefinite character of the functional equation is mimicked in the simpler model. Let us consider some assumptions in order to obtain the simpler equation. 1. Assuming that the hamiltonian Ĥ^m includes black hole and Hawking radiation parts, and ignoring other degrees of freedom, the simpler equation can take the form of equation (20). This equation, in which a negative kinetic term appears and plays the role of the functional derivative with respect to the metric h_ab in (12), contrasts with an exact Schrödinger equation. Because the variable x is related to the metric h_ab, we propose to identify it with the variation of the black hole radius 2GM/c², which turns out to be also a variation in the black hole mass or energy. The variable y will correspond to Hawking radiation with energy m_y. 2. Notice that the kinetic term of the gravitational part of the hamiltonian operator is suppressed by the Planck mass. As long as the black hole mass is large, this kinetic term is irrelevant; in that case one would have only the Hawking radiation contribution. If, instead, we consider the last stages of the evaporation process, when the black hole mass approaches the Planck mass, then the kinetic term associated with the black hole state becomes relevant. 3. The time parameter t in equation (20) was obtained by means of a Born-Oppenheimer approximation and embodies all the semiclassical degrees of freedom of the universe. 4. In equation (20) we consider harmonic oscillator potentials. Besides being simple potentials, they allow for analytical solutions and, in the Hawking radiation case, this regime is realistic [35,36]. For the black hole, this potential is an oversimplification, which can be far from realistic. However, it can help to disclose behaviours also present for more complex potentials with respect to the entanglement between black hole and Hawking radiation quantum states during the evaporation process. Furthermore, before dealing with the full problem, simpler models can identify physical phenomena that will reasonably manifest themselves independently of the complexity of the problem (for example, the infinite square well helps to understand energy quantisation in the more complex Coulomb potential). Let us assume that equation (20) is solved by the method of separation of variables, so that we obtain the two equations (22) (here ψ*_x denotes the complex conjugate of ψ_x). Equations (22) describe an uncoupled system comprising a harmonic oscillator and an inverted one. In figure 1 we illustrate the fact that a regular harmonic potential with a negative (indefinite) kinetic term is equivalent, from the quantum point of view, to the situation where an inverted oscillator potential has a positive kinetic term (FIG. 1: the behaviour of a particle in an inverted oscillator potential, with a positive kinetic term, is equivalent to the behaviour of a particle, with a negative kinetic term, in a regular harmonic oscillator potential).
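The equivalence illustrated in figure 1 can be checked directly from the equations of motion; a quick sketch, with ℏ = 1 and generic mass m and frequency ω:

```latex
H_1 = -\frac{p^2}{2m} + \frac{1}{2} m \omega^2 x^2
\;\;\Rightarrow\;\;
\dot x = -\frac{p}{m},\;\; \dot p = -m\omega^2 x
\;\;\Rightarrow\;\;
\ddot x = \omega^2 x ,
\qquad
H_2 = +\frac{p^2}{2m} - \frac{1}{2} m \omega^2 x^2
\;\;\Rightarrow\;\;
\dot x = +\frac{p}{m},\;\; \dot p = +m\omega^2 x
\;\;\Rightarrow\;\;
\ddot x = \omega^2 x .
```

Both Hamiltonians therefore drive the same runaway (cosh/sinh) growth of x instead of oscillation, and since H_1 = −H_2 the corresponding quantum evolutions differ only by time reversal, which is the sense in which the two situations in figure 1 are equivalent.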
In both situations we have to deal with an unstable system, which would correspond to the variable x varying uncontrollably. A wave function ψ_0(x', 0) that initially has a Gaussian profile will evolve over time according to equation (23), where G_inv(x, x'; t, 0) is the inverted oscillator Green function [37,38], which can be obtained from the harmonic oscillator Green function by redefining ω → iω. The wave function obtained from the computation of equation (23) shows a progressive squeezing of the state in phase space, which means an increasing uncertainty in the value of x. Physically, in this simplified model, that would correspond to an unstable variation of the Schwarzschild radius or mass of the black hole. Conceivably, a strong squeezing of the black hole state would then occur [39], driving its disappearance. In the next section we will introduce the effect of a back reaction, effectively coupling the black hole and Hawking radiation quantum states, and see that, under particular circumstances, the system becomes stable and strongly entangled. V. BACK REACTION AND SCHRÖDINGER EQUATION In this section we review and reproduce some results obtained in reference [25]. Notice that in this work a slight change in some definitions will be carried out; in addition, some new aspects of the model will be discussed. In order to investigate the effects of a back reaction between the Hawking radiation and black hole states, let us consider a linear coupling µxy between the variables, where µ is a constant, leading to equation (26). We should emphasize that, following the Born-Oppenheimer and WKB approximations used to obtain Eq. (19), any phenomenological back reaction effect, here parametrized by µ, must be suppressed by the Planck mass [2]. Therefore we can consider that this back reaction coupling constant, like the kinetic term of the gravitational part of the hamiltonian operator, only becomes relevant when the black hole approaches the Planck mass. Consequently we can assume that the constant scales as µ ∼ µ̃/m_Pl. Suppose the initial state describing the black hole is the coherent state (27), which represents a black hole whose Schwarzschild radius oscillates around the value 2GM/c². A coherent state represents a displacement of the harmonic oscillator ground state |0⟩ in order to get a finite excitation amplitude α. For the Hawking radiation initial state, let us consider the Gaussian distribution (30), which describes the radiation state [35,36,39] for a black hole with Schwarzschild radius 2GM/c². Under these conditions, we can expect that, after the product state (27)-(30) evolves in time, the emerging final state |Ψ⟩ will be entangled, because the hamiltonian in equation (26) includes a coupling between x and y. Determining the time evolution of the initial state ψ^α_x0 ⊗ ψ^H_y0 would be straightforward if we had the propagator related to the hamiltonian of equation (26). Since this propagator is not available, we can instead redefine variables so as to rewrite equation (26) in the form (33). In that equation, the coordinate redefinition (32) implies new frequencies (34) and a coupling K. If we impose that, in the new variables (Q_1, Q_2), the coupling K vanishes, it follows that the coupling in the original variables (x, y) is given by equation (36), with θ ∈ ]−π/4, π/4[. We can check that µ = 0 for θ = 0.
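For completeness, the harmonic and inverted oscillator propagators used above have the standard forms below; this is a sketch with ℏ = 1, and the normalisations in the references cited as [37,38] may differ:

```latex
G_{\rm harm}(x, x'; t, 0) = \sqrt{\frac{m\omega}{2\pi i \sin \omega t}}\;
\exp\!\left\{ \frac{i\, m\omega}{2 \sin \omega t} \left[ \left(x^2 + x'^2\right) \cos \omega t - 2 x x' \right] \right\},
\qquad
G_{\rm inv}(x, x'; t, 0) \;=\; G_{\rm harm}\big|_{\omega \to i\omega}
= \sqrt{\frac{m\omega}{2\pi i \sinh \omega t}}\;
\exp\!\left\{ \frac{i\, m\omega}{2 \sinh \omega t} \left[ \left(x^2 + x'^2\right) \cosh \omega t - 2 x x' \right] \right\}.
```

The substitution ω → iω simply trades sin and cos for sinh and cosh, which is the continuation referred to in the text.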
In the numerical simulations, to calculate the relevant physical quantities, we will assume that m_y = 10⁻⁵ m_Pl and ω_y² = 10⁵ ω_x², such that the potentials in equation (26) are of the same order, i.e., m_y ω_y² ∼ m_Pl ω_x². This corresponds to assuming that the Hawking radiation energy is well below the Planck scale and that the fluctuations of the Schwarzschild radius have a significantly smaller frequency than the Hawking radiation energy fluctuations. The choice of the numerical factor 10⁵ is arbitrary and does not influence the conclusions to be drawn from the results presented in subsequent sections. We can, however, establish that the coupling is defined in an interval that is sufficiently broad to explore the more relevant cases. If we substitute the coupling (36) into the definition of the frequencies (34), we obtain equation (38) for the frequencies in the new coordinates. An important observation is related to equation (38): since we assume that ω_y² ≫ ω_x², Ω_1² remains strictly positive only in a significantly reduced sub-interval of the possible angles θ ∈ ]−π/4, π/4[. We can verify that Ω_1² is positive only when −arctan(ω_x²/ω_y²) < θ < arctan(ω_x²/ω_y²) (39). This means that, for values outside the interval (39), Ω_1² is negative and equation (33) turns out to be a Schrödinger equation describing two uncoupled harmonic oscillators in the coordinates (Q_1, Q_2). In addition, we also have arctan(ω_x²/ω_y²) < |θ| < π/4 ⇒ |µ| > 1 (41), which implies that, in equation (26), when the coupling satisfies |µ| > 1 the system becomes stable and this restrains the influence of the inverted potential. The calculation of the time evolution of the initial state ψ^α_x0 ⊗ ψ^H_y0 in the coordinates (Q_1, Q_2), with the help of the harmonic (25) and inverted oscillator propagators, enables us to obtain Ψ(Q_1, Q_2, t), for which an explicit analytical expression is given in appendix A (equation (A1)). Subsequently, we can use the inverse transformation in order to retrieve the wave function in the original coordinates. This wave function has the generic form (44), where the time-dependent functions can be found in appendix A, more precisely in equation (A4). One of the main objectives of this paper is to quantify the degree of entanglement between the black hole and Hawking radiation quantum states. In order to proceed with that idea we have to define the density matrix of the system. The wave function (44) cannot be factored; hence the initial density matrix |Ψ_0⟩⟨Ψ_0|, corresponding to the factored pure state ψ^α_x0 ⊗ ψ^H_y0, has evolved into a pure entangled state described by ρ_xy. Recalling the status of the classical observers outside the black hole, they can only access the state of the outgoing radiation, i.e., they can only probe part of the system. Therefore, it is important to consider the reduced density matrix ρ_y obtained by taking the partial trace of the system density matrix ρ_xy, i.e., computing ρ_y = tr_x(ρ_xy). The reduced density matrix elements for the black hole and for the Hawking radiation are given respectively by (47), where ⟨x, y|Ψ⟩ ≡ Ψ(x, y, t), and have a generic Gaussian form whose coefficients are given in appendix B (equations (B5)-(B7)) and also depend directly on equation (A4). The diagonal reduced density matrix elements are ρ_Hr(y, y) = |F|² (1/(2Re A)) exp{−2Re(C) y² + [Re(E) y + Re(B)]²/(2Re A) + 2Re(D) y} (49) and, for illustration purposes, in figure 2 we can observe their evolution over time.
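The narrowness of the stability window quoted in (39) is easy to check numerically; a minimal sketch with the illustrative values assumed above (ω_y² = 10⁵ ω_x²):

```python
import numpy as np

# Half-width of the interval where Omega_1^2 > 0, for omega_x**2 / omega_y**2 = 1e-5
ratio = 1e-5
theta_max = np.arctan(ratio)          # ~1.0e-5 rad
full_range = np.pi / 2                # theta runs over (-pi/4, pi/4)

print(f"stable window half-width: {theta_max:.2e} rad")
print(f"fraction of the allowed theta interval: {2 * theta_max / full_range:.2e}")
# Outside this tiny window, which per (41) corresponds to |mu| > 1, the system
# behaves as two stable uncoupled oscillators in (Q1, Q2), restraining the
# influence of the inverted potential.
```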
For the case shown in figure 2 we have taken µ = 1.01 for the value of the back reaction coupling. As we emphasised before, for this value the system is stable, and we can notice that the observed behaviour corresponds closely to squeezed coherent states, with an evident correlation between them (this observation will be corroborated by inspecting the behaviour of the Wigner functions in appendix B; squeezed coherent states are obtained through the action of two different operators on the ground state of the harmonic oscillator, |α, ξ⟩ = D̂(α) Ŝ(ξ)|0⟩, with D̂(α) the displacement operator and Ŝ(ξ) the squeeze operator). VI. ENTROPY, ENTANGLEMENT AND INFORMATION Theoretically, black holes emit radiation, as measured by an infinitely distant observer, with an approximately black body spectrum, with an emission rate in a mode of frequency ω set by the Hawking temperature, while the factor γ(ω) embodies the effect of the non-trivial geometry surrounding the black hole. Soon after this discovery, D. N. Page made important numerical estimates [40-42] of various particle emission rates, for black holes with and without rotation, and of the average evaporation time for a black hole of mass M. Later, he made important conjectures [43] about the Von Neumann entropy of a quantum subsystem described by the reduced density matrix ρ_A = tr_B ρ_AB. If the Hilbert space of a quantum system, in a pure initial random state, has dimension mn, the average entropy of the subsystem of smaller dimension m < n is conjectured to be given by a simple closed expression (see the sketch below); therefore, the given subsystem will be near its maximum entropy log m whenever m < n. Afterwards, he applied this conjecture to the case of the black hole evaporation process [44]. Assuming that initially the Hawking radiation and the black hole are in a pure quantum state, described by the density matrix ρ_AB, he showed that the Von Neumann entropies related to the reduced density matrices (radiation, Hr, and black hole, Bh), S_Hr = −tr(ρ_Hr log(ρ_Hr)) (54), display an information (defined as a measure of the departure of the actual entropy from its maximum value), equation (56). In addition, he also described, through what is today known as the Page curve (a nice recent review can be found in [46]), the way the entropy evolves (see figure 3) while the black hole evaporates. More recently, he has numerically estimated, based on his previous work on the emission rates of several types of particles, the way the Hawking radiation entropy should evolve in time [45]. It is believed that a correct quantum gravity theory should be able to show how the Page curve emerges from the assumption of unitary evolution of the outgoing radiation and black hole quantum states. It seems pertinent to explore what the simplified model under analysis allows us to say about entropy and information. More precisely, we want to estimate how the Hawking radiation entropy and information evolve over time according to equation (56). Considering the reduced density matrices (47), we see that to properly calculate the Von Neumann entropy (52) we have to diagonalize the matrices, i.e. compute their eigenvalues from ∫_{−∞}^{+∞} dy′ ρ_Hr(y, y′) f_n(y′) = λ_n f_n(y). This particular calculation is only known in closed form for a few specific cases, for example for a system of two coupled harmonic oscillators [47], unfortunately a situation distinct from the case studied here, namely the coupling between a harmonic and an inverted oscillator.
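To make the conjecture quoted above concrete, Page's average subsystem entropy for Hilbert space dimensions m ≤ n is usually written as below; this is our sketch of the standard form, which should correspond to the paper's average-entropy and information equations up to notation:

```latex
S_{m,n} \;=\; \sum_{k=n+1}^{mn} \frac{1}{k} \;-\; \frac{m-1}{2n}
\;\;\simeq\;\; \ln m - \frac{m}{2n},
\qquad
I_{m,n} \;\equiv\; \ln m - S_{m,n} \;\simeq\; \frac{m}{2n} .
```

The information held in the smaller subsystem is therefore tiny as long as m ≪ n, which is the statement that a typical subsystem is near maximal entropy.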
Returning to the reduced density matrices (47): solving the eigenvalue problem would allow a great simplification, since the evaluation of the Von Neumann entropy then reduces to S_VN = −Σ_n λ_n log λ_n. However, considering the technical difficulty of the eigenvalue problem, instead of computing the Von Neumann entropy we can estimate the Wehrl entropy [48,49], where H_Bh(x, p) is the Husimi function [50], obtained from a Gaussian average of the Wigner function over phase space. The Husimi function is defined to access the classical phase space (x, p) representation of a quantum state. The Wigner function gives us a rough criterion for how far a quantum state is from its classical limit but, unfortunately, it is not a strictly positive function and cannot be taken as a probability distribution in phase space (it is in fact a quasiprobability distribution). The Husimi representation (which is a Weierstrass transform of the Wigner function) enables us to define a strictly positive function and corresponds to the trace of the density matrix over the coherent state basis |α⟩. If we compare this definition with equations (58) and (59), we can understand the Wehrl entropy, by analogy, as a classical estimate of the Von Neumann entropy. Hence, Wehrl's entropy can be considered a measure of the classical entropy of a quantum system, and it has already been used [51] in the contexts of cosmology and black holes. We should notice that the Wehrl entropy gives an upper bound on the Von Neumann entropy, i.e., S_W(ρ) ≥ S_VN(ρ). We can now obtain, in this simplified model, the evolution over time of the Wehrl entropy and of the information for the Hawking radiation. In figure 4 we show the numerical estimates of the Hawking radiation Wehrl entropy and information. These were obtained from the calculation of the Wigner (appendix B) and Husimi functions, using the reduced density matrix (47). We can observe that the entropy starts with lower values; this corresponds to a stage where the entanglement and correlation between the states are weak. According to figure 2, this happens in a phase where the quantum states become increasingly squeezed and displaced under the influence of the inverted potential. However, when the back reaction begins to grow, the correlations and the degree of entanglement between the two states increase, and consequently so does the entropy, and both subsystems are forced to oscillate (counteracting the inverted potential). Finally, both states return to their initial configurations, which brings a reduction of their entropies. It is in this last phase that, with a decreasing entropy, the information contained in the state describing the Hawking radiation increases, as expected from the Page curve. At this point, we can ask ourselves: how accurately does the estimate S_W(ρ) describe the real behaviour of the Von Neumann entropy S_VN(ρ)? Since the Wehrl entropy satisfies S_W(ρ) ≥ S_VN(ρ), inspection of figure 2 tells us that the variation from lower values of the entropy (initial stage of the time evolution) to higher values (intermediate stage) and again to lower values (final stage) seems to indicate, with reasonable confidence, that the Von Neumann entropy can present a behaviour relatively close to the Wehrl entropy.
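As a self-contained illustration of the Wehrl estimate described above, the sketch below computes S_W = −∫ H ln H dx dp numerically for a displaced, squeezed Gaussian state, the kind of state that appears in this model. It is only a hypothetical stand-in for the actual reduced density matrix (47), and the normalisation conventions may differ from the paper's equations; with the choices made here the coherent-state value is 1 + ln(2π) ≈ 2.84, and squeezing raises S_W.

```python
import numpy as np
from scipy.signal import fftconvolve

# Phase-space grid, with hbar = m = omega = 1 so the coherent-state Wigner
# function has symmetric unit width.
x = np.linspace(-12, 12, 601)
p = np.linspace(-12, 12, 601)
dx, dp = x[1] - x[0], p[1] - p[0]
X, P = np.meshgrid(x, p, indexing="ij")

def wigner_gaussian(x0, p0, r):
    """Wigner function of a squeezed (parameter r) coherent state centred at (x0, p0)."""
    sx2, sp2 = 0.5 * np.exp(-2 * r), 0.5 * np.exp(2 * r)   # variances of x and p
    W = np.exp(-(X - x0) ** 2 / (2 * sx2) - (P - p0) ** 2 / (2 * sp2))
    return W / (W.sum() * dx * dp)                          # normalise on the grid

# Coherent-state smoothing kernel (Weierstrass transform Wigner -> Husimi)
kernel = np.exp(-X ** 2 - P ** 2)
kernel /= kernel.sum() * dx * dp

def wehrl_entropy(W):
    H = fftconvolve(W, kernel, mode="same") * dx * dp       # Husimi function
    H = np.clip(H, 1e-300, None)                            # avoid log(0)
    return -(H * np.log(H)).sum() * dx * dp

print(wehrl_entropy(wigner_gaussian(0.0, 0.0, 0.0)))   # ~ 1 + ln(2*pi) ~ 2.84 (coherent state)
print(wehrl_entropy(wigner_gaussian(3.0, 0.0, 1.0)))   # larger: squeezing raises S_W
```

The same smoothing-then-integration step is what turns the (possibly negative) Wigner function computed in appendix B into the strictly positive Husimi function used for figure 4.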
In addition, the fact that we have considered the unitary evolution of the pure state ψ^α_x0 ⊗ ψ^H_y0 implies that the density matrix of the system remains a pure state (S_VN(ρ_AB) = 0), while the reduced density matrices for the two subsystems correspond to mixed states (S_VN(ρ_A) ≠ 0). VII. CONCLUSIONS Even though the simplified model discussed in this paper was based on modest assumptions (namely about the initial black hole quantum state, among others), it provides a simple mechanism in which one can appreciate the temporal evolution of the entropy and the behaviour of the information (in a classical approach, with the Wehrl entropy being evaluated). The model has the advantage that it can be treated analytically and shows how the coupling of a harmonic and an inverted oscillator can produce results suggesting how the Page curve may emerge. There are certainly many ways in which this model could be made more realistic. However, it would then almost certainly no longer be treatable analytically, which would inevitably deprive it of its pedagogical appeal. On the one hand, questions such as: • how does the squeezing parameter evolve in this model? • what is the exact behaviour, in this model, of the Von Neumann entropy S_VN(ρ)? • which aspects of the discussed estimates would benefit from considering a more realistic model? • how can the same procedure be applied to the functional Schrödinger type of equation (19)? can be pursued as possible future topics of investigation. On the other hand, one can also try to understand to what extent estimates of the entropy and of the Hawking radiation information can be made in gravitational back reaction scenarios such as those proposed in [52,53]. In that proposal, it is assumed that particles moving at high speeds to and from the event horizon cause a drag [14] whose gravitational effects can be described by the Aichelburg-Sexl metric [54,55]. It is worth mentioning that the discussion of the back reaction effects of the Hawking radiation, and of the correct way to derive the Page curve, has been an active field of research in connection with the black hole information paradox. The reader can find complete reviews of the problem and of recent progress in that direction in [56-59]. Finally, the subject of black hole evaporation and the fate of the information enclosed inside the black hole are crucial aspects that any quantum gravity theory candidate will have to unveil. At a time when the first direct evidence of objects that closely resemble what General Relativity describes as a black hole is emerging, our scepticism about their real existence starts to fade away. However, the conceptual problems associated with these hypothetical, strange objects have long challenged the limits of theoretical physics. APPENDIX A In this appendix, we explicitly write the analytical expressions for the computation of equation (42) and the various time functions which help to define the state (44). Although some of the following expressions were originally presented in [25], the re-organisation and the introduction of new time functions used to write equation (44) justify providing the reader with their precise modified forms.
When we reverse the coordinate transformation Ψ(Q_1, Q_2, t) → Ψ(x, y, t), applying the transformations (43), we obtain the state defined in equation (44), with the time-dependent functions listed in equation (A4). Turning to the Wigner functions: upon substitution of ρ_Bh by equation (47) in the definition of the Wigner function, and after some algebraic manipulation, we obtain the black hole Wigner function, whose coefficients, involving Re(C(t)), Re(D(t)) and the mass m_y, are collected in equation (B5). Concerning the Hawking radiation, a similar procedure enables us to obtain the corresponding Wigner function. In figure 5 we display the time evolution of the function W_Bh(x, p) in the interval t ∼ [0, 40], which is related to figure 2. This time interval can approximately be taken as one full cycle of 'oscillation' of the black hole state, i.e., the average time required for the state to return to its initial configuration. Inspecting the aforementioned figure, we notice that the Wigner function of the initial state (first left panel of the figure) describes a coherent state ψ^α_x0, which is displaced from the origin of phase space, in agreement with equation (29). After some time has elapsed (top right panel of the figure), the Wigner function starts to squeeze, deforming its initially circular shape in the density plot into an elliptical one. This illustrates the action of the squeeze operator Ŝ(ξ), besides the displacement around the origin of phase space. Finally, we can observe that a full rotation of the displacement centre occurs around the origin of phase space, while various degrees of squeezing affect the shape of the state. (FIG. 5 caption, in part: the coupling that defines the back reaction is µ = 1.01, with ω_y = ω_x × 10^{5/2} and m_y = m_Pl × 10^{−5}. We verify that, throughout the various stages of the evolution (corresponding to the various panels), the action of the operators D̂(α) (displacement operator) and Ŝ(ξ) (squeeze operator) produces a full rotation of the displacement centre of the initial Wigner function around the origin of phase space, while various degrees of squeezing affect the shape of the state.)
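For orientation, the kind of profile described in figure 5 is that of a displaced, squeezed Gaussian, whose Wigner function has the closed form below; this is a standard expression with ℏ = mω = 1, whereas the actual W_Bh(x, p) of appendix B carries time-dependent coefficients:

```latex
W(x,p) \;=\; \frac{1}{\pi}\, \exp\!\left[ -\,e^{2r} \left(x - x_0\right)^2 \;-\; e^{-2r} \left(p - p_0\right)^2 \right],
```

where (x_0, p_0) is the displacement, which rotates about the origin of phase space, and r is the squeezing parameter that deforms the circular section into an ellipse.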
9,181.2
2021-08-12T00:00:00.000
[ "Physics" ]
Toxoplasmosis seroprevalence in urban rodents: a survey in Niamey, Niger A serological survey of Toxoplasma gondii was conducted on 766 domestic and peridomestic rodents from 46 trapping sites throughout the city of Niamey, Niger. A low seroprevalence was found over the whole town with only 1.96% of the rodents found seropositive. However, differences between species were important, ranging from less than 2% in truly commensal Mastomys natalensis, Rattus rattus and Mus musculus, while garden-associated Arvicanthis niloticus displayed 9.1% of seropositive individuals. This is in line with previous studies on tropical rodents - that we reviewed here - which altogether show that Toxoplasma seroprevalence in rodent is highly variable, depending on many factors such as locality and/or species. Moreover, although we were not able to decipher statistically between habitat or species effect, such a contrast between Nile grass rats and the other rodent species points towards a potentially important role of environmental toxoplasmic infection. This would deserve to be further scrutinised since intra-city irrigated cultures are extending in Niamey, thus potentially increasing Toxoplasma circulation in this yet semi-arid region. As far as we are aware of, our study is one of the rare surveys of its kind performed in Sub-Saharan Africa and the first one ever conducted in the Sahel. online | memorias.ioc.fiocruz.br Mem Inst Oswaldo Cruz, Rio de Janeiro,Vol. 108(4): 399-407, June 2013 Sahel is a sub-arid region that undergoes rapid climatic changes (Lebel & Ali 2009) with dramatic consequences on food production and availability. Such a critical situation leads to a massive rural exodus and extensive urbanisation. Niamey, the main town of Niger, is no exception. Since the 1960s, the population of this rather young city has been undergoing an explosive increase due to a very important demographic growth (Sidikou 2011). The number of inhabitants has increased from ~3,000 in the 1920s, > 30,000 in the late 1950s to 707,000 in 2001 and reached more than 1.2 million in 2010 (Sidikou 2011, Adamou 2012. As often in such cases, this was accompanied by many informal settlements and insufficient tracking of necessary sanitary services. Along with other problems, public health is a primordial concern with low clinical capacities and poor accessibility to medical care. In addition, robust epidemiological data for Niger remains scarce for many major diseases such as malaria and human immunodeficiency virus (HIV). From there, other pathologies are even more poorly documented, when not undetected, due to weak screening programs and/or diagnostic facilities. Among them, the worldwide distributed toxoplasmosis is induced by the intracellular protozoan Toxoplasma gondii whose infection may be asymptomatic to lethal, with primo-infection being particularly dangerous during pregnancy due to subsequent abortion or severe clinical consequences on foetus and neonate. Moreover, toxoplasmosis appears as an opportunistic disease in immuno-depressed patients such as HIV-positive ones (Robert-Gangneux & Dardé 2012). In sub-Saharan Africa, human prevalence [reviewed in Mercier (2010)] ranges from 3.9% in Niger (Delacroix & Laporte 1989) to 83.5% in Madagascar (Lelong et al. 1995). 
In Niger, toxoplasmosis has only been the focus of five studies and seroprevalence values were found to be quite variable, ranging from 3.9-50.5%, with an average of 12.8% for the whole country (Table I). A survey conducted on 218 pregnant women in Niamey showed a slightly higher value (i.e. 15.1%) and the most recent survey for the city indicated a global seroprevalence of 18.1% (Table I). On this basis, previous authors have considered toxoplasmosis not to be of primary importance for public health in Niger. Medical monitoring of pregnancy is still poor, when not absent, for many women, thus making robust statistics difficult to obtain. Adverse perinatal outcomes, including spontaneous abortion and stillbirth, seem not to be rare in Niger: the National Service for Sanitary Information (SNIS) estimates the proportion of stillborn children at 8% (SNIS 2010). In 2010, 37% of patient admissions in the reference maternity hospital in Niamey concerned "abortions" (SNIS 2010). However, such statistics need to be handled with great care since many, if not most, of these cases may be due to complications following illegal abortions (voluntary termination is prohibited in Niger). Such a large proportion of perinatal complications may cast doubt on our perception of the real incidence of the disease in the country. We are aware of no systematic and large-scale monitoring of the disease that would allow one to robustly address the respective role of toxoplasmosis. Transmission to humans and other warm-blooded animals occurs via three primary routes: congenitally, by ingestion of food and water contaminated with oocysts shed into the environment in the faeces of felids (domestic cats or wild felids), or by ingestion of undercooked meat containing tissue cysts. Although felids are the only known definitive hosts, T. gondii may infect all homoeothermic animals (i.e. birds and mammals), which then act as intermediate hosts (Tenter et al. 2000). Among them, rodents are found in most types of terrestrial biotopes, where they constitute important prey for wild as well as domestic felids. Moreover, they are usually among the last wild mammals to persist in highly human-modified environments, like large towns. For these reasons, rodents most probably play a pivotal role in the maintenance and circulation of T. gondii in urban habitats (Dubey & Frenkel 1998, Murphy et al. 2008). A study conducted in the city of Lyon, France, suggested that the low Toxoplasma prevalence in stray cats may be in part associated with low rodent densities (Afonso et al. 2006). Surprisingly, however, epidemiological surveys of T. gondii in rodents are scarce, especially those dealing with tropical regions (Supplementary data). Seroprevalences were found to be highly variable depending on the species and/or the region. In Sub-Saharan Africa, where only two studies have been conducted (Supplementary data), 100% of Thryonomys swinderianus individuals (n = 104) were found seropositive in South Western Nigeria, while only 2.7% and 2.3% of wild and commensal rodents, respectively, were found positive in Gabon (n = 37 and 43) (Supplementary data). To our knowledge, no such survey has ever been conducted in Sahelian countries. Recently, human-mediated transport of invasive rodents has been shown to be responsible for the import of allochthonous human pathogens (Dobigny et al. 2011).
This motivated a long-term program that aimed to investigate rodents and rodent-borne human pathogens in the city of Niamey. As part of this wider project, we here provide serological results for Toxoplasma that were obtained from 766 rodents. Seroprevalence data are then discussed in regard to native and invasive rodent host species distribution, as well as urban environments in terms of transmission risk to human populations. MATERIALS AND METHODS Sampling and species-specific identifications of rodents -From 2009-2011, a multi-approach monitoring of urban rodents was performed in order to address several issues including epidemiological ones. To do so, more than 14,560 night-traps were performed using both Sherman and locally made wire-mesh traps in various sites and habitats (houses, gardens, markets as well as industrial-like structures) dispersed throughout the city. As part of this project, we here focus on a serologic survey (Figure) (part of a Spot Image, scene reference 506 132 308 121 010 151 32 T, CNES 2008 © , obtained under licence through the ISIS program, file 553) where they were precisely geo-referenced (e.g., each individual habitation where rodents were captured) in order to be mapped onto a satellite image. However, for the purpose Supplementary data as well as Figure, they were aggregated for a clearer visualisation at the whole town scale. Rodents were live-trapped and brought to the lab where they were usually euthanised within one-eight days, except for 45 of them which were autopsied within eight-23 days (data not shown). All procedures were carried out in agreement to current ethical guidelines for animal care. The age was scored according to weight [following Granjon & Duplantier (2009)] together with sex activity (external testicles plus active seminal vesicles in males; developed mammaes and uterus, presence of embryos and/or embryo scars in females). Intracardiac blood was sampled immediately after death and deposited onto LDA22 Guthrie cards (LDA Laboratory, Saint Brieux, France). The blotting papers were dried and stored in a plastic bag at room temperature (RT). In order to avoid misidentification of rodents due to the possible co-existence of sibling species in West African rodents [reviewed in Granjon & Duplantier (2009)], special attention was paid to species-specific diagnosis. Serological survey of T. gondii -Dried blood spot samples collected on Guthrie card were tested for the detection of T. gondii antibodies. Seven hundred and sixty six rodents were screened at 1:16, 1:32, 1:320 and 1:640 dilutions using a modified agglutination test (MAT) technique (Desmonts & Remington 1980) adapted for blood sample from Guthrie cards, with a cut-off titre at 1:16. Two 5 mm diameter dried blood spot were punched out of each blotting paper circle and placed into the well of a flat bottomed microtitre plate. The blood was eluted out in 80 µL of phosphate buffered saline, pH 7.2 (bioMérieux). Plates were covered and left to elute overnight at RT and at 300 rpm agitation. Ten microlitres of each eluted sample was used in MAT technique. For serological control, fresh blood from seronegative (not infected by T. gondii) and seropositive (experimentally infected with a control of the presence of cysts into the brain) Swiss mice (Mus musculus, Charles River France, L'Arbresle, France) for T. gondii antibodies were spotted onto a 5 mm diameter circle on Guthrie card and allowed to dry at RT for 24 h, before storage at RT in sealed bags. 
Antibody titres were determined as the last dilution at which an agglutination pattern could be read in comparison with the negative and positive controls. Statistical analysis - Descriptive analyses of the serological data were based on frequencies and percentages for qualitative variables and on means and standard deviations for quantitative variables. Relationships between rodent seroprevalence and factors such as sex, species and habitat were investigated through the chi-squared test or Fisher's exact test, depending on the expected sample size. For each significant factor, a Cochran-Mantel-Haenszel chi-squared test was conducted in order to obtain a p-value adjusted for the other factors. All statistics were performed using the software R v2.10.1 (R Development Core Team 2009). [Figure caption: distribution map of the different trapping sites within Niamey, with squares, circles and triangles corresponding to (i) fallow lands and gardens, (ii) habitations and (iii) other site types (industrial-like spots, public buildings, markets and transport stations), respectively (A); respective proportions of rodent species trapped at each trapping site, with circle sizes proportional to the number of rodent captures (B); respective proportions of seropositive and seronegative rodents detected at each trapping site, with circle sizes, as in B, proportional to the number of rodents investigated for Toxoplasma serology, and white and red colours indicating seronegative and seropositive rodents, respectively (C).] RESULTS From the 46 trapping sites sampled for the present study, 766 rodents could be surveyed for Toxoplasma serology: 123 were black rats (Rattus rattus), 61 were house mice (M. musculus), 66 were Nile grass rats (Arvicanthis niloticus), 12 were giant Gambian rats (Cricetomys gambianus), two were slender gerbils (Taterillus gracilis) and 502 belonged to the genus Mastomys (Supplementary data). Among the latter, 287 were investigated using the PCR-RFLP assay designed by Lecompte et al. (2005) and all but two individuals displayed characteristic Mastomys natalensis profiles as defined by Lecompte et al. (2005). Two animals possessed atypical profiles (not shown) and were therefore fully sequenced for their cytochrome b mitochondrial gene. These DNA sequences allowed us to barcode them and to unambiguously identify them as M. natalensis (Dobigny et al. 2008, 2011). In addition, all karyotyped Mastomys (20 of which had not been molecularly characterised) showed a 2N = 32 karyotype typical of M. natalensis (Dobigny et al. 2008). These 307 unambiguously identified M. natalensis represent 61.2% of the whole Mastomys sample available in the present study. Moreover, they originated from all 30 trapping sites where Mastomys individuals were trapped (Supplementary data). No representative of other Mastomys species has ever been found within the city of Niamey, even in the framework of wider investigations (n > 650 Mastomys) (K Hima, unpublished observations). We can therefore conclude that all Mastomys trapped in the present survey belong to a single species, namely M. natalensis. Seropositive individuals included both juvenile and adult animals as well as males and females (Table III). In most instances, they corresponded to a single seropositive specimen found among five to 65 specimens from one particular trapping site; in only two cases (CGA and J-LMO) did we find several seropositive animals within the same trapping site (Table III, Supplementary data).
Finally, in three instances, seropositive individuals were part of multiple captures (i.e. several rodents trapped together inside the same trap): two juvenile M. natalensis that were both seropositive, one adult R. rattus female trapped with a seronegative juvenile and two juvenile A. niloticus caught with a seronegative female (Table III). DISCUSSION The present study, the first one of its kind in Sahel, relies on an important collection of rodent blood samples (n = 766). Represented rodent species are typical Sahelian species that were all already known in the area (Dobigny et al. 2002), with both native (A. niloticus, C. gambianus, M. natalensis and T. gracilis) and invasive (M. musculus and R. rattus) species (Granjon & Duplantier 2009). Although differentiating between rural and urban environments in Niamey may sometimes be tricky since the two types of habitats are often continuous when not fully intermingled (houses closely surrounding or lying within gardens and rice fields, gardens within familial concessions etc), rodent species distribution in regards to biotopes was quite clear: A. niloticus, C. gambianus and T. gracilis inhabit gardens and fallow lands, while M. natalensis, R. rattus and M. musculus are typical commensal animals. Global T. gondii seroprevalence in rodents from Niamey was low (< 2%), a result that closely matches those found in Gabon during one of the rare other rodent-focused study performed to date in Sub-Saharan Africa (Mercier 2010) (2.3% and 2.7% of 43 commensal and 37 wild rodents, respectively). This parallels previous studies where positive urban rodents are usually rare. For instance, surveys in Brazilian cities showed 4.7% (out of 43 M. musculus and R. rattus), 5% (out of 181 R. rattus) and 0.46% (out of 217 R. rattus, R. norvegicus and M. musculus) of Toxoplasma rodent careers in the cities of Umuarama, Londrina and São Paulo, respectively (Ruffolo 2008, Araujo et al. 2010, Muradian et al. 2012) (see also Supplementary data for a review about data for rodents in the tropics). As for other studies, the seroprevalence results may be discussed according to the sensitivity and specificity of the serological test. Seroprevalence in our study was evaluated through a modified-agglutination test which is the most commonly used for defining a possible infection in diverse species of animals, as there is no need for specific secondary antibodies. The cut-off is variable according to species and to studies (1:5-1:25) (Dubey & Frenkel 1998, Dubey 2010. The most commonly considered cut-off is 1:25, but T. gondii has sometimes been isolated from animals with antibody titres below 1:25. That explains why we choose the cut-off of 1:16 that represents the lowest dilution available after elution of dried blood spot. The gold-standard for detection of T. in infected animals and hence to define the true prevalence is a mouse bioassay. This was not possible in the context of Niger. PCR-based method for Toxoplasma DNA detection on tissue samples (brain, muscles) is known to have a lower sensitivity than bioassay and serology (Hill et al. 2006, Truppel et al. 2010. When considered separately, seroprevalences in Niamey show quite significant variations depending on the species, with low (< 2% in Mastomys, Mus and Rattus) to moderate (> 9% in Arvicanthis) values. This once again fits to what was observed for rodents elsewhere in the World, with species-specific seroprevalence ranging from close to null (e.g., 0.035% of 571 house mice in Panama) (Frenkel et al. 
1995) to 100% (e.g., 104 T. swinderi-anus in Nigeria) (Arene 1986) (Supplementary data). It is also noteworthy that the same rodent species can display extremely different seroprevalence depending on localities or countries. For instance, seroprevalence in R. rattus from Niamey is 1.6% while it reaches 3% (out of 238 black rats) in Micronesia (Wallace 1973a, b), up to 50% (out of 74) in the Philippines (Salibay & Claveria 2005). These specific as well as geographic variations point toward a complex T. gondii epidemiology that most probably involves several interacting biotic and environmental factors (such as hosts communities structure, individual immunologic characteristics, climatic variables, water, landscape physiognomy and composition, as well as their respective spatio-temporal dynamics), thus making each situation potentially different from one another, even locally (Afonso et al. 2006). Seropositive rodents were recorded across the year (Table III), encompassing all of the Sahelian seasons [from the warm and dry season (March and April), through the rainy season (June), to the cool and dry season (October and November)], thus suggesting that Toxoplasma infection may occur throughout the year in Niamey's rodents. However, diachronic monitoring within the same site was not feasible, thus precluding any conclusion about potential seasonal seroprevalence peaks. Another question about the Toxoplasma sylvatic cycle is vertical transmission from a female rodent to its litter (Owen & Trees 1998, Marshall et al. 2004, Hide et al. 2009). Although our data are limited both in nature (we score antibodies, not proper infection cases) and sample size, we can rely on three instances of multiple hence simultaneous captures to partly address this point (Table III). Indeed, when an adult female is caught with one or several juveniles, one can confidently consider that they are mother and descents; in the same manner, co-captured juveniles have good chances to belong to the same litter (Granjon & Cosson 2008) (and references therein). First, two seropositive juveniles of multimammate rats were captured together (NIA 243 and NIA 243b) (Table III). Unfortunately, no data about any adult is available here, thus making it impossible to decipher between independent environmental infections -for example at the same place, such as the nest -and vertical transmission. Second, an adult seropositive female of the black rat (NIA-CGA-15a) (Table III) was caught with a seronegative juvenile. Third, a triple capture included a seronegative adult female and two seropositive juveniles of A. niloticus (NIA-LMO-20 and NIA-LMO-21) (Table III). These two latter cases bring poor support (though not refute) to vertical transmission and rather suggest that animals get infected from the environment (soil and water). Interestingly, the more typical commensal species found in Niamey (M. natalensis, M. musculus and R. rattus) all display low seroprevalences. In particular, only six individuals of the native and widespread species in Niamey, i.e. M. natalensis, were found with detectable Toxoplasma antibodies in spite of a large sample size (n = 501). This species is found within houses in all investigated parts of the city. Importantly, in these urban districts, cats may be numerous since a recent survey in 170 habitations in Niamey revealed that 119 of them (70%) may be associated with the presence of domestic or stray cats (Garba 2012). 
These cats mainly survive from garbage and wild preys, something that may maximize the risk for them to get infected by ingestion of infected rodents. Low seroprevalence in Mastomys (which is, from far, the dominating species in most habitations, hence the most susceptible to be a major cats' prey) may limit cat predation-mediated infections through commensal rodents, hence in turn decreasing potential transmissions from cats to humans. Another important aspect for public health is the similarly low seroprevalences observed in M. musculus and R. rattus. Indeed, these two invasive species recently established in Niamey (Garba 2012) and it is possible that their populations may potentially extend within the city, potentially partly replacing the native M. natalensis (as this was observed for instance in some parts of Senegal) (Duplantier et al. 1991). Bovine and ovine meat is traditionally well cooked in Niger. Moreover, no seropositive rodent could be found in our large sample (n = 59 black rats) from the slaughter house (ABA) (Supplementary data). In addition, rodent meat consumption by humans is rather rare in Niger, especially in Niamey and most exclusively concerns young boys that occasionally hunt in gardens. Also, previous studies conducted in Nigeria (Olusi et al. 1994) suggest that rodent meat consumption, even not or poorly cooked, may not play a major role in human contamination. For all these reasons, following previous authors (Develoux et al. 1988, Julvez et al. 1996, we believe that contamination through meat consumption is most probably anecdotal in Niamey. Low levels of T. gondii prevalence in both human (see above) and rodents (this study) are congruent with Sahelian climatic conditions such as very low hygrometry, soil and air temperatures as well as high ultraviolet irradiations levels which are poorly suitable for oocysts survival and sporulation [Dumas et al. (1991) reviewed in Tenter et al. (2000)]. This most probably also limits the chance of environmental contaminations. Nevertheless, such extreme and unfavourable conditions may be locally counteracted by human-mediated modifications of the habitat. In particular, the possibly major role of direct waterborne contamination has been receiving increasing support [e.g., reviewed in Jones & Dubey (2010)]. In the absence of other feasible explanations, water was even speculated as a major source of toxoplasmic infection in pregnant women and children from Northern Niger (Dumas et al. 1991). Interestingly, we found here significantly higher seroprevalence in A. niloticus (9.1%) which, in Niamey, is only found within irrigated gardens (Garba 2012). It was found in six out of the seven gardens that were investigated in the present survey (Supplementary data) and seropositive Nile grass rats were found in three of them (J-CYA, J-DAR and J-LMO) (Figure, Supplementary data). Unfortunately, we were not able to statistically address this particular issue here, since we could not decipher between Arvicanthis-specific epidemiological properties and environmental (i.e. garden-associated) conditions. If rodent-borne toxoplasmosis was to be more frequent in such habitats/species, as strongly suggested by our data, Toxoplasma human prevalence in Niamey may increase during the coming years, following current extension of irrigated and cultivated surfaces all along the Niger River as well as the Gountou Yena wadi which both cross the city (Djima et al. 2010). 
Indeed, food habits are clearly switching towards higher consumption of vegetables (e.g., salads, cabbages) that are produced in urban gardens and sold directly in the different markets of town. Sources of watering are the river itself and/or wells where water temperatures should be consistent with oocyst survival (Jones & Dubey 2010). Rodents such as Nile grass rats feed mainly on the cultivated vegetables (Dobigny et al. 2002) (our own observations and many farmers' personal communications). It was previously shown that risk of infection in French cats was higher with warm and moist weather (Afonso et al. 2006). In Niamey, temperatures are always high (the coldest month January is characterised by a normal minimal temperature of 16.6ºC for the 1971-2000 period) (CRA meteorological database) and regular human-mediated irrigation may locally compensate the Sahelian aridity, thus favouring Toxoplasma infection of rodents inhabiting gardens. As such, the connection in Sahelian cities between oocysts, watered vegetables and rodents could be a key element of Toxoplasma circulation that may deserve to be further scrutinised. To our knowledge, the present study is the first one to focus on T. gondii epidemiology in a Sahelian community of rodents. In spite of a large sample size, seroprevalence was found to be rather low, with a possible exception in A. niloticus that may sign species-specificity and/ or a predominant role of water-mediated infection in irrigated gardens. For a clearer view of the whole picture, several aspects need to be investigated. First, a proper study of true infection cases deserves to be conducted to confirm the absence of vertical transmission. Second, other epidemiological agents, such as water, cats, cattle and, of course, human are important to include. Finally, genomic data about of T. gondii that circulate in Africa are very rare and no data exist for Niger. Relevant analyses are thus urgently required to fill this gap since infectivity and morbidity of toxoplasmosis have been related to the protozoan genotype (Ajzenberg et al. 2002, 2009, Boothroyd & Grigg 2002.
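As a side note on the species contrast discussed above (9.1% seropositive A. niloticus versus roughly 1-2% in the commensal species), below is a minimal sketch of the kind of Fisher's exact test mentioned in the Methods, using counts implied by the text (6/66 A. niloticus versus 6/501 M. natalensis). This is an illustrative re-analysis, not the authors' computation:

```python
from scipy.stats import fisher_exact

# Counts taken from the text: 6/66 seropositive Arvicanthis niloticus
# versus 6/501 seropositive Mastomys natalensis.
table = [[6, 66 - 6],
         [6, 501 - 6]]
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")
```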
5,622.4
2013-06-01T00:00:00.000
[ "Biology" ]
Frequent epigenetic inactivation of spleen tyrosine kinase gene in human hepatocellular carcinoma. Purpose: The aim of present study was to investigate the methylation and expression status of spleen tyrosine kinase (SYK) in human hepatocellular carcinoma (HCC) and to evaluate this information for its ability to predict disease prognosis. E-cadherin and TIMP-3 methylation was also analyzed here as control because both were associated with poor prognosis in some types of tumors. Experimental Design: We analyzed the methylation status of SYK, E-cadherin, and TIMP-3 in 124 cases of HCC and assessed the correlation of such methylations with clinicopathologic variables and prognosis after tumor resection. Results: We found that SYK, E-cadherin, and TIMP-3 genes were methylated in 27%, 27%, and 42% of HCC neoplastic tissues, respectively. The loss of SYK mRNA or Syk protein expression was highly correlated with SYK gene methylation. The patients with methylated SYK in neoplastic tissues had a significantly lower overall survival rate after hepatectomy than those with unmethylated SYK. No significant difference in overall survival rates, however, was found between groups of patients with methylated and unmethylated E-cadherin or TIMP-3. Patients with negative Syk protein expression had a significantly lower overall survival rate than those with positive Syk protein expression. Multivariate analyses indicated that factors affecting overall survival were tumor-node-metastasis stage, Child-Pugh classification, SYK methylation, or Syk protein status. Conclusions: Our results indicate that SYK methylation and loss of Syk expression in HCC neoplastic tissues are independent biomarkers of poor patient outcome and that determination of SYK methylation or Syk expression status may offer guidance for selecting appropriate treatments. Hepatocellular carcinoma (HCC) is the third most common cause of cancer death in the world (1). China has one of the highest prevalence of HCC, largely because carriers of chronic hepatitis B account for >10% of its population (2). Although the incidence of HCC in the United States is relatively lower, the reported new cases have been increasing steadily (3). The prognosis for patients with HCC is generally poor, even after surgery or chemotherapy. The 5-year overall survival rate is between 35% and 41% after resection of primary tumors (4,5) and between 47% and 61% after liver transplantation (6). Systemic chemotherapy gives a low response rate of only 10% to 20% and has shown no significant benefit with regard to overall survival (7). Given this poor therapeutic efficacy, the development of biomarkers for early detection and accurate prognosis of HCC is crucial for prescribing the most timely and effective treatment. Although the etiology of HCC remains unclear, chronic infection with hepatitis B or C virus, chemical carcinogens (aflatoxins), and other environmental and host factors have been linked to hepatocarcinogenesis (8,9). In China, most cases of HCC develop from liver cirrhosis with chronic infection of hepatitis B virus and/or chronic exposure to aflatoxin B1. In Western countries, however, chronic alcoholism and chronic infection with hepatitis C virus are the major etiologic factors. These various factors are believed to induce a spectrum of molecular alterations that contribute to the initiation and progression of HCC, including the genetic and epigenetic inactivation of tumor-suppressor genes (8,9). 
Similar to what has been shown in other tumor types, DNA methylation frequently occurs in HCC, represented by p16, p15, GSTP, E-cadherin, TIMP-3, APC, SOCS-1, RASSF1A, and 14-3-3d (10 -14). The prognostic value of methylation of these genes in HCC was either not systematically studied or was found not important in HCC. The spleen tyrosine kinase (SYK) is a tumor/metastasis suppressor gene recently found to be silenced through DNA methylation in breast cancer (15) and T-lineage acute lymphoblastic leukemia (16). Loss of SYK expression has been implicated in increased invasiveness and proliferation of breast tumors (17). Concordantly, overexpression of SYK was shown to inhibit the invasiveness, proliferation, and motility of breast cancer cells (17 -20). SYK was regarded as a novel regulator of metastatic behavior of melanoma cells (21). Decreased SYK expression in primary breast tumors was shown to predict shorter survival among cancer patients (22). Given that SYK methylation is primarily responsible for the loss of SYK expression, aberrant SYK promoter hypermethylation may serve as a valuable prognostic marker. In this study, we correlated epigenetic alterations of SYK with clinical and pathologic variables to determine its prognostic value in HCC. Because methylation of E-cadherin and TIMP-3 have been shown to be associated with poor prognosis in gastric and esophageal cancer (23,24), respectively, we also analyzed the E-cadherin and TIMP-3 methylation status in parallel to compare their prognostic value with that of SYK methylation. Patients and Methods Cell lines. Liver cancer cell lines HepG2 and Hep3B were purchased from the American Type Culture Collection (Manassas, VA) and maintained in recommended culture conditions. Cells were maintained at 37jC in a humidified environment containing 5% CO 2 . Study population and tissue samples. One hundred and twenty-four patients who were consecutively diagnosed with HCC and had undergone hepatectomy from 1998 to 2001 in a single group at the Department of Hepatobiliary Oncology, Sun Yat-sen University Cancer Center, were enrolled in the study. Tissue samples, including 124 samples from primary tumors and 34 samples from matched adjacent nonneoplastic liver tissues, were archived in the liver tumor bank of the institution and stored at À80jC until use. All nonneoplastic and neoplastic samples were histologically confirmed. Neither chemotherapy nor radiation therapy was given before tumor excision. The tumor stages of HCC were classified according to the tumor-node-metastasis (TNM) criteria (25). The degree of underlying cirrhosis was graded, as follows, based on the size of gross cirrhotic nodules and histologic examination: (a) No cirrhosis: The liver was soft and smooth with no cirrhotic nodules. No pseudolobule formation was found microscopically. (b) Mild cirrhosis: The largest nodule on liver surface was <0.4 cm, or cirrhosis was identified by microscopic examination. (c) Moderate cirrhosis: The degree of cirrhosis was between mild and severe cirrhosis. (d) Severe cirrhosis: The largest cirrhotic nodule on liver surface was >0.8 cm, or the liver was notably deformed and complicated by portal hypertension. The study protocol was approved by the Clinical Research Ethics Committee of Sun Yat-sen University Cancer Center. Methylation-specific PCR. A blinded methylation-specific PCR (MSP) analysis was carried out; no clinicopathologic or follow-up data were revealed to the bench researchers until the MSP results were finalized. 
Genomic DNA was isolated from frozen tissue by digestion with proteinase K, followed by standard phenol/chloroform extraction and ethanol precipitation. Sodium bisulfite (Sigma, St. Louis, MO)-induced conversion of genomic DNA was done as described previously (15). The modified DNA was subjected to a two-step MSP protocol to determine the methylation status of SYK, E-cadherin, and TIMP-3 promoter regions (15,26,27). Primers were designed to distinguish between bisulfite-sensitive and bisulfite-resistant modifications of unmethylated and methylated cytosines, respectively. For the first-round MSP, a 30-µL reaction that contained 30 ng bisulfite-treated DNA was processed in 40 thermal cycles. An aliquot (2 µL) of diluted (1:40) PCR product was subjected to the second-round PCR in another 30-µL reaction. For the SYK gene, both methylation and unmethylation primers were included in the same reaction. For E-cadherin and TIMP-3, separate reactions for methylation and unmethylation detection were carried out. The primer sequences, PCR conditions, and product sizes for each gene are listed in Table 1. To prepare the positive methylation control, 1 µg genomic DNA from normal human liver was treated in vitro with SssI methyltransferase (NEB, Beverly, MA), yielding completely methylated DNA at all CpG-rich regions. Untreated genomic DNA was used as negative control. For positive (SssI-treated) or negative (SssI-nontreated) controls, 1 µg DNA each was modified by sodium bisulfite. Thirty nanograms of bisulfite-treated control DNA template underwent nested PCR amplification side by side with testing specimens. H2O was also used as negative control in nested MSP. The PCR products were visualized by agarose gel electrophoresis and ethidium bromide staining. In some experiments, cells were treated for 5 days with a DNA methyltransferase inhibitor, 5-aza-2′-deoxycytidine (Sigma), at a final concentration of 2.0 µmol/L. Cells were then collected for RNA extraction. Immunohistochemical assay. Formalin-fixed, paraffin-embedded sections of HCC tumors and adjacent nonneoplastic liver tissues were subjected to immunostaining with an antibody against Syk using the rabbit EnVision Plus kit (DakoCytomation, Carpinteria, CA). Briefly, 5-µm-thick tissue sections were deparaffinized, rehydrated, and subjected to antigen retrieval by boiling in sodium citrate buffer (10 mmol/L, pH 6.0). The sections were incubated at 4°C overnight with Syk primary antibody (1:200 dilution; Cell Signaling, Beverly, MA) and then stained with 3,3′-diaminobenzidine. After visualization of immunoreactivity, the sections were counterstained with hematoxylin and mounted. The immunostained sections were evaluated without any knowledge of the patients' clinical information and status of MSP and RT-PCR of SYK. Normal liver tissues were taken as internal positive controls. The stains were graded as follows: (a) positive when immunoreactivity is equivalent to that seen in normal liver cells or is moderately decreased; and (b) negative when immunoreactivity is weak or there is no immunoreactivity. Statistical analysis. All clinicopathologic and follow-up data were collected in a database. Overall survival times were measured from the date of resection of primary tumors to the date of death or of the last follow-up. Survival curves were constructed using the Kaplan-Meier method and compared using the log-rank test. The prognostic factors for survival after hepatectomy were elucidated by univariate and then multivariate analyses.
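As a rough, hypothetical illustration of the survival workflow just described (Kaplan-Meier curves, a log-rank comparison of the SYK-methylated and unmethylated groups, and a multivariate Cox proportional-hazards model), a minimal Python sketch using the lifelines package might look as follows. The original analysis was performed in SPSS; the file name and column names here (months, death, syk_methylated, tnm_stage, child_pugh_b) are invented for the example, and the actual variables considered are listed in the next paragraph.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# Hypothetical follow-up table: one row per patient.
df = pd.read_csv("hcc_followup.csv")  # columns: months, death, syk_methylated, tnm_stage, child_pugh_b, ...

meth = df[df["syk_methylated"] == 1]
unmeth = df[df["syk_methylated"] == 0]

# Kaplan-Meier curves for the two SYK methylation groups.
km = KaplanMeierFitter()
km.fit(meth["months"], event_observed=meth["death"], label="SYK methylated")
ax = km.plot()
km.fit(unmeth["months"], event_observed=unmeth["death"], label="SYK unmethylated")
km.plot(ax=ax)

# Univariate comparison with the log-rank test.
lr = logrank_test(meth["months"], unmeth["months"],
                  event_observed_A=meth["death"], event_observed_B=unmeth["death"])
print("log-rank p =", lr.p_value)

# Multivariate Cox proportional-hazards model over the significant univariate factors.
cph = CoxPHFitter()
cph.fit(df[["months", "death", "syk_methylated", "tnm_stage", "child_pugh_b"]],
        duration_col="months", event_col="death")
cph.print_summary()
```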
The following variables were analyzed: patient sex; age; Child-Pugh classification; g-glutamyltransferase level; a-fetoprotein level; tumor size; tumor encapsulation status; presence of macro tumor thrombus in the portal vein; presence of satellite nodules; degree of underlying cirrhosis; TNM stage; expression status of Syk protein and methylation status of SYK, E-cadherin, and TIMP-3 genes in tumor tissues. Significant prognostic factors found by univariate analysis were entered into a multivariate analysis using the Cox proportional-hazards model. The SPSS software package (version 10.0; SPSS, Inc., Chicago, IL) was used for the statistical analyses. P < 0.05 was considered to be statistically significant. Results Promoter hypermethylation leads to SYK silencing in HCC. SYK is expressed in many epithelial cell types. We began our study by analyzing SYK expression status in two liver cancer cell lines, HepG2 and Hep3B. RT-PCR showed that Hep3B but not HepG2 cells expressed SYK mRNA (Fig. 1A). Because DNA methylation is primarily responsible for SYK gene silencing (15), we surmised that the SYK gene promoter might be methylated in HepG2 cells. To explore this possibility, we used MSP to measure both methylated and unmethylated SYK promoter (15). MSP analyses indicated that SYK was methylated in HepG2 but not in Hep3B ( Fig. 2A), consistent with the SYK expression status in these cells. To further substantiate that SYK methylation is primarily responsible for the loss of SYK expression, we treated HepG2 cells with a DNA methyltransferase inhibitor, 5-aza-2 ¶-deoxycytidine, to determine whether demethylation restored SYK expression. As shown in Fig. 1B, 5-aza-2 ¶-deoxycytidine reactivated SYK expression in the HepG2 line as detected by RT-PCR, while not affecting that in Hep3B, suggesting that DNA methylation plays a causal role in the SYK loss of expression in HCC. SYK is hypermethylated in primary HCC. We next examined whether the epigenetic alteration of SYK observed in the HCC cell lines could be extrapolated to primary HCCs. All 124 patients included in the present study underwent surgical resection of primary tumors. The pathologic diagnosis of all HCC cases was confirmed by histologic reviews. We used MSP to evaluate the SYK methylation status in the 124 primary HCC tumors, in which 27 (21.8%) and 90 (72.6%) specimens were found to be SYK methylated and unmethylated, respectively. The remaining seven cases (5.6%) showed amplification of both SYK methylation and unmethylation (Fig. 2B). Coexistence of both methylation status in a given tumor could reflect the heterogeneity of HCC, although contamination from normal tissue DNA cannot be ruled out. To ascertain whether SYK methylation leads to gene silencing in primary HCC, we used immunohistochemistry to assess the Syk protein expression in all 124 tumors (Fig. 3). Immunohistochemical analyses showed that Syk protein was not expressed in 32 (25.8%) HCC cases; in this group, SYK was methylated, methylated/unmethylated, and unmethylated in 24, 5, and 3 cases, respectively. Among the remaining 92 (74.2%) Syk protein-positive cases, SYK was methylated, methylated/unmethylated, and unmethylated in 3, 2, and 87 cases, respectively. The correlation between SYK methylation and loss of Syk protein expression was highly significant (P < 0.001, Spearman test). 
The three cases in which Syk was expressed but methylated may reflect the heterogeneity of HCC; that is, methylation may occur in a subpopulation of neoplastic tissues that is readily detectable by MSP (28). The three cases in which Syk was not expressed but unmethylated suggest that there are other mechanisms to suppress SYK expression. We also measured the SYK methylation and expression status in matched normal liver tissues. Among the 124 cases, 34 had samples of matched adjacent pathologically nonneoplastic liver tissues that were used for MSP, RT-PCR, and immunohistochemical analyses. SYK gene was found methylated, methylated/ unmethylated, and unmethylated in 0, 3, and 31 in nonneoplastic specimens, respectively, in comparison with 6, 2, and 26 cases of neoplastic tissues, respectively. If the ''methylation/ unmethylation'' was grouped into the ''methylation-positive'' category, the percentage of patients with positive methylation were 8.8% (3 of 34) and 23.5% (8 of 34), respectively, for nonneoplastic and neoplastic tissues. The difference in percentage of methylation-positive patients was not statistically significant (P = 0.186; Fisher's test). The statistical significance could be reached if more samples were available. SYK methylation in nonneoplastic tissues was also observed in earlier studies that may represent DNA methylation in premalignant lesions (29,30). Aging-related gene methylation could also be a contributor (31). The corresponding primary tumors of these three cases were found to have unmethylated SYK. The expression of SYK mRNA as measured by RT-PCR and Syk protein by immunohistochemistry in the 34 cases was entirely consistent, indicating the SYK expressional control occurs at the transcriptional level. Both SYK mRNA and Syk protein were positive in all 34 matching nonneoplastic liver tissues. By contrast, 5 of the 34 primary HCCs expressed neither SYK mRNA nor Syk protein. Among the 29 SYK-positive HCCs, SYK was found methylated, methylated/unmethylated, and unmethylated in 1, 2, and 26 specimens, respectively. These numbers were in comparison with 5, 0, and 0 SYK-negative cases, respectively. Using Spearman correlation test, SYK methylation and SYK expression was strongly correlated (P < 0.001). Collectively, these results indicated that hypermethylation of SYK promoter was largely tumor-specific and responsible for the loss of SYK expression in HCC. Like SYK, E-cadherin and TIMP-3 are thought to be tumor/ metastasis -suppressor genes. DNA methylation could lead to silencing of E-cadherin (27,32) and TIMP-3 (13,30,33) in certain types of tumors, including HCC. Thus, we also assessed the methylation status of E-cadherin and TIMP-3 genes in the 124 HCC cases. We found that E-cadherin was methylated, methylated/unmethylated, and unmethylated in 21, 9, and 82 cases, respectively (noninformative in 12 cases). TIMP-3 was methylated, methylated/unmethylated, and unmethylated in 32, 14, and 64 cases, respectively (noninformative in 14 cases; Fig. 4). Gene methylation is believed to be an aberrant alteration that is associated with neoplastic progression; amplification from unmethylation allele is likely contributed by common contaminant of normal tissues. Thus, we classified cases with both methylation and unmethylation amplification into methylation-positive group in this study for clinical correlation analyses and prognostic evaluation. 
When this criterion is adopted, the percentage of patients with positive methylation of the SYK, E-cadherin, and TIMP-3 genes became 27.4% (34 of 124), 26.8% (30 of 112), and 41.8% (46 of 110), respectively. Correlation of gene methylation with clinicopathologic variables. We next correlated the methylation status of SYK, E-cadherin, and TIMP-3 with 12 clinicopathologic variables, including patient gender, age, hepatitis B infection status, Child-Pugh classification, γ-glutamyltransferase and α-fetoprotein values, tumor size, status of macro tumor thrombus in the portal vein, satellite nodule, tumor capsule, degree of underlying cirrhosis, and TNM stage (Table 2). The patient age ranged from 23 to 76 years, with a median age of 48 years. Fourteen (11.3%) of the patients were women and 110 (88.7%) were men. Hepatitis B surface antigen was detected in 115 patients (92.7%). Hepatitis C antibody was positive in only two cases (1.6%), whose hepatitis B surface antigen was negative. One hundred and five patients (84.7%) had histologically confirmed liver cirrhosis, and the remaining 19 (15.3%) did not. Tumor size ranged from 2 to 21 cm, with a median size of 7.5 cm. After a median follow-up of 2.6 years among 124 patients, 40 patients died of HCC and 8 patients died of other diseases. Seventy-six patients were still alive at the time of the last follow-up report. The 3- and 5-year overall survival rates were 58.3% and 40.9%, respectively. No significant correlation was observed between SYK methylation and the above clinicopathologic variables. The percentages of patients with positive methylation of E-cadherin and TIMP-3 were significantly higher among those with Child-Pugh class B than those with Child-Pugh class A. In addition, E-cadherin methylation was significantly more frequent in patients with moderate or severe underlying cirrhosis, although the pathophysiologic mechanism is not clear. Prognostic value of gene methylation in HCC. The prognostic value of 11 widely used clinicopathologic variables and the methylation status of SYK, E-cadherin, and TIMP-3 were analyzed in the 124 HCC cases. Univariate analyses showed that Child-Pugh B classification, γ-glutamyltransferase level >100 U/L, the presence of macro tumor thrombus in the portal vein, the presence of satellite nodules, the presence of severe or moderate cirrhosis, and TNM stage >II predicted relatively poor patient survival (Table 3). We also divided all cases into two groups according to the methylation status of SYK, E-cadherin, or TIMP-3 to determine whether these factors had prognostic value.
Fig. 1. A, expression of SYK mRNA in HCC cell lines HepG2 and Hep3B. SYK mRNA expression was determined by RT-PCR. A reverse transcriptase (RT)-negative control (−) was used to rule out false positives resulting from contaminated genomic DNA. mRNA for β2-microglobulin (B2MG) was also analyzed to verify the RNA integrity. A blank control (H2O) was included in each PCR experiment. PCR products and a molecular weight (MW) marker were run on an agarose gel followed by ethidium bromide staining. Bands of 507 and 115 bp are expected for SYK and β2-microglobulin transcripts, respectively. At least two independent experiments were carried out. B, restoration of SYK mRNA by treatment with a DNA methyltransferase 1 inhibitor. HepG2 was treated for 5 days with (+) or without (−) 2.0 µmol/L 5-aza-2′-deoxycytidine (5-Aza-dC). As a control, Hep3B cells were processed in parallel. Total RNA was harvested and RT-PCR amplified as detailed in (A).
Patients whose primary tumors exhibited SYK hypermethylation had lower rates of overall survival (P = 0.0288, log-rank test) after resection; the 3- and 5-year overall survival rates were 40.6% and 30.5%, respectively, for patients with tumors that showed SYK hypermethylation, compared with 66.3% and 56.1%, respectively, for those without SYK methylation. The status of E-cadherin (P = 0.8578) and TIMP-3 (P = 0.6725) methylation did not significantly influence patient survival (Fig. 5; Table 3). In addition, the prognostic value of the expression status of Syk protein was analyzed. Univariate analyses showed that patients whose tumors exhibited negative Syk protein expression had lower rates of overall survival (P = 0.0405, log-rank test) after surgical resection; the 3- and 5-year overall survival rates were 40.5% and 30.4%, respectively, for patients with tumors that showed negative expression of Syk protein, compared with 65.7% and 55.6%, respectively, for those with positive expression of Syk protein. The six clinicopathologic factors and methylation status of SYK (or Syk protein status) found to be prognostic on univariate analysis were entered into a multivariate model to identify independent predictors of overall survival. The Cox multivariate proportional-hazards model indicated that the factors significantly affecting overall survival were Child-Pugh classification (P = 0.038), TNM stage (P = 0.003), and SYK methylation status (P < 0.001; Table 4). When we used the expression status of Syk protein to replace the methylation status of SYK in the Cox multivariate model analysis, the factors significantly affecting overall survival were Child-Pugh classification (P = 0.040), TNM stage (P = 0.025), cirrhosis (P = 0.048), and Syk protein expression (P = 0.007). These data suggested that SYK gene methylation represented a surrogate for loss of SYK gene expression as an independent prognostic marker. Discussion In this study, we analyzed methylation of the SYK, E-cadherin, and TIMP-3 genes in 124 cases of HCC and correlated the methylation status with clinical and pathologic features to determine whether these markers can predict disease outcomes. The E-cadherin and TIMP-3 tumor-suppressor genes have been extensively studied and their suppressor activity has been characterized in several experimental settings (32-35). Our results support the suppressor roles of these two genes in HCC by showing methylation of E-cadherin and TIMP-3 in 26.8% and 41.8% of the cases, respectively. SYK, however, has been less well characterized. It was initially implicated as a tumor-suppressor gene in breast cancer (17). SYK promoter methylation leading to gene silencing has been shown in breast cancer (15) and acute lymphoblastic leukemia (16). The loss of SYK expression is thought to contribute to tumor progression by promoting tumor invasion, proliferation, and motility. Here, we showed that SYK hypermethylation was present in 27.4% of the HCCs and was associated with gene silencing. The tight correlation between SYK methylation and loss of SYK expression, together with the causal role of SYK methylation in gene silencing, indicates that epigenetic inactivation of SYK contributes to the progression of HCC.
Fig. 4. The E-cadherin and TIMP-3 genes were hypermethylated in primary HCC tumors. A two-step MSP protocol was used to analyze the gene methylation status. DNA was extracted from tissues, treated with sodium bisulfite, and then subjected to first-round PCR amplification. Then, in a nested PCR, methylation-specific or unmethylation-specific primers were used in separate reactions. For the E-cadherin gene, products of 112 and 120 bp were expected for methylated and unmethylated DNA, respectively. For the TIMP-3 gene, products of 116 and 122 bp were expected for methylated and unmethylated DNA, respectively.
In this project, we explored the possibility of using SYK methylation as a prognostic marker compared with E-cadherin and TIMP-3 gene methylation. The main focus of this study was to identify accurate biomarkers of prognosis for HCC patients after hepatectomy. Several clinicopathologic features and molecular markers, with varied predictive power, have been linked to HCC prognosis, including clinicopathologic features (refs. 36-39) and molecular markers (p27 expression and p53 mutation; refs. 40,41). In this study, the prognostic value of SYK, E-cadherin, and TIMP-3 methylation in tumor cells was investigated. Although methylation of E-cadherin and TIMP-3 have been shown to predict a worse prognosis in node-positive diffuse gastric cancer and in esophageal adenocarcinoma, respectively (23,24), we did not find any correlation between either E-cadherin or TIMP-3 methylation and HCC patient survival. In contrast, methylation of SYK in HCC tissues predicted poor overall survival after hepatectomy on univariate analysis. Furthermore, the Cox multivariate proportional-hazards model confirmed that methylation of SYK in HCC was an independent and strong predictor of overall survival of these patients. SYK methylation seems to be a more powerful biomarker for risk prediction in HCC than other classic clinicopathologic features, such as TNM staging and Child-Pugh classification (Table 4). It remains to be seen whether the use of SYK methylation as a prognostic tool can be extended to other tumor types, such as breast carcinoma. An earlier study indicated that in breast cancer patients, low SYK mRNA expression in tumors predicted short survival time (22). Presuming that the loss of SYK expression results from DNA methylation, SYK methylation is conceivably suitable for use as a biomarker of breast cancer prognosis. The association between SYK methylation and poor survival rates suggests that SYK plays an important role in HCC progression. Because this study included only Chinese patients, it is not known whether the prognostic value of SYK methylation can be extended to HCC cases resulting from other etiologic factors. It has been reported that rates of p16 methylation in HCC vary significantly among different geographic locations (e.g., it is present in 34.4% of cases from China and Egypt but only 12.2% of those from the United States and Europe). Similar geographic variations have been observed for estrogen receptor-α methylation and the CpG island methylator phenotype (42). Whether SYK methylation shows such geographic and ethnic variation and whether SYK methylation is associated with certain etiologic factors need to be further investigated. Because CpG island methylation is a reversible epigenetic change, the use of demethylation agents presents a novel therapeutic opportunity (43). Early clinical trials with demethylation compounds, such as 5-azacytidine and 5-aza-2′-deoxycytidine, have shown disappointing results in solid tumors. Their use in hematologic malignancies, however, has yielded promising responses (44,45), despite their high toxicity and chemical instability.
The therapeutic outcome could be compromised without knowledge on the methylation status of tumor-related genes; demethylation agents should be effective only for patients with epigenetic inactivation of key tumor-suppressor genes. Therefore, sensitive detection and a better understanding of the frequency of gene methylation must be obtained before the use of such demethylation drugs can be optimized. The present study showed that one, two, and all three of the SYK, E-cadherin, and TIMP-3 genes were methylated in 38.7% (48 of 124), 17.7% (22 of 124), and 4.8% (6 of 124) of our HCC cases, respectively. Thus, 61.3% of the HCC patients had at least one of the three genes methylated. They may benefit from the demethylation-based therapy. Furthermore, a new generation of demethylation drugs that are more chemically stable, such as zebularine, could be more effective clinically and may be applicable in solid tumors (46). In conclusion, the present data show that the SYK gene can be silenced through epigenetic pathway and that positive methylation of SYK is an adverse prognostic factor among HCC patients. This information can be used to identify high-risk HCC patients who may benefit from adjuvant or more aggressive therapy after resection of primary tumors. It also justifies further studies of novel demethylating agents in the treatment of HCC.
6,048
2006-11-15T00:00:00.000
[ "Biology", "Medicine" ]
Research on Modeling and Analysis of Generative Conversational System Based on Optimal Joint Structural and Linguistic Model Generative conversational systems consisting of a neural network-based structural model and a linguistic model have always been considered to be an attractive area. However, conversational systems tend to generate single-turn responses with a lack of diversity and informativeness. For this reason, the conversational system method is further developed by modeling and analyzing the joint structural and linguistic model, as presented in the paper. Firstly, we establish a novel dual-encoder structural model based on the new Convolutional Neural Network architecture and strengthened attention with intention. It is able to effectively extract the features of variable-length sequences and then mine their deep semantic information. Secondly, a linguistic model combining the maximum mutual information with the foolish punishment mechanism is proposed. Thirdly, the conversational system for the joint structural and linguistic model is observed and discussed. Then, to validate the effectiveness of the proposed method, some different models are tested, evaluated and compared with respect to Response Coherence, Response Diversity, Length of Conversation and Human Evaluation. As these comparative results show, the proposed method is able to effectively improve the response quality of the generative conversational system. Introduction Along with the rapid development of artificial intelligence, the use of generative conversational systems based on joint structural and linguistic models is increasingly being observed and is being applied in some interesting robotic cases. Generative conversational systems provide the ability to generate conversational responses actively. Additionally, they are also not limited by conversation content. Implicitly, this provides several benefits for human life, such as in the family environment, hospitals, entertainment venues, etc. Conversational systems are composed of a neural network-based structural model and a linguistic model. The neural network-based structural model mainly performs feature extraction and semantic understanding on input sequences. In addition, the linguistic model can determine the probability of the existence of the output sequence by determining a probability distribution for an output sequence of length m. The response quality of the system, with respect to aspects such as diversity, informativeness and multi-turns, is greatly influenced by different structural models and linguistic models. However, Common and foolish responses are often generated by the prediction of responses with the general statistical linguistic model in conversational system. Meanwhile, linguistic models based on Maximum Mutual Information (MMI), Mutual Information (MI), Pointwise Mutual Information (PMI) and Term Frequency-Inverse Document Frequency (TF-IDF) are also derived to increase the coherence between the input sequence and system response. For example, responses that enjoy unconditionally high probability, as well as biases towards responses that were specific to the given input, could be avoided by the linguistic models based on MMI [15]. The responses that enjoy high probability but were ungrammatical or incoherent could be avoided by the linguistic models based on MI [16]. The nonspecific responses could be avoided by the linguistic models that incorporated the TF-IDF term [2]. 
Similarly, the linguistic models based on PMI were able to predict a noun as a keyword reflecting the main gist of the response in order to generate a response containing the given keyword [8]. These studies of the coherence between the input sequence and system response can increase the informativeness to some extent, but more foolish responses are still unavoidable in testing. Therefore, a linguistic model based on MMI and a foolish punishment mechanism is proposed. To comprehensively improve the response quality of the conversational system with respect to the aspects of the structural model and the linguistic model, the attention with intention-based structural model and TF-IDF-based linguistic model were combined [2]. The joint model firstly modeled intention across turns using RNN, and then incorporated an attention model that was conditional on the representation of intention. It subsequently avoided generating non-specific responses by incorporating an IDF term in the linguistic model. A structural model based on forward and backward neural networks and a linguistic model based on PMI were also combined [8]. The joint model firstly used PMI to predict a keyword, then generated a response containing the keyword using the structural model. These joint models improved the informativeness of system responses by combining the developed structural model and linguistic model. Therefore, in order to improve the response quality of conversational systems in terms of diversity, informativeness and multi-turns, a novel joint model is established in this paper, which combines the dual-encoder structural model with the linguistic model. The theoretical model is also shown to be effective by experimental comparison. To address the problem of the lack of diversity, informativeness and multi-turns, a joint model is presented in the paper. In Section 2, a novel dual-encoder structural model based on the new CNN and strengthened attention with intention is established. In Section 3, the linguistic model based on MMI and the foolish punishment mechanism is established. In Section 4, the experiments on the generative conversational system based on the joint structural and linguistic model are described. In Section 5, comparisons are drawn between the joint model and baseline models. Model Architecture In this section, a novel dual-encoder model structure based on the new CNN and strengthened attention with intention (SAWI-DCNN) is proposed, where CNN, rather than RNN, can be used to obtain the long-term context. First, the pre-processed input sequences are processed in encoder 1, as shown in Figure 1. Meanwhile, previous target tokens are processed in encoder 2. Second, the output sequence of encoder 1 distributes attention at the strengthened attention layer, where the distribution of attention is affected by the state of encoder 2, including conversational intention [2,17,18]. Finally, the output sequence of the attention distribution and encoder 2 is iterated to generate the predicted target token at the fully connected layer. In Figure 1, ① is the input pre-processing layer; ② is the dual-encoder layer (Encoder 1: left; Encoder 2: right); ③ is the conversational intention layer; ④ is the strengthened attention layer; and ⑤ is the fully connected layers.
Input Pre-Processing The input sequence of the k-th conversation turn is embedded as vectors e (k) m ∈ R f , where e (k) m represents the embedding vector at position m. The features and deep semantics of the embedded vectors E (k) are extracted and mined in the conversational system model. However, deeply hidden semantics can only be excavated by the system with difficulty when context is discarded in different interactions. Conversely, too much noise is brought into the conversational system when the context is included in its entirety. Thus, the input regarding the response of the previous turn is controlled in encoder 1 in order to increase the perception of the conversational environment and improve the interaction turns. The updated input vectors E (k) new can be defined as in Equation (1), where e (k) E , e (k−1) Y ∈ R f are the sentence-level embedded vectors [19] of the input sequence of the current turn k and the output sequence of the previous turn k − 1, respectively. Note that the result of f(·) is a biased vector, which is able to control the input generated by the previous output sequence. Thus, Equation (1) can be rewritten accordingly. When embedded vectors are input into CNN, multiple vectors are convoluted simultaneously by convolution kernels. In addition, the sense of order of vectors decreases with the increase of the convolution layer. For this reason, the absolute position is embedded in the input sequence in order to increase the temporal order of vectors and enable the model to perceive which part of the input sequence is being processed. The joint embedding vector combines the updated input vectors with the absolute position embeddings, where S (k) ∈ R m×f is the joint input vector and P denotes the position embeddings. Dual-Encoder The dual-encoder consists of stacked convolution blocks, which include the new CNN, Gated Linear Units (GLU) [9], Residual connections [20], and scaling factors. The outputs of the convolution blocks in encoder 1 and encoder 2 are represented as [h d1 , . . . , h dm ] ∈ R m×f , respectively. Each convolution kernel is parameterized as W ∈ R w , b w ∈ R in the new CNN. In addition, the input vectors S (k) are mapped to the output vectors Y ∈ R 2m×f through the new CNN, in which the output vectors have twice the dimensionality of the input vectors. The information flows of the output Y = [A, B] ∈ R 2m×f of the new CNN can be controlled by GLU, which provides a linear path for information gradient flows and solves the gradient problem caused by nonlinear gating. Thus, the gated linear unit f([A, B]) = A ⊗ σ(B) is added to the convolution blocks, where A and B are the two halves of the convolution output, A, B ∈ R m×f ; ⊗ refers to the point multiplication operation; the dimension of the output f(·) ∈ R m×f is half the size of Y; and the information flow A related to the current context is controlled by the gates σ(B).
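As a minimal, illustrative sketch of the gated convolution block described above (a 1-D convolution that doubles the feature dimension, a GLU gate A ⊗ σ(B), and the residual connection with √0.5 scaling discussed in the next paragraph), the following PyTorch module could be used; the kernel width and feature size are assumptions, not values taken from the paper.

```python
import math
import torch
import torch.nn as nn

class GatedConvBlock(nn.Module):
    """One convolution block: 1-D convolution -> GLU gate -> residual -> scaling."""
    def __init__(self, features: int, kernel_width: int = 3):
        super().__init__()
        # The convolution maps f channels to 2f channels so the output can be split into A and B.
        self.conv = nn.Conv1d(features, 2 * features, kernel_width, padding=kernel_width // 2)
        self.scale = math.sqrt(0.5)  # preserves the variance of the residual sum

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, features, positions)
        a, b = self.conv(x).chunk(2, dim=1)   # A carries content, sigma(B) acts as the gate
        gated = a * torch.sigmoid(b)          # GLU: f([A, B]) = A (x) sigma(B)
        return (gated + x) * self.scale       # residual connection and scaling factor

# Example: a batch of 2 sequences, 16 positions, 512 features.
block = GatedConvBlock(features=512)
out = block(torch.randn(2, 512, 16))
print(out.shape)  # torch.Size([2, 512, 16])
```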
Meanwhile, in order to enable the conversational system to further mine deep semantic information in a conversational environment, the conversational intention vector Z (k) ∈ R f is added to encoder 2 as a bias of the convolution output. Residual connections from the input of each convolution block to the linear gating output are added to avoid degradation caused by network depth. In addition, the scaling factors µ are also added to the convolution blocks to preserve the input variance at the beginning of training. Thus, the output of the convolution block can be expressed in terms of h (k,l−1) , the outputs of the (l − 1)-th convolution block in encoder 1 and encoder 2, respectively; meanwhile, the scaling factor µ is a hyper-parameter that satisfies µ = √0.5. In the test, the distribution of the target sequence token is predicted at the top level of the fully connected layers through the linguistic model based on MMI and the foolish punishment, as shown in Section 3. 1-D Dynamic Convolutional Neural Networks (DCNN) Since the dimension of input vectors is reduced when the convolution and pooling of the vectors are performed by CNN, it is difficult to increase the number of CNN layers when dealing with variable-length vectors of the input sequence. Therefore, a new Convolutional Neural Network architecture, consisting of a one-dimensional Wide Convolution layer, a dynamic k-max pooling layer, a flatting layer, a dropout layer, and a recurrent fully connected layer, is proposed. As shown in Figure 2, one-dimensional Wide Convolution Operations are adopted [21]. This aims to ensure that the vectors of the whole variable-length input sequence containing the edge words are convoluted by convolution kernels, generating a non-empty feature map c. The two-channel and multi-convolution kernels are used for convolution in order to improve convolution speed and obtain more features. This is initiated by defining the convolution kernel width with one dimension. In addition, the dropout layer is used for regularization. This aims to prevent the occurrence of over-fitting and divergence of the prediction. Meanwhile, in order to align the variable-length vectors of both the input and output sequences, a recurrent fully connected layer is proposed. The recurrent fully connected layer is similar to the fully connected layer in RNN. In addition, the dimension of the recurrent fully connected layer is defined as an integer multiple of the input token vector. Finally, the output is generated by sliding the fully connected layer. In the wide convolution, M ∈ R m is a convolution kernel; b m ∈ R is a bias; S ∈ R s×f are the input vectors; c ∈ R (s+m−1)×f is the feature map obtained through the convolution operation; and f(·) is an activation function.
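The 1-D wide convolution and dynamic k-max pooling described above could be sketched as follows; zero padding extends the feature map to length s + m − 1, and the pooling keeps the k largest activations per feature while preserving their original order. Since the exact pooling-parameter formula is not reproduced in the text, setting k equal to the input length s is an assumption made purely for illustration.

```python
import torch
import torch.nn.functional as F

def wide_conv1d(x: torch.Tensor, weight: torch.Tensor, bias: torch.Tensor) -> torch.Tensor:
    # x: (batch, features, s); weight: (features, features, m).
    # "Wide" convolution: zero-pad by m - 1 on each side so edge words are covered,
    # giving a feature map of length s + m - 1.
    m = weight.shape[-1]
    return F.conv1d(x, weight, bias, padding=m - 1)

def dynamic_kmax_pool(c: torch.Tensor, k: int) -> torch.Tensor:
    # c: (batch, features, length). Keep the k largest values per feature map,
    # then re-sort the kept indices so the original temporal order is preserved.
    idx = c.topk(k, dim=-1).indices
    idx, _ = idx.sort(dim=-1)
    return c.gather(-1, idx)

# Toy example: batch of 2, 8 features, input length s = 10, kernel width m = 3.
x = torch.randn(2, 8, 10)
w = torch.randn(8, 8, 3)
b = torch.zeros(8)
c = wide_conv1d(x, w, b)             # shape (2, 8, 12), i.e. length s + m - 1
pooled = dynamic_kmax_pool(c, k=10)  # shape (2, 8, 10), aligned with the input length
print(c.shape, pooled.shape)
```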
The dimensions of vectors after the wide convolution vary with the lengths of different input sequences. The edge vectors are expanded by means of zero filling when the vectors of the input sequence are convoluted by convolution kernels. Thus, the dimension of the convoluted feature map is larger than that of the input sequence vectors. The one-dimensional dynamic k-max pooling process is defined in order to align the output vector state with the input sequence vectors at each moment. The pooling parameter is defined as a function of s, the length of the input sequence. The pooled feature map of the single-channel convolution and pooling operations is represented as C max ∈ R s×f , where the values of the feature map retain their relation to the source and the subscripts are arranged from small to large. Centralizing Intention The attention weights of the input sequence can be distributed each time using attention models. In addition, according to the attention distribution, the semantic information of the input sequence can be further understood by the conversational system. The attention distribution of the encoder 1 output state can be affected not only by the previous output state of encoder 2, but also by the conversation intention [2,7,16], just like a human being. Conversation intention can represent the conversation context and the primary motivation of the conversation. However, the role of conversation intention in conversation responses is not immediately obvious. This is mainly influenced by the requirement that additional noise make no contribution to the distribution of attention. Thus, to reduce the redundancy of intention caused by the increase in conversation turns, a dynamic model of the intention vector is established, and forgetting gates are added to the model. Hence, the final dynamic model of the intention vector is expressed in terms of Z (k) ∈ R f , the intention vector of the k-th turn; tanh(·), the tanh operation; and f t , a forgetting gate that can control the previous intention. The forgetting gate f t ∈ R f×f is expressed in terms of a transformation matrix W t ∈ R 1×f , a bias b t ∈ R f×f , and h (k,top) S ∈ R f , a sentence-level vector of the encoder output at the k-th turn, which is in turn computed from h, the output vectors of the top-layer convolution block in encoder 1.
Intensity-Strengthening Attention Because the attention weights are distributed according to the contribution of each token in the sequence, and the sum of the attention weights is 1, the effect of a single attention [22] becomes weaker and weaker as the input sequence increases in size. Indeed, the distribution of a single attention will be more distracted, and can even reach zero when the input sequence is longer. An intensity-strengthening attention method is proposed in order to address the problem of the small attention distribution and the partial over-distribution. To preserve more context for the current state of encoder 2, the previous output sequence is convoluted. Thus, the features of the output sequence at the current time are as follows: where h is the output vectors of the top-layer convolution block in encoder 2. The current state of encoder 2 consists of the features of the output sequence and the previously predicted target token g (k) i−1 , which are expressed as The query vector d where W Q h ∈ R dx×f and h K h ∈ R dx×f are transformation matrices. Therefore, the input C (k) i to the connection layer can be expressed as where W V h ∈ R dx×f is a transformation matrix. The overall intensity of attention is enhanced through superimposed attention, which reduces the effects on attention of both distraction and inattention. The output of encoder 1 contains the context and location information of the input sequence. Similarly, the output state of encoder 2 includes the context, previously predicted target token, and intention information. Therefore, with the calculation of attention distribution, the results are determined by the above information. Linguistic Model Based on MMI and FPM To guarantee the existence of the output sequence, the probability of the predicted target sequence needs to be estimated by the linguistic model. In addition, a linguistic model based on MMI, which can improve the response coherence of the conversational system and reduce the generation of irrelevant responses, is adopted to estimate the probability of the output sequence in the paper [6]. Nevertheless, foolish responses such as "I don't know" and "what?" are still unavoidable in the process of testing. Therefore, a foolish punishment mechanism (FPM) is added to the linguistic model based on MMI to reduce the number of foolish responses. U(Ŷ) = N n=1 p(ŷ n |ŷ 1 ,ŷ 2 ,ŷ 3 , · · ·ŷ n−1 )·g(n) (20) where λ is a hyper parameter for the general response punishment; γ is the first token to be punished; and n is the index of the target tokens, which is generated at time n. The predicted target tokens are punished by calculating the probability of foolish responses Y, which is predicted by the previous output sequenceŶ. For example, the current target token is predicted based on the previous output sequence as input. Then the target token is compared with the foolish responses. If the predicted target token is similar to the foolish response tokens, then the token is regarded as a foolish target token. According to the comparison results, the probability of the predicted target tokens being foolish response tokens is obtained, and the probability is used as the punishment for foolishness. Ten sequences Y of foolish responses like "I don't know" and "I have no idea" are manually built, which are often generated by the general model. 
Although the system generates more total categories of foolish responses than the manually built sequences of foolish responses, these responses will be similar to the established foolish responses. Therefore, the foolish punishment function is defined as where N Y is the number of foolish responses; N y is the number of tokens in the foolish responses Y. Meanwhile, the final objective function is defined as where λ 1 and λ 2 are hyper parameters. Both are set to be equal to 0.25. In the test, the generative conversational system needs to sample the predicted target tokens to maximize the probability of the output sequence. In addition, the Beam Search algorithm [23] is often adopted. The Beam Search algorithm is a graph-searching algorithm that can quickly find the optimal output sequence. However, the Beam Search algorithm is prone to generating erroneous responses in the sampling process, e.g., the traditional Beam Search algorithm is easily affected by previously sampled tokens and large local probabilities. Moreover, the correct response sequence cannot be produced. Therefore, in this paper, the Diverse Beam Search algorithm [24] is used to predict target tokens, as it is able to improve the diversity of output sequences by sampling on the basis of grouping using the Beam Search algorithm. Datasets and Training The OpenSubtitles (OSDb) dataset, an open-domain dataset, is applied in these experiments. The OSDb contains 60M scripted lines spoken by movie characters [25]. 301,000 question-answer pairs are randomly selected, of which 300,000 are used for training and 1000 are sampled for testing. 512 hidden units are adopted for the dual encoder in the model. All embedded vectors have a dimensionality of 512. Meanwhile, the same dimensionality is also adopted for linear layer mapping between the embedded sizes and hidden layers; a learning rate of 0.001 is used. In addition, subsequently, a mini-batch of 256 is used; the filter widths are set to 3 and 5, respectively, and the stacked convolution blocks are set to 3 in both encoders. The model is trained with mini-batches by back-propagation, and the gradient descent optimization (Adam Optimizer) is performed. Automatic Evaluations Automatic evaluations of response quality are an open and difficult problem in the conversational field [19,26]. In addition, while there are existing automatic evaluation methods related to machine translation, such as Bilingual Evaluation Understudy (BLEU) and METEOR, these metrics for evaluating conversational system do not correlate strongly with human evaluations, and have been negated by many scholars for the purposes of conversational evaluation [19]. Influenced by the automatic evaluation of multi-turns and response diversity, as proposed by Li [16,27], in which the degree of response diversity is calculated by the number of distinct unigrams in the generated responses, and inspired by conversational targets, the authors propose two automatic evaluation criteria-response diversity and response coherence-in order to indirectly reflect the relationships between system responses and real responses. Response Coherence: the proposed measure for evaluating response coherence is to compute the cosine similarity between the question and the system responses based on embedding using the greedy matching method [18]. In other words, the similarities between the question and the responses are calculated by random sampling of samples in the test. 
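As an illustration of the embedding-based greedy matching used for Response Coherence, a small NumPy sketch is given below: each question token is greedily matched to its most similar response token by cosine similarity, the matching is repeated in the reverse direction, and the two directions are averaged. The random vectors stand in for whatever word embeddings are actually used.

```python
import numpy as np

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8))

def greedy_match(src: np.ndarray, tgt: np.ndarray) -> float:
    # src, tgt: (n_tokens, dim) word-embedding matrices.
    # Each source token is matched to its most similar target token.
    return float(np.mean([max(cosine(s, t) for t in tgt) for s in src]))

def response_coherence(question_vecs: np.ndarray, response_vecs: np.ndarray) -> float:
    # Symmetric greedy matching: average both matching directions.
    return 0.5 * (greedy_match(question_vecs, response_vecs)
                  + greedy_match(response_vecs, question_vecs))

# Example with random stand-in embeddings (3 question tokens, 4 response tokens, 512 dims).
rng = np.random.default_rng(0)
q = rng.normal(size=(3, 512))
r = rng.normal(size=(4, 512))
print(round(response_coherence(q, r), 3))
```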
In addition, the mean operation is applied to the similarity of the samples. The coherence of question and responses is greater where the similarity is greater. Response Diversity: Although the method of BLEU [28] is pointed out as being unreasonable for evaluating the coherence between system responses and human evaluation [19], the idea behind BLEU is to calculate the similarity between two sequences. Therefore, response diversity is proposed to be calculated by an improved method of BLEU, which evaluates response diversity on the basis of a calculation over candidate responses, instead of a calculation between system responses and the real response [15,16]. Candidate responses used in the test are generated by the Diverse Beam Search algorithm. The value of BLEU is obtained by pairwise calculation of candidates and averaged by the mean operation. Multiple candidates generated each time are defined as a sample. In addition, the means of a sample calculated by the BLEU method are regarded as the response diversity. Samples are sampled randomly, and response diversity is calculated during the test. Response diversity is greater when the similarity is weaker. Length of the Conversation: Li et al. [16] proposed a method for evaluating the turns of a conversation: a conversation ends when a foolish response like "I don't know" is generated, or two consecutive responses are highly overlapping. In the test, the above method is adopted to determine the length of a conversation, in which eight interactions are defined as one turn. Human Evaluation Although the response quality of the system can be indirectly reflected by the coherence, diversity and length of the conversation, the relationship between system responses and real responses cannot be determined by their simple linear superposition. Therefore, the currently popular method of human evaluation is used for comprehensive evaluation. To improve the quality of human evaluation, 500 data points are randomly collected from the test questions and responses, and the system responses and baseline model responses are labeled by five volunteers. Meanwhile, the five-grade interpretation criteria proposed by Zhang et al. [29] are adopted as labeling criteria.
1. The response is not fluent or is logically incorrect;
2. The response is fluent, but irrelevant to the question, including irrelevant regular responses;
3. The response is fluent and weakly related to the question, but the response can answer the question;
4. The response is fluent and strongly related to the question;
5. The response is fluent and strongly related to the question. The response is close to human language.
In the test sample, 1000 samples were randomly collected in order to calculate the response coherence, response diversity, and length of the conversation; the results are reported in Table 1, and example inputs (e.g., "What are you doing?") and the corresponding responses are shown in Table 2. As can be seen from the data, the output system responses are diverse and coherent. In addition, the model trends toward generating short responses. Meanwhile, foolish responses may be produced with an increase in the length of the question. Foolish responses were identified by comparing generated responses against the predefined foolish responses. In addition, when the BLEU value was more than 0.5, the response was considered to be a foolish one. As can be seen from the data, compared with SAWI-DCNN without FPM, the joint SAWI-DCNN has a strong inhibitory effect on foolish responses. Table 3. Foolish responses evaluation (%).
Models: Foolishness
SAWI-DCNN: 8%
SAWI-DCNN (without FPM): 26%
The Diverse Beam Search algorithm was used to sample the predicted target tokens and select the candidates with the greatest likelihood probability. Some of the sampling results are shown in Table 4. It can be seen that the SAWI-DCNN trends toward generating high-quality responses, whereas foolish responses like "I don't know what you are talking about" and "what?" are easily produced by LSTM+Attention and CNN+Attention. The responses of SAWI-DCNN and the baseline models were sampled randomly and evaluated by humans. The results are shown in Table 5, where the labels (1)-(5) correspond to the grading against the five-step interpretation criteria. For example, 1 corresponds to the response "The response is not fluent or is logically incorrect", and 2 corresponds to the response "The response is fluent, but irrelevant to the question, including irrelevant regular responses". Values are the percentage of the number of responses for the collected sample in each grade. The larger the ratio, the more prone the model is to producing responses with the corresponding feature in the five-step interpretation criteria. The quality of the model can be judged on the basis of the response distribution over the corresponding five-step interpretation criteria, i.e., the higher the quality of the model response is, the higher the distribution of grades will tend to be in the responses. The parameter AVE is the average grade of responses, which is calculated on the basis of the corresponding response distribution of the samples and the weights. As can be seen from the data in Table 5, high-grade responses are more easily generated by SAWI-DCNN than by the baseline models. In addition, there is a trend towards high-quality responses being produced with higher average grade scores. Conclusions In this paper, a generative conversational system was investigated based on a structural model and a linguistic model. The structural model was initially established based on the new CNN and strengthened attention with intention. Similarly, the linguistic model was established based on MMI and FPM. Both were combined into the form of a conversational system. Different models were tested and evaluated under automatic evaluation and human evaluation. The results of automatic evaluation were observed and compared in terms of response diversity, response coherence, and length of the conversation. Meanwhile, the results of the proposed method were also observed and compared based on human evaluation in terms of comprehensive response quality. Finally, by evaluating these comparative results, it can be concluded that the proposed joint model significantly improves the conversational system. This work paves the way for generative conversational systems, in which the optimal combination of a structural model and a linguistic model is the key to improving the response quality of the system.
6,771.8
2019-04-01T00:00:00.000
[ "Computer Science" ]
The identification of α-clustered doorway states in 44,48,52 Ti using machine learning A novel experimental analysis method has been developed, making use of the continuous wavelet transform and machine learning to rapidly identify α-clustering in nuclei in regions of high nuclear state density. This technique was applied to resonant scattering measurements of the 4 He( 40,44,48 Ca,α) resonant reactions, allowing the α-cluster structure of 44,48,52 Ti to be investigated. Introduction Experimental studies of physical systems are often concerned with answering simple questions: Does the Higgs boson exist? Can we observe gravitational waves? Ideal experiments are designed whereby the results depend on the answer to these questions, and so by making such measurements these answers can be inferred. It is, however, often also the case that these fundamental properties are just one of many complex and independent parameters that affect the experimental data. The other parameters could be anything from other fundamental physical constants, which are perhaps unknown or known to poor precision, to experimental effects such as the detector resolution and efficiency. Therefore, in order to answer the 'interesting' questions, one must first answer many 'uninteresting' questions about the measurements, and in fact often it is these uninteresting questions which dominate the efforts of researchers in their fields. In this article we present a novel technique which uses machine learning [1] to bypass the difficult and uninteresting parts of the analysis, and address the fundamental questions directly. Machine learning refers to a set of numerical algorithms which allow computers to learn patterns and make predictions without encoding those patterns explicitly. These techniques have exceptional analytical potential, and have been used to great effect in a plethora of fields, for example to perform image analysis and facial recognition [2], to understand the sentiment of a paragraph of text [3], to automatically identify interesting events in high energy physics experiments, such as the LHC [4], and to automatically distinguish between true gravitational wave signatures and those produced by non-astrophysical noise in LIGO data [5]. Here the fundamental question we wish to address is: given an experimental energy spectrum produced by the resonant scattering of a nucleus with 4 He, is α-clustering observed in the structure of the compound nucleus formed in this reaction? Alpha-clustering is the phenomenon whereby protons and neutrons form sub-structures within the nucleus, and it can usually be ascribed to specific nuclear energy levels, known as α-clustered states. This has been shown to play a pivotal role in dictating the properties and interactions of light nuclei [6,7], yet it has not been observed to the same extent in heavy nuclei. It is tempting, therefore, to suggest that systems which contain few nucleons are more likely to form cluster structures than those composed of many nucleons, and efforts to understand this trend have led to considerable experimental and theoretical work investigating α-clustering in medium mass nuclei, some of which is detailed in Ref. [8].
It is unclear, however, whether the reduction of experimentally observed α-cluster structures in heavy nuclei truly reflects a shift in structural preference away from α-clustering, or whether experimental difficulties which arise with increasing nuclear mass have concealed the cluster structures in this region. One experimental difficulty which is unique to heavier systems is the increasing nuclear level density. This leads to more complex experimental spectra, and also means that α-clustered states often serve as doorway states [9] in the α decay-channel, and as such, rather than searching for a single α-clustered state, one must instead search for groups of fragmented states all sharing the strength of the original clustered state. Usually the analysis of these experimental spectra requires the extraction of the properties of all of the energy levels which are populated in the reaction, and then the energy levels are compared with a theoretical nuclear model in order to ascertain whether or not they exhibit signs of α-clustering. However, the significant increase in the complexity of the spectra means that unambiguously extracting all of the energy levels is a very challenging prospect, and is often the primary obstacle when analysing experimental data in this mass region. In this scenario, the uninteresting properties are the energy levels, which are difficult to extract and the majority of which will correspond to non-clustered structures. So rather than attempting to extract the states, in this article a technique is developed which simulates many spectra, each time assuming a unique and random combination of nuclear states, but in each case controlling for whether or not an α-clustered structure is present. Machine learning is then employed to learn the differences between spectra which do or do not contain α-clustered states, independent of the properties of the other states in the spectrum. This algorithm can then be applied to the measured data to ascertain the existence of α-clustering. In this article this technique is employed to examine the evolution of α-clustering in titanium isotopes. Previous work on 44 Ti has identified a range of α-clustered states, many of which have been shown to be fragmented [10][11][12]. These observations are in agreement with predictions made by αcluster model calculations [8, ch. 2] and a deformed basis Antisymmetrised Molecular Dynamics calculation [9], indicating a good understanding of the underlying α-cluster structure. There has, however, been comparatively little work done to investigate similar structures in neutron rich titanium isotopes. Analyses of α-transfer reactions have indicated that the degree of α-clustering in titanium isotopes decreases with increasing nuclear mass, both in the ground state [13,14] and in excited states [11], however, a measurement of 48 Ca(α,α) elastic scattering shows significant resonant structure [15]. This may be indicative of α-clustered states in 52 Ti above the α-decay threshold, however no formal analysis was performed on this measurement. The present work investigates 44,48,52 Ti by measuring the resonant scattering reactions, 4 He( 40,44,48 Ca,α). This allows the degree of α-clustering above the α-decay threshold to be compared consistently between the three isotopes, and is an ideal testing ground for a novel machine learning technique as 44 Ti can be used to test the reliability of the procedure, as it is already well understood, before the technique is applied to the neutron rich isotopes. 
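Before the details are presented in the following sections, a minimal sketch of the overall analysis strategy described above may be helpful: generate many simulated spectra labelled as clustered or non-clustered, train a classifier on features derived from them, and apply it to the measured spectrum. The function names and the use of scikit-learn's random forest are illustrative assumptions, not the authors' code; the actual simulation and feature extraction are described in the sections that follow.

```python
# Illustrative sketch of the simulate-then-classify strategy (not the authors' code).
# simulate_spectrum() and extract_features() stand in for the R-matrix simulation
# and the CWT + PCA feature extraction described later in the article.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def build_training_set(n_per_class, simulate_spectrum, extract_features):
    X, y = [], []
    for label in (0, 1):                      # 0 = non-clustered, 1 = clustered
        for _ in range(n_per_class):
            spectrum = simulate_spectrum(clustered=bool(label))  # random levels each time
            X.append(extract_features(spectrum))
            y.append(label)
    return np.array(X), np.array(y)

def clustering_likelihood(measured_spectrum, X, y, extract_features):
    # min_samples_leaf approximates the paper's limit on events per tree node
    rfc = RandomForestClassifier(n_estimators=1000, min_samples_leaf=75)
    rfc.fit(X, y)
    features = np.asarray(extract_features(measured_spectrum)).reshape(1, -1)
    # Fraction of trees voting "clustered" = the pseudo-likelihood discussed later
    return rfc.predict_proba(features)[0, 1]
```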
Experimental measurements and results The 4 He( 40,44,48 Ca,α) measurements were made using the Thick Target Inverse Kinematics (TTIK) technique [16]. The reaction chamber was filled with 4 He gas, which acted firstly as a medium to smoothly decrease the energy of the calcium ions as they travel through the chamber via electronic interactions, and secondly as the target for the desired nuclear reactions. This allows a measurement to be made of the entire excitation spectrum without changing the beam energy. The scattered α-particles were measured using two 1mm thick Double-sided Silicon Strip Detectors (DSSDs), placed at the opposite end of the reaction chamber to the beam entrance in the E-E configuration. This ensured that the measurements consisted purely of α-particles and allowed the measurements to be made at a scattering angle of 180 • in the centre-of-mass frame. The measured spectra are shown in Fig. 1, and more details on the experimental work can be found in Ref. [17,18]. A crucial aspect of the TTIK technique is that the measured spectra are in fact a convolution of the true excitation function with the experimental resolution. This serves to reduce the height of any resonances which are much narrower than the experimental resolution. This behaviour can severely hinder the analysis of TTIK spectra if the experimental resolution is poor, however, if it is small enough such that it only impacts states which are too narrow to be considered α-cluster candidates, and it does not cause neighbouring states to become indistinguishable, then it can be considered a useful property as its only effect will be to remove nonclustered states from the spectra. In the present work, REX [19], a Monte-Carlo simulation of thick target resonant scattering experiments, was used to calculate the experimental resolution as 45 keV at Full Width Half Maximum. Alpha clustered doorway states model The cross-section, dσ/d , of the resonant reactions measured in this work can be calculated directly from the energy levels in the compound nucleus using R-matrix theory [20]. It is, therefore, possible to simulate dσ/d by first generating a set of 'non-clustered' energy levels, and then optionally coupling these levels to an α-clustered doorway state. The simulated spectra are generated from the energy levels using the Simplified R-Matrix [21], and classified as either non-clustered (no α-clustered doorway states), or clustered (one α-clustered doorway state). Many clustered and nonclustered spectra were generated, each time with a unique and random set of energy levels. The Simplified R-Matrix calculates dσ/d for reactions where all initial and final state nuclei are spin-0. The cross-section is calculated as a function of excitation energy, E x , and centre-of-mass scattering angle, θ , from the excitation energies, E λ , orbital angular momenta, L λ , partial decay widths, λμ , and total decay widths, λ , of the energy levels, where the energy levels are indexed by λ and the decay channels are indexed by μ. This is written explicitly as where is the centre-of-mass energy of the system, m μ is the reduced mass, P L λ is a Legendre polynomial of order L λ and φ L λ is the partial wave phase shift. The partial wave phase shifts exist only in the simplified version of the R-matrix to account for the behaviour of the interference between the resonances and the background amplitude. In this work they were randomised between 0 and π to account for all possible types of interference. 
In practice the cross-section is not measured as a continuous quantity, and instead is measured in a finite number of excitation energy bins. In order to ensure that the simulations match the experimental data the cross section was calculated discretely for each experimental bin, dσ/d n , where E x n and θ n are the excitation energy and scattering angle of the bin respectively. Additionally, the background amplitude was defined by fitting a smoothing spline to the experimental spectra which approximated the background, and sampling this at E x n . Finally the simulated cross-section was convoluted with the experimental resolution, and noise was added based on the experimental signal to noise ratio, in order to make the simulations as directly comparable to the measured spectra as possible. The non-clustered energy levels were simulated by generating a set of shell-model like energy levels, known as class-I energy levels and indexed by λ I , characterised by ensuring that the levels adhere to the appropriate statistical distributions (described below) indicative of the shell model. The partial widths, λ I μ , for each decay channel μ were constructed to follow Porter-Thomas statistics [22] by Gaussianly distributing the reduced widths, γ λ I μ , with a mean of 0 and variance given by γ 2 μ . The partial widths are calculated from the reduced widths using where P μL λI is the penetrability through the combined Coulomb and centrifugal barrier, and L λ I is the orbital angular momentum in channel μ. The penetrability was calculated from the regular and irregular Coulomb wavefunctions [23]. The values of γ 2 μ dictate the average strength of each decay channel. In these simulations they were chosen by defining the mean square ratio to the Wigner limit for single particle decays, θ 2 sp , and the ratio to the single particle strength for each decay channel, R μ/sp . The Wigner limit, γ 2 μw , is a theoretical upper bound on the reduced width. Written formally, this gives For all of the spectra in this work the only open decay channels are the proton, neutron and α channels. Since the proton and neutron decays are both decays to single particles, R p/sp , R n/sp ∼ 1, however, one would expect average αdecay strength to be weaker than the proton and neutron strengths for purely shell-model type states as the α-particle is a more complex particle, and so R α/sp < 1. The excitation energies, E λ I , and spins and parities, J π , were generated such that the nearest neighbour state spacings of states with the same J π followed the Wigner distribution [24], defined as where D J π is the mean nearest neighbour state spacing for states with the same J π , and is calculated from the overall mean state spacing, D , using the Gaussian cutoff factor from the Fermi-gas model [23], where the spin cutoff factor σ spc is defined by assuming that the nucleus is a rigid rotating sphere. The clustered spectra were generated by coupling an αclustered doorway state, known as a class-II state, to the set of class-I states, to produce a set of compound states, indexed by λ. The class-II state was assumed to exist in a highly deformed secondary minimum in the deformation potential energy surface, and was characterised as being α-clustered by a large ratio to the Wigner limit in the α-channel, θ 2 II,α , and zero decay widths in all other channels. Its spin and parity, J π II , were randomised, and its excitation energy, E II , was randomised uniformly within the measured energy range. 
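A minimal numerical sketch of the class-I level generation just described is given below: nearest-neighbour spacings drawn from the Wigner distribution, reduced width amplitudes drawn from a zero-mean Gaussian so that the partial widths follow Porter-Thomas statistics, and partial widths obtained as Γ = 2Pγ². The penetrability function and the specific parameter values are placeholders, not those used in the paper.

```python
# Hedged sketch of generating shell-model-like (class-I) levels (not the authors' code).
# penetrability(E, L) is a placeholder for the combined Coulomb + centrifugal barrier factor.
import numpy as np

rng = np.random.default_rng()

def wigner_spacings(n_levels, mean_spacing):
    """Draw nearest-neighbour spacings from the Wigner surmise with mean <D>."""
    u = rng.random(n_levels)
    # Inverse CDF of P(s) = (pi s / 2<D>^2) exp(-pi s^2 / 4<D>^2)
    return mean_spacing * np.sqrt(-4.0 * np.log(1.0 - u) / np.pi)

def class_I_levels(e_min, e_max, mean_spacing, mean_gamma2, L, penetrability):
    # Generate more spacings than needed, then keep levels inside the energy window
    n_guess = int(2 * (e_max - e_min) / mean_spacing)
    energies = e_min + np.cumsum(wigner_spacings(n_guess, mean_spacing))
    energies = energies[energies < e_max]
    # Porter-Thomas statistics: Gaussian reduced width amplitudes, so gamma^2 ~ chi^2(1)
    gamma = rng.normal(0.0, np.sqrt(mean_gamma2), size=energies.size)
    partial_widths = 2.0 * penetrability(energies, L) * gamma**2   # Gamma = 2 P gamma^2
    return energies, partial_widths
```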
The coupling between the class-I and class-II states was based on the work by Bjørnholm and Lynn [25] for the treatment of fission isomers. The compound states were generated by solving the eigenvalue equation where E I is a diagonal matrix containing E λ I , E λ is the excitation energy of the compound state and C (I) λ and C (II) λ are the coefficients which produce the compound state from the class-I and class-II states. The matrix H c is a 1 × N I matrix, where N I is the total number of class-I states. The elements of H c are 0 for class-I states which have a different J π to the class-II state, and otherwise are taken from a normal distribution, centred on 0 with a variance given by H 2 c . This ensures that the class-II state only couples to class-I states of the same J π , and the use of a normal distribution is justified in Ref. [25] to account for the random behaviour of the overlap between the class-I and class-II state wavefunctions. The value of H 2 c defines the strength of the coupling, and, therefore, the number of class-I states which will couple significantly to the doorway state, known as the fragmented states. However, the number of fragmented states depends also on the state spacing of the class-I states. Therefore, N c is defined for each clustered spectrum, which is directly proportional to the expected number of fragmented states, and from this H 2 c is defined as The reduced width amplitudes of the compound states are calculated from C (I) λ and C (II) λ as An ensemble of spectra, containing an equal number of clustered and non-clustered spectra, were generated using this model. The input parameters, D , θ 2 sp , R α/sp for both types of spectra and additionally θ 2 II,α , N c , E II , J π II for the clustered spectra, were randomised within sensible ranges to ensure that all reasonable scenarios were accounted for. Choosing the ranges for each of these parameters is akin to choosing a prior distribution in Bayesian statistics. The ranges used and their justifications are given in Table 1, and an example of the clustered and non-clustered spectra produced are shown in Fig. 2. This spectrum ensemble was used as 'training data' to train a Random Forest Classifier (RFC) to classify spectra as either clustered or not clustered, where each spectrum is characterised by a set of 'features' calculated from dσ/d n . More details on the RFC are given in Sect. 4. The features used were calculated from dσ/d n using a combination of the Continuous Wavelet Transform (CWT) [28] and a Principle Component Analysis (PCA) [29]. It Table 1 The parameter ranges used to produce the ensemble of spectra Parameter Range Justification D 40-60 keV Chosen empirically based on the measured spectra, and is consistent with the state spacings measured in TTIK measurements of α-scattering from other medium mass nuclei [26] θ 2 sp 0.02-0.05 Chosen to generously encompass the value extracted from 44 Ca( p, p) measurements, θ 2 p = 0.034 [27] R α/sp <20% Chosen to ensure that the α-channel was significantly reduced compared with the proton and neutron channels for the shell model type states E II -Chosen to be within the experimentally measured energy range for each measurement J π II ≤ 7 − All higher spins have a negligible contribution to the measured spectra due to the large centrifugal barrier. 
Furthermore only natural parity states were allowed since the entrance channels for all of the measurements were composed of spin-0 nuclei N c 2 Chosen empirically to ensure that the doorway state couples to more than one class-I state, but remains suitably localised-not coupling to all states in the spectrum. This value was not randomised since the randomisation of the state spacings and coupling matrix elements was already sufficient to produce a variety of state fragmentations was shown in Refs. [17,18] that the CWT is an effective tool for the identification of α-clustered doorway states from TTIK measurements. The CWT calculates wavelet coefficients, W ,nm , from dσ/d n by folding it with an appropriately chosen wavelet, (E). The wavelet is scaled by δE m , known as the scale parameter, which allows features in the spectrum to be expanded as a function of scale. The wavelet coefficients are calculated as where is a dummy variable used to facilitate the integration, and in practice the integral was calculated numerically using the trapezoidal rule. In this work the complex Morelet wavelet [28], which can be thought of as a windowed Fourier transform, was used. This is defined formally as where d defines the size of the window, and in this work d = 0.8 MeV. In this case δE m is the equivalent of the period in a typical Fourier transform, and W ,nm is similar to a Fourier transform coefficient, but localised at E x n . In this work 70 values of δE m were used, uniformly spaced between 0 and 1 MeV. The CWTs of the 4 He( 40,44,48 Ca,α) spectra are shown in Fig. 3. In this work the magnitude of the wavelet coefficients, W ,nm , are used and the phases are discarded, as it was observed that the phases contained little useful information regarding the α-clustered nature of the spectrum. It would, however, be inefficient to use W ,nm directly in the RFC as they are not orthogonal, with large correlations between neighbouring values of W ,nm , and a large number of coefficients are required to adequately characterise a spectrum, which leads to an unnecessarily computationally intensive analysis. Instead a PCA is performed on W ,nm as a form of dimensionality reduction. This constructs a new set of orthogonal features from W ,nm , chosen to ensure that the largest fraction of the variance in the original feature set is retained in the fewest possible features. In this case 300 PCA features were used, which accounted for 99.3% of the variance in the W ,nm feature set. More details on the PCA algorithm can be found in Ref. [29]. The PCA algorithm is very sensitive to the initial distributions of the features, and In each case the heatmap shows the magnitude of W ,nm as a function of δ E m and E x n works optimally when these are approximately normally distributed and normalised to a mean of 0 and a variance of 1. In order to accomplish this, the logarithm was taken of W ,nm , and the logged values were independently normalised to have a mean of zero and unit variance across the training data. The PCA was then performed on these normalised log wavelet coefficients. The result of this process is a set of PCA features, PCA k , which each correspond to a certain W ,nm distribution. Some examples of these distributions are shown in Fig. 4 for k = 0, 1, 2, 20, and an example of the stages of producing the PCA variables from a raw spectrum are shown in Fig. 5. 
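The feature extraction pipeline described above can be sketched roughly as follows: fold each spectrum with a scaled, windowed complex (Morlet-like) wavelet on a grid of scale values, take the magnitude and logarithm of the coefficients, standardise them, and reduce their dimensionality with a PCA. The wavelet normalisation and the parameter values here are illustrative assumptions rather than the exact definitions used in the paper.

```python
# Rough sketch of the CWT + PCA feature extraction (illustrative, not the authors' code).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def windowed_wavelet(dE, period, d=0.8):
    """Morlet-like windowed oscillation: 'period' plays the role of delta-E_m, window width d (MeV)."""
    return np.exp(2j * np.pi * dE / period) * np.exp(-(dE / d) ** 2)

def cwt_magnitudes(energy, cross_section, scales):
    coeffs = []
    for period in scales:
        w = [np.trapz(cross_section * np.conj(windowed_wavelet(energy - e0, period)), energy)
             for e0 in energy]
        coeffs.append(np.abs(w))              # keep |W_nm|, discard the phase
    return np.concatenate(coeffs)

def fit_pca_features(training_spectra, energy, n_components=300):
    # Requires at least n_components training spectra; parameters are placeholders
    scales = np.linspace(0.02, 1.0, 70)       # 70 scale values between 0 and 1 MeV
    raw = np.array([cwt_magnitudes(energy, s, scales) for s in training_spectra])
    logged = np.log(raw + 1e-12)              # log, then normalise to mean 0 and unit variance
    scaler = StandardScaler().fit(logged)
    pca = PCA(n_components=n_components).fit(scaler.transform(logged))
    return scaler, pca
```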
Fig. 5 (caption): For each stage of processing the top plot shows an example spectrum, and the bottom plot shows the distribution of some randomly chosen features across the entire training data set.
The consequence of using PCA features as opposed to directly using the W ,nm features is that they much more naturally describe the overall properties of the spectrum than they do the properties of individual resonances within the spectrum. For example it is evident from Fig. 4 that PCA 0 represents the average amplitude of the resonances throughout the spectrum, relative to the amplitude of the noise in the spectrum, and PCA 1 represents whether or not the average resonant amplitude increases or decreases throughout the spectrum. The higher order PCA variables then begin to account for the shapes of the resonances, the spacings between the resonances and the widths of the resonances, however, these properties are all merged by the PCA algorithm, obscuring the properties of individual resonances. While this may lead to a reduction in the sensitivity of this algorithm to the more subtle effects of α-clustering on the spectra, the dominant effects ought to still be captured by the PCA features. Machine learning An RFC [30] is an ensemble machine learning method, which combines many randomised decision trees to produce a more robust and sophisticated classification than is possible using a single decision tree. Each tree is randomised by training it on a random subset of the training data, and at each node in the tree the optimal splitting criterion is chosen from a subset of the available features. The RFC classifies a spectrum by allowing the individual decision trees to perform the classification independently, and then averaging the results. This method produces a pseudo-likelihood that the spectrum is clustered, L̃ c , which is calculated as the fraction of the decision trees which predict that the spectrum is clustered. It is possible to calibrate the pseudo-likelihood to give the true likelihood that the spectrum is clustered, L c . This calibration was performed by calculating L̃ c for every spectrum in the training data via fivefold cross-validation [31], which splits the training data into 5 segments and then trains the RFC on 4 of those, before using it to calculate L̃ c for the spectra in the 5th segment. This process is repeated, leaving out each of the segments one at a time, until L̃ c has been calculated for every spectrum in the training data. All of the clustered and non-clustered spectra were then binned separately as a function of L̃ c , producing two histograms, N c n and N nc n respectively, with bin centroids at L̃ c,n . The true clustering likelihood was then calculated from these histograms as the fraction of the spectra in each bin that are clustered, given formally as L c,n = N c n / (N c n + N nc n ). Finally a logistic function was fit to L c,n as a function of L̃ c,n , producing the continuous function L c (L̃ c ), under the constraints that L c (0) = 0 and L c (1) = 1. This function was then used to convert between L̃ c and L c , an example of which is shown in Fig. 6. Five-fold cross-validation was also used to tune the RFC hyper-parameters by calculating the percentage of the cross-validated classifications which were correct, known as the classification accuracy. The hyper-parameters that were tuned were the total number of decision trees which compose the RFC, and the minimum number of events which may be contained within a single node of a decision tree.
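A rough sketch of the cross-validated calibration described above is given below: obtain an out-of-fold pseudo-likelihood for every training spectrum via five-fold cross-validation, histogram clustered and non-clustered spectra as a function of that pseudo-likelihood, compute the clustered fraction per bin, and fit a logistic curve mapping pseudo-likelihood to calibrated likelihood. The logistic parametrisation and the binning are assumptions for illustration, not the paper's exact implementation.

```python
# Illustrative sketch of calibrating the RFC pseudo-likelihood (not the authors' code).
import numpy as np
from scipy.optimize import curve_fit
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict

def calibrate(X, y, n_bins=20):
    rfc = RandomForestClassifier(n_estimators=1000, min_samples_leaf=75)
    # Out-of-fold pseudo-likelihood (fraction of trees voting "clustered") per spectrum
    pseudo = cross_val_predict(rfc, X, y, cv=5, method="predict_proba")[:, 1]

    edges = np.linspace(0.0, 1.0, n_bins + 1)
    centres = 0.5 * (edges[:-1] + edges[1:])
    n_clustered, _ = np.histogram(pseudo[y == 1], bins=edges)
    n_nonclustered, _ = np.histogram(pseudo[y == 0], bins=edges)
    frac = n_clustered / np.maximum(n_clustered + n_nonclustered, 1)   # L_c per bin

    # Logistic mapping, rescaled so that it passes through (0, 0) and (1, 1)
    def logistic(p, a, b):
        f = 1.0 / (1.0 + np.exp(-a * (p - b)))
        f0 = 1.0 / (1.0 + np.exp(a * b))
        f1 = 1.0 / (1.0 + np.exp(-a * (1.0 - b)))
        return (f - f0) / (f1 - f0)

    params, _ = curve_fit(logistic, centres, frac, p0=[10.0, 0.5])
    return lambda p: logistic(p, *params)   # converts pseudo-likelihood to calibrated likelihood
```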
The optimal values chosen were 1000 decision trees and 75 events respectively. While traditional RFCs use fully grown decision trees, rather than limiting them by defining a minimum number of events per node, it was found in this work that fully grown trees sometimes overfit to the training data, producing unreliable results. In addition to the classification accuracy, two other quantities were used to assess the quality of the RFC, the fraction of the clustered spectra which were classified correctly (sensitivity) and the fraction of the non-clustered spectra which were classified correctly (specificity). These are often also referred to as the True Positive Rate (TPR) and True Negative Rate (TNR) respectively. These are used, in addition to the accuracy, to probe the behaviour of the RFC in the following section. Results Three RFCs were produced, one each for 44 Ti, 48 Ti and 52 Ti, with cross-validated classification accuracies of 76%, 77% and 79%, sensitivities of 74%, 78% and 81%, and specificities of 77%, 76% and 79% respectively. It is interesting to observe the dependence of the sensitivity of the RFCs on some key simulation parameters, as the sensitivity can be treated as a measure of how easy it is to observe an α-clustered doorway state. The three RFCs all behaved similarly, so one can assume that the conclusions drawn here are applicable to all three measurements, and only the results for 44 Ti are presented. Firstly it was important to ascertain that the RFCs were capable of identifying α-clustered states in spectra containing more than one, despite being trained only on spectra with a single α-clustered state. The sensitivity of the RFC was plotted as a function of the number of class-II states in the spectra in Fig. 7, demonstrating that the sensitivity increases with the number of class-II states in the spectrum.
Fig. 7 (caption): The sensitivity of the RFC as a function of the number of class-II states in the spectrum.
This is to be expected for a sensible RFC since if there are many class-II states present it becomes less likely that the RFC will miss all of them. The sensitivity was calculated as a function of θ 2 II,α by binning the training data uniformly into 40 θ 2 II,α bins and calculating the sensitivity independently for each bin. These values were then smoothly interpolated using a Gaussian process with a Matern kernel, which assumes that the data points ought to be correlated highly with those close in θ 2 II,α , and uses the magnitude of the errors on the data points to infer the smoothness of the interpolation and the size of the confidence interval. The data and the Gaussian process fit are shown in Fig. 8.
Fig. 8 (caption): The sensitivity of the RFC as a function of θ 2 II,α , fit with a Gaussian process using a Matern kernel (line). The shaded region indicates a 1σ confidence interval.
Below θ 2 II,α ∼ 0.25 the sensitivity decreases, while above it plateaus. This indicates that an α-clustered doorway state with a large ratio to the Wigner limit in the α-channel, above 0.25, is much easier to observe than a similar state with a smaller θ 2 II,α . This is a sensible result, as states with small α-widths will look similar to class-I states, and, therefore, be more difficult to identify. Finally the sensitivity was calculated for each J π II , as a function of E II . This is plotted in Fig.
9, and shows that at high energies, low-spin doorway states are difficult to observe, and conversely at low energies high-spin doorway states are difficult to observe. This is because the resonant amplitude is proportional to (2J + 1) 2 , which amplifies high-spin states, however, the increased centrifugal barrier for high spin states dramatically decreases their penetrability factor and, therefore, their decay widths. Therefore, at low energies, where the barrier penetrability is especially dominant, the high spin states are difficult to populate, whereas at high energies they are populated and their increased amplitude dominates the spectrum, obscuring the low-spin resonances. Upon their application to the experimentally measured data, the RFCs predicted clustering likelihoods of 92%, 41% and 83% respectively, indicating that it is very likely that 44 Ti and 52 Ti contain at least one α-clustered doorway state and unlikely that 48 Ti does. This is consistent with previous observations of α-clustered doorway states in 44 Ti [10][11][12], as well as with a previous analysis of these data, which iden- Fig. 9 The sensitivity of the RFC as a function of the excitation energy of the class-II state, for each J π of the class-II state (data points with error bars). The values are fit smoothly using a Gaussian process with a Matern kernel (solid line), and a 1σ confidence interval is shown (shaded region). The Gaussian process fits are compared in the bottom-right plot tified doorway states in 44 Ti and 52 Ti but not in 48 Ti by examining the characteristic CWT scales of these measurements [17,18]. Next, the sensitivity of these results to the ranges used to produce the ensemble of training spectra was investigated. The upper and lower limits of D and θ 2 sp , the lower limit of θ 2 II,α , the upper limit of R α/sp and the value of N c were all varied, and new training ensembles were generated, to which new RFCs were fit and clustering likelihoods were recalculated for each isotope. The clustering likelihoods are plotted as a function of the parameter limits in Fig. 10. Firstly, while L c is almost completely insensitive to the choice of limits on D , it does exhibit a dependence on the other parameter limits, to varying degrees of severity. The clustering likelihood decreases slightly for all isotopes as both θ 2 sp limits increase. This is because as these limits increase, the average widths of the non-clustered resonances increases, reducing the difference between clustered and nonclustered spectra. The clustering likelihood also decreases for all isotopes as the lower limit on θ 2 II,α increases. Increasing this limit effectively increases the threshold at which a state is considered α-clustered, and consequently the clustering likelihood ought to naturally decrease as this increases and the criteria for α-clustering gets harsher. It is also the case that the clustering likelihoods increase for low values of R α/sp . This is because the value of R α/sp dictates the average size of the non-clustered resonances. If the simulated resonances in the non-clustered spectra are all very small, then any resonances in the measured spectra will produce a large clustering likelihood. Finally, it can be seen from the clustering likelihoods as a function of N c that while fragmented states are observed in 44 Ti and 52 Ti, if one looks for non-fragmented α-clustered states instead (i.e. 
small values of N c ), then the clustering likelihood falls below 0.5 for all three isotopes, indicating none are observed. This is consistent with the expectation that if α-clustered states exist in this mass region, they ought to behave as doorway states. Overall however, while there are some small variations in L c for extreme values of the parameters, the fundamental results that 44 Ti and 52 Ti contain α-clustered doorway states, while 48 Ti does not, are preserved, indicating a robust analysis. It is possible to calculate the relative importances of each PCA parameter, which indicates which parameter has the most influence over the resulting classification. This is calculated by evaluating the average 'height' of each parameter in the decision trees, and assuming that the most important parameters are those that are used earlier (or higher). These importances are plotted in the lower panel in Fig. 11. It is clear that the importance is highest for the lowest order PCA variables, suggesting that it's the overall group properties which contribute most significantly to the classification, for example the average resonance amplitudes, and the higher order terms are not as important. This demonstrates that the RFC is predicting the existence of α-clustered doorway states by examining the average resonant amplitude observed in the spectra, and how the resonant amplitude varies as a function of excitation energy. It is also possible to calculate the contribution each PCA feature makes to L c , L c,k , such that L c = 0.5+ k L c,k . For example, a negative contribution for a given parameter means Fig. 10 The clustering likelihood for 44 Ti (red), 48 Ti (green) and 52 Ti (blue), as a function of the limits used for the training data. In each case one limit is varied, and the others are held constant at their default values given in Table 1. In each plot the horizontal black dashed line indicates L c = 0.5, and the vertical black dotted line indicates the default parameter value that parameter represents a swing towards not clustered, and a positive contribution represents a swing towards clustered. These clustering likelihood contributions are plotted for each nucleus in the bottom panel of Fig. 11. These values can be used to assess exactly how the RFCs made the classification decisions for 44,48,52 Ti. In all three cases PCA 0 contributes negatively, indicating that alone the average amplitude of the resonances is not large enough to demonstrate the existence of an α-clustered doorway state. However, in the cases of 44 Ti and 52 Ti PCA 1 makes a very large positive contribution to L c . It is clear from looking at the spectra that both of these nuclei have large resonances at low excitation energies, and so it seems reasonable to conclude that the existence of large resonances at low excitation energies is indicative of αclustered doorway states in 44 Ti and 52 Ti. Note this work has used a binomial classification system, where the result must be one of two results (clustered or not) which could introduce a systematic bias. 
In future work it could be generalised to a multinomial classification problem, where predictions are attempted as to whether the data are (A) shell model, (B) alpha clustered, (C) alpha clustered and coupled to shell model, (D) … etc., with a different class for each nuclear structure or model to be tested. Discussion To summarise, by training an RFC to evaluate the differences between spectra simulated either with or without α-clustered states, α-clustering has been identified in 44 Ti and 52 Ti. The results for 48 Ti are less conclusive, but tentatively suggest that α-clustering is not present in this energy region. If one searches for a single α-clustered state in the spectra, rather than sets of fragmented α-clustered states indicative of a doorway state, then none of the measurements return a positive result, indicating that the α-clustered structures observed in 44 Ti and 52 Ti act as doorway states. This suggests that the doubly-magic nature of 40 Ca and 48 Ca is particularly important for the existence of α-clustered states. The use of machine learning here has allowed these conclusions to be drawn without requiring the extraction of the individual spins, parities, energies and widths of the nuclear energy levels. This is very powerful, as it is likely that those parameters could not be robustly extracted from the current measurements alone, yet using this technique it was still possible to quantitatively answer the crucial, fundamental questions of α-clustering in this mass region. It is important to note that the combination of the PCA and the RFC here constituted quite a 'blunt' machine learning algorithm, since it effectively focused only on the average resonant amplitude of the measurements and ignored the more subtle features such as the state spacing and the resonance shapes. It may be possible to improve upon the results shown here by employing a more sophisticated machine learning technique, such as convolutional neural networks, which have been used with great success for image analysis in other fields [32]. Data Availability Statement This manuscript has no associated data or the data will not be deposited. [Authors' comment: The spectra here are simulated. The related data are addressed by publication [17], so the present data availability statement is valid.] Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
8,670.2
2021-03-01T00:00:00.000
[ "Computer Science" ]
A Note on Cube-Full Numbers in Arithmetic Progression We obtain an asymptotic formula for the cube-full numbers in an arithmetic progression n ≡ l (mod q), where (q, l) = 1. By extending the construction derived from Dirichlet's hyperbola method and relying on the Kloosterman-type exponential sum method, we improve the very recent error term with x^(118/4029) < q. Introduction and Main Results Let k > 1 be a fixed integer and n be a positive integer. We call n a powerful number (or k-full number) if n = 1 or, for every prime p dividing n, p^k also divides n. Let P k denote the set of powerful numbers. Taking k = 2, 3 defines square-full numbers and cube-full numbers, respectively. Erdős and Szekeres [1] first introduced powerful numbers and gave an asymptotic formula for the counting function of n ≤ x, n ∈ P k , where c k,m are effective constants and Δ k (x) ≪ x^(1/(k+1)). From then on, many authors have studied the powerful numbers and obtained a lot of relevant conclusions (see [2][3][4][5][6][7][8][9][10][11][12][13][14][15][16][17][18] and references therein). In 2013, Liu and Zhang [19] investigated the distribution of square-full numbers in arithmetic progressions and obtained the asymptotic formula
  Σ_{n ≤ x, n ∈ P 2 , n ≡ l (mod q)} 1 = α(l, q) x^(1/2) + O(q^((49/141)+ε) x^(19/47))
under the condition (q, l) = 1. By utilizing the method of exponent pairs, Srichan [20] then obtained an asymptotic formula for the same counting function, where the error terms had been corrected by Watt [MR3265055]. Recently, Chan [21] obtained a new asymptotic formula for Σ_{n ≤ x, n ∈ P 2 , n ≡ l (mod q)} 1, which improved his own result with Tsang [22]. As a critical step, he [21] mainly dealt with a sum of a particular form by following closely Montgomery and Vaughan's construction [23]. It is somewhat similar to Dirichlet's hyperbola method shown in Figure 1. Actually, they divided the above sum into four parts as shown in Figure 2 and then discussed them separately. Motivated by this idea, we turn to discuss the corresponding sum with three parameters. By extending the construction from Montgomery and Vaughan [23], the summation (5) is divided into eight parts as shown in Figure 3. Then, relying again on the Kloosterman-type exponential sum method, an asymptotic formula of (5) is obtained. Finally, an asymptotic formula for cube-full numbers in an arithmetic progression is derived. Some Lemmas Before we start the proof, let us give a few lemmas which are needed later. Proof. The first result can be found in Lemma 3 of [21]. The second and third ones can be proved in the same way. The proofs of the last three are slightly different.
For example, by orthogonal property of additive characters, we have Journal of Mathematics 3 where G 3 is the set of all characters χ(mod q) such that χ 3 � χ 0 , the principal character. □ Lemma 2. For q ≥ 1 and (l, q) � 1, Proof. Let ‖x‖ be the distance from x to the nearest integer; then, we have Proof. Using the trivial estimation of the innermost sum, we have en, by orthogonal property of additive characters, we obtain Interchanging the order of summations and combining Lemma 2 and Eq. (12.48) on page 324 of [24], we have Finally, we get By definition of N 3 (n; q), N 3 (n; q) ≪ ε q ε (see Lemma 2.2 in [25]). Lemma 4. If we define Proof. Following much the same way as Lemma 4 of [21], we first suppose q � rs with (r, s) � 1. By the "reciprocity" formula ss + rr ≡ 1(mod q), where ss ≡ 1(mod r) and rr ≡ 1(mod s), and the additive multiplicity of exponential function e an 3 + bn 4 q � e asn 3 + bsn 4 r e arn 3 + brn 4 s , (24) with n � sx + ry, Now we just need to discuss the argument in the following cases: (I) Prime moduli q � p case. Now eorem 2 obtained by Moreno and Moreno [26], which is a special form of the Bombieri-Weil bound [27], implies provided that (( is is impossible if p > 4, by comparing the degrees of both sides of the above. If p ≤ 4, the validity of (26) can be easily checked. (II) Prime power moduli q � p β case with β > 1. Obviously, we only need to consider it with the assumption (a, b, p) � 1. Following the proofs of Lemma 12.2 and 12.3 in [24] with the equation of S(a, b; q), we obtain where g(y) � ay 7 + b y 4 , Journal of Mathematics Note that g ′ (y) � ((3ay 10 − 4by 3 )/y 8 ) and h(y) � ((3ay 11 + 10by 4 )/y 10 ). Now we concentrate on the number of solutions of congruence equation β � 2α with α ≥ 1. en, the congruence equation is Relying on the properties of indices, we deduce that (32) has no solution when p > 3 and one solution when p � 3. Next, we assume (b, p) � 1. If p � 2 and 4‖a, then (32) has at most seven solutions. And if p > 3 and (a, p) � 1, (32) also has at most seven solutions. en, we have where β � 2α + 1 with α ⩾ 1. Firstly, in the same way, if (b, p) � p, by the analysis in the case β � 2α, the sum in (30) is empty unless p � 3 in which case one has In any case, we have Combining (25), (26), (33), and (34), we finally obtain which completes the proof of Lemma 4. By applying Lemma 4, we have the following. e remaining part of the proof is similar to eorem 4 in [21]. Proof of Theorem 2 Consider three positive parameters λ, μ, and ]. By extending the construction from Montgomery and Vaughan [23] as shown in Figure 3, we have Journal of Mathematics First, we estimate T 3 . For T 33 , we have by Lemma 3. en, we estimate T 31 and T 32 . For T 31 , we know e first term in the above formula is In order to simplify our final result, by using Euler's summation formula which can be found in eorem 3.2 in [28], the constant ∞ b�1 (1/b (4/3) ) can be rewritten as us, we obtain the asymptotic formula of T 31 as Next, we deal with T 32 . 8 Journal of Mathematics If we let F c (μ) � a≤μ N 4 (lac 3 ; q) − ((ϕ(q))/q)μ, then the first term in the above formula is − x (1/4) q c≤x ] N 4 lc 3 ; q − ((ϕ(q))/q) And the second term is So, we can get And for T 34 , following closely Chan [21] as shown in Figure 2 instead of Dirichlet's hyperbola method shown in Figure 1, we just need to divide the interval (a, b): { a > x n , b > x m , a 3 b 4 ≤ x} in the same way. 
Note that the sum Journal of Mathematics 9 can be estimated with the help of asymptotic formula given at the end of page 101 in [21]; then, using Lemma 5, we can get where 2 J 1 satisfies (X μ 1 /(2 J 1 q (1/2) )) ≪ 1. Picking λ 1 to have the same size as μ 1 and combining the previous results, we have For T 2 , in the same way as T 3 , we have 10 Journal of Mathematics And we can obtain For T 22 , again in the same way as the proof of T 3 , we have (54) If we let F b (μ) � a≤μ N 5 (la 2 b; q) − ((ϕ(q))/q)μ, then the first term in the above formula is Journal of Mathematics and the second term is en, we can obtain 12 Similar to the proofs of T 2 and T 3 , we can get the following: Journal of Mathematics 13 Finally, we discuss T 8 as follows: dividing interval Let a 0 be the intersection of the plane b � x μ , c � x ] + n((x ((1− 3λ− 4μ)/5) − x ] )/m), and curved surface a 3 b 4 c 5 � x, which is and then we obtain Similarly, we have We define the intersection of a 3 b 4 c 5 � x and x ] + n((x ((1− 3λ− 4μ)/5) − x ] )/m) as 14 Journal of Mathematics Now we divide the interval in the nth part and follow the construction in [23]; firstly, we have rectangles In the remaining regions we place additional rectangles R ijk . If we let 1 ≤ k ≤ 2 j− 1 , in the same way we place a further rectangles and so on, then we can get R ijk which is e remaining regions are In this part, expanding R i and R ijk to a cube with height ((x ((1− 3λ− 4μ)/5) − x ] )/m), we can get R ni , R nijk , S ni , S nijk S nijk ′ , and the remaining region S n correspondingly. en, we have the asymptotic formula en, we further obtain in which the main term is Now we deal with the error term of T 8 . Note that us, the area of S niJk is at most which implies that the estimation of the first error term is Area of S niJk q ≪ 1 Conflicts of Interest e authors declare that they have no conflicts of interest.
2,364.6
2021-03-09T00:00:00.000
[ "Mathematics" ]
Using Virtual Objects With Hand-Tracking: The Effects of Visual Congruence and Mid-Air Haptics on Sense of Agency Virtual reality expands the possibilities of human action. With hand-tracking technology, we can directly interact with these environments without the need for a mediating controller. Much previous research has looked at the user-avatar relationship. Here we explore the avatar-object relationship by manipulating the visual congruence and haptic feedback of the virtual object of interaction. We examine the effect of these variables on the sense of agency (SoA), which refers to the feeling of control over our actions and their effects. This psychological variable is highly relevant to user experience and is attracting increased interest in the field. Our results showed that implicit SoA was not significantly affected by visual congruence and haptics. However, both of these manipulations significantly affected explicit SoA, which was strengthened by the presence of mid-air haptics and was weakened by the presence of visual incongruence. We propose an explanation of these findings that draws on the cue integration theory of SoA. We also discuss the implications of these findings for HCI research and design. I. INTRODUCTION VIRTUAL reality opens up new possibilities for human action and interaction. This has expanded the horizons of human agency and has promising applications in various domains such as medicine [1], motor rehabilitation [2], [3], cooperation [4], and animation and editing [5]. Many of these applications depend on the user interacting with a virtual object in an immersive or non-immersive environment. This is normally through a virtual avatar, which is itself controlled by the user. An important consideration is the means of this interaction. One option is through a physical device such as a controller, or a wearable device that can track the user's movements. Another option is through hand-tracking which allows the user to directly interact with virtual environments, which has been suggested to be a more naturalistic mode of interaction [6].
Here, we consider the psychological variable known as Sense of Agency (SoA) in these potentially more naturalistic interactions with virtual objects.SoA refers to the feeling of control over one's actions and their effects [7].This has been the focus of much research in psychology, and has, in the past 10 years or so, also attracted growing interest in the HCI community [8].Primarily, this is because of the recognition that users' sense of being in control of a system is fundamental to effective user interface design [9].Additionally, HCI research has benefitted from the adoption of rigorous measures and theories developed in psychological research on SoA.This is something we aim to continue in the present study in which we investigate SoA in the context of the avatar-object relationship. II. TRACKING THE VIRTUAL AGENT Although hand-tracking may be preferable in terms of its support of natural gesture-based interaction, there are concerns about its accuracy and precision [6].This is particularly relevant when it comes to SoA, which is known to be acutely sensitive to perturbations in the relationship between a movement and its visual representation (e.g., [10], [11]).This feature of agency processing is captured by the comparator model, which emphasises the importance of a correspondence between expected and actual action feedback in generating the SoA [12]. In line with this, an extensive body of research has already confirmed that the relationship between user and avatar movement is important for the experience of agency.For example, artefacts such as latency, jitter and spatial congruency that disrupt the user-avatar relationship have been shown to impact SoA [13], [14], [15], [16], [17], [18].Naturally, these contingencies are considered to be of importance to user representation in HCI [19].What has seldom been investigated, however, is whether this extends to our interactions with objects in the virtual environment.This is something we explore here, by assessing the effect of manipulating the relationship between a virtual action aimed at an object and the behaviour of that object.Psychological theories have consistently emphasised the importance of environmental feedback in informing SoA (e.g., [20], [21]), and the limited research in this area would appear to support this.For example, it has been shown that when causing an object to move on a screen, the extent of the movement in terms of its congruency with the force applied [22] can impact SoA.In light of this we would expect that disruption of the virtual action-object relationship reduces SoA. Another variable of interest in the context of hand-tracking technology is haptics.Although hand-tracking allows for more naturalistic interactions, as a result there is a lack of tactile feedback that would typically accompany actions in the physical world.Psychological theories of SoA emphasise the importance of bodily feedback and sensory signals in the construction of this experience [20], [21].In this way, the absence of haptic feedback would potentially harm SoA.To overcome this issue, technology has recently been developed that is able to provide mid-air haptic feedback without the need for wearables or physical objects [23], [24].These arrays use ultrasound which targets focused points on the hand, stimulating mechanoreceptors and transmitting vibrotactile sensations. Cornelio-Martinez et al. 
[25] demonstrated mid-air haptic feedback for gesture-based touchless interactions to be beneficial, increasing SoA as compared to visual.Recent research by Evangelou et al. [14] has looked at the presence mid-air haptics for virtual objects of interaction and shown this to optimise SoA under certain conditions.Moreover, their study demonstrated that the presence of this haptic information also protects against the loss of SoA arising from user-avatar latency.This latter finding is important in the present context as it suggests that any putative disruption of the avatar-object relationship with hand-tracking could also be mitigated by the presence of mid-air haptics. III. EXPERIMENT AND CONTRIBUTIONS The present study explores a) the effect of disruption to the avatar-object relationship, and b) it's possible mitigation by haptic feedback in a non-immersive virtual environment.With this, we aim to contribute to HCI by looking at whether the responsiveness of virtual objects affects SoA, and whether the positive effects of mid-air haptics extend from the user-avatar relationship to the avatar-object relationship. Participants pressed a virtual button with their avatar hand, which caused an auditory tone after a brief delay.In a visually congruent condition, the virtual hand made contact with the button which caused it to visibly depress.In an incongruent condition, the button did not visibly depress.The button press interaction was either accompanied with haptic feedback emulating a physical button press or no feedback at all.We measured SoA via the interval estimation paradigm [26].This is an implicit measure of SoA based on changes in time perception associated with voluntary actions and effects (Fig. 1).More specifically, when someone feels in control of their action and its effect, they perceive a compression of time between the two, referred to as intentional binding [27], [28].We supplemented the binding measure with explicit self-report measures of agency, whereby participants were asked to rate their feelings of controlling the button press and causing the tone outcome.These questions are adapted from previous research [14] and tailored to the task. A. Participants Based on a medium effect size (f = .25)and desired power of .9, using G * Power [29] we calculated the required sample size to be 30 participants.In total, we recruited 32 participants (18 females, 1 prefer not to say) via email or the SONA participation database.They received a compensatory £15 Amazon voucher for their participation.Ages ranged from 18-50 years (M = 30.2years; SD = 7.8 years).Two participants were excluded from analyses due to not following instructions (time estimates exceeding the maximum of that instructed) or too many unreported missing trials demonstrating a lack of concentration.Handedness was measured via the short form revised Edinburgh Handedness Inventory [30] to ensure that the dominant hand was used.For mixed handers (scores ranging 60 to −60) their self-reported preferred hand was used.There were no reported visual or hearing impairments. B. Materials and Apparatus An interactive non-immersive virtual scene (see Fig. 
2(a)) was setup and run via Unity game engine (v2019.4.12f1).There was a virtual button and a virtual hand displayed on the screen.A Leap Motion camera was used to track the participants' hand movements, which were displayed on the screen as movements of the virtual hand towards the virtual button.The Leap Motion camera was attached to an Ultraleap STRATOS Xplore development kit which uses ultrasound technology to transmit tactile sensations directly to the hand [23].This was used to provide haptic feedback for the button press (see Fig. 2(b)).The sensation for the button was designed to emulate a physical button force, with a circle shaped sensation that ranged dynamically from maximum intensity at the tip down to no feedback at the point of click, and back up. A 14" HD monitor was used to display the virtual hand and button.The Ultraleap device was positioned so that the participant's dominant hand would be tracked at a similar height to the desk (see Fig. 2(a)).This allowed for a more naturalised button-press interaction.The pressing of the virtual button was followed by an auditory tone after a variable delay.One second later a UI panel was displayed on the screen, which could be interacted with via keyboard and mouse.Headphones were used to minimise the possible sensory conflict between the mid-air tactile sensation and the auditory noise generated by the ultrasound array. C. Tasks and Measures To measure intentional binding (implicit SoA), we adopted the direct interval estimation method from Moore et al. [31].Participants were told that the interval between the button press and the tone would vary randomly between 1 ms and 999 ms.In reality, however, only three intervals are presented: 100 ms, 400 ms or 700 ms in a pseudorandomised order.Participants entered their estimations manually in the UI panel and clicked to submit and continue for each trial.Shorter interval estimations are taken to indicate a stronger SoA. For explicit SoA, two questions were adapted from previous work [14] and tailored to the task: "I feel in control of the button press" for control over intentional action and "I feel I am causing the tone by pressing the button" for causation of the outcome.These were measured on a Likert scale of 1 (strongly disagree) to 7 (strongly agree) and reported every 12 trials (3 times per condition), thus higher average scores represent greater explicit agency. D. Design We used a 2 (haptic feedback) x 2 (visual congruence) withinsubject design.Haptic feedback was manipulated at two levels: with or without.Visual congruence was also manipulated at two levels: the button would depress with the movement of the virtual hand (Fig. 3(a)) or it would remain fixed (Fig. 3(b)).Each 36-trial condition was split into three steps.Each step consisted of 12 trials with the three interval lengths presented in a pseudorandomised order.At the end of each step we collected the self-report measures.A Latin square method was used to counterbalance conditions across participants. E. Procedure Participants were told they would be interacting with a nonimmersive virtual scene, using a hand tracking system, where they would press a button and hear a tone after a short delay.They were required to estimate the time interval between when the button is pressed and when they hear the tone, and that this can vary between 1-999 ms. 
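For concreteness, the sketch below shows one way the trial schedule described in the Design section could be generated: three interval lengths repeated to give 12 trials per step, three steps per condition, and the four conditions ordered per participant by a balanced 4x4 Latin square. This is an illustrative reconstruction under those stated assumptions, not the authors' experiment code.

```python
# Illustrative reconstruction of the 2x2 within-subject trial schedule (not the authors' code).
import random

INTERVALS_MS = [100, 400, 700]
CONDITIONS = [("haptics", "congruent"), ("haptics", "incongruent"),
              ("no haptics", "congruent"), ("no haptics", "incongruent")]

def step_trials():
    """One 12-trial step: each interval appears 4 times, in pseudorandom order."""
    trials = INTERVALS_MS * 4
    random.shuffle(trials)
    return trials

def condition_order(participant_index):
    """Row of a balanced 4x4 Latin square (Williams design), cycled by participant."""
    square = [[0, 1, 3, 2], [1, 2, 0, 3], [2, 3, 1, 0], [3, 0, 2, 1]]
    return [CONDITIONS[i] for i in square[participant_index % 4]]

def participant_schedule(participant_index):
    schedule = []
    for condition in condition_order(participant_index):
        for step in range(3):                       # 3 steps x 12 trials = 36 trials per condition
            for interval in step_trials():
                schedule.append({"condition": condition, "interval_ms": interval})
            # self-report questions would be collected here, after each 12-trial step
    return schedule
```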
For the learning phase, participants were sat at a safe distance from the monitor, put the headphones on and the Ultraleap apparatus was adjusted to a point that it was in a natural position.In this practice block, they would hover their hand over the ultrasound array to enter the virtual environment then press the button by making a downward movement of their estimate in the "Enter milliseconds" UI panel via the keyboard. Following this they clicked submit via mouse.On these practice trials only they also received feedback of the exact time delay.These time delays were all either 50 ms, 500 ms and 950 ms to give them an idea of the lower, middle and far end of the scale.This block consisted of 10 trials with haptic feedback and visual congruence so as to also familiarise participants with the technology.In this time, participants were also instructed to try and avoid pressing the button twice in a single trial as this would render the trial void.If this did occur they were to report this and enter 0. Moving onto the experimental block (Fig. 4), it was reiterated to participants that intervals would now range from 1-999 ms.They then completed 36 trials per condition, split into three blocks of 12 trials.After each block, an additional UI panel opened with each self-report question consecutively, and participants were told to click the answer (1-7, 1 being strongly disagree and 7 being strongly agree) that best indicates their experience.They then clicked continue in order to proceed to the next block of trials.A message was displayed to signal the end of a condition, after which participants were permitted a two minute break if necessary.When the session finished, participants were debriefed and asked if they had any questions or if they noticed anything about the experiment. V. RESULTS One participant was removed from the intentional binding analysis due to reporting losing concentration in one condition which led to consistent input of under 100 ms.No outliers were detected (all Z<3).Interval estimations were averaged for each condition.Lower scores indicate greater binding, and therefore, stronger implicit SoA.Scores for self-reported control and causation were averaged for each condition separately, with higher scores indicating greater explicit SoA.Data were processed in Excel and analysis carried out in Jamovi 2 and R. B. Haptics and Visual Congruence on Self-Reported Control and Causal Influence Due to significant departures from normality in the selfreport data (Shapiro Wilk, p<.05, Skewness Z>1.96), we applied the aligned rank transform (ART; [32]) before conducting the ANOVAs.This method permits factorial ANOVA on nonparametric data to also examine interactions. A 2x2 repeated measures ANOVA was conducted on the aligned ranks for self-reported control with haptic feedback (with or without) and visual congruence (congruent or incongruent) entered as within-subject factors.There was a main effect of haptic feedback, F(1, 87) = 18.78, p<.001, η p 2 = .18,such that feelings of control over the button press action were greater with haptic feedback than without (Fig. 6(a)).There was also a main effect of visual congruence, F(1, 87) = 30.46,p<.001, η p 2 = .26,revealing a greater sense of control over action when the button press was congruent compared to when not (Fig. 
A 2×2 repeated-measures ANOVA was conducted on the aligned ranks for self-reported causation, with haptic feedback (with or without) and visual congruence (congruent or incongruent) entered as within-subject factors. There was a main effect of haptic feedback, F(1, 87) = 5.26, p = .024, ηp² = .06, such that feelings of causing the outcome were greater with haptic feedback than without (Fig. 7(a)). There was also a main effect of visual congruence, F(1, 87) = 9.17, p = .003, ηp² = .10, revealing a greater sense of causal influence when the button press was congruent than when it was not (Fig. 7(b)). There was no significant interaction, F(1, 87) = 0.07, p = .785, so post-hoc tests were not carried out.

VI. DISCUSSION

The aim of this study was to investigate the effects of mid-air haptics and visual congruence on SoA in touchless virtual interactions. We found that intentional binding was not affected by either manipulation; however, both self-reported control over the action and causal influence over the outcome were. We discuss these results and their implications below.

To our knowledge, this study is the first to examine mid-air haptics and visual congruence in relation to implicit SoA during interaction with virtual objects. The lack of a significant effect here is surprising, especially given the apparent importance of these variables for SoA [8], [20]. One possible explanation comes from the cue integration model [21]. According to this model, SoA is based on various agency cues, including internal sensorimotor signals and external sensory feedback, and the relative influence of these cues is determined by their reliability. Indeed, it has been shown that when internal sensorimotor signals are reliable, external sensory information has less influence (e.g., [31], [33]). This may explain our findings: the presence of reliable internal sensorimotor signals could have attenuated the influence of haptics and visual congruence, which are external cues to agency.

Intriguingly, explicit SoA was strengthened by haptic feedback and weakened by visual incongruence. Notably, both control over the action and the perceived sense of causing the resulting outcome were affected. Although at first these findings seem at odds with the implicit agency findings, the cue integration approach may shed some light. It has been suggested that implicit and explicit aspects of SoA are influenced by different agency cues [34]: implicit SoA relies more on sensorimotor signals, while explicit SoA relies more on external sensory feedback. On this account, the modulation of self-reported control and causation by haptics and visual congruence is predicted by the model.
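To make the reliability-weighting idea concrete, the following toy sketch shows how cue integration is often formalised as a reliability-weighted average. The function, the values, and the weights are our illustrative assumptions, not the model in [21] or anything fitted to our data; the point is only that a highly reliable internal signal can swamp changes in external cues.

```python
# Toy illustration of reliability-weighted cue integration (values are arbitrary,
# chosen only to show the qualitative pattern discussed above).
def integrate(cues: dict[str, tuple[float, float]]) -> float:
    """Combine agency cues as a weighted average.

    Each cue is a (value, reliability) pair; each cue's weight is its reliability
    normalised over all cues, so more reliable cues dominate the estimate.
    """
    total_reliability = sum(rel for _, rel in cues.values())
    return sum(value * rel for value, rel in cues.values()) / total_reliability


# Agency "evidence" on an arbitrary 0-1 scale.
internal_sensorimotor = (0.8, 10.0)   # highly reliable internal signal
haptic_feedback = (1.0, 1.0)          # external cue: mid-air haptics present
visual_congruence = (1.0, 1.0)        # external cue: button moves with the hand

with_external = integrate({
    "internal": internal_sensorimotor,
    "haptics": haptic_feedback,
    "vision": visual_congruence,
})
without_external = integrate({
    "internal": internal_sensorimotor,
    "haptics": (0.0, 1.0),   # external cues absent or incongruent
    "vision": (0.0, 1.0),
})

# Because the internal signal carries most of the weight, the integrated agency
# estimate changes far less than the external cues themselves do, consistent
# with the attenuated influence on implicit SoA discussed above.
print(round(with_external, 3), round(without_external, 3))
```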
In terms of user experience and design considerations, our findings have two key implications. The first is to confirm the importance of visual congruence when users interact with virtual objects: visual incongruence can negatively affect users' experience both of controlling the object and, through that, of exerting causal influence on the environment. Future research could also examine the extent of these effects, for example whether more recent physics-based hand-object interactions [35] actually strengthen agency. Second, our findings extend previous research on the effect of mid-air haptics on explicit SoA. We previously suggested that the influence of haptics may be limited to protecting explicit feelings of control under conditions of agentic uncertainty [14]. Here, however, our data suggest that the presence of haptics can generally strengthen both explicit control over objects and the resulting sense of causal influence. Overall, these findings are noteworthy in the context of HCI and design given the foundational role of SoA in broader user experience, where it influences other psychological variables such as motivation, engagement, and presence [36].

VII. LIMITATIONS

One limitation concerns the minimal self-report data collected, which restricted the range of other interesting effects on user experience that could be examined. For example, a virtual embodiment questionnaire [16] could have explored the effects of this external avatar-object relationship on the general sense of embodiment. In line with this, it would have been interesting to explore the relationship, if any, between embodiment and SoA, something that has attracted interest in both psychology and HCI [37]. Furthermore, open-ended qualitative questions might have given voice to a broader range of agentic experiences than is permitted by our purely quantitative approach.

Another limitation relates to the non-immersive virtual environment. While this was appropriate for our aims here, it limits the broader significance of the findings for HCI applications. For example, it would be interesting to test whether these effects extend to, or even change in, an immersive virtual environment. That said, previous research has shown that implicit and explicit SoA are not affected by such a change of modality [38].

VIII. CONCLUSION

In sum, this study investigated object-related visual-haptic effects on SoA in a non-immersive virtual environment. For implicit SoA, there was no significant influence of these external sensory variables, perhaps because of the presence of internal sensorimotor signals, on which implicit SoA relies heavily. For explicit SoA, there was an overall strengthening with haptic feedback and an overall weakening with visual incongruence. These findings can be explained under the cue integration approach, which may offer a useful framework for understanding how different variables are likely to influence user experience in this context.

Using Virtual Objects With Hand-Tracking: The Effects of Visual Congruence and Mid-Air Haptics on Sense of Agency

George Evangelou, Orestis Georgiou, Senior Member, IEEE, and James Moore

Abstract: Virtual reality expands the possibilities of human action. With hand-tracking technology, we can directly interact with these environments without the need for a mediating controller. Much previous research has looked at the user-avatar relationship.
Fig. 1. Changes in perceived time between actions and outcomes associated with the sense of agency.

Fig. 4. Visualization of a typical experimental trial within a block. Actual intervals were pseudorandomised across the 12 trials of each block, with three blocks per condition and self-report measures collected after each block.

Fig. 5. Mean interval estimations plotted as a function of visual congruence and haptic feedback. The error bars represent the standard error across participants.

Fig. 6. Ratings of control over the virtual button plotted as a function of visual congruence and haptic feedback. The middle lines of the boxplots indicate the median; the upper and lower limits indicate the first and third quartiles. The error bars represent 1.5 times the interquartile range, or the minimum or maximum.

Fig. 7. Ratings of causal influence over the tone plotted as a function of visual congruence and haptic feedback. The middle lines of the boxplots indicate the median; the upper and lower limits indicate the first and third quartiles. The error bars represent 1.5 times the interquartile range, or the minimum or maximum.